[ { "msg_contents": "Summary: Non-unique btree indices are returning CTIDs for rows with same\nvalue of indexed column not in logical order, imposing a high performance\npenalty.\n\nRunning PG 9.5.3 now, we have a time-based partitions of append-only tables\nwith data loaded from other sources. The tables are partitioned by time, and\ntimestamp column has an non-unique, not-null btree index.\n\nThe child tables are each ~75GB and expected to keep growing. For a child\ntable with a week's worth of data:\nrelpages | 11255802\nreltuples | 5.90502e+07\n\nThe data is loaded shortly after it's available, so have high correlation in\npg_statistic:\n[pryzbyj@viaero ~]$ psql ts -c \"SELECT tablename, correlation, n_distinct FROM pg_stats s JOIN pg_class c ON (c.relname=s.tablename) WHERE tablename LIKE 'cdrs_huawei_pgwrecord%' AND attname='recordopeningtime' ORDER BY 1\" |head\n tablename | correlation | n_distinct \n----------------------------------+-------------+------------\n cdrs_huawei_pgwrecord | 0.999997 | 102892\n cdrs_huawei_pgwrecord_2016_02_15 | 0.999658 | 96145\n cdrs_huawei_pgwrecord_2016_02_22 | 0.999943 | 91916\n cdrs_huawei_pgwrecord_2016_02_29 | 0.997219 | 50341\n cdrs_huawei_pgwrecord_2016_03_01 | 0.999947 | 97485\n\nBut note the non-uniqueness of the index column:\nts=# SELECT recordopeningtime, COUNT(1) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22' GROUP BY 1 ORDER BY 2 DESC;\n recordopeningtime | count\n---------------------+-------\n 2016-05-21 12:17:29 | 176\n 2016-05-21 12:17:25 | 171\n 2016-05-21 13:11:33 | 170\n 2016-05-21 10:20:02 | 169\n 2016-05-21 11:30:02 | 167\n[...]\n\nWe have an daily analytic query which processes the previous day's data. For\nnew child tables, with only 1 days data loaded, this runs in ~30min, and for\nchild tables with an entire week's worth of data loaded, takes several hours\n(even though both queries process the same amount of data).\n\nFirst, I found I was able to get 30-50min query results on full week's table by\nprefering a seq scan to an index scan. The row estimates seemed fine, and the\nonly condition is the timestamp, so the planner's use of index scan is as\nexpected.\n\nAFAICT what's happening is that the index scan was returning pages\nnonsequentially. strace-ing the backend showed alternating lseek()s and\nread()s, with the offsets not consistently increasing (nor consistently\ndecreasing):\n% sudo strace -p 25588 2>&1 |grep -m9 'lseek(773'\nlseek(773, 1059766272, SEEK_SET) = 1059766272\nlseek(773, 824926208, SEEK_SET) = 824926208\nlseek(773, 990027776, SEEK_SET) = 990027776\nlseek(773, 990330880, SEEK_SET) = 990330880\nlseek(773, 1038942208, SEEK_SET) = 1038942208\nlseek(773, 1059856384, SEEK_SET) = 1059856384\nlseek(773, 977305600, SEEK_SET) = 977305600\nlseek(773, 990347264, SEEK_SET) = 990347264\nlseek(773, 871096320, SEEK_SET) = 871096320\n\n.. 
AFAICT what's happening is that the index scan was returning pages\nnonsequentially. strace-ing the backend showed alternating lseek()s and\nread()s, with the offsets not consistently increasing (nor consistently\ndecreasing):\n% sudo strace -p 25588 2>&1 |grep -m9 'lseek(773'\nlseek(773, 1059766272, SEEK_SET) = 1059766272\nlseek(773, 824926208, SEEK_SET) = 824926208\nlseek(773, 990027776, SEEK_SET) = 990027776\nlseek(773, 990330880, SEEK_SET) = 990330880\nlseek(773, 1038942208, SEEK_SET) = 1038942208\nlseek(773, 1059856384, SEEK_SET) = 1059856384\nlseek(773, 977305600, SEEK_SET) = 977305600\nlseek(773, 990347264, SEEK_SET) = 990347264\nlseek(773, 871096320, SEEK_SET) = 871096320\n\n.. and consecutive read()s being rare:\nread(802, \"g\"..., 8192) = 8192\nlseek(802, 918003712, SEEK_SET) = 918003712\nread(802, \"c\"..., 8192) = 8192\nlseek(802, 859136000, SEEK_SET) = 859136000\nread(802, \"a\"..., 8192) = 8192\nlseek(802, 919601152, SEEK_SET) = 919601152\nread(802, \"d\"..., 8192) = 8192\nlseek(802, 905101312, SEEK_SET) = 905101312\nread(802, \"c\"..., 8192) = 8192\nlseek(801, 507863040, SEEK_SET) = 507863040\nread(801, \"p\"..., 8192) = 8192\nlseek(802, 914235392, SEEK_SET) = 914235392\nread(802, \"c\"..., 8192) = 8192\n\nI was able to see great improvement without planner parameters by REINDEXing the\ntimestamp index. My theory is that the index/planner doesn't handle well the\ncase of many tuples with the same column value, and returns pages out of logical\norder. Reindex fixes that, rewriting the index data with pages in order\n(confirmed with pageinspect), which causes index scans to fetch heap data more\nor less monotonically (if not consecutively). strace shows that consecutive\nread()s are common (without intervening seeks). I gather this allows the OS\nreadahead to kick in.\n\nPostgres seems to assume that the high degree of correlation of the table\ncolumn seen in pg_stats is how it will get data from the index scan, which\nassumption seems to be very poor on what turns out to be a highly fragmented\nindex. Is there a way to help it to understand otherwise??\n\nMaybe this is a well-understood problem/deficiency; but, is there any reason\nwhy a btree scan can't sort results by ctid for index tuples with the same column\nvalue (_bt_steppage() or btgettuple())? Or maybe the problem could be\nmitigated by changing the behavior during INSERT? In the meantime, I'll be\nimplementing a reindex job.\n\nThanks,\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 May 2016 12:39:14 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "index fragmentation on insert-only table with non-unique column" }, { "msg_contents": "On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby <[email protected]> wrote:\n> I was able to see great improvement without planner parameters by REINDEXing the\n> timestamp index. My theory is that the index/planner doesn't handle well the\n> case of many tuples with the same column value, and returns pages out of logical\n> order. Reindex fixes that, rewriting the index data with pages in order\n> (confirmed with pageinspect), which causes index scans to fetch heap data more\n> or less monotonically (if not consecutively). strace shows that consecutive\n> read()s are common (without intervening seeks). I gather this allows the OS\n> readahead to kick in.\n\nThe basic problem is that the B-Tree code doesn't maintain this\nproperty. However, B-Tree index builds will create an index that\ninitially has this property, because the tuplesort.c code happens to\nsort index tuples with a CTID tie-breaker.\n\n> Postgres seems to assume that the high degree of correlation of the table\n> column seen in pg_stats is how it will get data from the index scan, which\n> assumption seems to be very poor on what turns out to be a highly fragmented\n> index. Is there a way to help it to understand otherwise??\n\nYour complaint is vague. Are you complaining about the planner making\n
a poor choice? I don't think that's the issue here, because you never\nmade any firm statement about the planner making a choice that was\nworse than an alternative that it had available.\n\nIf you're arguing for the idea that B-Trees should reliably keep\ntuples in order by a tie-break condition, that seems difficult to\nimplement, and likely not worth it in practice.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 May 2016 21:16:20 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "* Peter Geoghegan ([email protected]) wrote:\n> On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby <[email protected]> wrote:\n> > I was able to see great improvement without planner parameters by REINDEXing the\n> > timestamp index. My theory is that the index/planner doesn't handle well the\n> > case of many tuples with the same column value, and returns pages out of logical\n> > order. Reindex fixes that, rewriting the index data with pages in order\n> > (confirmed with pageinspect), which causes index scans to fetch heap data more\n> > or less monotonically (if not consecutively). strace shows that consecutive\n> > read()s are common (without intervening seeks). I gather this allows the OS\n> > readahead to kick in.\n> \n> The basic problem is that the B-Tree code doesn't maintain this\n> property. However, B-Tree index builds will create an index that\n> initially has this property, because the tuplesort.c code happens to\n> sort index tuples with a CTID tie-breaker.\n> \n> > Postgres seems to assume that the high degree of correlation of the table\n> > column seen in pg_stats is how it will get data from the index scan, which\n> > assumption seems to be very poor on what turns out to be a highly fragmented\n> > index. Is there a way to help it to understand otherwise??\n> \n> Your complaint is vague. Are you complaining about the planner making\n> a poor choice? I don't think that's the issue here, because you never\n> made any firm statement about the planner making a choice that was\n> worse than an alternative that it had available.\n> \n> If you're arguing for the idea that B-Trees should reliably keep\n> tuples in order by a tie-break condition, that seems difficult to\n> implement, and likely not worth it in practice.\n\nThe ongoing discussion around how to effectively handle duplicate values\nin B-Tree seems relevant to this. In particular, if we're able to store\nduplicate values efficiently and all the locations under a single key\nare easily available then we could certainly sort those prior to going\nand visiting them.\n\nThat's not quite the same as keeping the tuples in order in the heap,\nbut would more-or-less achieve the effect desired, I believe?\n\nThanks!\n\nStephen", "msg_date": "Wed, 25 May 2016 00:26:25 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "Peter Geoghegan <[email protected]> writes:\n> The basic problem is that the B-Tree code doesn't maintain this\n> property. However, B-Tree index builds will create an index that\n> initially has this property, because the tuplesort.c code happens to\n> sort index tuples with a CTID tie-breaker.\n\n
Yeah. I wonder what would happen if we used the same rule for index\ninsertions. It would likely make insertions more expensive, but maybe\nnot by much. The existing \"randomization\" rule for where to insert new\nitems in a run of identical index entries would go away, because the\ninsertion point would become deterministic. I am not sure if that's\ngood or bad for insertion performance, but it would likely help for\nscan performance.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 May 2016 00:43:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with non-unique column" }, { "msg_contents": "On Tue, May 24, 2016 at 9:43 PM, Tom Lane <[email protected]> wrote:\n> Yeah. I wonder what would happen if we used the same rule for index\n> insertions. It would likely make insertions more expensive, but maybe\n> not by much. The existing \"randomization\" rule for where to insert new\n> items in a run of identical index entries would go away, because the\n> insertion point would become deterministic. I am not sure if that's\n> good or bad for insertion performance, but it would likely help for\n> scan performance.\n\nI think that if somebody tacked on a tie-breaker in the same way as in\ntuplesort.c's B-Tree IndexTuple, there'd be significant negative\nconsequences.\n\nThe equal-to-high-key case gets a choice of which page to put the new\nIndexTuple on, and I imagine that that's quite useful when it comes\nup. I'd also have concerns about the key space in the index. I think\nthat it would seriously mess with the long term utility of values in\ninternal pages, which currently can reasonably have little to do with\nthe values currently stored in leaf pages. They're functionally only\nseparators of the key space that guide index scans, so it doesn't\nmatter if the actual values are completely absent from the leaf\npages/the table itself (perhaps some of the values in the internal\npages were inserted years ago, and have long since been deleted and\nvacuumed away). Provided the distribution of values at the leaf level\nis still well characterized at higher levels (e.g. many string values\nthat start with vowels, very few that start with the letters 'x' or\n'z'), there should be no significant bloat. That's very valuable.\nUnique indexes are another problem for this naive approach.\n\nMaybe somebody could do better with a more sophisticated approach, but\nit's probably better to focus on duplicate storage or even leaf page\ncompression, as Stephen mentioned.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 May 2016 22:18:06 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby <[email protected]> wrote:\n> Summary: Non-unique btree indices are returning CTIDs for rows with the same\n> value of the indexed column not in logical order, imposing a high performance\n> penalty.\n>\n> Running PG 9.5.3 now, we have time-based partitions of append-only tables\n> with data loaded from other sources. The tables are partitioned by time, and\n
> the timestamp column has a non-unique, not-null btree index.\n>\n> The child tables are each ~75GB and expected to keep growing. For a child\n> table with a week's worth of data:\n> relpages | 11255802\n> reltuples | 5.90502e+07\n>\n> The data is loaded shortly after it's available, so it has high correlation in\n> pg_statistic:\n> [pryzbyj@viaero ~]$ psql ts -c \"SELECT tablename, correlation, n_distinct FROM pg_stats s JOIN pg_class c ON (c.relname=s.tablename) WHERE tablename LIKE 'cdrs_huawei_pgwrecord%' AND attname='recordopeningtime' ORDER BY 1\" |head\n> tablename | correlation | n_distinct\n> ----------------------------------+-------------+------------\n> cdrs_huawei_pgwrecord | 0.999997 | 102892\n> cdrs_huawei_pgwrecord_2016_02_15 | 0.999658 | 96145\n> cdrs_huawei_pgwrecord_2016_02_22 | 0.999943 | 91916\n> cdrs_huawei_pgwrecord_2016_02_29 | 0.997219 | 50341\n> cdrs_huawei_pgwrecord_2016_03_01 | 0.999947 | 97485\n>\n> But note the non-uniqueness of the index column:\n> ts=# SELECT recordopeningtime, COUNT(1) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22' GROUP BY 1 ORDER BY 2 DESC;\n> recordopeningtime | count\n> ---------------------+-------\n> 2016-05-21 12:17:29 | 176\n> 2016-05-21 12:17:25 | 171\n> 2016-05-21 13:11:33 | 170\n> 2016-05-21 10:20:02 | 169\n> 2016-05-21 11:30:02 | 167\n> [...]\n\nThat is not that much duplication. You aren't going to have dozens or\nhundreds of leaf pages all with equal values. (and you only showed\nthe most highly duplicated ones, presumably the average is much less)\n\n\n> We have a daily analytic query which processes the previous day's data. For\n> new child tables, with only 1 day's data loaded, this runs in ~30min, and for\n> child tables with an entire week's worth of data loaded, it takes several hours\n> (even though both queries process the same amount of data).\n\nFor an append-only table, why would the first day of a new partition\nbe any less fragmented than that same day would be a week from now?\nAre you sure it isn't just that your week-old data has all been aged\nout of the cache?\n\n
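If the pg_buffercache extension happens to be installed, something like this shows the shared_buffers side of that question (it says nothing about the OS page cache, and the table name is just taken from your example):\n\nSELECT count(*) FROM pg_buffercache b\nJOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)\nWHERE c.relname = 'cdrs_huawei_pgwrecord';\n\n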
> First, I found I was able to get 30-50min query results on the full week's table by\n> preferring a seq scan to an index scan. The row estimates seemed fine, and the\n> only condition is the timestamp, so the planner's use of index scan is as\n> expected.\n\nCan you show us the query? I would expect a bitmap scan of the index\n(which would do what you want, but even more so), instead.\n\n>\n> AFAICT what's happening is that the index scan was returning pages\n> nonsequentially. strace-ing the backend showed alternating lseek()s and\n> read()s, with the offsets not consistently increasing (nor consistently\n> decreasing):\n> % sudo strace -p 25588 2>&1 |grep -m9 'lseek(773'\n> lseek(773, 1059766272, SEEK_SET) = 1059766272\n> lseek(773, 824926208, SEEK_SET) = 824926208\n> lseek(773, 990027776, SEEK_SET) = 990027776\n> lseek(773, 990330880, SEEK_SET) = 990330880\n> lseek(773, 1038942208, SEEK_SET) = 1038942208\n> lseek(773, 1059856384, SEEK_SET) = 1059856384\n> lseek(773, 977305600, SEEK_SET) = 977305600\n> lseek(773, 990347264, SEEK_SET) = 990347264\n> lseek(773, 871096320, SEEK_SET) = 871096320\n>\n> .. and consecutive read()s being rare:\n> read(802, \"g\"..., 8192) = 8192\n> lseek(802, 918003712, SEEK_SET) = 918003712\n> read(802, \"c\"..., 8192) = 8192\n> lseek(802, 859136000, SEEK_SET) = 859136000\n> read(802, \"a\"..., 8192) = 8192\n> lseek(802, 919601152, SEEK_SET) = 919601152\n> read(802, \"d\"..., 8192) = 8192\n> lseek(802, 905101312, SEEK_SET) = 905101312\n> read(802, \"c\"..., 8192) = 8192\n> lseek(801, 507863040, SEEK_SET) = 507863040\n> read(801, \"p\"..., 8192) = 8192\n> lseek(802, 914235392, SEEK_SET) = 914235392\n> read(802, \"c\"..., 8192) = 8192\n\n\nWhich of those are the table, and which the index?\n\nSomething doesn't add up here. How could an index of an append-only\ntable possibly become that fragmented, when the highest amount of key\nduplication is about 170?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 24 May 2016 23:23:48 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Tue, May 24, 2016 at 09:16:20PM -0700, Peter Geoghegan wrote:\n> On Tue, May 24, 2016 at 10:39 AM, Justin Pryzby <[email protected]> wrote:\n> > Postgres seems to assume that the high degree of correlation of the table\n> > column seen in pg_stats is how it will get data from the index scan, which\n> > assumption seems to be very poor on what turns out to be a highly fragmented\n> > index. Is there a way to help it to understand otherwise??\n> \n> Your complaint is vague. Are you complaining about the planner making\n> a poor choice? I don't think that's the issue here, because you never\n> made any firm statement about the planner making a choice that was\n> worse than an alternative that it had available.\n\nI was thinking there are a few possible places to make improvements: the planner\ncould have understood that scans of non-unique indices don't result in strictly\nsequential scans of the table, the degree of non-sequentialness being\ndetermined by the column statistics, and perhaps by properties of the index\nitself.\n\nOr the INSERT code or btree scan could improve on this, even if tuples aren't\nfully ordered.\n\n> If you're arguing for the idea that B-Trees should reliably keep\n> tuples in order by a tie-break condition, that seems difficult to\n> implement, and likely not worth it in practice.\n\nI had the awful idea to change the index to use (recordopeningtime,ctid).\nMaybe somebody will convince me otherwise, but it may actually work better than\ntrying to reindex this table daily by 4am.\n\nThanks,\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 May 2016 08:45:25 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Tue, May 24, 2016 at 11:23:48PM -0700, Jeff Janes wrote:\n> > But note the non-uniqueness of the index column:\n> > ts=# SELECT recordopeningtime, COUNT(1) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22' GROUP BY 1 ORDER BY 2 DESC;\n> > recordopeningtime | count\n> > ---------------------+-------\n> > 2016-05-21 12:17:29 | 176\n
> > 2016-05-21 12:17:25 | 171\n> > 2016-05-21 13:11:33 | 170\n> > 2016-05-21 10:20:02 | 169\n> > 2016-05-21 11:30:02 | 167\n> > [...]\n> \n> That is not that much duplication. You aren't going to have dozens or\n> hundreds of leaf pages all with equal values. (and you only showed\n> the most highly duplicated ones, presumably the average is much less)\n\nPoint taken, but it's not that great of a range either:\n\nts=# SELECT recordopeningtime, COUNT(1) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22' GROUP BY 1 ORDER BY 2 LIMIT 19;\n recordopeningtime | count \n---------------------+-------\n 2016-05-21 03:10:05 | 44\n 2016-05-21 03:55:05 | 44\n 2016-05-21 04:55:05 | 45\n\nts=# SELECT count(distinct recordopeningtime) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22';\n-[ RECORD 1 ]\ncount | 86400\n\nts=# SELECT count(recordopeningtime) FROM cdrs_huawei_pgwrecord WHERE recordopeningtime>='2016-05-21' AND recordopeningtime<'2016-05-22';\n-[ RECORD 1 ]--\ncount | 8892865\n\n> > We have a daily analytic query which processes the previous day's data. For\n> > new child tables, with only 1 day's data loaded, this runs in ~30min, and for\n> > child tables with an entire week's worth of data loaded, it takes several hours\n> > (even though both queries process the same amount of data).\n> \n> For an append-only table, why would the first day of a new partition\n> be any less fragmented than that same day would be a week from now?\n> Are you sure it isn't just that your week-old data has all been aged\n> out of the cache?\nI don't think it's a cache effect, since we're not using the source table for\n(maybe) anything else the entire rest of the day. The server has 72GB RAM, the same\nsize as the largest of the tables being joined (beginning at 4am).\n\nI didn't mean that a given day is more fragmented now than it was last week\n(but I don't know, either). I guess when we do a query on the table with ~32\nhours of data in it, it might do a seq scan rather than index scan, too.\n\nCompare the end-of-month partition tables:\nts=# select * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_02_29_recordopeningtime_idx');\nleaf_fragmentation | 48.6\nts=# select * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_03_29_recordopeningtime_idx');\nleaf_fragmentation | 48.38\nts=# select * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_04_29_recordopeningtime_idx');\nleaf_fragmentation | 48.6\nts=# SELECT * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_04_22_recordopeningtime_idx');\nleaf_fragmentation | 48.66\nts=# SELECT * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_03_22_recordopeningtime_idx');\nleaf_fragmentation | 48.27\nts=# SELECT * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_02_22_recordopeningtime_idx');\nleaf_fragmentation | 48\n\nThis one I reindexed as a test:\nts=# SELECT * FROM pgstatindex('cdrs_huawei_pgwrecord_2016_05_01_recordopeningtime_idx');\nleaf_fragmentation | 0.01\n\n.. and the query ran in ~30min (reran a 2nd time, with cache effects: 25min).\n\n> > First, I found I was able to get 30-50min query results on the full week's table by\n> > preferring a seq scan to an index scan. The row estimates seemed fine, and the\n> > only condition is the timestamp, so the planner's use of index scan is as\n> > expected.\n> \n> Can you show us the query? I would expect a bitmap scan of the index\n> (which would do what you want, but even more so), instead.\nSee explain, also showing additional tables/views being joined. It's NOT doing\n
a bitmap scan though, and I'd be interested to find out why; I'm sure that would've\nimproved this query enough so it never would've been an issue.\nhttps://explain.depesz.com/s/s8KP\n\n -> Index Scan using cdrs_huawei_pgwrecord_2016_05_01_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_01 (cost=0.56..1601734.57 rows=8943848 width=349)\n Index Cond: ((recordopeningtime >= '2016-05-07 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-08 00:00:00'::timestamp without time zone))\n\n> > AFAICT what's happening is that the index scan was returning pages\n> > nonsequentially. strace-ing the backend showed alternating lseek()s and\n> > read()s, with the offsets not consistently increasing (nor consistently\n> > decreasing):\n..\n> \n> Which of those are the table, and which the index?\nThose weren't necessarily straces of the same process; I believe both of these\nwere table data/heap, and didn't include any index access.\n\n> Something doesn't add up here. How could an index of an append-only\n> table possibly become that fragmented, when the highest amount of key\n> duplication is about 170?\n\nI'm certainly open to alternate interpretations / conclusions :)\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 May 2016 09:00:34 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Wed, May 25, 2016 at 11:00 AM, Justin Pryzby <[email protected]> wrote:\n>> > First, I found I was able to get 30-50min query results on the full week's table by\n>> > preferring a seq scan to an index scan. The row estimates seemed fine, and the\n>> > only condition is the timestamp, so the planner's use of index scan is as\n>> > expected.\n>>\n>> Can you show us the query? I would expect a bitmap scan of the index\n>> (which would do what you want, but even more so), instead.\n> See explain, also showing additional tables/views being joined. It's NOT doing\n> a bitmap scan though, and I'd be interested to find out why; I'm sure that would've\n> improved this query enough so it never would've been an issue.\n> https://explain.depesz.com/s/s8KP\n>\n> -> Index Scan using cdrs_huawei_pgwrecord_2016_05_01_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_01 (cost=0.56..1601734.57 rows=8943848 width=349)\n> Index Cond: ((recordopeningtime >= '2016-05-07 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-08 00:00:00'::timestamp without time zone))\n\nPlease show your guc settings ( see\nhttps://wiki.postgresql.org/wiki/Server_Configuration )\n\nA plan node like that, if it would result in I/O, should with proper\n
configuration have been a bitmap index/heap scan. If it\ndidn't, it probably thinks it has more cache than it really does, and\nthat would mean the wrong setting was set in effective_cache_size.\n\n
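A quick sanity check is something along these lines (the value here is only a placeholder; it should be roughly shared_buffers plus whatever OS cache is normally available):\n\nSHOW effective_cache_size;\nSET effective_cache_size = '48GB';\nEXPLAIN SELECT ...; -- see whether the plan flips to a bitmap scan\n\n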
-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jun 2016 18:26:33 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Fri, Jun 03, 2016 at 06:26:33PM -0300, Claudio Freire wrote:\n> On Wed, May 25, 2016 at 11:00 AM, Justin Pryzby <[email protected]> wrote:\n> >> > First, I found I was able to get 30-50min query results on the full week's table by\n> >> > preferring a seq scan to an index scan. The row estimates seemed fine, and the\n> >> > only condition is the timestamp, so the planner's use of index scan is as\n> >> > expected.\n> >>\n> >> Can you show us the query? I would expect a bitmap scan of the index\n> >> (which would do what you want, but even more so), instead.\n> > See explain, also showing additional tables/views being joined. It's NOT doing\n> > a bitmap scan though, and I'd be interested to find out why; I'm sure that would've\n> > improved this query enough so it never would've been an issue.\n> > https://explain.depesz.com/s/s8KP\n> \n> Please show your guc settings ( see\n> https://wiki.postgresql.org/wiki/Server_Configuration )\n> \n> A plan node like that, if it would result in I/O, should with proper\n> configuration have been a bitmap index/heap scan. If it\n> didn't, it probably thinks it has more cache than it really does, and\n> that would mean the wrong setting was set in effective_cache_size.\n\nts=# SELECT name, current_setting(name), SOURCE FROM pg_settings WHERE SOURCE='configuration file';\n dynamic_shared_memory_type | posix | configuration file\n effective_cache_size | 64GB | configuration file\n effective_io_concurrency | 8 | configuration file\n huge_pages | try | configuration file\n log_autovacuum_min_duration | 0 | configuration file\n log_checkpoints | on | configuration file\n maintenance_work_mem | 6GB | configuration file\n max_connections | 200 | configuration file\n max_wal_size | 4GB | configuration file\n min_wal_size | 6GB | configuration file\n shared_buffers | 8GB | configuration file\n wal_compression | on | configuration file\n work_mem | 1GB | configuration file\n\nI changed at least maintenance_work_mem since I originally wrote, to try to\navoid tempfiles during REINDEX (though I'm not sure it matters, as the\ntempfiles are effectively cached and may never actually be written).\n\nIt's entirely possible those settings aren't ideal. The server has 72GB RAM.\nThere are usually very few (typically n<3 but at most a handful) nontrivial\nqueries running at once, if at all.\n\nI wouldn't expect any data that's not recent (table data last 2 days or index\nfrom this month) to be cached, and wouldn't expect that to be entirely cached,\neither:\n\nts=# SELECT sum(pg_table_size(oid))/1024^3 gb FROM pg_class WHERE relname~'_2016_05_..$';\ngb | 425.783050537109\n\nts=# SELECT sum(pg_table_size(oid))/1024^3 gb FROM pg_class WHERE relname~'_2016_05_...*idx';\ngb | 60.0909423828125\n\nts=# SELECT sum(pg_table_size(oid))/1024^3 gb FROM pg_class WHERE relname~'_201605.*idx';\ngb | 4.85528564453125\n\nts=# SELECT sum(pg_table_size(oid))/1024^3 gb FROM pg_class WHERE relname~'_201605$';\ngb | 86.8688049316406\n\nAs a test, I did SET effective_cache_size='1MB' before running explain, and it\nstill does:\n\n|\t -> Index Scan using cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_29 (cost=0.44..1526689.49 rows=8342796 width=355)\n|\t Index Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n\nI set enable_indexscan=0, and got:\n\n|\t -> Bitmap Heap Scan on cdrs_huawei_pgwrecord_2016_05_29 (cost=168006.10..4087526.04 rows=8342796 width=355)\n|\t Recheck Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n|\t -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx (cost=0.00..165920.40 rows=8342796 width=0)\n|\t\t Index Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n\nHere's a minimal query which seems to isolate the symptom:\n\nts=# explain (analyze,buffers) SELECT sum(duration) FROM cdrs_huawei_pgwrecord_2016_05_22 WHERE recordopeningtime>='2016-05-22' AND recordopeningtime<'2016-05-23';\n| Aggregate (cost=2888731.67..2888731.68 rows=1 width=8) (actual time=388661.892..388661.892 rows=1 loops=1)\n| Buffers: shared hit=4058501 read=1295147 written=35800\n| -> Index Scan using cdrs_huawei_pgwrecord_2016_05_22_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_22 (cost=0.56..2867075.33 rows=8662534 w\n|idth=8) (actual time=0.036..379332.910 rows=8575673 loops=1)\n| Index Cond: ((recordopeningtime >= '2016-05-22 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-23 00:00:00'::timestamp\n| without time zone))\n| Buffers: shared hit=4058501 read=1295147 written=35800\n| Planning time: 0.338 ms\n| Execution time: 388661.947 ms\n\nAnd here's an older one to avoid cache, with enable_indexscan=0 \n|ts=# explain (analyze,buffers) SELECT sum(duration) FROM cdrs_huawei_pgwrecord_2016_05_08 WHERE recordopeningtime>='2016-05-08' AND recordopeningtime<'2016-05-09';\n| Aggregate (cost=10006286.58..10006286.59 rows=1 width=8) (actual time=44219.156..44219.156 rows=1 loops=1)\n| Buffers: shared hit=118 read=1213887 written=50113\n| -> Bitmap Heap Scan on cdrs_huawei_pgwrecord_2016_05_08 (cost=85142.24..9985848.96 rows=8175048 width=8) (actual time=708.024..40106.062 rows=8179338 loops=1)\n| Recheck Cond: ((recordopeningtime >= '2016-05-08 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-09 00:00:00'::timestamp without time zone))\n| Rows Removed by Index Recheck: 74909\n| Heap Blocks: lossy=1213568\n
| Buffers: shared hit=118 read=1213887 written=50113\n| -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2016_05_08_recordopeningtime_idx1 (cost=0.00..83098.48 rows=8175048 width=0) (actual time=706.557..706.557 rows=12135680 loops=1)\n| Index Cond: ((recordopeningtime >= '2016-05-08 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-09 00:00:00'::timestamp without time zone))\n| Buffers: shared hit=117 read=320\n| Planning time: 214.786 ms\n| Execution time: 44228.874 ms\n|(12 rows)\n\nThanks for your help.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jun 2016 18:54:06 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Fri, Jun 3, 2016 at 8:54 PM, Justin Pryzby <[email protected]> wrote:\n> As a test, I did SET effective_cache_size='1MB' before running explain, and it\n> still does:\n>\n> | -> Index Scan using cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_29 (cost=0.44..1526689.49 rows=8342796 width=355)\n> | Index Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n>\n> I set enable_indexscan=0, and got:\n>\n> | -> Bitmap Heap Scan on cdrs_huawei_pgwrecord_2016_05_29 (cost=168006.10..4087526.04 rows=8342796 width=355)\n> | Recheck Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n> | -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2016_05_29_recordopeningtime_idx (cost=0.00..165920.40 rows=8342796 width=0)\n> | Index Cond: ((recordopeningtime >= '2016-05-29 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-30 00:00:00'::timestamp without time zone))\n>\n> Here's a minimal query which seems to isolate the symptom:\n>\n> ts=# explain (analyze,buffers) SELECT sum(duration) FROM cdrs_huawei_pgwrecord_2016_05_22 WHERE recordopeningtime>='2016-05-22' AND recordopeningtime<'2016-05-23';\n> | Aggregate (cost=2888731.67..2888731.68 rows=1 width=8) (actual time=388661.892..388661.892 rows=1 loops=1)\n> | Buffers: shared hit=4058501 read=1295147 written=35800\n> | -> Index Scan using cdrs_huawei_pgwrecord_2016_05_22_recordopeningtime_idx on cdrs_huawei_pgwrecord_2016_05_22 (cost=0.56..2867075.33 rows=8662534 w\n> |idth=8) (actual time=0.036..379332.910 rows=8575673 loops=1)\n> | Index Cond: ((recordopeningtime >= '2016-05-22 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-23 00:00:00'::timestamp\n> | without time zone))\n> | Buffers: shared hit=4058501 read=1295147 written=35800\n> | Planning time: 0.338 ms\n> | Execution time: 388661.947 ms\n>\n> And here's an older one to avoid cache, with enable_indexscan=0\n> |ts=# explain (analyze,buffers) SELECT sum(duration) FROM cdrs_huawei_pgwrecord_2016_05_08 WHERE recordopeningtime>='2016-05-08' AND recordopeningtime<'2016-05-09';\n> | Aggregate (cost=10006286.58..10006286.59 rows=1 width=8) (actual time=44219.156..44219.156 rows=1 loops=1)\n> | Buffers: shared hit=118 read=1213887 written=50113\n> | -> Bitmap Heap Scan on cdrs_huawei_pgwrecord_2016_05_08 (cost=85142.24..9985848.96 rows=8175048 width=8) (actual time=708.024..40106.062 rows=8179338 loops=1)\n
> | Recheck Cond: ((recordopeningtime >= '2016-05-08 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-09 00:00:00'::timestamp without time zone))\n> | Rows Removed by Index Recheck: 74909\n> | Heap Blocks: lossy=1213568\n> | Buffers: shared hit=118 read=1213887 written=50113\n> | -> Bitmap Index Scan on cdrs_huawei_pgwrecord_2016_05_08_recordopeningtime_idx1 (cost=0.00..83098.48 rows=8175048 width=0) (actual time=706.557..706.557 rows=12135680 loops=1)\n> | Index Cond: ((recordopeningtime >= '2016-05-08 00:00:00'::timestamp without time zone) AND (recordopeningtime < '2016-05-09 00:00:00'::timestamp without time zone))\n> | Buffers: shared hit=117 read=320\n> | Planning time: 214.786 ms\n> | Execution time: 44228.874 ms\n> |(12 rows)\n\n\nCorrect me if I'm wrong, but this looks like the planner not\naccounting for correlation when using bitmap heap scans.\n\nChecking the source, it really doesn't.\n\nSo correlated index scans look extra favourable vs bitmap index scans\nbecause bitmap heap scans consider random page costs sans correlation\neffects (even though correlation applies to bitmap heap scans as\nwell). While that sounds desirable a priori, it seems it's hurting\nthis case quite badly.\n\nI'm not sure there's any simple way of working around that.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 3 Jun 2016 23:05:17 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Fri, Jun 3, 2016 at 6:54 PM, Justin Pryzby <[email protected]> wrote:\n\n> max_wal_size | 4GB | configuration file\n> min_wal_size | 6GB | configuration file\n\nJust a minor digression -- it generally doesn't make sense to\nconfigure the minimum for something greater than the maximum for\nthat same thing. That should have no bearing on the performance issue\nraised on the thread, but you might want to fix it anyway.\n\n
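Presumably the intent was something like the following in postgresql.conf, assuming those two values were simply transposed:\n\nmin_wal_size = 4GB\nmax_wal_size = 6GB\n\n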
--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 4 Jun 2016 08:39:48 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "Claudio Freire <[email protected]> writes:\n> So correlated index scans look extra favourable vs bitmap index scans\n> because bitmap heap scans consider random page costs sans correlation\n> effects (even though correlation applies to bitmap heap scans as\n> well).\n\nReally? How? The index ordering has nothing to do with the order in\nwhich heap tuples will be visited.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 05 Jun 2016 12:03:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with non-unique column" }, { "msg_contents": "On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane <[email protected]> wrote:\n> Claudio Freire <[email protected]> writes:\n>> So correlated index scans look extra favourable vs bitmap index scans\n>> because bitmap heap scans consider random page costs sans correlation\n>> effects (even though correlation applies to bitmap heap scans as\n>> well).\n>\n> Really? How? The index ordering has nothing to do with the order in\n> which heap tuples will be visited.\n\n\nIt is not the order itself, but the density.\n\nIf the index is read in a range scan (as opposed to =ANY scan), and\nthe index lead column is correlated with the table ordering, then the\nparts of the table that need to be visited will be much denser than if\nthere were no correlation. But Claudio is saying that this is not\nbeing accounted for.\n\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 5 Jun 2016 12:28:47 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "Regarding this earlier thread:\nhttps://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#[email protected]\n\nOn Tue, May 24, 2016 at 10:39 AM, Justin Pryzby <[email protected]> wrote:\n> Summary: Non-unique btree indices are returning CTIDs for rows with the same\n> value of the indexed column not in logical order, imposing a high performance\n> penalty.\n\nI have to point out that by \"logical\" I clearly meant \"physical\", hopefully\nnobody was too misled..\n\nOn Sun, Jun 05, 2016 at 12:28:47PM -0700, Jeff Janes wrote:\n> On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane <[email protected]> wrote:\n> > Claudio Freire <[email protected]> writes:\n> >> So correlated index scans look extra favourable vs bitmap index scans\n> >> because bitmap heap scans consider random page costs sans correlation\n> >> effects (even though correlation applies to bitmap heap scans as\n> >> well).\n> >\n> > Really? How? The index ordering has nothing to do with the order in\n> > which heap tuples will be visited.\n> \n> It is not the order itself, but the density.\n> \n> If the index is read in a range scan (as opposed to =ANY scan), and\n> the index lead column is correlated with the table ordering, then the\n> parts of the table that need to be visited will be much denser than if\n> there were no correlation. But Claudio is saying that this is not\n> being accounted for.\n\nI didn't completely understand Claudio/Jeff here, and I'm not sure if we're on the\nsame page. For queries on these tables, the index scan was very slow, due to\na fragmented index on a non-unique column, and a seq scan would have been (was)\nfaster (even if it means reading 70GB and filtering out 6 of 7 days' data).\nThat was resolved by adding a nightly reindex job (.. which sometimes competes\n
with other maintenance and has trouble running on every table every night).\n\nBut I did find that someone else had previously reported this problem (in a\nstrikingly similar context and message, perhaps clearer than mine):\nhttps://www.postgresql.org/message-id/flat/520D6610.8040907%40emulex.com#[email protected]\n\nI also found this older thread:\nhttps://www.postgresql.org/message-id/flat/n6cmpug13b9rk1srebjvhphg0lm8dou1kn%404ax.com#[email protected]\n\nThere was mention of a TODO item:\n\n * Compute index correlation on CREATE INDEX and ANALYZE, use it for index\n * scan cost estimation\n\n.. but perhaps I misunderstand and that's long since resolved ?\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 13 Aug 2016 13:54:48 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "On Sat, Aug 13, 2016 at 3:54 PM, Justin Pryzby <[email protected]> wrote:\n> On Sun, Jun 05, 2016 at 12:28:47PM -0700, Jeff Janes wrote:\n>> On Sun, Jun 5, 2016 at 9:03 AM, Tom Lane <[email protected]> wrote:\n>> > Claudio Freire <[email protected]> writes:\n>> >> So correlated index scans look extra favourable vs bitmap index scans\n>> >> because bitmap heap scans consider random page costs sans correlation\n>> >> effects (even though correlation applies to bitmap heap scans as\n>> >> well).\n>> >\n>> > Really? How? The index ordering has nothing to do with the order in\n>> > which heap tuples will be visited.\n>>\n>> It is not the order itself, but the density.\n>>\n>> If the index is read in a range scan (as opposed to =ANY scan), and\n>> the index lead column is correlated with the table ordering, then the\n>> parts of the table that need to be visited will be much denser than if\n>> there were no correlation. But Claudio is saying that this is not\n>> being accounted for.\n>\n> I didn't completely understand Claudio/Jeff here, and I'm not sure if we're on the\n> same page. For queries on these tables, the index scan was very slow, due to\n> a fragmented index on a non-unique column, and a seq scan would have been (was)\n> faster (even if it means reading 70GB and filtering out 6 of 7 days' data).\n> That was resolved by adding a nightly reindex job (.. which sometimes competes\n> with other maintenance and has trouble running on every table every night).\n\nYes, but a bitmap index scan should be faster than both, but the\nplanner is discarding it because it estimates it will be slower,\nbecause it doesn't account for correlation between index keys and\nphysical location.\n\nAnd, while what you clarify there would indeed affect the estimation\nfor index scans, it would only make the issue worse: the planner\nthinks the index scan will be better than it really is, because it's\nexpecting correlation, but the \"fragmentation\" of same-key runs\ndestroys that correlation. A bitmap index scan would restore it,\n
though, so the bitmap index scan would be that much better.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 15 Aug 2016 14:42:03 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index fragmentation on insert-only table with\n non-unique column" }, { "msg_contents": "Months ago I reported an issue with very slow index scans of tables with high\n\"correlation\" of their indexed column, due to (we concluded at the time)\nduplicate/repeated values of that column causing many lseek()s. References:\n\nhttps://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#[email protected]\nhttps://www.postgresql.org/message-id/flat/520D6610.8040907%40emulex.com#[email protected]\nhttps://www.postgresql.org/message-id/flat/n6cmpug13b9rk1srebjvhphg0lm8dou1kn%404ax.com#[email protected]\n\nMy interim workarounds were a reindex job and daily-granularity partitions\n(despite having an excessive number of child tables) to encourage seq scans\nfor daily analysis jobs rather than index scans. I've now\ncobbled together a patch to throw around and see if we can improve on that. I\ntried several changes, hoping to discourage index scan. The logic that seems\nto most accurately reflect costs has two changes: \n\nPostgres' existing behavior stores table correlation (heap value vs. position)\nbut doesn't look at index correlation, so it can't distinguish between a just-built\nindex, a highly fragmented index, or one with highly-nonsequential TIDs.\nMy patch causes ANALYZE to do a TID scan (sampled across the MCVs) to determine\ncorrelation of heap TID vs index tuple logical location (as opposed to the\ntable correlation, computed as: heap TID vs. heap value).\n\nThe second change averages separate correlation values of small lists of 300\nconsecutive TIDs, rather than the coarse-granularity/aggregate correlation of a\nsmall sample of pages across the entire index. Postgres' existing sampling is\ndesigned to give an even sample across all rows. An issue with this\ncoarse-granularity correlation is that the index can have a broad correlation\n(to physical heap location), but with many small-scale deviations, which don't\nshow up due to sampling a relatively small fraction of a large table; and/or\nthe effect of the deviations is insignificant/noise and correlation is still\ncomputed near 1.\n\nI believe the \"large scale\" correlation that postgres computes from the block\nsample fails to represent small-scale uncorrelated reads, which contribute a\nlarge number of random reads not included in the planner cost.\n\nNot only are the index reads highly random (which the planner already assumes),\nbut the CTIDs referenced within a given btree leaf page are also substantially\nnon-sequential. It seems to affect INSERTs which, over a short interval of\ntime, have a small-moderate range of column values, but which still have a\nstrong overall trend in that column WRT time (and heap location). Think of\ninserting a \"normal\" distribution of timestamps centered around now().\n\nMy original report shows lseek() for nearly every read on the *heap*. The\noriginal problem was on a table of telecommunications CDRs, indexed by \"record\nopening time\" (start time) and with high correlation value in pg_stats. We\n
import data records from a file, which is probably more or less in order of\n\"end time\".\n\nThat still displays broad correlation on \"start time\", but individual TIDs are\nnowhere near sequential. Two phone calls which end in the same 1 second\ninterval are not unlikely to have started many minutes apart... but two calls\nwhich end within the same second are very likely to have started within an hour\nof each other.. since typical duration is <1h. But, insert volume is high, and\nthere are substantial repeated keys, so the portion of an index scan returning\nCTIDs for some certain key value (timestamp with second resolution in our case)\nends up reading heap tuples for a non-negligible fraction of the table: maybe\nonly 1%, but that's 300 seeks across 1% of a table which is 10s of GB ...\nwhich is still 300 seeks, and takes long enough that cache is inadequate to\nsubstantially mitigate the cost.\n\nIt's not clear to me that there's a good way to evenly *sample* a fraction of\nthe index blocks in a manner which is agnostic to different AMs. Scanning the\nentirety of all indices on relations during (auto) analyze may be too\nexpensive. So I implemented an index scan of the MCV list. I'm guessing this\nmight cause the correlation to be under-estimated, and prefer bitmap scans\nsomewhat more than justified, due to btree insertion logic for repeated keys,\nto reduce O(n^2) behavior. The MCV list isn't perfect since that can happen with\neg. normally distributed floating point values (or timestamps with fractional\nseconds).\n\nI ran pageinspect on a recent child of the table that triggered the original\nreport:\n\nts=# SELECT itemoffset, ctid FROM bt_page_items('cdrs_huawei_pgwrecord_2017_07_07_recordopeningtime_idx',6) LIMIT 22 OFFSET 1;\n itemoffset | ctid \n------------+--------\n 2 | (81,4)\n 3 | (5,6)\n 4 | (6,5)\n 5 | (9,1)\n 6 | (10,1)\n 7 | (14,1)\n 8 | (21,8)\n 9 | (25,1)\n 10 | (30,5)\n 11 | (41,1)\n 12 | (48,7)\n 13 | (49,3)\n 14 | (50,2)\n 15 | (51,1)\n 16 | (54,1)\n 17 | (57,7)\n 18 | (58,6)\n 19 | (59,6)\n 20 | (63,5)\n 21 | (71,6)\n 22 | (72,5)\n 23 | (78,7)\n(22 rows)\n\n=> 22 adjacent index tuples referencing 22 different heap pages, only 6 of which are sequential => 16 lseek()s aka random page cost.\n\nTo generate data demonstrating this problem you can do things like:\n| CREATE TABLE t(i int, j int);\n| CREATE INDEX ON t(i);\n| \\! time for a in `seq 99`; do psql -qc 'INSERT INTO t SELECT * FROM generate_series(1,99)'; done\n\nor:\n| INSERT INTO t SELECT (0.001*a+9*(random()-0.5))::int FROM generate_series(1,99999) a;\n\nor this one, perhaps closest to our problem case:\n| INSERT INTO t SELECT a FROM generate_series(1,999) a, generate_series(1,999) b ORDER BY a+b/9.9;\n\nNote, I was able to create a case using floats without repeated keys:\n| INSERT INTO w SELECT i/99999.0+pow(2,(-random())) FROM generate_series(1,999999) i ORDER BY i\n\n| ANALYZE t;\n-- note: the correlation is even higher for larger tables with the same number of\n-- repeated keys, which is bad since the cost should go up linearly with the\n-- size and associated number of lseek()s. That's one component of the problem\n-- and I think a necessary component of any solution.\npostgres=# SELECT tablename, attname, correlation, array_length(most_common_vals,1) FROM pg_stats WHERE tablename LIKE 't%';\n tablename | attname | correlation | array_length \n-----------+---------+-------------+--------------\n t | i | 0.995212 | 87\n t | j | | \n t_i_idx | i | -0.892874 | 87\n ^^^^^^^\n
... this last line is added by my patch.\n\nThat's a bad combination: high table correlation means the planner thinks only\na fraction of the heap will be read, and sequentially, so it isn't discouraged\nfrom doing an index scan. But index TIDs are much less well correlated (0.89 vs\n0.99).\n\nNote: the negative correlation at tuple-level seems to be related to this comment:\nsrc/backend/access/nbtree/nbtinsert.c- * Once we have chosen the page to put the key on, we'll insert it before\nsrc/backend/access/nbtree/nbtinsert.c: * any existing equal keys because of the way _bt_binsrch() works.\n\nNote also:\n|postgres=# SELECT leaf_fragmentation FROM pgstatindex('t_i_idx');\n|leaf_fragmentation | 67.63\n.. but keep in mind that leaf_fragmentation only checks leaf page order, and\nnot CTID order within index pages. The planner already assumes index pages are\nrandom reads. Maybe it's a good indicator (?), but it doesn't lend itself to an\naccurate cost estimation.\n\nFor testing purposes, with:\n| shared_buffers | 128kB\n| public | t | table | pryzbyj | 35 MB | \n| SET enable_bitmapscan=off;\n| SET enable_seqscan=off;\n| SELECT pg_backend_pid();\n| SELECT sum(j) FROM t WHERE i<99;\n| Time: 3974.205 ms\n\n% sudo strace -p 530 2>/tmp/strace-pg\n% sed -r '/^\\(read|lseek/!d; s/ .*//' /tmp/strace-pg |sort |uniq -c |sort -nr |head\n 39474 lseek(41,\n 359 lseek(44,\n 8 lseek(18,\n=> 40k seeks on a 35MB table, that's 10 seeks per heap page!\n\n
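(Checking that arithmetic: 35MB / 8kB = ~4480 heap pages, and ~39.8k lseek()s / 4480 pages comes out to roughly 9-10 seeks per heap page.)\n\n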
open(\"base/12411/16634\", O_RDWR) = 41\nopen(\"base/12411/16636\", O_RDWR) = 44\n\npostgres=# SELECT relfilenode, relname FROM pg_class WHERE relfilenode>16630;\n 16634 | t\n 16636 | t_i_idx\n\n2017-07-07 17:45:54.075 CDT [6360] WARNING: HERE 1222: csquared=0.797225 minIO/R-P-C=109.750000 maxIO/R-P-C=4416.000000 costsize.c cost_index 702\n\nWith my patch, the index scan is estimated to take ~4400 seeks, rather than the actual\n40k, probably because max_IO_cost assumes that a given heap page will be\nvisited only once. But in the worst case each of its tuples would require a\nseparate lseek().... I'm not suggesting to change that, since making max_IO\n100x higher would probably change query plans more dramatically than desired..\nBut also note that, unpatched, with table correlation >0.99, postgres would've\nunder-estimated min_IO_cost not by a factor of 10x but by 400x.\n\n| postgres=# REINDEX INDEX t_i_idx;\n| postgres=# ANALYZE t;\n| postgres=# SELECT tablename, attname, correlation, array_length(most_common_vals,1) FROM pg_stats WHERE tablename LIKE 't%';\n| tablename | attname | correlation | array_length \n| -----------+---------+-------------+--------------\n| t | i | 0.995235 | 67\n| t_i_idx | i | 1 | 67\n\n=> Correctly distinguishes a freshly reindexed table.\n\n% sudo strace -p 6428 2>/tmp/strace-pg8\n% sed -r '/^\\(read|lseek/!d; s/ .*//' /tmp/strace-pg8 |sort |uniq -c |sort -nr\n 99 lseek(37,\n\n2017-07-07 17:49:47.745 CDT [6428] WARNING: HERE 1222: csquared=1.000000 minIO/R-P-C=108.500000 maxIO/R-P-C=4416.000000 costsize.c cost_index 702\n=> csquared=1, so min_IO cost is used instead of something close to max_IO\n(this patch doesn't change the computation of those values at all, it just changes\ncorrelation, whose squared value is used to interpolate between the correlated/min\ncost and the uncorrelated/max cost).\n\nThe correlated estimate of 108 seeks vs 99 actual is essentially what unpatched\npostgres would've computed by using the table correlation of 0.9952, implicitly\nassuming the index to be similarly correlated.\n\nI hope that demonstrates the need to distinguish index correlation from table\ncorrelation. I'm hoping for comments on the existing patch, specifically whether\nthere's a better way to sample the index than \"MCVs or partial index scan\".\nI've left some fragments in place from an earlier implementation involving btree\npage-level fragmentation (like pgstatindex). Also: does it make sense to keep the\nMCV/histogram for a non-expression index which duplicates a table column ? The\nstats lookup in selfuncs.c btcostestimate() would have to check for correlation\nfrom the index, and the rest of the stats from its table.\n\nA bitmap layer adds overhead, which should be avoided if not needed. But it\nshouldn't be a huge impact, and I think its relative effect is only high when\nreturning a small number of rows. I'm thinking of a few cases.\n\n
I'm thinking of a few cases.\n\n - unique / low-cardinality index scans, or scans without duplicate keys: these should have\n low index_pages_fetched(), so max_IO_cost should not be very different from\n min_io_cost, and the new index-based correlation shouldn't have much effect\n different from table correlation.\n - unclustered/uncorrelated tables: tables whose heap has low correlation\n are already discouraged from index scan; this includes tables whose column is\n UPDATEd and not just INSERTed;\n - table with correlated heap AND index: csquared should still be ~0.99 and not\n change much;\n - correlated table with uncorrelated index: this is the target case with the\n intended behavior change\n\nMy apologies: I think that's a bit of a brain dump.\n\nJustin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 7 Jul 2017 18:41:19 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "estimate correlation of index separately from table (Re:\n index fragmentation on insert-only table with non-unique column)" }, { "msg_contents": "On Fri, Jul 7, 2017 at 4:41 PM, Justin Pryzby <[email protected]> wrote:\n> The second change averages separate correlation values of small lists of 300\n> consecutive TIDs, rather than the coarse-granularity/aggregate correlation of a\n> small sample of pages across the entire index. Postgres' existing sampling is\n> designed to give an even sample across all rows. An issue with this\n> coarse-granularity correlation is that the index can have a broad correlation\n> (to physical heap location), but with many small-scale deviations, which don't\n> show up due to sampling a relatively small fraction of a large table; and/or\n> the effect of the deviations is insignificant/noise and correlation is still\n> computed near 1.\n>\n> I believe the \"large scale\" correlation that postgres computes from a block\n> sample fails to represent well the small-scale uncorrelated reads, which contribute\n> a large number of random reads but are not included in planner cost.\n\nAll of that may well be true, but I've actually come around to the\nview that we should treat TID as a first class part of the keyspace,\nthat participates in comparisons as an implicit last column in all\ncases (not just when B-Trees are built within tuplesort.c). That would\nprobably obviate the need for a patch like this entirely, because\npg_stats.correlation would be unaffected by duplicates. I think that\nthis would have a number of advantages, some of which might be\nsignificant. For example, I think it could make LP_DEAD cleanup within\nnbtree more effective for some workloads, especially workloads where\nit is important for HOT pruning and LP_DEAD cleanup to work in concert\n-- locality of access probably matters with that. Also, with every\nentry in the index guaranteed to be unique, we can imagine VACUUM\nbeing much more selective with killing index entries, when the TID\narray it uses happens to be very small. With the freeze map stuff\nthat's now in place, being able to do that matters more than before.\n\nThe original Lehman & Yao algorithm is supposed to have unique keys in\nall cases, but we don't follow that in order to make duplicates work,\nwhich is implemented by changing an invariant (see nbtree README for\ndetails). So, this could simplify some aspects of how binary searches\nmust work in the face of having to deal with non-unique keys. 
I think\nthat there are cases where many non-HOT UPDATEs have to go through a\nbunch of duplicate leaf tuples and do visibility checking on old\nversions for no real benefit. With the TID as part of the keyspace, we\ncould instead tell the B-Tree code to insert a new tuple as part of an\nUPDATE while using the TID as part of its insertion scan key, so\nrummaging through many duplicates is avoided.\n\nThat having been said, it would be hard to do this for all the reasons\nI went into in that thread you referenced [1]. If you're going to\ntreat TID as a part of the keyspace, you have to totally embrace that,\nwhich means that the internal pages need to have heap TIDs too (not\njust pointers to the lower levels in the B-Tree, which the\nIndexTuple's t_tid pointer is used for there). Those are the place\nwhere you need to literally append this new, implicit heap TID column\nas if it was just another user-visible attribute, since that\ninformation isn't stored in the internal pages today at all. Doing\nthat has a cost, which isn't going to be acceptable if we naively\nappend a heap TID to every internal page IndexTuple. With a type like\nint4, you're going to completely bloat those pages, with big\nconsequences for fan-in.\n\nSo, really, what needs to happen to make it work is someone needs to\nwrite a suffix truncation patch, which implies that only those cases\nthat actually benefit from increasing the width of internal page\nIndexTuples (those with many \"would-be duplicates\") pay the cost. This\nis a classic technique, that I've actually already prototyped, though\nthat's extremely far from being worth posting here. That was just to\nverify my understanding.\n\nI think that I should write and put up for discussion a design\ndocument for various nbtree enhancements. These include internal page\nsuffix truncation, prefix compression, and key normalization. I'm\nprobably not going to take any of these projects on, but it would be\nuseful if there was at least a little bit of clarity about how they\ncould be implemented. Maybe we could reach some kind of weak consensus\nwithout going to great lengths. These optimizations are closely\nintertwined things, and the lack of clarity on how they all fit\ntogether is probably holding back an implementation of any one of\nthem.\n\n[1] https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#[email protected]\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 7 Jul 2017 20:27:02 -0700", "msg_from": "Peter Geoghegan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: estimate correlation of index separately from table (Re:\n index fragmentation on insert-only table with non-unique column)" } ]
[ { "msg_contents": "Hi all.\n\nWe have found that queries through PgBouncer 1.7.2 (with transaction pooling) to local PostgreSQL are almost two times slower in 9.5.3 than in 9.4.8 on RHEL 6 hosts (all packages are updated to last versions). Meanwhile the problem can’t be reproduced i.e. on Ubuntu 14.04 (also fully-updated).\n\nHere is how the results look like for 9.4, 9.5 and 9.6. All are built from latest commits on yesterday in\n\t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n\t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n\t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n\nAll of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n\nShortly:\n\nOS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\nRHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\nRHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\nRHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\nUbuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\nUbuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\nUbuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n\nYou could see that the difference between major versions on Ubuntu is not significant, but on RHEL 9.5 is 70% slower than 9.4 and 9.6.\n\nBelow are more details.\n\nRHEL 6:\n\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 2693962\nlatency average: 1.425 ms\ntps = 44897.461518 (including connections establishing)\ntps = 44898.763258 (excluding connections establishing)\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 1572014\nlatency average: 2.443 ms\ntps = 26198.928627 (including connections establishing)\ntps = 26199.803363 (excluding connections establishing)\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 2581645\nlatency average: 1.487 ms\ntps = 43025.676995 (including connections establishing)\ntps = 43027.038275 (excluding connections establishing)\npostgres@pgload05g ~ $\n\nUbuntu 14.04 (the same hardware):\n\npostgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 4047653\nlatency average: 0.949 ms\ntps = 67458.361515 (including connections establishing)\ntps = 67459.983480 (excluding connections establishing)\npostgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 
64\nduration: 60 s\nnumber of transactions actually processed: 3844084\nlatency average: 0.999 ms\ntps = 64065.447458 (including connections establishing)\ntps = 64066.943627 (excluding connections establishing)\npostgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 3861088\nlatency average: 0.995 ms\ntps = 64348.573126 (including connections establishing)\ntps = 64350.195750 (excluding connections establishing)\npostgres@pgloadpublic02:~$\n\nIn both tests (RHEL and Ubuntu) the bottleneck is the performance of a single CPU core which is 100% consumed by PgBouncer. If pgbench connects to postgres directly I get the following (expected) numbers:\n\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5432'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 10010710\nlatency average: 0.384 ms\ntps = 166835.937859 (including connections establishing)\ntps = 166849.730224 (excluding connections establishing)\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5433'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 13373890\nlatency average: 0.287 ms\ntps = 222888.311289 (including connections establishing)\ntps = 222951.470125 (excluding connections establishing)\npostgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5434'\ntransaction type: SELECT only\nscaling factor: 100\nquery mode: simple\nnumber of clients: 64\nnumber of threads: 64\nduration: 60 s\nnumber of transactions actually processed: 12989816\nlatency average: 0.296 ms\ntps = 216487.458399 (including connections establishing)\ntps = 216548.069976 (excluding connections establishing)\npostgres@pgload05g ~ $\n\nCompilation options look almost the same:\n# RHEL 6\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\n# Ubuntu\nCFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2\n\nAttached are a simple script to deploy the testing environment (PgBouncer should be installed) and pgbouncer config. 
I could provide any other needed information like backtraces or perf reports or anything else.\n\n\n\n--\nMay the force be with you…\nhttps://simply.name", "msg_date": "Wed, 25 May 2016 17:33:52 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "-performance\n+hackers\n\n> 25 мая 2016 г., в 17:33, Vladimir Borodin <[email protected]> написал(а):\n> \n> Hi all.\n> \n> We have found that queries through PgBouncer 1.7.2 (with transaction pooling) to local PostgreSQL are almost two times slower in 9.5.3 than in 9.4.8 on RHEL 6 hosts (all packages are updated to last versions). Meanwhile the problem can’t be reproduced i.e. on Ubuntu 14.04 (also fully-updated).\n> \n> Here is how the results look like for 9.4, 9.5 and 9.6. All are built from latest commits on yesterday in\n> \t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n> \t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n> \t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n> \n> All of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n> \n> Shortly:\n> \n> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> Ubuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\n> Ubuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\n> Ubuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n\nThe results above are not really fair, pgbouncer.ini was a bit different on Ubuntu host (application_name_add_host was disabled). Here are the right results with exactly the same configuration:\n\nOS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\nRHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\nRHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\nRHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\nUbuntu 14.04\t9.4\t\t\t\t\t45971\t\t1.392 ms\nUbuntu 14.04\t9.5\t\t\t\t\t40282\t\t1.589 ms\nUbuntu 14.04\t9.6\t\t\t\t\t45410\t\t1.409 ms\n\nIt can be seen that there is a regression for 9.5 in Ubuntu also, but not so significant. We first thought that the reason is 38628db8d8caff21eb6cf8d775c0b2d04cf07b9b (Add memory barriers for PgBackendStatus.st_changecount protocol), but in that case the regression should also be seen in 9.6 also.\n\nThere also was a bunch of changes in FE/BE communication (like 387da18874afa17156ee3af63766f17efb53c4b9 or 98a64d0bd713cb89e61bef6432befc4b7b5da59e) and that may answer the question of regression in 9.5 and normal results in 9.6. Probably the right way to find the answer is to do bisect. 
I’ll do it but if some more diagnostics information can help, feel free to ask about it.\n\n> \n> You could see that the difference between major versions on Ubuntu is not significant, but on RHEL 9.5 is 70% slower than 9.4 and 9.6.\n> \n> Below are more details.\n> \n> RHEL 6:\n> \n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2693962\n> latency average: 1.425 ms\n> tps = 44897.461518 (including connections establishing)\n> tps = 44898.763258 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 1572014\n> latency average: 2.443 ms\n> tps = 26198.928627 (including connections establishing)\n> tps = 26199.803363 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2581645\n> latency average: 1.487 ms\n> tps = 43025.676995 (including connections establishing)\n> tps = 43027.038275 (excluding connections establishing)\n> postgres@pgload05g ~ $\n> \n> Ubuntu 14.04 (the same hardware):\n> \n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2758348\n> latency average: 1.392 ms\n> tps = 45970.634737 (including connections establishing)\n> tps = 45971.531098 (excluding connections establishing)\n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2417009\n> latency average: 1.589 ms\n> tps = 40282.003641 (including connections establishing)\n> tps = 40282.855938 (excluding connections establishing)\n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2724666\n> latency average: 1.409 ms\n> tps = 45409.308603 (including connections establishing)\n> tps = 45410.152406 (excluding connections establishing)\n> postgres@pgloadpublic02:~$\n> \n> In both tests (RHEL and Ubuntu) the bottleneck is performance of singe CPU core which is 100% consumed by PgBouncer. 
If pgbench connects to postgres directly I get the following (expected) numbers:\n> \n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5432'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 10010710\n> latency average: 0.384 ms\n> tps = 166835.937859 (including connections establishing)\n> tps = 166849.730224 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5433'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 13373890\n> latency average: 0.287 ms\n> tps = 222888.311289 (including connections establishing)\n> tps = 222951.470125 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5434'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 12989816\n> latency average: 0.296 ms\n> tps = 216487.458399 (including connections establishing)\n> tps = 216548.069976 (excluding connections establishing)\n> postgres@pgload05g ~ $\n> \n> Compilation options look almost the same:\n> # RHEL 6\n> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\n> # Ubuntu\n> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2\n> \n> Attached are a simple script to deploy the testing environment (PgBouncer should be installed) and pgbouncer config. I could provide any other needed information like backtraces or perf reports or anything else.\n> \n> <pgbouncer.ini>\n> <deploy.sh>\n> \n> --\n> May the force be with you…\n> https://simply.name\n\n\n--\nMay the force be with you…\nhttps://simply.name", "msg_date": "Fri, 27 May 2016 19:57:34 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "Hi,\n\n\nOn 2016-05-27 19:57:34 +0300, Vladimir Borodin wrote:\n> -performance\n> > Here is how the results look like for 9.4, 9.5 and 9.6. 
All are built from latest commits on yesterday in\n> > \t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n> > \t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n> > \t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n> > \n> > All of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n> > \n> > Shortly:\n> > \n> > OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n> > RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n> > RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n> > RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> > Ubuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\n> > Ubuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\n> > Ubuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n> \n> The results above are not really fair, pgbouncer.ini was a bit different on Ubuntu host (application_name_add_host was disabled). Here are the right results with exactly the same configuration:\n> \n> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> Ubuntu 14.04\t9.4\t\t\t\t\t45971\t\t1.392 ms\n> Ubuntu 14.04\t9.5\t\t\t\t\t40282\t\t1.589 ms\n> Ubuntu 14.04\t9.6\t\t\t\t\t45410\t\t1.409 ms\n\nHm. I'm a bit confused. You show one result for 9.5 with bad and one\nwith good performance. I suspect the second one is supposed to be a 9.6?\n\nAm I understanding correctly that the performance near entirely\nrecovered with 9.6? If so, I suspect we might be dealing with a memory\nalignment issue. Do the 9.5 results change if you increase\nmax_connections by one or two (without changing anything else)?\n\nWhat's the actual hardware?\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Fri, 27 May 2016 14:56:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer\n on RHEL 6" }, { "msg_contents": "> 28 мая 2016 г., в 0:56, Andres Freund <[email protected]> написал(а):\n> \n> Hi,\n> \n> \n> On 2016-05-27 19:57:34 +0300, Vladimir Borodin wrote:\n>> -performance\n>>> Here is how the results look like for 9.4, 9.5 and 9.6. All are built from latest commits on yesterday in\n>>> \t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n>>> \t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n>>> \t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n>>> \n>>> All of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n>>> \n>>> Shortly:\n>>> \n>>> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n>>> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n>>> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n>>> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n>>> Ubuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\n>>> Ubuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\n>>> Ubuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n>> \n>> The results above are not really fair, pgbouncer.ini was a bit different on Ubuntu host (application_name_add_host was disabled). 
Here are the right results with exactly the same configuration:\n>> \n>> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n>> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n>> Ubuntu 14.04\t9.4\t\t\t\t\t45971\t\t1.392 ms\n>> Ubuntu 14.04\t9.5\t\t\t\t\t40282\t\t1.589 ms\n>> Ubuntu 14.04\t9.6\t\t\t\t\t45410\t\t1.409 ms\n> \n> Hm. I'm a bit confused. You show one result for 9.5 with bad and one\n> with good performance. I suspect the second one is supposed to be a 9.6?\n\nNo, they are both for 9.5. One of them is on RHEL 6 host, another one on Ubuntu 14.04.\n\n> \n> Am I understanding correctly that the performance near entirely\n> recovered with 9.6?\n\nYes, 9.6 is much better than 9.5.\n\n> If so, I suspect we might be dealing with a memory\n> alignment issue. Do the 9.5 results change if you increase\n> max_connections by one or two (without changing anything else)?\n\nResults with max_connections=100:\n\nOS\t\tVersion\t\tTPS\t\tAvg. latency\nRHEL 6\t\t9.4\t\t69810\t\t0.917\nRHEL 6\t\t9.5\t\t35303\t\t1.812\nRHEL 6\t\t9.6\t\t71827\t\t0.891\nUbuntu 14.04\t9.4\t\t76829\t\t0.833\nUbuntu 14.04\t9.5\t\t67574\t\t0.947\nUbuntu 14.04\t9.6\t\t79200\t\t0.808\n\nResults with max_connections=101:\n\nOS\t\tVersion\t\tTPS\t\tAvg. latency\nRHEL 6\t\t9.4\t\t70059\t\t0.914\nRHEL 6\t\t9.5\t\t35979\t\t1.779\nRHEL 6\t\t9.6\t\t71183\t\t0.899\nUbuntu 14.04\t9.4\t\t78934\t\t0.811\nUbuntu 14.04\t9.5\t\t67803\t\t0.944\nUbuntu 14.04\t9.6\t\t79624\t\t0.804\n\n\nResults with max_connections=102:\n\nOS\t\tVersion\t\tTPS\t\tAvg. latency\nRHEL 6\t\t9.4\t\t70710\t\t0.905\nRHEL 6\t\t9.5\t\t36615\t\t1.748\nRHEL 6\t\t9.6\t\t69742\t\t0.918\nUbuntu 14.04\t9.4\t\t76356\t\t0.838\nUbuntu 14.04\t9.5\t\t66814\t\t0.958\nUbuntu 14.04\t9.6\t\t78528\t\t0.815\n\nDoesn’t seem that it is a memory alignment issue. Also please note that there is no performance degradation when connections from pgbench to postgres are established without pgbouncer:\n\nOS\t\tVersion\t\tTPS\t\tAvg. latency\nRHEL 6\t\t9.4\t\t167427\t\t0.382\nRHEL 6\t\t9.5\t\t223674\t\t0.286\nRHEL 6\t\t9.6\t\t215580\t\t0.297\nUbuntu 14.04\t9.4\t\t176659\t\t0.362\nUbuntu 14.04\t9.5\t\t248277\t\t0.258\nUbuntu 14.04\t9.6\t\t245871\t\t0.260\n\n> \n> What's the actual hardware?\n\nHost with RHEL has Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz (2 sockets, 16 physical cores, 32 cores with Hyper-Threading) and 256 GB of RAM while host with Ubuntu has Intel(R) Xeon(R) CPU E5-2667 v2 @ 3.30GHz (2 sockets, 16 physical cores, 32 cores with Hyper-Threading) and 128 GB of RAM.\n\n> \n> Greetings,\n> \n> Andres Freund\n\n\n--\nMay the force be with you…\nhttps://simply.name
", "msg_date": "Mon, 30 May 2016 17:53:55 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "> 28 мая 2016 г., в 0:56, Andres Freund <[email protected]> написал(а):\n> \n> On 2016-05-27 19:57:34 +0300, Vladimir Borodin wrote:\n>> \n>> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n>> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> \n> Hm. I'm a bit confused. You show one result for 9.5 with bad and one\n> with good performance. I suspect the second one is supposed to be a 9.6?\n\nSorry, I misunderstood. Yes, the last line above is for 9.6, that was a typo.\n\n> \n> Greetings,\n> \n> Andres Freund\n\n\n--\nMay the force be with you…\nhttps://simply.name
", "msg_date": "Mon, 30 May 2016 20:03:13 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "> 27 мая 2016 г., в 19:57, Vladimir Borodin <[email protected]> написал(а):\n> \n> -performance\n> +hackers\n> \n>> 25 мая 2016 г., в 17:33, Vladimir Borodin <[email protected]> написал(а):\n>> \n>> Hi all.\n>> \n>> We have found that queries through PgBouncer 1.7.2 (with transaction pooling) to local PostgreSQL are almost two times slower in 9.5.3 than in 9.4.8 on RHEL 6 hosts (all packages are updated to last versions). Meanwhile the problem can’t be reproduced i.e. on Ubuntu 14.04 (also fully-updated).\n>> \n>> Here is how the results look like for 9.4, 9.5 and 9.6. All are built from latest commits on yesterday in\n>> \t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n>> \t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n>> \t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n>> \n>> All of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n>> \n>> Shortly:\n>> \n>> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n>> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n>> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n>> Ubuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\n>> Ubuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\n>> Ubuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n> \n> The results above are not really fair, pgbouncer.ini was a bit different on Ubuntu host (application_name_add_host was disabled). Here are the right results with exactly the same configuration:\n> \n> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> Ubuntu 14.04\t9.4\t\t\t\t\t45971\t\t1.392 ms\n> Ubuntu 14.04\t9.5\t\t\t\t\t40282\t\t1.589 ms\n> Ubuntu 14.04\t9.6\t\t\t\t\t45410\t\t1.409 ms\n> \n> It can be seen that there is a regression for 9.5 in Ubuntu also, but not so significant. We first thought that the reason is 38628db8d8caff21eb6cf8d775c0b2d04cf07b9b (Add memory barriers for PgBackendStatus.st_changecount protocol), but in that case the regression should also be seen in 9.6 also.\n> \n> There also was a bunch of changes in FE/BE communication (like 387da18874afa17156ee3af63766f17efb53c4b9 or 98a64d0bd713cb89e61bef6432befc4b7b5da59e) and that may answer the question of regression in 9.5 and normal results in 9.6. Probably the right way to find the answer is to do bisect. I’ll do it but if some more diagnostics information can help, feel free to ask about it.\n\nYep, bisect confirms that the first bad commit in REL9_5_STABLE is 387da18874afa17156ee3af63766f17efb53c4b9. 
Full output is attached.\nAnd bisect for master branch confirms that the situation became much better after 98a64d0bd713cb89e61bef6432befc4b7b5da59e. Output is also attached.\n\nOn Ubuntu performance degradation is ~15% and on RHEL it is ~100%. I don’t know what is the cause for different numbers on RHEL and Ubuntu but certainly there is a regression when pgbouncer is connected to postgres through localhost. When I try to connect pgbouncer to postgres through unix-socket performance is constantly bad on all postgres versions.\n\nBoth servers are for testing but I can easily provide you SSH access only to Ubuntu host if necessary. I can also gather more diagnostics if needed.\n\n\n> \n>> \n>> You could see that the difference between major versions on Ubuntu is not significant, but on RHEL 9.5 is 70% slower than 9.4 and 9.6.\n>> \n>> Below are more details.\n>> \n>> RHEL 6:\n>> \n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 2693962\n>> latency average: 1.425 ms\n>> tps = 44897.461518 (including connections establishing)\n>> tps = 44898.763258 (excluding connections establishing)\n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 1572014\n>> latency average: 2.443 ms\n>> tps = 26198.928627 (including connections establishing)\n>> tps = 26199.803363 (excluding connections establishing)\n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 2581645\n>> latency average: 1.487 ms\n>> tps = 43025.676995 (including connections establishing)\n>> tps = 43027.038275 (excluding connections establishing)\n>> postgres@pgload05g ~ $\n>> \n>> Ubuntu 14.04 (the same hardware):\n>> \n>> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 2758348\n>> latency average: 1.392 ms\n>> tps = 45970.634737 (including connections establishing)\n>> tps = 45971.531098 (excluding connections establishing)\n>> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 2417009\n>> latency average: 1.589 ms\n>> tps = 40282.003641 (including connections establishing)\n>> tps = 40282.855938 (excluding connections establishing)\n>> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 
'host=localhost port=6432 dbname=pg96'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 2724666\n>> latency average: 1.409 ms\n>> tps = 45409.308603 (including connections establishing)\n>> tps = 45410.152406 (excluding connections establishing)\n>> postgres@pgloadpublic02:~$\n>> \n>> In both tests (RHEL and Ubuntu) the bottleneck is performance of singe CPU core which is 100% consumed by PgBouncer. If pgbench connects to postgres directly I get the following (expected) numbers:\n>> \n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5432'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 10010710\n>> latency average: 0.384 ms\n>> tps = 166835.937859 (including connections establishing)\n>> tps = 166849.730224 (excluding connections establishing)\n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5433'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 13373890\n>> latency average: 0.287 ms\n>> tps = 222888.311289 (including connections establishing)\n>> tps = 222951.470125 (excluding connections establishing)\n>> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5434'\n>> transaction type: SELECT only\n>> scaling factor: 100\n>> query mode: simple\n>> number of clients: 64\n>> number of threads: 64\n>> duration: 60 s\n>> number of transactions actually processed: 12989816\n>> latency average: 0.296 ms\n>> tps = 216487.458399 (including connections establishing)\n>> tps = 216548.069976 (excluding connections establishing)\n>> postgres@pgload05g ~ $\n>> \n>> Compilation options look almost the same:\n>> # RHEL 6\n>> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\n>> # Ubuntu\n>> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2\n>> \n>> Attached are a simple script to deploy the testing environment (PgBouncer should be installed) and pgbouncer config. I could provide any other needed information like backtraces or perf reports or anything else.\n>> \n>> <pgbouncer.ini>\n>> <deploy.sh>\n>> \n>> --\n>> May the force be with you…\n>> https://simply.name <https://simply.name/>\n> \n> \n> --\n> May the force be with you…\n> https://simply.name <https://simply.name/>\n\n--\nMay the force be with you…\nhttps://simply.name", "msg_date": "Tue, 31 May 2016 12:06:03 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "UP. repeat tests on local vm.. 
results are discouraging\nOS \t PG \tTPS \t\tAVG latency\nCentos 7 \t9.5.3 \t23.711023 \t168.421\nCentos 7 \t9.5.3 \t26.609271 \t150.188\nCentos 7 \t9.5.3 \t25.220044 \t158.416\nCentos 7 \t9.5.3 \t25.598977 \t156.047\nCentos 7 \t9.4.8 \t278.572191 \t14.077\nCentos 7 \t9.4.8 \t247.237755 \t16.177\nCentos 7 \t9.4.8 \t240.007524 \t16.276\nCentos 7 \t9.4.8 \t237.862238 \t16.596\n\nps: latest update on centos 7 +xfs + latest database version from repo, no pgbouncer \n\n\n> On 25 May 2016, at 17:33, Vladimir Borodin <[email protected]> wrote:\n> \n> Hi all.\n> \n> We have found that queries through PgBouncer 1.7.2 (with transaction pooling) to local PostgreSQL are almost two times slower in 9.5.3 than in 9.4.8 on RHEL 6 hosts (all packages are updated to last versions). Meanwhile the problem can’t be reproduced i.e. on Ubuntu 14.04 (also fully-updated).\n> \n> Here is how the results look like for 9.4, 9.5 and 9.6. All are built from latest commits on yesterday in\n> \t* REL9_4_STABLE (a0cc89a28141595d888d8aba43163d58a1578bfb),\n> \t* REL9_5_STABLE (e504d915bbf352ecfc4ed335af934e799bf01053),\n> \t* master (6ee7fb8244560b7a3f224784b8ad2351107fa55d).\n> \n> All of them are build on the host where testing is done (with stock gcc versions). Sysctls, pgbouncer config and everything we found are the same, postgres configs are default, PGDATA is in tmpfs. All numbers are reproducible, they are stable between runs.\n> \n> Shortly:\n> \n> OS\t\t\tPostgreSQL version\tTPS\t\t\tAvg. latency\n> RHEL 6\t\t9.4\t\t\t\t\t44898\t\t1.425 ms\n> RHEL 6\t\t9.5\t\t\t\t\t26199\t\t2.443 ms\n> RHEL 6\t\t9.5\t\t\t\t\t43027\t\t1.487 ms\n> Ubuntu 14.04\t9.4\t\t\t\t\t67458\t\t0.949 ms\n> Ubuntu 14.04\t9.5\t\t\t\t\t64065\t\t0.999 ms\n> Ubuntu 14.04\t9.6\t\t\t\t\t64350\t\t0.995 ms\n> \n> You could see that the difference between major versions on Ubuntu is not significant, but on RHEL 9.5 is 70% slower than 9.4 and 9.6.\n> \n> Below are more details.\n> \n> RHEL 6:\n> \n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2693962\n> latency average: 1.425 ms\n> tps = 44897.461518 (including connections establishing)\n> tps = 44898.763258 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 1572014\n> latency average: 2.443 ms\n> tps = 26198.928627 (including connections establishing)\n> tps = 26199.803363 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 2581645\n> latency average: 1.487 ms\n> tps = 43025.676995 (including connections establishing)\n> tps = 43027.038275 (excluding connections establishing)\n> postgres@pgload05g ~ $\n> \n> Ubuntu 14.04 (the same hardware):\n> \n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench 
-U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg94'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 4047653\n> latency average: 0.949 ms\n> tps = 67458.361515 (including connections establishing)\n> tps = 67459.983480 (excluding connections establishing)\n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 3844084\n> latency average: 0.999 ms\n> tps = 64065.447458 (including connections establishing)\n> tps = 64066.943627 (excluding connections establishing)\n> postgres@pgloadpublic02:~$ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg96'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 3861088\n> latency average: 0.995 ms\n> tps = 64348.573126 (including connections establishing)\n> tps = 64350.195750 (excluding connections establishing)\n> postgres@pgloadpublic02:~$\n> \n> In both tests (RHEL and Ubuntu) the bottleneck is performance of singe CPU core which is 100% consumed by PgBouncer. If pgbench connects to postgres directly I get the following (expected) numbers:\n> \n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5432'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 10010710\n> latency average: 0.384 ms\n> tps = 166835.937859 (including connections establishing)\n> tps = 166849.730224 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5433'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 13373890\n> latency average: 0.287 ms\n> tps = 222888.311289 (including connections establishing)\n> tps = 222951.470125 (excluding connections establishing)\n> postgres@pgload05g ~ $ /usr/lib/postgresql/9.4/bin/pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5434'\n> transaction type: SELECT only\n> scaling factor: 100\n> query mode: simple\n> number of clients: 64\n> number of threads: 64\n> duration: 60 s\n> number of transactions actually processed: 12989816\n> latency average: 0.296 ms\n> tps = 216487.458399 (including connections establishing)\n> tps = 216548.069976 (excluding connections establishing)\n> postgres@pgload05g ~ $\n> \n> Compilation options look almost the same:\n> # RHEL 6\n> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -g -O2\n> # Ubuntu\n> CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv -fexcess-precision=standard -g -O2\n> 
\n> Attached are a simple script to deploy the testing environment (PgBouncer should be installed) and pgbouncer config. I could provide any other needed information like backtraces or perf reports or anything else.\n> \n> <pgbouncer.ini>\n> <deploy.sh>\n> \n> --\n> May the force be with you…\n> https://simply.name <https://simply.name/>", "msg_date": "Thu, 2 Jun 2016 14:18:26 +0300", "msg_from": "=?utf-8?B?0JDQvdGC0L7QvSDQkdGD0YjQvNC10LvQtdCy?=\n <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer on RHEL 6" }, { "msg_contents": "Hi,\n\nOn 2016-06-02 14:18:26 +0300, Антон Бушмелев wrote:\n> UP. repeat tests on local vm.. reults are discouraging\n> OS \t PG \tTPS \t\tAVG latency\n> Centos 7 \t9.5.3 \t23.711023 \t168.421\n> Centos 7 \t9.5.3 \t26.609271 \t150.188\n> Centos 7 \t9.5.3 \t25.220044 \t158.416\n> Centos 7 \t9.5.3 \t25.598977 \t156.047\n> Centos 7 \t9.4.8 \t278.572191 \t14.077\n> Centos 7 \t9.4.8 \t247.237755 \t16.177\n> Centos 7 \t9.4.8 \t240.007524 \t16.276\n> Centos 7 \t9.4.8 \t237.862238 \t16.596\n\nCould you provide profiles on 9.4 and 9.5? Which kernel did you have\nenabled? Is /proc/sys/kernel/sched_autogroup_enabled 1 or 0?\n\nRegards,\n\nAndres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 9 Jun 2016 15:28:56 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.4 -> 9.5 regression with queries through pgbouncer\n on RHEL 6" }, { "msg_contents": "> 10 июня 2016 г., в 1:28, Andres Freund <[email protected]> написал(а):\n> \n> Hi,\n> \n> On 2016-06-02 14:18:26 +0300, Антон Бушмелев wrote:\n>> UP. repeat tests on local vm.. reults are discouraging\n>> OS \t PG \tTPS \t\tAVG latency\n>> Centos 7 \t9.5.3 \t23.711023 \t168.421\n>> Centos 7 \t9.5.3 \t26.609271 \t150.188\n>> Centos 7 \t9.5.3 \t25.220044 \t158.416\n>> Centos 7 \t9.5.3 \t25.598977 \t156.047\n>> Centos 7 \t9.4.8 \t278.572191 \t14.077\n>> Centos 7 \t9.4.8 \t247.237755 \t16.177\n>> Centos 7 \t9.4.8 \t240.007524 \t16.276\n>> Centos 7 \t9.4.8 \t237.862238 \t16.596\n> \n> Could you provide profiles on 9.4 and 9.5? Which kernel did you have\n> enabled? Is /proc/sys/kernel/sched_autogroup_enabled 1 or 0?\n\nI don’t know anything about Anton’s installation. I’m having troubles on RHEL 6 with stock kernel (2.6.32-642.el6.x86_64). I also tried a couple of non-official kernels (3.10, 3.19) but results didn’t change much.\n\n/proc/sys/kernel/sched_autogroup_enabled doesn’t change the picture in general for 9.5 or 9.6 but improves for 9.4:\nroot@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled\n0\nroot@pgload05g ~ # /tmp/run.sh\nRHEL 6\t\t9.4\t\t69163\t\t0.925\nRHEL 6\t\t9.5\t\t34495\t\t1.855\nRHEL 6\t\t9.6\t\t70631\t\t0.906\nroot@pgload05g ~ # echo 1 >/proc/sys/kernel/sched_autogroup_enabled\nroot@pgload05g ~ # /tmp/run.sh\nRHEL 6\t\t9.4\t\t82242\t\t0.778\nRHEL 6\t\t9.5\t\t34100\t\t1.877\nRHEL 6\t\t9.6\t\t70599\t\t0.907\nroot@pgload05g ~ #\n\nFor taking perf profiles I’ve recompiled all versions with CFLAGS='-O2 -fno-omit-frame-pointer’ and issued the following command during pgbench runs:\nperf record -g --call-graph=dwarf -a -o pg9?_all.data sleep 10\n\nAfter run:\nperf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n\nThe results from pg9?_perf_report.txt are attached. 
Note that in all cases some events were lost, i.e.:\n\nroot@pgload05g ~ # perf report -g -i pg94_all.data >/tmp/pg94_perf_report.txt\nFailed to open [vsyscall], continuing without symbols\nWarning:\nProcessed 537137 events and lost 7846 chunks!\n\nCheck IO/CPU overload!\n\nroot@pgload05g ~ #\n\nThe reason for that is an overloaded I/O subsystem.\n\n\n> \n> Regards,\n> \n> Andres\n\n\n--\nMay the force be with you…\nhttps://simply.name", "msg_date": "Mon, 13 Jun 2016 00:42:19 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] 9.4 -> 9.5 regression with queries through pgbouncer on\n RHEL 6" }, { "msg_contents": "Hi Vladimir,\n\nThanks for these reports.\n\nOn 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:\n> perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n\nAny chance you could redo the reports with --no-children --call-graph=fractal\nadded? The mode that includes child overheads unfortunately makes the\noutput hard to interpret/compare.\n\n> The results from pg9?_perf_report.txt are attached. Note that in all cases some events were lost, i.e.:\n> \n> root@pgload05g ~ # perf report -g -i pg94_all.data >/tmp/pg94_perf_report.txt\n> Failed to open [vsyscall], continuing without symbols\n> Warning:\n> Processed 537137 events and lost 7846 chunks!\n\nYou can reduce the overhead by reducing the sampling frequency, e.g. by\nspecifying -F 300.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Sun, 12 Jun 2016 14:51:40 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] 9.4 -> 9.5 regression with queries through pgbouncer\n on RHEL 6" }, { "msg_contents": "> On 13 June 2016, at 0:51, Andres Freund <[email protected]> wrote:\n> \n> Hi Vladimir,\n> \n> Thanks for these reports.\n> \n> On 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:\n>> perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n> \n> Any chance you could redo the reports with --no-children --call-graph=fractal\n> added? The mode that includes child overheads unfortunately makes the\n> output hard to interpret/compare.\n\nOf course. Not sure if that is important, but I upgraded perf for that (because the --no-children option was introduced in ~3.16), so perf record and perf report were done with different perf versions.\n\n\n\nAlso I’ve done the same test on the same host (RHEL 6) but with a 4.6 kernel/perf, writing perf data to /dev/shm to avoid losing events. Perf report output is also attached, but the important thing is that the regression is not so significant:\n\nroot@pgload05g ~ # uname -r\n4.6.0-1.el6.elrepo.x86_64\nroot@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled\n1\nroot@pgload05g ~ # /tmp/run.sh\nRHEL 6\t\t9.4\t\t71634\t\t0.893\nRHEL 6\t\t9.5\t\t54005\t\t1.185\nRHEL 6\t\t9.6\t\t65550\t\t0.976\nroot@pgload05g ~ # echo 0 >/proc/sys/kernel/sched_autogroup_enabled\nroot@pgload05g ~ # /tmp/run.sh\nRHEL 6\t\t9.4\t\t73041\t\t0.876\nRHEL 6\t\t9.5\t\t60105\t\t1.065\nRHEL 6\t\t9.6\t\t67984\t\t0.941\nroot@pgload05g ~ #\n\n\n\n\n> \n>> The results from pg9?_perf_report.txt are attached. 
Note that in all cases some events were lost, i.e.:\n>> \n>> root@pgload05g ~ # perf report -g -i pg94_all.data >/tmp/pg94_perf_report.txt\n>> Failed to open [vsyscall], continuing without symbols\n>> Warning:\n>> Processed 537137 events and lost 7846 chunks!\n> \n> You can reduce the overhead by reducing the sampling frequency, e.g. by\n> specifying -F 300.\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n> -- \n> Sent via pgsql-hackers mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-hackers\n\n\n--\nMay the force be with you…\nhttps://simply.name", "msg_date": "Mon, 13 Jun 2016 21:58:30 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] 9.4 -> 9.5 regression with queries through pgbouncer on\n RHEL 6" }, { "msg_contents": "> 13 июня 2016 г., в 21:58, Vladimir Borodin <[email protected]> написал(а):\n> \n>> \n>> 13 июня 2016 г., в 0:51, Andres Freund <[email protected] <mailto:[email protected]>> написал(а):\n>> \n>> Hi Vladimir,\n>> \n>> Thanks for these reports.\n>> \n>> On 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:\n>>> perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n>> \n>> Any chance you could redo the reports with --no-children --call-graph=fractal\n>> added? The mode that includes child overheads unfortunately makes the\n>> output hard to interpet/compare.\n> \n> Of course. Not sure if that is important but I upgraded perf for that (because --no-children option was introduced in ~3.16), so perf record and perf report were done with different perf versions.\n> \n> <pg94_perf_report.txt.gz>\n> <pg95_perf_report.txt.gz>\n> <pg96_perf_report.txt.gz>\n> \n> Also I’ve done the same test on same host (RHEL 6) but with 4.6 kernel/perf and writing perf data to /dev/shm for not loosing events. Perf report output is also attached but important thing is that the regression is not so significant:\n> \n> root@pgload05g ~ # uname -r\n> 4.6.0-1.el6.elrepo.x86_64\n> root@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled\n> 1\n> root@pgload05g ~ # /tmp/run.sh\n> RHEL 6\t\t9.4\t\t71634\t\t0.893\n> RHEL 6\t\t9.5\t\t54005\t\t1.185\n> RHEL 6\t\t9.6\t\t65550\t\t0.976\n> root@pgload05g ~ # echo 0 >/proc/sys/kernel/sched_autogroup_enabled\n> root@pgload05g ~ # /tmp/run.sh\n> RHEL 6\t\t9.4\t\t73041\t\t0.876\n> RHEL 6\t\t9.5\t\t60105\t\t1.065\n> RHEL 6\t\t9.6\t\t67984\t\t0.941\n> root@pgload05g ~ #\n> \n> <pg96_perf_report_4.6.txt.gz>\n> <pg95_perf_report_4.6.txt.gz>\n> <pg94_perf_report_4.6.txt.gz>\n\nAndres, is there any chance that you would find time to look at those results? Are they actually useful?\n\n> \n> \n>> \n>>> The results from pg9?_perf_report.txt are attached. Note that in all cases some events were lost, i.e.:\n>>> \n>>> root@pgload05g ~ # perf report -g -i pg94_all.data >/tmp/pg94_perf_report.txt\n>>> Failed to open [vsyscall], continuing without symbols\n>>> Warning:\n>>> Processed 537137 events and lost 7846 chunks!\n>> \n>> You can reduce the overhead by reducing the sampling frequency, e.g. 
by\n>> specifying -F 300.\n>> \n>> Greetings,\n>> \n>> Andres Freund\n>> \n>> \n>> -- \n>> Sent via pgsql-hackers mailing list ([email protected] <mailto:[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-hackers <http://www.postgresql.org/mailpref/pgsql-hackers>\n> \n> \n> --\n> May the force be with you…\n> https://simply.name <https://simply.name/>\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n13 июня 2016 г., в 21:58, Vladimir Borodin <[email protected]> написал(а):13 июня 2016 г., в 0:51, Andres Freund <[email protected]> написал(а):Hi Vladimir,Thanks for these reports.On 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txtAny chance you could redo the reports with --no-children --call-graph=fractaladded? The mode that includes child overheads unfortunately makes theoutput hard to interpet/compare.Of course. Not sure if that is important but I upgraded perf for that (because --no-children option was introduced in ~3.16), so perf record and perf report were done with different perf versions.<pg94_perf_report.txt.gz><pg95_perf_report.txt.gz><pg96_perf_report.txt.gz>Also I’ve done the same test on same host (RHEL 6) but with 4.6 kernel/perf and writing perf data to /dev/shm for not loosing events. Perf report output is also attached but important thing is that the regression is not so significant:root@pgload05g ~ # uname -r4.6.0-1.el6.elrepo.x86_64root@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled1root@pgload05g ~ # /tmp/run.shRHEL 6 9.4 71634 0.893RHEL 6 9.5 54005 1.185RHEL 6 9.6 65550 0.976root@pgload05g ~ # echo 0 >/proc/sys/kernel/sched_autogroup_enabledroot@pgload05g ~ # /tmp/run.shRHEL 6 9.4 73041 0.876RHEL 6 9.5 60105 1.065RHEL 6 9.6 67984 0.941root@pgload05g ~ #<pg96_perf_report_4.6.txt.gz><pg95_perf_report_4.6.txt.gz><pg94_perf_report_4.6.txt.gz>Andres, is there any chance that you would find time to look at those results? Are they actually useful?The results from pg9?_perf_report.txt are attached. Note that in all cases some events were lost, i.e.:root@pgload05g ~ # perf report -g -i pg94_all.data >/tmp/pg94_perf_report.txtFailed to open [vsyscall], continuing without symbolsWarning:Processed 537137 events and lost 7846 chunks!You can reduce the overhead by reducing the sampling frequency, e.g. byspecifying -F 300.Greetings,Andres Freund-- Sent via pgsql-hackers mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-hackers--May the force be with you…https://simply.name\n--May the force be with you…https://simply.name", "msg_date": "Mon, 4 Jul 2016 16:30:51 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] 9.4 -> 9.5 regression with queries through pgbouncer on\n RHEL 6" }, { "msg_contents": "On 2016-06-13 21:58:30 +0300, Vladimir Borodin wrote:\n> \n> > 13 июня 2016 г., в 0:51, Andres Freund <[email protected]> написал(а):\n> > \n> > Hi Vladimir,\n> > \n> > Thanks for these reports.\n> > \n> > On 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:\n> >> perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n> > \n> > Any chance you could redo the reports with --no-children --call-graph=fractal\n> > added? The mode that includes child overheads unfortunately makes the\n> > output hard to interpet/compare.\n> \n> Of course. 
Not sure if that is important but I upgraded perf for that (because --no-children option was introduced in ~3.16), so perf record and perf report were done with different perf versions.\n> \n> \n> \n> Also I’ve done the same test on same host (RHEL 6) but with 4.6 kernel/perf and writing perf data to /dev/shm for not loosing events. Perf report output is also attached but important thing is that the regression is not so significant:\n> \n> root@pgload05g ~ # uname -r\n> 4.6.0-1.el6.elrepo.x86_64\n> root@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled\n> 1\n> root@pgload05g ~ # /tmp/run.sh\n> RHEL 6\t\t9.4\t\t71634\t\t0.893\n> RHEL 6\t\t9.5\t\t54005\t\t1.185\n> RHEL 6\t\t9.6\t\t65550\t\t0.976\n> root@pgload05g ~ # echo 0 >/proc/sys/kernel/sched_autogroup_enabled\n> root@pgload05g ~ # /tmp/run.sh\n> RHEL 6\t\t9.4\t\t73041\t\t0.876\n> RHEL 6\t\t9.5\t\t60105\t\t1.065\n> RHEL 6\t\t9.6\t\t67984\t\t0.941\n> root@pgload05g ~ #\n\nHm. Have you measured how large the slowdown is if you connect via tcp\nto pgbouncer, but have pgbouncer connect to postgres via unix sockets?\n\nAndres\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 14 Jul 2016 11:48:57 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] 9.4 -> 9.5 regression with queries through\n pgbouncer on RHEL 6" }, { "msg_contents": "On 2016-07-04 16:30:51 +0300, Vladimir Borodin wrote:\n>\n> > 13 июня 2016 г., в 21:58, Vladimir Borodin <[email protected]> написал(а):\n> >\n> >>\n> >> 13 июня 2016 г., в 0:51, Andres Freund <[email protected] <mailto:[email protected]>> написал(а):\n> >>\n> >> Hi Vladimir,\n> >>\n> >> Thanks for these reports.\n> >>\n> >> On 2016-06-13 00:42:19 +0300, Vladimir Borodin wrote:\n> >>> perf report -g -i pg9?_all.data >/tmp/pg9?_perf_report.txt\n> >>\n> >> Any chance you could redo the reports with --no-children --call-graph=fractal\n> >> added? The mode that includes child overheads unfortunately makes the\n> >> output hard to interpet/compare.\n> >\n> > Of course. Not sure if that is important but I upgraded perf for that (because --no-children option was introduced in ~3.16), so perf record and perf report were done with different perf versions.\n> >\n> > <pg94_perf_report.txt.gz>\n> > <pg95_perf_report.txt.gz>\n> > <pg96_perf_report.txt.gz>\n> >\n> > Also I’ve done the same test on same host (RHEL 6) but with 4.6\n> > kernel/perf and writing perf data to /dev/shm for not loosing\n> > events. Perf report output is also attached but important thing is\n> > that the regression is not so significant:\n\nFWIW, you can instead use -F 300 or something to reduce the sampling\nfrequency.\n\n> > root@pgload05g ~ # uname -r\n> > 4.6.0-1.el6.elrepo.x86_64\n> > root@pgload05g ~ # cat /proc/sys/kernel/sched_autogroup_enabled\n> > 1\n> > root@pgload05g ~ # /tmp/run.sh\n> > RHEL 6\t\t9.4\t\t71634\t\t0.893\n> > RHEL 6\t\t9.5\t\t54005\t\t1.185\n> > RHEL 6\t\t9.6\t\t65550\t\t0.976\n> > root@pgload05g ~ # echo 0 >/proc/sys/kernel/sched_autogroup_enabled\n> > root@pgload05g ~ # /tmp/run.sh\n> > RHEL 6\t\t9.4\t\t73041\t\t0.876\n> > RHEL 6\t\t9.5\t\t60105\t\t1.065\n> > RHEL 6\t\t9.6\t\t67984\t\t0.941\n> > root@pgload05g ~ #\n> >\n> > <pg96_perf_report_4.6.txt.gz>\n> > <pg95_perf_report_4.6.txt.gz>\n> > <pg94_perf_report_4.6.txt.gz>\n>\n> Andres, is there any chance that you would find time to look at those results? 
Are they actually useful?\n\nI don't really see anything suspicious in the profile. This looks more\nlike a kernel scheduler issue than a postgres bottleneck one. It seems\nthat somehow using nonblocking IO (started in 9.5) causes scheduling\nissues when pgbouncer is also local.\n\nCould you do perf stat -ddd -a sleep 10 or something during both runs? I\nsuspect that the context switch ratios will be quite different.\n\nAndres\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Thu, 14 Jul 2016 11:53:38 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] 9.4 -> 9.5 regression with queries through\n pgbouncer on RHEL 6" } ]
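Taken together, the messages above amount to a small profiling recipe. A minimal sketch, assembled from the commands quoted in this thread (the ports, database names, and file paths are the thread's own and are assumptions for any other setup; the thread also rebuilt PostgreSQL with CFLAGS='-O2 -fno-omit-frame-pointer' before profiling):

# Same read-only workload, once through pgbouncer (port 6432 here) and once
# directly against postgres (port 5432 here):
pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=6432 dbname=pg95'
pgbench -U postgres -T 60 -j 64 -c 64 -S -n 'host=localhost port=5432'

# Sample all CPUs during a run. -F 300 lowers the sampling frequency and
# /dev/shm avoids disk overload while recording, both of which reduce lost chunks:
perf record -g --call-graph=dwarf -F 300 -a -o /dev/shm/pg95_all.data sleep 10

# Report without child overheads so that profiles are easier to compare:
perf report --no-children --call-graph=fractal -i /dev/shm/pg95_all.data

# Compare scheduler behaviour (context switches in particular) between runs:
perf stat -ddd -a sleep 10

Running the perf stat line during both the pgbouncer run and the direct run makes the context-switch comparison asked for above directly visible.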
[ { "msg_contents": "We are starting some testing in AWS, with EC2, EBS backed setups.\n\nWhat I found interesting today, was a single EBS 1TB volume, gave me\nsomething like 108MB/s throughput, however a RAID10 (4 250GB EBS\nvolumes), gave me something like 31MB/s (test after test after test).\n\nI'm wondering what you folks are using inside of Amazon (not\ninterested in RDS at the moment).\n\nThanks\nTory\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 May 2016 15:34:43 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "Testing in AWS, EBS" }, { "msg_contents": "There are many factors that can affect EBS performance. For example, the\ntype of EBS volume, the instance type, whether EBS-optimized is turned on\nor not, etc.\n\nWithout the details, then there is no apples to apples comparsion...\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\n\n\nOn Wed, May 25, 2016 at 6:34 PM, Tory M Blue <[email protected]> wrote:\n>\n> We are starting some testing in AWS, with EC2, EBS backed setups.\n>\n> What I found interesting today, was a single EBS 1TB volume, gave me\n> something like 108MB/s throughput, however a RAID10 (4 250GB EBS\n> volumes), gave me something like 31MB/s (test after test after test).\n>\n> I'm wondering what you folks are using inside of Amazon (not\n> interested in RDS at the moment).\n>\n> Thanks\n> Tory\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nThere are many factors that can affect EBS performance. For example, the type of EBS volume, the instance type, whether EBS-optimized is turned on or not, etc.Without the details, then there is no apples to apples comparsion...Rayson==================================================Open Grid Scheduler - The Official Open Source Grid Enginehttp://gridscheduler.sourceforge.net/http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.htmlOn Wed, May 25, 2016 at 6:34 PM, Tory M Blue <[email protected]> wrote:>> We are starting some testing in AWS, with EC2, EBS backed setups.>> What I found interesting today, was a single EBS 1TB volume, gave me> something like 108MB/s throughput, however a RAID10 (4 250GB EBS> volumes), gave me something like 31MB/s (test after test after test).>> I'm wondering what you folks are using inside of Amazon (not> interested in RDS at the moment).>> Thanks> Tory>>> --> Sent via pgsql-performance mailing list ([email protected])> To make changes to your subscription:> http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 25 May 2016 19:02:51 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Indeed, old-style disk EBS vs new-style SSd EBS.\n\nBe aware that EBS traffic is considered as part of the total \"network\"\ntraffic, and each type of instance has different limits on maximum network\nthroughput. 
Those differences are very significant; do tests on the same volume\nbetween two different types of instances, both with enough cpu and memory for\nthe I/O to be the bottleneck, and you will be surprised!\n\n\nOn 2016-05-25 17:02, Rayson Ho wrote:\n> There are many factors that can affect EBS performance. For example, the type\n> of EBS volume, the instance type, whether EBS-optimized is turned on or not, etc.\n> \n> Without the details, then there is no apples to apples comparison...\n> \n> Rayson\n> \n> ==================================================\n> Open Grid Scheduler - The Official Open Source Grid Engine\n> http://gridscheduler.sourceforge.net/\n> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n> \n> \n> \n> On Wed, May 25, 2016 at 6:34 PM, Tory M Blue <[email protected]\n> <mailto:[email protected]>> wrote:\n>>\n>> We are starting some testing in AWS, with EC2, EBS backed setups.\n>>\n>> What I found interesting today, was a single EBS 1TB volume, gave me\n>> something like 108MB/s throughput, however a RAID10 (4 250GB EBS\n>> volumes), gave me something like 31MB/s (test after test after test).\n>>\n>> I'm wondering what you folks are using inside of Amazon (not\n>> interested in RDS at the moment).\n>>\n>> Thanks\n>> Tory\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n> <mailto:[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 25 May 2016 17:56:28 -0600", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated\nbandwidth to EBS.\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\n\n\nOn Wed, May 25, 2016 at 7:56 PM, Yves Dorfsman <[email protected]> wrote:\n\n> Indeed, old-style disk EBS vs new-style SSD EBS.\n>\n> Be aware that EBS traffic is considered as part of the total \"network\"\n> traffic, and each type of instance has different limits on maximum network\n> throughput. 
For example, the\n> type\n> > of EBS volume, the instance type, whether EBS-optimized is turned on or\n> not, etc.\n> >\n> > Without the details, then there is no apples to apples comparsion...\n> >\n> > Rayson\n> >\n> > ==================================================\n> > Open Grid Scheduler - The Official Open Source Grid Engine\n> > http://gridscheduler.sourceforge.net/\n> > http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n> >\n> >\n> >\n> > On Wed, May 25, 2016 at 6:34 PM, Tory M Blue <[email protected]\n> > <mailto:[email protected]>> wrote:\n> >>\n> >> We are starting some testing in AWS, with EC2, EBS backed setups.\n> >>\n> >> What I found interesting today, was a single EBS 1TB volume, gave me\n> >> something like 108MB/s throughput, however a RAID10 (4 250GB EBS\n> >> volumes), gave me something like 31MB/s (test after test after test).\n> >>\n> >> I'm wondering what you folks are using inside of Amazon (not\n> >> interested in RDS at the moment).\n> >>\n> >> Thanks\n> >> Tory\n> >>\n> >>\n> >> --\n> >> Sent via pgsql-performance mailing list (\n> [email protected]\n> > <mailto:[email protected]>)\n> >> To make changes to your subscription:\n> >> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nActually, when \"EBS-Optimized\" is on, then the instance gets dedicated bandwidth to EBS.Rayson==================================================Open Grid Scheduler - The Official Open Source Grid Enginehttp://gridscheduler.sourceforge.net/http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\nOn Wed, May 25, 2016 at 7:56 PM, Yves Dorfsman <[email protected]> wrote:Indeed, old-style disk EBS vs new-style SSd EBS.\n\nBe aware that EBS traffic is considered as part of the total \"network\"\ntraffic, and each type of instance has different limits on maximum network\nthroughput. Those difference are very significant, do tests on the same volume\nbetween two different type of instances, both with enough cpu and memory for\nthe I/O to be the bottleneck, you will be surprised!\n\n\nOn 2016-05-25 17:02, Rayson Ho wrote:\n> There are many factors that can affect EBS performance. 
For example, the type\n> of EBS volume, the instance type, whether EBS-optimized is turned on or not, etc.\n>\n> Without the details, then there is no apples to apples comparsion...\n>\n> Rayson\n>\n> ==================================================\n> Open Grid Scheduler - The Official Open Source Grid Engine\n> http://gridscheduler.sourceforge.net/\n> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n>\n>\n>\n> On Wed, May 25, 2016 at 6:34 PM, Tory M Blue <[email protected]\n> <mailto:[email protected]>> wrote:\n>>\n>> We are starting some testing in AWS, with EC2, EBS backed setups.\n>>\n>> What I found interesting today, was a single EBS 1TB volume, gave me\n>> something like 108MB/s throughput, however a RAID10 (4 250GB EBS\n>> volumes), gave me something like 31MB/s (test after test after test).\n>>\n>> I'm wondering what you folks are using inside of Amazon (not\n>> interested in RDS at the moment).\n>>\n>> Thanks\n>> Tory\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n> <mailto:[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 25 May 2016 21:08:20 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Hi.\n\nAWS EBS its a really painful story....\nHow was created volumes for RAID? From snapshots?\nIf you want to get the best performance from EBS it needs to pre-warmed.\n\nHere is the tutorial how to achieve that:\nhttp://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html\n\nAlso you should read this one if you want to get really great for\nperformance:\nhttp://hatim.eu/2014/05/24/leveraging-ssd-ephemeral-disks-in-ec2-part-1/\n\nGood luck!\n\n2016-05-26 1:34 GMT+03:00 Tory M Blue <[email protected]>:\n\n> We are starting some testing in AWS, with EC2, EBS backed setups.\n>\n> What I found interesting today, was a single EBS 1TB volume, gave me\n> something like 108MB/s throughput, however a RAID10 (4 250GB EBS\n> volumes), gave me something like 31MB/s (test after test after test).\n>\n> I'm wondering what you folks are using inside of Amazon (not\n> interested in RDS at the moment).\n>\n> Thanks\n> Tory\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi.AWS EBS its a really painful story....How was created volumes for RAID? From snapshots? 
If you want to get the best performance from EBS it needs to pre-warmed.Here is the tutorial how to achieve that:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.htmlAlso you should read this one if you want to get really great for performance:http://hatim.eu/2014/05/24/leveraging-ssd-ephemeral-disks-in-ec2-part-1/Good luck!2016-05-26 1:34 GMT+03:00 Tory M Blue <[email protected]>:We are starting some testing in AWS, with EC2, EBS backed setups.\n\nWhat I found interesting today, was a single EBS 1TB volume, gave me\nsomething like 108MB/s throughput, however a RAID10 (4 250GB EBS\nvolumes), gave me something like 31MB/s (test after test after test).\n\nI'm wondering what you folks are using inside of Amazon (not\ninterested in RDS at the moment).\n\nThanks\nTory\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 26 May 2016 11:41:55 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "On 2016-05-25 19:08, Rayson Ho wrote:\n> Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated\n> bandwidth to EBS.\n\nHadn't realised that, thanks.\nIs the EBS bandwidth then somewhat limited depending on the type of instance too?\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 May 2016 06:53:07 -0600", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Yes, the smaller instance you choose - the slower ebs will be.\nEBS lives separately from EC2, they are communicating via network. So small\ninstance = low network bandwidth = poorer disk performance.\nBut still strong recommendation to pre-warm your ebs in any case,\nespecially if they created from snapshot.\n\n2016-05-26 15:53 GMT+03:00 Yves Dorfsman <[email protected]>:\n\n> On 2016-05-25 19:08, Rayson Ho wrote:\n> > Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated\n> > bandwidth to EBS.\n>\n> Hadn't realised that, thanks.\n> Is the EBS bandwidth then somewhat limited depending on the type of\n> instance too?\n>\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYes, the smaller instance you choose - the slower ebs will be. EBS lives separately from EC2, they are communicating via network. 
So small instance = low network bandwidth = poorer disk performance.But still strong recommendation to pre-warm your ebs in any case, especially if they created from snapshot.2016-05-26 15:53 GMT+03:00 Yves Dorfsman <[email protected]>:On 2016-05-25 19:08, Rayson Ho wrote:\n> Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated\n> bandwidth to EBS.\n\nHadn't realised that, thanks.\nIs the EBS bandwidth then somewhat limited depending on the type of instance too?\n\n--\nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 26 May 2016 16:00:23 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "On Thu, May 26, 2016 at 9:00 AM, Artem Tomyuk <[email protected]> wrote:\n>\n> But still strong recommendation to pre-warm your ebs in any case,\nespecially if they created from snapshot.\n\nThat used to be true. However, at AWS re:Invent 2015, Amazon engineers said\nthat EBS pre-warming is not needed anymore.\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\n\n\n> 2016-05-26 15:53 GMT+03:00 Yves Dorfsman <[email protected]>:\n>>\n>> On 2016-05-25 19:08, Rayson Ho wrote:\n>> > Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated\n>> > bandwidth to EBS.\n>>\n>> Hadn't realised that, thanks.\n>> Is the EBS bandwidth then somewhat limited depending on the type of\ninstance too?\n>>\n>> --\n>> http://yves.zioup.com\n>> gpg: 4096R/32B0F416\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected]\n)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n\nOn Thu, May 26, 2016 at 9:00 AM, Artem Tomyuk <[email protected]> wrote:>> But still strong recommendation to pre-warm your ebs in any case, especially if they created from snapshot.That used to be true. 
However, at AWS re:Invent 2015, Amazon engineers said that EBS pre-warming is not needed anymore.Rayson==================================================Open Grid Scheduler - The Official Open Source Grid Enginehttp://gridscheduler.sourceforge.net/http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html> 2016-05-26 15:53 GMT+03:00 Yves Dorfsman <[email protected]>:>>>> On 2016-05-25 19:08, Rayson Ho wrote:>> > Actually, when \"EBS-Optimized\" is on, then the instance gets dedicated>> > bandwidth to EBS.>>>> Hadn't realised that, thanks.>> Is the EBS bandwidth then somewhat limited depending on the type of instance too?>>>> -->> http://yves.zioup.com>> gpg: 4096R/32B0F416>>>>>>>> -->> Sent via pgsql-performance mailing list ([email protected])>> To make changes to your subscription:>> http://www.postgresql.org/mailpref/pgsql-performance>>", "msg_date": "Thu, 26 May 2016 09:50:23 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>:\n\n> Amazon engineers said that EBS pre-warming is not needed anymore.\n\n\nbut still if you will skip this step you wont get much performance on ebs\ncreated from snapshot.\n\n2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>: Amazon engineers said that EBS pre-warming is not needed anymore.but still if you will skip this step you wont get much performance on ebs created from snapshot.", "msg_date": "Thu, 26 May 2016 17:00:26 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]> wrote:\n\n>\n> 2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>:\n>\n>> Amazon engineers said that EBS pre-warming is not needed anymore.\n>\n>\n> but still if you will skip this step you wont get much performance on ebs\n> created from snapshot.\n>\n\n\nIIRC, that's not what Amazon engineers said. Is that from your personal\nexperience, and if so, when did you do the test??\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\nOn Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]> wrote:2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>: Amazon engineers said that EBS pre-warming is not needed anymore.but still if you will skip this step you wont get much performance on ebs created from snapshot.IIRC, that's not what Amazon engineers said. Is that from your personal experience, and if so, when did you do the test??Rayson==================================================Open Grid Scheduler - The Official Open Source Grid Enginehttp://gridscheduler.sourceforge.net/http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html", "msg_date": "Thu, 26 May 2016 10:47:23 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Please look at the official doc.\n\n\"New EBS volumes receive their maximum performance the moment that they are\navailable and do not require initialization (formerly known as\npre-warming). 
However, storage blocks on volumes that were restored from\nsnapshots must be initialized (pulled down from Amazon S3 and written to\nthe volume) before you can access the block\"\n\nQuotation from:\nhttp://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html\n\n2016-05-26 17:47 GMT+03:00 Rayson Ho <[email protected]>:\n\n> On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]>\n> wrote:\n>\n>>\n>> 2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>:\n>>\n>>> Amazon engineers said that EBS pre-warming is not needed anymore.\n>>\n>>\n>> but still if you will skip this step you wont get much performance on ebs\n>> created from snapshot.\n>>\n>\n>\n> IIRC, that's not what Amazon engineers said. Is that from your personal\n> experience, and if so, when did you do the test??\n>\n> Rayson\n>\n> ==================================================\n> Open Grid Scheduler - The Official Open Source Grid Engine\n> http://gridscheduler.sourceforge.net/\n> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n>\n>\n>\n>\n\nPlease look at the official doc.\"New EBS volumes receive their maximum performance the moment that they are available and do not require initialization (formerly known as pre-warming). However, storage blocks on volumes that were restored from snapshots must be initialized (pulled down from Amazon S3 and written to the volume) before you can access the block\"Quotation from:http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html2016-05-26 17:47 GMT+03:00 Rayson Ho <[email protected]>:On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]> wrote:2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>: Amazon engineers said that EBS pre-warming is not needed anymore.but still if you will skip this step you wont get much performance on ebs created from snapshot.IIRC, that's not what Amazon engineers said. 
Is that from your personal experience, and if so, when did you do the test??Rayson==================================================Open Grid Scheduler - The Official Open Source Grid Enginehttp://gridscheduler.sourceforge.net/http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html", "msg_date": "Thu, 26 May 2016 17:52:29 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Thanks Artem.\n\nSo no EBS pre-warming does not apply to EBS volumes created from snapshots.\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\n\nOn Thu, May 26, 2016 at 10:52 AM, Artem Tomyuk <[email protected]> wrote:\n> Please look at the official doc.\n>\n> \"New EBS volumes receive their maximum performance the moment that they are\n> available and do not require initialization (formerly known as pre-warming).\n> However, storage blocks on volumes that were restored from snapshots must be\n> initialized (pulled down from Amazon S3 and written to the volume) before\n> you can access the block\"\n>\n> Quotation from:\n> http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html\n>\n> 2016-05-26 17:47 GMT+03:00 Rayson Ho <[email protected]>:\n>>\n>> On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]>\n>> wrote:\n>>>\n>>>\n>>> 2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>:\n>>>>\n>>>> Amazon engineers said that EBS pre-warming is not needed anymore.\n>>>\n>>>\n>>> but still if you will skip this step you wont get much performance on ebs\n>>> created from snapshot.\n>>\n>>\n>>\n>> IIRC, that's not what Amazon engineers said. Is that from your personal\n>> experience, and if so, when did you do the test??\n>>\n>> Rayson\n>>\n>> ==================================================\n>> Open Grid Scheduler - The Official Open Source Grid Engine\n>> http://gridscheduler.sourceforge.net/\n>> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n>>\n>>\n>>\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 May 2016 10:54:54 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Why no? 
Or you missed something?\n\nIt should be done on every EBS restored from snapshot.\n\nIs that from your personal experience, and if so, when did you do the test??\n\nYes, we are using this practice, because as a part of our production load we\nare using auto scale groups to create new instances, which are created\nfrom AMI, which stands on snapshots, so...\n\n\n\n\n\n\n\n\n2016-05-26 17:54 GMT+03:00 Rayson Ho <[email protected]>:\n\n> Thanks Artem.\n>\n> So no EBS pre-warming does not apply to EBS volumes created from snapshots.\n>\n> Rayson\n>\n> ==================================================\n> Open Grid Scheduler - The Official Open Source Grid Engine\n> http://gridscheduler.sourceforge.net/\n> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n>\n>\n> On Thu, May 26, 2016 at 10:52 AM, Artem Tomyuk <[email protected]>\n> wrote:\n> > Please look at the official doc.\n> >\n> > \"New EBS volumes receive their maximum performance the moment that they\n> are\n> > available and do not require initialization (formerly known as\n> pre-warming).\n> > However, storage blocks on volumes that were restored from snapshots\n> must be\n> > initialized (pulled down from Amazon S3 and written to the volume) before\n> > you can access the block\"\n> >\n> > Quotation from:\n> > http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html\n> >\n> > 2016-05-26 17:47 GMT+03:00 Rayson Ho <[email protected]>:\n> >>\n> >> On Thu, May 26, 2016 at 10:00 AM, Artem Tomyuk <[email protected]>\n> >> wrote:\n> >>>\n> >>>\n> >>> 2016-05-26 16:50 GMT+03:00 Rayson Ho <[email protected]>:\n> >>>>\n> >>>> Amazon engineers said that EBS pre-warming is not needed anymore.\n> >>>\n> >>>\n> >>> but still if you will skip this step you wont get much performance on\n> ebs\n> >>> created from snapshot.\n> >>\n> >>\n> >>\n> >> IIRC, that's not what Amazon engineers said. Is that from your personal\n> >> experience, and if so, when did you do the test??\n> >>\n> >> Rayson\n> >>\n> >> ==================================================\n> >> Open Grid Scheduler - The Official Open Source Grid Engine\n> >> http://gridscheduler.sourceforge.net/\n> >> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n> >>\n> >>\n> >>\n> >\n
", "msg_date": "Thu, 26 May 2016 18:03:38 +0300", "msg_from": "Artem Tomyuk <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "On 2016-05-26 09:03, Artem Tomyuk wrote:\n> Why no? 
Or you missed something?\n\nI think Rayson is correct, but the double negative makes it hard to read:\n\n\"So no EBS pre-warming does not apply to EBS volumes created from snapshots.\"\n\nWhich I interpret as:\nSo, \"no EBS pre-warming\", does not apply to EBS volumes created from snapshots.\n\nWhich is correct, you still have to warm your EBS when created from snapshots (to get the data from S3 to the filesystem).\n\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416 \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 May 2016 09:41:06 -0600", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" }, { "msg_contents": "Thanks Yves for the clarification!\n\nIt used to be very important to pre-warm EBS before running benchmarks\nin order to get consistent results.\n\nThen at re:Invent 2015, the AWS engineers said that it is not needed\nanymore, which IMO is a lot less work for us to do benchmarking in\nAWS, because pre-warming a multi-TB EBS vol is very time consuming,\nand the I/Os were not free.\n\nRayson\n\n==================================================\nOpen Grid Scheduler - The Official Open Source Grid Engine\nhttp://gridscheduler.sourceforge.net/\nhttp://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html\n\n\nOn Thu, May 26, 2016 at 11:41 AM, Yves Dorfsman <[email protected]> wrote:\n> On 2016-05-26 09:03, Artem Tomyuk wrote:\n>> Why no? Or you missed something?\n>\n> I think Rayson is correct, but the double negative makes it hard to read:\n>\n> \"So no EBS pre-warming does not apply to EBS volumes created from snapshots.\"\n>\n> Which I interpret as:\n> So, \"no EBS pre-warming\", does not apply to EBS volumes created from snapshots.\n>\n> Which is correct, you still have to warm your EBS when created from snapshots (to get the data from S3 to the filesystem).\n>\n>\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 26 May 2016 17:26:27 -0400", "msg_from": "Rayson Ho <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Testing in AWS, EBS" } ]
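The initialization ("pre-warming") step argued about in this thread amounts to reading every block of a snapshot-restored volume once, so that the blocks are pulled down from S3. A minimal sketch in the spirit of the AWS document linked above; the device names are assumptions (check lsblk on your instance), and the fio variant is a commonly used alternative rather than a command taken from this thread:

# Touch every block once; only volumes restored from snapshots need this:
sudo dd if=/dev/xvdf of=/dev/null bs=1M

# A fio alternative that keeps more reads in flight, usually faster on
# large volumes:
sudo fio --filename=/dev/xvdf --rw=read --bs=128k --iodepth=32 \
    --ioengine=libaio --direct=1 --name=volume-initialize

# For a four-volume RAID10 like the one benchmarked at the top of the
# thread, a software RAID sketch with mdadm (device names again assumed):
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi

Initializing the member volumes (or reading /dev/md0 once after assembly) matters for the RAID10 numbers, since a single cold member can throttle the whole stripe.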
[ { "msg_contents": "Hello!\n\nI am just about to upgrade from PostgreSQL 8.4.20 to 9.2.15, but I'v run\ninto some huge performance issues. Both databases are configured the\nsame way (shared_buffers = 2GB, temp_buffers = 32MB). I have increased\nwork_mem on the 9.2 from 4MB to 64MB, but to no avail.\n\nNow, the query on 8.4:\nrt4=# EXPLAIN ANALYZE VERBOSE SELECT DISTINCT main.* FROM Users main\nCROSS JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id\n= main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON\n( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN\nCachedGroupMembers CachedGroupMembers_4 ON\n( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE\n((ACL_3.ObjectType = 'RT::Queue') OR (ACL_3.ObjectType = 'RT::System'\nAND ACL_3.ObjectId = 1)) AND (ACL_3.PrincipalId =\nCachedGroupMembers_4.GroupId) AND (ACL_3.PrincipalType = 'Group') AND\n(ACL_3.RightName = 'OwnTicket' OR ACL_3.RightName = 'SuperUser') AND\n(CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId\n= '4') AND (CachedGroupMembers_4.Disabled = '0') AND\n(Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User')\nAND (Principals_1.id != '1') ORDER BY main.Name ASC;\n \n\nQUERY\nPLAN \n \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n--------------------\n Unique (cost=19822.31..19843.46 rows=235 width=1084) (actual\ntime=6684.054..7118.015 rows=571 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.emailenc\noding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, ma\nin.address2, main.city, main.state, main.zip, main.country,\nmain.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated\n -> Sort (cost=19822.31..19822.90 rows=235 width=1084) (actual\ntime=6684.052..7085.835 rows=33310 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.em\nailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.addres\ns1, main.address2, main.city, main.state, main.zip, main.country,\nmain.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated\n Sort Key: main.name, main.id, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.\nemailencoding, main.webencoding, 
main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.addr\ness1, main.address2, main.city, main.state, main.zip, main.country,\nmain.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated\n Sort Method: external merge Disk: 7408kB\n -> Hash Join (cost=19659.66..19813.05 rows=235 width=1084)\n(actual time=3362.897..4080.600 rows=33310 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, m\nain.emailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.\naddress1, main.address2, main.city, main.state, main.zip, main.country,\nmain.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated\n Hash Cond: (acl_3.principalid =\ncachedgroupmembers_4.groupid)\n -> Bitmap Heap Scan on acl acl_3 (cost=30.04..145.27\nrows=494 width=4) (actual time=0.339..1.790 rows=528 loops=1)\n Output: acl_3.id, acl_3.principaltype,\nacl_3.principalid, acl_3.rightname, acl_3.objecttype, acl_3.objectid,\nacl_3.creator, acl_3.created, acl_3.lastupdatedby, acl_3.lastupdated\n Recheck Cond: ((((rightname)::text =\n'OwnTicket'::text) AND ((principaltype)::text = 'Group'::text)) OR\n(((rightname)::text = 'SuperUser'::text) AND ((principaltype)::text =\n'Group'::text)))\n Filter: (((objecttype)::text = 'RT::Queue'::text)\nOR (((objecttype)::text = 'RT::System'::text) AND (objectid = 1)))\n -> BitmapOr (cost=30.04..30.04 rows=529 width=0)\n(actual time=0.303..0.303 rows=0 loops=1)\n -> Bitmap Index Scan on acl1\n(cost=0.00..25.43 rows=518 width=0) (actual time=0.283..0.283 rows=524\nloops=1)\n Index Cond: (((rightname)::text =\n'OwnTicket'::text) AND ((principaltype)::text = 'Group'::text))\n -> Bitmap Index Scan on acl1\n(cost=0.00..4.36 rows=11 width=0) (actual time=0.020..0.020 rows=4\nloops=1)\n Index Cond: (((rightname)::text =\n'SuperUser'::text) AND ((principaltype)::text = 'Group'::text))\n -> Hash (cost=19615.48..19615.48 rows=1131 width=1088)\n(actual time=3301.001..3301.001 rows=949843 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.l\nang, main.emailencoding, main.webencoding, main.externalcontactinfoid,\nmain.contactinfosystem, main.externalauthid, main.authsystem,\nmain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone,\n main.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated, cachedgroupmembers_4.groupid\n -> Nested Loop (cost=24.59..19615.48 rows=1131\nwidth=1088) (actual time=0.540..1835.831 rows=949843 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, \nmain.lang, main.emailencoding, main.webencoding,\nmain.externalcontactinfoid, main.contactinfosystem, main.externalauthid,\nmain.authsystem, main.gecos, main.homephone, main.workphone,\nmain.mobilephone, main.pager\nphone, 
main.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.pgpkey, main.creator, main.created,\nmain.lastupdatedby, main.lastupdated, cachedgroupmembers_4.groupid\n -> Nested Loop (cost=18.63..8795.62 rows=41\nwidth=1092) (actual time=0.438..22.198 rows=674 loops=1)\n Output: main.id, main.name,\nmain.password, main.authtoken, main.comments, main.signature,\nmain.emailaddress, main.freeformcontactinfo, main.organization,\nmain.realname, main.nick\nname, main.lang, main.emailencoding, main.webencoding,\nmain.externalcontactinfoid, main.contactinfosystem, main.externalauthid,\nmain.authsystem, main.gecos, main.homephone, main.workphone,\nmain.mobilephone, main\n.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.pgpkey, main.creator,\nmain.created, main.lastupdatedby, main.lastupdated,\ncachedgroupmembers_2.member\nid, principals_1.id\n -> Nested Loop (cost=18.63..8459.04\nrows=41 width=8) (actual time=0.381..13.384 rows=674 loops=1)\n Output: principals_1.id,\ncachedgroupmembers_2.memberid\n -> Bitmap Heap Scan on\ncachedgroupmembers cachedgroupmembers_2 (cost=18.63..2436.95 rows=669\nwidth=4) (actual time=0.308..1.973 rows=675 loops=1)\n Output:\ncachedgroupmembers_2.id, cachedgroupmembers_2.groupid,\ncachedgroupmembers_2.memberid, cachedgroupmembers_2.via,\ncachedgroupmembers_2.immediateparentid, cached\ngroupmembers_2.disabled\n Recheck Cond: (groupid = 4)\n Filter: (disabled = 0)\n -> Bitmap Index Scan on\ncachedgroupmembers3 (cost=0.00..18.46 rows=669 width=0) (actual\ntime=0.223..0.223 rows=675 loops=1)\n Index Cond: (groupid\n= 4)\n -> Index Scan using\nprincipals_pkey on principals principals_1 (cost=0.00..8.99 rows=1\nwidth=4) (actual time=0.015..0.016 rows=1 loops=675)\n Output: principals_1.id,\nprincipals_1.principaltype, principals_1.objectid, principals_1.disabled\n Index Cond:\n(principals_1.id = cachedgroupmembers_2.memberid)\n Filter: ((principals_1.id\n<> 1) AND (principals_1.disabled = 0) AND\n((principals_1.principaltype)::text = 'User'::text))\n -> Index Scan using users_pkey on\nusers main (cost=0.00..8.20 rows=1 width=1084) (actual\ntime=0.010..0.011 rows=1 loops=674)\n Output: main.id, main.name,\nmain.password, main.authtoken, main.comments, main.signature,\nmain.emailaddress, main.freeformcontactinfo, main.organization,\nmain.realname, mai\nn.nickname, main.lang, main.emailencoding, main.webencoding,\nmain.externalcontactinfoid, main.contactinfosystem, main.externalauthid,\nmain.authsystem, main.gecos, main.homephone, main.workphone,\nmain.mobilephone\n, main.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.pgpkey, main.creator,\nmain.created, main.lastupdatedby, main.lastupdated\n Index Cond: (main.id =\nprincipals_1.id)\n -> Bitmap Heap Scan on cachedgroupmembers\ncachedgroupmembers_4 (cost=5.96..263.05 rows=68 width=8) (actual\ntime=0.464..2.233 rows=1409 loops=674)\n Output: cachedgroupmembers_4.id,\ncachedgroupmembers_4.groupid, cachedgroupmembers_4.memberid,\ncachedgroupmembers_4.via, cachedgroupmembers_4.immediateparentid,\ncachedgroupmembers\n_4.disabled\n Recheck Cond:\n(cachedgroupmembers_4.memberid = main.id)\n Filter: (cachedgroupmembers_4.disabled\n= 0)\n -> Bitmap Index Scan on\ncachedgroupmembers2 (cost=0.00..5.95 rows=68 width=0) (actual\ntime=0.286..0.286 rows=1410 loops=674)\n Index Cond:\n(cachedgroupmembers_4.memberid = main.id)\n Total runtime: 7120.012 ms\n\nSame query on 9.2:\n 
QUERY\nPLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------------\n Unique (cost=8922.03..8922.11 rows=1 width=348) (actual\ntime=259143.677..259180.526 rows=572 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.gecos, m\nain.homephone, main.workphone, main.mobilephone, main.pagerphone,\nmain.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.creator, main.created,\nmain.lastupdatedby, main.\nlastupdated, main.smimecertificate\n -> Sort (cost=8922.03..8922.04 rows=1 width=348) (actual\ntime=259143.674..259145.919 rows=33209 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.ge\ncos, main.homephone, main.workphone, main.mobilephone, main.pagerphone,\nmain.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.creator, main.created,\nmain.lastupdatedby,\n main.lastupdated, main.smimecertificate\n Sort Key: main.name, main.id, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.\ngecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.lastupdatedb\ny, main.lastupdated, main.smimecertificate\n Sort Method: quicksort Memory: 13143kB\n -> Nested Loop (cost=47.83..8922.02 rows=1 width=348) (actual\ntime=388.225..258422.830 rows=33209 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, m\nain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.lastupda\ntedby, main.lastupdated, main.smimecertificate\n Join Filter: (cachedgroupmembers_4.groupid =\nacl_3.principalid)\n Rows Removed by Join Filter: 495425041\n -> Bitmap Heap Scan on public.acl acl_3\n(cost=30.07..144.35 rows=497 width=4) (actual time=0.305..9.489 rows=525\nloops=1)\n Output: acl_3.id, acl_3.principaltype,\nacl_3.principalid, acl_3.rightname, acl_3.objecttype, acl_3.objectid,\nacl_3.creator, acl_3.created, acl_3.lastupdatedby, acl_3.lastupdated\n Recheck Cond: ((((acl_3.rightname)::text =\n'OwnTicket'::text) AND ((acl_3.principaltype)::text = 'Group'::text)) OR\n(((acl_3.rightname)::text = 'SuperUser'::text) AND\n((acl_3.principaltype):\n:text = 'Group'::text)))\n Filter: (((acl_3.objecttype)::text =\n'RT::Queue'::text) OR (((acl_3.objecttype)::text = 'RT::System'::text)\nAND 
(acl_3.objectid = 1)))\n -> BitmapOr (cost=30.07..30.07 rows=531 width=0)\n(actual time=0.270..0.270 rows=0 loops=1)\n -> Bitmap Index Scan on acl1\n(cost=0.00..25.46 rows=521 width=0) (actual time=0.248..0.248 rows=521\nloops=1)\n Index Cond: (((acl_3.rightname)::text =\n'OwnTicket'::text) AND ((acl_3.principaltype)::text = 'Group'::text))\n -> Bitmap Index Scan on acl1\n(cost=0.00..4.36 rows=11 width=0) (actual time=0.020..0.020 rows=4\nloops=1)\n Index Cond: (((acl_3.rightname)::text =\n'SuperUser'::text) AND ((acl_3.principaltype)::text = 'Group'::text))\n -> Materialize (cost=17.76..8740.41 rows=5 width=352)\n(actual time=0.004..179.471 rows=943730 loops=525)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.l\nang, main.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.la\nstupdatedby, main.lastupdated, main.smimecertificate,\ncachedgroupmembers_4.groupid\n -> Nested Loop (cost=17.76..8740.38 rows=5\nwidth=352) (actual time=0.436..1595.962 rows=943730 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, \nmain.lang, main.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created, m\nain.lastupdatedby, main.lastupdated, main.smimecertificate,\ncachedgroupmembers_4.groupid\n -> Nested Loop (cost=14.17..8325.31 rows=3\nwidth=356) (actual time=0.392..27.201 rows=675 loops=1)\n Output: main.id, main.name,\nmain.password, main.authtoken, main.comments, main.signature,\nmain.emailaddress, main.freeformcontactinfo, main.organization,\nmain.realname, main.nick\nname, main.lang, main.gecos, main.homephone, main.workphone,\nmain.mobilephone, main.pagerphone, main.address1, main.address2,\nmain.city, main.state, main.zip, main.country, main.timezone,\nmain.creator, main.crea\nted, main.lastupdatedby, main.lastupdated, main.smimecertificate,\nprincipals_1.id, cachedgroupmembers_2.memberid\n -> Nested Loop (cost=14.17..8160.17\nrows=43 width=8) (actual time=0.351..16.568 rows=675 loops=1)\n Output: principals_1.id,\ncachedgroupmembers_2.memberid\n -> Bitmap Heap Scan on\npublic.cachedgroupmembers cachedgroupmembers_2 (cost=14.17..2431.45\nrows=669 width=4) (actual time=0.303..2.098 rows=676 loops=1)\n Output:\ncachedgroupmembers_2.id, cachedgroupmembers_2.groupid,\ncachedgroupmembers_2.memberid, cachedgroupmembers_2.via,\ncachedgroupmembers_2.immediateparentid, cached\ngroupmembers_2.disabled\n Recheck Cond:\n(cachedgroupmembers_2.groupid = 4)\n Filter:\n(cachedgroupmembers_2.disabled = 0)\n -> Bitmap Index Scan on\ncachedgroupmembers2 (cost=0.00..14.00 rows=669 width=0) (actual\ntime=0.215..0.215 rows=676 loops=1)\n Index Cond:\n(cachedgroupmembers_2.groupid = 4)\n -> Index Scan using\nprincipals_pkey on public.principals principals_1 (cost=0.00..8.55\nrows=1 width=4) (actual time=0.019..0.020 rows=1 loops=676)\n Output: principals_1.id\n Index Cond:\n(principals_1.id = cachedgroupmembers_2.memberid)\n Filter: ((principals_1.id\n<> 1) AND (principals_1.disabled = 0) AND\n((principals_1.principaltype)::text = 'User'::text))\n Rows 
Removed by Filter: 0
                               ->  Index Scan using users_pkey on
public.users main  (cost=0.00..3.83 rows=1 width=348) (actual
time=0.014..0.015 rows=1 loops=675)
                                     Output: main.id, main.name,
main.password, main.authtoken, main.comments, main.signature,
main.emailaddress, main.freeformcontactinfo, main.organization,
main.realname, main.nickname, main.lang, main.gecos, main.homephone,
main.workphone, main.mobilephone, main.pagerphone, main.address1,
main.address2, main.city, main.state, main.zip, main.country,
main.timezone, main.creator, main.created, main.lastupdatedby,
main.lastupdated, main.smimecertificate
                                     Index Cond: (main.id =
principals_1.id)
                         ->  Bitmap Heap Scan on
public.cachedgroupmembers cachedgroupmembers_4  (cost=3.59..137.70
rows=66 width=8) (actual time=0.340..2.016 rows=1398 loops=675)
                               Output: cachedgroupmembers_4.id,
cachedgroupmembers_4.groupid, cachedgroupmembers_4.memberid,
cachedgroupmembers_4.via, cachedgroupmembers_4.immediateparentid,
cachedgroupmembers_4.disabled
                               Recheck Cond:
(cachedgroupmembers_4.memberid = principals_1.id)
                               Filter: (cachedgroupmembers_4.disabled
= 0)
                               Rows Removed by Filter: 0
                               ->  Bitmap Index Scan on
cachedgroupmembers1  (cost=0.00..3.58 rows=66 width=0) (actual
time=0.210..0.210 rows=1398 loops=675)
                                     Index Cond:
(cachedgroupmembers_4.memberid = principals_1.id)
 Total runtime: 259230.400 ms


9.2 chose a \"nested loop\" instead of a \"hash join\", with a massive
penalty: 250 seconds instead of 7-8 seconds.

I have tried to set \"enable_nestloop = off\" and then this query is down
to 4-5 seconds, but all other queries are slower.

How can I make PostgreSQL 9.2 run this query just as fast as 8.4 did?


 / Eskil



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 14:10:30 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problems with 9.2.15" }, { "msg_contents": "Johan Fredriksson <[email protected]> writes:
> I am just about to upgrade from PostgreSQL 8.4.20 to 9.2.15, but I've run
> into some huge performance issues.

The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
Did you remember to ANALYZE all the tables after migrating?  Maybe there
were some table-specific statistics targets that you forgot to transfer
over?  In any case, the 9.2 plan looks like garbage-in-garbage-out to
me :-( ... without estimates at least a little closer to reality, the
planner is unlikely to do anything very sane.

(BTW, I wonder why you are moving only to 9.2 and not something more
recent.)

			regards, tom lane


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 09:46:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "> The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> Did you remember to ANALYZE all the tables after migrating?  Maybe there
> were some table-specific statistics targets that you forgot to transfer
> over?

No, I did not. Honestly, I thought everything would be transferred with a
dump/restore procedure.
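For what it's worth, any non-default per-column statistics targets should
show up with a catalog query along these lines (untested sketch):

    -- attstattarget = -1 means 'use the default'; anything else was set explicitly
    SELECT attrelid::regclass AS table_name, attname, attstattarget
    FROM pg_attribute
    WHERE attstattarget <> -1 AND attnum > 0;
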
Unfortunately, running ANALYZE VERBOSE on all
involved tables did not really improve anything.

> In any case, the 9.2 plan looks like garbage-in-garbage-out to
> me :-( ... without estimates at least a little closer to reality, the
> planner is unlikely to do anything very sane.
>
> (BTW, I wonder why you are moving only to 9.2 and not something more
> recent.)

Well, 9.2.15 is what comes bundled with RHEL 7, so I decided to go with
that to avoid dependency issues. But I could install a fresher
version from scratch if that would solve my problem.


 / Eskil



-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 16:13:09 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": ">
> ...(BTW, I wonder why you are moving only to 9.2 and not something more
>> recent.)
>>
>
> Well, 9.2.15 is what comes bundled with RHEL 7, so I decided to go with
> that to avoid dependency issues. But I could install a fresher version
> from scratch if that would solve my problem.
>

Generally my first step is to get the latest stable directly from the
PostgreSQL Development Group, i.e.:
yum install
https://download.postgresql.org/pub/repos/yum/9.5/redhat/rhel-7-x86_64/pgdg-redhat95-9.5-2.noarch.rpm

Then I know I'm starting with the latest and greatest and will get critical
updates without worrying about any distribution packager delays.

Cheers,
Steve
", "msg_date": "Fri, 27 May 2016 07:45:45 -0700", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "> > I am just about to upgrade from PostgreSQL 8.4.20 to 9.2.15, but I've run
> > into some huge performance issues.
> 
> The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> Did you remember to ANALYZE all the tables after migrating?  Maybe there
> were some table-specific statistics targets that you forgot to transfer
> over?  In any case, the 9.2 plan looks like garbage-in-garbage-out to
> me :-( ... without estimates at least a little closer to reality, the
> planner is unlikely to do anything very sane.
> 
> (BTW, I wonder why you are moving only to 9.2 and not something more
> recent.)

You put me on the right track with your conclusion that the estimates
were off the chart. The quick-and-dirty fix \"DELETE FROM pg_statistic;\"
solved this problem.
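Spelled out, the blunt fix was along these lines (superuser session; a
sketch, not something to run casually on a production system):

    DELETE FROM pg_statistic;
    -- then re-ANALYZE, or let autovacuum repopulate statistics over time
    ANALYZE;
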
This database now have to build up sane estimates\nfrom scratch.\n\n\n / Eskil\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 May 2016 09:35:29 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "> > The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.\n> > Did you remember to ANALYZE all the tables after migrating? Maybe there\n> > were some table-specific statistics targets that you forgot to transfer\n> > over? In any case, the 9.2 plan looks like garbage-in-garbage-out to\n> > me :-( ... without estimates at least a little closer to reality, the\n> > planner is unlikely to do anything very sane.\n> > \n> > (BTW, I wonder why you are moving only to 9.2 and not something more\n> > recent.)\n> \n> You put me on the right track with your conclusion that the estimates\n> were off the chart. The quick-and-dirty fix \"DELETE FROM pg_statistic;\"\n> solved this problem. This database now have to build up sane estimates\n> from scratch.\n\nActually it took a VACUUM FULL; and DELETE FROM pg_statistic; followed\nby ANALYZE on all tables to get it right.\n\nCan someone please explain to me the difference between these two query\nplans:\n\nThe bad one:\n Unique (cost=6037.10..6037.18 rows=1 width=434) (actual\ntime=255608.588..255646.828 rows=572 loops=1)\n -> Sort (cost=6037.10..6037.11 rows=1 width=434) (actual\ntime=255608.583..255611.632 rows=33209 loops=1)\n Sort Method: quicksort Memory: 13143kB\n -> Nested Loop (cost=42.51..6037.09 rows=1 width=434) (actual\ntime=152.818..254886.674 rows=33209 loops=1)\n Join Filter: (cachedgroupmembers_4.groupid =\nacl_3.principalid)\n Rows Removed by Join Filter: 495425041\n -> Bitmap Heap Scan on public.acl acl_3\n(cost=30.07..144.35 rows=497 width=4) (actual time=0.284..8.184 rows=525\nloops=1)\n Recheck Cond: ((((acl_3.rightname)::text =\n'OwnTicket'::text) AND ((acl_3.principaltype)::text = 'Group'::text)) OR\n(((acl_3.rightname)::text = 'SuperUser'::text) AND\n((acl_3.principaltype):\n:text = 'Group'::text)))\n Filter: (((acl_3.objecttype)::text =\n'RT::Queue'::text) OR (((acl_3.objecttype)::text = 'RT::System'::text)\nAND (acl_3.objectid = 1)))\n -> BitmapOr (cost=30.07..30.07 rows=531 width=0)\n(actual time=0.249..0.249 rows=0 loops=1)\n -> Bitmap Index Scan on acl1\n(cost=0.00..25.46 rows=521 width=0) (actual time=0.233..0.233 rows=521\nloops=1)\n Index Cond: (((acl_3.rightname)::text =\n'OwnTicket'::text) AND ((acl_3.principaltype)::text = 'Group'::text))\n -> Bitmap Index Scan on acl1\n(cost=0.00..4.36 rows=11 width=0) (actual time=0.016..0.016 rows=4\nloops=1)\n Index Cond: (((acl_3.rightname)::text =\n'SuperUser'::text) AND ((acl_3.principaltype)::text = 'Group'::text))\n -> Materialize (cost=12.44..5870.39 rows=3 width=438)\n(actual time=0.004..176.296 rows=943730 loops=525)\n -> Nested Loop (cost=12.44..5870.37 rows=3\nwidth=438) (actual time=0.351..1028.683 rows=943730 loops=1)\n -> Nested Loop (cost=12.44..5601.49 rows=2\nwidth=442) (actual time=0.326..15.591 rows=675 loops=1)\n -> Nested Loop (cost=12.44..5502.26\nrows=27 width=8) (actual time=0.303..9.744 rows=675 loops=1)\n Output: principals_1.id,\ncachedgroupmembers_2.memberid\n -> Bitmap Heap Scan on\npublic.cachedgroupmembers cachedgroupmembers_2 (cost=12.44..1659.12\nrows=446 width=4) (actual time=0.267..1.266 
rows=676 loops=1)\n\nRecheck Cond: (cachedgroupmembers_2.groupid = 4)\n Filter:\n(cachedgroupmembers_2.disabled = 0)\n -> Bitmap Index Scan on\ncachedgroupmembers2 (cost=0.00..12.33 rows=446 width=0) (actual\ntime=0.171..0.171 rows=676 loops=1)\n Index Cond:\n(cachedgroupmembers_2.groupid = 4)\n -> Index Scan using\nprincipals_pkey on public.principals principals_1 (cost=0.00..8.61\nrows=1 width=4) (actual time=0.011..0.011 rows=1 loops=676)\n Output: principals_1.id\n Index Cond:\n(principals_1.id = cachedgroupmembers_2.memberid)\n Filter: ((principals_1.id\n<> 1) AND (principals_1.disabled = 0) AND\n((principals_1.principaltype)::text = 'User'::text))\n Rows Removed by Filter: 0\n -> Index Scan using users_pkey on\npublic.users main (cost=0.00..3.67 rows=1 width=434) (actual\ntime=0.007..0.008 rows=1\nloops=675) \n Index Cond: (main.id =\nprincipals_1.id)\n -> Index Scan using cachedgroupmembers1 on\npublic.cachedgroupmembers cachedgroupmembers_4 (cost=0.00..133.79\nrows=65 width=8) (actual time=0.012..1.199 rows=1398 loops=675)\n\n Index Cond:\n(cachedgroupmembers_4.memberid = principals_1.id)\n Filter: (cachedgroupmembers_4.disabled\n= 0)\n Rows Removed by Filter: 0\n Total runtime: 255694.440 ms\n(47 rows)\n\n\nThe good one:\n Unique (cost=528.88..528.96 rows=1 width=522) (actual\ntime=5029.906..5068.395 rows=572 loops=1)\n -> Sort (cost=528.88..528.89 rows=1 width=522) (actual\ntime=5029.889..5032.743 rows=33209 loops=1)\n Sort Method: quicksort Memory: 13143kB\n -> Nested Loop (cost=36.08..528.87 rows=1 width=522) (actual\ntime=0.410..4178.931 rows=33209 loops=1)\n -> Nested Loop (cost=3.54..449.25 rows=2 width=526)\n(actual time=0.139..1459.785 rows=943730 loops=1)\n Join Filter: (principals_1.id =\ncachedgroupmembers_4.memberid)\n -> Nested Loop (cost=0.00..314.65 rows=1\nwidth=530) (actual time=0.115..12.537 rows=675 loops=1)\n -> Nested Loop (cost=0.00..310.98 rows=1\nwidth=8) (actual time=0.106..7.203 rows=675 loops=1)\n Output: principals_1.id,\ncachedgroupmembers_2.memberid\n -> Index Only Scan using disgroumem on\npublic.cachedgroupmembers cachedgroupmembers_2 (cost=0.00..101.59\nrows=24 width=4) (actual time=0.071..1.046 rows=676 loops=1)\n Output:\ncachedgroupmembers_2.groupid, cachedgroupmembers_2.memberid,\ncachedgroupmembers_2.disabled\n Index Cond:\n((cachedgroupmembers_2.groupid = 4) AND (cachedgroupmembers_2.disabled =\n0))\n Heap Fetches: 676\n -> Index Scan using principals_pkey on\npublic.principals principals_1 (cost=0.00..8.71 rows=1 width=4) (actual\ntime=0.008..0.008 rows=1 loops=676)\n Output: principals_1.id\n Index Cond: (principals_1.id =\ncachedgroupmembers_2.memberid)\n Filter: ((principals_1.id <> 1)\nAND (principals_1.disabled = 0) AND ((principals_1.principaltype)::text\n= 'User'::text))\n Rows Removed by Filter: 0\n -> Index Scan using users_pkey on\npublic.users main (cost=0.00..3.67 rows=1 width=522) (actual\ntime=0.006..0.007 rows=1 loops=675)\n Index Cond: (main.id = principals_1.id)\n -> Bitmap Heap Scan on public.cachedgroupmembers\ncachedgroupmembers_4 (cost=3.54..133.77 rows=66 width=8) (actual\ntime=0.309..1.752 rows=1398 loops=675)\n Recheck Cond: (cachedgroupmembers_4.memberid\n= main.id)\n Filter: (cachedgroupmembers_4.disabled = 0)\n Rows Removed by Filter: 0\n -> Bitmap Index Scan on cachedgroupmembers1\n(cost=0.00..3.52 rows=66 width=0) (actual time=0.185..0.185 rows=1398\nloops=675)\n Index Cond:\n(cachedgroupmembers_4.memberid = main.id)\n -> Bitmap Heap Scan on public.acl acl_3\n(cost=32.54..39.78 rows=3 width=4) (actual 
time=0.002..0.002 rows=0 loops=943730)
                 Recheck Cond: ((acl_3.principalid =
cachedgroupmembers_4.groupid) AND ((((acl_3.rightname)::text =
'OwnTicket'::text) AND ((acl_3.principaltype)::text = 'Group'::text)) OR
(((acl_3.rightname)::text = 'SuperUser'::text) AND
((acl_3.principaltype)::text = 'Group'::text))))
                 Filter: (((acl_3.objecttype)::text =
'RT::Queue'::text) OR (((acl_3.objecttype)::text = 'RT::System'::text)
AND (acl_3.objectid = 1)))
                 ->  BitmapAnd  (cost=32.54..32.54 rows=3 width=0)
(actual time=0.002..0.002 rows=0 loops=943730)
                       ->  Bitmap Index Scan on acl3
(cost=0.00..2.22 rows=49 width=0) (actual time=0.001..0.001 rows=1
loops=943730)
                             Index Cond: (acl_3.principalid =
cachedgroupmembers_4.groupid)
                       ->  BitmapOr  (cost=30.07..30.07 rows=531
width=0) (actual time=0.110..0.110 rows=0 loops=4412)
                             ->  Bitmap Index Scan on acl1
(cost=0.00..25.46 rows=521 width=0) (actual time=0.102..0.102 rows=521
loops=4412)
                                   Index Cond:
(((acl_3.rightname)::text = 'OwnTicket'::text) AND
((acl_3.principaltype)::text = 'Group'::text))
                             ->  Bitmap Index Scan on acl1
(cost=0.00..4.36 rows=11 width=0) (actual time=0.007..0.007 rows=4
loops=4412)
                                   Index Cond:
(((acl_3.rightname)::text = 'SuperUser'::text) AND
((acl_3.principaltype)::text = 'Group'::text))
 Total runtime: 5069.842 ms
(47 rows)


Why does PostgreSQL pick one over the other, and how can I make sure
that it will keep using the \"good\" one instead of the \"bad\" one?

 / Eskil




-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 30 May 2016 15:56:58 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "> > > The rowcount estimates from 9.2 seem greatly different from the 8.4 plan.
> > > Did you remember to ANALYZE all the tables after migrating?  Maybe there
> > > were some table-specific statistics targets that you forgot to transfer
> > > over?  In any case, the 9.2 plan looks like garbage-in-garbage-out to
> > > me :-( ... without estimates at least a little closer to reality, the
> > > planner is unlikely to do anything very sane.
> > > 
> > > (BTW, I wonder why you are moving only to 9.2 and not something more
> > > recent.)
> > 
> > You put me on the right track with your conclusion that the estimates
> > were off the chart. The quick-and-dirty fix \"DELETE FROM pg_statistic;\"
> > solved this problem. This database now have to build up sane estimates
> > from scratch.
> 
> Actually it took a VACUUM FULL; and DELETE FROM pg_statistic; followed
> by ANALYZE on all tables to get it right.

It worked last time, but this time it does not work. I have deleted all
data in the table pg_statistic and run ANALYZE on all tables, but the
planner still makes crappy optimizations.
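(The only per-column knob I know of is the statistics target, along the
lines of the sketch below, but I have not verified that it is the right
lever here:

    ALTER TABLE cachedgroupmembers ALTER COLUMN groupid SET STATISTICS 1000;
    ANALYZE cachedgroupmembers;
)
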
How can I adjust the estimates\nto make the planner work better?\n\nLast time it was in testing, this time it is in production, so urgent\nhelp is needed, please!\n\nThis query now takes 90 seconds and it should not take more than 4-5\nseconds.\n\nEXPLAIN ANALYZE VERBOSE SELECT DISTINCT main.* FROM Users main CROSS\nJOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id =\nmain.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON\n( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN\nCachedGroupMembers CachedGroupMembers_4 ON\n( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE\n((ACL_3.ObjectType = 'RT::Queue' AND ACL_3.ObjectId = 85) OR\n(ACL_3.ObjectType = 'RT::System' AND ACL_3.ObjectId = 1)) AND\n(ACL_3.PrincipalId = CachedGroupMembers_4.GroupId) AND\n(ACL_3.PrincipalType = 'Group') AND (ACL_3.RightName = 'OwnTicket') AND\n(CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId\n= '4') AND (CachedGroupMembers_4.Disabled = '0') AND\n(Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User')\nAND (Principals_1.id != '1') ORDER BY main.Name ASC;\n \n QUERY\nPLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------------------------------------------------------------------------\n Unique (cost=8907.68..8907.76 rows=1 width=336) (actual\ntime=92075.721..92076.336 rows=176 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.gecos, m\nain.homephone, main.workphone, main.mobilephone, main.pagerphone,\nmain.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.creator, main.created,\nmain.lastupdatedby, main.\nlastupdated, main.smimecertificate\n -> Sort (cost=8907.68..8907.69 rows=1 width=336) (actual\ntime=92075.720..92075.748 rows=607 loops=1)\n Output: main.id, main.name, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.ge\ncos, main.homephone, main.workphone, main.mobilephone, main.pagerphone,\nmain.address1, main.address2, main.city, main.state, main.zip,\nmain.country, main.timezone, main.creator, main.created,\nmain.lastupdatedby,\n main.lastupdated, main.smimecertificate\n Sort Key: main.name, main.id, main.password, main.authtoken,\nmain.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, main.\ngecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.lastupdatedb\ny, main.lastupdated, main.smimecertificate\n Sort Method: quicksort Memory: 243kB\n -> Nested Loop (cost=20.37..8907.67 rows=1 width=336) (actual\ntime=540.971..92062.584 rows=607 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, 
main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.lang, m\nain.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.lastupda\ntedby, main.lastupdated, main.smimecertificate\n -> Nested Loop (cost=20.37..8845.47 rows=3 width=340)\n(actual time=0.188..1204.040 rows=972439 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, main.l\nang, main.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created,\nmain.la\nstupdatedby, main.lastupdated, main.smimecertificate,\ncachedgroupmembers_4.groupid\n -> Nested Loop (cost=20.37..8568.24 rows=2\nwidth=344) (actual time=0.179..11.075 rows=688 loops=1)\n Output: main.id, main.name, main.password,\nmain.authtoken, main.comments, main.signature, main.emailaddress,\nmain.freeformcontactinfo, main.organization, main.realname,\nmain.nickname, \nmain.lang, main.gecos, main.homephone, main.workphone, main.mobilephone,\nmain.pagerphone, main.address1, main.address2, main.city, main.state,\nmain.zip, main.country, main.timezone, main.creator, main.created, m\nain.lastupdatedby, main.lastupdated, main.smimecertificate,\nprincipals_1.id, cachedgroupmembers_2.memberid\n -> Nested Loop (cost=20.37..8411.79 rows=41\nwidth=8) (actual time=0.170..6.551 rows=688 loops=1)\n Output: principals_1.id,\ncachedgroupmembers_2.memberid\n -> Bitmap Heap Scan on\npublic.cachedgroupmembers cachedgroupmembers_2 (cost=20.37..2510.57\nrows=689 width=4) (actual time=0.156..1.362 rows=689 loops=1)\n Output: cachedgroupmembers_2.id,\ncachedgroupmembers_2.groupid, cachedgroupmembers_2.memberid,\ncachedgroupmembers_2.via, cachedgroupmembers_2.immediateparentid,\ncachedgroupm\nembers_2.disabled\n Recheck Cond:\n((cachedgroupmembers_2.groupid = 4) AND (cachedgroupmembers_2.disabled =\n0))\n -> Bitmap Index Scan on\ndisgroumem (cost=0.00..20.20 rows=689 width=0) (actual\ntime=0.107..0.107 rows=689 loops=1)\n Index Cond:\n((cachedgroupmembers_2.groupid = 4) AND (cachedgroupmembers_2.disabled =\n0))\n -> Index Scan using principals_pkey on\npublic.principals principals_1 (cost=0.00..8.55 rows=1 width=4) (actual\ntime=0.006..0.007 rows=1 loops=689)\n Output: principals_1.id\n Index Cond: (principals_1.id =\ncachedgroupmembers_2.memberid)\n Filter: ((principals_1.id <> 1)\nAND (principals_1.disabled = 0) AND ((principals_1.principaltype)::text\n= 'User'::text))\n Rows Removed by Filter: 0\n -> Index Scan using users_pkey on\npublic.users main (cost=0.00..3.81 rows=1 width=336) (actual\ntime=0.005..0.006 rows=1 loops=688)\n Output: main.id, main.name,\nmain.password, main.authtoken, main.comments, main.signature,\nmain.emailaddress, main.freeformcontactinfo, main.organization,\nmain.realname, main.nick\nname, main.lang, main.gecos, main.homephone, main.workphone,\nmain.mobilephone, main.pagerphone, main.address1, main.address2,\nmain.city, main.state, main.zip, main.country, main.timezone,\nmain.creator, main.crea\nted, main.lastupdatedby, main.lastupdated, main.smimecertificate\n Index Cond: (main.id = principals_1.id)\n -> Index Scan using cachedgroupmembers1 on\npublic.cachedgroupmembers cachedgroupmembers_4 
(cost=0.00..137.96\nrows=65 width=8) (actual time=0.008..1.434 rows=1413 loops=688)\n Output: cachedgroupmembers_4.id,\ncachedgroupmembers_4.groupid, cachedgroupmembers_4.memberid,\ncachedgroupmembers_4.via, cachedgroupmembers_4.immediateparentid,\ncachedgroupmembers_4.dis\nabled\n Index Cond: (cachedgroupmembers_4.memberid =\nprincipals_1.id)\n Filter: (cachedgroupmembers_4.disabled = 0)\n Rows Removed by Filter: 0\n -> Index Only Scan using acl1 on public.acl acl_3\n(cost=0.00..20.72 rows=1 width=4) (actual time=0.093..0.093 rows=0\nloops=972439)\n Output: acl_3.rightname, acl_3.objecttype,\nacl_3.objectid, acl_3.principaltype, acl_3.principalid\n Index Cond: ((acl_3.rightname = 'OwnTicket'::text)\nAND (acl_3.principaltype = 'Group'::text) AND (acl_3.principalid =\ncachedgroupmembers_4.groupid))\n Filter: ((((acl_3.objecttype)::text =\n'RT::Queue'::text) AND (acl_3.objectid = 85)) OR\n(((acl_3.objecttype)::text = 'RT::System'::text) AND (acl_3.objectid =\n1)))\n Rows Removed by Filter: 0\n Heap Fetches: 33532\n Total runtime: 92076.507 ms\n(39 rows)\n\n\n\n / Eskil\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 16:48:01 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "On Thu, Jul 21, 2016 at 11:48 AM, Johan Fredriksson <[email protected]> wrote:\n> EXPLAIN ANALYZE VERBOSE SELECT DISTINCT main.* FROM Users main CROSS\n> JOIN ACL ACL_3 JOIN Principals Principals_1 ON ( Principals_1.id =\n> main.id ) JOIN CachedGroupMembers CachedGroupMembers_2 ON\n> ( CachedGroupMembers_2.MemberId = Principals_1.id ) JOIN\n> CachedGroupMembers CachedGroupMembers_4 ON\n> ( CachedGroupMembers_4.MemberId = Principals_1.id ) WHERE\n> ((ACL_3.ObjectType = 'RT::Queue' AND ACL_3.ObjectId = 85) OR\n> (ACL_3.ObjectType = 'RT::System' AND ACL_3.ObjectId = 1)) AND\n> (ACL_3.PrincipalId = CachedGroupMembers_4.GroupId) AND\n> (ACL_3.PrincipalType = 'Group') AND (ACL_3.RightName = 'OwnTicket') AND\n> (CachedGroupMembers_2.Disabled = '0') AND (CachedGroupMembers_2.GroupId\n> = '4') AND (CachedGroupMembers_4.Disabled = '0') AND\n> (Principals_1.Disabled = '0') AND (Principals_1.PrincipalType = 'User')\n> AND (Principals_1.id != '1') ORDER BY main.Name ASC;\n\n\nThat cross join doesn't look right. It has no join condition.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 15:24:17 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire <[email protected]>\nwrote:\n\n> That cross join doesn't look right. It has no join condition.\n\n\n​That is that the definition of a \"CROSS JOIN\"...\n\nDavid J.\n\nOn Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire <[email protected]> wrote:That cross join doesn't look right. It has no join condition.​That is that the definition of a \"CROSS JOIN\"...David J.", "msg_date": "Thu, 21 Jul 2016 14:29:46 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "On Thu, Jul 21, 2016 at 3:29 PM, David G. 
Johnston
<[email protected]> wrote:
> On Thu, Jul 21, 2016 at 2:24 PM, Claudio Freire <[email protected]>
> wrote:
>>
>> That cross join doesn't look right. It has no join condition.
>
>
> That is that the definition of a \"CROSS JOIN\"...
>
> David J.

Well, maybe it shouldn't be.

A cross join, I mean.

I see the query, and a cross join there doesn't make much sense.

There's no filtering of the output rows in the where clause either,
AFAICT, and it's producing a lot of intermediate rows that don't seem
to be necessary. That was my point.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Thu, 21 Jul 2016 18:12:47 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "I can add that setting enable_nestloop = 0 cuts the runtime for this query down to about 4 seconds.
Disabling nested loops globally does, however, impact the performance of a lot of other queries.
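It can at least be scoped to a single transaction, along the lines of
this sketch, but that means touching every caller:

    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- the problematic SELECT DISTINCT main.* ... query goes here
    COMMIT;
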
 / Eskil 

-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 22 Jul 2016 00:59:27 +0000", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest version in postgresql.org's own repository) without improvement.

 / Eskil

-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 22 Jul 2016 01:07:28 +0000", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "On 22/07/16 13:07, Johan Fredriksson wrote:
> And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest version in postgresql.org's own repository) without improvement.
>

Not sure what repo you are using, but 9.5.3 and 9.6 Beta are the 
*actual* latest versions. Now I'm not sure they will actually help your 
particular query, but are probably worth a try out!

regards

Mark




-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 22 Jul 2016 19:08:26 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problems with 9.2.15" }, { "msg_contents": "On Fri, 2016-07-22 at 19:08 +1200, Mark Kirkwood wrote:
> On 22/07/16 13:07, Johan Fredriksson wrote:
> > And by the way, I have also tried to upgrade to PostgreSQL 9.4.8 (the latest version in postgresql.org's own repository) without improvement.
> >
> 
> Not sure what repo you are using, but 9.5.3 and 9.6 Beta are the 
> *actual* latest versions. Now I'm not sure they will actually help your 
> particular query, but are probably worth a try out!

The one I found on https://www.postgresql.org/download/linux/redhat/

That page points out
http://yum.postgresql.org/9.4/redhat/rhel-6-x86_64/pgdg-redhat94-9.4-1.noarch.rpm as the latest. Perhaps the download page needs to be updated?


 / Eskil




-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 22 Jul 2016 09:20:51 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problems with 9.2.15" } ]
[ { "msg_contents": "I am trying to insert rows that don't already exist from a temp table into\nanother table. I am using a LEFT JOIN on all the columns and checking for\nnulls in the base table to know which rows to insert. The problem is that\nthe planner is choosing a nested loop plan which is very slow over the much\nfaster (~40x) hash join. What's interesting is that all the row estimates\nappear to be fairly accurate. I'm wondering if it has something to do with\nthe GIN indexes on bigint_array_1 and bigint_array_2. Perhaps it\nmisestimates the cost of each index scan?\n\n\nPostgres 9.3.10 on 2.6.32-573.18.1.el6.x86_64 GNU/Linux\n- base_table has been VACUUM ANALYZED\n- base_table has GIN indexes on bigint_array_1 and bigint_array_2\n- base_table has btree index on id\n- base_table is 700k rows\n- temp_table is 4k rows\n- the bigint arrays are type bigint[] and contain 0 to 5 elements, with a\nmedian of 1 element\n- the time difference between nested loop vs hash join is not based on the\ncache, I can reproduce it in either order\n\ntest_db=# BEGIN;\nBEGIN\ntest_db=# EXPLAIN (ANALYZE, BUFFERS)\nINSERT INTO base_table (\n bigint_array_1, bigint_array_2, id\n) (\n SELECT s.bigint_array_1, s.bigint_array_2, s.id\n FROM temp_rows_to_insert s\n LEFT JOIN base_table t\n ON s.bigint_array_1 = t.bigint_array_1 AND s.bigint_array_2 =\nt.bigint_array_2 AND s.id = t.id\n WHERE s.bigint_array_1 IS NOT NULL AND t.bigint_array_1 IS NULL AND\ns.bigint_array_2 IS NOT NULL AND t.bigint_array_2 IS NULL AND s.id IS NOT\nNULL AND t.id IS NULL\n);\n \nQUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on base_table (cost=2.97..67498.30 rows=8412 width=72) (actual\ntime=40463.771..40463.771 rows=0 loops=1)\n Buffers: shared hit=2373347 read=129 dirtied=129, local hit=122\n -> Nested Loop Anti Join (cost=2.97..67498.30 rows=8412 width=72)\n(actual time=13.694..40410.273 rows=4389 loops=1)\n Buffers: shared hit=2338092, local hit=122\n -> Seq Scan on temp_rows_to_insert s (cost=0.00..219.60 rows=9614\nwidth=72) (actual time=0.607..4.746 rows=4389 loops=1)\n Filter: ((bigint_array_1 IS NOT NULL) AND (bigint_array_2 IS\nNOT NULL) AND (id IS NOT NULL))\n Buffers: local hit=122\n -> Bitmap Heap Scan on base_table t (cost=2.97..6.98 rows=1\nwidth=74) (actual time=9.201..9.201 rows=0 loops=4389)\n Recheck Cond: ((s.bigint_array_2 = bigint_array_2) AND\n(s.bigint_array_1 = bigint_array_1))\n Filter: (s.id = id)\n Buffers: shared hit=2333695\n -> BitmapAnd (cost=2.97..2.97 rows=1 width=0) (actual\ntime=9.199..9.199 rows=0 loops=4389)\n Buffers: shared hit=2333638\n -> Bitmap Index Scan on base_table_bigint_array_2_idx \n(cost=0.00..1.04 rows=3 width=0) (actual time=2.582..2.582 rows=290\nloops=4389)\n Index Cond: (s.bigint_array_2 = bigint_array_2)\n Buffers: shared hit=738261\n -> Bitmap Index Scan on base_table_bigint_array_1_idx \n(cost=0.00..1.68 rows=3 width=0) (actual time=6.608..6.608 rows=2\nloops=4389)\n Index Cond: (s.bigint_array_1 = bigint_array_1)\n Buffers: shared hit=1595377\n Total runtime: 40463.879 ms\n(20 rows)\n\ntest_db=# rollback;\nROLLBACK\ntest_db=# BEGIN;\nBEGIN\ntest_db=# SET enable_nestloop = false;\nSET\ntest_db=# EXPLAIN (ANALYZE, BUFFERS)\nINSERT INTO base_table (\n bigint_array_1, bigint_array_2, id\n) (\n SELECT s.bigint_array_1, s.bigint_array_2, s.id\n FROM temp_rows_to_insert s\n LEFT JOIN base_table t\n ON s.bigint_array_1 = 
t.bigint_array_1 AND s.bigint_array_2 =\nt.bigint_array_2 AND s.id = t.id\n WHERE s.bigint_array_1 IS NOT NULL AND t.bigint_array_1 IS NULL AND\ns.bigint_array_2 IS NOT NULL AND t.bigint_array_2 IS NULL AND s.id IS NOT\nNULL AND t.id IS NULL\n);\n \nQUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Insert on base_table (cost=31711.89..71625.39 rows=8412 width=72) (actual\ntime=1838.650..1838.650 rows=0 loops=1)\n Buffers: shared hit=50013 read=123 dirtied=410, local hit=122\n -> Hash Anti Join (cost=31711.89..71625.39 rows=8412 width=72) (actual\ntime=1798.774..1812.872 rows=4389 loops=1)\n Hash Cond: ((s.bigint_array_1 = t.bigint_array_1) AND\n(s.bigint_array_2 = t.bigint_array_2) AND (s.id = t.id))\n Buffers: shared hit=14761 dirtied=287, local hit=122\n -> Seq Scan on temp_rows_to_insert s (cost=0.00..219.60 rows=9614\nwidth=72) (actual time=0.046..3.033 rows=4389 loops=1)\n Filter: ((bigint_array_1 IS NOT NULL) AND (bigint_array_2 IS\nNOT NULL) AND (id IS NOT NULL))\n Buffers: local hit=122\n -> Hash (cost=18131.96..18131.96 rows=775996 width=74) (actual\ntime=1798.528..1798.528 rows=768415 loops=1)\n Buckets: 131072 Batches: 1 Memory Usage: 84486kB\n Buffers: shared hit=10372 dirtied=287\n -> Seq Scan on base_table t (cost=0.00..18131.96\nrows=775996 width=74) (actual time=0.007..490.851 rows=768415 loops=1)\n Buffers: shared hit=10372 dirtied=287\n Total runtime: 1843.336 ms\n(14 rows)\n\ntest_db=# rollback;\nROLLBACK\n\n\nThanks,\nJake\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Planner-chooses-slow-index-heap-scan-despite-accurate-row-estimates-tp5905357.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 27 May 2016 12:08:57 -0700 (MST)", "msg_from": "Jake Magner <[email protected]>", "msg_from_op": true, "msg_subject": "Planner chooses slow index heap scan despite accurate row estimates" }, { "msg_contents": "Jake Magner <[email protected]> writes:\n> I am trying to insert rows that don't already exist from a temp table into\n> another table. I am using a LEFT JOIN on all the columns and checking for\n> nulls in the base table to know which rows to insert. The problem is that\n> the planner is choosing a nested loop plan which is very slow over the much\n> faster (~40x) hash join. What's interesting is that all the row estimates\n> appear to be fairly accurate. I'm wondering if it has something to do with\n> the GIN indexes on bigint_array_1 and bigint_array_2. Perhaps it\n> misestimates the cost of each index scan?\n\nI'm curious about what happens to the runtime if you repeatedly roll back\nthe INSERT and do it over (without vacuuming in between).\n\nWhat I'm thinking is that as the INSERT proceeds, it'd be making entries\nin those GIN indexes' pending-item lists, which the bitmap indexscan would\nhave to scan through since it's examining the same table you're inserting\ninto. The pending-item list is unsorted so it's relatively expensive to\nscan. 
Since, after you insert each row from temp_rows_to_insert, you're
doing a fresh bitmap indexscan, it seems like the cost to deal with the
pending-item list would be proportional to O(N^2) --- so even though the
cost per pending item is not that large, N=4000 might be enough to hurt.

If this theory is correct, a second attempt to do the INSERT without
having flushed the pending-list would be even more expensive; while
if I'm wrong and that cost is negligible, the time wouldn't change much.

The hashjoin approach avoids this problem by not using the index (and
even if it did, the index would be scanned only once before any
insertions happen). The planner unfortunately has no idea about this
interaction.

If this diagnosis is correct, there are a couple of ways you could get around
the problem:

* disable use of the pending list by turning off \"fastupdate\" for these
indexes.

* construct the set of rows to be inserted in a separate command and
put them into a second temp table, then insert to the main table.

The second choice is probably preferable; doing bulk GIN inserts
without fastupdate is kind of expensive itself.
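In rough SQL terms (an untested sketch --- the index names are from your
EXPLAIN output, the temp table name is made up):

    -- option 1: disable the pending list on the GIN indexes
    ALTER INDEX base_table_bigint_array_1_idx SET (fastupdate = off);
    ALTER INDEX base_table_bigint_array_2_idx SET (fastupdate = off);

    -- option 2: materialize the anti-join first, then insert from that
    CREATE TEMP TABLE rows_actually_new AS
      SELECT s.bigint_array_1, s.bigint_array_2, s.id
      FROM temp_rows_to_insert s
      WHERE s.bigint_array_1 IS NOT NULL
        AND s.bigint_array_2 IS NOT NULL
        AND s.id IS NOT NULL
        AND NOT EXISTS (SELECT 1
                        FROM base_table t
                        WHERE t.bigint_array_1 = s.bigint_array_1
                          AND t.bigint_array_2 = s.bigint_array_2
                          AND t.id = s.id);

    INSERT INTO base_table (bigint_array_1, bigint_array_2, id)
    SELECT bigint_array_1, bigint_array_2, id FROM rows_actually_new;
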

			regards, tom lane


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 19:17:55 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner chooses slow index heap scan despite accurate row
 estimates" }, { "msg_contents": "I tried without doing an INSERT at all, just running the SELECT queries and
the result is the same. Nested loop is chosen but is much slower.



--
View this message in context: http://postgresql.nabble.com/Planner-chooses-slow-index-heap-scan-despite-accurate-row-estimates-tp5905357p5905383.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 16:30:54 -0700 (MST)", "msg_from": "Jake Magner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner chooses slow index heap scan despite accurate row
 estimates" }, { "msg_contents": "Jake Magner <[email protected]> writes:
> I tried without doing an INSERT at all, just running the SELECT queries and
> the result is the same.  Nested loop is chosen but is much slower.

FWIW, I just noticed that the comparisons you're using are plain equality
of the arrays.  While a GIN array index supports that, it's not exactly
its strong suit: the sort of questions that index type supports well are
more like \"which arrays contain value X?\".  I wonder if it'd be worth
creating btree indexes on the array column.

			regards, tom lane


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Fri, 27 May 2016 20:16:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Planner chooses slow index heap scan despite accurate row
 estimates" }, { "msg_contents": "Tom Lane-2 wrote
> Jake Magner <jakemagner90@> writes:
>> I tried without doing an INSERT at all, just running the SELECT queries
>> and
>> the result is the same. Nested loop is chosen but is much slower.
> 
> FWIW, I just noticed that the comparisons you're using are plain equality
> of the arrays.  While a GIN array index supports that, it's not exactly
> its strong suit: the sort of questions that index type supports well are
> more like \"which arrays contain value X?\".  I wonder if it'd be worth
> creating btree indexes on the array column.

I added btree indexes and now the nested loop uses those and is a bit faster
than the hash join. So the planner just misestimates the cost of doing the
equality comparisons? I'd prefer not to add more indexes; the hash join
performance is fast enough if it would just choose that, but I'm reluctant to
turn off nested loops in case the table gets a lot bigger.



--
View this message in context: http://postgresql.nabble.com/Planner-chooses-slow-index-heap-scan-despite-accurate-row-estimates-tp5905357p5905453.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Sat, 28 May 2016 17:38:33 -0700 (MST)", "msg_from": "Jake Magner <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Planner chooses slow index heap scan despite accurate row
 estimates" }, { "msg_contents": "On Sat, May 28, 2016 at 5:38 PM, Jake Magner <[email protected]> wrote:
> Tom Lane-2 wrote
>> Jake Magner <jakemagner90@> writes:
>>> I tried without doing an INSERT at all, just running the SELECT queries
>>> and
>>> the result is the same.  Nested loop is chosen but is much slower.
>>
>> FWIW, I just noticed that the comparisons you're using are plain equality
>> of the arrays.  While a GIN array index supports that, it's not exactly
>> its strong suit: the sort of questions that index type supports well are
>> more like \"which arrays contain value X?\".  I wonder if it'd be worth
>> creating btree indexes on the array column.
>
> I added btree indexes and now the nested loop uses those and is a bit faster
> than the hash join. So the planner just misestimates the cost of doing the
> equality comparisons?

I wonder how it would do in 9.4?  Either in them actually being
faster, or the planner doing
a better job of realizing they won't be fast.

> I'd prefer not to add more indexes; the hash join
> performance is fast enough if it would just choose that, but I'm reluctant to
> turn off nested loops in case the table gets a lot bigger.

A large hash join just needs to divide it up into batches.  It should
still be faster than the nested loop (as currently implemented),
until you run out of temp space.

But, you already have a solution in hand.  I agree you shouldn't add
more indexes without reason, but you do have a reason.

Cheers,

Jeff


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 30 May 2016 12:34:35 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Planner chooses slow index heap scan despite
 accurate row estimates" } ]
[ { "msg_contents": "Hello,\nI'm trying to find persons in an address database where I have built \ntrgm-indexes on name, street, zip and city.\n\nWhen I search for all four parts of the address (name, street, zip and city)\n\n select name, street, zip, city\n from addresses\n where name % $1\n and street % $2\n and (zip % $3 or city % $4)\n\neverything works fine: It takes less than a second to get some (5 - 500) \nproposed addresses out of 500,000 addresses and the query plan shows\n\n Bitmap Heap Scan on addresses (cost=168.31..1993.38 rows=524 ...\n Recheck Cond: ...\n -> Bitmap Index Scan on ...\n Index Cond: ...\n\nThe same happens when I search only by name with\n\n select name, street, zip, city\n from addresses\n where name % $1\n\nBut when I rewrite this query to\n\n select name, street, zip, city\n from addresses\n where similarity(name, $1) > 0.3\n\nwhich means exactly then same as the second example, the query plan \nchanges to\n\n Seq Scan on addresses (cost=0.00..149714.42 rows=174675 width=60)\n Filter: ...\n\nand the query lasts about a minute.\n\nThe reason for using the similarity function in place of the \n'%'-operator is that I want to use different similarity values in one query:\n\n select name, street, zip, city\n from addresses\n where name % $1\n and street % $2\n and (zip % $3 or city % $4)\n or similarity(name, $1) > 0.8\n\nwhich means: take all addresses where name, street, zip and city have \nlittle similarity _plus_ all addresses where the name matches very good.\n\n\nThe only way I found, was to create a temporary table from the first \nquery, change the similarity value with set_limit() and then select the \nsecond query UNION the temporary table.\n\nIs there a more elegant and straight forward way to achieve this result?\n\nregards Volker\n\n-- \nVolker Böhm Tel.: +49 4141 981155\nVoßkuhl 5 mailto:[email protected]\n21682 Stade http://www.vboehm.de\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 May 2016 19:53:59 +0200", "msg_from": "Volker Boehm <[email protected]>", "msg_from_op": true, "msg_subject": "similarity and operator '%'" }, { "msg_contents": "On Mon, May 30, 2016 at 1:53 PM, Volker Boehm <[email protected]> wrote:\n\n>\n> The reason for using the similarity function in place of the '%'-operator\n> is that I want to use different similarity values in one query:\n>\n> select name, street, zip, city\n> from addresses\n> where name % $1\n> and street % $2\n> and (zip % $3 or city % $4)\n> or similarity(name, $1) > 0.8\n>\n> which means: take all addresses where name, street, zip and city have\n> little similarity _plus_ all addresses where the name matches very good.\n>\n>\n> The only way I found, was to create a temporary table from the first\n> query, change the similarity value with set_limit() and then select the\n> second query UNION the temporary table.\n>\n> Is there a more elegant and straight forward way to achieve this result?\n>\n\n​Not that I can envision.\n\nYou are forced into using an operator due to our index implementation.\n\nYou are thus forced into using a GUC to control the parameter that the\nindex scanning function uses to compute true/false.\n\nA GUC can only take on a single value within a given query - well, not\nquite true[1] but the exception doesn't seem like it will help here.\n\nTh\nus you are consigned to​\n\n​using two queries.\n\n*​A functional index​ doesn't work since the 

Is there a more elegant and straightforward way to achieve this result?

regards Volker

-- 
Volker Böhm			Tel.: +49 4141 981155
Voßkuhl 5			mailto:[email protected]
21682 Stade			http://www.vboehm.de


-- 
Sent via pgsql-performance mailing list ([email protected])
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
", "msg_date": "Mon, 30 May 2016 19:53:59 +0200", "msg_from": "Volker Boehm <[email protected]>", "msg_from_op": true, "msg_subject": "similarity and operator '%'" }, { "msg_contents": "On Mon, May 30, 2016 at 1:53 PM, Volker Boehm <[email protected]> wrote:

>
> The reason for using the similarity function in place of the '%'-operator
> is that I want to use different similarity values in one query:
>
>     select name, street, zip, city
>     from addresses
>     where name % $1
>         and street % $2
>         and (zip % $3 or city % $4)
>         or similarity(name, $1) > 0.8
>
> which means: take all addresses where name, street, zip and city have
> little similarity _plus_ all addresses where the name matches very well.
>
>
> The only way I found was to create a temporary table from the first
> query, change the similarity value with set_limit() and then select the
> second query UNION the temporary table.
>
> Is there a more elegant and straightforward way to achieve this result?
>

Not that I can envision.

You are forced into using an operator due to our index implementation.

You are thus forced into using a GUC to control the parameter that the
index scanning function uses to compute true/false.

A GUC can only take on a single value within a given query - well, not
quite true[1] but the exception doesn't seem like it will help here.

Thus you are consigned to using two queries.

* A functional index doesn't work since the second argument is query
specific

[1] When defining a function you can attach a \"SET\" clause to it; commonly
used for search_path but should work with any GUC.  If you could wrap the
operator comparison into a custom function you could use this capability.
It also would require a function that would take the threshold as a value -
the extension only provides variations that use the GUC.

I don't think this will use the index even if it compiles (not tested):

CREATE FUNCTION similarity_80(col, val)
RETURNS boolean
SET similarity_threshold = 0.80
LANGUAGE sql
AS $$
SELECT col % val;
$$;

David J.
", "msg_date": "Mon, 30 May 2016 14:20:33 -0400", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: similarity and operator '%'" }, { "msg_contents": "On Mon, May 30, 2016 at 10:53 AM, Volker Boehm <[email protected]> wrote:\n\n> The reason for using the similarity function in place of the '%'-operator is\n> that I want to use different similarity values in one query:\n>\n> select name, street, zip, city\n> from addresses\n> where name % $1\n> and street % $2\n> and (zip % $3 or city % $4)\n> or similarity(name, $1) > 0.8\n\nI think the best you can do through query writing is to use the\nmost-lenient setting in all places, and then refilter to get the less\nlenient cutoff:\n\n select name, street, zip, city\n from addresses\n where name % $1\n and street % $2\n and (zip % $3 or city % $4)\n or (name % $1 and similarity(name, $1) > 0.8)\n\nIf it were really important to me to get maximum performance, what I\nwould do is alter/fork the pg_trgm extension so that it had another\noperator, say %%%, with a hard-coded cutoff which paid no attention to\nthe set_limit(). I'm not really sure how the planner would deal with\nthat, though.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 30 May 2016 13:05:41 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: similarity and operator '%'" } ]
[ { "msg_contents": "Hi All\n Can anyone please point me to location from where i can get slony\nslony1-95-2.2.2-1.rhel5.x86_64\n<http://yum.postgresql.org/9.5/redhat/rhel-5-x86_64/slony1-95-2.2.4-4.rhel5.x86_64.rpm>\n rpm. I'm upgrading database from version 9.3 to 9.5. Current version of\nrpm we are using is slony1-93-2.2.2-1.el5.x86_64 and the one that is\navailable on postgresql website for 9.5 is slony1-95-2.2.4-4.rhel5.x86_64\n<http://yum.postgresql.org/9.5/redhat/rhel-5-x86_64/slony1-95-2.2.4-4.rhel5.x86_64.rpm>\n which is not compatible and throws an error when i test the upgrade. In\nthe past i was able to find the 2.2.2-1 version rpm for previous versions\non postgres website but not this time for postgresql 9.5\n\n\n\nThanks\nAvi\n\nHi All         Can anyone please point me to location from where i can get slony slony1-95-2.2.2-1.rhel5.x86_64  rpm. I'm upgrading database from version 9.3 to 9.5. Current version of rpm we are using is  slony1-93-2.2.2-1.el5.x86_64 and the one that is available on postgresql website for 9.5 is slony1-95-2.2.4-4.rhel5.x86_64  which is not compatible and throws an error when i test the upgrade.  In the past i was able to find the 2.2.2-1 version rpm for previous versions on postgres website but not this time for postgresql 9.5ThanksAvi", "msg_date": "Fri, 3 Jun 2016 16:03:18 -0700", "msg_from": "avi Singh <[email protected]>", "msg_from_op": true, "msg_subject": "slony rpm help slony1-95-2.2.2-1.rhel6.x86_64" }, { "msg_contents": "> From: avi Singh <[email protected]>\n>To: [email protected] \n>Sent: Saturday, 4 June 2016, 0:03\n>Subject: [PERFORM] slony rpm help slony1-95-2.2.2-1.rhel6.x86_64\n> \n>\n>\n>Hi All\n> Can anyone please point me to location from where i can get slony slony1-95-2.2.2-1.rhel5.x86_64 rpm. I'm upgrading database from version 9.3 to 9.5. Current version of rpm we are using is slony1-93-2.2.2-1.el5.x86_64 and the one that is available on postgresql website for 9.5 is slony1-95-2.2.4-4.rhel5.x86_64 which is not compatible and throws an error when i test the upgrade. In the past i was able to find the 2.2.2-1 version rpm for previous versions on postgres website but not this time for postgresql 9.5\n>\n>\n\n\nWhat you'd be better off doing is installing Slony 2.2.4 on all your servers (or better a 2.2.5) rather than trying to get the older version. If you can't get a package you could compile Slony yourself.\n\n\nThe not compatible error you mention is most likely because you've failed to update the Slony functions. See:\n\n\n http://slony.info/documentation/2.2/stmtupdatefunctions.html\n\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 9 Jun 2016 15:32:42 +0000 (UTC)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slony rpm help slony1-95-2.2.2-1.rhel6.x86_64" }, { "msg_contents": "On Fri, Jun 3, 2016 at 7:03 PM, avi Singh <[email protected]>\nwrote:\n\n> Hi All\n> Can anyone please point me to location from where i can get slony\n> slony1-95-2.2.2-1.rhel5.x86_64\n> <http://yum.postgresql.org/9.5/redhat/rhel-5-x86_64/slony1-95-2.2.4-4.rhel5.x86_64.rpm>\n> rpm.\n>\n\nThere should not be one since Slony1-I v2.2.2 does not compile against\nPostgreSQL 9.5.\n9.5 requires at least Slony-I v2.2.4. I recommend upgrading Slony to 2.2.5\nfirst.\n\n\nRegards, Jan\n\n\n\n\n> I'm upgrading database from version 9.3 to 9.5. 
Current version of rpm we\n> are using is slony1-93-2.2.2-1.el5.x86_64 and the one that is available on\n> postgresql website for 9.5 is slony1-95-2.2.4-4.rhel5.x86_64\n> <http://yum.postgresql.org/9.5/redhat/rhel-5-x86_64/slony1-95-2.2.4-4.rhel5.x86_64.rpm>\n> which is not compatible and throws an error when i test the upgrade.\n> In the past i was able to find the 2.2.2-1 version rpm for previous\n> versions on postgres website but not this time for postgresql 9.5\n>\n>\n>\n> Thanks\n> Avi\n>\n>\n>\n>\n>\n\n\n-- \nJan Wieck\nSenior Postgres Architect\n\nOn Fri, Jun 3, 2016 at 7:03 PM, avi Singh <[email protected]> wrote:Hi All         Can anyone please point me to location from where i can get slony slony1-95-2.2.2-1.rhel5.x86_64  rpm.There should not be one since Slony1-I v2.2.2 does not compile against PostgreSQL 9.5.9.5 requires at least Slony-I v2.2.4. I recommend upgrading Slony to 2.2.5 first.Regards, Jan  I'm upgrading database from version 9.3 to 9.5. Current version of rpm we are using is  slony1-93-2.2.2-1.el5.x86_64 and the one that is available on postgresql website for 9.5 is slony1-95-2.2.4-4.rhel5.x86_64  which is not compatible and throws an error when i test the upgrade.  In the past i was able to find the 2.2.2-1 version rpm for previous versions on postgres website but not this time for postgresql 9.5ThanksAvi\n-- Jan WieckSenior Postgres Architect", "msg_date": "Sat, 16 Jul 2016 21:14:50 -0400", "msg_from": "Jan Wieck <[email protected]>", "msg_from_op": false, "msg_subject": "Re: slony rpm help slony1-95-2.2.2-1.rhel6.x86_64" } ]
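For reference, after installing the newer Slony binaries the stored functions are upgraded with a short slonik script along these lines (a sketch only; the cluster name, node id and conninfo below are placeholders, not values from this thread):

    cluster name = mycluster;
    node 1 admin conninfo = 'dbname=mydb host=primary user=slony';
    update functions (id = 1);

Run it once per node, as described in the stmtupdatefunctions documentation linked above.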
[ { "msg_contents": "SELECT pg_database_size('DB1') takes 60ms for 7948 kB size DB. Is there any\nway to reduce the time taken or any other ways to find data base size?\n\nThanks and Regards,\nS.Sangeetha\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/pg-database-size-tp5906449.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 4 Jun 2016 00:35:55 -0700 (MST)", "msg_from": "sangeetha <[email protected]>", "msg_from_op": true, "msg_subject": "pg_database_size" }, { "msg_contents": "On Sat, Jun 4, 2016 at 5:35 PM, sangeetha <[email protected]> wrote:\n\n> SELECT pg_database_size('DB1') takes 60ms for 7948 kB size DB. Is there\n> any\n> way to reduce the time taken or any other ways to find data base size?\n\n\nWhat is the version of PostgreSQL you are using ?\n\nYou can execute the command \"\\l+\" which will list all the databases and\ntheir sizes.\n\nOr you can execute \"\\l+ <database-name>\".\n\nRegards,\nVenkata B N\n\nOn Sat, Jun 4, 2016 at 5:35 PM, sangeetha <[email protected]> wrote:SELECT pg_database_size('DB1')  takes 60ms for 7948 kB size DB. Is there any\nway to reduce the time taken or any other ways to find data base size?What is the version of PostgreSQL you are using ?You can execute the command \"\\l+\" which will list all the databases and their sizes. Or you can execute \"\\l+ <database-name>\".Regards,Venkata B N", "msg_date": "Sat, 4 Jun 2016 18:10:18 +1000", "msg_from": "Venkata Balaji N <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_database_size" }, { "msg_contents": "On 6/4/16 3:10 AM, Venkata Balaji N wrote:\n>\n> On Sat, Jun 4, 2016 at 5:35 PM, sangeetha <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> SELECT pg_database_size('DB1') takes 60ms for 7948 kB size DB. Is\n> there any\n> way to reduce the time taken or any other ways to find data base size?\n>\n>\n> What is the version of PostgreSQL you are using ?\n>\n> You can execute the command \"\\l+\" which will list all the databases and\n> their sizes.\n>\n> Or you can execute \"\\l+ <database-name>\".\n\nDepending on your needs, you could also take the sum of \npg_class.relpages and multiply that by BLKSZ.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 14 Jun 2016 16:00:11 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_database_size" } ]
[ { "msg_contents": "Hello,\n\nI have two tables phone_number and phone_number_type\n\nWhen I start transaction and insert phone_number using FK from \nphone_number_type. Then I can during another TX update row from \nphone_number_type, but I can't execute select for update on it.\n\nIn db stats I see during inserInto AccessShareLock, during update \nRowExclusieLock but during select for update AccessExclusieLock.\n\nWhy I can't execute 'select for update' but I can update???? We often \nuse 'select for update' to avoid update the same record in differents TX \nbut I don't understand why this block another tx from using this record \nas FK\n\n\nBest regards\nMirek\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 7 Jun 2016 09:31:47 +0200", "msg_from": "Streamsoft - Mirek Szajowski <[email protected]>", "msg_from_op": true, "msg_subject": "Locking concurrency: select for update vs update" }, { "msg_contents": "On 7 June 2016 at 09:31, Streamsoft - Mirek Szajowski <\[email protected]> wrote:\n\n> Hello,\n>\n> I have two tables phone_number and phone_number_type\n>\n> When I start transaction and insert phone_number using FK from\n> phone_number_type. Then I can during another TX update row from\n> phone_number_type, but I can't execute select for update on it.\n>\n> In db stats I see during inserInto AccessShareLock, during update\n> RowExclusieLock but during select for update AccessExclusieLock.\n>\n> Why I can't execute 'select for update' but I can update???? We often use\n> 'select for update' to avoid update the same record in differents TX but I\n> don't understand why this block another tx from using this record as FK\n>\n>\n> Best regards\n> Mirek\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nWhat do you mean by \" can't execute select for update on it\"? Can you show\nan example code, and the error you get?\n\n-- \n regards Szymon Lipiński\n\nOn 7 June 2016 at 09:31, Streamsoft - Mirek Szajowski <[email protected]> wrote:Hello,\n\nI have two tables phone_number and phone_number_type\n\nWhen I start transaction and insert phone_number using FK from phone_number_type. Then I can during another TX update row from phone_number_type, but I can't execute select for update on it.\n\nIn db stats I see during inserInto AccessShareLock, during update RowExclusieLock but during select for update AccessExclusieLock.\n\nWhy I can't execute 'select for update' but I can update???? We often use 'select for update' to avoid update the same record in differents TX but I don't understand why this block another tx from using this record as FK\n\n\nBest regards\nMirek\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nWhat do you mean by \" can't execute select for update on it\"? 
Can you show an example code, and the error you get?--     regards Szymon Lipiński", "msg_date": "Tue, 7 Jun 2016 09:35:05 +0200", "msg_from": "=?UTF-8?Q?Szymon_Lipi=C5=84ski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Locking concurrency: select for update vs update" }, { "msg_contents": "It means that second TX hangs/wait on this sql\n\n\ncode\n\nFIRST TX\n\nINSERT INTO phone_number( id_phone_number,id_phone_number_type) \nVALUES (1,500);\n\n\nSECOND TX\n\nselect * from phone_number_type WHERE id_phone_number_type=500 for \nupdate //hangs/wait to TX with insert into ends\n\n\nbut this works fine\n\n UPDATE phone_number_type SET val=val+1 WHERE id_phone_number_type=500\n\nW dniu 2016-06-07 o 09:35, Szymon Lipiński pisze:\n>\n>\n> On 7 June 2016 at 09:31, Streamsoft - Mirek Szajowski \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hello,\n>\n> I have two tables phone_number and phone_number_type\n>\n> When I start transaction and insert phone_number using FK from\n> phone_number_type. Then I can during another TX update row from\n> phone_number_type, but I can't execute select for update on it.\n>\n> In db stats I see during inserInto AccessShareLock, during update\n> RowExclusieLock but during select for update AccessExclusieLock.\n>\n> Why I can't execute 'select for update' but I can update???? We\n> often use 'select for update' to avoid update the same record in\n> differents TX but I don't understand why this block another tx\n> from using this record as FK\n>\n>\n> Best regards\n> Mirek\n>\n>\n> -- \n> Sent via pgsql-performance mailing list\n> ([email protected]\n> <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n> What do you mean by \" can't execute select for update on it\"? Can you \n> show an example code, and the error you get?\n>\n> -- \n> regards Szymon Lipiński\n\n-- \n\nz poważaniem\n\n*Mirek Szajowski*\nProjektant-programista\nTel: 663 762 690\[email protected] <mailto:[email protected]>\n\n\n*Streamsoft*\n65-140 Zielona Góra, ul.Kossaka 10\nNIP: 929-010-00-96, REGON: 970033184\nTel: +48 68 45 66 900, Fax: +48 68 45 66 933\nwww.streamsoft.pl <http://www.streamsoft.pl/>\n\n*Uwaga: * Treść niniejszej wiadomości może być poufna i objęta zakazem \njej ujawniania. Jeśli czytelnik lub odbiorca niniejszej wiadomości nie \njest jej zamierzonym adresatem, pracownikiem lub pośrednikiem \nupoważnionym do jej przekazania adresatowi, niniejszym informujemy że \nwszelkie rozprowadzanie, dystrybucja lub powielanie niniejszej \nwiadomości jest zabronione. Odbiorca lub czytelnik korespondencji, który \notrzymał ja omyłkowo, proszony jest o zawiadomienie nadawcy i usuniecie \ntego materiału z komputera. Dziękujemy. Streamsoft.\n\n*Note: * The information contained in this message may be privileged and \nconfidential and protected from disclosure. If the reader or receiver of \nthis message is not the intended recipient, or an employee or agent \nresponsible for delivering this message to the intended recipient, you \nare hereby notified that any dissemination, distribution or copying of \nthis communication is strictly prohibited. If you received this in \nerror, please contact the sender and delete the material from any \ncomputer. Thank you. 
Streamsoft.\n\n\n\n\n\n\n\nIt means that second TX hangs/wait on this sql\n\n\ncode\nFIRST TX\nINSERT INTO phone_number(\n id_phone_number,id_phone_number_type)    VALUES (1,500);\n\n\nSECOND TX\nselect * from phone_number_type  WHERE id_phone_number_type=500\n for update //hangs/wait to TX with insert into ends\n\n\n\nbut this works fine\n\n   UPDATE phone_number_type SET val=val+1 WHERE\n id_phone_number_type=500\n\nW dniu 2016-06-07 o 09:35, Szymon\n Lipiński pisze:\n\n\n\n\nOn 7 June 2016 at 09:31, Streamsoft -\n Mirek Szajowski <[email protected]>\n wrote:\nHello,\n\n I have two tables phone_number and phone_number_type\n\n When I start transaction and insert phone_number using FK\n from phone_number_type. Then I can during another TX\n update row from phone_number_type, but I can't execute\n select for update on it.\n\n In db stats I see during inserInto AccessShareLock, during\n update RowExclusieLock but during select for update\n AccessExclusieLock.\n\n Why I can't execute 'select for update' but I can\n update???? We often use 'select for update' to avoid\n update the same record in differents TX but I don't\n understand why this block another tx from using this\n record as FK\n\n\n Best regards\n Mirek\n\n\n -- \n Sent via pgsql-performance mailing list ([email protected])\n To make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n What do you mean by \" can't execute select for update on it\"?\n Can you show an example code, and the error you get?\n\n\n -- \n\n\n\n\n\n\n\n    regards\n Szymon\n Lipiński\n\n\n\n\n\n\n\n\n\n\n\n\n-- \n\n\n z poważaniem\n\nMirek\n Szajowski\n Projektant-programista\n Tel: 663 762 690\[email protected]\n\n\n Streamsoft\n 65-140 Zielona Góra, ul.Kossaka 10\n NIP: 929-010-00-96, REGON: 970033184\n Tel: +48 68 45 66 900, Fax: +48 68 45 66 933\nwww.streamsoft.pl\n\n\n\n Uwaga:\n \n Treść niniejszej wiadomości może być poufna i objęta\n zakazem jej ujawniania. Jeśli czytelnik lub odbiorca\n niniejszej wiadomości nie jest jej zamierzonym adresatem,\n pracownikiem lub pośrednikiem upoważnionym do jej\n przekazania adresatowi, niniejszym informujemy że wszelkie\n rozprowadzanie, dystrybucja lub powielanie niniejszej\n wiadomości jest zabronione. Odbiorca lub czytelnik\n korespondencji, który otrzymał ja omyłkowo, proszony jest\n o zawiadomienie nadawcy i usuniecie tego materiału z\n komputera. Dziękujemy. Streamsoft. \n \n Note: The information contained in this message\n may be privileged and confidential and protected from\n disclosure. If the reader or receiver of this message is\n not the intended recipient, or an employee or agent\n responsible for delivering this message to the intended\n recipient, you are hereby notified that any dissemination,\n distribution or copying of this communication is strictly\n prohibited. If you received this in error, please contact\n the sender and delete the material from any computer.\n Thank you. 
Streamsoft.", "msg_date": "Tue, 7 Jun 2016 09:38:31 +0200", "msg_from": "Streamsoft - Mirek Szajowski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Locking concurrency: select for update vs update" }, { "msg_contents": "Streamsoft - Mirek Szajowski <[email protected]> writes:\n> Why I can't execute 'select for update' but I can update?\n\nIn recent PG versions, the lock held due to having inserted an FK\ndependent row effectively only locks the key fields of the parent row.\nUPDATE can tell whether you're trying to change the row's key fields,\nand it will proceed if you aren't. SELECT FOR UPDATE has to lock the\nwhole row (since it must assume you might be intending to change any\nfields of the row); so it blocks until the FK lock goes away.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 07 Jun 2016 09:24:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Locking concurrency: select for update vs update" }, { "msg_contents": "Thanks\n\nafter your description I found select name from phone_number_type WHERE \nid_phone_number_type=4 for *NO KEY* update (Postgresql 9.3 )\n\n\nW dniu 2016-06-07 o 15:24, Tom Lane pisze:\n> Streamsoft - Mirek Szajowski <[email protected]> writes:\n>> Why I can't execute 'select for update' but I can update?\n> In recent PG versions, the lock held due to having inserted an FK\n> dependent row effectively only locks the key fields of the parent row.\n> UPDATE can tell whether you're trying to change the row's key fields,\n> and it will proceed if you aren't. SELECT FOR UPDATE has to lock the\n> whole row (since it must assume you might be intending to change any\n> fields of the row); so it blocks until the FK lock goes away.\n>\n> \t\t\tregards, tom lane\n\n\n\n\n\n\n\n\nThanks \n\nafter your description I found select name from\n phone_number_typeďż˝ WHERE id_phone_number_type=4 for NO KEY\n update (Postgresql 9.3 )\n\n\nW dniu 2016-06-07 oďż˝15:24, Tom Lane\n pisze:\n\n\nStreamsoft - Mirek Szajowski <[email protected]> writes:\n\n\nWhy I can't execute 'select for update' but I can update?\n\n\n\nIn recent PG versions, the lock held due to having inserted an FK\ndependent row effectively only locks the key fields of the parent row.\nUPDATE can tell whether you're trying to change the row's key fields,\nand it will proceed if you aren't. SELECT FOR UPDATE has to lock the\nwhole row (since it must assume you might be intending to change any\nfields of the row); so it blocks until the FK lock goes away.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 7 Jun 2016 15:26:27 +0200", "msg_from": "Streamsoft - Mirek Szajowski <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Locking concurrency: select for update vs update" } ]
[ { "msg_contents": "Hello,\n\nI run a query transforming huge tables to a json document based on a period.\nIt works great for a modest period (little dataset).\nHowever, when increasing the period (huge dataset) I get this error:\n\nSQL ERROR[54000]\nERROR: array size exceeds the maximum allowed (1073741823)\n\n\nThanks by advance,\n\nInformations:\n\npostgresql 9.4\nshared_buffers = 55GB\n64bit Red Hat Enterprise Linux Server release 6.7\n\nthe query:\n WITH sel AS\n (SELECT ids_pat,\n ids_nda\n FROM eds.nda\n WHERE (dt_deb_nda >= '20150101'\n AND dt_deb_nda <= '20150401')),\n diag AS\n ( SELECT ids_nda_rum,\n json_agg(diago) AS diago,\n count(1) AS total\n FROM\n (SELECT ids_nda_rum,\n json_build_object( 'cd_cim', cd_cim,\n'lib_cim',lib_typ_diag_tr, 'dt_cim',dt_exec) AS diago\n FROM eds.fait_diag_tr\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n ORDER BY dt_exec) AS diago2\n GROUP BY ids_nda_rum),\n act AS\n ( SELECT ids_nda_rum,\n json_agg(acto) AS acto,\n count(1) AS total\n FROM\n ( SELECT ids_nda_rum,\n json_build_object( 'cd_act',cd_ccam, 'dt_act',dt_exec) AS acto\n FROM eds.fait_act_tr\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n ORDER BY dt_exec) AS acto2\n GROUP BY ids_nda_rum ),\n ghm AS\n ( SELECT ids_nda_rum,\n json_agg(ghmo) AS ghmo,\n count(1) AS total\n FROM\n ( SELECT ids_nda_rum,\n json_build_object( 'cd_ghm',cd_ghm, 'cd_ghs',cd_ghs,\n'status',lib_statut_tr, 'dt_maj_rum_ghm',dt_maj_rum_ghm) AS ghmo\n FROM eds.nda_rum_ghm_tr\n LEFT JOIN eds.nda_rum_tr rum USING (ids_nda_rum)\n WHERE nda_rum_ghm_tr.ids_nda IN\n (SELECT ids_nda\n FROM sel)\n AND rum.cd_rum = 'RSS'\n ORDER BY dt_maj_rum_ghm) AS ghmo\n GROUP BY ids_nda_rum ),\n lab AS\n (SELECT ids_nda,\n json_agg(lab) AS labo,\n count(1) AS total\n FROM\n (SELECT ids_nda,\n json_build_object( 'valeur_type_tr',valeur_type_tr,\n'dt_fait', dt_fait, 'unite',unite, 'cd_test_lab',cd_test_lab,\n'valeur_sign_tr',valeur_sign_tr, 'valeur_num_tr',valeur_num_tr,\n'valeur_text_tr',valeur_text_tr,\n'valeur_abnormal_tr',valeur_abnormal_tr) AS lab\n FROM eds.fait_lab_tr\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n ORDER BY dt_fait) AS labo\n GROUP BY ids_nda),\n rum AS\n ( SELECT ids_nda,\n json_agg(rum) AS rumo,\n count(1) AS total\n FROM\n ( SELECT ids_nda,\n json_build_object( 'cd_rum',cd_rum, 'dt_deb_rum',\ndt_deb_rum, 'dt_fin_rum', dt_fin_rum, 'diag',\njson_build_object('total',diag.total,'diag',diag.diago), 'act',\njson_build_object('total',act.total,'act',act.acto) ) AS rum\n FROM eds.nda_rum_tr\n LEFT JOIN diag USING (ids_nda_rum)\n LEFT JOIN act USING (ids_nda_rum)\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n AND cd_rum = 'RUM' ) AS rumo\n GROUP BY ids_nda),\n rss AS\n ( SELECT ids_nda,\n json_agg(rss) AS rsso,\n count(1) AS total\n FROM\n ( SELECT ids_nda,\n json_build_object( 'cd_rum',cd_rum, 'dt_deb_rss',\ndt_deb_rum, 'dt_fin_rss', dt_fin_rum, 'ghm',\njson_build_object('total',ghm.total,'ghm',ghm.ghmo), 'rum',\njson_build_object('total',rum.total, 'rum',rum.rumo) ) AS rss\n FROM eds.nda_rum_tr\n LEFT JOIN ghm USING (ids_nda_rum)\n LEFT JOIN rum USING (ids_nda)\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n AND cd_rum = 'RSS' ) AS rss\n GROUP BY ids_nda),\n enc AS\n (SELECT 'Encounter' AS \"resourceType\",\n cd_nda AS \"identifier\",\n duree_hospit AS \"length\",\n lib_statut_nda_tr AS \"status\",\n lib_type_nda_tr AS \"type\",\n ids_pat,\n json_build_object('start', dt_deb_nda,'end', dt_fin_nda) AS\n\"appointment\",\n json_build_object('total',lab.total, 'lab',lab.labo) AS lab,\n 
json_build_object('total',rss.total, 'rss',rss.rsso) AS rss\n FROM eds.nda_tr\n LEFT JOIN lab USING (ids_nda)\n LEFT JOIN rss USING (ids_nda)\n WHERE ids_nda IN\n (SELECT ids_nda\n FROM sel)\n ORDER BY dt_deb_nda ASC)\nSELECT 'Bundle' AS \"resourceType\",\n count(1) AS total,\n array_to_json(array_agg(ROW)) AS encounter\nFROM\n (SELECT 'Patient' AS \"resourceType\",\n ipp AS \"identifier\",\n nom AS \"name\",\n cd_sex_tr AS \"gender\",\n dt_nais AS \"birthDate\",\n json_build_array(enc.*) AS encounters\n FROM eds.patient_tr\n INNER JOIN enc USING (ids_pat) ) ROW;\n\nHello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)Thanks by advance,Informations:postgresql 9.4shared_buffers = 55GB64bit Red Hat Enterprise Linux Server release 6.7the query: WITH sel AS (SELECT ids_pat, ids_nda FROM eds.nda WHERE (dt_deb_nda >= '20150101' AND dt_deb_nda <= '20150401')), diag AS ( SELECT ids_nda_rum, json_agg(diago) AS diago, count(1) AS total FROM (SELECT ids_nda_rum, json_build_object( 'cd_cim', cd_cim, 'lib_cim',lib_typ_diag_tr, 'dt_cim',dt_exec) AS diago FROM eds.fait_diag_tr WHERE ids_nda IN (SELECT ids_nda FROM sel) ORDER BY dt_exec) AS diago2 GROUP BY ids_nda_rum), act AS ( SELECT ids_nda_rum, json_agg(acto) AS acto, count(1) AS total FROM ( SELECT ids_nda_rum, json_build_object( 'cd_act',cd_ccam, 'dt_act',dt_exec) AS acto FROM eds.fait_act_tr WHERE ids_nda IN (SELECT ids_nda FROM sel) ORDER BY dt_exec) AS acto2 GROUP BY ids_nda_rum ), ghm AS ( SELECT ids_nda_rum, json_agg(ghmo) AS ghmo, count(1) AS total FROM ( SELECT ids_nda_rum, json_build_object( 'cd_ghm',cd_ghm, 'cd_ghs',cd_ghs, 'status',lib_statut_tr, 'dt_maj_rum_ghm',dt_maj_rum_ghm) AS ghmo FROM eds.nda_rum_ghm_tr LEFT JOIN eds.nda_rum_tr rum USING (ids_nda_rum) WHERE nda_rum_ghm_tr.ids_nda IN (SELECT ids_nda FROM sel) AND rum.cd_rum = 'RSS' ORDER BY dt_maj_rum_ghm) AS ghmo GROUP BY ids_nda_rum ), lab AS (SELECT ids_nda, json_agg(lab) AS labo, count(1) AS total FROM (SELECT ids_nda, json_build_object( 'valeur_type_tr',valeur_type_tr, 'dt_fait', dt_fait, 'unite',unite, 'cd_test_lab',cd_test_lab, 'valeur_sign_tr',valeur_sign_tr, 'valeur_num_tr',valeur_num_tr, 'valeur_text_tr',valeur_text_tr, 'valeur_abnormal_tr',valeur_abnormal_tr) AS lab FROM eds.fait_lab_tr WHERE ids_nda IN (SELECT ids_nda FROM sel) ORDER BY dt_fait) AS labo GROUP BY ids_nda), rum AS ( SELECT ids_nda, json_agg(rum) AS rumo, count(1) AS total FROM ( SELECT ids_nda, json_build_object( 'cd_rum',cd_rum, 'dt_deb_rum', dt_deb_rum, 'dt_fin_rum', dt_fin_rum, 'diag', json_build_object('total',diag.total,'diag',diag.diago), 'act', json_build_object('total',act.total,'act',act.acto) ) AS rum FROM eds.nda_rum_tr LEFT JOIN diag USING (ids_nda_rum) LEFT JOIN act USING (ids_nda_rum) WHERE ids_nda IN (SELECT ids_nda FROM sel) AND cd_rum = 'RUM' ) AS rumo GROUP BY ids_nda), rss AS ( SELECT ids_nda, json_agg(rss) AS rsso, count(1) AS total FROM ( SELECT ids_nda, json_build_object( 'cd_rum',cd_rum, 'dt_deb_rss', dt_deb_rum, 'dt_fin_rss', dt_fin_rum, 'ghm', json_build_object('total',ghm.total,'ghm',ghm.ghmo), 'rum', json_build_object('total',rum.total, 'rum',rum.rumo) ) AS rss FROM eds.nda_rum_tr LEFT JOIN ghm USING (ids_nda_rum) LEFT JOIN rum USING (ids_nda) WHERE ids_nda IN (SELECT ids_nda FROM sel) AND cd_rum = 'RSS' ) AS rss GROUP BY ids_nda), enc AS (SELECT 'Encounter' AS 
\"resourceType\", cd_nda AS \"identifier\", duree_hospit AS \"length\", lib_statut_nda_tr AS \"status\", lib_type_nda_tr AS \"type\", ids_pat, json_build_object('start', dt_deb_nda,'end', dt_fin_nda) AS \"appointment\", json_build_object('total',lab.total, 'lab',lab.labo) AS lab, json_build_object('total',rss.total, 'rss',rss.rsso) AS rss FROM eds.nda_tr LEFT JOIN lab USING (ids_nda) LEFT JOIN rss USING (ids_nda) WHERE ids_nda IN (SELECT ids_nda FROM sel) ORDER BY dt_deb_nda ASC)SELECT 'Bundle' AS \"resourceType\", count(1) AS total, array_to_json(array_agg(ROW)) AS encounterFROM (SELECT 'Patient' AS \"resourceType\", ipp AS \"identifier\", nom AS \"name\", cd_sex_tr AS \"gender\", dt_nais AS \"birthDate\", json_build_array(enc.*) AS encounters FROM eds.patient_tr INNER JOIN enc USING (ids_pat) ) ROW;", "msg_date": "Tue, 7 Jun 2016 13:44:19 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "array size exceeds the maximum allowed (1073741823) when building a\n json" }, { "msg_contents": "On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:\n\n> Hello,\n>\n> I run a query transforming huge tables to a json document based on a period.\n> It works great for a modest period (little dataset).\n> However, when increasing the period (huge dataset) I get this error:\n>\n> SQL ERROR[54000]\n> ERROR: array size exceeds the maximum allowed (1073741823)\n>\n> ​https://www.postgresql.org/about/​\n\n​Maximum Field Size: 1 GB​\n\n​It doesn't matter that the data never actually is placed into a physical\ntable.\n\nDavid J.\n\nOn Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:Hello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)​https://www.postgresql.org/about/​​Maximum Field Size: 1 GB​​It doesn't matter that the data never actually is placed into a physical table.David J.", "msg_date": "Tue, 7 Jun 2016 08:31:05 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:\n\n> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I run a query transforming huge tables to a json document based on a period.\n>> It works great for a modest period (little dataset).\n>> However, when increasing the period (huge dataset) I get this error:\n>>\n>> SQL ERROR[54000]\n>> ERROR: array size exceeds the maximum allowed (1073741823)\n>>\n>> ​https://www.postgresql.org/about/​\n>\n> ​Maximum Field Size: 1 GB​\n>\n\nIt means a json cannot exceed 1GB in postgresql, right ?\nThen I must build it with an external tool ?\n​\n\n\n>\n> ​It doesn't matter that the data never actually is placed into a physical\n> table.\n>\n> David J.\n>\n>\n\n2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:Hello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). 
However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)​https://www.postgresql.org/about/​​Maximum Field Size: 1 GB​It means a json cannot exceed 1GB in postgresql, right ?Then I must build it with an external tool ?​ ​It doesn't matter that the data never actually is placed into a physical table.David J.", "msg_date": "Tue, 7 Jun 2016 14:36:46 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]> wrote:\n\n> 2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:\n>\n>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]>\n>> wrote:\n>>\n>>> Hello,\n>>>\n>>> I run a query transforming huge tables to a json document based on a period.\n>>> It works great for a modest period (little dataset).\n>>> However, when increasing the period (huge dataset) I get this error:\n>>>\n>>> SQL ERROR[54000]\n>>> ERROR: array size exceeds the maximum allowed (1073741823)\n>>>\n>>> ​https://www.postgresql.org/about/​\n>>\n>> ​Maximum Field Size: 1 GB​\n>>\n>\n> It means a json cannot exceed 1GB in postgresql, right ?\n>\n\n​Yes​\n\n\n> Then I must build it with an external tool ?\n> ​\n>\n>\n\n​​You have to do something different. Using multiple columns and/or\nmultiple rows might we workable.\n\nDavid J.\n\nOn Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]> wrote:2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:Hello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)​https://www.postgresql.org/about/​​Maximum Field Size: 1 GB​It means a json cannot exceed 1GB in postgresql, right ?​Yes​ Then I must build it with an external tool ?​ ​​You have to do something different.  Using multiple columns and/or multiple rows might we workable.David J.", "msg_date": "Tue, 7 Jun 2016 08:39:18 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "2016-06-07 14:39 GMT+02:00 David G. Johnston <[email protected]>:\n\n> On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]> wrote:\n>\n>> 2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>\n>> :\n>>\n>>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]>\n>>> wrote:\n>>>\n>>>> Hello,\n>>>>\n>>>> I run a query transforming huge tables to a json document based on a period.\n>>>> It works great for a modest period (little dataset).\n>>>> However, when increasing the period (huge dataset) I get this error:\n>>>>\n>>>> SQL ERROR[54000]\n>>>> ERROR: array size exceeds the maximum allowed (1073741823)\n>>>>\n>>>> ​https://www.postgresql.org/about/​\n>>>\n>>> ​Maximum Field Size: 1 GB​\n>>>\n>>\n>> It means a json cannot exceed 1GB in postgresql, right ?\n>>\n>\n> ​Yes​\n>\n>\n>> Then I must build it with an external tool ?\n>> ​\n>>\n>>\n>\n> ​​You have to do something different. Using multiple columns and/or\n> multiple rows might we workable.\n>\n\n​Certainly. 
Kind of disappointing, because I won't find any json builder as\nperformant as postgresql.​\n\n​\n\nWill this 1GO restriction is supposed to increase in a near future ?​\n\n\n> David J.\n>\n>\n\n2016-06-07 14:39 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]> wrote:2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:Hello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)​https://www.postgresql.org/about/​​Maximum Field Size: 1 GB​It means a json cannot exceed 1GB in postgresql, right ?​Yes​ Then I must build it with an external tool ?​ ​​You have to do something different.  Using multiple columns and/or multiple rows might we workable.​Certainly. Kind of disappointing, because I won't find any json builder as performant as postgresql.​ ​Will this 1GO restriction is supposed to increase in a near future ?​David J.", "msg_date": "Tue, 7 Jun 2016 14:42:28 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "On Tue, Jun 7, 2016 at 8:42 AM, Nicolas Paris <[email protected]> wrote:\n\n>\n>\n> 2016-06-07 14:39 GMT+02:00 David G. Johnston <[email protected]>:\n>\n>> On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]>\n>> wrote:\n>>\n>>> 2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]\n>>> >:\n>>>\n>>>> On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]>\n>>>> wrote:\n>>>>\n>>>>> Hello,\n>>>>>\n>>>>> I run a query transforming huge tables to a json document based on a period.\n>>>>> It works great for a modest period (little dataset).\n>>>>> However, when increasing the period (huge dataset) I get this error:\n>>>>>\n>>>>> SQL ERROR[54000]\n>>>>> ERROR: array size exceeds the maximum allowed (1073741823)\n>>>>>\n>>>>> ​https://www.postgresql.org/about/​\n>>>>\n>>>> ​Maximum Field Size: 1 GB​\n>>>>\n>>>\n>>> It means a json cannot exceed 1GB in postgresql, right ?\n>>>\n>>\n>> ​Yes​\n>>\n>>\n>>> Then I must build it with an external tool ?\n>>> ​\n>>>\n>>>\n>>\n>> ​​You have to do something different. Using multiple columns and/or\n>> multiple rows might we workable.\n>>\n>\n> ​Certainly. Kind of disappointing, because I won't find any json builder\n> as performant as postgresql.​\n>\n> ​\n>\n> Will this 1GO restriction is supposed to increase in a near future ?​\n>\n>\nThere has been zero chatter on the public lists about increasing any of the\nlimits on that page I linked to.\n\nDavid J.\n​\n\nOn Tue, Jun 7, 2016 at 8:42 AM, Nicolas Paris <[email protected]> wrote:2016-06-07 14:39 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 8:36 AM, Nicolas Paris <[email protected]> wrote:2016-06-07 14:31 GMT+02:00 David G. Johnston <[email protected]>:On Tue, Jun 7, 2016 at 7:44 AM, Nicolas Paris <[email protected]> wrote:Hello,I run a query transforming huge tables to a json document based on a period.It works great for a modest period (little dataset). 
However, when increasing the period (huge dataset) I get this error:SQL ERROR[54000]ERROR: array size exceeds the maximum allowed (1073741823)​https://www.postgresql.org/about/​​Maximum Field Size: 1 GB​It means a json cannot exceed 1GB in postgresql, right ?​Yes​ Then I must build it with an external tool ?​ ​​You have to do something different.  Using multiple columns and/or multiple rows might we workable.​Certainly. Kind of disappointing, because I won't find any json builder as performant as postgresql.​ ​Will this 1GO restriction is supposed to increase in a near future ?​There has been zero chatter on the public lists about increasing any of the limits on that page I linked to.David J.​", "msg_date": "Tue, 7 Jun 2016 08:58:43 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n> ​​You have to do something different. Using multiple columns and/or\n> multiple rows might we workable.\n> \n> \n> ​Certainly. Kind of disappointing, because I won't find any json builder\n> as performant as postgresql.​\n\nThat's nice to hear.\n\n> Will this 1GO restriction is supposed to increase in a near future ?​\n\nNot planned, no. Thing is, that's the limit for a field in general, not\njust JSON; changing it would be a fairly large patch. It's desireable,\nbut AFAIK nobody is working on it.\n\n-- \n--\nJosh Berkus\nRed Hat OSAS\n(any opinions are my own)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 7 Jun 2016 09:03:20 -0400", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "2016-06-07 15:03 GMT+02:00 Josh Berkus <[email protected]>:\n\n> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n> > ​​You have to do something different. Using multiple columns and/or\n> > multiple rows might we workable.\n>\n\n​Getting a unique document from multiple rows coming from postgresql is not\nthat easy... The external tools considers each postgresql JSON fields as\nstrings or have to parse it again. Parsing them would add an overhead on\nthe external tool, and I d'say this would be better to build the entire\nJSON in the external tool. This leads not to use postgresql JSON builder at\nall, and delegate this job to a tool that is able to deal with > 1GO\ndocuments.\n\n\n\n> >\n> >\n> > ​Certainly. Kind of disappointing, because I won't find any json builder\n> > as performant as postgresql.​\n>\n> That's nice to hear.\n>\n> > Will this 1GO restriction is supposed to increase in a near future ?​\n>\n> Not planned, no. Thing is, that's the limit for a field in general, not\n> just JSON; changing it would be a fairly large patch. It's desireable,\n> but AFAIK nobody is working on it.\n>\n\nComparing to mongoDB 16MO document limitation 1GO is great (\nhttp://tech.tulentsev.com/2014/02/limitations-of-mongodb/)​. But for my use\ncase this is not sufficient.\n\n\n\n> --\n> --\n> Josh Berkus\n> Red Hat OSAS\n> (any opinions are my own)\n>\n\n2016-06-07 15:03 GMT+02:00 Josh Berkus <[email protected]>:On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>     ​​You have to do something different.  
Using multiple columns and/or\n>     multiple rows might we workable.​Getting a unique document from multiple rows coming from postgresql is not that easy... The external tools considers each postgresql JSON fields as strings or have to parse it again. Parsing them would add an overhead on the external tool, and I d'say this would be better to build the entire JSON in the external tool. This leads not to use postgresql JSON builder at all, and delegate this job to a tool that is able to deal with > 1GO documents. \n>\n>\n> ​Certainly. Kind of disappointing, because I won't find any json builder\n> as performant as postgresql.​\n\nThat's nice to hear.\n\n> Will this 1GO restriction is supposed to increase in a near future ?​\n\nNot planned, no.  Thing is, that's the limit for a field in general, not\njust JSON; changing it would be a fairly large patch.  It's desireable,\nbut AFAIK nobody is working on it.Comparing to mongoDB 16MO document limitation 1GO is great (http://tech.tulentsev.com/2014/02/limitations-of-mongodb/)​. But for my use case this is not sufficient.\n\n--\n--\nJosh Berkus\nRed Hat OSAS\n(any opinions are my own)", "msg_date": "Tue, 7 Jun 2016 21:23:45 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>> You have to do something different. Using multiple columns and/or\n>> multiple rows might we workable.\n>>\n>>\n>> Certainly. Kind of disappointing, because I won't find any json builder\n>> as performant as postgresql.\n>\n> That's nice to hear.\n>\n>> Will this 1GO restriction is supposed to increase in a near future ?\n>\n> Not planned, no. Thing is, that's the limit for a field in general, not\n> just JSON; changing it would be a fairly large patch. It's desireable,\n> but AFAIK nobody is working on it.\n\nAnd there are other things to consider on top of that, like the\nmaximum allocation size for palloc, the maximum query string size,\nCOPY, etc. This is no small project, and the potential side-effects\nshould not be underestimated.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 8 Jun 2016 14:56:03 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "Michael Paquier <[email protected]> writes:\n> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>>> Will this 1GO restriction is supposed to increase in a near future ?\n\n>> Not planned, no. Thing is, that's the limit for a field in general, not\n>> just JSON; changing it would be a fairly large patch. It's desireable,\n>> but AFAIK nobody is working on it.\n\n> And there are other things to consider on top of that, like the\n> maximum allocation size for palloc, the maximum query string size,\n> COPY, etc. 
This is no small project, and the potential side-effects\n> should not be underestimated.\n\nIt's also fair to doubt that client-side code would \"just work\" with\nno functionality or performance problems for such large values.\n\nI await with interest the OP's results on other JSON processors that\nhave no issues with GB-sized JSON strings.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 08 Jun 2016 02:04:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823) when building\n a json" }, { "msg_contents": "On Wed, Jun 8, 2016 at 1:04 AM, Tom Lane <[email protected]> wrote:\n> Michael Paquier <[email protected]> writes:\n>> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n>>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>>>> Will this 1GO restriction is supposed to increase in a near future ?\n>\n>>> Not planned, no. Thing is, that's the limit for a field in general, not\n>>> just JSON; changing it would be a fairly large patch. It's desireable,\n>>> but AFAIK nobody is working on it.\n>\n>> And there are other things to consider on top of that, like the\n>> maximum allocation size for palloc, the maximum query string size,\n>> COPY, etc. This is no small project, and the potential side-effects\n>> should not be underestimated.\n>\n> It's also fair to doubt that client-side code would \"just work\" with\n> no functionality or performance problems for such large values.\n>\n> I await with interest the OP's results on other JSON processors that\n> have no issues with GB-sized JSON strings.\n\nYup. Most json libraries and tools are going to be disgusting memory\nhogs or have exponential behaviors especially when you consider you\nare doing the transformation as well. Just prettifying json documents\nover 1GB can be a real challenge.\n\nFortunately the workaround here is pretty easy. Keep your query\nexactly as is but remove the final aggregation step so that it returns\na set. Next, make a small application that runs this query and does\nthe array bits around each row (basically prepending the final result\nwith [ appending the final result with ] and putting , between rows).\nIt's essential that you use a client library that does not buffer the\nentire result in memory before emitting results. This can be done in\npsql (FETCH mode), java, libpq (single row mode), etc. I suspect\nnode.js pg module can do this as well, and there certainty will be\nothers.\n\nThe basic objective is you want the rows to be streamed out of the\ndatabase without being buffered. 
If you do that, you should be able\nto stream arbitrarily large datasets out of the database to a json\ndocument assuming the server can produce the query.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 9 Jun 2016 08:31:23 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "2016-06-09 15:31 GMT+02:00 Merlin Moncure <[email protected]>:\n\n> On Wed, Jun 8, 2016 at 1:04 AM, Tom Lane <[email protected]> wrote:\n> > Michael Paquier <[email protected]> writes:\n> >> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n> >>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n> >>>> Will this 1GO restriction is supposed to increase in a near future ?\n> >\n> >>> Not planned, no. Thing is, that's the limit for a field in general,\n> not\n> >>> just JSON; changing it would be a fairly large patch. It's desireable,\n> >>> but AFAIK nobody is working on it.\n> >\n> >> And there are other things to consider on top of that, like the\n> >> maximum allocation size for palloc, the maximum query string size,\n> >> COPY, etc. This is no small project, and the potential side-effects\n> >> should not be underestimated.\n> >\n> > It's also fair to doubt that client-side code would \"just work\" with\n> > no functionality or performance problems for such large values.\n> >\n> > I await with interest the OP's results on other JSON processors that\n> > have no issues with GB-sized JSON strings.\n>\n> Yup. Most json libraries and tools are going to be disgusting memory\n> hogs or have exponential behaviors especially when you consider you\n> are doing the transformation as well. Just prettifying json documents\n> over 1GB can be a real challenge.\n>\n> Fortunately the workaround here is pretty easy. Keep your query\n> exactly as is but remove the final aggregation step so that it returns\n> a set. Next, make a small application that runs this query and does\n> the array bits around each row (basically prepending the final result\n> with [ appending the final result with ] and putting , between rows).\n>\n\n​The point is when prepending/appending leads to deal with strings.\nTransforming each value of the resultset to a string implies to escape the\ndouble quote.\nthen:\nrow1 contains {\"hello\":\"world\"}\nstep 1 = prepend -> \"[{\\\"hello\\\":\\\"world\\\"}\"\nstep 2 = append -> \"[{\\\"hello\\\":\\\"world\\\"},\"\nand so on\nthe json is corrupted. Hopelly I am sure I am on a wrong way about that.\n\n​\n\n\n> It's essential that you use a client library that does not buffer the\n> entire result in memory before emitting results. This can be done in\n> psql (FETCH mode), java, libpq (single row mode), etc. I suspect\n> node.js pg module can do this as well, and there certainty will be\n> others.\n>\n> The basic objective is you want the rows to be streamed out of the\n> database without being buffered. 
If you do that, you should be able\n> to stream arbitrarily large datasets out of the database to a json\n> document assuming the server can produce the query.\n>\n> merlin\n>\n\n2016-06-09 15:31 GMT+02:00 Merlin Moncure <[email protected]>:On Wed, Jun 8, 2016 at 1:04 AM, Tom Lane <[email protected]> wrote:\n> Michael Paquier <[email protected]> writes:\n>> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n>>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>>>> Will this 1GO restriction is supposed to increase in a near future ?\n>\n>>> Not planned, no.  Thing is, that's the limit for a field in general, not\n>>> just JSON; changing it would be a fairly large patch.  It's desireable,\n>>> but AFAIK nobody is working on it.\n>\n>> And there are other things to consider on top of that, like the\n>> maximum allocation size for palloc, the maximum query string size,\n>> COPY, etc. This is no small project, and the potential side-effects\n>> should not be underestimated.\n>\n> It's also fair to doubt that client-side code would \"just work\" with\n> no functionality or performance problems for such large values.\n>\n> I await with interest the OP's results on other JSON processors that\n> have no issues with GB-sized JSON strings.\n\nYup.  Most json libraries and tools are going to be disgusting memory\nhogs or have exponential behaviors especially when you consider you\nare doing the transformation as well.  Just prettifying json documents\nover 1GB can be a real challenge.\n\nFortunately the workaround here is pretty easy.  Keep your query\nexactly as is but remove the final aggregation step so that it returns\na set. Next, make a small application that runs this query and does\nthe array bits around each row (basically prepending the final result\nwith [ appending the final result with ] and putting , between rows).​The point is when prepending/appending leads to deal with strings.Transforming each value of the resultset to a string implies to escape the double quote.then:row1 contains {\"hello\":\"world\"}step 1 = prepend -> \"[{\\\"hello\\\":\\\"world\\\"}\"step 2 = append -> \"[{\\\"hello\\\":\\\"world\\\"},\"and so onthe json is corrupted. Hopelly I am sure I am on a wrong way about that.​ \nIt's essential that you use a client library that does not buffer the\nentire result in memory before emitting results.   This can be done in\npsql (FETCH mode), java, libpq (single row mode), etc.   I suspect\nnode.js pg module can do this as well, and there certainty will be\nothers.\n\nThe basic objective is you want the rows to be streamed out of the\ndatabase without being buffered.  
If you do that, you should be able\nto stream arbitrarily large datasets out of the database to a json\ndocument assuming the server can produce the query.\n\nmerlin", "msg_date": "Thu, 9 Jun 2016 15:43:07 +0200", "msg_from": "Nicolas Paris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" }, { "msg_contents": "On Thu, Jun 9, 2016 at 8:43 AM, Nicolas Paris <[email protected]> wrote:\n>\n>\n> 2016-06-09 15:31 GMT+02:00 Merlin Moncure <[email protected]>:\n>>\n>> On Wed, Jun 8, 2016 at 1:04 AM, Tom Lane <[email protected]> wrote:\n>> > Michael Paquier <[email protected]> writes:\n>> >> On Tue, Jun 7, 2016 at 10:03 PM, Josh Berkus <[email protected]> wrote:\n>> >>> On 06/07/2016 08:42 AM, Nicolas Paris wrote:\n>> >>>> Will this 1GO restriction is supposed to increase in a near future ?\n>> >\n>> >>> Not planned, no. Thing is, that's the limit for a field in general,\n>> >>> not\n>> >>> just JSON; changing it would be a fairly large patch. It's\n>> >>> desireable,\n>> >>> but AFAIK nobody is working on it.\n>> >\n>> >> And there are other things to consider on top of that, like the\n>> >> maximum allocation size for palloc, the maximum query string size,\n>> >> COPY, etc. This is no small project, and the potential side-effects\n>> >> should not be underestimated.\n>> >\n>> > It's also fair to doubt that client-side code would \"just work\" with\n>> > no functionality or performance problems for such large values.\n>> >\n>> > I await with interest the OP's results on other JSON processors that\n>> > have no issues with GB-sized JSON strings.\n>>\n>> Yup. Most json libraries and tools are going to be disgusting memory\n>> hogs or have exponential behaviors especially when you consider you\n>> are doing the transformation as well. Just prettifying json documents\n>> over 1GB can be a real challenge.\n>>\n>> Fortunately the workaround here is pretty easy. Keep your query\n>> exactly as is but remove the final aggregation step so that it returns\n>> a set. 
Next, make a small application that runs this query and does\n>> the array bits around each row (basically prepending the final result\n>> with [ appending the final result with ] and putting , between rows).\n>\n>\n> The point is when prepending/appending leads to deal with strings.\n> Transforming each value of the resultset to a string implies to escape the\n> double quote.\n> then:\n> row1 contains {\"hello\":\"world\"}\n> step 1 = prepend -> \"[{\\\"hello\\\":\\\"world\\\"}\"\n> step 2 = append -> \"[{\\\"hello\\\":\\\"world\\\"},\"\n\nright 3 rows contain {\"hello\":\"world\"}\n\nbefore iteration: emit '['\nbefore every row except the first, prepend ','\nafter iteration: emit ']'\n\nyou end up with:\n[{\"hello\":\"world\"}\n,{\"hello\":\"world\"}\n,{\"hello\":\"world\"}]\n\n...which is 100% valid json as long as each row of the set is a json object.\n\nin SQL, the technique is like this:\nselect ('[' || string_agg(j::text, ',') || ']')::json from (select\njson_build_object('hello', 'world') j from generate_series(1,3)) q;\n\nthe difference is, instead of having the database do the string_agg\nstep, it's handled on the client during iteration over the output of\ngenerate_series.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 9 Jun 2016 15:36:21 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: array size exceeds the maximum allowed (1073741823)\n when building a json" } ]
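A sketch of the streaming approach Merlin outlines, kept in plain SQL with a cursor; the client fetches batches and adds the surrounding '[', ',' and ']' itself instead of having the server aggregate the whole document (generate_series stands in here for the thread's large query):

    BEGIN;
    DECLARE enc_cur CURSOR FOR
        SELECT json_build_object('hello', 'world') AS j
        FROM generate_series(1, 3);
    FETCH 1000 FROM enc_cur;   -- repeat until it returns no rows;
                               -- the client emits '[' first, ',' between
                               -- rows, and ']' after the last batch
    CLOSE enc_cur;
    COMMIT;

In psql the same effect comes from \set FETCH_COUNT 1000, which makes psql use a cursor under the hood; libpq's single-row mode is the equivalent for C clients.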
[ { "msg_contents": "Hi.\n\nI had a fight with a query planner because it doesn’t listen.\n\nThere are two indexes:\n\n - with expression in descending order:\n \"offers_offer_next_update_idx\" btree (offer_next_update(update_ts, update_freq) DESC) WHERE o_archived = false\n - unique with two columns:\n \"offers_source_id_o_key_idx\" UNIQUE, btree (source_id, o_key)\n\nHere's the query with filter for offers.source_id columns which\nis pretty slow because \"offers_source_id_o_key_idx\" is not used:\n\n EXPLAIN ANALYZE\n SELECT offers.o_url AS offers_o_url\n FROM offers\n WHERE offers.source_id = 1 AND offers.o_archived = false AND now() > offer_next_update(offers.update_ts, offers.update_freq)\n ORDER BY offer_next_update(offers.update_ts, offers.update_freq) DESC\n LIMIT 1000;\n\n Limit (cost=0.68..23403.77 rows=1000 width=116) (actual time=143.544..147.870 rows=1000 loops=1)\n -> Index Scan using offers_offer_next_update_idx on offers (cost=0.68..1017824.69 rows=43491 width=116) (actual time=143.542..147.615 rows=1000 loops=1)\n Index Cond: (now() > offer_next_update(update_ts, update_freq))\n Filter: (source_id = 1)\n Rows Removed by Filter: 121376\n Total runtime: 148.023 ms\n\n\nWhen I remove filter on offers.source_id, query plan looks like this:\n\n EXPLAIN ANALYZE\n SELECT offers.o_url AS offers_o_url\n FROM offers\n WHERE offers.o_archived = false AND now() > offer_next_update(offers.update_ts, offers.update_freq)\n ORDER BY offer_next_update(offers.update_ts, offers.update_freq) DESC\n LIMIT 1000;\n\n Limit (cost=0.68..4238.27 rows=1000 width=116) (actual time=0.060..3.877 rows=1000 loops=1)\n -> Index Scan using offers_offer_next_update_idx on offers (cost=0.68..1069411.78 rows=252363 width=116) (actual time=0.058..3.577 rows=1000 loops=1)\n Index Cond: (now() > offer_next_update(update_ts, update_freq))\n Total runtime: 4.031 ms\n\n\nI even tried to change orders of conditions in second query but it doesn't seem\nto make a difference for a planner.\n\nShouldn't query planner use offers_source_id_o_key_idx to speed up query above?\n\n\nPostgreSQL version: PostgreSQL 9.3.12 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.4-2ubuntu1~14.04.1) 4.8.4, 64-bit\n\nConfiguration:\n name | current_setting | source\n------------------------------+----------------------------------------+----------------------\n application_name | psql | client\n checkpoint_completion_target | 0.9 | configuration file\n checkpoint_segments | 3 | configuration file\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 128MB | configuration file\n external_pid_file | /var/run/postgresql/9.3-main.pid | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n max_connections | 100 | configuration file\n max_locks_per_transaction | 168 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | configuration file\n shared_buffers | 4GB | configuration file\n temp_buffers | 12MB | configuration file\n unix_socket_directories | /var/run/postgresql | configuration file\n work_mem | 16MB | configuration file\n\n\nDefinitions:\n\nCREATE OR REPLACE FUNCTION public.offer_next_update(last timestamp without time zone, minutes smallint)\n RETURNS timestamp without time zone\n LANGUAGE plpgsql\n IMMUTABLE\nAS 
$function$\nBEGIN\n    RETURN last + (minutes || ' min')::interval;\nEND\n$function$
", "msg_date": "Tue, 7 Jun 2016 15:39:14 +0200", "msg_from": "=?utf-8?Q?Rafa=C5=82_Gutkowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Combination of partial and full indexes" }, { "msg_contents": "I don't think offers_source_id_o_key_idx will be used at all. It is a UNIQUE index on (source_id, o_key), but your query does not filter on any \"o_key\", so reading that index does not provide the pointers needed to fetch the actual data in the table.\n\nI would try an index on source_id, offer_next_update(offers.update_ts, offers.update_freq) and see what happens.\n\nHTH\nGerardo\n\n----- Original message -----\n> From: \"Rafał Gutkowski\" <[email protected]>\n> To: [email protected]\n> Sent: Tuesday, June 7, 2016 10:39:14\n> Subject: [PERFORM] Combination of partial and full indexes\n> \n> Hi.\n> \n> I had a fight with a query planner because it doesn’t listen.\n> [...]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 7 Jun 2016 14:36:13 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Combination of partial and full indexes" }, { "msg_contents": "I thought that the first column from the left in a multi-column index can and will be used just as if it were a single column index.\n\nIt doesn’t seem to work with unique indexes, which ultimately makes sense.\n\nThank you Gerardo.\n\n> On 07 Jun 2016, at 19:36, Gerardo Herzig <[email protected]> wrote:\n> \n> I don't think offers_source_id_o_key_idx will be used at all. It is a UNIQUE index on (source_id, o_key), but your query does not filter on any \"o_key\", so reading that index does not provide the pointers needed to fetch the actual data in the table.\n> \n> I would try an index on source_id, offer_next_update(offers.update_ts, offers.update_freq) and see what happens.\n> \n> HTH\n> Gerardo\n> [...]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 8 Jun 2016 10:52:13 +0200", "msg_from": "=?utf-8?Q?Rafa=C5=82_Gutkowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Combination of partial and full indexes" }, { "msg_contents": "Although creating the index `btree (source_id)` still changes nothing. Neither does `btree (source_id) WHERE o_archived = false`.\n\nIt looks like partial indexes and full indexes cannot mix together even when they have the same condition.\n\n> On 08 Jun 2016, at 10:52, Rafał Gutkowski <[email protected]> wrote:\n> \n> I thought that the first column from the left in a multi-column index can and will be used just as if it were a single column index.\n> \n> It doesn’t seem to work with unique indexes, which ultimately makes sense.\n> \n> Thank you Gerardo.\n> [...]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 8 Jun 2016 11:01:12 +0200", "msg_from": "=?utf-8?Q?Rafa=C5=82_Gutkowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Combination of partial and full indexes" } ]
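A concrete form of Gerardo's suggestion, for anyone who wants to try it: one index covering both the source_id equality and the ordered expression. This is a sketch only; the index name is made up, and it assumes the IMMUTABLE offer_next_update function defined earlier in the thread:

CREATE INDEX offers_source_next_update_idx
    ON offers (source_id, offer_next_update(update_ts, update_freq) DESC)
    WHERE o_archived = false;

With source_id pinned by the equality condition, a scan of this index returns rows already ordered by offer_next_update(...) DESC, so the planner can satisfy the WHERE clause and the ORDER BY ... LIMIT 1000 without reading and discarding the 121376 rows the original plan filtered out.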
[ { "msg_contents": "Hello,\nFirst time poster here. Bear with me.\nUsing PostgreSQL 9.5\nI have a situation where I have a LIKE and a NOT LIKE in the same query to\nidentify strings in a varchar field. Since I am using wildcards, I have\ncreated a GIN index on the field in question, which makes LIKE '%xxxx%'\nsearches run very fast. The problem is the NOT LIKE phrases, which (as\nwould be expected) force a sequential scan. Being that we're talking about\nmillions of records, this is not desirable.\nHere's the question...\nIs there a way, *using a single query*, to emulate the process of running\nthe LIKE part first, then running the NOT LIKE just on those results? I\ncan accomplish this in a multi-step process by separating the single query\ninto two queries, populating a temporary table with the results of the\nLIKEs, then running the NOT LIKEs on the temporary table. For various\nreasons, this is not the ideal solution for me.\nOr is there another approach that would accomplish the same thing with the\nsame level of performance?\n\nHello,First time poster here.  Bear with me.Using PostgreSQL 9.5I have a situation where I have a LIKE and a NOT LIKE in the same query to identify strings in a varchar field.  Since I am using wildcards, I have created a GIN index on the field in question, which makes LIKE '%xxxx%' searches run very fast.  The problem is the NOT LIKE phrases, which (as would be expected) force a sequential scan.  Being that we're talking about millions of records, this is not desirable.Here's the question...Is there a way, using a single query, to emulate the process of running the LIKE part first, then running the NOT LIKE just on those results?  I can accomplish this in a multi-step process by separating the single query into two queries, populating a temporary table with the results of the LIKEs, then running the NOT LIKEs on the temporary table.  For various reasons, this is not the ideal solution for me.Or is there another approach that would accomplish the same thing with the same level of performance?", "msg_date": "Tue, 7 Jun 2016 21:57:44 -0700", "msg_from": "Ed Felstein <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of LIKE/NOT LIKE when used in single query" }, { "msg_contents": "On Wednesday, June 8, 2016, Ed Felstein <[email protected]> wrote:\n\n> Hello,\n> First time poster here. Bear with me.\n> Using PostgreSQL 9.5\n> I have a situation where I have a LIKE and a NOT LIKE in the same query to\n> identify strings in a varchar field. Since I am using wildcards, I have\n> created a GIN index on the field in question, which makes LIKE '%xxxx%'\n> searches run very fast. The problem is the NOT LIKE phrases, which (as\n> would be expected) force a sequential scan. Being that we're talking about\n> millions of records, this is not desirable.\n> Here's the question...\n> Is there a way, *using a single query*, to emulate the process of running\n> the LIKE part first, then running the NOT LIKE just on those results? I\n> can accomplish this in a multi-step process by separating the single query\n> into two queries, populating a temporary table with the results of the\n> LIKEs, then running the NOT LIKEs on the temporary table. 
For various\n> reasons, this is not the ideal solution for me.\n> Or is there another approach that would accomplish the same thing with the\n> same level of performance?\n>\n\n\nTry AND...where col like '' and col not like ''\n\nOr a CTE (with)\n\nWith likeqry as ( select where like )\nSelect from likeqry where not like\n\n(sorry for brevity but not at a pc)\n\nDavid J.\n\nOn Wednesday, June 8, 2016, Ed Felstein <[email protected]> wrote:Hello,First time poster here.  Bear with me.Using PostgreSQL 9.5I have a situation where I have a LIKE and a NOT LIKE in the same query to identify strings in a varchar field.  Since I am using wildcards, I have created a GIN index on the field in question, which makes LIKE '%xxxx%' searches run very fast.  The problem is the NOT LIKE phrases, which (as would be expected) force a sequential scan.  Being that we're talking about millions of records, this is not desirable.Here's the question...Is there a way, using a single query, to emulate the process of running the LIKE part first, then running the NOT LIKE just on those results?  I can accomplish this in a multi-step process by separating the single query into two queries, populating a temporary table with the results of the LIKEs, then running the NOT LIKEs on the temporary table.  For various reasons, this is not the ideal solution for me.Or is there another approach that would accomplish the same thing with the same level of performance?Try AND...where col like '' and col not like ''Or a CTE (with)With likeqry as ( select where like )Select from likeqry where not like(sorry for brevity but not at a pc)David J.", "msg_date": "Wed, 8 Jun 2016 01:33:24 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of LIKE/NOT LIKE when used in single query" }, { "msg_contents": "On Tue, Jun 7, 2016 at 9:57 PM, Ed Felstein <[email protected]> wrote:\n> Hello,\n> First time poster here. Bear with me.\n> Using PostgreSQL 9.5\n> I have a situation where I have a LIKE and a NOT LIKE in the same query to\n> identify strings in a varchar field. Since I am using wildcards, I have\n> created a GIN index on the field in question, which makes LIKE '%xxxx%'\n> searches run very fast. The problem is the NOT LIKE phrases, which (as\n> would be expected) force a sequential scan. Being that we're talking about\n> millions of records, this is not desirable.\n> Here's the question...\n> Is there a way, using a single query, to emulate the process of running the\n> LIKE part first, then running the NOT LIKE just on those results?\n\nJust do it. 
In my hands, the planner is smart enough to figure it out\nfor itself.\n\nexplain analyze select * from stuff where synonym like '%BAT%' and\nsynonym not like '%col not like%' ;\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on stuff (cost=16.10..63.08 rows=13 width=14)\n(actual time=9.465..10.642 rows=23 loops=1)\n Recheck Cond: (synonym ~~ '%BAT%'::text)\n Rows Removed by Index Recheck: 76\n Filter: (synonym !~~ '%col not like%'::text)\n Heap Blocks: exact=57\n -> Bitmap Index Scan on integrity_synonym_synonym_idx\n(cost=0.00..16.10 rows=13 width=0) (actual time=8.847..8.847 rows=99\nloops=1)\n Index Cond: (synonym ~~ '%BAT%'::text)\n Planning time: 18.261 ms\n Execution time: 10.932 ms\n\n\nSo it is using the index for the positive match, and filtering those\nresults for the negative match, just as you wanted.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 9 Jun 2016 09:37:50 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of LIKE/NOT LIKE when used in single query" } ]
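The thread never says which GIN opclass backs Ed's index. For anyone reproducing this, a minimal sketch of the usual way to make LIKE '%xxx%' indexable is pg_trgm, here borrowing Jeff's table and column names (the index name and the NOT LIKE pattern are made up):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX stuff_synonym_trgm_idx
    ON stuff USING gin (synonym gin_trgm_ops);

-- the positive LIKE can use the index; the NOT LIKE is applied
-- as a filter over just the matching rows, as in Jeff's plan above
EXPLAIN ANALYZE
SELECT * FROM stuff
WHERE synonym LIKE '%BAT%'
  AND synonym NOT LIKE '%BATMAN%';

This gives exactly the two-step behavior Ed wanted to emulate: the indexable condition narrows the candidate rows, and the negative condition filters them.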
[ { "msg_contents": "In our Django app we have messages (currently about 7 million in table\nmsgs_message) and labels (about 300), and a join table to associate\nmessages with labels (about 500,000 in msgs_message_labels). Not sure\nyou'll need them, but here are the relevant table schemas:\n\nCREATE TABLE msgs_message\n(\n id INTEGER PRIMARY KEY NOT NULL,\n type VARCHAR NOT NULL,\n text TEXT NOT NULL,\n is_archived BOOLEAN NOT NULL,\n created_on TIMESTAMP WITH TIME ZONE NOT NULL,\n contact_id INTEGER NOT NULL,\n org_id INTEGER NOT NULL,\n case_id INTEGER,\n backend_id INTEGER NOT NULL,\n is_handled BOOLEAN NOT NULL,\n is_flagged BOOLEAN NOT NULL,\n is_active BOOLEAN NOT NULL,\n has_labels BOOLEAN NOT NULL,\n CONSTRAINT\nmsgs_message_contact_id_5c8e3f216c115643_fk_contacts_contact_id FOREIGN KEY\n(contact_id) REFERENCES contacts_contact (id),\n CONSTRAINT msgs_message_org_id_81a0adfcc99151d_fk_orgs_org_id FOREIGN\nKEY (org_id) REFERENCES orgs_org (id),\n CONSTRAINT msgs_message_case_id_51998150f9629c_fk_cases_case_id FOREIGN\nKEY (case_id) REFERENCES cases_case (id)\n);\nCREATE UNIQUE INDEX msgs_message_backend_id_key ON msgs_message\n(backend_id);\nCREATE INDEX msgs_message_6d82f13d ON msgs_message (contact_id);\nCREATE INDEX msgs_message_9cf869aa ON msgs_message (org_id);\nCREATE INDEX msgs_message_7f12ca67 ON msgs_message (case_id);\n\nCREATE TABLE msgs_message_labels\n(\n id INTEGER PRIMARY KEY NOT NULL,\n message_id INTEGER NOT NULL,\n label_id INTEGER NOT NULL,\n CONSTRAINT\nmsgs_message_lab_message_id_1dfa44628fe448dd_fk_msgs_message_id FOREIGN KEY\n(message_id) REFERENCES msgs_message (id),\n CONSTRAINT\nmsgs_message_labels_label_id_77cbdebd8d255b7a_fk_msgs_label_id FOREIGN KEY\n(label_id) REFERENCES msgs_label (id)\n);\nCREATE UNIQUE INDEX msgs_message_labels_message_id_label_id_key ON\nmsgs_message_labels (message_id, label_id);\nCREATE INDEX msgs_message_labels_4ccaa172 ON msgs_message_labels\n(message_id);\nCREATE INDEX msgs_message_labels_abec2aca ON msgs_message_labels (label_id);\n\nUsers can search for messages, and they are returned page by page in\nreverse chronological order. 
There are several partial multi-column indexes\non the message table, but the one used for the example queries below is\n\nCREATE INDEX msgs_inbox ON msgs_message(org_id, created_on DESC)\nWHERE is_active = TRUE AND is_handled = TRUE AND is_archived = FALSE AND\nhas_labels = TRUE;\n\nSo a typical query for the latest page of messages looks like (\nhttps://explain.depesz.com/s/G9ew):\n\nSELECT \"msgs_message\".*\nFROM \"msgs_message\"\nWHERE (\"msgs_message\".\"org_id\" = 7\n AND \"msgs_message\".\"is_active\" = true\n AND \"msgs_message\".\"is_handled\" = true\n AND \"msgs_message\".\"has_labels\" = true\n AND \"msgs_message\".\"is_archived\" = false\n AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000\n+00:00'::timestamptz\n) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n\nBut users can also search for messages that have one or more labels,\nleading to queries that look like:\n\nSELECT DISTINCT \"msgs_message\".*\nFROM \"msgs_message\"\nINNER JOIN \"msgs_message_labels\" ON ( \"msgs_message\".\"id\" =\n\"msgs_message_labels\".\"message_id\" )\nWHERE (\"msgs_message\".\"org_id\" = 7\n AND \"msgs_message\".\"is_active\" = true\n AND \"msgs_message\".\"is_handled\" = true\n AND \"msgs_message_labels\".\"label_id\" IN (127, 128, 135, 136, 137, 138,\n140, 141, 143, 144)\n AND \"msgs_message\".\"has_labels\" = true\n AND \"msgs_message\".\"is_archived\" = false\n AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000\n+00:00'::timestamptz\n) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n\nMost of time, this query performs like https://explain.depesz.com/s/ksOC\n(~15ms). It's no longer using the using the msgs_inbox index, but it's\nplenty fast. However, sometimes it performs like\nhttps://explain.depesz.com/s/81c (67000ms)\n\nAnd if you run it again, it'll be fast again. Am I correct in interpreting\nthat second explain as being slow because msgs_message_pkey isn't cached?\nIt looks like it read from that index 3556 times, and each time took 18.559\n(?) ms, and that adds up to 65,996ms. The database server says it has lots\nof free memory so is there something I should be doing to keep that index\nin memory?\n\nGenerally speaking, is there a good strategy for optimising queries like\nthese which involve two tables?\n\n - I tried moving the label references into an int array on msgs_message,\n and then using btree_gin to create a multi-column index involving the array\n column, but that doesn't appear to be very useful for these ordered queries\n because it's not an ordered index.\n - I tried adding created_on to msgs_message_labels table but I couldn't\n find a way of avoiding the in-memory sort.\n - Have thought about dynamically creating partial indexes for each label\n using an array column on msgs_message to hold label ids, and index\n condition like WHERE label_ids && ARRAY[123] but not sure what other\n problems I'll run into with hundreds of indexes on the same table.\n\nServer is an Amazon RDS instance with default settings and Postgres 9.3.10,\nwith one other database in the instance.\n\nAll advice very much appreciated, thanks\n\n-- \n*Rowan Seymour* | +260 964153686\n\nIn our Django app we have messages (currently about 7 million in table msgs_message) and labels (about 300), and a join table to associate messages with labels (about 500,000 in msgs_message_labels). 
Not sure you'll need them, but here are the relevant table schemas:CREATE TABLE msgs_message(    id INTEGER PRIMARY KEY NOT NULL,    type VARCHAR NOT NULL,    text TEXT NOT NULL,    is_archived BOOLEAN NOT NULL,    created_on TIMESTAMP WITH TIME ZONE NOT NULL,    contact_id INTEGER NOT NULL,    org_id INTEGER NOT NULL,    case_id INTEGER,    backend_id INTEGER NOT NULL,    is_handled BOOLEAN NOT NULL,    is_flagged BOOLEAN NOT NULL,    is_active BOOLEAN NOT NULL,    has_labels BOOLEAN NOT NULL,    CONSTRAINT msgs_message_contact_id_5c8e3f216c115643_fk_contacts_contact_id FOREIGN KEY (contact_id) REFERENCES contacts_contact (id),    CONSTRAINT msgs_message_org_id_81a0adfcc99151d_fk_orgs_org_id FOREIGN KEY (org_id) REFERENCES orgs_org (id),    CONSTRAINT msgs_message_case_id_51998150f9629c_fk_cases_case_id FOREIGN KEY (case_id) REFERENCES cases_case (id));CREATE UNIQUE INDEX msgs_message_backend_id_key ON msgs_message (backend_id);CREATE INDEX msgs_message_6d82f13d ON msgs_message (contact_id);CREATE INDEX msgs_message_9cf869aa ON msgs_message (org_id);CREATE INDEX msgs_message_7f12ca67 ON msgs_message (case_id);CREATE TABLE msgs_message_labels(    id INTEGER PRIMARY KEY NOT NULL,    message_id INTEGER NOT NULL,    label_id INTEGER NOT NULL,    CONSTRAINT msgs_message_lab_message_id_1dfa44628fe448dd_fk_msgs_message_id FOREIGN KEY (message_id) REFERENCES msgs_message (id),    CONSTRAINT msgs_message_labels_label_id_77cbdebd8d255b7a_fk_msgs_label_id FOREIGN KEY (label_id) REFERENCES msgs_label (id));CREATE UNIQUE INDEX msgs_message_labels_message_id_label_id_key ON msgs_message_labels (message_id, label_id);CREATE INDEX msgs_message_labels_4ccaa172 ON msgs_message_labels (message_id);CREATE INDEX msgs_message_labels_abec2aca ON msgs_message_labels (label_id);Users can search for messages, and they are returned page by page in reverse chronological order. There are several partial multi-column indexes on the message table, but the one used for the example queries below isCREATE INDEX msgs_inbox ON msgs_message(org_id, created_on DESC)WHERE is_active = TRUE AND is_handled = TRUE AND is_archived = FALSE AND has_labels = TRUE;So a typical query for the latest page of messages looks like (https://explain.depesz.com/s/G9ew):SELECT \"msgs_message\".* FROM \"msgs_message\" WHERE (\"msgs_message\".\"org_id\" = 7     AND \"msgs_message\".\"is_active\" = true     AND \"msgs_message\".\"is_handled\" = true     AND \"msgs_message\".\"has_labels\" = true     AND \"msgs_message\".\"is_archived\" = false     AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000+00:00'::timestamptz) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50But users can also search for messages that have one or more labels, leading to queries that look like:SELECT DISTINCT \"msgs_message\".* FROM \"msgs_message\" INNER JOIN \"msgs_message_labels\" ON ( \"msgs_message\".\"id\" = \"msgs_message_labels\".\"message_id\" ) WHERE (\"msgs_message\".\"org_id\" = 7     AND \"msgs_message\".\"is_active\" = true     AND \"msgs_message\".\"is_handled\" = true     AND \"msgs_message_labels\".\"label_id\" IN (127, 128, 135, 136, 137, 138, 140, 141, 143, 144)     AND \"msgs_message\".\"has_labels\" = true     AND \"msgs_message\".\"is_archived\" = false     AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000+00:00'::timestamptz) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\nMost of time, this query performs like https://explain.depesz.com/s/ksOC (~15ms). 
It's no longer using the using the msgs_inbox index, but it's plenty fast. However, sometimes it performs like https://explain.depesz.com/s/81c (67000ms)And if you run it again, it'll be fast again. Am I correct in interpreting that second explain as being slow because msgs_message_pkey isn't cached? It looks like it read from that index 3556 times, and each time took 18.559 (?) ms, and that adds up to 65,996ms. The database server says it has lots of free memory so is there something I should be doing to keep that index in memory?Generally speaking, is there a good strategy for optimising queries like these which involve two tables?I tried moving the label references into an int array on msgs_message, and then using btree_gin to create a multi-column index involving the array column, but that doesn't appear to be very useful for these ordered queries because it's not an ordered index.I tried adding created_on to msgs_message_labels table but I couldn't find a way of avoiding the in-memory sort.Have thought about dynamically creating partial indexes for each label using an array column on msgs_message to hold label ids, and index condition like WHERE label_ids && ARRAY[123] but not sure what other problems I'll run into with hundreds of indexes on the same table.Server is an Amazon RDS instance with default settings and Postgres 9.3.10, with one other database in the instance.All advice very much appreciated, thanks-- Rowan Seymour | +260 964153686", "msg_date": "Fri, 10 Jun 2016 15:04:01 +0200", "msg_from": "Rowan Seymour <[email protected]>", "msg_from_op": true, "msg_subject": "Many-to-many performance problem" }, { "msg_contents": "Rowan Seymour <[email protected]> writes:\n> Most of time, this query performs like https://explain.depesz.com/s/ksOC\n> (~15ms). It's no longer using the using the msgs_inbox index, but it's\n> plenty fast. However, sometimes it performs like\n> https://explain.depesz.com/s/81c (67000ms)\n> And if you run it again, it'll be fast again.\n\nIt looks like everything is fine as long as all the data the query needs\nis already in PG's shared buffers. As soon as it has to go to disk,\nyou're hurting, because disk reads seem to be taking ~10ms on average.\n\n> Server is an Amazon RDS instance with default settings and Postgres 9.3.10,\n\nAnd I think you just explained your problem. Did you spring for adequate\nguaranteed IOPS on this instance? If not, you need to, or else live with\nerratic performance. If you did, you have a beef to raise with AWS that\nyou're not getting the performance you paid for.\n\nYou might be able to ameliorate matters by raising shared_buffers, but\nunless your database isn't growing that approach has limited future.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 10 Jun 2016 10:13:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many-to-many performance problem" }, { "msg_contents": "\nI thought this was a really interesting case, and would love to learn from it, please bare with me if my questions are naive.\n\nOn 2016-06-10 08:13, Tom Lane wrote:\n> Rowan Seymour <[email protected]> writes:\n>> Most of time, this query performs like https://explain.depesz.com/s/ksOC\n>> (~15ms). It's no longer using the using the msgs_inbox index, but it's\n>> plenty fast. 
However, sometimes it performs like\n> https://explain.depesz.com/s/81c (67000ms)\n> And if you run it again, it'll be fast again.\n\nIt looks like everything is fine as long as all the data the query needs\nis already in PG's shared buffers. As soon as it has to go to disk,\nyou're hurting, because disk reads seem to be taking ~10ms on average.\n\n> Server is an Amazon RDS instance with default settings and Postgres 9.3.10,\n\nAnd I think you just explained your problem. Did you spring for adequate\nguaranteed IOPS on this instance? If not, you need to, or else live with\nerratic performance. If you did, you have a beef to raise with AWS that\nyou're not getting the performance you paid for.\n\nYou might be able to ameliorate matters by raising shared_buffers, but\nunless your database isn't growing, that approach has a limited future.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 10 Jun 2016 10:13:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many-to-many performance problem" }, { "msg_contents": "\nI thought this was a really interesting case, and would love to learn from it, please bear with me if my questions are naive.\n\nOn 2016-06-10 08:13, Tom Lane wrote:\n> Rowan Seymour <[email protected]> writes:\n>> Most of the time, this query performs like https://explain.depesz.com/s/ksOC\n>> (~15ms). It's no longer using the msgs_inbox index, but it's\n>> plenty fast.
Some\nof the other plan nodes show lower averages, though, so I was conservative\nand said \"~10 ms\".\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 10 Jun 2016 11:23:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many-to-many performance problem" }, { "msg_contents": "On 10.06.2016 16:04, Rowan Seymour wrote:\n> In our Django app we have messages (currently about 7 million in table \n> msgs_message) and labels (about 300), and a join table to associate \n> messages with labels (about 500,000 in msgs_message_labels). Not sure \n> you'll need them, but here are the relevant table schemas:\n>\n> CREATE TABLE msgs_message\n> (\n> id INTEGER PRIMARY KEY NOT NULL,\n> type VARCHAR NOT NULL,\n> text TEXT NOT NULL,\n> is_archived BOOLEAN NOT NULL,\n> created_on TIMESTAMP WITH TIME ZONE NOT NULL,\n> contact_id INTEGER NOT NULL,\n> org_id INTEGER NOT NULL,\n> case_id INTEGER,\n> backend_id INTEGER NOT NULL,\n> is_handled BOOLEAN NOT NULL,\n> is_flagged BOOLEAN NOT NULL,\n> is_active BOOLEAN NOT NULL,\n> has_labels BOOLEAN NOT NULL,\n> CONSTRAINT \n> msgs_message_contact_id_5c8e3f216c115643_fk_contacts_contact_id \n> FOREIGN KEY (contact_id) REFERENCES contacts_contact (id),\n> CONSTRAINT msgs_message_org_id_81a0adfcc99151d_fk_orgs_org_id \n> FOREIGN KEY (org_id) REFERENCES orgs_org (id),\n> CONSTRAINT msgs_message_case_id_51998150f9629c_fk_cases_case_id \n> FOREIGN KEY (case_id) REFERENCES cases_case (id)\n> );\n> CREATE UNIQUE INDEX msgs_message_backend_id_key ON msgs_message \n> (backend_id);\n> CREATE INDEX msgs_message_6d82f13d ON msgs_message (contact_id);\n> CREATE INDEX msgs_message_9cf869aa ON msgs_message (org_id);\n> CREATE INDEX msgs_message_7f12ca67 ON msgs_message (case_id);\n>\n> CREATE TABLE msgs_message_labels\n> (\n> id INTEGER PRIMARY KEY NOT NULL,\n> message_id INTEGER NOT NULL,\n> label_id INTEGER NOT NULL,\n> CONSTRAINT \n> msgs_message_lab_message_id_1dfa44628fe448dd_fk_msgs_message_id \n> FOREIGN KEY (message_id) REFERENCES msgs_message (id),\n> CONSTRAINT \n> msgs_message_labels_label_id_77cbdebd8d255b7a_fk_msgs_label_id FOREIGN \n> KEY (label_id) REFERENCES msgs_label (id)\n> );\n> CREATE UNIQUE INDEX msgs_message_labels_message_id_label_id_key ON \n> msgs_message_labels (message_id, label_id);\n> CREATE INDEX msgs_message_labels_4ccaa172 ON msgs_message_labels \n> (message_id);\n> CREATE INDEX msgs_message_labels_abec2aca ON msgs_message_labels \n> (label_id);\n>\n> Users can search for messages, and they are returned page by page in \n> reverse chronological order. 
There are several partial multi-column \n> indexes on the message table, but the one used for the example queries \n> below is\n>\n> CREATE INDEX msgs_inbox ON msgs_message(org_id, created_on DESC)\n> WHERE is_active = TRUE AND is_handled = TRUE AND is_archived = FALSE \n> AND has_labels = TRUE;\n>\n> So a typical query for the latest page of messages looks like \n> (https://explain.depesz.com/s/G9ew):\n>\n> SELECT \"msgs_message\".*\n> FROM \"msgs_message\"\n> WHERE (\"msgs_message\".\"org_id\" = 7\n> AND \"msgs_message\".\"is_active\" = true\n> AND \"msgs_message\".\"is_handled\" = true\n> AND \"msgs_message\".\"has_labels\" = true\n> AND \"msgs_message\".\"is_archived\" = false\n> AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000 \n> <tel:06.381000>+00:00'::timestamptz\n> ) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n>\n> But users can also search for messages that have one or more labels, \n> leading to queries that look like:\n>\n> SELECT DISTINCT \"msgs_message\".*\n> FROM \"msgs_message\"\n> INNER JOIN \"msgs_message_labels\" ON ( \"msgs_message\".\"id\" = \n> \"msgs_message_labels\".\"message_id\" )\n> WHERE (\"msgs_message\".\"org_id\" = 7\n> AND \"msgs_message\".\"is_active\" = true\n> AND \"msgs_message\".\"is_handled\" = true\n> AND \"msgs_message_labels\".\"label_id\" IN (127, 128, 135, 136, 137, \n> 138, 140, 141, 143, 144)\n> AND \"msgs_message\".\"has_labels\" = true\n> AND \"msgs_message\".\"is_archived\" = false\n> AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000 \n> <tel:06.381000>+00:00'::timestamptz\n> ) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n>\n> Most of time, this query performs like \n> https://explain.depesz.com/s/ksOC (~15ms). It's no longer using the \n> using the msgs_inbox index, but it's plenty fast. However, sometimes \n> it performs like https://explain.depesz.com/s/81c (67000ms)\n>\n> And if you run it again, it'll be fast again. Am I correct in \n> interpreting that second explain as being slow because \n> msgs_message_pkey isn't cached? It looks like it read from that index \n> 3556 times, and each time took 18.559 (?) ms, and that adds up \n> to 65,996ms. The database server says it has lots of free memory so is \n> there something I should be doing to keep that index in memory?\n>\n> Generally speaking, is there a good strategy for optimising queries \n> like these which involve two tables?\n>\n> * I tried moving the label references into an int array on\n> msgs_message, and then using btree_gin to create a multi-column\n> index involving the array column, but that doesn't appear to be\n> very useful for these ordered queries because it's not an ordered\n> index.\n> * I tried adding created_on to msgs_message_labels table but I\n> couldn't find a way of avoiding the in-memory sort.\n> * Have thought about dynamically creating partial indexes for each\n> label using an array column on msgs_message to hold label ids, and\n> index condition like WHERE label_ids && ARRAY[123] but not sure\n> what other problems I'll run into with hundreds of indexes on the\n> same table.\n>\n> Server is an Amazon RDS instance with default settings and Postgres \n> 9.3.10, with one other database in the instance.\n>\n> All advice very much appreciated, thanks\n>\n> -- \n> *Rowan Seymour* | +260 964153686 <tel:%2B260%20964153686>\nHello! 
What do you mean by\n\"Server is an Amazon RDS instance with default settings and Postgres \n9.3.10, with one other database in the instance.\"\nPG is with default config or smth else?\nIs it with default config as it is as from compile version? If so you \nshould definitely have to do some tuning on it.\nBy looking on plan i saw a lot of disk read. It can be linked to small \nshared memory dedicated to PG exactly what Tom said.\nCan you share pg config or raise for example shared_buffers parameter?\n\n\nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 10.06.2016 16:04, Rowan Seymour\n wrote:\n\n\n\nIn our Django app we have messages (currently about 7\n million in table msgs_message) and labels (about 300), and a\n join table to associate messages with labels (about 500,000 in\n msgs_message_labels). Not sure you'll need them, but here are\n the relevant table schemas:\n\n\n\nCREATE TABLE msgs_message\n(\n    id INTEGER PRIMARY KEY NOT NULL,\n    type VARCHAR NOT NULL,\n    text TEXT NOT NULL,\n    is_archived BOOLEAN NOT NULL,\n    created_on TIMESTAMP WITH TIME ZONE NOT NULL,\n    contact_id INTEGER NOT NULL,\n    org_id INTEGER NOT NULL,\n    case_id INTEGER,\n    backend_id INTEGER NOT NULL,\n    is_handled BOOLEAN NOT NULL,\n    is_flagged BOOLEAN NOT NULL,\n    is_active BOOLEAN NOT NULL,\n    has_labels BOOLEAN NOT NULL,\n    CONSTRAINT\n msgs_message_contact_id_5c8e3f216c115643_fk_contacts_contact_id\n FOREIGN KEY (contact_id) REFERENCES contacts_contact (id),\n    CONSTRAINT\n msgs_message_org_id_81a0adfcc99151d_fk_orgs_org_id FOREIGN\n KEY (org_id) REFERENCES orgs_org (id),\n    CONSTRAINT\n msgs_message_case_id_51998150f9629c_fk_cases_case_id FOREIGN\n KEY (case_id) REFERENCES cases_case (id)\n);\nCREATE UNIQUE INDEX msgs_message_backend_id_key ON\n msgs_message (backend_id);\nCREATE INDEX msgs_message_6d82f13d ON msgs_message\n (contact_id);\nCREATE INDEX msgs_message_9cf869aa ON msgs_message\n (org_id);\nCREATE INDEX msgs_message_7f12ca67 ON msgs_message\n (case_id);\n\n\n\n\nCREATE TABLE msgs_message_labels\n(\n    id INTEGER PRIMARY KEY NOT NULL,\n    message_id INTEGER NOT NULL,\n    label_id INTEGER NOT NULL,\n    CONSTRAINT\n msgs_message_lab_message_id_1dfa44628fe448dd_fk_msgs_message_id\n FOREIGN KEY (message_id) REFERENCES msgs_message (id),\n    CONSTRAINT\n msgs_message_labels_label_id_77cbdebd8d255b7a_fk_msgs_label_id\n FOREIGN KEY (label_id) REFERENCES msgs_label (id)\n);\nCREATE UNIQUE INDEX\n msgs_message_labels_message_id_label_id_key ON\n msgs_message_labels (message_id, label_id);\nCREATE INDEX msgs_message_labels_4ccaa172 ON\n msgs_message_labels (message_id);\nCREATE INDEX msgs_message_labels_abec2aca ON\n msgs_message_labels (label_id);\n\n\n\nUsers can search for messages, and they are returned page\n by page in reverse chronological order. 
There are several\n partial multi-column indexes on the message table, but the one\n used for the example queries below is\n\n\n\nCREATE INDEX msgs_inbox ON msgs_message(org_id,\n created_on DESC)\nWHERE is_active = TRUE AND is_handled = TRUE AND\n is_archived = FALSE AND has_labels = TRUE;\n\n\n\nSo a typical query for the latest page of messages looks\n like (https://explain.depesz.com/s/G9ew):\n\n\nSELECT \"msgs_message\".* \nFROM \"msgs_message\" \nWHERE (\"msgs_message\".\"org_id\" = 7 \n    AND \"msgs_message\".\"is_active\" = true \n    AND \"msgs_message\".\"is_handled\" = true \n    AND \"msgs_message\".\"has_labels\" = true \n    AND \"msgs_message\".\"is_archived\" = false \n    AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000+00:00'::timestamptz\n) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n\n\nBut users can also search for messages that have one or\n more labels, leading to queries that look like:\n\n\n\nSELECT DISTINCT \"msgs_message\".* \nFROM \"msgs_message\" \nINNER JOIN \"msgs_message_labels\" ON ( \"msgs_message\".\"id\"\n = \"msgs_message_labels\".\"message_id\" ) \nWHERE (\"msgs_message\".\"org_id\" = 7 \n    AND \"msgs_message\".\"is_active\" = true \n    AND \"msgs_message\".\"is_handled\" = true \n    AND \"msgs_message_labels\".\"label_id\" IN (127, 128,\n 135, 136, 137, 138, 140, 141, 143, 144) \n    AND \"msgs_message\".\"has_labels\" = true \n    AND \"msgs_message\".\"is_archived\" = false \n    AND \"msgs_message\".\"created_on\" <\n '2016-06-10T07:11:06.381000+00:00'::timestamptz\n) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n\n\n\nMost of time, this query performs like https://explain.depesz.com/s/ksOC\n (~15ms). It's no longer using the using the msgs_inbox index,\n but it's plenty fast. However, sometimes it performs like https://explain.depesz.com/s/81c\n (67000ms)\n\n\nAnd if you run it again, it'll be fast again. Am I correct\n in interpreting that second explain as being slow because\n msgs_message_pkey isn't cached? It looks like it read from\n that index 3556 times, and each time took 18.559 (?) ms, and\n that adds up to 65,996ms. The database server says it has lots\n of free memory so is there something I should be doing to keep\n that index in memory?\n\n\nGenerally speaking, is there a good strategy for optimising\n queries like these which involve two tables?\n\n\nI tried moving the label references into an int array on\n msgs_message, and then using btree_gin to create a\n multi-column index involving the array column, but that\n doesn't appear to be very useful for these ordered queries\n because it's not an ordered index.\nI tried adding created_on to msgs_message_labels table\n but I couldn't find a way of avoiding the in-memory sort.\nHave thought about dynamically creating partial indexes\n for each label using an array column on msgs_message to\n hold label ids, and index condition like WHERE label_ids\n && ARRAY[123] but not sure what other problems\n I'll run into with hundreds of indexes on the same table.\n\nServer is an Amazon RDS instance with default settings\n and Postgres 9.3.10, with one other database in the\n instance.\n\n\n\nAll advice very much appreciated, thanks\n\n\n -- \n\n\n\n\n\nRowan Seymour | +260\n 964153686\n\n\n\n\n\n\n\n\n Hello! What do you mean by \n \"Server is an Amazon RDS instance with default settings and Postgres\n 9.3.10, with one other database in the instance.\" \n PG is with default config or smth else? 
\n Is it  with default config as it is as from compile version? If so\n you should definitely have to do some tuning on it.\n By looking on plan i saw a lot of disk read. It can be linked to\n small shared memory dedicated to PG exactly what Tom said. \n Can you share pg config or raise for example shared_buffers\n parameter?\n\n\nAlex Ignatov\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Fri, 10 Jun 2016 19:11:46 +0300", "msg_from": "Alex Ignatov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Many-to-many performance problem" }, { "msg_contents": "When you create an Postgres RDS instance, it's comes with a\n\"default.postgres9.3\" parameter group which contains substitutions based on\nthe server size. The defaults for the memory related settings are:\n\neffective_cache_size = {DBInstanceClassMemory/16384}\nmaintenance_work_mem = GREATEST({DBInstanceClassMemory/63963136*1024},65536)\nshared_buffers = {DBInstanceClassMemory/32768}\ntemp_buffers = <not set>\nwork_mem = <not set>\n\nAccording to\nhttp://www.davidmkerr.com/2013/11/tune-your-postgres-rds-instance-via.html,\nthe units for effective_cache_size on AWS RDS, are 8kb blocks (am not sure\nwhy this is...), so DBInstanceClassMemory/16384 = DBInstanceClassMemory/(2\n* 8kb) = 50% of system memory.\n\nWe upgraded the server over the weekend which doubled the system memory and\nincreased the available IOPS, and that appears to have greatly improved the\nsituation, but there have still been a few timeouts. I'm wondering now if\nactivity on the other database in this instance doesn't occasionally push\nour indexes out of memory.\n\nThanks, Rowan\n\nOn 10 June 2016 at 18:11, Alex Ignatov <[email protected]> wrote:\n\n>\n> On 10.06.2016 16:04, Rowan Seymour wrote:\n>\n> In our Django app we have messages (currently about 7 million in table\n> msgs_message) and labels (about 300), and a join table to associate\n> messages with labels (about 500,000 in msgs_message_labels). 
Not sure\n> you'll need them, but here are the relevant table schemas:\n>\n> CREATE TABLE msgs_message\n> (\n> id INTEGER PRIMARY KEY NOT NULL,\n> type VARCHAR NOT NULL,\n> text TEXT NOT NULL,\n> is_archived BOOLEAN NOT NULL,\n> created_on TIMESTAMP WITH TIME ZONE NOT NULL,\n> contact_id INTEGER NOT NULL,\n> org_id INTEGER NOT NULL,\n> case_id INTEGER,\n> backend_id INTEGER NOT NULL,\n> is_handled BOOLEAN NOT NULL,\n> is_flagged BOOLEAN NOT NULL,\n> is_active BOOLEAN NOT NULL,\n> has_labels BOOLEAN NOT NULL,\n> CONSTRAINT\n> msgs_message_contact_id_5c8e3f216c115643_fk_contacts_contact_id FOREIGN KEY\n> (contact_id) REFERENCES contacts_contact (id),\n> CONSTRAINT msgs_message_org_id_81a0adfcc99151d_fk_orgs_org_id FOREIGN\n> KEY (org_id) REFERENCES orgs_org (id),\n> CONSTRAINT msgs_message_case_id_51998150f9629c_fk_cases_case_id\n> FOREIGN KEY (case_id) REFERENCES cases_case (id)\n> );\n> CREATE UNIQUE INDEX msgs_message_backend_id_key ON msgs_message\n> (backend_id);\n> CREATE INDEX msgs_message_6d82f13d ON msgs_message (contact_id);\n> CREATE INDEX msgs_message_9cf869aa ON msgs_message (org_id);\n> CREATE INDEX msgs_message_7f12ca67 ON msgs_message (case_id);\n>\n> CREATE TABLE msgs_message_labels\n> (\n> id INTEGER PRIMARY KEY NOT NULL,\n> message_id INTEGER NOT NULL,\n> label_id INTEGER NOT NULL,\n> CONSTRAINT\n> msgs_message_lab_message_id_1dfa44628fe448dd_fk_msgs_message_id FOREIGN KEY\n> (message_id) REFERENCES msgs_message (id),\n> CONSTRAINT\n> msgs_message_labels_label_id_77cbdebd8d255b7a_fk_msgs_label_id FOREIGN KEY\n> (label_id) REFERENCES msgs_label (id)\n> );\n> CREATE UNIQUE INDEX msgs_message_labels_message_id_label_id_key ON\n> msgs_message_labels (message_id, label_id);\n> CREATE INDEX msgs_message_labels_4ccaa172 ON msgs_message_labels\n> (message_id);\n> CREATE INDEX msgs_message_labels_abec2aca ON msgs_message_labels\n> (label_id);\n>\n> Users can search for messages, and they are returned page by page in\n> reverse chronological order. 
There are several partial multi-column indexes\n> on the message table, but the one used for the example queries below is\n>\n> CREATE INDEX msgs_inbox ON msgs_message(org_id, created_on DESC)\n> WHERE is_active = TRUE AND is_handled = TRUE AND is_archived = FALSE AND\n> has_labels = TRUE;\n>\n> So a typical query for the latest page of messages looks like (\n> https://explain.depesz.com/s/G9ew):\n>\n> SELECT \"msgs_message\".*\n> FROM \"msgs_message\"\n> WHERE (\"msgs_message\".\"org_id\" = 7\n> AND \"msgs_message\".\"is_active\" = true\n> AND \"msgs_message\".\"is_handled\" = true\n> AND \"msgs_message\".\"has_labels\" = true\n> AND \"msgs_message\".\"is_archived\" = false\n> AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000\n> +00:00'::timestamptz\n> ) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n>\n> But users can also search for messages that have one or more labels,\n> leading to queries that look like:\n>\n> SELECT DISTINCT \"msgs_message\".*\n> FROM \"msgs_message\"\n> INNER JOIN \"msgs_message_labels\" ON ( \"msgs_message\".\"id\" =\n> \"msgs_message_labels\".\"message_id\" )\n> WHERE (\"msgs_message\".\"org_id\" = 7\n> AND \"msgs_message\".\"is_active\" = true\n> AND \"msgs_message\".\"is_handled\" = true\n> AND \"msgs_message_labels\".\"label_id\" IN (127, 128, 135, 136, 137, 138,\n> 140, 141, 143, 144)\n> AND \"msgs_message\".\"has_labels\" = true\n> AND \"msgs_message\".\"is_archived\" = false\n> AND \"msgs_message\".\"created_on\" < '2016-06-10T07:11:06.381000\n> +00:00'::timestamptz\n> ) ORDER BY \"msgs_message\".\"created_on\" DESC LIMIT 50\n>\n> Most of time, this query performs like <https://explain.depesz.com/s/ksOC>\n> https://explain.depesz.com/s/ksOC (~15ms). It's no longer using the using\n> the msgs_inbox index, but it's plenty fast. However, sometimes it performs\n> like <https://explain.depesz.com/s/81c>https://explain.depesz.com/s/81c\n> (67000ms)\n>\n> And if you run it again, it'll be fast again. Am I correct in interpreting\n> that second explain as being slow because msgs_message_pkey isn't cached?\n> It looks like it read from that index 3556 times, and each time took 18.559\n> (?) ms, and that adds up to 65,996ms. The database server says it has lots\n> of free memory so is there something I should be doing to keep that index\n> in memory?\n>\n> Generally speaking, is there a good strategy for optimising queries like\n> these which involve two tables?\n>\n> - I tried moving the label references into an int array on\n> msgs_message, and then using btree_gin to create a multi-column index\n> involving the array column, but that doesn't appear to be very useful for\n> these ordered queries because it's not an ordered index.\n> - I tried adding created_on to msgs_message_labels table but I\n> couldn't find a way of avoiding the in-memory sort.\n> - Have thought about dynamically creating partial indexes for each\n> label using an array column on msgs_message to hold label ids, and index\n> condition like WHERE label_ids && ARRAY[123] but not sure what other\n> problems I'll run into with hundreds of indexes on the same table.\n>\n> Server is an Amazon RDS instance with default settings and Postgres\n> 9.3.10, with one other database in the instance.\n>\n> All advice very much appreciated, thanks\n>\n> --\n> *Rowan Seymour* | +260 964153686 <%2B260%20964153686>\n>\n> Hello! 
What do you mean by\n> \"Server is an Amazon RDS instance with default settings and Postgres\n> 9.3.10, with one other database in the instance.\"?\n> Is PG running with the default config, i.e. exactly as it comes from the\n> compiled defaults? If so, you should definitely do some tuning on it.\n> Looking at the plan I saw a lot of disk reads. It can be linked to the\n> small shared memory dedicated to PG, which is exactly what Tom said.\n> Can you share your PG config, or raise the shared_buffers parameter, for\n> example?\n>\n>\n> Alex Ignatov\n> Postgres Professional: http://www.postgrespro.com\n> The Russian Postgres Company\n>\n\n\n-- \n*Rowan Seymour* | +260 964153686 | @rowanseymour", "msg_date": "Thu, 16 Jun 2016 10:29:49 +0200", "msg_from": "Rowan Seymour <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Many-to-many performance problem" } ]
[ { "msg_contents": "Hi,\n\nI have an application which stores a large amount of hex-encoded hash\nstrings (nearly 100 GB of them), which means:\n\n - The number of distinct characters (alphabet) is limited to 16\n - Each string is of the same length, 64 characters\n - The strings are essentially random\n\nCreating a B-Tree index on this results in the index size being larger than\nthe table itself, and there are disk space constraints.\n\nI've found the SP-GiST radix tree index, and thought it could be a good\nmatch for the data because of the above constraints. An attempt to create\nit (as in CREATE INDEX ON t USING spgist(field_name)) apparently takes more\nthan 12 hours (while a similar B-tree index takes a few hours at most), so\nI've interrupted it because \"it probably is not going to finish in a\nreasonable time\". Some slides I found on the SP-GiST index suggest that both\nbuild time and size are not really suitable for this purpose.\n\nMy question is: what would be the most size-efficient index for this\nsituation?", "msg_date": "Wed, 15 Jun 2016 11:34:18 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes for hashes" }, { "msg_contents": "Hello Ivan,\n\n> I have an application which stores a large amount of hex-encoded hash\n> strings (nearly 100 GB of them), which means:\n>\n> * The number of distinct characters (alphabet) is limited to 16\n> * Each string is of the same length, 64 characters\n> * The strings are essentially random\n>\n> Creating a B-Tree index on this results in the index size being larger\n> than the table itself, and there are disk space constraints.\n>\n> [...]\n>\n> My question is: what would be the most size-efficient index for this\n> situation?\n\nIt depends on what you want to query. What about a BRIN index:\nhttps://www.postgresql.org/docs/9.5/static/brin-intro.html\n\nThis will result in a very small size, but depending on what you want to\nquery it will fit or not fit your needs.
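\n\nFor illustration, a minimal sketch (untested; t/field_name are the\nplaceholder names from your CREATE INDEX example):\n\nCREATE INDEX ON t USING brin (field_name);\n\nNote that BRIN summarizes ranges of physically adjacent pages, so with\nessentially random values it stays tiny but will not help point lookups\nmuch; it pays off mainly when values correlate with physical order.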
\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 14:45:49 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:\n> Hi,\n> \n> I have an application which stores a large amount of hex-encoded hash\n> strings (nearly 100 GB of them), which means:\n> \n> - The number of distinct characters (alphabet) is limited to 16\n> - Each string is of the same length, 64 characters\n> - The strings are essentially random\n> \n> [...]\n> \n> My question is: what would be the most size-efficient index for this\n> situation?\n\nHi Ivan,\n\nIf the strings are really random, then maybe a function index on the first\n4, 8, or 16 characters could be used to narrow the search space, without\nneeding to index all 64. If they are not \"good\" random numbers, you could\nuse a hash index on the strings. It will be much smaller, since it\ncurrently uses a 32-bit hash. It has a number of caveats and is not\ncurrently crash-safe, but it seems like it might work in your environment.\nYou can also use a functional index on a hash function applied to your\nvalues, with a btree, to give you crash safety.
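\n\nA rough sketch of the prefix idea (untested; t/field_name are placeholder\nnames, and the prefix length is a tradeoff between index size and the\nnumber of false matches to recheck):\n\nCREATE INDEX ON t (substr(field_name, 1, 8));\n\n-- lookups must repeat the indexed expression, then recheck the full value:\nSELECT * FROM t\n WHERE substr(field_name, 1, 8) = substr('<64-char hash>', 1, 8)\n   AND field_name = '<64-char hash>';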
\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 08:03:07 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On 15 June 2016 at 15:03, [email protected] <[email protected]> wrote:\n\n> If the strings are really random, then maybe a function index on the first\n> 4, 8, or 16 characters could be used to narrow the search space, without\n> needing to index all 64. If they are not \"good\" random numbers, you could\n> use a hash index on the strings. It will be much smaller, since it\n> currently uses a 32-bit hash.\n> [...]\n\nHi,\n\nI figured the hash index might be helpful and I've tried it in the\nmeantime: on one of the smaller tables (which is 51 GB in size), a btree\nindex is 32 GB, while the hash index is 22 GB (so btree is around 45%\nlarger).
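\n\nFor reference, what I did amounts to the following (sketch; the real table\nand column names differ):\n\nCREATE INDEX ON t USING hash (field_name);\n\n-- compare the on-disk sizes of the indexes:\nSELECT relname, pg_size_pretty(pg_relation_size(oid))\nFROM pg_class\nWHERE relname LIKE 't_field_name_idx%';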
\n\nI don't suppose there's an effort in progress to make hash indexes use WAL?\n:D", "msg_date": "Wed, 15 Jun 2016 15:09:04 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 03:09:04PM +0200, Ivan Voras wrote:\n> I figured the hash index might be helpful and I've tried it in the\n> meantime: on one of the smaller tables (which is 51 GB in size), a btree\n> index is 32 GB, while the hash index is 22 GB (so btree is around 45%\n> larger).\n> \n> I don't suppose there's an effort in progress to make hash indexes use WAL?\n> :D\n\nHi Ivan,\n\nSeveral people have looked at it, but it has not made it to the top of\nanyone's to-do list. So if you need WAL and crash safety, a functional\nindex on a hash of your values is currently your best bet.
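\n\nA minimal sketch of that approach, using the built-in md5() just for\nillustration (t/field_name are placeholder names):\n\nCREATE INDEX ON t (md5(field_name));\n\n-- lookups go through the same expression; the equality recheck on the\n-- full value guards against (unlikely) md5 collisions:\nSELECT * FROM t\n WHERE md5(field_name) = md5('<64-char hash>')\n   AND field_name = '<64-char hash>';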
\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 08:16:14 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:\n> I have an application which stores a large amount of hex-encoded hash\n> strings (nearly 100 GB of them), which means:\n\nWhy do you keep them hex encoded, and not use bytea?\n\nI made a sample table with 1 million rows, looking like this:\n\n     Table \"public.new\"\n Column  | Type  | Modifiers \n---------+-------+-----------\n texthex | text  | \n a_bytea | bytea | \n\nvalues are like:\n\n$ select * from new limit 10;\n                             texthex                              |                              a_bytea \n------------------------------------------------------------------+--------------------------------------------------------------------\n c968f64426b941bc9a8f6d4e87fc151c7a7192679837618570b7989c67c31e2f | \\xc968f64426b941bc9a8f6d4e87fc151c7a7192679837618570b7989c67c31e2f\n 61dffbf002d7fc3db9df5953aca7f2e434a78d4c5fdd0db6f90f43ee8c4371db | \\x61dffbf002d7fc3db9df5953aca7f2e434a78d4c5fdd0db6f90f43ee8c4371db\n 757acf228adf2357356fd38d03b80529771f211e0ad3b35b66a23d6de53e5033 | \\x757acf228adf2357356fd38d03b80529771f211e0ad3b35b66a23d6de53e5033\n fba35b8b33a7fccc2ac7d96389d43be509ff17636fe3c0f8a33af6d009f84f15 | \\xfba35b8b33a7fccc2ac7d96389d43be509ff17636fe3c0f8a33af6d009f84f15\n ecd8587a8b9acae650760cea8683f8e1c131c4054c0d64b1d7de0ff269ccc61a | \\xecd8587a8b9acae650760cea8683f8e1c131c4054c0d64b1d7de0ff269ccc61a\n 11782c73bb3fc9f281b41d3eff8c1e7907b3494b3abe7b6982c1e88f49dad2ea | \\x11782c73bb3fc9f281b41d3eff8c1e7907b3494b3abe7b6982c1e88f49dad2ea\n 5862bd8d645e4d44997a485c616bc18f1acabeaec5df3c3b09b9d4c08643e852 | \\x5862bd8d645e4d44997a485c616bc18f1acabeaec5df3c3b09b9d4c08643e852\n 2d09a5cca2c03153a55faa3aff13df0f0593f4355a1b2cfcf9237c2931b4918c | \\x2d09a5cca2c03153a55faa3aff13df0f0593f4355a1b2cfcf9237c2931b4918c\n 2186eb2bcc12319ee6e00f7e08a1d61e379a75c01c579c29d0338693bc31c7c7 | \\x2186eb2bcc12319ee6e00f7e08a1d61e379a75c01c579c29d0338693bc31c7c7\n 2061bd05049c51bd1162e4d77f72a37f06d2397fc522ef587ed172a5ad8d57aa | \\x2061bd05049c51bd1162e4d77f72a37f06d2397fc522ef587ed172a5ad8d57aa\n(10 rows)\n\ncreated two indexes:\ncreate index i1 on new (texthex);\ncreate index i2 on new (a_bytea);\n\ni1 is 91MB, and i2 is 56MB.\n\nIndex creation was also much faster - best out of 3 runs for i1 was\n4928.982 ms, best out of 3 runs for i2 was 2047.648 ms.
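\n\nThe application can keep passing hex strings and convert at the query\nboundary, for example:\n\nselect * from new\n where a_bytea = decode('c968f64426b941bc9a8f6d4e87fc151c7a7192679837618570b7989c67c31e2f', 'hex');\n\n(and encode(a_bytea, 'hex') converts back for display).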
\n\nBest regards,\n\ndepesz\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 15:38:33 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "Hi,\n\nI understand your idea, and have also been thinking about it. Basically,\nexisting applications would need to be modified, however slightly, and that\nwouldn't be good.\n\nOn 15 June 2016 at 15:38, hubert depesz lubaczewski <[email protected]>\nwrote:\n> Why do you keep them hex encoded, and not use bytea?\n> [...]\n> i1 is 91MB, and i2 is 56MB.
", "msg_date": "Wed, 15 Jun 2016 15:54:07 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "Hi Ivan,\n\nHow about using crc32, and indexing the integers that come out of it? You\ncould split the hash into two parts, crc32 each part, and create a\ncomposite index on both integers (the crc32 results): instead of 64\ncharacters, you only employ two integers as the index key.
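\n\nSomething along these lines (hypothetical sketch: PostgreSQL has no\nbuilt-in crc32(), so this assumes an IMMUTABLE crc32(text) RETURNS integer\nfunction provided by an extension or a small C/PL function):\n\nCREATE INDEX ON t (crc32(substr(field_name, 1, 32)),\n                   crc32(substr(field_name, 33, 32)));\n\n-- lookups repeat both expressions and recheck the full value:\nSELECT * FROM t\n WHERE crc32(substr(field_name, 1, 32)) = crc32(substr('<64-char hash>', 1, 32))\n   AND crc32(substr(field_name, 33, 32)) = crc32(substr('<64-char hash>', 33, 32))\n   AND field_name = '<64-char hash>';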
\n\nRegards,\n\nJul\n\nOn Wed, Jun 15, 2016 at 8:54 PM, Ivan Voras <[email protected]> wrote:\n> I understand your idea, and have also been thinking about it. Basically,\n> existing applications would need to be modified, however slightly, and\n> that wouldn't be good.\n\n-- \nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source an Open Mind Company)\n\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028
", "msg_date": "Wed, 15 Jun 2016 20:58:15 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "Hi,\n\nThis idea is similar to the substring one, and while it does give excellent\nperformance and small size, it requires application code modifications, so\nit's out.\n\nOn 15 June 2016 at 15:58, julyanto SUTANDANG <[email protected]>\nwrote:\n> How about using crc32, and indexing the integers that come out of it? You\n> could split the hash into two parts, crc32 each part, and create a\n> composite index on both integers (the crc32 results): instead of 64\n> characters, you only employ two integers as the index key.\n> [...]
", "msg_date": "Wed, 15 Jun 2016 16:00:22 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "Hi,\n\nJust for testing... is there a fast (i.e. written in C) crc32 or a similar\nsmall hash function for PostgreSQL?\n\nOn 15 June 2016 at 16:00, Ivan Voras <[email protected]> wrote:\n> This idea is similar to the substring one, and while it does give\n> excellent performance and small size, it requires application code\n> modifications, so it's out.
Basically,\n>>> existing applications would need to be modified, however slightly, and that\n>>> wouldn't be good.\n>>>\n>>>\n>>>\n>>>\n>>> On 15 June 2016 at 15:38, hubert depesz lubaczewski <[email protected]>\n>>> wrote:\n>>>\n>>>> On Wed, Jun 15, 2016 at 11:34:18AM +0200, Ivan Voras wrote:\n>>>> > I have an application which stores a large amounts of hex-encoded hash\n>>>> > strings (nearly 100 GB of them), which means:\n>>>>\n>>>> Why do you keep them hex encoded, and not use bytea?\n>>>>\n>>>> I made a sample table with 1 million rows, looking like this:\n>>>>\n>>>> Table \"public.new\"\n>>>> Column | Type | Modifiers\n>>>> ---------+-------+-----------\n>>>> texthex | text |\n>>>> a_bytea | bytea |\n>>>>\n>>>> values are like:\n>>>>\n>>>> $ select * from new limit 10;\n>>>> texthex |\n>>>> a_bytea\n>>>>\n>>>> ------------------------------------------------------------------+--------------------------------------------------------------------\n>>>> c968f64426b941bc9a8f6d4e87fc151c7a7192679837618570b7989c67c31e2f |\n>>>> \\xc968f64426b941bc9a8f6d4e87fc151c7a7192679837618570b7989c67c31e2f\n>>>> 61dffbf002d7fc3db9df5953aca7f2e434a78d4c5fdd0db6f90f43ee8c4371db |\n>>>> \\x61dffbf002d7fc3db9df5953aca7f2e434a78d4c5fdd0db6f90f43ee8c4371db\n>>>> 757acf228adf2357356fd38d03b80529771f211e0ad3b35b66a23d6de53e5033 |\n>>>> \\x757acf228adf2357356fd38d03b80529771f211e0ad3b35b66a23d6de53e5033\n>>>> fba35b8b33a7fccc2ac7d96389d43be509ff17636fe3c0f8a33af6d009f84f15 |\n>>>> \\xfba35b8b33a7fccc2ac7d96389d43be509ff17636fe3c0f8a33af6d009f84f15\n>>>> ecd8587a8b9acae650760cea8683f8e1c131c4054c0d64b1d7de0ff269ccc61a |\n>>>> \\xecd8587a8b9acae650760cea8683f8e1c131c4054c0d64b1d7de0ff269ccc61a\n>>>> 11782c73bb3fc9f281b41d3eff8c1e7907b3494b3abe7b6982c1e88f49dad2ea |\n>>>> \\x11782c73bb3fc9f281b41d3eff8c1e7907b3494b3abe7b6982c1e88f49dad2ea\n>>>> 5862bd8d645e4d44997a485c616bc18f1acabeaec5df3c3b09b9d4c08643e852 |\n>>>> \\x5862bd8d645e4d44997a485c616bc18f1acabeaec5df3c3b09b9d4c08643e852\n>>>> 2d09a5cca2c03153a55faa3aff13df0f0593f4355a1b2cfcf9237c2931b4918c |\n>>>> \\x2d09a5cca2c03153a55faa3aff13df0f0593f4355a1b2cfcf9237c2931b4918c\n>>>> 2186eb2bcc12319ee6e00f7e08a1d61e379a75c01c579c29d0338693bc31c7c7 |\n>>>> \\x2186eb2bcc12319ee6e00f7e08a1d61e379a75c01c579c29d0338693bc31c7c7\n>>>> 2061bd05049c51bd1162e4d77f72a37f06d2397fc522ef587ed172a5ad8d57aa |\n>>>> \\x2061bd05049c51bd1162e4d77f72a37f06d2397fc522ef587ed172a5ad8d57aa\n>>>> (10 rows)\n>>>>\n>>>> created two indexes:\n>>>> create index i1 on new (texthex);\n>>>> create index i2 on new (a_bytea);\n>>>>\n>>>> i1 is 91MB, and i2 is 56MB.\n>>>>\n>>>> Index creation was also much faster - best out of 3 runs for i1 was\n>>>> 4928.982\n>>>> ms, best out of 3 runs for i2 was 2047.648 ms\n>>>>\n>>>> Best regards,\n>>>>\n>>>> depesz\n>>>>\n>>>>\n>>>\n>>\n>>\n>> --\n>>\n>>\n>> Julyanto SUTANDANG\n>>\n>> Equnix Business Solutions, PT\n>> (An Open Source an Open Mind Company)\n>>\n>> Pusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\n>> Pusat\n>> T: +6221 22866662 F: +62216315281 M: +628164858028\n>>\n>>\n>> Caution: The information enclosed in this email (and any attachments) may\n>> be legally privileged and/or confidential and is intended only for the use\n>> of the addressee(s). No addressee should forward, print, copy, or otherwise\n>> reproduce this message in any manner that would allow it to be viewed by\n>> any individual not originally listed as a recipient. 
If the reader of this\n>> message is not the intended recipient, you are hereby notified that any\n>> unauthorized disclosure, dissemination, distribution, copying or the taking\n>> of any action in reliance on the information herein is strictly prohibited.\n>> If you have received this communication in error, please immediately notify\n>> the sender and delete this message.Unless it is made by the authorized\n>> person, any views expressed in this message are those of the individual\n>> sender and may not necessarily reflect the views of PT Equnix Business\n>> Solutions.\n>>\n>\n>\n", "msg_date": "Wed, 15 Jun 2016 16:20:46 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On 06/15/2016 07:20 AM, Ivan Voras wrote:\n> Hi,\n>\n> Just for testing... is there a fast (i.e. written in C) crc32 or a\n> similar small hash function for PostgreSQL?\n\nhttps://www.postgresql.org/docs/9.5/static/pgcrypto.html\n\nWe also have a builtin md5().\n\nJD\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 07:36:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 04:20:46PM +0200, Ivan Voras wrote:\n> Hi,\n> \n> Just for testing... is there a fast (i.e. 
written in C) crc32 or a similar\n> small hash function for PostgreSQL?\n> \n\nHi Ivan,\n\nHere is an extension that provides a number of different hash\nfunctions, including a version of the hash function used internally:\n\nhttps://github.com/markokr/pghashlib\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 09:53:44 -0500", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 6:16 AM, [email protected] <[email protected]> wrote:\n> On Wed, Jun 15, 2016 at 03:09:04PM +0200, Ivan Voras wrote:\n>> On 15 June 2016 at 15:03, [email protected] <[email protected]> wrote:\n>>\n>>\n>> I don't suppose there's an effort in progress to make hash indexes use WAL?\n>> :D\n>\n> Hi Ivan,\n>\n> Several people have looked at it but it has not made it to the top of anyone's\n> to-do list.\n\nI don't know if it is the top of his todo list, but Amit seems pretty\nserious about it:\n\nhttps://www.postgresql.org/message-id/CAA4eK1LfzcZYxLoXS874Ad0+S-ZM60U9bwcyiUZx9mHZ-KCWhw@mail.gmail.com\n\nI hope to give him some help if I get a chance.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 15 Jun 2016 08:14:45 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Wed, Jun 15, 2016 at 6:34 AM, Ivan Voras <[email protected]> wrote:\n>\n> I have an application which stores a large amounts of hex-encoded hash\n> strings (nearly 100 GB of them), which means:\n>\n> The number of distinct characters (alphabet) is limited to 16\n> Each string is of the same length, 64 characters\n> The strings are essentially random\n>\n> Creating a B-Tree index on this results in the index size being larger than\n> the table itself, and there are disk space constraints.\n>\n> I've found the SP-GIST radix tree index, and thought it could be a good\n> match for the data because of the above constraints. An attempt to create it\n> (as in CREATE INDEX ON t USING spgist(field_name)) apparently takes more\n> than 12 hours (while a similar B-tree index takes a few hours at most), so\n> I've interrupted it because \"it probably is not going to finish in a\n> reasonable time\". 
Some slides I found on the spgist index allude that both\n> build time and size are not really suitable for this purpose.\n\n\nI've found that hash btree indexes tend to perform well in these situations:\n\nCREATE INDEX ON t USING btree (hashtext(fieldname));\n\nHowever, you'll have to modify your queries to query for both, the\nhashtext and the text itself:\n\nSELECT * FROM t WHERE hashtext(fieldname) = hashtext('blabla') AND\nfieldname = 'blabla';\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 17 Jun 2016 00:51:03 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "This way is doing faster using crc32(data) than hashtext since crc32 is\nhardware accelerated in intel (and others perhaps)\nthis way (crc32) is no way the same as hash, much way faster than\nothers...\n\nRegards,\n\n\nOn Fri, Jun 17, 2016 at 10:51 AM, Claudio Freire <[email protected]>\nwrote:\n\n> On Wed, Jun 15, 2016 at 6:34 AM, Ivan Voras <[email protected]> wrote:\n> >\n> > I have an application which stores a large amounts of hex-encoded hash\n> > strings (nearly 100 GB of them), which means:\n> >\n> > The number of distinct characters (alphabet) is limited to 16\n> > Each string is of the same length, 64 characters\n> > The strings are essentially random\n> >\n> > Creating a B-Tree index on this results in the index size being larger\n> than\n> > the table itself, and there are disk space constraints.\n> >\n> > I've found the SP-GIST radix tree index, and thought it could be a good\n> > match for the data because of the above constraints. An attempt to\n> create it\n> > (as in CREATE INDEX ON t USING spgist(field_name)) apparently takes more\n> > than 12 hours (while a similar B-tree index takes a few hours at most),\n> so\n> > I've interrupted it because \"it probably is not going to finish in a\n> > reasonable time\". Some slides I found on the spgist index allude that\n> both\n> > build time and size are not really suitable for this purpose.\n>\n>\n> I've found that hash btree indexes tend to perform well in these\n> situations:\n>\n> CREATE INDEX ON t USING btree (hashtext(fieldname));\n>\n> However, you'll have to modify your queries to query for both, the\n> hashtext and the text itself:\n>\n> SELECT * FROM t WHERE hashtext(fieldname) = hashtext('blabla') AND\n> fieldname = 'blabla';\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source an Open Mind Company)\n\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. 
If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n", "msg_date": "Fri, 17 Jun 2016 11:09:02 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "On Fri, Jun 17, 2016 at 1:09 AM, julyanto SUTANDANG\n<[email protected]> wrote:\n> This way is doing faster using crc32(data) than hashtext since crc32 is\n> hardware accelerated in intel (and others perhaps)\n> this way (crc32)  is no way the same as hash, much way faster than others...\n>\n> Regards,\n\nSure, but I've had uniformity issues with crc32.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 17 Jun 2016 01:18:52 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "And in any case, there's no crc32 in the built-in pgcrypto module.\n\n\nOn 17 June 2016 at 06:18, Claudio Freire <[email protected]> wrote:\n\n> On Fri, Jun 17, 2016 at 1:09 AM, julyanto SUTANDANG\n> <[email protected]> wrote:\n> > This way is doing faster using crc32(data) than hashtext since crc32 is\n> > hardware accelerated in intel (and others perhaps)\n> > this way (crc32)  is no way the same as hash, much way faster than\n> others...\n> >\n> > Regards,\n>\n> Sure, but I've had uniformity issues with crc32.\n>\n", "msg_date": "Fri, 17 Jun 2016 10:32:09 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes for hashes" }, { "msg_contents": "Crc32 is great because it is supported by Intel hardware; unfortunately\nyou have to code something like this:\n\nhttp://stackoverflow.com/questions/31184201/how-to-implement-crc32-taking-advantage-of-intel-specific-instructions\n\n#include <stdint.h>\n#include <stddef.h>\n#include <nmmintrin.h> /* SSE4.2 intrinsics; compile with gcc -msse4.2 */\n\n/* CRC32C of a byte buffer, one byte per hardware instruction */\nuint32_t sse42_crc32(const uint8_t *bytes, size_t len)\n{\n    uint32_t hash = 0;\n    size_t i;\n\n    for (i = 0; i < len; i++)\n        hash = _mm_crc32_u8(hash, bytes[i]);\n\n    return hash;\n}\n\nGCC supports the intrinsic and compiles it down to the hardware\ninstruction, which is really fast.\nYou can combine two crc32 values into a 64-bit integer to get more\nhash bits, so don't worry about uniformity.\n\nBtw: crc32 is not a cryptographic function, it is a plain hash/checksum.\n\nRegards,\n\n\nOn Fri, Jun 17, 2016 at 3:32 PM, Ivan Voras <[email protected]> wrote:\n\n> And in any case, there's no crc32 in the built-in pgcrypto module.\n>\n>\n> On 17 June 2016 at 06:18, Claudio Freire <[email protected]> wrote:\n>\n>> On Fri, Jun 17, 2016 at 1:09 AM, julyanto SUTANDANG\n>> <[email protected]> wrote:\n>> > This way is doing faster using crc32(data) than hashtext since crc32 is\n>> > hardware accelerated in intel (and others perhaps)\n>> > this way (crc32)  is no way the same as hash, much way faster than\n>> others...\n>> >\n>> > Regards,\n>>\n>> Sure, but I've had uniformity issues with crc32.\n>>\n>\n>\n\n\n-- \n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source an Open Mind Company)\n\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n", "msg_date": "Fri, 17 Jun 2016 15:48:56 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes for hashes" }
]
[ { "msg_contents": "This is my first post to the mailing list, so I apologize for any etiquette\nissues.\n\nI have a few databases that I am trying to move from one system to\nanother. Both systems are running Windows 7 and Postgres 8.4, and they are\npretty powerful machines (40-core Xeon workstations with decent hardware\nacross the board). While the DBs vary in size, I'm working right now with\none that is roughly 50 tables and probably 75M rows, and is about 300MB on\ndisk when exported via pg_dump.\n\nI am exporting and restoring using these commands (on separate sytems):\npg_dump -F c mydb > mydb.dump\npg_restore -C -j 10 mydb.dump\n\nThe dump process runs in about a minute and seems fine. The restore process\nhas already been running for around 7 hours.\n\nYesterday, I tried restoring a larger DB that is roughly triple the\ndimensions listed above, and it ran for over 16 hours without completing.\n\nI followed the advice given at\nhttp://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and\nset the conf settings as directed and restarted the server.\n\nYou can see in the command line that I am trying to use the -j parameter\nfor parallelism, but I don't see much evidence of that in Task Manager. CPU\nload is consistently 1 or 2% and only a couple cores seem to be doing\nanything, there certainly aren't 10 cpu-bound cores. I'm not sure where to\nlook for pg_restore's disk I/O, but there is an entry for pg_restore in\nTask Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write\nBytes. Since that's just the parent process that might make sense but I\ndon't see much activity elsewhere either.\n\nIs there something simple that I am missing here? Does the -j flag not work\nin 8.4 and I should use --jobs? It just seems like none of the CPU or RAM\nusage I'd expect from this process are evident, it's taking many times\nlonger than I would expect, and I don't know how to verify if the things\nI'm trying are working or not.\n\nAny insight would be appreciated!\n\nThanks,\nAdrian\n\nThis is my first post to the mailing list, so I apologize for any etiquette issues.I have a few databases that I am trying to move from one system to another.  Both systems are running Windows 7 and Postgres 8.4, and they are pretty powerful machines (40-core Xeon workstations with decent hardware across the board). While the DBs vary in size, I'm working right now with one that is roughly 50 tables and probably 75M rows, and is about 300MB on disk when exported via pg_dump. I am exporting and restoring using these commands (on separate sytems):pg_dump -F c mydb > mydb.dumppg_restore -C -j 10 mydb.dumpThe dump process runs in about a minute and seems fine. The restore process has already been running for around 7 hours.Yesterday, I tried restoring a larger DB that is roughly triple the dimensions listed above, and it ran for over 16 hours without completing.I followed the advice given at http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and set the conf settings as directed and restarted the server.You can see in the command line that I am trying to use the -j parameter for parallelism, but I don't see much evidence of that in Task Manager. CPU load is consistently 1 or 2% and only a couple cores seem to be doing anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to look for pg_restore's disk I/O, but there is an entry for pg_restore in Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write Bytes. 
Since that's just the parent process that might make sense but I don't see much activity elsewhere either.Is there something simple that I am missing here? Does the -j flag not work in 8.4 and I should use --jobs? It just seems like none of the CPU or RAM usage I'd expect from this process are evident, it's taking many times longer than I would expect, and I don't know how to verify if the things I'm trying are working or not.Any insight would be appreciated!Thanks,Adrian", "msg_date": "Wed, 15 Jun 2016 18:00:06 -0400", "msg_from": "Adrian Myers <[email protected]>", "msg_from_op": true, "msg_subject": "pg_restore seems very slow" }, { "msg_contents": "On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]>\nwrote:\n\n> This is my first post to the mailing list, so I apologize for any\n> etiquette issues.\n>\n> I have a few databases that I am trying to move from one system to\n> another. Both systems are running Windows 7 and Postgres 8.4, and they are\n> pretty powerful machines (40-core Xeon workstations with decent hardware\n> across the board). While the DBs vary in size, I'm working right now with\n> one that is roughly 50 tables and probably 75M rows, and is about 300MB on\n> disk when exported via pg_dump.\n>\n> I am exporting and restoring using these commands (on separate sytems):\n> pg_dump -F c mydb > mydb.dump\n> pg_restore -C -j 10 mydb.dump\n>\n> The dump process runs in about a minute and seems fine. The restore\n> process has already been running for around 7 hours.\n>\n> Yesterday, I tried restoring a larger DB that is roughly triple the\n> dimensions listed above, and it ran for over 16 hours without completing.\n>\n> I followed the advice given at\n> http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and\n> set the conf settings as directed and restarted the server.\n>\n> You can see in the command line that I am trying to use the -j parameter\n> for parallelism, but I don't see much evidence of that in Task Manager. CPU\n> load is consistently 1 or 2% and only a couple cores seem to be doing\n> anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to\n> look for pg_restore's disk I/O, but there is an entry for pg_restore in\n> Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write\n> Bytes. Since that's just the parent process that might make sense but I\n> don't see much activity elsewhere either.\n>\n> Is there something simple that I am missing here? Does the -j flag not\n> work in 8.4 and I should use --jobs? It just seems like none of the CPU or\n> RAM usage I'd expect from this process are evident, it's taking many times\n> longer than I would expect, and I don't know how to verify if the things\n> I'm trying are working or not.\n>\n> Any insight would be appreciated!\n>\n>\n​Did any databases restore properly?\n\nAre there any message in logs or on the terminal​? You should add the\n\"--verbose\" option to your pg_restore command to help provoke this.\n\n-C can be problematic at times. Consider manually ensuring the desired\ntarget database exists and is setup correctly (matches the original) and\nthen do a non-create restoration to it specifically.\n\n-j should work fine in 8.4 (according to the docs)\n\nYou need to get to a point where you are seeing feedback from the\npg_restore process. 
Once you get it telling you what it is doing (or\ntrying to do) then diagnosing can begin.\n\n​David J.\n​\n\nOn Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]> wrote:This is my first post to the mailing list, so I apologize for any etiquette issues.I have a few databases that I am trying to move from one system to another.  Both systems are running Windows 7 and Postgres 8.4, and they are pretty powerful machines (40-core Xeon workstations with decent hardware across the board). While the DBs vary in size, I'm working right now with one that is roughly 50 tables and probably 75M rows, and is about 300MB on disk when exported via pg_dump. I am exporting and restoring using these commands (on separate sytems):pg_dump -F c mydb > mydb.dumppg_restore -C -j 10 mydb.dumpThe dump process runs in about a minute and seems fine. The restore process has already been running for around 7 hours.Yesterday, I tried restoring a larger DB that is roughly triple the dimensions listed above, and it ran for over 16 hours without completing.I followed the advice given at http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and set the conf settings as directed and restarted the server.You can see in the command line that I am trying to use the -j parameter for parallelism, but I don't see much evidence of that in Task Manager. CPU load is consistently 1 or 2% and only a couple cores seem to be doing anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to look for pg_restore's disk I/O, but there is an entry for pg_restore in Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write Bytes. Since that's just the parent process that might make sense but I don't see much activity elsewhere either.Is there something simple that I am missing here? Does the -j flag not work in 8.4 and I should use --jobs? It just seems like none of the CPU or RAM usage I'd expect from this process are evident, it's taking many times longer than I would expect, and I don't know how to verify if the things I'm trying are working or not.Any insight would be appreciated!​Did any databases restore properly?Are there any message in logs or on the terminal​?  You should add the \"--verbose\" option to your pg_restore command to help provoke this.-C can be problematic at times.  Consider manually ensuring the desired target database exists and is setup correctly (matches the original) and then do a non-create restoration to it specifically.-j should work fine in 8.4 (according to the docs)You need to get to a point where you are seeing feedback from the pg_restore process.  Once you get it telling you what it is doing (or trying to do) then diagnosing can begin.​David J.​", "msg_date": "Wed, 15 Jun 2016 18:08:52 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore seems very slow" }, { "msg_contents": "Hi David,\n\nThank you for your reply. Yes, there is quite a lot of feedback in the\nterminal. I can see a small flurry of table operations followed by hours of\ntable contents being printed, presumably as they are inserted. I didn't use\nthe --verbose option, but it seems to be echoing everything it is doing.\n\nI haven't seen any errors, and I was able to restore a couple very small\ntables successfully, so it seems like the process is valid. 
The problem is\nthat pg_restore is running for extremely long periods of time on even\nmodestly large tables and I can't tell if the optimizations I am trying,\nsuch as the -j concurrency option, are having any effect.\n\nThanks,\nAdrian\n\nOn Wed, Jun 15, 2016 at 6:08 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]>\n> wrote:\n>\n>> This is my first post to the mailing list, so I apologize for any\n>> etiquette issues.\n>>\n>> I have a few databases that I am trying to move from one system to\n>> another. Both systems are running Windows 7 and Postgres 8.4, and they are\n>> pretty powerful machines (40-core Xeon workstations with decent hardware\n>> across the board). While the DBs vary in size, I'm working right now with\n>> one that is roughly 50 tables and probably 75M rows, and is about 300MB on\n>> disk when exported via pg_dump.\n>>\n>> I am exporting and restoring using these commands (on separate sytems):\n>> pg_dump -F c mydb > mydb.dump\n>> pg_restore -C -j 10 mydb.dump\n>>\n>> The dump process runs in about a minute and seems fine. The restore\n>> process has already been running for around 7 hours.\n>>\n>> Yesterday, I tried restoring a larger DB that is roughly triple the\n>> dimensions listed above, and it ran for over 16 hours without completing.\n>>\n>> I followed the advice given at\n>> http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and\n>> set the conf settings as directed and restarted the server.\n>>\n>> You can see in the command line that I am trying to use the -j parameter\n>> for parallelism, but I don't see much evidence of that in Task Manager. CPU\n>> load is consistently 1 or 2% and only a couple cores seem to be doing\n>> anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to\n>> look for pg_restore's disk I/O, but there is an entry for pg_restore in\n>> Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write\n>> Bytes. Since that's just the parent process that might make sense but I\n>> don't see much activity elsewhere either.\n>>\n>> Is there something simple that I am missing here? Does the -j flag not\n>> work in 8.4 and I should use --jobs? It just seems like none of the CPU or\n>> RAM usage I'd expect from this process are evident, it's taking many times\n>> longer than I would expect, and I don't know how to verify if the things\n>> I'm trying are working or not.\n>>\n>> Any insight would be appreciated!\n>>\n>>\n> ​Did any databases restore properly?\n>\n> Are there any message in logs or on the terminal​? You should add the\n> \"--verbose\" option to your pg_restore command to help provoke this.\n>\n> -C can be problematic at times. Consider manually ensuring the desired\n> target database exists and is setup correctly (matches the original) and\n> then do a non-create restoration to it specifically.\n>\n> -j should work fine in 8.4 (according to the docs)\n>\n> You need to get to a point where you are seeing feedback from the\n> pg_restore process. Once you get it telling you what it is doing (or\n> trying to do) then diagnosing can begin.\n>\n> ​David J.\n> ​\n>\n>\n\nHi David,Thank you for your reply. Yes, there is quite a lot of feedback in the terminal. I can see a small flurry of table operations followed by hours of table contents being printed, presumably as they are inserted. 
I didn't use the --verbose option, but it seems to be echoing everything it is doing.I haven't seen any errors, and I was able to restore a couple very small tables successfully, so it seems like the process is valid. The problem is that pg_restore is running for extremely long periods of time on even modestly large tables and I can't tell if the optimizations I am trying, such as the -j concurrency option, are having any effect.Thanks,AdrianOn Wed, Jun 15, 2016 at 6:08 PM, David G. Johnston <[email protected]> wrote:On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]> wrote:This is my first post to the mailing list, so I apologize for any etiquette issues.I have a few databases that I am trying to move from one system to another.  Both systems are running Windows 7 and Postgres 8.4, and they are pretty powerful machines (40-core Xeon workstations with decent hardware across the board). While the DBs vary in size, I'm working right now with one that is roughly 50 tables and probably 75M rows, and is about 300MB on disk when exported via pg_dump. I am exporting and restoring using these commands (on separate sytems):pg_dump -F c mydb > mydb.dumppg_restore -C -j 10 mydb.dumpThe dump process runs in about a minute and seems fine. The restore process has already been running for around 7 hours.Yesterday, I tried restoring a larger DB that is roughly triple the dimensions listed above, and it ran for over 16 hours without completing.I followed the advice given at http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and set the conf settings as directed and restarted the server.You can see in the command line that I am trying to use the -j parameter for parallelism, but I don't see much evidence of that in Task Manager. CPU load is consistently 1 or 2% and only a couple cores seem to be doing anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to look for pg_restore's disk I/O, but there is an entry for pg_restore in Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write Bytes. Since that's just the parent process that might make sense but I don't see much activity elsewhere either.Is there something simple that I am missing here? Does the -j flag not work in 8.4 and I should use --jobs? It just seems like none of the CPU or RAM usage I'd expect from this process are evident, it's taking many times longer than I would expect, and I don't know how to verify if the things I'm trying are working or not.Any insight would be appreciated!​Did any databases restore properly?Are there any message in logs or on the terminal​?  You should add the \"--verbose\" option to your pg_restore command to help provoke this.-C can be problematic at times.  Consider manually ensuring the desired target database exists and is setup correctly (matches the original) and then do a non-create restoration to it specifically.-j should work fine in 8.4 (according to the docs)You need to get to a point where you are seeing feedback from the pg_restore process.  Once you get it telling you what it is doing (or trying to do) then diagnosing can begin.​David J.​", "msg_date": "Wed, 15 Jun 2016 19:41:28 -0400", "msg_from": "Adrian Myers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_restore seems very slow" }, { "msg_contents": "The \"simple\" case may be anti-virus or firewall blocking feeding into the\ndatabase. 
Be sure to check windows system logs for any unusual messages.\n\nCheck the postgres log (usually in PGDATA/pg_logs)\n\nFor seeing disk I/O on Win7 check out\nhttp://www.digitalcitizen.life/how-use-resource-monitor-windows-7\n\nTry also to restore without any -j or --jobs to see if you get more\nactivity on CPU or disk.\n\nCan you view any data in the tables to at least know it's loading?\n\nThanks,\nAdam C. Scott\n\n\n\n\nOn Wed, Jun 15, 2016 at 4:41 PM, Adrian Myers <[email protected]>\nwrote:\n\n> Hi David,\n>\n> Thank you for your reply. Yes, there is quite a lot of feedback in the\n> terminal. I can see a small flurry of table operations followed by hours of\n> table contents being printed, presumably as they are inserted. I didn't use\n> the --verbose option, but it seems to be echoing everything it is doing.\n>\n> I haven't seen any errors, and I was able to restore a couple very small\n> tables successfully, so it seems like the process is valid. The problem is\n> that pg_restore is running for extremely long periods of time on even\n> modestly large tables and I can't tell if the optimizations I am trying,\n> such as the -j concurrency option, are having any effect.\n>\n> Thanks,\n> Adrian\n>\n> On Wed, Jun 15, 2016 at 6:08 PM, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]>\n>> wrote:\n>>\n>>> This is my first post to the mailing list, so I apologize for any\n>>> etiquette issues.\n>>>\n>>> I have a few databases that I am trying to move from one system to\n>>> another. Both systems are running Windows 7 and Postgres 8.4, and they are\n>>> pretty powerful machines (40-core Xeon workstations with decent hardware\n>>> across the board). While the DBs vary in size, I'm working right now with\n>>> one that is roughly 50 tables and probably 75M rows, and is about 300MB on\n>>> disk when exported via pg_dump.\n>>>\n>>> I am exporting and restoring using these commands (on separate sytems):\n>>> pg_dump -F c mydb > mydb.dump\n>>> pg_restore -C -j 10 mydb.dump\n>>>\n>>> The dump process runs in about a minute and seems fine. The restore\n>>> process has already been running for around 7 hours.\n>>>\n>>> Yesterday, I tried restoring a larger DB that is roughly triple the\n>>> dimensions listed above, and it ran for over 16 hours without completing.\n>>>\n>>> I followed the advice given at\n>>> http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html\n>>> and set the conf settings as directed and restarted the server.\n>>>\n>>> You can see in the command line that I am trying to use the -j parameter\n>>> for parallelism, but I don't see much evidence of that in Task Manager. CPU\n>>> load is consistently 1 or 2% and only a couple cores seem to be doing\n>>> anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to\n>>> look for pg_restore's disk I/O, but there is an entry for pg_restore in\n>>> Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write\n>>> Bytes. Since that's just the parent process that might make sense but I\n>>> don't see much activity elsewhere either.\n>>>\n>>> Is there something simple that I am missing here? Does the -j flag not\n>>> work in 8.4 and I should use --jobs? 
It just seems like none of the CPU or\n>>> RAM usage I'd expect from this process are evident, it's taking many times\n>>> longer than I would expect, and I don't know how to verify if the things\n>>> I'm trying are working or not.\n>>>\n>>> Any insight would be appreciated!\n>>>\n>>>\n>> ​Did any databases restore properly?\n>>\n>> Are there any message in logs or on the terminal​? You should add the\n>> \"--verbose\" option to your pg_restore command to help provoke this.\n>>\n>> -C can be problematic at times. Consider manually ensuring the desired\n>> target database exists and is setup correctly (matches the original) and\n>> then do a non-create restoration to it specifically.\n>>\n>> -j should work fine in 8.4 (according to the docs)\n>>\n>> You need to get to a point where you are seeing feedback from the\n>> pg_restore process. Once you get it telling you what it is doing (or\n>> trying to do) then diagnosing can begin.\n>>\n>> ​David J.\n>> ​\n>>\n>>\n>\n\nThe \"simple\" case may be anti-virus or firewall blocking feeding into the database.  Be sure to check windows system logs for any unusual messages.Check the postgres log (usually in PGDATA/pg_logs)For seeing disk I/O on Win7 check out http://www.digitalcitizen.life/how-use-resource-monitor-windows-7Try also to  restore without any -j or --jobs to see if you get more activity on CPU or disk.Can you view any data in the tables to at least know it's loading?Thanks,Adam C. ScottOn Wed, Jun 15, 2016 at 4:41 PM, Adrian Myers <[email protected]> wrote:Hi David,Thank you for your reply. Yes, there is quite a lot of feedback in the terminal. I can see a small flurry of table operations followed by hours of table contents being printed, presumably as they are inserted. I didn't use the --verbose option, but it seems to be echoing everything it is doing.I haven't seen any errors, and I was able to restore a couple very small tables successfully, so it seems like the process is valid. The problem is that pg_restore is running for extremely long periods of time on even modestly large tables and I can't tell if the optimizations I am trying, such as the -j concurrency option, are having any effect.Thanks,AdrianOn Wed, Jun 15, 2016 at 6:08 PM, David G. Johnston <[email protected]> wrote:On Wed, Jun 15, 2016 at 6:00 PM, Adrian Myers <[email protected]> wrote:This is my first post to the mailing list, so I apologize for any etiquette issues.I have a few databases that I am trying to move from one system to another.  Both systems are running Windows 7 and Postgres 8.4, and they are pretty powerful machines (40-core Xeon workstations with decent hardware across the board). While the DBs vary in size, I'm working right now with one that is roughly 50 tables and probably 75M rows, and is about 300MB on disk when exported via pg_dump. I am exporting and restoring using these commands (on separate sytems):pg_dump -F c mydb > mydb.dumppg_restore -C -j 10 mydb.dumpThe dump process runs in about a minute and seems fine. The restore process has already been running for around 7 hours.Yesterday, I tried restoring a larger DB that is roughly triple the dimensions listed above, and it ran for over 16 hours without completing.I followed the advice given at http://www.databasesoup.com/2014/09/settings-for-fast-pgrestore.html and set the conf settings as directed and restarted the server.You can see in the command line that I am trying to use the -j parameter for parallelism, but I don't see much evidence of that in Task Manager. 
CPU load is consistently 1 or 2% and only a couple cores seem to be doing anything, there certainly aren't 10 cpu-bound cores. I'm not sure where to look for pg_restore's disk I/O, but there is an entry for pg_restore in Task Manager/Processes which shows almost no I/O Read Bytes and 0 I/O Write Bytes. Since that's just the parent process that might make sense but I don't see much activity elsewhere either.Is there something simple that I am missing here? Does the -j flag not work in 8.4 and I should use --jobs? It just seems like none of the CPU or RAM usage I'd expect from this process are evident, it's taking many times longer than I would expect, and I don't know how to verify if the things I'm trying are working or not.Any insight would be appreciated!​Did any databases restore properly?Are there any message in logs or on the terminal​?  You should add the \"--verbose\" option to your pg_restore command to help provoke this.-C can be problematic at times.  Consider manually ensuring the desired target database exists and is setup correctly (matches the original) and then do a non-create restoration to it specifically.-j should work fine in 8.4 (according to the docs)You need to get to a point where you are seeing feedback from the pg_restore process.  Once you get it telling you what it is doing (or trying to do) then diagnosing can begin.​David J.​", "msg_date": "Wed, 15 Jun 2016 18:43:51 -0700", "msg_from": "Adam Scott <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_restore seems very slow" } ]
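One detail worth flagging in the pg_restore invocations quoted above (an observation, not something confirmed in the thread): pg_restore only connects to a server and restores when -d/--dbname is given. Without -d it writes the archive back out as an SQL script on standard output, which would also match the reported hours of table contents being printed in the terminal. A hedged sketch of a connected, parallel restore, plus a query to check from another session whether the -j workers are actually busy:

pg_restore --verbose -C -j 10 -d postgres mydb.dump

-- pg_stat_activity still uses procpid/current_query in 8.4:
SELECT procpid, waiting, current_query
  FROM pg_stat_activity
 WHERE current_query <> '<IDLE>';

With parallel restore working, several sessions should be running COPY statements at the same time.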
[ { "msg_contents": "Hello,\n \nI've a basic table with about 100K rows:\n \n\n\nCREATE TABLE \"public\".\"push_topic\" (\n \"id\" Serial PRIMARY KEY,\n \"guid\" public.push_guid NOT NULL,\n \"authenticatorsending\" Varchar(32) NOT NULL,\n \"authenticatorsubscription\" Varchar(32) NOT NULL,\n \"countpushed\" Integer NOT NULL,\n \"datecreated\" timestamp NOT NULL,\n \"datelastpush\" timestamp\n)\nCREATE UNIQUE INDEX push_topic_idx_topicguid ON push_topic\n  USING btree (guid)\n\n\n \nWhen I query this through pgsql, the queries are fast as expected.\n\nThis is the query:\n\nselect * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'\n\nAnd the plan:\n\n\n\nIndex Scan using push_topic_idx_topicguid on push_topic (cost=0.42..8.44 rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)\n Index Cond: ((guid)::bpchar = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)\n Buffers: shared hit=3 read=1\nTotal runtime: 0.191 ms\n\n\n\nHowever when I run the exact query through a different application (CodeSynthesis ORM) the query is very slow (~ 115ms logged)\nI noted this is due to a sequential scan happening on the table instead of an index scan.\n\nThis is query plan in the log file:\n\n\n\nLOG: plan:\nDETAIL: {PLANNEDSTMT \n\t :commandType 1 \n\t :queryId 0 \n\t :hasReturning false \n\t :hasModifyingCTE false \n\t :canSetTag true \n\t :transientPlan false \n\t :planTree \n\t {SEQSCAN \n\t :startup_cost 0.00 \n\t :total_cost 2877.58 \n\t :plan_rows 429 \n\t :plan_width 103 \n\t :targetlist (\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 1 \n\t :vartype 23 \n\t :vartypmod -1 \n\t :varcollid 0 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 1 \n\t :location 7\n\t }\n\t :resno 1 \n\t :resname id \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 1 \n\t :resjunk false\n\t }\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 2 \n\t :vartype 16385 \n\t :vartypmod -1 \n\t :varcollid 100 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 2 \n\t :location 26\n\t }\n\t :resno 2 \n\t :resname guid \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 2 \n\t :resjunk false\n\t }\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 3 \n\t :vartype 1043 \n\t :vartypmod 36 \n\t :varcollid 100 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 3 \n\t :location 47\n\t }\n\t :resno 3 \n\t :resname authenticatorsending \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 3 \n\t :resjunk false\n\t }\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 4 \n\t :vartype 1043 \n\t :vartypmod 36 \n\t :varcollid 100 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 4 \n\t :location 84\n\t }\n\t :resno 4 \n\t :resname authenticatorsubscription \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 4 \n\t :resjunk false\n\t }\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 5 \n\t :vartype 23 \n\t :vartypmod -1 \n\t :varcollid 0 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 5 \n\t :location 126\n\t }\n\t :resno 5 \n\t :resname countpushed \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 5 \n\t :resjunk false\n\t }\n\t {TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 6 \n\t :vartype 1114 \n\t :vartypmod -1 \n\t :varcollid 0 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 6 \n\t :location 154\n\t }\n\t :resno 6 \n\t :resname datecreated \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 6 \n\t :resjunk false\n\t }\n\t 
{TARGETENTRY \n\t :expr \n\t {VAR \n\t :varno 1 \n\t :varattno 7 \n\t :vartype 1114 \n\t :vartypmod -1 \n\t :varcollid 0 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 7 \n\t :location 182\n\t }\n\t :resno 7 \n\t :resname datelastpush \n\t :ressortgroupref 0 \n\t :resorigtbl 16393 \n\t :resorigcol 7 \n\t :resjunk false\n\t }\n\t )\n\t :qual (\n\t {OPEXPR \n\t :opno 98 \n\t :opfuncid 67 \n\t :opresulttype 16 \n\t :opretset false \n\t :opcollid 0 \n\t :inputcollid 100 \n\t :args (\n\t {FUNCEXPR \n\t :funcid 401 \n\t :funcresulttype 25 \n\t :funcretset false \n\t :funcvariadic false \n\t :funcformat 2 \n\t :funccollid 100 \n\t :inputcollid 100 \n\t :args (\n\t {VAR \n\t :varno 1 \n\t :varattno 2 \n\t :vartype 16385 \n\t :vartypmod -1 \n\t :varcollid 100 \n\t :varlevelsup 0 \n\t :varnoold 1 \n\t :varoattno 2 \n\t :location 234\n\t }\n\t )\n\t :location -1\n\t }\n\t {CONST \n\t :consttype 25 \n\t :consttypmod -1 \n\t :constcollid 100 \n\t :constlen -1 \n\t :constbyval false \n\t :constisnull false \n\t :location -1 \n\t :constvalue 40 [ -96 0 0 0 48 48 53 51 54 49 69 56 45 51 51 69 65 \n\t 45 49 70 48 69 45 66 50 49 55 45 67 57 49 66 52 65 67 55 66 67 69 \n\t 54 ]\n\t }\n\t )\n\t :location 254\n\t }\n\t )\n\t :lefttree <> \n\t :righttree <> \n\t :initPlan <> \n\t :extParam (b)\n\t :allParam (b)\n\t :scanrelid 1\n\t }\n\t :rtable (\n\t {RTE \n\t :alias <> \n\t :eref \n\t {ALIAS \n\t :aliasname push_topic \n\t :colnames (\"id\" \"guid\" \"authenticatorsending\" \"authenticatorsubscript\n\t ion\" \"countpushed\" \"datecreated\" \"datelastpush\")\n\t }\n\t :rtekind 0 \n\t :relid 16393 \n\t :relkind r \n\t :lateral false \n\t :inh false \n\t :inFromCl true \n\t :requiredPerms 2 \n\t :checkAsUser 0 \n\t :selectedCols (b 9 10 11 12 13 14 15)\n\t :modifiedCols (b)\n\t }\n\t )\n\t :resultRelations <> \n\t :utilityStmt <> \n\t :subplans <> \n\t :rewindPlanIDs (b)\n\t :rowMarks <> \n\t :relationOids (o 16393)\n\t :invalItems <> \n\t :nParamExec 0\n\t }\n\t\nSTATEMENT: SELECT \"push_topic\".\"id\", \"push_topic\".\"guid\", \"push_topic\".\"authenticatorsending\", \"push_topic\".\"authenticatorsubscription\", \"push_topic\".\"countpushed\", \"push_topic\".\"datecreated\", \"push_topic\".\"datelastpush\" FROM \"push_topic\" WHERE \"push_topic\".\"guid\" = $1\nLOG: duration: 115.498 ms execute query_mc_push_database_Topic: SELECT \"push_topic\".\"id\", \"push_topic\".\"guid\", \"push_topic\".\"authenticatorsending\", \"push_topic\".\"authenticatorsubscription\", \"push_topic\".\"countpushed\", \"push_topic\".\"datecreated\", \"push_topic\".\"datelastpush\" FROM \"push_topic\" WHERE \"push_topic\".\"guid\" = $1\n\n\n\n\nAny idea how to solve this ?\n\nThank you\n\nMeike\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 09:58:46 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Index not used" }, { "msg_contents": "When you run psql, are you running that on the application server or the database server? Does the application run on the same server as the database and how is the application connecting to the database (JDBC, ODBC, etc)?\r\n\r\nIn other words is there a difference in network time between the 2?\r\n\r\nAlso the queries are not exactly the same. With psql you use \"select *\" and the application specifies what columns it wants returned and the order to return them. 
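\r\nA quick way to reproduce the application's parameterized form from plain psql is a prepared statement (a sketch: declaring $1 as text mirrors what a driver typically sends, which is an assumption here, not something taken from the ORM):\r\n\r\nPREPARE q(text) AS\r\n    SELECT * FROM push_topic WHERE guid = $1;\r\nEXPLAIN ANALYZE EXECUTE q('DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5');\r\nDEALLOCATE q;\r\n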
Try running the exact query on both.\r\n\r\nRegards\r\nJohn\r\n \r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\r\nSent: Thursday, June 16, 2016 12:59 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Index not used\r\n\r\nHello,\r\n \r\nI've a basic table with about 100K rows:\r\n \r\n\r\n\r\nCREATE TABLE \"public\".\"push_topic\" (\r\n \"id\" Serial PRIMARY KEY,\r\n \"guid\" public.push_guid NOT NULL,\r\n \"authenticatorsending\" Varchar(32) NOT NULL,\r\n \"authenticatorsubscription\" Varchar(32) NOT NULL,\r\n \"countpushed\" Integer NOT NULL,\r\n \"datecreated\" timestamp NOT NULL,\r\n \"datelastpush\" timestamp\r\n)\r\nCREATE UNIQUE INDEX push_topic_idx_topicguid ON push_topic\r\n  USING btree (guid)\r\n\r\n\r\n \r\nWhen I query this through pgsql, the queries are fast as expected.\r\n\r\nThis is the query:\r\n\r\nselect * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'\r\n\r\nAnd the plan:\r\n\r\n\r\n\r\nIndex Scan using push_topic_idx_topicguid on push_topic (cost=0.42..8.44 rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)\r\n Index Cond: ((guid)::bpchar = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)\r\n Buffers: shared hit=3 read=1\r\nTotal runtime: 0.191 ms\r\n\r\n\r\n\r\nHowever when I run the exact query through a different application (CodeSynthesis ORM) the query is very slow (~ 115ms logged)\r\nI noted this is due to a sequential scan happening on the table instead of an index scan.\r\n\r\nThis is query plan in the log file:\r\n\r\n\r\n\r\nLOG: plan:\r\nDETAIL: {PLANNEDSTMT \r\n\t :commandType 1 \r\n\t :queryId 0 \r\n\t :hasReturning false \r\n\t :hasModifyingCTE false \r\n\t :canSetTag true \r\n\t :transientPlan false \r\n\t :planTree \r\n\t {SEQSCAN \r\n\t :startup_cost 0.00 \r\n\t :total_cost 2877.58 \r\n\t :plan_rows 429 \r\n\t :plan_width 103 \r\n\t :targetlist (\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 1 \r\n\t :vartype 23 \r\n\t :vartypmod -1 \r\n\t :varcollid 0 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 1 \r\n\t :location 7\r\n\t }\r\n\t :resno 1 \r\n\t :resname id \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 1 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 2 \r\n\t :vartype 16385 \r\n\t :vartypmod -1 \r\n\t :varcollid 100 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 2 \r\n\t :location 26\r\n\t }\r\n\t :resno 2 \r\n\t :resname guid \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 2 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 3 \r\n\t :vartype 1043 \r\n\t :vartypmod 36 \r\n\t :varcollid 100 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 3 \r\n\t :location 47\r\n\t }\r\n\t :resno 3 \r\n\t :resname authenticatorsending \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 3 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 4 \r\n\t :vartype 1043 \r\n\t :vartypmod 36 \r\n\t :varcollid 100 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 4 \r\n\t :location 84\r\n\t }\r\n\t :resno 4 \r\n\t :resname authenticatorsubscription \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 4 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 5 \r\n\t :vartype 23 \r\n\t :vartypmod -1 \r\n\t 
:varcollid 0 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 5 \r\n\t :location 126\r\n\t }\r\n\t :resno 5 \r\n\t :resname countpushed \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 5 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 6 \r\n\t :vartype 1114 \r\n\t :vartypmod -1 \r\n\t :varcollid 0 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 6 \r\n\t :location 154\r\n\t }\r\n\t :resno 6 \r\n\t :resname datecreated \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 6 \r\n\t :resjunk false\r\n\t }\r\n\t {TARGETENTRY \r\n\t :expr \r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 7 \r\n\t :vartype 1114 \r\n\t :vartypmod -1 \r\n\t :varcollid 0 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 7 \r\n\t :location 182\r\n\t }\r\n\t :resno 7 \r\n\t :resname datelastpush \r\n\t :ressortgroupref 0 \r\n\t :resorigtbl 16393 \r\n\t :resorigcol 7 \r\n\t :resjunk false\r\n\t }\r\n\t )\r\n\t :qual (\r\n\t {OPEXPR \r\n\t :opno 98 \r\n\t :opfuncid 67 \r\n\t :opresulttype 16 \r\n\t :opretset false \r\n\t :opcollid 0 \r\n\t :inputcollid 100 \r\n\t :args (\r\n\t {FUNCEXPR \r\n\t :funcid 401 \r\n\t :funcresulttype 25 \r\n\t :funcretset false \r\n\t :funcvariadic false \r\n\t :funcformat 2 \r\n\t :funccollid 100 \r\n\t :inputcollid 100 \r\n\t :args (\r\n\t {VAR \r\n\t :varno 1 \r\n\t :varattno 2 \r\n\t :vartype 16385 \r\n\t :vartypmod -1 \r\n\t :varcollid 100 \r\n\t :varlevelsup 0 \r\n\t :varnoold 1 \r\n\t :varoattno 2 \r\n\t :location 234\r\n\t }\r\n\t )\r\n\t :location -1\r\n\t }\r\n\t {CONST \r\n\t :consttype 25 \r\n\t :consttypmod -1 \r\n\t :constcollid 100 \r\n\t :constlen -1 \r\n\t :constbyval false \r\n\t :constisnull false \r\n\t :location -1 \r\n\t :constvalue 40 [ -96 0 0 0 48 48 53 51 54 49 69 56 45 51 51 69 65 \r\n\t 45 49 70 48 69 45 66 50 49 55 45 67 57 49 66 52 65 67 55 66 67 69 \r\n\t 54 ]\r\n\t }\r\n\t )\r\n\t :location 254\r\n\t }\r\n\t )\r\n\t :lefttree <> \r\n\t :righttree <> \r\n\t :initPlan <> \r\n\t :extParam (b)\r\n\t :allParam (b)\r\n\t :scanrelid 1\r\n\t }\r\n\t :rtable (\r\n\t {RTE \r\n\t :alias <> \r\n\t :eref \r\n\t {ALIAS \r\n\t :aliasname push_topic \r\n\t :colnames (\"id\" \"guid\" \"authenticatorsending\" \"authenticatorsubscript\r\n\t ion\" \"countpushed\" \"datecreated\" \"datelastpush\")\r\n\t }\r\n\t :rtekind 0 \r\n\t :relid 16393 \r\n\t :relkind r \r\n\t :lateral false \r\n\t :inh false \r\n\t :inFromCl true \r\n\t :requiredPerms 2 \r\n\t :checkAsUser 0 \r\n\t :selectedCols (b 9 10 11 12 13 14 15)\r\n\t :modifiedCols (b)\r\n\t }\r\n\t )\r\n\t :resultRelations <> \r\n\t :utilityStmt <> \r\n\t :subplans <> \r\n\t :rewindPlanIDs (b)\r\n\t :rowMarks <> \r\n\t :relationOids (o 16393)\r\n\t :invalItems <> \r\n\t :nParamExec 0\r\n\t }\r\n\t\r\nSTATEMENT: SELECT \"push_topic\".\"id\", \"push_topic\".\"guid\", \"push_topic\".\"authenticatorsending\", \"push_topic\".\"authenticatorsubscription\", \"push_topic\".\"countpushed\", \"push_topic\".\"datecreated\", \"push_topic\".\"datelastpush\" FROM \"push_topic\" WHERE \"push_topic\".\"guid\" = $1\r\nLOG: duration: 115.498 ms execute query_mc_push_database_Topic: SELECT \"push_topic\".\"id\", \"push_topic\".\"guid\", \"push_topic\".\"authenticatorsending\", \"push_topic\".\"authenticatorsubscription\", \"push_topic\".\"countpushed\", \"push_topic\".\"datecreated\", \"push_topic\".\"datelastpush\" FROM \"push_topic\" WHERE \"push_topic\".\"guid\" = $1\r\n\r\n\r\n\r\n\r\nAny idea how to solve this ?\r\n\r\nThank 
you\r\n\r\nMeike\r\n\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 12:27:11 +0000", "msg_from": "John Gorman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used" }, { "msg_contents": "[email protected] writes:\n> When I query this through pgsql, the queries are fast as expected.\n> select * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'\n> Index Scan using push_topic_idx_topicguid on push_topic (cost=0.42..8.44 rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)\n> Index Cond: ((guid)::bpchar = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)\n> Buffers: shared hit=3 read=1\n> Total runtime: 0.191 ms\n\n> However when I run the exact query through a different application (CodeSynthesis ORM) the query is very slow (~ 115ms logged)\n> I noted this is due to a sequential scan happening on the table instead of an index scan.\n\nIt looks like what that app is actually issuing is something different\nfrom what you tested by hand, to wit\n\nselect * from push_topic where guid = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text\n\nwhich causes the comparison to be resolved as texteq not bpchareq, ie you\neffectively have\n\nselect * from push_topic where guid::text = 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text\n\nand that doesn't match a bpchar index. If you can't persuade the app to\nlabel the comparison value as bpchar not text, the easiest fix would be\nto create an additional index on \"guid::text\".\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 11:05:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used" }, { "msg_contents": "On Thu, Jun 16, 2016 at 11:05 AM, Tom Lane <[email protected]> wrote:\n\n> [email protected] writes:\n> > When I query this through pgsql, the queries are fast as expected.\n> > select * from push_topic where guid =\n> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'\n> > Index Scan using push_topic_idx_topicguid on push_topic\n> (cost=0.42..8.44 rows=1 width=103) (actual time=0.117..0.121 rows=1 loops=1)\n> > Index Cond: ((guid)::bpchar =\n> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::bpchar)\n> > Buffers: shared hit=3 read=1\n> > Total runtime: 0.191 ms\n>\n> > However when I run the exact query through a different application\n> (CodeSynthesis ORM) the query is very slow (~ 115ms logged)\n> > I noted this is due to a sequential scan happening on the table instead\n> of an index scan.\n>\n> It looks like what that app is actually issuing is something different\n> from what you tested by hand, to wit\n>\n> select * from push_topic where guid =\n> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text\n>\n> which causes the comparison to be resolved as texteq not bpchareq, ie you\n> effectively have\n>\n> select * from push_topic where guid::text =\n> 'DD748CCD-B8A4-3B9F-8F60-67F1F673CFE5'::text\n>\n> and that doesn't match a bpchar index. 
If you can't persuade the app to\n> label the comparison value as bpchar not text, the easiest fix would be\n> to create an additional index on \"guid::text\".\n>\n\nOr, better, persuade the app to label the value \"public.push_guid\" since that is the column's type...a type you haven't defined for us.\nIf you get to add explicit casts this should be easy...but I'm not familiar\nwith the framework you are using.\n\nDavid J.\n", "msg_date": "Thu, 16 Jun 2016 11:53:34 -0400", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index not used" }, { "msg_contents": "> Or, better, persuade the app to label the value \"public.push_guid\" since that is the column's type...a type you haven't defined for us. If you get to add explicit casts this should be easy...but I'm not familiar with the framework you are using.\n \n \npush_guid was a CHARACTER(36) column. I ended up converting it to CHARACTER VARYING(36).\nIndex is now being used and performance is as expected.\n \nThanks a lot\nMeike\n\n", "msg_date": "Sun, 19 Jun 2016 11:59:26 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Index not used" } ]
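[Editor's note: a minimal SQL sketch of the two fixes discussed in the thread above. The index and prepared-statement names are invented for illustration; the table, column, and type names come from the thread. How to attach an explicit cast inside CodeSynthesis ODB itself is not covered here.]

-- Option 1 (Tom Lane's suggestion): index the expression the ORM actually
-- compares, so "guid::text = $1" becomes indexable. The doubled parentheses
-- are required syntax for an expression index.
CREATE INDEX push_topic_idx_topicguid_text ON push_topic ((guid::text));

-- Option 2 (David Johnston's suggestion): label the parameter with the
-- column's own type so the existing unique index applies.
PREPARE topic_by_guid (text) AS
    SELECT * FROM push_topic WHERE guid = $1::public.push_guid;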
[ { "msg_contents": "Hey all, testing out 9.6 beta 1 right now on Debian 8.5.\n\nI have a query that is much slower on 9.6 than 9.5.3.\n\nAs a side note, when I explain analyze instead of just executing the query\nit takes more than 2x as long to run. I have tried looking for info on that\nonline but have not found any. Anyone know the reason for that?\n\nThe data is very close between the two servers, one is my production system\nso the only difference is slightly more added today since I set up the 9.6\nserver last night.\n\nThe query in question is here:\nSELECT cp.claim_id\n, cp.claim_product_id\n, cp.product_id\n, cp.uom_type_id\n, cp.rebate_requested_quantity\n, cp.rebate_requested_rate\n, cp.rebate_allowed_quantity\n, cp.rebate_allowed_rate\n, cp.distributor_company_id\n, cp.resolve_date\nFROM claim_product cp\nINNER JOIN _claims_to_process x\nON cp.claim_id = x.claim_id\nWHERE NOT EXISTS (\nSELECT 1\nFROM claim_product_reason_code r\nWHERE r.claim_product_id = cp.claim_product_id\nAND r.claim_reason_type = ANY (ARRAY['REJECT'::enum.claim_reason_type,\n'OVERRIDE'::enum.claim_reason_type, 'RECALC'::enum.claim_reason_type])\nAND upper_inf(r.active_range)\n);\n\nThe query plan on 9.6 is here (disabled parallelism):\n'Nested Loop (cost=17574.63..30834.02 rows=1 width=106) (actual\ntime=241.934..40332.190 rows=26994 loops=1)'\n' Join Filter: (cp.claim_id = x.claim_id)'\n' Rows Removed by Join Filter: 92335590'\n' -> Hash Anti Join (cost=17574.63..30808.68 rows=1 width=106) (actual\ntime=173.742..586.805 rows=102171 loops=1)'\n' Hash Cond: (cp.claim_product_id = r.claim_product_id)'\n' -> Seq Scan on claim_product cp (cost=0.00..6714.76 rows=202076\nwidth=106) (actual time=0.028..183.376 rows=202076 loops=1)'\n' -> Hash (cost=16972.49..16972.49 rows=48171 width=16) (actual\ntime=173.436..173.436 rows=99905 loops=1)'\n' Buckets: 131072 (originally 65536) Batches: 1 (originally\n1) Memory Usage: 5708kB'\n' -> Bitmap Heap Scan on claim_product_reason_code r\n (cost=4398.71..16972.49 rows=48171 width=16) (actual time=25.278..127.540\nrows=99905 loops=1)'\n' Recheck Cond: ((claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[])) AND\nupper_inf(active_range))'\n' Heap Blocks: exact=10067'\n' -> Bitmap Index Scan on\nclaim_product_reason_code_active_range_idx (cost=0.00..4386.67 rows=48171\nwidth=0) (actual time=23.174..23.174 rows=99905 loops=1)'\n' Index Cond: (claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[]))'\n' -> Seq Scan on _claims_to_process x (cost=0.00..14.04 rows=904\nwidth=16) (actual time=0.005..0.182 rows=904 loops=102171)'\n'Planning time: 1.934 ms'\n'Execution time: 40337.858 ms'\n\nThe 9.5.3 plan is here:\n'Hash Anti Join (cost=19884.53..39281.57 rows=30681 width=106) (actual\ntime=848.791..978.036 rows=27354 loops=1)'\n' Hash Cond: (cp.claim_product_id = r.claim_product_id)'\n' -> Nested Loop (cost=0.42..17990.36 rows=41140 width=106) (actual\ntime=0.132..106.333 rows=28775 loops=1)'\n' -> Seq Scan on _claims_to_process x (cost=0.00..27.00 rows=1700\nwidth=16) (actual time=0.037..0.465 rows=923 loops=1)'\n' -> Index Scan using idx_claim_product_claim_id on claim_product\ncp (cost=0.42..10.33 rows=24 width=106) (actual time=0.015..0.093 rows=31\nloops=923)'\n' Index Cond: (claim_id = x.claim_id)'\n' -> Hash (cost=19239.13..19239.13 rows=51599 width=16) (actual\ntime=848.263..848.263 rows=100024 loops=1)'\n' Buckets: 131072 (originally 65536) Batches: 1 (originally 1)\n Memory Usage: 5713kB'\n' -> Bitmap Heap Scan on 
claim_product_reason_code r\n (cost=6240.64..19239.13 rows=51599 width=16) (actual time=31.505..782.799\nrows=100024 loops=1)'\n' Recheck Cond: ((claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[])) AND\nupper_inf(active_range))'\n' Heap Blocks: exact=6261'\n' -> Bitmap Index Scan on\nclaim_product_reason_code_active_range_idx (cost=0.00..6227.74 rows=51599\nwidth=0) (actual time=30.231..30.231 rows=100051 loops=1)'\n' Index Cond: (claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[]))'\n'Planning time: 1.691 ms'\n'Execution time: 982.667 ms'\n\n\nJust for fun I set enable_nestloop=false on 9.6 and this is the plan I get:\n'Hash Join (cost=17599.97..30834.04 rows=1 width=106) (actual\ntime=108.892..349.885 rows=26994 loops=1)'\n' Hash Cond: (cp.claim_id = x.claim_id)'\n' -> Hash Anti Join (cost=17574.63..30808.68 rows=1 width=106) (actual\ntime=107.464..316.527 rows=102171 loops=1)'\n' Hash Cond: (cp.claim_product_id = r.claim_product_id)'\n' -> Seq Scan on claim_product cp (cost=0.00..6714.76 rows=202076\nwidth=106) (actual time=0.011..61.230 rows=202076 loops=1)'\n' -> Hash (cost=16972.49..16972.49 rows=48171 width=16) (actual\ntime=107.315..107.315 rows=99905 loops=1)'\n' Buckets: 131072 (originally 65536) Batches: 1 (originally\n1) Memory Usage: 5708kB'\n' -> Bitmap Heap Scan on claim_product_reason_code r\n (cost=4398.71..16972.49 rows=48171 width=16) (actual time=23.478..68.644\nrows=99905 loops=1)'\n' Recheck Cond: ((claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[])) AND\nupper_inf(active_range))'\n' Heap Blocks: exact=10067'\n' -> Bitmap Index Scan on\nclaim_product_reason_code_active_range_idx (cost=0.00..4386.67 rows=48171\nwidth=0) (actual time=21.475..21.475 rows=99905 loops=1)'\n' Index Cond: (claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[]))'\n' -> Hash (cost=14.04..14.04 rows=904 width=16) (actual\ntime=0.937..0.937 rows=904 loops=1)'\n' Buckets: 1024 Batches: 1 Memory Usage: 51kB'\n' -> Seq Scan on _claims_to_process x (cost=0.00..14.04 rows=904\nwidth=16) (actual time=0.022..0.442 rows=904 loops=1)'\n'Planning time: 1.475 ms'\n'Execution time: 353.958 ms'\n", "msg_date": "Thu, 16 Jun 2016 21:56:21 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "9.6 query slower than 9.5.3" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> Hey all, testing out 9.6 beta 1 right now on Debian 8.5.\n> I have a query that is much slower on 9.6 than 9.5.3.\n\nThe rowcount estimates in 9.6 seem way off. Did you ANALYZE the tables\nafter loading them into 9.6? Maybe you forgot some statistics target\nsettings?\n\nIf it's not that, I wonder whether the misestimates are connected to the\nforeign-key-based estimation feature. Are there any FKs on the tables\ninvolved? 
May we see the table schemas?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 22:04:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "I analyzed all tables involved after loading, and also while trying to\ndiagnose this issue.\n\nI have the same statistics target settings on both servers.\n\nHere are the schemas for the tables:\n\nOn Thu, Jun 16, 2016 at 10:04 PM, Tom Lane <[email protected]> wrote:\n\n> Adam Brusselback <[email protected]> writes:\n> > Hey all, testing out 9.6 beta 1 right now on Debian 8.5.\n> > I have a query that is much slower on 9.6 than 9.5.3.\n>\n> The rowcount estimates in 9.6 seem way off. Did you ANALYZE the tables\n> after loading them into 9.6? Maybe you forgot some statistics target\n> settings?\n>\n> If it's not that, I wonder whether the misestimates are connected to the\n> foreign-key-based estimation feature. Are there any FKs on the tables\n> involved? May we see the table schemas?\n>\n> regards, tom lane\n>\n", "msg_date": "Thu, 16 Jun 2016 22:09:53 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "Gah, hit send too soon...\n\nCREATE TEMPORARY TABLE _claims_to_process ( claim_id uuid, starting_state\nenum.claim_state );\n\nCREATE TABLE claim_product\n(\n claim_product_id uuid NOT NULL DEFAULT gen_random_uuid(),\n claim_id uuid NOT NULL,\n product_id uuid NOT NULL,\n uom_type_id uuid NOT NULL,\n rebate_requested_quantity numeric NOT NULL,\n rebate_requested_rate numeric NOT NULL,\n rebate_allowed_quantity numeric NOT NULL,\n rebate_allowed_rate numeric NOT NULL,\n distributor_company_id uuid,\n location_company_id uuid,\n contract_item_id uuid,\n claimant_contract_name character varying, -- NOT SOURCE OF TRUTH; Client\ndefined. - Yesod\n resolve_date date NOT NULL, -- FIXME: TENTATIVE NAME; Does not mean\ncontract_item_id resolve date. 
- Yesod\n rebate_calculated_rate numeric NOT NULL,\n CONSTRAINT claim_product_pkey PRIMARY KEY (claim_product_id),\n CONSTRAINT claim_product_claim_id_fkey FOREIGN KEY (claim_id)\n REFERENCES claim (claim_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_contract_item_id_fkey FOREIGN KEY\n(contract_item_id)\n REFERENCES contract_item (contract_item_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_distributor_company_id_fkey FOREIGN KEY\n(distributor_company_id)\n REFERENCES company (company_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_location_company_id_fkey FOREIGN KEY\n(location_company_id)\n REFERENCES company (company_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_product_id_fkey FOREIGN KEY (product_id)\n REFERENCES product (product_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_uom_type_id_fkey FOREIGN KEY (uom_type_id)\n REFERENCES uom_type (uom_type_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE claim_product\n OWNER TO root;\nGRANT ALL ON TABLE claim_product TO root;\nCOMMENT ON COLUMN claim_product.claimant_contract_name IS 'NOT SOURCE OF\nTRUTH; Client defined. - Yesod';\nCOMMENT ON COLUMN claim_product.resolve_date IS 'FIXME: TENTATIVE NAME;\nDoes not mean contract_item_id resolve date. - Yesod';\n\n\n-- Index: idx_claim_product_claim_id\n\n-- DROP INDEX idx_claim_product_claim_id;\n\nCREATE INDEX idx_claim_product_claim_id\n ON claim_product\n USING btree\n (claim_id);\n\n-- Index: idx_claim_product_contract_item_id\n\n-- DROP INDEX idx_claim_product_contract_item_id;\n\nCREATE INDEX idx_claim_product_contract_item_id\n ON claim_product\n USING btree\n (contract_item_id);\n\n\n-- Trigger: claim_product_iud_trigger on claim_product\n\n-- DROP TRIGGER claim_product_iud_trigger ON claim_product;\n\nCREATE TRIGGER claim_product_iud_trigger\n AFTER INSERT OR UPDATE OR DELETE\n ON claim_product\n FOR EACH ROW\n EXECUTE PROCEDURE gosimple.claim_product_on_iud();\n\n-- Trigger: claim_product_statement_trigger on claim_product\n\n-- DROP TRIGGER claim_product_statement_trigger ON claim_product;\n\nCREATE TRIGGER claim_product_statement_trigger\n AFTER INSERT OR UPDATE OR DELETE\n ON claim_product\n FOR EACH STATEMENT\n EXECUTE PROCEDURE gosimple.claim_product_statement_refresh_trigger();\n\nCREATE TABLE claim_product_reason_code\n(\n claim_product_reason_code_id uuid NOT NULL DEFAULT gen_random_uuid(),\n claim_product_id uuid NOT NULL,\n claim_reason_type enum.claim_reason_type NOT NULL,\n claim_reason_code enum.claim_reason_code NOT NULL,\n claim_reason_note character varying,\n active_range tstzrange NOT NULL DEFAULT tstzrange(now(), NULL::timestamp\nwith time zone),\n CONSTRAINT claim_product_reason_code_pkey PRIMARY KEY\n(claim_product_reason_code_id),\n CONSTRAINT claim_product_reason_code_claim_product_id_fkey FOREIGN KEY\n(claim_product_id)\n REFERENCES claim_product (claim_product_id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT claim_product_reason_code_active_range_excl EXCLUDE\n USING gist (gosimple.uuid_to_bytea(claim_product_id) WITH =,\ngosimple.enum_to_oid('enum'::text, 'claim_reason_type'::text,\nclaim_reason_type) WITH =, gosimple.enum_to_oid('enum'::text,\n'claim_reason_code'::text, claim_reason_code) WITH =, active_range WITH &&),\n CONSTRAINT claim_product_reason_code_excl EXCLUDE\n USING gist 
(gosimple.uuid_to_bytea(claim_product_id) WITH =, (\nCASE\n WHEN upper(active_range) IS NULL THEN 'infinity'::text\n ELSE NULL::text\nEND) WITH =, gosimple.enum_to_oid('enum'::text, 'claim_reason_type'::text,\nclaim_reason_type) WITH <>),\n CONSTRAINT claim_product_reason_code_unique UNIQUE (claim_product_id,\nclaim_reason_type, claim_reason_code, active_range)\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE claim_product_reason_code\n OWNER TO root;\nGRANT ALL ON TABLE claim_product_reason_code TO root;\n\n-- Index: claim_product_reason_code_active_range_idx\n\n-- DROP INDEX claim_product_reason_code_active_range_idx;\n\nCREATE INDEX claim_product_reason_code_active_range_idx\n ON claim_product_reason_code\n USING btree\n (claim_product_id, claim_reason_type)\n WHERE upper_inf(active_range);\n\n-- Index: claim_product_reason_code_not_pend_unique\n\n-- DROP INDEX claim_product_reason_code_not_pend_unique;\n\nCREATE UNIQUE INDEX claim_product_reason_code_not_pend_unique\n ON claim_product_reason_code\n USING btree\n (claim_product_id, claim_reason_type)\n WHERE upper(active_range) IS NULL AND claim_reason_type <>\n'PEND'::enum.claim_reason_type;\n\n\n-- Trigger: claim_product_reason_code_insert_trigger on\nclaim_product_reason_code\n\n-- DROP TRIGGER claim_product_reason_code_insert_trigger ON\nclaim_product_reason_code;\n\nCREATE TRIGGER claim_product_reason_code_insert_trigger\n BEFORE INSERT\n ON claim_product_reason_code\n FOR EACH ROW\n EXECUTE PROCEDURE\ngosimple.update_claim_product_reason_code_active_range();\n\nOn Thu, Jun 16, 2016 at 10:09 PM, Adam Brusselback <\[email protected]> wrote:\n\n> I analyzed all tables involved after loading, and also while trying to\n> diagnose this issue.\n>\n> I have the same statistics target settings on both servers.\n>\n> Here are the schemas for the tables:\n>\n> On Thu, Jun 16, 2016 at 10:04 PM, Tom Lane <[email protected]> wrote:\n>\n>> Adam Brusselback <[email protected]> writes:\n>> > Hey all, testing out 9.6 beta 1 right now on Debian 8.5.\n>> > I have a query that is much slower on 9.6 than 9.5.3.\n>>\n>> The rowcount estimates in 9.6 seem way off. Did you ANALYZE the tables\n>> after loading them into 9.6? Maybe you forgot some statistics target\n>> settings?\n>>\n>> If it's not that, I wonder whether the misestimates are connected to the\n>> foreign-key-based estimation feature. Are there any FKs on the tables\n>> involved? May we see the table schemas?\n>>\n>> regards, tom lane\n>>\n>\n>\n", "msg_date": "Thu, 16 Jun 2016 22:14:22 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> Gah, hit send too soon...\n\nHm, definitely a lot of foreign keys in there. 
Do the estimates get\nbetter (or at least closer to 9.5) if you do\n\"set enable_fkey_estimates = off\"?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 22:45:02 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "Alright with that off I get:\n\n'Nested Loop Anti Join (cost=25.76..21210.81 rows=16684 width=106) (actual\ntime=0.688..249.585 rows=26994 loops=1)'\n' -> Hash Join (cost=25.34..7716.95 rows=21906 width=106) (actual\ntime=0.671..124.663 rows=28467 loops=1)'\n' Hash Cond: (cp.claim_id = x.claim_id)'\n' -> Seq Scan on claim_product cp (cost=0.00..6714.76 rows=202076\nwidth=106) (actual time=0.016..55.230 rows=202076 loops=1)'\n' -> Hash (cost=14.04..14.04 rows=904 width=16) (actual\ntime=0.484..0.484 rows=904 loops=1)'\n' Buckets: 1024 Batches: 1 Memory Usage: 51kB'\n' -> Seq Scan on _claims_to_process x (cost=0.00..14.04\nrows=904 width=16) (actual time=0.013..0.235 rows=904 loops=1)'\n' -> Index Only Scan using claim_product_reason_code_active_range_idx on\nclaim_product_reason_code r (cost=0.42..0.61 rows=1 width=16) (actual\ntime=0.004..0.004 rows=0 loops=28467)'\n' Index Cond: (claim_product_id = cp.claim_product_id)'\n' Filter: (claim_reason_type = ANY\n('{REJECT,OVERRIDE,RECALC}'::enum.claim_reason_type[]))'\n' Rows Removed by Filter: 1'\n' Heap Fetches: 27031'\n'Planning time: 0.984 ms'\n'Execution time: 253.976 ms'\n\nWay better.\n", "msg_date": "Thu, 16 Jun 2016 23:36:22 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> Alright with that off I get:\n> ...\n> Way better.\n\nOK, that confirms the suspicion that beta1's FK-join-estimation logic\nis the culprit here. We had already decided that that logic is broken,\nand there's a rewrite in progress:\nhttps://www.postgresql.org/message-id/15245.1466031608%40sss.pgh.pa.us\n\nI wonder though whether the rewrite will fix your example. 
Could you\neither make some test data available, or try HEAD + aforesaid patch \nto see if it behaves sanely on your data?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 16 Jun 2016 23:57:32 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "It'd be really hard to get a test dataset together I think, so I suppose\nI'll learn how to compile Postgres. Will let you know how that goes.\n", "msg_date": "Fri, 17 Jun 2016 00:07:17 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "I finally managed to get it compiled, patched, and working. It gave the\nsame plan with the same estimates as when I turned fkey_estimates off.\n\nI was wondering if I did things properly though, as I don't see the\nenable_fkey_estimates GUC any more. Was it removed?\n", "msg_date": "Fri, 17 Jun 2016 11:18:19 -0400", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 9.6 query slower than 9.5.3" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> I finally managed to get it compiled, patched, and working. It gave the\n> same plan with the same estimates as when I turned fkey_estimates off.\n\nOK, well, at least it's not making things worse ;-). But I think that\nthis estimation method isn't very helpful for antijoin cases anyway.\n\n> I was wondering if I did things properly though, as I don't see the\n> enable_fkey_estimates GUC any more. Was it removed?\n\nYes, that was only intended for debugging, and the consensus was that\nit probably shouldn't have been committed in the first place.\n\nThanks for taking the trouble to check this!\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 17 Jun 2016 11:22:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 9.6 query slower than 9.5.3" } ]
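[Editor's note: a short SQL sketch of the reproduction and the beta-only workaround discussed in the thread above. The query is a simplified form of Adam's original; the enum filter on claim_reason_type is dropped so the sketch stands alone. The enable_fkey_estimates GUC existed only in early 9.6 betas and was removed before release, as Tom Lane confirms at the end of the thread.]

-- Compare the planner's estimated rows against actual rows on the anti join;
-- on 9.6beta1 the FK-based estimate collapsed to rows=1 against ~27k actual.
EXPLAIN ANALYZE
SELECT cp.claim_id
FROM claim_product cp
JOIN _claims_to_process x ON cp.claim_id = x.claim_id
WHERE NOT EXISTS (
    SELECT 1
    FROM claim_product_reason_code r
    WHERE r.claim_product_id = cp.claim_product_id
);

-- Beta-only session setting used in the thread to confirm the diagnosis:
SET enable_fkey_estimates = off;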
[ { "msg_contents": "Hi,\n\nI am connecting to PostgreSQL 9.4 via an ODBC driver on a Windows machine from an MS VBA application. I am facing huge performance issues while inserting data continuously. On analysing the logs, there were around 90000 statements related to Save Points and Release Points.\n\nduration: 2.000 ms\n2016-06-17 12:45:02 BST LOG: statement: RELEASE _EXEC_SVP_1018CCF8\n2016-06-17 12:45:02 BST LOG: duration: 1.000 ms\n2016-06-17 12:45:05 BST LOG: statement: SAVEPOINT _EXEC_SVP_186EB5C8\n2016-06-17 12:45:05 BST LOG: duration: 0.000 ms\n\nI am guessing these statements are causing an overhead while inserting records into the table. Could you please let me know if I need to change any configuration settings to avoid creating any save points, as the transaction is handled in the application.\n\nThanks\n\nRegards,\nEisha Shetty\nACCENTURE | UK-NEWCASTLE\n+44 7741587433\[email protected]\n________________________________\nThis message is for the designated recipient only and may contain privileged, proprietary, or otherwise private information. If you have received it in error, please notify the sender immediately and delete the original. Any other use of the e-mail by you is prohibited. Where allowed by local law, electronic communications with Accenture and its affiliates, including e-mail and instant messaging (including content), may be scanned by our systems for the purposes of information security and assessment of internal compliance with Accenture policy. Accenture means Accenture (UK) Limited (registered number 4757301), registered in England and Wales with registered address at 30 Fenchurch Street, London EC3M 3BD.\n", "msg_date": "Fri, 17 Jun 2016 15:19:34 +0000", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Savepoint and Releasepoint in Logs" }, { "msg_contents": "On Fri, Jun 17, 2016 at 8:19 AM, <[email protected]> wrote:\n> Hi,\n>\n> I am connecting to PostgreSQL 9.4 via an ODBC driver on a Windows machine from\n> an MS VBA application. I am facing huge performance issues while inserting data\n> continuously. On analysing the logs, there were around 90000 statements\n> related to Save Points and Release Points.\n>\n>\n>\n> duration: 2.000 ms\n>\n> 2016-06-17 12:45:02 BST LOG: statement: RELEASE _EXEC_SVP_1018CCF8\n>\n> 2016-06-17 12:45:02 BST LOG: duration: 1.000 ms\n>\n> 2016-06-17 12:45:05 BST LOG: statement: SAVEPOINT _EXEC_SVP_186EB5C8\n>\n> 2016-06-17 12:45:05 BST LOG: duration: 0.000 ms\n>\n>\n>\n> I am guessing these statements are causing an overhead while inserting\n> records into the table.\n\n\nThe fact that there are 3 seconds between the release of one savepoint\nand the start of the next suggests that your client, not the server, is\nthe dominant bottleneck.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sun, 19 Jun 2016 09:55:16 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Savepoint and Releasepoint in Logs" } ]
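[Editor's note: a sketch contrasting the per-statement savepoint pattern visible in the log above with a batched, set-based alternative. The table and column are hypothetical; the savepoint name mimics the driver's. Whether the driver can be told to stop emitting these savepoints depends on its statement-rollback setting, which this thread does not cover; check the psqlODBC documentation rather than relying on this note.]

-- What the log shows, reduced to one iteration (repeated ~90000 times):
CREATE TABLE demo_insert_target (val integer);  -- hypothetical table
BEGIN;
SAVEPOINT _EXEC_SVP_1;
INSERT INTO demo_insert_target (val) VALUES (1);
RELEASE _EXEC_SVP_1;
COMMIT;

-- Batched alternative, if the application can be restructured: one set-based
-- statement, no per-row round trips or savepoint bookkeeping.
BEGIN;
INSERT INTO demo_insert_target (val)
SELECT g FROM generate_series(1, 90000) AS g;
COMMIT;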
[ { "msg_contents": "Hey guys!\n\nWe are looking for active beta participants to try out our new SaaS-BaseD\nMonitoring Tool. Our tool will monitor your databases and their underlying\n(virtual) infrastructure. If you would like to be a part of the beta, sign\nup here: http://www.bluemedora.com/early-access/\n\nWe will initially be supporting MSSQL, Oracle, PostgreSQL, Mongo, DynamoDB\nand MySQL (and MariaDB). And then we will add support to SQL Azure, DB2,\nAurora, RDS, etc. as the beta progresses.\n\nIf you have any questions, feel free to post and I will be happy to answer\nthem.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Looking-for-more-Beta-Users-tp5908721.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 20 Jun 2016 07:00:15 -0700 (MST)", "msg_from": "jonescam <[email protected]>", "msg_from_op": true, "msg_subject": "Looking for more Beta Users!" } ]
[ { "msg_contents": "I'm working with a third-party plugin that does chemistry. It's very fast.\nHowever, I'm trying to do a sampling query, such as the first 1% of the\ndatabase, and I just can't get the planner to create a good plan. Here is\nthe full query (the |>| operator does a subgraph match of a molecular\nsubstructure, in this case benzene, to find all molecules that have a\nbenzene ring in the database):\n\nexplain analyze select * from version where smiles |>| 'c1ccccc1';\n ...\n Index Scan using i_version_smiles on version (cost=3445.75..147094.03\nrows=180283 width=36) (actual time=336.493..10015.753\n rows=180973 loops=1)\n Index Cond: (smiles |>| 'c1ccccc1'::molecule)\n Planning time: 1.228 ms\n Execution time: 10371.903 ms\n\n\nTen seconds over 263,000 molecules, which is actually good. Now let's limit\nit to the first 1% of the rows:\n\nexplain analyze select * from version where smiles |>| 'c1ccccc1' and\nversion_id < 897630;\n...\n Index Scan using pk_version on version (cost=0.42..131940.05 rows=1643\nwidth=36) (actual time=6.122..2816.298 rows=2039 loops=1)\n Index Cond: (version_id < 897630)\n Filter: (smiles |>| 'c1ccccc1'::molecule)\n Rows Removed by Filter: 590\n Planning time: 1.217 ms\n Execution time: 2822.117 ms\n\n\nNotice that it doesn't use the i_version_smiles index at all, but instead\napplies the very expensive filter |>| to all 1% of the database. So instead\nof getting a 100x speedup, we only get a 3x speedup, about 30x worse than\nwhat is theoretically possible.\n\nThe production database is about 50x larger than this test database.\n\nMaybe I misunderstand what's possible with indexes, but it seems to me that\nit could first do the pk_version index scan, and then use the results of\nthat to do a limited index-scan search using the i_version_smiles index. Is\nthat not possible? Is each index scan \"self contained\", that is, it doesn't\ntake into account the results of another index scan?\n\nThanks,\nCraig\n", "msg_date": "Wed, 22 Jun 2016 09:03:35 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Can't get two index scans" }, { "msg_contents": "On Wed, Jun 22, 2016 at 9:03 AM, Craig James <[email protected]> wrote:\n> I'm working with a third-party plugin that does chemistry.\n\n\nOut of personal/professional curiosity, which one are you using, if\nthat can be disclosed?\n\n....\n\n\n> Notice that it doesn't use the i_version_smiles index at all, but instead\n> applies the very expensive filter |>| to all 1% of the database.\n\nYou have to tell the database that |>| is very expensive, by setting\nthe COST of the function which it invokes. You can get the name of\nthe function with:\n\nselect oprcode from pg_operator where oprname ='|>|' ;\n\n(taking care for schema and overloading, etc.)\n\nI would set the COST to at least 1000, probably more.\n\n> So instead\n> of getting a 100x speedup, we only get a 3x speedup, about 30x worse than\n> what is theoretically possible.\n>\n> The production database is about 50x larger than this test database.\n>\n> Maybe I misunderstand what's possible with indexes, but it seems to me that\n> it could first do the pk_version index scan, and then use the results of\n> that to do a limited index-scan search using the i_version_smiles index. Is\n> that not possible?\n\nI don't think it can do that. What it can do is run each index scan\nto completion as a bitmap index scan, and then AND the bitmaps\ntogether.\n\nYou might be able to build a multiple column index on (smiles,\nversion_id) and have it do the right thing automatically. Whether that\nis possible, and if so how effective it will actually be, would depend\non the implementation details of |>|. My gut feeling is that it would\nnot work well.\n\nYou could partition your data on version_id. 
Then it would keep a\nseparate smiles index on each partition, and would only consult those\nindexes which can possibly contain (according to the CHECK\nconstraints) the version_ids of interest in the query.\n\nAlso, if you tune your system using benzene, you will probably\narrive at a place not optimal for more realistic queries.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 22 Jun 2016 11:36:31 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't get two index scans" }, { "msg_contents": "On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Jun 22, 2016 at 9:03 AM, Craig James <[email protected]>\n> wrote:\n> > I'm working with a third-party plugin that does chemistry.\n>\n>\n> Out of personal/professional curiosity, which one are you using, if\n> that can be disclosed?\n>\n\nChemAxon (JChem)\n\n\n> > Notice that it doesn't use the i_version_smiles index at all, but instead\n> > applies the very expensive filter |>| to all 1% of the database.\n>\n> You have to tell the database that |>| is very expensive, by setting\n> the COST of the function which it invokes. You can get the name of\n> the function with:\n>\n> select oprcode from pg_operator where oprname ='|>|' ;\n>\n> (taking care for schema and overloading, etc.)\n>\n> I would set the COST to at least 1000, probably more.\n>\n\nI'll try this. I've done it with my own functions, but didn't realize you\ncould do it with existing operators.\n\n\n> > So instead\n> > of getting a 100x speedup, we only get a 3x speedup, about 30x worse than\n> > what is theoretically possible.\n> >\n> > The production database is about 50x larger than this test database.\n> >\n> > Maybe I misunderstand what's possible with indexes, but it seems to me\n> that\n> > it could first do the pk_version index scan, and then use the results of\n> > that to do a limited index-scan search using the i_version_smiles index.\n> Is\n> > that not possible?\n>\n> I don't think it can do that. What it can do is run each index scan\n> to completion as a bitmap index scan, and then AND the bitmaps\n> together.\n>\n\nThat won't help in this case because the index scan of the molecule table\ncan be slow.\n\n\n>\n> You might be able to build a multiple column index on (smiles,\n> version_id) and have it do the right thing automatically. Whether that\n> is possible, and if so how effective it will actually be, would depend\n> on the implementation details of |>|. My gut feeling is that it would\n> not work well.\n>\n\nNo, because it's not a normal exact-match query. The analogy would be that\nyou can build a multi-column index for an '=' operation on a string, but it\nwouldn't help if you were doing an '~' or 'LIKE' operation.\n\n\n> You could partition your data on version_id. Then it would keep a\n> separate smiles index on each partition, and would only consult those\n> indexes which can possibly contain (according to the CHECK\n> constraints) the version_ids of interest in the query.\n>\n\nI actually struck on this solution today and it works well. Instead of\npartitioning on the version_id, I added a column \"p\" (\"partition\") and used\n20 partitions where p is a random number from 0..19. 
This has the advantage\nthat as new compounds are added, they are distributed throughout the\npartitions, so each partition remains a 5% sample of the whole.\n\nIt's pretty cool. A full-table scan of all partitions is slightly slower,\nbut if I want to do a sample and limit the run time, I can query with p = 0.\n\nIt also has another huge benefit for a web site: I can give the user a\nprogress-bar widget by querying the partitions one-by-one and updating the\nprogress in 5% increments. This is really critical for long-running\nqueries.\n\n\n> Also, if you tune your system using benzene, you will probably\n> arrive at a place not optimal for more realistic queries.\n>\n\nNo, it's actually very useful. I'm not interested in optimizing typical\nqueries, but rather in limiting worst-case queries. This is a public web\nsite, and you never know what molecule someone will draw. In fact, it's\nquite common for visitors to draw silly molecules like benzene or methane\nthat would result in a heavy load if left to run to completion.\n\nThanks for your help!\nCraig\n\n\n> Cheers,\n>\n> Jeff\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n", "msg_date": "Wed, 22 Jun 2016 21:36:11 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't get two index scans" }, { "msg_contents": "On Wed, Jun 22, 2016 at 9:36 PM, Craig James <[email protected]> wrote:\n> On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes <[email protected]> wrote:\n\n>> You might be able to build a multiple column index on (smiles,\n>> version_id) and have it do the right thing automatically. Whether that\n>> is possible, and if so how effective it will actually be, would depend\n>> on the implementation details of |>|. My gut feeling is that it would\n>> not work well.\n>\n>\n> No, because it's not a normal exact-match query. The analogy would be that\n> you can build a multi-column index for an '=' operation on a string, but it\n> wouldn't help if you were doing an '~' or 'LIKE' operation.\n\nThat restriction only applies to BTREE indexes. GiST and GIN indexes\nwork differently, and don't have that particular limitation. They can\nuse the second column of the index even if the first column is not\nused, or (in the case of GiST at least) the first column is used with\nan operator other than equality.\n\nThe main problems I've run into with GiST indexes is that they\nsometimes take absurdly long times to build; and that the\nsplit-picking algorithm might arrive at buckets ill-suited to your\nqueries so that the consultation of the index \"works\" in the sense\nthat it discards most of the non-matching rows without inspecting\nthem, but isn't actually faster. Unfortunately, both of these problems\nseem hard to predict. You pretty much have to try it (on a full-size\ndata set, as scaling up from toy data sets is also hard to predict)\nand see how it does.\n\nBut, JChem's cartridge is apparently not using a GiST index, which is\nwhat my first guess was. 
I can't really figure out what PostgreSQL\nAPI it is tapping into, so whatever it is very well might not support\nmulti-column indexes at all.\n\n>> You could partition your data on version_id. Then it would keep a\n>> separate smiles index on each partition, and would only consult those\n>> indexes which can possibly contain (according to the CHECK\n>> constraints) the version_ids of interest in the query.\n>\n>\n> I actually struck on this solution today and it works well. Instead\n> partitioning on the version_id, I added a column \"p\" (\"partition\") and used\n> 20 partitions where p is a random number from 0..19. This has the advantage\n> that as new compounds are added, they are distributed throughout the\n> partitions, so each partition remains a 5% sample of the whole.\n>\n> It's pretty cool. A full-table scan of all partitions is slightly slower,\n> but if I want to do a sample and limit the run time, I can query with p = 0.\n>\n> It also has another huge benefit for a web site: I can give the user a\n> progress-bar widget by querying the partitions one-by-one and updating the\n> progress in 5% increments. This is really critical for long-running queries.\n\nThat does sound pretty useful. You could potentially get the same\nbenefit with the multicolumn GiST index, without needing to partition\nthe table. In a vague hand-wavy way, building an index \"USING GIST\n(p, smiles jchem_op_class)\" is like using p to automatically partition\nthe index so it acts like individual indexes over smiles for each\nvalue of p. But it is unlikely to ever be as efficient as\nwell-crafted explicit partitions, and once you have gone to the effort\nof setting them up there would probably be no point in trying to\nchange over.\n\n\n>> Also, if you tune your system using benzene, you will be probably\n>> arrive at a place not optimal for more realistic queries.\n>\n>\n> No, it's actually very useful. I'm not interested in optimizing typical\n> queries, but rather in limiting worst-case queries. This is a public web\n> site, and you never know what molecule someone will draw. In fact, it's\n> quite common for visitors to draw silly molecules like benzine or methane\n> that would result in a heavy load if left to run to completion.\n\nMy benefit in having a non-public web site, is that I can just walk\nover to their desk and yell at the people who do things like that to\nmy database.\n\n(And I promise to stop searching for methane on your web site.)\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 23 Jun 2016 08:47:56 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Can't get two index scans" }, { "msg_contents": "On Thu, Jun 23, 2016 at 8:47 AM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Jun 22, 2016 at 9:36 PM, Craig James <[email protected]>\n> wrote:\n> > On Wed, Jun 22, 2016 at 11:36 AM, Jeff Janes <[email protected]>\n> wrote:\n> ...\n> But, JChem's cartridge is apparently not using a GiST index, which is\n> what my first guess was. I can't really figure out what PostgreSQL\n> API it is tapping into, so whatever it is very well might not support\n> multi-column indexes at all.\n>\n\nThey run a separate chemistry server process, which I believe is a Java\napp. 
My guess (and it's strictly a guess) is that they use classic chemical\nbitmap fingerprints, and the index scan runs in their separate process and\nonly returns the index-scan results.\n\nCraig\n", "msg_date": "Thu, 23 Jun 2016 12:52:25 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Can't get two index scans" } ]
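A minimal sketch of the COST change discussed in this thread. The function name below (jchem_contains) is hypothetical; the real name and argument types must come from the pg_operator lookup:

SELECT oprcode FROM pg_operator WHERE oprname = '|>|';
-- suppose it reports a function "jchem_contains"; raising its cost tells the
-- planner to apply cheaper quals (such as version_id filters) before it:
ALTER FUNCTION jchem_contains(text, text) COST 10000;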
[ { "msg_contents": "Hi,\nI've postgres 9.5.3 installed on win7 64 bit, and ubuntu 16.04tls 64 \nbit, same SSD (evo 850 pro) , two different partitions. Laptop is 3.8Ghz.\nI've in each partition a simple database with one table called data256 \nwith one column of 256 char.\nI wrote a program using libpq which:\n1 connects to 127.0.0.1 to the server\n2 drops and recreates the table;\n3 executes 2000 times the exec() function with the command \"INSERT INTO \ndata256 VALUES ('AAAAAA...... 250 times')\"\nI want to commit after every insert of course.\nThe program is the same both in win and linux; in ansi c, so it's portable.\n\nPerformance:\nWin7: 8000 write/sec\nLinux: 419 write/sec\n\nI don't figure out why such a difference. Also what should I expect? \nWhich one is reasonable?\n\nI compared the two postgresql.conf, they're identical (except obvious \nthings), they're the default ones, I didn't touch them. I just tried to \ndisable ssl in one because it was set but nothing changes.\nI didn't go into deeper analysis because the source C file used for test \nis the same and the two postgresql.conf are identical.\n\nThen, in order to test write / flush without postgres, I made another C \nprogram, to open a file in writing, and for 1000 times : write 256 bytes \nand flush them (using fsync in linux and FlushFileBuffers in win).\nWin7: 200 write/sec\nLinux: 100 write/sec\n\n\n\n\nThanks\nPupillo\n\n\n\n\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 25 Jun 2016 18:19:50 +0200", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "can't explain commit performance win7 vs linux : 8000/s vs 419/s" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> Performance:\n> Win7: 8000 write/sec\n> Linux: 419 write/sec\n\nMy immediate reaction to that is that Windows isn't actually writing\nthe data to disk when it should in order to guarantee that commits\nare persistent. There are multiple layers that might be trying to\noptimize away the writes, and I don't know enough about Windows to\nhelp you debug it. But see\n\nhttps://www.postgresql.org/docs/9.5/static/wal-reliability.html\n\nfor some discussion.\n\n> I don't figure out why such a difference. Also what should I expect? \n> Which one is reasonable?\n\nThe lower number sounds a lot more plausible for laptop-grade hardware.\nIf you weren't using an SSD I wouldn't believe that one was doing\npersistent commits either.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 25 Jun 2016 14:08:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't explain commit performance win7 vs linux : 8000/s vs 419/s" }, { "msg_contents": "On Sat, Jun 25, 2016 at 9:19 AM, [email protected]\n<[email protected]> wrote:\n> Hi,\n> I've postgres 9.5.3 installed on win7 64 bit, and ubuntu 16.04tls 64 bit,\n> same SSD (evo 850 pro) , two different partitions. 
Laptop is 3.8Ghz.\n> I've in each partition a simple database with one table called data256 with\n> one column of 256 char.\n> I wrote a program using libpq which:\n> 1 connects to 127.0.0.1 to the server\n> 2 drops and recreates the table;\n> 3 executes 2000 times the exec() function with the command \"INSERT INTO\n> data256 VALUES ('AAAAAA...... 250 times')\"\n> I want to commit after every insert of course.\n> The program is the same both in win and linux; in ansi c, so it's portable.\n>\n> Performance:\n> Win7: 8000 write/sec\n> Linux: 419 write/sec\n>\n> I don't figure out why such a difference. Also what should I expect? Which\n> one is reasonable?\n\nThe Win7 numbers seem suspiciously high to me, even for SSD. Have you\ntried holding the power button until it hard-resets the computer in\nthe middle of a run (preferably several runs going in parallel), and\nsee if it comes back up without corruption and contains consistent data?\nAnd then repeat that several times?\n\n\n> I compared the two postgresql.conf, they're identical (except obvious\n> things), they're the default ones, I didn't touch them.\n\nWe don't know which things are obvious to you.\n\n>\n> Then, in order to test write / flush without postgres, I made another C\n> program, to open a file in writing, and for 1000 times : write 256 bytes and\n> flush them (using fsync in linux and FlushFileBuffers in win).\n> Win7: 200 write/sec\n> Linux: 100 write/sec\n\nRather than rolling your own program, can you run pg_test_fsync on each?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 25 Jun 2016 12:23:42 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't explain commit performance win7 vs linux : 8000/s\n vs 419/s" }, { "msg_contents": "my guess:\n- maybe NTFS compression is enabled? [ \"Compress this drive to save\ndisk space\" ? ] [ your test data is ideal for compression: VALUES\n('AAAAAA...... 250 times') ]\n- or Windows Samsung Magician extreme settings? or RAPID mode cache\nenabled?\n\n\"RAPID mode is a RAM caching feature. Samsung's RAPID white paper states\nthat RAPID works by analyzing system traffic and leverages spare system\nresources (DRAM and CPU) to deliver read acceleration through intelligent\ncaching of hot data and write optimization through tight coordination with\nthe SSD.\"\nhttp://www.thessdreview.com/software-2/samsung-magician-4-5-rapid-mode-2-1-testing/\n\n\non Ubuntu 16.04 (+ Samsung SSD 840 PRO) I use \"Samsung SSD Magician DC\"\ntrim optimization [ \"sudo ./magician -d 0 -T \" ]\nhttp://jcutrer.com/howto/linux/samsung-magician-command-line-linux\n\n$ sudo ./magician\n================================================================================================\nSamsung(R) SSD Magician DC Version 1.0\nCopyright (c) 2014 Samsung Corporation\n================================================================================================\nUsage: ./magician [operation] ..\n\nAllowed Operations:\n-L[ --list]              Shows a disk(s) attached to the system.\n-F[ --firmware-update]   Updates firmware to specified disk.\n-E[ --erase]             Securely Erases all data from specified disk.\n-O[ --over-provision]    Performs one of the Over-Provisioning related\n                         operations on specified disk.\n-T[ --trim]              Optimizes specified disk.\n-S[ --smart]             Shows S.M.A.R.T values of specified disk.\n-M[ --setmax]            Performs SetMax related operations on specified\ndisk.\n-W[ --writecache]        Enables/Disables Write Cache on specified disk.\n-X[ --sctcachestate]     Gets the SCT write cache state for specified disk.\n-C[ --command-history]   Shows history of the previously executed commands.\n-I[ --info]              Displays the disk details to the user.\n-license                 Shows the End User License Agreement.\n-H[ --help]              Shows detailed Help.\n\n\nregards,\n Imre\n\n\n2016-06-25 18:19 GMT+02:00 [email protected] <[email protected]>:\n\n> Hi,\n> I've postgres 9.5.3 installed on win7 64 bit, and ubuntu 16.04tls 64 bit,\n> same SSD (evo 850 pro) , two different partitions. Laptop is 3.8Ghz.\n> I've in each partition a simple database with one table called data256\n> with one column of 256 char.\n> I wrote a program using libpq which:\n> 1 connects to 127.0.0.1 to the server\n> 2 drops and recreates the table;\n> 3 executes 2000 times the exec() function with the command \"INSERT INTO\n> data256 VALUES ('AAAAAA...... 250 times')\"\n> I want to commit after every insert of course.\n> The program is the same both in win and linux; in ansi c, so it's portable.\n>\n> Performance:\n> Win7: 8000 write/sec\n> Linux: 419 write/sec\n>\n> I don't figure out why such a difference. Also what should I expect? Which\n> one is reasonable?\n>\n> I compared the two postgresql.conf, they're identical (except obvious\n> things), they're the default ones, I didn't touch them. I just tried to\n> disable ssl in one because it was set but nothing changes.\n> I didn't go into deeper analysis because the source C file used for test\n> is the same and the two postgresql.conf are identical.\n>\n> Then, in order to test write / flush without postgres, I made another C\n> program, to open a file in writing, and for 1000 times : write 256 bytes\n> and flush them (using fsync in linux and FlushFileBuffers in win).\n> Win7: 200 write/sec\n> Linux: 100 write/sec\n>\n>\n> Thanks\n> Pupillo\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Sun, 26 Jun 2016 00:27:57 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't explain commit performance win7 vs linux : 8000/s\n vs 419/s" } ]
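As a follow-up to the durability question raised in this thread, a quick sketch for comparing the commit-durability settings on both installations before trusting either number (these are standard server settings):

SHOW fsync;              -- must be on for commits to survive a crash
SHOW synchronous_commit; -- off trades durability for exactly this kind of speedup
SHOW wal_sync_method;    -- the effective default differs between Windows and Linux

pg_test_fsync, suggested above, then reports the raw flush rate each sync method can sustain on the same drive.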
[ { "msg_contents": "Hello Guys,\n\nI found very strange behavior on one of the master-slave clusters.\n\nIn case i'm running query several times in the same session i'm getting performance increase on the second request and huge performance decrease starting from 6 request. Behavior in 100% situations repeatable but only for this type of query. Issue happens on both master and slave systems. There is no such issue on other clusters.\n\nQuery:\nSELECT * FROM get_results(1, 1, 1, '1', '1234567890', '1234567890', 2*get_rate(26, 26, now()), ARRAY[1], '{}', 13, ARRAY[1, 2, 3], ARRAY[1, 2, 3]) WHERE a AND b AND c AND d AND NOT e AND f\n\nQuery is as simple as running a procedure with additional filters afterwards. Procedure inside calls pretty complex query joining 7-8 tables and several side-procedures. get_result returns ~ 30 columns and ~15-30 rows  as a result.\n\nExplain shows interesting picture.\n\nFor the first call:\n\nRows Removed by Filter: 14\nBuffers: shared hit=7599\nPlanning time: 0.190 ms\nExecution time: 86.083 ms\n\nFor the second-fifth calls:\n\n Rows Removed by Filter: 14\n Buffers: shared hit=4804\n Planning time: 0.113 ms\n Execution time: 57.835 ms\n\nFor the six and afterwards:\n\n Rows Removed by Filter: 14\n Buffers: shared hit=24474\n Planning time: 0.073 ms\n Execution time: 217.545 ms\n\nSo we can see consistent pattern between 'shared hit' and query performance.\nI tried to change definition of the function from STABLE to VOLATILE and tried to set small and very big costs values but with no luck.\n\nServer:\n256Gb RAM\n4 x Intel(R) Xeon(R) CPU E7- 8837 @ 2.67GHz\nRAID 10 from 8 x TOSHIBA AL13SXB600N based on MegaRAID\n\n\npostgresql.conf (9.4.7)\nshared_buffers = 64578MB\nwork_mem = 64MB\nmaintenance_work_mem = 256MB\neffective_cache_size = 129156MB\n\n\nAny idea how I should investigate it further or what it could be?\n\nThanks in advance!\n\n\nSuren Arustamyan\[email protected]\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 27 Jun 2016 14:43:33 +0300", "msg_from": "=?UTF-8?B?U3VyZW4gQXJ1c3RhbXlhbg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?B?W1BFUkZPUk1dIENhY2hlIHBlcmZvcm1hbmNlIGRlY3JlYXNlcw==?=" } ]
[ { "msg_contents": "testing\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 28 Jun 2016 10:10:07 -0400", "msg_from": "George Neuner <[email protected]>", "msg_from_op": true, "msg_subject": "testing - ignore" } ]
[ { "msg_contents": "Hi,\n\nI have a weird slow query issue I can't seem to find the cause of, so I'm\nhoping someone here can shed some light on this.\n\nContext:\nI have an application which (among other things) implements a sort of job\nqueue using postgres for persistence. While I know a RDBMS is not necessarily\nthe most ideal tool for this, there are some constraints and requirements\nwhich make including a messaging/queueing middleware less than trivial.\nTo reduce locking issues we make use of advisory locks and CTE queries to\nselect jobs from the queue (inspired by\nhttps://github.com/chanks/que/blob/master/lib/que/sql.rb) as well as some\nexplicit or optimistic locking during job processing. As the volume of jobs\nto process isn't extremely high, performance seems very good - except for the\nissue I'm describing below.\n\nThe problem:\nMost queries execute fast, however sometimes queries on the job table (which\ncontains the job queue) take exactly 122 seconds (+ 0-50ms) to execute for no\nclear reason. I have set log_min_duration_statement to 200ms, and the only\nqueries that are logged are the 2-minute queries. I have also enabled\nlog_lock_waits, but no lock waits are logged. Whereas extremely high\nperformance is not to be expected, a query taking two minutes is way too\nmuch - especially seeing as other concurrent queries seem to be completing\nperfectly on time.\nAs a reference, in a time span of less than 4 minutes the application\nprocesses a test load of 2000+ jobs going through 3 stages in the job\nprocessing 'pipeline' - each stage using the CTE query to pick up a job. So\nthat's 6000+ executions of the CTE + several 1000 executions of other queries,\nof which only a few (usually 1-5 per test run) inexplicably take two minutes\nto complete. The offending queries seem to be quite random: SELECT ... 
FOR\nUPDATE queries, CTE queries, and sometimes even regular SELECTs querying our\njob table take inexplicably long.\n\n\nPostgreSQL version:\nPostgreSQL 9.3.4, compiled by Visual C++ build 1600, 64-bit\n\nHow PostgreSQL was installed: through GUI installer from EnterpriseDB\n\nChanges made to the settings in the postgresql.conf file:\n name | current_setting | source\n-----------------------------+----------------------+----------------------\n application_name | psql | client\n client_encoding | WIN1252 | client\n DateStyle | ISO, DMY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n lc_messages | English_Ireland.1252 | configuration file\n lc_monetary | English_Ireland.1252 | configuration file\n lc_numeric | English_Ireland.1252 | configuration file\n lc_time | English_Ireland.1252 | configuration file\n listen_addresses | * | configuration file\n log_autovacuum_min_duration | 100ms | configuration file\n log_destination | stderr | configuration file\n log_line_prefix | %m | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_statement | 200ms | configuration file\n log_timezone | Europe/Brussels | configuration file\n logging_collector | on | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n port | 5432 | configuration file\n shared_buffers | 128MB | configuration file\n tcp_keepalives_count | 0 | configuration file\n tcp_keepalives_idle | 30 | configuration file\n tcp_keepalives_interval | 30 | configuration file\n TimeZone | Europe/Brussels | configuration file\n\nOperating system and version: Windows 8.1 64bit (version 6.3 build 9600)\n\nProgram used to connect to PostgreSQL: custom Java application using\nHibernate, HikariCP and the pgjdbc-ng 0.6 driver. 
The application runs on the\nsame machine as the PostgreSQL database (at least for now, during development)\n\nRelevant or unusual information in the PostgreSQL server logs?: aside from the\nslow query log, there is nothing special in the logs I think.\nFor example, this is the entirety of what was logged during one test run - I\njust shortened the select query a bit (replacing a list of fields with *) for\nreadability as hibernate doesn't produce the most clean queries:\n2016-06-27 22:53:06.764 CEST LOG: database system was shut down at\n2016-06-27 22:53:04 CEST\n2016-06-27 22:53:06.892 CEST LOG: database system is ready to accept\nconnections\n2016-06-27 22:53:07.039 CEST LOG: autovacuum launcher started\n2016-06-27 22:55:07.764 CEST LOG: automatic analyze of table\n\"app.public.job\" system usage: CPU 0.00s/0.06u sec elapsed 0.20 sec\n2016-06-27 22:57:09.570 CEST LOG: duration: 122006.000 ms bind\ncached-1453392550: select job.* from job inner join project on\njob.project_id = project.id inner join category on project.category_id\n= category.id inner join type on job.type_id = type.id where job.id=$1\n2016-06-27 22:57:09.570 CEST DETAIL: parameters: $1 = '34309'\n2016-06-27 22:57:09.572 CEST LOG: duration: 122011.000 ms bind\ncached--195714239: WITH RECURSIVE jobs AS (SELECT (i).*,\npg_try_advisory_lock((i).id) AS locked FROM ( SELECT i FROM\njob_priority AS i WHERE status = $1 ORDER BY priority DESC, received\nASC LIMIT 1) AS i1 UNION ALL (SELECT (i).*,\npg_try_advisory_lock((i).id) AS locked FROM ( SELECT ( SELECT i FROM\njob_priority AS i WHERE status = $2 AND (-priority, received) >\n(-jobs.priority, jobs.received) ORDER BY priority DESC, received ASC\nLIMIT 1) AS i FROM jobs WHERE jobs.id IS NOT NULL LIMIT 1) AS i1 ) )\nSELECT id, path, status, received, attempts, last_modified, version\nFROM jobs WHERE locked LIMIT 1\n2016-06-27 22:57:09.572 CEST DETAIL: parameters: $1 = 'RECEIVED', $2\n= 'RECEIVED'\n2016-06-27 22:58:07.332 CEST LOG: automatic analyze of table\n\"app.public.job \" system usage: CPU 0.00s/0.06u sec elapsed 0.11 sec\n2016-06-27 22:59:07.693 CEST LOG: automatic vacuum of table\n\"app.public.job \": index scans: 1\n pages: 0 removed, 140 remain\n tuples: 136 removed, 2391 remain\n buffer usage: 546 hits, 0 misses, 209 dirtied\n avg read rate: 0.000 MB/s, avg write rate: 3.388 MB/s\n system usage: CPU 0.00s/0.00u sec elapsed 0.48 sec\n\n\nFull table and index schema:\nCREATE TABLE job\n(\n id INTEGER PRIMARY KEY,\n a VARCHAR(64) NOT NULL,\n path VARCHAR(512) NOT NULL UNIQUE,\n status VARCHAR(20) NOT NULL,\n project_id INTEGER NOT NULL REFERENCES project(id),\n type_id INTEGER NOT NULL REFERENCES type(id),\n b INTEGER NOT NULL,\n c INTEGER NOT NULL,\n d VARCHAR(32) NOT NULL,\n received TIMESTAMP NOT NULL,\n done TIMESTAMP,\n e VARCHAR(512),\n error VARCHAR(512),\n error_time TIMESTAMP,\n attempts INTEGER NOT NULL,\n last_modified TIMESTAMP,\n version INTEGER\n);\n\\d+ output:\n Column | Type | Modifiers | Storage | Stats\n------------------+-----------------------------+-----------+----------+------\n target | Description\n--------+-------------\n id | integer | not null | plain |\n a | character varying(64) | not null | extended |\n path | character varying(512) | not null | extended |\n status | character varying(20) | not null | extended |\n project_id | integer | not null | plain |\n type_id | integer | not null | plain |\n b | integer | not null | plain |\n c | integer | not null | plain |\n d | character varying(32) | not null | extended |\n received | timestamp 
without time zone | not null | plain |\n done | timestamp without time zone | | plain |\n e | character varying(512) | | extended |\n error | character varying(512) | | extended |\n error_time | timestamp without time zone | | plain |\n attempts | integer | not null | plain |\n last_modified | timestamp without time zone | | plain |\n version | integer | | plain |\nIndexes:\n \"job_pkey\" PRIMARY KEY, btree (id)\n \"job_path_key\" UNIQUE CONSTRAINT, btree (path)\nForeign-key constraints:\n \"job_project_id_fkey\" FOREIGN KEY (project _id) REFERENCES project (id)\n \"job_type_id_fkey\" FOREIGN KEY (type_id) REFERENCES type(id)\nHas OIDs: no\n\nAdditional information on the table:\n- a few columns contain mostly nulls (i.e. a column indicating any error\nmessages which may occur during processing of the job)\n- Due to the nature of the application the table does get a high number of\ninserts/updates in a short time. As mentioned above, in the test in which I\nobserve this issue the job table grows from 0 to 2000-2500 rows very quickly,\nwith each row receiving on average 4 more updates.\n- Note: the CTE selects from a job_priority view which is a JOIN of 3 tables:\njob - project - category, with categories having a priority which impacts job\nprocessing order. The test case I am currently using comprises only 5 projects\nand a single category.\n\nExplain: unfortunately it's not always the same query that causes issues. If\nyou want I can still run the most common queries through EXPLAIN ANALYZE but\nhaven't included this output here as this email is already long enough...\n\nHistory: this is during development of an application, so there is no\nproduction history\n\nHardware: i7 CPU, 15 GB of RAM, SSD - this is a development machine, so\nthere's lots of other stuff running too (but no other databases in active use)\n\n\nThings I tried:\n- Upgrading to PostgreSQL 9.5.3, compiled by Visual C++ build 1800, 64-bit\nThis did not solve the problem, queries still take 122 seconds from time to\ntime\n- Enable auto_explain to obtain more information. Unfortunately, for the\n2-minute queries no plan is logged. If I manually force a query to take a long\ntime (eg SELECT pg_sleep(5)) or if I set auto_explain.log_min_duration low\nenough plans are logged for slow queries, _except_ for these 2-minute\nmysteries. This makes it very hard to see exactly why the query is taking so\nlong. Is this a bug in auto_explain? Are cached plans never logged (this is\nnot indicated in the documentation)? Some other reason why no plan would be\nlogged? Note that this happens both in PostgreSQL 9.3.4 and 9.5.3\n- Running some queries through EXPLAIN ANALYZE; however, these execute quickly\n(a few ms) and don't seem to indicate a clear problem to me\n\n\nWhen I run my test there is very high I/O on the system for a few minutes.\nHowever, I am reluctant to point to that as the sole cause for the following\nreasons:\n- other concurrent queries execute fast (it's a multi threaded application and\nother threads continue normally)\n- if I look at the resource monitor, some queries remain hung up even after\ndisk I/O has quieted down\n- if I/O performance were the cause I would expect to see a lot more variance,\nnow the queries always take exactly 122s (with only about 50ms variance)\n\nGiven the reasonably small dataset (a pg_dump of the full database containing\nabout 2500 jobs is less than 1MB) I would think that the whole database fits\nin memory anyway, making this issue all the more puzzling. 
Have I missed\nsomething obvious?\n\nBest regards,\nRoel\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 03:24:14 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Random slow queries" }, { "msg_contents": "On Tue, Jun 28, 2016 at 8:24 PM, <[email protected]> wrote:\n\n> The problem:\n> Most queries execute fast, however sometimes queries on the job\n> table (which contains the job queue) take exactly 122 seconds\n> (+ 0-50ms) to execute for no clear reason.\n\n> Have I missed something obvious?\n\nPlease monitor for the start of such an event and capture the full\ncontents of pg_stat_activity and pg_locks during that 2 minute\nwindow.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 07:45:49 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\nSent: Tuesday, June 28, 2016 9:24 PM\nTo: [email protected]\nSubject: [PERFORM] Random slow queries\n\nHi,\n\nI have a weird slow query issue I can't seem to find the cause of, so I'm hoping someone here can shed some light on this.\n\n........................................\n........................................\n\n\nGiven the reasonably small dataset (a pg_dump of the full database containing about 2500 jobs is less than 1MB) I would think that the whole database fits in memory anyway, making this issue all the more puzzling. Have I missed something obvious?\n\nBest regards,\nRoel\n\n______________________________________________________________________________________________________________________\n\nDid you try the AUTO_EXPLAIN extension (https://www.postgresql.org/docs/9.3/static/auto-explain.html) for diagnostic purposes?\nWith auto_explain.log_analyze = true it will automatically log EXPLAIN ANALYZE output, rather than just EXPLAIN output. Turning this parameter ON permanently could have a negative impact on over-all performance, so use it judiciously.\n\nRegards,\nIgor\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 14:32:30 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "On 29 June 2016 at 14:45, Kevin Grittner <[email protected]> wrote:\n> Please monitor for the start of such an event and capture the full\n> contents of pg_stat_activity and pg_locks during that 2 minute\n> window.\n\nI had already looked at that manually and found nothing unusual. To be more\nthorough, I now had a batch file log the contents of pg_stat_activity and\npg_locks every 5 seconds.\n\nDuring my test run, there was one offending query invocation, a simple\nSELECT * FROM job WHERE field = $1\nOf course the actual query specified the list of fields as it was generated\nby Hibernate, but that is what it boils down to - no joins etc. 
The column on\nwhich was queried is a VARCHAR(64) NOT NULL, not unique nor indexed (though\nin practice most values are unique).\n\nI can of course post the full output somewhere if you want (though it's more\nthan 1000 lines). In the meantime, here is what I can gather from the output\nfor these 2 minutes:\n\n1) Looking at the logged pg_stat_activity data, there usually is only one\nother query executing: the select * from pg_stat_activity itself. Sometimes\nthere's another query or a connection idle in transaction (which disappears\nagain in the next output from pg_stat_activity), but given the volume of\nqueries that's executed this seems expected.\n\n2) Looking at pg_locks, the only locks that are consistently held throughout\nthose 2 minutes are these 5:\n- the locks held by the slow query itself: an AccessShareLock on the job\ntable, and a virtualxid ExclusiveLock (the query does not happen within a\ntransaction).\n- the advisory lock for the job this thread is processing\n- locks held by the SELECT * FROM pg_locks query (a lock on the pg_locks\ntable and a virtualxid lock)\n\nThese 5 locks are of course all granted. Other locks change every 5 seconds,\nand often no other locks are held at all.\n\nBest regards,\nRoel\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 20:01:38 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "On 29 June 2016 at 16:32, Igor Neyman <[email protected]> wrote:\n> Did you try AUTO_EXPLAIN extension\n> (https://www.postgresql.org/docs/9.3/static/auto-explain.html) for\n> diagnostic purposes?\n> With auto_explain.loganalize = true it will log automatically EXPLAIN\n> ANALYZE output, rather than just EXPLAIN output. Turning this parameter ON\n> permanently could have negative impact on over-all performance, so use it\n> judiciously.\n>\n> Regards,\n> Igor\n>\n\nYes, I tried that. As mentioned in my original email:\n> Things I tried:\n> ...\n> - Enable auto_explain to obtain more information. Unfortunately, for the\n> 2-minute queries no plan is logged. If I manually force a query to take a\n> long time (eg SELECT pg_sleep(5)) or if I set auto_explain.log_min_duration\n> low enough plans are logged for slow queries, _except_ for these 2-minute\n> mysteries. This makes it very hard to see exactly why the query is taking so\n> long. Is this a bug in auto_explain? Are cached plans never logged (this is\n> not indicated in the documentation)? Some other reason why no plan would be\n> logged? 
Note that this happens both in PostgreSQL 9.3.4 and 9.5.3\n\nTo be more precise, this is the config I used for auto_explain:\nshared_preload_libraries = 'auto_explain'\nauto_explain.log_min_duration = 2000\nauto_explain.log_analyze = true\nauto_explain.log_buffers = true\nauto_explain.log_timing = true\n\nOn a test query (eg using pg_sleep) there is logging for both the slow query\nlog and for auto_explain; for my actual problem queries there's only the slow\nquery log output, no auto_explain query plan.\nAs you can see from the logs I posted, it appears the execution plan was\ncached (LOG: duration: 122006.000 ms bind cached-1453392550: select....).\nMaybe those aren't processed by auto_explain?\n\nBest regards,\nRoel\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 20:04:31 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "[email protected] writes:\n> As you can see from the logs I posted, it appears the execution plan was\n> cached (LOG: duration: 122006.000 ms bind cached-1453392550: select....).\n> Maybe those aren't processed by auto_explain?\n\nIn that, \"cached-1453392550\" is a statement name given by the client;\nyou'd know better than we do where it's coming from, but it has no\nparticular significance to the server.\n\nThe real information here is that what is taking 122 seconds is the BIND\nstep of extended query protocol. That explains why auto_explain doesn't\nnotice it; auto_explain only instruments the execution phase. Typically,\nwhat takes time in the BIND step is planning the query, so it seems like\nwe have to conclude that something in planning is getting hung up. That\ndoesn't get us very much closer to an explanation though :-(.\n\nDon't know if it would be practical for you at all, but if you could\nattach to a process that's stuck like this with a debugger and get a stack\ntrace, that would probably be very informative.\n\nhttps://wiki.postgresql.org/wiki/Generating_a_stack_trace_of_a_PostgreSQL_backend\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 16:20:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "On Tue, Jun 28, 2016 at 6:24 PM, <[email protected]> wrote:\n>\n>\n> PostgreSQL version:\n> PostgreSQL 9.3.4, compiled by Visual C++ build 1600, 64-bit\n\nThe current minor version of that branch is 9.3.13, so you are 9 bug\nfix releases behind.\n\nI don't know if this matters, because I see that my first guess of\nyour problem was fixed in commit 4162a55c77cbb54acb4ac442e, which was\nalready included in 9.3.4. 
(Yes, you did say you also observed the\nproblem in 9.5.3, but still, why intentionally run something that far\nbehind?)\n\n\n> Things I tried:\n> - Upgrading to PostgreSQL 9.5.3, compiled by Visual C++ build 1800, 64-bit\n> This did not solve the problem, queries still take 122 seconds from time to\n> time\n\nCould you try 9.6beta2?\n\nIn particular, I am wondering if your problem was solved by\n\ncommit 8a7d0701814a4e293efad22091d6f6fb441bbe1c\nAuthor: Tom Lane <[email protected]>\nDate: Wed Aug 26 18:18:57 2015 -0400\n\n Speed up HeapTupleSatisfiesMVCC() by replacing the XID-in-progress test.\n\n\nI am not entirely sure why this (as opposed to the previous-mentioned\n4162a55c77cbb54) would fix a problem occurring during BIND, though.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 14:30:35 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> On Tue, Jun 28, 2016 at 6:24 PM, <[email protected]> wrote:\n>> PostgreSQL 9.3.4, compiled by Visual C++ build 1600, 64-bit\n\n> The current minor version of that branch is 9.3.13, so you are 9 bug\n> fix releases behind.\n\nDefinitely a fair complaint.\n\n> I don't know if this matters, because I see that my first guess of\n> your problem was fixed in commit 4162a55c77cbb54acb4ac442e, which was\n> already included in 9.3.4.\n\nThat commit could have helped if the problem were simply slow planning.\nBut I do not see how it explains a *consistent* 122-second delay.\nThat sounds very much like a timeout expiring someplace, and I have\nno idea where.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 17:44:24 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "On 29 June 2016 at 22:20, Tom Lane <[email protected]> wrote:\n> Don't know if it would be practical for you at all, but if you could\n> attach to a process that's stuck like this with a debugger and get a stack\n> trace, that would probably be very informative.\n\nIt seems I have found the cause of my issues: my antivirus software.\nWhen I tried to debug a stuck process, the currently executing code was in\nsome DLL without debugging information etc. When checking where the DLL came\nfrom, it appeared to be from the security software I had installed.\n\nI had been meaning to change because I have had some performance issues in\nthe past (nowhere near as bad as this issue though!) but hadn't yet gotten\naround to it. 
After switching this weekend, the issue went away completely.\n\nEven though I had previously noticed some performance issues with my\nantivirus, I must say this still is a very weird failure mode - especially\nas it still occurred if I disabled all realtime protection (one of the\nfirst things I tried, even if not mentioned in my earlier emails).\n\nThanks for the time spent trying to help!\n\nBest regards,\nRoel\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jul 2016 14:19:08 +0200", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Random slow queries" }, { "msg_contents": "On 6/29/16 1:01 PM, [email protected] wrote:\n> During my test run, there was one offending query invocation, a simple\n> SELECT * FROM job WHERE field = $1\n> Of course the actual query specified the list of fields as it was generated\n> by Hibernate, but that is what it boils down to - no joins etc. The column on\n> which was queried is a VARCHAR(64) NOT NULL, not unique nor indexed (though\n> in practice most values are unique).\n\nBe careful about your assumptions there... SELECT * can have radically\ndifferent performance than selecting individual fields. In particular,\nif you select something that's been toasted external, that's going to\nproduce its own index scan of the toast table, which could then run\ninto conflicts with vacuuming.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 09:51:14 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Random slow queries" } ]
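For the kind of capture suggested earlier in this thread, a minimal pair of queries that can be logged every few seconds (column names are valid for 9.3-9.5, where waiting is still a boolean):

SELECT now() AS sampled_at, pid, state, waiting,
       now() - query_start AS runtime, left(query, 120) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;

SELECT pid, locktype, mode, granted, relation::regclass AS relation
FROM pg_locks
WHERE NOT granted;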
[ { "msg_contents": "About a day ago, there seems to have been some trouble in the network of my database (postgresql 9.3). \n\nI’m running my db with a streaming replication setup with wall shipping. \n\nI sync wal logs to a mounted networkdrive using archive_command = 'rsync -a %p /mnt/wal_drive/wals/%f </dev/null’. Somehow this command was failing, leading to my pg_xlog dir building up (590Gb). I rebooted the server, and the archiving command seems to succeed now - however - After about an hour of running, the pg_xlog drive has not decreased in size - I would have expect that! I can see that lot’s of files get’s synced to the /mnt/wal_drive/wals dir, but somehow the pg_xlog dir is not swept (yet)? Will this happen automatically eventually, or do I need to do something manually?\n\nPS. I found this blog post http://www.hivelogik.com/blog/?p=513, but I’m unsure if it’s necessary and if it can be dangerous?\n\nBest\nAbout a day ago, there seems to have been some trouble in the network of my database (postgresql 9.3). I’m running my db with a streaming replication setup with wall shipping. I sync wal logs to a mounted networkdrive using archive_command = 'rsync -a %p /mnt/wal_drive/wals/%f </dev/null’. Somehow this command was failing, leading to my pg_xlog dir building up (590Gb). I rebooted the server, and the archiving command seems to succeed now - however - After about an hour of running, the pg_xlog drive has not decreased in size - I would have expect that! I can see that lot’s of files get’s synced to the /mnt/wal_drive/wals dir, but somehow the pg_xlog dir is not swept (yet)? Will this happen automatically eventually, or do I need to do something manually?PS. I found this blog post http://www.hivelogik.com/blog/?p=513, but I’m unsure if it’s necessary and if it can be dangerous?\n\nBest", "msg_date": "Wed, 29 Jun 2016 12:00:00 +0200", "msg_from": "=?utf-8?Q?Niels_Kristian_Schj=C3=B8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "pg_xlog dir not getting swept" }, { "msg_contents": "On Wed, Jun 29, 2016 at 3:00 AM, Niels Kristian Schjødt\n<[email protected]> wrote:\n> About a day ago, there seems to have been some trouble in the network of my\n> database (postgresql 9.3).\n>\n> I’m running my db with a streaming replication setup with wall shipping.\n>\n> I sync wal logs to a mounted networkdrive using archive_command = 'rsync -a\n> %p /mnt/wal_drive/wals/%f </dev/null’. Somehow this command was failing,\n> leading to my pg_xlog dir building up (590Gb). I rebooted the server, and\n> the archiving command seems to succeed now - however - After about an hour\n> of running, the pg_xlog drive has not decreased in size - I would have\n> expect that! I can see that lot’s of files get’s synced to the\n> /mnt/wal_drive/wals dir, but somehow the pg_xlog dir is not swept (yet)?\n> Will this happen automatically eventually, or do I need to do something\n> manually?\n\nSuccessfully archived files are only removed by the checkpointer. The\nlogic is quite complex and it can be very frustrating trying to\npredict exactly when any given file will get removed. You might want\nto run a few manual checkpoints to see if that cleans it up. 
But turn\non log_checkpoints and reload the configuration first.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 29 Jun 2016 12:19:50 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_xlog dir not getting swept" }, { "msg_contents": "Thanks, after a few more hours of waiting, things started get cleaned up. Things are back in order now.\n\nNiels Kristian Schjødt\nCo-founder & Developer\n\nE-Mail: [email protected] <mailto:[email protected]>\nMobile: +45 28 73 04 93\n\n\n\n\nwww.autouncle.com <http://www.autouncle.com/>\nFollow us: Facebook <https://www.facebook.com/AutoUncle> | Google+ <https://plus.google.com/+AutoUncle> | LinkedIn <http://www.linkedin.com/company/autouncle> | Twitter <https://twitter.com/AutoUncle> \nGet app for: iPhone & iPad <https://itunes.apple.com/en/app/autouncle/id533433816?mt=8> | Android <https://play.google.com/store/apps/details?id=com.autouncle.autouncle>\n\n\n\n\n> Den 29. jun. 2016 kl. 21.19 skrev Jeff Janes <[email protected]>:\n> \n> On Wed, Jun 29, 2016 at 3:00 AM, Niels Kristian Schjødt\n> <[email protected]> wrote:\n>> About a day ago, there seems to have been some trouble in the network of my\n>> database (postgresql 9.3).\n>> \n>> I’m running my db with a streaming replication setup with wall shipping.\n>> \n>> I sync wal logs to a mounted networkdrive using archive_command = 'rsync -a\n>> %p /mnt/wal_drive/wals/%f </dev/null’. Somehow this command was failing,\n>> leading to my pg_xlog dir building up (590Gb). I rebooted the server, and\n>> the archiving command seems to succeed now - however - After about an hour\n>> of running, the pg_xlog drive has not decreased in size - I would have\n>> expect that! I can see that lot’s of files get’s synced to the\n>> /mnt/wal_drive/wals dir, but somehow the pg_xlog dir is not swept (yet)?\n>> Will this happen automatically eventually, or do I need to do something\n>> manually?\n> \n> Successfully archived files are only removed by the checkpointer. The\n> logic is quite complex and it can be very frustrating trying to\n> predict exactly when any given file will get removed. You might want\n> to run a few manual checkpoints to see if that cleans it up. But turn\n> on log_checkpoints and reload the configuration first.\n> \n> Cheers,\n> \n> Jeff", "msg_date": "Thu, 30 Jun 2016 11:36:56 +0200", "msg_from": "=?utf-8?Q?Niels_Kristian_Schj=C3=B8dt?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pg_xlog dir not getting swept" } ]
[ { "msg_contents": "Hi.\n\nI'm trying to build an OLAP-oriented DB based on PostgresSQL.\n\nUser works with a paginated report in the web-browser. Interface allows \nto fetch data for a custom date-range selection,\ndisplay individual rows (20-50 per page) and totals (for entire \nselection, even not visible on the current page) and sorting by any column.\n\nThe main goal is to deliver results of the basic SELECT queries to the \nend-user in less than 2 seconds.\n\nI was able to achieve that except for one table (the biggest one).\n\nIt consist of multiple dimensions (date, gran, aid, pid, sid, fid, \nsubid) and metrics (see below).\nUser can filter by any dimension and sort by any metric.\n\nHere is a CREATE script for this table:\n\nCREATE TABLE stats.feed_sub\n(\n date date NOT NULL,\n gran interval NOT NULL,\n aid smallint NOT NULL,\n pid smallint NOT NULL,\n sid smallint NOT NULL,\n fid smallint NOT NULL,\n subid text NOT NULL,\n rev_est_pub real NOT NULL,\n rev_est_feed real NOT NULL,\n rev_raw real NOT NULL,\n c_total bigint NOT NULL,\n c_passed bigint NOT NULL,\n q_total bigint NOT NULL,\n q_passed bigint NOT NULL,\n q_filt_geo bigint NOT NULL,\n q_filt_browser bigint NOT NULL,\n q_filt_os bigint NOT NULL,\n q_filt_ip bigint NOT NULL,\n q_filt_subid bigint NOT NULL,\n q_filt_pause bigint NOT NULL,\n q_filt_click_cap_ip bigint NOT NULL,\n q_filt_query_cap bigint NOT NULL,\n q_filt_click_cap bigint NOT NULL,\n q_filt_rev_cap bigint NOT NULL,\n q_filt_erpm_floor bigint NOT NULL,\n c_filt_click_cap_ip bigint NOT NULL,\n c_filt_doubleclick bigint NOT NULL,\n c_filt_url_expired bigint NOT NULL,\n c_filt_fast_click bigint NOT NULL,\n c_filt_delay_clicks bigint NOT NULL,\n c_filt_ip_mismatch bigint NOT NULL,\n c_filt_ref_mismatch bigint NOT NULL,\n c_filt_lng_mismatch bigint NOT NULL,\n c_filt_ua_mismatch bigint NOT NULL,\n res_impr bigint NOT NULL,\n rev_ver_pub real,\n rev_ver_feed real,\n c_ver bigint,\n q_filt_ref bigint NOT NULL\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX ix_feed_sub_date\n ON stats.feed_sub\n USING brin\n (date);\n\nCREATE UNIQUE INDEX ixu_feed_sub\n ON stats.feed_sub\n USING btree\n (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n\nHere is some sizing info (https://wiki.postgresql.org/wiki/Disk_Usage):\n\nrelation,size\nstats.feed_sub,5644 MB\nstats.ixu_feed_sub,1594 MB\n\nrow_estimate\n15865627\n\nHere is the typical query (for totals beige):\nSELECT\n sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,\n sum(stats.feed_sub.c_filt_doubleclick) AS clicks_on_target,\n sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,\n sum(stats.feed_sub.c_filt_fast_click) AS fast_click,\n sum(stats.feed_sub.c_filt_ip_mismatch) AS ip_mismatch,\n sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,\n sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,\n sum(stats.feed_sub.c_filt_ua_mismatch) AS ua_mismatch,\n sum(stats.feed_sub.c_filt_url_expired) AS url_expired,\n stats.feed_sub.subid AS stats_feed_sub_subid,\n stats.feed_sub.sid AS stats_feed_sub_sid\nFROM stats.feed_sub\nWHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND\n stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND\n stats.feed_sub.gran = '1 day'\n AND stats.feed_sub.aid = 3\nGROUP BY\n stats.feed_sub.subid, stats.feed_sub.sid;\n\nQUERY PLAN\nHashAggregate (cost=901171.72..912354.97 rows=344100 width=86) (actual \ntime=7207.825..7335.473 rows=126044 loops=1)\n\" Group Key: subid, sid\"\n Buffers: shared hit=3635804\n -> Index Scan using ixu_feed_sub on feed_sub 
(cost=0.56..806544.38 \nrows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 loops=1)\n Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without \ntime zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time \nzone) AND (gran = '1 day'::interval) AND (aid = 3))\n Buffers: shared hit=3635804\nPlanning time: 0.150 ms\nExecution time: 7352.009 ms\n\nAs I can see - it takes 3.6 seconds just for an index scan (which sits \nin RAM).\n+3 seconds for groupings +1-2 seconds for network transfers, so I'm \ncompletely out of my \"sub 2 seconds\" goal.\n\nQuestions are:\n1. Am I using the right DB\\architecture for achieving my goal? Are there \nany better solution for that?\n2. Have I reached some physical limits? Will installing faster RAM\\CPU help?\n\nThanks in advance!\n\nServer config:\n\nOS:\n > uname -a\nFreeBSD sqldb 10.2-RELEASE-p9 FreeBSD 10.2-RELEASE-p9 #0: Thu Jan 14 \n01:32:46 UTC 2016 \[email protected]:/usr/obj/usr/src/sys/GENERIC amd64\n\n\nCPU: Intel(R) Xeon(R) CPU E5-1630 v3\n > sysctl -a | egrep -i 'hw.machine|hw.model|hw.ncpu'\nhw.machine: amd64\nhw.model: Intel(R) Xeon(R) CPU E5-1630 v3 @ 3.70GHz\nhw.ncpu: 8\nhw.machine_arch: amd64\n\n\nMEM: 64GB\n > sysctl hw.physmem\nhw.physmem: 68572983296\n\n\nHDD: 2x480GB SSD (ZFS mirror)\n > camcontrol devlist\n<INTEL SSDSC2BB480H4 D2010380> at scbus5 target 0 lun 0 (ada0,pass1)\n<INTEL SSDSC2BB480H4 D2010380> at scbus6 target 0 lun 0 (ada1,pass2)\n\n\nFS:\n > zfs list\nNAME USED AVAIL REFER MOUNTPOINT\nzroot 36.5G 390G 96K /zroot\n...\nzroot/ara/sqldb/pgsql 33.7G 390G 33.7G /ara/sqldb/pgsql\n\n > zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\nNAME PROPERTY VALUE SOURCE\nzroot/ara/sqldb/pgsql primarycache all local\nzroot/ara/sqldb/pgsql recordsize 8K local\nzroot/ara/sqldb/pgsql logbias latency local\nzroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n\n\nMisc:\n > cat /etc/sysctl.conf\nvfs.zfs.metaslab.lba_weighting_enabled=0\n\n\nPostgres:\n > /usr/local/bin/postgres --version\npostgres (PostgreSQL) 9.5.3\n\n > cat postgresql.conf:\n...\nlisten_addresses = '*'\n\nmax_connections = 100\nshared_buffers = 16GB\neffective_cache_size = 48GB\nwork_mem = 500MB\nmaintenance_work_mem = 2GB\nmin_wal_size = 4GB\nmax_wal_size = 8GB\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 500\nrandom_page_cost = 1\n\nlog_lock_waits = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_destination = 'csvlog'\nlogging_collector = on\nlog_min_duration_statement = 10000\n\nshared_preload_libraries = 'pg_stat_statements'\ntrack_activity_query_size = 10000\ntrack_io_timing = on\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 1 Jul 2016 17:54:54 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "less than 2 sec for response - possible?" 
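Since the posted config already loads pg_stat_statements, one quick way to see how much of the time goes to server-side execution versus network transfer and rendering is to check its counters after a few runs. A sketch (column names as of 9.5):

SELECT query, calls, total_time, rows,
       shared_blks_hit, shared_blks_read
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;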
}, { "msg_contents": "trafdev <[email protected]> writes:\n> CREATE INDEX ix_feed_sub_date\n> ON stats.feed_sub\n> USING brin\n> (date);\n\n> CREATE UNIQUE INDEX ixu_feed_sub\n> ON stats.feed_sub\n> USING btree\n> (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n\n> HashAggregate (cost=901171.72..912354.97 rows=344100 width=86) (actual \n> time=7207.825..7335.473 rows=126044 loops=1)\n> \" Group Key: subid, sid\"\n> Buffers: shared hit=3635804\n> -> Index Scan using ixu_feed_sub on feed_sub (cost=0.56..806544.38 \n> rows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 loops=1)\n> Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without \n> time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time \n> zone) AND (gran = '1 day'::interval) AND (aid = 3))\n> Buffers: shared hit=3635804\n> Planning time: 0.150 ms\n> Execution time: 7352.009 ms\n\nNeither of those indexes is terribly well designed for this query.\nA btree index on (aid, gran, date) or (gran, aid, date) would work\nmuch better. See\n\nhttps://www.postgresql.org/docs/9.5/static/indexes-multicolumn.html\n\nYou could rearrange the column order in that giant unique index\nand get some of the benefit. But if you're desperate to optimize\nthis particular query, an index not bearing so many irrelevant columns\nwould probably be better for it.\n\nAn alternative way of thinking would be to create an index with those\nthree leading columns and then all of the other columns used by this\nquery as later columns. That would be an even larger index, but it would\nallow an index-only scan, which might be quite a lot faster. The fact\nthat you seem to be hitting about one page for each row retrieved says\nthat the data you need is pretty badly scattered, so constructing an index\nthat concentrates everything you need into one range of the index might\nbe the ticket.\n\nEither of these additional-index ideas is going to penalize table\ninsertions/updates, so keep an eye on that end of the performance\nquestion too.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 01 Jul 2016 21:23:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" 
}, { "msg_contents": "Thanks Tom.\n\nI've created index on aid, date:\n\ncreate index aaa on stats.feed_sub(aid,date);\n\nand simplified a query (dropped gran as it's equal for all rows anyway):\n\nSELECT\n sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,\n sum(stats.feed_sub.c_filt_doubleclick) AS clicks_on_target,\n sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,\n sum(stats.feed_sub.c_filt_fast_click) AS fast_click,\n sum(stats.feed_sub.c_filt_ip_mismatch) AS ip_mismatch,\n sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,\n sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,\n sum(stats.feed_sub.c_filt_ua_mismatch) AS ua_mismatch,\n sum(stats.feed_sub.c_filt_url_expired) AS url_expired,\n stats.feed_sub.subid AS stats_feed_sub_subid,\n stats.feed_sub.sid AS stats_feed_sub_sid\nFROM stats.feed_sub\nWHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND\n stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND\n stats.feed_sub.aid = 3\nGROUP BY\n stats.feed_sub.subid, stats.feed_sub.sid;\n\nAll data is in the cache and it still takes almost 5 seconds to complete:\n\nQUERY PLAN\nHashAggregate (cost=792450.42..803727.24 rows=346979 width=86) (actual \ntime=4742.145..4882.468 rows=126533 loops=1)\n\" Group Key: subid, sid\"\n Buffers: shared hit=1350371\n -> Index Scan using aaa on feed_sub (cost=0.43..697031.39 \nrows=3469783 width=86) (actual time=0.026..1655.394 rows=3588376 loops=1)\n Index Cond: ((aid = 3) AND (date >= '2016-06-01 \n00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 \n00:00:00'::timestamp without time zone))\n Buffers: shared hit=1350371\nPlanning time: 0.159 ms\nExecution time: 4899.934 ms\n\nIt's better, but still is far from \"<2 secs\" goal.\n\nAny thoughts?\n\n\nOn 07/01/16 18:23, Tom Lane wrote:\n> trafdev <[email protected]> writes:\n>> CREATE INDEX ix_feed_sub_date\n>> ON stats.feed_sub\n>> USING brin\n>> (date);\n>\n>> CREATE UNIQUE INDEX ixu_feed_sub\n>> ON stats.feed_sub\n>> USING btree\n>> (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n>\n>> HashAggregate (cost=901171.72..912354.97 rows=344100 width=86) (actual\n>> time=7207.825..7335.473 rows=126044 loops=1)\n>> \" Group Key: subid, sid\"\n>> Buffers: shared hit=3635804\n>> -> Index Scan using ixu_feed_sub on feed_sub (cost=0.56..806544.38\n>> rows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 loops=1)\n>> Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without\n>> time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time\n>> zone) AND (gran = '1 day'::interval) AND (aid = 3))\n>> Buffers: shared hit=3635804\n>> Planning time: 0.150 ms\n>> Execution time: 7352.009 ms\n>\n> Neither of those indexes is terribly well designed for this query.\n> A btree index on (aid, gran, date) or (gran, aid, date) would work\n> much better. See\n>\n> https://www.postgresql.org/docs/9.5/static/indexes-multicolumn.html\n>\n> You could rearrange the column order in that giant unique index\n> and get some of the benefit. But if you're desperate to optimize\n> this particular query, an index not bearing so many irrelevant columns\n> would probably be better for it.\n>\n> An alternative way of thinking would be to create an index with those\n> three leading columns and then all of the other columns used by this\n> query as later columns. That would be an even larger index, but it would\n> allow an index-only scan, which might be quite a lot faster. 
The fact\n> that you seem to be hitting about one page for each row retrieved says\n> that the data you need is pretty badly scattered, so constructing an index\n> that concentrates everything you need into one range of the index might\n> be the ticket.\n>\n> Either of these additional-index ideas is going to penalize table\n> insertions/updates, so keep an eye on that end of the performance\n> question too.\n>\n> \t\t\tregards, tom lane\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 1 Jul 2016 19:48:06 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "Hello\n\nHave you solved your problem?\n\nCould it be a conversion overhead from 'timestamp without time zone' to \n'date'? In this case, I don't know if the planner stores constants as date or \ntimestamp.\n\nMathieu Pujol\n\nOn 02/07/2016 at 04:48, trafdev wrote:\n> Thanks Tom.\n>\n> I've created index on aid, date:\n>\n> create index aaa on stats.feed_sub(aid,date);\n>\n> and simplified a query (dropped gran as it's equal for all rows anyway):\n>\n> SELECT\n> sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,\n> sum(stats.feed_sub.c_filt_doubleclick) AS clicks_on_target,\n> sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,\n> sum(stats.feed_sub.c_filt_fast_click) AS fast_click,\n> sum(stats.feed_sub.c_filt_ip_mismatch) AS ip_mismatch,\n> sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,\n> sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,\n> sum(stats.feed_sub.c_filt_ua_mismatch) AS ua_mismatch,\n> sum(stats.feed_sub.c_filt_url_expired) AS url_expired,\n> stats.feed_sub.subid AS stats_feed_sub_subid,\n> stats.feed_sub.sid AS stats_feed_sub_sid\n> FROM stats.feed_sub\n> WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND\n> stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND\n> stats.feed_sub.aid = 3\n> GROUP BY\n> stats.feed_sub.subid, stats.feed_sub.sid;\n>\n> All data is in the cache and it still takes almost 5 seconds to complete:\n>\n> QUERY PLAN\n> HashAggregate (cost=792450.42..803727.24 rows=346979 width=86) (actual \n> time=4742.145..4882.468 rows=126533 loops=1)\n> \" Group Key: subid, sid\"\n> Buffers: shared hit=1350371\n> -> Index Scan using aaa on feed_sub (cost=0.43..697031.39 \n> rows=3469783 width=86) (actual time=0.026..1655.394 rows=3588376 loops=1)\n> Index Cond: ((aid = 3) AND (date >= '2016-06-01 \n> 00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 \n> 00:00:00'::timestamp without time zone))\n> Buffers: shared hit=1350371\n> Planning time: 0.159 ms\n> Execution time: 4899.934 ms\n>\n> It's better, but still is far from \"<2 secs\" goal.\n>\n> Any thoughts?\n>\n>\n> On 07/01/16 18:23, Tom Lane wrote:\n>> trafdev <[email protected]> writes:\n>>> CREATE INDEX ix_feed_sub_date\n>>> ON stats.feed_sub\n>>> USING brin\n>>> (date);\n>>\n>>> CREATE UNIQUE INDEX ixu_feed_sub\n>>> ON stats.feed_sub\n>>> USING btree\n>>> (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n>>\n>>> HashAggregate (cost=901171.72..912354.97 rows=344100 width=86) (actual\n>>> time=7207.825..7335.473 rows=126044 loops=1)\n>>> \" Group Key: subid, sid\"\n>>> Buffers: shared hit=3635804\n>>> -> Index Scan using ixu_feed_sub on feed_sub (cost=0.56..806544.38\n>>> rows=3440994 width=86) (actual time=0.020..3650.208 rows=3578344 \n>>>
loops=1)\n>>> Index Cond: ((date >= '2016-06-01 00:00:00'::timestamp without\n>>> time zone) AND (date <= '2016-06-30 00:00:00'::timestamp without time\n>>> zone) AND (gran = '1 day'::interval) AND (aid = 3))\n>>> Buffers: shared hit=3635804\n>>> Planning time: 0.150 ms\n>>> Execution time: 7352.009 ms\n>>\n>> Neither of those indexes is terribly well designed for this query.\n>> A btree index on (aid, gran, date) or (gran, aid, date) would work\n>> much better. See\n>>\n>> https://www.postgresql.org/docs/9.5/static/indexes-multicolumn.html\n>>\n>> You could rearrange the column order in that giant unique index\n>> and get some of the benefit. But if you're desperate to optimize\n>> this particular query, an index not bearing so many irrelevant columns\n>> would probably be better for it.\n>>\n>> An alternative way of thinking would be to create an index with those\n>> three leading columns and then all of the other columns used by this\n>> query as later columns. That would be an even larger index, but it \n>> would\n>> allow an index-only scan, which might be quite a lot faster. The fact\n>> that you seem to be hitting about one page for each row retrieved says\n>> that the data you need is pretty badly scattered, so constructing an \n>> index\n>> that concentrates everything you need into one range of the index might\n>> be the ticket.\n>>\n>> Either of these additional-index ideas is going to penalize table\n>> insertions/updates, so keep an eye on that end of the performance\n>> question too.\n>>\n>> regards, tom lane\n>>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Jul 2016 12:10:31 +0200", "msg_from": "Pujol Mathieu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "\n\nOn 02.07.2016 02:54, trafdev wrote:\n> Hi.\n>\n> I'm trying to build an OLAP-oriented DB based on PostgresSQL.\n>\n> User works with a paginated report in the web-browser. 
Interface allows\n> to fetch data for a custom date-range selection,\n> display individual rows (20-50 per page) and totals (for entire\n> selection, even not visible on the current page) and sorting by any column.\n>\n> The main goal is to deliver results of the basic SELECT queries to the\n> end-user in less than 2 seconds.\n>\n> I was able to achieve that except for one table (the biggest one).\n>\n> It consist of multiple dimensions (date, gran, aid, pid, sid, fid,\n> subid) and metrics (see below).\n> User can filter by any dimension and sort by any metric.\n>\n> Here is a CREATE script for this table:\n>\n> CREATE TABLE stats.feed_sub\n> (\n> date date NOT NULL,\n> gran interval NOT NULL,\n> aid smallint NOT NULL,\n> pid smallint NOT NULL,\n> sid smallint NOT NULL,\n> fid smallint NOT NULL,\n> subid text NOT NULL,\n> rev_est_pub real NOT NULL,\n> rev_est_feed real NOT NULL,\n> rev_raw real NOT NULL,\n> c_total bigint NOT NULL,\n> c_passed bigint NOT NULL,\n> q_total bigint NOT NULL,\n> q_passed bigint NOT NULL,\n> q_filt_geo bigint NOT NULL,\n> q_filt_browser bigint NOT NULL,\n> q_filt_os bigint NOT NULL,\n> q_filt_ip bigint NOT NULL,\n> q_filt_subid bigint NOT NULL,\n> q_filt_pause bigint NOT NULL,\n> q_filt_click_cap_ip bigint NOT NULL,\n> q_filt_query_cap bigint NOT NULL,\n> q_filt_click_cap bigint NOT NULL,\n> q_filt_rev_cap bigint NOT NULL,\n> q_filt_erpm_floor bigint NOT NULL,\n> c_filt_click_cap_ip bigint NOT NULL,\n> c_filt_doubleclick bigint NOT NULL,\n> c_filt_url_expired bigint NOT NULL,\n> c_filt_fast_click bigint NOT NULL,\n> c_filt_delay_clicks bigint NOT NULL,\n> c_filt_ip_mismatch bigint NOT NULL,\n> c_filt_ref_mismatch bigint NOT NULL,\n> c_filt_lng_mismatch bigint NOT NULL,\n> c_filt_ua_mismatch bigint NOT NULL,\n> res_impr bigint NOT NULL,\n> rev_ver_pub real,\n> rev_ver_feed real,\n> c_ver bigint,\n> q_filt_ref bigint NOT NULL\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX ix_feed_sub_date\n> ON stats.feed_sub\n> USING brin\n> (date);\n>\n> CREATE UNIQUE INDEX ixu_feed_sub\n> ON stats.feed_sub\n> USING btree\n> (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n>\n> Here is some sizing info (https://wiki.postgresql.org/wiki/Disk_Usage):\n>\n> relation,size\n> stats.feed_sub,5644 MB\n> stats.ixu_feed_sub,1594 MB\n>\n> row_estimate\n> 15865627\n>\n> Here is the typical query (for totals beige):\n> SELECT\n> sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,\n> sum(stats.feed_sub.c_filt_doubleclick) AS clicks_on_target,\n> sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,\n> sum(stats.feed_sub.c_filt_fast_click) AS fast_click,\n> sum(stats.feed_sub.c_filt_ip_mismatch) AS ip_mismatch,\n> sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,\n> sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,\n> sum(stats.feed_sub.c_filt_ua_mismatch) AS ua_mismatch,\n> sum(stats.feed_sub.c_filt_url_expired) AS url_expired,\n> stats.feed_sub.subid AS stats_feed_sub_subid,\n> stats.feed_sub.sid AS stats_feed_sub_sid\n> FROM stats.feed_sub\n> WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND\n> stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND\n> stats.feed_sub.gran = '1 day'\n> AND stats.feed_sub.aid = 3\n> GROUP BY\n> stats.feed_sub.subid, stats.feed_sub.sid;\n\nYou cast every date to an timestamp. Why? 
You can adjust the index to:\n\nCREATE UNIQUE INDEX ixu_feed_sub\nON stats.feed_sub\nUSING btree\n(date::timestamp, gran, aid, pid, sid, fid, subid COLLATE \npg_catalog.\"default\");\n\nBut since i see no need for the cast at all (maybe i missed it) try it \nwithout!\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Jul 2016 13:39:40 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "Hi, yes I've tried it in the past, it makes no any difference at all:\n\nWith TIMESTAMP cast:\n\nQUERY PLAN\nHashAggregate (cost=1405666.90..1416585.93 rows=335970 width=86) \n(actual time=4794.585..4923.062 rows=126533 loops=1)\n\" Group Key: subid, sid\"\n Buffers: shared hit=1486949\n -> Index Scan using ix_feed_sub_aid_date on feed_sub \n(cost=0.44..1313275.32 rows=3359694 width=86) (actual \ntime=0.020..1736.005 rows=3588376 loops=1)\n Index Cond: ((aid = 3) AND (date >= '2016-06-01 \n00:00:00'::timestamp without time zone) AND (date <= '2016-06-30 \n00:00:00'::timestamp without time zone))\n Buffers: shared hit=1486949\nPlanning time: 0.158 ms\nExecution time: 4939.965 ms\n\n\nWithout TIMESTAMP cast:\n\nQUERY PLAN\nHashAggregate (cost=1405666.90..1416585.93 rows=335970 width=86) \n(actual time=4797.272..4924.015 rows=126533 loops=1)\n\" Group Key: subid, sid\"\n Buffers: shared hit=1486949\n -> Index Scan using ix_feed_sub_aid_date on feed_sub \n(cost=0.44..1313275.32 rows=3359694 width=86) (actual \ntime=0.019..1783.104 rows=3588376 loops=1)\n Index Cond: ((aid = 3) AND (date >= '2016-06-01'::date) AND \n(date <= '2016-06-30'::date))\n Buffers: shared hit=1486949\nPlanning time: 0.164 ms\nExecution time: 4941.259 ms\n\nI need to be sure it's a physical limitation of a Postgresql (when all \ndata is in a memory and fetching\\joining 1.5 mln of rows can't be done \nin less than 2-3 seconds) and there is no way to improve it.\n\n\n\nOn 07/05/16 04:39, Torsten Zuehlsdorff wrote:\n>\n>\n> On 02.07.2016 02:54, trafdev wrote:\n> > Hi.\n> >\n> > I'm trying to build an OLAP-oriented DB based on PostgresSQL.\n> >\n> > User works with a paginated report in the web-browser. 
Interface allows\n> > to fetch data for a custom date-range selection,\n> > display individual rows (20-50 per page) and totals (for entire\n> > selection, even not visible on the current page) and sorting by any\n> > column.\n> >\n> > The main goal is to deliver results of the basic SELECT queries to the\n> > end-user in less than 2 seconds.\n> >\n> > I was able to achieve that except for one table (the biggest one).\n> >\n> > It consist of multiple dimensions (date, gran, aid, pid, sid, fid,\n> > subid) and metrics (see below).\n> > User can filter by any dimension and sort by any metric.\n> >\n> > Here is a CREATE script for this table:\n> >\n> > CREATE TABLE stats.feed_sub\n> > (\n> > date date NOT NULL,\n> > gran interval NOT NULL,\n> > aid smallint NOT NULL,\n> > pid smallint NOT NULL,\n> > sid smallint NOT NULL,\n> > fid smallint NOT NULL,\n> > subid text NOT NULL,\n> > rev_est_pub real NOT NULL,\n> > rev_est_feed real NOT NULL,\n> > rev_raw real NOT NULL,\n> > c_total bigint NOT NULL,\n> > c_passed bigint NOT NULL,\n> > q_total bigint NOT NULL,\n> > q_passed bigint NOT NULL,\n> > q_filt_geo bigint NOT NULL,\n> > q_filt_browser bigint NOT NULL,\n> > q_filt_os bigint NOT NULL,\n> > q_filt_ip bigint NOT NULL,\n> > q_filt_subid bigint NOT NULL,\n> > q_filt_pause bigint NOT NULL,\n> > q_filt_click_cap_ip bigint NOT NULL,\n> > q_filt_query_cap bigint NOT NULL,\n> > q_filt_click_cap bigint NOT NULL,\n> > q_filt_rev_cap bigint NOT NULL,\n> > q_filt_erpm_floor bigint NOT NULL,\n> > c_filt_click_cap_ip bigint NOT NULL,\n> > c_filt_doubleclick bigint NOT NULL,\n> > c_filt_url_expired bigint NOT NULL,\n> > c_filt_fast_click bigint NOT NULL,\n> > c_filt_delay_clicks bigint NOT NULL,\n> > c_filt_ip_mismatch bigint NOT NULL,\n> > c_filt_ref_mismatch bigint NOT NULL,\n> > c_filt_lng_mismatch bigint NOT NULL,\n> > c_filt_ua_mismatch bigint NOT NULL,\n> > res_impr bigint NOT NULL,\n> > rev_ver_pub real,\n> > rev_ver_feed real,\n> > c_ver bigint,\n> > q_filt_ref bigint NOT NULL\n> > )\n> > WITH (\n> > OIDS=FALSE\n> > );\n> >\n> > CREATE INDEX ix_feed_sub_date\n> > ON stats.feed_sub\n> > USING brin\n> > (date);\n> >\n> > CREATE UNIQUE INDEX ixu_feed_sub\n> > ON stats.feed_sub\n> > USING btree\n> > (date, gran, aid, pid, sid, fid, subid COLLATE pg_catalog.\"default\");\n> >\n> > Here is some sizing info (https://wiki.postgresql.org/wiki/Disk_Usage):\n> >\n> > relation,size\n> > stats.feed_sub,5644 MB\n> > stats.ixu_feed_sub,1594 MB\n> >\n> > row_estimate\n> > 15865627\n> >\n> > Here is the typical query (for totals beige):\n> > SELECT\n> > sum(stats.feed_sub.c_filt_click_cap_ip) AS clicks_from_ip,\n> > sum(stats.feed_sub.c_filt_doubleclick) AS clicks_on_target,\n> > sum(stats.feed_sub.c_filt_delay_clicks) AS ip_click_period,\n> > sum(stats.feed_sub.c_filt_fast_click) AS fast_click,\n> > sum(stats.feed_sub.c_filt_ip_mismatch) AS ip_mismatch,\n> > sum(stats.feed_sub.c_filt_lng_mismatch) AS lng_mismatch,\n> > sum(stats.feed_sub.c_filt_ref_mismatch) AS ref_mismatch,\n> > sum(stats.feed_sub.c_filt_ua_mismatch) AS ua_mismatch,\n> > sum(stats.feed_sub.c_filt_url_expired) AS url_expired,\n> > stats.feed_sub.subid AS stats_feed_sub_subid,\n> > stats.feed_sub.sid AS stats_feed_sub_sid\n> > FROM stats.feed_sub\n> > WHERE stats.feed_sub.date >= '2016-06-01' :: TIMESTAMP AND\n> > stats.feed_sub.date <= '2016-06-30' :: TIMESTAMP AND\n> > stats.feed_sub.gran = '1 day'\n> > AND stats.feed_sub.aid = 3\n> > GROUP BY\n> > stats.feed_sub.subid, stats.feed_sub.sid;\n>\n> You cast every date to an timestamp. Why? 
You can adjust the index to:\n>\n> CREATE UNIQUE INDEX ixu_feed_sub\n> ON stats.feed_sub\n> USING btree\n> (date::timestamp, gran, aid, pid, sid, fid, subid COLLATE\n> pg_catalog.\"default\");\n>\n> But since i see no need for the cast at all (maybe i missed it) try it\n> without!\n>\n> Greetings,\n> Torsten\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Jul 2016 08:35:56 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "On 05.07.2016 17:35, trafdev wrote:\n > [..]\n> Without TIMESTAMP cast:\n>\n> QUERY PLAN\n> HashAggregate (cost=1405666.90..1416585.93 rows=335970 width=86)\n> (actual time=4797.272..4924.015 rows=126533 loops=1)\n> \" Group Key: subid, sid\"\n> Buffers: shared hit=1486949\n> -> Index Scan using ix_feed_sub_aid_date on feed_sub\n> (cost=0.44..1313275.32 rows=3359694 width=86) (actual\n> time=0.019..1783.104 rows=3588376 loops=1)\n> Index Cond: ((aid = 3) AND (date >= '2016-06-01'::date) AND\n> (date <= '2016-06-30'::date))\n> Buffers: shared hit=1486949\n> Planning time: 0.164 ms\n> Execution time: 4941.259 ms\n>\n> I need to be sure it's a physical limitation of a Postgresql (when all\n> data is in a memory and fetching\\joining 1.5 mln of rows can't be done\n> in less than 2-3 seconds) and there is no way to improve it.\n\nIt could be a physical limitation of your hardware. I just did a short \ntest on one of my databases:\n\nAggregate (cost=532018.95..532018.96 rows=1 width=0) (actual \ntime=3396.689..3396.689 rows=1 loops=1)\n Buffers: shared hit=155711\n -> Index Only Scan using requests_request_time_idx on requests \n(cost=0.43..493109.90 rows=15563620 width=0) (actual \ntime=0.021..2174.614 rows=16443288 loops=1)\n Index Cond: ((request_time >= '2016-07-01 \n00:00:00+00'::timestamp with time zone) AND (request_time <= '2017-07-06 \n00:00:00+00'::timestamp with time zone))\n Heap Fetches: 31254\n Buffers: shared hit=155711\n Planning time: 0.143 ms\n Execution time: 3396.715 ms\n(8 rows)\n\nAs you can see i can get 16.4 Mio rows within 3.4 seconds from cache. \nYour index-scan fetches 3.5 mio in 1.7 second, that's hardly half of the \nperformance of my database.\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 10:35:51 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" 
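An aside on the "all data is in the cache" assumption both posters rely on: since 9.4 the pg_prewarm extension can load a relation into shared_buffers explicitly, which makes cache-warm timings reproducible instead of depending on what happens to be resident:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('stats.feed_sub');     -- returns the number of blocks loaded
SELECT pg_prewarm('stats.ixu_feed_sub');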
}, { "msg_contents": "Wondering what are your CPU\\RAM characteristics?\n\nOn 07/06/16 01:35, Torsten Zuehlsdorff wrote:\n> On 05.07.2016 17:35, trafdev wrote:\n>> [..]\n>> Without TIMESTAMP cast:\n>>\n>> QUERY PLAN\n>> HashAggregate (cost=1405666.90..1416585.93 rows=335970 width=86)\n>> (actual time=4797.272..4924.015 rows=126533 loops=1)\n>> \" Group Key: subid, sid\"\n>> Buffers: shared hit=1486949\n>> -> Index Scan using ix_feed_sub_aid_date on feed_sub\n>> (cost=0.44..1313275.32 rows=3359694 width=86) (actual\n>> time=0.019..1783.104 rows=3588376 loops=1)\n>> Index Cond: ((aid = 3) AND (date >= '2016-06-01'::date) AND\n>> (date <= '2016-06-30'::date))\n>> Buffers: shared hit=1486949\n>> Planning time: 0.164 ms\n>> Execution time: 4941.259 ms\n>>\n>> I need to be sure it's a physical limitation of a Postgresql (when all\n>> data is in a memory and fetching\\joining 1.5 mln of rows can't be done\n>> in less than 2-3 seconds) and there is no way to improve it.\n>\n> It could be a physical limitation of your hardware. I just did a short\n> test on one of my databases:\n>\n> Aggregate (cost=532018.95..532018.96 rows=1 width=0) (actual\n> time=3396.689..3396.689 rows=1 loops=1)\n> Buffers: shared hit=155711\n> -> Index Only Scan using requests_request_time_idx on requests\n> (cost=0.43..493109.90 rows=15563620 width=0) (actual\n> time=0.021..2174.614 rows=16443288 loops=1)\n> Index Cond: ((request_time >= '2016-07-01\n> 00:00:00+00'::timestamp with time zone) AND (request_time <= '2017-07-06\n> 00:00:00+00'::timestamp with time zone))\n> Heap Fetches: 31254\n> Buffers: shared hit=155711\n> Planning time: 0.143 ms\n> Execution time: 3396.715 ms\n> (8 rows)\n>\n> As you can see i can get 16.4 Mio rows within 3.4 seconds from cache.\n> Your index-scan fetches 3.5 mio in 1.7 second, that's hardly half of the\n> performance of my database.\n>\n> Greetings,\n> Torsten\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 08:06:03 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "\nOn 06.07.2016 17:06, trafdev wrote:\n> Wondering what are your CPU\\RAM characteristics?\n\nIntel Core i7-2600 Quad Core\n32 GB DDR3 RAM\n2x 3 TB SATA III HDD\n\nHDD is:\nModel Family: Seagate Barracuda XT\nDevice Model: ST33000651AS\nFirmware Version: CC45\nUser Capacity: 3,000,592,982,016 bytes [3.00 TB]\nSector Size: 512 bytes logical/physical\nRotation Rate: 7200 rpm\nForm Factor: 3.5 inches\nDevice is: In smartctl database [for details use: -P show]\nATA Version is: ATA8-ACS T13/1699-D revision 4\nSATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)\n\nRAM is for example:\n\nHandle 0x002D, DMI type 17, 28 bytes\nMemory Device\n Array Handle: 0x002A\n Error Information Handle: No Error\n Total Width: 64 bits\n Data Width: 64 bits\n Size: 8192 MB\n Form Factor: DIMM\n Set: None\n Locator: DIMM0\n Bank Locator: BANK0\n Type: DDR3\n Type Detail: Synchronous\n Speed: 1333 MHz\n Manufacturer: Undefined\n Serial Number: 4430793\n Asset Tag: AssetTagNum0\n Part Number: CT102464BA160B.C16\n Rank: 2\n\nOS is FreeBSD 10.3. 
Do you need more information?\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 17:27:29 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "Well, our CPU\\RAM configs are almost same...\n\nThe difference is - you're fetching\\grouping 8 times less rows than I:\n\nYou scan 16.5 mln rows and fetch ~200k rows in 2 seconds and than spend \n1.4 sec for aggregation\n\nI'm scanning 3.5 mln rows and fetching 1.5 mln rows (8 times more than \nyou) in 1.8 seconds and then spending rest (2.3 seconds) for aggregation...\n\nSo please try to extend dates range 8 times and repeat your test.\n\n\n\nOn 07/06/16 08:27, Torsten Zuehlsdorff wrote:\n>\n> On 06.07.2016 17:06, trafdev wrote:\n>> Wondering what are your CPU\\RAM characteristics?\n>\n> Intel Core i7-2600 Quad Core\n> 32 GB DDR3 RAM\n> 2x 3 TB SATA III HDD\n>\n> HDD is:\n> Model Family: Seagate Barracuda XT\n> Device Model: ST33000651AS\n> Firmware Version: CC45\n> User Capacity: 3,000,592,982,016 bytes [3.00 TB]\n> Sector Size: 512 bytes logical/physical\n> Rotation Rate: 7200 rpm\n> Form Factor: 3.5 inches\n> Device is: In smartctl database [for details use: -P show]\n> ATA Version is: ATA8-ACS T13/1699-D revision 4\n> SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)\n>\n> RAM is for example:\n>\n> Handle 0x002D, DMI type 17, 28 bytes\n> Memory Device\n> Array Handle: 0x002A\n> Error Information Handle: No Error\n> Total Width: 64 bits\n> Data Width: 64 bits\n> Size: 8192 MB\n> Form Factor: DIMM\n> Set: None\n> Locator: DIMM0\n> Bank Locator: BANK0\n> Type: DDR3\n> Type Detail: Synchronous\n> Speed: 1333 MHz\n> Manufacturer: Undefined\n> Serial Number: 4430793\n> Asset Tag: AssetTagNum0\n> Part Number: CT102464BA160B.C16\n> Rank: 2\n>\n> OS is FreeBSD 10.3. Do you need more information?\n>\n> Greetings,\n> Torsten\n>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 09:46:39 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" 
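For reference, converting the two plans' buffer counts to bytes (plain arithmetic, in line with Jim Nasby's correction further down): shared hit=1486949 versus hit=155711 buffers, at 8 KB each, is roughly 11.3 GB versus 1.2 GB of buffer accesses, i.e. about a 9.5x difference in data touched, which is not the same thing as a ratio of rows fetched.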
}, { "msg_contents": "So does that mean Postgres is not capable to scan\\aggregate less than 10 \nmln rows and deliver result in less than 2 seconds?\n\nOn 07/06/16 09:46, trafdev wrote:\n> Well, our CPU\\RAM configs are almost same...\n>\n> The difference is - you're fetching\\grouping 8 times less rows than I:\n>\n> You scan 16.5 mln rows and fetch ~200k rows in 2 seconds and than spend\n> 1.4 sec for aggregation\n>\n> I'm scanning 3.5 mln rows and fetching 1.5 mln rows (8 times more than\n> you) in 1.8 seconds and then spending rest (2.3 seconds) for aggregation...\n>\n> So please try to extend dates range 8 times and repeat your test.\n>\n>\n>\n> On 07/06/16 08:27, Torsten Zuehlsdorff wrote:\n>>\n>> On 06.07.2016 17:06, trafdev wrote:\n>>> Wondering what are your CPU\\RAM characteristics?\n>>\n>> Intel Core i7-2600 Quad Core\n>> 32 GB DDR3 RAM\n>> 2x 3 TB SATA III HDD\n>>\n>> HDD is:\n>> Model Family: Seagate Barracuda XT\n>> Device Model: ST33000651AS\n>> Firmware Version: CC45\n>> User Capacity: 3,000,592,982,016 bytes [3.00 TB]\n>> Sector Size: 512 bytes logical/physical\n>> Rotation Rate: 7200 rpm\n>> Form Factor: 3.5 inches\n>> Device is: In smartctl database [for details use: -P show]\n>> ATA Version is: ATA8-ACS T13/1699-D revision 4\n>> SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)\n>>\n>> RAM is for example:\n>>\n>> Handle 0x002D, DMI type 17, 28 bytes\n>> Memory Device\n>> Array Handle: 0x002A\n>> Error Information Handle: No Error\n>> Total Width: 64 bits\n>> Data Width: 64 bits\n>> Size: 8192 MB\n>> Form Factor: DIMM\n>> Set: None\n>> Locator: DIMM0\n>> Bank Locator: BANK0\n>> Type: DDR3\n>> Type Detail: Synchronous\n>> Speed: 1333 MHz\n>> Manufacturer: Undefined\n>> Serial Number: 4430793\n>> Asset Tag: AssetTagNum0\n>> Part Number: CT102464BA160B.C16\n>> Rank: 2\n>>\n>> OS is FreeBSD 10.3. Do you need more information?\n>>\n>> Greetings,\n>> Torsten\n>>\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 9 Jul 2016 10:26:34 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "On 7/9/16 12:26 PM, trafdev wrote:\n> So does that mean Postgres is not capable to scan\\aggregate less than 10\n> mln rows and deliver result in less than 2 seconds?\n\nThat's going to depend entirely on your hardware, and how big the rows \nare. At some point you're simply going to run out of memory bandwidth, \nespecially since your access pattern is very scattered.\n\n> On 07/06/16 09:46, trafdev wrote:\n>> Well, our CPU\\RAM configs are almost same...\n>>\n>> The difference is - you're fetching\\grouping 8 times less rows than I:\n\nHuh? The explain output certainly doesn't show that.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 09:10:57 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": ">>> The difference is - you're fetching\\grouping 8 times less rows than I:\n>\n> Huh? 
The explain output certainly doesn't show that.\n\nWhy not?\n\nMy output:\nBuffers: shared hit=1486949\n\nTorsten's output:\nBuffers: shared hit=155711\n\nThis is amount of rows fetched for further processing (when all data is \nin memory), isn't it?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 07:28:38 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "On 7/19/16 9:28 AM, trafdev wrote:\n>>>> The difference is - you're fetching\\grouping 8 times less rows than I:\n>>\n>> Huh? The explain output certainly doesn't show that.\n>\n> Why not?\n>\n> My output:\n> Buffers: shared hit=1486949\n>\n> Torsten's output:\n> Buffers: shared hit=155711\n>\n> This is amount of rows fetched for further processing (when all data is\n> in memory), isn't it?\n\nThat's buffers, not rows.\n\nBTW, if my math is correct, reading 1486949 8K buffers is 11GB, which \nyour query did in ~1.8s at 6GB/s. Admittedly that's pretty hand-wavy \n(pulling a datum from a shared buffer doesn't require reading the whole \nbuffer; on the other hand, you also visited each buffer \n3359694/1486949=2.6 times), but last time I measured, 6GB/s was a pretty \nreasonable amount of memory bandwidth for something hitting main memory.\n\nYou've got ~30 bigints in that table (240 bytes) plus a bunch of other \nstuff. That means you'll only be able to fit maybe 20 rows per 8K page. \nAt some point you'll simply hit the limits of hardware.\n\nIf you really need that kind of performance you'll probably need to have \nsome form of aggregate tables that you pull from. In your case, an \naggregate of each day would presumably work well; that would mean you'd \nbe reading 30 rows instead of 3.5M.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 09:41:27 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "Right, buffers are not rows, but still 8 times less...\n\nThe table I'm reading from is already aggregated on daily basis (so \nthere is no way to aggregate it more).\n\nWill extending page to say 128K improve performance?\n\nOn 07/19/16 07:41, Jim Nasby wrote:\n> On 7/19/16 9:28 AM, trafdev wrote:\n>>>>> The difference is - you're fetching\\grouping 8 times less rows than I:\n>>>\n>>> Huh? The explain output certainly doesn't show that.\n>>\n>> Why not?\n>>\n>> My output:\n>> Buffers: shared hit=1486949\n>>\n>> Torsten's output:\n>> Buffers: shared hit=155711\n>>\n>> This is amount of rows fetched for further processing (when all data is\n>> in memory), isn't it?\n>\n> That's buffers, not rows.\n>\n> BTW, if my math is correct, reading 1486949 8K buffers is 11GB, which\n> your query did in ~1.8s at 6GB/s. 
Admittedly that's pretty hand-wavy\n> (pulling a datum from a shared buffer doesn't require reading the whole\n> buffer; on the other hand, you also visited each buffer\n> 3359694/1486949=2.6 times), but last time I measured, 6GB/s was a pretty\n> reasonable amount of memory bandwidth for something hitting main memory.\n>\n> You've got ~30 bigints in that table (240 bytes) plus a bunch of other\n> stuff. That means you'll only be able to fit maybe 20 rows per 8K page.\n> At some point you'll simply hit the limits of hardware.\n>\n> If you really need that kind of performance you'll probably need to have\n> some form of aggregate tables that you pull from. In your case, an\n> aggregate of each day would presumably work well; that would mean you'd\n> be reading 30 rows instead of 3.5M.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 07:56:45 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: less than 2 sec for response - possible?" }, { "msg_contents": "On 7/19/16 9:56 AM, trafdev wrote:\n> Will extending page to say 128K improve performance?\n\nWell, you can't go to more than 32K, but yes, it might.\n\nEven then, I think your biggest problem is that the data locality is too \nlow. You're only grabbing ~3 rows every time you read a buffer that \nprobably contains ~20 rows. So that's an area for improvement. The other \nthing that would help a lot is to trim the table down so it's not as wide.\n\nActually, something else that could potentially help a lot is to store \narrays of many data points in each row, either by turning each column \ninto an array or storing an array of a composite type. [1] is exploring \nthose ideas right now.\n\nYou could also try cstore_fdw. It's not a magic bullet, but it's storage \nwill be much more efficient than what you're doing right now.\n\n[1] https://github.com/ElephantStack/ElephantStack\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 17:20:32 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: less than 2 sec for response - possible?" } ]
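As a concrete form of Jim's aggregate-table suggestion, one possible sketch (all names invented; the right grain depends on which filters the reports actually need) is a monthly materialized view:

CREATE MATERIALIZED VIEW stats.feed_sub_monthly AS
SELECT date_trunc('month', date)::date AS month,
       gran, aid, sid, subid,
       sum(c_filt_click_cap_ip)  AS clicks_from_ip,
       sum(c_filt_doubleclick)   AS clicks_on_target,
       sum(c_filt_delay_clicks)  AS ip_click_period,
       sum(c_filt_fast_click)    AS fast_click,
       sum(c_filt_ip_mismatch)   AS ip_mismatch,
       sum(c_filt_lng_mismatch)  AS lng_mismatch,
       sum(c_filt_ref_mismatch)  AS ref_mismatch,
       sum(c_filt_ua_mismatch)   AS ua_mismatch,
       sum(c_filt_url_expired)   AS url_expired
FROM stats.feed_sub
GROUP BY 1, 2, 3, 4, 5;

-- REFRESH ... CONCURRENTLY (9.4+) requires a unique index on the view:
CREATE UNIQUE INDEX ON stats.feed_sub_monthly (month, gran, aid, sid, subid);
REFRESH MATERIALIZED VIEW CONCURRENTLY stats.feed_sub_monthly;

A monthly report then reads the pre-summed rows (on the order of the 126k groups seen above) instead of re-aggregating 3.5 million raw rows.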
[ { "msg_contents": "I tried to DELETE about 7 million rows at once, and the query went up to\n15% of the RAM (120 GB in total), which pushed some indexes out and the\nserver load went up to 250, so I had to kill the query.\n\nThe involved table does not have neither foreign keys referring to other\ntables, nor other tables refer to it. The size of the table itself is 19 GB\n(15% of 120 GB). So why the DELETE tried to put the entire table in memory,\nor what did it do to take so much memory?\n\nI am using 9.4.5.\n\nRegards,\n--\nKouber Saparev\n\nI tried to DELETE about 7 million rows at once, and the query went up to 15% of the RAM (120 GB in total), which pushed some indexes out and the server load went up to 250, so I had to kill the query.The involved table does not have neither foreign keys referring to other tables, nor other tables refer to it. The size of the table itself is 19 GB (15% of 120 GB). So why the DELETE tried to put the entire table in memory, or what did it do to take so much memory?I am using 9.4.5.Regards,--Kouber Saparev", "msg_date": "Mon, 4 Jul 2016 19:35:47 +0300", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE takes too much memory" }, { "msg_contents": "Kouber Saparev wrote:\n> I tried to DELETE about 7 million rows at once, and the query went up to\n> 15% of the RAM (120 GB in total), which pushed some indexes out and the\n> server load went up to 250, so I had to kill the query.\n> \n> The involved table does not have neither foreign keys referring to other\n> tables, nor other tables refer to it. The size of the table itself is 19 GB\n> (15% of 120 GB). So why the DELETE tried to put the entire table in memory,\n> or what did it do to take so much memory?\n\nAre there triggers in the table? Deferred triggers in particular can\nuse memory.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 4 Jul 2016 13:04:58 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "No. There are AFTER triggers on other tables that write to this one though.\nIt is an audits table, so I omitted all the foreign keys on purpose.\n\n2016-07-04 20:04 GMT+03:00 Alvaro Herrera <[email protected]>:\n\n> Kouber Saparev wrote:\n> > I tried to DELETE about 7 million rows at once, and the query went up to\n> > 15% of the RAM (120 GB in total), which pushed some indexes out and the\n> > server load went up to 250, so I had to kill the query.\n> >\n> > The involved table does not have neither foreign keys referring to other\n> > tables, nor other tables refer to it. The size of the table itself is 19\n> GB\n> > (15% of 120 GB). So why the DELETE tried to put the entire table in\n> memory,\n> > or what did it do to take so much memory?\n>\n> Are there triggers in the table? Deferred triggers in particular can\n> use memory.\n>\n> --\n> Álvaro Herrera http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nNo. There are AFTER triggers on other tables that write to this one though. 
", "msg_date": "Mon, 4 Jul 2016 20:10:09 +0300", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "On 07/04/2016 10:10 AM, Kouber Saparev wrote:\n> No. There are AFTER triggers on other tables that write to this one\n> though. It is an audits table, so I omitted all the foreign keys on purpose.\n\nIs it possible that the DELETE blocked many of those triggers due to\nlocking the same rows?\n\nIncidentally, any time I get into deleting large numbers of rows, I\ngenerally find it faster to rebuild the table instead ...\n\n-- \n--\nJosh Berkus\nRed Hat OSAS\n(any opinions are my own)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Jul 2016 11:51:54 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "Well, basically there are only INSERTs going on there (it is a table\nholding audit records for each DML statement). I do not see how a DELETE\nstatement could block an INSERT?\n\nYou are correct that rebuilding the table will be faster, but then, there\nis a chance that some INSERT's will be blocked and eventually will fail\n(depending on the duration of the rebuilding, the exact moment I run it,\nand the involved operations on the other tables).\n\nCould such a memory consumption be related to a GET DIAGNOSTICS plpgsql\nblock? The delete itself is within a stored procedure, and then I return\nthe amount of the deleted rows from the function:\n\nDELETE FROM\n  audits.audits\nWHERE\n  id <= last_synced_audits_id;\n\nGET DIAGNOSTICS counter = ROW_COUNT;\n\nRETURN counter;\n\n\n2016-07-05 21:51 GMT+03:00 Josh Berkus <[email protected]>:\n\n> On 07/04/2016 10:10 AM, Kouber Saparev wrote:\n> > No. There are AFTER triggers on other tables that write to this one\n> > though. It is an audits table, so I omitted all the foreign keys on\n> purpose.\n>\n> Is it possible that the DELETE blocked many of those triggers due to\n> locking the same rows?\n>\n> Incidentally, any time I get into deleting large numbers of rows, I\n> generally find it faster to rebuild the table instead ...\n>\n> --\n> --\n> Josh Berkus\n> Red Hat OSAS\n> (any opinions are my own)\n>
", "msg_date": "Wed, 6 Jul 2016 00:03:30 +0300", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "On Mon, Jul 4, 2016 at 11:35 AM, Kouber Saparev <[email protected]> wrote:\n> I tried to DELETE about 7 million rows at once, and the query went up to 15%\n> of the RAM (120 GB in total), which pushed some indexes out and the server\n> load went up to 250, so I had to kill the query.\n>\n> The involved table does not have neither foreign keys referring to other\n> tables, nor other tables refer to it. The size of the table itself is 19 GB\n> (15% of 120 GB). So why the DELETE tried to put the entire table in memory,\n> or what did it do to take so much memory?\n>\n> I am using 9.4.5.\n\nHow did you measure memory usage exactly?  In particular, memory\nconsumption from the pid attached to the query or generalized to the\nserver?  Is this linux and if so what memory metric did you use?
What\nkinds of indexes are on this table (in particular, gin/gist?)?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 11:12:02 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "I was using the pg_activity monitoring tool, which I find quite awesome.\n\nhttps://github.com/julmon/pg_activity\n\nThere are 3 btree indexes, here's the definition of the table itself:\n\n Table \"audits.audits\"\n Column | Type |\n Modifiers\n-------------------+-----------------------------+-----------------------------------------------------------------------\n id | bigint | not null default\nnextval('audits.audits_id_seq'::regclass)\n auditable_type_id | oid | not null\n auditable_id | integer |\n operation | audits.operation | not null\n old_data | jsonb |\n new_data | jsonb |\n user_id | integer | default\n(NULLIF(session.get_var('user_id'::text), ''::text))::integer\n ip | inet | default\n(NULLIF(session.get_var('ip'::text), ''::text))::inet\n service_name | character varying(100) | default\nNULLIF(session.get_var('service'::text), ''::text)\n service_action | text | default\nNULLIF(session.get_var('action'::text), ''::text)\n created_at | timestamp without time zone | not null default\nclock_timestamp()\nIndexes:\n \"audits_pkey\" PRIMARY KEY, btree (id)\n \"index_audits_on_auditable_type_id_and_auditable_id\" btree\n(auditable_type_id, auditable_id)\n \"index_audits_on_created_at\" btree (created_at)\n\n2016-07-06 19:12 GMT+03:00 Merlin Moncure <[email protected]>:\n\n> On Mon, Jul 4, 2016 at 11:35 AM, Kouber Saparev <[email protected]> wrote:\n> > I tried to DELETE about 7 million rows at once, and the query went up to\n> 15%\n> > of the RAM (120 GB in total), which pushed some indexes out and the\n> server\n> > load went up to 250, so I had to kill the query.\n> >\n> > The involved table does not have neither foreign keys referring to other\n> > tables, nor other tables refer to it. The size of the table itself is 19\n> GB\n> > (15% of 120 GB). So why the DELETE tried to put the entire table in\n> memory,\n> > or what did it do to take so much memory?\n> >\n> > I am using 9.4.5.\n>\n> How did you measure memory usage exactly? In particular, memory\n> consumption from the pid attached to the query or generalized to the\n> server? Is this linux and if so what memory metric did you use? 
What\n> kinds of indexes are on this table (in particular, gin/gist?)?\n>\n> merlin\n>", "msg_date": "Thu, 7 Jul 2016 10:39:59 +0300", "msg_from": "Kouber Saparev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DELETE takes too much memory" }, { "msg_contents": "On 7/5/16 4:03 PM, Kouber Saparev wrote:\n> Could such a memory consumption be related to a GET DIAGNOSTICS plpgsql\n> block? The delete itself is within a stored procedure, and then I return\n> the amount of the deleted rows from the function:\n\nLooking at the code, no, GET DIAG won't change anything; \nexec_stmt_execsql() is simply remembering the count returned by SPI; it \nhas no idea whether anything will end up using that count.\n\nThe only thing I can think of is that you have triggers that are \nconsuming the memory (either the trigger funcs, or because it's an \nafter/constraint trigger), or that there's something screwy with finding \nthe target rows.
I can't see how the latter could be an issue if id is a \nsimple int though.\n\nThere are ways to get memory debug info, but I'm not sure if they'd \nreally be safe to use in production (in particular, they require \nstopping the process by attaching gdb and calling a function. I think \nyou also need a special compile.)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 09:28:20 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE takes too much memory" } ]
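A minimal sketch of the batch-wise alternative implied by this thread: deleting in limited chunks, one statement per transaction driven from the client, keeps any per-statement memory (such as a deferred/constraint trigger event queue) bounded. The retention cutoff and the 10000-row batch size are illustrative assumptions, not values from the thread; the table and column names are the ones Kouber posted.

    -- repeat from the client until it reports DELETE 0
    DELETE FROM audits.audits
    WHERE id IN (SELECT id
                 FROM audits.audits
                 WHERE created_at < '2016-01-01'  -- hypothetical cutoff
                 ORDER BY id
                 LIMIT 10000);                    -- illustrative batch size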
[ { "msg_contents": "Hi,\n\nI was wondering whether there are any plans to include the plan of the\nFK check in EXPLAIN output. Or is there a different way to get to see\nall the plans of triggers as well as of the main SQL?\n\nWhen researching I found this thread from 2011 and the output format\ndoes not seem to have changed since then:\n\nhttps://www.postgresql.org/message-id/flat/3798971.mRNc5JcYXj%40moltowork#3798971.mRNc5JcYXj@moltowork\n\nKind regards\n\nrobert\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 5 Jul 2016 14:14:39 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": true, "msg_subject": "Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "On 7/5/16 7:14 AM, Robert Klemme wrote:\n> I was wondering whether there are any plans to include the plan of the\n> FK check in EXPLAIN output. Or is there a different way to get to see\n> all the plans of triggers as well as of the main SQL?\n>\n> When researching I found this thread from 2011 and the output format\n> does not seem to have changed since then:\n>\n> https://www.postgresql.org/message-id/flat/3798971.mRNc5JcYXj%40moltowork#3798971.mRNc5JcYXj@moltowork\n\nNo one has discussed it recently.\n\nUnfortunately, this isn't the type of thing that would excite most of \nthe core hackers, so it's unlikely any of them will pick this up. The \nbest bet for getting this done is to decide you want to work on it \nyourself and email -hackers (please email before creating the patch!), \npay one of the consulting companies to do it, or create a bounty and see \nif others will chip in to pay someone to do it.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 09:47:18 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On 7/5/16 7:14 AM, Robert Klemme wrote:\n>> I was wondering whether there are any plans to include the plan of the\n>> FK check in EXPLAIN output. Or is there a different way to get to see\n>> all the plans of triggers as well as of the main SQL?\n\n> Unfortunately, this isn't the type of thing that would excite most of \n> the core hackers, so it's unlikely any of them will pick this up.\n\nIt's not so much that people don't care, as that it's not apparent how to\nimprove this without breaking desirable system properties --- in this\ncase, that functions are black boxes so far as callers are concerned.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 19 Jul 2016 16:10:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" 
}, { "msg_contents": "On 7/19/16 3:10 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> On 7/5/16 7:14 AM, Robert Klemme wrote:\n>>> I was wondering whether there are any plans to include the plan of the\n>>> FK check in EXPLAIN output. Or is there a different way to get to see\n>>> all the plans of triggers as well as of the main SQL?\n>\n>> Unfortunately, this isn't the type of thing that would excite most of\n>> the core hackers, so it's unlikely any of them will pick this up.\n>\n> It's not so much that people don't care, as that it's not apparent how to\n> improve this without breaking desirable system properties --- in this\n> case, that functions are black boxes so far as callers are concerned.\n\nI thought we already broke out time spent in triggers as part of \nEXPLAIN, and that the FK \"triggers\" were specifically ignored? (Granted, \nthat doesn't give you plan info, just timing...)\n\nAs for function plans, ISTM that could be added to the PL handlers if we \nwanted to (allow a function invocation to return an array of explain \noutputs).\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 16:42:04 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On 7/19/16 3:10 PM, Tom Lane wrote:\n>> It's not so much that people don't care, as that it's not apparent how to\n>> improve this without breaking desirable system properties --- in this\n>> case, that functions are black boxes so far as callers are concerned.\n\n> I thought we already broke out time spent in triggers as part of \n> EXPLAIN,\n\n... yes ...\n\n> and that the FK \"triggers\" were specifically ignored?\n\nNo. You get something like\n\n# explain analyze insert into cc values(1);\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Insert on cc (cost=0.00..0.01 rows=1 width=4) (actual time=0.192..0.192 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.035 ms\n Trigger for constraint cc_f1_fkey: time=1.246 calls=1\n Execution time: 1.473 ms\n(5 rows)\n\n\nEXPLAIN does know enough about FK triggers to label them with the\nassociated constraint name rather than calling them something like\n\"RI_ConstraintTrigger_c_81956\"; but it does not have any ability\nto reach inside them.\n\n> As for function plans, ISTM that could be added to the PL handlers if we \n> wanted to (allow a function invocation to return an array of explain \n> outputs).\n\nWhere would you put those, particularly for functions executed many\ntimes in the query? 
Would it include sub-functions recursively?\nI mean, yeah, in principle we could do something roughly like that,\nbut it's not easy and presenting the results intelligibly seems\nalmost impossible.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 17:59:44 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "On 7/19/16 3:10 PM, Tom Lane wrote:\n> Jim Nasby <[email protected]> writes:\n>> On 7/5/16 7:14 AM, Robert Klemme wrote:\n>>> I was wondering whether there are any plans to include the plan of the\n>>> FK check in EXPLAIN output. Or is there a different way to get to see\n>>> all the plans of triggers as well as of the main SQL?\n>\n>> Unfortunately, this isn't the type of thing that would excite most of\n>> the core hackers, so it's unlikely any of them will pick this up.\n>\n> It's not so much that people don't care, as that it's not apparent how to\n> improve this without breaking desirable system properties --- in this\n> case, that functions are black boxes so far as callers are concerned.\n\nI thought we already broke out time spent in triggers as part of \nEXPLAIN, and that the FK \"triggers\" were specifically ignored? (Granted, \nthat doesn't give you plan info, just timing...)\n\nAs for function plans, ISTM that could be added to the PL handlers if we \nwanted to (allow a function invocation to return an array of explain \noutputs).\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 21 Jul 2016 16:42:04 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "Jim Nasby <[email protected]> writes:\n> On 7/19/16 3:10 PM, Tom Lane wrote:\n>> It's not so much that people don't care, as that it's not apparent how to\n>> improve this without breaking desirable system properties --- in this\n>> case, that functions are black boxes so far as callers are concerned.\n\n> I thought we already broke out time spent in triggers as part of \n> EXPLAIN,\n\n... yes ...\n\n> and that the FK \"triggers\" were specifically ignored?\n\nNo. You get something like\n\n# explain analyze insert into cc values(1);\n QUERY PLAN \n------------------------------------------------------------------------------------------\n Insert on cc (cost=0.00..0.01 rows=1 width=4) (actual time=0.192..0.192 rows=0 loops=1)\n -> Result (cost=0.00..0.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.035 ms\n Trigger for constraint cc_f1_fkey: time=1.246 calls=1\n Execution time: 1.473 ms\n(5 rows)\n\n\nEXPLAIN does know enough about FK triggers to label them with the\nassociated constraint name rather than calling them something like\n\"RI_ConstraintTrigger_c_81956\"; but it does not have any ability\nto reach inside them.\n\n> As for function plans, ISTM that could be added to the PL handlers if we \n> wanted to (allow a function invocation to return an array of explain \n> outputs).\n\nWhere would you put those, particularly for functions executed many\ntimes in the query? 
And I guess often it will be apparent from names already.\n\nI am wondering what to do if the same statement has multiple execution\nplans, if that is possible in such a scenario. Present all the plans or\njust the one with the highest impact? Show them next to each other so\nthe user is immediately aware that all these plans originated from the\nsame piece of SQL?\n\nKind regards\n\nrobert\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jul 2016 10:37:20 +0200", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" }, { "msg_contents": "On 7/22/16 3:37 AM, Robert Klemme wrote:\n> I am wondering what to do if the same statement has multiple execution\n> plans if that is possible in such a scenario. Present all the plans or\n> just the one with the highest impact? Show them next to each other so\n> the user is immediately aware that all these plans originated from the\n> same piece of SQL?\n\nplpgsql runs all its stuff via SPI, which can replan queries. So yes, I \nthink it's necessary to deal with that.\n\nThat said, if we only kept the most expensive X plans from a given \nfunction, that could handle both cases.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jul 2016 20:57:07 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Seeing execution plan of foreign key constraint check?" } ]
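A workaround implied by the thread, pending any EXPLAIN improvement: the per-trigger timing is already reported (as in Tom's example above), and the plan of the FK check itself can be approximated by explaining the query the RI trigger runs internally, which is roughly a FOR KEY SHARE lookup on the referenced table. The table name pp and column f1 below are hypothetical stand-ins for the referenced table behind cc_f1_fkey.

    -- EXPLAIN ANALYZE already shows where the time went:
    --   Trigger for constraint cc_f1_fkey: time=1.246 calls=1
    -- The check the trigger performs is roughly equivalent to:
    EXPLAIN ANALYZE
    SELECT 1 FROM ONLY pp x WHERE x.f1 = 1 FOR KEY SHARE OF x;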
[ { "msg_contents": "Hello,\n\nI've been reading Mr. Greg Smith's \"Postgres 9.0 - High Performance\" book\nand I have some questions regarding the guidelines I found in the book,\nbecause I suspect some of them can't be followed blindly to the letter on a\nserver with lots of RAM and SSDs.\n\nHere are my server specs:\n\nIntel Xeon E5-1650 v3 Hexa-Core Haswell\n256GB DDR4 ECC RAM\nBattery backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID\nSAS 9260-4i)\nRAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\nstore the PostgreSQL database)\nRAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\nstore PostgreSQL transactions logs)\n\nFirst of all, the book suggests that I should enable the WriteBack cache of\nthe HWRAID and disable the disk cache to increase performance and ensure\ndata safety. Is it still advisable to do this on SSDs, specifically the\nstep of disabling the disk cache? Wouldn't that increase the wear rate of\nthe SSD?\n\nSecondly, the book suggests that we increase the device readahead from 256\nto 4096. As far as I understand, this was done in order to reduce the\nnumber of seeks on a rotating hard drive, so again, is this still\napplicable to SSDs?\n\nThe other tunable I've been looking into is vm.dirty_ratio and\nvm.dirty_background_ratio. I reckon that the book's recommendation to lower\nvm.dirty_background_ratio to 5 and vm.dirty_ratio to 10 is not enough for a\nserver with such big amount of RAM. How much lower should I set these\nvalues, given that my RAID's WriteBack cache size is 512MB?\n\nThank you very much.\n\nKaixi Luo\n\nHello,I've been reading Mr. Greg Smith's \"Postgres 9.0 - High Performance\" book and I have some questions regarding the guidelines I found in the book, because I suspect some of them can't be followed blindly to the letter on a server with lots of RAM and SSDs.Here are my server specs:Intel Xeon E5-1650 v3 Hexa-Core Haswell 256GB DDR4 ECC RAMBattery backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID SAS 9260-4i)RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to store the PostgreSQL database)RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to store PostgreSQL transactions logs)First of all, the book suggests that I should enable the WriteBack cache of the HWRAID and disable the disk cache to increase performance and ensure data safety. Is it still advisable to do this on SSDs, specifically the step of disabling the disk cache? Wouldn't that increase the wear rate of the SSD?Secondly, the book suggests that we increase the device readahead from 256 to 4096. As far as I understand, this was done in order to reduce the number of seeks on a rotating hard drive, so again, is this still applicable to SSDs?The other tunable I've been looking into is vm.dirty_ratio and vm.dirty_background_ratio. I reckon that the book's recommendation to lower vm.dirty_background_ratio to 5 and vm.dirty_ratio to 10 is not enough for a server with such big amount of RAM. How much lower should I set these values, given that my RAID's WriteBack cache size is 512MB?Thank you very much.Kaixi Luo", "msg_date": "Tue, 05 Jul 2016 14:50:46 +0000", "msg_from": "Kaixi Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "On Tue, Jul 5, 2016 at 9:50 AM, Kaixi Luo <[email protected]> wrote:\n> Hello,\n>\n> I've been reading Mr. 
Greg Smith's \"Postgres 9.0 - High Performance\" book\n> and I have some questions regarding the guidelines I found in the book,\n> because I suspect some of them can't be followed blindly to the letter on a\n> server with lots of RAM and SSDs.\n>\n> Here are my server specs:\n>\n> Intel Xeon E5-1650 v3 Hexa-Core Haswell\n> 256GB DDR4 ECC RAM\n> Battery backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID SAS\n> 9260-4i)\n> RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\n> store the PostgreSQL database)\n> RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\n> store PostgreSQL transactions logs)\n>\n> First of all, the book suggests that I should enable the WriteBack cache of\n> the HWRAID and disable the disk cache to increase performance and ensure\n> data safety. Is it still advisable to do this on SSDs, specifically the step\n> of disabling the disk cache? Wouldn't that increase the wear rate of the\n> SSD?\n\nAt the time that book was written, the majority of SSDs were known not\nto be completely honest and/or reliable about data integrity in the\nface of a power event. Now it's a hit or miss situation (for example,\nsee here: http://blog.nordeus.com/dev-ops/power-failure-testing-with-ssds.htm).\nThe intel drives S3500/S3700 and their descendants are the standard\nagainst which other drives should be judged IMO. The S3500 family in\nparticular offers tremendous value for database usage. Do your\nresearch; the warning is still relevant but the blanket statement no\nlonger applies. Spinning drives are completely obsolete for database\napplications in my experience.\n\nDisabling write back cache for write heavy database loads will will\ndestroy it in short order due to write amplication and will generally\ncause it to underperform hard drives in my experience.\n\nWith good SSDs and a good motherboard, I do not recommend a caching\nraid controller; software raid is a better choice for many reasons.\n\nOne parameter that needs to be analyzed with SSD is\neffective_io_concurrency. see\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 13:13:05 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "On Wed, Jul 6, 2016 at 12:13 PM, Merlin Moncure <[email protected]> wrote:\n> Disabling write back cache for write heavy database loads will will\n> destroy it in short order due to write amplication and will generally\n> cause it to underperform hard drives in my experience.\n\nInteresting. We found our best performance with a RAID-5 of 10 800GB\nSSDs (Intel 3500/3700 series) that we got MUCH faster performance with\nall write caching turned off on our LSI MEgaRAID controllers. We went\nfrom 3 to 4ktps to 15 to 18ktps. And after a year of hard use we still\nshow ~90% life left (these machines handle thousands of writes per\nsecond in real use) It could be that the caching was getting in the\nway of RAID calcs or some other issue. 
With RAID-1 I have no clue what\nthe performance will be with write cache on or off.\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 15:48:36 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "Regarding the Nordeus blog Merlin linked.\n\nThey say:\n\"This doesn't mean the data was really written to disk, it can still remain in the disk cache, but enterprise drives usually make sure the data was really written to disk on fsync calls.\"\n\nThis isn't actually true for enterprise drives (when I say enterprise in the context of an SSD, I'm assuming full power loss protection via capacitors on the drive like the Intel DC S3x00 series). Most enterprise SSDs will ignore calls to disable disk cache or to flush the disk cache as doing so is entirely unnecessary.\n\n\nRegarding write back cache:\nDisabling the write back cache won't have a real large impact on the endurance of the drive unless it reduces the total number of bytes written (which it won't). I've seen drives that perform better with it disabled and drives that perform better with it enabled. I would test in your environment and make the decision based on performance. \n\n\nRegarding the Crucial drive for logs:\nAs far as I'm aware, none of the Crucial drives have power loss protection. To use these drives you would want to disable disk cache which would drop your performance a fair bit.\n\n\nWrite amplification:\nI wouldn't expect write amplification to be a serious issue unless you hit every LBA on the device early in its life and never execute TRIM. This is one of the reasons software RAID can be a better solution for something like this. MDADM supports TRIM in RAID devices. So unless you run the drives above 90% full, the write amplification would be minimal so long as you have a daily fstrim cron job.\n\nWes Vaske | Senior Storage Solutions Engineer\nMicron Technology\n\n________________________________________\nFrom: [email protected] <[email protected]> on behalf of Merlin Moncure <[email protected]>\nSent: Wednesday, July 6, 2016 1:13 PM\nTo: Kaixi Luo\nCc: postgres performance list\nSubject: Re: [PERFORM] Tuning guidelines for server with 256GB of RAM and SSDs?\n\nOn Tue, Jul 5, 2016 at 9:50 AM, Kaixi Luo <[email protected]> wrote:\n> Hello,\n>\n> I've been reading Mr. Greg Smith's \"Postgres 9.0 - High Performance\" book\n> and I have some questions regarding the guidelines I found in the book,\n> because I suspect some of them can't be followed blindly to the letter on a\n> server with lots of RAM and SSDs.\n>\n> Here are my server specs:\n>\n> Intel Xeon E5-1650 v3 Hexa-Core Haswell\n> 256GB DDR4 ECC RAM\n> Battery backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID SAS\n> 9260-4i)\n> RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\n> store the PostgreSQL database)\n> RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\n> store PostgreSQL transactions logs)\n>\n> First of all, the book suggests that I should enable the WriteBack cache of\n> the HWRAID and disable the disk cache to increase performance and ensure\n> data safety. Is it still advisable to do this on SSDs, specifically the step\n> of disabling the disk cache? 
Wouldn't that increase the wear rate of the\n> SSD?\n\nAt the time that book was written, the majority of SSDs were known not\nto be completely honest and/or reliable about data integrity in the\nface of a power event. Now it's a hit or miss situation (for example,\nsee here: http://blog.nordeus.com/dev-ops/power-failure-testing-with-ssds.htm).\nThe intel drives S3500/S3700 and their descendants are the standard\nagainst which other drives should be judged IMO. The S3500 family in\nparticular offers tremendous value for database usage. Do your\nresearch; the warning is still relevant but the blanket statement no\nlonger applies. Spinning drives are completely obsolete for database\napplications in my experience.\n\nDisabling write back cache for write heavy database loads will\ndestroy it in short order due to write amplification and will generally\ncause it to underperform hard drives in my experience.\n\nWith good SSDs and a good motherboard, I do not recommend a caching\nraid controller; software raid is a better choice for many reasons.\n\nOne parameter that needs to be analyzed with SSD is\neffective_io_concurrency. see\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com\n\nmerlin\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 6 Jul 2016 22:34:12 +0000", "msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and\n SSDs?" }, { "msg_contents": "On Wed, Jul 6, 2016 at 4:48 PM, Scott Marlowe <[email protected]> wrote:\n> On Wed, Jul 6, 2016 at 12:13 PM, Merlin Moncure <[email protected]> wrote:\n>> Disabling write back cache for write heavy database loads will\n>> destroy it in short order due to write amplification and will generally\n>> cause it to underperform hard drives in my experience.\n>\n> Interesting. On a RAID-5 of 10 800GB\n> SSDs (Intel 3500/3700 series) we got MUCH faster performance with\n> all write caching turned off on our LSI MegaRAID controllers. We went\n> from 3 to 4ktps to 15 to 18ktps. And after a year of hard use we still\n> show ~90% life left (these machines handle thousands of writes per\n> second in real use). It could be that the caching was getting in the\n> way of RAID calcs or some other issue. With RAID-1 I have no clue what\n> the performance will be with write cache on or off.\n\nRight -- by that I meant disabling the write back cache on the drive\nitself, so that all writes are immediately flushed. Disabling write\nback on the raid controller should be the right choice; each of these\ndrives essentially is a 'caching raid controller' for all intents and\npurposes. Hardware raid controllers are engineered around performance\nand reliability assumptions that are no longer correct in an SSD\nworld. 
Personally I would have plugged the drives directly to the\nmotherboard (assuming it's got enough lanes) and mounted the raid\nagainst mdadm and compared.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jul 2016 11:27:10 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "On Thu, Jul 7, 2016 at 10:27 AM, Merlin Moncure <[email protected]> wrote:\n> On Wed, Jul 6, 2016 at 4:48 PM, Scott Marlowe <[email protected]> wrote:\n>> On Wed, Jul 6, 2016 at 12:13 PM, Merlin Moncure <[email protected]> wrote:\n>>> Disabling write back cache for write heavy database loads will\n>>> destroy it in short order due to write amplification and will generally\n>>> cause it to underperform hard drives in my experience.\n>>\n>> Interesting. On a RAID-5 of 10 800GB\n>> SSDs (Intel 3500/3700 series) we got MUCH faster performance with\n>> all write caching turned off on our LSI MegaRAID controllers. We went\n>> from 3 to 4ktps to 15 to 18ktps. And after a year of hard use we still\n>> show ~90% life left (these machines handle thousands of writes per\n>> second in real use). It could be that the caching was getting in the\n>> way of RAID calcs or some other issue. With RAID-1 I have no clue what\n>> the performance will be with write cache on or off.\n>\n> Right -- by that I meant disabling the write back cache on the drive\n> itself, so that all writes are immediately flushed. Disabling write\n> back on the raid controller should be the right choice; each of these\n> drives essentially is a 'caching raid controller' for all intents and\n> purposes. Hardware raid controllers are engineered around performance\n> and reliability assumptions that are no longer correct in an SSD\n> world. Personally I would have plugged the drives directly to the\n> motherboard (assuming it's got enough lanes) and mounted the raid\n> against mdadm and compared.\n\nOh yeah definitely. And yea we've found that mdadm and raw HBAs work\nbetter than most RAID controllers for SSDs.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jul 2016 11:00:22 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "> Regarding write back cache:\n> Disabling the write back cache won't have a real large impact on the\n> endurance of the drive unless it reduces the total number of bytes written\n> (which it won't). I've seen drives that perform better with it disabled and\n> drives that perform better with it enabled. I would test in your\n> environment and make the decision based on performance.\n>\n>\nThanks. I assume you are referring to the write back cache on the RAID\ncontroller here and not the disk cache itself.\n\nKaixi", "msg_date": "Thu, 07 Jul 2016 21:06:53 +0000", "msg_from": "Kaixi Luo <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" } ]
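A sketch of acting on Merlin's effective_io_concurrency pointer from earlier in the thread; the value 256 is only an illustrative starting point to benchmark, not a recommendation from the list.

    ALTER SYSTEM SET effective_io_concurrency = 256;  -- benchmark against the default of 1
    SELECT pg_reload_conf();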
[ { "msg_contents": "Hi, \nWe had a similar situation and the best performance was with 64MB background_bytes and 512 MB dirty_bytes.\nTigran.\n\nOn Jul 5, 2016 16:51, Kaixi Luo <[email protected]> wrote:Hello,I've been reading Mr. Greg Smith's \"Postgres 9.0 - High Performance\" book and I have some questions regarding the guidelines I found in the book, because I suspect some of them can't be followed blindly to the letter on a server with lots of RAM and SSDs.Here are my server specs:Intel Xeon E5-1650 v3 Hexa-Core Haswell 256GB DDR4 ECC RAMBattery backed hardware RAID with 512MB of WriteBack cache (LSI MegaRAID SAS 9260-4i)RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to store the PostgreSQL database)RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to store PostgreSQL transactions logs)First of all, the book suggests that I should enable the WriteBack cache of the HWRAID and disable the disk cache to increase performance and ensure data safety. Is it still advisable to do this on SSDs, specifically the step of disabling the disk cache? Wouldn't that increase the wear rate of the SSD?Secondly, the book suggests that we increase the device readahead from 256 to 4096. As far as I understand, this was done in order to reduce the number of seeks on a rotating hard drive, so again, is this still applicable to SSDs?The other tunable I've been looking into is vm.dirty_ratio and vm.dirty_background_ratio. I reckon that the book's recommendation to lower vm.dirty_background_ratio to 5 and vm.dirty_ratio to 10 is not enough for a server with such big amount of RAM. How much lower should I set these values, given that my RAID's WriteBack cache size is 512MB?Thank you very much.Kaixi Luo\n", "msg_date": "Tue, 5 Jul 2016 21:17:33 +0200 (CEST)", "msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and\n SSDs?" }, { "msg_contents": "On 06/07/16 07:17, Mkrtchyan, Tigran wrote:\n> Hi,\n>\n> We had a similar situation and the best performance was with 64MB\n> background_bytes and 512 MB dirty_bytes.\n>\n> Tigran.\n>\n> On Jul 5, 2016 16:51, Kaixi Luo <[email protected]> wrote:\n>\n>\n> Here are my server specs:\n>\n> RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\n> store the PostgreSQL database)\n> RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\n> store PostgreSQL transactions logs)\n>\n\nCan you tell the exact model numbers for the Samsung and Crucial SSD's? \nIt typically matters! E.g I have some Crucial M550 that have capacitors \nand (originally) claimed to be power off safe, but with testing have \nbeen shown to be not really power off safe at all. I'd be dubious about \nSamsungs too.\n\nThe Intel Datacenter range (S3700 and similar) are known to have power \noff safety that does work.\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jul 2016 16:59:46 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and\n SSDs?" 
}, { "msg_contents": "It's a Crucial CT250MX200SSD1 and a Samsung MZ7LM480HCHP-00003.\n\nRegards,\n\nKaixi\n\n\nOn Thu, Jul 7, 2016 at 6:59 AM, Mark Kirkwood <[email protected]\n> wrote:\n\n> On 06/07/16 07:17, Mkrtchyan, Tigran wrote:\n>\n>> Hi,\n>>\n>> We had a similar situation and the best performance was with 64MB\n>> background_bytes and 512 MB dirty_bytes.\n>>\n>> Tigran.\n>>\n>> On Jul 5, 2016 16:51, Kaixi Luo <[email protected]> wrote:\n>>\n>>\n>> Here are my server specs:\n>>\n>> RAID1 - 2x480GB Samsung SSD with power loss protection (will be used\n>> to\n>> store the PostgreSQL database)\n>> RAID1 - 2x240GB Crucial SSD with power loss protection. (will be\n>> used to\n>> store PostgreSQL transactions logs)\n>>\n>>\n> Can you tell the exact model numbers for the Samsung and Crucial SSD's? It\n> typically matters! E.g I have some Crucial M550 that have capacitors and\n> (originally) claimed to be power off safe, but with testing have been shown\n> to be not really power off safe at all. I'd be dubious about Samsungs too.\n>\n> The Intel Datacenter range (S3700 and similar) are known to have power off\n> safety that does work.\n>\n> regards\n>\n> Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIt's a Crucial CT250MX200SSD1 and a Samsung MZ7LM480HCHP-00003.Regards,KaixiOn Thu, Jul 7, 2016 at 6:59 AM, Mark Kirkwood <[email protected]> wrote:On 06/07/16 07:17, Mkrtchyan, Tigran wrote:\n\nHi,\n\nWe had a similar situation and the best performance was with 64MB\nbackground_bytes and 512 MB dirty_bytes.\n\nTigran.\n\nOn Jul 5, 2016 16:51, Kaixi Luo <[email protected]> wrote:\n\n\n     Here are my server specs:\n\n     RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\n     store the PostgreSQL database)\n     RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\n     store PostgreSQL transactions logs)\n\n\n\nCan you tell the exact model numbers for the Samsung and Crucial SSD's? It typically matters! E.g I have some Crucial M550 that have capacitors and (originally) claimed to be power off safe, but with testing have been shown to be not really power off safe at all. I'd be dubious about Samsungs too.\n\nThe Intel Datacenter range (S3700 and similar) are known to have power off safety that does work.\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 7 Jul 2016 09:49:58 +0200", "msg_from": "Kaixi Luo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and SSDs?" }, { "msg_contents": "?The Crucial drive does not have power loss protection. The Samsung drive does.\n\n\n(The Crucial M550 has capacitors to protect data that's already been written to the device but not the entire cache. For instance, if data is read from the device during a garbage collection operation, the M550 will protect that data instead of introducing corruption of old data. 
This is listed as \"power loss protection\" on the spec sheet but it's not the level of protection that people on this list would expect from a drive)\n\n\n________________________________\nFrom: [email protected] <[email protected]> on behalf of Kaixi Luo <[email protected]>\nSent: Thursday, July 7, 2016 2:49 AM\nTo: Mark Kirkwood\nCc: [email protected]\nSubject: Re: [PERFORM] Tuning guidelines for server with 256GB of RAM and SSDs?\n\nIt's a Crucial CT250MX200SSD1 and a Samsung MZ7LM480HCHP-00003.\n\nRegards,\n\nKaixi\n\n\nOn Thu, Jul 7, 2016 at 6:59 AM, Mark Kirkwood <[email protected]<mailto:[email protected]>> wrote:\nOn 06/07/16 07:17, Mkrtchyan, Tigran wrote:\nHi,\n\nWe had a similar situation and the best performance was with 64MB\nbackground_bytes and 512 MB dirty_bytes.\n\nTigran.\n\nOn Jul 5, 2016 16:51, Kaixi Luo <[email protected]<mailto:[email protected]>> wrote:\n\n\n Here are my server specs:\n\n RAID1 - 2x480GB Samsung SSD with power loss protection (will be used to\n store the PostgreSQL database)\n RAID1 - 2x240GB Crucial SSD with power loss protection. (will be used to\n store PostgreSQL transactions logs)\n\n\nCan you tell the exact model numbers for the Samsung and Crucial SSD's? It typically matters! E.g I have some Crucial M550 that have capacitors and (originally) claimed to be power off safe, but with testing have been shown to be not really power off safe at all. I'd be dubious about Samsungs too.\n\nThe Intel Datacenter range (S3700 and similar) are known to have power off safety that does work.\n\nregards\n\nMark\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 7 Jul 2016 14:09:32 +0000", "msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and\n SSDs?" }, { "msg_contents": "On 08/07/16 02:09, Wes Vaske (wvaske) wrote:\n> ?The Crucial drive does not have power loss protection. The Samsung drive does.\n>\n>\n> (The Crucial M550 has capacitors to protect data that's already been written to the device but not the entire cache. For instance, if data is read from the device during a garbage collection operation, the M550 will protect that data instead of introducing corruption of old data. This is listed as \"power loss protection\" on the spec sheet but it's not the level of protection that people on this list would expect from a drive)\n>\n\nYes - the MX200 board (see):\n\nhttp://www.anandtech.com/show/9258/crucial-mx200-250gb-500gb-1tb-ssd-review\n\nlooks to have the same sort of capacitors that the M550 uses, so not \nideal for db or transaction logs!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jul 2016 15:13:54 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning guidelines for server with 256GB of RAM and\n SSDs?" } ]
[ { "msg_contents": "Why all this concern about how long a disk (or SSD) drive can stay up\nafter a power failure?\n\nIt seems to me that anyone interested in maintaining an important\ndatabase would have suitable backup power on their entire systems,\nincluding the disk drives, so they could coast over any power loss.\n\nI do not have any database that important, but my machine has an APC\nSmart-UPS that has 2 1/2 hours of backup time with relatively new\nbatteries in it. It is so oversize because my previous computer used\nmuch more power than this one does. And if my power company has a brown\nout or black out of over 7 seconds, my natural gas fueled backup\ngenerator picks up the load very quickly.\n\nAm I overlooking something?\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key:166D840A 0C610C8B Registered Machine 1935521.\n /( )\\ Shrewsbury, New Jersey http://linuxcounter.net\n ^^-^^ 06:15:01 up 36 days, 12:17, 2 users, load average: 4.16, 4.26, 4.30\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 08 Jul 2016 06:23:38 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Capacitors, etc., in hard drives and SSD for DBMS machines..." }, { "msg_contents": "On 08/07/2016 13:23, Jean-David Beyer wrote:\n> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\n>\n> It seems to me that anyone interested in maintaining an important\n> database would have suitable backup power on their entire systems,\n> including the disk drives, so they could coast over any power loss.\n>\n> I do not have any database that important, but my machine has an APC\n> Smart-UPS that has 2 1/2 hours of backup time with relatively new\n> batteries in it. It is so oversize because my previous computer used\n> much more power than this one does. And if my power company has a brown\n> out or black out of over 7 seconds, my natural gas fueled backup\n> generator picks up the load very quickly.\n>\n> Am I overlooking something?\n>\n\nUPS-es can fail too ... :)\n\nAnd so many things could be happen ... once I plugged out the power cord \nfrom the UPS which powered the database server (which was a production \nserver) ... I thought powering something else :)\nbut lucky me ... the controller was flash backed\n\n\n\n-- \n Levi\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jul 2016 13:36:46 +0300", "msg_from": "Levente Birta <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capacitors, etc., in hard drives and SSD for DBMS\n machines..." }, { "msg_contents": "On Fri, Jul 8, 2016 at 12:23 PM, Jean-David Beyer <[email protected]>\nwrote:\n\n> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\n>\n> It seems to me that anyone interested in maintaining an important\n> database would have suitable backup power on their entire systems,\n> including the disk drives, so they could coast over any power loss.\n>\n> I do not have any database that important, but my machine has an APC\n> Smart-UPS that has 2 1/2 hours of backup time with relatively new\n> batteries in it. It is so oversize because my previous computer used\n> much more power than this one does. 
And if my power company has a brown\n> out or black out of over 7 seconds, my natural gas fueled backup\n> generator picks up the load very quickly.\n>\n> Am I overlooking something?\n>\n\nEach added protection helps, and covers some of the possible failure\nmodes one may encounter.\n\nMost datacenters shouldn't lose power, and when they do, ups or\nequivalent systems should pick up, and then generators.\n\nYet poweroffs happen. Every element between the power\nsource and the disk drives storing the database has chances\nof failure too. (including those two 'end' elements)\n\nMost servers shouldn't be powered off but it happens, power\ncables may be moved, a PDU may shut off, electrical protections\nmay trigger, someone may press one of the power buttons...\n\nIdeally you want protections on each level, or at least\nclosest to the data (so that there are fewer potential elements\nto consider for failure cases)\n\n-- \nThomas SAMSON", "msg_date": "Fri, 8 Jul 2016 12:45:06 +0200", "msg_from": "Thomas Samson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capacitors, etc., in hard drives and SSD for DBMS machines..." }, { "msg_contents": "\n\nOp 7/8/2016 om 12:23 PM schreef Jean-David Beyer:\n> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\n>\n> It seems to me that anyone interested in maintaining an important\n> database would have suitable backup power on their entire systems,\n> including the disk drives, so they could coast over any power loss.\n>\nAs others have mentioned; *any* link in the power line can fail, from \nthe building's power\nto the plug literally falling out of the harddisk itself. 
Using multiple \npower sources,\nUPS, BBU etc reduce the risk, but the internal capacitors of an SSD are \nthe only thing\nthat will *always* provide power to the disk, no matter what caused the \npower to fail.\n\nIt's like having a small UPS in the disk itself, with near-zero chance \nof failure.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jul 2016 13:44:02 +0200", "msg_from": "vincent <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capacitors, etc., in hard drives and SSD for DBMS\n machines..." }, { "msg_contents": "On 7/8/2016 05:23, Jean-David Beyer wrote:\n> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\nNever had a power supply fail, have you? Or (accidentally) pull the\nwrong cord? :)\n> It seems to me that anyone interested in maintaining an important\n> database would have suitable backup power on their entire systems,\n> including the disk drives, so they could coast over any power loss.\n>\n> I do not have any database that important, but my machine has an APC\n> Smart-UPS that has 2 1/2 hours of backup time with relatively new\n> batteries in it. It is so oversize because my previous computer used\n> much more power than this one does. And if my power company has a brown\n> out or black out of over 7 seconds, my natural gas fueled backup\n> generator picks up the load very quickly.\n>\n> Am I overlooking something?\nYep -- Murphy. And he's a bastard.\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/", "msg_date": "Fri, 8 Jul 2016 08:27:37 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capacitors, etc., in hard drives and SSD for DBMS\n machines..." }, { "msg_contents": "> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\n\nWhen we're discussing SSD power loss protection, it's not a question of how long the drive can stay up but whether data at rest or data in flight are going to be lost/corrupted in the event of a power loss.\n\nThere are a couple big reasons for this.\n\n1. NAND write latency is actually somewhat poor.\n\nSSDs are comprised of NAND chips, DRAM for cache, and the controller. If the SSD disabled its disk cache, the write latencies under moderate load would move from the sub 100 microseconds range to the 1-10 milliseconds range. This is due to how the SSD writes to NAND. A single write operation takes a fairly large amount of time but large blocks can be written as a single operation. \n\n\n2. Garbage Collection\n\nIf you're not familiar with GC, I definitely recommend reading up as it's one of the defining characteristics of SSDs (and now SMR HDDs). The basic principle is that SSDs don't support a modification to a page (8KB). Instead, the contents would need to be erased then written. Additionally, the slices of the chip that can be read, written, or erased are not the same size for each operation. Erase Blocks are much bigger than the page (eg: 2MB vs 8KB). This means that to modify an 8KB page, the entire 2MB erase block needs to be read to the disk cache, erased, then written with the new 8KB page along with the rest of the existing data in the 2MB erase block.\n\nThis operation needs to be power loss protected (it's the operation that the Crucial drives protect against). 
If it's not, then the data that is read to cache could be lost or corrupted if power is lost during the operation. The data in the erase block is not necessarily related to the page being modified and could be anywhere else in the filesystem. *IMPORTANT: This is data at rest that may have been written years prior. It is not just new data that may be lost if a GC operation can not complete.*\n\n\nTL;DR: Many SSDs will not disable disk cache even if you give the command to do so. Full Power Loss Protection at the drive level should be a requirement for any Enterprise or Data Center application to ensure no data loss or corruption of data at rest.\n\n\nThis is why there is so much concern with the internals to specific SSDs regarding behavior in a power loss event. It can have large impacts on the reliability of the entire system.\n\n\nWes Vaske | Senior Storage Solutions Engineer\nMicron Technology\n\n________________________________________\nFrom: [email protected] <[email protected]> on behalf of Levente Birta <[email protected]>\nSent: Friday, July 8, 2016 5:36 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Capacitors, etc., in hard drives and SSD for DBMS machines...\n\nOn 08/07/2016 13:23, Jean-David Beyer wrote:\n> Why all this concern about how long a disk (or SSD) drive can stay up\n> after a power failure?\n>\n> It seems to me that anyone interested in maintaining an important\n> database would have suitable backup power on their entire systems,\n> including the disk drives, so they could coast over any power loss.\n>\n> I do not have any database that important, but my machine has an APC\n> Smart-UPS that has 2 1/2 hours of backup time with relatively new\n> batteries in it. It is so oversize because my previous computer used\n> much more power than this one does. And if my power company has a brown\n> out or black out of over 7 seconds, my natural gas fueled backup\n> generator picks up the load very quickly.\n>\n> Am I overlooking something?\n>\n\nUPS-es can fail too ... :)\n\nAnd so many things could be happen ... once I plugged out the power cord\nfrom the UPS which powered the database server (which was a production\nserver) ... I thought powering something else :)\nbut lucky me ... the controller was flash backed\n\n\n\n--\n Levi\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 8 Jul 2016 14:50:26 +0000", "msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Capacitors, etc., in hard drives and SSD for DBMS\n machines..." }, { "msg_contents": "On 07/08/2016 07:44 AM, vincent wrote:\n> \n> \n> Op 7/8/2016 om 12:23 PM schreef Jean-David Beyer:\n>> Why all this concern about how long a disk (or SSD) drive can stay up\n>> after a power failure?\n>>\n>> It seems to me that anyone interested in maintaining an important\n>> database would have suitable backup power on their entire systems,\n>> including the disk drives, so they could coast over any power loss.\n>>\n> As others have mentioned; *any* link in the power line can fail, from\n> the building's power\n> to the plug literaly falling out of the harddisk itself. 
Using multiple\n> power sources,\n> UPS, BBU etc reduce the risk, but the internal capacitors of an SSD are\n> the only thing\n> that will *always* provide power to the disk, no matter what caused the\n> power to fail.\n> \n> It's like having a small UPS in the disk itself, with near-zero chance\n> of failure.\n> \n> \nThank you for all the responses.\n\nThe only time I had a power supply fail in a computer was in a 10 year\nold computer. When storm Sandy came by, the power went out and the\ncomputer had plenty of time to do a controlled shutdown.\n\nBut when the power was restored about a week later, the power flipped on\nand off at just the right rate to fry the power supply, before the\nsystem even started up enough to shut down again. So I lost no data. All\nI had to do is buy a new computer and restore from the backup tape.\n\nOf course, those capacitors in the disk itself could fail. Fortunately,\nthere have been giant improvements in capacitor manufacture reliability\nsince I had to study reliability of large electronic systems for a\nmilitary contract way back then.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key:166D840A 0C610C8B Registered Machine 1935521.\n /( )\\ Shrewsbury, New Jersey http://linuxcounter.net\n ^^-^^ 10:50:01 up 36 days, 16:52, 2 users, load average: 4.95, 5.23, 5.18\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 08 Jul 2016 10:56:41 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Capacitors, etc.,\n in hard drives and SSD for DBMS machines..." } ]
[ { "msg_contents": "I've found that pgtune doesn't quite provide the benefit that I would\nlike. It still uses large work mem and maintenance work mem numbers,\neven though, up until now Postgres has an issue with large numbers of\ntuples, so seems that smaller settings are better, in the 64MB type\nrange. (based on feedback from this list in the past and testing of\nlarger numbers on dedicated systems).\n\nAlso I've found no benefit to larger Effective cache numbers in boxen\ndedicated to postgres.\n\n So looking for a good start as I start bringing up systems in Amazon,\nAWS for performance.\n\npgtune is a great idea, but it's numbers seem to be based on what\nshould be, vs what is..\n\nI'm currently on CentOS6 and 9.4.5\n\nHardware specs of the AWS systems are 8 cpu/60 GB, I may bump that to\na 16/122gb, but trying to control costs and I know going from 8 to 32\nyielded almost 0, my biggest gain was memory but even then I don't\nthink I've got settings correct.\n\nThanks\nTory\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 11 Jul 2016 17:34:53 -0700", "msg_from": "Tory M Blue <[email protected]>", "msg_from_op": true, "msg_subject": "pgtune or similar to assist in initial settings" }, { "msg_contents": "> So looking for a good start as I start bringing up systems in Amazon,\n AWS for performance.\n\nMaybe useful information:\n\n- \"Amazon Web Services – RDBMS in the Cloud: PostgreSQL on AWS\" (2013)\n https://aws.amazon.com/whitepapers/postgresql-in-the-cloud/ Download\nWhitepaper <http://media.amazonwebservices.com/AWS_RDBMS_PostgreSQL.pdf>\n\n-\" Amazon EBS Volume Performance on Linux Instances\"\n http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSPerformance.html\n\n\nImre\n\n\n\n\n2016-07-12 2:34 GMT+02:00 Tory M Blue <[email protected]>:\n\n> I've found that pgtune doesn't quite provide the benefit that I would\n> like. It still uses large work mem and maintenance work mem numbers,\n> even though, up until now Postgres has an issue with large numbers of\n> tuples, so seems that smaller settings are better, in the 64MB type\n> range. 
(based on feedback from this list in the past and testing of\n> larger numbers on dedicated systems).\n>\n> Also I've found no benefit to larger Effective cache numbers in boxen\n> dedicated to postgres.\n>\n>  So looking for a good start as I start bringing up systems in Amazon,\n> AWS for performance.\n>\n> pgtune is a great idea, but it's numbers seem to be based on what\n> should be, vs what is..\n>\n> I'm currently on CentOS6 and 9.4.5\n>\n> Hardware specs of the AWS systems are 8 cpu/60 GB, I may bump that to\n> a 16/122gb, but trying to control costs and I know going from 8 to 32\n> yielded almost 0, my biggest gain was memory but even then I don't\n> think I've got settings correct.\n>\n> Thanks\n> Tory\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 12 Jul 2016 22:03:55 +0200", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgtune or similar to assist in initial settings" } ]
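As a concrete illustration of the conservative approach described in this thread: a minimal sketch of starting values for an 8 cpu/60 GB instance, applied with ALTER SYSTEM (available from 9.4, which the poster runs). Every number below is an assumption for illustration only, not a figure agreed in the thread, and should be validated against the actual workload:

    -- hypothetical conservative starting values; tune from real measurements
    ALTER SYSTEM SET work_mem = '64MB';              -- the "64MB type range" the poster found workable
    ALTER SYSTEM SET maintenance_work_mem = '512MB'; -- deliberately smaller than pgtune's suggestion
    ALTER SYSTEM SET effective_cache_size = '30GB';  -- poster reports little sensitivity to this value
    SELECT pg_reload_conf();                         -- none of these settings require a restart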
[ { "msg_contents": ">I don't really see anything suspicious in the profile. This looks more\n>like a kernel scheduler issue than a postgres bottleneck one. It seems\n>that somehow using nonblocking IO (started in 9.5) causes scheduling\n>issues when pgbouncer is also local.\n>\n>Could you do perf stat -ddd -a sleep 10 or something during both runs? I\n>suspect that the context switch ratios will be quite different.\n\nPerf show that in 9.5 case context switches occurs about 2 times less.\nPerf output is attached.\n\nRegards,\nDmitriy Sarafannikov\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Fri, 15 Jul 2016 12:24:21 +0300", "msg_from": "=?UTF-8?B?RG1pdHJpeSBTYXJhZmFubmlrb3Y=?= <[email protected]>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?UmU6IFtIQUNLRVJTXSBbUEVSRk9STV0gOS40IC0+IDkuNSByZWdyZXNzaW9u?=\n =?UTF-8?B?IHdpdGggcXVlcmllcyB0aHJvdWdoIHBnYm91bmNlciBvbiBSSEVMIDY=?=" } ]
[ { "msg_contents": "Came across this from a client today. Was able to work around it with a \nfence, but wanted to report it for the next time Robert generates \nstatistics on planner problems. ;) It appears the problem is the planner \ncouldn't recognize that even though there's ~400k rows for user 3737558, \nvery few of them will actually match the rest of the predicates \n(specifically m_ident).\n\n> data=> explain analyze SELECT id FROM table_name WHERE user_id = ‘36’ and m_ident= 'x12345' AND deleted IS NULL ORDER BY changed DESC LIMIT 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=240.31..240.32 rows=1 width=12) (actual time=0.188..0.189 rows=1 loops=1)\n> -> Sort (cost=240.31..240.32 rows=1 width=12) (actual time=0.187..0.187 rows=1 loops=1)\n> Sort Key: changed\n> Sort Method: quicksort Memory: 25kB\n> -> Index Scan using table_name__user_id_deleted on table_name (cost=0.56..240.30 rows=1 width=12) (actual time=0.131..0.178 rows=2 loops=1)\n> Index Cond: ((user_id = 36) AND (deleted IS NULL))\n> Filter: ((m_ident)::text = 'x12345'::text)\n> Rows Removed by Filter: 63\n> Planning time: 0.371 ms\n> Execution time: 0.357 ms\n>\n> (10 rows)\n>\n>\n> data=> explain analyze SELECT id FROM table_name WHERE user_id = '3737558' AND m_ident = 'xxx1234' AND deleted IS NULL ORDER BY changed DESC LIMIT 1;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> Limit (cost=0.43..177673.83 rows=1 width=12) (actual time=17151.010..17151.010 rows=0 loops=1)\n> -> Index Scan Backward using table_name___changed on table_name (cost=0.43..888367.40 rows=5 width=12) (actual time=17151.010..17151.010 rows=0 loops=1)\n> Filter: ((deleted IS NULL) AND (user_id = 3737558) AND ((m_ident)::text = 'xxx1234'::text))\n> Rows Removed by Filter: 16238592\n> Planning time: 0.189 ms\n>\n> Execution time: 17151.042 ms\n>\n> (6 rows)\n>\n> With fence...\n>\n> data=> EXPLAIN ANALYZE SELECT id FROM (SELECT * FROM table_name WHERE user_id = 3737558 AND m_ident = 'xxx1234' AND deleted IS NULL OFFSET 0) a ORDER BY changed DESC LIMIT 1;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=221150.73..221150.74 rows=1 width=12) (actual time=1391.148..1391.148 rows=0 loops=1)\n> -> Sort (cost=221150.73..221150.75 rows=6 width=12) (actual time=1391.147..1391.147 rows=0 loops=1)\n> Sort Key: a.changed\n> Sort Method: quicksort Memory: 25kB\n> -> Subquery Scan on a (cost=4414.63..221150.70 rows=6 width=12) (actual time=1391.115..1391.115 rows=0 loops=1)\n> -> Bitmap Heap Scan on table_name (cost=4414.63..221150.64 rows=6 width=170) (actual time=1391.113..1391.113 rows=0 loops=1)\n> Recheck Cond: ((user_id = 3737558) AND (deleted IS NULL))\n> Filter: ((m_ident)::text = 'AAL3979'::text)\n> Rows Removed by Filter: 386150\n> Heap Blocks: exact=119205\n> -> Bitmap Index Scan on table_name__user_id_deleted (cost=0.00..4414.63 rows=247407 width=0) (actual time=150.593..150.593 rows=397748 loops=1)\n> Index Cond: ((user_id = 3737558) AND (deleted IS NULL))\n> Planning time: 1.613 ms\n> Execution time: 1392.732 ms\n> (14 rows)\n>\n> Relevant indexes:\n>\n> \"table_name__enabled_date_end_enabled\" 
btree (date_end, enabled)\n> \"table_name__user_id\" btree (user_id)\n> \"table_name__user_id_deleted\" btree (user_id, deleted)\n> \"table_name___changed\" btree (changed)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 20 Jul 2016 15:56:31 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": true, "msg_subject": "Poor choice of backward scan" } ]
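One conventional fix for this query shape, not discussed in the message above, is an index that covers both the equality predicates and the ORDER BY, so the LIMIT 1 can stop after the first matching tuple instead of walking the changed index backward. The index name and the use of a partial index here are illustrative assumptions based on the schema shown:

    -- hypothetical composite index matching WHERE ... ORDER BY changed DESC LIMIT 1
    CREATE INDEX table_name__user_id_m_ident_changed
        ON table_name (user_id, m_ident, changed DESC)
        WHERE deleted IS NULL;

With such an index the planner can answer user_id = ?, m_ident = ? and ORDER BY changed DESC LIMIT 1 from a single index descent, which sidesteps the mis-costed backward scan entirely.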
[ { "msg_contents": "I'm working on Postgresql 9.5.3 and executed a query which takes 5 or 7 seconds and it should not take more than 0.30 milliseconds, the query is:\n\n\n-----------QUERY--------------------------------------------------------------------------------------\n\n\nwith recursive t(level,parent_id,id) as (\nselect 0,parent_id,id from parties where parent_id = 105\nunion\nselect t.level + 1,c.parent_id,c.id from parties c join t on c.parent_id = t.id\n)\nselect distinct id from t order by id;\n\n--------------------------------------------------------------------------------------------------------------\n\n\nThe parties table has 245512 rows and one index named \"index_parties_on_parent_id\" , so I added an EXPLAIN ANALYZE VERBOSE to get more details and it was the result:\n\n\n--------RESULT--------------------------------------------------------------------------------------\n\n\nSort (cost=21237260.78..21237261.28 rows=200 width=4) (actual time=6850.338..6850.343 rows=88 loops=1)\nOutput: t.id\nSort Key: t.id\nSort Method: quicksort Memory: 29kB\nCTE t\n-> Recursive Union (cost=0.43..20562814.38 rows=29974967 width=12) (actual time=0.072..6850.180 rows=88 loops=1)\n-> Index Scan using index_parties_on_parent_id on public.parties (cost=0.43..3091.24 rows=807 width=8) (actual time=0.064..0.154 rows=23 loops=1)\nOutput: 0, parties.parent_id, parties.id\nIndex Cond: (parties.parent_id = 105)\n-> Hash Join (cost=777279.14..1996022.38 rows=2997416 width=12) (actual time=2245.623..2283.290 rows=22 loops=3)\nOutput: (t_1.level + 1), c.parent_id, c.id\nHash Cond: (t_1.id = c.parent_id)\n-> WorkTable Scan on t t_1 (cost=0.00..161.40 rows=8070 width=8) (actual time=0.002..0.009 rows=29 loops=3)\nOutput: t_1.level, t_1.id\n-> Hash (cost=606642.73..606642.73 rows=10400673 width=8) (actual time=2206.149..2206.149 rows=1742 loops=3)\nOutput: c.parent_id, c.id\nBuckets: 2097152 Batches: 16 Memory Usage: 16388kB\n-> Seq Scan on public.parties c (cost=0.00..606642.73 rows=10400673 width=8) (actual time=71.070..2190.318 rows=244249 loops=3)\nOutput: c.parent_id, c.id\n-> HashAggregate (cost=674436.76..674438.76 rows=200 width=4) (actual time=6850.291..6850.305 rows=88 loops=1)\nOutput: t.id\nGroup Key: t.id\n-> CTE Scan on t (cost=0.00..599499.34 rows=29974967 width=4) (actual time=0.075..6850.236 rows=88 loops=1)\nOutput: t.id\nPlanning time: 0.815 ms\nExecution time: 7026.026 ms\n\n----------------------------------------------------------------------------------------------------------------------\n\nSo, I could see that index_parties_on_parent_id showed 10400673 rows and checking index_parties_on_parent_id index I get this information: num_rows = 10400673 and index_size = 310 MB\n\nCould Anybody explain me why the difference between parties table = 245512 and index_parties_on_parent_id index = 10400673? 
and How could I improve this index and its response time?", "msg_date": "Fri, 22 Jul 2016 13:34:55 +0000", "msg_from": "Oscar Camuendo <[email protected]>", "msg_from_op": true, "msg_subject": "[PERFORMANCE] Performance index and table " }, { "msg_contents": "Oscar Camuendo <[email protected]> writes:\n> I'm working on Postgresql 9.5.3 and executed a query which takes 5 or 7 seconds and it should not take more than 0.30 milliseconds, the query is:\n\nHave you ANALYZEd your tables lately? 
Some of these estimated row counts\nseem awfully far off for no very good reason.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 22 Jul 2016 10:29:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORMANCE] Performance index and table" } ]
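A minimal sketch of the check Tom is suggesting, using the table name from the question; the pg_class lookup is just one way to see the row-count estimate the planner works from:

    ANALYZE parties;
    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname = 'parties';

If reltuples still disagrees wildly with the true row count after ANALYZE, the table or index may simply be bloated, and the reported index num_rows of 10400673 would then reflect dead entries rather than live rows.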
[ { "msg_contents": "Hi all\nI'm having a problem with a slow query - I tried several things to optimize the queries but didn't really help. The output of explain analyse shows sequential scan on a table of 25 million rows. Even though it is indexed and (I put a multi-column index on the fields used in the query), the explain utility shows no usage of the scan...\nQuery takes around 200 sec...\nBefore considering a design change...I wanted to make sure that there is no way to optimize the query....\nexplain analyze select s.attvalue from functionalvarattributes s, tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal'and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15) except select s.attvalue from functionalvarattributes s, tags t, usertemplvarattribute utva, usertemplatevariable utv, variableattributetypes vat where vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and utv.id=utva.usertempvariable_fk and utv.usertempl_id=15 and t.id=s.tag_id and t.status!='Internal'and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15);\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------\nHashSetOp Except (cost=171505.51..2086914.68 rows=1103 width=8) (actual time=186584.977..186584.977 rows=0 loops=1)\n -> Append (cost=171505.51..2031899.30 rows=22006150 width=8) (actual time=36550.214..186584.539 rows=320 loops=1)\n -> Subquery Scan on \"*SELECT* 1\" (cost=171505.51..905822.16 rows=155062 width=8) (actual time=36550.213..87210.878 rows=2 lo\nops=1)\n -> Hash Join (cost=171505.51..904271.54 rows=155062 width=8) (actual time=36550.212..87210.874 rows=2 loops=1)\n Hash Cond: (split_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\n -> Hash Join (cost=193.91..726328.81 rows=310124 width=8) (actual time=42.242..63701.027 rows=308287 loops=1)\n Hash Cond: (s.tag_id = t.id)\n -> Hash Join (cost=188.03..716954.60 rows=1671226 width=16) (actual time=42.154..63387.723 rows=651155 loo\nps=1)\n Hash Cond: (s.atttype_id = vat.id)\n -> Seq Scan on functionalvarattributes s (cost=0.00..604691.04 rows=25430204 width=24) (actual time=\n0.007..53954.210 rows=25429808 loops=1)\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=42.113..42.113 rows=388 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n -> Seq Scan on variableattributetypes vat (cost=0.00..183.18 rows=388 width=8) (actual time=0.\n003..41.984 rows=388 loops=1)\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\n Rows Removed by Filter: 5516\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.064..0.064 rows=36 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on tags t (cost=0.00..5.43 rows=36 width=8) (actual time=0.012..0.052 rows=36 loops=1)\n Filter: ((status)::text <> 'Internal'::text)\n Rows Removed by Filter: 158\n -> Hash (cost=171250.07..171250.07 rows=4923 width=24) (actual time=23162.533..23162.533 rows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> HashAggregate 
(cost=171200.84..171250.07 rows=4923 width=24) (actual time=23162.498..23162.518 rows=16\nloops=1)\n -> Hash Join (cost=8.95..171188.53 rows=4923 width=24) (actual time=17.642..23162.464 rows=48 loops=\n1)\n Hash Cond: (e.usertemplatevar_id = ut.id)\n -> Seq Scan on functionalvariables e (cost=0.00..155513.07 rows=4164607 width=32) (actual time\n=0.008..21674.864 rows=4164350 loops=1)\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.058..0.058 rows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut (cost=0.29..8.75 rows=16\nwidth=8) (actual time=0.043..0.052 rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Subquery Scan on \"*SELECT* 2\" (cost=172514.13..1126077.14 rows=21851088 width=8) (actual time=43579.873..99373.299 rows=3\n18 loops=1)\n -> Hash Join (cost=172514.13..907566.26 rows=21851088 width=8) (actual time=43579.870..99372.820 rows=318 loops=1)\n Hash Cond: (split_part(split_part((s_1.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e_1.name)::text)\n -> Hash Join (cost=193.91..726328.81 rows=310124 width=8) (actual time=2.724..71226.183 rows=308287 loops=1)\n Hash Cond: (s_1.tag_id = t_1.id)\n -> Hash Join (cost=188.03..716954.60 rows=1671226 width=16) (actual time=2.548..70764.941 rows=651155 loop\ns=1)\n Hash Cond: (s_1.atttype_id = vat_1.id)\n -> Seq Scan on functionalvarattributes s_1 (cost=0.00..604691.04 rows=25430204 width=24) (actual tim\ne=0.003..57363.539 rows=25429808 loops=1)\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=2.450..2.450 rows=388 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n -> Seq Scan on variableattributetypes vat_1 (cost=0.00..183.18 rows=388 width=8) (actual time=\n0.014..2.153 rows=388 loops=1)\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\n Rows Removed by Filter: 5516\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.131..0.131 rows=36 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on tags t_1 (cost=0.00..5.43 rows=36 width=8) (actual time=0.015..0.100 rows=36 loops=1)\n Filter: ((status)::text <> 'Internal'::text)\n Rows Removed by Filter: 158\n -> Hash (cost=172318.46..172318.46 rows=141 width=24) (actual time=27594.115..27594.115 rows=2544 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 134kB\n -> Nested Loop (cost=171201.54..172318.46 rows=141 width=24) (actual time=27586.058..27592.012 rows=2544 l\noops=1)\n -> Nested Loop (cost=171201.12..172243.46 rows=16 width=32) (actual time=27585.957..27586.510 rows=2\n56 loops=1)\n -> HashAggregate (cost=171200.84..171250.07 rows=4923 width=24) (actual time=27572.535..27572.\n595 rows=16 loops=1)\n -> Hash Join (cost=8.95..171188.53 rows=4923 width=24) (actual time=27.159..27572.439 ro\nws=48 loops=1)\n Hash Cond: (e_1.usertemplatevar_id = ut_1.id)\n -> Seq Scan on functionalvariables e_1 (cost=0.00..155513.07 rows=4164607 width=32\n) (actual time=0.163..23959.820 rows=4164350 loops=1)\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.070..0.070 rows=16 loops=\n1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut_1 (cost=0.29\n..8.75 rows=16 width=8) (actual time=0.040..0.057 rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Materialize (cost=0.29..8.83 rows=16 width=8) (actual time=0.839..0.851 rows=16 loops=16)\n -> Index Scan using usertemp_utv_idx on usertemplatevariable utv (cost=0.29..8.75 rows=1\n6 width=8) (actual time=0.039..0.080 
rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Index Only Scan using usertemplvarattribute_atttypeid_key on usertemplvarattribute utva (cost=0.4\n2..4.60 rows=9 width=8) (actual time=0.004..0.011 rows=10 loops=256)\n Index Cond: (usertempvariable_fk = utv.id)\n Heap Fetches: 0\nTotal runtime: 186585.376 ms\n(67 rows)\n\n\n\\d functionalvarattributes;\n Table \"public.functionalvarattributes\"\n Column | Type | Modifiers\n---------------------+-----------------------------+----------------------------------------------------------------------\nid | bigint | not null default nextval('functionalvarattributes_id_seq'::regclass)\nattvalue | character varying(4000) | not null\ncreatedat | timestamp without time zone |\n description | character varying(500) |\n updatedat | timestamp without time zone |\n autosaved | boolean | not null\natttype_id | bigint |\n codactemplvaratt_fk | bigint |\n funcvar_fk | bigint | not null\ntag_id | bigint |\n usertemplvaratt_fk | bigint |\n useratttype_id | bigint |\n keyattvalue | character varying(255) |\nIndexes:\n \"functionalvarattributes_pkey\" PRIMARY KEY, btree (id)\n \"functionalvarattributes_funcvar_fk_tag_id_atttype_id_key\" UNIQUE CONSTRAINT, btree (funcvar_fk, tag_id, atttype_id)\n \"usertemplvaratt_funcvaratt_idx\" btree (usertemplvaratt_fk)\n \"vat_funcvaratt_multi_idx\" btree (atttype_id, attvalue, tag_id)\nForeign-key constraints:\n \"fk6b514a7b1929df33\" FOREIGN KEY (useratttype_id) REFERENCES userattributetypes(id)\n \"fk6b514a7b19d38f01\" FOREIGN KEY (codactemplvaratt_fk) REFERENCES codactemplvarattribute(id)\n \"fk6b514a7b2080a717\" FOREIGN KEY (atttype_id) REFERENCES variableattributetypes(id)\n \"fk6b514a7ba4d2f942\" FOREIGN KEY (funcvar_fk) REFERENCES functionalvariables(id)\n \"fk6b514a7bc81d711d\" FOREIGN KEY (usertemplvaratt_fk) REFERENCES usertemplvarattribute(id)\n \"fk6b514a7bcbbfa8b8\" FOREIGN KEY (tag_id) REFERENCES tags(id)\n\nVersion of postgresql is 9.3 on linux RHEL\n\nuname -a\nLinux 4504DS-SRV-0043.codac.iter.org 2.6.32-431.20.3.el6.x86_64 #1 SMP Fri Jun 6 18:30:54 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux\nThanks for your help\nLana", "msg_date": "Mon, 25 Jul 2016 15:26:46 +0000", "msg_from": "Abadie Lana <[email protected]>", "msg_from_op": true, "msg_subject": "Very slow query (3-4mn) on a table with 25millions rows" }, { "msg_contents": "Abadie Lana <[email protected]> writes:\n> I'm having a problem with a slow query - I tried several things to optimize the queries but didn't really help. The output of explain analyse shows sequential scan on a table of 25 million rows. Even though it is indexed and (I put a multi-column index on the fields used in the query), the explain utility shows no usage of the scan...\n\nThat index looks pretty useless judging from the rowcounts, so I'm not\nsurprised that the planner didn't use it. 
You might have better luck with\nan index on the split_part expression\n\nsplit_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1)\n\nsince it's the join of that to e.name that seems to be actually selective.\n(The planner doesn't appear to realize that it is, but ANALYZE'ing after\ncreating the index should fix that.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 25 Jul 2016 14:06:57 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions rows" }, { "msg_contents": "Hi Tom,\nThanks for the hints..\n\nI made various tests for index\nThe best I could get is the following one with \ncreate index vat_funcvaratt_multi_idx on functionalvarattributes(split_part(split_part(attvalue,' ',1),'.',1), tag_id, atttype_id);\nanalyze functionalvarattributes;\n\nexplain analyze select s.attvalue from functionalvarattributes s, tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal'and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15) except select s.attvalue from functionalvarattributes s, tags t, usertemplvarattribute utva, usertemplatevariable utv, variableattributetypes vat where vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and utv.id=utva.usertempvariable_fk and utv.usertempl_id=15 and t.id=s.tag_id and t.status!='Internal'and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15);\n QUERY PLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------\n---------------------------------------------------------------------\n HashSetOp Except (cost=171505.51..2361978.74 rows=1116 width=8) (actual time=66476.682..66476.682 rows=0 loops=1)\n -> Append (cost=171505.51..2251949.02 rows=44011889 width=8) (actual time=12511.639..66476.544 rows=320 loops=1)\n -> Subquery Scan on \"*SELECT* 1\" (cost=171505.51..907368.77 rows=310121 width=8) (actual time=12511.638..31775.404 rows=2 lo\nops=1)\n -> Hash Join (cost=171505.51..904267.56 rows=310121 width=8) (actual time=12511.636..31775.401 rows=2 loops=1)\n Hash Cond: (split_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\n -> Hash Join (cost=193.91..726325.20 rows=310121 width=8) (actual time=1.227..24083.777 rows=308287 loops=1)\n Hash Cond: (s.tag_id = t.id)\n -> Hash Join (cost=188.03..716951.08 rows=1671210 width=16) (actual time=1.157..23810.490 rows=651155 loop\ns=1)\n Hash Cond: (s.atttype_id = vat.id)\n -> Seq Scan on functionalvarattributes s (cost=0.00..604688.60 rows=25429960 width=24) (actual time=\n0.002..15719.449 rows=25429808 loops=1)\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=1.116..1.116 rows=388 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n -> Seq Scan on variableattributetypes vat (cost=0.00..183.18 rows=388 width=8) (actual time=0.\n005..0.987 rows=388 loops=1)\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\n 
Rows Removed by Filter: 5516\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.064..0.064 rows=36 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on tags t (cost=0.00..5.43 rows=36 width=8) (actual time=0.008..0.055 rows=36 loops=1)\n Filter: ((status)::text <> 'Internal'::text)\n Rows Removed by Filter: 158\n -> Hash (cost=171250.07..171250.07 rows=4923 width=24) (actual time=7377.344..7377.344 rows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> HashAggregate (cost=171200.84..171250.07 rows=4923 width=24) (actual time=7377.310..7377.329 rows=16 lo\nops=1)\n -> Hash Join (cost=8.95..171188.53 rows=4923 width=24) (actual time=3.178..7377.271 rows=48 loops=1)\n Hash Cond: (e.usertemplatevar_id = ut.id)\n -> Seq Scan on functionalvariables e (cost=0.00..155513.07 rows=4164607 width=32) (actual time\n=1.271..5246.277 rows=4164350 loops=1)\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.026..0.026 rows=16 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut (cost=0.29..8.75 rows=16\n width=8) (actual time=0.011..0.020 rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Subquery Scan on \"*SELECT* 2\" (cost=172514.13..1344580.25 rows=43701768 width=8) (actual time=11551.477..34701.030 rows=3\n18 loops=1)\n -> Hash Join (cost=172514.13..907562.57 rows=43701768 width=8) (actual time=11551.475..34700.876 rows=318 loops=1)\n Hash Cond: (split_part(split_part((s_1.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e_1.name)::text)\n -> Hash Join (cost=193.91..726325.20 rows=310121 width=8) (actual time=1.281..27733.991 rows=308287 loops=1)\n Hash Cond: (s_1.tag_id = t_1.id)\n -> Hash Join (cost=188.03..716951.08 rows=1671210 width=16) (actual time=1.194..27391.475 rows=651155 loop\ns=1)\n Hash Cond: (s_1.atttype_id = vat_1.id)\n -> Seq Scan on functionalvarattributes s_1 (cost=0.00..604688.60 rows=25429960 width=24) (actual tim\ne=0.001..17189.172 rows=25429808 loops=1)\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=1.153..1.153 rows=388 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\n -> Seq Scan on variableattributetypes vat_1 (cost=0.00..183.18 rows=388 width=8) (actual time=\n0.007..1.015 rows=388 loops=1)\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\n Rows Removed by Filter: 5516\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.065..0.065 rows=36 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on tags t_1 (cost=0.00..5.43 rows=36 width=8) (actual time=0.010..0.053 rows=36 loops=1)\n Filter: ((status)::text <> 'Internal'::text)\n Rows Removed by Filter: 158\n -> Hash (cost=172318.46..172318.46 rows=141 width=24) (actual time=6553.620..6553.620 rows=2544 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 134kB\n -> Nested Loop (cost=171201.54..172318.46 rows=141 width=24) (actual time=6550.096..6552.789 rows=2544 loo\nps=1)\n -> Nested Loop (cost=171201.12..172243.46 rows=16 width=32) (actual time=6550.077..6550.305 rows=256\n loops=1)\n -> HashAggregate (cost=171200.84..171250.07 rows=4923 width=24) (actual time=6542.508..6542.53\n5 rows=16 loops=1)\n -> Hash Join (cost=8.95..171188.53 rows=4923 width=24) (actual time=12.705..6542.472 row\ns=48 loops=1)\n Hash Cond: (e_1.usertemplatevar_id = ut_1.id)\n -> Seq Scan on functionalvariables e_1 (cost=0.00..155513.07 rows=4164607 width=32\n) (actual time=7.324..5008.051 rows=4164350 loops=1)\n -> Hash (cost=8.75..8.75 rows=16 width=8) 
(actual time=0.033..0.033 rows=16 loops=\n1)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut_1 (cost=0.29\n..8.75 rows=16 width=8) (actual time=0.018..0.026 rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Materialize (cost=0.29..8.83 rows=16 width=8) (actual time=0.473..0.478 rows=16 loops=16)\n -> Index Scan using usertemp_utv_idx on usertemplatevariable utv (cost=0.29..8.75 rows=1\n6 width=8) (actual time=0.032..0.041 rows=16 loops=1)\n Index Cond: (usertempl_id = 15)\n -> Index Only Scan using usertemplvarattribute_atttypeid_key on usertemplvarattribute utva (cost=0.4\n2..4.60 rows=9 width=8) (actual time=0.002..0.004 rows=10 loops=256)\n Index Cond: (usertempvariable_fk = utv.id)\n Heap Fetches: 0\n Total runtime: 66476.942 ms\n(67 rows)\n\nIs this acceptable or can I get better results?\nThanks\nLana\n\n>>-----Original Message-----\n>>From: Tom Lane [mailto:[email protected]]\n>>Sent: 25 July 2016 20:07\n>>To: Abadie Lana\n>>Cc: [email protected]\n>>Subject: Re: [PERFORM] Very slow query (3-4mn) on a table with 25millions\n>>rows\n>>\n>>Abadie Lana <[email protected]> writes:\n>>> I'm having a problem with a slow query - I tried several things to optimize the\n>>queries but didn't really help. The output of explain analyse shows sequential\n>>scan on a table of 25 million rows. Even though it is indexed and (I put a multi-\n>>column index on the fields used in the query), the explain utility shows no usage\n>>of the scan...\n>>\n>>That index looks pretty useless judging from the rowcounts, so I'm not surprised\n>>that the planner didn't use it. You might have better luck with an index on the\n>>split_part expression\n>>\n>>split_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1)\n>>\n>>since it's the join of that to e.name that seems to be actually selective.\n>>(The planner doesn't appear to realize that it is, but ANALYZE'ing after creating\n>>the index should fix that.)\n>>\n>>\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Jul 2016 09:01:24 +0000", "msg_from": "Abadie Lana <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions\n rows" }, { "msg_contents": "On 26/07/16 at 06:01, Abadie Lana wrote:\n> Hi Tom,\n> Thanks for the hints..\n> \n> I made various tests for index\n> The best I could get is the following one with \n> create index vat_funcvaratt_multi_idx on functionalvarattributes(split_part(split_part(attvalue,' ',1),'.',1), tag_id, atttype_id);\n> analyze functionalvarattributes;\n\nI suggest running analyze over the other tables involved in the query\n(or over the whole DB) and then sending back the explain analyze, or\neven better EXPLAIN (ANALYZE,BUFFERS).\n\nSome estimates are close and others are really wrong.\n\nI'm not saying that's going to give you a big boost but we'll be able to\nsee the planner with fresh stats\n\n-- \nMartín Marqués http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 26 Jul 2016 07:34:17 -0300", "msg_from": "Martín Marqués <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow 
query (3-4mn) on a table with 25millions rows" }, { "msg_contents": "Dear Martin\r\nI run an analyse on the whole database + explicit analyse on tables involved in the query.\r\nHere the result of explain (analyse, buffer). Thanks for your help and let me know if you need more information.\r\n\r\nexplain (analyze, buffers) select s.attvalue from functionalvarattributes s, tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal'and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15) except select s.attvalue from functionalvarattributes s, tags t, usertemplvarattribute utva, usertemplatevariable utv, variableattributetypes vat where vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and utv.id=utva.usertempvariable_fk and utv.usertempl_id=15 and t.id=s.tag_id and t.status!='Internal'and split_part(split_part(s.attvalue,' ',1),'.',1) in (select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15);\r\n QUERY PLAN \r\n \r\n---------------------------------------------------------------------------------------------------------------------------------------\r\n---------------------------------------------------------------------\r\n HashSetOp Except (cost=171506.51..2361929.77 rows=1102 width=8) (actual time=75622.307..75622.307 rows=0 loops=1)\r\n Buffers: shared hit=4423 read=925096\r\n -> Append (cost=171506.51..2251904.08 rows=44010276 width=8) (actual time=13510.950..75622.159 rows=320 loops=1)\r\n Buffers: shared hit=4423 read=925096\r\n -> Subquery Scan on \"*SELECT* 1\" (cost=171506.51..907352.41 rows=310110 width=8) (actual time=13510.950..41131.939 rows=2 lo\r\nops=1)\r\n Buffers: shared hit=1785 read=462580\r\n -> Hash Join (cost=171506.51..904251.31 rows=310110 width=8) (actual time=13510.947..41131.932 rows=2 loops=1)\r\n Hash Cond: (split_part(split_part((s.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\r\n Buffers: shared hit=1785 read=462580\r\n -> Hash Join (cost=193.91..726311.49 rows=310110 width=8) (actual time=1.016..33826.718 rows=308287 loops=1)\r\n Hash Cond: (s.tag_id = t.id)\r\n Buffers: shared hit=1070 read=349424\r\n -> Hash Join (cost=188.03..716937.71 rows=1671149 width=16) (actual time=0.941..33398.776 rows=651155 loop\r\ns=1)\r\n Hash Cond: (s.atttype_id = vat.id)\r\n Buffers: shared hit=1067 read=349424\r\n -> Seq Scan on functionalvarattributes s (cost=0.00..604679.32 rows=25429032 width=24) (actual time=\r\n0.002..20099.045 rows=25429808 loops=1)\r\n Buffers: shared hit=965 read=349424\r\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=0.900..0.900 rows=388 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\r\n Buffers: shared hit=102\r\n -> Seq Scan on variableattributetypes vat (cost=0.00..183.18 rows=388 width=8) (actual time=0.\r\n005..0.803 rows=388 loops=1)\r\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\r\n Rows Removed by Filter: 5516\r\n Buffers: shared hit=102\r\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.070..0.070 rows=36 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\r\n Buffers: shared hit=3\r\n -> Seq Scan on tags t (cost=0.00..5.43 rows=36 width=8) (actual time=0.007..0.057 rows=36 loops=1)\r\n Filter: ((status)::text <> 
'Internal'::text)\r\n Rows Removed by Filter: 158\r\n Buffers: shared hit=3\r\n -> Hash (cost=171251.03..171251.03 rows=4926 width=24) (actual time=6801.452..6801.452 rows=16 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\r\n Buffers: shared hit=715 read=113156\r\n -> HashAggregate (cost=171201.77..171251.03 rows=4926 width=24) (actual time=6801.417..6801.435 rows=16 lo\r\nops=1)\r\n Buffers: shared hit=715 read=113156\r\n -> Hash Join (cost=8.95..171189.45 rows=4926 width=24) (actual time=12.812..6801.387 rows=48 loops=1\r\n)\r\n Hash Cond: (e.usertemplatevar_id = ut.id)\r\n Buffers: shared hit=715 read=113156\r\n -> Seq Scan on functionalvariables e (cost=0.00..155513.72 rows=4164672 width=32) (actual time\r\n=5.244..4924.135 rows=4164350 loops=1)\r\n Buffers: shared hit=711 read=113156\r\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.030..0.030 rows=16 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\r\n Buffers: shared hit=4\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut (cost=0.29..8.75 rows=16\r\n width=8) (actual time=0.012..0.023 rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n Buffers: shared hit=4\r\n -> Subquery Scan on \"*SELECT* 2\" (cost=172515.69..1344551.67 rows=43700166 width=8) (actual time=12639.042..34490.098 rows=3\r\n18 loops=1)\r\n Buffers: shared hit=2638 read=462516\r\n -> Hash Join (cost=172515.69..907550.01 rows=43700166 width=8) (actual time=12639.040..34489.953 rows=318 loops=1)\r\n Hash Cond: (split_part(split_part((s_1.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e_1.name)::text)\r\n Buffers: shared hit=2638 read=462516\r\n -> Hash Join (cost=193.91..726311.49 rows=310110 width=8) (actual time=2.354..26734.043 rows=308287 loops=1)\r\n Hash Cond: (s_1.tag_id = t_1.id)\r\n Buffers: shared hit=1102 read=349392\r\n -> Hash Join (cost=188.03..716937.71 rows=1671149 width=16) (actual time=2.176..26421.280 rows=651155 loop\r\ns=1)\r\n Hash Cond: (s_1.atttype_id = vat_1.id)\r\n Buffers: shared hit=1099 read=349392\r\n -> Seq Scan on functionalvarattributes s_1 (cost=0.00..604679.32 rows=25429032 width=24) (actual tim\r\ne=0.003..16949.841 rows=25429808 loops=1)\r\n Buffers: shared hit=997 read=349392\r\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=2.092..2.092 rows=388 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\r\n Buffers: shared hit=102\r\n -> Seq Scan on variableattributetypes vat_1 (cost=0.00..183.18 rows=388 width=8) (actual time=\r\n0.014..1.852 rows=388 loops=1)\r\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\r\n Rows Removed by Filter: 5516\r\n Buffers: shared hit=102\r\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.138..0.138 rows=36 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\r\n Buffers: shared hit=3\r\n -> Seq Scan on tags t_1 (cost=0.00..5.43 rows=36 width=8) (actual time=0.016..0.088 rows=36 loops=1)\r\n Filter: ((status)::text <> 'Internal'::text)\r\n Rows Removed by Filter: 158\r\n Buffers: shared hit=3\r\n -> Hash (cost=172320.02..172320.02 rows=141 width=24) (actual time=7386.827..7386.827 rows=2544 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 134kB\r\n Buffers: shared hit=1536 read=113124\r\n -> Nested Loop (cost=171202.47..172320.02 rows=141 width=24) (actual time=7378.869..7384.698 rows=2544 loo\r\nps=1)\r\n Buffers: shared hit=1536 read=113124\r\n -> Nested Loop (cost=171202.05..172245.02 rows=16 width=32) (actual time=7378.835..7379.342 rows=256\r\n loops=1)\r\n Buffers: shared hit=751 
read=113124\r\n -> HashAggregate (cost=171201.77..171251.03 rows=4926 width=24) (actual time=7368.551..7368.62\r\n0 rows=16 loops=1)\r\n Buffers: shared hit=747 read=113124\r\n -> Hash Join (cost=8.95..171189.45 rows=4926 width=24) (actual time=13.272..7368.471 row\r\ns=48 loops=1)\r\n Hash Cond: (e_1.usertemplatevar_id = ut_1.id)\r\n Buffers: shared hit=747 read=113124\r\n -> Seq Scan on functionalvariables e_1 (cost=0.00..155513.72 rows=4164672 width=32\r\n) (actual time=9.412..5383.223 rows=4164350 loops=1)\r\n Buffers: shared hit=743 read=113124\r\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.061..0.061 rows=16 loops=\r\n1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\r\n Buffers: shared hit=4\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut_1 (cost=0.29\r\n..8.75 rows=16 width=8) (actual time=0.032..0.052 rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n Buffers: shared hit=4\r\n -> Materialize (cost=0.29..8.83 rows=16 width=8) (actual time=0.643..0.654 rows=16 loops=16)\r\n Buffers: shared hit=4\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable utv (cost=0.29..8.75 rows=1\r\n6 width=8) (actual time=0.052..0.075 rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n Buffers: shared hit=4\r\n -> Index Only Scan using usertemplvarattribute_atttypeid_key on usertemplvarattribute utva (cost=0.4\r\n2..4.60 rows=9 width=8) (actual time=0.004..0.010 rows=10 loops=256)\r\n Index Cond: (usertempvariable_fk = utv.id)\r\n Heap Fetches: 0\r\n Buffers: shared hit=785\r\n Total runtime: 75622.559 ms\r\n(104 rows)\r\n\r\n\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex - France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\n>>-----Original Message-----\r\n>>From: Martín Marqués [mailto:[email protected]]\r\n>>Sent: 26 July 2016 12:34\r\n>>To: Abadie Lana; Tom Lane\r\n>>Cc: [email protected]\r\n>>Subject: Re: [PERFORM] Very slow query (3-4mn) on a table with 25millions\r\n>>rows\r\n>>\r\n>>El 26/07/16 a las 06:01, Abadie Lana escribió:\r\n>>> Hi Tom,\r\n>>> Thanks for the hints..\r\n>>>\r\n>>> I made various tests for index\r\n>>> The best I could get is the following one with\r\n>>> create index vat_funcvaratt_multi_idx on\r\n>>functionalvarattributes(split_part(split_part(attvalue,' ',1),'.',1), tag_id,\r\n>>atttype_id);\r\n>>> analyze functionalvarattributes;\r\n>>\r\n>>I suggest running analyze over the other tables involved in the query\r\n>>(or over the whole DB) and then sending back the explain analyze, or\r\n>>even better EXPLAIN (ANALYZE,BUFFERS).\r\n>>\r\n>>Some estimates are close and others are really wrong.\r\n>>\r\n>>I'm not saying that's going to give you a big bust but we'll be able to\r\n>>see the planner with fresh stats\r\n>>\r\n>>--\r\n>>Martín Marqués http://www.2ndQuadrant.com/\r\n>>PostgreSQL Development, 24x7 Support, Training & Services\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 27 Jul 2016 06:03:22 +0000", "msg_from": "Abadie Lana <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions\n rows" }, { "msg_contents": "Hello Lana,\n\nOn Wed, Jul 27, 2016 at 8:03 AM, Abadie Lana <[email protected]> wrote:\n\n> Here the result of 
explain (analyse, buffer). Thanks for your help and let\n> me know if you need more information.\n\n\nI noticed 3 things in your query:\n\n1. In the second part (after the except), the 2 tables utva and utv are not\njoined against the others table. Is there a missing join somewhere ?\n\nLet that snipset:\n\nselect s.attvalue\n from functionalvarattributes s\n , tags t\n , variableattributetypes vat\n where t.id=s.tag_id\n and t.status!='Internal'\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\n and vat.id=s.atttype_id\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name\n from\nfunctionalvariables e\n ,\nusertemplatevariable ut\n where\ne.usertemplatevar_id=ut.id\n and\nut.usertempl_id=15\n )\n\nbe called A\n\nLet that snipset:\n\nselect *\n from usertemplvarattribute utva\n , usertemplatevariable utv\n where utv.id=utva.usertempvariable_fk\n and utv.usertempl_id=15\n\nbe called B\n\nThen you query is:\n\nA\nexcept\nA CROSS JOIN B\n\nIf B is not the empty set, than the above query is guaranteed to always\nhave 0 row.\n\n2. Assuming your query is right (even if I failed to understand its point),\nwe could only do the A snipset once instead of twice using a with clause as\nin:\n\nwith filtered_s as (\nselect s.attvalue\n from functionalvarattributes s\n , tags t\n , variableattributetypes vat\n where t.id=s.tag_id\n and t.status!='Internal'\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\n and vat.id=s.atttype_id\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name\n from\nfunctionalvariables e\n ,\nusertemplatevariable ut\n where\ne.usertemplatevar_id=ut.id\n and\nut.usertempl_id=15\n )\n)\nselect s.attvalue\n from filtered_s s\nexcept\nselect s.attvalue\n from filtered_s s\n , usertemplvarattribute utva\n , usertemplatevariable utv\n where utv.id=utva.usertempvariable_fk\n and utv.usertempl_id=15\n;\n\nThis rewritten query should run about 2x. faster.\n\n3. The planner believe that the e.name subselect will give 4926 rows\n(instead of 16 in reality), due to this wrong estimate it will consider the\nvat_funcvaratt_multi_idx index as not usefull. I don't know how to give the\nplanner more accurate info ...\n\n-- \nFélix\n\nHello Lana,On Wed, Jul 27, 2016 at 8:03 AM, Abadie Lana <[email protected]> wrote:\r\nHere the result of explain (analyse, buffer). Thanks for your help and let me know if you need more information.I noticed 3 things in your query:1. In the second part (after the except), the 2 tables utva and utv are not joined against the others table. 
Is there a missing join somewhere ?Let that snipset:select s.attvalue   from functionalvarattributes s     , tags t     , variableattributetypes vat where t.id=s.tag_id    and t.status!='Internal'   and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')    and vat.id=s.atttype_id    and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name                                                              from functionalvariables e                                                                , usertemplatevariable ut                                                             where e.usertemplatevar_id=ut.id                                                               and ut.usertempl_id=15                                                           )be called ALet that snipset:select *  from usertemplvarattribute utva     , usertemplatevariable utv  where utv.id=utva.usertempvariable_fk     and utv.usertempl_id=15 be called BThen you query is:AexceptA CROSS JOIN BIf B is not the empty set, than the above query is guaranteed to always have 0 row.2. Assuming your query is right (even if I failed to understand its point), we could only do the A snipset once instead of twice using a with clause as in:with filtered_s as (select s.attvalue   from functionalvarattributes s     , tags t     , variableattributetypes vat where t.id=s.tag_id    and t.status!='Internal'   and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')    and vat.id=s.atttype_id    and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name                                                              from functionalvariables e                                                                , usertemplatevariable ut                                                             where e.usertemplatevar_id=ut.id                                                               and ut.usertempl_id=15                                                           ))select s.attvalue   from filtered_s sexceptselect s.attvalue   from filtered_s s     , usertemplvarattribute utva     , usertemplatevariable utv  where utv.id=utva.usertempvariable_fk     and utv.usertempl_id=15 ;This rewritten query should run about 2x. faster.3. The planner believe that the e.name subselect will give 4926 rows (instead of 16 in reality), due to this wrong estimate it will consider the vat_funcvaratt_multi_idx index as not usefull. I don't know how to give the planner more accurate info ...-- Félix", "msg_date": "Wed, 27 Jul 2016 11:15:49 +0200", "msg_from": "=?UTF-8?Q?F=C3=A9lix_GERZAGUET?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions rows" }, { "msg_contents": "On Wed, Jul 27, 2016 at 11:15 AM, Félix GERZAGUET <[email protected]\n> wrote:\n\n> I don't know how to give the planner more accurate info ...\n>\n\nCould you try to materialize the e.name subquery in another table. 
As in\n\ncreate table func_var_name_for_tpl_15 as\nselect e.name\n from\nfunctionalvariables e\n ,\nusertemplatevariable ut\n where\ne.usertemplatevar_id=ut.id\n and\nut.usertempl_id=15\n;\n\nThen analyse that table\nThen try the rewritten query:\n\nwith filtered_s as (\nselect s.attvalue\n from functionalvarattributes s\n , tags t\n , variableattributetypes vat\n where t.id=s.tag_id\n and t.status!='Internal'\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\n and vat.id=s.atttype_id\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name\n from\nfunc_var_name_for_tpl_15\ne\n )\n)\nselect s.attvalue\n from filtered_s s\nexcept\nselect s.attvalue\n from filtered_s s\n , usertemplvarattribute utva\n , usertemplatevariable utv\n where utv.id=utva.usertempvariable_fk\n and utv.usertempl_id=15\n;\n\nDoes it use the vat_funcvaratt_multi_idx index now ?\n\n--\nFélix\n\nOn Wed, Jul 27, 2016 at 11:15 AM, Félix GERZAGUET <[email protected]> wrote: I don't know how to give the planner more accurate info ... Could you try to materialize the e.name subquery in another table. As increate table func_var_name_for_tpl_15 as select e.name                                                              from functionalvariables e                                                                , usertemplatevariable ut                                                             where e.usertemplatevar_id=ut.id                                                               and ut.usertempl_id=15;Then analyse that tableThen try the rewritten query:with filtered_s as (select s.attvalue   from functionalvarattributes s     , tags t     , variableattributetypes vat where t.id=s.tag_id    and t.status!='Internal'   and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')    and vat.id=s.atttype_id    and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name                                                              from func_var_name_for_tpl_15 e                                                                ))select s.attvalue   from filtered_s sexceptselect s.attvalue   from filtered_s s     , usertemplvarattribute utva     , usertemplatevariable utv  where utv.id=utva.usertempvariable_fk     and utv.usertempl_id=15 ;Does it use the vat_funcvaratt_multi_idx index now ?--Félix", "msg_date": "Wed, 27 Jul 2016 11:36:44 +0200", "msg_from": "=?UTF-8?Q?F=C3=A9lix_GERZAGUET?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions rows" }, { "msg_contents": "Hello Felix\r\nThanks indeed the new query is much faster…The query itself is complicated to explain basically you can view it as graph and want to make sure that there is no dependencies if I remove a set of points….\r\n\r\nexplain analyze with filtered_s as ( select s.attvalue from functionalvarattributes s, tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal' and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15) ) select s.attvalue from filtered_s s except select s.attvalue from filtered_s s , usertemplvarattribute utva, usertemplatevariable utv where utv.id=utva.usertempvariable_fk and utv.usertempl_id=15;\r\n QUERY 
PLAN\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------------------------\r\nHashSetOp Except (cost=904251.31..2013436.93 rows=200 width=516) (actual time=40007.482..40007.482 rows=0 loops=1)\r\n CTE filtered_s\r\n -> Hash Join (cost=171506.51..904251.31 rows=310110 width=8) (actual time=13986.554..40005.687 rows=2 loops=1)\r\n Hash Cond: (split_part(split_part((s_2.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\r\n -> Hash Join (cost=193.91..726311.49 rows=310110 width=8) (actual time=2.675..30633.916 rows=308287 loops=1)\r\n Hash Cond: (s_2.tag_id = t.id)\r\n -> Hash Join (cost=188.03..716937.71 rows=1671149 width=16) (actual time=2.518..30249.987 rows=651155 loops=1)\r\n Hash Cond: (s_2.atttype_id = vat.id)\r\n -> Seq Scan on functionalvarattributes s_2 (cost=0.00..604679.32 rows=25429032 width=24) (actual time=0.005..1\r\n9229.473 rows=25429808 loops=1)\r\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=2.433..2.433 rows=388 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\r\n -> Seq Scan on variableattributetypes vat (cost=0.00..183.18 rows=388 width=8) (actual time=0.010..2.171\r\nrows=388 loops=1)\r\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\r\n Rows Removed by Filter: 5516\r\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.147..0.147 rows=36 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\r\n -> Seq Scan on tags t (cost=0.00..5.43 rows=36 width=8) (actual time=0.015..0.119 rows=36 loops=1)\r\n Filter: ((status)::text <> 'Internal'::text)\r\n Rows Removed by Filter: 158\r\n -> Hash (cost=171251.03..171251.03 rows=4926 width=24) (actual time=8939.073..8939.073 rows=16 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\r\n -> HashAggregate (cost=171201.77..171251.03 rows=4926 width=24) (actual time=8939.039..8939.058 rows=16 loops=1)\r\n -> Hash Join (cost=8.95..171189.45 rows=4926 width=24) (actual time=3188.453..8938.943 rows=48 loops=1)\r\n Hash Cond: (e.usertemplatevar_id = ut.id)\r\n -> Seq Scan on functionalvariables e (cost=0.00..155513.72 rows=4164672 width=32) (actual time=0.004..65\r\n54.351 rows=4164350 loops=1)\r\n -> Hash (cost=8.75..8.75 rows=16 width=8) (actual time=0.042..0.042 rows=16 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable ut (cost=0.29..8.75 rows=16 width=8)\r\n(actual time=0.015..0.029 rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n -> Append (cost=0.00..999159.97 rows=44010259 width=516) (actual time=13986.564..40007.199 rows=320 loops=1)\r\n -> Subquery Scan on \"*SELECT* 1\" (cost=0.00..9303.30 rows=310110 width=516) (actual time=13986.563..40005.703 rows=2 loops=1\r\n)\r\n -> CTE Scan on filtered_s s (cost=0.00..6202.20 rows=310110 width=516) (actual time=13986.561..40005.699 rows=2 loops=\r\n1)\r\n -> Subquery Scan on \"*SELECT* 2\" (cost=0.70..989856.67 rows=43700149 width=516) (actual time=0.071..1.242 rows=318 loops=1)\r\n -> Nested Loop (cost=0.70..552855.18 rows=43700149 width=516) (actual time=0.069..0.941 rows=318 loops=1)\r\n -> CTE Scan on filtered_s s_1 (cost=0.00..6202.20 rows=310110 width=516) (actual time=0.003..0.005 rows=2 loops=\r\n1)\r\n -> Materialize (cost=0.70..84.46 rows=141 width=0) (actual time=0.032..0.331 rows=159 loops=2)\r\n -> Nested Loop (cost=0.70..83.75 rows=141 width=0) (actual time=0.053..0.426 
rows=159 loops=1)\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable utv (cost=0.29..8.75 rows=16 width=8) (\r\nactual time=0.030..0.052 rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n -> Index Only Scan using usertemplvarattribute_atttypeid_key on usertemplvarattribute utva (cost=0.4\r\n2..4.60 rows=9 width=8) (actual time=0.005..0.011 rows=10 loops=16)\r\n Index Cond: (usertempvariable_fk = utv.id)\r\n Heap Fetches: 0\r\nTotal runtime: 40007.716 ms\r\n\r\n\r\nLana\r\nFrom: Félix GERZAGUET [mailto:[email protected]]\r\nSent: 27 July 2016 11:16\r\nTo: Abadie Lana\r\nCc: Martín Marqués; Tom Lane; [email protected]\r\nSubject: Re: [PERFORM] Very slow query (3-4mn) on a table with 25millions rows\r\n\r\nHello Lana,\r\n\r\nOn Wed, Jul 27, 2016 at 8:03 AM, Abadie Lana <[email protected]<mailto:[email protected]>> wrote:\r\nHere the result of explain (analyse, buffer). Thanks for your help and let me know if you need more information.\r\n\r\nI noticed 3 things in your query:\r\n1. In the second part (after the except), the 2 tables utva and utv are not joined against the others table. Is there a missing join somewhere ?\r\n\r\nLet that snipset:\r\n\r\nselect s.attvalue\r\n from functionalvarattributes s\r\n , tags t\r\n , variableattributetypes vat\r\n where t.id<http://t.id>=s.tag_id\r\n and t.status!='Internal'\r\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\r\n and vat.id<http://vat.id>=s.atttype_id\r\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name<http://e.name>\r\n from functionalvariables e\r\n , usertemplatevariable ut\r\n where e.usertemplatevar_id=ut.id<http://ut.id>\r\n and ut.usertempl_id=15\r\n )\r\nbe called A\r\nLet that snipset:\r\n\r\nselect *\r\n from usertemplvarattribute utva\r\n , usertemplatevariable utv\r\n where utv.id<http://utv.id>=utva.usertempvariable_fk\r\n and utv.usertempl_id=15\r\nbe called B\r\nThen you query is:\r\nA\r\nexcept\r\nA CROSS JOIN B\r\nIf B is not the empty set, than the above query is guaranteed to always have 0 row.\r\n\r\n2. Assuming your query is right (even if I failed to understand its point), we could only do the A snipset once instead of twice using a with clause as in:\r\n\r\nwith filtered_s as (\r\nselect s.attvalue\r\n from functionalvarattributes s\r\n , tags t\r\n , variableattributetypes vat\r\n where t.id<http://t.id>=s.tag_id\r\n and t.status!='Internal'\r\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\r\n and vat.id<http://vat.id>=s.atttype_id\r\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name<http://e.name>\r\n from functionalvariables e\r\n , usertemplatevariable ut\r\n where e.usertemplatevar_id=ut.id<http://ut.id>\r\n and ut.usertempl_id=15\r\n )\r\n)\r\nselect s.attvalue\r\n from filtered_s s\r\nexcept\r\nselect s.attvalue\r\n from filtered_s s\r\n , usertemplvarattribute utva\r\n , usertemplatevariable utv\r\n where utv.id<http://utv.id>=utva.usertempvariable_fk\r\n and utv.usertempl_id=15\r\n;\r\nThis rewritten query should run about 2x. faster.\r\n3. The planner believe that the e.name<http://e.name> subselect will give 4926 rows (instead of 16 in reality), due to this wrong estimate it will consider the vat_funcvaratt_multi_idx index as not usefull. 
I don't know how to give the planner more accurate info ...\r\n\r\n--\r\nFélix\r\n\r\n\n\n\n\n\n\n\n\n\nHello Felix\nThanks indeed the new query is much faster…The query itself is complicated to explain basically you can view it as graph and want to make sure that there is\r\n no dependencies if I remove a set of points….\n \nexplain analyze with filtered_s as ( select s.attvalue  from functionalvarattributes s, tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal'\r\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')  and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15)\r\n ) select s.attvalue from filtered_s s except select s.attvalue from filtered_s s , usertemplvarattribute utva, usertemplatevariable utv where utv.id=utva.usertempvariable_fk and  utv.usertempl_id=15;\n                                                                                                QUERY PLAN                            \r\n\n                                                                    \n---------------------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------------\nHashSetOp Except  (cost=904251.31..2013436.93 rows=200 width=516) (actual time=40007.482..40007.482 rows=0 loops=1)\n   CTE filtered_s\n     ->  Hash Join  (cost=171506.51..904251.31 rows=310110 width=8) (actual time=13986.554..40005.687 rows=2 loops=1)\n           Hash Cond: (split_part(split_part((s_2.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\n           ->  Hash Join  (cost=193.91..726311.49 rows=310110 width=8) (actual time=2.675..30633.916 rows=308287 loops=1)\n                 Hash Cond: (s_2.tag_id = t.id)\n                 ->  Hash Join  (cost=188.03..716937.71 rows=1671149 width=16) (actual time=2.518..30249.987 rows=651155 loops=1)\n                       Hash Cond: (s_2.atttype_id = vat.id)\n                       ->  Seq Scan on functionalvarattributes s_2  (cost=0.00..604679.32 rows=25429032 width=24) (actual time=0.005..1\n9229.473 rows=25429808 loops=1)\n                       ->  Hash  (cost=183.18..183.18 rows=388 width=8) (actual time=2.433..2.433 rows=388 loops=1)\n                             Buckets: 1024  Batches: 1  Memory Usage: 16kB\n                             ->  Seq Scan on variableattributetypes vat  (cost=0.00..183.18 rows=388 width=8) (actual time=0.010..2.171\nrows=388 loops=1)\n                                   Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\n                                   Rows Removed by Filter: 5516\n                 ->  Hash  (cost=5.43..5.43 rows=36 width=8) (actual time=0.147..0.147 rows=36 loops=1)\n                       Buckets: 1024  Batches: 1  Memory Usage: 2kB\n                       ->  Seq Scan on tags t  (cost=0.00..5.43 rows=36 width=8) (actual time=0.015..0.119 rows=36 loops=1)\n                             Filter: ((status)::text <> 'Internal'::text)\n                             Rows Removed by Filter: 158\n           ->  Hash  (cost=171251.03..171251.03 rows=4926 width=24) (actual time=8939.073..8939.073 rows=16 loops=1)\n                 Buckets: 1024  Batches: 1  Memory Usage: 1kB\n                 ->  HashAggregate  (cost=171201.77..171251.03 rows=4926 width=24) (actual time=8939.039..8939.058 
rows=16 loops=1)
I don't know how to give the planner more accurate info ...\n\n\n\r\n-- \n\n\nFélix", "msg_date": "Wed, 27 Jul 2016 14:55:16 +0000", "msg_from": "Abadie Lana <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions\n rows" }, { "msg_contents": "Sorry for the delay\r\nStill no use of the index\r\ncreate table func_var_name_for_tpl_15 as select e.name from functionalvariables e, usertemplatevariable ut where e.usertemplatevar_id=ut.id and ut.usertempl_id=15;\r\nSELECT 48\r\n=# analyze func_var_name_for_tpl_15;\r\nANALYZE\r\n=# explain analyze with filtered_s as ( select s.attvalue from functionalvarattributes s , tags t, variableattributetypes vat where t.id=s.tag_id and t.status!='Internal' and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK') and vat.id=s.atttype_id and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name from func_var_name_for_tpl_15 e)) select s.attvalue from filtered_s s, usertemplvarattribute utva, usertemplatevariable utv where utv.id=utva.usertempvariable_fk and utv.usertempl_id=15;\r\n QUERY PLAN\r\n\r\n---------------------------------------------------------------------------------------------------------------------------------------\r\n--------------------------------------------------\r\nNested Loop (cost=689051.63..698514.55 rows=741512 width=516) (actual time=11043.744..47958.871 rows=318 loops=1)\r\n CTE filtered_s\r\n -> Hash Join (cost=195.99..689050.93 rows=5262 width=8) (actual time=11043.680..47957.962 rows=2 loops=1)\r\n Hash Cond: (s_1.tag_id = t.id)\r\n -> Hash Join (cost=190.11..688886.10 rows=28355 width=16) (actual time=11043.499..47957.774 rows=6 loops=1)\r\n Hash Cond: (s_1.atttype_id = vat.id)\r\n -> Hash Semi Join (cost=2.08..686796.55 rows=431458 width=24) (actual time=11040.920..47955.181 rows=6 loops=1)\r\n Hash Cond: (split_part(split_part((s_1.attvalue)::text, ' '::text, 1), '.'::text, 1) = (e.name)::text)\r\n -> Seq Scan on functionalvarattributes s_1 (cost=0.00..604679.32 rows=25429032 width=24) (actual time=0.006..2\r\n2378.636 rows=25429808 loops=1)\r\n -> Hash (cost=1.48..1.48 rows=48 width=21) (actual time=0.063..0.063 rows=48 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 3kB\r\n -> Seq Scan on func_var_name_for_tpl_15 e (cost=0.00..1.48 rows=48 width=21) (actual time=0.006..0.032 r\r\nows=48 loops=1)\r\n -> Hash (cost=183.18..183.18 rows=388 width=8) (actual time=2.480..2.480 rows=388 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 16kB\r\n -> Seq Scan on variableattributetypes vat (cost=0.00..183.18 rows=388 width=8) (actual time=0.021..2.220 rows=\r\n388 loops=1)\r\n Filter: ((fieldtype)::text = ANY ('{DBF_INLINK,DBF_OUTLINK,DBF_FWDLINK}'::text[]))\r\n Rows Removed by Filter: 5516\r\n -> Hash (cost=5.43..5.43 rows=36 width=8) (actual time=0.166..0.166 rows=36 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\r\n -> Seq Scan on tags t (cost=0.00..5.43 rows=36 width=8) (actual time=0.015..0.137 rows=36 loops=1)\r\n Filter: ((status)::text <> 'Internal'::text)\r\n Rows Removed by Filter: 158\r\n -> CTE Scan on filtered_s s (cost=0.00..105.24 rows=5262 width=516) (actual time=11043.686..47957.977 rows=2 loops=1)\r\n -> Materialize (cost=0.70..84.46 rows=141 width=0) (actual time=0.027..0.307 rows=159 loops=2)\r\n -> Nested Loop (cost=0.70..83.75 rows=141 width=0) (actual time=0.049..0.394 rows=159 loops=1)\r\n -> Index Scan using usertemp_utv_idx on usertemplatevariable utv (cost=0.29..8.75 rows=16 width=8) (actual time=0.025.\r\n.0.040 
rows=16 loops=1)\r\n Index Cond: (usertempl_id = 15)\r\n -> Index Only Scan using usertemplvarattribute_atttypeid_key on usertemplvarattribute utva (cost=0.42..4.60 rows=9 wid\r\nth=8) (actual time=0.005..0.013 rows=10 loops=16)\r\n Index Cond: (usertempvariable_fk = utv.id)\r\n Heap Fetches: 0\r\nTotal runtime: 47959.180 ms\r\n(31 rows)\r\n\r\nsddcryo=#\r\n\r\n[iterlogo]<http://www.iter.org/>\r\nLana ABADIE\r\nDatabase Engineer\r\nCODAC Section\r\n\r\nITER Organization, Building 72/4108, SCOD, Control System Division\r\nRoute de Vinon-sur-Verdon - CS 90 046 - 13067 St Paul Lez Durance Cedex – France\r\nPhone: +33 4 42 17 84 02\r\nGet the latest ITER news on http://www.iter.org/whatsnew\r\nFrom: Félix GERZAGUET [mailto:[email protected]]\r\nSent: 27 July 2016 11:37\r\nTo: Abadie Lana\r\nCc: Martín Marqués; Tom Lane; [email protected]\r\nSubject: Re: [PERFORM] Very slow query (3-4mn) on a table with 25millions rows\r\n\r\n\r\nOn Wed, Jul 27, 2016 at 11:15 AM, Félix GERZAGUET <[email protected]<mailto:[email protected]>> wrote:\r\n I don't know how to give the planner more accurate info ...\r\n\r\nCould you try to materialize the e.name<http://e.name> subquery in another table. As in\r\n\r\ncreate table func_var_name_for_tpl_15 as\r\nselect e.name<http://e.name>\r\n from functionalvariables e\r\n , usertemplatevariable ut\r\n where e.usertemplatevar_id=ut.id<http://ut.id>\r\n and ut.usertempl_id=15\r\n;\r\nThen analyse that table\r\nThen try the rewritten query:\r\n\r\nwith filtered_s as (\r\nselect s.attvalue\r\n from functionalvarattributes s\r\n , tags t\r\n , variableattributetypes vat\r\n where t.id<http://t.id>=s.tag_id\r\n and t.status!='Internal'\r\n and vat.fieldtype in ('DBF_INLINK','DBF_OUTLINK','DBF_FWDLINK')\r\n and vat.id<http://vat.id>=s.atttype_id\r\n and split_part(split_part(s.attvalue,' ',1),'.',1) in ( select e.name<http://e.name>\r\n from func_var_name_for_tpl_15 e\r\n )\r\n)\r\nselect s.attvalue\r\n from filtered_s s\r\nexcept\r\nselect s.attvalue\r\n from filtered_s s\r\n , usertemplvarattribute utva\r\n , usertemplatevariable utv\r\n where utv.id<http://utv.id>=utva.usertempvariable_fk\r\n and utv.usertempl_id=15\r\n;\r\n\r\nDoes it use the vat_funcvaratt_multi_idx index now ?\r\n\r\n--\r\nFélix", "msg_date": "Thu, 28 Jul 2016 09:55:25 +0000", "msg_from": "Abadie Lana <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very slow query (3-4mn) on a table with 25millions\n rows" } ]
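The thread above ends with the planner still refusing vat_funcvaratt_multi_idx: it estimates ~431k matching rows for the split_part() semi-join where the actual count is 6, so the sequential scan always looks cheaper. A minimal sketch of one possible workaround, using the table and column names quoted in the thread (the new column, index, function and trigger names here are hypothetical, not from the thread): store the split expression in a real column, so that ANALYZE collects ordinary per-column statistics on it instead of having to estimate an opaque expression.

-- Hypothetical column/index/trigger names; column type assumed to match attvalue.
ALTER TABLE functionalvarattributes ADD COLUMN attvalue_head text;

UPDATE functionalvarattributes
   SET attvalue_head = split_part(split_part(attvalue, ' ', 1), '.', 1);

CREATE INDEX funcvaratt_head_idx
    ON functionalvarattributes (attvalue_head, tag_id, atttype_id);

ANALYZE functionalvarattributes;

-- 9.x has no generated columns, so a trigger keeps the column current.
CREATE OR REPLACE FUNCTION set_attvalue_head() RETURNS trigger AS $$
BEGIN
    NEW.attvalue_head := split_part(split_part(NEW.attvalue, ' ', 1), '.', 1);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER funcvaratt_head_trg
    BEFORE INSERT OR UPDATE OF attvalue ON functionalvarattributes
    FOR EACH ROW EXECUTE PROCEDURE set_attvalue_head();

With that in place the query can filter on attvalue_head directly, and the planner sees a plain column with its own n_distinct and most-common-value statistics, which gives it a realistic row estimate for the semi-join and a plain btree column to index.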
[ { "msg_contents": "Hi.\n\nI have an OLAP-oriented DB (light occasional bulk writes and heavy \naggregated selects over large periods of data) based on Postgres 9.5.3.\n\nServer is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS, \nmirror).\n\nThe largest table is 13GB (with a 4GB index on it), other tables are 4, \n2 and less than 1GB.\n\nAfter reading a lot of articles and \"howto-s\" I've collected following \nset of tweaks and hints:\n\n\nZFS pools creation:\nzfs create zroot/ara/sqldb\nzfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n\n\nzfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\nNAME PROPERTY VALUE SOURCE\nzroot/ara/sqldb/pgsql primarycache all local\nzroot/ara/sqldb/pgsql recordsize 8K local\nzroot/ara/sqldb/pgsql logbias latency local\nzroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n\nL2ARC is disabled\nVDEV cache is disabled\n\n\npgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\npgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n\n\n/etc/sysctl.conf\nvfs.zfs.metaslab.lba_weighting_enabled=0\n\n\npostgresql.conf:\nlisten_addresses = '*'\nmax_connections = 100\nshared_buffers = 16GB\neffective_cache_size = 48GB\nwork_mem = 500MB\nmaintenance_work_mem = 2GB\nmin_wal_size = 4GB\nmax_wal_size = 8GB\ncheckpoint_completion_target = 0.9\nwal_buffers = 16MB\ndefault_statistics_target = 500\nrandom_page_cost = 1\nlog_lock_waits = on\nlog_directory = 'pg_log'\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\nlog_destination = 'csvlog'\nlogging_collector = on\nlog_min_duration_statement = 10000\nshared_preload_libraries = 'pg_stat_statements'\ntrack_activity_query_size = 10000\ntrack_io_timing = on\n\n\nzfs-stats -A\n------------------------------------------------------------------------\nZFS Subsystem Report\t\t\t\tThu Jul 28 21:58:46 2016\n------------------------------------------------------------------------\nARC Summary: (HEALTHY)\n\tMemory Throttle Count:\t\t\t0\nARC Misc:\n\tDeleted:\t\t\t\t14.92b\n\tRecycle Misses:\t\t\t\t7.01m\n\tMutex Misses:\t\t\t\t4.72m\n\tEvict Skips:\t\t\t\t1.28b\nARC Size:\t\t\t\t53.27%\t32.59\tGiB\n\tTarget Size: (Adaptive)\t\t53.28%\t32.60\tGiB\n\tMin Size (Hard Limit):\t\t12.50%\t7.65\tGiB\n\tMax Size (High Water):\t\t8:1\t61.18\tGiB\nARC Size Breakdown:\n\tRecently Used Cache Size:\t92.83%\t30.26\tGiB\n\tFrequently Used Cache Size:\t7.17%\t2.34\tGiB\nARC Hash Breakdown:\n\tElements Max:\t\t\t\t10.36m\n\tElements Current:\t\t78.09%\t8.09m\n\tCollisions:\t\t\t\t9.63b\n\tChain Max:\t\t\t\t26\n\tChains:\t\t\t\t\t1.49m\n------------------------------------------------------------------------\n\nzfs-stats -E\n------------------------------------------------------------------------\nZFS Subsystem Report\t\t\t\tThu Jul 28 21:59:57 2016\n------------------------------------------------------------------------\nARC Efficiency:\t\t\t\t\t49.85b\n\tCache Hit Ratio:\t\t70.94%\t35.36b\n\tCache Miss Ratio:\t\t29.06%\t14.49b\n\tActual Hit Ratio:\t\t66.32%\t33.06b\n\tData Demand Efficiency:\t\t84.85%\t25.39b\n\tData Prefetch Efficiency:\t17.85%\t12.90b\n\tCACHE HITS BY CACHE LIST:\n\t Anonymously Used:\t\t4.10%\t1.45b\n\t Most Recently Used:\t\t37.82%\t13.37b\n\t Most Frequently Used:\t\t55.67%\t19.68b\n\t Most Recently Used Ghost:\t0.58%\t203.42m\n\t Most Frequently Used Ghost:\t1.84%\t649.83m\n\tCACHE HITS BY DATA TYPE:\n\t Demand Data:\t\t\t60.92%\t21.54b\n\t Prefetch Data:\t\t6.51%\t2.30b\n\t Demand Metadata:\t\t32.56%\t11.51b\n\t Prefetch 
Metadata:\t\t0.00%\t358.22k\n\tCACHE MISSES BY DATA TYPE:\n\t Demand Data:\t\t\t26.55%\t3.85b\n\t Prefetch Data:\t\t73.13%\t10.59b\n\t Demand Metadata:\t\t0.31%\t44.95m\n\t Prefetch Metadata:\t\t0.00%\t350.48k\n\nzfs-stats -Z\n------------------------------------------------------------------------\nZFS Subsystem Report\t\t\t\tThu Jul 28 22:02:46 2016\n------------------------------------------------------------------------\nFile-Level Prefetch: (HEALTHY)\nDMU Efficiency:\t\t\t\t\t49.97b\n\tHit Ratio:\t\t\t55.85%\t27.90b\n\tMiss Ratio:\t\t\t44.15%\t22.06b\n\tColinear:\t\t\t\t22.06b\n\t Hit Ratio:\t\t\t0.04%\t7.93m\n\t Miss Ratio:\t\t\t99.96%\t22.05b\n\tStride:\t\t\t\t\t17.85b\n\t Hit Ratio:\t\t\t99.61%\t17.78b\n\t Miss Ratio:\t\t\t0.39%\t69.46m\nDMU Misc:\n\tReclaim:\t\t\t\t22.05b\n\t Successes:\t\t\t0.05%\t10.53m\n\t Failures:\t\t\t99.95%\t22.04b\n\tStreams:\t\t\t\t10.14b\n\t +Resets:\t\t\t0.10%\t9.97m\n\t -Resets:\t\t\t99.90%\t10.13b\n\t Bogus:\t\t\t\t0\n\n\nNotes\\concerns:\n\n- primarycache=metadata (recommended in most articles) produces a \nsignificant performance degradation (in SELECT queries);\n\n- from what I can see, Postgres uses memory too carefully. I would like \nsomehow to force it to keep accessed data in memory as long as possible. \nInstead I often see that even frequently accessed data is pushed out of \nmemory cache for no apparent reasons.\n\nDo I miss something important in my configs? Are there any double \nwrites\\reads somewhere because of OS\\ZFS\\Postgres caches? How to avoid them?\n\nPlease share your experience\\tips. Thanks.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 28 Jul 2016 23:04:55 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "PostgreSQL on ZFS: performance tuning" }, { "msg_contents": "\n\nOn 07/29/2016 08:04 AM, trafdev wrote:\n> Hi.\n>\n> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n> aggregated selects over large periods of data) based on Postgres 9.5.3.\n>\n> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,\n> mirror).\n>\n> The largest table is 13GB (with a 4GB index on it), other tables are 4,\n> 2 and less than 1GB.\n>\n> After reading a lot of articles and \"howto-s\" I've collected following\n> set of tweaks and hints:\n>\n>\n> ZFS pools creation:\n> zfs create zroot/ara/sqldb\n> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>\n>\n> zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\n> NAME PROPERTY VALUE SOURCE\n> zroot/ara/sqldb/pgsql primarycache all local\n> zroot/ara/sqldb/pgsql recordsize 8K local\n> zroot/ara/sqldb/pgsql logbias latency local\n> zroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n>\n> L2ARC is disabled\n> VDEV cache is disabled\n>\n>\n> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n>\n>\n> /etc/sysctl.conf\n> vfs.zfs.metaslab.lba_weighting_enabled=0\n>\n>\n> postgresql.conf:\n> listen_addresses = '*'\n> max_connections = 100\n> shared_buffers = 16GB\n> effective_cache_size = 48GB\n\nIt may not be a problem for your workload, but this effective_cache_size \nvalue is far too high.\n\n> work_mem = 500MB\n> maintenance_work_mem = 2GB\n> min_wal_size = 4GB\n> max_wal_size = 8GB\n> checkpoint_completion_target = 0.9\n\nYou probably need to increase the 
checkpoint_timeout too.\n\n> wal_buffers = 16MB\n> default_statistics_target = 500\n> random_page_cost = 1\n> log_lock_waits = on\n> log_directory = 'pg_log'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_destination = 'csvlog'\n> logging_collector = on\n> log_min_duration_statement = 10000\n> shared_preload_libraries = 'pg_stat_statements'\n> track_activity_query_size = 10000\n> track_io_timing = on\n>\n>\n> zfs-stats -A\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 21:58:46 2016\n> ------------------------------------------------------------------------\n> ARC Summary: (HEALTHY)\n> Memory Throttle Count: 0\n> ARC Misc:\n> Deleted: 14.92b\n> Recycle Misses: 7.01m\n> Mutex Misses: 4.72m\n> Evict Skips: 1.28b\n> ARC Size: 53.27% 32.59 GiB\n> Target Size: (Adaptive) 53.28% 32.60 GiB\n> Min Size (Hard Limit): 12.50% 7.65 GiB\n> Max Size (High Water): 8:1 61.18 GiB\n> ARC Size Breakdown:\n> Recently Used Cache Size: 92.83% 30.26 GiB\n> Frequently Used Cache Size: 7.17% 2.34 GiB\n> ARC Hash Breakdown:\n> Elements Max: 10.36m\n> Elements Current: 78.09% 8.09m\n> Collisions: 9.63b\n> Chain Max: 26\n> Chains: 1.49m\n> ------------------------------------------------------------------------\n>\n> zfs-stats -E\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 21:59:57 2016\n> ------------------------------------------------------------------------\n> ARC Efficiency: 49.85b\n> Cache Hit Ratio: 70.94% 35.36b\n> Cache Miss Ratio: 29.06% 14.49b\n> Actual Hit Ratio: 66.32% 33.06b\n> Data Demand Efficiency: 84.85% 25.39b\n> Data Prefetch Efficiency: 17.85% 12.90b\n> CACHE HITS BY CACHE LIST:\n> Anonymously Used: 4.10% 1.45b\n> Most Recently Used: 37.82% 13.37b\n> Most Frequently Used: 55.67% 19.68b\n> Most Recently Used Ghost: 0.58% 203.42m\n> Most Frequently Used Ghost: 1.84% 649.83m\n> CACHE HITS BY DATA TYPE:\n> Demand Data: 60.92% 21.54b\n> Prefetch Data: 6.51% 2.30b\n> Demand Metadata: 32.56% 11.51b\n> Prefetch Metadata: 0.00% 358.22k\n> CACHE MISSES BY DATA TYPE:\n> Demand Data: 26.55% 3.85b\n> Prefetch Data: 73.13% 10.59b\n> Demand Metadata: 0.31% 44.95m\n> Prefetch Metadata: 0.00% 350.48k\n>\n> zfs-stats -Z\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 22:02:46 2016\n> ------------------------------------------------------------------------\n> File-Level Prefetch: (HEALTHY)\n> DMU Efficiency: 49.97b\n> Hit Ratio: 55.85% 27.90b\n> Miss Ratio: 44.15% 22.06b\n> Colinear: 22.06b\n> Hit Ratio: 0.04% 7.93m\n> Miss Ratio: 99.96% 22.05b\n> Stride: 17.85b\n> Hit Ratio: 99.61% 17.78b\n> Miss Ratio: 0.39% 69.46m\n> DMU Misc:\n> Reclaim: 22.05b\n> Successes: 0.05% 10.53m\n> Failures: 99.95% 22.04b\n> Streams: 10.14b\n> +Resets: 0.10% 9.97m\n> -Resets: 99.90% 10.13b\n> Bogus: 0\n>\n>\n> Notes\\concerns:\n>\n> - primarycache=metadata (recommended in most articles) produces a\n> significant performance degradation (in SELECT queries);\n\nThose articles are wrong. PostgreSQL relies of filesystem cache, so it \nneeds primarycache=all.\n\n>\n> - from what I can see, Postgres uses memory too carefully. 
I would like\n> somehow to force it to keep accessed data in memory as long as possible.\n> Instead I often see that even frequently accessed data is pushed out of\n> memory cache for no apparent reasons.\n >\n\nThis is probably a consequence of the primarycache misconfiguration.\n\n>\n> Do I miss something important in my configs? Are there any double\n> writes\\reads somewhere because of OS\\ZFS\\Postgres caches? How to avoid\n> them?\n>\n> Please share your experience\\tips. Thanks.\n>\n>\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 29 Jul 2016 08:30:49 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" }, { "msg_contents": "> > - from what I can see, Postgres uses memory too carefully. I would like\n> > somehow to force it to keep accessed data in memory as long as possible.\n> > Instead I often see that even frequently accessed data is pushed out of\n> > memory cache for no apparent reasons.\n> >\n>\n> This is probably a consequence of the primarycache misconfiguration.\n\nThanks! And I'm using \"primarycache=all\" in my deployment...\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 28 Jul 2016 23:47:22 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" }, { "msg_contents": "\n\nOn 29.07.2016 08:30, Tomas Vondra wrote:\n>\n>\n> On 07/29/2016 08:04 AM, trafdev wrote:\n>> Hi.\n>>\n>> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n>> aggregated selects over large periods of data) based on Postgres 9.5.3.\n>>\n>> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,\n>> mirror).\n>>\n>> The largest table is 13GB (with a 4GB index on it), other tables are 4,\n>> 2 and less than 1GB.\n>>\n>> After reading a lot of articles and \"howto-s\" I've collected following\n>> set of tweaks and hints:\n>>\n>>\n>> ZFS pools creation:\n>> zfs create zroot/ara/sqldb\n>> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>>\n>>\n>> zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\n>> NAME PROPERTY VALUE SOURCE\n>> zroot/ara/sqldb/pgsql primarycache all local\n>> zroot/ara/sqldb/pgsql recordsize 8K local\n>> zroot/ara/sqldb/pgsql logbias latency local\n>> zroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n>>\n>> L2ARC is disabled\n>> VDEV cache is disabled\n>>\n>>\n>> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n>> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n>>\n>>\n>> /etc/sysctl.conf\n>> vfs.zfs.metaslab.lba_weighting_enabled=0\n>>\n>>\n>> postgresql.conf:\n>> listen_addresses = '*'\n>> max_connections = 100\n>> shared_buffers = 16GB\n>> effective_cache_size = 48GB\n>\n> It may not be a problem for your workload, but this effective_cache_size\n> value is far too high.\n\nMay i asked why? ZFS in default caches your size of RAM minus 1 GB. \nGetting the shared buffer from the 64 GB RAM i would asume 47 GB would \nbe a better value. But this would not be far too high. 
So please can you \nexplain this?\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Sep 2016 18:00:02 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" }, { "msg_contents": "On 09/27/2016 06:00 PM, Torsten Zuehlsdorff wrote:\n>\n>\n> On 29.07.2016 08:30, Tomas Vondra wrote:\n>>\n>>\n>> On 07/29/2016 08:04 AM, trafdev wrote:\n>>> Hi.\n>>>\n>>> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n>>> aggregated selects over large periods of data) based on Postgres 9.5.3.\n>>>\n>>> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,\n>>> mirror).\n>>>\n>>> The largest table is 13GB (with a 4GB index on it), other tables are 4,\n>>> 2 and less than 1GB.\n>>>\n>>> After reading a lot of articles and \"howto-s\" I've collected following\n>>> set of tweaks and hints:\n>>>\n>>>\n>>> ZFS pools creation:\n>>> zfs create zroot/ara/sqldb\n>>> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>>>\n>>>\n>>> zfs get primarycache,recordsize,logbias,compression\n>>> zroot/ara/sqldb/pgsql\n>>> NAME PROPERTY VALUE SOURCE\n>>> zroot/ara/sqldb/pgsql primarycache all local\n>>> zroot/ara/sqldb/pgsql recordsize 8K local\n>>> zroot/ara/sqldb/pgsql logbias latency local\n>>> zroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n>>>\n>>> L2ARC is disabled\n>>> VDEV cache is disabled\n>>>\n>>>\n>>> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n>>> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n>>>\n>>>\n>>> /etc/sysctl.conf\n>>> vfs.zfs.metaslab.lba_weighting_enabled=0\n>>>\n>>>\n>>> postgresql.conf:\n>>> listen_addresses = '*'\n>>> max_connections = 100\n>>> shared_buffers = 16GB\n>>> effective_cache_size = 48GB\n>>\n>> It may not be a problem for your workload, but this effective_cache_size\n>> value is far too high.\n>\n> May i asked why? ZFS in default caches your size of RAM minus 1 GB.\n> Getting the shared buffer from the 64 GB RAM i would asume 47 GB\n> would be a better value. But this would not be far too high. So\n> please can you explain this?\n\nBecause it's not a global value, but an estimate of how much RAM is \navailable as a cache for a single query. 
So if you're running 10 queries \nat the same time, they'll have to share the memory.\n\nIt's a bit trickier as there's often a fair amount of cross-backend \nsharing (backends accessing the same data, so it's likely one backend \nloads data into cache, and then other backends access it too).\n\nIt also ignores that memory may get allocated for other reasons - some \nqueries may allocate quite a bit of memory for sorts/aggregations, so \nnot only is\n\n effective_cache_size = RAM - shared_buffers\n\nexcessive as it ignores the per-query nature, but also because it \nneglects these other allocations.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 27 Sep 2016 23:38:42 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" }, { "msg_contents": "On 9/27/2016 16:38, Tomas Vondra wrote:\n> On 09/27/2016 06:00 PM, Torsten Zuehlsdorff wrote:\n>>\n>>\n>> On 29.07.2016 08:30, Tomas Vondra wrote:\n>>>\n>>>\n>>> On 07/29/2016 08:04 AM, trafdev wrote:\n>>>> Hi.\n>>>>\n>>>> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n>>>> aggregated selects over large periods of data) based on Postgres\n>>>> 9.5.3.\n>>>>\n>>>> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on\n>>>> ZFS,\n>>>> mirror).\n>>>>\n>>>> The largest table is 13GB (with a 4GB index on it), other tables\n>>>> are 4,\n>>>> 2 and less than 1GB.\n>>>>\n>>>> After reading a lot of articles and \"howto-s\" I've collected following\n>>>> set of tweaks and hints:\n>>>>\n>>>>\n>>>> ZFS pools creation:\n>>>> zfs create zroot/ara/sqldb\n>>>> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>>>>\n>>>>\n>>>> zfs get primarycache,recordsize,logbias,compression\n>>>> zroot/ara/sqldb/pgsql\n>>>> NAME PROPERTY VALUE SOURCE\n>>>> zroot/ara/sqldb/pgsql primarycache all local\n>>>> zroot/ara/sqldb/pgsql recordsize 8K local\n>>>> zroot/ara/sqldb/pgsql logbias latency local\n>>>> zroot/ara/sqldb/pgsql compression lz4 inherited from\n>>>> zroot\n>>>>\n>>>> L2ARC is disabled\n>>>> VDEV cache is disabled\n>>>>\n>>>>\n>>>> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n>>>> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D\n>>>> /ara/sqldb/pgsql/data\"\n>>>>\n>>>>\n>>>> /etc/sysctl.conf\n>>>> vfs.zfs.metaslab.lba_weighting_enabled=0\n>>>>\n>>>>\n>>>> postgresql.conf:\n>>>> listen_addresses = '*'\n>>>> max_connections = 100\n>>>> shared_buffers = 16GB\n>>>> effective_cache_size = 48GB\n>>>\n>>> It may not be a problem for your workload, but this\n>>> effective_cache_size\n>>> value is far too high.\n>>\n>> May i asked why? ZFS in default caches your size of RAM minus 1 GB.\n>> Getting the shared buffer from the 64 GB RAM i would asume 47 GB\n>> would be a better value. But this would not be far too high. So\n>> please can you explain this?\n>\n> Because it's not a global value, but an estimate of how much RAM is\n> available as a cache for a single query. 
So if you're running 10\n> queries at the same time, they'll have to share the memory.\n>\n> It's a bit trickier as there's often a fair amount of cross-backend\n> sharing (backends accessing the same data, so it's likely one backend\n> loads data into cache, and then other backends access it too).\n>\n> It also ignores that memory may get allocated for other reasons - some\n> queries may allocate quite a bit of memory for sorts/aggregations, so\n> not only is\n>\n> effective_cache_size = RAM - shared_buffers\n>\n> excessive as it ignores the per-query nature, but also because it\n> neglects these other allocations.\n>\n> regards\n>\nYou may well find that with lz4 compression a 128kb record size on that\nfilesystem is materially faster -- it is here for most workloads under\nPostgres.\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/", "msg_date": "Tue, 27 Sep 2016 17:15:26 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" },
{ "msg_contents": "+1\nA larger record size can increase the compression ratio, and so reduce the IO.\n\nDid you set atime off for ZFS?\n\nOn 2016-09-28 at 6:16 AM, \"Karl Denninger\" <[email protected]> wrote:\n\n> On 9/27/2016 16:38, Tomas Vondra wrote:\n>\n> On 09/27/2016 06:00 PM, Torsten Zuehlsdorff wrote:\n>\n> On 29.07.2016 08:30, Tomas Vondra wrote:\n>\n> On 07/29/2016 08:04 AM, trafdev wrote:\n>\n> Hi.\n>\n> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n> aggregated selects over large periods of data) based on Postgres 9.5.3.\n>\n> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,\n> mirror).\n>\n> The largest table is 13GB (with a 4GB index on it), other tables are 4,\n> 2 and less than 1GB.\n>\n> After reading a lot of articles and \"howto-s\" I've collected following\n> set of tweaks and hints:\n>\n> ZFS pools creation:\n> zfs create zroot/ara/sqldb\n> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>\n> zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\n> NAME PROPERTY VALUE SOURCE\n> zroot/ara/sqldb/pgsql primarycache all local\n> zroot/ara/sqldb/pgsql recordsize 8K local\n> zroot/ara/sqldb/pgsql logbias latency local\n> zroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n>\n> L2ARC is disabled\n> VDEV cache is disabled\n>\n> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n>\n> /etc/sysctl.conf\n> vfs.zfs.metaslab.lba_weighting_enabled=0\n>\n> postgresql.conf:\n> listen_addresses = '*'\n> max_connections = 100\n> shared_buffers = 16GB\n> effective_cache_size = 48GB\n>\n> It may not be a problem for your workload, but this effective_cache_size\n> value is far too high.\n>\n> May i asked why? ZFS in default caches your size of RAM minus 1 GB.\n> Getting the shared buffer from the 64 GB RAM i would asume 47 GB\n> would be a better value. But this would not be far too high. So\n> please can you explain this?\n>\n> Because it's not a global value, but an estimate of how much RAM is\n> available as a cache for a single query. So if you're running 10 queries at\n> the same time, they'll have to share the memory.\n>\n> It's a bit trickier as there's often a fair amount of cross-backend\n> sharing (backends accessing the same data, so it's likely one backend loads\n> data into cache, and then other backends access it too).\n>\n> It also ignores that memory may get allocated for other reasons - some\n> queries may allocate quite a bit of memory for sorts/aggregations, so not\n> only is\n>\n> effective_cache_size = RAM - shared_buffers\n>\n> excessive as it ignores the per-query nature, but also because it neglects\n> these other allocations.\n>\n> regards\n>\n> You may well find that with lz4 compression a 128kb record size on that\n> filesystem is materially faster -- it is here for most workloads under\n> Postgres.\n>\n> --\n> Karl Denninger\n> [email protected]\n> *The Market Ticker*\n> *[S/MIME encrypted email preferred]*\n>", "msg_date": "Wed, 28 Sep 2016 12:06:07 +0800", "msg_from": "Jov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" },
{ "msg_contents": "Using ZFS, you can turn full_page_writes off for PG, which can save WAL write\nIO.\n\nOn 2016-07-29 at 2:05 PM, \"trafdev\" <[email protected]> wrote:\n\n> Hi.\n>\n> I have an OLAP-oriented DB (light occasional bulk writes and heavy\n> aggregated selects over large periods of data) based on Postgres 9.5.3.\n>\n> Server is a FreeBSD 10.3 with 64GB of RAM and 2x500GB SSD (root on ZFS,\n> mirror).\n>\n> The largest table is 13GB (with a 4GB index on it), other tables are 4, 2\n> and less than 1GB.\n>\n> After reading a lot of articles and \"howto-s\" I've collected following set\n> of tweaks and hints:\n>\n> ZFS pools creation:\n> zfs create zroot/ara/sqldb\n> zfs create -o recordsize=8k -o primarycache=all zroot/ara/sqldb/pgsql\n>\n> zfs get primarycache,recordsize,logbias,compression zroot/ara/sqldb/pgsql\n> NAME PROPERTY VALUE SOURCE\n> zroot/ara/sqldb/pgsql primarycache all local\n> zroot/ara/sqldb/pgsql recordsize 8K local\n> zroot/ara/sqldb/pgsql logbias latency local\n> zroot/ara/sqldb/pgsql compression lz4 inherited from zroot\n>\n> L2ARC is disabled\n> VDEV cache is disabled\n>\n> pgsql -c \"mkdir /ara/sqldb/pgsql/data_ix\"\n> pgsql -c \"initdb --locale=en_US.UTF-8 -E UTF-8 -D /ara/sqldb/pgsql/data\"\n>\n> /etc/sysctl.conf\n> vfs.zfs.metaslab.lba_weighting_enabled=0\n>\n> postgresql.conf:\n> listen_addresses = '*'\n> max_connections = 100\n> shared_buffers = 16GB\n> effective_cache_size = 48GB\n> work_mem = 500MB\n> maintenance_work_mem = 2GB\n> min_wal_size = 4GB\n> max_wal_size = 8GB\n> checkpoint_completion_target = 0.9\n> wal_buffers = 16MB\n> default_statistics_target = 500\n> random_page_cost = 1\n> log_lock_waits = on\n> log_directory = 'pg_log'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_destination = 'csvlog'\n> logging_collector = on\n> log_min_duration_statement = 10000\n> shared_preload_libraries = 'pg_stat_statements'\n> track_activity_query_size = 10000\n> track_io_timing = on\n>\n> zfs-stats -A\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 21:58:46 2016\n> ------------------------------------------------------------------------\n> ARC Summary: (HEALTHY)\n> Memory Throttle Count: 0\n> ARC Misc:\n> Deleted: 14.92b\n> Recycle Misses: 7.01m\n> Mutex Misses: 4.72m\n> Evict Skips: 1.28b\n> ARC Size: 53.27% 32.59 GiB\n> Target Size: (Adaptive) 53.28% 32.60 GiB\n> Min Size (Hard Limit): 12.50% 7.65 GiB\n> Max Size 
(High Water): 8:1 61.18 GiB\n> ARC Size Breakdown:\n> Recently Used Cache Size: 92.83% 30.26 GiB\n> Frequently Used Cache Size: 7.17% 2.34 GiB\n> ARC Hash Breakdown:\n> Elements Max: 10.36m\n> Elements Current: 78.09% 8.09m\n> Collisions: 9.63b\n> Chain Max: 26\n> Chains: 1.49m\n> ------------------------------------------------------------------------\n>\n> zfs-stats -E\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 21:59:57 2016\n> ------------------------------------------------------------------------\n> ARC Efficiency: 49.85b\n> Cache Hit Ratio: 70.94% 35.36b\n> Cache Miss Ratio: 29.06% 14.49b\n> Actual Hit Ratio: 66.32% 33.06b\n> Data Demand Efficiency: 84.85% 25.39b\n> Data Prefetch Efficiency: 17.85% 12.90b\n> CACHE HITS BY CACHE LIST:\n> Anonymously Used: 4.10% 1.45b\n> Most Recently Used: 37.82% 13.37b\n> Most Frequently Used: 55.67% 19.68b\n> Most Recently Used Ghost: 0.58% 203.42m\n> Most Frequently Used Ghost: 1.84% 649.83m\n> CACHE HITS BY DATA TYPE:\n> Demand Data: 60.92% 21.54b\n> Prefetch Data: 6.51% 2.30b\n> Demand Metadata: 32.56% 11.51b\n> Prefetch Metadata: 0.00% 358.22k\n> CACHE MISSES BY DATA TYPE:\n> Demand Data: 26.55% 3.85b\n> Prefetch Data: 73.13% 10.59b\n> Demand Metadata: 0.31% 44.95m\n> Prefetch Metadata: 0.00% 350.48k\n>\n> zfs-stats -Z\n> ------------------------------------------------------------------------\n> ZFS Subsystem Report Thu Jul 28 22:02:46 2016\n> ------------------------------------------------------------------------\n> File-Level Prefetch: (HEALTHY)\n> DMU Efficiency: 49.97b\n> Hit Ratio: 55.85% 27.90b\n> Miss Ratio: 44.15% 22.06b\n> Colinear: 22.06b\n> Hit Ratio: 0.04% 7.93m\n> Miss Ratio: 99.96% 22.05b\n> Stride: 17.85b\n> Hit Ratio: 99.61% 17.78b\n> Miss Ratio: 0.39% 69.46m\n> DMU Misc:\n> Reclaim: 22.05b\n> Successes: 0.05% 10.53m\n> Failures: 99.95% 22.04b\n> Streams: 10.14b\n> +Resets: 0.10% 9.97m\n> -Resets: 99.90% 10.13b\n> Bogus: 0\n>\n>\n> Notes\\concerns:\n>\n> - primarycache=metadata (recommended in most articles) produces a\n> significant performance degradation (in SELECT queries);\n>\n> - from what I can see, Postgres uses memory too carefully. I would like\n> somehow to force it to keep accessed data in memory as long as possible.\n> Instead I often see that even frequently accessed data is pushed out of\n> memory cache for no apparent reasons.\n>\n> Do I miss something important in my configs? Are there any double\n> writes\\reads somewhere because of OS\\ZFS\\Postgres caches? How to avoid them?\n>\n> Please share your experience\\tips. 
Thanks.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>", "msg_date": "Wed, 28 Sep 2016 12:11:35 +0800", "msg_from": "Jov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" },
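A minimal sketch of the change Jov describes, assuming PostgreSQL 9.4+ (for ALTER SYSTEM) and a filesystem such as ZFS whose copy-on-write design guarantees that an 8kB page can never be left half-written by a crash; on filesystems without that guarantee, disabling full_page_writes risks unrecoverable page corruption:

    -- Sketch: turn off full-page writes on a ZFS-backed cluster.
    -- Safe only because ZFS copy-on-write never exposes a torn page after a crash.
    ALTER SYSTEM SET full_page_writes = off;
    SELECT pg_reload_conf();   -- the setting is reloadable; no restart required
    SHOW full_page_writes;     -- should now report "off"

The saving shows up as reduced WAL volume, because PostgreSQL no longer has to write a full page image the first time each page is modified after a checkpoint.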
{ "msg_contents": "On 9/27/2016 23:06, Jov wrote:\n>\n> +1\n> A larger record size can increase the compression ratio, and so reduce the IO.\n>\n> Did you set atime off for ZFS?\n> [...]\n\nYes.\n\nNon-default stuff...\n\ndbms/ticker-9.5  compressratio   1.88x             -\ndbms/ticker-9.5  mounted         yes               -\ndbms/ticker-9.5  quota           none              default\ndbms/ticker-9.5  reservation     none              default\ndbms/ticker-9.5  recordsize      128K              default\ndbms/ticker-9.5  mountpoint      /dbms/ticker-9.5  local\ndbms/ticker-9.5  sharenfs        off               default\ndbms/ticker-9.5  checksum        on                default\ndbms/ticker-9.5  compression     lz4               inherited from dbms\ndbms/ticker-9.5  atime           off               inherited from dbms\ndbms/ticker-9.5  logbias         throughput        inherited from dbms\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/", "msg_date": "Wed, 28 Sep 2016 06:44:36 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" },
{ "msg_contents": "Thanks Jov and Karl!\n\nWhat do you think about:\n\nprimarycache=all\n\nfor SELECT queries over the same data sets?\n\n> Yes.\n>\n> Non-default stuff...\n> [...]\n>\n> --\n> Karl Denninger\n> [email protected] <mailto:[email protected]>\n> /The Market Ticker/\n> /[S/MIME encrypted email preferred]/", "msg_date": "Wed, 28 Sep 2016 11:42:13 -0700", "msg_from": "trafdev <[email protected]>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" },
{ "msg_contents": "On 9/28/2016 13:42, trafdev wrote:\n> Thanks Jov and Karl!\n>\n> What do you think about:\n>\n> primarycache=all\n>\n> for SELECT queries over the same data sets?\n> [...]\n\nPrimarycache=all is the default; changing it ought to be contemplated\nonly under VERY specific circumstances. In the case of a database, if\nyou turn off \"all\" then an 8kb data read with a 128kb blocksize will\nresult in reading 128kb (the block size), returning the requested piece\nout of the 128kb and then /throwing away/ the rest of the data read,\nsince you prohibited it from going into the ARC. That's almost certainly\ngoing to do bad things for throughput!\n\nNote that having an L2ARC, which is the place where you might find\nsetting primarycache to have a benefit, is itself something you need to\ninstrument under your specific workload to see if it's worth it. If you\nwant to know if it *might* be worth it you can use (on FreeBSD)\nzfs-stats -E; if you're seeing materially more than 15% cache misses\nthen it *might* help, assuming what you put it on is *very* fast (e.g. SSD).\n\nIn addition, if you're on FreeBSD (and you say you are), be aware that the\nvm system and ZFS interact in some \"interesting\" ways under certain load\nprofiles. UMA is involved to a material degree in the issue. I have\ndone quite a bit of work on the internal ZFS code in this regard; 11.x\nis better-behaved than 10.x to a quite-material degree. I have a patch\nset out against both 10.x and 11.x that addresses some (but not all) of\nthe issues.\n\n-- \nKarl Denninger\[email protected] <mailto:[email protected]>\n/The Market Ticker/\n/[S/MIME encrypted email preferred]/", "msg_date": "Wed, 28 Sep 2016 14:06:35 -0500", "msg_from": "Karl Denninger <[email protected]>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on ZFS: performance tuning" } ]
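For reference, the non-default properties Karl lists can be applied to the dataset named at the start of this thread roughly as follows. This is a sketch only: the dataset name comes from trafdev's message, whether 128k actually beats 8k must be benchmarked per workload, and recordsize affects only files written after the change.

    # Sketch: apply the properties discussed above to trafdev's dataset.
    zfs set recordsize=128k    zroot/ara/sqldb/pgsql
    zfs set compression=lz4    zroot/ara/sqldb/pgsql
    zfs set atime=off          zroot/ara/sqldb/pgsql
    zfs set logbias=throughput zroot/ara/sqldb/pgsql

    # Verify the result, then watch ARC efficiency under load (FreeBSD):
    zfs get recordsize,compression,atime,logbias,primarycache zroot/ara/sqldb/pgsql
    zfs-stats -E

Because recordsize applies only to newly written files, existing table files keep their 8k blocks until they are rewritten (for example by a dump/restore or VACUUM FULL).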
[ { "msg_contents": "I have this schema:\n \nCREATE TABLE onp_crm_person( id serial PRIMARY KEY, onp_user_id bigint \nreferencesonp_user(id) deferrable initially deferred, is_resource boolean not \nnull default false, UNIQUE(onp_user_id) ); CREATE TABLE onp_crm_activity_log( id\nbigserial PRIMARY KEY, relation_id integer REFERENCES \nonp_crm_relation(entity_id), logged_forint references \nonp_crm_person(onp_user_id), durationbigint ); CREATE TABLE onp_crm_invoice( \nentity_idbigint PRIMARY KEY REFERENCES onp_crm_entity(entity_id), status_key \nVARCHAR NOT NULL, credit_against bigint REFERENCES onp_crm_invoice(entity_id), \nsent_dateDATE, UNIQUE(credit_against) deferrable INITIALLY DEFERRED -- \ninvoice_print_template_id is added after creation of \norigo_invoice_print_template); CREATE TABLE onp_crm_invoice_line ( id SERIAL \nPRIMARY KEY, invoice_id INTEGER NOT NULL REFERENCES onp_crm_invoice (entity_id) \n);CREATE TABLE onp_crm_calendarentry_invoice_membership( invoice_line_id \nINTEGER NOT NULL REFERENCESonp_crm_invoice_line(id) ON DELETE CASCADE, \ncalendar_entry_idINTEGER NOT NULL REFERENCES onp_crm_activity_log(id), unique\n(invoice_line_id, calendar_entry_id)DEFERRABLE INITIALLY DEFERRED ); \n \nThis query performs terribly slow ( ~26 minutes, 1561346.597ms):\n \nexplain analyze SELECT log.relation_id as company_id , sum(log.duration) AS \ndurationFROM onp_crm_activity_log log JOIN onp_crm_person logfor ON \nlogfor.onp_user_id =log.logged_for AND logfor.is_resource = FALSE WHERE 1 = 1 \n-- Filter out already invoiced before 2016-06-27 AND NOT EXISTS( SELECT * FROM \nonp_crm_calendarentry_invoice_membership cemJOIN onp_crm_invoice_line il ON \ncem.invoice_line_id = il.idJOIN onp_crm_invoice inv ON il.invoice_id = \ninv.entity_idWHERE cem.calendar_entry_id = log.id AND inv.status_key = \n'INVOICE_STATUS_INVOICED' AND inv.sent_date <= '2016-06-27' AND NOT EXISTS( \nSELECT* FROM onp_crm_invoice creditnote WHERE il.invoice_id = \ncreditnote.credit_againstAND creditnote.status_key = 'INVOICE_STATUS_INVOICED' \nANDcreditnote.sent_date <= '2016-06-27' ) ) GROUP BY log.relation_id ; \n \nExplain output:\n                                                                              \n                                       QUERY PLAN \n                                                                                                                      \n \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=13778.63..13796.39 rows=1421 width=12) (actual \ntime=1561343.861..1561344.042 rows=724 loops=1)\n   Group Key: log.relation_id\n   ->  Nested Loop Anti Join  (cost=741.35..13768.63 rows=2000 width=12) \n(actual time=471.973..1561221.929 rows=96095 loops=1)\n         Join Filter: (cem.calendar_entry_id = log.id)\n         Rows Removed by Join Filter: 11895758618\n         ->  Hash Join  (cost=86.56..9729.03 rows=2000 width=20) (actual \ntime=0.170..668.911 rows=181644 loops=1)\n               Hash Cond: (log.logged_for = logfor.onp_user_id)\n               ->  Seq Scan on onp_crm_activity_log log  (cost=0.00..8930.98 \nrows=184398 width=24) (actual time=0.007..538.893 rows=182378 loops=1)\n               ->  Hash  (cost=39.46..39.46 rows=3768 width=8) (actual \ntime=0.126..0.126 rows=36 loops=1)\n                     Buckets: 4096  Batches: 1  Memory Usage: 34kB\n                
     ->  Bitmap Heap Scan on onp_crm_person logfor \n (cost=3.69..39.46 rows=3768 width=8) (actual time=0.040..0.106 rows=36 loops=1)\n                           Recheck Cond: (onp_user_id IS NOT NULL)\n                           Filter: (NOT is_resource)\n                           Rows Removed by Filter: 5\n                           Heap Blocks: exact=10\n                           ->  Bitmap Index Scan on onp_crm_person_onp_id_idx \n (cost=0.00..2.75 rows=41 width=0) (actual time=0.019..0.019 rows=41 loops=1)\n         ->  Materialize  (cost=654.79..4009.60 rows=1 width=4) (actual \ntime=0.000..2.829 rows=65490 loops=181644)\n               ->  Nested Loop  (cost=654.79..4009.59 rows=1 width=4) (actual \ntime=9.056..386.835 rows=85668 loops=1)\n                     ->  Nested Loop  (cost=654.50..4009.27 rows=1 width=8) \n(actual time=9.046..165.280 rows=88151 loops=1)\n                           ->  Hash Anti Join  (cost=654.21..4008.72 rows=1 \nwidth=8) (actual time=9.016..40.672 rows=76174 loops=1)\n                                 Hash Cond: (il.invoice_id = \ncreditnote.credit_against)\n                                 ->  Seq Scan on onp_crm_invoice_line il \n (cost=0.00..3062.01 rows=78001 width=8) (actual time=0.005..11.259 rows=78614 \nloops=1)\n                                 ->  Hash  (cost=510.56..510.56 rows=11492 \nwidth=8) (actual time=8.940..8.940 rows=372 loops=1)\n                                       Buckets: 16384  Batches: 1  Memory \nUsage: 143kB\n                                       ->  Seq Scan on onp_crm_invoice \ncreditnote  (cost=0.00..510.56 rows=11492 width=8) (actual time=0.014..7.882 \nrows=11507 loops=1)\n                                             Filter: ((sent_date <= \n'2016-06-27'::date) AND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n                                             Rows Removed by Filter: 149\n                           ->  Index Only Scan using \nonp_crm_calendarentry_invoice_invoice_line_id_calendar_entr_key on \nonp_crm_calendarentry_invoice_membership cem  (cost=0.29..0.45 rows=9 width=8) \n(actual time=0.001..0.001 rows=1 loops=76174)\n                                 Index Cond: (invoice_line_id = il.id)\n                                 Heap Fetches: 4371\n                     ->  Index Scan using onp_crm_invoice_pkey on \nonp_crm_invoice inv  (cost=0.29..0.31 rows=1 width=8) (actual time=0.002..0.002 \nrows=1 loops=88151)\n                           Index Cond: (entity_id = il.invoice_id)\n                           Filter: ((sent_date <= '2016-06-27'::date) AND \n((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n                           Rows Removed by Filter: 0\n Planning time: 3.307 ms\n Execution time: 1561346.597 ms\n (36 rows)\n\n  \n \nHere is with set enable_nestloop to false;\n                                                                              \n       QUERY PLAN \n                                                                                      \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=15936.07..15953.83 rows=1421 width=12) (actual \ntime=337.878..338.045 rows=724 loops=1)\n   Group Key: log.relation_id\n   ->  Hash Anti Join  (cost=6258.35..15926.07 rows=2000 width=12) (actual \ntime=139.976..317.402 rows=96097 loops=1)\n         Hash Cond: (log.id = cem.calendar_entry_id)\n         ->  Hash Join  
(cost=86.56..9729.03 rows=2000 width=20) (actual \ntime=0.164..132.935 rows=181646 loops=1)\n               Hash Cond: (log.logged_for = logfor.onp_user_id)\n               ->  Seq Scan on onp_crm_activity_log log  (cost=0.00..8930.98 \nrows=184398 width=24) (actual time=0.006..89.170 rows=182380 loops=1)\n               ->  Hash  (cost=39.46..39.46 rows=3768 width=8) (actual \ntime=0.118..0.118 rows=36 loops=1)\n                     Buckets: 4096  Batches: 1  Memory Usage: 34kB\n                     ->  Bitmap Heap Scan on onp_crm_person logfor \n (cost=3.69..39.46 rows=3768 width=8) (actual time=0.037..0.101 rows=36 loops=1)\n                           Recheck Cond: (onp_user_id IS NOT NULL)\n                           Filter: (NOT is_resource)\n                           Rows Removed by Filter: 5\n                           Heap Blocks: exact=10\n                           ->  Bitmap Index Scan on onp_crm_person_onp_id_idx \n (cost=0.00..2.75 rows=41 width=0) (actual time=0.017..0.017 rows=41 loops=1)\n         ->  Hash  (cost=6171.78..6171.78 rows=1 width=4) (actual \ntime=139.779..139.779 rows=85668 loops=1)\n               Buckets: 131072 (originally 1024)  Batches: 1 (originally 1) \n Memory Usage: 4036kB\n               ->  Hash Join  (cost=4562.41..6171.78 rows=1 width=4) (actual \ntime=92.471..125.417 rows=85668 loops=1)\n                     Hash Cond: (cem.invoice_line_id = il.id)\n                     ->  Seq Scan on onp_crm_calendarentry_invoice_membership \ncem  (cost=0.00..1278.44 rows=88244 width=8) (actual time=0.005..7.941 \nrows=88734 loops=1)\n                     ->  Hash  (cost=4562.40..4562.40 rows=1 width=4) (actual \ntime=92.444..92.444 rows=75570 loops=1)\n                           Buckets: 131072 (originally 1024)  Batches: 1 \n(originally 1)  Memory Usage: 3681kB\n                           ->  Hash Join  (cost=4008.74..4562.40 rows=1 \nwidth=4) (actual time=61.797..79.981 rows=75570 loops=1)\n                                 Hash Cond: (inv.entity_id = il.invoice_id)\n                                 ->  Seq Scan on onp_crm_invoice inv \n (cost=0.00..510.56 rows=11492 width=8) (actual time=0.027..4.489 rows=11507 \nloops=1)\n                                       Filter: ((sent_date <= \n'2016-06-27'::date) AND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n                                       Rows Removed by Filter: 151\n                                 ->  Hash  (cost=4008.72..4008.72 rows=1 \nwidth=8) (actual time=61.751..61.751 rows=76182 loops=1)\n                                       Buckets: 131072 (originally 1024) \n Batches: 1 (originally 1)  Memory Usage: 4000kB\n                                       ->  Hash Anti Join \n (cost=654.21..4008.72 rows=1 width=8) (actual time=12.568..47.911 rows=76182 \nloops=1)\n                                             Hash Cond: (il.invoice_id = \ncreditnote.credit_against)\n                                             ->  Seq Scan on \nonp_crm_invoice_line il  (cost=0.00..3062.01 rows=78001 width=8) (actual \ntime=0.008..12.476 rows=78622 loops=1)\n                                             ->  Hash  (cost=510.56..510.56 \nrows=11492 width=8) (actual time=12.477..12.477 rows=372 loops=1)\n                                                   Buckets: 16384  Batches: 1 \n Memory Usage: 143kB\n                                                   ->  Seq Scan on \nonp_crm_invoice creditnote  (cost=0.00..510.56 rows=11492 width=8) (actual \ntime=0.008..10.963 rows=11507 loops=1)\n                          
                               Filter: ((sent_date \n<= '2016-06-27'::date) AND ((status_key)::text = \n'INVOICE_STATUS_INVOICED'::text))\n                                                         Rows Removed by \nFilter: 151\n Planning time: 3.510 ms\n Execution time: 338.349 ms\n (39 rows)\n\n So my question is is there something I can do to make PG favor a Hash Anti \nJoin instead of a Nested Loop Anti Join (which I assume is the problem)?\nCan the nested NOT EXISTS be re-written to be more performant?\n \nThanks.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Mon, 1 Aug 2016 15:33:04 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Very poor performance with Nested Loop Anti Join" }, { "msg_contents": "På mandag 01. august 2016 kl. 15:33:04, skrev Andreas Joseph Krogh <\[email protected] <mailto:[email protected]>>:\nI have this schema:\n \nCREATE TABLE onp_crm_person( id serial PRIMARY KEY, onp_user_id bigint \nreferencesonp_user(id) deferrable initially deferred, is_resource boolean not \nnull default false, UNIQUE(onp_user_id) ); CREATE TABLE onp_crm_activity_log( id\nbigserial PRIMARY KEY, relation_id integer REFERENCES \nonp_crm_relation(entity_id), logged_forint references \nonp_crm_person(onp_user_id), durationbigint ); CREATE TABLE onp_crm_invoice( \nentity_idbigint PRIMARY KEY REFERENCES onp_crm_entity(entity_id), status_key \nVARCHAR NOT NULL, credit_against bigint REFERENCES onp_crm_invoice(entity_id), \nsent_dateDATE, UNIQUE(credit_against) deferrable INITIALLY DEFERRED -- \ninvoice_print_template_id is added after creation of \norigo_invoice_print_template); CREATE TABLE onp_crm_invoice_line ( id SERIAL \nPRIMARY KEY, invoice_id INTEGER NOT NULL REFERENCES onp_crm_invoice (entity_id) \n);CREATE TABLE onp_crm_calendarentry_invoice_membership( invoice_line_id \nINTEGER NOT NULL REFERENCESonp_crm_invoice_line(id) ON DELETE CASCADE, \ncalendar_entry_idINTEGER NOT NULL REFERENCES onp_crm_activity_log(id), unique\n(invoice_line_id, calendar_entry_id)DEFERRABLE INITIALLY DEFERRED ); \n \nThis query performs terribly slow ( ~26 minutes, 1561346.597ms):\n \nexplain analyze SELECT log.relation_id as company_id , sum(log.duration) AS \ndurationFROM onp_crm_activity_log log JOIN onp_crm_person logfor ON \nlogfor.onp_user_id =log.logged_for AND logfor.is_resource = FALSE WHERE 1 = 1 \n-- Filter out already invoiced before 2016-06-27 AND NOT EXISTS( SELECT * FROM \nonp_crm_calendarentry_invoice_membership cemJOIN onp_crm_invoice_line il ON \ncem.invoice_line_id = il.idJOIN onp_crm_invoice inv ON il.invoice_id = \ninv.entity_idWHERE cem.calendar_entry_id = log.id AND inv.status_key = \n'INVOICE_STATUS_INVOICED' AND inv.sent_date <= '2016-06-27' AND NOT EXISTS( \nSELECT* FROM onp_crm_invoice creditnote WHERE il.invoice_id = \ncreditnote.credit_againstAND creditnote.status_key = 'INVOICE_STATUS_INVOICED' \nANDcreditnote.sent_date <= '2016-06-27' ) ) GROUP BY log.relation_id ; \n \nExplain output:\n                                                                              \n                                       QUERY PLAN \n                                                                                                                      \n 
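One possible answer to the rewrite question above -- a sketch, not something tested against this schema: on 9.5 a CTE is an optimization fence, so materializing the already-invoiced calendar entries once lets the outer NOT EXISTS be planned as a single hashed anti join, roughly the shape the enable_nestloop=off plan produced.

    -- Sketch: precompute the invoiced calendar entries in a CTE (an
    -- optimization fence on 9.5), then anti-join the log against that set.
    WITH invoiced AS (
        SELECT cem.calendar_entry_id
        FROM onp_crm_calendarentry_invoice_membership cem
        JOIN onp_crm_invoice_line il ON cem.invoice_line_id = il.id
        JOIN onp_crm_invoice inv    ON il.invoice_id = inv.entity_id
        WHERE inv.status_key = 'INVOICE_STATUS_INVOICED'
          AND inv.sent_date <= '2016-06-27'
          AND NOT EXISTS (
              SELECT 1 FROM onp_crm_invoice creditnote
              WHERE creditnote.credit_against = il.invoice_id
                AND creditnote.status_key = 'INVOICE_STATUS_INVOICED'
                AND creditnote.sent_date <= '2016-06-27')
    )
    SELECT log.relation_id AS company_id, sum(log.duration) AS duration
    FROM onp_crm_activity_log log
    JOIN onp_crm_person logfor
      ON logfor.onp_user_id = log.logged_for AND logfor.is_resource = FALSE
    WHERE NOT EXISTS (SELECT 1 FROM invoiced i WHERE i.calendar_entry_id = log.id)
    GROUP BY log.relation_id;

Semantically this matches the original query; whether the planner then favors the hash anti join still depends on its row estimate for the CTE, so it needs to be verified with EXPLAIN ANALYZE.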
{ "msg_contents": "On Monday 01 August 2016 at 15:33:04, Andreas Joseph Krogh <[email protected]> wrote:\n> I have this schema:\n> [...]\n
\nIf I leave out the JOIN with onp_crm_person, the query-plan is also much more\nefficient:\n
\nexplain analyze\n
SELECT log.relation_id as company_id\n
     , sum(log.duration) AS duration\n
FROM onp_crm_activity_log log\n
WHERE 1 = 1\n
  -- Filter out already invoiced\n
  AND NOT EXISTS(\n
      SELECT * FROM onp_crm_calendarentry_invoice_membership cem\n
          JOIN onp_crm_invoice_line il ON cem.invoice_line_id = il.id\n
          JOIN onp_crm_invoice inv ON il.invoice_id = inv.entity_id\n
      WHERE cem.calendar_entry_id = log.id\n
        AND inv.status_key = 'INVOICE_STATUS_INVOICED'\n
        AND inv.sent_date <= '2016-06-27'\n
        AND NOT EXISTS(\n
            SELECT * FROM onp_crm_invoice creditnote\n
            WHERE il.invoice_id = creditnote.credit_against\n
              AND creditnote.status_key = 'INVOICE_STATUS_INVOICED'\n
              AND creditnote.sent_date <= '2016-06-27'\n
        )\n
  )\n
GROUP BY log.relation_id;\n
\nQUERY PLAN\n
----------------------------------------------------------------------\n
 HashAggregate  (cost=16192.30..16210.07 rows=1421 width=12) (actual time=562.978..563.144 rows=724 loops=1)\n
   Group Key: log.relation_id\n
   ->  Hash Anti Join  (cost=4009.60..15270.19 rows=184423 width=12) (actual time=422.215..542.871 rows=96712 loops=1)\n
         Hash Cond: (log.id = cem.calendar_entry_id)\n
         ->  Seq Scan on onp_crm_activity_log log  (cost=0.00..8932.24 rows=184424 width=20) (actual time=0.007..83.262 rows=182380 loops=1)\n
         ->  Hash  (cost=4009.59..4009.59 rows=1 width=4) (actual time=422.178..422.178 rows=85668 loops=1)\n
               Buckets: 131072 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 4036kB\n
               ->  Nested Loop  (cost=654.79..4009.59 rows=1 width=4) (actual time=12.210..397.383 rows=85668 loops=1)\n
                     ->  Nested Loop  (cost=654.50..4009.27 rows=1 width=8) (actual time=12.189..173.380 rows=88201 loops=1)\n
                           ->  Hash Anti Join  (cost=654.21..4008.72 rows=1 width=8) (actual time=12.099..47.757 rows=76197 loops=1)\n
                                 Hash Cond: (il.invoice_id = creditnote.credit_against)\n
                                 ->  Seq Scan on onp_crm_invoice_line il  (cost=0.00..3062.01 rows=78001 width=8) (actual time=0.009..13.797 rows=78637 loops=1)\n
                                 ->  Hash  (cost=510.56..510.56 rows=11492 width=8) (actual time=12.012..12.012 rows=372 loops=1)\n
                                       Buckets: 16384  Batches: 1  Memory Usage: 143kB\n
                                       ->  Seq Scan on onp_crm_invoice creditnote  (cost=0.00..510.56 rows=11492 width=8) (actual time=0.026..10.564 rows=11507 loops=1)\n
                                             Filter: ((sent_date <= '2016-06-27'::date) AND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n
                                             Rows Removed by Filter: 155\n
                           ->  Index Only Scan using onp_crm_calendarentry_invoice_invoice_line_id_calendar_entr_key on onp_crm_calendarentry_invoice_membership cem  (cost=0.29..0.45 rows=9 width=8) (actual time=0.001..0.001 rows=1 loops=76197)\n
                                 Index Cond: (invoice_line_id = il.id)\n
                                 Heap Fetches: 4421\n
                     ->  Index Scan using onp_crm_invoice_pkey on onp_crm_invoice inv  (cost=0.29..0.31 rows=1 width=8) (actual time=0.002..0.002 rows=1 loops=88201)\n
                           Index Cond: (entity_id = il.invoice_id)\n
                           Filter: ((sent_date <= '2016-06-27'::date) AND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n
                           Rows Removed by Filter: 0\n
 Planning time: 2.763 ms\n
 Execution time: 563.403 ms\n
(26 rows)\n
\nBut of course I cannot do that... It's just for information...\n
\n-- \nAndreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>", "msg_date": "Mon, 1 Aug 2016 15:40:48 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance with Nested Loop Anti Join" },
{ "msg_contents": "Andreas Joseph Krogh <[email protected]> writes:\n> This query performs terribly slowly (~26 minutes, 1561346.597 ms):\n
\nSeems like the key misestimation is on the inner antijoin:\n
\n> ->  Hash Anti Join  (cost=654.21..4008.72 rows=1 width=8) (actual time=9.016..40.672 rows=76174 loops=1)\n
>       Hash Cond: (il.invoice_id = creditnote.credit_against)\n
>       ->  Seq Scan on onp_crm_invoice_line il  (cost=0.00..3062.01 rows=78001 width=8) (actual time=0.005..11.259 rows=78614 loops=1)\n
>       ->  Hash  (cost=510.56..510.56 rows=11492 width=8) (actual time=8.940..8.940 rows=372 loops=1)\n
>             Buckets: 16384  Batches: 1  Memory Usage: 143kB\n
>             ->  Seq Scan on onp_crm_invoice creditnote  (cost=0.00..510.56 rows=11492 width=8) (actual time=0.014..7.882 rows=11507 loops=1)\n
>                   Filter: ((sent_date <= '2016-06-27'::date) AND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n
>                   Rows Removed by Filter: 149\n
\nIf it realized that this produces 78k rows not 1, it'd likely do something\nsmarter at the outer antijoin.\n
\nI have no idea why that estimate's so far off though.  What PG version is\nthis?  Stats all up to date on these two tables?  Are the rows excluded\nby the filter condition on \"creditnote\" significantly different from the\nrest of that table?\n
\n\t\t\tregards, tom lane\n
\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 01 Aug 2016 19:15:05 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Very poor performance with Nested Loop Anti Join" },
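One way to chase the estimate Tom is questioning -- sketched here rather than taken from the thread: raise the statistics target on the join column, re-analyze, and re-run just the inner anti join to see whether the rows=1 estimate moves.

    -- Sketch: gather more detailed stats on the anti-join column, then
    -- re-test the inner anti join in isolation.
    ALTER TABLE onp_crm_invoice ALTER COLUMN credit_against SET STATISTICS 1000;
    ANALYZE onp_crm_invoice;

    EXPLAIN ANALYZE
    SELECT il.invoice_id
    FROM onp_crm_invoice_line il
    WHERE NOT EXISTS (
        SELECT 1 FROM onp_crm_invoice creditnote
        WHERE creditnote.credit_against = il.invoice_id
    );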
01:15:05, skrev Tom Lane <[email protected] \n<mailto:[email protected]>>:\nAndreas Joseph Krogh <[email protected]> writes:\n > This query performs terribly slow (~26 minutes,��1561346.597ms):\n\n Seems like the key misestimation is on the inner antijoin:\n\n >                ->  Hash Anti Join  (cost=654.21..4008.72 rows=1 width=8) \n(actual time=9.016..40.672 rows=76174 loops=1)\n >                      Hash Cond: (il.invoice_id = creditnote.credit_against)\n >                      ->  Seq Scan on onp_crm_invoice_line il  \n(cost=0.00..3062.01 rows=78001 width=8) (actual time=0.005..11.259 rows=78614 \nloops=1)\n >                      ->  Hash  (cost=510.56..510.56 rows=11492 width=8) \n(actual time=8.940..8.940 rows=372 loops=1)\n >                            Buckets: 16384  Batches: 1  Memory Usage: 143kB\n >                            ->  Seq Scan on onp_crm_invoice creditnote  \n(cost=0.00..510.56 rows=11492 width=8) (actual time=0.014..7.882 rows=11507 \nloops=1)\n >                                  Filter: ((sent_date <= '2016-06-27'::date) \nAND ((status_key)::text = 'INVOICE_STATUS_INVOICED'::text))\n >                                  Rows Removed by Filter: 149\n\n If it realized that this produces 78k rows not 1, it'd likely do something\n smarter at the outer antijoin.\n\n I have no idea why that estimate's so far off though.  What PG version is\n this?  Stats all up to date on these two tables?\n \nSorry for not providing PG-version, this is on 9.5.3.\nAll stats are up to date, or should be a I've analyzed all manually.\n\n \nAre the rows excluded\n by the filter condition on \"creditnote\" significantly different from the\n rest of that table?\n \nThis happens also without the filter-cond:\n \nexplain analyze SELECT log.relation_id as company_id , sum(log.duration) AS \ndurationFROM onp_crm_activity_log log JOIN onp_crm_person logfor ON \nlogfor.onp_user_id =log.logged_for AND logfor.is_resource = FALSE WHERE 1 = 1 \n-- Filter out already invoiced AND NOT EXISTS( SELECT * FROM \nonp_crm_calendarentry_invoice_membership cemJOIN onp_crm_invoice_line il ON \ncem.invoice_line_id = il.idJOIN onp_crm_invoice inv ON il.invoice_id = \ninv.entity_idWHERE cem.calendar_entry_id = log.id AND NOT EXISTS( SELECT * FROM \nonp_crm_invoice creditnoteWHERE il.invoice_id = creditnote.credit_against ) ) \nGROUP BY log.relation_id ; \n \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=12049.35..12067.11 rows=1421 width=12) (actual time\n=1386683.646..1386683.858rows=720 loops=1) Group Key: log.relation_id -> Nested \nLoopAnti Join (cost=512.08..12039.32 rows=2006 width=12) (actual time\n=395.017..1386576.756rows=93480 loops=1) Join Filter: (cem.calendar_entry_id = \nlog.id) Rows Removed by Join Filter: 12185913244 -> Hash Join (cost\n=86.56..9757.61rows=2006 width=20) (actual time=0.165..366.778 rows=181872 \nloops=1) Hash Cond: (log.logged_for = logfor.onp_user_id) -> Seq Scan on \nonp_crm_activity_loglog (cost=0.00..8957.45 rows=184945 width=24) (actual time\n=0.003..256.862rows=182606 loops=1) -> Hash (cost=39.46..39.46 rows=3768 width=8\n) (actualtime=0.132..0.132 rows=36 loops=1) Buckets: 4096 Batches: 1 Memory \nUsage: 34kB -> Bitmap Heap Scan on onp_crm_person logfor (cost=3.69..39.46 rows=\n3768width=8) (actual time=0.033..0.125 rows=36 loops=1) Recheck 
Cond: (onp_user_id IS NOT NULL)\n
                     Filter: (NOT is_resource)\n
                     Rows Removed by Filter: 5\n
                     Heap Blocks: exact=10\n
                     ->  Bitmap Index Scan on onp_crm_person_onp_id_idx  (cost=0.00..2.75 rows=41 width=0) (actual time=0.017..0.017 rows=41 loops=1)\n
   ->  Materialize  (cost=425.53..2251.62 rows=1 width=4) (actual time=0.000..2.544 rows=67003 loops=181872)\n
         ->  Nested Loop  (cost=425.53..2251.61 rows=1 width=4) (actual time=3.283..320.057 rows=88511 loops=1)\n
               ->  Nested Loop  (cost=425.24..2251.30 rows=1 width=8) (actual time=3.241..154.783 rows=88511 loops=1)\n
                     ->  Hash Anti Join  (cost=424.95..2250.75 rows=1 width=8) (actual time=3.110..30.097 rows=76281 loops=1)\n
                           Hash Cond: (il.invoice_id = creditnote.credit_against)\n
                           ->  Index Only Scan using origo_invoice_line_id_invoice_idx on onp_crm_invoice_line il  (cost=0.29..1530.95 rows=78707 width=8) (actual time=0.030..13.719 rows=78740 loops=1)\n
                                 Heap Fetches: 2967\n
                           ->  Hash  (cost=278.22..278.22 rows=11715 width=8) (actual time=3.003..3.003 rows=376 loops=1)\n
                                 Buckets: 16384  Batches: 1  Memory Usage: 143kB\n
                                 ->  Index Only Scan using origo_invoice_credit_against_idx on onp_crm_invoice creditnote  (cost=0.29..278.22 rows=11715 width=8) (actual time=0.042..2.082 rows=11692 loops=1)\n
                                       Heap Fetches: 1151\n
                     ->  Index Only Scan using onp_crm_calendarentry_invoice_invoice_line_id_calendar_entr_key on onp_crm_calendarentry_invoice_membership cem  (cost=0.29..0.45 rows=9 width=8) (actual time=0.001..0.001 rows=1 loops=76281)\n
                           Index Cond: (invoice_line_id = il.id)\n
                           Heap Fetches: 4753\n
               ->  Index Only Scan using onp_crm_invoice_pkey on onp_crm_invoice inv  (cost=0.29..0.30 rows=1 width=8) (actual time=0.001..0.001 rows=1 loops=88511)\n
                     Index Cond: (entity_id = il.invoice_id)\n
                     Heap Fetches: 12084\n
 Planning time: 5.824 ms\n
 Execution time: 1386686.664 ms\n
(35 rows)\n
\n
With set enable_nestloop to off;\n
\n
                                                                               QUERY PLAN\n
----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n
 HashAggregate  (cost=13975.70..13993.46 rows=1421 width=12) (actual time=338.185..338.341 rows=720 loops=1)\n
   Group Key: log.relation_id\n
   ->  Hash Anti Join  (cost=4265.19..13965.66 rows=2007 width=12) (actual time=147.696..318.314 rows=93511 loops=1)\n
         Hash Cond: (log.id = cem.calendar_entry_id)\n
         ->  Hash Join  (cost=86.56..9761.69 rows=2007 width=20) (actual time=0.166..127.604 rows=181915 loops=1)\n
               Hash Cond: (log.logged_for = logfor.onp_user_id)\n
               ->  Seq Scan on onp_crm_activity_log log  (cost=0.00..8961.23 rows=185023 width=24) (actual time=0.006..84.093 rows=182649 loops=1)\n
               ->  Hash  (cost=39.46..39.46 rows=3768 width=8) (actual time=0.123..0.123 rows=36 loops=1)\n
                     Buckets: 4096  Batches: 1  Memory Usage: 34kB\n
                     ->  Bitmap Heap Scan on onp_crm_person logfor  (cost=3.69..39.46 rows=3768 width=8) (actual time=0.038..0.102 rows=36 loops=1)\n
                           Recheck Cond: (onp_user_id IS NOT NULL)\n
                           Filter: (NOT is_resource)\n
                           Rows Removed by Filter: 5\n
                           Heap Blocks: exact=10\n
                           ->  Bitmap Index Scan on onp_crm_person_onp_id_idx  (cost=0.00..2.75 rows=41 width=0) (actual time=0.019..0.019 rows=41 loops=1)\n
         ->  Hash  (cost=4178.62..4178.62 rows=1 width=4) (actual time=147.497..147.497 rows=88523 loops=1)\n
               Buckets: 131072 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 4137kB\n
               ->  Hash Join  (cost=2553.20..4178.62 rows=1 width=4) (actual time=98.512..133.017 rows=88523 loops=1)\n
                     Hash Cond: (cem.invoice_line_id = il.id)\n
                     ->  Seq Scan on onp_crm_calendarentry_invoice_membership cem  (cost=0.00..1290.66 rows=89266 width=8) (actual time=0.006..11.151 rows=89175 loops=1)\n
                     ->  Hash  (cost=2553.19..2553.19 rows=1 width=4) (actual time=98.481..98.481 rows=76286 loops=1)\n
                           Buckets: 131072 (originally 1024)  Batches: 1 (originally 1)  Memory Usage: 3706kB\n
                           ->  Merge Join  (cost=2252.01..2553.19 rows=1 width=4) (actual time=50.922..87.641 rows=76286 loops=1)\n
                                 Merge Cond: (il.invoice_id = inv.entity_id)\n
                                 ->  Sort  (cost=2251.73..2251.73 rows=1 width=8) (actual time=50.872..55.552 rows=76286 loops=1)\n
                                       Sort Key: il.invoice_id\n
                                       Sort Method: quicksort  Memory: 6648kB\n
                                       ->  Hash Anti Join  (cost=425.91..2251.72 rows=1 width=8) (actual time=5.904..35.979 rows=76286 loops=1)\n
                                             Hash Cond: (il.invoice_id = creditnote.credit_against)\n
                                             ->  Index Only Scan using origo_invoice_line_id_invoice_idx on onp_crm_invoice_line il  (cost=0.29..1530.95 rows=78707 width=8) (actual time=0.028..16.124 rows=78745 loops=1)\n
                                                   Heap Fetches: 2972\n
                                             ->  Hash  (cost=278.74..278.74 rows=11750 width=8) (actual time=5.792..5.792 rows=376 loops=1)\n
                                                   Buckets: 16384  Batches: 1  Memory Usage: 143kB\n
                                                   ->  Index Only Scan using origo_invoice_credit_against_idx on onp_crm_invoice creditnote  (cost=0.29..278.74 rows=11750 width=8) (actual time=0.067..4.466 rows=11694 loops=1)\n
                                                         Heap Fetches: 1155\n
                                 ->  Index Only Scan using onp_crm_invoice_pkey on onp_crm_invoice inv  (cost=0.29..272.09 rows=11750 width=8) (actual time=0.040..10.755 rows=76661 loops=1)\n
                                       Heap Fetches: 3840\n
 Planning time: 3.762 ms\n
 Execution time: 339.634 ms\n
(39 rows)\n
\n
-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected]\nwww.visena.com", "msg_date": "Tue, 2 Aug 2016 13:30:13 +0200 (CEST)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Very poor performance with Nested Loop Anti Join" } ]
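A minimal sketch of the per-transaction planner toggle exercised in the plans above. enable_nestloop is a stock planner GUC; the thread shows only the plans, not the SQL, so the SELECT below is purely illustrative, borrowing a table name from the plans:

    BEGIN;
    SET LOCAL enable_nestloop = off;   -- discourage nested-loop joins for this transaction only
    SELECT log.relation_id, count(*)   -- stand-in for the real report query
    FROM onp_crm_activity_log log
    GROUP BY log.relation_id;
    COMMIT;                            -- SET LOCAL reverts automatically at commit/rollback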
[ { "msg_contents": "Sir/Madam,Plateform: RHEL6.5,  Postgresql9.4.0.\ncreate extension plperl;\nCreate language plperl;\nI have done following settings:\nPerl version 5.10vi /etc/ld.so.conf.d/libperl.conf/usr/lib/5.10/multi-thread/i386.../CORE/libperl.soldconfig\nERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\" undefined symbol Perl_sv_2bool_flags\nERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \n/opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\n\nHow do I solve.\nKindly resolve it.\n\nRegards\nOm Prakash\n\n\n\nSir/Madam,Plateform: RHEL6.5,  Postgresql9.4.0.create extension plperl;Create language plperl;I have done following settings:Perl version 5.10vi /etc/ld.so.conf.d/libperl.conf/usr/lib/5.10/multi-thread/i386.../CORE/libperl.soldconfigERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\" undefined symbol Perl_sv_2bool_flagsERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \n/opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flagsHow do I solve.Kindly resolve it.RegardsOm Prakash", "msg_date": "Tue, 2 Aug 2016 05:47:19 +0000 (UTC)", "msg_from": "Om Prakash Jaiswal <[email protected]>", "msg_from_op": true, "msg_subject": "Create language plperlu Error" }, { "msg_contents": "On 8/1/2016 10:47 PM, Om Prakash Jaiswal wrote:\n> Sir/Madam,\n> Plateform: RHEL6.5, Postgresql9.4.0.\n>\n> create extension plperl;\n> Create language plperl;\n>\n> I have done following settings:\n> Perl version 5.10\n> vi /etc/ld.so.conf.d/libperl.conf\n> /usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\n> ldconfig\n>\n> ERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\" \n> undefined symbol Perl_sv_2bool_flags\n>\n> |ERROR:could notload library \n> \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\":/opt/PostgreSQL/9.2/lib/postgresql/plperl.so:undefinedsymbol:Perl_sv_2bool_flags \n> How do I solve. Kindly resolve it. |\n\n/opt suggests you're running the EnterpriseDB installation of \nPostgresql. I would instead use the RPM distribution from \nhttp://yum.postgresql.org, this integrates just fine with plperl, \nplpython, etc on redhat and centos and other similar platforms.\n\nbtw, RHEL 6.5 is several years behind in security updates, I believe 6.8 \nis the current update, you really should update.\n\nditto, PostgreSQL 9.4.0 is long superseded, current 9.4 version is 9.4.8\n\n\n-- \njohn r pierce, recycling bits in santa cruz\n\n\n\n\n\n\n\nOn 8/1/2016 10:47 PM, Om Prakash\r\n Jaiswal wrote:\n\n\nSir/Madam,\nPlateform:\r\n RHEL6.5,  Postgresql9.4.0.\n\n\ncreate extension\r\n plperl;\n\nCreate language\r\n plperl;\n\n\nI have done\r\n following settings:\n\nPerl version 5.10\nvi\r\n /etc/ld.so.conf.d/libperl.conf\n/usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\nldconfig\n\n\nERROR:\r\n Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\r\n undefined symbol Perl_sv_2bool_flags\n\n\nERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \r\n/opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\r\n\r\nHow do I solve.\r\nKindly resolve it.\r\n\n\n\n/opt suggests you're running the EnterpriseDB installation of\r\n Postgresql.  
I would instead use the RPM distribution from\r\n http://yum.postgresql.org, this integrates just fine with plperl,\r\n plpython, etc on redhat and centos and other similar platforms.\nbtw, RHEL 6.5 is several years behind in security updates, I\r\n believe 6.8 is the current update, you really should update.\nditto, PostgreSQL 9.4.0 is long superseded, current 9.4 version\r\n is 9.4.8\n\n\n-- \r\njohn r pierce, recycling bits in santa cruz", "msg_date": "Mon, 1 Aug 2016 23:05:08 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create language plperlu Error" }, { "msg_contents": "Hi Prakash,\n\nIn addition to John's update,\n\nWhat plperl.so is telling you that one of its symbols is undefined. This\nmay be due to compile options or may means you need a newer version of Perl\nand libperl.so (5.10.1 is quite old). Recent versions of PotgreSQL and\nplperl usually require a newer version of perl than the one you cite.\n\nSo try using perl 5.14 and greater which contains Perl_sv_2bool_flags.\n\n\n\nOn 2 August 2016 at 11:35, John R Pierce <[email protected]> wrote:\n\n> On 8/1/2016 10:47 PM, Om Prakash Jaiswal wrote:\n>\n> Sir/Madam,\n> Plateform: RHEL6.5, Postgresql9.4.0.\n>\n> create extension plperl;\n> Create language plperl;\n>\n> I have done following settings:\n> Perl version 5.10\n> vi /etc/ld.so.conf.d/libperl.conf\n> /usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\n> ldconfig\n>\n> ERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\n> undefined symbol Perl_sv_2bool_flags\n>\n> ERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": /opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\n>\n> How do I solve.\n> Kindly resolve it.\n>\n>\n> /opt suggests you're running the EnterpriseDB installation of Postgresql.\n> I would instead use the RPM distribution from http://yum.postgresql.org,\n> this integrates just fine with plperl, plpython, etc on redhat and centos\n> and other similar platforms.\n>\n> btw, RHEL 6.5 is several years behind in security updates, I believe 6.8\n> is the current update, you really should update.\n>\n> ditto, PostgreSQL 9.4.0 is long superseded, current 9.4 version is 9.4.8\n>\n>\n> --\n> john r pierce, recycling bits in santa cruz\n>\n>\n\n\n-- \n--Regards\n Ranjeet R. Dhumal\n\nHi Prakash,In addition to John's update, What plperl.so is telling you that one of its symbols is undefined. This may be due to compile options or may means you need a newer version of Perl and libperl.so (5.10.1 is quite old). 
Recent versions of PotgreSQL and plperl usually require a newer version of perl than the one you cite.So try using perl 5.14 and greater which contains Perl_sv_2bool_flags.On 2 August 2016 at 11:35, John R Pierce <[email protected]> wrote:\n\nOn 8/1/2016 10:47 PM, Om Prakash\n Jaiswal wrote:\n\n\nSir/Madam,\nPlateform:\n RHEL6.5,  Postgresql9.4.0.\n\n\ncreate extension\n plperl;\n\nCreate language\n plperl;\n\n\nI have done\n following settings:\n\nPerl version 5.10\nvi\n /etc/ld.so.conf.d/libperl.conf\n/usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\nldconfig\n\n\nERROR:\n Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\n undefined symbol Perl_sv_2bool_flags\n\n\nERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \n/opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\n\nHow do I solve.\nKindly resolve it.\n\n\n\n/opt suggests you're running the EnterpriseDB installation of\n Postgresql.  I would instead use the RPM distribution from\n http://yum.postgresql.org, this integrates just fine with plperl,\n plpython, etc on redhat and centos and other similar platforms.\nbtw, RHEL 6.5 is several years behind in security updates, I\n believe 6.8 is the current update, you really should update.\nditto, PostgreSQL 9.4.0 is long superseded, current 9.4 version\n is 9.4.8\n\n\n-- \njohn r pierce, recycling bits in santa cruz\n\n-- --Regards  Ranjeet  R. Dhumal", "msg_date": "Tue, 2 Aug 2016 12:15:39 +0530", "msg_from": "Ranjeet Dhumal <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create language plperlu Error" }, { "msg_contents": "On 8/1/2016 11:45 PM, Ranjeet Dhumal wrote:\n>\n> What plperl.so is telling you that one of its symbols is undefined. \n> This may be due to compile options or may means you need a newer \n> version of Perl and libperl.so (5.10.1 is quite old). Recent versions \n> of PotgreSQL and plperl usually require a newer version of perl than \n> the one you cite.\n>\n> So try using perl 5.14 and greater which contains Perl_sv_2bool_flags.\n\nRHEL 6 has perl 5.10.1, you don't update it. you could install a \nnewer perl in a different location but its pretty unlikely the postgres \nstuff would use it as-is.\n\nthe PGDG rpm distributions are built to use the appropriate system \nperl. I don't know what enterprisedb is building their /opt/postgres \nto use.\n\n\n\n\n-- \njohn r pierce, recycling bits in santa cruz\n\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n", "msg_date": "Tue, 2 Aug 2016 00:21:26 -0700", "msg_from": "John R Pierce <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create language plperlu Error" }, { "msg_contents": "On Tue, Aug 2, 2016 at 7:47 AM, Om Prakash Jaiswal <[email protected]> wrote:\n> ERROR: could not load library\n> \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\":\n> /opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol:\n> Perl_sv_2bool_flags\n\nSeems to me you are running a Perl version compiled with different\noptions than those expected from PostgreSQL. 
How do you install perl\nand dependencies?\nHave you compiled it (or PostgreSQL)?\n\nBy the way, spreading your message around several mailing lists,\nespecially ones not related to your problem, is the right way to get your\nmessage ignored.\n\nLuca\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin", "msg_date": "Tue, 2 Aug 2016 14:00:25 +0200", "msg_from": "Luca Ferrari <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Create language plperlu Error" }, { "msg_contents": "\n> Sir/Madam,\n> Platform: RHEL 6.5, PostgreSQL 9.4.0.\n> \n> create extension plperl;\n> Create language plperl;\n> \n> I have done the following settings:\n> Perl version 5.10\n> vi /etc/ld.so.conf.d/libperl.conf\n> /usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\n> ldconfig\n> \n> ERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\n> undefined symbol Perl_sv_2bool_flags\n> \n> ERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \n> /opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\n> \n> How do I solve this?\n> Kindly resolve it.\n> \n> Regards\n> Om Prakash\n\nDo you have the package postgresql-plperl installed?\n\n        / Eskil\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs", "msg_date": "Mon, 08 Aug 2016 09:26:40 +0200", "msg_from": "Johan Fredriksson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Create language plperlu Error" }, { "msg_contents": "Yes, I have installed the postgresql-plperl package. But I am still not able to execute:\ncreate extension plperl\ncreate language plperlu\n\nOn Monday, 8 August 2016 12:56 PM, Johan Fredriksson <[email protected]> wrote:\n\n> Sir/Madam,\n> Platform: RHEL 6.5, PostgreSQL 9.4.0.\n> \n> create extension plperl;\n> Create language plperl;\n> \n> I have done the following settings:\n> Perl version 5.10\n> vi /etc/ld.so.conf.d/libperl.conf\n> /usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\n> ldconfig\n> \n> ERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\n> undefined symbol Perl_sv_2bool_flags\n> \n> ERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\": \n> /opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol: Perl_sv_2bool_flags\n> \n> How do I solve this?\n> Kindly resolve it.\n\nDo you have the package postgresql-plperl installed?\n\n        / Eskil", "msg_date": "Mon, 8 Aug 2016 10:30:20 +0000 (UTC)", "msg_from": "Om Prakash Jaiswal <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Create language plperlu Error" }, { "msg_contents": "\n\n----- Original message -----\n> From: \"Om Prakash Jaiswal\" <[email protected]>\n> To: [email protected], \"Pgsql-admin\" <[email protected]>, [email protected]\n> Sent: Tuesday, 2 August 2016 2:47:19\n> Subject: [PERFORM] Create language plperlu Error\n> \n> Sir/Madam,\n> Platform: RHEL 6.5, PostgreSQL 9.4.0.\n> \n> create extension plperl;\n> Create language plperl;\n> \n> I have done the following settings:\n> Perl version 5.10\n> vi /etc/ld.so.conf.d/libperl.conf\n> /usr/lib/5.10/multi-thread/i386.../CORE/libperl.so\n> ldconfig\n> \n> ERROR: Can not load \"/opt/Postgresql/9.4/lib/postgresql/plperl.so\"\n> undefined symbol Perl_sv_2bool_flags\n> \n> ERROR: could not load library \"/opt/PostgreSQL/9.4/lib/postgresql/plperl.so\":\n> /opt/PostgreSQL/9.2/lib/postgresql/plperl.so: undefined symbol:\n> Perl_sv_2bool_flags\n> \n> How do I solve this?\n> Kindly resolve it.\n\nI think your 9.4 is using the 9.2 version of the plperl.so library. Maybe you should recompile.\n\nHTH\nGerardo\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin", "msg_date": "Mon, 8 Aug 2016 08:21:00 -0300 (ART)", "msg_from": "Gerardo Herzig <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Create language plperlu Error" } ]
[ { "msg_contents": "Hey guys!\n\nWe are looking for active beta participants to try out our new SaaS-Based\nDatabase Monitoring Tool. Our tool will monitor your databases and their\nunderlying (virtual) infrastructure. If you would like to be a part of the\nbeta, sign up here: http://www.bluemedora.com/early-access/\n\nWe will initially be supporting RDS, MSSQL, Oracle, PostgreSQL, Mongo,\nDynamoDB and MySQL (and MariaDB). And then we will add support to SQL Azure,\nDB2, Aurora, etc. as the beta progresses.\n\nIf you have any questions, feel free to post and I will be happy to answer\nthem.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Need-Beta-Users-for-New-Database-Monitoring-Solution-tp5914651.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 3 Aug 2016 09:27:56 -0700 (MST)", "msg_from": "jonescam <[email protected]>", "msg_from_op": true, "msg_subject": "Need Beta Users for New Database Monitoring Solution!" } ]
[ { "msg_contents": "Hi,\r\n\r\nI’ve got a SQL runs for about 4 seconds first time it’s been executed,but very fast (20ms) for the consequent runs. I thought it’s because that the first time table being loaded into memory. However, if you change the where clause value from “cat” to “dog”, it runs about 4 seconds as it’s never been executed before. Therefore, it doesn’t sound like the reason of table not being cached.\r\n\r\nCan someone explain why it behaves like this? It PG 9.3, I can try pg_prewarm to cache both tables by creating the extension (probably need to find a 9.4 box and copy those files) if the reason is table not being cached.\r\n\r\nFrom execution plan below, it shows Nested Loop is the slowest part - actual time=349.257..4265.928 rows=457 , it’s really slow, for just 457 rows and takes 4 seconds!!! But very fast for repetitive runs.\r\n\r\ndev=# explain analyze\r\nSELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\nON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\nORDER BY o.cnt DESC LIMIT 100;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=1723.11..1723.36 rows=100 width=50) (actual time=4267.352..4267.407 rows=100 loops=1)\r\n -> Sort (cost=1723.11..1723.44 rows=131 width=50) (actual time=4267.351..4267.381 rows=100 loops=1)\r\n Sort Key: o.cnt\r\n Sort Method: top-N heapsort Memory: 32kB\r\n -> Nested Loop (cost=97.61..1718.50 rows=131 width=50) (actual time=349.257..4265.928 rows=457 loops=1)\r\n -> Bitmap Heap Scan on data w (cost=97.05..593.54 rows=131 width=40) (actual time=239.135..387.077 rows=892 loops=1)\r\n Recheck Cond: (((name)::text ~~ '%cat%'::text) OR ((displayname)::text ~~ '%cat%'::text))\r\n Rows Removed by Index Recheck: 3\r\n Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\r\n Rows Removed by Filter: 1646\r\n -> BitmapOr (cost=97.05..97.05 rows=132 width=0) (actual time=238.931..238.931 rows=0 loops=1)\r\n -> Bitmap Index Scan on idx_data_3 (cost=0.00..60.98 rows=131 width=0) (actual time=195.392..195.392 rows=2539 loops=1)\r\n Index Cond: ((name)::text ~~ '%cat%'::text)\r\n -> Bitmap Index Scan on idx_data_4 (cost=0.00..36.00 rows=1 width=0) (actual time=43.537..43.537 rows=14 loops=1)\r\n Index Cond: ((displayname)::text ~~ '%cat%'::text)\r\n -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=4.334..4.345 rows=1 loops=892)\r\n Index Cond: (name = (w.name)::text)\r\nTotal runtime: 4267.560 ms\r\n(18 rows)\r\n\r\nTime: 4269.990 ms\r\n\r\ndev=# explain analyze\r\nSELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\nON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\nORDER BY o.cnt DESC LIMIT 100;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nLimit (cost=1723.11..1723.36 rows=100 width=50) (actual time=37.843..37.885 rows=100 loops=1)\r\n -> Sort (cost=1723.11..1723.44 rows=131 width=50) (actual time=37.842..37.861 rows=100 loops=1)\r\n Sort Key: o.cnt\r\n Sort Method: top-N heapsort Memory: 32kB\r\n -> Nested Loop (cost=97.61..1718.50 rows=131 width=50) (actual time=5.528..37.373 rows=457 loops=1)\r\n -> Bitmap Heap Scan 
on data w (cost=97.05..593.54 rows=131 width=40) (actual time=3.741..11.799 rows=892 loops=1)\r\n Recheck Cond: (((name)::text ~~ '%cat%'::text) OR ((displayname)::text ~~ '%cat%'::text))\r\n Rows Removed by Index Recheck: 3\r\n Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\r\n Rows Removed by Filter: 1646\r\n -> BitmapOr (cost=97.05..97.05 rows=132 width=0) (actual time=3.547..3.547 rows=0 loops=1)\r\n -> Bitmap Index Scan on idx_data_3 (cost=0.00..60.98 rows=131 width=0) (actual time=3.480..3.480 rows=2539 loops=1)\r\n Index Cond: ((name)::text ~~ '%cat%'::text)\r\n -> Bitmap Index Scan on idx_data_4 (cost=0.00..36.00 rows=1 width=0) (actual time=0.067..0.067 rows=14 loops=1)\r\n Index Cond: ((displayname)::text ~~ '%cat%'::text)\r\n -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=0.027..0.027 rows=1 loops=892)\r\n Index Cond: (name = (w.name)::text)\r\nTotal runtime: 37.974 ms\r\n(18 rows)\r\n\r\nTime: 40.158 ms\r\n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI’ve got a SQL runs for about 4 seconds first time it’s been executed,but very fast (20ms) for the consequent runs. I thought it’s because that the first time table being loaded into memory. However, if you change the where clause value\r\n from “cat” to “dog”, it runs about 4 seconds as it’s never been executed before. Therefore, it doesn’t sound like the reason of table not being cached.\r\n\n \nCan someone explain why it behaves like this? It PG 9.3, I can try pg_prewarm to cache both tables by creating the extension (probably need to find a 9.4 box and copy those files) if the reason is table not being cached.\r\n\n \nFrom execution plan below, it shows Nested Loop is the slowest part - actual time=349.257..4265.928 rows=457 , it’s really slow, for just 457 rows and takes 4 seconds!!! 
But very fast for repetitive runs.\n \n\ndev=# explain analyze\nSELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\nON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\nORDER BY o.cnt DESC LIMIT 100;\n                                                                              QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit  (cost=1723.11..1723.36 rows=100 width=50) (actual time=4267.352..4267.407 rows=100 loops=1)\n   ->  Sort  (cost=1723.11..1723.44 rows=131 width=50) (actual time=4267.351..4267.381 rows=100 loops=1)\n         Sort Key: o.cnt\n         Sort Method: top-N heapsort  Memory: 32kB\n         ->  Nested Loop  (cost=97.61..1718.50 rows=131 width=50) (actual time=349.257..4265.928 rows=457 loops=1)\n               ->  Bitmap Heap Scan on data w  (cost=97.05..593.54 rows=131 width=40) (actual time=239.135..387.077 rows=892 loops=1)\n                     Recheck Cond: (((name)::text ~~ '%cat%'::text) OR ((displayname)::text ~~ '%cat%'::text))\n                     Rows Removed by Index Recheck: 3\n                     Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\n                     Rows Removed by Filter: 1646\n                     ->  BitmapOr  (cost=97.05..97.05 rows=132 width=0) (actual time=238.931..238.931 rows=0 loops=1)\n                           ->  Bitmap Index Scan on idx_data_3  (cost=0.00..60.98 rows=131 width=0) (actual time=195.392..195.392 rows=2539 loops=1)\n                                 Index Cond: ((name)::text ~~ '%cat%'::text)\n                           ->  Bitmap Index Scan on idx_data_4  (cost=0.00..36.00 rows=1 width=0) (actual time=43.537..43.537 rows=14 loops=1)\n                                 Index Cond: ((displayname)::text ~~ '%cat%'::text)\n               ->  Index Scan using idx_order_1_us on order o  (cost=0.56..8.58 rows=1 width=30) (actual time=4.334..4.345 rows=1 loops=892)\n                    Index Cond: (name = (w.name)::text)\nTotal runtime: 4267.560 ms\n(18 rows)\n \nTime: 4269.990 ms\n \ndev=# explain analyze\nSELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\nON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\nORDER BY o.cnt DESC LIMIT 100;\n                                                                              QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit  (cost=1723.11..1723.36 rows=100 width=50) (actual time=37.843..37.885 rows=100 loops=1)\n   ->  Sort  (cost=1723.11..1723.44 rows=131 width=50) (actual time=37.842..37.861 rows=100 loops=1)\n         Sort Key: o.cnt\n         Sort Method: top-N heapsort  Memory: 32kB\n         ->  Nested Loop  (cost=97.61..1718.50 rows=131 width=50) (actual time=5.528..37.373 rows=457 loops=1)\n               ->  Bitmap Heap Scan on data w  (cost=97.05..593.54 rows=131 width=40) (actual time=3.741..11.799 rows=892 loops=1)\n                     Recheck Cond: (((name)::text ~~ '%cat%'::text) OR ((displayname)::text ~~ '%cat%'::text))\n                     Rows Removed by Index Recheck: 3\n                     Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\n                     Rows Removed by Filter: 1646\n      
               ->  BitmapOr  (cost=97.05..97.05 rows=132 width=0) (actual time=3.547..3.547 rows=0 loops=1)\n                           ->  Bitmap Index Scan on idx_data_3  (cost=0.00..60.98 rows=131 width=0) (actual time=3.480..3.480 rows=2539 loops=1)\n                                 Index Cond: ((name)::text ~~ '%cat%'::text)\n                           ->  Bitmap Index Scan on idx_data_4  (cost=0.00..36.00 rows=1 width=0) (actual time=0.067..0.067 rows=14 loops=1)\n                                 Index Cond: ((displayname)::text ~~ '%cat%'::text)\n               ->  Index Scan using idx_order_1_us on order o  (cost=0.56..8.58 rows=1 width=30) (actual time=0.027..0.027 rows=1 loops=892)\n                     Index Cond: (name = (w.name)::text)\nTotal runtime: 37.974 ms\n(18 rows)\n \nTime: 40.158 ms", "msg_date": "Tue, 9 Aug 2016 23:27:54 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 8:27 PM, Suya Huang <[email protected]> wrote:\n> I’ve got a SQL runs for about 4 seconds first time it’s been executed,but\n> very fast (20ms) for the consequent runs. I thought it’s because that the\n> first time table being loaded into memory. However, if you change the where\n> clause value from “cat” to “dog”, it runs about 4 seconds as it’s never been\n> executed before. Therefore, it doesn’t sound like the reason of table not\n> being cached.\n>\n>\n>\n> Can someone explain why it behaves like this? It PG 9.3, I can try\n> pg_prewarm to cache both tables by creating the extension (probably need to\n> find a 9.4 box and copy those files) if the reason is table not being\n> cached.\n>\n>\n>\n> From execution plan below, it shows Nested Loop is the slowest part - actual\n> time=349.257..4265.928 rows=457 , it’s really slow, for just 457 rows and\n> takes 4 seconds!!! But very fast for repetitive runs.\n>\n>\n>\n> dev=# explain analyze\n>\n> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\n>\n> ON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%')\n> AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\n>\n> ORDER BY o.cnt DESC LIMIT 100;\n\nYou're showing the explain for \"cat\", where the interesting one is\nprobably \"dog\".\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 9 Aug 2016 20:52:48 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Hi Claudio,\r\n\r\nThe plan for dog is exactly the same as what’s for cat, thus I didn’t paste them here.\r\n\r\nRichard Albright just pointed that it’s because the result has been cached not the table, I think that makes sense. So my question changes to the efficiency of NESTED LOOP JOIN, 400 rows for 4 seconds, sounds slow to me. Is that normal?\r\n\r\nThanks,\r\nSuya\r\n\r\nOn 8/10/16, 9:52 AM, \"Claudio Freire\" <[email protected]> wrote:\r\n\r\nOn Tue, Aug 9, 2016 at 8:27 PM, Suya Huang <[email protected]> wrote:\r\n> I’ve got a SQL runs for about 4 seconds first time it’s been executed,but\r\n> very fast (20ms) for the consequent runs. I thought it’s because that the\r\n> first time table being loaded into memory. However, if you change the where\r\n> clause value from “cat” to “dog”, it runs about 4 seconds as it’s never been\r\n> executed before. 
Therefore, it doesn’t sound like the reason is the table not\r\n> being cached.\r\n>\r\n> Can someone explain why it behaves like this? It’s PG 9.3, I can try\r\n> pg_prewarm to cache both tables by creating the extension (probably need to\r\n> find a 9.4 box and copy those files) if the reason is the table not being\r\n> cached.\r\n>\r\n> The execution plan below shows the Nested Loop is the slowest part - actual\r\n> time=349.257..4265.928 rows=457 - it’s really slow, taking 4 seconds for just\r\n> 457 rows!!! But very fast for repetitive runs.\r\n>\r\n> dev=# explain analyze\r\n>\r\n> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\n>\r\n> ON w.name = o.name WHERE (w.name LIKE '%cat%' OR w.displayname LIKE '%cat%')\r\n> AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\n>\r\n> ORDER BY o.cnt DESC LIMIT 100;\r\n\r\nYou're showing the explain for \"cat\", where the interesting one is\r\nprobably \"dog\".\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Aug 2016 00:12:12 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 9:12 PM, Suya Huang <[email protected]> wrote:\n> Hi Claudio,\n>\n> The plan for dog is exactly the same as what’s for cat, thus I didn’t paste them here.\n\nAre you sure?\n\nThe plan itself may be the same, but the numbers may be different, and\nin fact be key to understanding the problem.\n\n>\n> Richard Albright just pointed that it’s because the result has been cached not the table, I think that makes sense. So my question changes to the efficiency of NESTED LOOP JOIN, 400 rows for 4 seconds, sounds slow to me. Is that normal?\n\nFrom the looks of those timing numbers, everything involving reads\nfrom disk is slower on the first run. That clearly points to disk\ncache effects. 
So this explain looks completely normal.\n\nIf the query for \"dog\" doesn't get a speedup on second runs, it could\njust be that the data it visits doesn't fit in disk cache, so the\nnumbers are important, they can tell you that.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 9 Aug 2016 21:28:28 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Hi Claudio,\r\n\r\nhere comes the dog version:\r\n\r\ndev=# explain analyze\r\ndev-# SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\ndev-# ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\ndev-# ORDER BY o.cnt DESC LIMIT 100;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=1761.35..1761.60 rows=100 width=50) (actual time=3622.605..3622.647 rows=100 loops=1)\r\n -> Sort (cost=1761.35..1761.69 rows=138 width=50) (actual time=3622.603..3622.621 rows=100 loops=1)\r\n Sort Key: o.cnt\r\n Sort Method: quicksort Memory: 32kB\r\n -> Nested Loop (cost=53.66..1756.44 rows=138 width=50) (actual time=215.934..3622.397 rows=101 loops=1)\r\n -> Bitmap Heap Scan on data w (cost=53.11..571.37 rows=138 width=40) (actual time=146.340..562.583 rows=526 loops=1)\r\n Recheck Cond: (((name)::text ~~ '%dog%'::text) OR ((displayname)::text ~~ '%dog%'::text))\r\n Rows Removed by Index Recheck: 7\r\n Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\r\n Rows Removed by Filter: 1249\r\n -> BitmapOr (cost=53.11..53.11 rows=138 width=0) (actual time=145.906..145.906 rows=0 loops=1)\r\n -> Bitmap Index Scan on idx_data_3 (cost=0.00..32.98 rows=131 width=0) (actual time=133.637..133.637 rows=1782 loops=1)\r\n Index Cond: ((name)::text ~~ '%dog%'::text)\r\n -> Bitmap Index Scan on idx_data_4 (cost=0.00..20.05 rows=7 width=0) (actual time=12.267..12.267 rows=3 loops=1)\r\n Index Cond: ((displayname)::text ~~ '%dog%'::text)\r\n -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=5.814..5.814 rows=0 loops=526)\r\n Index Cond: (name = (w.name)::text)\r\n Total runtime: 3622.756 ms\r\n(18 rows)\r\n\r\nTime: 3652.654 ms\r\n\r\n\r\ndev=# explain analyze\r\n SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\n ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\n ORDER BY o.cnt DESC LIMIT 100;\r\n QUERY PLAN\r\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n Limit (cost=1761.35..1761.60 rows=100 width=50) (actual time=21.938..21.980 rows=100 loops=1)\r\n -> Sort (cost=1761.35..1761.69 rows=138 width=50) (actual time=21.937..21.953 rows=100 loops=1)\r\n Sort Key: o.cnt\r\n Sort Method: quicksort Memory: 32kB\r\n -> Nested Loop (cost=53.66..1756.44 rows=138 width=50) (actual time=3.791..21.818 rows=101 loops=1)\r\n -> Bitmap Heap Scan on data w (cost=53.11..571.37 rows=138 width=40) (actual time=3.467..7.802 rows=526 loops=1)\r\n Recheck Cond: (((name)::text ~~ '%dog%'::text) OR ((displayname)::text ~~ '%dog%'::text))\r\n Rows Removed 
by Index Recheck: 7\r\n Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\r\n Rows Removed by Filter: 1249\r\n -> BitmapOr (cost=53.11..53.11 rows=138 width=0) (actual time=3.241..3.241 rows=0 loops=1)\r\n -> Bitmap Index Scan on idx_data_3 (cost=0.00..32.98 rows=131 width=0) (actual time=3.216..3.216 rows=1782 loops=1)\r\n Index Cond: ((name)::text ~~ '%dog%'::text)\r\n -> Bitmap Index Scan on idx_data_4 (cost=0.00..20.05 rows=7 width=0) (actual time=0.022..0.022 rows=3 loops=1)\r\n Index Cond: ((displayname)::text ~~ '%dog%'::text)\r\n -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=0.025..0.026 rows=0 loops=526)\r\n Index Cond: (name = (w.name)::text)\r\n Total runtime: 22.069 ms\r\n(18 rows)\r\n\r\n\r\nOn 8/10/16, 10:28 AM, \"Claudio Freire\" <[email protected]> wrote:\r\n\r\nOn Tue, Aug 9, 2016 at 9:12 PM, Suya Huang <[email protected]> wrote:\r\n> Hi Claudio,\r\n>\r\n> The plan for dog is exactly the same as what’s for cat, thus I didn’t paste them here.\r\n\r\nAre you sure?\r\n\r\nThe plan itself may be the same, but the numbers may be different, and\r\nin fact be key to understanding the problem.\r\n\r\n>\r\n> Richard Albright just pointed that it’s because the result has been cached not the table, I think that makes sense. So my question changes to the efficiency of NESTED LOOP JOIN, 400 rows for 4 seconds, sounds slow to me. Is that normal?\r\n\r\nFrom the looks of those timing numbers, everything involving reads\r\nfrom disk is slower on the first run. That clearly points to disk\r\ncache effects. So this explain looks completely normal.\r\n\r\nIf the query for \"dog\" doesn't get a speedup on second runs, it could\r\njust be that the data it visits doesn't fit in disk cache, so the\r\nnumbers are important, they can tell you that.\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 10 Aug 2016 00:34:38 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Suya Huang <[email protected]> writes:\n> -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=5.814..5.814 rows=0 loops=526)\n\n4 or so ms per row fetched is well within expectation for random access to\nspinning-rust media. For example, a 15K RPM drive spins at 4 ms per\nrevolution, so rotational delay alone would probably explain this number,\nnever mind needing to do any seeks. So I see nothing even slightly\nunexpected here, assuming that the \"order\" table is large enough that none\nof what you need is in RAM already. 
If you need more performance, look\ninto SSDs.\n\n(If you have storage kit for which you'd expect better performance than\nthis, you should start by explaining what it is.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 09 Aug 2016 20:45:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 9:34 PM, Suya Huang <[email protected]> wrote:\n> dev=# explain analyze\n> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\n> ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\n> ORDER BY o.cnt DESC LIMIT 100;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=1761.35..1761.60 rows=100 width=50) (actual time=21.938..21.980 rows=100 loops=1)\n> -> Sort (cost=1761.35..1761.69 rows=138 width=50) (actual time=21.937..21.953 rows=100 loops=1)\n> Sort Key: o.cnt\n> Sort Method: quicksort Memory: 32kB\n> -> Nested Loop (cost=53.66..1756.44 rows=138 width=50) (actual time=3.791..21.818 rows=101 loops=1)\n> -> Bitmap Heap Scan on data w (cost=53.11..571.37 rows=138 width=40) (actual time=3.467..7.802 rows=526 loops=1)\n> Recheck Cond: (((name)::text ~~ '%dog%'::text) OR ((displayname)::text ~~ '%dog%'::text))\n> Rows Removed by Index Recheck: 7\n> Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\n> Rows Removed by Filter: 1249\n> -> BitmapOr (cost=53.11..53.11 rows=138 width=0) (actual time=3.241..3.241 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_data_3 (cost=0.00..32.98 rows=131 width=0) (actual time=3.216..3.216 rows=1782 loops=1)\n> Index Cond: ((name)::text ~~ '%dog%'::text)\n> -> Bitmap Index Scan on idx_data_4 (cost=0.00..20.05 rows=7 width=0) (actual time=0.022..0.022 rows=3 loops=1)\n> Index Cond: ((displayname)::text ~~ '%dog%'::text)\n> -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=0.025..0.026 rows=0 loops=526)\n> Index Cond: (name = (w.name)::text)\n> Total runtime: 22.069 ms\n> (18 rows)\n\nMaybe I misunderstood your question, but dog here seems to behave just like cat.\n\nAre you expecting that running first \"cat\" and then \"dog\" should make\n\"dog\" go fast?\n\nThat's not how it works, the rows for cat and dog may not reside on\nthe same pages, so what's cached for \"cat\" doesn't work for \"dog\" and\nviceversa. It could even be the other way around, if by chance they\nresided on the same page, so... 
it still looks normal.\n\nClearly your bottleneck is the I/O subsystem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 9 Aug 2016 21:46:42 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 9:46 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Aug 9, 2016 at 9:34 PM, Suya Huang <[email protected]> wrote:\n>> dev=# explain analyze\n>> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\n>> ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\n>> ORDER BY o.cnt DESC LIMIT 100;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> Limit (cost=1761.35..1761.60 rows=100 width=50) (actual time=21.938..21.980 rows=100 loops=1)\n>> -> Sort (cost=1761.35..1761.69 rows=138 width=50) (actual time=21.937..21.953 rows=100 loops=1)\n>> Sort Key: o.cnt\n>> Sort Method: quicksort Memory: 32kB\n>> -> Nested Loop (cost=53.66..1756.44 rows=138 width=50) (actual time=3.791..21.818 rows=101 loops=1)\n>> -> Bitmap Heap Scan on data w (cost=53.11..571.37 rows=138 width=40) (actual time=3.467..7.802 rows=526 loops=1)\n>> Recheck Cond: (((name)::text ~~ '%dog%'::text) OR ((displayname)::text ~~ '%dog%'::text))\n>> Rows Removed by Index Recheck: 7\n>> Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\n>> Rows Removed by Filter: 1249\n>> -> BitmapOr (cost=53.11..53.11 rows=138 width=0) (actual time=3.241..3.241 rows=0 loops=1)\n>> -> Bitmap Index Scan on idx_data_3 (cost=0.00..32.98 rows=131 width=0) (actual time=3.216..3.216 rows=1782 loops=1)\n>> Index Cond: ((name)::text ~~ '%dog%'::text)\n>> -> Bitmap Index Scan on idx_data_4 (cost=0.00..20.05 rows=7 width=0) (actual time=0.022..0.022 rows=3 loops=1)\n>> Index Cond: ((displayname)::text ~~ '%dog%'::text)\n>> -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=0.025..0.026 rows=0 loops=526)\n>> Index Cond: (name = (w.name)::text)\n>> Total runtime: 22.069 ms\n>> (18 rows)\n>\n> Maybe I misunderstood your question, but dog here seems to behave just like cat.\n>\n> Are you expecting that running first \"cat\" and then \"dog\" should make\n> \"dog\" go fast?\n>\n> That's not how it works, the rows for cat and dog may not reside on\n> the same pages, so what's cached for \"cat\" doesn't work for \"dog\" and\n> viceversa. It could even be the other way around, if by chance they\n> resided on the same page, so... it still looks normal.\n>\n> Clearly your bottleneck is the I/O subsystem.\n\nBtw, what kind of index are idx_data_3 and idx_data_4?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 9 Aug 2016 21:49:38 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Thank you Tom very much, that’s the piece of information I miss. 
\r\n\r\nSo, should I expect that the nested loop join would be much faster if I cache both tables (use pg_prewarm) into memory as it waives the disk read?\r\n\r\nThanks,\r\nSuya\r\n\r\nOn 8/10/16, 10:45 AM, \"Tom Lane\" <[email protected]> wrote:\r\n\r\nSuya Huang <[email protected]> writes:\r\n> -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 width=30) (actual time=5.814..5.814 rows=0 loops=526)\r\n\r\n4 or so ms per row fetched is well within expectation for random access to\r\nspinning-rust media. For example, a 15K RPM drive spins at 4 ms per\r\nrevolution, so rotational delay alone would probably explain this number,\r\nnever mind needing to do any seeks. So I see nothing even slightly\r\nunexpected here, assuming that the \"order\" table is large enough that none\r\nof what you need is in RAM already. If you need more performance, look\r\ninto SSDs.\r\n\r\n(If you have storage kit for which you'd expect better performance than\r\nthis, you should start by explaining what it is.)\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 10 Aug 2016 01:43:54 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Hi Claudio,\r\n\r\nHere is the index definition\r\n \"idx_data_3\" gin (name gin_trgm_ops), tablespace \"tbs_data\"\r\n \"idx_data_4\" gin (displayname gin_trgm_ops), tablespace \"tbs_data\"\r\n\r\nOn 8/10/16, 10:49 AM, \"Claudio Freire\" <[email protected]> wrote:\r\n\r\nOn Tue, Aug 9, 2016 at 9:46 PM, Claudio Freire <[email protected]> wrote:\r\n> On Tue, Aug 9, 2016 at 9:34 PM, Suya Huang <[email protected]> wrote:\r\n>> dev=# explain analyze\r\n>> SELECT COALESCE(w.displayname, o.name) FROM order o INNER JOIN data w\r\n>> ON w.name = o.name WHERE (w.name LIKE '%dog%' OR w.displayname LIKE '%dog%') AND (NOT w.categories && ARRAY[1, 6, 10, 1337])\r\n>> ORDER BY o.cnt DESC LIMIT 100;\r\n>> QUERY PLAN\r\n>> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n>> Limit (cost=1761.35..1761.60 rows=100 width=50) (actual time=21.938..21.980 rows=100 loops=1)\r\n>> -> Sort (cost=1761.35..1761.69 rows=138 width=50) (actual time=21.937..21.953 rows=100 loops=1)\r\n>> Sort Key: o.cnt\r\n>> Sort Method: quicksort Memory: 32kB\r\n>> -> Nested Loop (cost=53.66..1756.44 rows=138 width=50) (actual time=3.791..21.818 rows=101 loops=1)\r\n>> -> Bitmap Heap Scan on data w (cost=53.11..571.37 rows=138 width=40) (actual time=3.467..7.802 rows=526 loops=1)\r\n>> Recheck Cond: (((name)::text ~~ '%dog%'::text) OR ((displayname)::text ~~ '%dog%'::text))\r\n>> Rows Removed by Index Recheck: 7\r\n>> Filter: (NOT (categories && '{1,6,10,1337}'::integer[]))\r\n>> Rows Removed by Filter: 1249\r\n>> -> BitmapOr (cost=53.11..53.11 rows=138 width=0) (actual time=3.241..3.241 rows=0 loops=1)\r\n>> -> Bitmap Index Scan on idx_data_3 (cost=0.00..32.98 rows=131 width=0) (actual time=3.216..3.216 rows=1782 loops=1)\r\n>> Index Cond: ((name)::text ~~ '%dog%'::text)\r\n>> -> Bitmap Index Scan on idx_data_4 (cost=0.00..20.05 rows=7 width=0) (actual time=0.022..0.022 rows=3 loops=1)\r\n>> Index Cond: ((displayname)::text ~~ '%dog%'::text)\r\n>> -> Index Scan using idx_order_1_us on order o (cost=0.56..8.58 rows=1 
width=30) (actual time=0.025..0.026 rows=0 loops=526)\r\n>> Index Cond: (name = (w.name)::text)\r\n>> Total runtime: 22.069 ms\r\n>> (18 rows)\r\n>\r\n> Maybe I misunderstood your question, but dog here seems to behave just like cat.\r\n>\r\n> Are you expecting that running first \"cat\" and then \"dog\" should make\r\n> \"dog\" go fast?\r\n>\r\n> That's not how it works, the rows for cat and dog may not reside on\r\n> the same pages, so what's cached for \"cat\" doesn't work for \"dog\" and\r\n> viceversa. It could even be the other way around, if by chance they\r\n> resided on the same page, so... it still looks normal.\r\n>\r\n> Clearly your bottleneck is the I/O subsystem.\r\n\r\nBtw, what kind of index are idx_data_3 and idx_data_4?\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 10 Aug 2016 01:46:30 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Suya Huang <[email protected]> writes:\n> Thank you Tom very much, that’s the piece of information I miss. \n> So, should I expect that the nested loop join would be much faster if I cache both tables (use pg_prewarm) into memory as it waives the disk read?\n\npg_prewarm is not going to magically fix things if your table is bigger\nthan RAM, which it apparently is.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 09 Aug 2016 21:57:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "Not really, the server has 2 GB memory (PROD is a lot more than this dev box), so the table should be able to fit in memory if we preload them.\r\n\r\nMemTotal: 2049572 kB\r\n\r\ndev=# select pg_size_pretty(pg_relation_size('data'));\r\n pg_size_pretty\r\n----------------\r\n 141 MB\r\n(1 row)\r\n\r\nTime: 2.640 ms\r\n\r\ndev=# select pg_size_pretty(pg_relation_size('order'));\r\n pg_size_pretty\r\n----------------\r\n 516 MB\r\n(1 row)\r\n\r\nThanks,\r\nSuya\r\nOn 8/10/16, 11:57 AM, \"Tom Lane\" <[email protected]> wrote:\r\n\r\nSuya Huang <[email protected]> writes:\r\n> Thank you Tom very much, that’s the piece of information I miss. \r\n> So, should I expect that the nested loop join would be much faster if I cache both tables (use pg_prewarm) into memory as it waives the disk read?\r\n\r\npg_prewarm is not going to magically fix things if your table is bigger\r\nthan RAM, which it apparently is.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 10 Aug 2016 02:02:10 +0000", "msg_from": "Suya Huang <[email protected]>", "msg_from_op": true, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 10:46 PM, Suya Huang <[email protected]> wrote:\n> Hi Claudio,\n>\n> Here is the index definition\n> \"idx_data_3\" gin (name gin_trgm_ops), tablespace \"tbs_data\"\n> \"idx_data_4\" gin (displayname gin_trgm_ops), tablespace \"tbs_data\"\n\n\nGIN indexes are quite big, they can be bigger than the data. 
Check\ntheir size, it's possible that you can't fit data + indexes on those\n2GB.\n\nBut if prod is going to be bigger, perhaps it's a non-issue for you?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Wed, 10 Aug 2016 15:10:34 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" }, { "msg_contents": "On Tue, Aug 9, 2016 at 6:27 PM, Suya Huang <[email protected]> wrote:\n> Hi,\n> I’ve got a SQL query that runs for about 4 seconds the first time it’s executed, but\n> very fast (20ms) for subsequent runs. I thought it’s because the\n> first time the table is being loaded into memory. However, if you change the where\n> clause value from “cat” to “dog”, it runs about 4 seconds as it’s never been\n> executed before. Therefore, it doesn’t sound like the reason is the table not\n> being cached.\n\nLIMIT clause operations combined with random access are particularly\nsensitive to caching on slow media. The exact pages you want are\nscattered around the disk, but repeated scans of the same values will\npull up exactly the ones you want. You can warm the table assuming\nyour memory is sufficient to cache all the data you need.\nAnother (I think better) plan is to buy media with faster random\naccess.\n\nAre you using pg_trgm to index the 'name' field? gist/gin indexes are\n*very* dependent on caching/fast drives as the indexes tend to be fat.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 16 Aug 2016 08:56:01 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: what's the slowest part in the SQL" } ]
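For reference, the pg_prewarm approach discussed in this thread is a one-liner per relation once the extension is installed (it ships with 9.4 and later; the relation names below are taken from the thread, and the quoting is only needed because the table is named after a keyword):

    dev=# CREATE EXTENSION pg_prewarm;
    dev=# SELECT pg_prewarm('"order"');     -- load the heap into shared_buffers; returns blocks read
    dev=# SELECT pg_prewarm('idx_data_3');  -- indexes can be prewarmed the same way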
[ { "msg_contents": "Hello,\n\nIs it possible to log queries using sequential scans? Or possibly every\nquery in a way which allows grepping for those with sequential scans?\n\nHello,Is it possible to log queries using sequential scans? Or possibly every query in a way which allows grepping for those with sequential scans?", "msg_date": "Wed, 10 Aug 2016 13:13:48 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Logging queries using sequential scans" }, { "msg_contents": "Hi\n\n2016-08-10 13:13 GMT+02:00 Ivan Voras <[email protected]>:\n\n> Hello,\n>\n> Is it possible to log queries using sequential scans? Or possibly every\n> query in a way which allows grepping for those with sequential scans?\n>\n>\n>\nyou can log execution plan with auto_explain extension\n\n https://www.postgresql.org/docs/current/static/auto-explain.html\n\nThen you can grep the queries with seq scan\n\nRegards\n\nPavel\n\nHi2016-08-10 13:13 GMT+02:00 Ivan Voras <[email protected]>:Hello,Is it possible to log queries using sequential scans? Or possibly every query in a way which allows grepping for those with sequential scans?you can log execution plan with auto_explain extension https://www.postgresql.org/docs/current/static/auto-explain.htmlThen you can grep the queries with seq scanRegardsPavel", "msg_date": "Wed, 10 Aug 2016 13:39:17 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Logging queries using sequential scans" } ]
[ { "msg_contents": "I have a table (registry.entry) which has ~ 100 inherited tables. This\nis a master table and it's empty:\n\npostgres@db=# select count(*) from only registry.entry;\n count\n-------\n 0\n(1 row)\n\nMaster table has rules, inherited tables has check constraints. Data\npartitioned by value of area_id. But when I run a query with area_id\nin where clause, planner do seq scan on master table if master table\nhas no indexes or index scan if has:\n\nAppend (cost=0.12..1750.11 rows=670 width=256)\n -> Index Scan using MASTER_TABLE_INDEX on entry e (cost=0.12..6.15\nrows=1 width=253)\n Index Cond: (((cadastral_number)::text ~>=~\n'61:44:0030502'::text) AND ((cadastral_number)::text ~<~\n'61:44:0030503'::text))\n Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)\nAND (area_id = 1381) AND (quarter_id = 1368779))\n -> Bitmap Heap Scan on entry_61_44 e_1 (cost=1381.62..1743.95\nrows=669 width=256)\n Recheck Cond: (quarter_id = 1368779)\n Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)\nAND (area_id = 1381))\n -> BitmapAnd (cost=1381.62..1381.62 rows=122 width=0)\n -> Bitmap Index Scan on\nentry_61_44_cadastral_number_idx (cost=0.00..321.57 rows=12901\nwidth=0)\n Index Cond: (((cadastral_number)::text ~>=~\n'61:44:0030502'::text) AND ((cadastral_number)::text ~<~\n'61:44:0030503'::text))\n -> Bitmap Index Scan on entry_61_44_quarter_id_idx\n(cost=0.00..1059.47 rows=67205 width=0)\n Index Cond: (quarter_id = 1368779)\n\nAs you can see, postgres scan only one needed partition and (!) an\nindex from master table, In this example I has an index on master\ntable because it's a production server and when I drop it query time\nis too long.\nIn the past (before partitioning) master table has many rows. I made\nvacuum and vacuum analyze for registry.entry, but it didn't help.\npgAdmin says that table size is 21Gb, live tuples: 0, dead tuples: 0.\n\nWhat am I doing wrong?\n\n-- \nAndrey Zhidenkov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 11 Aug 2016 13:46:47 +0300", "msg_from": "Andrey Zhidenkov <[email protected]>", "msg_from_op": true, "msg_subject": "Planner do seq scan on empty master partitioned table" }, { "msg_contents": "Andrey Zhidenkov <[email protected]> writes:\n> I have a table (registry.entry) which has ~ 100 inherited tables. This\n> is a master table and it's empty:\n\nAs long as it's empty, a seqscan should be essentially free. Don't\nworry about it. And definitely don't create indexes, that will just\nadd cost.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 11 Aug 2016 09:50:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner do seq scan on empty master partitioned table" }, { "msg_contents": "> 11 авг. 2016 г., в 13:46, Andrey Zhidenkov <[email protected]> написал(а):\n> \n> I have a table (registry.entry) which has ~ 100 inherited tables. This\n> is a master table and it's empty:\n> \n> postgres@db=# select count(*) from only registry.entry;\n> count\n> -------\n> 0\n> (1 row)\n> \n> Master table has rules, inherited tables has check constraints. Data\n> partitioned by value of area_id. 
But when I run a query with area_id\n> in where clause, planner do seq scan on master table if master table\n> has no indexes or index scan if has:\n> \n> Append (cost=0.12..1750.11 rows=670 width=256)\n> -> Index Scan using MASTER_TABLE_INDEX on entry e (cost=0.12..6.15\n> rows=1 width=253)\n> Index Cond: (((cadastral_number)::text ~>=~\n> '61:44:0030502'::text) AND ((cadastral_number)::text ~<~\n> '61:44:0030503'::text))\n> Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)\n> AND (area_id = 1381) AND (quarter_id = 1368779))\n> -> Bitmap Heap Scan on entry_61_44 e_1 (cost=1381.62..1743.95\n> rows=669 width=256)\n> Recheck Cond: (quarter_id = 1368779)\n> Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)\n> AND (area_id = 1381))\n> -> BitmapAnd (cost=1381.62..1381.62 rows=122 width=0)\n> -> Bitmap Index Scan on\n> entry_61_44_cadastral_number_idx (cost=0.00..321.57 rows=12901\n> width=0)\n> Index Cond: (((cadastral_number)::text ~>=~\n> '61:44:0030502'::text) AND ((cadastral_number)::text ~<~\n> '61:44:0030503'::text))\n> -> Bitmap Index Scan on entry_61_44_quarter_id_idx\n> (cost=0.00..1059.47 rows=67205 width=0)\n> Index Cond: (quarter_id = 1368779)\n> \n> As you can see, postgres scan only one needed partition and (!) an\n> index from master table, In this example I has an index on master\n> table because it's a production server and when I drop it query time\n> is too long.\n> In the past (before partitioning) master table has many rows. I made\n> vacuum and vacuum analyze for registry.entry, but it didn't help.\n> pgAdmin says that table size is 21Gb, live tuples: 0, dead tuples: 0.\n\nYou can make TRUNCATE ONLY master_table. But don’t forget the ONLY keyword because in that case it will truncate all child tables also :)\n\n> \n> What am I doing wrong?\n> \n> -- \n> Andrey Zhidenkov\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n11 авг. 2016 г., в 13:46, Andrey Zhidenkov <[email protected]> написал(а):I have a table (registry.entry) which has ~ 100 inherited tables. Thisis a master table and it's empty:postgres@db=# select count(*) from only registry.entry; count-------     0(1 row)Master table has rules, inherited tables has check constraints. Datapartitioned by value of area_id. 
But when I run a query with area_idin where clause, planner do seq scan on master table if master tablehas no indexes or index scan if has:Append  (cost=0.12..1750.11 rows=670 width=256)  ->  Index Scan using MASTER_TABLE_INDEX on entry e  (cost=0.12..6.15rows=1 width=253)        Index Cond: (((cadastral_number)::text ~>=~'61:44:0030502'::text) AND ((cadastral_number)::text ~<~'61:44:0030503'::text))        Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)AND (area_id = 1381) AND (quarter_id = 1368779))  ->  Bitmap Heap Scan on entry_61_44 e_1  (cost=1381.62..1743.95rows=669 width=256)        Recheck Cond: (quarter_id = 1368779)        Filter: (((cadastral_number)::text ~~ '61:44:0030502%'::text)AND (area_id = 1381))        ->  BitmapAnd  (cost=1381.62..1381.62 rows=122 width=0)              ->  Bitmap Index Scan onentry_61_44_cadastral_number_idx  (cost=0.00..321.57 rows=12901width=0)                    Index Cond: (((cadastral_number)::text ~>=~'61:44:0030502'::text) AND ((cadastral_number)::text ~<~'61:44:0030503'::text))              ->  Bitmap Index Scan on entry_61_44_quarter_id_idx(cost=0.00..1059.47 rows=67205 width=0)                    Index Cond: (quarter_id = 1368779)As you can see, postgres scan only one needed partition and (!) anindex from master table, In this example I has an index on mastertable because it's a production server and when I drop it query timeis too long.In the past (before partitioning) master table has many rows. I madevacuum and vacuum analyze for registry.entry, but it didn't help.pgAdmin says that table size is 21Gb, live tuples: 0, dead tuples: 0.You can make TRUNCATE ONLY master_table. But don’t forget the ONLY keyword because in that case it will truncate all child tables also :)What am I doing wrong?-- Andrey Zhidenkov-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n--May the force be with you…https://simply.name", "msg_date": "Thu, 11 Aug 2016 16:55:49 +0300", "msg_from": "Vladimir Borodin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Planner do seq scan on empty master partitioned table" } ]
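A short sketch of the cleanup Vladimir suggests, using the table name from the thread. Note which way the ONLY keyword cuts: TRUNCATE without ONLY cascades to every child partition, so here ONLY is essential:

    -- Reclaim the ~21GB of empty pages left in the master table after
    -- the data was moved into the partitions:
    TRUNCATE ONLY registry.entry;

    -- Dangerous by contrast: without ONLY, this would also empty all
    -- child partitions:
    --   TRUNCATE registry.entry;

This only addresses the bloat in the master table; per Tom's point, a seqscan of a truly empty master is essentially free, so no index on the master is needed.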
[ { "msg_contents": "Hi,\nthe problem I'm dealing with is long holding locks during extensions of\ntable: \nprocess xxx still waiting for ExclusiveLock on extension of relation xxx of \ndatabase xxx after 3000.158 ms\nMy application is write intensive, in one round I need to insert about 1M\nrows. The general scheme of the process looks as follows:\n1. rename table t01 to t02\n2. insert into t02 1M rows in chunks for about 100k\n3. from t01 (previously loaded table) insert data through stored procedure\nto b01 - this happens parallel in over a dozen sessions\n4. truncate t01\n\nSome data:\nPostgreSQL version 9.5\n\n commit_delay | 0 \n| Sets the delay in microseconds between transaction commit and flushing WAL\nto disk.\n checkpoint_completion_target | 0.9 \n| Time spent flushing dirty buffers during checkpoint, as fraction of\ncheckpoint interval\n maintenance_work_mem | 2GB \n| Sets the maximum memory to be used for maintenance operations.\nshared_buffers | 2GB\n\nwal_block_size | 8192 \n| Shows the block size in the write ahead log.\n wal_buffers | 16MB \n| Sets the number of disk-page buffers in shared memory for WAL.\n wal_compression | off \n| Compresses full-page writes written in WAL file.\n wal_keep_segments | 0 \n| Sets the number of WAL files held for standby servers.\n wal_level | minimal \n| Set the level of information written to the WAL.\n wal_log_hints | off \n| Writes full pages to WAL when first modified after a checkpoint, even for\na non-critical modifications.\n wal_receiver_status_interval | 10s \n| Sets the maximum interval between WAL receiver status reports to the\nprimary.\n wal_receiver_timeout | 1min \n| Sets the maximum wait time to receive data from the primary.\n wal_retrieve_retry_interval | 5s \n| Sets the time to wait before retrying to retrieve WAL after a failed\nattempt.\n wal_segment_size | 16MB \n| Shows the number of pages per write ahead log segment.\n wal_sender_timeout | 1min \n| Sets the maximum time to wait for WAL replication.\n wal_sync_method | fdatasync \n| Selects the method used for forcing WAL updates to disk.\n wal_writer_delay | 200ms \n| WAL writer sleep time between WAL flushes.\n work_mem | 32MB \n| Sets the maximum memory to be used for query workspaces.\n\nCheckpoints occur every ~ 30sec.\n\nFollowing the advices from this mailing list shared buffers size was changed\nfrom 12 to 2GB but nothing has changed.\n\nI'm not sure or my bottleneck is the I/O subsystem or there is anything else\nI can do to make it faster? 
What I came up with is (but I'm not sure if any\nof this makes sense):\n* change settings for bgwriter/wal?\n* make sure huge pages are in use by changing huge_pages parameter to on\n* replace truncate with DROP/CREATE command?\n* turning off fsync for loading?\n* increase commit_delay value?\n* move temporary tables to a different tablespace\n\nYour advice or suggestions will be much appreciated.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Big-data-INSERT-optimization-ExclusiveLock-on-extension-of-the-table-tp5916781.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 17 Aug 2016 04:45:03 -0700 (MST)", "msg_from": "pinker <[email protected]>", "msg_from_op": true, "msg_subject": "Big data INSERT optimization - ExclusiveLock on extension of the\n table" }, { "msg_contents": "On 8/17/16 6:45 AM, pinker wrote:\n> 1. rename table t01 to t02\nOK...\n> 2. insert into t02 1M rows in chunks for about 100k\nWhy not just insert into t01??\n> 3. from t01 (previously loaded table) insert data through stored procedure\nBut you renamed t01 so it no longer exists???\n> to b01 - this happens parallel in over a dozen sessions\nb01?\n> 4. truncate t01\nHuh??\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 16:50:39 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big data INSERT optimization - ExclusiveLock on\n extension of the table" }, { "msg_contents": "\n\n> 1. rename table t01 to t02\nOK...\n> 2. insert into t02 1M rows in chunks for about 100k\nWhy not just insert into t01??\n\nBecause of cpu utilization, it speeds up when load is divided\n\n> 3. from t01 (previously loaded table) insert data through stored procedure\nBut you renamed t01 so it no longer exists???\n> to b01 - this happens parallel in over a dozen sessions\nb01?\n\nthat's another table - permanent one\n\n> 4. truncate t01\nHuh??\n\nThe data were inserted to permanent storage so the temporary table can be\ntruncated and reused.\n\nOk, maybe the process is not so important; let's say the table is loaded,\nthen data are fetched and reloaded to other table through stored procedure\n(with it's logic), then the table is truncated and process goes again. The\nmost important part is holding ExclusiveLocks ~ 1-5s.\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Big-data-INSERT-optimization-ExclusiveLock-on-extension-of-the-table-tp5916781p5917136.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 15:26:35 -0700 (MST)", "msg_from": "pinker <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Big data INSERT optimization - ExclusiveLock on extension of\n the table" }, { "msg_contents": "On 8/18/16 5:26 PM, pinker wrote:\n>\n>\n>> 1. 
rename table t01 to t02\n> OK...\n>> 2. insert into t02 1M rows in chunks for about 100k\n> Why not just insert into t01??\n>\n> Because of cpu utilization, it speeds up when load is divided\n\nThat still doesn't explain why you renamed t01 to t02.\n\n>> 3. from t01 (previously loaded table) insert data through stored procedure\n> But you renamed t01 so it no longer exists???\n>> to b01 - this happens parallel in over a dozen sessions\n> b01?\n>\n> that's another table - permanent one\n>\n>> 4. truncate t01\n> Huh??\n>\n> The data were inserted to permanent storage so the temporary table can be\n> truncated and reused.\n\nExcept t01 doesn't exist anymore...\n\n> Ok, maybe the process is not so important; let's say the table is loaded,\n> then data are fetched and reloaded to other table through stored procedure\n> (with it's logic), then the table is truncated and process goes again. The\n> most important part is holding ExclusiveLocks ~ 1-5s.\n\nThe process is important though, because AFAIK the only thing that \nblocks the extension lock is another process extending the relation, \nvacuum, or something trying to record information about free space and \nan FSM page not existing. Is there something else doing inserts into the \ntable at the same time? Is something doing a bunch of updates or deletes \non pages that are newly inserted?\n\nBTW, there we improvements made to relation extension in 9.6, so if you \nhave some way to test this on 9.6 it would be useful to know if it's \nstill a problem or not.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 19 Aug 2016 09:13:20 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Big data INSERT optimization - ExclusiveLock on\n extension of the table" }, { "msg_contents": "On Wed, Aug 17, 2016 at 6:45 AM, pinker <[email protected]> wrote:\n> Hi,\n> the problem I'm dealing with is long holding locks during extensions of\n> table:\n> process xxx still waiting for ExclusiveLock on extension of relation xxx of\n> database xxx after 3000.158 ms\n> My application is write intensive, in one round I need to insert about 1M\n> rows. The general scheme of the process looks as follows:\n> 1. rename table t01 to t02\n> 2. insert into t02 1M rows in chunks for about 100k\n> 3. from t01 (previously loaded table) insert data through stored procedure\n> to b01 - this happens parallel in over a dozen sessions\n> 4. 
truncate t01\n>\n> Some data:\n> PostgreSQL version 9.5\n>\n> commit_delay | 0\n> | Sets the delay in microseconds between transaction commit and flushing WAL\n> to disk.\n> checkpoint_completion_target | 0.9\n> | Time spent flushing dirty buffers during checkpoint, as fraction of\n> checkpoint interval\n> maintenance_work_mem | 2GB\n> | Sets the maximum memory to be used for maintenance operations.\n> shared_buffers | 2GB\n>\n> wal_block_size | 8192\n> | Shows the block size in the write ahead log.\n> wal_buffers | 16MB\n> | Sets the number of disk-page buffers in shared memory for WAL.\n> wal_compression | off\n> | Compresses full-page writes written in WAL file.\n> wal_keep_segments | 0\n> | Sets the number of WAL files held for standby servers.\n> wal_level | minimal\n> | Set the level of information written to the WAL.\n> wal_log_hints | off\n> | Writes full pages to WAL when first modified after a checkpoint, even for\n> a non-critical modifications.\n> wal_receiver_status_interval | 10s\n> | Sets the maximum interval between WAL receiver status reports to the\n> primary.\n> wal_receiver_timeout | 1min\n> | Sets the maximum wait time to receive data from the primary.\n> wal_retrieve_retry_interval | 5s\n> | Sets the time to wait before retrying to retrieve WAL after a failed\n> attempt.\n> wal_segment_size | 16MB\n> | Shows the number of pages per write ahead log segment.\n> wal_sender_timeout | 1min\n> | Sets the maximum time to wait for WAL replication.\n> wal_sync_method | fdatasync\n> | Selects the method used for forcing WAL updates to disk.\n> wal_writer_delay | 200ms\n> | WAL writer sleep time between WAL flushes.\n> work_mem | 32MB\n> | Sets the maximum memory to be used for query workspaces.\n>\n> Checkpoints occur every ~ 30sec.\n>\n> Following the advices from this mailing list shared buffers size was changed\n> from 12 to 2GB but nothing has changed.\n>\n> I'm not sure or my bottleneck is the I/O subsystem or there is anything else\n> I can do to make it faster? What I came up with is (but I'm not sure if any\n> of this makes sense):\n> * change settings for bgwriter/wal?\n> * make sure huge pages are in use by changing huge_pages parameter to on\n> * replace truncate with DROP/CREATE command?\n> * turning off fsync for loading?\n> * increase commit_delay value?\n> * move temporary tables to a different tablespace\n>\n> Your advice or suggestions will be much appreciated.\n\nHere's how I do it:\nCREATE TABLE t_new (LIKE t INCLUDING ALL);\n<insert from n threads to t_new>\n\nBEGIN;\nDROP TABLE t;\nALTER TABLE t_new RENAME to t;\n<recreate views etc as needed>\nCOMMIT;\n\nIf moving multiple tables in a single transaction I do a looped lock\nprobe with NOWAIT to avoid deadlocks. Postgres deadlock resolution\nbehavior is such that longer running processes seem to get killed\nfirst; in these scenarios it seems to almost always kill the one you\n*don't* want killed :-).\n\nThis strategy will even work in complicated scenarios, for example\npartitioned tables; you can build up the partition on the side and\nswap in the the new one over the old one in a transaction.\n\nThe above is all about avoiding locks. 
If your problem is I/O bound,\nhere are some general strategies to improve insert performance:\n*) UNLOGGED tables (beware: the table is emptied after a crash restart)\n*) synchronous_commit = false\n*) ensure shared_buffers is high enough (too low and backends end up\nwriting out dirty buffers themselves)\n\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 19 Aug 2016 10:01:11 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big data INSERT optimization - ExclusiveLock on\n extension of the table" } ]
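One possible shape of the looped NOWAIT lock probe Merlin mentions; the thread does not spell it out, so the retry count, sleep interval, and table names here are assumptions. It must run in the same transaction as the swap so the acquired lock is kept:

    BEGIN;
    DO $$
    BEGIN
      FOR i IN 1 .. 100 LOOP
        BEGIN
          -- NOWAIT raises an error instead of blocking, so this session
          -- never waits in the lock queue where deadlocks are detected.
          LOCK TABLE t IN ACCESS EXCLUSIVE MODE NOWAIT;
          RETURN;  -- lock acquired; it is held until COMMIT
        EXCEPTION WHEN lock_not_available THEN
          PERFORM pg_sleep(0.1);  -- back off, then retry
        END;
      END LOOP;
      RAISE EXCEPTION 'could not lock table t for swap';
    END $$;
    DROP TABLE t;
    ALTER TABLE t_new RENAME TO t;
    COMMIT;

If a probe fails partway through a multi-table swap, the plpgsql exception block rolls back its subtransaction, releasing any locks taken so far, which is what avoids the deadlock.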
[ { "msg_contents": "Greetings.\n\nI have a question on why planner chooses `IndexScan` for the following\nquery:\n\n SELECT la.loan_id, la.due_date, la.is_current\n FROM loan_agreements la WHERE la.is_current AND '2016-08-11' >\nla.due_date;\n\nRelevant (cannot post it all, sorry) table definition is:\n\n Column Type Modifiers\n ------------------------------ --------------------------- ---------\n id bigint not null\n ...\n is_current boolean not null\n due_date date not null\n loan_id bigint\n\n Indexes:\n \"loan_agreements_pkey\" PRIMARY KEY, btree (id)\n ...\n \"idx_loan_agreements_due_date\" btree (due_date)\n \"idx_loan_agreemnets_loan_id_cond_is_current_true\" btree (loan_id)\nWHERE is_current = true\n\nSome stats:\n SELECT relname,reltuples::numeric,relpages FROM pg_class WHERE oid IN\n('loan_agreements'::regclass,\n'idx_loan_agreemnets_loan_id_cond_is_current_true'::regclass,\n'idx_loan_agreements_due_date'::regclass);\n relname reltuples relpages\n ------------------------------------------------ --------- --------\n idx_loan_agreements_due_date 664707 1828\n idx_loan_agreemnets_loan_id_cond_is_current_true 237910 655\n loan_agreements 664707 18117\n\n\nSettings:\n\n SELECT name,setting,unit FROM pg_settings WHERE name ~\n'(buffers|mem|cost)$';\n name setting unit\n -------------------- -------- ----\n autovacuum_work_mem 524288 kB\n cpu_index_tuple_cost 0.005 ¤\n cpu_operator_cost 0.0025 ¤\n cpu_tuple_cost 0.01 ¤\n maintenance_work_mem 16777216 kB\n random_page_cost 2.5 ¤\n seq_page_cost 1 ¤\n shared_buffers 1572864 8kB\n temp_buffers 8192 8kB\n wal_buffers 2048 8kB\n work_mem 65536 kB\n PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)\n4.4.7 20120313 (Red Hat 4.4.7-16), 64-bit\n\nPlanner chooses the following plan:\n\n QUERY PLAN\n\n ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\nloan_agreements la (cost=0.42..16986.53 rows=226145 width=13) (actual\ntime=0.054..462.394 rows=216530 loops=1)\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21304\n Buffers: shared hit=208343 read=18399\n Planning time: 0.168 ms\n Execution time: 479.773 ms\n\nIf I disable IndexScans, plan changes likes this:\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on loan_agreements la (cost=2884.01..23974.88\nrows=226145 width=13) (actual time=38.893..200.376 rows=216530 loops=1)\n Recheck Cond: is_current\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21304\n Heap Blocks: exact=18117\n Buffers: shared hit=18212 read=557\n -> Bitmap Index Scan on\nidx_loan_agreemnets_loan_id_cond_is_current_true (cost=0.00..2827.47\nrows=237910 width=0) (actual time=35.166..35.166 rows=237853 loops=1)\n Buffers: shared hit=119 read=533\n Planning time: 0.171 ms\n Execution time: 214.341 ms\n\nQuestion is — why IndexScan over partial index is estimated less than\nBitmapHeap + BitmapIndex scan. And how can I tell Planner, that IndexScan\nover 1/3 of table is not a good thing — IndexScan is touching 10x more\npages and in a typical situation those are cold.\n\nThanks in advance.\n\n-- \nVictor Y. 
Yegorov\n\nGreetings.I have a question on why planner chooses `IndexScan` for the following query:    SELECT la.loan_id, la.due_date, la.is_current      FROM loan_agreements la WHERE la.is_current AND '2016-08-11' > la.due_date;Relevant (cannot post it all, sorry) table definition is:                Column                        Type             Modifiers    ------------------------------ --------------------------- ---------    id                             bigint                      not null    ...    is_current                     boolean                     not null    due_date                       date                        not null    loan_id                        bigint        Indexes:        \"loan_agreements_pkey\" PRIMARY KEY, btree (id)        ...        \"idx_loan_agreements_due_date\" btree (due_date)        \"idx_loan_agreemnets_loan_id_cond_is_current_true\" btree (loan_id) WHERE is_current = trueSome stats:    SELECT relname,reltuples::numeric,relpages FROM pg_class WHERE oid IN ('loan_agreements'::regclass, 'idx_loan_agreemnets_loan_id_cond_is_current_true'::regclass, 'idx_loan_agreements_due_date'::regclass);                    relname                      reltuples relpages    ------------------------------------------------ --------- --------    idx_loan_agreements_due_date                        664707     1828    idx_loan_agreemnets_loan_id_cond_is_current_true    237910      655    loan_agreements                                     664707    18117Settings:    SELECT name,setting,unit FROM pg_settings WHERE name ~ '(buffers|mem|cost)$';            name         setting  unit    -------------------- -------- ----    autovacuum_work_mem  524288   kB    cpu_index_tuple_cost 0.005    ¤    cpu_operator_cost    0.0025   ¤    cpu_tuple_cost       0.01     ¤    maintenance_work_mem 16777216 kB    random_page_cost     2.5      ¤    seq_page_cost        1        ¤    shared_buffers       1572864  8kB    temp_buffers         8192     8kB    wal_buffers          2048     8kB    work_mem             65536    kB    PostgreSQL 9.4.8 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16), 64-bitPlanner chooses the following plan:                                                                                         QUERY PLAN    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------    Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on loan_agreements la      (cost=0.42..16986.53 rows=226145 width=13) (actual time=0.054..462.394 rows=216530 loops=1)      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21304      Buffers: shared hit=208343 read=18399    Planning time: 0.168 ms    Execution time: 479.773 msIf I disable IndexScans, plan changes likes this:                                                                              QUERY PLAN    ----------------------------------------------------------------------------------------------------------------------------------------------------------------------    Bitmap Heap Scan on loan_agreements la  (cost=2884.01..23974.88 rows=226145 width=13) (actual time=38.893..200.376 rows=216530 loops=1)      Recheck Cond: is_current      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21304      Heap Blocks: exact=18117      Buffers: shared hit=18212 read=557      ->  Bitmap Index Scan on 
idx_loan_agreemnets_loan_id_cond_is_current_true  (cost=0.00..2827.47 rows=237910 width=0) (actual time=35.166..35.166 rows=237853 loops=1)            Buffers: shared hit=119 read=533    Planning time: 0.171 ms    Execution time: 214.341 msQuestion is — why IndexScan over partial index is estimated less than BitmapHeap + BitmapIndex scan. And how can I tell Planner, that IndexScan over 1/3 of table is not a good thing — IndexScan is touching 10x more pages and in a typical situation those are cold.Thanks in advance.-- Victor Y. Yegorov", "msg_date": "Thu, 18 Aug 2016 16:52:11 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Estimates on partial index" }, { "msg_contents": "Victor Yegorov <[email protected]> writes:\n> Settings:\n> random_page_cost 2.5 ¤\n> seq_page_cost 1 ¤\n\n> Question is — why IndexScan over partial index is estimated less than\n> BitmapHeap + BitmapIndex scan. And how can I tell Planner, that IndexScan\n> over 1/3 of table is not a good thing — IndexScan is touching 10x more\n> pages and in a typical situation those are cold.\n\nIn that case you've got random_page_cost too far down. Values less than\nthe default of 4 are generally only appropriate if the bulk of your\ndatabase stays in RAM.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 09:56:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "On Thu, Aug 18, 2016 at 6:52 AM, Victor Yegorov <[email protected]> wrote:\n> Greetings.\n>\n> I have a question on why planner chooses `IndexScan` for the following\n> query:\n>\n> SELECT la.loan_id, la.due_date, la.is_current\n> FROM loan_agreements la WHERE la.is_current AND '2016-08-11' >\n> la.due_date;\n>\n...\n>\n> Planner chooses the following plan:\n>\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\n> loan_agreements la (cost=0.42..16986.53 rows=226145 width=13) (actual\n> time=0.054..462.394 rows=216530 loops=1)\n> Filter: ('2016-08-11'::date > due_date)\n> Rows Removed by Filter: 21304\n> Buffers: shared hit=208343 read=18399\n> Planning time: 0.168 ms\n> Execution time: 479.773 ms\n>\n> If I disable IndexScans, plan changes likes this:\n>\n> QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on loan_agreements la (cost=2884.01..23974.88\n> rows=226145 width=13) (actual time=38.893..200.376 rows=216530 loops=1)\n> Recheck Cond: is_current\n> Filter: ('2016-08-11'::date > due_date)\n> Rows Removed by Filter: 21304\n> Heap Blocks: exact=18117\n> Buffers: shared hit=18212 read=557\n> -> Bitmap Index Scan on\n> idx_loan_agreemnets_loan_id_cond_is_current_true (cost=0.00..2827.47\n> rows=237910 width=0) (actual time=35.166..35.166 rows=237853 loops=1)\n> Buffers: shared hit=119 read=533\n> Planning time: 0.171 ms\n> Execution time: 214.341 ms\n>\n> Question is — why IndexScan over partial index is estimated less than\n> BitmapHeap + BitmapIndex scan. 
And how can I tell Planner, that IndexScan\n> over 1/3 of table is not a good thing — IndexScan is touching 10x more pages\n\nBoth plans touch the same pages. The index scan just touches some of\nthose pages over and over again. A large setting of\neffective_cache_size would tell it that the page will most likely\nstill be in cache when it comes back to touch it again, meaning the\ncost of doing so will be small, basically free.\n\n> and in a typical situation those are cold.\n\nBut they won't be, because it is heating them up itself, and\neffective_cache_size says that stay then hot for the duration of the\nquery.\n\nAlso, with a random_page_cost of 2.5, you are telling it that even\ncold pages are not all that cold.\n\nWhat are the correlations of the is_current column to the ctid order,\nand of the loan_id column to the ctid order?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 08:59:27 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "2016-08-18 16:56 GMT+03:00 Tom Lane <[email protected]>:\n\n> In that case you've got random_page_cost too far down. Values less than\n> the default of 4 are generally only appropriate if the bulk of your\n> database stays in RAM.\n>\n\nOh, that's interesting. I was under impression, that r_p_c reflects IO\nspeed, like — make it smaller for SSDs.\nTo make this query prefer BitmapScan, I need to bump r_p_c to 5.8. And 6.0\nmakes it switch to SeqScan.\n\nDoes it means, that for huge DB (this one is ~1.5TB) it is better to\nincrease r_p_c?\n\nStill, this effect shows only for partial indexes, i.e. if I disable `\nidx_loan_agreemnets_loan_id_cond_is_current_true`,\nthan planner will not use any other and goes straight to SeqScan.\nDoes it means, that amount of table-related IO is not accounted for when\nplanning scan over partial index?\n\n-- \nVictor Y. Yegorov\n\n2016-08-18 16:56 GMT+03:00 Tom Lane <[email protected]>:In that case you've got random_page_cost too far down.  Values less than\nthe default of 4 are generally only appropriate if the bulk of your\ndatabase stays in RAM.Oh, that's interesting. I was under impression, that r_p_c reflects IO speed, like — make it smaller for SSDs.To make this query prefer BitmapScan, I need to bump r_p_c to 5.8. And 6.0 makes it switch to SeqScan.Does it means, that for huge DB (this one is ~1.5TB) it is better to increase r_p_c?Still, this effect shows only for partial indexes, i.e. if I disable `idx_loan_agreemnets_loan_id_cond_is_current_true`,than planner will not use any other and goes straight to SeqScan.Does it means, that amount of table-related IO is not accounted for when planning scan over partial index?-- Victor Y. Yegorov", "msg_date": "Thu, 18 Aug 2016 21:40:32 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "2016-08-18 18:59 GMT+03:00 Jeff Janes <[email protected]>:\n\n> Both plans touch the same pages. The index scan just touches some of\n> those pages over and over again. 
A large setting of\n> effective_cache_size would tell it that the page will most likely\n> still be in cache when it comes back to touch it again, meaning the\n> cost of doing so will be small, basically free.\n>\n> > and in a typical situation those are cold.\n>\n> But they won't be, because it is heating them up itself, and\n> effective_cache_size says that stay then hot for the duration of the\n> query.\n>\n\n(Re-sending as I've missed to add the list.)\n\nBut IndexScan means, that not only index, table is also accessed.\nAnd although index is small get's hot quite quickly (yes, e_c_s is 96GB on\nthis dedicated box),\ntable is not. And this clearly adds up to the total time.\n\nI am wondering, if heap page accesses are also accounted for during\nplanning.\n\n\nAlso, with a random_page_cost of 2.5, you are telling it that even\n> cold pages are not all that cold.\n>\n\nYes, this was new for me and I will review my setup.\nCurrent setting is based on the fact we're running SSDs.\n\n\nWhat are the correlations of the is_current column to the ctid order,\n> and of the loan_id column to the ctid order?\n>\n\n SELECT attname,null_frac,avg_width,n_distinct,correlation FROM pg_stats\nWHERE tablename='loan_agreements' AND attname IN\n('loan_id','is_current','due_date');\n attname null_frac avg_width n_distinct correlation\n ---------- --------- --------- ---------- -----------\n due_date 0 4 1197 0.982312\n is_current 0 1 2 0.547268\n loan_id 0 8 -0.202438 0.937507\n\n\n-- \nVictor Y. Yegorov\n\n2016-08-18 18:59 GMT+03:00 Jeff Janes <[email protected]>:Both plans touch the same pages.  The index scan just touches some of\nthose pages over and over again.  A large setting of\neffective_cache_size would tell it that the page will most likely\nstill be in cache when it comes back to touch it again, meaning the\ncost of doing so will be small, basically free.\n\n> and in a typical situation those are cold.\n\nBut they won't be, because it is heating them up itself, and\neffective_cache_size says that stay then hot for the duration of the\nquery.(Re-sending as I've missed to add the list.) But IndexScan means, that not only index, table is also accessed.And although index is small get's hot quite quickly (yes, e_c_s is 96GB on this dedicated box),table is not. And this clearly adds up to the total time.I am wondering, if heap page accesses are also accounted for during planning.Also, with a random_page_cost of 2.5, you are telling it that even\ncold pages are not all that cold.Yes, this was new for me and I will review my setup.Current setting is based on the fact we're running SSDs.\nWhat are the correlations of the is_current column to the ctid order,\nand of the loan_id column to the ctid order?    SELECT attname,null_frac,avg_width,n_distinct,correlation FROM pg_stats WHERE tablename='loan_agreements' AND attname IN ('loan_id','is_current','due_date');     attname   null_frac avg_width n_distinct correlation    ---------- --------- --------- ---------- -----------    due_date           0         4       1197    0.982312    is_current         0         1          2    0.547268    loan_id            0         8  -0.202438    0.937507-- Victor Y. 
Yegorov", "msg_date": "Thu, 18 Aug 2016 21:55:27 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "On Thu, Aug 18, 2016 at 11:55 AM, Victor Yegorov <[email protected]> wrote:\n> 2016-08-18 18:59 GMT+03:00 Jeff Janes <[email protected]>:\n>>\n>> Both plans touch the same pages. The index scan just touches some of\n>> those pages over and over again. A large setting of\n>> effective_cache_size would tell it that the page will most likely\n>> still be in cache when it comes back to touch it again, meaning the\n>> cost of doing so will be small, basically free.\n>>\n>> > and in a typical situation those are cold.\n>>\n>> But they won't be, because it is heating them up itself, and\n>> effective_cache_size says that stay then hot for the duration of the\n>> query.\n>\n>\n> But IndexScan means, that not only index, table is also accessed.\n> And although index is small get's hot quite quickly (yes, e_c_s is 96GB on\n> this dedicated box),\n> table is not.\n\nBoth types of scans have to touch the same set of pages. The bitmap\nhits all of the needed index pages first and memorizes the relevant\nresults, then hits all the needed table pages. The regular index scan\nkeeps jumping back and forth from index to table. But they are the\nsame set of pages either way.\n\nWith a regular index scan, if the same table page is pointed to from\n40 different places in the index, then it will be touched 40 different\ntimes. But at least 39 of those times it is going to already be in\nmemory. The bitmap scan will touch the page just one and deal with\nall 40 entries.\n\n\n> And this clearly adds up to the total time.\n\nThat isn't clear at all from the info you gave. You would have to set\ntrack_io_timing=on in order to show something like that. And we don't\nknow if you ran each query once in the order shown, and posted what\nyou got (with one warming the cache for the other); or if you have ran\neach repeatedly and posted representative examples with a pre-warmed\ncache.\n\n\n> I am wondering, if heap page accesses are also accounted for during\n> planning.\n\nIt does account for them, but perhaps not perfectly. See \"[PERFORM]\nindex fragmentation on insert-only table with non-unique column\" for\nsome arguments on that which might be relevant to you.\n\nIf you can come up with a data generator which creates data that\nothers can use to reproduce this situation, we can then investigate it\nin more detail.\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 13:06:52 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "On 8/18/16 3:06 PM, Jeff Janes wrote:\n> If you can come up with a data generator which creates data that\n> others can use to reproduce this situation, we can then investigate it\n> in more detail.\n\nBTW, the easy fix to this is most likely to create an index on due_date \nWHERE is_current. Or perhaps partition the table so non-current rows \ndon't live with current ones. I'm not a fan of mixing history tracking \nin with the main table, because it leads to problems like this.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 18 Aug 2016 17:01:05 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "K te er Xi\n\nSent from Nine<http://www.9folders.com/>\n________________________________\nFrom: Jim Nasby <[email protected]>\nSent: 19-Aug-2016 03:32\nTo: Jeff Janes; Victor Yegorov\nCc: [email protected]\nSubject: Re: [PERFORM] Estimates on partial index\n\nOn 8/18/16 3:06 PM, Jeff Janes wrote:\n> If you can come up with a data generator which creates data that\n> others can use to reproduce this situation, we can then investigate it\n> in more detail.\n\nBTW, the easy fix to this is most likely to create an index on due_date\nWHERE is_current. Or perhaps partition the table so non-current rows\ndon't live with current ones. I'm not a fan of mixing history tracking\nin with the main table, because it leads to problems like this.\n--\nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\n\n\n\n  K te er Xi \n\nSent from \nNine\n\n\n\n\nFrom: Jim Nasby <[email protected]>\nSent: 19-Aug-2016 03:32\nTo: Jeff Janes; Victor Yegorov\nCc: [email protected]\nSubject: Re: [PERFORM] Estimates on partial index\n\n\n\n\n\n\nOn 8/18/16 3:06 PM, Jeff Janes wrote:\n> If you can come up with a data generator which creates data that\n> others can use to reproduce this situation, we can then investigate it\n> in more detail.\n\nBTW, the easy fix to this is most likely to create an index on due_date \nWHERE is_current. Or perhaps partition the table so non-current rows \ndon't live with current ones. I'm not a fan of mixing history tracking \nin with the main table, because it leads to problems like this.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532)   mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 19 Aug 2016 04:21:16 +0000", "msg_from": "Ashish Kumar Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "2016-08-18 23:06 GMT+03:00 Jeff Janes <[email protected]>:\n\n> It does account for them, but perhaps not perfectly. See \"[PERFORM]\n> index fragmentation on insert-only table with non-unique column\" for\n> some arguments on that which might be relevant to you.\n>\n\nThanks for pointing this out, good stuff to know.\n\n\nIf you can come up with a data generator which creates data that\n> others can use to reproduce this situation, we can then investigate it\n> in more detail.\n>\n\nI do not think this has any value anymore. 
I have reconfigured the server:\n- r_p_c returned to the default 4.0\n- turned on track_io_timing\n- reindexed all indexes on the table\n- went with the suggestion from Jim about partial index on `due_date`,\nalthough\n delayed it a bit to get a better view on the situation\n\nRunning several explains in a row produces the following:\n\n Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\nloan_agreements la (cost=0.42..21469.40 rows=224331 width=13) (actual\ntime=0.040..2687.422 rows=216440 loops=1)\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21806\n Buffers: shared hit=226670 read=692 dirtied=48\n I/O Timings: read=9.854\n Planning time: 1885.219 ms\n Execution time: 2712.833 ms\n\n Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\nloan_agreements la (cost=0.42..21469.40 rows=224331 width=13) (actual\ntime=426.027..2273.617 rows=216440 loops=1)\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21806\n Buffers: shared hit=227276\n Planning time: 0.175 ms\n Execution time: 2296.414 ms\n\n Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\nloan_agreements la (cost=0.42..21469.40 rows=224331 width=13) (actual\ntime=0.034..297.113 rows=216440 loops=1)\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21806\n Buffers: shared hit=227276\n Planning time: 0.173 ms\n Execution time: 310.509 ms\n\n Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on\nloan_agreements la (cost=0.42..21469.40 rows=224331 width=13) (actual\ntime=0.031..286.212 rows=216440 loops=1)\n Filter: ('2016-08-11'::date > due_date)\n Rows Removed by Filter: 21806\n Buffers: shared hit=227276\n Planning time: 0.163 ms\n Execution time: 299.831 ms\n\nThis makes me think, that IO is not my issue here and, honestly, I have no\nclue what can be behind this.\n\nWhat I noticed — queries do experience this kind of hiccups from time to\ntime. CPU and IO monitoring shows no spikes at all, CPU is below 40% all\nthe time.\nCurrently I am trying to find out ways how to track down what's going on\nhere.\n\nStill — thanks everyone for the feedback, it was valuable for me!\n\n\n-- \nVictor Y. Yegorov\n\n2016-08-18 23:06 GMT+03:00 Jeff Janes <[email protected]>:It does account for them, but perhaps not perfectly.  See \"[PERFORM]\nindex fragmentation on insert-only table with non-unique column\" for\nsome arguments on that which might be relevant to you.Thanks for pointing this out, good stuff to know.\nIf you can come up with a data generator which creates data that\nothers can use to reproduce this situation, we can then investigate it\nin more detail.I do not think this has any value anymore. 
I have reconfigured the server:- r_p_c returned to the default 4.0- turned on track_io_timing- reindexed all indexes on the table- went with the suggestion from Jim about partial index on `due_date`, although  delayed it a bit to get a better view on the situationRunning several explains in a row produces the following:    Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on loan_agreements la  (cost=0.42..21469.40 rows=224331 width=13) (actual time=0.040..2687.422 rows=216440 loops=1)      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21806      Buffers: shared hit=226670 read=692 dirtied=48      I/O Timings: read=9.854    Planning time: 1885.219 ms    Execution time: 2712.833 ms        Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on loan_agreements la  (cost=0.42..21469.40 rows=224331 width=13) (actual time=426.027..2273.617 rows=216440 loops=1)      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21806      Buffers: shared hit=227276    Planning time: 0.175 ms    Execution time: 2296.414 ms        Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on loan_agreements la  (cost=0.42..21469.40 rows=224331 width=13) (actual time=0.034..297.113 rows=216440 loops=1)      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21806      Buffers: shared hit=227276    Planning time: 0.173 ms    Execution time: 310.509 ms        Index Scan using idx_loan_agreemnets_loan_id_cond_is_current_true on loan_agreements la  (cost=0.42..21469.40 rows=224331 width=13) (actual time=0.031..286.212 rows=216440 loops=1)      Filter: ('2016-08-11'::date > due_date)      Rows Removed by Filter: 21806      Buffers: shared hit=227276    Planning time: 0.163 ms    Execution time: 299.831 msThis makes me think, that IO is not my issue here and, honestly, I have no clue what can be behind this.What I noticed — queries do experience this kind of hiccups from time to time. CPU and IO monitoring shows no spikes at all, CPU is below 40% all the time.Currently I am trying to find out ways how to track down what's going on here.Still — thanks everyone for the feedback, it was valuable for me!-- Victor Y. Yegorov", "msg_date": "Fri, 19 Aug 2016 14:31:49 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimates on partial index" }, { "msg_contents": "2016-08-18 21:40 GMT+03:00 Victor Yegorov <[email protected]>:\n\n> Oh, that's interesting. I was under impression, that r_p_c reflects IO\n> speed, like — make it smaller for SSDs.\n> To make this query prefer BitmapScan, I need to bump r_p_c to 5.8. And 6.0\n> makes it switch to SeqScan.\n>\n\nI was looking into different databases and queries around — many of them\nprefers to use indexes over SeqScans, even if index is not a \"perfect\"\nmatch,\nlike using index on the 2-nd column of the index (like searching for `rev`\nvia IndexScan over `id,rev` index).\nI need to bump r_p_c to 6 (at least) to make things shift towards\nBtimapScans, and I feel uncertain about such increase.\n\nThis makes me thinking — can this situation be an indication, that tables\nare bloated?\n(I've performed reindexing recently, touching majority of indexes around,\nwhile tables were not touched.)\n\n\n-- \nVictor Y. Yegorov\n\n2016-08-18 21:40 GMT+03:00 Victor Yegorov <[email protected]>:Oh, that's interesting. 
I was under impression, that r_p_c reflects IO\n> speed, like — make it smaller for SSDs.\n> To make this query prefer BitmapScan, I need to bump r_p_c to 5.8. And 6.0\n> makes it switch to SeqScan.\n>\n\nI was looking into different databases and queries around — many of them\nprefer to use indexes over SeqScans, even if the index is not a \"perfect\"\nmatch,\nlike using the 2nd column of an index (like searching for `rev`\nvia IndexScan over an `id,rev` index).\nI need to bump r_p_c to 6 (at least) to make things shift towards\nBitmapScans, and I feel uncertain about such an increase.\n\nThis makes me think — can this situation be an indication that tables\nare bloated?\n(I've performed reindexing recently, touching the majority of indexes around,\nwhile tables were not touched.)\n\n\n-- \nVictor Y. Yegorov", "msg_date": "Fri, 19 Aug 2016 15:50:55 +0300", "msg_from": "Victor Yegorov <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Estimates on partial index" } ]
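A sketch of the partial index Jim suggests, matching the query's predicate; the index name here is made up:

    CREATE INDEX CONCURRENTLY loan_agreements_due_date_current_idx
        ON loan_agreements (due_date)
        WHERE is_current;

With this, the predicate "is_current AND due_date < '2016-08-11'" becomes a range scan over a small index on due_date, instead of walking the entire loan_id partial index and filtering every row on due_date.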
[ { "msg_contents": "On 2016-08-20 08:21, [email protected] wrote:\n> Welcome to the pgsql-performance mailing list!\n> Your password at PostgreSQL Mailing Lists is\n> \n> x8DiA6\n> \n> To leave this mailing list, send the following command in the body\n> of a message to [email protected]:\n> \n> approve x8DiA6 unsubscribe pgsql-performance\n> [email protected]\n> \n> This command will work even if your address changes. For that reason,\n> among others, it is important that you keep a copy of this message.\n> \n> To post a message to the mailing list, send it to\n> [email protected]\n> \n> If you need help or have questions about the mailing list, please\n> contact the people who manage the list by sending a message to\n> [email protected]\n> \n> You can manage your subscription by visiting the following WWW \n> location:\n> \n> <https://lists.postgresql.org/mj/mj_wwwusr/domain=postgresql.org/debasis.moharana%40ipathsolutions.co.in>\nDear Sir/Mam,\n\nI have a PostgreSQL 9.5 instance running on Windows 8 machine with 4GB \nof RAM.This server is mainly used for inserting/updating large amounts \nof data via copy/insert/update commands, and seldom for running select \nqueries.\n\nHere are the relevant configuration parameters I changed:\n\nmax_connections = 100\nshared_buffers = 512MB\neffective_cache_size = 3GB\nwork_mem = 12233kB\nmaintenance_work_mem = 256MB\nmin_wal_size = 1GB max_wal_size = 2GB\ncheckpoint_completion_target = 0.7\nwal_buffers = 16MB\ndefault_statistics_target = 100\n\nAfter setting in postgresql.conf. I run the select query to fetch large \namount of record of 29000 in postgresql but it takes 10.3 seconds but \nthe same query takes 2 seconds for execution in MSSQL.\n\nSo my query is how to improve the perfermance in postgresql.\n\nRegards,\nDebasis Moharana\n.NET Software Developer\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 20 Aug 2016 08:27:09 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql-performance issue" }, { "msg_contents": "2016-08-20 10:27 GMT+02:00 <[email protected]>:\n\n> On 2016-08-20 08:21, [email protected] wrote:\n>\n>> Welcome to the pgsql-performance mailing list!\n>> Your password at PostgreSQL Mailing Lists is\n>>\n>> x8DiA6\n>>\n>> To leave this mailing list, send the following command in the body\n>> of a message to [email protected]:\n>>\n>> approve x8DiA6 unsubscribe pgsql-performance\n>> [email protected]\n>>\n>> This command will work even if your address changes. 
For that reason,\n>> among others, it is important that you keep a copy of this message.\n>>\n>> To post a message to the mailing list, send it to\n>> [email protected]\n>>\n>> If you need help or have questions about the mailing list, please\n>> contact the people who manage the list by sending a message to\n>> [email protected]\n>>\n>> You can manage your subscription by visiting the following WWW location:\n>>\n>> <https://lists.postgresql.org/mj/mj_wwwusr/domain=postgresql\n>> .org/debasis.moharana%40ipathsolutions.co.in>\n>>\n> Dear Sir/Mam,\n>\n> I have a PostgreSQL 9.5 instance running on Windows 8 machine with 4GB of\n> RAM.This server is mainly used for inserting/updating large amounts of data\n> via copy/insert/update commands, and seldom for running select queries.\n>\n> Here are the relevant configuration parameters I changed:\n>\n> max_connections = 100\n> shared_buffers = 512MB\n> effective_cache_size = 3GB\n> work_mem = 12233kB\n> maintenance_work_mem = 256MB\n> min_wal_size = 1GB max_wal_size = 2GB\n> checkpoint_completion_target = 0.7\n> wal_buffers = 16MB\n> default_statistics_target = 100\n>\n> After setting in postgresql.conf. I run the select query to fetch large\n> amount of record of 29000 in postgresql but it takes 10.3 seconds but the\n> same query takes 2 seconds for execution in MSSQL.\n>\n> So my query is how to improve the perfermance in postgresql.\n>\n\nhi\n\nplease, send execution plan of slow query\n\nhttps://www.postgresql.org/docs/current/static/sql-explain.html\nhttps://explain.depesz.com/\n\np.s. Did you do VACUUM and ANALYZE on database?\n\nRegards\n\nPavel\n\n>\n> Regards,\n> Debasis Moharana\n> .NET Software Developer\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2016-08-20 10:27 GMT+02:00 <[email protected]>:On 2016-08-20 08:21, [email protected] wrote:\n\nWelcome to the pgsql-performance mailing list!\nYour password at PostgreSQL Mailing Lists is\n\nx8DiA6\n\nTo leave this mailing list, send the following command in the body\nof a message to [email protected]:\n\napprove x8DiA6 unsubscribe pgsql-performance\[email protected]\n\nThis command will work even if your address changes.  For that reason,\namong others, it is important that you keep a copy of this message.\n\nTo post a message to the mailing list, send it to\n  [email protected]\n\nIf you need help or have questions about the mailing list, please\ncontact the people who manage the list by sending a message to\n  [email protected]\n\nYou can manage your subscription by visiting the following WWW location:\n\n<https://lists.postgresql.org/mj/mj_wwwusr/domain=postgresql.org/debasis.moharana%40ipathsolutions.co.in>\n\nDear Sir/Mam,\n\nI have a PostgreSQL 9.5 instance running on Windows 8 machine with 4GB of RAM.This server is mainly used for inserting/updating large amounts of data via copy/insert/update commands, and seldom for running select queries.\n\nHere are the relevant configuration parameters I changed:\n\nmax_connections = 100\nshared_buffers = 512MB\neffective_cache_size = 3GB\nwork_mem = 12233kB\nmaintenance_work_mem = 256MB\nmin_wal_size = 1GB max_wal_size = 2GB\ncheckpoint_completion_target = 0.7\nwal_buffers = 16MB\ndefault_statistics_target = 100\n\nAfter setting in postgresql.conf. 
I run the select query to fetch large amount of record of 29000 in postgresql but it takes 10.3 seconds but the same query takes 2 seconds for execution in MSSQL.\n\nSo my query is how to improve the perfermance in postgresql.hiplease, send execution plan of slow query https://www.postgresql.org/docs/current/static/sql-explain.htmlhttps://explain.depesz.com/p.s. Did you do VACUUM and ANALYZE on database?RegardsPavel\n\nRegards,\nDebasis Moharana\n.NET Software Developer\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Sat, 20 Aug 2016 10:58:26 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "On 2016-08-20 08:58, Pavel Stehule wrote:\n> 2016-08-20 10:27 GMT+02:00 <[email protected]>:\n> \n>> On 2016-08-20 08:21, [email protected] wrote:\n>> \n>>> Welcome to the pgsql-performance mailing list!\n>>> Your password at PostgreSQL Mailing Lists is\n>>> \n>>> x8DiA6\n>>> \n>>> To leave this mailing list, send the following command in the\n>>> body\n>>> of a message to [email protected]:\n>>> \n>>> approve x8DiA6 unsubscribe pgsql-performance\n>>> [email protected]\n>>> \n>>> This command will work even if your address changes. For that\n>>> reason,\n>>> among others, it is important that you keep a copy of this\n>>> message.\n>>> \n>>> To post a message to the mailing list, send it to\n>>> [email protected]\n>>> \n>>> If you need help or have questions about the mailing list, please\n>>> contact the people who manage the list by sending a message to\n>>> [email protected]\n>>> \n>>> You can manage your subscription by visiting the following WWW\n>>> location:\n>>> \n>>> \n>> \n> <https://lists.postgresql.org/mj/mj_wwwusr/domain=postgresql.org/debasis.moharana%40ipathsolutions.co.in\n>>> [1]>\n>> Dear Sir/Mam,\n>> \n>> I have a PostgreSQL 9.5 instance running on Windows 8 machine with\n>> 4GB of RAM.This server is mainly used for inserting/updating large\n>> amounts of data via copy/insert/update commands, and seldom for\n>> running select queries.\n>> \n>> Here are the relevant configuration parameters I changed:\n>> \n>> max_connections = 100\n>> shared_buffers = 512MB\n>> effective_cache_size = 3GB\n>> work_mem = 12233kB\n>> maintenance_work_mem = 256MB\n>> min_wal_size = 1GB max_wal_size = 2GB\n>> checkpoint_completion_target = 0.7\n>> wal_buffers = 16MB\n>> default_statistics_target = 100\n>> \n>> After setting in postgresql.conf. I run the select query to fetch\n>> large amount of record of 29000 in postgresql but it takes 10.3\n>> seconds but the same query takes 2 seconds for execution in MSSQL.\n>> \n>> So my query is how to improve the perfermance in postgresql.\n> \n> hi\n> \n> please, send execution plan of slow query\n> \n> https://www.postgresql.org/docs/current/static/sql-explain.html [3]\n> https://explain.depesz.com/ [4]\n> \n> p.s. 
\n\nRegards\n\nPavel\n\n> Regards,\n> Debasis Moharana\n> .NET Software Developer\n", "msg_date": "Sat, 20 Aug 2016 10:58:26 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "On 2016-08-20 08:58, Pavel Stehule wrote:\n> please, send the execution plan of the slow query\n>\n> p.s. Did you do VACUUM and ANALYZE on the database?\n\nHi,\n\nPlease check the execution plan details.\n\nThe executed query is: EXPLAIN (ANALYZE, BUFFERS) select * from\ntblPurchaseOrderstock cross join tblPurchaseOrderInfo;\n\n\"Nested Loop  (cost=0.00..507.51 rows=39593 width=224) (actual time=0.032..13.026 rows=39593 loops=1)\"\n\"  Buffers: shared read=8\"\n\"  I/O Timings: read=0.058\"\n\"  ->  Seq Scan on tblpurchaseorderstock  (cost=0.00..7.89 rows=289 width=95) (actual time=0.014..0.082 rows=289 loops=1)\"\n\"        Buffers: shared read=5\"\n\"        I/O Timings: read=0.040\"\n\"  ->  Materialize  (cost=0.00..5.05 rows=137 width=129) (actual time=0.000..0.006 rows=137 loops=289)\"\n\"        Buffers: shared read=3\"\n\"        I/O Timings: read=0.019\"\n\"        ->  Seq Scan on tblpurchaseorderinfo  (cost=0.00..4.37 rows=137 width=129) (actual time=0.011..0.035 rows=137 loops=1)\"\n\"              Buffers: shared read=3\"\n\"              I/O Timings: read=0.019\"\n\"Planning time: 56.052 ms\"\n\"Execution time: 14.038 ms\"\n\nRegards,\nDebasis Moharana\n", "msg_date": "Sat, 20 Aug 2016 11:31:32 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "2016-08-20 13:31 GMT+02:00 <[email protected]>:\n\n> Please check the execution plan details.\n>\n> [...]\n> \"Planning time: 56.052 ms\"\n> \"Execution time: 14.038 ms\"\n\nIs it the same query? It needs only 14 ms.\n\nRegards\n\nPavel\n", "msg_date": "Sat, 20 Aug 2016 13:42:17 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" },
{ "msg_contents": "On 2016-08-20 11:42, Pavel Stehule wrote:\n> Is it the same query? It needs only 14 ms.\n\nHi,\n\nYes, you are right, but in pgAdmin it still takes more time (10.3 sec.).\nPlease check the attached snapshot.\n\nCan you please tell me what we need to set up so that it shows the actual\ntime?\n\nRegards,\nDebasis Moharana\n", "msg_date": "Sat, 20 Aug 2016 11:59:20 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "2016-08-20 13:59 GMT+02:00 <[email protected]>:\n\n> Yes, you are right, but in pgAdmin it still takes more time (10.3 sec.).\n> Please check the attached snapshot.\n\nThe real time is the one you see in the EXPLAIN ANALYZE output. The strange\ntime that you see in pgAdmin can be caused by:\n\na) a pgAdmin issue - pgAdmin is a relatively slow client due to its slow\nresult formatting; the processing time in your application can be much\nbetter, so try another client, or\n\nb) a network issue - the problem is in passing the data from the server to\nthe client.\n\nThe probable variant is a) - pgAdmin is not good for benchmarking - use the\n\"psql\" console instead.
\n\nRegards\n\nPavel\n", "msg_date": "Sat, 20 Aug 2016 14:05:34 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "On 2016-08-20 12:05, Pavel Stehule wrote:\n> The probable variant is a) - pgAdmin is not good for benchmarking - use the\n> \"psql\" console instead.\n\nHi,\n\nActually I am a fresher on this, and I want to connect my application to\nPostgreSQL instead of MSSQL.\n\nIf, as you say, executing the query from the psql console is faster, what\nother options are there to use instead of pgAdmin?\n\nCan you give me some links for reference?\n\nRegards,\nDebasis Moharana\n", "msg_date": "Sat, 20 Aug 2016 12:17:01 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "2016-08-20 14:17 GMT+02:00 <[email protected]>:\n\n> Actually I am a fresher on this, and I want to connect my application to\n> PostgreSQL instead of MSSQL.\n>\n> Can you give me some links for reference?\n\nhttps://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools\nhttp://www.sqlmanager.net/en/products/postgresql/manager
\n\nRegards\n\nPavel\n", "msg_date": "Sat, 20 Aug 2016 14:19:24 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" }, { "msg_contents": "On 2016-08-20 12:19, Pavel Stehule wrote:\n> https://wiki.postgresql.org/wiki/Community_Guide_to_PostgreSQL_GUI_Tools\n> http://www.sqlmanager.net/en/products/postgresql/manager\n\nHi,\n\nThanks a lot, the query now executes faster. Let me check all of these\nthings in PostgreSQL; if I have any further questions I will get back to\nyou.\n\nRegards,\nDebasis Moharana\n", "msg_date": "Mon, 22 Aug 2016 06:01:18 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: pgsql-performance issue" } ]
[ { "msg_contents": "Dear Sir/Madam,\n\nI have a PostgreSQL 9.5 instance running on a Windows 8 machine with 4GB of\nRAM. This server is mainly used for inserting/updating large amounts of data\nvia copy/insert/update commands, and seldom for running select queries.\n\nHere are the relevant configuration parameters I changed:\n\nmax_connections = 100\nshared_buffers = 512MB\neffective_cache_size = 3GB\nwork_mem = 12233kB\nmaintenance_work_mem = 256MB\nmin_wal_size = 1GB\nmax_wal_size = 2GB\ncheckpoint_completion_target = 0.7\nwal_buffers = 16MB\ndefault_statistics_target = 100\n\nAfter setting these in postgresql.conf, I ran a select query that fetches a\nlarge number of records (29000); in PostgreSQL it takes 10.3 seconds, but the\nsame query takes 2 seconds in MSSQL.\n\nSo my question is how to improve the performance in PostgreSQL.\n\nRegards,\nDebasis Moharana\n.NET Software Developer\n", "msg_date": "Sat, 20 Aug 2016 08:38:43 +0000", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "pgsql-performance issue" }, { "msg_contents": "Hi,\n\nOn 2016-08-20 08:38:43 +0000, [email protected] wrote:\n> After setting these in postgresql.conf, I ran a select query that fetches a\n> large number of records (29000); in PostgreSQL it takes 10.3 seconds, but\n> the same query takes 2 seconds in MSSQL.\n\nPlease provide the output of EXPLAIN (ANALYZE, BUFFERS) for your query, and\nthe query itself. Then we'll possibly be able to help you - at the moment we\ndon't have enough information.
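\n\nFor example (an illustrative psql session; \"yourdb\" and the SELECT are\nplaceholders for your actual database and statement):\n\n    psql -d yourdb\n    yourdb=# \\\o plan.txt\n    yourdb=# EXPLAIN (ANALYZE, BUFFERS) SELECT ...;\n    yourdb=# \\\o\n\n\\\o redirects query output to plan.txt, and \\\o alone switches it back to the\nterminal; you can then paste the contents of plan.txt into your reply.\n\nRegards,\n\nAndres\n", "msg_date": "Mon, 29 Aug 2016 13:14:03 -0700", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgsql-performance issue" } ]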
[ { "msg_contents": "Hello,\n\nI have the following tables and query. I would like to get some help to\nfind out why it is slow and how its performance could be improved.\n\nThanks,\nTommi K.\n\n\n*--Table definitions---*\nCREATE TABLE \"Measurement\"\n(\n id bigserial NOT NULL,\n product_id bigserial NOT NULL,\n nominal_data_id bigserial NOT NULL,\n description text,\n serial text,\n measurement_time timestamp without time zone,\n status smallint,\n system_description text,\n CONSTRAINT \"Measurement_pkey\" PRIMARY KEY (id),\n CONSTRAINT \"Measurement_nominal_data_id_fkey\" FOREIGN KEY\n(nominal_data_id)\n REFERENCES \"Nominal_data\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT \"Measurement_product_id_fkey\" FOREIGN KEY (product_id)\n REFERENCES \"Product\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX measurement_time_index\n ON \"Measurement\"\n USING btree\n (measurement_time);\nALTER TABLE \"Measurement\" CLUSTER ON measurement_time_index;\n\nCREATE TABLE \"Product\"\n(\n id bigserial NOT NULL,\n name text,\n description text,\n info text,\n system_name text,\n CONSTRAINT \"Product_pkey\" PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\n\n\nCREATE TABLE \"Extra_info\"\n(\n id bigserial NOT NULL,\n measurement_id bigserial NOT NULL,\n name text,\n description text,\n info text,\n type text,\n value_string text,\n value_double double precision,\n value_integer bigint,\n value_bool boolean,\n CONSTRAINT \"Extra_info_pkey\" PRIMARY KEY (id),\n CONSTRAINT \"Extra_info_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n REFERENCES \"Measurement\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX extra_info_measurement_id_index\n ON \"Extra_info\"\n USING btree\n (measurement_id);\n\nCREATE TABLE \"Feature\"\n(\n id bigserial NOT NULL,\n measurement_id bigserial NOT NULL,\n name text,\n description text,\n info text,\n CONSTRAINT \"Feature_pkey\" PRIMARY KEY (id),\n CONSTRAINT \"Feature_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n REFERENCES \"Measurement\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX feature_measurement_id_and_name_index\n ON \"Feature\"\n USING btree\n (measurement_id, name COLLATE pg_catalog.\"default\");\n\nCREATE INDEX feature_measurement_id_index\n ON \"Feature\"\n USING hash\n (measurement_id);\n\n\nCREATE TABLE \"Point\"\n(\n id bigserial NOT NULL,\n feature_id bigserial NOT NULL,\n x double precision,\n y double precision,\n z double precision,\n status_x smallint,\n status_y smallint,\n status_z smallint,\n difference_x double precision,\n difference_y double precision,\n difference_z double precision,\n CONSTRAINT \"Point_pkey\" PRIMARY KEY (id),\n CONSTRAINT \"Point_feature_id_fkey\" FOREIGN KEY (feature_id)\n REFERENCES \"Feature\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX point_feature_id_index\n ON \"Point\"\n USING btree\n (feature_id);\n\nCREATE TABLE \"Warning\"\n(\n id bigserial NOT NULL,\n feature_id bigserial NOT NULL,\n \"number\" smallint,\n info text,\n CONSTRAINT \"Warning_pkey\" PRIMARY KEY (id),\n CONSTRAINT \"Warning_feature_id_fkey\" FOREIGN KEY (feature_id)\n REFERENCES \"Feature\" (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\n\nCREATE INDEX warning_feature_id_index\n ON \"Warning\"\n USING btree\n 
(feature_id);\n\n\n*---Query---*\nSELECT\nf.name,\nf.description,\nSUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\nwarning.id IS NULL THEN 1 ELSE 0 END) AS green_count,\nSUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\nwarning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,\nSUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\nAND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id\nIS NULL THEN 1 ELSE 0 END) AS yellow_count,\nSUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\nAND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id\nIS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count,\nSUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\nwarning.id IS NULL THEN 1 ELSE 0 END) AS red_count,\nSUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\nwarning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count,\nSUM(CASE WHEN (p.status_x = 1000 OR p.status_y = 1000 OR p.status_z = 1000)\nAND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS unable_to_measure_count\nFROM \"Point\" p\nJOIN \"Feature\" f ON f.id = p.feature_id\nJOIN \"Measurement\" measurement ON measurement.id = f.measurement_id\nJOIN \"Product\" product ON product.id = measurement.product_id\nLEFT JOIN \"Warning\" warning ON f.id = warning.feature_id\nWHERE (product.name ILIKE 'Part 1') AND\nmeasurement.measurement_start_time >= '2015-06-18 17:00:00' AND\nmeasurement.measurement_start_time <= '2015-06-18 18:00:00' AND\nmeasurement.id NOT IN(SELECT measurement_id FROM \"Extra_info\" e\nWHERE e.measurement_id = measurement.id AND e.description = 'Clamp' AND\ne.value_string ILIKE 'Clamped%')\n GROUP BY f.name, f.description;\n\n\n*---Explain Analyze---*\nGroupAggregate (cost=1336999.08..1337569.18 rows=5562 width=33) (actual\ntime=6223.622..6272.321 rows=255 loops=1)\n Buffers: shared hit=263552 read=996, temp read=119 written=119\n -> Sort (cost=1336999.08..1337012.98 rows=5562 width=33) (actual\ntime=6223.262..6231.106 rows=26265 loops=1)\n Sort Key: f.name, f.description\n Sort Method: external merge Disk: 936kB\n Buffers: shared hit=263552 read=996, temp read=119 written=119\n -> Nested Loop Left Join (cost=0.00..1336653.08 rows=5562\nwidth=33) (actual time=55.792..6128.875 rows=26265 loops=1)\n Buffers: shared hit=263552 read=996\n -> Nested Loop (cost=0.00..1220487.17 rows=5562 width=33)\n(actual time=55.773..5910.852 rows=26265 loops=1)\n Buffers: shared hit=182401 read=954\n -> Nested Loop (cost=0.00..22593.53 rows=8272\nwidth=27) (actual time=30.980..3252.869 rows=38831 loops=1)\n Buffers: shared hit=972 read=528\n -> Nested Loop (cost=0.00..657.24 rows=22\nwidth=8) (actual time=0.102..109.577 rows=103 loops=1)\n Join Filter: (measurement.product_id =\nproduct.id)\n Rows Removed by Join Filter: 18\n Buffers: shared hit=484 read=9\n -> Seq Scan on \"Product\" product\n (cost=0.00..1.04 rows=1 width=8) (actual time=0.010..0.019 rows=1 loops=1)\n Filter: (name ~~* 'Part 1'::text)\n Rows Removed by Filter: 2\n Buffers: shared hit=1\n -> Index Scan using\nmeasurement_start_time_index on \"Measurement\" measurement\n (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121\nloops=1)\n Index Cond: ((measurement_start_time\n>= '2015-06-18 17:00:00'::timestamp without time zone) AND\n(measurement_start_time <= '2015-06-18 18:00:00'::timestamp without time\nzone))\n Filter: (NOT (SubPlan 1))\n Buffers: shared hit=483 read=9\n SubPlan 1\n -> Index 
Scan using\nextra_info_measurement_id_index on \"Extra_info\" e (cost=0.00..9.66 rows=1\nwidth=8) (actual time=0.900..0.900 rows=0 loops=121)\n Index Cond: (measurement_id =\nmeasurement.id)\n Filter: ((value_string ~~*\n'Clamped%'::text) AND (description = 'Clamp'::text))\n Rows Removed by Filter: 2\n Buffers: shared hit=479 read=7\n -> Index Scan using\nfeature_measurement_id_and_name_index on \"Feature\" rf (cost=0.00..993.40\nrows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)\n Index Cond: (measurement_id = measurement.id\n)\n Buffers: shared hit=488 read=519\n -> Index Scan using point_feature_id_index on \"Point\"\np (cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067 rows=1\nloops=38831)\n Index Cond: (feature_id = f.id)\n Buffers: shared hit=181429 read=426\n -> Index Scan using warning_feature_id_index on \"Warning\"\nwarning (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007\nrows=0 loops=26265)\n Index Cond: (f.id = feature_id)\n Buffers: shared hit=81151 read=42\nTotal runtime: 6273.312 ms\n\n\n*---Version---*\nPostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n\n\n*---Table sizes---*\nExtra_info 1223400 rows\nFeature 185436000 rows\nMeasurement 500000 rows\nPoint 124681000 rows\nWarning 11766800 rows\n\n*---Hardware---*\nIntel Core i5-2320 CPU 3.00GHz (4 CPUs)\n6GB Memory\n64-bit Operating System (Windows 7 Professional)\nWD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache\n\n*---History---*\nQuery gets slower as more data is added to the database\n\n*---Maintenance---*\nAutovacuum is used with default settings
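\n\n*---Possible rewrite (untested sketch)---*\nThe correlated NOT IN above can usually be expressed as an anti-join with\nNOT EXISTS instead; the planner tends to estimate that form better than the\nper-row SubPlan filter visible in the plan. An equivalent, untested sketch\nof just that condition:\n\nAND NOT EXISTS (SELECT 1 FROM \"Extra_info\" e\nWHERE e.measurement_id = measurement.id\nAND e.description = 'Clamp' AND e.value_string ILIKE 'Clamped%')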
\"Feature_measurement_id_fkey\" FOREIGN KEY (measurement_id)      REFERENCES \"Measurement\" (id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION)WITH (  OIDS=FALSE);CREATE INDEX feature_measurement_id_and_name_index  ON \"Feature\"  USING btree  (measurement_id, name COLLATE pg_catalog.\"default\");CREATE INDEX feature_measurement_id_index  ON \"Feature\"  USING hash  (measurement_id);CREATE TABLE \"Point\"(  id bigserial NOT NULL,  feature_id bigserial NOT NULL,  x double precision,  y double precision,  z double precision,  status_x smallint,  status_y smallint,  status_z smallint,  difference_x double precision,  difference_y double precision,  difference_z double precision,  CONSTRAINT \"Point_pkey\" PRIMARY KEY (id),  CONSTRAINT \"Point_feature_id_fkey\" FOREIGN KEY (feature_id)      REFERENCES \"Feature\" (id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION)WITH (  OIDS=FALSE);CREATE INDEX point_feature_id_index  ON \"Point\"  USING btree  (feature_id);CREATE TABLE \"Warning\"(  id bigserial NOT NULL,  feature_id bigserial NOT NULL,  \"number\" smallint,  info text,  CONSTRAINT \"Warning_pkey\" PRIMARY KEY (id),  CONSTRAINT \"Warning_feature_id_fkey\" FOREIGN KEY (feature_id)      REFERENCES \"Feature\" (id) MATCH SIMPLE      ON UPDATE NO ACTION ON DELETE NO ACTION)WITH (  OIDS=FALSE);CREATE INDEX warning_feature_id_index  ON \"Warning\"  USING btree  (feature_id);---Query---SELECTf.name, f.description,SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND warning.id IS NULL THEN 1 ELSE 0 END) AS green_count, SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0) AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NULL THEN 1 ELSE 0 END) AS yellow_count, SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0) AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count, SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NULL THEN 1 ELSE 0 END) AS red_count, SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count, SUM(CASE WHEN (p.status_x = 1000 OR p.status_y = 1000 OR p.status_z = 1000) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS unable_to_measure_count FROM \"Point\" p JOIN \"Feature\" f ON f.id = p.feature_idJOIN \"Measurement\" measurement ON measurement.id = f.measurement_id JOIN \"Product\" product ON product.id = measurement.product_id LEFT JOIN \"Warning\" warning ON f.id = warning.feature_idWHERE (product.name ILIKE 'Part 1') AND measurement.measurement_start_time >= '2015-06-18 17:00:00' AND measurement.measurement_start_time <= '2015-06-18 18:00:00' AND measurement.id NOT IN(SELECT measurement_id FROM \"Extra_info\" e WHERE e.measurement_id = measurement.id AND e.description = 'Clamp' AND e.value_string ILIKE 'Clamped%') GROUP BY f.name, f.description;---Explain Analyze---GroupAggregate  (cost=1336999.08..1337569.18 rows=5562 width=33) (actual time=6223.622..6272.321 rows=255 loops=1)  Buffers: shared hit=263552 read=996, temp read=119 written=119  ->  Sort  (cost=1336999.08..1337012.98 rows=5562 width=33) (actual time=6223.262..6231.106 rows=26265 loops=1)        Sort Key: f.name, f.description        Sort Method: external merge  Disk: 936kB        Buffers: 
shared hit=263552 read=996, temp read=119 written=119        ->  Nested Loop Left Join  (cost=0.00..1336653.08 rows=5562 width=33) (actual time=55.792..6128.875 rows=26265 loops=1)              Buffers: shared hit=263552 read=996              ->  Nested Loop  (cost=0.00..1220487.17 rows=5562 width=33) (actual time=55.773..5910.852 rows=26265 loops=1)                    Buffers: shared hit=182401 read=954                    ->  Nested Loop  (cost=0.00..22593.53 rows=8272 width=27) (actual time=30.980..3252.869 rows=38831 loops=1)                          Buffers: shared hit=972 read=528                          ->  Nested Loop  (cost=0.00..657.24 rows=22 width=8) (actual time=0.102..109.577 rows=103 loops=1)                                Join Filter: (measurement.product_id = product.id)                                Rows Removed by Join Filter: 18                                Buffers: shared hit=484 read=9                                ->  Seq Scan on \"Product\" product  (cost=0.00..1.04 rows=1 width=8) (actual time=0.010..0.019 rows=1 loops=1)                                      Filter: (name ~~* 'Part 1'::text)                                      Rows Removed by Filter: 2                                      Buffers: shared hit=1                                ->  Index Scan using measurement_start_time_index on \"Measurement\" measurement  (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121 loops=1)                                      Index Cond: ((measurement_start_time >= '2015-06-18 17:00:00'::timestamp without time zone) AND (measurement_start_time <= '2015-06-18 18:00:00'::timestamp without time zone))                                      Filter: (NOT (SubPlan 1))                                      Buffers: shared hit=483 read=9                                      SubPlan 1                                        ->  Index Scan using extra_info_measurement_id_index on \"Extra_info\" e  (cost=0.00..9.66 rows=1 width=8) (actual time=0.900..0.900 rows=0 loops=121)                                              Index Cond: (measurement_id = measurement.id)                                              Filter: ((value_string ~~* 'Clamped%'::text) AND (description = 'Clamp'::text))                                              Rows Removed by Filter: 2                                              Buffers: shared hit=479 read=7                          ->  Index Scan using feature_measurement_id_and_name_index on \"Feature\" rf  (cost=0.00..993.40 rows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)                                Index Cond: (measurement_id = measurement.id)                                Buffers: shared hit=488 read=519                    ->  Index Scan using point_feature_id_index on \"Point\" p  (cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067 rows=1 loops=38831)                          Index Cond: (feature_id = f.id)                          Buffers: shared hit=181429 read=426              ->  Index Scan using warning_feature_id_index on \"Warning\" warning  (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007 rows=0 loops=26265)                    Index Cond: (f.id = feature_id)                    Buffers: shared hit=81151 read=42Total runtime: 6273.312 ms---Version---PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit---Table sizes---Extra_info 1223400 rowsFeature 185436000 rowsMeasurement 500000 rowsPoint 124681000 rowsWarning 11766800 rows---Hardware---Intel Core i5-2320 CPU 3.00GHz (4 
", "msg_date": "Wed, 24 Aug 2016 14:35:23 +0300", "msg_from": "Tommi Kaksonen <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query with big tables" }, { "msg_contents": "Tommi Kaksonen <[email protected]> wrote:\n\n> ---Version---\n> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n\ncurrent point release for 9.2 is 9.2.18, you are some years behind.\n\nThe plan seems okay to me, apart from the on-disk sort: increase\nwork_mem to avoid that.\n\nIf I were you I would switch to PG 9.5 - or wait for 9.6 and parallel\nexecution of aggregates.\n\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 25 Aug 2016 12:36:19 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" } ]
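A concrete way to apply the work_mem advice above, scoped to one session (the 32MB figure is an illustrative guess, not a value from the thread):

SET work_mem = '32MB';
-- re-run the query: the plan's "Sort Method: external merge  Disk: 936kB"
-- should become "Sort Method: quicksort" once the sort fits in memory
RESET work_mem;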
[ { "msg_contents": "Hello,\nthanks for the response. I did not get the response to my email even though\nI am subscribed to the pgsql-performance mail list. Let's hope that I get\nthe next one :)\n\nIncreasing work_mem did not have great impact on the performance. But I\nwill try to update the PostgreSQL version to see if it speeds up things.\n\nHowever is there way to keep query time constant as the database size\ngrows. Should I use partitioning or partial indexes?\n\nBest Regards,\nTommi Kaksonen\n\nHello,thanks for the response. I did not get the response to my email even though I am subscribed to the pgsql-performance mail list. Let's hope that I get the next one :)Increasing work_mem did not have great impact on the performance. But I will try to update the PostgreSQL version to see if it speeds up things.However is there way to keep query time constant as the database size grows. Should I use partitioning or partial indexes?Best Regards,Tommi Kaksonen", "msg_date": "Fri, 26 Aug 2016 16:17:49 +0300", "msg_from": "Tommi K <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:\n\n> Hello,\n> thanks for the response. I did not get the response to my email even\n> though I am subscribed to the pgsql-performance mail list. Let's hope that\n> I get the next one :)\n>\n\nPlease include the email you are replying to when you respond. It saves\neveryone time if they don't have to dig up your old emails, and many of us\ndiscard old emails anyway and have no idea what you wrote before.\n\nCraig\n\n\n> Increasing work_mem did not have great impact on the performance. But I\n> will try to update the PostgreSQL version to see if it speeds up things.\n>\n> However is there way to keep query time constant as the database size\n> grows. Should I use partitioning or partial indexes?\n>\n> Best Regards,\n> Tommi Kaksonen\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\nOn Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:Hello,thanks for the response. I did not get the response to my email even though I am subscribed to the pgsql-performance mail list. Let's hope that I get the next one :)Please include the email you are replying to when you respond. It saves everyone time if they don't have to dig up your old emails, and many of us discard old emails anyway and have no idea what you wrote before.CraigIncreasing work_mem did not have great impact on the performance. But I will try to update the PostgreSQL version to see if it speeds up things.However is there way to keep query time constant as the database size grows. Should I use partitioning or partial indexes?Best Regards,Tommi Kaksonen\n-- ---------------------------------Craig A. JamesChief Technology OfficereMolecules, Inc.---------------------------------", "msg_date": "Fri, 26 Aug 2016 06:40:20 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "Ok, sorry that I did not add the original message. I thought that it would\nbe automatically added to the message thread.\n\nHere is the question again:\n\nIs there way to keep query time constant as the database size grows. Should\nI use partitioning or partial indexes?\n\nThanks,\nTommi Kaksonen\n\n\n\n> Hello,\n>\n> I have the following tables and query. 
I would like to get some help to\nfind out why it is slow and how its performance could be improved.\n>\n> Thanks,\n> Tommi K.\n>\n>\n> --Table definitions---\n> CREATE TABLE \"Measurement\"\n> (\n> id bigserial NOT NULL,\n> product_id bigserial NOT NULL,\n> nominal_data_id bigserial NOT NULL,\n> description text,\n> serial text,\n> measurement_time timestamp without time zone,\n> status smallint,\n> system_description text,\n> CONSTRAINT \"Measurement_pkey\" PRIMARY KEY (id),\n> CONSTRAINT \"Measurement_nominal_data_id_fkey\" FOREIGN KEY\n(nominal_data_id)\n> REFERENCES \"Nominal_data\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT \"Measurement_product_id_fkey\" FOREIGN KEY (product_id)\n> REFERENCES \"Product\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX measurement_time_index\n> ON \"Measurement\"\n> USING btree\n> (measurement_time);\n> ALTER TABLE \"Measurement\" CLUSTER ON measurement_time_index;\n>\n> CREATE TABLE \"Product\"\n> (\n> id bigserial NOT NULL,\n> name text,\n> description text,\n> info text,\n> system_name text,\n> CONSTRAINT \"Product_pkey\" PRIMARY KEY (id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> CREATE TABLE \"Extra_info\"\n> (\n> id bigserial NOT NULL,\n> measurement_id bigserial NOT NULL,\n> name text,\n> description text,\n> info text,\n> type text,\n> value_string text,\n> value_double double precision,\n> value_integer bigint,\n> value_bool boolean,\n> CONSTRAINT \"Extra_info_pkey\" PRIMARY KEY (id),\n> CONSTRAINT \"Extra_info_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n> REFERENCES \"Measurement\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX extra_info_measurement_id_index\n> ON \"Extra_info\"\n> USING btree\n> (measurement_id);\n>\n> CREATE TABLE \"Feature\"\n> (\n> id bigserial NOT NULL,\n> measurement_id bigserial NOT NULL,\n> name text,\n> description text,\n> info text,\n> CONSTRAINT \"Feature_pkey\" PRIMARY KEY (id),\n> CONSTRAINT \"Feature_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n> REFERENCES \"Measurement\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX feature_measurement_id_and_name_index\n> ON \"Feature\"\n> USING btree\n> (measurement_id, name COLLATE pg_catalog.\"default\");\n>\n> CREATE INDEX feature_measurement_id_index\n> ON \"Feature\"\n> USING hash\n> (measurement_id);\n>\n>\n> CREATE TABLE \"Point\"\n> (\n> id bigserial NOT NULL,\n> feature_id bigserial NOT NULL,\n> x double precision,\n> y double precision,\n> z double precision,\n> status_x smallint,\n> status_y smallint,\n> status_z smallint,\n> difference_x double precision,\n> difference_y double precision,\n> difference_z double precision,\n> CONSTRAINT \"Point_pkey\" PRIMARY KEY (id),\n> CONSTRAINT \"Point_feature_id_fkey\" FOREIGN KEY (feature_id)\n> REFERENCES \"Feature\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE INDEX point_feature_id_index\n> ON \"Point\"\n> USING btree\n> (feature_id);\n>\n> CREATE TABLE \"Warning\"\n> (\n> id bigserial NOT NULL,\n> feature_id bigserial NOT NULL,\n> \"number\" smallint,\n> info text,\n> CONSTRAINT \"Warning_pkey\" PRIMARY KEY (id),\n> CONSTRAINT \"Warning_feature_id_fkey\" FOREIGN KEY (feature_id)\n> REFERENCES \"Feature\" (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> 
CREATE INDEX warning_feature_id_index\n> ON \"Warning\"\n> USING btree\n> (feature_id);\n>\n>\n> ---Query---\n> SELECT\n> f.name,\n> f.description,\n> SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\nwarning.id IS NULL THEN 1 ELSE 0 END) AS green_count,\n> SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\nwarning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,\n> SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\nAND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id\nIS NULL THEN 1 ELSE 0 END) AS yellow_count,\n> SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\nAND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id\nIS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count,\n> SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\nwarning.id IS NULL THEN 1 ELSE 0 END) AS red_count,\n> SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\nwarning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count,\n> SUM(CASE WHEN (p.status_x = 1000 OR p.status_y = 1000 OR p.status_z =\n1000) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS\nunable_to_measure_count\n> FROM \"Point\" p\n> JOIN \"Feature\" f ON f.id = p.feature_id\n> JOIN \"Measurement\" measurement ON measurement.id = f.measurement_id\n> JOIN \"Product\" product ON product.id = measurement.product_id\n> LEFT JOIN \"Warning\" warning ON f.id = warning.feature_id\n> WHERE (product.name ILIKE 'Part 1') AND\n> measurement.measurement_start_time >= '2015-06-18 17:00:00' AND\n> measurement.measurement_start_time <= '2015-06-18 18:00:00' AND\n> measurement.id NOT IN(SELECT measurement_id FROM \"Extra_info\" e\n> WHERE e.measurement_id = measurement.id AND e.description = 'Clamp' AND\ne.value_string ILIKE 'Clamped%')\n> GROUP BY f.name, f.description;\n>\n>\n> ---Explain Analyze---\n> GroupAggregate (cost=1336999.08..1337569.18 rows=5562 width=33) (actual\ntime=6223.622..6272.321 rows=255 loops=1)\n> Buffers: shared hit=263552 read=996, temp read=119 written=119\n> -> Sort (cost=1336999.08..1337012.98 rows=5562 width=33) (actual\ntime=6223.262..6231.106 rows=26265 loops=1)\n> Sort Key: f.name, f.description\n> Sort Method: external merge Disk: 936kB\n> Buffers: shared hit=263552 read=996, temp read=119 written=119\n> -> Nested Loop Left Join (cost=0.00..1336653.08 rows=5562\nwidth=33) (actual time=55.792..6128.875 rows=26265 loops=1)\n> Buffers: shared hit=263552 read=996\n> -> Nested Loop (cost=0.00..1220487.17 rows=5562 width=33)\n(actual time=55.773..5910.852 rows=26265 loops=1)\n> Buffers: shared hit=182401 read=954\n> -> Nested Loop (cost=0.00..22593.53 rows=8272\nwidth=27) (actual time=30.980..3252.869 rows=38831 loops=1)\n> Buffers: shared hit=972 read=528\n> -> Nested Loop (cost=0.00..657.24 rows=22\nwidth=8) (actual time=0.102..109.577 rows=103 loops=1)\n> Join Filter: (measurement.product_id =\nproduct.id)\n> Rows Removed by Join Filter: 18\n> Buffers: shared hit=484 read=9\n> -> Seq Scan on \"Product\" product\n (cost=0.00..1.04 rows=1 width=8) (actual time=0.010..0.019 rows=1 loops=1)\n> Filter: (name ~~* 'Part 1'::text)\n> Rows Removed by Filter: 2\n> Buffers: shared hit=1\n> -> Index Scan using\nmeasurement_start_time_index on \"Measurement\" measurement\n (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121\nloops=1)\n> Index Cond:\n((measurement_start_time >= '2015-06-18 17:00:00'::timestamp without time\nzone) AND 
(measurement_start_time <= '2015-06-18 18:00:00'::timestamp\nwithout time zone))\n> Filter: (NOT (SubPlan 1))\n> Buffers: shared hit=483 read=9\n> SubPlan 1\n> -> Index Scan using\nextra_info_measurement_id_index on \"Extra_info\" e (cost=0.00..9.66 rows=1\nwidth=8) (actual time=0.900..0.900 rows=0 loops=121)\n> Index Cond: (measurement_id\n= measurement.id)\n> Filter: ((value_string ~~*\n'Clamped%'::text) AND (description = 'Clamp'::text))\n> Rows Removed by Filter: 2\n> Buffers: shared hit=479\nread=7\n> -> Index Scan using\nfeature_measurement_id_and_name_index on \"Feature\" rf (cost=0.00..993.40\nrows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)\n> Index Cond: (measurement_id =\nmeasurement.id)\n> Buffers: shared hit=488 read=519\n> -> Index Scan using point_feature_id_index on\n\"Point\" p (cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067\nrows=1 loops=38831)\n> Index Cond: (feature_id = f.id)\n> Buffers: shared hit=181429 read=426\n> -> Index Scan using warning_feature_id_index on \"Warning\"\nwarning (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007\nrows=0 loops=26265)\n> Index Cond: (f.id = feature_id)\n> Buffers: shared hit=81151 read=42\n> Total runtime: 6273.312 ms\n>\n>\n> ---Version---\n> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n>\n>\n> ---Table sizes---\n> Extra_info 1223400 rows\n> Feature 185436000 rows\n> Measurement 500000 rows\n> Point 124681000 rows\n> Warning 11766800 rows\n>\n> ---Hardware---\n> Intel Core i5-2320 CPU 3.00GHz (4 CPUs)\n> 6GB Memory\n> 64-bit Operating System (Windows 7 Professional)\n> WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache\n>\n> ---History---\n> Query gets slower as more data is added to the database\n>\n> ---Maintenance---\n> Autovacuum is used with default settings\n>\n> Tommi Kaksonen <t2nn2t(at)gmail(dot)com> wrote:\n>\n> ------------------------------------------------------------------------\n>\n> > ---Version---\n> > PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n>\n> current point release for 9.2 is 9.2.18, you are some years behind.\n>\n> The plan seems okay to me, apart from the on-disk sort: increase\n> work_mem to avoid that.\n>\n> If I were you I would switch to PG 9.5 - or wait for 9.6 and parallel\n> execution of aggregates.\n>\n>\n>\n> Regards, Andreas Kretschmer\n> --\n> Andreas Kretschmer\n> http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n> ------------------------------------------------------------------------\n>\n> Hello,\n> thanks for the response. I did not get the response to my email even\nthough I am subscribed to the pgsql-performance mail list. Let's hope that\nI get the next one :)\n>\n> Increasing work_mem did not have a great impact on the performance. But I\nwill try to update the PostgreSQL version to see if it speeds things up.\n>\n> However, is there a way to keep the query time constant as the database\nsize grows? Should I use partitioning or partial indexes?\n>\n> Best Regards,\n> Tommi Kaksonen\n>\n> ------------------------------------------------------------------------\n>\n> Please include the email you are replying to when you respond. It saves\n> everyone time if they don't have to dig up your old emails, and many of us\n> discard old emails anyway and have no idea what you wrote before.\n>\n> Craig
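\n\nTo be clear about what I mean by partitioning: on 9.2 this would be the\nusual inheritance-based, time-partitioned layout, roughly like the untested\nsketch below (the child table and index names are invented):\n\nCREATE TABLE \"Measurement_2015_06\" (\n    CHECK (measurement_time >= '2015-06-01' AND measurement_time < '2015-07-01')\n) INHERITS (\"Measurement\");\n\nCREATE INDEX measurement_2015_06_time_index\n    ON \"Measurement_2015_06\" (measurement_time);\n\n-- with constraint_exclusion = partition, time-bounded queries would then\n-- scan only the matching child tables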
", "msg_date": "Fri, 26 Aug 2016 17:25:05 +0300", "msg_from": "Tommi K <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Tommi K\nSent: Friday, August 26, 2016 7:25 AM\nTo: Craig James <[email protected]>\nCc: andreas kretschmer <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Slow query with big tables\n\n \n\nOk, sorry that I did not add the original message. I thought that it would be automatically added to the message thread.\n\n \n\nHere is the question again:\n\n \n\nIs there a way to keep the query time constant as the database size grows? Should I use partitioning or partial indexes?\n\n \n\nThanks,\n\nTommi Kaksonen\n\n \n\n \n\n \n\n> Hello, \n\n> \n\n> I have the following tables and query. I would like to get some help to find out why it is slow and how its performance could be improved.\n\n> \n\n> Thanks,\n\n> Tommi K.\n\n> \n\n> \n\n> --Table definitions---\n\n> CREATE TABLE \"Measurement\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> product_id bigserial NOT NULL,\n\n> nominal_data_id bigserial NOT NULL,\n\n> description text,\n\n> serial text,\n\n> measurement_time timestamp without time zone,\n\n> status smallint,\n\n> system_description text,\n\n> CONSTRAINT \"Measurement_pkey\" PRIMARY KEY (id),\n\n> CONSTRAINT \"Measurement_nominal_data_id_fkey\" FOREIGN KEY (nominal_data_id)\n\n> REFERENCES \"Nominal_data\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n\n> CONSTRAINT \"Measurement_product_id_fkey\" FOREIGN KEY (product_id)\n\n> REFERENCES \"Product\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> CREATE INDEX measurement_time_index\n\n> ON \"Measurement\"\n\n> USING btree\n\n> (measurement_time);\n\n> ALTER TABLE \"Measurement\" CLUSTER ON measurement_time_index;\n\n> \n\n> CREATE TABLE \"Product\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> name text,\n\n> description text,\n\n> info text,\n\n> system_name text,\n\n> CONSTRAINT \"Product_pkey\" PRIMARY KEY (id)\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> \n\n> CREATE TABLE \"Extra_info\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> measurement_id bigserial NOT NULL,\n\n> name text,\n\n> description text,\n\n> info text,\n\n> type text,\n\n> value_string text,\n\n> value_double double precision,\n\n> value_integer bigint,\n\n> value_bool boolean,\n\n> CONSTRAINT \"Extra_info_pkey\" PRIMARY KEY (id),\n\n> CONSTRAINT \"Extra_info_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n\n> REFERENCES \"Measurement\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO 
ACTION\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> CREATE INDEX extra_info_measurement_id_index\n\n> ON \"Extra_info\"\n\n> USING btree\n\n> (measurement_id);\n\n> \n\n> CREATE TABLE \"Feature\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> measurement_id bigserial NOT NULL,\n\n> name text,\n\n> description text,\n\n> info text,\n\n> CONSTRAINT \"Feature_pkey\" PRIMARY KEY (id),\n\n> CONSTRAINT \"Feature_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n\n> REFERENCES \"Measurement\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> CREATE INDEX feature_measurement_id_and_name_index\n\n> ON \"Feature\"\n\n> USING btree\n\n> (measurement_id, name COLLATE pg_catalog.\"default\");\n\n> \n\n> CREATE INDEX feature_measurement_id_index\n\n> ON \"Feature\"\n\n> USING hash\n\n> (measurement_id);\n\n> \n\n> \n\n> CREATE TABLE \"Point\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> feature_id bigserial NOT NULL,\n\n> x double precision,\n\n> y double precision,\n\n> z double precision,\n\n> status_x smallint,\n\n> status_y smallint,\n\n> status_z smallint,\n\n> difference_x double precision,\n\n> difference_y double precision,\n\n> difference_z double precision,\n\n> CONSTRAINT \"Point_pkey\" PRIMARY KEY (id),\n\n> CONSTRAINT \"Point_feature_id_fkey\" FOREIGN KEY (feature_id)\n\n> REFERENCES \"Feature\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> CREATE INDEX point_feature_id_index\n\n> ON \"Point\"\n\n> USING btree\n\n> (feature_id);\n\n> \n\n> CREATE TABLE \"Warning\"\n\n> (\n\n> id bigserial NOT NULL,\n\n> feature_id bigserial NOT NULL,\n\n> \"number\" smallint,\n\n> info text,\n\n> CONSTRAINT \"Warning_pkey\" PRIMARY KEY (id),\n\n> CONSTRAINT \"Warning_feature_id_fkey\" FOREIGN KEY (feature_id)\n\n> REFERENCES \"Feature\" (id) MATCH SIMPLE\n\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n\n> )\n\n> WITH (\n\n> OIDS=FALSE\n\n> );\n\n> \n\n> CREATE INDEX warning_feature_id_index\n\n> ON \"Warning\"\n\n> USING btree\n\n> (feature_id);\n\n> \n\n> \n\n> ---Query---\n\n> SELECT\n\n> f.name, \n\n> f.description,\n\n> SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND warning.id IS NULL THEN 1 ELSE 0 END) AS green_count, \n\n> SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,\n\n> SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0) AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NULL THEN 1 ELSE 0 END) AS yellow_count, \n\n> SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0) AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count, \n\n> SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NULL THEN 1 ELSE 0 END) AS red_count, \n\n> SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count, \n\n> SUM(CASE WHEN (p.status_x = 1000 OR p.status_y = 1000 OR p.status_z = 1000) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS unable_to_measure_count \n\n> FROM \"Point\" p \n\n> JOIN \"Feature\" f ON f.id = p.feature_id\n\n> JOIN \"Measurement\" 
measurement ON measurement.id = f.measurement_id \n\n> JOIN \"Product\" product ON product.id = measurement.product_id \n\n> LEFT JOIN \"Warning\" warning ON f.id = warning.feature_id\n\n> WHERE (product.name ILIKE 'Part 1') AND \n\n> measurement.measurement_start_time >= '2015-06-18 17:00:00' AND \n\n> measurement.measurement_start_time <= '2015-06-18 18:00:00' AND \n\n> measurement.id NOT IN(SELECT measurement_id FROM \"Extra_info\" e \n\n> WHERE e.measurement_id = measurement.id AND e.description = 'Clamp' AND e.value_string ILIKE 'Clamped%')\n\n> GROUP BY f.name, f.description;\n\n> \n\n> \n\n> ---Explain Analyze---\n\n> GroupAggregate (cost=1336999.08..1337569.18 rows=5562 width=33) (actual time=6223.622..6272.321 rows=255 loops=1)\n\n> Buffers: shared hit=263552 read=996, temp read=119 written=119\n\n> -> Sort (cost=1336999.08..1337012.98 rows=5562 width=33) (actual time=6223.262..6231.106 rows=26265 loops=1)\n\n> Sort Key: f.name, f.description\n\n> Sort Method: external merge Disk: 936kB\n\n> Buffers: shared hit=263552 read=996, temp read=119 written=119\n\n> -> Nested Loop Left Join (cost=0.00..1336653.08 rows=5562 width=33) (actual time=55.792..6128.875 rows=26265 loops=1)\n\n> Buffers: shared hit=263552 read=996\n\n> -> Nested Loop (cost=0.00..1220487.17 rows=5562 width=33) (actual time=55.773..5910.852 rows=26265 loops=1)\n\n> Buffers: shared hit=182401 read=954\n\n> -> Nested Loop (cost=0.00..22593.53 rows=8272 width=27) (actual time=30.980..3252.869 rows=38831 loops=1)\n\n> Buffers: shared hit=972 read=528\n\n> -> Nested Loop (cost=0.00..657.24 rows=22 width=8) (actual time=0.102..109.577 rows=103 loops=1)\n\n> Join Filter: (measurement.product_id = product.id)\n\n> Rows Removed by Join Filter: 18\n\n> Buffers: shared hit=484 read=9\n\n> -> Seq Scan on \"Product\" product (cost=0.00..1.04 rows=1 width=8) (actual time=0.010..0.019 rows=1 loops=1)\n\n> Filter: (name ~~* 'Part 1'::text)\n\n> Rows Removed by Filter: 2\n\n> Buffers: shared hit=1\n\n> -> Index Scan using measurement_start_time_index on \"Measurement\" measurement (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121 loops=1)\n\n> Index Cond: ((measurement_start_time >= '2015-06-18 17:00:00'::timestamp without time zone) AND (measurement_start_time <= '2015-06-18 18:00:00'::timestamp without time zone))\n\n> Filter: (NOT (SubPlan 1))\n\n> Buffers: shared hit=483 read=9\n\n> SubPlan 1\n\n> -> Index Scan using extra_info_measurement_id_index on \"Extra_info\" e (cost=0.00..9.66 rows=1 width=8) (actual time=0.900..0.900 rows=0 loops=121)\n\n> Index Cond: (measurement_id = measurement.id)\n\n> Filter: ((value_string ~~* 'Clamped%'::text) AND (description = 'Clamp'::text))\n\n> Rows Removed by Filter: 2\n\n> Buffers: shared hit=479 read=7\n\n> -> Index Scan using feature_measurement_id_and_name_index on \"Feature\" rf (cost=0.00..993.40 rows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)\n\n> Index Cond: (measurement_id = measurement.id)\n\n> Buffers: shared hit=488 read=519\n\n> -> Index Scan using point_feature_id_index on \"Point\" p (cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067 rows=1 loops=38831)\n\n> Index Cond: (feature_id = f.id)\n\n> Buffers: shared hit=181429 read=426\n\n> -> Index Scan using warning_feature_id_index 
on \"Warning\" warning (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007 rows=0 loops=26265)\n\n> Index Cond: (f.id <http://f.id> = feature_id)\n\n> Buffers: shared hit=81151 read=42\n\n> Total runtime: 6273.312 ms\n\n> \n\n> \n\n> ---Version---\n\n> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n\n> \n\n> \n\n> ---Table sizes---\n\n> Extra_info 1223400 rows\n\n> Feature 185436000 rows\n\n> Measurement 500000 rows\n\n> Point 124681000 rows\n\n> Warning 11766800 rows\n\n> \n\n> ---Hardware---\n\n> Intel Core i5-2320 CPU 3.00GHz (4 CPUs)\n\n> 6GB Memory\n\n> 64-bit Operating System (Windows 7 Professional)\n\n> WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache\n\n> \n\n> ---History---\n\n> Query gets slower as more data is added to the database\n\n> \n\n> ---Maintenance---\n\n> Autovacuum is used with default settings\n\n> \n\n> Tommi Kaksonen <t2nn2t(at)gmail(dot)com> wrote:\n\n \n\n \n\nI don’t see a reason to partition such small data. What I do see is you attempting to run a big query on what looks like a small desktop pc. 6GB of ram, especially under Windows 7, isn’t enough ram for a database server. Run the query on a normal small server of say 16gb and it should perform fine. IMO.\n\n \n\nMike\n\n \n\n\n From: [email protected] [mailto:[email protected]] On Behalf Of Tommi KSent: Friday, August 26, 2016 7:25 AMTo: Craig James <[email protected]>Cc: andreas kretschmer <[email protected]>; [email protected]: Re: [PERFORM] Slow query with big tables Ok, sorry that I did not add the original message. I thought that it would be automatically added to the message thread. Here is the question again: Is there way to keep query time constant as the database size grows. Should I use partitioning or partial indexes? Thanks,Tommi Kaksonen   > Hello, > > I have the following tables and query. 
            ->  Index Scan using measurement_start_time_index on \"Measurement\" measurement  (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121 loops=1)>                                       Index Cond: ((measurement_start_time >= '2015-06-18 17:00:00'::timestamp without time zone) AND (measurement_start_time <= '2015-06-18 18:00:00'::timestamp without time zone))>                                       Filter: (NOT (SubPlan 1))>                                       Buffers: shared hit=483 read=9>                                       SubPlan 1>                                         ->  Index Scan using extra_info_measurement_id_index on \"Extra_info\" e  (cost=0.00..9.66 rows=1 width=8) (actual time=0.900..0.900 rows=0 loops=121)>                                               Index Cond: (measurement_id = measurement.id)>                                               Filter: ((value_string ~~* 'Clamped%'::text) AND (description = 'Clamp'::text))>                                               Rows Removed by Filter: 2>                                               Buffers: shared hit=479 read=7>                           ->  Index Scan using feature_measurement_id_and_name_index on \"Feature\" rf  (cost=0.00..993.40 rows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)>                                 Index Cond: (measurement_id = measurement.id)>                                 Buffers: shared hit=488 read=519>                     ->  Index Scan using point_feature_id_index on \"Point\" p  (cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067 rows=1 loops=38831)>                           Index Cond: (feature_id = f.id)>                           Buffers: shared hit=181429 read=426>               ->  Index Scan using warning_feature_id_index on \"Warning\" warning  (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007 rows=0 loops=26265)>                     Index Cond: (f.id = feature_id)>                     Buffers: shared hit=81151 read=42> Total runtime: 6273.312 ms> > > ---Version---> PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit> > > ---Table sizes---> Extra_info          1223400 rows> Feature                              185436000 rows> Measurement     500000 rows> Point                   124681000 rows> Warning                            11766800 rows> > ---Hardware---> Intel Core i5-2320 CPU 3.00GHz (4 CPUs)> 6GB Memory> 64-bit Operating System (Windows 7 Professional)> WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache> > ---History---> Query gets slower as more data is added to the database> > ---Maintenance---> Autovacuum is used with default settings> > Tommi Kaksonen <t2nn2t(at)gmail(dot)com> wrote:  I don’t see a reason to partition such small data.  What I do see is you attempting to run a big query on what looks like a small desktop pc.  6GB of ram, especially under Windows 7, isn’t enough ram for a database server.  Run the query on a normal small server of say 16gb and it should perform fine.  IMO. Mike", "msg_date": "Fri, 26 Aug 2016 13:26:46 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On 8/26/16 3:26 PM, Mike Sofen wrote:\n> Is there way to keep query time constant as the database size grows.\n\nNo. More data == more time. Unless you find a way to break the laws of \nphysics.\n\n> Should I use partitioning or partial indexes?\n\nNeither technique is a magic bullet. 
I doubt either would help here.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 26 Aug 2016 23:11:30 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "2016-08-26 22:26 GMT+02:00 Mike Sofen <[email protected]>:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *Tommi K\n> *Sent:* Friday, August 26, 2016 7:25 AM\n> *To:* Craig James <[email protected]>\n> *Cc:* andreas kretschmer <[email protected]>;\n> [email protected]\n> *Subject:* Re: [PERFORM] Slow query with big tables\n>\n>\n>\n> Ok, sorry that I did not add the original message. I thought that it would\n> be automatically added to the message thread.\n>\n>\n>\n> Here is the question again:\n>\n>\n>\n> Is there way to keep query time constant as the database size grows.\n> Should I use partitioning or partial indexes?\n>\n\ntry to disable nested_loop - there are bad estimations.\n\nThis query should not be fast - there are two ILIKE filters with negative\nimpact on estimations.\n\nRegards\n\nPavel\n\n\n>\n>\n> Thanks,\n>\n> Tommi Kaksonen\n>\n>\n>\n>\n>\n>\n>\n> > Hello,\n>\n> >\n>\n> > I have the following tables and query. I would like to get some help to\n> find out why it is slow and how its performance could be improved.\n>\n> >\n>\n> > Thanks,\n>\n> > Tommi K.\n>\n> >\n>\n> >\n>\n> > --Table definitions---\n>\n> > CREATE TABLE \"Measurement\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > product_id bigserial NOT NULL,\n>\n> > nominal_data_id bigserial NOT NULL,\n>\n> > description text,\n>\n> > serial text,\n>\n> > measurement_time timestamp without time zone,\n>\n> > status smallint,\n>\n> > system_description text,\n>\n> > CONSTRAINT \"Measurement_pkey\" PRIMARY KEY (id),\n>\n> > CONSTRAINT \"Measurement_nominal_data_id_fkey\" FOREIGN KEY\n> (nominal_data_id)\n>\n> > REFERENCES \"Nominal_data\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n>\n> > CONSTRAINT \"Measurement_product_id_fkey\" FOREIGN KEY (product_id)\n>\n> > REFERENCES \"Product\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> > CREATE INDEX measurement_time_index\n>\n> > ON \"Measurement\"\n>\n> > USING btree\n>\n> > (measurement_time);\n>\n> > ALTER TABLE \"Measurement\" CLUSTER ON measurement_time_index;\n>\n> >\n>\n> > CREATE TABLE \"Product\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > name text,\n>\n> > description text,\n>\n> > info text,\n>\n> > system_name text,\n>\n> > CONSTRAINT \"Product_pkey\" PRIMARY KEY (id)\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> >\n>\n> > CREATE TABLE \"Extra_info\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > measurement_id bigserial NOT NULL,\n>\n> > name text,\n>\n> > description text,\n>\n> > info text,\n>\n> > type text,\n>\n> > value_string text,\n>\n> > value_double double precision,\n>\n> > value_integer bigint,\n>\n> > value_bool boolean,\n>\n> > CONSTRAINT \"Extra_info_pkey\" PRIMARY KEY (id),\n>\n> > CONSTRAINT \"Extra_info_measurement_id_fkey\" FOREIGN KEY\n> 
(measurement_id)\n>\n> > REFERENCES \"Measurement\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> > CREATE INDEX extra_info_measurement_id_index\n>\n> > ON \"Extra_info\"\n>\n> > USING btree\n>\n> > (measurement_id);\n>\n> >\n>\n> > CREATE TABLE \"Feature\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > measurement_id bigserial NOT NULL,\n>\n> > name text,\n>\n> > description text,\n>\n> > info text,\n>\n> > CONSTRAINT \"Feature_pkey\" PRIMARY KEY (id),\n>\n> > CONSTRAINT \"Feature_measurement_id_fkey\" FOREIGN KEY (measurement_id)\n>\n> > REFERENCES \"Measurement\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> > CREATE INDEX feature_measurement_id_and_name_index\n>\n> > ON \"Feature\"\n>\n> > USING btree\n>\n> > (measurement_id, name COLLATE pg_catalog.\"default\");\n>\n> >\n>\n> > CREATE INDEX feature_measurement_id_index\n>\n> > ON \"Feature\"\n>\n> > USING hash\n>\n> > (measurement_id);\n>\n> >\n>\n> >\n>\n> > CREATE TABLE \"Point\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > feature_id bigserial NOT NULL,\n>\n> > x double precision,\n>\n> > y double precision,\n>\n> > z double precision,\n>\n> > status_x smallint,\n>\n> > status_y smallint,\n>\n> > status_z smallint,\n>\n> > difference_x double precision,\n>\n> > difference_y double precision,\n>\n> > difference_z double precision,\n>\n> > CONSTRAINT \"Point_pkey\" PRIMARY KEY (id),\n>\n> > CONSTRAINT \"Point_feature_id_fkey\" FOREIGN KEY (feature_id)\n>\n> > REFERENCES \"Feature\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> > CREATE INDEX point_feature_id_index\n>\n> > ON \"Point\"\n>\n> > USING btree\n>\n> > (feature_id);\n>\n> >\n>\n> > CREATE TABLE \"Warning\"\n>\n> > (\n>\n> > id bigserial NOT NULL,\n>\n> > feature_id bigserial NOT NULL,\n>\n> > \"number\" smallint,\n>\n> > info text,\n>\n> > CONSTRAINT \"Warning_pkey\" PRIMARY KEY (id),\n>\n> > CONSTRAINT \"Warning_feature_id_fkey\" FOREIGN KEY (feature_id)\n>\n> > REFERENCES \"Feature\" (id) MATCH SIMPLE\n>\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n>\n> > )\n>\n> > WITH (\n>\n> > OIDS=FALSE\n>\n> > );\n>\n> >\n>\n> > CREATE INDEX warning_feature_id_index\n>\n> > ON \"Warning\"\n>\n> > USING btree\n>\n> > (feature_id);\n>\n> >\n>\n> >\n>\n> > ---Query---\n>\n> > SELECT\n>\n> > f.name,\n>\n> > f.description,\n>\n> > SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\n> warning.id IS NULL THEN 1 ELSE 0 END) AS green_count,\n>\n> > SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0 AND\n> warning.id IS NOT NULL THEN 1 ELSE 0 END) AS green_warned_count,\n>\n> > SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\n> AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\n> warning.id IS NULL THEN 1 ELSE 0 END) AS yellow_count,\n>\n> > SUM(CASE WHEN NOT (p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0)\n> AND NOT (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\n> warning.id IS NOT NULL THEN 1 ELSE 0 END) AS yellow_warned_count,\n>\n> > SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\n> warning.id IS NULL THEN 1 ELSE 0 END) AS red_count,\n>\n> > SUM(CASE WHEN (p.status_x = 2 OR p.status_y = 2 OR p.status_z = 2) AND\n> warning.id IS NOT NULL THEN 1 ELSE 0 END) AS red_warned_count,\n>\n> > SUM(CASE WHEN 
(p.status_x = 1000 OR p.status_y = 1000 OR p.status_z =\n> 1000) AND warning.id IS NOT NULL THEN 1 ELSE 0 END) AS\n> unable_to_measure_count\n>\n> > FROM \"Point\" p\n>\n> > JOIN \"Feature\" f ON f.id = p.feature_id\n>\n> > JOIN \"Measurement\" measurement ON measurement.id = f.measurement_id\n>\n> > JOIN \"Product\" product ON product.id = measurement.product_id\n>\n> > LEFT JOIN \"Warning\" warning ON f.id = warning.feature_id\n>\n> > WHERE (product.name ILIKE 'Part 1') AND\n>\n> > measurement.measurement_start_time >= '2015-06-18 17:00:00' AND\n>\n> > measurement.measurement_start_time <= '2015-06-18 18:00:00' AND\n>\n> > measurement.id NOT IN(SELECT measurement_id FROM \"Extra_info\" e\n>\n> > WHERE e.measurement_id = measurement.id AND e.description = 'Clamp' AND\n> e.value_string ILIKE 'Clamped%')\n>\n> > GROUP BY f.name, f.description;\n>\n> >\n>\n> >\n>\n> > ---Explain Analyze---\n>\n> > GroupAggregate (cost=1336999.08..1337569.18 rows=5562 width=33) (actual\n> time=6223.622..6272.321 rows=255 loops=1)\n>\n> > Buffers: shared hit=263552 read=996, temp read=119 written=119\n>\n> > -> Sort (cost=1336999.08..1337012.98 rows=5562 width=33) (actual\n> time=6223.262..6231.106 rows=26265 loops=1)\n>\n> > Sort Key: f.name, f.description\n>\n> > Sort Method: external merge Disk: 936kB\n>\n> > Buffers: shared hit=263552 read=996, temp read=119 written=119\n>\n> > -> Nested Loop Left Join (cost=0.00..1336653.08 rows=5562\n> width=33) (actual time=55.792..6128.875 rows=26265 loops=1)\n>\n> > Buffers: shared hit=263552 read=996\n>\n> > -> Nested Loop (cost=0.00..1220487.17 rows=5562\n> width=33) (actual time=55.773..5910.852 rows=26265 loops=1)\n>\n> > Buffers: shared hit=182401 read=954\n>\n> > -> Nested Loop (cost=0.00..22593.53 rows=8272\n> width=27) (actual time=30.980..3252.869 rows=38831 loops=1)\n>\n> > Buffers: shared hit=972 read=528\n>\n> > -> Nested Loop (cost=0.00..657.24 rows=22\n> width=8) (actual time=0.102..109.577 rows=103 loops=1)\n>\n> > Join Filter: (measurement.product_id =\n> product.id)\n>\n> > Rows Removed by Join Filter: 18\n>\n> > Buffers: shared hit=484 read=9\n>\n> > -> Seq Scan on \"Product\" product\n> (cost=0.00..1.04 rows=1 width=8) (actual time=0.010..0.019 rows=1 loops=1)\n>\n> > Filter: (name ~~* 'Part 1'::text)\n>\n> > Rows Removed by Filter: 2\n>\n> > Buffers: shared hit=1\n>\n> > -> Index Scan using\n> measurement_start_time_index on \"Measurement\" measurement\n> (cost=0.00..655.37 rows=67 width=16) (actual time=0.042..109.416 rows=121\n> loops=1)\n>\n> > Index Cond:\n> ((measurement_start_time >= '2015-06-18 17:00:00'::timestamp without time\n> zone) AND (measurement_start_time <= '2015-06-18 18:00:00'::timestamp\n> without time zone))\n>\n> > Filter: (NOT (SubPlan 1))\n>\n> > Buffers: shared hit=483 read=9\n>\n> > SubPlan 1\n>\n> > -> Index Scan using\n> extra_info_measurement_id_index on \"Extra_info\" e (cost=0.00..9.66\n> rows=1 width=8) (actual time=0.900..0.900 rows=0 loops=121)\n>\n> > Index Cond:\n> (measurement_id = measurement.id)\n>\n> > Filter: ((value_string ~~*\n> 'Clamped%'::text) AND (description = 'Clamp'::text))\n>\n> > Rows Removed by Filter: 2\n>\n> > Buffers: shared hit=479\n> read=7\n>\n> > -> Index Scan using\n> feature_measurement_id_and_name_index on \"Feature\" rf (cost=0.00..993.40\n> rows=370 width=35) (actual time=28.152..30.407 rows=377 loops=103)\n>\n> > Index Cond: (measurement_id =\n> measurement.id)\n>\n> > Buffers: shared hit=488 read=519\n>\n> > -> Index Scan using point_feature_id_index on\n> \"Point\" p 
(cost=0.00..144.80 rows=1 width=14) (actual time=0.067..0.067\n> rows=1 loops=38831)\n>\n> >                           Index Cond: (feature_id = f.id)\n>\n> >                           Buffers: shared hit=181429 read=426\n>\n> >               ->  Index Scan using warning_feature_id_index on \"Warning\"\n> warning  (cost=0.00..20.88 rows=1 width=16) (actual time=0.007..0.007\n> rows=0 loops=26265)\n>\n> >                     Index Cond: (f.id = feature_id)\n>\n> >                     Buffers: shared hit=81151 read=42\n>\n> > Total runtime: 6273.312 ms\n>\n> >\n>\n> >\n>\n> > ---Version---\n>\n> > PostgreSQL 9.2.1, compiled by Visual C++ build 1600, 64-bit\n>\n> >\n>\n> >\n>\n> > ---Table sizes---\n>\n> > Extra_info 1223400 rows\n>\n> > Feature 185436000 rows\n>\n> > Measurement 500000 rows\n>\n> > Point 124681000 rows\n>\n> > Warning 11766800 rows\n>\n> >\n>\n> > ---Hardware---\n>\n> > Intel Core i5-2320 CPU 3.00GHz (4 CPUs)\n>\n> > 6GB Memory\n>\n> > 64-bit Operating System (Windows 7 Professional)\n>\n> > WD Blue 500GB HDD - 7200 RPM SATA 6 Gb/s 16MB Cache\n>\n> >\n>\n> > ---History---\n>\n> > Query gets slower as more data is added to the database\n>\n> >\n>\n> > ---Maintenance---\n>\n> > Autovacuum is used with default settings\n>\n> >\n>\n> > Tommi Kaksonen <t2nn2t(at)gmail(dot)com> wrote:\n>\n>\n>\n>\n>\n> I don’t see a reason to partition such small data. What I do see is you\n> attempting to run a big query on what looks like a small desktop pc. 6GB\n> of ram, especially under Windows 7, isn’t enough ram for a database\n> server. Run the query on a normal small server of say 16gb and it should\n> perform fine. IMO.\n>\n>\n>\n> Mike\n>\n>\n\n", "msg_date": "Sat, 27 Aug 2016 06:55:48 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby <[email protected]> wrote:\n\n> On 8/26/16 3:26 PM, Mike Sofen wrote:\n>\n>> Is there way to keep query time constant as the database size grows.\n>>\n>\n> No. More data == more time. 
Unless you find a way to break the laws of\n> physics.\n>\n\nStraight hash-table indexes (which Postgres doesn't use) have O(1) access\ntime. The amount of data has no effect on the access time. Postgres uses\nB-trees which have O(logN) performance. There is no \"law of physics\" that\nsays Postgres couldn't employ pure hash indexes.\n\nCraig\n\n\n> Should I use partitioning or partial indexes?\n>>\n>\n> Neither technique is a magic bullet. I doubt either would help here.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX\n> Experts in Analytics, Data Architecture and PostgreSQL\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n> 855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------\n\n", "msg_date": "Sat, 27 Aug 2016 07:13:00 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Straight hash-table indexes (which Postgres doesn't use) have O(1) access\n> time. The amount of data has no effect on the access time.\n\nThis is wishful thinking --- once you have enough data, O(1) goes out the\nwindow. For example, a hash index is certainly not going to continue to\nscale linearly once you reach its maximum possible number of buckets\n(2^N for N-bit hashes, and remember you can't get very many useful hash\nbits out of small objects like integers). But even before that, large\nnumbers of buckets put enough stress on your storage system that you will\nsee some not very O(1)-ish behavior, just because too little of the index\nfits in whatever cache and RAM you have. Any storage hierarchy is
Any storage hierarchy is\nultimately going to impose O(log N) access costs, that's the way they're\nbuilt.\n\nI think it's fairly pointless to discuss such matters in the abstract.\nIf you want to make useful engineering tradeoffs you have to talk about\nspecific data sets and available hardware.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 27 Aug 2016 12:36:56 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On Sat, Aug 27, 2016 at 7:13 AM, Craig James <[email protected]> wrote:\n\n> On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby <[email protected]>\n> wrote:\n>\n>> On 8/26/16 3:26 PM, Mike Sofen wrote:\n>>\n>>> Is there way to keep query time constant as the database size grows.\n>>>\n>>\n>> No. More data == more time. Unless you find a way to break the laws of\n>> physics.\n>>\n>\n> Straight hash-table indexes (which Postgres doesn't use) have O(1) access\n> time.\n>\n\nBut he isn't doing single-row lookups, he is doing large aggregations. If\nyou have to aggregate N rows, doing a O(1) operation on different N\noccasions is still O(N).\n\nNot that big-O is useful here anyway. It assumes that either everything\nfits in RAM (and is already there), or that nothing fits in RAM and it all\nhas to be fetched from disk, even the index root pages, every time it is\nneeded. Tommi is not operating under an environment where the first\nassumption holds, and no one operates in an environment where the second\nassumption holds.\n\nAs N increases beyond available RAM, your actual time for a single look-up\nis going to be a weighted average of two different constant-time\noperations, one with a small constant and one with a large constant. Big-O\nnotation ignores this nicety and assumes all operations are at the slower\nspeed, because that is what the limit of the weighted average will be as N\ngets very large. But real world systems do not operate at the infinite\nlimit.\n\nSo his run time could easily be proportional to N^2, if he aggregates more\nrows and each one of them is less likely to be a cache hit.\n\nCheers,\n\nJeff\n\nOn Sat, Aug 27, 2016 at 7:13 AM, Craig James <[email protected]> wrote:On Fri, Aug 26, 2016 at 9:11 PM, Jim Nasby <[email protected]> wrote:On 8/26/16 3:26 PM, Mike Sofen wrote:\n\nIs there way to keep query time constant as the database size grows.\n\n\nNo. More data == more time. Unless you find a way to break the laws of physics.Straight hash-table indexes (which Postgres doesn't use) have O(1) access time. But he isn't doing single-row lookups, he is doing large aggregations.  If you have to aggregate N rows, doing a O(1) operation on different N occasions is still O(N).Not that big-O is useful here anyway.  It assumes that either everything fits in RAM (and is already there), or that nothing fits in RAM and it all has to be fetched from disk, even the index root pages, every time it is needed.  Tommi is not operating under an environment where the first assumption holds, and no one operates in an environment where the second assumption holds.As N increases beyond available RAM, your actual time for a single look-up is going to be a weighted average of two different constant-time operations, one with a small constant and one with a large constant.  
Big-O notation ignores this nicety and assumes all operations are at the slower speed, because that is what the limit of the weighted average will be as N gets very large. But real world systems do not operate at the infinite limit.So his run time could easily be proportional to N^2, if he aggregates more rows and each one of them is less likely to be a cache hit.Cheers,Jeff", "msg_date": "Sat, 27 Aug 2016 11:12:17 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:\n\n> Hello,\n> thanks for the response. I did not get the response to my email even\n> though I am subscribed to the pgsql-performance mail list. Let's hope that\n> I get the next one :)\n>\n> Increasing work_mem did not have great impact on the performance. But I\n> will try to update the PostgreSQL version to see if it speeds up things.\n>\n> However is there way to keep query time constant as the database size\n> grows.\n>\n\nNot likely. If the number of rows you are aggregating grows, it will take\nmore work to do those aggregations.\n\nIf the number of rows being aggregated doesn't grow, because all the growth\noccurs outside of the measurement_time range, even then the new data will\nstill make it harder to keep the stuff you want cached in memory. If you\nreally want more-constant query time, you could approach that by giving the\nmachine as little RAM as possible. This works not by making the large\ndatabase case faster, but by making the small database case slower. That\nusually is not what people want.\n\n\n\n> Should I use partitioning or partial indexes?\n>\n\nPartitioning the Feature and Point tables on measurement_time (or\nmeasurement_start_time,\nyou are not consistent on what it is called) might be helpful. However,\nmeasurement_time does not exist in those tables, so you would first have to\nde-normalize by introducing it into them.\n\nMore likely to be helpful would be precomputing the aggregates and storing\nthem in a materialized view (not available in 9.2). Also, more RAM and\nbetter hard-drives can't hurt.\n\nCheers,\n\nJeff\n\nOn Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:Hello,thanks for the response. I did not get the response to my email even though I am subscribed to the pgsql-performance mail list. Let's hope that I get the next one :)Increasing work_mem did not have great impact on the performance. But I will try to update the PostgreSQL version to see if it speeds up things.However is there way to keep query time constant as the database size grows. Not likely.  If the number of rows you are aggregating grows, it will take more work to do those aggregations.If the number of rows being aggregated doesn't grow, because all the growth occurs outside of the measurement_time range, even then the new data will still make it harder to keep the stuff you want cached in memory.  If you really want more-constant query time, you could approach that by giving the machine as little RAM as possible.  This works not by making the large database case faster, but by making the small database case slower.  That usually is not what people want. Should I use partitioning or partial indexes?Partitioning the Feature and Point tables on measurement_time (or measurement_start_time, you are not consistent on what it is called) might be helpful.  
However, measurement_time does not exist in those tables, so you would first have to de-normalize by introducing it into them.More likely to be helpful would be precomputing the aggregates and storing them in a materialized view (not available in 9.2).   Also, more RAM and better hard-drives can't hurt.Cheers,Jeff", "msg_date": "Sat, 27 Aug 2016 11:33:50 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" }, { "msg_contents": "On Sat, Aug 27, 2016 at 18:33 GMT+03:00, Jeff Janes\n<jeff(dot)janes(at)gmail(dot)com> wrote:\n\n> Partitioning the Feature and Point tables on measurement_time (or\n> measurement_start_time,\n> you are not consistent on what it is called) might be helpful. However,\n> measurement_time does not exist in those tables, so you would first have\nto\n> de-normalize by introducing it into them.\n>\n> More likely to be helpful would be precomputing the aggregates and storing\n> them in a materialized view (not available in 9.2). Also, more RAM and\n> better hard-drives can't hurt.\n\nThanks a lot for help and all suggestions. Before this I tried to partition\nby measurement_id (Feature table) and by result_feature_id (Point table)\nbut the performance was worse than without partitioning. Using\nmeasurement_time in partitioning might be a better idea\n(measurement_start_time was meant to be measurement_time).\n\nI think I will update to newer version, use better hardware and try\nmaterialized views for better performance.\n\nBest Regards,\nTommi Kaksonen\n\n\n2016-08-27 21:33 GMT+03:00 Jeff Janes <[email protected]>:\n\n> On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:\n>\n>> Hello,\n>> thanks for the response. I did not get the response to my email even\n>> though I am subscribed to the pgsql-performance mail list. Let's hope that\n>> I get the next one :)\n>>\n>> Increasing work_mem did not have great impact on the performance. But I\n>> will try to update the PostgreSQL version to see if it speeds up things.\n>>\n>> However is there way to keep query time constant as the database size\n>> grows.\n>>\n>\n> Not likely. If the number of rows you are aggregating grows, it will take\n> more work to do those aggregations.\n>\n> If the number of rows being aggregated doesn't grow, because all the\n> growth occurs outside of the measurement_time range, even then the new\n> data will still make it harder to keep the stuff you want cached in\n> memory. If you really want more-constant query time, you could approach\n> that by giving the machine as little RAM as possible. This works not by\n> making the large database case faster, but by making the small database\n> case slower. That usually is not what people want.\n>\n>\n>\n>> Should I use partitioning or partial indexes?\n>>\n>\n> Partitioning the Feature and Point tables on measurement_time (or measurement_start_time,\n> you are not consistent on what it is called) might be helpful. However,\n> measurement_time does not exist in those tables, so you would first have\n> to de-normalize by introducing it into them.\n>\n> More likely to be helpful would be precomputing the aggregates and storing\n> them in a materialized view (not available in 9.2). 
Also, more RAM and\n> better hard-drives can't hurt.\n>\n> Cheers,\n>\n> Jeff\n>\n\nOn Sat, Aug 27, 2016 at 18:33 GMT+03:00, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:> Partitioning the Feature and Point tables on measurement_time (or> measurement_start_time,> you are not consistent on what it is called) might be helpful.  However,> measurement_time does not exist in those tables, so you would first have to> de-normalize by introducing it into them.> > More likely to be helpful would be precomputing the aggregates and storing> them in a materialized view (not available in 9.2).   Also, more RAM and> better hard-drives can't hurt.Thanks a lot for help and all suggestions. Before this I tried to partition by measurement_id (Feature table) and by result_feature_id (Point table) but the performance was worse than without partitioning. Using measurement_time in partitioning might be a better idea (measurement_start_time was meant to be measurement_time).I think I will update to newer version, use better hardware and try materialized views for better performance.Best Regards,Tommi Kaksonen2016-08-27 21:33 GMT+03:00 Jeff Janes <[email protected]>:On Fri, Aug 26, 2016 at 6:17 AM, Tommi K <[email protected]> wrote:Hello,thanks for the response. I did not get the response to my email even though I am subscribed to the pgsql-performance mail list. Let's hope that I get the next one :)Increasing work_mem did not have great impact on the performance. But I will try to update the PostgreSQL version to see if it speeds up things.However is there way to keep query time constant as the database size grows. Not likely.  If the number of rows you are aggregating grows, it will take more work to do those aggregations.If the number of rows being aggregated doesn't grow, because all the growth occurs outside of the measurement_time range, even then the new data will still make it harder to keep the stuff you want cached in memory.  If you really want more-constant query time, you could approach that by giving the machine as little RAM as possible.  This works not by making the large database case faster, but by making the small database case slower.  That usually is not what people want. Should I use partitioning or partial indexes?Partitioning the Feature and Point tables on measurement_time (or measurement_start_time, you are not consistent on what it is called) might be helpful.  However, measurement_time does not exist in those tables, so you would first have to de-normalize by introducing it into them.More likely to be helpful would be precomputing the aggregates and storing them in a materialized view (not available in 9.2).   Also, more RAM and better hard-drives can't hurt.Cheers,Jeff", "msg_date": "Mon, 29 Aug 2016 10:00:10 +0300", "msg_from": "Tommi Kaksonen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query with big tables" } ]
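The closing suggestions in this thread translate into only a few lines of SQL. What follows is a minimal sketch rather than anything posted by the participants: the view name feature_status_counts is illustrative, only one of the eight counts from the original query is shown, and CREATE MATERIALIZED VIEW requires PostgreSQL 9.3 or later (the original poster was on 9.2.1).

-- Pavel's experiment: discourage nested loop joins for one session when
-- the row estimates feeding them are badly off, then re-test the query.
SET enable_nestloop = off;
-- ... re-run the problem query under EXPLAIN ANALYZE here ...
RESET enable_nestloop;

-- Jeff's suggestion: precompute the per-feature aggregates once, read them
-- back cheaply, and refresh after each data load.
CREATE MATERIALIZED VIEW feature_status_counts AS
SELECT f.name,
       f.description,
       SUM(CASE WHEN p.status_x = 0 AND p.status_y = 0 AND p.status_z = 0
                THEN 1 ELSE 0 END) AS green_count
FROM "Point" p
JOIN "Feature" f ON f.id = p.feature_id
GROUP BY f.name, f.description;

REFRESH MATERIALIZED VIEW feature_status_counts;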
[ { "msg_contents": "Is it possible to find the number of disk IOs performed for a query? EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but not number of IOs. \n\nMy database is on an NVMe SSD, and am trying to cut microseconds of disk IO per query by possibly denormalizing.\n\nThank you,\n\n-bobby\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 31 Aug 2016 18:01:19 -0400", "msg_from": "Bobby Mozumder <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to find disk IOs for a Query?" }, { "msg_contents": "On Aug 31, 2016, at 3:01 PM, Bobby Mozumder <[email protected]> wrote:\n> \n> Is it possible to find the number of disk IOs performed for a query? EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but not number of IOs. \n\nPostgres knows the number of rows it will need to pull to do your query, but it has no way of knowing if a block not in its own cache can be satisfied via filesystem cache, or if it will fall through to disk read. If you are on linux, you might be able to tell the effectiveness of your filesystem cache via something like http://www.brendangregg.com/blog/2014-12-31/linux-page-cache-hit-ratio.html <http://www.brendangregg.com/blog/2014-12-31/linux-page-cache-hit-ratio.html>\n\n…but that's hardly going to show you something as granular as a per-query cost.\nOn Aug 31, 2016, at 3:01 PM, Bobby Mozumder <[email protected]> wrote:Is it possible to find the number of disk IOs performed for a query?  EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but not number of IOs.  Postgres knows the number of rows it will need to pull to do your query, but it has no way of knowing if a block not in its own cache can be satisfied via filesystem cache, or if it will fall through to disk read. If you are on linux, you might be able to tell the effectiveness of your filesystem cache via something like http://www.brendangregg.com/blog/2014-12-31/linux-page-cache-hit-ratio.html…but that's hardly going to show you something as granular as a per-query cost.", "msg_date": "Wed, 31 Aug 2016 15:10:10 -0700", "msg_from": "Ben Chobot <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to find disk IOs for a Query?" }, { "msg_contents": "On Wed, Aug 31, 2016 at 3:01 PM, Bobby Mozumder <[email protected]> wrote:\n\n> Is it possible to find the number of disk IOs performed for a query?\n> EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but\n> not number of IOs.\n>\n> My database is on an NVMe SSD, and am trying to cut microseconds of disk\n> IO per query by possibly denormalizing.\n>\n\nMaybe helpful, altough slightly different since it works on an aggregate\nbasis:\n\nIf you set \"track_io_timing=on\" in your postgresql.conf, you can use\npg_stat_statements [1] to get I/O timings (i.e. how long a certain type of\nquery took for I/O access).\n\nTypically I'd use this in combination with system-level metrics, so you can\nunderstand which queries were running at the time of a given I/O spike.\n\n[1] https://www.postgresql.org/docs/9.5/static/pgstatstatements.html\n\nBest,\nLukas\n\n-- \nLukas Fittl\n\nSkype: lfittl\nPhone: +1 415 321 0630\n\nOn Wed, Aug 31, 2016 at 3:01 PM, Bobby Mozumder <[email protected]> wrote:Is it possible to find the number of disk IOs performed for a query?  
EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but not number of IOs.\n\nMy database is on an NVMe SSD, and am trying to cut microseconds of disk IO per query by possibly denormalizing.Maybe helpful, altough slightly different since it works on an aggregate basis:If you set \"track_io_timing=on\" in your postgresql.conf, you can use pg_stat_statements [1] to get I/O timings (i.e. how long a certain type of query took for I/O access).Typically I'd use this in combination with system-level metrics, so you can understand which queries were running at the time of a given I/O spike.[1] https://www.postgresql.org/docs/9.5/static/pgstatstatements.htmlBest,Lukas-- Lukas FittlSkype: lfittlPhone: +1 415 321 0630", "msg_date": "Wed, 31 Aug 2016 15:17:27 -0700", "msg_from": "Lukas Fittl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to find disk IOs for a Query?" }, { "msg_contents": "On 01/09/16 10:01, Bobby Mozumder wrote:\n\n> Is it possible to find the number of disk IOs performed for a query? EXPLAIN ANALYZE looks like it shows number of sequential rows scanned, but not number of IOs.\n>\n> My database is on an NVMe SSD, and am trying to cut microseconds of disk IO per query by possibly denormalizing.\n>\n\n\nTry EXPLAIN (ANALYZE, BUFFERS) e.g:\n\nbench=# EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM pgbench_accounts \nWHERE bid=1;\nQUERY PLAN\n\n--------------------------------------------------------------------------------\n--------------------------------------------------------------------\n Finalize Aggregate (cost=217118.90..217118.91 rows=1 width=8) (actual \ntime=259\n.723..259.723 rows=1 loops=1)\n Buffers: shared hit=2370 read=161727\n -> Gather (cost=217118.68..217118.89 rows=2 width=8) (actual \ntime=259.686..\n259.720 rows=3 loops=1)\n Workers Planned: 2\n Workers Launched: 2\n Buffers: shared hit=2370 read=161727\n -> Partial Aggregate (cost=216118.68..216118.69 rows=1 \nwidth=8) (actu\nal time=258.473..258.473 rows=1 loops=3)\n Buffers: shared hit=2208 read=161727\n -> Parallel Seq Scan on pgbench_accounts \n(cost=0.00..216018.33\nrows=40139 width=0) (actual time=0.014..256.820 rows=33333 loops=3)\n Filter: (bid = 1)\n Rows Removed by Filter: 3300000\n Buffers: shared hit=2208 read=161727\n Planning time: 0.044 ms\n Execution time: 260.357 ms\n(14 rows)\n\n...shows the number of (8k unless you've changed it) pages read from \ndisk or cache. Now this might not be exactly what you are after - the \nother way to attack this is to trace your backend postgres process (err \nperfmon...no idea how to do this on windows...) and count read and write \ncalls.\n\nregards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 1 Sep 2016 17:56:35 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to find disk IOs for a Query?" 
}, { "msg_contents": "On 01/09/16 17:56, Mark Kirkwood wrote:\n\n> the other way to attack this is to trace your backend postgres \n> process (err perfmon...no idea how to do this on windows...)\n\nNo idea why I thought you were on windows (maybe was reading another \nmessage just before yours) - sorry!\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 1 Sep 2016 21:08:54 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to find disk IOs for a Query?" } ]
[ { "msg_contents": "High level summary: server ram has a significant impact on batch processing\nperformance (no surprise), and AWS processing can largely compete with local\nservers IF the AWS network connection is optimized.\n\nWith the recent threads about insert performance (and performance in\ngeneral), I thought I'd share some numbers that could assist some other\nPostgres users in planning their environments.\n\nI am currently running a Postgres dev server in AWS and we are evaluating a\nhigh powered physical server for our data center, for which we received a\ndemo unit from Cisco for testing. Great opportunity to test a range of\npinch points that could restrict scalability and performance, comparing how\n2 very different servers behave under a high bulk loading/transform\nscenario. The scenario is that I'm migrating mysql data (\"v1\", eventually\n20tb of genomics data) over to a new Postgres server (\"v2\").\n\n[As a side note, I'm attempting to get a third server spun up, being a high\npowered AWS EC2 instance (an r3.4xlarge with 122gb ram, 16 cores, 6tb SSD\nEBS Optimized with 16k guaranteed IOPS). When I finish the testing against\nthe 3rd server, I'll report again.]\n\nLandscape:\nSource mysql server: Dell physical 24 cores at 2.8ghz, 32gb ram, 1gbe\nnetworking, Percona/mysql v5.5.3 on linux in our data center\nAWS: EC2 m4.xlarge instance with 16 gb ram, 4 cores at 2.4ghz, 3tb SSD. PG\nv9.5.1 on Red Hat 4.8.5-4 64 bit, on a 10gb Direct Connect link from our\ndata center to.\nCisco: Hyperflex HX240c M4 node with UCS B200 M4 blade, with 256gb ram, 48\ncores at 2.2ghz, 4tb direct attached Intel flash (SSD) for the OS, 10tb of\nNetApp Filer SSD storage via 4gb HBA cards. PG v9.5.1 on Red Hat 4.8.5-4 64\nbit, 10gbe networking but has to throttle down to 1gbe when talking to the\nmysql source server.\n\nPASS 1:\nProcess: Extract (pull the raw v1 data over the network to the 32 v2\nstaging tables) \nNum Source Rows: 8,232,673 (Small Test) \nRowcount Compression: 1.0 (1:1 copy) \nAWS Time in Secs: 1,516** \nCisco Time in Secs: 376 \nDifference: 4.0x\nComment: AWS: 5.7k rows/sec cisco: 21.9k rows/sec\n(**network speed appears to be the factor, see notes below)\n\nProcess: Transform/Load (all work local to the server - read,\ntransform, write as a single batch) \nNum Source Rows: 5,575,255 (many smaller batches from the source\ntables, all writes going to a single target table) \nAvg Rowcount Compression: 10.3 (jsonb row compression resulting in 10x\nfewer rows) \nAWS Time in Secs: 408 \nCisco Time in Secs: 294 \nDifference: 1.4x (the Cisco is 40% faster...not a huge difference)\nComment:AWS: 13.6k rows/sec Cisco: 19k rows/sec\n\nNotes: The testing has revealed an issue with the networking in our data\ncenter, which appears to be causing abnormally slow transfer speed to AWS.\nThat is being investigated. So if we look at just the Transform/Load\nprocess, we can see that both AWS and the local Cisco server have comparable\nprocessing speeds on the small dataset.\n\nHowever, when I moved to a medium sized dataset of 204m rows, a different\npattern emerged. 
I'm including just the Transform/Load process here, and\ntesting just ONE table out of the batch:\n\nPASS 2:\nProcess: Transform/Load (all work local to the server - read,\ntransform, write as a single batch) \nNum Source Rows: 10,554,800 (one batch from just a single source table\ngoing to a single target table) \nAvg Rowcount Compression: 31.5 (jsonb row compression resulting in\n31.5x fewer rows) \nAWS Time in Secs: 2,493 (41.5 minutes) \nCisco Time in Secs: 661 (10 minutes) \nDifference: 3.8x\nComment:AWS: 4.2k rows/sec Cisco: 16k rows/sec\n\nIt's obvious the size of the batch exceeded the AWS server memory, resulting\nin a profoundly slower processing time. This was a true, apples to apples\ncomparison between Pass 1 and Pass 2: average row lengths were within 7% of\neach other (1121 vs 1203) using identical table structures and processing\ncode, the only difference was the target server.\n\nI'm happy to answer questions about these results.\n\nMike Sofen (USA)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 1 Sep 2016 19:30:58 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres bulk insert/ETL performance on high speed servers - test\n results" }, { "msg_contents": "On Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen <[email protected]> wrote:\n> PASS 2:\n> Process: Transform/Load (all work local to the server - read,\n> transform, write as a single batch)\n> Num Source Rows: 10,554,800 (one batch from just a single source table\n> going to a single target table)\n> Avg Rowcount Compression: 31.5 (jsonb row compression resulting in\n> 31.5x fewer rows)\n> AWS Time in Secs: 2,493 (41.5 minutes)\n> Cisco Time in Secs: 661 (10 minutes)\n> Difference: 3.8x\n> Comment:AWS: 4.2k rows/sec Cisco: 16k rows/sec\n>\n> It's obvious the size of the batch exceeded the AWS server memory, resulting\n> in a profoundly slower processing time. This was a true, apples to apples\n> comparison between Pass 1 and Pass 2: average row lengths were within 7% of\n> each other (1121 vs 1203) using identical table structures and processing\n> code, the only difference was the target server.\n>\n> I'm happy to answer questions about these results.\n\n\nAre you sure it's a memory thing and not an EBS bandwidth thing?\n\nEBS has significantly less bandwidth than direct-attached flash.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 2 Sep 2016 17:26:49 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres bulk insert/ETL performance on high speed\n servers - test results" }, { "msg_contents": "From: Claudio Freire Sent: Friday, September 02, 2016 1:27 PM\nOn Thu, Sep 1, 2016 at 11:30 PM, Mike Sofen < <mailto:[email protected]> [email protected]> wrote:\n\n> It's obvious the size of the batch exceeded the AWS server memory, \n\n> resulting in a profoundly slower processing time. 
This was a true, \n\n> apples to apples comparison between Pass 1 and Pass 2: average row \n\n> lengths were within 7% of each other (1121 vs 1203) using identical \n\n> table structures and processing code, the only difference was the target server.\n\n> \n\n> I'm happy to answer questions about these results.\n\n \n\nAre you sure it's a memory thing and not an EBS bandwidth thing?\n\n \n\nEBS has significantly less bandwidth than direct-attached flash.\n\n \n\nYou raise a good point.  However, other disk activities involving large data (like backup/restore and pure large table copying), on both platforms, do not seem to support that notion.  I did have both our IT department and Cisco turn on instrumentation for my last test, capturing all aspects of both tests on both platforms, and I’m hoping to see the results early next week and will reply again.\n\n \n\nMike\n\n\n", "msg_date": "Sun, 4 Sep 2016 05:34:01 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres bulk insert/ETL performance on high speed servers - test\n results" }, { "msg_contents": "On 9/4/16 7:34 AM, Mike Sofen wrote:\n> You raise a good point. However, other disk activities involving large\n> data (like backup/restore and pure large table copying), on both\n> platforms, do not seem to support that notion. I did have both our IT\n> department and Cisco turn on instrumentation for my last test, capturing\n> all aspects of both tests on both platforms, and I’m hoping to see the\n> results early next week and will reply again.\n\nSomething important to remember about Postgres is that it makes \nvirtually no efforts to optimize IO; it throws the entire problem in the \nOSes lap. So differences in OS config or in IO *latency* can have a \nmassive impact on performance. Because of the sensitivity to IO latency, \nyou can also end up with a workload that only reports say 60% IO \nutilization but is essentially IO bound (would be 100% IO utilization if \nenough read-ahead was happening).\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Sep 2016 14:22:10 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres bulk insert/ETL performance on high speed\n servers - test results" }, { "msg_contents": "From: Jim Nasby [mailto:[email protected]] Sent: Wednesday, September 07, 2016 12:22 PM\nOn 9/4/16 7:34 AM, Mike Sofen wrote:\n\n> You raise a good point. However, other disk activities involving \n\n> large data (like backup/restore and pure large table copying), on both \n\n> platforms, do not seem to support that notion. I did have both our IT \n\n> department and Cisco turn on instrumentation for my last test, \n\n> capturing all aspects of both tests on both platforms, and I'm hoping \n\n> to see the results early next week and will reply again.\n\n \n\nSomething important to remember about Postgres is that it makes virtually no efforts to optimize IO; it throws the entire problem in the OSes lap. So differences in OS config or in IO *latency* can have a massive impact on performance. Because of the sensitivity to IO latency, you can also end up with a workload that only reports say 60% IO utilization but is essentially IO bound (would be 100% IO utilization if enough read-ahead was happening).\n\n--\n\nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX Experts in Analytics, Data Architecture and PostgreSQL Data in Trouble? Get it in Treble! <http://BlueTreble.com> http://BlueTreble.com\n\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n=============\n\nHi Jim,\n\n \n\nThanks for that info regarding the sensitivity to IO latency. As it turns out, our network guys have determined that while the AWS Direct Connect pipe is running at “normal” speed, the end-to-end latency is quite high, and they are working with AWS support to see if there are any optimizations to be done. To me, the performance differences have to be tied to networking, especially since it does appear that for these EC2 instances, all data – both SSD and network – is consuming bandwidth in their network “connection”, possibly adding to PG IO pressure. I’ll keep your note in mind as we evaluate next steps.\n\n \n\nMike", "msg_date": "Wed, 7 Sep 2016 19:28:46 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres bulk insert/ETL performance on high speed servers - test\n results" } ]
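A note on the EBS question raised in this thread: since PostgreSQL 9.2, the track_io_timing setting makes EXPLAIN report the time a query spends waiting on reads, which is a direct way to separate storage latency from memory pressure. A minimal sketch (the table name is illustrative):

    SET track_io_timing = on;   -- superuser-only GUC
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM source_table;
    -- With track_io_timing on, plan nodes include a line such as
    --   I/O Timings: read=12345.678
    -- If read time dominates total runtime, the bottleneck is the storage
    -- path (e.g. EBS latency/bandwidth) rather than server memory.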
[ { "msg_contents": "\n\nHi *,\n\njust installed official rpm from http://yum.postgresql.org/ to check\nfunctionality and performance of 9.6rc1. Unfortunately, binaries are\ncompiled with debug_assertions=on, which makes any performance testing\nuseless.\n\nRegards,\n Tigran.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 7 Sep 2016 16:32:37 +0200 (CEST)", "msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>", "msg_from_op": true, "msg_subject": "debug_assertions is enabled in official 9.6rc1 build" } ]
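For anyone verifying their own installation before benchmarking, an assert-enabled build can be spotted from any session; a quick sketch:

    SHOW debug_assertions;
    -- reports "on" for a build configured with --enable-cassert,
    -- which adds enough overhead to invalidate performance numbers

    $ pg_config --configure | grep -o -- '--enable-cassert'
    # prints the flag if the installed binaries were built with assertions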
[ { "msg_contents": "Hello,\n\n\nMy Application has normally 25 to 30 connections and it is doing a lot of\ninsert/update/delete operations.\nThe database size is 100GB.\niowait is at 40% to 45% and CPU idle time is at 45% to 50%\nTOTAL RAM = 8 GB TOTAL CPU = 4\n\npostgresql.conf parameters:\n\nshared_buffers = 2GB\nwork_mem = 100MB\neffective_cache_size = 2GB\nmaintenance_work_mem = 500MB\nautovacuum = off\nwal_buffers = 64MB\n\n\nHow can I reduce iowait and CPU idle time? It is slowing all the queries.\nThe queries that used to take 1 sec are taking 12-15 seconds.\n\n\nversion details:\nLinux zabbix-inst.novalocal 3.10.0-229.7.2.el7.x86_64 #1 SMP Fri May 15\n21:38:46 EDT 2015 x86_64 x86_64 x86_64 GNU/Linux\ndatabase version: postgresql 9.2.13 community.\n\nThanks,\nSamir Magar", "msg_date": "Sat, 10 Sep 2016 16:19:57 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": true, "msg_subject": "How to reduce IOWAIT and CPU idle time?" }, { "msg_contents": "On Sat, Sep 10, 2016 at 8:49 PM, Samir Magar <[email protected]> wrote:\n\n> Hello,\n>\n>\n> My Application has normally 25 to 30 connections and it is doing a lot of\n> insert/update/delete operations.\n> The database size is 100GB.\n> iowait is at 40% to 45% and CPU idle time is at 45% to 50%\n> TOTAL RAM = 8 GB TOTAL CPU = 4\n>\n> postgresql.conf parameters:\n>\n> shared_buffers = 2GB\n> work_mem = 100MB\n> effective_cache_size = 2GB\n> maintenance_work_mem = 500MB\n> autovacuum = off\n> wal_buffers = 64MB\n>\n>\n> How can I reduce iowait and CPU idle time? It is slowing all the queries.\n> The queries that used to take 1 sec are taking 12-15 seconds.\n>\n\nThat does not point out the specific problem you are facing. Queries can\nslow down for a lot of reasons, such as the following -\n\n- Lack of maintenance\n- Bloats in Tables and Indexes\n- Data size growth\n- If writes are slowing down, then it could be because of slow disks\n\nAre you saying that queries are slowing down when there are heavy writes?\nAre you referring to SELECTs or all types of queries?\n\nRegards,\nVenkata B N\n\nFujitsu Australia", "msg_date": "Sun, 11 Sep 2016 10:34:41 +1000", "msg_from": "Venkata B Nagothi <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to reduce IOWAIT and CPU idle time?" }, { "msg_contents": "On Sat, Sep 10, 2016 at 3:49 AM, Samir Magar <[email protected]> wrote:\n\n> Hello,\n>\n>\n> My Application has normally 25 to 30 connections and it is doing a lot of\n> insert/update/delete operations.\n> The database size is 100GB.\n> iowait is at 40% to 45% and CPU idle time is at 45% to 50%\n> TOTAL RAM = 8 GB TOTAL CPU = 4\n>\n> postgresql.conf parameters:\n>\n> shared_buffers = 2GB\n> work_mem = 100MB\n> effective_cache_size = 2GB\n> maintenance_work_mem = 500MB\n> autovacuum = off\n> wal_buffers = 64MB\n>\n>\n> How can I reduce iowait and CPU idle time? It is slowing all the queries.\n> The queries that used to take 1 sec are taking 12-15 seconds.\n>\n\nWhat changed between the 1 sec regime and the 12-15 second regime? Just\ngrowth in the database size?\n\nIndex-update-intensive databases will often undergo a collapse in\nperformance once the portion of the indexes which are being rapidly dirtied\nexceeds shared_buffers + (some kernel specific factor related\nto dirty_background_bytes and kin)\n\nIf you think this is the problem, you could try violating the conventional\nwisdom by setting shared_buffers to 80% to 90% of available RAM, rather than\n20% to 25%.\n\nCheers,\n\nJeff", "msg_date": "Sat, 10 Sep 2016 18:46:53 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to reduce IOWAIT and CPU idle time?" }, { "msg_contents": "Hello Venkata,\n\nThank you for the reply!\n\nI forgot to specify the application name. It is the ZABBIX tool using a PostgreSQL\ndatabase. All types of queries are running slow and I can see the application\nis writing continuously.\nYesterday, I updated effective_io_concurrency to 25, which was earlier set to the\ndefault.
But this has not helped me to solve the problem.\n\nYes, you are right, the database size has grown from a 5 GB database to a 100\nGB database and maybe there is a slowness problem with the disk. However we\ncannot replace the disk right now.\n\nI ran vacuum and analyze manually on all the tables as well. Still it is\nnot helping. Can you think of any other setting which I should enable?\n\n\nThanks,\nSamir Magar\n\nOn Sun, Sep 11, 2016 at 6:04 AM, Venkata B Nagothi <[email protected]>\nwrote:\n\n>\n> On Sat, Sep 10, 2016 at 8:49 PM, Samir Magar <[email protected]>\n> wrote:\n>\n>> Hello,\n>>\n>>\n>> My Application has normally 25 to 30 connections and it is doing a lot of\n>> insert/update/delete operations.\n>> The database size is 100GB.\n>> iowait is at 40% to 45% and CPU idle time is at 45% to 50%\n>> TOTAL RAM = 8 GB TOTAL CPU = 4\n>>\n>> postgresql.conf parameters:\n>>\n>> shared_buffers = 2GB\n>> work_mem = 100MB\n>> effective_cache_size = 2GB\n>> maintenance_work_mem = 500MB\n>> autovacuum = off\n>> wal_buffers = 64MB\n>>\n>>\n>> How can I reduce iowait and CPU idle time? It is slowing all the queries.\n>> The queries that used to take 1 sec are taking 12-15 seconds.\n>>\n>\n> That does not point out the specific problem you are facing. Queries can\n> slow down for a lot of reasons, such as the following -\n>\n> - Lack of maintenance\n> - Bloats in Tables and Indexes\n> - Data size growth\n> - If writes are slowing down, then it could be because of slow disks\n>\n> Are you saying that queries are slowing down when there are heavy writes?\n> Are you referring to SELECTs or all types of queries?\n>\n> Regards,\n> Venkata B N\n>\n> Fujitsu Australia\n>", "msg_date": "Sun, 11 Sep 2016 12:40:53 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to reduce IOWAIT and CPU idle time?" }, { "msg_contents": "On Sat, Sep 10, 2016 at 4:19 PM, Samir Magar <[email protected]> wrote:\n\n> Hello,\n>\n>\n> My Application has normally 25 to 30 connections and it is doing a lot of\n> insert/update/delete operations.\n> The database size is 100GB.\n> iowait is at 40% to 45% and CPU idle time is at 45% to 50%\n> TOTAL RAM = 8 GB TOTAL CPU = 4\n>\n> postgresql.conf parameters:\n>\n>\n> autovacuum = off\n>\n>\nThat could be the source of your problem. Why is autovacuum turned off? Has\nthe database grown from 5GB to 100GB because of bloat, or because so much new data\nhas been inserted? If it's bloat, vacuum may not now be enough to recover\nfrom that and you would need a vacuum full. In general, it's not a good\nidea to turn autovacuum off.\n\nThanks,\nPavan\n\n-- \n Pavan Deolasee http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services", "msg_date": "Sun, 11 Sep 2016 13:10:11 +0530", "msg_from": "Pavan Deolasee <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to reduce IOWAIT and CPU idle time?" }, { "msg_contents": "Hello Jeff,\n\nThanks for the reply!\n\nYes, you are right, the database size has grown from a 5 GB database to a 100\nGB database and maybe there is a slowness problem with the disk. However we\ncannot replace the disk right now.\n\nSure. I will try to increase the shared_buffers value to 90% and see the\nperformance.\n\n\nThanks,\nSamir Magar\n\n\n\n\nOn Sun, Sep 11, 2016 at 7:16 AM, Jeff Janes <[email protected]> wrote:\n\n> On Sat, Sep 10, 2016 at 3:49 AM, Samir Magar <[email protected]>\n> wrote:\n>\n>> Hello,\n>>\n>>\n>> My Application has normally 25 to 30 connections and it is doing a lot of\n>> insert/update/delete operations.\n>> The database size is 100GB.\n>> iowait is at 40% to 45% and CPU idle time is at 45% to 50%\n>> TOTAL RAM = 8 GB TOTAL CPU = 4\n>>\n>> postgresql.conf parameters:\n>>\n>> shared_buffers = 2GB\n>> work_mem = 100MB\n>> effective_cache_size = 2GB\n>> maintenance_work_mem = 500MB\n>> autovacuum = off\n>> wal_buffers = 64MB\n>>\n>>\n>> How can I reduce iowait and CPU idle time? It is slowing all the queries.\n>> The queries that used to take 1 sec are taking 12-15 seconds.\n>>\n>\n> What changed between the 1 sec regime and the 12-15 second regime?
Just\n> growth in the database size?\n>\n> Index-update-intensive databases will often undergo a collapse in\n> performance once the portion of the indexes which are being rapidly dirtied\n> exceeds shared_buffers + (some kernel specific factor related\n> to dirty_background_bytes and kin)\n>\n> If you think this is the problem, you could try violating the conventional\n> wisdom by setting shared_buffers to 80% to 90% of available RAM, rather than\n> 20% to 25%.\n>\n> Cheers,\n>\n> Jeff\n>", "msg_date": "Sun, 11 Sep 2016 13:34:26 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to reduce IOWAIT and CPU idle time?" } ]
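Given that autovacuum was disabled while this database grew from 5 GB to 100 GB, quantifying dead-tuple bloat is a sensible first step before experimenting with shared_buffers; a sketch of the check (the LIMIT is arbitrary):

    SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_analyze
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;
    -- A large n_dead_tup relative to n_live_tup indicates bloat that plain
    -- VACUUM only marks as reusable; reclaiming the disk space needs
    -- VACUUM FULL, as Pavan notes above. Re-enabling autovacuum
    -- (autovacuum = on, then a config reload) keeps it from recurring.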
[ { "msg_contents": "Hi all,\nOne of my customer has reported to me a performance problem when using \nthe E-Maj extension.\nIt is a tool that allows to log updates performed on application tables, \nwith the capability to cancel them.\nIt is based on classic triggers with a log table associated to each \napplication table.\nThe performance issue, encountered in very specific situations, is the \ntime needed to cancel a significant number of insertions.\nI have build a simple test case that reproduces the problem without the \nneed of the extension. It just mimics the behaviour.\nAttached is the psql script and its result.\nThe updates cancellation operation is done in 3 steps:\n- create a temporary table that holds each primary key to process\n- delete from the application table all rows that are no longer wished \n(previously inserted rows and new values of updated rows)\n- insert into the application table old rows we want to see again \n(previously deleted rows or old values of updated rows)\nThe performance problem only concerns the third statement (the INSERT).\nI have run this test case in various recent postgres versions, from 9.1 \nto 9.6, with the same results.\nThe problem appears when:\n- the application table has a primary key with a large number of columns \n(at least 7 columns in this test case)\n- and nothing but INSERT statements have been executed on the \napplication table\n- and the log trigger remains active (to provide a nice feature: cancel \nthe cancellation !)\nIn the test case, I create a table and populate it with 100,000 rows, \ncreate the log mechanism, then insert 10,000 rows and finaly cancel \nthese 10,000 rows insertion.\nThe faulting INSERT statement has the following explain:\nexplain analyze\nINSERT INTO t1\n SELECT \nt1_log.c1,t1_log.c2,t1_log.c3,t1_log.c4,t1_log.c5,t1_log.c6,t1_log.c7,t1_log.c8\n FROM t1_log, tmp\n WHERE t1_log.c1 = tmp.c1 AND t1_log.c2 = tmp.c2 AND t1_log.c3 = tmp.c3\n AND t1_log.c4 = tmp.c4 AND t1_log.c5 = tmp.c5 AND t1_log.c6 = tmp.c6\n AND t1_log.c7 = tmp.c7\n AND t1_log.gid = tmp.gid AND t1_log.tuple = 'OLD';\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nInsert on t1 (cost=0.00..890.90 rows=1 width=32) (actual \ntime=434571.193..434571.193 rows=0 loops=1)\n -> Nested Loop (cost=0.00..890.90 rows=1 width=32) (actual \ntime=434571.187..434571.187 rows=0 loops=1)\n Join Filter: ((t1_log.c1 = tmp.c1) AND (t1_log.c2 = tmp.c2) \nAND (t1_log.c3 = tmp.c3) AND (t1_log.c4 = tmp.c4) AND (t1_log.c5 = \ntmp.c5) AND (t1_log.c6 = tmp.c6) AND (t1_log.c7 = tmp.c7) AND \n(t1_log.gid = tmp.gid))\n Rows Removed by Join Filter: 100000000\n -> Index Scan using t1_log_gid_tuple_idx on t1_log \n(cost=0.00..423.22 rows=1 width=40) (actual time=0.378..69.594 \nrows=10000 loops=1)\n Index Cond: ((tuple)::text = 'OLD'::text)\n -> Seq Scan on tmp (cost=0.00..176.17 rows=9717 width=36) \n(actual time=0.006..21.678 rows=10000 loops=10000)\nTotal runtime: 434571.243 ms\n(8 rows)\nTime: 434572,146 ms\nWhen the conditions are not exactly met, I get:\nexplain analyze\nINSERT INTO t1\n SELECT \nt1_log.c1,t1_log.c2,t1_log.c3,t1_log.c4,t1_log.c5,t1_log.c6,t1_log.c7,t1_log.c8\n FROM t1_log, tmp\n WHERE t1_log.c1 = tmp.c1 AND t1_log.c2 = tmp.c2 AND t1_log.c3 = tmp.c3\n AND t1_log.c4 = tmp.c4 AND t1_log.c5 = tmp.c5 AND t1_log.c6 = tmp.c6\n AND t1_log.c7 = tmp.c7\n AND t1_log.gid = 
tmp.gid AND t1_log.tuple = 'OLD';\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nInsert on t1 (cost=438.65..906.34 rows=1 width=32) (actual \ntime=111.526..111.526 rows=0 loops=1)\n -> Hash Join (cost=438.65..906.34 rows=1 width=32) (actual \ntime=111.521..111.521 rows=0 loops=1)\n Hash Cond: ((tmp.c1 = t1_log.c1) AND (tmp.c2 = t1_log.c2) AND \n(tmp.c3 = t1_log.c3) AND (tmp.c4 = t1_log.c4) AND (tmp.c5 = t1_log.c5) \nAND (tmp.c6 = t1_log.c6) AND (tmp.c7 = t1_log.c7) AND (tmp.gid = \nt1_log.gid))\n -> Seq Scan on tmp (cost=0.00..176.17 rows=9717 width=36) \n(actual time=0.007..22.444 rows=10000 loops=1)\n -> Hash (cost=435.68..435.68 rows=99 width=40) (actual \ntime=58.300..58.300 rows=10000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 586kB\n -> Seq Scan on t1_log (cost=0.00..435.68 rows=99 \nwidth=40) (actual time=2.281..28.430 rows=10000 loops=1)\n Filter: ((tuple)::text = 'OLD'::text)\n Rows Removed by Filter: 10000\nTotal runtime: 111.603 ms\n(10 rows)\nSo we get a nested loop in the bad case, instead of a hash join.\nBut what looks strange to me in this nested loop is that the seq scan on \nthe tmp table is executed 10000 times (once for each t1_log row) while \nno row matches the \"t1_log.tuple = 'OLD'\" condition, leading to a \ndramatic O^2 behaviour.\nI have also remarked that the problem disappears when:\n- an index is added on the temporary table,\n- or the log trigger is disabled,\n- or the enable_nestloop is disabled (bringing what is currenlty my \nfavourite workaround),\n- or when I delete from pg_statistics the row concerning the \"tuple\" \ncolumn of the log table (that presently says that there is nothing but \n'NEW' values).\nThanks by advance for any explanation about this case.\nBest regards.\nPhilippe.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 12 Sep 2016 16:15:05 +0200", "msg_from": "phb07 <[email protected]>", "msg_from_op": true, "msg_subject": "Strange nested loop for an INSERT" }, { "msg_contents": "phb07 <[email protected]> writes:\n> The performance issue, encountered in very specific situations, is the \n> time needed to cancel a significant number of insertions.\n> I have build a simple test case that reproduces the problem without the \n> need of the extension. It just mimics the behaviour.\n\nAt least for this example, the problem is that the DELETE enormously\nalters the statistics for the t1_log.tuple column (going from 100% \"NEW\"\nto 50% \"NEW\" and 50% \"OLD\"), but the plan for your last command is\ngenerated with stats saying there are no \"OLD\" entries. So you get a plan\nthat would be fast for small numbers of \"OLD\" entries, but it sucks when\nthere are lots of them. The fix I would recommend is to do a manual\n\"ANALYZE t1_log\" after such a large data change. 
Auto-ANALYZE would fix\nit for you after a minute or so, probably, but if your script doesn't want\nto wait around then an extra ANALYZE is the ticket.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 12 Sep 2016 10:41:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange nested loop for an INSERT" }, { "msg_contents": "Thanks, Tom, for this quick answer.\n\n\nOn 12/09/2016 at 16:41, Tom Lane wrote:\n> phb07 <[email protected]> writes:\n>> The performance issue, encountered in very specific situations, is the\n>> time needed to cancel a significant number of insertions.\n>> I have build a simple test case that reproduces the problem without the\n>> need of the extension. It just mimics the behaviour.\n> At least for this example, the problem is that the DELETE enormously\n> alters the statistics for the t1_log.tuple column (going from 100% \"NEW\"\n> to 50% \"NEW\" and 50% \"OLD\"), but the plan for your last command is\n> generated with stats saying there are no \"OLD\" entries. So you get a plan\n> that would be fast for small numbers of \"OLD\" entries, but it sucks when\n> there are lots of them. The fix I would recommend is to do a manual\n> \"ANALYZE t1_log\" after such a large data change. Auto-ANALYZE would fix\n> it for you after a minute or so, probably, but if your script doesn't want\n> to wait around then an extra ANALYZE is the ticket.\n>\n> \t\t\tregards, tom lane\n>\nI understand the point (and I now realize that I should have found the \nanswer by myself...)\nAdding an ANALYZE of the log table effectively changes the plan and \nbrings good performance for the INSERT statement.\nThe drawback is the overhead of this added ANALYZE statement. With \nheavy processing like in this test case, it is worth doing. But for \ncommon cases, it's a little bit expensive.\nBut I keep the idea and I will study the best solution to implement.\n\nRegards. Philippe.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 12 Sep 2016 20:05:34 +0200", "msg_from": "phb07 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange nested loop for an INSERT" }, { "msg_contents": "On 9/12/16 1:05 PM, phb07 wrote:\n> The drawback is the overhead of this added ANALYZE statement. With\n> heavy processing like in this test case, it is worth doing. But for\n> common cases, it's a little bit expensive.\n\nYou could always look at the number of rows affected by a command and \nmake a decision on whether to ANALYZE based on that, possibly by looking \nat pg_stat_all_tables.n_mod_since_analyze.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble!
http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 21 Sep 2016 16:42:04 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange nested loop for an INSERT" }, { "msg_contents": "\nOn 21/09/2016 at 23:42, Jim Nasby wrote:\n> On 9/12/16 1:05 PM, phb07 wrote:\n>> The drawback is the overhead of this added ANALYZE statement. With\n>> heavy processing like in this test case, it is worth doing. But for\n>> common cases, it's a little bit expensive.\n>\n> You could always look at the number of rows affected by a command and \n> make a decision on whether to ANALYZE based on that, possibly by \n> looking at pg_stat_all_tables.n_mod_since_analyze.\nI have solved the issue by adding an ANALYZE between both statements. To \navoid the associated overhead for cases when it is not worth doing, \nthe ANALYZE is only performed when more than 1000 rows have just been \ndeleted by the first statement (as the logic is embedded into a plpgsql \nfunction, the GET DIAGNOSTICS statement provides the information). This \nthreshold is approximately the point where the potential loss due to bad \nestimates equals the ANALYZE cost.\nBut the idea of using the n_mod_since_analyze data to also take into \naccount other recent updates not yet reflected in the statistics is \nvery interesting.\n\nThanks.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Sep 2016 19:59:46 +0200", "msg_from": "phb07 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Strange nested loop for an INSERT" }, { "msg_contents": "On 9/23/16 12:59 PM, phb07 wrote:\n>\n> On 21/09/2016 at 23:42, Jim Nasby wrote:\n>> On 9/12/16 1:05 PM, phb07 wrote:\n>>> The drawback is the overhead of this added ANALYZE statement. With\n>>> heavy processing like in this test case, it is worth doing. But for\n>>> common cases, it's a little bit expensive.\n>>\n>> You could always look at the number of rows affected by a command and\n>> make a decision on whether to ANALYZE based on that, possibly by\n>> looking at pg_stat_all_tables.n_mod_since_analyze.\n> I have solved the issue by adding an ANALYZE between both statements. To\n> avoid the associated overhead for cases when it is not worth doing,\n> the ANALYZE is only performed when more than 1000 rows have just been\n> deleted by the first statement (as the logic is embedded into a plpgsql\n> function, the GET DIAGNOSTICS statement provides the information). This\n> threshold is approximately the point where the potential loss due to bad\n> estimates equals the ANALYZE cost.\n> But the idea of using the n_mod_since_analyze data to also take into\n> account other recent updates not yet reflected in the statistics is\n> very interesting.\n\nAnother interesting possibility would be to look at \npg_catalog.pg_stat_xact_all_tables; if you add n_tup_ins, _upd, and _del \nthat will tell you how much n_mod_since_analyze will be increased when \nyour transaction commits, so you could gauge exactly how much the \ncurrent transaction has changed things.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble?
Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Sep 2016 13:51:15 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Strange nested loop for an INSERT" } ]
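For reference, a minimal plpgsql sketch of the threshold approach Philippe describes, using the test case's table names (the DELETE and INSERT predicates are abbreviated; 1000 is his measured break-even point):

    -- inside the plpgsql function that performs the two statements:
    DECLARE
      deleted bigint;
    BEGIN
      DELETE FROM t1 ... ;                    -- first statement of the cancellation
      GET DIAGNOSTICS deleted = ROW_COUNT;
      IF deleted > 1000 THEN                  -- skip the overhead for small batches
        ANALYZE t1_log;                       -- allowed inside a function, unlike VACUUM
      END IF;
      INSERT INTO t1 SELECT ... FROM t1_log, tmp WHERE ... ;  -- second statement
    END;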
[ { "msg_contents": "I’m running PostgreSQL 9.5.4 on a virtual machine for production purposes. It runs Ubuntu 16.04.1 LTS 64bit, 32GB RAM, 461GB disk space and 4 x logical CPUs.\n\nPostgres executes the following activities:\n- many INSERTS for ETL\n- a lot of read and write operations for the main OLTP application\n\nThe ETL job is still under development, so I’m launching several sequential “tries” in order to get the whole thing working. The ETL procedure consists of a lot of inserts packed inside transactions. At the moment each transaction consists of 100k inserts, so for a 90mln rows table I get 90mln inserts packed in 900 transactions. I know it’s not the best, but JDBC drivers combined with Pentaho doesn’t seem to pack more inserts into one, so I get a lot of overhead. I can see INSERT, BIND and PARSE called for each insert.. I think it’s Pentaho which embeds the INSERT in a parametric query.. I hate Pentaho.. anyway..\n\nThe ETL procedure does the following:\n1) DROP SCHEMA IF EXISTS data_schema CASCADE;\n2) creates the “data_schema” schema and populates it with tables and rows using INSERTs as described before;\n3) if an error occurs, drop the schema\n\nI’m repeating the previous steps many times because of some Pentaho errors which the team is working on in order to get it working. This stresses the WAL because the interruption of the process interrupts the current transaction and is followed by a DROP SCHEMA .. CASCADE.\n\nAfter few days since we began debugging the ETL elaboration, the disk filled up and the last ETL job was automatically aborted. Note that the DB data directory is located on the same root disk at /var/lib/postgresql/9.5/main\n\nWhat shocked me was that the data directory of Postgres was just 815MB in size ($ du -h /var/lib/postgresql/9.5/main ) and pg_xlog was 705MB, but the entire disk was full (\"df -h\" returned a disk usage of 100%).\n\nI looked for any postgres activity and only noticed a checkpoint writer process that was writing at low speeds (IO usage was about 5%).\nAlso, \"SELECT * FROM pg_stat_activity\" returned nothing and the most shocking part was that the \"du -h /\" command returned 56GB as the total size of files stored on the whole disk!!! The same was for “du -ha /”, which returns the apparent size. \n\nThe total disk size is 461GB, so how is it possible that “df -h” resulted in 461GB occupied while “du -h /” returned just 56GB?\n\nAfter executing:\n$ service postgresql stop\n$ service postgresql start\n\nthe disk was freed and “df -h” returned a usage of just 16%!\n\nThe other questions are:\n- how can I prevent the disk from filling up? I’m using the default configuration for the WAL (1GB max size).\n- how can I tune Postgres to speed up the INSERTs?\n\nThe actual configuration is the following:\nlisten_addresses = 'localhost'\nmax_connections = 32\nshared_buffers = 16GB\nwork_mem = 128MB\nmaintenance_work_mem = 512MB\neffective_io_concurrency = 10\ncheckpoint_completion_target = 0.9\ncpu_tuple_cost = 0.02\ncpu_index_tuple_cost = 0.01\ncpu_operator_cost = 0.005\neffective_cache_size = 24GB\ndefault_statistics_target = 1000\n\nMay be that some of these parameters causes this strange behavior? checkpoint_completion_target?\n\nThanks to everyone for the support.\n\nBest regards,\n Pietro Pugni", "msg_date": "Wed, 14 Sep 2016 15:53:33 +0200", "msg_from": "Pietro Pugni <[email protected]>", "msg_from_op": true, "msg_subject": "Disk filled-up issue after a lot of inserts and drop schema" }, { "msg_contents": "In Unix/Linux with many of the common file system types, if you delete a\nfile, but a process still has it open, it will continue to \"own\" the disk\nspace until that process closes the file descriptor or dies. If you try
If you try\n\"ls\" or other file system commands, you won't actually see the file there,\nyet it really is, and it still has exclusive control of a portion of the\ndisk. The file is \"unlinked\" but the data blocks for the file are still\nreserved.\n\nLike 'ls', the 'du' command only looks at files that still exist and adds\nup the disk space for those files. It does not know about these files that\nhave been unlinked, but still reserve a large portion of the disk.\n\nI don't know why something still has an open file descriptor on something\nyou believe has been removed, but at least that explains why you are\nexperiencing the discrepancy between \"du\" and the real available space on\nthe disk.\n\n\nOn Wed, Sep 14, 2016 at 9:53 AM, Pietro Pugni <[email protected]>\nwrote:\n\n> I’m running PostgreSQL 9.5.4 on a virtual machine for production purposes.\n> It runs Ubuntu 16.04.1 LTS 64bit, 32GB RAM, 461GB disk space and 4 x\n> logical CPUs.\n>\n> Postgres executes the following activities:\n> - many INSERTS for ETL\n> - a lot of read and write operations for the main OLTP application\n>\n> The ETL job is still under development, so I’m launching several\n> sequential “tries” in order to get the whole thing working. The ETL\n> procedure consists of a lot of inserts packed inside transactions. At the\n> moment each transaction consists of 100k inserts, so for a 90mln rows table\n> I get 90mln inserts packed in 900 transactions. I know it’s not the best,\n> but JDBC drivers combined with Pentaho doesn’t seem to pack more inserts\n> into one, so I get a lot of overhead. I can see INSERT, BIND and PARSE\n> called for each insert.. I think it’s Pentaho which embeds the INSERT in a\n> parametric query.. I hate Pentaho.. anyway..\n>\n> The ETL procedure does the following:\n> 1) DROP SCHEMA IF EXISTS data_schema CASCADE;\n> 2) creates the “data_schema” schema and populates it with tables and rows\n> using INSERTs as described before;\n> 3) if an error occurs, drop the schema\n>\n> I’m repeating the previous steps many times because of some Pentaho errors\n> which the team is working on in order to get it working. This stresses the\n> WAL because the interruption of the process interrupts the current\n> transaction and is followed by a DROP SCHEMA .. CASCADE.\n>\n> *After few days since we began debugging the ETL elaboration, the disk\n> filled up and the last ETL job was automatically aborted*. Note that the\n> DB data directory is located on the same root disk at\n> /var/lib/postgresql/9.5/main\n>\n> What shocked me was that the *data directory of Postgres was just 815MB*\n> in size ($ du -h /var/lib/postgresql/9.5/main ) and pg_xlog was 705MB, *but\n> the entire disk was full *(\"df -h\" returned a disk usage of 100%).\n>\n> I looked for any postgres activity and only noticed a checkpoint writer\n> process that was writing at low speeds (IO usage was about 5%).\n> Also, \"SELECT * FROM pg_stat_activity\" returned nothing and the most\n> shocking part was that the \"du -h /“ command returned 56GB as the total\n> size of files stored on the whole disk!!! The same was for “du -ha /“,\n> which returns the apparent size.\n>\n> The total disk size is 461GB, *so how is it possible that “df -h”\n> resulted in 461GB occupied while “du -h /“ returned just 56GB?*\n>\n> After executing:\n> $ service postgresql stop\n> $ service postgresql start\n>\n> *the disk was freed and “df -h” returned a usage of just 16%!*\n>\n> The other questions are:\n> - *how can I prevent the disk from filling up? 
I’m using the default\n> configuration for the WAL (1GB max size).*\n> - *how can I tune Postgres to speed up the INSERTs?*\n>\n> The *actual configuration* is the following:\n> listen_addresses = 'localhost'\n> max_connections = 32\n> shared_buffers = 16GB\n> work_mem = 128MB\n> maintenance_work_mem = 512MB\n> effective_io_concurrency = 10\n> checkpoint_completion_target = 0.9\n> cpu_tuple_cost = 0.02\n> cpu_index_tuple_cost = 0.01\n> cpu_operator_cost = 0.005\n> effective_cache_size = 24GB\n> default_statistics_target = 1000\n>\n> *May be that some of these parameters causes this strange behavior?\n> checkpoint_completion_target?*\n>\n> Thanks to everyone for the support.\n>\n> Best regards,\n> Pietro Pugni\n>\n>\n>\n>", "msg_date": "Wed, 14 Sep 2016 10:17:41 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk filled-up issue after a lot of inserts and drop schema" }, { "msg_contents": "Rick Otten <[email protected]> writes:\n> I don't know why something still has an open file descriptor on something\n> you believe has been removed, but at least that explains why you are\n> experiencing the discrepancy between \"du\" and the real available space on\n> the disk.\n\nYeah, the reported behavior clearly indicates that some PG process is\nholding open files that should have been dropped (and were unlinked).\nThat's a bug, but there's not enough info here to find and fix it.\n\nIf we're really lucky, this is the same bug that Andres found and fixed\nlast week:\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=26ce63ce76f91eac7570fcb893321ed0233d62ff\n\nbut that guess is probably too optimistic, especially if it's a background\nprocess (such as the checkpointer process) that is holding the open files.\n\nIf you can reproduce this, which I'm guessing you can, please use\n\"lsof\" or similar tool to see which Postgres process is holding open\nreferences to lots of no-longer-there files.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Sep 2016 10:44:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk filled-up issue after a lot of inserts and drop schema" }, { "msg_contents": "Thank you guys.\nI’ve just discovered the issue.. I set \"logging_collector=off\" in the previous email but didn’t comment out the other log* parameters, so Postgres was logging every single INSERT!
This caused the disk to fill up.\n\nThe strange issue is that the log file didn’t exist when the disk filled up. I personally looked for it but it wasn’t where it should have been ( /var/log/postgresql/ ), so I can’t exactly confirm that the issue was the log file getting bigger and bigger.\n\nNow, after writing the previous mail and rebooting postgres, I ran several ETL jobs and the disk space was filling up. The log file reached 110GB in size.\nAfter disabling *ALL* the log options in postgresql.conf, the log file contains just the essential, default information.\n\n\nI’m sorry to have raised a false alarm, but we can consider the issue solved.\n\nThank you again\n\nBest regards,\n Pietro Pugni\n\n\n\n\n> On 14 Sep 2016, at 16:44, Tom Lane <[email protected]> wrote:\n> \n> Rick Otten <[email protected]> writes:\n>> I don't know why something still has an open file descriptor on something\n>> you believe has been removed, but at least that explains why you are\n>> experiencing the discrepancy between \"du\" and the real available space on\n>> the disk.\n> \n> Yeah, the reported behavior clearly indicates that some PG process is\n> holding open files that should have been dropped (and were unlinked).\n> That's a bug, but there's not enough info here to find and fix it.\n> \n> If we're really lucky, this is the same bug that Andres found and fixed\n> last week:\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=26ce63ce76f91eac7570fcb893321ed0233d62ff\n> \n> but that guess is probably too optimistic, especially if it's a background\n> process (such as the checkpointer process) that is holding the open files.\n> \n> If you can reproduce this, which I'm guessing you can, please use\n> \"lsof\" or similar tool to see which Postgres process is holding open\n> references to lots of no-longer-there files.\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Sep 2016 19:45:34 +0200", "msg_from": "Pietro Pugni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk filled-up issue after a lot of inserts and drop schema" }, { "msg_contents": "Pietro Pugni <[email protected]> writes:\n> I’ve just discovered the issue.. I set \"logging_collector=off\" in the previous email but didn’t comment out the other log* parameters, so Postgres was logging every single INSERT! This caused the disk to fill up.\n\nAh.\n\n> The strange issue is that the log file didn’t exist when the disk filled up. I personally looked for it but it wasn’t where it should have been ( /var/log/postgresql/ ), so I can’t exactly confirm that the issue was the log file getting bigger and bigger.\n\nSeems like the log file must have gotten unlinked while still active,\nor at least, *something* had an open reference to it. It's hard to\nspeculate about the cause for that without more info about how you've got\nthe logging set up. (Are you using the log collector? Are you rotating\nlogs?) But I seriously doubt it represents a Postgres bug. Unlike the\nsituation with data files, it's very hard to see how PG could be holding\nonto a reference to an unused log file.
It only ever writes to one log\nfile at a time.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Sep 2016 13:55:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Disk filled-up issue after a lot of inserts and drop schema" }, { "msg_contents": "Log rotation was active and set to 5MB or 1 day.\nI don’t know if it is a bug, but Postgres was logging even if logging_collector was set to “off”.\nAlso, that big log file wasn’t visible for me, in fact “ls” and “du” didn’t detect it.\n\nThanks again\n\nBest regards,\n Pietro Pugni\n\n> On 14 Sep 2016, at 19:55, Tom Lane <[email protected]> wrote:\n> \n> Pietro Pugni <[email protected]> writes:\n>> I’ve just discovered the issue.. I set \"logging_collector=off\" in the previous email but didn’t comment out the other log* parameters, so Postgres was logging every single INSERT! This caused the disk to fill up.\n> \n> Ah.\n> \n>> The strange issue is that the log file didn’t exist when the disk filled up. I personally looked for it but it wasn’t where it should have been ( /var/log/postgresql/ ), so I can’t exactly confirm that the issue was the log file getting bigger and bigger.\n> \n> Seems like the log file must have gotten unlinked while still active,\n> or at least, *something* had an open reference to it. It's hard to\n> speculate about the cause for that without more info about how you've got\n> the logging set up. (Are you using the log collector? Are you rotating\n> logs?) But I seriously doubt it represents a Postgres bug. Unlike the\n> situation with data files, it's very hard to see how PG could be holding\n> onto a reference to an unused log file. It only ever writes to one log\n> file at a time.\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 14 Sep 2016 20:00:59 +0200", "msg_from": "Pietro Pugni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Disk filled-up issue after a lot of inserts and drop schema" } ]
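Two checks worth keeping at hand for this failure mode, one for deleted-but-still-open files and one for the logging configuration actually in effect; a sketch:

    # deleted files still held open (link count 0) and the process holding them:
    $ lsof +L1 | grep postgres

    -- the logging settings actually in effect, and where each value comes from:
    SELECT name, setting, source
    FROM pg_settings
    WHERE name IN ('logging_collector', 'log_destination', 'log_directory',
                   'log_statement', 'log_min_duration_statement');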
[ { "msg_contents": "Hi,\n\nI have a problem with the performance of some queries using unnest after migrating from V9.1.12 to V9.5.3. It doesn't depend on if I am using pg_upgrade or pg_dumpall for migration.\nI tried different versions of PostgreSQL. The problem starts with V9.2.\nThe databases V9.1.12 and V9.5.3 are on the same virtual machine with Microsoft Windows 2012 R2, 16 GB Ram and 1 Intel Xeon CPU E7-4820 2,00 GHz and they are using the same postgresql.conf.\n\nThe server configuration changes are:\n\"archive_command\" \"copy \"\"%p\"\" \"\"E:/9.5/archive/%f\"\"\"\n\"archive_mode\" \"on\"\n\"autovacuum\" \"off\"\n\"autovacuum_analyze_threshold\" \"250\"\n\"autovacuum_naptime\" \"1min\"\n\"autovacuum_vacuum_threshold\" \"1000\"\n\"default_text_search_config\" \"pg_catalog.simple\"\n\"default_with_oids\" \"on\"\n\"dynamic_shared_memory_type\" \"windows\"\n\"effective_cache_size\" \"4000MB\"\n\"lc_messages\" \"German, Germany\"\n\"lc_monetary\" \"German, Germany\"\n\"lc_numeric\" \"German, Germany\"\n\"lc_time\" \"German, Germany\"\n\"listen_addresses\" \"*\"\n\"log_autovacuum_min_duration\" \"0\"\n\"log_connections\" \"on\"\n\"log_destination\" \"stderr\"\n\"log_directory\" \"E:/9.5/log\"\n\"log_disconnections\" \"on\"\n\"log_line_prefix\" \"%t %u %r %d \"\n\"log_min_duration_statement\" \"-1\"\n\"log_min_error_statement\" \"debug5\"\n\"log_statement\" \"mod\"\n\"log_temp_files\" \"20MB\"\n\"log_truncate_on_rotation\" \"on\"\n\"logging_collector\" \"on\"\n\"maintenance_work_mem\" \"256MB\"\n\"max_connections\" \"200\"\n\"max_stack_depth\" \"2MB\"\n\"port\" \"5432\"\n\"shared_buffers\" \"4000MB\"\n\"wal_buffers\" \"2MB\"\n\"wal_level\" \"archive\"\n\"work_mem\" \"20MB\"\n\nI created the following test-schema and test-table on both databases:\n\ncreate schema schema_test AUTHORIZATION postgres;\n\nCREATE TABLE schema_test.table_a\n(\n col0001 character varying(10) NOT NULL, -- customer number\n col0002 character varying(5) NOT NULL, -- account number\n col0003 date NOT NULL, -- booking period\n col0004 smallint NOT NULL DEFAULT 0, -- cost center\n col0005 numeric(12,2) NOT NULL DEFAULT 0, -- value01\n col0006 numeric(12,2) NOT NULL DEFAULT 0, -- value02\n CONSTRAINT table_a_pk PRIMARY KEY (col0001, col0002, col0003, col0004),\n CONSTRAINT table_a_chk01 CHECK (col0002::text ~ '^[[:digit:]]{0,5}$'::text)\n)\nWITH (\n OIDS=TRUE\n);\n\nALTER TABLE schema_test.table_a OWNER TO postgres;\nGRANT ALL ON TABLE schema_test.table_a TO PUBLIC;\n\nThen I imported 50 datas:\n\n5010010000 01351 2000-01-01 0 1568.13 0.00\n5010010000 01351 2000-12-01 0 -1568.13 0.00\n7810405800 01491 2005-12-01 0 1347.00 0.00\n7810405801 05720 2005-12-01 0 148.92 0.00\n5010010000 01496 2000-01-01 0 -3196.90 -142834.53\n5010010000 01496 2000-02-01 0 -1628.77 0.00\n5010010000 01496 2000-03-01 0 -1628.77 0.00\n5010010000 01496 2000-04-01 0 -1628.77 0.00\n5010010000 01496 2000-05-01 0 -1628.77 0.00\n5010010000 01496 2000-06-01 0 -1628.77 0.00\n5010010000 01496 2000-07-01 0 -1628.77 0.00\n5010010000 01496 2000-08-01 0 -1628.77 0.00\n5010010000 01496 2000-09-01 0 -1628.77 0.00\n5010010000 01496 2000-10-01 0 -1628.77 0.00\n5010010000 01496 2000-11-01 0 -1628.77 0.00\n7810405800 01490 2005-12-01 0 1533.20 0.00\n5010010000 01496 2000-12-01 0 -60.64 0.00\n7810405801 05600 2005-12-01 0 74.82 0.00\n5010010000 02009 2000-01-01 0 11808.59 0.00\n7810405801 01101 2005-12-01 0 12700.00 0.00\n7810405801 01225 2005-12-01 0 -5898.23 0.00\n5010010000 02009 2000-02-01 0 11808.59 0.00\n7810405801 05958 2005-12-01 0 76.25 0.00\n5010010000 
02009 2000-03-01 0 11808.59 0.00\n7810405802 04502 2005-12-01 0 144.89 0.00\n7810405802 04320 2005-12-01 0 22.48 0.00\n5010010000 02009 2000-04-01 0 11808.59 0.00\n3030112600 01201 2006-02-01 0 -29.88 0.00\n5010010000 02009 2000-05-01 0 11808.59 0.00\n7810405802 01001 2005-12-01 0 2416.24 0.00\n7810405802 09295 2005-12-01 0 -5219.00 0.00\n5010010000 02009 2000-06-01 0 11808.59 0.00\n7810405802 05216 2005-12-01 0 719.86 0.00\n7810405802 08823 2005-12-01 0 -14318.85 0.00\n5010010000 02009 2000-07-01 0 11808.59 0.00\n7810405802 09800 2005-12-01 0 -51.29 0.00\n3030112600 09000 2006-02-01 0 -29550.83 0.00\n5010010000 02009 2000-08-01 0 11808.59 0.00\n7810405801 04500 2005-12-01 0 175.00 0.00\n3030112600 04100 2006-02-01 0 1839.19 0.00\n5010010000 02009 2000-09-01 0 11808.59 0.00\n7810405801 05890 2005-12-01 0 1200.00 0.00\n3030112600 05958 2006-02-01 0 24.56 0.00\n5010010000 02009 2000-10-01 0 11808.59 0.00\n7810405802 04802 2005-12-01 0 1347.18 0.00\n7810405801 04800 2005-12-01 0 354.51 0.00\n5010010000 02009 2000-11-01 0 11808.59 0.00\n7810405801 04400 2005-12-01 0 47.97 0.00\n7810405801 04510 2005-12-01 0 326.80 0.00\n5010010000 02009 2000-12-01 0 11808.59 0.00\n\nThe query with the problem:\n\nselect col0002\nfrom schema_test.table_a\nwhere col0001 in (select unnest(string_to_array('5010010000',',')))\ngroup by 1\norder by 1\n\nV9.1: 16 msec\nV9.5: 31 msec\n\nIn the original table we have 15 million rows.\nV9.1: 47 msec\nV9.5: 6,2 sec\n\nExplain Analyze:\nV9.1:\n[cid:[email protected]]\n\nV9.5:\n[cid:[email protected]]\n\nQuery plan:\nV9.1:\n[cid:[email protected]]\n\"Sort (cost=23.57..24.07 rows=200 width=9) (actual time=0.210..0.210 rows=3 loops=1)\"\n\" Sort Key: table_a.col0002\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" Buffers: shared hit=2\"\n\" -> HashAggregate (cost=13.93..15.93 rows=200 width=9) (actual time=0.184..0.186 rows=3 loops=1)\"\n\" Buffers: shared hit=2\"\n\" -> Nested Loop (cost=4.31..12.82 rows=445 width=9) (actual time=0.126..0.154 rows=26 loops=1)\"\n\" Buffers: shared hit=2\"\n\" -> HashAggregate (cost=0.02..0.03 rows=1 width=32) (actual time=0.027..0.028 rows=1 loops=1)\"\n\" -> Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.013..0.015 rows=1 loops=1)\"\n\" -> Bitmap Heap Scan on table_a (cost=4.28..12.73 rows=4 width=23) (actual time=0.086..0.095 rows=26 loops=1)\"\n\" Recheck Cond: ((col0001)::text = (unnest('{5010010000}'::text[])))\"\n\" Buffers: shared hit=2\"\n\" -> Bitmap Index Scan on table_a_pk (cost=0.00..4.28 rows=4 width=0) (actual time=0.063..0.063 rows=26 loops=1)\"\n\" Index Cond: ((col0001)::text = (unnest('{5010010000}'::text[])))\"\n\" Buffers: shared hit=1\"\n\"Total runtime: 0.339 ms\"\n\nhttps://explain.depesz.com/s/sdN\n\n\nV9.5:\n[cid:[email protected]]\n\n\"Sort (cost=40.09..40.59 rows=200 width=9) (actual time=0.172..0.173 rows=3 loops=1)\"\n\" Sort Key: table_a.col0002\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" Buffers: shared hit=1\"\n\" -> HashAggregate (cost=30.45..32.45 rows=200 width=9) (actual time=0.137..0.138 rows=3 loops=1)\"\n\" Group Key: table_a.col0002\"\n\" Buffers: shared hit=1\"\n\" -> Hash Semi Join (cost=2.76..29.31 rows=455 width=9) (actual time=0.061..0.113 rows=26 loops=1)\"\n\" Hash Cond: ((table_a.col0001)::text = (unnest('{5010010000}'::text[])))\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on table_a (cost=0.00..19.10 rows=910 width=23) (actual time=0.022..0.038 rows=50 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.51..1.51 rows=100 width=32) (actual time=0.023..0.023 rows=1 
loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\n\" -> Result (cost=0.00..0.51 rows=100 width=0) (actual time=0.013..0.015 rows=1 loops=1)\"\n\"Planning time: 0.413 ms\"\n\"Execution time: 0.263 ms\"\n\nhttps://explain.depesz.com/s/JUYr\n\n\nThe difference is that V9.1 uses Nested Loop and the index table_a_pk. V9.2 and higher don't use the index.\nWhat is the reason? Is there a parameter we can change?\nThanks for your help.\nGreetings,\nUdo Knels\nDipl.-Informatiker\n\nTelefon: 0231 / 4506 375\nTelefax: 0231 / 4506 9375\nE-Mail : [email protected]<mailto:[email protected]>\n________________________________\n\n[cid:[email protected]]\n\n\nSchleefstr. 32\n44287 Dortmund\[email protected]<mailto:[email protected]>\n\n\nSitz: Dortmund\nAmtsgericht: Dortmund, HRB 6231\n\nGeschäftsführer:\nHans Auf dem Kamp\n\nUSt-IdNr.: DE124728517\n________________________________\nDiese E-Mail kann vertrauliche Informationen enthalten. Wenn Sie nicht der Adressat sind, sind Sie nicht zur Verwendung\nder in dieser E-Mail enthaltenen Informationen befugt. Bitte benachrichtigen Sie uns über den irrtümlichen Erhalt.\nThis e-mail may contain confidential information. If you are not the addressee you are not authorized to make use\nof the information contained in this e-mail. Please inform us immediately that you have received it by mistake.", "msg_date": "Mon, 19 Sep 2016 07:29:48 +0000", "msg_from": "\"Knels, Udo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with performance using query with unnest after migrating from\n V9.1 to V9.2 and higher" }, { "msg_contents": "On 9/19/16 2:29 AM, Knels, Udo wrote:\n> The difference is that V9.1 uses Nested Loop and the index table_a_pk.\n> V9.2 and higher don�t use the index.\n\nFirst thing I'd try is running a manual ANALYZE; on the upgraded \ndatabase; the 9.2 plan you showed seems to be using default values, so \nit thinks it's going to get 100 rows when it's only getting a few.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 21 Sep 2016 16:48:41 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with performance using query with unnest after\n migrating from V9.1 to V9.2 and higher" }, { "msg_contents": "Hi,\n\nI tried the following on the upgraded database:\nanalyze schema_test.table_a;\n\nBut the result is the same. 
\n\nhttps://explain.depesz.com/s/hsx5\n\n\"Sort (cost=5.94..6.01 rows=26 width=6) (actual time=0.199..0.200 rows=3 loops=1)\"\n\" Sort Key: table_a.col0002\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" Buffers: shared hit=1\"\n\" -> HashAggregate (cost=5.07..5.33 rows=26 width=6) (actual time=0.161..0.163 rows=3 loops=1)\"\n\" Group Key: table_a.col0002\"\n\" Buffers: shared hit=1\"\n\" -> Hash Semi Join (cost=2.76..4.95 rows=50 width=6) (actual time=0.070..0.133 rows=26 loops=1)\"\n\" Hash Cond: ((table_a.col0001)::text = (unnest('{5010010000}'::text[])))\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on table_a (cost=0.00..1.50 rows=50 width=17) (actual time=0.015..0.034 rows=50 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.51..1.51 rows=100 width=32) (actual time=0.028..0.028 rows=1 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\n\" -> Result (cost=0.00..0.51 rows=100 width=0) (actual time=0.015..0.017 rows=1 loops=1)\"\n\"Planning time: 0.653 ms\"\n\"Execution time: 0.326 ms\"\n\nGreetings\n\nUdo Knels\ntreubuch IT GmbH\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 12:39:30 +0000", "msg_from": "\"Knels, Udo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with performance using query with unnest after\n migrating from V9.1 to V9.2 and higher" }, { "msg_contents": "\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Knels, Udo\nSent: Thursday, September 22, 2016 8:40 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Problem with performance using query with unnest after migrating from V9.1 to V9.2 and higher\n\nHi,\n\nI tried the following on the upgraded database:\nanalyze schema_test.table_a;\n\nBut the result is the same. 
\n\nhttps://explain.depesz.com/s/hsx5\n\n\"Sort (cost=5.94..6.01 rows=26 width=6) (actual time=0.199..0.200 rows=3 loops=1)\"\n\" Sort Key: table_a.col0002\"\n\" Sort Method: quicksort Memory: 25kB\"\n\" Buffers: shared hit=1\"\n\" -> HashAggregate (cost=5.07..5.33 rows=26 width=6) (actual time=0.161..0.163 rows=3 loops=1)\"\n\" Group Key: table_a.col0002\"\n\" Buffers: shared hit=1\"\n\" -> Hash Semi Join (cost=2.76..4.95 rows=50 width=6) (actual time=0.070..0.133 rows=26 loops=1)\"\n\" Hash Cond: ((table_a.col0001)::text = (unnest('{5010010000}'::text[])))\"\n\" Buffers: shared hit=1\"\n\" -> Seq Scan on table_a (cost=0.00..1.50 rows=50 width=17) (actual time=0.015..0.034 rows=50 loops=1)\"\n\" Buffers: shared hit=1\"\n\" -> Hash (cost=1.51..1.51 rows=100 width=32) (actual time=0.028..0.028 rows=1 loops=1)\"\n\" Buckets: 1024 Batches: 1 Memory Usage: 9kB\"\n\" -> Result (cost=0.00..0.51 rows=100 width=0) (actual time=0.015..0.017 rows=1 loops=1)\"\n\"Planning time: 0.653 ms\"\n\"Execution time: 0.326 ms\"\n\nGreetings\n\nUdo Knels\ntreubuch IT GmbH\n_____________________________________________________________________________________________\n\ntable_a is too small, just 50 records.\nOptimizer decided (correctly) that Seq Scan is cheaper than using an index.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 13:38:52 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with performance using query with unnest after\n migrating from V9.1 to V9.2 and higher" }, { "msg_contents": "Igor Neyman <[email protected]> writes:\n> table_a is too small, just 50 records.\n> Optimizer decided (correctly) that Seq Scan is cheaper than using an index.\n\nYeah. The given test case is quite useless for demonstrating that you\nhave a problem, since it's actually *faster* on 9.5 than 9.1.\n\nWhat I suspect is happening is that 9.2 and up assume that an unnest()\nwill produce 100 rows, whereas 9.1 assumed it would produce only 1 row.\nThe latter happened to be more accurate for this specific case, though\nin general it could result in selection of very bad plans.\n\nIf you are intending only one value be selected, don't use unnest();\nyou'd be better off with \"(string_to_array('5010010000',','))[1]\"\nor something like that.\n\nIn the long run we should teach the planner how to produce better\nestimates for unnest-on-a-constant-array, though I'm unsure whether\nthat would help your real application as opposed to this test case.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 10:35:30 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with performance using query with unnest after migrating\n from V9.1 to V9.2 and higher" }, { "msg_contents": "Hi,\n\nThank you very much for your answers.\n\nYes, 50 rows aren't enough, but the original table has about 14 million rows and after analyzing the table I got the same result. \n\nWe changed our functions and used string_to_array instead of unnest and its ok. \n\nIt was not only a problem with one value to be selected. The problem exists with three or more too. 
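The rewrite looks roughly like this (a simplified, untested sketch -- the\nreal functions are more involved and the values here are only examples):\n\nselect col0002\nfrom schema_test.table_a\nwhere col0001 = any (string_to_array('5010010000,7810405800',','))\ngroup by 1\norder by 1\n\nWith = ANY over the array value the planner sees the array directly instead\nof a set-returning unnest, so it can estimate the number of matching rows\nand use the index again.\n\n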
Maybe the implementation of unnest has changed from V9.1 to V9.5. In V9.1 unnest took only one array as argument; since V9.4 it can take more than one array, and so the planner works differently. So, if we change from one version to another in the future, we have to check the PostgreSQL functions to see whether the behaviour of the function or of the planner has changed, and then replace the affected functions. It would be great if we could see this in the documentation.\n\nGreetings\n\nUdo Knels\ntreubuch IT GmbH\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 15:59:58 +0000", "msg_from": "\"Knels, Udo\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Problem with performance using query with unnest after\n migrating from V9.1 to V9.2 and higher" } ]
[ { "msg_contents": "Hello,\n\ni would please like to have some suggestions to optimize Postgres 8.4 for a very heavy number of select (with join) queries.\nThe queries read data, very rarely they write.\n\nThank you!\nFrancesco\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 19 Sep 2016 09:40:00 +0200", "msg_from": "Job <[email protected]>", "msg_from_op": true, "msg_subject": "Postgresql 8.4 optimize for heavy select load" }, { "msg_contents": "On 19/09/16 19:40, Job wrote:\n\n> Hello,\n>\n> i would please like to have some suggestions to optimize Postgres 8.4 for a very heavy number of select (with join) queries.\n> The queries read data, very rarely they write.\n>\n\nWe probably need to see schema and query examples to help you (with \nEXPLAIN ANALYZE output). Also - err 8.4 - I (and others probably) will \nrecommend you upgrade to a more recent (and supported for that matter) \nversion - currently 9.5/9.6 - lots of performance improvements you are \nmissing out on!\n\nBest wishes\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 19 Sep 2016 20:23:18 +1200", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 8.4 optimize for heavy select load" }, { "msg_contents": "On 19.09.2016 10:23, Mark Kirkwood wrote:\n> On 19/09/16 19:40, Job wrote:\n>\n>> Hello,\n>>\n>> i would please like to have some suggestions to optimize Postgres 8.4\n>> for a very heavy number of select (with join) queries.\n>> The queries read data, very rarely they write.\n>>\n>\n> We probably need to see schema and query examples to help you (with\n> EXPLAIN ANALYZE output). Also - err 8.4 - I (and others probably) will\n> recommend you upgrade to a more recent (and supported for that matter)\n> version - currently 9.5/9.6 - lots of performance improvements you are\n> missing out on!\n\nEspecially since 8.4 is out of support for 2 years:\nhttps://www.postgresql.org/support/versioning/\n\nGreetings,\nTorsten\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 19 Sep 2016 11:09:19 +0200", "msg_from": "Torsten Zuehlsdorff <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 8.4 optimize for heavy select load" }, { "msg_contents": "You might wanted to upgrade to new version 9.5 with small effort by using\npg_upgrade,\nwe have done upgrading and achieve more than 20x faster from 8.4 to 9.5 (it\ndepends on the type of sql statement actually)\n\nJul.\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. 
If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Mon, Sep 19, 2016 at 3:23 PM, Mark Kirkwood <\[email protected]> wrote:\n\n> On 19/09/16 19:40, Job wrote:\n>\n> Hello,\n>>\n>> i would please like to have some suggestions to optimize Postgres 8.4 for\n>> a very heavy number of select (with join) queries.\n>> The queries read data, very rarely they write.\n>>\n>>\n> We probably need to see schema and query examples to help you (with\n> EXPLAIN ANALYZE output). Also - err 8.4 - I (and others probably) will\n> recommend you upgrade to a more recent (and supported for that matter)\n> version - currently 9.5/9.6 - lots of performance improvements you are\n> missing out on!\n>\n> Best wishes\n>\n> Mark\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou might wanted to upgrade to new  version 9.5 with small effort by using pg_upgrade, we have done upgrading and achieve more than 20x faster from 8.4 to 9.5 (it depends on the type of sql statement actually)Jul.Julyanto SUTANDANGEqunix Business Solutions, PT(An Open Source and Open Mind Company)www.equnix.co.idPusat Niaga ITC Roxy Mas Blok C2/42.  Jl. KH Hasyim Ashari 125, Jakarta PusatT: +6221 22866662 F: +62216315281 M: +628164858028Caution: The information enclosed in this email (and any attachments) may be legally privileged and/or confidential and is intended only for the use of the addressee(s). No addressee should forward, print, copy, or otherwise reproduce this message in any manner that would allow it to be viewed by any individual not originally listed as a recipient. If the reader of this message is not the intended recipient, you are hereby notified that any unauthorized disclosure, dissemination, distribution, copying or the taking of any action in reliance on the information herein is strictly prohibited. If you have received this communication in error, please immediately notify the sender and delete this message.Unless it is made by the authorized person, any views expressed in this message are those of the individual sender and may not necessarily reflect the views of PT Equnix Business Solutions.\nOn Mon, Sep 19, 2016 at 3:23 PM, Mark Kirkwood <[email protected]> wrote:On 19/09/16 19:40, Job wrote:\n\n\nHello,\n\ni would please like to have some suggestions to optimize Postgres 8.4 for a very heavy number of select (with join) queries.\nThe queries read data, very rarely they write.\n\n\n\nWe probably need to see schema and query examples to help you (with EXPLAIN ANALYZE output). 
Also - err 8.4 - I (and others probably) will recommend you upgrade to a more recent (and supported for that matter) version - currently 9.5/9.6 - lots of performance improvements you are missing out on!\n\nBest wishes\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Mon, 19 Sep 2016 17:05:40 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgresql 8.4 optimize for heavy select load" } ]
[ { "msg_contents": "Hello, I am curious about the performance of queries against a master table\nthat seem to do seq scans on each child table. When the same query is\nissued at a partition directly it uses the partition index and is very\nfast.\n\nThe partition constraint is in the query criteria. We have non overlapping\ncheck constraints and constraint exclusion is set to partition.\n\nHere is the master table\n Column Type\n Modifiers\naggregate_id bigint not null default\nnextval('seq_aggregate'::regclass)\nlanding_id integer not null\nclient_program_id integer\nsequence_number bigint\nstart_datetime timestamp without time zone not null\nend_datetime timestamp without time zone not null\nbody jsonb not null\nclient_parsing_status_code character(1)\nvalidation_status_code character(1)\nclient_parsing_datetime timestamp without time zone\nvalidation_datetime timestamp without time zone\nlatest_flag_datetime timestamp without time zone\nlatest_flag boolean not null\nIndexes:\n \"pk_aggregate\" PRIMARY KEY, btree (aggregate_id)\n \"ix_aggregate_landing_id_aggregate_id_parsing_status\" btree\n(landing_id, aggregate_id, client_parsing_status_code)\n \"ix_aggregate_landing_id_start_datetime\" btree (landing_id,\nstart_datetime)\n \"ix_aggregate_latest_flag\" btree (latest_flag_datetime) WHERE\nlatest_flag = false\n \"ix_aggregate_validation_status_code\" btree (validation_datetime) WHERE\nvalidation_status_code = 'P'::bpchar AND latest_flag = true\nCheck constraints:\n \"ck_aggregate_client_parsing_status_code\" CHECK\n(client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n(ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n \"ck_aggregate_validation_status_code\" CHECK (validation_status_code IS\nNULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n'I'::bpchar])))\nForeign-key constraints:\n \"fk_aggregate_client_program\" FOREIGN KEY (client_program_id)\nREFERENCES client_program(client_program_id)\n \"fk_aggregate_landing\" FOREIGN KEY (landing_id) REFERENCES\nlanding(landing_id)\nNumber of child tables: 17 (Use \\d+ to list them.)\n\nand here is a child table showing a check constraint\n Table \"stage.aggregate__00007223\"\n Column Type\n Modifiers\n────────────────────────── ───────────────────────────\naggregate_id bigint not null default\nnextval('seq_aggregate'::regclass)\nlanding_id integer not null\nclient_program_id integer\nsequence_number bigint\nstart_datetime timestamp without time zone not null\nend_datetime timestamp without time zone not null\nbody jsonb not null\nclient_parsing_status_code character(1)\nvalidation_status_code character(1)\nclient_parsing_datetime timestamp without time zone\nvalidation_datetime timestamp without time zone\nlatest_flag_datetime timestamp without time zone\nlatest_flag boolean not null\nIndexes:\n \"pk_aggregate__00007223\" PRIMARY KEY, btree (aggregate_id), tablespace\n\"archive\"\n \"ix_aggregate__00007223_landing_id_aggregate_id_parsing_status\" btree\n(landing_id, aggregate_id, client_parsing_status_code), tablespace \"archive\"\n \"ix_aggregate__00007223_landing_id_start_datetime\" btree (landing_id,\nstart_datetime), tablespace \"archive\"\n \"ix_aggregate__00007223_latest_flag\" btree (latest_flag_datetime) WHERE\nlatest_flag = false, tablespace \"archive\"\n \"ix_aggregate__00007223_validation_status_code\" btree\n(validation_datetime) WHERE validation_status_code = 'P'::bpchar AND\nlatest_flag = true, tablespace \"archive\"\nCheck constraints:\n \"ck_aggregate__00007223_landing_id\" CHECK 
(landing_id >= 7223 AND\nlanding_id < 9503)\n \"ck_aggregate_client_parsing_status_code\" CHECK\n(client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n(ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n \"ck_aggregate_validation_status_code\" CHECK (validation_status_code IS\nNULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n'I'::bpchar])))\nInherits: aggregate\nTablespace: \"archive\"\n\nHere is an example of the query explain plan against the master table:\n\nselect landing_id from landing L\nwhere exists\n(\nselect landing_id\nfrom stage.aggregate A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\n\nHash Join (cost=59793745.91..59793775.14 rows=28 width=4)\n Hash Cond: (a.landing_id = l.landing_id)\n -> HashAggregate (cost=59792700.41..59792721.46 rows=2105 width=4)\n Group Key: a.landing_id\n -> Append (cost=0.00..59481729.32 rows=124388438 width=4)\n -> Seq Scan on aggregate a (cost=0.00..0.00 rows=1 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00000000 a_1\n (cost=0.00..1430331.50 rows=2105558 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00000470 a_2 (cost=0.00..74082.10\nrows=247002 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00001435 a_3\n (cost=0.00..8174909.44 rows=17610357 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00001685 a_4\n (cost=0.00..11011311.44 rows=23516624 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00003836 a_5\n (cost=0.00..5833050.44 rows=13102557 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00005638 a_6\n (cost=0.00..5950768.16 rows=12342003 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00007223 a_7\n (cost=0.00..6561806.24 rows=13203237 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00009503 a_8\n (cost=0.00..5420961.64 rows=10931794 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00011162 a_9\n (cost=0.00..4262902.64 rows=8560011 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00012707 a_10\n (cost=0.00..4216271.28 rows=9077921 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00014695 a_11\n (cost=0.00..3441205.72 rows=7674495 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00016457 a_12\n (cost=0.00..688010.74 rows=1509212 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00016805 a_13\n (cost=0.00..145219.14 rows=311402 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00016871 a_14 (cost=0.00..21.40\nrows=190 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00016874 a_15\n (cost=0.00..478011.62 rows=1031110 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on aggregate__00017048 a_16 (cost=0.00..21.40\nrows=190 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Seq Scan on 
aggregate__00017049 a_17\n (cost=0.00..1792844.42 rows=3164774 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n1000000000)\n -> Hash (cost=1042.69..1042.69 rows=225 width=4)\n -> Seq Scan on landing l (cost=0.00..1042.69 rows=225 width=4)\n Filter: (source_id = 36)\n\nAnd here is an example of the query using the index when run against a\npartition directly\n\nselect landing_id from landing L\nwhere exists\n(\nselect landing_id\nfrom stage.aggregate__00007223 A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\nNested Loop Semi Join (cost=0.56..3454.75 rows=5 width=4)\n -> Seq Scan on landing l (cost=0.00..1042.69 rows=225 width=4)\n Filter: (source_id = 36)\n -> Index Scan using ix_aggregate__00007223_landing_id_start_datetime on\naggregate__00007223 a (cost=0.56..359345.74 rows=36173 width=4)\n Index Cond: (landing_id = l.landing_id)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n\n\nThe parent table never had rows, and pg_class had relpages=0. I saw a\nsuggestion in a different thread about updating this value to greater than\n0 so I tried that (roughly as sketched at the end of this mail) but didn't\nget a different plan. We have autovacuum/analyze enabled and also run\nnightly vacuum/analyze on the database to keep stats up to date.\n\nI'm new to troubleshooting partition query performance and not sure what I\nam missing here. Any advice is appreciated.\n
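\n(For reference, the relpages experiment was along these lines -- a\nsuperuser-only hack I saw suggested elsewhere, shown here only as a\nsketch:\n\nupdate pg_class set relpages = 1 where relname = 'aggregate';\n\n)\n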
", "msg_date": "Wed, 21 Sep 2016 11:53:15 -0500", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "query against single partition uses index, against master table does\n seq scan" }, { "msg_contents": "Postgres does not have the capability to selectively choose child tables unless the query's \"WHERE\" clause is simple and matches (exactly) the CHECK constraint definition. I have resolved a similar issue by explicitly adding the check constraint expression to every SQL statement against the master table. This is also determined by the constraint_exclusion setting value. Check the manual (9.5): https://www.postgresql.org/docs/current/static/ddl-partitioning.html.\n\n\nI would try tweaking the WHERE clause to match the check constraint definition. A global partitioning index (like in Oracle) would help, but it's just my wish.\n\n\n\nRegards,\nGanesh Kannan\n\n\n________________________________\nFrom: [email protected] <[email protected]> on behalf of Mike Broers <[email protected]>\nSent: Wednesday, September 21, 2016 12:53 PM\nTo: [email protected]\nSubject: [PERFORM] query against single partition uses index, against master table does seq scan\n\nHello, I am curious about the performance of queries against a master table that seem to do seq scans on each child table. When the same query is issued at a partition directly it uses the partition index and is very fast.\n\nThe partition constraint is in the query criteria.
We have non overlapping check constraints and constraint exclusion is set to partition.\n\nHere is the master table\n Column Type Modifiers\naggregate_id bigint not null default nextval('seq_aggregate'::regclass)\nlanding_id integer not null\nclient_program_id integer\nsequence_number bigint\nstart_datetime timestamp without time zone not null\nend_datetime timestamp without time zone not null\nbody jsonb not null\nclient_parsing_status_code character(1)\nvalidation_status_code character(1)\nclient_parsing_datetime timestamp without time zone\nvalidation_datetime timestamp without time zone\nlatest_flag_datetime timestamp without time zone\nlatest_flag boolean not null\nIndexes:\n \"pk_aggregate\" PRIMARY KEY, btree (aggregate_id)\n \"ix_aggregate_landing_id_aggregate_id_parsing_status\" btree (landing_id, aggregate_id, client_parsing_status_code)\n \"ix_aggregate_landing_id_start_datetime\" btree (landing_id, start_datetime)\n \"ix_aggregate_latest_flag\" btree (latest_flag_datetime) WHERE latest_flag = false\n \"ix_aggregate_validation_status_code\" btree (validation_datetime) WHERE validation_status_code = 'P'::bpchar AND latest_flag = true\nCheck constraints:\n \"ck_aggregate_client_parsing_status_code\" CHECK (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n \"ck_aggregate_validation_status_code\" CHECK (validation_status_code IS NULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\nForeign-key constraints:\n \"fk_aggregate_client_program\" FOREIGN KEY (client_program_id) REFERENCES client_program(client_program_id)\n \"fk_aggregate_landing\" FOREIGN KEY (landing_id) REFERENCES landing(landing_id)\nNumber of child tables: 17 (Use \\d+ to list them.)\n\nand here is a child table showing a check constraint\n Table \"stage.aggregate__00007223\"\n Column Type Modifiers\n────────────────────────── ───────────────────────────\naggregate_id bigint not null default nextval('seq_aggregate'::regclass)\nlanding_id integer not null\nclient_program_id integer\nsequence_number bigint\nstart_datetime timestamp without time zone not null\nend_datetime timestamp without time zone not null\nbody jsonb not null\nclient_parsing_status_code character(1)\nvalidation_status_code character(1)\nclient_parsing_datetime timestamp without time zone\nvalidation_datetime timestamp without time zone\nlatest_flag_datetime timestamp without time zone\nlatest_flag boolean not null\nIndexes:\n \"pk_aggregate__00007223\" PRIMARY KEY, btree (aggregate_id), tablespace \"archive\"\n \"ix_aggregate__00007223_landing_id_aggregate_id_parsing_status\" btree (landing_id, aggregate_id, client_parsing_status_code), tablespace \"archive\"\n \"ix_aggregate__00007223_landing_id_start_datetime\" btree (landing_id, start_datetime), tablespace \"archive\"\n \"ix_aggregate__00007223_latest_flag\" btree (latest_flag_datetime) WHERE latest_flag = false, tablespace \"archive\"\n \"ix_aggregate__00007223_validation_status_code\" btree (validation_datetime) WHERE validation_status_code = 'P'::bpchar AND latest_flag = true, tablespace \"archive\"\nCheck constraints:\n \"ck_aggregate__00007223_landing_id\" CHECK (landing_id >= 7223 AND landing_id < 9503)\n \"ck_aggregate_client_parsing_status_code\" CHECK (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n \"ck_aggregate_validation_status_code\" CHECK (validation_status_code IS NULL OR (validation_status_code = ANY 
(ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\nInherits: aggregate\nTablespace: \"archive\"\n\nHere is an example of the query explain plan against the master table:\n\nselect landing_id from landing L\nwhere exists\n(\nselect landing_id\nfrom stage.aggregate A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\n\nHash Join (cost=59793745.91..59793775.14 rows=28 width=4)\n Hash Cond: (a.landing_id = l.landing_id)\n -> HashAggregate (cost=59792700.41..59792721.46 rows=2105 width=4)\n Group Key: a.landing_id\n -> Append (cost=0.00..59481729.32 rows=124388438 width=4)\n -> Seq Scan on aggregate a (cost=0.00..0.00 rows=1 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00000000 a_1 (cost=0.00..1430331.50 rows=2105558 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00000470 a_2 (cost=0.00..74082.10 rows=247002 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00001435 a_3 (cost=0.00..8174909.44 rows=17610357 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00001685 a_4 (cost=0.00..11011311.44 rows=23516624 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00003836 a_5 (cost=0.00..5833050.44 rows=13102557 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00005638 a_6 (cost=0.00..5950768.16 rows=12342003 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00007223 a_7 (cost=0.00..6561806.24 rows=13203237 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00009503 a_8 (cost=0.00..5420961.64 rows=10931794 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00011162 a_9 (cost=0.00..4262902.64 rows=8560011 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00012707 a_10 (cost=0.00..4216271.28 rows=9077921 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00014695 a_11 (cost=0.00..3441205.72 rows=7674495 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00016457 a_12 (cost=0.00..688010.74 rows=1509212 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00016805 a_13 (cost=0.00..145219.14 rows=311402 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00016871 a_14 (cost=0.00..21.40 rows=190 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00016874 a_15 (cost=0.00..478011.62 rows=1031110 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00017048 a_16 (cost=0.00..21.40 rows=190 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Seq Scan on aggregate__00017049 a_17 (cost=0.00..1792844.42 rows=3164774 width=4)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n -> Hash (cost=1042.69..1042.69 rows=225 width=4)\n -> Seq Scan on landing l (cost=0.00..1042.69 rows=225 width=4)\n Filter: (source_id = 36)\n\nAnd here is an example of the query using the index when ran against a partition directly\n\nselect 
landing_id from landing L\nwhere exists\n(\nselect landing_id\nfrom stage.aggregate__00007223 A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\nNested Loop Semi Join (cost=0.56..3454.75 rows=5 width=4)\n -> Seq Scan on landing l (cost=0.00..1042.69 rows=225 width=4)\n Filter: (source_id = 36)\n -> Index Scan using ix_aggregate__00007223_landing_id_start_datetime on aggregate__00007223 a (cost=0.56..359345.74 rows=36173 width=4)\n Index Cond: (landing_id = l.landing_id)\n Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n\n\nThe parent table never had rows, and pg_class had relpages=0. I saw a suggestion in a different thread about updating this value to greater than 0 so I tried that but didn't get a different plan. We have autovacuum/analyze enabled and also run nightly\n vacuum/analyze on the database to keep stats up to date.\n\nI'm new to troubleshooting partition query performance and not sure what I am missing here. Any advice is appreciated.\n
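\nP.S. With the posted schema, the WHERE clause tweak I mean would look\nsomething like this (an untested sketch; the 7223/9503 bounds are copied\nfrom the child table's check constraint):\n\nselect landing_id from landing L\nwhere exists\n(\nselect landing_id\nfrom stage.aggregate A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand A.landing_id >= 7223 and A.landing_id < 9503\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\nWith constant bounds like these in the query itself, constraint_exclusion\ncan prove at plan time that the other child tables cannot match, and skip\nthem.\n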
      ->  Seq Scan on aggregate__00014695 a_11  (cost=0.00..3441205.72 rows=7674495 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00016457 a_12  (cost=0.00..688010.74 rows=1509212 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00016805 a_13  (cost=0.00..145219.14 rows=311402 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00016871 a_14  (cost=0.00..21.40 rows=190 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00016874 a_15  (cost=0.00..478011.62 rows=1031110 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00017048 a_16  (cost=0.00..21.40 rows=190 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n              ->  Seq Scan on aggregate__00017049 a_17  (cost=0.00..1792844.42 rows=3164774 width=4)\n                    Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n  ->  Hash  (cost=1042.69..1042.69 rows=225 width=4)\n        ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n              Filter: (source_id = 36)\n\n\n\nAnd here is an example of the query using the index when run against a partition directly:\n\n\n\nselect landing_id from landing L\nwhere exists \n(\nselect landing_id\nfrom stage.aggregate__00007223 A\nWHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\nand L.landing_id = A.Landing_id\n)\nand L.source_id = 36\n\n\n\n\nNested Loop Semi Join  (cost=0.56..3454.75 rows=5 width=4)\n  ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n        Filter: (source_id = 36)\n  ->  Index Scan using ix_aggregate__00007223_landing_id_start_datetime on aggregate__00007223 a  (cost=0.56..359345.74 rows=36173 width=4)\n        Index Cond: (landing_id = l.landing_id)\n        Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n\n\n\n\n\nThe parent table never had rows, and pg_class had relpages=0.  I saw a suggestion in a different thread about updating this value to greater than 0, so I tried that but didn't get a different plan.  We have autovacuum/analyze enabled and also run nightly\n vacuum/analyze on the database to keep stats up to date.\n\n\nI'm new to troubleshooting partition query performance and not sure what I am missing here.  Any advice is appreciated.", "msg_date": "Wed, 21 Sep 2016 17:15:05 +0000", "msg_from": "Ganesh Kannan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against single partition uses index, against\n master table does seq scan" },
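[Editor's note: a short sketch of why the planner scans every child here, under the assumption that nothing else about the schema changes. Constraint exclusion needs a plan-time-constant predicate on landing_id to compare against each partition's CHECK constraint; the join condition L.landing_id = A.landing_id is not constant, so no child can be excluded. Adding a literal range that mirrors one partition's CHECK constraint (the values below are taken from ck_aggregate__00007223_landing_id shown above) lets the planner prune the other children:

SELECT landing_id FROM landing L
WHERE EXISTS (
    SELECT 1
    FROM stage.aggregate A
    WHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000
      AND L.landing_id = A.landing_id
      AND A.landing_id >= 7223 AND A.landing_id < 9503  -- mirrors the partition CHECK
)
AND L.source_id = 36;

This is only an illustration of the advice given in the reply below; it helps only when the qualifying landing_id range is known up front.]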
{ "msg_contents": "Thanks for your response - Is 'selectively choosing what partition'\ndifferent than utilizing each partition's index when scanning each\npartition? To clarify, I expect to find results in each partition, but to\nhave Postgres use each partition's index instead of full table scans. It\nseems redundant to add where clauses to match each exclusion criterion, but\nI will try that and report back - thank you for the suggestion.\n\nOn Wed, Sep 21, 2016 at 12:15 PM, Ganesh Kannan <\[email protected]> wrote:\n\n> Postgres does not have capability to selectively choose child tables\n> unless the query's \"WHERE\" clause is simple, and it matches (exactly) the\n> CHECK constraint definition. I have resolved similar issue by explicitly\n> adding check constraint expression in every SQL against the master table.\n> This is also determined by the constraint_exclusion setting value. Check\n> the manual (9.5): https://www.postgresql.org/docs/current/static/ddl-\n> partitioning.html.\n>\n>\n> I would try tweaking WHERE clause to match Check constraint definition.\n> Global partitioning index (like in Oracle) would help, but its just my wish.\n>\n>\n>\n> Regards,\n> Ganesh Kannan\n>\n>\n>\n> ------------------------------\n> *From:* [email protected] <pgsql-performance-owner@\n> postgresql.org> on behalf of Mike Broers <[email protected]>\n> *Sent:* Wednesday, September 21, 2016 12:53 PM\n> *To:* [email protected]\n> *Subject:* [PERFORM] query against single partition uses index, against\n> master table does seq scan\n>\n> Hello, I am curious about the performance of queries against a master\n> table that seem to do seq scans on each child table. When the same query\n> is issued at a partition directly it uses the partition index and is very\n> fast.\n>\n> The partition constraint is in the query criteria. We have non\n> overlapping check constraints and constraint exclusion is set to partition.\n>\n> Here is the master table\n> Column Type\n> Modifiers\n> aggregate_id bigint not null default\n> nextval('seq_aggregate'::regclass)\n> landing_id integer not null\n> client_program_id integer\n> sequence_number bigint\n> start_datetime timestamp without time zone not null\n> end_datetime timestamp without time zone not null\n> body jsonb not null\n> client_parsing_status_code character(1)\n> validation_status_code character(1)\n> client_parsing_datetime timestamp without time zone\n> validation_datetime timestamp without time zone\n> latest_flag_datetime timestamp without time zone\n> latest_flag boolean not null\n> Indexes:\n> \"pk_aggregate\" PRIMARY KEY, btree (aggregate_id)\n> \"ix_aggregate_landing_id_aggregate_id_parsing_status\" btree\n> (landing_id, aggregate_id, client_parsing_status_code)\n> \"ix_aggregate_landing_id_start_datetime\" btree (landing_id,\n> start_datetime)\n> \"ix_aggregate_latest_flag\" btree (latest_flag_datetime) WHERE\n> latest_flag = false\n> \"ix_aggregate_validation_status_code\" btree (validation_datetime)\n> WHERE validation_status_code = 'P'::bpchar AND latest_flag = true\n> Check constraints:\n> \"ck_aggregate_client_parsing_status_code\" CHECK\n> (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n> (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n> \"ck_aggregate_validation_status_code\" CHECK (validation_status_code\n> IS NULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n> 'I'::bpchar])))\n> Foreign-key constraints:\n> \"fk_aggregate_client_program\" FOREIGN KEY (client_program_id)\n> REFERENCES client_program(client_program_id)\n> \"fk_aggregate_landing\" FOREIGN KEY (landing_id) REFERENCES\n> landing(landing_id)\n> Number of child tables: 17 (Use \\d+ to list them.)\n>\n> and here is a child table showing a check constraint\n> Table \"stage.aggregate__00007223\"\n> Column Type\n> Modifiers\n> 
────────────────────────── ───────────────────────────\n> aggregate_id bigint not null default\n> nextval('seq_aggregate'::regclass)\n> landing_id integer not null\n> client_program_id integer\n> sequence_number bigint\n> start_datetime timestamp without time zone not null\n> end_datetime timestamp without time zone not null\n> body jsonb not null\n> client_parsing_status_code character(1)\n> validation_status_code character(1)\n> client_parsing_datetime timestamp without time zone\n> validation_datetime timestamp without time zone\n> latest_flag_datetime timestamp without time zone\n> latest_flag boolean not null\n> Indexes:\n> \"pk_aggregate__00007223\" PRIMARY KEY, btree (aggregate_id), tablespace\n> \"archive\"\n> \"ix_aggregate__00007223_landing_id_aggregate_id_parsing_status\" btree\n> (landing_id, aggregate_id, client_parsing_status_code), tablespace \"archive\"\n> \"ix_aggregate__00007223_landing_id_start_datetime\" btree (landing_id,\n> start_datetime), tablespace \"archive\"\n> \"ix_aggregate__00007223_latest_flag\" btree (latest_flag_datetime)\n> WHERE latest_flag = false, tablespace \"archive\"\n> \"ix_aggregate__00007223_validation_status_code\" btree\n> (validation_datetime) WHERE validation_status_code = 'P'::bpchar AND\n> latest_flag = true, tablespace \"archive\"\n> Check constraints:\n> \"ck_aggregate__00007223_landing_id\" CHECK (landing_id >= 7223 AND\n> landing_id < 9503)\n> \"ck_aggregate_client_parsing_status_code\" CHECK\n> (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n> (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n> \"ck_aggregate_validation_status_code\" CHECK (validation_status_code\n> IS NULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n> 'I'::bpchar])))\n> Inherits: aggregate\n> Tablespace: \"archive\"\n>\n> Here is an example of the query explain plan against the master table:\n>\n> select landing_id from landing L\n> where exists\n> (\n> select landing_id\n> from stage.aggregate A\n> WHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\n> and L.landing_id = A.Landing_id\n> )\n> and L.source_id = 36\n>\n>\n> Hash Join (cost=59793745.91..59793775.14 rows=28 width=4)\n> Hash Cond: (a.landing_id = l.landing_id)\n> -> HashAggregate (cost=59792700.41..59792721.46 rows=2105 width=4)\n> Group Key: a.landing_id\n> -> Append (cost=0.00..59481729.32 rows=124388438 width=4)\n> -> Seq Scan on aggregate a (cost=0.00..0.00 rows=1 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00000000 a_1\n> (cost=0.00..1430331.50 rows=2105558 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00000470 a_2\n> (cost=0.00..74082.10 rows=247002 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00001435 a_3\n> (cost=0.00..8174909.44 rows=17610357 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00001685 a_4\n> (cost=0.00..11011311.44 rows=23516624 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00003836 a_5\n> (cost=0.00..5833050.44 rows=13102557 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00005638 a_6\n> (cost=0.00..5950768.16 rows=12342003 width=4)\n> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n> -> Seq Scan on aggregate__00007223 a_7\n> (cost=0.00..6561806.24 rows=13203237 
width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00009503 a_8\n> (cost=0.00..5420961.64 rows=10931794 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00011162 a_9\n> (cost=0.00..4262902.64 rows=8560011 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00012707 a_10\n> (cost=0.00..4216271.28 rows=9077921 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00014695 a_11\n> (cost=0.00..3441205.72 rows=7674495 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00016457 a_12\n> (cost=0.00..688010.74 rows=1509212 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00016805 a_13\n> (cost=0.00..145219.14 rows=311402 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00016871 a_14  (cost=0.00..21.40\n> rows=190 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00016874 a_15\n> (cost=0.00..478011.62 rows=1031110 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00017048 a_16  (cost=0.00..21.40\n> rows=190 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>              ->  Seq Scan on aggregate__00017049 a_17\n> (cost=0.00..1792844.42 rows=3164774 width=4)\n>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n> 1000000000)\n>   ->  Hash  (cost=1042.69..1042.69 rows=225 width=4)\n>         ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n>               Filter: (source_id = 36)\n>\n> And here is an example of the query using the index when ran against a\n> partition directly\n>\n> select landing_id from landing L\n> where exists\n> (\n> select landing_id\n> from stage.aggregate__00007223 A\n> WHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\n> and L.landing_id = A.Landing_id\n> )\n> and L.source_id = 36\n>\n> Nested Loop Semi Join  (cost=0.56..3454.75 rows=5 width=4)\n>   ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n>         Filter: (source_id = 36)\n>   ->  Index Scan using ix_aggregate__00007223_landing_id_start_datetime\n> on aggregate__00007223 a  (cost=0.56..359345.74 rows=36173 width=4)\n>         Index Cond: (landing_id = l.landing_id)\n>         Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n>\n>\n> The parent table never had rows, and pg_class had relpages=0.  I saw a\n> suggestion in a different thread about updating this value to greater than\n> 0 so I tried that but didnt get a different plan.  We have\n> autovacuum/analyze enabled and also run nightly vacuum/analyze on the\n> database to keep stats up to date.\n>\n> I'm new to troubleshooting partition query performance and not sure what I\n> am missing here.  Any advice is appreciated.\n>\n", "msg_date": "Wed, 21 Sep 2016 12:37:19 -0500", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query against single partition uses index, against\n master table does seq scan" }, { "msg_contents": "When I excluded the non indexed search criteria the query on aggregate used\nthe indexes on each partition, without specifying the constraint exclusion\ncriteria.  When I added the constraint exclusion criteria to the non\nindexed criteria, it still used seq scans.\n\nI ended up getting an acceptable plan by using a subquery on the indexed\npartition and using those results to scan for the unindexed value.\n\nOn Wed, Sep 21, 2016 at 12:37 PM, Mike Broers <[email protected]> wrote:\n\n> Thanks for your response - Is 'selectively choosing what partition'\n> different than utilizing each partitions index when scanning each\n> partition?  To clarify, I expect to find results in each partition, but to\n> have postgres use each partitions index instead of full table scans. It\n> seems redundant to add a where clauses to match each exclusion criteria but\n> i will try that and report back - thank you for the suggestion.\n>\n> On Wed, Sep 21, 2016 at 12:15 PM, Ganesh Kannan <ganesh.kannan@\n> weatheranalytics.com> wrote:\n>\n>> Postgres does not have capability to selectively choose child tables\n>> unless the query's \"WHERE\" clause is simple, and it matches (exactly) the\n>> CHECK constraint definition.  I have resolved similar issue by explicitly\n>> adding check constraint expression in every SQL against the master table.\n>> This is also determined by the constraint_exclusion setting value. Check\n>> the manual (9.5): https://www.postgresql.org/docs/current/static/ddl-pa\n>> rtitioning.html.\n>>\n>>\n>> I would try tweaking WHERE clause to match Check constraint definition.\n>> Global partitioning index (like in Oracle) would help, but its just my wish.\n>>\n>>\n>>\n>> Regards,\n>> Ganesh Kannan\n>>\n>>\n>>\n>> ------------------------------\n>> *From:* [email protected] <\n>> [email protected]> on behalf of Mike Broers <\n>> [email protected]>\n>> *Sent:* Wednesday, September 21, 2016 12:53 PM\n>> *To:* [email protected]\n>> *Subject:* [PERFORM] query against single partition uses index, against\n>> master table does seq scan\n>>\n>> Hello, I am curious about the performance of queries against a master\n>> table that seem to do seq scans on each child table.  When the same query\n>> is issued at a partition directly it uses the partition index and is very\n>> fast.  \n>>\n>> The partition constraint is in the query criteria. 
We have non\n>> overlapping check constraints and constraint exclusion is set to partition.\n>>\n>> Here is the master table\n>> Column Type\n>> Modifiers\n>> aggregate_id bigint not null default\n>> nextval('seq_aggregate'::regclass)\n>> landing_id integer not null\n>> client_program_id integer\n>> sequence_number bigint\n>> start_datetime timestamp without time zone not null\n>> end_datetime timestamp without time zone not null\n>> body jsonb not null\n>> client_parsing_status_code character(1)\n>> validation_status_code character(1)\n>> client_parsing_datetime timestamp without time zone\n>> validation_datetime timestamp without time zone\n>> latest_flag_datetime timestamp without time zone\n>> latest_flag boolean not null\n>> Indexes:\n>> \"pk_aggregate\" PRIMARY KEY, btree (aggregate_id)\n>> \"ix_aggregate_landing_id_aggregate_id_parsing_status\" btree\n>> (landing_id, aggregate_id, client_parsing_status_code)\n>> \"ix_aggregate_landing_id_start_datetime\" btree (landing_id,\n>> start_datetime)\n>> \"ix_aggregate_latest_flag\" btree (latest_flag_datetime) WHERE\n>> latest_flag = false\n>> \"ix_aggregate_validation_status_code\" btree (validation_datetime)\n>> WHERE validation_status_code = 'P'::bpchar AND latest_flag = true\n>> Check constraints:\n>> \"ck_aggregate_client_parsing_status_code\" CHECK\n>> (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n>> (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n>> \"ck_aggregate_validation_status_code\" CHECK (validation_status_code\n>> IS NULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n>> 'I'::bpchar])))\n>> Foreign-key constraints:\n>> \"fk_aggregate_client_program\" FOREIGN KEY (client_program_id)\n>> REFERENCES client_program(client_program_id)\n>> \"fk_aggregate_landing\" FOREIGN KEY (landing_id) REFERENCES\n>> landing(landing_id)\n>> Number of child tables: 17 (Use \\d+ to list them.)\n>>\n>> and here is a child table showing a check constraint\n>> Table \"stage.aggregate__00007223\"\n>> Column Type\n>> Modifiers\n>> ────────────────────────── ───────────────────────────\n>> aggregate_id bigint not null default\n>> nextval('seq_aggregate'::regclass)\n>> landing_id integer not null\n>> client_program_id integer\n>> sequence_number bigint\n>> start_datetime timestamp without time zone not null\n>> end_datetime timestamp without time zone not null\n>> body jsonb not null\n>> client_parsing_status_code character(1)\n>> validation_status_code character(1)\n>> client_parsing_datetime timestamp without time zone\n>> validation_datetime timestamp without time zone\n>> latest_flag_datetime timestamp without time zone\n>> latest_flag boolean not null\n>> Indexes:\n>> \"pk_aggregate__00007223\" PRIMARY KEY, btree (aggregate_id),\n>> tablespace \"archive\"\n>> \"ix_aggregate__00007223_landing_id_aggregate_id_parsing_status\"\n>> btree (landing_id, aggregate_id, client_parsing_status_code), tablespace\n>> \"archive\"\n>> \"ix_aggregate__00007223_landing_id_start_datetime\" btree\n>> (landing_id, start_datetime), tablespace \"archive\"\n>> \"ix_aggregate__00007223_latest_flag\" btree (latest_flag_datetime)\n>> WHERE latest_flag = false, tablespace \"archive\"\n>> \"ix_aggregate__00007223_validation_status_code\" btree\n>> (validation_datetime) WHERE validation_status_code = 'P'::bpchar AND\n>> latest_flag = true, tablespace \"archive\"\n>> Check constraints:\n>> \"ck_aggregate__00007223_landing_id\" CHECK (landing_id >= 7223 AND\n>> landing_id < 9503)\n>> \"ck_aggregate_client_parsing_status_code\" 
CHECK\n>> (client_parsing_status_code IS NULL OR (client_parsing_status_code = ANY\n>> (ARRAY['P'::bpchar, 'F'::bpchar, 'I'::bpchar])))\n>> \"ck_aggregate_validation_status_code\" CHECK (validation_status_code\n>> IS NULL OR (validation_status_code = ANY (ARRAY['P'::bpchar, 'F'::bpchar,\n>> 'I'::bpchar])))\n>> Inherits: aggregate\n>> Tablespace: \"archive\"\n>>\n>> Here is an example of the query explain plan against the master table:\n>>\n>> select landing_id from landing L\n>> where exists\n>> (\n>> select landing_id\n>> from stage.aggregate A\n>> WHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\n>> and L.landing_id = A.Landing_id\n>> )\n>> and L.source_id = 36\n>>\n>>\n>> Hash Join (cost=59793745.91..59793775.14 rows=28 width=4)\n>> Hash Cond: (a.landing_id = l.landing_id)\n>> -> HashAggregate (cost=59792700.41..59792721.46 rows=2105 width=4)\n>> Group Key: a.landing_id\n>> -> Append (cost=0.00..59481729.32 rows=124388438 width=4)\n>> -> Seq Scan on aggregate a (cost=0.00..0.00 rows=1\n>> width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00000000 a_1\n>> (cost=0.00..1430331.50 rows=2105558 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00000470 a_2\n>> (cost=0.00..74082.10 rows=247002 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00001435 a_3\n>> (cost=0.00..8174909.44 rows=17610357 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00001685 a_4\n>> (cost=0.00..11011311.44 rows=23516624 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00003836 a_5\n>> (cost=0.00..5833050.44 rows=13102557 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00005638 a_6\n>> (cost=0.00..5950768.16 rows=12342003 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00007223 a_7\n>> (cost=0.00..6561806.24 rows=13203237 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00009503 a_8\n>> (cost=0.00..5420961.64 rows=10931794 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00011162 a_9\n>> (cost=0.00..4262902.64 rows=8560011 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00012707 a_10\n>> (cost=0.00..4216271.28 rows=9077921 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00014695 a_11\n>> (cost=0.00..3441205.72 rows=7674495 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00016457 a_12\n>> (cost=0.00..688010.74 rows=1509212 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00016805 a_13\n>> (cost=0.00..145219.14 rows=311402 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00016871 a_14 (cost=0.00..21.40\n>> rows=190 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00016874 a_15\n>> (cost=0.00..478011.62 rows=1031110 width=4)\n>> Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>> -> Seq Scan on aggregate__00017048 
a_16  (cost=0.00..21.40\n>> rows=190 width=4)\n>>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>>              ->  Seq Scan on aggregate__00017049 a_17\n>> (cost=0.00..1792844.42 rows=3164774 width=4)\n>>                     Filter: (((body #>> '{Cost}'::text[]))::bigint >=\n>> 1000000000)\n>>   ->  Hash  (cost=1042.69..1042.69 rows=225 width=4)\n>>         ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n>>               Filter: (source_id = 36)\n>>\n>> And here is an example of the query using the index when ran against a\n>> partition directly\n>>\n>> select landing_id from landing L\n>> where exists\n>> (\n>> select landing_id\n>> from stage.aggregate__00007223 A\n>> WHERE (A.body#>>'{Cost}')::BIGINT >= 1000000000\n>> and L.landing_id = A.Landing_id\n>> )\n>> and L.source_id = 36\n>>\n>> Nested Loop Semi Join  (cost=0.56..3454.75 rows=5 width=4)\n>>   ->  Seq Scan on landing l  (cost=0.00..1042.69 rows=225 width=4)\n>>         Filter: (source_id = 36)\n>>   ->  Index Scan using ix_aggregate__00007223_landing_id_start_datetime\n>> on aggregate__00007223 a  (cost=0.56..359345.74 rows=36173 width=4)\n>>         Index Cond: (landing_id = l.landing_id)\n>>         Filter: (((body #>> '{Cost}'::text[]))::bigint >= 1000000000)\n>>\n>>\n>> The parent table never had rows, and pg_class had relpages=0.  I saw a\n>> suggestion in a different thread about updating this value to greater than\n>> 0 so I tried that but didnt get a different plan.  We have\n>> autovacuum/analyze enabled and also run nightly vacuum/analyze on the\n>> database to keep stats up to date.\n>>\n>> I'm new to troubleshooting partition query performance and not sure what\n>> I am missing here.  Any advice is appreciated.\n>>\n>\n>\n", "msg_date": "Wed, 21 Sep 2016 16:00:31 -0500", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query against single partition uses index, against\n master table does seq scan" }, { "msg_contents": "Mike Broers <[email protected]> writes:\n> Hello, I am curious about the performance of queries against a master table\n> that seem to do seq scans on each child table.  When the same query is\n> issued at a partition directly it uses the partition index and is very\n> fast.\n\nWhat PG version is that?  For me, everything since 9.0 seems to be willing\nto consider the type of plan you're expecting.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 21 Sep 2016 22:11:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against single partition uses index,\n against master table does seq scan" }, { "msg_contents": "This is 9.5, sorry I didn't mention that in the initial post. I am guessing\nthe issue is that the secondary non-indexed criteria is a search through a\njsonb column?\n\nLet me know if I can provide any additional info, as I stated I am working\naround it with a subquery at the moment.  This seems like it may become a\nmore frequent ad-hoc need so if there is something else I can do it would\nbe appreciated.\n\nOn Wed, Sep 21, 2016 at 9:11 PM, Tom Lane <[email protected]> wrote:\n\n> Mike Broers <[email protected]> writes:\n> > Hello, I am curious about the performance of queries against a master\n> table\n> > that seem to do seq scans on each child table.  When the same query is\n> > issued at a partition directly it uses the partition index and is very\n> > fast.\n>\n> What PG version is that?  For me, everything since 9.0 seems to be willing\n> to consider the type of plan you're expecting.\n>\n> regards, tom lane\n>\n", "msg_date": "Wed, 21 Sep 2016 23:09:12 -0500", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query against single partition uses index, against\n master table does seq scan" }, { "msg_contents": "Mike Broers <[email protected]> writes:\n> This is 9.5, sorry I didnt mention that in the initial post.\n\nHmm, that's odd then.\n\n> I am guessing the issue is that the secondary non-indexed criteria is a\n> search through a jsonb column?\n\nDoubt it; it should have considered the plan you are thinking of anyway.\nMaybe it did, but threw it away on some bogus cost estimate. If you could\nproduce a self-contained test case, I'd be willing to take a look.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 12:25:15 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query against single partition uses index,\n against master table does seq scan" } ]
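[Editor's note: Mike's "subquery on the indexed partition" workaround is never spelled out in the thread. A minimal sketch of one way to read it, with table and column names taken from the messages above, is:

WITH candidates AS (
    SELECT A.landing_id, A.body
    FROM stage.aggregate A
    JOIN landing L ON L.landing_id = A.landing_id   -- indexed on every child table
    WHERE L.source_id = 36
)
SELECT DISTINCT landing_id
FROM candidates
WHERE (body#>>'{Cost}')::BIGINT >= 1000000000;

The idea is to let the indexed landing_id join cut the row count first, then apply the non-indexed jsonb filter to the much smaller intermediate set. Whether the 9.5 planner keeps this shape depends on statistics; treat it as an illustration, not the thread author's exact query.]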
[ { "msg_contents": "Hi pgsql-performance list,\n\n\nwhat is the recommended way of doing **multiple-table-spanning joins \nwith ORs in the WHERE-clause**?\n\n\nUntil now, we've used the LEFT OUTER JOIN to filter big_table like so:\n\n\nSELECT DISTINCT <fields of big_table>\nFROM\n \"big_table\"\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \n\"table_a\".\"big_table_id\")\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \n\"table_b\".\"big_table_id\")\nWHERE\n \"table_a\".\"item_id\" IN (<handful of items>)\n OR\n \"table_b\".\"item_id\" IN (<handful of items>);\n\n\nHowever, this results in an awful slow plan (requiring to scan the \ncomplete big_table which obviously isn't optimal).\nSo, we decided (at least for now) to split up the query into two \nseparate ones and merge/de-duplicate the result with application logic:\n\n\nSELECT <fields of big_table>\nFROM\n \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \n\"table_a\".\"big_table_id\")\nWHERE\n \"table_a\".\"item_id\" IN (<handful of items>);\n\n\nSELECT <fields of big_table>\nFROM\n \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \n\"table_b\".\"big_table_id\")\nWHERE\n \"table_b\".\"item_id\" IN (<handful of items>);\n\n\nAs you can imagine we would be very glad to solve this issue with a \nsingle query and without having to re-code existing logic of PostgreSQL. \nBut how?\n\n\nBest,\nSven\n\n\nPS: if you require EXPLAIN ANALYZE, I can post them as well.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 15:24:49 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "> However, this results in an awful slow plan (requiring to scan the\ncomplete big_table which obviously isn't optimal)\n\nYou mean to say there is a sequential scan ? An explain would be helpful.\nAre there indexes on the provided where clauses.\n\nPostgres can do a Bitmap heap scan to combine indexes, there is no need to\nfire two separate queries.\n\nOn Thu, Sep 22, 2016 at 6:54 PM, Sven R. 
Kunze <[email protected]> wrote:\n\n> Hi pgsql-performance list,\n>\n>\n> what is the recommended way of doing **multiple-table-spanning joins with\n> ORs in the WHERE-clause**?\n>\n>\n> Until now, we've used the LEFT OUTER JOIN to filter big_table like so:\n>\n>\n> SELECT DISTINCT <fields of big_table>\n> FROM\n> \"big_table\"\n> LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" =\n> \"table_a\".\"big_table_id\")\n> LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" =\n> \"table_b\".\"big_table_id\")\n> WHERE\n> \"table_a\".\"item_id\" IN (<handful of items>)\n> OR\n> \"table_b\".\"item_id\" IN (<handful of items>);\n>\n>\n> However, this results in an awful slow plan (requiring to scan the\n> complete big_table which obviously isn't optimal).\n> So, we decided (at least for now) to split up the query into two separate\n> ones and merge/de-duplicate the result with application logic:\n>\n>\n> SELECT <fields of big_table>\n> FROM\n> \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" =\n> \"table_a\".\"big_table_id\")\n> WHERE\n> \"table_a\".\"item_id\" IN (<handful of items>);\n>\n>\n> SELECT <fields of big_table>\n> FROM\n> \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" =\n> \"table_b\".\"big_table_id\")\n> WHERE\n> \"table_b\".\"item_id\" IN (<handful of items>);\n>\n>\n> As you can imagine we would be very glad to solve this issue with a single\n> query and without having to re-code existing logic of PostgreSQL. But how?\n>\n>\n> Best,\n> Sven\n>\n>\n> PS: if you require EXPLAIN ANALYZE, I can post them as well.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nRegards,\nMadusudanan.B.N <http://madusudanan.com>\n\n> However, this results in an awful slow plan (requiring to scan the complete big_table which obviously isn't optimal)You mean to say there is a sequential scan ? An explain would be helpful. Are there indexes on the provided where clauses. Postgres can do a Bitmap heap scan to combine indexes, there is no need to fire two separate queries.On Thu, Sep 22, 2016 at 6:54 PM, Sven R. 
Kunze <[email protected]> wrote:Hi pgsql-performance list,\n\n\nwhat is the recommended way of doing **multiple-table-spanning joins with ORs in the WHERE-clause**?\n\n\nUntil now, we've used the LEFT OUTER JOIN to filter big_table like so:\n\n\nSELECT DISTINCT <fields of big_table>\nFROM\n    \"big_table\"\n    LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \"table_a\".\"big_table_id\")\n    LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \"table_b\".\"big_table_id\")\nWHERE\n    \"table_a\".\"item_id\" IN (<handful of items>)\n    OR\n    \"table_b\".\"item_id\" IN (<handful of items>);\n\n\nHowever, this results in an awful slow plan (requiring to scan the complete big_table which obviously isn't optimal).\nSo, we decided (at least for now) to split up the query into two separate ones and merge/de-duplicate the result with application logic:\n\n\nSELECT <fields of big_table>\nFROM\n    \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \"table_a\".\"big_table_id\")\nWHERE\n    \"table_a\".\"item_id\" IN (<handful of items>);\n\n\nSELECT <fields of big_table>\nFROM\n    \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \"table_b\".\"big_table_id\")\nWHERE\n    \"table_b\".\"item_id\" IN (<handful of items>);\n\n\nAs you can imagine we would be very glad to solve this issue with a single query and without having to re-code existing logic of PostgreSQL. But how?\n\n\nBest,\nSven\n\n\nPS: if you require EXPLAIN ANALYZE, I can post them as well.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Regards,Madusudanan.B.N", "msg_date": "Thu, 22 Sep 2016 19:07:17 +0530", "msg_from": "\"Madusudanan.B.N\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Sven R. 
Kunze\r\nSent: Thursday, September 22, 2016 9:25 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\nHi pgsql-performance list,\r\n\r\n\r\nwhat is the recommended way of doing **multiple-table-spanning joins with ORs in the WHERE-clause**?\r\n\r\n\r\nUntil now, we've used the LEFT OUTER JOIN to filter big_table like so:\r\n\r\n\r\nSELECT DISTINCT <fields of big_table>\r\nFROM\r\n \"big_table\"\r\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \r\n\"table_a\".\"big_table_id\")\r\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>)\r\n OR\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nHowever, this results in an awful slow plan (requiring to scan the \r\ncomplete big_table which obviously isn't optimal).\r\nSo, we decided (at least for now) to split up the query into two \r\nseparate ones and merge/de-duplicate the result with application logic:\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \r\n\"table_a\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nAs you can imagine we would be very glad to solve this issue with a \r\nsingle query and without having to re-code existing logic of PostgreSQL. \r\nBut how?\r\n\r\n\r\nBest,\r\nSven\r\n\r\n\r\nPS: if you require EXPLAIN ANALYZE, I can post them as well.\r\n\r\n______________________________________________________________________________________________\r\n\r\nWhat about:\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \r\n\"table_a\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>)\r\nUNION\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 14:32:53 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "\r\n-----Original Message-----\r\nFrom: Igor Neyman \r\nSent: Thursday, September 22, 2016 10:33 AM\r\nTo: 'Sven R. Kunze' <[email protected]>; [email protected]\r\nSubject: RE: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Sven R. 
Kunze\r\nSent: Thursday, September 22, 2016 9:25 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\nHi pgsql-performance list,\r\n\r\n\r\nwhat is the recommended way of doing **multiple-table-spanning joins with ORs in the WHERE-clause**?\r\n\r\n\r\nUntil now, we've used the LEFT OUTER JOIN to filter big_table like so:\r\n\r\n\r\nSELECT DISTINCT <fields of big_table>\r\nFROM\r\n \"big_table\"\r\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" =\r\n\"table_a\".\"big_table_id\")\r\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" =\r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>)\r\n OR\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nHowever, this results in an awful slow plan (requiring to scan the complete big_table which obviously isn't optimal).\r\nSo, we decided (at least for now) to split up the query into two separate ones and merge/de-duplicate the result with application logic:\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" =\r\n\"table_a\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nAs you can imagine we would be very glad to solve this issue with a \r\nsingle query and without having to re-code existing logic of PostgreSQL. \r\nBut how?\r\n\r\n\r\nBest,\r\nSven\r\n\r\n\r\nPS: if you require EXPLAIN ANALYZE, I can post them as well.\r\n\r\n______________________________________________________________________________________________\r\n\r\nAnother option to try::\r\n\r\n\r\nSELECT DISTINCT <fields of big_table>\r\nFROM\r\n \"big_table\"\r\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \"table_a\".\"big_table_id\" AND \"table_a\".\"item_id\" IN (<handful of items>))\r\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \"table_b\".\"big_table_id\" AND \"table_b\".\"item_id\" IN (<handful of items>));\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 14:36:24 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Igor Neyman\r\nSent: Thursday, September 22, 2016 10:36 AM\r\nTo: Sven R. Kunze <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\n\r\n-----Original Message-----\r\nFrom: Igor Neyman \r\nSent: Thursday, September 22, 2016 10:33 AM\r\nTo: 'Sven R. Kunze' <[email protected]>; [email protected]\r\nSubject: RE: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Sven R. 
Kunze\r\nSent: Thursday, September 22, 2016 9:25 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Multiple-Table-Spanning Joins with ORs in WHERE Clause\r\n\r\nHi pgsql-performance list,\r\n\r\n\r\nwhat is the recommended way of doing **multiple-table-spanning joins with ORs in the WHERE-clause**?\r\n\r\n\r\nUntil now, we've used the LEFT OUTER JOIN to filter big_table like so:\r\n\r\n\r\nSELECT DISTINCT <fields of big_table>\r\nFROM\r\n \"big_table\"\r\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" =\r\n\"table_a\".\"big_table_id\")\r\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" =\r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>)\r\n OR\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nHowever, this results in an awful slow plan (requiring to scan the complete big_table which obviously isn't optimal).\r\nSo, we decided (at least for now) to split up the query into two separate ones and merge/de-duplicate the result with application logic:\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" =\r\n\"table_a\".\"big_table_id\")\r\nWHERE\r\n \"table_a\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nSELECT <fields of big_table>\r\nFROM\r\n \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \r\n\"table_b\".\"big_table_id\")\r\nWHERE\r\n \"table_b\".\"item_id\" IN (<handful of items>);\r\n\r\n\r\nAs you can imagine we would be very glad to solve this issue with a \r\nsingle query and without having to re-code existing logic of PostgreSQL. \r\nBut how?\r\n\r\n\r\nBest,\r\nSven\r\n\r\n\r\nPS: if you require EXPLAIN ANALYZE, I can post them as well.\r\n\r\n______________________________________________________________________________________________\r\n\r\nAnother option to try::\r\n\r\n\r\nSELECT DISTINCT <fields of big_table>\r\nFROM\r\n \"big_table\"\r\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \"table_a\".\"big_table_id\" AND \"table_a\".\"item_id\" IN (<handful of items>))\r\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \"table_b\".\"big_table_id\" AND \"table_b\".\"item_id\" IN (<handful of items>));\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n_______________________________________________________________________________________________________\r\n\r\nPlease disregard this last suggestion, it'll not produce required results.\r\n\r\nSolution using UNION should work.\r\n\r\nRegards,\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 22 Sep 2016 14:39:44 +0000", "msg_from": "Igor Neyman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On Thu, Sep 22, 2016 at 6:37 AM, Madusudanan.B.N <[email protected]>\nwrote:\n\n> > However, this results in an awful slow plan (requiring to scan the\n> complete big_table which obviously isn't optimal)\n>\n> You mean to say there is a sequential scan ? 
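A side note on the UNION rewrite above: UNION deduplicates its result, which is what lets it replace the DISTINCT of the original query. If the application can guarantee that no big_table row ever matches both table_a and table_b, a UNION ALL variant (a sketch under that assumption, not something tested in this thread) saves the deduplication step:

SELECT <fields of big_table>
FROM big_table
     INNER JOIN table_a ON (big_table.id = table_a.big_table_id)
WHERE table_a.item_id IN (<handful of items>)
UNION ALL  -- only equivalent to UNION if the two branches are disjoint
SELECT <fields of big_table>
FROM big_table
     INNER JOIN table_b ON (big_table.id = table_b.big_table_id)
WHERE table_b.item_id IN (<handful of items>);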
An explain would be helpful.\n> Are there indexes on the provided where clauses.\n>\n> Postgres can do a Bitmap heap scan to combine indexes, there is no need to\n> fire two separate queries.\n>\n\nIt can't combine bitmap scans that come from different tables.\n\nBut he can just combine the two queries into one, with a UNION.\n\nCheers,\n\nJeff\n", "msg_date": "Thu, 22 Sep 2016 09:20:38 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "Thanks a lot Madusudanan, Igor, Lutz and Jeff for your suggestions.\n\nWhat I can confirm is that the UNION idea runs extremely fast (don't \nhave access to the db right now to test the subquery idea, but will \ncheck next week as I travel right now). Thanks again! :)\n\n\nI was wondering: would it be possible for PostgreSQL to rewrite the \nquery to generate the UNION (or subquery plan if it's also fast) on its \nown?\n\n\nThanks,\nSven\n\nOn 22.09.2016 16:44, lfischer wrote:\n> Hi Sven\n>\n> Why not do something like\n>\n> SELECT * FROM big_table\n> WHERE\n> id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" \n> IN (<handful of items>))\n> OR\n> id in (SELECT big_table_id FROM table_b WHERE \"table_b\".\"item_id\" \n> IN (<handful of items>))\n>\n> that way you don't need the \"distinct\" and therefore there should be \n> less comparison going on.\n>\n> Lutz\n>\n> On 22/09/16 14:24, Sven R. Kunze wrote:\n>> Hi pgsql-performance list,\n>>\n>>\n>> what is the recommended way of doing **multiple-table-spanning joins \n>> with ORs in the WHERE-clause**?\n>>\n>>\n>> Until now, we've used the LEFT OUTER JOIN to filter big_table like so:\n>>\n>>\n>> SELECT DISTINCT <fields of big_table>\n>> FROM\n>> \"big_table\"\n>> LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \n>> \"table_a\".\"big_table_id\")\n>> LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \n>> \"table_b\".\"big_table_id\")\n>> WHERE\n>> \"table_a\".\"item_id\" IN (<handful of items>)\n>> OR\n>> \"table_b\".\"item_id\" IN (<handful of items>);\n>>\n>>\n>> However, this results in an awful slow plan (requiring to scan the \n>> complete big_table which obviously isn't optimal).\n>> So, we decided (at least for now) to split up the query into two \n>> separate ones and merge/de-duplicate the result with application logic:\n>>\n>>\n>> SELECT <fields of big_table>\n>> FROM\n>> \"big_table\" INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \n>> \"table_a\".\"big_table_id\")\n>> WHERE\n>> \"table_a\".\"item_id\" IN (<handful of items>);\n>>\n>>\n>> SELECT <fields of big_table>\n>> FROM\n>> \"big_table\" INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \n>> \"table_b\".\"big_table_id\")\n>> WHERE\n>> \"table_b\".\"item_id\" IN (<handful of items>);\n>>\n>>\n>> As you can imagine we would be very glad to solve this issue with a \n>> single query and without having to re-code existing logic of \n>> PostgreSQL. 
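To make the bitmap-scan point above concrete: PostgreSQL can OR together bitmap index scans, but only when all predicates sit on the same table. A minimal sketch with hypothetical columns and indexes (none of these names come from the thread):

CREATE INDEX big_table_foo_idx ON big_table (foo);  -- hypothetical column/index
CREATE INDEX big_table_bar_idx ON big_table (bar);  -- hypothetical column/index

EXPLAIN SELECT * FROM big_table WHERE foo = 1 OR bar = 2;
-- expected plan shape:
--   Bitmap Heap Scan on big_table
--     -> BitmapOr
--          -> Bitmap Index Scan on big_table_foo_idx
--          -> Bitmap Index Scan on big_table_bar_idx

No analogous combination exists when the OR spans indexes of two different tables, which is why the original query falls back to scanning big_table.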
But how?\n>>\n>>\n>> Best,\n>> Sven\n>>\n>>\n>> PS: if you require EXPLAIN ANALYZE, I can post them as well.\n>>\n>>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Sep 2016 08:35:09 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On 23.09.2016 11:00, Pavel Stehule wrote:\n> 2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected] \n> <mailto:[email protected]>>:\n>\n> I was wondering: would it be possible for PostgreSQL to rewrite\n> the query to generate the UNION (or subquery plan if it's also\n> fast) on it's own?\n>\n>\n> It depends on real data. On your specific data the UNION variant is \n> pretty fast, on different set, the UNION can be pretty slow. It is \n> related to difficult OR predicate estimation.\n\nI figure that the UNION is fast if the sub-results are small (which they \nare in our case). On the contrary, when they are huge, the OUTER JOIN \nvariant might be preferable.\n\n\nIs there something I can do to help here?\n\nOr do you think it's naturally application-dependent and thus should be \nsolved with application logic just as we did?\n\n\nCheers,\nSven\n\n\n\n\n\n\nOn 23.09.2016 11:00, Pavel Stehule\n wrote:\n\n\n2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected]>:\n\n\nI was\n wondering: would it be possible for PostgreSQL to rewrite\n the query to generate the UNION (or subquery plan if it's\n also fast) on it's own?\n\n\n\nIt depends on real data. On your specific data the\n UNION variant is pretty fast, on different set, the UNION\n can be pretty slow. It is related to difficult OR\n predicate estimation.\n\n\n\n\n\n I figure that the UNION is fast if the sub-results are small (which\n they are in our case). On the contrary, when they are huge, the\n OUTER JOIN variant might be preferable.\n\n\n Is there something I can do to help here?\n\n Or do you think it's naturally application-dependent and thus should\n be solved with application logic just as we did?\n\n\n Cheers,\n Sven", "msg_date": "Thu, 29 Sep 2016 14:20:31 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On Thu, Sep 22, 2016 at 11:35 PM, Sven R. Kunze <[email protected]> wrote:\n\n> Thanks a lot Madusudanan, Igor, Lutz and Jeff for your suggestions.\n>\n> What I can confirm is that the UNION ideas runs extremely fast (don't have\n> access to the db right now to test the subquery idea, but will check next\n> week as I travel right now). Thanks again! :)\n>\n>\n> I was wondering: would it be possible for PostgreSQL to rewrite the query\n> to generate the UNION (or subquery plan if it's also fast) on it's own?\n>\n\nI don't know what the subquery plan is, I don't see references to that in\nthe email chain.\n\nI don't believe that current versions of PostgreSQL are capable of\nrewriting the plan in the style of a union. It is not just a matter of\ntweaking the cost estimates, it simply never considers such a plan in the\nfirst place given the text of your query.\n\nPerhaps some future version of PostgreSQL could do so, but my gut feeling\nis that that is not very likely. 
It would take a lot of work, would risk\nbreaking or slowing down other things, and is probably too much of a niche\nissue to attract a lot of interest.\n\nWhy not just use the union? Are you using a framework which generates the\nquery automatically and you have no control over it? Or do you just think\nit is ugly or fragile for some other reason?\n\nPerhaps moving the union from the outside to the inside would be more\nsuitable? That way teh select list is only specified once, and if you AND\nmore clauses into the WHERE condition they also only need to be specified\nonce.\n\nSELECT * FROM big_table\nWHERE\n id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" IN\n(<handful of items>) union\n SELECT big_table_id FROM table_a WHERE \"table_b\".\"item_id\" IN\n(<handful of items>)\n );\n\n\nCheers,\n\nJeff\n\nOn Thu, Sep 22, 2016 at 11:35 PM, Sven R. Kunze <[email protected]> wrote:Thanks a lot Madusudanan, Igor, Lutz and Jeff for your suggestions.\n\nWhat I can confirm is that the UNION ideas runs extremely fast (don't have access to the db right now to test the subquery idea, but will check next week as I travel right now). Thanks again! :)\n\n\nI was wondering: would it be possible for PostgreSQL to rewrite the query to generate the UNION (or subquery plan if it's also fast) on it's own?I don't know what the subquery plan is, I don't see references to that in the email chain.I don't believe that current versions of PostgreSQL are capable of rewriting the plan in the style of a union.  It is not just a matter of tweaking the cost estimates, it simply never considers such a plan in the first place given the text of your query.Perhaps some future version of PostgreSQL could do so, but my gut feeling is that that is not very likely.  It would take a lot of work, would risk breaking or slowing down other things, and is probably too much of a niche issue to attract a lot of interest.Why not just use the union?  Are you using a framework which generates the query automatically and you have no control over it?  Or do you just think it is ugly or fragile for some other reason?Perhaps moving the union from the outside to the inside would be more suitable?  That way teh select list is only specified once, and if you AND more clauses into the WHERE condition they also only need to be specified once.SELECT * FROM big_tableWHERE     id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" IN (<handful of items>) union              SELECT big_table_id FROM table_a WHERE \"table_b\".\"item_id\" IN (<handful of items>)      );Cheers,Jeff", "msg_date": "Thu, 29 Sep 2016 11:03:16 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "2016-09-29 14:20 GMT+02:00 Sven R. Kunze <[email protected]>:\n\n> On 23.09.2016 11:00, Pavel Stehule wrote:\n>\n> 2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected]>:\n>\n>> I was wondering: would it be possible for PostgreSQL to rewrite the query\n>> to generate the UNION (or subquery plan if it's also fast) on it's own?\n>>\n>\n> It depends on real data. On your specific data the UNION variant is pretty\n> fast, on different set, the UNION can be pretty slow. It is related to\n> difficult OR predicate estimation.\n>\n>\n> I figure that the UNION is fast if the sub-results are small (which they\n> are in our case). 
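A note on the inner-union sketch above: its second branch presumably means table_b rather than table_a, and because IN (...) only tests membership, the inner UNION can be relaxed to UNION ALL without changing the result. A corrected sketch under those assumptions:

SELECT * FROM big_table
WHERE id IN (SELECT big_table_id FROM table_a WHERE item_id IN (<handful of items>)
             UNION ALL
             SELECT big_table_id FROM table_b WHERE item_id IN (<handful of items>));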
On the contrary, when they are huge, the OUTER JOIN\n> variant might be preferable.\n>\n>\n> Is there something I can do to help here?\n>\n> Or do you think it's naturally application-dependent and thus should be\n> solved with application logic just as we did?\n>\n\nIn ideal world then plan should be independent on used form. The most\ndifficult is safe estimation of OR predicates. With correct estimation the\ntransformation to UNION form should not be necessary I am think.\n\nRegards\n\nPavel\n\n\n>\n> Cheers,\n> Sven\n>\n\n2016-09-29 14:20 GMT+02:00 Sven R. Kunze <[email protected]>:\n\nOn 23.09.2016 11:00, Pavel Stehule\n wrote:\n\n\n2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected]>:\n\n\nI was\n wondering: would it be possible for PostgreSQL to rewrite\n the query to generate the UNION (or subquery plan if it's\n also fast) on it's own?\n\n\n\nIt depends on real data. On your specific data the\n UNION variant is pretty fast, on different set, the UNION\n can be pretty slow. It is related to difficult OR\n predicate estimation.\n\n\n\n\n\n I figure that the UNION is fast if the sub-results are small (which\n they are in our case). On the contrary, when they are huge, the\n OUTER JOIN variant might be preferable.\n\n\n Is there something I can do to help here?\n\n Or do you think it's naturally application-dependent and thus should\n be solved with application logic just as we did?In ideal world then plan should be independent on used form. The most difficult is safe estimation of OR predicates. With correct estimation the transformation to UNION form should not be necessary I am think. RegardsPavel\n\n\n Cheers,\n Sven", "msg_date": "Thu, 29 Sep 2016 20:12:58 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "Hi Jeff,\n\nOn 29.09.2016 20:03, Jeff Janes wrote:\n> I don't know what the subquery plan is, I don't see references to that \n> in the email chain.\n\nLutz posted the following solution:\n\nSELECT * FROM big_table\nWHERE\n id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" \nIN (<handful of items>))\n OR\n id in (SELECT big_table_id FROM table_a WHERE \"table_b\".\"item_id\" \nIN (<handful of items>))\n\n> I don't believe that current versions of PostgreSQL are capable of \n> rewriting the plan in the style of a union. It is not just a matter \n> of tweaking the cost estimates, it simply never considers such a plan \n> in the first place given the text of your query.\n\nThat's okay and that's why I am asking here. :)\n\n> Perhaps some future version of PostgreSQL could do so, but my gut \n> feeling is that that is not very likely. It would take a lot of work, \n> would risk breaking or slowing down other things, and is probably too \n> much of a niche issue to attract a lot of interest.\n\nI don't hope so; in business and reports/stats applications there is a \nlot of room for this.\n\nWhy do you think that OR-ing several tables is a niche issue? I can at \nleast name 3 different projects (from 3 different domains) where \ncombining 3 or more tables with OR is relevant and should be reasonably \nfast.\n\nMost domains that could benefit would probably have star-like schemas. \nSo, big_table corresponds to the center of the star, whereas the rays \ncorrespond to various (even dynamic) extensions to the base data structure.\n\n> Why not just use the union?\n\nSure that would work in this particular case. 
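For readers who want to see the estimation difficulty Pavel refers to, it shows up directly in EXPLAIN ANALYZE output as a gap between estimated and actual row counts; in the plans posted later in this thread, for instance, the join node of the OR query is estimated at 79 rows but actually returns 6 after filtering away 901355:

EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT <fields of big_table>
FROM big_table
     LEFT OUTER JOIN table_a ON (big_table.id = table_a.big_table_id)
     LEFT OUTER JOIN table_b ON (big_table.id = table_b.big_table_id)
WHERE table_a.item_id IN (<handful of items>)
   OR table_b.item_id IN (<handful of items>);
-- compare "rows=" inside the cost parentheses (the estimate) with "rows="
-- inside the actual-time parentheses on each node; large gaps mark the
-- hard-to-estimate OR predicate being discussed.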
However, this thread \nactually sought a general answer to \"how to OR more than two tables\".\n\n> Are you using a framework which generates the query automatically and \n> you have no control over it?\n\nWe use a framework and we can use the UNION if we want to.\n\n> Or do you just think it is ugly or fragile for some other reason?\n\nI don't think it's ugly or fragile. I am just used to the fact that **if \nit's equivalent** then PostgreSQL can figure it out (without constant \nsupervision from application developers).\n\nSo, it's just a matter of inconvenience. ;)\n\n> Perhaps moving the union from the outside to the inside would be more \n> suitable? That way teh select list is only specified once, and if you \n> AND more clauses into the WHERE condition they also only need to be \n> specified once.\n>\n> SELECT * FROM big_table\n> WHERE\n> id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" \n> IN (<handful of items>) union\n> SELECT big_table_id FROM table_a WHERE \"table_b\".\"item_id\" IN \n> (<handful of items>)\n> );\n\nYet another solution I guess, so thanks a lot. :)\n\nThis multitude of solution also shows that applications developers might \nbe overwhelmed by choosing the most appropriate AND most long-lasting \none. Because what I take from the discussion is that a UNION might be \nappropriate right now but that could change in the future even for the \nvery same use-case at hand.\n\nCheers,\nSven\n\n\n\n\n\n\nHi Jeff,\n\nOn 29.09.2016 20:03, Jeff Janes wrote:\n\n\n\n\nI don't know what the subquery plan\n is, I don't see references to that in the email chain.\n\n\n\n\n Lutz posted the following solution:\n\n SELECT * FROM big_table\n \n WHERE\n \n      id in (SELECT big_table_id FROM table_a WHERE\n \"table_a\".\"item_id\" IN (<handful of items>))\n \n     OR\n \n      id in (SELECT big_table_id FROM table_a WHERE\n \"table_b\".\"item_id\" IN (<handful of items>))\n \n\n\n\n\n\nI don't believe that current versions of PostgreSQL are\n capable of rewriting the plan in the style of a union.  It\n is not just a matter of tweaking the cost estimates, it\n simply never considers such a plan in the first place\n given the text of your query.\n\n\n\n\n\n That's okay and that's why I am asking here. :)\n\n\n\n\n\nPerhaps some future version of PostgreSQL could do so,\n but my gut feeling is that that is not very likely.  It\n would take a lot of work, would risk breaking or slowing\n down other things, and is probably too much of a niche\n issue to attract a lot of interest.\n\n\n\n\n\n I don't hope so; in business and reports/stats applications there is\n a lot of room for this.\n\n Why do you think that OR-ing several tables is a niche issue? I can\n at least name 3 different projects (from 3 different domains) where\n combining 3 or more tables with OR is relevant and should be\n reasonably fast.\n\n Most domains that could benefit would probably have star-like\n schemas. So, big_table corresponds to the center of the star,\n whereas the rays correspond to various (even dynamic) extensions to\n the base data structure.\n\n\n\n\n\nWhy not just use the union?\n\n\n\n\n\n Sure that would work in this particular case. 
However, this thread\n actually sought a general answer to \"how to OR more than two\n tables\".\n\n\n\n\n\nAre you using a framework which generates the query\n automatically and you have no control over it?\n\n\n\n\n\n We use a framework and we can use the UNION if we want to.\n\n\n\n\n\nOr do you just think it is ugly or fragile for some\n other reason?\n\n\n\n\n\n I don't think it's ugly or fragile. I am just used to the fact that\n **if it's equivalent** then PostgreSQL can figure it out (without\n constant supervision from application developers).\n\n So, it's just a matter of inconvenience. ;)\n\n\n\n\n\nPerhaps moving the union from the outside to the inside\n would be more suitable?  That way teh select list is only\n specified once, and if you AND more clauses into the WHERE\n condition they also only need to be specified once.\n\n\nSELECT\n * FROM big_table\nWHERE\n     id in\n (SELECT big_table_id FROM table_a WHERE\n \"table_a\".\"item_id\" IN (<handful of items>) union \n       \n      SELECT big_table_id FROM table_a WHERE\n \"table_b\".\"item_id\" IN (<handful of items>)\n     \n );\n\n\n\n\n\n\n Yet another solution I guess, so thanks a lot. :) \n\n This multitude of solution also shows that applications developers\n might be overwhelmed by choosing the most appropriate AND most\n long-lasting one. Because what I take from the discussion is that a\n UNION might be appropriate right now but that could change in the\n future even for the very same use-case at hand.\n\n Cheers,\n Sven", "msg_date": "Thu, 29 Sep 2016 20:48:01 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On 29.09.2016 20:12, Pavel Stehule wrote:\n> In ideal world then plan should be independent on used form. The most \n> difficult is safe estimation of OR predicates. With correct estimation \n> the transformation to UNION form should not be necessary I am think.\n\nAh, okay. That's interesting.\n\nSo how can I help here?\n\nRegards,\nSven\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Sep 2016 20:49:32 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "2016-09-29 20:49 GMT+02:00 Sven R. Kunze <[email protected]>:\n\n> On 29.09.2016 20:12, Pavel Stehule wrote:\n>\n>> In ideal world then plan should be independent on used form. The most\n>> difficult is safe estimation of OR predicates. With correct estimation the\n>> transformation to UNION form should not be necessary I am think.\n>>\n>\n> Ah, okay. That's interesting.\n>\n> So how can I help here?\n>\n\ntry to write a patch :) or better, help with enhancing PostgreSQL's\nestimation model. Tomas Vondra is working 2 years on multicolumn\nstatistics. He needs help with review.\n\nRegards\n\nPavel\n\n>\n> Regards,\n> Sven\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2016-09-29 20:49 GMT+02:00 Sven R. Kunze <[email protected]>:On 29.09.2016 20:12, Pavel Stehule wrote:\n\nIn ideal world then plan should be independent on used form. The most difficult is safe estimation of OR predicates. 
With correct estimation the transformation to UNION form should not be necessary I am think.\n\n\nAh, okay. That's interesting.\n\nSo how can I help here?try to write a patch :) or better, help with enhancing PostgreSQL's estimation model. Tomas Vondra is working 2 years on multicolumn statistics. He needs help with review.RegardsPavel \n\nRegards,\nSven\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 29 Sep 2016 21:11:34 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On Thu, Sep 29, 2016 at 11:48 AM, Sven R. Kunze <[email protected]> wrote:\n\n> On 29.09.2016 20:03, Jeff Janes wrote:\n>\n> Perhaps some future version of PostgreSQL could do so, but my gut feeling\n> is that that is not very likely. It would take a lot of work, would risk\n> breaking or slowing down other things, and is probably too much of a niche\n> issue to attract a lot of interest.\n>\n>\n> I don't hope so; in business and reports/stats applications there is a lot\n> of room for this.\n>\n> Why do you think that OR-ing several tables is a niche issue? I can at\n> least name 3 different projects (from 3 different domains) where combining\n> 3 or more tables with OR is relevant and should be reasonably fast.\n>\n\nWell, I don't recall seeing this issue on this list before (or a few other\nforums I read) while I see several other issues over and over again. So\nthat is why I think it is a niche issue. Perhaps I've have seen it before\nand just forgotten, or have not recognized it as being the same issue each\ntime.\n\n\n\n> This multitude of solution also shows that applications developers might\n> be overwhelmed by choosing the most appropriate AND most long-lasting one.\n> Because what I take from the discussion is that a UNION might be\n> appropriate right now but that could change in the future even for the very\n> same use-case at hand.\n>\n\nI'm not sure what would cause it to change. Do you mean if you suddenly\nstart selecting a much larger portion of the table? I don't know that the\nunion would be particularly bad in that case, either.\n\nI'm not saying it wouldn't be nice to fix it. I just don't think it is\nparticularly likely to happen soon. I could be wrong (especially if you\ncan write the code to make it happen).\n\nCheers,\n\nJeff\n\nOn Thu, Sep 29, 2016 at 11:48 AM, Sven R. Kunze <[email protected]> wrote:\n\nOn 29.09.2016 20:03, Jeff Janes wrote:\n\n\n\n\nPerhaps some future version of PostgreSQL could do so,\n but my gut feeling is that that is not very likely.  It\n would take a lot of work, would risk breaking or slowing\n down other things, and is probably too much of a niche\n issue to attract a lot of interest.\n\n\n\n\n\n I don't hope so; in business and reports/stats applications there is\n a lot of room for this.\n\n Why do you think that OR-ing several tables is a niche issue? I can\n at least name 3 different projects (from 3 different domains) where\n combining 3 or more tables with OR is relevant and should be\n reasonably fast.Well, I don't recall seeing this issue on this list before (or a few other forums I read) while I see several other issues over and over again.  So that is why I think it is a niche issue.  Perhaps I've have seen it before and just forgotten, or have not recognized it as being the same issue each time. 
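For reference, the multicolumn-statistics work mentioned above eventually shipped in PostgreSQL 10 as CREATE STATISTICS, well after this thread; a minimal example of the resulting syntax (the statistics name and column pair are illustrative only):

CREATE STATISTICS table_a_dep_stats (dependencies)
    ON big_table_id, item_id FROM table_a;
ANALYZE table_a;  -- extended statistics are gathered during ANALYZE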
\n This multitude of solution also shows that applications developers\n might be overwhelmed by choosing the most appropriate AND most\n long-lasting one. Because what I take from the discussion is that a\n UNION might be appropriate right now but that could change in the\n future even for the very same use-case at hand.I'm not sure what would cause it to change.  Do you mean if you suddenly start selecting a much larger portion of the table?  I don't know that the union would be particularly bad in that case, either.I'm not saying it wouldn't be nice to fix it.  I just don't think it is particularly likely to happen soon.  I could be wrong (especially if you can write the code to make it happen).Cheers,Jeff", "msg_date": "Thu, 29 Sep 2016 13:26:09 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On Thu, Sep 29, 2016 at 11:12 AM, Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> 2016-09-29 14:20 GMT+02:00 Sven R. Kunze <[email protected]>:\n>\n>> On 23.09.2016 11:00, Pavel Stehule wrote:\n>>\n>> 2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected]>:\n>>\n>>> I was wondering: would it be possible for PostgreSQL to rewrite the\n>>> query to generate the UNION (or subquery plan if it's also fast) on it's\n>>> own?\n>>>\n>>\n>> It depends on real data. On your specific data the UNION variant is\n>> pretty fast, on different set, the UNION can be pretty slow. It is related\n>> to difficult OR predicate estimation.\n>>\n>>\n>> I figure that the UNION is fast if the sub-results are small (which they\n>> are in our case). On the contrary, when they are huge, the OUTER JOIN\n>> variant might be preferable.\n>>\n>>\n>> Is there something I can do to help here?\n>>\n>> Or do you think it's naturally application-dependent and thus should be\n>> solved with application logic just as we did?\n>>\n>\n> In ideal world then plan should be independent on used form. The most\n> difficult is safe estimation of OR predicates. With correct estimation the\n> transformation to UNION form should not be necessary I am think.\n>\n\nI don't think it is an estimation issue. If it were, the planner would\nalways choose the same inefficient plan (providing the join collapse\nlimits, etc. don't come into play, which I don't think they do here) for\nall the different ways of writing the query.\n\nSince that is not happening, the planner must not be able to prove that the\ndifferent queries are semantically identical to each other, which means\nthat it can't pick the other plan no matter how good the estimates look.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 29, 2016 at 11:12 AM, Pavel Stehule <[email protected]> wrote:2016-09-29 14:20 GMT+02:00 Sven R. Kunze <[email protected]>:\n\nOn 23.09.2016 11:00, Pavel Stehule\n wrote:\n\n\n2016-09-23 8:35 GMT+02:00 Sven R. Kunze <[email protected]>:\n\n\nI was\n wondering: would it be possible for PostgreSQL to rewrite\n the query to generate the UNION (or subquery plan if it's\n also fast) on it's own?\n\n\n\nIt depends on real data. On your specific data the\n UNION variant is pretty fast, on different set, the UNION\n can be pretty slow. It is related to difficult OR\n predicate estimation.\n\n\n\n\n\n I figure that the UNION is fast if the sub-results are small (which\n they are in our case). 
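One concrete reason the forms are hard to prove equivalent (an illustration, not taken from the thread): without the DISTINCT, the join form emits one output row per matching child row, while the IN form emits each big_table row at most once. If two table_a rows reference the same big_table row:

SELECT b.*
FROM big_table b
     JOIN table_a a ON (a.big_table_id = b.id)
WHERE a.item_id IN (<handful of items>);   -- may return the same b twice

SELECT b.*
FROM big_table b
WHERE b.id IN (SELECT big_table_id FROM table_a
               WHERE item_id IN (<handful of items>));  -- returns b once

Any automatic rewrite has to prove that the DISTINCT in the original query makes such differences invisible.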
On the contrary, when they are huge, the\n OUTER JOIN variant might be preferable.\n\n\n Is there something I can do to help here?\n\n Or do you think it's naturally application-dependent and thus should\n be solved with application logic just as we did?In ideal world then plan should be independent on used form. The most difficult is safe estimation of OR predicates. With correct estimation the transformation to UNION form should not be necessary I am think. I don't think it is an estimation issue.  If it were, the planner would always choose the same inefficient plan (providing the join collapse limits, etc. don't come into play, which I don't think they do here) for all the different ways of writing the query.Since that is not happening, the planner must not be able to prove that the different queries are semantically identical to each other, which means that it can't pick the other plan no matter how good the estimates look.Cheers,Jeff", "msg_date": "Thu, 29 Sep 2016 13:35:33 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "On 29.09.2016 22:26, Jeff Janes wrote:\n> Well, I don't recall seeing this issue on this list before (or a few \n> other forums I read) while I see several other issues over and over \n> again. So that is why I think it is a niche issue. Perhaps I've have \n> seen it before and just forgotten, or have not recognized it as being \n> the same issue each time.\n\nUnderstood.\n\n>\n> This multitude of solution also shows that applications developers\n> might be overwhelmed by choosing the most appropriate AND most\n> long-lasting one. Because what I take from the discussion is that\n> a UNION might be appropriate right now but that could change in\n> the future even for the very same use-case at hand.\n>\n>\n> I'm not sure what would cause it to change. Do you mean if you \n> suddenly start selecting a much larger portion of the table? I don't \n> know that the union would be particularly bad in that case, either.\n\nNot suddenly but gradually. Data can change and we don't know for sure \nhow people will use our systems in the future. Hence, another plan would \nbe more optimal or even a seq scan on big_table would be faster.\n\nIn the case at hand, I doubt it but you never know.\n\n> I'm not saying it wouldn't be nice to fix it. I just don't think it \n> is particularly likely to happen soon. I could be wrong (especially \n> if you can write the code to make it happen).\n\nI have been thinking about this. It would be an interesting exercise as \nI haven't written much of C in the last decade but sometimes one needs \nto get out of the comfort zone to get things going.\n\n\nSven\n\n\n\n\n\n\nOn 29.09.2016 22:26, Jeff Janes wrote:\n\n\n\n\nWell, I don't recall seeing this\n issue on this list before (or a few other forums I read)\n while I see several other issues over and over again.  So\n that is why I think it is a niche issue.  Perhaps I've have\n seen it before and just forgotten, or have not recognized it\n as being the same issue each time.\n\n\n\n\n Understood.\n\n\n\n\n\n \n\n\n This multitude of solution also shows that applications\n developers might be overwhelmed by choosing the most\n appropriate AND most long-lasting one. 
Because what I\n take from the discussion is that a UNION might be\n appropriate right now but that could change in the\n future even for the very same use-case at hand.\n\n\n\n\nI'm not sure what would cause it to change.  Do you\n mean if you suddenly start selecting a much larger portion\n of the table?  I don't know that the union would be\n particularly bad in that case, either.\n\n\n\n\n\n Not suddenly but gradually. Data can change and we don't know for\n sure how people will use our systems in the future. Hence, another\n plan would be more optimal or even a seq scan on big_table would be\n faster.\n\n In the case at hand, I doubt it but you never know.\n\n\n\n\n\nI'm not saying it wouldn't be nice to fix it.  I just\n don't think it is particularly likely to happen soon.  I\n could be wrong (especially if you can write the code to\n make it happen).\n\n\n\n\n\n I have been thinking about this. It would be an interesting exercise\n as I haven't written much of C in the last decade but sometimes one\n needs to get out of the comfort zone to get things going.\n\n\n Sven", "msg_date": "Fri, 30 Sep 2016 13:22:44 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" }, { "msg_contents": "Now I found time to investigate all proposed queries side by side. Here \nare the results (warmup + multiple executions). TL;DR - Jeff's proposed \nanswer performs significantly faster with our data than any other \nsolution (both planning and execution time).\n\n\nI have no real idea how PostgreSQL does query rewriting but I guess the \nfollowing steps (and reverse ones) are necessary:\n\n1) detect \"DISTINCT+LEFT OUTER JOIN\" and rewrite to \"SUBQUERY\"\n\n2) detect \"MUTUAL JOIN ON KEY + OR\" and rewrite to \"UNION\"\n\n3) detect \"MUTUAL IN KEY+ OR\" and rewrite to \"UNION\"\n\n4) detect \"UNION + MUTUAL JOIN ON KEY\" and rewrite to \"SUBQUERY + UNION\"\n\n\nDoing (1+2) or (3+4) would result in the optimal query. To (1+2) seems \neasier to do, although a \"common SELECT lift up\"/\"UNION push down\" (if \nthat's even the correct name) would also be great to have (that's 4)). \nIs this somehow correct?\n\n\nRegarding cost estimation: it seems like PostgreSQL is clever enough \nhere. 
So, I tend to agree with Jeff that this is not an issue with cost \nestimation.\n\n\n---- DISTINCT + LEFT OUTER JOIN\n\n\nexplain analyze\nSELECT distinct <columns of big_table>\nFROM \"big_table\"\n LEFT OUTER JOIN \"table_a\" ON (\"big_table\".\"id\" = \n\"table_a\".\"big_table_id\")\n LEFT OUTER JOIN \"table_b\" ON (\"big_table\".\"id\" = \n\"table_b\".\"big_table_id\")\nWHERE\n (\"table_a\".\"item_id\" IN (<handful of items>)\n OR\n \"table_b\".\"item_id\" IN (<handful of items>));\n\n\n\n HashAggregate (cost=206268.67..206269.46 rows=79 width=185) (actual \ntime=904.859..904.860 rows=5 loops=1)\n Group Key: <columns of big_table>\n -> Merge Left Join (cost=1.26..206265.11 rows=79 width=185) \n(actual time=904.835..904.846 rows=6 loops=1)\n Merge Cond: (big_table.id = table_a.big_table_id)\n Filter: (((table_a.item_id)::text = ANY ('<handful of \nitems>'::text[])) OR ((table_b.item_id)::text = ANY ('<handful of \nitems>'::text[])))\n Rows Removed by Filter: 901355\n -> Merge Left Join (cost=0.85..196703.22 rows=858293 \nwidth=243) (actual time=0.009..745.736 rows=858690 loops=1)\n Merge Cond: (big_table.id = table_b.big_table_id)\n -> Index Scan using big_table_pkey on big_table \n(cost=0.42..180776.64 rows=858293 width=185) (actual time=0.005..399.102 \nrows=858690 loops=1)\n -> Index Scan using table_b_pkey on table_b \n(cost=0.42..10343.86 rows=274959 width=62) (actual time=0.003..60.961 \nrows=274958 loops=1)\n -> Index Scan using table_a_big_table_id on table_a \n(cost=0.42..4445.35 rows=118836 width=57) (actual time=0.003..25.456 \nrows=118833 loops=1)\n Planning time: 0.934 ms\n Execution time: 904.936 ms\n\n\n\n\n---- SUBQUERY\n\nexplain analyze\nSELECT <columns of big_table>\nFROM \"big_table\"\nWHERE\n \"big_table\".\"id\" in (SELECT \"table_a\".\"big_table_id\" FROM \"table_a\" \nWHERE \"table_a\".\"item_id\" in (<handful of items>))\n OR\n \"big_table\".\"id\" in (SELECT \"table_b\".\"big_table_id\" FROM \"table_b\" \nWHERE \"table_b\".\"item_id\" IN (<handful of items>));\n\n\n\n Seq Scan on big_table (cost=100.41..115110.80 rows=643720 width=185) \n(actual time=229.819..229.825 rows=5 loops=1)\n Filter: ((hashed SubPlan 1) OR (hashed SubPlan 2))\n Rows Removed by Filter: 858685\n SubPlan 1\n -> Index Scan using table_a_item_id_211f18d89c25bc21_uniq on \ntable_a (cost=0.42..58.22 rows=9 width=4) (actual time=0.026..0.043 \nrows=5 loops=1)\n Index Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n SubPlan 2\n -> Index Scan using table_b_item_id_611f9f519d835e89_uniq on \ntable_b (cost=0.42..42.15 rows=5 width=4) (actual time=0.007..0.040 \nrows=5 loops=1)\n Index Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n Planning time: 0.261 ms\n Execution time: 229.901 ms\n\n\n\n---- UNION\n\nexplain analyze\nSELECT <columns of big_table>\nFROM \"big_table\"\n INNER JOIN \"table_a\" ON (\"big_table\".\"id\" = \"table_a\".\"big_table_id\")\nWHERE\n \"table_a\".\"item_id\" IN (<handful of items>)\nUNION\nSELECT <columns of big_table>\nFROM \"big_table\"\n INNER JOIN \"table_b\" ON (\"big_table\".\"id\" = \"table_b\".\"big_table_id\")\nWHERE\n \"table_b\".\"item_id\" IN (<handful of items>);\n\n HashAggregate (cost=216.84..216.98 rows=14 width=185) (actual \ntime=0.092..0.093 rows=5 loops=1)\n Group Key: <columns of big_table>\n -> Append (cost=22.59..216.21 rows=14 width=185) (actual \ntime=0.035..0.080 rows=10 loops=1)\n -> Nested Loop (cost=22.59..132.17 rows=9 width=185) (actual \ntime=0.034..0.044 rows=5 loops=1)\n -> Bitmap Heap Scan on table_a 
(cost=22.16..56.10 \nrows=9 width=4) (actual time=0.029..0.029 rows=5 loops=1)\n Recheck Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on \ntable_a_item_id_211f18d89c25bc21_uniq (cost=0.00..22.16 rows=9 width=0) \n(actual time=0.027..0.027 rows=5 loops=1)\n Index Cond: ((item_id)::text = ANY \n('<handful of items>'::text[]))\n -> Index Scan using big_table_pkey on big_table \n(cost=0.42..8.44 rows=1 width=185) (actual time=0.002..0.002 rows=1 loops=5)\n Index Cond: (id = table_a.big_table_id)\n -> Nested Loop (cost=22.58..83.90 rows=5 width=185) (actual \ntime=0.029..0.035 rows=5 loops=1)\n -> Bitmap Heap Scan on table_b (cost=22.15..41.64 \nrows=5 width=4) (actual time=0.026..0.026 rows=5 loops=1)\n Recheck Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on \ntable_b_item_id_611f9f519d835e89_uniq (cost=0.00..22.15 rows=5 width=0) \n(actual time=0.025..0.025 rows=5 loops=1)\n Index Cond: ((item_id)::text = ANY \n('<handful of items>'::text[]))\n -> Index Scan using big_table_pkey on big_table \nbig_table_1 (cost=0.42..8.44 rows=1 width=185) (actual \ntime=0.001..0.001 rows=1 loops=5)\n Index Cond: (id = table_b.big_table_id)\n Planning time: 0.594 ms\n Execution time: 0.177 ms\n\n\n\n\n---- SUBQUERY + UNION\n\n\nOn 29.09.2016 20:03, Jeff Janes wrote:\n> SELECT * FROM big_table\n> WHERE\n> id in (SELECT big_table_id FROM table_a WHERE \"table_a\".\"item_id\" \n> IN (<handful of items>) union\n> SELECT big_table_id FROM table_a WHERE \"table_b\".\"item_id\" IN \n> (<handful of items>)\n> );\n\n\n Nested Loop (cost=98.34..216.53 rows=14 width=185) (actual \ntime=0.061..0.069 rows=5 loops=1)\n -> HashAggregate (cost=97.91..98.05 rows=14 width=4) (actual \ntime=0.057..0.058 rows=5 loops=1)\n Group Key: table_a.big_table_id\n -> Append (cost=22.16..97.88 rows=14 width=4) (actual \ntime=0.028..0.054 rows=10 loops=1)\n -> Bitmap Heap Scan on table_a (cost=22.16..56.10 \nrows=9 width=4) (actual time=0.028..0.029 rows=5 loops=1)\n Recheck Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on \ntable_a_item_id_211f18d89c25bc21_uniq (cost=0.00..22.16 rows=9 width=0) \n(actual time=0.026..0.026 rows=5 loops=1)\n Index Cond: ((item_id)::text = ANY \n('<handful of items>'::text[]))\n -> Bitmap Heap Scan on table_b (cost=22.15..41.64 \nrows=5 width=4) (actual time=0.024..0.024 rows=5 loops=1)\n Recheck Cond: ((item_id)::text = ANY ('<handful of \nitems>'::text[]))\n Heap Blocks: exact=1\n -> Bitmap Index Scan on \ntable_b_item_id_611f9f519d835e89_uniq (cost=0.00..22.15 rows=5 width=0) \n(actual time=0.023..0.023 rows=5 loops=1)\n Index Cond: ((item_id)::text = ANY \n('<handful of items>'::text[]))\n -> Index Scan using big_table_pkey on big_table (cost=0.42..8.44 \nrows=1 width=185) (actual time=0.001..0.001 rows=1 loops=5)\n Index Cond: (id = table_a.big_table_id)\n Planning time: 0.250 ms\n Execution time: 0.104 ms\n\n\n\n\n\n\nCheers,\nSven\n\n\nNOTE: I added a file with the results for better readability.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 30 Sep 2016 13:29:14 +0200", "msg_from": "\"Sven R. Kunze\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Multiple-Table-Spanning Joins with ORs in WHERE Clause" } ]
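Summing up the measurements above: the DISTINCT + LEFT OUTER JOIN form ran in roughly 905 ms, the hashed IN-subquery form in roughly 230 ms, and both UNION-based forms in well under 1 ms on this data set. The winning pattern also extends naturally to more than two side tables; a sketch with a third, purely hypothetical table_c:

SELECT b.*
FROM big_table b
WHERE b.id IN (SELECT big_table_id FROM table_a WHERE item_id IN (<handful of items>)
               UNION
               SELECT big_table_id FROM table_b WHERE item_id IN (<handful of items>)
               UNION
               SELECT big_table_id FROM table_c WHERE item_id IN (<handful of items>));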
[ { "msg_contents": "I’m storing thousands of independent documents each containing around 20k\nrows. The larger the document, the more likely it is to be active with\ninserts and updates (1000s/day). The most common read query is to get all\nthe rows for a single document (100s/day). It will be supporting real-time\ncollaboration but with strong-consistency for a simple schema so not\nwell-suited to dedicated \"document databases\" that assume schema-less &\neventual consistency. I won’t have great hardware/budget so need to squeeze\nthe most out of the least.\n\nMy question is whether to put all documents into a single huge table or\npartition by document?\n\nThe documents are independent so its purely a performance question. Its too\nmany tables for postgresql partitioning support but I don’t get any benefit\nfrom a master table and constraints. Handling partitioning in application\nlogic is effectively zero cost.\n\nI know that 1000s of tables is regarded as an anti-pattern but I can only\nsee the performance and maintenance benefits of one table per independent\ndocument e.g. fast per-table vacuum, incremental schema updates, easy\nfuture sharding. A monster table will require additional key columns and\nindexes that don’t have any value beyond allowing the documents to sit in\nthe same table.\n\nThe only downsides seem to be the system level per-table overhead but I\nonly see that as a problem if I have a very long tail of tiny documents.\nI'd rather solve that problem if it occurs than manage an\nall-eggs-in-one-basket monster table.\n\nIs there anything significant I am missing in my reasoning? Is it mostly a\n“relational purist” perspective that argues against multiple tables? Should\nI be looking at alternative tech for this problem?\n\nThe one factor I haven't fully resolved is how much a caching layer in\nfront of the database changes things.\n\nThanks for your help.\n\nI’m storing thousands of independent documents each containing around 20k rows. The larger the document, the more likely it is to be active with inserts and updates (1000s/day). The most common read query is to get all the rows for a single document (100s/day). It will be supporting real-time collaboration but with strong-consistency for a simple schema so not well-suited to dedicated \"document databases\" that assume schema-less & eventual consistency. I won’t have great hardware/budget so need to squeeze the most out of the least.My question is whether to put all documents into a single huge table or partition by document?The documents are independent so its purely a performance question. Its too many tables for postgresql partitioning support but I don’t get any benefit from a master table and constraints. Handling partitioning in application logic is effectively zero cost.I know that 1000s of tables is regarded as an anti-pattern but I can only see the performance and maintenance benefits of one table per independent document e.g. fast per-table vacuum, incremental schema updates, easy future sharding. A monster table will require additional key columns and indexes that don’t have any value beyond allowing the documents to sit in the same table.The only downsides seem to be the system level per-table overhead but I only see that as a problem if I have a very long tail of tiny documents. I'd rather solve that problem if it occurs than manage an all-eggs-in-one-basket monster table.Is there anything significant I am missing in my reasoning? 
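A minimal sketch of the table-per-document layout being weighed here (table and column names are assumptions, not from the post):

CREATE TABLE doc_rows_template (
    seq     integer NOT NULL PRIMARY KEY,
    content text
);
-- the application creates one clone per document and routes all queries to it
CREATE TABLE doc_rows_42 (LIKE doc_rows_template INCLUDING ALL);

-- the common read, "all rows for a single document", touches exactly one table
SELECT * FROM doc_rows_42 ORDER BY seq;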
Is it mostly a “relational purist” perspective that argues against multiple tables? Should I be looking at alternative tech for this problem?The one factor I haven't fully resolved is how much a caching layer in front of the database changes things.Thanks for your help.", "msg_date": "Fri, 23 Sep 2016 11:12:22 +0100", "msg_from": "Dev Nop <[email protected]>", "msg_from_op": true, "msg_subject": "Storing large documents - one table or partition by doc?" }, { "msg_contents": "From: Dev Nop Sent: Friday, September 23, 2016 3:12 AM\nI’m storing thousands of independent documents each containing around 20k rows. The larger the document, the more likely it is to be active with inserts and updates (1000s/day). The most common read query is to get all the rows for a single document (100s/day). It will be supporting real-time collaboration but with strong-consistency for a simple schema so not well-suited to dedicated \"document databases\" that assume schema-less & eventual consistency. I won’t have great hardware/budget so need to squeeze the most out of the least.\n\n \n\nMy question is whether to put all documents into a single huge table or partition by document?\n\n \n\nThe documents are independent so its purely a performance question. Its too many tables for postgresql partitioning support but I don’t get any benefit from a master table and constraints. Handling partitioning in application logic is effectively zero cost.\n\n \n\nI know that 1000s of tables is regarded as an anti-pattern but I can only see the performance and maintenance benefits of one table per independent document e.g. fast per-table vacuum, incremental schema updates, easy future sharding. A monster table will require additional key columns and indexes that don’t have any value beyond allowing the documents to sit in the same table.\n\n \n\nThe only downsides seem to be the system level per-table overhead but I only see that as a problem if I have a very long tail of tiny documents. I'd rather solve that problem if it occurs than manage an all-eggs-in-one-basket monster table.\n\n\nIs there anything significant I am missing in my reasoning? Is it mostly a “relational purist” perspective that argues against multiple tables? Should I be looking at alternative tech for this problem?\n\n \n\nThe one factor I haven't fully resolved is how much a caching layer in front of the database changes things.\n\n \n\nThanks for your help.\n\n---------------------------------\n\nThis is, to me, a very standard, almost classic, relational pattern, and one that a relational engine handles extremely well, especially the consistency and locking needed to support lots of updates. Inserts are irrelevant unless the parent record must be locked to do so…that would be a bad design.\n\n \n\nImagine a normal parent-child table pair, 1:M, with the 20k rows per parent document in the child table. Unless there’s something very bizarre about the access patterns against that child table, those 20k rows per document would not normally all be in play for every user on every access throughout that access (it’s too much data to show on a web page, for instance). Even so, at “100s” of large queries per day, it’s a trivial load unless each child row contains a large json blob…which doesn’t jive with your table description.\n\n \n\nSo with proper indexing, I can’t see where there will be a performance issue. Worst case, you create a few partitions based on some category, but the row counts you’re describing don’t yet warrant it. 
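The classic parent/child pair described above might look like this (a sketch; column names are assumptions):

CREATE TABLE document (
    id    bigserial PRIMARY KEY,
    title text NOT NULL
);

CREATE TABLE document_row (
    document_id bigint  NOT NULL REFERENCES document (id),
    seq         integer NOT NULL,
    content     text,
    PRIMARY KEY (document_id, seq)
);

-- the composite primary key doubles as the index that serves the dominant
-- read: one index range scan returns all rows of a document, in order
SELECT * FROM document_row WHERE document_id = $1 ORDER BY seq;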
I’m running a few hundred million rows in a new “child” table on a dev server (4 cores/16gb ram) with large json documents in each row and it’s still web page performant on normal queries, using a paging model (say 20 full rows per web page request). The critical pieces, hardware-wise, are memory (buy as much as you can afford) and using SSDs (required, IMO). It’s much harder to create measurable loads on the CPUs. Amazon has memory optimized EC2 instances that support that pattern (with SSD storage).\n\n \n\nAre there other issues/requirements that are creating other performance concerns that aren’t obvious in your initial post?\n\n \n\nMike Sofen (Synthetic Genomics)\n\n\nFrom: Dev Nop  Sent: Friday, September 23, 2016 3:12 AMI’m storing thousands of independent documents each containing around 20k rows. The larger the document, the more likely it is to be active with inserts and updates (1000s/day). The most common read query is to get all the rows for a single document (100s/day). It will be supporting real-time collaboration but with strong-consistency for a simple schema so not well-suited to dedicated \"document databases\" that assume schema-less & eventual consistency. I won’t have great hardware/budget so need to squeeze the most out of the least. My question is whether to put all documents into a single huge table or partition by document? The documents are independent so its purely a performance question. Its too many tables for postgresql partitioning support but I don’t get any benefit from a master table and constraints. Handling partitioning in application logic is effectively zero cost. I know that 1000s of tables is regarded as an anti-pattern but I can only see the performance and maintenance benefits of one table per independent document e.g. fast per-table vacuum, incremental schema updates, easy future sharding. A monster table will require additional key columns and indexes that don’t have any value beyond allowing the documents to sit in the same table. The only downsides seem to be the system level per-table overhead but I only see that as a problem if I have a very long tail of tiny documents. I'd rather solve that problem if it occurs than manage an all-eggs-in-one-basket monster table.Is there anything significant I am missing in my reasoning? Is it mostly a “relational purist” perspective that argues against multiple tables? Should I be looking at alternative tech for this problem? The one factor I haven't fully resolved is how much a caching layer in front of the database changes things. Thanks for your help.---------------------------------This is, to me, a very standard, almost classic, relational pattern, and one that a relational engine handles extremely well, especially the consistency and locking needed to support lots of updates.  Inserts are irrelevant unless the parent record must be locked to do so…that would be a bad design. Imagine a normal parent-child table pair, 1:M, with the 20k rows per parent document in the child table.  Unless there’s something very bizarre about the access patterns against that child table, those 20k rows per document would not normally all be in play for every user on every access throughout that access (it’s too much data to show on a web page, for instance).  Even so, at “100s” of large queries per day, it’s a trivial load unless each child row contains a large json blob…which doesn’t jive with your table description. So with proper indexing, I can’t see where there will be a performance issue.   
", "msg_date": "Fri, 23 Sep 2016 05:14:12 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing large documents - one table or partition by doc?" }, { "msg_contents": "On 9/23/16 7:14 AM, Mike Sofen wrote:\n> So with proper indexing, I can’t see where there will be a performance\n> issue.\n\nTable bloat could become problematic. If there is a pattern where you \ncan predict which documents are likely to be active (say, documents that \nhave been modified in the last 10 days), then you can keep all of those \nin a set of tables that is fairly small, and keep the remaining \ndocuments in a set of \"archive\" tables. That will help reduce bloat in \nthe large archive tables. Before putting in that extra work though, I'd \njust try the simple solution and see how well it works.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 23 Sep 2016 13:59:42 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing large documents - one table or partition by\n doc?" }, { "msg_contents": "On Fri, Sep 23, 2016 at 3:12 AM, Dev Nop <[email protected]> wrote:\n\n> I’m storing thousands of independent documents each containing around 20k\n> rows. The larger the document, the more likely it is to be active with\n> inserts and updates (1000s/day). The most common read query is to get all\n> the rows for a single document (100s/day).\n>\n\nHow can the query be an order of magnitude less than the writes? Wouldn't\nanything doing an insert or update want to see the results of other\npeople's inserts/updates about as frequently as they happen?\n\n\n\n\n> It will be supporting real-time collaboration but with strong-consistency\n> for a simple schema so not well-suited to dedicated \"document databases\"\n> that assume schema-less & eventual consistency. I won’t have great\n> hardware/budget so need to squeeze the most out of the least.\n>\n> My question is whether to put all documents into a single huge table or\n> partition by document?\n>\n> The documents are independent so its purely a performance question. Its\n> too many tables for postgresql partitioning support but I don’t get any\n> benefit from a master table and constraints. 
Handling partitioning in\n> application logic is effectively zero cost.\n>\n> I know that 1000s of tables is regarded as an anti-pattern but I can only\n> see the performance and maintenance benefits of one table per independent\n> document e.g. fast per-table vacuum, incremental schema updates, easy\n> future sharding. A monster table will require additional key columns and\n> indexes that don’t have any value beyond allowing the documents to sit in\n> the same table.\n>\n\nIf you go the partitioned route, I would add the extra column anyway (but\nnot an index on it), so that it is there if/when you need it.\n\n\n>\n> The only downsides seem to be the system level per-table overhead but I\n> only see that as a problem if I have a very long tail of tiny documents.\n> I'd rather solve that problem if it occurs than manage an\n> all-eggs-in-one-basket monster table.\n>\n> Is there anything significant I am missing in my reasoning?\n>\n\nIf you use a reasonably modern version of PostgreSQL (say, >=9.4), the\noverhead of having 1000s of tables should not be too large of a problem.\nWhen you get into the 100,000 range, it is likely to start being a\nproblem. If you get to 1,000,000, you almost definitely have a problem.\n\nCheers,\n\nJeff
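\n\nA small sketch of the "extra column anyway" suggestion, assuming one table per document (all names hypothetical):\n\nCREATE TABLE element_doc_42 (\n  -- constant within this table, but kept so a future merge back into one\n  -- big table is a plain INSERT ... SELECT with no schema change\n  document_id int NOT NULL DEFAULT 42,\n  element_id int NOT NULL PRIMARY KEY,\n  data text,\n  children int[]);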
", "msg_date": "Sun, 25 Sep 2016 17:20:14 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing large documents - one table or partition by doc?" } ]
[ { "msg_contents": "Thank you Mike & Jim for your responses.\n\n> Are there other issues/requirements that are creating other performance\nconcerns\n> that aren’t obvious in your initial post\n\nYes, there are a few things:\n\n*1. Full document queries really are necessary*\n\n> it’s too much data to show on a web page, for instance\n\nThe documents are like spreadsheets so the whole document must be in memory\nto view a part of it or make a change. Client apps don't have to render the\nwhole document but the data must be there.\n\nThis means that the applications are sensitive to the size of ids. A\nprevious incarnation used GUIDs which was a brutal overhead for large\ndocuments.\n\n*2. The documents are graph structures*\n\nThe data structure is technically a property graph. The default text format\nis hierarchical with cross links.\n\nThere are currently no requirements to query the graph structure of the\ndocument so postgresql can be ignorant of the internal relationships. It is\npossible for the API to return an unordered flat list of rows and require\nthe client to build the graph.\n\nHowever, the most desirable data returned by the API would be an ordered\nhierarchical format. I'm not sure if postgresql has a role building the\ntree or whether to do it in code externally. I imagine using CTEs to\nprocess a column containing an array of child ids is brutal but maybe its\nmore efficient than loading all rows into memory and processing it in code.\nI guess doing it in code means the cpu load is on the api-server which can\nbe scaled horizontally more easily than a db server.\n\nThere is no interest in integrity constraints on element relationships. If\nthere were an application bug creating an invalid document structure, it\nwould be better to store it than reject it. The application can fix broken\ndocuments but not data loss.\n\n*3. Element ids are only unique within a document*\n\nThe element ids the application uses are not globally unique across\ndocuments. Modelling what we currently have in a single table would mean\nthe primary key is a composite of (document_id, element_id) which I don't\nbelieve is good practice?\n\nUsing current ids, a pseudo schema might look like:\n\nCREATE TABLE document (\nid serial primary key,\n revision int,\nname text,\n root_element_id int);\n\nCREATE TABLE element (\ndocument_id int REFERENCES document(id),\nelement_id int,\ndata text,\nchildren int[],\n primary key (document_id, element_id));\n\nThe two main queries are intentionally trivial but any advice on indexes\nwould be helpful e.g.\n\na) Fetch a document without creating the tree:\n\n select * from element where document_id = DOCID\n\nb) Update an element:\n\n update element\n set data = \"new data\"\n where document_id=DOC_ID and element_id=EL_ID\n\n + update history & increment revision\n\n*4. Storing history of changes*\n\nThe application is to view change history, wind back changes, restore old\nrevisions and provide a change-stream to clients. Its not an initial\nperformance concern because its not part of the main workflow but history\ntables will be much larger than the documents themselves but append only\nand rarely queried.\n\nUpdating them would be part of the update transaction so maybe they could\nbecome a bottleneck? 
A single edit from a client is actually a batch of\nsmaller changes so a pseudo schema supporting change-sets might look\nsomething like:\n\nCREATE TABLE changeset (\nid bigserial primary key,\ndocument_id int REFERENCES document(id),\nuser_id int REFERENCES users(id),\nmodified_at timestamp);\n\nCREATE TABLE changes (\nid bigserial primary key,\nchangeset_id int REFERENCES changeset(id),\nchange_type int,\nelement_id int,\nold_value text,\nnew_value text);\n\nCREATE TABLE history (\nid bigserial primary key,\nelement_id int,\ndata text,\nchildren int[],\nvalid_from bigint REFERENCES changeset(id),\nvalid_to bigint REFERENCES changeset(id));\n\nWhere [history] is used for fetching complete revisions and [changes] is\nused to store the change stream to support winding recent changes back or\nenable offline clients with old revisions to catch up.\n\n*5. My Nightmares (fighting the last war)*\n\nIn a previous life, I had bad experiences with un-partitioned gargantuan\ntables in Sql Server standard. Table-locking operations could take 24 hours\nto complete and DBAs had to spend weekends defragging, rebuilding indexes,\nperforming schema migrations, data migrations or handling backups. It\nalways felt on the edge of disaster e.g. a misbehaving query plan that one\nday decides to ignore indexes and do a full table scan... Every fix you\nwanted to do couldn't be done because it would take too long to process and\ncause too much downtime.\n\nMy nightmares are of a future filled with hours of down-time caused by\nstruggling to restore a gargantuan table from a backup due to a problem\nwith just one tiny document or schema changes that require disconnecting\nall clients for hours when instead I could ignore best practice, create 10k\ntables and process them iteratively and live in a utopia where I never have\n100% downtime, only per-document unavailability.\n\nthanks for your help
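\n\nFor the CTE question in point 2, a rough, untested sketch of building the tree inside the database with a recursive CTE over the children array (assumes the element/document schema above):\n\nWITH RECURSIVE tree AS (\n  SELECT e.*, 0 AS depth\n  FROM element e\n  JOIN document d ON d.id = e.document_id\n                 AND d.root_element_id = e.element_id\n  WHERE e.document_id = 42\n  UNION ALL\n  SELECT c.*, t.depth + 1\n  FROM tree t\n  JOIN element c ON c.document_id = t.document_id\n                AND c.element_id = ANY (t.children)\n)\nSELECT * FROM tree;\n\n-- caveats: sibling order is not preserved (that would need\n-- unnest(children) WITH ORDINALITY, 9.4+), and cross links / cycles\n-- would need a visited-set guard to avoid infinite recursion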
", "msg_date": "Sat, 24 Sep 2016 12:33:13 +0100", "msg_from": "Dev Nop <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing large documents - one table or partition by doc?" }, { "msg_contents": "On 9/24/16 6:33 AM, Dev Nop wrote:\n> This means that the applications are sensitive to the size of ids. A\n> previous incarnation used GUIDs which was a brutal overhead for large\n> documents.\n\nIf GUIDs *stored in a binary format* were too large, then you won't be \nterribly happy with the 24 byte per-row overhead in Postgres.\n\nWhat I would look into at this point is using int ranges and arrays to \ngreatly reduce your overhead:\n\nCREATE TABLE ...(\n document_version_id int NOT NULL REFERENCES document_version\n , document_line_range int4range NOT NULL\n , document_lines text[] NOT NULL\n , EXCLUDE USING gist( document_version_id WITH =, document_line_range WITH && )\n);\n\nThat allows you to store the lines of a document as an array of values, ie:\n\nINSERT INTO ... VALUES(\n 1\n , '[11,15]'\n , '[11:15]={line11,line12,line13,line14,line15}'\n);\n\nNote that I'm using explicit array bounds syntax to make the array \nbounds match the line numbers. I'm not sure that's a great idea, but it \nis possible.\n\n\n> My nightmares are of a future filled with hours of down-time caused by\n> struggling to restore a gargantuan table from a backup due to a problem\n> with just one tiny document or schema changes that require disconnecting\n> all clients for hours when instead I could ignore best practice, create\n> 10k tables and process them iteratively and live in a utopia where I\n> never have 100% downtime only per document unavailability.\n\nAt some size you'd certainly want partitioning. The good news is that \nyou can mostly hide partitioning from the application and other database \nlogic, so there's not a lot of incentive to set it up immediately. You \ncan always do that after the fact.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 24 Sep 2016 16:30:06 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing large documents - one table or partition by\n doc?" }, { "msg_contents": ">\n> If GUIDs *stored in a binary format* were too large, then you won't be\n> terribly happy with the 24 byte per-row overhead in Postgres.\n\n\nHeh. 
In this case the ids have a life outside the database in various text\nformats.\n\n\n> What I would look into at this point is using int ranges and arrays to\n> greatly reduce your overhead:\n> CREATE TABLE ...(\n> document_version_id int NOT NULL REFERENCES document_version\n> , document_line_range int4range NOT NULL\n> , document_lines text[] NOT NULL\n> , EXCLUDE USING gist( document_version_id =, document_line_range && )\n> );\n\n\nThanks! Some new things for me to learn about there. Had to read \"Range\nTypes: Your Life Will Never Be The Same\" - lol. https://wiki.postgresql.org/\nimages/7/73/Range-types-pgopen-2012.pdf\n\nTo check I understand what you are proposing: the current version and\nhistory is stored in the same table. Each line is referred to by a\nsequential line number and then lines are stored in sequential chunks with\nrange + array. The gist index is preventing any insert with the same\nversion & line range. This sounds very compact for a static doc but doesn't\nit mean lines must be renumbered on inserts/moves?
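\n\nA hedged usage sketch for the range/array layout (the chunk table name is invented; the original CREATE TABLE left it blank):\n\n-- fetch the text of line 13 of version 1; the exclusion constraint's gist\n-- index can serve the range containment test (note: that constraint needs\n-- the btree_gist extension for the integer equality part)\nSELECT document_lines[13]\nFROM document_line_chunk\nWHERE document_version_id = 1\n  AND document_line_range @> 13;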
", "msg_date": "Mon, 26 Sep 2016 09:27:28 +0100", "msg_from": "Dev Nop <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Storing large documents - one table or partition by doc?" }, { "msg_contents": "Please CC the mailing list so others can chime in or learn...\n\nOn 9/26/16 3:26 AM, Dev Nop wrote:\n> What I would look into at this point is using int ranges and arrays\n> to greatly reduce your overhead:\n> CREATE TABLE ...(\n> document_version_id int NOT NULL REFERENCES document_version\n> , document_line_range int4range NOT NULL\n> , document_lines text[] NOT NULL\n> , EXCLUDE USING gist( document_version_id =, document_line_range && )\n> );\n>\n>\n> Thanks! Some new things for me to learn about there. Had to read \"Range\n> Types: Your Life Will Never Be The Same\" - lol.\n> https://wiki.postgresql.org/images/7/73/Range-types-pgopen-2012.pdf\n>\n> To check I understand what you are proposing: the current version and\n> history is stored in the same table. Each line is referred to by a\n> sequential line number and then lines are stored in sequential chunks\n> with range + array. The gist index is preventing any insert with the\n> same version & line range. This sounds very compact for a static doc but\n\nYou've got it correct.\n\n> doesn't it mean lines must be renumbered on inserts/moves?\n\nYes, but based on your prior descriptions I was assuming that was what \nyou wanted... weren't you basically suggesting storing one line per row?\n\nThere's certainly other options if you want full tracking of every \nchange... for example, you could store every change as some form of a \ndiff, and only store the full document every X number of changes.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! 
http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 14:58:50 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Storing large documents - one table or partition by\n doc?" } ]
[ { "msg_contents": "Hey all,\n\nObviously everyone who's been in PostgreSQL or almost any RDBMS for a time\nhas said not to have millions of tables. I too have long believed it until\nrecently.\n\nAWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\nPGDATA. Over the weekend, I created 8M tables with 16M indexes on those\ntables. Table creation initially took 0.018031 secs, average 0.027467 and\nafter tossing out outliers (qty 5) the maximum creation time found was\n0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\nTables were created by a single process. Do note that table creation is\ndone via plpgsql function as there are other housekeeping tasks necessary\nthough minimal.\n\nNo system tuning but here is a list of PostgreSQL knobs and switches:\nshared_buffers = 2GB\nwork_mem = 48 MB\nmax_stack_depth = 4 MB\nsynchronous_commit = off\neffective_cache_size = 200 GB\npg_xlog is on it's own file system\n\nThere are some still obvious problems. General DBA functions such as\nVACUUM and ANALYZE should not be done. Each will run forever and cause\nmuch grief. Backups are problematic in the traditional pg_dump and PITR\nspace. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\nit in my test case) are no-no's. A system or database crash could take\npotentially hours to days to recover. There are likely other issues ahead.\n\nYou may wonder, \"why is Greg attempting such a thing?\" I looked at\nDynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\nit's antiquated and don't get me started on \"Hadoop\". I looked at many\nothers and ultimately the recommended use of each vendor was to have one\ntable for all data. That overcomes the millions of tables problem, right?\n\nProblem with the \"one big table\" solution is I anticipate 1,200 trillion\nrecords. Random access is expected and the customer expects <30ms reads\nfor a single record fetch.\n\nNo data is loaded... yet Table and index creation only. I am interested\nin the opinions of all including tests I may perform. If you had this\nsetup, what would you capture / analyze? I have a job running preparing\ndata. I did this on a much smaller scale (50k tables) and data load via\nfunction allowed close to 6,000 records/second. The schema has been\nsimplified since and last test reach just over 20,000 records/second with\n300k tables.\n\nI'm not looking for alternatives yet but input to my test. Takers?\n\nI can't promise immediate feedback but will do my best to respond with\nresults.\n\nTIA,\n-Greg\n\nHey all,Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time has said not to have millions of tables.  I too have long believed it until recently.AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for PGDATA.  Over the weekend, I created 8M tables with 16M indexes on those tables.  Table creation initially took 0.018031 secs, average 0.027467 and after tossing out outliers (qty 5) the maximum creation time found was 0.66139 seconds.  Total time 30 hours, 31 minutes and 8.435049 seconds.  Tables were created by a single process.  Do note that table creation is done via plpgsql function as there are other housekeeping tasks necessary though minimal.No system tuning but here is a list of PostgreSQL knobs and switches:shared_buffers = 2GBwork_mem = 48 MBmax_stack_depth = 4 MBsynchronous_commit = offeffective_cache_size = 200 GBpg_xlog is on it's own file systemThere are some still obvious problems.  
", "msg_date": "Sun, 25 Sep 2016 20:50:09 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Millions of tables" }, { "msg_contents": "Dear Greg,\n\nHave you checked PostgresXL ?\nWith millions of tables, how do the apps choose which table is appropriate?\nIn my opinion, at that scale it should go with parallel query and\ndata sharding like what PostgresXL does.\n\nThanks,\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta Pusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments)\nmay be legally privileged and/or confidential and is intended only for\nthe use of the addressee(s). No addressee should forward, print, copy,\nor otherwise reproduce this message in any manner that would allow it\nto be viewed by any individual not originally listed as a recipient.\nIf the reader of this message is not the intended recipient, you are\nhereby notified that any unauthorized disclosure, dissemination,\ndistribution, copying or the taking of any action in reliance on the\ninformation herein is strictly prohibited. If you have received this\ncommunication in error, please immediately notify the sender and\ndelete this message.Unless it is made by the authorized person, any\nviews expressed in this message are those of the individual sender and\nmay not necessarily reflect the views of PT Equnix Business Solutions.\n\n\nOn Mon, Sep 26, 2016 at 9:50 AM, Greg Spiegelberg\n<[email protected]> wrote:\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time\n> has said not to have millions of tables. I too have long believed it until\n> recently.\n>\n> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\n> PGDATA. Over the weekend, I created 8M tables with 16M indexes on those\n> tables. 
Table creation initially took 0.018031 secs, average 0.027467 and\n> after tossing out outliers (qty 5) the maximum creation time found was\n> 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\n> Tables were created by a single process. Do note that table creation is\n> done via plpgsql function as there are other housekeeping tasks necessary\n> though minimal.\n>\n> No system tuning but here is a list of PostgreSQL knobs and switches:\n> shared_buffers = 2GB\n> work_mem = 48 MB\n> max_stack_depth = 4 MB\n> synchronous_commit = off\n> effective_cache_size = 200 GB\n> pg_xlog is on it's own file system\n>\n> There are some still obvious problems. General DBA functions such as VACUUM\n> and ANALYZE should not be done. Each will run forever and cause much grief.\n> Backups are problematic in the traditional pg_dump and PITR space. Large\n> JOIN's by VIEW, SELECT or via table inheritance (I am abusing it in my test\n> case) are no-no's. A system or database crash could take potentially hours\n> to days to recover. There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n> others and ultimately the recommended use of each vendor was to have one\n> table for all data. That overcomes the millions of tables problem, right?\n>\n> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> records. Random access is expected and the customer expects <30ms reads for\n> a single record fetch.\n>\n> No data is loaded... yet Table and index creation only. I am interested in\n> the opinions of all including tests I may perform. If you had this setup,\n> what would you capture / analyze? I have a job running preparing data. I\n> did this on a much smaller scale (50k tables) and data load via function\n> allowed close to 6,000 records/second. The schema has been simplified since\n> and last test reach just over 20,000 records/second with 300k tables.\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n>\n> I can't promise immediate feedback but will do my best to respond with\n> results.\n>\n> TIA,\n> -Greg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 10:04:26 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "From: Greg Spiegelberg Sent: Sunday, September 25, 2016 7:50 PM\n… Over the weekend, I created 8M tables with 16M indexes on those tables. \n\n… A system or database crash could take potentially hours to days to recover. There are likely other issues ahead.\n\n \n\nYou may wonder, \"why is Greg attempting such a thing?\" I looked at DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it, it's antiquated and don't get me started on \"Hadoop\". Problem with the \"one big table\" solution is I anticipate 1,200 trillion records. Random access is expected and the customer expects <30ms reads for a single record fetch.\n\n \n\nI'm not looking for alternatives yet but input to my test.\n\n_________\n\n \n\nHoly guacamole, batman! Ok, here’s my take: you’ve traded the risks/limitations of the known for the risks of the unknown. 
The unknown being, in the numerous places where postgres historical development may have cut corners, you may be the first to exercise those corners and flame out like the recent SpaceX rocket.\n\n \n\nPut it another way – you’re going to bet your career (perhaps) or a client’s future on an architectural model that just doesn’t seem feasible. I think you’ve got a remarkable design problem to solve, and am glad you’ve chosen to share that problem with us.\n\n \n\nAnd I do think it will boil down to this: it’s not that you CAN do it on Postgres (which you clearly can), but once in production, assuming things are actually stable, how will you handle the data management aspects like inevitable breakage, data integrity issues, backups, restores, user contention for resources, fault tolerance and disaster recovery. Just listing the tables will take forever. Add a column? Never. I do think the amount of testing you’ll need to do to prove that every normal data management function still works at that table count…that in itself is going to be not a lot of fun.\n\n \n\nThis one hurts my head. Ironically, the most logical destination for this type of data may actually be Hadoop – auto-scale, auto-shard, fault tolerant, etc…and I’m not a Hadoopie.\n\n \n\nI am looking forward to hearing how this all plays out, it will be quite an adventure! All the best,\n\n \n\nMike Sofen (Synthetic Genomics…on Postgres 9.5x)
", "msg_date": "Sun, 25 Sep 2016 20:23:28 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Precisely why I shared with the group. I must understand the risks\ninvolved. I need to explore whether it can be stable at this size and, if\nnot, when it becomes unstable. Aside from locking down user access to\nsuperuser, is there a way to prohibit database-wide VACUUM & ANALYZE?\nCertainly putting my trust in autovacuum :) which is something I have not\nyet fully explored how to best tune.\n\nCouple more numbers... ~231 GB is the size of PGDATA with 8M empty tables\nand 16M empty indexes. ~5% of inodes on the file system have been used.\nSar data during the 8M table creation shows a very stable and regular I/O\npattern. Not a blip worth mentioning.\n\nAnother point worth mentioning, the tables contain a boolean, int8's and\ntimestamptz's only. Nothing of variable size like bytea, text, json or\nxml. Each of the 8M tables will contain on the very high side between 140k\nand 200k records. The application also has a heads up as to which table\ncontains which record. The searches come in saying \"give me record X from\npartition key Y\" where Y identifies the table and X is used in the filter\non the table.\n\nLast point, add column will never be done. I can hear eyes rolling :) but\nthe schema and its intended use is complete. You'll have to trust me on\nthat one.\n\n-Greg\n\nOn Sun, Sep 25, 2016 at 9:23 PM, Mike Sofen <[email protected]> wrote:\n\n> *From:* Greg Spiegelberg *Sent:* Sunday, September 25, 2016 7:50 PM\n> … Over the weekend, I created 8M tables with 16M indexes on those tables.\n>\n> … A system or database crash could take potentially hours to days to\n> recover. There are likely other issues ahead.\n>\n>\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> it's antiquated and don't get me started on \"Hadoop\". Problem with the\n> \"one big table\" solution is I anticipate 1,200 trillion records. Random\n> access is expected and the customer expects <30ms reads for a single record\n> fetch.\n>\n>\n>\n> I'm not looking for alternatives yet but input to my test.\n>\n> _________\n>\n>\n>\n> Holy guacamole, batman! Ok, here’s my take: you’ve traded the\n> risks/limitations of the known for the risks of the unknown. The unknown\n> being, in the numerous places where postgres historical development may\n> have cut corners, you may be the first to exercise those corners and flame\n> out like the recent SpaceX rocket.\n>\n>\n>\n> Put it another way – you’re going to bet your career (perhaps) or a\n> client’s future on an architectural model that just doesn’t seem feasible.\n> I think you’ve got a remarkable design problem to solve, and am glad you’ve\n> chosen to share that problem with us.\n>\n>\n>\n> And I do think it will boil down to this: it’s not that you CAN do it on\n> Postgres (which you clearly can), but once in production, assuming things\n> are actually stable, how will you handle the data management aspects like\n> inevitable breakage, data integrity issues, backups, restores, user\n> contention for resources, fault tolerance and disaster recovery. Just\n> listing the tables will take forever. Add a column? Never. 
I do think\n> the amount of testing you’ll need to do prove that every normal data\n> management function still works at that table count…that in itself is going\n> to be not a lot of fun.\n>\n>\n>\n> This one hurts my head. Ironically, the most logical destination for this\n> type of data may actually be Hadoop – auto-scale, auto-shard, fault\n> tolerant, etc…and I’m not a Hadoopie.\n>\n>\n>\n> I am looking forward to hearing how this all plays out, it will be quite\n> an adventure! All the best,\n>\n>\n>\n> Mike Sofen (Synthetic Genomics…on Postgres 9.5x)\n>
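\n\nOne knob relevant to the VACUUM/ANALYZE worry above (a sketch, not from the thread; t_12345 is a made-up table name): autovacuum can be tuned, or in extreme cases disabled, per table via storage parameters:\n\nALTER TABLE t_12345 SET (autovacuum_vacuum_scale_factor = 0.2,\n                         autovacuum_analyze_scale_factor = 0.1);\n\n-- riskier: opt a single table out of autovacuum entirely\nALTER TABLE t_12345 SET (autovacuum_enabled = false);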
", "msg_date": "Sun, 25 Sep 2016 22:05:18 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "I did look at PostgresXL and CitusDB. Both are admirable however neither\ncould support the need to read a random record consistently under 30ms.\nIt's a similar problem Cassandra and others have: network latency. At this\nscale, to provide the ability to access any given record amongst trillions\nit is imperative to know precisely where it is stored (system & database)\nand read a relatively small index. I have other requirements that prohibit\nuse of any technology that is eventually consistent.\n\nI liken the problem to fishing. To find a particular fish of length, size,\ncolor &c in a data lake you must accept the possibility of scanning the\nentire lake. However, if all fish were in barrels where each barrel had a\nparticular kind of fish of specific length, size, color &c then the problem\nis far simpler.\n\n-Greg\n\nOn Sun, Sep 25, 2016 at 9:04 PM, julyanto SUTANDANG <[email protected]>\nwrote:\n\n> Dear Greg,\n>\n> Have you checked PostgresXL ?\n> with millions of table, how the apps choose which table is approriate?\n> in my opinion, with that scale it should go with parallel query with\n> data sharing like what PostgresXL is done.\n>\n> Thanks,\n>\n>\n> Julyanto SUTANDANG\n>\n> Equnix Business Solutions, PT\n> (An Open Source and Open Mind Company)\n> www.equnix.co.id\n> Pusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\n> Pusat\n> T: +6221 22866662 F: +62216315281 M: +628164858028\n>\n>\n> Caution: The information enclosed in this email (and any attachments)\n> may be legally privileged and/or confidential and is intended only for\n> the use of the addressee(s). No addressee should forward, print, copy,\n> or otherwise reproduce this message in any manner that would allow it\n> to be viewed by any individual not originally listed as a recipient.\n> If the reader of this message is not the intended recipient, you are\n> hereby notified that any unauthorized disclosure, dissemination,\n> distribution, copying or the taking of any action in reliance on the\n> information herein is strictly prohibited. If you have received this\n> communication in error, please immediately notify the sender and\n> delete this message.Unless it is made by the authorized person, any\n> views expressed in this message are those of the individual sender and\n> may not necessarily reflect the views of PT Equnix Business Solutions.\n>\n>\n> On Mon, Sep 26, 2016 at 9:50 AM, Greg Spiegelberg\n> <[email protected]> wrote:\n> > Hey all,\n> >\n> > Obviously everyone who's been in PostgreSQL or almost any RDBMS for a\n> time\n> > has said not to have millions of tables. I too have long believed it\n> until\n> > recently.\n> >\n> > AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1)\n> for\n> > PGDATA. 
Over the weekend, I created 8M tables with 16M indexes on those\n> > tables. Table creation initially took 0.018031 secs, average 0.027467\n> and\n> > after tossing out outliers (qty 5) the maximum creation time found was\n> > 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\n> > Tables were created by a single process. Do note that table creation is\n> > done via plpgsql function as there are other housekeeping tasks necessary\n> > though minimal.\n> >\n> > No system tuning but here is a list of PostgreSQL knobs and switches:\n> > shared_buffers = 2GB\n> > work_mem = 48 MB\n> > max_stack_depth = 4 MB\n> > synchronous_commit = off\n> > effective_cache_size = 200 GB\n> > pg_xlog is on it's own file system\n> >\n> > There are some still obvious problems. General DBA functions such as\n> VACUUM\n> > and ANALYZE should not be done. Each will run forever and cause much\n> grief.\n> > Backups are problematic in the traditional pg_dump and PITR space. Large\n> > JOIN's by VIEW, SELECT or via table inheritance (I am abusing it in my\n> test\n> > case) are no-no's. A system or database crash could take potentially\n> hours\n> > to days to recover. There are likely other issues ahead.\n> >\n> > You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> > DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> > it's antiquated and don't get me started on \"Hadoop\". I looked at many\n> > others and ultimately the recommended use of each vendor was to have one\n> > table for all data. That overcomes the millions of tables problem,\n> right?\n> >\n> > Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> > records. Random access is expected and the customer expects <30ms reads\n> for\n> > a single record fetch.\n> >\n> > No data is loaded... yet Table and index creation only. I am\n> interested in\n> > the opinions of all including tests I may perform. If you had this\n> setup,\n> > what would you capture / analyze? I have a job running preparing data.\n> I\n> > did this on a much smaller scale (50k tables) and data load via function\n> > allowed close to 6,000 records/second. The schema has been simplified\n> since\n> > and last test reach just over 20,000 records/second with 300k tables.\n> >\n> > I'm not looking for alternatives yet but input to my test. Takers?\n> >\n> > I can't promise immediate feedback but will do my best to respond with\n> > results.\n> >\n> > TIA,\n> > -Greg\n>
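\n\nA sketch of the barrel-style routing described above, i.e. \"give me record X from partition key Y\" (function name, t_%s table naming and columns are all hypothetical, not from the thread):\n\nCREATE OR REPLACE FUNCTION fetch_one(p_part bigint, p_id int8)\nRETURNS TABLE (id int8, observed timestamptz, flag boolean) AS $$\nBEGIN\n  -- the partition key picks the table; the id is pushed into that table's index\n  RETURN QUERY EXECUTE format(\n    'SELECT id, observed, flag FROM t_%s WHERE id = $1', p_part)\n  USING p_id;\nEND;\n$$ LANGUAGE plpgsql;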
", "msg_date": "Sun, 25 Sep 2016 22:19:21 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Hi Greg,\n\nPlease follow the conventions of this mailing list, to avoid confusion\n- see the bottom of this posting for further comments.\n\n\nOn 26/09/16 17:05, Greg Spiegelberg wrote:\n> Precisely why I shared with the group. I must understand the risks\n> involved. I need to explore whether it can be stable at this size, and\n> when it becomes unstable. Aside from locking down user access to\n> superuser, is there a way to prohibit database-wide VACUUM & ANALYZE?\n> Certainly putting my trust in autovacuum :), which is something I have\n> not yet fully explored how best to tune.\n>\n> Couple more numbers... ~231 GB is the size of PGDATA with 8M empty\n> tables and 16M empty indexes. ~5% of inodes on the file system have\n> been used. Sar data during the 8M table creation shows a very stable\n> and regular I/O pattern. Not a blip worth mentioning.\n>\n> Another point worth mentioning: the tables contain a boolean, int8's\n> and timestamptz's only. Nothing of variable size like bytea, text,\n> json or xml. Each of the 8M tables will contain on the very high side\n> between 140k and 200k records. The application also has a heads up as\n> to which table contains which record. The searches come in saying\n> \"give me record X from partition key Y\" where Y identifies the table\n> and X is used in the filter on the table.\n>\n> Last point, add column will never be done. I can hear eyes rolling :)\n> but the schema and its intended use is complete. You'll have to\n> trust me on that one.\n>\n> -Greg\n>\n> On Sun, Sep 25, 2016 at 9:23 PM, Mike Sofen <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> *From:* Greg Spiegelberg *Sent:* Sunday, September 25, 2016 7:50 PM\n> … Over the weekend, I created 8M tables with 16M indexes on those\n> tables.\n>\n> … A system or database crash could take potentially hours to days\n> to recover. There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked\n> at DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's\n> face it, it's antiquated and don't get me started on \"Hadoop\".\n> Problem with the \"one big table\" solution is I anticipate 1,200\n> trillion records. Random access is expected and the customer\n> expects <30ms reads for a single record fetch.\n>\n> I'm not looking for alternatives yet but input to my test.\n>\n> _________\n>\n> Holy guacamole, batman!
> Ok, here’s my take: you’ve traded the\n> risks/limitations of the known for the risks of the unknown. The\n> unknown being, in the numerous places where postgres historical\n> development may have cut corners, you may be the first to exercise\n> those corners and flame out like the recent SpaceX rocket.\n>\n> Put it another way – you’re going to bet your career (perhaps) or\n> a client’s future on an architectural model that just doesn’t seem\n> feasible. I think you’ve got a remarkable design problem to\n> solve, and am glad you’ve chosen to share that problem with us.\n>\n> And I do think it will boil down to this: it’s not that you CAN do\n> it on Postgres (which you clearly can), but once in production,\n> assuming things are actually stable, how will you handle the data\n> management aspects like inevitable breakage, data integrity\n> issues, backups, restores, user contention for resources, fault\n> tolerance and disaster recovery. Just listing the tables will\n> take forever. Add a column? Never. I do think the amount of\n> testing you’ll need to do to prove that every normal data management\n> function still works at that table count… that in itself is not\n> going to be a lot of fun.\n>\n> This one hurts my head. Ironically, the most logical destination\n> for this type of data may actually be Hadoop – auto-scale,\n> auto-shard, fault tolerant, etc… and I’m not a Hadoopie.\n>\n> I am looking forward to hearing how this all plays out, it will be\n> quite an adventure! All the best,\n>\n> Mike Sofen (Synthetic Genomics…on Postgres 9.5x)\n>\n>\nIn this list, the convention is to post replies at the end (with some rare\nexceptions), or interspersed when appropriate, and to omit parts no longer\nrelevant.\n\nThe motivation for bottom posting like this is that people get to see the\ncontext before the reply, AND emails don't end up getting longer & longer\nas people reply at the beginning, forgetting to trim the now irrelevant\nstuff at the end.\n\n\nCheers,\nGavin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 18:04:52 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]>\nwrote:\n\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time\n> has said not to have millions of tables. I too have long believed it until\n> recently.\n>\n> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\n> PGDATA. Over the weekend, I created 8M tables with 16M indexes on those\n> tables. [...]\n>\n> No system tuning but here is a list of PostgreSQL knobs and switches:\n> shared_buffers = 2GB\n> work_mem = 48 MB\n> max_stack_depth = 4 MB\n> synchronous_commit = off\n> effective_cache_size = 200 GB\n> pg_xlog is on its own file system\n>\n> There are still some obvious problems. General DBA functions such as\n> VACUUM and ANALYZE should not be done.
> Each will run forever and cause much grief.\n>\n\nWhy would the auto versions of those cause less grief than the manual\nversions?\n\n\n> Backups are problematic in the traditional pg_dump and PITR space.\n>\n\nIs there a third option to those two spaces? File-system snapshots?\n\n\n> Large JOINs by VIEW, SELECT or via table inheritance (I am abusing it in\n> my test case) are no-nos. A system or database crash could take\n> potentially hours to days to recover.\n>\n\nIsn't that a show-stopper?\n\n\n> There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. [...]\n>\n> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> records. Random access is expected and the customer expects <30ms reads\n> for a single record fetch.\n>\n\nSorry, I don't really follow. Whether you have 1 table or millions,\neventually someone has to go get the data off the disk. Why would the\nnumber of tables make much of a difference to that fundamental?\n\nAlso, how many tablespaces do you anticipate having? Can you get 120\npetabytes of storage all mounted to one machine?\n\n\n> No data is loaded... yet. Table and index creation only. I am interested\n> in the opinions of all, including tests I may perform. [...]\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n>\n\nGo through and put one row (or 8kB worth of rows) into each of 8 million\ntables. The stats collector and the autovacuum process will start going\nnuts. Now, maybe you can deal with it. But maybe not. That is the first\nnon-obvious thing I'd look at.
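A quick way to run that experiment (a throwaway sketch; it assumes tables
named data_1 .. data_8000000 whose columns all have defaults):

    DO $$
    BEGIN
      FOR i IN 1..8000000 LOOP
        -- one row into every table; adapt the name pattern to your schema
        EXECUTE format('INSERT INTO data_%s DEFAULT VALUES', i);
      END LOOP;
    END $$;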
Cheers,\n\nJeff", "msg_date": "Sun, 25 Sep 2016 23:07:43 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "-sorry for my last email, which was also not bottom posting-\n\nHi Greg,\nOn Mon, Sep 26, 2016 at 11:19 AM, Greg Spiegelberg <[email protected]>\nwrote:\n\n> I did look at PostgresXL and CitusDB. [...] At this\n> scale, to provide the ability to access any given record amongst trillions\n> it is imperative to know precisely where it is stored (system & database)\n> and read a relatively small index. [...]\n>\n\nThen, you can get below 30ms, but how many processes might you have to run\nconcurrently? This is something that you should consider: a single machine\ncan only have fewer than 50 HT for Intel, 192 HT for POWER8, still far\nbelow millions compared with the number of tables (8 million).\nIf you use indexes correctly, you would not need a sequential scan, since\nthe scanning runs in memory (index loaded into memory).
Do you plan to query through the master table of the partitions? It is\nquite slow, actually, considering the millions of child constraints the\nplanner must check for every query.\n\nWith 8 million tables you would require very big data storage for sure, and\nit would not fit mounted on a single machine unless you are planning to use\nIBM z machines.
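To illustrate the difference (table names are only an example): hitting the
child table directly touches one small index, while going through the
inheritance master makes the planner look at every child:

    -- direct to the child: one small index read
    SELECT * FROM data_1234567 WHERE record_id = 42;

    -- via the master: the planner must consider millions of children
    SELECT * FROM data_master WHERE part_key = 1234567 AND record_id = 42;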
", "msg_date": "Mon, 26 Sep 2016 13:28:10 +0700", "msg_from": "julyanto SUTANDANG <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "\n\nOn 26/09/16 05:50, Greg Spiegelberg wrote:\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a\n> time has said not to have millions of tables. I too have long\n> believed it until recently.\n> [...]\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n\n Hi Greg.\n\n This is a problem (creating a large number of tables; really large\nindeed) that we researched in my company a while ago. You might want to\nread about it: https://www.pgcon.org/2013/schedule/events/595.en.html\n\n Cheers,\n\n Álvaro\n\n\n-- \n\nÁlvaro Hernández Tortosa\n\n\n-----------\n8Kdata\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 11:28:54 +0300", "msg_from": "=?UTF-8?Q?=c3=81lvaro_Hern=c3=a1ndez_Tortosa?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On 26 September 2016 at 11:19, Greg Spiegelberg <[email protected]>\nwrote:\n\n> I did look at PostgresXL and CitusDB. Both are admirable, however neither\n> could support the need to read a random record consistently under 30ms.\n> [...]\n>
My gut tells me that if you do solve the problem and get PostgreSQL (or\nanything) reading consistently at under 30ms with that many tables, you\nwill have solved one problem by creating another.\n\nYou discounted Cassandra due to network latency, but are now trying a\nmonolithic PostgreSQL setup. It might be worth trying a single-node\nScyllaDB or Cassandra deploy (no need for QUORUM or network overhead),\nperhaps using layered compaction so all your data gets broken out into\n160MB chunks. And certainly wander over to the ScyllaDB mailing list, as\nthey are very focused on performance problems like yours and should offer\nsome insight even if a Cassandra-style architecture cannot meet your\nrequirements.\n\nAn alternative if you exhaust or don't trust other options: use a foreign\ndata wrapper to access your own custom storage. A single table at the PG\nlevel, and you can shard the data yourself into 8 bazillion separate\nstores, in whatever structure suits your read and write operations (maybe\nreusing an embedded db engine, ordered flat file+log+index, whatever).
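The PG-side shape of that idea would be roughly the following (names are
hypothetical; the handler would be your own extension doing the sharding
underneath):

    CREATE FOREIGN DATA WRAPPER barrel_fdw HANDLER barrel_fdw_handler;
    CREATE SERVER barrel_store FOREIGN DATA WRAPPER barrel_fdw;
    CREATE FOREIGN TABLE all_data (
        part_key  bigint,
        record_id bigint,
        payload   jsonb
    ) SERVER barrel_store OPTIONS (shards '8000000');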
-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/", "msg_date": "Mon, 26 Sep 2016 16:43:20 +0700", "msg_from": "Stuart Bishop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Are the tables constantly being written to, or is this a mostly read\nscenario? One architecture possibility, if the writes are not so frequent,\nis to create just a handful of very big tables for writing, and then make\nsmaller tables as materialized views for reading. The vacuum and bloat\nmanagement could be done back at the big tables. The materialized views\ncould be refreshed or replaced during non-peak hours, and could be on a\ndifferent tablespace than the root tables. They could also be structured\nto reflect real-world query patterns, which are sometimes different than\nthe raw data storage engineering problem.
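In sketch form (names and the predicate are placeholders, not a schema
recommendation):

    CREATE MATERIALIZED VIEW recent_readings AS
        SELECT device_id, reading_ts, value
        FROM big_write_table
        WHERE reading_ts > now() - interval '7 days';
    CREATE UNIQUE INDEX ON recent_readings (device_id, reading_ts);

    -- during non-peak hours; CONCURRENTLY needs the unique index
    -- and avoids blocking readers
    REFRESH MATERIALIZED VIEW CONCURRENTLY recent_readings;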
With some logging you may be able to see that the data is not truly\nrandomly accessed, but rather clustered around just some of the millions of\ntables. Then the engineering problem becomes \"How do I service 90% of the\nqueries on these tables in 30ms?\" rather than \"How do I service 100% of the\nqueries 100% of the time in 30ms?\" Knowing 90% of the queries hit just a\nfew hundred tables makes the first question easier to answer.\n\nSimilarly, if most of the columns are static and only a few columns are\nactually changing, you could consider pulling the static stuff out of the\nsame table as the dynamic stuff and then look at joins in your queries.\nThe end goal is to be able to get solid indexes and tables that don't\nchange a lot so they can be tightly packed and cached (less bloat, less\nfragmentation, fewer disk accesses).\n\nWith regards to consistent query performance, I think you need to get out\nof AWS. That environment is terrible if you are going for consistency,\nunless you buy dedicated hardware, and then you are paying so much money it\nis ridiculous.\n\nAlso I think having 10M rows in a table is not a problem for the query\ntimes you are referring to. So instead of millions of tables, unless I'm\ndoing my math wrong, you probably only need thousands of tables.
", "msg_date": "Mon, 26 Sep 2016 06:23:58 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Following list etiquette response inline ;)\n\nOn Mon, Sep 26, 2016 at 2:28 AM, Álvaro Hernández Tortosa <[email protected]>\nwrote:\n\n>\n>\n> On 26/09/16 05:50, Greg Spiegelberg wrote:\n>\n>> Hey all,\n>>\n>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a\n>> time has said not to have millions of tables. I too have long believed it\n>> until recently.\n>> [...]\n>
> Hi Greg.\n>\n> This is a problem (creating a large number of tables; really large\n> indeed) that we researched in my company a while ago. You might want to\n> read about it: https://www.pgcon.org/2013/schedule/events/595.en.html\n>\n\nupdatedb, funny. Thank you for the pointer. I had no intention of going\nto 1B tables.\n\nI may need to understand autovacuum better. My impression was it consulted\nstatistics and performed vacuums one table at a time, based on the vacuum\nthreshold formula on\nhttps://www.postgresql.org/docs/9.5/static/routine-vacuuming.html.
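For reference, the formula from that page is:

    vacuum threshold = autovacuum_vacuum_threshold
                     + autovacuum_vacuum_scale_factor * number of tuples

and both knobs can also be set per table should autovacuum need reining in
(the values here are only an example):

    ALTER TABLE data_1234567
      SET (autovacuum_vacuum_threshold = 5000,
           autovacuum_vacuum_scale_factor = 0.4);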
 -Greg", "msg_date": "Mon, 26 Sep 2016 06:53:12 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Something that is not talked about at all in this thread is caching. A bunch\nof memcache servers in front of the DB should be able to help with the 30ms\nconstraint (doesn't have to be memcache, some caching technology).\n\n-- \nhttp://yves.zioup.com\ngpg: 4096R/32B0F416\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 06:54:31 -0600", "msg_from": "Yves Dorfsman <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "From: Rick Otten Sent: Monday, September 26, 2016 3:24 AM\nAre the tables constantly being written to, or is this a mostly read\nscenario?\n\nWith regards to consistent query performance, I think you need to get out of\nAWS. That environment is terrible if you are going for consistency unless\nyou buy dedicated hardware, and then you are paying so much money it is\nridiculous.\n\nAlso I think having 10M rows in a table is not a problem for the query times\nyou are referring to. So instead of millions of tables, unless I'm doing my\nmath wrong, you probably only need thousands of tables.\n\n----------\n\nExcellent thoughts: the read/write behavior will/should drive a lot of the\ndesign; AWS does not guarantee consistency or latency; and 10m rows is\nnothing to PG.\n\nRe AWS: we’re on it, at least for now.
In my profiling of our performance there, I consistently get low\nlatencies... I just know that there will be random higher latencies, but\nthe statistical average will be low. I just ran a quick test against a\nmodest sized table on a modest sized EC2 instance (m4.xlarge: 4 core/16gb\nram, 3 tb ssd): the table has 15m rows but is huge (it represents nearly\n500m rows compressed in jsonb documents), with 5 indexed key columns and a\ntotal of 12 columns. I queried for a single, non-PK, indexed value using\n\"select *\" (so it included the json) and it took 22ms; without the json it\ntook 11ms. Especially with the db/memory-optimized EC2 instances now\navailable (with guaranteed IOPS), performance against even 100m row tables\nshould still stay within your requirements.\n\nSo Rick's point about not needing millions of tables is right on. If\nthere's a way to create table \"clumps\", at least you'll have a more modest\ntable count.\n\nMike Sofen (Synthetic Genomics)", "msg_date": "Mon, 26 Sep 2016 06:05:01 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Mon, Sep 26, 2016 at 3:43 AM, Stuart Bishop <[email protected]>\nwrote:\n\n> On 26 September 2016 at 11:19, Greg Spiegelberg <[email protected]>\n> wrote:\n>\n>> I did look at PostgresXL and CitusDB. [...]\n>>\n>\n> My gut tells me that if you do solve the problem and get PostgreSQL (or\n> anything) reading consistently at under 30ms with that many tables, you\n> will have solved one problem by creating another.\n>
Exactly why I am exploring. What are the trade-offs?\n\n\n> You discounted Cassandra due to network latency, but are now trying a\n> monolithic PostgreSQL setup. It might be worth trying a single-node\n> ScyllaDB or Cassandra deploy (no need for QUORUM or network overhead),\n> perhaps using layered compaction so all your data gets broken out into\n> 160MB chunks. And certainly wander over to the ScyllaDB mailing list, as\n> they are very focused on performance problems like yours and should offer\n> some insight even if a Cassandra-style architecture cannot meet your\n> requirements.\n>\n\nCassandra performance, according to the experts I consulted, starts to fall\noff once the stored dataset exceeds ~3 TB. Much too small for my use case.\nAgain, I do have other reasons for not using Cassandra and others, namely\ndeduplication of information referenced by my millions of tables. There\nare no guarantees in many of the offerings outside the RDBMS realm.\n\n\n> An alternative if you exhaust or don't trust other options: use a foreign\n> data wrapper to access your own custom storage. A single table at the PG\n> level, and you can shard the data yourself into 8 bazillion separate\n> stores, in whatever structure suits your read and write operations (maybe\n> reusing an embedded db engine, ordered flat file+log+index, whatever).\n>\n\nHowever, even 8 bazillion FDWs may cause an \"overflow\" of relationships, at\nthe loss of having an efficient storage engine acting more like a traffic\ncop. In such a case, I would opt to put such logic in the app to directly\naccess the true storage rather than using FDWs.\n\n-Greg
", "msg_date": "Mon, 26 Sep 2016 07:51:10 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Consider the problem though. Random access to trillions of records with no\nguarantee any one will be fetched twice in a short time frame nullifies the\neffectiveness of a cache unless the cache is enormous. If such a cache were\nthat big, 100's of TB's, I wouldn't be looking at on-disk storage\noptions. :)
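Back of the envelope, assuming (say) ~100 bytes per record:

    1,200 trillion records * 100 bytes = 1.2 * 10^17 bytes, about 120 PB
    caching even 0.1% of that = ~120 TB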
-Greg\n\nOn Mon, Sep 26, 2016 at 6:54 AM, Yves Dorfsman <[email protected]> wrote:\n\n> Something that is not talked about at all in this thread is caching. A\n> bunch of memcache servers in front of the DB should be able to help with\n> the 30ms constraint (doesn't have to be memcache, some caching\n> technology).\n>\n> --\n> http://yves.zioup.com\n> gpg: 4096R/32B0F416\n>\n", "msg_date": "Mon, 26 Sep 2016 07:53:16 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Mon, Sep 26, 2016 at 7:05 AM, Mike Sofen <[email protected]> wrote:\n\n> [...]\n>\n> So Rick's point about not needing millions of tables is right on. If\n> there's a way to create table \"clumps\", at least you'll have a more modest\n> table count.\n>
Absolutely! The 8M tables do \"belong\" to a larger group, and reducing the\n8M tables to ~4000 is an option; however, the problem then becomes having\n140M to 500M records/table rather than the anticipated 140k records/table.\nI'm concerned read access times will go out the window. It is on the\ndocket to test.
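(Rough arithmetic behind those numbers: 8M tables folded into ~4000 clumps
is ~2,000 tables per clump; at 140k-200k records each, that is roughly 280M
to 400M records per clump table, and more if the groups fold unevenly.)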
\n\nLike any data warehouse, I expect 90%+ of the activity being writes, but the\nnecessity of low-latency reads is an absolute must, else the design doesn't\nget off the ground.\n\nMaterialized views are neat for many cases but not this one. Current\nversions of the data must be available the moment after they are written.\n\nI am considering tablespaces, however I have no way to properly size them. One\ngroup of tables may contain 1M records and another 100M. They could be on the\nsame file system, but I'd have the same problems internal to PostgreSQL, and\nthe only thing overcome is millions of files in a single directory, which is\na file system selection problem.\n\n\n\n> With some logging you may be able to see that the data is not truly\n> randomly accessed, but rather clustered around just some of the millions of\n> tables. Then the engineering problem becomes \"How do I service 90% of the\n> queries on these tables in 30ms?\" rather than \"How do I service 100% of\n> the queries 100% of the time in 30ms?\" Knowing 90% of the queries hit just\n> a few hundred tables makes the first question easier to answer.\n>\n> Similarly, if most of the columns are static and only a few columns are\n> actually changing, you could consider pulling the static stuff out of the\n> same table with the dynamic stuff and then look at joins in your queries.\n> The end goal is to be able to get solid indexes and tables that don't\n> change a lot so they can be tightly packed and cached (less bloat, less\n> fragmentation, fewer disk accesses).\n>\n> With regards to consistent query performance, I think you need to get out\n> of AWS. That environment is terrible if you are going for consistency\n> unless you buy dedicated hardware, and then you are paying so much money it\n> is ridiculous.\n>\n>\nTrue about AWS, and though hardware may eventually be purchased, AWS is the\nright place to start: 1) AWS is not IT and won't take months to approve budget\nfor gear+deployment, and more importantly 2) we are still in the design phase,\nand if this is deployed there is no way to predict true adoption, meaning\nit'll start small. AWS is the right place for now.\n\n\n\n> Also I think having 10M rows in a table is not a problem for the query\n> times you are referring to. So instead of millions of tables, unless I'm\n> doing my math wrong, you probably only need thousands of tables.\n>\n>\n>\n> On Mon, Sep 26, 2016 at 5:43 AM, Stuart Bishop <[email protected]>\n> wrote:\n>\n>> On 26 September 2016 at 11:19, Greg Spiegelberg <[email protected]>\n>> wrote:\n>>\n>>> I did look at PostgresXL and CitusDB. Both are admirable however\n>>> neither could support the need to read a random record consistently under\n>>> 30ms. It's a similar problem Cassandra and others have: network latency.\n>>> At this scale, to provide the ability to access any given record amongst\n>>> trillions it is imperative to know precisely where it is stored (system &\n>>> database) and read a relatively small index. I have other requirements\n>>> that prohibit use of any technology that is eventually consistent.\n>>>\n>>> I liken the problem to fishing. To find a particular fish of length,\n>>> size, color &c in a data lake you must accept the possibility of scanning\n>>> the entire lake. However, if all fish were in barrels where each barrel\n>>> had a particular kind of fish of specific length, size, color &c then the\n>>> problem is far simpler.\n>>>\n>>> -Greg\n>>>\n>>\n>> My gut tells me that if you do solve the problem and get PostgreSQL (or\n>> anything) reading consistently at under 30ms with that many tables you will\n>> have solved one problem by creating another.\n>>\n>> You discounted Cassandra due to network latency, but are now trying a\n>> monolithic PostgreSQL setup. It might be worth trying a single node\n>> ScyllaDB or Cassandra deploy (no need for QUORUM or network overhead),\n>> perhaps using layered compaction so all your data gets broken out into\n>> 160MB chunks. And certainly wander over to the ScyllaDB mailing list, as\n>> they are very focused on performance problems like yours and should offer\n>> some insight even if a Cassandra style architecture cannot meet your\n>> requirements.\n>>\n>> An alternative if you exhaust or don't trust other options, use a foreign\n>> data wrapper to access your own custom storage. A single table at the PG\n>> level, you can shard the data yourself into 8 bazillion separate stores, in\n>> whatever structure suites your read and write operations (maybe reusing an\n>> embedded db engine, ordered flat file+log+index, whatever).\n>>\n>> --\n>> Stuart Bishop <[email protected]>\n>> http://www.stuartbishop.net/\n>>\n>\n>
", "msg_date": "Mon, 26 Sep 2016 08:09:04 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On 26 September 2016 at 20:51, Greg Spiegelberg <[email protected]>\nwrote:\n\n>\n> An alternative if you exhaust or don't trust other options, use a foreign\n>> data wrapper to access your own custom storage. A single table at the PG\n>> level, you can shard the data yourself into 8 bazillion separate stores, in\n>> whatever structure suites your read and write operations (maybe reusing an\n>> embedded db engine, ordered flat file+log+index, whatever).\n>>\n>>\n> However even 8 bazillion FDW's may cause an \"overflow\" of relationships at\n> the loss of having an efficient storage engine acting more like a traffic\n> cop. In such a case, I would opt to put such logic in the app to directly\n> access the true storage over using FDW's.\n>\n\nI mean one fdw table, which shards internally to 8 bazillion stores on\ndisk. It has the sharding key, can calculate exactly which store(s) need to\nbe hit, and returns the rows; to PostgreSQL it looks like one big table\nwith 1.3 trillion rows. And if it doesn't do that in 30ms you get to blame\nyourself :)
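\n\nRoughly, the moving parts would be something like this (the wrapper name is\ninvented; the wrapper itself is the part you would have to write):\n\n  CREATE EXTENSION bazillion_fdw;  -- hypothetical custom wrapper\n  CREATE SERVER store_farm FOREIGN DATA WRAPPER bazillion_fdw\n    OPTIONS (root_dir '/data/stores');\n  CREATE FOREIGN TABLE all_records (\n    shard_key bigint,\n    id        bigint,\n    payload   jsonb\n  ) SERVER store_farm;\n\n  -- the wrapper routes this to exactly one on-disk store via shard_key\n  SELECT * FROM all_records WHERE shard_key = 42 AND id = 12345;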
\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/", "msg_date": "Mon, 26 Sep 2016 21:21:22 +0700", "msg_from": "Stuart Bishop <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Sun, Sep 25, 2016 at 8:50 PM, Greg Spiegelberg <[email protected]>\nwrote:\n\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time\n> has said not to have millions of tables. I too have long believed it until\n> recently.\n>\n> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\n> PGDATA. Over the weekend, I created 8M tables with 16M indexes on those\n> tables. Table creation initially took 0.018031 secs, average 0.027467 and\n> after tossing out outliers (qty 5) the maximum creation time found was\n> 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\n> Tables were created by a single process. Do note that table creation is\n> done via plpgsql function as there are other housekeeping tasks necessary\n> though minimal.\n>\n> No system tuning but here is a list of PostgreSQL knobs and switches:\n> shared_buffers = 2GB\n> work_mem = 48 MB\n> max_stack_depth = 4 MB\n> synchronous_commit = off\n> effective_cache_size = 200 GB\n> pg_xlog is on it's own file system\n>\n> There are some still obvious problems. General DBA functions such as\n> VACUUM and ANALYZE should not be done. Each will run forever and cause\n> much grief. Backups are problematic in the traditional pg_dump and PITR\n> space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\n> it in my test case) are no-no's. A system or database crash could take\n> potentially hours to days to recover. There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n> others and ultimately the recommended use of each vendor was to have one\n> table for all data. That overcomes the millions of tables problem, right?\n>\n> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> records. Random access is expected and the customer expects <30ms reads\n> for a single record fetch.\n>\n> No data is loaded... yet Table and index creation only. I am interested\n> in the opinions of all including tests I may perform. If you had this\n> setup, what would you capture / analyze? I have a job running preparing\n> data. I did this on a much smaller scale (50k tables) and data load via\n> function allowed close to 6,000 records/second. The schema has been\n> simplified since and last test reach just over 20,000 records/second with\n> 300k tables.\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n>\n> I can't promise immediate feedback but will do my best to respond with\n> results.\n>\n> TIA,\n> -Greg\n>\n\nI've gotten more responses than anticipated, and have answered some\nquestions and gotten some insight, but my challenge again is: what should I\ncapture along the way to prove or disprove this storage pattern?\nAlternatives to the storage pattern aside, I need ideas for the test rig,\nmetrics to capture and suggestions for tuning it.\n\nIn the next 24 hours, I will be sending ~1 trillion records to the test\ndatabase. Because of the time to set up, I'd rather have things set up\nproperly the first go.
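\n\nFor instance, a minimal per-minute sample of the cumulative counters during\nthe load might be (one possibility among many):\n\n  SELECT now() AS ts, xact_commit, tup_inserted, blks_read, blks_hit,\n         temp_files, deadlocks\n  FROM pg_stat_database WHERE datname = current_database();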
\n\nThanks!\n-Greg", "msg_date": "Mon, 26 Sep 2016 08:24:30 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Mon, Sep 26, 2016 at 5:53 AM, Greg Spiegelberg <[email protected]>\nwrote:\n\n>\n>>\n>> I may need to understand autovacuum better. My impression was it\n> consulted statistics and performed vacuums one table at a time based on the\n> vacuum threshold formula on https://www.postgresql.org/\n> docs/9.5/static/routine-vacuuming.html.\n>\n\nA problem is that those statistics are stored in one file (per database; it\nused to be one file per cluster). With 8 million tables, that is going to\nbe a pretty big file. But the code pretty much assumes the file is going\nto be pretty small, and so it has no compunction about commanding that it\nbe read and written, in its entirety, quite often.
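\n\nIf that file turns out to be the bottleneck, one common mitigation (assuming a\ntmpfs mount writable by postgres is available on the instance) is to point the\nstats temp directory at RAM; the file is still rewritten in its entirety, just\nfar more cheaply:\n\n  ALTER SYSTEM SET stats_temp_directory = '/run/pg_stats_tmp';\n  SELECT pg_reload_conf();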
\n\nCheers,\n\nJeff", "msg_date": "Mon, 26 Sep 2016 09:29:23 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Jeff Janes <[email protected]> writes:\n> A problem is that those statistics are stored in one file (per database; it\n> used to be one file per cluster). With 8 million tables, that is going to\n> be a pretty big file. But the code pretty much assumes the file is going\n> to be pretty small, and so it has no compunction about commanding that it\n> be read and written, in its entirety, quite often.\n\nI don't know that anyone ever believed it would be small. But at the\ntime the pgstats code was written, there was no good alternative to\npassing the data through files. (And I'm not sure we envisioned\napplications that would be demanding fresh data constantly, anyway.)\n\nNow that the DSM stuff exists and has been more or less shaken out,\nI wonder how practical it'd be to use a DSM segment to make the stats\ncollector's data available to backends. You'd need a workaround for\nthe fact that not all the DSM implementations support resize (although\ngiven the lack of callers of dsm_resize, one could be forgiven for\nwondering whether any of that code has been tested at all). But you\ncould imagine abandoning one DSM segment and creating a new one of\ndouble the size anytime the hash tables got too big.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 26 Sep 2016 12:52:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]>\nwrote:\n\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time\n> has said not to have millions of tables. I too have long believed it until\n> recently.\n>\n> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\n> PGDATA. Over the weekend, I created 8M tables with 16M indexes on those\n> tables. Table creation initially took 0.018031 secs, average 0.027467 and\n> after tossing out outliers (qty 5) the maximum creation time found was\n> 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\n> Tables were created by a single process. Do note that table creation is\n> done via plpgsql function as there are other housekeeping tasks necessary\n> though minimal.\n>\n> No system tuning but here is a list of PostgreSQL knobs and switches:\n> shared_buffers = 2GB\n> work_mem = 48 MB\n> max_stack_depth = 4 MB\n> synchronous_commit = off\n> effective_cache_size = 200 GB\n> pg_xlog is on it's own file system\n>\n> There are some still obvious problems. General DBA functions such as\n> VACUUM and ANALYZE should not be done. Each will run forever and cause\n> much grief. Backups are problematic in the traditional pg_dump and PITR\n> space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\n> it in my test case) are no-no's. A system or database crash could take\n> potentially hours to days to recover. There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n> others and ultimately the recommended use of each vendor was to have one\n> table for all data. That overcomes the millions of tables problem, right?\n>\n> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> records. Random access is expected and the customer expects <30ms reads\n> for a single record fetch.\n>\n\nYou don't give enough details to fully explain the problem you're trying to\nsolve.\n\n - Will records ever be updated or deleted? If so, what percentage and at\n what frequency?\n - What specifically are you storing (e.g. list of integers, strings,\n people's sex habits, ...)? Or more importantly, are these fixed- or\n variable-sized records?\n - Once the 1,200 trillion records are loaded, is that it? Or do more\n data arrive, and if so, at what rate?\n - Do your queries change, or is there a fixed set of queries?\n - How complex are the joins?\n\nThe reason I ask these specific questions is because, as others have\npointed out, this might be a perfect case for a custom (non-relational)\ndatabase. Relational databases are general-purpose tools, sort of like a\nSwiss-Army knife. 
A Swiss-Army knife does most things passably, but if you\nwant to carve wood, or butcher meat, or slice vegetables, you get a knife\nmeant for that specific task.\n\nI've written several custom database-storage systems for very specific\nhigh-performance systems. It's generally a couple weeks of work, and you\nhave a tailored performance and storage that's hard for a general-purpose\nrelational system to match.\n\nThe difficulty of building such a system depends a lot on the answers to\nthe questions above.\n\nCraig\n\n\n> No data is loaded... yet Table and index creation only. I am interested\n> in the opinions of all including tests I may perform. If you had this\n> setup, what would you capture / analyze? I have a job running preparing\n> data. I did this on a much smaller scale (50k tables) and data load via\n> function allowed close to 6,000 records/second. The schema has been\n> simplified since and last test reach just over 20,000 records/second with\n> 300k tables.\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n>\n> I can't promise immediate feedback but will do my best to respond with\n> results.\n>\n> TIA,\n> -Greg\n>\n\n\n\n-- \n---------------------------------\nCraig A. James\nChief Technology Officer\neMolecules, Inc.\n---------------------------------", "msg_date": "Tue, 27 Sep 2016 07:30:11 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "From: Greg Spiegelberg Sent: Monday, September 26, 2016 7:25 AM\nI've gotten more responses than anticipated and have answered some questions and gotten some insight but my challenge again is what should I capture along the way to prove or disprove this storage pattern? Alternatives to the storage pattern aside, I need ideas to test rig, capture metrics and suggestions to tune it.\n\n \n\nIn the next 24 hours, I will be sending ~1 trillion records to the test database. Because of time to set up, I'd rather have things set up properly the first go.\n\n \n\nThanks!\n\n-Greg \n\n---------------------\n\nGreg, I ran another quick test on a wider table than you’ve described, but this time with 80 million rows, with core counts, ram and ssd storage similar to what you’d have on that AWS EC2 instance. This table had 7 columns (3 integers, 3 text, 1 timestamptz) with an average width of 157 chars, one btree index on the pk int column. Using explain analyze, I picked one id value out of the 80m and ran a select * where id = x. It did an index scan, had a planning time of 0.077ms, and an execution time of 0.254 seconds. I ran the query for a variety of widely spaced values (so the data was uncached) and the timing never changed. This has been mirroring my general experience with PG – very fast reads on indexed queries.
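\n\nThe test was essentially of this shape (table name and id value invented):\n\n  EXPLAIN ANALYZE SELECT * FROM wide_test_table WHERE id = 31415926;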
\n\n \n\nSummary: I think your buckets can be WAY bigger than you are envisioning for the simple table design you’ve described. I’m betting you can easily do 500 million rows per bucket before approaching anything close to the 30ms max query time.\n\n \n\nMike Sofen (Synthetic Genomics)", "msg_date": "Tue, 27 Sep 2016 08:10:15 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "From: Mike Sofen Sent: Tuesday, September 27, 2016 8:10 AM\n\n\n\nFrom: Greg Spiegelberg Sent: Monday, September 26, 2016 7:25 AM\nI've gotten more responses than anticipated and have answered some questions and gotten some insight but my challenge again is what should I capture along the way to prove or disprove this storage pattern? Alternatives to the storage pattern aside, I need ideas to test rig, capture metrics and suggestions to tune it.\n\n \n\nIn the next 24 hours, I will be sending ~1 trillion records to the test database. Because of time to set up, I'd rather have things set up properly the first go.\n\n \n\nThanks!\n\n-Greg \n\n---------------------\n\nGreg, I ran another quick test on a wider table than you’ve described, but this time with 80 million rows, with core counts, ram and ssd storage similar to what you’d have on that AWS EC2 instance. This table had 7 columns (3 integers, 3 text, 1 timestamptz) with an average width of 157 chars, one btree index on the pk int column. Using explain analyze, I picked one id value out of the 80m and ran a select * where id = x. It did an index scan, had a planning time of 0.077ms, and an execution time of 0.254 seconds. I ran the query for a variety of widely spaced values (so the data was uncached) and the timing never changed. This has been mirroring my general experience with PG – very fast reads on indexed queries. \n\n \n\nSummary: I think your buckets can be WAY bigger than you are envisioning for the simple table design you’ve described. I’m betting you can easily do 500 million rows per bucket before approaching anything close to the 30ms max query time.\n\n \n\nMike Sofen (Synthetic Genomics)\n\n \n\nTotally typo’d the execution time: it was 0.254 MILLISECONDS, not SECONDS. Thus my comment about going up 10x in bucket size instead of appearing to be right at the limit. Sorry!\n\n \n\nMike", "msg_date": "Tue, 27 Sep 2016 08:42:44 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Tue, Sep 27, 2016 at 8:30 AM, Craig James <[email protected]> wrote:\n\n> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]>\n> wrote:\n>\n>> Hey all,\n>>\n>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a\n>> time has said not to have millions of tables. I too have long believed it\n>> until recently.\n>>\n>> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1)\n>> for PGDATA. Over the weekend, I created 8M tables with 16M indexes on\n>> those tables. Table creation initially took 0.018031 secs, average\n>> 0.027467 and after tossing out outliers (qty 5) the maximum creation time\n>> found was 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049\n>> seconds. Tables were created by a single process. Do note that table\n>> creation is done via plpgsql function as there are other housekeeping tasks\n>> necessary though minimal.\n>>\n>> No system tuning but here is a list of PostgreSQL knobs and switches:\n>> shared_buffers = 2GB\n>> work_mem = 48 MB\n>> max_stack_depth = 4 MB\n>> synchronous_commit = off\n>> effective_cache_size = 200 GB\n>> pg_xlog is on it's own file system\n>>\n>> There are some still obvious problems. General DBA functions such as\n>> VACUUM and ANALYZE should not be done. Each will run forever and cause\n>> much grief. Backups are problematic in the traditional pg_dump and PITR\n>> space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\n>> it in my test case) are no-no's. A system or database crash could take\n>> potentially hours to days to recover. There are likely other issues ahead.\n>>\n>> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n>> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n>> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n>> others and ultimately the recommended use of each vendor was to have one\n>> table for all data. That overcomes the millions of tables problem, right?\n>>\n>> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n>> records. Random access is expected and the customer expects <30ms reads\n>> for a single record fetch.\n>>\n>\n> You don't give enough details to fully explain the problem you're trying\n> to solve.\n>\n> - Will records ever be updated or deleted? If so, what percentage and\n> at what frequency?\n> - What specifically are you storing (e.g. list of integers, strings,\n> people's sex habits, ...)? Or more importantly, are these fixed- or\n> variable-sized records?\n> - Once the 1,200 trillion records are loaded, is that it? Or do more\n> data arrive, and if so, at what rate?\n> - Do your queries change, or is there a fixed set of queries?\n> - How complex are the joins?\n>\n\nExcellent questions.\n\n1a. Half of the 4M tables will contain ~140k records and UPDATE's will\noccur on roughly 100 records/day/table. No DELETE's on this first half.\n1b. Second half of the 4M tables will contain ~200k records. Zero UPDATE's\nhowever DELETE's will occur on ~100 records/day/table.\n\n2. All 4M tables contain 7 columns: (4) bigints, (2) timestamptz and (1)\nboolean. 2M of the tables will have a PKEY on (1) bigint column only.\nThe second 2M tables have a PKEY on (bigint, timestamptz) and two additional\nindexes on different (bigint, timestamptz) column pairs.\n\n3. The trillions-of-records load is just to push the system to find the\nmaximum record load capability. Reality, 200M records/day or\n~2,300/second average is the expectation once in production.\n\n4. Queries are fixed and match the indexes laid down on the tables. Goal\nis <30ms/query. I have attempted queries with and without indexes.\nWithout indexes the average query response varied between 20ms and 40ms\nwhereas indexes respond within a much tighter range of 5ms to 9ms. Both\nquery performance tests were done during data-ingest.\n\n5. Zero JOIN's and I won't let it ever happen. However the 4M tables\nINHERIT a data grouping table. Test rig limits child tables to\n1,000/parent. 
This was done to explore some other possible access patterns,\nbut they are secondary, and if it doesn't work then either a) the\nrequirement will be dropped or b) I may look at storing the data in the\n1,000 child tables directly in the parent table, and I'll need to re-run\nload & read tests.\n\n\n\n> The reason I ask these specific questions is because, as others have\n> pointed out, this might be a perfect case for a custom (non-relational)\n> database. Relational databases are general-purpose tools, sort of like a\n> Swiss-Army knife. A Swiss-Army knife does most things passably, but if you\n> want to carve wood, or butcher meat, or slice vegetables, you get a knife\n> meant for that specific task.\n>\n>\nI'm at a sizing phase. If 4M tables work I'll attempt 16M tables. If it\npoints to only 2M or 1M then that's fine. The 4M-table database is only a\nsingle cog in the storage service design. Anticipating ~40 of these\ndatabases, but it is dependent upon how many tables work in a single\ninstance.\n\nThe 4M tables are strict relationship tables referencing two other tables\ncontaining a JSONB column in each. The PostgreSQL JSONB function and\noperator set facilitates current needs beautifully and leaves much room for\nfuture use cases. Using other technologies really limits the query search\nand storage capabilities.\n\nI've explored many other technologies and the possibility of using\nPostgreSQL for the 2 tables with JSONB and the relationships elsewhere;\nhowever, I foresee too many complexities and possible problems. I am\nconfident in PostgreSQL and the implementation but, as I said, I need to\nunderstand the size limits.\n\n\nI mentioned the 2 tables with JSONB so I'll elaborate a little more on\nquery patterns. Every query performs 3 SELECT's.\n1. SELECT on JSONB table #1 (~140k records total) searching for records\nmatching a JSONB literal (most common use) or pattern. Returns id1.\n2. SELECT on the known table (one of the 4M) using id1 from step 1.\nReturns id2.\n3. SELECT on JSONB table #2 (~500k to 90M records) searching for the record\nmatching id2 returned in step 2.
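\n\nRoughly, with all table and column names invented (the relationship table is\nthe 7-column layout from answer 2 above):\n\n  CREATE TABLE rel_table_123456 (\n    id1 bigint PRIMARY KEY,   -- key into JSONB table #1\n    id2 bigint,               -- key into JSONB table #2\n    g1 bigint,\n    g2 bigint,\n    created timestamptz,\n    updated timestamptz,\n    active boolean\n  );\n\n  -- step 1: find id1 by JSONB containment\n  SELECT id1 FROM jsonb_table_1 WHERE doc @> '{\"attr\": \"value\"}';\n  -- step 2: hit the one known relationship table out of the 4M\n  SELECT id2 FROM rel_table_123456 WHERE id1 = $1;\n  -- step 3: fetch the matching record\n  SELECT doc FROM jsonb_table_2 WHERE id2 = $2;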
\n\n\n> I've written several custom database-storage systems for very specific\n> high-performance systems. It's generally a couple weeks of work, and you\n> have a tailored performance and storage that's hard for a general-purpose\n> relational system to match.\n>\n>\nYeah, we're kinda beyond the write-it-yourself because of the need to\nmaintain-it-yourself. :)\n\n\nHope some of these answers helped.\n\n-Greg\n\n\n\n> The difficulty of building such a system depends a lot on the answers to\n> the questions above.\n>\n> Craig\n>\n>\n>> No data is loaded... yet Table and index creation only. I am interested\n>> in the opinions of all including tests I may perform. If you had this\n>> setup, what would you capture / analyze? I have a job running preparing\n>> data. I did this on a much smaller scale (50k tables) and data load via\n>> function allowed close to 6,000 records/second. The schema has been\n>> simplified since and last test reach just over 20,000 records/second with\n>> 300k tables.\n>>\n>> I'm not looking for alternatives yet but input to my test. Takers?\n>>\n>> I can't promise immediate feedback but will do my best to respond with\n>> results.\n>>\n>> TIA,\n>> -Greg\n>>\n>\n>\n>\n> --\n> ---------------------------------\n> Craig A. James\n> Chief Technology Officer\n> eMolecules, Inc.\n> ---------------------------------", "msg_date": "Tue, 27 Sep 2016 09:46:05 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Tue, Sep 27, 2016 at 9:42 AM, Mike Sofen <[email protected]> wrote:\n\n> *From:* Mike Sofen *Sent:* Tuesday, September 27, 2016 8:10 AM\n>\n> *From:* Greg Spiegelberg *Sent:* Monday, September 26, 2016 7:25 AM\n> I've gotten more responses than anticipated and have answered some\n> questions and gotten some insight but my challenge again is what should I\n> capture along the way to prove or disprove this storage pattern?\n> Alternatives to the storage pattern aside, I need ideas to test rig,\n> capture metrics and suggestions to tune it.\n>\n>\n>\n> In the next 24 hours, I will be sending ~1 trillion records to the test\n> database. Because of time to set up, I'd rather have things set up\n> properly the first go.\n>\n>\n>\n> Thanks!\n>\n> -Greg\n>\n> ---------------------\n>\n> Greg, I ran another quick test on a wider table than you’ve described, but\n> this time with 80 million rows, with core counts, ram and ssd storage\n> similar to what you’d have on that AWS EC2 instance. This table had 7\n> columns (3 integers, 3 text, 1 timestamptz) with an average width of 157\n> chars, one btree index on the pk int column. Using explain analyze, I\n> picked one id value out of the 80m and ran a select * where id = x. It did\n> an index scan, had a planning time of 0.077ms, and an execution time of\n> 0.254 seconds. I ran the query for a variety of widely spaced values (so\n> the data was uncached) and the timing never changed. This has been\n> mirroring my general experience with PG – very fast reads on indexed\n> queries.\n>\n>\n>\n> Summary: I think your buckets can be WAY bigger than you are envisioning\n> for the simple table design you’ve described. I’m betting you can easily\n> do 500 million rows per bucket before approaching anything close to the\n> 30ms max query time.\n>\n>\n>\n> Mike Sofen (Synthetic Genomics)\n>\n>\n>\n> Totally typo’d the execution time: it was 0.254 MILLISECONDS, not\n> SECONDS. Thus my comment about going up 10x in bucket size instead of\n> appearing to be right at the limit. Sorry!\n>\n>\n>\nI figured. :)\n\nHaven't ruled it out, but the expectation of this implementation is to\nperform at worst 3X slower than memcache or Redis.\n\nBigger buckets mean a wider possibility of response times. Some buckets\nmay contain 140k records and some 100X more.\n\n-Greg
", "msg_date": "Tue, 27 Sep 2016 09:49:49 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]>\nwrote:\n\n> Hey all,\n>\n> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time\n> has said not to have millions of tables. I too have long believed it until\n> recently.\n>\n> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for\n> PGDATA. Over the weekend, I created 8M tables with 16M indexes on those\n> tables. Table creation initially took 0.018031 secs, average 0.027467 and\n> after tossing out outliers (qty 5) the maximum creation time found was\n> 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds.\n> Tables were created by a single process. Do note that table creation is\n> done via plpgsql function as there are other housekeeping tasks necessary\n> though minimal.\n>\n> No system tuning but here is a list of PostgreSQL knobs and switches:\n> shared_buffers = 2GB\n> work_mem = 48 MB\n> max_stack_depth = 4 MB\n> synchronous_commit = off\n> effective_cache_size = 200 GB\n> pg_xlog is on it's own file system\n>\n> There are some still obvious problems. General DBA functions such as\n> VACUUM and ANALYZE should not be done. Each will run forever and cause\n> much grief. Backups are problematic in the traditional pg_dump and PITR\n> space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\n> it in my test case) are no-no's. A system or database crash could take\n> potentially hours to days to recover. There are likely other issues ahead.\n>\n> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n> others and ultimately the recommended use of each vendor was to have one\n> table for all data. That overcomes the millions of tables problem, right?\n>\n> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n> records. Random access is expected and the customer expects <30ms reads\n> for a single record fetch.\n>\n> No data is loaded... yet Table and index creation only. I am interested\n> in the opinions of all including tests I may perform. If you had this\n> setup, what would you capture / analyze? I have a job running preparing\n> data. I did this on a much smaller scale (50k tables) and data load via\n> function allowed close to 6,000 records/second. The schema has been\n> simplified since and last test reach just over 20,000 records/second with\n> 300k tables.\n>\n> I'm not looking for alternatives yet but input to my test. Takers?\n>\n> I can't promise immediate feedback but will do my best to respond with\n> results.\n>\n> TIA,\n> -Greg\n>\n\nI have not seen transaction ID wraparound mentioned in this thread yet.\nWith the numbers that you are looking at, I could see this as a major issue.
With the numbers that you are looking at, I could see this as a major issue.T", "msg_date": "Tue, 27 Sep 2016 09:15:14 -0700", "msg_from": "Terry Schmitt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt <[email protected]>\nwrote:\n\n>\n>\n> On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]>\n> wrote:\n>\n>> Hey all,\n>>\n>> Obviously everyone who's been in PostgreSQL or almost any RDBMS for a\n>> time has said not to have millions of tables. I too have long believed it\n>> until recently.\n>>\n>> AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1)\n>> for PGDATA. Over the weekend, I created 8M tables with 16M indexes on\n>> those tables. Table creation initially took 0.018031 secs, average\n>> 0.027467 and after tossing out outliers (qty 5) the maximum creation time\n>> found was 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049\n>> seconds. Tables were created by a single process. Do note that table\n>> creation is done via plpgsql function as there are other housekeeping tasks\n>> necessary though minimal.\n>>\n>> No system tuning but here is a list of PostgreSQL knobs and switches:\n>> shared_buffers = 2GB\n>> work_mem = 48 MB\n>> max_stack_depth = 4 MB\n>> synchronous_commit = off\n>> effective_cache_size = 200 GB\n>> pg_xlog is on it's own file system\n>>\n>> There are some still obvious problems. General DBA functions such as\n>> VACUUM and ANALYZE should not be done. Each will run forever and cause\n>> much grief. Backups are problematic in the traditional pg_dump and PITR\n>> space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing\n>> it in my test case) are no-no's. A system or database crash could take\n>> potentially hours to days to recover. There are likely other issues ahead.\n>>\n>> You may wonder, \"why is Greg attempting such a thing?\" I looked at\n>> DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it,\n>> it's antiquated and don't get me started on \"Hadoop\". I looked at many\n>> others and ultimately the recommended use of each vendor was to have one\n>> table for all data. That overcomes the millions of tables problem, right?\n>>\n>> Problem with the \"one big table\" solution is I anticipate 1,200 trillion\n>> records. Random access is expected and the customer expects <30ms reads\n>> for a single record fetch.\n>>\n>> No data is loaded... yet Table and index creation only. I am interested\n>> in the opinions of all including tests I may perform. If you had this\n>> setup, what would you capture / analyze? I have a job running preparing\n>> data. I did this on a much smaller scale (50k tables) and data load via\n>> function allowed close to 6,000 records/second. The schema has been\n>> simplified since and last test reach just over 20,000 records/second with\n>> 300k tables.\n>>\n>> I'm not looking for alternatives yet but input to my test. Takers?\n>>\n>> I can't promise immediate feedback but will do my best to respond with\n>> results.\n>>\n>> TIA,\n>> -Greg\n>>\n>\n> I have not seen any mention of transaction ID wraparound mentioned in this\n> thread yet. With the numbers that you are looking at, I could see this as a\n> major issue.\n>\n> T\n>\n\nThank you Terry. You get the gold star. :) I was waiting for that to\ncome up.\n\nSuccess means handling this condition. A whole database vacuum and\ndump-restore is out of the question. 
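One way to keep an eye on how close we are getting (a minimal monitoring\nsketch using standard catalog lookups; the alerting threshold would be\nrelative to autovacuum_freeze_max_age, 200M by default):\n\nSELECT datname, age(datfrozenxid) AS xid_age\nFROM pg_database ORDER BY 2 DESC;\n\n-- and per table, inside a given database:\nSELECT relname, age(relfrozenxid) AS xid_age\nFROM pg_class WHERE relkind = 'r'\nORDER BY 2 DESC LIMIT 20;\n\nTables move to the front of autovacuum's queue once age(relfrozenxid)\npasses autovacuum_freeze_max_age.\n\n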
Can a properly tuned autovacuum\nprevent the situation?\n\n-Greg\n\nOn Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt <[email protected]> wrote:On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]> wrote:Hey all,Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time has said not to have millions of tables.  I too have long believed it until recently.AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for PGDATA.  Over the weekend, I created 8M tables with 16M indexes on those tables.  Table creation initially took 0.018031 secs, average 0.027467 and after tossing out outliers (qty 5) the maximum creation time found was 0.66139 seconds.  Total time 30 hours, 31 minutes and 8.435049 seconds.  Tables were created by a single process.  Do note that table creation is done via plpgsql function as there are other housekeeping tasks necessary though minimal.No system tuning but here is a list of PostgreSQL knobs and switches:shared_buffers = 2GBwork_mem = 48 MBmax_stack_depth = 4 MBsynchronous_commit = offeffective_cache_size = 200 GBpg_xlog is on it's own file systemThere are some still obvious problems.  General DBA functions such as VACUUM and ANALYZE should not be done.  Each will run forever and cause much grief.  Backups are problematic in the traditional pg_dump and PITR space.  Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing it in my test case) are no-no's.  A system or database crash could take potentially hours to days to recover.  There are likely other issues ahead.You may wonder, \"why is Greg attempting such a thing?\"  I looked at DynamoDB, BigTable, and Cassandra.  I like Greenplum but, let's face it, it's antiquated and don't get me started on \"Hadoop\".  I looked at many others and ultimately the recommended use of each vendor was to have one table for all data.  That overcomes the millions of tables problem, right?Problem with the \"one big table\" solution is I anticipate 1,200 trillion records.  Random access is expected and the customer expects <30ms reads for a single record fetch.No data is loaded... yet  Table and index creation only.  I am interested in the opinions of all including tests I may perform.  If you had this setup, what would you capture / analyze?  I have a job running preparing data.  I did this on a much smaller scale (50k tables) and data load via function allowed close to 6,000 records/second.  The schema has been simplified since and last test reach just over 20,000 records/second with 300k tables.I'm not looking for alternatives yet but input to my test.  Takers?I can't promise immediate feedback but will do my best to respond with results.TIA,-Greg\nI have not seen any mention of transaction ID wraparound mentioned in this thread yet. With the numbers that you are looking at, I could see this as a major issue.T\nThank you Terry.  You get the gold star.  :)   I was waiting for that to come up.Success means handling this condition.  A whole database vacuum and dump-restore is out of the question.  Can a properly tuned autovacuum prevent the situation?-Greg", "msg_date": "Tue, 27 Sep 2016 10:27:43 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": ">\n> Have you considered having many databases (e.g. 100) and possibly many\npostgresql servers (e.g. 10) started on different ports?\nThis would give you 1000x less tables per db.\n\n>\n>>\n\nHave you considered having many databases (e.g. 
100) and possibly many postgresql servers (e.g. 10) started on different ports?This would give you 1000x less tables per db.", "msg_date": "Wed, 28 Sep 2016 15:39:44 +0000", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "If going that route, why not just use plproxy?\n\nOn Wed, Sep 28, 2016 at 11:39 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n\n> Have you considered having many databases (e.g. 100) and possibly many\n> postgresql servers (e.g. 10) started on different ports?\n> This would give you 1000x less tables per db.\n>\n>>\n>>>\n\nIf going that route, why not just use plproxy?On Wed, Sep 28, 2016 at 11:39 AM, Vitalii Tymchyshyn <[email protected]> wrote:Have you considered having many databases (e.g. 100) and possibly many postgresql servers (e.g. 10) started on different ports?This would give you 1000x less tables per db.", "msg_date": "Wed, 28 Sep 2016 12:05:19 -0400", "msg_from": "Richard Albright <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Wed, Sep 28, 2016 at 9:39 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n\n> Have you considered having many databases (e.g. 100) and possibly many\n> postgresql servers (e.g. 10) started on different ports?\n> This would give you 1000x less tables per db.\n>\n\nThe system design already allows for many database servers. 40 is okay,\n100 isn't terrible but if it's thousands then operations might lynch me.\n\n-Greg\n\nOn Wed, Sep 28, 2016 at 9:39 AM, Vitalii Tymchyshyn <[email protected]> wrote:Have you considered having many databases (e.g. 100) and possibly many postgresql servers (e.g. 10) started on different ports?This would give you 1000x less tables per db.\nThe system design already allows for many database servers.  40 is okay, 100 isn't terrible but if it's thousands then operations might lynch me.-Greg", "msg_date": "Wed, 28 Sep 2016 10:27:00 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Greg,\n\n* Greg Spiegelberg ([email protected]) wrote:\n> Bigger buckets mean a wider possibility of response times. Some buckets\n> may contain 140k records and some 100X more.\n\nHave you analyzed the depth of the btree indexes to see how many more\npages need to be read to handle finding a row in 140k records vs. 14M\nrecords vs. 140M records?\n\nI suspect you'd find that the change in actual depth (meaning how many\npages have to actually be read to find the row you're looking for) isn't\nvery much and that your concern over the \"wider possibility of response\ntimes\" isn't well founded.\n\nSince you have a hard-set 30ms maximum for query response time, I would\nsuggest you work out how long it takes to read a cold page from your I/O\nsubsystem and then you can work through exactly how many page reads\ncould be done in that 30ms (or perhaps 20ms, to allow for whatever\noverhead there will be in the rest of the system and as a buffer) and\nthen work that back to how deep the index can be based on that many page\nreads and then how many records are required to create an index of that\ndepth. 
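As a rough worked example (every number here is an assumption for\nillustration, not a measurement): at ~5 ms per cold random read, a 20 ms\nbudget buys about 4 page reads, and with a btree fanout of roughly 300\nkeys per page a depth-4 tree already addresses about 300^4 entries:\n\nSELECT 20 / 5 AS page_reads_in_budget,       -- 4\n       300::numeric ^ 4 AS keys_at_depth_4;  -- 8.1 billion\n\n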
Of course, the page from the heap will also need to be read and\nthere's a bit of additional work to be done, but the disk i/o for cold\npages is almost certainly where most time will be spent.\n\nI suspect you'll discover that millions of tables is a couple orders of\nmagnitude off of how many you'd need to keep the number of page reads\nbelow the threshold you work out based on your I/O.\n\nOf course, you would need a consistent I/O subsystem, or at least one\nwhere you know the maximum possible latency to pull a cold page.\n\nLastly, you'll want to figure out how to handle system crash/restart if\nthis system requires a high uptime. I expect you'd want to have at\nleast one replica and a setup which allows you to flip traffic to it\nvery quickly to maintain the 30ms response times.\n\nThanks!\n\nStephen", "msg_date": "Wed, 28 Sep 2016 13:27:59 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <[email protected]> wrote:\n\n> Greg,\n>\n> * Greg Spiegelberg ([email protected]) wrote:\n> > Bigger buckets mean a wider possibility of response times. Some buckets\n> > may contain 140k records and some 100X more.\n>\n> Have you analyzed the depth of the btree indexes to see how many more\n> pages need to be read to handle finding a row in 140k records vs. 14M\n> records vs. 140M records?\n>\n> I suspect you'd find that the change in actual depth (meaning how many\n> pages have to actually be read to find the row you're looking for) isn't\n> very much and that your concern over the \"wider possibility of response\n> times\" isn't well founded.\n>\n>\nStephen,\nExcellent feedback! Um, how does one look at tree depth in PostgreSQL?\nOracle I know but have not done the same in PG. Pointers?\n\n\n\n> Since you have a hard-set 30ms maximum for query response time, I would\n> suggest you work out how long it takes to read a cold page from your I/O\n> subsystem and then you can work through exactly how many page reads\n> could be done in that 30ms (or perhaps 20ms, to allow for whatever\n> overhead there will be in the rest of the system and as a buffer) and\n> then work that back to how deep the index can be based on that many page\n> reads and then how many records are required to create an index of that\n> depth. Of course, the page from the heap will also need to be read and\n> there's a bit of additional work to be done, but the disk i/o for cold\n> pages is almost certainly where most time will be spent.\n>\n> I suspect you'll discover that millions of tables is a couple orders of\n> magnitude off of how many you'd need to keep the number of page reads\n> below the threshold you work out based on your I/O.\n>\n> Of course, you would need a consistent I/O subsystem, or at least one\n> where you know the maximum possible latency to pull a cold page.\n>\n> Lastly, you'll want to figure out how to handle system crash/restart if\n> this system requires a high uptime. I expect you'd want to have at\n> least one replica and a setup which allows you to flip traffic to it\n> very quickly to maintain the 30ms response times.\n>\n\nI'm replicating via messaging. 
PG replication is fine for smaller db's but\nI don't trust networks and PG upgrade intricacies complicate matters.\n\n-Greg\n\nOn Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <[email protected]> wrote:Greg,\n\n* Greg Spiegelberg ([email protected]) wrote:\n> Bigger buckets mean a wider possibility of response times.  Some buckets\n> may contain 140k records and some 100X more.\n\nHave you analyzed the depth of the btree indexes to see how many more\npages need to be read to handle finding a row in 140k records vs. 14M\nrecords vs. 140M records?\n\nI suspect you'd find that the change in actual depth (meaning how many\npages have to actually be read to find the row you're looking for) isn't\nvery much and that your concern over the \"wider possibility of response\ntimes\" isn't well founded.\nStephen,Excellent feedback!   Um, how does one look at tree depth in PostgreSQL?  Oracle I know but have not done the same in PG.  Pointers? \nSince you have a hard-set 30ms maximum for query response time, I would\nsuggest you work out how long it takes to read a cold page from your I/O\nsubsystem and then you can work through exactly how many page reads\ncould be done in that 30ms (or perhaps 20ms, to allow for whatever\noverhead there will be in the rest of the system and as a buffer) and\nthen work that back to how deep the index can be based on that many page\nreads and then how many records are required to create an index of that\ndepth.  Of course, the page from the heap will also need to be read and\nthere's a bit of additional work to be done, but the disk i/o for cold\npages is almost certainly where most time will be spent.\n\nI suspect you'll discover that millions of tables is a couple orders of\nmagnitude off of how many you'd need to keep the number of page reads\nbelow the threshold you work out based on your I/O.\n\nOf course, you would need a consistent I/O subsystem, or at least one\nwhere you know the maximum possible latency to pull a cold page.\n\nLastly, you'll want to figure out how to handle system crash/restart if\nthis system requires a high uptime.  I expect you'd want to have at\nleast one replica and a setup which allows you to flip traffic to it\nvery quickly to maintain the 30ms response times.I'm replicating via messaging.  PG replication is fine for smaller db's but I don't trust networks and PG upgrade intricacies complicate matters.-Greg", "msg_date": "Wed, 28 Sep 2016 11:38:42 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Greg,\n\n* Greg Spiegelberg ([email protected]) wrote:\n> On Wed, Sep 28, 2016 at 11:27 AM, Stephen Frost <[email protected]> wrote:\n> > * Greg Spiegelberg ([email protected]) wrote:\n> > > Bigger buckets mean a wider possibility of response times. Some buckets\n> > > may contain 140k records and some 100X more.\n> >\n> > Have you analyzed the depth of the btree indexes to see how many more\n> > pages need to be read to handle finding a row in 140k records vs. 14M\n> > records vs. 140M records?\n> >\n> > I suspect you'd find that the change in actual depth (meaning how many\n> > pages have to actually be read to find the row you're looking for) isn't\n> > very much and that your concern over the \"wider possibility of response\n> > times\" isn't well founded.\n\n> Excellent feedback! Um, how does one look at tree depth in PostgreSQL?\n> Oracle I know but have not done the same in PG. 
Pointers?\n\nCREATE EXTENSION pageinspect;\n\nSELECT * FROM bt_metap('indexname');\n\nhttps://www.postgresql.org/docs/9.5/static/pageinspect.html\n\nThanks!\n\nStephen", "msg_date": "Wed, 28 Sep 2016 14:00:35 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On 26 September 2016 at 05:19, Greg Spiegelberg <[email protected]> wrote:\n> I did look at PostgresXL and CitusDB. Both are admirable however neither\n> could support the need to read a random record consistently under 30ms.\n> It's a similar problem Cassandra and others have: network latency. At this\n> scale, to provide the ability to access any given record amongst trillions\n> it is imperative to know precisely where it is stored (system & database)\n> and read a relatively small index. I have other requirements that prohibit\n> use of any technology that is eventually consistent.\n\nThen XL is exactly what you need, since it does allow you to calculate\nexactly where the record is via hash and then access it, which makes\nthe request just a single datanode task.\n\nXL is not the same as CitusDB.\n\n> I liken the problem to fishing. To find a particular fish of length, size,\n> color &c in a data lake you must accept the possibility of scanning the\n> entire lake. However, if all fish were in barrels where each barrel had a\n> particular kind of fish of specific length, size, color &c then the problem\n> is far simpler.\n\nThe task of putting the fish in the appropriate barrel is quite hard.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Sep 2016 09:21:39 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Greg Spiegelberg\nSent: Tuesday, September 27, 2016 7:28 PM\nTo: Terry Schmitt <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Millions of tables\n\n \n\nOn Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt <[email protected] <mailto:[email protected]> > wrote:\n\n \n\n \n\nOn Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected] <mailto:[email protected]> > wrote:\n\nHey all,\n\n \n\nObviously everyone who's been in PostgreSQL or almost any RDBMS for a time has said not to have millions of tables. I too have long believed it until recently.\n\n \n\nAWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for PGDATA. Over the weekend, I created 8M tables with 16M indexes on those tables. Table creation initially took 0.018031 secs, average 0.027467 and after tossing out outliers (qty 5) the maximum creation time found was 0.66139 seconds. Total time 30 hours, 31 minutes and 8.435049 seconds. Tables were created by a single process. Do note that table creation is done via plpgsql function as there are other housekeeping tasks necessary though minimal.\n\n \n\nNo system tuning but here is a list of PostgreSQL knobs and switches:\n\nshared_buffers = 2GB\n\nwork_mem = 48 MB\n\nmax_stack_depth = 4 MB\n\nsynchronous_commit = off\n\neffective_cache_size = 200 GB\n\npg_xlog is on it's own file system\n\n \n\nThere are some still obvious problems. 
General DBA functions such as VACUUM and ANALYZE should not be done. Each will run forever and cause much grief. Backups are problematic in the traditional pg_dump and PITR space. Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing it in my test case) are no-no's. A system or database crash could take potentially hours to days to recover. There are likely other issues ahead.\n\n \n\nYou may wonder, \"why is Greg attempting such a thing?\" I looked at DynamoDB, BigTable, and Cassandra. I like Greenplum but, let's face it, it's antiquated and don't get me started on \"Hadoop\". I looked at many others and ultimately the recommended use of each vendor was to have one table for all data. That overcomes the millions of tables problem, right?\n\n \n\nProblem with the \"one big table\" solution is I anticipate 1,200 trillion records. Random access is expected and the customer expects <30ms reads for a single record fetch.\n\n \n\nNo data is loaded... yet Table and index creation only. I am interested in the opinions of all including tests I may perform. If you had this setup, what would you capture / analyze? I have a job running preparing data. I did this on a much smaller scale (50k tables) and data load via function allowed close to 6,000 records/second. The schema has been simplified since and last test reach just over 20,000 records/second with 300k tables.\n\n \n\nI'm not looking for alternatives yet but input to my test. Takers?\n\n \n\nI can't promise immediate feedback but will do my best to respond with results.\n\n \n\nTIA,\n\n-Greg\n\n \n\nI have not seen any mention of transaction ID wraparound mentioned in this thread yet. With the numbers that you are looking at, I could see this as a major issue.\n\n \n\nT\n\n \n\nThank you Terry. You get the gold star. :) I was waiting for that to come up.\n\n \n\nSuccess means handling this condition. A whole database vacuum and dump-restore is out of the question. Can a properly tuned autovacuum prevent the situation?\n\n \n\n-Greg\n\n \n\nHi!\n\nWith millions of tables you have to set autovacuum_max_workers sky-high =). We have some situation when at thousands of tables autovacuum can’t vacuum all tables that need it. Simply it vacuums some of most modified table and never reach others. Only manual vacuum can help with this situation. With wraparound issue it can be a nightmare \n\n \n\n--\n\nAlex Ignatov \nPostgres Professional: <http://www.postgrespro.com> http://www.postgrespro.com \nThe Russian Postgres Company\n\n \n\n\n   From: [email protected] [mailto:[email protected]] On Behalf Of Greg SpiegelbergSent: Tuesday, September 27, 2016 7:28 PMTo: Terry Schmitt <[email protected]>Cc: pgsql-performa. <[email protected]>Subject: Re: [PERFORM] Millions of tables On Tue, Sep 27, 2016 at 10:15 AM, Terry Schmitt <[email protected]> wrote:  On Sun, Sep 25, 2016 at 7:50 PM, Greg Spiegelberg <[email protected]> wrote:Hey all, Obviously everyone who's been in PostgreSQL or almost any RDBMS for a time has said not to have millions of tables.  I too have long believed it until recently. AWS d2.8xlarge instance with 9.5 is my test rig using XFS on EBS (io1) for PGDATA.  Over the weekend, I created 8M tables with 16M indexes on those tables.  Table creation initially took 0.018031 secs, average 0.027467 and after tossing out outliers (qty 5) the maximum creation time found was 0.66139 seconds.  Total time 30 hours, 31 minutes and 8.435049 seconds.  Tables were created by a single process.  
Do note that table creation is done via plpgsql function as there are other housekeeping tasks necessary though minimal. No system tuning but here is a list of PostgreSQL knobs and switches:shared_buffers = 2GBwork_mem = 48 MBmax_stack_depth = 4 MBsynchronous_commit = offeffective_cache_size = 200 GBpg_xlog is on it's own file system There are some still obvious problems.  General DBA functions such as VACUUM and ANALYZE should not be done.  Each will run forever and cause much grief.  Backups are problematic in the traditional pg_dump and PITR space.  Large JOIN's by VIEW, SELECT or via table inheritance (I am abusing it in my test case) are no-no's.  A system or database crash could take potentially hours to days to recover.  There are likely other issues ahead. You may wonder, \"why is Greg attempting such a thing?\"  I looked at DynamoDB, BigTable, and Cassandra.  I like Greenplum but, let's face it, it's antiquated and don't get me started on \"Hadoop\".  I looked at many others and ultimately the recommended use of each vendor was to have one table for all data.  That overcomes the millions of tables problem, right? Problem with the \"one big table\" solution is I anticipate 1,200 trillion records.  Random access is expected and the customer expects <30ms reads for a single record fetch. No data is loaded... yet  Table and index creation only.  I am interested in the opinions of all including tests I may perform.  If you had this setup, what would you capture / analyze?  I have a job running preparing data.  I did this on a much smaller scale (50k tables) and data load via function allowed close to 6,000 records/second.  The schema has been simplified since and last test reach just over 20,000 records/second with 300k tables. I'm not looking for alternatives yet but input to my test.  Takers? I can't promise immediate feedback but will do my best to respond with results. TIA,-Greg I have not seen any mention of transaction ID wraparound mentioned in this thread yet. With the numbers that you are looking at, I could see this as a major issue. T Thank you Terry.  You get the gold star.  :)   I was waiting for that to come up. Success means handling this condition.  A whole database vacuum and dump-restore is out of the question.  Can a properly tuned autovacuum prevent the situation? -Greg Hi!With millions of tables you have to set    autovacuum_max_workers  sky-high =). We have some situation when at thousands of tables autovacuum can’t vacuum all tables that need it. Simply it vacuums some of most modified table and never reach others. Only manual vacuum can help with this situation. With wraparound issue it can be a nightmare  --Alex Ignatov Postgres Professional: http://www.postgrespro.com The Russian Postgres Company", "msg_date": "Thu, 29 Sep 2016 14:11:07 +0300", "msg_from": "\"Alex Ignatov \\(postgrespro\\)\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On 9/29/16 6:11 AM, Alex Ignatov (postgrespro) wrote:\n> With millions of tables you have to set autovacuum_max_workers\n> sky-high =). We have some situation when at thousands of tables\n> autovacuum can’t vacuum all tables that need it. Simply it vacuums some\n> of most modified table and never reach others. Only manual vacuum can\n> help with this situation. 
With wraparound issue it can be a nightmare\n\nSpecifically, autovac isn't going to start worrying about anti-wrap \nvacuums until tables start hitting autovacuum_freeze_max_age (or \nautovacuum_multixact_freeze_max_age). Any tables that hit that threshold \ngo to the front of the line for being vacuumed. (But keep in mind that \nthere is no universal line, just what each worker computes on it's own \nwhen it's started).\n\nWhere things will completely fall apart for you is if a lot of tables \nall have roughly the same relfrozenxid (or relminmxid), like they would \nimmediately after a large load. In that scenario you'll suddenly have \nloads of work for autovac to do, all at the same time. That will make \nthe database, DBAs and you Very Unhappy (tm).\n\nSomehow, some way, you *must* do a vacuum of the entire database. \nLuckily the freeze map in 9.6 means you'd only have to do that one time \n(assuming the data really is static). In any older version, (auto)vacuum \nwill need to eventually *read everything in every table* at least once \nevery ~2B transactions.\n\nThe only impact the number of tables is going to have on this is \ngranularity. If you have a small number of large tables, you'll have \n(auto)vacuum processes that will need to run *uninterrupted* for a long \ntime to move the freeze threshold on each table. If you have tons of \nsmall tables, you'll need tons of separate (auto)vacuums, but each one \nwill run for a shorter interval, and if one gets interrupted it won't be \nas big a deal.\n\nThere is one potentially significant difference between autovac and \nmanual vacuums here; autovac treats toast tables as just another table, \nwith their own stats and their own freeze needs. If you're generating a \nlot of toast records that might make a difference.\n\nWhen it comes to vacuum, you might find \nhttps://www.pgcon.org/2015/schedule/events/829.en.html useful.\n\nOn a different topic... I didn't see anything in the thread about what \nyou're storing, but with the row counts you're talking about I'm \nguessing it's something that's time-series. \nhttps://github.com/ElephantStack/ElephantStack is a project exploring \nthe idea of using Postgres array types as a far more efficient way to \nstore that kind of data; instead of an overhead of 24 bytes per row \n(plus indexes) arrays give you essentially zero overhead per row. \nThere's no code yet, but a few of us have done testing on some real \nworld data (see the google group referenced from the README).\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Sep 2016 17:49:02 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Thu, Sep 29, 2016 at 4:11 AM, Alex Ignatov (postgrespro) <\[email protected]> wrote:\n\n>\n>\n> *From:* [email protected] [mailto:pgsql-performance-\n> [email protected]] *On Behalf Of *\n>\n> Thank you Terry. You get the gold star. :) I was waiting for that to\n> come up.\n>\n>\n>\n> Success means handling this condition. A whole database vacuum and\n> dump-restore is out of the question. 
Can a properly tuned autovacuum\n> prevent the situation?\n>\n>\n>\n> -Greg\n>\n>\n>\n> Hi!\n>\n> With millions of tables you have to set    autovacuum_max_workers\n> sky-high =). We have some situation when at thousands of tables autovacuum\n> can’t vacuum all tables that need it. Simply it vacuums some of most\n> modified table and never reach others.\n>\n\nAny autovacuum worker should vacuum all tables in its assigned database\nwhich it perceives need vacuuming, as long as it can get the lock.  Unless\nthe worker is interrupted, for example by frequent database shutdowns, it\nshould reach all tables in that database before it exits.  Unless there is\na bug, or you are constantly restarting the database before autovacuum can\nfinish or doing something else to kill them off, what you describe should\nnot happen.\n\nIf it is a bug, we should fix it.  Can you give more details?\n\nThere is a known bug when you have multiple active databases in the same\ncluster.  Once one database reaches the age where anti-wrap around vacuums\nkick in, then all future autovacuum workers are directed to that one\ndatabase, starving all other databases of auto-vacuuming.  But that doesn't\nsound like what you are describing.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 29, 2016 at 4:11 AM, Alex Ignatov (postgrespro) <[email protected]> wrote: From: [email protected] [mailto:[email protected]] On Behalf Of  Thank you Terry.  You get the gold star.  :)   I was waiting for that to come up. Success means handling this condition.  A whole database vacuum and dump-restore is out of the question.  Can a properly tuned autovacuum prevent the situation? -Greg Hi!With millions of tables you have to set    autovacuum_max_workers  sky-high =). We have some situation when at thousands of tables autovacuum can’t vacuum all tables that need it. Simply it vacuums some of most modified table and never reach others.Any autovacuum worker should vacuum all tables in its assigned database which it perceives need vacuuming, as long as it can get the lock.  Unless the worker is interrupted, for example by frequent database shutdowns, it should reach all tables in that database before it exits.  Unless there is a bug, or you are constantly restarting the database before autovacuum can finish or doing something else to kill them off, what you describe should not happen.If it is a bug, we should fix it.  Can you give more details?There is a known bug when you have multiple active databases in the same cluster.  Once one database reaches the age where anti-wrap around vacuums kick in, then all future autovacuum workers are directed to that one database, starving all other databases of auto-vacuuming.  But that doesn't sound like what you are describing.Cheers,Jeff", "msg_date": "Sat, 1 Oct 2016 10:54:30 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On Fri, Sep 30, 2016 at 4:49 PM, Jim Nasby <[email protected]> wrote:\n\n> On 9/29/16 6:11 AM, Alex Ignatov (postgrespro) wrote:\n>\n>> With millions of tables you have to set autovacuum_max_workers\n>> sky-high =). We have some situation when at thousands of tables\n>> autovacuum can’t vacuum all tables that need it. Simply it vacuums some\n>> of most modified table and never reach others. Only manual vacuum can\n>> help with this situation. 
With wraparound issue it can be a nightmare\n>>\n>\n> Specifically, autovac isn't going to start worrying about anti-wrap\n> vacuums until tables start hitting autovacuum_freeze_max_age (or\n> autovacuum_multixact_freeze_max_age). Any tables that hit that threshold\n> go to the front of the line for being vacuumed. (But keep in mind that\n> there is no universal line, just what each worker computes on it's own when\n> it's started).\n>\n> Where things will completely fall apart for you is if a lot of tables all\n> have roughly the same relfrozenxid (or relminmxid), like they would\n> immediately after a large load. In that scenario you'll suddenly have loads\n> of work for autovac to do, all at the same time. That will make the\n> database, DBAs and you Very Unhappy (tm).\n>\n> Somehow, some way, you *must* do a vacuum of the entire database. Luckily\n> the freeze map in 9.6 means you'd only have to do that one time (assuming\n> the data really is static). In any older version, (auto)vacuum will need to\n> eventually *read everything in every table* at least once every ~2B\n> transactions.\n>\n\nData is not static. The 4M tables fall into one of two groups.\n\nGroup A contains 2M tables. INSERT will occur ~100 times/day and maximum\nnumber of records anticipated will be 200k. Periodic DELETE's will occur\nremoving \"old\" records. Age is something the client sets and I have no way\nof saying 1 or 10k records will be removed.\n\nGroup B contains the other 2M tables. Maximum records ~140k and UPSERT\nwill be the only mechanism used to populate and maintain. Periodic\nDELETE's may run on these tables as well removing \"old\" records.\n\nWill a set of tables require vacuum'ing at the same time? Quite possibly\nbut I have no way to say 2 or 200k tables will need it.\n\n\nWhen you say \"must do a vacuum of the entire database\", are you saying the\nentire database must be vacuum'd as a whole per 2B transactions or all\ntables must be vacuum'd eventually at least once? I want to be absolutely\nclear on what you're saying.\n\n\n\n> There is one potentially significant difference between autovac and manual\n> vacuums here; autovac treats toast tables as just another table, with their\n> own stats and their own freeze needs. If you're generating a lot of toast\n> records that might make a difference.\n>\n\nI do not anticipate TOAST entering the picture. No single column or record\n> 8KB or even approaching it. We have a few databases that (ab)use pg_toast\nand I want to avoid those complications.\n\n\n-Greg\n\nOn Fri, Sep 30, 2016 at 4:49 PM, Jim Nasby <[email protected]> wrote:On 9/29/16 6:11 AM, Alex Ignatov (postgrespro) wrote:\n\nWith millions of tables you have to set    autovacuum_max_workers\n sky-high =). We have some situation when at thousands of tables\nautovacuum can’t vacuum all tables that need it. Simply it vacuums some\nof most modified table and never reach others. Only manual vacuum can\nhelp with this situation. With wraparound issue it can be a nightmare\n\n\nSpecifically, autovac isn't going to start worrying about anti-wrap vacuums until tables start hitting autovacuum_freeze_max_age (or autovacuum_multixact_freeze_max_age). Any tables that hit that threshold go to the front of the line for being vacuumed. 
(But keep in mind that there is no universal line, just what each worker computes on it's own when it's started).\n\nWhere things will completely fall apart for you is if a lot of tables all have roughly the same relfrozenxid (or relminmxid), like they would immediately after a large load. In that scenario you'll suddenly have loads of work for autovac to do, all at the same time. That will make the database, DBAs and you Very Unhappy (tm).\n\nSomehow, some way, you *must* do a vacuum of the entire database. Luckily the freeze map in 9.6 means you'd only have to do that one time (assuming the data really is static). In any older version, (auto)vacuum will need to eventually *read everything in every table* at least once every ~2B transactions.Data is not static.  The 4M tables fall into one of two groups.Group A contains 2M tables.  INSERT will occur ~100 times/day and maximum number of records anticipated will be 200k.  Periodic DELETE's will occur removing \"old\" records.  Age is something the client sets and I have no way of saying 1 or 10k records will be removed.Group B contains the other 2M tables.  Maximum records ~140k and UPSERT will be the only mechanism used to populate and maintain.  Periodic DELETE's may run on these tables as well removing \"old\" records.Will a set of tables require vacuum'ing at the same time?  Quite possibly but I have no way to say 2 or 200k tables will need it.When you say \"must do a vacuum of the entire database\", are you saying the entire database must be vacuum'd as a whole per 2B transactions or all tables must be vacuum'd eventually at least once?  I want to be absolutely clear on what you're saying. \nThere is one potentially significant difference between autovac and manual vacuums here; autovac treats toast tables as just another table, with their own stats and their own freeze needs. If you're generating a lot of toast records that might make a difference.I do not anticipate TOAST entering the picture.  No single column or record > 8KB or even approaching it. We have a few databases that (ab)use pg_toast and I want to avoid those complications. -Greg", "msg_date": "Wed, 5 Oct 2016 06:34:02 -0600", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "On 10/5/16 7:34 AM, Greg Spiegelberg wrote:\n> When you say \"must do a vacuum of the entire database\", are you saying\n> the entire database must be vacuum'd as a whole per 2B transactions or\n> all tables must be vacuum'd eventually at least once?\n\nAll tables at least once. Prior to 9.6, that had to happen every 2B \ntransactions. With 9.6 there's a freeze map, so if a page never gets \ndirtied between vacuum freeze runs then it doesn't need to be frozen.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 8 Oct 2016 16:42:01 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" }, { "msg_contents": "Greg, sorry for the resent: I had forgotten to include the list.\n\nOn Wed, Oct 5, 2016 at 2:34 PM, Greg Spiegelberg <[email protected]> wrote:\n\n> Data is not static. 
The 4M tables fall into one of two groups.\n>\n> Group A contains 2M tables. INSERT will occur ~100 times/day and maximum\n> number of records anticipated will be 200k. Periodic DELETE's will occur\n> removing \"old\" records. Age is something the client sets and I have no way\n> of saying 1 or 10k records will be removed.\n\nThe ~100 times / day are per table I assume. Also, I assume DELETES\nwill probably delete batches (because the time criteria catches\nseveral records).\n\n> Group B contains the other 2M tables. Maximum records ~140k and UPSERT will\n> be the only mechanism used to populate and maintain. Periodic DELETE's may\n> run on these tables as well removing \"old\" records.\n\nSo there will be inserts and updates.\n\nEither I missed it or you did not mention the criteria for placing a\nrecord in one of the 4M buckets. Can you shed light on what the\ncriteria are? That would obviously suggest what indexing could be\ndone.\n\nAlso it would be interesting to see results of your tests with btree\non really large tables as Stephen had suggested. I know it is not the\nprimary tests you want to do but I would rather first explore\n\"traditional\" schema before I venture in the unknown of the multi\nmillion dollar, pardon, table schema.\n\nKind regards\n\n\n-- \n[guy, jim, charlie].each {|him| remember.him do |as, often| as.you_can\n- without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 25 Nov 2016 18:14:42 +0100", "msg_from": "Robert Klemme <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Millions of tables" } ]
[ { "msg_contents": "test:\r\ncreate type h3 as (id int,name char(10));\r\n\r\nCREATE or replace FUNCTION proc17() \r\nRETURNS SETOF h3 AS $$ \r\nDECLARE\r\nv_rec h3;\r\nBEGIN \r\n create temp table abc(id int,name varchar) on commit drop;\r\ninsert into abc select 1,'lw';\r\ninsert into abc select 2,'lw2';\r\nfor v_rec in\r\nselect * from abc loop\r\nreturn next v_rec;\r\nend loop;\r\nEND; \r\n$$ \r\nLANGUAGE plpgsql;\r\n\r\n\r\nCREATE or replace FUNCTION proc16() \r\nRETURNS SETOF h3 AS $$ \r\nDECLARE\r\n id_array int[];\r\n name_arr varchar[];\r\n v_rec h3;\r\nBEGIN \r\n id_array =array[1,2];\r\n name_arr=array['lw','lw2'];\r\nfor v_rec in\r\nselect unnest(id_array) ,unnest(name_arr) loop\r\nreturn next v_rec; \r\nend loop;\r\nEND; \r\n$$ \r\nLANGUAGE plpgsql;\r\npostgres=# select * from proc17();\r\n id | name \r\n----+------------\r\n 1 | lw \r\n 2 | lw2 \r\n(2 rows)\r\n\r\nTime: 68.372 ms\r\npostgres=# select * from proc16();\r\n id | name \r\n----+------------\r\n 1 | lw \r\n 2 | lw2 \r\n(2 rows)\r\n\r\nTime: 1.357 ms\r\n\r\ntemp talbe result:\r\n[postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c 2 -j 2 -T 10 -f temporary_test_1.sql \r\ntransaction type: Custom query\r\nscaling factor: 1\r\nquery mode: prepared\r\nnumber of clients: 2\r\nnumber of threads: 2\r\nduration: 10 s\r\nnumber of transactions actually processed: 5173\r\nlatency average: 3.866 ms\r\ntps = 517.229191 (including connections establishing)\r\ntps = 517.367956 (excluding connections establishing)\r\nstatement latencies in milliseconds:\r\n3.863798 select * from proc17();\r\n\r\narray result:\r\n[postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c 2 -j 2 -T 10 -f arrary_test_1.sql \r\ntransaction type: Custom query\r\nscaling factor: 1\r\nquery mode: prepared\r\nnumber of clients: 2\r\nnumber of threads: 2\r\nduration: 10 s\r\nnumber of transactions actually processed: 149381\r\nlatency average: 0.134 ms\r\ntps = 14936.875176 (including connections establishing)\r\ntps = 14940.234960 (excluding connections establishing)\r\nstatement latencies in milliseconds:\r\n0.132983 select * from proc16();\r\n\r\nArray is not convenient to use in function, whether there are other methods can be replaced temp table in function\r\n\r\n\r\n\r\n\r\[email protected]\r\n\n\ntest:create type  h3 as (id int,name char(10));CREATE or replace FUNCTION proc17() RETURNS SETOF h3  AS $$ DECLAREv_rec h3;BEGIN     create temp table abc(id int,name varchar) on commit drop; insert into abc select 1,'lw'; insert into abc select 2,'lw2'; for v_rec in select * from abc loop return next v_rec; end loop;END; $$ LANGUAGE plpgsql;CREATE or replace FUNCTION proc16() RETURNS   SETOF h3 AS $$ DECLARE id_array int[]; name_arr varchar[]; v_rec h3;BEGIN     id_array =array[1,2];    name_arr=array['lw','lw2']; for v_rec in select unnest(id_array)  ,unnest(name_arr) loop return next v_rec;                 end loop;END; $$ LANGUAGE plpgsql;postgres=# select * from proc17(); id |    name    ----+------------  1 | lw          2 | lw2       (2 rows)Time: 68.372 mspostgres=# select * from proc16(); id |    name    ----+------------  1 | lw          2 | lw2       (2 rows)Time: 1.357 mstemp talbe result:[postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c 2 -j 2 -T 10 -f temporary_test_1.sql transaction type: Custom queryscaling factor: 1query mode: preparednumber of clients: 2number of threads: 2duration: 10 snumber of transactions actually processed: 5173latency average: 3.866 mstps = 517.229191 (including connections establishing)tps = 517.367956 
(excluding connections establishing)statement latencies in milliseconds: 3.863798 select * from proc17();array result:[postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c 2 -j 2 -T 10 -f arrary_test_1.sql transaction type: Custom queryscaling factor: 1query mode: preparednumber of clients: 2number of threads: 2duration: 10 snumber of transactions actually processed: 149381latency average: 0.134 mstps = 14936.875176 (including connections establishing)tps = 14940.234960 (excluding connections establishing)statement latencies in milliseconds: 0.132983 select * from proc16();Array is not convenient to use in function, whether there are other methods can be replaced temp table in function\n\[email protected]", "msg_date": "Mon, 26 Sep 2016 23:39:11 +0800", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "temporary table vs array performance" }, { "msg_contents": "Its considered bad form to post to multiple lists. Please pick the most\nrelevant one - in this case I'd suggest -general.\n\nOn Mon, Sep 26, 2016 at 8:39 AM, [email protected] <[email protected]> wrote:\n\n>\n> Array is not convenient to use in function, whether\n> there are other methods can be replaced temp table in function\n>\n>\n​I have no difficulty using arrays in functions.\n\nAs for \"other methods\" - you can use CTE (WITH) to create a truly local\ntable - updating the catalogs by using a temp table is indeed quite\nexpensive.\n\nWITH vals AS ( VALUES (1, 'lw'), (2, 'lw2') )\nSELECT * FROM vals;\n\nDavid J.\n\nIts considered bad form to post to multiple lists.  Please pick the most relevant one - in this case I'd suggest -general.On Mon, Sep 26, 2016 at 8:39 AM, [email protected] <[email protected]> wrote:\nArray is not convenient to use in function, whether there are other methods can be replaced temp table in function​I have no difficulty using arrays in functions.As for \"other methods\" - you can use CTE (WITH) to create a truly local table - updating the catalogs by using a temp table is indeed quite expensive.WITH vals AS  ( VALUES (1, 'lw'), (2, 'lw2') ) SELECT * FROM vals;David J.", "msg_date": "Mon, 26 Sep 2016 08:49:42 -0700", "msg_from": "\"David G. 
Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] temporary table vs array performance" }, { "msg_contents": "2016-09-26 17:39 GMT+02:00 [email protected] <[email protected]>:\n\n> test:\n> create type h3 as (id int,name char(10));\n>\n> CREATE or replace FUNCTION proc17()\n> RETURNS SETOF h3 AS $$\n> DECLARE\n> v_rec h3;\n> BEGIN\n> create temp table abc(id int,name varchar) on commit drop;\n> insert into abc select 1,'lw';\n> insert into abc select 2,'lw2';\n> for v_rec in\n> select * from abc loop\n> return next v_rec;\n> end loop;\n> END;\n> $$\n> LANGUAGE plpgsql;\n>\n>\n> CREATE or replace FUNCTION proc16()\n> RETURNS SETOF h3 AS $$\n> DECLARE\n> id_array int[];\n> name_arr varchar[];\n> v_rec h3;\n> BEGIN\n> id_array =array[1,2];\n> name_arr=array['lw','lw2'];\n> for v_rec in\n> select unnest(id_array) ,unnest(name_arr) loop\n> return next v_rec;\n> end loop;\n> END;\n> $$\n> LANGUAGE plpgsql;\n> postgres=# select * from proc17();\n> id | name\n> ----+------------\n> 1 | lw\n> 2 | lw2\n> (2 rows)\n>\n> Time: 68.372 ms\n> postgres=# select * from proc16();\n> id | name\n> ----+------------\n> 1 | lw\n> 2 | lw2\n> (2 rows)\n>\n> Time: 1.357 ms\n>\n> temp talbe result:\n> [postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c\n> 2 -j 2 -T 10 -f temporary_test_1.sql\n> transaction type: Custom query\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 2\n> number of threads: 2\n> duration: 10 s\n> number of transactions actually processed: 5173\n> latency average: 3.866 ms\n> tps = 517.229191 (including connections establishing)\n> tps = 517.367956 (excluding connections establishing)\n> statement latencies in milliseconds:\n> 3.863798 select * from proc17();\n>\n> array result:\n> [postgres@pg95 test_sql]$ pgbench -M prepared -n -r -c\n> 2 -j 2 -T 10 -f arrary_test_1.sql\n> transaction type: Custom query\n> scaling factor: 1\n> query mode: prepared\n> number of clients: 2\n> number of threads: 2\n> duration: 10 s\n> number of transactions actually processed: 149381\n> latency average: 0.134 ms\n> tps = 14936.875176 (including connections establishing)\n> tps = 14940.234960 (excluding connections establishing)\n> statement latencies in milliseconds:\n> 0.132983 select * from proc16();\n>\n> Array is not convenient to use in function, whether\n> there are other methods can be replaced temp table in function\n>\n>\nTemporary tables are pretty expensive - from more reasons, and horrible\nwhen you use fresh table for two rows only. More if you recreate it every\ntransaction.\n\nMore often pattern is create first and delete repeatedly. Better don't use\ntemp tables when it is necessary. It is one reason why PostgreSQL supports\na arrays. 
Partially - PostgreSQL arrays are an analogy to T-SQL memory tables.\n\nRegards\n\nPavel\n\n\n\n\n>\n> ------------------------------\n> [email protected]\n>\n", "msg_date": "Mon, 26 Sep 2016 18:16:31 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] temporary table vs array performance" }, { "msg_contents": "On Mon, Sep 26, 2016 at 9:18 AM, 邓彪 <[email protected]> wrote:\n\n> we have to do dml in temp table,the CTE is not fit\n>\n>\nMoving this to -general only...\n\nPlease direct all replies to the list.\n\nYou are asking for help but not providing any context for what your\nrequirements are.  You are not likely to get good help.\n\nBest case, supply a working function (self contained test case) that does\nexactly what you need it to do but uses a temporary table and performs\nbadly. 
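Something of this shape would do (a bare-bones skeleton; every name in it\nis made up for illustration):\n\nCREATE OR REPLACE FUNCTION repro()\nRETURNS SETOF h3 AS $$\nBEGIN\n  CREATE TEMP TABLE t(id int, name varchar) ON COMMIT DROP;\n  INSERT INTO t VALUES (1, 'lw'), (2, 'lw2');\n  UPDATE t SET name = 'lw3' WHERE id = 2;  -- whatever DML you actually need\n  RETURN QUERY SELECT id, name::char(10) FROM t;\nEND;\n$$ LANGUAGE plpgsql;\n\n-- timed with:  pgbench -n -f script.sql   where script.sql contains\n-- select * from repro();\n\n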
{ "msg_contents": "On Mon, Sep 26, 2016 at 9:18 AM, 邓彪 <[email protected]> wrote:\n\n> we have to do dml in temp table, the CTE is not fit\n>\n>\nMoving this to -general only...\n\nPlease direct all replies to the list.\n\nYou are asking for help but not providing any context for what your\nrequirements are. You are not likely to get good help.\n\nBest case, supply a working function (self-contained test case) that does\nexactly what you need it to do but uses a temporary table and performs\nbadly. Lacking that, at least attempt to describe your problem and not just\npoint out that creating temporary tables is expensive.\n\nDavid J.\n", "msg_date": "Mon, 26 Sep 2016 09:23:52 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] temporary table vs array performance" } ]
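One addendum on the claim that \"the CTE is not fit\" for DML: data-modifying CTEs (available since PostgreSQL 9.1) can often stand in for a scratch temp table within a single statement. A hedged sketch against hypothetical tables - work_queue and work_done are not from this thread:\n\nWITH moved AS (\n    -- DML inside a CTE; RETURNING feeds the rows to the next step\n    DELETE FROM work_queue\n    WHERE claimed_by IS NULL\n    RETURNING id, payload\n)\nINSERT INTO work_done (id, payload)\nSELECT id, payload FROM moved;\n\nWhether this fits the original poster's workload cannot be judged from the thread, but it is the usual first alternative to try.\n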
[ { "msg_contents": "I've got a query that takes a surprisingly long time to run, and I'm having\na really rough time trying to figure it out.\n\nBefore I get started, here are the specifics of the situation:\n\nHere is the table that I'm working with (apologies for spammy indices, I've\nbeen throwing shit at the wall)\n\n Table \"public.syncerevent\"\n\n Column | Type | Modifiers\n\n\n--------------+---------+----------------------------------------------------------\n\n id | bigint | not null default\nnextval('syncerevent_id_seq'::regclass)\n\n userid | text |\n\n event | text |\n\n eventid | text |\n\n originatorid | text |\n\n propogatorid | text |\n\n kwargs | text |\n\n conflicted | integer |\n\nIndexes:\n\n \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n\n \"syncereventidindex\" UNIQUE, btree (eventid)\n\n \"anothersyncereventidindex\" btree (userid)\n\n \"anothersyncereventidindexwithascending\" btree (userid, id)\n\n \"asdfasdgasdf\" btree (userid, id DESC)\n\n \"syncereventuseridhashindex\" hash (userid)\n\nTo provide some context, as per the wiki,\nthere are 3,290,600 rows in this table.\nIt gets added to frequently, but never deleted from.\nThe \"kwargs\" column often contains mid-size JSON strings (roughly 30K\ncharacters on average)\nAs of right now, the table has 53 users in it. About 20% of those have a\nnegligible number of events, but the rest of the users have a fairly even\nsmattering.\n\nEXPLAIN (ANALYZE, BUFFERS) says:\n\n\n QUERY PLAN\n\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.43..1218.57 rows=4000 width=615) (actual\ntime=3352.390..3403.572 rows=4000 loops=1)\n\n Buffers: shared hit=120244 read=160198\n\n -> Index Scan using syncerevent_pkey on syncerevent\n(cost=0.43..388147.29 rows=1274560 width=615) (actual\ntime=3352.386..3383.100 rows=4000 loops=1)\n\n Index Cond: (id > 12468)\n\n Filter: ((propogatorid <>\n'\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text) AND (conflicted <> 1)\nAND (userid = '57dc984f1c87461c0967e228'::text))\n\n Rows Removed by Filter: 1685801\n\n Buffers: shared hit=120244 read=160198\n\n Planning time: 0.833 ms\n\n Execution time: 3407.633 ms\n\n(9 rows)\n\n\nThe postgres verison is: PostgreSQL 9.5.2 on x86_64-pc-linux-gnu, compiled\nby gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n\n\nThis query has gotten slower over time.\n\nThe postgres server is running on a db.m3.medium RDS instance on Amazon.\n\n(3.75GB of ram)\n\n(~3 GHz processor, single core)\n\nI ran VACUUM, and ANALYZEd this table just prior to running the EXPLAIN\ncommand.\n\nHere are the server settings:\n\n name | current_setting\n | source\n\n\n\n\n application_name | psql\n | client\n\n archive_command |\n/etc/rds/dbbin/pgscripts/rds_wal_archive %p | configuration file\n\n archive_mode | on\n | configuration file\n\n archive_timeout | 5min\n | configuration file\n\n autovacuum_analyze_scale_factor | 0.05\n | configuration file\n\n autovacuum_naptime | 30s\n | configuration file\n\n autovacuum_vacuum_scale_factor | 0.1\n | configuration file\n\n checkpoint_completion_target | 0.9\n | configuration file\n\n client_encoding | UTF8\n | client\n\n effective_cache_size | 1818912kB\n | configuration file\n\n fsync | on\n | configuration file\n\n full_page_writes | on\n | configuration file\n\n hot_standby | off\n | configuration file\n\n listen_addresses | *\n | command line\n\n lo_compat_privileges | off\n | configuration file\n\n 
log_checkpoints | on\n | configuration file\n\n log_directory | /rdsdbdata/log/error\n\nSorry for the formatting, I'm not sure of the best way to format this data\non a mailing list.\n\n\nIf it matters/interests you, here is my underlying confusion:\n\n From some internet sleuthing, I've decided that having a table per user\n(which would totally make this problem a non-issue) isn't a great idea.\nBecause there is a file per table, having a table per user would not scale.\nMy next thought was partial indexes (which would also totally help), but\nsince there is also a file per index, this really doesn't side-step the\nproblem. My rough mental model says: If there exists a way that a\ntable-per-user scheme would make this more efficient, then there should\nalso exist an index that could achieve the same effect (or close enough to\nnot matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could\nutilize at least one of the two indexes on the userId column, but clearly\nI'm not understanding something.\n\nAny help in making this query more efficient would be greatly appreciated,\nand any conceptual insights would be extra awesome.\n\nThanks for reading.\n\n-Jake\n
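A note on the shape of this problem, not part of the original message: an equality on userid plus ORDER BY id LIMIT n is exactly what a two-column btree such as the existing anothersyncereventidindexwithascending (userid, id) is built to serve, since all rows for one userid sit in that index already sorted by id. Whether the planner actually picks it depends on its row and cost estimates, so a first diagnostic is to refresh statistics and re-check the plan. A minimal sketch using names from the thread, untested against this data:\n\nANALYZE syncerevent;\n\nEXPLAIN (ANALYZE, BUFFERS)\nSELECT * FROM syncerevent\nWHERE userid = '57dc984f1c87461c0967e228'\n  AND id > 12468\nORDER BY id\nLIMIT 4000;\n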
", "msg_date": "Tue, 27 Sep 2016 17:02:43 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Unexpected expensive index scan" }, { "msg_contents": "Herp, forgot to include the query:\n\nSELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN\n('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND\nuserId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n\nOn Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]>\nwrote:\n\n> I've got a query that takes a surprisingly long time to run, and I'm\n> having a really rough time trying to figure it out.\n>\n> Before I get started, here are the specifics of the situation:\n>\n> Here is the table that I'm working with (apologies for spammy indices,\n> I've been throwing shit at the wall)\n>\n> Table \"public.syncerevent\"\n>\n> Column | Type | Modifiers\n>\n>\n> --------------+---------+-----------------------------------\n> -----------------------\n>\n> id | bigint | not null default nextval('syncerevent_id_seq'::\n> regclass)\n>\n> userid | text |\n>\n> event | text |\n>\n> eventid | text |\n>\n> originatorid | text |\n>\n> propogatorid | text |\n>\n> kwargs | text |\n>\n> conflicted | integer |\n>\n> Indexes:\n>\n> \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n>\n> \"syncereventidindex\" UNIQUE, btree (eventid)\n>\n> \"anothersyncereventidindex\" btree (userid)\n>\n> \"anothersyncereventidindexwithascending\" btree (userid, id)\n>\n> \"asdfasdgasdf\" btree (userid, id DESC)\n>\n> \"syncereventuseridhashindex\" hash (userid)\n>\n> To provide some context, as per the wiki,\n> there are 3,290,600 rows in this table.\n> It gets added to frequently, but never deleted from.\n> The \"kwargs\" column often contains mid-size JSON strings (roughly 30K\n> characters on average)\n> As of right now, the table has 53 users in it. 
About 20% of those have a\n> negligible number of events, but the rest of the users have a fairly even\n> smattering.\n>\n> EXPLAIN (ANALYZE, BUFFERS) says:\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> --------------------------------------\n>\n> Limit (cost=0.43..1218.57 rows=4000 width=615) (actual\n> time=3352.390..3403.572 rows=4000 loops=1)\n>\n> Buffers: shared hit=120244 read=160198\n>\n> -> Index Scan using syncerevent_pkey on syncerevent\n> (cost=0.43..388147.29 rows=1274560 width=615) (actual\n> time=3352.386..3383.100 rows=4000 loops=1)\n>\n> Index Cond: (id > 12468)\n>\n> Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text)\n> AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))\n>\n> Rows Removed by Filter: 1685801\n>\n> Buffers: shared hit=120244 read=160198\n>\n> Planning time: 0.833 ms\n>\n> Execution time: 3407.633 ms\n>\n> (9 rows)\n>\n>\n> The postgres verison is: PostgreSQL 9.5.2 on x86_64-pc-linux-gnu, compiled\n> by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n>\n>\n> This query has gotten slower over time.\n>\n> The postgres server is running on a db.m3.medium RDS instance on Amazon.\n>\n> (3.75GB of ram)\n>\n> (~3 GHz processor, single core)\n>\n> I ran VACUUM, and ANALYZEd this table just prior to running the EXPLAIN\n> command.\n>\n> Here are the server settings:\n>\n> name | current_setting\n> | source\n>\n>\n>\n>\n> application_name | psql\n> | client\n>\n> archive_command | /etc/rds/dbbin/pgscripts/rds_wal_archive\n> %p | configuration file\n>\n> archive_mode | on\n> | configuration file\n>\n> archive_timeout | 5min\n> | configuration file\n>\n> autovacuum_analyze_scale_factor | 0.05\n> | configuration file\n>\n> autovacuum_naptime | 30s\n> | configuration file\n>\n> autovacuum_vacuum_scale_factor | 0.1\n> | configuration file\n>\n> checkpoint_completion_target | 0.9\n> | configuration file\n>\n> client_encoding | UTF8\n> | client\n>\n> effective_cache_size | 1818912kB\n> | configuration file\n>\n> fsync | on\n> | configuration file\n>\n> full_page_writes | on\n> | configuration file\n>\n> hot_standby | off\n> | configuration file\n>\n> listen_addresses | *\n> | command line\n>\n> lo_compat_privileges | off\n> | configuration file\n>\n> log_checkpoints | on\n> | configuration file\n>\n> log_directory | /rdsdbdata/log/error\n>\n> Sorry for the formatting, I'm not sure of the best way to format this data\n> on a mailing list.\n>\n>\n> If it matters/interests you, here is my underlying confusion:\n>\n> From some internet sleuthing, I've decided that having a table per user\n> (which would totally make this problem a non-issue) isn't a great idea.\n> Because there is a file per table, having a table per user would not scale.\n> My next thought was partial indexes (which would also totally help), but\n> since there is also a table per index, this really doesn't side-step the\n> problem. My rough mental model says: If there exists a way that a\n> table-per-user scheme would make this more efficient, then there should\n> also exist an index that could achieve the same effect (or close enough to\n> not matter). 
I would think that \"userid = '57dc984f1c87461c0967e228'\" could\n> utilize at least one of the two indexes on the userId column, but clearly\n> I'm not understanding something.\n>\n> Any help in making this query more efficient would be greatly appreciated,\n> and any conceptual insights would be extra awesome.\n>\n> Thanks for reading.\n>\n> -Jake\n>\n\nHerp, forgot to include the query:\nSELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;^On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]> wrote:I've got a query that takes a surprisingly long time to run, and I'm having a really rough time trying to figure it out.Before I get started, here are the specifics of the situation:Here is the table that I'm working with (apologies for spammy indices, I've been throwing shit at the wall)\n                            Table \"public.syncerevent\"\n    Column    |  Type   |                        Modifiers                         \n--------------+---------+----------------------------------------------------------\n id           | bigint  | not null default nextval('syncerevent_id_seq'::regclass)\n userid       | text    | \n event        | text    | \n eventid      | text    | \n originatorid | text    | \n propogatorid | text    | \n kwargs       | text    | \n conflicted   | integer | \nIndexes:\n    \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n    \"syncereventidindex\" UNIQUE, btree (eventid)\n    \"anothersyncereventidindex\" btree (userid)\n    \"anothersyncereventidindexwithascending\" btree (userid, id)\n    \"asdfasdgasdf\" btree (userid, id DESC)\n    \"syncereventuseridhashindex\" hash (userid)To provide some context, as per the wiki, there are 3,290,600 rows in this table. It gets added to frequently, but never deleted from. The \"kwargs\" column often contains mid-size JSON strings (roughly 30K characters on average)As of right now, the table has 53 users in it. 
", "msg_date": "Tue, 27 Sep 2016 17:21:40 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "From: Jake Nielsen Sent: Tuesday, September 27, 2016 5:22 PM\n\n\nthe query\n\nSELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n\n \n\nOn Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected] <mailto:[email protected]> > wrote:\n\nI've got a query that takes a surprisingly long time to run, and I'm having a really rough time trying to figure it out.\n\n \n\nBefore I get started, here are the specifics of the situation:\n\n \n\nHere is the table that I'm working with (apologies for spammy indices, I've been throwing shit at the wall)\n\n Table \"public.syncerevent\"\n\n Column | Type | Modifiers \n\n--------------+---------+----------------------------------------------------------\n\n id | bigint | not null default nextval('syncerevent_id_seq'::regclass)\n\n userid | text | \n\n event | text | \n\n eventid | text | \n\n originatorid | text | \n\n propogatorid | text | \n\n kwargs | text | \n\n conflicted | integer | \n\nIndexes:\n\n \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n\n \"syncereventidindex\" UNIQUE, btree (eventid)\n\n \"anothersyncereventidindex\" btree (userid)\n\n \"anothersyncereventidindexwithascending\" btree (userid, id)\n\n \"asdfasdgasdf\" btree (userid, id DESC)\n\n \"syncereventuseridhashindex\" hash (userid)\n\n \n\nTo provide some context, as per the wiki, \n\nthere are 3,290,600 rows in this table. \n\nIt gets added to frequently, but never deleted from. \n\nThe \"kwargs\" column often contains mid-size JSON strings (roughly 30K characters on average)\n\nAs of right now, the table has 53 users in it. 
About 20% of those have a negligible number of events, but the rest of the users have a fairly even smattering.\n\n \n\nEXPLAIN (ANALYZE, BUFFERS) says:\n\n QUERY PLAN \n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.43..1218.57 rows=4000 width=615) (actual time=3352.390..3403.572 rows=4000 loops=1) Buffers: shared hit=120244 read=160198\n\n -> Index Scan using syncerevent_pkey on syncerevent (cost=0.43..388147.29 rows=1274560 width=615) (actual time=3352.386..3383.100 rows=4000 loops=1)\n\n Index Cond: (id > 12468)\n\n Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text) AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))\n\n Rows Removed by Filter: 1685801\n\n Buffers: shared hit=120244 read=160198\n\n Planning time: 0.833 ms\n\n Execution time: 3407.633 ms\n\n(9 rows)\n\nIf it matters/interests you, here is my underlying confusion:\n\n From some internet sleuthing, I've decided that having a table per user (which would totally make this problem a non-issue) isn't a great idea. Because there is a file per table, having a table per user would not scale. My next thought was partial indexes (which would also totally help), but since there is also a file per index, this really doesn't side-step the problem. My rough mental model says: If there exists a way that a table-per-user scheme would make this more efficient, then there should also exist an index that could achieve the same effect (or close enough to not matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could utilize at least one of the two indexes on the userId column, but clearly I'm not understanding something.\n\nAny help in making this query more efficient would be greatly appreciated, and any conceptual insights would be extra awesome.\n\nThanks for reading.\n\n-Jake\n\n----------------------\n\n \n\nThis stands out: WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')\n\nAs does this from the analyze: Rows Removed by Filter: 1685801\n\n \n\nThe propogatorid is practically the only column NOT indexed and it’s used in a “not in”. It looks like it’s having to do a table scan for all the rows above the id cutoff to see if any meet the filter requirement. “not in” can be very expensive. An index might help on this column. Have you tried that?\n\n \n\nYour rowcounts aren’t high enough to require partitioning or any other changes to your table that I can see right now.\n\n \n\nMike Sofen (Synthetic Genomics)\n\n \n\n
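For completeness, the index Mike suggests would look as below, using names from the thread (the index name is illustrative, not from the exchange). One caveat before building it: the predicate is effectively propogatorid <> '...', a one-element NOT IN, and a plain btree generally cannot service a <> condition directly, so this index may well leave the plan unchanged - comparing EXPLAIN (ANALYZE, BUFFERS) before and after is the only reliable test.\n\nCREATE INDEX syncerevent_propogatorid_idx\n    ON syncerevent (propogatorid);\n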
I would think that \"userid = '57dc984f1c87461c0967e228'\" could utilize at least one of the two indexes on the userId column, but clearly I'm not understanding something.Any help in making this query more efficient would be greatly appreciated, and any conceptual insights would be extra awesome.Thanks for reading.-Jake---------------------- This stands out:  WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')As does this from the analyze:  Rows Removed by Filter: 1685801 The propogaterid is practically the only column NOT indexed and it’s used in a “not in”.  It looks like it’s having to do a table scan for all the rows above the id cutoff to see if any meet the filter requirement.  “not in” can be very expensive.  An index might help on this column.  Have you tried that? Your rowcounts aren’t high enough to require partitioning or any other changes to your table that I can see right now. Mike Sofen  (Synthetic Genomics)", "msg_date": "Tue, 27 Sep 2016 17:41:55 -0700", "msg_from": "\"Mike Sofen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "On Tue, Sep 27, 2016 at 5:41 PM, Mike Sofen <[email protected]> wrote:\n\n> *From:* Jake Nielsen *Sent:* Tuesday, September 27, 2016 5:22 PM\n>\n>\n> the query\n>\n> SELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN\n> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND\n> userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;^\n>\n>\n>\n> On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]>\n> wrote:\n>\n> I've got a query that takes a surprisingly long time to run, and I'm\n> having a really rough time trying to figure it out.\n>\n>\n>\n> Before I get started, here are the specifics of the situation:\n>\n>\n>\n> Here is the table that I'm working with (apologies for spammy indices,\n> I've been throwing shit at the wall)\n>\n> Table \"public.syncerevent\"\n>\n> Column | Type | Modifiers\n>\n>\n> --------------+---------+-----------------------------------\n> -----------------------\n>\n> id | bigint | not null default nextval('syncerevent_id_seq'::\n> regclass)\n>\n> userid | text |\n>\n> event | text |\n>\n> eventid | text |\n>\n> originatorid | text |\n>\n> propogatorid | text |\n>\n> kwargs | text |\n>\n> conflicted | integer |\n>\n> Indexes:\n>\n> \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n>\n> \"syncereventidindex\" UNIQUE, btree (eventid)\n>\n> \"anothersyncereventidindex\" btree (userid)\n>\n> \"anothersyncereventidindexwithascending\" btree (userid, id)\n>\n> \"asdfasdgasdf\" btree (userid, id DESC)\n>\n> \"syncereventuseridhashindex\" hash (userid)\n>\n>\n>\n> To provide some context, as per the wiki,\n>\n> there are 3,290,600 rows in this table.\n>\n> It gets added to frequently, but never deleted from.\n>\n> The \"kwargs\" column often contains mid-size JSON strings (roughly 30K\n> characters on average)\n>\n> As of right now, the table has 53 users in it. 
About 20% of those have a\n> negligible number of events, but the rest of the users have a fairly even\n> smattering.\n>\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS) says:\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> --------------------------------------\n>\n> Limit (cost=0.43..1218.57 rows=4000 width=615) (actual\n> time=3352.390..3403.572 rows=4000 loops=1) Buffers: shared hit=120244\n> read=160198\n>\n> -> Index Scan using syncerevent_pkey on syncerevent\n> (cost=0.43..388147.29 rows=1274560 width=615) (actual\n> time=3352.386..3383.100 rows=4000 loops=1)\n>\n> Index Cond: (id > 12468)\n>\n> Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text)\n> AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))\n>\n> Rows Removed by Filter: 1685801\n>\n> Buffers: shared hit=120244 read=160198\n>\n> Planning time: 0.833 ms\n>\n> Execution time: 3407.633 ms\n>\n> (9 rows)\n>\n> If it matters/interests you, here is my underlying confusion:\n>\n> From some internet sleuthing, I've decided that having a table per user\n> (which would totally make this problem a non-issue) isn't a great idea.\n> Because there is a file per table, having a table per user would not scale.\n> My next thought was partial indexes (which would also totally help), but\n> since there is also a table per index, this really doesn't side-step the\n> problem. My rough mental model says: If there exists a way that a\n> table-per-user scheme would make this more efficient, then there should\n> also exist an index that could achieve the same effect (or close enough to\n> not matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could\n> utilize at least one of the two indexes on the userId column, but clearly\n> I'm not understanding something.\n>\n> Any help in making this query more efficient would be greatly appreciated,\n> and any conceptual insights would be extra awesome.\n>\n> Thanks for reading.\n>\n> -Jake\n>\n> ----------------------\n>\n>\n>\n> This stands out: WHERE ID > 12468 AND propogatorId NOT IN\n> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')\n>\n> As does this from the analyze: Rows Removed by Filter: 1685801\n>\n>\n>\n> The propogaterid is practically the only column NOT indexed and it’s used\n> in a “not in”. It looks like it’s having to do a table scan for all the\n> rows above the id cutoff to see if any meet the filter requirement. “not\n> in” can be very expensive. An index might help on this column. Have you\n> tried that?\n>\n>\n>\n> Your rowcounts aren’t high enough to require partitioning or any other\n> changes to your table that I can see right now.\n>\n>\n>\n> Mike Sofen (Synthetic Genomics)\n>\n>\n>\n\nThanks Mike, that's true, I hadn't thought of non-indexed columns forcing a\nscan. 
Unfortunately, just to test this out, I tried pulling out the more\nsuspect parts of the query, and it still seems to want to do an index scan:\n\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId =\n'57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n\n\nQUERY PLAN\n\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.43..1140.62 rows=4000 width=615) (actual\ntime=2706.365..2732.308 rows=4000 loops=1)\n\n Buffers: shared hit=120239 read=161924\n\n -> Index Scan using syncerevent_pkey on syncerevent\n(cost=0.43..364982.77 rows=1280431 width=615) (actual\ntime=2706.360..2715.514 rows=4000 loops=1)\n\n Filter: (userid = '57dc984f1c87461c0967e228'::text)\n\n Rows Removed by Filter: 1698269\n\n Buffers: shared hit=120239 read=161924\n\n Planning time: 0.131 ms\n\n Execution time: 2748.526 ms\n\n(8 rows)\n\nIt definitely looks to me like it's starting at the ID = 12468 row, and\njust grinding up the rows. The filter is (unsurprisingly) false for most of\nthe rows, so it ends up having to chew up half the table before it actually\nfinds 4000 rows that match.\n\nAfter creating a partial index using that userId, things go way faster.\nThis is more-or-less what I assumed I'd get by having that\nmulti-column index of (userId, Id), but alas:\n\nremoteSyncerLogistics=> CREATE INDEX sillyIndex ON syncerevent (ID) where\nuserId = '57dc984f1c87461c0967e228';\n\nCREATE INDEX\n\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\nSyncerEvent WHERe userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT\n4000;\n\n QUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.43..443.21 rows=4000 width=615) (actual time=0.074..13.349\nrows=4000 loops=1)\n\n Buffers: shared hit=842 read=13\n\n -> Index Scan using sillyindex on syncerevent (cost=0.43..141748.41\nrows=1280506 width=615) (actual time=0.071..5.372 rows=4000 loops=1)\n\n Buffers: shared hit=842 read=13\n\n Planning time: 0.245 ms\n\n Execution time: 25.404 ms\n\n(6 rows)\n\n\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\nSyncerEvent WHERe userId = '57dc984f1c87461c0967e228' AND ID > 12468 ORDER\nBY ID LIMIT 4000;\n\n QUERY\nPLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.43..453.34 rows=4000 width=615) (actual time=0.023..13.244\nrows=4000 loops=1)\n\n Buffers: shared hit=855\n\n -> Index Scan using sillyindex on syncerevent (cost=0.43..144420.43\nrows=1275492 width=615) (actual time=0.020..5.392 rows=4000 loops=1)\n\n Index Cond: (id > 12468)\n\n Buffers: shared hit=855\n\n Planning time: 0.253 ms\n\n Execution time: 29.371 ms\n\n(7 rows)\n\n\nAny thoughts?\n\n-Jake\n
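A diagnostic worth running at this point, offered as a suggestion rather than something from the original exchange: the planner's row estimate for this userid - about 1.28 million rows, over a third of the table - is what makes walking the primary key look cheap, because it expects a matching row every few tuples. The per-column statistics behind that estimate can be inspected directly:\n\nSELECT attname, n_distinct, correlation\nFROM pg_stats\nWHERE tablename = 'syncerevent'\n  AND attname IN ('userid', 'propogatorid');\n\nIf the estimate is roughly right, the slowness is less about statistics and more about where this user's rows physically sit relative to the low end of the id range.\n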
I would think that \"userid = '57dc984f1c87461c0967e228'\" could utilize at least one of the two indexes on the userId column, but clearly I'm not understanding something.Any help in making this query more efficient would be greatly appreciated, and any conceptual insights would be extra awesome.Thanks for reading.-Jake---------------------- This stands out:  WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')As does this from the analyze:  Rows Removed by Filter: 1685801 The propogaterid is practically the only column NOT indexed and it’s used in a “not in”.  It looks like it’s having to do a table scan for all the rows above the id cutoff to see if any meet the filter requirement.  “not in” can be very expensive.  An index might help on this column.  Have you tried that? Your rowcounts aren’t high enough to require partitioning or any other changes to your table that I can see right now. Mike Sofen  (Synthetic Genomics) Thanks Mike, that's true, I hadn't thought of non-indexed columns forcing a scan. Unfortunately, just to test this out, I tried pulling out the more suspect parts of the query, and it still seems to want to do an index scan:EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;                                                                        QUERY PLAN                                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.43..1140.62 rows=4000 width=615) (actual time=2706.365..2732.308 rows=4000 loops=1)   Buffers: shared hit=120239 read=161924   ->  Index Scan using syncerevent_pkey on syncerevent  (cost=0.43..364982.77 rows=1280431 width=615) (actual time=2706.360..2715.514 rows=4000 loops=1)         Filter: (userid = '57dc984f1c87461c0967e228'::text)         Rows Removed by Filter: 1698269         Buffers: shared hit=120239 read=161924 Planning time: 0.131 ms Execution time: 2748.526 ms(8 rows)It definitely looks to me like it's starting at the ID = 12468 row, and just grinding up the rows. The filter is (unsurprisingly) false for most of the rows, so it ends up having to chew up half the table before it actually finds 4000 rows that match.After creating a partial index using that userId, things go way faster. 
", "msg_date": "Tue, 27 Sep 2016 18:03:00 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "On Tue, Sep 27, 2016 at 6:03 PM, Jake Nielsen <[email protected]>\nwrote:\n\n>\n> On Tue, Sep 27, 2016 at 5:41 PM, Mike Sofen <[email protected]> wrote:\n>\n>> *From:* Jake Nielsen *Sent:* Tuesday, September 27, 2016 5:22 PM\n>>\n>>\n>> the query\n>>\n>> SELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN\n>> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND\n>> userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n>>\n>>\n>>\n>> On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]>\n>> wrote:\n>>\n>> I've got a query that takes a surprisingly long time to run, and I'm\n>> having a really rough time trying to figure it out.\n>>\n>>\n>>\n>> Before I get started, here are the specifics of the situation:\n>>\n>>\n>>\n>> Here is the table that I'm working with (apologies for spammy indices,\n>> I've been throwing shit at the wall)\n>>\n>> Table \"public.syncerevent\"\n>>\n>> Column | Type | Modifiers\n>>\n>>\n>> --------------+---------+-----------------------------------\n>> -----------------------\n>>\n>> id | bigint | not null default nextval('syncerevent_id_seq'::\n>> regclass)\n>>\n>> userid | text |\n>>\n>> event | text |\n>>\n>> eventid | text |\n>>\n>> originatorid | text |\n>>\n>> propogatorid | text |\n>>\n>> kwargs | text |\n>>\n>> conflicted | integer |\n>>\n>> Indexes:\n>>\n>> \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n>>\n>> \"syncereventidindex\" 
UNIQUE, btree (eventid)\n>>\n>> \"anothersyncereventidindex\" btree (userid)\n>>\n>> \"anothersyncereventidindexwithascending\" btree (userid, id)\n>>\n>> \"asdfasdgasdf\" btree (userid, id DESC)\n>>\n>> \"syncereventuseridhashindex\" hash (userid)\n>>\n>>\n>>\n>> To provide some context, as per the wiki,\n>>\n>> there are 3,290,600 rows in this table.\n>>\n>> It gets added to frequently, but never deleted from.\n>>\n>> The \"kwargs\" column often contains mid-size JSON strings (roughly 30K\n>> characters on average)\n>>\n>> As of right now, the table has 53 users in it. About 20% of those have a\n>> negligible number of events, but the rest of the users have a fairly even\n>> smattering.\n>>\n>>\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) says:\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> --------------------------------------\n>>\n>> Limit (cost=0.43..1218.57 rows=4000 width=615) (actual\n>> time=3352.390..3403.572 rows=4000 loops=1) Buffers: shared hit=120244\n>> read=160198\n>>\n>> -> Index Scan using syncerevent_pkey on syncerevent\n>> (cost=0.43..388147.29 rows=1274560 width=615) (actual\n>> time=3352.386..3383.100 rows=4000 loops=1)\n>>\n>> Index Cond: (id > 12468)\n>>\n>> Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text)\n>> AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))\n>>\n>> Rows Removed by Filter: 1685801\n>>\n>> Buffers: shared hit=120244 read=160198\n>>\n>> Planning time: 0.833 ms\n>>\n>> Execution time: 3407.633 ms\n>>\n>> (9 rows)\n>>\n>> If it matters/interests you, here is my underlying confusion:\n>>\n>> From some internet sleuthing, I've decided that having a table per user\n>> (which would totally make this problem a non-issue) isn't a great idea.\n>> Because there is a file per table, having a table per user would not scale.\n>> My next thought was partial indexes (which would also totally help), but\n>> since there is also a table per index, this really doesn't side-step the\n>> problem. My rough mental model says: If there exists a way that a\n>> table-per-user scheme would make this more efficient, then there should\n>> also exist an index that could achieve the same effect (or close enough to\n>> not matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could\n>> utilize at least one of the two indexes on the userId column, but clearly\n>> I'm not understanding something.\n>>\n>> Any help in making this query more efficient would be greatly\n>> appreciated, and any conceptual insights would be extra awesome.\n>>\n>> Thanks for reading.\n>>\n>> -Jake\n>>\n>> ----------------------\n>>\n>>\n>>\n>> This stands out: WHERE ID > 12468 AND propogatorId NOT IN\n>> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')\n>>\n>> As does this from the analyze: Rows Removed by Filter: 1685801\n>>\n>>\n>>\n>> The propogaterid is practically the only column NOT indexed and it’s used\n>> in a “not in”. It looks like it’s having to do a table scan for all the\n>> rows above the id cutoff to see if any meet the filter requirement. “not\n>> in” can be very expensive. An index might help on this column. Have you\n>> tried that?\n>>\n>>\n>>\n>> Your rowcounts aren’t high enough to require partitioning or any other\n>> changes to your table that I can see right now.\n>>\n>>\n>>\n>> Mike Sofen (Synthetic Genomics)\n>>\n>>\n>>\n>\n> Thanks Mike, that's true, I hadn't thought of non-indexed columns forcing\n> a scan. 
Unfortunately, just to test this out, I tried pulling out the more\n> suspect parts of the query, and it still seems to want to do an index scan:\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId =\n> '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n>\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ----------------------------------\n>\n> Limit (cost=0.43..1140.62 rows=4000 width=615) (actual\n> time=2706.365..2732.308 rows=4000 loops=1)\n>\n> Buffers: shared hit=120239 read=161924\n>\n> -> Index Scan using syncerevent_pkey on syncerevent\n> (cost=0.43..364982.77 rows=1280431 width=615) (actual\n> time=2706.360..2715.514 rows=4000 loops=1)\n>\n> Filter: (userid = '57dc984f1c87461c0967e228'::text)\n>\n> Rows Removed by Filter: 1698269\n>\n> Buffers: shared hit=120239 read=161924\n>\n> Planning time: 0.131 ms\n>\n> Execution time: 2748.526 ms\n>\n> (8 rows)\n>\n> It definitely looks to me like it's starting at the ID = 12468 row, and\n> just grinding up the rows. The filter is (unsurprisingly) false for most of\n> the rows, so it ends up having to chew up half the table before it actually\n> finds 4000 rows that match.\n>\n> After creating a partial index using that userId, things go way faster.\n> This is more-or-less what I assumed I'd get by making having that\n> multi-column index of (userId, Id), but alas:\n>\n> remoteSyncerLogistics=> CREATE INDEX sillyIndex ON syncerevent (ID) where\n> userId = '57dc984f1c87461c0967e228';\n>\n> CREATE INDEX\n>\n> remoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n> SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT\n> 4000;\n>\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ----------------------\n>\n> Limit (cost=0.43..443.21 rows=4000 width=615) (actual time=0.074..13.349\n> rows=4000 loops=1)\n>\n> Buffers: shared hit=842 read=13\n>\n> -> Index Scan using sillyindex on syncerevent (cost=0.43..141748.41\n> rows=1280506 width=615) (actual time=0.071..5.372 rows=4000 loops=1)\n>\n> Buffers: shared hit=842 read=13\n>\n> Planning time: 0.245 ms\n>\n> Execution time: 25.404 ms\n>\n> (6 rows)\n>\n>\n> remoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n> SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' AND ID > 12468 ORDER\n> BY ID LIMIT 4000;\n>\n> QUERY\n> PLAN\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ----------------------\n>\n> Limit (cost=0.43..453.34 rows=4000 width=615) (actual time=0.023..13.244\n> rows=4000 loops=1)\n>\n> Buffers: shared hit=855\n>\n> -> Index Scan using sillyindex on syncerevent (cost=0.43..144420.43\n> rows=1275492 width=615) (actual time=0.020..5.392 rows=4000 loops=1)\n>\n> Index Cond: (id > 12468)\n>\n> Buffers: shared hit=855\n>\n> Planning time: 0.253 ms\n>\n> Execution time: 29.371 ms\n>\n> (7 rows)\n>\n>\n> Any thoughts?\n>\n> -Jake\n>\n\nHmmm, here's another unexpected piece of information:\n\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userid =\n'57dc984f1c87461c0967e228';\n\n QUERY PLAN\n\n\n---------------------------------------------------------------------------------------------------------------------------\n\n Seq Scan on syncerevent (cost=0.00..311251.51 rows=1248302 width=618)\n(actual time=0.008..4970.507 
rows=1259137 loops=1)\n\n Filter: (userid = '57dc984f1c87461c0967e228'::text)\n\n Rows Removed by Filter: 2032685\n\n Buffers: shared hit=108601 read=161523\n\n Planning time: 0.092 ms\n\n Execution time: 7662.845 ms\n\n(6 rows)\n\n\nLooks like even just using userid='blah' doesn't actually result in the\nindex being used, despite the fact that there are indexes on the userId\ncolumn:\n\n\n \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n\n \"syncereventidindex\" UNIQUE, btree (eventid)\n\n \"anothersyncereventidindex\" btree (userid)\n\n \"anothersyncereventidindexwithascending\" btree (userid, id)\n\n \"asdfasdgasdf\" btree (userid, id DESC)\n\n \"syncereventuseridhashindex\" hash (userid)\n\n\n-Jake\n\nOn Tue, Sep 27, 2016 at 6:03 PM, Jake Nielsen <[email protected]> wrote:\nOn Tue, Sep 27, 2016 at 5:41 PM, Mike Sofen <[email protected]> wrote:From: Jake Nielsen    Sent: Tuesday, September 27, 2016 5:22 PMthe querySELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;^ On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]> wrote:I've got a query that takes a surprisingly long time to run, and I'm having a really rough time trying to figure it out. Before I get started, here are the specifics of the situation: Here is the table that I'm working with (apologies for spammy indices, I've been throwing shit at the wall)                            Table \"public.syncerevent\"    Column    |  Type   |                        Modifiers                         --------------+---------+---------------------------------------------------------- id           | bigint  | not null default nextval('syncerevent_id_seq'::regclass) userid       | text    |  event        | text    |  eventid      | text    |  originatorid | text    |  propogatorid | text    |  kwargs       | text    |  conflicted   | integer | Indexes:    \"syncerevent_pkey\" PRIMARY KEY, btree (id)    \"syncereventidindex\" UNIQUE, btree (eventid)    \"anothersyncereventidindex\" btree (userid)    \"anothersyncereventidindexwithascending\" btree (userid, id)    \"asdfasdgasdf\" btree (userid, id DESC)    \"syncereventuseridhashindex\" hash (userid) To provide some context, as per the wiki, there are 3,290,600 rows in this table. It gets added to frequently, but never deleted from. The \"kwargs\" column often contains mid-size JSON strings (roughly 30K characters on average)As of right now, the table has 53 users in it. About 20% of those have a negligible number of events, but the rest of the users have a fairly even smattering. 
EXPLAIN (ANALYZE, BUFFERS) says:                                                                          QUERY PLAN                                                                          -------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.43..1218.57 rows=4000 width=615) (actual time=3352.390..3403.572 rows=4000 loops=1)  Buffers: shared hit=120244 read=160198   ->  Index Scan using syncerevent_pkey on syncerevent  (cost=0.43..388147.29 rows=1274560 width=615) (actual time=3352.386..3383.100 rows=4000 loops=1)         Index Cond: (id > 12468)         Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text) AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))         Rows Removed by Filter: 1685801         Buffers: shared hit=120244 read=160198 Planning time: 0.833 ms Execution time: 3407.633 ms(9 rows)If it matters/interests you, here is my underlying confusion:From some internet sleuthing, I've decided that having a table per user (which would totally make this problem a non-issue) isn't a great idea. Because there is a file per table, having a table per user would not scale. My next thought was partial indexes (which would also totally help), but since there is also a table per index, this really doesn't side-step the problem. My rough mental model says: If there exists a way that a table-per-user scheme would make this more efficient, then there should also exist an index that could achieve the same effect (or close enough to not matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could utilize at least one of the two indexes on the userId column, but clearly I'm not understanding something.Any help in making this query more efficient would be greatly appreciated, and any conceptual insights would be extra awesome.Thanks for reading.-Jake---------------------- This stands out:  WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')As does this from the analyze:  Rows Removed by Filter: 1685801 The propogaterid is practically the only column NOT indexed and it’s used in a “not in”.  It looks like it’s having to do a table scan for all the rows above the id cutoff to see if any meet the filter requirement.  “not in” can be very expensive.  An index might help on this column.  Have you tried that? Your rowcounts aren’t high enough to require partitioning or any other changes to your table that I can see right now. Mike Sofen  (Synthetic Genomics) Thanks Mike, that's true, I hadn't thought of non-indexed columns forcing a scan. 
Unfortunately, just to test this out, I tried pulling out the more suspect parts of the query, and it still seems to want to do an index scan:EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;                                                                        QUERY PLAN                                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.43..1140.62 rows=4000 width=615) (actual time=2706.365..2732.308 rows=4000 loops=1)   Buffers: shared hit=120239 read=161924   ->  Index Scan using syncerevent_pkey on syncerevent  (cost=0.43..364982.77 rows=1280431 width=615) (actual time=2706.360..2715.514 rows=4000 loops=1)         Filter: (userid = '57dc984f1c87461c0967e228'::text)         Rows Removed by Filter: 1698269         Buffers: shared hit=120239 read=161924 Planning time: 0.131 ms Execution time: 2748.526 ms(8 rows)It definitely looks to me like it's starting at the ID = 12468 row, and just grinding up the rows. The filter is (unsurprisingly) false for most of the rows, so it ends up having to chew up half the table before it actually finds 4000 rows that match.After creating a partial index using that userId, things go way faster. This is more-or-less what I assumed I'd get by making having that multi-column index of (userId, Id), but alas:\nremoteSyncerLogistics=> CREATE INDEX sillyIndex ON syncerevent (ID) where userId = '57dc984f1c87461c0967e228';\nCREATE INDEX\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n                                                                  QUERY PLAN                                                                  \n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.43..443.21 rows=4000 width=615) (actual time=0.074..13.349 rows=4000 loops=1)\n   Buffers: shared hit=842 read=13\n   ->  Index Scan using sillyindex on syncerevent  (cost=0.43..141748.41 rows=1280506 width=615) (actual time=0.071..5.372 rows=4000 loops=1)\n         Buffers: shared hit=842 read=13\n Planning time: 0.245 ms\n Execution time: 25.404 ms\n(6 rows)\n\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' AND ID > 12468 ORDER BY ID LIMIT 4000;\n                                                                  QUERY PLAN                                                                  \n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.43..453.34 rows=4000 width=615) (actual time=0.023..13.244 rows=4000 loops=1)\n   Buffers: shared hit=855\n   ->  Index Scan using sillyindex on syncerevent  (cost=0.43..144420.43 rows=1275492 width=615) (actual time=0.020..5.392 rows=4000 loops=1)\n         Index Cond: (id > 12468)\n         Buffers: shared hit=855\n Planning time: 0.253 ms\n Execution time: 29.371 ms\n(7 rows)Any thoughts?-Jake\nHmmm, here's another unexpected piece of information:\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userid = '57dc984f1c87461c0967e228';\n                                                        QUERY PLAN                             
                            \n---------------------------------------------------------------------------------------------------------------------------\n Seq Scan on syncerevent  (cost=0.00..311251.51 rows=1248302 width=618) (actual time=0.008..4970.507 rows=1259137 loops=1)\n   Filter: (userid = '57dc984f1c87461c0967e228'::text)\n   Rows Removed by Filter: 2032685\n   Buffers: shared hit=108601 read=161523\n Planning time: 0.092 ms\n Execution time: 7662.845 ms\n(6 rows)Looks like even just using userid='blah' doesn't actually result in the index being used, despite the fact that there are indexes on the userId column:    \"syncerevent_pkey\" PRIMARY KEY, btree (id)    \"syncereventidindex\" UNIQUE, btree (eventid)    \"anothersyncereventidindex\" btree (userid)    \"anothersyncereventidindexwithascending\" btree (userid, id)    \"asdfasdgasdf\" btree (userid, id DESC)\n    \"syncereventuseridhashindex\" hash (userid)-Jake", "msg_date": "Tue, 27 Sep 2016 18:24:00 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "On Tue, Sep 27, 2016 at 6:24 PM, Jake Nielsen <[email protected]>\nwrote:\n\n>\n>\n> On Tue, Sep 27, 2016 at 6:03 PM, Jake Nielsen <[email protected]>\n> wrote:\n>\n>>\n>> On Tue, Sep 27, 2016 at 5:41 PM, Mike Sofen <[email protected]> wrote:\n>>\n>>> *From:* Jake Nielsen *Sent:* Tuesday, September 27, 2016 5:22 PM\n>>>\n>>>\n>>> the query\n>>>\n>>> SELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN\n>>> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND\n>>> userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;^\n>>>\n>>>\n>>>\n>>> On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]>\n>>> wrote:\n>>>\n>>> I've got a query that takes a surprisingly long time to run, and I'm\n>>> having a really rough time trying to figure it out.\n>>>\n>>>\n>>>\n>>> Before I get started, here are the specifics of the situation:\n>>>\n>>>\n>>>\n>>> Here is the table that I'm working with (apologies for spammy indices,\n>>> I've been throwing shit at the wall)\n>>>\n>>> Table \"public.syncerevent\"\n>>>\n>>> Column | Type | Modifiers\n>>>\n>>>\n>>> --------------+---------+-----------------------------------\n>>> -----------------------\n>>>\n>>> id | bigint | not null default nextval('syncerevent_id_seq'::\n>>> regclass)\n>>>\n>>> userid | text |\n>>>\n>>> event | text |\n>>>\n>>> eventid | text |\n>>>\n>>> originatorid | text |\n>>>\n>>> propogatorid | text |\n>>>\n>>> kwargs | text |\n>>>\n>>> conflicted | integer |\n>>>\n>>> Indexes:\n>>>\n>>> \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n>>>\n>>> \"syncereventidindex\" UNIQUE, btree (eventid)\n>>>\n>>> \"anothersyncereventidindex\" btree (userid)\n>>>\n>>> \"anothersyncereventidindexwithascending\" btree (userid, id)\n>>>\n>>> \"asdfasdgasdf\" btree (userid, id DESC)\n>>>\n>>> \"syncereventuseridhashindex\" hash (userid)\n>>>\n>>>\n>>>\n>>> To provide some context, as per the wiki,\n>>>\n>>> there are 3,290,600 rows in this table.\n>>>\n>>> It gets added to frequently, but never deleted from.\n>>>\n>>> The \"kwargs\" column often contains mid-size JSON strings (roughly 30K\n>>> characters on average)\n>>>\n>>> As of right now, the table has 53 users in it. 
About 20% of those have a\n>>> negligible number of events, but the rest of the users have a fairly even\n>>> smattering.\n>>>\n>>>\n>>>\n>>> EXPLAIN (ANALYZE, BUFFERS) says:\n>>>\n>>>\n>>> QUERY PLAN\n>>>\n>>>\n>>> ------------------------------------------------------------\n>>> ------------------------------------------------------------\n>>> --------------------------------------\n>>>\n>>> Limit (cost=0.43..1218.57 rows=4000 width=615) (actual\n>>> time=3352.390..3403.572 rows=4000 loops=1) Buffers: shared hit=120244\n>>> read=160198\n>>>\n>>> -> Index Scan using syncerevent_pkey on syncerevent\n>>> (cost=0.43..388147.29 rows=1274560 width=615) (actual\n>>> time=3352.386..3383.100 rows=4000 loops=1)\n>>>\n>>> Index Cond: (id > 12468)\n>>>\n>>> Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text)\n>>> AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))\n>>>\n>>> Rows Removed by Filter: 1685801\n>>>\n>>> Buffers: shared hit=120244 read=160198\n>>>\n>>> Planning time: 0.833 ms\n>>>\n>>> Execution time: 3407.633 ms\n>>>\n>>> (9 rows)\n>>>\n>>> If it matters/interests you, here is my underlying confusion:\n>>>\n>>> From some internet sleuthing, I've decided that having a table per user\n>>> (which would totally make this problem a non-issue) isn't a great idea.\n>>> Because there is a file per table, having a table per user would not scale.\n>>> My next thought was partial indexes (which would also totally help), but\n>>> since there is also a table per index, this really doesn't side-step the\n>>> problem. My rough mental model says: If there exists a way that a\n>>> table-per-user scheme would make this more efficient, then there should\n>>> also exist an index that could achieve the same effect (or close enough to\n>>> not matter). I would think that \"userid = '57dc984f1c87461c0967e228'\" could\n>>> utilize at least one of the two indexes on the userId column, but clearly\n>>> I'm not understanding something.\n>>>\n>>> Any help in making this query more efficient would be greatly\n>>> appreciated, and any conceptual insights would be extra awesome.\n>>>\n>>> Thanks for reading.\n>>>\n>>> -Jake\n>>>\n>>> ----------------------\n>>>\n>>>\n>>>\n>>> This stands out: WHERE ID > 12468 AND propogatorId NOT IN\n>>> ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')\n>>>\n>>> As does this from the analyze: Rows Removed by Filter: 1685801\n>>>\n>>>\n>>>\n>>> The propogaterid is practically the only column NOT indexed and it’s\n>>> used in a “not in”. It looks like it’s having to do a table scan for all\n>>> the rows above the id cutoff to see if any meet the filter requirement.\n>>> “not in” can be very expensive. An index might help on this column. Have\n>>> you tried that?\n>>>\n>>>\n>>>\n>>> Your rowcounts aren’t high enough to require partitioning or any other\n>>> changes to your table that I can see right now.\n>>>\n>>>\n>>>\n>>> Mike Sofen (Synthetic Genomics)\n>>>\n>>>\n>>>\n>>\n>> Thanks Mike, that's true, I hadn't thought of non-indexed columns forcing\n>> a scan. 
Unfortunately, just to test this out, I tried pulling out the more\n>> suspect parts of the query, and it still seems to want to do an index scan:\n>>\n>>\n>> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId =\n>> '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> ----------------------------------\n>>\n>> Limit (cost=0.43..1140.62 rows=4000 width=615) (actual\n>> time=2706.365..2732.308 rows=4000 loops=1)\n>>\n>> Buffers: shared hit=120239 read=161924\n>>\n>> -> Index Scan using syncerevent_pkey on syncerevent\n>> (cost=0.43..364982.77 rows=1280431 width=615) (actual\n>> time=2706.360..2715.514 rows=4000 loops=1)\n>>\n>> Filter: (userid = '57dc984f1c87461c0967e228'::text)\n>>\n>> Rows Removed by Filter: 1698269\n>>\n>> Buffers: shared hit=120239 read=161924\n>>\n>> Planning time: 0.131 ms\n>>\n>> Execution time: 2748.526 ms\n>>\n>> (8 rows)\n>>\n>> It definitely looks to me like it's starting at the ID = 12468 row, and\n>> just grinding up the rows. The filter is (unsurprisingly) false for most of\n>> the rows, so it ends up having to chew up half the table before it actually\n>> finds 4000 rows that match.\n>>\n>> After creating a partial index using that userId, things go way faster.\n>> This is more-or-less what I assumed I'd get by making having that\n>> multi-column index of (userId, Id), but alas:\n>>\n>> remoteSyncerLogistics=> CREATE INDEX sillyIndex ON syncerevent (ID) where\n>> userId = '57dc984f1c87461c0967e228';\n>>\n>> CREATE INDEX\n>>\n>> remoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n>> SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT\n>> 4000;\n>>\n>> QUERY\n>> PLAN\n>>\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> ----------------------\n>>\n>> Limit (cost=0.43..443.21 rows=4000 width=615) (actual\n>> time=0.074..13.349 rows=4000 loops=1)\n>>\n>> Buffers: shared hit=842 read=13\n>>\n>> -> Index Scan using sillyindex on syncerevent (cost=0.43..141748.41\n>> rows=1280506 width=615) (actual time=0.071..5.372 rows=4000 loops=1)\n>>\n>> Buffers: shared hit=842 read=13\n>>\n>> Planning time: 0.245 ms\n>>\n>> Execution time: 25.404 ms\n>>\n>> (6 rows)\n>>\n>>\n>> remoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM\n>> SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' AND ID > 12468 ORDER\n>> BY ID LIMIT 4000;\n>>\n>> QUERY\n>> PLAN\n>>\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> ----------------------\n>>\n>> Limit (cost=0.43..453.34 rows=4000 width=615) (actual\n>> time=0.023..13.244 rows=4000 loops=1)\n>>\n>> Buffers: shared hit=855\n>>\n>> -> Index Scan using sillyindex on syncerevent (cost=0.43..144420.43\n>> rows=1275492 width=615) (actual time=0.020..5.392 rows=4000 loops=1)\n>>\n>> Index Cond: (id > 12468)\n>>\n>> Buffers: shared hit=855\n>>\n>> Planning time: 0.253 ms\n>>\n>> Execution time: 29.371 ms\n>>\n>> (7 rows)\n>>\n>>\n>> Any thoughts?\n>>\n>> -Jake\n>>\n>\n> Hmmm, here's another unexpected piece of information:\n>\n>\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userid =\n> '57dc984f1c87461c0967e228';\n>\n> QUERY PLAN\n>\n>\n> ------------------------------------------------------------\n> 
---------------------------------------------------------------\n>\n> Seq Scan on syncerevent (cost=0.00..311251.51 rows=1248302 width=618)\n> (actual time=0.008..4970.507 rows=1259137 loops=1)\n>\n> Filter: (userid = '57dc984f1c87461c0967e228'::text)\n>\n> Rows Removed by Filter: 2032685\n>\n> Buffers: shared hit=108601 read=161523\n>\n> Planning time: 0.092 ms\n>\n> Execution time: 7662.845 ms\n>\n> (6 rows)\n>\n>\n> Looks like even just using userid='blah' doesn't actually result in the\n> index being used, despite the fact that there are indexes on the userId\n> column:\n>\n>\n> \"syncerevent_pkey\" PRIMARY KEY, btree (id)\n>\n> \"syncereventidindex\" UNIQUE, btree (eventid)\n>\n> \"anothersyncereventidindex\" btree (userid)\n>\n> \"anothersyncereventidindexwithascending\" btree (userid, id)\n>\n> \"asdfasdgasdf\" btree (userid, id DESC)\n>\n> \"syncereventuseridhashindex\" hash (userid)\n>\n>\n> -Jake\n>\n\nSo... it seems that setting the userId to one that has less rows in the\ntable results in the index actually being used...\n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId =\n'57d35db7353b0d627c0e592f' AND ID > 12468 ORDER BY ID LIMIT 4000;\n\n\n QUERY PLAN\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n Limit (cost=0.56..8574.30 rows=4000 width=618) (actual time=0.031..13.190\nrows=4000 loops=1)\n\n Buffers: shared hit=867\n\n -> Index Scan using anothersyncereventidindexwithascending on\nsyncerevent (cost=0.56..216680.62 rows=101090 width=618) (actual\ntime=0.027..5.313 rows=4000 loops=1)\n\n Index Cond: ((userid = '57d35db7353b0d627c0e592f'::text) AND (id >\n12468))\n\n Buffers: shared hit=867\n\n Planning time: 0.168 ms\n\n Execution time: 29.331 ms\n\n(7 rows)\n\n\nIs there some way to force the use of one of the indexes on the userId\ncolumn?\n\n\nOn Tue, Sep 27, 2016 at 6:24 PM, Jake Nielsen <[email protected]> wrote:On Tue, Sep 27, 2016 at 6:03 PM, Jake Nielsen <[email protected]> wrote:\nOn Tue, Sep 27, 2016 at 5:41 PM, Mike Sofen <[email protected]> wrote:From: Jake Nielsen    Sent: Tuesday, September 27, 2016 5:22 PMthe querySELECT * FROM SyncerEvent WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"') AND conflicted != 1 AND userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;^ On Tue, Sep 27, 2016 at 5:02 PM, Jake Nielsen <[email protected]> wrote:I've got a query that takes a surprisingly long time to run, and I'm having a really rough time trying to figure it out. 
Before I get started, here are the specifics of the situation: Here is the table that I'm working with (apologies for spammy indices, I've been throwing shit at the wall)                            Table \"public.syncerevent\"    Column    |  Type   |                        Modifiers                         --------------+---------+---------------------------------------------------------- id           | bigint  | not null default nextval('syncerevent_id_seq'::regclass) userid       | text    |  event        | text    |  eventid      | text    |  originatorid | text    |  propogatorid | text    |  kwargs       | text    |  conflicted   | integer | Indexes:    \"syncerevent_pkey\" PRIMARY KEY, btree (id)    \"syncereventidindex\" UNIQUE, btree (eventid)    \"anothersyncereventidindex\" btree (userid)    \"anothersyncereventidindexwithascending\" btree (userid, id)    \"asdfasdgasdf\" btree (userid, id DESC)    \"syncereventuseridhashindex\" hash (userid) To provide some context, as per the wiki, there are 3,290,600 rows in this table. It gets added to frequently, but never deleted from. The \"kwargs\" column often contains mid-size JSON strings (roughly 30K characters on average)As of right now, the table has 53 users in it. About 20% of those have a negligible number of events, but the rest of the users have a fairly even smattering. EXPLAIN (ANALYZE, BUFFERS) says:                                                                          QUERY PLAN                                                                          -------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.43..1218.57 rows=4000 width=615) (actual time=3352.390..3403.572 rows=4000 loops=1)  Buffers: shared hit=120244 read=160198   ->  Index Scan using syncerevent_pkey on syncerevent  (cost=0.43..388147.29 rows=1274560 width=615) (actual time=3352.386..3383.100 rows=4000 loops=1)         Index Cond: (id > 12468)         Filter: ((propogatorid <> '\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"'::text) AND (conflicted <> 1) AND (userid = '57dc984f1c87461c0967e228'::text))         Rows Removed by Filter: 1685801         Buffers: shared hit=120244 read=160198 Planning time: 0.833 ms Execution time: 3407.633 ms(9 rows)If it matters/interests you, here is my underlying confusion:From some internet sleuthing, I've decided that having a table per user (which would totally make this problem a non-issue) isn't a great idea. Because there is a file per table, having a table per user would not scale. My next thought was partial indexes (which would also totally help), but since there is also a table per index, this really doesn't side-step the problem. My rough mental model says: If there exists a way that a table-per-user scheme would make this more efficient, then there should also exist an index that could achieve the same effect (or close enough to not matter). 
I would think that \"userid = '57dc984f1c87461c0967e228'\" could utilize at least one of the two indexes on the userId column, but clearly I'm not understanding something.Any help in making this query more efficient would be greatly appreciated, and any conceptual insights would be extra awesome.Thanks for reading.-Jake---------------------- This stands out:  WHERE ID > 12468 AND propogatorId NOT IN ('\"d8130ab9!-66d0!-4f13!-acec!-a9556362f0ad\"')As does this from the analyze:  Rows Removed by Filter: 1685801 The propogaterid is practically the only column NOT indexed and it’s used in a “not in”.  It looks like it’s having to do a table scan for all the rows above the id cutoff to see if any meet the filter requirement.  “not in” can be very expensive.  An index might help on this column.  Have you tried that? Your rowcounts aren’t high enough to require partitioning or any other changes to your table that I can see right now. Mike Sofen  (Synthetic Genomics) Thanks Mike, that's true, I hadn't thought of non-indexed columns forcing a scan. Unfortunately, just to test this out, I tried pulling out the more suspect parts of the query, and it still seems to want to do an index scan:EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;                                                                        QUERY PLAN                                                                        ---------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.43..1140.62 rows=4000 width=615) (actual time=2706.365..2732.308 rows=4000 loops=1)   Buffers: shared hit=120239 read=161924   ->  Index Scan using syncerevent_pkey on syncerevent  (cost=0.43..364982.77 rows=1280431 width=615) (actual time=2706.360..2715.514 rows=4000 loops=1)         Filter: (userid = '57dc984f1c87461c0967e228'::text)         Rows Removed by Filter: 1698269         Buffers: shared hit=120239 read=161924 Planning time: 0.131 ms Execution time: 2748.526 ms(8 rows)It definitely looks to me like it's starting at the ID = 12468 row, and just grinding up the rows. The filter is (unsurprisingly) false for most of the rows, so it ends up having to chew up half the table before it actually finds 4000 rows that match.After creating a partial index using that userId, things go way faster. 
This is more-or-less what I assumed I'd get by making having that multi-column index of (userId, Id), but alas:\nremoteSyncerLogistics=> CREATE INDEX sillyIndex ON syncerevent (ID) where userId = '57dc984f1c87461c0967e228';\nCREATE INDEX\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' ORDER BY ID LIMIT 4000;\n                                                                  QUERY PLAN                                                                  \n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.43..443.21 rows=4000 width=615) (actual time=0.074..13.349 rows=4000 loops=1)\n   Buffers: shared hit=842 read=13\n   ->  Index Scan using sillyindex on syncerevent  (cost=0.43..141748.41 rows=1280506 width=615) (actual time=0.071..5.372 rows=4000 loops=1)\n         Buffers: shared hit=842 read=13\n Planning time: 0.245 ms\n Execution time: 25.404 ms\n(6 rows)\n\nremoteSyncerLogistics=> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId = '57dc984f1c87461c0967e228' AND ID > 12468 ORDER BY ID LIMIT 4000;\n                                                                  QUERY PLAN                                                                  \n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.43..453.34 rows=4000 width=615) (actual time=0.023..13.244 rows=4000 loops=1)\n   Buffers: shared hit=855\n   ->  Index Scan using sillyindex on syncerevent  (cost=0.43..144420.43 rows=1275492 width=615) (actual time=0.020..5.392 rows=4000 loops=1)\n         Index Cond: (id > 12468)\n         Buffers: shared hit=855\n Planning time: 0.253 ms\n Execution time: 29.371 ms\n(7 rows)Any thoughts?-Jake\nHmmm, here's another unexpected piece of information:\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERE userid = '57dc984f1c87461c0967e228';\n                                                        QUERY PLAN                                                         \n---------------------------------------------------------------------------------------------------------------------------\n Seq Scan on syncerevent  (cost=0.00..311251.51 rows=1248302 width=618) (actual time=0.008..4970.507 rows=1259137 loops=1)\n   Filter: (userid = '57dc984f1c87461c0967e228'::text)\n   Rows Removed by Filter: 2032685\n   Buffers: shared hit=108601 read=161523\n Planning time: 0.092 ms\n Execution time: 7662.845 ms\n(6 rows)Looks like even just using userid='blah' doesn't actually result in the index being used, despite the fact that there are indexes on the userId column:    \"syncerevent_pkey\" PRIMARY KEY, btree (id)    \"syncereventidindex\" UNIQUE, btree (eventid)    \"anothersyncereventidindex\" btree (userid)    \"anothersyncereventidindexwithascending\" btree (userid, id)    \"asdfasdgasdf\" btree (userid, id DESC)\n    \"syncereventuseridhashindex\" hash (userid)-Jake\nSo... 
it seems that setting the userId to one that has less rows in the table results in the index actually being used...\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId = '57d35db7353b0d627c0e592f' AND ID > 12468 ORDER BY ID LIMIT 4000;                                                                              QUERY PLAN                                                                                ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=0.56..8574.30 rows=4000 width=618) (actual time=0.031..13.190 rows=4000 loops=1)   Buffers: shared hit=867   ->  Index Scan using anothersyncereventidindexwithascending on syncerevent  (cost=0.56..216680.62 rows=101090 width=618) (actual time=0.027..5.313 rows=4000 loops=1)         Index Cond: ((userid = '57d35db7353b0d627c0e592f'::text) AND (id > 12468))         Buffers: shared hit=867 Planning time: 0.168 ms Execution time: 29.331 ms(7 rows)Is there some way to force the use of one of the indexes on the userId column?", "msg_date": "Tue, 27 Sep 2016 19:45:54 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "[ Please don't re-quote the entire damn thread in each followup. Have\nsome respect for your readers' time, and assume that they have already\nseen the previous traffic, or could go look it up if they haven't.\nThe point of quoting at all is just to quickly remind people where we\nare in the discussion. ]\n\nJake Nielsen <[email protected]> writes:\n> So... it seems that setting the userId to one that has less rows in the\n> table results in the index actually being used...\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM SyncerEvent WHERe userId =\n> '57d35db7353b0d627c0e592f' AND ID > 12468 ORDER BY ID LIMIT 4000;\n\nIt looks from the numbers floating around in this thread that the\nuserId used in your original query actually matches about 50% of\nthe table. That would make it unsurprising that the planner doesn't\nwant to use an index. A rule of thumb is that a seqscan is going\nto be cheaper than an indexscan if your query retrieves, or even\njust has to fetch, more than a few percent of the table.\n\nNow, given the existence of an index on (userID, ID) --- in that\norder --- I would expect the planner to want to use that index\nfor a query shaped exactly as you show above. Basically, it knows\nthat that just requires starting at the ('57d35db7353b0d627c0e592f',\n12468) position in the index and scanning forward for 4000 index\nentries; no extraneous table rows will be fetched at all. If you\nincreased the LIMIT enough, it'd go over to a seqscan-and-sort to\navoid doing so much random access to the table, but I'd think the\ncrossover point for that is well above 4000 out of 3.3M rows.\n\nHowever, as soon as you add any other unindexable conditions,\nthe situation changes because rows that fail the additional\nconditions represent useless fetches. 
Now, instead of fetching\n4000 rows using the index, it's fetching 4000 times some multiplier.\n\nIt's hard to tell for sure given the available info, but I think\nthat the extra inequalities in your original query reject a pretty\nsizable proportion of rows, resulting in the indexscan approach\nneeding to fetch a great deal more than 4000 rows, making it look\nto be more expensive than a seqscan.\n\nI'm not sure why it's preferring the pkey index to the one on\n(userID, ID), but possibly that has something to do with that\nindex being better correlated to the physical table order, resulting\nin a prediction of less random I/O when using that index.\n\nSo the bottom line is that given your data statistics, there may\nwell be no really good plan for your original query. It just\nrequires fetching a lot of rows, and indexes can't help very much.\n\nIf you say \"well yeah, but it seems to perform fine when I force\nit to use that index anyway\", the answer may be that you need to\nadjust random_page_cost. The default value is OK for tables that\nare mostly sitting on spinning rust, but if your database is\nRAM-resident or SSD-resident you probably want a value closer to 1.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 28 Sep 2016 09:04:52 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "On Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]> wrote:\n\n> [ Please don't re-quote the entire damn thread in each followup. Have\n> some respect for your readers' time, and assume that they have already\n> seen the previous traffic, or could go look it up if they haven't.\n> The point of quoting at all is just to quickly remind people where we\n> are in the discussion. ]\n>\n\nSorry, understood.\n\n\n>\n> If you say \"well yeah, but it seems to perform fine when I force\n> it to use that index anyway\", the answer may be that you need to\n> adjust random_page_cost. The default value is OK for tables that\n> are mostly sitting on spinning rust, but if your database is\n> RAM-resident or SSD-resident you probably want a value closer to 1.\n>\n\nAhhh, this could absolutely be the key right here. I could totally see why\nit would make sense for the planner to do what it's doing given that it's\nweighting sequential access more favorably than random access.\n\nBeautiful! After changing the random_page_cost to 1.0 the original query\nwent from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing\nfor in the original post. I'll keep in mind that the query planner is very\ntunable and has these sorts of hardware-related trade-offs in the future. I\ncan't thank you enough!\n\nCheers!\n\nOn Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]> wrote:[ Please don't re-quote the entire damn thread in each followup. Have\nsome respect for your readers' time, and assume that they have already\nseen the previous traffic, or could go look it up if they haven't.\nThe point of quoting at all is just to quickly remind people where we\nare in the discussion. ]Sorry, understood. \n\nIf you say \"well yeah, but it seems to perform fine when I force\nit to use that index anyway\", the answer may be that you need to\nadjust random_page_cost.  
The default value is OK for tables that\nare mostly sitting on spinning rust, but if your database is\nRAM-resident or SSD-resident you probably want a value closer to 1.Ahhh, this could absolutely be the key right here. I could totally see why it would make sense for the planner to do what it's doing given that it's weighting sequential access more favorably than random access.Beautiful! After changing the random_page_cost to 1.0 the original query went from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing for in the original post. I'll keep in mind that the query planner is very tunable and has these sorts of hardware-related trade-offs in the future. I can't thank you enough!Cheers!", "msg_date": "Wed, 28 Sep 2016 11:11:13 -0700", "msg_from": "Jake Nielsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Unexpected expensive index scan" }, { "msg_contents": "On 9/28/16 1:11 PM, Jake Nielsen wrote:\n> Beautiful! After changing the random_page_cost to 1.0 the original query\n> went from ~3.5s to ~35ms. This is exactly the kind of insight I was\n> fishing for in the original post. I'll keep in mind that the query\n> planner is very tunable and has these sorts of hardware-related\n> trade-offs in the future. I can't thank you enough!\n\nBe careful with setting random_page_cost to exactly 1... that tells the \nplanner that an index scan has nearly the same cost as a sequential \nscan, which is absolutely never the case, even with the database in \nmemory. 1.1 or maybe even 1.01 is probably a safer bet.\n\nAlso note that you can set those parameters within a single session, as \nwell as within a single transaction. So if you need to force a different \nsetting for a single query, you could always do\n\nBEGIN;\nSET LOCAL random_page_cost = 1;\nSELECT ...\nCOMMIT; (or rollback...)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 30 Sep 2016 17:19:59 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unexpected expensive index scan" } ]
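A practical footnote to the thread above: the per-transaction override Jim shows can also be persisted for a single database, which suits the SSD-resident case Tom describes. The sketch below reuses the database, table, and userId from the thread and assumes 1.1 per Jim's caveat about not using exactly 1; it is illustrative, not the thread authors' exact commands.

-- Persist a lower random_page_cost for one database only; this takes
-- effect for new sessions and leaves other databases at the server default.
ALTER DATABASE remoteSyncerLogistics SET random_page_cost = 1.1;

-- Or scope the override to a single query, as suggested above:
BEGIN;
SET LOCAL random_page_cost = 1.1;
SELECT * FROM SyncerEvent
WHERE userId = '57dc984f1c87461c0967e228'
ORDER BY id LIMIT 4000;
COMMIT;

-- Confirm what the current session actually sees:
SHOW random_page_cost;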
[ { "msg_contents": "Hi!\n\nWe are having a baffling problem we hope you might be able to help with. We were hoping to speed up postgres restores to our reporting server. First, we were seeing missing indexes with pg_restore to our reporting server for one of our databases when we did pg_restore with multiple jobs (a clean restore, we also tried dropping the database prior to restore, just in case something was extant and amiss). The indexes missed were not consistent, and we were only ever seeing errors on import that indicated an index had not yet been built. For example:\n\npg_restore: [archiver (db)] could not execute query: ERROR: index \"index_versions_on_item_type_and_item_id\" does not exist\n Command was: DROP INDEX public.index_versions_on_item_type_and_item_id;\n\nWhich seemed like a reasonable error to us. We had no errors on insertion to indicate that index creation was a problem. \n\nWe believed this might be a race condition, so we attempted to do a schema-only restore followed by a data-only restore just for this database. This worked a few times, and then began growing exponentially in completion time before it became unsustainable. We figured we were using too many jobs, so we decreased them. Nothing helped.\n\nWe decided to move back to a multi-job regular restore, and then the restores began crashing thusly:\n[2016-09-14 02:20:36 UTC] LOG: server process (PID 27624) was terminated by signal 9: Killed\n[2016-09-14 02:20:36 UTC] LOG: terminating any other active server processes\n[2016-09-14 02:20:36 UTC] postgres [local] DBNAME WARNING: terminating connection because of crash of another server process\n[2016-09-14 02:20:36 UTC] postgres [local] DBNAME DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.\n\nThe restore crashed this way for all job numbers except for one. We’re now stuck back where we were prior to increasing job numbers, at one job for this restore in order to prevent errors and crashes. \n\nBackground: \n\t• 3 ec2 instances with postgres\n\t\t• 1 used for reporting, on Postgresql 9.5.4\n\t\t\t• Reporting server is a c4.2xlarge, and should have been able to handle multiple jobs (8cpu / https://aws.amazon.com/ec2/instance-types/ )\n\t\t• 2 production servers; one leader and one follower, both on Postgresql 9.5.3. \n\nWe have one very large database, 678GB, and several others, but the largest is our concern. \n\nI have attached our postgresql.conf file. Thank you so much for your time.\n\nBest,\n\n\n\nCea Stapleton \nOperations Engineer\nhttp://www.healthfinch.com\n\n\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 29 Sep 2016 07:36:55 -0500", "msg_from": "Cea Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Failing Multi-Job Restores, Missing Indexes on Restore" }, { "msg_contents": "Cea Stapleton <[email protected]> writes:\n> We are having a baffling problem we hope you might be able to help with. We were hoping to speed up postgres restores to our reporting server. First, we were seeing missing indexes with pg_restore to our reporting server for one of our databases when we did pg_restore with multiple jobs (a clean restore, we also tried dropping the database prior to restore, just in case something was extant and amiss). 
The indexes missed were not consistent, and we were only ever seeing errors on import that indicated an index had not yet been built. For example:\n\n> pg_restore: [archiver (db)] could not execute query: ERROR: index \"index_versions_on_item_type_and_item_id\" does not exist\n> Command was: DROP INDEX public.index_versions_on_item_type_and_item_id;\n\nWhich PG version is that; particularly, which pg_restore version?\nWhat's the exact pg_restore command you were issuing?\n\n> We decided to move back to a multi-job regular restore, and then the restores began crashing thusly:\n> [2016-09-14 02:20:36 UTC] LOG: server process (PID 27624) was terminated by signal 9: Killed\n\nThis is probably the dreaded Linux OOM killer. Fix by reconfiguring your\nsystem to disallow memory overcommit, or at least make it not apply to\nPostgres, cf\nhttps://www.postgresql.org/docs/9.5/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Sep 2016 08:52:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failing Multi-Job Restores, Missing Indexes on Restore" }, { "msg_contents": "Thanks Tom!\n\nWe’re using pg_restore (PostgreSQL) 9.5.4 for the restores. We’ve used variations on the job number:\n\n/usr/bin/pg_restore -j 6 -Fc -O -c -d DBNAME RESTORE_FILE”\n\nWe’ll take a look at the memory overcommit - would that also explain the index issues we were seeing before we were seeing the crashes?\n\t\nCea Stapleton \nOperations Engineer\nhttp://www.healthfinch.com\n\t\n\n> On Sep 29, 2016, at 7:52 AM, Tom Lane <[email protected]> wrote:\n> \n> Cea Stapleton <[email protected]> writes:\n>> We are having a baffling problem we hope you might be able to help with. We were hoping to speed up postgres restores to our reporting server. First, we were seeing missing indexes with pg_restore to our reporting server for one of our databases when we did pg_restore with multiple jobs (a clean restore, we also tried dropping the database prior to restore, just in case something was extant and amiss). The indexes missed were not consistent, and we were only ever seeing errors on import that indicated an index had not yet been built. For example:\n> \n>> pg_restore: [archiver (db)] could not execute query: ERROR: index \"index_versions_on_item_type_and_item_id\" does not exist\n>> Command was: DROP INDEX public.index_versions_on_item_type_and_item_id;\n> \n> Which PG version is that; particularly, which pg_restore version?\n> What's the exact pg_restore command you were issuing?\n> \n>> We decided to move back to a multi-job regular restore, and then the restores began crashing thusly:\n>> [2016-09-14 02:20:36 UTC] LOG: server process (PID 27624) was terminated by signal 9: Killed\n> \n> This is probably the dreaded Linux OOM killer. 
Fix by reconfiguring your\n> system to disallow memory overcommit, or at least make it not apply to\n> Postgres, cf\n> https://www.postgresql.org/docs/9.5/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Sep 2016 07:56:43 -0500", "msg_from": "Cea Stapleton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Failing Multi-Job Restores, Missing Indexes on Restore" }, { "msg_contents": "Cea Stapleton <[email protected]> writes:\n> We’re using pg_restore (PostgreSQL) 9.5.4 for the restores. We’ve used variations on the job number:\n> /usr/bin/pg_restore -j 6 -Fc -O -c -d DBNAME RESTORE_FILE”\n\nOK ... do you actually need the -c, and if so why?\n\n> We’ll take a look at the memory overcommit - would that also explain the index issues we were seeing before we were seeing the crashes?\n\nUnlikely. I'm guessing that there's some sort of race condition involved\nin parallel restore with -c, but it's not very clear what.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 29 Sep 2016 09:09:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Failing Multi-Job Restores, Missing Indexes on Restore" } ]
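A sketch pulling together the two remedies discussed above: restoring without -c (recreating the database separately) and disabling kernel memory overcommit so parallel restores cannot be killed by the OOM killer. DBNAME and RESTORE_FILE are the thread's own placeholders, and the sysctl setting follows the linked kernel-resources documentation; review it against the machine's RAM and swap before applying.

# Recreate the target database rather than letting pg_restore -c issue
# DROP statements, sidestepping the inconsistent DROP INDEX errors above.
dropdb --if-exists DBNAME
createdb DBNAME
pg_restore -j 6 -Fc -O -e -d DBNAME RESTORE_FILE

# Disallow memory overcommit so the kernel cannot SIGKILL a backend
# mid-restore; vm.overcommit_ratio may also need tuning (see the docs).
sysctl -w vm.overcommit_memory=2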
[ { "msg_contents": "Hi,\r\nI am relatively new to MYSQL and not really sure I am in the right forum for this.\r\n\r\nI have a situation which I am not understanding. I am performing a simple query :\r\n\r\nSelect * from tableA\r\nWhere date >= ‘2016’06-01’\r\nAnd date < ‘2016-07-01’\r\n\r\nIndex is on date\r\nQuery returns 6271 rows\r\n\r\nWhen doing explain on the same query\r\nThe rows column shows 11462, nearly twice the amount (this result is consistent on most all tables)\r\n\r\nWhen selecting count from the table , returns 2668664\r\n\r\nWhen selecting from information_schema.tables table_rows column shows 2459114\r\n\r\nWhile this is indicative of out dated statistics\r\n\r\nHave done an analyze table but no changes.\r\n\r\nThanks,\r\nJoe\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Jake Nielsen\r\nSent: Wednesday, September 28, 2016 2:11 PM\r\nTo: Tom Lane <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Unexpected expensive index scan\r\n\r\n\r\n\r\nOn Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n[ Please don't re-quote the entire damn thread in each followup. Have\r\nsome respect for your readers' time, and assume that they have already\r\nseen the previous traffic, or could go look it up if they haven't.\r\nThe point of quoting at all is just to quickly remind people where we\r\nare in the discussion. ]\r\n\r\nSorry, understood.\r\n\r\n\r\nIf you say \"well yeah, but it seems to perform fine when I force\r\nit to use that index anyway\", the answer may be that you need to\r\nadjust random_page_cost. The default value is OK for tables that\r\nare mostly sitting on spinning rust, but if your database is\r\nRAM-resident or SSD-resident you probably want a value closer to 1.\r\n\r\nAhhh, this could absolutely be the key right here. I could totally see why it would make sense for the planner to do what it's doing given that it's weighting sequential access more favorably than random access.\r\n\r\nBeautiful! After changing the random_page_cost to 1.0 the original query went from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing for in the original post. I'll keep in mind that the query planner is very tunable and has these sorts of hardware-related trade-offs in the future. I can't thank you enough!\r\n\r\nCheers!\r\n\r\n\nHi,I am relatively new to MYSQL and not really sure I am in the right forum for this. I have a situation which I am not understanding.  I am performing a simple query : Select * from tableAWhere date >= ‘2016’06-01’And date < ‘2016-07-01’ Index is on dateQuery returns 6271 rows When doing explain on the same query The rows column shows  11462,  nearly twice the amount  (this result is consistent on most all tables) When selecting count from the table , returns  2668664 When selecting from information_schema.tables  table_rows column shows 2459114 While this is indicative of out dated statistics  Have done an analyze table but no changes. Thanks,Joe From: [email protected] [mailto:[email protected]] On Behalf Of Jake NielsenSent: Wednesday, September 28, 2016 2:11 PMTo: Tom Lane <[email protected]>Cc: [email protected]: Re: [PERFORM] Unexpected expensive index scan   On Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]> wrote:[ Please don't re-quote the entire damn thread in each followup. 
Havesome respect for your readers' time, and assume that they have alreadyseen the previous traffic, or could go look it up if they haven't.The point of quoting at all is just to quickly remind people where weare in the discussion. ] Sorry, understood. If you say \"well yeah, but it seems to perform fine when I forceit to use that index anyway\", the answer may be that you need toadjust random_page_cost.  The default value is OK for tables thatare mostly sitting on spinning rust, but if your database isRAM-resident or SSD-resident you probably want a value closer to 1. Ahhh, this could absolutely be the key right here. I could totally see why it would make sense for the planner to do what it's doing given that it's weighting sequential access more favorably than random access. Beautiful! After changing the random_page_cost to 1.0 the original query went from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing for in the original post. I'll keep in mind that the query planner is very tunable and has these sorts of hardware-related trade-offs in the future. I can't thank you enough! Cheers!", "msg_date": "Fri, 30 Sep 2016 07:03:05 -0500", "msg_from": "Joe Proietti <[email protected]>", "msg_from_op": true, "msg_subject": "MYSQL Stats" }, { "msg_contents": "My Apologies , was in the wrong email/forum, please disregard my email!\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Joe Proietti\r\nSent: Friday, September 30, 2016 8:03 AM\r\nTo: Jake Nielsen <[email protected]>; Tom Lane <[email protected]>\r\nCc: [email protected]\r\nSubject: [PERFORM] MYSQL Stats\r\n\r\nHi,\r\nI am relatively new to MYSQL and not really sure I am in the right forum for this.\r\n\r\nI have a situation which I am not understanding. I am performing a simple query :\r\n\r\nSelect * from tableA\r\nWhere date >= ‘2016’06-01’\r\nAnd date < ‘2016-07-01’\r\n\r\nIndex is on date\r\nQuery returns 6271 rows\r\n\r\nWhen doing explain on the same query\r\nThe rows column shows 11462, nearly twice the amount (this result is consistent on most all tables)\r\n\r\nWhen selecting count from the table , returns 2668664\r\n\r\nWhen selecting from information_schema.tables table_rows column shows 2459114\r\n\r\nWhile this is indicative of out dated statistics\r\n\r\nHave done an analyze table but no changes.\r\n\r\nThanks,\r\nJoe\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of Jake Nielsen\r\nSent: Wednesday, September 28, 2016 2:11 PM\r\nTo: Tom Lane <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Unexpected expensive index scan\r\n\r\n\r\n\r\nOn Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\r\n[ Please don't re-quote the entire damn thread in each followup. Have\r\nsome respect for your readers' time, and assume that they have already\r\nseen the previous traffic, or could go look it up if they haven't.\r\nThe point of quoting at all is just to quickly remind people where we\r\nare in the discussion. ]\r\n\r\nSorry, understood.\r\n\r\n\r\nIf you say \"well yeah, but it seems to perform fine when I force\r\nit to use that index anyway\", the answer may be that you need to\r\nadjust random_page_cost. The default value is OK for tables that\r\nare mostly sitting on spinning rust, but if your database is\r\nRAM-resident or SSD-resident you probably want a value closer to 1.\r\n\r\nAhhh, this could absolutely be the key right here. 
I could totally see why it would make sense for the planner to do what it's doing given that it's weighting sequential access more favorably than random access.\r\n\r\nBeautiful! After changing the random_page_cost to 1.0 the original query went from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing for in the original post. I'll keep in mind that the query planner is very tunable and has these sorts of hardware-related trade-offs in the future. I can't thank you enough!\r\n\r\nCheers!\r\n\r\n\nMy Apologies ,  was in the wrong email/forum,  please disregard my email! From: [email protected] [mailto:[email protected]] On Behalf Of Joe ProiettiSent: Friday, September 30, 2016 8:03 AMTo: Jake Nielsen <[email protected]>; Tom Lane <[email protected]>Cc: [email protected]: [PERFORM] MYSQL Stats Hi,I am relatively new to MYSQL and not really sure I am in the right forum for this. I have a situation which I am not understanding.  I am performing a simple query : Select * from tableAWhere date >= ‘2016’06-01’And date < ‘2016-07-01’ Index is on dateQuery returns 6271 rows When doing explain on the same query The rows column shows  11462,  nearly twice the amount  (this result is consistent on most all tables) When selecting count from the table , returns  2668664 When selecting from information_schema.tables  table_rows column shows 2459114 While this is indicative of out dated statistics  Have done an analyze table but no changes. Thanks,Joe From: [email protected] [mailto:[email protected]] On Behalf Of Jake NielsenSent: Wednesday, September 28, 2016 2:11 PMTo: Tom Lane <[email protected]>Cc: [email protected]: Re: [PERFORM] Unexpected expensive index scan   On Wed, Sep 28, 2016 at 6:04 AM, Tom Lane <[email protected]> wrote:[ Please don't re-quote the entire damn thread in each followup. Havesome respect for your readers' time, and assume that they have alreadyseen the previous traffic, or could go look it up if they haven't.The point of quoting at all is just to quickly remind people where weare in the discussion. ] Sorry, understood. If you say \"well yeah, but it seems to perform fine when I forceit to use that index anyway\", the answer may be that you need toadjust random_page_cost.  The default value is OK for tables thatare mostly sitting on spinning rust, but if your database isRAM-resident or SSD-resident you probably want a value closer to 1. Ahhh, this could absolutely be the key right here. I could totally see why it would make sense for the planner to do what it's doing given that it's weighting sequential access more favorably than random access. Beautiful! After changing the random_page_cost to 1.0 the original query went from ~3.5s to ~35ms. This is exactly the kind of insight I was fishing for in the original post. I'll keep in mind that the query planner is very tunable and has these sorts of hardware-related trade-offs in the future. I can't thank you enough! 
Cheers!", "msg_date": "Fri, 30 Sep 2016 08:44:40 -0500", "msg_from": "Joe Proietti <[email protected]>", "msg_from_op": true, "msg_subject": "Re: MYSQL Stats" }, { "msg_contents": "On 01/10/16 01:03, Joe Proietti wrote:\n>\n> Hi,\n>\n> I am relatively new to MYSQL and not really sure I am in the right \n> forum for this.\n>\n[...]\n\nIf your data is important to you, then PostgreSQL is safer!\n\nI've used both MySQL & PostgreSQL, and that latter is easier to use.\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 4 Oct 2016 17:20:27 +1300", "msg_from": "Gavin Flower <[email protected]>", "msg_from_op": false, "msg_subject": "Re: MYSQL Stats" } ]
[ { "msg_contents": "Hi,\n\nI have a table of around 20 G, more than 220 million records, and I'm\nrunning this query on it:\n\nexplain analyze SELECT MAX(id) - (SELECT id FROM expl_transactions WHERE\ndateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT\n1) FROM expl_transactions;\n\n\"id\" is SERIAL, \"dateAdded\" is timestamp without timezone\n\nThe \"dateAdded\" field also has a \"default now()\" applied to it some time\nafter its creation, and a fair amount of null values in the records (which\nI don't think matters for this query, but maybe I'm wrong).\n\nMy first idea is to create a default BRIN index on dateAdded since the\nabove query is not run frequently. To my surprise, the planner refused to\nuse the index and used sequential scan instead. When I forced sequential\nscanning off, I got this:\n\nhttps://explain.depesz.com/s/W8oo\n\nThe query was executing for 40+ seconds. It seems like the \"index scan\" on\nit returns nearly 9% of the table, 25 mil rows. Since the data in\ndateAdded actually is sequential and fairly selective (having now() as the\ndefault over a long period of time), this surprises me.\n\nWith a normal btree index, of course, it runs fine:\n\nhttps://explain.depesz.com/s/TB5\n\n\nAny ideas?\n\nHi,I have a table of around 20 G, more than 220 million records, and I'm running this query on it:explain analyze SELECT MAX(id) - (SELECT id FROM expl_transactions WHERE dateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT 1) FROM expl_transactions;\"id\" is SERIAL, \"dateAdded\" is timestamp without timezoneThe \"dateAdded\" field also has a \"default now()\" applied to it some time after its creation, and a fair amount of null values in the records (which I don't think matters for this query, but maybe I'm wrong).My first idea is to create a default BRIN index on dateAdded since the above query is not run frequently. To my surprise, the planner refused to use the index and used sequential scan instead. When I forced sequential scanning off, I got this:https://explain.depesz.com/s/W8ooThe query was executing for 40+ seconds. It seems like the \"index scan\" on it returns nearly 9% of the table, 25 mil rows. 
Since the data in dateAdded actually is sequential and fairly selective (having now() as the default over a long period of time), this surprises me.With a normal btree index, of course, it runs fine:https://explain.depesz.com/s/TB5Any ideas?", "msg_date": "Mon, 3 Oct 2016 11:00:52 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Understanding BRIN index performance" }, { "msg_contents": "I don't think a BRIN index would help in either case.\n\nBRIN just marks each page with a max and min boundaries which are helpful\nin where clauses and has nothing to do with ordering.\n\nFor the first operation i.e Max a btree index would do an index scan\nbackward which is just an index lookup in reverse and for order by it can\nuse the index as well since a btree index is ordered by default.\n\nThat is the reason why it switches to a sequential scan since there is no\nway for a BRIN index to be used in the case of a max / order by.\n\n\n\nOn Mon, Oct 3, 2016 at 2:30 PM, Ivan Voras <[email protected]> wrote:\n\n> Hi,\n>\n> I have a table of around 20 G, more than 220 million records, and I'm\n> running this query on it:\n>\n> explain analyze SELECT MAX(id) - (SELECT id FROM expl_transactions WHERE\n> dateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT\n> 1) FROM expl_transactions;\n>\n> \"id\" is SERIAL, \"dateAdded\" is timestamp without timezone\n>\n> The \"dateAdded\" field also has a \"default now()\" applied to it some time\n> after its creation, and a fair amount of null values in the records (which\n> I don't think matters for this query, but maybe I'm wrong).\n>\n> My first idea is to create a default BRIN index on dateAdded since the\n> above query is not run frequently. To my surprise, the planner refused to\n> use the index and used sequential scan instead. When I forced sequential\n> scanning off, I got this:\n>\n> https://explain.depesz.com/s/W8oo\n>\n> The query was executing for 40+ seconds. It seems like the \"index scan\" on\n> it returns nearly 9% of the table, 25 mil rows. 
Since the data in\n> dateAdded actually is sequential and fairly selective (having now() as the\n> default over a long period of time), this surprises me.\n>\n> With a normal btree index, of course, it runs fine:\n>\n> https://explain.depesz.com/s/TB5\n>\n>\n> Any ideas?\n>\n>\n\n\n-- \nRegards,\nMadusudanan.B.N <http://madusudanan.com>\n\nI don't think a BRIN index would help in either case.BRIN just marks each page with a max and min boundaries which are helpful in where clauses and has nothing to do with ordering.For the first operation i.e Max a btree index would do an index scan backward which is just an index lookup in reverse and for order by it can use the index as well since a btree index is ordered by default.That is the reason why it switches to a sequential scan since there is no way for a BRIN index to be used in the case of a max / order by.On Mon, Oct 3, 2016 at 2:30 PM, Ivan Voras <[email protected]> wrote:Hi,I have a table of around 20 G, more than 220 million records, and I'm running this query on it:explain analyze SELECT MAX(id) - (SELECT id FROM expl_transactions WHERE dateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT 1) FROM expl_transactions;\"id\" is SERIAL, \"dateAdded\" is timestamp without timezoneThe \"dateAdded\" field also has a \"default now()\" applied to it some time after its creation, and a fair amount of null values in the records (which I don't think matters for this query, but maybe I'm wrong).My first idea is to create a default BRIN index on dateAdded since the above query is not run frequently. To my surprise, the planner refused to use the index and used sequential scan instead. When I forced sequential scanning off, I got this:https://explain.depesz.com/s/W8ooThe query was executing for 40+ seconds. It seems like the \"index scan\" on it returns nearly 9% of the table, 25 mil rows. Since the data in dateAdded actually is sequential and fairly selective (having now() as the default over a long period of time), this surprises me.With a normal btree index, of course, it runs fine:https://explain.depesz.com/s/TB5Any ideas?\n-- Regards,Madusudanan.B.N", "msg_date": "Mon, 3 Oct 2016 14:56:37 +0530", "msg_from": "\"Madusudanan.B.N\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding BRIN index performance" }, { "msg_contents": "On 3 October 2016 at 10:00, Ivan Voras <[email protected]> wrote:\n> Hi,\n>\n> I have a table of around 20 G, more than 220 million records, and I'm\n> running this query on it:\n>\n> explain analyze SELECT MAX(id) - (SELECT id FROM expl_transactions WHERE\n> dateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT 1)\n> FROM expl_transactions;\n>\n> \"id\" is SERIAL, \"dateAdded\" is timestamp without timezone\n>\n> The \"dateAdded\" field also has a \"default now()\" applied to it some time\n> after its creation, and a fair amount of null values in the records (which I\n> don't think matters for this query, but maybe I'm wrong).\n>\n> My first idea is to create a default BRIN index on dateAdded since the above\n> query is not run frequently. To my surprise, the planner refused to use the\n> index and used sequential scan instead. When I forced sequential scanning\n> off, I got this:\n>\n> https://explain.depesz.com/s/W8oo\n>\n> The query was executing for 40+ seconds. It seems like the \"index scan\" on\n> it returns nearly 9% of the table, 25 mil rows. 
Since the data in dateAdded\n> actually is sequential and fairly selective (having now() as the default\n> over a long period of time), this surprises me.\n>\n> With a normal btree index, of course, it runs fine:\n>\n> https://explain.depesz.com/s/TB5\n\nBtree retains ordering, BRIN does not.\n\nWe've discussed optimizing the sort based upon BRIN metadata, but\nthat's not implemented yet.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 3 Oct 2016 10:40:37 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding BRIN index performance" }, { "msg_contents": "On 3 October 2016 at 11:40, Simon Riggs <[email protected]> wrote:\n\n> On 3 October 2016 at 10:00, Ivan Voras <[email protected]> wrote:\n>\n\n\n> > My first idea is to create a default BRIN index on dateAdded since the\n> above\n> > query is not run frequently. To my surprise, the planner refused to use\n> the\n> > index and used sequential scan instead. When I forced sequential scanning\n> > off, I got this:\n> >\n> > https://explain.depesz.com/s/W8oo\n> >\n> > The query was executing for 40+ seconds. It seems like the \"index scan\"\n> on\n> > it returns nearly 9% of the table, 25 mil rows. Since the data in\n> dateAdded\n> > actually is sequential and fairly selective (having now() as the default\n> > over a long period of time), this surprises me.\n> >\n> > With a normal btree index, of course, it runs fine:\n> >\n> > https://explain.depesz.com/s/TB5\n>\n> Btree retains ordering, BRIN does not.\n>\n> We've discussed optimizing the sort based upon BRIN metadata, but\n> that's not implemented yet.\n>\n\n\nI get that, my question was more about why the index scan returned 25 mil\nrows, when the pages are sequentially filled by timestamps? In my\nunderstading of BRIN, it should have returned a small number of pages which\nwould have been filtered (and sorted) for the exact data, right?\n\nOn 3 October 2016 at 11:40, Simon Riggs <[email protected]> wrote:On 3 October 2016 at 10:00, Ivan Voras <[email protected]> wrote:\n \n> My first idea is to create a default BRIN index on dateAdded since the above\n> query is not run frequently. To my surprise, the planner refused to use the\n> index and used sequential scan instead. When I forced sequential scanning\n> off, I got this:\n>\n> https://explain.depesz.com/s/W8oo\n>\n> The query was executing for 40+ seconds. It seems like the \"index scan\" on\n> it returns nearly 9% of the table, 25 mil rows. Since the data in dateAdded\n> actually is sequential and fairly selective (having now() as the default\n> over a long period of time), this surprises me.\n>\n> With a normal btree index, of course, it runs fine:\n>\n> https://explain.depesz.com/s/TB5\n\nBtree retains ordering, BRIN does not.\n\nWe've discussed optimizing the sort based upon BRIN metadata, but\nthat's not implemented yet.I get that, my question was more about why the index scan returned 25 mil rows, when the pages are sequentially filled by timestamps? 
In my understading of BRIN, it should have returned a small number of pages which would have been filtered (and sorted) for the exact data, right?", "msg_date": "Mon, 3 Oct 2016 11:58:46 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding BRIN index performance" }, { "msg_contents": "On 3 October 2016 at 10:58, Ivan Voras <[email protected]> wrote:\n\n> I get that, my question was more about why the index scan returned 25 mil\n> rows, when the pages are sequentially filled by timestamps? In my\n> understading of BRIN, it should have returned a small number of pages which\n> would have been filtered (and sorted) for the exact data, right?\n\nThat could be most simply explained if the distribution of your data\nis not what you think it is.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 3 Oct 2016 11:05:54 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Understanding BRIN index performance" }, { "msg_contents": "On 3 October 2016 at 12:05, Simon Riggs <[email protected]> wrote:\n\n> On 3 October 2016 at 10:58, Ivan Voras <[email protected]> wrote:\n>\n> > I get that, my question was more about why the index scan returned 25 mil\n> > rows, when the pages are sequentially filled by timestamps? In my\n> > understading of BRIN, it should have returned a small number of pages\n> which\n> > would have been filtered (and sorted) for the exact data, right?\n>\n> That could be most simply explained if the distribution of your data\n> is not what you think it is.\n>\n\n\nSomething doesn't add up.\nI've clustered the table, then created a BRIN index, and the number of rows\nresulting from the index scan dropped only very slightly.\n\nHmmm, looking at your original reply about the metadata, and my query, did\nyou mean something like this:\n\nSELECT id FROM expl_transactions WHERE dateAdded < (now() - INTERVAL '10\nMINUTES') ORDER BY dateAdded DESC LIMIT 1\n\nTo solve this with a BRIN index, the index records (range pairs?)\nthemselves would need to be ordered, to be able to perform the \"ORDER by\n... DESC\" operation with the index, and then sort it and take the single\nrecord from this operation, and there is currently no such data being\nrecorded?\n\nOn 3 October 2016 at 12:05, Simon Riggs <[email protected]> wrote:On 3 October 2016 at 10:58, Ivan Voras <[email protected]> wrote:\n\n> I get that, my question was more about why the index scan returned 25 mil\n> rows, when the pages are sequentially filled by timestamps? In my\n> understading of BRIN, it should have returned a small number of pages which\n> would have been filtered (and sorted) for the exact data, right?\n\nThat could be most simply explained if the distribution of your data\nis not what you think it is.Something doesn't add up.I've clustered the table, then created a BRIN index, and the number of rows resulting from the index scan dropped only very slightly.Hmmm, looking at your original reply about the metadata, and my query, did you mean something like this:SELECT id FROM expl_transactions WHERE dateAdded < (now() - INTERVAL '10 MINUTES') ORDER BY dateAdded DESC LIMIT 1To solve this with a BRIN index, the index records (range pairs?) 
themselves would need to be ordered, to be able to perform the \"ORDER by ... DESC\" operation with the index, and then sort it and take the single record from this operation, and there is currently no such data being recorded?", "msg_date": "Mon, 3 Oct 2016 13:29:30 +0200", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Understanding BRIN index performance" } ]
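One point the thread leaves implicit is why the forced BRIN scan touched 25 million rows: the query's only predicate is an upper bound (dateAdded < now() - 10 minutes), so every page range containing any non-null dateAdded older than the cutoff qualifies, which here is evidently all 25 million populated rows. A minimal sketch of a BRIN-friendly restatement, reusing the thread's table and column names; the one-day lower bound is an assumed heuristic, not something the participants tested:

    SELECT id
    FROM expl_transactions
    WHERE dateAdded <  now() - INTERVAL '10 minutes'
      AND dateAdded >= now() - INTERVAL '1 day'   -- a lower bound lets BRIN discard old page ranges
    ORDER BY dateAdded DESC
    LIMIT 1;

The btree index is still what serves the ORDER BY ... LIMIT 1 itself, as Simon notes; a bounded predicate merely shrinks the row set that any index, BRIN included, has to hand back.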
[ { "msg_contents": "\n\n\n\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel = '22222222222'  ORDER BY\n b.id DESC LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4) (actual\n time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey on public.kredytob\n b  (cost=0.43..3244444.80 rows=113 width=4) (actual\n time=2574.034..2574.034 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel = '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4) (actual\n time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94 rows=112 width=4) (actual\n time=463.271..463.271 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using kredytob_pesel_typkred_opclass_idx\n on public.kredytob b  (cost=0.43..115240.10 rows=112 width=4)\n (actual time=311.347..463.183 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661 read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows in table. (\"Fast\"\n is a copy from 1 am today).\n Why runtime is slower?\n\n -- \n Andrzej Zawadzki\n\n\n", "msg_date": "Mon, 10 Oct 2016 17:31:28 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": true, "msg_subject": "Why query plan is different?" 
}, { "msg_contents": "2016-10-10 17:31 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\n> Hi,\n> Today, I noticed strange situation:\n>\n> The same query run on different servers has very different plan:\n>\n> Q: SELECT b.* FROM kredytob b WHERE pesel = '22222222222' ORDER BY b.id\n> DESC LIMIT 1\n>\n> Slow plan:\n>\n> \"Limit (cost=0.43..28712.33 rows=1 width=4) (actual\n> time=2574.041..2574.044 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Buffers: shared hit=316132 read=110001\"\n> \" -> Index Scan Backward using kredytob_pkey on public.kredytob b\n> (cost=0.43..3244444.80 rows=113 width=4) (actual time=2574.034..2574.034\n> rows=1 loops=1)\"\n> \" Output: id\"\n> \" Filter: (b.pesel = '22222222222'::bpchar)\"\n> \" Rows Removed by Filter: 433609\"\n>\n\nhere is backward index scan with - lot of rows is thrown\n\nRows Removed by Filter: 433609\"\n\nprobably index definition on these servers are different\n\nregards\n\nPavel\n\n\n\n> \" Buffers: shared hit=316132 read=110001\"\n> \"Planning time: 0.414 ms\"\n> \"Execution time: 2574.139 ms\"\n>\n>\n> Fast plan:\n> \"Limit (cost=115240.66..115240.66 rows=1 width=4) (actual\n> time=463.275..463.276 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \" -> Sort (cost=115240.66..115240.94 rows=112 width=4) (actual\n> time=463.271..463.271 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Sort Key: b.id DESC\"\n> \" Sort Method: top-N heapsort Memory: 25kB\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \" -> Index Scan using kredytob_pesel_typkred_opclass_idx on\n> public.kredytob b (cost=0.43..115240.10 rows=112 width=4) (actual\n> time=311.347..463.183 rows=5 loops=1)\"\n> \" Output: id\"\n> \" Index Cond: (b.pesel = '22222222222'::bpchar)\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \"Planning time: 0.383 ms\"\n> \"Execution time: 463.324 ms\"\n>\n> Data is almost equal - \"slow\" has a few more rows in table. 
(\"Fast\" is a\n> copy from 1 am today).\n> Why runtime is slower?\n>\n> --\n> Andrzej Zawadzki\n>\n\n2016-10-10 17:31 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel = '22222222222'  ORDER BY\n b.id DESC LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4) (actual\n time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey on public.kredytob\n b  (cost=0.43..3244444.80 rows=113 width=4) (actual\n time=2574.034..2574.034 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel = '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"here is backward index scan with - lot of rows is thrown Rows Removed by Filter: 433609\" probably index definition on these servers are differentregardsPavel \n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4) (actual\n time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94 rows=112 width=4) (actual\n time=463.271..463.271 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using kredytob_pesel_typkred_opclass_idx\n on public.kredytob b  (cost=0.43..115240.10 rows=112 width=4)\n (actual time=311.347..463.183 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661 read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows in table. (\"Fast\"\n is a copy from 1 am today).\n Why runtime is slower?\n\n -- \n Andrzej Zawadzki", "msg_date": "Mon, 10 Oct 2016 19:09:15 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" }, { "msg_contents": "\n\n\n\n\nOn 10.10.2016 19:09, Pavel Stehule\n wrote:\n\n\n\n\n2016-10-10 17:31 GMT+02:00 Andrzej\n Zawadzki <[email protected]>:\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very\n different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel =\n '22222222222'  ORDER BY b.id DESC LIMIT\n 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4) (actual\n time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey on\n public.kredytob b  (cost=0.43..3244444.80 rows=113\n width=4) (actual time=2574.034..2574.034 rows=1\n loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel = '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n\n\n\n\nhere is backward index scan with - lot of rows is\n thrown \n\n Rows Removed by Filter: 433609\" \n\n\nprobably index definition on these servers are\n different\n\n\n\n\n\n\n No! 
That's binary copy of whole database.\n Index are the same!\n But, when I ask database without \"ORDER...\"\n (SELECT b.id FROM kredytob b  WHERE pesel = '22222222222';)\n  then:\n\n \"SLOW\"\n\n \"Index Scan using kredytob_pesel_typkred_opclass_idx on\n public.kredytob b  (cost=0.43..115349.30 rows=113 width=4) (actual\n time=233.767..392.710 rows=5 loops=1)\"\n \"  Output: id\"\n \"  Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"  Buffers: shared hit=19259\"\n \"Planning time: 0.254 ms\"\n \"Execution time: 392.761 ms\"\n\n \"FAST\"\n\n \"Index Scan using kredytob_pesel_typkred_opclass_idx on\n public.kredytob b  (cost=0.43..115240.10 rows=112 width=4) (actual\n time=378.737..836.208 rows=5 loops=1)\"\n \"  Output: id\"\n \"  Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"  Buffers: shared read=19237\"\n \"Planning time: 0.568 ms\"\n \"Execution time: 836.261 ms\"\n\n So, index is used in both queries but when is \"ORDER\" then\n everything change...\n Why?\n\n\n\n\n\n\n\n\nregards\n\n\nPavel\n\n\n\n \n\n \"        Buffers: shared\n hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4)\n (actual time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94 rows=112\n width=4) (actual time=463.271..463.271 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using kredytob_pesel_typkred_opclass_idx\n on public.kredytob b  (cost=0.43..115240.10 rows=112\n width=4) (actual time=311.347..463.183 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel =\n '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661 read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows in\n table. (\"Fast\" is a copy from 1 am today).\n Why runtime is slower?\n\n -- \n Andrzej Zawadzki\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Mon, 10 Oct 2016 22:51:07 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" 
}, { "msg_contents": "\n\n\n\n\nOn 10.10.2016 17:31, Andrzej Zawadzki\n wrote:\n\n\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel = '22222222222'  ORDER\n BY b.id DESC LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4) (actual\n time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey on\n public.kredytob b  (cost=0.43..3244444.80 rows=113 width=4)\n (actual time=2574.034..2574.034 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel = '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4) (actual\n time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94 rows=112 width=4)\n (actual time=463.271..463.271 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_opclass_idx on public.kredytob b \n (cost=0.43..115240.10 rows=112 width=4) (actual\n time=311.347..463.183 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661 read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows in table.\n (\"Fast\" is a copy from 1 am today).\n Why runtime is slower?\n\n\n I made another INDEX, without opclass:\n\n CREATE INDEX kredytob_pesel_typkred_idx\n   ON public.kredytob\n   USING btree\n   (pesel COLLATE pg_catalog.\"default\", typkred);\n\n after that: analyze kredytob;\n\n And now:\n \"Limit  (cost=333.31..333.31 rows=1 width=4) (actual\n time=0.100..0.102 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=8\"\n \"  ->  Sort  (cost=333.31..333.59 rows=114 width=4) (actual\n time=0.095..0.095 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=8\"\n \"        ->  Index Scan using kredytob_pesel_typkred_idx on\n public.kredytob b  (cost=0.43..332.74 rows=114 width=4) (actual\n time=0.046..0.065 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=8\"\n \"Planning time: 0.438 ms\"\n \"Execution time: 0.154 ms\"\n\n So, what is a reason that \"SLOW\" server doesn't like opclass index?\n\n -- \n Andrzej\n\n\n", "msg_date": "Mon, 10 Oct 2016 23:17:09 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" 
}, { "msg_contents": "2016-10-10 23:17 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\n> On 10.10.2016 17:31, Andrzej Zawadzki wrote:\n>\n> Hi,\n> Today, I noticed strange situation:\n>\n> The same query run on different servers has very different plan:\n>\n> Q: SELECT b.* FROM kredytob b WHERE pesel = '22222222222' ORDER BY b.id\n> DESC LIMIT 1\n>\n> Slow plan:\n>\n> \"Limit (cost=0.43..28712.33 rows=1 width=4) (actual\n> time=2574.041..2574.044 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Buffers: shared hit=316132 read=110001\"\n> \" -> Index Scan Backward using kredytob_pkey on public.kredytob b\n> (cost=0.43..3244444.80 rows=113 width=4) (actual time=2574.034..2574.034\n> rows=1 loops=1)\"\n> \" Output: id\"\n> \" Filter: (b.pesel = '22222222222'::bpchar)\"\n> \" Rows Removed by Filter: 433609\"\n> \" Buffers: shared hit=316132 read=110001\"\n> \"Planning time: 0.414 ms\"\n> \"Execution time: 2574.139 ms\"\n>\n>\n> Fast plan:\n> \"Limit (cost=115240.66..115240.66 rows=1 width=4) (actual\n> time=463.275..463.276 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \" -> Sort (cost=115240.66..115240.94 rows=112 width=4) (actual\n> time=463.271..463.271 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Sort Key: b.id DESC\"\n> \" Sort Method: top-N heapsort Memory: 25kB\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \" -> Index Scan using kredytob_pesel_typkred_opclass_idx on\n> public.kredytob b (cost=0.43..115240.10 rows=112 width=4) (actual\n> time=311.347..463.183 rows=5 loops=1)\"\n> \" Output: id\"\n> \" Index Cond: (b.pesel = '22222222222'::bpchar)\"\n> \" Buffers: shared hit=14661 read=4576\"\n> \"Planning time: 0.383 ms\"\n> \"Execution time: 463.324 ms\"\n>\n> Data is almost equal - \"slow\" has a few more rows in table. 
(\"Fast\" is a\n> copy from 1 am today).\n> Why runtime is slower?\n>\n>\n> I made another INDEX, without opclass:\n>\n> CREATE INDEX kredytob_pesel_typkred_idx\n> ON public.kredytob\n> USING btree\n> (pesel COLLATE pg_catalog.\"default\", typkred);\n>\n> after that: analyze kredytob;\n>\n> And now:\n> \"Limit (cost=333.31..333.31 rows=1 width=4) (actual time=0.100..0.102\n> rows=1 loops=1)\"\n> \" Output: id\"\n> \" Buffers: shared hit=8\"\n> \" -> Sort (cost=333.31..333.59 rows=114 width=4) (actual\n> time=0.095..0.095 rows=1 loops=1)\"\n> \" Output: id\"\n> \" Sort Key: b.id DESC\"\n> \" Sort Method: top-N heapsort Memory: 25kB\"\n> \" Buffers: shared hit=8\"\n> \" -> Index Scan using kredytob_pesel_typkred_idx on\n> public.kredytob b (cost=0.43..332.74 rows=114 width=4) (actual\n> time=0.046..0.065 rows=5 loops=1)\"\n> \" Output: id\"\n> \" Index Cond: (b.pesel = '22222222222'::bpchar)\"\n> \" Buffers: shared hit=8\"\n> \"Planning time: 0.438 ms\"\n> \"Execution time: 0.154 ms\"\n>\n> So, what is a reason that \"SLOW\" server doesn't like opclass index?\n>\n\nwhat is default locales?\n\nPavel\n\n\n>\n> --\n> Andrzej\n>\n\n2016-10-10 23:17 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\nOn 10.10.2016 17:31, Andrzej Zawadzki\n wrote:\n\n\n \n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel = '22222222222'  ORDER\n BY b.id DESC LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4) (actual\n time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey on\n public.kredytob b  (cost=0.43..3244444.80 rows=113 width=4)\n (actual time=2574.034..2574.034 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel = '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4) (actual\n time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94 rows=112 width=4)\n (actual time=463.271..463.271 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_opclass_idx on public.kredytob b \n (cost=0.43..115240.10 rows=112 width=4) (actual\n time=311.347..463.183 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661 read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows in table.\n (\"Fast\" is a copy from 1 am today).\n Why runtime is slower?\n\n\n I made another INDEX, without opclass:\n\n CREATE INDEX kredytob_pesel_typkred_idx\n   ON public.kredytob\n   USING btree\n   (pesel COLLATE pg_catalog.\"default\", typkred);\n\n after that: analyze kredytob;\n\n And now:\n \"Limit  (cost=333.31..333.31 rows=1 width=4) (actual\n time=0.100..0.102 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=8\"\n \"  ->  Sort  (cost=333.31..333.59 rows=114 width=4) (actual\n time=0.095..0.095 rows=1 loops=1)\"\n \"        Output: 
id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=8\"\n \"        ->  Index Scan using kredytob_pesel_typkred_idx on\n public.kredytob b  (cost=0.43..332.74 rows=114 width=4) (actual\n time=0.046..0.065 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel = '22222222222'::bpchar)\"\n \"              Buffers: shared hit=8\"\n \"Planning time: 0.438 ms\"\n \"Execution time: 0.154 ms\"\n\n So, what is a reason that \"SLOW\" server doesn't like opclass index?what is default locales?Pavel \n\n -- \n Andrzej", "msg_date": "Tue, 11 Oct 2016 03:47:24 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" }, { "msg_contents": "\n\n\n\n\nOn 11.10.2016 03:47, Pavel Stehule\n wrote:\n\n\n\n\n2016-10-10 23:17 GMT+02:00 Andrzej\n Zawadzki <[email protected]>:\n\n\n\n\nOn\n 10.10.2016 17:31, Andrzej Zawadzki wrote:\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very\n different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel =\n '22222222222'  ORDER BY b.id DESC\n LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4)\n (actual time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey\n on public.kredytob b  (cost=0.43..3244444.80\n rows=113 width=4) (actual time=2574.034..2574.034\n rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel =\n '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4)\n (actual time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94\n rows=112 width=4) (actual time=463.271..463.271\n rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id\n DESC\"\n \"        Sort Method: top-N heapsort  Memory:\n 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_opclass_idx on\n public.kredytob b  (cost=0.43..115240.10 rows=112\n width=4) (actual time=311.347..463.183 rows=5\n loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel =\n '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661\n read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows\n in table. 
(\"Fast\" is a copy from 1 am today).\n Why runtime is slower?\n\n\n\n\n I made another INDEX, without opclass:\n\n CREATE INDEX kredytob_pesel_typkred_idx\n   ON public.kredytob\n   USING btree\n   (pesel COLLATE pg_catalog.\"default\", typkred);\n\n after that: analyze kredytob;\n\n And now:\n \"Limit  (cost=333.31..333.31 rows=1 width=4) (actual\n time=0.100..0.102 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=8\"\n \"  ->  Sort  (cost=333.31..333.59 rows=114 width=4)\n (actual time=0.095..0.095 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=8\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_idx on public.kredytob b \n (cost=0.43..332.74 rows=114 width=4) (actual\n time=0.046..0.065 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel =\n '22222222222'::bpchar)\"\n \"              Buffers: shared hit=8\"\n \"Planning time: 0.438 ms\"\n \"Execution time: 0.154 ms\"\n\n So, what is a reason that \"SLOW\" server doesn't like\n opclass index?\n\n\n\n\nwhat is default locales?\n\n\n\n\n\n\n LATIN2 - that's why I use opclass.\n\n -- \n Andrzej\n\n\n", "msg_date": "Tue, 11 Oct 2016 13:19:14 +0200", "msg_from": "Andrzej Zawadzki <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" }, { "msg_contents": "2016-10-11 13:19 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\n> On 11.10.2016 03:47, Pavel Stehule wrote:\n>\n>\n>\n> 2016-10-10 23:17 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n>\n>> On 10.10.2016 17:31, Andrzej Zawadzki wrote:\n>>\n>> Hi,\n>> Today, I noticed strange situation:\n>>\n>> The same query run on different servers has very different plan:\n>>\n>> Q: SELECT b.* FROM kredytob b WHERE pesel = '22222222222' ORDER BY b.id\n>> DESC LIMIT 1\n>>\n>> Slow plan:\n>>\n>> \"Limit (cost=0.43..28712.33 rows=1 width=4) (actual\n>> time=2574.041..2574.044 rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Buffers: shared hit=316132 read=110001\"\n>> \" -> Index Scan Backward using kredytob_pkey on public.kredytob b\n>> (cost=0.43..3244444.80 rows=113 width=4) (actual time=2574.034..2574.034\n>> rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Filter: (b.pesel = '22222222222'::bpchar)\"\n>> \" Rows Removed by Filter: 433609\"\n>> \" Buffers: shared hit=316132 read=110001\"\n>> \"Planning time: 0.414 ms\"\n>> \"Execution time: 2574.139 ms\"\n>>\n>>\n>> Fast plan:\n>> \"Limit (cost=115240.66..115240.66 rows=1 width=4) (actual\n>> time=463.275..463.276 rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Buffers: shared hit=14661 read=4576\"\n>> \" -> Sort (cost=115240.66..115240.94 rows=112 width=4) (actual\n>> time=463.271..463.271 rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Sort Key: b.id DESC\"\n>> \" Sort Method: top-N heapsort Memory: 25kB\"\n>> \" Buffers: shared hit=14661 read=4576\"\n>> \" -> Index Scan using kredytob_pesel_typkred_opclass_idx on\n>> public.kredytob b (cost=0.43..115240.10 rows=112 width=4) (actual\n>> time=311.347..463.183 rows=5 loops=1)\"\n>> \" Output: id\"\n>> \" Index Cond: (b.pesel = '22222222222'::bpchar)\"\n>> \" Buffers: shared hit=14661 read=4576\"\n>> \"Planning time: 0.383 ms\"\n>> \"Execution time: 463.324 ms\"\n>>\n>> Data is almost equal - \"slow\" has a few more rows in table. 
(\"Fast\" is a\n>> copy from 1 am today).\n>> Why runtime is slower?\n>>\n>>\n>> I made another INDEX, without opclass:\n>>\n>> CREATE INDEX kredytob_pesel_typkred_idx\n>> ON public.kredytob\n>> USING btree\n>> (pesel COLLATE pg_catalog.\"default\", typkred);\n>>\n>> after that: analyze kredytob;\n>>\n>> And now:\n>> \"Limit (cost=333.31..333.31 rows=1 width=4) (actual time=0.100..0.102\n>> rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Buffers: shared hit=8\"\n>> \" -> Sort (cost=333.31..333.59 rows=114 width=4) (actual\n>> time=0.095..0.095 rows=1 loops=1)\"\n>> \" Output: id\"\n>> \" Sort Key: b.id DESC\"\n>> \" Sort Method: top-N heapsort Memory: 25kB\"\n>> \" Buffers: shared hit=8\"\n>> \" -> Index Scan using kredytob_pesel_typkred_idx on\n>> public.kredytob b (cost=0.43..332.74 rows=114 width=4) (actual\n>> time=0.046..0.065 rows=5 loops=1)\"\n>> \" Output: id\"\n>> \" Index Cond: (b.pesel = '22222222222'::bpchar)\"\n>> \" Buffers: shared hit=8\"\n>> \"Planning time: 0.438 ms\"\n>> \"Execution time: 0.154 ms\"\n>>\n>> So, what is a reason that \"SLOW\" server doesn't like opclass index?\n>>\n>\n> what is default locales?\n>\n> LATIN2 - that's why I use opclass.\n>\n\nIs it this local in both cases?\n\nRegards\n\nPavel\n\n\n>\n> --\n> Andrzej\n>\n\n2016-10-11 13:19 GMT+02:00 Andrzej Zawadzki <[email protected]>:\n\nOn 11.10.2016 03:47, Pavel Stehule\n wrote:\n\n\n\n\n2016-10-10 23:17 GMT+02:00 Andrzej\n Zawadzki <[email protected]>:\n\n\n\n\nOn\n 10.10.2016 17:31, Andrzej Zawadzki wrote:\n\n Hi,\n Today, I noticed strange situation:\n\n The same query run on different servers has very\n different plan:\n\n Q: SELECT b.* FROM kredytob b  WHERE pesel =\n '22222222222'  ORDER BY b.id DESC\n LIMIT 1 \n\n Slow plan:\n\n \"Limit  (cost=0.43..28712.33 rows=1 width=4)\n (actual time=2574.041..2574.044 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=316132 read=110001\"\n \"  ->  Index Scan Backward using kredytob_pkey\n on public.kredytob b  (cost=0.43..3244444.80\n rows=113 width=4) (actual time=2574.034..2574.034\n rows=1 loops=1)\"\n \"        Output: id\"\n \"        Filter: (b.pesel =\n '22222222222'::bpchar)\"\n \"        Rows Removed by Filter: 433609\"\n \"        Buffers: shared hit=316132 read=110001\"\n \"Planning time: 0.414 ms\"\n \"Execution time: 2574.139 ms\"\n\n\n Fast plan:\n \"Limit  (cost=115240.66..115240.66 rows=1 width=4)\n (actual time=463.275..463.276 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=14661 read=4576\"\n \"  ->  Sort  (cost=115240.66..115240.94\n rows=112 width=4) (actual time=463.271..463.271\n rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id\n DESC\"\n \"        Sort Method: top-N heapsort  Memory:\n 25kB\"\n \"        Buffers: shared hit=14661 read=4576\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_opclass_idx on\n public.kredytob b  (cost=0.43..115240.10 rows=112\n width=4) (actual time=311.347..463.183 rows=5\n loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel =\n '22222222222'::bpchar)\"\n \"              Buffers: shared hit=14661\n read=4576\"\n \"Planning time: 0.383 ms\"\n \"Execution time: 463.324 ms\"\n\n Data is almost equal - \"slow\" has a few more rows\n in table. 
(\"Fast\" is a copy from 1 am today).\n Why runtime is slower?\n\n\n\n\n I made another INDEX, without opclass:\n\n CREATE INDEX kredytob_pesel_typkred_idx\n   ON public.kredytob\n   USING btree\n   (pesel COLLATE pg_catalog.\"default\", typkred);\n\n after that: analyze kredytob;\n\n And now:\n \"Limit  (cost=333.31..333.31 rows=1 width=4) (actual\n time=0.100..0.102 rows=1 loops=1)\"\n \"  Output: id\"\n \"  Buffers: shared hit=8\"\n \"  ->  Sort  (cost=333.31..333.59 rows=114 width=4)\n (actual time=0.095..0.095 rows=1 loops=1)\"\n \"        Output: id\"\n \"        Sort Key: b.id DESC\"\n \"        Sort Method: top-N heapsort  Memory: 25kB\"\n \"        Buffers: shared hit=8\"\n \"        ->  Index Scan using\n kredytob_pesel_typkred_idx on public.kredytob b \n (cost=0.43..332.74 rows=114 width=4) (actual\n time=0.046..0.065 rows=5 loops=1)\"\n \"              Output: id\"\n \"              Index Cond: (b.pesel =\n '22222222222'::bpchar)\"\n \"              Buffers: shared hit=8\"\n \"Planning time: 0.438 ms\"\n \"Execution time: 0.154 ms\"\n\n So, what is a reason that \"SLOW\" server doesn't like\n opclass index?\n\n\n\n\nwhat is default locales?\n\n\n\n\n\n\n LATIN2 - that's why I use opclass.Is it this local in both cases?RegardsPavel \n\n -- \n Andrzej", "msg_date": "Tue, 11 Oct 2016 13:21:36 +0200", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why query plan is different?" } ]
[ { "msg_contents": "Team,\n\nwe are seeing delay in converting logs from ready state to done state in\npg_xlog archive status.\n\nwe have seen pg_xlog generated 2000 per hour and it is getting archived\n1894. So the speed at which the archiving is done is too slow as compare to\nthe pg_xlog generation\n\nSo our pg_xlog directory keeps filling regularly. What should be the real\ncause here?\n\nWe cannot see any specific error on pg_log except no space left on device.\n\n\n\ncurrent setting:\n\nwal_level = archive\n\narchive_mode = on\n\nmax_wal_senders = 3\n\narchive_command = 'gzip < %p > /pgarchive/%f'\n\ncheckpoint_segments = 3\n\ncheckpoint_timeout = 5min\n\nlog_checkpoints = on\n\narchive_timeout = 60\n\n\n\n\n\nThanks,\n\nSamir Magar\n\nTeam,\nwe are seeing delay in converting logs from ready state to\ndone state in pg_xlog archive status.\nwe have seen pg_xlog generated 2000 per hour and it is\ngetting archived 1894. So the speed at which the archiving is done is too slow\nas compare to the pg_xlog generation \nSo our pg_xlog directory keeps filling regularly. What\nshould be the real cause here?\nWe cannot see any specific error on pg_log  except no space left on device. \n \ncurrent setting:\nwal_level = archive\narchive_mode = on\nmax_wal_senders = 3\narchive_command = 'gzip < %p > /pgarchive/%f'\ncheckpoint_segments = 3\ncheckpoint_timeout = 5min\nlog_checkpoints = on\narchive_timeout = 60\n \n \nThanks,\nSamir Magar", "msg_date": "Wed, 12 Oct 2016 10:56:38 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": true, "msg_subject": "Delay in converting logs from ready state to done state" }, { "msg_contents": "On 12/10/2016 07:26, Samir Magar wrote:\n> Team,\n> \n> we are seeing delay in converting logs from ready state to done state in\n> pg_xlog archive status.\n> \n> we have seen pg_xlog generated 2000 per hour and it is getting archived\n> 1894. So the speed at which the archiving is done is too slow as compare\n> to the pg_xlog generation\n> \n> So our pg_xlog directory keeps filling regularly. What should be the\n> real cause here?\n> \n> We cannot see any specific error on pg_log except no space left on device.\n> \n> \n> \n> current setting:\n> \n> wal_level = archive\n> \n> archive_mode = on\n> \n> max_wal_senders = 3\n> \n> archive_command = 'gzip < %p > /pgarchive/%f'\n> \n\nYou could use pigz which is parallel, that could speed up compression.\n\n> checkpoint_segments = 3\n> \n\nthis is way to low. If you generate 2000 WAL per hour, you should\nconfigure it to something like 170 (or 5 min average if 2000 is a\nspike). 
It'll perform less checkpoint and also generate less WALs.\n\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 12 Oct 2016 08:03:53 +0200", "msg_from": "Julien Rouhaud <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Delay in converting logs from ready state to done state" }, { "msg_contents": "*Hello Julien,*\n\n*Thank you for your prompt response!*\n*we have changed the checkpoint_segment to 170 and use pigz for the pg_xlog\ncompress.*\n*It is working very well now.*\n\n*Thanks again!!*\n\n*Regards,*\n*Samir Magar*\n\nOn Wed, Oct 12, 2016 at 11:33 AM, Julien Rouhaud <[email protected]>\nwrote:\n\n> On 12/10/2016 07:26, Samir Magar wrote:\n> > Team,\n> >\n> > we are seeing delay in converting logs from ready state to done state in\n> > pg_xlog archive status.\n> >\n> > we have seen pg_xlog generated 2000 per hour and it is getting archived\n> > 1894. So the speed at which the archiving is done is too slow as compare\n> > to the pg_xlog generation\n> >\n> > So our pg_xlog directory keeps filling regularly. What should be the\n> > real cause here?\n> >\n> > We cannot see any specific error on pg_log except no space left on\n> device.\n> >\n> >\n> >\n> > current setting:\n> >\n> > wal_level = archive\n> >\n> > archive_mode = on\n> >\n> > max_wal_senders = 3\n> >\n> > archive_command = 'gzip < %p > /pgarchive/%f'\n> >\n>\n> You could use pigz which is parallel, that could speed up compression.\n>\n> > checkpoint_segments = 3\n> >\n>\n> this is way to low. If you generate 2000 WAL per hour, you should\n> configure it to something like 170 (or 5 min average if 2000 is a\n> spike). It'll perform less checkpoint and also generate less WALs.\n>\n>\n> --\n> Julien Rouhaud\n> http://dalibo.com - http://dalibo.org\n>\n\nHello Julien,Thank you for your prompt response!we have changed the checkpoint_segment to 170 and use pigz for the pg_xlog compress.It is working very well now.Thanks again!!Regards,Samir MagarOn Wed, Oct 12, 2016 at 11:33 AM, Julien Rouhaud <[email protected]> wrote:On 12/10/2016 07:26, Samir Magar wrote:\n> Team,\n>\n> we are seeing delay in converting logs from ready state to done state in\n> pg_xlog archive status.\n>\n> we have seen pg_xlog generated 2000 per hour and it is getting archived\n> 1894. So the speed at which the archiving is done is too slow as compare\n> to the pg_xlog generation\n>\n> So our pg_xlog directory keeps filling regularly. What should be the\n> real cause here?\n>\n> We cannot see any specific error on pg_log  except no space left on device.\n>\n>\n>\n> current setting:\n>\n> wal_level = archive\n>\n> archive_mode = on\n>\n> max_wal_senders = 3\n>\n> archive_command = 'gzip < %p > /pgarchive/%f'\n>\n\nYou could use pigz which is parallel, that could speed up compression.\n\n> checkpoint_segments = 3\n>\n\nthis is way to low. If you generate 2000 WAL per hour, you should\nconfigure it to something like 170 (or 5 min average if 2000 is a\nspike).  It'll perform less checkpoint and also generate less WALs.\n\n\n--\nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org", "msg_date": "Wed, 12 Oct 2016 12:34:35 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Delay in converting logs from ready state to done state" } ]
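A sketch of what the fix that worked here might look like in postgresql.conf, assuming pigz is installed on the archive host; the thread count of 4 and the test guard are illustrative additions, not values the thread reported:

    checkpoint_segments = 170     # roughly 5 minutes of WAL at ~2000 segments/hour
    archive_command = 'test ! -f /pgarchive/%f && pigz -p 4 < %p > /pgarchive/%f'

pigz splits each 16 MB segment across several cores, so every archive_command invocation finishes sooner and the backlog of .ready files in pg_xlog/archive_status drains faster; the test ! -f guard simply refuses to overwrite an already-archived segment, per the usual archive_command advice.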
[ { "msg_contents": "Hello,\n\n\n\nWe are using postgresql 9.2 on redhat linux instance over openstack cloud.\n\n\n\nDatabase is around 441 GB.\n\n\n\nWe are using below command to take backup:\n\n\n\npg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast\n\n\n\nBackup size created is around 84GB.\n\n\n\nHowever, it is taking almost 10 hr 21 minutes to complete.\n\n\n\nLooking for speed improvement?\n\n\n\nThanks& Regards,\n\nVaze Swapnil\n\nHello, We are using postgresql 9.2 on redhat linux instance over openstack cloud. Database is around 441 GB. We are using below command to take backup: pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast Backup size created is around 84GB.  However, it is taking almost 10 hr 21 minutes to complete. Looking for speed improvement? Thanks& Regards,Vaze Swapnil", "msg_date": "Fri, 14 Oct 2016 18:05:01 +0530", "msg_from": "Swapnil Vaze <[email protected]>", "msg_from_op": true, "msg_subject": "pg_basebackup running slow" }, { "msg_contents": "What is the settings for max_wal_sender?\nyou can try increasing this parameter to improve backup performance.\n\n\nThanks,\nSamir Magar\n\nOn Fri, Oct 14, 2016 at 6:05 PM, Swapnil Vaze <[email protected]> wrote:\n\n> Hello,\n>\n>\n>\n> We are using postgresql 9.2 on redhat linux instance over openstack cloud.\n>\n>\n>\n> Database is around 441 GB.\n>\n>\n>\n> We are using below command to take backup:\n>\n>\n>\n> pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast\n>\n>\n>\n> Backup size created is around 84GB.\n>\n>\n>\n> However, it is taking almost 10 hr 21 minutes to complete.\n>\n>\n>\n> Looking for speed improvement?\n>\n>\n>\n> Thanks& Regards,\n>\n> Vaze Swapnil\n>\n\nWhat is the settings for max_wal_sender?you can try increasing this parameter to improve backup performance.Thanks,Samir MagarOn Fri, Oct 14, 2016 at 6:05 PM, Swapnil Vaze <[email protected]> wrote:Hello, We are using postgresql 9.2 on redhat linux instance over openstack cloud. Database is around 441 GB. We are using below command to take backup: pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast Backup size created is around 84GB.  However, it is taking almost 10 hr 21 minutes to complete. Looking for speed improvement? Thanks& Regards,Vaze Swapnil", "msg_date": "Fri, 14 Oct 2016 18:51:46 +0530", "msg_from": "Samir Magar <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup running slow" }, { "msg_contents": "On Fri, Oct 14, 2016 at 10:21 PM, Samir Magar <[email protected]> wrote:\n> What is the settings for max_wal_sender?\n> you can try increasing this parameter to improve backup performance.\n\nmax_wal_senders has no influence on the performance of a base backup\ntaken as a base backup is just sent through one single WAL sender\nprocess. 
What matters here is the network bandwidth.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Oct 2016 22:37:11 +0900", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup running slow" }, { "msg_contents": "Vaze,\n\n* Swapnil Vaze ([email protected]) wrote:\n> We are using postgresql 9.2 on redhat linux instance over openstack cloud.\n> \n> Database is around 441 GB.\n> \n> We are using below command to take backup:\n> \n> pg_basebackup -v -D /pgbackup/$bkupdir -Ft -z -c fast\n> \n> Backup size created is around 84GB.\n> \n> However, it is taking almost 10 hr 21 minutes to complete.\n> \n> Looking for speed improvement?\n\npg_basebackup is single-threaded and the compression is pretty\nCPU-intensive. You could try reducing the compression level, but that\nwill make the backups larger, of course. Also, there's a limit to how\nfar that will get you- once you get to \"no compression\", that's just as\nfast as pg_basebackup can run.\n\nIf you're interested in a backup tool which can operate in parallel, you\nmight want to look at pgbackrest.\n\nThanks!\n\nStephen", "msg_date": "Fri, 14 Oct 2016 09:37:51 -0400", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_basebackup running slow" } ]
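A sketch of the compression trade-off Stephen describes, starting from the thread's original command; -Z 1 is pg_basebackup's lightest gzip level, and the follow-up pigz pass (its thread count of 8 is an illustrative assumption) applies only if server-side compression is dropped entirely:

    # Lighter compression: a bigger archive, but far less CPU spent per byte.
    pg_basebackup -v -D /pgbackup/$bkupdir -Ft -Z 1 -c fast

    # Or take the backup uncompressed and compress the tars afterwards in parallel.
    pg_basebackup -v -D /pgbackup/$bkupdir -Ft -c fast
    pigz -p 8 /pgbackup/$bkupdir/*.tar

Either variant trades temporary disk space for wall-clock time; a parallel-capable tool such as pgbackrest sidesteps the single-threaded bottleneck altogether.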
[ { "msg_contents": "The depesz link for explain (analyze, buffers) is shown below for 3\ndifferent queries. The first two queries show a log dump of the postgres\nlog, showing a query that was generated by Java Hibernate. The third query\nwas one I wrote and ran in pgadmin that I think is similar to what\nHibernate is doing. You can see in the first link query plan output that\nthe query shown below is doing a sequential scan on REMINDERS which I do\nnot understand since REMINDERS has an index on FK_USER. That isn't the\nfocus of my question. In the second query I forced it to use the index but\nit wasn't any faster. Now, if I go into pgadmin and execute what I thought\nwas an equivalent query it is much faster (less than 2ms vs. 2000 ms). My\nquestion is basically: why is the query I wrote so much faster than the\nhibernate ones, when the query appears essentially the same?\n\nSchema for REMINDERS table (SQL shown):\n\nCREATE TABLE pgsch.reminders\n(\n id bigint NOT NULL,\n fk_user bigint NOT NULL,\n date timestamp without time zone NOT NULL,\n finished character(1) COLLATE \"default\".pg_catalog NOT NULL DEFAULT\n'F'::bpchar,\n subject character varying(255) COLLATE \"default\".pg_catalog NOT NULL,\n is_on character(1) COLLATE \"default\".pg_catalog NOT NULL DEFAULT\n'T'::bpchar,\n body text COLLATE \"default\".pg_catalog,\n CONSTRAINT reminders_pkey PRIMARY KEY (id),\n CONSTRAINT usr_cstr FOREIGN KEY (fk_user)\n REFERENCES pgsch.user (id) MATCH SIMPLE\n ON UPDATE NO ACTION\n ON DELETE NO ACTION\n DEFERRABLE INITIALLY DEFERRED\n)\nWITH (\n OIDS = FALSE\n)\nTABLESPACE pg_default;\n\nCREATE INDEX remidx\n ON pgsch.reminders USING btree\n (fk_user)\n TABLESPACE pg_default;\n\n__________________________________________________________________\n\n\nAdditional info:\nPostgreSQL 9.5.3\nreminders table has 66K rows\nSettings used in postgres conf:\nauto_explain.log_analyze = true, auto_explain.log_buffers = true,\ntrack_io_timing = true, work_mem = 256MB, shared_buffers =\n8GB, effective_cache_size = 25GB\n\nAdditional info from the postgres log for the first query:\n\n2016-10-14 11:23:38.107 EDT >LOG: duration: 0.798 ms parse <unnamed>:\nselect distinct this_.FK_USER as y0_ from pgsch.REMINDERS this_ where\nthis_.FK_USER in ($1, ..., $999)\n2016-10-14 11:23:38.109 EDT >LOG: duration: 1.164 ms bind <unnamed>:\nselect distinct this_.FK_USER as y0_ from pgsch.REMINDERS this_ where\nthis_.FK_USER in ($1, ..., $999)\n\n >DETAIL: parameters: $1 = '213', $2 = '382', $3 = '131', $4 = '174', $5 =\n'885', ..., $992 = '830', $993 = '333', $994 = '414', $995 = '481', $996 =\n'454', $997 = '728', $998 = '281', $999 = '717'\n duration: 1571.404 ms execute <unnamed>: select distinct this_.FK_USER\nas y0_ from pgsch.REMINDERS this_ where this_.FK_USER in ($1, ..., $999)\n parameters: $1 = '213', $2 = '382', $3 = '131', $4 = '174', $5 = '885',\n..., $992 = '830', $993 = '333', $994 = '414', $995 = '481', $996 = '454',\n$997 = '728', $998 = '281', $999 = '717'\n\n 2016-10-14 11:23:39.682 EDT >LOG: duration: 1571.388 ms plan:\n Query Text: select distinct this_.FK_USER as y0_ from\npgsch.REMINDERS this_ where this_.FK_USER in ($1, ..., $999)\nhttps://explain.depesz.com/s/nxMH\n\nThen I put an additional setting in my postgres.conf (enable_seqscan = off)\nand got this for the second query:\nhttps://explain.depesz.com/s/1UE0\n\n So as you can see it didn't really improve anything even though it was\nusing the index.\n\n However, if I run the following query in pgadmin: \"explain(analyze,\nbuffers) select distinct 
this_.FK_USER as y0_ from pgsch.REMINDERS this_\nwhere this_.FK_USER in (1...1000);\"\n I get the following output. Note that 1...1000 here are 1000 values I\ngenerated. I ran this a bunch of times and the randomness or values\nthemselves don't seem to make much difference (always faster than 2ms). I\nalso ran this query as a prepared statement with no noticeable difference,\nsince I figured that's what Hibernate is doing.\n https://explain.depesz.com/s/EEb\n\n\nWhy does the third query (pgadmin one) show 270 for the rows whereas the\nquery from the postgres log (initiated by hibernate) shows 65597 rows,\nwhich is close to the real number of rows?\nAny other tips why the hibernate query is so slow compared to the pgadmin\none? I am totally stumped here and out of ideas.\n\nThanks for any help or suggestions anyone can provide.\n\nThe depesz link for explain (analyze, buffers) is shown below for 3 different queries. The first two queries show a log dump of the postgres log, showing a query that was generated by Java Hibernate. The third query was one I wrote and ran in pgadmin that I think is similar to what Hibernate is doing. You can see in the first link query plan output that the query shown below is doing a sequential scan on REMINDERS which I do not understand since REMINDERS has an index on FK_USER. That isn't the focus of my question. In the second query I forced it to use the index but it wasn't any faster. Now, if I go into pgadmin and execute what I thought was an equivalent query it is much faster (less than 2ms vs. 2000 ms). My question is basically: why is the query I wrote so much faster than the hibernate ones, when the query appears essentially the same?Schema for REMINDERS table (SQL shown):CREATE TABLE pgsch.reminders(    id bigint NOT NULL,    fk_user bigint NOT NULL,    date timestamp without time zone NOT NULL,    finished character(1) COLLATE \"default\".pg_catalog NOT NULL DEFAULT 'F'::bpchar,    subject character varying(255) COLLATE \"default\".pg_catalog NOT NULL,    is_on character(1) COLLATE \"default\".pg_catalog NOT NULL DEFAULT 'T'::bpchar,    body text COLLATE \"default\".pg_catalog,    CONSTRAINT reminders_pkey PRIMARY KEY (id),    CONSTRAINT usr_cstr FOREIGN KEY (fk_user)        REFERENCES pgsch.user (id) MATCH SIMPLE        ON UPDATE NO ACTION        ON DELETE NO ACTION        DEFERRABLE INITIALLY DEFERRED)WITH (    OIDS = FALSE)TABLESPACE pg_default;CREATE INDEX remidx    ON pgsch.reminders USING btree    (fk_user)    TABLESPACE pg_default;__________________________________________________________________Additional info:PostgreSQL 9.5.3reminders table has 66K rowsSettings used in postgres conf: auto_explain.log_analyze = true, auto_explain.log_buffers = true, track_io_timing = true, work_mem = 256MB, shared_buffers = 8GB, effective_cache_size = 25GBAdditional info from the postgres log for the first query:2016-10-14 11:23:38.107 EDT >LOG:  duration: 0.798 ms  parse <unnamed>: select distinct this_.FK_USER as y0_ from pgsch.REMINDERS this_ where this_.FK_USER in ($1, ..., $999)2016-10-14 11:23:38.109 EDT >LOG:  duration: 1.164 ms  bind <unnamed>: select distinct this_.FK_USER as y0_ from pgsch.REMINDERS this_ where this_.FK_USER in ($1, ..., $999) >DETAIL:  parameters: $1 = '213', $2 = '382', $3 = '131', $4 = '174', $5 = '885', ..., $992 = '830', $993 = '333', $994 = '414', $995 = '481', $996 = '454', $997 = '728', $998 = '281', $999 = '717'  duration: 1571.404 ms  execute <unnamed>: select distinct this_.FK_USER as y0_ from pgsch.REMINDERS this_ where 
", "msg_date": "Fri, 14 Oct 2016 13:27:04 -0400", "msg_from": "Kyle Moser <[email protected]>", "msg_from_op": true, "msg_subject": "Hibernate generated query slow compared to 'equivalent' hand written\n one" }, { "msg_contents": "Kyle Moser <[email protected]> writes:\n> The depesz link for explain (analyze, buffers) is shown below for 3\n> different queries. The first two queries show a log dump of the postgres\n> log, showing a query that was generated by Java Hibernate. The third query\n> was one I wrote and ran in pgadmin that I think is similar to what\n> Hibernate is doing.\n\nIt's not all that similar: according to the EXPLAIN output, the condition\nHibernate is generating is\n\nFilter: ((FK_USER)::numeric = ANY ('{213,382,131,...,717}'::numeric[]))\n\nwhereas your handwritten query is generating\n\nIndex Cond: (fk_user = ANY ('{70,150,1248,1269,1530,...,199954}'::bigint[]))\n\nIOW, Hibernate is telling the server that the parameters it's supplying\nare NUMERIC not INTEGER, which results in a query using numeric_eq, which\ncan't be indexed by a bigint index.\n\nIf you can't find a hammer big enough to persuade Hibernate that it's\ndealing with integers/bigints rather than numerics, you could probably\nregain most of the performance by creating an index on (FK_USER::numeric).
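\n\nFor instance, something like this (an untested sketch --- pick whatever\nindex name you like):\n\nCREATE INDEX remidx_numeric ON pgsch.reminders ((fk_user::numeric));\n\nWith that in place, the numeric_eq comparison Hibernate is generating has\nan index it can actually use.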
\n\nBTW, why is one of your EXPLAINs showing the identifiers in upper case\nand the other in lower case? One could be forgiven for wondering if\nthese were really against the same data.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Oct 2016 13:46:31 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hibernate generated query slow compared to 'equivalent' hand\n written one" }, { "msg_contents": "Tom,\n\nThanks so much for the response. They are the same data, that was due to\ndeidentification on my part. So even though the second Hibernate query says\n\"index only scan\" (in addition to the filter, as you said) it is\ninefficient. Why does it say index only scan if it can't use the index due\nto the types being numeric and the index being bigint? (I suppose my\nquestion here is how to interpret the output properly - so I don't make\nthis mistake again).\n\nOn Fri, Oct 14, 2016 at 1:46 PM, Tom Lane <[email protected]> wrote:\n\n> Kyle Moser <[email protected]> writes:\n> > The depesz link for explain (analyze, buffers) is shown below for 3\n> > different queries. The first two queries show a log dump of the postgres\n> > log, showing a query that was generated by Java Hibernate. The third\n> query\n> > was one I wrote and ran in pgadmin that I think is similar to what\n> > Hibernate is doing.\n>\n> It's not all that similar: according to the EXPLAIN output, the condition\n> Hibernate is generating is\n>\n> Filter: ((FK_USER)::numeric = ANY ('{213,382,131,...,717}'::numeric[]))\n>\n> whereas your handwritten query is generating\n>\n> Index Cond: (fk_user = ANY ('{70,150,1248,1269,1530,...,\n> 199954}'::bigint[]))\n>\n> IOW, Hibernate is telling the server that the parameters it's supplying\n> are NUMERIC not INTEGER, which results in a query using numeric_eq, which\n> can't be indexed by a bigint index.\n>\n> If you can't find a hammer big enough to persuade Hibernate that it's\n> dealing with integers/bigints rather than numerics, you could probably\n> regain most of the performance by creating an index on (FK_USER::numeric).\n>\n> BTW, why is one of your EXPLAINs showing the identifiers in upper case\n> and the other in lower case? One could be forgiven for wondering if\n> these were really against the same data.\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 14 Oct 2016 14:09:56 -0400", "msg_from": "Kyle Moser <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hibernate generated query slow compared to 'equivalent'\n hand written one" }, { "msg_contents": "Kyle Moser <[email protected]> writes:\n> Thanks so much for the response. They are the same data, that was due to\n> deidentification on my part. So even though the second Hibernate query says\n> \"index only scan\" (in addition to the filter, as you said) it is\n> inefficient. Why does it say index only scan if it can't use the index due\n> to the types being numeric and the index being bigint? (I suppose my\n> question here is how to interpret the output properly - so I don't make\n> this mistake again).\n\nThe key thing to notice about that is that it says \"Filter\" not\n\"Index Cond\". That means it's pulling data from the index but\nnot making use of the index's search ability --- that is, it's\nscanning every index entry and applying the \"IN\" condition to the\nvalue, in much the same way as it'd do with heap entries in a plain\nseqscan. That's a pretty silly plan, which in most cases you would\nnot get if you hadn't forced it.
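\n\nIf you want to see the two shapes side by side from psql, here's a quick\nsketch using your table (the parameter's declared type is the whole story\nhere):\n\nPREPARE p_numeric(numeric) AS\n  SELECT DISTINCT fk_user FROM pgsch.reminders WHERE fk_user IN ($1);\nPREPARE p_bigint(bigint) AS\n  SELECT DISTINCT fk_user FROM pgsch.reminders WHERE fk_user IN ($1);\nEXPLAIN EXECUTE p_numeric(213);  -- Filter: ((fk_user)::numeric = ...)\nEXPLAIN EXECUTE p_bigint(213);   -- Index Cond: (fk_user = ...)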
\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 14 Oct 2016 14:37:27 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hibernate generated query slow compared to 'equivalent' hand\n written one" } ]
[ { "msg_contents": "How fast is Postgres's string concatenation in comparison to the various Python string concatenation? I'm using PREPARE statements for my SELECT queries for my web server.\n\nI'm wondering if I should just generate my JSON API (or even HTML) strings in Postgres directly, instead of in Python. This would involve a few IF-THEN-ELSE (in Python) which I convert to CASE-WHEN (in Postgres) as well.\n\nI’m not sure about the internals of Postgres and how it compares speedwise to the Python bytecode interpreter (and future JIT compilers like PyPy). Is Postgres generating bytecode and interpreting that for string concatenation & Case statements?\n\n-bobby\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 18 Oct 2016 16:35:20 -0400", "msg_from": "Bobby Mozumder <[email protected]>", "msg_from_op": true, "msg_subject": "Should I generate strings in Postgres of Python?" }, { "msg_contents": "This strikes me as something that shouldn't matter in the vast majority of\napplications. Putting a bunch of logic for rendering an\napplication-specific format of your data in prepared statements or stored\nprocedures in your database violates the separation of concerns that most\nfolks like to maintain between the layers of an application.\n\nThe difference in string concatenation performance is unlikely to be a\nsignificant proportion of total request latency unless you are generating\nvery long strings from very short components very inefficiently (appending\nin a loop with immutable strings, for example). Otherwise, waiting on disk\nand network are likely to be a far higher percentage of total request\nlatency than string concatenation. Additionally, it is usually vastly\ncheaper to scale your application layer horizontally than it is to scale a\ndatabase, so even if the application logic is slightly slower, it will\nusually be cheaper to throw more compute horsepower at the application\nlayer if/when latency starts to become a problem unless latency is\nproblematic even when serving only a single request at a time.\n\nUse your database to store data and your application to render the data in\nan application-specific manner. That way, if you end up with multiple\napplications requiring different representations, you don't have to\naccommodate both in your data storage and retrieval layer.\n\nIf your rendering code is a significant percentage of total latency,\nconsider caching the rendered results rather than moving the rendering\nlogic into your data storage layer - which is unlikely to be significantly\nfaster, anyway. Most mature languages/environments do basic string\nmanipulation pretty efficiently when left to their own devices.\n\nOn Tue, Oct 18, 2016 at 1:35 PM, Bobby Mozumder <[email protected]> wrote:\n\n> How fast is Postgres's string concatenation in comparison to the various\n> Python string concatenation? I'm using PREPARE statements for my SELECT\n> queries for my web server.\n>\n> I'm wondering if I should just generate my JSON API (or even HTML) strings\n> in Postgres directly, instead of in Python. 
This would involve a few\n> IF-THEN-ELSE (in Python) which I convert to CASE-WHEN (in Postgres) as well.\n>\n> I’m not sure about the internals of Postgres and how it compares speedwise\n> to the Python bytecode interpreter (and future JIT compilers like PyPy).\n> Is Postgres generating bytecode and interpreting that for string\n> concatenation & Case statements?\n>\n> -bobby\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 18 Oct 2016 15:53:54 -0700", "msg_from": "Sam Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Should I generate strings in Postgres of Python?" } ]
[ { "msg_contents": "Hello:\n\nI've a question about the performance of a query plan that uses a nested\nloop, and whose inner loop uses an index scan. Would you be so kind to\nhelp me, please?\n\nI'm using PostgreSQL 9.5.4 on Ubuntu 14.04 64-bit (kernel 4.8.2). I've 3\ntables, which are \"answers\", \"test_completions\" and \"courses\". The first\none contains around ~30 million rows, whereas the others only have a few\nthousands each one. The query that I'm performing is very simple,\nalthough retrieves lots of rows:\n\n ---------------------\n SELECT answers.*\n FROM answers\n JOIN test_completions ON test_completions.test_completion_id =\nanswers.test_completion_id\n JOIN courses ON courses.course_id = test_completions.course_id\n WHERE courses.group_id = 2;\n ---------------------\n\n\nThis yields the following plan:\n\n ---------------------\n Nested Loop (cost=245.92..383723.28 rows=7109606 width=38) (actual\ntime=1.091..2616.553 rows=8906075 loops=1)\n -> Hash Join (cost=245.36..539.81 rows=3081 width=8) (actual\ntime=1.077..6.087 rows=3123 loops=1)\n Hash Cond: (test_completions.course_id =\ncourses.course_id)\n -> Seq Scan on test_completions (cost=0.00..214.65\nrows=13065 width=16) (actual time=0.005..1.051 rows=13065 loops=1)\n -> Hash (cost=204.11..204.11 rows=3300 width=8)\n(actual time=1.063..1.063 rows=3300 loops=1)\n Buckets: 4096 Batches: 1 Memory Usage:\n161kB\n -> Bitmap Heap Scan on courses \n(cost=45.86..204.11 rows=3300 width=8) (actual time=0.186..0.777\nrows=3300 loops=1)\n Recheck Cond: (group_id = 2)\n Heap Blocks: exact=117\n -> Bitmap Index Scan on\nfki_courses_group_id_fkey (cost=0.00..45.03 rows=3300 width=0) (actual\ntime=0.172..0.172 rows=3300 loops=1)\n Index Cond:\n(group_id = 2)\n ### HERE ###\n -> Index Scan using fki_answers_test_completion_id_fkey on\nanswers (cost=0.56..96.90 rows=2747 width=38) (actual time=0.007..0.558\nrows=2852 loops=3123)\n ### HERE ###\n Index Cond: (test_completion_id =\ntest_completions.test_completion_id)\n Planning time: 0.523 ms\n Execution time: 2805.530 ms\n ---------------------\n\nMy doubt is about the inner loop of the nested loop, the one that I've\ndelimited with ### HERE ### . This loop is the part that, obviously,\nmore time consumes. Because its run 3,123 times and requires lots of\naccesses to multiple database pages. But, Is there anything that I can\ndo to reduce even more the time spent in this part? Apart of:\n\n * Clustering the \"answers\" table.\n * Upgrading PostgreSQL to version 9.6, to take advantage of the\nindex scans in parallel.\n * Upgrading the hardware.\n\nThank you!\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 19 Oct 2016 12:54:46 +0200", "msg_from": "negora <[email protected]>", "msg_from_op": true, "msg_subject": "Performance of a nested loop, whose inner loop uses an index scan." }, { "msg_contents": "On Wed, Oct 19, 2016 at 8:54 AM, negora <[email protected]> wrote:\n\n> Nested Loop (cost=245.92..383723.28 rows=7109606 width=38) (actual\n> time=1.091..2616.553 rows=8906075 loops=1)\n>\n\nI wonder about the use-case for this query, because it returns more than 8M\nrows, so 2.6 seconds that sounds that much for so many rows. Is it for an\nend user application? 
Isn't there any kind of pagination?\n\n\n-- \nMatheus de Oliveira\n", "msg_date": "Wed, 19 Oct 2016 09:15:59 -0200", "msg_from": "Matheus de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance of a nested loop, whose inner loop uses an\n index scan." }, { "msg_contents": "Hi Matheus:\n\nThanks for your prompt answer. It's for a web application. This part of\nthe application allows exporting the answers to a CSV file, so pagination\nisn't possible here. The user can choose among several filters. The group\nof the courses is one of them. She can combine as many filters as she\nwants. So the query that I presented in my previous message was one of\nthe \"broadest\" examples. But it's the one that I'm interested in.\n\nReally, I'm more interested in the relative time than in the absolute\ntime, because I could create the file asynchronously, in the background,\nso that the user downloaded it at a later time. That's not the problem.\nMy doubt is whether 2.8 seconds is the best that I can do. Is it an\nacceptable time?\n\nThank you! ;)\n\nOn 19/10/16 13:15, Matheus de Oliveira wrote:\n> I wonder about the use-case for this query, because it returns more than\n> 8M rows, so 2.6 seconds that sounds that much for so many rows. Is it\n> for an end user application? Isn't there any kind of pagination?\n", "msg_date": "Wed, 19 Oct 2016 19:07:13 +0200", "msg_from": "negora <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance of a nested loop, whose inner loop uses an\n index scan." } ]
[ { "msg_contents": "Hi\n\n\nI have two main problems, and they are slow updates and joins. But when I build up the table met_vaer_wisline.nora_bc25_observation with more than 4 billion rows, we are able to insert about 85.000 rows per second, so that's OK.\n\n\nThe problems start when I need to update or join with other tables using this table.\n\n\nIn this example I have two tables, one with 4 billion rows and another with 50000 rows, and then I try to do a standard simple join between these two tables, and this takes 397391 ms with this SQL (the query plan is further down)\n\nSELECT o.*\n\nFROM\n\nmet_vaer_wisline.nora_bc25_observation o,\n\nmet_vaer_wisline.new_data n\n\nWHERE o.point_uid_ref = n.id_point AND o.epoch = n.epoch\n\nbut if I use this SQL it takes 25727 ms (the query plan is further down).\n\nSELECT\n\no.*\n\nFROM\n\n(\n\nSELECT o.*\n\nFROM\n\nmet_vaer_wisline.nora_bc25_observation o\n\nWHERE\n\nEXISTS (SELECT 1 FROM (SELECT distinct epoch FROM met_vaer_wisline.new_data) AS n WHERE n.epoch = o.epoch )\n\nAND\n\nEXISTS (SELECT 1 FROM (SELECT distinct id_point FROM met_vaer_wisline.new_data) AS n WHERE n.id_point = o.point_uid_ref )\n\n) AS o,\n\nmet_vaer_wisline.new_data n\n\nWHERE o.point_uid_ref = n.id_point AND o.epoch = n.epoch\n\n\nThe columns are indexed and I did run vacuum analyze on both tables before I tested. work_mem is 200MB, but I also tested with much more work_mem and that does not change the execution time.\n\nThe CPU goes to 100% when the query is running and there is no IOWait while the SQL is running.\n\n\nWhy is the second SQL 15 times faster?\n\nIs this normal or have I done something wrong here?\n\n\nI have tested clustering around an index but that did not help.\n\n\nIs the only way to fix slow updates and joins to use partitioning?\n\nhttps://www.postgresql.org/docs/9.6/static/ddl-partitioning.html\n\n\n\nHere are the SQL and more info\n\n\nEXPLAIN analyze\n\nSELECT o.*\n\nFROM\n\nmet_vaer_wisline.nora_bc25_observation o,\n\nmet_vaer_wisline.new_data n\n\nWHERE o.point_uid_ref = n.id_point AND o.epoch = n.epoch\n\n\n\n-[ RECORD 1 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Merge Join (cost=0.87..34374722.51 rows=52579 width=16) (actual time=0.127..397379.844 rows=50000 loops=1)\n\n-[ RECORD 2 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Merge Cond: (n.id_point = o.point_uid_ref)\n\n-[ RECORD 3 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Join Filter: (o.epoch = n.epoch)\n\n-[ RECORD 4 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Rows Removed by Join Filter: 2179150000\n\n-[ RECORD 5 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY 
PLAN | -> Index Scan using idx_met_vaer_wisline_new_data_id_point on new_data n (cost=0.29..23802.89 rows=50000 width=8) (actual time=0.024..16.736 rows=50000 loops=1)\n\n-[ RECORD 6 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Index Scan using idx_met_vaer_wisline_nora_bc25_observation_point_uid_ref on nora_bc25_observation o (cost=0.58..2927642364.25 rows=4263866624 width=16) (actual time=0.016..210486.136 rows=2179200001 loops=1)\n\n-[ RECORD 7 ]-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Total runtime: 397383.663 ms\n\n\nTime: 397391.388 ms\n\n\n\nEXPLAIN analyze\n\nSELECT\n\no.*\n\nFROM\n\n(\n\nSELECT o.*\n\nFROM\n\nmet_vaer_wisline.nora_bc25_observation o\n\nWHERE\n\nEXISTS (SELECT 1 FROM (SELECT distinct epoch FROM met_vaer_wisline.new_data) AS n WHERE n.epoch = o.epoch )\n\nAND\n\nEXISTS (SELECT 1 FROM (SELECT distinct id_point FROM met_vaer_wisline.new_data) AS n WHERE n.id_point = o.point_uid_ref )\n\n) AS o,\n\nmet_vaer_wisline.new_data n\n\nWHERE o.point_uid_ref = n.id_point AND o.epoch = n.epoch\n\n\n-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Hash Semi Join (cost=1019.70..1039762.81 rows=54862 width=16) (actual time=359.284..25717.838 rows=50096 loops=1)\n\n-[ RECORD 2 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Hash Cond: (o.point_uid_ref = new_data_1.id_point)\n\n-[ RECORD 3 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Nested Loop (cost=0.87..972602.28 rows=24964326 width=16) (actual time=0.287..24412.088 rows=24262088 loops=1)\n\n-[ RECORD 4 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Unique (cost=0.29..1014.29 rows=248 width=4) (actual time=0.117..6.849 rows=248 loops=1)\n\n-[ RECORD 5 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Index Only Scan using idx_met_vaer_wisline_new_data_epoch on new_data (cost=0.29..889.29 rows=50000 width=4) (actual time=0.115..4.521 rows=50000 loops=1)\n\n-[ RECORD 6 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Heap Fetches: 0\n\n-[ RECORD 7 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | 
-> Index Scan using idx_met_vaer_wisline_nora_bc25_observation_epoch on nora_bc25_observation o (cost=0.58..2911.05 rows=100663 width=16) (actual time=0.014..89.512 rows=97831 loops=248)\n\n-[ RECORD 8 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Index Cond: (epoch = new_data.epoch)\n\n-[ RECORD 9 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Hash (cost=1016.31..1016.31 rows=202 width=4) (actual time=16.636..16.636 rows=202 loops=1)\n\n-[ RECORD 10 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Buckets: 1024 Batches: 1 Memory Usage: 8kB\n\n-[ RECORD 11 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Unique (cost=0.29..1014.29 rows=202 width=4) (actual time=0.046..16.544 rows=202 loops=1)\n\n-[ RECORD 12 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | -> Index Only Scan using idx_met_vaer_wisline_new_data_id_point on new_data new_data_1 (cost=0.29..889.29 rows=50000 width=4) (actual time=0.046..11.315 rows=50000 loops=1)\n\n-[ RECORD 13 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Heap Fetches: 0\n\n-[ RECORD 14 ]---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nQUERY PLAN | Total runtime: 25719.120 ms\n\n\nTime: 25727.097 ms\n\n\n\nselect version();\n\n version\n\n--------------------------------------------------------------------------------------------------------------\n\n PostgreSQL 9.3.9 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-9), 64-bit\n\n(1 row)\n\n\n \\d met_vaer_wisline.nora_bc25_observation;\n\nTable \"met_vaer_wisline.nora_bc25_observation\"\n\n Column | Type | Modifiers\n\n--------------------+---------+-----------\n\n point_uid_ref | integer | not null\n\n epoch | integer | not null\n\n windspeed_10m | real |\n\n air_temperature_2m | real |\n\nIndexes:\n\n \"idx_met_vaer_wisline_nora_bc25_observation_epoch\" btree (epoch)\n\n \"idx_met_vaer_wisline_nora_bc25_observation_point_uid_ref\" btree (point_uid_ref)\n\n\n\n\\d met_vaer_wisline.new_data ;\n\n Unlogged table \"met_vaer_wisline.new_data\"\n\n Column | Type | Modifiers\n\n--------------------+-------------------+-----------\n\n windspeed_10m | real |\n\n air_temperature_2m | real |\n\n lon | character varying | not null\n\n lat | character varying | not null\n\n epoch | integer |\n\n epoch_as_numeric | numeric | not null\n\n rest | character varying |\n\n id_point | integer |\n\nIndexes:\n\n \"idx_met_vaer_wisline_new_data_epoch\" btree 
(epoch)\n\n    \"idx_met_vaer_wisline_new_data_id_point\" btree (id_point)\n\n\nvacuum analyze met_vaer_wisline.nora_bc25_observation;\n\n\nvacuum analyze met_vaer_wisline.new_data;\n\n\nSELECT count(*) from met_vaer_wisline.new_data;\n\n count\n\n-------\n\n 50000\n\n(1 row)\n\n\nSELECT count(*) from met_vaer_wisline.nora_bc25_observation ;\n\n count\n\n------------\n\n 4263866304\n\n\nThanks .\n\n\nLars", "msg_date": "Mon, 24 Oct 2016 08:11:48 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Fast insert, but slow join and updates for table with 4 billion rows" }, 
{ "msg_contents": "Lars Aksel Opsahl <[email protected]> writes:\n> In this example I have two tables, one with 4 billion rows and another with 50000 rows, and then I try to do a standard simple join between these two tables, and this takes 397391 ms with this SQL (the query plan is further down)\n\nThis particular query would work a lot better if you had an index on\nnora_bc25_observation (point_uid_ref, epoch), ie both join columns\nin one index. I get the impression that that ought to be the primary\nkey of the table, which would be an even stronger reason to have a\nunique index on it.
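\n\nFor example (untested, and assuming the (point_uid_ref, epoch) pair\nreally is unique --- both columns are already NOT NULL per your \\d output):\n\nALTER TABLE met_vaer_wisline.nora_bc25_observation\n  ADD CONSTRAINT nora_bc25_observation_pkey PRIMARY KEY (point_uid_ref, epoch);\n\nor, if you'd rather not take the lock for the primary-key machinery on\n4 billion rows in one go, a plain\n\nCREATE UNIQUE INDEX CONCURRENTLY nora_bc25_obs_point_epoch_idx\n  ON met_vaer_wisline.nora_bc25_observation (point_uid_ref, epoch);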
\n\n\t\t\tregards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Oct 2016 08:52:08 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast insert,\n but slow join and updates for table with 4 billion rows" }, { "msg_contents": "Hi\n\nYes, this makes both the update and both selects much faster. We are now down to 3000 ms for the select, but then I get a problem with another SQL where I only use epoch in the query.\n\nSELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;\n count\n-------\n 97831\n(1 row)\nTime: 92763.389 ms\n\nTo get the SQL above to work fast it seems like we also need a single index on the epoch column; this means two indexes on the same column, and that eats memory when we have more than 4 billion rows.\n\nIs there any way to avoid having two indexes on the epoch column?\n\nThanks.\n\nLars\n\nEXPLAIN analyze SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;\n-[ RECORD 1 ]-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Aggregate (cost=44016888.13..44016888.14 rows=1 width=42) (actual time=91307.470..91307.471 rows=1 loops=1)\n-[ RECORD 2 ]-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | -> Index Scan using idx_met_vaer_wisline_nora_bc25_observation_test on nora_bc25_observation o (cost=0.58..44016649.38 rows=95500 width=42) (actual time=1.942..91287.495 rows=97831 loops=1)\n-[ RECORD 3 ]-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Index Cond: (epoch = 1288440000)\n-[ RECORD 4 ]-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Total runtime: 91307.534 ms\n\n\nEXPLAIN analyze\nSELECT count(o.*)\nFROM \nmet_vaer_wisline.nora_bc25_observation o,\nmet_vaer_wisline.new_data n\nWHERE o.point_uid_ref = n.id_point AND o.epoch = n.epoch;\n-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Aggregate (cost=131857.71..131857.72 rows=1 width=42) (actual time=182.459..182.459 rows=1 loops=1)\n-[ RECORD 2 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | -> Nested Loop (cost=0.58..131727.00 rows=52283 width=42) (actual time=0.114..177.420 rows=50000 loops=1)\n-[ RECORD 3 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | -> Seq Scan on new_data n (cost=0.00..1136.00 rows=50000 width=8) (actual time=0.050..7.873 rows=50000 loops=1)\n-[ RECORD 4 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | -> Index Scan using idx_met_vaer_wisline_nora_bc25_observation_test on nora_bc25_observation o (cost=0.58..2.60 rows=1 width=50) (actual time=0.003..0.003 rows=1 loops=50000)\n-[ RECORD 5 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Index Cond: ((point_uid_ref = n.id_point) AND (epoch = n.epoch))\n-[ RECORD 6 ]----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nQUERY PLAN | Total runtime: 182.536 ms\n\nTime: 3095.618 ms\n\n\nLars\n \n\n________________________________________\nFra: [email protected] <[email protected]> på vegne av Tom Lane <[email protected]>\nSendt: 24. oktober 2016 14:52\nTil: Lars Aksel Opsahl\nKopi: [email protected]\nEmne: Re: [PERFORM] Fast insert, but slow join and updates for table with 4 billion rows\n\nLars Aksel Opsahl <[email protected]> writes:\n> In this example I have two tables, one with 4 billion rows and another with 50000 rows, and then I try to do a standard simple join between these two tables, and this takes 397391 ms with this SQL (the query plan is further down)\n\nThis particular query would work a lot better if you had an index on\nnora_bc25_observation (point_uid_ref, epoch), ie both join columns\nin one index. I get the impression that that ought to be the primary\nkey of the table, which would be an even stronger reason to have a\nunique index on it.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Oct 2016 20:07:35 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast insert, but slow join and updates for table with 4\n billion rows" }, { "msg_contents": "On Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl <[email protected]> wrote:\n> Hi\n>\n> Yes, this makes both the update and both selects much faster. We are now down to 3000 ms 
for the select, but then I get a problem with another SQL where I only use epoch in the query.\n>\n> SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;\n> count\n> -------\n> 97831\n> (1 row)\n> Time: 92763.389 ms\n>\n> To get the SQL above to work fast it seems like we also need a single index on the epoch column; this means two indexes on the same column, and that eats memory when we have more than 4 billion rows.\n>\n> Is there any way to avoid having two indexes on the epoch column?\n\nYou could try reversing the order. Basically whatever comes first in a\ntwo column index is easier / possible for postgres to use like a\nsingle column index. If not, 
then you're probably stuck with two\nindexes.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Oct 2016 14:23:12 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fast insert, but slow join and updates for table with 4\n billion rows" }, { "msg_contents": "\nHi\n\nYes, that helps. I tested this now on the first column.\n\nThis basically means that only the first column in a multiple-column index may be used in a single-column query.\n\nEXPLAIN analyze SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.point_uid_ref = 15 ;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=45540.97..45540.98 rows=1 width=42) (actual time=24.715..24.715 rows=1 loops=1)\n   ->  Bitmap Heap Scan on nora_bc25_observation o (cost=477.66..45427.40 rows=45430 width=42) (actual time=6.436..19.006 rows=43832 loops=1)\n         Recheck Cond: (point_uid_ref = 15)\n         ->  Bitmap Index Scan on idx_met_vaer_wisline_nora_bc25_observation_test (cost=0.00..466.30 rows=45430 width=0) (actual time=6.320..6.320 rows=43832 loops=1)\n               Index Cond: (point_uid_ref = 15)\n Total runtime: 24.767 ms\n(6 rows)\n\n\nThanks\n\nLars\n\n________________________________________\nFra: Scott Marlowe <[email protected]>\nSendt: 24. oktober 2016 22:23\nTil: Lars Aksel Opsahl\nKopi: Tom Lane; [email protected]\nEmne: Re: [PERFORM] Fast insert, but slow join and updates for table with 4 billion rows\n\nOn Mon, Oct 24, 2016 at 2:07 PM, Lars Aksel Opsahl <[email protected]> wrote:\n> Hi\n>\n> Yes, this makes both the update and both selects much faster. We are now down to 3000 ms for the select, but then I get a problem with another SQL where I only use epoch in the query.\n>\n> SELECT count(o.*) FROM met_vaer_wisline.nora_bc25_observation o WHERE o.epoch = 1288440000;\n> count\n> -------\n> 97831\n> (1 row)\n> Time: 92763.389 ms\n>\n> To get the SQL above to work fast it seems like we also need a single index on the epoch column; this means two indexes on the same column, and that eats memory when we have more than 4 billion rows.\n>\n> Is there any way to avoid having two indexes on the epoch column?\n\nYou could try reversing the order. Basically whatever comes first in a\ntwo column index is easier / possible for postgres to use like a\nsingle column index. If not, then you're probably stuck with two\nindexes.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 24 Oct 2016 20:44:32 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast insert, but slow join and updates for table with 4\n billion rows" }, { "msg_contents": "\nHi\n\nI have now tested both insert and update, and it works extremely well without doing any partitioning of the big table (4.3 billion rows) in this case, when we used a common index as Tom suggested.\n\nWe are able to insert 172.000 rows per second.\n\nThe number of rows is computed from the total time from when we start to read the csv files until the last file is done. We use GNU parallel and run 5 threads. The number of inserts is actually 172.000 * 2, because first we copy the rows into a temp table where we prepare the data, and then we insert them into the common big main table. There are no errors in the log.\n\nWe are able to update 98.000 rows per second.\n\nSince each update also means one insert, we are close to 200.000 inserts and updates per second. For the update we give a column that is null a value. That is done for all the 4.3 billion rows. We run 5 threads in parallel here also, and there are no errors and no deadlocks.\n\nThe problem with duplicated indexes is solvable in this project, because first we add data and then we do the analyses; this means that we can have different indexes while adding data, and we are using that.\n\nIn this project we are going to add about 25 billion geo-located observations which will be used for doing analyses. I suppose that at some level we will have to do partitioning, but so far Postgres has worked extremely well, even though it's based on MVCC.\n\nThe Postgres/PostGIS software and communities are for sure really fun to work with, and the Postgres/PostGIS open source software holds a very high quality.\n\nThanks.\n\nLars\n\n________________________________________\nFra: [email protected] <[email protected]> på vegne av Tom Lane <[email protected]>\nSendt: 24. oktober 2016 14:52\nTil: Lars Aksel Opsahl\nKopi: [email protected]\nEmne: Re: [PERFORM] Fast insert, but slow join and updates for table with 4 billion rows\n\nLars Aksel Opsahl <[email protected]> writes:\n> In this example I have two tables, one with 4 billion rows and another with 50000 rows, and then I try to do a standard simple join between these two tables, and this takes 397391 ms with this SQL (the query plan is further down)\n\nThis particular query would work a lot better if you had an index on\nnora_bc25_observation (point_uid_ref, epoch), ie both join columns\nin one index. I get the impression that that ought to be the primary\nkey of the table, which would be an even stronger reason to have a\nunique index on it.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Oct 2016 20:52:05 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast insert, but slow join and updates for table with 4\n billion rows" } ]
\nWe are able to update 98.000 rows pr. second.\n\nSince each update also means one insert, we are close to 200.000 inserts and updates pr. second. For the update we give a column that is null a value. That is done for all the 4.3 billion rows. We run 5 threads in parallel here also, and there are no errors and no deadlocks. \n\nThe problem with duplication of indexes is solvable in this project, because first we add data and then we do analyses; this means that we can have different indexes when adding data and when using the data.\n\nIn this project we are going to add about 25 billion geo-located observations which will be used for doing analyses. I suppose that we will at some level have to do partitioning, but so far Postgres has worked extremely well, even if it's based on MVCC. \n\nThe Postgres/PostGIS software and communities are for sure really fun to work with, and the Postgres/PostGIS open source software holds a very high quality. \n\nThanks.\n\nLars\n\n________________________________________\nFra: [email protected] <[email protected]> på vegne av Tom Lane <[email protected]>\nSendt: 24. oktober 2016 14:52\nTil: Lars Aksel Opsahl\nKopi: [email protected]\nEmne: Re: [PERFORM] Fast insert, but slow join and updates for table with 4 billion rows\n\nLars Aksel Opsahl <[email protected]> writes:\n> In this example I have two tables, one with 4 billion rows and another with 50000 rows, and then I try to do a standard simple join between these two tables, and this takes 397391 ms. with this SQL (the query plan is added further down)\n\nThis particular query would work a lot better if you had an index on\nnora_bc25_observation (point_uid_ref, epoch), ie both join columns\nin one index. I get the impression that that ought to be the primary\nkey of the table, which would be an even stronger reason to have a\nunique index on it.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 25 Oct 2016 20:52:05 +0000", "msg_from": "Lars Aksel Opsahl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Fast insert, but slow join and updates for table with 4\n billion rows" } ]
[ { "msg_contents": "Hi.\n\nQuery run time degraded after migration from Pg 9.0 + postgis 1.5 to Pg 9.4\n+ postgis 2.2.\n\n1 ms versus 7 ms.\n\nSame query, same data, same schema, similar hardware. Data is small and\nfits in cache.\n\nEXPLAIN shows heap scan cost increase. What can be the reason for 40-fold\nincrease in page scans needed to run Bitmap Heap Scan with Filter and\nRecheck?\n\nGIST index performance looks OK.\n\n\nPostgreSQL 9.0.23 on x86_64-suse-linux-gnu, compiled by GCC gcc (SUSE\nLinux) 4.8.3 20140627 [gcc-4_8-branch revision 212064], 64-bit\nPOSTGIS=\"1.5.4\" GEOS=\"3.4.2-CAPI-1.8.2 r3921\" PROJ=\"Rel. 4.8.0, 6 March\n2012\" LIBXML=\"2.7.8\" USE_STATS (procs from 1.5 r5976 need upgrade)\n-> https://explain.depesz.com/s/C3Vw\n\nPostgreSQL 9.4.7 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)\n4.8.5, 64-bit\nPOSTGIS=\"2.2.2 r14797\" GEOS=\"3.5.0-CAPI-1.9.0 r4084\" PROJ=\"Rel. 4.9.2, 08\nSeptember 2015\" GDAL=\"GDAL 2.1.0, released 2016/04/25 GDAL_DATA not found\"\nLIBXML=\"2.9.1\" LIBJSON=\"0.12\" (core procs from \"2.2.1 r14555\" need upgrade)\nTOPOLOGY (topology procs from \"2.2.1 r14555\" need upgrade) RASTER (raster\nprocs from \"2.2.1 r14555\" need upgrade)\n-> https://explain.depesz.com/s/24GA\n\n\nQuery:\n\nSELECT\nround(meters_to_miles(st_distance_sphere(ST_GeomFromText('POINT(-77.0364\n38.89524)', 4326),llpoint))::numeric,2) as _distance FROM storelocator\nWHERE st_expand(ST_GeomFromText('POINT(-77.0364 38.89524)', 4326),\nmiles_to_degree(50,38.89524)) && llpoint AND\nst_distance_sphere(ST_GeomFromText('POINT(-77.0364 38.89524)', 4326),\nllpoint) <= miles_to_meters(50) ORDER BY _distance LIMIT 10;\n\n\n\nthanks for any suggestions / ideas.\n\n\n\nFilip\n
", "msg_date": "Wed, 26 Oct 2016 15:48:42 +0200", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "query slowdown after 9.0 -> 9.4 migration" }, { "msg_contents": "On 10/26/2016 03:48 PM, Filip Rembiałkowski wrote:\n> Hi.\n>\n> Query run time degraded after migration from Pg 9.0 + postgis 1.5 to Pg\n> 9.4 + postgis 2.2.\n>\n> 1 ms versus 7 ms.\n>\n> Same query, same data, same schema, similar hardware. Data is small and\n> fits in cache.\n>\n> EXPLAIN shows heap scan cost increase. What can be the reason for\n> 40-fold increase in page scans needed to run Bitmap Heap Scan with\n> Filter and Recheck?\n>\n\nOn 9.0 the scan accesses only 8 buffers:\n\nBuffers: shared hit=8\n\nwhile on 9.4 it has to inspect 316 of them:\n\nBuffers: shared hit=316\n\nPerhaps the table is organized / sorted differently, or something like \nthat. How did you do the upgrade?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Oct 2016 00:55:22 +0200", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slowdown after 9.0 -> 9.4 migration" }, { "msg_contents": "Tomas Vondra <[email protected]> wrote:\n\n> On 10/26/2016 03:48 PM, Filip Rembiałkowski wrote:\n>> Hi.\n>>\n>> Query run time degraded after migration from Pg 9.0 + postgis 1.5 to Pg\n>> 9.4 + postgis 2.2.\n>> EXPLAIN shows heap scan cost increase. What can be the reason for\n>> 40-fold increase in page scans needed to run Bitmap Heap Scan with\n>> Filter and Recheck?\n>>\n>\n> On 9.0 the scan accesses only 8 buffers:\n>\n> Buffers: shared hit=8\n>\n> while on 9.4 it has to inspect 316 of them:\n>\n> Buffers: shared hit=316\n\nnice point ;-)\n\n\n>\n> Perhaps the table is organized / sorted differently, or something like \n> that. How did you do the upgrade?\n\nMaybe table-bloat? Filip, check if autovacuum runs properly.
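\n\nFor instance, something like this (a sketch; adjust the table name):\n\nSELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup\n  FROM pg_stat_user_tables\n WHERE relname = 'storelocator';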
\n\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 27 Oct 2016 07:38:38 +0200", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slowdown after 9.0 -> 9.4 migration" }, { "msg_contents": "On Thu, Oct 27, 2016 at 7:38 AM, Andreas Kretschmer <\[email protected]> wrote:\n\n> Tomas Vondra <[email protected]> wrote:\n>\n> >\n> > Perhaps the table is organized / sorted differently, or something like\n> > that. How did you do the upgrade?\n>\n>\nNothing special, dump + reload. The table in question is tiny - 280 kB, 859\nrows.\n\n\n\n> Maybe table-bloat? Filip, check if autovacuum runs properly.\n>\nYes, it does. Just to be sure I ran VACUUM FULL, ANALYZE and REINDEX on all\ntables and indexes - no change :-(\n\nAny other ideas (before drawing on heavy tools like strace)?\n\n\nDoes it make sense to ask on postgis-users list?\n\nThanks,\nFilip\n", "msg_date": "Fri, 28 Oct 2016 03:37:19 +0200", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query slowdown after 9.0 -> 9.4 migration" }, { "msg_contents": "On 10/27/16 8:37 PM, Filip Rembiałkowski wrote:\n> Does it make sense to ask on postgis-users list?\n\nYes. I suspect that the reason Buffers: shared hit is so high is because \nof something st_distance_sphere() is doing.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 18:43:02 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query slowdown after 9.0 -> 9.4 migration" } ]
[ { "msg_contents": "This is a weird problem. A \"limit 5\" query runs quickly as expected, but a\n\"limit 1\" query never finishes -- it just blasts along at 100% CPU until I\ngive up. And this is a join between two small tables (262K rows and 109K\nrows). Both tables were recently analyzed.\n\nThis is Postgres 9.3.5 (yes, we'll be upgrading soon...), running on Ubuntu\n12.04.\n\nNote that \"version\" is a view in the schema being queried, but\nregistry.version is a real table. The idea is that the view in the schema\nbeing queried is a filter that narrows the \"registry.version\" table to only\nrows relevant to this schema.\n\ns=> \\d+ version\n View \"chemdiv_bb.version\"\n Column | Type | Modifiers | Storage |\nDescription\n------------+-----------------------------+-----------+----------+-------------\n version_id | integer | | plain |\n parent_id | integer | | plain |\n isosmiles | text | | extended |\n created | timestamp without time zone | | plain |\nView definition:\n SELECT rv.version_id,\n rv.parent_id,\n rv.isosmiles,\n rv.created\n FROM registry.version rv\n JOIN ( SELECT DISTINCT sample.version_id\n FROM sample) ss USING (version_id);\n\n\nThe column \"version_id\" is indexed on both tables (it's PK on the\nregistry.version table).\n\nexplain select version_id from version order by version_id desc limit 5;\n\n Limit (cost=14577.29..14577.70 rows=5 width=4) (actual\ntime=1077.113..1077.162 rows=5 loops=1)\n -> Merge Join (cost=14577.29..23681.16 rows=109114 width=4) (actual\ntime=1077.108..1077.142 rows=5 loops=1)\n Merge Cond: (rv.version_id = sample.version_id)\n -> Index Only Scan Backward using version_pkey on version rv\n (cost=0.42..6812.85 rows=261895 width=4) (actual time=0.045..126.641\nrows=70125 loops=1)\n Heap Fetches: 0\n -> Sort (cost=14576.87..14849.65 rows=109114 width=4) (actual\ntime=830.842..830.851 rows=5 loops=1)\n Sort Key: sample.version_id\n Sort Method: quicksort Memory: 8188kB\n -> HashAggregate (cost=3264.21..4355.35 rows=109114\nwidth=4) (actual time=420.018..630.393 rows=109133 loops=1)\n -> Seq Scan on sample (cost=0.00..2991.37\nrows=109137 width=4) (actual time=0.012..206.822 rows=109137 loops=1)\n Total runtime: 1078.363 ms\n\nNo problem, works as expected. But lower the limit to 1 and it never\nfinishes. I can't show \"explain analyze ...\", so here's the output from\njust \"explain\".\n\nexplain select version_id from version order by version_id desc limit 1;\n\n Limit (cost=3264.63..7193.14 rows=1 width=4)\n -> Nested Loop (cost=3264.63..428658697.57 rows=109114 width=4)\n Join Filter: (rv.version_id = sample.version_id)\n -> Index Only Scan Backward using version_pkey on version rv\n (cost=0.42..6812.85 rows=261895 width=4)\n -> Materialize (cost=3264.21..5992.06 rows=109114 width=4)\n -> HashAggregate (cost=3264.21..4355.35 rows=109114\nwidth=4)\n -> Seq Scan on sample (cost=0.00..2991.37\nrows=109137 width=4)\n\nWhy would this trivial query run forever at 100% CPU?\n\nThis, by the way, is the \"old fashioned\" way to do max(version_id), which\nused to be slow in Postgres. I have switched the query to use\nmax(version_id), but worry that other queries will get hung up for no\napparent reason.\n\nThanks,\nCraig\n
", "msg_date": "Thu, 27 Oct 2016 13:46:05 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "limit 1 on view never finishes" }, { "msg_contents": "On 10/27/16 3:46 PM, Craig James wrote:\n>\n> Limit (cost=3264.63..7193.14 rows=1 width=4)\n> -> Nested Loop (cost=3264.63..428658697.57 rows=109114 width=4)\n> Join Filter: (rv.version_id = sample.version_id)\n> -> Index Only Scan Backward using version_pkey on version rv\n> (cost=0.42..6812.85 rows=261895 width=4)\n> -> Materialize (cost=3264.21..5992.06 rows=109114 width=4)\n> -> HashAggregate (cost=3264.21..4355.35 rows=109114\n> width=4)\n> -> Seq Scan on sample (cost=0.00..2991.37\n> rows=109137 width=4)\n>\n> Why would this trivial query run forever at 100% CPU?\n\nMy bet is that there's a lot of rows in version that have a higher \nversion_id than what's in sample. That means a lot of repeated scans \nthrough the tuplestore underneath the Materialize node.\n\nIf you can remove duplicates from sample and get rid of the DISTINCT, \nthis will probably get a better plan. If you can't do that then you \ncould try changing the JOIN to an IN:\n\nSELECT ... FROM registry.version rv\n WHERE rv.version_id IN (SELECT version_id FROM sample)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 18:37:03 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: limit 1 on view never finishes" } ]
[ { "msg_contents": "I've recently been blessed to move one of my databases onto a huge IBM P8 computer. It's a PowerPC architecture with 20 8-way cores (so postgres SHOULD believe there are 160 cores available) and 1 TB of RAM.\n\nI've always done my postgres tuning with a copy of \"pgtune\" which says in the output:\n\n# WARNING\n# this tool not being optimal\n# for very high memory systems\n\nSo . . . what would I want to do differently based on the fact that I have a \"very high memory system\"?\n\nSuggestions welcome!\n\n(There are several different databases, mostly related to our work in social media and malware analytics. The databases are smaller than available RAM. Around 80 million social media profiles with 700M or so links, growing by 10% a week or so. The malware database has extracted statistics and data about 17 million malware samples, growing by about 5% per week. The Social Media side has just shy of 100 'fetchers' that insert/update (but don't delete.) A few human analysts interact with the data, hitting some pretty heavy queries as they do link analysis and natural language processing, but mostly the database use is the \"fetchers\")\n\n------------------------------------------------------------------------------\nGary Warner, Director CETIFS [email protected]\nCenter for Emerging Technology Investigations Forensics & Security\nThe University of Alabama at Birmingham (UAB)\n205.422.2113\n------------------------------------------------------------------------------\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Oct 2016 15:44:13 +0000", "msg_from": "\"Warner, Gary, Jr\" <[email protected]>", "msg_from_op": true, "msg_subject": "Big Memory Boxes and pgtune" }, { "msg_contents": "On Fri, Oct 28, 2016 at 10:44 AM, Warner, Gary, Jr <[email protected]> wrote:\n\n> I've recently been blessed to move one of my databases onto a\n> huge IBM P8 computer. It's a PowerPC architecture with 20 8-way\n> cores (so postgres SHOULD believe there are 160 cores available)\n> and 1 TB of RAM.\n\n> So . . . what would I want to do differently based on the fact\n> that I have a \"very high memory system\"?\n\nWhat OS are you looking at?\n\nThe first advice I would give is to use a very recent version of\nboth the OS and PostgreSQL. Such large machines are a recent\nenough phenomenon that older software is not likely to be optimized\nto perform well on it.
 For similar reasons, be sure to stay up to\ndate with minor releases of both.\n\nIf the OS has support for them, you probably want to become\nfamiliar with these commands:\n\nnumactl --hardware\nlscpu\n\nYou may want to benchmark different options, but I suspect that you\nwill see better performance by putting each database on a separate\ncluster and using cpusets (or the equivalent) so that each cluster\nuses a subset of the 160 cores and the RAM directly attached to the\nsubset.
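\n\nFor example, on Linux something like this (a sketch; the data directory path is illustrative):\n\n# inspect the NUMA layout\nnumactl --hardware\nlscpu\n\n# start one cluster bound to the CPUs and memory of NUMA node 0\nnumactl --cpunodebind=0 --membind=0 pg_ctl -D /srv/pg/cluster0 start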
\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Oct 2016 12:09:51 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Memory Boxes and pgtune" }, { "msg_contents": "On 10/28/2016 08:44 AM, Warner, Gary, Jr wrote:\n> I've recently been blessed to move one of my databases onto a huge IBM P8 computer. It's a PowerPC architecture with 20 8-way cores (so postgres SHOULD believe there are 160 cores available) and 1 TB of RAM.\n>\n> I've always done my postgres tuning with a copy of \"pgtune\" which says in the output:\n>\n> # WARNING\n> # this tool not being optimal\n> # for very high memory systems\n>\n> So . . . what would I want to do differently based on the fact that I have a \"very high memory system\"?\n\nThe most obvious is that you are going to want to have (depending on \nPostgreSQL version):\n\n* A very high shared_buffers (in newer releases, it is not uncommon to \nhave many, many GB of it)\n* Use that work_mem baby. You have 1TB available? Take your average returned data \nset size, and make work_mem at least that.\n* IIRC (and this may be old advice), maintenance_work_mem up to 4GB. As \nI recall it won't effectively use more than that but I could be wrong.\n\nLastly but most importantly, test test test.\n\nJD\n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\nUnless otherwise stated, opinions are my own.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 28 Oct 2016 12:33:10 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Memory Boxes and pgtune" }, { "msg_contents": "On 10/28/16 2:33 PM, Joshua D. Drake wrote:\n> * A very high shared_buffers (in newer releases, it is not uncommon to\n> have many, many GB of it)\n\nKeep in mind that you might get very poor results if shared_buffers is \nlarge, but not large enough to fit the entire database. In that case \nbuffer replacement will be *extremely* expensive. Some operations will \nuse a different buffer replacement strategy, so you might be OK if some \nof the database doesn't fit in shared buffers; that will depend a lot on \nyour access patterns.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532) mobile: 512-569-9461\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 18:46:47 -0500", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Memory Boxes and pgtune" }, { "msg_contents": "On Wed, Nov 2, 2016 at 5:46 PM, Jim Nasby <[email protected]> wrote:\n> On 10/28/16 2:33 PM, Joshua D. Drake wrote:\n>>\n>> * A very high shared_buffers (in newer releases, it is not uncommon to\n>> have many, many GB of it)\n>\n>\n> Keep in mind that you might get very poor results if shared_buffers is\n> large, but not large enough to fit the entire database. In that case buffer\n> replacement will be *extremely* expensive. Some operations will use a\n> different buffer replacement strategy, so you might be OK if some of the\n> database doesn't fit in shared buffers; that will depend a lot on your\n> access patterns.\n\n\nThis. Especially on machines with fast CPUs / memory and SSDs\nunderneath, lots of buffers can sometimes just get in the way. The\nlinux kernel (and most other kernels) has hundreds, even thousands of\nman hours put into the file caching code and it's often faster to let\nthe kernel do that job with the extra memory.\n\nOnly a benchmark of a production type load can tell you what to\nexpect, and only production itself will reveal the absolute truth.\nWhere I used to work we had 5TB databases on machines with anywhere\nfrom 128GB to 512GB and honestly the extra memory didn't make a huge\ndifference. They had 8 disk RAID-5 arrays with controller caching\nturned off, because it got in the way, and cranking up shared_buffers\ndid not make them faster. I think we settled on something under\n10GB on most of them.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Nov 2016 20:24:58 -0600", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Big Memory Boxes and pgtune" } ]
[ { "msg_contents": "My PG 9.4.5 server runs on Amazon RDS. Some times of the day we have a lot of checkpoints really close together (less than 1 minute apart, see logs below) and we are trying to tune the DB to minimize the impact of the checkpoints or reduce the number of checkpoints.\r\n\r\nServer Stats\r\n· Instance Type db.r3.4xl\r\n• 16 vCPUs 122GB of RAM\r\n• PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\r\n\r\nSome PG Stats\r\n• Shared Buffers = 31427608kB\r\n• Checkpoint Segments = 64\r\n• Checkpoint completion target = .9\r\n• Rest of the configuration is below\r\n\r\nThings we are doing\r\n• We have a huge table where each row is over 1kB and it's very busy. We are splitting that into multiple tables, especially the one json field that is making it large.\r\n\r\nQuestions\r\n• Each checkpoint log writes out the following checkpoint complete: wrote 166481 buffers (4.2%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=32.441 s, sync=0.050 s, total=32.550 s; sync files=274, longest=0.049 s, average=0.000 s\r\n• What does buffers mean? How do I find out how much RAM that is equivalent to?\r\n• Based on my RDS stats I don't think IOPs will help, because I don't see any flat lines on my write operations / second graph. Is this a good assumption?\r\n• What else can we tune to spread out checkpoints?\r\n\r\nHere is the Postgres configuration\r\n\r\napplication_name | psql\r\narchive_mode | on\r\n archive_timeout | 5min\r\n autovacuum_analyze_scale_factor | 0.05\r\n autovacuum_naptime | 30s\r\n autovacuum_vacuum_scale_factor | 0.1\r\n checkpoint_completion_target | 0.9\r\n checkpoint_segments | 64\r\n client_encoding | UTF8\r\n effective_cache_size | 62855216kB\r\n fsync | on\r\n full_page_writes | on\r\n hot_standby | off\r\n listen_addresses | *\r\n lo_compat_privileges | off\r\n log_checkpoints | on\r\n log_error_verbosity | default\r\n log_file_mode | 0644\r\n log_hostname | on\r\n log_line_prefix | %t:%r:%u@%d:[%p]:\r\n log_rotation_age | 1h\r\n log_timezone | UTC\r\n log_truncate_on_rotation | off\r\n logging_collector | on\r\n maintenance_work_mem | 250MB\r\n max_connections | 4092\r\n max_locks_per_transaction | 64\r\n max_prepared_transactions | 0\r\n max_replication_slots | 5\r\n max_stack_depth | 6MB\r\n max_wal_senders | 10\r\n port | 5432\r\n rds.extensions | btree_gin,btree_gist,chkpass,citext,cube,dblink,dict_int,dict_xsyn,earthdistance,fuzzystrmatch,hstore,intagg,intarray,ip4r,isn,ltree,pgcrypto,pgrowlocks,pgstattuple,pg_buffercache,pg_prewarm,pg_stat_statements,pg_trgm,plcoffee,plls,plperl,plpgsql,pltcl,plv8,postgis,postgis_tiger_geocoder,postgis_topology,postgres_fdw,sslinfo,tablefunc,test_parser,tsearch2,unaccent,uuid-ossp\r\n rds.internal_databases | rdsadmin,template0\r\n rds.rds_superuser_reserved_connections | 0\r\n rds.superuser_variables | session_replication_role\r\n shared_buffers | 31427608kB\r\n ssl | on\r\nssl_renegotiation_limit | 0\r\n superuser_reserved_connections | 3\r\n synchronous_commit | off\r\n TimeZone | UTC\r\n unix_socket_group | rdsdb\r\n unix_socket_permissions | 0700\r\n wal_keep_segments | 32\r\n wal_level | hot_standby\r\n wal_receiver_timeout | 30s\r\n wal_sender_timeout | 30s\r\n work_mem | 8MB\r\n\r\nHere are some checkpoint logs from a busy period\r\n\r\n2016-10-26 18:54:32 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:55:10 UTC::@:[5594]:LOG: checkpoint complete: wrote 168357 buffers (4.3%); 0 transaction log file(s) added, 0 removed, 64 recycled;
 write=37.650 s, sync=0.036 s, total=37.879 s; sync files=266, longest=0.027 s, average=0.000 s\r\n2016-10-26 18:55:15 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:55:53 UTC::@:[5594]:LOG: checkpoint complete: wrote 171087 buffers (4.4%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=38.147 s, sync=0.116 s, total=38.298 s; sync files=276, longest=0.098 s, average=0.000 s\r\n2016-10-26 18:55:58 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:56:34 UTC::@:[5594]:LOG: checkpoint complete: wrote 171131 buffers (4.4%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=35.080 s, sync=0.142 s, total=35.323 s; sync files=274, longest=0.103 s, average=0.000 s\r\n2016-10-26 18:56:39 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:57:14 UTC::@:[5594]:LOG: checkpoint complete: wrote 158818 buffers (4.0%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=35.545 s, sync=0.072 s, total=35.771 s; sync files=277, longest=0.036 s, average=0.000 s\r\n2016-10-26 18:57:19 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:57:56 UTC::@:[5594]:LOG: checkpoint complete: wrote 179405 buffers (4.6%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=36.289 s, sync=0.069 s, total=36.405 s; sync files=295, longest=0.030 s, average=0.000 s\r\n2016-10-26 18:58:01 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:58:37 UTC::@:[5594]:LOG: checkpoint complete: wrote 169711 buffers (4.3%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=35.248 s, sync=0.199 s, total=35.511 s; sync files=295, longest=0.146 s, average=0.000 s\r\n2016-10-26 18:58:42 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:59:17 UTC::@:[5594]:LOG: checkpoint complete: wrote 174124 buffers (4.4%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=34.666 s, sync=0.059 s, total=34.790 s; sync files=279, longest=0.051 s, average=0.000 s\r\n2016-10-26 18:59:23 UTC::@:[5594]:LOG: checkpoint starting: xlog\r\n2016-10-26 18:59:55 UTC::@:[5594]:LOG: checkpoint complete: wrote 166481 buffers (4.2%); 0 transaction log file(s) added, 0 removed, 64 recycled; write=32.441 s, sync=0.050 s, total=32.550 s; sync files=274, longest=0.049 s, average=0.000 s\r\n
", "msg_date": "Mon, 31 Oct 2016 19:19:58 +0000", "msg_from": "Andre Henry <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning Checkpoints" }, { "msg_contents": "On 10/31/2016 08:19 PM, Andre Henry wrote:\n> My PG 9.4.5 server runs on Amazon RDS. Some times of the day we have a\n> lot of checkpoints really close together (less than 1 minute apart, see logs\n> below) and we are trying to tune the DB to minimize the impact of the\n> checkpoints or reduce the number of checkpoints.\n>\n> Server Stats\n>\n> · Instance Type db.r3.4xl\n>\n> • 16 vCPUs 122GB of RAM\n>\n> • PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc\n> (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16), 64-bit\n>\n>\n>\n> Some PG Stats\n>\n> • Shared Buffers = 31427608kB\n>\n> • Checkpoint Segments = 64\n>\n> • Checkpoint completion target = .9\n>\n> • Rest of the configuration is below\n>\n>\n>\n> Things we are doing\n>\n> • We have a huge table where each row is over 1kB and it's
very\n> busy. We are splitting that into multiple tables especially the one json\n> field that making it large.\n>\n>\n>\n> Questions\n>\n> • Each checkpoint log writes out the following checkpoint\n> complete: wrote 166481 buffers (4.2%); 0 transaction log file(s) added,\n> 0 removed, 64 recycled; write=32.441 s, sync=0.050 s, total=32.550 s;\n> sync files=274, longest=0.049 s, average=0.000 s\n>\n\nOK, each checkpoint has to write all dirty data from checkpoints. You \nhave ~170k buffers worth of dirty data, i.e. ~1.3GB.\n\n> • What does buffers mean? How do I find out how much RAM that is\n> equivalent to?\n>\n\nBuffer holds 8kB of data, which is the \"chunk\" of data files.\n\n> • Based on my RDS stats I don't think IOPs will help, because I\n> don't see any flat lines on my write operations / second graph. Is this\n> a good assumption?\n>\n\nNot sure what you mean by this. Also, maybe you should talk to AWS if \nyou're on RDS.\n\n> • What else can we tune to spread out checkpoints?\n>\n\nBased on the logs, your checkpoints are triggered by filling WAL. I see \nyour checkpoints happen every 30 - 40 seconds, and you only have 64 \nsegments.\n\nSo to get checkpoints checkpoints triggered by timeout (which I assume \nis 5 minutes, because you have not mentioned checkpoint_timeout), you \nneed to increase checkpoint_segments enough to hold 5 minutes worth of WAL.\n\nThat means 300/30 * 64, i.e. roughly 640 segments (it's likely an \noverestimate, due to full page writes, but well).\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 31 Oct 2016 22:59:50 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning Checkpoints" } ]
[ { "msg_contents": "Hello! \nWe have a situation when after creation of new materialized view cpu utilization falls down (from about 50% to about 30%), at the same time we have a cron job, which does refresh of old materialized view, but it does no effect at performance.\nCan anyone explain why is it so? And what is the difference between refresh and create new?\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 1 Nov 2016 11:26:42 +0500", "msg_from": "=?utf-8?B?0JDQvdGC0L7QvSDQnNCw0LfRg9C90LjQvQ==?=\n <[email protected]>", "msg_from_op": true, "msg_subject": "Refresh materialized view vs recreate " }, { "msg_contents": "On Tue, Nov 1, 2016 at 1:26 AM, Антон Мазунин\n<[email protected]> wrote:\n\n> We have a situation when after creation of new materialized view\n> cpu utilization falls down (from about 50% to about 30%), at the\n> same time we have a cron job, which does refresh of old\n> materialized view, but it does no effect at performance.\n> Can anyone explain why is it so?\n\nI am not able to understand what you are saying here. Could you\nperhaps show the commands you are using and their output (both to\ncreate or refresh the materialized views and to measure impact)?\n\n> what is the difference between refresh and create new?\n\nIn either case the query associated with the materialized view is\nrun, and the output saved to storage. For CREATE or for REFRESH\nwithout CONCURRENTLY, it is saved to the permanent tablespace and\nindexes are built from scratch. For REFRESH CONCURRENTLY the query\nresult is saved to a temporary workspace and this is \"diffed\"\nagainst the existing permanent copy, which is modified to match the\nnew data through simple DML statements. No explicit index rebuild\nis needed; entries are adjusted as part of running the DML.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 1 Nov 2016 10:25:45 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Refresh materialized view vs recreate" } ]
[ { "msg_contents": "Hi everyone,\n\nI'm facing a peformance decrease after switching to a more performant VPS :\nhttp://serverfault.com/questions/812702/posgres-perf-decreased-although-server-is-better\n\nMy questions are:\n\n 1. What benchmark should I perform before switching to a new server?\n 2. What's your rule of thumb regarding my specific issue? What should be\n investigated first?\n\n\nBest Regards,\n\nBenjamin\n\nHi everyone,I'm facing a peformance decrease after switching to a more performant VPS : http://serverfault.com/questions/812702/posgres-perf-decreased-although-server-is-betterMy questions are:What benchmark should I perform before switching to a new server?What's your rule of thumb regarding my specific issue? What should be investigated first?Best Regards,Benjamin", "msg_date": "Wed, 2 Nov 2016 14:26:43 +0100", "msg_from": "Benjamin Toueg <[email protected]>", "msg_from_op": true, "msg_subject": "Perf decreased although server is better" }, { "msg_contents": "On 11/02/2016 02:26 PM, Benjamin Toueg wrote:\n> Hi everyone,\n>\n> I'm facing a peformance decrease after switching to a more performant\n> VPS :\n> http://serverfault.com/questions/812702/posgres-perf-decreased-although-server-is-better\n>\n\nWell, changing so many things at once (CPU, RAM, storage, Ubuntu \nversion, probably kernel version, PostgreSQL version) is a bad idea, \nexactly because it makes investigating regressions more complicated.\n\n> My questions are:\n>\n> 1. What benchmark should I perform before switching to a new server?\n\nThree types of benchmarks, in this order:\n\n1) system-level benchmarks to test various resources (fio to test disks, \netc.)\n\n2) general-purpose PostgreSQL benchmarks (regular pgbench, ...)\n\n3) application-specific benchmarks, or at least pgbench with templates \nthat match your workload somewhat better than the default one\n\nStart with (1), compare results between machines, if it's OK start with \n(2) and so on.\n\n> 2. What's your rule of thumb regarding my specific issue? What should\n> be investigated first?\n>\n\nThere's a bottleneck somewhere. You need to identify which resource is \nit and why, until then it's just wild guessing. Try measuring how long \nthe requests take at different points - at the app server, at the \ndatabase, etc. That will tell you whether it's a database issue, a \nnetwork issue etc. If the queries take longer on the database, use \nsomething like perf to profile the system.\n\nISTM it's not a disk issue (at least the chart shows minimum usage). 
But \nyou're doing ~400tps, returning ~5M rows per second.\n\nAlso, if it turns out to be a database issue, more info about the config \nand data set would be useful.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 15:36:25 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "On Wed, Nov 2, 2016 at 8:26 AM, Benjamin Toueg <[email protected]> wrote:\n>\n> I'm facing a performance decrease after switching to a more performant VPS :\n\nIn my world, the VPS that performs worse is not considered \"more\nperformant\", no matter what the sales materials say.\n\n> What benchmark should I perform before switching to a new server?\n\nPersonally, I always like to run bonnie++ and the STREAM RAM test\nbefore even installing the database software.\n\n> What's your rule of thumb regarding my specific issue? What should be\n> investigated first?\n\nI would start by comparing the results of both of the\nabove-mentioned tests for the two environments.\n\nJust one observation, based on the limited data -- a higher network\nlatency between the client and the database might explain what\nyou've presented. I would check that, too.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 09:38:44 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "How did you migrate from one system to the other?\n\n[ I recently moved a large time series table from 9.5.4 to 9.6.1 using dump\nand restore. Although it put the BRIN index on the time column back on, it\nwas borked. Reindexing didn't help. I had to switch it to a regular btree\nindex. I think the data wasn't inserted back into the database in time\norder. Therefore, because it was all over the place, the efficiency gains\nfrom the BRIN index were lost. It was probably because I restored it with\n\"-j 8\". -- It is possible something you didn't expect when moving\nintroduced new inefficiencies. I also found running pg_repack after the\nrestore helped performance (and storage size) on my new system too. ]\n\n\nDo you have all of the kernel settings configured per best practices?\nSometimes it is easy to forget them when you get totally focused on just\nmoving the data.\n(Things such as your hugepages settings)\n\nWith 9.6 you can enable parallel queries. Of course you wouldn't be\ncomparing apples-to-apples then, but take advantage of that feature if you\ncan.\n\n\nOn Wed, Nov 2, 2016 at 9:26 AM, Benjamin Toueg <[email protected]> wrote:\n\n> Hi everyone,\n>\n> I'm facing a performance decrease after switching to a more performant VPS\n> : http://serverfault.com/questions/812702/posgres-perf-\n> decreased-although-server-is-better\n>\n> My questions are:\n>\n> 1. What benchmark should I perform before switching to a new server?\n> 2. What's your rule of thumb regarding my specific issue?
What should\n> be investigated first?\n>\n>\n> Best Regards,\n>\n> Benjamin\n>\n>\n", "msg_date": "Wed, 2 Nov 2016 10:55:52 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "Stream gives substantially better results with the new server (before\n<http://pastebin.com/xrLN9Q6s>/after <http://pastebin.com/fDm9aNDh>)\n\nI've run \"bonnie++ -u postgres -d /tmp/ -s 4096M -r 1096\" on both machines.\nI don't know how to read bonnie++ results (before\n<http://pastebin.com/mBi8UstP>/after <http://pastebin.com/hbTrZ8hT>) but it\nlooks quite the same, sometimes better for the new, sometimes better for\nthe old.\n\nN.B. the tests were performed while the new server is in production,\nwhereas the old one is not solicited anymore.\n\nI used pg_dump 9.6.1 to dump from Postgres 9.4.5 and restored with pg_restore\n9.6.1 with the \"-j 4\" option.\n\nThe database has many tables and indexes (50+). I haven't isolated which\nqueries perform worse, it's just a general effect.\n\nThe loss in terms of response time now seems really consistent, look how\nthe yellow suddenly grew by 100%:\n[image: inline image 1]\n\nI haven't tried `pg_repack` yet.\n\nAs a user, is it expected that restoring could break index efficiencies\nwhen moving from one version to another?\n\nI feel really bad now because I'm not confident re-switching back to the\nold server will restore initial performance :(\n\n2016-11-02 15:55 GMT+01:00 Rick Otten <[email protected]>:\n\n> How did you migrate from one system to the other?\n>\n> [ I recently moved a large time series table from 9.5.4 to 9.6.1 using\n> dump and restore. Although it put the BRIN index on the time column back\n> on, it was borked. Reindexing didn't help. I had to switch it to a\n> regular btree index. I think the data wasn't inserted back into the\n> database in time order. Therefore, because it was all over the place, the\n> efficiency gains from the BRIN index were lost.
It was probably because I\n> restored it with \"-j 8\". -- It is possible something you didn't expect\n> when moving introduced new inefficiencies. I also found running pg_repack\n> after the restore helped performance (and storage size) on my new system\n> too. ]\n>\n>\n> Do you have all of the kernel settings configured per best practices?\n> Sometimes it is easy to forget them when you get totally focused on just\n> moving the data.\n> (Things such as your hugepages settings)\n>\n> With 9.6 you can enable parallel queries. Of course you wouldn't be\n> comparing apples-to-apples then, but take advantage of that feature if you\n> can.\n>\n>\n> On Wed, Nov 2, 2016 at 9:26 AM, Benjamin Toueg <[email protected]> wrote:\n>\n>> Hi everyone,\n>>\n>> I'm facing a performance decrease after switching to a more performant VPS\n>> : http://serverfault.com/questions/812702/posgres-perf-decreas\n>> ed-although-server-is-better\n>>\n>> My questions are:\n>>\n>> 1. What benchmark should I perform before switching to a new server?\n>> 2. What's your rule of thumb regarding my specific issue? What should\n>> be investigated first?\n>>\n>>\n>> Best Regards,\n>>\n>> Benjamin\n>>\n>>\n>", "msg_date": "Thu, 3 Nov 2016 15:51:17 +0100", "msg_from": "Benjamin Toueg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "On Thu, Nov 3, 2016 at 9:51 AM, Benjamin Toueg <[email protected]> wrote:\n>\n> Stream gives substantially better results with the new server (before/after)\n\nYep, the new server can access RAM at about twice the speed of the old.\n\n> I've run \"bonnie++ -u postgres -d /tmp/ -s 4096M -r 1096\" on both\n> machines. I don't know how to read bonnie++ results (before/after)\n> but it looks quite the same, sometimes better for the new,\n> sometimes better for the old.\n\nOn most metrics the new machine looks better, but there are a few\nthings that look potentially problematic with the new machine: the\nnew machine uses about 1.57x the CPU time of the old per block\nwritten sequentially ((41 / 143557) / (16 / 87991)); so if the box\nbecomes CPU starved, you might notice writes getting slower on\nthe new box. Also, several of the latency numbers are worse -- in\nsome cases far worse. If I'm understanding that properly, it\nsuggests that while total throughput from a number of connections\nmay be better on the new machine, a single connection may not run\nthe same query as quickly. That probably makes the new machine\nbetter for handling an OLTP workload from many concurrent clients,\nbut perhaps not as good at cranking out a single big report or\nrunning dump/restore.\n\nYes, it is quite possible that the new machine could be faster at\nsome things and slower at others.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Nov 2016 19:05:31 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "I've noticed a network latency increase. Ping between web server and\ndatabase : 0.6 ms avg before, 5.3 ms avg after -- it wasn't that big 4 days\nago :(\n\nI've narrowed my investigation to one particular \"Transaction\" in terms of\nthe NewRelic APM.
It's basically the main HTTP request of my application.\n\nLooks like the ping impacts psycopg2:connect (see http://imgur.com/a/LDH1c):\n4 ms up to 16 ms on average.\n\nThat I can understand. However, I don't understand the performance decrease\nof the select queries on table1 (see https://i.stack.imgur.com/QaUqy.png):\n80 ms up to 160 ms on average\n\nSame goes for table 2 (see http://imgur.com/a/CnETs): 4 ms up to 20 ms on\naverage\n\nHowever, there is a commit in my request, and it performs better (see\nhttp://imgur.com/a/td8Dc): 12 ms down to 6 ms on average.\n\nI don't see how this can be due to network latency!\n\nI will provide a new bonnie++ benchmark when the requests per minute are at\nthe lowest (remember I can only run benchmarks while the server is in use).\n\nRick, what did you mean by kernel configuration? The OS is a standard\nUbuntu 16.04:\n\n - Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016\nx86_64 x86_64 x86_64 GNU/Linux\n\nDo you think losing half the number of cores can explain my performance\nissue ? (AMD 8 cores down to Haswell 4 cores).\n\nBest Regards,\n\nBenjamin\n\nPS : I've edited the SO post\nhttp://serverfault.com/questions/812702/posgres-perf-decreased-although-server-is-better\n\n2016-11-04 1:05 GMT+01:00 Kevin Grittner <[email protected]>:\n\n> On Thu, Nov 3, 2016 at 9:51 AM, Benjamin Toueg <[email protected]> wrote:\n> >\n> > Stream gives substantially better results with the new server\n> (before/after)\n>\n> Yep, the new server can access RAM at about twice the speed of the old.\n>\n> > I've run \"bonnie++ -u postgres -d /tmp/ -s 4096M -r 1096\" on both\n> > machines. I don't know how to read bonnie++ results (before/after)\n> > but it looks quite the same, sometimes better for the new,\n> > sometimes better for the old.\n>\n> On most metrics the new machine looks better, but there are a few\n> things that look potentially problematic with the new machine: the\n> new machine uses about 1.57x the CPU time of the old per block\n> written sequentially ((41 / 143557) / (16 / 87991)); so if the box\n> becomes CPU starved, you might notice writes getting slower on\n> the new box. Also, several of the latency numbers are worse -- in\n> some cases far worse. If I'm understanding that properly, it\n> suggests that while total throughput from a number of connections\n> may be better on the new machine, a single connection may not run\n> the same query as quickly. That probably makes the new machine\n> better for handling an OLTP workload from many concurrent clients,\n> but perhaps not as good at cranking out a single big report or\n> running dump/restore.\n>\n> Yes, it is quite possible that the new machine could be faster at\n> some things and slower at others.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Fri, 4 Nov 2016 12:53:49 +0100", "msg_from": "Benjamin Toueg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "My guess would be that your server upgrade wasn't the upgrade you thought\nit was.\n\nYour network latency could definitely be the cause of most of this. The\nproblem is you're not measuring this from the server side. It's not only\ngoing to impact connect time, but you're going to get your data a bit\nslower as well. I'm assuming your pgbouncer is installed on the postgres\nbox. Do you have the pgbouncer addon installed? If you're looking to\nisolate network transfer, pgbouncer should give you statistics like avg\nquery time to see if they line up with what you're seeing in your APM.\n\nAlso, you reduced your number of cores, which is a big problem because\nbasically you just cut your max query capacity in half.
Assuming a\ndedicated box, each CPU can only process up to one postgres query at a\ntime. Previously, you could process up to 8 queries simultaneously, whereas\nnow you can only do 4. Now, since most of your queries are probably in the\nms, that can still be quite a bit of queries in a second time frame and you\nmay never hit 4 going at the same exact time, but without seeing your\npgbouncer config, this may actually be happening based on all the idle\nconnections you're seeing.\n\nIf ALL your connections are coming from a single pgbouncer locally on the\npostgres box, then you can use your server resources better by setting\nmax_connections to 6 (the number of cores + one or two more for you to\nconnect locally via pgsql). Then, set your default_pool_size in pgbouncer\nto 4 (number of cores) and reserve_pool_size to 0, and restart. This will\nkeep the number of sessions limited to what your box is actually capable of\ndoing and will help you avoid loading postgres down with more than it's\ncapable of doing, which can make a bad situation worse.\n\nMy recommendation would be to go back to the old server if it's available.\nIf not, get a new one in the same data center as your web servers with at\nleast 8 cores to put you back where you were.\n\n\nOn Fri, Nov 4, 2016 at 7:55 AM Benjamin Toueg <[email protected]> wrote:\n\n> I've noticed a network latency increase. Ping between web server and\n> database : 0.6 ms avg before, 5.3 ms avg after -- it wasn't that big 4 days\n> ago :(\n>\n> I've narrowed my investigation to one particular \"Transaction\" in terms of\n> the NewRelic APM. It's basically the main HTTP request of my application.\n>\n> Looks like the ping impacts psycopg2:connect (see http://imgur.com/a/LDH1c):\n> 4 ms up to 16 ms on average.\n>\n> That I can understand. However, I don't understand the performance\n> decrease of the select queries on table1 (see\n> https://i.stack.imgur.com/QaUqy.png): 80 ms up to 160 ms on average\n>\n> Same goes for table 2 (see http://imgur.com/a/CnETs): 4 ms up to 20 ms on\n> average\n>\n> However, there is a commit in my request, and it performs better (see\n> http://imgur.com/a/td8Dc): 12 ms down to 6 ms on average.\n>\n> I don't see how this can be due to network latency!\n>\n> I will provide a new bonnie++ benchmark when the requests per minute is at\n> the lowest (remember I can only run benchmarks while the server is in use).\n>\n> Rick, what did you mean by kernel configuration? The OS is a standard\n> Ubuntu 16.04:\n>\n> - Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016\n> x86_64 x86_64 x86_64 GNU/Linux\n>\n> Do you think losing half the number of cores can explain my performance\n> issue ? (AMD 8 cores down to Haswell 4 cores).\n>\n> Best Regards,\n>\n> Benjamin\n>\n> PS : I've edited the SO post\n> http://serverfault.com/questions/812702/posgres-perf-decreased-although-server-is-better\n>\n> 2016-11-04 1:05 GMT+01:00 Kevin Grittner <[email protected]>:\n>\n> On Thu, Nov 3, 2016 at 9:51 AM, Benjamin Toueg <[email protected]> wrote:\n> >\n> > Stream gives substantially better results with the new server\n> (before/after)\n>\n> Yep, the new server can access RAM at about twice the speed of the old.\n>\n> > I've run \"bonnie++ -u postgres -d /tmp/ -s 4096M -r 1096\" on both\n> > machines. 
I don't know how to read bonnie++ results (before/after)\n> > but it looks quite the same, sometimes better for the new,\n> > sometimes better for the old.\n>\n> On most metrics the new machine looks better, but there are a few\n> things that look potentially problematic with the new machine: the\n> new machine uses about 1.57x the CPU time of the old per block\n> written sequentially ((41 / 143557) / (16 / 87991)); so if the box\n> becomes CPU starved, you might notice writes getting slower on\n> the new box. Also, several of the latency numbers are worse -- in\n> some cases far worse. If I'm understanding that properly, it\n> suggests that while total throughput from a number of connections\n> may be better on the new machine, a single connection may not run\n> the same query as quickly. That probably makes the new machine\n> better for handling an OLTP workload from many concurrent clients,\n> but perhaps not as good at cranking out a single big report or\n> running dump/restore.\n>\n> Yes, it is quite possible that the new machine could be faster at\n> some things and slower at others.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>\n", "msg_date": "Fri, 04 Nov 2016 13:27:14 +0000", "msg_from": "Will Platnick <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "> Rick, what did you mean by kernel configuration? The OS is a standard\nUbuntu 16.04:\n>\n> - Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016\nx86_64 x86_64 x86_64 GNU/Linux\n>\n> Do you think losing half the number of cores can explain my performance\nissue ?
(AMD 8 cores down to Haswell 4 cores).\n\nI was referring to some of the tunables discussed on this page:\n https://www.postgresql.org/docs/9.6/static/kernel-resources.html\n\nSpecifically, in my environment I update /etc/security/limits.conf to\ninclude:\n\n\n* hard nofile 65536\n* soft nofile 65536\n\n* hard stack 16384\n* soft stack 16384\n\n* hard memlock unlimited\n* soft memlock unlimited\n\n\nAnd then add this to /etc/pam.d/common-session so that they get picked up\nwhen I su to the postgres user:\n\nsession required pam_limits.so\n\n\nI update sysctl.conf with huge pages:\n\nvm.hugetlb_shm_group=5432\nvm.nr_hugepages=4300\n\n\n(The number of huge pages may be different for your environment.)\nAnd create and add the postgres user to the huge pages group:\n\nhugepages:x:5432:postgres\n\n\nYou may also want to look at some TCP tunables, and check your shared\nmemory limits too.\n\nI only mentioned this because sometimes when you move from one system to\nanother, you can get so caught up in getting the database set up and data\nmigration that you overlook the basic system settings...\n\nRegarding the number of cores, most of the postgresql queries are going to\nbe single threaded. The number of cores won't impact the performance of a\nsingle query except in certain circumstances:\n 1) You have parallel queries enabled and the table is doing some sort\nof expensive sequence scan\n 2) You have so many concurrent queries running the whole system is cpu\nstarved.\n 3) There is some other resource contention between the cpus that causes\n_more_ cpus to actually run slower than fewer. (It happens - I had a\nserver back in the 90's which had severe lock contention over /dev/tcp.\nAdding more cpus made it slower.)\n 4) The near-cache memory gets fragmented in a way that processors have\nto reach deeper in the caches to find what they need. (I'm not explaining\nthat very well, but it is unlikely to be a problem in your case anyway.)\n\nA quick and simple command to get a sense of how busy your cpus are is:\n\n $ mpstat -P ALL 5\n\n(let it run for a few of the 5 second intervals)\n\nIf they are all running pretty hot, then more cores might help. If just\none is running hot, then more cores probably won't do anything.\n\n\n\nOn Fri, Nov 4, 2016 at 7:53 AM, Benjamin Toueg <[email protected]> wrote:\n\n> I've noticed a network latency increase. Ping between web server and\n> database : 0.6 ms avg before, 5.3 ms avg after -- it wasn't that big 4 days\n> ago :(\n>\n> I've narrowed my investigation to one particular \"Transaction\" in terms of\n> the NewRelic APM. It's basically the main HTTP request of my application.\n>\n> Looks like the ping impacts psycopg2:connect (see http://imgur.com/a/LDH1c):\n> 4 ms up to 16 ms on average.\n>\n> That I can understand. However, I don't understand the performance\n> decrease of the select queries on table1 (see https://i.stack.imgur.com/\n> QaUqy.png): 80 ms up to 160 ms on average\n>\n> Same goes for table 2 (see http://imgur.com/a/CnETs): 4 ms up to 20 ms on\n> average\n>\n> However, there is a commit in my request, and it performs better (see\n> http://imgur.com/a/td8Dc): 12 ms down to 6 ms on average.\n>\n> I don't see how this can be due to network latency!\n>\n> I will provide a new bonnie++ benchmark when the requests per minute is at\n> the lowest (remember I can only run benchmarks while the server is in use).\n>\n> Rick, what did you mean by kernel configuration? 
The OS is a standard\n> Ubuntu 16.04:\n>\n> - Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016\n> x86_64 x86_64 x86_64 GNU/Linux\n>\n> Do you think losing half the number of cores can explain my performance\n> issue ? (AMD 8 cores down to Haswell 4 cores).\n>\n> Best Regards,\n>\n> Benjamin\n>\n> PS : I've edited the SO post http://serverfault.com/\n> questions/812702/posgres-perf-decreased-although-server-is-better\n>\n> 2016-11-04 1:05 GMT+01:00 Kevin Grittner <[email protected]>:\n>\n>> On Thu, Nov 3, 2016 at 9:51 AM, Benjamin Toueg <[email protected]> wrote:\n>> >\n>> > Stream gives substantially better results with the new server\n>> (before/after)\n>>\n>> Yep, the new server can access RAM at about twice the speed of the old.\n>>\n>> > I've run \"bonnie++ -u postgres -d /tmp/ -s 4096M -r 1096\" on both\n>> > machines. I don't know how to read bonnie++ results (before/after)\n>> > but it looks quite the same, sometimes better for the new,\n>> > sometimes better for the old.\n>>\n>> On most metrics the new machine looks better, but there are a few\n>> things that look potentially problematic with the new machine: the\n>> new machine uses about 1.57x the CPU time of the old per block\n>> written sequentially ((41 / 143557) / (16 / 87991)); so if the box\n>> becomes CPU starved, you might notice writes getting slower on\n>> the new box. Also, several of the latency numbers are worse -- in\n>> some cases far worse. If I'm understanding that properly, it\n>> suggests that while total throughput from a number of connections\n>> may be better on the new machine, a single connection may not run\n>> the same query as quickly. That probably makes the new machine\n>> better for handling an OLTP workload from many concurrent clients,\n>> but perhaps not as good at cranking out a single big report or\n>> running dump/restore.\n>>\n>> Yes, it is quite possible that the new machine could be faster at\n>> some things and slower at others.\n>>\n>> --\n>> Kevin Grittner\n>> EDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>\n>\n", "msg_date": "Fri, 4 Nov 2016 09:29:58 -0400", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "On Fri, Nov 4, 2016 at 6:53 AM, Benjamin Toueg <[email protected]> wrote:\n\n> I don't see how this can be due to network latency!\n\nI'm not suggesting it is due to network latency -- it is due to the\nlatency for storage requests. That won't depend on network latency\nunless you are going to a LAN for storage.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Nov 2016 09:05:38 -0500", "msg_from": "Kevin Grittner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perf decreased although server is better" }, { "msg_contents": "Hi,\n\nI tried pg_repack on the new server with no luck, so I've decided to move\nback to the old server to rule out:\n\n 1. performance decrease due to server raw characteristics\n 2. performance decrease due to network latencies\n\nI've seen no improvement whatsoever.
Could the issue be due to one of, or\nan interaction between:\n\n 1. Ubuntu 16.04.1 LTS (Linux 3.14.32-vps-grs-ipv6-64 x86_64 x86_64\n x86_64 GNU/Linux)\n 2. Postgres 9.6.1/Postgis 2.3\n 3. pgbouncer 1.7.2\n\nI guess to be sure I would need to find a way to restore my initial\nperformance one way or another 😒\n\nThe postgresql.conf is the same as before, and very close to the defaults:\nhttp://pastebin.com/ZGYH38ft. I realised I had \"track_functions on\" but I\nturned it off and it didn't help.\n\nThe only thing that changed purposefully is the postgres and pgbouncer\nauthentication mechanism (it used to be `trust` for both, now it's `md5`\nfor both).\n\nAny help appreciated,\n\nThanks\n\n2016-11-04 15:05 GMT+01:00 Kevin Grittner <[email protected]>:\n\n> On Fri, Nov 4, 2016 at 6:53 AM, Benjamin Toueg <[email protected]> wrote:\n>\n> > I don't see how this can be due to network latency!\n>\n> I'm not suggesting it is due to network latency -- it is due to the\n> latency for storage requests. That won't depend on network latency\n> unless you are going to a LAN for storage.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n", "msg_date": "Thu, 10 Nov 2016 10:52:16 +0100", "msg_from": "Benjamin Toueg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perf decreased although server is better" } ]
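Since much of this thread turns on separating network round-trip time from server-side execution time, one quick check is sketched below; run psql from the web server, and note that table1's predicate here is only a stand-in for the real slow query:

    -- with \timing on, a trivial statement measures little more than one
    -- client-to-server round trip; compare it against the raw ping time
    \timing on
    SELECT 1;

    -- EXPLAIN ANALYZE reports server-side execution time, excluding the
    -- network, so it can be compared directly between the two servers
    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM table1 WHERE id = 42;

If SELECT 1 costs several milliseconds while the EXPLAIN ANALYZE timings match on both machines, the regression is in the network path rather than in PostgreSQL itself.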
[ { "msg_contents": "We have a report query which joins (multiple times, actually) against this\ntrivial, tiny table:\n\nts=# \\d bsm_to_switch\nTable \"public.bsm_to_switch\"\n Column | Type | Modifiers \n--------+------+-----------\n bsm | text | not null\n switch | text | not null\n\nts=# SELECT length(bsm), length(switch) FROM bsm_to_switch;\n length | length \n--------+--------\n 10 | 6\n 10 | 6\n(2 rows)\n\nThe column values are distinct.\n\nI believe the join is being (badly) underestimated, leading to a crappy plan\ninvolving multiple nested loop joins, which takes 2.5 hours instead of a\nhandful of seconds; I believe that might be resolved by populating its MCV\nlist..\n\n..however, on reading commands/analyze.c, the issue is these columns have no\nduplicates, and also postgres decides that \"since the number of distinct rows\nis greater than 10% of the total number of rows\", that ndistinct should be -1\n(meaning it scales with the table size). That's fine, except that it then\neffectively precludes populating the MCV list.\n\n|\tif (nmultiple == 0)\n|\t{\n|\t\t/*\n|\t\t * If we found no repeated non-null values, assume it's a unique\n|\t\t * column; but be sure to discount for any nulls we found.\n|\t\t */\n|\t\tstats->stadistinct = -1.0 * (1.0 - stats->stanullfrac);\n|\t}\n|\telse if (track_cnt < track_max && toowide_cnt == 0 &&\n|\t\t\t nmultiple == track_cnt)\n|\t{\n|\t\t/*\n|\t\t * Our track list includes every value in the sample, and every\n|\t\t * value appeared more than once. Assume the column has just\n|\t\t * these values. (This case is meant to address columns with\n|\t\t * small, fixed sets of possible values, such as boolean or enum\n|\t\t * columns. If there are any values that appear just once in the\n|\t\t * sample, including too-wide values, we should assume that that's\n|\t\t * not what we're dealing with.)\n|\t\t */\n|\t\tstats->stadistinct = track_cnt;\n|\t}\n\nts=# SELECT attname, inherited, null_frac, avg_width, n_distinct, most_common_vals FROM pg_stats WHERE tablename='bsm_to_switch';\n attname | inherited | null_frac | avg_width | n_distinct | most_common_vals \n---------+-----------+-----------+-----------+------------+------------------\n bsm | f | 0 | 11 | -1 | \n switch | f | 0 | 7 | -1 | \n(2 rows)\n\n\nAny ideas? I tried setting n_distinct=2, but that seems to not have any effect\nwithin ANALYZE itself.\n\nts=# SELECT attname, inherited, null_frac, avg_width, n_distinct, most_common_vals FROM pg_stats WHERE tablename='bsm_to_switch';\n attname | inherited | null_frac | avg_width | n_distinct | most_common_vals \n---------+-----------+-----------+-----------+------------+------------------\n bsm | f | 0 | 11 | 2 | \n switch | f | 0 | 7 | 2 | \n(2 rows)\n\nThanks in advance.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 13:53:18 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "no MCV list of tiny table with unique columns" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n> I believe the join is being (badly) underestimated, leading to a crappy plan\n> involving multiple nested loop joins, which takes 2.5 hours instead of a\n> handful of seconds; I believe that might be resolved by populating its MCV\n> list..\n\nWith only two rows in the table, I'm not real sure why you'd need an MCV\nlist. 
Could we see the actual problem query (and the other table\nschemas), rather than diving into the code first?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 02 Nov 2016 16:05:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: no MCV list of tiny table with unique columns" }, { "msg_contents": "On Wed, Nov 02, 2016 at 04:05:46PM -0400, Tom Lane wrote:\n> Justin Pryzby <[email protected]> writes:\n> > I believe the join is being (badly) underestimated, leading to a crappy plan\n> > involving multiple nested loop joins, which takes 2.5 hours instead of a\n> > handful of seconds; I believe that might be resolved by populating its MCV\n> > list..\n> \n> With only two rows in the table, I'm not real sure why you'd need an MCV\n> list. Could we see the actual problem query (and the other table\n> schemas), rather than diving into the code first?\n\nSigh, yes, but understand that it's a legacy report which happens to currently\nbe near the top of my list of things to improve:\n\nhttps://explain.depesz.com/s/5rN6\n\nThe relevant table is involved three times:\n\nSeq Scan on two_november mike_oscar (cost=0.000..1.020 rows=2 width=18) (actual time=0.010..0.010 rows=2 loops=1) \nSeq Scan on echo_oscar foxtrot (cost=0.000..209.860 rows=6,286 width=13) (actual time=0.014..2.271 rows=5,842 loops=1) \nSeq Scan on two_november xray_yankee_alpha (cost=0.000..1.020 rows=2 width=18) (actual time=0.017..0.019 rows=2 loops=1) \n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 2 Nov 2016 15:19:55 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: no MCV list of tiny table with unique columns" }, { "msg_contents": "Justin Pryzby <[email protected]> writes:\n>> With only two rows in the table, I'm not real sure why you'd need an MCV\n>> list. Could we see the actual problem query (and the other table\n>> schemas), rather than diving into the code first?\n\n> Sigh, yes, but understand that it's a legacy report which happens to currently\n> be near the top of my list of things to improve:\n> https://explain.depesz.com/s/5rN6\n\nHmm, I wonder what you have join_collapse_limit and from_collapse_limit\nset to. 
There's an awful lot of tables in that query.\n\nAlso, it seems like most of the rowcount misestimations have to do with\ninheritance child tables, eg\n\n Append (cost=0.000..50,814.990 rows=2,156 width=36) (actual time=9.054..1,026.409 rows=429,692 loops=1)\n Seq Scan on delta_mike golf_six (cost=0.000..0.000 rows=1 width=36) (actual time=0.009..0.009 rows=0 loops=1)\n Filter: ((four_charlie >= 'alpha_six'::timestamp without time zone) AND (four_charlie <= 'four_three'::timestamp without time zone) AND (echo_tango('seven_november'::text, four_charlie) >= 'november_golf'::double precision) AND (echo_tango('seven_november'::text, four_charlie) <= 'papa_quebec'::double precision))\n Index Scan using bravo on papa_two four_delta (cost=0.430..50,814.990 rows=2,155 width=36) (actual time=9.043..848.063 rows=429,692 loops=1)\n Index Cond: ((four_charlie >= 'alpha_six'::timestamp without time zone) AND (four_charlie <= 'four_three'::timestamp without time zone))\n Filter: ((echo_tango('seven_november'::text, four_charlie) >= 'november_golf'::double precision) AND (echo_tango('seven_november'::text, four_charlie) <= 'papa_quebec'::double precision))\n\nThere's not a lot of point in worrying about your two-row table when these\nother estimates are off by multiple orders of magnitude. In this\nparticular case my first bet would be that the planner has no idea about\nthe selectivity of the conditions on \"echo_tango('seven_november'::text,\nfour_charlie)\". Reformulating that, or maybe making an index on it just\nso that ANALYZE will gather stats about it, could help.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 02 Nov 2016 19:48:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: no MCV list of tiny table with unique columns" }, { "msg_contents": "On Wed, Nov 02, 2016 at 07:48:23PM -0400, Tom Lane wrote:\n\n> There's not a lot of point in worrying about your two-row table when these\n> other estimates are off by multiple orders of magnitude. In this\n> particular case my first bet would be that the planner has no idea about\n> the selectivity of the conditions on \"echo_tango('seven_november'::text,\n> four_charlie)\". Reformulating that, or maybe making an index on it just\n> so that ANALYZE will gather stats about it, could help.\n\nThanks, you're exactly right. That's date_trunc('hour') BTW.\n\nWe actually already have a \"new way\" of doing that which avoids date_trunc, so\nnow I just have to get it in place for 100+ old reports..\n\nI thought I had tried that before, but I think I was confusing myself, and\ntried putting the index on the parent, which ends up with no stats since it's\nempty.\n\nWith indices+analyze:\n Sort (cost=189014.28..189014.28 rows=1 width=785) (actual time=25063.831..25063.886 rows=328 loops=1)\n ...\n\nBTW:\njoin_collapse_limit | 8\nfrom_collapse_limit | 8\n\n..and changing them doesn't seem to have any effect. By my count there are 11\ntables, not counting a few that are used multiple times.\n\nThanks again.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 5 Nov 2016 17:36:37 -0500", "msg_from": "Justin Pryzby <[email protected]>", "msg_from_op": true, "msg_subject": "Re: no MCV list of tiny table with unique columns" } ]
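To make Tom's suggestion concrete for the date_trunc case Justin identified, a minimal sketch using the anonymized names from the plan above (papa_two is the child table, four_charlie the timestamp column); substitute the real ones, and repeat per child table, since an index on the empty parent yields no statistics:

    -- an expression index makes ANALYZE collect statistics on the
    -- expression itself, so the planner can estimate the range conditions
    CREATE INDEX ON papa_two (date_trunc('hour', four_charlie));
    ANALYZE papa_two;

    -- the expression's statistics then show up in pg_stats under the
    -- index's auto-generated name rather than the table's
    SELECT tablename, attname, n_distinct, correlation
    FROM pg_stats
    WHERE tablename LIKE 'papa_two%idx';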
[ { "msg_contents": "Hi friends,\n\nI am running 2 Linux machines, kernel 3.13.0-45-generic #74-Ubuntu SMP.\nPostgresql version 9.4 in both machine, in a Hot Standby cenario.\n\nMaster-Slave using WAL files, not streaming replication.\n\nThe archive_command from master is:\n\narchive_command = '/usr/bin/rsync -a -e \"ssh\" \"%p\"\nslave:/data2/postgres/standby/main/incoming/\"%f\"' #\n\n\nThe recovery.conf from slave is:\nstandby_mode = 'on'\nrestore_command = 'cp /data2/postgres/standby/main/incoming/%f \"%p\"'\n\n\nWe have a have intensive write operation generating for example 1577 wals\nsegments per hour ~= 26 segments per minute.\n\nThe slave is very behind from master, more than 20 hours.\nI can see that all WAL segments on master are on ready state, waiting for\narchive_command do his jobs.\n\n\n\nThe slave is waiting for the wal files as described above.\n\n016-11-02 18:57:48 UTC::@:[15698]: LOG: unexpected pageaddr C955/C5000000\nin log segment 000000010000C96000000023, offset 0\n2016-11-02 18:57:54 UTC::@:[15698]: LOG: restored log file\n\"000000010000C96000000022\" from archive\n2016-11-02 18:57:54 UTC::@:[15698]: LOG: restored log file\n\"000000010000C96000000023\" from archive\n2016-11-02 18:57:54 UTC::@:[15698]: LOG: restored log file\n\"000000010000C96000000024\" from archive\ncp: cannot stat\n‘/data2/postgres/standby/main/incoming/000000010000C96000000025’: No such\nfile or directory\n2016-11-02 18:57:54 UTC::@:[15698]: LOG: unexpected pageaddr C956/71000000\nin log segment 000000010000C96000000025, offset 0\n2016-11-02 18:57:58 UTC::@:[15698]: LOG: restored log file\n\"000000010000C96000000024\" from archive\ncp: cannot stat\n‘/data2/postgres/standby/main/incoming/000000010000C96000000025’: No such\nfile or directory\n\n\nIt seems that archive_command is very slowly compared with the amount of\nWAL segments generated.\nAny suggestions??? Should I use another strategy to increase the\narchive_command process speed???\n\n\nBest Regards,\n\nHi friends,I am running 2 Linux machines, kernel  3.13.0-45-generic #74-Ubuntu SMP.Postgresql version 9.4 in both machine, in a Hot Standby cenario.Master-Slave using WAL files, not streaming replication.The archive_command from master is:archive_command = '/usr/bin/rsync -a -e \"ssh\" \"%p\" slave:/data2/postgres/standby/main/incoming/\"%f\"' #The recovery.conf from slave is:standby_mode = 'on'restore_command = 'cp /data2/postgres/standby/main/incoming/%f \"%p\"'We have a have intensive write operation generating for example 1577 wals segments per hour ~= 26 segments per minute.The slave is very behind from master, more than 20 hours.I can see that all WAL segments on master are on ready state, waiting for archive_command do his jobs. 
The slave is waiting for the wal files as described above.016-11-02 18:57:48 UTC::@:[15698]: LOG:  unexpected pageaddr C955/C5000000 in log segment 000000010000C96000000023, offset 02016-11-02 18:57:54 UTC::@:[15698]: LOG:  restored log file \"000000010000C96000000022\" from archive2016-11-02 18:57:54 UTC::@:[15698]: LOG:  restored log file \"000000010000C96000000023\" from archive2016-11-02 18:57:54 UTC::@:[15698]: LOG:  restored log file \"000000010000C96000000024\" from archivecp: cannot stat ‘/data2/postgres/standby/main/incoming/000000010000C96000000025’: No such file or directory2016-11-02 18:57:54 UTC::@:[15698]: LOG:  unexpected pageaddr C956/71000000 in log segment 000000010000C96000000025, offset 02016-11-02 18:57:58 UTC::@:[15698]: LOG:  restored log file \"000000010000C96000000024\" from archivecp: cannot stat ‘/data2/postgres/standby/main/incoming/000000010000C96000000025’: No such file or directoryIt seems that archive_command is very slowly compared with the amount of WAL segments generated.Any suggestions??? Should I use another strategy to increase the archive_command process speed???Best Regards,", "msg_date": "Wed, 2 Nov 2016 20:06:05 +0100", "msg_from": "Joao Junior <[email protected]>", "msg_from_op": true, "msg_subject": "archive_command too slow." }, { "msg_contents": "On Wed, Nov 2, 2016 at 12:06 PM, Joao Junior <[email protected]> wrote:\n\n> Hi friends,\n>\n> I am running 2 Linux machines, kernel 3.13.0-45-generic #74-Ubuntu SMP.\n> Postgresql version 9.4 in both machine, in a Hot Standby cenario.\n>\n> Master-Slave using WAL files, not streaming replication.\n>\n> The archive_command from master is:\n>\n> archive_command = '/usr/bin/rsync -a -e \"ssh\" \"%p\"\n> slave:/data2/postgres/standby/main/incoming/\"%f\"' #\n>\n\n\nHow long does it take just to set up the ssh tunnel?\n\n$ time ssh slave hostname\n\nIn my hands, this takes about 0.5, every time. If you need to archive 26\nsegments per minute, that much overhead is going to consume a substantial\nfraction of your time budget.\n\nHow much network bandwidth do you have? If you scp a big chunk of files in\none command over to the slave (not into a production directory of it,of\ncourse) how fast does that go?\n\n$ time rsync datadir/pg_xlog/000000010000C9600000004? slave:/tmp/foo/\n\n\n...\n\n\n\n>\n> It seems that archive_command is very slowly compared with the amount of\n> WAL segments generated.\n> Any suggestions??? Should I use another strategy to increase the\n> archive_command process speed???\n>\n\n\nIf network throughput is the problem, use compression, or get a faster\nnetwork.\n\nIf setting up the ssh tunnel is the problem, you could assess whether you\nreally need that security, or compile a custom postgresql with larger WAL\nfile sizes, or write a fancy archive_command which first archives the files\nto a local directory, and then transfers them in chunks to the slave. Or\nmaybe use streaming rather than file shipping.\n\n\nCheers,\n\nJeff\n\nOn Wed, Nov 2, 2016 at 12:06 PM, Joao Junior <[email protected]> wrote:Hi friends,I am running 2 Linux machines, kernel  3.13.0-45-generic #74-Ubuntu SMP.Postgresql version 9.4 in both machine, in a Hot Standby cenario.Master-Slave using WAL files, not streaming replication.The archive_command from master is:archive_command = '/usr/bin/rsync -a -e \"ssh\" \"%p\" slave:/data2/postgres/standby/main/incoming/\"%f\"' #How long does it take just to set up the ssh tunnel?$ time ssh slave hostnameIn my hands, this takes about 0.5, every time.  
If you need to archive 26 segments per minute, that much overhead is going to consume a substantial fraction of your time budget.How much network bandwidth do you have?  If you scp a big chunk of files in one command over to the slave (not into a production directory of it,of course) how fast does that go?$ time rsync datadir/pg_xlog/000000010000C9600000004? slave:/tmp/foo/... It seems that archive_command is very slowly compared with the amount of WAL segments generated.Any suggestions??? Should I use another strategy to increase the archive_command process speed???If network throughput is the problem, use compression, or get a faster network.If setting up the ssh tunnel is the problem, you could assess whether you really need that security, or compile a custom postgresql with larger WAL file sizes, or write a fancy archive_command which first archives the files to a local directory, and then transfers them in chunks to the slave.  Or maybe use streaming rather than file shipping.Cheers,Jeff", "msg_date": "Fri, 4 Nov 2016 09:19:19 -0700", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: archive_command too slow." }, { "msg_contents": "On Fri, Nov 4, 2016 at 1:19 PM, Jeff Janes <[email protected]> wrote:\n> If setting up the ssh tunnel is the problem, you could assess whether you\n> really need that security, or compile a custom postgresql with larger WAL\n> file sizes, or write a fancy archive_command which first archives the files\n> to a local directory, and then transfers them in chunks to the slave. Or\n> maybe use streaming rather than file shipping.\n\nAnother option is to use ssh's ControlMaster and ControlPersist\nfeatures to keep the SSH tunnel alive between commands.\n\nYou'd have to set up the RSYNC_CONNECT_PROG environment variable on\nyour archive command for that and include the relevant options for ssh\nin the command.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 4 Nov 2016 19:30:32 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: archive_command too slow." }, { "msg_contents": "Greetings,\n\n* Joao Junior ([email protected]) wrote:\n> I am running 2 Linux machines, kernel 3.13.0-45-generic #74-Ubuntu SMP.\n> Postgresql version 9.4 in both machine, in a Hot Standby cenario.\n> \n> Master-Slave using WAL files, not streaming replication.\n> \n> The archive_command from master is:\n> \n> archive_command = '/usr/bin/rsync -a -e \"ssh\" \"%p\"\n> slave:/data2/postgres/standby/main/incoming/\"%f\"' #\n\nAdmittedly, you're talking about this for only WAL replay, but I wanted\nto point out that this is not a safe archive_command setting because\nyou are not forcing an fsync() on the remote side to be sure that the\nWAL is saved out to disk before telling PG that it can remove that WAL\nsegment.\n\nWhat that means is that a crash on the remote system at just the wrong\ntime will create a hole in your WAL, possibly meaning that you wouldn't\nbe able to replay WAL on the remote side past that point and would have\nto rebuild/re-sync your replica.\n\nThere's a lot more that needs to be done carefully and correctly to\nensure proper backups than this also.\n\nIn short, don't try to write your own software for doing this. Use one\nof the existing tools which do the correct things for you. 
In\nparticular, when it comes to dealing with archive_command being a\nbottleneck, pgbackrest has an async mode where it will keep open a\nlong-running SSH session to the remote side to avoid the overhead\nassociated with re-exec'ing SSH. The ControlMaster approach helps too,\nbut you still have to exec rsync and ssh and it's ultimately expensive\nto even do that. Unfortunatly, restore_command is single threaded and\nis going to be pretty expensive too, though pgbackrest's archive-get\ntries to be as fast as it can be.\n\npgbackrest also has the ability to perform incremental backups and\nincremental restores, in parallel, allowing you to periodically bring\nyour replica up to speed very quickly using file transfers instead of\nusing WAL replay. This does mean that you have a period of downtime for\nyour replica, of course, but that might be a worthwhile trade-off.\n\nFor pure backups, another approach is to use pg_receivexlog and a tool\nlike barman which supports verifying that the WAL for a given backup has\nreached the remote side.\n\nThanks!\n\nStephen", "msg_date": "Tue, 8 Nov 2016 10:47:01 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: archive_command too slow." } ]
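Before replacing the archive_command wholesale, it helps to measure how far archiving actually lags. A minimal sketch for 9.4 and later (pg_ls_dir requires superuser here, and the path assumes the pre-10 pg_xlog layout used in this thread):

    -- cumulative archiver activity and the last success/failure
    SELECT archived_count, last_archived_wal, last_archived_time,
           failed_count, last_failed_wal, last_failed_time
    FROM pg_stat_archiver;

    -- completed segments still queued for archiving (.ready files)
    SELECT count(*)
    FROM pg_ls_dir('pg_xlog/archive_status') AS f
    WHERE f LIKE '%.ready';

If the .ready count climbs steadily while archived_count advances at less than the ~26 segments per minute being generated, the archive_command is confirmed as the bottleneck everyone above suspected.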
[ { "msg_contents": "So I have a data warehouse type of postgresql server. Each day a new db is\ncreated and new data is inserted into the new db (around 20G). The data is\nfairly simple in structure, around 100 tables and some of the table is\npretty big with millions of rows. Each table has only a primary key, no\nforeign key or other constrain at all. The db is never changed once the\nday's insertion is done. \n\nI need an efficient way to migrate the db from SSD to HDD when the DB\nbecomes \"cold\" (say 1 week later, fewer people are interested in the content\nany more) for sake of saving storage expenses. I want my tables still online\n(readable) when the copy is in process. In theory this should work since\ncopying does not modify the tables nor do we modify the table at all. Only\nat the moment when some metadata (such as pg_class table, etc) is updated\nshould the db block any reading query.\n\nI have tried many solutions, like pg_dump and pg_restore, pg_repack, etc.\nNone of them achieves high throughput since I need a File System level copy\ninstead of re-insertion type of solution. \n\nAn easy way to do so is to simply make a new tablespace pointing at the HDD,\nand change the tablespace. However, the problem of ALTER TABLE SET\nTABLESPACE is that it takes an AccessExclusiveLock to perform, which makes\nmy db offline for quite a while (GBs of data migration takes some time). So\nI decide to change modify the source code of Postgresql 9.6. I changed the\nlockmode of ALTER TABLE SET TABLESPACE to \"ExclusiveLock\" instead of\n\"AccessExclusiveLock\". right after the data copy I acquire an\n\"AccessExclusiveLock\", which persists until the command finishes.\n \nI added\n\t*relation_close(rel, NoLock);\n\trel = relation_open(tableOid, AccessExclusiveLock);*\nat tablecmds.c:9703\n(http://doxygen.postgresql.org/tablecmds_8c_source.html#l09703)\n\nIt seems working. But PostgreSQL is a very complex system. I know changing\nthe lockmode may impact other subcommands, but let's forget such issues at\nthis moment. I will probably make an extension and do the migration instead\nof changing the source code if I am sure it does not break PostgreSQL.\nStill, I am not sure my modification will introduce any other problem that I\njust haven't encounter yet. So the question becomes: will changing\nAccessExclusiveLock to ExclusiveLock before updating metadata introduce any\nproblem? Some kind of deadlock maybe? \n\nTo my understanding it shouldn't since the only difference is\nAccessExclusiveLock does not allow read. However, allowing read when the db\nis still readable should not destroy other mechanisms. Am I correct?\n\nThanks in advance.\n\n\n\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Hot-migration-of-tables-tp5928902.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Nov 2016 11:07:11 -0700 (MST)", "msg_from": "YueLi <[email protected]>", "msg_from_op": true, "msg_subject": "Hot migration of tables" } ]
[ { "msg_contents": "Hello all, I have a query that was running quickly enough on 9.5.5 and has\nslowed to a halt after upgrading to 9.6.1.\n\nThe server hardware is the same, 2 core, 4GB ram DigitalOcean virtual\nserver, running Debian 8.6.\n\nThe query is a pretty heavy reporting query to aggregate the dollar value\nof \"claims\" against a specific \"contract\".\n\nThe query is below for reference.\n\nQuery plan on 9.6.1: https://explain.depesz.com/s/NwmH\nQuery plan on 9.5.5: https://explain.depesz.com/s/ioI4\n\nWe just migrated over the weekend, and this issue was brought to my\nattention today.\n\nSELECT\n> con.client_id\n> , 'product'::text AS type\n> , p.product_number AS identifier\n> , p.product_name AS description\n> , civ.rebate_direct_decimal_model\n> , civ.rebate_deviated_decimal_model\n> , civ.rebate_direct_value\n> , civ.rebate_direct_type\n> , civ.rebate_deviated_value\n> , civ.rebate_deviated_type\n> , actuals.claimant\n> , actuals.claimant_id\n> , civ.estimated_quantity AS prob_exp_volume\n> , COALESCE(actuals.rebate_allowed_quantity, 0) AS actual_volume\n> , CASE WHEN r.rate IS NULL OR (r.rate < 0) THEN NULL ELSE (r.rate *\n> civ.estimated_quantity) END AS prob_exp_dollars\n> , COALESCE(actuals.rebate_allowed_dollars, 0) AS actual_dollars\n> , COALESCE(actuals.transaction_date,null) AS transaction_date\n> FROM contract con\n> INNER JOIN contract_item_view civ\n> ON con.contract_id = civ.contract_id\n> INNER JOIN product p\n> ON civ.product_id = p.product_id\n> INNER JOIN product_uom_conversion civuomc\n> ON civ.uom_type_id = civuomc.uom_type_id\n> AND civ.product_id = civuomc.product_id\n> LEFT JOIN LATERAL (\n> SELECT\n> claim_product.product_id\n> , company.company_name AS claimant\n> , company.company_id AS claimant_id\n> , MAX(COALESCE(transaction.transaction_date,null)) AS transaction_date\n> , SUM((civuomc.rate / cpuomc.rate) *\n> claim_product.rebate_allowed_quantity) AS rebate_allowed_quantity\n> , SUM(claim_product.rebate_allowed_quantity *\n> claim_product.rebate_allowed_rate) AS rebate_allowed_dollars\n> FROM contract\n> INNER JOIN contract_claim_bridge\n> USING (contract_id)\n> INNER JOIN claim\n> USING (claim_id)\n> INNER JOIN claim_product\n> USING (claim_id)\n> INNER JOIN product_uom_conversion cpuomc\n> ON claim_product.uom_type_id = cpuomc.uom_type_id\n> AND claim_product.product_id = cpuomc.product_id\n> INNER JOIN invoice\n> USING (invoice_id)\n> INNER JOIN company\n> ON company.company_id = invoice.claimant_company_id\n> LEFT JOIN LATERAL (\n> SELECT MAX(transaction_date) AS transaction_date\n> FROM claim_transaction\n> WHERE TRUE\n> AND claim_transaction.claim_id = claim.claim_id\n> AND claim_transaction.transaction_type IN\n> ('PAYMENT'::enum.transaction_type,'DEDUCTION'::enum.transaction_type)\n> ORDER BY transaction_date DESC\n> LIMIT 1\n> ) transaction\n> ON TRUE\n> WHERE contract.contract_sequence = con.contract_sequence\n> AND contract.contract_renew_version = con.contract_renew_version\n> AND contract.client_id = con.client_id\n> AND claim.claim_state = 'COMPLETE'\n> GROUP BY claim_product.product_id, company.company_name, company.company_id\n> ) actuals\n> ON actuals.product_id = civ.product_id\n> LEFT JOIN LATERAL gosimple.calculate_contract_item_probable_exposure_rate(\n> (\n> SELECT array_agg(row(x.contract_item_id, x.estimated_quantity,\n> x.price)::gosimple.in_calculate_contract_item_probable_exposure_rate)\n> FROM\n> (\n> SELECT\n> civ2.contract_item_id\n> , civ2.estimated_quantity AS estimated_quantity\n> , AVG(pd2.price / puc2.rate) 
AS price\n> FROM contract con2\n> INNER JOIN contract_item_view civ2\n> ON con2.contract_id = civ2.contract_id\n> LEFT JOIN product_uom_conversion puc2\n> ON puc2.product_id = civ2.product_id\n> AND puc2.uom_type_id = civ2.uom_type_id\n> LEFT JOIN price_default pd2\n> ON civ2.product_id = pd2.product_id\n> AND pd2.active_range @> now()\n> WHERE TRUE\n> AND con2.contract_id = con.contract_id\n> GROUP BY civ2.contract_item_id, civ2.estimated_quantity\n> ) AS x\n> )) r\n> ON civ.contract_item_id = r.contract_item_id\n> WHERE TRUE\n> AND con.contract_id = '54e28f3b-8f87-46fc-abf0-6fe86f528c0c'\n\nHello all, I have a query that was running quickly enough on 9.5.5 and has slowed to a halt after upgrading to 9.6.1.The server hardware is the same, 2 core, 4GB ram DigitalOcean virtual server, running Debian 8.6.The query is a pretty heavy reporting query to aggregate the dollar value of \"claims\" against a specific \"contract\".The query is below for reference.Query plan on 9.6.1: https://explain.depesz.com/s/NwmHQuery plan on 9.5.5: https://explain.depesz.com/s/ioI4We just migrated over the weekend, and this issue was brought to my attention today.SELECT   con.client_id, 'product'::text AS type, p.product_number AS identifier, p.product_name AS description, civ.rebate_direct_decimal_model, civ.rebate_deviated_decimal_model, civ.rebate_direct_value, civ.rebate_direct_type, civ.rebate_deviated_value, civ.rebate_deviated_type, actuals.claimant, actuals.claimant_id, civ.estimated_quantity AS prob_exp_volume, COALESCE(actuals.rebate_allowed_quantity, 0) AS actual_volume, CASE WHEN r.rate IS NULL OR (r.rate < 0) THEN NULL ELSE (r.rate * civ.estimated_quantity) END AS prob_exp_dollars , COALESCE(actuals.rebate_allowed_dollars, 0) AS actual_dollars , COALESCE(actuals.transaction_date,null) AS transaction_dateFROM contract conINNER JOIN contract_item_view civ ON con.contract_id = civ.contract_idINNER JOIN product p ON civ.product_id = p.product_id INNER JOIN product_uom_conversion civuomc ON civ.uom_type_id = civuomc.uom_type_id AND civ.product_id = civuomc.product_idLEFT JOIN LATERAL ( SELECT   claim_product.product_id , company.company_name AS claimant , company.company_id AS claimant_id , MAX(COALESCE(transaction.transaction_date,null)) AS transaction_date , SUM((civuomc.rate / cpuomc.rate) * claim_product.rebate_allowed_quantity) AS rebate_allowed_quantity , SUM(claim_product.rebate_allowed_quantity * claim_product.rebate_allowed_rate) AS rebate_allowed_dollars FROM contract INNER JOIN contract_claim_bridge USING (contract_id) INNER JOIN claim USING (claim_id) INNER JOIN claim_product USING (claim_id) INNER JOIN product_uom_conversion cpuomc ON claim_product.uom_type_id = cpuomc.uom_type_id AND claim_product.product_id = cpuomc.product_id INNER JOIN invoice USING (invoice_id) INNER JOIN company ON company.company_id = invoice.claimant_company_id LEFT JOIN LATERAL ( SELECT MAX(transaction_date) AS transaction_date FROM claim_transaction WHERE TRUE AND claim_transaction.claim_id = claim.claim_id AND claim_transaction.transaction_type IN ('PAYMENT'::enum.transaction_type,'DEDUCTION'::enum.transaction_type) ORDER BY transaction_date DESC LIMIT 1 )  transaction ON TRUE WHERE contract.contract_sequence = con.contract_sequence AND contract.contract_renew_version = con.contract_renew_version AND contract.client_id = con.client_id AND claim.claim_state = 'COMPLETE' GROUP BY claim_product.product_id, company.company_name, company.company_id ) actuals ON actuals.product_id = civ.product_id LEFT JOIN LATERAL 
gosimple.calculate_contract_item_probable_exposure_rate( ( SELECT array_agg(row(x.contract_item_id, x.estimated_quantity, x.price)::gosimple.in_calculate_contract_item_probable_exposure_rate) FROM ( SELECT   civ2.contract_item_id , civ2.estimated_quantity AS estimated_quantity , AVG(pd2.price / puc2.rate) AS price FROM contract con2 INNER JOIN contract_item_view civ2 ON con2.contract_id = civ2.contract_id LEFT JOIN product_uom_conversion puc2 ON puc2.product_id = civ2.product_id AND puc2.uom_type_id = civ2.uom_type_id LEFT JOIN price_default pd2 ON civ2.product_id = pd2.product_id AND pd2.active_range @> now() WHERE TRUE AND con2.contract_id = con.contract_id GROUP BY civ2.contract_item_id, civ2.estimated_quantity ) AS x )) r ON civ.contract_item_id = r.contract_item_idWHERE TRUEAND con.contract_id = '54e28f3b-8f87-46fc-abf0-6fe86f528c0c'", "msg_date": "Mon, 7 Nov 2016 10:52:12 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Query much slower after upgrade to 9.6.1" }, { "msg_contents": "As suggested in the Postgres slack channel by lukasfittl, I disabled\nhashagg on my old server, and ran the query again. That changed one piece\nto a groupagg (like was used on the new server) and the performance was\nsimilar to the 9.6.1 box.\n\n9.5.5 w/ hashagg disabled: https://explain.depesz.com/s/SBVt\n\nAs suggested in the Postgres slack channel by lukasfittl, I disabled hashagg on my old server, and ran the query again. That changed one piece to a groupagg (like was used on the new server) and the performance was similar to the 9.6.1 box.9.5.5 w/ hashagg disabled: https://explain.depesz.com/s/SBVt", "msg_date": "Mon, 7 Nov 2016 13:05:40 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query much slower after upgrade to 9.6.1" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n> As suggested in the Postgres slack channel by lukasfittl, I disabled\n> hashagg on my old server, and ran the query again. That changed one piece\n> to a groupagg (like was used on the new server) and the performance was\n> similar to the 9.6.1 box.\n\nIf the problem is \"new server won't use hashagg\", I'd wonder whether\nthe work_mem setting is the same, or whether maybe you need to bump\nit up some (the planner's estimate of how big the hashtable would be\nmight have changed a bit).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Nov 2016 14:16:49 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower after upgrade to 9.6.1" }, { "msg_contents": ">\n> If the problem is \"new server won't use hashagg\", I'd wonder whether\n> the work_mem setting is the same, or whether maybe you need to bump\n> it up some (the planner's estimate of how big the hashtable would be\n> might have changed a bit).\n>\nI actually was speaking with Stephen Frost in the slack channel, and tested\nboth of those theories.\n\nThe work_mem was the same between the two servers (12MB), but he suggested\nI play around with it. I tried 4MB, 20MB, and 128MB. There was no\ndifference from 12MB with any of them.\n\nI have my default_statistics_target set to 300, and ran a VACUUM ANALYZE\nright after the upgrade to 9.6.1. 
He suggested I lower it, so I put it\nback down to 100, ran a VACUUM ANALYZE, and observed no change in query. I\nalso tried going the other way and set it to 1000, VACUUM ANALYZE, and\nagain, no difference to query.\n\nIf the problem is \"new server won't use hashagg\", I'd wonder whether\nthe work_mem setting is the same, or whether maybe you need to bump\nit up some (the planner's estimate of how big the hashtable would be\nmight have changed a bit).I actually was speaking with Stephen Frost in the slack channel, and tested both of those theories.The work_mem was the same between the two servers (12MB), but he suggested I play around with it. I tried 4MB, 20MB, and 128MB. There was no difference from 12MB with any of them.I have my default_statistics_target set to 300, and ran a VACUUM ANALYZE right after the upgrade to 9.6.1.  He suggested I lower it, so I put it back down to 100, ran a VACUUM ANALYZE, and observed no change in query.  I also tried going the other way and set it to 1000, VACUUM ANALYZE, and again, no difference to query.", "msg_date": "Mon, 7 Nov 2016 14:26:49 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query much slower after upgrade to 9.6.1" }, { "msg_contents": "Adam Brusselback <[email protected]> writes:\n>> If the problem is \"new server won't use hashagg\", I'd wonder whether\n>> the work_mem setting is the same, or whether maybe you need to bump\n>> it up some (the planner's estimate of how big the hashtable would be\n>> might have changed a bit).\n\n> I actually was speaking with Stephen Frost in the slack channel, and tested\n> both of those theories.\n\n> The work_mem was the same between the two servers (12MB), but he suggested\n> I play around with it. I tried 4MB, 20MB, and 128MB. There was no\n> difference from 12MB with any of them.\n\n> I have my default_statistics_target set to 300, and ran a VACUUM ANALYZE\n> right after the upgrade to 9.6.1. He suggested I lower it, so I put it\n> back down to 100, ran a VACUUM ANALYZE, and observed no change in query. I\n> also tried going the other way and set it to 1000, VACUUM ANALYZE, and\n> again, no difference to query.\n\nDid you pay attention to the estimated number of groups (ie, estimated\noutput rowcount for the aggregation plan node) while fooling around with\nthe statistics? How does it compare to reality, and to 9.5's estimate?\n\nThere were several different changes in the planner's number-of-distinct-\nvalues estimation code in 9.6, so maybe the the cause of the difference is\nsomewhere around there.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 07 Nov 2016 14:32:14 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query much slower after upgrade to 9.6.1" }, { "msg_contents": ">\n> Did you pay attention to the estimated number of groups (ie, estimated\n> output rowcount for the aggregation plan node) while fooling around with\n> the statistics? 
How does it compare to reality, and to 9.5's estimate?\n>\n\nI'm re-doing the tests and paying attention to that now.\n\nWith statistics = 100, the under / over estimations change only slightly.\nNothing that drastically alters anything: https://explain.depesz.com/s/GEWy\nWith statistics = 1000 pretty much the same as above:\nhttps://explain.depesz.com/s/6CWM\n\nSo between 9.5.5, 9.6.1, none of the stats changed in a noticeable way.\nChanging the statistics target on 9.6.1 slightly altered the estimates, but\nnothing to write home about.\nAll have some significant deviations from actual row counts in the part of\nthe query which is making the query slow.\n\nDid you pay attention to the estimated number of groups (ie, estimated\noutput rowcount for the aggregation plan node) while fooling around with\nthe statistics?  How does it compare to reality, and to 9.5's estimate?I'm re-doing the tests and paying attention to that now.With statistics = 100, the under / over estimations change only slightly. Nothing that drastically alters anything: https://explain.depesz.com/s/GEWyWith statistics = 1000 pretty much the same as above: https://explain.depesz.com/s/6CWMSo between 9.5.5, 9.6.1, none of the stats changed in a noticeable way. Changing the statistics target on 9.6.1 slightly altered the estimates, but nothing to write home about.All have some significant deviations from actual row counts in the part of the query which is making the query slow.", "msg_date": "Mon, 7 Nov 2016 15:31:33 -0500", "msg_from": "Adam Brusselback <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query much slower after upgrade to 9.6.1" } ]
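When chasing a plan change like this across versions, session-local settings keep the experiments from leaking into production. A sketch of the kind of toggling used in this thread; the GROUP BY query is only a stand-in for the real reporting query:

    BEGIN;
    SET LOCAL work_mem = '256MB';      -- generous, to see whether hashagg becomes eligible
    SET LOCAL enable_hashagg = off;    -- or on, to force the comparison the other way
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT relkind, count(*)
    FROM pg_class
    GROUP BY relkind;                  -- stand-in for the reporting query
    ROLLBACK;

SET LOCAL confines the changes to the transaction, so the server is never left running with oversized work_mem or a disabled plan node type.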
[ { "msg_contents": "Hi,\n\nI try to tune one Recursive CTE.\n\nExplain Plan can be found here\nhttps://explain.depesz.com/s/yLVd\n\nAnyone can give me direction to check?\n\n//H.\n\n\n\n\n\n\n\nHi,\n\nI try to tune one Recursive CTE.\n\nExplain Plan can be found here\nhttps://explain.depesz.com/s/yLVd\n\nAnyone can give me direction to check?\n\n//H.", "msg_date": "Wed, 09 Nov 2016 14:05:55 +0100", "msg_from": "Henrik Ekenberg <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning one Recurcive CTE" }, { "msg_contents": "På onsdag 09. november 2016 kl. 14:05:55, skrev Henrik Ekenberg <\[email protected] <mailto:[email protected]>>:\nHi,\n\n I try to tune one Recursive CTE.\n\n Explain Plan can be found here\n https://explain.depesz.com/s/yLVd\n\n Anyone can give me direction to check?\n\n //H.\n\n \nRule number one; Always provide the query in question when asking for help \ntuning it.\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 9 Nov 2016 14:29:47 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning one Recurcive CTE" }, { "msg_contents": "Hi,\n\nI will need to anonymized before sending it.\nDo you know if there is any tuning documents related to CTE scans\n\n//H\n\n> På onsdag 09. november 2016 kl. 14:05:55, skrev Henrik Ekenberg\n> <[email protected]>:\n>\n>> Hi,\n>>\n>> I try to tune one Recursive CTE.\n>>\n>> Explain Plan can be found here\n>> https://explain.depesz.com/s/yLVd\n>>\n>> Anyone can give me direction to check?\n>>\n>> //H.\n>\n>  \n> Rule number one; Always provide the query in question when asking for\n> help tuning it.\n>  \n> -- ANDREAS JOSEPH KROGH\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com[1]\n> [1]\n>\n>  \n\n\n\nLinks:\n------\n[1] https://www.visena.com", "msg_date": "Wed, 09 Nov 2016 15:30:20 +0100", "msg_from": "Henrik Ekenberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tuning one Recurcive CTE" }, { "msg_contents": "På onsdag 09. november 2016 kl. 15:30:20, skrev Henrik Ekenberg <\[email protected] <mailto:[email protected]>>:\nHi,\n\n I will need to anonymized before sending it.\n Do you know if there is any tuning documents related to CTE scans\n\nYou might want to read this:\nhttp://blog.2ndquadrant.com/postgresql-ctes-are-optimization-fences/\n\nhttps://robots.thoughtbot.com/advanced-postgres-performance-tips#common-table-expressions-and-subqueries\n\nhttps://www.postgresql.org/message-id/CAPo4y_XUJR1sijvTySy9W%2BShpORwzbhSdEzE9pgtc1%3DcTkvpkw%40mail.gmail.com\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>", "msg_date": "Wed, 9 Nov 2016 17:22:34 +0100 (CET)", "msg_from": "Andreas Joseph Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning one Recurcive CTE" } ]
[ { "msg_contents": "Hello,\n\nI am trying to implement an efficient \"like\" over a text[]. I see a lot of people have tried before me and I learnt a lot through the forums. The results of my search is that a query like the following is optimal:\n\nselect count(*)\n from claims\nwhere (select count(*)\n from unnest(\"ICD9_DGNS_CD\") x_\n where x_ like '427%'\n ) > 0\n\nSo I figured I'd create a Function to encapsulate the concept:\n\nCREATE OR REPLACE FUNCTION ArrayLike(text[], text)\nRETURNS bigint\nAS 'select count(*) from unnest($1) a where a like $2'\nLANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n\nThis works functionally, but performs like crap: full table scan, and cannot make use of any index it seems. Basically, it feels like PG can't inline that function.\n\nI have been trying all evening to find a way to rewrite it to trick the compiler/planner into inlining. I tried the operator approach for example, but performance is again not good.\n\ncreate function rlike(text,text)\nreturns bool as 'select $2 like $1' language sql strict immutable;\ncreate operator ``` (procedure = rlike, leftarg = text,\n rightarg = text, commutator = ```);\nCREATE OR REPLACE FUNCTION MyLike(text[], text)\nRETURNS boolean\nAS 'select $2 ``` ANY($1)'\nLANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n\nAnd by not good, I mean that on my table of 2M+ rows, the \"native\" query takes 3s, while the function version takes 9s and the operator version takes (via the function, or through the operator directly), takes 15s.\n\nAny ideas or pointers?\n\n\nThank you,\nLaurent Hasson\n\n\n\n\n\n\n\n\n\n\nHello,\n \nI am trying to implement an efficient “like” over a text[]. I see a lot of people have tried before me and I learnt a lot through the forums. The results of my search is that a query like the following is optimal:\n \nselect count(*)\n\n  from claims\nwhere (select count(*)\n\n          from unnest(\"ICD9_DGNS_CD\") x_\n\n         where x_ like '427%'\n       ) > 0\n \nSo I figured I’d create a Function to encapsulate the concept:\n \nCREATE OR REPLACE FUNCTION ArrayLike(text[], text)\nRETURNS bigint\nAS 'select count(*) from unnest($1) a where a like $2'\nLANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n \nThis works functionally, but performs like crap: full table scan, and cannot make use of any index it seems. Basically, it feels like PG can’t inline that function.\n \nI have been trying all evening to find a way to rewrite it to trick the compiler/planner into inlining. I tried the operator approach for example, but performance is again not good.\n \ncreate function rlike(text,text)\n\nreturns bool as 'select $2 like $1' language sql strict immutable;\ncreate operator  ``` (procedure = rlike, leftarg = text,\n\n                      rightarg = text, commutator = ```);\nCREATE OR REPLACE FUNCTION MyLike(text[], text)\nRETURNS boolean\nAS 'select $2 ``` ANY($1)'\nLANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n \nAnd by not good, I mean that on my table of 2M+ rows, the “native” query takes 3s, while the function version takes 9s and the operator version takes (via the function, or through the operator directly), takes 15s.\n \nAny ideas or pointers?\n \n \nThank you,\nLaurent Hasson", "msg_date": "Fri, 11 Nov 2016 06:54:23 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "\n\n\n> From: [email protected] [mailto:[email protected]] On Behalf Of [email protected]\n> Sent: Freitag, 11. 
November 2016 07:54\n> To: [email protected]\n> Subject: [PERFORM] Inlining of functions (doing LIKE on an array)\n> \n> Hello,\n> \n> I am trying to implement an efficient \"like\" over a text[]. I see a lot of people have tried before me and I learnt a lot through the forums. The results of my search is that a query like the following is optimal:\n> \n> select count(*) \n> from claims\n> where (select count(*) \n> from unnest(\"ICD9_DGNS_CD\") x_ \n> where x_ like '427%'\n> ) > 0\n> \n\nHi,\nare you using GIN indexes?\n\nhttp://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns \n\nmoreover your query can still be optimized:\n=>\nselect count(*) \n from claims\nwhere exists (select *\n from unnest(\"ICD9_DGNS_CD\") x_ \n where x_ like '427%'\n ) \n\nregards,\n\nMarc Mamin\n\n> So I figured I'd create a Function to encapsulate the concept:\n> \n> CREATE OR REPLACE FUNCTION ArrayLike(text[], text)\n> RETURNS bigint\n> AS 'select count(*) from unnest($1) a where a like $2'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n> \n> This works functionally, but performs like crap: full table scan, and cannot make use of any index it seems. Basically, it feels like PG can't inline that function.\n> \n> I have been trying all evening to find a way to rewrite it to trick the compiler/planner into inlining. I tried the operator approach for example, but performance is again not good.\n> \n> create function rlike(text,text) \n> returns bool as 'select $2 like $1' language sql strict immutable;\n> create operator ``` (procedure = rlike, leftarg = text, \n> rightarg = text, commutator = ```);\n> CREATE OR REPLACE FUNCTION MyLike(text[], text)\n> RETURNS boolean\n> AS 'select $2 ``` ANY($1)'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n> \n> And by not good, I mean that on my table of 2M+ rows, the \"native\" query takes 3s, while the function version takes 9s and the operator version takes (via the function, or through the operator directly), takes 15s.\n> \n> Any ideas or pointers?\n> \n> \n> Thank you,\n> Laurent Hasson\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Nov 2016 12:43:31 +0000", "msg_from": "Marc Mamin <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "I tried \"exists\", but won't work in the Function, i.e.,\n\nCREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bigint\n AS 'exists (select * from unnest($1) a where a like $2)'\nLANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n\nIt's as expected though. As for the GIN indices, I tried and it didn't make a difference, which I guess is expected as well because of the Like operator. I don't expect regular indices to work on regular columns for Like operations, especially '%xxx' ones, so I didn't expect GIN indices to work either for Array columns with Like. Am I wrong?\n\nFinally, I think the issue is actually not what I originally thought (i.e., index usage, as per above). But the inlining still is the culprit. 
Here is the plan for \n\nselect count(*) from claims\nwhere (select count(*) from unnest(\"SECONDARY_ICD9_DGNS_CD\") x_ where x_ like '427%' ) > 0\n\n\"Aggregate (cost=2633016.66..2633016.67 rows=1 width=0) (actual time=3761.888..3761.889 rows=1 loops=1)\"\n\" -> Seq Scan on claims (cost=0.00..2631359.33 rows=662931 width=0) (actual time=0.097..3757.314 rows=85632 loops=1)\"\n\" Filter: ((SubPlan 1) > 0)\"\n\" Rows Removed by Filter: 1851321\"\n\" SubPlan 1\"\n\" -> Aggregate (cost=1.25..1.26 rows=1 width=0) (actual time=0.001..0.001 rows=1 loops=1936953)\"\n\" -> Function Scan on unnest a (cost=0.00..1.25 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1936953)\"\n\" Filter: (a ~~ '427%'::text)\"\n\" Rows Removed by Filter: 2\"\n\"Planning time: 0.461 ms\"\n\"Execution time: 3762.272 ms\"\n\nAnd when using the function:\n\n\"Aggregate (cost=614390.75..614390.76 rows=1 width=0) (actual time=8169.416..8169.417 rows=1 loops=1)\"\n\" -> Seq Scan on claims (cost=0.00..612733.43 rows=662931 width=0) (actual time=0.163..8162.679 rows=85632 loops=1)\"\n\" Filter: (tilda.\"like\"(\"SECONDARY_ICD9_DGNS_CD\", '427%'::text) > 0)\"\n\" Rows Removed by Filter: 1851321\"\n\"Planning time: 0.166 ms\"\n\"Execution time: 8169.676 ms\"\n\nThere is something fundamental here it seems, but I am not so good at reading plans to understand the differences here.\n\n\n\n\nThank you,\nLaurent Hasson\n\n-----Original Message-----\nFrom: Marc Mamin [mailto:[email protected]] \nSent: Friday, November 11, 2016 07:44\nTo: [email protected]; [email protected]\nSubject: RE: Inlining of functions (doing LIKE on an array)\n\n\n\n\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of \n> [email protected]\n> Sent: Freitag, 11. November 2016 07:54\n> To: [email protected]\n> Subject: [PERFORM] Inlining of functions (doing LIKE on an array)\n> \n> Hello,\n> \n> I am trying to implement an efficient \"like\" over a text[]. I see a lot of people have tried before me and I learnt a lot through the forums. The results of my search is that a query like the following is optimal:\n> \n> select count(*) \n> from claims\n> where (select count(*) \n> from unnest(\"ICD9_DGNS_CD\") x_ \n> where x_ like '427%'\n> ) > 0\n> \n\nHi,\nare you using GIN indexes?\n\nhttp://stackoverflow.com/questions/4058731/can-postgresql-index-array-columns \n\nmoreover your query can still be optimized:\n=>\nselect count(*)\n from claims\nwhere exists (select *\n from unnest(\"ICD9_DGNS_CD\") x_ \n where x_ like '427%'\n ) \n\nregards,\n\nMarc Mamin\n\n> So I figured I'd create a Function to encapsulate the concept:\n> \n> CREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bigint AS \n> 'select count(*) from unnest($1) a where a like $2'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n> \n> This works functionally, but performs like crap: full table scan, and cannot make use of any index it seems. Basically, it feels like PG can't inline that function.\n> \n> I have been trying all evening to find a way to rewrite it to trick the compiler/planner into inlining. 
I tried the operator approach for example, but performance is again not good.\n> \n> create function rlike(text,text)\n> returns bool as 'select $2 like $1' language sql strict immutable; \n> create operator ``` (procedure = rlike, leftarg = text,\n> rightarg = text, commutator = ```); CREATE OR \n> REPLACE FUNCTION MyLike(text[], text) RETURNS boolean AS 'select $2 \n> ``` ANY($1)'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n> \n> And by not good, I mean that on my table of 2M+ rows, the \"native\" query takes 3s, while the function version takes 9s and the operator version takes (via the function, or through the operator directly), takes 15s.\n> \n> Any ideas or pointers?\n> \n> \n> Thank you,\n> Laurent Hasson\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Nov 2016 16:14:08 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> I tried \"exists\", but won't work in the Function, i.e.,\n> CREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bigint\n> AS 'exists (select * from unnest($1) a where a like $2)'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n\nSyntax and semantics problems. This would work:\n\nregression=# CREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bool\nregression-# as 'select exists (select * from unnest($1) a where a like $2)'\nregression-# LANGUAGE SQL STRICT IMMUTABLE;\nCREATE FUNCTION\nregression=# create table tt (f1 text[]);\nCREATE TABLE\nregression=# explain select * from tt where ArrayLike(f1, 'foo');\n QUERY PLAN \n-------------------------------------------------------\n Seq Scan on tt (cost=0.00..363.60 rows=453 width=32)\n Filter: arraylike(f1, 'foo'::text)\n(2 rows)\n\nBut we don't inline SQL functions containing sub-selects, so you're still\nstuck with the rather high overhead of a SQL function. A plpgsql function\nmight be a bit faster:\n\nCREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bool\nas 'begin return exists (select * from unnest($1) a where a like $2); end'\nLANGUAGE plpgSQL STRICT IMMUTABLE;\n\nBTW, I'd be pretty suspicious of marking this function leakproof,\nbecause the underlying LIKE operator isn't leakproof according to\npg_proc.\n\n\n> It's as expected though. As for the GIN indices, I tried and it didn't make a difference, which I guess is expected as well because of the Like operator. I don't expect regular indices to work on regular columns for Like operations, especially '%xxx' ones, so I didn't expect GIN indices to work either for Array columns with Like. Am I wrong?\n\nPlain GIN index, probably not. 
A pg_trgm index could help with LIKE\nsearches, but I don't think we have a variant of that for array columns.\n\nHave you considered renormalizing the data so that you don't have\narrays?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Nov 2016 11:46:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "On Thu, Nov 10, 2016 at 10:54 PM, [email protected] <\[email protected]> wrote:\n\n> Hello,\n>\n>\n>\n> I am trying to implement an efficient “like” over a text[]. I see a lot of\n> people have tried before me and I learnt a lot through the forums.\n>\n\nHave you looked at parray_gin?\n\nhttps://github.com/theirix/parray_gin\n\n(Also on PGXN, but I don't know how up-to-date it is there)\n\nOr you could create an regular pg_trgm index on the expression:\n\narray_to_string(\"ICD9_DGNS_CD\",'<some safe delimiter>')\n\nIf you can find a safe delimiter to use (one that can't be part of the\ntext[]).\n\nThe performance of these options will depend on both the nature of your\ndata and the nature of your queries.\n\nCheers,\n\nJeff\n\nOn Thu, Nov 10, 2016 at 10:54 PM, [email protected] <[email protected]> wrote:\n\n\nHello,\n \nI am trying to implement an efficient “like” over a text[]. I see a lot of people have tried before me and I learnt a lot through the forums.Have you looked at parray_gin?https://github.com/theirix/parray_gin(Also on PGXN, but I don't know how up-to-date it is there)Or you could create an regular pg_trgm index on the expression:array_to_string(\"ICD9_DGNS_CD\",'<some safe delimiter>')If you can find a safe delimiter to use (one that can't be part of the text[]).The performance of these options will depend on both the nature of your data and the nature of your queries.Cheers,Jeff", "msg_date": "Fri, 11 Nov 2016 15:32:33 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "Thanks for the pointer on the \"select exists\" syntax Tom. Much appreciated. I couldn't figure it out! And as for normalizing, yes, thought about it, but the one-to-many relationship would make other scenarios we have more complex and slower. So I am juggling with trade-offs.\n\nSo, here are my findings. I did 10 runs for each of the 4 options I have arrived at. The runs were pretty consistent, within a few 10th's of a second off each other, so little variability. Not 100% scientific, but good enough for my test. I picked here the last run I had with the plans for illustration.\n\nTake-aways:\n-----------------------\n - The \"select exists\" (#3) approach is roughly 40% faster than \"select count(*) > 0\" (#1).\n - The SQL Function version (#3) Vs the plpgSQL function version (#2) of the same query performs better (~30%)\n - The inlined version (#4) is twice as fast (roughly) as the SQL version (#3).\n\nI wish there were a way to force inlining, or some other mechanism as the performance difference is large here. 
I'll be using the inlining approach when possible, but the SQL Function approach is simpler and will likely be more suitable for some developers.\n\nDetails:\n-----------------\n1- select count(*) > 0 as SQL\n===================================\nCREATE OR REPLACE FUNCTION MyLike2(text[], text) RETURNS boolean\n AS 'select count(*) > 0 from unnest($1) a where a like $2'\nLANGUAGE SQL STRICT IMMUTABLE\n\nEXPLAIN ANALYZE\nselect count(*) \nfrom cms.claims\nwhere MyLike2(\"code\", '427%')\n--\"Aggregate (cost=609418.77..609418.78 rows=1 width=0) (actual time=8464.372..8464.372 rows=1 loops=1)\"\n--\" -> Seq Scan on claims (cost=0.00..607761.44 rows=662931 width=0) (actual time=0.077..8457.963 rows=85632 loops=1)\"\n--\" Filter: MyLike2(\"code\", '427%'::text)\"\n--\" Rows Removed by Filter: 1851321\"\n--\"Planning time: 0.131 ms\"\n--\"Execution time: 8464.407 ms\"\n\n2- select exists as plpgSQL\n===================================\nCREATE OR REPLACE FUNCTION MyLike3(text[], text) RETURNS boolean\n AS 'begin return exists (select * from unnest($1) a where a like $2); end'\nLANGUAGE plpgSQL STRICT IMMUTABLE\n\nEXPLAIN ANALYZE\nselect count(*) \nfrom cms.claims\nwhere MyLike3(\"code\", '427%')\n--\"Aggregate (cost=609418.77..609418.78 rows=1 width=0) (actual time=7708.945..7708.945 rows=1 loops=1)\"\n--\" -> Seq Scan on claims (cost=0.00..607761.44 rows=662931 width=0) (actual time=0.040..7700.528 rows=85632 loops=1)\"\n--\" Filter: MyLike3(\"code\", '427%'::text)\"\n--\" Rows Removed by Filter: 1851321\"\n--\"Planning time: 0.076 ms\"\n--\"Execution time: 7708.975 ms\"\n\n3- select exists as SQL\n===================================\nCREATE OR REPLACE FUNCTION MyLike(text[], text) RETURNS boolean\n AS 'select exists (select * from unnest($1) a where a like $2)'\nLANGUAGE SQL STRICT IMMUTABLE\n\nEXPLAIN ANALYZE\nselect count(*) \nfrom cms.claims\nwhere MyLike(\"code\", '427%')\n--\"Aggregate (cost=609418.77..609418.78 rows=1 width=0) (actual time=5524.690..5524.690 rows=1 loops=1)\"\n--\" -> Seq Scan on claims (cost=0.00..607761.44 rows=662931 width=0) (actual time=0.064..5515.886 rows=85632 loops=1)\"\n--\" Filter: tilda.\"like\"(\"code\", '427%'::text)\"\n--\" Rows Removed by Filter: 1851321\"\n--\"Planning time: 0.097 ms\"\n--\"Execution time: 5524.718 ms\"\n\n4- select exists inlined\n===================================\nEXPLAIN ANALYZE\nselect count(*) \nfrom cms.claims\nwhere exists (select * from unnest(\"SECONDARY_ICD9_DGNS_CD\") a where a like '427%')\n--\"Aggregate (cost=2604013.42..2604013.43 rows=1 width=0) (actual time=2842.259..2842.259 rows=1 loops=1)\"\n--\" -> Seq Scan on claims (cost=0.00..2601527.42 rows=994397 width=0) (actual time=0.017..2837.122 rows=85632 loops=1)\"\n--\" Filter: (SubPlan 1)\"\n--\" Rows Removed by Filter: 1851321\"\n--\" SubPlan 1\"\n--\" -> Function Scan on unnest a (cost=0.00..1.25 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1936953)\"\n--\" Filter: (a ~~ '427%'::text)\"\n--\" Rows Removed by Filter: 2\"\n--\"Planning time: 0.155 ms\"\n--\"Execution time: 2842.311 ms\"\n\n\nThank you,\nLaurent Hasson\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Friday, November 11, 2016 11:46\nTo: [email protected]\nCc: Marc Mamin <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Inlining of functions (doing LIKE on an array)\n\n\"[email protected]\" <[email protected]> writes:\n> I tried \"exists\", but won't work in the Function, i.e., CREATE OR \n> REPLACE FUNCTION ArrayLike(text[], text) RETURNS bigint\n> AS 
'exists (select * from unnest($1) a where a like $2)'\n> LANGUAGE SQL STRICT IMMUTABLE LEAKPROOF\n\nSyntax and semantics problems. This would work:\n\nregression=# CREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bool regression-# as 'select exists (select * from unnest($1) a where a like $2)'\nregression-# LANGUAGE SQL STRICT IMMUTABLE; CREATE FUNCTION regression=# create table tt (f1 text[]); CREATE TABLE regression=# explain select * from tt where ArrayLike(f1, 'foo');\n QUERY PLAN \n-------------------------------------------------------\n Seq Scan on tt (cost=0.00..363.60 rows=453 width=32)\n Filter: arraylike(f1, 'foo'::text)\n(2 rows)\n\nBut we don't inline SQL functions containing sub-selects, so you're still stuck with the rather high overhead of a SQL function. A plpgsql function might be a bit faster:\n\nCREATE OR REPLACE FUNCTION ArrayLike(text[], text) RETURNS bool as 'begin return exists (select * from unnest($1) a where a like $2); end'\nLANGUAGE plpgSQL STRICT IMMUTABLE;\n\nBTW, I'd be pretty suspicious of marking this function leakproof, because the underlying LIKE operator isn't leakproof according to pg_proc.\n\n\n> It's as expected though. As for the GIN indices, I tried and it didn't make a difference, which I guess is expected as well because of the Like operator. I don't expect regular indices to work on regular columns for Like operations, especially '%xxx' ones, so I didn't expect GIN indices to work either for Array columns with Like. Am I wrong?\n\nPlain GIN index, probably not. A pg_trgm index could help with LIKE searches, but I don't think we have a variant of that for array columns.\n\nHave you considered renormalizing the data so that you don't have arrays?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Nov 2016 00:41:16 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "\"[email protected]\" <[email protected]> writes:\n> I wish there were a way to force inlining, or some other mechanism as the performance difference is large here. I'll be using the inlining approach when possible, but the SQL Function approach is simpler and will likely be more suitable for some developers.\n\nI'm not sure that there's any fundamental reason why we don't inline SQL\nfunctions containing sub-selects. It may just be not having wanted to put\nany effort into the case way-back-when. Inlining happens too late to\nallow a resulting WHERE EXISTS to get mutated into a semijoin, but in this\nexample that couldn't happen anyway, so it's not much of an objection.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Nov 2016 14:59:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" }, { "msg_contents": "Yep, agreed. 
A simple lexical macro-like approach to test \"if it works\" could be a simple approach to see if inlining a piece of sql would not break the main query?\n\nLaurent Hasson\nSent from my BlackBerry Passport\n\n Original Message\nFrom: Tom Lane\nSent: Saturday, November 12, 2016 14:59\nTo: [email protected]\nCc: Marc Mamin; [email protected]\nSubject: Re: [PERFORM] Inlining of functions (doing LIKE on an array)\n\n\n\"[email protected]\" <[email protected]> writes:\n> I wish there were a way to force inlining, or some other mechanism as the performance difference is large here. I'll be using the inlining approach when possible, but the SQL Function approach is simpler and will likely be more suitable for some developers.\n\nI'm not sure that there's any fundamental reason why we don't inline SQL\nfunctions containing sub-selects. It may just be not having wanted to put\nany effort into the case way-back-when. Inlining happens too late to\nallow a resulting WHERE EXISTS to get mutated into a semijoin, but in this\nexample that couldn't happen anyway, so it's not much of an objection.\n\n regards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Sat, 12 Nov 2016 20:17:08 +0000", "msg_from": "\"[email protected]\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Inlining of functions (doing LIKE on an array)" } ]
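Jeff's earlier suggestion of a pg_trgm index over the joined array can be sketched as follows. It is not a drop-in: array_to_string() is only STABLE, so an IMMUTABLE SQL wrapper is needed before it can be indexed (reasonable when the delimiter is a constant that cannot occur in the codes), and the element pattern '427%' has to be rewritten against the joined string:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;

    -- wrapper so the expression is indexable; deterministic for these inputs
    CREATE FUNCTION imm_array_to_string(text[], text) RETURNS text
    AS 'SELECT array_to_string($1, $2)'
    LANGUAGE sql IMMUTABLE;

    CREATE INDEX claims_dgns_trgm ON claims
    USING gin (imm_array_to_string("ICD9_DGNS_CD", '|') gin_trgm_ops);

    -- element LIKE '427%' means: the joined string starts with 427,
    -- or 427 appears immediately after a delimiter
    SELECT count(*)
    FROM claims
    WHERE imm_array_to_string("ICD9_DGNS_CD", '|') LIKE '427%'
       OR imm_array_to_string("ICD9_DGNS_CD", '|') LIKE '%|427%';

Whether this beats the inlined EXISTS above depends on the pattern's selectivity, but for short prefixes like '427' it at least gives the planner an indexable path.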
[ { "msg_contents": "Hi,\n\nI have a select moving around a lot of data and takes times\nAny advice tuning this query ?\n\nEXPLAIN (ANALYZE ON, BUFFERS ON)\n    select\n    d.books,\n    d.date publish_date,\n    extract(dow from d.date) publish_dow,\n    week_num_fixed,\n    coalesce(sum(case when i.invno is not null then 1 else 0 end),0) as\ndaily_cnt,\n    coalesce(sum(i.activation_amount_sek),0) as daily_amt_sek\n    from dates_per_books d\n    left join publishing_data i on (d.books=i.books and\nd.date=i.publish_date)\n    group by 1,2,3,4;\n\n( explain : https://explain.depesz.com/s/aDOi )\n    \n                                                                                           \nQUERY\nPLAN                                                                                      \n \n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate  (cost=44606264.52..48172260.66 rows=4318263 width=68)\n(actual time=839980.887..1029679.771 rows=43182733 loops=1)\n   Group Key: d.books, d.date, (date_part('dow'::text,\n(d.date)::timestamp without time zone)), d.week_num_fixed\n   Buffers: shared hit=3, local hit=10153260 read=165591641, temp\nread=2097960 written=2097960\n   I/O Timings: read=399828.103\n   ->  Sort  (cost=44606264.52..45104896.89 rows=199452945 width=48)\n(actual time=839980.840..933883.311 rows=283894005 loops=1)\n         Sort Key: d.books, d.date, (date_part('dow'::text,\n(d.date)::timestamp without time zone)), d.week_num_fixed\n         Sort Method: external merge  Disk: 16782928kB\n         Buffers: shared hit=3, local hit=10153260 read=165591641,\ntemp read=2097960 written=2097960\n         I/O Timings: read=399828.103\n         ->  Merge Left Join  (cost=191.15..13428896.40\nrows=199452945 width=48) (actual time=0.031..734937.112 rows=283894005\nloops=1)\n               Merge Cond: ((d.books = i.books) AND (d.date =\ni.publish_date))\n               Buffers: local hit=10153260 read=165591641\n               I/O Timings: read=399828.103\n               ->  Index Scan using books_date on\ndates_per_books d  (cost=0.56..1177329.91 rows=43182628 width=20) (actual\ntime=0.005..33789.216 rows=43182733 loops=1)\n                     Buffers: local hit=10 read=475818\n                     I/O Timings: read=27761.376\n               ->  Index Scan using activations_books_date\non publishing_data i  (cost=0.57..7797117.25 rows=249348384 width=32)\n(actual time=0.004..579806.706 rows=249348443 loops=1)\n                     Buffers: local hit=10153250\nread=165115823\n                     I/O Timings: read=372066.727\n Planning time: 2.864 ms\n Execution time: 1034284.193 ms\n(21 rows)\n\n(END)\n\n\n\n\n\n\n\nHi,\n\nI have a select moving around a lot of data and takes times\nAny advice tuning this query ?\n\nEXPLAIN (ANALYZE ON, BUFFERS ON)\n    select\n    d.books,\n    d.date publish_date,\n    extract(dow from d.date) publish_dow,\n    week_num_fixed,\n    coalesce(sum(case when i.invno is not null then 1 else 0 end),0) as daily_cnt,\n    coalesce(sum(i.activation_amount_sek),0) as daily_amt_sek\n    from dates_per_books d\n    left join publishing_data i on (d.books=i.books and d.date=i.publish_date)\n    group by 1,2,3,4;\n\n( explain : https://explain.depesz.com/s/aDOi )\n    \n                                                                                            QUERY PLAN                                               
\n\nRegards,\n-- \nDevrim GÜNDÜZ\nEnterpriseDB: http://www.enterprisedb.com\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nTwitter: @DevrimGunduz , @DevrimGunduzTR", "msg_date": "Fri, 11 Nov 2016 18:55:49 +0300", "msg_from": "Devrim =?ISO-8859-1?Q?G=FCnd=FCz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advice tuning this query ?" }, { "msg_contents": "I have a couple of suggestions which should lead to some minor improvements, but in general I am surprised by the huge size of the result set. Is your goal really to get a 43 million row result? When a query returns that many rows, usually all possible query plans are more or less bad.\n\n1) You can remove \"3\" from the group by clause to avoid having to sort that data when we already sort by d.date.\n\n2) If (books, date) is the primary key of dates_per_books we can also safely remove \"4\" from the group by clause, further reducing the length of the keys that we need to sort.\n\n3) For a minor speed-up, change \"coalesce(sum(case when i.invno is not null then 1 else 0 end),0)\" to \"count(i.invno)\". A rewritten version applying all three suggestions is sketched below.
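\n\nPutting those together — assuming (books, date) really is the primary key of dates_per_books, in which case PostgreSQL treats week_num_fixed as functionally dependent on the grouping key and it can stay in the select list — the query would look roughly like:\n
\nselect\n
    d.books,\n
    d.date publish_date,\n
    extract(dow from d.date) publish_dow,\n
    d.week_num_fixed,\n
    count(i.invno) as daily_cnt,\n
    coalesce(sum(i.activation_amount_sek),0) as daily_amt_sek\n
from dates_per_books d\n
left join publishing_data i on (d.books=i.books and d.date=i.publish_date)\n
group by 1,2;\n
\nThis is only a sketch; if (books, date) is not the primary key, keep week_num_fixed in the GROUP BY.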
\n\nAndreas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 11 Nov 2016 17:22:37 +0100", "msg_from": "Andreas Karlsson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advice tuning this query ?" }, { "msg_contents": "On Fri, Nov 11, 2016 at 7:19 AM, Henrik Ekenberg <[email protected]> wrote:\n\n> Hi,\n>\n> I have a select that moves around a lot of data and takes a long time.\n> Any advice on tuning this query?\n>\n> EXPLAIN (ANALYZE ON, BUFFERS ON)\n>\n\nWhen accessing lots of data, sometimes the act of collecting timing on all of the actions makes the query take more than twice as long and distorts the timings it collects.\n\nTry running the same query like:\n\nEXPLAIN (ANALYZE ON, BUFFERS ON, TIMING OFF)\n\nIf the execution times are very similar either way, then you don't have this problem. But if they differ, then you can't depend on the timings reported when timing is turned on. Large sorts are particularly subject to this problem.\n\nMore than half the time (if the times are believable) goes to scanning the index activations_books_date. You might be better off with a sort rather than an index scan. You can test this by doing:\n\nbegin;\ndrop index activations_books_date;\n<explain your query here>;\nrollback;\n\nDon't do that on a production server, as it will block other access to the table for the duration.\n\nYou might also benefit from hash joins/aggregates, but you would have to set work_mem to a very large value to get them. I'd start by setting work_mem in your session to 1TB and seeing if that changes the explain plan (just explain, not explain analyze!). If that supports the hash joins/aggregates, then keep lowering work_mem until you find the minimum that supports the hash plans. Then ponder whether it is safe to use that much work_mem \"for real\" given your RAM and level of concurrent access.
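\n\n(As a sketch of that probing loop — 1TB is only a planner probe here, not a value to actually run the query with:\n
\nSET work_mem = '1TB';\n
EXPLAIN <your query>;   -- does a Hash Join / HashAggregate appear in the plan?\n
SET work_mem = '8GB';\n
EXPLAIN <your query>;   -- still hashing? keep halving until the sort/merge comes back\n
RESET work_mem;)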
\n\nCheers,\n\nJeff", "msg_date": "Sat, 12 Nov 2016 11:25:34 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any advice tuning this query ?" } ]
[ { "msg_contents": "Dear list,\nI’m looking for some guidelines on how to optimize the configuration of a production database dedicated to a DWH application.\nI run the application on different machines and have solved several issues since now but am struggling on a production environment running Red Hat 6.7 and PostgreSQL 9.5.3.\nMy application does a lot of reads and many writes (plain “SELECT … INTO” and “INSERT”, no “UPDATE”), but on a order of magnitude lower than the reads.\nThe work flow consists of two big blocks: an ETL phase and the workloads on the data imported during the ETL phase.\n\nThe biggest schema has about 1.2 billions of rows distributed over a ten of tables; many of those tables are partitioned and have indexes. At the moment the database stores two schemas but I plan to add other three schemas of similar size.\n\nThe machine is virtualized and has 8 CPUs at about 3GHz, 64GB of RAM and 5TB of storage. It runs on Red Hat 6.7, kernel 2.6.x\n\nThe configuration changes I made so far are:\nmax_connections = 30\nshared_buffers = 32GB\nwork_mem = 256MB\nmaintenance_work_mem = 4GB\neffective_io_concurrency = 30\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 2.0\neffective_cache_size = 48GB\ndefault_statistics_target = 1000\n\nautovacuum is on and the collation is ‘C’.\n\n\nThe first issue I faced was about maintenance_work_mem because I set it to 16GB and the server silently crashed during a VACUUM because I didn’t consider that it could take up to autovacuum_max_workers * maintenance_work_mem (roughly 48GB). So I lowered maintenance_work_mem to 4GB and it did work. Should I set maintenance_work_mem to a smaller value (1GB) after the ETL terminates or can I leave it at 4GB without degrading the overall performance?\n\nThe second issue emerged during a intensive parallel query. I implemented a splitter that parallelize certain kind of queries. There were 8 similar queries running that was working on 8 overall disjoined subsets of the same table; this table has roughly 4.5 millions of rows. These queries uses SELECT DISTINCT, ORDER BY, OVER (PARTITION BY … ORDER BY) and COALESCE(). At a certain point the server crashed and I found the following error in the logs: \n\npostgres server process was terminated by signal 9 killed\n\nAfter some research, I found that probably it was the OOM killer. Running “dmesg” tells that effectively it was. Reading the documentation and this answer on SO ( http://stackoverflow.com/questions/16418173/psql-seems-to-timeout-with-long-queries <http://stackoverflow.com/questions/16418173/psql-seems-to-timeout-with-long-queries> ), I realized that probably the issue is due to a misconfiguration. The value I set for this pg instance don’t seem to be so wrong, except maybe from maintenance_work_mem. I will certainly disable OOM as suggested by the official docs ( https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT <https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT> ) but was wondering if I could tune the configuration a little better. Can someone give me some more advices?\n\nI run the same application with different data (and workload) on other machines, but they have different configurations (Ubuntu 16.0.4). 
\n\nThe second issue emerged during an intensive parallel query. I implemented a splitter that parallelizes certain kinds of queries. There were 8 similar queries running that were working on 8 disjoint subsets of the same table; this table has roughly 4.5 million rows. These queries use SELECT DISTINCT, ORDER BY, OVER (PARTITION BY … ORDER BY) and COALESCE(). At a certain point the server crashed and I found the following error in the logs: \n\npostgres server process was terminated by signal 9 killed\n\nAfter some research, I found that it was probably the OOM killer. Running “dmesg” confirms that effectively it was. Reading the documentation and this answer on SO ( http://stackoverflow.com/questions/16418173/psql-seems-to-timeout-with-long-queries ), I realized that the issue is probably due to a misconfiguration. The values I set for this pg instance don’t seem to be so wrong, except maybe maintenance_work_mem. I will certainly disable memory overcommit as suggested by the official docs ( https://www.postgresql.org/docs/current/static/kernel-resources.html#LINUX-MEMORY-OVERCOMMIT ) but was wondering if I could tune the configuration a little better. Can someone give me some more advice?\n\nI run the same application with different data (and workload) on other machines, but they have different configurations (Ubuntu 16.0.4). On one of them I previously disabled the virtual memory overcommit and never experienced that issue, but the machine has 128GB of RAM.
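\n\n(For reference, the kernel-side settings the linked docs describe amount to something like the following — the ratio is illustrative and should be tuned for 64GB of RAM plus swap:\n
\n# /etc/sysctl.conf\n
vm.overcommit_memory = 2\n
vm.overcommit_ratio = 80\n
\nthen “sysctl -p” to apply. With overcommit_memory=2 the kernel refuses allocations instead of killing processes, so a query fails with an out-of-memory error rather than a backend being taken down with SIGKILL.)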
\n\nI hope to have been clear enough.\nThank you everyone\n Pietro", "msg_date": "Mon, 14 Nov 2016 12:45:38 +0100", "msg_from": "Pietro Pugni <[email protected]>", "msg_from_op": true, "msg_subject": "Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3 production\n environment" }, { "msg_contents": "dear Pietro,\nare you sure about\n\neffective_io_concurrency = 30\n\ncould you please explain the type of disk storage?\n\nOn 14/Nov/2016 12:46, \"Pietro Pugni\" <[email protected]> wrote:\n\n> Dear list,\n> [...]\n> The configuration changes I made so far are:\n> max_connections = 30\n> shared_buffers = 32GB\n> work_mem = 256MB\n> maintenance_work_mem = 4GB\n> effective_io_concurrency = 30\n> checkpoint_completion_target = 0.9\n> random_page_cost = 2.0\n> effective_cache_size = 48GB\n> default_statistics_target = 1000\n> [...]\n> Thank you everyone\n> Pietro
", "msg_date": "Mon, 14 Nov 2016 18:36:51 +0100", "msg_from": "domenico febbo <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3\n production environment" }, { "msg_contents": "Dear Domenico,\nI pushed a little hard on that because the virtualizer runs on a distributed system composed of 7 clusters with more than 100 cores and enterprise storage. I know that usually effective_io_concurrency is set based on the number of disks available in a RAID configuration (minus the stripe disks), so I decided to push hard on this due to the nature of the host machine. I don’t have much information about that machine but I can investigate.
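\n\n(For experimenting, effective_io_concurrency can be changed per session, and in 9.5 it only drives prefetching for bitmap heap scans — a sketch:\n
\nSET effective_io_concurrency = 8;\n
EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;   -- some query whose plan uses a Bitmap Heap Scan\n
SET effective_io_concurrency = 64;\n
EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;\n
\nso the value can be validated against the actual storage rather than guessed.)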
\n\nAnyway, the OOM configuration described in the official documentation prevents the kernel from killing the postmaster.\n\nI’m still wondering about the other configuration parameters, whether they are reasonable or can be tweaked for more performance.\n\nThank you for your support\n Pietro\n\nPS I made a typo in the configuration. max_connections is 20, not 30.\n\n\n> On 14 Nov 2016, at 18:36, domenico febbo <[email protected]> wrote:\n> \n> dear Pietro, \n> are you sure about\n> \n> effective_io_concurrency = 30\n> \n> could you please explain the type of disk storage?\n> \n> \n> On 14/Nov/2016 12:46, \"Pietro Pugni\" <[email protected]> wrote:\n> Dear list,\n> [...]
", "msg_date": "Mon, 14 Nov 2016 21:50:38 +0100", "msg_from": "Pietro Pugni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3 production\n environment" }, { "msg_contents": "On Mon, Nov 14, 2016 at 11:36 AM, domenico febbo\n<[email protected]> wrote:\n> dear Pietro,\n> are you sure about\n>\n> effective_io_concurrency = 30\n>\n> could you please explain the type of disk storage?\n\nFast storage can certainly utilize high settings of effective_io_concurrency, at least in some cases... for example, see:\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Nov 2016 15:24:03 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3\n production environment" }, { "msg_contents": "On Mon, Nov 14, 2016 at 3:45 AM, Pietro Pugni <[email protected]>\nwrote:\n\n>\n> The first issue I faced was about maintenance_work_mem: I set it to 16GB and the server silently crashed during a VACUUM because I didn’t consider that it could take up to autovacuum_max_workers * maintenance_work_mem (roughly 48GB).\n>\n\nI don't think that this is the true cause of the problem. In current versions of PostgreSQL, VACUUM cannot make use of more than 1GB of process-local memory, even if maintenance_work_mem is set to a far greater value.\n\nCheers,\n\nJeff", "msg_date": "Tue, 15 Nov 2016 15:47:44 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Some tuning suggestions on a Red Hat 6.7 - PG 9.5.3\n production environment" } ]
[ { "msg_contents": "Hi,\n\nOn our production environment (PostgreSQL 9.4.5 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat\n4.8.5-4), 64-bit), one of our queries runs very slow, about 5 minutes . We\nnoticed that it does not use an index that we anticapited it would.\n\nThe query is\n\nselect booking0_.*\nfrom booking booking0_\nwhere booking0_.customer_id in (\n select customer1_.id\n from customer customer1_\n where lower((customer1_.first_name||'\n'||customer1_.last_name)) like '%gatef%'\n )\norder by booking0_.id desc\nlimit 30;\n\nWe have just over 3.2 million records on booking and customer tables.\n\n\n 1.\n\n QUERY PLAN\n\n 2.\n\n Limit (cost=0.86..11549.23 rows=30 width=241) (actual\ntime=9459.997..279283.497 rows=10 loops=1)\n\n 3.\n\n -> Nested Loop Semi Join (cost=0.86..1979391.88 rows=5142\nwidth=241) (actual time=9459.995..279283.482 rows=10 loops=1)\n\n 4.\n\n -> Index Scan Backward using pk_booking_id on booking\nbooking0_ (cost=0.43..522902.65 rows=2964333 width=241) (actual\ntime=0.043..226812.994 rows=3212711 loops=1)\n\n 5.\n\n -> Index Scan using pk_customer_id on customer customer1_\n(cost=0.43..0.49 rows=1 width=4) (actual time=0.016..0.016 rows=0\nloops=3212711)\n\n 6.\n\n Index Cond: (id = booking0_.customer_id)\n\n 7.\n\n Filter: (lower((((first_name)::text || ' '::text) ||\n(last_name)::text)) ~~ '%gatef%'::text)\n\n 8.\n\n Rows Removed by Filter: 1\n\n 9.\n\n Planning time: 2.901 ms\n\n 10.\n\n Execution time: 279283.646 ms\n\n\n\n\nThe index that we expect it to use is\n\nCREATE INDEX idx_customer_name_lower\n ON customer\n USING gin\n (lower((first_name::text || ' '::text) || last_name::text) COLLATE\npg_catalog.\"default\" gin_trgm_ops);\n\nexplain (analyze, buffers)\nselect customer1_.id\n from customer customer1_\n where lower((customer1_.first_name||' '||customer1_.last_name)) like\n'%gatef%';\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on customer customer1_ (cost=2875.87..11087.13 rows=5144\nwidth=4) (actual time=768.692..1571.241 rows=11 loops=1)\n Recheck Cond: (lower((((first_name)::text || ' '::text) ||\n(last_name)::text)) ~~ '%gatef%'::text)\n Heap Blocks: exact=11\n Buffers: shared hit=1420 read=23\n -> Bitmap Index Scan on idx_customer_name_lower (cost=0.00..2874.59\nrows=5144 width=0) (actual time=763.327..763.327 rows=11 loops=1)\n Index Cond: (lower((((first_name)::text || ' '::text) ||\n(last_name)::text)) ~~ '%gatef%'::text)\n Buffers: shared hit=1418 read=14\n Planning time: 240.111 ms\n Execution time: 1571.403 ms\n\nAnd then filter with customer_id index on booking table\n\nCREATE INDEX idx_booking_customer_id\n ON booking\n USING btree\n (customer_id);\n\nWe have also created an index on booking table for id desc and customer_id\n\ncreate index concurrently idx_booking_id_desc_customer_id on booking\nusing btree(id desc, customer_id);\n\nBut result was same\n\n\n 1.\n\n QUERY PLAN\n\n 2.\n\n Limit (cost=0.86..12223.57 rows=30 width=241) (actual\ntime=1282.724..197879.302 rows=10 loops=1)\n\n 3.\n\n -> Nested Loop Semi Join (cost=0.86..2094972.51 rows=5142\nwidth=241) (actual time=1282.724..197879.292 rows=10 loops=1)\n\n 4.\n\n -> Index Scan Backward using pk_booking_id on booking\nbooking0_ (cost=0.43..525390.04 rows=3212872 width=241) (actual\ntime=0.012..131563.721 rows=3212879 loops=1)\n\n 5.\n\n -> Index Scan using pk_customer_id on customer customer1_\n(cost=0.43..0.49 
\n\nIf we remove \"order by id desc\" then it uses the index that we expect it to use. But we need that order by clause: with the same query we are using pagination (offset) if there are more than 30 records.\n
\n Limit  (cost=2790.29..2968.29 rows=30 width=241) (actual time=27.932..38.643 rows=10 loops=1)\n
   ->  Nested Loop  (cost=2790.29..33299.63 rows=5142 width=241) (actual time=27.931..38.640 rows=10 loops=1)\n
         ->  Bitmap Heap Scan on customer customer1_  (cost=2789.86..10997.73 rows=5142 width=4) (actual time=27.046..27.159 rows=11 loops=1)\n
               Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%gatef%'::text)\n
               Heap Blocks: exact=11\n
               ->  Bitmap Index Scan on idx_customer_name_lower  (cost=0.00..2788.57 rows=5142 width=0) (actual time=27.013..27.013 rows=11 loops=1)\n
                     Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%gatef%'::text)\n
         ->  Index Scan using idx_booking_customer_id on booking booking0_  (cost=0.43..4.33 rows=1 width=241) (actual time=1.041..1.041 rows=1 loops=11)\n
               Index Cond: (customer_id = customer1_.id)\n
 Planning time: 0.414 ms\n
 Execution time: 38.757 ms
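\n\n(A known workaround for this pattern — a sketch only, not tested against this data: ordering by an expression such as id + 0 keeps the same result order but prevents the planner from satisfying the ORDER BY with a backward walk of pk_booking_id, so it has to start from the customer side and sort instead:\n
\nselect booking0_.*\n
from booking booking0_\n
where booking0_.customer_id in (\n
  select customer1_.id\n
  from customer customer1_\n
  where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'\n
)\n
order by booking0_.id + 0 desc\n
limit 30;)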
\n\nOn a different database with 450K records it uses idx_customer_name_lower:\n
\n Limit  (cost=3982.71..3982.79 rows=30 width=597) (actual time=0.166..0.166 rows=0 loops=1)\n
   Buffers: shared hit=10\n
   ->  Sort  (cost=3982.71..3984.49 rows=711 width=597) (actual time=0.165..0.165 rows=0 loops=1)\n
         Sort Key: booking0_.id\n
         Sort Method: quicksort  Memory: 25kB\n
         Buffers: shared hit=10\n
         ->  Nested Loop  (cost=25.94..3961.71 rows=711 width=597) (actual time=0.159..0.159 rows=0 loops=1)\n
               Buffers: shared hit=10\n
               ->  Bitmap Heap Scan on customer customer1_  (cost=25.52..1133.10 rows=711 width=4) (actual time=0.159..0.159 rows=0 loops=1)\n
                     Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%gatef%'::text)\n
                     Buffers: shared hit=10\n
                     ->  Bitmap Index Scan on idx_customer_name_lower  (cost=0.00..25.34 rows=711 width=0) (actual time=0.157..0.157 rows=0 loops=1)\n
                           Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%gatef%'::text)\n
                           Buffers: shared hit=10\n
               ->  Index Scan using idx_booking_id_desc_customer_id on booking booking0_  (cost=0.42..3.97 rows=1 width=597) (never executed)\n
                     Index Cond: (customer_id = customer1_.id)\n
 Planning time: 1.052 ms\n
 Execution time: 0.241 ms\n
\nWe are using autovacuum but we have also run VACUUM ANALYZE on those tables explicitly. Also, every morning VACUUM ANALYZE runs on this database.\n
\nautovacuum_vacuum_threshold = 500\n
autovacuum_analyze_threshold = 500\n
autovacuum_vacuum_scale_factor = 0.1\n
autovacuum_analyze_scale_factor = 0.1\n
\nSome configuration settings we have changed:\n
\nrandom_page_cost = 2.0\n
cpu_tuple_cost = 0.005\n
cpu_index_tuple_cost = 0.005\n
shared_buffers = 4GB\n
work_mem = 128MB\n
\nAs history: before the gin index, we were using btree indexes on the first_name and last_name columns and we were searching with 'gatef%', so we only found names starting with the given parameter. We were not satisfied with the OR condition there (besides, we wanted a \"contains\" search), which is why we chose to create the GIN index. On its own, a search on customer is really fast. In our development database with a smaller amount of data, we also saw the query planner choose this index instead of the backward index scan, but with more data, as on production, it chooses not to use this index.\n\nWould you have any suggestions for improving the execution time of this query?\n\nThanks in advance.\n\nSeckin
", "msg_date": "Mon, 14 Nov 2016 15:01:27 +0300", "msg_from": "Seckin Pulatkan <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner chooses index scan backward instead of better index\n option" }, { "msg_contents": "On Mon, Nov 14, 2016 at 4:01 AM, Seckin Pulatkan <[email protected]>\nwrote:\n\n> Hi,\n>\n> On our production environment (PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit), one of our queries runs very slowly, taking about 5 minutes. We noticed that it does not use an index that we anticipated it would.\n>\n> The query is\n>\n> select booking0_.*\n> from booking booking0_\n> where booking0_.customer_id in (\n>   select customer1_.id\n>   from customer customer1_\n>   where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'\n> )\n> order by booking0_.id desc\n> limit 30;\n>\n\nIt thinks it is going to find 30 rows which meet your condition very quickly, so by walking the index backwards it can avoid needing to do a sort. But the rows which meet your sub-select conditions are biased towards the front of the index, so in fact it has to walk backwards through most of your index before finding 30 eligible rows.\n\nYour best bet is probably to force it into the plan you want by using a CTE:\n\nwith t as (\n
  select booking0_.*\n
  from booking booking0_\n
  where booking0_.customer_id in (\n
    select customer1_.id\n
    from customer customer1_\n
    where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'\n
  )\n
)\n
select * from t order by t.id desc limit 30;
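\n\n(If later pages are needed, the same shape should work with an offset on the outer select — a sketch:\n
\nwith t as ( ...same CTE as above... )\n
select * from t order by t.id desc limit 30 offset 30;\n
\nEach page still re-runs the inner query, but with the fast bitmap plan rather than the backward index walk.)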
\n\nCheers,\n\nJeff", "msg_date": "Mon, 14 Nov 2016 08:50:13 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner chooses index scan backward instead of\n better index option" }, { "msg_contents": "Thank you, Jeff, for your reply.\n\nYes, we tested with a CTE as well, but we are using Hibernate to generate the query, and there are some more conditions that can be added if certain parameters are supplied. As far as I know, Hibernate still does not support CTE structures. That's why I will keep converting it to a native query as a last resort — but much appreciated for the explanation of how the query planner is thinking.\n\nexplain (analyze, buffers)\n
with cte as (select booking0_.*\n
from booking booking0_\n
where (booking0_.customer_id in (select customer1_.id from customer customer1_ where (lower((customer1_.first_name||' '||customer1_.last_name)) like '%sahby%')))\n
)\n
select * from cte\n
order by cte.id desc limit 30;\n
\n Limit  (cost=34171.73..34171.80 rows=30 width=1237) (actual time=321.370..321.371 rows=4 loops=1)\n
   Buffers: shared hit=18 read=1680\n
   CTE cte\n
     ->  Nested Loop  (cost=3384.39..33967.93 rows=5155 width=241) (actual time=309.167..321.312 rows=4 loops=1)\n
           Buffers: shared hit=15 read=1680\n
           ->  Bitmap Heap Scan on customer customer1_  (cost=3383.96..11612.18 rows=5155 width=4) (actual time=302.196..310.625 rows=4 loops=1)\n
                 Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n
                 Heap Blocks: exact=3\n
                 Buffers: shared hit=5 read=1674\n
                 ->  Bitmap Index Scan on idx_customer_name_lower  (cost=0.00..3382.67 rows=5155 width=0) (actual time=300.142..300.142 rows=4 loops=1)\n
                       Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n
                       Buffers: shared hit=5 read=1671\n
           ->  Index Scan using idx_booking_customer_id on booking booking0_  (cost=0.43..4.33 rows=1 width=241) (actual time=2.666..2.667 rows=1 loops=4)\n
                 Index Cond: (customer_id = customer1_.id)\n
                 Buffers: shared hit=10 read=6\n
   ->  Sort  (cost=203.80..216.69 rows=5155 width=1237) (actual time=321.368..321.369 rows=4 loops=1)\n
         Sort Key: cte.id\n
         Sort Method: quicksort  Memory: 25kB\n
         Buffers: shared hit=18 read=1680\n
         ->  CTE Scan on cte  (cost=0.00..51.55 rows=5155 width=1237) (actual time=309.173..321.327 rows=4 loops=1)\n
               Buffers: shared hit=15 read=1680\n
 Planning time: 92.501 ms\n
 Execution time: 321.521 ms
\n\nOne more piece of information: we also have a passenger table, which has the same name fields and search as customer, but the relation is different — passenger (4.2 million records) has booking_id — and there the query planner behaves differently. It runs the IN-clause query first.\n
\nexplain (analyze, buffers)\n
select booking0_.*\n
from booking booking0_\n
where (booking0_.id in (select p.booking_id from passenger p where (lower((p.first_name||' '||p.last_name)) like '%sahby%')))\n
order by booking0_.id desc limit 30;\n
\n Limit  (cost=4871.81..4871.88 rows=30 width=241) (actual time=91.867..91.868 rows=4 loops=1)\n
   Buffers: shared hit=22 read=1683\n
   ->  Sort  (cost=4871.81..4872.76 rows=383 width=241) (actual time=91.866..91.866 rows=4 loops=1)\n
         Sort Key: booking0_.id\n
         Sort Method: quicksort  Memory: 25kB\n
         Buffers: shared hit=22 read=1683\n
         ->  Nested Loop  (cost=4107.13..4860.49 rows=383 width=241) (actual time=90.791..91.850 rows=4 loops=1)\n
               Buffers: shared hit=22 read=1683\n
               ->  HashAggregate  (cost=4106.70..4107.55 rows=170 width=4) (actual time=86.624..86.627 rows=4 loops=1)\n
                     Group Key: p.booking_id\n
                     Buffers: shared hit=10 read=1679\n
                     ->  Bitmap Heap Scan on passenger p  (cost=3366.97..4105.74 rows=383 width=4) (actual time=86.561..86.613 rows=4 loops=1)\n
                           Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n
                           Heap Blocks: exact=4\n
                           Buffers: shared hit=10 read=1679\n
                           ->  Bitmap Index Scan on idx_passenger_name_lower  (cost=0.00..3366.88 rows=383 width=0) (actual time=80.148..80.148 rows=4 loops=1)\n
                                 Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n
                                 Buffers: shared hit=7 read=1678\n
               ->  Index Scan using pk_booking_id on booking booking0_  (cost=0.43..4.42 rows=1 width=241) (actual time=1.300..1.301 rows=1 loops=4)\n
                     Index Cond: (id = p.booking_id)\n
                     Buffers: shared hit=12 read=4\n
 Planning time: 39.774 ms\n
 Execution time: 92.085 ms\n
\nRegards,\n\nSeckin\n\nps: sorry Jeff for the double email.\n\nOn Mon, Nov 14, 2016 at 7:50 PM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Nov 14, 2016 at 4:01 AM, Seckin Pulatkan <[email protected]> wrote:\n>\n>> Hi,\n>> [...]\n>> select booking0_.*\n>> from booking booking0_\n>> where booking0_.customer_id in (\n>>   select customer1_.id\n>>   from customer customer1_\n>>   where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'\n>> )\n>> order by booking0_.id desc\n>> limit 30;\n>>\n>\n> It thinks it is going to find 30 rows which meet your condition very quickly, so by walking the index backwards it can avoid needing to do a sort. 
But the rows which meet your sub-select conditions are biased towards the front of the index, so in fact it has to walk backwards through most of your index before finding 30 eligible rows.\n>\n> Your best bet is probably to force it into the plan you want by using a CTE:\n>\n> with t as (\n>   select booking0_.*\n>   from booking booking0_\n>   where booking0_.customer_id in (\n>     select customer1_.id\n>     from customer customer1_\n>     where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'\n>   )\n> )\n> select * from t order by t.id desc limit 30;\n>\n> Cheers,\n>\n> Jeff\n>
It runs the in clause query first.explain (analyze, buffers) select booking0_.*from booking booking0_ where (booking0_.id in (select p.booking_id from passenger p where (lower((p.first_name||' '||p.last_name)) like '%sahby%'))) order by booking0_.id desc limit 30 QUERY PLAN---------------------------------------------------------------------------------------------------------------------------------------------------------------- Limit  (cost=4871.81..4871.88 rows=30 width=241) (actual time=91.867..91.868 rows=4 loops=1)   Buffers: shared hit=22 read=1683   ->  Sort  (cost=4871.81..4872.76 rows=383 width=241) (actual time=91.866..91.866 rows=4 loops=1)         Sort Key: booking0_.id         Sort Method: quicksort  Memory: 25kB         Buffers: shared hit=22 read=1683         ->  Nested Loop  (cost=4107.13..4860.49 rows=383 width=241) (actual time=90.791..91.850 rows=4 loops=1)               Buffers: shared hit=22 read=1683               ->  HashAggregate  (cost=4106.70..4107.55 rows=170 width=4) (actual time=86.624..86.627 rows=4 loops=1)                     Group Key: p.booking_id                     Buffers: shared hit=10 read=1679                    \n ->  Bitmap Heap Scan on passenger p  (cost=3366.97..4105.74 rows=383\n width=4) (actual time=86.561..86.613 rows=4 loops=1)                           Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)                           Heap Blocks: exact=4                           Buffers: shared hit=10 read=1679                          \n ->  Bitmap Index Scan on idx_passenger_name_lower  \n(cost=0.00..3366.88 rows=383 width=0) (actual time=80.148..80.148 rows=4\n loops=1)                                 Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)                                 Buffers: shared hit=7 read=1678              \n ->  Index Scan using pk_booking_id on booking booking0_  \n(cost=0.43..4.42 rows=1 width=241) (actual time=1.300..1.301 rows=1 \nloops=4)                     Index Cond: (id = p.booking_id)                     Buffers: shared hit=12 read=4 Planning time: 39.774 ms Execution time: 92.085 msRegards,Seckinps: sorry Jeff for double email.On Mon, Nov 14, 2016 at 7:50 PM, Jeff Janes <[email protected]> wrote:On Mon, Nov 14, 2016 at 4:01 AM, Seckin Pulatkan <[email protected]> wrote:Hi,On our production environment (PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4), 64-bit), one of our queries runs very slow, about 5 minutes . We noticed that it does not use an index that we anticapited it would. The query is select booking0_.*from booking booking0_ where booking0_.customer_id in (              select customer1_.id                  from customer customer1_                where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%'          ) order by booking0_.id desc limit 30;It thinks it is going to find 30 rows which meet your condition very quickly, so by walking the index backwards it can avoid needing to do a sort.  
But, the rows which meet your sub-select conditions are biased towards the front of the index, so in fact it was to walk backwards through most of your index before finding 30 eligible rows.Your best bet is probably to force it into the plan you want by using a CTE:with t as (select booking0_.*from booking booking0_ where booking0_.customer_id in (              select customer1_.id                  from customer customer1_                where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%')  select * from t order by booking0_.id desc limit 30;Cheers,Jeff", "msg_date": "Tue, 15 Nov 2016 11:44:57 +0300", "msg_from": "Seckin Pulatkan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner chooses index scan backward instead of\n better index option" }, { "msg_contents": "After Jeff Janes' reply, I have tried a couple of limit values and found at\nthe current state of data, 90 was a change on the query planner.\n\n explain (analyze, buffers)\n select booking0_.*\n from booking booking0_\n where (booking0_.customer_id in (select customer1_.id from\ncustomer customer1_ where (lower((customer1_.first_name||'\n'||customer1_.last_name)) like '%sahby%')))\n order by booking0_.id desc limit 90;\n\n\n QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=34267.44..34267.66 rows=90 width=241) (actual\ntime=20.140..20.141 rows=4 loops=1)\n Buffers: shared hit=1742\n -> Sort (cost=34267.44..34280.33 rows=5157 width=241) (actual\ntime=20.139..20.140 rows=4 loops=1)\n Sort Key: booking0_.id\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1742\n -> Nested Loop (cost=3478.41..34074.26 rows=5157\nwidth=241) (actual time=20.079..20.117 rows=4 loops=1)\n Buffers: shared hit=1742\n -> Bitmap Heap Scan on customer customer1_\n(cost=3477.98..11709.61 rows=5157 width=4) (actual time=20.055..20.063\nrows=4 loops=1)\n Recheck Cond: (lower((((first_name)::text ||\n' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n Heap Blocks: exact=3\n Buffers: shared hit=1726\n -> Bitmap Index Scan on\nidx_customer_name_lower (cost=0.00..3476.69 rows=5157 width=0)\n(actual time=20.024..20.024 rows=4 loops=1)\n Index Cond: (lower((((first_name)::text\n|| ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n Buffers: shared hit=1723\n -> Index Scan using idx_booking_customer_id on\nbooking booking0_ (cost=0.43..4.33 rows=1 width=241) (actual\ntime=0.008..0.008 rows=1 loops=4)\n Index Cond: (customer_id = customer1_.id)\n Buffers: shared hit=16\n Planning time: 0.431 ms\n Execution time: 20.187 ms\n\n\nSo instead of converting Criteria api query into Native query to use CTE as\nsuggested by Jeff :\n{quote}\nwith t as\n(select booking0_.*\nfrom booking booking0_\nwhere booking0_.customer_id in (\n select customer1_.id\n from customer customer1_\n where lower((customer1_.first_name||'\n'||customer1_.last_name)) like '%gatef%'\n) select * from t order by booking0_.id desc limit 30;\n{quote}\n\nI have used a limit of 500 (just to be far away from 90 when table size is\nincreased) and then take top 30 on Java layer.\n\nThanks,\n\nSeckin\n\nAfter Jeff Janes' reply, I have tried a couple of limit values and found at the current state of data, 90 was a change on the query planner.\n explain (analyze, buffers)\n select booking0_.*\n from booking booking0_\n where (booking0_.customer_id in (select customer1_.id from customer customer1_ where 
(lower((customer1_.first_name||' '||customer1_.last_name)) like '%sahby%')))\n order by booking0_.id desc limit 90; QUERY PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=34267.44..34267.66 rows=90 width=241) (actual time=20.140..20.141 rows=4 loops=1)\n Buffers: shared hit=1742\n -> Sort (cost=34267.44..34280.33 rows=5157 width=241) (actual time=20.139..20.140 rows=4 loops=1)\n Sort Key: booking0_.id\n Sort Method: quicksort Memory: 25kB\n Buffers: shared hit=1742\n -> Nested Loop (cost=3478.41..34074.26 rows=5157 width=241) (actual time=20.079..20.117 rows=4 loops=1)\n Buffers: shared hit=1742\n -> Bitmap Heap Scan on customer customer1_ (cost=3477.98..11709.61 rows=5157 width=4) (actual time=20.055..20.063 rows=4 loops=1)\n Recheck Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n Heap Blocks: exact=3\n Buffers: shared hit=1726\n -> Bitmap Index Scan on idx_customer_name_lower (cost=0.00..3476.69 rows=5157 width=0) (actual time=20.024..20.024 rows=4 loops=1)\n Index Cond: (lower((((first_name)::text || ' '::text) || (last_name)::text)) ~~ '%sahby%'::text)\n Buffers: shared hit=1723\n -> Index Scan using idx_booking_customer_id on booking booking0_ (cost=0.43..4.33 rows=1 width=241) (actual time=0.008..0.008 rows=1 loops=4)\n Index Cond: (customer_id = customer1_.id)\n Buffers: shared hit=16\n Planning time: 0.431 ms\n Execution time: 20.187 ms \nSo instead of converting Criteria api query into Native query to use CTE as suggested by Jeff :{quote}with t as (select booking0_.*from booking booking0_ where booking0_.customer_id in (              select customer1_.id                  from customer customer1_                where lower((customer1_.first_name||' '||customer1_.last_name)) like '%gatef%')  select * from t order by booking0_.id desc limit 30;{quote}I have used a limit of 500 (just to be far away from 90 when table size is increased) and then take top 30 on Java layer.Thanks,Seckin", "msg_date": "Thu, 17 Nov 2016 15:33:06 +0300", "msg_from": "Seckin Pulatkan <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner chooses index scan backward instead of better index\n option" } ]
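The optimization fence Jeff describes can be written out concretely. A minimal sketch, reusing the table and column names from the thread and assuming PostgreSQL 9.4 as reported, where every CTE is an optimization fence, so the sub-select is fully evaluated before the outer ORDER BY ... LIMIT:

    WITH t AS (
        -- the fence: this runs first, so the planner cannot try to satisfy
        -- the LIMIT by walking the booking primary key backward
        SELECT b.*
        FROM booking b
        WHERE b.customer_id IN (
            SELECT c.id
            FROM customer c
            WHERE lower(c.first_name || ' ' || c.last_name) LIKE '%gatef%'
        )
    )
    SELECT * FROM t
    ORDER BY id DESC
    LIMIT 30;

On PostgreSQL 12 and later, CTEs are inlined by default, so the fence would have to be requested explicitly with WITH t AS MATERIALIZED (...).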
[ { "msg_contents": "I have a table with two indexes...\n\nCREATE TABLE mobile_summary_usage\n(\n import text,\n msisdn text,\n type text,\n total integer,\n day date,\n cycle text\n);\n\nCREATE INDEX mobile_summary_usage_msisdn_cycle ON mobile_summary_usage\nUSING btree (msisdn, cycle);\n\nCREATE INDEX mobile_summary_usage_cycle ON mobile_summary_usage USING\nbtree (cycle);\n\n\nWe insert approximately 2M records into this table each day. Whenever\nsomeone wants to see the total amount of voice calls, text messages or\ndata they've used, we query the table with the following\n\nSELECT msisdn, type, sum (total), units\nFROM mobile_summary_usage msu, mobile_summary_type mst\nWHERE type = id AND msisdn = ? AND cycle = ?\nGROUP BY msisdn, type, units;\n\nWhere:\nmsisdn is a mobile number\ncycle is a billing cycle, e.g. 2016-10\nmobile_summary_type contains 3 rows, one for each usage type.\n\nEverything was working fine until we flipped over from 2016-10 to\n2016-11. Then instead of averaging well below 0.5 seconds to\nrespond, Postgres started taking over a second.\n\nRunning EXPLAIN ANALYZE on the above query shows that in 2016-10, when\nthere are approximately 100M rows, Postgres uses the compound (msisdn,\ncycle) index. This has a cost of 3218.98 and takes 0.071 seconds.\n\nHashAggregate (cost=3213.12..3218.98 rows=586 width=52) (actual\ntime=0.071..0.071 rows=0 loops=1)\n Group Key: msu.msisdn, msu.type, mst.units\n -> Hash Join (cost=62.54..3205.15 rows=797 width=52) (actual\ntime=0.069..0.069 rows=0 loops=1)\n Hash Cond: (msu.type = mst.id)\n -> Bitmap Heap Scan on mobile_summary_usage msu\n(cost=32.74..3164.39 rows=797 width=20) (actual time=0.037..0.037\nrows=0 loops=1)\n Recheck Cond: ((msisdn = '07700900331'::text) AND (cycle\n= '2016-10'::text))\n -> Bitmap Index Scan on\nmobile_summary_usage_msisdn_cycle (cost=0.00..32.54 rows=797 width=0)\n(actual time=0.036..0.036 rows=0 loops=1)\n Index Cond: ((msisdn = '07700900331'::text) AND\n(cycle = '2016-10'::text))\n -> Hash (cost=18.80..18.80 rows=880 width=64) (actual\ntime=0.026..0.026 rows=4 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on mobile_summary_type mst\n(cost=0.00..18.80 rows=880 width=64) (actual time=0.014..0.016 rows=4\nloops=1)\nPlanning time: 0.197 ms\nExecution time: 0.125 ms\n\n\nWhen I re-run the plan for 2016-11 (currently 4M rows), Postgres uses\nthe simpler \"cycle\" index. The cost is 12.79 but the actual time taken\nis 1412.609 milliseconds\n\nHashAggregate (cost=12.78..12.79 rows=1 width=52) (actual\ntime=1412.609..1412.609 rows=0 loops=1)\nExecution time: 1412.674 ms\n Group Key: msu.msisdn, msu.type, mst.units\n -> Nested Loop (cost=0.72..12.77 rows=1 width=52) (actual\ntime=1412.606..1412.606 rows=0 loops=1)\n -> Index Scan using mobile_summary_usage_cycle on\nmobile_summary_usage msu (cost=0.57..4.59 rows=1 width=20) (actual\ntime=1412.604..1412.604 rows=0 loops=1)\n -> Index Scan using mobile_summary_type_pkey on\nmobile_summary_type mst (cost=0.15..8.17 rows=1 width=64) (never\nexecuted)\n Rows Removed by Filter: 3932875\n Index Cond: (id = msu.type)\n Index Cond: (cycle = '2016-11'::text)\n Filter: (msisdn = '07700900331'::text)\n\n\n\nI understand there are a whole host of reasons why postgres may choose\ndifferent plans based on data volumes, but in this case despite the\nlower cost the performance is significantly worse.
Is there any\nexplanation for why it's making such a poor decision and\nrecommendations for how to fix it?\n\nAny help appreciated.\n", "msg_date": "Mon, 14 Nov 2016 17:53:43 +0000", "msg_from": "Stephen Cresswell <[email protected]>", "msg_from_op": true, "msg_subject": "Why is the optimiser choosing a sub-optimal plan?" }, { "msg_contents": "Stephen Cresswell <[email protected]> writes:\n> I have the a table with two indexes...\n\n(1) Tell us about the other table, mobile_summary_type.\n\n(2) Did you transcribe the second query plan correctly? I have a hard\ntime believing that EXPLAIN printed two Index Cond lines for the same\nindexscan.\n\n(3) What PG version is this, exactly?\n\n(4) Are you doing anything funny like disabling autovacuum/autoanalyze?\nThe rowcount estimates in the \"good\" plan seem rather far away from\nreality, and it's not obvious why, particularly here:\n\n> -> Seq Scan on mobile_summary_type mst\n> (cost=0.00..18.80 rows=880 width=64) (actual time=0.014..0.016 rows=4\n> loops=1)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 14 Nov 2016 14:37:21 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why is the optimiser choosing a sub-optimal plan?" } ]
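Tom's point (4) can be checked directly from SQL. A sketch of how one might verify that the statistics behind those row estimates are fresh; the n_mod_since_analyze column assumes 9.4 or later:

    -- when were the planner statistics last refreshed, and how much
    -- has changed since then?
    SELECT relname, last_analyze, last_autoanalyze,
           n_live_tup, n_mod_since_analyze
    FROM pg_stat_user_tables
    WHERE relname IN ('mobile_summary_usage', 'mobile_summary_type');

    -- refresh by hand if the numbers look stale
    ANALYZE mobile_summary_usage;
    ANALYZE mobile_summary_type;

The symptom reported above - a reasonable plan for the old cycle but a rows=1 estimate for the new one - is what stale statistics typically look like right after a monthly flip, when the planner has not yet seen any '2016-11' values.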
[ { "msg_contents": "Hi,\n\nI have some data to join and I want to get som advice from you.\n\nAny tips ? Any comments are apreciated\n\n//H\n\nselect trade_no\nfrom\nforecast_trades.hist_account_balance\nleft join trades using (trade_no)\nwhere  trade_date > current_date - 120\n   and    trade_date < current_date - 30\n   and    forex = 'f'\n   and    options = 'f'\n   group by trade_no\n   having max(account_size) > 0\n;\n\n( Query Plan : https://explain.depesz.com/s/4lOD )\n\nQUERY\nPLAN\n \n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n HashAggregate  (cost=34760605.76..34773866.26 rows=1060840 width=15)\n(actual time=1142816.632..1150194.076 rows=2550634 loops=1)\n   Group Key: hist_account_balance.trade_no\n   Filter: (max(hist_account_balance.account_size) > 0::numeric)\n   Rows Removed by Filter: 18240023\n   ->  Hash Join  (cost=3407585.35..34530512.29 rows=46018694 width=15)\n(actual time=60321.201..1108647.151 rows=44188963 loops=1)\n         Hash Cond: (hist_account_balance.trade_no =\ntrades.trade_no)\n         ->  Seq Scan on hist_account_balance \n(cost=0.00..14986455.20 rows=570046720 width=15) (actual\ntime=0.016..524427.140 rows=549165594 loops=1)\n         ->  Hash  (cost=3159184.13..3159184.13 rows=19872098\nwidth=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)\n               Buckets: 2097152  Batches: 1  Memory Usage:\n913651kB\n               ->  Index Scan using trades_trade_date_index\non trades  (cost=0.58..3159184.13 rows=19872098 width=12) (actual\ntime=0.078..52213.976 rows=20790658 loops=1)\n                     Index Cond: ((trade_date >\n(('now'::cstring)::date - 120)) AND (trade_date < (('now'::cstring)::date -\n30)))\n                     Filter: ((NOT forex) AND (NOT\noptions))\n                     Rows Removed by Filter: 2387523\n Planning time: 2.157 ms\n Execution time: 1151234.290 ms\n(15 rows)", "msg_date": "Tue, 15 Nov 2016 14:27:13 +0100", "msg_from": "Henrik Ekenberg <[email protected]>", "msg_from_op": true, "msg_subject": "Sql Query :: Any advice ?" }, { "msg_contents": "On 2016-11-15 14:27, Henrik Ekenberg wrote:\n> Hi,\n> \n> I have some data to join and I want to get som advice from you.\n> \n> Any tips ?
Any comments are apreciated\n> \n> //H\n> \n> select trade_no\n> from\n> forecast_trades.hist_account_balance\n> left join trades using (trade_no)\n> where trade_date > current_date - 120\n> and trade_date < current_date - 30\n> and forex = 'f'\n> and options = 'f'\n> group by trade_no\n> having max(account_size) > 0\n> ;\n> \n> ( Query Plan : https://explain.depesz.com/s/4lOD )\n> \n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=34760605.76..34773866.26 rows=1060840 width=15)\n> (actual time=1142816.632..1150194.076 rows=2550634 loops=1)\n> Group Key: hist_account_balance.trade_no\n> Filter: (max(hist_account_balance.account_size) > 0::numeric)\n> Rows Removed by Filter: 18240023\n> -> Hash Join (cost=3407585.35..34530512.29 rows=46018694\n> width=15) (actual time=60321.201..1108647.151 rows=44188963 loops=1)\n> Hash Cond: (hist_account_balance.trade_no = trades.trade_no)\n> -> Seq Scan on hist_account_balance (cost=0.00..14986455.20\n> rows=570046720 width=15) (actual time=0.016..524427.140 rows=549165594\n> loops=1)\n> -> Hash (cost=3159184.13..3159184.13 rows=19872098\n> width=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)\n> Buckets: 2097152 Batches: 1 Memory Usage: 913651kB\n> -> Index Scan using trades_trade_date_index on trades\n> (cost=0.58..3159184.13 rows=19872098 width=12) (actual\n> time=0.078..52213.976 rows=20790658 loops=1)\n> Index Cond: ((trade_date >\n> (('now'::cstring)::date - 120)) AND (trade_date <\n> (('now'::cstring)::date - 30)))\n> Filter: ((NOT forex) AND (NOT options))\n> Rows Removed by Filter: 2387523\n> Planning time: 2.157 ms\n> Execution time: 1151234.290 ms\n> (15 rows)\n\n\nWhat kind of indexes have you created for those tables?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Nov 2016 14:50:43 +0100", "msg_from": "vinny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql Query :: Any advice ?" }, { "msg_contents": "Here are the indexes I have for those queries\n\nIndexes:\n\nhist_account_balance  :: \"hist_account_balance_ix1\" btree (trade_no)\n   \n\ntrades :: \"trades_pkey\" PRIMARY KEY, btree  (trade_no)\n \"trades_trade_date_index\" btree (trade_date)\n\n//H\n\nQuoting vinny <[email protected]>:\n\n> On 2016-11-15 14:27, Henrik Ekenberg wrote:\n>> Hi,\n>>\n>> I have some data to join and I want to get som advice from you.\n>>\n>> Any tips ? 
Any comments are apreciated\n>>\n>> //H\n>>\n>> select trade_no\n>> from\n>> forecast_trades.hist_account_balance\n>> left join trades using (trade_no)\n>> where  trade_date > current_date - 120\n>>   and    trade_date < current_date - 30\n>>   and    forex = 'f'\n>>   and    options = 'f'\n>>   group by trade_no\n>>   having max(account_size) > 0\n>> ;\n>>\n>> ( Query Plan : https://explain.depesz.com/s/4lOD )\n>>\n>> QUERY PLAN\n>>\n>>\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>> HashAggregate  (cost=34760605.76..34773866.26 rows=1060840 width=15)\n>> (actual time=1142816.632..1150194.076 rows=2550634 loops=1)\n>>   Group Key: hist_account_balance.trade_no\n>>   Filter: (max(hist_account_balance.account_size) > 0::numeric)\n>>   Rows Removed by Filter: 18240023\n>>   ->  Hash Join  (cost=3407585.35..34530512.29 rows=46018694\n>> width=15) (actual time=60321.201..1108647.151 rows=44188963 loops=1)\n>>         Hash Cond: (hist_account_balance.trade_no = trades.trade_no)\n>>         ->  Seq Scan on hist_account_balance \n(cost=0.00..14986455.20\n>> rows=570046720 width=15) (actual time=0.016..524427.140 rows=549165594\n>> loops=1)\n>>         ->  Hash  (cost=3159184.13..3159184.13 rows=19872098\n>> width=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)\n>>               Buckets: 2097152  Batches: 1  Memory Usage: 913651kB\n>>               ->  Index Scan using trades_trade_date_index on\ntrades\n>> (cost=0.58..3159184.13 rows=19872098 width=12) (actual\n>> time=0.078..52213.976 rows=20790658 loops=1)\n>>                     Index Cond: ((trade_date >\n>> (('now'::cstring)::date - 120)) AND (trade_date <\n>> (('now'::cstring)::date - 30)))\n>>                     Filter: ((NOT forex) AND (NOT options))\n>>                     Rows Removed by Filter: 2387523\n>> Planning time: 2.157 ms\n>> Execution time: 1151234.290 ms\n>> (15 rows)\n>\n> What kind of indexes have you created for those tables?\n>\n> --\n> Sent via pgsql-performance mailing list\n([email protected])\n> To make changes to your\n> subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 15 Nov 2016 15:30:53 +0100", "msg_from": "Henrik Ekenberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Sql Query :: Any advice ?" }, { "msg_contents": "Are the forex and options in the hist_account_balance table?\nThe sequential scan is on that table, so if they are,\nI'm guessing they should probably be in the index.\n\nOn 2016-11-15 15:30, Henrik Ekenberg wrote:\n> Here are the indexes I have for those queries\n> \n> Indexes:\n> \n> hist_account_balance  :: \"hist_account_balance_ix1\" btree (trade_no)\n> \n> trades :: \"trades_pkey\" PRIMARY KEY, btree  (trade_no)\n>  \"trades_trade_date_index\" btree (trade_date)\n> \n> //H\n> \n> Quoting vinny <[email protected]>:\n> \n>> On 2016-11-15 14:27, Henrik Ekenberg wrote:\n>> \n>>> Hi,\n>>> \n>>> I have some data to join and I want to get som advice from you.\n>>> \n>>> Any tips ?
Any comments are apreciated\n>>> \n>>> //H\n>>> \n>>> select trade_no\n>>> from\n>>> forecast_trades.hist_account_balance\n>>> left join trades using (trade_no)\n>>> where trade_date > current_date - 120\n>>> and trade_date < current_date - 30\n>>> and forex = 'f'\n>>> and options = 'f'\n>>> group by trade_no\n>>> having max(account_size) > 0\n>>> ;\n>>> \n>>> ( Query Plan : https://explain.depesz.com/s/4lOD )\n>>> \n>>> QUERY PLAN\n>>> \n>>> \n>> \n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>> HashAggregate (cost=34760605.76..34773866.26 rows=1060840\n>>> width=15)\n>>> (actual time=1142816.632..1150194.076 rows=2550634 loops=1)\n>>> Group Key: hist_account_balance.trade_no\n>>> Filter: (max(hist_account_balance.account_size) > 0::numeric)\n>>> Rows Removed by Filter: 18240023\n>>> -> Hash Join (cost=3407585.35..34530512.29 rows=46018694\n>>> width=15) (actual time=60321.201..1108647.151 rows=44188963\n>>> loops=1)\n>>> Hash Cond: (hist_account_balance.trade_no =\n>>> trades.trade_no)\n>>> -> Seq Scan on hist_account_balance\n>>> (cost=0.00..14986455.20\n>>> rows=570046720 width=15) (actual time=0.016..524427.140\n>>> rows=549165594\n>>> loops=1)\n>>> -> Hash (cost=3159184.13..3159184.13 rows=19872098\n>>> width=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)\n>>> Buckets: 2097152 Batches: 1 Memory Usage: 913651kB\n>>> -> Index Scan using trades_trade_date_index on\n>>> trades\n>>> (cost=0.58..3159184.13 rows=19872098 width=12) (actual\n>>> time=0.078..52213.976 rows=20790658 loops=1)\n>>> Index Cond: ((trade_date >\n>>> (('now'::cstring)::date - 120)) AND (trade_date <\n>>> (('now'::cstring)::date - 30)))\n>>> Filter: ((NOT forex) AND (NOT options))\n>>> Rows Removed by Filter: 2387523\n>>> Planning time: 2.157 ms\n>>> Execution time: 1151234.290 ms\n>>> (15 rows)\n>> What kind of indexes have you created for those tables?\n>> \n>> --\n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your\n>> subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Nov 2016 16:44:47 +0100", "msg_from": "vinny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql Query :: Any advice ?" }, { "msg_contents": "Hi, \n\nForex and options are in  trades table\n\nBest regards \nHenrik \n\n\n\nSent from my Mi padOn vinny <[email protected]>, Nov 15, 2016 6:46 PM wrote:Are the forex and options in the hist_account_balance table?\r\nThe sequential scan is on that table so if they are,\r\nso I'm guessing they should probably by in the index.\r\n\nOn 2016-11-15 15:30, Henrik Ekenberg wrote:\r\n> Here are the indexes I have for those queries\r\n> \r\n> Indexes:\r\n> \r\n> hist_account_balance  :: \"hist_account_balance_ix1\" btree (trade_no)\r\n> \r\n> trades :: \"trades_pkey\" PRIMARY KEY, btree  (trade_no)\r\n>  \"trades_trade_date_index\" btree (trade_date)\r\n> \r\n> //H\r\n> \r\n> Quoting vinny <[email protected]>:\r\n> \r\n>> On 2016-11-15 14:27, Henrik Ekenberg wrote:\r\n>> \r\n>>> Hi,\r\n>>> \r\n>>> I have some data to join and I want to get som advice from you.\r\n>>> \r\n>>> Any tips ? 
Any comments are apreciated\r\n>>> \r\n>>> //H\r\n>>> \r\n>>> select trade_no\r\n>>> from\r\n>>> forecast_trades.hist_account_balance\r\n>>> left join trades using (trade_no)\r\n>>> where  trade_date > current_date - 120\r\n>>> and    trade_date < current_date - 30\r\n>>> and    forex = 'f'\r\n>>> and    options = 'f'\r\n>>> group by trade_no\r\n>>> having max(account_size) > 0\r\n>>> ;\r\n>>> \r\n>>> ( Query Plan : https://explain.depesz.com/s/4lOD )\r\n>>> \r\n>>> QUERY PLAN\r\n>>> \r\n>>> \r\n>> \r\n> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n>>> HashAggregate  (cost=34760605.76..34773866.26 rows=1060840\r\n>>> width=15)\r\n>>> (actual time=1142816.632..1150194.076 rows=2550634 loops=1)\r\n>>> Group Key: hist_account_balance.trade_no\r\n>>> Filter: (max(hist_account_balance.account_size) > 0::numeric)\r\n>>> Rows Removed by Filter: 18240023\r\n>>> ->  Hash Join  (cost=3407585.35..34530512.29 rows=46018694\r\n>>> width=15) (actual time=60321.201..1108647.151 rows=44188963\r\n>>> loops=1)\r\n>>> Hash Cond: (hist_account_balance.trade_no =\r\n>>> trades.trade_no)\r\n>>> ->  Seq Scan on hist_account_balance\r\n>>> (cost=0.00..14986455.20\r\n>>> rows=570046720 width=15) (actual time=0.016..524427.140\r\n>>> rows=549165594\r\n>>> loops=1)\r\n>>> ->  Hash  (cost=3159184.13..3159184.13 rows=19872098\r\n>>> width=12) (actual time=60307.001..60307.001 rows=20790658 loops=1)\r\n>>> Buckets: 2097152  Batches: 1  Memory Usage: 913651kB\r\n>>> ->  Index Scan using trades_trade_date_index on\r\n>>> trades\r\n>>> (cost=0.58..3159184.13 rows=19872098 width=12) (actual\r\n>>> time=0.078..52213.976 rows=20790658 loops=1)\r\n>>> Index Cond: ((trade_date >\r\n>>> (('now'::cstring)::date - 120)) AND (trade_date <\r\n>>> (('now'::cstring)::date - 30)))\r\n>>> Filter: ((NOT forex) AND (NOT options))\r\n>>> Rows Removed by Filter: 2387523\r\n>>> Planning time: 2.157 ms\r\n>>> Execution time: 1151234.290 ms\r\n>>> (15 rows)\r\n>> What kind of indexes have you created for those tables?\r\n>> \r\n>> --\r\n>> Sent via pgsql-performance mailing list\r\n>> ([email protected])\r\n>> To make changes to your\r\n>> subscription:http://www.postgresql.org/mailpref/pgsql-performance \n\n\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance \n\n", "msg_date": "Tue, 15 Nov 2016 20:23:38 +0300", "msg_from": "Henrik <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Sql Query :: Any advice ?" } ]
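Beyond vinny's point, the posted plan itself suggests two index candidates. A sketch, with invented index names, assuming forex = 'f' AND options = 'f' is the common case for this query:

    -- matches the WHERE clause directly, so the ~2.4M forex/options rows no
    -- longer have to be read and filtered out of the date-range index scan
    CREATE INDEX trades_trade_date_not_forex_options_idx
        ON trades (trade_date)
        WHERE NOT forex AND NOT options;

    -- carries the join key and the aggregated column together, giving the
    -- planner an alternative to the 549M-row seq scan on hist_account_balance
    CREATE INDEX hist_account_balance_trade_no_size_idx
        ON hist_account_balance (trade_no, account_size);

Whether the second index actually beats the hash join has to be verified with EXPLAIN (ANALYZE, BUFFERS) on the real data; with 44M joined rows the hash join may still win.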
[ { "msg_contents": "Hello!\nWe have a server with 8.4.1 that we want to migrate to 9.6.1\nBefore doing anything, we ran pgbench several times.\nThe results were always similar to the following:\n\n$ pgbench -l -c 100 -T 30 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nduration: 30 s\nnumber of transactions actually processed: 36049\ntps = 1193.682690 (including connections establishing)\ntps = 1198.838960 (excluding connections establishing)\n\nThen, we followed the procedure in https://www.postgresql.org/docs/9.6/static/pgupgrade.html to upgrade the server using pg_upgrade.\nTo install the new version, we downloaded and compiled the sources, with the same option that we use with the previous version (configure --prefix=/var/lib/pgsql).\nWe upgrade only one server, so we don't run the steps for replication.\n\nAfter this, we ran the script analyze_new_cluster.sh, that was created by pg_upgrade, to generate statistics.\n\nAt this point, we ran pgbench again, several times, to make the comparison.\nThe results were always similar to the following:\n\n$ pgbench -l -c 100 -T 30 pgbench\nstarting vacuum...end.\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 100\nnumber of threads: 1\nduration: 30 s\nnumber of transactions actually processed: 27428\nlatency average = 110.104 ms\ntps = 908.234296 (including connections establishing)\ntps = 908.278187 (excluding connections establishing)\n\nWe ran the statistics again, this time with vacuumdb --all --analyze, no change at all.\n\nIn the postgresql.conf of the new version (9.6.1), we use these values:\nmax_connections = 100\nsuperuser_reserved_connections = 3\nshared_buffers = 512MB\nwork_mem = 5MB\nmaintenance_work_mem = 128MB\neffective_cache_size = 1500MB\nmax_wal_size = 2GB\nmin_wal_size = 1GB\nwal_level = replica\n\nIn the postgresql.conf of the old version (8.4.1), we use these values:\nmax_connections = 100\nshared_buffers = 512MB\n(The other values are set by default)\n\nWe also tried with the default values in the new installation, without any change in the times.\n\nThe hardware doesn't change; it's an Intel(R) Pentium(R) CPU G3220 @ 3.00GHz with 2 cores, 2GB of RAM, 500GB SCSI hard disk. The operating system is Enterprise Linux Enterprise Linux Server release 5.8, 64 bits.\n\nAny suggestion about what could be the problem?\nThanks!\nGabriela", "msg_date": "Tue, 15 Nov 2016 21:57:08 +0000", "msg_from": "Gabriela Serventi <[email protected]>", "msg_from_op": true, "msg_subject": "Performance decrease after upgrade to 9.6.1" }, { "msg_contents": "Gabriela Serventi <[email protected]> writes:\n> $ pgbench -l -c 100 -T 30 pgbench\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 1\n> query mode: simple\n> number of clients: 100\n> number of threads: 1\n> duration: 30 s\n> number of transactions actually processed: 27428\n> latency average = 110.104 ms\n> tps = 908.234296 (including connections establishing)\n> tps = 908.278187 (excluding connections establishing)\n\nThat's not a tremendously exciting benchmark case, for a number of\nreasons:\n\n* 100 sessions in a scale-factor-1 database are all going to be fighting\nover updating the single row in the pgbench_branches table.\n\n* 100 sessions driven by a single pgbench thread are probably going to be\nbottlenecked by that thread, not by the server.\n\n* 100 sessions on a machine with only 2 cores is going to be all about\nprocess-swap contention anyhow.\n\n\nMy first thought about why the difference from 8.4 to 9.6 is that pgbench\nhas grown a lot more measurement apparatus since then (for example, the\ntransaction latency numbers, which weren't there at all in 8.4). You\nmight try testing 9.6 server with 8.4 pgbench and vice versa to tease out\nhow much of this is actually on pgbench changes not the server. But in\nthe end, what you're measuring here is mostly contention, and you'd need\nto alter the test parameters to make it not so.
The \"Good Practices\"\nsection at the bottom of the pgbench reference page has some tips about\nthat.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Tue, 15 Nov 2016 17:35:03 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance decrease after upgrade to 9.6.1" }, { "msg_contents": "Hi Tom!\n\nThanks for the answer.\n\nThis is just one of the benchmarks that we ran; we tested with fewer clients and much more time, but you're right about the scale factor, we didn't realize that.\n\nWe are going to test using your recommendations.\n\nThanks!\n\n\n________________________________\nFrom: Tom Lane <[email protected]>\nSent: Tuesday, November 15, 2016 19:35:03\nTo: Gabriela Serventi\nCc: [email protected]\nSubject: Re: [PERFORM] Performance decrease after upgrade to 9.6.1\n\nGabriela Serventi <[email protected]> writes:\n> $ pgbench -l -c 100 -T 30 pgbench\n> starting vacuum...end.\n> transaction type: <builtin: TPC-B (sort of)>\n> scaling factor: 1\n> query mode: simple\n> number of clients: 100\n> number of threads: 1\n> duration: 30 s\n> number of transactions actually processed: 27428\n> latency average = 110.104 ms\n> tps = 908.234296 (including connections establishing)\n> tps = 908.278187 (excluding connections establishing)\n\nThat's not a tremendously exciting benchmark case, for a number of\nreasons:\n\n* 100 sessions in a scale-factor-1 database are all going to be fighting\nover updating the single row in the pgbench_branches table.\n\n* 100 sessions driven by a single pgbench thread are probably going to be\nbottlenecked by that thread, not by the server.\n\n* 100 sessions on a machine with only 2 cores is going to be all about\nprocess-swap contention anyhow.\n\n\nMy first thought about why the difference from 8.4 to 9.6 is that pgbench\nhas grown a lot more measurement apparatus since then (for example, the\ntransaction latency numbers, which weren't there at all in 8.4).  You\nmight try testing 9.6 server with 8.4 pgbench and vice versa to tease out\nhow much of this is actually on pgbench changes not the server.  But in\nthe end, what you're measuring here is mostly contention, and you'd need\nto alter the test parameters to make it not so.  The \"Good Practices\"\nsection at the bottom of the pgbench reference page has some tips about\nthat.\n\n                        regards, tom lane", "msg_date": "Wed, 16 Nov 2016 00:50:46 +0000", "msg_from": "Gabriela Serventi <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance decrease after upgrade to 9.6.1" } ]
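Tom's contention points map directly onto the pgbench setup. A sketch of what to inspect, assuming the stock pgbench schema:

    -- scaling factor 1 means exactly one row here; with -c 100, every
    -- client serializes on that single row's lock
    SELECT count(*) AS branch_rows FROM pgbench_branches;

    -- each TPC-B transaction runs this statement, which is the hot spot:
    -- UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;

Re-initializing with a scale factor at least as large as the client count (for example pgbench -i -s 100) and driving the clients with several pgbench threads (-j 4 or so) addresses the first two of Tom's points; the 2-core box remains a hard limit for 100 concurrent sessions.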
[ { "msg_contents": "Hi,\n\nI run one query and the execution is very different\n\nSelect messages\n from mails_hist\n join mails using (messages)\n where message_date > '2016-07-19 00:00:00'\n and message_date < '2016-10-17 00:00:00'\n Group by message;\n\nmessage_date is one timestamp\n\nQuery gives around 6300 rows\n\nWhat can I look for ?\n\n//Bill", "msg_date": "Wed, 16 Nov 2016 10:10:36 +0100", "msg_from": "Metatrader EA <[email protected]>", "msg_from_op": true, "msg_subject": "Run one query and execution time is very different" } ]
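When the same statement swings between fast and very slow, comparing the buffer counters of a fast run against a slow run usually settles whether caching explains it. A sketch against the query above (assuming the grouped column is really messages - the post selects messages but groups by message):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT messages
    FROM mails_hist
    JOIN mails USING (messages)
    WHERE message_date > '2016-07-19 00:00:00'
      AND message_date < '2016-10-17 00:00:00'
    GROUP BY messages;

A slow run dominated by shared read= blocks is coming from disk, while a rerun showing mostly shared hit= is served from shared_buffers; that difference alone can account for an order-of-magnitude swing.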
[ { "msg_contents": "Hi,\n\nI run this query select count(*) from analyse_forecast where\ndaterange_analyse <@ daterange( current_date - 150, current_date, '[]') ;\n\nSometimes will query be quick and sometimes is same query SUPER SLOWWWW\n\nSomething I can do ? Something I can check for ?\n\n//Bill", "msg_date": "Thu, 17 Nov 2016 11:43:19 +0100", "msg_from": "Metatrader EA <[email protected]>", "msg_from_op": true, "msg_subject": "Query hangs sometimes" }, { "msg_contents": "Locks? (VACUUM FULL, etc) Autovacuum?\n\n> Hi,\n>\n> I run this query select count(*) from analyse_forecast where daterange_analyse\n> <@ daterange( current_date - 150, current_date, '[]') ;\n>\n> Sometimes will query be quick and sometimes is same query SUPER SLOWWWW\n>\n> Something I can do ? Something I can check for ?\n>\n> //Bill\n>\n\n-- \nGuillaume Cottenceau\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Nov 2016 12:02:34 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query hangs sometimes" }, { "msg_contents": "Hi,\n\nHow can I check this?\n\n//Bill\n\nOn Thu, Nov 17, 2016 at 12:02 PM, Guillaume Cottenceau <[email protected]> wrote:\n\n> Locks? (VACUUM FULL, etc) Autovacuum?\n>\n> > Hi,\n> >\n> > I run this query select count(*) from analyse_forecast where\n> daterange_analyse\n> > <@ daterange( current_date - 150, current_date, '[]') ;\n> >\n> > Sometimes will query be quick and sometimes is same query SUPER SLOWWWW\n> >\n> > Something I can do ? Something I can check for ?\n> >\n> > //Bill\n> >\n>\n> --\n> Guillaume Cottenceau\n>", "msg_date": "Thu, 17 Nov 2016 12:55:11 +0100", "msg_from": "Metatrader EA <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query hangs sometimes" }, { "msg_contents": "On Thu, Nov 17, 2016 at 3:55 AM, Metatrader EA <[email protected]> wrote:\n> How can I check this?\n\nSeveral options are listed in the docs:\nhttps://www.postgresql.org/docs/9.6/static/monitoring.html\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 17 Nov 2016 13:49:00 -0800", "msg_from": "Michael Paquier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query hangs sometimes" } ]
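Guillaume's lock question can be answered from the monitoring views Michael links to. A sketch, assuming 9.6, where pg_stat_activity gained the wait_event columns (older releases expose a boolean waiting column instead):

    -- sessions currently blocked on a heavyweight lock
    SELECT pid, state, wait_event_type, wait_event,
           now() - query_start AS runtime, query
    FROM pg_stat_activity
    WHERE wait_event_type = 'Lock';

    -- the ungranted lock requests themselves, with the relation involved
    SELECT locktype, relation::regclass, mode, pid, granted
    FROM pg_locks
    WHERE NOT granted;

If the slow executions line up with a VACUUM FULL or other lock holder on analyse_forecast, these views will show it while the query is stuck.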
[ { "msg_contents": "If I construct the materialized view with an 'order by', I can use a BRIN\nindex to a sometimes significant performance advantage, at least for the\nprimary sort field. I have observed that even though the first pass is a\nlittle lossy and I get index rechecks, it is still much faster than a\nregular btree index.\n\nDoes it matter if I also try to CLUSTER the materialized view on that\nprimary sort field? Or is it already clustered because of the 'order by'?\n\nWould the brin index work better on a clustered materialized view instead\nof an ordered materialized view?\n\nWhen I refresh the materialized view (concurrently) is the order_by\npreserved? Would the clustering be preserved?\n\nI'm trying to get a handle on the concept of clustering and how that is\ndifferent than order_by and which would be better and how much advantage it\nreally gets me. I'll continue to do experiments with this, but thought\nsome of the performance gurus on this list would have some immediate\nthoughts on the subject off the top of their heads, and others reading this\nlist might find the observations interesting.\n\nThank you for your time.", "msg_date": "Thu, 17 Nov 2016 11:36:21 -0500", "msg_from": "Rick Otten <[email protected]>", "msg_from_op": true, "msg_subject": "materialized view order by and clustering" }, { "msg_contents": "On Thu, Nov 17, 2016 at 9:36 AM, Rick Otten <[email protected]>\nwrote:\n\n>\n> Does it matter if I also try to CLUSTER the materialized view on that\n> primary sort field? Or is it already clustered because of the 'order by'?\n>\n> [...]\n>\n> When I refresh the materialized view (concurrently) is the order_by\n> preserved? Would the clustering be preserved?\n>\n>\nThe notes on the REFRESH MATERIALIZED VIEW page seem informative to this\nquestion:\n\n\"While the default index for future CLUSTER operations is retained,\nREFRESH MATERIALIZED VIEW does not order the generated rows based on this\nproperty.
If you want the data to be ordered upon generation, you must use\nan ORDER BY clause in the backing query.\"\n\n​https://www.postgresql.org/docs/9.6/static/sql-refreshmaterializedview.html\n​\n​\n\n\n> I'm trying to get a handle on the concept of clustering and how that is\n> different than order_by and which would be better and how much advantage it\n> really gets me.\n>\n\n​\nCLUSTER is a physical property\n​(table only) ​\nwhile ORDER BY is a logical one\n​ (view only)\n\nWith respect to materialized views - which act as both table and view - the\nlogically ordered view data gets saved to the physical table thus making\nthe table clustered on whatever order by is specified.\n\n​David J.\n​\n​\n\nOn Thu, Nov 17, 2016 at 9:36 AM, Rick Otten <[email protected]> wrote:Does it matter if I also try to CLUSTER the materialized view on that primary sort field? Or is it already clustered because of the 'order by'?​[...]​When I refresh the materialized view (concurrently) is the order_by preserved?  Would the clustering be preserved?​​The notes on the REFRESH MATERIALIZED VIEW page seem informative to this question:​\"While the default index for future CLUSTER operations is retained, REFRESH MATERIALIZED VIEW does not order the generated rows based on this property. If you want the data to be ordered upon generation, you must use an ORDER BY clause in the backing query.\"​https://www.postgresql.org/docs/9.6/static/sql-refreshmaterializedview.html​​ I'm trying to get a handle on the concept of clustering and how that is different than order_by and which would be better and how much advantage it really gets me.​CLUSTER is a physical property ​(table only) ​while ORDER BY is a logical one​ (view only)With respect to materialized views - which act as both table and view - the logically ordered view data gets saved to the physical table thus making the table clustered on whatever order by is specified.​David J.​​", "msg_date": "Thu, 17 Nov 2016 10:06:43 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: materialized view order by and clustering" } ]
[ { "msg_contents": "Hi,\n\nDo I miss something?\nShouldn't I have some rows from this query ?\n\nWhy is it empty?\n\n\n\nSELECT relname, idx_tup_fetch + seq_tup_read as TotalReads from\npg_stat_all_tables\n WHERE idx_tup_fetch + seq_tup_read != 0\n order by TotalReads desc\n LIMIT 10;\n relname | totalreads\n---------+------------\n(0 rows)\n\n//Bill\n\nHi,Do I miss something?Shouldn't I have some rows from this query ?Why is it empty?SELECT relname, idx_tup_fetch + seq_tup_read as TotalReads from pg_stat_all_tables WHERE idx_tup_fetch + seq_tup_read != 0 order by TotalReads desc LIMIT 10; relname | totalreads ---------+------------(0 rows)//Bill", "msg_date": "Fri, 18 Nov 2016 14:52:54 +0100", "msg_from": "Metatrader EA <[email protected]>", "msg_from_op": true, "msg_subject": "DO I miss something ?" }, { "msg_contents": "On 2016-11-18 14:52, Metatrader EA wrote:\n> Hi,\n> \n> Do I miss something?\n> Shouldn't I have some rows from this query ?\n> \n> Why is it empty?\n> \n> SELECT relname, idx_tup_fetch + seq_tup_read as TotalReads from\n> pg_stat_all_tables\n> WHERE idx_tup_fetch + seq_tup_read != 0\n> order by TotalReads desc\n> LIMIT 10;\n> relname | totalreads\n> ---------+------------\n> (0 rows)\n> \n> //Bill\n\n\nIs statistics collection enabled in the config?\n\nSee: https://www.postgresql.org/docs/9.5/static/monitoring-stats.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 18 Nov 2016 15:04:46 +0100", "msg_from": "vinny <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DO I miss something ?" } ]
[ { "msg_contents": "I am trying to configure Postgres (version 9.5) to take advantage of very\nlarge memory environment. Server configuration has 256GB RAM, 12 cores and\n2 SSDs on RAID0 and runs Ubuntu. Swap is set at 4GB.\n\nThe machine is used for data warehouse operations. Typically only 1\nstatement runs at a time (but these can be very complex, for example doing\nrolling window functions with partition by several fields, over 30 metrics\nover 40 million rows of data. Number of joins in statements rarely exceeds\n5 (for big tables the number of joins is smaller). Window functions on\nlarge tables are common.\n\nNo other resource intensive processes are running at the same time.\n\nAll tables get recreated from scratch every day. If DB goes kaput, it's a\nminor inconvenience to restore from backups synced to the cloud.\n\nI started off with these settings which I compiled from a variety of\nsources including pgtune (not sure how good they are, I'm not a system\nadministrator):\n\nshared_buffers = 65024MB\nwork_mem = 1680MB\nmaintenance_work_mem = 10GB\nfsync = off\nwal_buffers = 16MB\nmax_wal_size = 8GB\nmin_wal_size = 4GB\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 2.0\neffective_cache_size = 192GB\ndefault_statistics_target = 500\n\nThe first time I spotted something wrong was this 40 million row table\nmentioned above. Looking at the resources on Ubuntu, as soon as the\nstatement started memory usage went up dramatically. Within a minute it\nwent to 100% (yes, the whole 256GB!) and postgres crashed with the\nmessage *FATAL:\nthe database system is in recovery mode*.\n\nI've tried various different settings, more notably:\n\n - I've reduced shared_buffers to 10GB but kept work_mem at 1600MB.\n - I've added the following lines to /etc/sysctl.conf (pinched from\n google searches):\n\nvm.swappiness = 0 vm.overcommit_memory = 2 vm.overcommit_ratio = 95\nvm.dirty_ratio = 2 vm.dirty_background_ratio = 1\n\nQuery again crashed, this time with message *\"out of memory DETAIL: Failed\non request of size 112\"*.\n\nWith these settings, this is the screenshot as memory usage approaches 100%:\n https://www.evernote.com/l/AJIE90HcZwVG_o2KqjYIOn72eQHQx2pc0QI\n\nI've then tried different settings for work_mem, not changing anything else.\n\nwork_mem = 400MB -> query runs fine but memory usage in the system doesn't\nexceed 1.3%\n\nwork_mem = 500MB -> usage hits 100% and postgres crashes out of memory.\n\nSo looks like work_mem is to blame. *However, can someone explain why at\n400MB Postgres does not seem to take advantage of the shedload of available\nmemory in the system?!*\n\nLooking for suggestions here. I'm not a DB system administrator, I'm just\nan analyst who wants to get their analysis done fast and efficiently hence\nthe hardware spec! What combination of settings can I try to make sure\npostgres makes full use of the available memory (without blindly trying\nvarious combinations)? How can I investigate what's limiting postgres from\ndoing so?\n\nI've done some reading but it's hard to tell what advice might apply to\n2016 hardware.\n\nIs there something else I need to configure on the Ubuntu side?\n\nGetting really desperate here so any help is greatly appreciated!\n\nThanks\n\nCarmen\n\nI am trying to configure Postgres (version 9.5) to take advantage of very large memory environment. Server configuration has 256GB RAM, 12 cores and 2 SSDs on RAID0 and runs Ubuntu. Swap is set at 4GB.The machine is used for data warehouse operations. 
", "msg_date": "Wed, 23 Nov 2016 22:15:25 +0000", "msg_from": "Carmen Mardiros <[email protected]>", "msg_from_op": true, "msg_subject": "How to tune Postgres to take advantage of 256GB RAM hardware" }, { "msg_contents": "

On 23 November 2016 23:15:25 CET, Carmen Mardiros <[email protected]> wrote:
>
>various combinations)? How can I investigate what's limiting postgres
>from
>doing so?

Why fsync=off?

Please run the queries with EXPLAIN ANALYSE and show us the output.
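
(For example - a sketch, substituting your own statement:

EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;

The BUFFERS option is not required, but it is often helpful for memory
and I/O questions like this one.)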

--
This message was sent from my Android mobile phone with K-9 Mail.", "msg_date": "Thu, 24 Nov 2016 05:19:36 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to tune Postgres to take advantage of 256GB RAM hardware" }, { "msg_contents": "Hi Andreas,

Thanks for the reply. fsync is off because I don't care if the data gets
corrupted in this environment if it means a performance gain, as it's only
a minor inconvenience to restore from backups.

Here is the query itself that pushes memory usage to 100%:
http://pastebin.com/VzCAerwd . What's interesting is that if I execute this
stand-alone, memory usage never exceeds 1.3% and the query completes in 7
minutes. But if I paste 2 other queries in the query window, followed by
this one, and execute them all at the same time, by the time postgres
reaches the 3rd (this query), memory usage goes up to 100%. I don't
understand why this is.

And the EXPLAIN output: https://explain.depesz.com/s/hwH5 (have to admit I
don't know how to read this!). Any help is greatly appreciated.

On Thu, 24 Nov 2016 at 04:19 Andreas Kretschmer <[email protected]>
wrote:

> On 23 November 2016 23:15:25 CET, Carmen Mardiros <[email protected]> wrote:
> >
> >various combinations)? How can I investigate what's limiting postgres
> >from
> >doing so?
>
> Why fsync=off?
>
> Please run the queries with EXPLAIN ANALYSE and show us the output.
>
> --
> This message was sent from my Android mobile phone with K-9 Mail.
>", "msg_date": "Thu, 24 Nov 2016 08:17:46 +0000", "msg_from": "Carmen Mardiros <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to tune Postgres to take advantage of 256GB RAM hardware" }, { "msg_contents": "> I am trying to configure Postgres (version 9.5)

This is the latest PG 9.5 ? ( = 9.5.5
<https://www.postgresql.org/docs/devel/static/release-9-5-5.html> ?
[ Release Date : 2016-10-27 ] )
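
(As an aside, the exact running version can be confirmed with either of
these standard commands:

SELECT version();
SHOW server_version;

)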

2016-11-24 9:17 GMT+01:00 Carmen Mardiros <[email protected]>:

> Hi Andreas,
>
> Thanks for the reply. fsync is off because I don't care if the data gets
> corrupted in this environment if it means a performance gain, as it's only
> a minor inconvenience to restore from backups.
>
> Here is the query itself that pushes memory usage to 100%:
> http://pastebin.com/VzCAerwd . What's interesting is that if I execute
> this stand-alone, memory usage never exceeds 1.3% and the query completes in 7
> minutes. But if I paste 2 other queries in the query window, followed by
> this one, and execute them all at the same time, by the time postgres
> reaches the 3rd (this query), memory usage goes up to 100%. I don't
> understand why this is.
>
> And the EXPLAIN output: https://explain.depesz.com/s/hwH5 (have to admit
> I don't know how to read this!). Any help is greatly appreciated.
>
> [...]
", "msg_date": "Thu, 24 Nov 2016 12:46:32 +0100", "msg_from": "Imre Samu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to tune Postgres to take advantage of 256GB RAM hardware" }, { "msg_contents": "It's PostgreSQL 9.5.4

On Thu, 24 Nov 2016 at 11:46 Imre Samu <[email protected]> wrote:

> > I am trying to configure Postgres (version 9.5)
>
> This is the latest PG 9.5 ? ( = 9.5.5
> <https://www.postgresql.org/docs/devel/static/release-9-5-5.html> ? [
> Release Date : 2016-10-27 ] )
>
> [...]
", "msg_date": "Thu, 24 Nov 2016 15:31:26 +0000", "msg_from": "Carmen Mardiros <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to tune Postgres to take advantage of 256GB RAM hardware" }, { "msg_contents": "Carmen Mardiros <[email protected]> writes:
> I've then tried different settings for work_mem, not changing anything else.
> work_mem = 400MB -> query runs fine but memory usage in the system doesn't
> exceed 1.3%
> work_mem = 500MB -> usage hits 100% and postgres crashes out of memory.

I suspect what may be happening is that when you push work_mem to >=
500MB, the planner decides it can replace the GroupAgg step with a
HashAgg, which tries to form all the aggregate results at once in memory.
Because of the drastic underestimate of the number of groups
(2.7 mil vs 27 mil actual), the hash table is much bigger than the planner
is expecting, causing memory consumption to bloat way beyond what it
should be.

You could confirm this idea by seeing if the EXPLAIN output changes that
way depending on work_mem.  (Use plain EXPLAIN, not EXPLAIN ANALYZE, so
you don't actually run out of memory while experimenting.)  If it's true,
you might be able to improve the group-count estimate by increasing the
statistics target for ANALYZE.
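
(Concretely, that would be something along these lines - a sketch with
hypothetical table/column names, since the schema isn't shown here:

ALTER TABLE big_table ALTER COLUMN group_col SET STATISTICS 2000;
ANALYZE big_table;

A per-column target overrides default_statistics_target; the maximum
value is 10000.)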

However, the group-count underestimate only seems to be a factor of 10,
so you'd still expect the memory usage to not be more than 5GB if the
planner were getting it right otherwise.  So there may be something
else wrong, maybe a plain old memory leak.

Can you generate a self-contained example that causes similar memory
overconsumption?  I'm guessing the behavior isn't very sensitive to
the exact data you're using as long as the group counts are similar,
so maybe you could post a script that generates junk test data that
causes this, rather than needing 27M rows of real data.
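
(As an illustration of the kind of self-contained script meant here - a
sketch only, with made-up row counts and names rather than the real
schema:

CREATE TABLE junk AS
SELECT (random() * 27000000)::int AS group_col,
       md5(g::text) AS filler
FROM generate_series(1, 27000000) g;
ANALYZE junk;

-- then compare plans under the two work_mem settings:
SET work_mem = '400MB';
EXPLAIN SELECT group_col, count(*) FROM junk GROUP BY group_col;

)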
[ { "msg_contents": "Hello,\n\nThere's a slow UPDATE query in my logs (~60s). However, when I run it\nmanually, it's really fast ( < 0.5ms).\n\n2016-11-23 18:13:51.962 GMT [742-25]: bpd_production bpd@web001(40916)\n00000 Passenger RubyApp: /var/www/bpd/current (production) LOG: duration:\n59876.947 ms statement: UPDATE \"contacts\" SET \"updated_at\" = '2016-11-23\n18:12:52.055456' WHERE \"contacts\".\"id\" = 2845179\n\nThis particular query isn't waiting on any locks that I can see. I actually\nfound it because I followed a lock queue up the chain and it bubbled up to\nthis query. This was one that was waiting on process 742:\n\n2016-11-23 18:13:00.095 GMT [314-118]: bpd_production bpd@web001(40547)\n00000 Passenger RubyApp: /var/www/bpd/current (production) LOG: process\n314 still waiting for ShareLock on transaction 1663649998 after 1000.067 ms\n2016-11-23 18:13:00.095 GMT [314-119]: bpd_production bpd@web001(40547)\n00000 Passenger RubyApp: /var/www/bpd/current (production) DETAIL: Process\nholding the lock: 742. Wait queue: 314.\n2016-11-23 18:13:00.095 GMT [314-120]: bpd_production bpd@web001(40547)\n00000 Passenger RubyApp: /var/www/bpd/current (production) CONTEXT: while\nupdating tuple (288387,8) in relation \"contacts\"\n2016-11-23 18:13:00.095 GMT [314-121]: bpd_production bpd@web001(40547)\n00000 Passenger RubyApp: /var/www/bpd/current (production) STATEMENT:\n UPDATE \"contacts\" SET \"news_items_last_modified\" = '2016-11-23\n18:12:59.090806' WHERE \"contacts\".\"id\" = 2845179\n\nAnyhow, I'm at a complete loss here since I've hit a dead end.\n\nThank you!\n\n*Aldo Sarmiento*\n\nHello,There's a slow UPDATE query in my logs (~60s). However, when I run it manually, it's really fast ( < 0.5ms).2016-11-23 18:13:51.962 GMT [742-25]: bpd_production bpd@web001(40916) 00000 Passenger RubyApp: /var/www/bpd/current (production) LOG:  duration: 59876.947 ms  statement: UPDATE \"contacts\" SET \"updated_at\" = '2016-11-23 18:12:52.055456' WHERE \"contacts\".\"id\" = 2845179This particular query isn't waiting on any locks that I can see. I actually found it because I followed a lock queue up the chain and it bubbled up to this query. This was one that was waiting on process 742:2016-11-23 18:13:00.095 GMT [314-118]: bpd_production bpd@web001(40547) 00000 Passenger RubyApp: /var/www/bpd/current (production) LOG:  process 314 still waiting for ShareLock on transaction 1663649998 after 1000.067 ms2016-11-23 18:13:00.095 GMT [314-119]: bpd_production bpd@web001(40547) 00000 Passenger RubyApp: /var/www/bpd/current (production) DETAIL:  Process holding the lock: 742. Wait queue: 314.2016-11-23 18:13:00.095 GMT [314-120]: bpd_production bpd@web001(40547) 00000 Passenger RubyApp: /var/www/bpd/current (production) CONTEXT:  while updating tuple (288387,8) in relation \"contacts\"2016-11-23 18:13:00.095 GMT [314-121]: bpd_production bpd@web001(40547) 00000 Passenger RubyApp: /var/www/bpd/current (production) STATEMENT:  UPDATE \"contacts\" SET \"news_items_last_modified\" = '2016-11-23 18:12:59.090806' WHERE \"contacts\".\"id\" = 2845179Anyhow, I'm at a complete loss here since I've hit a dead end.Thank you!Aldo Sarmiento", "msg_date": "Fri, 25 Nov 2016 21:13:06 -0800", "msg_from": "Aldo Sarmiento <[email protected]>", "msg_from_op": true, "msg_subject": "Slow UPDATE in logs that's usually fast" } ]
[ { "msg_contents": "Hello all,\r\n I can monitor a table use triger write in language C as the example in postgres document .But I don't know whether or not can a trigger monitor more than one table?Is there a example?\r\nThanks,\r\nzhangkai\r\n\r\n2016-11-28\r\n\r\n\r\n\r\nzhangkai\n\n\n\n\n\n\n\nHello all,\n     I can monitor a table use triger write in language \r\nC as the example in postgres document .But I don't know whether or not can \r\na trigger monitor more than one table?Is there a example?\nThanks,\nzhangkai\n \n2016-11-28\n\n\n\nzhangkai", "msg_date": "Mon, 28 Nov 2016 16:59:37 +0800", "msg_from": "\"zhangkai.gis\"<[email protected]>", "msg_from_op": true, "msg_subject": "can trigger monitor two tables?" }, { "msg_contents": "zhangkai.gis <[email protected]> wrote:\n\n> Hello all,\n> I can monitor a table use triger write in language C as the example in\n> postgres document .But I don't know whether or not can a trigger monitor more\n> than one table?Is there a example?\n\ni hope i understand you ...\n\nyou can use the same trigger-function for more than one table, but a\nTRIGGER is per table, you have to define the trigger for every table.\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 28 Nov 2016 11:49:07 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can trigger monitor two tables?" } ]
[ { "msg_contents": "Hi,\n\ni have a performance issue with bitmap index scans on huge amounts of big jsonb documents.\n\n\n===== Background ===== \n\n- table with big jsonb documents\n- gin index on these documents\n- queries using index conditions with low selectivity\n\n\n===== Example ===== \n\nselect version(); \n> PostgreSQL 9.6.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-17), 64-bit\n\nshow work_mem;\n> 1GB\n\n-- setup test data\ncreate table bitmap_scan_test as\nselect\ni,\n(select jsonb_agg(jsonb_build_object('x', i % 2, 'filler', md5(j::text))) from generate_series(0, 100) j) big_jsonb\nfrom\ngenerate_series(0, 100000) i;\n\ncreate index on bitmap_scan_test using gin (big_jsonb);\n\nanalyze bitmap_scan_test;\n\n\n-- query with bitmap scan\nexplain analyze\nselect\ncount(*)\nfrom\nbitmap_scan_test\nwhere\nbig_jsonb @> '[{\"x\": 1, \"filler\": \"cfcd208495d565ef66e7dff9f98764da\"}]';\n\nAggregate (cost=272.74..272.75 rows=1 width=8) (actual time=622.272..622.272 rows=1 loops=1)\n -> Bitmap Heap Scan on bitmap_scan_test (cost=120.78..272.49 rows=100 width=0) (actual time=16.496..617.431 rows=50000 loops=1)\n Recheck Cond: (big_jsonb @> '[{\"x\": 1, \"filler\": \"cfcd208495d565ef66e7dff9f98764da\"}]'::jsonb)\n Heap Blocks: exact=637\n -> Bitmap Index Scan on bitmap_scan_test_big_jsonb_idx (cost=0.00..120.75 rows=100 width=0) (actual time=16.371..16.371 rows=50000 loops=1)\n Index Cond: (big_jsonb @> '[{\"x\": 1, \"filler\": \"cfcd208495d565ef66e7dff9f98764da\"}]'::jsonb)\nPlanning time: 0.106 ms\nExecution time: 622.334 ms\n\n\nperf top -p... shows heavy usage of pglz_decompress:\n\nOverhead Shared Object Symbol\n 51,06% postgres [.] pglz_decompress\n 7,33% libc-2.12.so [.] memcpy\n...\n\n===== End of example ===== \n\n\nI wonder why bitmap heap scan adds such a big amount of time on top of the plain bitmap index scan. \nIt seems to me, that the recheck is active although all blocks are exact [1] and that pg is loading the jsonb for the recheck.\n\nIs this an expected behavior?\n\n\nRegards,\nMarc-Olaf\n\n\n[1] (http://dba.stackexchange.com/questions/106264/recheck-cond-line-in-query-plans-with-a-bitmap-index-scan)\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Wed, 30 Nov 2016 13:26:16 +0100", "msg_from": "Marc-Olaf Jaschke <[email protected]>", "msg_from_op": true, "msg_subject": "performance issue with bitmap index scans on huge amounts of big\n jsonb documents" }, { "msg_contents": "> big_jsonb @> '[{\"x\": 1, \"filler\": \"cfcd208495d565ef66e7dff9f98764da\"}]';\n\n\n> I wonder why bitmap heap scan adds such a big amount of time on top of the\n> plain bitmap index scan.\n> It seems to me, that the recheck is active although all blocks are exact\n> [1] and that pg is loading the jsonb for the recheck.\n>\n> Is this an expected behavior?\n>\n\n\nYes, this is expected. The gin index is lossy. 
It knows that all the
elements are present (except when it doesn't--large elements might get
hashed down and suffer hash collisions), but it doesn't know what the
recursive structure between them is, and has to do a recheck.

For example, if you change your example where clause to:

big_jsonb @> '[{\"filler\": 1, \"x\": \"cfcd208495d565ef66e7dff9f98764da\"}]';

You will see that the index still returns 50,000 rows, but now all of them
get rejected upon the recheck.

You could try changing the type of index to jsonb_path_ops. In your given
example, it won't make a difference, because you are actually counting half
the table and so half the table needs to be rechecked. But in my example,
jsonb_path_ops successfully rejects all the rows at the index stage.
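
(Creating such an index would look like this - a sketch reusing the test
table from the original post:

create index on bitmap_scan_test using gin (big_jsonb jsonb_path_ops);

Note that jsonb_path_ops indexes hash whole paths, so they support the @>
operator but not the key-exists operators.)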
It knows that all the\n> elements are present (except when it doesn't--large elements might get\n> hashed down and suffer hash collisions), but it doesn't know what the\n> recursive structure between them is, and has to do a recheck.\n>\n> For example, if you change your example where clause to:\n>\n> big_jsonb @> '[{\"filler\": 1, \"x\": \"cfcd208495d565ef66e7dff9f98764da\"}]';\n>\n> You will see that the index still returns 50,000 rows, but now all of them\n> get rejected upon the recheck.\n>\n> You could try changing the type of index to jsonb_path_ops. In your given\n> example, it won't make a difference, because you are actually counting half\n> the table and so half the table needs to be rechecked. But in my example,\n> jsonb_path_ops successfully rejects all the rows at the index stage.\n>\n> Cheers,\n>\n> Jeff\n>\n\nThanks for the explanation!Best Regards,Marc-OlafMarc-Olaf Jaschke · Softwareentwicklershopping24 GmbHWerner-Otto-Straße 1-7 · 22179 Hamburg Telefon: +49 (0) 40 6461 5830 · Fax: +49 (0) 40 64 61 [email protected] · www.s24.comAG Hamburg HRB 63371vertreten durch Dr. Björn Schäfers und Martin Mildner\n2016-12-05 3:28 GMT+01:00 Jeff Janes <[email protected]>:> big_jsonb @> '[{\"x\": 1, \"filler\": \"cfcd208495d565ef66e7dff9f98764da\"}]';\n\nI wonder why bitmap heap scan adds such a big amount of time on top of the plain bitmap index scan.\nIt seems to me, that the recheck is active although all blocks are exact [1] and that pg is loading the jsonb for the recheck.\n\nIs this an expected behavior?Yes, this is expected.  The gin index is lossy.  It knows that all the elements are present (except when it doesn't--large elements might get hashed down and suffer hash collisions), but it doesn't know what the recursive structure between them is, and has to do a recheck.For example, if you change your example where clause to:big_jsonb @> '[{\"filler\": 1, \"x\": \"cfcd208495d565ef66e7dff9f98764da\"}]';You will see that the index still returns 50,000 rows, but now all of them get rejected upon the recheck.You could try changing the type of index to jsonb_path_ops.  In your given example, it won't make a difference, because you are actually counting half the table and so half the table needs to be rechecked.  

Naturally this is a huge tradeoff so do some careful analysis before
making the change.

merlin", "msg_date": "Fri, 9 Dec 2016 11:03:29 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance issue with bitmap index scans on huge
 amounts of big jsonb documents" } ]