[
{
"msg_contents": "Substantial different index use between 9.5 and 9.6\nPostgres versions 9.5 and 9.6 running on Windows Server 2012. Installed \nusing EnterpriseDB. Both instances are on the same server, \npostgresql.conf for both are the same except max_locks_per_transaction = \n200 in 9.6 (caused insertion errors otherwise). On 9.6, Postgis is \n2.3.0, and I think same on 9.5 but not sure how to tell.\n\nDatabases on the 2 instances are the same (as far as I can tell).\n\nI have 2 relevant tables (created using same script in both instances). \nOne contains a geometry column (geom geometry(1107464) - a polygon) \nwith gist index. This table has around 10 billion records. The disks \nthese databases on aren't particularly fast, and indexing took about a week.\nSecond table has latitude (numeric(10, 8)), and longitude (numeric(11, \n8)) and about 10 million records.\n\nThe query I'm running is (a part of an insertion into a new table I was \ntrying to run)\n SELECT address_default_geocode_pid,\n (SELECT elevation FROM m_elevations e WHERE ST_Contains(e.geom, \nST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),\n ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)\n FROM address_default_geocode;\n\nUnder 9.5 the insertion takes about 11 hours. I gave up on 9.6.\n\nI thought I'd try just one record, so:\n\nSELECT address_default_geocode_pid,\n (SELECT elevation FROM m_elevations e WHERE ST_Contains(e.geom, \nST_SetSRID(ST_MakePoint(longitude, latitude), 4326))),\n ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)\n FROM address_default_geocode\n WHERE latitude = -33.87718472 AND longitude = 151.27544336;\n\nThis returns 3 rows (which is more than the average I'd expect BTW). On \n9.5 takes a few seconds (3-5) and again I gave up on 9.6\n\nLooking just at the query shown above, I noted a difference in explained \nbehaviour. 
Here is the output from 9.5:\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on address_default_geocode (cost=0.00..37760293.94 rows=1 \nwidth=25)\n Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = \n151.27544336))\n SubPlan 1\n -> Bitmap Heap Scan on m_elevations e \n(cost=282802.21..37401439.43 rows=3512160 width=8)\n Recheck Cond: (geom ~ \nst_setsrid(st_makepoint((address_default_geocode.longitude)::double \nprecision, (address_default_geocode.latitude)::double precision), 4326))\n Filter: _st_contains(geom, \nst_setsrid(st_makepoint((address_default_geocode.longitude)::double \nprecision, (address_default_geocode.latitude)::double precision), 4326))\n -> Bitmap Index Scan on m_elevations_geom_idx \n(cost=0.00..281924.17 rows=10536480 width=0)\n Index Cond: (geom ~ \nst_setsrid(st_makepoint((address_default_geocode.longitude)::double \nprecision, (address_default_geocode.latitude)::double precision), 4326))\n(8 rows)\n\n From 9.6\n\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on address_default_geocode \n(cost=10000000000.00..23297309357.08 rows=1 width=49)\n Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = \n151.27544336))\n SubPlan 1\n -> Seq Scan on m_elevations e \n(cost=10000000000.00..13296950520.12 rows=3512159563 width=8)\n Filter: st_contains(geom, \nst_setsrid(st_makepoint((address_default_geocode.longitude)::double \nprecision, (address_default_geocode.latitude)::double precision), 4326))\n(5 rows)\n\nInterestingly (change is hard coding of coordinates in second line):\n\nexplain SELECT address_default_geocode_pid,\n (SELECT elevation FROM m_elevations e WHERE ST_Contains(e.geom, \nST_SetSRID(ST_MakePoint(151.27544336, -33.87718472), 4326))),\n ST_SetSRID(ST_MakePoint(latitude, longitude), 4283)\n FROM address_default_geocode\n WHERE latitude = -33.87718472 AND longitude = 151.27544336;\n\nGives (in 9.6)\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------\n Seq Scan on address_default_geocode \n(cost=10037428497.36..10037787334.33 rows=1 width=49)\n Filter: ((latitude = '-33.87718472'::numeric) AND (longitude = \n151.27544336))\n InitPlan 1 (returns $0)\n -> Bitmap Heap Scan on m_elevations e \n(cost=272194.20..37428497.36 rows=3512160 width=8)\n Recheck Cond: (geom ~ \n'0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)\n Filter: _st_contains(geom, \n'0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)\n -> Bitmap Index Scan on m_elevations_geom_idx \n(cost=0.00..271316.16 rows=10536480 width=0)\n Index Cond: (geom ~ \n'0101000020E610000036E3976ED0E86240B879C29647F040C0'::geometry)\n(8 rows)\n\nWhich looks better.\n\nSo for some reason, 9.6 planner decides not to use the index for a small \nnumber of records returned from address_default_geocode.\nI have vacuum analysed both tables.\nClearly a sequential scan on 10 billion records is pretty slow (to say \nthe least).\n\nHas anyone seen anything like this/got any thoughts?\n\nI tried \"set enable_seqscan=false\" but didn't seem to have any effect.\n\nRegards\n\nBill\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your 
subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Dec 2016 10:38:50 +1100",
"msg_from": "Bill Measday <[email protected]>",
"msg_from_op": true,
"msg_subject": "Substantial different index use between 9.5 and 9.6"
},
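Bill mentions he is not sure how to tell which PostGIS version the 9.5 instance is running. A minimal way to check, assuming PostGIS was installed as an extension rather than via the legacy SQL scripts, is to ask each server directly:

SELECT PostGIS_Full_Version();   -- PostGIS plus GEOS/proj build details in one string
SELECT extversion FROM pg_extension WHERE extname = 'postgis';   -- just the extension version

Running both statements on the 9.5 and 9.6 instances would confirm whether the two clusters really carry the same PostGIS build.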
{
"msg_contents": "Bill Measday <[email protected]> writes:\n> Substantial different index use between 9.5 and 9.6\n\nMaybe you missed an ANALYZE after migrating? The plan difference\nseems to be due to a vast difference in rowcount estimate for the\nm_elevations condition:\n\n> -> Bitmap Heap Scan on m_elevations e \n> (cost=282802.21..37401439.43 rows=3512160 width=8)\n\n> -> Seq Scan on m_elevations e \n> (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)\n\nIf you don't know where that factor-of-1000 came from, maybe take\nit up with the postgis folk. It'd mostly be coming out of their\nselectivity estimation routines.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 01 Dec 2016 18:48:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Substantial different index use between 9.5 and 9.6"
},
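Tom's diagnosis hinges on the two servers producing rowcount estimates for m_elevations that differ by a factor of ~1000. A small sketch for comparing what each planner has to work with, using the table and column names from the original post (the standard statistics view only shows part of what PostGIS collects for a geometry column, but it is enough to spot a gross mismatch):

SELECT relname, reltuples::bigint AS estimated_rows, relpages
FROM pg_class
WHERE relname = 'm_elevations';

SELECT null_frac, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'm_elevations' AND attname = 'geom';

If these numbers differ wildly between 9.5 and 9.6, the problem is in the gathered statistics; if they match, the difference is more likely in how the PostGIS selectivity functions use them, as Tom suggests.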
{
"msg_contents": "Thanks Tom.\n\nFirst, this wasn't a migration but new db loaded from scratch (if that \nmatters).\n\nAs per the end of the original post \"I have vacuum analysed both \ntables\". I assume this is what you meant?\n\nMy gut feel was that it isn't a postgis issue since the third example I \ngave uses the index, but I will take it up with them too.\n\nRgds\n\n\nBill\n\nOn 2/12/2016 10:48 AM, Tom Lane wrote:\n> Bill Measday <[email protected]> writes:\n>> Substantial different index use between 9.5 and 9.6\n> Maybe you missed an ANALYZE after migrating? The plan difference\n> seems to be due to a vast difference in rowcount estimate for the\n> m_elevations condition:\n>\n>> -> Bitmap Heap Scan on m_elevations e\n>> (cost=282802.21..37401439.43 rows=3512160 width=8)\n>> -> Seq Scan on m_elevations e\n>> (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)\n> If you don't know where that factor-of-1000 came from, maybe take\n> it up with the postgis folk. It'd mostly be coming out of their\n> selectivity estimation routines.\n>\n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Dec 2016 11:26:09 +1100",
"msg_from": "Bill Measday <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Substantial different index use between 9.5 and 9.6"
},
{
"msg_contents": "ANALYZE takes samples at random, so statistics might be different even with same postgresql version:\n\nhttps://www.postgresql.org/docs/current/static/sql-analyze.html\n\nFor large tables, ANALYZE takes a random sample of the table contents, rather than examining every row. This allows even very large tables to be analyzed in a small amount of time. Note, however, that the statistics are only approximate, and will change slightly each time ANALYZE is run, even if the actual table contents did not change. This might result in small changes in the planner's estimated costs shown by EXPLAIN <https://www.postgresql.org/docs/current/static/sql-explain.html>. In rare situations, this non-determinism will cause the planner's choices of query plans to change after ANALYZE is run. To avoid this, raise the amount of statistics collected by ANALYZE, as described below.\n\nThough, having that round (x 1000) difference, my bet is that you have different statistics target whether on database, table or columns, see:\n\nThe extent of analysis can be controlled by adjusting the default_statistics_target <https://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET> configuration variable, or on a column-by-column basis by setting the per-column statistics target with ALTER TABLE ... ALTER COLUMN ... SET STATISTICS (see ALTER TABLE <https://www.postgresql.org/docs/current/static/sql-altertable.html>). The target value sets the maximum number of entries in the most-common-value list and the maximum number of bins in the histogram. The default target value is 100, but this can be adjusted up or down to trade off accuracy of planner estimates against the time taken for ANALYZE and the amount of space occupied in pg_statistic. In particular, setting the statistics target to zero disables collection of statistics for that column. It might be useful to do that for columns that are never used as part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since the planner will have no use for statistics on such columns.\n\nHere is some help on how to see statistics per column:\n\nhttp://stackoverflow.com/questions/15034622/check-statistics-targets-in-postgresql <http://stackoverflow.com/questions/15034622/check-statistics-targets-in-postgresql>\n\nCheck if this is the case.\n\n\n\n\n\n> El 2 dic 2016, a las 1:26, Bill Measday <[email protected]> escribió:\n> \n> Thanks Tom.\n> \n> First, this wasn't a migration but new db loaded from scratch (if that matters).\n> \n> As per the end of the original post \"I have vacuum analysed both tables\". I assume this is what you meant?\n> \n> My gut feel was that it isn't a postgis issue since the third example I gave uses the index, but I will take it up with them too.\n> \n> Rgds\n> \n> \n> Bill\n> \n> On 2/12/2016 10:48 AM, Tom Lane wrote:\n>> Bill Measday <[email protected]> writes:\n>>> Substantial different index use between 9.5 and 9.6\n>> Maybe you missed an ANALYZE after migrating? The plan difference\n>> seems to be due to a vast difference in rowcount estimate for the\n>> m_elevations condition:\n>> \n>>> -> Bitmap Heap Scan on m_elevations e\n>>> (cost=282802.21..37401439.43 rows=3512160 width=8)\n>>> -> Seq Scan on m_elevations e\n>>> (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)\n>> If you don't know where that factor-of-1000 came from, maybe take\n>> it up with the postgis folk. 
It'd mostly be coming out of their\n>> selectivity estimation routines.\n>> \n>> \t\t\tregards, tom lane\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 2 Dec 2016 16:41:56 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Substantial different index use between 9.5 and 9.6"
},
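Daniel's quoted documentation points at the per-column statistics target. A hedged sketch of how one could inspect and raise it for the geometry column follows; the value 1000 is only an example, not a recommendation:

-- -1 means the column uses default_statistics_target
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'm_elevations'::regclass AND attname = 'geom';

SHOW default_statistics_target;

ALTER TABLE m_elevations ALTER COLUMN geom SET STATISTICS 1000;
ANALYZE m_elevations;   -- refresh the stats with the larger sample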
{
"msg_contents": "Seems to be a replicable issue in PostGis - ticket raised at their end, \nso I'll wait for a resolution of the root cause.\n\n\nThanks for your help/thoughts.\n\n\nRgds\n\n\nBill\n\n\nOn 3/12/2016 2:41 AM, Daniel Blanch Bataller wrote:\n> ANALYZE takes samples at random, so statistics might be different even \n> with same postgresql version:\n>\n> https://www.postgresql.org/docs/current/static/sql-analyze.html\n>\n> For large tables, ANALYZE takes a random sample of the table\n> contents, rather than examining every row. This allows even very\n> large tables to be analyzed in a small amount of time. Note,\n> however, that the statistics are only approximate, and will change\n> slightly each time ANALYZE is run, even if the actual table\n> contents did not change. This might result in small changes in the\n> planner's estimated costs shown by EXPLAIN\n> <https://www.postgresql.org/docs/current/static/sql-explain.html>.\n> In rare situations, this non-determinism will cause the planner's\n> choices of query plans to change after ANALYZE is run. To avoid\n> this, raise the amount of statistics collected by ANALYZE, as\n> described below.\n>\n>\n> Though, having that round (x 1000) difference, my bet is that you have \n> different statistics target whether on database, table or columns, see:\n>\n> The extent of analysis can be controlled by adjusting the\n> default_statistics_target\n> <https://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-DEFAULT-STATISTICS-TARGET> configuration\n> variable, or on a column-by-column basis by setting the per-column\n> statistics target with ALTER TABLE ... ALTER COLUMN ... SET\n> STATISTICS (see ALTER TABLE\n> <https://www.postgresql.org/docs/current/static/sql-altertable.html>).\n> The target value sets the maximum number of entries in the\n> most-common-value list and the maximum number of bins in the\n> histogram. The default target value is 100, but this can be\n> adjusted up or down to trade off accuracy of planner estimates\n> against the time taken for ANALYZE and the amount of space\n> occupied in pg_statistic. In particular, setting the statistics\n> target to zero disables collection of statistics for that column.\n> It might be useful to do that for columns that are never used as\n> part of the WHERE, GROUP BY, or ORDER BY clauses of queries, since\n> the planner will have no use for statistics on such columns.\n>\n>\n> Here is some help on how to see statistics per column:\n>\n> http://stackoverflow.com/questions/15034622/check-statistics-targets-in-postgresql\n>\n> Check if this is the case.\n>\n>\n>\n>\n>\n>\n>> El 2 dic 2016, a las 1:26, Bill Measday <[email protected] \n>> <mailto:[email protected]>> escribió:\n>>\n>> Thanks Tom.\n>>\n>> First, this wasn't a migration but new db loaded from scratch (if \n>> that matters).\n>>\n>> As per the end of the original post \"I have vacuum analysed both \n>> tables\". I assume this is what you meant?\n>>\n>> My gut feel was that it isn't a postgis issue since the third example \n>> I gave uses the index, but I will take it up with them too.\n>>\n>> Rgds\n>>\n>>\n>> Bill\n>>\n>> On 2/12/2016 10:48 AM, Tom Lane wrote:\n>>> Bill Measday <[email protected] <mailto:[email protected]>> writes:\n>>>> Substantial different index use between 9.5 and 9.6\n>>> Maybe you missed an ANALYZE after migrating? 
The plan difference\n>>> seems to be due to a vast difference in rowcount estimate for the\n>>> m_elevations condition:\n>>>\n>>>> -> Bitmap Heap Scan on m_elevations e\n>>>> (cost=282802.21..37401439.43 rows=3512160 width=8)\n>>>> -> Seq Scan on m_elevations e\n>>>> (cost=10000000000.00..13296950520.12 rows=3512159563 width=8)\n>>> If you don't know where that factor-of-1000 came from, maybe take\n>>> it up with the postgis folk. It'd mostly be coming out of their\n>>> selectivity estimation routines.\n>>>\n>>> regards, tom lane\n>>\n>>\n>>\n>> -- \n>> Sent via pgsql-performance mailing list \n>> ([email protected] \n>> <mailto:[email protected]>)\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n",
"msg_date": "Sun, 4 Dec 2016 07:42:02 +1100",
"msg_from": "Bill Measday <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Substantial different index use between 9.5 and 9.6"
}
]
[
{
"msg_contents": "Hello, List.\n\nWe’ve got a strange planner behavior on a query to one of our bigger tables after we upgraded to postgres 9.6.1 recently.\nThe table (schema here http://pastebin.com/nRAny4bw <http://pastebin.com/nRAny4bw>) has 28m+ rows and is used to store chat messages for different chat rooms (symbol is the room id).\nThe query is as follows:\nSELECT \"tv_site_chathistory\".\"source\" FROM \"tv_site_chathistory\" WHERE \"tv_site_chathistory\".\"symbol\" = ’pm_OmoGVzBdyPnpYkXD' ORDER BY \"tv_site_chathistory\".\"id\" DESC LIMIT 30;\n(explain analyze is here https://explain.depesz.com/s/iyT <https://explain.depesz.com/s/iyT>)\n\nFor some reason planner chooses to scan using pkey index instead of index on symbol column. Most times it uses the right index, but for this particular ‘symbol’ value is resorts to pkey scan. One possible clue could be that last 30 rows with this particular symbol are spanning some relatively large time of creation.\n\nAny advice would be greatly appreciated!\nHello, List.We’ve got a strange planner behavior on a query to one of our bigger tables after we upgraded to postgres 9.6.1 recently.The table (schema here http://pastebin.com/nRAny4bw) has 28m+ rows and is used to store chat messages for different chat rooms (symbol is the room id).The query is as follows:SELECT \"tv_site_chathistory\".\"source\" FROM \"tv_site_chathistory\" WHERE \"tv_site_chathistory\".\"symbol\" = ’pm_OmoGVzBdyPnpYkXD' ORDER BY \"tv_site_chathistory\".\"id\" DESC LIMIT 30;(explain analyze is here https://explain.depesz.com/s/iyT)For some reason planner chooses to scan using pkey index instead of index on symbol column. Most times it uses the right index, but for this particular ‘symbol’ value is resorts to pkey scan. One possible clue could be that last 30 rows with this particular symbol are spanning some relatively large time of creation.Any advice would be greatly appreciated!",
"msg_date": "Tue, 6 Dec 2016 17:35:23 +0300",
"msg_from": "Andrey Povazhnyi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query question"
},
{
"msg_contents": "Andrey Povazhnyi <[email protected]> writes:\n> We’ve got a strange planner behavior on a query to one of our bigger tables after we upgraded to postgres 9.6.1 recently.\n\nThe basic problem with this query is that there are no good alternatives.\nThe planner believes there are about 53K rows matching the WHERE\ncondition. (I assume this estimate is roughly in line with reality,\nelse we have different problems to talk about.) It can either scan down\nthe \"id\" index and stop when it finds the 30th row matching WHERE, or\nit can use the \"symbol\" index to read all 53K rows matching WHERE and\nthen sort them by \"id\". Neither one of those is going to be speedy;\nbut the more rows there are matching WHERE, the better the first way\nis going to look.\n\nIf you're worried about doing this a lot, it might be worth your while\nto provide a 2-column index on (source, id) --- in that order --- which\nwould allow a query plan that directly finds the required 30 rows as\nconsecutive index entries. Possibly this could replace your index on\n\"source\" alone, depending on how much bigger the 2-col index is and\nhow many queries have no use for the second column. See\nhttps://www.postgresql.org/docs/current/static/indexes.html\nparticularly 11.3 - 11.5.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 06 Dec 2016 10:33:55 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query question"
},
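Tom's reply says "a 2-column index on (source, id)", but since the WHERE clause filters on symbol, the index he is describing is presumably (symbol, id). A sketch of it, with a hypothetical index name:

CREATE INDEX CONCURRENTLY tv_site_chathistory_symbol_id_idx
ON tv_site_chathistory (symbol, id);

With equality on symbol, the planner can walk this index backwards and stop after 30 matching entries, so the ORDER BY id DESC LIMIT 30 needs no sort and no visits to unrelated rows.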
{
"msg_contents": "Tom,\n\nThank you for a thorough answer. We’ll try the 2-column index.\n\nRegards,\nAndrey Povazhnyi\n\n> On Dec 6, 2016, at 6:33 PM, Tom Lane <[email protected]> wrote:\n> \n> Andrey Povazhnyi <[email protected]> writes:\n>> We’ve got a strange planner behavior on a query to one of our bigger tables after we upgraded to postgres 9.6.1 recently.\n> \n> The basic problem with this query is that there are no good alternatives.\n> The planner believes there are about 53K rows matching the WHERE\n> condition. (I assume this estimate is roughly in line with reality,\n> else we have different problems to talk about.) It can either scan down\n> the \"id\" index and stop when it finds the 30th row matching WHERE, or\n> it can use the \"symbol\" index to read all 53K rows matching WHERE and\n> then sort them by \"id\". Neither one of those is going to be speedy;\n> but the more rows there are matching WHERE, the better the first way\n> is going to look.\n> \n> If you're worried about doing this a lot, it might be worth your while\n> to provide a 2-column index on (source, id) --- in that order --- which\n> would allow a query plan that directly finds the required 30 rows as\n> consecutive index entries. Possibly this could replace your index on\n> \"source\" alone, depending on how much bigger the 2-col index is and\n> how many queries have no use for the second column. See\n> https://www.postgresql.org/docs/current/static/indexes.html\n> particularly 11.3 - 11.5.\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Dec 2016 12:51:07 +0300",
"msg_from": "Andrey Povazhnyi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query question"
}
]
[
{
"msg_contents": "Hi all,\nI have a query that I *think* should use a multicolumn index, but\nsometimes isn't, resulting in slow queries.\n\nWe have a DB that records GPS coordinates for vehicles:\n\n Table \"public.updates\"\n Column | Type | Modifiers\n------------+--------------------------+------------------------------------------------------\n id | integer | not null default\nnextval('updates_id_seq'::regclass)\n driver_id | integer | not null\n latitude | double precision | not null\n longitude | double precision | not null\n time | timestamp with time zone | not null default now()\n vehicle_id | integer |\nIndexes:\n \"updates_pkey\" PRIMARY KEY, btree (id)\n \"ix_updates_time\" btree (\"time\")\n \"updates_driver_id_time_idx\" btree (driver_id, \"time\")\n \"updates_vehicle_id_time_idx\" btree (vehicle_id, \"time\")\n\nTable has about 15M records across 100 distinct driver_id.\n\nI want to get the last record for a specific driver:\n\nSELECT * FROM updates WHERE driver_id=123 ORDER BY \"time\" DESC LIMIT 1;\n\nFor some values of driver_id, it does what I expect and uses\nupdates_driver_id_time_idx to fetch the records in 2 ms or less. For\nother values of driver_id, it does an index scan backwards on\nix_updates_time, taking upwards of 2 minutes.\n\nGood plan:\n\n Limit (cost=0.11..1.38 rows=1 width=56) (actual time=2.710..2.710\nrows=1 loops=1)\n -> Index Scan Backward using updates_driver_id_time_idx on updates\n (cost=0.11..139278.28 rows=110051 width=56) (actual time=2.709..2.709\nrows=1 loops=1)\n Index Cond: (driver_id = 17127)\n Total runtime: 2.732 ms\n(4 rows)\n\nBad plan:\n\n Limit (cost=0.09..0.69 rows=1 width=56) (actual\ntime=216769.111..216769.112 rows=1 loops=1)\n -> Index Scan Backward using ix_updates_time on updates\n(cost=0.09..272339.04 rows=448679 width=56) (actual\ntime=216769.110..216769.110 rows=1 loops=1)\n Filter: (driver_id = 30132)\n Rows Removed by Filter: 5132087\n Total runtime: 216769.174 ms\n\n\n From cursory testing, the difference seems to be based on how many\ntotal rows there are for a particular driver. The above query uses\nupdates_driver_id_time_idx for drivers with less than about 300K rows,\nbut uses ix_updates_time for drivers with more than about 300K rows.\n\nAnything we can do to make it do the \"right\" thing? We are also\nconsidering denormalizing the data and keeping a \"cache\" of the same\ndata in another table.\n\npgsql version: 9.3.14 and 9.5.3, already tried vacuum analyze.\n\nThanks,\nEric\n\n\n-- \nEric Jiang, DoubleMap\[email protected] | www.doublemap.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Dec 2016 09:00:16 -0800",
"msg_from": "Eric Jiang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Querying with multicolumn index"
},
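Because the plan flips somewhere around 300K rows per driver, it can help to compare the real per-driver distribution with what ANALYZE has recorded. A sketch using the table and column names above (the GROUP BY query is exact but slow on 15M rows):

SELECT driver_id, count(*) AS row_count
FROM updates
GROUP BY driver_id
ORDER BY row_count DESC;

SELECT n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'updates' AND attname = 'driver_id';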
{
"msg_contents": "På fredag 09. desember 2016 kl. 18:00:16, skrev Eric Jiang <[email protected] \n<mailto:[email protected]>>:\nHi all,\n I have a query that I *think* should use a multicolumn index, but\n sometimes isn't, resulting in slow queries.\n\n We have a DB that records GPS coordinates for vehicles:\n\n Table \"public.updates\"\n Column | Type | Modifiers\n \n------------+--------------------------+------------------------------------------------------\n id | integer | not null default\n nextval('updates_id_seq'::regclass)\n driver_id | integer | not null\n latitude | double precision | not null\n longitude | double precision | not null\n time | timestamp with time zone | not null default now()\n vehicle_id | integer |\n Indexes:\n \"updates_pkey\" PRIMARY KEY, btree (id)\n \"ix_updates_time\" btree (\"time\")\n \"updates_driver_id_time_idx\" btree (driver_id, \"time\")\n \"updates_vehicle_id_time_idx\" btree (vehicle_id, \"time\")\n\n Table has about 15M records across 100 distinct driver_id.\n\n I want to get the last record for a specific driver:\n\n SELECT * FROM updates WHERE driver_id=123 ORDER BY \"time\" DESC LIMIT 1;\n\n For some values of driver_id, it does what I expect and uses\n updates_driver_id_time_idx to fetch the records in 2 ms or less. For\n other values of driver_id, it does an index scan backwards on\n ix_updates_time, taking upwards of 2 minutes.\n\n Good plan:\n\n Limit (cost=0.11..1.38 rows=1 width=56) (actual time=2.710..2.710\n rows=1 loops=1)\n -> Index Scan Backward using updates_driver_id_time_idx on updates\n (cost=0.11..139278.28 rows=110051 width=56) (actual time=2.709..2.709\n rows=1 loops=1)\n Index Cond: (driver_id = 17127)\n Total runtime: 2.732 ms\n (4 rows)\n\n Bad plan:\n\n Limit (cost=0.09..0.69 rows=1 width=56) (actual\n time=216769.111..216769.112 rows=1 loops=1)\n -> Index Scan Backward using ix_updates_time on updates\n (cost=0.09..272339.04 rows=448679 width=56) (actual\n time=216769.110..216769.110 rows=1 loops=1)\n Filter: (driver_id = 30132)\n Rows Removed by Filter: 5132087\n Total runtime: 216769.174 ms\n\n\n From cursory testing, the difference seems to be based on how many\n total rows there are for a particular driver. The above query uses\n updates_driver_id_time_idx for drivers with less than about 300K rows,\n but uses ix_updates_time for drivers with more than about 300K rows.\n\n Anything we can do to make it do the \"right\" thing? We are also\n considering denormalizing the data and keeping a \"cache\" of the same\n data in another table.\n\n pgsql version: 9.3.14 and 9.5.3, already tried vacuum analyze.\n\n Thanks,\n Eric\n \nYou should be having this index:\n \ncreate index updates_driver_time_idx ON updates(driver_id, \"time\" DESC);\n \n-- Andreas Joseph Krogh\nCTO / Partner - Visena AS\nMobile: +47 909 56 963\[email protected] <mailto:[email protected]>\nwww.visena.com <https://www.visena.com>\n <https://www.visena.com>",
"msg_date": "Fri, 9 Dec 2016 18:56:21 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "On Fri, Dec 9, 2016 at 9:56 AM, Andreas Joseph Krogh <[email protected]>\nwrote:\n\n> You should be having this index:\n>\n> create index updates_driver_time_idx ON updates(driver_id, \"time\" *DESC)*;\n>\n\nI'm not sure I understand the intent of this fix - are you saying that\nbtree indexes only work in a certain direction?\n\nI created this index and the query plans did not change.\n\n-- \nEric Jiang, DoubleMap\[email protected] | www.doublemap.com\n\nOn Fri, Dec 9, 2016 at 9:56 AM, Andreas Joseph Krogh <[email protected]> wrote:You should be having this index:\n \ncreate index updates_driver_time_idx ON updates(driver_id, \"time\" DESC);I'm not sure I understand the intent of this fix - are you saying that btree indexes only work in a certain direction?I created this index and the query plans did not change.-- Eric Jiang, [email protected] | www.doublemap.com",
"msg_date": "Fri, 9 Dec 2016 10:58:01 -0800",
"msg_from": "Eric Jiang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "Eric Jiang <[email protected]> writes:\n> I have a query that I *think* should use a multicolumn index, but\n> sometimes isn't, resulting in slow queries.\n\nI tried to duplicate this behavior, without success. Are you running\nwith nondefault planner parameters?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 09 Dec 2016 18:51:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
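Tom's question about nondefault planner parameters can be answered from the catalog. A small sketch listing the cost-related settings and where each value comes from:

SELECT name, setting, source
FROM pg_settings
WHERE name IN ('seq_page_cost', 'random_page_cost', 'cpu_tuple_cost',
               'cpu_index_tuple_cost', 'cpu_operator_cost',
               'effective_cache_size');

Any row whose source is not 'default' was changed somewhere (postgresql.conf, ALTER SYSTEM, per-database or per-role settings), which is exactly what turns out to matter later in this thread.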
{
"msg_contents": "Hi\n\nAs a quick fix: Have you considered dropping ix_updates_time index? \n\nI’ve been able to reproduce the issue, but with bigger numbers than you. When I dropped ix_updates_time it went much much faster. It uses updates_driver_id_time_idx index instead.\n\nFor some reason the planner does not make a good estimation in this case. Can you show us EXPLAIN (ANALYZE, BUFFERS) before and after dropping ix_updates_time index? Can you show us too the output of SHOW shared_buffers; ? \n\nI suspect the issue has to do with low shared_buffers configuration and cache misses, and maybe some costs suboptimal configuration I’ll try to find it out, if anyone can enlighten us it will be very welcomed.\n\n\nP.S. Meanwhile If you still need 'time' index, you can create an index using ‘time' and ‘customer_id' in this order.\n\n\n\nCheers,\n\nDaniel Blanch.\n\n\n> El 9 dic 2016, a las 18:00, Eric Jiang <[email protected]> escribió:\n> \n> Hi all,\n> I have a query that I *think* should use a multicolumn index, but\n> sometimes isn't, resulting in slow queries.\n> \n> We have a DB that records GPS coordinates for vehicles:\n> \n> Table \"public.updates\"\n> Column | Type | Modifiers\n> ------------+--------------------------+------------------------------------------------------\n> id | integer | not null default\n> nextval('updates_id_seq'::regclass)\n> driver_id | integer | not null\n> latitude | double precision | not null\n> longitude | double precision | not null\n> time | timestamp with time zone | not null default now()\n> vehicle_id | integer |\n> Indexes:\n> \"updates_pkey\" PRIMARY KEY, btree (id)\n> \"ix_updates_time\" btree (\"time\")\n> \"updates_driver_id_time_idx\" btree (driver_id, \"time\")\n> \"updates_vehicle_id_time_idx\" btree (vehicle_id, \"time\")\n> \n> Table has about 15M records across 100 distinct driver_id.\n> \n> I want to get the last record for a specific driver:\n> \n> SELECT * FROM updates WHERE driver_id=123 ORDER BY \"time\" DESC LIMIT 1;\n> \n> For some values of driver_id, it does what I expect and uses\n> updates_driver_id_time_idx to fetch the records in 2 ms or less. For\n> other values of driver_id, it does an index scan backwards on\n> ix_updates_time, taking upwards of 2 minutes.\n> \n> Good plan:\n> \n> Limit (cost=0.11..1.38 rows=1 width=56) (actual time=2.710..2.710\n> rows=1 loops=1)\n> -> Index Scan Backward using updates_driver_id_time_idx on updates\n> (cost=0.11..139278.28 rows=110051 width=56) (actual time=2.709..2.709\n> rows=1 loops=1)\n> Index Cond: (driver_id = 17127)\n> Total runtime: 2.732 ms\n> (4 rows)\n> \n> Bad plan:\n> \n> Limit (cost=0.09..0.69 rows=1 width=56) (actual\n> time=216769.111..216769.112 rows=1 loops=1)\n> -> Index Scan Backward using ix_updates_time on updates\n> (cost=0.09..272339.04 rows=448679 width=56) (actual\n> time=216769.110..216769.110 rows=1 loops=1)\n> Filter: (driver_id = 30132)\n> Rows Removed by Filter: 5132087\n> Total runtime: 216769.174 ms\n> \n> \n> From cursory testing, the difference seems to be based on how many\n> total rows there are for a particular driver. The above query uses\n> updates_driver_id_time_idx for drivers with less than about 300K rows,\n> but uses ix_updates_time for drivers with more than about 300K rows.\n> \n> Anything we can do to make it do the \"right\" thing? 
We are also\n> considering denormalizing the data and keeping a \"cache\" of the same\n> data in another table.\n> \n> pgsql version: 9.3.14 and 9.5.3, already tried vacuum analyze.\n> \n> Thanks,\n> Eric\n> \n> \n> -- \n> Eric Jiang, DoubleMap\n> [email protected] | www.doublemap.com\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 09:06:31 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "We aren't using any special planner settings - all enable_* options are \"on\".\n\nOn Sat, Dec 10, 2016 at 12:06 AM, Daniel Blanch Bataller\n<[email protected]> wrote:\n> As a quick fix: Have you considered dropping ix_updates_time index?\n\nWe do occasionally want to use ix_updates_time, although not very often.\n\n> I’ve been able to reproduce the issue, but with bigger numbers than you. When I dropped ix_updates_time it went much much faster. It uses updates_driver_id_time_idx index instead.\n>\n> For some reason the planner does not make a good estimation in this case. Can you show us EXPLAIN (ANALYZE, BUFFERS) before and after dropping ix_updates_time index? Can you show us too the output of SHOW shared_buffers; ?\n\nHere's EXPLAIN (ANALYZE, BUFFERS) with the above bad query on a cold cache:\n\n Limit (cost=0.09..0.70 rows=1 width=56) (actual\ntime=244846.915..244846.915 rows=1 loops=1)\n Buffers: shared hit=3999254 read=57831\n I/O Timings: read=242139.661\n -> Index Scan Backward using ix_updates_time on updates\n(cost=0.09..271696.74 rows=442550 width=56) (actual\ntime=244846.913..244846.913 rows=1 loops=1)\n Filter: (driver_id = 30132)\n Rows Removed by Filter: 5316811\n Buffers: shared hit=3999254 read=57831\n I/O Timings: read=242139.661\n Total runtime: 244846.946 ms\n\nand after dropping ix_updates_time:\n\n Limit (cost=0.11..0.98 rows=1 width=56) (actual time=2.270..2.271\nrows=1 loops=1)\n Buffers: shared hit=1 read=4\n I/O Timings: read=2.230\n -> Index Scan Backward using updates_driver_id_time_idx on updates\n (cost=0.11..382307.69 rows=442550 width=56) (actual time=2.270..2.270\nrows=1 loops=1)\n Index Cond: (driver_id = 30132)\n Buffers: shared hit=1 read=4\n I/O Timings: read=2.230\n Total runtime: 2.305 ms\n\nand `SHOW shared_buffers;`\n\n shared_buffers\n----------------\n 244MB\n\n> I suspect the issue has to do with low shared_buffers configuration and cache misses, and maybe some costs suboptimal configuration I’ll try to find it out, if anyone can enlighten us it will be very welcomed.\n>\n>\n> P.S. Meanwhile If you still need 'time' index, you can create an index using ‘time' and ‘customer_id' in this order.\n\nDid you mean an index on (time, driver_id)? I did:\n\nCREATE INDEX CONCURRENTLY ix_updates_time_driver_id ON updates\n(\"time\", driver_id)\n\nbut seems like the planner will use it for driver_id having more than\n~300k rows:\n\n Limit (cost=0.11..0.79 rows=1 width=56) (actual\ntime=115.051..115.052 rows=1 loops=1)\n Buffers: shared hit=20376\n -> Index Scan Backward using ix_updates_time_driver_id on updates\n(cost=0.11..302189.90 rows=443924 width=56) (actual\ntime=115.048..115.048 rows=1 loops=1)\n Index Cond: (driver_id = 30132)\n Buffers: shared hit=20376\n Total runtime: 115.091 ms\n\nIt does seem faster than when having an index on just \"time\", but\nstill not optimal.\n\nReally appreciate everyone's help!\n\n-- \nEric Jiang, DoubleMap\[email protected] | www.doublemap.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 12:15:35 -0800",
"msg_from": "Eric Jiang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "Hi,\n\nOn 12/10/2016 12:51 AM, Tom Lane wrote:\n> Eric Jiang <[email protected]> writes:\n>> I have a query that I *think* should use a multicolumn index, but\n>> sometimes isn't, resulting in slow queries.\n>\n> I tried to duplicate this behavior, without success. Are you running\n> with nondefault planner parameters?\n>\n\nMy guess is this is a case of LIMIT the matching rows are uniformly \ndistributed in the input data. The planner likely concludes that for a \ndriver with a lot of data we'll find the first row using ix_updates_time \nvery quickly, and that it will be cheaper than inspecting the larger \nmulti-column index. But imagine a driver with a lots of data long time \nago. That breaks the LIMIT fairly quickly.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 21:34:18 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
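Tomas's hypothesis is easy to test against the data itself: if the slow driver's newest row is old, the backward scan over ix_updates_time has to step over every newer row from all other drivers before it finds a match. A sketch using the two driver_ids quoted earlier in the thread:

SELECT driver_id,
       count(*)    AS row_count,
       min("time") AS oldest,
       max("time") AS newest
FROM updates
WHERE driver_id IN (17127, 30132)
GROUP BY driver_id;

A much older max("time") for driver 30132 would line up with the 5,132,087 rows the bad plan reports as "Rows Removed by Filter".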
{
"msg_contents": "Eric Jiang <[email protected]> writes:\n> We aren't using any special planner settings - all enable_* options are \"on\".\n\nNo, I'm asking about the cost settings (random_page_cost etc). The cost\nestimates you're showing seem impossible with the default settings.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 19:49:23 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "Tomas Vondra <[email protected]> writes:\n> On 12/10/2016 12:51 AM, Tom Lane wrote:\n>> I tried to duplicate this behavior, without success. Are you running\n>> with nondefault planner parameters?\n\n> My guess is this is a case of LIMIT the matching rows are uniformly \n> distributed in the input data. The planner likely concludes that for a \n> driver with a lot of data we'll find the first row using ix_updates_time \n> very quickly, and that it will be cheaper than inspecting the larger \n> multi-column index. But imagine a driver with a lots of data long time \n> ago. That breaks the LIMIT fairly quickly.\n\nThe fact that it's slow enough to be a problem is doubtless related to\nthat effect. But AFAICS, the planner should never prefer that index\nfor this query, because even with a uniform-density assumption, the\nindex that really matches the query ought to look better.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 19:51:33 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "On Sat, Dec 10, 2016 at 4:49 PM, Tom Lane <[email protected]> wrote:\n>> We aren't using any special planner settings - all enable_* options are \"on\".\n>\n> No, I'm asking about the cost settings (random_page_cost etc). The cost\n> estimates you're showing seem impossible with the default settings.\n\nTom, really appreciate your pointers. This problem was occurring on\nHeroku Postgres databases, and they seem to have set different cost\nconstants. I tried using SET LOCAL to set them back to the default\nsettings before running EXPLAIN.\n\nMy testing here shows that resetting all of random_page_cost,\ncpu_tuple_cost, cpu_index_tuple_cost, and cpu_operator_cost does not\nchange the plan (but does change the cost estimates), while setting\neffective_cache_size alone will change the plan.\n\nSpecifically, changing only effective_cache_size from '900000kB' to\n'4GB' caused the planner to prefer the optimal index\nupdates_driver_id_time_idx.\n\nIs increasing the DB's RAM the correct fix for this problem? It seems\nto me that no matter how much cache is available, looking at the\n(driver_id, time) index is always the optimal choice for this query.\n\nThanks,\nEric\n\n-- \nEric Jiang, DoubleMap\[email protected] | www.doublemap.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Dec 2016 18:08:48 -0800",
"msg_from": "Eric Jiang <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Querying with multicolumn index"
},
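For reference, the experiment Eric describes can be reproduced without touching the server configuration, since effective_cache_size can be set per transaction. A sketch using the driver_id from his earlier example; the 4GB figure is the value he reports, not a tuning recommendation:

BEGIN;
SET LOCAL effective_cache_size = '4GB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM updates WHERE driver_id = 30132 ORDER BY "time" DESC LIMIT 1;
ROLLBACK;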
{
"msg_contents": "Hi all,\n\nThomas is absolutely right, the distribution I synthetically made, had 6M records but very old, 9M old, as you can see it had to skip 9M records before finding a suitable record using time index. \n\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM updates WHERE driver_id = 100 ORDER BY \"time\" DESC LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.44..0.65 rows=1 width=36) (actual time=3827.807..3827.807 rows=1 loops=1)\n Buffers: shared hit=24592 read=99594 written=659\n -> Index Scan Backward using updates_time_idx on updates (cost=0.44..1284780.53 rows=6064800 width=36) (actual time=3827.805..3827.805 rows=1 loops=1)\n Filter: (driver_id = 100)\n Rows Removed by Filter: 9000000\n Buffers: shared hit=24592 read=99594 written=659\n Planning time: 0.159 ms\n Execution time: 3827.846 ms\n(8 rows)\n\n\nHere you have my tests where I was able to reproduce the problem using default settings on 9.6, 9.5 and 9.3. 9.6 and 9.5 choose the wrong index, while 9.3 didn’t. (update: 9.5 didn’t fail last time) \n\n\n\n\n\n\n\n\nHowever when I tried to add more than one value with this strange distribution ~ 30% of distribution to one value the index bad choice problem didn’t happen again in none of the different versions.\n\nI Hope this helps. Regards,\n\nDaniel Blanch.\n\n\n> El 10 dic 2016, a las 21:34, Tomas Vondra <[email protected]> escribió:\n> \n> Hi,\n> \n> On 12/10/2016 12:51 AM, Tom Lane wrote:\n>> Eric Jiang <[email protected]> writes:\n>>> I have a query that I *think* should use a multicolumn index, but\n>>> sometimes isn't, resulting in slow queries.\n>> \n>> I tried to duplicate this behavior, without success. Are you running\n>> with nondefault planner parameters?\n>> \n> \n> My guess is this is a case of LIMIT the matching rows are uniformly distributed in the input data. The planner likely concludes that for a driver with a lot of data we'll find the first row using ix_updates_time very quickly, and that it will be cheaper than inspecting the larger multi-column index. But imagine a driver with a lots of data long time ago. That breaks the LIMIT fairly quickly.\n> \n> regards\n> \n> -- \n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 11 Dec 2016 07:04:45 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
},
{
"msg_contents": "Hi all,\n\nIf anyone still interested in the issue I think I have a very plausible explanation of Eric’s postgresql bad index choice that is: bloated updates_driver_id_time_idx index.\n\nThough it’s possible to fool postgresql planner, as I’ve shown in previous tests, this happens with a very concrete data distribution ~ 100 evenly distributed keys over 15M records and ~ 6M records under one single key, if you play a bit with figures it doesn’t happen anymore.\n\nEric’s data distribution wasn’t that extreme, as far as I know he had ~ 100K vs 500K distributions … Well, I’ve been able to reproduce the problem with a close data distribution to Erik’s. I made it creating a ‘bloated index’. If optimal index is too big, postgres tries with another suboptimal index, in this case index ’time’. \n\nSee this excerpt of my tests results:\n\n(..)\n-- populate table with 99 homogeneus distributed values\nINSERT INTO updates SELECT q, q % 99, q, q, to_timestamp(q), q % 99 FROM generate_series(1, 15000000) q;\nINSERT 0 15000000\nTime: 65686,547 ms\n-- populate table with 1 value with 500K rows, simmilar distribution you posted.\nINSERT INTO updates SELECT q + 15000000, 100, q, q, to_timestamp(q), -- timestamp will start at 1 at end at 6M\n\t100 FROM generate_series(1, 500000) q;\nINSERT 0 500000\nTime: 2463,073 ms\n-- add constraints and indexes\n\n(…)\n\n-- create 'bloated' driver_id, time index.\nCREATE INDEX ON updates (driver_id, \"time\") WITH (fillfactor = 10);\nCREATE INDEX\nTime: 41234,091 ms\n-- check index sizes, updates_driver_id_idx is huge.\nSELECT relname, relpages FROM pg_class WHERE relname LIKE 'updates%';\n relname | relpages \n-----------------------------+----------\n updates | 129167\n updates_driver_id_time_idx | 576919 \n updates_id_seq | 1\n updates_pkey | 42502\n updates_time_idx | 42502\n updates_vehicle_id_time_idx | 59684\n(6 rows)\n\nTime: 16,810 ms\n-- check behavior with bloated index\nANALYZE updates;\nANALYZE\nTime: 254,917 ms\n\n(..)\n\nTime: 4,635 ms\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM updates WHERE driver_id = 100 ORDER BY \"time\" DESC LIMIT 1;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..1.91 rows=1 width=36) (actual time=21486.015..21486.015 rows=1 loops=1)\n Buffers: shared hit=39618 read=160454 written=592\n -> Index Scan Backward using updates_time_idx on updates (cost=0.43..691283.45 rows=469134 width=36) (actual time=21486.014..21486.014 rows=1 loops=1)\n Filter: (driver_id = 100)\n Rows Removed by Filter: 14500000\n Buffers: shared hit=39618 read=160454 written=592\n Planning time: 0.171 ms\n Execution time: 21486.068 ms\n(8 rows)\n\nTime: 21486,905 ms\n-- rebuild index with default fillfactor\nALTER INDEX updates_driver_id_time_idx SET (fillfactor = 90);\nALTER INDEX\nTime: 0,682 ms\nREINDEX INDEX updates_driver_id_time_idx;\nREINDEX\nTime: 23559,530 ms\n-- recheck index sizes, updates_driver_id_idx should look pretty simmilar to others.\nSELECT relname, relpages FROM pg_class WHERE relname LIKE 'updates%';\n relname | relpages \n-----------------------------+----------\n updates | 129167\n updates_driver_id_time_idx | 59684\n updates_id_seq | 1\n updates_pkey | 42502\n updates_time_idx | 42502\n updates_vehicle_id_time_idx | 59684\n(6 rows)\n\nTime: 0,452 ms\n-- check behavior with regular sized index\nEXPLAIN (ANALYZE, BUFFERS) SELECT * FROM updates WHERE driver_id = 100 ORDER BY \"time\" 
DESC LIMIT 1;\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.56..1.69 rows=1 width=36) (actual time=0.032..0.033 rows=1 loops=1)\n Buffers: shared hit=2 read=3\n -> Index Scan Backward using updates_driver_id_time_idx on updates (cost=0.56..529197.34 rows=469133 width=36) (actual time=0.032..0.032 rows=1 loops=1)\n Index Cond: (driver_id = 100)\n Buffers: shared hit=2 read=3\n Planning time: 0.074 ms\n Execution time: 0.046 ms\n(7 rows)\n\nTime: 0,312 ms\n\n\n@Eric\n\nHow to solve the problem:\n\nFirst of all check if this is the case, check indexes sizes, if you have this problem updates_driver_id_time_idx should be significantly bigger than others.\n\nSELECT relname, relpages FROM pg_class WHERE relname LIKE 'updates%’;\n\nCheck index configuration to see if you have different fillfactor configuration\n\n\\d+ updates_driver_id_time_idx\n\nIf you have setup a different fillfactor, turn it to normal, that is 90%. I don’t see why you should have a low fillfactor, your data doesn’t seem to have frecuent updates, by the contrary, it seems only write and read data.\n\nALTER INDEX updates_driver_id_time_idx SET (fillfactor = 90)\n\nIf your index fillfactor is normal, there is a chance it got bloated, but this is rare. \n\nReindex your data.\n\nREINDEX INDEX updates_driver_id_time_idx;\n\nRun tests again.\n\n\nRegards,\n\nDaniel Blanch.\n\nP.S. Here you have my full tests and output, you might find them useful. Don’t forget to show us index sizes and index configuration, please.\n\n\n\n\n\n> El 11 dic 2016, a las 7:04, Daniel Blanch Bataller <[email protected]> escribió:\n> \n> Hi all,\n> \n> Thomas is absolutely right, the distribution I synthetically made, had 6M records but very old, 9M old, as you can see it had to skip 9M records before finding a suitable record using time index. \n> \n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM updates WHERE driver_id = 100 ORDER BY \"time\" DESC LIMIT 1;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.44..0.65 rows=1 width=36) (actual time=3827.807..3827.807 rows=1 loops=1)\n> Buffers: shared hit=24592 read=99594 written=659\n> -> Index Scan Backward using updates_time_idx on updates (cost=0.44..1284780.53 rows=6064800 width=36) (actual time=3827.805..3827.805 rows=1 loops=1)\n> Filter: (driver_id = 100)\n> Rows Removed by Filter: 9000000\n> Buffers: shared hit=24592 read=99594 written=659\n> Planning time: 0.159 ms\n> Execution time: 3827.846 ms\n> (8 rows)\n> \n> \n> Here you have my tests where I was able to reproduce the problem using default settings on 9.6, 9.5 and 9.3. 9.6 and 9.5 choose the wrong index, while 9.3 didn’t. (update: 9.5 didn’t fail last time) \n> \n> <test_bad_index_choice.sql><bad_idx_choice.9.6.out><bad_idx_choice.9.5.out><bad_idx_choice.9.3.out>\n> \n> However when I tried to add more than one value with this strange distribution ~ 30% of distribution to one value the index bad choice problem didn’t happen again in none of the different versions.\n> \n> I Hope this helps. 
Regards,\n> \n> Daniel Blanch.\n> \n> \n>> El 10 dic 2016, a las 21:34, Tomas Vondra <[email protected]> escribió:\n>> \n>> Hi,\n>> \n>> On 12/10/2016 12:51 AM, Tom Lane wrote:\n>>> Eric Jiang <[email protected]> writes:\n>>>> I have a query that I *think* should use a multicolumn index, but\n>>>> sometimes isn't, resulting in slow queries.\n>>> \n>>> I tried to duplicate this behavior, without success. Are you running\n>>> with nondefault planner parameters?\n>>> \n>> \n>> My guess is this is a case of LIMIT the matching rows are uniformly distributed in the input data. The planner likely concludes that for a driver with a lot of data we'll find the first row using ix_updates_time very quickly, and that it will be cheaper than inspecting the larger multi-column index. But imagine a driver with a lots of data long time ago. That breaks the LIMIT fairly quickly.\n>> \n>> regards\n>> \n>> -- \n>> Tomas Vondra http://www.2ndQuadrant.com\n>> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>> \n>> \n>> -- \n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>",
"msg_date": "Tue, 13 Dec 2016 22:14:44 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Querying with multicolumn index"
}
] |
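A compact way to run the size-and-fillfactor check Daniel recommends in a single query (a sketch; the 'updates%' pattern matches his test tables and should be replaced with the real table name):

    SELECT relname,
           pg_size_pretty(pg_relation_size(oid)) AS size,
           reloptions                            AS storage_parameters
    FROM pg_class
    WHERE relname LIKE 'updates%'
    ORDER BY pg_relation_size(oid) DESC;
    -- reloptions shows e.g. {fillfactor=10} when a non-default value was set;
    -- NULL means the default (90 for btree indexes) applies.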
[
{
"msg_contents": "Hi All;\n\n\nWe have a client running on VMware, they have heavy write traffic and we \nwant to isolate the IO for the tx logs (pg_xlog). However it seems the \nbest plan based on feedback from the client is either\n\n(a) simply leave the pg_xlog dir in the VMDK\n\nor\n\n(b) relocate pg_xlog to NAS/NFS\n\n\nI'm not a VMware expert, however I thought VMware would allow the \ncreation of multiple disk volumes and attach them via separate mount \npoints. Is this not true? If it is an option can someone point me to a \nhow to...\n\n\nAlso, if we cannot do multiple VMDK volumes then what is everyone's \nthoughts about relocating pg_xlog to an NFS mount?\n\n\nThanks in advance\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Dec 2016 13:16:14 -0700",
"msg_from": "ProPAAS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Isolation of tx logs on VMware"
},
{
"msg_contents": "On 12/13/2016 12:16 PM, ProPAAS DBA wrote:\n> Hi All;\n\n>\n> I'm not a VMware expert, however I thought VMware would allow the\n> creation of multiple disk volumes and attach them via separate mount\n> points. Is this not true? If it is an option can someone point me to a\n> how to...\n\nYes it is possible to do this and then you will be able to use standard \nOS tools to determine the IO utilization.\n\n>\n>\n> Also, if we cannot do multiple VMDK volumes then what is everyone's\n> thoughts about relocating pg_xlog to an NFS mount?\n>\n\nI personally wouldn't do it but it would depend on the implementation.\n\nJD\n\n>\n> Thanks in advance\n>\n>\n>\n>\n>\n\n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Dec 2016 12:22:16 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Isolation of tx logs on VMware"
}
] |
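Before choosing between a dedicated VMDK and NFS it can help to measure how much WAL the workload actually writes. A minimal sketch, assuming a 9.x server where the functions still carry the xlog names (they were renamed in PostgreSQL 10); the location literal is only an example value taken from the first call:

    SELECT pg_current_xlog_location();   -- note the returned position, e.g. '2A/12B3C4D0'
    -- wait a representative interval, say 60 seconds, then:
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), '2A/12B3C4D0') AS wal_bytes_written;
    -- divide by the interval to get the sustained WAL write rate the new volume must handle.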
[
{
"msg_contents": "Hello mailing list!\n\nWe have a JSONB column in a table that has a key (string) => value (int)\nstructure. We want to be able to create a btree index that accepts an\narbitrary key to allowing sorting by key.\n\nTable Example:\n\n> +----+------------------------------------------+\n> | ID | JSONB |\n> +----+------------------------------------------+\n> | 1 | {\"key_1\": 20, \"key_2\": 30, \"key_52\": -1} |\n> | 2 | {\"key_1\": 10} |\n> +----+------------------------------------------+\n\n\nHere is the kind of query we want to run:\n\n> select id, (jsonb ->> 'key_1')::int as sort_key\n> from my_table\n> where (jsonb ? 'key_1' and (jsonb ->> 'key_1')::int > 0) and (jsonb ?\n> 'key_2' and jsonb ->> (jsonb ->> 'key_2')::int > 50)\n\norder by sort_key desc\n> limit 100;\n\n\nWe know that we can create indexes for each individual key (create index\nmy_table_key_1_idx on my_table using btree((jsonb -> 'key_1')) or using a\npartial index including the ? operator) but the issue is that there are\naround 5000 potential keys, which means 5000 indexes.\n\nWe tried doing the relational thing, and splitting the JSONB table into\nit's own separate table, which is great because we can use a simple btree\nindex, but unfortunately this forces us to use weird queries such as:\n\n> select id, max(value) filter (where key = 'key_1') as sort_key\n> from my_table_split\n> where (\n> (key = 'key_1' and value > 0) or\n> (key = 'key_2' and value > 50)\n> )\n> group by id having count(*) = 2\n>\norder by sort_key desc\n\nlimit 100;\n\n\nSuch a query takes a disappointing long time to aggregate. This also has\nthe disadvantage that if we wanted to expand my_table we'd have to do an\ninner join further decreasing performance.\n\nI see that in 2013 there was a talk (\nhttp://www.sai.msu.su/~megera/postgres/talks/Next%20generation%20of%20GIN.pdf)\nabout ordered GIN indexes which seems perfect for our case, but I can't see\nany progress or updates on that.\n\nDoes anyone have any ideas on how to approach this in a for performant way\nwith the Postgres we have today?\n\nThank you,\nRory.\n\nHello mailing list!We have a JSONB column in a table that has a key (string) => value (int) structure. We want to be able to create a btree index that accepts an arbitrary key to allowing sorting by key.Table Example:+----+------------------------------------------+| ID | JSONB |+----+------------------------------------------+| 1 | {\"key_1\": 20, \"key_2\": 30, \"key_52\": -1} || 2 | {\"key_1\": 10} |+----+------------------------------------------+Here is the kind of query we want to run:select id, (jsonb ->> 'key_1')::int as sort_keyfrom my_table where (jsonb ? 'key_1' and (jsonb ->> 'key_1')::int > 0) and (jsonb ? 'key_2' and jsonb ->> (jsonb ->> 'key_2')::int > 50)order by sort_key desclimit 100;We know that we can create indexes for each individual key (create index my_table_key_1_idx on my_table using btree((jsonb -> 'key_1')) or using a partial index including the ? 
operator) but the issue is that there are around 5000 potential keys, which means 5000 indexes.We tried doing the relational thing, and splitting the JSONB table into it's own separate table, which is great because we can use a simple btree index, but unfortunately this forces us to use weird queries such as:select id, max(value) filter (where key = 'key_1') as sort_keyfrom my_table_splitwhere ( (key = 'key_1' and value > 0) or (key = 'key_2' and value > 50))group by id having count(*) = 2order by sort_key desc limit 100;Such a query takes a disappointing long time to aggregate. This also has the disadvantage that if we wanted to expand my_table we'd have to do an inner join further decreasing performance.I see that in 2013 there was a talk (http://www.sai.msu.su/~megera/postgres/talks/Next%20generation%20of%20GIN.pdf) about ordered GIN indexes which seems perfect for our case, but I can't see any progress or updates on that.Does anyone have any ideas on how to approach this in a for performant way with the Postgres we have today?Thank you,Rory.",
"msg_date": "Wed, 14 Dec 2016 15:18:09 +1300",
"msg_from": "Rory <[email protected]>",
"msg_from_op": true,
"msg_subject": "Ordering on GIN Index"
}
] |
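If only a handful of the ~5000 keys are actually used for filtering and sorting, the per-key expression index mentioned above can be written so that one index serves both the range filter and the ORDER BY. A sketch for key_1 only (my_table and the column named jsonb are the names from the example; this clearly does not scale to every possible key):

    CREATE INDEX my_table_key_1_int_idx
        ON my_table (((jsonb ->> 'key_1')::int) DESC)
        WHERE jsonb ? 'key_1';
    -- the planner can match this partial index because the query already
    -- contains the predicate  jsonb ? 'key_1'.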
[
{
"msg_contents": "Dear expert,\n\nIn postgreSQL-9.1,the size of pgsql_tmp inside tablespace (Temp tablespace) is increased by 544G in one day.\nHowever, the DBsize is as usual but tablespace size is getting increased.\nCould you please suggest why it is happening ?\n\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear expert,\n \nIn postgreSQL-9.1,the size of \npgsql_tmp inside tablespace (Temp tablespace) is increased by 544G in one day.\nHowever, the DBsize is as usual but tablespace size is getting increased.\nCould you please suggest why it is happening ?\n \n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 15 Dec 2016 10:28:39 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Size of Temporary tablespace is increasing very much in postgresql\n 9.1."
},
{
"msg_contents": "On 15/12/16 23:28, Dinesh Chandra 12108 wrote:\n\n> Dear expert,\n>\n> In postgreSQL-9.1,the size of *pgsql_tmp* inside tablespace (Temp \n> tablespace) is increased by 544G in one day.\n>\n> However, the DBsize is as usual but tablespace size is getting increased.\n>\n> Could you please suggest why it is happening ?\n>\n>\n\nThat is due to queries doing sorts or (hash) joins. You can log which \nqueries are doing this with the log_temp_files parameter.\n\nNow it might be that this is just normal/expected (e.g complex data \nwarehouse style workload), but it could also be many small queries that \nmight benefit from some additional indexes (logging the queries will \nhelp you decide what if anything needs to be done).\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 17 Dec 2016 14:57:57 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Size of Temporary tablespace is increasing very much in\n postgresql 9.1."
},
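A sketch of the settings involved; on 9.1 log_temp_files can be changed in postgresql.conf (plus a reload) or per session by a superuser:

    SET log_temp_files = 0;   -- 0 = log every temporary file together with the statement that created it
    SHOW work_mem;            -- sorts and hashes spill into pgsql_tmp once they exceed this per-node limit
    -- raising work_mem for the offending queries (or adding the indexes Mark mentions)
    -- is what ultimately shrinks pgsql_tmp.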
{
"msg_contents": "Dear Mark,\n\nThanks for your valuable comment.\nNow problem is resolved.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n-----Original Message-----\nFrom: Mark Kirkwood [mailto:[email protected]]\nSent: 17 December, 2016 7:28 AM\nTo: Dinesh Chandra 12108 <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Size of Temporary tablespace is increasing very much in postgresql 9.1.\n\nOn 15/12/16 23:28, Dinesh Chandra 12108 wrote:\n\n> Dear expert,\n>\n> In postgreSQL-9.1,the size of *pgsql_tmp* inside tablespace (Temp\n> tablespace) is increased by 544G in one day.\n>\n> However, the DBsize is as usual but tablespace size is getting increased.\n>\n> Could you please suggest why it is happening ?\n>\n>\n\nThat is due to queries doing sorts or (hash) joins. You can log which queries are doing this with the log_temp_files parameter.\n\nNow it might be that this is just normal/expected (e.g complex data warehouse style workload), but it could also be many small queries that might benefit from some additional indexes (logging the queries will help you decide what if anything needs to be done).\n\nregards\n\nMark\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 19 Dec 2016 09:19:18 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Size of Temporary tablespace is increasing very much in\n postgresql 9.1."
}
] |
[
{
"msg_contents": "Hey All,\n\nI am not a PG expert. I like PG but i am puzzled as to what I shoud do .\n\nI have a 4 core 5 GIG vm running a 500M db (it should fit to ram easly) and\nI face slow queries.\n\nhere is a view that I have :\n SELECT o.id,\n cc.name AS \"from\",\n o.phone,\n c.name AS \"to\",\n parcel_info.value::character varying(64) AS email,\n o.barcode AS barcode_global,\n o.barcode_alt AS barcode,\n uu.name AS destination,\n cur.name AS source,\n o.code,\n tr.context AS status,\n tr.gener AS last_update,\n rc.value::character varying(254) AS refcode,\n o.type,\n slot_inf.title AS size\n FROM data.orders o\n LEFT JOIN data.clients c ON c.id = o.client_id\n LEFT JOIN data.users u ON u.id = o.user_id\n LEFT JOIN data.clients cc ON cc.id = u.client_id\n LEFT JOIN data.users uu ON o.destin = uu.id\n LEFT JOIN ( SELECT DISTINCT ON (ccsend.order_id) ccsend.order_id,\n cu.name\n FROM data.ccsend\n LEFT JOIN data.users cu ON cu.id = ccsend.source_id) cur ON\ncur.order_id = o.id\n LEFT JOIN ( SELECT DISTINCT ON (track.order_id) track.order_id,\n co.context,\n track.gener\n FROM data.track\n LEFT JOIN data.contexts co ON co.id = track.context\n ORDER BY track.order_id, track.id DESC) tr ON tr.order_id = o.id\n LEFT JOIN ( SELECT oi.order_id,\n oi.key,\n oi.value\n FROM data.orders_info oi\n WHERE oi.key::text = 'email'::text) parcel_info ON\nparcel_info.order_id = o.id\n LEFT JOIN ( SELECT orders_info.order_id,\n orders_info.value\n FROM data.orders_info\n WHERE orders_info.key::text = 'refcode'::text) rc ON rc.order_id\n= o.id\n LEFT JOIN data.slot_inf ON o.size = slot_inf.id;\n\n\n\n\nand the xplain :https://explain.depesz.com/s/0LTn\n\nIt runs for ~5 seconds .\n\nCan anyone suggest me anything on this ?\n\ntx,\nGabliver\n\nHey All, I am not a PG expert. I like PG but i am puzzled as to what I shoud do . I have a 4 core 5 GIG vm running a 500M db (it should fit to ram easly) and I face slow queries. here is a view that I have : SELECT o.id, cc.name AS \"from\", o.phone, c.name AS \"to\", parcel_info.value::character varying(64) AS email, o.barcode AS barcode_global, o.barcode_alt AS barcode, uu.name AS destination, cur.name AS source, o.code, tr.context AS status, tr.gener AS last_update, rc.value::character varying(254) AS refcode, o.type, slot_inf.title AS size FROM data.orders o LEFT JOIN data.clients c ON c.id = o.client_id LEFT JOIN data.users u ON u.id = o.user_id LEFT JOIN data.clients cc ON cc.id = u.client_id LEFT JOIN data.users uu ON o.destin = uu.id LEFT JOIN ( SELECT DISTINCT ON (ccsend.order_id) ccsend.order_id, cu.name FROM data.ccsend LEFT JOIN data.users cu ON cu.id = ccsend.source_id) cur ON cur.order_id = o.id LEFT JOIN ( SELECT DISTINCT ON (track.order_id) track.order_id, co.context, track.gener FROM data.track LEFT JOIN data.contexts co ON co.id = track.context ORDER BY track.order_id, track.id DESC) tr ON tr.order_id = o.id LEFT JOIN ( SELECT oi.order_id, oi.key, oi.value FROM data.orders_info oi WHERE oi.key::text = 'email'::text) parcel_info ON parcel_info.order_id = o.id LEFT JOIN ( SELECT orders_info.order_id, orders_info.value FROM data.orders_info WHERE orders_info.key::text = 'refcode'::text) rc ON rc.order_id = o.id LEFT JOIN data.slot_inf ON o.size = slot_inf.id;and the xplain :https://explain.depesz.com/s/0LTnIt runs for ~5 seconds . Can anyone suggest me anything on this ? tx,Gabliver",
"msg_date": "Sat, 17 Dec 2016 23:25:53 +0100",
"msg_from": "Gabliver Faluker <[email protected]>",
"msg_from_op": true,
"msg_subject": "bad performance"
},
{
"msg_contents": "Gabliver Faluker <[email protected]> writes:\n> It runs for ~5 seconds .\n\nI'm a little skeptical that a 12-way join producing 340K rows\nand executing in 5 seconds should be considered \"bad performance\".\n\nIt looks like it'd help some if you increased work_mem enough to let\nboth sorts happen in-memory rather than externally. But really, this\nis going to take awhile no matter what. Do you really need all 340K\nrows of the result? Can you improve your data representation so that\nyou don't need to join quite so many tables to get the answer, and\n(probably even more importantly) so that you don't need to use\nSELECT DISTINCT? The sort/unique steps needed to do DISTINCT are\neating a large part of the runtime, and they also form an optimization\nfence IIRC.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 17 Dec 2016 18:04:18 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance"
},
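A quick way to test the work_mem part of Tom's advice without touching postgresql.conf (the 256MB figure is only an example; pick a value the server can afford per sort, per connection):

    SET work_mem = '256MB';   -- session-local, other connections keep the old value
    -- re-run the view's SELECT under EXPLAIN (ANALYZE, BUFFERS): the sort nodes should
    -- switch from "Sort Method: external merge" to "quicksort" once work_mem is large enough
    RESET work_mem;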
{
"msg_contents": "On 17/12/16 23:04, Tom Lane wrote:\n> so that you don't need to use\n> SELECT DISTINCT? The sort/unique steps needed to do DISTINCT are\n> eating a large part of the runtime,\n\nDoes a hash join result in a set of buckets that are then read out\nin order? It might, unless the sort method takes advantage of\npartially-sorted inout, be cheaper (by log(num-buckets)) to sort/uniq\neach bucket separately (and it would parallelize, too).\n-- \nCheers,\n Jeremy\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 25 Dec 2016 13:33:55 +0000",
"msg_from": "Jeremy Harris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: bad performance"
}
] |
[
{
"msg_contents": "Dear Expert,\n\nCould you please suggest me for the below query?\n\nI want to vacuum the entire database with the exception of several\ntables.\n\nBecause there are some tables which are very big in size, so I just want to exclude them.\n\nThanks in advance.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear Expert,\n \nCould you please suggest me for the below query?\n \nI want to vacuum the entire database with the exception of several\ntables.\n \nBecause there are some tables which are very big in size, so I just want to exclude them.\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Wed, 21 Dec 2016 17:49:16 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to vacuum entire database excluding some tables in PostgreSQL9.1."
},
{
"msg_contents": "Something like this in a bash script?\n\n\n#!/bin/bash\n\necho \"select schemaname, tablename from pg_tables where tablename not in \n(your list of excluded tables) and schemaname not in \n('information_schema', 'pg_catalog')\" | psql -t > /tmp/tablist\n\nexec < /tmp/tablist\nwhile read line\ndo\n set - $line\n echo \"Vacuuming [$1] [$3]\"\n echo \"VACUUM VERBOSE ${1}.${3}\" | psql\n\ndone\n\n\n\n\nOn 12/21/2016 10:49 AM, Dinesh Chandra 12108 wrote:\n>\n> Dear Expert,\n>\n> Could you please suggest me for the below query?\n>\n> *I want to vacuum the entire database with the exception of several*\n>\n> *tables.*\n>\n> Because there are some tables which are very big in size, so I just \n> want to exclude them.\n>\n> Thanks in advance.\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 | Ext 1078 |[email protected] \n> <mailto:%[email protected]>\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n> ------------------------------------------------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) \n> and may contain confidential and privileged information. Any \n> unauthorized review, use, disclosure or distribution is prohibited. If \n> you are not the intended recipient, please contact the sender by reply \n> email and destroy all copies of the original message. Check all \n> attachments for viruses before opening them. All views or opinions \n> presented in this e-mail are those of the author and may not reflect \n> the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\nSomething like this in a bash script?\n\n\n#!/bin/bash\n\n echo \"select schemaname, tablename from pg_tables where tablename\n not in (your list of excluded tables) and schemaname not in\n ('information_schema', 'pg_catalog')\" | psql -t > /tmp/tablist\n\n exec < /tmp/tablist\n while read line \n do\n ���� set - $line\n ���� echo \"Vacuuming [$1] [$3]\"\n ���� echo \"VACUUM VERBOSE ${1}.${3}\" | psql\n\n done\n\n\n\n\n\nOn 12/21/2016 10:49 AM, Dinesh Chandra\n 12108 wrote:\n\n\n\n\n\nDear Expert,\n�\nCould you\n please suggest me for the below query?\n�\nI want to\n vacuum the entire database with the exception of several\ntables.\n�\nBecause there\n are some tables which are very big in size, so I just want\n to exclude them.\n�\nThanks in\n advance.\n�\nRegards,\nDinesh\n Chandra\n|Database\n administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\n +91-9953975849 | Ext 1078\n |[email protected]\n\nPlot\n No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201\n 305,India.\n�\n\n\n\n\n DISCLAIMER:\n\n This email message is for the sole use of the intended\n recipient(s) and may contain confidential and privileged\n information. Any unauthorized review, use, disclosure or\n distribution is prohibited. If you are not the intended\n recipient, please contact the sender by reply email and destroy\n all copies of the original message. Check all attachments for\n viruses before opening them. All views or opinions presented in\n this e-mail are those of the author and may not reflect the\n opinion of Cyient or those of our affiliates.",
"msg_date": "Wed, 21 Dec 2016 11:16:28 -0700",
"msg_from": "ProPAAS DBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to vacuum entire database excluding some tables in\n PostgreSQL9.1."
}
] |
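An alternative that stays inside psql: have the server generate the VACUUM statements with format() (available since 9.1) and execute them with \gexec, which requires a psql client of version 9.6 or newer. The excluded table names are placeholders:

    SELECT format('VACUUM VERBOSE %I.%I', schemaname, tablename)
    FROM pg_tables
    WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
      AND tablename NOT IN ('my_big_table_1', 'my_big_table_2')
    \gexec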
[
{
"msg_contents": "Dear Expert,\nI am getting the below error in my database.\nERROR: invalid page header in block 25561983 of relation pg_tblspc/55703433/PG_9.1_201105231/55703436/113490260\n\nCan you please suggest me how to resolve it? I think its related to block corruption.\nHow can I find particular block in which table? And how to resolve that particular block issue.\n\nThanks in adwance\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear Expert,\nI am getting the below error in my database.\nERROR: invalid page header in block 25561983 of relation pg_tblspc/55703433/PG_9.1_201105231/55703436/113490260\n \nCan you please suggest me how to resolve it? I think its related to block corruption.\nHow can I find particular block in which table? And how to resolve that particular block issue.\n \nThanks in adwance\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Tue, 27 Dec 2016 09:24:17 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Invalid page header in block 25561983 of relation pg_tblspc"
},
{
"msg_contents": "On Tue, Dec 27, 2016 at 6:24 PM, Dinesh Chandra 12108\n<[email protected]> wrote:\n> Can you please suggest me how to resolve it? I think its related to block\n> corruption.\n\nYou may want to roll in a backup, and move to a different server:\nhttps://wiki.postgresql.org/wiki/Corruption\n\n> How can I find particular block in which table? And how to resolve that\n> particular block issue.\n\npg_filenode_relation() is your friend:\nhttps://www.postgresql.org/docs/9.6/static/functions-admin.html#FUNCTIONS-ADMIN-DBOBJECT\n\n\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and may\n> contain confidential and privileged information. Any unauthorized review,\n> use, disclosure or distribution is prohibited. If you are not the intended\n> recipient, please contact the sender by reply email and destroy all copies\n> of the original message. Check all attachments for viruses before opening\n> them. All views or opinions presented in this e-mail are those of the author\n> and may not reflect the opinion of Cyient or those of our affiliates.\n\n#fail. This is a public mailing list.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Dec 2016 20:01:04 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Invalid page header in block 25561983 of relation pg_tblspc"
}
] |
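A sketch of mapping the file in the error message back to a relation: in the path pg_tblspc/55703433/PG_9.1_201105231/55703436/113490260 the first number is the tablespace OID, the second the database OID and the last the relfilenode. pg_filenode_relation() only exists from 9.4 onwards, so on 9.1 query pg_class directly while connected to the database with OID 55703436:

    -- 9.4 and later:
    SELECT pg_filenode_relation(55703433, 113490260);
    -- 9.1:
    SELECT relname, relkind FROM pg_class WHERE relfilenode = 113490260;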
[
{
"msg_contents": "Dear colleagues,\n\ncan anyone please explain, why we do not create indexes on master?\nIn my case master / child design blindly follows partitioning guide https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html <https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html>.\nMy collaborator was unhappy with performance of queries over master table with filtering by one of fields\n\n\tSELECT * FROM “master\" WHERE “field\" BETWEEN x AND y\n\n(there are indexes for “field” on child tables).\nHe has created index on master once and found that the query returns 100x faster.\nI have naive idea that it won’t help if index is created before the data is there — i.e. indexes on master aren’t updated when data loaded to child table.\nI’m curious is it right or it’s something less primitive.\n\nThanks and have a happy holidays!\nVal.\nDear colleagues,can anyone please explain, why we do not create indexes on master?In my case master / child design blindly follows partitioning guide https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html.My collaborator was unhappy with performance of queries over master table with filtering by one of fields SELECT * FROM “master\" WHERE “field\" BETWEEN x AND y(there are indexes for “field” on child tables).He has created index on master once and found that the query returns 100x faster.I have naive idea that it won’t help if index is created before the data is there — i.e. indexes on master aren’t updated when data loaded to child table.I’m curious is it right or it’s something less primitive.Thanks and have a happy holidays!Val.",
"msg_date": "Tue, 27 Dec 2016 18:22:13 +0300",
"msg_from": "Valerii Valeev <[email protected]>",
"msg_from_op": true,
"msg_subject": "why we do not create indexes on master"
},
{
"msg_contents": "Valerii Valeev <[email protected]> wrote:\n\n> Dear colleagues,\n> \n> can anyone please explain, why we do not create indexes on master?\n> In my case master / child design blindly follows partitioning guide https://\n> www.postgresql.org/docs/9.6/static/ddl-partitioning.html.\n> My collaborator was unhappy with performance of queries over master table with\n> filtering by one of fields\n> \n> SELECT * FROM “master\" WHERE “field\" BETWEEN x AND y\n> \n> (there are indexes for “field” on child tables).\n> He has created index on master once and found that the query returns 100x\n> faster.\n\nplease show us explain analyse with/without index on master.\n\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Dec 2016 17:04:27 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why we do not create indexes on master"
},
{
"msg_contents": "On Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <[email protected]>\nwrote:\n\n> I have naive idea that it won’t help if index is created before the data\n> is there — i.e. indexes on master aren’t updated when data loaded to child\n> table.\n>\n\nIndexes on the master table of a partition scheme never reflect the\ncontents of child tables.\n\nIn most partitioning schemes the master table is empty so even if it\ndoesn't have an index on a particular field execution would typically be\nquick. This is why #4 on the page you linked to:\n\n\"\"\"\nFor each partition, create an index on the key column(s), as well as any\nother indexes you might want. (The key index is not strictly necessary, but\nin most scenarios it is helpful. If you intend the key values to be unique\nthen you should always create a unique or primary-key constraint for each\npartition.)\n\"\"\"\n\ndoesn't say anything about creating other indexes on the master table. See\n#1 in that list for an explicit statement of this assumption.\n\nIf the master is not empty, and of considerable size, and the field being\nsearched is not indexed, then it is unsurprising that the query would take\na long time to execute when obtaining rows from the master table. If this\nis the case then you've gotten away from the expected usage of partitions\nand so need to do things that aren't in the manual to make them work.\n\nDavid J.\n\n\n\nDavid J.\n\nOn Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <[email protected]> wrote:I have naive idea that it won’t help if index is created before the data is there — i.e. indexes on master aren’t updated when data loaded to child table.Indexes on the master table of a partition scheme never reflect the contents of child tables.In most partitioning schemes the master table is empty so even if it doesn't have an index on a particular field execution would typically be quick. This is why #4 on the page you linked to:\"\"\"For each partition, create an index on the key column(s), as well as any other indexes you might want. (The key index is not strictly necessary, but in most scenarios it is helpful. If you intend the key values to be unique then you should always create a unique or primary-key constraint for each partition.)\"\"\"doesn't say anything about creating other indexes on the master table. See #1 in that list for an explicit statement of this assumption.If the master is not empty, and of considerable size, and the field being searched is not indexed, then it is unsurprising that the query would take a long time to execute when obtaining rows from the master table. If this is the case then you've gotten away from the expected usage of partitions and so need to do things that aren't in the manual to make them work.David J.David J.",
"msg_date": "Tue, 27 Dec 2016 09:19:37 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why we do not create indexes on master"
},
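Two quick checks that usually settle this (master and field are the placeholder names used in this thread):

    SELECT count(*) FROM ONLY master;   -- rows physically stored in the parent table itself
    EXPLAIN SELECT * FROM master WHERE field BETWEEN 1 AND 2;
    -- with constraint_exclusion = partition (the default) and proper CHECK constraints,
    -- the plan should show index scans on the matching children plus a scan of the parent;
    -- an index on the parent only matters for the rows counted by the first query.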
{
"msg_contents": "Thank you David,\n\nI used same rationale to convince my colleague — it didn’t work :)\nSort of “pragmatic” person who does what seems working no matter what happens tomorrow.\nSo I’m seeking for better understanding of what's happening to have other cause to convince him.\n\nLet me break it down once again. The experience is as follows:\n\n- partitioning follows the guide\n- master empty, no indexes\n- child tables have index on field “field”\n- query like\n\tSELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’\ntakes more than 100 sec\n- after that my mate adds index on “master”(“field”) — again, all data is in child tables\n- same query takes under 1sec\n\nQuestions I’d love to clarify:\n\n- Q1: is it correct that described situation happens because index created on master does account data that is already there in child?\n- Q2: is it correct that index on master created before inserting record to child tables will not take into account this record?\n- Q3: are there any other bad sides of indexes on master table?\n\nRegards,\nVal.\n\n> On Dec 27 2016, at 19:19, David G. Johnston <[email protected]> wrote:\n> \n> On Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <[email protected] <mailto:[email protected]>> wrote:\n> I have naive idea that it won’t help if index is created before the data is there — i.e. indexes on master aren’t updated when data loaded to child table.\n> \n> Indexes on the master table of a partition scheme never reflect the contents of child tables.\n> \n> In most partitioning schemes the master table is empty so even if it doesn't have an index on a particular field execution would typically be quick. This is why #4 on the page you linked to:\n> \n> \"\"\"\n> For each partition, create an index on the key column(s), as well as any other indexes you might want. (The key index is not strictly necessary, but in most scenarios it is helpful. If you intend the key values to be unique then you should always create a unique or primary-key constraint for each partition.)\n> \"\"\"\n> \n> doesn't say anything about creating other indexes on the master table. See #1 in that list for an explicit statement of this assumption.\n> \n> If the master is not empty, and of considerable size, and the field being searched is not indexed, then it is unsurprising that the query would take a long time to execute when obtaining rows from the master table. If this is the case then you've gotten away from the expected usage of partitions and so need to do things that aren't in the manual to make them work.\n> \n> David J.\n> \n> \n> \n> David J.\n> \n\n\nThank you David,I used same rationale to convince my colleague — it didn’t work :)Sort of “pragmatic” person who does what seems working no matter what happens tomorrow.So I’m seeking for better understanding of what's happening to have other cause to convince him.Let me break it down once again. 
The experience is as follows:- partitioning follows the guide- master empty, no indexes- child tables have index on field “field”- query like SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’takes more than 100 sec- after that my mate adds index on “master”(“field”) — again, all data is in child tables- same query takes under 1secQuestions I’d love to clarify:- Q1: is it correct that described situation happens because index created on master does account data that is already there in child?- Q2: is it correct that index on master created before inserting record to child tables will not take into account this record?- Q3: are there any other bad sides of indexes on master table?Regards,Val.On Dec 27 2016, at 19:19, David G. Johnston <[email protected]> wrote:On Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <[email protected]> wrote:I have naive idea that it won’t help if index is created before the data is there — i.e. indexes on master aren’t updated when data loaded to child table.Indexes on the master table of a partition scheme never reflect the contents of child tables.In most partitioning schemes the master table is empty so even if it doesn't have an index on a particular field execution would typically be quick. This is why #4 on the page you linked to:\"\"\"For each partition, create an index on the key column(s), as well as any other indexes you might want. (The key index is not strictly necessary, but in most scenarios it is helpful. If you intend the key values to be unique then you should always create a unique or primary-key constraint for each partition.)\"\"\"doesn't say anything about creating other indexes on the master table. See #1 in that list for an explicit statement of this assumption.If the master is not empty, and of considerable size, and the field being searched is not indexed, then it is unsurprising that the query would take a long time to execute when obtaining rows from the master table. If this is the case then you've gotten away from the expected usage of partitions and so need to do things that aren't in the manual to make them work.David J.David J.",
"msg_date": "Tue, 27 Dec 2016 20:38:05 +0300",
"msg_from": "Valerii Valeev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why we do not create indexes on master"
},
{
"msg_contents": "Possibly some buffer caching is happening, what happens if you then \nremove the added index and run the query again?\n\n\n\nOn 12/27/2016 10:38 AM, Valerii Valeev wrote:\n> Thank you David,\n>\n> I used same rationale to convince my colleague — it didn’t work :)\n> Sort of “pragmatic” person who does what seems working no matter what \n> happens tomorrow.\n> So I’m seeking for better understanding of what's happening to have \n> other cause to convince him.\n>\n> Let me break it down once again. The experience is as follows:\n>\n> - partitioning follows the guide\n> - master empty, no indexes\n> - child tables have index on field “field”\n> - query like\n> SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’\n> takes more than 100 sec\n> - after that my mate adds index on “master”(“field”) — again, all data \n> is in child tables\n> - same query takes under 1sec\n>\n> Questions I’d love to clarify:\n>\n> - Q1: is it correct that described situation happens because index \n> created on master does account data that is already there in child?\n> - Q2: is it correct that index on master created before inserting \n> record to child tables will not take into account this record?\n> - Q3: are there any other bad sides of indexes on master table?\n>\n> Regards,\n> Val.\n>\n>> On Dec 27 2016, at 19:19, David G. Johnston \n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> On Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev \n>> <[email protected] <mailto:[email protected]>>wrote:\n>>\n>> I have naive idea that it won’t help if index is created before\n>> the data is there — i.e. indexes on master aren’t updated when\n>> data loaded to child table.\n>>\n>>\n>> Indexes on the master table of a partition scheme never reflect the \n>> contents of child tables.\n>>\n>> In most partitioning schemes the master table is empty so even if it \n>> doesn't have an index on a particular field execution would typically \n>> be quick. This is why #4 on the page you linked to:\n>>\n>> \"\"\"\n>> For each partition, create an index on the key column(s), as well as \n>> any other indexes you might want. (The key index is not strictly \n>> necessary, but in most scenarios it is helpful. If you intend the key \n>> values to be unique then you should always create a unique or \n>> primary-key constraint for each partition.)\n>> \"\"\"\n>>\n>> doesn't say anything about creating other indexes on the master \n>> table. See #1 in that list for an explicit statement of this assumption.\n>>\n>> If the master is not empty, and of considerable size, and the field \n>> being searched is not indexed, then it is unsurprising that the query \n>> would take a long time to execute when obtaining rows from the master \n>> table. If this is the case then you've gotten away from the expected \n>> usage of partitions and so need to do things that aren't in the \n>> manual to make them work.\n>>\n>> David J.\n>>\n>>\n>>\n>> David J.\n>>\n>\n\n\n\n\n\n\n\nPossibly some buffer caching is happening, what happens if you\n then remove the added index and run the query again?\n\n\n\nOn 12/27/2016 10:38 AM, Valerii Valeev\n wrote:\n\n\n\n Thank you David,\n \n\nI used same rationale to convince my colleague — it\n didn’t work :)\nSort of “pragmatic” person who does what seems\n working no matter what happens tomorrow.\nSo I’m seeking for better understanding of what's\n happening to have other cause to convince him.\n\n\nLet me break it down once again. 
The experience is\n as follows:\n\n\n- partitioning follows the guide\n- master empty, no indexes\n- child tables have index on field “field”\n- query like\n SELECT\n * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’\ntakes more than 100 sec\n- after that my mate adds index on “master”(“field”)\n — again, all data is in child tables\n- same query takes under 1sec\n\n\nQuestions I’d love to clarify:\n\n\n- Q1: is it correct that described situation happens\n because index created on master does account data that is\n already there in child?\n- Q2: is it correct that index on master created\n before inserting record to child tables will not take into\n account this record?\n- Q3: are there any other bad sides of indexes on\n master table?\n\n\nRegards,\nVal.\n\n\n\n\n\nOn Dec 27 2016, at 19:19, David G. Johnston\n <[email protected]>\n wrote:\n\n\n\nOn\n Tue, Dec 27, 2016 at 8:22 AM, Valerii Valeev <[email protected]>\n wrote:\n\n\n\n\n\nI have naive idea that it won’t\n help if index is created before the data is\n there — i.e. indexes on master aren’t updated\n when data loaded to child table.\n\n\n\n\nIndexes\n on the master table of a partition scheme never\n reflect the contents of child tables.\n\n\nIn\n most partitioning schemes the master table is\n empty so even if it doesn't have an index on a\n particular field execution would typically be\n quick. This is why #4 on the page you linked to:\n\n\n\"\"\"\nFor each\n partition, create an index on the key column(s),\n as well as any other indexes you might want.\n (The key index is not strictly necessary, but in\n most scenarios it is helpful. If you intend the\n key values to be unique then you should always\n create a unique or primary-key constraint for\n each partition.)\n\n\"\"\"\n\n\ndoesn't say\n anything about creating other indexes on the\n master table. See #1 in that list for an\n explicit statement of this assumption.\n\n\nIf\n the master is not empty, and of considerable size,\n and the field being searched is not indexed, then\n it is unsurprising that the query would take a\n long time to execute when obtaining rows from the\n master table. If this is the case then you've\n gotten away from the expected usage of partitions\n and so need to do things that aren't in the manual\n to make them work.\n\n\nDavid\n J.\n\n\n\n\n\n\nDavid\n J.",
"msg_date": "Tue, 27 Dec 2016 10:43:24 -0700",
"msg_from": "ProPAAS DBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why we do not create indexes on master"
},
{
"msg_contents": "On Tue, Dec 27, 2016 at 10:38 AM, Valerii Valeev <[email protected]>\nwrote:\n\n> Thank you David,\n>\n> I used same rationale to convince my colleague — it didn’t work :)\n> Sort of “pragmatic” person who does what seems working no matter what\n> happens tomorrow.\n> So I’m seeking for better understanding of what's happening to have other\n> cause to convince him.\n>\n> Let me break it down once again. The experience is as follows:\n>\n> - partitioning follows the guide\n>\n\nOnly somewhat helpful...\n\n\n> - master empty, no indexes\n> - child tables have index on field “field”\n> - query like\n> SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’\n> takes more than 100 sec\n>\n\nAll retrieved data now exists in cache/buffers...\n\n\n> - after that my mate adds index on “master”(“field”) — again, all data is\n> in child tables\n> - same query takes under 1sec\n>\n\nAs Andreas said if you really want to explore what is happening here you\nneed to use EXPLAIN ANALYZE.\n\nGiven the flow described above I/O retrieval performance differences, or\nthe attempt to query the table kicking off an ANALYZE, seems like possible\ncontributing factors.\n\n\n> Questions I’d love to clarify:\n>\n> - Q1: is it correct that described situation happens because index created\n> on master does account data that is already there in child?\n>\n\nNo\n\n\n> - Q2: is it correct that index on master created before inserting record\n> to child tables will not take into account this record?\n>\n\nYes\n\n\n> - Q3: are there any other bad sides of indexes on master table?\n>\n\nNo\n\nDavid J.\n\nOn Tue, Dec 27, 2016 at 10:38 AM, Valerii Valeev <[email protected]> wrote:Thank you David,I used same rationale to convince my colleague — it didn’t work :)Sort of “pragmatic” person who does what seems working no matter what happens tomorrow.So I’m seeking for better understanding of what's happening to have other cause to convince him.Let me break it down once again. The experience is as follows:- partitioning follows the guideOnly somewhat helpful...- master empty, no indexes- child tables have index on field “field”- query like SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’takes more than 100 secAll retrieved data now exists in cache/buffers... - after that my mate adds index on “master”(“field”) — again, all data is in child tables- same query takes under 1secAs Andreas said if you really want to explore what is happening here you need to use EXPLAIN ANALYZE.Given the flow described above I/O retrieval performance differences, or the attempt to query the table kicking off an ANALYZE, seems like possible contributing factors.Questions I’d love to clarify:- Q1: is it correct that described situation happens because index created on master does account data that is already there in child? No- Q2: is it correct that index on master created before inserting record to child tables will not take into account this record? Yes- Q3: are there any other bad sides of indexes on master table?NoDavid J.",
"msg_date": "Tue, 27 Dec 2016 10:48:26 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why we do not create indexes on master"
},
{
"msg_contents": "David,\n\nthanks a lot for the comments and for clarity. As I already responded to Andreas, I’m going to get some test data and try to investigate myself.\nThought maybe I’m missing some common knowledge, that’s why asked here before taking deeper look.\n\nRegards,\nVal.\n \n> On Dec 27 2016, at 20:48, David G. Johnston <[email protected]> wrote:\n> \n> On Tue, Dec 27, 2016 at 10:38 AM, Valerii Valeev <[email protected] <mailto:[email protected]>> wrote:\n> Thank you David,\n> \n> I used same rationale to convince my colleague — it didn’t work :)\n> Sort of “pragmatic” person who does what seems working no matter what happens tomorrow.\n> So I’m seeking for better understanding of what's happening to have other cause to convince him.\n> \n> Let me break it down once again. The experience is as follows:\n> \n> - partitioning follows the guide\n> \n> Only somewhat helpful...\n> \n> - master empty, no indexes\n> - child tables have index on field “field”\n> - query like\n> \tSELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’\n> takes more than 100 sec\n> \n> All retrieved data now exists in cache/buffers...\n> \n> - after that my mate adds index on “master”(“field”) — again, all data is in child tables\n> - same query takes under 1sec\n> \n> As Andreas said if you really want to explore what is happening here you need to use EXPLAIN ANALYZE.\n> \n> Given the flow described above I/O retrieval performance differences, or the attempt to query the table kicking off an ANALYZE, seems like possible contributing factors.\n> \n> \n> Questions I’d love to clarify:\n> \n> - Q1: is it correct that described situation happens because index created on master does account data that is already there in child? \n> \n> No\n> \n> - Q2: is it correct that index on master created before inserting record to child tables will not take into account this record? \n> \n> Yes\n> \n> - Q3: are there any other bad sides of indexes on master table?\n> \n> No\n> \n> David J.\n> \n\n\nDavid,thanks a lot for the comments and for clarity. As I already responded to Andreas, I’m going to get some test data and try to investigate myself.Thought maybe I’m missing some common knowledge, that’s why asked here before taking deeper look.Regards,Val. On Dec 27 2016, at 20:48, David G. Johnston <[email protected]> wrote:On Tue, Dec 27, 2016 at 10:38 AM, Valerii Valeev <[email protected]> wrote:Thank you David,I used same rationale to convince my colleague — it didn’t work :)Sort of “pragmatic” person who does what seems working no matter what happens tomorrow.So I’m seeking for better understanding of what's happening to have other cause to convince him.Let me break it down once again. The experience is as follows:- partitioning follows the guideOnly somewhat helpful...- master empty, no indexes- child tables have index on field “field”- query like SELECT * FROM “master” WHERE “field” BETWEEN ‘1' AND ‘2’takes more than 100 secAll retrieved data now exists in cache/buffers... 
- after that my mate adds index on “master”(“field”) — again, all data is in child tables- same query takes under 1secAs Andreas said if you really want to explore what is happening here you need to use EXPLAIN ANALYZE.Given the flow described above I/O retrieval performance differences, or the attempt to query the table kicking off an ANALYZE, seems like possible contributing factors.Questions I’d love to clarify:- Q1: is it correct that described situation happens because index created on master does account data that is already there in child? No- Q2: is it correct that index on master created before inserting record to child tables will not take into account this record? Yes- Q3: are there any other bad sides of indexes on master table?NoDavid J.",
"msg_date": "Wed, 28 Dec 2016 03:02:55 +0300",
"msg_from": "Valerii Valeev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why we do not create indexes on master"
}
] |
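To separate a genuine plan change from the cache-warming effect suggested earlier in the thread, compare two consecutive runs (a sketch, using the same placeholder names):

    EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM master WHERE field BETWEEN 1 AND 2;
    -- if the second run is faster mainly because "read" buffers turned into "hit" buffers,
    -- the speedup is caching; if the plan itself changed (Seq Scan -> Index Scan on the
    -- children), the new index or fresher statistics are the real cause.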
[
{
"msg_contents": "Hi there, fellow experts!\n\nI need an advice with query that became slower after 9.3 to 9.6 migration.\n\nFirst of all, I'm from the dev team.\n\nBefore migration, we (programmers) made some modifications on query bring\nit's average time from 8s to 2-3s.\n\nAs this query is the most executed on our system (it builds the user panel\nto work), every bit that we can squeeze from it will be nice.\n\nNow, after server migration to 9.6 we're experiencing bad times with this\nquery again.\n\nUnfortunately, I don't have the old query plain (9.3 version) to show you,\nbut in the actual version (9.6) I can see some buffers written that tells\nme that something is wrong.\n\nOur server has 250GB of memory available, but the database team says that\nthey can't do nothing to make this query better. I'm not sure, as some\nbuffers are written on disk.\n\nAny tip/help will be much appreciated (even from the query side).\n\nThank you!\n\nThe query plan: https://explain.depesz.com/s/5KMn\n\nNote: I tried to add index on kilo_victor table already, but Postgresql\nstill thinks that is better to do a seq scan.\n\n\nFlávio Henrique\n\nHi there, fellow experts!I need an advice with query that became slower after 9.3 to 9.6 migration.First of all, I'm from the dev team.Before migration, we (programmers) made some modifications on query bring it's average time from 8s to 2-3s.As this query is the most executed on our system (it builds the user panel to work), every bit that we can squeeze from it will be nice.Now, after server migration to 9.6 we're experiencing bad times with this query again.Unfortunately, I don't have the old query plain (9.3 version) to show you, but in the actual version (9.6) I can see some buffers written that tells me that something is wrong.Our server has 250GB of memory available, but the database team says that they can't do nothing to make this query better. I'm not sure, as some buffers are written on disk.Any tip/help will be much appreciated (even from the query side).Thank you!The query plan: https://explain.depesz.com/s/5KMnNote: I tried to add index on kilo_victor table already, but Postgresql still thinks that is better to do a seq scan.Flávio Henrique",
"msg_date": "Tue, 27 Dec 2016 21:50:05 -0200",
"msg_from": "=?UTF-8?Q?Fl=C3=A1vio_Henrique?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query after 9.3 to 9.6 migration"
},
{
"msg_contents": "> \n> Hi there, fellow experts!\n> \n> \n> I need an advice with query that became slower after 9.3 to 9.6\n> migration.\n> \n> \n> First of all, I'm from the dev team.\n> \n> \n> Before migration, we (programmers) made some modifications on query\n> bring it's average time from 8s to 2-3s.\n> \n> \n> As this query is the most executed on our system (it builds the user\n> panel to work), every bit that we can squeeze from it will be nice.\n> \n> \n> Now, after server migration to 9.6 we're experiencing bad times with\n> this query again.\n> \n> \n> Unfortunately, I don't have the old query plain (9.3 version) to show\n> you, but in the actual version (9.6) I can see some buffers written\n> that tells me that something is wrong.\n> \n> \n> Our server has 250GB of memory available, but the database team says\n> that they can't do nothing to make this query better. I'm not sure,\n> as some buffers are written on disk.\n> \n> \n> Any tip/help will be much appreciated (even from the query side).\n> \n> \n> Thank you!\n> \n> \n> The query plan: https://explain.depesz.com/s/5KMn\n> \n> \n> Note: I tried to add index on kilo_victor table already, but\n> Postgresql still thinks that is better to do a seq scan.\n> \n> \n\nI dont know about the data distribution in kilo_victor, but maybe a partial index\nON kilo_victor (juliet_romeo) where not xray_seven\n?\n\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Dec 2016 23:52:30 -0300 (ART)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
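A minimal sketch of the partial index Gerardo suggests, assuming kilo_victor.xray_seven is a boolean column and juliet_romeo is the filtered column (all names come from the anonymized plan, so the types and the literal in the test query are assumptions):

    -- only rows with xray_seven = false end up in the index, keeping it small;
    -- the planner can use it whenever the query repeats the WHERE NOT xray_seven predicate
    CREATE INDEX CONCURRENTLY kilo_victor_jr_partial_idx
        ON kilo_victor (juliet_romeo)
        WHERE NOT xray_seven;

    -- check that the filtered query now uses it (the literal 42 is a placeholder)
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM kilo_victor
     WHERE juliet_romeo = 42 AND NOT xray_seven;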
{
"msg_contents": "The biggest impact on performance you can achieve is by using a materialized view. if it’s so heavily used as you said, even 2-3 seconds in a multiuser OLTP environment still unacceptable under my point of view. I don’t know if this is the case but if you have 1000 users connecting at 8 am all at the same time … it will freeze the app for a while ..\n\nAsk your self: how old data can be? and take into account that you can refresh the materialized view as often as you want, even every 10 secs if you want.\n\nBeides this, there there's still some room for improvement. Perhaps you have not created the right index to avoid seq scans. Have a look at indexes on expressions.\n\nOn systems side: ask them if they have not changed anything in effective_cache_size and shared_buffers parameters, I presume they haven’t change anything related to costs.\n\nRegards.\n\nDaniel Blanch.\n\n\n> El 28 dic 2016, a las 0:50, Flávio Henrique <[email protected]> escribió:\n> \n> Hi there, fellow experts!\n> \n> I need an advice with query that became slower after 9.3 to 9.6 migration.\n> \n> First of all, I'm from the dev team.\n> \n> Before migration, we (programmers) made some modifications on query bring it's average time from 8s to 2-3s.\n> \n> As this query is the most executed on our system (it builds the user panel to work), every bit that we can squeeze from it will be nice.\n> \n> Now, after server migration to 9.6 we're experiencing bad times with this query again.\n> \n> Unfortunately, I don't have the old query plain (9.3 version) to show you, but in the actual version (9.6) I can see some buffers written that tells me that something is wrong.\n> \n> Our server has 250GB of memory available, but the database team says that they can't do nothing to make this query better. I'm not sure, as some buffers are written on disk.\n> \n> Any tip/help will be much appreciated (even from the query side).\n> \n> Thank you!\n> \n> The query plan: https://explain.depesz.com/s/5KMn <https://explain.depesz.com/s/5KMn>\n> \n> Note: I tried to add index on kilo_victor table already, but Postgresql still thinks that is better to do a seq scan.\n> \n> \n> Flávio Henrique\n\n\nThe biggest impact on performance you can achieve is by using a materialized view. if it’s so heavily used as you said, even 2-3 seconds in a multiuser OLTP environment still unacceptable under my point of view. I don’t know if this is the case but if you have 1000 users connecting at 8 am all at the same time … it will freeze the app for a while ..Ask your self: how old data can be? and take into account that you can refresh the materialized view as often as you want, even every 10 secs if you want.Beides this, there there's still some room for improvement. Perhaps you have not created the right index to avoid seq scans. 
Have a look at indexes on expressions.On systems side: ask them if they have not changed anything in effective_cache_size and shared_buffers parameters, I presume they haven’t change anything related to costs.Regards.Daniel Blanch.El 28 dic 2016, a las 0:50, Flávio Henrique <[email protected]> escribió:Hi there, fellow experts!I need an advice with query that became slower after 9.3 to 9.6 migration.First of all, I'm from the dev team.Before migration, we (programmers) made some modifications on query bring it's average time from 8s to 2-3s.As this query is the most executed on our system (it builds the user panel to work), every bit that we can squeeze from it will be nice.Now, after server migration to 9.6 we're experiencing bad times with this query again.Unfortunately, I don't have the old query plain (9.3 version) to show you, but in the actual version (9.6) I can see some buffers written that tells me that something is wrong.Our server has 250GB of memory available, but the database team says that they can't do nothing to make this query better. I'm not sure, as some buffers are written on disk.Any tip/help will be much appreciated (even from the query side).Thank you!The query plan: https://explain.depesz.com/s/5KMnNote: I tried to add index on kilo_victor table already, but Postgresql still thinks that is better to do a seq scan.Flávio Henrique",
"msg_date": "Wed, 28 Dec 2016 14:11:33 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
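A sketch of the materialized-view approach Daniel describes, assuming the heavy panel query can be wrapped as-is; the view name, column list and refresh cadence below are placeholders, not taken from the thread:

    -- wrap the expensive panel query once
    CREATE MATERIALIZED VIEW user_panel_mv AS
    SELECT user_id, count(*) AS pending_items   -- placeholder body: the real panel query goes here
      FROM some_heavy_join                      -- placeholder relation
     GROUP BY user_id;

    -- REFRESH ... CONCURRENTLY (non-blocking for readers) requires a unique index
    CREATE UNIQUE INDEX user_panel_mv_pk ON user_panel_mv (user_id);

    -- re-run as often as the data is allowed to be stale, e.g. from cron every 10 seconds
    REFRESH MATERIALIZED VIEW CONCURRENTLY user_panel_mv;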
{
"msg_contents": "On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected]> wrote:\n\n> I can see some buffers written that tells me\n> that something is wrong.\n\nTry running VACUUM FREEZE ANALYZE on all tables involved in the\nquery (or just run it as a superuser on the whole database). Do\n*not* use the FULL option. Among other things, this will ensure\nthat you have somewhat current statistics, and that all hint bits\nare set. (I remember my surprise the first time I converted a\ntable to PostgreSQL, ran SELECT count(*) on it to make sure all\nrows made it, saw a very long run time with disk writes as the\nbottleneck. That's when I learned about hint bits.)\n\nYou should also make sure that autovacuum is aggressive enough on\nthe new cluster. Without that, any performance benefit from the\nabove will slowly disappear.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Jan 2017 08:25:12 -0600",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
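Kevin's one-off maintenance pass, spelled out; run as a superuser for the database-wide form (or as table owner for the per-table form). This is plain VACUUM with options, not VACUUM FULL:

    -- whole database: sets hint bits, freezes old tuples and refreshes statistics
    VACUUM (FREEZE, ANALYZE);

    -- or limited to the tables involved in the slow query, e.g.
    VACUUM (FREEZE, ANALYZE) kilo_victor;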
{
"msg_contents": "On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected]> wrote:\n> Hi there, fellow experts!\n>\n> I need an advice with query that became slower after 9.3 to 9.6 migration.\n>\n> First of all, I'm from the dev team.\n>\n> Before migration, we (programmers) made some modifications on query bring\n> it's average time from 8s to 2-3s.\n>\n> As this query is the most executed on our system (it builds the user panel\n> to work), every bit that we can squeeze from it will be nice.\n>\n> Now, after server migration to 9.6 we're experiencing bad times with this\n> query again.\n>\n> Unfortunately, I don't have the old query plain (9.3 version) to show you,\n> but in the actual version (9.6) I can see some buffers written that tells me\n> that something is wrong.\n>\n> Our server has 250GB of memory available, but the database team says that\n> they can't do nothing to make this query better. I'm not sure, as some\n> buffers are written on disk.\n>\n> Any tip/help will be much appreciated (even from the query side).\n>\n> Thank you!\n>\n> The query plan: https://explain.depesz.com/s/5KMn\n>\n> Note: I tried to add index on kilo_victor table already, but Postgresql\n> still thinks that is better to do a seq scan.\n\nHard to provide more without the query or the 'old' plan. Here are\nsome things you can try:\n*) Set effective_io_concurrency high. You have some heap scanning\ngoing on and this can sometimes help (but it should be marginal).\n*) See if you can get any juice out of parallel query\n*) try playing with enable_nestloop and enable_seqscan. these are\nhail mary passes but worth a shot.\n\nRun the query back to back with same arguments in the same database\nsession. Does performance improve?\n\nBig gains (if any) are likely due to indexing strategy.\nI do see some suspicious casting, for example:\n\nJoin Filter: ((four_charlie.delta_tango)::integer =\n(six_quebec.golf_bravo)::integer)\n\nAre you casting in the query or joining through dissimilar data types?\n I suspect your database team might be incorrect.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jan 2017 08:40:59 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
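Merlin's knobs can all be tried per session before touching postgresql.conf; the values below are illustrative, and the last two are diagnostic toggles rather than settings to leave on:

    SET effective_io_concurrency = 200;          -- mainly helps bitmap heap scans on fast storage
    SET max_parallel_workers_per_gather = 4;     -- let 9.6 consider a parallel plan
    SET enable_nestloop = off;                   -- "hail mary" toggles: compare the
    SET enable_seqscan = off;                    -- resulting plans, then reset them
    -- re-run the panel query with EXPLAIN (ANALYZE, BUFFERS) and compare plans
    RESET enable_nestloop;
    RESET enable_seqscan;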
{
"msg_contents": "Hi all!\nSorry the delay (holidays).\n\nWell, the most expensive sequencial scan was solved.\nI asked the db team to drop the index and recreate it and guess what: now\npostgresql is using it and the time dropped.\n(thank you, @Gerardo Herzig!)\n\nI think there's still room for improvement, but the problem is not so\ncrucial right now.\nI'll try to investigate every help mentioned here. Thank you all.\n\n@Daniel Blanch\nI'll make some tests with a materialized view. Thank you.\n\n> On systems side: ask them if they have not changed anything in\n> effective_cache_size and shared_buffers parameters, I presume they haven’t\n> change anything related to costs.\n\nReplying your comment, I think they tunned the server:\neffective_cache_size = 196GB\nshared_buffers = 24GB (this shouldn't be higher?)\n\n@Kevin Grittner\nsorry, but I'm not sure when the autovacuum is aggressive enough, but here\nmy settings related:\nautovacuum |on\nautovacuum_analyze_scale_factor |0.05\nautovacuum_analyze_threshold |10\nautovacuum_freeze_max_age |200000000\nautovacuum_max_workers |3\nautovacuum_multixact_freeze_max_age |400000000\nautovacuum_naptime |15s\nautovacuum_vacuum_cost_delay |10ms\nautovacuum_vacuum_cost_limit |-1\nautovacuum_vacuum_scale_factor |0.1\nautovacuum_vacuum_threshold |10\nautovacuum_work_mem |-1\n\n@Merlin Moncure\n\n> Big gains (if any) are likely due to indexing strategy.\n> I do see some suspicious casting, for example:\n> Join Filter: ((four_charlie.delta_tango)::integer =\n> (six_quebec.golf_bravo)::integer)\n> Are you casting in the query or joining through dissimilar data types?\n\nNo casts in query. The joins are on same data types.\n\nThank you all for the answers. Happy 2017!\n\nFlávio Henrique\n--------------------------------------------------------\n\"There are only 10 types of people in the world: Those who understand\nbinary, and those who don't\"\n--------------------------------------------------------\n\nOn Thu, Jan 5, 2017 at 12:40 PM, Merlin Moncure <[email protected]> wrote:\n\n> On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected]>\n> wrote:\n> > Hi there, fellow experts!\n> >\n> > I need an advice with query that became slower after 9.3 to 9.6\n> migration.\n> >\n> > First of all, I'm from the dev team.\n> >\n> > Before migration, we (programmers) made some modifications on query bring\n> > it's average time from 8s to 2-3s.\n> >\n> > As this query is the most executed on our system (it builds the user\n> panel\n> > to work), every bit that we can squeeze from it will be nice.\n> >\n> > Now, after server migration to 9.6 we're experiencing bad times with this\n> > query again.\n> >\n> > Unfortunately, I don't have the old query plain (9.3 version) to show\n> you,\n> > but in the actual version (9.6) I can see some buffers written that\n> tells me\n> > that something is wrong.\n> >\n> > Our server has 250GB of memory available, but the database team says that\n> > they can't do nothing to make this query better. I'm not sure, as some\n> > buffers are written on disk.\n> >\n> > Any tip/help will be much appreciated (even from the query side).\n> >\n> > Thank you!\n> >\n> > The query plan: https://explain.depesz.com/s/5KMn\n> >\n> > Note: I tried to add index on kilo_victor table already, but Postgresql\n> > still thinks that is better to do a seq scan.\n>\n> Hard to provide more without the query or the 'old' plan. Here are\n> some things you can try:\n> *) Set effective_io_concurrency high. 
You have some heap scanning\n> going on and this can sometimes help (but it should be marginal).\n> *) See if you can get any juice out of parallel query\n> *) try playing with enable_nestloop and enable_seqscan. these are\n> hail mary passes but worth a shot.\n>\n> Run the query back to back with same arguments in the same database\n> session. Does performance improve?\n>\n> Big gains (if any) are likely due to indexing strategy.\n> I do see some suspicious casting, for example:\n>\n> Join Filter: ((four_charlie.delta_tango)::integer =\n> (six_quebec.golf_bravo)::integer)\n>\n> Are you casting in the query or joining through dissimilar data types?\n> I suspect your database team might be incorrect.\n>\n> merlin\n>\n\nHi all!Sorry the delay (holidays).Well, the most expensive sequencial scan was solved.I asked the db team to drop the index and recreate it and guess what: now postgresql is using it and the time dropped.(thank you, @Gerardo Herzig!)I think there's still room for improvement, but the problem is not so crucial right now.I'll try to investigate every help mentioned here. Thank you all.@Daniel BlanchI'll make some tests with a materialized view. Thank you.On systems side: ask them if they have not changed anything in effective_cache_size and shared_buffers parameters, I presume they haven’t change anything related to costs.Replying your comment, I think they tunned the server:effective_cache_size = 196GBshared_buffers = 24GB (this shouldn't be higher?)@Kevin Grittnersorry, but I'm not sure when the autovacuum is aggressive enough, but here my settings related:autovacuum |on autovacuum_analyze_scale_factor |0.05 autovacuum_analyze_threshold |10 autovacuum_freeze_max_age |200000000 autovacuum_max_workers |3 autovacuum_multixact_freeze_max_age |400000000 autovacuum_naptime |15s autovacuum_vacuum_cost_delay |10ms autovacuum_vacuum_cost_limit |-1 autovacuum_vacuum_scale_factor |0.1 autovacuum_vacuum_threshold |10 autovacuum_work_mem |-1 @Merlin MoncureBig gains (if any) are likely due to indexing strategy.I do see some suspicious casting, for example:Join Filter: ((four_charlie.delta_tango)::integer =(six_quebec.golf_bravo)::integer)Are you casting in the query or joining through dissimilar data types?No casts in query. The joins are on same data types. Thank you all for the answers. Happy 2017!Flávio Henrique--------------------------------------------------------\"There are only 10 types of people in the world: Those who understand binary, and those who don't\"--------------------------------------------------------\nOn Thu, Jan 5, 2017 at 12:40 PM, Merlin Moncure <[email protected]> wrote:On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected]> wrote:\n> Hi there, fellow experts!\n>\n> I need an advice with query that became slower after 9.3 to 9.6 migration.\n>\n> First of all, I'm from the dev team.\n>\n> Before migration, we (programmers) made some modifications on query bring\n> it's average time from 8s to 2-3s.\n>\n> As this query is the most executed on our system (it builds the user panel\n> to work), every bit that we can squeeze from it will be nice.\n>\n> Now, after server migration to 9.6 we're experiencing bad times with this\n> query again.\n>\n> Unfortunately, I don't have the old query plain (9.3 version) to show you,\n> but in the actual version (9.6) I can see some buffers written that tells me\n> that something is wrong.\n>\n> Our server has 250GB of memory available, but the database team says that\n> they can't do nothing to make this query better. 
I'm not sure, as some\n> buffers are written on disk.\n>\n> Any tip/help will be much appreciated (even from the query side).\n>\n> Thank you!\n>\n> The query plan: https://explain.depesz.com/s/5KMn\n>\n> Note: I tried to add index on kilo_victor table already, but Postgresql\n> still thinks that is better to do a seq scan.\n\nHard to provide more without the query or the 'old' plan. Here are\nsome things you can try:\n*) Set effective_io_concurrency high. You have some heap scanning\ngoing on and this can sometimes help (but it should be marginal).\n*) See if you can get any juice out of parallel query\n*) try playing with enable_nestloop and enable_seqscan. these are\nhail mary passes but worth a shot.\n\nRun the query back to back with same arguments in the same database\nsession. Does performance improve?\n\nBig gains (if any) are likely due to indexing strategy.\nI do see some suspicious casting, for example:\n\nJoin Filter: ((four_charlie.delta_tango)::integer =\n(six_quebec.golf_bravo)::integer)\n\nAre you casting in the query or joining through dissimilar data types?\n I suspect your database team might be incorrect.\n\nmerlin",
"msg_date": "Thu, 5 Jan 2017 14:51:41 -0200",
"msg_from": "=?UTF-8?Q?Fl=C3=A1vio_Henrique?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
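Two read-only checks that help tell whether the three autovacuum workers in Flávio's settings are keeping up (standard statistics views, safe to run at any time):

    -- autovacuum workers running right now
    SELECT pid, query_start, query
      FROM pg_stat_activity
     WHERE query LIKE 'autovacuum:%';

    -- tables with the most dead tuples, and when they were last vacuumed/analyzed
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, last_autoanalyze
      FROM pg_stat_user_tables
     ORDER BY n_dead_tup DESC
     LIMIT 20;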
{
"msg_contents": "On Thu, Jan 5, 2017 at 10:51 AM, Flávio Henrique <[email protected]> wrote:\n\n> Replying your comment, I think they tunned the server:\n> effective_cache_size = 196GB\n> shared_buffers = 24GB (this shouldn't be higher?)\n\nProbably not, although it may be a good idea to try settings either\nside of that (say, 16GB and 32GB) and monitor performance compared\nto the current setting.\n\n> autovacuum_max_workers |3\n\nIf you ever see all workers busy at the same time for 30 minutes or\nmore, you should probably consider raising that so that small,\nfrequently updated tables are not neglected for too long.\n\n> autovacuum_vacuum_cost_limit |-1\n\nThat is going to default to vacuum_cost_limit, which is usually\n200. If the server is actually big enough to merit\n\"effective_cache_size = 196GB\" then you should probably bump this\nsetting to something like 2000.\n\n> autovacuum_work_mem |-1\n\nThat is going to default to maintenance_work_mem. On a big\nmachine, you probably want that set to somewhere between 1GB and\n2GB.\n\nSome other tuning to the cost parameters might be helpful, but\nthere's not enough data on the thread to know what else to suggest.\nIf you hit some other slow query, you might want to report it in\nthe manner suggested here:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jan 2017 11:10:57 -0600",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
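The two settings Kevin calls out can be changed with ALTER SYSTEM on 9.6 and only need a reload, not a restart (the values are the ones he suggests, assuming the server really has memory to spare):

    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
    ALTER SYSTEM SET autovacuum_work_mem = '1GB';
    SELECT pg_reload_conf();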
{
"msg_contents": "Hi,\n\nIf just recreating the index now it uses it, it might mean that the index was bloated, that is, it grew so big that it was cheaper a seq scan.\n\nI’ve seen another case recently where postgres 9.6 wasn’t using the right index in a query, I was able to reproduce the issue crafting index bigger, much bigger than it should be. \n\nCan you record index size as it is now? Keep this info, and If problem happens again check indexes size, and see if they have grow too much.\n\ni.e. SELECT relname, relpages, reltuples FROM pg_class WHERE relname = ‘index_name'\n\nThis might help to see if this is the problem, that indexes are growing too much for some reason.\n\nRegards.\n\nP.S the other parameters don't seem to be the cause of the problem to me.\n\n> El 5 ene 2017, a las 17:51, Flávio Henrique <[email protected]> escribió:\n> \n> Hi all!\n> Sorry the delay (holidays).\n> \n> Well, the most expensive sequencial scan was solved.\n> I asked the db team to drop the index and recreate it and guess what: now postgresql is using it and the time dropped.\n> (thank you, @Gerardo Herzig!)\n> \n> I think there's still room for improvement, but the problem is not so crucial right now.\n> I'll try to investigate every help mentioned here. Thank you all.\n> \n> @Daniel Blanch\n> I'll make some tests with a materialized view. Thank you.\n> On systems side: ask them if they have not changed anything in effective_cache_size and shared_buffers parameters, I presume they haven’t change anything related to costs.\n> Replying your comment, I think they tunned the server:\n> effective_cache_size = 196GB\n> shared_buffers = 24GB (this shouldn't be higher?)\n> \n> @Kevin Grittner\n> sorry, but I'm not sure when the autovacuum is aggressive enough, but here my settings related:\n> autovacuum |on \n> autovacuum_analyze_scale_factor |0.05 \n> autovacuum_analyze_threshold |10 \n> autovacuum_freeze_max_age |200000000 \n> autovacuum_max_workers |3 \n> autovacuum_multixact_freeze_max_age |400000000 \n> autovacuum_naptime |15s \n> autovacuum_vacuum_cost_delay |10ms \n> autovacuum_vacuum_cost_limit |-1 \n> autovacuum_vacuum_scale_factor |0.1 \n> autovacuum_vacuum_threshold |10 \n> autovacuum_work_mem |-1 \n> \n> @Merlin Moncure\n> Big gains (if any) are likely due to indexing strategy.\n> I do see some suspicious casting, for example:\n> Join Filter: ((four_charlie.delta_tango)::integer =\n> (six_quebec.golf_bravo)::integer)\n> Are you casting in the query or joining through dissimilar data types?\n> No casts in query. The joins are on same data types. \n> \n> Thank you all for the answers. 
Happy 2017!\n> \n> Flávio Henrique\n> --------------------------------------------------------\n> \"There are only 10 types of people in the world: Those who understand binary, and those who don't\"\n> --------------------------------------------------------\n> \n> On Thu, Jan 5, 2017 at 12:40 PM, Merlin Moncure <[email protected] <mailto:[email protected]>> wrote:\n> On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected] <mailto:[email protected]>> wrote:\n> > Hi there, fellow experts!\n> >\n> > I need an advice with query that became slower after 9.3 to 9.6 migration.\n> >\n> > First of all, I'm from the dev team.\n> >\n> > Before migration, we (programmers) made some modifications on query bring\n> > it's average time from 8s to 2-3s.\n> >\n> > As this query is the most executed on our system (it builds the user panel\n> > to work), every bit that we can squeeze from it will be nice.\n> >\n> > Now, after server migration to 9.6 we're experiencing bad times with this\n> > query again.\n> >\n> > Unfortunately, I don't have the old query plain (9.3 version) to show you,\n> > but in the actual version (9.6) I can see some buffers written that tells me\n> > that something is wrong.\n> >\n> > Our server has 250GB of memory available, but the database team says that\n> > they can't do nothing to make this query better. I'm not sure, as some\n> > buffers are written on disk.\n> >\n> > Any tip/help will be much appreciated (even from the query side).\n> >\n> > Thank you!\n> >\n> > The query plan: https://explain.depesz.com/s/5KMn <https://explain.depesz.com/s/5KMn>\n> >\n> > Note: I tried to add index on kilo_victor table already, but Postgresql\n> > still thinks that is better to do a seq scan.\n> \n> Hard to provide more without the query or the 'old' plan. Here are\n> some things you can try:\n> *) Set effective_io_concurrency high. You have some heap scanning\n> going on and this can sometimes help (but it should be marginal).\n> *) See if you can get any juice out of parallel query\n> *) try playing with enable_nestloop and enable_seqscan. these are\n> hail mary passes but worth a shot.\n> \n> Run the query back to back with same arguments in the same database\n> session. Does performance improve?\n> \n> Big gains (if any) are likely due to indexing strategy.\n> I do see some suspicious casting, for example:\n> \n> Join Filter: ((four_charlie.delta_tango)::integer =\n> (six_quebec.golf_bravo)::integer)\n> \n> Are you casting in the query or joining through dissimilar data types?\n> I suspect your database team might be incorrect.\n> \n> merlin\n> \n\n\nHi,If just recreating the index now it uses it, it might mean that the index was bloated, that is, it grew so big that it was cheaper a seq scan.I’ve seen another case recently where postgres 9.6 wasn’t using the right index in a query, I was able to reproduce the issue crafting index bigger, much bigger than it should be. Can you record index size as it is now? Keep this info, and If problem happens again check indexes size, and see if they have grow too much.i.e. 
SELECT relname, relpages, reltuples FROM pg_class WHERE relname = ‘index_name'This might help to see if this is the problem, that indexes are growing too much for some reason.Regards.P.S the other parameters don't seem to be the cause of the problem to me.El 5 ene 2017, a las 17:51, Flávio Henrique <[email protected]> escribió:Hi all!Sorry the delay (holidays).Well, the most expensive sequencial scan was solved.I asked the db team to drop the index and recreate it and guess what: now postgresql is using it and the time dropped.(thank you, @Gerardo Herzig!)I think there's still room for improvement, but the problem is not so crucial right now.I'll try to investigate every help mentioned here. Thank you all.@Daniel BlanchI'll make some tests with a materialized view. Thank you.On systems side: ask them if they have not changed anything in effective_cache_size and shared_buffers parameters, I presume they haven’t change anything related to costs.Replying your comment, I think they tunned the server:effective_cache_size = 196GBshared_buffers = 24GB (this shouldn't be higher?)@Kevin Grittnersorry, but I'm not sure when the autovacuum is aggressive enough, but here my settings related:autovacuum |on autovacuum_analyze_scale_factor |0.05 autovacuum_analyze_threshold |10 autovacuum_freeze_max_age |200000000 autovacuum_max_workers |3 autovacuum_multixact_freeze_max_age |400000000 autovacuum_naptime |15s autovacuum_vacuum_cost_delay |10ms autovacuum_vacuum_cost_limit |-1 autovacuum_vacuum_scale_factor |0.1 autovacuum_vacuum_threshold |10 autovacuum_work_mem |-1 @Merlin MoncureBig gains (if any) are likely due to indexing strategy.I do see some suspicious casting, for example:Join Filter: ((four_charlie.delta_tango)::integer =(six_quebec.golf_bravo)::integer)Are you casting in the query or joining through dissimilar data types?No casts in query. The joins are on same data types. Thank you all for the answers. Happy 2017!Flávio Henrique--------------------------------------------------------\"There are only 10 types of people in the world: Those who understand binary, and those who don't\"--------------------------------------------------------\nOn Thu, Jan 5, 2017 at 12:40 PM, Merlin Moncure <[email protected]> wrote:On Tue, Dec 27, 2016 at 5:50 PM, Flávio Henrique <[email protected]> wrote:\n> Hi there, fellow experts!\n>\n> I need an advice with query that became slower after 9.3 to 9.6 migration.\n>\n> First of all, I'm from the dev team.\n>\n> Before migration, we (programmers) made some modifications on query bring\n> it's average time from 8s to 2-3s.\n>\n> As this query is the most executed on our system (it builds the user panel\n> to work), every bit that we can squeeze from it will be nice.\n>\n> Now, after server migration to 9.6 we're experiencing bad times with this\n> query again.\n>\n> Unfortunately, I don't have the old query plain (9.3 version) to show you,\n> but in the actual version (9.6) I can see some buffers written that tells me\n> that something is wrong.\n>\n> Our server has 250GB of memory available, but the database team says that\n> they can't do nothing to make this query better. I'm not sure, as some\n> buffers are written on disk.\n>\n> Any tip/help will be much appreciated (even from the query side).\n>\n> Thank you!\n>\n> The query plan: https://explain.depesz.com/s/5KMn\n>\n> Note: I tried to add index on kilo_victor table already, but Postgresql\n> still thinks that is better to do a seq scan.\n\nHard to provide more without the query or the 'old' plan. 
Here are\nsome things you can try:\n*) Set effective_io_concurrency high. You have some heap scanning\ngoing on and this can sometimes help (but it should be marginal).\n*) See if you can get any juice out of parallel query\n*) try playing with enable_nestloop and enable_seqscan. these are\nhail mary passes but worth a shot.\n\nRun the query back to back with same arguments in the same database\nsession. Does performance improve?\n\nBig gains (if any) are likely due to indexing strategy.\nI do see some suspicious casting, for example:\n\nJoin Filter: ((four_charlie.delta_tango)::integer =\n(six_quebec.golf_bravo)::integer)\n\nAre you casting in the query or joining through dissimilar data types?\n I suspect your database team might be incorrect.\n\nmerlin",
"msg_date": "Thu, 5 Jan 2017 18:51:10 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
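A slightly more direct way to record the sizes Daniel wants to track, plus the non-blocking rebuild that was effectively applied when the index was dropped and recreated (table and index names are placeholders from the anonymized plan):

    -- on-disk size of every index on the suspect table
    SELECT i.indexrelid::regclass AS index_name,
           pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
           pg_size_pretty(pg_relation_size(i.indrelid))   AS table_size
      FROM pg_index i
     WHERE i.indrelid = 'kilo_victor'::regclass;

    -- if an index bloats again, rebuild it without taking an exclusive lock
    CREATE INDEX CONCURRENTLY kilo_victor_jr_idx_new ON kilo_victor (juliet_romeo);
    DROP INDEX CONCURRENTLY kilo_victor_jr_idx;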
{
"msg_contents": "On Thu, Jan 5, 2017 at 10:51 AM, Flávio Henrique <[email protected]> wrote:\r\n> @Merlin Moncure\r\n>>\r\n>> Big gains (if any) are likely due to indexing strategy.\r\n>> I do see some suspicious casting, for example:\r\n>> Join Filter: ((four_charlie.delta_tango)::integer =\r\n>> (six_quebec.golf_bravo)::integer)\r\n>> Are you casting in the query or joining through dissimilar data types?\r\n>\r\n> No casts in query. The joins are on same data types.\r\n\r\nwell, something is going on.\r\n\r\ncreate table t(i int);\r\ncreate table t2(i int);\r\nset enable_hashjoin to false;\r\nset enable_mergejoin to false;\r\n\r\nyields:\r\n\r\npostgres=# explain select * from t join t2 on t.i = t2.i;\r\n QUERY PLAN\r\n──────────────────────────────────────────────────────────────────\r\n Nested Loop (cost=0.00..97614.88 rows=32512 width=8)\r\n Join Filter: (t.i = t2.i)\r\n -> Seq Scan on t (cost=0.00..35.50 rows=2550 width=4)\r\n -> Materialize (cost=0.00..48.25 rows=2550 width=4)\r\n -> Seq Scan on t2 (cost=0.00..35.50 rows=2550 width=4)\r\n\r\nplease note the non-casted join filter.\r\n\r\nhowever,\r\n\r\npostgres=# explain select * from t join t2 on t.i::bigint = t2.i::bigint;\r\n QUERY PLAN\r\n──────────────────────────────────────────────────────────────────\r\n Nested Loop (cost=0.00..130127.38 rows=32512 width=8)\r\n Join Filter: ((t.i)::bigint = (t2.i)::bigint)\r\n -> Seq Scan on t (cost=0.00..35.50 rows=2550 width=4)\r\n -> Materialize (cost=0.00..48.25 rows=2550 width=4)\r\n -> Seq Scan on t2 (cost=0.00..35.50 rows=2550 width=4)\r\n\r\nnotice the casts in the join filter. Furthermore, please note the\r\nhigher query cost due to the server accounting for the casting\r\ninvolved in the join. Any kind of non-equality based operation in a\r\njoin or the predicate side of a where condition can get very expensive\r\nvery quickly. (it remains difficult to see if there's any way to\r\nimprove the join operation due to lack of visibility on the query\r\nstring).\r\n\r\nmerlin\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Jan 2017 13:01:50 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
{
"msg_contents": "Can you remove me from your mailing list?\n\nThanks.\n\nCan you remove me from your mailing list?Thanks.",
"msg_date": "Thu, 5 Jan 2017 21:14:17 +0000",
"msg_from": "Filipe Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
{
"msg_contents": "On Fri, Jan 6, 2017 at 6:14 AM, Filipe Oliveira <[email protected]> wrote:\n> Can you remove me from your mailing list?\n\nThere is an unsubscribe action here:\nhttps://www.postgresql.org/community/lists/subscribe/\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jan 2017 21:51:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
{
"msg_contents": "Thank you for the reply. I had been trying to find that option for awhile\nnow.\n\nOn Fri, Jan 6, 2017 at 12:51 PM, Michael Paquier <[email protected]>\nwrote:\n\n> On Fri, Jan 6, 2017 at 6:14 AM, Filipe Oliveira <[email protected]>\n> wrote:\n> > Can you remove me from your mailing list?\n>\n> There is an unsubscribe action here:\n> https://www.postgresql.org/community/lists/subscribe/\n> --\n> Michael\n>\n\nThank you for the reply. I had been trying to find that option for awhile now.On Fri, Jan 6, 2017 at 12:51 PM, Michael Paquier <[email protected]> wrote:On Fri, Jan 6, 2017 at 6:14 AM, Filipe Oliveira <[email protected]> wrote:\n> Can you remove me from your mailing list?\n\nThere is an unsubscribe action here:\nhttps://www.postgresql.org/community/lists/subscribe/\n--\nMichael",
"msg_date": "Fri, 6 Jan 2017 18:25:54 +0000",
"msg_from": "Filipe Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
{
"msg_contents": "On Thu, Jan 5, 2017 at 9:51 AM, Daniel Blanch Bataller\n<[email protected]> wrote:\n> If just recreating the index now it uses it, it might mean that the index\n> was bloated, that is, it grew so big that it was cheaper a seq scan.\n>\n> I’ve seen another case recently where postgres 9.6 wasn’t using the right\n> index in a query, I was able to reproduce the issue crafting index bigger,\n> much bigger than it should be.\n>\n> Can you record index size as it is now? Keep this info, and If problem\n> happens again check indexes size, and see if they have grow too much.\n>\n> i.e. SELECT relname, relpages, reltuples FROM pg_class WHERE relname =\n> ‘index_name'\n>\n> This might help to see if this is the problem, that indexes are growing too\n> much for some reason.\n\nAre these unique indexes or not? Did Flavio have a workload with many UPDATEs?\n\nI ask these questions because I think it's possible that this is\nexplained by a regression in 9.5's handling of index bloat, described\nhere:\n\nhttp://postgr.es/m/CAH2-Wz=SfAKVMv1x9Jh19EJ8am8TZn9f-yECipS9HrrRqSswnA@mail.gmail.com\n\nI'm trying to track down cases where this could be an issue, to get a\nbetter sense of the problem.\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Jul 2017 14:44:03 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
},
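A catalog query that answers Peter's question about the rebuilt index — whether it is unique and how large it has grown since (again using the anonymized table name from the plan as a stand-in):

    SELECT c.relname AS index_name,
           i.indisunique,
           c.relpages,
           c.reltuples,
           pg_size_pretty(pg_relation_size(c.oid)) AS size
      FROM pg_index i
      JOIN pg_class c ON c.oid = i.indexrelid
     WHERE i.indrelid = 'kilo_victor'::regclass;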
{
"msg_contents": "On Tue, Dec 27, 2016 at 3:50 PM, Flávio Henrique <[email protected]> wrote:\n> Hi there, fellow experts!\n>\n> I need an advice with query that became slower after 9.3 to 9.6 migration.\n>\n> First of all, I'm from the dev team.\n>\n> Before migration, we (programmers) made some modifications on query bring\n> it's average time from 8s to 2-3s.\n>\n> As this query is the most executed on our system (it builds the user panel\n> to work), every bit that we can squeeze from it will be nice.\n>\n> Now, after server migration to 9.6 we're experiencing bad times with this\n> query again.\n>\n> Unfortunately, I don't have the old query plain (9.3 version) to show you,\n> but in the actual version (9.6) I can see some buffers written that tells me\n> that something is wrong.\n>\n> Our server has 250GB of memory available, but the database team says that\n> they can't do nothing to make this query better. I'm not sure, as some\n> buffers are written on disk.\n\nThe second sorts etc start spilling to disk your performance is gonna\ntank. Try increasing work_mem to something moderate like 256M to 1G.\nNote that work_mem is per sort / action, so if you got 100 users\nrunning queries with 2 or 3 sorts at a time you can exhaust memory\nreal fast. OTOH, a db with proper pooling on connections etc (aka 10\nto 20 live connections at a time) cna easily handle 1G work_mem if\nit's got 256G RAM\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Jul 2017 13:49:00 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query after 9.3 to 9.6 migration"
}
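As Scott notes, work_mem does not have to be raised globally; it can be scoped to one session, one transaction, or one application role. The values and the role name below are illustrative only:

    SET work_mem = '256MB';                       -- this session only

    BEGIN;
    SET LOCAL work_mem = '1GB';                   -- this transaction only
    -- run the panel query here
    COMMIT;

    ALTER ROLE panel_app SET work_mem = '256MB';  -- persistent, but only for this role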
] |
[
{
"msg_contents": "Hi Alex,\n\nsorry, I missed your response somehow, got it only with today’s digest.\n\nThanks for the hint. I have basic idea how to investigate query perf issues.\nI thought maybe I miss some understanding of the grounds.\nIt’ll take some time to get enough test data and I’ll try to take a look myself and come back after that.\n\nRegards,\nVal.\n\n\n> From: Andreas Kretschmer <[email protected] <mailto:[email protected]>>\n> Subject: Re: why we do not create indexes on master\n> Date: Dec 27 2016 19:04:27 GMT+3\n> To: [email protected] <mailto:[email protected]>\n> \n> \n> Valerii Valeev <[email protected] <mailto:[email protected]>> wrote:\n> \n>> Dear colleagues,\n>> \n>> can anyone please explain, why we do not create indexes on master?\n>> In my case master / child design blindly follows partitioning guide https://\n>> www.postgresql.org/docs/9.6/static/ddl-partitioning.html <http://www.postgresql.org/docs/9.6/static/ddl-partitioning.html>.\n>> My collaborator was unhappy with performance of queries over master table with\n>> filtering by one of fields\n>> \n>> SELECT * FROM “master\" WHERE “field\" BETWEEN x AND y\n>> \n>> (there are indexes for “field” on child tables).\n>> He has created index on master once and found that the query returns 100x\n>> faster.\n> \n> please show us explain analyse with/without index on master.\n> \n> \n> \n> Regards, Andreas Kretschmer\n> -- \n> Andreas Kretschmer\n> http://www.2ndQuadrant.com/ <http://www.2ndquadrant.com/>\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\nHi Alex,sorry, I missed your response somehow, got it only with today’s digest.Thanks for the hint. I have basic idea how to investigate query perf issues.I thought maybe I miss some understanding of the grounds.It’ll take some time to get enough test data and I’ll try to take a look myself and come back after that.Regards,Val.From: Andreas Kretschmer <[email protected]>Subject: Re: why we do not create indexes on masterDate: Dec 27 2016 19:04:27 GMT+3To: [email protected] Valeev <[email protected]> wrote:Dear colleagues,can anyone please explain, why we do not create indexes on master?In my case master / child design blindly follows partitioning guide https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html.My collaborator was unhappy with performance of queries over master table withfiltering by one of fieldsSELECT * FROM “master\" WHERE “field\" BETWEEN x AND y(there are indexes for “field” on child tables).He has created index on master once and found that the query returns 100xfaster.please show us explain analyse with/without index on master.Regards, Andreas Kretschmer-- Andreas Kretschmerhttp://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 28 Dec 2016 02:57:18 +0300",
"msg_from": "Valerii Valeev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: [pgsql-performance] Daily digest v1.4804 (8 messages)"
}
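Two quick checks that usually explain the behaviour Valerii reports with inheritance partitioning — rows sitting in the parent table itself, or constraint exclusion not pruning children. Table and column names are taken from his mail; the BETWEEN bounds are placeholders for x and y:

    -- does the parent hold rows of its own? ONLY excludes the children
    SELECT count(*) FROM ONLY "master";

    -- is constraint exclusion able to prune children for the range predicate?
    SHOW constraint_exclusion;      -- should be 'partition' (the default) or 'on'
    EXPLAIN ANALYZE
    SELECT * FROM "master" WHERE "field" BETWEEN 1 AND 100;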
] |
[
{
"msg_contents": "Hello there!\n\nI have an performance issue with functions and args type.\n\nTable and data:\ncreate table t1 (id serial, str char(32));\ninsert into t1 (str) select md5(s::text) from generate_series(1, 1000000)\nas s;\n\nAnd simple functions:\ncreate function f1(line text) returns void as $$\nbegin\n perform * from t1 where str = line;\nend;\n$$ language plpgsql;\n\ncreate function f2(line char) returns void as $$\nbegin\n perform * from t1 where str = line;\nend;\n$$ language plpgsql;\n\nQuery:\ntest=> explain analyze select f2('2b00042f7481c7b056c4b410d28f33cf');\n QUERY PLAN\n\n------------------------------------------------------------\n----------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=189.008..189.010\nrows=1 loops=1)\n Planning time: 0.039 ms\n Execution time: 189.039 ms\n(3 rows)\n\nTime: 189,524 ms\ntest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf');\n QUERY PLAN\n\n------------------------------------------------------------\n----------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.734..513.735\nrows=1 loops=1)\n Planning time: 0.024 ms\n Execution time: 513.757 ms\n(3 rows)\n\nTime: 514,125 ms\ntest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33\ncf'::char(32));\n QUERY PLAN\n\n------------------------------------------------------------\n----------------------------\n Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.507..513.509\nrows=1 loops=1)\n Planning time: 0.074 ms\n Execution time: 513.535 ms\n(3 rows)\n\nTime: 514,104 ms\ntest=>\n\nSeems that casting param from text to char(32) needs to be done only once\nand f1 and f2 must be identical on performance. But function f2 with text\nparam significantly slower, even with casting arg while pass it to function.\n\nTested postgresql versions 9.5.5 and 9.6.1 on Ubuntu 16.04. It's normal\nbehavior or it's can be fixed?\n\n-- \nAndrey Khozov\n\nHello there!I have an performance issue with functions and args type.Table and data:create table t1 (id serial, str char(32));insert into t1 (str) select md5(s::text) from generate_series(1, 1000000) as s;And simple functions:create function f1(line text) returns void as $$begin perform * from t1 where str = line;end;$$ language plpgsql;create function f2(line char) returns void as $$begin perform * from t1 where str = line;end;$$ language plpgsql;Query:test=> explain analyze select f2('2b00042f7481c7b056c4b410d28f33cf'); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=189.008..189.010 rows=1 loops=1) Planning time: 0.039 ms Execution time: 189.039 ms(3 rows)Time: 189,524 mstest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf'); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.734..513.735 rows=1 loops=1) Planning time: 0.024 ms Execution time: 513.757 ms(3 rows)Time: 514,125 mstest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf'::char(32)); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.507..513.509 rows=1 loops=1) Planning time: 0.074 ms Execution time: 513.535 ms(3 rows)Time: 514,104 mstest=> Seems that casting param from text to char(32) needs to be done only once and f1 and f2 must be identical on performance. 
But function f2 with text param significantly slower, even with casting arg while pass it to function.Tested postgresql versions 9.5.5 and 9.6.1 on Ubuntu 16.04. It's normal behavior or it's can be fixed?-- Andrey Khozov",
"msg_date": "Mon, 2 Jan 2017 19:34:49 +0500",
"msg_from": "=?UTF-8?B?0JDQvdC00YDQtdC5INCl0L7Qt9C+0LI=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue with castings args of the function"
},
{
"msg_contents": "Hi\n\n2017-01-02 15:34 GMT+01:00 Андрей Хозов <[email protected]>:\n\n> Hello there!\n>\n> I have an performance issue with functions and args type.\n>\n> Table and data:\n> create table t1 (id serial, str char(32));\n> insert into t1 (str) select md5(s::text) from generate_series(1, 1000000)\n> as s;\n>\n> And simple functions:\n> create function f1(line text) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n>\n> create function f2(line char) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n>\n> Query:\n> test=> explain analyze select f2('2b00042f7481c7b056c4b410d28f33cf');\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ----------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=189.008..189.010\n> rows=1 loops=1)\n> Planning time: 0.039 ms\n> Execution time: 189.039 ms\n> (3 rows)\n>\n> Time: 189,524 ms\n> test=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf');\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ----------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.734..513.735\n> rows=1 loops=1)\n> Planning time: 0.024 ms\n> Execution time: 513.757 ms\n> (3 rows)\n>\n> Time: 514,125 ms\n> test=> explain analyze select f1('2b00042f7481c7b056c4b410d2\n> 8f33cf'::char(32));\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ----------------------------\n> Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.507..513.509\n> rows=1 loops=1)\n> Planning time: 0.074 ms\n> Execution time: 513.535 ms\n> (3 rows)\n>\n\nThis explain shows nothing - you need to use nested explain\n\nlook on auto-explain\nhttps://www.postgresql.org/docs/current/static/auto-explain.html\n\nMaybe index was not used due different types.\n\nRegards\n\nPavel\n\n\n> Time: 514,104 ms\n> test=>\n> \n> Seems that casting param from text to char(32) needs to be done only once\n> and f1 and f2 must be identical on performance. But function f2 with text\n> param significantly slower, even with casting arg while pass it to function.\n>\n> Tested postgresql versions 9.5.5 and 9.6.1 on Ubuntu 16.04. 
It's normal\n> behavior or it's can be fixed?\n>\n> --\n> Andrey Khozov\n>\n\nHi2017-01-02 15:34 GMT+01:00 Андрей Хозов <[email protected]>:Hello there!I have an performance issue with functions and args type.Table and data:create table t1 (id serial, str char(32));insert into t1 (str) select md5(s::text) from generate_series(1, 1000000) as s;And simple functions:create function f1(line text) returns void as $$begin perform * from t1 where str = line;end;$$ language plpgsql;create function f2(line char) returns void as $$begin perform * from t1 where str = line;end;$$ language plpgsql;Query:test=> explain analyze select f2('2b00042f7481c7b056c4b410d28f33cf'); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=189.008..189.010 rows=1 loops=1) Planning time: 0.039 ms Execution time: 189.039 ms(3 rows)Time: 189,524 mstest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf'); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.734..513.735 rows=1 loops=1) Planning time: 0.024 ms Execution time: 513.757 ms(3 rows)Time: 514,125 mstest=> explain analyze select f1('2b00042f7481c7b056c4b410d28f33cf'::char(32)); QUERY PLAN ---------------------------------------------------------------------------------------- Result (cost=0.00..0.26 rows=1 width=0) (actual time=513.507..513.509 rows=1 loops=1) Planning time: 0.074 ms Execution time: 513.535 ms(3 rows)This explain shows nothing - you need to use nested explainlook on auto-explain https://www.postgresql.org/docs/current/static/auto-explain.htmlMaybe index was not used due different types.RegardsPavelTime: 514,104 mstest=> Seems that casting param from text to char(32) needs to be done only once and f1 and f2 must be identical on performance. But function f2 with text param significantly slower, even with casting arg while pass it to function.Tested postgresql versions 9.5.5 and 9.6.1 on Ubuntu 16.04. It's normal behavior or it's can be fixed?-- Andrey Khozov",
"msg_date": "Mon, 2 Jan 2017 16:17:14 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with castings args of the function"
},
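Pavel's auto_explain hint, spelled out for one session; the module ships with PostgreSQL, but LOAD needs superuser (or preload it via session_preload_libraries). The settings below are illustrative:

    LOAD 'auto_explain';
    SET auto_explain.log_min_duration = 0;        -- log every plan
    SET auto_explain.log_analyze = on;            -- include actual timings and row counts
    SET auto_explain.log_nested_statements = on;  -- include the query run inside the plpgsql function

    SELECT f1('2b00042f7481c7b056c4b410d28f33cf');
    -- the plan of the PERFORM inside f1() now appears in the server log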
{
"msg_contents": "=?UTF-8?B?0JDQvdC00YDQtdC5INCl0L7Qt9C+0LI=?= <[email protected]> writes:\n> create table t1 (id serial, str char(32));\n\n> create function f1(line text) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n\nThis query is specifying a text comparison (text = text operator).\nSince the table column isn't text, a char-to-text conversion must\nhappen at each line.\n\n> create function f2(line char) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n\nThis query is specifying a char(n) comparison (char = char operator).\nNo type conversion step needed, so it's faster.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 02 Jan 2017 11:36:20 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue with castings args of the function"
},
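Following Tom's explanation, f1 can be brought in line with f2 by making the comparison char = char, e.g. by casting the parameter instead of letting str be converted — a sketch against the table definition from this thread:

    create or replace function f1(line text) returns void as $$
    begin
      -- char(32) compared to char(32): no per-row conversion of t1.str to text
      perform * from t1 where str = line::char(32);
    end;
    $$ language plpgsql;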
{
"msg_contents": "Thanks all for explain!\n\nOn Mon, Jan 2, 2017 at 9:36 PM, Tom Lane <[email protected]> wrote:\n\n> =?UTF-8?B?0JDQvdC00YDQtdC5INCl0L7Qt9C+0LI=?= <[email protected]> writes:\n> > create table t1 (id serial, str char(32));\n>\n> > create function f1(line text) returns void as $$\n> > begin\n> > perform * from t1 where str = line;\n> > end;\n> > $$ language plpgsql;\n>\n> This query is specifying a text comparison (text = text operator).\n> Since the table column isn't text, a char-to-text conversion must\n> happen at each line.\n>\n> > create function f2(line char) returns void as $$\n> > begin\n> > perform * from t1 where str = line;\n> > end;\n> > $$ language plpgsql;\n>\n> This query is specifying a char(n) comparison (char = char operator).\n> No type conversion step needed, so it's faster.\n>\n> regards, tom lane\n>\n\n\n\n-- \nAndrey Khozov\n\nThanks all for explain!On Mon, Jan 2, 2017 at 9:36 PM, Tom Lane <[email protected]> wrote:=?UTF-8?B?0JDQvdC00YDQtdC5INCl0L7Qt9C+0LI=?= <[email protected]> writes:\n> create table t1 (id serial, str char(32));\n\n> create function f1(line text) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n\nThis query is specifying a text comparison (text = text operator).\nSince the table column isn't text, a char-to-text conversion must\nhappen at each line.\n\n> create function f2(line char) returns void as $$\n> begin\n> perform * from t1 where str = line;\n> end;\n> $$ language plpgsql;\n\nThis query is specifying a char(n) comparison (char = char operator).\nNo type conversion step needed, so it's faster.\n\n regards, tom lane\n-- Andrey Khozov",
"msg_date": "Mon, 2 Jan 2017 21:52:37 +0500",
"msg_from": "=?UTF-8?B?0JDQvdC00YDQtdC5INCl0L7Qt9C+0LI=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue with castings args of the function"
}
] |
[
{
"msg_contents": "Hi, \n\ntoday i head a issue with pgsql 9.3, a lot of clients complained that they are unable to connect to the server. \nwhen i have checked the logs i sow a lot of \nFATAL: canceling authentication due to timeout \n\nserver configuration is: \nUbuntu 16.04 \n128G RAM \nRAID 10 4X 2T HDD \nIntel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHz \n\nclients are connecting remotely to the server, maybe i have misconfigured someting in postgresql setup. \n\npgsql config: \ndata_directory = '/var/lib/postgresql/9.3/main' # use data in another directory \nhba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication file \nident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration file \nexternal_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID file \nlisten_addresses = '*' # what IP address(es) to listen on; \nport = 5432 # (change requires restart) \nmax_connections = 1000 # (change requires restart) \nunix_socket_directories = '/var/run/postgresql' # comma-separated list of directories \nssl = on # (change requires restart) \nssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers \nssl_cert_file = 'server.crt' # (change requires restart) \nssl_key_file = 'server.key' # (change requires restart) \npassword_encryption = on \nmax_files_per_process = 5000 # min 25 \n# (change requires restart) \nvacuum_cost_delay = 20 # 0-100 milliseconds \nvacuum_cost_page_hit = 1 # 0-10000 credits \nvacuum_cost_page_miss = 10 # 0-10000 credits \nvacuum_cost_page_dirty = 20 # 0-10000 credits \nvacuum_cost_limit = 200 # 1-10000 credits \n# (change requires restart) \nfsync = on # turns forced synchronization on or off \nsynchronous_commit = on # synchronization level; \n# off, local, remote_write, or on \nwal_sync_method = fsync # the default is the first option \n\nfull_page_writes = on # recover from partial page writes \nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers \n# (change requires restart) \nwal_writer_delay = 200ms # 1-10000 milliseconds \ncommit_delay = 0 # range 0-100000, in microseconds \ncommit_siblings = 5 # range 1-1000 \ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 15min # range 30s-1h \ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 \ncheckpoint_warning = 30s # 0 disables \n\nlog_checkpoints = on \nlog_line_prefix = '%t [%p-%l] %q%u@%d ' # special values: \n\ntrack_activities = on \ntrack_counts = on \nstats_temp_directory = '/var/run/postgresql/9.3-main.pg_stat_tmp' \nautovacuum = on # Enable autovacuum subprocess? 'on' \n# requires track_counts to also be on. 
\nlog_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and \nautovacuum_max_workers = 5 # max number of autovacuum subprocesses \n# (change requires restart) \nautovacuum_naptime = 5min # time between autovacuum runs \nautovacuum_vacuum_threshold = 500 # min number of row updates before \n# vacuum \nautovacuum_analyze_threshold = 500 # min number of row updates before \n# analyze \nautovacuum_vacuum_scale_factor = 0.4 # fraction of table size before vacuum \nautovacuum_analyze_scale_factor = 0.2 # fraction of table size before analyze \n\nautovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for \n\ndatestyle = 'iso, mdy' \ntimezone = 'Europe/Bucharest' \n\nclient_encoding = sql_ascii # actually, defaults to database \n# encoding \nlc_messages = 'C' # locale for system error message \n# strings \nlc_monetary = 'C' # locale for monetary formatting \nlc_numeric = 'C' # locale for number formatting \nlc_time = 'C' # locale for time formatting \ndefault_text_search_config = 'pg_catalog.english' \n\ndefault_statistics_target = 100 # pgtune wizard 2016-12-11 \nmaintenance_work_mem = 1GB # pgtune wizard 2016-12-11 \nconstraint_exclusion = on # pgtune wizard 2016-12-11 \neffective_cache_size = 88GB # pgtune wizard 2016-12-11 \nwork_mem = 64MB # pgtune wizard 2016-12-11 \nwal_buffers = 32MB # pgtune wizard 2016-12-11 \nshared_buffers = 30GB # pgtune wizard 2016-12-11 \n\nThanks, \n\n\nHi,today i head a issue with pgsql 9.3, a lot of clients complained that they are unable to connect to the server.when i have checked the logs i sow a lot of FATAL: canceling authentication due to timeoutserver configuration is:Ubuntu 16.04128G RAMRAID 10 4X 2T HDDIntel(R) Xeon(R) CPU E5-1650 v2 @ 3.50GHzclients are connecting remotely to the server, maybe i have misconfigured someting in postgresql setup.pgsql config:data_directory = '/var/lib/postgresql/9.3/main' # use data in another directoryhba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication fileident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration fileexternal_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID filelisten_addresses = '*' # what IP address(es) to listen on;port = 5432 # (change requires restart)max_connections = 1000 # (change requires restart)unix_socket_directories = '/var/run/postgresql' # comma-separated list of directoriesssl = on # (change requires restart)ssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphersssl_cert_file = 'server.crt' # (change requires restart)ssl_key_file = 'server.key' # (change requires restart)password_encryption = onmax_files_per_process = 5000 # min 25 # (change requires restart)vacuum_cost_delay = 20 # 0-100 millisecondsvacuum_cost_page_hit = 1 # 0-10000 creditsvacuum_cost_page_miss = 10 # 0-10000 creditsvacuum_cost_page_dirty = 20 # 0-10000 creditsvacuum_cost_limit = 200 # 1-10000 credits # (change requires restart)fsync = on # turns forced synchronization on or offsynchronous_commit = on # synchronization level; # off, local, remote_write, or onwal_sync_method = fsync # the default is the first optionfull_page_writes = on # recover from partial page writeswal_buffers = -1 # min 32kB, -1 sets based on shared_buffers # (change requires restart)wal_writer_delay = 200ms # 1-10000 millisecondscommit_delay = 0 # range 0-100000, in microsecondscommit_siblings = 5 # range 1-1000checkpoint_segments = 64 # in logfile segments, min 1, 16MB eachcheckpoint_timeout = 15min # range 30s-1hcheckpoint_completion_target = 0.9 # 
checkpoint target duration, 0.0 - 1.0checkpoint_warning = 30s # 0 disables log_checkpoints = onlog_line_prefix = '%t [%p-%l] %q%u@%d ' # special values: track_activities = ontrack_counts = onstats_temp_directory = '/var/run/postgresql/9.3-main.pg_stat_tmp'autovacuum = on # Enable autovacuum subprocess? 'on' # requires track_counts to also be on.log_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions andautovacuum_max_workers = 5 # max number of autovacuum subprocesses # (change requires restart)autovacuum_naptime = 5min # time between autovacuum runsautovacuum_vacuum_threshold = 500 # min number of row updates before # vacuumautovacuum_analyze_threshold = 500 # min number of row updates before # analyzeautovacuum_vacuum_scale_factor = 0.4 # fraction of table size before vacuumautovacuum_analyze_scale_factor = 0.2 # fraction of table size before analyzeautovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay fordatestyle = 'iso, mdy'timezone = 'Europe/Bucharest'client_encoding = sql_ascii # actually, defaults to database # encodinglc_messages = 'C' # locale for system error message # stringslc_monetary = 'C' # locale for monetary formattinglc_numeric = 'C' # locale for number formattinglc_time = 'C' # locale for time formattingdefault_text_search_config = 'pg_catalog.english'default_statistics_target = 100 # pgtune wizard 2016-12-11maintenance_work_mem = 1GB # pgtune wizard 2016-12-11constraint_exclusion = on # pgtune wizard 2016-12-11effective_cache_size = 88GB # pgtune wizard 2016-12-11work_mem = 64MB # pgtune wizard 2016-12-11wal_buffers = 32MB # pgtune wizard 2016-12-11shared_buffers = 30GB # pgtune wizard 2016-12-11Thanks,",
"msg_date": "Tue, 3 Jan 2017 11:00:44 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unable to connect to server"
},
{
"msg_contents": "Vucomir Ianculov <[email protected]> wrote:\n\n> Hi,\n> \n> today i head a issue with pgsql 9.3, a lot of clients complained that they are\n> unable to connect to the server.\n> when i have checked the logs i sow a lot of\n> FATAL: canceling authentication due to timeout\n\nyou can increase authentication_timeout, default is 1 minute. How many\nclients do you have, do you really needs 1000 connections? If yes,\nplease consider a connection pooler like pgbouncer.\n\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Jan 2017 10:12:23 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to connect to server"
},
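A quick way to check the connection pressure and the timeout Andreas mentions is from psql itself. This is only a sketch: the queries below use catalog views and functions that exist in 9.3, but since ALTER SYSTEM only appears in 9.4, on this server authentication_timeout would have to be raised in postgresql.conf before the reload.

    -- how many backends exist right now, grouped by state
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;

    -- the timeout that produces "canceling authentication due to timeout"
    SHOW authentication_timeout;

    -- after raising authentication_timeout in postgresql.conf,
    -- pick up the change without a restart
    SELECT pg_reload_conf();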
{
"msg_contents": "HI Andreas, \n\nthanks for your replay, i have checked connection number for the last 3 mouths and maximum number of connection was 300, so i can reduce this is it's cousin the issue. \nduring this error server was 99% idle, the only thing is that there was a pg_dump running on one of the DB, can this cause this error, also i have restarted pg server 2 days before because of server cert update. \n\n\nThanks. \n\n\n----- Original Message -----\n\nFrom: \"Andreas Kretschmer\" <[email protected]> \nTo: [email protected] \nSent: Wednesday, January 4, 2017 11:12:23 AM \nSubject: Re: [PERFORM] Unable to connect to server \n\nVucomir Ianculov <[email protected]> wrote: \n\n> Hi, \n> \n> today i head a issue with pgsql 9.3, a lot of clients complained that they are \n> unable to connect to the server. \n> when i have checked the logs i sow a lot of \n> FATAL: canceling authentication due to timeout \n\nyou can increase authentication_timeout, default is 1 minute. How many \nclients do you have, do you really needs 1000 connections? If yes, \nplease consider a connection pooler like pgbouncer. \n\n\n\nRegards, Andreas Kretschmer \n-- \nAndreas Kretschmer \nhttp://www.2ndQuadrant.com/ \nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n-- \nSent via pgsql-performance mailing list ([email protected]) \nTo make changes to your subscription: \nhttp://www.postgresql.org/mailpref/pgsql-performance \n\n\nHI Andreas,thanks for your replay, i have checked connection number for the last 3 mouths and maximum number of connection was 300, so i can reduce this is it's cousin the issue.during this error server was 99% idle, the only thing is that there was a pg_dump running on one of the DB, can this cause this error, also i have restarted pg server 2 days before because of server cert update.Thanks.From: \"Andreas Kretschmer\" <[email protected]>To: [email protected]: Wednesday, January 4, 2017 11:12:23 AMSubject: Re: [PERFORM] Unable to connect to serverVucomir Ianculov <[email protected]> wrote:> Hi,> > today i head a issue with pgsql 9.3, a lot of clients complained that they are> unable to connect to the server.> when i have checked the logs i sow a lot of> FATAL: canceling authentication due to timeoutyou can increase authentication_timeout, default is 1 minute. How manyclients do you have, do you really needs 1000 connections? If yes,please consider a connection pooler like pgbouncer.Regards, Andreas Kretschmer-- Andreas Kretschmerhttp://www.2ndQuadrant.com/PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 4 Jan 2017 12:03:00 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unable to connect to server"
},
{
"msg_contents": "Vucomir Ianculov <[email protected]> wrote:\n\n> HI Andreas,\n> \n> thanks for your replay, i have checked connection number for the last 3 mouths\n> and maximum number of connection was 300, so i can reduce this is it's cousin\n> the issue.\n> during this error server was 99% idle, the only thing is that there was a\n> pg_dump running on one of the DB, can this cause this error, also i have\n\nno.\n\n\nRegards, Andreas Kretschmer\n-- \nAndreas Kretschmer\nhttp://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 4 Jan 2017 12:12:20 +0100",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unable to connect to server"
}
] |
[
{
"msg_contents": "Hello,\n\nI'm investigating options for an environment which has about a dozen\nservers and several dozen databases on each, and they occasionally need to\nrun huge reports which slow down other services. This is of course \"legacy\ncode\". After some discussion, the idea is to offload these reports to\nseparate servers - and that would be fairly straightforward if not for the\nfact that the report code creates temp tables which are not allowed on\nread-only hot standby replicas.\n\nSo, the next best thing would be to fiddle with the storage system and make\nlightweight snapshots of live database clusters (their storage volumes) and\nmount them on the reporting servers when needed for the reports. This is a\nbit messy :-)\n\nI'm basically fishing for ideas. Are there any other options available\nwhich would offer fast replication-like behaviour ?\n\nIf not, what practices would minimise problems with the storage snapshots\nidea? Any filesystem options?\n\nHello,I'm investigating options for an environment which has about a dozen servers and several dozen databases on each, and they occasionally need to run huge reports which slow down other services. This is of course \"legacy code\". After some discussion, the idea is to offload these reports to separate servers - and that would be fairly straightforward if not for the fact that the report code creates temp tables which are not allowed on read-only hot standby replicas.So, the next best thing would be to fiddle with the storage system and make lightweight snapshots of live database clusters (their storage volumes) and mount them on the reporting servers when needed for the reports. This is a bit messy :-)I'm basically fishing for ideas. Are there any other options available which would offer fast replication-like behaviour ?If not, what practices would minimise problems with the storage snapshots idea? Any filesystem options?",
"msg_date": "Fri, 6 Jan 2017 20:24:51 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sort-of replication for reporting purposes"
},
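For context on the restriction Ivan describes, this is roughly the kind of statement the legacy report code trips over on a hot standby. The table and column names are invented for the sketch, and the exact error wording may differ slightly by version; the point is that standby sessions are forced read-only.

    -- on a hot standby replica this fails with an error along the lines of
    -- "cannot execute CREATE TABLE AS in a read-only transaction"
    CREATE TEMP TABLE report_scratch AS
    SELECT customer_id, sum(amount) AS total
    FROM orders
    GROUP BY customer_id;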
{
"msg_contents": "On Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras <[email protected]> wrote:\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen servers\n> and several dozen databases on each, and they occasionally need to run huge\n> reports which slow down other services. This is of course \"legacy code\".\n> After some discussion, the idea is to offload these reports to separate\n> servers - and that would be fairly straightforward if not for the fact that\n> the report code creates temp tables which are not allowed on read-only hot\n> standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and make\n> lightweight snapshots of live database clusters (their storage volumes) and\n> mount them on the reporting servers when needed for the reports. This is a\n> bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available which\n> would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage snapshots\n> idea? Any filesystem options?\n\nI've always solved this with slony replication, but pg_basebackup\nshould be pretty good for making sort of up to date slave copies. Just\ntoss a recovery.conf file and touch whatever failover file the slave\nexpects etc.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Jan 2017 12:30:30 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": "On 6 Jan 2017 8:30 p.m., \"Scott Marlowe\" <[email protected]> wrote:\n\nOn Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras <[email protected]> wrote:\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen\nservers\n> and several dozen databases on each, and they occasionally need to run\nhuge\n> reports which slow down other services. This is of course \"legacy code\".\n> After some discussion, the idea is to offload these reports to separate\n> servers - and that would be fairly straightforward if not for the fact\nthat\n> the report code creates temp tables which are not allowed on read-only hot\n> standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and\nmake\n> lightweight snapshots of live database clusters (their storage volumes)\nand\n> mount them on the reporting servers when needed for the reports. This is a\n> bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available\nwhich\n> would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage snapshots\n> idea? Any filesystem options?\n\nI've always solved this with slony replication, but pg_basebackup\nshould be pretty good for making sort of up to date slave copies. Just\ntoss a recovery.conf file and touch whatever failover file the slave\nexpects etc.\n\n\nI forgot to add one more information, the databases are 50G+ each so doing\nthe base backup on demand over the network is not a great option.\n\nOn 6 Jan 2017 8:30 p.m., \"Scott Marlowe\" <[email protected]> wrote:On Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras <[email protected]> wrote:\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen servers\n> and several dozen databases on each, and they occasionally need to run huge\n> reports which slow down other services. This is of course \"legacy code\".\n> After some discussion, the idea is to offload these reports to separate\n> servers - and that would be fairly straightforward if not for the fact that\n> the report code creates temp tables which are not allowed on read-only hot\n> standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and make\n> lightweight snapshots of live database clusters (their storage volumes) and\n> mount them on the reporting servers when needed for the reports. This is a\n> bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available which\n> would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage snapshots\n> idea? Any filesystem options?\n\nI've always solved this with slony replication, but pg_basebackup\nshould be pretty good for making sort of up to date slave copies. Just\ntoss a recovery.conf file and touch whatever failover file the slave\nexpects etc.\nI forgot to add one more information, the databases are 50G+ each so doing the base backup on demand over the network is not a great option.",
"msg_date": "Fri, 6 Jan 2017 20:33:00 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": "I suggest SymmetricDS. ( http://symmetricds.org )\n\nI've had good luck using them to aggregate data from a heterogeneous suite\nof database systems and versions back to a single back-end data mart for\nexactly this purpose.\n\n\n\nOn Fri, Jan 6, 2017 at 2:24 PM, Ivan Voras <[email protected]> wrote:\n\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen\n> servers and several dozen databases on each, and they occasionally need to\n> run huge reports which slow down other services. This is of course \"legacy\n> code\". After some discussion, the idea is to offload these reports to\n> separate servers - and that would be fairly straightforward if not for the\n> fact that the report code creates temp tables which are not allowed on\n> read-only hot standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and\n> make lightweight snapshots of live database clusters (their storage\n> volumes) and mount them on the reporting servers when needed for the\n> reports. This is a bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available\n> which would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage snapshots\n> idea? Any filesystem options?\n>\n>\n\nI suggest SymmetricDS. ( http://symmetricds.org )I've had good luck using them to aggregate data from a heterogeneous suite of database systems and versions back to a single back-end data mart for exactly this purpose.On Fri, Jan 6, 2017 at 2:24 PM, Ivan Voras <[email protected]> wrote:Hello,I'm investigating options for an environment which has about a dozen servers and several dozen databases on each, and they occasionally need to run huge reports which slow down other services. This is of course \"legacy code\". After some discussion, the idea is to offload these reports to separate servers - and that would be fairly straightforward if not for the fact that the report code creates temp tables which are not allowed on read-only hot standby replicas.So, the next best thing would be to fiddle with the storage system and make lightweight snapshots of live database clusters (their storage volumes) and mount them on the reporting servers when needed for the reports. This is a bit messy :-)I'm basically fishing for ideas. Are there any other options available which would offer fast replication-like behaviour ?If not, what practices would minimise problems with the storage snapshots idea? Any filesystem options?",
"msg_date": "Fri, 6 Jan 2017 14:33:04 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": "Ivan,\n\n* Ivan Voras ([email protected]) wrote:\n> I'm investigating options for an environment which has about a dozen\n> servers and several dozen databases on each, and they occasionally need to\n> run huge reports which slow down other services. This is of course \"legacy\n> code\". After some discussion, the idea is to offload these reports to\n> separate servers - and that would be fairly straightforward if not for the\n> fact that the report code creates temp tables which are not allowed on\n> read-only hot standby replicas.\n\nYou could create a new server which has postgres_fdw connections to your\nread-only replicas and run the reporting code there. That could suck,\nof course, since the data would have to be pulled across to be\naggregated (assuming that's what your reporting script is doing).\n\nIf you can't change the reporting script at all, that might be what you\nhave to do though. Be sure to look at the postgres_fdw options about\nbatch size and how planning is done.\n\nIf you can change the reporting script, another option is to create FDWs\non your primary servers with FDW tables that point to some other server\nand then have the reporting script use the FDW tables as the temp or\ndestination tables on the replica. The magic here is that FDW tables on\na read-only replica *can* be written to, but you have to create the FDW\nand the FDW tables on the primary and let them be replicated.\n\nAs also mentioned, you could use trigger-based replication (eg: bucardo,\nslony, etc) instead of block-based, or you could look at the logical\nreplication capabilities (pg_logical) to see about using that for your\nreplica-for-reporting instead.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 6 Jan 2017 14:43:03 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
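As a rough illustration of Stephen's second suggestion, the objects below would be created on the primary so that they replicate to the standby. The server name, credentials, and scratch-table layout are invented for the example; per the message above, the writes happen on the remote side, which is what lets the reporting script use this table in place of a temp table while connected to the replica.

    CREATE EXTENSION IF NOT EXISTS postgres_fdw;

    CREATE SERVER report_sink
        FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'reports.internal', dbname 'report_scratch_db');

    CREATE USER MAPPING FOR CURRENT_USER
        SERVER report_sink
        OPTIONS (user 'report_writer', password 'secret');

    -- the reporting script writes into this instead of a temp table
    CREATE FOREIGN TABLE report_scratch (
        customer_id integer,
        total       numeric
    )
    SERVER report_sink
    OPTIONS (table_name 'report_scratch');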
{
"msg_contents": "\n\nOn 01/06/2017 12:24 PM, Ivan Voras wrote:\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen \n> servers and several dozen databases on each, and they occasionally \n> need to run huge reports which slow down other services. This is of \n> course \"legacy code\". After some discussion, the idea is to offload \n> these reports to separate servers - and that would be fairly \n> straightforward if not for the fact that the report code creates temp \n> tables which are not allowed on read-only hot standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and \n> make lightweight snapshots of live database clusters (their storage \n> volumes) and mount them on the reporting servers when needed for the \n> reports. This is a bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available \n> which would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage \n> snapshots idea? Any filesystem options?\n>\n\nYou could have a look at SLONY - it locks the replicated tables into \nread only but the standby cluster remains read/write. As an added bonus \nyou could replicate everything into a single reporting database cluster, \nin separate schema's there are lots and lots of features with SLONY that \ngive you flexibility.\n\nhttp://slony.info/\n\nI can't speak from direct experience but I think pg_logical may offer \nsimilar features\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Jan 2017 10:48:24 -0700",
"msg_from": "ProPAAS DBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": "On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\n\n>\n>\n> On 6 Jan 2017 8:30 p.m., \"Scott Marlowe\" <[email protected]> wrote:\n>\n> On Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras <[email protected]> wrote:\n> > Hello,\n> >\n> > I'm investigating options for an environment which has about a dozen\n> servers\n> > and several dozen databases on each, and they occasionally need to run\n> huge\n> > reports which slow down other services. This is of course \"legacy code\".\n> > After some discussion, the idea is to offload these reports to separate\n> > servers - and that would be fairly straightforward if not for the fact\n> that\n> > the report code creates temp tables which are not allowed on read-only\n> hot\n> > standby replicas.\n> >\n> > So, the next best thing would be to fiddle with the storage system and\n> make\n> > lightweight snapshots of live database clusters (their storage volumes)\n> and\n> > mount them on the reporting servers when needed for the reports. This is\n> a\n> > bit messy :-)\n> >\n> > I'm basically fishing for ideas. Are there any other options available\n> which\n> > would offer fast replication-like behaviour ?\n> >\n> > If not, what practices would minimise problems with the storage snapshots\n> > idea? Any filesystem options?\n>\n> I've always solved this with slony replication, but pg_basebackup\n> should be pretty good for making sort of up to date slave copies. Just\n> toss a recovery.conf file and touch whatever failover file the slave\n> expects etc.\n>\n>\n> I forgot to add one more information, the databases are 50G+ each so doing\n> the base backup on demand over the network is not a great option.\n>\n\nIf you don't want to rebuild your report databases, you can use PostgreSQL\nbuilt in replication to keep them in sync. Just promote the replica to a\nprimary, run your reports, then wind it back to a standby and let it catch\nup. pg_rewind might be able to wind it back, or you could use a filesystem\nsnapshot from before you promoted the replica to a primary. You do need to\nensure that the real primary keep enough WAL logs to cover the period your\nreport database is broken out.\n\nPersonally though, I'd take the opportunity to set up wal shipping and\npoint in time recovery on your primary, and rebuild your reporting database\nregularly from these backups. You get your fresh reporting database on\ndemand without overloading the primary, and regularly test your backups.\n\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\nOn 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:On 6 Jan 2017 8:30 p.m., \"Scott Marlowe\" <[email protected]> wrote:On Fri, Jan 6, 2017 at 12:24 PM, Ivan Voras <[email protected]> wrote:\n> Hello,\n>\n> I'm investigating options for an environment which has about a dozen servers\n> and several dozen databases on each, and they occasionally need to run huge\n> reports which slow down other services. This is of course \"legacy code\".\n> After some discussion, the idea is to offload these reports to separate\n> servers - and that would be fairly straightforward if not for the fact that\n> the report code creates temp tables which are not allowed on read-only hot\n> standby replicas.\n>\n> So, the next best thing would be to fiddle with the storage system and make\n> lightweight snapshots of live database clusters (their storage volumes) and\n> mount them on the reporting servers when needed for the reports. 
This is a\n> bit messy :-)\n>\n> I'm basically fishing for ideas. Are there any other options available which\n> would offer fast replication-like behaviour ?\n>\n> If not, what practices would minimise problems with the storage snapshots\n> idea? Any filesystem options?\n\nI've always solved this with slony replication, but pg_basebackup\nshould be pretty good for making sort of up to date slave copies. Just\ntoss a recovery.conf file and touch whatever failover file the slave\nexpects etc.\nI forgot to add one more information, the databases are 50G+ each so doing the base backup on demand over the network is not a great option.If you don't want to rebuild your report databases, you can use PostgreSQL built in replication to keep them in sync. Just promote the replica to a primary, run your reports, then wind it back to a standby and let it catch up. pg_rewind might be able to wind it back, or you could use a filesystem snapshot from before you promoted the replica to a primary. You do need to ensure that the real primary keep enough WAL logs to cover the period your report database is broken out.Personally though, I'd take the opportunity to set up wal shipping and point in time recovery on your primary, and rebuild your reporting database regularly from these backups. You get your fresh reporting database on demand without overloading the primary, and regularly test your backups.-- Stuart Bishop <[email protected]>http://www.stuartbishop.net/",
"msg_date": "Fri, 13 Jan 2017 18:00:31 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
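Two small checks that can help while cycling a reporting replica through the promote/report/wind-back loop Stuart describes; both functions are in core PostgreSQL, and this is only a minimal sketch rather than a full monitoring setup.

    -- false once the node has been promoted and is writable for reporting,
    -- true again after it has been wound back and is replaying WAL
    SELECT pg_is_in_recovery();

    -- while catching back up, roughly how far behind the primary it is
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;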
{
"msg_contents": "On 13 January 2017 at 12:00, Stuart Bishop <[email protected]> wrote:\n\n>\n>\n> On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\n>\n>>\n>>\n>>\n>> I forgot to add one more information, the databases are 50G+ each so\n>> doing the base backup on demand over the network is not a great option.\n>>\n>\n> If you don't want to rebuild your report databases, you can use PostgreSQL\n> built in replication to keep them in sync. Just promote the replica to a\n> primary, run your reports, then wind it back to a standby and let it catch\n> up.\n>\n\n\nAh, that's a nice option, didn't know about pg_rewind! I need to read about\nit some more...\nSo far, it seems like the best one.\n\n\n\n> Personally though, I'd take the opportunity to set up wal shipping and\n> point in time recovery on your primary, and rebuild your reporting database\n> regularly from these backups. You get your fresh reporting database on\n> demand without overloading the primary, and regularly test your backups.\n>\n\nI don't think that would solve the main problem. If I set up WAL shipping,\nthen the secondary server will periodically need to ingest the logs, right?\nAnd then I'm either back to running it for a while and rewinding it, as\nyou've said, or basically restoring it from scratch every time which will\nbe slower than just doing a base backup, right?\n\nOn 13 January 2017 at 12:00, Stuart Bishop <[email protected]> wrote:On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\nI forgot to add one more information, the databases are 50G+ each so doing the base backup on demand over the network is not a great option.If you don't want to rebuild your report databases, you can use PostgreSQL built in replication to keep them in sync. Just promote the replica to a primary, run your reports, then wind it back to a standby and let it catch up. Ah, that's a nice option, didn't know about pg_rewind! I need to read about it some more...So far, it seems like the best one. Personally though, I'd take the opportunity to set up wal shipping and point in time recovery on your primary, and rebuild your reporting database regularly from these backups. You get your fresh reporting database on demand without overloading the primary, and regularly test your backups.I don't think that would solve the main problem. If I set up WAL shipping, then the secondary server will periodically need to ingest the logs, right? And then I'm either back to running it for a while and rewinding it, as you've said, or basically restoring it from scratch every time which will be slower than just doing a base backup, right?",
"msg_date": "Fri, 13 Jan 2017 12:17:48 +0100",
"msg_from": "Ivan Voras <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": " Why not utilize the pglogical plugin from 2ndQuadrant? They demonstrate your use case on the webpage for it and it is free. Phillip Couto From: [email protected]: January 13, 2017 06:20To: [email protected]: [email protected]; [email protected]: Re: [PERFORM] Sort-of replication for reporting purposes On 13 January 2017 at 12:00, Stuart Bishop <[email protected]> wrote:On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\nI forgot to add one more information, the databases are 50G+ each so doing the base backup on demand over the network is not a great option.If you don't want to rebuild your report databases, you can use PostgreSQL built in replication to keep them in sync. Just promote the replica to a primary, run your reports, then wind it back to a standby and let it catch up. Ah, that's a nice option, didn't know about pg_rewind! I need to read about it some more...So far, it seems like the best one. Personally though, I'd take the opportunity to set up wal shipping and point in time recovery on your primary, and rebuild your reporting database regularly from these backups. You get your fresh reporting database on demand without overloading the primary, and regularly test your backups.I don't think that would solve the main problem. If I set up WAL shipping, then the secondary server will periodically need to ingest the logs, right? And then I'm either back to running it for a while and rewinding it, as you've said, or basically restoring it from scratch every time which will be slower than just doing a base backup, right?\n\n",
"msg_date": "Fri, 13 Jan 2017 06:47:36 -0500",
"msg_from": "\"Phillip Couto\" <[email protected]> ",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
},
{
"msg_contents": "On 13 January 2017 at 18:17, Ivan Voras <[email protected]> wrote:\n\n> On 13 January 2017 at 12:00, Stuart Bishop <[email protected]>\n> wrote:\n>\n>>\n>>\n>> On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\n>>\n>>>\n>>>\n>>>\n>>> I forgot to add one more information, the databases are 50G+ each so\n>>> doing the base backup on demand over the network is not a great option.\n>>>\n>>\n>> If you don't want to rebuild your report databases, you can use\n>> PostgreSQL built in replication to keep them in sync. Just promote the\n>> replica to a primary, run your reports, then wind it back to a standby and\n>> let it catch up.\n>>\n>\n>\n> Ah, that's a nice option, didn't know about pg_rewind! I need to read\n> about it some more...\n> So far, it seems like the best one.\n>\n>\n>\n>> Personally though, I'd take the opportunity to set up wal shipping and\n>> point in time recovery on your primary, and rebuild your reporting database\n>> regularly from these backups. You get your fresh reporting database on\n>> demand without overloading the primary, and regularly test your backups.\n>>\n>\n> I don't think that would solve the main problem. If I set up WAL shipping,\n> then the secondary server will periodically need to ingest the logs, right?\n> And then I'm either back to running it for a while and rewinding it, as\n> you've said, or basically restoring it from scratch every time which will\n> be slower than just doing a base backup, right?\n>\n\nIt is solving a different problem (reliable, tested backups). As a side\neffect, you end up with a copy of your main database that you can run\nreports on. I'm suggesting that maybe the slow restoration of the database\nis not actually a problem, but instead that you can use it to your\nadvantage. Maybe this fits into your bigger picture. Or maybe having a\ndozen hot standbys of your existing dozen servers is a better option for\nyou.\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\nOn 13 January 2017 at 18:17, Ivan Voras <[email protected]> wrote:On 13 January 2017 at 12:00, Stuart Bishop <[email protected]> wrote:On 7 January 2017 at 02:33, Ivan Voras <[email protected]> wrote:\nI forgot to add one more information, the databases are 50G+ each so doing the base backup on demand over the network is not a great option.If you don't want to rebuild your report databases, you can use PostgreSQL built in replication to keep them in sync. Just promote the replica to a primary, run your reports, then wind it back to a standby and let it catch up. Ah, that's a nice option, didn't know about pg_rewind! I need to read about it some more...So far, it seems like the best one. Personally though, I'd take the opportunity to set up wal shipping and point in time recovery on your primary, and rebuild your reporting database regularly from these backups. You get your fresh reporting database on demand without overloading the primary, and regularly test your backups.I don't think that would solve the main problem. If I set up WAL shipping, then the secondary server will periodically need to ingest the logs, right? And then I'm either back to running it for a while and rewinding it, as you've said, or basically restoring it from scratch every time which will be slower than just doing a base backup, right?It is solving a different problem (reliable, tested backups). As a side effect, you end up with a copy of your main database that you can run reports on. 
I'm suggesting that maybe the slow restoration of the database is not actually a problem, but instead that you can use it to your advantage. Maybe this fits into your bigger picture. Or maybe having a dozen hot standbys of your existing dozen servers is a better option for you.-- Stuart Bishop <[email protected]>http://www.stuartbishop.net/",
"msg_date": "Fri, 13 Jan 2017 18:48:35 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sort-of replication for reporting purposes"
}
] |
[
{
"msg_contents": "I'm using postgresql 9.5.4 on amazon RDS with ~1300 persistent connections\nfrom rails 4.2 with \"prepared_statements: false\". Over the course of hours\nand days, the \"Freeable Memory\" RDS stat continues to go down indefinitely\nbut jumps back up to a relatively small working set every time we reconnect\n(restart our servers). If we let it go too long, it goes all the way to\nzero and the database instance really does start to go into swap and\neventually fail. Subtracting the freeable memory over days from the peaks\nwhen we restart we see that there are 10's of MB per connection on average.\n\n[image: enter image description here]\nDigging into the per-pid RSS from enhanced monitoring, we see the same slow\ngrowth on example connection pids but the total RSS seems to just be a\nproxy for actual memory usage per connection (\nhttps://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/).\n\n[image: enter image description here]\n\nHow can I either:\n\nChange the default.postgres9.5 parameters below to avoid unbounded memory\ngrowth per-connection\nDetermine what queries cause this unbounded growth and change them to\nprevent it\nDetermine what type of buffering/caching is causing this unbounded growth\nso that I can use that to do either of the above\n\n[image: enter image description here]\n\nI'm using postgresql 9.5.4 on amazon RDS with ~1300 persistent connections from rails 4.2 with \"prepared_statements: false\". Over the course of hours and days, the \"Freeable Memory\" RDS stat continues to go down indefinitely but jumps back up to a relatively small working set every time we reconnect (restart our servers). If we let it go too long, it goes all the way to zero and the database instance really does start to go into swap and eventually fail. Subtracting the freeable memory over days from the peaks when we restart we see that there are 10's of MB per connection on average.Digging into the per-pid RSS from enhanced monitoring, we see the same slow growth on example connection pids but the total RSS seems to just be a proxy for actual memory usage per connection (https://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/).How can I either:Change the default.postgres9.5 parameters below to avoid unbounded memory growth per-connectionDetermine what queries cause this unbounded growth and change them to prevent itDetermine what type of buffering/caching is causing this unbounded growth so that I can use that to do either of the above",
"msg_date": "Thu, 12 Jan 2017 12:08:30 -0500",
"msg_from": "Eric Jensen <[email protected]>",
"msg_from_op": true,
"msg_subject": "How can I find the source of postgresql per-connection memory leaks?"
},
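One low-tech check that remains possible on RDS, where OS-level inspection is unavailable, is to see how long each backend has been alive: long-lived backends have had the most time to accumulate per-connection caches. This is only a way to narrow things down, not a diagnosis, and the LIMIT is arbitrary.

    SELECT pid,
           usename,
           application_name,
           now() - backend_start AS connection_age,
           state
    FROM pg_stat_activity
    ORDER BY backend_start
    LIMIT 20;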
{
"msg_contents": "On 01/12/2017 09:08 AM, Eric Jensen wrote:\n> I'm using postgresql 9.5.4 on amazon RDS with ~1300 persistent\n> connections from rails 4.2 with \"prepared_statements: false\". Over the\n> enter image description here\n\nPostgreSQL on RDS is a closed product. My recommendation would be to \ncontact Amazon support. They are likely to be able to provide you with \nbetter support.\n\nSincerely,\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\nUnless otherwise stated, opinions are my own.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Jan 2017 09:18:08 -0800",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How can I find the source of postgresql per-connection\n memory leaks?"
}
] |
[
{
"msg_contents": "Hello,\nWe are migrating database from one server to another.\nAs such we are not making any changes in the database structure.\nStil getting below error while restore using pgdump.\n\n\npg_restore: [archiver (db)] COPY failed for table \"tcb_test\": ERROR:\n \"5.40593839802118076e-315\" is out of range for type double precision\nCONTEXT: COPY tcb_test, line 3932596, column va_odometro:\n\"5.40593839802118076e-315\"\n\nCould you please help us how can we avoid solve this error?\n\npostgres version 9.5\nOS: Red hat 7.1\n\nThanks,\nSamir Magar\n\nHello,We are migrating database from one server to another.As such we are not making any changes in the database structure.Stil getting below error while restore using pgdump.pg_restore: [archiver (db)] COPY failed for table \"tcb_test\": ERROR: \"5.40593839802118076e-315\" is out of range for type double precisionCONTEXT: COPY tcb_test, line 3932596, column va_odometro: \"5.40593839802118076e-315\"Could you please help us how can we avoid solve this error?postgres version 9.5 OS: Red hat 7.1Thanks,Samir Magar",
"msg_date": "Mon, 16 Jan 2017 14:50:30 +0530",
"msg_from": "Samir Magar <[email protected]>",
"msg_from_op": true,
"msg_subject": "out of range error while restore using pgdump"
},
{
"msg_contents": "Samir Magar <[email protected]> writes:\n> pg_restore: [archiver (db)] COPY failed for table \"tcb_test\": ERROR:\n> \"5.40593839802118076e-315\" is out of range for type double precision\n\nThat's ... weird. I don't have RHEL7 installed to test, but I don't\nsee any error for that value on RHEL6 or Fedora 25, which ought to\nbracket that version.\n\nI suppose your version of strtod() must be refusing to do gradual\nunderflow, or else you're running on hardware that doesn't do\nIEEE-compliant arithmetic. But I didn't think RHEL supported any\nsuch hardware (unless maybe it's s/390?). And I can't find any\ndocumentation suggesting that glibc supports turning off gradual\nunderflow, either.\n\nPerhaps you're using some extension that fools around with the\nhardware floating-point options?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Jan 2017 10:30:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: out of range error while restore using pgdump"
}
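The failing value from the COPY error can be reproduced on its own, which makes it easy to compare the two servers directly. On a platform whose strtod() performs gradual underflow the cast below simply returns a subnormal double; on one that does not, it raises the same out-of-range error seen during the restore.

    -- 5.40593839802118076e-315 is below the smallest normal float8
    -- (about 2.2e-308) but representable as an IEEE 754 subnormal
    SELECT '5.40593839802118076e-315'::float8 AS subnormal_value;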
] |
[
{
"msg_contents": "I originally was thinking I had a performance problem related to\nrow-level security but in reducing it to a simple test case I realized\nit applies more generally. The query planner does not seem to\nrecognize that it can eliminate redundant calls to a STABLE function.\nIt will optimize the same call if the function is marked IMMUTABLE.\n\nIn my case, the function call does not take any arguments and is thus\ntrivially independent of row data, and appears in a WHERE clause being\ncompared to constants. Why wouldn't the optimizer treat this case the\nsame as IMMUTABLE?\n\n\nToy problem\n------------------------------------------\n\nThis abstracts a scenario where a web application stores client\nidentity attributes (i.e. roles or groups) into a config variable\nwhich can be accessed from a stored procedure to allow policy checks\nagainst that client context.\n\nThe example query emulates a policy that is a disjunction of two\nrules:\n\n 1. clients who intersect a constant ACL\n '{C,E,Q}'::text[] && current_attributes()\n\n 2. rows whose \"cls\" intersect a constant mask\n cls = ANY('{0,1,5}'::text[])\n\nThis is a common idiom for us, where some rows are restricted from the\ngeneral user base but certain higher privilege clients can see all\nrows.\n\nSo our test query is simply:\n\n SELECT *\n FROM mydata\n WHERE '{C,E,Q}'::text[] && current_attributes()\n OR cls = ANY('{0,1,5}'::text[])\n ;\n\n\nTest setup\n------------------------------------------\n\nI set a very high cost on the function to attempt to encourage the\nplanner to optimize away the calls, but it doesn't seem to make any\ndifference. Based on some other discussions I have seen, I also tried\ndeclaring it as LEAKPROOF but saw no change in behavior there either.\n\n CREATE OR REPLACE FUNCTION current_attributes() RETURNS text[]\n STABLE COST 1000000\n AS $$\n BEGIN\n RETURN current_setting('mytest.attributes')::text[];\n END;\n $$ LANGUAGE plpgsql;\n\n CREATE TABLE mydata (\n id serial PRIMARY KEY,\n val integer,\n cls text\n );\n\n INSERT INTO mydata (val, cls)\n SELECT v, (v % 13)::text\n FROM generate_series(1, 1000000, 1) AS s (v);\n\n CREATE INDEX ON mydata(cls);\n ANALYZE mydata;\n\n\nResulting plans and performance\n------------------------------------------\n\nThese results are with PostgreSQL 9.5 on a Fedora 25 workstation but I\nsee essentially the same behavior on 9.6 as well.\n\nFor an intersecting ACL scenario, I set client context as:\n\n SELECT set_config('mytest.attributes', '{A,B,C,D}', False);\n\nand for non-intersecting, I set:\n\n SELECT set_config('mytest.attributes', '{}', False);\n\nIn an ideal world, the planner knows it can solve the ACL intersection\nonce, independent of any row data and then form a different plan\noptimized around that answer, the same as if we'd just put a constant\ntrue or false term in our WHERE clause.\n\n\nA. STABLE function for intersecting ACL\n\n Seq Scan on mydata (cost=0.00..2500021656.00 rows=238030 width=10) (actual time=0.053..1463.382 rows=1000000 loops=1)\n Filter: (('{C,E,Q}'::text[] && current_attributes()) OR (cls = ANY ('{0,1,5}'::text[])))\n Planning time: 0.093 ms\n Execution time: 1500.395 ms\n\nB. IMMUTABLE function for intersecting ACL\n\n Seq Scan on mydata (cost=0.00..15406.00 rows=1000000 width=10) (actual time=0.009..78.474 rows=1000000 loops=1)\n Planning time: 0.247 ms\n Execution time: 117.610 ms\n\nC. 
STABLE function for non-intersecting ACL\n\n Seq Scan on mydata (cost=0.00..2500021656.00 rows=238030 width=10) (actual time=0.179..1190.484 rows=230770 loops=1)\n Filter: (('{C,E,Q}'::text[] && current_attributes()) OR (cls = ANY ('{0,1,5}'::text[])))\n Rows Removed by Filter: 769230\n Planning time: 0.088 ms\n Execution time: 1199.729 ms\n\nD. IMMUTABLE function for non-intersecting ACL\n\n Bitmap Heap Scan on mydata (cost=4058.36..12631.44 rows=230333 width=10) (actual time=32.444..76.934 rows=230770 loops=1)\n Recheck Cond: (cls = ANY ('{0,1,5}'::text[]))\n Heap Blocks: exact=5406\n -> Bitmap Index Scan on mydata_cls_idx (cost=0.00..4000.78 rows=230333 width=0) (actual time=31.012..31.012 rows=230770 loops=1)\n Index Cond: (cls = ANY ('{0,1,5}'::text[]))\n Planning time: 0.331 ms\n Execution time: 87.475 ms\n\nYou can see the roughly 10-15x performance difference above. In our\nreal application with more sprawling data, sorting, and lots of\navailable column indices, the effects are even more pronounced.\n\nIs there any hope for the planner to start optimizing these\nrow-independent, stable function calls the same way it does immutable\nones?\n\nWe tend to start a transaction, set the config parameter with the web\nclient identity attributes, and then run the other\nperformance-sensitive statements to completion (or error) in the same\ntransaction. We don't further modify the parameter during the\nlifetime of one web request handler. We cycle through different\nparameter settings only when we reuse a connection for multiple web\nrequests which may all be from different clients.\n\nIs it safe to lie and call the function IMMUTABLE to get good plans,\neven though we do modify the config value infrequently during one\npostgres connection? Does postgres ever cache immutable function\nresults between statements? If so, is there any way to invalidate\nthat cache without fully closing and reopening a connection?\n\n\nThanks,\n\n\nKarl\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 14:36:33 -0800",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "optimizing immutable vs. stable function calls?"
},
{
"msg_contents": "Karl Czajkowski <[email protected]> writes:\n> The query planner does not seem to\n> recognize that it can eliminate redundant calls to a STABLE function.\n\nNo, it doesn't.\n\n> In my case, the function call does not take any arguments and is thus\n> trivially independent of row data, and appears in a WHERE clause being\n> compared to constants. Why wouldn't the optimizer treat this case the\n> same as IMMUTABLE?\n\n\"The same as IMMUTABLE\" would be to reduce the function to a constant at\nplan time, which would be the wrong thing. It would be valid to execute\nit only once at query start, but there's no built-in mechanism for that.\n\nBut you could force it by putting it in a sub-SELECT, that is if you\ndon't like the performance of\n\n SELECT ... slow_stable_function() ...\n\ntry this:\n\n SELECT ... (SELECT slow_stable_function()) ...\n\nThat works because it's an uncorrelated sub-query, which gets evaluated\njust once per run. But the overhead associated with that mechanism is\nhigh enough that forcing it automatically for every stable function would\nbe a loser. I'd recommend doing it only where it *really* matters.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 17:54:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
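Applied to the toy query from the first message in this thread, Tom's sub-select trick looks like the sketch below; because the scalar sub-select is uncorrelated, it should show up in the plan as an InitPlan evaluated once rather than as a per-row call.

    SELECT *
    FROM mydata
    WHERE '{C,E,Q}'::text[] && (SELECT current_attributes())
       OR cls = ANY ('{0,1,5}'::text[]);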
{
"msg_contents": "On Jan 18, Tom Lane modulated:\n> \"The same as IMMUTABLE\" would be to reduce the function to a constant at\n> plan time, which would be the wrong thing. It would be valid to execute\n> it only once at query start, but there's no built-in mechanism for that.\n> \n\nThat's what I was afraid of... right, we'd like for it to partially\nevaluate these and then finish the plan.\n\nIs there a correctness hazard with pretending our function is\nIMMUTABLE, even though we will change the underlying config parameter\nin the same connection? It would be very bad if we changed our\nparameter to reflect a different web client identity context, but then\nsomehow got cached plans based on the previous setting...\n\n> But you could force it by putting it in a sub-SELECT, that is if you\n> don't like the performance of\n> \n> SELECT ... slow_stable_function() ...\n> \n> try this:\n> \n> SELECT ... (SELECT slow_stable_function()) ...\n> \n\nHa, you might recall a while back I was having problems with\nround-tripping our RLS policies because I had tried such sub-selects\nwhich return arrays and the internal format lost the casts needed to\nget the correct parse when reloading a dump... :-)\n\n\nkarl\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 15:06:46 -0800",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
{
"msg_contents": "On Wed, Jan 18, 2017 at 3:54 PM, Tom Lane <[email protected]> wrote:\n\n> Karl Czajkowski <[email protected]> writes:\n> > The query planner does not seem to\n> > recognize that it can eliminate redundant calls to a STABLE function.\n>\n> No, it doesn't.\n>\n> > In my case, the function call does not take any arguments and is thus\n> > trivially independent of row data, and appears in a WHERE clause being\n> > compared to constants. Why wouldn't the optimizer treat this case the\n> > same as IMMUTABLE?\n>\n> \"The same as IMMUTABLE\" would be to reduce the function to a constant at\n> plan time, which would be the wrong thing. It would be valid to execute\n> it only once at query start, but there's no built-in mechanism for that.\n>\n\nI'm feeling a bit dense here but even after having read a number of these\nkinds of interchanges I still can't get it to stick. I think part of the\nproblem is this sentence from the docs:\n\nhttps://www.postgresql.org/docs/current/static/xfunc-volatility.html\n\n(Stable): \"This category allows the optimizer to optimize multiple calls\nof the function to a single call\"\n\nI read that sentence (and the surrounding paragraph) and wonder why then\ndoesn't it do so in this case.\n\nIf PostgreSQL cannot execute it only once at query start then all this talk\nabout optimization seems misleading. At worse there should be an sentence\nexplaining when the optimizations noted in that paragraph cannot occur -\nand probably examples of both as well since its not clear when it can occur.\n\nSome TLC to the docs here would be welcomed.\n\nDavid J.\n\nOn Wed, Jan 18, 2017 at 3:54 PM, Tom Lane <[email protected]> wrote:Karl Czajkowski <[email protected]> writes:\n> The query planner does not seem to\n> recognize that it can eliminate redundant calls to a STABLE function.\n\nNo, it doesn't.\n\n> In my case, the function call does not take any arguments and is thus\n> trivially independent of row data, and appears in a WHERE clause being\n> compared to constants. Why wouldn't the optimizer treat this case the\n> same as IMMUTABLE?\n\n\"The same as IMMUTABLE\" would be to reduce the function to a constant at\nplan time, which would be the wrong thing. It would be valid to execute\nit only once at query start, but there's no built-in mechanism for that.I'm feeling a bit dense here but even after having read a number of these kinds of interchanges I still can't get it to stick. I think part of the problem is this sentence from the docs:https://www.postgresql.org/docs/current/static/xfunc-volatility.html(Stable): \"This category allows the optimizer to optimize multiple calls of the function to a single call\"I read that sentence (and the surrounding paragraph) and wonder why then doesn't it do so in this case.If PostgreSQL cannot execute it only once at query start then all this talk about optimization seems misleading. At worse there should be an sentence explaining when the optimizations noted in that paragraph cannot occur - and probably examples of both as well since its not clear when it can occur.Some TLC to the docs here would be welcomed.David J.",
"msg_date": "Wed, 18 Jan 2017 16:06:54 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> I'm feeling a bit dense here but even after having read a number of these\n> kinds of interchanges I still can't get it to stick. I think part of the\n> problem is this sentence from the docs:\n> https://www.postgresql.org/docs/current/static/xfunc-volatility.html\n\n> (Stable): \"This category allows the optimizer to optimize multiple calls\n> of the function to a single call\"\n\n> I read that sentence (and the surrounding paragraph) and wonder why then\n> doesn't it do so in this case.\n\nIt says \"allows\", it doesn't say \"requires\".\n\nThe reason we have this category is that without it, it would be formally\ninvalid to optimize an expression involving a non-immutable function into\nan index comparison value, because in that context the function is indeed\nonly evaluated once (before the comparison value is fed into the index\nmachinery). But there isn't a mechanism for that behavior outside of\nindex scans.\n\n> If PostgreSQL cannot execute it only once at query start then all this talk\n> about optimization seems misleading. At worse there should be an sentence\n> explaining when the optimizations noted in that paragraph cannot occur -\n> and probably examples of both as well since its not clear when it can occur.\n\nIf you want an exact definition of when things will happen or not happen,\nstart reading the source code. I'm loath to document small optimizer\ndetails since they change all the time.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 18:23:26 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
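A minimal sketch of the index-comparison case Tom describes, using the mydata table and mytest.* settings from earlier in the thread; the helper function here is hypothetical and exists only for illustration.

    CREATE FUNCTION current_cls() RETURNS text
        STABLE
        LANGUAGE sql
    AS $$ SELECT current_setting('mytest.cls') $$;

    -- when the planner chooses an index or bitmap scan on mydata_cls_idx,
    -- the STABLE call is evaluated once to form the comparison value fed
    -- to the index machinery, rather than once per row
    EXPLAIN SELECT * FROM mydata WHERE cls = current_cls();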
{
"msg_contents": "Karl Czajkowski <[email protected]> writes:\n> Is there a correctness hazard with pretending our function is\n> IMMUTABLE, even though we will change the underlying config parameter\n> in the same connection?\n\nYou could probably get away with that if you never ever use prepared\nqueries (beware that almost anything in plpgsql is a prepared query).\nIt's a trick that's likely to bite you eventually though.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 18:25:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
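A sketch of how the "pretend it is IMMUTABLE" shortcut can bite with prepared statements, per Tom's warning. Here current_attributes_immutable() is a hypothetical copy of the earlier current_attributes() function declared IMMUTABLE; whether the stale value is actually reused depends on when a generic plan is built.

    PREPARE acl_probe AS
        SELECT count(*)
        FROM mydata
        WHERE '{C,E,Q}'::text[] && current_attributes_immutable();

    SELECT set_config('mytest.attributes', '{}', false);

    -- once a generic plan exists, the constant folded in from the old
    -- setting can be reused here instead of the value just set above
    EXECUTE acl_probe;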
{
"msg_contents": "On Wed, Jan 18, 2017 at 4:23 PM, Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > I'm feeling a bit dense here but even after having read a number of\n> these\n> > kinds of interchanges I still can't get it to stick. I think part of the\n> > problem is this sentence from the docs:\n> > https://www.postgresql.org/docs/current/static/xfunc-volatility.html\n>\n> > (Stable): \"This category allows the optimizer to optimize multiple calls\n> > of the function to a single call\"\n>\n\n\n> If PostgreSQL cannot execute it only once at query start then all this\n> talk\n> > about optimization seems misleading. At worse there should be an\n> sentence\n> > explaining when the optimizations noted in that paragraph cannot occur -\n> > and probably examples of both as well since its not clear when it can\n> occur.\n>\n> If you want an exact definition of when things will happen or not happen,\n> start reading the source code. I'm loath to document small optimizer\n> details since they change all the time.\n>\n\nThat would not be a productive exercise for me, or most people who just\nwant\nsome idea of what to expect in terms of behavior when they write and use a\nStable function (Immutable and Volatile seem fairly easy to reason about).\n\nIs there anything fatally wrong with the following comprehension?\n\n\"\"\"\nA STABLE function cannot modify the database and is guaranteed to\nreturn the same results given the same arguments for all rows\nwithin a single statement.\n\nThis category allows the optimizer to take an expression of the form\n(indexed_column = stable_function(...)) and evaluate stable_function(...)\nonce at the beginning of the query and use the result to scan\nthe index. (Since an index scan will evaluate the comparison\nvalue only once, not once at each row, it is not valid to use a VOLATILE\n function in an index scan condition). ?Note that should an index scan not\nbe\nchosen for the plan the function will be invoked once-per-row?\n\nExpressions of the forms (constant = stable_function()),\nand (SELECT stable_function() FROM generate_series(1,5)) are not presently\noptimized to a single per-query evaluation. To obtain the equivalent you\ncan invoke the function in a sub-query or CTE and reference the result\nwherever it is needed.\n\"\"\"\n\nIt probably isn't perfect but if the average user isn't going to benefit\nfrom\nanything besides \"index_column = function()\" with an index plan then the\nfalse hope that is being held due to the use of \"allows + in particular\"\nshould probably be dispelled.\n\nThanks!\n\nDavid J.\n\nOn Wed, Jan 18, 2017 at 4:23 PM, Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> I'm feeling a bit dense here but even after having read a number of these\n> kinds of interchanges I still can't get it to stick. I think part of the\n> problem is this sentence from the docs:\n> https://www.postgresql.org/docs/current/static/xfunc-volatility.html\n\n> (Stable): \"This category allows the optimizer to optimize multiple calls\n> of the function to a single call\" \n> If PostgreSQL cannot execute it only once at query start then all this talk\n> about optimization seems misleading. 
At worse there should be an sentence\n> explaining when the optimizations noted in that paragraph cannot occur -\n> and probably examples of both as well since its not clear when it can occur.\n\nIf you want an exact definition of when things will happen or not happen,\nstart reading the source code. I'm loath to document small optimizer\ndetails since they change all the time.That would not be a productive exercise for me, or most people who just wantsome idea of what to expect in terms of behavior when they write and use a Stable function (Immutable and Volatile seem fairly easy to reason about).Is there anything fatally wrong with the following comprehension?\"\"\"A STABLE function cannot modify the database and is guaranteed to return the same results given the same arguments for all rows within a single statement.This category allows the optimizer to take an expression of the form(indexed_column = stable_function(...)) and evaluate stable_function(...)once at the beginning of the query and use the result to scan the index. (Since an index scan will evaluate the comparison value only once, not once at each row, it is not valid to use a VOLATILE function in an index scan condition). ?Note that should an index scan not bechosen for the plan the function will be invoked once-per-row?Expressions of the forms (constant = stable_function()), and (SELECT stable_function() FROM generate_series(1,5)) are not presently optimized to a single per-query evaluation. To obtain the equivalent you can invoke the function in a sub-query or CTE and reference the result wherever it is needed.\"\"\"It probably isn't perfect but if the average user isn't going to benefit fromanything besides \"index_column = function()\" with an index plan then thefalse hope that is being held due to the use of \"allows + in particular\" should probably be dispelled.Thanks!David J.",
"msg_date": "Wed, 18 Jan 2017 17:09:20 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
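A small hedged illustration of the behaviour discussed above, using an invented table and function: with an index scan on the comparison column, the STABLE function can be evaluated once and its result used to scan the index; if no index scan is chosen it may be called once per row.

    CREATE TABLE events (id int PRIMARY KEY, created date NOT NULL);
    CREATE INDEX ON events (created);

    CREATE FUNCTION cutoff() RETURNS date AS
    $$ SELECT current_date - 30 $$ LANGUAGE sql STABLE;

    -- With an index scan on events(created), cutoff() can be evaluated once and
    -- its result used as the index comparison value; with a sequential scan the
    -- call may instead be repeated for every row.
    EXPLAIN SELECT * FROM events WHERE created >= cutoff();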
{
"msg_contents": "On Jan 18, Tom Lane modulated:\n> Karl Czajkowski <[email protected]> writes:\n> > Is there a correctness hazard with pretending our function is\n> > IMMUTABLE, even though we will change the underlying config parameter\n> > in the same connection?\n> \n> You could probably get away with that if you never ever use prepared\n> queries (beware that almost anything in plpgsql is a prepared query).\n> It's a trick that's likely to bite you eventually though.\n> \n\nThat sounds unnerving. I think I need to play it safe. :-/\n\nDoes the plan cache disappear with each connection/backend process?\nOr is there also a risk of plans being shared between backends?\n\nWould it be invasive or a small hack to have something like\n\"transaction-immutable\" which can be precomputed during planning, like\nimmutable, but then must discard those plans at the end of the\ntransaction...?\n\n\nkarl\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 18 Jan 2017 18:45:12 -0800",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
{
"msg_contents": "On 1/18/17 6:09 PM, David G. Johnston wrote:\n> That would not be a productive exercise for me, or most people who just\n> want\n> some idea of what to expect in terms of behavior when they write and use a\n> Stable function (Immutable and Volatile seem fairly easy to reason about).\n\nYeah, this isn't an uncommon question for users to have, and \"read the \ncode\" isn't a great answer.\n\nIf there's a README or comment block that describes this, that might be \na reasonable compromise.\n\nIt would certainly be useful to document how to push the planner in the \nright direction as well. I didn't realize that SELECT ... (SELECT \nslow_stable_function()) was a thing until reading this thread.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 09:10:00 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
},
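A sketch of the scalar-subquery trick mentioned above, with invented names (big_table, slow_stable_fn): wrapping the call in (SELECT ...) turns it into an InitPlan that is evaluated once per query instead of potentially once per row.

    -- May be evaluated repeatedly when the chosen plan is not an index scan:
    SELECT * FROM big_table WHERE created >= slow_stable_fn();

    -- Wrapped in a scalar subquery it becomes an InitPlan and runs exactly once:
    SELECT * FROM big_table WHERE created >= (SELECT slow_stable_fn());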
{
"msg_contents": "On Mon, Jan 23, 2017 at 9:10 AM, Jim Nasby <[email protected]> wrote:\n> On 1/18/17 6:09 PM, David G. Johnston wrote:\n>>\n>> That would not be a productive exercise for me, or most people who just\n>> want\n>> some idea of what to expect in terms of behavior when they write and use a\n>> Stable function (Immutable and Volatile seem fairly easy to reason about).\n>\n>\n> Yeah, this isn't an uncommon question for users to have, and \"read the code\"\n> isn't a great answer.\n>\n> If there's a README or comment block that describes this, that might be a\n> reasonable compromise.\n>\n> It would certainly be useful to document how to push the planner in the\n> right direction as well. I didn't realize that SELECT ... (SELECT\n> slow_stable_function()) was a thing until reading this thread.\n\nTotally agree.\n\nThere are other odd cases as well, mostly relating to SQL inlining\n(for example, marking a function IMMUTABLE can cause it to fall out of\ninlining you would get by giving no designation). If you documented\nall the rules, I think you'd find the rules are a bit odd and could be\nsimplified.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 18:08:52 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: optimizing immutable vs. stable function calls?"
}
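A hedged sketch of the inlining quirk mentioned above (my reading of the behaviour, not taken from the thread): a SQL-language wrapper whose body is merely stable, such as one calling now(), can normally be inlined, but declaring it IMMUTABLE, stricter than its body, may prevent that inlining.

    CREATE FUNCTION day_start() RETURNS timestamptz AS
    $$ SELECT date_trunc('day', now()) $$ LANGUAGE sql;            -- default volatility

    CREATE FUNCTION day_start_imm() RETURNS timestamptz AS
    $$ SELECT date_trunc('day', now()) $$ LANGUAGE sql IMMUTABLE;  -- declared stricter than its body

    -- EXPLAIN VERBOSE shows whether the body was expanded inline or left as a call:
    EXPLAIN VERBOSE SELECT day_start();
    EXPLAIN VERBOSE SELECT day_start_imm();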
] |
[
{
"msg_contents": "Hi,\n\nIs there something in the roadmap to optimize the inner join?\n\nI've this situation above. Table b has 400 rows with null in the column b.\n\nexplain analyze select * from a inner join b on (b.b = a.a);\n\n\"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\"\n\" Merge Cond: (a.a = b.b)\"\n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\"\n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\"\n\"Total runtime: 1.248 ms\"\n\nMy question is: Why the planner isn't removing the null rows during the \nscan of table b?\n\n-- \nClailson Soares Dinízio de Almeida\n\n\n\n\n\n\n\nHi,\n\n Is there something in the roadmap to optimize the inner join?\n\n I've this situation above. Table b has 400 rows with null in the column b.\n\n explain analyze select * from a inner join b on (b.b = a.a);\n \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \n\" Merge Cond: (a.a = b.b)\" \n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \n\"Total runtime: 1.248 ms\" \n\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\n\n-- \nClailson Soares Dinízio de Almeida",
"msg_date": "Thu, 19 Jan 2017 07:08:47 -0300",
"msg_from": "Clailson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimization inner join"
},
{
"msg_contents": "NULL is still a value that may be paired with a NULL in a.a\r\n\r\nThe only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.\r\n\r\nNot sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\r\n\r\n-----------------\r\nPhillip Couto\r\n\r\n\r\n\r\n> On Jan 19, 2017, at 05:08, Clailson <[email protected]> wrote:\r\n> \r\n> Hi,\r\n> \r\n> Is there something in the roadmap to optimize the inner join?\r\n> \r\n> I've this situation above. Table b has 400 rows with null in the column b.\r\n> \r\n> explain analyze select * from a inner join b on (b.b = a.a);\r\n> \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n> \" Merge Cond: (a.a = b.b)\" \r\n> \" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n> \" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n> \"Total runtime: 1.248 ms\" \r\n> \r\n> My question is: Why the planner isn't removing the null rows during the scan of table b?\r\n> -- \r\n> Clailson Soares Dinízio de Almeida\r\n\r\n\n\nNULL is still a value that may be paired with a NULL in a.aThe only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\n-----------------Phillip Couto\n\nOn Jan 19, 2017, at 05:08, Clailson <[email protected]> wrote:\n\n\nHi,\n\r\n Is there something in the roadmap to optimize the inner join?\n\r\n I've this situation above. Table b has 400 rows with null in the column b.\n\r\n explain analyze select * from a inner join b on (b.b = a.a);\r\n \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n\" Merge Cond: (a.a = b.b)\" \r\n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n\"Total runtime: 1.248 ms\" \r\n\r\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\r\n\n-- \r\nClailson Soares Dinízio de Almeida",
"msg_date": "Thu, 19 Jan 2017 07:34:57 -0500",
"msg_from": "\"Phillip Couto\" <[email protected]> ",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
{
"msg_contents": "Hi Phillip.\n\n> The only optimization I could see is if the a.a column has NOT NULL \n> defined while b.b does not have NOT NULL defined.\na.a is the primary key on table a and b.b is the foreign key on table b.\n\n Tabela \"public.a\"\n+--------+---------+---------------+\n| Coluna | Tipo | Modificadores |\n+--------+---------+---------------+\n| a | integer | não nulo |\n| b | integer | |\n+--------+---------+---------------+\nÍndices:\n \"a_pkey\" PRIMARY KEY, btree (a)\nReferenciada por:\n TABLE \"b\" CONSTRAINT \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\n\n Tabela \"public.b\"\n+--------+---------+---------------+\n| Coluna | Tipo | Modificadores |\n+--------+---------+---------------+\n| a | integer | não nulo |\n| b | integer | |\n+--------+---------+---------------+\nÍndices:\n \"b_pkey\" PRIMARY KEY, btree (a)\nRestrições de chave estrangeira:\n \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\n\n>\n> Not sure if it is all that common. Curious what if you put b.b IS NOT \n> NULL in the WHERE statement?\n\nIt's the question. In the company I work with, one of my clients asked \nme: \"Why PostgreSQL does not remove rows with null in column b (table \nb), before joining, since these rows have no corresponding in table a?\" \nI gave the suggestion to put the IS NOT NULL in the WHERE statement, but \nHE can't modify the query in the application.\n\nI did the tests with Oracle and it uses a predicate in the query plan, \nremoving the lines where b.b is null. In Oracle, it´s the same plan, \nwith and without IS NOT NULL in the WHERE statement.\n\n-- \nClailson Soares Dinízio de Almeida\n\n\nOn 19/01/2017 09:34, Phillip Couto wrote:\n> NULL is still a value that may be paired with a NULL in a.a\n>\n> The only optimization I could see is if the a.a column has NOT NULL \n> defined while b.b does not have NOT NULL defined.\n>\n> Not sure if it is all that common. Curious what if you put b.b IS NOT \n> NULL in the WHERE statement?\n>\n> -----------------\n> Phillip Couto\n>\n>\n>\n>> On Jan 19, 2017, at 05:08, Clailson <[email protected] \n>> <mailto:[email protected]>> wrote:\n>>\n>> Hi,\n>>\n>> Is there something in the roadmap to optimize the inner join?\n>>\n>> I've this situation above. 
Table b has 400 rows with null in the \n>> column b.\n>>\n>> explain analyze select * from a inner join b on (b.b = a.a);\n>> \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \n>> \" Merge Cond: (a.a = b.b)\" \n>> \" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \n>> \" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \n>> \"Total runtime: 1.248 ms\" \n>> \n>> My question is: Why the planner isn't removing the null rows during the scan of table b?\n>> -- \n>> Clailson Soares Dinízio de Almeida\n>\n\n\n\n\n\n\n\nHi Phillip.\n\n\nThe only optimization I could see is if the a.a\n column has NOT NULL defined while b.b does not have NOT NULL\n defined.\n\n a.a is the primary key on table a and b.b is the foreign key on\n table b.\n\n Tabela \"public.a\" \n+--------+---------+---------------+\n| Coluna | Tipo | Modificadores |\n+--------+---------+---------------+\n| a | integer | não nulo |\n| b | integer | |\n+--------+---------+---------------+\nÍndices:\n \"a_pkey\" PRIMARY KEY, btree (a)\nReferenciada por:\n TABLE \"b\" CONSTRAINT \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\n\n Tabela \"public.b\"\n+--------+---------+---------------+\n| Coluna | Tipo | Modificadores |\n+--------+---------+---------------+\n| a | integer | não nulo |\n| b | integer | |\n+--------+---------+---------------+\nÍndices:\n \"b_pkey\" PRIMARY KEY, btree (a)\nRestrições de chave estrangeira:\n \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\n\nNot sure if it is all that common. Curious what if\n you put b.b IS NOT NULL in the WHERE statement?\n\n\n It's the question. In the company I work with, one\n of my clients asked me: \"Why PostgreSQL does not remove rows\n with null in column b (table b), before joining, since these\n rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in the\n WHERE statement, but HE can't modify the query in the\n application. \n\n I did the tests with Oracle and it uses a predicate in the query\n plan, removing the lines where b.b is null. In Oracle, it´s the same plan, with and without IS NOT\n NULL in the WHERE statement.\n\n-- \nClailson Soares Dinízio de Almeida\n\nOn 19/01/2017 09:34, Phillip Couto\n wrote:\n\n\n\n NULL is still a value that may be paired with a NULL in a.a\n \n\nThe only optimization I could see is if the a.a\n column has NOT NULL defined while b.b does not have NOT NULL\n defined.\n\n\nNot sure if it is all that common. Curious what if\n you put b.b IS NOT NULL in the WHERE statement?\n\n\n -----------------\nPhillip\n Couto\n\n\n\n\n\n\n\nOn Jan 19, 2017, at 05:08, Clailson <[email protected]>\n wrote:\n\n\n\n Hi,\n\n Is there something in the roadmap to optimize the\n inner join?\n\n I've this situation above. Table b has 400\n rows with null in the column b.\n\n explain analyze select * from a inner join b on (b.b =\n a.a);\n \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \n\" Merge Cond: (a.a = b.b)\" \n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \n\"Total runtime: 1.248 ms\" \n\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\n\n-- \nClailson Soares Dinízio de Almeida",
"msg_date": "Thu, 19 Jan 2017 10:04:28 -0300",
"msg_from": "Clailson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Optimization inner join"
},
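For reference, the workaround suggested above spelled out against this thread's tables; the explicit predicate lets the planner use the column's null statistics directly:

    explain analyze
    select * from a inner join b on (b.b = a.a)
    where b.b is not null;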
{
"msg_contents": "Ah ok that makes sense. I am curious if there is actually a performance benefit to doing that. In postgresql as per the execution plan you provided the Merge Join joins both sets after the have been sorted. If they are sorted already then the NULLs will all be grouped at the beginning or end. (Can’t remember what the ordering is) Postgresql will just skip all the records with probably the same effort as removing them and then merging. The only performance improvement I could potentially see is if there is a lot of NULLS in one set then the cost to sort them may be large enough to recoup by ignoring them before the merge sort.\r\n\r\nI hope someone more familiar with the internals can chime in as I would like to learn more if there is a real benefit here or a better idea why postgres does not do it.\r\n\r\n-----------------\r\nPhillip Couto\r\n\r\n\r\n\r\n> On Jan 19, 2017, at 08:04, Clailson <[email protected]> wrote:\r\n> \r\n> Hi Phillip.\r\n> \r\n>> The only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.\r\n> a.a is the primary key on table a and b.b is the foreign key on table b.\r\n> \r\n> Tabela \"public.a\" \r\n> +--------+---------+---------------+\r\n> | Coluna | Tipo | Modificadores |\r\n> +--------+---------+---------------+\r\n> | a | integer | não nulo |\r\n> | b | integer | |\r\n> +--------+---------+---------------+\r\n> Índices:\r\n> \"a_pkey\" PRIMARY KEY, btree (a)\r\n> Referenciada por:\r\n> TABLE \"b\" CONSTRAINT \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\r\n> \r\n> Tabela \"public.b\" \r\n> +--------+---------+---------------+\r\n> | Coluna | Tipo | Modificadores |\r\n> +--------+---------+---------------+\r\n> | a | integer | não nulo |\r\n> | b | integer | |\r\n> +--------+---------+---------------+\r\n> Índices:\r\n> \"b_pkey\" PRIMARY KEY, btree (a)\r\n> Restrições de chave estrangeira:\r\n> \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\r\n>> \r\n>> Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\r\n> \r\n> It's the question. In the company I work with, one of my clients asked me: \"Why PostgreSQL does not remove rows with null in column b (table b), before joining, since these rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in the WHERE statement, but HE can't modify the query in the application. \r\n> \r\n> I did the tests with Oracle and it uses a predicate in the query plan, removing the lines where b.b is null. In Oracle, it´s the same plan, with and without IS NOT NULL in the WHERE statement.\r\n> -- \r\n> Clailson Soares Dinízio de Almeida\r\n> \r\n> On 19/01/2017 09:34, Phillip Couto wrote:\r\n>> NULL is still a value that may be paired with a NULL in a.a\r\n>> \r\n>> The only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.\r\n>> \r\n>> Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\r\n>> \r\n>> -----------------\r\n>> Phillip Couto\r\n>> \r\n>> \r\n>> \r\n>>> On Jan 19, 2017, at 05:08, Clailson <[email protected] <mailto:[email protected]>> wrote:\r\n>>> \r\n>>> Hi,\r\n>>> \r\n>>> Is there something in the roadmap to optimize the inner join?\r\n>>> \r\n>>> I've this situation above. 
Table b has 400 rows with null in the column b.\r\n>>> \r\n>>> explain analyze select * from a inner join b on (b.b = a.a);\r\n>>> \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n>>> \" Merge Cond: (a.a = b.b)\" \r\n>>> \" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n>>> \" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n>>> \"Total runtime: 1.248 ms\" \r\n>>> \r\n>>> My question is: Why the planner isn't removing the null rows during the scan of table b?\r\n>>> -- \r\n>>> Clailson Soares Dinízio de Almeida\r\n>> \r\n> \r\n\r\n\n\nAh ok that makes sense. I am curious if there is actually a performance benefit to doing that. In postgresql as per the execution plan you provided the Merge Join joins both sets after the have been sorted. If they are sorted already then the NULLs will all be grouped at the beginning or end. (Can’t remember what the ordering is) Postgresql will just skip all the records with probably the same effort as removing them and then merging. The only performance improvement I could potentially see is if there is a lot of NULLS in one set then the cost to sort them may be large enough to recoup by ignoring them before the merge sort.I hope someone more familiar with the internals can chime in as I would like to learn more if there is a real benefit here or a better idea why postgres does not do it.\n-----------------Phillip Couto\n\nOn Jan 19, 2017, at 08:04, Clailson <[email protected]> wrote:\n\n\nHi Phillip.\n\n\nThe only optimization I could see is if the a.a\r\n column has NOT NULL defined while b.b does not have NOT NULL\r\n defined.\n\r\n a.a is the primary key on table a and b.b is the foreign key on\r\n table b.\n\n Tabela \"public.a\" \r\n+--------+---------+---------------+\r\n| Coluna | Tipo | Modificadores |\r\n+--------+---------+---------------+\r\n| a | integer | não nulo |\r\n| b | integer | |\r\n+--------+---------+---------------+\r\nÍndices:\r\n \"a_pkey\" PRIMARY KEY, btree (a)\r\nReferenciada por:\r\n TABLE \"b\" CONSTRAINT \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\r\n\r\n Tabela \"public.b\" \r\n+--------+---------+---------------+\r\n| Coluna | Tipo | Modificadores |\r\n+--------+---------+---------------+\r\n| a | integer | não nulo |\r\n| b | integer | |\r\n+--------+---------+---------------+\r\nÍndices:\r\n \"b_pkey\" PRIMARY KEY, btree (a)\r\nRestrições de chave estrangeira:\r\n \"b_b_fkey\" FOREIGN KEY (b) REFERENCES a(a)\n\nNot sure if it is all that common. Curious what if\r\n you put b.b IS NOT NULL in the WHERE statement?\n\n\r\n It's the question. In the company I work with, one\r\n of my clients asked me: \"Why PostgreSQL does not remove rows\r\n with null in column b (table b), before joining, since these\r\n rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in the\r\n WHERE statement, but HE can't modify the query in the\r\n application. \n\r\n I did the tests with Oracle and it uses a predicate in the query\r\n plan, removing the lines where b.b is null. 
In Oracle, it´s the same plan, with and without IS NOT\r\n NULL in the WHERE statement.\n\n-- \r\nClailson Soares Dinízio de Almeida\n\nOn 19/01/2017 09:34, Phillip Couto\r\n wrote:\n\n\n\r\n NULL is still a value that may be paired with a NULL in a.a\r\n \n\nThe only optimization I could see is if the a.a\r\n column has NOT NULL defined while b.b does not have NOT NULL\r\n defined.\n\n\nNot sure if it is all that common. Curious what if\r\n you put b.b IS NOT NULL in the WHERE statement?\n\n\r\n -----------------\nPhillip\r\n Couto\n\n\n\n\n\n\n\nOn Jan 19, 2017, at 05:08, Clailson <[email protected]>\r\n wrote:\n\n\n\n Hi,\n\r\n Is there something in the roadmap to optimize the\r\n inner join?\n\r\n I've this situation above. Table b has 400\r\n rows with null in the column b.\n\r\n explain analyze select * from a inner join b on (b.b =\r\n a.a);\r\n \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n\" Merge Cond: (a.a = b.b)\" \r\n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n\"Total runtime: 1.248 ms\" \r\n\r\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\r\n\n-- \r\nClailson Soares Dinízio de Almeida",
"msg_date": "Thu, 19 Jan 2017 08:18:19 -0500",
"msg_from": "\"Phillip Couto\" <[email protected]> ",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
{
"msg_contents": "Hello,\n\nEm 19/01/2017 11:04, Clailson escreveu:\n> Hi Phillip.\n>\n>\n>>\n>> Not sure if it is all that common. Curious what if you put b.b IS NOT \n>> NULL in the WHERE statement?\n>\n> It's the question. In the company I work with, one of my clients asked \n> me: \"Why PostgreSQL does not remove rows with null in column b (table \n> b), before joining, since these rows have no corresponding in table \n> a?\" I gave the suggestion to put the IS NOT NULL in the WHERE \n> statement, but HE can't modify the query in the application.\n>\n> I did the tests with Oracle and it uses a predicate in the query plan, \n> removing the lines where b.b is null. In Oracle, it´s the same plan, \n> with and without IS NOT NULL in the WHERE statement.\n\nBeing the client in question, I would like to make a little remark: What \nwe thought could be optimized here at first is on the row estimate of \nthe index scan; which could take null_frac into account. To put things \ninto perspective, our similar case in production has a table with 6 \nmillion lines where only 9.5k aren´t null for the join field, an the \nover-estimation is throwing away good plans (like ~150ms execution time) \nin favor of pretty bad ones (~80s execution time).\n\nWe´ve asked application people to put the where not null workaround, \nwhich works great, and are waiting on an answer, but I believe getting \nbetter estimates without that would be great if possible.\n\n>\n> On 19/01/2017 09:34, Phillip Couto wrote:\n>> NULL is still a value that may be paired with a NULL in a.a \n\nIs that so? I would believe you would never get a match, as NULL <> NULL\n\n>>> On Jan 19, 2017, at 05:08, Clailson <[email protected] \n>>> <mailto:[email protected]>> wrote:\n>>>\n>>> Hi,\n>>>\n>>> Is there something in the roadmap to optimize the inner join?\n>>>\n>>> I've this situation above. Table b has 400 rows with null in the \n>>> column b.\n>>>\n>>> explain analyze select * from a inner join b on (b.b = a.a);\n>>> \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\"\n>>> \" Merge Cond: (a.a = b.b)\"\n>>> \" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\"\n>>> \" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\"\n>>> \"Total runtime: 1.248 ms\"\n>>>\n>>> My question is: Why the planner isn't removing the null rows during \n>>> the scan of table b?\n>>> -- \n>>> Clailson Soares Dinízio de Almeida\n>>\n>\n\n\nRegards,\n\nGustavo R. Montesino\nTribunal Regional do Trabalho da 2a Região\nSecretaria de Tecnologia da Informação e Comunicação\nCoordenadoria de Infraestrutura de TIC\nSeção de Administração de Banco de Dados\nAv. Marquês de São Vicente, 121 - Bl. A - Sala 404\nTelefone: (11) 3150-2082\n\n\n\n\n\n\n\n\n\n Hello,\n\nEm 19/01/2017 11:04, Clailson escreveu:\n\n\n\nHi Phillip.\n\n\n\nNot sure if it is all that common. Curious what if\n you put b.b IS NOT NULL in the WHERE statement?\n\n\n It's the question. In the company I work with, one\n\n of my clients asked me: \"Why PostgreSQL does not remove rows\n with null in column b (table b), before joining, since these\n rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in\n the WHERE statement, but HE can't modify the query in the\n application. \n\n I did the tests with Oracle and it uses a predicate in the\n query plan, removing the lines where b.b is null. 
In Oracle, it´s the same plan, with and without IS\n NOT NULL in the WHERE statement.\n\n\n Being the client in question, I would like to make a little remark:\n What we thought could be optimized here at first is on the row\n estimate of the index scan; which could take null_frac into account.\n To put things into perspective, our similar case in production has a\n table with 6 million lines where only 9.5k aren´t null for the join\n field, an the over-estimation is throwing away good plans (like\n ~150ms execution time) in favor of pretty bad ones (~80s execution\n time).\n\n We´ve asked application people to put the where not null workaround,\n which works great, and are waiting on an answer, but I believe\n getting better estimates without that would be great if possible.\n\n\nOn 19/01/2017 09:34, Phillip Couto\n wrote:\n\n\n\n NULL is still a value that may be paired with a NULL in a.a \n\n\n Is that so? I would believe you would never get a match, as NULL\n <> NULL\n\n\n\n\n\n\nOn Jan 19, 2017, at 05:08, Clailson <[email protected]>\n\n wrote:\n\n\n\n Hi,\n\n Is there something in the roadmap to optimize the\n inner join?\n\n I've this situation above. Table b\n has\n 400 rows with null in the column b.\n\n explain analyze select * from a inner join b on (b.b =\n a.a);\n \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \n\" Merge Cond: (a.a = b.b)\" \n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \n\"Total runtime: 1.248 ms\" \n\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\n\n-- \nClailson Soares Dinízio de Almeida\n\n\n\n\n\n\n\n\n\n\n\n\n Regards,\n\nGustavo R. Montesino\nTribunal Regional do Trabalho da 2a Região\nSecretaria de Tecnologia da Informação e Comunicação\nCoordenadoria de Infraestrutura de TIC\nSeção de Administração de Banco de Dados\nAv. Marquês de São Vicente, 121 - Bl. A - Sala 404\nTelefone: (11) 3150-2082",
"msg_date": "Thu, 19 Jan 2017 11:23:54 -0200",
"msg_from": "Gustavo Rezende Montesino <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
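One way to see the null fraction the planner has recorded for the join column in question (pg_stats is a standard system view; the table and column names are the ones from this thread):

    SELECT tablename, attname, null_frac, n_distinct
    FROM pg_stats
    WHERE tablename = 'b' AND attname = 'b';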
{
"msg_contents": "Hi.\n\nIn SQL \"null == any value\" resolves to false, so optimizer can safely skip\nnulls from either side if any for the inner join.\n\nBest regards, Vitalii Tymchyshyn\n\nNULL is still a value that may be paired with a NULL in a.a\n>\n> The only optimization I could see is if the a.a column has NOT NULL\n> defined while b.b does not have NOT NULL defined.\n>\n> Not sure if it is all that common. Curious what if you put b.b IS NOT NULL\n> in the WHERE statement?\n>\n> -----------------\n> Phillip Couto\n>\n>\n>\n>\n>\n\nHi.In SQL \"null == any value\" resolves to false, so optimizer can safely skip nulls from either side if any for the inner join.Best regards, Vitalii TymchyshynNULL is still a value that may be paired with a NULL in a.aThe only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\n-----------------Phillip Couto",
"msg_date": "Thu, 19 Jan 2017 13:30:33 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
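A tiny demonstration of that point; strictly speaking the comparison yields NULL (unknown), which a join condition treats as not matching:

    SELECT NULL = 1 AS null_vs_constant, NULL = NULL AS null_vs_null;
    -- Both results are NULL; as a join qual that counts as "no match", so rows
    -- whose join key is NULL can never contribute to an inner join.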
{
"msg_contents": "I apologize my statement about NULL being used to join is incorrect as both Vitalii and Gustavo have both pointed out in their respective replies.\r\n\r\n-----------------\r\nPhillip Couto\r\n\r\n\r\n\r\n> On Jan 19, 2017, at 08:30, Vitalii Tymchyshyn <[email protected]> wrote:\r\n> \r\n> \r\n> Hi.\r\n> \r\n> In SQL \"null == any value\" resolves to false, so optimizer can safely skip nulls from either side if any for the inner join.\r\n> \r\n> Best regards, Vitalii Tymchyshyn\r\n> \r\n> NULL is still a value that may be paired with a NULL in a.a\r\n> \r\n> The only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.\r\n> \r\n> Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\r\n> \r\n> -----------------\r\n> Phillip Couto\r\n> \r\n> \r\n> \r\n> \r\n\r\n\n\nI apologize my statement about NULL being used to join is incorrect as both Vitalii and Gustavo have both pointed out in their respective replies.\n-----------------Phillip Couto\n\nOn Jan 19, 2017, at 08:30, Vitalii Tymchyshyn <[email protected]> wrote:Hi.In SQL \"null == any value\" resolves to false, so optimizer can safely skip nulls from either side if any for the inner join.Best regards, Vitalii TymchyshynNULL is still a value that may be paired with a NULL in a.aThe only optimization I could see is if the a.a column has NOT NULL defined while b.b does not have NOT NULL defined.Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\n-----------------Phillip Couto",
"msg_date": "Thu, 19 Jan 2017 08:46:03 -0500",
"msg_from": "\"Phillip Couto\" <[email protected]> ",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
{
"msg_contents": "Gustavo Rezende Montesino <[email protected]> writes:\n> Being the client in question, I would like to make a little remark: What \n> we thought could be optimized here at first is on the row estimate of \n> the index scan; which could take null_frac into account. To put things \n> into perspective, our similar case in production has a table with 6 \n> million lines where only 9.5k aren´t null for the join field, an the \n> over-estimation is throwing away good plans (like ~150ms execution time) \n> in favor of pretty bad ones (~80s execution time).\n\nPlease provide a concrete test case for that. AFAIK the null fraction\nshould be accounted for in join size estimates. Here's a little test\ncase showing that it is:\n\nregression=# create table t1 as select generate_series(1,1000000) as f1;\nSELECT 1000000\nregression=# analyze t1;\nANALYZE\nregression=# create table t2 as select generate_series(1,1000000) as f1;\nSELECT 1000000\nregression=# analyze t2;\nANALYZE\nregression=# explain select * from t1,t2 where t1.f1=t2.f1;\n QUERY PLAN \n------------------------------------------------------------------------\n Hash Join (cost=30832.00..70728.00 rows=1000000 width=8)\n Hash Cond: (t1.f1 = t2.f1)\n -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=4)\n -> Hash (cost=14425.00..14425.00 rows=1000000 width=4)\n -> Seq Scan on t2 (cost=0.00..14425.00 rows=1000000 width=4)\n(5 rows)\n\nregression=# insert into t2 select null from generate_series(1,1000000);\nINSERT 0 1000000\nregression=# analyze t2;\nANALYZE\nregression=# explain select * from t1,t2 where t1.f1=t2.f1;\n QUERY PLAN \n------------------------------------------------------------------------\n Hash Join (cost=30832.00..95727.00 rows=1000000 width=8)\n Hash Cond: (t2.f1 = t1.f1)\n -> Seq Scan on t2 (cost=0.00..27862.00 rows=2000000 width=4)\n -> Hash (cost=14425.00..14425.00 rows=1000000 width=4)\n -> Seq Scan on t1 (cost=0.00..14425.00 rows=1000000 width=4)\n(5 rows)\n\nThe join size estimate is still correct even though it knows there are\nmany more rows in t2.\n\nAs for inserting a not-null test at the scan level, I'm not exactly\nconvinced that it's a win:\n\nregression=# \\timing\nTiming is on.\nregression=# select count(*) from t1,t2 where t1.f1=t2.f1;\n count \n---------\n 1000000\n(1 row)\n\nTime: 562.914 ms\nregression=# select count(*) from t1,t2 where t1.f1=t2.f1 and t2.f1 is not null;\n count \n---------\n 1000000\n(1 row)\n\nTime: 564.896 ms\n\n[ ftr, these times are best-of-three-trials ]\n\nIt's possible that in the case where an explicit sort has to be inserted,\nreducing the amount of data passing through the sort would be worth doing;\nbut in the general case that's unproven.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jan 2017 09:13:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
{
"msg_contents": "The picture is becoming clearer now. So to recap the issue is in the plan selection not utilizing the null_frac statistic properly to skip what seems to be in your case 99% of the rows which are NULL for the field the join is happening on and would be discarded anyways.\r\n\r\nFor completeness do you mind posting what versions of postgres you have tested this on?\r\n\r\n-----------------\r\nPhillip Couto\r\n\r\n\r\n\r\n> On Jan 19, 2017, at 08:23, Gustavo Rezende Montesino <[email protected]> wrote:\r\n> \r\n> Hello,\r\n> \r\n> Em 19/01/2017 11:04, Clailson escreveu:\r\n>> Hi Phillip.\r\n>> \r\n>> \r\n>>> \r\n>>> Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?\r\n>> \r\n>> It's the question. In the company I work with, one of my clients asked me: \"Why PostgreSQL does not remove rows with null in column b (table b), before joining, since these rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in the WHERE statement, but HE can't modify the query in the application. \r\n>> \r\n>> I did the tests with Oracle and it uses a predicate in the query plan, removing the lines where b.b is null. In Oracle, it´s the same plan, with and without IS NOT NULL in the WHERE statement.\r\n> \r\n> Being the client in question, I would like to make a little remark: What we thought could be optimized here at first is on the row estimate of the index scan; which could take null_frac into account. To put things into perspective, our similar case in production has a table with 6 million lines where only 9.5k aren´t null for the join field, an the over-estimation is throwing away good plans (like ~150ms execution time) in favor of pretty bad ones (~80s execution time).\r\n> \r\n> We´ve asked application people to put the where not null workaround, which works great, and are waiting on an answer, but I believe getting better estimates without that would be great if possible.\r\n> \r\n>> \r\n>> On 19/01/2017 09:34, Phillip Couto wrote:\r\n>>> NULL is still a value that may be paired with a NULL in a.a\r\n> \r\n> Is that so? I would believe you would never get a match, as NULL <> NULL\r\n> \r\n>>>> On Jan 19, 2017, at 05:08, Clailson < <mailto:[email protected]>[email protected] <mailto:[email protected]>> wrote:\r\n>>>> \r\n>>>> Hi,\r\n>>>> \r\n>>>> Is there something in the roadmap to optimize the inner join?\r\n>>>> \r\n>>>> I've this situation above. Table b has 400 rows with null in the column b.\r\n>>>> \r\n>>>> explain analyze select * from a inner join b on (b.b = a.a);\r\n>>>> \"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n>>>> \" Merge Cond: (a.a = b.b)\" \r\n>>>> \" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n>>>> \" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n>>>> \"Total runtime: 1.248 ms\" \r\n>>>> \r\n>>>> My question is: Why the planner isn't removing the null rows during the scan of table b?\r\n>>>> -- \r\n>>>> Clailson Soares Dinízio de Almeida\r\n>>> \r\n>> \r\n> \r\n> \r\n> Regards,\r\n> \r\n> Gustavo R. Montesino\r\n> Tribunal Regional do Trabalho da 2a Região\r\n> Secretaria de Tecnologia da Informação e Comunicação\r\n> Coordenadoria de Infraestrutura de TIC\r\n> Seção de Administração de Banco de Dados\r\n> Av. Marquês de São Vicente, 121 - Bl. 
A - Sala 404\r\n> Telefone: (11) 3150-2082\r\n> \r\n> \r\n\r\n\n\nThe picture is becoming clearer now. So to recap the issue is in the plan selection not utilizing the null_frac statistic properly to skip what seems to be in your case 99% of the rows which are NULL for the field the join is happening on and would be discarded anyways.For completeness do you mind posting what versions of postgres you have tested this on?\n-----------------Phillip Couto\n\nOn Jan 19, 2017, at 08:23, Gustavo Rezende Montesino <[email protected]> wrote:Hello,Em 19/01/2017 11:04, Clailson escreveu:Hi Phillip.Not sure if it is all that common. Curious what if you put b.b IS NOT NULL in the WHERE statement?It's the question. In the company I work with, one of my clients asked me: \"Why PostgreSQL does not remove rows with null in column b (table b), before joining, since these rows have no corresponding in table a?\" I gave the suggestion to put the IS NOT NULL in the WHERE statement, but HE can't modify the query in the application. I did the tests with Oracle and it uses a predicate in the query plan, removing the lines where b.b is null. In Oracle, it´s the same plan, with and without IS NOT NULL in the WHERE statement.Being the client in question, I would like to make a little remark: What we thought could be optimized here at first is on the row estimate of the index scan; which could take null_frac into account. To put things into perspective, our similar case in production has a table with 6 million lines where only 9.5k aren´t null for the join field, an the over-estimation is throwing away good plans (like ~150ms execution time) in favor of pretty bad ones (~80s execution time).We´ve asked application people to put the where not null workaround, which works great, and are waiting on an answer, but I believe getting better estimates without that would be great if possible.On 19/01/2017 09:34, Phillip Couto wrote:NULL is still a value that may be paired with a NULL in a.aIs that so? I would believe you would never get a match, as NULL <> NULLOn Jan 19, 2017, at 05:08, Clailson <[email protected]> wrote:Hi,Is there something in the roadmap to optimize the inner join?I've this situation above. Table b has 400 rows with null in the column b.explain analyze select * from a inner join b on (b.b = a.a);\"Merge Join (cost=0.55..65.30 rows=599 width=16) (actual time=0.030..1.173 rows=599 loops=1)\" \r\n\" Merge Cond: (a.a = b.b)\" \r\n\" -> Index Scan using a_pkey on a (cost=0.28..35.27 rows=1000 width=8) (actual time=0.014..0.364 rows=1000 loops=1)\" \r\n\" -> Index Scan using in01 on b (cost=0.28..33.27 rows=1000 width=8) (actual time=0.012..0.249 rows=600 loops=1)\" \r\n\"Total runtime: 1.248 ms\" \r\n\r\nMy question is: Why the planner isn't removing the null rows during the scan of table b?\r\n-- \r\nClailson Soares Dinízio de Almeida\r\nRegards,Gustavo R. Montesino\r\nTribunal Regional do Trabalho da 2a Região\r\nSecretaria de Tecnologia da Informação e Comunicação\r\nCoordenadoria de Infraestrutura de TIC\r\nSeção de Administração de Banco de Dados\r\nAv. Marquês de São Vicente, 121 - Bl. A - Sala 404\r\nTelefone: (11) 3150-2082",
"msg_date": "Thu, 19 Jan 2017 09:15:09 -0500",
"msg_from": "\"Phillip Couto\" <[email protected]> ",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
},
{
"msg_contents": "Em 19/01/2017 12:13, Tom Lane escreveu:\n> Gustavo Rezende Montesino <[email protected]> writes:\n>> Being the client in question, I would like to make a little remark: What\n>> we thought could be optimized here at first is on the row estimate of\n>> the index scan; which could take null_frac into account. To put things\n>> into perspective, our similar case in production has a table with 6\n>> million lines where only 9.5k aren´t null for the join field, an the\n>> over-estimation is throwing away good plans (like ~150ms execution time)\n>> in favor of pretty bad ones (~80s execution time).\n> Please provide a concrete test case for that. AFAIK the null fraction\n> should be accounted for in join size estimates. Here's a little test\n> case showing that it is:\n\nHello,\n\nExpanding a little on you example:\n\npostgres=# create table t1 as select generate_series(1,1000000) as f1;\nSELECT 1000000\npostgres=# create table t2 as select generate_series(1,1000000) as f1;\nSELECT 1000000\npostgres=# insert into t2 select null from generate_series(1,1000000);\nINSERT 0 1000000\npostgres=# create index on t1(f1);\nCREATE INDEX\npostgres=# create index on t2(f1);\nCREATE INDEX\npostgres=# analyze t1;\nANALYZE\npostgres=# analyze t2;\nANALYZE\npostgres=# explain select * from t1,t2 where t1.f1=t2.f1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Merge Join (cost=2.68..59298.81 rows=499433 width=8)\n Merge Cond: (t1.f1 = t2.f1)\n -> Index Only Scan using t1_f1_idx on t1 (cost=0.42..24916.42 \nrows=1000000 width=4)\n -> Index Only Scan using t2_f1_idx on t2 (cost=0.43..48837.43 \nrows=2000000 width=4)\n(4 rows)\npostgres=# explain select * from t1,t2 where t1.f1=t2.f1 and t2.f1 is \nnot null;\n QUERY PLAN\n-----------------------------------------------------------------------------------------\n Merge Join (cost=1.85..44588.02 rows=249434 width=8)\n Merge Cond: (t1.f1 = t2.f1)\n -> Index Only Scan using t1_f1_idx on t1 (cost=0.42..24916.42 \nrows=1000000 width=4)\n -> Index Only Scan using t2_f1_idx on t2 (cost=0.43..26890.60 \nrows=998867 width=4)\n Index Cond: (f1 IS NOT NULL)\n(5 rows)\n\n\nNotice the difference in the estimated costs. In our real case this \ndifference leads\nto a (very) bad plan choice.\n\nBTW, execution itself is indeed faster without the not null clause.\n\nThese tests where on 9.3, but our production with the \"real\" case is in \n9.6. Behavior seems\nto be the same on both.\n\n\nRegards,\n\nGustavo R. Montesino\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jan 2017 12:45:46 -0200",
"msg_from": "Gustavo Rezende Montesino <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimization inner join"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm new in this mailing list, sorry if my post is not well formed.\n\nFirst of all, I would thank all the team and the contributors\naround PostgreSQL for their work.\n\nMy question…\n\nThe explain analyze of the following code is https://explain.depesz.com/s/VhOv\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\nWITH vp AS (\n SELECT dossier.id AS dossier_id\n FROM dossier, vente_process\n WHERE dossier.id = vente_process.dossier_id\n GROUP BY dossier.id\n)\n, affected_ccial AS (\n SELECT\n d.id AS dossier_id\n FROM dossier d, dossier_rel_ccial\n WHERE date_stop > now()\n AND dossier_rel_ccial.enabled\n AND d.id = dossier_rel_ccial.dossier_id\n GROUP BY d.id\n)\n[OTHER CTEs - TRUNCATED CODE]\nSELECT\n count(*)\nFROM dossier d\n LEFT JOIN vp ON vp.dossier_id = d.id\n LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id\n LEFT JOIN dm_bien ON dm_bien.dossier_id = d.id\n LEFT JOIN rdv_r2 ON rdv_r2.dossier_id = d.id\n LEFT JOIN rdv_ra ON rdv_ra.dossier_id = d.id\n LEFT JOIN mandat_papier_non_recu ON mandat_papier_non_recu.dossier_id = d.id\n LEFT JOIN annonce csite_annonce_enabled ON csite_annonce_enabled.dossier_id = d.id\n LEFT JOIN invalidated_estimation ON invalidated_estimation.dossier_id = d.id\n LEFT JOIN num_mandat_reserved ON num_mandat_reserved.dossier_id = d.id\n LEFT JOIN d_status ON d_status.dossier_id = d.id\n\nWHERE TRUE\n AND vp.dossier_id IS NOT NULL\n AND affected_ccial.dossier_id IS NOT NULL\n AND d.vente_etape_id = 1200\n AND NOT d.is_certivia\n AND dm_bien.dossier_id IS NULL\n AND rdv_r2.dossier_id IS NULL\n AND rdv_ra.dossier_id IS NULL\n AND mandat_papier_non_recu.dossier_id IS NOT NULL\n AND (csite_annonce_enabled.dossier_id IS NULL OR NOT csite_annonce_enabled.on_csite_enabled)\n AND invalidated_estimation.dossier_id IS NULL\n AND num_mandat_reserved.dossier_id IS NULL\n AND NOT d_status.status_ids @> '{175}'\n;\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\n\nwhere :\n\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\n-- The \"WHERE\" conditions are destinated to be dynamically generated by\n-- an API\n\n-- All the CTEs contain at most 55260 records.\n\n=> WITH vp AS (\n SELECT dossier.id AS dossier_id\n FROM dossier, vente_process\n WHERE dossier.id = vente_process.dossier_id\n GROUP BY dossier.id\n) select count(*) from vp;\n┌───────┐\n│ count │\n├───────┤\n│ 42792 │\n└───────┘\n\n=> select count(*) from dossier;\n┌───────┐\n│ count │\n├───────┤\n│ 55260 │\n└───────┘\n\n=> \\d dossier\n\n Table \"public.dossier\"\n┌─────────────────────┬─────────────────────────────┬──────────────────────────────────────────────────────┐\n│ Column │ Type │ Modifiers │\n├─────────────────────┼─────────────────────────────┼──────────────────────────────────────────────────────┤\n│ id │ integer │ not null default nextval('dossier_id_seq'::regclass) │\n│ bien_id │ integer │ not null │\n│ date_insert │ timestamp without time zone │ not null default now() │\n│ data │ hstore │ │\n│ vente_type_id │ integer │ │\n│ vente_arret_id │ integer │ │\n│ vente_etape_id │ integer │ not null │\n│ apporteur_id │ integer │ │\n│ mandat_id │ integer │ │\n│ old_cpro_dossier_id │ integer │ │\n│ en_contentieux │ boolean │ not null default false │\n│ is_certivia │ boolean │ not null default false │\n│ no_print_pool │ boolean │ not null default false │\n└─────────────────────┴─────────────────────────────┴──────────────────────────────────────────────────────┘\nIndexes:\n \"dossier_pkey\" PRIMARY KEY, btree (id)\n \"dossier_old_cpro_dossier_id_uniq\" UNIQUE CONSTRAINT, btree (old_cpro_dossier_id)\n 
\"dossier_bien_id_idx\" btree (bien_id)\n \"dossier_date_insert_idx\" btree (date_insert)\n \"dossier_mandat_id_idx\" btree (mandat_id)\n \"dossier_vente_arret_id_idx\" btree (vente_arret_id NULLS FIRST)\nForeign-key constraints:\n \"dos_bien_fk\" FOREIGN KEY (bien_id) REFERENCES bien(id) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"dos_vdt_fk\" FOREIGN KEY (vente_type_id) REFERENCES vente_type(id) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"dos_ven_arr_id_fk\" FOREIGN KEY (vente_arret_id) REFERENCES vente_arret(id) ON UPDATE RESTRICT ON DELETE SET NULL\n \"dossier_apporteur_id_fkey\" FOREIGN KEY (apporteur_id) REFERENCES apporteur(id)\n \"dossier_mandat_id_fkey\" FOREIGN KEY (mandat_id) REFERENCES mandat(id) ON UPDATE CASCADE ON DELETE RESTRICT\n \"dossier_vente_etape_id_fk\" FOREIGN KEY (vente_etape_id) REFERENCES vente_etape(id) ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE INITIALLY DEFERRED\n\n\n=> select count(*) from vente_process;\n┌────────┐\n│ count │\n├────────┤\n│ 334783 │\n└────────┘\n\n=> \\d vente_process\n\n Table \"public.vente_process\"\n┌─────────────────┬─────────────────────────────┬────────────────────────────────────────────────────────────┐\n│ Column │ Type │ Modifiers │\n├─────────────────┼─────────────────────────────┼────────────────────────────────────────────────────────────┤\n│ id │ integer │ not null default nextval('vente_process_id_seq'::regclass) │\n│ dossier_id │ integer │ not null │\n│ id_reference │ integer │ │\n│ table_name │ character varying(50) │ not null │\n│ vente_action_id │ integer │ not null │\n│ date_insert │ timestamp without time zone │ not null default now() │\n│ ccial_id │ integer │ │\n│ admin_id │ integer │ │\n│ acq_id │ integer │ │\n│ vente_etape_id │ integer │ │\n│ data │ jsonb │ │\n└─────────────────┴─────────────────────────────┴────────────────────────────────────────────────────────────┘\nIndexes:\n \"vente_process_pkey\" PRIMARY KEY, btree (id)\n \"vente_process_dossier_id_idx\" btree (dossier_id)\nCheck constraints:\n \"proc_ven_ccial_admin_chk\" CHECK (ccial_id IS NOT NULL OR admin_id IS NOT NULL)\nForeign-key constraints:\n \"pro_ven_adm_fk\" FOREIGN KEY (admin_id) REFERENCES personne_employe(id) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"pro_ven_cci_fk\" FOREIGN KEY (ccial_id) REFERENCES personne_employe(id) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"pro_ven_dos_fk\" FOREIGN KEY (dossier_id) REFERENCES dossier(id) ON UPDATE CASCADE ON DELETE RESTRICT\n \"pro_ven_typ_act_ven_fk\" FOREIGN KEY (vente_action_id) REFERENCES vente_action(id) ON UPDATE RESTRICT ON DELETE RESTRICT\n \"vente_process_acq_id_fkey\" FOREIGN KEY (acq_id) REFERENCES personne_acq(id)\n \"vente_process_vente_etape_id_fk\" FOREIGN KEY (vente_etape_id) REFERENCES vente_etape(id) ON UPDATE CASCADE ON DELETE RESTRICT DEFERRABLE INITIALLY DEFERRED\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\n\nIf I permute the line\n LEFT JOIN vp ON vp.dossier_id = d.id\nwith\n LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id\n\nresulting in this similar query :\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\n[CODE]\nFROM dossier d\n LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id\n LEFT JOIN vp ON vp.dossier_id = d.id\n[CODE]\n✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯✂⋯\n\nThe explain analyze is https://explain.depesz.com/s/sKGW\nresulting in a total time of 798.693ms instead of 65,843.533ms\n\n1. Can somebody explain me why the second query is near 100 faster than the\nfirst one ?\n\n2. 
Is there a rule that suggest the best order of the statements JOIN ?\n I'd read this doc https://www.postgresql.org/docs/9.6/static/explicit-joins.html\n but I don't see any logic join order in this case…\n\n3. Why the two queries are very fast when I remove the WHERE\nconditions ?\n\nI can provide additional informations if needed.\n\nThanks for your reading and your eventual answer,\n-- \nPI\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jan 2017 15:37:03 +0100",
"msg_from": "Philippe Ivaldi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Chaotic query planning ?"
},
{
"msg_contents": "Philippe Ivaldi wrote:\r\n> The explain analyze of the following code is https://explain.depesz.com/s/VhOv\r\n>\r\n> [OTHER CTEs - TRUNCATED CODE]\r\n> SELECT\r\n> count(*)\r\n> FROM dossier d\r\n> LEFT JOIN vp ON vp.dossier_id = d.id\r\n> LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id\r\n> LEFT JOIN dm_bien ON dm_bien.dossier_id = d.id\r\n> LEFT JOIN rdv_r2 ON rdv_r2.dossier_id = d.id\r\n> LEFT JOIN rdv_ra ON rdv_ra.dossier_id = d.id\r\n> LEFT JOIN mandat_papier_non_recu ON mandat_papier_non_recu.dossier_id = d.id\r\n> LEFT JOIN annonce csite_annonce_enabled ON csite_annonce_enabled.dossier_id = d.id\r\n> LEFT JOIN invalidated_estimation ON invalidated_estimation.dossier_id = d.id\r\n> LEFT JOIN num_mandat_reserved ON num_mandat_reserved.dossier_id = d.id\r\n> LEFT JOIN d_status ON d_status.dossier_id = d.id\r\n> WHERE [...]\r\n>\r\n> [...]\r\n> \r\n> If I permute the line\r\n> LEFT JOIN vp ON vp.dossier_id = d.id\r\n> with\r\n> LEFT JOIN affected_ccial ON affected_ccial.dossier_id = d.id\r\n> \r\n> The explain analyze is https://explain.depesz.com/s/sKGW\r\n> resulting in a total time of 798.693ms instead of 65,843.533ms\r\n> \r\n> 1. Can somebody explain me why the second query is near 100 faster than the\r\n> first one ?\r\n> \r\n> 2. Is there a rule that suggest the best order of the statements JOIN ?\r\n> I'd read this doc https://www.postgresql.org/docs/9.6/static/explicit-joins.html\r\n> but I don't see any logic join order in this case…\r\n> \r\n> 3. Why the two queries are very fast when I remove the WHERE\r\n> conditions ?\r\n> \r\n> I can provide additional informations if needed.\r\n\r\nYou join more than 8 tables in your query, and 8 is the default\r\nvalue for join_collapse_limit.\r\n\r\nhttps://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-JOIN-COLLAPSE-LIMIT\r\n\r\nIn this case, PostgreSQL doesn't perform an exhaustive search of\r\nthe possible query plans, but joins them in the order provided.\r\n\r\nExperiment with raising join_collapse_limit and from_collapse_limit to 11.\r\n\r\nAlternatively, optimize the join order by hand and don't tune the parameters.\r\n\r\nYours,\r\nLaurenz Albe\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 19 Jan 2017 16:05:15 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Chaotic query planning ?"
},
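The suggested experiment, as it might look in a session; the settings here are per-session, and the statement to re-run is the original 11-table join from earlier in the thread:

    SET join_collapse_limit = 11;
    SET from_collapse_limit = 11;
    -- then re-run the original statement, e.g.:
    -- EXPLAIN ANALYZE SELECT count(*) FROM dossier d LEFT JOIN vp ON vp.dossier_id = d.id ...;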
{
"msg_contents": "Albe Laurenz wrote\n\n> […]\n> Experiment with raising join_collapse_limit and from_collapse_limit to 11.\n\nThank you, this solve the problem.\n\n> Alternatively, optimize the join order by hand and don't tune the parameters.\n\nWhat is surprising is that there is no apparent/logical optimal strategy.\n\nBest regards,\n-- \nPI\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 30 Jan 2017 09:24:11 +0100",
"msg_from": "Philippe Ivaldi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Chaotic query planning ?"
}
]
[
{
"msg_contents": "Hi Expert,\n\nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n\nThanks in advance.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 20 Jan 2017 11:24:52 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Backup taking long time !!!"
},
{
"msg_contents": "If you can upgrade to a newer version, there is parallel pg dump.\n\nDocumentation -\nhttps://www.postgresql.org/docs/current/static/backup-dump.html\n\nRelated blog -\nhttp://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\nWhich can give significant speed up depending on your machine's I/O\ncapabilities.\n\n\n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi Expert,\n>\n>\n>\n> I have a database having size around 1350 GB, created in PostgreSQL-9.1 in\n> Linux platform.\n>\n> I am using pg_dump to take backup which takes around 12 hours to complete.\n>\n> Could you please suggest me how I can make my backup fast so that it\n> complete in less hours?\n>\n>\n>\n> Thanks in advance.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 | Ext 1078 |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\n\n\n-- \nRegards,\nMadusudanan.B.N <http://madusudanan.com>\n\nIf you can upgrade to a newer version, there is parallel pg dump.Documentation - https://www.postgresql.org/docs/current/static/backup-dump.htmlRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/Which can give significant speed up depending on your machine's I/O capabilities.On Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- Regards,Madusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 17:04:09 +0530",
"msg_from": "\"Madusudanan.B.N\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
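For reference, a rough sketch of the parallel dump being suggested, which needs pg_dump 9.3 or later and the directory output format (database name, job count and output path are placeholders):

    # dump with 4 parallel jobs into a directory-format archive (compressed per table by default)
    pg_dump -Fd -j 4 -f /backups/mydb.dir mydb

    # the matching restore can also run in parallel
    pg_restore -j 4 -d mydb_restored /backups/mydb.dir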
{
"msg_contents": "Exactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\r\nCould you suggest in 9.1 how may I fix it.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Madusudanan.B.N [mailto:[email protected]]\r\nSent: 20 January, 2017 5:04 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Backup taking long time !!!\r\n\r\nIf you can upgrade to a newer version, there is parallel pg dump.\r\n\r\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\r\n\r\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\r\n\r\nWhich can give significant speed up depending on your machine's I/O capabilities.\r\n\r\n\r\n\r\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nHi Expert,\r\n\r\nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\r\nI am using pg_dump to take backup which takes around 12 hours to complete.\r\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\r\n\r\nThanks in advance.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\r\n\r\n\r\n--\r\nRegards,\r\nMadusudanan.B.N<http://madusudanan.com>\r\n\r\n\n\n\n\n\n\n\n\n\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\nCould you suggest in 9.1 how may I fix it.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 
7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Madusudanan.B.N [mailto:[email protected]]\r\n\nSent: 20 January, 2017 5:04 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n \n\nIf you can upgrade to a newer version, there is parallel pg dump.\n\n \n\n\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\n\n\n \n\n\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\n\n \n\n\nWhich can give significant speed up depending on your machine's I/O capabilities.\n\n\n \n\n\n \n\n\n \n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n \n\n-- \n\n\n\n\n\n\n\n\n\nRegards,\nMadusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 11:43:33 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Hi\n\n2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n> Exactly parallel option is there in version 9.3 but I can’t upgrade new\n> version due to some concerns.\n>\n> Could you suggest in 9.1 how may I fix it.\n>\n\n1. don't use it - you can use physical full backup with export transaction\nsegments.\n\nor\n\n2. buy faster IO\n\nRegards\n\nPavel Stehule\n\n\n\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> *From:* Madusudanan.B.N [mailto:[email protected]]\n> *Sent:* 20 January, 2017 5:04 PM\n> *To:* Dinesh Chandra 12108 <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Backup taking long time !!!\n>\n>\n>\n> If you can upgrade to a newer version, there is parallel pg dump.\n>\n>\n>\n> Documentation - https://www.postgresql.org/docs/current/static/backup-\n> dump.html\n>\n>\n>\n> Related blog - http://paquier.xyz/postgresql-2/postgres-9-3-\n> feature-highlight-parallel-pg_dump/\n>\n>\n>\n> Which can give significant speed up depending on your machine's I/O\n> capabilities.\n>\n>\n>\n>\n>\n>\n>\n> On Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <\n> [email protected]> wrote:\n>\n> Hi Expert,\n>\n>\n>\n> I have a database having size around 1350 GB, created in PostgreSQL-9.1 in\n> Linux platform.\n>\n> I am using pg_dump to take backup which takes around 12 hours to complete.\n>\n> Could you please suggest me how I can make my backup fast so that it\n> complete in less hours?\n>\n>\n>\n> Thanks in advance.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n>\n> ------------------------------\n>\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n>\n>\n>\n>\n> --\n>\n> Regards,\n> Madusudanan.B.N <http://madusudanan.com>\n>\n>\n>\n\nHi2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\nCould you suggest in 9.1 how may I fix it.1. don't use it - you can use physical full backup with export transaction segments.or2. buy faster IORegardsPavel Stehule\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 
7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Madusudanan.B.N [mailto:[email protected]]\n\nSent: 20 January, 2017 5:04 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n \n\nIf you can upgrade to a newer version, there is parallel pg dump.\n\n \n\n\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\n\n\n \n\n\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\n\n \n\n\nWhich can give significant speed up depending on your machine's I/O capabilities.\n\n\n \n\n\n \n\n\n \n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n \n\n-- \n\n\n\n\n\n\n\n\n\nRegards,\nMadusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 12:48:56 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
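A rough sketch of option 1 on 9.1, i.e. a low-level physical base backup taken while the server keeps running (paths and the backup label are placeholders, and WAL archiving via archive_command must already be configured so the copy can be made consistent on restore):

    # 1. mark the start of the backup (forces a checkpoint and writes a backup label)
    psql -U postgres -c "SELECT pg_start_backup('weekly_base', true);"

    # 2. copy the data directory with the server online; pg_xlog can be skipped
    #    because the WAL segments needed for recovery come from the archive
    rsync -a --exclude 'pg_xlog/*' /var/lib/pgsql/9.1/data/ /backup/base/

    # 3. mark the end of the backup (waits for the last required WAL segment to be archived)
    psql -U postgres -c "SELECT pg_stop_backup();"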
{
"msg_contents": "I hope you realise that 9.1 is EOLed - http://blog.2ndquadrant.com/\npostgresql-9-1-end-of-life/\n\nWhich means that the version that you are using will not receive any\nupdates which includes critical updates to performance and security.\n\nIf I were you, I would work on the issues that stops me from upgrading, but\nthat is my opinion.\n\nAs pavel said, you could use a physical backup instead of pg dump.\n\nOn Fri, Jan 20, 2017 at 5:18 PM, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> 2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]\n> >:\n>\n>> Exactly parallel option is there in version 9.3 but I can’t upgrade new\n>> version due to some concerns.\n>>\n>> Could you suggest in 9.1 how may I fix it.\n>>\n>\n> 1. don't use it - you can use physical full backup with export transaction\n> segments.\n>\n> or\n>\n> 2. buy faster IO\n>\n> Regards\n>\n> Pavel Stehule\n>\n>\n>\n>>\n>> *Regards,*\n>>\n>> *Dinesh Chandra*\n>>\n>> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>>\n>> *------------------------------------------------------------------*\n>>\n>> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n>> |[email protected]\n>>\n>> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>>\n>>\n>>\n>> *From:* Madusudanan.B.N [mailto:[email protected]]\n>> *Sent:* 20 January, 2017 5:04 PM\n>> *To:* Dinesh Chandra 12108 <[email protected]>\n>> *Cc:* [email protected]\n>> *Subject:* Re: [PERFORM] Backup taking long time !!!\n>>\n>>\n>>\n>> If you can upgrade to a newer version, there is parallel pg dump.\n>>\n>>\n>>\n>> Documentation - https://www.postgresql.org/d\n>> ocs/current/static/backup-dump.html\n>>\n>>\n>>\n>> Related blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-\n>> highlight-parallel-pg_dump/\n>>\n>>\n>>\n>> Which can give significant speed up depending on your machine's I/O\n>> capabilities.\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> On Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <\n>> [email protected]> wrote:\n>>\n>> Hi Expert,\n>>\n>>\n>>\n>> I have a database having size around 1350 GB, created in PostgreSQL-9.1\n>> in Linux platform.\n>>\n>> I am using pg_dump to take backup which takes around 12 hours to complete.\n>>\n>> Could you please suggest me how I can make my backup fast so that it\n>> complete in less hours?\n>>\n>>\n>>\n>> Thanks in advance.\n>>\n>>\n>>\n>> *Regards,*\n>>\n>> *Dinesh Chandra*\n>>\n>> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>>\n>> *------------------------------------------------------------------*\n>>\n>> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n>> |[email protected]\n>>\n>> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>>\n>>\n>>\n>>\n>> ------------------------------\n>>\n>>\n>> DISCLAIMER:\n>>\n>> This email message is for the sole use of the intended recipient(s) and\n>> may contain confidential and privileged information. Any unauthorized\n>> review, use, disclosure or distribution is prohibited. If you are not the\n>> intended recipient, please contact the sender by reply email and destroy\n>> all copies of the original message. Check all attachments for viruses\n>> before opening them. 
All views or opinions presented in this e-mail are\n>> those of the author and may not reflect the opinion of Cyient or those of\n>> our affiliates.\n>>\n>>\n>>\n>>\n>>\n>> --\n>>\n>> Regards,\n>> Madusudanan.B.N <http://madusudanan.com>\n>>\n>>\n>>\n>\n>\n\n\n-- \nRegards,\nMadusudanan.B.N <http://madusudanan.com>\n\nI hope you realise that 9.1 is EOLed - http://blog.2ndquadrant.com/postgresql-9-1-end-of-life/Which means that the version that you are using will not receive any updates which includes critical updates to performance and security.If I were you, I would work on the issues that stops me from upgrading, but that is my opinion.As pavel said, you could use a physical backup instead of pg dump.On Fri, Jan 20, 2017 at 5:18 PM, Pavel Stehule <[email protected]> wrote:Hi2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\nCould you suggest in 9.1 how may I fix it.1. don't use it - you can use physical full backup with export transaction segments.or2. buy faster IORegardsPavel Stehule\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Madusudanan.B.N [mailto:[email protected]]\n\nSent: 20 January, 2017 5:04 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n \n\nIf you can upgrade to a newer version, there is parallel pg dump.\n\n \n\n\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\n\n\n \n\n\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\n\n \n\n\nWhich can give significant speed up depending on your machine's I/O capabilities.\n\n\n \n\n\n \n\n\n \n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n \n\n-- \n\n\n\n\n\n\n\n\n\nRegards,\nMadusudanan.B.N\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n-- Regards,Madusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 17:22:40 +0530",
"msg_from": "\"Madusudanan.B.N\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Dear Pavel,\r\n\r\nThanks for quick response.\r\nMay I know how can I use physical full backup with export transaction segments.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Pavel Stehule [mailto:[email protected]]\r\nSent: 20 January, 2017 5:19 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: Madusudanan.B.N <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Backup taking long time !!!\r\n\r\nHi\r\n\r\n2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>:\r\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\r\nCould you suggest in 9.1 how may I fix it.\r\n\r\n1. don't use it - you can use physical full backup with export transaction segments.\r\n\r\nor\r\n\r\n2. buy faster IO\r\n\r\nRegards\r\n\r\nPavel Stehule\r\n\r\n\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849<tel:+91%2099539%2075849> | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Madusudanan.B.N [mailto:[email protected]<mailto:[email protected]>]\r\nSent: 20 January, 2017 5:04 PM\r\nTo: Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>\r\nCc: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Backup taking long time !!!\r\n\r\nIf you can upgrade to a newer version, there is parallel pg dump.\r\n\r\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\r\n\r\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\r\n\r\nWhich can give significant speed up depending on your machine's I/O capabilities.\r\n\r\n\r\n\r\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nHi Expert,\r\n\r\nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\r\nI am using pg_dump to take backup which takes around 12 hours to complete.\r\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\r\n\r\nThanks in advance.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849<tel:+91%2099539%2075849> | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\r\n\r\n\r\n--\r\nRegards,\r\nMadusudanan.B.N<http://madusudanan.com>\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nDear Pavel,\n \nThanks for quick response.\nMay I know how can I use physical full backup with export transaction segments.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Pavel Stehule [mailto:[email protected]]\r\n\nSent: 20 January, 2017 5:19 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: Madusudanan.B.N <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n \n\nHi\n\n \n\n2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n\n\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\nCould you suggest in 9.1 how may I fix it.\n\n\n\n\n \n\n\n1. don't use it - you can use physical full backup with export transaction segments.\n\n\n \n\n\nor\n\n\n \n\n\n2. buy faster IO\n\n\n \n\n\nRegards\n\n\n \n\n\nPavel Stehule\n\n\n \n\n\n \n\n\n\n\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\r\n+91-9953975849 | Ext 1078 \r\n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Madusudanan.B.N [mailto:[email protected]]\r\n\nSent: 20 January, 2017 5:04 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n\n\n \n\nIf you can upgrade to a newer version, there is parallel pg dump.\n\n \n\n\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\n\n\n \n\n\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\n\n \n\n\nWhich can give significant speed up depending on your machine's I/O capabilities.\n\n\n \n\n\n \n\n\n \n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\r\n+91-9953975849 | Ext 1078 \r\n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n \n\n--\r\n\n\n\n\n\n\n\n\n\n\nRegards,\nMadusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 11:53:24 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "2017-01-20 12:53 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n> Dear Pavel,\n>\n>\n>\n> Thanks for quick response.\n>\n> May I know how can I use physical full backup with export transaction\n> segments.\n>\n\nhttps://www.postgresql.org/docs/9.1/static/continuous-archiving.html\n\nThis process can be automatized by some applications like barman\nhttp://www.pgbarman.org/\n\nRegards\n\nPavel\n\n\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> *From:* Pavel Stehule [mailto:[email protected]]\n> *Sent:* 20 January, 2017 5:19 PM\n> *To:* Dinesh Chandra 12108 <[email protected]>\n> *Cc:* Madusudanan.B.N <[email protected]>;\n> [email protected]\n>\n> *Subject:* Re: [PERFORM] Backup taking long time !!!\n>\n>\n>\n> Hi\n>\n>\n>\n> 2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]\n> >:\n>\n> Exactly parallel option is there in version 9.3 but I can’t upgrade new\n> version due to some concerns.\n>\n> Could you suggest in 9.1 how may I fix it.\n>\n>\n>\n> 1. don't use it - you can use physical full backup with export transaction\n> segments.\n>\n>\n>\n> or\n>\n>\n>\n> 2. buy faster IO\n>\n>\n>\n> Regards\n>\n>\n>\n> Pavel Stehule\n>\n>\n>\n>\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> *From:* Madusudanan.B.N [mailto:[email protected]]\n> *Sent:* 20 January, 2017 5:04 PM\n> *To:* Dinesh Chandra 12108 <[email protected]>\n> *Cc:* [email protected]\n> *Subject:* Re: [PERFORM] Backup taking long time !!!\n>\n>\n>\n> If you can upgrade to a newer version, there is parallel pg dump.\n>\n>\n>\n> Documentation - https://www.postgresql.org/docs/current/static/backup-\n> dump.html\n>\n>\n>\n> Related blog - http://paquier.xyz/postgresql-2/postgres-9-3-\n> feature-highlight-parallel-pg_dump/\n>\n>\n>\n> Which can give significant speed up depending on your machine's I/O\n> capabilities.\n>\n>\n>\n>\n>\n>\n>\n> On Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <\n> [email protected]> wrote:\n>\n> Hi Expert,\n>\n>\n>\n> I have a database having size around 1350 GB, created in PostgreSQL-9.1 in\n> Linux platform.\n>\n> I am using pg_dump to take backup which takes around 12 hours to complete.\n>\n> Could you please suggest me how I can make my backup fast so that it\n> complete in less hours?\n>\n>\n>\n> Thanks in advance.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n>\n> ------------------------------\n>\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. 
If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n>\n>\n>\n>\n> --\n>\n> Regards,\n> Madusudanan.B.N <http://madusudanan.com>\n>\n>\n>\n>\n>\n\n2017-01-20 12:53 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n\nDear Pavel,\n \nThanks for quick response.\nMay I know how can I use physical full backup with export transaction segments.https://www.postgresql.org/docs/9.1/static/continuous-archiving.htmlThis process can be automatized by some applications like barman http://www.pgbarman.org/RegardsPavel\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Pavel Stehule [mailto:[email protected]]\n\nSent: 20 January, 2017 5:19 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: Madusudanan.B.N <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n \n\nHi\n\n \n\n2017-01-20 12:43 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n\n\n\nExactly parallel option is there in version 9.3 but I can’t upgrade new version due to some concerns.\nCould you suggest in 9.1 how may I fix it.\n\n\n\n\n \n\n\n1. don't use it - you can use physical full backup with export transaction segments.\n\n\n \n\n\nor\n\n\n \n\n\n2. buy faster IO\n\n\n \n\n\nRegards\n\n\n \n\n\nPavel Stehule\n\n\n \n\n\n \n\n\n\n\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\n+91-9953975849 | Ext 1078 \n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Madusudanan.B.N [mailto:[email protected]]\n\nSent: 20 January, 2017 5:04 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Backup taking long time !!!\n\n\n \n\nIf you can upgrade to a newer version, there is parallel pg dump.\n\n \n\n\nDocumentation - https://www.postgresql.org/docs/current/static/backup-dump.html\n\n\n \n\n\nRelated blog - http://paquier.xyz/postgresql-2/postgres-9-3-feature-highlight-parallel-pg_dump/\n\n\n \n\n\nWhich can give significant speed up depending on your machine's I/O capabilities.\n\n\n \n\n\n \n\n\n \n\nOn Fri, Jan 20, 2017 at 4:54 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\n+91-9953975849 | Ext 1078 \n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. 
Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n \n\n--\n\n\n\n\n\n\n\n\n\n\nRegards,\nMadusudanan.B.N",
"msg_date": "Fri, 20 Jan 2017 12:55:43 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
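A minimal sketch of what that documentation page boils down to on 9.1 (archive directory, host and user are placeholders; tools like barman automate essentially these same steps plus retention and WAL management):

    # postgresql.conf on the server (needs a restart):
    #   wal_level = archive
    #   archive_mode = on
    #   archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

    # base backup over the replication protocol; pg_basebackup ships with 9.1
    # (requires max_wal_senders > 0 and a replication entry in pg_hba.conf)
    pg_basebackup -h dbserver -U backup_user -D /backup/base/$(date +%Y%m%d) -Ft -z -P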
{
"msg_contents": "* Pavel Stehule ([email protected]) wrote:\n> 2017-01-20 12:53 GMT+01:00 Dinesh Chandra 12108 <[email protected]>:\n> > Thanks for quick response.\n> >\n> > May I know how can I use physical full backup with export transaction\n> > segments.\n> >\n> \n> https://www.postgresql.org/docs/9.1/static/continuous-archiving.html\n> \n> This process can be automatized by some applications like barman\n> http://www.pgbarman.org/\n\nLast I checked, barman is still single-threaded.\n\nIf the database is large enough that you need multi-process backup, I'd\nsuggest looking at pgbackrest- http://www.pgbackrest.org.\n\npgbackrest has parallel backup, incremental/differential/full backup\nsupport, supports compression, CRC checking, and a whole ton of other\ngood stuff.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 20 Jan 2017 07:22:24 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
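For anyone comparing tools, a sketch of how a parallel pgbackrest backup is typically run once a stanza has been configured in pgbackrest.conf (stanza name and process count are placeholders; check the option names against the pgbackrest documentation for the installed version):

    # one-time initialisation of the repository for this cluster
    pgbackrest --stanza=main stanza-create

    # full backup using 4 parallel compression/transfer processes
    pgbackrest --stanza=main --type=full --process-max=4 backup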
{
"msg_contents": "> 20 янв. 2017 г., в 15:22, Stephen Frost <[email protected]> написал(а):\n>> \n>> This process can be automatized by some applications like barman\n>> http://www.pgbarman.org/\n> \n> Last I checked, barman is still single-threaded.\n> \n> If the database is large enough that you need multi-process backup, I'd\n> suggest looking at pgbackrest- http://www.pgbackrest.org.\n> \n> pgbackrest has parallel backup, incremental/differential/full backup\n> support, supports compression, CRC checking, and a whole ton of other\n> good stuff.\n\nIncrements in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.\n\nAnd actually it would be much better to do a good backup and recovery manager part of the core postgres.\n\n[1] https://github.com/secwall/barman\n[2] https://github.com/2ndquadrant-it/barman/issues/21\n\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n20 янв. 2017 г., в 15:22, Stephen Frost <[email protected]> написал(а):This process can be automatized by some applications like barmanhttp://www.pgbarman.org/Last I checked, barman is still single-threaded.If the database is large enough that you need multi-process backup, I'dsuggest looking at pgbackrest- http://www.pgbackrest.org.pgbackrest has parallel backup, incremental/differential/full backupsupport, supports compression, CRC checking, and a whole ton of othergood stuff.Increments in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.And actually it would be much better to do a good backup and recovery manager part of the core postgres.[1] https://github.com/secwall/barman[2] https://github.com/2ndquadrant-it/barman/issues/21\n--May the force be with you…https://simply.name",
"msg_date": "Fri, 20 Jan 2017 15:57:57 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Vladimir,\n\n* Vladimir Borodin ([email protected]) wrote:\n> > 20 янв. 2017 г., в 15:22, Stephen Frost <[email protected]> написал(а):\n> >> This process can be automatized by some applications like barman\n> >> http://www.pgbarman.org/\n> > \n> > Last I checked, barman is still single-threaded.\n> > \n> > If the database is large enough that you need multi-process backup, I'd\n> > suggest looking at pgbackrest- http://www.pgbackrest.org.\n> > \n> > pgbackrest has parallel backup, incremental/differential/full backup\n> > support, supports compression, CRC checking, and a whole ton of other\n> > good stuff.\n> \n> Increments in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.\n\nWe're looking at page-level incremental backup in pgbackrest also. For\nlarger systems, we've not heard too much complaining about it being\nfile-based though, which is why it hasn't been a priority. Of course,\nthe OP is on 9.1 too, so.\n\nAs for your fork, well, I can't say I really blame the barman folks for\nbeing cautious- that's usually a good thing in your backup software. :)\n\nI'm curious how you're handling compressed page-level incremental\nbackups though. I looked through barman-incr and it wasn't obvious to\nme what was going wrt how the incrementals are stored, are they ending\nup as sparse files, or are you actually copying/overwriting the prior\nfile in the backup repository? Apologies, python isn't my first\nlanguage, but the lack of any comment anywhere in that file doesn't\nreally help.\n\n> And actually it would be much better to do a good backup and recovery manager part of the core postgres.\n\nSure, but that's not going to happen for 9.1, or even 9.6, and I doubt\nPG10 is going to suddenly get parallel base-backup with compression.\n\nI've been discussing ways to improve the situation with Magnus and we do\nhave some ideas about it, but that's really an independent effort as\nwe're still going to need a tool for released versions of PG.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 20 Jan 2017 08:40:54 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "> 20 янв. 2017 г., в 16:40, Stephen Frost <[email protected]> написал(а):\n> \n> Vladimir,\n> \n>> Increments in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.\n> \n> We're looking at page-level incremental backup in pgbackrest also. For\n> larger systems, we've not heard too much complaining about it being\n> file-based though, which is why it hasn't been a priority. Of course,\n> the OP is on 9.1 too, so.\n\nWell, we have forked barman and made everything from the above just because we needed ~ 2 PB of disk space for storing backups for our ~ 300 TB of data. (Our recovery window is 7 days) And on 5 TB database it took a lot of time to make/restore a backup.\n\n> \n> As for your fork, well, I can't say I really blame the barman folks for\n> being cautious- that's usually a good thing in your backup software. :)\n\nThe reason seems to be not the caution but the lack of time for working on it. But yep, it took us half a year to deploy our fork everywhere. And it would take much more time if we didn’t have system for checking backups consistency.\n\n> \n> I'm curious how you're handling compressed page-level incremental\n> backups though. I looked through barman-incr and it wasn't obvious to\n> me what was going wrt how the incrementals are stored, are they ending\n> up as sparse files, or are you actually copying/overwriting the prior\n> file in the backup repository?\n\nNo, we do store each file in the following way. At the beginning you write a map of changed pages. At second you write changed pages themselves. The compression is streaming so you don’t need much memory for that but the downside of this approach is that you read each datafile twice (we believe in page cache here).\n\n> Apologies, python isn't my first\n> language, but the lack of any comment anywhere in that file doesn't\n> really help.\n\nNot a problem. Actually, it would be much easier to understand if it was a series of commits rather than one commit that we do ammend and force-push after each rebase on vanilla barman. We should add comments.\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n20 янв. 2017 г., в 16:40, Stephen Frost <[email protected]> написал(а):Vladimir,Increments in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.We're looking at page-level incremental backup in pgbackrest also. Forlarger systems, we've not heard too much complaining about it beingfile-based though, which is why it hasn't been a priority. Of course,the OP is on 9.1 too, so.Well, we have forked barman and made everything from the above just because we needed ~ 2 PB of disk space for storing backups for our ~ 300 TB of data. (Our recovery window is 7 days) And on 5 TB database it took a lot of time to make/restore a backup.As for your fork, well, I can't say I really blame the barman folks forbeing cautious- that's usually a good thing in your backup software. :)The reason seems to be not the caution but the lack of time for working on it. But yep, it took us half a year to deploy our fork everywhere. 
And it would take much more time if we didn’t have system for checking backups consistency.I'm curious how you're handling compressed page-level incrementalbackups though. I looked through barman-incr and it wasn't obvious tome what was going wrt how the incrementals are stored, are they endingup as sparse files, or are you actually copying/overwriting the priorfile in the backup repository?No, we do store each file in the following way. At the beginning you write a map of changed pages. At second you write changed pages themselves. The compression is streaming so you don’t need much memory for that but the downside of this approach is that you read each datafile twice (we believe in page cache here). Apologies, python isn't my firstlanguage, but the lack of any comment anywhere in that file doesn'treally help.Not a problem. Actually, it would be much easier to understand if it was a series of commits rather than one commit that we do ammend and force-push after each rebase on vanilla barman. We should add comments.\n--May the force be with you…https://simply.name",
"msg_date": "Fri, 20 Jan 2017 17:45:51 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Vladimir,\n\n* Vladimir Borodin ([email protected]) wrote:\n> > 20 янв. 2017 г., в 16:40, Stephen Frost <[email protected]> написал(а):\n> >> Increments in pgbackrest are done on file level which is not really efficient. We have done parallelism, compression and page-level increments (9.3+) in barman fork [1], but unfortunately guys from 2ndquadrant-it don’t hurry to work on it.\n> > \n> > We're looking at page-level incremental backup in pgbackrest also. For\n> > larger systems, we've not heard too much complaining about it being\n> > file-based though, which is why it hasn't been a priority. Of course,\n> > the OP is on 9.1 too, so.\n> \n> Well, we have forked barman and made everything from the above just because we needed ~ 2 PB of disk space for storing backups for our ~ 300 TB of data. (Our recovery window is 7 days) And on 5 TB database it took a lot of time to make/restore a backup.\n\nRight, without incremental or compressed backups, you'd have to have\nroom for 7 full copies of your database. Have you looked at what your\nincrementals would be like with file-level incrementals and compression?\n\nSingle-process backup/restore is definitely going to be slow. We've\nseen pgbackrest doing as much as 3TB/hr with 32 cores handling\ncompression. Of course, your i/o, network, et al, need to be able to\nhandle it.\n\n> > As for your fork, well, I can't say I really blame the barman folks for\n> > being cautious- that's usually a good thing in your backup software. :)\n> \n> The reason seems to be not the caution but the lack of time for working on it. But yep, it took us half a year to deploy our fork everywhere. And it would take much more time if we didn’t have system for checking backups consistency.\n\nHow are you testing your backups..? Do you have page-level checksums\nenabled on your database? pgbackrest recently added the ability to\ncheck PG page-level checksums during a backup and report issues. We've\nalso been looking at how to use pgbackrest to do backup/restore+replay\npage-level difference analysis but there's still a number of things\nwhich can cause differences, so it's a bit difficult to do.\n\nOf course, doing a pgbackrest-restore-replay+pg_dump+pg_restore is\npretty easy to do and we do use that in some places to validate\nbackups.\n\n> > I'm curious how you're handling compressed page-level incremental\n> > backups though. I looked through barman-incr and it wasn't obvious to\n> > me what was going wrt how the incrementals are stored, are they ending\n> > up as sparse files, or are you actually copying/overwriting the prior\n> > file in the backup repository?\n> \n> No, we do store each file in the following way. At the beginning you write a map of changed pages. At second you write changed pages themselves. 
The compression is streaming so you don’t need much memory for that but the downside of this approach is that you read each datafile twice (we believe in page cache here).\n\nAh, yes, I noticed that you passed over the file twice but wasn't quite\nsure what functools.partial() was doing and a quick read of the docs\nmade me think you were doing seeking there.\n\nAll the pages are the same size, so I'm surprised you didn't consider\njust having a format along the lines of: magic+offset+page,\nmagic+offset+page, magic+offset+page, etc...\n\nI'd have to defer to David on this, but I think he was considering\nhaving some kind of a bitmap to indicate which pages changed instead\nof storing the full offset as, again, all the pages are the same size.\n\n> > Apologies, python isn't my first\n> > language, but the lack of any comment anywhere in that file doesn't\n> > really help.\n> \n> Not a problem. Actually, it would be much easier to understand if it was a series of commits rather than one commit that we do ammend and force-push after each rebase on vanilla barman. We should add comments.\n\nBoth would make it easier to understand, though the comments would be\nmore helpful for me as I don't actually know the barman code all that\nwell.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 20 Jan 2017 10:06:46 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "> 20 янв. 2017 г., в 18:06, Stephen Frost <[email protected]> написал(а):\n> \n> Right, without incremental or compressed backups, you'd have to have\n> room for 7 full copies of your database. Have you looked at what your\n> incrementals would be like with file-level incrementals and compression?\n\nMost of our DBs can’t use partitioning over time-series fields, so we have a lot of datafiles in which only a few pages have been modified. So file-level increments didn’t really work for us. And we didn’t use compression in barman before patching it because single-threaded compression sucks.\n\n> How are you testing your backups..? Do you have page-level checksums\n> enabled on your database? \n\nYep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).\n\n> pgbackrest recently added the ability to\n> check PG page-level checksums during a backup and report issues.\n\nSounds interesting, should take a look.\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n20 янв. 2017 г., в 18:06, Stephen Frost <[email protected]> написал(а):Right, without incremental or compressed backups, you'd have to haveroom for 7 full copies of your database. Have you looked at what yourincrementals would be like with file-level incrementals and compression?Most of our DBs can’t use partitioning over time-series fields, so we have a lot of datafiles in which only a few pages have been modified. So file-level increments didn’t really work for us. And we didn’t use compression in barman before patching it because single-threaded compression sucks.How are you testing your backups..? Do you have page-level checksumsenabled on your database? Yep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).pgbackrest recently added the ability tocheck PG page-level checksums during a backup and report issues. Sounds interesting, should take a look.\n--May the force be with you…https://simply.name",
"msg_date": "Fri, 20 Jan 2017 19:44:17 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
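A rough sketch of that kind of restore-and-read verification, in a client-side variant that streams each table through COPY and discards the output, so a damaged heap page (or checksum failure) surfaces as a non-zero psql exit code (the database name is a placeholder, and table names containing unusual characters would need more careful quoting):

    # run against the instance restored with recovery_target = 'immediate'
    for tbl in $(psql -d mydb -Atc "SELECT format('%I.%I', schemaname, tablename) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog','information_schema')"); do
      psql -d mydb -c "COPY ${tbl} TO STDOUT" > /dev/null || echo "FAILED: ${tbl}"
    done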
{
"msg_contents": "Vladimir,\n\n* Vladimir Borodin ([email protected]) wrote:\n> > 20 янв. 2017 г., в 18:06, Stephen Frost <[email protected]> написал(а):\n> > \n> > Right, without incremental or compressed backups, you'd have to have\n> > room for 7 full copies of your database. Have you looked at what your\n> > incrementals would be like with file-level incrementals and compression?\n> \n> Most of our DBs can’t use partitioning over time-series fields, so we have a lot of datafiles in which only a few pages have been modified. So file-level increments didn’t really work for us. And we didn’t use compression in barman before patching it because single-threaded compression sucks.\n\nInteresting. That's certainly the kind of use-case we are thinking\nabout for pgbackrest's page-level incremental support. Hopefully it\nwon't be too much longer before we add support for it.\n\n> > How are you testing your backups..? Do you have page-level checksums\n> > enabled on your database? \n> \n> Yep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).\n\nRight, unfortunately that only checks the heap pages, it won't help with\ncorruption happening in an index file or other files which have a\nchecksum.\n\n> > pgbackrest recently added the ability to\n> > check PG page-level checksums during a backup and report issues.\n> \n> Sounds interesting, should take a look.\n\nIt's done with a C library that's optional and not yet included in the\npackages on apt/yum.p.o, though we hope it will be soon. The C library\nis based, unsurprisingly, on the PG backend code and so should be pretty\nfast. All of the checking is done on whole pgbackrest blocks, in\nstream, so it doesn't slow down the backup process too much.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 20 Jan 2017 11:59:22 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
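For readers following along: whether a cluster was initialised with page checksums (as discussed above) can be confirmed from the control file. The data directory path below is an assumption.

    # "Data page checksum version" is non-zero when checksums are enabled
    # (cluster created with initdb --data-checksums / -k).
    pg_controldata /var/lib/pgsql/9.6/data | grep -i checksum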
{
"msg_contents": "Hi Dinesh,\n\nBest practice in doing full backup is using RSYNC, but before you can copy\nthe DATADIR, you might you pg_start_backup to tell the server not to write\ninto the DATADIR, because you are copying that data. After finished copy\nall the data in DATADIR, you can ask server to continue flushing the data\nfrom logs, by commanding pg_stop_backup. Remember, not to copy the XLOG\ndir.\n\nThere are another way more simpler, which is applying command\npg_basebackup, which actually did that way in simpler version.\n\nif you did pg_dump, you wont get the exact copy of your data, and you will\ntake longer downtime to recover the backup data. By that way, recovering is\nonly starting up the postgres with that copy.\n\n\nGood luck!\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Fri, Jan 20, 2017 at 6:24 PM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi Expert,\n>\n>\n>\n> I have a database having size around 1350 GB, created in PostgreSQL-9.1 in\n> Linux platform.\n>\n> I am using pg_dump to take backup which takes around 12 hours to complete.\n>\n> Could you please suggest me how I can make my backup fast so that it\n> complete in less hours?\n>\n>\n>\n> Thanks in advance.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\nHi Dinesh, Best practice in doing full backup is using RSYNC, but before you can copy the DATADIR, you might you pg_start_backup to tell the server not to write into the DATADIR, because you are copying that data. 
After finished copy all the data in DATADIR, you can ask server to continue flushing the data from logs, by commanding pg_stop_backup. Remember, not to copy the XLOG dir. There are another way more simpler, which is applying command pg_basebackup, which actually did that way in simpler version.if you did pg_dump, you wont get the exact copy of your data, and you will take longer downtime to recover the backup data. By that way, recovering is only starting up the postgres with that copy. Good luck!Julyanto SUTANDANGEqunix Business Solutions, PT(An Open Source and Open Mind Company)www.equnix.co.idPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta PusatT: +6221 22866662 F: +62216315281 M: +628164858028Caution: The information enclosed in this email (and any attachments) may be legally privileged and/or confidential and is intended only for the use of the addressee(s). No addressee should forward, print, copy, or otherwise reproduce this message in any manner that would allow it to be viewed by any individual not originally listed as a recipient. If the reader of this message is not the intended recipient, you are hereby notified that any unauthorized disclosure, dissemination, distribution, copying or the taking of any action in reliance on the information herein is strictly prohibited. If you have received this communication in error, please immediately notify the sender and delete this message.Unless it is made by the authorized person, any views expressed in this message are those of the individual sender and may not necessarily reflect the views of PT Equnix Business Solutions.\nOn Fri, Jan 20, 2017 at 6:24 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nHi Expert,\n \nI have a database having size around 1350 GB, created in PostgreSQL-9.1 in Linux platform.\nI am using pg_dump to take backup which takes around 12 hours to complete.\nCould you please suggest me how I can make my backup fast so that it complete in less hours?\n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Sun, 22 Jan 2017 20:20:20 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "CORRECTION:\n\n\"you might you pg_start_backup to tell the server not to write into the\nDATADIR\"\n\nbecome\n\n\"you might *use* pg_start_backup to tell the server not to write into the\n*BASEDIR*, actually server still writes but only to XLOGDIR \"\n\n\nRegards,\n\n\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Sun, Jan 22, 2017 at 8:20 PM, julyanto SUTANDANG <[email protected]>\nwrote:\n\n> Hi Dinesh,\n>\n> Best practice in doing full backup is using RSYNC, but before you can copy\n> the DATADIR, you might you pg_start_backup to tell the server not to write\n> into the DATADIR, because you are copying that data. After finished copy\n> all the data in DATADIR, you can ask server to continue flushing the data\n> from logs, by commanding pg_stop_backup. Remember, not to copy the XLOG\n> dir.\n>\n> There are another way more simpler, which is applying command\n> pg_basebackup, which actually did that way in simpler version.\n>\n> if you did pg_dump, you wont get the exact copy of your data, and you will\n> take longer downtime to recover the backup data. By that way, recovering is\n> only starting up the postgres with that copy.\n>\n>\n> Good luck!\n>\n>\n>\n> Julyanto SUTANDANG\n>\n> Equnix Business Solutions, PT\n> (An Open Source and Open Mind Company)\n> www.equnix.co.id\n> Pusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\n> Pusat\n> T: +6221 22866662 <(021)%2022866662> F: +62216315281 <(021)%206315281> M:\n> +628164858028 <0816-4858-028>\n>\n>\n> Caution: The information enclosed in this email (and any attachments) may\n> be legally privileged and/or confidential and is intended only for the use\n> of the addressee(s). No addressee should forward, print, copy, or otherwise\n> reproduce this message in any manner that would allow it to be viewed by\n> any individual not originally listed as a recipient. 
If the reader of this\n> message is not the intended recipient, you are hereby notified that any\n> unauthorized disclosure, dissemination, distribution, copying or the taking\n> of any action in reliance on the information herein is strictly prohibited.\n> If you have received this communication in error, please immediately notify\n> the sender and delete this message.Unless it is made by the authorized\n> person, any views expressed in this message are those of the individual\n> sender and may not necessarily reflect the views of PT Equnix Business\n> Solutions.\n>\n> On Fri, Jan 20, 2017 at 6:24 PM, Dinesh Chandra 12108 <\n> [email protected]> wrote:\n>\n>> Hi Expert,\n>>\n>>\n>>\n>> I have a database having size around 1350 GB, created in PostgreSQL-9.1\n>> in Linux platform.\n>>\n>> I am using pg_dump to take backup which takes around 12 hours to complete.\n>>\n>> Could you please suggest me how I can make my backup fast so that it\n>> complete in less hours?\n>>\n>>\n>>\n>> Thanks in advance.\n>>\n>>\n>>\n>> *Regards,*\n>>\n>> *Dinesh Chandra*\n>>\n>> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>>\n>> *------------------------------------------------------------------*\n>>\n>> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n>> |[email protected]\n>>\n>> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>>\n>>\n>>\n>> ------------------------------\n>>\n>> DISCLAIMER:\n>>\n>> This email message is for the sole use of the intended recipient(s) and\n>> may contain confidential and privileged information. Any unauthorized\n>> review, use, disclosure or distribution is prohibited. If you are not the\n>> intended recipient, please contact the sender by reply email and destroy\n>> all copies of the original message. Check all attachments for viruses\n>> before opening them. All views or opinions presented in this e-mail are\n>> those of the author and may not reflect the opinion of Cyient or those of\n>> our affiliates.\n>>\n>\n>",
"msg_date": "Sun, 22 Jan 2017 20:36:16 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Greetings,\n\n* julyanto SUTANDANG ([email protected]) wrote:\n> Best practice in doing full backup is using RSYNC, but before you can copy\n> the DATADIR, you might you pg_start_backup to tell the server not to write\n> into the DATADIR, because you are copying that data. After finished copy\n> all the data in DATADIR, you can ask server to continue flushing the data\n> from logs, by commanding pg_stop_backup. Remember, not to copy the XLOG\n> dir.\n\nWhoah. That is not, at all, correct, if I'm understanding what you're\nsuggesting.\n\nPG most certainly *does* continue to write into the data directory even\nafter pg_start_backup() has been run. You *must* use archive_command or\npg_receivexlog to capture all of the WAL during the backup to have a\nconsistent backup.\n\n> There are another way more simpler, which is applying command\n> pg_basebackup, which actually did that way in simpler version.\n\npg_basebackup has options to stream the WAL during the backup to capture\nit, which is how it handles that.\n\n> if you did pg_dump, you wont get the exact copy of your data, and you will\n> take longer downtime to recover the backup data. By that way, recovering is\n> only starting up the postgres with that copy.\n\npg_dump will generally take longer to do a restore, yes. Recovering\nfrom a backup does require that a recovery.conf exists with a\nrestore_command that PG can use to get the WAL files it needs, or that\nall of the WAL from the backup is in pg_xlog/pg_wal.\n\nPlease do not claim that PG stops writing to the DATADIR or BASEDIR\nafter a pg_start_backup(), that is not correct and could lead to invalid\nbackups.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 22 Jan 2017 09:55:11 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
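To make the two WAL-capture options named above concrete, here is a minimal illustration. The archive directory, host name and user are assumptions, and the wal_level spelling differs between 9.x releases.

    # Option 1: continuous archiving via postgresql.conf
    #   wal_level = replica        # 'archive' or 'hot_standby' on releases before 9.6
    #   archive_mode = on
    #   archive_command = 'test ! -f /backup/wal/%f && cp %p /backup/wal/%f'

    # Option 2: stream the WAL off the server into the same directory
    pg_receivexlog -h db-primary -U replicator -D /backup/wal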
{
"msg_contents": "Greetings,\n\n* julyanto SUTANDANG ([email protected]) wrote:\n> CORRECTION:\n> \n> \"you might you pg_start_backup to tell the server not to write into the\n> DATADIR\"\n> \n> become\n> \n> \"you might *use* pg_start_backup to tell the server not to write into the\n> *BASEDIR*, actually server still writes but only to XLOGDIR \"\n\nJust to make sure anyone reading the mailing list archives isn't\nconfused, running pg_start_backup does *not* make PG stop writing to\nBASEDIR (or DATADIR, or anything, really). PG *will* continue to write\ndata into BASEDIR after pg_start_backup has been called.\n\nThe only thing that pg_start_backup does is identify an entry in the WAL\nstream, from which point all WAL must be replayed when restoring the\nbackup. All WAL generated from that point (pg_start_backup point) until\nthe pg_stop_backup point *must* be replayed when restoring the backup or\nthe database will not be consistent.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 22 Jan 2017 09:57:40 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
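A minimal sketch of the low-level sequence being discussed, assuming the exclusive backup mode that is the default on the 9.x releases, an archive_command already capturing WAL, and illustrative paths; it is not a complete backup script.

    psql -c "SELECT pg_start_backup('weekly_base', true);"   # true = take the starting checkpoint immediately
    rsync -a --exclude 'pg_xlog/*' --exclude 'postmaster.pid' /var/lib/pgsql/9.6/data/ /backup/base/
    psql -c "SELECT pg_stop_backup();"
    # Every WAL segment archived between the two calls must be kept with /backup/base,
    # otherwise the copy cannot be restored to a consistent state.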
{
"msg_contents": "Hi Stephen,\n\nPlease elaborate more of what you are saying. What i am saying is based on\nthe Official Docs, Forum and our own test. This is what we had to do to\nsave time, both backing up and restoring.\n\nhttps://www.postgresql.org/docs/9.6/static/functions-admin.html\n\nWhen PostgreSQL in the mode of Start Backup, PostgreSQL only writes to the\nXLOG, then you can safely rsync / copy the base data (snapshot) then later\nyou can have full copy of snapshot backup data.\nif you wanted to backup in later day, you can use rsync then it will copy\nfaster because rsync only copy the difference, rather than copy all the\ndata.\n\nmy latter explanation is: use pg_basebackup, it will do it automatically\nfor you.\n\nCMIIW,\n\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Sun, Jan 22, 2017 at 9:55 PM, Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> * julyanto SUTANDANG ([email protected]) wrote:\n> > Best practice in doing full backup is using RSYNC, but before you can\n> copy\n> > the DATADIR, you might you pg_start_backup to tell the server not to\n> write\n> > into the DATADIR, because you are copying that data. After finished copy\n> > all the data in DATADIR, you can ask server to continue flushing the data\n> > from logs, by commanding pg_stop_backup. Remember, not to copy the XLOG\n> > dir.\n>\n> Whoah. That is not, at all, correct, if I'm understanding what you're\n> suggesting.\n>\n> PG most certainly *does* continue to write into the data directory even\n> after pg_start_backup() has been run. You *must* use archive_command or\n> pg_receivexlog to capture all of the WAL during the backup to have a\n> consistent backup.\n>\n> > There are another way more simpler, which is applying command\n> > pg_basebackup, which actually did that way in simpler version.\n>\n> pg_basebackup has options to stream the WAL during the backup to capture\n> it, which is how it handles that.\n>\n> > if you did pg_dump, you wont get the exact copy of your data, and you\n> will\n> > take longer downtime to recover the backup data. By that way, recovering\n> is\n> > only starting up the postgres with that copy.\n>\n> pg_dump will generally take longer to do a restore, yes. 
Recovering\n> from a backup does require that a recovery.conf exists with a\n> restore_command that PG can use to get the WAL files it needs, or that\n> all of the WAL from the backup is in pg_xlog/pg_wal.\n>\n> Please do not claim that PG stops writing to the DATADIR or BASEDIR\n> after a pg_start_backup(), that is not correct and could lead to invalid\n> backups.\n>\n> Thanks!\n>\n> Stephen\n>",
"msg_date": "Sun, 22 Jan 2017 23:04:13 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Greetings,\n\n* julyanto SUTANDANG ([email protected]) wrote:\n> Please elaborate more of what you are saying. What i am saying is based on\n> the Official Docs, Forum and our own test. This is what we had to do to\n> save time, both backing up and restoring.\n> \n> https://www.postgresql.org/docs/9.6/static/functions-admin.html\n> \n> When PostgreSQL in the mode of Start Backup, PostgreSQL only writes to the\n> XLOG, then you can safely rsync / copy the base data (snapshot) then later\n> you can have full copy of snapshot backup data.\n\nYou are confusing two things.\n\nAfter calling pg_start_backup, you can safely copy the contents of the\ndata directory, that is correct.\n\nHowever, PostgreSQL *will* continue to write to the data directory.\nThat, however, is ok, because those changes will *also* be written into\nthe WAL and, after calling pg_start_backup(), you collect all of the\nWAL using archive_command or pg_receivexlog. With all of the WAL\nwhich was created during the backup, PG will be able to recover from the\nchanges made during the backup to the data directory, but you *must*\nhave all of that WAL, or the backup will be inconsistent because of\nthose changes that were made to the data directory after\npg_start_backup() was called.\n\nIn other words, if you aren't using pg_receivexlog or archive_command,\nyour backups are invalid.\n\n> if you wanted to backup in later day, you can use rsync then it will copy\n> faster because rsync only copy the difference, rather than copy all the\n> data.\n\nThis is *also* incorrect. rsync, by itself, is *not* safe to use for\ndoing that kind of incremental backup, unless you enable checksums. The\nreason for this is that rsync has only a 1-second level granularity and\nit is possible (unlikely, though it has been demonstrated) to miss\nchanges made to a file within that 1-second window.\n\n> my latter explanation is: use pg_basebackup, it will do it automatically\n> for you.\n\nYes, if you are unsure about how to perform a safe backup properly,\nusing pg_basebackup or one of the existing backup tools is, by far, the\nbest approach. Attempting to roll your own backup system based on rsync\nis not something I am comfortable recommending any more because it is\n*not* simple to do correctly.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 22 Jan 2017 11:37:48 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
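If rsync is used for the file copies anyway, the timestamp problem described above can be sidestepped by forcing content comparison. The paths are illustrative, and this does not remove the need to keep all WAL generated during each backup.

    # -c / --checksum makes rsync compare file contents instead of mtime+size,
    # avoiding the 1-second-granularity window discussed above
    # (at the cost of reading every file on both sides).
    rsync -a --checksum --delete /var/lib/pgsql/9.6/data/ /backup/base/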
{
"msg_contents": "Hi Stephen,\n\n> > When PostgreSQL in the mode of Start Backup, PostgreSQL only writes to\n> the\n> > XLOG, then you can safely rsync / copy the base data (snapshot) then\n> later\n> > you can have full copy of snapshot backup data.\n>\n> You are confusing two things.\n>\n> After calling pg_start_backup, you can safely copy the contents of the\n> data directory, that is correct.\n\n\n> However, PostgreSQL *will* continue to write to the data directory.\n> That, however, is ok, because those changes will *also* be written into\n> the WAL and, after calling pg_start_backup(), you collect all of the\n> WAL using archive_command or pg_receivexlog.\n\nThanks for elaborating this Information, this is new, so whatever it is the\nprocedure is *Correct and Workable*.\n\n\n> With all of the WAL\n> which was created during the backup, PG will be able to recover from the\n> changes made during the backup to the data directory, but you *must*\n> have all of that WAL, or the backup will be inconsistent because of\n>\n\nThat is rather out of question, because all what we discuss here is just\ndoing full/snapshot backup.\nThe backup is Full Backup or Snapshot and it will work whenever needed.\nWe are not saying about Incremental Backup yet.\nAlong with collecting the XLOG File, you can have incremental backup and\nhaving complete continuous data backup.\nin this case, Stephen is suggesting on using pg_receivexlog or\narchive_command\n(everything here is actually explained well on the docs))\n\n\nthose changes that were made to the data directory after\n> pg_start_backup() was called.\n>\n> In other words, if you aren't using pg_receivexlog or archive_command,\n> your backups are invalid.\n>\nI doubt that *invalid* here is a valid word\nIn term of snapshot backup and as long as the snapshot can be run, that is\nvalid, isn't it?\n\n> if you wanted to backup in later day, you can use rsync then it will copy\n> > faster because rsync only copy the difference, rather than copy all the\n> > data.\n>\n> This is *also* incorrect. rsync, by itself, is *not* safe to use for\n> doing that kind of incremental backup, unless you enable checksums. The\n> reason for this is that rsync has only a 1-second level granularity and\n> it is possible (unlikely, though it has been demonstrated) to miss\n> changes made to a file within that 1-second window.\n>\nAs long as that is not XLOG file, anyway.. as you are saying that wouldn't\nbe a problem since actually we can run the XLOG for recovery. .\n\n\n>\n> > my latter explanation is: use pg_basebackup, it will do it automatically\n> > for you.\n>\n> Yes, if you are unsure about how to perform a safe backup properly,\n> using pg_basebackup or one of the existing backup tools is, by far, the\n> best approach. 
Attempting to roll your own backup system based on rsync\n> is not something I am comfortable recommending any more because it is\n> *not* simple to do correctly.\n>\nOK, that is fine, and actually we are using that.\nthe reason why i explain about start_backup and stop_backup is to give a\ngradual understand, and hoping that people will get the mechanism in the\nback understandable.\n\n\n>\n> Thanks!\n>\n> Thanks for your great explanation!\n\n\n> Stephen\n>",
"msg_date": "Mon, 23 Jan 2017 00:03:01 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Greetings,\n\n* julyanto SUTANDANG ([email protected]) wrote:\n> Thanks for elaborating this Information, this is new, so whatever it is the\n> procedure is *Correct and Workable*.\n\nBackups are extremely important, so I get quite concerned when people\nprovide incorrect information regarding them.\n\n> > With all of the WAL\n> > which was created during the backup, PG will be able to recover from the\n> > changes made during the backup to the data directory, but you *must*\n> > have all of that WAL, or the backup will be inconsistent because of\n> \n> That is rather out of question, because all what we discuss here is just\n> doing full/snapshot backup.\n\nIt's unclear what you mean by 'out of question' or why you believe that\nit matters if it's a full backup or not.\n\nAny backup of PG *must* include all of the WAL that was created during\nthe backup.\n\n> The backup is Full Backup or Snapshot and it will work whenever needed.\n> We are not saying about Incremental Backup yet.\n> Along with collecting the XLOG File, you can have incremental backup and\n> having complete continuous data backup.\n> in this case, Stephen is suggesting on using pg_receivexlog or\n> archive_command\n> (everything here is actually explained well on the docs))\n\nNo, that is not correct. You must have the WAL for a full backup as\nwell. If I understand what you're suggesting, it's that WAL is only for\npoint-in-time-recovery, but that is *not* correct, WAL is required for\nrestoring a full backup to a consistent state.\n\n> those changes that were made to the data directory after\n> > pg_start_backup() was called.\n> >\n> > In other words, if you aren't using pg_receivexlog or archive_command,\n> > your backups are invalid.\n> >\n> I doubt that *invalid* here is a valid word\n> In term of snapshot backup and as long as the snapshot can be run, that is\n> valid, isn't it?\n\nIt's absolutely correct, you must have the WAL generated during your\nbackup or the backup is invalid.\n\nIf, what you mean by 'snapshot' is a *full-system atomic snapshot*,\nprovided by some layer lower than PostgreSQL that is *exactly* as if the\nmachine was physically turned off all at once, then, and *only* then,\ncan you be guaranteed that PG will be able to recover, but the reason\nfor that is because PG will go back to the last checkpoint that\nhappened, as recorded in pg_control, and replay all of the WAL in the\npg_xlog/pg_wal directory, which must all exist and be complete for all\ncommitted transaction because the WAL was sync'd to disk before the\ncommit was acknowledged and the WAL is not removed until after a\ncheckpoint has completed which has sync'd the data in the data directory\nout to the filesystem.\n\nThat's also known as 'crash recovery' and it works precisely because all\nof the WAL is available at the time of the event and we have a known\npoint to go back to (the checkpoint).\n\nDuring a backup, multiple checkpoints can occur and WAL will be removed\nfrom the pg_xlog/pg_wal directory during the backup; WAL which is\ncritical to the consistency of the database and which must be retained\nby the user because it must be used to perform WAL replay of the\ndatabase when restoring from the backup which was made.\n\n> > if you wanted to backup in later day, you can use rsync then it will copy\n> > > faster because rsync only copy the difference, rather than copy all the\n> > > data.\n> >\n> > This is *also* incorrect. 
rsync, by itself, is *not* safe to use for\n> > doing that kind of incremental backup, unless you enable checksums. The\n> > reason for this is that rsync has only a 1-second level granularity and\n> > it is possible (unlikely, though it has been demonstrated) to miss\n> > changes made to a file within that 1-second window.\n>\n> As long as that is not XLOG file, anyway.. as you are saying that wouldn't\n> be a problem since actually we can run the XLOG for recovery. .\n\nNo, that's also not correct, unless you keep all WAL since the *first*\nfull backup.\n\nThe 1-second window concern is regarding the validity of a subsequent\nincremental backup.\n\nThis is what happens, more-or-less:\n\n1- File datadir/A is copied by rsync\n2- backup starts, user retains all WAL during backup #1\n3- File datadir/A is copied by rsync in the same second as backup\n started\n4- File datadir/A is *subsequently* modified by PG and the data is\n written out to the filesystem, still within the same second as when\n the backup started\n5- The rsync finishes, the backup finishes, all WAL for backup #1 is\n retained, which includes the changes made to datadir/A during the\n backup. Everything is fine at this point for backup #1.\n\n6- A new, incremental, backup is started, called backup #2.\n7- rsync does *not* copy the file datadir/A because it was not\n subsequently changed by the user and the timestamp is the same,\n according to rsync's 1-second-level granularity.\n8- The WAL for backup #2 is retained, but it does not contain any of the\n changes which were made to datadir/A because *those* changes are in\n the WAL which was written out during backup #1\n9- backup #2 completes, with its WAL retainined\n10- At this point, backup #2 is an invalid backup.\n\nThis is not hypothetical, it's been shown to be possible to have this\nhappen.\n\n(side-note: this is all from memory, so perhaps there's a detail or two\nincorrect, but this is the gist of the issue)\n\n> > > my latter explanation is: use pg_basebackup, it will do it automatically\n> > > for you.\n> >\n> > Yes, if you are unsure about how to perform a safe backup properly,\n> > using pg_basebackup or one of the existing backup tools is, by far, the\n> > best approach. Attempting to roll your own backup system based on rsync\n> > is not something I am comfortable recommending any more because it is\n> > *not* simple to do correctly.\n>\n> OK, that is fine, and actually we are using that.\n\nYou must be sure to use one of the methods with pg_basebackup that keeps\nall of the WAL created during the full backup. That would be one of:\npg_basebackup -x, pg_basebackup -X stream, or pg_basebackup +\npg_receivexlog.\n\n> the reason why i explain about start_backup and stop_backup is to give a\n> gradual understand, and hoping that people will get the mechanism in the\n> back understandable.\n\nI'm more than happy to have people explaining about\npg_start/stop_backup, but I do have an issue when the explanation is\nincorrect and could cause a user to use a backup method which will\nresult in an invalid backup.\n\nThanks!\n\nStephen",
"msg_date": "Sun, 22 Jan 2017 12:32:45 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
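As a concrete example of the last point above, one of the self-contained invocations would look roughly like this; host, user and target directory are assumptions.

    # Plain-format base backup that streams the WAL it needs over a second connection
    pg_basebackup -h db-primary -U replicator -D /backup/base -X stream -P
    # On releases without -X stream, -x instead fetches the required WAL at the end of the backup.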
{
"msg_contents": "Hi All,\n\nEspecially for Stephen Frost, Thank you very much for your deeply\nexplanation and elaboration!\nAnyway, all has clear, i am not disagree with Stephen, i am the lucky one\nget in corrected by Expert like you.\nin short, please use pg_basebackup for getting snapshot and don't forget\nfor the WAL log to be archive also so we can get complete full and\nincremental backup. (that is better, rather than only occasional backup\nright?)\n\nSo this is anyway what we should do, in doing backup for PostgreSQL. by\nthis way, we can ensure \"D\" Durability of your data in Database across\ndisaster and across location, not only within an Instance.\n\nThanks,\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Mon, Jan 23, 2017 at 12:32 AM, Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> * julyanto SUTANDANG ([email protected]) wrote:\n> > Thanks for elaborating this Information, this is new, so whatever it is\n> the\n> > procedure is *Correct and Workable*.\n>\n> Backups are extremely important, so I get quite concerned when people\n> provide incorrect information regarding them.\n>\n> > > With all of the WAL\n> > > which was created during the backup, PG will be able to recover from\n> the\n> > > changes made during the backup to the data directory, but you *must*\n> > > have all of that WAL, or the backup will be inconsistent because of\n> >\n> > That is rather out of question, because all what we discuss here is just\n> > doing full/snapshot backup.\n>\n> It's unclear what you mean by 'out of question' or why you believe that\n> it matters if it's a full backup or not.\n>\n> Any backup of PG *must* include all of the WAL that was created during\n> the backup.\n>\n> > The backup is Full Backup or Snapshot and it will work whenever needed.\n> > We are not saying about Incremental Backup yet.\n> > Along with collecting the XLOG File, you can have incremental backup and\n> > having complete continuous data backup.\n> > in this case, Stephen is suggesting on using pg_receivexlog or\n> > archive_command\n> > (everything here is actually explained well on the docs))\n>\n> No, that is not correct. You must have the WAL for a full backup as\n> well. 
If I understand what you're suggesting, it's that WAL is only for\n> point-in-time-recovery, but that is *not* correct, WAL is required for\n> restoring a full backup to a consistent state.\n>\n> > those changes that were made to the data directory after\n> > > pg_start_backup() was called.\n> > >\n> > > In other words, if you aren't using pg_receivexlog or archive_command,\n> > > your backups are invalid.\n> > >\n> > I doubt that *invalid* here is a valid word\n> > In term of snapshot backup and as long as the snapshot can be run, that\n> is\n> > valid, isn't it?\n>\n> It's absolutely correct, you must have the WAL generated during your\n> backup or the backup is invalid.\n>\n> If, what you mean by 'snapshot' is a *full-system atomic snapshot*,\n> provided by some layer lower than PostgreSQL that is *exactly* as if the\n> machine was physically turned off all at once, then, and *only* then,\n> can you be guaranteed that PG will be able to recover, but the reason\n> for that is because PG will go back to the last checkpoint that\n> happened, as recorded in pg_control, and replay all of the WAL in the\n> pg_xlog/pg_wal directory, which must all exist and be complete for all\n> committed transaction because the WAL was sync'd to disk before the\n> commit was acknowledged and the WAL is not removed until after a\n> checkpoint has completed which has sync'd the data in the data directory\n> out to the filesystem.\n>\n> That's also known as 'crash recovery' and it works precisely because all\n> of the WAL is available at the time of the event and we have a known\n> point to go back to (the checkpoint).\n>\n> During a backup, multiple checkpoints can occur and WAL will be removed\n> from the pg_xlog/pg_wal directory during the backup; WAL which is\n> critical to the consistency of the database and which must be retained\n> by the user because it must be used to perform WAL replay of the\n> database when restoring from the backup which was made.\n>\n> > > if you wanted to backup in later day, you can use rsync then it will\n> copy\n> > > > faster because rsync only copy the difference, rather than copy all\n> the\n> > > > data.\n> > >\n> > > This is *also* incorrect. rsync, by itself, is *not* safe to use for\n> > > doing that kind of incremental backup, unless you enable checksums.\n> The\n> > > reason for this is that rsync has only a 1-second level granularity and\n> > > it is possible (unlikely, though it has been demonstrated) to miss\n> > > changes made to a file within that 1-second window.\n> >\n> > As long as that is not XLOG file, anyway.. as you are saying that\n> wouldn't\n> > be a problem since actually we can run the XLOG for recovery. .\n>\n> No, that's also not correct, unless you keep all WAL since the *first*\n> full backup.\n>\n> The 1-second window concern is regarding the validity of a subsequent\n> incremental backup.\n>\n> This is what happens, more-or-less:\n>\n> 1- File datadir/A is copied by rsync\n> 2- backup starts, user retains all WAL during backup #1\n> 3- File datadir/A is copied by rsync in the same second as backup\n> started\n> 4- File datadir/A is *subsequently* modified by PG and the data is\n> written out to the filesystem, still within the same second as when\n> the backup started\n> 5- The rsync finishes, the backup finishes, all WAL for backup #1 is\n> retained, which includes the changes made to datadir/A during the\n> backup. 
Everything is fine at this point for backup #1.\n>\n> 6- A new, incremental, backup is started, called backup #2.\n> 7- rsync does *not* copy the file datadir/A because it was not\n> subsequently changed by the user and the timestamp is the same,\n> according to rsync's 1-second-level granularity.\n> 8- The WAL for backup #2 is retained, but it does not contain any of the\n> changes which were made to datadir/A because *those* changes are in\n> the WAL which was written out during backup #1\n> 9- backup #2 completes, with its WAL retainined\n> 10- At this point, backup #2 is an invalid backup.\n>\n> This is not hypothetical, it's been shown to be possible to have this\n> happen.\n>\n> (side-note: this is all from memory, so perhaps there's a detail or two\n> incorrect, but this is the gist of the issue)\n>\n> > > > my latter explanation is: use pg_basebackup, it will do it\n> automatically\n> > > > for you.\n> > >\n> > > Yes, if you are unsure about how to perform a safe backup properly,\n> > > using pg_basebackup or one of the existing backup tools is, by far, the\n> > > best approach. Attempting to roll your own backup system based on\n> rsync\n> > > is not something I am comfortable recommending any more because it is\n> > > *not* simple to do correctly.\n> >\n> > OK, that is fine, and actually we are using that.\n>\n> You must be sure to use one of the methods with pg_basebackup that keeps\n> all of the WAL created during the full backup. That would be one of:\n> pg_basebackup -x, pg_basebackup -X stream, or pg_basebackup +\n> pg_receivexlog.\n>\n> > the reason why i explain about start_backup and stop_backup is to give a\n> > gradual understand, and hoping that people will get the mechanism in the\n> > back understandable.\n>\n> I'm more than happy to have people explaining about\n> pg_start/stop_backup, but I do have an issue when the explanation is\n> incorrect and could cause a user to use a backup method which will\n> result in an invalid backup.\n>\n> Thanks!\n>\n> Stephen\n>",
"msg_date": "Mon, 23 Jan 2017 00:56:23 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
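A minimal sketch of the pg_basebackup variant recommended above, assuming a hypothetical host, replication user and target directory; -X stream makes the backup self-contained by streaming the WAL generated while the copy runs:

    # base backup plus the WAL produced during it, so no separate archive is
    # needed just to restore this one backup
    pg_basebackup -h db1.example.com -U replication_user \
        -D /backups/base/2017-01-23 -X stream --checkpoint=fast -P -v

The user must be allowed to make a replication connection in pg_hba.conf for this to work.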
{
"msg_contents": "> 20 янв. 2017 г., в 19:59, Stephen Frost <[email protected]> написал(а):\n> \n>>> How are you testing your backups..? Do you have page-level checksums\n>>> enabled on your database? \n>> \n>> Yep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).\n> \n> Right, unfortunately that only checks the heap pages, it won't help with\n> corruption happening in an index file or other files which have a\n> checksum.\n\nThat’s fine for us because indexes could be rebuilt. The main idea is the guarantee that data would not be lost.\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n20 янв. 2017 г., в 19:59, Stephen Frost <[email protected]> написал(а):How are you testing your backups..? Do you have page-level checksumsenabled on your database? Yep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).Right, unfortunately that only checks the heap pages, it won't help withcorruption happening in an index file or other files which have achecksum.That’s fine for us because indexes could be rebuilt. The main idea is the guarantee that data would not be lost.\n--May the force be with you…https://simply.name",
"msg_date": "Mon, 23 Jan 2017 09:44:25 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
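A rough sketch of the per-table check described above, assuming a superuser connection and a hypothetical $DB variable; COPY ... TO '/dev/null' reads every heap page, so a page-checksum failure shows up as an error and a non-zero exit code:

    # read every user table in full and report any that fail
    for tbl in $(psql -At -d "$DB" -c "SELECT quote_ident(schemaname) || '.' || quote_ident(tablename) FROM pg_tables WHERE schemaname NOT IN ('pg_catalog', 'information_schema')"); do
        psql -d "$DB" -c "COPY $tbl TO '/dev/null'" || echo "verification failed: $tbl"
    done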
{
"msg_contents": "Vladimir,\n\n* Vladimir Borodin ([email protected]) wrote:\n> > 20 янв. 2017 г., в 19:59, Stephen Frost <[email protected]> написал(а):\n> >>> How are you testing your backups..? Do you have page-level checksums\n> >>> enabled on your database? \n> >> \n> >> Yep, we use checksums. We restore latest backup with recovery_target = 'immediate' and do COPY tablename TO '/dev/null’ with checking exit code for each table in each database (in several threads, of course).\n> > \n> > Right, unfortunately that only checks the heap pages, it won't help with\n> > corruption happening in an index file or other files which have a\n> > checksum.\n> \n> That’s fine for us because indexes could be rebuilt. The main idea is the guarantee that data would not be lost.\n\nFair enough, however, if you don't check that the indexes are valid then\nyou could end up with corruption in the database if you start operating\nagainst such a recovered database.\n\nConsider what happens on a 'primary key' lookup or insert- if the index\ndoesn't find a conflicting tuple (perhaps because the index is corrupt),\nthen it will happily allow the INSERT to go through, even though it\nshould have been prevented.\n\nIndexes are a *really* important component to having a valid database.\nIf you aren't checking the validity of them, then you're running the\nrisk of being exposed to corruption in them, either ongoing or when\nrestoring.\n\nIf you *always* rebuild your indexes when restoring from a backup, then\nyou should be fine, of course, but if you're going to do that then you\nmight consider using pg_dump instead, which would do that and validate\nall foreign key references too.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 01:48:39 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
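If indexes are always rebuilt after a restore, as discussed above, that step could look roughly like this (database name and paths are hypothetical); pg_dump is the alternative mentioned, since restoring a logical dump rebuilds all indexes and re-checks foreign keys:

    # rebuild every index in the restored database so index corruption cannot linger
    psql -d mydb -c "REINDEX DATABASE mydb;"

    # or take periodic logical dumps in the custom format
    pg_dump -Fc -f /backups/mydb.dump mydb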
{
"msg_contents": "Hello,\n\n>>> Increments in pgbackrest are done on file level which is not really\n>>> efficient. We have done parallelism, compression and page-level\n>>> increments (9.3+) in barman fork [1], but unfortunately guys from\n>>> 2ndquadrant-it don’t hurry to work on it.\n>>\n>> We're looking at page-level incremental backup in pgbackrest also. For\n>> larger systems, we've not heard too much complaining about it being\n>> file-based though, which is why it hasn't been a priority. Of course,\n>> the OP is on 9.1 too, so.\n>\n> Well, we have forked barman and made everything from the above just\n> because we needed ~ 2 PB of disk space for storing backups for our ~ 300\n> TB of data. (Our recovery window is 7 days) And on 5 TB database it took\n> a lot of time to make/restore a backup.\n\nI just have around 11 TB but switched to ZFS based backups only. I'm \nusing snapshots therefore which gives some flexibility. I can rolback \nthem, i can just clone it and work with a full copy as a different \ncluster (and just the differences are stored) and i can send them \nincrementally to other servers. This is very fine for my use cases but \nit doesn't fit everything of course.\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 09:31:38 +0100",
"msg_from": "Torsten Zuehlsdorff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
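A sketch of the ZFS workflow described above, with hypothetical pool and dataset names; note that the snapshot is only atomic for the cluster if the entire data directory lives on that one dataset:

    # atomic snapshot of the dataset holding the data directory
    zfs snapshot tank/pgdata@backup-2017-01-23

    # writable clone that can be brought up as a separate cluster; only differences are stored
    zfs clone tank/pgdata@backup-2017-01-23 tank/pgdata-test

    # ship only the changes since the previous snapshot to another server
    # (the previous snapshot must already exist on the receiving side)
    zfs send -i tank/pgdata@backup-2017-01-22 tank/pgdata@backup-2017-01-23 | \
        ssh standby zfs receive tank/pgdata-copy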
{
"msg_contents": "On 1/20/17 9:06 AM, Stephen Frost wrote:\n> All the pages are the same size, so I'm surprised you didn't consider\n> just having a format along the lines of: magic+offset+page,\n> magic+offset+page, magic+offset+page, etc...\n\nKeep in mind that if you go that route you need to accommodate BLKSZ <> \n8192.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 09:16:38 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "* Jim Nasby ([email protected]) wrote:\n> On 1/20/17 9:06 AM, Stephen Frost wrote:\n> >All the pages are the same size, so I'm surprised you didn't consider\n> >just having a format along the lines of: magic+offset+page,\n> >magic+offset+page, magic+offset+page, etc...\n> \n> Keep in mind that if you go that route you need to accommodate BLKSZ\n> <> 8192.\n\nIf you want my 2c on that, running with BLKSZ <> 8192 is playing with\nfire, or at least running with scissors.\n\nThat said, yes, the code should either barf when BLKSZ <> 8192 in a very\nclear way early on, or handle it correctly, and be tested with such\nconfigurations.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 10:27:22 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On 1/22/17 11:32 AM, Stephen Frost wrote:\n> The 1-second window concern is regarding the validity of a subsequent\n> incremental backup.\n\nBTW, there's a simpler scenario here:\n\nPostgres touches file.\nrsync notices file has different timestamp, starts copying.\nPostgres touches file again.\n\nIf those 3 steps happen in the same second, you now have an invalid \nbackup. There's probably other scenarios as well.\n\nIn short, if you're using rsync, it's *critical* that you give it the \n--checksum option, which tells rsync to ignore file size and timestamp.\n\n>>>> my latter explanation is: use pg_basebackup, it will do it automatically\n>>>> for you.\n>>> Yes, if you are unsure about how to perform a safe backup properly,\n>>> using pg_basebackup or one of the existing backup tools is, by far, the\n>>> best approach. Attempting to roll your own backup system based on rsync\n>>> is not something I am comfortable recommending any more because it is\n>>> *not* simple to do correctly.\n>> OK, that is fine, and actually we are using that.\n> You must be sure to use one of the methods with pg_basebackup that keeps\n> all of the WAL created during the full backup. That would be one of:\n> pg_basebackup -x, pg_basebackup -X stream, or pg_basebackup +\n> pg_receivexlog.\n>\n>> the reason why i explain about start_backup and stop_backup is to give a\n>> gradual understand, and hoping that people will get the mechanism in the\n>> back understandable.\n> I'm more than happy to have people explaining about\n> pg_start/stop_backup, but I do have an issue when the explanation is\n> incorrect and could cause a user to use a backup method which will\n> result in an invalid backup.\n\nThe other *critical* thing with PITR backups: you must test EVERY backup \nthat you take. No test == no backup. There's far, far too many things \nthat can go wrong, especially if you're rolling your own tool.\n\nThe complexities around PITR are why I always recommend also using \npg_dump on a periodic (usually weekly) basis as part of your full DR \nstrategy. You'll probably never use the pg_dump backups, but (in most \ncases) they're a really cheap insurance policy.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 09:28:34 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
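To make the rsync warning concrete, a hedged example with hypothetical paths; --checksum forces rsync to compare file contents rather than trusting size and modification time:

    # without --checksum, a file modified twice within the same second can be
    # skipped on a later run, silently invalidating the incremental copy
    rsync --archive --checksum --delete \
        /var/lib/postgresql/9.3/main/ backuphost:/backups/pgdata/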
{
"msg_contents": "On 1/23/17 9:27 AM, Stephen Frost wrote:\n> If you want my 2c on that, running with BLKSZ <> 8192 is playing with\n> fire, or at least running with scissors.\n\nI've never seen it myself, but I'm under the impression that it's not \nunheard of for OLAP environments. Given how sensitive PG is to IO \nlatency a larger block size could theoretically mean a big performance \nimprovement in some scenarios.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting, Austin TX\nExperts in Analytics, Data Architecture and PostgreSQL\nData in Trouble? Get it in Treble! http://BlueTreble.com\n855-TREBLE2 (855-873-2532)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 09:31:29 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost <[email protected]> wrote:\n\n> Greetings,\n>\n> * julyanto SUTANDANG ([email protected]) wrote:\n> > CORRECTION:\n> >\n> > \"you might you pg_start_backup to tell the server not to write into the\n> > DATADIR\"\n> >\n> > become\n> >\n> > \"you might *use* pg_start_backup to tell the server not to write into the\n> > *BASEDIR*, actually server still writes but only to XLOGDIR \"\n>\n> Just to make sure anyone reading the mailing list archives isn't\n> confused, running pg_start_backup does *not* make PG stop writing to\n> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n> data into BASEDIR after pg_start_backup has been called.\n>\n\n\nCorrect. Unfortunately it is a very common myth that it does cause\nPostgreSQL to stop writing to the base dir.\n\n\n>\n> The only thing that pg_start_backup does is identify an entry in the WAL\n> stream, from which point all WAL must be replayed when restoring the\n> backup. All WAL generated from that point (pg_start_backup point) until\n> the pg_stop_backup point *must* be replayed when restoring the backup or\n> the database will not be consistent.\n>\n\npg_start_backup also forces full_page_writes to be effectively 'on' for the\nduration of the backup, if it is not already explicitly on (which it\nusually will already be). This affects pg_xlog, of course, not base. But\nit is an essential step for people who run with full_page_writes=off, as it\nensures that anything in base which got changed mid-copy will be fixed up\nduring replay of the WAL.\n\n\nCheers,\n\nJeff\n\nOn Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost <[email protected]> wrote:Greetings,\n\n* julyanto SUTANDANG ([email protected]) wrote:\n> CORRECTION:\n>\n> \"you might you pg_start_backup to tell the server not to write into the\n> DATADIR\"\n>\n> become\n>\n> \"you might *use* pg_start_backup to tell the server not to write into the\n> *BASEDIR*, actually server still writes but only to XLOGDIR \"\n\nJust to make sure anyone reading the mailing list archives isn't\nconfused, running pg_start_backup does *not* make PG stop writing to\nBASEDIR (or DATADIR, or anything, really). PG *will* continue to write\ndata into BASEDIR after pg_start_backup has been called.Correct. Unfortunately it is a very common myth that it does cause PostgreSQL to stop writing to the base dir. \n\nThe only thing that pg_start_backup does is identify an entry in the WAL\nstream, from which point all WAL must be replayed when restoring the\nbackup. All WAL generated from that point (pg_start_backup point) until\nthe pg_stop_backup point *must* be replayed when restoring the backup or\nthe database will not be consistent.pg_start_backup also forces full_page_writes to be effectively 'on' for the duration of the backup, if it is not already explicitly on (which it usually will already be). This affects pg_xlog, of course, not base. But it is an essential step for people who run with full_page_writes=off, as it ensures that anything in base which got changed mid-copy will be fixed up during replay of the WAL. Cheers,Jeff",
"msg_date": "Mon, 23 Jan 2017 09:12:22 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
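Purely to illustrate the behaviour Jeff describes, and not as a recommended recipe, the low-level exclusive-backup calls in the 9.x series look like this; writes to the data directory continue the whole time between the two calls:

    # request an immediate checkpoint and record where WAL replay must begin
    psql -c "SELECT pg_start_backup('example_label', true);"

    # ... copy the data directory with an external tool while PostgreSQL keeps writing ...

    # record the WAL position up to which replay is required for a consistent restore
    psql -c "SELECT pg_stop_backup();"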
{
"msg_contents": "Greetings,\n\n* Torsten Zuehlsdorff ([email protected]) wrote:\n> I just have around 11 TB but switched to ZFS based backups only. I'm\n> using snapshots therefore which gives some flexibility. I can\n> rolback them, i can just clone it and work with a full copy as a\n> different cluster (and just the differences are stored) and i can\n> send them incrementally to other servers. This is very fine for my\n> use cases but it doesn't fit everything of course.\n\nWhile that's another approach, it does require that those snapshots are\nperformed correctly (both by you and by ZFS) and are entirely atomic to\nthe entire PG instance.\n\nFor example, I don't believe ZFS snapshots will be atomic if multiple\nZFS filesystems on independent ZFS pools are being used underneath a\nsingle PG instance.\n\nAnd, as others have also said, always test, test, test.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 12:20:54 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "* Jeff Janes ([email protected]) wrote:\n> On Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost <[email protected]> wrote:\n> > Just to make sure anyone reading the mailing list archives isn't\n> > confused, running pg_start_backup does *not* make PG stop writing to\n> > BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n> > data into BASEDIR after pg_start_backup has been called.\n> \n> Correct. Unfortunately it is a very common myth that it does cause\n> PostgreSQL to stop writing to the base dir.\n\nI would love a way to dispel that myth. :/\n\nIf you have any suggestions of how we could improve the docs, I'd\ncertainly be happy to take a look and try to help.\n\n> > The only thing that pg_start_backup does is identify an entry in the WAL\n> > stream, from which point all WAL must be replayed when restoring the\n> > backup. All WAL generated from that point (pg_start_backup point) until\n> > the pg_stop_backup point *must* be replayed when restoring the backup or\n> > the database will not be consistent.\n> \n> pg_start_backup also forces full_page_writes to be effectively 'on' for the\n> duration of the backup, if it is not already explicitly on (which it\n> usually will already be). This affects pg_xlog, of course, not base. But\n> it is an essential step for people who run with full_page_writes=off, as it\n> ensures that anything in base which got changed mid-copy will be fixed up\n> during replay of the WAL.\n\nAgreed.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 12:23:01 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On Mon, Jan 23, 2017 at 7:28 AM, Jim Nasby <[email protected]> wrote:\n\n> On 1/22/17 11:32 AM, Stephen Frost wrote:\n>\n>> The 1-second window concern is regarding the validity of a subsequent\n>> incremental backup.\n>>\n>\n> BTW, there's a simpler scenario here:\n>\n> Postgres touches file.\n> rsync notices file has different timestamp, starts copying.\n> Postgres touches file again.\n>\n> If those 3 steps happen in the same second, you now have an invalid\n> backup. There's probably other scenarios as well.\n>\n\nTo be clear, you don't have an invalid backup *now*, as replay of the WAL\nwill fix it up. You will have an invalid backup next time you take a\nbackup, using a copy of the backup you just took now as the rsync\ndestination of that future backup.\n\nIf you were to actually fire up a copy of the backup and go through\nrecovery, then shut it down, and then use that post-recovery copy as the\ndestination of the rsync, would that eliminate the risk (barring clock skew\nbetween systems)?\n\n\n> In short, if you're using rsync, it's *critical* that you give it the\n> --checksum option, which tells rsync to ignore file size and timestamp.\n\n\nWhich unfortunately obliterates much of the point of using rsync for many\npeople. You can still save on bandwidth, but not on local IO on each end.\n\nCheers,\n\nJeff\n\nOn Mon, Jan 23, 2017 at 7:28 AM, Jim Nasby <[email protected]> wrote:On 1/22/17 11:32 AM, Stephen Frost wrote:\n\nThe 1-second window concern is regarding the validity of a subsequent\nincremental backup.\n\n\nBTW, there's a simpler scenario here:\n\nPostgres touches file.\nrsync notices file has different timestamp, starts copying.\nPostgres touches file again.\n\nIf those 3 steps happen in the same second, you now have an invalid backup. There's probably other scenarios as well.To be clear, you don't have an invalid backup *now*, as replay of the WAL will fix it up. You will have an invalid backup next time you take a backup, using a copy of the backup you just took now as the rsync destination of that future backup.If you were to actually fire up a copy of the backup and go through recovery, then shut it down, and then use that post-recovery copy as the destination of the rsync, would that eliminate the risk (barring clock skew between systems)?\n\nIn short, if you're using rsync, it's *critical* that you give it the --checksum option, which tells rsync to ignore file size and timestamp.Which unfortunately obliterates much of the point of using rsync for many people. You can still save on bandwidth, but not on local IO on each end.Cheers,Jeff",
"msg_date": "Mon, 23 Jan 2017 09:32:28 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n\n>> Just to make sure anyone reading the mailing list archives isn't\n>> confused, running pg_start_backup does *not* make PG stop writing to\n>> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n>> data into BASEDIR after pg_start_backup has been called.\n>\n>\n>\n> Correct. Unfortunately it is a very common myth that it does cause\n> PostgreSQL to stop writing to the base dir.\n\nNever heard that one before. Wow. Who's been saying that?\n\nIt's taken me years to hunt down all invalid backup memes and terminate them.\n\nNever fails to surprise me how many people don't read the docs.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Jan 2017 17:43:56 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "* Jeff Janes ([email protected]) wrote:\n> On Mon, Jan 23, 2017 at 7:28 AM, Jim Nasby <[email protected]> wrote:\n> > On 1/22/17 11:32 AM, Stephen Frost wrote:\n> >> The 1-second window concern is regarding the validity of a subsequent\n> >> incremental backup.\n> >\n> > BTW, there's a simpler scenario here:\n> >\n> > Postgres touches file.\n> > rsync notices file has different timestamp, starts copying.\n> > Postgres touches file again.\n> >\n> > If those 3 steps happen in the same second, you now have an invalid\n> > backup. There's probably other scenarios as well.\n\nAh, yeah, I think the outline I had was why we decided that even a file\nwith the same timestamp as the start of the backup couldn't be trusted.\n\n> To be clear, you don't have an invalid backup *now*, as replay of the WAL\n> will fix it up. You will have an invalid backup next time you take a\n> backup, using a copy of the backup you just took now as the rsync\n> destination of that future backup.\n\nCorrect.\n\n> If you were to actually fire up a copy of the backup and go through\n> recovery, then shut it down, and then use that post-recovery copy as the\n> destination of the rsync, would that eliminate the risk (barring clock skew\n> between systems)?\n\nI believe it would *change* things, but not eliminate the risk- consider\nthis: what's the timestamp going to be on the files that were modified\nthrough WAL recovery? It would be *after* the backup was done. I\nbelieve (but not sure) that rsync will still copy the file if there's\nany difference in timestamp, but it's technically possible that you\ncould get really unlikely and have the same post-backup timestamp as the\nfile ends up having when the following backup is done, meaning that the\nfile isn't copied even though its contents are no longer the same (the\nprimary server's copy has whatever was written to that file in the same\nsecond that the restored server was writing the WAL replay into the\nfile).\n\nAdmittedly, that's pretty unlikely, but it's not impossible and that's\nwhere you can get into *serious* trouble because it becomes darn near\nimpossible to figure out what the heck went wrong, and that's just not\ncool with backups.\n\nDo it properly, or use something that does. This isn't where you want\nto be playing fast-and-loose.\n\n> > In short, if you're using rsync, it's *critical* that you give it the\n> > --checksum option, which tells rsync to ignore file size and timestamp.\n> \n> Which unfortunately obliterates much of the point of using rsync for many\n> people. You can still save on bandwidth, but not on local IO on each end.\n\nNo, it means that rsync is *not* a good tool for doing incremental\nbackups of PG. Would be great if we could get more people to understand\nthat.\n\n'cp' is an equally inappropriate and bad tool for doing WAL archiving,\nbtw. Would be great if our docs were clear on that.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 12:47:37 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
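As a sketch of the pg_receivexlog approach mentioned earlier in the thread (host, user and directory are hypothetical), WAL is streamed over a replication connection into an archive directory instead of being copied file by file:

    # run continuously (for example under a service manager) so no segment is missed
    pg_receivexlog -h db1.example.com -U replication_user -D /backups/wal_archive

On the 9.x releases discussed here the tool is called pg_receivexlog; it was renamed pg_receivewal in PostgreSQL 10.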
{
"msg_contents": "* Simon Riggs ([email protected]) wrote:\n> On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n> >> Just to make sure anyone reading the mailing list archives isn't\n> >> confused, running pg_start_backup does *not* make PG stop writing to\n> >> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n> >> data into BASEDIR after pg_start_backup has been called.\n> >\n> > Correct. Unfortunately it is a very common myth that it does cause\n> > PostgreSQL to stop writing to the base dir.\n> \n> Never heard that one before. Wow. Who's been saying that?\n\nWell, this conversation started because of such a comment, so at least\none person on this thread (though I believe that's been clarified\nsufficiently now).\n\n> It's taken me years to hunt down all invalid backup memes and terminate them.\n\nA never-ending and thankless task, so, my thanks to you for your\nefforts. :)\n\n> Never fails to surprise me how many people don't read the docs.\n\n+1MM.\n\nThanks again!\n\nStephen",
"msg_date": "Mon, 23 Jan 2017 12:49:43 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Dear Jeff,\n\nThanks for the correction and by this email, we hope that myth has gone\nforever :)\nWill do that to inform other about this matter.\n\nAnd agree with all of us here that: using pg_basebackup is the best\napproach rather than do it manually through pg_start_backup, right?\n\nThanks and Regards,\n\nJul.\n\n\n\nJulyanto SUTANDANG\n\nEqunix Business Solutions, PT\n(An Open Source and Open Mind Company)\nwww.equnix.co.id\nPusat Niaga ITC Roxy Mas Blok C2/42. Jl. KH Hasyim Ashari 125, Jakarta\nPusat\nT: +6221 22866662 F: +62216315281 M: +628164858028\n\n\nCaution: The information enclosed in this email (and any attachments) may\nbe legally privileged and/or confidential and is intended only for the use\nof the addressee(s). No addressee should forward, print, copy, or otherwise\nreproduce this message in any manner that would allow it to be viewed by\nany individual not originally listed as a recipient. If the reader of this\nmessage is not the intended recipient, you are hereby notified that any\nunauthorized disclosure, dissemination, distribution, copying or the taking\nof any action in reliance on the information herein is strictly prohibited.\nIf you have received this communication in error, please immediately notify\nthe sender and delete this message.Unless it is made by the authorized\nperson, any views expressed in this message are those of the individual\nsender and may not necessarily reflect the views of PT Equnix Business\nSolutions.\n\nOn Tue, Jan 24, 2017 at 12:12 AM, Jeff Janes <[email protected]> wrote:\n\n> On Sun, Jan 22, 2017 at 6:57 AM, Stephen Frost <[email protected]> wrote:\n>\n>> Greetings,\n>>\n>> * julyanto SUTANDANG ([email protected]) wrote:\n>> > CORRECTION:\n>> >\n>> > \"you might you pg_start_backup to tell the server not to write into the\n>> > DATADIR\"\n>> >\n>> > become\n>> >\n>> > \"you might *use* pg_start_backup to tell the server not to write into\n>> the\n>> > *BASEDIR*, actually server still writes but only to XLOGDIR \"\n>>\n>> Just to make sure anyone reading the mailing list archives isn't\n>> confused, running pg_start_backup does *not* make PG stop writing to\n>> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n>> data into BASEDIR after pg_start_backup has been called.\n>>\n>\n>\n> Correct. Unfortunately it is a very common myth that it does cause\n> PostgreSQL to stop writing to the base dir.\n>\n>\n>>\n>> The only thing that pg_start_backup does is identify an entry in the WAL\n>> stream, from which point all WAL must be replayed when restoring the\n>> backup. All WAL generated from that point (pg_start_backup point) until\n>> the pg_stop_backup point *must* be replayed when restoring the backup or\n>> the database will not be consistent.\n>>\n>\n> pg_start_backup also forces full_page_writes to be effectively 'on' for\n> the duration of the backup, if it is not already explicitly on (which it\n> usually will already be). This affects pg_xlog, of course, not base. But\n> it is an essential step for people who run with full_page_writes=off, as it\n> ensures that anything in base which got changed mid-copy will be fixed up\n> during replay of the WAL.\n>\n>\n> Cheers,\n>\n> Jeff\n>\n\nDear Jeff, Thanks for the correction and by this email, we hope that myth has gone forever :)Will do that to inform other about this matter.And agree with all of us here that: using pg_basebackup is the best approach rather than do it manually through pg_start_backup, right? Thanks and Regards,Jul. 
",
"msg_date": "Tue, 24 Jan 2017 07:08:21 +0700",
"msg_from": "julyanto SUTANDANG <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs <[email protected]> wrote:\n\n> On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n>\n> >> Just to make sure anyone reading the mailing list archives isn't\n> >> confused, running pg_start_backup does *not* make PG stop writing to\n> >> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n> >> data into BASEDIR after pg_start_backup has been called.\n> >\n> >\n> >\n> > Correct. Unfortunately it is a very common myth that it does cause\n> > PostgreSQL to stop writing to the base dir.\n>\n> Never heard that one before. Wow. Who's been saying that?\n>\n> It's taken me years to hunt down all invalid backup memes and terminate\n> them.\n>\n> Never fails to surprise me how many people don't read the docs.\n>\n\nI've seen it on stackexchange, and a few times on the pgsql mailing lists,\nand talking to people in person. I've never traced it back some\n\"authoritative\" source who is making the claim, I think many people just\nindependently think up \"How would I implement pg_start_backup if I were\ndoing it\" and then come up with the same false conclusion, and then all\nreinforce each other.\n\nI don't think the docs are particularly clear on this. There is the comment\n\"Some file system backup tools emit warnings or errors if the files they\nare trying to copy change while the copy proceeds. When taking a base\nbackup of an active database, this situation is normal and not an error\"\nbut the reader could think that comment could apply to any of the files in\nthe datadirectory (in particular, pg_xlog), and could think that it doesn't\napply to the files in datadirectory/base in particular. In other words,\nonce they form the wrong understanding, the docs (if read) don't force them\nto change it, as they could interpret it in ways that are consistent.\n\nOf course the docs aren't a textbook and aren't trying to fully describe\nthe theory of operation; just give the people a recipe they can follow. But\npeople will make inferences from that recipe anyway. I don't know if it is\nworth trying preemptively dispel these mistakes in the docs.\n\nCheers,\n\nJeff\n\nOn Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs <[email protected]> wrote:On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n\n>> Just to make sure anyone reading the mailing list archives isn't\n>> confused, running pg_start_backup does *not* make PG stop writing to\n>> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n>> data into BASEDIR after pg_start_backup has been called.\n>\n>\n>\n> Correct. Unfortunately it is a very common myth that it does cause\n> PostgreSQL to stop writing to the base dir.\n\nNever heard that one before. Wow. Who's been saying that?\n\nIt's taken me years to hunt down all invalid backup memes and terminate them.\n\nNever fails to surprise me how many people don't read the docs.I've seen it on stackexchange, and a few times on the pgsql mailing lists, and talking to people in person. I've never traced it back some \"authoritative\" source who is making the claim, I think many people just independently think up \"How would I implement pg_start_backup if I were doing it\" and then come up with the same false conclusion, and then all reinforce each other.I don't think the docs are particularly clear on this. There is the comment \"Some file system backup tools emit warnings or errors if the files they are trying to copy change while the copy proceeds. 
When taking a base backup of an active database, this situation is normal and not an error\" but the reader could think that comment could apply to any of the files in the datadirectory (in particular, pg_xlog), and could think that it doesn't apply to the files in datadirectory/base in particular. In other words, once they form the wrong understanding, the docs (if read) don't force them to change it, as they could interpret it in ways that are consistent.Of course the docs aren't a textbook and aren't trying to fully describe the theory of operation; just give the people a recipe they can follow. But people will make inferences from that recipe anyway. I don't know if it is worth trying preemptively dispel these mistakes in the docs.Cheers,Jeff",
"msg_date": "Tue, 24 Jan 2017 07:55:41 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Actually, I think this may be the way Oracle Hot Backups work. It was my\nimpression that feature temporarily suspends writes into a specific\ntablespace so you can take a snapshot of it. It has been a few years since\nI've had to do Oracle work though and I could be mis-remembering. People\nmay be confusing Oracle and PostgreSQL.\n\n\nOn Tue, Jan 24, 2017 at 10:55 AM, Jeff Janes <[email protected]> wrote:\n\n> On Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs <[email protected]>\n> wrote:\n>\n>> On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n>>\n>> >> Just to make sure anyone reading the mailing list archives isn't\n>> >> confused, running pg_start_backup does *not* make PG stop writing to\n>> >> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n>> >> data into BASEDIR after pg_start_backup has been called.\n>> >\n>> >\n>> >\n>> > Correct. Unfortunately it is a very common myth that it does cause\n>> > PostgreSQL to stop writing to the base dir.\n>>\n>> Never heard that one before. Wow. Who's been saying that?\n>>\n>> It's taken me years to hunt down all invalid backup memes and terminate\n>> them.\n>>\n>> Never fails to surprise me how many people don't read the docs.\n>>\n>\n> I've seen it on stackexchange, and a few times on the pgsql mailing lists,\n> and talking to people in person. I've never traced it back some\n> \"authoritative\" source who is making the claim, I think many people just\n> independently think up \"How would I implement pg_start_backup if I were\n> doing it\" and then come up with the same false conclusion, and then all\n> reinforce each other.\n>\n> I don't think the docs are particularly clear on this. There is the\n> comment \"Some file system backup tools emit warnings or errors if the\n> files they are trying to copy change while the copy proceeds. When taking a\n> base backup of an active database, this situation is normal and not an\n> error\" but the reader could think that comment could apply to any of the\n> files in the datadirectory (in particular, pg_xlog), and could think that\n> it doesn't apply to the files in datadirectory/base in particular. In\n> other words, once they form the wrong understanding, the docs (if read)\n> don't force them to change it, as they could interpret it in ways that are\n> consistent.\n>\n> Of course the docs aren't a textbook and aren't trying to fully describe\n> the theory of operation; just give the people a recipe they can follow. But\n> people will make inferences from that recipe anyway. I don't know if it is\n> worth trying preemptively dispel these mistakes in the docs.\n>\n> Cheers,\n>\n> Jeff\n>\n>\n\nActually, I think this may be the way Oracle Hot Backups work. It was my impression that feature temporarily suspends writes into a specific tablespace so you can take a snapshot of it. It has been a few years since I've had to do Oracle work though and I could be mis-remembering. People may be confusing Oracle and PostgreSQL.On Tue, Jan 24, 2017 at 10:55 AM, Jeff Janes <[email protected]> wrote:On Mon, Jan 23, 2017 at 9:43 AM, Simon Riggs <[email protected]> wrote:On 23 January 2017 at 17:12, Jeff Janes <[email protected]> wrote:\n\n>> Just to make sure anyone reading the mailing list archives isn't\n>> confused, running pg_start_backup does *not* make PG stop writing to\n>> BASEDIR (or DATADIR, or anything, really). PG *will* continue to write\n>> data into BASEDIR after pg_start_backup has been called.\n>\n>\n>\n> Correct. 
Unfortunately it is a very common myth that it does cause\n> PostgreSQL to stop writing to the base dir.\n\nNever heard that one before. Wow. Who's been saying that?\n\nIt's taken me years to hunt down all invalid backup memes and terminate them.\n\nNever fails to surprise me how many people don't read the docs.I've seen it on stackexchange, and a few times on the pgsql mailing lists, and talking to people in person. I've never traced it back some \"authoritative\" source who is making the claim, I think many people just independently think up \"How would I implement pg_start_backup if I were doing it\" and then come up with the same false conclusion, and then all reinforce each other.I don't think the docs are particularly clear on this. There is the comment \"Some file system backup tools emit warnings or errors if the files they are trying to copy change while the copy proceeds. When taking a base backup of an active database, this situation is normal and not an error\" but the reader could think that comment could apply to any of the files in the datadirectory (in particular, pg_xlog), and could think that it doesn't apply to the files in datadirectory/base in particular. In other words, once they form the wrong understanding, the docs (if read) don't force them to change it, as they could interpret it in ways that are consistent.Of course the docs aren't a textbook and aren't trying to fully describe the theory of operation; just give the people a recipe they can follow. But people will make inferences from that recipe anyway. I don't know if it is worth trying preemptively dispel these mistakes in the docs.Cheers,Jeff",
"msg_date": "Tue, 24 Jan 2017 11:15:02 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "Greetings,\n\n* Rick Otten ([email protected]) wrote:\n> Actually, I think this may be the way Oracle Hot Backups work. It was my\n> impression that feature temporarily suspends writes into a specific\n> tablespace so you can take a snapshot of it. It has been a few years since\n> I've had to do Oracle work though and I could be mis-remembering. People\n> may be confusing Oracle and PostgreSQL.\n\nYes, that thought has occured to me as well, in some other database\nsystems you can ask for the system to be quiesced.\n\nThanks!\n\nStephen",
"msg_date": "Tue, 24 Jan 2017 11:39:15 -0500",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
},
{
"msg_contents": "On 1/20/17 10:06 AM, Stephen Frost wrote:\n\n> Ah, yes, I noticed that you passed over the file twice but wasn't quite\n> sure what functools.partial() was doing and a quick read of the docs\n> made me think you were doing seeking there.\n> \n> All the pages are the same size, so I'm surprised you didn't consider\n> just having a format along the lines of: magic+offset+page,\n> magic+offset+page, magic+offset+page, etc...\n> \n> I'd have to defer to David on this, but I think he was considering\n> having some kind of a bitmap to indicate which pages changed instead\n> of storing the full offset as, again, all the pages are the same size.\n\nI have actually gone through a few different ideas (including both of\nthe above) and haven't settled on anything final yet. Most of the ideas\nI've come up with so far are more optimal for backup performance but I\nwould rather bias towards restores which tend to be more time sensitive.\n\nThe ideal solution would be something that works well for both.\n\n-- \n-David\[email protected]",
"msg_date": "Tue, 31 Jan 2017 10:20:18 -0500",
"msg_from": "David Steele <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Backup taking long time !!!"
}
] |
[
{
"msg_contents": "Hi All,\n\nI have two servers. On the first one I have postgresql version 9.6 . On the\nsecond one I have version 9.3 . I ran pgbench on both servers.\n\nFirst server results:\nscaling factor: 10\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 10 s\nnumber of transactions actually processed: 4639\nlatency average = 2.156 ms\ntps = 463.818971 (including connections establishing)\ntps = 464.017489 (excluding connections establishing)\n\nSecond server results:\nscaling factor: 10\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 10 s\nnumber of transactions actually processed: 3771\ntps = 377.084162 (including connections establishing)\ntps = 377.134546 (excluding connections establishing)\nSo first server perform much better.\n\nBUT if I run a trivial select on both servers, on a similar table, the\nselect perform much much better on second server!\n\nFirst server explain results:\n\nLimit (cost=0.00..0.83 rows=30 width=33) (actual time=0.152..0.794 rows=30\nloops=1)\n Buffers: shared hit=1\n -> Seq Scan on symbols_tests (cost=0.00..1.57 rows=57 width=33) (actual\ntime=0.040..0.261 rows=30 loops=1)\n Buffers: shared hit=1\nPlanning time: 0.282 ms\nExecution time: 1.062 ms\nSecond server explain results:\n\nLimit (cost=0.00..0.83 rows=30 width=622) (actual time=0.006..0.010\nrows=30 loops=1)\n Buffers: shared hit=1\n -> Seq Scan on symbols_tests (cost=0.00..1.57 rows=57 width=622)\n(actual time=0.006..0.007 rows=30 loops=1)\n Buffers: shared hit=1\nTotal runtime: 0.020 ms\n\nBoth servers have SSD. First server is a VPS, the second server is a\ndedicated server.\n\nAny idea why this contradiction ? If you need more details regarding server\nresources(CPU, memory etc) please let me know.\n\nRegards\n\n-- \nStock Monitor - Stock Analysis Made EASY <https://www.stockmonitor.com>\n\nHi All,I have two servers. On the first one I have postgresql version 9.6 . On the second one I have version 9.3 . I ran pgbench on both servers.First server results:scaling factor: 10query mode: simplenumber of clients: 1number of threads: 1duration: 10 snumber of transactions actually processed: 4639latency average = 2.156 mstps = 463.818971 (including connections establishing)tps = 464.017489 (excluding connections establishing)Second server results:scaling factor: 10query mode: simplenumber of clients: 1number of threads: 1duration: 10 snumber of transactions actually processed: 3771tps = 377.084162 (including connections establishing)tps = 377.134546 (excluding connections establishing)So first server perform much better.BUT if I run a trivial select on both servers, on a similar table, the select perform much much better on second server!First server explain results:Limit (cost=0.00..0.83 rows=30 width=33) (actual time=0.152..0.794 rows=30 loops=1) Buffers: shared hit=1 -> Seq Scan on symbols_tests (cost=0.00..1.57 rows=57 width=33) (actual time=0.040..0.261 rows=30 loops=1) Buffers: shared hit=1Planning time: 0.282 msExecution time: 1.062 msSecond server explain results:Limit (cost=0.00..0.83 rows=30 width=622) (actual time=0.006..0.010 rows=30 loops=1) Buffers: shared hit=1 -> Seq Scan on symbols_tests (cost=0.00..1.57 rows=57 width=622) (actual time=0.006..0.007 rows=30 loops=1) Buffers: shared hit=1Total runtime: 0.020 msBoth servers have SSD. First server is a VPS, the second server is a dedicated server.Any idea why this contradiction ? 
If you need more details regarding server resources(CPU, memory etc) please let me know.Regards-- Stock Monitor - Stock Analysis Made EASY",
"msg_date": "Mon, 23 Jan 2017 18:55:04 +0200",
"msg_from": "Gabriel Dodan <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance contradiction"
},
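For reference, a sketch of a pgbench run matching the parameters reported above, with a hypothetical database name; a single 10-second run with one client mostly measures per-transaction latency, so longer runs with more clients tend to be more comparable across machines:

    # initialize with scale factor 10, then run 1 client / 1 thread for 10 seconds
    pgbench -i -s 10 bench
    pgbench -c 1 -j 1 -T 10 bench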
{
"msg_contents": "On 23 January 2017 at 17:55, Gabriel Dodan <[email protected]> wrote:\n>\n> BUT if I run a trivial select on both servers, on a similar table, the\nselect\n> perform much much better on second server!\n\nYou're comparing two very different systems it seems, therefore you might be\nlooking at difference in the performance of EXPLAIN, just getting timing\ninformation of your system may be the most expensive part[1], you could\ndisable\nthe timing explicity:\n\nEXPLAIN (ANALYZE ON, TIMING OFF) <query>\n\nAnd, there is something that stands out:\n\nSo it seems there is also some difference in the data, we could validate the\nactual numbers:\n\nSELECT sum(pg_column_size(symbols_tests))/count(*) FROM symbols_tests;\n\nregards,\n\nFeike\n\n[1]\nhttps://www.postgresql.org/docs/current/static/using-explain.html#USING-EXPLAIN-CAVEATS\n\nOn 23 January 2017 at 17:55, Gabriel Dodan <[email protected]> wrote:>> BUT if I run a trivial select on both servers, on a similar table, the select> perform much much better on second server!You're comparing two very different systems it seems, therefore you might belooking at difference in the performance of EXPLAIN, just getting timinginformation of your system may be the most expensive part[1], you could disablethe timing explicity:EXPLAIN (ANALYZE ON, TIMING OFF) <query>And, there is something that stands out:So it seems there is also some difference in the data, we could validate theactual numbers:SELECT sum(pg_column_size(symbols_tests))/count(*) FROM symbols_tests;regards,Feike[1] https://www.postgresql.org/docs/current/static/using-explain.html#USING-EXPLAIN-CAVEATS",
"msg_date": "Wed, 15 Feb 2017 09:25:11 +0100",
"msg_from": "Feike Steenbergen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance contradiction"
}
] |
[
{
"msg_contents": "Hi, \n\ni'm seeing a lot of connection time out in postgresql log \n\n2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:33 EET [6906-1] xxxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:44 EET [6912-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:48 EET [6913-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:52 EET [6920-1] xxx@YYY: canceling authentication due to timeout \n2017-01-25 11:10:52 EET [6930-1] postgres@postgres FATAL: canceling authentication due to timeout \n2017-01-25 11:10:53 EET [6921-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:11:08 EET [6933-1] xxx@YYY FATAL: canceling authentication due to timeout \n\ni also tired to login as postgres user from command line. \n\npostgres@sppgsql01:~$ psql \npsql: server closed the connection unexpectedly \nThis probably means the server terminated abnormally \nbefore or while processing the request. \n\nserver load is ok, under 1, \n\nmemory usage is also ok. \n\nroot@server ~ # free -m \ntotal used free shared buff/cache available \nMem: 128908 3398 795 31671 124714 92980 \nSwap: 8191 1418 6773 \n\nsystem is a ubuntu 16.04 and i'm using postgresql 9.3 \n\nconnection limit is at 500 at the moment i have 180 connection, authentication_timeout is default 1 min. \n\nhere is the postgresql.conf \n\n\ndata_directory = '/var/lib/postgresql/9.3/main' # use data in another directory \n# (change requires restart) \nhba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication file \n# (change requires restart) \nident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration file \n# (change requires restart) \nexternal_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID file \n# (change requires restart) \nlisten_addresses = '*' # what IP address(es) to listen on; \n# comma-separated list of addresses; \n# defaults to 'localhost'; use '*' for all \n# (change requires restart) \nport = 5432 # (change requires restart) \nmax_connections = 500 # (change requires restart) \nunix_socket_directories = '/var/run/postgresql' # comma-separated list of directories \n# (change requires restart) \n# (change requires restart) \n# (change requires restart) \n# (change requires restart) \nssl = on # (change requires restart) \nssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers \n# (change requires restart) \nssl_cert_file = 'server.crt' # (change requires restart) \nssl_key_file = 'server.key' # (change requires restart) \npassword_encryption = on \n# 0 selects the system default \n# 0 selects the system default \n# 0 selects the system default \n# (change requires restart) \n# (change requires restart) \n# in kB, or -1 for no limit \nmax_files_per_process = 5000 # min 25 \n# (change requires restart) \nvacuum_cost_delay = 20 # 0-100 milliseconds \nvacuum_cost_page_hit = 1 # 0-10000 credits \nvacuum_cost_page_miss = 10 # 0-10000 credits \nvacuum_cost_page_dirty = 20 # 0-10000 credits \nvacuum_cost_limit = 200 # 1-10000 credits \n# (change requires restart) \nfsync = on # turns forced synchronization on or off \nsynchronous_commit = on # synchronization level; \n# off, local, remote_write, or on \nwal_sync_method = fsync # the default is 
the first option \n# supported by the operating system: \n# open_datasync \n# fdatasync (default on Linux) \n# fsync \n# fsync_writethrough \n# open_sync \nfull_page_writes = on # recover from partial page writes \nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers \n# (change requires restart) \nwal_writer_delay = 200ms # 1-10000 milliseconds \ncommit_delay = 0 # range 0-100000, in microseconds \ncommit_siblings = 5 # range 1-1000 \ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 15min # range 30s-1h \ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 \ncheckpoint_warning = 30s # 0 disables \n\nlog_checkpoints = on \nlog_line_prefix = '%t [%p-%l] %q%u@%d ' # special values: \n\nlog_timezone = 'Europe/Bucharest' \ntrack_activities = on \ntrack_counts = on \nstats_temp_directory = '/var/run/postgresql/9.3-main.pg_stat_tmp' \nautovacuum = on # Enable autovacuum subprocess? 'on' \n# requires track_counts to also be on. \nlog_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and \n\nautovacuum_max_workers = 5 # max number of autovacuum subprocesses \n# (change requires restart) \nautovacuum_naptime = 5min # time between autovacuum runs \nautovacuum_vacuum_threshold = 500 # min number of row updates before \n# vacuum \nautovacuum_analyze_threshold = 500 # min number of row updates before \n# analyze \nautovacuum_vacuum_scale_factor = 0.4 # fraction of table size before vacuum \nautovacuum_analyze_scale_factor = 0.2 # fraction of table size before analyze \n\nautovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for \n\ndatestyle = 'iso, mdy' \ntimezone = 'Europe/Bucharest' \n\nclient_encoding = sql_ascii # actually, defaults to database \n# encoding \nlc_messages = 'C' # locale for system error message \n# strings \nlc_monetary = 'C' # locale for monetary formatting \nlc_numeric = 'C' # locale for number formatting \nlc_time = 'C' # locale for time formatting \ndefault_text_search_config = 'pg_catalog.english' \n# (change requires restart) \n# (change requires restart) \n# directory 'conf.d' \ndefault_statistics_target = 100 # pgtune wizard 2016-12-11 \nmaintenance_work_mem = 1GB # pgtune wizard 2016-12-11 \nconstraint_exclusion = on # pgtune wizard 2016-12-11 \neffective_cache_size = 88GB # pgtune wizard 2016-12-11 \nwork_mem = 64MB # pgtune wizard 2016-12-11 \nwal_buffers = 32MB # pgtune wizard 2016-12-11 \nshared_buffers = 30GB # pgtune wizard 2016-12-11 \n\n\ndose anyone know why this thing are happening? \n\nThanks. 
\n\nBr, \nVuko",
"msg_date": "Wed, 25 Jan 2017 10:23:39 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgsql connection timeone"
},
{
"msg_contents": "Hi, \n\nbefore timeout started this was the last automatic vacuum entry's: \n\n\n\n2017-01-25 10:47:07 EET [5995-1] LOG: automatic vacuum of table \"13Tim07Tuning.pg_catalog.pg_attribute\": index scans: 1 \npages: 0 removed, 261 remain \ntuples: 3547 removed, 8812 remain \nbuffer usage: 927 hits, 112 misses, 161 dirtied \navg read rate: 1.438 MB/s, avg write rate: 2.067 MB/s \nsystem usage: CPU 0.02s/0.00u sec elapsed 0.60 sec \n2017-01-25 10:47:07 EET [5995-2] LOG: automatic analyze of table \"13Tim07Tuning.pg_catalog.pg_class\" system usage: CPU 0.00s/0.00u sec elapsed 0.08 sec \n2017-01-25 10:47:07 EET [5995-3] LOG: automatic vacuum of table \"13Tim07Tuning.pg_catalog.pg_index\": index scans: 1 \npages: 0 removed, 10 remain \ntuples: 14 removed, 312 remain \nbuffer usage: 62 hits, 5 misses, 15 dirtied \navg read rate: 0.767 MB/s, avg write rate: 2.301 MB/s \nsystem usage: CPU 0.00s/0.00u sec elapsed 0.05 sec \n2017-01-25 10:47:10 EET [6060-1] LOG: automatic analyze of table \"12westm11.pg_catalog.pg_type\" system usage: CPU 0.00s/0.00u sec elapsed 0.08 sec \n2017-01-25 10:47:10 EET [6060-2] LOG: automatic analyze of table \"12westm11.pg_catalog.pg_attribute\" system usage: CPU 0.02s/0.01u sec elapsed 0.12 sec \n2017-01-25 10:48:30 EET [28355-4597] LOG: checkpoint starting: time \n2017-01-25 10:49:34 EET [6222-1]XXXX@YYYY FATAL: canceling authentication due to timeout \n\n\n\n\n----- Original Message -----\n\nFrom: \"Vucomir Ianculov\" <[email protected]> \nTo: [email protected] \nSent: Wednesday, January 25, 2017 11:23:39 AM \nSubject: [PERFORM] pgsql connection timeone \n\n\nHi, \n\ni'm seeing a lot of connection time out in postgresql log \n\n2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:33 EET [6906-1] xxxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:44 EET [6912-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:48 EET [6913-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:10:52 EET [6920-1] xxx@YYY: canceling authentication due to timeout \n2017-01-25 11:10:52 EET [6930-1] postgres@postgres FATAL: canceling authentication due to timeout \n2017-01-25 11:10:53 EET [6921-1] xxx@YYY FATAL: canceling authentication due to timeout \n2017-01-25 11:11:08 EET [6933-1] xxx@YYY FATAL: canceling authentication due to timeout \n\ni also tired to login as postgres user from command line. \n\npostgres@sppgsql01:~$ psql \npsql: server closed the connection unexpectedly \nThis probably means the server terminated abnormally \nbefore or while processing the request. \n\nserver load is ok, under 1, \n\nmemory usage is also ok. \n\nroot@server ~ # free -m \ntotal used free shared buff/cache available \nMem: 128908 3398 795 31671 124714 92980 \nSwap: 8191 1418 6773 \n\nsystem is a ubuntu 16.04 and i'm using postgresql 9.3 \n\nconnection limit is at 500 at the moment i have 180 connection, authentication_timeout is default 1 min. 
\n\nhere is the postgresql.conf \n\n\ndata_directory = '/var/lib/postgresql/9.3/main' # use data in another directory \n# (change requires restart) \nhba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication file \n# (change requires restart) \nident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration file \n# (change requires restart) \nexternal_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID file \n# (change requires restart) \nlisten_addresses = '*' # what IP address(es) to listen on; \n# comma-separated list of addresses; \n# defaults to 'localhost'; use '*' for all \n# (change requires restart) \nport = 5432 # (change requires restart) \nmax_connections = 500 # (change requires restart) \nunix_socket_directories = '/var/run/postgresql' # comma-separated list of directories \n# (change requires restart) \n# (change requires restart) \n# (change requires restart) \n# (change requires restart) \nssl = on # (change requires restart) \nssl_ciphers = 'DEFAULT:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers \n# (change requires restart) \nssl_cert_file = 'server.crt' # (change requires restart) \nssl_key_file = 'server.key' # (change requires restart) \npassword_encryption = on \n# 0 selects the system default \n# 0 selects the system default \n# 0 selects the system default \n# (change requires restart) \n# (change requires restart) \n# in kB, or -1 for no limit \nmax_files_per_process = 5000 # min 25 \n# (change requires restart) \nvacuum_cost_delay = 20 # 0-100 milliseconds \nvacuum_cost_page_hit = 1 # 0-10000 credits \nvacuum_cost_page_miss = 10 # 0-10000 credits \nvacuum_cost_page_dirty = 20 # 0-10000 credits \nvacuum_cost_limit = 200 # 1-10000 credits \n# (change requires restart) \nfsync = on # turns forced synchronization on or off \nsynchronous_commit = on # synchronization level; \n# off, local, remote_write, or on \nwal_sync_method = fsync # the default is the first option \n# supported by the operating system: \n# open_datasync \n# fdatasync (default on Linux) \n# fsync \n# fsync_writethrough \n# open_sync \nfull_page_writes = on # recover from partial page writes \nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers \n# (change requires restart) \nwal_writer_delay = 200ms # 1-10000 milliseconds \ncommit_delay = 0 # range 0-100000, in microseconds \ncommit_siblings = 5 # range 1-1000 \ncheckpoint_segments = 64 # in logfile segments, min 1, 16MB each \ncheckpoint_timeout = 15min # range 30s-1h \ncheckpoint_completion_target = 0.9 # checkpoint target duration, 0.0 - 1.0 \ncheckpoint_warning = 30s # 0 disables \n\nlog_checkpoints = on \nlog_line_prefix = '%t [%p-%l] %q%u@%d ' # special values: \n\nlog_timezone = 'Europe/Bucharest' \ntrack_activities = on \ntrack_counts = on \nstats_temp_directory = '/var/run/postgresql/9.3-main.pg_stat_tmp' \nautovacuum = on # Enable autovacuum subprocess? 'on' \n# requires track_counts to also be on. 
\nlog_autovacuum_min_duration = 0 # -1 disables, 0 logs all actions and \n\nautovacuum_max_workers = 5 # max number of autovacuum subprocesses \n# (change requires restart) \nautovacuum_naptime = 5min # time between autovacuum runs \nautovacuum_vacuum_threshold = 500 # min number of row updates before \n# vacuum \nautovacuum_analyze_threshold = 500 # min number of row updates before \n# analyze \nautovacuum_vacuum_scale_factor = 0.4 # fraction of table size before vacuum \nautovacuum_analyze_scale_factor = 0.2 # fraction of table size before analyze \n\nautovacuum_vacuum_cost_delay = 20ms # default vacuum cost delay for \n\ndatestyle = 'iso, mdy' \ntimezone = 'Europe/Bucharest' \n\nclient_encoding = sql_ascii # actually, defaults to database \n# encoding \nlc_messages = 'C' # locale for system error message \n# strings \nlc_monetary = 'C' # locale for monetary formatting \nlc_numeric = 'C' # locale for number formatting \nlc_time = 'C' # locale for time formatting \ndefault_text_search_config = 'pg_catalog.english' \n# (change requires restart) \n# (change requires restart) \n# directory 'conf.d' \ndefault_statistics_target = 100 # pgtune wizard 2016-12-11 \nmaintenance_work_mem = 1GB # pgtune wizard 2016-12-11 \nconstraint_exclusion = on # pgtune wizard 2016-12-11 \neffective_cache_size = 88GB # pgtune wizard 2016-12-11 \nwork_mem = 64MB # pgtune wizard 2016-12-11 \nwal_buffers = 32MB # pgtune wizard 2016-12-11 \nshared_buffers = 30GB # pgtune wizard 2016-12-11 \n\n\ndose anyone know why this thing are happening? \n\nThanks. \n\nBr, \nVuko",
"msg_date": "Wed, 25 Jan 2017 10:44:59 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Vucomir Ianculov <[email protected]> writes:\n> i'm seeing a lot of connection time out in postgresql log \n\n> 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n\nSo ... what authentication method are you using?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Jan 2017 08:15:28 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Hi Tom, \n\nthis is the entry from pg_hba.conf \n\nhost all all 0.0.0.0/0 md5 \n\ni needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram. \n\n\n\n\n----- Original Message -----\n\nFrom: \"Tom Lane\" <[email protected]> \nTo: \"Vucomir Ianculov\" <[email protected]> \nCc: [email protected] \nSent: Wednesday, January 25, 2017 3:15:28 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\nVucomir Ianculov <[email protected]> writes: \n> i'm seeing a lot of connection time out in postgresql log \n\n> 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n\nSo ... what authentication method are you using? \n\nregards, tom lane \n\n\nHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Sat, 28 Jan 2017 11:03:55 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql connection timeone"
},
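A minimal diagnostic sketch for the situation above, assuming nothing beyond the stock pg_stat_activity view on 9.3; run it from a session that is already connected while the timeouts are occurring, to separate connection-slot exhaustion from a stall in the authentication step itself:

    SHOW max_connections;           -- 500 in the posted configuration
    SHOW authentication_timeout;    -- default 1min, matching the ~60s cancellations
    -- Count live backends by state; a total far below max_connections points away
    -- from slot exhaustion and toward the startup/authentication path.
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
    ORDER BY count(*) DESC;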
{
"msg_contents": "can anyone help me with my problem? \ni'm really don't know from when the problem can be. \n\n\n\n\n----- Original Message -----\n\nFrom: \"Vucomir Ianculov\" <[email protected]> \nTo: \"Tom Lane\" <[email protected]> \nCc: [email protected] \nSent: Saturday, January 28, 2017 12:03:55 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\n\nHi Tom, \n\nthis is the entry from pg_hba.conf \n\nhost all all 0.0.0.0/0 md5 \n\ni needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram. \n\n\n\n\n----- Original Message -----\n\nFrom: \"Tom Lane\" <[email protected]> \nTo: \"Vucomir Ianculov\" <[email protected]> \nCc: [email protected] \nSent: Wednesday, January 25, 2017 3:15:28 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\nVucomir Ianculov <[email protected]> writes: \n> i'm seeing a lot of connection time out in postgresql log \n\n> 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n\nSo ... what authentication method are you using? \n\nregards, tom lane \n\n\n\ncan anyone help me with my problem?i'm really don't know from when the problem can be.From: \"Vucomir Ianculov\" <[email protected]>To: \"Tom Lane\" <[email protected]>Cc: [email protected]: Saturday, January 28, 2017 12:03:55 PMSubject: Re: [PERFORM] pgsql connection timeoneHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Wed, 1 Feb 2017 13:51:58 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Just a wild guess, but did you check your random source? We had similar\nproblems in Oracle and had to switch to /dev/urandom. It can be done with a\nsystem variable setting.\n\nOn Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:\n\n> can anyone help me with my problem?\n> i'm really don't know from when the problem can be.\n>\n>\n>\n> ------------------------------\n> *From: *\"Vucomir Ianculov\" <[email protected]>\n> *To: *\"Tom Lane\" <[email protected]>\n> *Cc: *[email protected]\n> *Sent: *Saturday, January 28, 2017 12:03:55 PM\n>\n> *Subject: *Re: [PERFORM] pgsql connection timeone\n>\n> Hi Tom,\n>\n> this is the entry from pg_hba.conf\n> host all all 0.0.0.0/0 md5\n>\n> i needed to restart postgres service to be able to accept new connection,\n> witch it strange becouse there was no load on the server and it head a lot\n> of free ram.\n>\n>\n>\n>\n> ------------------------------\n> *From: *\"Tom Lane\" <[email protected]>\n> *To: *\"Vucomir Ianculov\" <[email protected]>\n> *Cc: *[email protected]\n> *Sent: *Wednesday, January 25, 2017 3:15:28 PM\n> *Subject: *Re: [PERFORM] pgsql connection timeone\n>\n> Vucomir Ianculov <[email protected]> writes:\n> > i'm seeing a lot of connection time out in postgresql log\n>\n> > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling\n> authentication due to timeout\n> > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling\n> authentication due to timeout\n> > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling\n> authentication due to timeout\n>\n> So ... what authentication method are you using?\n>\n> regards, tom lane\n>\n>\n\nJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. It can be done with a system variable setting.On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:can anyone help me with my problem?i'm really don't know from when the problem can be.From: \"Vucomir Ianculov\" <[email protected]>To: \"Tom Lane\" <[email protected]>Cc: [email protected]: Saturday, January 28, 2017 12:03:55 PMSubject: Re: [PERFORM] pgsql connection timeoneHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Wed, 01 Feb 2017 17:11:12 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Hi Vitalii, \n\nno, hove can i check it? searched but did not find any useful information . \n\nThanks, \n\nBr, \nVuko \n----- Original Message -----\n\nFrom: \"Vitalii Tymchyshyn\" <[email protected]> \nTo: \"Vucomir Ianculov\" <[email protected]>, \"Tom Lane\" <[email protected]> \nCc: [email protected] \nSent: Wednesday, February 1, 2017 7:11:12 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\nJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. It can be done with a system variable setting. \n\n\n\nOn Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov < [email protected] > wrote: \n\n\n\n\ncan anyone help me with my problem? \ni'm really don't know from when the problem can be. \n\n\n\n\n\n\nFrom: \"Vucomir Ianculov\" < [email protected] > \nTo: \"Tom Lane\" < [email protected] > \nCc: [email protected] \nSent: Saturday, January 28, 2017 12:03:55 PM \n\n\n\nSubject: Re: [PERFORM] pgsql connection timeone \n\n\nHi Tom, \n\nthis is the entry from pg_hba.conf \n\nhost all all 0.0.0.0/0 md5 \n\ni needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram. \n\n\n\n\n\n\nFrom: \"Tom Lane\" < [email protected] > \nTo: \"Vucomir Ianculov\" < [email protected] > \nCc: [email protected] \nSent: Wednesday, January 25, 2017 3:15:28 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\nVucomir Ianculov < [email protected] > writes: \n> i'm seeing a lot of connection time out in postgresql log \n\n> 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n\nSo ... what authentication method are you using? \n\nregards, tom lane \n\n\n\n\n\nHi Vitalii,no, hove can i check it? searched but did not find any useful information .Thanks,Br,VukoFrom: \"Vitalii Tymchyshyn\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>, \"Tom Lane\" <[email protected]>Cc: [email protected]: Wednesday, February 1, 2017 7:11:12 PMSubject: Re: [PERFORM] pgsql connection timeoneJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. 
It can be done with a system variable setting.On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:can anyone help me with my problem?i'm really don't know from when the problem can be.From: \"Vucomir Ianculov\" <[email protected]>To: \"Tom Lane\" <[email protected]>Cc: [email protected]: Saturday, January 28, 2017 12:03:55 PMSubject: Re: [PERFORM] pgsql connection timeoneHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Sat, 4 Feb 2017 17:45:08 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Well, you can try switching to urandom, see\nhttp://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom,\nsecond answer.\n\nсб, 4 лют. 2017 о 11:45 Vucomir Ianculov <[email protected]> пише:\n\n> Hi Vitalii,\n>\n> no, hove can i check it? searched but did not find any useful information .\n>\n> Thanks,\n>\n> Br,\n> Vuko\n> ------------------------------\n> *From: *\"Vitalii Tymchyshyn\" <[email protected]>\n> *To: *\"Vucomir Ianculov\" <[email protected]>, \"Tom Lane\" <\n> [email protected]>\n> *Cc: *[email protected]\n> *Sent: *Wednesday, February 1, 2017 7:11:12 PM\n>\n> *Subject: *Re: [PERFORM] pgsql connection timeone\n>\n> Just a wild guess, but did you check your random source? We had similar\n> problems in Oracle and had to switch to /dev/urandom. It can be done with a\n> system variable setting.\n>\n> On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:\n>\n> can anyone help me with my problem?\n> i'm really don't know from when the problem can be.\n>\n>\n>\n> ------------------------------\n> *From: *\"Vucomir Ianculov\" <[email protected]>\n> *To: *\"Tom Lane\" <[email protected]>\n> *Cc: *[email protected]\n> *Sent: *Saturday, January 28, 2017 12:03:55 PM\n>\n> *Subject: *Re: [PERFORM] pgsql connection timeone\n>\n> Hi Tom,\n>\n> this is the entry from pg_hba.conf\n> host all all 0.0.0.0/0 md5\n>\n> i needed to restart postgres service to be able to accept new connection,\n> witch it strange becouse there was no load on the server and it head a lot\n> of free ram.\n>\n>\n>\n>\n> ------------------------------\n> *From: *\"Tom Lane\" <[email protected]>\n> *To: *\"Vucomir Ianculov\" <[email protected]>\n> *Cc: *[email protected]\n> *Sent: *Wednesday, January 25, 2017 3:15:28 PM\n> *Subject: *Re: [PERFORM] pgsql connection timeone\n>\n> Vucomir Ianculov <[email protected]> writes:\n> > i'm seeing a lot of connection time out in postgresql log\n>\n> > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling\n> authentication due to timeout\n> > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling\n> authentication due to timeout\n> > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling\n> authentication due to timeout\n>\n> So ... what authentication method are you using?\n>\n> regards, tom lane\n>\n>\n\nWell, you can try switching to urandom, see http://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom, second answer.сб, 4 лют. 2017 о 11:45 Vucomir Ianculov <[email protected]> пише:Hi Vitalii,no, hove can i check it? searched but did not find any useful information .Thanks,Br,VukoFrom: \"Vitalii Tymchyshyn\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>, \"Tom Lane\" <[email protected]>Cc: [email protected]: Wednesday, February 1, 2017 7:11:12 PMSubject: Re: [PERFORM] pgsql connection timeoneJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. 
It can be done with a system variable setting.On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:can anyone help me with my problem?i'm really don't know from when the problem can be.From: \"Vucomir Ianculov\" <[email protected]>To: \"Tom Lane\" <[email protected]>Cc: [email protected]: Saturday, January 28, 2017 12:03:55 PMSubject: Re: [PERFORM] pgsql connection timeoneHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Sat, 04 Feb 2017 18:38:34 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgsql connection timeone"
},
{
"msg_contents": "Hi, \n\nsorry for the late replay, i have check but from what i see this is on the application side. \ni'm not able to login to postgres from the command line. \n\ndose anyone have any other ideas on this problem. \n\nBr, \nvuko \n\n\n----- Original Message -----\n\nFrom: \"Vitalii Tymchyshyn\" <[email protected]> \nTo: \"Vucomir Ianculov\" <[email protected]> \nCc: [email protected], \"Tom Lane\" <[email protected]> \nSent: Saturday, February 4, 2017 8:38:34 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\n\nWell, you can try switching to urandom, see http://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom , second answer. \n\n\nсб, 4 лют. 2017 о 11:45 Vucomir Ianculov < [email protected] > пише: \n\n\n\n\nHi Vitalii, \n\nno, hove can i check it? searched but did not find any useful information . \n\nThanks, \n\nBr, \nVuko \n\n\nFrom: \"Vitalii Tymchyshyn\" < [email protected] > \nTo: \"Vucomir Ianculov\" < [email protected] >, \"Tom Lane\" < [email protected] > \nCc: [email protected] \nSent: Wednesday, February 1, 2017 7:11:12 PM \n\n\n\nSubject: Re: [PERFORM] pgsql connection timeone \n\nJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. It can be done with a system variable setting. \n\n\n\nOn Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov < [email protected] > wrote: \n\n<blockquote>\n\n\ncan anyone help me with my problem? \ni'm really don't know from when the problem can be. \n\n\n\n\n\n\nFrom: \"Vucomir Ianculov\" < [email protected] > \nTo: \"Tom Lane\" < [email protected] > \nCc: [email protected] \nSent: Saturday, January 28, 2017 12:03:55 PM \n\n\n\nSubject: Re: [PERFORM] pgsql connection timeone \n\n\nHi Tom, \n\nthis is the entry from pg_hba.conf \n\nhost all all 0.0.0.0/0 md5 \n\ni needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram. \n\n\n\n\n\n\nFrom: \"Tom Lane\" < [email protected] > \nTo: \"Vucomir Ianculov\" < [email protected] > \nCc: [email protected] \nSent: Wednesday, January 25, 2017 3:15:28 PM \nSubject: Re: [PERFORM] pgsql connection timeone \n\nVucomir Ianculov < [email protected] > writes: \n> i'm seeing a lot of connection time out in postgresql log \n\n> 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout \n> 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout \n\nSo ... what authentication method are you using? \n\nregards, tom lane \n\n\n\n\n</blockquote>\n\n\nHi,sorry for the late replay, i have check but from what i see this is on the application side.i'm not able to login to postgres from the command line.dose anyone have any other ideas on this problem.Br,vukoFrom: \"Vitalii Tymchyshyn\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected], \"Tom Lane\" <[email protected]>Sent: Saturday, February 4, 2017 8:38:34 PMSubject: Re: [PERFORM] pgsql connection timeoneWell, you can try switching to urandom, see http://stackoverflow.com/questions/137212/how-to-solve-performance-problem-with-java-securerandom, second answer.сб, 4 лют. 2017 о 11:45 Vucomir Ianculov <[email protected]> пише:Hi Vitalii,no, hove can i check it? 
searched but did not find any useful information .Thanks,Br,VukoFrom: \"Vitalii Tymchyshyn\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>, \"Tom Lane\" <[email protected]>Cc: [email protected]: Wednesday, February 1, 2017 7:11:12 PMSubject: Re: [PERFORM] pgsql connection timeoneJust a wild guess, but did you check your random source? We had similar problems in Oracle and had to switch to /dev/urandom. It can be done with a system variable setting.On Wed, Feb 1, 2017, 7:52 AM Vucomir Ianculov <[email protected]> wrote:can anyone help me with my problem?i'm really don't know from when the problem can be.From: \"Vucomir Ianculov\" <[email protected]>To: \"Tom Lane\" <[email protected]>Cc: [email protected]: Saturday, January 28, 2017 12:03:55 PMSubject: Re: [PERFORM] pgsql connection timeoneHi Tom,this is the entry from pg_hba.confhost all all 0.0.0.0/0 md5i needed to restart postgres service to be able to accept new connection, witch it strange becouse there was no load on the server and it head a lot of free ram.From: \"Tom Lane\" <[email protected]>To: \"Vucomir Ianculov\" <[email protected]>Cc: [email protected]: Wednesday, January 25, 2017 3:15:28 PMSubject: Re: [PERFORM] pgsql connection timeoneVucomir Ianculov <[email protected]> writes:> i'm seeing a lot of connection time out in postgresql log > 2017-01-25 11:09:47 EET [6897-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:15 EET [6901-1] XXX@YYY FATAL: canceling authentication due to timeout > 2017-01-25 11:10:17 EET [6902-1] xxx@YYY FATAL: canceling authentication due to timeout So ... what authentication method are you using? regards, tom lane",
"msg_date": "Fri, 17 Feb 2017 17:52:18 +0100 (CET)",
"msg_from": "Vucomir Ianculov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pgsql connection timeone"
}
] |
[
{
"msg_contents": "I was just troubleshooting a strange performance issue with pg_trgm\n(greatest extension over) that ran great in testing but poor in\nproduction following a 9.6 in place upgrade from 9.2. By poor I mean\n7x slower. Problem was resolved by ALTER EXTENSION UPDATE followed by\na REINDEX on the impacted table. Hope this helps somebody at some\npoint :-).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Feb 2017 18:08:04 +0530",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "PSA: upgrade your extensions"
},
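A sketch of the check-and-update step described above, assuming only the standard catalog views; the extension name is pg_trgm as in the report, but the same query flags any extension still left at its pre-upgrade version:

    -- Extensions whose installed version lags the version shipped with the new binaries:
    SELECT name, installed_version, default_version
    FROM pg_available_extensions
    WHERE installed_version IS NOT NULL
      AND installed_version IS DISTINCT FROM default_version;

    -- Then, per extension:
    ALTER EXTENSION pg_trgm UPDATE;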
{
"msg_contents": "Hi Merlin:\n\nAny EXPLAIN on the query affected? size of indexes before and after reindex?\n\nRegards,\n\nDaniel.\n\n\n> El 1 feb 2017, a las 13:38, Merlin Moncure <[email protected]> escribió:\n> \n> I was just troubleshooting a strange performance issue with pg_trgm\n> (greatest extension over) that ran great in testing but poor in\n> production following a 9.6 in place upgrade from 9.2. By poor I mean\n> 7x slower. Problem was resolved by ALTER EXTENSION UPDATE followed by\n> a REINDEX on the impacted table. Hope this helps somebody at some\n> point :-).\n> \n> merlin\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Feb 2017 16:18:28 +0100",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSA: upgrade your extensions"
},
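One way to capture the numbers being asked for, sketched with placeholder object names (my_table, my_text_col and my_trgm_idx are not from the original report):

    -- Index size, before and after the REINDEX:
    SELECT pg_size_pretty(pg_relation_size('my_trgm_idx'));

    -- Plan, timing and buffer usage for the affected pg_trgm query:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM my_table WHERE my_text_col % 'search term';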
{
"msg_contents": "On Wed, Feb 1, 2017 at 4:38 AM, Merlin Moncure <[email protected]> wrote:\n\n> I was just troubleshooting a strange performance issue with pg_trgm\n> (greatest extension over) that ran great in testing but poor in\n> production following a 9.6 in place upgrade from 9.2. By poor I mean\n> 7x slower. Problem was resolved by ALTER EXTENSION UPDATE followed by\n> a REINDEX on the impacted table. Hope this helps somebody at some\n> point :-).\n>\n\nIt was probably the implementation of the triconsistent function for\npg_trgm (or I would like to think so, anyway).\n\nBut if so, the REINDEX should not have been necessary, just the ALTER\nEXTENSION UPDATE should do the trick. Rebuiding a large gin index can be\npretty slow.\n\nCheers,\n\nJeff\n\nOn Wed, Feb 1, 2017 at 4:38 AM, Merlin Moncure <[email protected]> wrote:I was just troubleshooting a strange performance issue with pg_trgm\n(greatest extension over) that ran great in testing but poor in\nproduction following a 9.6 in place upgrade from 9.2. By poor I mean\n7x slower. Problem was resolved by ALTER EXTENSION UPDATE followed by\na REINDEX on the impacted table. Hope this helps somebody at some\npoint :-).It was probably the implementation of the triconsistent function for pg_trgm (or I would like to think so, anyway). But if so, the REINDEX should not have been necessary, just the ALTER EXTENSION UPDATE should do the trick. Rebuiding a large gin index can be pretty slow.Cheers,Jeff",
"msg_date": "Wed, 1 Feb 2017 11:48:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PSA: upgrade your extensions"
},
{
"msg_contents": "On Thu, Feb 2, 2017 at 1:18 AM, Jeff Janes <[email protected]> wrote:\n> On Wed, Feb 1, 2017 at 4:38 AM, Merlin Moncure <[email protected]> wrote:\n>>\n>> I was just troubleshooting a strange performance issue with pg_trgm\n>> (greatest extension over) that ran great in testing but poor in\n>> production following a 9.6 in place upgrade from 9.2. By poor I mean\n>> 7x slower. Problem was resolved by ALTER EXTENSION UPDATE followed by\n>> a REINDEX on the impacted table. Hope this helps somebody at some\n>> point :-).\n>\n> It was probably the implementation of the triconsistent function for pg_trgm\n> (or I would like to think so, anyway).\n\nYeah, this is definitely the case. We are seeing 50-80% runtime\nreduction in many common cases, with the problematic cases being in\nthe upper end of that range.\n\n> But if so, the REINDEX should not have been necessary, just the ALTER\n> EXTENSION UPDATE should do the trick. Rebuiding a large gin index can be\n> pretty slow.\n\nHm, I thought it *was* necessary, in my poking. However the evidence\nis destroyed and it's not worth restaging the test, so I'll take your\nword for it.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Feb 2017 17:54:52 +0530",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PSA: upgrade your extensions"
}
] |
[
{
"msg_contents": "Dear all,\r\n\r\nI got the following problem for which I could not find a solution by searching the archives:\r\n\r\nI have Tables Ta, Tb, Tc with primary keys as bigserials.\r\nTa references Tb references Tc.\r\nNot all but most rows in Ta reference exactly one row in Tb.\r\n\r\nAbove the Tables I have views Va, Vb, Vc that gather some other information in addition to the Tables Ta,Tb,Tc.\r\nVa outputs columns of Vb outputs columns of Vc.\r\nalso, some other nested selects form columns of Va, Vb, Vc.\r\n\r\nmy problem is the join in Va:\r\nIt basically says\r\n\r\nSELECT Ta.PK, Ta.FK-Tb, Ta.x…, Ta.y, Vb.x, Vb.y, …\r\nFROM Ta, Vb\r\nWHERE Vb.TbPK = Ta.TbFK\r\nAND <somerestrictionOnTa>\r\n\r\nEven if I select exactly one row of Ta which also results in exacly one row of Vb (and then Vc),\r\nthe planner seems to always create all possible lines (of a subselect) in Vb and afterwards drop all\r\nlines but one when performing the join.\r\n\r\nWhen I replace Va's join of Ta and Vb by directly placing Vb's SELECT in Va\r\nthe planner finds that it needs only one row of the former Vb (which is then incorporated in Va).\r\n\r\nAlso, when I replace the above join by subselects resulting in a\r\nSELECT Ta.PK-id, Ta.FK-Tb, Ta.x…,\r\n(SELECT Vb.x FROM Vb WHERE Vb.TbPK = Ta.TbFK) AS x,\r\n(SELECT Vb.y FROM Vb WHERE Vb.TbPK = Ta.TbFK) AS y\r\nWHERE e.g Ta.PK-id = <singlevalue>\r\n\r\nthe planner is - for each subselect - able to perform Vb's operations on only the one row\r\nthat matches the FK of Ta.\r\nThat the planner repeats this for each of the above subselects of Vb is a\r\ndifferent story which I don't understand either.\r\n\r\n\r\nMy question is:\r\nIs there any way to convince the planner that it makes sense for\r\nthe Vb joined with Ta into Va to first select one row of Tb and then perform the rest of Vb on this one row?\r\n(And why is the plan for the regular join form differing from the subselects?)\r\n\r\nHere is the explain,analyze of Va's SELECT using four fast subselects on Vb:\r\nhttps://explain.depesz.com/s/2tp\r\n\r\nHere is it for the original Va's SELECT with the slow join of Ta, Vb:\r\nhttps://explain.depesz.com/s/oKS\r\n\r\nBTW: That some expressions in Vb are slow and inefficient is understood and can be corrected by me.\r\nThat's what made this problem visible but to my understanding this should not matter for the question.\r\n\r\nAddition information:\r\n\"PostgreSQL 9.6.1, compiled by Visual C++ build 1800, 64-bit\"\r\nThe same problem existed on 9.3 before. I updated to 9.6.1 to see if it gets better which it did not.\r\nHardware:\r\nVM running Windows 10 or WS2012R2 on WS2012R2 HyperV running on Xeon E5-2600. SSD buffered HDDs.\r\nI have the impression that this problem was at least invisible on 8.1 which I used before 9.3.\r\n\r\nAny insight is welcome.\r\nLet me know if more information is needed to analyze the question.\r\n\r\nRegards,\r\nTitus\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Feb 2017 21:23:49 +0000",
"msg_from": "Titus von Boxberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "strange and slow joining of nested views"
},
{
"msg_contents": "Titus von Boxberg <[email protected]> writes:\n> I got the following problem for which I could not find a solution by searching the archives:\n> I have Tables Ta, Tb, Tc with primary keys as bigserials.\n> Ta references Tb references Tc.\n> Not all but most rows in Ta reference exactly one row in Tb.\n\nHm, your problem query has 11 table scans (not to mention a couple of\nsubplans) so you're oversimplifying here. Anyway, I think that increasing\njoin_collapse_limit and/or from_collapse_limit to at least 11 might help.\nAs-is, you're more or less at the mercy of whether your textual query\nstructure corresponds to a good join order.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 04 Feb 2017 00:16:21 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: strange and slow joining of nested views"
},
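A session-local sketch of that experiment, so nothing in postgresql.conf has to change; 11 here simply matches the number of table scans counted above and is not a recommended global setting:

    SET from_collapse_limit = 11;
    SET join_collapse_limit = 11;
    -- re-run EXPLAIN ANALYZE on the slow view query here and compare the join order
    RESET from_collapse_limit;
    RESET join_collapse_limit;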
{
"msg_contents": "> Von: Tom Lane [mailto:[email protected]]\r\n> Gesendet: Samstag, 4. Februar 2017 06:16\r\n> \r\n> Titus von Boxberg <[email protected]> writes:\r\n> > I got the following problem for which I could not find a solution by\r\n> searching the archives:\r\n> > I have Tables Ta, Tb, Tc with primary keys as bigserials.\r\n> > Ta references Tb references Tc.\r\n> > Not all but most rows in Ta reference exactly one row in Tb.\r\n> \r\n> Hm, your problem query has 11 table scans (not to mention a couple of\r\n> subplans) so you're oversimplifying here. Anyway, I think that\r\n> increasing join_collapse_limit and/or from_collapse_limit to at least 11\r\n> might help.\r\n> As-is, you're more or less at the mercy of whether your textual query\r\n> structure corresponds to a good join order.\r\n> \r\n> \t\t\tregards, tom lane\r\n\r\nThanks, I found the problem:\r\n\r\nIn the slow join case the planner always fails to restrict\r\none subselect in the joined view using EXISTS and one with a SUM clause\r\nto the the one row that actually gets used by the join.\r\nBoth use functions that I forgot to declare STABLE.\r\nAfter correcting this, the query is fast and the explain output looks like expected.\r\n\r\nStill, it would be nice to know what makes the join different from a subselect.\r\nsetting geqo = off and varying join_collapse_limit and from_collapse_limit\r\nfrom 1 to 50 did not change anything in the initial behaviour.\r\nShouldn't the planner eventually find them being equivalent?\r\n\r\nRegards,\r\nTitus\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 4 Feb 2017 20:20:14 +0000",
"msg_from": "Titus von Boxberg <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: strange and slow joining of nested views"
}
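For reference, a sketch of the fix described above; f_lookup(bigint) stands in for the real functions used inside the views, whose names are not given:

    -- Record the correct volatility so the planner is free to optimize around the call:
    ALTER FUNCTION f_lookup(bigint) STABLE;

    -- Verify what is recorded ('i' = immutable, 's' = stable, 'v' = volatile):
    SELECT proname, provolatile FROM pg_proc WHERE proname = 'f_lookup';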
] |
[
{
"msg_contents": "Postgres Gurus,\n\nThank you all so much for your continued hard work with Postgres—the DB is amazing and I continue to look forward to each new release! One feature I’ve grown to love in the last year or two is the ltree column type. Unfortunately, I am starting to run into significant DB contention when performing basic and complex queries involving the ltree path and its associated GiST index. Here’s my setup:\n\n* PostgreSQL version 9.6.1 running on AWS RDS\n * 1 x master (db.r3.large - 2 vCPUs, 15GB RAM)\n * 1 x read replica (db.r3.large - 2 vCPUs, 15GB RAM)\n * extensions:\n * ltree v1.1\n * btree-gist v1.2\n\nI have a table called “organizations” with about 500,000 rows in it. The relevant portion of the schema looks like this:\n\n Table \"public.organizations\"\n Column | Type | Modifiers \n-----------------+--------------------------+--------------------------------\n id | character varying(36) | not null\n path | ltree | \nIndexes:\n \"organizations_pkey\" PRIMARY KEY, btree (id)\n \"path_gist_idx\" gist (path)\n \"path_idx\" btree (path)\n\nThe id column is a UUID and the path column is an ltree containing id-like hierarchical values. Example data is:\n\n id | path \n--------------------------------------+---------------------------------------------------------------------------------------------------------\n dba3511f-32ef-486f-ade2-7540ae28922e | root.dba3511f32ef486fade27540ae28922e\n 0209a983-88fa-47df-8328-6d9f39d60951 | root.dba3511f32ef486fade27540ae28922e.0209a98388fa47df83286d9f39d60951\n 05a49dba-a823-42e3-9f4b-f9ec9cdffcde | root.dba3511f32ef486fade27540ae28922e.05a49dbaa82342e39f4bf9ec9cdffcde\n 07166591-aba2-4e91-a00b-e1491edaa9b3 | root.dba3511f32ef486fade27540ae28922e.07166591aba24e91a00be1491edaa9b3\n 0777d32b-33f9-4131-a552-7e8b8a1355bb | root.dba3511f32ef486fade27540ae28922e.0777d32b33f94131a5527e8b8a1355bb\n 07ad8c30-76ad-4ea9-99c5-8e934ce45b03 | root.dba3511f32ef486fade27540ae28922e.07ad8c3076ad4ea999c58e934ce45b03\n 09566d1a-4924-4311-8687-d4389c130e76 | root.dba3511f32ef486fade27540ae28922e.09566d1a492443118687d4389c130e76\n 09cca793-af47-4f79-938f-72e1f37a5580 | root.dba3511f32ef486fade27540ae28922e.09cca793af474f79938f72e1f37a5580\n 0edc1ba7-830a-4da9-8f69-d3eb0611946f | root.dba3511f32ef486fade27540ae28922e.0edc1ba7830a4da98f69d3eb0611946f\n 10b20349-7da4-41d8-a780-e09b750b9236 | root.dba3511f32ef486fade27540ae28922e.10b203497da441d8a780e09b750b9236\n 2d55bfbb-4785-4368-a7e8-45afef6ae753 | root.dba3511f32ef486fade27540ae28922e.10b203497da441d8a780e09b750b9236.2d55bfbb47854368a7e845afef6ae753\n……\n(196 rows)\n\nI’m trying to track down issues causing queries to take 10-200ms and, I think, causing contention within the database which exacerbates the all query times.\n\nIssue #1: the query planner costs seem to be inaccurate, despite running ANALYZE on the organizations table and REINDEX on the GiST index:\n\ndb=> EXPLAIN ANALYZE SELECT path FROM organizations WHERE path ~ 'root.dba3511f32ef486fade27540ae28922e.*{1,}';\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on organizations (cost=72.25..1880.85 rows=495 width=134) (actual time=0.273..0.547 rows=195 loops=1)\n Recheck Cond: (path ~ 'root.dba3511f32ef486fade27540ae28922e.*{1,}'::lquery)\n Heap Blocks: exact=175\n -> Bitmap Index Scan on path_gist_idx (cost=0.00..72.12 rows=495 width=0) (actual time=0.252..0.252 rows=195 loops=1)\n Index Cond: (path ~ 
'root.dba3511f32ef486fade27540ae28922e.*{1,}'::lquery)\n Planning time: 0.061 ms\n Execution time: 0.603 ms\n(7 rows)\n\nThe previous sample data excerpt shows 196 rows out of almost 500,000 are a child of the ltree path being searched. The query planner shows a cost estimate of:\n\ncost=72.25..1880.85 rows=495\n\nand an actual time/rows of:\n\nactual time=0.273..0.547 rows=195\n\nIf I modify the query slightly to avoid the ‘~’ operator with the ltree GiST index, it improves the cost estimate and, in a production environment with lots of contention, the query takes about half the time to execute:\n\ndb=> EXPLAIN ANALYZE SELECT path FROM organizations WHERE path <@ 'root.dba3511f32ef486fade27540ae28922e' AND path != 'root.dba3511f32ef486fade27540ae28922e';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on organizations (cost=8.80..204.18 rows=50 width=134) (actual time=0.298..0.606 rows=195 loops=1)\n Recheck Cond: (path <@ 'root.dba3511f32ef486fade27540ae28922e'::ltree)\n Filter: (path <> 'root.dba3511f32ef486fade27540ae28922e'::ltree)\n Rows Removed by Filter: 1\n Heap Blocks: exact=176\n -> Bitmap Index Scan on path_gist_idx (cost=0.00..8.79 rows=50 width=0) (actual time=0.222..0.222 rows=196 loops=1)\n Index Cond: (path <@ 'root.dba3511f32ef486fade27540ae28922e'::ltree)\n Planning time: 0.115 ms\n Execution time: 0.665 ms\n(9 rows)\n\nBoth queries hit the GiST index and yet still have wildly inaccurate cost estimates. Can anyone provide advice on how to further track down the contention issues related to these queries? Or further optimizations to make on the ltree field? I think this is all related to the inaccurate cost estimates but I’m not sure.\n\nIssue #2:\n\nWhen using the same query with a join, combined with high CPU utilization due to contention, the an entire query takes orders of magnitude longer (32ms) than it’s individual components (index only scan of 0.161ms and bitmap heap scan of 0.804ms).\n\ndb=> EXPLAIN ANALYZE SELECT o.path FROM organizations o, hostname h WHERE o.id = h.organization_id AND path ~ 'root.dba3511f32ef486fade27540ae28922e.*{1,}';\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=72.66..3303.89 rows=41 width=134) (actual time=32.530..32.530 rows=0 loops=1)\n -> Bitmap Heap Scan on organizations o (cost=72.25..1880.85 rows=495 width=171) (actual time=0.400..0.894 rows=195 loops=1)\n Recheck Cond: (path ~ 'root.dba3511f32ef486fade27540ae28922e.*{1,}'::lquery)\n Heap Blocks: exact=175\n -> Bitmap Index Scan on path_gist_idx (cost=0.00..72.12 rows=495 width=0) (actual time=0.374..0.374 rows=195 loops=1)\n Index Cond: (path ~ 'root.dba3511f32ef486fade27540ae28922e.*{1,}'::lquery)\n -> Index Only Scan using hostname_organization_id_idx on hostname h (cost=0.41..2.86 rows=1 width=37) (actual time=0.161..0.161 rows=0 loops=195)\n Index Cond: (organization_id = (o.id)::text)\n Heap Fetches: 0\n Planning time: 0.521 ms\n Execution time: 32.577 ms\n(11 rows)\n\nDoes anyone have any ideas on helping to track down issues with contention? I’m not seeing any locks, so I’m thinking it’s related to these queries taking longer and stacking up on top of each other. 
The above JOIN query is taking on average 30ms to complete at a rate of about 10 queries/second.\n\nThanks!\n\nPC\n\n--------\nPC Drew\nCTO\nt: (800) 313-6438\ne: [email protected]\n\nSchoolBlocks.com <http://schoolblocks.com/> - School Websites Reimagined",
"msg_date": "Thu, 9 Feb 2017 11:48:16 -0700",
"msg_from": "PC Drew <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inaccurate GiST Index Cost Causes DB Contention"
}
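A hedged follow-up to the estimate mismatch described above: raising the statistics target for the path column and re-analyzing sometimes tightens the planner's row estimates, although there is no guarantee the lquery/ltree selectivity estimators benefit from it. A minimal sketch (table and column names are the ones from the message; the target of 1000 is an arbitrary assumption):

```sql
-- Assumption: a larger sample can sharpen planner estimates for the path
-- column; it may or may not influence the ~ / <@ selectivity estimates.
ALTER TABLE organizations ALTER COLUMN path SET STATISTICS 1000;
ANALYZE organizations;
```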
] |
[
{
"msg_contents": "**Short version of my question:**\n\nIf I hold a cursor reference to an astronomically huge result set in my\nclient code, would it be ridiculous (i.e. completely defeats the point of\ncursors) to issue \"FETCH ALL FROM cursorname\" as my next command? Or would\nthis slowly stream the data back to me as I consume it (at least in\nprinciple, assuming that I have a well written driver sitting between me\nand Postgres)?\n\n**More detail**\n\nIf I understand things at all correctly, then Postgres cursors are REALLY\nfor dealing with the following problem [even though they can be used\n(abused?) for other things, such as returning multiple different result\nsets from one function]:\n\n> Note: The current implementation of RETURN NEXT and RETURN QUERY\n> stores the entire result set before returning from the function, as\n> discussed above. That means that if a PL/pgSQL function produces a\n> very large result set, performance might be poor: data will be written\n> to disk to avoid memory exhaustion, but the function itself will not\n> return until the entire result set has been generated.\n\n(ref:\nhttps://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html)\n\nBut (again if I understand correctly) when you write a function which\nreturns a cursor then the whole query is NOT buffered into memory (and\ndisk) before the user of the function can start to consume anything, but\ninstead the results can be consumed bit by bit. (There is more overhead\nsetting up and using the cursor, but it's worth it to avoid massive buffer\nallocation for very large result sets.)\n\n(ref:\nhttps://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551)\n\nI would like to understand how this relates to SELECTS and FETCHES over the\nwire to a Postgres server.\n\nIn all cases, I'm talk about consuming results from client code which is\ncommunicating with Postgres on a socket behind the scenes (using the Npgsql\nlibrary in my case, actually).\n\nQ1: What if I try to execute \"SELECT * FROM AstronomicallyLargeTable\" as my\nonly command over the wire to Postgres? Will that allocate all the memory\nfor the entire select and then start to send data back to me? Or will it\n(effectively) generate its own cursor and stream the data back a little at\na time (with no huge additional buffer allocation on the server)?\n\nQ2: What if I already have a cursor reference to an astronomically large\nresult set (say because I've already done one round trip, and got back the\ncursor reference from some function), and then I execute \"FETCH ALL FROM\ncursorname\" over the wire to Postgres? Is that stupid, because it will\nallocate ALL the memory for all the results *on the Postgres server* before\nsending anything back to me? Or will \"FETCH ALL FROM cursorname\" actually\nwork as I'd like it to, streaming the data back slowly as I consume it,\nwithout any massive buffer allocation happening on the Postgres server?\n\n**Short version of my question:**If I hold a cursor reference to an astronomically huge result set in my client code, would it be ridiculous (i.e. completely defeats the point of cursors) to issue \"FETCH ALL FROM cursorname\" as my next command? Or would this slowly stream the data back to me as I consume it (at least in principle, assuming that I have a well written driver sitting between me and Postgres)?**More detail**If I understand things at all correctly, then Postgres cursors are REALLY for dealing with the following problem [even though they can be used (abused?) 
for other things, such as returning multiple different result sets from one function]:> Note: The current implementation of RETURN NEXT and RETURN QUERY> stores the entire result set before returning from the function, as> discussed above. That means that if a PL/pgSQL function produces a> very large result set, performance might be poor: data will be written> to disk to avoid memory exhaustion, but the function itself will not> return until the entire result set has been generated.(ref: https://www.postgresql.org/docs/9.6/static/plpgsql-control-structures.html)But (again if I understand correctly) when you write a function which returns a cursor then the whole query is NOT buffered into memory (and disk) before the user of the function can start to consume anything, but instead the results can be consumed bit by bit. (There is more overhead setting up and using the cursor, but it's worth it to avoid massive buffer allocation for very large result sets.)(ref: https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551)I would like to understand how this relates to SELECTS and FETCHES over the wire to a Postgres server.In all cases, I'm talk about consuming results from client code which is communicating with Postgres on a socket behind the scenes (using the Npgsql library in my case, actually).Q1: What if I try to execute \"SELECT * FROM AstronomicallyLargeTable\" as my only command over the wire to Postgres? Will that allocate all the memory for the entire select and then start to send data back to me? Or will it (effectively) generate its own cursor and stream the data back a little at a time (with no huge additional buffer allocation on the server)?Q2: What if I already have a cursor reference to an astronomically large result set (say because I've already done one round trip, and got back the cursor reference from some function), and then I execute \"FETCH ALL FROM cursorname\" over the wire to Postgres? Is that stupid, because it will allocate ALL the memory for all the results *on the Postgres server* before sending anything back to me? Or will \"FETCH ALL FROM cursorname\" actually work as I'd like it to, streaming the data back slowly as I consume it, without any massive buffer allocation happening on the Postgres server?",
"msg_date": "Fri, 17 Feb 2017 07:59:10 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Correct use of cursors for very large result sets in Postgres"
},
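For readers of the question above, the batch-at-a-time alternative discussed later in the thread looks roughly like the sketch below; the cursor and table names are placeholders, and the batch size of 10000 is an arbitrary assumption:

```sql
BEGIN;
-- Placeholder names; any SELECT works here.
DECLARE big_cur CURSOR FOR SELECT * FROM astronomically_large_table;
-- Repeat this FETCH until it returns fewer rows than requested.
FETCH 10000 FROM big_cur;
CLOSE big_cur;
COMMIT;
```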
{
"msg_contents": "I asked the same question at the same time on Stack Overflow (sincere\napologies if this is a breach of etiquette - I really needed an answer, and\nI thought the two communities might not overlap).\n\nStackoverflow now has an answer, by me:\nhttp://stackoverflow.com/q/42292341/#42297234 - which is based on\naccumulating the most consistent, coherent information from the answers and\ncomments given there so far.\n\nI think this is right, and I happily repeat it below, for anyone finding my\nquestion on this list. But I would still *love* to find official PostgreSQL\ndocumentation of all this. And of course to be told - quickly! - if anyone\nknows it is still wrong.\n\n***The answer is:***\n\n**Q1:** For `SELECT * FROM AstronomicallyHugeTable` sent over the wire,\nthen PostgreSQL will *not* generate a huge buffer, and will stream the data\nefficiently, starting quickly, to the client.\n\n**Q2:** For `FETCH ALL FROM CursorToAstronomicallyHugeTable` sent over the\nwire, then PostgreSQL will also *not* generate a huge buffer, and also will\nstream the data efficiently, starting quickly, to the client.\n\n**Implications of this for `FETCH ALL FROM cursor`**\n\nIF (and this is a big if) you have client software which is NOT going to\nstore all the fetched data anywhere, but is just trying to do something\nwith it row by row (and this presupposes that your data access layer\nsupports this, which Npgsql does), then there is nothing wrong with `FETCH\nALL FROM cursor`. No huge buffers anywhere. No long setup time. Processing\nhuge data this way will certainly run for a very long time - or at least\nuntil the user or some other condition aborts the process, and the cursor\ncan be closed. But it will start to run quickly, and its usage of resources\nwill be efficient.\n\n**WARNINGS**\n\nIt would *never* make sense to do `FETCH ALL FROM cursor` for\nastronomically large data, if your client side code (including your data\naccess layer) has any bottleneck at all at which means that all the data\nfrom a command is fetched before any processing can be done. Many data\naccess layers (and especially data access wrappers) are like this. So\nbeware. But it is also true that not all client side code is made this way.\n\nReturning huge data using a `TABLE` or `SETOF` return type from within a\nPostgeSQL function will *always* be broken (i.e. will create a huge buffer\nand take a very long time to start). This will be so whether the function\nis called from SQL to SQL or called over the wire. The bottleneck is before\nthe function returns. For efficient returns of very large data sets you\nmust use a cursor return from a function (or else do `SELECT *` directly\nover the wire), in every case.\n\nI asked the same question at the same time on Stack Overflow (sincere apologies if this is a breach of etiquette - I really needed an answer, and I thought the two communities might not overlap).Stackoverflow now has an answer, by me: http://stackoverflow.com/q/42292341/#42297234 - which is based on accumulating the most consistent, coherent information from the answers and comments given there so far.I think this is right, and I happily repeat it below, for anyone finding my question on this list. But I would still *love* to find official PostgreSQL documentation of all this. And of course to be told - quickly! 
- if anyone knows it is still wrong.***The answer is:*****Q1:** For `SELECT * FROM AstronomicallyHugeTable` sent over the wire, then PostgreSQL will *not* generate a huge buffer, and will stream the data efficiently, starting quickly, to the client.**Q2:** For `FETCH ALL FROM CursorToAstronomicallyHugeTable` sent over the wire, then PostgreSQL will also *not* generate a huge buffer, and also will stream the data efficiently, starting quickly, to the client.**Implications of this for `FETCH ALL FROM cursor`**IF (and this is a big if) you have client software which is NOT going to store all the fetched data anywhere, but is just trying to do something with it row by row (and this presupposes that your data access layer supports this, which Npgsql does), then there is nothing wrong with `FETCH ALL FROM cursor`. No huge buffers anywhere. No long setup time. Processing huge data this way will certainly run for a very long time - or at least until the user or some other condition aborts the process, and the cursor can be closed. But it will start to run quickly, and its usage of resources will be efficient.**WARNINGS**It would *never* make sense to do `FETCH ALL FROM cursor` for astronomically large data, if your client side code (including your data access layer) has any bottleneck at all at which means that all the data from a command is fetched before any processing can be done. Many data access layers (and especially data access wrappers) are like this. So beware. But it is also true that not all client side code is made this way.Returning huge data using a `TABLE` or `SETOF` return type from within a PostgeSQL function will *always* be broken (i.e. will create a huge buffer and take a very long time to start). This will be so whether the function is called from SQL to SQL or called over the wire. The bottleneck is before the function returns. For efficient returns of very large data sets you must use a cursor return from a function (or else do `SELECT *` directly over the wire), in every case.",
"msg_date": "Fri, 17 Feb 2017 12:04:59 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
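The warning above about `TABLE`/`SETOF` returns versus cursor returns can be made concrete with a minimal sketch; the function and table names below are invented for illustration, and the behavioural notes in the comments reflect the PL/pgSQL documentation quoted earlier rather than anything specific to a particular schema:

```sql
-- RETURN QUERY in PL/pgSQL materializes the whole result set in a
-- tuplestore before the function returns.
CREATE OR REPLACE FUNCTION all_rows_setof()
RETURNS SETOF astronomically_large_table
LANGUAGE plpgsql AS $$
BEGIN
    RETURN QUERY SELECT * FROM astronomically_large_table;
END;
$$;

-- A refcursor-returning function only opens the cursor; rows are produced
-- as the caller FETCHes from it within the same transaction.
CREATE OR REPLACE FUNCTION all_rows_cursor() RETURNS refcursor
LANGUAGE plpgsql AS $$
DECLARE
    c refcursor := 'all_rows_cur';
BEGIN
    OPEN c FOR SELECT * FROM astronomically_large_table;
    RETURN c;
END;
$$;
```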
{
"msg_contents": "Mike Beaton <[email protected]> writes:\n> [ generally accurate information ]\n\n> **WARNINGS**\n\n> It would *never* make sense to do `FETCH ALL FROM cursor` for\n> astronomically large data, if your client side code (including your data\n> access layer) has any bottleneck at all at which means that all the data\n> from a command is fetched before any processing can be done. Many data\n> access layers (and especially data access wrappers) are like this. So\n> beware. But it is also true that not all client side code is made this way.\n\nIt would probably be good to point out that most client-side libraries\nwill do it that way, including libpq, because then they can make success\nor failure of the query look atomic to the application. If you use an\nAPI that lets you see rows as they come off the wire, it's up to you\nto recover properly from a query failure that occurs after some/many rows\nhave already been returned.\n\n> Returning huge data using a `TABLE` or `SETOF` return type from within a\n> PostgeSQL function will *always* be broken (i.e. will create a huge buffer\n> and take a very long time to start). This will be so whether the function\n> is called from SQL to SQL or called over the wire.\n\nI believe this is false in general. I think it's probably true for all\nthe standard PL languages, because they don't want to bother with\nsuspending/resuming execution, so they make \"RETURN NEXT\" add the row to\na tuplestore not return it immediately. But it's definitely possible to\nwrite a C function that returns a row at a time, and depending on what the\ncalling SQL statement looks like, that could get streamed back to the\nclient live rather than being buffered first.\n\nAs a trivial example, if you do\n\tselect generate_series(1,100000000);\nin psql and watch what's happening with \"top\", you'll see psql's memory\nusage going through the roof (because libpq tries to buffer the result)\nbut the connected backend's memory usage is steady as a rock --- nor\ndoes it dump the data into a temporary file. On the other hand,\n\tselect * from generate_series(1,100000000);\ndoes dump the data into a temp file, something we ought to work on\nimproving.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Feb 2017 11:39:40 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "Dear Tom,\n\nThis is very helpful, thank you.\n\nYou make a very useful point that the limitation is basically on PL/pgSQL\nand other PL languages. And someone on SO already pointed out that an\ninline SQL function with a enormous sized TABLE return value also doesn't\nhave any buffering problems. So that's a very convenient option, whenever\nSQL alone is powerful enough.\n\nYou make the further very helpful point that any library which is written\nusing `libpq` won't work as desired on `FETCH ALL FROM HugeCursor`. But I\ndon't know whether that's 'most' libraries. I think that depends on your\nprogramming milieu! I'm working in the world of ADO.NET (but the same seems\nto apply to JDBC) where most low level drivers are not written using\n`libpq` but rather directly with sockets against the database - which makes\nsense because a streaming data reader is part of the contract which those\ndrivers have to implement.\n\nIt's definitely worth noting that the `FETCH 100000 FROM cursor` until\nexhausted pattern will *always* be safe. But most fundamentally I did, very\nspecifically, want to know if the `FETCH ALL FROM\nCursorToAstronomicallyLargeData` pattern can *ever* work sensibly. It seems\nit clearly can and does if certain assumptions are met. Assumptions which I\nactually know *are* met, in the case in which I potentially wanted to use\nit!\n\nOne outstanding question I have. Based on a lot of helpful responses given\nto the SO question I can now test and see what disk buffers are generated\n(by setting `log_temp_files` to `0` and then `tail -f log`), as well as how\nlong it takes for results to start arriving.\n\nWith a large (10,000,000 row) test table, if I do `SELECT * FROM table` on\npsql it starts to return results immediately with no disk buffer. If I do\n`FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start\nreturning results, and generates a 14MB buffer. If I do `SELECT * FROM\ntable` on a correctly coded streaming client, it also starts to return\nresults immediately with no disk buffer. But if I do `FETCH ALL FROM\ncursortotable` from my streaming client, it takes about 1.5 seconds for\nresults to start coming... but again with no disk buffer, as hoped\n\nI was kind of hoping that the 'it creates a buffer' and the 'it takes a\nwhile to start' issues would be pretty much directly aligned, but it's\nclearly not as simple as that! I don't know if you can offer any more\nhelpful insight on this last aspect?\n\nMany thanks,\n\nMike\n\nDear Tom,This is very helpful, thank you.You make a very useful point that the limitation is basically on PL/pgSQL and other PL languages. And someone on SO already pointed out that an inline SQL function with a enormous sized TABLE return value also doesn't have any buffering problems. So that's a very convenient option, whenever SQL alone is powerful enough.You make the further very helpful point that any library which is written using `libpq` won't work as desired on `FETCH ALL FROM HugeCursor`. But I don't know whether that's 'most' libraries. I think that depends on your programming milieu! I'm working in the world of ADO.NET (but the same seems to apply to JDBC) where most low level drivers are not written using `libpq` but rather directly with sockets against the database - which makes sense because a streaming data reader is part of the contract which those drivers have to implement.It's definitely worth noting that the `FETCH 100000 FROM cursor` until exhausted pattern will *always* be safe. 
But most fundamentally I did, very specifically, want to know if the `FETCH ALL FROM CursorToAstronomicallyLargeData` pattern can *ever* work sensibly. It seems it clearly can and does if certain assumptions are met. Assumptions which I actually know *are* met, in the case in which I potentially wanted to use it!One outstanding question I have. Based on a lot of helpful responses given to the SO question I can now test and see what disk buffers are generated (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how long it takes for results to start arriving.With a large (10,000,000 row) test table, if I do `SELECT * FROM table` on psql it starts to return results immediately with no disk buffer. If I do `FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start returning results, and generates a 14MB buffer. If I do `SELECT * FROM table` on a correctly coded streaming client, it also starts to return results immediately with no disk buffer. But if I do `FETCH ALL FROM cursortotable` from my streaming client, it takes about 1.5 seconds for results to start coming... but again with no disk buffer, as hopedI was kind of hoping that the 'it creates a buffer' and the 'it takes a while to start' issues would be pretty much directly aligned, but it's clearly not as simple as that! I don't know if you can offer any more helpful insight on this last aspect?Many thanks,Mike",
"msg_date": "Sat, 18 Feb 2017 07:57:10 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
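For anyone wanting to reproduce the temp-file observations described above, a minimal sketch of the logging setup (the parameter is superuser-only, so set it as a superuser or in postgresql.conf):

```sql
-- 0 means "log every temporary file, whatever its size".
SET log_temp_files = 0;
-- Then watch the server log (e.g. tail -f on the current logfile) while
-- running the SELECT vs. FETCH ALL comparisons from the message above.
```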
{
"msg_contents": "I meant to say: \"the `FETCH 10000 FROM cursor` until exhausted pattern will\nalways be safe\". Nasty typo, sorry!\n\nI meant to say: \"the `FETCH 10000 FROM cursor` until exhausted pattern will always be safe\". Nasty typo, sorry!",
"msg_date": "Sat, 18 Feb 2017 08:46:41 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "Mike Beaton <[email protected]> writes:\n> One outstanding question I have. Based on a lot of helpful responses given\n> to the SO question I can now test and see what disk buffers are generated\n> (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how\n> long it takes for results to start arriving.\n\n> With a large (10,000,000 row) test table, if I do `SELECT * FROM table` on\n> psql it starts to return results immediately with no disk buffer. If I do\n> `FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start\n> returning results, and generates a 14MB buffer. If I do `SELECT * FROM\n> table` on a correctly coded streaming client, it also starts to return\n> results immediately with no disk buffer. But if I do `FETCH ALL FROM\n> cursortotable` from my streaming client, it takes about 1.5 seconds for\n> results to start coming... but again with no disk buffer, as hoped\n\nSeems odd. Is your cursor just on \"SELECT * FROM table\", or is there\nsome processing in there you're not mentioning? Maybe it's a cursor\nWITH HOLD and you're exiting the source transaction?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 Feb 2017 12:43:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
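Tom's question about WITH HOLD matters because a holdable cursor is materialized when the creating transaction commits, which by itself produces exactly the kind of temp file being discussed. A minimal sketch against the `large` test table defined in the next message:

```sql
BEGIN;
DECLARE c_hold CURSOR WITH HOLD FOR SELECT id FROM large;
COMMIT;                -- the whole result set is materialized here so the
                       -- cursor can outlive its transaction (may spill to disk)
FETCH 10 FROM c_hold;  -- still usable after COMMIT
CLOSE c_hold;
```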
{
"msg_contents": "> Seems odd. Is your cursor just on \"SELECT * FROM table\", or is there\n> some processing in there you're not mentioning? Maybe it's a cursor\n> WITH HOLD and you're exiting the source transaction?\n\nHi Tom,\n\nI've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.\n\nNew TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file\non `FETCH ALL FROM CursorToHuge`.\n\nThe test data is created by:\n\n`SELECT * INTO large FROM generate_series(1, 10000000) id;`\n\nThe test function to generate the cursor is:\n\n````\nCREATE OR REPLACE FUNCTION lump() RETURNS refcursor\n LANGUAGE plpgsql AS\n$$DECLARE\n c CURSOR FOR SELECT id FROM large;\nBEGIN\n c := 'c';\n OPEN c;\n RETURN c;\nEND;$$;\n````\n\nThe two tests are:\n\n`SELECT * FROM large;`\n\nResult: no buffer file.\n\nAnd:\n\n````\nBEGIN;\nSELECT lump();\nFETCH ALL FROM c;\nCOMMIT;\n````\n\nResult: 14MB buffer, every time.\n\nThe buffer file appears in `base\\pgsql_tmp` while the data is streaming but\nonly appears in the Postgres log file at the point when it is released\n(which makes sense, as its final size is part of the log row).\n\nThis has the additionally confusing result that the buffer file is reported\nin the Postgres logs just before the user sees the first row of data on\n`psql` (and on anything using `libpq`), but just after the user sees the\nlast row of data, on any client program which is streaming the data via a\nstreaming data access layer (such as `Npgsql`, or `JDBC` with the right\nconfiguration).\n\n> Seems odd. Is your cursor just on \"SELECT * FROM table\", or is there> some processing in there you're not mentioning? Maybe it's a cursor> WITH HOLD and you're exiting the source transaction?Hi Tom,I've deleted my own Stack Overflow answer in favour of Laurenz Albe's one.New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file on `FETCH ALL FROM CursorToHuge`.The test data is created by:`SELECT * INTO large FROM generate_series(1, 10000000) id;`The test function to generate the cursor is:````CREATE OR REPLACE FUNCTION lump() RETURNS refcursor LANGUAGE plpgsql AS$$DECLARE c CURSOR FOR SELECT id FROM large;BEGIN c := 'c'; OPEN c; RETURN c;END;$$;````The two tests are:`SELECT * FROM large;`Result: no buffer file.And:````BEGIN;SELECT lump();FETCH ALL FROM c;COMMIT;````Result: 14MB buffer, every time.The buffer file appears in `base\\pgsql_tmp` while the data is streaming but only appears in the Postgres log file at the point when it is released (which makes sense, as its final size is part of the log row).This has the additionally confusing result that the buffer file is reported in the Postgres logs just before the user sees the first row of data on `psql` (and on anything using `libpq`), but just after the user sees the last row of data, on any client program which is streaming the data via a streaming data access layer (such as `Npgsql`, or `JDBC` with the right configuration).",
"msg_date": "Sun, 19 Feb 2017 07:54:18 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "The generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes\nsense.\n\nI have done another test.\n\nIf I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the\nPostgreSQL server, reported in its log.\n\nIf I execute `FETCH 5000000 FROM cursor` (exactly half the rows), I see a\n70MB disk buffer file.\n\nThis is regardless of how many rows I actually stream from thE connection\nbefore closing the cursor.\n\nThe generated buffer is 140MB, not 14MB. At 14 bytes per row, that makes sense.I have done another test.If I execute `FETCH ALL FROM cursor` I get a 140MB disk buffer file, on the PostgreSQL server, reported in its log.If I execute `FETCH 5000000 FROM cursor` (exactly half the rows), I see a 70MB disk buffer file.This is regardless of how many rows I actually stream from thE connection before closing the cursor.",
"msg_date": "Tue, 21 Feb 2017 12:36:36 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "Mike Beaton <[email protected]> writes:\n> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file\n> on `FETCH ALL FROM CursorToHuge`.\n\nI poked into this and determined that it's happening because pquery.c\nexecutes FETCH statements the same as it does with any other\ntuple-returning utility statement, ie \"run it to completion and put\nthe results in a tuplestore, then send the tuplestore contents to the\nclient\". I think the main reason nobody worried about that being\nnon-optimal was that we weren't expecting people to FETCH very large\namounts of data in one go --- if you want the whole query result at\nonce, why are you bothering with a cursor?\n\nThis could probably be improved, but it would (I think) require inventing\nan additional PortalStrategy specifically for FETCH, and writing\nassociated code paths in pquery.c. Don't know when/if someone might get\nexcited enough about it to do that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Feb 2017 08:32:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
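Given that explanation (each FETCH is run to completion into a tuplestore before anything is sent), one practical takeaway, consistent with the 140MB-versus-70MB observation earlier in the thread, is that the batch size also bounds the server-side buffer per round trip. A hedged sketch using the cursor function from earlier in the thread:

```sql
BEGIN;
SELECT lump();        -- opens the cursor named 'c' (function defined earlier)
-- Each FETCH materializes only the rows it asks for, so the per-batch
-- tuplestore stays modest even though the cursor covers all 10 million rows.
FETCH 10000 FROM c;   -- repeat until it returns fewer than 10000 rows
CLOSE c;
COMMIT;
```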
{
"msg_contents": "Thanks, Tom.\n\nWouldn't this mean that cursors are noticeably non-optimal even for normal\ndata sizes, since the entire data to be streamed from the table is always\nduplicated into another buffer and then streamed?\n\n> if you want the whole query result at once, why are you bothering with a\ncursor?\n\nThe PostgreSQL docs (\nhttps://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551)\nclearly\nrecommend cursors as a way to return a reference to a large result set from\na function (as I understood, this is recommended precisely as a way to\navoid tuple-based buffering of the data).\n\nSo following that advice, it's not unreasonable that I would actually have\na cursor to a large dataset.\n\nThen, I would ideally want to be able to fetch the data from that cursor\nwithout the entire data getting duplicated (even if only a bit at a time\ninstead of all at once, which seems to be the best case behaviour) as I go.\n\nAdditionally, I thought that if I had a streaming use-case (which I do),\nand a streaming data-access layer (which I do), then since `SELECT * FROM\nlarge` is absolutely fine, end-to-end, in that situation, then by symmetry\nand the principle of least astonishment `FETCH ALL FROM cursor` might be\nfine too.\n\nThanks, Tom.Wouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?> if you want the whole query result at once, why are you bothering with a cursor?The PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly recommend cursors as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).So following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.Then, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if only a bit at a time instead of all at once, which seems to be the best case behaviour) as I go.Additionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.",
"msg_date": "Tue, 21 Feb 2017 13:49:09 +0000",
"msg_from": "Mike Beaton <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "My experience with cursors in PostgreSQL with Java has been to stay away from them. We support 2 databases with our product, PostgreSQL (default) and SQL Server. While re-encrypting data in a database the application used cursors with a fetch size of 1000.\r\n\r\nWorked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL table with more than 11 million rows. After spending weeks trying to figure out what was happening, I realized that when it gets to a table with more than 10 million rows for some reason, the cursor functionality just silently stopped working and it was reading the entire table. I asked another very senior architect to look at it and he came to the same conclusion. Because of limited time, I ended up working around it using limit/offset.\r\n\r\nAgain we are using Java, so the problem could just be in the PostgreSQL JDBC driver. Also we were on 9.1 at the time.\r\n\r\nRegards\r\nJohn\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Mike Beaton\r\nSent: Tuesday, February 21, 2017 6:49 AM\r\nTo: [email protected]\r\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\r\n\r\nThanks, Tom.\r\n\r\nWouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?\r\n\r\n> if you want the whole query result at once, why are you bothering with a cursor?\r\n\r\nThe PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly recommend cursors as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).\r\n\r\nSo following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.\r\n\r\nThen, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if only a bit at a time instead of all at once, which seems to be the best case behaviour) as I go.\r\n\r\nAdditionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.\r\n\r\n\n\n\n\n\n\n\n\n\nMy experience with cursors in PostgreSQL with Java has been to stay away from them. We support 2 databases with our product, PostgreSQL (default) and SQL Server. While re-encrypting\r\n data in a database the application used cursors with a fetch size of 1000.\n \nWorked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL table with more than 11 million rows. After spending weeks trying to figure out what was happening,\r\n I realized that when it gets to a table with more than 10 million rows for some reason, the cursor functionality just silently stopped working and it was reading the entire table. I asked another very senior architect to look at it and he came to the same\r\n conclusion. Because of limited time, I ended up working around it using limit/offset.\n \nAgain we are using Java, so the problem could just be in the PostgreSQL JDBC driver. 
Also we were on 9.1 at the time.\n \nRegards\nJohn\n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Mike Beaton\nSent: Tuesday, February 21, 2017 6:49 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\n \n\nThanks, Tom.\n\r\nWouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?\n\n \n\n\n> if you want the whole query result at once, why are you bothering with a cursor?\n\n\n \n\n\nThe PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly recommend cursors\r\n as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).\n\n\n \n\n\nSo following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.\n\n\n \n\n\nThen, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if only a bit at a time instead of all at once, which seems to be the best case behaviour)\r\n as I go.\n\n\n \n\n\nAdditionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then\r\n by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.",
"msg_date": "Tue, 21 Feb 2017 14:06:37 +0000",
"msg_from": "John Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in\n Postgres"
},
{
"msg_contents": "For JDBC there are certain prerequisites for setFetchSize to work, e.g.\nusing forward only result sets and transactions.\n\nвт, 21 лют. 2017 о 09:06 John Gorman <[email protected]> пише:\n\n> My experience with cursors in PostgreSQL with Java has been to stay away\n> from them. We support 2 databases with our product, PostgreSQL (default)\n> and SQL Server. While re-encrypting data in a database the application used\n> cursors with a fetch size of 1000.\n>\n>\n>\n> Worked perfectly on SQL Server and on PostgreSQL until we got to a\n> PostgreSQL table with more than 11 million rows. After spending weeks\n> trying to figure out what was happening, I realized that when it gets to a\n> table with more than 10 million rows for some reason, the cursor\n> functionality just silently stopped working and it was reading the entire\n> table. I asked another very senior architect to look at it and he came to\n> the same conclusion. Because of limited time, I ended up working around it\n> using limit/offset.\n>\n>\n>\n> Again we are using Java, so the problem could just be in the PostgreSQL\n> JDBC driver. Also we were on 9.1 at the time.\n>\n>\n>\n> Regards\n>\n> John\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Mike Beaton\n> *Sent:* Tuesday, February 21, 2017 6:49 AM\n> *To:* [email protected]\n> *Subject:* Re: [PERFORM] Correct use of cursors for very large result\n> sets in Postgres\n>\n>\n>\n> Thanks, Tom.\n>\n> Wouldn't this mean that cursors are noticeably non-optimal even for normal\n> data sizes, since the entire data to be streamed from the table is always\n> duplicated into another buffer and then streamed?\n>\n>\n>\n> > if you want the whole query result at once, why are you bothering with\n> a cursor?\n>\n>\n>\n> The PostgreSQL docs (\n> https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly\n> recommend cursors as a way to return a reference to a large result set from\n> a function (as I understood, this is recommended precisely as a way to\n> avoid tuple-based buffering of the data).\n>\n>\n>\n> So following that advice, it's not unreasonable that I would actually have\n> a cursor to a large dataset.\n>\n>\n>\n> Then, I would ideally want to be able to fetch the data from that cursor\n> without the entire data getting duplicated (even if only a bit at a time\n> instead of all at once, which seems to be the best case behaviour) as I go.\n>\n>\n>\n> Additionally, I thought that if I had a streaming use-case (which I do),\n> and a streaming data-access layer (which I do), then since `SELECT * FROM\n> large` is absolutely fine, end-to-end, in that situation, then by symmetry\n> and the principle of least astonishment `FETCH ALL FROM cursor` might be\n> fine too.\n>\n>\n>\n\nFor JDBC there are certain prerequisites for setFetchSize to work, e.g. using forward only result sets and transactions.вт, 21 лют. 2017 о 09:06 John Gorman <[email protected]> пише:\n\n\nMy experience with cursors in PostgreSQL with Java has been to stay away from them. We support 2 databases with our product, PostgreSQL (default) and SQL Server. While re-encrypting\n data in a database the application used cursors with a fetch size of 1000.\n \nWorked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL table with more than 11 million rows. 
After spending weeks trying to figure out what was happening,\n I realized that when it gets to a table with more than 10 million rows for some reason, the cursor functionality just silently stopped working and it was reading the entire table. I asked another very senior architect to look at it and he came to the same\n conclusion. Because of limited time, I ended up working around it using limit/offset.\n \nAgain we are using Java, so the problem could just be in the PostgreSQL JDBC driver. Also we were on 9.1 at the time.\n \nRegards\nJohn\n \nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Mike Beaton\nSent: Tuesday, February 21, 2017 6:49 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\n \n\nThanks, Tom.\n\nWouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?\n\n \n\n\n> if you want the whole query result at once, why are you bothering with a cursor?\n\n\n \n\n\nThe PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly recommend cursors\n as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).\n\n\n \n\n\nSo following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.\n\n\n \n\n\nThen, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if only a bit at a time instead of all at once, which seems to be the best case behaviour)\n as I go.\n\n\n \n\n\nAdditionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then\n by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.",
"msg_date": "Thu, 23 Feb 2017 03:13:45 +0000",
"msg_from": "Vitalii Tymchyshyn <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in Postgres"
},
{
"msg_contents": "Yes of course that’s all verified and taken into account during code initialization\r\n\r\n\r\nFrom: Vitalii Tymchyshyn [mailto:[email protected]]\r\nSent: Wednesday, February 22, 2017 8:14 PM\r\nTo: John Gorman; [email protected]\r\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\r\n\r\nFor JDBC there are certain prerequisites for setFetchSize to work, e.g. using forward only result sets and transactions.\r\n\r\nвт, 21 лют. 2017 о 09:06 John Gorman <[email protected]<mailto:[email protected]>> пише:\r\nMy experience with cursors in PostgreSQL with Java has been to stay away from them. We support 2 databases with our product, PostgreSQL (default) and SQL Server. While re-encrypting data in a database the application used cursors with a fetch size of 1000.\r\n\r\nWorked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL table with more than 11 million rows. After spending weeks trying to figure out what was happening, I realized that when it gets to a table with more than 10 million rows for some reason, the cursor functionality just silently stopped working and it was reading the entire table. I asked another very senior architect to look at it and he came to the same conclusion. Because of limited time, I ended up working around it using limit/offset.\r\n\r\nAgain we are using Java, so the problem could just be in the PostgreSQL JDBC driver. Also we were on 9.1 at the time.\r\n\r\nRegards\r\nJohn\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Mike Beaton\r\nSent: Tuesday, February 21, 2017 6:49 AM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\r\n\r\nThanks, Tom.\r\n\r\nWouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?\r\n\r\n> if you want the whole query result at once, why are you bothering with a cursor?\r\n\r\nThe PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly recommend cursors as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).\r\n\r\nSo following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.\r\n\r\nThen, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if only a bit at a time instead of all at once, which seems to be the best case behaviour) as I go.\r\n\r\nAdditionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.\r\n\r\n\n\n\n\n\n\n\n\n\nYes of course that’s all verified and taken into account during code initialization\n \n \nFrom: Vitalii Tymchyshyn [mailto:[email protected]]\r\n\nSent: Wednesday, February 22, 2017 8:14 PM\nTo: John Gorman; [email protected]\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\n \n\nFor JDBC there are certain prerequisites for setFetchSize to work, e.g. 
using forward only result sets and transactions.\n\n \n\n\nвт, 21 лют. 2017 о 09:06 John Gorman <[email protected]> пише:\n\n\n\n\nMy experience with cursors in PostgreSQL with Java has been to stay away from them. We support\r\n 2 databases with our product, PostgreSQL (default) and SQL Server. While re-encrypting data in a database the application used cursors with a fetch size of 1000.\n \nWorked perfectly on SQL Server and on PostgreSQL until we got to a PostgreSQL table with\r\n more than 11 million rows. After spending weeks trying to figure out what was happening, I realized that when it gets to a table with more than 10 million rows for some reason, the cursor functionality just silently stopped working and it was reading the entire\r\n table. I asked another very senior architect to look at it and he came to the same conclusion. Because of limited time, I ended up working around it using limit/offset.\n \nAgain we are using Java, so the problem could just be in the PostgreSQL JDBC driver. Also\r\n we were on 9.1 at the time.\n \nRegards\nJohn\n \nFrom:\[email protected] [mailto:[email protected]]\r\nOn Behalf Of Mike Beaton\nSent: Tuesday, February 21, 2017 6:49 AM\nTo: \r\[email protected]\nSubject: Re: [PERFORM] Correct use of cursors for very large result sets in Postgres\n\n\n\n\n \n\nThanks, Tom.\n\r\nWouldn't this mean that cursors are noticeably non-optimal even for normal data sizes, since the entire data to be streamed from the table is always duplicated into another buffer and then streamed?\n\n \n\n\n> if you want the whole query result at once, why are you bothering with a cursor?\n\n\n \n\n\nThe PostgreSQL docs (https://www.postgresql.org/docs/9.6/static/plpgsql-cursors.html#AEN66551) clearly\r\n recommend cursors as a way to return a reference to a large result set from a function (as I understood, this is recommended precisely as a way to avoid tuple-based buffering of the data).\n\n\n \n\n\nSo following that advice, it's not unreasonable that I would actually have a cursor to a large dataset.\n\n\n \n\n\nThen, I would ideally want to be able to fetch the data from that cursor without the entire data getting duplicated (even if\r\n only a bit at a time instead of all at once, which seems to be the best case behaviour) as I go.\n\n\n \n\n\nAdditionally, I thought that if I had a streaming use-case (which I do), and a streaming data-access layer (which I do), then\r\n since `SELECT * FROM large` is absolutely fine, end-to-end, in that situation, then by symmetry and the principle of least astonishment `FETCH ALL FROM cursor` might be fine too.",
"msg_date": "Thu, 23 Feb 2017 12:47:09 +0000",
"msg_from": "John Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Correct use of cursors for very large result sets in\n Postgres"
}
] |
[
{
"msg_contents": "I am seeing this strange behavior, I don't know if this is by design by\nPostgres.\n\nI have an index on a column which is defined as \"character varying(255)\".\nWhen the value I am searching for is of a certain length, the optimizer\nuses the index but when the value is long, the optimizer doesn't use the\nindex but does a seq scan on the table. Is this by design? How can I make\nthe optimizer use the index no matter what the size/length of the value\nbeing searched for?\n\n\nPostgreSQL version: 9.4\n\n\nmy_db=# explain (analyze, buffers) select count(*) from tab where ID =\n'01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' ;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Aggregate (cost=14.80..14.81 rows=1 width=0) (actual time=0.114..0.114\nrows=1 loops=1)\n Buffers: shared hit=12\n -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\ntime=0.025..0.109 rows=5 loops=1)\n Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n Rows Removed by Filter: 218\n Buffers: shared hit=12\n Planning time: 0.155 ms\n Execution time: 0.167 ms\n(8 rows)\n\nmy_db=# create index tab_idx1 on tab(ID);\n\nCREATE INDEX\nmy_db=# explain (analyze, buffers) select count(*) from tab where ID = '\n01625cfa-2bf8-45cf' ;\n QUERY PLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048\nrows=1 loops=1)\n Buffers: shared read=2\n -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1\nwidth=0) (actual time=0.043..0.043 rows=0 loops=1)\n Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n Heap Fetches: 0\n Buffers: shared read=2\n Planning time: 0.250 ms\n Execution time: 0.096 ms\n(8 rows)\n\nmy_db=# explain (analyze, buffers) select count(*) from tab where ID = '\n01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' ;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Aggregate (cost=14.80..14.81 rows=1 width=0) (actual time=0.115..0.115\nrows=1 loops=1)\n Buffers: shared hit=12\n -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\ntime=0.031..0.108 rows=5 loops=1)\n Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n Rows Removed by Filter: 218\n Buffers: shared hit=12\n Planning time: 0.122 ms\n Execution time: 0.180 ms\n(8 rows)\n\nmy_db=#\n\nI am seeing this strange behavior, I don't know if this is by design by Postgres. I have an index on a column which is defined as \"character varying(255)\". When the value I am searching for is of a certain length, the optimizer uses the index but when the value is long, the optimizer doesn't use the index but does a seq scan on the table. Is this by design? How can I make the optimizer use the index no matter what the size/length of the value being searched for? 
PostgreSQL version: 9.4 my_db=# explain (analyze, buffers) select count(*) from tab where ID = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' ; QUERY PLAN ----------------------------------------------------------------------------------------------------------- Aggregate (cost=14.80..14.81 rows=1 width=0) (actual time=0.114..0.114 rows=1 loops=1) Buffers: shared hit=12 -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual time=0.025..0.109 rows=5 loops=1) Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text) Rows Removed by Filter: 218 Buffers: shared hit=12 Planning time: 0.155 ms Execution time: 0.167 ms(8 rows)my_db=# create index tab_idx1 on tab(ID); CREATE INDEXmy_db=# explain (analyze, buffers) select count(*) from tab where ID = '01625cfa-2bf8-45cf' ; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048 rows=1 loops=1) Buffers: shared read=2 -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1) Index Cond: (ID = '01625cfa-2bf8-45cf'::text) Heap Fetches: 0 Buffers: shared read=2 Planning time: 0.250 ms Execution time: 0.096 ms(8 rows)my_db=# explain (analyze, buffers) select count(*) from tab where ID = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' ; QUERY PLAN ----------------------------------------------------------------------------------------------------------- Aggregate (cost=14.80..14.81 rows=1 width=0) (actual time=0.115..0.115 rows=1 loops=1) Buffers: shared hit=12 -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual time=0.031..0.108 rows=5 loops=1) Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text) Rows Removed by Filter: 218 Buffers: shared hit=12 Planning time: 0.122 ms Execution time: 0.180 ms(8 rows)my_db=#",
"msg_date": "Fri, 17 Feb 2017 17:19:00 -0500",
"msg_from": "Hustler DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Number of characters in column preventing index usage"
},
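One additional check worth running on the question above is to see what the planner does when sequential scans are discouraged; if the index is then used, the original choice was purely a cost decision on a tiny table rather than an inability to use the index. A minimal sketch:

```sql
SET enable_seqscan = off;  -- strongly penalizes (does not forbid) seq scans
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tab WHERE id = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea';
RESET enable_seqscan;
```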
{
"msg_contents": "On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <[email protected]> wrote:\n\n>\n> my_db=# create index tab_idx1 on tab(ID);\n>\n> CREATE INDEX\n> my_db=# explain (analyze, buffers) select count(*) from tab where ID = '\n> 01625cfa-2bf8-45cf' ;\n> QUERY PLAN\n>\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ---------------\n> Aggregate (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048\n> rows=1 loops=1)\n> Buffers: shared read=2\n> -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1\n> width=0) (actual time=0.043..0.043 rows=0 loops=1)\n> Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n>\n>\n\n> -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\n> time=0.031..0.108 rows=5 loops=1)\n> Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea\n> '::text)\n> Rows Removed by Filter: 218\n> Buffers: shared hit=12\n> Planning time: 0.122 ms\n> Execution time: 0.180 ms\n> (8 rows)\n>\n\nIIRC the only reason the first query cares to use the index is because it\ncan perform an Index Only Scan and thus avoid touching the heap at all. If\nit cannot avoid touching the heap the planner is going to just use a\nsequential scan to retrieve the records directly from the heap and save the\nindex lookup step.\n\nDavid J.\n\nOn Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <[email protected]> wrote:my_db=# create index tab_idx1 on tab(ID); CREATE INDEXmy_db=# explain (analyze, buffers) select count(*) from tab where ID = '01625cfa-2bf8-45cf' ; QUERY PLAN --------------------------------------------------------------------------------------------------------------------------------------- Aggregate (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048 rows=1 loops=1) Buffers: shared read=2 -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1) Index Cond: (ID = '01625cfa-2bf8-45cf'::text) -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual time=0.031..0.108 rows=5 loops=1) Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text) Rows Removed by Filter: 218 Buffers: shared hit=12 Planning time: 0.122 ms Execution time: 0.180 ms(8 rows)IIRC the only reason the first query cares to use the index is because it can perform an Index Only Scan and thus avoid touching the heap at all. If it cannot avoid touching the heap the planner is going to just use a sequential scan to retrieve the records directly from the heap and save the index lookup step.David J.",
"msg_date": "Fri, 17 Feb 2017 15:42:26 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "Hi,\n\nOn 02/17/2017 11:19 PM, Hustler DBA wrote:\n> I am seeing this strange behavior, I don't know if this is by design by\n> Postgres.\n>\n> I have an index on a column which is defined as \"character\n> varying(255)\". When the value I am searching for is of a certain length,\n> the optimizer uses the index but when the value is long, the optimizer\n> doesn't use the index but does a seq scan on the table. Is this by\n> design? How can I make the optimizer use the index no matter what the\n> size/length of the value being searched for?\n>\n\nAFAIK there are no such checks, i.e. the optimizer does not consider the \nlength of the value when deciding between scan types.\n\n>\n> PostgreSQL version: 9.4\n>\n\nThat's good to know, but we also need information about the table \ninvolved in your queries. I'd bet the table is tiny (it seems to be just \n12 pages, so ~100kB), making the indexes rather useless.\n\n> my_db=# explain (analyze, buffers) select count(*) from tab where ID =\n> '01625cfa-2bf8-45cf' ;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=8.29..8.30 rows=1 width=0) (actual time=0.048..0.048\n> rows=1 loops=1)\n> Buffers: shared read=2\n> -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1\n> width=0) (actual time=0.043..0.043 rows=0 loops=1)\n> Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n> Heap Fetches: 0\n> Buffers: shared read=2\n> Planning time: 0.250 ms\n> Execution time: 0.096 ms\n> (8 rows)\n>\n> my_db=# explain (analyze, buffers) select count(*) from tab where ID =\n> '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' ;\n> QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n> Aggregate (cost=14.80..14.81 rows=1 width=0) (actual time=0.115..0.115\n> rows=1 loops=1)\n> Buffers: shared hit=12\n> -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\n> time=0.031..0.108 rows=5 loops=1)\n> Filter: ((ID)::text = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n> Rows Removed by Filter: 218\n> Buffers: shared hit=12\n> Planning time: 0.122 ms\n> Execution time: 0.180 ms\n> (8 rows)\n\nThe only difference I see is that for the long value the planner expects \n5 rows, while for the short one it expects 1 row. That may seem a bit \nstrange, but I'd bet it finds the short value in some statistic (MCV, \nhistogram) ans so can provide very accurate estimate. While for the \nlonger one, it ends up using some default (0.5% for equality IIRC) or \nvalue deduced from ndistinct. Or something like that.\n\nThe differences between the two plans are rather negligible, both in \nterms of costs (8.3 vs. 14.81) and runtime (0.1 vs 0.2 ms). The choice \nof a sequential scan seems perfectly reasonable for such tiny tables.\n\nFWIW it's impossible to draw conclusions based on two EXPLAIN ANALYZE \nexecutions. The timing instrumentation from EXPLAIN ANALYZE may have \nsignificant impact impact (different for each plan!). You also need to \ntesting with more values and longer runs, not just a single execution \n(there are caching effects etc.)\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Feb 2017 23:49:29 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "Hustler DBA wrote:\n> I am seeing this strange behavior, I don't know if this is by design by\n> Postgres.\n> \n> I have an index on a column which is defined as \"character varying(255)\".\n> When the value I am searching for is of a certain length, the optimizer\n> uses the index but when the value is long, the optimizer doesn't use the\n> index but does a seq scan on the table. Is this by design? How can I make\n> the optimizer use the index no matter what the size/length of the value\n> being searched for?\n\nAs I recall, selectivity for strings is estimated based on the length of\nthe string. Since your sample string looks suspiciously like an UUID,\nperhaps you'd be better served by using an UUID column for it, which may\ngive better results. This would prevent you from using the shortened\nversion for searches (which I suppose you can do with LIKE using the\nvarchar type), but you could replace it with something like this:\n\nselect *\nfrom tab\nwhere ID between '01625cfa-2bf8-45cf-0000-000000000000' and\n '01625cfa-2bf8-45cf-ffff-ffffffffffff';\n\nStorage (both the table and indexes) is going to be more efficient this\nway too.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Feb 2017 19:51:43 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "On 02/17/2017 11:42 PM, David G. Johnston wrote:\n> On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <[email protected]\n> <mailto:[email protected]>>wrote:\n>\n>\n> my_db=# create index tab_idx1 on tab(ID);\n>\n> CREATE INDEX\n> my_db=# explain (analyze, buffers) select count(*) from tab where ID\n> = '01625cfa-2bf8-45cf' ;\n> QUERY\n> PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=8.29..8.30 rows=1 width=0) (actual\n> time=0.048..0.048 rows=1 loops=1)\n> Buffers: shared read=2\n> -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29\n> rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1)\n> Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n>\n>\n>\n> -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\n> time=0.031..0.108 rows=5 loops=1)\n> Filter: ((ID)::text =\n> '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n> Rows Removed by Filter: 218\n> Buffers: shared hit=12\n> Planning time: 0.122 ms\n> Execution time: 0.180 ms\n> (8 rows)\n>\n>\n> IIRC the only reason the first query cares to use the index is because\n> it can perform an Index Only Scan and thus avoid touching the heap at\n> all. If it cannot avoid touching the heap the planner is going to just\n> use a sequential scan to retrieve the records directly from the heap and\n> save the index lookup step.\n>\n\nI don't follow - the queries are exactly the same in both cases, except \nthe parameter value. So both cases are eligible for index only scan.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Feb 2017 23:52:43 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "Yes, both queries are the same, I just shorten the parameter value to see\nwhat would have happened. The database that I inherited has a column that\nstores GUID/UUIDs in a varchar(255) and a select on that table on that\ncolumn is doing a FULL TABLE SCAN (seq scan). All the values in the column\nare 36 characters long. The table is 104 KB.\n\nI realize that there was no index on that column so when I created the\nindex and tried to search on a parameter value, it doesn't use the index,\nbut when I shorten the parameter value then the optimizer decides to use an\nindex for the search.\n\n\n\nOn Fri, Feb 17, 2017 at 5:52 PM, Tomas Vondra <[email protected]>\nwrote:\n\n> On 02/17/2017 11:42 PM, David G. Johnston wrote:\n>\n>> On Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <[email protected]\n>> <mailto:[email protected]>>wrote:\n>>\n>>\n>>\n>> my_db=# create index tab_idx1 on tab(ID);\n>>\n>> CREATE INDEX\n>> my_db=# explain (analyze, buffers) select count(*) from tab where ID\n>> = '01625cfa-2bf8-45cf' ;\n>> QUERY\n>> PLAN\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> ---------------\n>> Aggregate (cost=8.29..8.30 rows=1 width=0) (actual\n>> time=0.048..0.048 rows=1 loops=1)\n>> Buffers: shared read=2\n>> -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29\n>> rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1)\n>> Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n>>\n>>\n>>\n>> -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\n>> time=0.031..0.108 rows=5 loops=1)\n>> Filter: ((ID)::text =\n>> '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n>> Rows Removed by Filter: 218\n>> Buffers: shared hit=12\n>> Planning time: 0.122 ms\n>> Execution time: 0.180 ms\n>> (8 rows)\n>>\n>>\n>> IIRC the only reason the first query cares to use the index is because\n>> it can perform an Index Only Scan and thus avoid touching the heap at\n>> all. If it cannot avoid touching the heap the planner is going to just\n>> use a sequential scan to retrieve the records directly from the heap and\n>> save the index lookup step.\n>>\n>>\n> I don't follow - the queries are exactly the same in both cases, except\n> the parameter value. So both cases are eligible for index only scan.\n>\n> regards\n>\n> --\n> Tomas Vondra http://www.2ndQuadrant.com\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYes, both queries are the same, I just shorten the parameter value to see what would have happened. The database that I inherited has a column that stores GUID/UUIDs in a varchar(255) and a select on that table on that column is doing a FULL TABLE SCAN (seq scan). All the values in the column are 36 characters long. The table is 104 KB. I realize that there was no index on that column so when I created the index and tried to search on a parameter value, it doesn't use the index, but when I shorten the parameter value then the optimizer decides to use an index for the search.On Fri, Feb 17, 2017 at 5:52 PM, Tomas Vondra <[email protected]> wrote:On 02/17/2017 11:42 PM, David G. 
Johnston wrote:\n\nOn Fri, Feb 17, 2017 at 3:19 PM, Hustler DBA <[email protected]\n<mailto:[email protected]>>wrote:\n\n\n my_db=# create index tab_idx1 on tab(ID);\n\n CREATE INDEX\n my_db=# explain (analyze, buffers) select count(*) from tab where ID\n = '01625cfa-2bf8-45cf' ;\n QUERY\n PLAN\n ---------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=8.29..8.30 rows=1 width=0) (actual\n time=0.048..0.048 rows=1 loops=1)\n Buffers: shared read=2\n -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29\n rows=1 width=0) (actual time=0.043..0.043 rows=0 loops=1)\n Index Cond: (ID = '01625cfa-2bf8-45cf'::text)\n\n\n\n -> Seq Scan on tab (cost=0.00..14.79 rows=5 width=0) (actual\n time=0.031..0.108 rows=5 loops=1)\n Filter: ((ID)::text =\n '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'::text)\n Rows Removed by Filter: 218\n Buffers: shared hit=12\n Planning time: 0.122 ms\n Execution time: 0.180 ms\n (8 rows)\n\n\nIIRC the only reason the first query cares to use the index is because\nit can perform an Index Only Scan and thus avoid touching the heap at\nall. If it cannot avoid touching the heap the planner is going to just\nuse a sequential scan to retrieve the records directly from the heap and\nsave the index lookup step.\n\n\n\nI don't follow - the queries are exactly the same in both cases, except the parameter value. So both cases are eligible for index only scan.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
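A quick, hedged way to confirm that the seq scan here is a cost decision rather than an unusable index, using the same table and value as above: forbid sequential scans for a single transaction and see whether the index scan appears and how it is priced.

BEGIN;
SET LOCAL enable_seqscan = off;   -- transaction scope only, for diagnosis
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM tab
WHERE id = '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea';
ROLLBACK;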
"msg_date": "Fri, 17 Feb 2017 18:16:23 -0500",
"msg_from": "Hustler DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <[email protected]>\nwrote:\n\n> That may seem a bit strange, but I'd bet it finds the short value in some\n> statistic (MCV, histogram) ans so can provide very accurate estimate.\n\n\n -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1\nwidth=0) (actual time=0.043..0.043 rows=0 loops=1)\n\nI'm not seeing how any of the statistic columns would capture a value that\ndoesn't actually appear in the table...(actual ... row=0)\n\nUnless there is some prefix matching going on here since the short value is\na substring(1, n) of the longer one which does appear 5 times.\n\nI guess maybe because the value doesn't appear it uses the index (via IOS)\nto confirm absence (or near absence, i.e., 1) while, knowing the larger\nvalue appears 5 times out of 223, it decides a quick table scan is faster\nthan any form of double-lookup (whether on the visibility map or the heap).\n\nhttps://www.postgresql.org/docs/9.6/static/indexes-index-only-scans.html\n\nDavid J.\n\nOn Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <[email protected]> wrote: That may seem a bit strange, but I'd bet it finds the short value in some statistic (MCV, histogram) ans so can provide very accurate estimate. -> Index Only Scan using tab_idx1 on tab (cost=0.27..8.29 rows=1width=0) (actual time=0.043..0.043 rows=0 loops=1)I'm not seeing how any of the statistic columns would capture a value that doesn't actually appear in the table...(actual ... row=0)Unless there is some prefix matching going on here since the short value is a substring(1, n) of the longer one which does appear 5 times.I guess maybe because the value doesn't appear it uses the index (via IOS) to confirm absence (or near absence, i.e., 1) while, knowing the larger value appears 5 times out of 223, it decides a quick table scan is faster than any form of double-lookup (whether on the visibility map or the heap).https://www.postgresql.org/docs/9.6/static/indexes-index-only-scans.htmlDavid J.",
"msg_date": "Fri, 17 Feb 2017 16:19:15 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "\"David G. Johnston\" <[email protected]> writes:\n> On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <[email protected]>\n> wrote:\n>> That may seem a bit strange, but I'd bet it finds the short value in some\n>> statistic (MCV, histogram) ans so can provide very accurate estimate.\n\n> I'm not seeing how any of the statistic columns would capture a value that\n> doesn't actually appear in the table...(actual ... row=0)\n\nI think it's the other way around. It found\n'01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' in the stats, concluded\n(accurately) that there would be five matches, and on the strength of that\ndecided that a seqscan over this very tiny table would be faster than an\nindexscan. In the other case, the short string exists neither in the\ntable nor the stats, and the default estimate is turning out to be that\nthere's a single match, for which it likes the indexscan solution. This\nis all pretty unsurprising if '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'\nis in the most-common-values list. Anything that's *not* in that list\nis going to get a smaller rowcount estimate. (I don't think that the\nstring length, per se, has anything to do with it.)\n\nI'm not sure what performance problem the OP was looking to solve,\nbut expecting experiments on toy-sized tables to give the same plans\nas you get on large tables is a standard mistake when learning to work\nwith the PG planner.\n\nAlso, if toy-sized tables are all you've got, meaning the whole database\ncan be expected to stay RAM-resident at all times, it'd be a good idea\nto reduce random_page_cost to reflect that. The default planner cost\nsettings are meant for data that's mostly on spinning rust.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Feb 2017 19:04:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of characters in column preventing index usage"
},
{
"msg_contents": "Thanks you guys are correct... the size of the table caused the optimizer\nto do a seq scan instead of using the index. I tried it on a 24 MB and 1\nGB table and the expected index was used.\n\n\n\nOn Fri, Feb 17, 2017 at 7:04 PM, Tom Lane <[email protected]> wrote:\n\n> \"David G. Johnston\" <[email protected]> writes:\n> > On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <\n> [email protected]>\n> > wrote:\n> >> That may seem a bit strange, but I'd bet it finds the short value in\n> some\n> >> statistic (MCV, histogram) ans so can provide very accurate estimate.\n>\n> > I'm not seeing how any of the statistic columns would capture a value\n> that\n> > doesn't actually appear in the table...(actual ... row=0)\n>\n> I think it's the other way around. It found\n> '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' in the stats, concluded\n> (accurately) that there would be five matches, and on the strength of that\n> decided that a seqscan over this very tiny table would be faster than an\n> indexscan. In the other case, the short string exists neither in the\n> table nor the stats, and the default estimate is turning out to be that\n> there's a single match, for which it likes the indexscan solution. This\n> is all pretty unsurprising if '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'\n> is in the most-common-values list. Anything that's *not* in that list\n> is going to get a smaller rowcount estimate. (I don't think that the\n> string length, per se, has anything to do with it.)\n>\n> I'm not sure what performance problem the OP was looking to solve,\n> but expecting experiments on toy-sized tables to give the same plans\n> as you get on large tables is a standard mistake when learning to work\n> with the PG planner.\n>\n> Also, if toy-sized tables are all you've got, meaning the whole database\n> can be expected to stay RAM-resident at all times, it'd be a good idea\n> to reduce random_page_cost to reflect that. The default planner cost\n> settings are meant for data that's mostly on spinning rust.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThanks you guys are correct... the size of the table caused the optimizer to do a seq scan instead of using the index. I tried it on a 24 MB and 1 GB table and the expected index was used. On Fri, Feb 17, 2017 at 7:04 PM, Tom Lane <[email protected]> wrote:\"David G. Johnston\" <[email protected]> writes:\n> On Fri, Feb 17, 2017 at 3:49 PM, Tomas Vondra <[email protected]>\n> wrote:\n>> That may seem a bit strange, but I'd bet it finds the short value in some\n>> statistic (MCV, histogram) ans so can provide very accurate estimate.\n\n> I'm not seeing how any of the statistic columns would capture a value that\n> doesn't actually appear in the table...(actual ... row=0)\n\nI think it's the other way around. It found\n'01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea' in the stats, concluded\n(accurately) that there would be five matches, and on the strength of that\ndecided that a seqscan over this very tiny table would be faster than an\nindexscan. In the other case, the short string exists neither in the\ntable nor the stats, and the default estimate is turning out to be that\nthere's a single match, for which it likes the indexscan solution. This\nis all pretty unsurprising if '01625cfa-2bf8-45cf-bf4c-aa5f3c6fa8ea'\nis in the most-common-values list. 
Anything that's *not* in that list\nis going to get a smaller rowcount estimate. (I don't think that the\nstring length, per se, has anything to do with it.)\n\nI'm not sure what performance problem the OP was looking to solve,\nbut expecting experiments on toy-sized tables to give the same plans\nas you get on large tables is a standard mistake when learning to work\nwith the PG planner.\n\nAlso, if toy-sized tables are all you've got, meaning the whole database\ncan be expected to stay RAM-resident at all times, it'd be a good idea\nto reduce random_page_cost to reflect that. The default planner cost\nsettings are meant for data that's mostly on spinning rust.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 17 Feb 2017 19:16:48 -0500",
"msg_from": "Hustler DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Number of characters in column preventing index usage"
}
] |
[
{
"msg_contents": "Hi All,\n\nI'm having some trouble improving the timing of a set of queries to a\npartitioned table.\nBasically, I'm trying to find an index that would be used instead of a\nbitmap heap scan by when the data is taken from disk. Or in any case,\nsomething that would make the process of retrieving the data from disk\nfaster.\n\nI've installed postgreSQL compiling the source: PostgreSQL 9.2.20 on\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat\n4.4.7-17), 64-bit\nAnd these are the current changes on the configuration file:\n name | current_setting | source\n----------------------------+--------------------+----------------------\n application_name | psql | client\n client_encoding | UTF8 | client\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n log_destination | stderr | configuration file\n log_directory | pg_log | configuration file\n log_filename | postgresql-%a.log | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 0 | configuration file\n log_timezone | UTC | configuration file\n log_truncate_on_rotation | on | configuration file\n logging_collector | on | configuration file\n max_connections | 100 | configuration file\n max_stack_depth | 2MB | environment variable\n shared_buffers | 6GB | configuration file\n TimeZone | UTC | configuration file\n work_mem | 50MB | configuration file\n\nI'm running on CentOS 6.8, and all the tests are being done through psql.\n\nNow, this is the table in question:\nlportal=# \\d+ data_jsons_partition\n Table \"data_jsons_partition\"\n Column | Type | Modifiers | Storage |\nStats target | Description\n-----------------+-----------------------------+-----------+\n----------+--------------+-------------\n id | integer | | plain\n| |\n site_id | integer | | plain\n| |\n site_name | character varying(255) | | extended\n| |\n measured_on | date | | plain\n| |\n protocol | text | | extended\n| |\n data | json | | extended\n| |\n created_at | timestamp without time zone | | plain\n| |\n updated_at | timestamp without time zone | | plain\n| |\n org_name | character varying | | extended\n| |\n org_id | integer | | plain\n| |\n lat | double precision | | plain\n| |\n long | double precision | | plain\n| |\n elev | double precision | | plain\n| |\nTriggers:\n insert_measurement_trigger BEFORE INSERT ON data_jsons_partition FOR\nEACH ROW EXECUTE PROCEDURE data_insert_trigger()\nChild tables: partitions.partition_a_data_jsons_part,\n partitions.partition_b_data_jsons_part,\n ...\n partitions.partition_aa_data_jsons_part,\n partitions.partition_ab_data_jsons_part\n\n\nThe child tables exists based on the protocol column. 
Now, each partition\nlooks like this:\n\nlportal=# \\d+ partitions.partition_ab_data_jsons_part\n Table \"partitions.partition_ab_data_jsons_part\"\n Column | Type | Modifiers | Storage |\nStats target | Description\n-----------------+-----------------------------+-----------+\n----------+--------------+-------------\n id | integer | not null | plain\n| |\n site_id | integer | | plain\n| |\n site_name | character varying(255) | | extended\n| |\n measured_on | date | | plain\n| |\n protocol | text | | extended\n| |\n data | json | | extended\n| |\n created_at | timestamp without time zone | | plain\n| |\n updated_at | timestamp without time zone | | plain\n| |\n org_name | character varying | | extended\n| |\n organization_id | integer | | plain\n| |\n latitude | double precision | | plain\n| |\n longitude | double precision | | plain\n| |\n elevation | double precision | | plain\n| |\nIndexes:\n \"partition_ab_data_jsons_part_pkey\" PRIMARY KEY, btree (id)\n \"partition_ab_data_jsons_part_spm_key\" UNIQUE CONSTRAINT, btree\n(site_id, protocol, measured_on)\n \"partition_ab_data_jsons_part_mo\" btree (measured_on)\n \"partition_ab_data_jsons_part_org\" btree (org_name)\n \"partition_ab_data_jsons_part_org_id\" btree (organization_id)\n \"partition_ab_data_jsons_part_sid\" btree (site_id) CLUSTER\n \"partition_ab_data_jsons_part_sm\" btree (site_id, measured_on)\nCheck constraints:\n \"partition_ab_data_jsons_part_protocol_check\" CHECK (protocol = '\npartition_ab'::text)\nInherits: data_jsons_partition\n\n\nNow, I have this query that I've executed with a clean cache:\nlportal=# explain analyze SELECT org_name, site_name, latitude, longitude,\nelevation, measured_on, data FROM data_jsons_partition where protocol in\n('aerosols','precipitations') and site_id in (... around 1000 site_id-s\n...) and (measured_on >= '2013-09-24' and measured_on <= '2016-10-10')\norder by org_name, site_name, measured_on limit 1000000;\n\nAnd I get the following:\n Limit (cost=149414.00..149518.52 rows=41806 width=110) (actual\ntime=25827.893..26012.065 rows=126543 loops=1)\n -> Sort (cost=149414.00..149518.52 rows=41806 width=110) (actual\ntime=25827.889..25970.671 rows=126543 loops=1)\n Sort Key: data_jsons_partition.org_name,\ndata_jsons_partition.site_name,\ndata_jsons_partition.measured_on\n Sort Method: external merge Disk: 70616kB\n -> Result (cost=0.00..146205.09 rows=41806 width=110) (actual\ntime=38.533..20810.204 rows=126543 loops=1)\n -> Append (cost=0.00..146205.09 rows=41806 width=110)\n(actual time=38.530..20739.245 rows=126543 loops=1)\n -> Seq Scan on data_jsons_partition (cost=0.00..0.00\nrows=1 width=608) (actual time=0.002..0.002 rows=0 loops=1)\n Filter: ((protocol = ANY\n('{partition_a,partition_b}'::text[])) AND (measured_on >=\n'2013-09-24'::date) AND (measured_on <= '2016-10-10'::date) AND (site_id =\nANY ('{... 1000 site_id-s ...}'::integer[])))\n -> Bitmap Heap Scan on partition_a_data_jsons_part\ndata_jsons_partition (cost=70.92..5209.38 rows=2132 width=114) (actual\ntime=38.526..812.397 rows=3017 loops=1)\n Recheck Cond: ((measured_on >=\n'2013-09-24'::date) AND (measured_on <= '2016-10-10'::date))\n Filter: ((protocol = ANY ('{partition_a,\npartition_b}'::text[])) AND (site_id = ANY ('{ ... 
}'::integer[])))\n -> Bitmap Index Scan on partition_a\n_data_jsons_part_mo (cost=0.00..70.39 rows=3014 width=0) (actual\ntime=2.974..2.974 rows=3017 loops=1)\n Index Cond: ((measured_on >=\n'2013-09-24'::date) AND (measured_on <= '2016-10-10'::date))\n -> Bitmap Heap Scan on partition_b_data_jsons_part\ndata_jsons_partition (cost=4582.19..140995.72 rows=39673 width=110)\n(actual time=738.486..19871.141 rows=123526 loops=1)\n Recheck Cond: ((site_id = ANY ('{...\n...}'::integer[])))\n Filter: (protocol = ANY ('{partition_a,\npartition_b}'::text[]))\n -> Bitmap Index Scan on partition_b\n_data_jsons_part_sm (cost=0.00..4572.27 rows=39673 width=0) (actual\ntime=715.684..715.684 rows=123526 loops=1)\n Index Cond: ((site_id = ANY ('{...\n...}'::integer[])))\n Total runtime: 26049.062 ms\n\n From this I've increased the effective_io_concurrency to 150 (since most of\nthe time was on fetching the data from the partition_b_data_jsons_part in\nthe second bitmap heap scan) and the work_mem to 1.5GB (for the sorting\nthat's being spilled on disk), improving the timing to 7 seconds (from\nwhich 5-6 seconds comes from the sorting).\n\nNow, this is a relative fast query. Some other doesn't specify the\nprotocol, and therefore goes over all the children tables. Those queries\ntakes around 5 minutes (without changes mentioned above) and around 1.5min\nwith the changes. Doing an explain analyze on those queries I see some of\nthe tables uses index scans (much slower than bitmap scan since there's\nnothing on cache) and other the bitmap scans.\n\nIs there a way to make it faster?\n\nThank you in advance.\n\nHi All,I'm having some trouble improving the timing of a set of queries to a partitioned table. Basically,\r\n I'm trying to find an index that would be used instead of a bitmap heap\r\n scan by when the data is taken from disk. Or in any case, something \r\nthat would make the process of retrieving the data from disk faster.I've\r\n installed postgreSQL compiling the source: PostgreSQL 9.2.20 on \r\nx86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat \r\n4.4.7-17), 64-bitAnd these are the current changes on the configuration file: name | current_setting | source----------------------------+--------------------+---------------------- application_name | psql | client client_encoding | UTF8 | client DateStyle | ISO, MDY | configuration file default_text_search_config | pg_catalog.english | configuration file lc_messages | en_US.UTF-8 | configuration file lc_monetary | en_US.UTF-8 | configuration file lc_numeric | en_US.UTF-8 | configuration file lc_time | en_US.UTF-8 | configuration file log_destination | stderr | configuration file log_directory | pg_log | configuration file log_filename | postgresql-%a.log | configuration file log_rotation_age | 1d | configuration file log_rotation_size | 0 | configuration file log_timezone | UTC | configuration file log_truncate_on_rotation | on | configuration file logging_collector | on | configuration file max_connections | 100 | configuration file max_stack_depth | 2MB | environment variable shared_buffers | 6GB | configuration file TimeZone | UTC | configuration file work_mem | 50MB | configuration fileI'm running on CentOS 6.8, and all the tests are being done through psql. 
Now, this is the table in question:lportal=# \\d+ data_jsons_partition Table \"data_jsons_partition\" Column | Type | Modifiers | Storage | Stats target | Description-----------------+-----------------------------+-----------+----------+--------------+------------- id | integer | | plain | | site_id | integer | | plain | | site_name | character varying(255) | | extended | | measured_on | date | | plain | | protocol | text | | extended | | data | json | | extended | | created_at | timestamp without time zone | | plain | | updated_at | timestamp without time zone | | plain | | org_name | character varying | | extended | | org_id | integer | | plain | | lat | double precision | | plain | | long | double precision | | plain | | elev | double precision | | plain | |Triggers: insert_measurement_trigger BEFORE INSERT ON data_jsons_partition FOR EACH ROW EXECUTE PROCEDURE data_insert_trigger()Child tables: partitions.partition_a_data_jsons_part, partitions.partition_b_data_jsons_part, ... partitions.partition_aa_data_jsons_part, partitions.partition_ab_data_jsons_partThe child tables exists based on the protocol column. Now, each partition looks like this:lportal=# \\d+ partitions.partition_ab_data_jsons_part Table \"partitions.partition_ab_data_jsons_part\" Column | Type | Modifiers | Storage | Stats target | Description-----------------+-----------------------------+-----------+----------+--------------+------------- id | integer | not null | plain | | site_id | integer | | plain | | site_name | character varying(255) | | extended | | measured_on | date | | plain | | protocol | text | | extended | | data | json | | extended | | created_at | timestamp without time zone | | plain | | updated_at | timestamp without time zone | | plain | | org_name | character varying | | extended | | organization_id | integer | | plain | | latitude | double precision | | plain | | longitude | double precision | | plain | | elevation | double precision | | plain | |Indexes: \"partition_ab_data_jsons_part_pkey\" PRIMARY KEY, btree (id) \"partition_ab_data_jsons_part_spm_key\" UNIQUE CONSTRAINT, btree (site_id, protocol, measured_on) \"partition_ab_data_jsons_part_mo\" btree (measured_on) \"partition_ab_data_jsons_part_org\" btree (org_name) \"partition_ab_data_jsons_part_org_id\" btree (organization_id) \"partition_ab_data_jsons_part_sid\" btree (site_id) CLUSTER \"partition_ab_data_jsons_part_sm\" btree (site_id, measured_on)Check constraints: \"partition_ab_data_jsons_part_protocol_check\" CHECK (protocol = 'partition_ab'::text)Inherits: data_jsons_partitionNow, I have this query that I've executed with a clean cache:lportal=#\r\n explain analyze SELECT org_name, site_name, latitude, longitude, \r\nelevation, measured_on, data FROM data_jsons_partition where protocol in\r\n ('aerosols','precipitations') and site_id in (... around 1000 site_id-s\r\n ...) 
and (measured_on >= '2013-09-24' and measured_on <= \r\n'2016-10-10') order by org_name, site_name, measured_on limit 1000000;And I get the following: Limit (cost=149414.00..149518.52 rows=41806 width=110) (actual time=25827.893..26012.065 rows=126543 loops=1) -> Sort (cost=149414.00..149518.52 rows=41806 width=110) (actual time=25827.889..25970.671 rows=126543 loops=1) Sort Key: data_jsons_partition.org_name, data_jsons_partition.site_name, data_jsons_partition.measured_on Sort Method: external merge Disk: 70616kB -> Result (cost=0.00..146205.09 rows=41806 width=110) (actual time=38.533..20810.204 rows=126543 loops=1) -> Append (cost=0.00..146205.09 rows=41806 width=110) (actual time=38.530..20739.245 rows=126543 loops=1) \r\n -> Seq Scan on data_jsons_partition (cost=0.00..0.00 rows=1 \r\nwidth=608) (actual time=0.002..0.002 rows=0 loops=1) Filter: ((protocol = ANY ('{partition_a,partition_b}'::text[]))\r\n AND (measured_on >= '2013-09-24'::date) AND (measured_on <= \r\n'2016-10-10'::date) AND (site_id = ANY ('{... 1000 site_id-s \r\n...}'::integer[]))) -> Bitmap Heap Scan on partition_a_data_jsons_part data_jsons_partition (cost=70.92..5209.38 rows=2132 width=114) (actual time=38.526..812.397 rows=3017 loops=1) Recheck Cond: ((measured_on >= '2013-09-24'::date) AND (measured_on <= '2016-10-10'::date)) \r\n Filter: ((protocol = ANY ('{partition_a, partition_b}'::text[])) AND \r\n(site_id = ANY ('{ ... }'::integer[]))) -> Bitmap Index Scan on partition_a_data_jsons_part_mo (cost=0.00..70.39 rows=3014 width=0) (actual time=2.974..2.974 rows=3017 loops=1) Index Cond: ((measured_on >= '2013-09-24'::date) AND (measured_on <= '2016-10-10'::date)) -> Bitmap Heap Scan on partition_b_data_jsons_part data_jsons_partition (cost=4582.19..140995.72 rows=39673 width=110) (actual time=738.486..19871.141 rows=123526 loops=1) Recheck Cond: ((site_id = ANY ('{... ...}'::integer[]))) Filter: (protocol = ANY ('{partition_a, partition_b}'::text[])) -> Bitmap Index Scan on partition_b_data_jsons_part_sm (cost=0.00..4572.27 rows=39673 width=0) (actual time=715.684..715.684 rows=123526 loops=1) Index Cond: ((site_id = ANY ('{... ...}'::integer[]))) Total runtime: 26049.062 msFrom\r\n this I've increased the effective_io_concurrency to 150 (since most of \r\nthe time was on fetching the data from the partition_b_data_jsons_part \r\nin the second bitmap heap scan) and the work_mem to 1.5GB (for the \r\nsorting that's being spilled on disk), improving the timing to 7 seconds\r\n (from which 5-6 seconds comes from the sorting).Now,\r\n this is a relative fast query. Some other doesn't specify the protocol,\r\n and therefore goes over all the children tables. Those queries takes \r\naround 5 minutes (without changes mentioned above) and around 1.5min \r\nwith the changes. Doing an explain analyze on those queries I see some \r\nof the tables uses index scans (much slower than bitmap scan since \r\nthere's nothing on cache) and other the bitmap scans. Is there a way to make it faster?Thank you in advance.",
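A hedged sketch of the session-level tuning described in the message, so that the large work_mem applies only to this reporting query rather than to all 100 connections; the values mirror the figures mentioned above and the site_id list is a placeholder.

BEGIN;
SET LOCAL work_mem = '1536MB';             -- keep the big sort in memory
SET LOCAL effective_io_concurrency = 150;  -- more prefetch for bitmap heap scans

EXPLAIN (ANALYZE, BUFFERS)
SELECT org_name, site_name, latitude, longitude, elevation, measured_on, data
FROM data_jsons_partition
WHERE protocol IN ('aerosols', 'precipitations')
  AND site_id IN (1, 2, 3)                 -- placeholder for the ~1000 site ids
  AND measured_on BETWEEN '2013-09-24' AND '2016-10-10'
ORDER BY org_name, site_name, measured_on
LIMIT 1000000;
COMMIT;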
"msg_date": "Mon, 20 Feb 2017 16:39:52 -0500",
"msg_from": "Diego Vargas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance"
}
] |
[
{
"msg_contents": "Hi there,\nI configured an IBM X3650 M4 for development and testing purposes. It’s composed by:\n - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT)\n - 96GB RAM DDR3 1333MHz (12 x 8GB)\n - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm)\n - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)\n\nI’ve done a lot of testing focusing on 4k and 8k workloads and found that the IOPS of those SSDs are half the expected. On serverfault.com someone suggested me that probably the bottle neck is the embedded RAID controller, a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.\n\nI’m using the disks in JBOD mode with mdadm software RAID, which is blazing fast. The CPU is also very fast, so I don’t mind having a little overhead due to software RAID.\n\nMy typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.\n\n\nHere’s my post on serverfault.com ( http://serverfault.com/questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )\nand here’s a graph of those six SSDs evaluated using fio as stand-alone disks (outside of the RAID):\n\n\n\nAll those IOPS should be doubled if all was working correctly. The curve trend is correct for increasing IO Depths.\n\n\nAnyway, I would like to buy a HBA controller that leverages those 6 SSDs. Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that mdadm effectively scales performance, but the controller limits the overal IOPS at ~120k (exactly the half of the expected IOPS).\n\nWhat HBA controller would you suggest me able to handle 500k IOPS? \n\n\nMy server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k theoretical IOPS. If we imagine adding only 2 more disks, I will achieve 720k theoretical IOPS in RAID0. \n\nWhat HBA controller would you suggest me able to handle more than 700k IOPS? \n\nHave you got some advices about using mdadm RAID software on SATAIII SSDs and plain HBA?\n\nThank you everyone\n Pietro Pugni\n\n\n\n\nHi there,I configured an IBM X3650 M4 for development and testing purposes. It’s composed by: - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT) - 96GB RAM DDR3 1333MHz (12 x 8GB) - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm) - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)I’ve done a lot of testing focusing on 4k and 8k workloads and found that the IOPS of those SSDs are half the expected. On serverfault.com someone suggested me that probably the bottle neck is the embedded RAID controller, a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.I’m using the disks in JBOD mode with mdadm software RAID, which is blazing fast. The CPU is also very fast, so I don’t mind having a little overhead due to software RAID.My typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.Here’s my post on serverfault.com ( http://serverfault.com/questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )and here’s a graph of those six SSDs evaluated using fio as stand-alone disks (outside of the RAID):All those IOPS should be doubled if all was working correctly. The curve trend is correct for increasing IO Depths.Anyway, I would like to buy a HBA controller that leverages those 6 SSDs. 
Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that mdadm effectively scales performance, but the controller limits the overal IOPS at ~120k (exactly the half of the expected IOPS).What HBA controller would you suggest me able to handle 500k IOPS? My server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k theoretical IOPS. If we imagine adding only 2 more disks, I will achieve 720k theoretical IOPS in RAID0. What HBA controller would you suggest me able to handle more than 700k IOPS? Have you got some advices about using mdadm RAID software on SATAIII SSDs and plain HBA?Thank you everyone Pietro Pugni",
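Whatever controller ends up in the box, a hedged Postgres-side sketch for the SSD array: a dedicated tablespace with SSD-appropriate cost settings. The mount point, tablespace name and table name are assumptions, and per-tablespace effective_io_concurrency requires 9.6 or later.

-- The directory must exist and be owned by the postgres OS user:
CREATE TABLESPACE ssd_fast LOCATION '/mnt/ssd/pgdata';

-- Random reads on SSD are cheap and the devices handle deep queues well
-- (values are illustrative):
ALTER TABLESPACE ssd_fast
    SET (random_page_cost = 1.1, effective_io_concurrency = 200);

-- Move the big DWH objects onto it (hypothetical table name):
ALTER TABLE big_fact_table SET TABLESPACE ssd_fast;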
"msg_date": "Tue, 21 Feb 2017 14:49:39 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
{
"msg_contents": "On Tue, Feb 21, 2017 at 7:49 AM, Pietro Pugni <[email protected]>\nwrote:\n\n> Hi there,\n> I configured an IBM X3650 M4 for development and testing purposes. It’s\n> composed by:\n> - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT)\n> - 96GB RAM DDR3 1333MHz (12 x 8GB)\n> - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm)\n> - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)\n>\n> I’ve done a lot of testing focusing on 4k and 8k workloads and found that\n> the IOPS of those SSDs are half the expected. On serverfault.com someone\n> suggested me that probably the bottle neck is the embedded RAID controller,\n> a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.\n>\n> I’m using the disks in JBOD mode with mdadm software RAID, which is\n> blazing fast. The CPU is also very fast, so I don’t mind having a little\n> overhead due to software RAID.\n>\n> My typical workload is Postgres run as a DWH with 1 to 2 billions of rows,\n> big indexes, partitions and so on, but also intensive statistical\n> computations.\n>\n>\n> Here’s my post on serverfault.com ( http://serverfault.com/\n> questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )\n> and here’s a graph of those six SSDs evaluated using fio as stand-alone\n> disks (outside of the RAID):\n>\n> [image: https://i.stack.imgur.com/ZMhUJ.png]\n>\n> All those IOPS should be doubled if all was working correctly. The curve\n> trend is correct for increasing IO Depths.\n>\n>\n> Anyway, I would like to buy a HBA controller that leverages those 6 SSDs.\n> Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get\n> ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that\n> mdadm effectively scales performance, but the controller limits the overal\n> IOPS at ~120k (exactly the half of the expected IOPS).\n>\n> *What HBA controller would you suggest me able to handle 500k IOPS? *\n>\n>\n> My server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k\n> theoretical IOPS. If we imagine adding only 2 more disks, I will achieve\n> 720k theoretical IOPS in RAID0.\n>\n> *What HBA controller would you suggest me able to handle more than 700k\n> IOPS? *\n>\n> *Have you got some advices about using mdadm RAID software on SATAIII SSDs\n> and plain HBA?*\n>\n\nRandom points/suggestions:\n*) mdadm is the way to go. I think you'll get bandwidth constrained on\nmost modern hba unless they are really crappy. On reasonably modern\nhardware storage is rarely the bottleneck anymore (which is a great place\nto be). Fancy raid controllers may actually hurt performance -- they are\nobsolete IMNSHO.\n\n*) Small point, but you'll want to crank effective_io_concurrency (see:\nhttps://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com).\nIt only affects certain kinds of queries, but when it works it really\nworks. Those benchmarks were done on my crapbox dell workstation!\n\n*) For very high transaction rates, you can get a lot of benefit from\ndisabling synchronous_commit if you are willing to accommodate the risk. I\ndo not recommend disabling fsync unless you are prepared to regenerate the\nentire database at any time.\n\n*) Don't assume indefinite linear scaling as you increase storage capacity\n-- the database itself can become the bottleneck, especially for writing.\nTo improve write performance, classic optimization strategies of trying to\nintelligently bundle writes around units of work still apply. 
If you are\nexpecting high rates of write activity your engineering focus needs to be\nhere for sure (read scaling is comparatively pretty easy).\n\n*) I would start doing your benchmarking with pgbench since that is going\nto most closely reflect measured production performance.\n\n> My typical workload is Postgres run as a DWH with 1 to 2 billions of\nrows, big indexes, partitions and so on, but also intensive statistical\ncomputations.\n\nIf this is the case your stack performance is going to be based on data\nstructure design. Make liberal use of:\n*) natural keys\n*) constraint exclusion for partition selection\n*) BRIN index is amazing (if you can work into it's limitations)\n*) partial indexing\n*) covering indexes. Don't forget to vacuum your partitions before you\nmake them live if you use them\n\nIf your data is going to get really big and/or query activity is expected\nto be high, keep an eye on your scale out strategy. Going monolithic to\nbootstrap your app is the right choice IMO but start thinking about the\nlonger term if you are expecting growth. I'm starting to come out to the\nperspective that lift/shift scaleout using postgres fdw without an insane\namount of app retooling could be a viable option by postgres 11/12 or so.\nFor my part I scaled out over asynchronous dblink which is a much more\nmaintenance heavy strategy (but works fabulous although I which you could\nasynchronously connect).\n\nmerlin\n\nOn Tue, Feb 21, 2017 at 7:49 AM, Pietro Pugni <[email protected]> wrote:Hi there,I configured an IBM X3650 M4 for development and testing purposes. It’s composed by: - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT) - 96GB RAM DDR3 1333MHz (12 x 8GB) - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm) - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)I’ve done a lot of testing focusing on 4k and 8k workloads and found that the IOPS of those SSDs are half the expected. On serverfault.com someone suggested me that probably the bottle neck is the embedded RAID controller, a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.I’m using the disks in JBOD mode with mdadm software RAID, which is blazing fast. The CPU is also very fast, so I don’t mind having a little overhead due to software RAID.My typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.Here’s my post on serverfault.com ( http://serverfault.com/questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )and here’s a graph of those six SSDs evaluated using fio as stand-alone disks (outside of the RAID):All those IOPS should be doubled if all was working correctly. The curve trend is correct for increasing IO Depths.Anyway, I would like to buy a HBA controller that leverages those 6 SSDs. Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that mdadm effectively scales performance, but the controller limits the overal IOPS at ~120k (exactly the half of the expected IOPS).What HBA controller would you suggest me able to handle 500k IOPS? My server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k theoretical IOPS. If we imagine adding only 2 more disks, I will achieve 720k theoretical IOPS in RAID0. What HBA controller would you suggest me able to handle more than 700k IOPS? 
Have you got some advices about using mdadm RAID software on SATAIII SSDs and plain HBA?Random points/suggestions:*) mdadm is the way to go. I think you'll get bandwidth constrained on most modern hba unless they are really crappy. On reasonably modern hardware storage is rarely the bottleneck anymore (which is a great place to be). Fancy raid controllers may actually hurt performance -- they are obsolete IMNSHO.*) Small point, but you'll want to crank effective_io_concurrency (see: https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com). It only affects certain kinds of queries, but when it works it really works. Those benchmarks were done on my crapbox dell workstation!*) For very high transaction rates, you can get a lot of benefit from disabling synchronous_commit if you are willing to accommodate the risk. I do not recommend disabling fsync unless you are prepared to regenerate the entire database at any time.*) Don't assume indefinite linear scaling as you increase storage capacity -- the database itself can become the bottleneck, especially for writing. To improve write performance, classic optimization strategies of trying to intelligently bundle writes around units of work still apply. If you are expecting high rates of write activity your engineering focus needs to be here for sure (read scaling is comparatively pretty easy).*) I would start doing your benchmarking with pgbench since that is going to most closely reflect measured production performance. > My typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.If this is the case your stack performance is going to be based on data structure design. Make liberal use of:*) natural keys *) constraint exclusion for partition selection*) BRIN index is amazing (if you can work into it's limitations)*) partial indexing*) covering indexes. Don't forget to vacuum your partitions before you make them live if you use themIf your data is going to get really big and/or query activity is expected to be high, keep an eye on your scale out strategy. Going monolithic to bootstrap your app is the right choice IMO but start thinking about the longer term if you are expecting growth. I'm starting to come out to the perspective that lift/shift scaleout using postgres fdw without an insane amount of app retooling could be a viable option by postgres 11/12 or so. For my part I scaled out over asynchronous dblink which is a much more maintenance heavy strategy (but works fabulous although I which you could asynchronously connect).merlin",
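A hedged illustration of a few items from the list above (BRIN, partial indexing, synchronous_commit); the table and column names are made up for the example.

-- BRIN pays off when the column correlates with physical row order, e.g. an
-- append-only timestamp on a DWH fact table:
CREATE INDEX measurements_brin_ts ON measurements USING brin (measured_at);

-- Partial index covering only the rows a hot query actually touches:
CREATE INDEX measurements_unprocessed_idx ON measurements (site_id)
    WHERE processed = false;

-- Trade durability of the last few commits for latency, per session or even
-- per transaction:
SET synchronous_commit = off;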
"msg_date": "Tue, 21 Feb 2017 09:04:34 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
{
"msg_contents": "Disclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with the consumer drives.\r\n\r\nVerify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance.\r\n\r\nIs this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest`\r\n\r\nWith FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.\r\n\r\nFew other things:\r\n\r\n- Mdadm will give better performance than HW RAID for specific benchmarks.\r\n\r\n- Performance is NOT linear with drive count for synthetic benchmarks.\r\n\r\n- It is often nearly linear for application performance.\r\n\r\n- HW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\r\n\r\n- Mdadm RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive in a mirror pair should get the next read. It results in really weird behavior of most read IO going to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s instead of using the default RAID10 implementation.\r\n\r\n- Don’t focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably* close to what I see for heavy OLTP). I’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance profiles. Vendor A had much better benchmark performance (2x IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). (I got equivalent performance from A & B when using the same settings).\r\n\r\n\r\nWes Vaske\r\nSenior Storage Solutions Engineer\r\nMicron Technology\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Merlin Moncure\r\nSent: Tuesday, February 21, 2017 9:05 AM\r\nTo: Pietro Pugni <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\r\n\r\nOn Tue, Feb 21, 2017 at 7:49 AM, Pietro Pugni <[email protected]<mailto:[email protected]>> wrote:\r\nHi there,\r\nI configured an IBM X3650 M4 for development and testing purposes. It’s composed by:\r\n - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT)\r\n - 96GB RAM DDR3 1333MHz (12 x 8GB)\r\n - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm)\r\n - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)\r\n\r\nI’ve done a lot of testing focusing on 4k and 8k workloads and found that the IOPS of those SSDs are half the expected. 
On serverfault.com<http://serverfault.com> someone suggested me that probably the bottle neck is the embedded RAID controller, a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.\r\n\r\nI’m using the disks in JBOD mode with mdadm software RAID, which is blazing fast. The CPU is also very fast, so I don’t mind having a little overhead due to software RAID.\r\n\r\nMy typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.\r\n\r\n\r\nHere’s my post on serverfault.com<http://serverfault.com> ( http://serverfault.com/questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )\r\nand here’s a graph of those six SSDs evaluated using fio as stand-alone disks (outside of the RAID):\r\n\r\n[https://i.stack.imgur.com/ZMhUJ.png]\r\n\r\nAll those IOPS should be doubled if all was working correctly. The curve trend is correct for increasing IO Depths.\r\n\r\n\r\nAnyway, I would like to buy a HBA controller that leverages those 6 SSDs. Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that mdadm effectively scales performance, but the controller limits the overal IOPS at ~120k (exactly the half of the expected IOPS).\r\n\r\nWhat HBA controller would you suggest me able to handle 500k IOPS?\r\n\r\n\r\nMy server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k theoretical IOPS. If we imagine adding only 2 more disks, I will achieve 720k theoretical IOPS in RAID0.\r\n\r\nWhat HBA controller would you suggest me able to handle more than 700k IOPS?\r\n\r\nHave you got some advices about using mdadm RAID software on SATAIII SSDs and plain HBA?\r\n\r\nRandom points/suggestions:\r\n*) mdadm is the way to go. I think you'll get bandwidth constrained on most modern hba unless they are really crappy. On reasonably modern hardware storage is rarely the bottleneck anymore (which is a great place to be). Fancy raid controllers may actually hurt performance -- they are obsolete IMNSHO.\r\n\r\n*) Small point, but you'll want to crank effective_io_concurrency (see: https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com). It only affects certain kinds of queries, but when it works it really works. Those benchmarks were done on my crapbox dell workstation!\r\n\r\n*) For very high transaction rates, you can get a lot of benefit from disabling synchronous_commit if you are willing to accommodate the risk. I do not recommend disabling fsync unless you are prepared to regenerate the entire database at any time.\r\n\r\n*) Don't assume indefinite linear scaling as you increase storage capacity -- the database itself can become the bottleneck, especially for writing. To improve write performance, classic optimization strategies of trying to intelligently bundle writes around units of work still apply. If you are expecting high rates of write activity your engineering focus needs to be here for sure (read scaling is comparatively pretty easy).\r\n\r\n*) I would start doing your benchmarking with pgbench since that is going to most closely reflect measured production performance.\r\n\r\n> My typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.\r\n\r\nIf this is the case your stack performance is going to be based on data structure design. 
Make liberal use of:\r\n*) natural keys\r\n*) constraint exclusion for partition selection\r\n*) BRIN index is amazing (if you can work into it's limitations)\r\n*) partial indexing\r\n*) covering indexes. Don't forget to vacuum your partitions before you make them live if you use them\r\n\r\nIf your data is going to get really big and/or query activity is expected to be high, keep an eye on your scale out strategy. Going monolithic to bootstrap your app is the right choice IMO but start thinking about the longer term if you are expecting growth. I'm starting to come out to the perspective that lift/shift scaleout using postgres fdw without an insane amount of app retooling could be a viable option by postgres 11/12 or so. For my part I scaled out over asynchronous dblink which is a much more maintenance heavy strategy (but works fabulous although I which you could asynchronously connect).\r\n\r\nmerlin\r\n\r\n\n\n\n\n\n\n\n\n\nDisclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with\r\n the consumer drives.\n \nVerify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance.\n \nIs this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100\r\n --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest`\n \nWith FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.\n \nFew other things:\n- \r\nMdadm will give better performance than HW RAID for specific benchmarks.\n- \r\nPerformance is NOT linear with drive count for synthetic benchmarks.\n- \r\nIt is often nearly linear for application performance.\n- \r\nHW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer\r\n drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\n- \r\nMdadm RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive\r\n in a mirror pair should get the next read. It results in really weird behavior of most read IO going to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and\r\n you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s\r\n instead of using the default RAID10 implementation.\n- \r\nDon’t focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably*\r\nclose to what I see for heavy OLTP). I’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance profiles. Vendor A had much better benchmark performance (2x\r\n IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). 
(I got equivalent performance from A & B when using the same settings).\n \n \nWes Vaske\nSenior Storage Solutions Engineer\nMicron Technology\r\n\n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Merlin Moncure\nSent: Tuesday, February 21, 2017 9:05 AM\nTo: Pietro Pugni <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\n \n\n\n\nOn Tue, Feb 21, 2017 at 7:49 AM, Pietro Pugni <[email protected]> wrote:\n\n\nHi there,\n\nI configured an IBM X3650 M4 for development and testing purposes. It’s composed by:\n\n\n - 2 x Intel Xeon E5-2690 @ 2.90Ghz (2 x 8 physical Cores + HT)\n\n\n - 96GB RAM DDR3 1333MHz (12 x 8GB)\n\n\n - 2 x 146GB SAS HDDs @ 15k rpm configured in RAID1 (mdadm)\n\n\n - 6 x 525GB SATA SSDs (over-provisioned at 25%, so 393GB available)\n\n\n \n\n\nI’ve done a lot of testing focusing on 4k and 8k workloads and found that the IOPS of those SSDs are half the expected. On\r\nserverfault.com someone suggested me that probably the bottle neck is the embedded RAID controller, a IBM ServeRaid m5110e, which mounts a LSI 2008 controller.\n\n\n \n\n\nI’m using the disks in JBOD mode with mdadm software RAID, which is blazing fast. The CPU is also very fast, so I don’t mind having a little overhead due to software RAID.\n\n\n \n\n\nMy typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.\n\n\n \n\n\n \n\n\nHere’s my post on \r\nserverfault.com ( http://serverfault.com/questions/833642/slow-ssd-performance-ibm-x3650-m4-7915 )\n\n\nand here’s a graph of those six SSDs evaluated using fio as stand-alone disks (outside of the RAID):\n\n\n \n\n\n\n\n\n \n\n\nAll those IOPS should be doubled if all was working correctly. The curve trend is correct for increasing IO Depths.\n\n\n \n\n\n \n\n\nAnyway, I would like to buy a HBA controller that leverages those 6 SSDs. Each SSD should deliver about 80k to 90k IOPS, so in RAID10 I should get ~240k IOPS (6 x 80k / 2) and in RAID0 ~480k IOPS (6 x 80k). I’ve seen that mdadm effectively\r\n scales performance, but the controller limits the overal IOPS at ~120k (exactly the half of the expected IOPS).\n\n\n \n\n\nWhat HBA controller would you suggest me able to handle 500k IOPS? \n\n\n \n\n\n \n\n\n\nMy server is able to handle 8 more SSDs, for a total of 14 SSDs and 1260k theoretical IOPS. If we imagine adding only 2 more disks, I will achieve 720k theoretical IOPS in RAID0. \n\n\n\n \n\n\n\nWhat HBA controller would you suggest me able to handle more than 700k IOPS? \n\n\n\n\n \n\n\n\nHave you got some advices about using mdadm RAID software on SATAIII SSDs and plain HBA?\n\n\n\n\n \n\n\nRandom points/suggestions:\n\n\n*) mdadm is the way to go. I think you'll get bandwidth constrained on most modern hba unless they are really crappy. On reasonably modern hardware storage is rarely the bottleneck anymore (which is a great place to be). Fancy raid\r\n controllers may actually hurt performance -- they are obsolete IMNSHO.\n\n\n \n\n\n*) Small point, but you'll want to crank effective_io_concurrency (see: https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com).\r\n It only affects certain kinds of queries, but when it works it really works. 
Those benchmarks were done on my crapbox dell workstation!\n\n\n \n\n\n*) For very high transaction rates, you can get a lot of benefit from disabling synchronous_commit if you are willing to accommodate the risk. I do not recommend disabling fsync unless you are prepared to regenerate the entire database\r\n at any time.\n\n\n \n\n\n*) Don't assume indefinite linear scaling as you increase storage capacity -- the database itself can become the bottleneck, especially for writing. To improve write performance, classic optimization strategies of trying to intelligently\r\n bundle writes around units of work still apply. If you are expecting high rates of write activity your engineering focus needs to be here for sure (read scaling is comparatively pretty easy).\n\n\n \n\n\n*) I would start doing your benchmarking with pgbench since that is going to most closely reflect measured production performance. \n\n\n \n\n\n> My typical workload is Postgres run as a DWH with 1 to 2 billions of rows, big indexes, partitions and so on, but also intensive statistical computations.\n\n\n \n\n\nIf this is the case your stack performance is going to be based on data structure design. Make liberal use of:\n\n\n*) natural keys \n\n\n*) constraint exclusion for partition selection\n\n\n*) BRIN index is amazing (if you can work into it's limitations)\n\n\n*) partial indexing\n\n\n*) covering indexes. Don't forget to vacuum your partitions before you make them live if you use them\n\n\n \n\n\nIf your data is going to get really big and/or query activity is expected to be high, keep an eye on your scale out strategy. Going monolithic to bootstrap your app is the right choice IMO but start thinking about the longer term if you\r\n are expecting growth. I'm starting to come out to the perspective that lift/shift scaleout using postgres fdw without an insane amount of app retooling could be a viable option by postgres 11/12 or so. For my part I scaled out over asynchronous dblink which\r\n is a much more maintenance heavy strategy (but works fabulous although I which you could asynchronously connect).\n\n\n \n\n\nmerlin",
"msg_date": "Tue, 21 Feb 2017 19:40:03 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam\n RAID10)"
},
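A minimal fio sketch of the 64k 70/30 read/write mix at QD=4 that Wes recommends above (the device name, runtime and job name are placeholders, not from the thread; the effective queue depth is iodepth multiplied by numjobs, so numjobs stays at 1 here, and running against a raw device is destructive, so point it at a scratch disk or file):

    fio --filename=/dev/sdx --direct=1 --ioengine=libaio \
        --rw=randrw --rwmixread=70 --bs=64k \
        --iodepth=4 --numjobs=1 --time_based --runtime=60 \
        --norandommap --randrepeat=0 --group_reporting --name=64k-mix

Raising numjobs (as comes up later in the thread) multiplies the total queue depth and avoids being bound by a single CPU core.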
{
"msg_contents": "On Tue, Feb 21, 2017 at 1:40 PM, Wes Vaske (wvaske) <[email protected]>\nwrote:\n\n> - HW RAID can give better performance if your drives do not have\n> a capacitor backed cache (like the MX300) AND the controller has a battery\n> backed cache. **Consumer drives can often get better performance from HW\n> RAID**. (otherwise MDADM has been faster in all of my testing)\n>\n\nI stopped recommending non-capacitor drives a long time ago for databases.\nA capacitor is basically a battery that operates on the drive itself and is\nnot subject to chemical failure. Also, drives without capacitors tend not\n(in my direct experience) to be suitable for database use in any scenario\nwhere write performance matters. There are capacitor equipped drives that\ngive excellent performance for around .60$/gb. I'm curious what the entry\npoint is for micron models are capacitor enabled...\n\nMLC solid state drives are essentially raid systems already with very\ncomplex tradeoffs engineered into the controller itself -- hw raid\ncontrollers are redundant systems and their price and added latency to\nfilesystem calls is not warranted. I guess in theory a SSD specialized\nraid controller could cooperate with the drives and do things like manage\nwear leveling across multiple devices but AFAIK no such product exists\n(note: I haven't looked lately).\n\nmerlin\n\nOn Tue, Feb 21, 2017 at 1:40 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n\n\n- \nHW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer\n drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)I stopped recommending non-capacitor drives a long time ago for databases. A capacitor is basically a battery that operates on the drive itself and is not subject to chemical failure. Also, drives without capacitors tend not (in my direct experience) to be suitable for database use in any scenario where write performance matters. There are capacitor equipped drives that give excellent performance for around .60$/gb. I'm curious what the entry point is for micron models are capacitor enabled...MLC solid state drives are essentially raid systems already with very complex tradeoffs engineered into the controller itself -- hw raid controllers are redundant systems and their price and added latency to filesystem calls is not warranted. I guess in theory a SSD specialized raid controller could cooperate with the drives and do things like manage wear leveling across multiple devices but AFAIK no such product exists (note: I haven't looked lately).merlin",
"msg_date": "Tue, 21 Feb 2017 14:01:59 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
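One practical way to sanity-check Merlin's point about volatile write caches is pg_test_fsync, which ships with PostgreSQL: a drive that acknowledges syncs out of a cache with no power-loss protection will typically report implausibly high fsync rates. A short sketch, assuming the tool is on the PATH and the scratch file path (a placeholder) sits on the filesystem that will hold the WAL:

    pg_test_fsync -s 2 -f /mnt/ssd/pg_test_fsync.tmp   # ~2 seconds per test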
{
"msg_contents": "Suggestion #1 is to turn off any write caching on the RAID controller.\nUsing LSI MegaRAID we went from 3k to 5k tps to 18k just turning off write\ncaching. Basically it just got in the way.\n\nSuggestion #1 is to turn off any write caching on the RAID controller. Using LSI MegaRAID we went from 3k to 5k tps to 18k just turning off write caching. Basically it just got in the way.",
"msg_date": "Tue, 21 Feb 2017 13:09:12 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
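Before/after tps figures like Scott's are usually collected with pgbench, which Merlin also recommends earlier in the thread. A hedged sketch, assuming a scratch database named bench and using scale/client values that are only starting points:

    pgbench -i -s 100 bench           # initialize a scale-100 dataset
    pgbench -c 32 -j 8 -T 300 bench   # 32 clients, 8 worker threads, 5 minutes

Running the same command with the controller cache enabled and then disabled reproduces the comparison.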
{
"msg_contents": "> I'm curious what the entry point is for micron models are capacitor enabled...\r\n\r\nThe 5100 is the entry SATA drive with full power loss protection.\r\nhttp://www.anandtech.com/show/10886/micron-announces-5100-series-enterprise-sata-ssds-with-3d-tlc-nand\r\n\r\nFun Fact: 3D TLC can give better endurance than planar MLC. http://www.chipworks.com/about-chipworks/overview/blog/intelmicron-detail-their-3d-nand-iedm\r\n\r\nMy understanding (and I’m not a process or electrical engineer) is that the 3D cell size is significantly larger than what was being used for planar (Samsung’s 3D is reportedly a ~40nm class device vs our most recent planar which is 16nm). This results in many more electrons per cell which provides better endurance.\r\n\r\n\r\nWes Vaske\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Merlin Moncure\r\nSent: Tuesday, February 21, 2017 2:02 PM\r\nTo: Wes Vaske (wvaske) <[email protected]>\r\nCc: Pietro Pugni <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\r\n\r\nOn Tue, Feb 21, 2017 at 1:40 PM, Wes Vaske (wvaske) <[email protected]<mailto:[email protected]>> wrote:\r\n- HW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\r\n\r\nI stopped recommending non-capacitor drives a long time ago for databases. A capacitor is basically a battery that operates on the drive itself and is not subject to chemical failure. Also, drives without capacitors tend not (in my direct experience) to be suitable for database use in any scenario where write performance matters. There are capacitor equipped drives that give excellent performance for around .60$/gb. I'm curious what the entry point is for micron models are capacitor enabled...\r\n\r\nMLC solid state drives are essentially raid systems already with very complex tradeoffs engineered into the controller itself -- hw raid controllers are redundant systems and their price and added latency to filesystem calls is not warranted. I guess in theory a SSD specialized raid controller could cooperate with the drives and do things like manage wear leveling across multiple devices but AFAIK no such product exists (note: I haven't looked lately).\r\n\r\nmerlin\r\n\r\n\n\n\n\n\n\n\n\n\n> I'm curious what the entry point is for micron models are capacitor enabled...\n \nThe 5100 is the entry SATA drive with full power loss protection.\nhttp://www.anandtech.com/show/10886/micron-announces-5100-series-enterprise-sata-ssds-with-3d-tlc-nand\n \nFun Fact: 3D TLC can give better endurance than planar MLC.\r\n\r\nhttp://www.chipworks.com/about-chipworks/overview/blog/intelmicron-detail-their-3d-nand-iedm\n \nMy understanding (and I’m not a process or electrical engineer) is that the 3D cell size is significantly larger than what was being used for planar (Samsung’s 3D is reportedly\r\n a ~40nm class device vs our most recent planar which is 16nm). 
This results in many more electrons per cell which provides better endurance.\n \n \nWes Vaske\n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Merlin Moncure\nSent: Tuesday, February 21, 2017 2:02 PM\nTo: Wes Vaske (wvaske) <[email protected]>\nCc: Pietro Pugni <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\n \n\n\n\nOn Tue, Feb 21, 2017 at 1:40 PM, Wes Vaske (wvaske) <[email protected]> wrote:\n\n\n\n- \r\nHW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get\r\n better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\n\n\n\n\n \n\n\nI stopped recommending non-capacitor drives a long time ago for databases. A capacitor is basically a battery that operates on the drive itself and is not subject to chemical failure. Also, drives without capacitors tend not (in my direct\r\n experience) to be suitable for database use in any scenario where write performance matters. There are capacitor equipped drives that give excellent performance for around .60$/gb. I'm curious what the entry point is for micron models are capacitor enabled...\n\n\n \n\n\nMLC solid state drives are essentially raid systems already with very complex tradeoffs engineered into the controller itself -- hw raid controllers are redundant systems and their price and added latency to filesystem calls is not warranted. \r\n I guess in theory a SSD specialized raid controller could cooperate with the drives and do things like manage wear leveling across multiple devices but AFAIK no such product exists (note: I haven't looked lately).\n\n\n \n\n\nmerlin",
"msg_date": "Tue, 21 Feb 2017 21:50:23 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam\n RAID10)"
},
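For readers who want to check endurance on their own drives, SMART exposes wear indicators, though the attribute names vary by vendor and model. A rough sketch, assuming smartmontools is installed and /dev/sda (a placeholder) is the SSD in question:

    smartctl -a /dev/sda | grep -Ei 'wear|percent|written'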
{
"msg_contents": "Thank you a lot for your suggestions.\n\n> Random points/suggestions:\n> *) mdadm is the way to go. I think you'll get bandwidth constrained on most modern hba unless they are really crappy. On reasonably modern hardware storage is rarely the bottleneck anymore (which is a great place to be). Fancy raid controllers may actually hurt performance -- they are obsolete IMNSHO.\n> \n> *) Small point, but you'll want to crank effective_io_concurrency (see: https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com <https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com>). It only affects certain kinds of queries, but when it works it really works. Those benchmarks were done on my crapbox dell workstation!\n\nThis is really impressive, thank you for sharing it.\n\n> *) For very high transaction rates, you can get a lot of benefit from disabling synchronous_commit if you are willing to accommodate the risk. I do not recommend disabling fsync unless you are prepared to regenerate the entire database at any time.\n> \n> *) Don't assume indefinite linear scaling as you increase storage capacity -- the database itself can become the bottleneck, especially for writing. To improve write performance, classic optimization strategies of trying to intelligently bundle writes around units of work still apply. If you are expecting high rates of write activity your engineering focus needs to be here for sure (read scaling is comparatively pretty easy).\n\nWhat do you mean with “units of work”?\n\n\n> *) I would start doing your benchmarking with pgbench since that is going to most closely reflect measured production performance. \n\nMy final benchmark will be my application, it’s quite articulated and does also query parallelization using a custom splitter, so it will be hard to reproduce it using pgbench. At the moment I was just figuring out why my SSD weren’t performing as expected with comparable benchmarks found on hardware review websites (fio with 4k and 8k workloads). \n\n\n> If this is the case your stack performance is going to be based on data structure design. Make liberal use of:\n> *) natural keys \n> *) constraint exclusion for partition selection\n> *) BRIN index is amazing (if you can work into it's limitations)\n> *) partial indexing\n> *) covering indexes. Don't forget to vacuum your partitions before you make them live if you use them\n\nThe data definition is optimized for Postgres yet, but didn’t know about covering indexes. I read about BRIN but never tried them. Will do some testing.\n\n\n> If your data is going to get really big and/or query activity is expected to be high, keep an eye on your scale out strategy. Going monolithic to bootstrap your app is the right choice IMO but start thinking about the longer term if you are expecting growth. I'm starting to come out to the perspective that lift/shift scaleout using postgres fdw without an insane amount of app retooling could be a viable option by postgres 11/12 or so. For my part I scaled out over asynchronous dblink which is a much more maintenance heavy strategy (but works fabulous although I which you could asynchronously connect).\n\nThank you for your hints \n\n Pietro Pugni\n\n\nThank you a lot for your suggestions.Random points/suggestions:*) mdadm is the way to go. I think you'll get bandwidth constrained on most modern hba unless they are really crappy. 
On reasonably modern hardware storage is rarely the bottleneck anymore (which is a great place to be). Fancy raid controllers may actually hurt performance -- they are obsolete IMNSHO.*) Small point, but you'll want to crank effective_io_concurrency (see: https://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH%3DHWh1WbLRsioe%3DmzRJTHwtr%3D2azsTdQ%40mail.gmail.com). It only affects certain kinds of queries, but when it works it really works. Those benchmarks were done on my crapbox dell workstation!This is really impressive, thank you for sharing it.*) For very high transaction rates, you can get a lot of benefit from disabling synchronous_commit if you are willing to accommodate the risk. I do not recommend disabling fsync unless you are prepared to regenerate the entire database at any time.*) Don't assume indefinite linear scaling as you increase storage capacity -- the database itself can become the bottleneck, especially for writing. To improve write performance, classic optimization strategies of trying to intelligently bundle writes around units of work still apply. If you are expecting high rates of write activity your engineering focus needs to be here for sure (read scaling is comparatively pretty easy).What do you mean with “units of work”?*) I would start doing your benchmarking with pgbench since that is going to most closely reflect measured production performance. My final benchmark will be my application, it’s quite articulated and does also query parallelization using a custom splitter, so it will be hard to reproduce it using pgbench. At the moment I was just figuring out why my SSD weren’t performing as expected with comparable benchmarks found on hardware review websites (fio with 4k and 8k workloads). If this is the case your stack performance is going to be based on data structure design. Make liberal use of:*) natural keys *) constraint exclusion for partition selection*) BRIN index is amazing (if you can work into it's limitations)*) partial indexing*) covering indexes. Don't forget to vacuum your partitions before you make them live if you use themThe data definition is optimized for Postgres yet, but didn’t know about covering indexes. I read about BRIN but never tried them. Will do some testing.If your data is going to get really big and/or query activity is expected to be high, keep an eye on your scale out strategy. Going monolithic to bootstrap your app is the right choice IMO but start thinking about the longer term if you are expecting growth. I'm starting to come out to the perspective that lift/shift scaleout using postgres fdw without an insane amount of app retooling could be a viable option by postgres 11/12 or so. For my part I scaled out over asynchronous dblink which is a much more maintenance heavy strategy (but works fabulous although I which you could asynchronously connect).Thank you for your hints Pietro Pugni",
"msg_date": "Wed, 22 Feb 2017 00:18:36 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
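Since covering indexes and BRIN come up here as new topics, a brief sketch of the index types Merlin lists, run against a hypothetical measurements table (database, table, column names and predicates are placeholders, not part of the original thread):

    # BRIN: tiny index, effective when the column correlates with physical row order
    psql -d dwh -c "CREATE INDEX ON measurements USING brin (recorded_at);"
    # partial index: only index the rows the hot queries actually touch
    psql -d dwh -c "CREATE INDEX ON measurements (device_id) WHERE status = 'active';"
    # \"covering\" composite index (pre-v11 style): extra columns enable index-only scans
    psql -d dwh -c "CREATE INDEX ON measurements (device_id, recorded_at, value);"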
{
"msg_contents": "> Disclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with the consumer drives.\n> \n> Verify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance. \n\nWhat an honor :)\nMy SSDs are Crucial MX300 (consumer drives) but, as previously stated, they gave ~90k IOPS in all benchmarks I found on the web, while mine tops at ~40k IOPS. Being 6 devices bought from 4 different sellers it’s impossible that they are all defective.\n\n> Is this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest`\n> \n> With FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.\n\nI used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. The command string was the same, except from varying IO Depth and numjobs=1.\n\n\n> Few other things:\n> - Mdadm will give better performance than HW RAID for specific benchmarks.\n> - Performance is NOT linear with drive count for synthetic benchmarks.\n> - It is often nearly linear for application performance.\n\nmdadm RAID10 scaled linearly while mdadm RAID0 scaled much less.\n\n\n> - HW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\n\nMy RAID controller doesn’t have a BBU.\n\n\n> - Mdadm RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive in a mirror pair should get the next read. It results in really weird behavior of most read IO going to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s instead of using the default RAID10 implementation.\n\nVery interesting. I will double check this after buying and mounting the new HBA. I heard of someone doing what you are suggesting but never tried.\n\n\n> - Don’t focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably* close to what I see for heavy OLTP).\n\nWhy 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\n\n\n> I’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance profiles. Vendor A had much better benchmark performance (2x IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). (I got equivalent performance from A & B when using the same settings). \n\nDo you have some HBA card to suggest? What do you think of LSI SAS3008? 
I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008).\nI don’t know if I can connect a 12Gb/s HBA directly to my existing 6Gb/s expander/backplane.. sure I will have the right cables but don’t know if it will work without changing the expander/backplane.\n\n\nThank you a lot for your time\n Pietro Pugni\n\n\n\n\nDisclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with the consumer drives. Verify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance. What an honor :)My SSDs are Crucial MX300 (consumer drives) but, as previously stated, they gave ~90k IOPS in all benchmarks I found on the web, while mine tops at ~40k IOPS. Being 6 devices bought from 4 different sellers it’s impossible that they are all defective.Is this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest` With FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.I used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. The command string was the same, except from varying IO Depth and numjobs=1.Few other things:- Mdadm will give better performance than HW RAID for specific benchmarks.- Performance is NOT linear with drive count for synthetic benchmarks.- It is often nearly linear for application performance.mdadm RAID10 scaled linearly while mdadm RAID0 scaled much less.- HW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)My RAID controller doesn’t have a BBU.- Mdadm RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive in a mirror pair should get the next read. It results in really weird behavior of most read IO going to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s instead of using the default RAID10 implementation.Very interesting. I will double check this after buying and mounting the new HBA. I heard of someone doing what you are suggesting but never tried.- Don’t focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably* close to what I see for heavy OLTP).Why 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :) I’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance profiles. 
Vendor A had much better benchmark performance (2x IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). (I got equivalent performance from A & B when using the same settings). Do you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008).I don’t know if I can connect a 12Gb/s HBA directly to my existing 6Gb/s expander/backplane.. sure I will have the right cables but don’t know if it will work without changing the expander/backplane.Thank you a lot for your time Pietro Pugni",
"msg_date": "Wed, 22 Feb 2017 00:44:14 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
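The manual "stripe over mirror pairs" layout Wes describes, and Pietro says he may try, looks roughly like this with mdadm; device names are placeholders and the commands destroy whatever is on those disks:

    # three RAID1 pairs...
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
    mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde /dev/sdf
    # ...striped together with RAID0 instead of the built-in RAID10 personality
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/md1 /dev/md2 /dev/md3

Whether this actually sidesteps the RAID10 read-balancing behaviour described above is something to verify with iostat under load.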
{
"msg_contents": "> Suggestion #1 is to turn off any write caching on the RAID controller. Using LSI MegaRAID we went from 3k to 5k tps to 18k just turning off write caching. Basically it just got in the way.\n\nWrite caching is disabled because I removed the expansion card of the RAID controller. It didn’t let the server boot properly with SSDs mounted.\n\nThank you for the suggestion\n Pietro Pugni\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Feb 2017 00:46:13 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
{
"msg_contents": "> I used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. The command string was the same, except from varying IO Depth and numjobs=1.\r\n\r\nYou might need to increase the number of jobs here. The primary reason for this parameter is to improve scaling when you’re single thread CPU bound. With numjob=1 FIO will use only a single thread and there’s only so much a single CPU core can do.\r\n\r\n> Being 6 devices bought from 4 different sellers it’s impossible that they are all defective.\r\n\r\nI was a little unclear on the disk cache part. It’s a setting, generally in the RAID controller / HBA. It’s also a filesystem level option in Linux (hdparm) and Windows (somewhere in device manager?). The reason to disable the disk cache is that it’s NOT protected against power loss protection on the MX300. So by disabling it you can ensure 100% write consistency at the cost of write performance. (using fully power protected drives will let you keep disk cache enabled)\r\n\r\n> Why 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\r\n\r\nIt’s more representative of what you’ll see at the application level. (If you’ve got a running system, you can just use IOstat to see what your average QD is. (iostat -x 10, and it’s the column: avgqu-sz. Change from 10 seconds to whatever interval works best for your environment)\r\n\r\n> Do you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008).\r\n\r\nI’ve been able to consistently get the same performance out of any of the LSI based cards. The 3008 and 3108 both work great, regardless of vendor. Just test or read up on the different configuration parameters (read ahead, write back vs write through, disk cache)\r\n\r\n\r\nWes Vaske\r\nSenior Storage Solutions Engineer\r\nMicron Technology\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Pietro Pugni\r\nSent: Tuesday, February 21, 2017 5:44 PM\r\nTo: Wes Vaske (wvaske) <[email protected]>\r\nCc: Merlin Moncure <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\r\n\r\nDisclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with the consumer drives.\r\n\r\nVerify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance.\r\n\r\nWhat an honor :)\r\nMy SSDs are Crucial MX300 (consumer drives) but, as previously stated, they gave ~90k IOPS in all benchmarks I found on the web, while mine tops at ~40k IOPS. Being 6 devices bought from 4 different sellers it’s impossible that they are all defective.\r\n\r\n\r\nIs this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100 --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest`\r\n\r\nWith FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.\r\n\r\nI used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. 
The command string was the same, except from varying IO Depth and numjobs=1.\r\n\r\n\r\n\r\nFew other things:\r\n- Mdadm will give better performance than HW RAID for specific benchmarks.\r\n- Performance is NOT linear with drive count for synthetic benchmarks.\r\n- It is often nearly linear for application performance.\r\n\r\nmdadm RAID10 scaled linearly while mdadm RAID0 scaled much less.\r\n\r\n\r\n\r\n- HW RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. *Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in all of my testing)\r\n\r\nMy RAID controller doesn’t have a BBU.\r\n\r\n\r\n\r\n- Mdadm RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive in a mirror pair should get the next read. It results in really weird behavior of most read IO going to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s instead of using the default RAID10 implementation.\r\n\r\nVery interesting. I will double check this after buying and mounting the new HBA. I heard of someone doing what you are suggesting but never tried.\r\n\r\n\r\n\r\n- Don’t focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably* close to what I see for heavy OLTP).\r\n\r\nWhy 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\r\n\r\n\r\n\r\nI’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance profiles. Vendor A had much better benchmark performance (2x IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). (I got equivalent performance from A & B when using the same settings).\r\n\r\nDo you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008).\r\nI don’t know if I can connect a 12Gb/s HBA directly to my existing 6Gb/s expander/backplane.. sure I will have the right cables but don’t know if it will work without changing the expander/backplane.\r\n\r\n\r\nThank you a lot for your time\r\n Pietro Pugni\r\n\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n> I used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. The command string was the same, except from varying IO Depth and numjobs=1.\n \nYou might need to increase the number of jobs here. The primary reason for this parameter is to improve scaling when you’re single thread CPU bound. With numjob=1 FIO will\r\n use only a single thread and there’s only so much a single CPU core can do.\n \n> Being 6 devices bought from 4 different sellers it’s impossible that they are all defective.\n \nI was a little unclear on the disk cache part. It’s a setting, generally in the RAID controller / HBA. 
It’s also a filesystem level option in Linux (hdparm) and Windows (somewhere\r\n in device manager?). The reason to disable the disk cache is that it’s NOT protected against power loss protection on the MX300. So by disabling it you can ensure 100% write consistency at the cost of write performance. (using fully power protected drives\r\n will let you keep disk cache enabled)\n \n> Why 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\n \nIt’s more representative of what you’ll see at the application level. (If you’ve got a running system, you can just use IOstat to see what your average QD is. (iostat -x 10, and it’s the column: avgqu-sz. Change from 10 seconds to whatever\r\n interval works best for your environment)\n \n> Do you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to\r\n the actual embedded RAID controller (LSI 2008).\n \nI’ve been able to consistently get the same performance out of any of the LSI based cards. The 3008 and 3108 both work great, regardless of vendor. Just test or read up on the different configuration parameters (read ahead, write back vs\r\n write through, disk cache)\n \n \n\nWes Vaske\nSenior Storage Solutions Engineer\nMicron Technology\r\n\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Pietro Pugni\nSent: Tuesday, February 21, 2017 5:44 PM\nTo: Wes Vaske (wvaske) <[email protected]>\nCc: Merlin Moncure <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Suggestions for a HBA controller (6 x SSDs + madam RAID10)\n\n\n \n\n\nDisclaimer: I’ve done extensive testing (FIO and postgres) with a few different RAID controllers and HW RAID vs mdadm. We (micron) are crucial but I don’t personally work with\r\n the consumer drives.\n\n\n \n\n\nVerify whether you have your disk write cache enabled or disabled. If it’s disabled, that will have a large impact on write performance. \n\n\n\n\n \n\n\nWhat an honor :)\n\n\nMy SSDs are Crucial MX300 (consumer drives) but, as previously stated, they gave ~90k IOPS in all benchmarks I found on the web, while mine tops at ~40k IOPS. Being 6 devices bought from 4 different sellers it’s impossible that they are\r\n all defective.\n\n\n\n\n\n\nIs this the *exact* string you used? `fio --filename=/dev/sdx --direct=1 --rw=randrw --refill_buffers --norandommap --randrepeat=0 --ioengine=libaio --bs=4k --rwmixread=100\r\n --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest`\n\n\n \n\n\nWith FIO, you need to multiply iodepth by numjobs to get the final queue depth its pushing. (in this case, 256). Make sure you’re looking at the correct data.\n\n\n\n \n\n\nI used —numjobs=1 because I needed the time series values for bandwidth, latencies and iops. The command string was the same, except from varying IO Depth and numjobs=1.\n\n\n \n\n\n\n\n\n\n\nFew other things:\n\n\n- Mdadm\r\n will give better performance than HW RAID for specific benchmarks.\n\n\n- Performance\r\n is NOT linear with drive count for synthetic benchmarks.\n\n\n- It\r\n is often nearly linear for application performance.\n\n\n\n\n \n\n\nmdadm RAID10 scaled linearly while mdadm RAID0 scaled much less.\n\n\n \n\n\n\n\n\n\n\n- HW\r\n RAID can give better performance if your drives do not have a capacitor backed cache (like the MX300) AND the controller has a battery backed cache. 
*Consumer drives can often get better performance from HW RAID*. (otherwise MDADM has been faster in\r\n all of my testing)\n\n\n\n\n \n\n\nMy RAID controller doesn’t have a BBU.\n\n\n \n\n\n\n\n\n\n\n- Mdadm\r\n RAID10 has a bug where reads are not properly distributed between the mirror pairs. (It uses head position calculated from the last IO to determine which drive in a mirror pair should get the next read. It results in really weird behavior of most read IO going\r\n to half of your drives instead of being evenly split as should be the case for SSDs). You can see this by running iostat while you’ve got a load running and you’ll see uneven distribution of IOs. FYI, the RAID1 implementation has an exception where it does\r\n NOT use head position for SSDs. I have yet to test this but you should be able to get better performance by manually striping a RAID0 across multiple RAID1s instead of using the default RAID10 implementation.\n\n\n\n\n \n\n\nVery interesting. I will double check this after buying and mounting the new HBA. I heard of someone doing what you are suggesting but never tried.\n\n\n \n\n\n\n\n\n\n- Don’t\r\n focus on 4k Random Read. Do something more similar to a PG workload (64k 70/30 R/W @ QD=4 is *reasonably* close to what I see for heavy OLTP).\n\n\n\n \n\n\nWhy 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\n\n\n \n\n\n\n\n\n\nI’ve tested multiple controllers based on the LSI 3108 and found that default settings from one vendor to another provide drastically different performance\r\n profiles. Vendor A had much better benchmark performance (2x IOPS of B) while vendor B gave better application performance (20% better OLTP performance in Postgres). (I got equivalent performance from A & B when using the same settings). \n\n\n\n \n\n\nDo you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to\r\n the actual embedded RAID controller (LSI 2008).\n\n\nI don’t know if I can connect a 12Gb/s HBA directly to my existing 6Gb/s expander/backplane.. sure I will have the right cables but don’t know if it will work without changing the expander/backplane.\n\n\n \n\n\n \n\n\nThank you a lot for your time\n\n\n Pietro Pugni",
"msg_date": "Wed, 22 Feb 2017 15:36:05 +0000",
"msg_from": "\"Wes Vaske (wvaske)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam\n RAID10)"
},
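The two checks Wes mentions translate into commands along these lines (the device name is a placeholder; hdparm toggles the drive's own volatile cache, which is a separate knob from the controller cache discussed earlier):

    hdparm -W /dev/sda     # show whether the drive's write cache is on
    hdparm -W 0 /dev/sda   # turn it off (safer without power-loss protection, slower writes)
    iostat -x 10           # watch the avgqu-sz column for the real workload's queue depth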
{
"msg_contents": "I just mounted and configured my brand new LSI 3008-8i. This server had 1 SAS expander connected to 2 backplanes (8 disks to the first backplane and no disks connected to the second backplane). After some testing I found the SAS expander was a bottleneck, so I removed it and connected the first backplane directly to the controller. \n\nThe following results are from 4k 100% random reads (32QD) run in parallel on each single SSD:\n\nRaw SSDs [ 4k, 100% random reads, 32 Queue Depth]\nServeRaid m5110e (with SAS expander) [numjob=1]\n read : io=5111.2MB, bw=87227KB/s, iops=21806, runt= 60002msec\n read : io=4800.6MB, bw=81927KB/s, iops=20481, runt= 60002msec\n read : io=4997.6MB, bw=85288KB/s, iops=21322, runt= 60002msec\n read : io=4796.2MB, bw=81853KB/s, iops=20463, runt= 60001msec\n read : io=5062.6MB, bw=86400KB/s, iops=21599, runt= 60001msec\n read : io=4989.6MB, bw=85154KB/s, iops=21288, runt= 60001msec\nTotal read iops: 126,595 ( ~ 21,160 iops/disk)\n\n\nRaw SSDs [ 4k, 100% random reads, 32 Queue Depth]\nLenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=1]\n read : io=15032MB, bw=256544KB/s, iops=64136, runt= 60001msec\n read : io=16679MB, bw=284656KB/s, iops=71163, runt= 60001msec\n read : io=15046MB, bw=256779KB/s, iops=64194, runt= 60001msec\n read : io=16667MB, bw=284444KB/s, iops=71111, runt= 60001msec\n read : io=16692MB, bw=284867KB/s, iops=71216, runt= 60001msec\n read : io=15149MB, bw=258534KB/s, iops=64633, runt= 60002msec\nTotal read iops: 406,453 ( ~ 67,742 iops/disk)\n\n\n321% performance improvement.\nI chose 4k 32QD because it should deliver the maximum iops and should clearly show if the I/O is properly configured.\nI don’t mind testing the embedded m5110e without the SAS expander because it will be slower for sure. \n\n\n> You might need to increase the number of jobs here. The primary reason for this parameter is to improve scaling when you’re single thread CPU bound. With numjob=1 FIO will use only a single thread and there’s only so much a single CPU core can do.\n\nThe HBA provided slightly better performance without removing the expander and even more slightly faster after removing the expander, but then I tried increasing numjob from 1 to 16 (tried also 12, 18, 20, 24 and 32 but found 16 to get higher iops) and the benchmarks returned expected results. I guess how this relates with Postgres.. probably effective_io_concurrency, as suggested by Merlin Moncure, should be the counterpart of numjob in fio?\n\n\n> I was a little unclear on the disk cache part. It’s a setting, generally in the RAID controller / HBA. It’s also a filesystem level option in Linux (hdparm) and Windows (somewhere in device manager?). The reason to disable the disk cache is that it’s NOT protected against power loss protection on the MX300. So by disabling it you can ensure 100% write consistency at the cost of write performance. (using fully power protected drives will let you keep disk cache enabled)\n\nI always enabled the write cache during my tests. I tried to disable it but performance were too poor. Those SSD are consumer ones and don’t have any capacitor :(\n\n\n> > Why 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :)\n> \n> It’s more representative of what you’ll see at the application level. (If you’ve got a running system, you can just use IOstat to see what your average QD is. (iostat -x 10, and it’s the column: avgqu-sz. 
Change from 10 seconds to whatever interval works best for your environment)\n\nI tried your suggestion (64k, 70/30 random r/w, 4QD) on RAID0 and RAID10 (mdadm) with the new controller and the results are quite good if we think that the underlying SSDs are consumer with original firmware (overprovisioned at 25%).\n\nRAID10 is about 22% slower in both reads and writes compared to RAID0, at least on a 1 minute run. The totals and averages were calculated from the whole fio log output using the single jobs iops.\n\nThese are the results:\n\n\n############################################################################\nmdadm RAID0 [ 64k, 70% random reads, 30% random writes, 04 Queue Depth]\nLenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=16]\n############################################################################\nRun status group 0 (all jobs):\n READ: io=75943MB, aggrb=1265.7MB/s, minb=80445KB/s, maxb=81576KB/s, mint=60001msec, maxt=60004msec\n WRITE: io=32585MB, aggrb=556072KB/s, minb=34220KB/s, maxb=35098KB/s, mint=60001msec, maxt=60004msec\n\nDisk stats (read/write):\n md127: ios=1213256/520566, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=202541/86892, aggrmerge=0/0, aggrticks=490418/137398, aggrin_queue=628566, aggrutil=99.20%\n sdf: ios=202557/86818, merge=0/0, ticks=450384/131512, in_queue=582528, util=98.58%\n sdb: ios=202626/87184, merge=0/0, ticks=573448/177336, in_queue=751784, util=99.20%\n sdg: ios=202391/86810, merge=0/0, ticks=463644/137084, in_queue=601272, util=98.46%\n sde: ios=202462/86551, merge=0/0, ticks=470028/121424, in_queue=592500, util=98.79%\n sda: ios=202287/86697, merge=0/0, ticks=473312/121192, in_queue=595044, util=98.95%\n sdh: ios=202928/87293, merge=0/0, ticks=511696/135840, in_queue=648272, util=99.14%\n\nTotal read iops: 20,242 ( ~ 3,374 iops/disk)\nTotal write iops: 8,679 ( ~ 1,447 iops/disk)\n\n\n\n############################################################################\nmdadm RAID10 [ 64k, 70% random reads, 30% random writes, 04 Queue Depth]\nLenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=16]\n############################################################################\nRun status group 0 (all jobs):\n READ: io=58624MB, aggrb=976.11MB/s, minb=62125KB/s, maxb=62814KB/s, mint=60001msec, maxt=60005msec\n WRITE: io=25190MB, aggrb=429874KB/s, minb=26446KB/s, maxb=27075KB/s, mint=60001msec, maxt=60005msec\n\nDisk stats (read/write):\n md127: ios=936349/402381, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=156357/134348, aggrmerge=0/0, aggrticks=433286/262226, aggrin_queue=696052, aggrutil=99.41%\n sdf: ios=150239/134315, merge=0/0, ticks=298268/168472, in_queue=466852, util=95.31%\n sdb: ios=153088/133664, merge=0/0, ticks=329160/188060, in_queue=517432, util=96.81%\n sdg: ios=157361/135065, merge=0/0, ticks=658208/459168, in_queue=1118588, util=99.16%\n sde: ios=161361/134315, merge=0/0, ticks=476388/278628, in_queue=756056, util=97.61%\n sda: ios=160431/133664, merge=0/0, ticks=548620/329708, in_queue=878708, util=99.41%\n sdh: ios=155667/135065, merge=0/0, ticks=289072/149324, in_queue=438680, util=96.71%\n\nTotal read iops: 15,625 ( ~ 2,604 iops/disk)\nTotal write iops: 6,709 ( ~ 1,118 iops/disk)\n\n\n\n> > Do you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. 
It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008).\n> \n> I’ve been able to consistently get the same performance out of any of the LSI based cards. The 3008 and 3108 both work great, regardless of vendor. Just test or read up on the different configuration parameters (read ahead, write back vs write through, disk cache)\n\nDo you have any suggestion for fine tuning this controller? I’m referring to parameters like nr_requests, queue_depth, etc.\nAlso, any way to optimize the various mdadm parameters available at /sys/block/mdX/ ? I disabled the internal bitmap and write performance improved.\n\n\n\nThank you\n Pietro Pugni\n\n\n\n\nI just mounted and configured my brand new LSI 3008-8i. This server had 1 SAS expander connected to 2 backplanes (8 disks to the first backplane and no disks connected to the second backplane). After some testing I found the SAS expander was a bottleneck, so I removed it and connected the first backplane directly to the controller. The following results are from 4k 100% random reads (32QD) run in parallel on each single SSD:Raw SSDs [ 4k, 100% random reads, 32 Queue Depth]ServeRaid m5110e (with SAS expander) [numjob=1] read : io=5111.2MB, bw=87227KB/s, iops=21806, runt= 60002msec read : io=4800.6MB, bw=81927KB/s, iops=20481, runt= 60002msec read : io=4997.6MB, bw=85288KB/s, iops=21322, runt= 60002msec read : io=4796.2MB, bw=81853KB/s, iops=20463, runt= 60001msec read : io=5062.6MB, bw=86400KB/s, iops=21599, runt= 60001msec read : io=4989.6MB, bw=85154KB/s, iops=21288, runt= 60001msecTotal read iops: 126,595 ( ~ 21,160 iops/disk)Raw SSDs [ 4k, 100% random reads, 32 Queue Depth]Lenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=1] read : io=15032MB, bw=256544KB/s, iops=64136, runt= 60001msec read : io=16679MB, bw=284656KB/s, iops=71163, runt= 60001msec read : io=15046MB, bw=256779KB/s, iops=64194, runt= 60001msec read : io=16667MB, bw=284444KB/s, iops=71111, runt= 60001msec read : io=16692MB, bw=284867KB/s, iops=71216, runt= 60001msec read : io=15149MB, bw=258534KB/s, iops=64633, runt= 60002msecTotal read iops: 406,453 ( ~ 67,742 iops/disk)321% performance improvement.I chose 4k 32QD because it should deliver the maximum iops and should clearly show if the I/O is properly configured.I don’t mind testing the embedded m5110e without the SAS expander because it will be slower for sure. You might need to increase the number of jobs here. The primary reason for this parameter is to improve scaling when you’re single thread CPU bound. With numjob=1 FIO will use only a single thread and there’s only so much a single CPU core can do.The HBA provided slightly better performance without removing the expander and even more slightly faster after removing the expander, but then I tried increasing numjob from 1 to 16 (tried also 12, 18, 20, 24 and 32 but found 16 to get higher iops) and the benchmarks returned expected results. I guess how this relates with Postgres.. probably effective_io_concurrency, as suggested by Merlin Moncure, should be the counterpart of numjob in fio?I was a little unclear on the disk cache part. It’s a setting, generally in the RAID controller / HBA. It’s also a filesystem level option in Linux (hdparm) and Windows (somewhere in device manager?). The reason to disable the disk cache is that it’s NOT protected against power loss protection on the MX300. So by disabling it you can ensure 100% write consistency at the cost of write performance. 
(using fully power protected drives will let you keep disk cache enabled)I always enabled the write cache during my tests. I tried to disable it but performance were too poor. Those SSD are consumer ones and don’t have any capacitor :(> Why 64k and QD=4? I thought of 8k and larger QD. Will test as soon as possible and report here the results :) It’s more representative of what you’ll see at the application level. (If you’ve got a running system, you can just use IOstat to see what your average QD is. (iostat -x 10, and it’s the column: avgqu-sz. Change from 10 seconds to whatever interval works best for your environment)I tried your suggestion (64k, 70/30 random r/w, 4QD) on RAID0 and RAID10 (mdadm) with the new controller and the results are quite good if we think that the underlying SSDs are consumer with original firmware (overprovisioned at 25%).RAID10 is about 22% slower in both reads and writes compared to RAID0, at least on a 1 minute run. The totals and averages were calculated from the whole fio log output using the single jobs iops.These are the results:############################################################################mdadm RAID0 [ 64k, 70% random reads, 30% random writes, 04 Queue Depth]Lenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=16]############################################################################Run status group 0 (all jobs): READ: io=75943MB, aggrb=1265.7MB/s, minb=80445KB/s, maxb=81576KB/s, mint=60001msec, maxt=60004msec WRITE: io=32585MB, aggrb=556072KB/s, minb=34220KB/s, maxb=35098KB/s, mint=60001msec, maxt=60004msecDisk stats (read/write): md127: ios=1213256/520566, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=202541/86892, aggrmerge=0/0, aggrticks=490418/137398, aggrin_queue=628566, aggrutil=99.20% sdf: ios=202557/86818, merge=0/0, ticks=450384/131512, in_queue=582528, util=98.58% sdb: ios=202626/87184, merge=0/0, ticks=573448/177336, in_queue=751784, util=99.20% sdg: ios=202391/86810, merge=0/0, ticks=463644/137084, in_queue=601272, util=98.46% sde: ios=202462/86551, merge=0/0, ticks=470028/121424, in_queue=592500, util=98.79% sda: ios=202287/86697, merge=0/0, ticks=473312/121192, in_queue=595044, util=98.95% sdh: ios=202928/87293, merge=0/0, ticks=511696/135840, in_queue=648272, util=99.14%Total read iops: 20,242 ( ~ 3,374 iops/disk)Total write iops: 8,679 ( ~ 1,447 iops/disk)############################################################################mdadm RAID10 [ 64k, 70% random reads, 30% random writes, 04 Queue Depth]Lenovo N2215 (LSI 3008-8i flashed with LSI IT firmware, without SAS expander) [numjob=16]############################################################################Run status group 0 (all jobs): READ: io=58624MB, aggrb=976.11MB/s, minb=62125KB/s, maxb=62814KB/s, mint=60001msec, maxt=60005msec WRITE: io=25190MB, aggrb=429874KB/s, minb=26446KB/s, maxb=27075KB/s, mint=60001msec, maxt=60005msecDisk stats (read/write): md127: ios=936349/402381, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=156357/134348, aggrmerge=0/0, aggrticks=433286/262226, aggrin_queue=696052, aggrutil=99.41% sdf: ios=150239/134315, merge=0/0, ticks=298268/168472, in_queue=466852, util=95.31% sdb: ios=153088/133664, merge=0/0, ticks=329160/188060, in_queue=517432, util=96.81% sdg: ios=157361/135065, merge=0/0, ticks=658208/459168, in_queue=1118588, util=99.16% sde: ios=161361/134315, merge=0/0, ticks=476388/278628, in_queue=756056, util=97.61% sda: ios=160431/133664, merge=0/0, ticks=548620/329708, 
in_queue=878708, util=99.41% sdh: ios=155667/135065, merge=0/0, ticks=289072/149324, in_queue=438680, util=96.71%Total read iops: 15,625 ( ~ 2,604 iops/disk)Total write iops: 6,709 ( ~ 1,118 iops/disk)> Do you have some HBA card to suggest? What do you think of LSI SAS3008? I think it’s the same as the 3108 without RAID On Chip feature. Probably I will buy a Lenovo HBA card with that chip. It seems blazing fast (1mln IOPS) compared to the actual embedded RAID controller (LSI 2008). I’ve been able to consistently get the same performance out of any of the LSI based cards. The 3008 and 3108 both work great, regardless of vendor. Just test or read up on the different configuration parameters (read ahead, write back vs write through, disk cache)Do you have any suggestion for fine tuning this controller? I’m referring to parameters like nr_requests, queue_depth, etc.Also, any way to optimize the various mdadm parameters available at /sys/block/mdX/ ? I disabled the internal bitmap and write performance improved.Thank you Pietro Pugni",
"msg_date": "Thu, 2 Mar 2017 22:51:56 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
},
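A note on relating these fio figures to the database itself: PostgreSQL can report how much time a query actually spends waiting on storage, which makes it easier to tell whether the controller and array changes above translate into faster queries. A minimal sketch, assuming superuser access (track_io_timing is a superuser setting) and a hypothetical table big_table with an index on id:

  SET track_io_timing = on;                    -- or enable it in postgresql.conf
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table WHERE id BETWEEN 100000 AND 200000;
  -- with track_io_timing on, plan nodes include I/O Timings: read=... so the
  -- latency fio measures can be compared against what the query actually pays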
{
"msg_contents": "On Thu, Mar 2, 2017 at 3:51 PM, Pietro Pugni <[email protected]> wrote:\n> The HBA provided slightly better performance without removing the expander\n> and even more slightly faster after removing the expander, but then I tried\n> increasing numjob from 1 to 16 (tried also 12, 18, 20, 24 and 32 but found\n> 16 to get higher iops) and the benchmarks returned expected results. I guess\n> how this relates with Postgres.. probably effective_io_concurrency, as\n> suggested by Merlin Moncure, should be the counterpart of numjob in fio?\n\nKind of. effective_io_concurrency allows the database to send >1\nfilesystem commands to the hardware from a single process. Sadly,\nonly certain classes of query can currently leverage this factility --\nas you can see, it's a huge optimization.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Mar 2017 09:03:04 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions for a HBA controller (6 x SSDs + madam RAID10)"
}
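As a rough PostgreSQL-side counterpart to the fio queue-depth experiments above, effective_io_concurrency tells bitmap heap scans how many blocks to prefetch (via posix_fadvise) ahead of processing. A sketch, assuming a six-drive array and a hypothetical table events indexed on created_at:

  SET effective_io_concurrency = 6;            -- session level; can also be set per tablespace
  EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM events WHERE created_at >= '2017-01-01';
  -- only a plan containing a Bitmap Heap Scan node issues these prefetch
  -- requests, which is why just certain classes of query benefit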
] |
[
{
"msg_contents": "Hi guys\n\nI'm a bit stuck on a query that performs fantastically up to a certain\nlimit value, after which the planner goes off in a completely different\ndirection and performance gets dramatically worse. Am using Postgresql 9.3\n\nYou can see all the relevant schemas at http://pastebin.com/PNEqw2id and in\nthe test database there are 1,000,000 records in contacts_contact, and\nabout half of those will match the subquery on values_value.\n\nThe query in question is:\n\nSELECT \"contacts_contact\".* FROM \"contacts_contact\"\nINNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" =\n\"contacts_contactgroup_contacts\".\"contact_id\")\nWHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n AND \"contacts_contact\".\"id\" IN (\n SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n(U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n )\n) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n\nWith that limit of 222, it performs like:\n\nLimit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358\nrows=222 loops=1)\n Buffers: shared hit=708 read=63\n -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\ntime=0.120..3.304 rows=222 loops=1)\n Buffers: shared hit=708 read=63\n -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n(actual time=0.103..1.968 rows=227 loops=1)\n Merge Cond: (contacts_contact.id = u0.contact_id)\n Buffers: shared hit=24 read=63\n -> Index Scan Backward using contacts_contact_pkey on\ncontacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\ntime=0.008..0.502 rows=1117 loops=1)\n Buffers: shared hit=22 read=2\n -> Index Scan using values_value_field_string_value_contact\non values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\ntime=0.086..0.857 rows=227 loops=1)\n Index Cond: ((contact_field_id = 1) AND\n(upper(string_value) = 'F'::text))\n Buffers: shared hit=2 read=61\n -> Index Only Scan using\ncontacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on\ncontacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual\ntime=0.005..0.005 rows=1 loops=227)\n Index Cond: ((contactgroup_id = 1) AND (contact_id =\ncontacts_contact.id))\n Heap Fetches: 0\n Buffers: shared hit=684\nTotal runtime: 3.488 ms\n\nhttps://explain.depesz.com/s/iPPJ\n\nBut if increase the limit to 223 then it performs like:\n\nLimit (cost=8785.68..13306.24 rows=223 width=88) (actual\ntime=2685.830..2686.534 rows=223 loops=1)\n Buffers: shared hit=767648 read=86530\n -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\ntime=2685.828..2686.461 rows=223 loops=1)\n Merge Cond: (contacts_contact.id =\ncontacts_contactgroup_contacts.contact_id)\n Buffers: shared hit=767648 read=86530\n -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual\ntime=2685.742..2685.804 rows=228 loops=1)\n Sort Key: contacts_contact.id\n Sort Method: quicksort Memory: 34327kB\n Buffers: shared hit=767648 read=86524\n -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92)\n(actual time=646.573..2417.291 rows=200412 loops=1)\n Buffers: shared hit=767648 read=86524\n -> HashAggregate (cost=6810.70..6813.14 rows=244\nwidth=4) (actual time=646.532..766.200 rows=200412 loops=1)\n Buffers: shared read=51417\n -> Bitmap Heap Scan on values_value u0\n (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\nrows=200412 loops=1)\n Recheck Cond: ((contact_field_id = 1) AND\n(upper(string_value) = 'F'::text))\n Buffers: shared read=51417\n -> Bitmap Index 
Scan on\nvalues_value_field_string_value_contact (cost=0.00..60.47 rows=2004\nwidth=0) (actual time=70.647..70.647 rows=200412 loops=1)\n Index Cond: ((contact_field_id = 1)\nAND (upper(string_value) = 'F'::text))\n Buffers: shared read=770\n -> Index Scan using contacts_contact_pkey on\ncontacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\ntime=0.007..0.007 rows=1 loops=200412)\n Index Cond: (id = u0.contact_id)\n Buffers: shared hit=767648 read=35107\n -> Index Only Scan Backward using\ncontacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on\ncontacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4)\n(actual time=0.073..0.273 rows=550 loops=1)\n Index Cond: (contactgroup_id = 1)\n Heap Fetches: 0\n Buffers: shared read=6\nTotal runtime: 2695.301 ms\n\nhttps://explain.depesz.com/s/gXS\n\nI've tried running ANALYZE but that actually reduced the limit at which\nthings get worse. Any insight into the reasoning of the query planner would\nbe much appreciated.\n\nThanks\n\n-- \n*Rowan Seymour* | +260 964153686 | @rowanseymour\n\nHi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 
width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1) Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 15:11:38 +0200",
"msg_from": "Rowan Seymour <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance changes significantly depending on limit value"
},
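One way to see how far off the planner is here, independent of the LIMIT, is to run the IN-subquery on its own and compare estimated against actual rows (the plans above show rows=2004 estimated versus roughly 200,412 returned). A sketch using the tables from the query above:

  EXPLAIN (ANALYZE, BUFFERS)
    SELECT u0.contact_id
    FROM values_value u0
    WHERE u0.contact_field_id = 1
      AND UPPER(u0.string_value::text) = UPPER('F');
  -- the two-orders-of-magnitude gap on this predicate is what tips the planner
  -- into the slower plan once the LIMIT grows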
{
"msg_contents": "2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:\n\n> Hi guys\n>\n> I'm a bit stuck on a query that performs fantastically up to a certain\n> limit value, after which the planner goes off in a completely different\n> direction and performance gets dramatically worse. Am using Postgresql 9.3\n>\n> You can see all the relevant schemas at http://pastebin.com/PNEqw2id and\n> in the test database there are 1,000,000 records in contacts_contact, and\n> about half of those will match the subquery on values_value.\n>\n> The query in question is:\n>\n> SELECT \"contacts_contact\".* FROM \"contacts_contact\"\n> INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" =\n> \"contacts_contactgroup_contacts\".\"contact_id\")\n> WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n> AND \"contacts_contact\".\"id\" IN (\n> SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n> (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n> )\n> ) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n>\n> With that limit of 222, it performs like:\n>\n> Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358\n> rows=222 loops=1)\n> Buffers: shared hit=708 read=63\n> -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\n> time=0.120..3.304 rows=222 loops=1)\n> Buffers: shared hit=708 read=63\n> -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n> (actual time=0.103..1.968 rows=227 loops=1)\n> Merge Cond: (contacts_contact.id = u0.contact_id)\n> Buffers: shared hit=24 read=63\n> -> Index Scan Backward using contacts_contact_pkey on\n> contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\n> time=0.008..0.502 rows=1117 loops=1)\n> Buffers: shared hit=22 read=2\n> -> Index Scan using values_value_field_string_value_contact\n> on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\n> time=0.086..0.857 rows=227 loops=1)\n> Index Cond: ((contact_field_id = 1) AND\n> (upper(string_value) = 'F'::text))\n> Buffers: shared hit=2 read=61\n> -> Index Only Scan using contacts_contactgroup_\n> contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts\n> (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1\n> loops=227)\n> Index Cond: ((contactgroup_id = 1) AND (contact_id =\n> contacts_contact.id))\n> Heap Fetches: 0\n> Buffers: shared hit=684\n> Total runtime: 3.488 ms\n>\n> https://explain.depesz.com/s/iPPJ\n>\n> But if increase the limit to 223 then it performs like:\n>\n> Limit (cost=8785.68..13306.24 rows=223 width=88) (actual\n> time=2685.830..2686.534 rows=223 loops=1)\n> Buffers: shared hit=767648 read=86530\n> -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\n> time=2685.828..2686.461 rows=223 loops=1)\n> Merge Cond: (contacts_contact.id = contacts_contactgroup_\n> contacts.contact_id)\n> Buffers: shared hit=767648 read=86530\n> -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual\n> time=2685.742..2685.804 rows=228 loops=1)\n> Sort Key: contacts_contact.id\n> Sort Method: quicksort Memory: 34327kB\n> Buffers: shared hit=767648 read=86524\n> -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92)\n> (actual time=646.573..2417.291 rows=200412 loops=1)\n>\n\nThere is pretty bad estimation probably due dependency between\ncontact_field_id = 1 and upper(string_value) = 'F'::text\n\nThe most simple solution is disable nested loop - set enable_nestloop to off\n\nRegards\n\nPavel\n\n\n> Buffers: shared hit=767648 
read=86524\n> -> HashAggregate (cost=6810.70..6813.14 rows=244\n> width=4) (actual time=646.532..766.200 rows=200412 loops=1)\n> Buffers: shared read=51417\n> -> Bitmap Heap Scan on values_value u0\n> (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\n> rows=200412 loops=1)\n> Recheck Cond: ((contact_field_id = 1) AND\n> (upper(string_value) = 'F'::text))\n> Buffers: shared read=51417\n> -> Bitmap Index Scan on\n> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n> width=0) (actual time=70.647..70.647 rows=200412 loops=1)\n> Index Cond: ((contact_field_id = 1)\n> AND (upper(string_value) = 'F'::text))\n> Buffers: shared read=770\n> -> Index Scan using contacts_contact_pkey on\n> contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\n> time=0.007..0.007 rows=1 loops=200412)\n> Index Cond: (id = u0.contact_id)\n> Buffers: shared hit=767648 read=35107\n> -> Index Only Scan Backward using contacts_contactgroup_\n> contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts\n> (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273\n> rows=550 loops=1)\n> Index Cond: (contactgroup_id = 1)\n> Heap Fetches: 0\n> Buffers: shared read=6\n> Total runtime: 2695.301 ms\n>\n> https://explain.depesz.com/s/gXS\n>\n> I've tried running ANALYZE but that actually reduced the limit at which\n> things get worse. Any insight into the reasoning of the query planner would\n> be much appreciated.\n>\n> Thanks\n>\n> --\n> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>\n\n2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:Hi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. 
Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1)There is pretty bad estimation probably due dependency between contact_field_id = 1 and upper(string_value) = 'F'::textThe most simple solution is disable nested loop - set enable_nestloop to offRegardsPavel Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: 
((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 14:32:16 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance changes significantly depending on\n limit value"
},
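If disabling nested loops does help, it can be limited to this one statement so other queries keep their usual plans; SET LOCAL reverts automatically at the end of the transaction. A sketch wrapping the original query:

  BEGIN;
  SET LOCAL enable_nestloop = off;
  SELECT contacts_contact.*
  FROM contacts_contact
  JOIN contacts_contactgroup_contacts
    ON contacts_contact.id = contacts_contactgroup_contacts.contact_id
  WHERE contacts_contactgroup_contacts.contactgroup_id = 1
    AND contacts_contact.id IN (
          SELECT u0.contact_id
          FROM values_value u0
          WHERE u0.contact_field_id = 1
            AND UPPER(u0.string_value::text) = UPPER('F'))
  ORDER BY contacts_contact.id DESC
  LIMIT 223;
  COMMIT;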
{
"msg_contents": "Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan\nthen things get slow again. This is now what happens at LIMIT 695:\n\nLimit (cost=35945.78..50034.52 rows=695 width=88) (actual\ntime=12852.580..12854.382 rows=695 loops=1)\n Buffers: shared hit=6 read=66689\n -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual\ntime=12852.577..12854.271 rows=695 loops=1)\n Merge Cond: (contacts_contact.id =\ncontacts_contactgroup_contacts.contact_id)\n Buffers: shared hit=6 read=66689\n -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual\ntime=12852.486..12852.577 rows=710 loops=1)\n Sort Key: contacts_contact.id\n Sort Method: quicksort Memory: 34327kB\n Buffers: shared hit=6 read=66677\n -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92)\n(actual time=721.293..12591.204 rows=200412 loops=1)\n Hash Cond: (contacts_contact.id = u0.contact_id)\n Buffers: shared hit=6 read=66677\n -> Seq Scan on contacts_contact (cost=0.00..25266.00\nrows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1)\n Buffers: shared hit=1 read=15265\n -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n(actual time=714.373..714.373 rows=200412 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n Buffers: shared hit=5 read=51412\n -> HashAggregate (cost=6810.70..6813.14\nrows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n Buffers: shared hit=5 read=51412\n -> Bitmap Heap Scan on values_value u0\n (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\nrows=200412 loops=1)\n Recheck Cond: ((contact_field_id = 1)\nAND (upper(string_value) = 'F'::text))\n Buffers: shared hit=5 read=51412\n -> Bitmap Index Scan on\nvalues_value_field_string_value_contact (cost=0.00..60.47 rows=2004\nwidth=0) (actual time=57.642..57.642 rows=200412 loops=1)\n Index Cond: ((contact_field_id\n= 1) AND (upper(string_value) = 'F'::text))\n Buffers: shared hit=5 read=765\n -> Index Only Scan Backward using\ncontacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on\ncontacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4)\n(actual time=0.080..0.651 rows=1707 loops=1)\n Index Cond: (contactgroup_id = 1)\n Heap Fetches: 0\n Buffers: shared read=12\nTotal runtime: 12863.938 ms\n\nhttps://explain.depesz.com/s/nfw1\n\nCan you explain a bit more about what you mean about \" dependency between\ncontact_field_id = 1 and upper(string_value) = 'F'::text\"?\n\nBtw I created the index values_value_field_string_value_contact as\n\nCREATE INDEX values_value_field_string_value_contact\nON values_value(contact_field_id, UPPER(string_value), contact_id DESC)\nWHERE contact_field_id IS NOT NULL;\n\nI'm not sure why it needs the contact_id column but without it the planner\npicks a slow approach for even smaller LIMIT values.\n\n\nOn 23 February 2017 at 15:32, Pavel Stehule <[email protected]> wrote:\n\n>\n>\n> 2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:\n>\n>> Hi guys\n>>\n>> I'm a bit stuck on a query that performs fantastically up to a certain\n>> limit value, after which the planner goes off in a completely different\n>> direction and performance gets dramatically worse. 
Am using Postgresql 9.3\n>>\n>> You can see all the relevant schemas at http://pastebin.com/PNEqw2id and\n>> in the test database there are 1,000,000 records in contacts_contact, and\n>> about half of those will match the subquery on values_value.\n>>\n>> The query in question is:\n>>\n>> SELECT \"contacts_contact\".* FROM \"contacts_contact\"\n>> INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\"\n>> = \"contacts_contactgroup_contacts\".\"contact_id\")\n>> WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n>> AND \"contacts_contact\".\"id\" IN (\n>> SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n>> (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n>> )\n>> ) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n>>\n>> With that limit of 222, it performs like:\n>>\n>> Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358\n>> rows=222 loops=1)\n>> Buffers: shared hit=708 read=63\n>> -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\n>> time=0.120..3.304 rows=222 loops=1)\n>> Buffers: shared hit=708 read=63\n>> -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n>> (actual time=0.103..1.968 rows=227 loops=1)\n>> Merge Cond: (contacts_contact.id = u0.contact_id)\n>> Buffers: shared hit=24 read=63\n>> -> Index Scan Backward using contacts_contact_pkey on\n>> contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\n>> time=0.008..0.502 rows=1117 loops=1)\n>> Buffers: shared hit=22 read=2\n>> -> Index Scan using values_value_field_string_value_contact\n>> on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\n>> time=0.086..0.857 rows=227 loops=1)\n>> Index Cond: ((contact_field_id = 1) AND\n>> (upper(string_value) = 'F'::text))\n>> Buffers: shared hit=2 read=61\n>> -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>> on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual\n>> time=0.005..0.005 rows=1 loops=227)\n>> Index Cond: ((contactgroup_id = 1) AND (contact_id =\n>> contacts_contact.id))\n>> Heap Fetches: 0\n>> Buffers: shared hit=684\n>> Total runtime: 3.488 ms\n>>\n>> https://explain.depesz.com/s/iPPJ\n>>\n>> But if increase the limit to 223 then it performs like:\n>>\n>> Limit (cost=8785.68..13306.24 rows=223 width=88) (actual\n>> time=2685.830..2686.534 rows=223 loops=1)\n>> Buffers: shared hit=767648 read=86530\n>> -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\n>> time=2685.828..2686.461 rows=223 loops=1)\n>> Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts\n>> .contact_id)\n>> Buffers: shared hit=767648 read=86530\n>> -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual\n>> time=2685.742..2685.804 rows=228 loops=1)\n>> Sort Key: contacts_contact.id\n>> Sort Method: quicksort Memory: 34327kB\n>> Buffers: shared hit=767648 read=86524\n>> -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92)\n>> (actual time=646.573..2417.291 rows=200412 loops=1)\n>>\n>\n> There is pretty bad estimation probably due dependency between\n> contact_field_id = 1 and upper(string_value) = 'F'::text\n>\n> The most simple solution is disable nested loop - set enable_nestloop to\n> off\n>\n> Regards\n>\n> Pavel\n>\n>\n>> Buffers: shared hit=767648 read=86524\n>> -> HashAggregate (cost=6810.70..6813.14 rows=244\n>> width=4) (actual time=646.532..766.200 rows=200412 loops=1)\n>> Buffers: shared read=51417\n>> -> Bitmap Heap Scan on values_value u0\n>> 
(cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\n>> rows=200412 loops=1)\n>> Recheck Cond: ((contact_field_id = 1) AND\n>> (upper(string_value) = 'F'::text))\n>> Buffers: shared read=51417\n>> -> Bitmap Index Scan on\n>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>> width=0) (actual time=70.647..70.647 rows=200412 loops=1)\n>> Index Cond: ((contact_field_id = 1)\n>> AND (upper(string_value) = 'F'::text))\n>> Buffers: shared read=770\n>> -> Index Scan using contacts_contact_pkey on\n>> contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\n>> time=0.007..0.007 rows=1 loops=200412)\n>> Index Cond: (id = u0.contact_id)\n>> Buffers: shared hit=767648 read=35107\n>> -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>> on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992\n>> width=4) (actual time=0.073..0.273 rows=550 loops=1)\n>> Index Cond: (contactgroup_id = 1)\n>> Heap Fetches: 0\n>> Buffers: shared read=6\n>> Total runtime: 2695.301 ms\n>>\n>> https://explain.depesz.com/s/gXS\n>>\n>> I've tried running ANALYZE but that actually reduced the limit at which\n>> things get worse. Any insight into the reasoning of the query planner would\n>> be much appreciated.\n>>\n>> Thanks\n>>\n>> --\n>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>\n>\n>\n\n\n-- \n*Rowan Seymour* | +260 964153686 | @rowanseymour\n\nHi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan then things get slow again. This is now what happens at LIMIT 695:Limit (cost=35945.78..50034.52 rows=695 width=88) (actual time=12852.580..12854.382 rows=695 loops=1) Buffers: shared hit=6 read=66689 -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual time=12852.577..12854.271 rows=695 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=6 read=66689 -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual time=12852.486..12852.577 rows=710 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=6 read=66677 -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92) (actual time=721.293..12591.204 rows=200412 loops=1) Hash Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=6 read=66677 -> Seq Scan on contacts_contact (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1) Buffers: shared hit=1 read=15265 -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=57.642..57.642 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=765 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.080..0.651 rows=1707 loops=1) Index 
Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=12Total runtime: 12863.938 mshttps://explain.depesz.com/s/nfw1Can you explain a bit more about what you mean about \" dependency between contact_field_id = 1 and upper(string_value) = 'F'::text\"?Btw I created the index values_value_field_string_value_contact asCREATE INDEX values_value_field_string_value_contactON values_value(contact_field_id, UPPER(string_value), contact_id DESC)WHERE contact_field_id IS NOT NULL;I'm not sure why it needs the contact_id column but without it the planner picks a slow approach for even smaller LIMIT values.On 23 February 2017 at 15:32, Pavel Stehule <[email protected]> wrote:2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:Hi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort 
Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1)There is pretty bad estimation probably due dependency between contact_field_id = 1 and upper(string_value) = 'F'::textThe most simple solution is disable nested loop - set enable_nestloop to offRegardsPavel Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 16:02:27 +0200",
"msg_from": "Rowan Seymour <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance changes significantly depending on\n limit value"
},
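Since the misestimate is on the upper(string_value) expression, a larger statistics sample for values_value is also worth trying before anything more drastic; on 9.3 the statistics kept for an expression index are rebuilt by ANALYZE using whatever default target is in effect. A sketch, with the target of 1000 chosen arbitrarily:

  SET default_statistics_target = 1000;        -- affects ANALYZE run in this session
  ANALYZE values_value;
  -- or raise it permanently for the underlying columns:
  ALTER TABLE values_value ALTER COLUMN contact_field_id SET STATISTICS 1000;
  ALTER TABLE values_value ALTER COLUMN string_value SET STATISTICS 1000;
  ANALYZE values_value;
  -- this can sharpen each clause's estimate but still cannot model the
  -- correlation between contact_field_id and upper(string_value) on 9.3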
{
"msg_contents": "2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:\n\n> Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan\n> then things get slow again. This is now what happens at LIMIT 695:\n>\n> Limit (cost=35945.78..50034.52 rows=695 width=88) (actual\n> time=12852.580..12854.382 rows=695 loops=1)\n> Buffers: shared hit=6 read=66689\n> -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual\n> time=12852.577..12854.271 rows=695 loops=1)\n> Merge Cond: (contacts_contact.id = contacts_contactgroup_\n> contacts.contact_id)\n> Buffers: shared hit=6 read=66689\n> -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual\n> time=12852.486..12852.577 rows=710 loops=1)\n> Sort Key: contacts_contact.id\n> Sort Method: quicksort Memory: 34327kB\n> Buffers: shared hit=6 read=66677\n> -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92)\n> (actual time=721.293..12591.204 rows=200412 loops=1)\n> Hash Cond: (contacts_contact.id = u0.contact_id)\n> Buffers: shared hit=6 read=66677\n> -> Seq Scan on contacts_contact (cost=0.00..25266.00\n> rows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1)\n> Buffers: shared hit=1 read=15265\n> -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n> (actual time=714.373..714.373 rows=200412 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n> Buffers: shared hit=5 read=51412\n> -> HashAggregate (cost=6810.70..6813.14\n> rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n> Buffers: shared hit=5 read=51412\n> -> Bitmap Heap Scan on values_value u0\n> (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n> rows=200412 loops=1)\n> Recheck Cond: ((contact_field_id =\n> 1) AND (upper(string_value) = 'F'::text))\n> Buffers: shared hit=5 read=51412\n> -> Bitmap Index Scan on\n> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n> width=0) (actual time=57.642..57.642 rows=200412 loops=1)\n> Index Cond: ((contact_field_id\n> = 1) AND (upper(string_value) = 'F'::text))\n> Buffers: shared hit=5 read=765\n> -> Index Only Scan Backward using contacts_contactgroup_\n> contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts\n> (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.080..0.651\n> rows=1707 loops=1)\n> Index Cond: (contactgroup_id = 1)\n> Heap Fetches: 0\n> Buffers: shared read=12\n> Total runtime: 12863.938 ms\n>\n> https://explain.depesz.com/s/nfw1\n>\n> Can you explain a bit more about what you mean about \" dependency between\n> contact_field_id = 1 and upper(string_value) = 'F'::text\"?\n>\n\nlook to related node in plan\n\n\n -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n(actual time=714.373..714.373 rows=200412 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n Buffers: shared hit=5 read=51412\n -> HashAggregate (cost=6810.70..6813.14\nrows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n Buffers: shared hit=5 read=51412\n -> Bitmap Heap Scan on values_value u0\n (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\nrows=200412 loops=1)\n Recheck Cond: ((contact_field_id = 1)\nAND (upper(string_value) = 'F'::text))\n Buffers: shared hit=5 read=51412\n\nThere is lot of significant differences between estimation (2004) and\nreality (200412) - two orders - so the plan must be suboptimal\n\nI am looking to your schema - and it is variant on EAV table - this is\nantippatern and for more then small returned rows it should be slow.\n\nRegards\n\nPavel\n\n\n\n> Btw I created the 
index values_value_field_string_value_contact as\n>\n> CREATE INDEX values_value_field_string_value_contact\n> ON values_value(contact_field_id, UPPER(string_value), contact_id DESC)\n> WHERE contact_field_id IS NOT NULL;\n>\n> I'm not sure why it needs the contact_id column but without it the planner\n> picks a slow approach for even smaller LIMIT values.\n>\n>\n> On 23 February 2017 at 15:32, Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> 2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:\n>>\n>>> Hi guys\n>>>\n>>> I'm a bit stuck on a query that performs fantastically up to a certain\n>>> limit value, after which the planner goes off in a completely different\n>>> direction and performance gets dramatically worse. Am using Postgresql 9.3\n>>>\n>>> You can see all the relevant schemas at http://pastebin.com/PNEqw2id\n>>> and in the test database there are 1,000,000 records in contacts_contact,\n>>> and about half of those will match the subquery on values_value.\n>>>\n>>> The query in question is:\n>>>\n>>> SELECT \"contacts_contact\".* FROM \"contacts_contact\"\n>>> INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\"\n>>> = \"contacts_contactgroup_contacts\".\"contact_id\")\n>>> WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n>>> AND \"contacts_contact\".\"id\" IN (\n>>> SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n>>> (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n>>> )\n>>> ) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n>>>\n>>> With that limit of 222, it performs like:\n>>>\n>>> Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358\n>>> rows=222 loops=1)\n>>> Buffers: shared hit=708 read=63\n>>> -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\n>>> time=0.120..3.304 rows=222 loops=1)\n>>> Buffers: shared hit=708 read=63\n>>> -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n>>> (actual time=0.103..1.968 rows=227 loops=1)\n>>> Merge Cond: (contacts_contact.id = u0.contact_id)\n>>> Buffers: shared hit=24 read=63\n>>> -> Index Scan Backward using contacts_contact_pkey on\n>>> contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\n>>> time=0.008..0.502 rows=1117 loops=1)\n>>> Buffers: shared hit=22 read=2\n>>> -> Index Scan using values_value_field_string_value_contact\n>>> on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\n>>> time=0.086..0.857 rows=227 loops=1)\n>>> Index Cond: ((contact_field_id = 1) AND\n>>> (upper(string_value) = 'F'::text))\n>>> Buffers: shared hit=2 read=61\n>>> -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>>> on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual\n>>> time=0.005..0.005 rows=1 loops=227)\n>>> Index Cond: ((contactgroup_id = 1) AND (contact_id =\n>>> contacts_contact.id))\n>>> Heap Fetches: 0\n>>> Buffers: shared hit=684\n>>> Total runtime: 3.488 ms\n>>>\n>>> https://explain.depesz.com/s/iPPJ\n>>>\n>>> But if increase the limit to 223 then it performs like:\n>>>\n>>> Limit (cost=8785.68..13306.24 rows=223 width=88) (actual\n>>> time=2685.830..2686.534 rows=223 loops=1)\n>>> Buffers: shared hit=767648 read=86530\n>>> -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\n>>> time=2685.828..2686.461 rows=223 loops=1)\n>>> Merge Cond: (contacts_contact.id =\n>>> contacts_contactgroup_contacts.contact_id)\n>>> Buffers: shared hit=767648 read=86530\n>>> -> Sort (cost=8784.44..8789.45 
rows=2004 width=92) (actual\n>>> time=2685.742..2685.804 rows=228 loops=1)\n>>> Sort Key: contacts_contact.id\n>>> Sort Method: quicksort Memory: 34327kB\n>>> Buffers: shared hit=767648 read=86524\n>>> -> Nested Loop (cost=6811.12..8674.53 rows=2004\n>>> width=92) (actual time=646.573..2417.291 rows=200412 loops=1)\n>>>\n>>\n>> There is pretty bad estimation probably due dependency between\n>> contact_field_id = 1 and upper(string_value) = 'F'::text\n>>\n>> The most simple solution is disable nested loop - set enable_nestloop to\n>> off\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>> Buffers: shared hit=767648 read=86524\n>>> -> HashAggregate (cost=6810.70..6813.14 rows=244\n>>> width=4) (actual time=646.532..766.200 rows=200412 loops=1)\n>>> Buffers: shared read=51417\n>>> -> Bitmap Heap Scan on values_value u0\n>>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\n>>> rows=200412 loops=1)\n>>> Recheck Cond: ((contact_field_id = 1)\n>>> AND (upper(string_value) = 'F'::text))\n>>> Buffers: shared read=51417\n>>> -> Bitmap Index Scan on\n>>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>>> width=0) (actual time=70.647..70.647 rows=200412 loops=1)\n>>> Index Cond: ((contact_field_id =\n>>> 1) AND (upper(string_value) = 'F'::text))\n>>> Buffers: shared read=770\n>>> -> Index Scan using contacts_contact_pkey on\n>>> contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\n>>> time=0.007..0.007 rows=1 loops=200412)\n>>> Index Cond: (id = u0.contact_id)\n>>> Buffers: shared hit=767648 read=35107\n>>> -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>>> on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992\n>>> width=4) (actual time=0.073..0.273 rows=550 loops=1)\n>>> Index Cond: (contactgroup_id = 1)\n>>> Heap Fetches: 0\n>>> Buffers: shared read=6\n>>> Total runtime: 2695.301 ms\n>>>\n>>> https://explain.depesz.com/s/gXS\n>>>\n>>> I've tried running ANALYZE but that actually reduced the limit at which\n>>> things get worse. Any insight into the reasoning of the query planner would\n>>> be much appreciated.\n>>>\n>>> Thanks\n>>>\n>>> --\n>>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>>\n>>\n>>\n>\n>\n> --\n> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>\n\n2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan then things get slow again. 
This is now what happens at LIMIT 695:Limit (cost=35945.78..50034.52 rows=695 width=88) (actual time=12852.580..12854.382 rows=695 loops=1) Buffers: shared hit=6 read=66689 -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual time=12852.577..12854.271 rows=695 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=6 read=66689 -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual time=12852.486..12852.577 rows=710 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=6 read=66677 -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92) (actual time=721.293..12591.204 rows=200412 loops=1) Hash Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=6 read=66677 -> Seq Scan on contacts_contact (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1) Buffers: shared hit=1 read=15265 -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=57.642..57.642 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=765 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.080..0.651 rows=1707 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=12Total runtime: 12863.938 mshttps://explain.depesz.com/s/nfw1Can you explain a bit more about what you mean about \" dependency between contact_field_id = 1 and upper(string_value) = 'F'::text\"?look to related node in plan -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 \n -> HashAggregate (cost=6810.70..6813.14 \nrows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 \n -> Bitmap Heap Scan on values_value \nu0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412There is lot of significant differences between estimation (2004) and reality (200412) - two orders - so the plan must be suboptimal I am looking to your schema - and it is variant on EAV table - this is antippatern and for more then small returned rows it should be slow.RegardsPavelBtw I created the index values_value_field_string_value_contact asCREATE INDEX values_value_field_string_value_contactON values_value(contact_field_id, UPPER(string_value), contact_id DESC)WHERE contact_field_id IS NOT NULL;I'm not sure why it needs the contact_id column but without it the planner picks a slow approach for even smaller LIMIT values.On 23 February 2017 at 15:32, Pavel Stehule <[email protected]> 
wrote:2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:Hi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1)There is pretty bad estimation probably due dependency between contact_field_id = 1 and upper(string_value) = 'F'::textThe most simple solution is disable nested loop - set enable_nestloop to offRegardsPavel Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) 
Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 16:35:53 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance changes significantly depending on\n limit value"
},
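On the EAV point, one commonly used alternative for user-defined attributes is a single jsonb column with a GIN index, available from PostgreSQL 9.4 onward (so not an option on 9.3 without upgrading). A very rough sketch; the column name fields and the key gender are invented for illustration, and jsonb containment is case-sensitive, unlike the UPPER() comparison above:

  ALTER TABLE contacts_contact ADD COLUMN fields jsonb;
  CREATE INDEX contacts_contact_fields_gin
      ON contacts_contact USING gin (fields jsonb_path_ops);
  -- containment queries can then use the GIN index directly:
  SELECT *
  FROM contacts_contact
  WHERE fields @> '{"gender": "F"}'
  ORDER BY id DESC
  LIMIT 223;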
{
"msg_contents": "Not sure what other options we have other than an EAV approach since we\nallow users to define their own attribute types (attribute type is in\ncontacts_contactfield, attribute value is in values_value). Would you\nexpect modelling that with a JSON column to perform better?\n\nThanks for the tips!\n\nOn 23 February 2017 at 17:35, Pavel Stehule <[email protected]> wrote:\n\n>\n>\n> 2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:\n>\n>> Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan\n>> then things get slow again. This is now what happens at LIMIT 695:\n>>\n>> Limit (cost=35945.78..50034.52 rows=695 width=88) (actual\n>> time=12852.580..12854.382 rows=695 loops=1)\n>> Buffers: shared hit=6 read=66689\n>> -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual\n>> time=12852.577..12854.271 rows=695 loops=1)\n>> Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts\n>> .contact_id)\n>> Buffers: shared hit=6 read=66689\n>> -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual\n>> time=12852.486..12852.577 rows=710 loops=1)\n>> Sort Key: contacts_contact.id\n>> Sort Method: quicksort Memory: 34327kB\n>> Buffers: shared hit=6 read=66677\n>> -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92)\n>> (actual time=721.293..12591.204 rows=200412 loops=1)\n>> Hash Cond: (contacts_contact.id = u0.contact_id)\n>> Buffers: shared hit=6 read=66677\n>> -> Seq Scan on contacts_contact\n>> (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258\n>> rows=1000000 loops=1)\n>> Buffers: shared hit=1 read=15265\n>> -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n>> (actual time=714.373..714.373 rows=200412 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n>> Buffers: shared hit=5 read=51412\n>> -> HashAggregate (cost=6810.70..6813.14\n>> rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n>> Buffers: shared hit=5 read=51412\n>> -> Bitmap Heap Scan on values_value u0\n>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n>> rows=200412 loops=1)\n>> Recheck Cond: ((contact_field_id =\n>> 1) AND (upper(string_value) = 'F'::text))\n>> Buffers: shared hit=5 read=51412\n>> -> Bitmap Index Scan on\n>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>> width=0) (actual time=57.642..57.642 rows=200412 loops=1)\n>> Index Cond:\n>> ((contact_field_id = 1) AND (upper(string_value) = 'F'::text))\n>> Buffers: shared hit=5 read=765\n>> -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>> on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992\n>> width=4) (actual time=0.080..0.651 rows=1707 loops=1)\n>> Index Cond: (contactgroup_id = 1)\n>> Heap Fetches: 0\n>> Buffers: shared read=12\n>> Total runtime: 12863.938 ms\n>>\n>> https://explain.depesz.com/s/nfw1\n>>\n>> Can you explain a bit more about what you mean about \" dependency\n>> between contact_field_id = 1 and upper(string_value) = 'F'::text\"?\n>>\n>\n> look to related node in plan\n>\n>\n> -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n> (actual time=714.373..714.373 rows=200412 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n> Buffers: shared hit=5 read=51412\n> -> HashAggregate (cost=6810.70..6813.14\n> rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n> Buffers: shared hit=5 read=51412\n> -> Bitmap Heap Scan on values_value u0\n> (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n> 
rows=200412 loops=1)\n> Recheck Cond: ((contact_field_id =\n> 1) AND (upper(string_value) = 'F'::text))\n> Buffers: shared hit=5 read=51412\n>\n> There is lot of significant differences between estimation (2004) and\n> reality (200412) - two orders - so the plan must be suboptimal\n>\n> I am looking to your schema - and it is variant on EAV table - this is\n> antippatern and for more then small returned rows it should be slow.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>> Btw I created the index values_value_field_string_value_contact as\n>>\n>> CREATE INDEX values_value_field_string_value_contact\n>> ON values_value(contact_field_id, UPPER(string_value), contact_id DESC)\n>> WHERE contact_field_id IS NOT NULL;\n>>\n>> I'm not sure why it needs the contact_id column but without it the\n>> planner picks a slow approach for even smaller LIMIT values.\n>>\n>>\n>> On 23 February 2017 at 15:32, Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>>\n>>>\n>>> 2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:\n>>>\n>>>> Hi guys\n>>>>\n>>>> I'm a bit stuck on a query that performs fantastically up to a certain\n>>>> limit value, after which the planner goes off in a completely different\n>>>> direction and performance gets dramatically worse. Am using Postgresql 9.3\n>>>>\n>>>> You can see all the relevant schemas at http://pastebin.com/PNEqw2id\n>>>> and in the test database there are 1,000,000 records in contacts_contact,\n>>>> and about half of those will match the subquery on values_value.\n>>>>\n>>>> The query in question is:\n>>>>\n>>>> SELECT \"contacts_contact\".* FROM \"contacts_contact\"\n>>>> INNER JOIN \"contacts_contactgroup_contacts\" ON\n>>>> (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contact\n>>>> s\".\"contact_id\")\n>>>> WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n>>>> AND \"contacts_contact\".\"id\" IN (\n>>>> SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n>>>> (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n>>>> )\n>>>> ) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n>>>>\n>>>> With that limit of 222, it performs like:\n>>>>\n>>>> Limit (cost=3.09..13256.36 rows=222 width=88) (actual\n>>>> time=0.122..3.358 rows=222 loops=1)\n>>>> Buffers: shared hit=708 read=63\n>>>> -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\n>>>> time=0.120..3.304 rows=222 loops=1)\n>>>> Buffers: shared hit=708 read=63\n>>>> -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n>>>> (actual time=0.103..1.968 rows=227 loops=1)\n>>>> Merge Cond: (contacts_contact.id = u0.contact_id)\n>>>> Buffers: shared hit=24 read=63\n>>>> -> Index Scan Backward using contacts_contact_pkey on\n>>>> contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\n>>>> time=0.008..0.502 rows=1117 loops=1)\n>>>> Buffers: shared hit=22 read=2\n>>>> -> Index Scan using values_value_field_string_value_contact\n>>>> on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\n>>>> time=0.086..0.857 rows=227 loops=1)\n>>>> Index Cond: ((contact_field_id = 1) AND\n>>>> (upper(string_value) = 'F'::text))\n>>>> Buffers: shared hit=2 read=61\n>>>> -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>>>> on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual\n>>>> time=0.005..0.005 rows=1 loops=227)\n>>>> Index Cond: ((contactgroup_id = 1) AND (contact_id =\n>>>> contacts_contact.id))\n>>>> Heap Fetches: 0\n>>>> Buffers: shared hit=684\n>>>> Total runtime: 
3.488 ms\n>>>>\n>>>> https://explain.depesz.com/s/iPPJ\n>>>>\n>>>> But if increase the limit to 223 then it performs like:\n>>>>\n>>>> Limit (cost=8785.68..13306.24 rows=223 width=88) (actual\n>>>> time=2685.830..2686.534 rows=223 loops=1)\n>>>> Buffers: shared hit=767648 read=86530\n>>>> -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\n>>>> time=2685.828..2686.461 rows=223 loops=1)\n>>>> Merge Cond: (contacts_contact.id =\n>>>> contacts_contactgroup_contacts.contact_id)\n>>>> Buffers: shared hit=767648 read=86530\n>>>> -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual\n>>>> time=2685.742..2685.804 rows=228 loops=1)\n>>>> Sort Key: contacts_contact.id\n>>>> Sort Method: quicksort Memory: 34327kB\n>>>> Buffers: shared hit=767648 read=86524\n>>>> -> Nested Loop (cost=6811.12..8674.53 rows=2004\n>>>> width=92) (actual time=646.573..2417.291 rows=200412 loops=1)\n>>>>\n>>>\n>>> There is pretty bad estimation probably due dependency between\n>>> contact_field_id = 1 and upper(string_value) = 'F'::text\n>>>\n>>> The most simple solution is disable nested loop - set enable_nestloop to\n>>> off\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>>\n>>>> Buffers: shared hit=767648 read=86524\n>>>> -> HashAggregate (cost=6810.70..6813.14 rows=244\n>>>> width=4) (actual time=646.532..766.200 rows=200412 loops=1)\n>>>> Buffers: shared read=51417\n>>>> -> Bitmap Heap Scan on values_value u0\n>>>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\n>>>> rows=200412 loops=1)\n>>>> Recheck Cond: ((contact_field_id = 1)\n>>>> AND (upper(string_value) = 'F'::text))\n>>>> Buffers: shared read=51417\n>>>> -> Bitmap Index Scan on\n>>>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>>>> width=0) (actual time=70.647..70.647 rows=200412 loops=1)\n>>>> Index Cond: ((contact_field_id =\n>>>> 1) AND (upper(string_value) = 'F'::text))\n>>>> Buffers: shared read=770\n>>>> -> Index Scan using contacts_contact_pkey on\n>>>> contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\n>>>> time=0.007..0.007 rows=1 loops=200412)\n>>>> Index Cond: (id = u0.contact_id)\n>>>> Buffers: shared hit=767648 read=35107\n>>>> -> Index Only Scan Backward using\n>>>> contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on\n>>>> contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4)\n>>>> (actual time=0.073..0.273 rows=550 loops=1)\n>>>> Index Cond: (contactgroup_id = 1)\n>>>> Heap Fetches: 0\n>>>> Buffers: shared read=6\n>>>> Total runtime: 2695.301 ms\n>>>>\n>>>> https://explain.depesz.com/s/gXS\n>>>>\n>>>> I've tried running ANALYZE but that actually reduced the limit at which\n>>>> things get worse. Any insight into the reasoning of the query planner would\n>>>> be much appreciated.\n>>>>\n>>>> Thanks\n>>>>\n>>>> --\n>>>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>>>\n>>>\n>>>\n>>\n>>\n>> --\n>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>\n>\n>\n\n\n-- \n*Rowan Seymour* | +260 964153686 | @rowanseymour\n\nNot sure what other options we have other than an EAV approach since we allow users to define their own attribute types (attribute type is in contacts_contactfield, attribute value is in values_value). Would you expect modelling that with a JSON column to perform better?Thanks for the tips!On 23 February 2017 at 17:35, Pavel Stehule <[email protected]> wrote:2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:Hi Pavel. 
That suggestion gets me as far as LIMIT 694 with the fast plan then things get slow again. This is now what happens at LIMIT 695:Limit (cost=35945.78..50034.52 rows=695 width=88) (actual time=12852.580..12854.382 rows=695 loops=1) Buffers: shared hit=6 read=66689 -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual time=12852.577..12854.271 rows=695 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=6 read=66689 -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual time=12852.486..12852.577 rows=710 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=6 read=66677 -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92) (actual time=721.293..12591.204 rows=200412 loops=1) Hash Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=6 read=66677 -> Seq Scan on contacts_contact (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1) Buffers: shared hit=1 read=15265 -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=57.642..57.642 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=765 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.080..0.651 rows=1707 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=12Total runtime: 12863.938 mshttps://explain.depesz.com/s/nfw1Can you explain a bit more about what you mean about \" dependency between contact_field_id = 1 and upper(string_value) = 'F'::text\"?look to related node in plan -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 \n -> HashAggregate (cost=6810.70..6813.14 \nrows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 \n -> Bitmap Heap Scan on values_value \nu0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412There is lot of significant differences between estimation (2004) and reality (200412) - two orders - so the plan must be suboptimal I am looking to your schema - and it is variant on EAV table - this is antippatern and for more then small returned rows it should be slow.RegardsPavelBtw I created the index values_value_field_string_value_contact asCREATE INDEX values_value_field_string_value_contactON values_value(contact_field_id, UPPER(string_value), contact_id DESC)WHERE contact_field_id IS NOT NULL;I'm not sure why it needs the contact_id column but without it the planner picks a slow approach for 
even smaller LIMIT values.On 23 February 2017 at 15:32, Pavel Stehule <[email protected]> wrote:2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:Hi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1)There is pretty bad estimation probably due dependency between contact_field_id = 1 and upper(string_value) = 'F'::textThe most simple solution is disable nested loop - set enable_nestloop to offRegardsPavel Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 
(cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 18:45:41 +0200",
"msg_from": "Rowan Seymour <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance changes significantly depending on\n limit value"
},
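A minimal sketch (not taken from the thread itself) of how Pavel's enable_nestloop suggestion can be scoped to a single statement with SET LOCAL, reusing the query text quoted above; per Rowan's follow-up this only moved the tipping point from LIMIT 223 to LIMIT 694, because the underlying problem is the row estimate (2004 estimated vs 200412 actual).

BEGIN;
SET LOCAL enable_nestloop = off;   -- reverts automatically at COMMIT/ROLLBACK

SELECT "contacts_contact".* FROM "contacts_contact"
INNER JOIN "contacts_contactgroup_contacts"
        ON "contacts_contact"."id" = "contacts_contactgroup_contacts"."contact_id"
WHERE "contacts_contactgroup_contacts"."contactgroup_id" = 1
  AND "contacts_contact"."id" IN (
        SELECT U0."contact_id" FROM "values_value" U0
        WHERE U0."contact_field_id" = 1
          AND UPPER(U0."string_value"::text) = UPPER('F'))
ORDER BY "contacts_contact"."id" DESC LIMIT 223;

COMMIT;

-- PostgreSQL 10 and later (so not the 9.3 instance in this thread) can attack
-- the misestimate directly with extended statistics on the correlated columns;
-- statistics on the upper(string_value) expression itself need 14 or later.
-- CREATE STATISTICS values_value_field_string_dep (dependencies)
--     ON contact_field_id, string_value FROM values_value;
-- ANALYZE values_value;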
{
"msg_contents": "2017-02-23 17:45 GMT+01:00 Rowan Seymour <[email protected]>:\n\n> Not sure what other options we have other than an EAV approach since we\n> allow users to define their own attribute types (attribute type is in\n> contacts_contactfield, attribute value is in values_value). Would you\n> expect modelling that with a JSON column to perform better?\n>\n\nShould be - maybe hstore, jsonb with special index. EAV works if you don't\ndo massive operations.\n\nUsually the best approach is mix design - what can be relational - often\nattributes used in filters should be rational (columnar) and others can be\nin some unrelational type - XML, JSON, ...\n\nRegards\n\nPavel\n\n\n>\n> Thanks for the tips!\n>\n> On 23 February 2017 at 17:35, Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> 2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:\n>>\n>>> Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan\n>>> then things get slow again. This is now what happens at LIMIT 695:\n>>>\n>>> Limit (cost=35945.78..50034.52 rows=695 width=88) (actual\n>>> time=12852.580..12854.382 rows=695 loops=1)\n>>> Buffers: shared hit=6 read=66689\n>>> -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual\n>>> time=12852.577..12854.271 rows=695 loops=1)\n>>> Merge Cond: (contacts_contact.id =\n>>> contacts_contactgroup_contacts.contact_id)\n>>> Buffers: shared hit=6 read=66689\n>>> -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual\n>>> time=12852.486..12852.577 rows=710 loops=1)\n>>> Sort Key: contacts_contact.id\n>>> Sort Method: quicksort Memory: 34327kB\n>>> Buffers: shared hit=6 read=66677\n>>> -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92)\n>>> (actual time=721.293..12591.204 rows=200412 loops=1)\n>>> Hash Cond: (contacts_contact.id = u0.contact_id)\n>>> Buffers: shared hit=6 read=66677\n>>> -> Seq Scan on contacts_contact\n>>> (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258\n>>> rows=1000000 loops=1)\n>>> Buffers: shared hit=1 read=15265\n>>> -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n>>> (actual time=714.373..714.373 rows=200412 loops=1)\n>>> Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n>>> Buffers: shared hit=5 read=51412\n>>> -> HashAggregate (cost=6810.70..6813.14\n>>> rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n>>> Buffers: shared hit=5 read=51412\n>>> -> Bitmap Heap Scan on values_value u0\n>>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n>>> rows=200412 loops=1)\n>>> Recheck Cond: ((contact_field_id =\n>>> 1) AND (upper(string_value) = 'F'::text))\n>>> Buffers: shared hit=5 read=51412\n>>> -> Bitmap Index Scan on\n>>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>>> width=0) (actual time=57.642..57.642 rows=200412 loops=1)\n>>> Index Cond:\n>>> ((contact_field_id = 1) AND (upper(string_value) = 'F'::text))\n>>> Buffers: shared hit=5\n>>> read=765\n>>> -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>>> on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992\n>>> width=4) (actual time=0.080..0.651 rows=1707 loops=1)\n>>> Index Cond: (contactgroup_id = 1)\n>>> Heap Fetches: 0\n>>> Buffers: shared read=12\n>>> Total runtime: 12863.938 ms\n>>>\n>>> https://explain.depesz.com/s/nfw1\n>>>\n>>> Can you explain a bit more about what you mean about \" dependency\n>>> between contact_field_id = 1 and upper(string_value) = 'F'::text\"?\n>>>\n>>\n>> look to related node in 
plan\n>>\n>>\n>> -> Hash (cost=6813.14..6813.14 rows=244 width=4)\n>> (actual time=714.373..714.373 rows=200412 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 7046kB\n>> Buffers: shared hit=5 read=51412\n>> -> HashAggregate (cost=6810.70..6813.14\n>> rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1)\n>> Buffers: shared hit=5 read=51412\n>> -> Bitmap Heap Scan on values_value u0\n>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n>> rows=200412 loops=1)\n>> Recheck Cond: ((contact_field_id =\n>> 1) AND (upper(string_value) = 'F'::text))\n>> Buffers: shared hit=5 read=51412\n>>\n>> There is lot of significant differences between estimation (2004) and\n>> reality (200412) - two orders - so the plan must be suboptimal\n>>\n>> I am looking to your schema - and it is variant on EAV table - this is\n>> antippatern and for more then small returned rows it should be slow.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>> Btw I created the index values_value_field_string_value_contact as\n>>>\n>>> CREATE INDEX values_value_field_string_value_contact\n>>> ON values_value(contact_field_id, UPPER(string_value), contact_id DESC)\n>>> WHERE contact_field_id IS NOT NULL;\n>>>\n>>> I'm not sure why it needs the contact_id column but without it the\n>>> planner picks a slow approach for even smaller LIMIT values.\n>>>\n>>>\n>>> On 23 February 2017 at 15:32, Pavel Stehule <[email protected]>\n>>> wrote:\n>>>\n>>>>\n>>>>\n>>>> 2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:\n>>>>\n>>>>> Hi guys\n>>>>>\n>>>>> I'm a bit stuck on a query that performs fantastically up to a certain\n>>>>> limit value, after which the planner goes off in a completely different\n>>>>> direction and performance gets dramatically worse. 
Am using Postgresql 9.3\n>>>>>\n>>>>> You can see all the relevant schemas at http://pastebin.com/PNEqw2id\n>>>>> and in the test database there are 1,000,000 records in contacts_contact,\n>>>>> and about half of those will match the subquery on values_value.\n>>>>>\n>>>>> The query in question is:\n>>>>>\n>>>>> SELECT \"contacts_contact\".* FROM \"contacts_contact\"\n>>>>> INNER JOIN \"contacts_contactgroup_contacts\" ON\n>>>>> (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contact\n>>>>> s\".\"contact_id\")\n>>>>> WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1\n>>>>> AND \"contacts_contact\".\"id\" IN (\n>>>>> SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE\n>>>>> (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F'))\n>>>>> )\n>>>>> ) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;\n>>>>>\n>>>>> With that limit of 222, it performs like:\n>>>>>\n>>>>> Limit (cost=3.09..13256.36 rows=222 width=88) (actual\n>>>>> time=0.122..3.358 rows=222 loops=1)\n>>>>> Buffers: shared hit=708 read=63\n>>>>> -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual\n>>>>> time=0.120..3.304 rows=222 loops=1)\n>>>>> Buffers: shared hit=708 read=63\n>>>>> -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92)\n>>>>> (actual time=0.103..1.968 rows=227 loops=1)\n>>>>> Merge Cond: (contacts_contact.id = u0.contact_id)\n>>>>> Buffers: shared hit=24 read=63\n>>>>> -> Index Scan Backward using contacts_contact_pkey on\n>>>>> contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual\n>>>>> time=0.008..0.502 rows=1117 loops=1)\n>>>>> Buffers: shared hit=22 read=2\n>>>>> -> Index Scan using values_value_field_string_value_contact\n>>>>> on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual\n>>>>> time=0.086..0.857 rows=227 loops=1)\n>>>>> Index Cond: ((contact_field_id = 1) AND\n>>>>> (upper(string_value) = 'F'::text))\n>>>>> Buffers: shared hit=2 read=61\n>>>>> -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq\n>>>>> on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual\n>>>>> time=0.005..0.005 rows=1 loops=227)\n>>>>> Index Cond: ((contactgroup_id = 1) AND (contact_id =\n>>>>> contacts_contact.id))\n>>>>> Heap Fetches: 0\n>>>>> Buffers: shared hit=684\n>>>>> Total runtime: 3.488 ms\n>>>>>\n>>>>> https://explain.depesz.com/s/iPPJ\n>>>>>\n>>>>> But if increase the limit to 223 then it performs like:\n>>>>>\n>>>>> Limit (cost=8785.68..13306.24 rows=223 width=88) (actual\n>>>>> time=2685.830..2686.534 rows=223 loops=1)\n>>>>> Buffers: shared hit=767648 read=86530\n>>>>> -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual\n>>>>> time=2685.828..2686.461 rows=223 loops=1)\n>>>>> Merge Cond: (contacts_contact.id =\n>>>>> contacts_contactgroup_contacts.contact_id)\n>>>>> Buffers: shared hit=767648 read=86530\n>>>>> -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual\n>>>>> time=2685.742..2685.804 rows=228 loops=1)\n>>>>> Sort Key: contacts_contact.id\n>>>>> Sort Method: quicksort Memory: 34327kB\n>>>>> Buffers: shared hit=767648 read=86524\n>>>>> -> Nested Loop (cost=6811.12..8674.53 rows=2004\n>>>>> width=92) (actual time=646.573..2417.291 rows=200412 loops=1)\n>>>>>\n>>>>\n>>>> There is pretty bad estimation probably due dependency between\n>>>> contact_field_id = 1 and upper(string_value) = 'F'::text\n>>>>\n>>>> The most simple solution is disable nested loop - set enable_nestloop\n>>>> to off\n>>>>\n>>>> Regards\n>>>>\n>>>> 
Pavel\n>>>>\n>>>>\n>>>>> Buffers: shared hit=767648 read=86524\n>>>>> -> HashAggregate (cost=6810.70..6813.14 rows=244\n>>>>> width=4) (actual time=646.532..766.200 rows=200412 loops=1)\n>>>>> Buffers: shared read=51417\n>>>>> -> Bitmap Heap Scan on values_value u0\n>>>>> (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709\n>>>>> rows=200412 loops=1)\n>>>>> Recheck Cond: ((contact_field_id = 1)\n>>>>> AND (upper(string_value) = 'F'::text))\n>>>>> Buffers: shared read=51417\n>>>>> -> Bitmap Index Scan on\n>>>>> values_value_field_string_value_contact (cost=0.00..60.47 rows=2004\n>>>>> width=0) (actual time=70.647..70.647 rows=200412 loops=1)\n>>>>> Index Cond: ((contact_field_id =\n>>>>> 1) AND (upper(string_value) = 'F'::text))\n>>>>> Buffers: shared read=770\n>>>>> -> Index Scan using contacts_contact_pkey on\n>>>>> contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual\n>>>>> time=0.007..0.007 rows=1 loops=200412)\n>>>>> Index Cond: (id = u0.contact_id)\n>>>>> Buffers: shared hit=767648 read=35107\n>>>>> -> Index Only Scan Backward using\n>>>>> contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on\n>>>>> contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4)\n>>>>> (actual time=0.073..0.273 rows=550 loops=1)\n>>>>> Index Cond: (contactgroup_id = 1)\n>>>>> Heap Fetches: 0\n>>>>> Buffers: shared read=6\n>>>>> Total runtime: 2695.301 ms\n>>>>>\n>>>>> https://explain.depesz.com/s/gXS\n>>>>>\n>>>>> I've tried running ANALYZE but that actually reduced the limit at\n>>>>> which things get worse. Any insight into the reasoning of the query planner\n>>>>> would be much appreciated.\n>>>>>\n>>>>> Thanks\n>>>>>\n>>>>> --\n>>>>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>>>>\n>>>>\n>>>>\n>>>\n>>>\n>>> --\n>>> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>>>\n>>\n>>\n>\n>\n> --\n> *Rowan Seymour* | +260 964153686 <+260%2096%204153686> | @rowanseymour\n>\n\n2017-02-23 17:45 GMT+01:00 Rowan Seymour <[email protected]>:Not sure what other options we have other than an EAV approach since we allow users to define their own attribute types (attribute type is in contacts_contactfield, attribute value is in values_value). Would you expect modelling that with a JSON column to perform better?Should be - maybe hstore, jsonb with special index. EAV works if you don't do massive operations.Usually the best approach is mix design - what can be relational - often attributes used in filters should be rational (columnar) and others can be in some unrelational type - XML, JSON, ...RegardsPavel Thanks for the tips!On 23 February 2017 at 17:35, Pavel Stehule <[email protected]> wrote:2017-02-23 15:02 GMT+01:00 Rowan Seymour <[email protected]>:Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan then things get slow again. 
This is now what happens at LIMIT 695:Limit (cost=35945.78..50034.52 rows=695 width=88) (actual time=12852.580..12854.382 rows=695 loops=1) Buffers: shared hit=6 read=66689 -> Merge Join (cost=35945.78..56176.80 rows=998 width=88) (actual time=12852.577..12854.271 rows=695 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=6 read=66689 -> Sort (cost=35944.53..35949.54 rows=2004 width=92) (actual time=12852.486..12852.577 rows=710 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=6 read=66677 -> Hash Join (cost=6816.19..35834.63 rows=2004 width=92) (actual time=721.293..12591.204 rows=200412 loops=1) Hash Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=6 read=66677 -> Seq Scan on contacts_contact (cost=0.00..25266.00 rows=1000000 width=88) (actual time=0.003..267.258 rows=1000000 loops=1) Buffers: shared hit=1 read=15265 -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976 rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=57.642..57.642 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=765 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.080..0.651 rows=1707 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=12Total runtime: 12863.938 mshttps://explain.depesz.com/s/nfw1Can you explain a bit more about what you mean about \" dependency between contact_field_id = 1 and upper(string_value) = 'F'::text\"?look to related node in plan -> Hash (cost=6813.14..6813.14 rows=244 width=4) (actual time=714.373..714.373 rows=200412 loops=1) Buckets: 1024 Batches: 1 Memory Usage: 7046kB Buffers: shared hit=5 read=51412 \n -> HashAggregate (cost=6810.70..6813.14 \nrows=244 width=4) (actual time=561.099..644.822 rows=200412 loops=1) Buffers: shared hit=5 read=51412 \n -> Bitmap Heap Scan on values_value \nu0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=75.410..364.976\n rows=200412 loops=1) Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=5 read=51412There is lot of significant differences between estimation (2004) and reality (200412) - two orders - so the plan must be suboptimal I am looking to your schema - and it is variant on EAV table - this is antippatern and for more then small returned rows it should be slow.RegardsPavelBtw I created the index values_value_field_string_value_contact asCREATE INDEX values_value_field_string_value_contactON values_value(contact_field_id, UPPER(string_value), contact_id DESC)WHERE contact_field_id IS NOT NULL;I'm not sure why it needs the contact_id column but without it the planner picks a slow approach for even smaller LIMIT values.On 23 February 2017 at 15:32, Pavel Stehule <[email protected]> 
wrote:2017-02-23 14:11 GMT+01:00 Rowan Seymour <[email protected]>:Hi guysI'm a bit stuck on a query that performs fantastically up to a certain limit value, after which the planner goes off in a completely different direction and performance gets dramatically worse. Am using Postgresql 9.3You can see all the relevant schemas at http://pastebin.com/PNEqw2id and in the test database there are 1,000,000 records in contacts_contact, and about half of those will match the subquery on values_value.The query in question is:SELECT \"contacts_contact\".* FROM \"contacts_contact\"INNER JOIN \"contacts_contactgroup_contacts\" ON (\"contacts_contact\".\"id\" = \"contacts_contactgroup_contacts\".\"contact_id\")WHERE (\"contacts_contactgroup_contacts\".\"contactgroup_id\" = 1 AND \"contacts_contact\".\"id\" IN ( SELECT U0.\"contact_id\" FROM \"values_value\" U0 WHERE (U0.\"contact_field_id\" = 1 AND UPPER(U0.\"string_value\"::text) = UPPER('F')) )) ORDER BY \"contacts_contact\".\"id\" DESC LIMIT 222;With that limit of 222, it performs like:Limit (cost=3.09..13256.36 rows=222 width=88) (actual time=0.122..3.358 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Nested Loop (cost=3.09..59583.10 rows=998 width=88) (actual time=0.120..3.304 rows=222 loops=1) Buffers: shared hit=708 read=63 -> Merge Semi Join (cost=2.65..51687.89 rows=2004 width=92) (actual time=0.103..1.968 rows=227 loops=1) Merge Cond: (contacts_contact.id = u0.contact_id) Buffers: shared hit=24 read=63 -> Index Scan Backward using contacts_contact_pkey on contacts_contact (cost=0.42..41249.43 rows=1000000 width=88) (actual time=0.008..0.502 rows=1117 loops=1) Buffers: shared hit=22 read=2 -> Index Scan using values_value_field_string_value_contact on values_value u0 (cost=0.43..7934.72 rows=2004 width=4) (actual time=0.086..0.857 rows=227 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared hit=2 read=61 -> Index Only Scan using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..3.93 rows=1 width=4) (actual time=0.005..0.005 rows=1 loops=227) Index Cond: ((contactgroup_id = 1) AND (contact_id = contacts_contact.id)) Heap Fetches: 0 Buffers: shared hit=684Total runtime: 3.488 mshttps://explain.depesz.com/s/iPPJBut if increase the limit to 223 then it performs like:Limit (cost=8785.68..13306.24 rows=223 width=88) (actual time=2685.830..2686.534 rows=223 loops=1) Buffers: shared hit=767648 read=86530 -> Merge Join (cost=8785.68..29016.70 rows=998 width=88) (actual time=2685.828..2686.461 rows=223 loops=1) Merge Cond: (contacts_contact.id = contacts_contactgroup_contacts.contact_id) Buffers: shared hit=767648 read=86530 -> Sort (cost=8784.44..8789.45 rows=2004 width=92) (actual time=2685.742..2685.804 rows=228 loops=1) Sort Key: contacts_contact.id Sort Method: quicksort Memory: 34327kB Buffers: shared hit=767648 read=86524 -> Nested Loop (cost=6811.12..8674.53 rows=2004 width=92) (actual time=646.573..2417.291 rows=200412 loops=1)There is pretty bad estimation probably due dependency between contact_field_id = 1 and upper(string_value) = 'F'::textThe most simple solution is disable nested loop - set enable_nestloop to offRegardsPavel Buffers: shared hit=767648 read=86524 -> HashAggregate (cost=6810.70..6813.14 rows=244 width=4) (actual time=646.532..766.200 rows=200412 loops=1) Buffers: shared read=51417 -> Bitmap Heap Scan on values_value u0 (cost=60.98..6805.69 rows=2004 width=4) (actual time=92.016..433.709 rows=200412 loops=1) 
Recheck Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=51417 -> Bitmap Index Scan on values_value_field_string_value_contact (cost=0.00..60.47 rows=2004 width=0) (actual time=70.647..70.647 rows=200412 loops=1) Index Cond: ((contact_field_id = 1) AND (upper(string_value) = 'F'::text)) Buffers: shared read=770 -> Index Scan using contacts_contact_pkey on contacts_contact (cost=0.42..7.62 rows=1 width=88) (actual time=0.007..0.007 rows=1 loops=200412) Index Cond: (id = u0.contact_id) Buffers: shared hit=767648 read=35107 -> Index Only Scan Backward using contacts_contactgroup_contacts_contactgroup_id_0f909f73_uniq on contacts_contactgroup_contacts (cost=0.43..18967.29 rows=497992 width=4) (actual time=0.073..0.273 rows=550 loops=1) Index Cond: (contactgroup_id = 1) Heap Fetches: 0 Buffers: shared read=6Total runtime: 2695.301 mshttps://explain.depesz.com/s/gXSI've tried running ANALYZE but that actually reduced the limit at which things get worse. Any insight into the reasoning of the query planner would be much appreciated.Thanks-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour \n\n\n-- Rowan Seymour | +260 964153686 | @rowanseymour",
"msg_date": "Thu, 23 Feb 2017 18:13:18 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance changes significantly depending on\n limit value"
}
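A minimal sketch of the jsonb route Pavel mentions, assuming a hypothetical fields column that folds the per-contact rows of values_value into one document per contact, with "gender" standing in for the contact_field_id = 1 attribute; jsonb and jsonb_path_ops GIN indexes need PostgreSQL 9.4 or later, so this does not apply to the 9.3 instance in the thread.

ALTER TABLE contacts_contact ADD COLUMN fields jsonb;

-- jsonb_path_ops builds a compact GIN index that supports the @> containment operator.
CREATE INDEX contacts_contact_fields_idx
    ON contacts_contact USING gin (fields jsonb_path_ops);

-- The attribute filter becomes a containment test instead of a semi-join:
SELECT *
  FROM contacts_contact
 WHERE fields @> '{"gender": "F"}'
 ORDER BY id DESC
 LIMIT 223;

Since @> is an exact match, a case-insensitive lookup like the UPPER('F') predicate would have to be handled by normalising the stored values, and attributes that are filtered on heavily are still better kept as ordinary columns, as Pavel suggests.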
] |
[
{
"msg_contents": "Dear expert,\n\nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n\n\nThanks in advance.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear expert,\n \nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n \n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 23 Feb 2017 16:21:01 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to log quires which are taking time in PostgreSQL 9.1."
},
{
"msg_contents": "set log_min_duration_statement = 300,000\n\n(300,000 ms = 5min)\n\n\n\n From the docs:\n\n\nlog_min_duration_statement (integer)\n\nCauses the duration of each completed statement to be logged if the \nstatement ran for at least the specified number of milliseconds. Setting \nthis to zero prints all statement durations. Minus-one (the default) \ndisables logging statement durations. For example, if you set it to \n250ms then all SQL statements that run 250ms or longer will be logged. \nEnabling this parameter can be helpful in tracking down unoptimized \nqueries in your applications. Only superusers can change this setting.\n\n\n\n\n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\n>\n> Dear expert,\n>\n> I want to log only that queries which are taking around 5 minutes to \n> execute.\n>\n> I have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n>\n> Thanks in advance.\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 | Ext 1078 |[email protected] \n> <mailto:%[email protected]>\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n> ------------------------------------------------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) \n> and may contain confidential and privileged information. Any \n> unauthorized review, use, disclosure or distribution is prohibited. If \n> you are not the intended recipient, please contact the sender by reply \n> email and destroy all copies of the original message. Check all \n> attachments for viruses before opening them. All views or opinions \n> presented in this e-mail are those of the author and may not reflect \n> the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\nset log_min_duration_statement = 300,000 \n\n(300,000 ms = 5min)\n\n\n\n From the docs:\n\n\nlog_min_duration_statement (integer)\n Causes the duration of each completed statement to be logged if\n the statement ran for at least the specified number of\n milliseconds. Setting this to zero prints all statement durations.\n Minus-one (the default) disables logging statement durations. For\n example, if you set it to 250ms then all\n SQL statements that run 250ms or longer will be logged. Enabling\n this parameter can be helpful in tracking down unoptimized queries\n in your applications. Only superusers can change this setting.\n\n\n\n\n\n\n\nOn 02/23/2017 09:21 AM, Dinesh Chandra\n 12108 wrote:\n\n\n\n\n\nDear expert,\n�\nI want to log\n only that queries which are taking around 5 minutes to\n execute.\nI have a\n database size >1.5T and using PostgreSQL 9.1 with Linux\n OS.\n�\n�\nThanks in\n advance.\n�\nRegards,\nDinesh\n Chandra\n|Database\n administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\n +91-9953975849 | Ext 1078\n |[email protected]\n\nPlot\n No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201\n 305,India.\n�\n\n\n\n\n DISCLAIMER:\n\n This email message is for the sole use of the intended\n recipient(s) and may contain confidential and privileged\n information. Any unauthorized review, use, disclosure or\n distribution is prohibited. If you are not the intended\n recipient, please contact the sender by reply email and destroy\n all copies of the original message. 
Check all attachments for\n viruses before opening them. All views or opinions presented in\n this e-mail are those of the author and may not reflect the\n opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 23 Feb 2017 09:30:52 -0700",
"msg_from": "ProPAAS DBA <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to log quires which are taking time in PostgreSQL\n 9.1."
},
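A small sketch of the setting described above. The value has to be written without the thousands separator, i.e. 300000 (milliseconds) or with an explicit unit such as '5min'; ALTER SYSTEM does not exist on 9.1, so the persistent route is editing postgresql.conf and reloading.

-- postgresql.conf (a reload is enough, no restart needed):
--     log_min_duration_statement = 300000        # 5 minutes, in milliseconds
SELECT pg_reload_conf();

-- Or try it out for the current session only (superuser):
SET log_min_duration_statement = '5min';
SHOW log_min_duration_statement;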
{
"msg_contents": "Thanks for reply.\n\nMay I know where it will create log??\nIn pg_log directory or somewhere else.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of ProPAAS DBA\nSent: 23 February, 2017 10:01 PM\nTo: [email protected]\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\nset log_min_duration_statement = 300,000\n\n(300,000 ms = 5min)\n\n\n From the docs:\n\n\nlog_min_duration_statement (integer)\n\nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change this setting.\n\n\n\n\n\n\n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\nDear expert,\n\nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n\n\nThanks in advance.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\n\nThanks for reply.\n \nMay I know where it will create log??\nIn pg_log directory or somewhere else.\n \n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of ProPAAS DBA\nSent: 23 February, 2017 10:01 PM\nTo: [email protected]\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\n \nset log_min_duration_statement = 300,000 \n(300,000 ms = 5min)\n\n\n From the docs:\n\n\nlog_min_duration_statement (integer)\n\nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example,\n if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. 
Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change\n this setting.\n \n \n \n \n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\n\n\n\nDear expert,\n \nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n \n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Thu, 23 Feb 2017 16:44:05 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to log quires which are taking time in PostgreSQL\n 9.1."
}
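Where the logged statements end up depends on the logging configuration of the particular cluster; these commands (all available on 9.1) report the active values.

SHOW logging_collector;   -- 'on' means the server writes its own log files
SHOW log_destination;     -- stderr, csvlog, syslog, ...
SHOW log_directory;       -- a relative path lives under the data directory, e.g. pg_log
SHOW log_filename;
SHOW data_directory;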
] |
[
{
"msg_contents": "The parameter \"log_directory\" on postgresql.conf, you can define where it'll be created. \nThe name of the log file is defined on \"log_filename\". \n\n----- Mensagem original -----\n\nDe: \"Dinesh Chandra 12108\" <[email protected]> \nPara: \"ProPAAS DBA\" <[email protected]>, [email protected] \nCc: \"Dinesh Chandra 12108\" <[email protected]> \nEnviadas: Quinta-feira, 23 de Fevereiro de 2017 13:44:05 \nAssunto: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1. \n\n\n\nThanks for reply. \n\nMay I know where it will create log?? \nIn pg_log directory or somewhere else. \n\n\nRegards, \nDinesh Chandra \n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida. \n------------------------------------------------------------------ \nMobile: +91-9953975849 | Ext 1078 |[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India. \n\n\n\nFrom: [email protected] [mailto:[email protected]] On Behalf Of ProPAAS DBA \nSent: 23 February, 2017 10:01 PM \nTo: [email protected] \nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1. \n\nset log_min_duration_statement = 300,000 \n(300,000 ms = 5min) \n\n\n From the docs: \n\n\nlog_min_duration_statement ( integer ) \nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change this setting. \n\n\n\n\n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote: \n\n\n\nDear expert, \n\nI want to log only that queries which are taking around 5 minutes to execute. \nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS. \n\n\nThanks in advance. \n\nRegards, \nDinesh Chandra \n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida. \n------------------------------------------------------------------ \nMobile: +91-9953975849 | Ext 1078 |[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India. \n\n\n\n\n\nDISCLAIMER: \n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates. 
\n\n\n\n---------------------------------------------------------------\nEste E-Mail foi processado por um Filtro Anti-SPAM, e recebeu\num status, caso voc� n�o concorde com o status recebido, clique\nem um dos links abaixo listado.\nClick here to mark email as junk .\n--------------------------------------------------------------- \n\nThe parameter \"log_directory\" on postgresql.conf, you can define where it'll be created.The name of the log file is defined on \"log_filename\".De: \"Dinesh Chandra 12108\" <[email protected]>Para: \"ProPAAS DBA\" <[email protected]>, [email protected]: \"Dinesh Chandra 12108\" <[email protected]>Enviadas: Quinta-feira, 23 de Fevereiro de 2017 13:44:05Assunto: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\nThanks for reply.\n \nMay I know where it will create log??\nIn pg_log directory or somewhere else.\n \n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n \n\n\nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of ProPAAS DBA\nSent: 23 February, 2017 10:01 PM\nTo: [email protected]\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\n \nset log_min_duration_statement = 300,000 \n(300,000 ms = 5min)\n\n\n From the docs:\n\n\nlog_min_duration_statement (integer)\n\nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example,\n if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change\n this setting.\n \n \n \n \n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\n\n\n\nDear expert,\n \nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n \n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n \n\n---------------------------------------------------------------\nEste E-Mail foi processado por um Filtro Anti-SPAM, e recebeu\num status, caso voc� n�o concorde com o status recebido, clique\nem um dos links abaixo listado.\nClick here to mark email as junk.\n---------------------------------------------------------------",
"msg_date": "Thu, 23 Feb 2017 13:59:06 -0300 (BRT)",
"msg_from": "Luis Fernando Simone <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to log quires which are taking time in PostgreSQL\n 9.1."
},
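For reference, a sketch of the postgresql.conf entries Luis points to; the values shown are common defaults rather than values taken from this installation. logging_collector can only be changed with a server restart, while log_directory and log_filename take effect on reload.

--     logging_collector = on                          # required for file-based logging
--     log_directory     = 'pg_log'                    # relative to the data directory
--     log_filename      = 'postgresql-%Y-%m-%d_%H%M%S.log'
SELECT pg_reload_conf();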
{
"msg_contents": "Hi Luis,\r\n\r\nThanks for your reply.\r\nIt’s logging the quires which are taking more than specified time in log_min_duration_statement().\r\n\r\nThanks so much.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Luis Fernando Simone [mailto:[email protected]]\r\nSent: 23 February, 2017 10:29 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: ProPAAS DBA <[email protected]>; [email protected]\r\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\r\n\r\nThe parameter \"log_directory\" on postgresql.conf, you can define where it'll be created.\r\nThe name of the log file is defined on \"log_filename\".\r\n________________________________\r\nDe: \"Dinesh Chandra 12108\" <[email protected]<mailto:[email protected]>>\r\nPara: \"ProPAAS DBA\" <[email protected]<mailto:[email protected]>>, [email protected]<mailto:[email protected]>\r\nCc: \"Dinesh Chandra 12108\" <[email protected]<mailto:[email protected]>>\r\nEnviadas: Quinta-feira, 23 de Fevereiro de 2017 13:44:05\r\nAssunto: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\r\nThanks for reply.\r\n\r\nMay I know where it will create log??\r\nIn pg_log directory or somewhere else.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]] On Behalf Of ProPAAS DBA\r\nSent: 23 February, 2017 10:01 PM\r\nTo: [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\r\n\r\n\r\nset log_min_duration_statement = 300,000\r\n\r\n(300,000 ms = 5min)\r\n\r\n\r\nFrom the docs:\r\n\r\n\r\nlog_min_duration_statement (integer)\r\n\r\nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example, if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change this setting.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\r\nDear expert,\r\n\r\nI want to log only that queries which are taking around 5 minutes to execute.\r\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\r\n\r\n\r\nThanks in advance.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 
7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\r\n\r\n---------------------------------------------------------------\r\n\r\nEste E-Mail foi processado por um Filtro Anti-SPAM, e recebeu\r\n\r\num status, caso voc� n�o concorde com o status recebido, clique\r\n\r\nem um dos links abaixo listado.\r\n\r\nClick here to mark email as junk<http://shaakti.datacoper.com.br:5272/FrontController?operation=mbeu&f=00001_-45_20170223_18986272.eml&chkBayesian=1&pr=1&mt=1&ma=s>.\r\n\r\n---------------------------------------------------------------\r\n\r\n\n\n\n\n\n\n\n\n\nHi Luis,\n \nThanks for your reply.\nIt’s logging the quires which are taking more than specified time in log_min_duration_statement().\n \nThanks so much.\n \n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n \n\n\nFrom: Luis Fernando Simone [mailto:[email protected]]\r\n\nSent: 23 February, 2017 10:29 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: ProPAAS DBA <[email protected]>; [email protected]\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\n \n\nThe parameter \"log_directory\" on postgresql.conf, you can define where it'll be created.\r\nThe name of the log file is defined on \"log_filename\".\n\n\n\n\nDe:\r\n\"Dinesh Chandra 12108\" <[email protected]>\nPara: \"ProPAAS DBA\" <[email protected]>,\r\[email protected]\nCc: \"Dinesh Chandra 12108\" <[email protected]>\nEnviadas: Quinta-feira, 23 de Fevereiro de 2017 13:44:05\nAssunto: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\nThanks for reply.\n \nMay I know where it will create log??\nIn pg_log directory or somewhere else.\n \n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n \n\n\nFrom:\[email protected] [mailto:[email protected]]\r\nOn Behalf Of ProPAAS DBA\nSent: 23 February, 2017 10:01 PM\nTo: [email protected]\nSubject: Re: [PERFORM] How to log quires which are taking time in PostgreSQL 9.1.\n\n\n \nset log_min_duration_statement = 300,000 \n(300,000 ms = 5min)\n\n\r\nFrom the docs:\n\n\nlog_min_duration_statement (integer)\r\n\nCauses the duration of each completed statement to be logged if the statement ran for at least the specified number of milliseconds. Setting this to zero prints all statement durations. Minus-one (the default) disables logging statement durations. For example,\r\n if you set it to 250ms then all SQL statements that run 250ms or longer will be logged. 
Enabling this parameter can be helpful in tracking down unoptimized queries in your applications. Only superusers can change\r\n this setting.\n \n \n \n \n\nOn 02/23/2017 09:21 AM, Dinesh Chandra 12108 wrote:\n\n\n\nDear expert,\n \nI want to log only that queries which are taking around 5 minutes to execute.\nI have a database size >1.5T and using PostgreSQL 9.1 with Linux OS.\n \n \nThanks in advance.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n \n---------------------------------------------------------------\nEste E-Mail foi processado por um Filtro Anti-SPAM, e recebeu\num status, caso voc� n�o concorde com o status recebido, clique\nem um dos links abaixo listado.\nClick here to mark email as junk.\n---------------------------------------------------------------",
"msg_date": "Thu, 23 Feb 2017 17:21:35 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to log quires which are taking time in PostgreSQL\n 9.1."
}
] |
[
{
"msg_contents": "Hello everyone,\n\nI am currently evaluating the possibility of using PostgreSQL for \nstoring and querying jsonb+tsvector queries. Let's consider this setup:\n\ncreate table docs (id serial primary key, meta jsonb);\n# generate 10M entries, cf. appendix\ncreate index docs_meta_idx ON docs using gin (meta jsonb_path_ops);\ncreate index docs_name_idx ON docs using gin (to_tsvector('english', \nmeta->>'name'));\ncreate index docs_address_idx ON docs using gin (to_tsvector('english', \nmeta->>'address'));\n\n\nTesting around with some smaller datasets, functionality-wise it's \ngreat. However increasing to 10M, things tend to slow down (using \nPostgreSQL 9.5):\n\n\nexplain analyze select id from docs where meta @> '{\"age\": 20}';\n Planning time: 0.121 ms\n Execution time: 4873.507 ms\n\nexplain analyze select id from docs where meta @> '{\"age\": 20}';\n Planning time: 0.122 ms\n Execution time: 206.289 ms\n\n\n\nexplain analyze select id from docs where meta @> '{\"age\": 30}';\n Planning time: 0.109 ms\n Execution time: 7496.886 ms\n\nexplain analyze select id from docs where meta @> '{\"age\": 30}';\n Planning time: 0.114 ms\n Execution time: 1169.649 ms\n\n\n\nexplain analyze select id from docs where to_tsvector('english', \nmeta->>'name') @@ to_tsquery('english', 'john');\n Planning time: 0.179 ms\n Execution time: 10109.375 ms\n\nexplain analyze select id from docs where to_tsvector('english', \nmeta->>'name') @@ to_tsquery('english', 'john');\nPlanning time: 0.188 ms\n Execution time: 238.854 ms\n\n\nUsing \"select pg_prewarm('docs');\" and on any of the indexes doesn't \nhelp either.\nAfter a \"systemctl stop postgresql.service && sync && echo 3 > \n/proc/sys/vm/drop_caches && systemctl start postgresql.service\" the \nage=20, 30 or name=john queries are slow again.\n\n\nIs there a way to speed up or to warm up things permanently?\n\n\nRegards,\nSven\n\n\nAppendix I:\n\nexample json:\n\n{\"age\": 20, \"name\": \"Michelle Hernandez\", \"birth\": \"1991-08-16\", \n\"address\": \"94753 Tina Bridge Suite 318\\\\nEmilyport, MT 75302\"}\n\n\n\nAppendix II:\n\n\nThe Python script to generate fake json data. Needs \"pip install faker\".\n\n >>> python fake_json.py > test.json # generates 2M entries; takes some \ntime\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n\n\n-- fake_json.py --\n\nimport faker, json;\nfake = faker.Faker();\nfor i in range(2*10**6):\n print(json.dumps({\"name\": fake.name(), \"birth\": fake.date(), \n\"address\": fake.address(), \"age\": \nfake.random_int(0,100)}).replace('\\\\n', '\\\\\\\\n'))\n\n\n\n\n\n\n\nHello everyone,\n\nI am currently evaluating the possibility of\n using PostgreSQL for storing and querying jsonb+tsvector queries.\n Let's consider this setup:\n\n create table docs (id serial primary key, meta jsonb);\n # generate 10M entries, cf. appendix\n create index docs_meta_idx ON docs using gin (meta\n jsonb_path_ops);\ncreate index docs_name_idx ON docs using gin\n (to_tsvector('english', meta->>'name'));\n create index docs_address_idx ON docs using gin\n (to_tsvector('english', meta->>'address'));\n\n\n Testing around with some smaller datasets, functionality-wise it's\n great. 
However increasing to 10M, things tend to slow down (using\n PostgreSQL 9.5):\n\n\n explain analyze select id from docs where meta @> '{\"age\":\n 20}';\n Planning time: 0.121 ms\n Execution time: 4873.507 ms\n\n explain analyze select id from docs where meta @> '{\"age\":\n 20}';\n Planning time: 0.122 ms\n Execution time: 206.289 ms\n\n\n\n explain analyze select id from docs where meta @> '{\"age\":\n 30}';\n Planning time: 0.109 ms\n Execution time: 7496.886 ms\n\n explain analyze select id from docs where meta @> '{\"age\":\n 30}';\n Planning time: 0.114 ms\n Execution time: 1169.649 ms\n\n\n\n explain analyze select id from docs where to_tsvector('english',\n meta->>'name') @@ to_tsquery('english', 'john');\n Planning time: 0.179 ms\n Execution time: 10109.375 ms\n\nexplain analyze select id from docs where\n to_tsvector('english', meta->>'name') @@\n to_tsquery('english', 'john');\n Planning time: 0.188 ms\n Execution time: 238.854 ms\n\n\n Using \"select pg_prewarm('docs');\" and on any of the indexes\n doesn't help either.\n After a \"systemctl stop postgresql.service && sync\n && echo 3 > /proc/sys/vm/drop_caches &&\n systemctl start postgresql.service\" the age=20, 30 or name=john\n queries are slow again.\n\n\n Is there a way to speed up or to warm up things permanently?\n\n\nRegards,\nSven\n\n\n Appendix I:\n\n example json:\n\n {\"age\": 20, \"name\": \"Michelle Hernandez\", \"birth\": \"1991-08-16\",\n \"address\": \"94753 Tina Bridge Suite 318\\\\nEmilyport, MT 75302\"}\n\n\n\n Appendix II:\n\n\n The Python script to generate fake json data. Needs \"pip install\n faker\". \n\n >>> python fake_json.py > test.json # generates 2M\n entries; takes some time\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n>>> cat test.json | psql -c 'copy docs (meta) from\n stdin'\n>>> cat test.json | psql -c 'copy docs (meta) from\n stdin'\n>>> cat test.json | psql -c 'copy\n docs (meta) from stdin'\n>>> cat test.json | psql -c\n 'copy docs (meta) from stdin'\n\n\n -- fake_json.py --\n\n import faker, json;\n fake = faker.Faker();\n for i in range(2*10**6):\n print(json.dumps({\"name\": fake.name(), \"birth\": fake.date(),\n \"address\": fake.address(), \"age\":\n fake.random_int(0,100)}).replace('\\\\n', '\\\\\\\\n'))",
"msg_date": "Sun, 26 Feb 2017 14:28:10 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On Sun, Feb 26, 2017 at 4:28 PM, Sven R. Kunze <[email protected]> wrote:\n\n> Hello everyone,\n>\n> I am currently evaluating the possibility of using PostgreSQL for storing\n> and querying jsonb+tsvector queries. Let's consider this setup:\n>\n> create table docs (id serial primary key, meta jsonb);\n> # generate 10M entries, cf. appendix\n> create index docs_meta_idx ON docs using gin (meta jsonb_path_ops);\n> create index docs_name_idx ON docs using gin (to_tsvector('english',\n> meta->>'name'));\n> create index docs_address_idx ON docs using gin (to_tsvector('english',\n> meta->>'address'));\n>\n>\nfunctional index tends to be slow, better use separate column(s) for\ntsvector\n\n\n>\n> Testing around with some smaller datasets, functionality-wise it's great.\n> However increasing to 10M, things tend to slow down (using PostgreSQL 9.5):\n>\n>\n> explain analyze select id from docs where meta @> '{\"age\": 20}';\n> Planning time: 0.121 ms\n> Execution time: 4873.507 ms\n>\n> explain analyze select id from docs where meta @> '{\"age\": 20}';\n> Planning time: 0.122 ms\n> Execution time: 206.289 ms\n>\n>\n>\n> explain analyze select id from docs where meta @> '{\"age\": 30}';\n> Planning time: 0.109 ms\n> Execution time: 7496.886 ms\n>\n> explain analyze select id from docs where meta @> '{\"age\": 30}';\n> Planning time: 0.114 ms\n> Execution time: 1169.649 ms\n>\n>\n>\n> explain analyze select id from docs where to_tsvector('english',\n> meta->>'name') @@ to_tsquery('english', 'john');\n> Planning time: 0.179 ms\n> Execution time: 10109.375 ms\n>\n> explain analyze select id from docs where to_tsvector('english',\n> meta->>'name') @@ to_tsquery('english', 'john');\n> Planning time: 0.188 ms\n> Execution time: 238.854 ms\n>\n\nwhat is full output from explain analyze ?\n\n\n>\n>\n> Using \"select pg_prewarm('docs');\" and on any of the indexes doesn't help\n> either.\n> After a \"systemctl stop postgresql.service && sync && echo 3 >\n> /proc/sys/vm/drop_caches && systemctl start postgresql.service\" the age=20,\n> 30 or name=john queries are slow again.\n>\n>\n> Is there a way to speed up or to warm up things permanently?\n>\n>\n> Regards,\n> Sven\n>\n>\n> Appendix I:\n>\n> example json:\n>\n> {\"age\": 20, \"name\": \"Michelle Hernandez\", \"birth\": \"1991-08-16\",\n> \"address\": \"94753 Tina Bridge Suite 318\\\\nEmilyport, MT 75302\"}\n>\n>\n>\n> Appendix II:\n>\n>\n> The Python script to generate fake json data. Needs \"pip install faker\".\n>\n> >>> python fake_json.py > test.json # generates 2M entries; takes some\n> time\n> >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n> >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n> >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n> >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n> >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n>\n>\n> -- fake_json.py --\n>\n> import faker, json;\n> fake = faker.Faker();\n> for i in range(2*10**6):\n> print(json.dumps({\"name\": fake.name(), \"birth\": fake.date(),\n> \"address\": fake.address(), \"age\": fake.random_int(0,100)}).replace('\\\\n',\n> '\\\\\\\\n'))\n>\n>\n\nOn Sun, Feb 26, 2017 at 4:28 PM, Sven R. Kunze <[email protected]> wrote:\n\nHello everyone,\n\nI am currently evaluating the possibility of\n using PostgreSQL for storing and querying jsonb+tsvector queries.\n Let's consider this setup:\n\n create table docs (id serial primary key, meta jsonb);\n # generate 10M entries, cf. 
appendix\n create index docs_meta_idx ON docs using gin (meta\n jsonb_path_ops);\ncreate index docs_name_idx ON docs using gin\n (to_tsvector('english', meta->>'name'));\n create index docs_address_idx ON docs using gin\n (to_tsvector('english', meta->>'address'));\nfunctional index tends to be slow, better use separate column(s) for tsvector \n\n Testing around with some smaller datasets, functionality-wise it's\n great. However increasing to 10M, things tend to slow down (using\n PostgreSQL 9.5):\n\n\n explain analyze select id from docs where meta @> '{\"age\":\n 20}';\n Planning time: 0.121 ms\n Execution time: 4873.507 ms\n\n explain analyze select id from docs where meta @> '{\"age\":\n 20}';\n Planning time: 0.122 ms\n Execution time: 206.289 ms\n\n\n\n explain analyze select id from docs where meta @> '{\"age\":\n 30}';\n Planning time: 0.109 ms\n Execution time: 7496.886 ms\n\n explain analyze select id from docs where meta @> '{\"age\":\n 30}';\n Planning time: 0.114 ms\n Execution time: 1169.649 ms\n\n\n\n explain analyze select id from docs where to_tsvector('english',\n meta->>'name') @@ to_tsquery('english', 'john');\n Planning time: 0.179 ms\n Execution time: 10109.375 ms\n\nexplain analyze select id from docs where\n to_tsvector('english', meta->>'name') @@\n to_tsquery('english', 'john');\n Planning time: 0.188 ms\n Execution time: 238.854 mswhat is full output from explain analyze ? \n\n\n Using \"select pg_prewarm('docs');\" and on any of the indexes\n doesn't help either.\n After a \"systemctl stop postgresql.service && sync\n && echo 3 > /proc/sys/vm/drop_caches &&\n systemctl start postgresql.service\" the age=20, 30 or name=john\n queries are slow again.\n\n\n Is there a way to speed up or to warm up things permanently?\n\n\nRegards,\nSven\n\n\n Appendix I:\n\n example json:\n\n {\"age\": 20, \"name\": \"Michelle Hernandez\", \"birth\": \"1991-08-16\",\n \"address\": \"94753 Tina Bridge Suite 318\\\\nEmilyport, MT 75302\"}\n\n\n\n Appendix II:\n\n\n The Python script to generate fake json data. Needs \"pip install\n faker\". \n\n >>> python fake_json.py > test.json # generates 2M\n entries; takes some time\n >>> cat test.json | psql -c 'copy docs (meta) from stdin'\n>>> cat test.json | psql -c 'copy docs (meta) from\n stdin'\n>>> cat test.json | psql -c 'copy docs (meta) from\n stdin'\n>>> cat test.json | psql -c 'copy\n docs (meta) from stdin'\n>>> cat test.json | psql -c\n 'copy docs (meta) from stdin'\n\n\n -- fake_json.py --\n\n import faker, json;\n fake = faker.Faker();\n for i in range(2*10**6):\n print(json.dumps({\"name\": fake.name(), \"birth\": fake.date(),\n \"address\": fake.address(), \"age\":\n fake.random_int(0,100)}).replace('\\\\n', '\\\\\\\\n'))",
"msg_date": "Sun, 26 Feb 2017 23:13:03 +0300",
"msg_from": "Oleg Bartunov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "Thanks Oleg for your reply.\n\nOn 26.02.2017 21:13, Oleg Bartunov wrote:\n> On Sun, Feb 26, 2017 at 4:28 PM, Sven R. Kunze <[email protected] \n> <mailto:[email protected]>>wrote:\n>\n> create index docs_meta_idx ON docs using gin (meta jsonb_path_ops);\n> create index docs_name_idx ON docs using gin\n> (to_tsvector('english', meta->>'name'));\n> create index docs_address_idx ON docs using gin\n> (to_tsvector('english', meta->>'address'));\n>\n>\n> functional index tends to be slow, better use separate column(s) for \n> tsvector\n\nWhy? Don't we have indexes to make them faster?\n\nThe idea is to accelerate all operations as specified (cf. the table \nschema below) without adding more and more columns.\n\n> what is full output from explain analyze ?\n\nOkay, let's stick to gin + @> operator for nowbefore we tackle the \nfunctional index issue.\nMaybe, I did something wrong while defining the gin indexes:\n\n\nexplain analyze select id from docs where meta @> '{\"age\": 40}';\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000 width=4) \n(actual time=97.443..8073.983 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx (cost=0.00..84.00 rows=10000 \nwidth=0) (actual time=66.878..66.878 rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.118 ms\n Execution time: 8093.533 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @> '{\"age\": 40}';\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000 width=4) \n(actual time=99.527..3349.001 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx (cost=0.00..84.00 rows=10000 \nwidth=0) (actual time=68.503..68.503 rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.113 ms\n Execution time: 3360.773 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @> '{\"age\": 40}';\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000 width=4) \n(actual time=64.928..168.311 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx (cost=0.00..84.00 rows=10000 \nwidth=0) (actual time=45.340..45.340 rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.121 ms\n Execution time: 171.098 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @> '{\"age\": 40}';\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000 width=4) \n(actual time=86.118..215.755 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx (cost=0.00..84.00 rows=10000 \nwidth=0) (actual time=54.535..54.535 rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.127 ms\n Execution time: 219.746 ms\n(7 rows)\n\nexplain analyze 
select id from docs where meta @> '{\"age\": 40}';\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000 width=4) \n(actual time=83.197..211.840 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx (cost=0.00..84.00 rows=10000 \nwidth=0) (actual time=53.036..53.036 rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.127 ms\n Execution time: 215.753 ms\n(7 rows)\n\n\nRegards,\nSven\n\n\nTable Schema:\n\n Table \"public.docs\"\n Column | Type | Modifiers\n--------+---------+---------------------------------------------------\n id | integer | not null default nextval('docs_id_seq'::regclass)\n meta | jsonb |\nIndexes:\n \"docs_pkey\" PRIMARY KEY, btree (id)\n \"docs_address_idx\" gin (to_tsvector('english'::regconfig, meta ->> \n'address'::text))\n \"docs_address_trgm_idx\" gin ((meta ->> 'address'::text) gin_trgm_ops)\n \"docs_birth_idx\" btree ((meta ->> 'birth'::text))\n \"docs_meta_idx\" gin (meta jsonb_path_ops)\n \"docs_name_idx\" gin (to_tsvector('english'::regconfig, meta ->> \n'name'::text))\n\n\n\n\n\n\n\nThanks Oleg for your reply.\n\nOn 26.02.2017 21:13, Oleg Bartunov wrote:\n\n\nOn Sun, Feb 26, 2017 at 4:28 PM, Sven R. Kunze\n <[email protected]>\n wrote:\n\n\n\n\n create index\n docs_meta_idx ON docs using gin (meta jsonb_path_ops);\n create index docs_name_idx ON docs using gin\n (to_tsvector('english', meta->>'name'));\n create index docs_address_idx ON docs using\n gin (to_tsvector('english', meta->>'address'));\n \n\n\n\n\nfunctional index tends to be slow, better use\n separate column(s) for tsvector\n\n\n\n\n\n\nWhy? Don't we have indexes to make them faster?\n\n The idea is to accelerate all operations as specified (cf. 
the\n table schema below) without adding more and more columns.\n\n\n\n\n\n\nwhat is full output from explain analyze ?\n\n\n\n\n\n\nOkay, let's stick to gin + @> operator for now\n before we tackle the functional index issue.\n Maybe, I did something wrong while defining the gin indexes:\n\n\nexplain analyze select id from docs where meta @>\n '{\"age\": 40}';\n \n QUERY\n PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000\n width=4) (actual time=97.443..8073.983 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx \n (cost=0.00..84.00 rows=10000 width=0) (actual time=66.878..66.878\n rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.118 ms\n Execution time: 8093.533 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @>\n '{\"age\": 40}';\n \n QUERY\n PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000\n width=4) (actual time=99.527..3349.001 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx \n (cost=0.00..84.00 rows=10000 width=0) (actual time=68.503..68.503\n rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.113 ms\n Execution time: 3360.773 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @>\n '{\"age\": 40}';\n \n QUERY\n PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000\n width=4) (actual time=64.928..168.311 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx \n (cost=0.00..84.00 rows=10000 width=0) (actual time=45.340..45.340\n rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.121 ms\n Execution time: 171.098 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @>\n '{\"age\": 40}';\n \n QUERY\n PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000\n width=4) (actual time=86.118..215.755 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx \n (cost=0.00..84.00 rows=10000 width=0) (actual time=54.535..54.535\n rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.127 ms\n Execution time: 219.746 ms\n(7 rows)\n\nexplain analyze select id from docs where meta @>\n '{\"age\": 40}';\n \n QUERY\n PLAN \n---------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on docs (cost=86.50..9982.50 rows=10000\n width=4) (actual time=83.197..211.840 rows=98385 loops=1)\n Recheck Cond: (meta @> '{\"age\": 40}'::jsonb)\n Heap Blocks: exact=79106\n -> Bitmap Index Scan on docs_meta_idx \n (cost=0.00..84.00 rows=10000 width=0) (actual time=53.036..53.036\n rows=98385 loops=1)\n Index Cond: (meta @> '{\"age\": 40}'::jsonb)\n Planning time: 0.127 
ms\n Execution time: 215.753 ms\n(7 rows)\n\n\nRegards,\nSven\n\n\nTable Schema:\n\n Table \"public.docs\"\n Column | Type | \n Modifiers \n--------+---------+---------------------------------------------------\n id | integer | not null default\n nextval('docs_id_seq'::regclass)\n meta | jsonb | \nIndexes:\n \"docs_pkey\" PRIMARY KEY, btree (id)\n \"docs_address_idx\" gin\n (to_tsvector('english'::regconfig, meta ->>\n 'address'::text))\n \"docs_address_trgm_idx\" gin ((meta ->>\n 'address'::text) gin_trgm_ops)\n \"docs_birth_idx\" btree ((meta ->> 'birth'::text))\n \"docs_meta_idx\" gin (meta jsonb_path_ops)\n \"docs_name_idx\" gin (to_tsvector('english'::regconfig,\n meta ->> 'name'::text))",
"msg_date": "Mon, 27 Feb 2017 15:46:59 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On Sun, Feb 26, 2017 at 5:28 AM, Sven R. Kunze <[email protected]> wrote:\n\n>\n>\n> Using \"select pg_prewarm('docs');\" and on any of the indexes doesn't help\n> either.\n> After a \"systemctl stop postgresql.service && sync && echo 3 >\n> /proc/sys/vm/drop_caches && systemctl start postgresql.service\" the age=20,\n> 30 or name=john queries are slow again.\n>\n>\n> Is there a way to speed up or to warm up things permanently?\n>\n\n\nIf by 'permanently', you mean even when you intentionally break things,\nthen no. You will always be able to intentionally break things. There is\non-going discussion of an auto-prewarm feature. But that doesn't yet\nexist; and once it does, a super user will always be able to break it.\n\nPresumably you have a use-case in mind other than intentional sabotage of\nyour caches by root. But, what is it? If you reboot the server\nfrequently, maybe you can just throw 'select pg_prewarm...' into an init\nscript?\n\nCheers,\n\nJeff\n\nOn Sun, Feb 26, 2017 at 5:28 AM, Sven R. Kunze <[email protected]> wrote:\n\n Using \"select pg_prewarm('docs');\" and on any of the indexes\n doesn't help either.\n After a \"systemctl stop postgresql.service && sync\n && echo 3 > /proc/sys/vm/drop_caches &&\n systemctl start postgresql.service\" the age=20, 30 or name=john\n queries are slow again.\n\n\n Is there a way to speed up or to warm up things permanently?If by 'permanently', you mean even when you intentionally break things, then no. You will always be able to intentionally break things. There is on-going discussion of an auto-prewarm feature. But that doesn't yet exist; and once it does, a super user will always be able to break it.Presumably you have a use-case in mind other than intentional sabotage of your caches by root. But, what is it? If you reboot the server frequently, maybe you can just throw 'select pg_prewarm...' into an init script?Cheers,Jeff",
"msg_date": "Mon, 27 Feb 2017 10:22:25 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On 27.02.2017 19:22, Jeff Janes wrote:\n> If by 'permanently', you mean even when you intentionally break \n> things, then no. You will always be able to intentionally break \n> things. There is on-going discussion of an auto-prewarm feature. But \n> that doesn't yet exist; and once it does, a super user will always be \n> able to break it.\n>\n> Presumably you have a use-case in mind other than intentional sabotage \n> of your caches by root. But, what is it? If you reboot the server \n> frequently, maybe you can just throw 'select pg_prewarm...' into an \n> init script?\n\nI didn't express myself well enough. pg_prewarm doesn't help to speed up \nthose queries at all.\n\n\nLooking at these numbers, I wonder why it takes ~5 secs to answer?\n\n\nBest,\nSven\n\n\n\n\n\n\nOn 27.02.2017 19:22, Jeff Janes wrote:\n\n\n\n\nIf by 'permanently', you mean even\n when you intentionally break things, then no. You will\n always be able to intentionally break things. There is\n on-going discussion of an auto-prewarm feature. But that\n doesn't yet exist; and once it does, a super user will\n always be able to break it.\n \n\nPresumably you have a use-case in mind other than\n intentional sabotage of your caches by root. But, what is\n it? If you reboot the server frequently, maybe you can\n just throw 'select pg_prewarm...' into an init script?\n\n\n\n\n\n I didn't express myself well enough. pg_prewarm doesn't help to\n speed up those queries at all.\n\n\n Looking at these numbers, I wonder why it takes ~5 secs to answer?\n\n\n Best,\n Sven",
"msg_date": "Tue, 28 Feb 2017 09:27:09 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On Tue, Feb 28, 2017 at 12:27 AM, Sven R. Kunze <[email protected]> wrote:\n\n> On 27.02.2017 19:22, Jeff Janes wrote:\n>\n> If by 'permanently', you mean even when you intentionally break things,\n> then no. You will always be able to intentionally break things. There is\n> on-going discussion of an auto-prewarm feature. But that doesn't yet\n> exist; and once it does, a super user will always be able to break it.\n>\n> Presumably you have a use-case in mind other than intentional sabotage of\n> your caches by root. But, what is it? If you reboot the server\n> frequently, maybe you can just throw 'select pg_prewarm...' into an init\n> script?\n>\n>\n> I didn't express myself well enough. pg_prewarm doesn't help to speed up\n> those queries at all.\n>\n\n\nOh. In my hands, it works very well. I get 70 seconds to do the {age: 20}\nquery from pure cold caches, versus 1.4 seconds from cold caches which was\nfollowed by pg_prewarm('docs','prefetch').\n\nHow much RAM do you have? Maybe you don't have enough to hold the table in\nRAM. What kind of IO system? And what OS?\n\n\nCheers,\n\nJeff\n\nOn Tue, Feb 28, 2017 at 12:27 AM, Sven R. Kunze <[email protected]> wrote:\n\nOn 27.02.2017 19:22, Jeff Janes wrote:\n\n\n\n\nIf by 'permanently', you mean even\n when you intentionally break things, then no. You will\n always be able to intentionally break things. There is\n on-going discussion of an auto-prewarm feature. But that\n doesn't yet exist; and once it does, a super user will\n always be able to break it.\n \n\nPresumably you have a use-case in mind other than\n intentional sabotage of your caches by root. But, what is\n it? If you reboot the server frequently, maybe you can\n just throw 'select pg_prewarm...' into an init script?\n\n\n\n\n\n I didn't express myself well enough. pg_prewarm doesn't help to\n speed up those queries at all.Oh. In my hands, it works very well. I get 70 seconds to do the {age: 20} query from pure cold caches, versus 1.4 seconds from cold caches which was followed by pg_prewarm('docs','prefetch').How much RAM do you have? Maybe you don't have enough to hold the table in RAM. What kind of IO system? And what OS?Cheers,Jeff",
"msg_date": "Tue, 28 Feb 2017 08:49:42 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On 28.02.2017 17:49, Jeff Janes wrote:\n> Oh. In my hands, it works very well. I get 70 seconds to do the \n> {age: 20} query from pure cold caches, versus 1.4 seconds from cold \n> caches which was followed by pg_prewarm('docs','prefetch').\n>\n> How much RAM do you have? Maybe you don't have enough to hold the \n> table in RAM. What kind of IO system? And what OS?\n\nOn my test system:\n\nRAM: 4GB\nIO: SSD (random_page_cost = 1.0)\nOS: Ubuntu 16.04\n\nRegards,\nSven\n\n\n\n\n\n\n\nOn 28.02.2017 17:49, Jeff Janes wrote:\n\n\n\n\nOh. In my hands, it works very\n well. I get 70 seconds to do the {age: 20} query from pure\n cold caches, versus 1.4 seconds from cold caches which was\n followed by pg_prewarm('docs','prefetch').\n \n\nHow much RAM do you have? Maybe you don't have enough\n to hold the table in RAM. What kind of IO system? And\n what OS?\n\n\n\n\n\n\n On my test system:\n\n RAM: 4GB\n IO: SSD (random_page_cost = 1.0)\n OS: Ubuntu 16.04\n\n Regards,\n Sven",
"msg_date": "Wed, 1 Mar 2017 15:02:15 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <[email protected]> wrote:\n\n> On 28.02.2017 17:49, Jeff Janes wrote:\n>\n> Oh. In my hands, it works very well. I get 70 seconds to do the {age:\n> 20} query from pure cold caches, versus 1.4 seconds from cold caches which\n> was followed by pg_prewarm('docs','prefetch').\n>\n> How much RAM do you have? Maybe you don't have enough to hold the table\n> in RAM. What kind of IO system? And what OS?\n>\n>\n> On my test system:\n>\n> RAM: 4GB\n> IO: SSD (random_page_cost = 1.0)\n> OS: Ubuntu 16.04\n>\n\n\n4GB is not much RAM to be trying to pre-warm this amount of data into.\nTowards the end of the pg_prewarm, it is probably evicting data read in by\nthe earlier part of it.\n\nWhat is shared_buffers?\n\nCheers,\n\nJeff\n\nOn Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <[email protected]> wrote:\n\nOn 28.02.2017 17:49, Jeff Janes wrote:\n\n\n\n\nOh. In my hands, it works very\n well. I get 70 seconds to do the {age: 20} query from pure\n cold caches, versus 1.4 seconds from cold caches which was\n followed by pg_prewarm('docs','prefetch').\n \n\nHow much RAM do you have? Maybe you don't have enough\n to hold the table in RAM. What kind of IO system? And\n what OS?\n\n\n\n\n\n\n On my test system:\n\n RAM: 4GB\n IO: SSD (random_page_cost = 1.0)\n OS: Ubuntu 16.044GB is not much RAM to be trying to pre-warm this amount of data into. Towards the end of the pg_prewarm, it is probably evicting data read in by the earlier part of it.What is shared_buffers?Cheers,Jeff",
"msg_date": "Wed, 1 Mar 2017 09:04:13 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On 01.03.2017 18:04, Jeff Janes wrote:\n> On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> On 28.02.2017 17:49, Jeff Janes wrote:\n>> Oh. In my hands, it works very well. I get 70 seconds to do the\n>> {age: 20} query from pure cold caches, versus 1.4 seconds from\n>> cold caches which was followed by pg_prewarm('docs','prefetch').\n>>\n>> How much RAM do you have? Maybe you don't have enough to hold\n>> the table in RAM. What kind of IO system? And what OS?\n>\n> On my test system:\n>\n> RAM: 4GB\n> IO: SSD (random_page_cost = 1.0)\n> OS: Ubuntu 16.04\n>\n>\n>\n> 4GB is not much RAM to be trying to pre-warm this amount of data \n> into. Towards the end of the pg_prewarm, it is probably evicting data \n> read in by the earlier part of it.\n>\n> What is shared_buffers?\n\n942MB.\n\nBut I see where you are coming from. How come that these queries need a \nRecheck Cond? I gather that this would require reading not only the \nindex data but also the table itself which could be huge, right?\n\nSven\n\n\n\n\n\n\n On 01.03.2017 18:04, Jeff Janes wrote:\n\n\n\nOn Wed, Mar 1, 2017 at 6:02 AM, Sven\n R. Kunze <[email protected]>\n wrote:\n\n\nOn\n 28.02.2017 17:49, Jeff Janes wrote:\n\n\n\n\nOh. In my hands, it\n works very well. I get 70 seconds to do the\n {age: 20} query from pure cold caches, versus\n 1.4 seconds from cold caches which was\n followed by pg_prewarm('docs','prefetch').\n \n\nHow much RAM do you have? Maybe you\n don't have enough to hold the table in RAM. \n What kind of IO system? And what OS?\n\n\n\n\n\n\n On my test system:\n\n RAM: 4GB\n IO: SSD (random_page_cost = 1.0)\n OS: Ubuntu 16.04\n\n\n\n\n\n\n4GB is not much RAM to be trying to pre-warm this\n amount of data into. Towards the end of the pg_prewarm,\n it is probably evicting data read in by the earlier part\n of it.\n\n\nWhat is shared_buffers?\n\n\n\n\n\n\n 942MB.\n\n But I see where you are coming from. How come that these queries\n need a Recheck Cond? I gather that this would require reading not\n only the index data but also the table itself which could be huge,\n right?\n\n Sven",
"msg_date": "Thu, 2 Mar 2017 22:19:49 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On Thu, Mar 2, 2017 at 1:19 PM, Sven R. Kunze <[email protected]> wrote:\n\n> On 01.03.2017 18:04, Jeff Janes wrote:\n>\n> On Wed, Mar 1, 2017 at 6:02 AM, Sven R. Kunze <[email protected]> wrote:\n>\n>> On 28.02.2017 17:49, Jeff Janes wrote:\n>>\n>> Oh. In my hands, it works very well. I get 70 seconds to do the {age:\n>> 20} query from pure cold caches, versus 1.4 seconds from cold caches which\n>> was followed by pg_prewarm('docs','prefetch').\n>>\n>> How much RAM do you have? Maybe you don't have enough to hold the table\n>> in RAM. What kind of IO system? And what OS?\n>>\n>>\n>> On my test system:\n>>\n>> RAM: 4GB\n>> IO: SSD (random_page_cost = 1.0)\n>> OS: Ubuntu 16.04\n>>\n>\n>\n> 4GB is not much RAM to be trying to pre-warm this amount of data into.\n> Towards the end of the pg_prewarm, it is probably evicting data read in by\n> the earlier part of it.\n>\n> What is shared_buffers?\n>\n>\n> 942MB.\n>\n> But I see where you are coming from. How come that these queries need a\n> Recheck Cond? I gather that this would require reading not only the index\n> data but also the table itself which could be huge, right?\n>\n\nBitmaps can overflow and drop the row-level information, tracking only the\nblocks which need to be inspected. So it has to have a recheck in case\nthat happens (although in your case it is not actually overflowing--but it\nstill needs to be prepared for that). Also, I think that jsonb_path_ops\nindexes the hashes of the paths, so it can deliver false positives which\nneed to be rechecked. And you are selecting `id`, which is not in the\nindex so it would have to consult the table anyway to retrieve that. Even\nif it could get all the data from the index itself, I don't think GIN\nindexes support that feature.\n\nCheers,\n\nJeff\n\nOn Thu, Mar 2, 2017 at 1:19 PM, Sven R. Kunze <[email protected]> wrote:\n\n On 01.03.2017 18:04, Jeff Janes wrote:\n\n\n\nOn Wed, Mar 1, 2017 at 6:02 AM, Sven\n R. Kunze <[email protected]>\n wrote:\n\n\nOn\n 28.02.2017 17:49, Jeff Janes wrote:\n\n\n\n\nOh. In my hands, it\n works very well. I get 70 seconds to do the\n {age: 20} query from pure cold caches, versus\n 1.4 seconds from cold caches which was\n followed by pg_prewarm('docs','prefetch').\n \n\nHow much RAM do you have? Maybe you\n don't have enough to hold the table in RAM. \n What kind of IO system? And what OS?\n\n\n\n\n\n\n On my test system:\n\n RAM: 4GB\n IO: SSD (random_page_cost = 1.0)\n OS: Ubuntu 16.04\n\n\n\n\n\n\n4GB is not much RAM to be trying to pre-warm this\n amount of data into. Towards the end of the pg_prewarm,\n it is probably evicting data read in by the earlier part\n of it.\n\n\nWhat is shared_buffers?\n\n\n\n\n\n\n 942MB.\n\n But I see where you are coming from. How come that these queries\n need a Recheck Cond? I gather that this would require reading not\n only the index data but also the table itself which could be huge,\n right?Bitmaps can overflow and drop the row-level information, tracking only the blocks which need to be inspected. So it has to have a recheck in case that happens (although in your case it is not actually overflowing--but it still needs to be prepared for that). Also, I think that jsonb_path_ops indexes the hashes of the paths, so it can deliver false positives which need to be rechecked. And you are selecting `id`, which is not in the index so it would have to consult the table anyway to retrieve that. 
Even if it could get all the data from the index itself, I don't think GIN indexes support that feature.Cheers,Jeff",
"msg_date": "Sun, 5 Mar 2017 20:25:23 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
},
{
"msg_contents": "On 06.03.2017 05:25, Jeff Janes wrote:\n> Bitmaps can overflow and drop the row-level information, tracking only \n> the blocks which need to be inspected. So it has to have a recheck in \n> case that happens (although in your case it is not actually \n> overflowing--but it still needs to be prepared for that).\n\nGood to know.\n\n> Also, I think that jsonb_path_ops indexes the hashes of the paths, so \n> it can deliver false positives which need to be rechecked.\n\nWow, that's a very important piece of information. It explains a lot. \nThanks a lot.\n\n> And you are selecting `id`, which is not in the index so it would have \n> to consult the table anyway to retrieve that. Even if it could get \n> all the data from the index itself, I don't think GIN indexes support \n> that feature.\n\nYes, I see. I actually was sloppy about the query. What's really \nimportant here would be counting the number of rows. However, from what \nI can see, it's the best PostgreSQL can do right now.\n\n\nOr you have any more ideas how to speed up counting?\n\n\nBest,\nSven\n\n\n\n\n\n\n On 06.03.2017 05:25, Jeff Janes wrote:\n\n\n\nBitmaps can overflow and drop the\n row-level information, tracking only the blocks which need\n to be inspected. So it has to have a recheck in case that\n happens (although in your case it is not actually\n overflowing--but it still needs to be prepared for that).\n\n\n\n\n Good to know.\n\n\n\n\nAlso, I think that jsonb_path_ops\n indexes the hashes of the paths, so it can deliver false\n positives which need to be rechecked.\n\n\n\n\n Wow, that's a very important piece of information. It explains a\n lot. Thanks a lot.\n\n\n\n\nAnd you are selecting `id`, which is\n not in the index so it would have to consult the table\n anyway to retrieve that. Even if it could get all the data\n from the index itself, I don't think GIN indexes support\n that feature.\n\n\n\n\n\n Yes, I see. I actually was sloppy about the query. What's really\n important here would be counting the number of rows. However, from\n what I can see, it's the best PostgreSQL can do right now.\n\n\n Or you have any more ideas how to speed up counting?\n\n\n Best,\n Sven",
"msg_date": "Mon, 6 Mar 2017 22:10:46 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speeding up JSON + TSQUERY + GIN"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are taking daily full backup of PostgreSQL database using PG_DUMP which is automatic scheduled through Cronjobs.\n\nHow can I check my yesterday backup is successfully or not?\nIs there any query or view by which I can check it?\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\n\nHi,\n \nWe are taking daily full backup of PostgreSQL database using\nPG_DUMP which is automatic scheduled through Cronjobs.\n \nHow can I check my yesterday backup is successfully or not?\nIs there any query or view by which I can check it?\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Mon, 27 Feb 2017 09:35:47 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How Can I check PostgreSQL backup is successfully or not ?"
},
{
"msg_contents": "Although it doesn't really tell if the pg_dump was successful (you'll need\nto do a full restore to be sure), I generate an archive list. If that\nfails, the backup clearly wasn't successful, and if it succeeds, odds are\npretty good that it worked:\n\n-- bash code snippet --\narchiveList=`pg_restore -l ${backupFolder}`\nif [[ ! ${archiveList} =~ \"Archive created at\" ]]\nthen\n echo \"PostgreSQL backup - Archive List Test Failed for\n${hostName}:${dbName}\"\n echo \"Archive listing:\"\n echo ${archiveList}\n exit 1\nfi\n-----------------------\n\n\n\nOn Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> We are taking daily full backup of PostgreSQL database using *PG_DUMP*\n> which is automatic scheduled through Cronjobs.\n>\n>\n>\n> How can I check my yesterday backup is successfully or not?\n>\n> Is there any query or view by which I can check it?\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\nAlthough it doesn't really tell if the pg_dump was successful (you'll need to do a full restore to be sure), I generate an archive list. If that fails, the backup clearly wasn't successful, and if it succeeds, odds are pretty good that it worked:-- bash code snippet --archiveList=`pg_restore -l ${backupFolder}`if [[ ! ${archiveList} =~ \"Archive created at\" ]]then echo \"PostgreSQL backup - Archive List Test Failed for ${hostName}:${dbName}\" echo \"Archive listing:\" echo ${archiveList} exit 1fi-----------------------On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nHi,\n \nWe are taking daily full backup of PostgreSQL database using\nPG_DUMP which is automatic scheduled through Cronjobs.\n \nHow can I check my yesterday backup is successfully or not?\nIs there any query or view by which I can check it?\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Mon, 27 Feb 2017 05:36:15 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
{
"msg_contents": "Even though it's not listed in any of the documentation or “pg_dump --help” you can check the return code of the process. A return code greater than 0 (zero) usually indicates a failure\r\n\r\n./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n1\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Rick Otten\r\nSent: Monday, February 27, 2017 3:36 AM\r\nTo: Dinesh Chandra 12108\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\r\n\r\nAlthough it doesn't really tell if the pg_dump was successful (you'll need to do a full restore to be sure), I generate an archive list. If that fails, the backup clearly wasn't successful, and if it succeeds, odds are pretty good that it worked:\r\n\r\n-- bash code snippet --\r\narchiveList=`pg_restore -l ${backupFolder}`\r\nif [[ ! ${archiveList} =~ \"Archive created at\" ]]\r\nthen\r\n echo \"PostgreSQL backup - Archive List Test Failed for ${hostName}:${dbName}\"\r\n echo \"Archive listing:\"\r\n echo ${archiveList}\r\n exit 1\r\nfi\r\n-----------------------\r\n\r\n\r\n\r\nOn Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nWe are taking daily full backup of PostgreSQL database using PG_DUMP which is automatic scheduled through Cronjobs.\r\n\r\nHow can I check my yesterday backup is successfully or not?\r\nIs there any query or view by which I can check it?\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849<tel:+91%2099539%2075849> | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\r\n\n\n\n\n\n\n\n\n\nEven though it's not listed in any of the documentation or “pg_dump --help” you can check the return code of the process. A return code greater than 0 (zero) usually indicates\r\n a failure\n \n./bin >pg_dump -U dummy_user dummy_database; echo $?\n1\n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Rick Otten\nSent: Monday, February 27, 2017 3:36 AM\nTo: Dinesh Chandra 12108\nCc: [email protected]\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n \n\nAlthough it doesn't really tell if the pg_dump was successful (you'll need to do a full restore to be sure), I generate an archive list. If that fails, the backup clearly wasn't successful, and if it succeeds, odds are pretty good that\r\n it worked:\n\n \n\n\n-- bash code snippet --\n\n\n\narchiveList=`pg_restore -l ${backupFolder}`\n\n\nif [[ ! 
${archiveList} =~ \"Archive created at\" ]]\n\n\nthen\n\n\n echo \"PostgreSQL backup - Archive List Test Failed for ${hostName}:${dbName}\"\n\n\n echo \"Archive listing:\"\n\n\n echo ${archiveList}\n\n\n exit 1\n\n\nfi\n\n\n\n-----------------------\n\n\n \n\n\n \n\n\n\n \n\nOn Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nHi,\n \nWe are taking daily full backup of PostgreSQL database using\r\nPG_DUMP which is automatic scheduled through Cronjobs.\n \nHow can I check my yesterday backup is successfully or not?\nIs there any query or view by which I can check it?\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\r\n+91-9953975849 | Ext 1078 \r\n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Mon, 27 Feb 2017 13:29:14 +0000",
"msg_from": "John Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
{
"msg_contents": "On 2017-02-27 14:29, John Gorman wrote:\n> Even though it's not listed in any of the documentation or “pg_dump\n> --help” you can check the return code of the process. A return code\n> greater than 0 (zero) usually indicates a failure\n> \n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\n> \n> 1\n> \n> FROM: [email protected]\n> [mailto:[email protected]] ON BEHALF OF Rick\n> Otten\n> SENT: Monday, February 27, 2017 3:36 AM\n> TO: Dinesh Chandra 12108\n> CC: [email protected]\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> successfully or not ?\n> \n> Although it doesn't really tell if the pg_dump was successful (you'll\n> need to do a full restore to be sure), I generate an archive list. If\n> that fails, the backup clearly wasn't successful, and if it succeeds,\n> odds are pretty good that it worked:\n> \n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> <[email protected]> wrote:\n> \n> Hi,\n> \n> We are taking daily full backup of PostgreSQL database using PG_DUMP\n> which is automatic scheduled through Cronjobs.\n> \n> How can I check my yesterday backup is successfully or not?\n> \n> Is there any query or view by which I can check it?\n> \n> REGARDS,\n> \n> DINESH CHANDRA\n> \n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n\n\nIt's important to note the distinction between\n\n\"the backup process did not fail\"\n\nand\n\n\"we now have a trustworthy backup\"\n\nAnd you can go full-paranoia and say that you can successfully create a \nperfectly working backup of the wrong database.\n\nSo what is it that you want to make sure of:\n1. Did the process give an error?\n2. Did the process create a usable backup?\n\nWhat are the chances of #1 reporting success but still producing a bad \nbackup?\nAnd can #2 fail on a good database, and if so, can you detect that?\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Feb 2017 15:00:32 +0100",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
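The two checks vinny separates here can be combined in one wrapper; this is a minimal sketch, assuming a custom-format dump and hypothetical names (backup_user, mydb, mybackup.dump). The exit status of pg_dump answers question 1; pg_restore --list only proves the archive has a readable table of contents, so it is a partial answer to question 2, not a substitute for a real restore.

    #!/bin/bash
    set -u

    # Question 1: did the process give an error?
    # pg_dump exits non-zero on failure, so test its exit status directly.
    if ! pg_dump -Fc -U backup_user -f mybackup.dump mydb; then
        echo "pg_dump reported an error" >&2
        exit 1
    fi

    # Question 2, partially: is the resulting archive at least readable?
    # pg_restore --list only parses the archive's table of contents; it does
    # not touch any database and does not prove the data is restorable.
    if ! pg_restore --list mybackup.dump > /dev/null; then
        echo "archive listing failed" >&2
        exit 1
    fi

    echo "dump completed and archive listing succeeded"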
{
"msg_contents": "Hi,\r\n\r\nI run the below command\r\n[postgres@CPPMoma-DB05 bin]$ pg_dump -U postgres moma_ts_oce; echo $\r\n\r\nOutput was like this(Last few lines )\r\n------------------------------\r\n-- Name: public; Type: ACL; Schema: -; Owner: postgres\r\n--\r\n\r\nREVOKE ALL ON SCHEMA public FROM PUBLIC;\r\nREVOKE ALL ON SCHEMA public FROM postgres;\r\nGRANT ALL ON SCHEMA public TO postgres;\r\nGRANT ALL ON SCHEMA public TO PUBLIC;\r\n\r\n--\r\n-- PostgreSQL database dump complete\r\n--\r\n\r\n$\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n-----Original Message-----\r\nFrom: vinny [mailto:[email protected]]\r\nSent: 27 February, 2017 7:31 PM\r\nTo: John Gorman <[email protected]>\r\nCc: Rick Otten <[email protected]>; Dinesh Chandra 12108 <[email protected]>; [email protected]; [email protected]\r\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\r\n\r\nOn 2017-02-27 14:29, John Gorman wrote:\r\n> Even though it's not listed in any of the documentation or “pg_dump\r\n> --help” you can check the return code of the process. A return code\r\n> greater than 0 (zero) usually indicates a failure\r\n>\r\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n>\r\n> 1\r\n>\r\n> FROM: [email protected]\r\n> [mailto:[email protected]] ON BEHALF OF Rick\r\n> Otten\r\n> SENT: Monday, February 27, 2017 3:36 AM\r\n> TO: Dinesh Chandra 12108\r\n> CC: [email protected]\r\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\r\n> successfully or not ?\r\n>\r\n> Although it doesn't really tell if the pg_dump was successful (you'll\r\n> need to do a full restore to be sure), I generate an archive list. If\r\n> that fails, the backup clearly wasn't successful, and if it succeeds,\r\n> odds are pretty good that it worked:\r\n>\r\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\r\n> <[email protected]> wrote:\r\n>\r\n> Hi,\r\n>\r\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\r\n> which is automatic scheduled through Cronjobs.\r\n>\r\n> How can I check my yesterday backup is successfully or not?\r\n>\r\n> Is there any query or view by which I can check it?\r\n>\r\n> REGARDS,\r\n>\r\n> DINESH CHANDRA\r\n>\r\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\r\n\r\n\r\nIt's important to note the distinction between\r\n\r\n\"the backup process did not fail\"\r\n\r\nand\r\n\r\n\"we now have a trustworthy backup\"\r\n\r\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\r\n\r\nSo what is it that you want to make sure of:\r\n1. Did the process give an error?\r\n2. Did the process create a usable backup?\r\n\r\nWhat are the chances of #1 reporting success but still producing a bad backup?\r\nAnd can #2 fail on a good database, and if so, can you detect that?\r\n\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. 
Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Feb 2017 14:10:52 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
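Two small notes on the command shown above: without -f or a shell redirect, pg_dump writes the whole SQL script to the terminal (which is why the dump scrolled past in the session), and a bare "echo $" prints a literal dollar sign; it is $? that holds the exit status. A sketch of what was presumably intended, with a hypothetical output file name:

    # write the dump to a file instead of the terminal
    pg_dump -U postgres -f moma_ts_oce.sql moma_ts_oce

    # $? (not a bare $) expands to the exit status of the last command; 0 means success
    echo $?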
{
"msg_contents": "Hi,\r\n\r\nWhen I issue the bleow command\r\n > ./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n\r\nI checked with Linux TOP command on the same server, it was showing COPY database.\r\nWhat exactly it doing ??\r\n\r\nRegards,\r\nDinesh Chandra\r\n\r\n-----Original Message-----\r\nFrom: vinny [mailto:[email protected]]\r\nSent: 27 February, 2017 7:31 PM\r\nTo: John Gorman <[email protected]>\r\nCc: Rick Otten <[email protected]>; Dinesh Chandra 12108 <[email protected]>; [email protected]; [email protected]\r\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\r\n\r\nOn 2017-02-27 14:29, John Gorman wrote:\r\n> Even though it's not listed in any of the documentation or “pg_dump\r\n> --help” you can check the return code of the process. A return code\r\n> greater than 0 (zero) usually indicates a failure\r\n>\r\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n>\r\n> 1\r\n>\r\n> FROM: [email protected]\r\n> [mailto:[email protected]] ON BEHALF OF Rick\r\n> Otten\r\n> SENT: Monday, February 27, 2017 3:36 AM\r\n> TO: Dinesh Chandra 12108\r\n> CC: [email protected]\r\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\r\n> successfully or not ?\r\n>\r\n> Although it doesn't really tell if the pg_dump was successful (you'll\r\n> need to do a full restore to be sure), I generate an archive list. If\r\n> that fails, the backup clearly wasn't successful, and if it succeeds,\r\n> odds are pretty good that it worked:\r\n>\r\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\r\n> <[email protected]> wrote:\r\n>\r\n> Hi,\r\n>\r\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\r\n> which is automatic scheduled through Cronjobs.\r\n>\r\n> How can I check my yesterday backup is successfully or not?\r\n>\r\n> Is there any query or view by which I can check it?\r\n>\r\n> REGARDS,\r\n>\r\n> DINESH CHANDRA\r\n>\r\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\r\n\r\n\r\nIt's important to note the distinction between\r\n\r\n\"the backup process did not fail\"\r\n\r\nand\r\n\r\n\"we now have a trustworthy backup\"\r\n\r\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\r\n\r\nSo what is it that you want to make sure of:\r\n1. Did the process give an error?\r\n2. Did the process create a usable backup?\r\n\r\nWhat are the chances of #1 reporting success but still producing a bad backup?\r\nAnd can #2 fail on a good database, and if so, can you detect that?\r\n\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Mar 2017 12:05:35 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
{
"msg_contents": "This reminds me - I have had a case where the exit code for pg_dump was\nsuccessful, but the backup was still corrupted on disk. By all means check\nthe exit code, but I strong encourage a second validation, such as the\nindex listing, to increase your confidence that the backup was successful.\n\nThe best way to ensure good backups is to establish a regular practice of\nrestoring a backup to another database. The easiest such practice to\njustify and implement is to maintain a developer/development database, and\nto use your production database backups to rebuild it on a regular basis.\nOther approaches could include regularly scheduled Disaster Recovery\nexercises, or simply spinning up throw away cloud instances for the purpose.\n\npg_dump uses the ordinary postgresql COPY command to extract data from the\ntables. Beyond that, I'm not sure how it works. Sorry I can't help you\nthere.\n\n\nOn Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi,\n>\n> When I issue the bleow command\n> > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n>\n> I checked with Linux TOP command on the same server, it was showing COPY\n> database.\n> What exactly it doing ??\n>\n> Regards,\n> Dinesh Chandra\n>\n> -----Original Message-----\n> From: vinny [mailto:[email protected]]\n> Sent: 27 February, 2017 7:31 PM\n> To: John Gorman <[email protected]>\n> Cc: Rick Otten <[email protected]>; Dinesh Chandra 12108 <\n> [email protected]>; [email protected];\n> [email protected]\n> Subject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully\n> or not ?\n>\n> On 2017-02-27 14:29, John Gorman wrote:\n> > Even though it's not listed in any of the documentation or “pg_dump\n> > --help” you can check the return code of the process. A return code\n> > greater than 0 (zero) usually indicates a failure\n> >\n> > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n> >\n> > 1\n> >\n> > FROM: [email protected]\n> > [mailto:[email protected]] ON BEHALF OF Rick\n> > Otten\n> > SENT: Monday, February 27, 2017 3:36 AM\n> > TO: Dinesh Chandra 12108\n> > CC: [email protected]\n> > SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> > successfully or not ?\n> >\n> > Although it doesn't really tell if the pg_dump was successful (you'll\n> > need to do a full restore to be sure), I generate an archive list. If\n> > that fails, the backup clearly wasn't successful, and if it succeeds,\n> > odds are pretty good that it worked:\n> >\n> > On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> > <[email protected]> wrote:\n> >\n> > Hi,\n> >\n> > We are taking daily full backup of PostgreSQL database using PG_DUMP\n> > which is automatic scheduled through Cronjobs.\n> >\n> > How can I check my yesterday backup is successfully or not?\n> >\n> > Is there any query or view by which I can check it?\n> >\n> > REGARDS,\n> >\n> > DINESH CHANDRA\n> >\n> > |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n>\n>\n> It's important to note the distinction between\n>\n> \"the backup process did not fail\"\n>\n> and\n>\n> \"we now have a trustworthy backup\"\n>\n> And you can go full-paranoia and say that you can successfully create a\n> perfectly working backup of the wrong database.\n>\n> So what is it that you want to make sure of:\n> 1. Did the process give an error?\n> 2. 
Did the process create a usable backup?\n>\n> What are the chances of #1 reporting success but still producing a bad\n> backup?\n> And can #2 fail on a good database, and if so, can you detect that?\n",
"msg_date": "Thu, 2 Mar 2017 08:14:57 -0500",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
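A minimal sketch of the restore-test practice described above, assuming a custom-format dump at a hypothetical path and a throwaway database name; the details will vary per site:

    #!/bin/bash
    set -eu

    DUMP=/backups/mydb_latest.dump    # hypothetical path to the most recent dump
    SCRATCH=restore_smoke_test        # throwaway database name

    dropdb --if-exists "$SCRATCH"
    createdb "$SCRATCH"

    # --exit-on-error makes pg_restore fail loudly on a damaged archive
    pg_restore --exit-on-error -d "$SCRATCH" "$DUMP"

    # trivial smoke test: the restored database should contain user tables
    psql -d "$SCRATCH" -Atc "SELECT count(*) FROM pg_stat_user_tables;"

    dropdb "$SCRATCH"

Any further checks run against the scratch database (row counts, application smoke tests) raise confidence at the cost of more time.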
{
"msg_contents": "Dear Rick,\r\n\r\nThanks for your valuable reply.\r\n\r\nBut the daily restoration of backup to another database is really so time consuming because our databases size is greater than 2TB.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)|\r\n\r\nFrom: Rick Otten [mailto:[email protected]]\r\nSent: 02 March, 2017 6:45 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\r\n\r\nThis reminds me - I have had a case where the exit code for pg_dump was successful, but the backup was still corrupted on disk. By all means check the exit code, but I strong encourage a second validation, such as the index listing, to increase your confidence that the backup was successful.\r\n\r\nThe best way to ensure good backups is to establish a regular practice of restoring a backup to another database. The easiest such practice to justify and implement is to maintain a developer/development database, and to use your production database backups to rebuild it on a regular basis. Other approaches could include regularly scheduled Disaster Recovery exercises, or simply spinning up throw away cloud instances for the purpose.\r\n\r\npg_dump uses the ordinary postgresql COPY command to extract data from the tables. Beyond that, I'm not sure how it works. Sorry I can't help you there.\r\n\r\n\r\nOn Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nHi,\r\n\r\nWhen I issue the bleow command\r\n > ./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n\r\nI checked with Linux TOP command on the same server, it was showing COPY database.\r\nWhat exactly it doing ??\r\n\r\nRegards,\r\nDinesh Chandra\r\n\r\n-----Original Message-----\r\nFrom: vinny [mailto:[email protected]<mailto:[email protected]>]\r\nSent: 27 February, 2017 7:31 PM\r\nTo: John Gorman <[email protected]<mailto:[email protected]>>\r\nCc: Rick Otten <[email protected]<mailto:[email protected]>>; Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\r\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\r\n\r\nOn 2017-02-27 14:29, John Gorman wrote:\r\n> Even though it's not listed in any of the documentation or “pg_dump\r\n> --help” you can check the return code of the process. A return code\r\n> greater than 0 (zero) usually indicates a failure\r\n>\r\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\r\n>\r\n> 1\r\n>\r\n> FROM: [email protected]<mailto:[email protected]>\r\n> [mailto:[email protected]<mailto:[email protected]>] ON BEHALF OF Rick\r\n> Otten\r\n> SENT: Monday, February 27, 2017 3:36 AM\r\n> TO: Dinesh Chandra 12108\r\n> CC: [email protected]<mailto:[email protected]>\r\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\r\n> successfully or not ?\r\n>\r\n> Although it doesn't really tell if the pg_dump was successful (you'll\r\n> need to do a full restore to be sure), I generate an archive list. 
If\r\n> that fails, the backup clearly wasn't successful, and if it succeeds,\r\n> odds are pretty good that it worked:\r\n>\r\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\r\n> <[email protected]<mailto:[email protected]>> wrote:\r\n>\r\n> Hi,\r\n>\r\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\r\n> which is automatic scheduled through Cronjobs.\r\n>\r\n> How can I check my yesterday backup is successfully or not?\r\n>\r\n> Is there any query or view by which I can check it?\r\n>\r\n> REGARDS,\r\n>\r\n> DINESH CHANDRA\r\n>\r\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\r\n\r\n\r\nIt's important to note the distinction between\r\n\r\n\"the backup process did not fail\"\r\n\r\nand\r\n\r\n\"we now have a trustworthy backup\"\r\n\r\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\r\n\r\nSo what is it that you want to make sure of:\r\n1. Did the process give an error?\r\n2. Did the process create a usable backup?\r\n\r\nWhat are the chances of #1 reporting success but still producing a bad backup?\r\nAnd can #2 fail on a good database, and if so, can you detect that?\r\n",
"msg_date": "Thu, 2 Mar 2017 13:19:40 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
{
"msg_contents": "It'd be so nice to have some checks to guarantee the backup is trustworthy. Restoring the db is imho not a very good option in general:\n - large databases are a problem. My db is about 3TB. Time plus disk space is a big blocker.\n - also, what if the backup is incomplete? Just restoring the db successfully is not enough right? You'd have to compare with the prod to make sure nothing was missed... in a fast moving outfit where the db today will have tons of new/changed deleted stuff from yesterday.. how to even do that?\n\nI am in a warehouse environment, so I have given up on guaranteeing backups and in a case of trouble, i'll spend 20h rebuilding my db. So I have a way out but i'd much prefer working with trustworthy backups.\n\n\nSent from my BlackBerry 10 smartphone.\nFrom: Rick Otten\nSent: Thursday, March 2, 2017 08:19\nTo: Dinesh Chandra 12108\nCc: [email protected]\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\n\nThis reminds me - I have had a case where the exit code for pg_dump was successful, but the backup was still corrupted on disk. By all means check the exit code, but I strong encourage a second validation, such as the index listing, to increase your confidence that the backup was successful.\n\nThe best way to ensure good backups is to establish a regular practice of restoring a backup to another database. The easiest such practice to justify and implement is to maintain a developer/development database, and to use your production database backups to rebuild it on a regular basis. Other approaches could include regularly scheduled Disaster Recovery exercises, or simply spinning up throw away cloud instances for the purpose.\n\npg_dump uses the ordinary postgresql COPY command to extract data from the tables. Beyond that, I'm not sure how it works. Sorry I can't help you there.\n\n\nOn Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\nWhen I issue the bleow command\n > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n\nI checked with Linux TOP command on the same server, it was showing COPY database.\nWhat exactly it doing ??\n\nRegards,\nDinesh Chandra\n\n-----Original Message-----\nFrom: vinny [mailto:[email protected]<mailto:[email protected]>]\nSent: 27 February, 2017 7:31 PM\nTo: John Gorman <[email protected]<mailto:[email protected]>>\nCc: Rick Otten <[email protected]<mailto:[email protected]>>; Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\nOn 2017-02-27 14:29, John Gorman wrote:\n> Even though it's not listed in any of the documentation or “pg_dump\n> --help” you can check the return code of the process. A return code\n> greater than 0 (zero) usually indicates a failure\n>\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\n>\n> 1\n>\n> FROM: [email protected]<mailto:[email protected]>\n> [mailto:[email protected]<mailto:[email protected]>] ON BEHALF OF Rick\n> Otten\n> SENT: Monday, February 27, 2017 3:36 AM\n> TO: Dinesh Chandra 12108\n> CC: [email protected]<mailto:[email protected]>\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> successfully or not ?\n>\n> Although it doesn't really tell if the pg_dump was successful (you'll\n> need to do a full restore to be sure), I generate an archive list. 
If\n> that fails, the backup clearly wasn't successful, and if it succeeds,\n> odds are pretty good that it worked:\n>\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> <[email protected]<mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\n> which is automatic scheduled through Cronjobs.\n>\n> How can I check my yesterday backup is successfully or not?\n>\n> Is there any query or view by which I can check it?\n>\n> REGARDS,\n>\n> DINESH CHANDRA\n>\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n\n\nIt's important to note the distinction between\n\n\"the backup process did not fail\"\n\nand\n\n\"we now have a trustworthy backup\"\n\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\n\nSo what is it that you want to make sure of:\n1. Did the process give an error?\n2. Did the process create a usable backup?\n\nWhat are the chances of #1 reporting success but still producing a bad backup?\nAnd can #2 fail on a good database, and if so, can you detect that?\n",
"msg_date": "Thu, 2 Mar 2017 16:26:56 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
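One cheap, approximate way to do the 'compare with the prod' step mentioned above is to diff per-table row-count estimates between the source and a restored copy; a rough sketch with hypothetical host names follows. On a moving warehouse the numbers will only ever match approximately, so this is a sanity check rather than proof of completeness.

    #!/bin/bash
    set -eu

    # n_live_tup is a statistics estimate, so this stays cheap even on multi-TB databases
    Q="SELECT schemaname || '.' || relname, n_live_tup FROM pg_stat_user_tables ORDER BY 1;"

    psql "host=prod-host dbname=mydb"    -Atc "$Q" > prod_counts.txt
    psql "host=restore-host dbname=mydb" -Atc "$Q" > restored_counts.txt

    diff prod_counts.txt restored_counts.txt \
        && echo "table list and row-count estimates match"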
{
"msg_contents": "May you please share what types of check if there to guarantee the backup is trustworthy.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\nFrom: [email protected] [mailto:[email protected]]\nSent: 02 March, 2017 9:57 PM\nTo: Rick Otten <[email protected]>; Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\nIt'd be so nice to have some checks to guarantee the backup is trustworthy. Restoring the db is imho not a very good option in general:\n - large databases are a problem. My db is about 3TB. Time plus disk space is a big blocker.\n - also, what if the backup is incomplete? Just restoring the db successfully is not enough right? You'd have to compare with the prod to make sure nothing was missed... in a fast moving outfit where the db today will have tons of new/changed deleted stuff from yesterday.. how to even do that?\n\nI am in a warehouse environment, so I have given up on guaranteeing backups and in a case of trouble, i'll spend 20h rebuilding my db. So I have a way out but i'd much prefer working with trustworthy backups.\n\n\nSent from my BlackBerry 10 smartphone.\nFrom: Rick Otten\nSent: Thursday, March 2, 2017 08:19\nTo: Dinesh Chandra 12108\nCc: [email protected]<mailto:[email protected]>\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\n\nThis reminds me - I have had a case where the exit code for pg_dump was successful, but the backup was still corrupted on disk. By all means check the exit code, but I strong encourage a second validation, such as the index listing, to increase your confidence that the backup was successful.\n\nThe best way to ensure good backups is to establish a regular practice of restoring a backup to another database. The easiest such practice to justify and implement is to maintain a developer/development database, and to use your production database backups to rebuild it on a regular basis. Other approaches could include regularly scheduled Disaster Recovery exercises, or simply spinning up throw away cloud instances for the purpose.\n\npg_dump uses the ordinary postgresql COPY command to extract data from the tables. Beyond that, I'm not sure how it works. Sorry I can't help you there.\n\n\nOn Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\nWhen I issue the bleow command\n > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n\nI checked with Linux TOP command on the same server, it was showing COPY database.\nWhat exactly it doing ??\n\nRegards,\nDinesh Chandra\n\n-----Original Message-----\nFrom: vinny [mailto:[email protected]<mailto:[email protected]>]\nSent: 27 February, 2017 7:31 PM\nTo: John Gorman <[email protected]<mailto:[email protected]>>\nCc: Rick Otten <[email protected]<mailto:[email protected]>>; Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>>; [email protected]<mailto:[email protected]>; [email protected]<mailto:[email protected]>\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\nOn 2017-02-27 14:29, John Gorman wrote:\n> Even though it's not listed in any of the documentation or “pg_dump\n> --help” you can check the return code of the process. 
A return code\n> greater than 0 (zero) usually indicates a failure\n>\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\n>\n> 1\n>\n> FROM: [email protected]<mailto:[email protected]>\n> [mailto:[email protected]<mailto:[email protected]>] ON BEHALF OF Rick\n> Otten\n> SENT: Monday, February 27, 2017 3:36 AM\n> TO: Dinesh Chandra 12108\n> CC: [email protected]<mailto:[email protected]>\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> successfully or not ?\n>\n> Although it doesn't really tell if the pg_dump was successful (you'll\n> need to do a full restore to be sure), I generate an archive list. If\n> that fails, the backup clearly wasn't successful, and if it succeeds,\n> odds are pretty good that it worked:\n>\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> <[email protected]<mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\n> which is automatic scheduled through Cronjobs.\n>\n> How can I check my yesterday backup is successfully or not?\n>\n> Is there any query or view by which I can check it?\n>\n> REGARDS,\n>\n> DINESH CHANDRA\n>\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n\n\nIt's important to note the distinction between\n\n\"the backup process did not fail\"\n\nand\n\n\"we now have a trustworthy backup\"\n\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\n\nSo what is it that you want to make sure of:\n1. Did the process give an error?\n2. Did the process create a usable backup?\n\nWhat are the chances of #1 reporting success but still producing a bad backup?\nAnd can #2 fail on a good database, and if so, can you detect that?\n",
"msg_date": "Thu, 2 Mar 2017 16:32:54 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
},
{
"msg_contents": "Hi Rick, \n\nYou Always have a doubt, if backup is full and done.\n\nIn my case, I have one Client using a large database, and the base in mysql, an when we stop traffic about 8:00PM start a lot of routines. I have about 2hours of Exclusive lock.\n\nFor solve one problem like these in this Client. We prepared a replication on other servers, and doing a backup in a replications servers.\n\nIn My Opinion Postgres Replication work better way than mysql.\n\nhttps://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling\n\nPros.: You have a second and third server and (n).\nCan compare data and have a control of replication log.\nAnd dont need do a directly backup in production enviroment\n\nBest Regards\n\n\n\nGustavo Neves Vargas\n\[email protected] <mailto:[email protected]>\njustblue.com.br\n+55(41) 9157-7816\n+55(41) 3058-4967\n\n\n\n\n> On 2 Mar 2017, at 13:26, [email protected] wrote:\n> \n> It'd be so nice to have some checks to guarantee the backup is trustworthy. Restoring the db is imho not a very good option in general:\n> - large databases are a problem. My db is about 3TB. Time plus disk space is a big blocker.\n> - also, what if the backup is incomplete? Just restoring the db successfully is not enough right? You'd have to compare with the prod to make sure nothing was missed... in a fast moving outfit where the db today will have tons of new/changed deleted stuff from yesterday.. how to even do that?\n> \n> I am in a warehouse environment, so I have given up on guaranteeing backups and in a case of trouble, i'll spend 20h rebuilding my db. So I have a way out but i'd much prefer working with trustworthy backups.\n> \n> \n> Sent from my BlackBerry 10 smartphone.\n> From: Rick Otten\n> Sent: Thursday, March 2, 2017 08:19\n> To: Dinesh Chandra 12108\n> Cc: [email protected]\n> Subject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n> \n> This reminds me - I have had a case where the exit code for pg_dump was successful, but the backup was still corrupted on disk. By all means check the exit code, but I strong encourage a second validation, such as the index listing, to increase your confidence that the backup was successful.\n> \n> The best way to ensure good backups is to establish a regular practice of restoring a backup to another database. The easiest such practice to justify and implement is to maintain a developer/development database, and to use your production database backups to rebuild it on a regular basis. Other approaches could include regularly scheduled Disaster Recovery exercises, or simply spinning up throw away cloud instances for the purpose.\n> \n> pg_dump uses the ordinary postgresql COPY command to extract data from the tables. Beyond that, I'm not sure how it works. 
Sorry I can't help you there.\n> \n> \n> On Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 <[email protected] <mailto:[email protected]>> wrote:\n> Hi,\n> \n> When I issue the bleow command\n> > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n> \n> I checked with Linux TOP command on the same server, it was showing COPY database.\n> What exactly it doing ??\n> \n> Regards,\n> Dinesh Chandra\n> \n> -----Original Message-----\n> From: vinny [mailto:[email protected] <mailto:[email protected]>]\n> Sent: 27 February, 2017 7:31 PM\n> To: John Gorman <[email protected] <mailto:[email protected]>>\n> Cc: Rick Otten <[email protected] <mailto:[email protected]>>; Dinesh Chandra 12108 <[email protected] <mailto:[email protected]>>; [email protected] <mailto:[email protected]>; [email protected] <mailto:[email protected]>\n> Subject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n> \n> On 2017-02-27 14:29, John Gorman wrote:\n> > Even though it's not listed in any of the documentation or “pg_dump\n> > --help” you can check the return code of the process. A return code\n> > greater than 0 (zero) usually indicates a failure\n> >\n> > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n> >\n> > 1\n> >\n> > FROM: [email protected] <mailto:[email protected]>\n> > [mailto:[email protected] <mailto:[email protected]>] ON BEHALF OF Rick\n> > Otten\n> > SENT: Monday, February 27, 2017 3:36 AM\n> > TO: Dinesh Chandra 12108\n> > CC: [email protected] <mailto:[email protected]>\n> > SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> > successfully or not ?\n> >\n> > Although it doesn't really tell if the pg_dump was successful (you'll\n> > need to do a full restore to be sure), I generate an archive list. If\n> > that fails, the backup clearly wasn't successful, and if it succeeds,\n> > odds are pretty good that it worked:\n> >\n> > On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> > <[email protected] <mailto:[email protected]>> wrote:\n> >\n> > Hi,\n> >\n> > We are taking daily full backup of PostgreSQL database using PG_DUMP\n> > which is automatic scheduled through Cronjobs.\n> >\n> > How can I check my yesterday backup is successfully or not?\n> >\n> > Is there any query or view by which I can check it?\n> >\n> > REGARDS,\n> >\n> > DINESH CHANDRA\n> >\n> > |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n> \n> \n> It's important to note the distinction between\n> \n> \"the backup process did not fail\"\n> \n> and\n> \n> \"we now have a trustworthy backup\"\n> \n> And you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\n> \n> So what is it that you want to make sure of:\n> 1. Did the process give an error?\n> 2. Did the process create a usable backup?\n> \n> What are the chances of #1 reporting success but still producing a bad backup?\n> And can #2 fail on a good database, and if so, can you detect that?\n> \n> \n> \n> ________________________________\n> \n> DISCLAIMER:\n> \n> This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n> \n> --\n> Sent via pgsql-performance mailing list ([email protected] <mailto:[email protected]>)\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance <http://www.postgresql.org/mailpref/pgsql-performance>\n> \n\n\nHi Rick, You Always have a doubt, if backup is full and done.In my case, I have one Client using a large database, and the base in mysql, an when we stop traffic about 8:00PM start a lot of routines. I have about 2hours of Exclusive lock.For solve one problem like these in this Client. We prepared a replication on other servers, and doing a backup in a replications servers.In My Opinion Postgres Replication work better way than mysql.https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_PoolingPros.: You have a second and third server and (n).Can compare data and have a control of replication log.And dont need do a directly backup in production enviromentBest Regards\nGustavo Neves [email protected]+55(41) 9157-7816+55(41) 3058-4967\n\nOn 2 Mar 2017, at 13:26, [email protected] wrote:\n\n\n\n\nIt'd be so nice to have some checks to guarantee the backup is trustworthy. Restoring the db is imho not a very good option in general:\n\n - large databases are a problem. My db is about 3TB. Time plus disk space is a big blocker.\n\n - also, what if the backup is incomplete? Just restoring the db successfully is not enough right? You'd have to compare with the prod to make sure nothing was missed... in a fast moving outfit where the db today will have tons of new/changed deleted stuff\n from yesterday.. how to even do that?\n\n\n\n\nI am in a warehouse environment, so I have given up on guaranteeing backups and in a case of trouble, i'll spend 20h rebuilding my db. So I have a way out but i'd much prefer working with trustworthy backups.\n\n\n\n\n\n\n\nSent from my BlackBerry 10 smartphone.\n\n\n\n\n\nFrom: Rick Otten\nSent: Thursday, March 2, 2017 08:19\nTo: Dinesh Chandra 12108\nCc: [email protected]\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\n\n\n\n\n\n\n\n\nThis reminds me - I have had a case where the exit code for pg_dump was successful, but the backup was still corrupted on disk. By all means check the exit code, but I strong encourage a second validation, such as the index listing, to increase\n your confidence that the backup was successful.\n\n\nThe best way to ensure good backups is to establish a regular practice of restoring a backup to another database. The easiest such practice to justify and implement is to maintain a developer/development database, and to use your production database backups\n to rebuild it on a regular basis. Other approaches could include regularly scheduled Disaster Recovery exercises, or simply spinning up throw away cloud instances for the purpose.\n\n\npg_dump uses the ordinary postgresql COPY command to extract data from the tables. Beyond that, I'm not sure how it works. 
Sorry I can't help you there.\n\n\n\n\nOn Thu, Mar 2, 2017 at 7:05 AM, Dinesh Chandra 12108 \n<[email protected]> wrote:\n\nHi,\n\nWhen I issue the bleow command\n > ./bin >pg_dump -U dummy_user dummy_database; echo $?\n\nI checked with Linux TOP command on the same server, it was showing COPY database.\nWhat exactly it doing ??\n\nRegards,\nDinesh Chandra\n\n-----Original Message-----\nFrom: vinny [mailto:[email protected]]\nSent: 27 February, 2017 7:31 PM\nTo: John Gorman <[email protected]>\nCc: Rick Otten <[email protected]>; Dinesh Chandra 12108 <[email protected]>;\[email protected];\[email protected]\n\n\nSubject: Re: [PERFORM] How Can I check PostgreSQL backup is successfully or not ?\n\nOn 2017-02-27 14:29, John Gorman wrote:\n> Even though it's not listed in any of the documentation or “pg_dump\n> --help” you can check the return code of the process. A return code\n> greater than 0 (zero) usually indicates a failure\n>\n> ./bin >pg_dump -U dummy_user dummy_database; echo $?\n>\n> 1\n>\n> FROM: [email protected]\n> [mailto:[email protected]] ON BEHALF OF Rick\n> Otten\n> SENT: Monday, February 27, 2017 3:36 AM\n> TO: Dinesh Chandra 12108\n> CC: [email protected]\n> SUBJECT: Re: [PERFORM] How Can I check PostgreSQL backup is\n> successfully or not ?\n>\n> Although it doesn't really tell if the pg_dump was successful (you'll\n> need to do a full restore to be sure), I generate an archive list. If\n> that fails, the backup clearly wasn't successful, and if it succeeds,\n> odds are pretty good that it worked:\n>\n> On Mon, Feb 27, 2017 at 4:35 AM, Dinesh Chandra 12108\n> <[email protected]> wrote:\n>\n> Hi,\n>\n> We are taking daily full backup of PostgreSQL database using PG_DUMP\n> which is automatic scheduled through Cronjobs.\n>\n> How can I check my yesterday backup is successfully or not?\n>\n> Is there any query or view by which I can check it?\n>\n> REGARDS,\n>\n> DINESH CHANDRA\n>\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n\n\nIt's important to note the distinction between\n\n\"the backup process did not fail\"\n\nand\n\n\"we now have a trustworthy backup\"\n\nAnd you can go full-paranoia and say that you can successfully create a perfectly working backup of the wrong database.\n\nSo what is it that you want to make sure of:\n1. Did the process give an error?\n2. Did the process create a usable backup?\n\nWhat are the chances of #1 reporting success but still producing a bad backup?\nAnd can #2 fail on a good database, and if so, can you detect that?\n\n\n\n\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
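The advice in this thread converges on actually restoring the dump somewhere and comparing the result with production. A minimal sketch of one such comparison, assuming the dump has been restored into a separate scratch database: list per-table row counts on both sides and diff them. Note that n_live_tup is only an estimate maintained by the statistics collector; an exact (but slower) check would use count(*) per table instead.

-- Run on both the restored scratch database and on production, then diff.
-- n_live_tup is an estimate; use count(*) per table for an exact comparison.
SELECT schemaname, relname, n_live_tup
FROM pg_stat_user_tables
ORDER BY schemaname, relname;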
"msg_date": "Thu, 2 Mar 2017 13:50:10 -0300",
"msg_from": "Gustavo Vargas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How Can I check PostgreSQL backup is successfully or\n not ?"
}
] |
[
{
"msg_contents": "Hello,\n\nI have encountered a strange problem when doing an anti-join with a very \nsmall table via a varchar or text field as opposed to an integer field. \nPostgres version is 9.5.3\n\nI did some experiments to extract the problem in a simple form. FIrst \ngenerate two tables with a series of numbers - once as integers once as \ntext. The first table has 10,000 rows the second table just one:\n\n=# select generate_series(1, 10000) as id, generate_series(1,10000)::text as text into table tmp_san_1;\nSELECT 10000\n=# select generate_series(1, 1) as id, generate_series(1,1)::text as text into table tmp_san_2;\nSELECT 1\n\n=# analyze tmp_san_1;\nANALYZE\n=# analyze tmp_san_2;\nANALYZE\n\n=# \\d tmp_san_*\n Table \"public.tmp_san_1\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n text | text |\n\n Table \"public.tmp_san_2\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n text | text |\n\n\n\nNow I do an anti-join between the two tables via the id field (integer). \nThe number of resulting rows are estimated correctly as 9,999:\n\n\n=# explain analyze\n select tmp_san_1.id\n from tmp_san_1\n left join tmp_san_2 on tmp_san_1.id = tmp_san_2.id\n where tmp_san_2.id is null;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=1.02..281.26 rows=9999 width=4) (actual time=0.019..2.743 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) (actual time=0.007..1.023 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=4) (actual time=0.004..0.004 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.138 ms\n Execution time: 3.218 ms\n(8 rows)\n\n\nThe same anti-join using the text fields, however estimates just 1 \nresulting row, while there are still of course 9,999 of them:\n\n=# explain analyze\n select tmp_san_1.id\n from tmp_san_1\n left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n where tmp_san_2.id is null;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual time=0.020..3.091 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.text = tmp_san_2.text)\n Filter: (tmp_san_2.id IS NULL)\n Rows Removed by Filter: 1\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=8) (actual time=0.008..0.983 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=6) (actual time=0.004..0.004 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=6) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.173 ms\n Execution time: 3.546 ms\n(10 rows)\n\n\nI cannot explain that behavior and much less think of a fix or \nworkaround. Unfortunately my real-world example has to use varchar for \nthe join.\n\nThanks for any help,\nStefan\n\n\n\n\n\n\n\n Hello,\n\n I have encountered a strange problem when doing an anti-join with a\n very small table via a varchar or text field as opposed to an\n integer field. Postgres version is 9.5.3\n\n I did some experiments to extract the problem in a simple form.\n FIrst generate two tables with a series of numbers - once as\n integers once as text. 
The first table has 10,000 rows the second\n table just one:\n\n=# select generate_series(1, 10000) as id, generate_series(1,10000)::text as text into table tmp_san_1;\nSELECT 10000\n=# select generate_series(1, 1) as id, generate_series(1,1)::text as text into table tmp_san_2;\nSELECT 1\n\n=# analyze tmp_san_1;\nANALYZE\n=# analyze tmp_san_2;\nANALYZE\n\n=# \\d tmp_san_*\n Table \"public.tmp_san_1\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n text | text | \n\n Table \"public.tmp_san_2\"\n Column | Type | Modifiers \n--------+---------+-----------\n id | integer | \n text | text | \n\n\n\n\n Now I do an anti-join between the two tables via the id field\n (integer). The number of resulting rows are estimated correctly as\n 9,999:\n\n\n=# explain analyze \n select tmp_san_1.id \n from tmp_san_1 \n left join tmp_san_2 on tmp_san_1.id = tmp_san_2.id \n where tmp_san_2.id is null;\n \n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=1.02..281.26 rows=9999 width=4) (actual time=0.019..2.743 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) (actual time=0.007..1.023 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=4) (actual time=0.004..0.004 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=4) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.138 ms\n Execution time: 3.218 ms\n(8 rows)\n\n\n\n The same anti-join using the text fields, however estimates just 1\n resulting row, while there are still of course 9,999 of them:\n\n=# explain analyze \n select tmp_san_1.id \n from tmp_san_1 \n left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text \n where tmp_san_2.id is null;\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual time=0.020..3.091 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.text = tmp_san_2.text)\n Filter: (tmp_san_2.id IS NULL)\n Rows Removed by Filter: 1\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=8) (actual time=0.008..0.983 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=6) (actual time=0.004..0.004 rows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=6) (actual time=0.002..0.002 rows=1 loops=1)\n Planning time: 0.173 ms\n Execution time: 3.546 ms\n(10 rows)\n\n\n\n I cannot explain that behavior and much less think of a fix or\n workaround. Unfortunately my real-world example has to use varchar\n for the join.\n\n Thanks for any help,\n Stefan",
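Not part of the original post, but a quick way to see what the planner knows about the join columns of the two test tables is to look at the per-column statistics gathered by ANALYZE:

-- Diagnostic only: per-column statistics for the two test tables.
SELECT tablename, attname, n_distinct, null_frac, most_common_vals
FROM pg_stats
WHERE tablename IN ('tmp_san_1', 'tmp_san_2')
  AND attname IN ('id', 'text');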
"msg_date": "Wed, 1 Mar 2017 23:00:06 +0100",
"msg_from": "Stefan Andreatta <[email protected]>",
"msg_from_op": true,
"msg_subject": "anti-join with small table via text/varchar cannot estimate rows\n correctly"
},
{
"msg_contents": "On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]>\nwrote:\n\n> plain analyze\n> select tmp_san_1.id\n> from tmp_san_1\n> left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n> where tmp_san_2.id is null;\n>\n> Does it help if you check for \"tmp_san_2.text is null\"?\n\nDavid J.\n\nOn Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]> wrote:\nplain analyze \n select tmp_san_1.id \n from tmp_san_1 \n left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text \n where tmp_san_2.id is null;Does it help if you check for \"tmp_san_2.text is null\"?David J.",
"msg_date": "Wed, 1 Mar 2017 15:12:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anti-join with small table via text/varchar cannot\n estimate rows correctly"
},
{
"msg_contents": "On Wed, Mar 1, 2017 at 2:12 PM, David G. Johnston <\[email protected]> wrote:\n\n> On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]>\n> wrote:\n>\n>> plain analyze\n>> select tmp_san_1.id\n>> from tmp_san_1\n>> left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n>> where tmp_san_2.id is null;\n>>\n>> Does it help if you check for \"tmp_san_2.text is null\"?\n>\n>\n>\nYes. And if you swap it so that the left join is on the integer while IS\nNULL is on the text, that also gets poorly estimated. Also, if you make\nboth column of both tables be integers, same thing--you get bad estimates\nwhen the join condition refers to one column and the where refers to the\nother. I don't know why the estimate is poor, but it is not related to the\ntypes of the columns, but rather the identities of them.\n\nCheers,\n\nJeff\n\nOn Wed, Mar 1, 2017 at 2:12 PM, David G. Johnston <[email protected]> wrote:On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]> wrote:\nplain analyze \n select tmp_san_1.id \n from tmp_san_1 \n left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text \n where tmp_san_2.id is null;Does it help if you check for \"tmp_san_2.text is null\"?Yes. And if you swap it so that the left join is on the integer while IS NULL is on the text, that also gets poorly estimated. Also, if you make both column of both tables be integers, same thing--you get bad estimates when the join condition refers to one column and the where refers to the other. I don't know why the estimate is poor, but it is not related to the types of the columns, but rather the identities of them.Cheers,Jeff",
"msg_date": "Wed, 1 Mar 2017 16:24:14 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anti-join with small table via text/varchar cannot\n estimate rows correctly"
},
{
"msg_contents": "On Wed, Mar 1, 2017 at 5:24 PM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Mar 1, 2017 at 2:12 PM, David G. Johnston <\n> [email protected]> wrote:\n>\n>> On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]\n>> > wrote:\n>>\n>>> plain analyze\n>>> select tmp_san_1.id\n>>> from tmp_san_1\n>>> left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n>>> where tmp_san_2.id is null;\n>>>\n>>> Does it help if you check for \"tmp_san_2.text is null\"?\n>>\n>>\n>>\n> Yes. And if you swap it so that the left join is on the integer while IS\n> NULL is on the text, that also gets poorly estimated. Also, if you make\n> both column of both tables be integers, same thing--you get bad estimates\n> when the join condition refers to one column and the where refers to the\n> other. I don't know why the estimate is poor, but it is not related to the\n> types of the columns, but rather the identities of them.\n>\n>\nI suspect it has to with the lack of a NOT NULL constraint on either\ncolumn causing the planner to disregard the potential to implement a LEFT\nJOIN using ANTI-JOIN semantics - or, also possible - the form itself is\ninvalid regardless of the presence or absence of contraints. IIUC, while a\ntrue anti-join syntax doesn't exist the canonical form for one uses NOT\nEXISTS - which would force the author to use only the correct column pair.\n\nDavid J.\n\n\nOn Wed, Mar 1, 2017 at 5:24 PM, Jeff Janes <[email protected]> wrote:On Wed, Mar 1, 2017 at 2:12 PM, David G. Johnston <[email protected]> wrote:On Wed, Mar 1, 2017 at 3:00 PM, Stefan Andreatta <[email protected]> wrote:\nplain analyze \n select tmp_san_1.id \n from tmp_san_1 \n left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text \n where tmp_san_2.id is null;Does it help if you check for \"tmp_san_2.text is null\"?Yes. And if you swap it so that the left join is on the integer while IS NULL is on the text, that also gets poorly estimated. Also, if you make both column of both tables be integers, same thing--you get bad estimates when the join condition refers to one column and the where refers to the other. I don't know why the estimate is poor, but it is not related to the types of the columns, but rather the identities of them.I suspect it has to with the lack of a NOT NULL constraint on either column causing the planner to disregard the potential to implement a LEFT JOIN using ANTI-JOIN semantics - or, also possible - the form itself is invalid regardless of the presence or absence of contraints. IIUC, while a true anti-join syntax doesn't exist the canonical form for one uses NOT EXISTS - which would force the author to use only the correct column pair.David J.",
"msg_date": "Wed, 1 Mar 2017 17:28:28 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anti-join with small table via text/varchar cannot\n estimate rows correctly"
},
{
"msg_contents": "Stefan Andreatta <[email protected]> writes:\n> The same anti-join using the text fields, however estimates just 1 \n> resulting row, while there are still of course 9,999 of them:\n\n> =# explain analyze\n> select tmp_san_1.id\n> from tmp_san_1\n> left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n> where tmp_san_2.id is null;\n\nThat is not an anti-join. To make it one, you have to constrain the RHS\njoin column to be IS NULL, not some random other column. Note the join\ntype isn't getting shown as Anti:\n\n> Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual time=0.020..3.091 rows=9999 loops=1)\n\nAs written, the query could return some rows that weren't actually\nantijoin rows, ie tmp_san_1.text *did* have a match in tmp_san_2,\nbut that row chanced to have a null value of id.\n\nPossibly the planner could be smarter about estimating for this case,\nbut it doesn't look much like a typical use-case to me.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 Mar 2017 20:06:58 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: anti-join with small table via text/varchar cannot estimate rows\n correctly"
},
{
"msg_contents": "On 02.03.2017 02:06, Tom Lane wrote:\n> Stefan Andreatta <[email protected]> writes:\n>> The same anti-join using the text fields, however estimates just 1\n>> resulting row, while there are still of course 9,999 of them:\n>> =# explain analyze\n>> select tmp_san_1.id\n>> from tmp_san_1\n>> left join tmp_san_2 on tmp_san_1.text = tmp_san_2.text\n>> where tmp_san_2.id is null;\n> That is not an anti-join. To make it one, you have to constrain the RHS\n> join column to be IS NULL, not some random other column. Note the join\n> type isn't getting shown as Anti:\n>\n>> Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual time=0.020..3.091 rows=9999 loops=1)\n> As written, the query could return some rows that weren't actually\n> antijoin rows, ie tmp_san_1.text *did* have a match in tmp_san_2,\n> but that row chanced to have a null value of id.\n>\n> Possibly the planner could be smarter about estimating for this case,\n> but it doesn't look much like a typical use-case to me.\n>\n> \t\t\tregards, tom lane\n\nThanks a lot! Right, my problem had nothing to do with the type of the \njoin field, but with the selection of the proper field for the \nNULL-condition.\n\nSo, even a join on the id field is badly estimated if checked on the \ntext field:\n\n=# EXPLAIN ANALYZE\n SELECT tmp_san_1.id\n FROM tmp_san_1\n LEFT JOIN tmp_san_2 ON tmp_san_1.id = tmp_san_2.id\n WHERE (tmp_san_2.text IS NULL);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual \ntime=0.019..2.939 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n Filter: (tmp_san_2.text IS NULL)\n Rows Removed by Filter: 1\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) \n(actual time=0.007..1.003 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=6) (actual time=0.004..0.004 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=6) \n(actual time=0.001..0.002 rows=1 loops=1)\n Planning time: 0.062 ms\n Execution time: 3.381 ms\n(10 rows)\n\n\n... but if the join and the check refer to the same field everything is \nfine:\n\n=# EXPLAIN ANALYZE\n SELECT tmp_san_1.id\n FROM tmp_san_1\n LEFT JOIN tmp_san_2 ON tmp_san_1.id = tmp_san_2.id\n WHERE (tmp_san_2.id IS NULL);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=1.02..281.26 rows=9999 width=4) (actual \ntime=0.018..2.672 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) \n(actual time=0.007..0.962 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=4) (actual time=0.003..0.003 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=4) \n(actual time=0.001..0.001 rows=1 loops=1)\n Planning time: 0.051 ms\n Execution time: 3.164 ms\n(8 rows)\n\n\nIt get's more interesting again, if the text field really could be NULL \nand I wanted to include those rows. 
If I just include \"OR tmp_san_2.text \nIS NULL\" estimates are off again:\n\n=# EXPLAIN ANALYZE\n SELECT tmp_san_1.id\n FROM tmp_san_1\n LEFT JOIN tmp_san_2 ON tmp_san_1.id = tmp_san_2.id\n WHERE (tmp_san_2.id IS NULL OR tmp_san_2.text IS NULL);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.02..192.53 rows=1 width=4) (actual \ntime=0.019..2.984 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n Filter: ((tmp_san_2.id IS NULL) OR (tmp_san_2.text IS NULL))\n Rows Removed by Filter: 1\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) \n(actual time=0.008..1.024 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=6) (actual time=0.004..0.004 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=6) \n(actual time=0.001..0.002 rows=1 loops=1)\n Planning time: 0.088 ms\n Execution time: 3.508 ms\n(10 rows)\n\n\nInstead, it seems, I have to move this condition (inverted) into the \njoin clause for the planner to make correct estimates again:\n\n=# EXPLAIN ANALYZE\n SELECT tmp_san_1.id\n FROM tmp_san_1\n LEFT JOIN tmp_san_2 ON tmp_san_1.id = tmp_san_2.id AND \ntmp_san_2.text IS NOT NULL\n WHERE (tmp_san_2.id IS NULL);\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------\n Hash Anti Join (cost=1.02..281.26 rows=9999 width=4) (actual \ntime=0.017..2.761 rows=9999 loops=1)\n Hash Cond: (tmp_san_1.id = tmp_san_2.id)\n -> Seq Scan on tmp_san_1 (cost=0.00..154.00 rows=10000 width=4) \n(actual time=0.007..1.052 rows=10000 loops=1)\n -> Hash (cost=1.01..1.01 rows=1 width=4) (actual time=0.004..0.004 \nrows=1 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 9kB\n -> Seq Scan on tmp_san_2 (cost=0.00..1.01 rows=1 width=4) \n(actual time=0.002..0.002 rows=1 loops=1)\n Filter: (text IS NOT NULL)\n Planning time: 0.058 ms\n Execution time: 3.232 ms\n(9 rows)\n\n\nSo, yes, the planner could infer a bit more here - after all, if few \nrows are present to start with only few rows can meet any condition. But \nthat may well be an unusual case. It's just easy to get puzzled by these \nthings once you get used to the postresql planner being very smart in \nmost cases ;-)\n\nThanks again,\nStefan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Mar 2017 06:14:57 +0100",
"msg_from": "Stefan Andreatta <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: anti-join with small table via text/varchar cannot\n estimate rows correctly"
}
] |
[
{
"msg_contents": "Dear Experts,\n\nI need your suggestions to resolve the performance issue reported on our PostgreSQL9.1 production database having 1.5 TB Size. I have observed that, some select queries with order by clause are taking lot of time in execution and forcing applications to give slow response.\n\nThe configuration of database server is :\n\nArchitecture: x86_64\nCPU op-mode(s): 32-bit, 64-bit\nCPU's : 8\nCore(s) per socket: 4\nSocket(s): 2\nModel name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\n\nRAM : 32 GB\nSWAP :8 Gb\n\nKernel parameter:\n\nkernel.shmmax = 32212254720\nkernel.shmall = 1073741824\n\n\nValues of PostgreSQL.conf parameters are :\n\nshared_buffers = 10GB\ntemp_buffers = 32MB\nwork_mem = 512MB\nmaintenance_work_mem = 2048MB\nmax_files_per_process = 2000\ncheckpoint_segments = 200\nmax_wal_senders = 5\nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\n\n\nQueries taking lot of time are:\n==================================\n\n\n2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n\n\n2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST') ORDER BY feature_id\n\n\nTop command output:\n\ntop - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\nTasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\nCpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 32830016k total, 32142596k used, 687420k free, 77460k buffers\nSwap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\n18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\n16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\n14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\n1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\n13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\n20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\n13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\n41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 pgadmin3\n2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\n3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\n15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\n18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\n20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\n24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\n28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\n1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)|\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear Experts,\n \nI need your suggestions to resolve the performance issue reported on our\nPostgreSQL9.1 production database having 1.5 TB Size. I have observed that, some select queries with order by clause are taking lot of time in execution and forcing applications to give slow response.\n\n \nThe configuration of database server is :\n \nArchitecture: x86_64 \nCPU op-mode(s): 32-bit, 64-bit\nCPU’s : 8\nCore(s) per socket: 4\nSocket(s): 2\nModel name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\n \nRAM : 32 GB\n\nSWAP :8 Gb\n\n \nKernel parameter:\n \nkernel.shmmax = 32212254720\nkernel.shmall = 1073741824\n \n \nValues of PostgreSQL.conf parameters are :\n \nshared_buffers = 10GB\ntemp_buffers = 32MB\nwork_mem = 512MB \n\nmaintenance_work_mem = 2048MB\nmax_files_per_process = 2000\ncheckpoint_segments = 200\nmax_wal_senders = 5 \n\nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\n \n \nQueries taking lot of time are:\n==================================\n \n \n2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe\n ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n \n \n2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON\n p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST') ORDER BY feature_id\n \n \nTop command output:\n \ntop - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\nTasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\nCpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 32830016k total,\n32142596k used, 687420k free, 77460k buffers\nSwap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\n \n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\n18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\n16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\n14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\n1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\n13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\n20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\n13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\n41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 pgadmin3\n2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\n3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\n15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\n18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\n20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\n24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\n28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\n1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\n \nRegards,\nDinesh 
Chandra\n|Database administrator (Oracle/PostgreSQL)|\n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 3 Mar 2017 10:29:49 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue in PostgreSQL server..."
},
{
"msg_contents": "Hello Dinesh,\n\nYou can try the EXPLAIN tool\n\npsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p\nINNER JOIN evidence.observation_evidence oe ON p.feature_id =\noe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time\n> '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10\n00:06:56.056 IST') ORDER BY feature_id\n\nThen paste here the result.\n\nThanks\n\nOn Fri, Mar 3, 2017 at 5:29 PM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Dear Experts,\n>\n>\n>\n> I need your suggestions to resolve the performance issue reported on our\n> *PostgreSQL9.1* production database having 1.5 TB *Size*. I have observed\n> that, some select queries with order by clause are taking lot of time in\n> execution and forcing applications to give slow response.\n>\n>\n>\n> The configuration of database server is :\n>\n>\n>\n> Architecture: x86_64\n>\n> CPU op-mode(s): 32-bit, 64-bit\n>\n> CPU’s : 8\n>\n> Core(s) per socket: 4\n>\n> Socket(s): 2\n>\n> Model name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\n>\n>\n>\n> *RAM* : 32 GB\n>\n> *SWAP* :8 Gb\n>\n>\n>\n> *Kernel parameter*:\n>\n>\n>\n> kernel.shmmax = 32212254720\n>\n> kernel.shmall = 1073741824\n>\n>\n>\n>\n>\n> Values of PostgreSQL.conf parameters are :\n>\n>\n>\n> shared_buffers = 10GB\n>\n> temp_buffers = 32MB\n>\n> work_mem = 512MB\n>\n> maintenance_work_mem = 2048MB\n>\n> max_files_per_process = 2000\n>\n> checkpoint_segments = 200\n>\n> max_wal_senders = 5\n>\n> wal_buffers = -1 # min 32kB, -1 sets based on\n> shared_buffers\n>\n>\n>\n>\n>\n> *Queries taking lot of time are:*\n>\n> ==================================\n>\n>\n>\n>\n>\n> 2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>:\n> SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN\n> evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE\n> p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10\n> 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST')\n> ORDER BY feature_id\n>\n>\n>\n>\n>\n> 2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>:\n> SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN\n> evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE\n> p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10\n> 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST')\n> ORDER BY feature_id\n>\n>\n>\n>\n>\n> *Top command output*:\n>\n>\n>\n> top - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\n>\n> Tasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\n>\n> Cpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si,\n> 0.0%st\n>\n> *Mem:* 32830016k total, *32142596k* used, *687420k* free, 77460k\n> buffers\n>\n> Swap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\n>\n>\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>\n> 30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\n>\n> 18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\n>\n> 16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\n>\n> 14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\n>\n> 1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\n>\n> 13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\n>\n> 20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\n>\n> 13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\n>\n> 41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 
pgadmin3\n>\n> 2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\n>\n> 3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\n>\n> 15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\n>\n> 18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\n>\n> 20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\n>\n> 24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\n>\n> 28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\n>\n> 1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| *\n>\n>\n>\n> ------------------------------\n>\n> DISCLAIMER:\n>\n> This email message is for the sole use of the intended recipient(s) and\n> may contain confidential and privileged information. Any unauthorized\n> review, use, disclosure or distribution is prohibited. If you are not the\n> intended recipient, please contact the sender by reply email and destroy\n> all copies of the original message. Check all attachments for viruses\n> before opening them. All views or opinions presented in this e-mail are\n> those of the author and may not reflect the opinion of Cyient or those of\n> our affiliates.\n>\n\nHello Dinesh,You can try the EXPLAIN toolpsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_idThen paste here the result.ThanksOn Fri, Mar 3, 2017 at 5:29 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nDear Experts,\n \nI need your suggestions to resolve the performance issue reported on our\nPostgreSQL9.1 production database having 1.5 TB Size. 
I have observed that, some select queries with order by clause are taking lot of time in execution and forcing applications to give slow response.\n\n \nThe configuration of database server is :\n \nArchitecture: x86_64 \nCPU op-mode(s): 32-bit, 64-bit\nCPU’s : 8\nCore(s) per socket: 4\nSocket(s): 2\nModel name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\n \nRAM : 32 GB\n\nSWAP :8 Gb\n\n \nKernel parameter:\n \nkernel.shmmax = 32212254720\nkernel.shmall = 1073741824\n \n \nValues of PostgreSQL.conf parameters are :\n \nshared_buffers = 10GB\ntemp_buffers = 32MB\nwork_mem = 512MB \n\nmaintenance_work_mem = 2048MB\nmax_files_per_process = 2000\ncheckpoint_segments = 200\nmax_wal_senders = 5 \n\nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\n \n \nQueries taking lot of time are:\n==================================\n \n \n2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe\n ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n \n \n2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON\n p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST') ORDER BY feature_id\n \n \nTop command output:\n \ntop - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\nTasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\nCpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 32830016k total,\n32142596k used, 687420k free, 77460k buffers\nSwap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\n \n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\n18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\n16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\n14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\n1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\n13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\n20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\n13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\n41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 pgadmin3\n2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\n3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\n15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\n18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\n20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\n24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\n28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\n1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)|\n\n \n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 3 Mar 2017 19:23:36 +0700",
"msg_from": "Nur Agus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
{
"msg_contents": "Dear Nur,\r\n\r\nThe below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\r\n\r\n\r\n QUERY PLAN\r\n\r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n-----------------------------------------\r\nUnique (cost=1679730.32..1679837.46 rows=21428 width=8) (actual time=154753.528..155657.818 rows=1607489 loops=1)\r\n -> Sort (cost=1679730.32..1679783.89 rows=21428 width=8) (actual time=154753.514..155087.734 rows=4053270 loops=1)\r\n Sort Key: p.feature_id\r\n Sort Method: quicksort Memory: 288302kB\r\n -> Hash Join (cost=1501657.09..1678188.87 rows=21428 width=8) (actual time=144146.620..152050.311 rows=4053270 loops=1)\r\n Hash Cond: (oe.evd_feature_id = p.feature_id)\r\n Join Filter: ((p.modification_time > '2015-05-10 03:36:56.056+05:30'::timestamp with time zone) OR (oe.modification_time > '2015-05-10 03:36:5\r\n6.056+05:30'::timestamp with time zone))\r\n -> Seq Scan on observation_evidence oe (cost=0.00..121733.18 rows=5447718 width=16) (actual time=0.007..1534.905 rows=5434406 loops=1)\r\n -> Hash (cost=1483472.70..1483472.70 rows=1454751 width=16) (actual time=144144.653..144144.653 rows=1607491 loops=1)\r\n Buckets: 262144 Batches: 1 Memory Usage: 75352kB\r\n -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1\r\n59 rows=1607491 loops=1)\r\n Index Cond: (domain_class_id = 11)\r\nTotal runtime: 155787.379 ms\r\n(13 rows)\r\n\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Nur Agus [mailto:[email protected]]\r\nSent: 03 March, 2017 5:54 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Performance issue in PostgreSQL server...\r\n\r\nHello Dinesh,\r\n\r\nYou can try the EXPLAIN tool\r\n\r\npsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\r\n\r\nThen paste here the result.\r\n\r\nThanks\r\n\r\nOn Fri, Mar 3, 2017 at 5:29 PM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nDear Experts,\r\n\r\nI need your suggestions to resolve the performance issue reported on our PostgreSQL9.1 production database having 1.5 TB Size. 
I have observed that, some select queries with order by clause are taking lot of time in execution and forcing applications to give slow response.\r\n\r\nThe configuration of database server is :\r\n\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nCPU’s : 8\r\nCore(s) per socket: 4\r\nSocket(s): 2\r\nModel name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\r\n\r\nRAM : 32 GB\r\nSWAP :8 Gb\r\n\r\nKernel parameter:\r\n\r\nkernel.shmmax = 32212254720\r\nkernel.shmall = 1073741824\r\n\r\n\r\nValues of PostgreSQL.conf parameters are :\r\n\r\nshared_buffers = 10GB\r\ntemp_buffers = 32MB\r\nwork_mem = 512MB\r\nmaintenance_work_mem = 2048MB\r\nmax_files_per_process = 2000\r\ncheckpoint_segments = 200\r\nmax_wal_senders = 5\r\nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\r\n\r\n\r\nQueries taking lot of time are:\r\n==================================\r\n\r\n\r\n2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\r\n\r\n\r\n2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST') ORDER BY feature_id\r\n\r\n\r\nTop command output:\r\n\r\ntop - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\r\nTasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\r\nCpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si, 0.0%st\r\nMem: 32830016k total, 32142596k used, 687420k free, 77460k buffers\r\nSwap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\r\n\r\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\r\n30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\r\n18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\r\n16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\r\n14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\r\n1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\r\n13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\r\n20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\r\n13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\r\n41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 pgadmin3\r\n2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\r\n3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\r\n15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\r\n18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\r\n20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\r\n24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\r\n28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\r\n1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)|\r\n\r\n\r\n________________________________\r\n\r\nDISCLAIMER:\r\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\r\n\r\n\n\n\n\n\n\n\n\n\nDear Nur,\n \nThe below is the output for\r\npsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time\r\n > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n \n \n QUERY PLAN\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------------------------------\nUnique (cost=1679730.32..1679837.46 rows=21428 width=8) (actual time=154753.528..155657.818 rows=1607489 loops=1)\n -> Sort (cost=1679730.32..1679783.89 rows=21428 width=8) (actual time=154753.514..155087.734 rows=4053270 loops=1)\n Sort Key: p.feature_id\n Sort Method: quicksort Memory: 288302kB\n -> Hash Join (cost=1501657.09..1678188.87 rows=21428 width=8) (actual time=144146.620..152050.311 rows=4053270 loops=1)\n Hash Cond: (oe.evd_feature_id = p.feature_id)\n Join Filter: ((p.modification_time > '2015-05-10 03:36:56.056+05:30'::timestamp with time zone) OR (oe.modification_time > '2015-05-10 03:36:5\n6.056+05:30'::timestamp with time zone))\n -> Seq Scan on observation_evidence oe (cost=0.00..121733.18 rows=5447718 width=16) (actual time=0.007..1534.905 rows=5434406 loops=1)\n -> Hash (cost=1483472.70..1483472.70 rows=1454751 width=16) (actual time=144144.653..144144.653 rows=1607491 loops=1)\n Buckets: 262144 Batches: 1 Memory Usage: 75352kB\n -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1\n59 rows=1607491 loops=1)\n Index Cond: (domain_class_id = 11)\nTotal runtime: 155787.379 ms\n(13 rows)\n \n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Nur Agus [mailto:[email protected]]\r\n\nSent: 03 March, 2017 5:54 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: Re: [PERFORM] Performance issue in PostgreSQL server...\n \n\nHello Dinesh,\n\n \n\n\nYou can try the EXPLAIN tool\n\n\n \n\n\npsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN\r\n (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n\n\n \n\n\nThen paste here the result.\n\n\n \n\n\nThanks\n\n\n\n \n\nOn Fri, Mar 3, 2017 at 5:29 PM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nDear Experts,\n \nI need your suggestions to resolve the performance issue reported on our\r\nPostgreSQL9.1 production database having 1.5 TB Size. 
I have observed that, some select queries with order by clause are taking lot of time in execution and forcing applications to give slow response.\r\n\n \nThe configuration of database server is :\n \nArchitecture: x86_64 \nCPU op-mode(s): 32-bit, 64-bit\nCPU’s : 8\nCore(s) per socket: 4\nSocket(s): 2\nModel name: Intel(R) Xeon(R) CPU E5-2609 0 @ 2.40GHz\n \nRAM : 32 GB\r\n\nSWAP :8 Gb\r\n\n \nKernel parameter:\n \nkernel.shmmax = 32212254720\nkernel.shmall = 1073741824\n \n \nValues of PostgreSQL.conf parameters are :\n \nshared_buffers = 10GB\ntemp_buffers = 32MB\nwork_mem = 512MB \r\n\nmaintenance_work_mem = 2048MB\nmax_files_per_process = 2000\ncheckpoint_segments = 200\nmax_wal_senders = 5 \r\n\nwal_buffers = -1 # min 32kB, -1 sets based on shared_buffers\n \n \nQueries taking lot of time are:\n==================================\n \n \n2017-03-02 00:46:50 IST LOG: duration: 2492951.927 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point\r\n p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n \n \n2017-03-02 01:05:16 IST LOG: duration: 516250.512 ms execute <unnamed>: SELECT DISTINCT feature_id FROM evidence.point\r\n p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (3) AND (p.modification_time > '2015-05-10 01:22:59.059 IST' OR oe.modification_time > '2015-05-10 01:22:59.059 IST') ORDER BY feature_id\n \n \nTop command output:\n \ntop - 15:13:15 up 66 days, 3:45, 8 users, load average: 1.84, 1.59, 1.57\nTasks: 830 total, 1 running, 828 sleeping, 0 stopped, 1 zombie\nCpu(s): 3.4%us, 0.7%sy, 0.0%ni, 81.7%id, 14.2%wa, 0.0%hi, 0.0%si, 0.0%st\nMem: 32830016k total,\r\n32142596k used, 687420k free, 77460k buffers\nSwap: 8190972k total, 204196k used, 7986776k free, 27981268k cached\n \n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n30639 postgres 20 0 10.5g 4.7g 4.7g S 13.5 14.9 10:20.95 postgres\n18185 postgres 20 0 10.5g 603m 596m S 4.9 1.9 2:51.16 postgres\n16543 postgres 20 0 10.5g 2.8g 2.8g S 4.3 8.8 1:34.04 postgres\n14710 postgres 20 0 10.5g 2.9g 2.9g S 3.9 9.2 1:20.84 postgres\n1214 root 20 0 15668 1848 896 S 1.0 0.0 130:46.43 top\n13462 postgres 20 0 10.5g 1.4g 1.3g S 1.0 4.3 0:25.56 postgres\n20081 root 20 0 15668 1880 936 R 1.0 0.0 0:00.12 top\n13478 postgres 20 0 10.5g 2.1g 2.1g S 0.7 6.9 0:56.43 postgres\n41107 root 20 0 416m 10m 4892 S 0.7 0.0 305:25.71 pgadmin3\n2680 root 20 0 0 0 0 S 0.3 0.0 103:38.54 nfsiod\n3558 root 20 0 13688 1100 992 S 0.3 0.0 45:00.36 gam_server\n15576 root 20 0 0 0 0 S 0.3 0.0 0:01.16 flush-253:1\n18430 postgres 20 0 10.5g 18m 13m S 0.3 0.1 0:00.64 postgres\n20083 postgres 20 0 105m 1852 1416 S 0.3 0.0 0:00.01 bash\n24188 postgres 20 0 102m 1856 832 S 0.3 0.0 0:23.39 sshd\n28250 postgres 20 0 156m 1292 528 S 0.3 0.0 0:46.86 postgres\n1 root 20 0 19356 1188 996 S 0.0 0.0 0:05.00 init\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)|\r\n\n \n\n \n\n\n\n\r\nDISCLAIMER:\n\r\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\r\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. 
All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Fri, 3 Mar 2017 12:44:07 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
{
"msg_contents": "Dinesh Chandra 12108 <[email protected]> writes:\n> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n\nI think that's a fundamentally slow query and you're not going to be able\nto make it better without rethinking your requirements and/or data\nrepresentation. As written, that requires the server to form the entire\njoin of p to oe on feature_id, with the only filter before the join being\nthe evidently-none-too-selective domain_class_id condition. Only after\njoining can it apply the OR condition. So this is inherently processing a\nlot of rows.\n\nIf the OR arms were individually pretty selective you could rewrite this\ninto a UNION of two joins, a la the discussion at\nhttps://www.postgresql.org/message-id/flat/[email protected]\nbut given the dates involved I'm betting that won't help very much.\n\nOr maybe you could try\n\nselect feature_id from p where domain_class_id IN (11) AND p.modification_time > '2015-05-10 00:06:56.056 IST'\nintersect\nselect feature_id from oe where oe.modification_time > '2015-05-10 00:06:56.056 IST'\norder by feature_id\n\nalthough I'm not entirely certain that that has exactly the same\nsemantics (-ENOCAFFEINE), and it might still be none too quick.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 03 Mar 2017 10:19:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
{
"msg_contents": "On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Dear Nur,\n>\n>\n>\n> The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT\n> feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence\n> oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND\n> (p.modification_time > '2015-05-10 00:06:56.056 IST' OR\n> oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n>\n>\n>\n>\n...\n\n\n> -> Index Scan using point_domain_class_id_index on\n> point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual\n> time=27.265..142101.1\n>\n> 59 rows=1607491 loops=1)\n>\n> Index Cond: (domain_class_id = 11)\n>\n\nWhy wouldn't this be using a bitmap scan rather than a regular index scan?\nIt seems like it should prefer the bitmap scan, unless the table is well\nclustered on domain_class_id. In which case, why isn't it just faster?\n\nYou could try repeating the explain analyze after setting enable_indexscan\n=off to see what that gives. If it gives a seq scan, then repeat with\nenable_seqscan also turned off. Or If it gives the bitmap scan, then\nrepeat with enable_bitmapscan turned off.\n\nHow many rows is in point, and how big is it?\n\nThe best bet for making this better might be to have an index on\n(domain_class_id, modification_time) and hope for an index only scan.\nExcept that you are on 9.1, so first you would have to upgrade. Which\nwould allow you to use BUFFERS in the explain analyze, as well as\ntrack_io_timings, both of which would also be pretty nice to see. Using\n9.1 is like having one hand tied behind your back.\n\nAlso, any idea why this execution of this query 15 is times faster than the\nexecution you found in the log file? Was the top output you showed in the\nfirst email happening at the time the really slow query was running, or was\nthat from a different period?\n\nCheers,\n\nJeff\n\nOn Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nDear Nur,\n \nThe below is the output for\npsql=> EXPLAIN ANALYZE SELECT DISTINCT feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND (p.modification_time\n > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n \n... -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1\n59 rows=1607491 loops=1)\n Index Cond: (domain_class_id = 11)Why wouldn't this be using a bitmap scan rather than a regular index scan? It seems like it should prefer the bitmap scan, unless the table is well clustered on domain_class_id. In which case, why isn't it just faster?You could try repeating the explain analyze after setting enable_indexscan =off to see what that gives. If it gives a seq scan, then repeat with enable_seqscan also turned off. Or If it gives the bitmap scan, then repeat with enable_bitmapscan turned off.How many rows is in point, and how big is it?The best bet for making this better might be to have an index on (domain_class_id, modification_time) and hope for an index only scan. Except that you are on 9.1, so first you would have to upgrade. Which would allow you to use BUFFERS in the explain analyze, as well as track_io_timings, both of which would also be pretty nice to see. Using 9.1 is like having one hand tied behind your back. 
Also, any idea why this execution of this query is 15 times faster than the execution you found in the log file? Was the top output you showed in the first email happening at the time the really slow query was running, or was that from a different period?\n\nCheers,\nJeff",
"msg_date": "Sun, 5 Mar 2017 20:23:08 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
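A minimal sketch of the diagnostics and the covering index Jeff suggests above, using the table, index, and column names quoted in the thread; the timestamp literal is shortened, the index name is illustrative, and while the index can be created on 9.1, index-only scans only arrive in 9.2:

    -- Compare plans with the plain index scan disabled (session-local setting).
    SET enable_indexscan = off;
    EXPLAIN ANALYZE
    SELECT DISTINCT feature_id
    FROM evidence.point p
    JOIN evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id
    WHERE p.domain_class_id IN (11)
      AND (p.modification_time > '2015-05-10 00:06:56.056'
           OR oe.modification_time > '2015-05-10 00:06:56.056')
    ORDER BY feature_id;
    -- (if this produces a seq scan, repeat with enable_seqscan = off as well)
    RESET enable_indexscan;

    -- Candidate two-column index covering the filter and the time condition.
    CREATE INDEX point_domain_class_mtime_idx
        ON evidence.point (domain_class_id, modification_time);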
{
"msg_contents": "On Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:\n> On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n> > The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT\n> > feature_id FROM evidence.point p INNER JOIN evidence.observation_evidence\n> > oe ON p.feature_id = oe.evd_feature_id WHERE p.domain_class_id IN (11) AND\n> > (p.modification_time > '2015-05-10 00:06:56.056 IST' OR\n> > oe.modification_time > '2015-05-10 00:06:56.056 IST') ORDER BY feature_id\n> ...\n> \n> > -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1 59 rows=1607491 loops=1)\n> > Index Cond: (domain_class_id = 11)\n> \n> Why wouldn't this be using a bitmap scan rather than a regular index scan?\n> It seems like it should prefer the bitmap scan, unless the table is well\n> clustered on domain_class_id. In which case, why isn't it just faster?\n\nCould you send:\n\nSELECT * FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n\n.. or if that's too verbose or you don't want to share the histogram or MCV\nlist:\n\nSELECT correlation FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 5 Mar 2017 23:24:02 -0600",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
{
"msg_contents": "Dear Justin,\n\nBelow is the output of Query SELECT * FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n\n\nschemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n\n\"evidence\"|\"point\"|\"domain_class_id\"|f|0|8|10|\"{7,9,2,11,43,3,1,10,4,17}\"|\"{0.9322,0.0451333,0.0145,0.00393333,0.00183333,0.00146667,0.0005,0.0003,6.66667e-05,6.66667e-05}\"|\"\"|0.889078\n\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\n\n-----Original Message-----\nFrom: Justin Pryzby [mailto:[email protected]]\nSent: 06 March, 2017 10:54 AM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: Nur Agus <[email protected]>; Jeff Janes <[email protected]>; [email protected]\nSubject: Re: [PERFORM] Performance issue in PostgreSQL server...\n\nOn Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:\n> On Fri, Mar 3, 2017 at 4:44 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n> > The below is the output for psql=> EXPLAIN ANALYZE SELECT DISTINCT\n> > feature_id FROM evidence.point p INNER JOIN\n> > evidence.observation_evidence oe ON p.feature_id = oe.evd_feature_id\n> > WHERE p.domain_class_id IN (11) AND (p.modification_time >\n> > '2015-05-10 00:06:56.056 IST' OR oe.modification_time > '2015-05-10\n> > 00:06:56.056 IST') ORDER BY feature_id\n> ...\n>\n> > -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1 59 rows=1607491 loops=1)\n> > Index Cond: (domain_class_id = 11)\n>\n> Why wouldn't this be using a bitmap scan rather than a regular index scan?\n> It seems like it should prefer the bitmap scan, unless the table is\n> well clustered on domain_class_id. In which case, why isn't it just faster?\n\nCould you send:\n\nSELECT * FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n\n.. or if that's too verbose or you don't want to share the histogram or MCV\nlist:\n\nSELECT correlation FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n\nJustin\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Mar 2017 12:17:22 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
},
{
"msg_contents": "On Mon, Mar 06, 2017 at 12:17:22PM +0000, Dinesh Chandra 12108 wrote:\n> Below is the output of Query SELECT * FROM pg_stats WHERE tablename='point' AND attname='domain_class_id' ;\n> \n> \n> schemaname | tablename | attname | inherited | null_frac | avg_width | n_distinct | most_common_vals | most_common_freqs | histogram_bounds | correlation\n> \n> \"evidence\"|\"point\"|\"domain_class_id\"|f|0|8|10|\"{7,9,2,11,43,3,1,10,4,17}\"|\"{0.9322,0.0451333,0.0145,0.00393333,0.00183333,0.00146667,0.0005,0.0003,6.66667e-05,6.66667e-05}\"|\"\"|0.889078\n\nOn Fri, Mar 03, 2017 at 12:44:07PM +0000, Dinesh Chandra 12108 wrote:\n> -> Index Scan using point_domain_class_id_index on point p (cost=0.00..1483472.70 rows=1454751 width=16) (actual time=27.265..142101.1 59 rows=1607491 loops=1)\n\nOn Sun, Mar 05, 2017 at 08:23:08PM -0800, Jeff Janes wrote:\n> Why wouldn't this be using a bitmap scan rather than a regular index scan?\n> It seems like it should prefer the bitmap scan, unless the table is well\n> clustered on domain_class_id. In which case, why isn't it just faster?\n\nI missed your response until now, and can't see that anybody else responded,\nbut I suspect the issue is that the *table* is highly correlated WRT this\ncolumn, but the index may not be, probably due to duplicated index keys.\npostgres only stores statistics on expression indices, and falls back to\ncorrelation of table column of a simple indices.\n\nIf you're still fighting this, would you send result of:\n\nSELECT domain_class_id, count(1) FROM point GROUP BY 1 ORDER BY 2 DESC LIMIT 22;\nor,\nSELECT count(1) FROM point GROUP BY domain_class_id ORDER BY 1 DESC LIMIT 22;\n\nif there's much repetition in the index keys, then PG's planner thinks an index\nscan has low random_page_cost, and effective_cache_size has little effect on\nlarge tables, and it never uses bitmap scan, which blows up if the index is\nfragmented and has duplicate keys. The table reads end up costing something\nlike 1454751*random_page_cost nonsequential reads and fseek() calls when it\nthinks it'll cost only 1454751*16*seq_page_cost.\n\nIs the query much faster if you first reindex point_domain_class_id_index ?\n\nThis has come up before, see:\n> https://www.postgresql.org/message-id/flat/520D6610.8040907%40emulex.com#[email protected]\n> https://www.postgresql.org/message-id/flat/20160524173914.GA11880%40telsasoft.com#[email protected]\n> https://www.postgresql.org/message-id/flat/n6cmpug13b9rk1srebjvhphg0lm8dou1kn%404ax.com#[email protected]\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Apr 2017 13:47:47 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue in PostgreSQL server..."
}
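A compact version of the checks Justin asks for, again with the names from the thread (the schema qualification on the index is an assumption); REINDEX blocks writes to the table while it runs, so it is best tried in a quiet window:

    -- How many rows share each index key?  Heavy key duplication plus a
    -- fragmented index is the scenario described above.
    SELECT domain_class_id, count(*) AS n
    FROM evidence.point
    GROUP BY domain_class_id
    ORDER BY n DESC
    LIMIT 22;

    -- Rebuild the index, then re-run the EXPLAIN ANALYZE to compare.
    REINDEX INDEX evidence.point_domain_class_id_index;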
] |
[
{
"msg_contents": "We are having some performance issues after we upgraded to newest\nversion of PostgreSQL, before it everything was fast and smooth.\n\nUpgrade was done by pg_upgrade from 9.4 directly do 9.6.1. Now we\nupgraded to 9.6.2 with no improvement.\n\nSome information about our setup: Freebsd, Solaris (SmartOS), simple\nmaster-slave using streaming replication.\n\nProblem:\nVery high system CPU when master is streaming replication data, CPU\ngoes up to 77%. Only one process is generating this load, it's a\npostgresql startup process. When I attached a truss to this process I\nsaw a lot o read calls with almost the same number of errors (EAGAIN).\n\nroot@d8:~ # truss -c -p 38091\n^Csyscall seconds calls errors\nsemop 0.001611782 198 0\nwrite 0.000074404 2 0\nread 2.281535100 17266 12375\nopenat 0.000683532 48 0\nlseek 0.177612479 20443 0\nclose 0.000395549 48 0\n ------------- ------- -------\n 2.461912846 38005 12375\n\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(444,0x0,SEEK_END) = 32571392 (0x1f10000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(444,0x0,SEEK_END) = 32571392 (0x1f10000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(7,0x0,SEEK_END) = 164487168 (0x9cde000)\nlseek(778,0x0,SEEK_END) = 57344 (0xe000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\nlseek(894,0x4b7e000,SEEK_SET) = 79159296 (0x4b7e000)\nread(894,\" ~\\0\\08\\a\\M--m\\0\\0\\^A\\0\\M^T\\0000\"...,8192) = 8192 (0x2000)\nlseek(3,0xfa6000,SEEK_SET) = 16408576 (0xfa6000)\nread(3,\"\\M^S\\M-P\\^E\\0\\^A\\0\\0\\0\\0`\\M-z\"...,8192) = 8192 (0x2000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\nlseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\nlseek(894,0x449c000,SEEK_SET) = 71942144 (0x449c000)\nread(894,\"\\^_~\\0\\0\\M-H\\M-H\\M-B\\M-b\\0\\0\\^E\"...,8192) = 8192 (0x2000)\nlseek(818,0x0,SEEK_END) = 57344 (0xe000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\nlseek(442,0x0,SEEK_END) = 10174464 (0x9b4000)\nlseek(442,0x4c4000,SEEK_SET) = 4997120 (0x4c4000)\nread(442,\"\\^_~\\0\\0\\M-P\\M-+\\M-1\\M-b\\0\\0\\0\\0\"...,8192) = 8192 (0x2000)\nread(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n\nDescriptor 6 is a pipe\n\nRead call try to read one byte over and over, I looked up to source\ncode and I think this file is responsible for this behavior\nsrc/backend/storage/ipc/latch.c. There was no such file in 9.4.\n\n\n-- \nPiotr Gasidło\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 6 Mar 2017 14:20:42 +0100",
"msg_from": "=?UTF-8?Q?Piotr_Gasid=C5=82o?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance issue after upgrading from 9.4 to 9.6"
},
{
"msg_contents": "On Mon, Mar 6, 2017 at 7:20 AM, Piotr Gasidło <[email protected]> wrote:\n> We are having some performance issues after we upgraded to newest\n> version of PostgreSQL, before it everything was fast and smooth.\n>\n> Upgrade was done by pg_upgrade from 9.4 directly do 9.6.1. Now we\n> upgraded to 9.6.2 with no improvement.\n>\n> Some information about our setup: Freebsd, Solaris (SmartOS), simple\n> master-slave using streaming replication.\n>\n> Problem:\n> Very high system CPU when master is streaming replication data, CPU\n> goes up to 77%. Only one process is generating this load, it's a\n> postgresql startup process. When I attached a truss to this process I\n> saw a lot o read calls with almost the same number of errors (EAGAIN).\n>\n> root@d8:~ # truss -c -p 38091\n> ^Csyscall seconds calls errors\n> semop 0.001611782 198 0\n> write 0.000074404 2 0\n> read 2.281535100 17266 12375\n> openat 0.000683532 48 0\n> lseek 0.177612479 20443 0\n> close 0.000395549 48 0\n> ------------- ------- -------\n> 2.461912846 38005 12375\n>\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(444,0x0,SEEK_END) = 32571392 (0x1f10000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(444,0x0,SEEK_END) = 32571392 (0x1f10000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(7,0x0,SEEK_END) = 164487168 (0x9cde000)\n> lseek(778,0x0,SEEK_END) = 57344 (0xe000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\n> lseek(894,0x4b7e000,SEEK_SET) = 79159296 (0x4b7e000)\n> read(894,\" ~\\0\\08\\a\\M--m\\0\\0\\^A\\0\\M^T\\0000\"...,8192) = 8192 (0x2000)\n> lseek(3,0xfa6000,SEEK_SET) = 16408576 (0xfa6000)\n> read(3,\"\\M^S\\M-P\\^E\\0\\^A\\0\\0\\0\\0`\\M-z\"...,8192) = 8192 (0x2000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\n> lseek(894,0x0,SEEK_END) = 139296768 (0x84d8000)\n> lseek(894,0x449c000,SEEK_SET) = 71942144 (0x449c000)\n> read(894,\"\\^_~\\0\\0\\M-H\\M-H\\M-B\\M-b\\0\\0\\^E\"...,8192) = 8192 (0x2000)\n> lseek(818,0x0,SEEK_END) = 57344 (0xe000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n> lseek(442,0x0,SEEK_END) = 10174464 (0x9b4000)\n> lseek(442,0x4c4000,SEEK_SET) = 4997120 (0x4c4000)\n> read(442,\"\\^_~\\0\\0\\M-P\\M-+\\M-1\\M-b\\0\\0\\0\\0\"...,8192) = 8192 (0x2000)\n> read(6,0x7fffffffa0c7,1) ERR#35 'Resource temporarily unavailable'\n>\n> Descriptor 6 is a pipe\n>\n> Read call try to read one byte over and over, I looked up to source\n> code and I think this file is responsible for this behavior\n> src/backend/storage/ipc/latch.c. There was no such file in 9.4.\n\nIs a git bisect out of the question?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Mar 2017 08:21:13 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance issue after upgrading from 9.4 to 9.6"
}
] |
[
{
"msg_contents": "I have the following query\n\nwhich takes 90 seconds to finish. *JOB_MEMORY* has 45 million rows,\n*JOB_MEMORY_STORAGE* has 50 000 rows.\n\nQuery plan:\n\nAs you can see, it is indeed using an index *JOB_MEMORY_id_desc* in a\nbackward direction, but it is very slow.\n\nWhen I change ordering to *desc* in the query, the query finishes\nimmediately and the query plan is\n\nThere is also an index on *JOB_MEMORY.id*. I also tried a composite index on\n*(fk_id_storage, id)*, but it did not help (and was not actually used).\nI ran *ANALYZE* on both tables.\n\nPostgres 9.6.2, Ubuntu 14.04, 192 GB RAM, SSD, shared_buffers = 8196 MB.\nHow can I help Postgres execute the query with *asc* ordering as fast as the\none with *desc*?\n\nThank you.\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Huge-difference-between-ASC-and-DESC-ordering-tp5947712.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nI have the following query\n\n\nselect *\nfrom \"JOB_MEMORY_STORAGE\" st\n inner join \"JOB_MEMORY\" s on s.fk_id_storage = st.id\nwhere st.fk_id_client = 20045\norder by s.id asc limit 50\n\n\nwhich takes 90 seconds to finish. JOB_MEMORY has 45 million rows, JOB_MEMORY_STORAGE has 50 000 rows.\n\nQuery plan:\n\n\nLimit (cost=0.98..1971.04 rows=50 width=394) (actual time=93357.197..93357.654 rows=50 loops=1)\n -> Nested Loop (cost=0.98..344637384.09 rows=8746875 width=394) (actual time=93357.194..93357.584 rows=50 loops=1)\n -> Index Scan Backward using \"JOB_MEMORY_id_desc\" on \"JOB_MEMORY\" s (cost=0.56..113858938.25 rows=45452112 width=164) (actual time=0.059..18454.332 rows=18883917 loops=1)\n -> Index Scan using \"JOB_MEMORY_STORAGE_pkey\" on \"JOB_MEMORY_STORAGE\" st (cost=0.41..5.07 rows=1 width=222) (actual time=0.002..0.002 rows=0 loops=18883917)\n Index Cond: (id = s.fk_id_storage)\n Filter: (fk_id_client = 20045)\n Rows Removed by Filter: 1\nPlanning time: 1.932 ms\nExecution time: 93357.745 ms\n\n\nAs you can see, it is indeed using an index JOB_MEMORY_id_desc in a backward direction, but it is very slow.\n\nWhen I change ordering to desc in the query, the query finishes immediately and the query plan is\n\n\nLimit (cost=0.98..1981.69 rows=50 width=394) (actual time=37.577..37.986 rows=50 loops=1)\n -> Nested Loop (cost=0.98..344613154.25 rows=8699235 width=394) (actual time=37.575..37.920 rows=50 loops=1)\n -> Index Scan using \"JOB_MEMORY_id_desc\" on \"JOB_MEMORY\" s (cost=0.56..113850978.19 rows=45448908 width=165) (actual time=0.013..5.117 rows=6610 loops=1)\n -> Index Scan using \"JOB_MEMORY_STORAGE_pkey\" on \"JOB_MEMORY_STORAGE\" st (cost=0.41..5.07 rows=1 width=221) (actual time=0.003..0.003 rows=0 loops=6610)\n Index Cond: (id = s.fk_id_storage)\n Filter: (fk_id_client = 20045)\n Rows Removed by Filter: 1\nPlanning time: 0.396 ms\nExecution time: 38.058 ms\n\n\nThere is also an index on JOB_MEMORY.id. I also tried a composite index on (fk_id_storage, id), but it did not help (and was not actually used).\n\nI ran ANALYZE on both tables.\n\nPostgres 9.6.2, Ubuntu 14.04, 192 GB RAM, SSD, shared_buffers = 8196 MB.\n\nHow can I help Postgres execute the query with asc ordering as fast as the one with desc?\n\nThank you.\n\n\t\n\t\n\t\n\nView this message in context: Huge difference between ASC and DESC ordering\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Mon, 6 Mar 2017 07:22:26 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "Huge difference between ASC and DESC ordering"
},
{
"msg_contents": "On Mon, Mar 6, 2017 at 6:22 AM, twoflower <[email protected]> wrote:\n\n> I have the following query\n>\n> select *\n> from \"JOB_MEMORY_STORAGE\" st\n> inner join \"JOB_MEMORY\" s on s.fk_id_storage = st.id\n> where st.fk_id_client = 20045\n> order by s.id asc limit 50\n>\n>\nThe query stops as soon as it finds 50 rows which meet fk_id_client =\n20045. When you order one way, it needs to cover 18883917 to find those\n50. When you order the other way, it takes 6610 to find those 50. So the\nproblem is that the tuples which satisfy st.fk_id_client = 20045 all lie\ntowards one end of the s.id range, but PostgreSQL doesn't know that. This\nis a hard type of problem to solve at a fundamental level. The best you\ncan do is work around it. Do you really need the order to be on s.id? If\nso, you can get PostgreSQL to stop trying to use the index for ordering\npurposes by writing that as \"order by s.id+0 asc limit 50\", or by using a\nCTE which does the join and have the ORDER BY and LIMIT outside the CTE.\n\nDo you have an index on fk_id_client? Or perhaps better, (fk_id_client,\nid)? How many rows satisfy fk_id_client = 20045?\n\n\nHow can I help Postgres execute the query with *asc* ordering as fast as\n> the one with *desc*?\n>\n\nYou probably can't. Your data us well suited to one, and ill suited for\nthe other. You can probably make it faster than it currently is, but not\nas fast as the DESC version.\n\nCheers,\n\nJeff\n\nOn Mon, Mar 6, 2017 at 6:22 AM, twoflower <[email protected]> wrote:I have the following query\n\nselect *\nfrom \"JOB_MEMORY_STORAGE\" st\n inner join \"JOB_MEMORY\" s on s.fk_id_storage = st.id\nwhere st.fk_id_client = 20045\norder by s.id asc limit 50The query stops as soon as it finds 50 rows which meet fk_id_client = 20045. When you order one way, it needs to cover 18883917 to find those 50. When you order the other way, it takes 6610 to find those 50. So the problem is that the tuples which satisfy st.fk_id_client = 20045 all lie towards one end of the s.id range, but PostgreSQL doesn't know that. This is a hard type of problem to solve at a fundamental level. The best you can do is work around it. Do you really need the order to be on s.id? If so, you can get PostgreSQL to stop trying to use the index for ordering purposes by writing that as \"order by s.id+0 asc limit 50\", or by using a CTE which does the join and have the ORDER BY and LIMIT outside the CTE.Do you have an index on fk_id_client? Or perhaps better, (fk_id_client, id)? How many rows satisfy fk_id_client = 20045?How can I help Postgres execute the query with asc ordering as fast as the one with desc?\nYou probably can't. Your data us well suited to one, and ill suited for the other. You can probably make it faster than it currently is, but not as fast as the DESC version.Cheers,Jeff",
"msg_date": "Mon, 6 Mar 2017 08:19:31 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge difference between ASC and DESC ordering"
},
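Roughly what the two rewrites Jeff describes look like, reusing the query from the original post; "ORDER BY s.id + 0" hides the index ordering from the planner, and on 9.6 a CTE acts as an optimization fence with the same effect:

    -- Variant 1: stop the planner from walking the id index for the ORDER BY.
    SELECT *
    FROM "JOB_MEMORY_STORAGE" st
    JOIN "JOB_MEMORY" s ON s.fk_id_storage = st.id
    WHERE st.fk_id_client = 20045
    ORDER BY s.id + 0 ASC
    LIMIT 50;

    -- Variant 2: join inside a CTE, order and limit outside it
    -- (only the JOB_MEMORY columns are kept, to keep the CTE simple).
    WITH matches AS (
        SELECT s.*
        FROM "JOB_MEMORY_STORAGE" st
        JOIN "JOB_MEMORY" s ON s.fk_id_storage = st.id
        WHERE st.fk_id_client = 20045
    )
    SELECT * FROM matches ORDER BY id ASC LIMIT 50;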
{
"msg_contents": "Thank you Jeff.\n\nThere are 7 million rows satisfying fk_id_client = 20045. There is an index\non fk_id_client, now I added a composite (fk_id_client, id) index but that\ndid not help.\n\nI see the point of what you are saying, but still don't understand how these\ntwo situations (*asc* vs. *desc*) are not symmetrical. I mean, there /is/ an\nascending index on *JOB_MEMORY.id*, so why does it matter which end I am\npicking the data from?\n\nThe thing is, even when I force Postgres to use the ascending index on *id*,\nit's still orders of magnitude slower than the *desc* version (even when\nthat one goes through the index backwards).\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Huge-difference-between-ASC-and-DESC-ordering-tp5947712p5947737.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\nThank you Jeff.\n\nThere are 7 million rows satisfying fk_id_client = 20045. There is an index on fk_id_client, now I added a composite (fk_id_client, id) index but that did not help.\n\nI see the point of what you are saying, but still don't understand how these two situations (asc vs. desc) are not symmetrical. I mean, there is an ascending index on JOB_MEMORY.id, so why does it matter which end I am picking the data from?\n\nThe thing is, even when I force Postgres to use the ascending index on id, it's still orders of magnitude slower than the desc version (even when that one goes through the index backwards).\n\n\n\n\n\n\n\t\n\t\n\t\n\nView this message in context: Re: Huge difference between ASC and DESC ordering\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.",
"msg_date": "Mon, 6 Mar 2017 09:46:32 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge difference between ASC and DESC ordering"
},
{
"msg_contents": "On Mon, Mar 6, 2017 at 8:46 AM, twoflower <[email protected]> wrote:\n\n> Thank you Jeff.\n>\n> There are 7 million rows satisfying fk_id_client = 20045. There is an\n> index on fk_id_client, now I added a composite (fk_id_client, id) index but\n> that did not help.\n>\n\nWith 7 million rows, you shouldn't expect any magic here. But still 7\nmillion is less than 18 million, and you may be able to get that 7 million\nwith more sequential-like IO.\n\nDid you force PostgreSQL to stop using the index on s.id? If not, do\nthat. If so, please post the EXPLAIN (analyze) of the plan it does switch\nto.\n\n\n\n> I see the point of what you are saying, but still don't understand how\n> these two situations (*asc* vs. *desc*) are not symmetrical.\n\n\nThey return different data. How could they be symmetrical? You are\ngetting a different 50 rows depending on which way you order the data in\nthe query. You are **not** getting the same 50 rows, just in a different\norder from among the 50.\n\n\n\n> I mean, there *is* an ascending index on *JOB_MEMORY.id*, so why does it\n> matter which end I am picking the data from?\n>\n\n\nThe query stops as soon as it finds 50 rows which meet fk_id_client =\n20045. When you order one way, it needs to cover 18883917 to find those\n50. When you order the other way, it takes 6610 to find those 50. This\nfact does not depend on whether the index is ASC or DESC. If you traverse\na DESC index backwards, it has exactly the same issue as if you traverse a\nASC index forward. Either way, once it decides to use that index to obtain\nthe ordering of the query, it has to inspect 18883917 tuples before it\nsatisfies the LIMIT.\n\n\n>\n> The thing is, even when I force Postgres to use the ascending index on\n> *id*, it's still orders of magnitude slower than the *desc* version (even\n> when that one goes through the index backwards).\n\n\nRight. PostgreSQL has to return the rows commanded by your query. It\ncan't just decide to return a different set of rows because doing so would\nbe faster. If that is what you want, wrap the whole query into a subselect\nand move the ORDER BY into the outer query, like \"select * from (SELECT ...\nLIMIT 50) foo order by foo.id\"\n\nChanging the ordering direction of the index doesn't change which rows get\nreturned, while changing the ordering direction of the query does.\n\nCheers,\n\nJeff\n\nOn Mon, Mar 6, 2017 at 8:46 AM, twoflower <[email protected]> wrote:Thank you Jeff.\n\nThere are 7 million rows satisfying fk_id_client = 20045. There is an index on fk_id_client, now I added a composite (fk_id_client, id) index but that did not help.\nWith 7 million rows, you shouldn't expect any magic here. But still 7 million is less than 18 million, and you may be able to get that 7 million with more sequential-like IO.Did you force PostgreSQL to stop using the index on s.id? If not, do that. If so, please post the EXPLAIN (analyze) of the plan it does switch to.\nI see the point of what you are saying, but still don't understand how these two situations (asc vs. desc) are not symmetrical. They return different data. How could they be symmetrical? You are getting a different 50 rows depending on which way you order the data in the query. You are **not** getting the same 50 rows, just in a different order from among the 50. I mean, there is an ascending index on JOB_MEMORY.id, so why does it matter which end I am picking the data from?\nThe query stops as soon as it finds 50 rows which meet fk_id_client = 20045. 
When you order one way, it needs to cover 18883917 to find those 50. When you order the other way, it takes 6610 to find those 50. This fact does not depend on whether the index is ASC or DESC. If you traverse a DESC index backwards, it has exactly the same issue as if you traverse a ASC index forward. Either way, once it decides to use that index to obtain the ordering of the query, it has to inspect 18883917 tuples before it satisfies the LIMIT. \nThe thing is, even when I force Postgres to use the ascending index on id, it's still orders of magnitude slower than the desc version (even when that one goes through the index backwards).\nRight. PostgreSQL has to return the rows commanded by your query. It can't just decide to return a different set of rows because doing so would be faster. If that is what you want, wrap the whole query into a subselect and move the ORDER BY into the outer query, like \"select * from (SELECT ... LIMIT 50) foo order by foo.id\"Changing the ordering direction of the index doesn't change which rows get returned, while changing the ordering direction of the query does.Cheers,Jeff",
"msg_date": "Mon, 6 Mar 2017 09:11:30 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Huge difference between ASC and DESC ordering"
},
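If any 50 matching rows are acceptable, rather than the 50 smallest ids, the wrapping Jeff mentions looks roughly like this with the tables from the original post:

    SELECT *
    FROM (
        SELECT s.*
        FROM "JOB_MEMORY_STORAGE" st
        JOIN "JOB_MEMORY" s ON s.fk_id_storage = st.id
        WHERE st.fk_id_client = 20045
        LIMIT 50            -- take the first 50 matches the planner finds
    ) foo
    ORDER BY foo.id ASC;    -- then sort just those 50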
{
"msg_contents": "Thank you Jeff.\n\n\nJeff Janes wrote\n> Did you force PostgreSQL to stop using the index on s.id? If not, do\n> that. If so, please post the EXPLAIN (analyze) of the plan it does switch\n> to.\n\nYes, this\n\n\n\nfinishes in 20 seconds, which is two times faster than *order by id asc*.\nQuery plan:\n\n\n\n\nJeff Janes wrote\n> The query stops as soon as it finds 50 rows which meet fk_id_client =\n> 20045. When you order one way, it needs to cover 18883917 to find those\n> 50. When you order the other way, it takes 6610 to find those 50. This\n> fact does not depend on whether the index is ASC or DESC. If you traverse\n> a DESC index backwards, it has exactly the same issue as if you traverse a\n> ASC index forward. Either way, once it decides to use that index to\n> obtain\n> the ordering of the query, it has to inspect 18883917 tuples before it\n> satisfies the LIMIT.\n\nI think I finally get it. I investigated the query result set more closely\nand realized that indeed the relevant rows start only after > 18 million\nrows in the asc *id* order and that's the problem. On the other hand, with\n*desc* Postgres very quickly finds 50 rows matching *fk_id_client = 20045*.\nSo it is just the process of scanning the index and checking the condition\nwhich takes all of the time.\n\nUnderstanding the problem more, it brought me to a solution I might end up\ngoing with (and which you also suggested by asking whether I really need\nordering the data by *id*), a different order clause which still makes sense\nin my scenario:\n\n\n\nFinishes in 7 seconds.\n\n\n\nBest regards,\nStanislav\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Huge-difference-between-ASC-and-DESC-ordering-tp5947712p5947887.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Mar 2017 01:40:18 -0700 (MST)",
"msg_from": "twoflower <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Huge difference between ASC and DESC ordering"
}
] |
[
{
"msg_contents": "Hi there,\n\nI’ve been asked to help with a project dealing with slow queries. I’m brand new to the project, so I have very little context. I’ve gathered as much information as I can.\n\nI’ve put the schema, query, and explain info in gists to maintain their formatting.\n\nWe are stumped with this slow query right now. I could really use some help looking for ways to speed it up.\n\nIf you need any more information, please let me know.\n\nThanks,\nPat\n\n\nFull Table and Index Schema\n\ntasks schema <https://gist.github.com/patmaddox/c599dc26daa99a12c1923c4994e402df#file-1_tasks-txt>\npermissions schema <https://gist.github.com/patmaddox/c599dc26daa99a12c1923c4994e402df#file-2_permissions-txt>\n\nTable Metadata\n\ntasks count: 8.8 million\ntasks count where assigned_to_user_id is null: 2.7 million\ntasks table has lots of new records added, individual existing records updated (e.g. to mark them complete)\npermissions count: 4.4 million\n\nEXPLAIN (ANALYZE, BUFFERS)\n\nquery <https://gist.github.com/patmaddox/c599dc26daa99a12c1923c4994e402df#file-3_query-sql>\n\nexplain using Heroku default work_mem=30MB: <https://gist.github.com/patmaddox/c599dc26daa99a12c1923c4994e402df#file-4_explain-txt>\n\nexplain using work_mem=192MB <https://gist.github.com/patmaddox/c599dc26daa99a12c1923c4994e402df#file-5_explain_mem-txt>\n\nPostgres version\n\nPostgreSQL 9.4.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bit\n\nHistory\n\nSlow query has gotten steadily worse over the past few months.\n\nHardware / Maintenance Setup / WAL Configuration / GUC Settings\n\nHeroku Premium 2 plan <https://devcenter.heroku.com/articles/heroku-postgres-plans#premium-tier>\n\nCache size: 3.5 GB\nStorage limit: 256 GB\nConnection limit: 400\n\nwork_mem: 30MB\ncheckpoint_segments: 40\nwal_buffers: 16MB\nHi there,I’ve been asked to help with a project dealing with slow queries. I’m brand new to the project, so I have very little context. I’ve gathered as much information as I can.I’ve put the schema, query, and explain info in gists to maintain their formatting.We are stumped with this slow query right now. I could really use some help looking for ways to speed it up.If you need any more information, please let me know.Thanks,PatFull Table and Index Schematasks schemapermissions schemaTable Metadatatasks count: 8.8 milliontasks count where assigned_to_user_id is null: 2.7 milliontasks table has lots of new records added, individual existing records updated (e.g. to mark them complete)permissions count: 4.4 millionEXPLAIN (ANALYZE, BUFFERS)queryexplain using Heroku default work_mem=30MB:explain using work_mem=192MBPostgres versionPostgreSQL 9.4.9 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2, 64-bitHistorySlow query has gotten steadily worse over the past few months.Hardware / Maintenance Setup / WAL Configuration / GUC SettingsHeroku Premium 2 planCache size: 3.5 GBStorage limit: 256 GBConnection limit: 400work_mem: 30MBcheckpoint_segments: 40wal_buffers: 16MB",
"msg_date": "Tue, 7 Mar 2017 19:26:36 -0700",
"msg_from": "Pat Maddox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Please help with a slow query: there are millions of records, what\n can we do?"
},
{
"msg_contents": "Pat Maddox wrote:\r\n> I’ve been asked to help with a project dealing with slow queries. I’m brand new to the project, so I\r\n> have very little context. I’ve gathered as much information as I can.\r\n> \r\n> I’ve put the schema, query, and explain info in gists to maintain their formatting.\r\n> \r\n> We are stumped with this slow query right now. I could really use some help looking for ways to speed\r\n> it up.\r\n\r\nI don't know if the plan can be improved; it has to retrieve and sort 347014 rows,\r\nmost of which are read from diak, so it will take some time.\r\n\r\nOne thing I notice is that some statistics seem to be bad (the estimate for\r\nthe index scan on \"permissions\" is off the mark), so maybe you can ANALYZE\r\nboth tables (perhaps with higher \"default_statistics_target\") and see if that\r\nchanges anything.\r\n\r\nIs there any chance you could give the machine lots of RAM?\r\nThat would speed up the bitmap heap scan (but not the sort).\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 8 Mar 2017 10:05:07 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please help with a slow query: there are millions of\n records, what can we do?"
},
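A minimal sketch of the re-ANALYZE Laurenz suggests; the target of 500 is only an example value, and the SET affects the current session only:

    SET default_statistics_target = 500;  -- example value; the default is 100
    ANALYZE tasks;
    ANALYZE permissions;
    RESET default_statistics_target;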
{
"msg_contents": "On Tue, Mar 7, 2017 at 6:26 PM, Pat Maddox <[email protected]> wrote:\n\n> Hi there,\n>\n> I’ve been asked to help with a project dealing with slow queries. I’m\n> brand new to the project, so I have very little context. I’ve gathered as\n> much information as I can.\n>\n> I’ve put the schema, query, and explain info in gists to maintain their\n> formatting.\n>\n> We are stumped with this slow query right now. I could really use some\n> help looking for ways to speed it up.\n>\n> If you need any more information, please let me know.\n>\n\n\nYou could try a partial index on:\n\n(account_id, completed_at desc, taskable_name, position,\nassigned_to_user_id) where \"tasks\".\"archived\" != 't' AND \"tasks\".\"complete\"\n= 't'\n\nAlso, the poor estimate of the number of rows on your scan of\nindex_permissions_on_user_id_and_object_id_and_object_type suggests that\nyou are not analyzing (and so probably also not vacuuming) often enough.\n\nCheers,\n\nJeff\n\nOn Tue, Mar 7, 2017 at 6:26 PM, Pat Maddox <[email protected]> wrote:Hi there,I’ve been asked to help with a project dealing with slow queries. I’m brand new to the project, so I have very little context. I’ve gathered as much information as I can.I’ve put the schema, query, and explain info in gists to maintain their formatting.We are stumped with this slow query right now. I could really use some help looking for ways to speed it up.If you need any more information, please let me know.You could try a partial index on:(account_id, completed_at desc, taskable_name, position, assigned_to_user_id) where \"tasks\".\"archived\" != 't' AND \"tasks\".\"complete\" = 't'Also, the poor estimate of the number of rows on your scan of index_permissions_on_user_id_and_object_id_and_object_type suggests that you are not analyzing (and so probably also not vacuuming) often enough.Cheers,Jeff",
"msg_date": "Wed, 8 Mar 2017 10:00:47 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Please help with a slow query: there are millions of\n records, what can we do?"
},
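Jeff's suggested partial index written out as DDL; the column list and predicate are taken from his message, the index name is illustrative, and the predicate has to be implied by the query's WHERE clause for the planner to use it:

    CREATE INDEX index_tasks_complete_unarchived   -- name is illustrative
        ON tasks (account_id, completed_at DESC, taskable_name,
                  "position", assigned_to_user_id)
        WHERE archived != 't' AND complete = 't';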
{
"msg_contents": "\n> On Mar 8, 2017, at 11:00 AM, Jeff Janes <[email protected]> wrote:\n> \n> You could try a partial index on:\n> \n> (account_id, completed_at desc, taskable_name, position, assigned_to_user_id) where \"tasks\".\"archived\" != 't' AND \"tasks\".\"complete\" = 't'\n> \n> Also, the poor estimate of the number of rows on your scan of index_permissions_on_user_id_and_object_id_and_object_type suggests that you are not analyzing (and so probably also not vacuuming) often enough.\n\nThanks for this. So here’s a quick update…\n\nI removed all the indexes that are there and added one on:\n\n(account_id, taskable_type, taskable_id, assigned_to_user_id, archived, complete, completed_at, due_on)\n\nWe search for tasks that are complete or incomplete, so we wouldn’t want a partial index there… but I _think_ changing the index to be partial where archived != ’t’ would be beneficial; I’ll have to look. As of today, only about 10% of the tasks are archived=’t’ – though that’s still ~1 million rows at this point.\n\nThat helped the query plans big time, and adding more RAM so the indexes fit in memory instead of swapping led to major improvements.\n\nSo thank you for the suggestions :)\n\nI’ve manually vacuumed and analyzed a few times, and the estimates are always pretty far off. How do you suggest increasing the stats for the table? Just increase it, vacuum, and see if the stats look better?\n\nThanks,\nPat\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Mar 2017 09:07:46 -0600",
"msg_from": "Pat Maddox <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Please help with a slow query: there are millions of\n records, what can we do?"
}
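In answer to the closing question, one common approach is to raise the statistics target on the join columns rather than globally; this is stock PostgreSQL, but the column names (inferred from the index name mentioned earlier) and the target of 1000 are only illustrative:

    ALTER TABLE permissions ALTER COLUMN user_id   SET STATISTICS 1000;
    ALTER TABLE permissions ALTER COLUMN object_id SET STATISTICS 1000;
    ANALYZE permissions;   -- re-sample with the larger per-column targets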
] |
[
{
"msg_contents": "Dear expert,\n\nI have to add one column \"ID\" in postgres table which will generate Auto Increment<http://www.davidghedini.com/pg/entry/postgresql_auto_increment>ed number .\n\nExample:\nSuppose I have five records and if I insert 1 new record It should auto generate 6.\nIf I truncate the same table and then again insert rows should start with 1 in \"ID\" column.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n\n\n\n\n\n\nDear expert,\n \nI have to add one column “ID” in postgres table which will generate\n\nAuto Incremented number .\n \nExample:\nSuppose I have five records and if I insert 1 new record It should auto generate 6.\nIf I truncate the same table and then again insert rows should start with 1 in “ID” column.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\n\n\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender\n by reply email and destroy all copies of the original message. Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.",
"msg_date": "Mon, 20 Mar 2017 13:38:40 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Auto generate number in Postgres-9.1."
},
{
"msg_contents": "Dinesh,\n\n> I have to add one column “ID” in postgres table which will generate\n> Auto Increment\n> <http://www.davidghedini.com/pg/entry/postgresql_auto_increment>ed number .\n> \n> \n> \n> Example:\n> \n> Suppose I have five records and if I insert 1 new record It should auto\n> generate 6.\n\nhttps://www.postgresql.org/docs/9.6/static/sql-createsequence.html\nalso SERIAL on this page:\nhttps://www.postgresql.org/docs/9.6/static/datatype-numeric.html\n\n\n> \n> If I truncate the same table and then again insert rows should start\n> with 1 in “ID” column.\n\nThat's not how it works, normally. I'd suggest adding an ON TRUNCATE\ntrigger to the table.\n\n\n-- \nJosh Berkus\nContainers & Databases Oh My!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Mar 2017 09:43:11 -0400",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto generate number in Postgres-9.1."
},
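A minimal sketch of the serial approach Josh points to; "my_table" is a placeholder for the real table, and on 9.1 adding the column rewrites the whole table while it runs:

    -- Adds an integer column backed by an automatically created sequence;
    -- existing rows are numbered as part of the rewrite.
    ALTER TABLE my_table ADD COLUMN id serial;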
{
"msg_contents": "On 03/20/2017 02:43 PM, Josh Berkus wrote:\n>> If I truncate the same table and then again insert rows should start\n>> with 1 in �ID� column.\n>\n> That's not how it works, normally. I'd suggest adding an ON TRUNCATE\n> trigger to the table.\n\nActually that may not be necessary as long as you make sure to use the \nRESTART IDENTITY option when running TRUNCATE. I would argue that is a \ncleaner solution than using triggers, if you can get away with it.\n\nhttps://www.postgresql.org/docs/9.6/static/sql-truncate.html\n\nAndreas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Mar 2017 14:48:23 +0100",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto generate number in Postgres-9.1."
},
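And the truncate variant Andreas refers to, which resets the backing sequence in the same statement ("my_table" is again a placeholder):

    TRUNCATE my_table RESTART IDENTITY;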
{
"msg_contents": "Sequences are stored as a separate object in PostgreSQL.\n\nHere in this example table and you can see that rec_id is a sequence number and that the object name is: whiteboards_rec_id_seq\n\nmydb=> \\d whiteboards\n\n Table \"public.whiteboards\"\n Column | Type | Modifiers \n---------------+-----------------------------+--------------------------------------------------------------\n rec_id | integer | not null default nextval('whiteboards_rec_id_seq'::regclass)\n board_name | character varying(24) | not null\n board_content | text | not null\n updatets | timestamp without time zone | default now()\nIndexes:\n \"whiteboards_pkey\" PRIMARY KEY, btree (rec_id)\n\nNow I can display the whiteboards_rec_id_seq object\n\nmydb=> \\dS whiteboards_rec_id_seq \n Sequence \"public.whiteboards_rec_id_seq\"\n Column | Type | Value \n---------------+---------+------------------------\n sequence_name | name | whiteboards_rec_id_seq\n last_value | bigint | 12\n start_value | bigint | 1\n increment_by | bigint | 1\n max_value | bigint | 9223372036854775807\n min_value | bigint | 1\n cache_value | bigint | 1\n log_cnt | bigint | 31\n is_cycled | boolean | f\n is_called | boolean | t\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Monday, March 20, 2017 6:43 AM\nTo: Dinesh Chandra 12108; [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Auto generate number in Postgres-9.1.\n\nDinesh,\n\n> I have to add one column \"ID\" in postgres table which will generate\n> Auto Increment\n> <http://www.davidghedini.com/pg/entry/postgresql_auto_increment>ed number .\n> \n> \n> \n> Example:\n> \n> Suppose I have five records and if I insert 1 new record It should auto\n> generate 6.\n\nhttps://www.postgresql.org/docs/9.6/static/sql-createsequence.html\nalso SERIAL on this page:\nhttps://www.postgresql.org/docs/9.6/static/datatype-numeric.html\n\n\n> \n> If I truncate the same table and then again insert rows should start\n> with 1 in \"ID\" column.\n\nThat's not how it works, normally. I'd suggest adding an ON TRUNCATE\ntrigger to the table.\n\n\n-- \nJosh Berkus\nContainers & Databases Oh My!\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Mar 2017 13:50:06 +0000",
"msg_from": "John Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto generate number in Postgres-9.1."
},
{
"msg_contents": "Hi,\n\nThanks for your immediate response!!!!\n\nIts working fine when we insert a new row.\n\nBut on deletion it's not automatically re-adjusting the id's.\n\nDo I need to create trigger for this??\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of John Gorman\nSent: 20 March, 2017 7:20 PM\nTo: [email protected]; [email protected]\nSubject: Re: [PERFORM] Auto generate number in Postgres-9.1.\n\nSequences are stored as a separate object in PostgreSQL.\n\nHere in this example table and you can see that rec_id is a sequence number and that the object name is: whiteboards_rec_id_seq\n\nmydb=> \\d whiteboards\n\n Table \"public.whiteboards\"\n Column | Type | Modifiers\n---------------+-----------------------------+--------------------------\n---------------+-----------------------------+--------------------------\n---------------+-----------------------------+----------\n rec_id | integer | not null default nextval('whiteboards_rec_id_seq'::regclass)\n board_name | character varying(24) | not null\n board_content | text | not null\n updatets | timestamp without time zone | default now()\nIndexes:\n \"whiteboards_pkey\" PRIMARY KEY, btree (rec_id)\n\nNow I can display the whiteboards_rec_id_seq object\n\nmydb=> \\dS whiteboards_rec_id_seq\n Sequence \"public.whiteboards_rec_id_seq\"\n Column | Type | Value\n---------------+---------+------------------------\n sequence_name | name | whiteboards_rec_id_seq\n last_value | bigint | 12\n start_value | bigint | 1\n increment_by | bigint | 1\n max_value | bigint | 9223372036854775807\n min_value | bigint | 1\n cache_value | bigint | 1\n log_cnt | bigint | 31\n is_cycled | boolean | f\n is_called | boolean | t\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Josh Berkus\nSent: Monday, March 20, 2017 6:43 AM\nTo: Dinesh Chandra 12108; [email protected]\nCc: [email protected]\nSubject: Re: [PERFORM] Auto generate number in Postgres-9.1.\n\nDinesh,\n\n> I have to add one column \"ID\" in postgres table which will generate\n> Auto Increment\n> <http://www.davidghedini.com/pg/entry/postgresql_auto_increment>ed number .\n>\n>\n>\n> Example:\n>\n> Suppose I have five records and if I insert 1 new record It should\n> auto generate 6.\n\nhttps://www.postgresql.org/docs/9.6/static/sql-createsequence.html\nalso SERIAL on this page:\nhttps://www.postgresql.org/docs/9.6/static/datatype-numeric.html\n\n\n>\n> If I truncate the same table and then again insert rows should start\n> with 1 in \"ID\" column.\n\nThat's not how it works, normally. I'd suggest adding an ON TRUNCATE trigger to the table.\n\n\n--\nJosh Berkus\nContainers & Databases Oh My!\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n________________________________\n\nDISCLAIMER:\n\nThis email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. 
Check all attachments for viruses before opening them. All views or opinions presented in this e-mail are those of the author and may not reflect the opinion of Cyient or those of our affiliates.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Mar 2017 14:08:07 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Auto generate number in Postgres-9.1."
},
{
"msg_contents": "On 03/20/2017 03:08 PM, Dinesh Chandra 12108 wrote:\n> But on deletion it's not automatically re-adjusting the id's.\n>\n> Do I need to create trigger for this??\n\nIt is possible to do but I advice against adjusting the IDs on DELETE \ndue to to do so safely would require locking the entire table in the \ntrigger.\n\nNote that serial columns will also get holes on ROLLBACK. In general I \nthink the right thing to do is accept that your ID columns can get a bit \nugly.\n\nFor example:\n\nCREATE TABLE t (id serial);\n\nINSERT INTO t DEFAULT VALUES;\n\nBEGIN;\n\nINSERT INTO t DEFAULT VALUES;\n\nROLLBACK;\n\nINSERT INTO t DEFAULT VALUES;\n\nGives us the following data in the table:\n\n id\n----\n 1\n 3\n(2 rows)\n\nAndreas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Mar 2017 16:40:50 +0100",
"msg_from": "Andreas Karlsson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Auto generate number in Postgres-9.1."
}
] |
[
{
"msg_contents": "Hi pgsql-performance!\n\nSo I have a Postgresql database -- version \"PostgreSQL 9.4.8 on\nx86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, 64-bit\",\nspecifically.\n\nIn it, I have essentially two categories of tables:\n- small tables that are updated frequently, and tend to be often queried in\ntheir entirety (seq scan)\n- large tables that are updated infrequently, and tend to be often queried\nusing an index\n\nLet us assume that I have a table \"A\" that falls in the small category, and\na table \"B\" that falls in the large category.\n\nThe problem I'm having is that it is very difficult to do any amount of\nmaintenance on tables like B without DOSing any queries that reference\ntable A. The reason, as far as I can tell, is that having any statement run\nagainst a table like B results in all updates to table A being kept around\nuntil the statement on table B completes (as per the READ COMMITTED\ntransaction isolation level -- statements against B must only see rows\ncommitted before they started). This makes sense -- it's required to keep\nACID.\n\nHowever, there are times where I need to do large operations to B -- and\nthese operations can be literally anything, but I'll focus on my most\nrecent need: running a \"pg_dump\" against table B.\n\nI should add that table B is never involved with any query that touches\ntable A -- in this case, it is an append-only table that records changes to\na table C that is equivalently never involved with table A.\n\nSo, on to the data from which I base the above claims:\n\nLet table A have 43 thousand rows:\ndatabase=> select count(*) from a;\n-[ RECORD 1 ]\ncount | 43717\nTime: 10447.681 ms\n\nLet table B have 21 million rows:\nmeraki_shard_production=> select count(id) from b;\n-[ RECORD 1 ]---\ncount | 21845610\nTime: 116873.051 ms\n\nAssume a pg_dump operation is copying table B, i.e. there's a currently\nrunning query that looks like \"COPY public.b (id, ...) TO STDOUT\"\n\nThen this is what I get for running a verbose vacuum against A:\n\ndatabase=> vacuum verbose a;\nINFO: vacuuming \"public.a\"\nINFO: index \"a_pkey\" now contains 2119583 row versions in 9424 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.04s/0.03u sec elapsed 0.49 sec.\nINFO: \"a\": found 0 removable, 2112776 nonremovable row versions in 185345\nout of 186312 pages\nDETAIL: 2069676 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 1.28s/1.15u sec elapsed 22.93 sec.\nINFO: vacuuming \"pg_toast.pg_toast_18889\"\nINFO: index \"pg_toast_18889_index\" now contains 31 row versions in 2 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_18889\": found 0 removable, 31 nonremovable row versions in\n7 out of 7 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nTime: 23035.282 ms\n\n... 
and here's how long it takes to read all of the rows:\ndatabase=> select max(an unindexed bigint column) from a;\n-[ RECORD 1 ]--------\nmax | <some number>\nTime: 10624.368 ms\n\nRunning this another time immediately afterward (to show the cached speed)\nreturns:\nTime: 13782.363 ms\n\nIf I go to a separate database cluster that has an equivalent schema, and\nroughly equivalent table a (+- 2% on the number of rows), the above queries\nlook more like this:\n\nmeraki_shard_production=> vacuum verbose a;\nINFO: vacuuming \"public.a\"\nINFO: index \"a_pkey\" now contains 42171 row versions in 162 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"a\": found 487 removable, 42286 nonremovable row versions in 7809\nout of 7853 pages\nDETAIL: 373 dead row versions cannot be removed yet.\nThere were 42436 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.02u sec elapsed 0.01 sec.\nINFO: vacuuming \"pg_toast.pg_toast_19037\"\nINFO: index \"pg_toast_19037_index\" now contains 57 row versions in 2 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"pg_toast_19037\": found 0 removable, 57 nonremovable row versions in\n12 out of 12 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nVACUUM\nTime: 32.890 ms\n\ndatabase=> select max(the same unindexed bigint column) from a;\n max\n-----------------\n <some number>\n(1 row)\nTime: 16.696 ms\n(The second iteration takes 15.320 ms)\n\nSo, the way I see it, my problem boils down to table \"A\" getting roughly\n100-1000x slower when it gets roughly 20-50x bigger (depending if you\nmeasure in pages or tuples). Unfortunately, in my use case, table \"A\" acts\nas a join table for a lot of aspects of our company's webapp. Every 10\nminutes, the table is queried for 35 million rows via sequential scan (~800\nseq scans per minute, ~1.3 per second on average), and 6.5 million rows via\nindex lookup. When a sequential scan over 40k rows takes less than 1\nsecond, everything is fine -- when it takes 10+ seconds the database starts\nto slow down significantly. Thankfully, queries can share sequential scans,\nbut you can imagine how the responsiveness of the webapp might suffer as a\nconsequence. There's also the secondary effect that, should the query on B\ncomplete, there now exist many queries against A (and other related tables)\nthat are slow enough to potentially increase the size of A even further. It\nis not uncommon for queries involving A to start taking upwards of 30\nminutes to complete, when they usually complete in roughly 300ms, after\nsome maintenance query against B has completed.\n\nOur go-to solution has been to detect and stop these maintenance queries if\nthey take too long, and then to CLUSTER table A. This puts a cap on how\nlong any maintenance query can take -- down to somewhere around 1 hour.\n\nAnd thus my query to you guys:\n\nWhat can I do to keep running long maintenance operations on large tables\n(SELECTing significant fractions of B, DELETEing significant fractions of\nB, running VACUUM FULL on B) without denying other Postgresql backends\ntheir ability to efficiently query table A? 
Or, in other words, how do I\navoid incurring the cost of transaction isolation for queries against B on\na case-by-case basis?\n\nAnything is on the table for implementation:\n- moving tables to a different database / cluster / completely different\nDBMS system\n- designing an extension to tune either sets of queries\n- partitioning tables\n- etc\n... although the simpler the better. If you were in this position, what\nwould you do?\n\nRegards,\nJames",
"msg_date": "Tue, 21 Mar 2017 12:24:52 -0700",
"msg_from": "James Parks <[email protected]>",
"msg_from_op": true,
"msg_subject": "Optimizing around retained tuples"
},
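One way to confirm which backend is pinning the xmin horizon (and therefore keeping A's dead rows around) is to sort pg_stat_activity by the age of backend_xmin. A minimal sketch, using only the stock 9.4 catalog view; the LIMIT is arbitrary:

    -- Sessions holding the oldest snapshots; these are the ones preventing
    -- dead tuples in A (and every other table) from being reclaimed.
    SELECT pid,
           state,
           backend_xmin,
           now() - xact_start AS xact_age,
           left(query, 60)    AS query
      FROM pg_stat_activity
     WHERE backend_xmin IS NOT NULL
     ORDER BY age(backend_xmin) DESC
     LIMIT 10;

The long-running COPY against B should show up at the top of that list while the bloat on A is accumulating.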
{
"msg_contents": "On Tue, Mar 21, 2017 at 4:24 PM, James Parks <[email protected]> wrote:\n> ... and here's how long it takes to read all of the rows:\n> database=> select max(an unindexed bigint column) from a;\n> -[ RECORD 1 ]--------\n> max | <some number>\n> Time: 10624.368 ms\n>\n> Running this another time immediately afterward (to show the cached speed)\n> returns:\n> Time: 13782.363 ms\n>\n> If I go to a separate database cluster that has an equivalent schema, and\n> roughly equivalent table a (+- 2% on the number of rows), the above queries\n> look more like this:\n>\n> meraki_shard_production=> vacuum verbose a;\n> INFO: vacuuming \"public.a\"\n> INFO: index \"a_pkey\" now contains 42171 row versions in 162 pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"a\": found 487 removable, 42286 nonremovable row versions in 7809 out\n> of 7853 pages\n> DETAIL: 373 dead row versions cannot be removed yet.\n> There were 42436 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.02u sec elapsed 0.01 sec.\n> INFO: vacuuming \"pg_toast.pg_toast_19037\"\n> INFO: index \"pg_toast_19037_index\" now contains 57 row versions in 2 pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"pg_toast_19037\": found 0 removable, 57 nonremovable row versions in\n> 12 out of 12 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 0 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> VACUUM\n> Time: 32.890 ms\n>\n> database=> select max(the same unindexed bigint column) from a;\n> max\n> -----------------\n> <some number>\n> (1 row)\n> Time: 16.696 ms\n> (The second iteration takes 15.320 ms)\n>\n> So, the way I see it, my problem boils down to table \"A\" getting roughly\n> 100-1000x slower when it gets roughly 20-50x bigger (depending if you\n> measure in pages or tuples). Unfortunately, in my use case, table \"A\" acts\n> as a join table for a lot of aspects of our company's webapp. Every 10\n> minutes, the table is queried for 35 million rows via sequential scan (~800\n> seq scans per minute, ~1.3 per second on average), and 6.5 million rows via\n> index lookup. When a sequential scan over 40k rows takes less than 1 second,\n> everything is fine -- when it takes 10+ seconds the database starts to slow\n> down significantly. Thankfully, queries can share sequential scans, but you\n> can imagine how the responsiveness of the webapp might suffer as a\n> consequence. There's also the secondary effect that, should the query on B\n> complete, there now exist many queries against A (and other related tables)\n> that are slow enough to potentially increase the size of A even further. It\n> is not uncommon for queries involving A to start taking upwards of 30\n> minutes to complete, when they usually complete in roughly 300ms, after some\n> maintenance query against B has completed.\n>\n> Our go-to solution has been to detect and stop these maintenance queries if\n> they take too long, and then to CLUSTER table A. 
This puts a cap on how long\n> any maintenance query can take -- down to somewhere around 1 hour.\n>\n> And thus my query to you guys:\n>\n> What can I do to keep running long maintenance operations on large tables\n> (SELECTing significant fractions of B, DELETEing significant fractions of B,\n> running VACUUM FULL on B) without denying other Postgresql backends their\n> ability to efficiently query table A? Or, in other words, how do I avoid\n> incurring the cost of transaction isolation for queries against B on a\n> case-by-case basis?\n>\n> Anything is on the table for implementation:\n> - moving tables to a different database / cluster / completely different\n> DBMS system\n> - designing an extension to tune either sets of queries\n> - partitioning tables\n> - etc\n> ... although the simpler the better. If you were in this position, what\n> would you do?\n>\n> Regards,\n> James\n\nYou're experiencing bloat because the transaction on B is preventing\nthe xid horizon from moving forward, thus dead tuples from A cannot be\nreclaimed in case the transaction on B decides to query them.\n\nThere's only one \"easy\" solution for this as far as I know, and it is\nto run your long-running queries on a hot standby. That certainly\nworks for most read-only workloads, especially pg_dump.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Mar 2017 22:56:34 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing around retained tuples"
},
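A couple of sanity checks worth running on the replica before pointing pg_dump (or any other long read) at it; both functions are stock built-ins, nothing specific to this setup is assumed:

    SELECT pg_is_in_recovery();                                    -- true on a hot standby
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_lag;  -- how far behind it is

Because the standby runs its own snapshots (and, by default, does not report them back to the primary), the COPY there no longer holds back the xmin horizon, so VACUUM on A can keep up.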
{
"msg_contents": "On Tue, Mar 21, 2017 at 10:56 PM, Claudio Freire <[email protected]> wrote:\n> On Tue, Mar 21, 2017 at 4:24 PM, James Parks <[email protected]> wrote:\n>> ... and here's how long it takes to read all of the rows:\n>> database=> select max(an unindexed bigint column) from a;\n>> -[ RECORD 1 ]--------\n>> max | <some number>\n>> Time: 10624.368 ms\n>>\n>> Running this another time immediately afterward (to show the cached speed)\n>> returns:\n>> Time: 13782.363 ms\n>>\n>> If I go to a separate database cluster that has an equivalent schema, and\n>> roughly equivalent table a (+- 2% on the number of rows), the above queries\n>> look more like this:\n>>\n>> meraki_shard_production=> vacuum verbose a;\n>> INFO: vacuuming \"public.a\"\n>> INFO: index \"a_pkey\" now contains 42171 row versions in 162 pages\n>> DETAIL: 0 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> INFO: \"a\": found 487 removable, 42286 nonremovable row versions in 7809 out\n>> of 7853 pages\n>> DETAIL: 373 dead row versions cannot be removed yet.\n>> There were 42436 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.00s/0.02u sec elapsed 0.01 sec.\n>> INFO: vacuuming \"pg_toast.pg_toast_19037\"\n>> INFO: index \"pg_toast_19037_index\" now contains 57 row versions in 2 pages\n>> DETAIL: 0 index row versions were removed.\n>> 0 index pages have been deleted, 0 are currently reusable.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> INFO: \"pg_toast_19037\": found 0 removable, 57 nonremovable row versions in\n>> 12 out of 12 pages\n>> DETAIL: 0 dead row versions cannot be removed yet.\n>> There were 0 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> VACUUM\n>> Time: 32.890 ms\n>>\n>> database=> select max(the same unindexed bigint column) from a;\n>> max\n>> -----------------\n>> <some number>\n>> (1 row)\n>> Time: 16.696 ms\n>> (The second iteration takes 15.320 ms)\n>>\n>> So, the way I see it, my problem boils down to table \"A\" getting roughly\n>> 100-1000x slower when it gets roughly 20-50x bigger (depending if you\n>> measure in pages or tuples). Unfortunately, in my use case, table \"A\" acts\n>> as a join table for a lot of aspects of our company's webapp. Every 10\n>> minutes, the table is queried for 35 million rows via sequential scan (~800\n>> seq scans per minute, ~1.3 per second on average), and 6.5 million rows via\n>> index lookup. When a sequential scan over 40k rows takes less than 1 second,\n>> everything is fine -- when it takes 10+ seconds the database starts to slow\n>> down significantly. Thankfully, queries can share sequential scans, but you\n>> can imagine how the responsiveness of the webapp might suffer as a\n>> consequence. There's also the secondary effect that, should the query on B\n>> complete, there now exist many queries against A (and other related tables)\n>> that are slow enough to potentially increase the size of A even further. It\n>> is not uncommon for queries involving A to start taking upwards of 30\n>> minutes to complete, when they usually complete in roughly 300ms, after some\n>> maintenance query against B has completed.\n>>\n>> Our go-to solution has been to detect and stop these maintenance queries if\n>> they take too long, and then to CLUSTER table A. 
This puts a cap on how long\n>> any maintenance query can take -- down to somewhere around 1 hour.\n>>\n>> And thus my query to you guys:\n>>\n>> What can I do to keep running long maintenance operations on large tables\n>> (SELECTing significant fractions of B, DELETEing significant fractions of B,\n>> running VACUUM FULL on B) without denying other Postgresql backends their\n>> ability to efficiently query table A? Or, in other words, how do I avoid\n>> incurring the cost of transaction isolation for queries against B on a\n>> case-by-case basis?\n>>\n>> Anything is on the table for implementation:\n>> - moving tables to a different database / cluster / completely different\n>> DBMS system\n>> - designing an extension to tune either sets of queries\n>> - partitioning tables\n>> - etc\n>> ... although the simpler the better. If you were in this position, what\n>> would you do?\n>>\n>> Regards,\n>> James\n>\n> You're experiencing bloat because the transaction on B is preventing\n> the xid horizon from moving forward, thus dead tuples from A cannot be\n> reclaimed in case the transaction on B decides to query them.\n>\n> There's only one \"easy\" solution for this as far as I know, and it is\n> to run your long-running queries on a hot standby. That certainly\n> works for most read-only workloads, especially pg_dump.\n\nForgot to clarify... for your use case, make sure you *don't* enable\nstandby feedback on the standby.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Mar 2017 22:58:58 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing around retained tuples"
},
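For completeness, a minimal sketch of the standby-side settings this approach leans on; the parameter names are the stock GUCs and the values are illustrative. With feedback off the primary keeps vacuuming A, and raising max_standby_streaming_delay keeps the long read from being cancelled by replay conflicts, at the price of the standby falling behind while it runs:

    -- On the standby, in postgresql.conf (a reload is enough for both):
    --   hot_standby_feedback = off          # the default; standby queries then cannot bloat the primary
    --   max_standby_streaming_delay = -1    # never cancel long reads; WAL replay waits instead
    -- Verify from a standby session:
    SHOW hot_standby_feedback;
    SHOW max_standby_streaming_delay;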
{
"msg_contents": "On Tue, Mar 21, 2017 at 4:24 PM, James Parks <[email protected]> wrote:\r\n> What can I do to keep running long maintenance operations on large\r\n> tables (SELECTing significant fractions of B, DELETEing significant\r\n> fractions of B, running VACUUM FULL on B) without denying other\r\n> Postgresql backends their ability to efficiently query table A? \r\n> \r\n> Anything is on the table for implementation:\r\n> - moving tables to a different database / cluster / completely different DBMS system\r\n> - designing an extension to tune either sets of queries\r\n> - partitioning tables\r\n> - etc\r\n\r\nThe PostgreSQL 9.6 old_snapshot_threshold feature may be useful for this situation.\r\n\r\nFrom the patch proposal e-mail \"... Basically, this patch aims to limit bloat when there are snapshots\r\nthat are kept registered for prolonged periods. ...\".\r\n\r\nI think that matches your description.\r\n\r\nPgCon 2016 presentation - https://www.pgcon.org/2016/schedule/attachments/420_snapshot-too-old.odp\r\nCommitFest entry - https://commitfest.postgresql.org/9/562/\r\n\r\nOn Tue, Mar 21, 2017 at 10:56 PM, Claudio Freire <[email protected]> wrote:\r\n> You're experiencing bloat because the transaction on B is preventing \r\n> the xid horizon from moving forward, thus dead tuples from A cannot be \r\n> reclaimed in case the transaction on B decides to query them.\r\n\r\nSetting old_snapshot_threshold to a positive value changes that behavior.\r\n\r\nInstead of holding on to the \"dead\" tuples in A so that the transaction\r\non B can query them in the future, the tuples are vaccuumed and the\r\ntransaction on B gets a \"snapshot too old\" error if it tries to read a\r\npage in A where a tuple was vaccuumed.\r\n\r\nThere are also discussions on pgsql-hackers (\"pluggable storage\" and \"UNDO\r\nand in-place update\") regarding alternate table formats that might work\r\nbetter in this situation. But it doesn't look like either of those will\r\nmake it into PostgreSQL 10.\r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 Mar 2017 16:00:03 +0000",
"msg_from": "Brad DeJong <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing around retained tuples"
}
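A minimal sketch of what enabling that looks like on a 9.6 primary; the threshold value is illustrative:

    -- postgresql.conf (restart required):
    --   old_snapshot_threshold = '1h'
    -- VACUUM may then reclaim rows that have been dead longer than roughly an
    -- hour even while the COPY of B is still running; if that old snapshot
    -- later touches a page where such rows were pruned, it aborts with
    --   ERROR:  snapshot too old
    -- instead of blocking cleanup indefinitely.
    SHOW old_snapshot_threshold;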
] |
[
{
"msg_contents": "Hi there,\nI’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel 4.4.0-66-generic). Hardware is:\n - 2 x Intel Xeon E5-2690\n - 96GB RAM\n - Software mdadm RAID10 (6 x SSDs)\n\nPostgres is used in a sort of DWH application, so all the resources are assigned to it and the aim is to maximize the single transaction performance instead of balancing between multiple connections.\n\nThe configuration variables I changed are the following ones:\n\n\tcheckpoint_completion_target = 0.9\n\tdata_directory = '/mnt/raid10/pg_data_9.6.2'\n\tdefault_statistics_target = 1000\n\teffective_cache_size = 72GB\n\teffective_io_concurrency = 1000\n\tlisten_addresses = '127.0.0.1,192.168.2.90'\n\tmaintenance_work_mem = 1GB\n\tmax_connections=32\n\trandom_page_cost=1.2\n\tseq_page_cost=1.0\n\tshared_buffers = 24GB\n\twork_mem = 512MB\n\n\nThe kernel configuration in /etc/sysctl.conf is:\n\n\t# 24GB = (24*1024*1024*1024)\n\tkernel.shmmax = 25769803776\n\n\t# 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n\tkernel.shmall = 6291456\n\n\tkernel.sched_migration_cost_ns = 5000000\n\tkernel.sched_autogroup_enabled = 0\n\n\tvm.overcommit_memory = 2\n\tvm.overcommit_ratio = 90\n\tvm.swappiness = 4\n\tvm.zone_reclaim_mode = 0\n\tvm.dirty_ratio = 15\n\tvm.dirty_background_ratio = 3\n\tvm.nr_hugepages = 12657\n\tvm.min_free_kbytes=262144\n\n\tdev.raid.speed_limit_max=1000000\n\tdev.raid.speed_limit_min=1000000\n\n\nHuge pages are being used on this machine and Postgres allocates 24GB immediately after starting up, as set by vm.nr_hugepages = 12657.\nMy concern is that it never uses more than 24GB. For example, I’m running 16 queries that use a lot of CPU (they do time series expansion and some arithmetics). I estimate they will generate a maximum of 2.5 billions of rows. Those queries are running since 48 hours and don’t know when they will finish, but RAM never overpassed those 24GB (+ some system). \n\nOutput from free -ht:\n total used free shared buff/cache available\nMem: 94G 28G 46G 17M 19G 64G\nSwap: 15G 0B 15G\nTotal: 109G 28G 61G\n\nOutput from vmstat -S M:\nprocs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----\n r b swpd free buff cache si so bi bo in cs us sy id wa st\n17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 3 0 0\n\n\nOutput from top -U postgres:\ntop - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00, 16.00\nTasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie\n%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 buff/cache\nKiB Swap: 15825916 total, 15825916 free, 0 used. 
67547664 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38 postgres \n 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27 postgres \n 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40 postgres \n 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23 postgres \n 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05 postgres \n 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31 postgres \n 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32 postgres \n 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24 postgres \n 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50 postgres \n 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38 postgres \n 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18 postgres \n 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34 postgres \n 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24 postgres \n 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47 postgres \n 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03 postgres \n 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59 postgres \n 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36 postgres \n 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15 postgres \n 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24 postgres \n 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04 postgres \n 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37 postgres \n 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48 postgres \n\nWhat’s wrong with this? There isn’t something wrong in RAM usage?\n\nThank you all\n Pietro \nHi there,I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel 4.4.0-66-generic). Hardware is: - 2 x Intel Xeon E5-2690 - 96GB RAM - Software mdadm RAID10 (6 x SSDs)Postgres is used in a sort of DWH application, so all the resources are assigned to it and the aim is to maximize the single transaction performance instead of balancing between multiple connections.The configuration variables I changed are the following ones: checkpoint_completion_target = 0.9 data_directory = '/mnt/raid10/pg_data_9.6.2' default_statistics_target = 1000 effective_cache_size = 72GB effective_io_concurrency = 1000 listen_addresses = '127.0.0.1,192.168.2.90' maintenance_work_mem = 1GB max_connections=32 random_page_cost=1.2 seq_page_cost=1.0 shared_buffers = 24GB work_mem = 512MBThe kernel configuration in /etc/sysctl.conf is: # 24GB = (24*1024*1024*1024) kernel.shmmax = 25769803776 # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\" kernel.shmall = 6291456 kernel.sched_migration_cost_ns = 5000000 kernel.sched_autogroup_enabled = 0 vm.overcommit_memory = 2 vm.overcommit_ratio = 90 vm.swappiness = 4 vm.zone_reclaim_mode = 0 vm.dirty_ratio = 15 vm.dirty_background_ratio = 3 vm.nr_hugepages = 12657 vm.min_free_kbytes=262144 dev.raid.speed_limit_max=1000000 dev.raid.speed_limit_min=1000000Huge pages are being used on this machine and Postgres allocates 24GB immediately after starting up, as set by vm.nr_hugepages = 12657.My concern is that it never uses more than 24GB. For example, I’m running 16 queries that use a lot of CPU (they do time series expansion and some arithmetics). I estimate they will generate a maximum of 2.5 billions of rows. Those queries are running since 48 hours and don’t know when they will finish, but RAM never overpassed those 24GB (+ some system). 
Output from free -ht: total used free shared buff/cache availableMem: 94G 28G 46G 17M 19G 64GSwap: 15G 0B 15GTotal: 109G 28G 61GOutput from vmstat -S M:procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 3 0 0Output from top -U postgres:top - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00, 16.00Tasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 stKiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 buff/cacheKiB Swap: 15825916 total, 15825916 free, 0 used. 67547664 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38 postgres 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27 postgres 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40 postgres 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23 postgres 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05 postgres 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31 postgres 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32 postgres 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24 postgres 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50 postgres 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38 postgres 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18 postgres 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34 postgres 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24 postgres 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47 postgres 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03 postgres 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59 postgres 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36 postgres 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15 postgres 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24 postgres 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04 postgres 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37 postgres 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48 postgres What’s wrong with this? There isn’t something wrong in RAM usage?Thank you all Pietro",
"msg_date": "Fri, 24 Mar 2017 10:58:08 +0100",
"msg_from": "Pietro Pugni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres not using all RAM (Huge Page activated on a 96GB RAM system)"
},
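One low-tech cross-check is to ask the server itself what it believes it was given; a minimal sketch using the stock pg_settings view (the output will simply mirror the configuration above):

    SELECT name, setting, unit, source
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'huge_pages', 'work_mem',
                    'maintenance_work_mem', 'max_connections');

Only the shared memory segment (shared_buffers plus bookkeeping) is backed by huge pages; per-backend allocations such as work_mem come from ordinary process memory, which is why the resident sizes shown in top stay small.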
{
"msg_contents": "As I read this, you have 24G of hugepages, and hugepages enabled for\npostgres. Can postgres use both standard pages and hugepages at the same\ntime? Seems unlikely to me.\n\nOn Fri, Mar 24, 2017 at 4:58 AM, Pietro Pugni <[email protected]>\nwrote:\n\n> Hi there,\n> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel\n> 4.4.0-66-generic). Hardware is:\n> - 2 x Intel Xeon E5-2690\n> - 96GB RAM\n> - Software mdadm RAID10 (6 x SSDs)\n>\n> Postgres is used in a sort of DWH application, so all the resources are\n> assigned to it and the aim is to maximize the single transaction\n> performance instead of balancing between multiple connections.\n>\n> The configuration variables I changed are the following ones:\n>\n> checkpoint_completion_target = 0.9\n> data_directory = '/mnt/raid10/pg_data_9.6.2'\n> default_statistics_target = 1000\n> effective_cache_size = 72GB\n> effective_io_concurrency = 1000\n> listen_addresses = '127.0.0.1,192.168.2.90'\n> maintenance_work_mem = 1GB\n> max_connections=32\n> random_page_cost=1.2\n> seq_page_cost=1.0\n> shared_buffers = 24GB\n> work_mem = 512MB\n>\n>\n> The kernel configuration in /etc/sysctl.conf is:\n>\n> # 24GB = (24*1024*1024*1024)\n> kernel.shmmax = 25769803776\n>\n> # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n> kernel.shmall = 6291456\n>\n> kernel.sched_migration_cost_ns = 5000000\n> kernel.sched_autogroup_enabled = 0\n>\n> vm.overcommit_memory = 2\n> vm.overcommit_ratio = 90\n> vm.swappiness = 4\n> vm.zone_reclaim_mode = 0\n> vm.dirty_ratio = 15\n> vm.dirty_background_ratio = 3\n> vm.nr_hugepages = 12657\n> vm.min_free_kbytes=262144\n>\n> dev.raid.speed_limit_max=1000000\n> dev.raid.speed_limit_min=1000000\n>\n>\n> *Huge pages are being used on this machine *and Postgres allocates 24GB\n> immediately after starting up, as set by vm.nr_hugepages = 12657.\n> My concern is that it never uses more than 24GB. For example, I’m running\n> 16 queries that use a lot of CPU (they do time series expansion and some\n> arithmetics). I estimate they will generate a maximum of 2.5 billions of\n> rows. Those queries are running since 48 hours and don’t know when they\n> will finish, but RAM never overpassed those 24GB (+ some system).\n>\n> Output from *free -ht*:\n> total used free shared buff/cache\n> available\n> Mem: 94G 28G 46G 17M 19G\n> 64G\n> Swap: 15G 0B 15G\n> Total: 109G 28G 61G\n>\n> Output from *vmstat -S M*:\n> procs -----------memory---------- ---swap-- -----io---- -system--\n> ------cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy id\n> wa st\n> 17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 3\n> 0 0\n>\n>\n> Output from *top -U postgres*:\n> top - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00,\n> 16.00\n> Tasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie\n> %Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si,\n> 0.0 st\n> KiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 buff/cache\n> KiB Swap: 15825916 total, 15825916 free, 0 used. 
67547664 avail Mem\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+\n> COMMAND\n>\n>\n> 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38\n> postgres\n>\n>\n> 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27\n> postgres\n>\n>\n> 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40\n> postgres\n>\n>\n> 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23\n> postgres\n>\n>\n> 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05\n> postgres\n>\n>\n> 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31\n> postgres\n>\n>\n> 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32\n> postgres\n>\n>\n> 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24\n> postgres\n>\n>\n> 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50\n> postgres\n>\n>\n> 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38\n> postgres\n>\n>\n> 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18\n> postgres\n>\n>\n> 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34\n> postgres\n>\n>\n> 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24\n> postgres\n>\n>\n> 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47\n> postgres\n>\n>\n> 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03\n> postgres\n>\n>\n> 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59\n> postgres\n>\n>\n> 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36\n> postgres\n>\n>\n> 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15\n> postgres\n>\n>\n> 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24\n> postgres\n>\n>\n> 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04\n> postgres\n>\n>\n> 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37\n> postgres\n>\n>\n> 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48\n> postgres\n>\n> What’s wrong with this? There isn’t something wrong in RAM usage?\n>\n> Thank you all\n> Pietro\n>\n\n\n\n-- \nAndrew W. Kerber\n\n'If at first you dont succeed, dont take up skydiving.'\n\nAs I read this, you have 24G of hugepages, and hugepages enabled for postgres. Can postgres use both standard pages and hugepages at the same time? Seems unlikely to me.On Fri, Mar 24, 2017 at 4:58 AM, Pietro Pugni <[email protected]> wrote:Hi there,I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel 4.4.0-66-generic). 
Hardware is: - 2 x Intel Xeon E5-2690 - 96GB RAM - Software mdadm RAID10 (6 x SSDs)Postgres is used in a sort of DWH application, so all the resources are assigned to it and the aim is to maximize the single transaction performance instead of balancing between multiple connections.The configuration variables I changed are the following ones: checkpoint_completion_target = 0.9 data_directory = '/mnt/raid10/pg_data_9.6.2' default_statistics_target = 1000 effective_cache_size = 72GB effective_io_concurrency = 1000 listen_addresses = '127.0.0.1,192.168.2.90' maintenance_work_mem = 1GB max_connections=32 random_page_cost=1.2 seq_page_cost=1.0 shared_buffers = 24GB work_mem = 512MBThe kernel configuration in /etc/sysctl.conf is: # 24GB = (24*1024*1024*1024) kernel.shmmax = 25769803776 # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\" kernel.shmall = 6291456 kernel.sched_migration_cost_ns = 5000000 kernel.sched_autogroup_enabled = 0 vm.overcommit_memory = 2 vm.overcommit_ratio = 90 vm.swappiness = 4 vm.zone_reclaim_mode = 0 vm.dirty_ratio = 15 vm.dirty_background_ratio = 3 vm.nr_hugepages = 12657 vm.min_free_kbytes=262144 dev.raid.speed_limit_max=1000000 dev.raid.speed_limit_min=1000000Huge pages are being used on this machine and Postgres allocates 24GB immediately after starting up, as set by vm.nr_hugepages = 12657.My concern is that it never uses more than 24GB. For example, I’m running 16 queries that use a lot of CPU (they do time series expansion and some arithmetics). I estimate they will generate a maximum of 2.5 billions of rows. Those queries are running since 48 hours and don’t know when they will finish, but RAM never overpassed those 24GB (+ some system). Output from free -ht: total used free shared buff/cache availableMem: 94G 28G 46G 17M 19G 64GSwap: 15G 0B 15GTotal: 109G 28G 61GOutput from vmstat -S M:procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu----- r b swpd free buff cache si so bi bo in cs us sy id wa st17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 3 0 0Output from top -U postgres:top - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00, 16.00Tasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 stKiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 buff/cacheKiB Swap: 15825916 total, 15825916 free, 0 used. 
67547664 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38 postgres 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27 postgres 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40 postgres 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23 postgres 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05 postgres 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31 postgres 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32 postgres 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24 postgres 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50 postgres 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38 postgres 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18 postgres 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34 postgres 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24 postgres 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47 postgres 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03 postgres 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59 postgres 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36 postgres 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15 postgres 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24 postgres 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04 postgres 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37 postgres 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48 postgres What’s wrong with this? There isn’t something wrong in RAM usage?Thank you all Pietro -- Andrew W. Kerber'If at first you dont succeed, dont take up skydiving.'",
"msg_date": "Fri, 24 Mar 2017 09:00:23 -0500",
"msg_from": "Andrew Kerber <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
},
{
"msg_contents": "On Fri, Mar 24, 2017 at 3:58 AM, Pietro Pugni <[email protected]> wrote:\n> Hi there,\n> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel\n> 4.4.0-66-generic). Hardware is:\n> - 2 x Intel Xeon E5-2690\n> - 96GB RAM\n> - Software mdadm RAID10 (6 x SSDs)\n>\n> Postgres is used in a sort of DWH application, so all the resources are\n> assigned to it and the aim is to maximize the single transaction performance\n> instead of balancing between multiple connections.\n>\n> The configuration variables I changed are the following ones:\n>\n> checkpoint_completion_target = 0.9\n> data_directory = '/mnt/raid10/pg_data_9.6.2'\n> default_statistics_target = 1000\n> effective_cache_size = 72GB\n> effective_io_concurrency = 1000\n> listen_addresses = '127.0.0.1,192.168.2.90'\n> maintenance_work_mem = 1GB\n> max_connections=32\n> random_page_cost=1.2\n> seq_page_cost=1.0\n> shared_buffers = 24GB\n> work_mem = 512MB\n>\n>\n> The kernel configuration in /etc/sysctl.conf is:\n>\n> # 24GB = (24*1024*1024*1024)\n> kernel.shmmax = 25769803776\n>\n> # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n> kernel.shmall = 6291456\n>\n> kernel.sched_migration_cost_ns = 5000000\n> kernel.sched_autogroup_enabled = 0\n>\n> vm.overcommit_memory = 2\n> vm.overcommit_ratio = 90\n> vm.swappiness = 4\n> vm.zone_reclaim_mode = 0\n> vm.dirty_ratio = 15\n> vm.dirty_background_ratio = 3\n> vm.nr_hugepages = 12657\n> vm.min_free_kbytes=262144\n>\n> dev.raid.speed_limit_max=1000000\n> dev.raid.speed_limit_min=1000000\n>\n>\n> Huge pages are being used on this machine and Postgres allocates 24GB\n> immediately after starting up, as set by vm.nr_hugepages = 12657.\n> My concern is that it never uses more than 24GB. For example, I’m running 16\n> queries that use a lot of CPU (they do time series expansion and some\n> arithmetics). I estimate they will generate a maximum of 2.5 billions of\n> rows. Those queries are running since 48 hours and don’t know when they will\n> finish, but RAM never overpassed those 24GB (+ some system).\n>\n> Output from free -ht:\n> total used free shared buff/cache\n> available\n> Mem: 94G 28G 46G 17M 19G\n> 64G\n> Swap: 15G 0B 15G\n> Total: 109G 28G 61G\n\nLooks normal to me. Note that the OS is caching 19G of data.\nPostgresql is only going to allocate extra memory 512MB at a time for\nbig sorts. Any sort bigger than that will spill to disk. GIven that\ntop and vmstat seem to show you as being CPU bound I don't think\namount of memory postgresql is using is your problem.\n\nYou'd be better off to ask for help in optimizing your queries IMHO.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 Mar 2017 13:47:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
},
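A minimal way to see whether a given step stays inside work_mem or spills to disk; the table and column names below are placeholders, not from the thread:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT series_id, sum(value)
      FROM some_timeseries_table      -- hypothetical
     GROUP BY series_id
     ORDER BY series_id;
    -- Sort Method: quicksort  Memory: ...     -> the sort fit in work_mem
    -- Sort Method: external merge  Disk: ...  -> it spilled to temp files on disk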
{
"msg_contents": "On 24/03/17 10:58, Pietro Pugni wrote:\n> Hi there,\n> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel \n> 4.4.0-66-generic). Hardware is:\n> - 2 x Intel Xeon E5-2690\n> - 96GB RAM\n> - Software mdadm RAID10 (6 x SSDs)\n>\n> Postgres is used in a sort of DWH application, so all the resources \n> are assigned to it and the aim is to maximize the single transaction \n> performance instead of balancing between multiple connections.\n>\n> The configuration variables I changed are the following ones:\n>\n> checkpoint_completion_target = 0.9\n> data_directory = '/mnt/raid10/pg_data_9.6.2'\n> default_statistics_target = 1000\n> effective_cache_size = 72GB\n> effective_io_concurrency = 1000\n> listen_addresses = '127.0.0.1,192.168.2.90'\n> maintenance_work_mem = 1GB\n> max_connections=32\n> random_page_cost=1.2\n> seq_page_cost=1.0\n> shared_buffers = 24GB\n> work_mem = 512MB\n>\n>\n> The kernel configuration in /etc/sysctl.conf is:\n>\n> # 24GB = (24*1024*1024*1024)\n> kernel.shmmax = 25769803776\n>\n> # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n> kernel.shmall = 6291456\n>\n> kernel.sched_migration_cost_ns = 5000000\n> kernel.sched_autogroup_enabled = 0\n>\n> vm.overcommit_memory = 2\n> vm.overcommit_ratio = 90\n> vm.swappiness = 4\n> vm.zone_reclaim_mode = 0\n> vm.dirty_ratio = 15\n> vm.dirty_background_ratio = 3\n> vm.nr_hugepages = 12657\n> vm.min_free_kbytes=262144\n>\n> dev.raid.speed_limit_max=1000000\n> dev.raid.speed_limit_min=1000000\n>\n>\n> *Huge pages are being used on this machine *and Postgres allocates \n> 24GB immediately after starting up, as set by vm.nr_hugepages = 12657.\n> My concern is that it never uses more than 24GB.\n\n Hi Pietro.\n\n Well, your shared_buffers is 24G, so it is expected that it won't \nuse more (much more, the rest being other parameters). The rest if \neffective_cache_size, which is what the VFS is expected to be caching.\n\n Have you configured parallel query \n(max_parallel_workers_per_gather) to allow for faster queries? It may \nwork well on your scenario.\n\n\n Regards,\n\n Álvaro\n\n\n-- \n\nÁlvaro Hernández Tortosa\n\n\n-----------\n<8K>data\n\n\n\n\n> For example, I’m running 16 queries that use a lot of CPU (they do \n> time series expansion and some arithmetics). I estimate they will \n> generate a maximum of 2.5 billions of rows. Those queries are running \n> since 48 hours and don’t know when they will finish, but RAM never \n> overpassed those 24GB (+ some system).\n>\n> Output from /free -ht/:\n> total used free shared buff/cache available\n> Mem: 94G 28G 46G 17M 19G 64G\n> Swap: 15G 0B 15G\n> Total: 109G 28G 61G\n>\n> Output from /vmstat -S M/:\n> procs -----------memory---------- ---swap-- -----io---- -system-- \n> ------cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy \n> id wa st\n> 17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 \n> 3 0 0\n>\n>\n> Output from /top -U postgres/:\n> top - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00, \n> 16.00\n> Tasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie\n> %Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 \n> si, 0.0 st\n> KiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 \n> buff/cache\n> KiB Swap: 15825916 total, 15825916 free, 0 used. 
67547664 avail \n> Mem\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38 \n> postgres\n> 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27 \n> postgres\n> 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40 \n> postgres\n> 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23 \n> postgres\n> 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05 \n> postgres\n> 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31 \n> postgres\n> 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32 \n> postgres\n> 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24 \n> postgres\n> 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50 \n> postgres\n> 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38 \n> postgres\n> 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18 \n> postgres\n> 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34 \n> postgres\n> 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24 \n> postgres\n> 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47 \n> postgres\n> 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03 \n> postgres\n> 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59 \n> postgres\n> 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36 \n> postgres\n> 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15 \n> postgres\n> 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24 \n> postgres\n> 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04 \n> postgres\n> 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37 \n> postgres\n> 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48 \n> postgres\n>\n> What’s wrong with this? There isn’t something wrong in RAM usage?\n>\n> Thank you all\n> Pietro\n\n\n\n\n\n\n\n\n\nOn 24/03/17 10:58, Pietro Pugni wrote:\n\n\n\n Hi there,\n I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS\n (kernel 4.4.0-66-generic). 
Hardware is:\n - 2 x Intel Xeon E5-2690\n - 96GB RAM\n - Software mdadm RAID10 (6 x SSDs)\n\n\nPostgres is used in a sort of DWH application, so\n all the resources are assigned to it and the aim is to maximize\n the single transaction performance instead of balancing between\n multiple connections.\n\n\nThe configuration variables I changed are the\n following ones:\n\n\n\n checkpoint_completion_target\n = 0.9\n data_directory\n = '/mnt/raid10/pg_data_9.6.2'\n default_statistics_target\n = 1000\n effective_cache_size\n = 72GB\n effective_io_concurrency\n = 1000\n listen_addresses\n = '127.0.0.1,192.168.2.90'\n maintenance_work_mem\n = 1GB\n max_connections=32\n random_page_cost=1.2\n seq_page_cost=1.0\n shared_buffers\n = 24GB\n work_mem\n = 512MB\n\n\n\n\n\nThe kernel configuration in /etc/sysctl.conf is:\n\n\n\n #\n 24GB = (24*1024*1024*1024)\n kernel.shmmax\n = 25769803776\n\n\n #\n 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n kernel.shmall\n = 6291456\n\n\n kernel.sched_migration_cost_ns\n = 5000000\n kernel.sched_autogroup_enabled\n = 0\n\n\n vm.overcommit_memory\n = 2\n vm.overcommit_ratio\n = 90\n vm.swappiness\n = 4\n vm.zone_reclaim_mode\n = 0\n vm.dirty_ratio\n = 15\n vm.dirty_background_ratio\n = 3\n vm.nr_hugepages\n = 12657\n vm.min_free_kbytes=262144\n\n\n dev.raid.speed_limit_max=1000000\n dev.raid.speed_limit_min=1000000\n\n\n\n\n\nHuge pages are being used on this\n machine and Postgres allocates 24GB immediately after\n starting up, as set by vm.nr_hugepages = 12657.\nMy concern is that it never uses more than 24GB. \n\n\n Hi Pietro.\n\n Well, your shared_buffers is 24G, so it is expected that it\n won't use more (much more, the rest being other parameters). The\n rest if effective_cache_size, which is what the VFS is expected to\n be caching.\n\n Have you configured parallel query\n (max_parallel_workers_per_gather) to allow for faster queries? It\n may work well on your scenario.\n\n \n Regards,\n\n Álvaro\n\n\n-- \n\nÁlvaro Hernández Tortosa\n\n\n-----------\n<8K>data\n\n\n\n\nFor example, I’m running 16 queries that use a lot\n of CPU (they do time series expansion and some arithmetics). I\n estimate they will generate a maximum of 2.5 billions of rows.\n Those queries are running since 48 hours and don’t know when\n they will finish, but RAM never overpassed those 24GB (+ some\n system). \n\n\nOutput from free -ht:\n\n total used \n free shared buff/cache available\nMem: 94G 28G \n 46G 17M 19G 64G\nSwap: 15G 0B \n 15G\nTotal: 109G 28G \n 61G\n\n\n\nOutput from vmstat -S M:\n\nprocs -----------memory----------\n ---swap-- -----io---- -system-- ------cpu-----\n r b swpd free buff cache si \n so bi bo in cs us sy id wa st\n17 0 0 47308 197 19684 0 \n 0 4 12 3 8 96 0 3 0 0\n\n\n\n\n\nOutput from top -U postgres:\n\ntop - 10:54:19 up 2 days, 1:37, 1 user,\n load average: 16.00, 16.00, 16.00\nTasks: 347 total, 17 running, 330\n sleeping, 0 stopped, 0 zombie\n%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0\n id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 98847584 total, 48442364 free,\n 30046352 used, 20358872 buff/cache\nKiB Swap: 15825916 total, 15825916 free,\n 0 used. 
67547664 avail Mem \n\n\n PID USER PR NI VIRT RES \n SHR S %CPU %MEM TIME+ COMMAND \n \n \n \n 9686 postgres 20 0 24.918g 214236\n 12628 R 100.0 0.2 2872:38 postgres \n \n \n \n 9687 postgres 20 0 24.918g 214212\n 12600 R 100.0 0.2 2872:27 postgres \n \n \n \n 9688 postgres 20 0 25.391g 709936\n 12708 R 100.0 0.7 2872:40 postgres \n \n \n \n 9691 postgres 20 0 24.918g 214516\n 12900 R 100.0 0.2 2865:23 postgres \n \n \n \n 9697 postgres 20 0 24.918g 214284\n 12676 R 100.0 0.2 2866:05 postgres \n \n \n \n 9698 postgres 20 0 24.922g 218608\n 12904 R 100.0 0.2 2872:31 postgres \n \n \n \n 9699 postgres 20 0 24.918g 214512\n 12904 R 100.0 0.2 2865:32 postgres \n \n \n \n 9702 postgres 20 0 24.922g 218332\n 12628 R 100.0 0.2 2865:24 postgres \n \n \n \n 9704 postgres 20 0 24.918g 214512\n 12904 R 100.0 0.2 2872:50 postgres \n \n \n \n 9710 postgres 20 0 24.918g 212364\n 12904 R 100.0 0.2 2865:38 postgres \n \n \n \n 9681 postgres 20 0 24.918g 212300\n 12596 R 99.7 0.2 2865:18 postgres \n \n \n \n 9682 postgres 20 0 24.918g 212108\n 12656 R 99.7 0.2 2872:34 postgres \n \n \n \n 9684 postgres 20 0 24.918g 212612\n 12908 R 99.7 0.2 2872:24 postgres \n \n \n \n 9685 postgres 20 0 24.918g 214208\n 12600 R 99.7 0.2 2872:47 postgres \n \n \n \n 9709 postgres 20 0 24.918g 214284\n 12672 R 99.7 0.2 2866:03 postgres \n \n \n \n 9693 postgres 20 0 24.918g 214300\n 12688 R 99.3 0.2 2865:59 postgres \n \n \n \n 9063 postgres 20 0 24.722g 14812\n 12956 S 0.3 0.0 0:07.36 postgres \n \n \n \n 9068 postgres 20 0 24.722g 6380 \n 4232 S 0.3 0.0 0:02.15 postgres \n \n \n \n 9065 postgres 20 0 24.727g 10368 \n 3516 S 0.0 0.0 0:04.24 postgres \n \n \n \n 9066 postgres 20 0 24.722g 4100 \n 2248 S 0.0 0.0 0:06.04 postgres \n \n \n \n 9067 postgres 20 0 24.722g 4100 \n 2248 S 0.0 0.0 0:01.37 postgres \n \n \n \n 9069 postgres 20 0 161740 4596 \n 2312 S 0.0 0.0 0:04.48 postgres \n\n\n\nWhat’s wrong with this? There isn’t something wrong\n in RAM usage?\n\n\nThank you all\n Pietro",
"msg_date": "Fri, 24 Mar 2017 21:16:28 +0100",
"msg_from": "=?UTF-8?Q?=c3=81lvaro_Hern=c3=a1ndez_Tortosa?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
},
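A minimal sketch of turning that on for a single 9.6 session (it ships disabled); the plan fragment in the comments is only illustrative:

    SET max_parallel_workers_per_gather = 4;   -- the 9.6 default is 0, i.e. no parallelism
    -- An eligible plan then grows a Gather node, e.g.:
    --   Gather  (cost=... rows=...)
    --     Workers Planned: 4
    --     ->  Parallel Seq Scan on big_table ...

The worker count is also capped by max_worker_processes (8 by default), and only plans the planner considers parallel-safe will use it.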
{
"msg_contents": "> What’s wrong with this? There isn’t something wrong in RAM usage?\n\nNope, nothing wrong with RAM usage at all from what you've presented\nhere. Please consider the cut-and-paste you included a bit closer. All\nof your active threads are utilizing 100% CPU, and are therefore\nCPU-bound. If there were some kind of IO issue due to disk fetching,\nyour CPU utilization would be much lower. From the looks of things,\nyour threads are either operating on fully cached or otherwise\navailable pages, or are generating their own such that it doesn't\nmatter.\n\nThe real question is this: what are your queries/processes doing?\nBecause if the query plan is using a giant nested loop, or you are\nrelying on a stored procedure that's in a tight and non-optimized loop\nof some kind, you're going to be consuming a lot of clock cycles with\ndiminishing benefits. If you're not making use of set theory within a\ndatabase, for example, you might be getting 100x less throughput than\nyou could otherwise attain. If it's not proprietary in some way, or\nyou can obfuscate it into a test case, we can probably help then. As\nit stands, there isn't enough to go on.\n\n-- \nShaun Thomas\[email protected]\nhttp://bonesmoses.org/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 Mar 2017 20:27:30 -0500",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
},
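A minimal first step toward answering that is to capture what the busy backends are executing, using only the stock catalog view:

    SELECT pid,
           now() - query_start AS runtime,
           left(query, 80)     AS query
      FROM pg_stat_activity
     WHERE state = 'active'
     ORDER BY query_start;

Running EXPLAIN (ANALYZE) on one representative query would then show whether the time is going into nested loops, large sorts, or procedural code.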
{
"msg_contents": "Looks like Postgres will never \"use\" (visually) more than shared_buffers \nsize of memory.\n\nChange it to 48GB, and in your \"top\" output you will see how memory \nusage bumped up to this new limit.\n\nBut it's just a \"visual\" change, I doubt you'll get any benefits from it.\n\n\nOn 03/24/17 02:58, Pietro Pugni wrote:\n> Hi there,\n> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel \n> 4.4.0-66-generic). Hardware is:\n> - 2 x Intel Xeon E5-2690\n> - 96GB RAM\n> - Software mdadm RAID10 (6 x SSDs)\n>\n> Postgres is used in a sort of DWH application, so all the resources \n> are assigned to it and the aim is to maximize the single transaction \n> performance instead of balancing between multiple connections.\n>\n> The configuration variables I changed are the following ones:\n>\n> checkpoint_completion_target = 0.9\n> data_directory = '/mnt/raid10/pg_data_9.6.2'\n> default_statistics_target = 1000\n> effective_cache_size = 72GB\n> effective_io_concurrency = 1000\n> listen_addresses = '127.0.0.1,192.168.2.90'\n> maintenance_work_mem = 1GB\n> max_connections=32\n> random_page_cost=1.2\n> seq_page_cost=1.0\n> shared_buffers = 24GB\n> work_mem = 512MB\n>\n>\n> The kernel configuration in /etc/sysctl.conf is:\n>\n> # 24GB = (24*1024*1024*1024)\n> kernel.shmmax = 25769803776\n>\n> # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n> kernel.shmall = 6291456\n>\n> kernel.sched_migration_cost_ns = 5000000\n> kernel.sched_autogroup_enabled = 0\n>\n> vm.overcommit_memory = 2\n> vm.overcommit_ratio = 90\n> vm.swappiness = 4\n> vm.zone_reclaim_mode = 0\n> vm.dirty_ratio = 15\n> vm.dirty_background_ratio = 3\n> vm.nr_hugepages = 12657\n> vm.min_free_kbytes=262144\n>\n> dev.raid.speed_limit_max=1000000\n> dev.raid.speed_limit_min=1000000\n>\n>\n> *Huge pages are being used on this machine *and Postgres allocates \n> 24GB immediately after starting up, as set by vm.nr_hugepages = 12657.\n> My concern is that it never uses more than 24GB. For example, I’m \n> running 16 queries that use a lot of CPU (they do time series \n> expansion and some arithmetics). I estimate they will generate a \n> maximum of 2.5 billions of rows. Those queries are running since 48 \n> hours and don’t know when they will finish, but RAM never overpassed \n> those 24GB (+ some system).\n>\n> Output from /free -ht/:\n> total used free shared buff/cache available\n> Mem: 94G 28G 46G 17M 19G 64G\n> Swap: 15G 0B 15G\n> Total: 109G 28G 61G\n>\n> Output from /vmstat -S M/:\n> procs -----------memory---------- ---swap-- -----io---- -system-- \n> ------cpu-----\n> r b swpd free buff cache si so bi bo in cs us sy \n> id wa st\n> 17 0 0 47308 197 19684 0 0 4 12 3 8 96 0 \n> 3 0 0\n>\n>\n> Output from /top -U postgres/:\n> top - 10:54:19 up 2 days, 1:37, 1 user, load average: 16.00, 16.00, \n> 16.00\n> Tasks: 347 total, 17 running, 330 sleeping, 0 stopped, 0 zombie\n> %Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.0 hi, 0.0 \n> si, 0.0 st\n> KiB Mem : 98847584 total, 48442364 free, 30046352 used, 20358872 \n> buff/cache\n> KiB Swap: 15825916 total, 15825916 free, 0 used. 
67547664 avail \n> Mem\n>\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 9686 postgres 20 0 24.918g 214236 12628 R 100.0 0.2 2872:38 \n> postgres\n> 9687 postgres 20 0 24.918g 214212 12600 R 100.0 0.2 2872:27 \n> postgres\n> 9688 postgres 20 0 25.391g 709936 12708 R 100.0 0.7 2872:40 \n> postgres\n> 9691 postgres 20 0 24.918g 214516 12900 R 100.0 0.2 2865:23 \n> postgres\n> 9697 postgres 20 0 24.918g 214284 12676 R 100.0 0.2 2866:05 \n> postgres\n> 9698 postgres 20 0 24.922g 218608 12904 R 100.0 0.2 2872:31 \n> postgres\n> 9699 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2865:32 \n> postgres\n> 9702 postgres 20 0 24.922g 218332 12628 R 100.0 0.2 2865:24 \n> postgres\n> 9704 postgres 20 0 24.918g 214512 12904 R 100.0 0.2 2872:50 \n> postgres\n> 9710 postgres 20 0 24.918g 212364 12904 R 100.0 0.2 2865:38 \n> postgres\n> 9681 postgres 20 0 24.918g 212300 12596 R 99.7 0.2 2865:18 \n> postgres\n> 9682 postgres 20 0 24.918g 212108 12656 R 99.7 0.2 2872:34 \n> postgres\n> 9684 postgres 20 0 24.918g 212612 12908 R 99.7 0.2 2872:24 \n> postgres\n> 9685 postgres 20 0 24.918g 214208 12600 R 99.7 0.2 2872:47 \n> postgres\n> 9709 postgres 20 0 24.918g 214284 12672 R 99.7 0.2 2866:03 \n> postgres\n> 9693 postgres 20 0 24.918g 214300 12688 R 99.3 0.2 2865:59 \n> postgres\n> 9063 postgres 20 0 24.722g 14812 12956 S 0.3 0.0 0:07.36 \n> postgres\n> 9068 postgres 20 0 24.722g 6380 4232 S 0.3 0.0 0:02.15 \n> postgres\n> 9065 postgres 20 0 24.727g 10368 3516 S 0.0 0.0 0:04.24 \n> postgres\n> 9066 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:06.04 \n> postgres\n> 9067 postgres 20 0 24.722g 4100 2248 S 0.0 0.0 0:01.37 \n> postgres\n> 9069 postgres 20 0 161740 4596 2312 S 0.0 0.0 0:04.48 \n> postgres\n>\n> What’s wrong with this? There isn’t something wrong in RAM usage?\n>\n> Thank you all\n> Pietro\n\n\n\n\n\n\n\nLooks like Postgres will never \"use\" (visually) more than shared_buffers size of memory.\nChange it to 48GB, and in your\n \"top\" output you will see how memory usage bumped up to this new\n limit.\n\nBut it's just a \"visual\"\n change, I doubt you'll get any benefits from it.\n\n\nOn 03/24/17 02:58, Pietro Pugni wrote:\n\n\n\n Hi there,\n I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS\n (kernel 4.4.0-66-generic). 
Hardware is:\n - 2 x Intel Xeon E5-2690\n - 96GB RAM\n - Software mdadm RAID10 (6 x SSDs)\n\n\nPostgres is used in a sort of DWH application, so\n all the resources are assigned to it and the aim is to maximize\n the single transaction performance instead of balancing between\n multiple connections.\n\n\nThe configuration variables I changed are the\n following ones:\n\n\n\n checkpoint_completion_target\n = 0.9\n data_directory\n = '/mnt/raid10/pg_data_9.6.2'\n default_statistics_target\n = 1000\n effective_cache_size\n = 72GB\n effective_io_concurrency\n = 1000\n listen_addresses\n = '127.0.0.1,192.168.2.90'\n maintenance_work_mem\n = 1GB\n max_connections=32\n random_page_cost=1.2\n seq_page_cost=1.0\n shared_buffers\n = 24GB\n work_mem\n = 512MB\n\n\n\n\n\nThe kernel configuration in /etc/sysctl.conf is:\n\n\n\n #\n 24GB = (24*1024*1024*1024)\n kernel.shmmax\n = 25769803776\n\n\n #\n 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n kernel.shmall\n = 6291456\n\n\n kernel.sched_migration_cost_ns\n = 5000000\n kernel.sched_autogroup_enabled\n = 0\n\n\n vm.overcommit_memory\n = 2\n vm.overcommit_ratio\n = 90\n vm.swappiness\n = 4\n vm.zone_reclaim_mode\n = 0\n vm.dirty_ratio\n = 15\n vm.dirty_background_ratio\n = 3\n vm.nr_hugepages\n = 12657\n vm.min_free_kbytes=262144\n\n\n dev.raid.speed_limit_max=1000000\n dev.raid.speed_limit_min=1000000\n\n\n\n\n\nHuge pages are being used on this\n machine and Postgres allocates 24GB immediately after\n starting up, as set by vm.nr_hugepages = 12657.\nMy concern is that it never uses more than 24GB. For\n example, I’m running 16 queries that use a lot of CPU (they do\n time series expansion and some arithmetics). I estimate they\n will generate a maximum of 2.5 billions of rows. Those queries\n are running since 48 hours and don’t know when they will finish,\n but RAM never overpassed those 24GB (+ some system). \n\n\nOutput from free -ht:\n\n total used \n free shared buff/cache available\nMem: 94G 28G \n 46G 17M 19G 64G\nSwap: 15G 0B \n 15G\nTotal: 109G 28G \n 61G\n\n\n\nOutput from vmstat -S M:\n\nprocs -----------memory----------\n ---swap-- -----io---- -system-- ------cpu-----\n r b swpd free buff cache si \n so bi bo in cs us sy id wa st\n17 0 0 47308 197 19684 0 \n 0 4 12 3 8 96 0 3 0 0\n\n\n\n\n\nOutput from top -U postgres:\n\ntop - 10:54:19 up 2 days, 1:37, 1 user,\n load average: 16.00, 16.00, 16.00\nTasks: 347 total, 17 running, 330\n sleeping, 0 stopped, 0 zombie\n%Cpu(s):100.0 us, 0.0 sy, 0.0 ni, 0.0\n id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st\nKiB Mem : 98847584 total, 48442364 free,\n 30046352 used, 20358872 buff/cache\nKiB Swap: 15825916 total, 15825916 free,\n 0 used. 
67547664 avail Mem \n\n\n PID USER PR NI VIRT RES \n SHR S %CPU %MEM TIME+ COMMAND \n \n \n \n 9686 postgres 20 0 24.918g 214236\n 12628 R 100.0 0.2 2872:38 postgres \n \n \n \n 9687 postgres 20 0 24.918g 214212\n 12600 R 100.0 0.2 2872:27 postgres \n \n \n \n 9688 postgres 20 0 25.391g 709936\n 12708 R 100.0 0.7 2872:40 postgres \n \n \n \n 9691 postgres 20 0 24.918g 214516\n 12900 R 100.0 0.2 2865:23 postgres \n \n \n \n 9697 postgres 20 0 24.918g 214284\n 12676 R 100.0 0.2 2866:05 postgres \n \n \n \n 9698 postgres 20 0 24.922g 218608\n 12904 R 100.0 0.2 2872:31 postgres \n \n \n \n 9699 postgres 20 0 24.918g 214512\n 12904 R 100.0 0.2 2865:32 postgres \n \n \n \n 9702 postgres 20 0 24.922g 218332\n 12628 R 100.0 0.2 2865:24 postgres \n \n \n \n 9704 postgres 20 0 24.918g 214512\n 12904 R 100.0 0.2 2872:50 postgres \n \n \n \n 9710 postgres 20 0 24.918g 212364\n 12904 R 100.0 0.2 2865:38 postgres \n \n \n \n 9681 postgres 20 0 24.918g 212300\n 12596 R 99.7 0.2 2865:18 postgres \n \n \n \n 9682 postgres 20 0 24.918g 212108\n 12656 R 99.7 0.2 2872:34 postgres \n \n \n \n 9684 postgres 20 0 24.918g 212612\n 12908 R 99.7 0.2 2872:24 postgres \n \n \n \n 9685 postgres 20 0 24.918g 214208\n 12600 R 99.7 0.2 2872:47 postgres \n \n \n \n 9709 postgres 20 0 24.918g 214284\n 12672 R 99.7 0.2 2866:03 postgres \n \n \n \n 9693 postgres 20 0 24.918g 214300\n 12688 R 99.3 0.2 2865:59 postgres \n \n \n \n 9063 postgres 20 0 24.722g 14812\n 12956 S 0.3 0.0 0:07.36 postgres \n \n \n \n 9068 postgres 20 0 24.722g 6380 \n 4232 S 0.3 0.0 0:02.15 postgres \n \n \n \n 9065 postgres 20 0 24.727g 10368 \n 3516 S 0.0 0.0 0:04.24 postgres \n \n \n \n 9066 postgres 20 0 24.722g 4100 \n 2248 S 0.0 0.0 0:06.04 postgres \n \n \n \n 9067 postgres 20 0 24.722g 4100 \n 2248 S 0.0 0.0 0:01.37 postgres \n \n \n \n 9069 postgres 20 0 161740 4596 \n 2312 S 0.0 0.0 0:04.48 postgres \n\n\n\nWhat’s wrong with this? There isn’t something wrong\n in RAM usage?\n\n\nThank you all\n Pietro",
"msg_date": "Fri, 24 Mar 2017 18:56:27 -0700",
"msg_from": "trafdev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
},
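A note on the exchange above: rather than judging memory use from RES in top, the contrib module pg_buffercache (shipped with PostgreSQL, including 9.6) can show how much of shared_buffers actually holds data. A minimal sketch, assuming the extension can be installed on that server and the default 8kB block size:

CREATE EXTENSION IF NOT EXISTS pg_buffercache;

SELECT count(*) FILTER (WHERE relfilenode IS NOT NULL) AS buffers_in_use,
       count(*)                                        AS buffers_total,
       pg_size_pretty(count(*) * 8192)                 AS shared_buffers_size  -- assumes 8kB blocks
FROM pg_buffercache;

-- If buffers_in_use is close to buffers_total, shared_buffers really is full,
-- even though top only reports the pages each backend happens to have touched.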
{
"msg_contents": "On Fri, Mar 24, 2017 at 2:47 PM, Scott Marlowe <[email protected]> wrote:\n> On Fri, Mar 24, 2017 at 3:58 AM, Pietro Pugni <[email protected]> wrote:\n>> Hi there,\n>> I’m running PostgreSQL 9.6.2 on Ubuntu 16.04.2 TLS (kernel\n>> 4.4.0-66-generic). Hardware is:\n>> - 2 x Intel Xeon E5-2690\n>> - 96GB RAM\n>> - Software mdadm RAID10 (6 x SSDs)\n>>\n>> Postgres is used in a sort of DWH application, so all the resources are\n>> assigned to it and the aim is to maximize the single transaction performance\n>> instead of balancing between multiple connections.\n>>\n>> The configuration variables I changed are the following ones:\n>>\n>> checkpoint_completion_target = 0.9\n>> data_directory = '/mnt/raid10/pg_data_9.6.2'\n>> default_statistics_target = 1000\n>> effective_cache_size = 72GB\n>> effective_io_concurrency = 1000\n>> listen_addresses = '127.0.0.1,192.168.2.90'\n>> maintenance_work_mem = 1GB\n>> max_connections=32\n>> random_page_cost=1.2\n>> seq_page_cost=1.0\n>> shared_buffers = 24GB\n>> work_mem = 512MB\n>>\n>>\n>> The kernel configuration in /etc/sysctl.conf is:\n>>\n>> # 24GB = (24*1024*1024*1024)\n>> kernel.shmmax = 25769803776\n>>\n>> # 6MB = (24GB/4096) dove 4096 e' uguale a \"getconf PAGE_SIZE\"\n>> kernel.shmall = 6291456\n>>\n>> kernel.sched_migration_cost_ns = 5000000\n>> kernel.sched_autogroup_enabled = 0\n>>\n>> vm.overcommit_memory = 2\n>> vm.overcommit_ratio = 90\n>> vm.swappiness = 4\n>> vm.zone_reclaim_mode = 0\n>> vm.dirty_ratio = 15\n>> vm.dirty_background_ratio = 3\n>> vm.nr_hugepages = 12657\n>> vm.min_free_kbytes=262144\n>>\n>> dev.raid.speed_limit_max=1000000\n>> dev.raid.speed_limit_min=1000000\n>>\n>>\n>> Huge pages are being used on this machine and Postgres allocates 24GB\n>> immediately after starting up, as set by vm.nr_hugepages = 12657.\n>> My concern is that it never uses more than 24GB. For example, I’m running 16\n>> queries that use a lot of CPU (they do time series expansion and some\n>> arithmetics). I estimate they will generate a maximum of 2.5 billions of\n>> rows. Those queries are running since 48 hours and don’t know when they will\n>> finish, but RAM never overpassed those 24GB (+ some system).\n>>\n>> Output from free -ht:\n>> total used free shared buff/cache\n>> available\n>> Mem: 94G 28G 46G 17M 19G\n>> 64G\n>> Swap: 15G 0B 15G\n>> Total: 109G 28G 61G\n>\n> Looks normal to me. Note that the OS is caching 19G of data.\n> Postgresql is only going to allocate extra memory 512MB at a time for\n> big sorts. Any sort bigger than that will spill to disk. GIven that\n> top and vmstat seem to show you as being CPU bound I don't think\n> amount of memory postgresql is using is your problem.\n>\n> You'd be better off to ask for help in optimizing your queries IMHO.\n\n\n+1 this. Absent evidence, there is no reason to believe the memory is\nneeded. Memory is not magic pixie dust that makes queries go faster;\ngood data structure choices and algorithms remain the most important\ndeterminers of query performance.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Mar 2017 08:27:37 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres not using all RAM (Huge Page activated on a\n 96GB RAM system)"
}
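The point quoted above - that any sort larger than work_mem spills to disk no matter how much RAM is free - can be verified per query with EXPLAIN ANALYZE. A hedged sketch; the table and column names below are invented, only the work_mem value matches the original post:

SET work_mem = '512MB';

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM some_big_table          -- hypothetical table
ORDER BY some_column;        -- hypothetical column

-- "Sort Method: quicksort  Memory: ...kB" means the sort fit in work_mem;
-- "Sort Method: external merge  Disk: ...kB" means it spilled. Raising work_mem
-- for the session (as above) is what changes that, not shared_buffers.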
] |
[
{
"msg_contents": "It seems when self (inner/equi) joining there's two bad alternatives: either\nspecify a where clause for each self-joined table and incur poor estimate and\nplan, due to incorrect perceived independence of clauses, even though joined\ncolumn ought to be known equal; or, specify where clause only once, and incur\ncost of joining across all partitions, due to no contraint exclusion on (at\nleast) one self-joined table heirarchy.\n\n-- Specify WHERE for each table causes bad underestimate:\n|ts=# explain analyze SELECT * FROM eric_enodeb_metrics a JOIN eric_enodeb_metrics b USING (start_time, site_id) WHERE a.start_time>='2017-03-19' AND a.start_time<'2017-03-20' AND b.start_time>='2017-03-19' AND b.start_time<'2017-03-20';\n| Hash Join (cost=7310.80..14680.86 rows=14 width=1436) (actual time=33.053..73.180 rows=7869 loops=1)\n| Hash Cond: ((a.start_time = b.start_time) AND (a.site_id = b.site_id))\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=1.394..19.414 rows=7869 loops=1)\n| -> Seq Scan on eric_enodeb_metrics a (cost=0.00..0.00 rows=1 width=718) (actual time=0.003..0.003 rows=0 loops=1)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 a_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=1.390..14.536 rows=7869 loops=1)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=1.351..1.351 rows=7869 loops=1)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Hash (cost=7192.56..7192.56 rows=7883 width=723) (actual time=31.620..31.620 rows=7869 loops=1)\n| Buckets: 8192 Batches: 1 Memory Usage: 1986kB\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=0.902..19.543 rows=7869 loops=1)\n| -> Seq Scan on eric_enodeb_metrics b (cost=0.00..0.00 rows=1 width=718) (actual time=0.002..0.002 rows=0 loops=1)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 b_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=0.899..14.353 rows=7869 loops=1)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=0.867..0.867 rows=7869 loops=1)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n\n\n\n-- Specify WHERE once gets good estimate, but with unnecessary scan of all child partitions:\n|ts=# explain analyze SELECT * FROM eric_enodeb_metrics a JOIN eric_enodeb_metrics b USING (start_time, site_id) WHERE start_time>='2017-03-19' AND start_time<'2017-03-20';\n| Gather (cost=8310.80..316545.60 rows=9591 width=1427) (actual time=9012.967..9073.539 rows=7869 loops=1)\n| Workers Planned: 3\n| Workers Launched: 3\n| -> Hash Join (cost=7310.80..314586.50 rows=3094 width=1427) (actual 
time=8892.121..8937.245 rows=1967 loops=4)\n| Hash Cond: ((b.start_time = a.start_time) AND (b.site_id = a.site_id))\n| -> Append (cost=0.00..261886.54 rows=2015655 width=714) (actual time=11.464..8214.063 rows=1308903 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_metrics b (cost=0.00..0.00 rows=1 width=718) (actual time=0.001..0.001 rows=0 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_201510 b_1 (cost=0.00..10954.43 rows=60343 width=707) (actual time=11.460..258.852 rows=46766 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_201511 b_2 (cost=0.00..10310.91 rows=56891 width=707) (actual time=18.395..237.841 rows=44091 loops=4)\n|[...]\n| -> Parallel Seq Scan on eric_enodeb_201703 b_29 (cost=0.00..6959.75 rows=81875 width=723) (actual time=0.017..101.969 rows=49127 loops=4)\n| -> Hash (cost=7192.56..7192.56 rows=7883 width=723) (actual time=51.843..51.843 rows=7869 loops=4)\n| Buckets: 8192 Batches: 1 Memory Usage: 1970kB\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=2.558..27.829 rows=7869 loops=4)\n| -> Seq Scan on eric_enodeb_metrics a (cost=0.00..0.00 rows=1 width=718) (actual time=0.014..0.014 rows=0 loops=4)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 a_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=2.542..17.305 rows=7869 loops=4)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=2.494..2.494 rows=7869 loops=4)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n\n\nMinor variations have same problems;\n-- Scans all partitions:\nts=# explain analyze SELECT * FROM (SELECT * FROM eric_enodeb_metrics a) t1 JOIN (SELECT * FROM eric_enodeb_metrics b WHERE start_time>='2017-03-19 23:00:00' AND start_time<'2017-03-20') t2 USING (start_time, site_id);\n\n-- Underestimtes due to perceived independence of clause:\n|ts=# explain analyze SELECT * FROM (SELECT * FROM eric_enodeb_metrics a WHERE start_time>='2017-03-19' AND start_time<'2017-03-20') t1 JOIN (SELECT * FROM eric_enodeb_metrics b WHERE start_time>='2017-03-19' AND start_time<'2017-03-20') t2 USING (start_time, site_id);\n| Hash Join (cost=7308.59..14676.41 rows=14 width=1436) (actual time=30.352..64.004 rows=7869 loops=1)\n\nI'll thank you in advance for your response.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 26 Mar 2017 14:33:44 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "self join estimate and constraint exclusion"
}
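A sketch of why the single-WHERE form scans every child partition. Constraint exclusion compares each child's CHECK constraint against clauses that are constants for that table; the planner propagates the join equality itself, but not the range (inequality) clauses, to the other side of the join. The partition and column names below follow the plans above; the exact CHECK constraint text is an assumption:

SHOW constraint_exclusion;          -- 'partition' (the default) is sufficient here
\d+ eric_enodeb_201703              -- shows the CHECK constraint on start_time

-- Pruning works when the literal range is attached to the alias being scanned:
EXPLAIN
SELECT * FROM eric_enodeb_metrics b
WHERE b.start_time >= '2017-03-19' AND b.start_time < '2017-03-20';
-- Expect only the (empty) parent plus eric_enodeb_201703 in this plan.

-- It cannot work when the only restriction on b is b.start_time = a.start_time,
-- since that is not a constant at plan time - hence the trade-off described above.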
] |
[
{
"msg_contents": "Good day\n\n \n\nAt my company we're busy converting a product from using SQL Server to\nPostgres. One part of the old design involves filtering data for the rights\na user has.\n\n \n\nThe SQL Server table looked like this:\n\nCREATE TABLE [dbo].[usrUserRights] ( \n [UserId] [dbo].[dm_Id] NOT NULL,\n [SiteId] [dbo].[dm_Id] NOT NULL,\n [RightId] [dbo].[dm_Id] NOT NULL,\n CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY\nCLUSTERED([UserId],[RightId],[SiteId])\n\n);\n\n\n\nAll data in other tables would have a SiteId. Users would be assigned rights\nfor certain Sites. We would then be able to filter data with a join.\n\nExample:\n\nSELECT Id, Code FROM SomeTable st\n\nJOIN usrUserRights ur ON st.SiteId = ur.SiteId AND ur.UserId = @UserId AND\nur.RightId = @TheRightRequired\n\n \n\nThe one design flaw with this is that the table gets extremely large. At our\nlargest client this table contains over 700mil records. For a single user\nwith lots of rights there could be 7mil records to cover their rights.\n\n \n\nIn Postgres I was thinking of going with a design like this\n\nCREATE TABLE security.user_right_site\n(\n user_id bigint NOT NULL,\n right_id bigint NOT NULL,\n sites bigint[]\n);\ncreate index on security.user_right_site(user_id, right_id);\n\n \n\nThis drastically cut down on the number of records in the table. It also\nseems to make a massive change to the storage requirements.\n\nThe old design requires 61GB vs 2.6GB.\n\n \n\nMy one concern is regarding the limitations of the array type in Postgres.\nIs there a point at which one should not use it? Currently our largest\nclient has 6000+ sites, meaning that the array would contain that many\nitems. What would the upper feasible limit be in Postgres?\n\n \n\nRegarding queries to filter data against this table in Postgres. Any advice\nfor the best method. 
I've done some testing myself, but just want to know if\nthere are other alternatives.\n\n \n\nAttempt 1, using Any (250ms)\n\nselect a.id, a.code, a.description from ara.asset a\n join security.user_right_site urs on urs.user_id = 1783 and urs.right_id\n= 10000 and a.site_id = any(urs.sites) \nwhere a.is_historical = true;\n\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n--------------------------------------------------\n\nNested Loop (cost=1000.42..22712.71 rows=4 width=47)\n\n Join Filter: (a.site_id = ANY (urs.sites))\n\n -> Gather (cost=1000.00..22599.49 rows=4191 width=55)\n\n Workers Planned: 3\n\n -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352\nwidth=55)\n\n Filter: is_historical\n\n -> Materialize (cost=0.42..8.45 rows=1 width=530)\n\n -> Index Scan using user_right_site_user_id_right_id_idx on\nuser_right_site urs (cost=0.42..8.45 rows=1 width=530)\n\n Index Cond: ((user_id = 1783) AND (right_id = 10000))\n\n(9 rows)\n\n \n\nAttempt 2, using CTE (65ms)\n\nwith sites as\n(\n select unnest(sites) AS site_id from security.user_right_site where\nuser_id = 1783 and right_id = 10000\n)\nselect a.id, a.code, a.description from ara.asset a\njoin sites s on a.site_id = s.site_id\nwhere a.is_historical = true;\n\n\n\n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n------------------------------------------\n\nHash Join (cost=1012.19..22628.68 rows=41 width=47)\n\n Hash Cond: (a.site_id = s.site_id)\n\n CTE sites\n\n -> Index Scan using user_right_site_user_id_right_id_idx on\nuser_right_site (cost=0.42..8.94 rows=100 width=8)\n\n Index Cond: ((user_id = 1783) AND (right_id = 10000))\n\n -> Gather (cost=1000.00..22599.49 rows=4191 width=55)\n\n Workers Planned: 3\n\n -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352\nwidth=55)\n\n Filter: is_historical\n\n -> Hash (cost=2.00..2.00 rows=100 width=8)\n\n -> CTE Scan on sites s (cost=0.00..2.00 rows=100 width=8)\n\n(11 rows)\n\n \n\nAttempt 3, using sub select (65ms)\n\nselect a.id, a.code, a.description from\n(select unnest(sites) AS site_id from security.user_right_site where user_id\n= 1783 and right_id = 10000) sites\njoin ara.asset a on sites.site_id = a.site_id\nwhere a.is_historical = true;\n\n \n\n QUERY PLAN\n\n----------------------------------------------------------------------------\n----------------------------------------------------\n\nGather (cost=1011.19..22209.86 rows=128 width=47)\n\n Workers Planned: 3\n\n -> Hash Join (cost=11.19..21197.06 rows=41 width=47)\n\n Hash Cond: (a.site_id = (unnest(user_right_site.sites)))\n\n -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352\nwidth=55)\n\n Filter: is_historical\n\n -> Hash (cost=9.94..9.94 rows=100 width=8)\n\n -> Index Scan using user_right_site_user_id_right_id_idx on\nuser_right_site (cost=0.42..8.94 rows=100 width=8)\n\n Index Cond: ((user_id = 1783) AND (right_id = 10000))\n\n(9 rows)\n\n \n\n \n\nRegards\n\nRiaan Stander\n\n\nGood day At my company we’re busy converting a product from using SQL Server to Postgres. One part of the old design involves filtering data for the rights a user has. The SQL Server table looked like this:CREATE TABLE [dbo].[usrUserRights] ( [UserId] [dbo].[dm_Id] NOT NULL, [SiteId] [dbo].[dm_Id] NOT NULL, [RightId] [dbo].[dm_Id] NOT NULL, CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY CLUSTERED([UserId],[RightId],[SiteId]));All data in other tables would have a SiteId. 
Users would be assigned rights for certain Sites. We would then be able to filter data with a join.Example:SELECT Id, Code FROM SomeTable stJOIN usrUserRights ur ON st.SiteId = ur.SiteId AND ur.UserId = @UserId AND ur.RightId = @TheRightRequired The one design flaw with this is that the table gets extremely large. At our largest client this table contains over 700mil records. For a single user with lots of rights there could be 7mil records to cover their rights. In Postgres I was thinking of going with a design like thisCREATE TABLE security.user_right_site( user_id bigint NOT NULL, right_id bigint NOT NULL, sites bigint[]);create index on security.user_right_site(user_id, right_id); This drastically cut down on the number of records in the table. It also seems to make a massive change to the storage requirements.The old design requires 61GB vs 2.6GB. My one concern is regarding the limitations of the array type in Postgres. Is there a point at which one should not use it? Currently our largest client has 6000+ sites, meaning that the array would contain that many items. What would the upper feasible limit be in Postgres? Regarding queries to filter data against this table in Postgres. Any advice for the best method. I’ve done some testing myself, but just want to know if there are other alternatives. Attempt 1, using Any (250ms)select a.id, a.code, a.description from ara.asset a join security.user_right_site urs on urs.user_id = 1783 and urs.right_id = 10000 and a.site_id = any(urs.sites) where a.is_historical = true; QUERY PLAN------------------------------------------------------------------------------------------------------------------------------ Nested Loop (cost=1000.42..22712.71 rows=4 width=47) Join Filter: (a.site_id = ANY (urs.sites)) -> Gather (cost=1000.00..22599.49 rows=4191 width=55) Workers Planned: 3 -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352 width=55) Filter: is_historical -> Materialize (cost=0.42..8.45 rows=1 width=530) -> Index Scan using user_right_site_user_id_right_id_idx on user_right_site urs (cost=0.42..8.45 rows=1 width=530) Index Cond: ((user_id = 1783) AND (right_id = 10000))(9 rows) Attempt 2, using CTE (65ms)with sites as( select unnest(sites) AS site_id from security.user_right_site where user_id = 1783 and right_id = 10000)select a.id, a.code, a.description from ara.asset ajoin sites s on a.site_id = s.site_idwhere a.is_historical = true; QUERY PLAN---------------------------------------------------------------------------------------------------------------------- Hash Join (cost=1012.19..22628.68 rows=41 width=47) Hash Cond: (a.site_id = s.site_id) CTE sites -> Index Scan using user_right_site_user_id_right_id_idx on user_right_site (cost=0.42..8.94 rows=100 width=8) Index Cond: ((user_id = 1783) AND (right_id = 10000)) -> Gather (cost=1000.00..22599.49 rows=4191 width=55) Workers Planned: 3 -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352 width=55) Filter: is_historical -> Hash (cost=2.00..2.00 rows=100 width=8) -> CTE Scan on sites s (cost=0.00..2.00 rows=100 width=8)(11 rows) Attempt 3, using sub select (65ms)select a.id, a.code, a.description from(select unnest(sites) AS site_id from security.user_right_site where user_id = 1783 and right_id = 10000) sitesjoin ara.asset a on sites.site_id = a.site_idwhere a.is_historical = true; QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------- Gather (cost=1011.19..22209.86 rows=128 width=47) 
Workers Planned: 3 -> Hash Join (cost=11.19..21197.06 rows=41 width=47) Hash Cond: (a.site_id = (unnest(user_right_site.sites))) -> Parallel Seq Scan on asset a (cost=0.00..21180.39 rows=1352 width=55) Filter: is_historical -> Hash (cost=9.94..9.94 rows=100 width=8) -> Index Scan using user_right_site_user_id_right_id_idx on user_right_site (cost=0.42..8.94 rows=100 width=8) Index Cond: ((user_id = 1783) AND (right_id = 10000))(9 rows) RegardsRiaan Stander",
"msg_date": "Tue, 28 Mar 2017 01:43:37 +0200",
"msg_from": "\"Riaan Stander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Best design for performance"
},
{
"msg_contents": "On Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander <[email protected]> wrote:\n> In Postgres I was thinking of going with a design like this\n>\n> CREATE TABLE security.user_right_site\n> (\n> user_id bigint NOT NULL,\n> right_id bigint NOT NULL,\n> sites bigint[]\n> );\n> create index on security.user_right_site(user_id, right_id);\n>\n>\n>\n> This drastically cut down on the number of records in the table. It also\n> seems to make a massive change to the storage requirements.\n>\n> The old design requires 61GB vs 2.6GB.\n\nHow did you query the table's size? You're probably failing to account\nfor TOAST tables.\n\nI'd suggest using pg_total_relation_size.\n\n> My one concern is regarding the limitations of the array type in Postgres.\n> Is there a point at which one should not use it? Currently our largest\n> client has 6000+ sites, meaning that the array would contain that many\n> items. What would the upper feasible limit be in Postgres?\n\nIn that design, rows with a large number of sites would probably end\nup TOASTing the sites array.\n\nThat will make access to that array a tad slower, but it would\nprobably be OK for your use case, since you'll ever just read one such\nrow per query. You'll have to test to be sure.\n\nThe limit on that design is about 128M items on sites, IIRC (ie: the\nmaximum size of values is 1GB, so an array of 128M bigints is above\nthat limit). You'll probably have issues much earlier than that. For\ninstance, a filter of the form \"site_id = ANY(sites)\" with that many\nentries would probably be unusably slow.\n\nPersonally, I would go for fetching the sites array on the application\nside, and using site_id = ANY(ARRAY[...]) if small enough, and a\nsubselect if the array is too big. That would let the planner be\nsmarter, since it'll have the literal array list at planning time and\nwill be able to fetch accurate stats, and choose an optimal plan based\non data skew.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Mar 2017 21:42:04 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best design for performance"
},
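For reference, a minimal sketch of the size check being suggested here; pg_relation_size, pg_indexes_size and pg_total_relation_size are core functions, and the table name follows the thread:

SELECT pg_size_pretty(pg_relation_size('security.user_right_site'))       AS heap_only,
       pg_size_pretty(pg_indexes_size('security.user_right_site'))        AS indexes,
       pg_size_pretty(pg_total_relation_size('security.user_right_site')) AS total_with_toast;

-- pg_total_relation_size includes the TOAST table, which is where long "sites"
-- arrays are stored out of line, so it is the number to compare against the
-- 61GB of the row-per-site design.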
{
"msg_contents": "I'm using the first query from here.\nhttps://wiki.postgresql.org/wiki/Disk_Usage\nIt does seem to include toast data.\n\nThe plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client adhoc queries.\n\n-----Original Message-----\nFrom: Claudio Freire [mailto:[email protected]] \nSent: 28 March 2017 02:42 AM\nTo: Riaan Stander <[email protected]>\nCc: postgres performance list <[email protected]>\nSubject: Re: [PERFORM] Best design for performance\n\nOn Mon, Mar 27, 2017 at 8:43 PM, Riaan Stander <[email protected]> wrote:\n> In Postgres I was thinking of going with a design like this\n>\n> CREATE TABLE security.user_right_site\n> (\n> user_id bigint NOT NULL,\n> right_id bigint NOT NULL,\n> sites bigint[]\n> );\n> create index on security.user_right_site(user_id, right_id);\n>\n>\n>\n> This drastically cut down on the number of records in the table. It \n> also seems to make a massive change to the storage requirements.\n>\n> The old design requires 61GB vs 2.6GB.\n\nHow did you query the table's size? You're probably failing to account for TOAST tables.\n\nI'd suggest using pg_total_relation_size.\n\n> My one concern is regarding the limitations of the array type in Postgres.\n> Is there a point at which one should not use it? Currently our largest \n> client has 6000+ sites, meaning that the array would contain that many \n> items. What would the upper feasible limit be in Postgres?\n\nIn that design, rows with a large number of sites would probably end up TOASTing the sites array.\n\nThat will make access to that array a tad slower, but it would probably be OK for your use case, since you'll ever just read one such row per query. You'll have to test to be sure.\n\nThe limit on that design is about 128M items on sites, IIRC (ie: the maximum size of values is 1GB, so an array of 128M bigints is above that limit). You'll probably have issues much earlier than that. For instance, a filter of the form \"site_id = ANY(sites)\" with that many entries would probably be unusably slow.\n\nPersonally, I would go for fetching the sites array on the application side, and using site_id = ANY(ARRAY[...]) if small enough, and a subselect if the array is too big. That would let the planner be smarter, since it'll have the literal array list at planning time and will be able to fetch accurate stats, and choose an optimal plan based on data skew.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Mar 2017 03:17:11 +0200",
"msg_from": "\"Riaan Stander\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Best design for performance"
},
{
"msg_contents": "> From: Claudio Freire [mailto:[email protected]]\n>\n> How did you query the table's size? You're probably failing to account for TOAST tables.\n>\n> I'd suggest using pg_total_relation_size.\n...\nOn Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander <[email protected]> wrote:\n> I'm using the first query from here.\n> https://wiki.postgresql.org/wiki/Disk_Usage\n\nPlease don't top post.\n\nIt's a surprisingly big difference. TOAST could be compressing the\narray, but I wouldn't expect it to be that compressible. Do you have\nany stats about the length of the site array per row?\n\n> The plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client adhoc queries.\n\nEspecially for reporting queries, you want the planner's stats to be\nas accurate as possible, and placing a literal sites arrays in the\nquery in my experience is the best way to achieve that. But that is\nindeed limited to reasonably small arrays, thereby the need to have\nboth variants to adapt the query to each case.\n\nIf you can't afford to do that change at the application level, I\nwould expect that the original schema without the array should be\nsuperior. The array hides useful information from the planner, and\nthat *should* hurt you.\n\nYou'll have to test with a reasonably large data set, resembling a\nproduction data set as much as possible.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 27 Mar 2017 23:22:54 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best design for performance"
},
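A hedged sketch of the two application-side variants described above. The table and column names and the user_id/right_id values come from the thread; the literal site ids and the exact cut-over point between the variants are invented:

-- Small rights set: inline the literal array so the planner sees the values
-- and can use accurate statistics on asset.site_id.
SELECT a.id, a.code
FROM ara.asset a
WHERE a.is_historical
  AND a.site_id = ANY (ARRAY[101, 205, 318]::bigint[]);

-- Large rights set: fall back to joining against the unnested array.
SELECT a.id, a.code
FROM ara.asset a
JOIN (SELECT unnest(sites) AS site_id
      FROM security.user_right_site
      WHERE user_id = 1783 AND right_id = 10000) s USING (site_id)
WHERE a.is_historical;

The application would fetch the array once, count its elements, and pick the first form below some threshold (a few hundred ids, say) and the second form otherwise.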
{
"msg_contents": "On 28 Mar 2017 4:22 AM, Claudio Freire wrote:\n>> From: Claudio Freire [mailto:[email protected]]\n>>\n>> How did you query the table's size? You're probably failing to account for TOAST tables.\n>>\n>> I'd suggest using pg_total_relation_size.\n> ...\n> On Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander <[email protected]> wrote:\n>> I'm using the first query from here.\n>> https://wiki.postgresql.org/wiki/Disk_Usage\n> Please don't top post.\n>\n> It's a surprisingly big difference. TOAST could be compressing the\n> array, but I wouldn't expect it to be that compressible. Do you have\n> any stats about the length of the site array per row?\n>\n>> The plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client adhoc queries.\n> Especially for reporting queries, you want the planner's stats to be\n> as accurate as possible, and placing a literal sites arrays in the\n> query in my experience is the best way to achieve that. But that is\n> indeed limited to reasonably small arrays, thereby the need to have\n> both variants to adapt the query to each case.\n>\n> If you can't afford to do that change at the application level, I\n> would expect that the original schema without the array should be\n> superior. The array hides useful information from the planner, and\n> that *should* hurt you.\n>\n> You'll have to test with a reasonably large data set, resembling a\n> production data set as much as possible.\n>\n>\nI did some more testing on this. My primary concern that not all the \ndata was there in the array version,but after doing some extensive \ntesting all seems to be there.\n\nI've done some comparisons vs the SQL Server version too.\nSQL Sever Table with over 700mil records:\n\nCREATE TABLE [dbo].[usrUserRights] (\n [UserId] [dbo].[dm_Id] NOT NULL,\n [SiteId] [dbo].[dm_Id] NOT NULL,\n [RightId] [dbo].[dm_Id] NOT NULL,\n CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY \nCLUSTERED([UserId],[RightId],[SiteId])\n);\n\nTakes23GB for data and 200MB for indexes.\n\nPostgres table with over 700mil records:\n\nCREATE TABLE security.user_right_site2\n(\n user_id bigint NOT NULL,\n right_id bigint NOT NULL,\n site_id bigint NOT NULL\n);\ncreate index on security.user_right_site2(user_id, right_id);\n\nTakes 35GB data and 26GB index, for a total of 61GB.\n\nThat is quite a large increase over SQL Server storage. Am I missing \nsomething? Makes me worry about the rest of the database we still have \nto convert.\n\nPostgres Array version ends up with only 600k records, due to aggregation:\nCREATE TABLE security.user_right_site\n(\n user_id bigint NOT NULL,\n right_id bigint NOT NULL,\n sites bigint[]\n);\ncreate index on security.user_right_site(user_id, right_id);\n\nTakes 339Mb data, 25Mb index and 2240Mb TOAST\n\nRegarding the Array length for each of these. They currently have max \n6500 site ids.\n\nRegards\nRiaan\n\n\n\n\n\n\n On 28 Mar 2017 4:22 AM, Claudio Freire wrote:\n\n\nFrom: Claudio Freire [mailto:[email protected]]\n\nHow did you query the table's size? You're probably failing to account for TOAST tables.\n\nI'd suggest using pg_total_relation_size.\n\n\n...\nOn Mon, Mar 27, 2017 at 10:17 PM, Riaan Stander <[email protected]> wrote:\n\n\nI'm using the first query from here.\nhttps://wiki.postgresql.org/wiki/Disk_Usage\n\n\nPlease don't top post.\n\nIt's a surprisingly big difference. TOAST could be compressing the\narray, but I wouldn't expect it to be that compressible. 
Do you have\nany stats about the length of the site array per row?\n\n\n\nThe plan is to do the rights checking in the application. The join solution gets used for reports to filter data & client adhoc queries.\n\n\nEspecially for reporting queries, you want the planner's stats to be\nas accurate as possible, and placing a literal sites arrays in the\nquery in my experience is the best way to achieve that. But that is\nindeed limited to reasonably small arrays, thereby the need to have\nboth variants to adapt the query to each case.\n\nIf you can't afford to do that change at the application level, I\nwould expect that the original schema without the array should be\nsuperior. The array hides useful information from the planner, and\nthat *should* hurt you.\n\nYou'll have to test with a reasonably large data set, resembling a\nproduction data set as much as possible.\n\n\n\n\nI did some more testing on this. My primary concern that not all\n the data was there in the array version, but after doing\n some extensive testing all seems to be there.\n\n I've done some comparisons vs the SQL Server version too.\nSQL Sever Table with over 700mil records:\n\n CREATE TABLE [dbo].[usrUserRights] ( \n [UserId] [dbo].[dm_Id] NOT NULL,\n [SiteId] [dbo].[dm_Id] NOT NULL,\n [RightId] [dbo].[dm_Id] NOT NULL,\n CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY\n KEY CLUSTERED([UserId],[RightId],[SiteId])\n);\n\n Takes 23GB for data and 200MB for indexes.\n\n Postgres table with over 700mil records:\n\nCREATE TABLE security.user_right_site2\n(\n user_id bigint NOT NULL,\n right_id bigint NOT NULL,\n site_id bigint NOT NULL\n);\ncreate index on security.user_right_site2(user_id, right_id);\n\n Takes 35GB data and 26GB index, for a total of 61GB.\n\n That is quite a large increase over SQL Server storage. Am I\n missing something? Makes me worry about the rest of the database\n we still have to convert.\n\n Postgres Array version ends up with only 600k records, due to\n aggregation:\n CREATE TABLE security.user_right_site\n (\n user_id bigint NOT NULL,\n right_id bigint NOT NULL,\n sites bigint[]\n );\n create index on security.user_right_site(user_id, right_id);\n\n Takes 339Mb data, 25Mb index and 2240Mb TOAST\n\n Regarding the Array length for each of these. They currently have\n max 6500 site ids.\n\n Regards\n Riaan",
"msg_date": "Tue, 28 Mar 2017 14:41:37 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best design for performance"
},
{
"msg_contents": "On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander <[email protected]> wrote:\n> CREATE TABLE [dbo].[usrUserRights] (\n> [UserId] [dbo].[dm_Id] NOT NULL,\n> [SiteId] [dbo].[dm_Id] NOT NULL,\n> [RightId] [dbo].[dm_Id] NOT NULL,\n> CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY\n> CLUSTERED([UserId],[RightId],[SiteId])\n> );\n>\n> Takes 23GB for data and 200MB for indexes.\n>\n> Postgres table with over 700mil records:\n>\n> CREATE TABLE security.user_right_site2\n> (\n> user_id bigint NOT NULL,\n> right_id bigint NOT NULL,\n> site_id bigint NOT NULL\n> );\n> create index on security.user_right_site2(user_id, right_id);\n>\n> Takes 35GB data and 26GB index, for a total of 61GB.\n>\n> That is quite a large increase over SQL Server storage. Am I missing\n> something? Makes me worry about the rest of the database we still have to\n> convert.\n\nIndexes are quite fat in postgres, especially if you index all\ncolumns. To make the difference even bigger, it seems like there is\nvery hardcore compression going on in SQL Server, for that index to be\nonly 200MB. Are you sure you measured it correctly?\n\nIn any case, yes, indexes will be fatter in postgres. Their\nperformance shouldn't suffer considerably, though, given enough RAM.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Mar 2017 14:15:49 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best design for performance"
},
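To see where the 61GB goes, the catalogs can report the heap and each index separately. A sketch using standard catalog tables and core size functions; the table name follows the thread:

SELECT c.relname                                      AS index_name,
       pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE i.indrelid = 'security.user_right_site2'::regclass;

SELECT pg_size_pretty(pg_relation_size('security.user_right_site2')) AS heap_size;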
{
"msg_contents": "On 2017-03-28 07:15 PM, Claudio Freire wrote:\n> On Tue, Mar 28, 2017 at 9:41 AM, Riaan Stander <[email protected]> wrote:\n>> CREATE TABLE [dbo].[usrUserRights] (\n>> [UserId] [dbo].[dm_Id] NOT NULL,\n>> [SiteId] [dbo].[dm_Id] NOT NULL,\n>> [RightId] [dbo].[dm_Id] NOT NULL,\n>> CONSTRAINT [pk_usrUserRights_UserId_RightId_SiteId] PRIMARY KEY\n>> CLUSTERED([UserId],[RightId],[SiteId])\n>> );\n>>\n>> Takes 23GB for data and 200MB for indexes.\n>>\n>> Postgres table with over 700mil records:\n>>\n>> CREATE TABLE security.user_right_site2\n>> (\n>> user_id bigint NOT NULL,\n>> right_id bigint NOT NULL,\n>> site_id bigint NOT NULL\n>> );\n>> create index on security.user_right_site2(user_id, right_id);\n>>\n>> Takes 35GB data and 26GB index, for a total of 61GB.\n>>\n>> That is quite a large increase over SQL Server storage. Am I missing\n>> something? Makes me worry about the rest of the database we still have to\n>> convert.\n> Indexes are quite fat in postgres, especially if you index all\n> columns. To make the difference even bigger, it seems like there is\n> very hardcore compression going on in SQL Server, for that index to be\n> only 200MB. Are you sure you measured it correctly?\n>\n> In any case, yes, indexes will be fatter in postgres. Their\n> performance shouldn't suffer considerably, though, given enough RAM.\n>\n>\nThat 200Mb is for another index on that table. Due to the table being \nclustered on those 3 columns SQL Server sees the clustered index as the \ntable storage.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Mar 2017 00:51:06 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Best design for performance"
}
] |
[
{
"msg_contents": "While updating our database which includes a lot of deletions where a lot\nof foreign key references are involved we found that in the case of two\ntables the indexes are ignored and it slow down the process a lot.\n\nHere are stats about those two tables:\n\nrelname seq_scan seq_tup_read idx_scan idx_tup_fetch n_tup_ins n_tup_upd\nn_tup_del n_tup_hot_upd n_live_tup n_dead_tup n_mod_since_analyze\nbelongs_to 227 52539487559 0 0 771 0 1459 0 125 1459 2230\npublication 229 11502854612 0 0 254 0 229 0 60 229 483\nPublication ( has a foreign key (ut) and more than 50million records) that\nreferences the top of the chain of references. This field (ut) is also the\nprimary key of publication.\n\nIn the case of belongs_to (about 231393000 records) which references the\nsame table (article) ut has an index.\n\nAll other tables in this dependency chain reports 100% or near 100% usage\nof the indexes e.g.\n\ncitation_2010_2014 0 0 226 1882 2510 0 1910 0 816 1910 4420\n\nThe indexes are on a ssd and we have set the random_page_cost to 1 for\nthose queries.\n\nThe definition of belongs_to:\n\nCREATE TABLE wos_2017_1.belongs_to\n(\n suborg_id uuid,\n organisation_id uuid,\n address_id uuid,\n ut citext,\n uuid uuid NOT NULL,\n id integer NOT NULL DEFAULT\nnextval('wos_2017_1.belongs2_id_seq'::regclass),\n pref_name_id uuid,\n addr_no smallint,\n reprint_addr_no smallint,\n CONSTRAINT belongs2_pkey PRIMARY KEY (uuid),\n CONSTRAINT belongs_to_address_id_fkey FOREIGN KEY (address_id)\n REFERENCES wos_2017_1.address (uuid) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE SET NULL,\n CONSTRAINT belongs_to_pref_name_id_fkey FOREIGN KEY (pref_name_id)\n REFERENCES wos_2017_1.org_pref_name (uuid) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE SET NULL,\n CONSTRAINT belongs_to_suborg_id_fkey FOREIGN KEY (suborg_id)\n REFERENCES wos_2017_1.suborg (uuid) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE SET NULL,\n CONSTRAINT belongs_to_ut_fkey FOREIGN KEY (ut)\n REFERENCES wos_2017_1.article (ut) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE CASCADE,\n CONSTRAINT belongs2_id_key UNIQUE (id),\n CONSTRAINT\nbelongs2_ut_suborg_id_organisation_id_address_id_addr_no_re_key UNIQUE (ut,\nsuborg_id, organisation_id, address_id, addr_no, reprint_addr_no,\npref_name_id)\n)\nWITH (\n OIDS=FALSE\n);\nwith indexes on address_id, organisation_id, pref_name_id, ut\n\nI have also tried to set enable_seqscan to false for these queries, but\nstill no usage of the indexes.\n\nWhy would that be?\n\nRegards\nJohann\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\nWhile updating our database which includes a lot of deletions where a lot of foreign key references are involved we found that in the case of two tables the indexes are ignored and it slow down the process a lot.Here are stats about those two tables:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nrelname\nseq_scan\nseq_tup_read\nidx_scan\nidx_tup_fetch\nn_tup_ins\nn_tup_upd\nn_tup_del\nn_tup_hot_upd\nn_live_tup\nn_dead_tup\nn_mod_since_analyze\n\n\nbelongs_to\n227\n52539487559\n0\n0\n771\n0\n1459\n0\n125\n1459\n2230\n\n\npublication\n229\n11502854612\n0\n0\n254\n0\n229\n0\n60\n229\n483\n\n\nPublication ( has a foreign key (ut) and more than 50million records) that references the top of the chain of references. 
This field (ut) is also the primary key of publication.In the case of belongs_to (about 231393000 records) which references the same table (article) ut has an index.All other tables in this dependency chain reports 100% or near 100% usage of the indexes e.g.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\ncitation_2010_2014\n0\n0\n226\n1882\n2510\n0\n1910\n0\n816\n1910\n4420\n\n\nThe indexes are on a ssd and we have set the random_page_cost to 1 for those queries.The definition of belongs_to:CREATE TABLE wos_2017_1.belongs_to( suborg_id uuid, organisation_id uuid, address_id uuid, ut citext, uuid uuid NOT NULL, id integer NOT NULL DEFAULT nextval('wos_2017_1.belongs2_id_seq'::regclass), pref_name_id uuid, addr_no smallint, reprint_addr_no smallint, CONSTRAINT belongs2_pkey PRIMARY KEY (uuid), CONSTRAINT belongs_to_address_id_fkey FOREIGN KEY (address_id) REFERENCES wos_2017_1.address (uuid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE SET NULL, CONSTRAINT belongs_to_pref_name_id_fkey FOREIGN KEY (pref_name_id) REFERENCES wos_2017_1.org_pref_name (uuid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE SET NULL, CONSTRAINT belongs_to_suborg_id_fkey FOREIGN KEY (suborg_id) REFERENCES wos_2017_1.suborg (uuid) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE SET NULL, CONSTRAINT belongs_to_ut_fkey FOREIGN KEY (ut) REFERENCES wos_2017_1.article (ut) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE CASCADE, CONSTRAINT belongs2_id_key UNIQUE (id), CONSTRAINT belongs2_ut_suborg_id_organisation_id_address_id_addr_no_re_key UNIQUE (ut, suborg_id, organisation_id, address_id, addr_no, reprint_addr_no, pref_name_id))WITH ( OIDS=FALSE);with indexes on address_id, organisation_id, pref_name_id, utI have also tried to set enable_seqscan to false for these queries, but still no usage of the indexes.Why would that be?RegardsJohann-- Because experiencing your loyal love is better than life itself, my lips will praise you. (Psalm 63:3)",
"msg_date": "Tue, 4 Apr 2017 14:07:06 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Delete, foreign key, index usage"
},
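For each row deleted from wos_2017_1.article, the ON DELETE CASCADE trigger issues roughly "DELETE FROM ONLY wos_2017_1.belongs_to WHERE ut = $1", so that is the statement whose plan decides whether the cascade is fast. A sketch of how to inspect it directly; the ut literal is invented, and EXPLAIN with a literal is only an approximation of the parameterized plan the trigger actually uses:

EXPLAIN
DELETE FROM wos_2017_1.belongs_to
WHERE ut = 'WOS:000000000000001';   -- hypothetical ut value

-- An Index Scan (or Bitmap Index Scan) on the ut index is what makes the
-- cascade cheap; a Seq Scan over ~231 million rows here would account for the
-- slow deletes described above.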
{
"msg_contents": "On 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:\n\n> Why would that be?\n\nTo answer my own question. After experimenting a lot we found that\n9.6 uses a parallel seqscan that is actually a lot faster than using\nthe index on these large tables.\n\nThis, to us was a surprise!\n\nRegards\nJohann\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Apr 2017 12:40:21 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete, foreign key, index usage"
},
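For anyone trying to reproduce this comparison, these are the 9.6 settings that control whether a Parallel Seq Scan is even considered; the values in the comments are the stock defaults, shown for orientation only:

SHOW max_parallel_workers_per_gather;   -- 2 by default in 9.6; 0 disables parallel plans
SHOW min_parallel_relation_size;        -- 8MB; smaller relations are not considered
SHOW parallel_setup_cost;               -- 1000
SHOW parallel_tuple_cost;               -- 0.1

-- Setting max_parallel_workers_per_gather = 0 in a session brings back the
-- non-parallel plan, which makes the "index scan vs parallel seq scan" timing
-- comparison easy to repeat.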
{
"msg_contents": "On Wed, Apr 5, 2017 at 6:40 AM, Johann Spies <[email protected]> wrote:\n\n> On 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:\n>\n> > Why would that be?\n>\n> To answer my own question. After experimenting a lot we found that\n> 9.6 uses a parallel seqscan that is actually a lot faster than using\n> the index on these large tables.\n>\n> This, to us was a surprise!\n>\n>\nIf you have modern GPU's available, you could try the pg-strom extension -\nhttps://github.com/pg-strom/devel\nIt leverages GPU's to further parallelize scans.\n\nOn Wed, Apr 5, 2017 at 6:40 AM, Johann Spies <[email protected]> wrote:On 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:\n\n> Why would that be?\n\nTo answer my own question. After experimenting a lot we found that\n9.6 uses a parallel seqscan that is actually a lot faster than using\nthe index on these large tables.\n\nThis, to us was a surprise!\nIf you have modern GPU's available, you could try the pg-strom extension - https://github.com/pg-strom/develIt leverages GPU's to further parallelize scans.",
"msg_date": "Wed, 5 Apr 2017 07:15:45 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "> On Wed, Apr 5, 2017 at 6:40 AM, Johann Spies <[email protected]> wrote:\n>>\n>> On 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:\n>>\n>> > Why would that be?\n>>\n>> To answer my own question. After experimenting a lot we found that\n>> 9.6 uses a parallel seqscan that is actually a lot faster than using\n>> the index on these large tables.\n\nFurther experimenting resulted in a solution which we do not understand:\n\nThe table 'publication' had the field 'ut' as primary key and the ut\nindex was not used.\n\nSo we built an additional btree index(ut) on publication - which was\nignored as well.\nThen we built a gin index(ut) on publication and now it is being used.\n\nThe same happened on the other table (belongs_to) where the btree\nindex was ignored by the planner but the gin-index used.\n\nAs a result our deletes runs between 25-60 times faster than earlier\nwith maximum of about 200000 records per hour in comparison with a\nmaximum of 4500 earlier..\n\nIn the case of both tables the ut has a foreign key reference to\nanother article.\n\nWhy would the planner prefer the use the gin index and not the btree\nindex in this case?\n\nRegards\nJohann\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Apr 2017 08:48:56 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "On 04/24/2017 08:48 AM, Johann Spies wrote:\n>\n> Why would the planner prefer the use the gin index and not the btree\n> index in this case?\n>\n\nYou'll need to show what queries are you running - that's a quite \nimportant piece of information, and I don't see it anywhere in this \nthread. Seeing explain plans would also be helpful.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Apr 2017 15:17:18 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "On 24 April 2017 at 15:17, Tomas Vondra <[email protected]> wrote:\n> On 04/24/2017 08:48 AM, Johann Spies wrote:\n>>\n>>\n>> Why would the planner prefer the use the gin index and not the btree\n>> index in this case?\n>>\n>\n> You'll need to show what queries are you running - that's a quite important\n> piece of information, and I don't see it anywhere in this thread. Seeing\n> explain plans would also be helpful.\n\nIt is a simple \"delete from wos_2017_1.article;\" which causes a domino\neffect deletes due to foreign keys. In the case of one table with more\nthan 50 million records where the primary key was also the foreign\nkey, the process only started to use the index when we built a gin\nindex. In the case of the \"belongs_to\" table (shown in my first\nemail) we first built a btree index on the foreign key - and it was\nignored. Only after the gin index was created did it use the index.\n\nRegards.\nJohann\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Apr 2017 08:28:11 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "On 25 April 2017 at 18:28, Johann Spies <[email protected]> wrote:\n> On 24 April 2017 at 15:17, Tomas Vondra <[email protected]> wrote:\n>> On 04/24/2017 08:48 AM, Johann Spies wrote:\n>>>\n>>>\n>>> Why would the planner prefer the use the gin index and not the btree\n>>> index in this case?\n>>>\n>>\n>> You'll need to show what queries are you running - that's a quite important\n>> piece of information, and I don't see it anywhere in this thread. Seeing\n>> explain plans would also be helpful.\n>\n> It is a simple \"delete from wos_2017_1.article;\" which causes a domino\n> effect deletes due to foreign keys. In the case of one table with more\n> than 50 million records where the primary key was also the foreign\n> key, the process only started to use the index when we built a gin\n> index. In the case of the \"belongs_to\" table (shown in my first\n> email) we first built a btree index on the foreign key - and it was\n> ignored. Only after the gin index was created did it use the index.\n\nSome suggestions:\n\n(It's a good idea to CC the person you're replying to so that they're\nmore likely to notice the email)\n\npsql's \\d output for the referenced and referencing table would be a\ngood thing to show too.\n\nThis would confirm to us things like;\n\n* you've got the indexes defined correctly\n* there's nothing weird like the indexes are on some other tablesspace\nwith some other random_page_cost defined on it which is causing them\nnot to ever be preferred.\n* you've actually got indexes\n\nAlso, you might like to try to EXPLAIN DELETE FROM wos_2017_1.article\nWHERE ut = '<some constant>'; to see if the planner makes use of the\nindex for that. If that's not choosing the index then it might be an\neasier issue to debug.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Apr 2017 19:34:42 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "On 04/25/2017 08:28 AM, Johann Spies wrote:\n> On 24 April 2017 at 15:17, Tomas Vondra <[email protected]> wrote:\n>> On 04/24/2017 08:48 AM, Johann Spies wrote:\n>>>\n>>>\n>>> Why would the planner prefer the use the gin index and not the btree\n>>> index in this case?\n>>>\n>>\n>> You'll need to show what queries are you running - that's a quite important\n>> piece of information, and I don't see it anywhere in this thread. Seeing\n>> explain plans would also be helpful.\n>\n> It is a simple \"delete from wos_2017_1.article;\" which causes a domino\n> effect deletes due to foreign keys. In the case of one table with more\n> than 50 million records where the primary key was also the foreign\n> key, the process only started to use the index when we built a gin\n> index. In the case of the \"belongs_to\" table (shown in my first\n> email) we first built a btree index on the foreign key - and it was\n> ignored. Only after the gin index was created did it use the index.\n>\n> Regards.\n> Johann\n\nWouldn't it be easier to simply show the queries (with the exact \ncondition) and the associated explain plans? I understand you're doing \nyour best to explain what's happening, but the explain plans contain a \nlot of information that you might have missed.\n\nI suppose you actually did explain analyze to verify the query was not \nusing the btree index and then started using the gin index. Or how did \nyou verify that?\n\nAlso, which PostgreSQL version have you observed this on? I see you've \nmentioned 9.6 when talking about parallel scans, but I suppose the issue \nwas originally observed on some older version.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 01:35:38 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Delete, foreign key, index usage"
},
{
"msg_contents": "On 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:\n\n> While updating our database which includes a lot of deletions where a lot\n> of foreign key references are involved we found that in the case of two\n> tables the indexes are ignored and it slow down the process a lot.\n> ...\n>\n\n\n>\n> Why would that be?\n>\n\nAfter a long time we found the problem: The primary/foreign key fields had\ndifferent types: varchar and citext. In the case of the two tables where\nthe indexes were ignored indexes were built with the 'citext' type and the\nqueries assumed it was varchar as the case were in the other tables using\nthe same field.\n\nLesson learnt: Check your types in every field in every table - and we\nhave many tables.\n\nRegards\nJohann\n\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\nOn 4 April 2017 at 14:07, Johann Spies <[email protected]> wrote:While updating our database which includes a lot of deletions where a lot of foreign key references are involved we found that in the case of two tables the indexes are ignored and it slow down the process a lot.... Why would that be?After a long time we found the problem: The primary/foreign key fields had different types: varchar and citext. In the case of the two tables where the indexes were ignored indexes were built with the 'citext' type and the queries assumed it was varchar as the case were in the other tables using the same field.Lesson learnt: Check your types in every field in every table - and we have many tables.RegardsJohann-- Because experiencing your loyal love is better than life itself, my lips will praise you. (Psalm 63:3)",
"msg_date": "Thu, 25 May 2017 14:21:36 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Delete, foreign key, index usage"
}
] |
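The root cause Johann reports above was a type mismatch between the referencing and referenced key columns (varchar vs. citext). A schema can be audited for that kind of mismatch with a catalog query along the following lines; this is only a sketch (PostgreSQL 9.4+ for the multi-argument unnest), and it simply lists foreign-key column pairs whose declared types differ:

    SELECT con.conname                          AS constraint_name,
           fk.relname                           AS fk_table,
           a.attname                            AS fk_column,
           format_type(a.atttypid, a.atttypmod) AS fk_type,
           pk.relname                           AS referenced_table,
           b.attname                            AS referenced_column,
           format_type(b.atttypid, b.atttypmod) AS referenced_type
    FROM pg_constraint con
    JOIN pg_class fk ON fk.oid = con.conrelid
    JOIN pg_class pk ON pk.oid = con.confrelid
    JOIN LATERAL unnest(con.conkey, con.confkey) AS k(fk_att, pk_att) ON true
    JOIN pg_attribute a ON a.attrelid = con.conrelid  AND a.attnum = k.fk_att
    JOIN pg_attribute b ON b.attrelid = con.confrelid AND b.attnum = k.pk_att
    WHERE con.contype = 'f'
      AND a.atttypid <> b.atttypid;  -- mismatched types can prevent index use during cascaded deletes

Any row returned is a candidate for the varchar/citext situation described in the thread.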
[
{
"msg_contents": "Hi,\n\nI have to send content of a log file in my mail Id.\nCould you please assist me to do this?\nI am using Postgres-9.1 with Linux OS.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n\n\n\n\n\n\n\n\nHi,\n \nI have to send content of a log file in my mail Id.\nCould you please assist me to do this?\nI am using Postgres-9.1 with Linux OS.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Wed, 5 Apr 2017 16:53:05 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to send content of log file in official mailid."
}
] |
[
{
"msg_contents": "Hi,\nI have a table with 22k rows - not large at all. I have a couple of indices\non it as well as a gin index on a tsvector column. If I reindex the table\nand run a query it takes ~20ms to execute using the tsvector-gin index. By\nthe end of the day, the planner decides not to use the gin index and uses\nthe other indices on the table and the query takes ~80ms. If I reindex, the\npattern repeats-it uses the gin index for a while for superior performance\nand then drops back to using the alternate ones. \nThe ibloat on the index shows as 0.4 and wastedibytes is 0. Less than 2K\nrows have been updated of the 22K since the last reindex but the performance\nhas dropped since it is no longer using the gin index by mid-day. \nAny thoughts on why it chooses to use alternate indices with hardly any\nupdates? And is there a way to force it to use the gin index without having\nto reindex it twice a day.\nThanks!\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Apr 2017 15:51:11 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Table not using tsvector gin index and performance much worse than\n when it uses it."
},
{
"msg_contents": "rverghese <[email protected]> writes:\n> I have a table with 22k rows - not large at all. I have a couple of indices\n> on it as well as a gin index on a tsvector column. If I reindex the table\n> and run a query it takes ~20ms to execute using the tsvector-gin index. By\n> the end of the day, the planner decides not to use the gin index and uses\n> the other indices on the table and the query takes ~80ms. If I reindex, the\n> pattern repeats-it uses the gin index for a while for superior performance\n> and then drops back to using the alternate ones. \n> The ibloat on the index shows as 0.4 and wastedibytes is 0. Less than 2K\n> rows have been updated of the 22K since the last reindex but the performance\n> has dropped since it is no longer using the gin index by mid-day. \n> Any thoughts on why it chooses to use alternate indices with hardly any\n> updates? And is there a way to force it to use the gin index without having\n> to reindex it twice a day.\n\nYou haven't mentioned what PG version this is, nor specified how many\nupdates is \"hardly any\", so you shouldn't expect any very precise answers.\nBut I'm suspicious that the problem is bloat of the index's pending list;\nthe planner's cost estimate is (correctly) pretty sensitive to the length\nof that list. If so, you need to arrange for the pending list to get\nflushed into the main index structure more often. Depending on your PG\nversion, that can be done by\n* vacuum\n* auto-analyze (but I bet your version doesn't, or you would not be\n complaining)\n* gin_clean_pending_list() (but you probably ain't got that either)\n\nOr you could reduce gin_pending_list_limit to cause insert-time flushes to\nhappen more often, or in the extremum even disable fastupdate for that\nindex. Those options would slow down updates to make search performance\nmore stable, so they're not panaceas.\n\nSee\nhttps://www.postgresql.org/docs/current/static/gin-implementation.html#GIN-FAST-UPDATE\nfor your version, also the \"GIN Tips\" on the next page.\n\nPersonally I'd try tweaking gin_pending_list_limit first, if you have\na version that has that ... but YMMV.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 05 Apr 2017 19:19:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Table not using tsvector gin index and performance much worse\n than when it uses it."
},
{
"msg_contents": "Thanks for the response!\n\n* We are on version 9.5.6\n\n* Less than 10% of the table was updated today (between the time of the last\nreindex to when performance deteriorated)\n\n* autovacuum is on. I don't see an autoanalyze property in config but these\nare the settings for analyze \n/autovacuum_analyze_threshold = 3000 # min number of row updates before \nanalyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\nvacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before\nanalyze\n#autovacuum_freeze_max_age = 200000000 # maximum XID age before forced\nvacuum\n # (change requires restart)/\n\n* And this #gin_pending_list_limit = 4MB \n\n* gin_clean_pending_list() is not available.\n\nWill play with gin_pending_list_limit and see what that does. \n\nThanks!\nRV\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485p5954503.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Apr 2017 16:54:00 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Table not using tsvector gin index and performance much worse\n than when it uses it."
},
{
"msg_contents": "From my experience, you want to really tighten the autovacuum_analyze parameters.\n\n\n\nI recommend our users to use:\n\nautovacuum_analyze_threshold = 1\n\nautovacuum_analyze_scale_factor = 0.0\n\n\n\nAnalyze is quite cheap, and the speed difference between an optimal and a suboptimal plans are usually pretty big.\n\n\n\nMy 2c,\n\n Igor\n\n\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of rverghese\nSent: Wednesday, April 05, 2017 4:54 PM\nTo: [email protected]\nSubject: -EXT-[PERFORM] Re: Table not using tsvector gin index and performance much worse than when it uses it.\n\n\n\nThanks for the response!\n\n\n\n* We are on version 9.5.6\n\n\n\n* Less than 10% of the table was updated today (between the time of the last reindex to when performance deteriorated)\n\n\n\n* autovacuum is on. I don't see an autoanalyze property in config but these are the settings for analyze\n\n/autovacuum_analyze_threshold = 3000 # min number of row updates before\n\nanalyze\n\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\n\nvacuum\n\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n\n # (change requires restart)/\n\n\n\n* And this #gin_pending_list_limit = 4MB\n\n\n\n* gin_clean_pending_list() is not available.\n\n\n\nWill play with gin_pending_list_limit and see what that does.\n\n\n\nThanks!\n\nRV\n\n\n\n\n\n\n\n--\n\nView this message in context: http://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485p5954503.html\n\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n\n\n\n--\n\nSent via pgsql-performance mailing list ([email protected]<mailto:[email protected]>)\n\nTo make changes to your subscription:\n\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n\n\n\n\n\n\n\nFrom my experience, you want to really tighten the autovacuum_analyze parameters.\n \nI recommend our users to use:\nautovacuum_analyze_threshold = 1\nautovacuum_analyze_scale_factor = 0.0\n \nAnalyze is quite cheap, and the speed difference between an optimal and a suboptimal plans are usually pretty big.\n \nMy 2c,\n Igor\n \n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of rverghese\nSent: Wednesday, April 05, 2017 4:54 PM\nTo: [email protected]\nSubject: -EXT-[PERFORM] Re: Table not using tsvector gin index and performance much worse than when it uses it.\n \nThanks for the response!\n \n* We are on version 9.5.6\n \n* Less than 10% of the table was updated today (between the time of the last reindex to when performance deteriorated)\n \n* autovacuum is on. 
I don't see an autoanalyze property in config but these are the settings for analyze\n\n/autovacuum_analyze_threshold = 3000 # min number of row updates before\n\nanalyze\n#autovacuum_vacuum_scale_factor = 0.2 # fraction of table size before\nvacuum\n#autovacuum_analyze_scale_factor = 0.1 # fraction of table size before analyze #autovacuum_freeze_max_age = 200000000 # maximum XID age before forced vacuum\n # (change requires restart)/\n \n* And this #gin_pending_list_limit = 4MB \n \n* gin_clean_pending_list() is not available.\n \nWill play with gin_pending_list_limit and see what that does.\n\n \nThanks!\nRV\n \n \n \n--\nView this message in context: \nhttp://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485p5954503.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n \n \n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 6 Apr 2017 00:09:28 +0000",
"msg_from": "\"Sfiligoi, Igor\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: -EXT-Re: Table not using tsvector gin index and\n performance much worse than when it uses it."
},
{
"msg_contents": "Ok, appreciate the feedback. \nWill play around with those settings as well. Maybe start with default which\nis 50 I believe.\nThanks!\nRV\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485p5954509.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Apr 2017 17:26:52 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: -EXT-Re: Table not using tsvector gin index and\n performance much worse than when it uses it."
},
{
"msg_contents": "rverghese <[email protected]> writes:\n> Will play around with those settings as well. Maybe start with default which\n> is 50 I believe.\n\nIf you're on 9.5, auto-analyze does not result in a pending list flush,\nso it's irrelevant to fixing your problem. (Assuming I've identified\nthe problem correctly.) But you do have gin_pending_list_limit, so see\nwhat that does for you. Note you can set it either globally or per-index.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 05 Apr 2017 21:08:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: -EXT-Re: Table not using tsvector gin index and performance much\n worse than when it uses it."
},
{
"msg_contents": "Yup, I just found the per index option. Pretty cool. Will see what value is\noptimal...\n\nThanks\nRV\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Table-not-using-tsvector-gin-index-and-performance-much-worse-than-when-it-uses-it-tp5954485p5954521.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Apr 2017 18:10:56 -0700 (MST)",
"msg_from": "rverghese <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: -EXT-Re: Table not using tsvector gin index and\n performance much worse than when it uses it."
}
] |
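One way to act on Tom Lane's suggestions above is to lower the pending-list limit for the specific GIN index, or to disable fastupdate on it entirely. A minimal sketch; the index name my_tsv_gin_idx and table name my_table are placeholders, not objects from the thread, and the storage parameter is expressed in kilobytes:

    -- Flush the pending list sooner for this index only (9.5+, value in kB)
    ALTER INDEX my_tsv_gin_idx SET (gin_pending_list_limit = 256);

    -- Or trade slower inserts for stable search times by disabling the pending list
    ALTER INDEX my_tsv_gin_idx SET (fastupdate = off);
    -- A VACUUM of the table afterwards moves whatever is already queued in the pending list
    VACUUM my_table;

    -- Session-level alternative, if changing the index is not wanted
    SET gin_pending_list_limit = '256kB';

As Tom notes, these settings slow down updates in exchange for more predictable search plans, so they are worth benchmarking rather than applying blindly.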
[
{
"msg_contents": "Hi expert,\n\nMay I know how to select a range of IP address.\n\nExample: I have number of different-2 IP's present in a table.\n\nI have to select only that IP address which does not start from prefix \"172.23.110\".\nThanks in advance\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n\n\n\n\n\n\n\n\nHi expert,\n \nMay I know how to select a range of IP address.\n \nExample: I have number of different-2 IP’s present in a table.\n \nI have to select only that IP address which does not start from prefix “172.23.110”.\nThanks in advance\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Fri, 7 Apr 2017 14:13:58 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Filter certain range of IP address."
},
{
"msg_contents": "On 2017-04-07 16:13, Dinesh Chandra 12108 wrote:\n> Hi expert,\n> \n> May I know how to select a range of IP address.\n> \n> Example: I have number of different-2 IP's present in a table.\n> \n> I HAVE TO SELECT ONLY THAT IP ADDRESS WHICH DOES NOT START FROM PREFIX\n> “172.23.110”.\n> \n> Thanks in advance\n> \n> REGARDS,\n> \n> DINESH CHANDRA\n> \n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\n> \n> ------------------------------------------------------------------\n> \n> Mobile: +91-9953975849 | Ext 1078 |[email protected]\n> \n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\nIf you store the ip address as the INET datatype then you can use the \nINET operators\nto see if any arbitraty number of bits match, the first 3 bytes means \nthe first 24 bits:\n\n\nSELECT '172.23.110.55'::inet << '172.23.110.1/24'::inet;\n ?column?\n----------\n t\n(1 row)\n\n\nSELECT '272.23.110.55'::inet << '172.23.110.1/24'::inet;\n ?column?\n----------\n f\n(1 row)\n\nSee also: https://www.postgresql.org/docs/9.3/static/functions-net.html\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 07 Apr 2017 16:22:20 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "Dear Vinny,\r\n\r\nThanks for your valuable replay.\r\n\r\nbut I need a select query, which select only that record which starts from IP \"172.23.110\" only from below table.\r\n\r\nxxx\t172.23.110.175\r\nyyy\t172.23.110.178\r\nzzz\t172.23.110.177\r\naaa\t172.23.110.176\r\nbbb\t172.23.111.180\r\nccc\t172.23.115.26\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected] \r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n-----Original Message-----\r\nFrom: vinny [mailto:[email protected]] \r\nSent: 07 April, 2017 7:52 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]; [email protected]\r\nSubject: Re: [PERFORM] Filter certain range of IP address.\r\n\r\nOn 2017-04-07 16:13, Dinesh Chandra 12108 wrote:\r\n> Hi expert,\r\n> \r\n> May I know how to select a range of IP address.\r\n> \r\n> Example: I have number of different-2 IP's present in a table.\r\n> \r\n> I HAVE TO SELECT ONLY THAT IP ADDRESS WHICH DOES NOT START FROM PREFIX \r\n> “172.23.110”.\r\n> \r\n> Thanks in advance\r\n> \r\n> REGARDS,\r\n> \r\n> DINESH CHANDRA\r\n> \r\n> |DATABASE ADMINISTRATOR (ORACLE/POSTGRESQL)| CYIENT LTD. NOIDA.\r\n> \r\n> ------------------------------------------------------------------\r\n> \r\n> Mobile: +91-9953975849 | Ext 1078 |[email protected]\r\n> \r\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\nIf you store the ip address as the INET datatype then you can use the INET operators to see if any arbitraty number of bits match, the first 3 bytes means the first 24 bits:\r\n\r\n\r\nSELECT '172.23.110.55'::inet << '172.23.110.1/24'::inet;\r\n ?column?\r\n----------\r\n t\r\n(1 row)\r\n\r\n\r\nSELECT '272.23.110.55'::inet << '172.23.110.1/24'::inet;\r\n ?column?\r\n----------\r\n f\r\n(1 row)\r\n\r\nSee also: https://www.postgresql.org/docs/9.3/static/functions-net.html\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Apr 2017 15:18:58 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Dear Vinny,\n>\n> Thanks for your valuable replay.\n>\n> but I need a select query, which select only that record which starts from\n> IP \"172.23.110\" only from below table.\n>\n> xxx 172.23.110.175\n> yyy 172.23.110.178\n> zzz 172.23.110.177\n> aaa 172.23.110.176\n> bbb 172.23.111.180\n> ccc 172.23.115.26\n>\n\nSELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'\n\nDavid J.\n \n\nOn Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <[email protected]> wrote:Dear Vinny,\n\nThanks for your valuable replay.\n\nbut I need a select query, which select only that record which starts from IP \"172.23.110\" only from below table.\n\nxxx 172.23.110.175\nyyy 172.23.110.178\nzzz 172.23.110.177\naaa 172.23.110.176\nbbb 172.23.111.180\nccc 172.23.115.26SELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'David J. ",
"msg_date": "Fri, 7 Apr 2017 08:29:29 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "On Fri, Apr 7, 2017 at 11:29 AM, David G. Johnston <\[email protected]> wrote:\n\n> On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <\n> [email protected]> wrote:\n>\n>> Dear Vinny,\n>>\n>> Thanks for your valuable replay.\n>>\n>> but I need a select query, which select only that record which starts\n>> from IP \"172.23.110\" only from below table.\n>>\n>> xxx 172.23.110.175\n>> yyy 172.23.110.178\n>> zzz 172.23.110.177\n>> aaa 172.23.110.176\n>> bbb 172.23.111.180\n>> ccc 172.23.115.26\n>>\n>\n> SELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'\n>\n\nor\n select ... where ip_addr << '172.23.110/32';\n\nif ip_addr is an inet data type -- https://www.postgresql.org/\ndocs/9.6/static/functions-net.html\n\nOn Fri, Apr 7, 2017 at 11:29 AM, David G. Johnston <[email protected]> wrote:On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <[email protected]> wrote:Dear Vinny,\n\nThanks for your valuable replay.\n\nbut I need a select query, which select only that record which starts from IP \"172.23.110\" only from below table.\n\nxxx 172.23.110.175\nyyy 172.23.110.178\nzzz 172.23.110.177\naaa 172.23.110.176\nbbb 172.23.111.180\nccc 172.23.115.26SELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'or select ... where ip_addr << '172.23.110/32';if ip_addr is an inet data type -- https://www.postgresql.org/docs/9.6/static/functions-net.html",
"msg_date": "Fri, 7 Apr 2017 11:56:18 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "Thanks.\r\n\r\nIt’s working fine.\r\nThank you so much\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Rick Otten [mailto:[email protected]]\r\nSent: 07 April, 2017 9:26 PM\r\nTo: David G. Johnston <[email protected]>\r\nCc: Dinesh Chandra 12108 <[email protected]>; vinny <[email protected]>; [email protected]; [email protected]\r\nSubject: Re: [PERFORM] Filter certain range of IP address.\r\n\r\n\r\n\r\nOn Fri, Apr 7, 2017 at 11:29 AM, David G. Johnston <[email protected]<mailto:[email protected]>> wrote:\r\nOn Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nDear Vinny,\r\n\r\nThanks for your valuable replay.\r\n\r\nbut I need a select query, which select only that record which starts from IP \"172.23.110\" only from below table.\r\n\r\nxxx 172.23.110.175\r\nyyy 172.23.110.178\r\nzzz 172.23.110.177\r\naaa 172.23.110.176\r\nbbb 172.23.111.180\r\nccc 172.23.115.26\r\n\r\nSELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'\r\n\r\nor\r\n select ... where ip_addr << '172.23.110/32';\r\n\r\nif ip_addr is an inet data type -- https://www.postgresql.org/docs/9.6/static/functions-net.html\r\n\r\n\n\n\n\n\n\n\n\n\nThanks.\n \nIt’s working fine.\nThank you so much\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Rick Otten [mailto:[email protected]]\r\n\nSent: 07 April, 2017 9:26 PM\nTo: David G. Johnston <[email protected]>\nCc: Dinesh Chandra 12108 <[email protected]>; vinny <[email protected]>; [email protected]; [email protected]\nSubject: Re: [PERFORM] Filter certain range of IP address.\n \n\n \n\n \n\nOn Fri, Apr 7, 2017 at 11:29 AM, David G. Johnston <[email protected]> wrote:\n\n\n\nOn Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\n\nDear Vinny,\n\r\nThanks for your valuable replay.\n\r\nbut I need a select query, which select only that record which starts from IP \"172.23.110\" only from below table.\n\r\nxxx 172.23.110.175\r\nyyy 172.23.110.178\r\nzzz 172.23.110.177\r\naaa 172.23.110.176\r\nbbb 172.23.111.180\r\nccc 172.23.115.26\n\n\n \n\n\nSELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'\n\n\n\n\n\n\n \n\n\nor\n\n\n select ... where ip_addr << '172.23.110/32';\n\n\n \n\n\nif ip_addr is an inet data type -- https://www.postgresql.org/docs/9.6/static/functions-net.html",
"msg_date": "Fri, 7 Apr 2017 16:09:33 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "\n\n\n\n\nIl 07/04/2017 17:56, Rick Otten ha\n scritto:\n\n\n\n\nOn Fri, Apr 7, 2017 at 11:29 AM,\n David G. Johnston <[email protected]>\n wrote:\n\n\nOn Fri, Apr 7,\n 2017 at 8:18 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nDear Vinny,\n\n Thanks for your valuable replay.\n\n but I need a select query, which select only that\n record which starts from IP \"172.23.110\" only from\n below table.\n\n xxx 172.23.110.175\n yyy 172.23.110.178\n zzz 172.23.110.177\n aaa 172.23.110.176\n bbb 172.23.111.180\n ccc 172.23.115.26\n\n\n\nSELECT\n ... WHERE substring(ip_addr::text, 1, 10) =\n '172.23.110'\n\n\n\n\n\n\nor\n select ... where ip_addr\n << '172.23.110/32';\n\n\n\n\n /32 is for one address only (fourth byte, which we want to exclude),\n so we need to use /24 (as for CIDR notation), that would be equal to\n a 255.255.255.0 subnet mask.\n\n My 2 cents\n Moreno\n\n\n\n\n",
"msg_date": "Fri, 7 Apr 2017 18:20:04 +0200",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filter certain range of IP address."
},
{
"msg_contents": "On 2017-04-07 17:29, David G. Johnston wrote:\n> On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108\n> <[email protected]> wrote:\n> \n>> Dear Vinny,\n>> \n>> Thanks for your valuable replay.\n>> \n>> but I need a select query, which select only that record which\n>> starts from IP \"172.23.110\" only from below table.\n>> \n>> xxx 172.23.110.175\n>> yyy 172.23.110.178\n>> zzz 172.23.110.177\n>> aaa 172.23.110.176\n>> bbb 172.23.111.180\n>> ccc 172.23.115.26\n> \n> SELECT ... WHERE substring(ip_addr::text, 1, 10) = '172.23.110'\n> \n> David J.\n> \n\nWhile it's certainly possible to do it with a substring(), I'd strongly \nadvise against it,\nfor several reasons, but the main one is that it does not take into \naccount what happens to the presentation of the IP address when cast to \na string. There might be some conditions that cause it to render as \n'172.023.110' instead of '172.23.110' just like numbers can be rendered \nas '1.234,56' or '1,234.56' depending on locale, and that would break \nthe functionality without throwing an error.\n\nGenerally speaking; if you find yourself using a substring() on a \ndatatype other than a string,\nyou should check if there isn't an operator that already can do what you \nwant to do. PostgreSQL has operators\nto do all the basic things with the datatypes it supports, so you don't \nhave to re-invent the wheel. :-)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Apr 2017 13:33:05 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Filter certain range of IP address."
}
] |
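Tying the corrections in this thread together: with the column stored as inet, the prefix test is a /24 containment check, and the original question (addresses that do not start with 172.23.110) is simply its negation. A rough sketch, assuming a table named hosts with an ip_addr inet column (both names are illustrative, not taken from the thread):

    -- addresses inside 172.23.110.0/24, i.e. starting with 172.23.110
    SELECT * FROM hosts WHERE ip_addr << '172.23.110.0/24'::inet;

    -- addresses that do NOT start with the 172.23.110 prefix
    SELECT * FROM hosts WHERE NOT (ip_addr << '172.23.110.0/24'::inet);

Unlike the substring() approach, this keeps working no matter how the address would happen to render as text.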
[
{
"msg_contents": "Hello\n\nI want to understand execution time of a query in PostgreSQL then I want to\nrelate it to the problem i am getting. According to my observation ( I\ncan't explain why this happen ) whenever we query a table first time its\nexecution will be high (sometimes very high) as compare to queries made on\nsame table in a short period of time followed by first query on that table.\nFor example query given below\n\n*SELECT \"global_configs\".* FROM \"global_configs\" ORDER BY\n\"global_configs\".\"id\" ASC LIMIT $1*\n\nexecuted multiple times instantaneous one after another have following\nexecution time\n\n1st time => *147.5ms*\n*2nd time => 3.0ms*\n*3rd time => 3.0ms*\n*4th time => 3.0ms*\n*5th time => 0.8ms*\n\nI want to understand why there is a huge time difference between 1st and\nrest of the executions.\n\n*Relation to other problem*\n\nHaving experience above behaviour of PostgreSQL now I am using PostgreSQL\nmanaged by Amazon RDS. Observation is no matter how many times I execute\nsame query its execution times remain same ( although execution time of a\nquery on RDS is comparatively high as compare to query running on local\ninstance of PostgreSQL that I can understand is because of Network latency)\n\n*Questions*\n\n\n 1. Why first query on a table takes more time then queries followed by\n it ?\n 2. Why above behaviour doesn't reflect on Amazon RDS ?\n\n\nThank you for reading my post.\n\n-- \nHaider Ali\n\nHelloI want to understand execution time of a query in PostgreSQL then I want to relate it to the problem i am getting. According to my observation ( I can't explain why this happen ) whenever we query a table first time its execution will be high (sometimes very high) as compare to queries made on same table in a short period of time followed by first query on that table. For example query given belowSELECT \"global_configs\".* FROM \"global_configs\" ORDER BY \"global_configs\".\"id\" ASC LIMIT $1executed multiple times instantaneous one after another have following execution time1st time => 147.5ms2nd time => 3.0ms3rd time => 3.0ms4th time => 3.0ms5th time => 0.8msI want to understand why there is a huge time difference between 1st and rest of the executions.Relation to other problemHaving experience above behaviour of PostgreSQL now I am using PostgreSQL managed by Amazon RDS. Observation is no matter how many times I execute same query its execution times remain same ( although execution time of a query on RDS is comparatively high as compare to query running on local instance of PostgreSQL that I can understand is because of Network latency)QuestionsWhy first query on a table takes more time then queries followed by it ?Why above behaviour doesn't reflect on Amazon RDS ?Thank you for reading my post.-- Haider Ali",
"msg_date": "Fri, 7 Apr 2017 19:56:53 +0500",
"msg_from": "Haider Ali <[email protected]>",
"msg_from_op": true,
"msg_subject": "Understanding PostgreSQL query execution time"
},
{
"msg_contents": "The first behavior is very likely just caching. The plan and results from the query are cached, so the second time, it's reused directly.\n\nIf you ran a bunch of other queries in the middle and effectively exhausted the cache, then back to your query, likely tou'd see the 'slow' behavior again.\n\nAs for AWS, not sure, but likely about memory and config more than latency.\n\n\nSent from my BlackBerry 10 smartphone.\nFrom: Haider Ali\nSent: Friday, April 7, 2017 09:58\nTo: [email protected]\nSubject: [PERFORM] Understanding PostgreSQL query execution time\n\n\nHello\n\nI want to understand execution time of a query in PostgreSQL then I want to relate it to the problem i am getting. According to my observation ( I can't explain why this happen ) whenever we query a table first time its execution will be high (sometimes very high) as compare to queries made on same table in a short period of time followed by first query on that table. For example query given below\n\nSELECT \"global_configs\".* FROM \"global_configs\" ORDER BY \"global_configs\".\"id\" ASC LIMIT $1\n\nexecuted multiple times instantaneous one after another have following execution time\n\n1st time => 147.5ms\n2nd time => 3.0ms\n3rd time => 3.0ms\n4th time => 3.0ms\n5th time => 0.8ms\n\nI want to understand why there is a huge time difference between 1st and rest of the executions.\n\nRelation to other problem\n\nHaving experience above behaviour of PostgreSQL now I am using PostgreSQL managed by Amazon RDS. Observation is no matter how many times I execute same query its execution times remain same ( although execution time of a query on RDS is comparatively high as compare to query running on local instance of PostgreSQL that I can understand is because of Network latency)\n\nQuestions\n\n\n 1. Why first query on a table takes more time then queries followed by it ?\n 2. Why above behaviour doesn't reflect on Amazon RDS ?\n\nThank you for reading my post.\n\n--\nHaider Ali\n\n\n\n\n\n\n\n\nThe first behavior is very likely just caching. The plan and results from the query are cached, so the second time, it's reused directly.\n\n\n\n\nIf you ran a bunch of other queries in the middle and effectively exhausted the cache, then back to your query, likely tou'd see the 'slow' behavior again.\n\n\n\n\nAs for AWS, not sure, but likely about memory and config more than latency.\n\n\n\n\n\n\n\nSent from my BlackBerry 10 smartphone.\n\n\n\n\n\nFrom: Haider Ali\nSent: Friday, April 7, 2017 09:58\nTo: [email protected]\nSubject: [PERFORM] Understanding PostgreSQL query execution time\n\n\n\n\n\n\n\n\n\nHello\n\n\nI want to understand execution time of a query in PostgreSQL then I want to relate it to the problem i am getting. According to my observation ( I can't explain why this happen ) whenever we query a table first time its execution will be high (sometimes\n very high) as compare to queries made on same table in a short period of time followed by first query on that table. 
For example query given below\n\n\nSELECT \"global_configs\".* FROM \"global_configs\" ORDER BY \"global_configs\".\"id\" ASC LIMIT $1\n\n\n\n\nexecuted multiple times instantaneous one after another have following execution time\n\n\n1st time => 147.5ms\n\n2nd time => 3.0ms\n\n3rd time => 3.0ms\n\n\n4th time => 3.0ms\n\n5th time => 0.8ms\n\n\n\n\n\n\nI want to understand why there is a huge time difference between 1st and rest of the executions.\n\n\n\n\n\n\nRelation to other problem\n\n\nHaving experience above behaviour of PostgreSQL now I am using PostgreSQL managed by Amazon RDS. Observation is no matter how many times I execute same query its execution times remain same ( although execution time of a query on RDS is comparatively high\n as compare to query running on local instance of PostgreSQL that I can understand is because of Network latency)\n\n\nQuestions\n\n\n\n\nWhy first query on a table takes more time then queries followed by it ?Why above behaviour doesn't reflect on Amazon RDS ?\n\n\n\nThank you for reading my post.\n\n\n-- \n\nHaider Ali",
"msg_date": "Fri, 7 Apr 2017 15:03:18 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding PostgreSQL query execution time"
},
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Haider Ali\" <[email protected]>\n> Para: [email protected]\n> Enviados: Viernes, 7 de Abril 2017 11:56:53\n> Asunto: [PERFORM] Understanding PostgreSQL query execution time\n> \n> \n> Hello\n> \n> \n> I want to understand execution time of a query in PostgreSQL then I\n> want to relate it to the problem i am getting. According to my\n> observation ( I can't explain why this happen ) whenever we query a\n> table first time its execution will be high (sometimes very high) as\n> compare to queries made on same table in a short period of time\n> followed by first query on that table. For example query given below\n> \n> \n> SELECT \"global_configs\".* FROM \"global_configs\" ORDER BY\n> \"global_configs\".\"id\" ASC LIMIT $1\n> \n> \n> \n> \n> executed multiple times instantaneous one after another have\n> following execution time\n> \n> \n> 1st time => 147.5ms\n> \n> 2nd time => 3.0ms\n> \n> 3rd time => 3.0ms\n> \n> 4th time => 3.0ms\n> \n> 5th time => 0.8ms\n\nThat is the effects of the postgres/Linux cache for shure. \n> \n> \n> I want to understand why there is a huge time difference between 1st\n> and rest of the executions.\n> \n> \n> Relation to other problem\n> \n> \n> Having experience above behaviour of PostgreSQL now I am using\n> PostgreSQL managed by Amazon RDS. Observation is no matter how many\n> times I execute same query its execution times remain same (\n> although execution time of a query on RDS is comparatively high as\n> compare to query running on local instance of PostgreSQL that I can\n> understand is because of Network latency)\n> \n> \n> Questions\n> \n> \n> \n> \n> 1. Why first query on a table takes more time then queries\n> followed by it ?\n> 2. Why above behaviour doesn't reflect on Amazon RDS ?\n> \nAmazon provides you with SSD like disks, running close to memory speed. That would explain the little impact of having a ram cache.\n\nHTH\nGerardo\n> \n> Haider Ali\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Apr 2017 12:15:37 -0300 (ART)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding PostgreSQL query execution time"
},
{
"msg_contents": "On 2017-04-07 16:56, Haider Ali wrote:\n> Hello\n> \n> I want to understand execution time of a query in PostgreSQL then I\n> want to relate it to the problem i am getting. According to my\n> observation ( I can't explain why this happen ) whenever we query a\n> table first time its execution will be high (sometimes very high) as\n> compare to queries made on same table in a short period of time\n> followed by first query on that table. For example query given below\n\nThe first time a query is executed it is quite likely that the data it \nneeds\nis not in RAM yet, so it must fetch the data from disk, which is slow.\n\nBut, benchmarking is an art; did you execute these queries separately \nfrom the commandline?\nOtherwise where may be other forces at work here...\n\n> \n> Having experience above behaviour of PostgreSQL now I am using\n> PostgreSQL managed by Amazon RDS. Observation is no matter how many\n> times I execute same query its execution times remain same ( although\n> execution time of a query on RDS is comparatively high as compare to\n> query running on local instance of PostgreSQL that I can understand is\n> because of Network latency)\n\nThe problem may go away entirely if the database/OS has enough RAM \navailable,\nand configured, for caching.\n\nThe problem on your local system may be simply a case of PostgreSQL or \nthe OS\nremoving tuples/index data from RAM when it feels it can make better use \nof that RAM\nspace for other things if you don't access that data for a while.\n\n\nTry spying on your system with iotop and such tools to see what the \nserver is actually doing\nduring the first query. If there is a spike in disk-IO then you've found \nthe cause;\nthe tuples where not in RAM.\nYou may also want to run an EXPLAIN to make sure that the fast queries \nare not purely the result\nof some query-result cache.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Apr 2017 14:18:28 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Understanding PostgreSQL query execution time"
}
] |
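The caching explanation given in this thread can be checked directly: EXPLAIN (ANALYZE, BUFFERS) reports whether pages were found in shared_buffers ("hit") or had to be fetched from outside it ("read"). A sketch using the query from the first message, with the LIMIT hard-coded purely for illustration:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT "global_configs".* FROM "global_configs"
    ORDER BY "global_configs"."id" ASC LIMIT 1;
    -- A cold first run typically shows:   Buffers: shared read=N
    -- Warm repeat runs typically show:    Buffers: shared hit=N   and a much lower time

If the first run is dominated by reads and later runs by hits, the 147 ms versus 3 ms gap is the cache effect described above rather than anything wrong with the plan.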
[
{
"msg_contents": "Hello,\n\nI need to know the criteria behind for settings the work_mem in PostgreSQL, please give the example also if possible.\n\nRegards,\nDaulat\n\n\n\n\n\n\n\n\n\nHello, \n \nI need to know the criteria behind for settings the work_mem in PostgreSQL, please give the example also if possible.\n \nRegards,\nDaulat",
"msg_date": "Thu, 13 Apr 2017 06:25:17 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hi "
},
{
"msg_contents": "\n> From: Daulat Ram <[email protected]>\n> To: \"[email protected]\" <[email protected]> \n> Sent: Thursday, 13 April 2017, 7:25\n> Subject: [PERFORM] Hi \n>\n> Hello, \n> \n> I need to know the criteria behind for settings the work_mem in PostgreSQL, please give the example also if possible.\n> \n> Regards,\n\n> Daulat\n\nIs there anything in particular from the manual pages you don't understand? It should be quite clear:\n\nhttps://www.postgresql.org/docs/current/static/runtime-config-resource.html\n\n\"Specifies the amount of memory to be used by internal sort operations and hash tables before writing to temporary disk files. The value defaults to four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will be allowed to use as much memory as this value specifies before it starts to write data into temporary files.\"\n\n\"Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.\"\n\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Apr 2017 08:21:53 +0000 (UTC)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hi"
}
] |
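As a concrete illustration of the documentation text quoted above, work_mem can be changed per session and its effect read straight off EXPLAIN ANALYZE: a sort that spills reports an external merge on disk, while one that fits reports an in-memory quicksort. The table and column names below are placeholders:

    SET work_mem = '4MB';
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
    --   Sort Method: external merge  Disk: 150000kB      (spilled to temporary files)

    SET work_mem = '256MB';
    EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
    --   Sort Method: quicksort  Memory: 180000kB         (fit within work_mem)

Keep the caveat from the quoted documentation in mind: the limit applies per sort or hash operation and per session, so the total memory in use can be many times the configured value.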
[
{
"msg_contents": "Summary: I am facing a contention problem with ODBC on the client side. strace and perf top show we are serializing over what appears to be accesses to the ODBC statement handle. Contention goes down if I use multiple processes instead of multiple threads within a process. Also, all the threads get the same connection handle number and the same statement handle number. Is there a way to force different connection and statement handles? I have asked this question on the ODBC mailing list, and they suggested it could be something in the postgresql driver.\r\n\r\nDetails: Running the TPCx-V benchmark, we hit a performance bottleneck as the load increases. We have plenty of CPU and disk resources available in our driver VM, client VM, and database backend VM (all with high vCPU counts) on a dedicated server. When we increase the number of threads of execution, not only doesn’t throughput go up, it actually degrades. I am running with 80 threads in one process. When I divide these threads into 5 processes, performance nearly doubles. So, the problem is not in the database backend. Each thread has its own database connection and its own statement handle.\r\n\r\nLooking more closely at the client, this is what I see in the strace output when everything flows through one process:\r\n\r\n17:52:52.762491 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000102>\r\n17:52:52.762635 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 <0.000664>\r\n17:52:52.763540 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000027>\r\n17:52:52.763616 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000076>\r\n17:52:52.763738 futex(0x7fae463a9f00, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000016>\r\n17:52:52.763793 futex(0x7fae463a9f00, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000010>\r\n17:52:52.763867 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000038>\r\n17:52:52.763982 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000037>\r\n17:52:52.764078 futex(0x7fae18000020, FUTEX_WAKE_PRIVATE, 1) = 0 <0.000010>\r\n17:52:52.764182 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000030>\r\n17:52:52.764264 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000075>\r\n17:52:52.764401 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000014>\r\n17:52:52.764455 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000011>\r\n17:52:52.764507 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000025>\r\n17:52:52.764579 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000010>\r\n17:52:52.764821 sendto(227, \"\\x51\\x00\\x00\\x00\\x0b\\x43\\x4f\\x4d\\x4d\\x49\\x54\\x00\", 12, MSG_NOSIGNAL, NULL, 0) = 12 <0.000029>\r\n17:52:52.764911 recvfrom(227, 0x7fae18058760, 4096, 16384, 0, 0) = -1 EAGAIN (Resource temporarily unavailable) <0.000107>\r\n17:52:52.765065 poll([{fd=227, events=POLLIN}], 1, 4294967295) = 1 ([{fd=227, revents=POLLIN}]) <0.000017>\r\n17:52:52.765185 recvfrom(227, \"\\x43\\x00\\x00\\x00\\x0b\\x43\\x4f\\x4d\\x4d\\x49\\x54\\x00\\x5a\\x00\\x00\\x00\\x05\\x49\", 4096, MSG_NOSIGNAL, NULL, NULL) = 18 <0.000018>\r\n17:52:52.765258 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 <0.000470>\r\n17:52:52.765764 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) 
<0.000052>\r\n17:52:52.765908 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000073>\r\n17:52:52.766045 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000040>\r\n17:52:52.766246 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000026>\r\n\r\n\r\nAnd perf top shows:\r\n\r\n 9.89% [kernel] [k] _raw_spin_unlock_irqrestore\r\n 4.86% [kernel] [k] finish_task_switch\r\n 3.53% [kernel] [k] _raw_spin_lock\r\n 3.00% libodbc.so.2.0.0 [.] __check_stmt_from_dbc_v\r\n 2.80% [kernel] [k] __do_softirq\r\n 2.43% psqlodbcw.so [.] 0x000000000003b146\r\n 1.95% libodbc.so.2.0.0 [.] __validate_stmt\r\n 1.93% libc-2.17.so [.] _IO_vfscanf\r\n 1.91% libodbc.so.2.0.0 [.] __validate_dbc\r\n 1.90% psqlodbcw.so [.] copy_and_convert_field\r\n 1.80% libc-2.17.so [.] _int_malloc\r\n 1.58% libc-2.17.so [.] malloc\r\n 1.42% libc-2.17.so [.] _int_free\r\n 1.36% libodbc.so.2.0.0 [.] __release_desc\r\n 1.26% libc-2.17.so [.] __strncpy_sse2_unaligned\r\n 1.25% psqlodbcw.so [.] mylog\r\n 1.17% libc-2.17.so [.] __offtime\r\n 1.05% [kernel] [k] vmxnet3_xmit_frame\r\n 1.03% psqlodbcw.so [.] SC_fetch\r\n 0.91% libc-2.17.so [.] __memcpy_ssse3_back\r\n 0.89% libc-2.17.so [.] __strcpy_sse2_unaligned\r\n 0.87% libc-2.17.so [.] __GI_____strtod_l_internal\r\n 0.86% libc-2.17.so [.] __tz_convert\r\n 0.76% libc-2.17.so [.] __tzfile_compute\r\n 0.76% psqlodbcw.so [.] convert_linefeeds\r\n 0.71% libpthread-2.17.so [.] pthread_mutex_lock\r\n 0.69% [kernel] [k] vmxnet3_poll_rx_only\r\n 0.66% [kernel] [k] tick_nohz_idle_enter\r\n 0.64% libc-2.17.so [.] __strcmp_sse2\r\n 0.63% libc-2.17.so [.] malloc_consolidate\r\n 0.63% libodbc.so.2.0.0 [.] __set_stmt_state\r\n 0.62% [kernel] [k] __audit_syscall_exit\r\n 0.60% libc-2.17.so [.] __GI_____strtoll_l_internal\r\n 0.59% libc-2.17.so [.] __memset_sse2\r\n 0.56% libpthread-2.17.so [.] pthread_mutex_unlock\r\n 0.54% psqlodbcw.so [.] PGAPI_ExtendedFetch\r\n 0.53% [kernel] [k] ipt_do_table\r\n\r\nSo, we have severe contention, but not over actual database accesses. If I look at the PGSQL backend, I see many transactions in flight; i.e., transactions aren’t serialized. The contention appears to be over accesses to data structures:\r\n\r\n- With 5 processes and 80 threads of execution, the response time of the TRADE_STATUS transaction is 6.3ms @ 2718 total transactions/sec\r\n\r\n- With 1 process and 80 threads of execution, TRADE_STATUS response time is 44ms @ 1376 total transactions/sec\r\n\r\n- TRADE_STATUS has a number of function calls that reference the “stmt” handle returned by SQLAllocStmt:\r\n\r\no 6 to allocate the statement handle, run the query, bind input parameter, free the handle, …\r\n\r\no 13 SQLBindCol()\r\n\r\no 50 SQLFetch()\r\n\r\n- If I replace the 50 SQLFetch() calls with a single SQLExtendedFetch() call, response time drops to 20ms\r\n\r\n- If I comment out the 13 SQLBindCol() calls (which means the benchmark isn’t working right but the query still runs), it drops to 13.9ms\r\n\r\n- If by dropping these calls we were simply saving the execution time of their pathlength, we would have saved at most the 6.3ms we measured with 5 processes. So, the improvements we saw by avoiding the SQLBindCol() and SQLFetch() calls weren’t simply due to doing less work. 
They were due to avoiding contention inside those routines\r\n\r\nSomething that puzzles me is that all the calls to SQLAllocHandle(SQL_HANDLE_DBC, env_handle, &dbc_handle) set the dbc_handle to 19290 for all the threads within the process, and all the calls to SQLAllocHandle(SQL_HANDLE_STMT, dbc_handle, &stmt_handle) set the stmt_handle to 19291. Even if I call SQLAllocHandle(SQL_HANDLE_STMT, , ) multiple times to get multiple statement handles, all the handles get set to the same 19291 (and furthermore, performance gets worse even if I don’t use the new statement handles; just creating them impacts performance).\r\n\r\nIs there a way to avoid all this contention, and allow multiple threads to behave like multiple processes? The threads do not share connections, statements, etc. There should be no contention between them, and as far as I can tell, there isn’t any contention over actual resources. The contention appears to be over just accessing the ODBC data structures\r\n\r\nWe have unixODBC-2.3.4, built with:\r\n./configure --prefix=/usr --sysconfdir=/etc --enable-threads=yes --enable-drivers=yes --enable-driver-conf=yes --enable-stats=no --enable-fastvalidate=yes\r\nmake\r\nmake install\r\nMy config files look like:\r\n# cat /etc/odbcinst.ini\r\n# Example driver definitions\r\n# Driver from the postgresql-odbc package\r\n# Setup from the unixODBC package\r\n[PostgreSQL]\r\nDescription = ODBC for PostgreSQL\r\nDriver = /usr/lib/psqlodbcw.so\r\nSetup = /usr/lib/libodbcpsqlS.so\r\nDriver64 = /usr/pgsql-9.3/lib/psqlodbcw.so\r\nSetup64 = /usr/lib64/libodbcpsqlS.so\r\nFileUsage = 1\r\nThreading = 0\r\n#\r\n# cat /etc/odbc.ini\r\n[PSQL2]\r\nDescription = PostgreSQL\r\nDriver = PostgreSQL\r\nDatabase = tpcv\r\nServerName = w1-tpcv-vm-50\r\nUserName = tpcv\r\nPassword = tpcv\r\nPort = 5432\r\n[PSQL5]\r\nDescription = PostgreSQL\r\nDriver = PostgreSQL\r\nDatabase = tpcv1\r\nServerName = w1-tpcv-vm-60\r\nUserName = tpcv\r\nPassword = tpcv\r\nPort = 5432\r\n\r\n\n\n\n\n\n\n\n\n\n\n\nSummary: I am facing a contention problem with ODBC on the client side. strace and perf top show we are serializing over what appears to be accesses to the ODBC statement handle. Contention goes down if I\r\n use multiple processes instead of multiple threads within a process. Also, all the threads get the same connection handle number and the same statement handle number. Is there a way to force different connection and statement handles? I have asked this question\r\n on the ODBC mailing list, and they suggested it could be something in the postgresql driver.\n \nDetails: Running the TPCx-V benchmark, we hit a performance bottleneck as the load increases. We have plenty of CPU and disk resources available in our driver VM, client VM, and database backend VM (all with\r\n high vCPU counts) on a dedicated server. When we increase the number of threads of execution, not only doesn’t throughput go up, it actually degrades. I am running with 80 threads in one process. When I divide these threads into 5 processes, performance nearly\r\n doubles. So, the problem is not in the database backend. 
Each thread has its own database connection and its own statement handle.\n \nLooking more closely at the client, this is what I see in the strace output when everything flows through one process:\n \n17:52:52.762491 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000102>\n17:52:52.762635 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 <0.000664>\n17:52:52.763540 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000027>\n17:52:52.763616 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000076>\n17:52:52.763738 futex(0x7fae463a9f00, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000016>\n17:52:52.763793 futex(0x7fae463a9f00, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000010>\n17:52:52.763867 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000038>\n17:52:52.763982 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000037>\n17:52:52.764078 futex(0x7fae18000020, FUTEX_WAKE_PRIVATE, 1) = 0 <0.000010>\n17:52:52.764182 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000030>\n17:52:52.764264 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000075>\n17:52:52.764401 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000014>\n17:52:52.764455 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000011>\n17:52:52.764507 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000025>\n17:52:52.764579 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000010>\n17:52:52.764821 sendto(227, \"\\x51\\x00\\x00\\x00\\x0b\\x43\\x4f\\x4d\\x4d\\x49\\x54\\x00\", 12, MSG_NOSIGNAL, NULL, 0) = 12 <0.000029>\n17:52:52.764911 recvfrom(227, 0x7fae18058760, 4096, 16384, 0, 0) = -1 EAGAIN (Resource temporarily unavailable) <0.000107>\n17:52:52.765065 poll([{fd=227, events=POLLIN}], 1, 4294967295) = 1 ([{fd=227, revents=POLLIN}]) <0.000017>\n17:52:52.765185 recvfrom(227, \"\\x43\\x00\\x00\\x00\\x0b\\x43\\x4f\\x4d\\x4d\\x49\\x54\\x00\\x5a\\x00\\x00\\x00\\x05\\x49\", 4096, MSG_NOSIGNAL, NULL, NULL) = 18 <0.000018>\n17:52:52.765258 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = 0 <0.000470>\n17:52:52.765764 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000052>\n17:52:52.765908 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000073>\n17:52:52.766045 futex(0x7fae351c5100, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000040>\n17:52:52.766246 futex(0x7fae351c5100, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource temporarily unavailable) <0.000026>\n \n \nAnd perf top shows:\n \n 9.89% [kernel] [k] _raw_spin_unlock_irqrestore\n 4.86% [kernel] [k] finish_task_switch\n 3.53% [kernel] [k] _raw_spin_lock\n 3.00% libodbc.so.2.0.0 [.] __check_stmt_from_dbc_v\n 2.80% [kernel] [k] __do_softirq\n 2.43% psqlodbcw.so [.] 0x000000000003b146\n 1.95% libodbc.so.2.0.0 [.] __validate_stmt\n 1.93% libc-2.17.so [.] _IO_vfscanf\n 1.91% libodbc.so.2.0.0 [.] __validate_dbc\n 1.90% psqlodbcw.so [.] copy_and_convert_field\n 1.80% libc-2.17.so [.] _int_malloc\n 1.58% libc-2.17.so [.] malloc\n 1.42% libc-2.17.so [.] _int_free\n 1.36% libodbc.so.2.0.0 [.] __release_desc\n 1.26% libc-2.17.so [.] __strncpy_sse2_unaligned\n 1.25% psqlodbcw.so [.] mylog\n 1.17% libc-2.17.so [.] __offtime\n 1.05% [kernel] [k] vmxnet3_xmit_frame\n 1.03% psqlodbcw.so [.] SC_fetch\n 0.91% libc-2.17.so [.] 
__memcpy_ssse3_back\n 0.89% libc-2.17.so [.] __strcpy_sse2_unaligned\n 0.87% libc-2.17.so [.] __GI_____strtod_l_internal\n 0.86% libc-2.17.so [.] __tz_convert\n 0.76% libc-2.17.so [.] __tzfile_compute\n 0.76% psqlodbcw.so [.] convert_linefeeds\n 0.71% libpthread-2.17.so [.] pthread_mutex_lock\n 0.69% [kernel] [k] vmxnet3_poll_rx_only\n 0.66% [kernel] [k] tick_nohz_idle_enter\n 0.64% libc-2.17.so [.] __strcmp_sse2\n 0.63% libc-2.17.so [.] malloc_consolidate\n 0.63% libodbc.so.2.0.0 [.] __set_stmt_state\n 0.62% [kernel] [k] __audit_syscall_exit\n 0.60% libc-2.17.so [.] __GI_____strtoll_l_internal\n 0.59% libc-2.17.so [.] __memset_sse2\n 0.56% libpthread-2.17.so [.] pthread_mutex_unlock\n 0.54% psqlodbcw.so [.] PGAPI_ExtendedFetch\n 0.53% [kernel] [k] ipt_do_table\n \nSo, we have severe contention, but not over actual database accesses. If I look at the PGSQL backend, I see many transactions in flight; i.e., transactions aren’t serialized. The contention appears to be over\r\n accesses to data structures:\n- \r\nWith 5 processes and 80 threads of execution, the response time of the TRADE_STATUS transaction is 6.3ms @ 2718 total transactions/sec\n- \r\nWith 1 process and 80 threads of execution, TRADE_STATUS response time is 44ms @ 1376 total transactions/sec\n- \r\nTRADE_STATUS has a number of function calls that reference the “stmt” handle returned by SQLAllocStmt:\n\no \r\n6 to allocate the statement handle, run the query, bind input parameter, free the handle, …\n\no \r\n13 SQLBindCol()\n\no \r\n50 SQLFetch()\n- \r\nIf I replace the 50 SQLFetch() calls with a single SQLExtendedFetch() call, response time drops to 20ms\n- \r\nIf I comment out the 13 SQLBindCol() calls (which means the benchmark isn’t working right but the query still runs), it drops to 13.9ms\n- \r\nIf by dropping these calls we were simply saving the execution time of their pathlength, we would have saved at most the 6.3ms we measured with 5 processes. So, the improvements we saw by avoiding\r\n the SQLBindCol() and SQLFetch() calls weren’t simply due to doing less work. They were due to avoiding contention inside those routines\n \nSomething that puzzles me is that all the calls to SQLAllocHandle(SQL_HANDLE_DBC, env_handle, &dbc_handle) set the dbc_handle to 19290 for all the threads within the process, and all the calls to SQLAllocHandle(SQL_HANDLE_STMT,\r\n dbc_handle, &stmt_handle) set the stmt_handle to 19291. Even if I call SQLAllocHandle(SQL_HANDLE_STMT, , ) multiple times to get multiple statement handles, all the handles get set to the same 19291 (and furthermore, performance gets worse even if I don’t\r\n use the new statement handles; just creating them impacts performance).\n \nIs there a way to avoid all this contention, and allow multiple threads to behave like multiple processes? The threads do not share connections, statements, etc. There should be no contention between them,\r\n and as far as I can tell, there isn’t any contention over actual resources. 
The contention appears to be over just accessing the ODBC data structures\n \nWe have unixODBC-2.3.4, built with:\n./configure --prefix=/usr --sysconfdir=/etc --enable-threads=yes --enable-drivers=yes --enable-driver-conf=yes --enable-stats=no --enable-fastvalidate=yes\nmake\nmake install\nMy config files look like:\n# cat /etc/odbcinst.ini \n# Example driver definitions\n# Driver from the postgresql-odbc package\n# Setup from the unixODBC package\n[PostgreSQL]\nDescription = ODBC for PostgreSQL\nDriver = /usr/lib/psqlodbcw.so\nSetup = /usr/lib/libodbcpsqlS.so\nDriver64 = /usr/pgsql-9.3/lib/psqlodbcw.so\nSetup64 = /usr/lib64/libodbcpsqlS.so\nFileUsage = 1\nThreading = 0\n#\n# cat /etc/odbc.ini \n[PSQL2]\nDescription = PostgreSQL\nDriver = PostgreSQL\nDatabase = tpcv\nServerName = w1-tpcv-vm-50\nUserName = tpcv\nPassword = tpcv\nPort = 5432\n[PSQL5]\nDescription = PostgreSQL\nDriver = PostgreSQL\nDatabase = tpcv1\nServerName = w1-tpcv-vm-60\nUserName = tpcv\nPassword = tpcv\nPort = 5432",
"msg_date": "Thu, 13 Apr 2017 19:30:54 +0000",
"msg_from": "Reza Taheri <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgresql, and ODBC handles"
}
] |
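A hedged diagnostic sketch (not part of the thread above) for confirming from the server side that the serialization is in the ODBC layer rather than in PostgreSQL. It assumes the 9.6 pg_stat_activity columns wait_event_type/wait_event; 9.5 and earlier only expose a boolean "waiting" column.

-- Are server backends actually waiting on locks/LWLocks, or mostly active/idle?
SELECT wait_event_type, wait_event, state, count(*)
FROM pg_stat_activity
GROUP BY 1, 2, 3
ORDER BY 4 DESC;

-- Any ungranted heavyweight locks?
SELECT locktype, relation::regclass, mode, count(*)
FROM pg_locks
WHERE NOT granted
GROUP BY 1, 2, 3;

If both queries come back essentially empty while the client threads stall, the contention is on the client side, which matches the futex-heavy strace output and the libodbc symbols in perf above.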
[
{
"msg_contents": "I have found some examples of people tweaking this\nparameter track_activity_query_size to various setting such as 4000, 10000,\n15000, but little discussion as to performance impact on memory usage.\nWhat I don't have a good sense of is how significant this would be for a\nhigh traffic system with rapid connection creation/destruction, say 1000s\nper second. In such a case, would there be a reason to hesitate raising it\nto 10000 from 1024? Is 10k memory insignificant? Any direction here is\nmuch appreciated, including a good way to benchmark this kind of thing.\n\nThanks!\n\nI have found some examples of people tweaking this parameter track_activity_query_size to various setting such as 4000, 10000, 15000, but little discussion as to performance impact on memory usage. What I don't have a good sense of is how significant this would be for a high traffic system with rapid connection creation/destruction, say 1000s per second. In such a case, would there be a reason to hesitate raising it to 10000 from 1024? Is 10k memory insignificant? Any direction here is much appreciated, including a good way to benchmark this kind of thing.Thanks!",
"msg_date": "Thu, 13 Apr 2017 15:45:49 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": true,
"msg_subject": "Impact of track_activity_query_size on high traffic OLTP system"
},
{
"msg_contents": "I always bump it up, but usually just to 4096, because I often have queries\nthat are longer than 1024 and I'd like to be able to see the full query.\nI've never seen any significant memory impact. I suppose if you had\nthousands of concurrent queries it would add up, but if you only have a few\ndozen, or even a few hundred queries at any given moment - on a modern\nsystem it doesn't seem to impact things very much.\n\n\nOn Thu, Apr 13, 2017 at 4:45 PM, Jeremy Finzel <[email protected]> wrote:\n\n> I have found some examples of people tweaking this\n> parameter track_activity_query_size to various setting such as 4000,\n> 10000, 15000, but little discussion as to performance impact on memory\n> usage. What I don't have a good sense of is how significant this would be\n> for a high traffic system with rapid connection creation/destruction, say\n> 1000s per second. In such a case, would there be a reason to hesitate\n> raising it to 10000 from 1024? Is 10k memory insignificant? Any direction\n> here is much appreciated, including a good way to benchmark this kind of\n> thing.\n>\n> Thanks!\n>\n\nI always bump it up, but usually just to 4096, because I often have queries that are longer than 1024 and I'd like to be able to see the full query. I've never seen any significant memory impact. I suppose if you had thousands of concurrent queries it would add up, but if you only have a few dozen, or even a few hundred queries at any given moment - on a modern system it doesn't seem to impact things very much.On Thu, Apr 13, 2017 at 4:45 PM, Jeremy Finzel <[email protected]> wrote:I have found some examples of people tweaking this parameter track_activity_query_size to various setting such as 4000, 10000, 15000, but little discussion as to performance impact on memory usage. What I don't have a good sense of is how significant this would be for a high traffic system with rapid connection creation/destruction, say 1000s per second. In such a case, would there be a reason to hesitate raising it to 10000 from 1024? Is 10k memory insignificant? Any direction here is much appreciated, including a good way to benchmark this kind of thing.Thanks!",
"msg_date": "Thu, 13 Apr 2017 17:17:07 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Impact of track_activity_query_size on high traffic\n OLTP system"
}
] |
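A rough way to reason about the memory question in the thread above: the query-text buffer is allocated in shared memory at roughly track_activity_query_size bytes per allowed backend slot, so the total scales with max_connections (plus background and autovacuum workers). The sketch below uses 10000 bytes purely as an example value; the ALTER SYSTEM line only takes effect after a restart because the parameter sizes shared memory.

SHOW track_activity_query_size;
SHOW max_connections;

-- Back-of-the-envelope total for a 10000-byte setting:
SELECT current_setting('max_connections')::int * 10000 / 1024.0 / 1024.0
       AS approx_total_mb;

ALTER SYSTEM SET track_activity_query_size = 10000;  -- takes effect after restart

At 500 connections that is on the order of 5 MB, which is why raising this parameter is usually considered cheap; subtract the current 1024-byte allocation if you want the incremental cost.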
[
{
"msg_contents": "Hello,\n\nafter updating my server to Postgres 9.5 (or 9.6) I tried to import PG 9.4 dumps and/or to restore 9.5/9.6 dumps:\n\npg_dump testdb > db.sql\npsql -d testdb -f db.sql\n\nRestoring these dumps in PG 9.4 takes less than 20 minutes, restoring them in PG 9.5/9.5 takes several hours on the same system (even if I make a PG 9.5/9.6 dumps and try to restore this one)!\nI also tried to restore dumps with the original PG 9.4/9.5.9.6 configuration as well as with different options like increasing max_wal_size in 9.5/9.6. e.g.\nDo I miss a specific option in 9.5/9.6 which may be different to 9.4? If I turn off autovacuum (9.5/9.6) it's faster - but not as fast as with PG 9.4.\n\nExample Log 9.5/9.6:\n\nLOG: duration: 278349.128 ms statement: COPY test (id, ...)\nLOG: duration: 646487.952 ms statement: ALTER TABLE ONLY test ...\nThe same with creating index... It takes hours with PG 9.5./9.6!\n\n\nThanks, Hans\n\n<http://www.maps-for-free.com/>\n\n\n\n\n\n\n\n\n\nHello,\n\n\nafter updating my server to Postgres 9.5 (or 9.6) I tried to import PG 9.4 dumps and/or to restore 9.5/9.6 dumps:\n\n\npg_dump testdb > db.sql\npsql -d testdb -f db.sql\n\n\nRestoring these dumps in PG 9.4 takes less than 20 minutes, restoring them in PG 9.5/9.5 takes several hours on the same system (even if I make a PG 9.5/9.6 dumps and try to restore this one)!\nI also tried to restore dumps with the original PG 9.4/9.5.9.6 configuration as well as with different options like increasing max_wal_size in 9.5/9.6. e.g.\n\nDo I miss a specific option in 9.5/9.6 which may be different to 9.4? If I turn off autovacuum (9.5/9.6) it's faster - but not as fast as with PG 9.4.\n\n\n\nExample Log 9.5/9.6:\n\n\nLOG: duration: 278349.128 ms statement: COPY test (id, ...)\nLOG: duration: 646487.952 ms statement: ALTER TABLE ONLY test ...\nThe same with creating index... It takes hours with PG 9.5./9.6!\n\n\nThanks, Hans",
"msg_date": "Fri, 14 Apr 2017 23:30:10 +0000",
"msg_from": "Hans Braxmeier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres 9.5 / 9.6: Restoring PG 9.4 dump is very very slow"
},
{
"msg_contents": "Hans Braxmeier <[email protected]> writes:\n> Restoring these dumps in PG 9.4 takes less than 20 minutes, restoring them in PG 9.5/9.5 takes several hours on the same system (even if I make a PG 9.5/9.6 dumps and try to restore this one)!\n\nCan you provide a test case demonstrating this sort of slowdown?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Apr 2017 19:40:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres 9.5 / 9.6: Restoring PG 9.4 dump is very very slow"
}
] |
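The thread above ends without a test case or resolution, so the following is only a generic, hedged sketch of knobs commonly adjusted when a plain-SQL restore through psql is slow; the values are examples, not recommendations from the thread.

-- Session-level settings for the psql session doing the restore:
SET maintenance_work_mem = '1GB';   -- larger sort memory for CREATE INDEX
SET synchronous_commit = off;       -- don't wait for a WAL flush on every commit

-- In postgresql.conf (9.5+), fewer forced checkpoints during the load:
-- max_wal_size = '10GB'
-- checkpoint_timeout = '30min'

If the dump can be taken in custom format (pg_dump -Fc), pg_restore with -j N also lets the COPYs and index builds run in parallel, which a single psql session replaying a plain-SQL dump cannot do.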
[
{
"msg_contents": "I mailed last month [0] but didn't see any reponse .. (if I'm being naive,\ndaft, or missing something simple, please just say so).\n\n[0] https://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com\n\nIt seems when self (inner/equi) joining there's two bad alternatives: either\nspecify a where clause for each self-joined table and incur poor estimate and\nplan, due to incorrect perceived independence of clauses, even though joined\ncolumn(s) could/ought to be known equal; or, specify where clause only once,\nand incur cost of joining across all partitions, due to no contraint exclusion\non one (or more) self-joined table heirarchy/s.\n\n-- Specify WHERE for each table causes bad underestimate:\n|ts=# explain analyze SELECT * FROM eric_enodeb_metrics a JOIN eric_enodeb_metrics b USING (start_time, site_id) WHERE a.start_time>='2017-03-19' AND a.start_time<'2017-03-20' AND b.start_time>='2017-03-19' AND b.start_time<'2017-03-20';\n| Hash Join (cost=7310.80..14680.86 rows=14 width=1436) (actual time=33.053..73.180 rows=7869 loops=1)\n| Hash Cond: ((a.start_time = b.start_time) AND (a.site_id = b.site_id))\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=1.394..19.414 rows=7869 loops=1)\n| -> Seq Scan on eric_enodeb_metrics a (cost=0.00..0.00 rows=1 width=718) (actual time=0.003..0.003 rows=0 loops=1)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 a_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=1.390..14.536 rows=7869 loops=1)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=1.351..1.351 rows=7869 loops=1)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Hash (cost=7192.56..7192.56 rows=7883 width=723) (actual time=31.620..31.620 rows=7869 loops=1)\n| Buckets: 8192 Batches: 1 Memory Usage: 1986kB\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=0.902..19.543 rows=7869 loops=1)\n| -> Seq Scan on eric_enodeb_metrics b (cost=0.00..0.00 rows=1 width=718) (actual time=0.002..0.002 rows=0 loops=1)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 b_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=0.899..14.353 rows=7869 loops=1)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=0.867..0.867 rows=7869 loops=1)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n\n\n\n-- Specify WHERE once gets good estimate, but with unnecessary scan of all child partitions:\n|ts=# explain analyze SELECT * FROM eric_enodeb_metrics a JOIN eric_enodeb_metrics b USING (start_time, site_id) WHERE start_time>='2017-03-19' AND start_time<'2017-03-20';\n| Gather 
(cost=8310.80..316545.60 rows=9591 width=1427) (actual time=9012.967..9073.539 rows=7869 loops=1)\n| Workers Planned: 3\n| Workers Launched: 3\n| -> Hash Join (cost=7310.80..314586.50 rows=3094 width=1427) (actual time=8892.121..8937.245 rows=1967 loops=4)\n| Hash Cond: ((b.start_time = a.start_time) AND (b.site_id = a.site_id))\n| -> Append (cost=0.00..261886.54 rows=2015655 width=714) (actual time=11.464..8214.063 rows=1308903 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_metrics b (cost=0.00..0.00 rows=1 width=718) (actual time=0.001..0.001 rows=0 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_201510 b_1 (cost=0.00..10954.43 rows=60343 width=707) (actual time=11.460..258.852 rows=46766 loops=4)\n| -> Parallel Seq Scan on eric_enodeb_201511 b_2 (cost=0.00..10310.91 rows=56891 width=707) (actual time=18.395..237.841 rows=44091 loops=4)\n|[...]\n| -> Parallel Seq Scan on eric_enodeb_201703 b_29 (cost=0.00..6959.75 rows=81875 width=723) (actual time=0.017..101.969 rows=49127 loops=4)\n| -> Hash (cost=7192.56..7192.56 rows=7883 width=723) (actual time=51.843..51.843 rows=7869 loops=4)\n| Buckets: 8192 Batches: 1 Memory Usage: 1970kB\n| -> Append (cost=0.00..7192.56 rows=7883 width=723) (actual time=2.558..27.829 rows=7869 loops=4)\n| -> Seq Scan on eric_enodeb_metrics a (cost=0.00..0.00 rows=1 width=718) (actual time=0.014..0.014 rows=0 loops=4)\n| Filter: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| -> Bitmap Heap Scan on eric_enodeb_201703 a_1 (cost=605.34..7192.56 rows=7882 width=723) (actual time=2.542..17.305 rows=7869 loops=4)\n| Recheck Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n| Heap Blocks: exact=247\n| -> Bitmap Index Scan on eric_enodeb_201703_unique_idx (cost=0.00..603.37 rows=7882 width=0) (actual time=2.494..2.494 rows=7869 loops=4)\n| Index Cond: ((start_time >= '2017-03-19 00:00:00-04'::timestamp with time zone) AND (start_time < '2017-03-20 00:00:00-04'::timestamp with time zone))\n\n\nMinor variations have same problems;\n-- Scans all partitions:\nts=# explain analyze SELECT * FROM (SELECT * FROM eric_enodeb_metrics a) t1 JOIN (SELECT * FROM eric_enodeb_metrics b WHERE start_time>='2017-03-19 23:00:00' AND start_time<'2017-03-20') t2 USING (start_time, site_id);\n\n-- Underestimtes due to perceived independence of clause:\n|ts=# explain analyze SELECT * FROM (SELECT * FROM eric_enodeb_metrics a WHERE start_time>='2017-03-19' AND start_time<'2017-03-20') t1 JOIN (SELECT * FROM eric_enodeb_metrics b WHERE start_time>='2017-03-19' AND start_time<'2017-03-20') t2 USING (start_time, site_id);\n| Hash Join (cost=7308.59..14676.41 rows=14 width=1436) (actual time=30.352..64.004 rows=7869 loops=1)\n\nThank you in advance for your any response.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Apr 2017 19:23:22 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "self join estimate and constraint exclusion"
},
{
"msg_contents": "We got bitten again by what appears to be the same issue I reported (perhaps\npoorly) here:\nhttps://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com\n\nWe have PG9.6.3 table heirarchies partitioned by time. Our reports use\nsubqueries each with their own copies of a range clauses on time column, as\nneeded to get constraint exclusion reference:\nhttps://www.postgresql.org/message-id/25076.1366321335%40sss.pgh.pa.us\n\n\tSELECT * FROM\n\t(SELECT * FROM t WHERE col>const) a JOIN\n\t(SELECT * FROM t WHERE col>const) b USING (col)\n\nI'm diagnosing a bad estimate/plan due to excessively high n_distinct leading\nto underestimated rowcount when selecting from a small fraction of the table\nheirarchy. This leads intermittently to bad things, specifically a cascade of\nmisestimates and associated nested loops around millions of rows.\n\nArtificial/generated/contrived test case, involving table with 99 instances\neach of 99 values:\n\npostgres=# CREATE TABLE t(i INT);\npostgres=# TRUNCATE t;INSERT INTO t SELECT i FROM generate_series(1,99) i,generate_series(1,99);ANALYZE t;\npostgres=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, (SELECT MAX(x) FROM unnest(most_common_vals::text::text[]) x) maxmcv, (histogram_bounds::text::text[])[array_length(histogram_bounds,1)] maxhist FROM pg_stats WHERE attname~'i' AND tablename='t' GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 1 DESC;\n-[ RECORD 1 ]--\nfrac_mcv | 1\ntablename | t\nattname | i\nn_distinct | 99\nn_mcv | 99\nn_hist |\nmaxmcv | 99\nmaxhist |\n\nrange query (which could use constraint exclusion), but bad estimate:\npostgres=# explain ANALYZE SELECT * FROM (SELECT * FROM t WHERE i<2) AS a JOIN (SELECT * FROM t WHERE i<2) AS b USING (i);\n Merge Join (cost=339.59..341.57 rows=99 width=4) (actual time=8.272..16.892 rows=9801 loops=1)\n\nrange query which could NOT use constraint exclusion, good estimate:\npostgres=# explain ANALYZE SELECT * FROM (SELECT * FROM t) AS a JOIN (SELECT * FROM t) AS b USING (i) WHERE i<2;\n Hash Join (cost=264.52..541.54 rows=9801 width=4) (actual time=12.688..22.325 rows=9801 loops=1)\n\nnon-range query, good estimate:\npostgres=# explain ANALYZE SELECT * FROM (SELECT * FROM t WHERE i=3) AS a JOIN (SELECT * FROM t WHERE i=3) AS b USING (i);\n Nested Loop (cost=0.00..455.78 rows=9801 width=4) (actual time=0.482..15.820 rows=9801 loops=1)\n\nMy understanding:\nPostgres estimates join selectivity using number of distinct values of\nunderlying. For the subqueries \"a\" and \"b\", the estimate is same as for\nunderlying table \"t\", even when selecting only a small fraction of the table...\nThis is adt/selfuncs:eqjoinsel_inner().\n\nNote, in my tests, report queries on the child table have correct estimates;\nand, queries with only \"push down\" WHERE clause outside the subquery have\ncorrect estimate (but not constraint exclusion), apparently due to\ncalc_joinrel_size_estimate() returning the size of the parent table, planning\nan join without restriction clause, following by filtering the join result, at\nwhich point I guess the MCV list becomes useful and estimate is perfect..\n\n\tSELECT * FROM\n\t(SELECT * FROM t)a JOIN(SELECT * FROM t)b\n\tUSING (col) WHERE col>const\n\nSo my original question is basically still opened ... 
is it possible to get\nboth good estimates/plans AND constraint exclusion ??\n\nThanks\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 May 2017 16:17:30 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "join estimate of subqueries with range conditions and constraint\n exclusion"
},
{
"msg_contents": "On Wed, May 24, 2017 at 04:17:30PM -0500, Justin Pryzby wrote:\n> We got bitten again by what appears to be the same issue I reported (perhaps\n> poorly) here:\n> https://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com\n\n> I'm diagnosing a bad estimate/plan due to excessively high n_distinct leading\n> to underestimated rowcount when selecting from a small fraction of the table\n> heirarchy. This leads intermittently to bad things, specifically a cascade of\n> misestimates and associated nested loops around millions of rows.\n\nI dug into this some more; I can mitigate the issue with this change:\n\ndiff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\nindex 6a4f7b1..962a5b4 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -2279,6 +2279,22 @@ eqjoinsel_inner(Oid operator,\n \n nd1 = get_variable_numdistinct(vardata1, &isdefault1);\n nd2 = get_variable_numdistinct(vardata2, &isdefault2);\n+ elog(DEBUG4, \"nd %lf %lf\", nd1 ,nd2);\n+ if (nd1>vardata1->rel->rows) nd1=vardata1->rel->rows;\n+ if (nd2>vardata1->rel->rows) nd2=vardata2->rel->rows;\n+\n+ elog(DEBUG4, \"nd %lf %lf\", nd1 ,nd2);\n+ elog(DEBUG4, \"rows %lf %lf\", vardata1->rel->rows ,vardata2->rel->rows);\n+ elog(DEBUG4, \"tuples %lf %lf\", vardata1->rel->tuples ,vardata2->rel->tuples);\n\noriginal estimate:\n\nDEBUG: nd 35206.000000 35206.000000\nDEBUG: nd 35206.000000 35206.000000\nDEBUG: rows 5031.000000 5031.000000\nDEBUG: tuples 5031.000000 5031.000000\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=1294.56..2558.62 rows=723 width=750) (actual time=103.273..490.984 rows=50300 loops=1)\n Hash Cond: (eric_enodeb_metrics.start_time = eric_enodeb_metrics_1.start_time)\n\npatched estimate/plan:\n\npostgres=# explain ANALYZE SELECT * FROM (SELECT * FROM eric_enodeb_metrics WHERE start_time>'2017-04-25 18:00') x JOIN (SELECT * FROM eric_enodeb_metrics WHERE start_time>'2017-04-25 18:00') y USING (start_time);\n\nDEBUG: nd 35206.000000 35206.000000\nDEBUG: nd 5031.000000 5031.000000\nDEBUG: rows 5031.000000 5031.000000\nDEBUG: tuples 5031.000000 5031.000000\n\n| Hash Join (cost=1294.56..2602.14 rows=5075 width=750) (actual time=90.445..477.712 rows=50300 loops=1)\n| Hash Cond: (eric_enodeb_metrics.start_time = eric_enodeb_metrics_1.start_time)\n| -> Append (cost=0.00..1231.67 rows=5031 width=379) (actual time=16.424..46.899 rows=5030 loops=1)\n| -> Seq Scan on eric_enodeb_metrics (cost=0.00..0.00 rows=1 width=378) (actual time=0.012..0.012 rows=0 loops=1)\n| Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)\n| -> Seq Scan on eric_enodeb_201704 (cost=0.00..1231.67 rows=5030 width=379) (actual time=16.408..45.634 rows=5030 loops=1)\n| Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)\n| Rows Removed by Filter: 23744\n| -> Hash (cost=1231.67..1231.67 rows=5031 width=379) (actual time=73.801..73.801 rows=5030 loops=1)\n| Buckets: 8192 Batches: 1 Memory Usage: 1283kB\n| -> Append (cost=0.00..1231.67 rows=5031 width=379) (actual time=14.607..47.395 rows=5030 loops=1)\n| -> Seq Scan on eric_enodeb_metrics eric_enodeb_metrics_1 (cost=0.00..0.00 rows=1 width=378) (actual time=0.009..0.009 rows=0 loops=1)\n| Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)\n| -> Seq Scan on eric_enodeb_201704 eric_enodeb_201704_1 
(cost=0.00..1231.67 rows=5030 width=379) (actual time=14.594..46.091 rows=5030 loops=1)\n| Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)\n| Rows Removed by Filter: 23744\n\n.. which gets additionally extreme with increasingly restrictive condition, as\nrows estimate diverges more from nd.\n\nThere's still an 2nd issue which this doesn't address, having to do with joins\nof tables with full/complete MCV lists, and selective queries on those tables,\nas demonstrated by the artificial test:\n\n> postgres=# CREATE TABLE t(i INT);\n> postgres=# TRUNCATE t;INSERT INTO t SELECT i FROM generate_series(1,99) i,generate_series(1,99);ANALYZE t;\n> postgres=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, (SELECT MAX(x) FROM unnest(most_common_vals::text::text[]) x) maxmcv, (histogram_bounds::text::text[])[array_length(histogram_bounds,1)] maxhist FROM pg_stats WHERE attname~'i' AND tablename='t' GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 1 DESC;\n> -[ RECORD 1 ]--\n> frac_mcv | 1\n> tablename | t\n> attname | i\n> n_distinct | 99\n> n_mcv | 99\n> n_hist |\n> maxmcv | 99\n> maxhist |\n> \n> range query (which could use constraint exclusion), but bad estimate:\n> postgres=# explain ANALYZE SELECT * FROM (SELECT * FROM t WHERE i<2) AS a JOIN (SELECT * FROM t WHERE i<2) AS b USING (i);\n> Merge Join (cost=339.59..341.57 rows=99 width=4) (actual time=8.272..16.892 rows=9801 loops=1)\n\n\n> My understanding:\n> Postgres estimates join selectivity using number of distinct values of\n> underlying. For the subqueries \"a\" and \"b\", the estimate is same as for\n> underlying table \"t\", even when selecting only a small fraction of the table...\n\nIt seems to me that 1) estimates of tables with MCV lists including every\ncolumn values should get much better estiamtes than that, and hopefully\nestimates of (t WHERE) JOIN (t WHERE) USING (c) as good as t JOIN t USING(c)\nWHERE. 2) postgres estimator doesn't have everything it needs to invoke\nexisting functionality to apply all its knowledge without also invoking the\nexecutor (testing MCVs for passing qual conditions); 3) frequency values (join\neqsel's numbers[]) should be scaled up by something resembling rows/tuples, but\nmy existing test showed that can be too strong a correction.\n\nJustin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 May 2017 05:52:15 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: join estimate of subqueries with range conditions and constraint\n exclusion"
},
{
"msg_contents": "On Wed, May 24, 2017 at 2:17 PM, Justin Pryzby <[email protected]> wrote:\n\n> We got bitten again by what appears to be the same issue I reported\n> (perhaps\n> poorly) here:\n> https://www.postgresql.org/message-id/20170326193344.\n> GS31628%40telsasoft.com\n>\n> We have PG9.6.3 table heirarchies partitioned by time. Our reports use\n> subqueries each with their own copies of a range clauses on time column, as\n> needed to get constraint exclusion reference:\n> https://www.postgresql.org/message-id/25076.1366321335%40sss.pgh.pa.us\n>\n> SELECT * FROM\n> (SELECT * FROM t WHERE col>const) a JOIN\n> (SELECT * FROM t WHERE col>const) b USING (col)\n>\n> I'm diagnosing a bad estimate/plan due to excessively high n_distinct\n> leading\n> to underestimated rowcount when selecting from a small fraction of the\n> table\n> heirarchy. This leads intermittently to bad things, specifically a\n> cascade of\n> misestimates and associated nested loops around millions of rows.\n>\n\nJustin,\n\nI'm not going to be much help personally but I just wanted to say that with\nPGCon just completed and Beta1 just starting, combined with the somewhat\nspecialized nature of the problem, a response should be forthcoming even\nthough its taking a bit longer than usual.\n\nDavid J.\n\nOn Wed, May 24, 2017 at 2:17 PM, Justin Pryzby <[email protected]> wrote:We got bitten again by what appears to be the same issue I reported (perhaps\npoorly) here:\nhttps://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com\n\nWe have PG9.6.3 table heirarchies partitioned by time. Our reports use\nsubqueries each with their own copies of a range clauses on time column, as\nneeded to get constraint exclusion reference:\nhttps://www.postgresql.org/message-id/25076.1366321335%40sss.pgh.pa.us\n\n SELECT * FROM\n (SELECT * FROM t WHERE col>const) a JOIN\n (SELECT * FROM t WHERE col>const) b USING (col)\n\nI'm diagnosing a bad estimate/plan due to excessively high n_distinct leading\nto underestimated rowcount when selecting from a small fraction of the table\nheirarchy. This leads intermittently to bad things, specifically a cascade of\nmisestimates and associated nested loops around millions of rows.Justin,I'm not going to be much help personally but I just wanted to say that with PGCon just completed and Beta1 just starting, combined with the somewhat specialized nature of the problem, a response should be forthcoming even though its taking a bit longer than usual.David J.",
"msg_date": "Sat, 3 Jun 2017 12:23:59 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: join estimate of subqueries with range conditions and\n constraint exclusion"
},
{
"msg_contents": "Justin Pryzby <[email protected]> writes:\n> I dug into this some more; I can mitigate the issue with this change:\n\n> diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\n> index 6a4f7b1..962a5b4 100644\n> --- a/src/backend/utils/adt/selfuncs.c\n> +++ b/src/backend/utils/adt/selfuncs.c\n> @@ -2279,6 +2279,22 @@ eqjoinsel_inner(Oid operator,\n \n> nd1 = get_variable_numdistinct(vardata1, &isdefault1);\n> nd2 = get_variable_numdistinct(vardata2, &isdefault2);\n> + elog(DEBUG4, \"nd %lf %lf\", nd1 ,nd2);\n> + if (nd1>vardata1->rel->rows) nd1=vardata1->rel->rows;\n> + if (nd2>vardata1->rel->rows) nd2=vardata2->rel->rows;\n> +\n> + elog(DEBUG4, \"nd %lf %lf\", nd1 ,nd2);\n> + elog(DEBUG4, \"rows %lf %lf\", vardata1->rel->rows ,vardata2->rel->rows);\n> + elog(DEBUG4, \"tuples %lf %lf\", vardata1->rel->tuples ,vardata2->rel->tuples);\n\nI don't like this change too much. I agree that intuitively you would\nnot expect the number of distinct values to exceed the possibly-restricted\nnumber of rows from the input relation, but I think this falls foul of\nthe problem mentioned in eqjoinsel_semi's comments, namely that it's\neffectively double-counting the restriction selectivity. It happens to\nimprove matters in the test case you show, but it's not exactly producing\na good estimate even so; and the fact that the change is in the right\ndirection seems like mostly an artifact of particular ndistinct and\nrowcount values. I note for instance that this patch would do nothing\nat all for the toy example you posted upthread, because nd1/nd2 are\nalready equal to the rows estimates in that case.\n\nThe core reason why you get good results for\n\n\tselect * from a join b using (x) where x = constant\n\nis that there's a great deal of intelligence in the planner about\ntransitive equality deductions and what to do with partially-redundant\nequality clauses. The reason you don't get similarly good results for\n\n\tselect * from a join b using (x) where x < constant\n\nis that there is no comparable machinery for inequalities. Maybe there\nshould be, but it'd be a fair bit of work to create, and we'd have to\nkeep one eye firmly fixed on whether it slows planning down even in cases\nwhere no benefit ensues. In the meantime, I'm not sure that there are\nany quick-hack ways of materially improving the situation :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 05 Jun 2017 17:02:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: join estimate of subqueries with range conditions and\n constraint exclusion"
},
{
"msg_contents": "On Mon, Jun 05, 2017 at 05:02:32PM -0400, Tom Lane wrote:\n> Justin Pryzby <[email protected]> writes:\n> > diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\n> > + if (nd1>vardata1->rel->rows) nd1=vardata1->rel->rows;\n> > + if (nd2>vardata1->rel->rows) nd2=vardata2->rel->rows;\n> \n> I don't like this change too much.\n\nThanks for your analysis ;)\n\nI have a 2nd patch which improves the 2nd case I mentioned..\n\n> I note for instance that this patch would do nothing at all for the toy\n\n>> There's still an 2nd issue which this doesn't address, having to do with joins\n>> of tables with full/complete MCV lists, and selective queries on those tables,\n>> as demonstrated by the artificial test:\n>>\n>> > postgres=# CREATE TABLE t(i INT);\n>> > postgres=# TRUNCATE t;INSERT INTO t SELECT i FROM generate_series(1,99) i,generate_series(1,99);ANALYZE t;\n>> > postgres=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, (SELECT MAX(x) FROM unnest(most_common_vals::text::text[]) x) maxmcv, (histogram_bounds::text::text[])[array_length(histogram_bounds,1)] maxhist FROM pg_stats WHERE attname~'i' AND tablename='t' GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 1 DESC;\n\nI pointed out that there were two issues, both involving underestimates from\nquerying a fraction of a table using inequality condition. One due to join\nestimate based on \"nd\" (and not substantially based on MCV), and one due to\nfrequencies associated with MCV list (and not substantially falling back to\nestimate from \"nd\").\n\nI made another patch to address the 2nd issue, which affects our pre-aggregated\ntables (which are partitioned by month, same as the raw tables). The\naggregated tables are the result of something like SELECT start_time::date, k1,\nk2, ..., sum(a), avg(b) ... 
GROUP BY 1,2,3, so have many fewer rows, and nd for\nstart_time::date column would be at most 31, so MCV list would be expected to\nbe complete, same as the \"toy\" example I gave.\n\nSometimes when we query the aggregated tables for a small number of days we get\nunderestimate leading to nested loops..\n\nWithout patch:\nMerge Join (cost=339.59..341.57 rows=99 width=4) (actual time=10.190..17.430 rows=9801 loops=1)\n\nWith patch:\nDEBUG: ndfactor 99.000000 99.000000\nDEBUG: nmatches 99 matchprodfreq 1.000000\nDEBUG: nmatches 99 matchprodfreq 1.000000\nDEBUG: matchfreq1 99.000000 unmatchfreq1 0.000000\nDEBUG: matchfreq1 1.000000 unmatchfreq1 0.000000\nDEBUG: matchfreq2 99.000000 unmatchfreq2 0.000000\nDEBUG: matchfreq2 1.000000 unmatchfreq2 0.000000\nDEBUG: otherfreq1 0.000000 otherfreq2 0.000000\nDEBUG: select(1) 1.000000\n Hash Join (cost=167.75..444.77 rows=9801 width=4) (actual time=4.706..13.892 rows=9801 loops=1)\n\n\ndiff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\nindex 6a4f7b1..bc88423 100644\n--- a/src/backend/utils/adt/selfuncs.c\n+++ b/src/backend/utils/adt/selfuncs.c\n@@ -2279,6 +2279,14 @@ eqjoinsel_inner(Oid operator,\n \n \tnd1 = get_variable_numdistinct(vardata1, &isdefault1);\n \tnd2 = get_variable_numdistinct(vardata2, &isdefault2);\n+\tfloat ndfactor1=1;\n+\tfloat ndfactor2=1;\n+\tif (vardata1->rel->rows)\n+\t\tndfactor1=vardata1->rel->tuples / vardata1->rel->rows;\n+\tif (vardata2->rel->rows)\n+\t\tndfactor2=vardata2->rel->tuples / vardata2->rel->rows;\n+\t// ndfactor1=ndfactor2=1;\n+\telog(DEBUG4, \"ndfactor %lf %lf\", ndfactor1,ndfactor2);\n \n \topfuncoid = get_opcode(operator);\n \n@@ -2375,7 +2383,19 @@ eqjoinsel_inner(Oid operator,\n \t\t\t\t}\n \t\t\t}\n \t\t}\n+\n+\t\t// you might think we should multiple by ndfactor1*ndfactor2,\n+\t\t// but that gives serious overestimates...\n+\t\t// matchprodfreq*= ndfactor1>ndfactor2?ndfactor1:ndfactor2;\n+\t\t// matchprodfreq*=ndfactor1;\n+\t\t// matchprodfreq*=ndfactor2;\n+\t\t// matchprodfreq*= ndfactor1<ndfactor2?ndfactor1:ndfactor2;\n+\t\tmatchprodfreq*= ndfactor1<ndfactor2?ndfactor1:ndfactor2;\n+\n+\t\telog(DEBUG4, \"nmatches %d matchprodfreq %lf\", nmatches, matchprodfreq);\n \t\tCLAMP_PROBABILITY(matchprodfreq);\n+\t\telog(DEBUG4, \"nmatches %d matchprodfreq %lf\", nmatches, matchprodfreq);\n+\n \t\t/* Sum up frequencies of matched and unmatched MCVs */\n \t\tmatchfreq1 = unmatchfreq1 = 0.0;\n \t\tfor (i = 0; i < nvalues1; i++)\n@@ -2385,8 +2405,14 @@ eqjoinsel_inner(Oid operator,\n \t\t\telse\n \t\t\t\tunmatchfreq1 += numbers1[i];\n \t\t}\n+\n+\t\tmatchfreq1*=ndfactor1;\n+\t\tunmatchfreq1*=ndfactor1;\n+\t\telog(DEBUG4, \"matchfreq1 %lf unmatchfreq1 %lf\", matchfreq1, unmatchfreq1);\n \t\tCLAMP_PROBABILITY(matchfreq1);\n \t\tCLAMP_PROBABILITY(unmatchfreq1);\n+\t\telog(DEBUG4, \"matchfreq1 %lf unmatchfreq1 %lf\", matchfreq1, unmatchfreq1);\n+\n \t\tmatchfreq2 = unmatchfreq2 = 0.0;\n \t\tfor (i = 0; i < nvalues2; i++)\n \t\t{\n@@ -2395,8 +2421,12 @@ eqjoinsel_inner(Oid operator,\n \t\t\telse\n \t\t\t\tunmatchfreq2 += numbers2[i];\n \t\t}\n+\t\tmatchfreq2*=ndfactor2;\n+\t\tunmatchfreq2*=ndfactor2;\n+\t\telog(DEBUG4, \"matchfreq2 %lf unmatchfreq2 %lf\", matchfreq2, unmatchfreq2);\n \t\tCLAMP_PROBABILITY(matchfreq2);\n \t\tCLAMP_PROBABILITY(unmatchfreq2);\n+\t\telog(DEBUG4, \"matchfreq2 %lf unmatchfreq2 %lf\", matchfreq2, unmatchfreq2);\n \t\tpfree(hasmatch1);\n \t\tpfree(hasmatch2);\n \n \tif (have_mcvs1)\n\nJustin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email 
protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 8 Jun 2017 11:05:38 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: join under-estimates with ineq conditions"
},
{
"msg_contents": "I never heard back but was hoping for some feedback/discussion about this 2nd\nproblem/patch.\n\njust a reminder - Thanks\n\nOn Thu, Jun 08, 2017 at 11:05:38AM -0500, Justin Pryzby wrote:\n> On Mon, Jun 05, 2017 at 05:02:32PM -0400, Tom Lane wrote:\n> > Justin Pryzby <[email protected]> writes:\n> > > diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\n> > > + if (nd1>vardata1->rel->rows) nd1=vardata1->rel->rows;\n> > > + if (nd2>vardata1->rel->rows) nd2=vardata2->rel->rows;\n> > \n> > I don't like this change too much.\n> \n> Thanks for your analysis ;)\n> \n> I have a 2nd patch which improves the 2nd case I mentioned..\n> \n> > I note for instance that this patch would do nothing at all for the toy\n> \n> >> There's still an 2nd issue which this doesn't address, having to do with joins\n> >> of tables with full/complete MCV lists, and selective queries on those tables,\n> >> as demonstrated by the artificial test:\n> >>\n> >> > postgres=# CREATE TABLE t(i INT);\n> >> > postgres=# TRUNCATE t;INSERT INTO t SELECT i FROM generate_series(1,99) i,generate_series(1,99);ANALYZE t;\n> >> > postgres=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct, array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist, (SELECT MAX(x) FROM unnest(most_common_vals::text::text[]) x) maxmcv, (histogram_bounds::text::text[])[array_length(histogram_bounds,1)] maxhist FROM pg_stats WHERE attname~'i' AND tablename='t' GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 1 DESC;\n> \n> I pointed out that there were two issues, both involving underestimates from\n> querying a fraction of a table using inequality condition. One due to join\n> estimate based on \"nd\" (and not substantially based on MCV), and one due to\n> frequencies associated with MCV list (and not substantially falling back to\n> estimate from \"nd\").\n> \n> I made another patch to address the 2nd issue, which affects our pre-aggregated\n> tables (which are partitioned by month, same as the raw tables). The\n> aggregated tables are the result of something like SELECT start_time::date, k1,\n> k2, ..., sum(a), avg(b) ... 
GROUP BY 1,2,3, so have many fewer rows, and nd for\n> start_time::date column would be at most 31, so MCV list would be expected to\n> be complete, same as the \"toy\" example I gave.\n> \n> Sometimes when we query the aggregated tables for a small number of days we get\n> underestimate leading to nested loops..\n> \n> Without patch:\n> Merge Join (cost=339.59..341.57 rows=99 width=4) (actual time=10.190..17.430 rows=9801 loops=1)\n> \n> With patch:\n> DEBUG: ndfactor 99.000000 99.000000\n> DEBUG: nmatches 99 matchprodfreq 1.000000\n> DEBUG: nmatches 99 matchprodfreq 1.000000\n> DEBUG: matchfreq1 99.000000 unmatchfreq1 0.000000\n> DEBUG: matchfreq1 1.000000 unmatchfreq1 0.000000\n> DEBUG: matchfreq2 99.000000 unmatchfreq2 0.000000\n> DEBUG: matchfreq2 1.000000 unmatchfreq2 0.000000\n> DEBUG: otherfreq1 0.000000 otherfreq2 0.000000\n> DEBUG: select(1) 1.000000\n> Hash Join (cost=167.75..444.77 rows=9801 width=4) (actual time=4.706..13.892 rows=9801 loops=1)\n> \n> \n> diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c\n> index 6a4f7b1..bc88423 100644\n> --- a/src/backend/utils/adt/selfuncs.c\n> +++ b/src/backend/utils/adt/selfuncs.c\n> @@ -2279,6 +2279,14 @@ eqjoinsel_inner(Oid operator,\n> \n> \tnd1 = get_variable_numdistinct(vardata1, &isdefault1);\n> \tnd2 = get_variable_numdistinct(vardata2, &isdefault2);\n> +\tfloat ndfactor1=1;\n> +\tfloat ndfactor2=1;\n> +\tif (vardata1->rel->rows)\n> +\t\tndfactor1=vardata1->rel->tuples / vardata1->rel->rows;\n> +\tif (vardata2->rel->rows)\n> +\t\tndfactor2=vardata2->rel->tuples / vardata2->rel->rows;\n> +\t// ndfactor1=ndfactor2=1;\n> +\telog(DEBUG4, \"ndfactor %lf %lf\", ndfactor1,ndfactor2);\n> \n> \topfuncoid = get_opcode(operator);\n> \n> @@ -2375,7 +2383,19 @@ eqjoinsel_inner(Oid operator,\n> \t\t\t\t}\n> \t\t\t}\n> \t\t}\n> +\n> +\t\t// you might think we should multiple by ndfactor1*ndfactor2,\n> +\t\t// but that gives serious overestimates...\n> +\t\t// matchprodfreq*= ndfactor1>ndfactor2?ndfactor1:ndfactor2;\n> +\t\t// matchprodfreq*=ndfactor1;\n> +\t\t// matchprodfreq*=ndfactor2;\n> +\t\t// matchprodfreq*= ndfactor1<ndfactor2?ndfactor1:ndfactor2;\n> +\t\tmatchprodfreq*= ndfactor1<ndfactor2?ndfactor1:ndfactor2;\n> +\n> +\t\telog(DEBUG4, \"nmatches %d matchprodfreq %lf\", nmatches, matchprodfreq);\n> \t\tCLAMP_PROBABILITY(matchprodfreq);\n> +\t\telog(DEBUG4, \"nmatches %d matchprodfreq %lf\", nmatches, matchprodfreq);\n> +\n> \t\t/* Sum up frequencies of matched and unmatched MCVs */\n> \t\tmatchfreq1 = unmatchfreq1 = 0.0;\n> \t\tfor (i = 0; i < nvalues1; i++)\n> @@ -2385,8 +2405,14 @@ eqjoinsel_inner(Oid operator,\n> \t\t\telse\n> \t\t\t\tunmatchfreq1 += numbers1[i];\n> \t\t}\n> +\n> +\t\tmatchfreq1*=ndfactor1;\n> +\t\tunmatchfreq1*=ndfactor1;\n> +\t\telog(DEBUG4, \"matchfreq1 %lf unmatchfreq1 %lf\", matchfreq1, unmatchfreq1);\n> \t\tCLAMP_PROBABILITY(matchfreq1);\n> \t\tCLAMP_PROBABILITY(unmatchfreq1);\n> +\t\telog(DEBUG4, \"matchfreq1 %lf unmatchfreq1 %lf\", matchfreq1, unmatchfreq1);\n> +\n> \t\tmatchfreq2 = unmatchfreq2 = 0.0;\n> \t\tfor (i = 0; i < nvalues2; i++)\n> \t\t{\n> @@ -2395,8 +2421,12 @@ eqjoinsel_inner(Oid operator,\n> \t\t\telse\n> \t\t\t\tunmatchfreq2 += numbers2[i];\n> \t\t}\n> +\t\tmatchfreq2*=ndfactor2;\n> +\t\tunmatchfreq2*=ndfactor2;\n> +\t\telog(DEBUG4, \"matchfreq2 %lf unmatchfreq2 %lf\", matchfreq2, unmatchfreq2);\n> \t\tCLAMP_PROBABILITY(matchfreq2);\n> \t\tCLAMP_PROBABILITY(unmatchfreq2);\n> +\t\telog(DEBUG4, \"matchfreq2 %lf unmatchfreq2 %lf\", matchfreq2, unmatchfreq2);\n> 
\t\tpfree(hasmatch1);\n> \t\tpfree(hasmatch2);\n> \n> \tif (have_mcvs1)\n> \n> Justin\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jun 2017 21:53:52 -0500",
"msg_from": "Justin Pryzby <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: join under-estimates with ineq conditions"
}
] |
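The patches discussed above are exploratory and were not committed. One stock-PostgreSQL (9.0+) mitigation sometimes tried for this class of misestimate is a per-column n_distinct override, which caps the value the join selectivity code works with; whether it actually helps depends on the data, and the table, column and value below are taken from the thread purely as an illustration.

-- Tell the planner to assume fewer distinct start_time values in this partition
-- (96 is only an example figure):
ALTER TABLE eric_enodeb_201703 ALTER COLUMN start_time SET (n_distinct = 96);
ANALYZE eric_enodeb_201703;

-- Undo the override later:
ALTER TABLE eric_enodeb_201703 ALTER COLUMN start_time RESET (n_distinct);
ANALYZE eric_enodeb_201703;

A negative setting (e.g. -0.05) expresses n_distinct as a fraction of the row count rather than an absolute number, which survives table growth better.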
[
{
"msg_contents": "Hello,\n\n\nrestoring a dump with Ubuntu 16.04 using postgres.conf default configuration (autovacuum on) takes several hours, with Ubuntu 14.04 only 20 minutes. Turning autovacuum off in Ubuntu 16.04 makes restoring much faster, with 14.04 autovacuum off has nearly no effect. The PG version does not matter. Same results with PG 9.4/9.5/9.6.\n\n\nTime to restore the dump\n\n- Ubuntu 14.04 autovaccuum on: Sa 15. Apr 09:07:41 CEST 2017 - Sa 15. Apr 09:24:09 CEST 2017 -> 17 min\n- Ubuntu 14.04 autovaccuum off: Sa 15. Apr 02:41:06 CEST 2017 - Sa 15. Apr 02:58:03 CEST 2017 -> 17 min\n- Ubuntu 16.04 autovaccuum on: Sat Apr 15 03:22:07 CEST 2017 - Sat Apr 15 07:45:46 CEST 2017 > 4 hours\n\n- Ubuntu 16.04 autovaccuum off: Sat Apr 15 09:19:20 CEST 2017 - Sat Apr 15 09:55:09 CEST 2017 -> 35 min\n\n\nDo you have any idea what could cause this effect (what has changed in Ubuntu 16.04?) and should autovaccum be turned on again after restoring the dump? Does autovaccum slow down other Postgres queries?\n\n\nThanks for any help, Hans\n\n\nPS: My last post \"Postgres 9.5 / 9.6: Restoring PG 9.4 dump is very very slow\" from yesterday (2017-04-14 23.30:10) was not correct. Sorry\n\n\n\n\n\n\n\n\n\nHello,\n\n\nrestoring a dump with Ubuntu 16.04 \nusing postgres.conf default configuration (autovacuum on)\ntakes several hours, with Ubuntu 14.04 only 20 minutes. Turning autovacuum off in Ubuntu 16.04 makes restoring much faster, with 14.04 autovacuum off has nearly no effect. The PG version does not matter. Same results with PG\n 9.4/9.5/9.6.\n\n\nTime to restore the dump\n\n\n- Ubuntu 14.04 autovaccuum on: Sa 15. Apr 09:07:41 CEST 2017 - Sa 15. Apr 09:24:09 CEST 2017 -> 17 min\n\n- Ubuntu 14.04 autovaccuum off: Sa 15. Apr 02:41:06 CEST 2017 - Sa 15. Apr 02:58:03 CEST 2017 -> 17 min\n- Ubuntu 16.04 autovaccuum on: Sat Apr 15 03:22:07 CEST 2017 - Sat Apr 15 07:45:46 CEST 2017 > 4 hours\n\n\n- Ubuntu 16.04 autovaccuum off: Sat Apr 15 09:19:20 CEST 2017 - Sat Apr 15 09:55:09 CEST 2017 -> 35 min\n\n\n\nDo you have any idea what could cause this effect (what has changed in Ubuntu 16.04?) and should autovaccum be turned on again after restoring the dump? Does autovaccum slow down other Postgres queries?\n\n\nThanks for any help, Hans\n\n\nPS: My last post \"Postgres 9.5 / 9.6: Restoring PG 9.4 dump is very very slow\" from yesterday (2017-04-14 23.30:10) was not correct. Sorry",
"msg_date": "Sat, 15 Apr 2017 08:11:16 +0000",
"msg_from": "Hans Braxmeier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Restoring Postgres Dump is very slow with Ubuntu 16.04"
},
{
"msg_contents": "Am 15.04.2017 um 10:11 schrieb Hans Braxmeier:\n>\n> Hello,\n>\n>\n> restoring a dump with Ubuntu 16.04 using postgres.conf default \n> configuration (autovacuum on) takes several hours, with Ubuntu 14.04 \n> only 20 minutes. Turning autovacuum off in Ubuntu 16.04 makes \n> restoring much faster, with 14.04 autovacuum off has nearly no effect. \n> The PG version does not matter. Same results with PG 9.4/9.5/9.6.\n>\n>\n>\n>\n> Do you have any idea what could cause this effect (what has changed in \n> Ubuntu 16.04?) and should autovaccum be turned on again after \n> restoring the dump? Does autovaccum slow down other Postgres queries?\n>\n\nAutovacuum should always be on. Is the destination database (the whole \ndb-cluster) new and empty?\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n\n\n\n\n\n\n\nAm 15.04.2017 um 10:11 schrieb Hans\n Braxmeier:\n\n\n\n\n\nHello,\n\n\nrestoring a dump with Ubuntu 16.04 \n using postgres.conf�default configuration (autovacuum on)\n takes several hours, with Ubuntu 14.04 only 20 minutes.\n Turning autovacuum off in Ubuntu 16.04 makes restoring much\n faster, with�14.04 autovacuum off has nearly�no effect. The PG\n version does not matter. Same results with�PG 9.4/9.5/9.6.\n\n\n\n\n\nDo you have any idea what could cause this effect (what has\n changed�in Ubuntu 16.04?)�and should autovaccum be turned on\n again�after restoring the dump? Does autovaccum slow down\n other Postgres queries?\n\n\n\n Autovacuum should always be on. Is the destination database (the\n whole db-cluster)� new and empty?\n\n Regards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Sat, 15 Apr 2017 15:59:56 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Restoring Postgres Dump is very slow with Ubuntu 16.04"
}
] |
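The thread above stops before a root cause is found. Purely as a hedged diagnostic sketch (nothing here comes from the thread itself), one can watch what autovacuum is doing while the restore runs and compare the effective settings between the 14.04 and 16.04 installations:

-- Which autovacuum workers are active during the restore, and on what?
SELECT pid, state, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';

-- Autovacuum-related settings and where they came from, for comparison
-- between the two machines (packaging defaults can differ):
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%'
   OR name IN ('maintenance_work_mem', 'max_wal_size');

Differences in the source column (configuration file vs. default) are often the quickest way to spot a distribution-level change.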
[
{
"msg_contents": "Hi Experts,\n\nHow can we create a materialized view in PostgreSQL which can be access by all the user account in all Database?\n\nRegards,\nDinesh Chandra\n\n\n\n\n\n\n\n\n\nHi Experts,\n \nHow can we create a materialized view in PostgreSQL which can be access by all the user account in all Database?\n \nRegards,\nDinesh Chandra",
"msg_date": "Mon, 17 Apr 2017 17:00:14 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Create a materialized view in PostgreSQL which can be access by all\n the user account"
},
{
"msg_contents": "On Mon, Apr 17, 2017 at 10:00 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Hi Experts,\n>\n>\n>\n> How can we create a materialized view in PostgreSQL which can be access by\n> all the user account in all Database?\n>\n\nDatabases are isolated - while connected to one you cannot directly see\nobjects in another. You need to use something like postgres_fdw to link\ncurrent database and the one containing the materialized view together.\n\nhttps://www.postgresql.org/docs/9.6/static/postgres-fdw.html\n\nAnd ensure the proper permissions are setup.\n\nhttps://www.postgresql.org/docs/9.6/static/sql-grant.html\n\nDavid J.\n\nOn Mon, Apr 17, 2017 at 10:00 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nHi Experts,\n \nHow can we create a materialized view in PostgreSQL which can be access by all the user account in all Database?Databases are isolated - while connected to one you cannot directly see objects in another. You need to use something like postgres_fdw to link current database and the one containing the materialized view together.https://www.postgresql.org/docs/9.6/static/postgres-fdw.htmlAnd ensure the proper permissions are setup.https://www.postgresql.org/docs/9.6/static/sql-grant.htmlDavid J.",
"msg_date": "Mon, 17 Apr 2017 10:12:56 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Create a materialized view in PostgreSQL which can be\n access by all the user account"
}
] |
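A minimal sketch of the postgres_fdw approach David describes above. The server, database, schema, view and role names are placeholders, and the password-based user mapping is only one of several authentication options.

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER reporting_srv
  FOREIGN DATA WRAPPER postgres_fdw
  OPTIONS (host 'localhost', dbname 'reporting', port '5432');

CREATE USER MAPPING FOR PUBLIC SERVER reporting_srv
  OPTIONS (user 'report_reader', password 'secret');

-- A foreign table can point straight at the remote materialized view:
CREATE FOREIGN TABLE public.mv_sales_summary (
  region text,
  total  numeric
) SERVER reporting_srv
  OPTIONS (schema_name 'public', table_name 'mv_sales_summary');

GRANT SELECT ON public.mv_sales_summary TO PUBLIC;

Because the foreign table is just a relation on the remote side, it works for a materialized view as easily as for a table; GRANTs on the foreign table then control who in the local database may read it.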
[
{
"msg_contents": "I come from an Oracle background and am porting an application to postgres. App has a table that will contain 100 million rows and has to be loaded by a process that reads messages off a SQS queue and makes web service calls to insert records one row at a time in a postgres RDS instance. I know slow by slow is not the ideal approach but I was wondering if postgres had partitioning or other ways to tune concurrent insert statements. Process will run 50 - 100 concurrent threads.\n\n\n\n\n\n\n\n\nI come from an Oracle background and am porting an application to postgres. App has a table that will contain 100 million rows and has to be loaded by a process that reads messages off a SQS queue and makes web service calls to insert records one row at\n a time in a postgres RDS instance. I know slow by slow is not the ideal approach but I was wondering if postgres had partitioning or other ways to tune concurrent insert statements. Process will run 50 - 100 concurrent threads.",
"msg_date": "Tue, 18 Apr 2017 02:55:41 +0000",
"msg_from": "ROBERT PRICE <[email protected]>",
"msg_from_op": true,
"msg_subject": "Insert Concurrency"
},
{
"msg_contents": "On 18 April 2017 at 14:55, ROBERT PRICE <[email protected]> wrote:\n> I come from an Oracle background and am porting an application to postgres.\n> App has a table that will contain 100 million rows and has to be loaded by a\n> process that reads messages off a SQS queue and makes web service calls to\n> insert records one row at a time in a postgres RDS instance. I know slow by\n> slow is not the ideal approach but I was wondering if postgres had\n> partitioning or other ways to tune concurrent insert statements. Process\n> will run 50 - 100 concurrent threads.\n\nHave you tested performance and noticed that it is insufficient for\nyour needs? or do you just assume PostgreSQL suffers from the same\nissue as Oracle in regards to INSERT contention on a single table?\n\nYou may like to look at pgbench [1] to test the performance if you've\nnot done so already.\n\n[1] https://www.postgresql.org/docs/9.6/static/pgbench.html\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Apr 2017 17:41:04 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Concurrency"
},
{
"msg_contents": "Yes, postgres has partitions:\n\nhttps://www.postgresql.org/docs/9.6/static/ddl-partitioning.html <https://www.postgresql.org/docs/9.6/static/ddl-partitioning.html>\n\nBut this is not going to help much in the scenario you have. \n\nPostgres can ingest data very very fast, 100M records in seconds - minutes , faster than oracle can serve it in many scenarios (all I have tested).\n\nSpecially if you use COPY command \n\nhttps://www.postgresql.org/docs/9.6/static/sql-copy.html <https://www.postgresql.org/docs/9.6/static/sql-copy.html>\n\nand even faster if you use the unlogged feature \n\nhttps://www.postgresql.org/docs/9.6/static/sql-altertable.html <https://www.postgresql.org/docs/9.6/static/sql-altertable.html>\n\nYou can tune postgres to make it even faster, but it’s not normally necessary, with the two advices I gave you firstly, is more than enough, If I don’t remember it wrong you can move 100M records in ~ 2 minutes.\n\nhttps://www.postgresql.org/docs/current/static/populate.html <https://www.postgresql.org/docs/current/static/populate.html>\n\n\nBut if you are going to move a record at a time you are going to be limited by the fastest transaction rate you can achieve, which is going to be a few hundred per second, and limited at the end by the disk hardware you have, . Out of the box and on commodity hardware it can take you up to then days to move 100M records.\n\nSo, my recomendation is to find a way to batch record insertions using copy, the benefits you can achieve tunning postgres are going to be marginal compared with COPY.\n\nRegards\n\nDaniel Blanch.\nww.translatetopostgres.com\n\n\n\n\n\n\n\n> El 18 abr 2017, a las 4:55, ROBERT PRICE <[email protected]> escribió:\n> \n> I come from an Oracle background and am porting an application to postgres. App has a table that will contain 100 million rows and has to be loaded by a process that reads messages off a SQS queue and makes web service calls to insert records one row at a time in a postgres RDS instance. I know slow by slow is not the ideal approach but I was wondering if postgres had partitioning or other ways to tune concurrent insert statements. Process will run 50 - 100 concurrent threads.\n\n\nYes, postgres has partitions:https://www.postgresql.org/docs/9.6/static/ddl-partitioning.htmlBut this is not going to help much in the scenario you have. Postgres can ingest data very very fast, 100M records in seconds - minutes , faster than oracle can serve it in many scenarios (all I have tested).Specially if you use COPY command https://www.postgresql.org/docs/9.6/static/sql-copy.htmland even faster if you use the unlogged feature https://www.postgresql.org/docs/9.6/static/sql-altertable.htmlYou can tune postgres to make it even faster, but it’s not normally necessary, with the two advices I gave you firstly, is more than enough, If I don’t remember it wrong you can move 100M records in ~ 2 minutes.https://www.postgresql.org/docs/current/static/populate.htmlBut if you are going to move a record at a time you are going to be limited by the fastest transaction rate you can achieve, which is going to be a few hundred per second, and limited at the end by the disk hardware you have, . 
Out of the box and on commodity hardware it can take you up to then days to move 100M records.So, my recomendation is to find a way to batch record insertions using copy, the benefits you can achieve tunning postgres are going to be marginal compared with COPY.RegardsDaniel Blanch.ww.translatetopostgres.comEl 18 abr 2017, a las 4:55, ROBERT PRICE <[email protected]> escribió:I come from an Oracle background and am porting an application to postgres. App has a table that will contain 100 million rows and has to be loaded by a process that reads messages off a SQS queue and makes web service calls to insert records one row at a time in a postgres RDS instance. I know slow by slow is not the ideal approach but I was wondering if postgres had partitioning or other ways to tune concurrent insert statements. Process will run 50 - 100 concurrent threads.",
"msg_date": "Tue, 18 Apr 2017 07:45:31 +0200",
"msg_from": "Daniel Blanch Bataller <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Concurrency"
},
{
"msg_contents": "On Tue, Apr 18, 2017 at 2:45 AM, Daniel Blanch Bataller\n<[email protected]> wrote:\n>\n> But if you are going to move a record at a time you are going to be limited\n> by the fastest transaction rate you can achieve, which is going to be a few\n> hundred per second, and limited at the end by the disk hardware you have, .\n> Out of the box and on commodity hardware it can take you up to then days to\n> move 100M records.\n\nRDS usually is not commodity hardware, most RDS instances will have\nsome form of SSD storage, so performance could be much higher than\nwhat you'd get on your laptop.\n\nI'd have to second David's advice: test with pgbench first. It can\nquite accurately simulate your use case.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Apr 2017 12:29:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Concurrency"
},
{
"msg_contents": "On Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE <[email protected]> wrote:\n> I come from an Oracle background and am porting an application to postgres.\n> App has a table that will contain 100 million rows and has to be loaded by a\n> process that reads messages off a SQS queue and makes web service calls to\n> insert records one row at a time in a postgres RDS instance. I know slow by\n> slow is not the ideal approach but I was wondering if postgres had\n> partitioning or other ways to tune concurrent insert statements. Process\n> will run 50 - 100 concurrent threads.\n\nIt's not uncommon to look for an Oracle solution while working with\nanother rdbms. Often what works in one engine doesn't work the same or\nas well in another.\n\nIs it possible for you to roll up some of these inserts into a single\ntransaction in some way? Even inserting ten rows at a time instead of\none at a time can make a big difference in your insert rate. Being\nable to roll up 100 or more together even more so.\n\nAnother possibility is to insert them into a smaller table, then have\na process every so often come along, and insert all the rows there and\nthen delete them or truncate the table (for truncate you'll need to\nlock the table to not lose rows).\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Apr 2017 09:41:17 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Concurrency"
},
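Scott's staging-table idea, sketched with invented table names. The explicit lock is there for the reason he gives: nothing may insert into the staging table between the INSERT ... SELECT and the TRUNCATE, or those rows would be lost.

CREATE TABLE IF NOT EXISTS big_events (id bigint, payload text);
CREATE TABLE IF NOT EXISTS staging_events (id bigint, payload text);

-- Periodic flush job:
BEGIN;
LOCK TABLE staging_events IN EXCLUSIVE MODE;  -- blocks concurrent INSERTs, still allows reads
INSERT INTO big_events SELECT * FROM staging_events;
TRUNCATE staging_events;
COMMIT;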
{
"msg_contents": "\n>> To understand recursion, one must first understand recursion.\n\nThis makes no sense unless you also provide the base case.\n\nDavid\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Apr 2017 16:49:53 +0100",
"msg_from": "David McKelvie <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Insert Concurrency"
},
{
"msg_contents": "Thanks everyone, I decided to have the SQS process changed to create csv files in a S3 bucket. Then we have a process that will use the copy command to load the data. Process is loading 500,000 records in around 4 minutes which should be good enough for now. Going to look at pg_citus to get up to speed on postgres partitioning for a future need.\n\n\n________________________________\nFrom: Scott Marlowe <[email protected]>\nSent: Tuesday, April 18, 2017 3:41 PM\nTo: ROBERT PRICE\nCc: [email protected]\nSubject: Re: [PERFORM] Insert Concurrency\n\nOn Mon, Apr 17, 2017 at 8:55 PM, ROBERT PRICE <[email protected]> wrote:\n> I come from an Oracle background and am porting an application to postgres.\n> App has a table that will contain 100 million rows and has to be loaded by a\n> process that reads messages off a SQS queue and makes web service calls to\n> insert records one row at a time in a postgres RDS instance. I know slow by\n> slow is not the ideal approach but I was wondering if postgres had\n> partitioning or other ways to tune concurrent insert statements. Process\n> will run 50 - 100 concurrent threads.\n\nIt's not uncommon to look for an Oracle solution while working with\nanother rdbms. Often what works in one engine doesn't work the same or\nas well in another.\n\nIs it possible for you to roll up some of these inserts into a single\ntransaction in some way? Even inserting ten rows at a time instead of\none at a time can make a big difference in your insert rate. Being\nable to roll up 100 or more together even more so.\n\nAnother possibility is to insert them into a smaller table, then have\na process every so often come along, and insert all the rows there and\nthen delete them or truncate the table (for truncate you'll need to\nlock the table to not lose rows).\n\n--\nTo understand recursion, one must first understand recursion.",
"msg_date": "Tue, 18 Apr 2017 16:29:13 +0000",
"msg_from": "ROBERT PRICE <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Insert Concurrency"
}
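Since Robert mentions looking into partitioning next, here is a minimal sketch of the inheritance-based partitioning available before PostgreSQL 10 (declarative partitioning only arrives in 10); table and column names are invented.

CREATE TABLE events (id bigint, created_at date, payload text);
CREATE TABLE events_2017_04 (
    CHECK (created_at >= DATE '2017-04-01' AND created_at < DATE '2017-05-01')
) INHERITS (events);

-- Bulk loads can target the child table directly and skip routing triggers:
-- \copy events_2017_04 FROM 'events_2017_04.csv' WITH (FORMAT csv)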
] |
[
{
"msg_contents": "Hi\n\n\nRunning 9.5.2\n\nI have the following update and run into a bit of a trouble . I realize the tables involved have quite some data but here goes\n\n\nUPDATE\n tf_transaction_item_person TRANS\nSET\n general_ledger_code = PURCH.general_ledger_code,\n general_ledger_code_desc = PURCH.general_ledger_code_desc,\n update_datetime = now()::timestamp(0)\nFROM\n tf_purchases_person PURCH\nWHERE\n PURCH.general_ledger_code != '' AND\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.general_ledger_code != PURCH.general_ledger_code\n;\n\n\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------\n Update on tf_transaction_item_person trans (cost=1432701.45..2209776.18 rows=3405170 width=231)\n -> Hash Join (cost=1432701.45..2209776.18 rows=3405170 width=231)\n Hash Cond: ((trans.purchased_log_id)::text = (purch.purchased_log_id)::text)\n Join Filter: ((trans.general_ledger_code)::text <> (purch.general_ledger_code)::text)\n -> Seq Scan on tf_transaction_item_person trans (cost=0.00..160488.20 rows=3405920 width=257)\n -> Hash (cost=970842.28..970842.28 rows=20743134 width=56)\n -> Seq Scan on tf_purchases_person purch (cost=0.00..970842.28 rows=20743134 width=56)\n Filter: ((general_ledger_code)::text <> ''::text)\n\n\n\n\n\n Table \"tf_transaction_item_person\"\n Column | Type | Modifiers \n---------------------------------+-----------------------------+----------------------------------------\n person_transaction_item_id | character varying(100) | not null\n person_transaction_id | character varying(100) | not null\n transaction_id | character varying(100) | \n show_id | character varying(100) | not null\n client_id | integer | not null\n company_id | integer | not null\n person_id | integer | not null\n badge_id | character varying(100) | not null\n transaction_type_code | character varying(100) | not null\n payment_type_code | character varying(100) | not null\n purchased_log_id | character varying(100) | not null\n item_id | character varying(100) | not null\n transaction_amount | double precision | not null\n add_by_user_id | character varying(100) | not null\n add_date | timestamp without time zone | not null\n transaction_items_person_source | character varying(1) | not null\n update_datetime | timestamp without time zone | \n is_deleted | character varying(5) | \n reg_is_deleted | character varying(5) | not null default ''::character varying\n birst_is_deleted | character varying(5) | not null default ''::character varying\n general_ledger_code | character varying(20) | \n general_ledger_code_desc | character varying(50) | \nIndexes:\n \"tf_transaction_item_person_pkey\" PRIMARY KEY, btree (person_transaction_item_id)\n \"tf_tip_idx\" btree (client_id, update_datetime)\n \"tf_tip_isdel_idx\" btree (show_id, person_transaction_item_id)\n\n\n Table \"tf_purchases_person\"\n Column | Type | Modifiers \n-----------------------------+-----------------------------+----------------------------------------\n purchased_log_id | character varying(100) | not null\n show_id | character varying(100) | \n client_id | integer | \n company_id | integer | \n person_id | integer | \n badge_id | character varying(100) | \n item_id | character varying(100) | \n general_ledger_code | character varying(100) | \n purchase_status | character varying(100) | \n purchase_quantity | integer | \n purchase_rate | double precision | \n purchase_total | double precision | \n tax_rate | double precision | \n tax_total | double 
precision | \n final_total | double precision | \n add_by_user_id | character varying(100) | \n add_date | timestamp without time zone | \n purchase_item_person_source | character varying(1) | \n is_deleted | character varying(5) | \n update_datetime | timestamp without time zone | \n reg_is_deleted | character varying(5) | not null default ''::character varying\n birst_is_deleted | character varying(5) | not null default ''::character varying\n general_ledger_code_desc | character varying(50) | \nIndexes:\n \"tf_purchases_person_pkey\" PRIMARY KEY, btree (purchased_log_id)\n \"foo1\" btree (general_ledger_code, show_id, purchased_log_id)\n \"tf_pp_genl_idx\" btree (show_id, general_ledger_code, general_ledger_code_desc)\n \"tf_pp_idx\" btree (client_id, update_datetime)\n \"tf_pp_isdel_idx\" btree (show_id, purchased_log_id)\n\n\n\nI looked at the counts to see which conditions are getting me the least amount of records relative to the tables’ counts and attempt some indexing\n\n\nbirstdb=# select count(*) from tf_transaction_item_person;\n count \n---------\n 3405920\n(1 row)\nbirstdb=# select count(*) from tf_purchases_person;\n count \n----------\n 20747702\n(1 row)\nselect count(TRANS.purchased_log_id)\nfrom\n\n tf_transaction_item_person TRANS,\n tf_purchases_person PURCH\nWHERE\n PURCH.general_ledger_code != '' AND\n TRANS.show_id = PURCH.show_id AND\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.general_ledger_code != PURCH.general_ledger_code\n;\n count \n-------\n 0\n\nselect count(TRANS.purchased_log_id)\nfrom\n\n tf_transaction_item_person TRANS,\n tf_purchases_person PURCH\nWHERE\n TRANS.show_id = PURCH.show_id AND\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.general_ledger_code != PURCH.general_ledger_code\n;\n count \n-------\n 0\n\n\n\n\ncreate index foo1 on tf_purchases_person (general_ledger_code, show_id, purchased_log_id);\ncreate index foo2 on tf_transaction_item_person (general_ledger_code, show_id, purchased_log_id);\n\n\n\nNo real improvement\n\nI went even this route\n\n\nUPDATE\n tf_transaction_item_person TRANS\nSET\n general_ledger_code = PURCH.general_ledger_code,\n general_ledger_code_desc = PURCH.general_ledger_code_desc,\n update_datetime = now()::timestamp(0)\nFROM\n(\nselect a.show_id ,a.general_ledger_code, a.purchased_log_id, a.general_ledger_code_desc\nfrom \ntf_transaction_item_person a left join tf_purchases_person b\non\n b.general_ledger_code != '' AND\n b.show_id=a.show_id AND\n b.purchased_log_id = a.purchased_log_id AND\n b.general_ledger_code = a.general_ledger_code\nwhere b.general_ledger_code is null\n) PURCH\nWHERE\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.show_id = PURCH.show_id\n;\n\n QUERY PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-----------------\n Update on tf_transaction_item_person trans (cost=19194432.16..19467044.63 rows=34859 width=387)\n -> Nested Loop Anti Join (cost=19194432.16..19467044.63 rows=34859 width=387)\n -> Merge Join (cost=19194431.59..19254383.78 rows=34859 width=415)\n Merge Cond: (((trans.show_id)::text = (a.show_id)::text) AND ((trans.purchased_log_id)::text = (a.purchased_log_id)::text))\n -> Sort (cost=9603638.01..9612152.81 rows=3405920 width=199)\n Sort Key: trans.show_id, trans.purchased_log_id\n -> Index Scan using tf_tip_isdel_idx on tf_transaction_item_person trans (cost=0.56..8908143.78 rows=3405920 
width=199)\n -> Materialize (cost=9590793.59..9607823.19 rows=3405920 width=216)\n -> Sort (cost=9590793.59..9599308.39 rows=3405920 width=216)\n Sort Key: a.show_id, a.purchased_log_id\n -> Index Scan using foo2 on tf_transaction_item_person a (cost=0.56..8872017.35 rows=3405920 width=216)\n -> Index Scan using foo1 on tf_purchases_person b (cost=0.56..6.09 rows=1 width=46)\n Index Cond: (((general_ledger_code)::text = (a.general_ledger_code)::text) AND ((show_id)::text = (a.show_id)::text) AND ((purchased_log_id)::text = (a.purchased\n_log_id)::text))\n Filter: ((general_ledger_code)::text <> ''::text)\n(14 rows)\n\n\nexplain analyze took well in excess of 10 minutes\n\nThe idea is an update needs to find the records to update to begin with.\nThe inner select with the above mentioned indexes runs in \n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Merge Anti Join (cost=1.12..15466467.80 rows=3405920 width=176) (actual time=245.940..63987.645 rows=3405920 loops=1)\n Merge Cond: ((trans.general_ledger_code)::text = (purch.general_ledger_code)::text)\n Join Filter: ((trans.purchased_log_id)::text = (purch.purchased_log_id)::text)\n -> Index Scan using foo2 on tf_transaction_item_person trans (cost=0.56..8162817.35 rows=3405920 width=200) (actual time=245.928..59480.444 rows=3405920 loops=1)\n -> Index Only Scan using foo1 on tf_purchases_person purch (cost=0.56..7243277.80 rows=20743134 width=30) (never executed)\n Filter: ((general_ledger_code)::text <> ''::text)\n Heap Fetches: 0\n Planning time: 216.738 ms\n Execution time: 64901.139 ms\n\n\nas opposed to a good 5 minutes\n\nThe update itself\n\n\n\n\nI am at a bit of a loss.\n\nAny ideas / pointers as to what I could do to make things better ?\n\n\n\nThanks in advance\n\n\n- Armand\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Apr 2017 16:46:09 -0500",
"msg_from": "\"Armand Pirvu (home)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "update from performance question"
},
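One detail worth noting about the WHERE clause above: != never matches when TRANS.general_ledger_code is NULL, so rows with a NULL code are silently skipped, which is the state the later follow-up probes with IS NULL. A NULL-safe variant, offered only as a sketch against the tables shown in the post, uses IS DISTINCT FROM:

UPDATE tf_transaction_item_person TRANS
SET    general_ledger_code      = PURCH.general_ledger_code,
       general_ledger_code_desc = PURCH.general_ledger_code_desc,
       update_datetime          = now()::timestamp(0)
FROM   tf_purchases_person PURCH
WHERE  PURCH.general_ledger_code <> ''
  AND  TRANS.purchased_log_id = PURCH.purchased_log_id
  AND  TRANS.general_ledger_code IS DISTINCT FROM PURCH.general_ledger_code;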
{
"msg_contents": "Armand Pirvu wrote:\r\n> Running 9.5.2\r\n> \r\n> I have the following update and run into a bit of a trouble . I realize the tables\r\n> involved have quite some data but here goes\r\n> \r\n> \r\n> UPDATE\r\n> tf_transaction_item_person TRANS\r\n> SET\r\n> general_ledger_code = PURCH.general_ledger_code,\r\n> general_ledger_code_desc = PURCH.general_ledger_code_desc,\r\n> update_datetime = now()::timestamp(0)\r\n> FROM\r\n> tf_purchases_person PURCH\r\n> WHERE\r\n> PURCH.general_ledger_code != '' AND\r\n> TRANS.purchased_log_id = PURCH.purchased_log_id AND\r\n> TRANS.general_ledger_code != PURCH.general_ledger_code\r\n> ;\r\n[...]\r\n> Table \"tf_transaction_item_person\"\r\n[...]\r\n> Indexes:\r\n> \"tf_transaction_item_person_pkey\" PRIMARY KEY, btree (person_transaction_item_id)\r\n> \"tf_tip_idx\" btree (client_id, update_datetime)\r\n> \"tf_tip_isdel_idx\" btree (show_id, person_transaction_item_id)\r\n\r\nYou don't show EXPLAIN (ANALYZE, BUFFERS) output for the problematic query,\r\nso it is difficult to say where the time is spent.\r\n\r\nBut since you say that the same query without the UPDATE also takes more than\r\na minute, the duration for the UPDATE is not outrageous.\r\nIt may well be that much of the time is spent updating the index\r\nentries for the 3.5 million affected rows.\r\n\r\nI don't know if dropping indexes for the duration of the query and recreating\r\nthem afterwards would be a net win, but you should consider it.\r\n\r\nIt may be that the only ways to improve performance would be general\r\nthings like faster I/O, higher max_wal_size setting, and, most of all,\r\nenough RAM in the machine to contain the whole database.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Apr 2017 08:06:00 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: update from performance question"
},
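A rough sketch of how Laurenz's suggestions could look in practice. The max_wal_size value is a placeholder, only the tf_tip_idx index (whose definition appears in the original post) is shown, and whether dropping it is a net win has to be measured.

ALTER SYSTEM SET max_wal_size = '8GB';   -- fewer forced checkpoints while the bulk UPDATE runs
SELECT pg_reload_conf();

DROP INDEX IF EXISTS tf_tip_idx;          -- skip per-row index maintenance during the UPDATE
-- ... run the UPDATE ...
CREATE INDEX tf_tip_idx ON tf_transaction_item_person (client_id, update_datetime);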
{
"msg_contents": "Hi Albe\n\n\nThank you for your reply\n\nThe query changed a bit\n\n\nexplain (analyze, buffers)\nUPDATE\n csischema.tf_transaction_item_person TRANS\nSET \n general_ledger_code = PURCH.general_ledger_code,\n general_ledger_code_desc = PURCH.general_ledger_code_desc,\n update_datetime = now()::timestamp(0)\nFROM\n csischema.tf_purchases_person PURCH\nWHERE\n PURCH.general_ledger_code IS NOT NULL AND\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.general_ledger_code IS NULL\n;\n\n ^\nselect count(*) from csischema.tf_transaction_item_person where general_ledger_code is null;\n count \n---------\n 1393515\n\nselect count(*) from csischema.tf_transaction_item_person ;\n count \n---------\n 3408380\n\nselect count(*) from csischema.tf_purchases_person;\n count \n----------\n 20760731\n\nselect count(*) from csischema.tf_purchases_person where general_ledger_code IS NOT NULL;\n count \n---------\n 6909204\n\n\nBut the kicker is this\n\nA select count to see how many records will be used for update gets me zero\n\n\nselect count(trans.purchased_log_id) from \n csischema.tf_transaction_item_person TRANS,\n csischema.tf_purchases_person PURCH\nWHERE\n PURCH.general_ledger_code IS NOT NULL AND\n TRANS.purchased_log_id = PURCH.purchased_log_id AND\n TRANS.general_ledger_code IS NULL\n;\n count \n-------\n 0\n(1 row)\n\nConsidering this , I wonder if an index on csischema.tf_purchases_person (purchased_log_id, general_ledger_code) and one on tf_transaction_item_person (purchased_log_id, general_ledger_code) would not help ?\n\nThis is what bugs me. \n\nI got the explain out\n\n\nwithout indexes\n\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on tf_transaction_item_person trans (cost=1164684.43..1572235.51 rows=507748 width=227) (actual time=230320.060..230320.060 rows=0 loops=1)\n Buffers: shared hit=120188 read=876478, temp read=93661 written=93631\n -> Hash Join (cost=1164684.43..1572235.51 rows=507748 width=227) (actual time=230320.054..230320.054 rows=0 loops=1)\n Hash Cond: ((trans.purchased_log_id)::text = (purch.purchased_log_id)::text)\n Buffers: shared hit=120188 read=876478, temp read=93661 written=93631\n -> Seq Scan on tf_transaction_item_person trans (cost=0.00..228945.93 rows=1542683 width=199) (actual time=13.312..52046.689 rows=1393515 loops=1)\n Filter: (general_ledger_code IS NULL)\n Rows Removed by Filter: 2014865\n Buffers: shared read=191731\n -> Hash (cost=1012542.32..1012542.32 rows=6833049 width=52) (actual time=152339.000..152339.000 rows=6909204 loops=1)\n Buckets: 524288 Batches: 16 Memory Usage: 39882kB\n Buffers: shared hit=120188 read=684747, temp written=57588\n -> Seq Scan on tf_purchases_person purch (cost=0.00..1012542.32 rows=6833049 width=52) (actual time=8.252..140992.716 rows=6909204 loops=1)\n Filter: (general_ledger_code IS NOT NULL)\n Rows Removed by Filter: 13851527\n Buffers: shared hit=120188 read=684747\n Planning time: 0.867 ms\n Execution time: 230328.223 ms\n(18 rows)\n\n\n\nwith indexes\n\n\n\n QUERY PLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------------\n Update on tf_transaction_item_person trans (cost=1161742.22..1567806.87 rows=497927 width=228) (actual time=155171.388..155171.388 rows=0 loops=1)\n Buffers: shared hit=88095 read=908571, temp read=93661 written=93631\n -> 
Hash Join (cost=1161742.22..1567806.87 rows=497927 width=228) (actual time=155171.358..155171.358 rows=0 loops=1)\n Hash Cond: ((trans.purchased_log_id)::text = (purch.purchased_log_id)::text)\n Buffers: shared hit=88095 read=908571, temp read=93661 written=93631\n -> Seq Scan on tf_transaction_item_person trans (cost=0.00..228945.93 rows=1542683 width=199) (actual time=16.801..31016.221 rows=1393515 loops=1)\n Filter: (general_ledger_code IS NULL)\n Rows Removed by Filter: 2014865\n Buffers: shared read=191731\n -> Hash (cost=1012542.32..1012542.32 rows=6700872 width=53) (actual time=105101.946..105101.946 rows=6909204 loops=1)\n Buckets: 524288 Batches: 16 Memory Usage: 39882kB\n Buffers: shared hit=88095 read=716840, temp written=57588\n -> Seq Scan on tf_purchases_person purch (cost=0.00..1012542.32 rows=6700872 width=53) (actual time=13.823..95970.776 rows=6909204 loops=1)\n Filter: (general_ledger_code IS NOT NULL)\n Rows Removed by Filter: 13851527\n Buffers: shared hit=88095 read=716840\n Planning time: 90.409 ms\n Execution time: 155179.181 ms\n(18 rows)\n\nThanks\nArmand\n\n\n\nOn Apr 19, 2017, at 3:06 AM, Albe Laurenz <[email protected]> wrote:\n\n> Armand Pirvu wrote:\n>> Running 9.5.2\n>> \n>> I have the following update and run into a bit of a trouble . I realize the tables\n>> involved have quite some data but here goes\n>> \n>> \n>> UPDATE\n>> tf_transaction_item_person TRANS\n>> SET\n>> general_ledger_code = PURCH.general_ledger_code,\n>> general_ledger_code_desc = PURCH.general_ledger_code_desc,\n>> update_datetime = now()::timestamp(0)\n>> FROM\n>> tf_purchases_person PURCH\n>> WHERE\n>> PURCH.general_ledger_code != '' AND\n>> TRANS.purchased_log_id = PURCH.purchased_log_id AND\n>> TRANS.general_ledger_code != PURCH.general_ledger_code\n>> ;\n> [...]\n>> Table \"tf_transaction_item_person\"\n> [...]\n>> Indexes:\n>> \"tf_transaction_item_person_pkey\" PRIMARY KEY, btree (person_transaction_item_id)\n>> \"tf_tip_idx\" btree (client_id, update_datetime)\n>> \"tf_tip_isdel_idx\" btree (show_id, person_transaction_item_id)\n> \n> You don't show EXPLAIN (ANALYZE, BUFFERS) output for the problematic query,\n> so it is difficult to say where the time is spent.\n> \n> But since you say that the same query without the UPDATE also takes more than\n> a minute, the duration for the UPDATE is not outrageous.\n> It may well be that much of the time is spent updating the index\n> entries for the 3.5 million affected rows.\n> \n> I don't know if dropping indexes for the duration of the query and recreating\n> them afterwards would be a net win, but you should consider it.\n> \n> It may be that the only ways to improve performance would be general\n> things like faster I/O, higher max_wal_size setting, and, most of all,\n> enough RAM in the machine to contain the whole database.\n> \n> Yours,\n> Laurenz Albe\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 19 Apr 2017 18:11:06 -0500",
"msg_from": "\"Armand Pirvu (home)\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: update from performance question"
}
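The composite indexes Armand wonders about could be tried as partial indexes, so each one only covers the rows its side of the join filters on. This is a sketch with invented index names; CONCURRENTLY avoids blocking writers while they build.

CREATE INDEX CONCURRENTLY tf_pp_plog_glc_idx
    ON csischema.tf_purchases_person (purchased_log_id, general_ledger_code)
    WHERE general_ledger_code IS NOT NULL;

CREATE INDEX CONCURRENTLY tf_tip_plog_null_idx
    ON csischema.tf_transaction_item_person (purchased_log_id)
    WHERE general_ledger_code IS NULL;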
] |
[
{
"msg_contents": "Hi!, i've currently a big problem using ORBDER BY / LIMIT in a query with\nno result set.\nIf i add the order by/limit clause it runs really really slow.\n\n\n\nQUERY 1 FAST:\n--------------------------------\n\nSELECT fase.id\nFROM tipofase\nJOIN fase\nON (fase.tipofase = tipofase.id)\nWHERE tipofase.agendafrontoffice = true\n\nEXPLAIN ANALYZE:\n\nNested Loop (cost=0.43..790.19 rows=14462 width=4) (actual\ntime=0.079..0.079 rows=0 loops=1)\n\n -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual\ntime=0.077..0.077 rows=0 loops=1)\n Filter: agendafrontoffice\n Rows Removed by Filter: 102\n -> Index Only Scan using fase_test_prova_4 on fase\n(cost=0.43..595.59 rows=19158 width=8) (never executed)\n Index Cond: (tipofase = tipofase.id)\n Heap Fetches: 0\nPlanning time: 0.669 ms\nExecution time: 0.141 ms\n\n---\n\nIt's perfect because it starts from tipofase, where there are no\nagendafrontoffice = true\n\nfase_test_prova_4 is a btree index ON (fase.tipofase, fase.id)\nfase.id is PRIMARY key on fase,\ntipofase.id is PRIMARY key on tipofase,\nfase.tipofase is FK on tipofase.id\nand tipofase.agendafrontoffice is a boolean.\n\nI've also created a btree index on tipofase.agendafrontoffice.\n\n**fase** is a large table with 1.475.146 records. There are no rows in the\ntable matching tipofase.agendafrontoffice = true, so the result set is\nempty(QUERY 1)\n\n\n\n\nQUERY 2 SLOW(WITH limit and order by):\n--------------------------------\n\n\nSELECT fase.id\nFROM tipofase\nJOIN fase\nON (fase.tipofase = tipofase.id)\nWHERE tipofase.agendafrontoffice = true\nORDER BY fase.id DESC limit 10 offset 0\n\nLimit (cost=0.43..149.66 rows=10 width=4) (actual\ntime=173853.131..173853.131 rows=0 loops=1)\n -> Nested Loop (cost=0.43..215814.25 rows=14462 width=4) (actual\ntime=173853.130..173853.130 rows=0 loops=1)\n Join Filter: (fase.tipofase = tipofase.id)\n -> Index Scan Backward using test_prova_2 on fase\n(cost=0.43..193684.04 rows=1475146 width=8) (actual\ntime=1.336..173128.418 rows=1475146 loops=1)\n -> Materialize (cost=0.00..3.02 rows=1 width=4) (actual\ntime=0.000..0.000 rows=0 loops=1475146)\n -> Seq Scan on tipofase (cost=0.00..3.02 rows=1\nwidth=4) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: agendafrontoffice\n Rows Removed by Filter: 102\nPlanning time: 0.685 ms\nExecution time: 173853.221 ms\n\n\nReally really slow..... looks like the planner is not doing a good job.\nPostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit\n\n\nI also run VACUUM AND VACUUM ANALYZE on both table\nI tried to play with the\n\"alter table tipofase alter column agendafrontoffice set statistics 2\"\nbut nothing.\n\nThanks in advance Marco\n\n\n\n-- \n-------------------------------------------------------------------------------------------------------------------------------------------\nIng. 
Marco Renzi\nOCA - Oracle Certified Associate Java SE7 Programmer\nOCP - Oracle Certified Mysql 5 Developer\n\nvia Zegalara 57\n62014 Corridonia(MC)\nMob: 3208377271\n\n\n\"The fastest way to change yourself is to hang out with people who are\nalready the way you want to be\" Reid Hoffman",
"msg_date": "Thu, 20 Apr 2017 09:19:23 +0200",
"msg_from": "Marco Renzi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with no result set,\n really really slow adding ORBDER BY / LIMIT clause"
},
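A workaround that is not suggested anywhere in this thread, so treat it as an assumption: ordering by an expression rather than the bare column stops the planner from picking the backward scan of the id index, which is the plan that walks all 1.4 million fase rows waiting for a match that never comes.

SELECT fase.id
FROM tipofase
JOIN fase ON fase.tipofase = tipofase.id
WHERE tipofase.agendafrontoffice = true
ORDER BY fase.id + 0 DESC
LIMIT 10 OFFSET 0;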
{
"msg_contents": "Thanks Philip, yes i tried, but that is not solving, still slow. Take a\nlook at the log.\n\n------------------------------------------------------------\n--------------------------------------------------------------\nLimit (cost=3.46..106.87 rows=10 width=4) (actual\ntime=396555.327..396555.327 rows=0 loops=1)\n -> Nested Loop (cost=3.46..214781.07 rows=20770 width=4) (actual\ntime=396555.326..396555.326 rows=0 loops=1)\n Join Filter: (tipofase.id = fase.tipofase)\n -> Index Scan Backward using test_prova_2 on fase\n(cost=0.43..192654.24 rows=1474700 width=8) (actual time=1.147..395710.190\nrows=1475146 loops=1)\n -> Materialize (cost=3.03..6.34 rows=1 width=8) (actual\ntime=0.000..0.000 rows=0 loops=1475146)\n -> Hash Semi Join (cost=3.03..6.33 rows=1 width=8) (actual\ntime=0.081..0.081 rows=0 loops=1)\n Hash Cond: (tipofase.id = tipofase_1.id)\n -> Seq Scan on tipofase (cost=0.00..3.02 rows=102\nwidth=4) (actual time=0.003..0.003 rows=1 loops=1)\n -> Hash (cost=3.02..3.02 rows=1 width=4) (actual\ntime=0.064..0.064 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 0kB\n -> Seq Scan on tipofase tipofase_1\n(cost=0.00..3.02 rows=1 width=4) (actual time=0.063..0.063 rows=0 loops=1)\n Filter: agendafrontoffice\n Rows Removed by Filter: 102\n\n*Planning time: 1.254 msExecution time: 396555.499 ms*\n\n------------------------------------------------------------\n--------------------------------------------------------------\n\n\n*The only way to speedup i found is this one*\n\nSELECT fase.id\nFROM tipofase\nJOIN fase\nON (fase.tipofase = (SELECT tipofase.id FROM tipofase WHERE\ntipofase.agendafrontoffice = true))\n\nORDER BY fase.id DESC limit 10 offset 0\n\n------------------------------------------------------------\n--------------------------------------------------------------\nLimit (cost=3.45..3.58 rows=10 width=4) (actual time=0.082..0.082 rows=0\nloops=1)\n InitPlan 1 (returns $0)\n -> Seq Scan on tipofase tipofase_1 (cost=0.00..3.02 rows=1 width=4)\n(actual time=0.072..0.072 rows=0 loops=1)\n Filter: agendafrontoffice\n Rows Removed by Filter: 102\n -> Nested Loop (cost=0.43..27080.93 rows=2118540 width=4) (actual\ntime=0.081..0.081 rows=0 loops=1)\n -> Index Only Scan Backward using fase_test_prova_4 on fase\n(cost=0.43..595.90 rows=20770 width=4) (actual time=0.080..0.080 rows=0\nloops=1)\n Index Cond: (tipofase = $0)\n Heap Fetches: 0\n -> Materialize (cost=0.00..3.53 rows=102 width=0) (never executed)\n -> Seq Scan on tipofase (cost=0.00..3.02 rows=102 width=0)\n(never executed)\n\n\n*Planning time: 0.471 msExecution time: 0.150 ms*\n\n------------------------------------------------------------\n--------------------------------------------------------------\n\n\nAnyone knows?\nI'm a bit worried about performance in my web app beacause sometimes\nfilters are written dinamically at the end, and i would like to avoid these\nproblems.\n\nThanks Philip, yes i tried, but that is not solving, still slow. 
",
"msg_date": "Thu, 20 Apr 2017 13:16:26 +0200",
"msg_from": "Marco Renzi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
{
"msg_contents": "On 2017-04-20 13:16, Marco Renzi wrote:\n> Thanks Philip, yes i tried, but that is not solving, still slow. Take\n> a look at the log.\n> \n> --------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=3.46..106.87 rows=10 width=4) (actual\n> time=396555.327..396555.327 rows=0 loops=1)\n> -> Nested Loop (cost=3.46..214781.07 rows=20770 width=4) (actual\n> time=396555.326..396555.326 rows=0 loops=1)\n> Join Filter: (tipofase.id [1] = fase.tipofase)\n> -> Index Scan Backward using test_prova_2 on fase\n> (cost=0.43..192654.24 rows=1474700 width=8) (actual\n> time=1.147..395710.190 rows=1475146 loops=1)\n> -> Materialize (cost=3.03..6.34 rows=1 width=8) (actual\n> time=0.000..0.000 rows=0 loops=1475146)\n> -> Hash Semi Join (cost=3.03..6.33 rows=1 width=8)\n> (actual time=0.081..0.081 rows=0 loops=1)\n> Hash Cond: (tipofase.id [1] = tipofase_1.id [2])\n> -> Seq Scan on tipofase (cost=0.00..3.02\n> rows=102 width=4) (actual time=0.003..0.003 rows=1 loops=1)\n> -> Hash (cost=3.02..3.02 rows=1 width=4) (actual\n> time=0.064..0.064 rows=0 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 0kB\n> -> Seq Scan on tipofase tipofase_1\n> (cost=0.00..3.02 rows=1 width=4) (actual time=0.063..0.063 rows=0\n> loops=1)\n> Filter: agendafrontoffice\n> Rows Removed by Filter: 102\n> Planning time: 1.254 ms\n> Execution time: 396555.499 ms\n> \n> --------------------------------------------------------------------------------------------------------------------------\n> \n> THE ONLY WAY TO SPEEDUP I FOUND IS THIS ONE\n> \n> SELECT fase.id [3]\n> FROM tipofase\n> JOIN fase\n> ON (fase.tipofase = (SELECT tipofase.id [1] FROM tipofase\n> WHERE tipofase.agendafrontoffice = true))\n> \n> ORDER BY fase.id [3] DESC limit 10 offset 0\n> \n> --------------------------------------------------------------------------------------------------------------------------\n> \n> Limit (cost=3.45..3.58 rows=10 width=4) (actual time=0.082..0.082\n> rows=0 loops=1)\n> InitPlan 1 (returns $0)\n> -> Seq Scan on tipofase tipofase_1 (cost=0.00..3.02 rows=1\n> width=4) (actual time=0.072..0.072 rows=0 loops=1)\n> Filter: agendafrontoffice\n> Rows Removed by Filter: 102\n> -> Nested Loop (cost=0.43..27080.93 rows=2118540 width=4) (actual\n> time=0.081..0.081 rows=0 loops=1)\n> -> Index Only Scan Backward using fase_test_prova_4 on fase\n> (cost=0.43..595.90 rows=20770 width=4) (actual time=0.080..0.080\n> rows=0 loops=1)\n> Index Cond: (tipofase = $0)\n> Heap Fetches: 0\n> -> Materialize (cost=0.00..3.53 rows=102 width=0) (never\n> executed)\n> -> Seq Scan on tipofase (cost=0.00..3.02 rows=102\n> width=0) (never executed)\n> Planning time: 0.471 ms\n> Execution time: 0.150 ms\n> \n> --------------------------------------------------------------------------------------------------------------------------\n> \n> Anyone knows?\n> I'm a bit worried about performance in my web app beacause sometimes\n> filters are written dinamically at the end, and i would like to avoid\n> these problems.\n> \n\n\nWhat was it that Philip suggested? 
I can't find his reply in the list \nand you didn't quote it...\n\nDid you try reversing the order of the tables, so join fase to tipofase, \ninstead of tipofase to fase.\nAlso, did you try a partial index on tipofase.id where \ntipofase.agendafrontoffice = true?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 20 Apr 2017 13:54:47 +0200",
"msg_from": "vinny <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
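vinny's partial-index idea, written out as a sketch (the index name is invented). Note that according to the plans tipofase only holds 102 rows, so the sequential scan on it is already cheap; on its own this index is unlikely to remove the real cost, which sits in the backward scan on fase.

CREATE INDEX tipofase_agendafrontoffice_true_idx
    ON tipofase (id)
    WHERE agendafrontoffice = true;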
{
"msg_contents": "Sorry Vinny, this was what Philip suggested:\n\nHave you tried changing your query to:\n\nSELECT id <http://fase.id>\nFROM fase\nWHERE tipofase IN (SELECT ID from tipofase WHERE agendafrontoffice = true)\nORDER BY id <http://fase.id> DESC\nLIMIT 10 OFFSET 0\n\n\nAnd this is my log:\n\n------------------------------------------------------------\n--------------------------------------------------------------\nLimit (cost=3.46..106.87 rows=10 width=4) (actual\ntime=396555.327..396555.327 rows=0 loops=1)\n -> Nested Loop (cost=3.46..214781.07 rows=20770 width=4) (actual\ntime=396555.326..396555.326 rows=0 loops=1)\n Join Filter: (tipofase.id [1] = fase.tipofase)\n -> Index Scan Backward using test_prova_2 on fase\n(cost=0.43..192654.24 rows=1474700 width=8) (actual\ntime=1.147..395710.190 rows=1475146 loops=1)\n -> Materialize (cost=3.03..6.34 rows=1 width=8) (actual\ntime=0.000..0.000 rows=0 loops=1475146)\n -> Hash Semi Join (cost=3.03..6.33 rows=1 width=8)\n(actual time=0.081..0.081 rows=0 loops=1)\n Hash Cond: (tipofase.id [1] = tipofase_1.id [2])\n -> Seq Scan on tipofase (cost=0.00..3.02\nrows=102 width=4) (actual time=0.003..0.003 rows=1 loops=1)\n -> Hash (cost=3.02..3.02 rows=1 width=4) (actual\ntime=0.064..0.064 rows=0 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 0kB\n -> Seq Scan on tipofase tipofase_1\n(cost=0.00..3.02 rows=1 width=4) (actual time=0.063..0.063 rows=0\nloops=1)\n Filter: agendafrontoffice\n Rows Removed by Filter: 102\nPlanning time: 1.254 ms\nExecution time: 396555.499 ms\n\n------------------------------------------------------------\n--------------------------------------------------------------\n\n\n\n\n\n2017-04-20 13:54 GMT+02:00 vinny <[email protected]>:\n\n> On 2017-04-20 13:16, Marco Renzi wrote:\n>\n>> Thanks Philip, yes i tried, but that is not solving, still slow. 
Take\n>> a look at the log.\n>>\n>> ------------------------------------------------------------\n>> --------------------------------------------------------------\n>> Limit (cost=3.46..106.87 rows=10 width=4) (actual\n>> time=396555.327..396555.327 rows=0 loops=1)\n>> -> Nested Loop (cost=3.46..214781.07 rows=20770 width=4) (actual\n>> time=396555.326..396555.326 rows=0 loops=1)\n>> Join Filter: (tipofase.id [1] = fase.tipofase)\n>> -> Index Scan Backward using test_prova_2 on fase\n>> (cost=0.43..192654.24 rows=1474700 width=8) (actual\n>> time=1.147..395710.190 rows=1475146 loops=1)\n>> -> Materialize (cost=3.03..6.34 rows=1 width=8) (actual\n>> time=0.000..0.000 rows=0 loops=1475146)\n>> -> Hash Semi Join (cost=3.03..6.33 rows=1 width=8)\n>> (actual time=0.081..0.081 rows=0 loops=1)\n>> Hash Cond: (tipofase.id [1] = tipofase_1.id [2])\n>> -> Seq Scan on tipofase (cost=0.00..3.02\n>> rows=102 width=4) (actual time=0.003..0.003 rows=1 loops=1)\n>> -> Hash (cost=3.02..3.02 rows=1 width=4) (actual\n>> time=0.064..0.064 rows=0 loops=1)\n>> Buckets: 1024 Batches: 1 Memory Usage: 0kB\n>> -> Seq Scan on tipofase tipofase_1\n>> (cost=0.00..3.02 rows=1 width=4) (actual time=0.063..0.063 rows=0\n>> loops=1)\n>> Filter: agendafrontoffice\n>> Rows Removed by Filter: 102\n>> Planning time: 1.254 ms\n>> Execution time: 396555.499 ms\n>>\n>> ------------------------------------------------------------\n>> --------------------------------------------------------------\n>>\n>> THE ONLY WAY TO SPEEDUP I FOUND IS THIS ONE\n>>\n>> SELECT fase.id [3]\n>> FROM tipofase\n>> JOIN fase\n>> ON (fase.tipofase = (SELECT tipofase.id [1] FROM tipofase\n>> WHERE tipofase.agendafrontoffice = true))\n>>\n>> ORDER BY fase.id [3] DESC limit 10 offset 0\n>>\n>>\n>> ------------------------------------------------------------\n>> --------------------------------------------------------------\n>>\n>> Limit (cost=3.45..3.58 rows=10 width=4) (actual time=0.082..0.082\n>> rows=0 loops=1)\n>> InitPlan 1 (returns $0)\n>> -> Seq Scan on tipofase tipofase_1 (cost=0.00..3.02 rows=1\n>> width=4) (actual time=0.072..0.072 rows=0 loops=1)\n>> Filter: agendafrontoffice\n>> Rows Removed by Filter: 102\n>> -> Nested Loop (cost=0.43..27080.93 rows=2118540 width=4) (actual\n>> time=0.081..0.081 rows=0 loops=1)\n>> -> Index Only Scan Backward using fase_test_prova_4 on fase\n>> (cost=0.43..595.90 rows=20770 width=4) (actual time=0.080..0.080\n>> rows=0 loops=1)\n>> Index Cond: (tipofase = $0)\n>> Heap Fetches: 0\n>> -> Materialize (cost=0.00..3.53 rows=102 width=0) (never\n>> executed)\n>> -> Seq Scan on tipofase (cost=0.00..3.02 rows=102\n>> width=0) (never executed)\n>> Planning time: 0.471 ms\n>> Execution time: 0.150 ms\n>>\n>> ------------------------------------------------------------\n>> --------------------------------------------------------------\n>>\n>> Anyone knows?\n>> I'm a bit worried about performance in my web app beacause sometimes\n>> filters are written dinamically at the end, and i would like to avoid\n>> these problems.\n>>\n>>\n>\n> What was it that Philip suggested? I can't find his reply in the list and\n> you didn't quote it...\n>\n> Did you try reversing the order of the tables, so join fase to tipofase,\n> instead of tipofase to fase.\n> Also, did you try a partial index on tipofase.id where\n> tipofase.agendafrontoffice = true?\n>\n\n\n\n-- \n-------------------------------------------------------------------------------------------------------------------------------------------\nIng. 
Marco Renzi\nOCA - Oracle Certified Associate Java SE7 Programmer\nOCP - Oracle Certified Mysql 5 Developer\n\nvia Zegalara 57\n62014 Corridonia(MC)\nMob: 3208377271\n\n\n\"The fastest way to change yourself is to hang out with people who are\nalready the way you want to be\" Reid Hoffman",
"msg_date": "Thu, 20 Apr 2017 14:16:41 +0200",
"msg_from": "Marco Renzi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
{
"msg_contents": "2017-04-20 9:19 GMT+02:00 Marco Renzi <[email protected]>:\n\n> Hi!, i've currently a big problem using ORBDER BY / LIMIT in a query with\n> no result set.\n> If i add the order by/limit clause it runs really really slow.\n>\n>\n>\n> QUERY 1 FAST:\n> --------------------------------\n>\n> SELECT fase.id\n> FROM tipofase\n> JOIN fase\n> ON (fase.tipofase = tipofase.id)\n> WHERE tipofase.agendafrontoffice = true\n>\n> EXPLAIN ANALYZE:\n>\n> Nested Loop (cost=0.43..790.19 rows=14462 width=4) (actual time=0.079..0.079 rows=0 loops=1)\n>\n> -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.077..0.077 rows=0 loops=1)\n> Filter: agendafrontoffice\n> Rows Removed by Filter: 102\n> -> Index Only Scan using fase_test_prova_4 on fase (cost=0.43..595.59 rows=19158 width=8) (never executed)\n> Index Cond: (tipofase = tipofase.id)\n> Heap Fetches: 0\n> Planning time: 0.669 ms\n> Execution time: 0.141 ms\n>\n> ---\n>\n> It's perfect because it starts from tipofase, where there are no agendafrontoffice = true\n>\n> fase_test_prova_4 is a btree index ON (fase.tipofase, fase.id)\n> fase.id is PRIMARY key on fase,\n> tipofase.id is PRIMARY key on tipofase,\n> fase.tipofase is FK on tipofase.id\n> and tipofase.agendafrontoffice is a boolean.\n>\n> I've also created a btree index on tipofase.agendafrontoffice.\n>\n> **fase** is a large table with 1.475.146 records. There are no rows in the\n> table matching tipofase.agendafrontoffice = true, so the result set is\n> empty(QUERY 1)\n>\n>\n>\n>\n> QUERY 2 SLOW(WITH limit and order by):\n> --------------------------------\n>\n>\n> SELECT fase.id\n> FROM tipofase\n> JOIN fase\n> ON (fase.tipofase = tipofase.id)\n> WHERE tipofase.agendafrontoffice = true\n> ORDER BY fase.id DESC limit 10 offset 0\n>\n> Limit (cost=0.43..149.66 rows=10 width=4) (actual time=173853.131..173853.131 rows=0 loops=1)\n> -> Nested Loop (cost=0.43..215814.25 rows=14462 width=4) (actual time=173853.130..173853.130 rows=0 loops=1)\n> Join Filter: (fase.tipofase = tipofase.id)\n> -> Index Scan Backward using test_prova_2 on fase (cost=0.43..193684.04 rows=1475146 width=8) (actual time=1.336..173128.418 rows=1475146 loops=1)\n> -> Materialize (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1475146)\n> -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1)\n> Filter: agendafrontoffice\n> Rows Removed by Filter: 102\n> Planning time: 0.685 ms\n> Execution time: 173853.221 ms\n>\n>\n>\nI am afraid so is not possible to solve this issue by one query. In this\ncase the planner expects early stop due finding few values. But because\nthere are not any value, the LIMIT clause has not any benefit in executor\ntime, but the planner is messed. Maybe try to increase LIMIT to some higher\nvalue .. 1000, 10000 so planner don't fall to this trap. PostgreSQL\nstatistics are about most common values, but the values without any\noccurrence are not well registered by statistics.\n\nRegards\n\nPavel\n\n\n> Really really slow..... 
looks like the planner is not doing a good job.\n> PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit\n>\n>\n> I also run VACUUM AND VACUUM ANALYZE on both table\n> I tried to play with the\n> \"alter table tipofase alter column agendafrontoffice set statistics 2\"\n> but nothing.\n>\n> Thanks in advance Marco\n>\n>\n>\n> --\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> -------------------\n> Ing. Marco Renzi\n> OCA - Oracle Certified Associate Java SE7 Programmer\n> OCP - Oracle Certified Mysql 5 Developer\n>\n> via Zegalara 57\n> 62014 Corridonia(MC)\n> Mob: 3208377271 <(320)%20837-7271>\n>\n>\n> \"The fastest way to change yourself is to hang out with people who are\n> already the way you want to be\" Reid Hoffman\n>",
"msg_date": "Thu, 20 Apr 2017 17:57:02 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
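Applied to the schema described earlier in the thread, Pavel's first suggestion amounts to raising the LIMIT so the planner no longer bets on stopping early. A minimal sketch, assuming the table and column names Marco posted:

SELECT fase.id
FROM tipofase
JOIN fase ON (fase.tipofase = tipofase.id)
WHERE tipofase.agendafrontoffice = true
ORDER BY fase.id DESC
LIMIT 10000 OFFSET 0;

Note that on its own this changes the contract of the query: if matching rows ever exist, up to 10000 of them come back instead of 10, so the caller has to discard the surplus. The wrapper query in the next message keeps the original 10-row result while still giving the planner the larger inner limit.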
{
"msg_contents": "2017-04-20 17:57 GMT+02:00 Pavel Stehule <[email protected]>:\n\n>\n>\n> 2017-04-20 9:19 GMT+02:00 Marco Renzi <[email protected]>:\n>\n>> Hi!, i've currently a big problem using ORBDER BY / LIMIT in a query\n>> with no result set.\n>> If i add the order by/limit clause it runs really really slow.\n>>\n>>\n>>\n>> QUERY 1 FAST:\n>> --------------------------------\n>>\n>> SELECT fase.id\n>> FROM tipofase\n>> JOIN fase\n>> ON (fase.tipofase = tipofase.id)\n>> WHERE tipofase.agendafrontoffice = true\n>>\n>> EXPLAIN ANALYZE:\n>>\n>> Nested Loop (cost=0.43..790.19 rows=14462 width=4) (actual time=0.079..0.079 rows=0 loops=1)\n>>\n>> -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.077..0.077 rows=0 loops=1)\n>> Filter: agendafrontoffice\n>> Rows Removed by Filter: 102\n>> -> Index Only Scan using fase_test_prova_4 on fase (cost=0.43..595.59 rows=19158 width=8) (never executed)\n>> Index Cond: (tipofase = tipofase.id)\n>> Heap Fetches: 0\n>> Planning time: 0.669 ms\n>> Execution time: 0.141 ms\n>>\n>> ---\n>>\n>> It's perfect because it starts from tipofase, where there are no agendafrontoffice = true\n>>\n>> fase_test_prova_4 is a btree index ON (fase.tipofase, fase.id)\n>> fase.id is PRIMARY key on fase,\n>> tipofase.id is PRIMARY key on tipofase,\n>> fase.tipofase is FK on tipofase.id\n>> and tipofase.agendafrontoffice is a boolean.\n>>\n>> I've also created a btree index on tipofase.agendafrontoffice.\n>>\n>> **fase** is a large table with 1.475.146 records. There are no rows in\n>> the table matching tipofase.agendafrontoffice = true, so the result set is\n>> empty(QUERY 1)\n>>\n>>\n>>\n>>\n>> QUERY 2 SLOW(WITH limit and order by):\n>> --------------------------------\n>>\n>>\n>> SELECT fase.id\n>> FROM tipofase\n>> JOIN fase\n>> ON (fase.tipofase = tipofase.id)\n>> WHERE tipofase.agendafrontoffice = true\n>> ORDER BY fase.id DESC limit 10 offset 0\n>>\n>> Limit (cost=0.43..149.66 rows=10 width=4) (actual time=173853.131..173853.131 rows=0 loops=1)\n>> -> Nested Loop (cost=0.43..215814.25 rows=14462 width=4) (actual time=173853.130..173853.130 rows=0 loops=1)\n>> Join Filter: (fase.tipofase = tipofase.id)\n>> -> Index Scan Backward using test_prova_2 on fase (cost=0.43..193684.04 rows=1475146 width=8) (actual time=1.336..173128.418 rows=1475146 loops=1)\n>> -> Materialize (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1475146)\n>> -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1)\n>> Filter: agendafrontoffice\n>> Rows Removed by Filter: 102\n>> Planning time: 0.685 ms\n>> Execution time: 173853.221 ms\n>>\n>>\n>>\n> I am afraid so is not possible to solve this issue by one query. In this\n> case the planner expects early stop due finding few values. But because\n> there are not any value, the LIMIT clause has not any benefit in executor\n> time, but the planner is messed. Maybe try to increase LIMIT to some higher\n> value .. 1000, 10000 so planner don't fall to this trap. PostgreSQL\n> statistics are about most common values, but the values without any\n> occurrence are not well registered by statistics.\n>\n> Regards\n>\n\nIt can looks strange, but it can work\n\nSELECT *\n FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s\n ORDER BY ...\n LIMIT 10;\n\nRegards\n\nPavel\n\n\n\n>\n> Pavel\n>\n>\n>> Really really slow..... 
looks like the planner is not doing a good job.\n>> PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bit\n>>\n>>\n>> I also run VACUUM AND VACUUM ANALYZE on both table\n>> I tried to play with the\n>> \"alter table tipofase alter column agendafrontoffice set statistics 2\"\n>> but nothing.\n>>\n>> Thanks in advance Marco\n>>\n>>\n>>\n>> --\n>> ------------------------------------------------------------\n>> ------------------------------------------------------------\n>> -------------------\n>> Ing. Marco Renzi\n>> OCA - Oracle Certified Associate Java SE7 Programmer\n>> OCP - Oracle Certified Mysql 5 Developer\n>>\n>> via Zegalara 57\n>> 62014 Corridonia(MC)\n>> Mob: 3208377271 <(320)%20837-7271>\n>>\n>>\n>> \"The fastest way to change yourself is to hang out with people who are\n>> already the way you want to be\" Reid Hoffman\n>>\n>\n>\n\n2017-04-20 17:57 GMT+02:00 Pavel Stehule <[email protected]>:2017-04-20 9:19 GMT+02:00 Marco Renzi <[email protected]>:Hi!, i've currently a big problem using ORBDER BY / LIMIT in a query with no result set.If i add the order by/limit clause it runs really really slow.QUERY 1 FAST:--------------------------------SELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = tipofase.id)WHERE tipofase.agendafrontoffice = trueEXPLAIN ANALYZE:Nested Loop (cost=0.43..790.19 rows=14462 width=4) (actual time=0.079..0.079 rows=0 loops=1) -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.077..0.077 rows=0 loops=1) Filter: agendafrontoffice Rows Removed by Filter: 102 -> Index Only Scan using fase_test_prova_4 on fase (cost=0.43..595.59 rows=19158 width=8) (never executed) Index Cond: (tipofase = tipofase.id) Heap Fetches: 0Planning time: 0.669 msExecution time: 0.141 ms---It's perfect because it starts from tipofase, where there are no agendafrontoffice = truefase_test_prova_4 is a btree index ON (fase.tipofase, fase.id)fase.id is PRIMARY key on fase, tipofase.id is PRIMARY key on tipofase, fase.tipofase is FK on tipofase.idand tipofase.agendafrontoffice is a boolean. I've also created a btree index on tipofase.agendafrontoffice.**fase**\n is a large table with 1.475.146 records. There are no rows in the table\n matching tipofase.agendafrontoffice = true, so the result set is \nempty(QUERY 1) QUERY 2 SLOW(WITH limit and order by):--------------------------------SELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = tipofase.id)WHERE tipofase.agendafrontoffice = trueORDER BY fase.id DESC limit 10 offset 0Limit (cost=0.43..149.66 rows=10 width=4) (actual time=173853.131..173853.131 rows=0 loops=1) -> Nested Loop (cost=0.43..215814.25 rows=14462 width=4) (actual time=173853.130..173853.130 rows=0 loops=1) Join Filter: (fase.tipofase = tipofase.id) -> Index Scan Backward using test_prova_2 on fase (cost=0.43..193684.04 rows=1475146 width=8) (actual time=1.336..173128.418 rows=1475146 loops=1) -> Materialize (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1475146) -> Seq Scan on tipofase (cost=0.00..3.02 rows=1 width=4) (actual time=0.000..0.000 rows=0 loops=1) Filter: agendafrontoffice Rows Removed by Filter: 102Planning time: 0.685 msExecution time: 173853.221 msI am afraid so is not possible to solve this issue by one query. In this case the planner expects early stop due finding few values. But because there are not any value, the LIMIT clause has not any benefit in executor time, but the planner is messed. Maybe try to increase LIMIT to some higher value .. 1000, 10000 so planner don't fall to this trap. 
PostgreSQL statistics are about most common values, but the values without any occurrence are not well registered by statistics.RegardsIt can looks strange, but it can workSELECT * FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s ORDER BY ... LIMIT 10;RegardsPavel Pavel Really really slow..... looks like the planner is not doing a good job.PostgreSQL 9.4.1, compiled by Visual C++ build 1800, 64-bitI also run VACUUM AND VACUUM ANALYZE on both tableI tried to play with the \"alter table tipofase alter column agendafrontoffice set statistics 2\"but nothing.Thanks in advance Marco-- -------------------------------------------------------------------------------------------------------------------------------------------Ing. Marco RenziOCA - Oracle Certified Associate Java SE7 ProgrammerOCP - Oracle Certified Mysql 5 Developervia Zegalara 5762014 Corridonia(MC)Mob: 3208377271\"The fastest way to change yourself is to hang out with people who are already the way you want to be\" Reid Hoffman",
"msg_date": "Thu, 20 Apr 2017 18:05:59 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
{
"msg_contents": "This could look strange, but is fast as hell!\nThe main problem is:\nIs everytime ok doing query like this with order by and limit? Is ok using\nan upperlimit to 1.000.000.000 records?\n\nSELECT * FROM (\nSELECT fase.id\nFROM tipofase\nJOIN fase\nON (fase.tipofase = tipofase.id)\nWHERE agendafrontoffice = true\nORDER BY fase.id DESC limit 1000000000 offset 0\n) A\nORDER BY A.id DESC limit 10 offset 0\n\n2017-04-20 18:05 GMT+02:00 Pavel Stehule <[email protected]>:\n\n>\n>\n>\n> I am afraid so is not possible to solve this issue by one query. In this\n>> case the planner expects early stop due finding few values. But because\n>> there are not any value, the LIMIT clause has not any benefit in executor\n>> time, but the planner is messed. Maybe try to increase LIMIT to some higher\n>> value .. 1000, 10000 so planner don't fall to this trap. PostgreSQL\n>> statistics are about most common values, but the values without any\n>> occurrence are not well registered by statistics.\n>>\n>> Regards\n>>\n>\n> It can looks strange, but it can work\n>\n> SELECT *\n> FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s\n> ORDER BY ...\n> LIMIT 10;\n>\n> Regards\n>\n> Pavel\n>\n\nThis could look strange, but is fast as hell!The main problem is: Is everytime ok doing query like this with order by and limit? Is ok using an upperlimit to 1.000.000.000 records?SELECT * FROM (SELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = tipofase.id)WHERE agendafrontoffice = trueORDER BY fase.id DESC limit 1000000000 offset 0 ) AORDER BY A.id DESC limit 10 offset 0 2017-04-20 18:05 GMT+02:00 Pavel Stehule <[email protected]>:I am afraid so is not possible to solve this issue by one query. In this case the planner expects early stop due finding few values. But because there are not any value, the LIMIT clause has not any benefit in executor time, but the planner is messed. Maybe try to increase LIMIT to some higher value .. 1000, 10000 so planner don't fall to this trap. PostgreSQL statistics are about most common values, but the values without any occurrence are not well registered by statistics.RegardsIt can looks strange, but it can workSELECT * FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s ORDER BY ... LIMIT 10;RegardsPavel",
"msg_date": "Fri, 21 Apr 2017 08:49:54 +0200",
"msg_from": "Marco Renzi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
{
"msg_contents": "2017-04-21 8:49 GMT+02:00 Marco Renzi <[email protected]>:\n\n> This could look strange, but is fast as hell!\n> The main problem is:\n> Is everytime ok doing query like this with order by and limit? Is ok using\n> an upperlimit to 1.000.000.000 records?\n>\n\nI am thinking so limit 10000 should be ok. Too big number can be messy for\noptimizer similarly like too small number.\n\nThe planner is driven by statistics - and the statistics are not perfect -\nusually it is working on 80% - like weather forecasting.\n\nUsually it is working, but sometimes not.\n\nRegards\n\nPavel\n\n\n>\n> SELECT * FROM (\n> SELECT fase.id\n> FROM tipofase\n> JOIN fase\n> ON (fase.tipofase = tipofase.id)\n> WHERE agendafrontoffice = true\n> ORDER BY fase.id DESC limit 1000000000 offset 0\n> ) A\n> ORDER BY A.id DESC limit 10 offset 0\n>\n> 2017-04-20 18:05 GMT+02:00 Pavel Stehule <[email protected]>:\n>\n>>\n>>\n>>\n>> I am afraid so is not possible to solve this issue by one query. In this\n>>> case the planner expects early stop due finding few values. But because\n>>> there are not any value, the LIMIT clause has not any benefit in executor\n>>> time, but the planner is messed. Maybe try to increase LIMIT to some higher\n>>> value .. 1000, 10000 so planner don't fall to this trap. PostgreSQL\n>>> statistics are about most common values, but the values without any\n>>> occurrence are not well registered by statistics.\n>>>\n>>> Regards\n>>>\n>>\n>> It can looks strange, but it can work\n>>\n>> SELECT *\n>> FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s\n>> ORDER BY ...\n>> LIMIT 10;\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n\n2017-04-21 8:49 GMT+02:00 Marco Renzi <[email protected]>:This could look strange, but is fast as hell!The main problem is: Is everytime ok doing query like this with order by and limit? Is ok using an upperlimit to 1.000.000.000 records?I am thinking so limit 10000 should be ok. Too big number can be messy for optimizer similarly like too small number.The planner is driven by statistics - and the statistics are not perfect - usually it is working on 80% - like weather forecasting.Usually it is working, but sometimes not.RegardsPavel SELECT * FROM (SELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = tipofase.id)WHERE agendafrontoffice = trueORDER BY fase.id DESC limit 1000000000 offset 0 ) AORDER BY A.id DESC limit 10 offset 0 2017-04-20 18:05 GMT+02:00 Pavel Stehule <[email protected]>:I am afraid so is not possible to solve this issue by one query. In this case the planner expects early stop due finding few values. But because there are not any value, the LIMIT clause has not any benefit in executor time, but the planner is messed. Maybe try to increase LIMIT to some higher value .. 1000, 10000 so planner don't fall to this trap. PostgreSQL statistics are about most common values, but the values without any occurrence are not well registered by statistics.RegardsIt can looks strange, but it can workSELECT * FROM (your query ORDER BY .. OFFSET 0 LIMIT 10000) s ORDER BY ... LIMIT 10;RegardsPavel",
"msg_date": "Fri, 21 Apr 2017 08:56:17 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
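Putting the two previous messages together, Marco's wrapper with the inner limit lowered to the value Pavel considers safe would look like the sketch below. The inner limit (10000 here) has to stay at least as large as the biggest page the application will ever ask for, otherwise the outer query silently truncates results:

SELECT A.id
FROM (
    SELECT fase.id
    FROM tipofase
    JOIN fase ON (fase.tipofase = tipofase.id)
    WHERE tipofase.agendafrontoffice = true
    ORDER BY fase.id DESC
    LIMIT 10000 OFFSET 0
) A
ORDER BY A.id DESC
LIMIT 10 OFFSET 0;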
{
"msg_contents": ">\n> I am thinking so limit 10000 should be ok. Too big number can be messy for\n> optimizer similarly like too small number.\n>\n> The planner is driven by statistics - and the statistics are not perfect -\n> usually it is working on 80% - like weather forecasting.\n>\n> Usually it is working, but sometimes not.\n>\n> Regards\n>\n> Pavel\n>\n>\n>\nThanks Pavel, i almost found two solutions at the end:\nOne is to use an inner limit as you said, and the other, when you just know\nwhat the filter is,\nis to try to join with SELECTS that have to be executed first from the\nplanner.\n\nEg\nSELECT fase.id\nFROM tipofase\nJOIN fase\nON (fase.tipofase = (SELECT tipofase.id FROM tipofase WHERE\ntipofase.agendafrontoffice = true))\n\nORDER BY fase.id DESC limit 10 offset 0\n\nThanks for the help\n\n-- \n-------------------------------------------------------------------------------------------------------------------------------------------\nIng. Marco Renzi\nOCA - Oracle Certified Associate Java SE7 Programmer\nOCP - Oracle Certified Mysql 5 Developer\n\nvia Zegalara 57\n62014 Corridonia(MC)\nMob: 3208377271\n\n\n\"The fastest way to change yourself is to hang out with people who are\nalready the way you want to be\" Reid Hoffman\n\nI am thinking so limit 10000 should be ok. Too big number can be messy for optimizer similarly like too small number.The planner is driven by statistics - and the statistics are not perfect - usually it is working on 80% - like weather forecasting.Usually it is working, but sometimes not.RegardsPavel \nThanks Pavel, i almost found two solutions at the end:One is to use an inner limit as you said, and the other, when you just know what the filter is,is to try to join with SELECTS that have to be executed first from the planner.EgSELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = (SELECT tipofase.id FROM tipofase WHERE tipofase.agendafrontoffice = true))ORDER BY fase.id DESC limit 10 offset 0 Thanks for the help-- -------------------------------------------------------------------------------------------------------------------------------------------Ing. Marco RenziOCA - Oracle Certified Associate Java SE7 ProgrammerOCP - Oracle Certified Mysql 5 Developervia Zegalara 5762014 Corridonia(MC)Mob: 3208377271\"The fastest way to change yourself is to hang out with people who are already the way you want to be\" Reid Hoffman",
"msg_date": "Fri, 21 Apr 2017 09:05:36 +0200",
"msg_from": "Marco Renzi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
},
{
"msg_contents": "2017-04-21 9:05 GMT+02:00 Marco Renzi <[email protected]>:\n\n>\n>\n>> I am thinking so limit 10000 should be ok. Too big number can be messy\n>> for optimizer similarly like too small number.\n>>\n>> The planner is driven by statistics - and the statistics are not perfect\n>> - usually it is working on 80% - like weather forecasting.\n>>\n>> Usually it is working, but sometimes not.\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n> Thanks Pavel, i almost found two solutions at the end:\n> One is to use an inner limit as you said, and the other, when you just\n> know what the filter is,\n> is to try to join with SELECTS that have to be executed first from the\n> planner.\n>\n> Eg\n> SELECT fase.id\n> FROM tipofase\n> JOIN fase\n> ON (fase.tipofase = (SELECT tipofase.id FROM tipofase WHERE\n> tipofase.agendafrontoffice = true))\n>\n> ORDER BY fase.id DESC limit 10 offset 0\n>\n> Thanks for the help\n>\n\nyes, sometimes when the data are not homogeneous more queries are necessary\n\nRegards\n\nPavel\n\n\n>\n> --\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> -------------------\n> Ing. Marco Renzi\n> OCA - Oracle Certified Associate Java SE7 Programmer\n> OCP - Oracle Certified Mysql 5 Developer\n>\n> via Zegalara 57\n> 62014 Corridonia(MC)\n> Mob: 3208377271 <(320)%20837-7271>\n>\n>\n> \"The fastest way to change yourself is to hang out with people who are\n> already the way you want to be\" Reid Hoffman\n>\n\n2017-04-21 9:05 GMT+02:00 Marco Renzi <[email protected]>:I am thinking so limit 10000 should be ok. Too big number can be messy for optimizer similarly like too small number.The planner is driven by statistics - and the statistics are not perfect - usually it is working on 80% - like weather forecasting.Usually it is working, but sometimes not.RegardsPavel \nThanks Pavel, i almost found two solutions at the end:One is to use an inner limit as you said, and the other, when you just know what the filter is,is to try to join with SELECTS that have to be executed first from the planner.EgSELECT fase.idFROM tipofaseJOIN faseON (fase.tipofase = (SELECT tipofase.id FROM tipofase WHERE tipofase.agendafrontoffice = true))ORDER BY fase.id DESC limit 10 offset 0 Thanks for the helpyes, sometimes when the data are not homogeneous more queries are necessaryRegardsPavel -- -------------------------------------------------------------------------------------------------------------------------------------------Ing. Marco RenziOCA - Oracle Certified Associate Java SE7 ProgrammerOCP - Oracle Certified Mysql 5 Developervia Zegalara 5762014 Corridonia(MC)Mob: 3208377271\"The fastest way to change yourself is to hang out with people who are already the way you want to be\" Reid Hoffman",
"msg_date": "Fri, 21 Apr 2017 09:12:53 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with no result set, really really slow adding\n ORBDER BY / LIMIT clause"
}
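Two caveats about the scalar-subquery rewrite above seem worth recording, both following from ordinary SQL semantics rather than from anything planner-specific. First, a scalar subquery may return at most one row; if agendafrontoffice ever becomes true for more than one tipofase row, the query fails at runtime with a 'more than one row returned by a subquery used as an expression' error. Second, as written the outer join to tipofase no longer constrains anything, so whenever the subquery does return an id, every matching fase row is repeated once per tipofase row. Assuming at most one flagged row is guaranteed, a simplified sketch drops the now-redundant join:

SELECT fase.id
FROM fase
WHERE fase.tipofase = (SELECT t.id
                       FROM tipofase t
                       WHERE t.agendafrontoffice = true)
ORDER BY fase.id DESC
LIMIT 10 OFFSET 0;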
] |
[
{
"msg_contents": "Hi,\n\nI've lately seen more and more installations where the generation of\nwrite-ahead-log (WAL) is one of the primary bottlenecks. I'm curious\nwhether that's primarily a \"sampling error\" of mine, or whether that's\nindeed more common.\n\nThe primary reason I'm curious is that I'm pondering a few potential\noptimizations, and would like to have some guidance which are more and\nwhich are less important.\n\nQuestions (answer as many you can comfortably answer):\n- How many MB/s, segments/s do you see on busier servers?\n- What generates the bulk of WAL on your servers (9.5+ can use\n pg_xlogdump --stats to compute that)?\n- Are you seeing WAL writes being a bottleneck?OA\n- What kind of backup methods are you using and is the WAL volume a\n problem?\n- What kind of replication are you using and is the WAL volume a\n problem?\n- What are your settings for wal_compression, max_wal_size (9.5+) /\n checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n- Could you quickly describe your workload?\n\nFeel free to add any information you think is pertinent ;)\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Apr 2017 21:17:43 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Questionaire: Common WAL write rates on busy servers."
},
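For anyone wanting to answer the first question without external monitoring, a rough figure can be pulled from the server itself. A psql-flavored sketch, using the 9.4-9.6 function names (PostgreSQL 10 renames these to pg_current_wal_insert_lsn() and pg_wal_lsn_diff()):

-- capture a starting WAL position and timestamp into psql variables
SELECT pg_current_xlog_insert_location() AS start_lsn, now() AS start_ts \gset

-- after letting the workload run for a while, compute the average rate
SELECT round(
         pg_xlog_location_diff(pg_current_xlog_insert_location(), :'start_lsn')
         / extract(epoch FROM now() - :'start_ts')::numeric
         / (1024 * 1024), 2) AS wal_mb_per_sec;

With the default 16MB segment size, dividing the MB/s figure by 16 gives an approximate segments/s number.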
{
"msg_contents": "Hi Andres.\n\n> 25 апр. 2017 г., в 7:17, Andres Freund <[email protected]> написал(а):\n> \n> Hi,\n> \n> I've lately seen more and more installations where the generation of\n> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious\n> whether that's primarily a \"sampling error\" of mine, or whether that's\n> indeed more common.\n> \n> The primary reason I'm curious is that I'm pondering a few potential\n> optimizations, and would like to have some guidance which are more and\n> which are less important.\n> \n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n\nNearly one WAL (16 MB) per second most of the time and 3 WALs per second in the beginning of checkpoint (due to full_page_writes).\n\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n\nHere is the output from a couple of our masters (and that is actually two hours before peak load):\n\n$ pg_xlogdump --stats 0000000100012B2800000089 0000000100012B3000000088 | fgrep -v 0.00\n\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nHeap2 55820638 ( 21.31) 1730485085 ( 22.27) 1385795249 ( 13.28) 3116280334 ( 17.12)\nHeap 74366993 ( 28.39) 2288644932 ( 29.46) 5880717650 ( 56.34) 8169362582 ( 44.87)\nBtree 84655827 ( 32.32) 2243526276 ( 28.88) 3170518879 ( 30.38) 5414045155 ( 29.74)\n -------- -------- -------- --------\nTotal 261933790 7769663301 [42.67%] 10437031778 [57.33%] 18206695079 [100%]\n$\n\n$ pg_xlogdump --stats 000000010000D17F000000A5 000000010000D19100000004 | fgrep -v 0.00\nType N (%) Record size (%) FPI size (%) Combined size (%)\n---- - --- ----------- --- -------- --- ------------- ---\nHeap2 13676881 ( 18.95) 422289539 ( 19.97) 15319927851 ( 25.63) 15742217390 ( 25.44)\nHeap 22284283 ( 30.88) 715293050 ( 33.83) 17119265188 ( 28.64) 17834558238 ( 28.82)\nBtree 27640155 ( 38.30) 725674896 ( 34.32) 19244109632 ( 32.19) 19969784528 ( 32.27)\nGin 6580760 ( 9.12) 172246586 ( 8.15) 8091332009 ( 13.54) 8263578595 ( 13.35)\n -------- -------- -------- --------\nTotal 72172983 2114133847 [3.42%] 59774634680 [96.58%] 61888768527 [100%]\n$\n\n> - Are you seeing WAL writes being a bottleneck?OA\n\nWe do sometimes see WALWriteLock in pg_stat_activity.wait_event, but not too often.\n\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n\nWe use fork of barman project. In most cases that’s not a problem.\n\n> - What kind of replication are you using and is the WAL volume a\n> problem?\n\nPhysical streaming replication. We used to have problems with network bandwidth (1 Gbit/s was consumed by transferring WAL to two replicas and one archive) but that became better after 1. upgrading to 9.5 and turning wal_compression on, 2. changing archive command to doing parallel compression and sending WALs to archive, 3. 
increasing checkpoint_timeout.\n\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n\nxdb301e/postgres M # SELECT name, current_setting(name) FROM pg_settings\nWHERE name IN ('max_wal_size', 'checkpoint_timeout', 'wal_compression', 'wal_buffers');\n name | current_setting\n--------------------+-----------------\n checkpoint_timeout | 1h\n max_wal_size | 128GB\n wal_buffers | 16MB\n wal_compression | on\n(4 rows)\n\nTime: 0.938 ms\nxdb301e/postgres M #\n\n> - Could you quickly describe your workload?\n\nOLTP workload with 80% reads and 20% writes.\n\n> \n> Feel free to add any information you think is pertinent ;)\n\nWell, we actually workarounded issues with WAL write rate by increasing checkpoint_timeout to maximum possible (in 9.6 it can be even more). The downside of this change is recovery time. Thanks postgres for its stability but sometimes you can waste ~ 10 minutes just to restart postgres for upgrading to new minor version and that’s not really cool.\n\n> \n> Greetings,\n> \n> Andres Freund\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\nHi Andres.25 апр. 2017 г., в 7:17, Andres Freund <[email protected]> написал(а):Hi,I've lately seen more and more installations where the generation ofwrite-ahead-log (WAL) is one of the primary bottlenecks. I'm curiouswhether that's primarily a \"sampling error\" of mine, or whether that'sindeed more common.The primary reason I'm curious is that I'm pondering a few potentialoptimizations, and would like to have some guidance which are more andwhich are less important.Questions (answer as many you can comfortably answer):- How many MB/s, segments/s do you see on busier servers?Nearly one WAL (16 MB) per second most of the time and 3 WALs per second in the beginning of checkpoint (due to full_page_writes).- What generates the bulk of WAL on your servers (9.5+ can use pg_xlogdump --stats to compute that)?Here is the output from a couple of our masters (and that is actually two hours before peak load):$ pg_xlogdump --stats 0000000100012B2800000089 0000000100012B3000000088 | fgrep -v 0.00Type N (%) Record size (%) FPI size (%) Combined size (%)---- - --- ----------- --- -------- --- ------------- ---Heap2 55820638 ( 21.31) 1730485085 ( 22.27) 1385795249 ( 13.28) 3116280334 ( 17.12)Heap 74366993 ( 28.39) 2288644932 ( 29.46) 5880717650 ( 56.34) 8169362582 ( 44.87)Btree 84655827 ( 32.32) 2243526276 ( 28.88) 3170518879 ( 30.38) 5414045155 ( 29.74) -------- -------- -------- --------Total 261933790 7769663301 [42.67%] 10437031778 [57.33%] 18206695079 [100%]$$ pg_xlogdump --stats 000000010000D17F000000A5 000000010000D19100000004 | fgrep -v 0.00Type N (%) Record size (%) FPI size (%) Combined size (%)---- - --- ----------- --- -------- --- ------------- ---Heap2 13676881 ( 18.95) 422289539 ( 19.97) 15319927851 ( 25.63) 15742217390 ( 25.44)Heap 22284283 ( 30.88) 715293050 ( 33.83) 17119265188 ( 28.64) 17834558238 ( 28.82)Btree 27640155 ( 38.30) 725674896 ( 34.32) 19244109632 ( 32.19) 19969784528 ( 32.27)Gin 6580760 ( 9.12) 172246586 ( 8.15) 8091332009 ( 13.54) 8263578595 ( 13.35) -------- -------- -------- --------Total 72172983 2114133847 [3.42%] 59774634680 [96.58%] 61888768527 [100%]$- Are you seeing WAL writes being a bottleneck?OAWe do sometimes see WALWriteLock in pg_stat_activity.wait_event, but not too often.- What kind of backup 
methods are you using and is the WAL volume a problem?We use fork of barman project. In most cases that’s not a problem.- What kind of replication are you using and is the WAL volume a problem?Physical streaming replication. We used to have problems with network bandwidth (1 Gbit/s was consumed by transferring WAL to two replicas and one archive) but that became better after 1. upgrading to 9.5 and turning wal_compression on, 2. changing archive command to doing parallel compression and sending WALs to archive, 3. increasing checkpoint_timeout.- What are your settings for wal_compression, max_wal_size (9.5+) / checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?xdb301e/postgres M # SELECT name, current_setting(name) FROM pg_settingsWHERE name IN ('max_wal_size', 'checkpoint_timeout', 'wal_compression', 'wal_buffers'); name | current_setting--------------------+----------------- checkpoint_timeout | 1h max_wal_size | 128GB wal_buffers | 16MB wal_compression | on(4 rows)Time: 0.938 msxdb301e/postgres M #- Could you quickly describe your workload?OLTP workload with 80% reads and 20% writes.Feel free to add any information you think is pertinent ;)Well, we actually workarounded issues with WAL write rate by increasing checkpoint_timeout to maximum possible (in 9.6 it can be even more). The downside of this change is recovery time. Thanks postgres for its stability but sometimes you can waste ~ 10 minutes just to restart postgres for upgrading to new minor version and that’s not really cool.Greetings,Andres Freund-- Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 25 Apr 2017 10:56:14 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
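The WALWriteLock observation above can be quantified on 9.6, where pg_stat_activity grew the wait_event columns. A small sampling sketch; it only shows waits in progress at that instant, so it needs to be run repeatedly (for example from a monitoring job):

SELECT wait_event_type, wait_event, count(*)
FROM pg_stat_activity
WHERE wait_event IS NOT NULL
GROUP BY wait_event_type, wait_event
ORDER BY count(*) DESC;

Sessions stalled on WAL flushes show up with wait_event = 'WALWriteLock' in this output.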
{
"msg_contents": "On Mon, Apr 24, 2017 at 9:17 PM, Andres Freund <[email protected]> wrote:\n\n>\n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n> - Are you seeing WAL writes being a bottleneck?OA\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n> - What kind of replication are you using and is the WAL volume a\n> problem?\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n> - Could you quickly describe your workload?\n>\n\n* Postgresql 9.3\n* 1500+ db servers\n* Daily peak for busy databases: 75 WALs switched per second (less than 10%\nof the servers experience this)\n* Avg per db: 2 WALs/s\n* Mainly generated by large batch sync processes that occur throughout the\nday, and by a legacy archiving process to purge older data (potentially\nmany millions of cascading deletes).\n*Half the servers have (encrypted) pg_dump backups, WAL volume hasn't\nproved to be a problem there, though dump size is a problem for a few of\nthe larger databases (less than 1TB).\n* Inter-data-centre replication is all streaming, across DC's (over the\nWAN) WAL shipping is over compressed SSH tunnels.\nOccasionally the streaming replication falls behind, but more commonly it\nis the cross-DC log shipping that becomes a problem. Some of the servers\nwill generate 50+ GBs of WAL in a matter of minutes and that backs up\nimmediately on the masters. Occasionally this has a knock-on effect for\nother servers and slows down their log shipping due to network saturation.\n* checkpoint_segments: 64, checkpoint_timeout: 5 mins, wal_buffers: 16MB\n\nWorkload:\n70% of servers are generally quiet, with occasional bursty reads and writes.\n20% are medium use, avg a few hundred transactions/second\n10% average around 5k txns/s, with bursts up to 25k txns/s for several\nminutes.\nAll servers have about 80% reads / 20% writes, though those numbers flip\nduring big sync jobs and when the purging maintenance kicks off.\n\nOn Mon, Apr 24, 2017 at 9:17 PM, Andres Freund <[email protected]> wrote:\n\nQuestions (answer as many you can comfortably answer):\n- How many MB/s, segments/s do you see on busier servers?\n- What generates the bulk of WAL on your servers (9.5+ can use\n pg_xlogdump --stats to compute that)?\n- Are you seeing WAL writes being a bottleneck?OA\n- What kind of backup methods are you using and is the WAL volume a\n problem?\n- What kind of replication are you using and is the WAL volume a\n problem?\n- What are your settings for wal_compression, max_wal_size (9.5+) /\n checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n- Could you quickly describe your workload?* Postgresql 9.3* 1500+ db servers* Daily peak for busy databases: 75 WALs switched per second (less than 10% of the servers experience this)* Avg per db: 2 WALs/s* Mainly generated by large batch sync processes that occur throughout the day, and by a legacy archiving process to purge older data (potentially many millions of cascading deletes).*Half the servers have (encrypted) pg_dump backups, WAL volume hasn't proved to be a problem there, though dump size is a problem for a few of the larger databases (less than 1TB).* Inter-data-centre replication is all streaming, across DC's (over the WAN) WAL shipping is over compressed SSH 
tunnels.Occasionally the streaming replication falls behind, but more commonly it is the cross-DC log shipping that becomes a problem. Some of the servers will generate 50+ GBs of WAL in a matter of minutes and that backs up immediately on the masters. Occasionally this has a knock-on effect for other servers and slows down their log shipping due to network saturation.* checkpoint_segments: 64, checkpoint_timeout: 5 mins, wal_buffers: 16MBWorkload:70% of servers are generally quiet, with occasional bursty reads and writes.20% are medium use, avg a few hundred transactions/second10% average around 5k txns/s, with bursts up to 25k txns/s for several minutes.All servers have about 80% reads / 20% writes, though those numbers flip during big sync jobs and when the purging maintenance kicks off.",
"msg_date": "Tue, 25 Apr 2017 07:19:31 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Questionaire: Common WAL write rates on busy servers."
},
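Since several answers in this thread quote archiving throughput, it may be worth noting that on 9.4 and later (not the 9.3 fleet described above) the server keeps a running counter of archived segments, so an average rate needs no log scraping. A sketch, assuming the statistics have not been reset so recently that the average is meaningless:

SELECT archived_count,
       failed_count,
       stats_reset,
       round(archived_count
             / extract(epoch FROM now() - stats_reset)::numeric, 3)
         AS archived_segments_per_sec
FROM pg_stat_archiver;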
{
"msg_contents": "On Tue, Apr 25, 2017 at 1:17 AM, Andres Freund <[email protected]> wrote:\n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n\n~20MB/s with FPW compression, with peaks of ~35MB/s. Writes become the\nbottleneck without compression and it tops at about 40-50MB/s, WAL\narchiving cannot keep up beyond that point.\n\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n\nType N (%)\nRecord size (%) FPI size (%) Combined\nsize (%)\n---- - ---\n----------- --- -------- ---\n------------- ---\nXLOG 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nTransaction 30 ( 0.00)\n 960 ( 0.00) 0 ( 0.00) 960 (\n0.00)\nStorage 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nCLOG 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nDatabase 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nTablespace 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nMultiXact 110 ( 0.01)\n 7456 ( 0.02) 0 ( 0.00) 7456 (\n0.00)\nRelMap 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nStandby 2 ( 0.00)\n 368 ( 0.00) 0 ( 0.00) 368 (\n0.00)\nHeap2 2521 ( 0.22)\n 78752 ( 0.24) 4656133 ( 2.82) 4734885 (\n2.39)\nHeap 539419 ( 46.52)\n15646903 ( 47.14) 98720258 ( 59.87) 114367161 (\n57.73)\nBtree 606573 ( 52.31)\n15872182 ( 47.82) 57514798 ( 34.88) 73386980 (\n37.05)\nHash 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nGin 2866 ( 0.25)\n 134330 ( 0.40) 4012251 ( 2.43) 4146581 (\n2.09)\nGist 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nSequence 7970 ( 0.69)\n1450540 ( 4.37) 0 ( 0.00) 1450540 (\n0.73)\nSPGist 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nBRIN 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nCommitTs 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\nReplicationOrigin 0 ( 0.00)\n 0 ( 0.00) 0 ( 0.00) 0 (\n0.00)\n --------\n-------- -------- --------\nTotal 1159491\n33191491 [16.76%] 164903440 [83.24%] 198094931\n[100%]\n\n\n> - Are you seeing WAL writes being a bottleneck?OA\n\nSometimes, more so without FPW compression\n\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n> - What kind of replication are you using and is the WAL volume a\n> problem?\n\nStreaming to hot standby + WAL archiving, delayed standby as backup\nand PITR. Backups are regular filesystem-level snapshots of the\ndelayed standby (with postgres down to get consistent snapshots).\n\nWAL volume getting full during periods where the hot standby lags\nbehind (or when we have to stop it to create consistent snapshots) are\nan issue indeed, and we've had to provision significant storage to be\nable to absorb those peaks (1TB of WAL)\n\nWe bundle WAL segments into groups of 256 segments for archiving and\nrecovery to minimize the impact of TCP slow start. We further gzip\nsegments before transfer with pigz, and we use mostly rsync (with a\nwrapper script that takes care of durability and error handling) to\nmove segments around. 
Getting the archive/recovery scripts to handle\nthe load hasn't been trivial.\n\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n\nwal_compression = on\nmax_wal_size = 12GB\nmin_wal_size = 2GB\ncheckpoint_timeout = 30min\nwal_buffers = -1 (16MB effective)\n\n> - Could you quickly describe your workload?\n\nSteady stream of (preaggregated) input events plus upserts into ~12\npartitioned aggregate \"matviews\" (within quotes since they're manually\nmaintained up to date).\n\nInput rate is approximately 9000 rows/s without counting the upserts\nonto the aggregate matviews. Old information is regularly compressed\nand archived into less detailed partitions for a steady size of about\n5TB.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Apr 2017 13:07:41 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "Andres Freund <[email protected]> writes:\n\n> Hi,\n>\n> I've lately seen more and more installations where the generation of\n> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious\n> whether that's primarily a \"sampling error\" of mine, or whether that's\n> indeed more common.\n>\n> The primary reason I'm curious is that I'm pondering a few potential\n> optimizations, and would like to have some guidance which are more and\n> which are less important.\n\nI have a busy warehouse spitting out about 400k\nsegments/week... ~10MB/second :-)\n\nWe have resorted to a rather complex batch/parallel compressor/shipper\nto keep up with the volume.\n\n>\n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n\nOur busiest system Avg 10MB/second but very burst. Assume it'w many\ntimes that during high churn periods.\n\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n\nSimply warehouse incremental loading and/or full table delete/trunc and\nreload, plus dirived data being created. Many of the transient tables\nare on NVME and unlogged.\n\n> - Are you seeing WAL writes being a bottleneck?OA\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n\nI do not know if basic local WAL writing itself is a problem of or not\nbut as mentioned, we are scarcely able to handle the necessary archiving\nto make backups and PITR possible.\n\n> - What kind of replication are you using and is the WAL volume a\n\nTh;are 2 streamers both feeding directly from master. We use a fairly\nlarge 30k keep-segments value to help avoid streamers falling behind and\nthen having to resort to remote archive fetching.\n\nIt does appear that since streaming WAL reception and application as\nwell as of course remote fetching are single threaded, this is a\nbottleneck as well. That is, a totally unloded and well outfitted\n(hardware wise) streamer can barely keep up with master.\n\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n\n checkpoint_timeout | 5min\n max_wal_size | 4GB \n wal_buffers | 16MB\n wal_compression | off \n\n> - Could you quickly describe your workload?\n\nwarehouse with user self-service reporting creation/storage allowed in\nsame system.\n\n>\n> Feel free to add any information you think is pertinent ;)\n\nGreat idea!! Thanks\n\n>\n> Greetings,\n>\n> Andres Freund\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Tue, 25 Apr 2017 13:56:28 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
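The "streamer can barely keep up" point is easy to keep an eye on from the primary. A sketch using the 9.4-9.6 catalog column names (10+ renames the columns to *_lsn and the function to pg_current_wal_lsn()):

SELECT application_name,
       state,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location)
         AS replay_lag_bytes
FROM pg_stat_replication;

If replay_location trails while sent_location keeps up, the standby's single recovery process is the likely bottleneck described above; if sent_location itself falls behind, the network or the walsender is.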
{
"msg_contents": "Hi,\n\nOn 2017-04-24 21:17:43 -0700, Andres Freund wrote:\n> I've lately seen more and more installations where the generation of\n> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious\n> whether that's primarily a \"sampling error\" of mine, or whether that's\n> indeed more common.\n> \n> The primary reason I'm curious is that I'm pondering a few potential\n> optimizations, and would like to have some guidance which are more and\n> which are less important.\n> \n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n> - Are you seeing WAL writes being a bottleneck?OA\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n> - What kind of replication are you using and is the WAL volume a\n> problem?\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n> - Could you quickly describe your workload?\n\nOk, based on the, few, answers I've got so far, my experience is indeed\nskewed. A number of the PG users I interacted with over the last couple\nyears had WAL write ranges somewhere in the range of 500MB/s to 2.2GB/s\n(max I'veseen). At that point WAL insertion became a major bottleneck,\neven if storage was more than fast enough to keep up. To address these\nwe'd need some changes, but the feedback so far suggest that it's not\nyet a widespread issue...\n\n- Andres\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 27 Apr 2017 08:59:06 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "On 04/27/2017 08:59 AM, Andres Freund wrote:\n\n>\n> Ok, based on the, few, answers I've got so far, my experience is indeed\n> skewed. A number of the PG users I interacted with over the last couple\n> years had WAL write ranges somewhere in the range of 500MB/s to 2.2GB/s\n> (max I'veseen). At that point WAL insertion became a major bottleneck,\n> even if storage was more than fast enough to keep up. To address these\n> we'd need some changes, but the feedback so far suggest that it's not\n> yet a widespread issue...\n\nI would agree it isn't yet a widespread issue.\n\nThe only people that are likely going to see this are going to be on \nbare metal. We should definitely plan on that issue for say 11. I do \nhave a question though, where you have seen this issue is it with \nsynchronous_commit on or off?\n\nThanks,\n\nJD\n\n\n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\nUnless otherwise stated, opinions are my own.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 27 Apr 2017 09:31:34 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:\n> On 04/27/2017 08:59 AM, Andres Freund wrote:\n> \n> > \n> > Ok, based on the, few, answers I've got so far, my experience is indeed\n> > skewed. A number of the PG users I interacted with over the last couple\n> > years had WAL write ranges somewhere in the range of 500MB/s to 2.2GB/s\n> > (max I'veseen). At that point WAL insertion became a major bottleneck,\n> > even if storage was more than fast enough to keep up. To address these\n> > we'd need some changes, but the feedback so far suggest that it's not\n> > yet a widespread issue...\n> \n> I would agree it isn't yet a widespread issue.\n\nI'm not yet sure about that actually. I suspect a large percentage of\npeople with such workloads aren't lingering lots on the lists.\n\n\n> The only people that are likely going to see this are going to be on bare\n> metal. We should definitely plan on that issue for say 11.\n\n\"plan on that issue\" - heh. We're talking about major engineering\nprojects here ;)\n\n\n> I do have a question though, where you have seen this issue is it with\n> synchronous_commit on or off?\n\nBoth. Whether that matters or not really depends on the workload. If you\nhave bulk writes, it doesn't really matter much.\n\n- Andres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Apr 2017 09:34:07 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "On 04/27/2017 09:34 AM, Andres Freund wrote:\n> On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:\n>> On 04/27/2017 08:59 AM, Andres Freund wrote:\n>>\n\n>> I would agree it isn't yet a widespread issue.\n>\n> I'm not yet sure about that actually. I suspect a large percentage of\n> people with such workloads aren't lingering lots on the lists.\n\nThat would probably be true. I was thinking of it more as the \"most new \nusers are in the cloud\" and the \"cloud\" is going to be rare that a cloud \nuser is going to be able to hit that level of writes. (at least not \nwithout spending LOTS of money)\n\n>\n>\n>> The only people that are likely going to see this are going to be on bare\n>> metal. We should definitely plan on that issue for say 11.\n>\n> \"plan on that issue\" - heh. We're talking about major engineering\n> projects here ;)\n\nSorry, wasn't trying to make light of the effort. :D\n\n>\n>\n>> I do have a question though, where you have seen this issue is it with\n>> synchronous_commit on or off?\n>\n> Both. Whether that matters or not really depends on the workload. If you\n> have bulk writes, it doesn't really matter much.\n\nSure, o.k.\n\nThanks,\n\nAndres\n\n>\n> - Andres\n>\n\n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\nUnless otherwise stated, opinions are my own.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Apr 2017 10:29:48 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:\n> On 04/27/2017 09:34 AM, Andres Freund wrote:\n> > On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:\n> > > On 04/27/2017 08:59 AM, Andres Freund wrote:\n> > > \n> \n> > > I would agree it isn't yet a widespread issue.\n> > \n> > I'm not yet sure about that actually. I suspect a large percentage of\n> > people with such workloads aren't lingering lots on the lists.\n> \n> That would probably be true. I was thinking of it more as the \"most new\n> users are in the cloud\" and the \"cloud\" is going to be rare that a cloud\n> user is going to be able to hit that level of writes. (at least not without\n> spending LOTS of money)\n\nYou can get pretty decent NVMe SSD drives on serveral cloud providers\nthese days, without immediately bancrupting you. Sure, it's instance\nstorage, but with a decent replication and archival setup, that's not\nnecessarily an issue.\n\nIt's not that hard to get to the point where postgres can't keep up with\nstorage, at least for some workloads.\n\n- Andres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Apr 2017 10:35:40 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "Hi,\n\nOn 04/25/2017 06:17 AM, Andres Freund wrote:\n> Hi,\n>\n> I've lately seen more and more installations where the generation of\n> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious\n> whether that's primarily a \"sampling error\" of mine, or whether\n> that's indeed more common.\n>\n\nI see those cases too. To some degree it's a sampling bias. People \ngenerally don't call us to look at the 99% of systems that perform fine, \nso we tend to see the the 1% of systems under pressure.\n\nThat doesn't make that observation irrelevant, though. Those demanding \nsystems are one of the things that pushes us forward.\n\n >\n> The primary reason I'm curious is that I'm pondering a few potential\n> optimizations, and would like to have some guidance which are more\n> and which are less important.\n>\n\nI think any optimization you do will improve at least some of those busy \nsystems.\n\n> Questions (answer as many you can comfortably answer):\n> - How many MB/s, segments/s do you see on busier servers?\n\nThat depends on the cause (see the next point).\n\n> - What generates the bulk of WAL on your servers (9.5+ can use\n> pg_xlogdump --stats to compute that)?\n\na) systems doing large batches\n - bulk loads/updates/deletes, one or few sessions doing a lot\n - easily high hundreds of MB/s (on a separate device)\n\nb) OLTP systems doing a lot of tiny/small transactions\n - many concurrent sessions\n - often end up much more limited by WAL, due to locking etc.\n - often the trouble is random updates all over the place, causing\n amplification due to FPI (PK on UUID is a great way to cause this\n unnecessarily even on apps with naturally tiny working set)\n\n> - Are you seeing WAL writes being a bottleneck?OA\n\nOn the write-intensive systems, yes. Often the CPUs are waiting for WAL \nI/O to complete during COMMIT.\n\n> - What kind of backup methods are you using and is the WAL volume a\n> problem?\n\nThe large and busy systems can easily produce so much WAL, that the \nbasebackup is not the largest part of the backup. That is somewhat \nsolvable by using other means of obtaining the basebackup snapshot (e.g. \nby taking some sort of filesystem / SAN / ... snapshot). That reduces \nthe amount of WAL needed to make the basebackup consistent, but it \ndoesn't solve the WAL archiving issue.\n\n> - What kind of replication are you using and is the WAL volume a\n> problem?\n\nGenerally streaming replication, and yes, the amount of WAL may be an \nissue, partially because the standby is a single-process thing. And as \nit has to process something generated by N sessions on the primary, that \ncan't end well.\n\nInterestingly enough, FPIs can actually make it way faster, because the \nstandby does not need to read the data from disk during recovery.\n\n> - What are your settings for wal_compression, max_wal_size (9.5+) /\n> checkpoint_segments (< 9.5), checkpoint_timeout and wal_buffers?\n\nI'd say the usual practice is to tune for timed checkpoints, say 30+ \nminutes apart (or more). wal_compression is typically 'off' (i.e. the \ndefault value).\n\n> - Could you quickly describe your workload?\n\nPretty much a little bit of everything, depending on the customer.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 28 Apr 2017 00:53:27 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
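The "tune for timed checkpoints" advice above is straightforward to verify from the statistics collector: if most checkpoints are requested (typically xlog-triggered) rather than timed, max_wal_size / checkpoint_segments is usually too small for the write rate. A minimal sketch:

SELECT checkpoints_timed,
       checkpoints_req,
       round(checkpoints_timed::numeric
             / nullif(checkpoints_timed + checkpoints_req, 0), 2) AS timed_ratio,
       stats_reset
FROM pg_stat_bgwriter;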
{
"msg_contents": "On 04/27/2017 06:34 PM, Andres Freund wrote:\n> On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:\n>> On 04/27/2017 08:59 AM, Andres Freund wrote:\n>>\n>>>\n>>> Ok, based on the, few, answers I've got so far, my experience is\n>>> indeed skewed. A number of the PG users I interacted with over\n>>> the last couple years had WAL write ranges somewhere in the range\n>>> of 500MB/s to 2.2GB/s (max I'veseen). At that point WAL insertion\n>>> became a major bottleneck, even if storage was more than fast\n>>> enough to keep up. To address these we'd need some changes, but\n>>> the feedback so far suggest that it's not yet a widespread\n>>> issue...\n>>\n>> I would agree it isn't yet a widespread issue.\n>\n> I'm not yet sure about that actually. I suspect a large percentage\n> of people with such workloads aren't lingering lots on the lists.\n>\n\nTo a certain extent, this is a self-fulfilling prophecy. If you know \nyou'll have such a busy system, you probably do some research and \ntesting first, before choosing the database. If we don't perform well \nenough, you pick something else. Which removes the data point.\n\nObviously, there are systems that start small and get busier and busier \nover time. And those are the ones we see.\n\ncheers\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 28 Apr 2017 01:18:59 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Questionaire: Common WAL write rates on busy servers."
},
{
"msg_contents": "On 04/27/2017 07:35 PM, Andres Freund wrote:\n> On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:\n>> On 04/27/2017 09:34 AM, Andres Freund wrote:\n>>> On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:\n>>>> On 04/27/2017 08:59 AM, Andres Freund wrote:\n>>>>\n>>\n>>>> I would agree it isn't yet a widespread issue.\n>>>\n>>> I'm not yet sure about that actually. I suspect a large\n>>> percentage of people with such workloads aren't lingering lots on\n>>> the lists.\n>>\n>> That would probably be true. I was thinking of it more as the\n>> \"most new users are in the cloud\" and the \"cloud\" is going to be\n>> rare that a cloud user is going to be able to hit that level of\n>> writes. (at least not without spending LOTS of money)\n>\n> You can get pretty decent NVMe SSD drives on serveral cloud\n> providers these days, without immediately bancrupting you. Sure, it's\n> instance storage, but with a decent replication and archival setup,\n> that's not necessarily an issue.\n>\n> It's not that hard to get to the point where postgres can't keep up\n> with storage, at least for some workloads.\n>\n\nI can confirm this observation. I bought the Intel 750 NVMe SSD last \nyear, the device has 1GB DDR3 cache on it (power-loss protected), can do \n~1GB/s of sustained O_DIRECT sequential writes. But when running \npgbench, I can't push more than ~300MB/s of WAL to it, no matter what I \ndo because of WALWriteLock.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 28 Apr 2017 01:29:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Questionaire: Common WAL write rates on busy servers."
},
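For readers who want to compare their own systems with the WAL rates quoted in this thread, here is a rough sketch of measuring WAL generation on 9.x. The function names are the pre-10 ones (PostgreSQL 10 renames them to pg_current_wal_insert_lsn() and pg_wal_lsn_diff()), and the LSN placeholder has to be filled in by hand from the first query's output:

  -- note the current WAL insert position
  SELECT pg_current_xlog_insert_location();
  -- let the workload run for a while, e.g. 60 seconds
  SELECT pg_sleep(60);
  -- bytes of WAL written since the first call, converted to MB/s
  SELECT pg_xlog_location_diff(pg_current_xlog_insert_location(),
                               '<LSN noted above>') / (60 * 1024 * 1024.0) AS wal_mb_per_s;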
{
"msg_contents": "On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:\n> I can confirm this observation. I bought the Intel 750 NVMe SSD last year,\n> the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of\n> sustained O_DIRECT sequential writes. But when running pgbench, I can't push\n> more than ~300MB/s of WAL to it, no matter what I do because of\n> WALWriteLock.\n\nHm, interesting. Even if you up wal_buffers to 128MB, use\nsynchronous_commit = off, and play with wal_writer_delay/flush_after?\n\n- Andres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 27 Apr 2017 16:34:33 -0700",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
},
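A minimal sketch of the settings Andres suggests trying, expressed as ALTER SYSTEM commands (9.4+ syntax; the values are illustrative starting points, not recommendations):

  ALTER SYSTEM SET wal_buffers = '128MB';          -- only takes effect after a server restart
  ALTER SYSTEM SET synchronous_commit = 'off';     -- trades durability of the last commits for throughput
  ALTER SYSTEM SET wal_writer_delay = '10ms';      -- default is 200ms
  ALTER SYSTEM SET wal_writer_flush_after = '1MB'; -- 9.6+; default is 1MB
  SELECT pg_reload_conf();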
{
"msg_contents": "On 04/28/2017 01:34 AM, Andres Freund wrote:\n> On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:\n>> I can confirm this observation. I bought the Intel 750 NVMe SSD last year,\n>> the device has 1GB DDR3 cache on it (power-loss protected), can do ~1GB/s of\n>> sustained O_DIRECT sequential writes. But when running pgbench, I can't push\n>> more than ~300MB/s of WAL to it, no matter what I do because of\n>> WALWriteLock.\n>\n> Hm, interesting. Even if you up wal_buffers to 128MB, use\n> synchronous_commit = off, and play with wal_writer_delay/flush_after?\n>\n\nI think I've tried things like that, but let me do some proper testing. \nI'll report the numbers in a few days.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 29 Apr 2017 02:41:19 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Questionaire: Common WAL write rates on busy servers."
}
] |
[
{
"msg_contents": "After about 40 inutes the slow query finally finished and the result of the\nEXPLAIN plan can be found here:\n\nhttps://explain.depesz.com/s/BX22\n\nThanks,\nAlessandro Ferrucci\n\nOn Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci <\[email protected]> wrote:\n\n> Hello - I am migrating a current system to PostgreSQL and I am having an\n> issue with a relatively straightforward query being extremely slow.\n>\n> The following are the definitions of the tables:\n>\n> CREATE TABLE popt_2017.unit\n> (\n> id serial NOT NULL,\n> unit_id text,\n> batch_id text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT unit_pkey PRIMARY KEY (id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE TABLE popt_2017.field\n> (\n> id serial NOT NULL,\n> unit_id integer,\n> subunit_data_id integer,\n> field_name character varying(50),\n> page_id character varying(20),\n> page_type character varying(20),\n> batch_id character varying(20),\n> file_name character varying(20),\n> data_concept integer,\n> \"GROUP\" integer,\n> omr_group integer,\n> pres integer,\n> reg_data text,\n> ocr_conf text,\n> ocr_dict text,\n> ocr_phon text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT field_pkey PRIMARY KEY (id),\n> CONSTRAINT field_subunit_data_id_fkey FOREIGN KEY (subunit_data_id)\n> REFERENCES popt_2017.subunit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fk FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fkey FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> CREATE TABLE popt_2017.answer\n> (\n> id serial NOT NULL,\n> field_id integer,\n> ans_status integer,\n> ans text,\n> luggage text,\n> arec text,\n> kfi_partition integer,\n> final boolean,\n> length integer,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT answer_pkey PRIMARY KEY (id),\n> CONSTRAINT answer_field_id_fk FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT answer_field_id_fkey FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> Below are the index definitions for those tables:\n>\n> UNIT:\n> CREATE UNIQUE INDEX unit_pkey ON unit USING btree (id);\n> CREATE INDEX unit_unit_id_idx ON unit USING btree (unit_id);\n>\n> FIELD:\n> CREATE UNIQUE INDEX field_pkey ON field USING btree (id)\n> CREATE INDEX field_unit_id_idx ON field USING btree (unit_id)\n> CREATE INDEX field_subunit_id_idx ON field USING btree (subunit_data_id)\n> CREATE INDEX field_field_name_idx ON field USING btree (field_name)\n>\n> ANSWER:\n> CREATE UNIQUE INDEX answer_pkey ON answer USING btree (id)\n> CREATE INDEX answer_field_id_idx ON answer USING btree (field_id)\n> CREATE INDEX answer_ans_idx ON answer USING btree (ans)\n>\n> The tables each have the following number of rows:\n>\n> UNIT: 10,315\n> FIELD: 139,397,965\n> ANSWER: 3,463,300\n>\n> The query in question is:\n>\n> SELECT\n> UNIT.ID AS UNIT_ID,\n> UNIT.UNIT_ID AS UNIT_UNIT_ID,\n> UNIT.BATCH_ID AS UNIT_BATCH_ID,\n> UNIT.CREATE_DATE AS UNIT_CREATE_DATE,\n> UNIT.UPDATE_DATE AS UNIT_UPDATE_DATE\n> 
FROM\n> UNIT, FIELD, ANSWER\n> WHERE\n> UNIT.ID=FIELD.UNIT_ID AND\n> FIELD.ID=ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n>\n> I attempted to run an EXPLAIN (ANALYZE,BUFFERS) and the query has been\n> running for 32 minutes now, So I won't be able to post the results (as I've\n> never been able to get the query to actually finish.\n>\n> But, if I remove the join to UNIT (and just join FIELD and ANSWER) the\n> resulting query is sufficiently fast, (the first time it ran in roughly 3\n> seconds), the query as such is:\n>\n> SELECT * FROM\n> ANSWER, FIELD\n> WHERE\n> FIELD.ID=ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n>\n> The EXPLAIN ( ANALYZE, BUFFERS ) output of that query can be found here\n> https://explain.depesz.com/s/ueJq\n>\n> These tables are static for now, so they do not get DELETEs or INSERTS at\n> all and I have run VACUUM ANALYZE on all the affected tables.\n>\n> I'm running PostgreSQL PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu,\n> compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\n>\n> I'm running this on RHEL 6.9\n>\n> On a server with 32 GB of ram, 2 CPUs.\n>\n> The following are the changes to postgresql.conf that I have made:\n>\n> shared_buffers = 7871MB\n> effective_cache_size = 23611MB\n> work_mem = 1000MB\n> maintenance_work_mem = 2048MB\n>\n> I have not changed the autovacuum settings, but since the tables are\n> static for now and I've already ran VACUUM that should not have any effect.\n>\n> Any assistance that could be provided is greatly appreciated.\n>\n> Thank you,\n> Alessandro Ferrucci\n>\n>\n>\n>\n>\n>\n>\n\n\n-- \nSigned,\nAlessandro Ferrucci\n\nAfter about 40 inutes the slow query finally finished and the result of the EXPLAIN plan can be found here:https://explain.depesz.com/s/BX22Thanks,Alessandro FerrucciOn Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci <[email protected]> wrote:Hello - I am migrating a current system to PostgreSQL and I am having an issue with a relatively straightforward query being extremely slow.The following are the definitions of the tables:CREATE TABLE popt_2017.unit( id serial NOT NULL, unit_id text, batch_id text, create_date timestamp without time zone DEFAULT now(), update_date timestamp without time zone, CONSTRAINT unit_pkey PRIMARY KEY (id))WITH ( OIDS=FALSE);CREATE TABLE popt_2017.field( id serial NOT NULL, unit_id integer, subunit_data_id integer, field_name character varying(50), page_id character varying(20), page_type character varying(20), batch_id character varying(20), file_name character varying(20), data_concept integer, \"GROUP\" integer, omr_group integer, pres integer, reg_data text, ocr_conf text, ocr_dict text, ocr_phon text, create_date timestamp without time zone DEFAULT now(), update_date timestamp without time zone, CONSTRAINT field_pkey PRIMARY KEY (id), CONSTRAINT field_subunit_data_id_fkey FOREIGN KEY (subunit_data_id) REFERENCES popt_2017.subunit (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT field_unit_id_fk FOREIGN KEY (unit_id) REFERENCES popt_2017.unit (id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT field_unit_id_fkey FOREIGN KEY (unit_id) REFERENCES popt_2017.unit (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION)WITH ( OIDS=FALSE);CREATE TABLE popt_2017.answer( id serial NOT NULL, field_id integer, ans_status integer, ans text, luggage text, arec text, kfi_partition integer, final boolean, length integer, create_date timestamp without time zone DEFAULT 
now(), update_date timestamp without time zone, CONSTRAINT answer_pkey PRIMARY KEY (id), CONSTRAINT answer_field_id_fk FOREIGN KEY (field_id) REFERENCES popt_2017.field (id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT answer_field_id_fkey FOREIGN KEY (field_id) REFERENCES popt_2017.field (id) MATCH SIMPLE ON UPDATE NO ACTION ON DELETE NO ACTION)WITH ( OIDS=FALSE);Below are the index definitions for those tables:UNIT:CREATE UNIQUE INDEX unit_pkey ON unit USING btree (id);CREATE INDEX unit_unit_id_idx ON unit USING btree (unit_id);FIELD:CREATE UNIQUE INDEX field_pkey ON field USING btree (id)CREATE INDEX field_unit_id_idx ON field USING btree (unit_id)CREATE INDEX field_subunit_id_idx ON field USING btree (subunit_data_id)CREATE INDEX field_field_name_idx ON field USING btree (field_name)ANSWER:CREATE UNIQUE INDEX answer_pkey ON answer USING btree (id)CREATE INDEX answer_field_id_idx ON answer USING btree (field_id)CREATE INDEX answer_ans_idx ON answer USING btree (ans)The tables each have the following number of rows:UNIT: 10,315FIELD: 139,397,965ANSWER: 3,463,300The query in question is:SELECT UNIT.ID AS UNIT_ID, UNIT.UNIT_ID AS UNIT_UNIT_ID, UNIT.BATCH_ID AS UNIT_BATCH_ID, UNIT.CREATE_DATE AS UNIT_CREATE_DATE, UNIT.UPDATE_DATE AS UNIT_UPDATE_DATEFROM UNIT, FIELD, ANSWERWHERE UNIT.ID=FIELD.UNIT_ID AND FIELD.ID=ANSWER.FIELD_ID AND FIELD.FIELD_NAME='SHEETS_PRESENT' AND ANSWER.ANS='2';I attempted to run an EXPLAIN (ANALYZE,BUFFERS) and the query has been running for 32 minutes now, So I won't be able to post the results (as I've never been able to get the query to actually finish.But, if I remove the join to UNIT (and just join FIELD and ANSWER) the resulting query is sufficiently fast, (the first time it ran in roughly 3 seconds), the query as such is:SELECT * FROM ANSWER, FIELDWHERE FIELD.ID=ANSWER.FIELD_ID AND FIELD.FIELD_NAME='SHEETS_PRESENT' AND ANSWER.ANS='2';The EXPLAIN ( ANALYZE, BUFFERS ) output of that query can be found here https://explain.depesz.com/s/ueJqThese tables are static for now, so they do not get DELETEs or INSERTS at all and I have run VACUUM ANALYZE on all the affected tables.I'm running PostgreSQL PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bitI'm running this on RHEL 6.9On a server with 32 GB of ram, 2 CPUs.The following are the changes to postgresql.conf that I have made:shared_buffers = 7871MB effective_cache_size = 23611MBwork_mem = 1000MBmaintenance_work_mem = 2048MBI have not changed the autovacuum settings, but since the tables are static for now and I've already ran VACUUM that should not have any effect.Any assistance that could be provided is greatly appreciated.Thank you,Alessandro Ferrucci\n-- Signed,Alessandro Ferrucci",
"msg_date": "Tue, 25 Apr 2017 23:19:37 -0400",
"msg_from": "Alessandro Ferrucci <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with 3 table joins"
},
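Since EXPLAIN (ANALYZE, BUFFERS) actually executes the statement, a plain EXPLAIN shows the chosen plan immediately, and a session-level statement_timeout keeps any further test runs bounded. A sketch using the query from the message above:

  EXPLAIN
  SELECT UNIT.ID
  FROM UNIT, FIELD, ANSWER
  WHERE UNIT.ID = FIELD.UNIT_ID
    AND FIELD.ID = ANSWER.FIELD_ID
    AND FIELD.FIELD_NAME = 'SHEETS_PRESENT'
    AND ANSWER.ANS = '2';

  SET statement_timeout = '5min';   -- abort any later EXPLAIN ANALYZE run that takes too long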
{
"msg_contents": "On 26 April 2017 at 15:19, Alessandro Ferrucci\n<[email protected]> wrote:\n> After about 40 inutes the slow query finally finished and the result of the\n> EXPLAIN plan can be found here:\n>\n> https://explain.depesz.com/s/BX22\n\n> Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52 rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)\"\n\nThis estimate seems a long way off. Are the stats up-to-date on the\ntable? Try again after running: ANALYZE field;\n\nIt might also be a good idea to ANALYZE all the tables. Is auto-vacuum\nswitched on?\n\nThe plan in question would work better if you create an index on field\n(field_name, unit_id);\n\nbut I think if you update the stats the plan will switch.\n\nA HashJoin, hashing \"unit\" and index scanning on field_field_name_idx\nwould have been a much smarter plan choice for the planner to make.\n\nAlso how many distinct field_names are there? SELECT COUNT(DISTINCT\nfield_name) FROM field;\n\nYou may want to increase the histogram buckets on that columns if\nthere are more than 100 field names, and the number of rows with each\nfield name is highly variable. ALTER TABLE field ALTER COLUMN\nfield_name SET STATISTICS <n buckets>; 100 is the default, and 10000\nis the maximum.\n\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 16:12:00 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
},
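A sketch of the steps David suggests, in order (the index name and the statistics target of 1000 are illustrative):

  ANALYZE field;

  CREATE INDEX field_field_name_unit_id_idx ON field (field_name, unit_id);

  -- only if estimates stay poor for the skewed field_name column:
  ALTER TABLE field ALTER COLUMN field_name SET STATISTICS 1000;
  ANALYZE field;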
{
"msg_contents": "tis 2017-04-25 klockan 23:19 -0400 skrev Alessandro Ferrucci:\n> After about 40 inutes the slow query finally finished and the result\n> of the EXPLAIN plan can be found here:\n> \n> \n> https://explain.depesz.com/s/BX22\n> \n> \n> Thanks,\n> Alessandro Ferrucci\n\nI'm not so familiar with the index implementetion in Postgres, but I\ndon't think it is very efficient to index a text-field. It also loooks a\nbit strange that a id-field has the datatype \"text\" rather than integer\nor varchar.\n\n / Eskil\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 08:24:27 +0200",
"msg_from": "Johan Fredriksson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
},
{
"msg_contents": "Hi Eskil -\n\nThe I believe the id-field you're referring to is the UNIT.UNIT_ID, I could\nchange this to a varchar, however that column is not used in the query in\nquestion, so that wouldn't have any effect on the query's performance.\n\nJust for curiosity - I have changed the ANSWER.ANS datatype to a\nvarchar(250), but that did not affect the performance of the query.\n\n\n\nOn Wed, Apr 26, 2017 at 2:24 AM, Johan Fredriksson <[email protected]> wrote:\n\n> tis 2017-04-25 klockan 23:19 -0400 skrev Alessandro Ferrucci:\n> > After about 40 inutes the slow query finally finished and the result\n> > of the EXPLAIN plan can be found here:\n> >\n> >\n> > https://explain.depesz.com/s/BX22\n> >\n> >\n> > Thanks,\n> > Alessandro Ferrucci\n>\n> I'm not so familiar with the index implementetion in Postgres, but I\n> don't think it is very efficient to index a text-field. It also loooks a\n> bit strange that a id-field has the datatype \"text\" rather than integer\n> or varchar.\n>\n> / Eskil\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nSigned,\nAlessandro Ferrucci\n\nHi Eskil - The I believe the id-field you're referring to is the UNIT.UNIT_ID, I could change this to a varchar, however that column is not used in the query in question, so that wouldn't have any effect on the query's performance.Just for curiosity - I have changed the ANSWER.ANS datatype to a varchar(250), but that did not affect the performance of the query.On Wed, Apr 26, 2017 at 2:24 AM, Johan Fredriksson <[email protected]> wrote:tis 2017-04-25 klockan 23:19 -0400 skrev Alessandro Ferrucci:\n> After about 40 inutes the slow query finally finished and the result\n> of the EXPLAIN plan can be found here:\n>\n>\n> https://explain.depesz.com/s/BX22\n>\n>\n> Thanks,\n> Alessandro Ferrucci\n\nI'm not so familiar with the index implementetion in Postgres, but I\ndon't think it is very efficient to index a text-field. It also loooks a\nbit strange that a id-field has the datatype \"text\" rather than integer\nor varchar.\n\n / Eskil\n\n\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Signed,Alessandro Ferrucci",
"msg_date": "Wed, 26 Apr 2017 07:04:55 -0400",
"msg_from": "Alessandro Ferrucci <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with 3 table joins"
},
{
"msg_contents": "Hi Dave -\n\nthank you very much for all this advice! I will try each of these and post\nback results (some of this stuff, like creating the index, which is\nhappening now, takes a very long time).\n\nThanks again for all these pointers.\n\nCheers,\nAlessandro\n\nOn Wed, Apr 26, 2017 at 12:12 AM, David Rowley <[email protected]\n> wrote:\n\n> On 26 April 2017 at 15:19, Alessandro Ferrucci\n> <[email protected]> wrote:\n> > After about 40 inutes the slow query finally finished and the result of\n> the\n> > EXPLAIN plan can be found here:\n> >\n> > https://explain.depesz.com/s/BX22\n>\n> > Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52\n> rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)\"\n>\n> This estimate seems a long way off. Are the stats up-to-date on the\n> table? Try again after running: ANALYZE field;\n>\n> It might also be a good idea to ANALYZE all the tables. Is auto-vacuum\n> switched on?\n>\n> The plan in question would work better if you create an index on field\n> (field_name, unit_id);\n>\n> but I think if you update the stats the plan will switch.\n>\n> A HashJoin, hashing \"unit\" and index scanning on field_field_name_idx\n> would have been a much smarter plan choice for the planner to make.\n>\n> Also how many distinct field_names are there? SELECT COUNT(DISTINCT\n> field_name) FROM field;\n>\n> You may want to increase the histogram buckets on that columns if\n> there are more than 100 field names, and the number of rows with each\n> field name is highly variable. ALTER TABLE field ALTER COLUMN\n> field_name SET STATISTICS <n buckets>; 100 is the default, and 10000\n> is the maximum.\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n\n\n-- \nSigned,\nAlessandro Ferrucci\n\nHi Dave - thank you very much for all this advice! I will try each of these and post back results (some of this stuff, like creating the index, which is happening now, takes a very long time).Thanks again for all these pointers.Cheers,AlessandroOn Wed, Apr 26, 2017 at 12:12 AM, David Rowley <[email protected]> wrote:On 26 April 2017 at 15:19, Alessandro Ferrucci\n<[email protected]> wrote:\n> After about 40 inutes the slow query finally finished and the result of the\n> EXPLAIN plan can be found here:\n>\n> https://explain.depesz.com/s/BX22\n\n> Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52 rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)\"\n\nThis estimate seems a long way off. Are the stats up-to-date on the\ntable? Try again after running: ANALYZE field;\n\nIt might also be a good idea to ANALYZE all the tables. Is auto-vacuum\nswitched on?\n\nThe plan in question would work better if you create an index on field\n(field_name, unit_id);\n\nbut I think if you update the stats the plan will switch.\n\nA HashJoin, hashing \"unit\" and index scanning on field_field_name_idx\nwould have been a much smarter plan choice for the planner to make.\n\nAlso how many distinct field_names are there? SELECT COUNT(DISTINCT\nfield_name) FROM field;\n\nYou may want to increase the histogram buckets on that columns if\nthere are more than 100 field names, and the number of rows with each\nfield name is highly variable. 
ALTER TABLE field ALTER COLUMN\nfield_name SET STATISTICS <n buckets>; 100 is the default, and 10000\nis the maximum.\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n-- Signed,Alessandro Ferrucci",
"msg_date": "Wed, 26 Apr 2017 07:35:35 -0400",
"msg_from": "Alessandro Ferrucci <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with 3 table joins"
},
{
"msg_contents": "\n> Hi Eskil - \n> \n> \n> The I believe the id-field you're referring to is the UNIT.UNIT_ID, I\n> could change this to a varchar, however that column is not used in the\n> query in question, so that wouldn't have any effect on the query's\n> performance.\n\nSorry, I did not notice that the column \"unit_id\" existed in both \"unit\"\nand \"field\" tables.\n\n / Eskil\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 13:35:57 +0200",
"msg_from": "Johan Fredriksson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
},
{
"msg_contents": "\n\n----- Mensaje original -----\n> De: \"Alessandro Ferrucci\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 26 de Abril 2017 0:19:37\n> Asunto: Re: [PERFORM] Slow query with 3 table joins\n> \n> \n> \n> After about 40 inutes the slow query finally finished and the result\n> of the EXPLAIN plan can be found here:\n> \n> \n> https://explain.depesz.com/s/BX22\n> \n> \n> Thanks,\n> Alessandro Ferrucci\n\n1) Looking at the \"Rows removed by filter\" in that explain, looks like a selectivity issue: Many (many many) rows are fetched, just to be rejected later. \nI think you can try a partial index on ''field (unit_id) where field_name=\"SHEETS_PRESENT\"'', if it is practical to you.\nSee https://www.postgresql.org/docs/current/static/indexes-partial.html for a good read about partial indexes.\n\n2) 9.2 is a pretty old version of PG. If you are migrating yet, you should consider a more recent version\n\nHTH\n\nGerardo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 08:27:19 -0400 (EDT)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
},
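A sketch of the partial index Gerardo describes (index name illustrative); note that the planner will only consider it for queries whose WHERE clause implies the same field_name predicate:

  CREATE INDEX field_unit_id_sheets_present_idx
      ON field (unit_id)
      WHERE field_name = 'SHEETS_PRESENT';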
{
"msg_contents": "Some other approaches you could try:\n\n1) What about an hashed index? You could make \nCREATE INDEX ON FIELD (unit_id, hashtext(field_name))\n\nand changing your query accordingly....\n\n\"....where hashtext(FIELD.FIELD_NAME)=hashtext('SHEETS_PRESENT') ....\"\n\n2) Partitioning (not native yet, but can be simulated through inheritance), like in\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html\nThis could work well if you have a sort of limited different values in FIELD.FIELD_NAME\n\nGerardo\n\n----- Mensaje original -----\n> De: \"Alessandro Ferrucci\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 26 de Abril 2017 0:19:37\n> Asunto: Re: [PERFORM] Slow query with 3 table joins\n> \n> \n> \n> After about 40 inutes the slow query finally finished and the result\n> of the EXPLAIN plan can be found here:\n> \n> \n> https://explain.depesz.com/s/BX22\n> \n> \n> Thanks,\n> Alessandro Ferrucci\n> \n> \n> On Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci <\n> [email protected] > wrote:\n> \n> \n> \n> \n> Hello - I am migrating a current system to PostgreSQL and I am having\n> an issue with a relatively straightforward query being extremely\n> slow.\n> \n> \n> The following are the definitions of the tables:\n> \n> \n> CREATE TABLE popt_2017.unit\n> (\n> id serial NOT NULL,\n> unit_id text,\n> batch_id text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT unit_pkey PRIMARY KEY (id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> \n> \n> CREATE TABLE popt_2017.field\n> (\n> id serial NOT NULL,\n> unit_id integer,\n> subunit_data_id integer,\n> field_name character varying(50),\n> page_id character varying(20),\n> page_type character varying(20),\n> batch_id character varying(20),\n> file_name character varying(20),\n> data_concept integer,\n> \"GROUP\" integer,\n> omr_group integer,\n> pres integer,\n> reg_data text,\n> ocr_conf text,\n> ocr_dict text,\n> ocr_phon text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT field_pkey PRIMARY KEY (id),\n> CONSTRAINT field_subunit_data_id_fkey FOREIGN KEY (subunit_data_id)\n> REFERENCES popt_2017.subunit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fk FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fkey FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> \n> \n> CREATE TABLE popt_2017.answer\n> (\n> id serial NOT NULL,\n> field_id integer,\n> ans_status integer,\n> ans text,\n> luggage text,\n> arec text,\n> kfi_partition integer,\n> final boolean,\n> length integer,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT answer_pkey PRIMARY KEY (id),\n> CONSTRAINT answer_field_id_fk FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT answer_field_id_fkey FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> \n> \n> Below are the index definitions for those tables:\n> \n> \n> UNIT:\n> CREATE UNIQUE INDEX unit_pkey ON unit USING btree (id);\n> CREATE INDEX unit_unit_id_idx ON unit USING btree (unit_id);\n> \n> 
\n> FIELD:\n> CREATE UNIQUE INDEX field_pkey ON field USING btree (id)\n> CREATE INDEX field_unit_id_idx ON field USING btree (unit_id)\n> CREATE INDEX field_subunit_id_idx ON field USING btree\n> (subunit_data_id)\n> CREATE INDEX field_field_name_idx ON field USING btree (field_name)\n> \n> \n> ANSWER:\n> CREATE UNIQUE INDEX answer_pkey ON answer USING btree (id)\n> CREATE INDEX answer_field_id_idx ON answer USING btree (field_id)\n> CREATE INDEX answer_ans_idx ON answer USING btree (ans)\n> \n> \n> The tables each have the following number of rows:\n> \n> \n> UNIT: 10,315\n> FIELD: 139,397,965\n> ANSWER: 3,463,300\n> \n> \n> The query in question is:\n> \n> \n> SELECT\n> UNIT.ID AS UNIT_ID,\n> UNIT.UNIT_ID AS UNIT_UNIT_ID,\n> UNIT.BATCH_ID AS UNIT_BATCH_ID,\n> UNIT.CREATE_DATE AS UNIT_CREATE_DATE,\n> UNIT.UPDATE_DATE AS UNIT_UPDATE_DATE\n> FROM\n> UNIT, FIELD, ANSWER\n> WHERE\n> UNIT.ID =FIELD.UNIT_ID AND\n> FIELD.ID =ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n> \n> \n> I attempted to run an EXPLAIN (ANALYZE,BUFFERS) and the query has\n> been running for 32 minutes now, So I won't be able to post the\n> results (as I've never been able to get the query to actually\n> finish.\n> \n> \n> But, if I remove the join to UNIT (and just join FIELD and ANSWER)\n> the resulting query is sufficiently fast, (the first time it ran in\n> roughly 3 seconds), the query as such is:\n> \n> \n> SELECT * FROM\n> ANSWER, FIELD\n> WHERE\n> FIELD.ID =ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n> \n> \n> The EXPLAIN ( ANALYZE, BUFFERS ) output of that query can be found\n> here https://explain.depesz.com/s/ueJq\n> \n> \n> These tables are static for now, so they do not get DELETEs or\n> INSERTS at all and I have run VACUUM ANALYZE on all the affected\n> tables.\n> \n> \n> I'm running PostgreSQL PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu,\n> compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\n> \n> \n> I'm running this on RHEL 6.9\n> \n> \n> On a server with 32 GB of ram, 2 CPUs.\n> \n> \n> The following are the changes to postgresql.conf that I have made:\n> \n> \n> shared_buffers = 7871MB\n> effective_cache_size = 23611MB\n> work_mem = 1000MB\n> maintenance_work_mem = 2048MB\n> \n> \n> I have not changed the autovacuum settings, but since the tables are\n> static for now and I've already ran VACUUM that should not have any\n> effect.\n> \n> \n> Any assistance that could be provided is greatly appreciated.\n> \n> \n> Thank you,\n> Alessandro Ferrucci\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> --\n> \n> Signed,\n> Alessandro Ferrucci\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Apr 2017 11:00:52 -0400 (EDT)",
"msg_from": "Gerardo Herzig <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
},
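A minimal sketch of the inheritance-style partitioning Gerardo mentions (pre-10 style; the child table name is illustrative and insert routing is omitted):

  CREATE TABLE field_sheets_present (
      CHECK (field_name = 'SHEETS_PRESENT')
  ) INHERITS (field);

  -- rows must be routed into the child table by a trigger or by inserting
  -- into it directly; with constraint_exclusion = partition (the default),
  -- queries filtering on field_name = 'SHEETS_PRESENT' skip the other children.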
{
"msg_contents": "Dave -\n\nI had re-ran ANALYZE and VACUUM on all the tables and that did not seem to\nhave fixed the issue (the query still took a long time, however I did not\nlet it finish to produce a full EXPLAIN plan.\n\nHowever - after creating an index on FIELD(FIELD_NAME,UNIT_ID) and now the\nquery runs very fast ( I changed the FIELD_NAME clause so I would not run\ninto\nany caching ).\n\nThe new query is (notice the new FIELD_NAME value to avoid caching).\n\nEXPLAIN (ANALYZE,BUFFERS) SELECT\n UNIT.ID AS UNIT_ID,\n UNIT.UNIT_ID AS UNIT_UNIT_ID,\n UNIT.BATCH_ID AS UNIT_BATCH_ID,\n UNIT.CREATE_DATE AS UNIT_CREATE_DATE,\n UNIT.UPDATE_DATE AS UNIT_UPDATE_DATE\nFROM\n UNIT, FIELD, ANSWER\nWHERE\n UNIT.ID=FIELD.UNIT_ID AND\n FIELD.ID=ANSWER.FIELD_ID AND\n FIELD.FIELD_NAME='RESP_PH_PREFIX_ID' AND\n ANSWER.ANS='2';\n\nYou can find the EXPLAIN plan here:\n\nhttps://explain.depesz.com/s/apYR\n\nI believe this fixes the issue as far as I can see. I'm going to play\naround with it more and see how it goes.\n\nI wanted to also answer your question as to how many different values there\nare for the FIELD_NAME (and how many rows of each value there are)\n\nhere is it:\n\nSELECT FIELD_NAME,COUNT(*) FROM FIELD GROUP BY FIELD_NAME ORDER BY COUNT;\n\n\"PAGE_SERIAL\";10315\n\"SHEETS_PRESENT\";10315\n\"RESP_PH_AREA_ID\";10556\n\"RESP_PH_PREFIX_ID\";10559\n\"RESP_PH_SUFFIX_ID\";10560\n\"H_LOC_ADD_NO_IND\";10587\n\"H_TENURE_RENTED_IND\";11162\n\"H_TENURE_OWNED_MORT_IND\";11199\n\"H_TENURE_OWNED_FREE_IND\";11208\n\"PAPER_JIC_2_TEXT\";11413\n\"PAPER_JIC_1_TEXT\";11413\n\"H_TENURE_OCC_NOPAY_IND\";11478\n\"H_LOC_CHILDREN_IND\";11496\n\"H_LOC_RELATIVES_IND\";11496\n\"H_LOC_TEMPORARY_IND\";11500\n\"H_LOC_NONRELATIVES_IND\";11510\n\"PSEUDO_FIELD_MARGINALIA\";87744\n\"H_SIZE_STATED_INT\";207918\n\"P_REL_NO_IND\";825240\n\"P_REL_YES_IND\";825240\n\"P_REL_OTHER_NONREL_IND\";1239894\n\"P_REL_CHILD_ADOPTED_IND\";1239894\n\"P_REL_CHILD_BIO_IND\";1239894\n\"P_REL_CHILD_FOSTER_IND\";1239894\n\"P_REL_CHILD_STEP_IND\";1239894\n\"P_REL_GRANDCHILD_IND\";1239894\n\"P_REL_HOUSEROOMMATE_IND\";1239894\n\"P_REL_INLAW_CHILD_IND\";1239894\n\"P_REL_INLAW_PARENT_IND\";1239894\n\"P_REL_OTHER_REL_IND\";1239894\n\"P_REL_PARENT_IND\";1239894\n\"P_REL_PARTNER_OPP_IND\";1239894\n\"P_REL_PARTNER_SAME_IND\";1239894\n\"P_REL_SIBLING_IND\";1239894\n\"P_REL_SPOUSE_OPP_IND\";1239894\n\"P_REL_SPOUSE_SAME_IND\";1239894\n\"P_TRBSHR_CORP_NAME\";1446204\n\"P_TRBSHR_YES_IND\";1446204\n\"P_TRBSHR_NO_IND\";1446204\n\"P_LOC_ELSE_COLLEGE_IND\";1446204\n\"P_LOC_ELSE_JAIL_IND\";1446204\n\"P_LOC_ELSE_JOB_IND\";1446204\n\"P_LOC_ELSE_MILITARY_IND\";1446204\n\"P_LOC_ELSE_NO_IND\";1446204\n\"P_TRBENR_YES_IND\";1446204\n\"P_LOC_ELSE_SEASONAL_IND\";1446204\n\"P_LOC_ELSE_NURSINGHOME_IND\";1446204\n\"P_TRBENR_TRIBE_NAME\";1446204\n\"P_TRBENR_NO_IND\";1446204\n\"P_LOC_ELSE_RELATIVES_IND\";1446204\n\"P_LOC_ELSE_OTHER_IND\";1446204\n\"P_RACE_WHITE_IND\";1447812\n\"P_RACE2_TONGAN_IND\";1447812\n\"P_RACE2_AFAM_IND\";1447812\n\"P_RACE2_AIAN_TEXT\";1447812\n\"P_RACE2_ASIANINDIAN_IND\";1447812\n\"P_RACE2_ASIAN_TEXT\";1447812\n\"P_RACE2_BLACK_TEXT\";1447812\n\"P_RACE2_CHAMORRO_IND\";1447812\n\"P_RACE2_CHINESE_IND\";1447812\n\"P_RACE2_COLOMBIAN_IND\";1447812\n\"P_RACE2_CUBAN_IND\";1447812\n\"P_RACE2_DOMINICAN_IND\";1447812\n\"P_RACE2_EGYPTIAN_IND\";1447812\n\"P_RACE2_ENGLISH_IND\";1447812\n\"P_RACE2_ETHIOPIAN_IND\";1447812\n\"P_RACE2_FIJIAN_IND\";1447812\n\"P_RACE2_FILIPINO_IND\";1447812\n\"P_RACE2_FRENCH_IND\";1447812\n\"P_RACE2_GERMAN_IND\";1447812\n\"P_RACE2_HAITIAN_IND
\";1447812\n\"P_RACE2_HISP_TEXT\";1447812\n\"P_RACE2_IRANIAN_IND\";1447812\n\"P_RACE2_IRISH_IND\";1447812\n\"P_RACE2_ISRAELI_IND\";1447812\n\"P_RACE2_ITALIAN_IND\";1447812\n\"P_RACE2_JAMAICAN_IND\";1447812\n\"P_RACE2_JAPANESE_IND\";1447812\n\"P_RACE2_KOREAN_IND\";1447812\n\"P_RACE2_LEBANESE_IND\";1447812\n\"P_RACE2_MARSHALLESE_IND\";1447812\n\"P_RACE2_MENA_TEXT\";1447812\n\"P_RACE2_MEXICAN_IND\";1447812\n\"P_RACE2_MOROCCAN_IND\";1447812\n\"P_RACE2_NATHAWAIIAN_IND\";1447812\n\"P_RACE2_NHPI_TEXT\";1447812\n\"P_RACE2_NIGERIAN_IND\";1447812\n\"P_RACE2_POLISH_IND\";1447812\n\"P_RACE2_PUERTORICAN_IND\";1447812\n\"P_RACE2_SALVADORAN_IND\";1447812\n\"P_RACE2_SAMOAN_IND\";1447812\n\"P_RACE2_SOMALI_IND\";1447812\n\"P_RACE2_SOR_TEXT\";1447812\n\"P_RACE2_SYRIAN_IND\";1447812\n\"P_RACE2_VIETNAMESE_IND\";1447812\n\"P_RACE2_WHITE_TEXT\";1447812\n\"P_RACE_AIAN_IND\";1447812\n\"P_RACE_ASIAN_IND\";1447812\n\"P_RACE_BLACK_IND\";1447812\n\"P_RACE_HISP_IND\";1447812\n\"P_RACE_MENA_IND\";1447812\n\"P_RACE_NHPI_IND\";1447812\n\"P_RACE_SOR_IND\";1447812\n\"P_SEX_MALE_IND\";2273052\n\"P_SEX_FEMALE_IND\";2273052\n\"P_MIDDLE_NAME\";2273052\n\"P_LAST_NAME\";2273052\n\"P_FIRST_NAME\";2273052\n\"P_BIRTH_YEAR_INT\";2273052\n\"P_BIRTH_MONTH_INT\";2273052\n\"P_BIRTH_DAY_INT\";2273052\n\"P_AGE_INT\";2273052\n\n\nI want to give a HUGE thanks to everyone who took the time to look at my\nissue and provide insight and assistance, you folks are truly awesome!\n\n\n\nOn Wed, Apr 26, 2017 at 12:12 AM, David Rowley <[email protected]\n> wrote:\n\n> On 26 April 2017 at 15:19, Alessandro Ferrucci\n> <[email protected]> wrote:\n> > After about 40 inutes the slow query finally finished and the result of\n> the\n> > EXPLAIN plan can be found here:\n> >\n> > https://explain.depesz.com/s/BX22\n>\n> > Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52\n> rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)\"\n>\n> This estimate seems a long way off. Are the stats up-to-date on the\n> table? Try again after running: ANALYZE field;\n>\n> It might also be a good idea to ANALYZE all the tables. Is auto-vacuum\n> switched on?\n>\n> The plan in question would work better if you create an index on field\n> (field_name, unit_id);\n>\n> but I think if you update the stats the plan will switch.\n>\n> A HashJoin, hashing \"unit\" and index scanning on field_field_name_idx\n> would have been a much smarter plan choice for the planner to make.\n>\n> Also how many distinct field_names are there? SELECT COUNT(DISTINCT\n> field_name) FROM field;\n>\n> You may want to increase the histogram buckets on that columns if\n> there are more than 100 field names, and the number of rows with each\n> field name is highly variable. ALTER TABLE field ALTER COLUMN\n> field_name SET STATISTICS <n buckets>; 100 is the default, and 10000\n> is the maximum.\n>\n>\n> --\n> David Rowley http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Training & Services\n>\n\n\n\n-- \nSigned,\nAlessandro Ferrucci\n\nDave - I had re-ran ANALYZE and VACUUM on all the tables and that did not seem to have fixed the issue (the query still took a long time, however I did not let it finish to produce a full EXPLAIN plan.However - after creating an index on FIELD(FIELD_NAME,UNIT_ID) and now the query runs very fast ( I changed the FIELD_NAME clause so I would not run into any caching ). 
The new query is (notice the new FIELD_NAME value to avoid caching).EXPLAIN (ANALYZE,BUFFERS) SELECT UNIT.ID AS UNIT_ID, UNIT.UNIT_ID AS UNIT_UNIT_ID, UNIT.BATCH_ID AS UNIT_BATCH_ID, UNIT.CREATE_DATE AS UNIT_CREATE_DATE, UNIT.UPDATE_DATE AS UNIT_UPDATE_DATEFROM UNIT, FIELD, ANSWERWHERE UNIT.ID=FIELD.UNIT_ID AND FIELD.ID=ANSWER.FIELD_ID AND FIELD.FIELD_NAME='RESP_PH_PREFIX_ID' AND ANSWER.ANS='2';You can find the EXPLAIN plan here:https://explain.depesz.com/s/apYRI believe this fixes the issue as far as I can see. I'm going to play around with it more and see how it goes.I wanted to also answer your question as to how many different values there are for the FIELD_NAME (and how many rows of each value there are)here is it:SELECT FIELD_NAME,COUNT(*) FROM FIELD GROUP BY FIELD_NAME ORDER BY COUNT;\"PAGE_SERIAL\";10315\"SHEETS_PRESENT\";10315\"RESP_PH_AREA_ID\";10556\"RESP_PH_PREFIX_ID\";10559\"RESP_PH_SUFFIX_ID\";10560\"H_LOC_ADD_NO_IND\";10587\"H_TENURE_RENTED_IND\";11162\"H_TENURE_OWNED_MORT_IND\";11199\"H_TENURE_OWNED_FREE_IND\";11208\"PAPER_JIC_2_TEXT\";11413\"PAPER_JIC_1_TEXT\";11413\"H_TENURE_OCC_NOPAY_IND\";11478\"H_LOC_CHILDREN_IND\";11496\"H_LOC_RELATIVES_IND\";11496\"H_LOC_TEMPORARY_IND\";11500\"H_LOC_NONRELATIVES_IND\";11510\"PSEUDO_FIELD_MARGINALIA\";87744\"H_SIZE_STATED_INT\";207918\"P_REL_NO_IND\";825240\"P_REL_YES_IND\";825240\"P_REL_OTHER_NONREL_IND\";1239894\"P_REL_CHILD_ADOPTED_IND\";1239894\"P_REL_CHILD_BIO_IND\";1239894\"P_REL_CHILD_FOSTER_IND\";1239894\"P_REL_CHILD_STEP_IND\";1239894\"P_REL_GRANDCHILD_IND\";1239894\"P_REL_HOUSEROOMMATE_IND\";1239894\"P_REL_INLAW_CHILD_IND\";1239894\"P_REL_INLAW_PARENT_IND\";1239894\"P_REL_OTHER_REL_IND\";1239894\"P_REL_PARENT_IND\";1239894\"P_REL_PARTNER_OPP_IND\";1239894\"P_REL_PARTNER_SAME_IND\";1239894\"P_REL_SIBLING_IND\";1239894\"P_REL_SPOUSE_OPP_IND\";1239894\"P_REL_SPOUSE_SAME_IND\";1239894\"P_TRBSHR_CORP_NAME\";1446204\"P_TRBSHR_YES_IND\";1446204\"P_TRBSHR_NO_IND\";1446204\"P_LOC_ELSE_COLLEGE_IND\";1446204\"P_LOC_ELSE_JAIL_IND\";1446204\"P_LOC_ELSE_JOB_IND\";1446204\"P_LOC_ELSE_MILITARY_IND\";1446204\"P_LOC_ELSE_NO_IND\";1446204\"P_TRBENR_YES_IND\";1446204\"P_LOC_ELSE_SEASONAL_IND\";1446204\"P_LOC_ELSE_NURSINGHOME_IND\";1446204\"P_TRBENR_TRIBE_NAME\";1446204\"P_TRBENR_NO_IND\";1446204\"P_LOC_ELSE_RELATIVES_IND\";1446204\"P_LOC_ELSE_OTHER_IND\";1446204\"P_RACE_WHITE_IND\";1447812\"P_RACE2_TONGAN_IND\";1447812\"P_RACE2_AFAM_IND\";1447812\"P_RACE2_AIAN_TEXT\";1447812\"P_RACE2_ASIANINDIAN_IND\";1447812\"P_RACE2_ASIAN_TEXT\";1447812\"P_RACE2_BLACK_TEXT\";1447812\"P_RACE2_CHAMORRO_IND\";1447812\"P_RACE2_CHINESE_IND\";1447812\"P_RACE2_COLOMBIAN_IND\";1447812\"P_RACE2_CUBAN_IND\";1447812\"P_RACE2_DOMINICAN_IND\";1447812\"P_RACE2_EGYPTIAN_IND\";1447812\"P_RACE2_ENGLISH_IND\";1447812\"P_RACE2_ETHIOPIAN_IND\";1447812\"P_RACE2_FIJIAN_IND\";1447812\"P_RACE2_FILIPINO_IND\";1447812\"P_RACE2_FRENCH_IND\";1447812\"P_RACE2_GERMAN_IND\";1447812\"P_RACE2_HAITIAN_IND\";1447812\"P_RACE2_HISP_TEXT\";1447812\"P_RACE2_IRANIAN_IND\";1447812\"P_RACE2_IRISH_IND\";1447812\"P_RACE2_ISRAELI_IND\";1447812\"P_RACE2_ITALIAN_IND\";1447812\"P_RACE2_JAMAICAN_IND\";1447812\"P_RACE2_JAPANESE_IND\";1447812\"P_RACE2_KOREAN_IND\";1447812\"P_RACE2_LEBANESE_IND\";1447812\"P_RACE2_MARSHALLESE_IND\";1447812\"P_RACE2_MENA_TEXT\";1447812\"P_RACE2_MEXICAN_IND\";1447812\"P_RACE2_MOROCCAN_IND\";1447812\"P_RACE2_NATHAWAIIAN_IND\";1447812\"P_RACE2_NHPI_TEXT\";1447812\"P_RACE2_NIGERIAN_IND\";1447812\"P_RACE2_POLISH_IND\";1447812\"P_RACE2_PUERTORICAN_IND\";1447812\"P_RACE2_SALVADORAN_IND\
";1447812\"P_RACE2_SAMOAN_IND\";1447812\"P_RACE2_SOMALI_IND\";1447812\"P_RACE2_SOR_TEXT\";1447812\"P_RACE2_SYRIAN_IND\";1447812\"P_RACE2_VIETNAMESE_IND\";1447812\"P_RACE2_WHITE_TEXT\";1447812\"P_RACE_AIAN_IND\";1447812\"P_RACE_ASIAN_IND\";1447812\"P_RACE_BLACK_IND\";1447812\"P_RACE_HISP_IND\";1447812\"P_RACE_MENA_IND\";1447812\"P_RACE_NHPI_IND\";1447812\"P_RACE_SOR_IND\";1447812\"P_SEX_MALE_IND\";2273052\"P_SEX_FEMALE_IND\";2273052\"P_MIDDLE_NAME\";2273052\"P_LAST_NAME\";2273052\"P_FIRST_NAME\";2273052\"P_BIRTH_YEAR_INT\";2273052\"P_BIRTH_MONTH_INT\";2273052\"P_BIRTH_DAY_INT\";2273052\"P_AGE_INT\";2273052I want to give a HUGE thanks to everyone who took the time to look at my issue and provide insight and assistance, you folks are truly awesome!On Wed, Apr 26, 2017 at 12:12 AM, David Rowley <[email protected]> wrote:On 26 April 2017 at 15:19, Alessandro Ferrucci\n<[email protected]> wrote:\n> After about 40 inutes the slow query finally finished and the result of the\n> EXPLAIN plan can be found here:\n>\n> https://explain.depesz.com/s/BX22\n\n> Index Scan using field_unit_id_idx on field (cost=0.00..8746678.52 rows=850149 width=8) (actual time=0.030..2414345.998 rows=10315 loops=1)\"\n\nThis estimate seems a long way off. Are the stats up-to-date on the\ntable? Try again after running: ANALYZE field;\n\nIt might also be a good idea to ANALYZE all the tables. Is auto-vacuum\nswitched on?\n\nThe plan in question would work better if you create an index on field\n(field_name, unit_id);\n\nbut I think if you update the stats the plan will switch.\n\nA HashJoin, hashing \"unit\" and index scanning on field_field_name_idx\nwould have been a much smarter plan choice for the planner to make.\n\nAlso how many distinct field_names are there? SELECT COUNT(DISTINCT\nfield_name) FROM field;\n\nYou may want to increase the histogram buckets on that columns if\nthere are more than 100 field names, and the number of rows with each\nfield name is highly variable. ALTER TABLE field ALTER COLUMN\nfield_name SET STATISTICS <n buckets>; 100 is the default, and 10000\nis the maximum.\n\n\n--\n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n-- Signed,Alessandro Ferrucci",
"msg_date": "Wed, 26 Apr 2017 11:22:35 -0400",
"msg_from": "Alessandro Ferrucci <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Slow query with 3 table joins"
},
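Given how skewed the field_name counts above are (about 10,000 rows for SHEETS_PRESENT versus over 2 million for the P_* fields), it can be worth checking what the planner's statistics actually record for that column; a sketch:

  SELECT n_distinct, most_common_vals, most_common_freqs
  FROM pg_stats
  WHERE schemaname = 'popt_2017'
    AND tablename  = 'field'
    AND attname    = 'field_name';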
{
"msg_contents": "This looks like the same optimizer problem that occasionally plagues our\ncustomers. Whenever the estimated rows of a join==1, but the actual rows\nis higher, the optimizer may choose very poor plans. I made some attempts\nto fix. The very simple fix is to never estimate 1 for a join result.\nEven using 2 works remarkably well as a defense against this problem.\n\n\nhttps://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1a\n\nI also made a much more correct but complicated patch to track both\nuniqueness and selectivity thought the optimizer, but I didn't quite push\nthat over the finish line (I made a mistake in the hash join code, and got\ndistracted by my day job before finishing it).\n\n https://github.com/labkey-matthewb/postgres/commits/struct_selectivity\n\nThe second path is certainly better approach, but needs someone to pick up\nthe mission.\n\nMatt\n\nOn Wed, Apr 26, 2017 at 8:00 AM, Gerardo Herzig <[email protected]> wrote:\n\n> Some other approaches you could try:\n>\n> 1) What about an hashed index? You could make\n> CREATE INDEX ON FIELD (unit_id, hashtext(field_name))\n>\n> and changing your query accordingly....\n>\n> \"....where hashtext(FIELD.FIELD_NAME)=hashtext('SHEETS_PRESENT') ....\"\n>\n> 2) Partitioning (not native yet, but can be simulated through\n> inheritance), like in\n> https://www.postgresql.org/docs/current/static/ddl-partitioning.html\n> This could work well if you have a sort of limited different values in\n> FIELD.FIELD_NAME\n>\n> Gerardo\n>\n> ----- Mensaje original -----\n> > De: \"Alessandro Ferrucci\" <[email protected]>\n> > Para: [email protected]\n> > Enviados: Miércoles, 26 de Abril 2017 0:19:37\n> > Asunto: Re: [PERFORM] Slow query with 3 table joins\n> >\n> >\n> >\n> > After about 40 inutes the slow query finally finished and the result\n> > of the EXPLAIN plan can be found here:\n> >\n> >\n> > https://explain.depesz.com/s/BX22\n> >\n> >\n> > Thanks,\n> > Alessandro Ferrucci\n> >\n> >\n> > On Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci <\n> > [email protected] > wrote:\n> >\n> >\n> >\n> >\n> > Hello - I am migrating a current system to PostgreSQL and I am having\n> > an issue with a relatively straightforward query being extremely\n> > slow.\n> >\n> >\n> > The following are the definitions of the tables:\n> >\n> >\n> > CREATE TABLE popt_2017.unit\n> > (\n> > id serial NOT NULL,\n> > unit_id text,\n> > batch_id text,\n> > create_date timestamp without time zone DEFAULT now(),\n> > update_date timestamp without time zone,\n> > CONSTRAINT unit_pkey PRIMARY KEY (id)\n> > )\n> > WITH (\n> > OIDS=FALSE\n> > );\n> >\n> >\n> > CREATE TABLE popt_2017.field\n> > (\n> > id serial NOT NULL,\n> > unit_id integer,\n> > subunit_data_id integer,\n> > field_name character varying(50),\n> > page_id character varying(20),\n> > page_type character varying(20),\n> > batch_id character varying(20),\n> > file_name character varying(20),\n> > data_concept integer,\n> > \"GROUP\" integer,\n> > omr_group integer,\n> > pres integer,\n> > reg_data text,\n> > ocr_conf text,\n> > ocr_dict text,\n> > ocr_phon text,\n> > create_date timestamp without time zone DEFAULT now(),\n> > update_date timestamp without time zone,\n> > CONSTRAINT field_pkey PRIMARY KEY (id),\n> > CONSTRAINT field_subunit_data_id_fkey FOREIGN KEY (subunit_data_id)\n> > REFERENCES popt_2017.subunit (id) MATCH SIMPLE\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n> > CONSTRAINT field_unit_id_fk FOREIGN KEY (unit_id)\n> > REFERENCES popt_2017.unit 
(id) MATCH FULL\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n> > CONSTRAINT field_unit_id_fkey FOREIGN KEY (unit_id)\n> > REFERENCES popt_2017.unit (id) MATCH SIMPLE\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n> > )\n> > WITH (\n> > OIDS=FALSE\n> > );\n> >\n> >\n> > CREATE TABLE popt_2017.answer\n> > (\n> > id serial NOT NULL,\n> > field_id integer,\n> > ans_status integer,\n> > ans text,\n> > luggage text,\n> > arec text,\n> > kfi_partition integer,\n> > final boolean,\n> > length integer,\n> > create_date timestamp without time zone DEFAULT now(),\n> > update_date timestamp without time zone,\n> > CONSTRAINT answer_pkey PRIMARY KEY (id),\n> > CONSTRAINT answer_field_id_fk FOREIGN KEY (field_id)\n> > REFERENCES popt_2017.field (id) MATCH FULL\n> > ON UPDATE NO ACTION ON DELETE NO ACTION,\n> > CONSTRAINT answer_field_id_fkey FOREIGN KEY (field_id)\n> > REFERENCES popt_2017.field (id) MATCH SIMPLE\n> > ON UPDATE NO ACTION ON DELETE NO ACTION\n> > )\n> > WITH (\n> > OIDS=FALSE\n> > );\n> >\n> >\n> > Below are the index definitions for those tables:\n> >\n> >\n> > UNIT:\n> > CREATE UNIQUE INDEX unit_pkey ON unit USING btree (id);\n> > CREATE INDEX unit_unit_id_idx ON unit USING btree (unit_id);\n> >\n> >\n> > FIELD:\n> > CREATE UNIQUE INDEX field_pkey ON field USING btree (id)\n> > CREATE INDEX field_unit_id_idx ON field USING btree (unit_id)\n> > CREATE INDEX field_subunit_id_idx ON field USING btree\n> > (subunit_data_id)\n> > CREATE INDEX field_field_name_idx ON field USING btree (field_name)\n> >\n> >\n> > ANSWER:\n> > CREATE UNIQUE INDEX answer_pkey ON answer USING btree (id)\n> > CREATE INDEX answer_field_id_idx ON answer USING btree (field_id)\n> > CREATE INDEX answer_ans_idx ON answer USING btree (ans)\n> >\n> >\n> > The tables each have the following number of rows:\n> >\n> >\n> > UNIT: 10,315\n> > FIELD: 139,397,965\n> > ANSWER: 3,463,300\n> >\n> >\n> > The query in question is:\n> >\n> >\n> > SELECT\n> > UNIT.ID AS UNIT_ID,\n> > UNIT.UNIT_ID AS UNIT_UNIT_ID,\n> > UNIT.BATCH_ID AS UNIT_BATCH_ID,\n> > UNIT.CREATE_DATE AS UNIT_CREATE_DATE,\n> > UNIT.UPDATE_DATE AS UNIT_UPDATE_DATE\n> > FROM\n> > UNIT, FIELD, ANSWER\n> > WHERE\n> > UNIT.ID =FIELD.UNIT_ID AND\n> > FIELD.ID =ANSWER.FIELD_ID AND\n> > FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> > ANSWER.ANS='2';\n> >\n> >\n> > I attempted to run an EXPLAIN (ANALYZE,BUFFERS) and the query has\n> > been running for 32 minutes now, So I won't be able to post the\n> > results (as I've never been able to get the query to actually\n> > finish.\n> >\n> >\n> > But, if I remove the join to UNIT (and just join FIELD and ANSWER)\n> > the resulting query is sufficiently fast, (the first time it ran in\n> > roughly 3 seconds), the query as such is:\n> >\n> >\n> > SELECT * FROM\n> > ANSWER, FIELD\n> > WHERE\n> > FIELD.ID =ANSWER.FIELD_ID AND\n> > FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> > ANSWER.ANS='2';\n> >\n> >\n> > The EXPLAIN ( ANALYZE, BUFFERS ) output of that query can be found\n> > here https://explain.depesz.com/s/ueJq\n> >\n> >\n> > These tables are static for now, so they do not get DELETEs or\n> > INSERTS at all and I have run VACUUM ANALYZE on all the affected\n> > tables.\n> >\n> >\n> > I'm running PostgreSQL PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu,\n> > compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 64-bit\n> >\n> >\n> > I'm running this on RHEL 6.9\n> >\n> >\n> > On a server with 32 GB of ram, 2 CPUs.\n> >\n> >\n> > The following are the changes to postgresql.conf that I have made:\n> >\n> >\n> > shared_buffers = 
7871MB\n> > effective_cache_size = 23611MB\n> > work_mem = 1000MB\n> > maintenance_work_mem = 2048MB\n> >\n> >\n> > I have not changed the autovacuum settings, but since the tables are\n> > static for now and I've already ran VACUUM that should not have any\n> > effect.\n> >\n> >\n> > Any assistance that could be provided is greatly appreciated.\n> >\n> >\n> > Thank you,\n> > Alessandro Ferrucci\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> >\n> > --\n> >\n> > Signed,\n> > Alessandro Ferrucci\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThis looks like the same optimizer problem that occasionally plagues our customers. Whenever the estimated rows of a join==1, but the actual rows is higher, the optimizer may choose very poor plans. I made some attempts to fix. The very simple fix is to never estimate 1 for a join result. Even using 2 works remarkably well as a defense against this problem. https://github.com/labkey-matthewb/postgres/commit/b1fd99f4deffbbf3db2172ccaba51a34f18d1b1aI also made a much more correct but complicated patch to track both uniqueness and selectivity thought the optimizer, but I didn't quite push that over the finish line (I made a mistake in the hash join code, and got distracted by my day job before finishing it). https://github.com/labkey-matthewb/postgres/commits/struct_selectivityThe second path is certainly better approach, but needs someone to pick up the mission.MattOn Wed, Apr 26, 2017 at 8:00 AM, Gerardo Herzig <[email protected]> wrote:Some other approaches you could try:\n\n1) What about an hashed index? You could make\nCREATE INDEX ON FIELD (unit_id, hashtext(field_name))\n\nand changing your query accordingly....\n\n\"....where hashtext(FIELD.FIELD_NAME)=hashtext('SHEETS_PRESENT') ....\"\n\n2) Partitioning (not native yet, but can be simulated through inheritance), like in\nhttps://www.postgresql.org/docs/current/static/ddl-partitioning.html\nThis could work well if you have a sort of limited different values in FIELD.FIELD_NAME\n\nGerardo\n\n----- Mensaje original -----\n> De: \"Alessandro Ferrucci\" <[email protected]>\n> Para: [email protected]\n> Enviados: Miércoles, 26 de Abril 2017 0:19:37\n> Asunto: Re: [PERFORM] Slow query with 3 table joins\n>\n>\n>\n> After about 40 inutes the slow query finally finished and the result\n> of the EXPLAIN plan can be found here:\n>\n>\n> https://explain.depesz.com/s/BX22\n>\n>\n> Thanks,\n> Alessandro Ferrucci\n>\n>\n> On Tue, Apr 25, 2017 at 11:10 PM, Alessandro Ferrucci <\n> [email protected] > wrote:\n>\n>\n>\n>\n> Hello - I am migrating a current system to PostgreSQL and I am having\n> an issue with a relatively straightforward query being extremely\n> slow.\n>\n>\n> The following are the definitions of the tables:\n>\n>\n> CREATE TABLE popt_2017.unit\n> (\n> id serial NOT NULL,\n> unit_id text,\n> batch_id text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT unit_pkey PRIMARY KEY (id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> CREATE TABLE popt_2017.field\n> (\n> id serial NOT NULL,\n> unit_id integer,\n> subunit_data_id integer,\n> field_name character varying(50),\n> page_id character varying(20),\n> page_type character varying(20),\n> batch_id character varying(20),\n> file_name character varying(20),\n> data_concept integer,\n> \"GROUP\" integer,\n> omr_group integer,\n> pres integer,\n> reg_data 
text,\n> ocr_conf text,\n> ocr_dict text,\n> ocr_phon text,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT field_pkey PRIMARY KEY (id),\n> CONSTRAINT field_subunit_data_id_fkey FOREIGN KEY (subunit_data_id)\n> REFERENCES popt_2017.subunit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fk FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT field_unit_id_fkey FOREIGN KEY (unit_id)\n> REFERENCES popt_2017.unit (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> CREATE TABLE popt_2017.answer\n> (\n> id serial NOT NULL,\n> field_id integer,\n> ans_status integer,\n> ans text,\n> luggage text,\n> arec text,\n> kfi_partition integer,\n> final boolean,\n> length integer,\n> create_date timestamp without time zone DEFAULT now(),\n> update_date timestamp without time zone,\n> CONSTRAINT answer_pkey PRIMARY KEY (id),\n> CONSTRAINT answer_field_id_fk FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH FULL\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT answer_field_id_fkey FOREIGN KEY (field_id)\n> REFERENCES popt_2017.field (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n>\n> Below are the index definitions for those tables:\n>\n>\n> UNIT:\n> CREATE UNIQUE INDEX unit_pkey ON unit USING btree (id);\n> CREATE INDEX unit_unit_id_idx ON unit USING btree (unit_id);\n>\n>\n> FIELD:\n> CREATE UNIQUE INDEX field_pkey ON field USING btree (id)\n> CREATE INDEX field_unit_id_idx ON field USING btree (unit_id)\n> CREATE INDEX field_subunit_id_idx ON field USING btree\n> (subunit_data_id)\n> CREATE INDEX field_field_name_idx ON field USING btree (field_name)\n>\n>\n> ANSWER:\n> CREATE UNIQUE INDEX answer_pkey ON answer USING btree (id)\n> CREATE INDEX answer_field_id_idx ON answer USING btree (field_id)\n> CREATE INDEX answer_ans_idx ON answer USING btree (ans)\n>\n>\n> The tables each have the following number of rows:\n>\n>\n> UNIT: 10,315\n> FIELD: 139,397,965\n> ANSWER: 3,463,300\n>\n>\n> The query in question is:\n>\n>\n> SELECT\n> UNIT.ID AS UNIT_ID,\n> UNIT.UNIT_ID AS UNIT_UNIT_ID,\n> UNIT.BATCH_ID AS UNIT_BATCH_ID,\n> UNIT.CREATE_DATE AS UNIT_CREATE_DATE,\n> UNIT.UPDATE_DATE AS UNIT_UPDATE_DATE\n> FROM\n> UNIT, FIELD, ANSWER\n> WHERE\n> UNIT.ID =FIELD.UNIT_ID AND\n> FIELD.ID =ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n>\n>\n> I attempted to run an EXPLAIN (ANALYZE,BUFFERS) and the query has\n> been running for 32 minutes now, So I won't be able to post the\n> results (as I've never been able to get the query to actually\n> finish.\n>\n>\n> But, if I remove the join to UNIT (and just join FIELD and ANSWER)\n> the resulting query is sufficiently fast, (the first time it ran in\n> roughly 3 seconds), the query as such is:\n>\n>\n> SELECT * FROM\n> ANSWER, FIELD\n> WHERE\n> FIELD.ID =ANSWER.FIELD_ID AND\n> FIELD.FIELD_NAME='SHEETS_PRESENT' AND\n> ANSWER.ANS='2';\n>\n>\n> The EXPLAIN ( ANALYZE, BUFFERS ) output of that query can be found\n> here https://explain.depesz.com/s/ueJq\n>\n>\n> These tables are static for now, so they do not get DELETEs or\n> INSERTS at all and I have run VACUUM ANALYZE on all the affected\n> tables.\n>\n>\n> I'm running PostgreSQL PostgreSQL 9.2.4 on x86_64-unknown-linux-gnu,\n> compiled by gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52), 
64-bit\n>\n>\n> I'm running this on RHEL 6.9\n>\n>\n> On a server with 32 GB of ram, 2 CPUs.\n>\n>\n> The following are the changes to postgresql.conf that I have made:\n>\n>\n> shared_buffers = 7871MB\n> effective_cache_size = 23611MB\n> work_mem = 1000MB\n> maintenance_work_mem = 2048MB\n>\n>\n> I have not changed the autovacuum settings, but since the tables are\n> static for now and I've already ran VACUUM that should not have any\n> effect.\n>\n>\n> Any assistance that could be provided is greatly appreciated.\n>\n>\n> Thank you,\n> Alessandro Ferrucci\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> --\n>\n> Signed,\n> Alessandro Ferrucci\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 26 Apr 2017 08:30:05 -0700",
"msg_from": "Matthew Bellew <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query with 3 table joins"
}
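Gerardo's first suggestion (the hashed expression index) made concrete, as a hedged sketch only: the schema and column names come from Alessandro's DDL above, the index name is invented, and hashtext() is an undocumented internal function, though it is IMMUTABLE, which is what an expression index requires.

    -- hashtext() returns a plain integer, so index entries are far narrower
    -- than the varchar(50) field_name values.
    CREATE INDEX field_unit_hashname_idx
        ON popt_2017.field (unit_id, hashtext(field_name));
    ANALYZE popt_2017.field;

    -- The query must repeat the exact indexed expression; keeping the plain
    -- equality as well guards against (rare) hash collisions.
    SELECT u.id, u.unit_id, u.batch_id, u.create_date, u.update_date
    FROM popt_2017.unit u
    JOIN popt_2017.field  f ON u.id = f.unit_id
    JOIN popt_2017.answer a ON f.id = a.field_id
    WHERE hashtext(f.field_name) = hashtext('SHEETS_PRESENT')
      AND f.field_name = 'SHEETS_PRESENT'
      AND a.ans = '2';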
] |
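Gerardo's second suggestion, partitioning FIELD by FIELD_NAME, can be simulated on 9.2 with inheritance as the linked documentation describes. A hedged sketch of the pattern, assuming the popt_2017 schema from the thread and an invented child-table name; it only pays off if FIELD_NAME has a limited set of values, with one child per value so the parent ends up empty.

    -- Parent keeps the definition; PK/FK constraints are not inherited,
    -- so re-add whatever each child needs.
    CREATE TABLE popt_2017.field_sheets_present (
        CHECK (field_name = 'SHEETS_PRESENT')
    ) INHERITS (popt_2017.field);

    -- Move the matching rows (a heavy one-off on a 139M-row table);
    -- repeat for every distinct field_name so the parent is left empty.
    INSERT INTO popt_2017.field_sheets_present
        SELECT * FROM ONLY popt_2017.field WHERE field_name = 'SHEETS_PRESENT';
    DELETE FROM ONLY popt_2017.field WHERE field_name = 'SHEETS_PRESENT';

    CREATE INDEX ON popt_2017.field_sheets_present (unit_id);
    CREATE INDEX ON popt_2017.field_sheets_present (id);
    ANALYZE popt_2017.field_sheets_present;

    -- With constraint_exclusion = partition (the default), a query against
    -- popt_2017.field filtering on field_name = 'SHEETS_PRESENT' then scans
    -- only this child.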
[
{
"msg_contents": "Hello everyone.\n\nI have table \"events\" with 10 millions records. if have this fields:\n\nColumn | Type | Modifiers | Storage | Stats target | Description \n---------------------+-----------------------------+-----------------------------------------------------+----------+--------------+-------------\ncached_user_ids | integer[] | | extended | | \nbuffered_start_time | timestamp without time zone | | plain | | \nbuffered_end_time | timestamp without time zone | | plain | |\n\nI am trying to add EXCLUDE CONSTRAINT to it:\nALTER TABLE events\nADD CONSTRAINT exclusion_events_on_users_overlap_and_buffers_overlap\nEXCLUDE USING gist\n(\ncached_user_ids WITH &&,\ntsrange(buffered_start_time, buffered_end_time, '[)') WITH &&\n)\nWHERE (\ncancelled IS FALSE\n)\n\nDatabase have active btree_gist and intearay extensions:\n\nselect * from pg_extension ;\nextname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition \n--------------------+----------+--------------+----------------+------------+-----------+--------------\nplpgsql | 10 | 11 | f | 1.0 | | \npg_stat_statements | 10 | 2200 | t | 1.4 | | \npgcrypto | 10 | 2200 | t | 1.3 | | \nbtree_gist | 17046 | 2200 | t | 1.2 | | \nintarray | 17046 | 2200 | t | 1.2 | | \"cached_user_ids\" contain small amount of intefer elements (<= 10).\n\nProblem, that this index was build in 2 days and did not finished (I stopped it). After update 10 million records fileds \"buffered_start_time\" and \"buffered_end_time\" to NULL index was builded in 30 minutes, but after this any insert start working very slow in this table.\n\nDATABASE=> EXPLAIN ANALYZE INSERT INTO \"events\" (\nDATABASE(> \"event_type_id\", \"organization_id\", \"start_time\", \"end_time\", \"invitees_limit\", \"location\", \"cached_user_ids\", \"buffered_start_time\", \"buffered_end_time\", \"created_at\", \"updated_at\", \"profile_owner_id\", \"profile_owner_type\"\nDATABASE(> )\nDATABASE-> VALUES (\nDATABASE(> 1, 1458, '2017-05-01 00:00:00', '2017-05-01 00:30:00', 1, 'Lutsk', '{1}', '2017-05-01 00:00:00', '2017-05-01 00:30:00', '2017-04-24 12:12:12', '2017-04-24 12:12:12', 1, 'User'\nDATABASE(> )\nDATABASE-> RETURNING \"id\"\nDATABASE-> ;\nQUERY PLAN \n--------------------------------------------------------------------------------------------------\nInsert on events (cost=0.00..0.00 rows=1 width=349) (actual time=82.534..82.536 rows=1 loops=1)\n-> Result (cost=0.00..0.00 rows=1 width=349) (actual time=0.046..0.047 rows=1 loops=1)\nPlanning time: 0.063 ms\nTrigger generate_uuid: time=3.601 calls=1\nExecution time: 82.734 ms\n(5 rows) Before this EXCLUDE CONSTRAINT:\n\nDATABASE=> EXPLAIN ANALYZE INSERT INTO \"events\" (\nDATABASE(> \"event_type_id\", \"organization_id\", \"start_time\", \"end_time\", \"invitees_limit\", \"location\", \"created_at\", \"updated_at\", \"profile_owner_id\", \"profile_owner_type\"\nDATABASE(> )\nDATABASE-> VALUES (\nDATABASE(> 1, 1458, '2017-05-02 00:00:00', '2017-05-02 00:30:00', 1, 'Lutsk', '2017-04-24 12:12:12', '2017-04-24 12:12:12', 1, 'User'\nDATABASE(> )\nDATABASE-> RETURNING \"id\"\nDATABASE-> \nDATABASE-> ;\nQUERY PLAN \n------------------------------------------------------------------------------------------------\nInsert on events (cost=0.00..0.00 rows=1 width=349) (actual time=1.159..1.159 rows=1 loops=1)\n-> Result (cost=0.00..0.00 rows=1 width=349) (actual time=0.011..0.011 rows=1 loops=1)\nPlanning time: 0.033 ms\nTrigger generate_uuid: time=0.303 calls=1\nExecution time: 1.207 ms\n(5 rows)\nSo I decided 
remove this EXCLUDE CONSTRAINT and start testing with \"user_id\" field, which is integer and \"user_ids\" filed, which is integger[] :\n\n# This takes 10 minutes\nALTER TABLE event_memberships2\nADD CONSTRAINT \"exclude_overlap1\" EXCLUDE\nUSING gist (user_id WITH =, duration WITH &&)\nWHERE (canceled IS FALSE AND user_id IS NOT NULL AND duration IS NOT NULL);\n# This takes forever minutes:\nALTER TABLE event_memberships2\nADD CONSTRAINT \"exclude_overlap2\" EXCLUDE\nUSING gist (user_ids WITH &&, duration WITH &&)\nWHERE (canceled IS FALSE AND user_ids IS NOT NULL AND duration IS NOT NULL);\n# This takes forever minutes:\nALTER TABLE event_memberships2\nADD CONSTRAINT \"exclude_overlap3\" EXCLUDE\nUSING gist (user_ids WITH &&)\nWHERE (canceled IS FALSE AND user_ids IS NOT NULL);\n# This takes forever minutes:\nALTER TABLE event_memberships2\nADD CONSTRAINT \"exclude_overlap3\" EXCLUDE\nUSING gist (user_ids gist__intbig_ops WITH &&)\nWHERE (canceled IS FALSE AND user_ids IS NOT NULL);\n\n\nPostgreSQL: 9.6.2\n\n\nSo the question: does EXCLUDE CONSTRAINT works with array fields? Maybe I am doing something wrong or don't understand problems with this indexes, which building PostgreSQL.\n\n\nIn the meantime, thank you so much for your attention and participation.\n-- \nAlexey Vasiliev\nHello everyone.I have table \"events\" with 10 millions records. if have this fields:Column | Type | Modifiers | Storage | Stats target | Description ---------------------+-----------------------------+-----------------------------------------------------+----------+--------------+------------- cached_user_ids | integer[] | | extended | | buffered_start_time | timestamp without time zone | | plain | | buffered_end_time | timestamp without time zone | | plain | |I am trying to add EXCLUDE CONSTRAINT to it:ALTER TABLE eventsADD CONSTRAINT exclusion_events_on_users_overlap_and_buffers_overlapEXCLUDE USING gist ( cached_user_ids WITH &&, tsrange(buffered_start_time, buffered_end_time, '[)') WITH && )WHERE ( cancelled IS FALSE)Database have active btree_gist and intearay extensions:select * from pg_extension ; extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition --------------------+----------+--------------+----------------+------------+-----------+-------------- plpgsql | 10 | 11 | f | 1.0 | | pg_stat_statements | 10 | 2200 | t | 1.4 | | pgcrypto | 10 | 2200 | t | 1.3 | | btree_gist | 17046 | 2200 | t | 1.2 | | intarray | 17046 | 2200 | t | 1.2 | |\"cached_user_ids\" contain small amount of intefer elements (<= 10).Problem, that this index was build in 2 days and did not finished (I stopped it). 
After update 10 million records fileds \"buffered_start_time\" and \"buffered_end_time\" to NULL index was builded in 30 minutes, but after this any insert start working very slow in this table.DATABASE=> EXPLAIN ANALYZE INSERT INTO \"events\" (DATABASE(> \"event_type_id\", \"organization_id\", \"start_time\", \"end_time\", \"invitees_limit\", \"location\", \"cached_user_ids\", \"buffered_start_time\", \"buffered_end_time\", \"created_at\", \"updated_at\", \"profile_owner_id\", \"profile_owner_type\"DATABASE(> )DATABASE-> VALUES (DATABASE(> 1, 1458, '2017-05-01 00:00:00', '2017-05-01 00:30:00', 1, 'Lutsk', '{1}', '2017-05-01 00:00:00', '2017-05-01 00:30:00', '2017-04-24 12:12:12', '2017-04-24 12:12:12', 1, 'User'DATABASE(> )DATABASE-> RETURNING \"id\"DATABASE-> ; QUERY PLAN -------------------------------------------------------------------------------------------------- Insert on events (cost=0.00..0.00 rows=1 width=349) (actual time=82.534..82.536 rows=1 loops=1) -> Result (cost=0.00..0.00 rows=1 width=349) (actual time=0.046..0.047 rows=1 loops=1) Planning time: 0.063 ms Trigger generate_uuid: time=3.601 calls=1 Execution time: 82.734 ms(5 rows)Before this EXCLUDE CONSTRAINT:DATABASE=> EXPLAIN ANALYZE INSERT INTO \"events\" (DATABASE(> \"event_type_id\", \"organization_id\", \"start_time\", \"end_time\", \"invitees_limit\", \"location\", \"created_at\", \"updated_at\", \"profile_owner_id\", \"profile_owner_type\"DATABASE(> )DATABASE-> VALUES (DATABASE(> 1, 1458, '2017-05-02 00:00:00', '2017-05-02 00:30:00', 1, 'Lutsk', '2017-04-24 12:12:12', '2017-04-24 12:12:12', 1, 'User'DATABASE(> )DATABASE-> RETURNING \"id\"DATABASE-> DATABASE-> ; QUERY PLAN ------------------------------------------------------------------------------------------------ Insert on events (cost=0.00..0.00 rows=1 width=349) (actual time=1.159..1.159 rows=1 loops=1) -> Result (cost=0.00..0.00 rows=1 width=349) (actual time=0.011..0.011 rows=1 loops=1) Planning time: 0.033 ms Trigger generate_uuid: time=0.303 calls=1 Execution time: 1.207 ms(5 rows)So I decided remove this EXCLUDE CONSTRAINT and start testing with \"user_id\" field, which is integer and \"user_ids\" filed, which is integger[] :# This takes 10 minutesALTER TABLE event_memberships2ADD CONSTRAINT \"exclude_overlap1\" EXCLUDEUSING gist (user_id WITH =, duration WITH &&)WHERE (canceled IS FALSE AND user_id IS NOT NULL AND duration IS NOT NULL);# This takes forever minutes:ALTER TABLE event_memberships2ADD CONSTRAINT \"exclude_overlap2\" EXCLUDEUSING gist (user_ids WITH &&, duration WITH &&)WHERE (canceled IS FALSE AND user_ids IS NOT NULL AND duration IS NOT NULL);# This takes forever minutes:ALTER TABLE event_memberships2ADD CONSTRAINT \"exclude_overlap3\" EXCLUDEUSING gist (user_ids WITH &&)WHERE (canceled IS FALSE AND user_ids IS NOT NULL);# This takes forever minutes:ALTER TABLE event_memberships2ADD CONSTRAINT \"exclude_overlap3\" EXCLUDEUSING gist (user_ids gist__intbig_ops WITH &&)WHERE (canceled IS FALSE AND user_ids IS NOT NULL);PostgreSQL: 9.6.2So the question: does EXCLUDE CONSTRAINT works with array fields? Maybe I am doing something wrong or don't understand problems with this indexes, which building PostgreSQL.In the meantime, thank you so much for your attention and participation.-- Alexey Vasiliev",
"msg_date": "Thu, 27 Apr 2017 13:38:44 +0300",
"msg_from": "=?UTF-8?B?QWxleGV5IFZhc2lsaWV2?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "=?UTF-8?B?RVhDTFVERSBDT05TVFJBSU5UIHdpdGggaW50YXJyYXk=?="
}
] |
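One way to narrow down which operator class is the expensive part of that exclusion constraint is to time plain GiST index builds (no constraint) on a sample of the real rows, one opclass at a time. A hedged sketch; events and its columns are from the post, the sample-table name is invented, and TABLESAMPLE is available on the 9.6.2 server mentioned.

    -- Roughly 10% of the real rows; adjust the fraction to taste.
    CREATE TABLE events_sample AS
    SELECT cached_user_ids,
           tsrange(buffered_start_time, buffered_end_time, '[)') AS buffered_range
    FROM events TABLESAMPLE SYSTEM (10);

    -- Build each candidate index separately and time it (e.g. \timing in psql).
    CREATE INDEX ON events_sample USING gist (buffered_range);                    -- core range_ops
    CREATE INDEX ON events_sample USING gist (cached_user_ids);                   -- intarray gist__int_ops
    CREATE INDEX ON events_sample USING gist (cached_user_ids gist__intbig_ops);  -- signature opclass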
[
{
"msg_contents": "Hi,\nI have a performance problem with my query. As a simplified example, I have a table called Book, which has three columns: id, released (timestamp) and author_id. I have a need to search for the latest books released by multiple authors, at a specific point in the history. This could be latest book between beginning of time and now, or latest book released last year etc. In other words, only the latest book for each author, in specific time window. I have also a combined index for released and author_id columns.\nFirst, I tried a simple query that selects maximum value of released and the author_id, which are grouped by the author_id (then later do a join by these author_id, released columns to get the whole rows). Performance of this query is pretty bad (Execution time around 250-300ms for five authors). See query and query plan in the link below:\nhttps://gist.github.com/jehie/ca9fac16b6e3c19612d815446a0e1bc0\n\nThe execution time seems to grow linearly when the number of author_ids increase (50ms per author_id). I don't completely understand why it takes so long for this query to execute and why it does not use the directional index scan?\nI also tried second query using limit (where I can only ask for one author_id at a time, so cannot use this directly when searching for books of multiple author), which performs nicely (0.2ms):\nhttps://gist.github.com/jehie/284e7852089f6debe22e05c63e73027f\n\nSo, any ideas how to make multiple-author lookups (like in the first query) perform better? Or any other ideas?\n\nHere is the SQL to create the Table, Index, generate some test data and both queries:\nhttps://gist.github.com/jehie/87665c03bee124f8a96de24cae798194\n\nThanks,\nJesse\n\n\n\n\n\n\n\n\n\n\nHi,\nI have a performance problem with my query. As a simplified example, I have a table called Book, which has three columns: id, released (timestamp) and author_id. I have a need to search for\n the latest books released by multiple authors, at a specific point in the history. This could be latest book between beginning of time and now, or latest book released last year etc. In other words, only the latest book for each author, in specific time window.\n I have also a combined index for released and author_id columns. \nFirst, I tried a simple query that selects maximum value of released and the author_id, which are grouped by the author_id (then later do a\n join by these author_id, released columns to get the whole rows). Performance of this query is pretty bad (Execution time around 250-300ms for five authors). See query and query plan in the link below:\n\nhttps://gist.github.com/jehie/ca9fac16b6e3c19612d815446a0e1bc0\n \nThe execution time seems to grow linearly when the number of author_ids increase (50ms per author_id). I don’t completely understand why it takes so long for this query to execute and why\n it does not use the directional index scan?\nI also tried second query using limit (where I can only ask for one author_id at a time, so cannot use this directly when searching for books\n of multiple author), which performs nicely (0.2ms):\nhttps://gist.github.com/jehie/284e7852089f6debe22e05c63e73027f\n \nSo, any ideas how to make multiple-author lookups (like in the first query) perform better? Or any other ideas?\n\nHere is the SQL to create the Table, Index, generate some test data and both queries:\nhttps://gist.github.com/jehie/87665c03bee124f8a96de24cae798194\n \nThanks,\nJesse",
"msg_date": "Thu, 4 May 2017 10:52:03 +0000",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Inefficient max query when using group by"
},
{
"msg_contents": "On Thu, May 4, 2017 at 3:52 AM, <[email protected]> wrote:\n\n> Hi,\n>\n> I have a performance problem with my query. As a simplified example, I\n> have a table called Book, which has three columns: id, released (timestamp)\n> and author_id. I have a need to search for the latest books released by\n> multiple authors, at a specific point in the history. This could be latest\n> book between beginning of time and now, or latest book released last year\n> etc. In other words, only the latest book for each author, in specific time\n> window. I have also a combined index for released and author_id columns.\n>\n\nAs far as the query itself, I suspect you are paying a penalty for the\nto_timestamp() calls. Try the same query with hard-coded timestamps:\n\"AND released<='2017-05-05 00:00:00' AND released>='1970-01-01 00:00:00'\"\nIf you need these queries to be lightning fast then this looks like a good\ncandidate for using Materialized Views:\nhttps://www.postgresql.org/docs/current/static/sql-creatematerializedview.html\n\nOn Thu, May 4, 2017 at 3:52 AM, <[email protected]> wrote:\n\n\nHi,\nI have a performance problem with my query. As a simplified example, I have a table called Book, which has three columns: id, released (timestamp) and author_id. I have a need to search for\n the latest books released by multiple authors, at a specific point in the history. This could be latest book between beginning of time and now, or latest book released last year etc. In other words, only the latest book for each author, in specific time window.\n I have also a combined index for released and author_id columns. As far as the query itself, I suspect you are paying a penalty for the to_timestamp() calls. Try the same query with hard-coded timestamps: \"AND released<='2017-05-05 00:00:00' AND released>='1970-01-01 00:00:00'\"If you need these queries to be lightning fast then this looks like a good candidate for using Materialized Views: https://www.postgresql.org/docs/current/static/sql-creatematerializedview.html",
"msg_date": "Thu, 4 May 2017 06:21:47 -0700",
"msg_from": "bricklen <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient max query when using group by"
},
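If the common case is "latest book per author as of now", bricklen's materialized-view idea could look roughly like the sketch below. It assumes the book(id, author_id, released) table described in the gists, the view name is invented, it must be refreshed whenever books change, and it does not help arbitrary historical windows.

    -- Latest release per author, precomputed; DISTINCT ON keeps the first row
    -- per author_id under this ORDER BY, i.e. the most recent "released".
    CREATE MATERIALIZED VIEW latest_book AS
    SELECT DISTINCT ON (author_id) author_id, id AS book_id, released
    FROM book
    ORDER BY author_id, released DESC;

    -- Unique index makes lookups cheap and allows CONCURRENTLY refreshes (9.4+).
    CREATE UNIQUE INDEX latest_book_author_idx ON latest_book (author_id);

    SELECT * FROM latest_book WHERE author_id IN ('1', '2', '3', '4', '5');

    REFRESH MATERIALIZED VIEW CONCURRENTLY latest_book;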
{
"msg_contents": "On 4 May 2017 at 22:52, <[email protected]> wrote:\n> I have a performance problem with my query. As a simplified example, I have\n> a table called Book, which has three columns: id, released (timestamp) and\n> author_id. I have a need to search for the latest books released by multiple\n> authors, at a specific point in the history. This could be latest book\n> between beginning of time and now, or latest book released last year etc. In\n> other words, only the latest book for each author, in specific time window.\n> I have also a combined index for released and author_id columns.\n>\n> First, I tried a simple query that selects maximum value of released and the\n> author_id, which are grouped by the author_id (then later do a join by these\n> author_id, released columns to get the whole rows). Performance of this\n> query is pretty bad (Execution time around 250-300ms for five authors). See\n> query and query plan in the link below:\n>\n> https://gist.github.com/jehie/ca9fac16b6e3c19612d815446a0e1bc0\n>\n>\n>\n> The execution time seems to grow linearly when the number of author_ids\n> increase (50ms per author_id). I don’t completely understand why it takes so\n> long for this query to execute and why it does not use the directional index\n> scan?\n>\n> I also tried second query using limit (where I can only ask for one\n> author_id at a time, so cannot use this directly when searching for books of\n> multiple author), which performs nicely (0.2ms):\n>\n> https://gist.github.com/jehie/284e7852089f6debe22e05c63e73027f\n>\n>\n>\n> So, any ideas how to make multiple-author lookups (like in the first query)\n> perform better? Or any other ideas?\n\nYes, you could sidestep the whole issue by using a LATERAL join.\n\nSomething like:\n\nEXPLAIN ANALYZE\nSELECT b.released, b.author_id\nFROM (VALUES('1'),('2'),('3'),('4'),('5')) a (author_id)\nCROSS JOIN LATERAL (SELECT released, author_id\n FROM book\n WHERE author_id = a.author_id\nAND released<=to_timestamp(2e9)\nAND released>=to_timestamp(0)\nORDER BY released desc\n LIMIT 1) b;\n\nor you could write a function which just runs that query. Although,\nwith the above or the function method, if you give this enough\nauthors, then it'll eventually become slower than the problem query.\nPerhaps if you know the number of authors will not be too great, then\nyou'll be ok.\n\n-- \n David Rowley http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 May 2017 01:51:34 +1200",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inefficient max query when using group by"
}
] |
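David's LATERAL form (and the fast LIMIT 1 query from the second gist) only turns into a handful of cheap probes if there is an index that leads with author_id and keeps released ordered within each author; the combined (released, author_id) index mentioned in the original post cannot serve that access path. A hedged sketch, reusing the book table from the gists; the index name is invented and author_id is assumed to be an integer in the probe below.

    -- Leading with author_id lets each per-author probe land directly on that
    -- author's slice; released DESC means the newest row in the window is the
    -- first index entry visited, so LIMIT 1 stops after a single fetch.
    CREATE INDEX book_author_released_idx ON book (author_id, released DESC);
    ANALYZE book;

    -- Shape of the probe this index serves:
    EXPLAIN ANALYZE
    SELECT released
    FROM book
    WHERE author_id = 1
      AND released >= '1970-01-01'
      AND released <= '2017-05-05'
    ORDER BY released DESC
    LIMIT 1;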
[
{
"msg_contents": "Hello Guys,\n\nWe are facing problem related to performance of Postgres. Indexes are not being utilized and Postgres is giving priority to seq scan. I read many articles of Postgres performance and found that we need to set the randome_page_cost value same as seq_page_cost because we are using SSD drives. We are running copy of Discourse forum, you can read more about Discourse here meta.discourse.org. Details of all Server hardware and Postgres version are given below.\n\nI am adding my Postgres configuration file in attachment, kindly review it and suggest the changes so that i can improve the performance of whole system. Currently queries are taking lot of time. I can also share the schema with you and queries in detail too.\n\nThanks\n\n\n\n Postgres Version : 9.5.4\n\n Server Hardware details :\n Dedicate machine\n 16 Physical cores 32 Logical cores\n RAM : 64 GB\n RAM Type : DDR3\n Drive Type : SSD\n Raid Controller : MegaRAID SAS 2108 Raid Card\n Configured Raids : 10\n No of Drives : 4\n File System : XFS\n\nRegards,\nJunaid\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 4 May 2017 14:10:01 +0000",
"msg_from": "Junaid Malik <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres performance issue"
},
{
"msg_contents": "On Thu, May 4, 2017 at 8:10 AM, Junaid Malik <[email protected]> wrote:\n> Hello Guys,\n>\n> We are facing problem related to performance of Postgres. Indexes are not\n> being utilized and Postgres is giving priority to seq scan. I read many\n> articles of Postgres performance and found that we need to set the\n> randome_page_cost value same as seq_page_cost because we are using SSD\n> drives. We are running copy of Discourse forum, you can read more about\n> Discourse here meta.discourse.org. Details of all Server hardware and\n> Postgres version are given below.\n>\n> I am adding my Postgres configuration file in attachment, kindly review it\n> and suggest the changes so that i can improve the performance of whole\n> system. Currently queries are taking lot of time. I can also share the\n> schema with you and queries in detail too.\n>\n> Thanks\n>\n>\n>\n> Postgres Version : 9.5.4\n>\n> Server Hardware details :\n> Dedicate machine\n> 16 Physical cores 32 Logical cores\n> RAM : 64 GB\n> RAM Type : DDR3\n> Drive Type : SSD\n> Raid Controller : MegaRAID SAS 2108 Raid Card\n> Configured Raids : 10\n> No of Drives : 4\n> File System : XFS\n\nPlease read this page\nhttps://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 May 2017 08:36:30 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance issue"
},
{
"msg_contents": "On Thu, May 4, 2017 at 8:36 AM, Scott Marlowe <[email protected]> wrote:\n> On Thu, May 4, 2017 at 8:10 AM, Junaid Malik <[email protected]> wrote:\n>> Hello Guys,\n>>\n>> We are facing problem related to performance of Postgres. Indexes are not\n>> being utilized and Postgres is giving priority to seq scan. I read many\n>> articles of Postgres performance and found that we need to set the\n>> randome_page_cost value same as seq_page_cost because we are using SSD\n>> drives. We are running copy of Discourse forum, you can read more about\n>> Discourse here meta.discourse.org. Details of all Server hardware and\n>> Postgres version are given below.\n>>\n>> I am adding my Postgres configuration file in attachment, kindly review it\n>> and suggest the changes so that i can improve the performance of whole\n>> system. Currently queries are taking lot of time. I can also share the\n>> schema with you and queries in detail too.\n>>\n>> Thanks\n>>\n>>\n>>\n>> Postgres Version : 9.5.4\n>>\n>> Server Hardware details :\n>> Dedicate machine\n>> 16 Physical cores 32 Logical cores\n>> RAM : 64 GB\n>> RAM Type : DDR3\n>> Drive Type : SSD\n>> Raid Controller : MegaRAID SAS 2108 Raid Card\n>> Configured Raids : 10\n>> No of Drives : 4\n>> File System : XFS\n>\n> Please read this page\n> https://wiki.postgresql.org/wiki/Guide_to_reporting_problems\n>\n\nOK so here's my quick critique of your conf file.\n\nmax_connections = 2000\n\nIf you really need to handle 2000 connections get a connection pooler\nlike pgbouncer in there to do it. 2000 active connections can swamp\nmost modern servers pretty quickly.\n\nshared_buffers = 20GB\n\nThis is fairly high and in my experience on a 64G machine is probably\na bit much. It likely isn't hurting performance much though.\n\nwork_mem = 10GB # min 64kB\n\nThis is insanely high. A lot of folks look at work_mem and think it's\na total number. It's not. It's per sort / operation. I.e. if 100\npeople run queries that each have 3 sorts they COULD allocated\n100*3*10G = 3000G of RAM. Further this is the kind of setting that\nonly becomes dangerous under heavy-ish loads. If you handle 3 or 4\nusers at a time normally, you'll never see a problem. Then someone\npoints a new site at your discourse instance and 10,000 people show up\nand bam, server goes unresponsive.\n\n#effective_io_concurrency = 1\n\nGiven your SSD raid you can probably look at raising this to 5 to 10 or so.\n\n\nThat's all I'm getting from your postgresql.conf. Not sure what your\nusage pattern is, but on something like a forum, it's likely there are\nno heavy transactional load, mostly read etc.\n\nAs for indexes getting used or not, if you have a small db right now,\nseq scans are likely as fast as index scans because there's just not\nas much to read. OTOH, if you have a decent sized db (couple gig to a\ncouple hundred gig) then if indexes are getting ignored they may not\nbe capable of being used due to data types and collation. In short we\nneed a much more detailed post of what you're doing, and how you're\nmeasuring performance and index usage and all that.\n\nThe more information you can post the better generally.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 May 2017 08:57:03 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance issue"
},
{
"msg_contents": "On Thu, May 4, 2017 at 8:10 AM, Junaid Malik <[email protected]> wrote:\n> Hello Guys,\n>\n> We are facing problem related to performance of Postgres. Indexes are not\n> being utilized and Postgres is giving priority to seq scan. I read many\n> articles of Postgres performance and found that we need to set the\n> randome_page_cost value same as seq_page_cost because we are using SSD\n> drives. We are running copy of Discourse forum, you can read more about\n> Discourse here meta.discourse.org. Details of all Server hardware and\n> Postgres version are given below.\n\nJust wondering if you've made any progress on this. If you get stuck\nlet us all know and somebody'll help out.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 12:50:41 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres performance issue"
}
] |
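A hedged sketch of the direction Scott suggests, using ALTER SYSTEM (available on the 9.5.4 server mentioned) rather than hand-editing postgresql.conf; the concrete values are illustrative guesses, not tuned numbers, and a pooler such as pgbouncer would sit in front of the database separately.

    -- work_mem is per sort/hash per query, not per server, so keep it modest.
    ALTER SYSTEM SET work_mem = '64MB';
    -- Let a pooler fan thousands of clients into a few hundred backends
    -- instead of max_connections = 2000.
    ALTER SYSTEM SET max_connections = 200;
    -- SSD RAID 10 can service several prefetch requests at once.
    ALTER SYSTEM SET effective_io_concurrency = 8;

    SELECT pg_reload_conf();  -- work_mem and effective_io_concurrency pick this
                              -- up; max_connections still needs a restart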
[
{
"msg_contents": "Hello,\n\nI use two dedicated bare metal servers (Online and Kimsufi). The first one takes much longer to execute a procedure that recreates a database by truncating its tables, then copying the data from a set of text files; it is however much faster for more typical SELECT and INSERT queries done by users.\n\nHere is the timing for the procedure :\n\n#Kimsufi server\ntime psql -f myfile.sql mydb\nreal\t0m12.585s\nuser\t0m0.200s\nsys\t0m0.076s\n\n#Online server\ntime psql -f myfile.sql mydb\nreal\t1m15.410s\nuser\t0m0.144s\nsys\t0m0.028s\n\nAs you can see, the Kimsufi server takes 12 seconds to complete the procedure, while the Online one needs 75 seconds.\n\nFor more usual queries however, the ratio is reversed, as shown by explain analyze for a typical query:\n\n#Kimsufi server\nmarica=> explain (analyze, buffers) SELECT t1.id_contentieux, t1.ref_dossier, t1.ref_assureur, noms_des_tiers(t1.id_contentieux) as id_tiers, t1.libelle, t1.affaire, 1 as authorized\nFROM tblcontentieux t1 WHERE id_contentieux IN (SELECT id_contentieux FROM tblcontentieux_log WHERE plainto_tsquery('vol') @@ tsv_libelle) AND id_client = 13 ORDER BY 2\n;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=543.29..543.56 rows=106 width=116) (actual time=19.870..19.885 rows=75 loops=1)\n Sort Key: t1.ref_dossier\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=689\n -> Nested Loop (cost=430.59..539.73 rows=106 width=116) (actual time=4.103..19.143 rows=75 loops=1)\n Buffers: shared hit=689\n -> HashAggregate (cost=430.31..430.49 rows=18 width=4) (actual time=2.077..2.266 rows=124 loops=1)\n Group Key: tblcontentieux_log.id_contentieux\n Buffers: shared hit=112\n -> Bitmap Heap Scan on tblcontentieux_log (cost=29.11..429.95 rows=142 width=4) (actual time=0.712..1.550 rows=147 loops=1)\n Recheck Cond: (plainto_tsquery('vol'::text) @@ tsv_libelle)\n Heap Blocks: exact=105\n Buffers: shared hit=112\n -> Bitmap Index Scan on tblcontentieux_log_tvs_libelle_idx (cost=0.00..29.07 rows=142 width=0) (actual time=0.632..0.632 rows=147 loops=1)\n Index Cond: (plainto_tsquery('vol'::text) @@ tsv_libelle)\n Buffers: shared hit=7\n -> Index Scan using tblcontentieux_pkey on tblcontentieux t1 (cost=0.28..4.59 rows=1 width=116) (actual time=0.018..0.019 rows=1 loops=124)\n Index Cond: (id_contentieux = tblcontentieux_log.id_contentieux)\n Filter: (id_client = 13)\n Rows Removed by Filter: 0\n Buffers: shared hit=372\n Planning time: 3.666 ms\n Execution time: 20.176 ms\n\n#Online server\nmarica=> explain (analyze,buffers) SELECT t1.id_contentieux, t1.ref_dossier, t1.ref_assureur, noms_des_tiers(t1.id_contentieux) as id_tiers, t1.libelle, t1.affaire, 1 as authorized\nFROM tblcontentieux t1 WHERE id_contentieux IN (SELECT id_contentieux FROM tblcontentieux_log WHERE plainto_tsquery('vol') @@ tsv_libelle) AND id_client = 13 ORDER BY 2;\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Sort (cost=492.01..492.27 rows=104 width=116) (actual time=10.660..10.673 rows=75 loops=1)\n Sort Key: t1.ref_dossier\n Sort Method: quicksort Memory: 35kB\n Buffers: shared hit=686\n -> Nested Loop (cost=390.14..488.52 rows=104 width=116) (actual time=1.363..10.066 rows=75 loops=1)\n Buffers: shared hit=686\n -> HashAggregate (cost=389.85..390.03 rows=18 width=4) 
(actual time=0.615..0.725 rows=124 loops=1)\n Group Key: tblcontentieux_log.id_contentieux\n Buffers: shared hit=109\n -> Bitmap Heap Scan on tblcontentieux_log (cost=13.08..389.51 rows=139 width=4) (actual time=0.156..0.465 rows=147 loops=1)\n Recheck Cond: (plainto_tsquery('vol'::text) @@ tsv_libelle)\n Heap Blocks: exact=106\n Buffers: shared hit=109\n -> Bitmap Index Scan on tblcontentieux_log_tvs_libelle_idx (cost=0.00..13.04 rows=139 width=0) (actual time=0.126..0.126 rows=147 loops=1)\n Index Cond: (plainto_tsquery('vol'::text) @@ tsv_libelle)\n Buffers: shared hit=3\n -> Index Scan using tblcontentieux_pkey on tblcontentieux t1 (cost=0.28..4.02 rows=1 width=116) (actual time=0.010..0.011 rows=1 loops=124)\n Index Cond: (id_contentieux = tblcontentieux_log.id_contentieux)\n Filter: (id_client = 13)\n Rows Removed by Filter: 0\n Buffers: shared hit=372\n Planning time: 1.311 ms\n Execution time: 10.813 ms\n\n\nBoth are bare metal servers, with 4GB of RAM; the dataset is small (compressed dump is 3MB). The main differences that I found are in disk I/O as shown by hdparm, and processor type :\n\n#Kimsufi server \nhdparm -tT /dev/sda\n Timing cached reads: 1744 MB in 2.00 seconds = 872.16 MB/sec\n Timing buffered disk reads: 482 MB in 3.00 seconds = 160.48 MB/sec\nProcessor Intel(R) Atom(TM) CPU N2800 @ 1.86GHz (4 cores, cache size : 512 KB)\nDisk 2TB, 7200rpm, db on 500MB partition\n\n#Online server\nhdparm -tT /dev/sda\n Timing cached reads: 2854 MB in 2.00 seconds = 1427.05 MB/sec\n Timing buffered disk reads: 184 MB in 3.00 seconds = 61.26 MB/sec\nProcessor Intel(R) Atom(TM) CPU C2350 @ 1.74GHz (2 cores, cache size : 1024 KB)\nDisk 1TB, 7200rpm, db on 1TB partition\n\nI've created two pastebins with the output of the following commands for each server:\n# hdparm /dev/sda\n# hdparm -i /dev/sda\n# df\n# cat /proc/cpuinfo\n# cat /proc/meminfo\n\n#Kimsufi server\nhttps://pastebin.com/3860hS92\n\n#Online server\nhttps://pastebin.com/FT1HFbD7\n\n\nMy questions: \n\n-Does the difference in 'buffered disk reads' explain the 6 fold increase in execution time for truncate/copy on the Online server?\n\n-Why are regular queries much faster on this same server?\n\n\n\n\n-- \n\t\t\t\t\tBien à vous, Vincent Veyron \n\nhttps://legalcase.libremen.com/\nLegal case management software\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 19:49:22 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speed differences between two servers"
},
{
"msg_contents": "On Mon, May 8, 2017 at 11:49 AM, Vincent Veyron <[email protected]> wrote:\n> Hello,\n>\n> I use two dedicated bare metal servers (Online and Kimsufi). The first one takes much longer to execute a procedure that recreates a database by truncating its tables, then copying the data from a set of text files; it is however much faster for more typical SELECT and INSERT queries done by users.\n>\n> Here is the timing for the procedure :\n>\n> #Kimsufi server\n> time psql -f myfile.sql mydb\n> real 0m12.585s\n> user 0m0.200s\n> sys 0m0.076s\n>\n> #Online server\n> time psql -f myfile.sql mydb\n> real 1m15.410s\n> user 0m0.144s\n> sys 0m0.028s\n>\n> My questions:\n>\n> -Does the difference in 'buffered disk reads' explain the 6 fold increase in execution time for truncate/copy on the Online server?\n\nThe most likely cause of the difference would be that one server IS\nhonoring fsync requests from the db and the other one isn't.\n\nIf you run pgbench on both (something simple like pgbench -c 1 -T 60,\naka one thread for 60 seconds) on a machine running on a 7200RPM hard\ndrive, you should get approximately 120 transactions per second, or\nless, since that's how many times a second a disk spinning at that\nspeed can write out data. If you get say 110 on the slow machine and\n800 on the fast one, there's the culprit, the fast machine is not\nhonoring fsync requests and is not crash-safe. I.e. if you start\nwriting to the db and pull the power plug out the back of the machine\nit will likely power up with a corrupted database.\n\n> -Why are regular queries much faster on this same server?\n\nThat's a whole nother subject. Most likely the faster machine can fit\nthe whole db in memory, or has much faster memory, or the whole\ndataset is cached etc etc.\n\nFor now concentrate on figuring out of you've got an fsync problem. If\nthe data is just test data etc that you can afford to lose then you\ncan leave off fsync and not worry. But in production this is rarely\nthe case.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 12:48:29 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed differences between two servers"
},
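Alongside the pgbench test, the "is fsync being honoured" question can also be narrowed down from SQL; a small sketch, using only stock setting names, nothing specific to these two machines.

    -- If any of these differ between the Kimsufi and Online boxes, the tps gap
    -- has a software explanation before blaming the drives.
    SELECT name, setting, source
    FROM pg_settings
    WHERE name IN ('fsync', 'synchronous_commit', 'full_page_writes',
                   'wal_sync_method', 'commit_delay');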
{
"msg_contents": "On Mon, 8 May 2017 12:48:29 -0600\nScott Marlowe <[email protected]> wrote:\n\nHi Scott,\n\nThank you for your input.\n\n> \n> The most likely cause of the difference would be that one server IS\n> honoring fsync requests from the db and the other one isn't.\n> \n> If you run pgbench on both (something simple like pgbench -c 1 -T 60,\n> aka one thread for 60 seconds) on a machine running on a 7200RPM hard\n> drive, you should get approximately 120 transactions per second\n\nHere are the results :\n\n#Kimsufi\npgbench -c 1 -T 60 test\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 6618\nlatency average: 9.069 ms\ntps = 110.270771 (including connections establishing)\ntps = 110.283733 (excluding connections establishing)\n\n#Online\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 1150\nlatency average: 52.317 ms\ntps = 19.114403 (including connections establishing)\ntps = 19.115739 (excluding connections establishing)\n\n\n> \n> > -Why are regular queries much faster on this same server?\n> \n> That's a whole nother subject. Most likely the faster machine can fit\n> the whole db in memory, or has much faster memory, or the whole\n> dataset is cached etc etc.\n> \n\nThe dataset is small (35 MB) and both servers have 4GB memory. It appears to be faster on the Online server.\n\nusing 'dmidecode -t 17' :\n\n#Kimsufi\nMemory Device\n\tArray Handle: 0x0016\n\tError Information Handle: Not Provided\n\tTotal Width: 64 bits\n\tData Width: 64 bits\n\tSize: 2048 MB\n\tForm Factor: DIMM\n\tSet: None\n\tLocator: SO DIMM 0\n\tBank Locator: Channel A DIMM0\n\tType: DDR3\n\tType Detail: Synchronous\n\tSpeed: 1066 MHz\n\tManufacturer: 0x0000000000000000\n\tSerial Number: 0x00000000\n\tAsset Tag: Unknown\n\tPart Number: 0x000000000000000000000000000000000000\n\tRank: Unknown\n\tConfigured Clock Speed: 1066 MHz\n\n[repeated for second locator]\n\n#Online\nMemory Device\n\tArray Handle: 0x0015\n\tError Information Handle: No Error\n\tTotal Width: Unknown\n\tData Width: Unknown\n\tSize: 4096 MB\n\tForm Factor: DIMM\n\tSet: None\n\tLocator: DIMM0\n\tBank Locator: BANK 0\n\tType: DDR3\n\tType Detail: Synchronous Unbuffered (Unregistered)\n\tSpeed: 1600 MHz\n\tManufacturer: <BAD INDEX>\n\tSerial Number: <BAD INDEX>\n\tAsset Tag: <BAD INDEX>\n\tPart Number: <BAD INDEX>\n\tRank: 1\n\tConfigured Clock Speed: 1333 MHz\n\tMinimum voltage: Unknown\n\tMaximum voltage: Unknown\n\tConfigured voltage: Unknown\n\n\n\n\n-- \n\t\t\t\t\tBien à vous, Vincent Veyron \n\nhttps://libremen.com\nLogiciels de gestion, libres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 May 2017 00:24:41 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed differences between two servers"
},
{
"msg_contents": "On Mon, May 8, 2017 at 4:24 PM, Vincent Veyron <[email protected]> wrote:\n> On Mon, 8 May 2017 12:48:29 -0600\n> Scott Marlowe <[email protected]> wrote:\n>\n> Hi Scott,\n>\n> Thank you for your input.\n>\n>>\n>> The most likely cause of the difference would be that one server IS\n>> honoring fsync requests from the db and the other one isn't.\n>>\n>> If you run pgbench on both (something simple like pgbench -c 1 -T 60,\n>> aka one thread for 60 seconds) on a machine running on a 7200RPM hard\n>> drive, you should get approximately 120 transactions per second\n>\n> Here are the results :\n>\n> #Kimsufi\n> pgbench -c 1 -T 60 test\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 6618\n> latency average: 9.069 ms\n> tps = 110.270771 (including connections establishing)\n> tps = 110.283733 (excluding connections establishing)\n\nJust under 120, looks like fsync is working.\n\n>\n> #Online\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 1150\n> latency average: 52.317 ms\n> tps = 19.114403 (including connections establishing)\n> tps = 19.115739 (excluding connections establishing)\n\nOK that's horrendous. My mobile phone is likely faster. We need to\nfigure out why it's so slow. If it's in a RAID-1 set it might be\nsyncing.\n\n>> > -Why are regular queries much faster on this same server?\n>>\n>> That's a whole nother subject. Most likely the faster machine can fit\n>> the whole db in memory, or has much faster memory, or the whole\n>> dataset is cached etc etc.\n>>\n>\n> The dataset is small (35 MB) and both servers have 4GB memory. It appears to be faster on the Online server.\n\nYeah it fits in memory. Select queries will only hit disk at bootup.\n\nFirst machine\nSNIP\n> Speed: 1066 MHz\nSNIP\n> Configured Clock Speed: 1066 MHz\n\nSecond machine\n\n> Speed: 1600 MHz\nSNIP\n> Configured Clock Speed: 1333 MHz\n\nYeah the second machine likely has a noticeably faster CPU than the\nfirst as well. It's about two years younger so yeah it's probably just\ncpu/mem that's making it fast.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 17:06:05 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed differences between two servers"
},
{
"msg_contents": "On Mon, May 8, 2017 at 5:06 PM, Scott Marlowe <[email protected]> wrote:\n> On Mon, May 8, 2017 at 4:24 PM, Vincent Veyron <[email protected]> wrote:\n>> On Mon, 8 May 2017 12:48:29 -0600\n>> Scott Marlowe <[email protected]> wrote:\n\n>>> > -Why are regular queries much faster on this same server?\n>>>\n>>> That's a whole nother subject. Most likely the faster machine can fit\n>>> the whole db in memory, or has much faster memory, or the whole\n>>> dataset is cached etc etc.\n>>>\n>>\n>> The dataset is small (35 MB) and both servers have 4GB memory. It appears to be faster on the Online server.\n>\n> Yeah it fits in memory. Select queries will only hit disk at bootup.\n>\n> First machine\n> SNIP\n>> Speed: 1066 MHz\n> SNIP\n>> Configured Clock Speed: 1066 MHz\n>\n> Second machine\n>\n>> Speed: 1600 MHz\n> SNIP\n>> Configured Clock Speed: 1333 MHz\n>\n> Yeah the second machine likely has a noticeably faster CPU than the\n> first as well. It's about two years younger so yeah it's probably just\n> cpu/mem that's making it fast.\n\nOK went back and looked at your original post. I seems like those two\nqueries that are 10 and 20 ms have essentially the same plan on\nsimilar sized dbs, so it's reasonable to assume the newer machine is\nabout twice as fast.\n\nWithout seeing what your test sql file does I have no idea what the\nbig difference in the other direction. You'll have to pull out and run\nthe individual queries, or turn on auto explain or something to see\nthe plans and compare. A lot of time it's just some simple tuning in\npostgresql.conf or maybe a database got an alter database on it to\nchange something? Either way use show all; to compare settings and get\nexplain (analyze) off of the slow queries.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 17:35:38 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed differences between two servers"
},
{
"msg_contents": "On Mon, 8 May 2017 17:35:38 -0600\nScott Marlowe <[email protected]> wrote:\n> Without seeing what your test sql file does I have no idea what the\n> big difference in the other direction. \n\nIt truncates 59 tables, copies data back from a set of text files, inserts a few single records and does a few select setval('') to reset the serial columns.\n\nhere it is :\nhttps://pastebin.com/LVsvFzkj\n\n>You'll have to pull out and run\n> the individual queries, or turn on auto explain or something to see\n> the plans and compare. \n\nI used log_duration; it shows that the truncate and all the \\copy are much slower, while all insert/select statements are twice as fast\n\n>A lot of time it's just some simple tuning in\n> postgresql.conf or maybe a database got an alter database on it to\n> change something? \n\nServer setups are identical : same software, same configurations, same databases.\n\nI've put in a ticket at the Online provider with the data to see if they have an answer (now 14H00 in Paris, so they may take a while to respond)\n\n\n-- \n\t\t\t\t\tBien à vous, Vincent Veyron \n\nhttps://compta.libremen.com\nLogiciel libre de comptabilité générale en partie double\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 May 2017 14:02:42 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed differences between two servers"
},
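Since the reload script issues many separate autocommitted statements (the 59 TRUNCATEs, the \copy loads, the INSERTs and setval calls), each one pays a synchronous commit, which multiplies on the box with slow flushes. A hedged sketch of two ways to cut that cost; both are generic, not specific to the pastebin script.

    -- Option 1: one transaction for the whole reload, so there is a single
    -- commit instead of one per statement (psql -1 -f myfile.sql does the
    -- same thing without editing the file).
    BEGIN;
    -- ... the TRUNCATEs, \copy loads, INSERTs and setval() calls ...
    COMMIT;

    -- Option 2: keep autocommit but skip the per-commit flush; acceptable here
    -- because the database is rebuilt from the text files anyway.
    SET synchronous_commit = off;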
{
"msg_contents": "\nWell, the response to the ticket was quite fast :\n\nhttps://status.online.net/index.php?do=details&task_id=720\n\nHere's the stated cause :\n\n>Our tests have confirmed an issue caused by the fans of the power supplies installed in several chassis.\n\n>The fans create vibrations amplifying errors on the discs.\n\nNow on to decide whether I'm waiting for the fix or re-building a new box...\n\nThanks a bunch for your help.\n\n\n-- \n\t\t\t\t\tBien à vous, Vincent Veyron \n\nhttps://libremen.com\nLogiciels de gestion, libres\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 May 2017 15:08:52 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed differences between two servers"
},
{
"msg_contents": "On Tue, May 9, 2017 at 7:08 AM, Vincent Veyron <[email protected]> wrote:\n>\n> Well, the response to the ticket was quite fast :\n>\n> https://status.online.net/index.php?do=details&task_id=720\n>\n> Here's the stated cause :\n>\n>>Our tests have confirmed an issue caused by the fans of the power supplies installed in several chassis.\n>\n>>The fans create vibrations amplifying errors on the discs.\n>\n> Now on to decide whether I'm waiting for the fix or re-building a new box...\n>\n> Thanks a bunch for your help.\n\nYou're welcome, any time.\n\nAs for the hard drives, can you upgrade to a pair of SSDs? If your\ndata set fits on (and will continue to fit on) SSDs, the performance\ngained from SSDs is HUGE and worth a few hundred extra for the drive.\nNote that you want to use the Intel enterprise stuff that survives\npower loss, not the cheap low end SSDs. May be an easy fix for your\nhosting company and a big performance gain.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 May 2017 10:24:20 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed differences between two servers"
},
{
"msg_contents": "On Tue, 9 May 2017 10:24:20 -0600\nScott Marlowe <[email protected]> wrote:\n> \n> As for the hard drives, can you upgrade to a pair of SSDs? If your\n> data set fits on (and will continue to fit on) SSDs, the performance\n> gained from SSDs is HUGE and worth a few hundred extra for the drive.\n> Note that you want to use the Intel enterprise stuff that survives\n> power loss, not the cheap low end SSDs. May be an easy fix for your\n> hosting company and a big performance gain.\n> \n\nSure, but I'm getting plenty of performance already : my little machines can serve 40 requests/second with 6 or 7 queries per request. So I'm fine for a while.\n\nYou can see for yourself if you enter the demo account for the site in my sig, and click in a couple of files (it's the database that gets re-created by the procedure I mentionned in my original post)\n\n\n-- \n\t\t\t\t\tBien à vous, Vincent Veyron \n\nhttps://legalcase.libremen.com/ \nLegal case, contract and insurance claim management software\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 May 2017 10:27:46 +0200",
"msg_from": "Vincent Veyron <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed differences between two servers"
}
] |
[
{
"msg_contents": "Hello,\n\n\non our old server (120 GB RAM) PostgreSQL 9.4.5 was using less than 10 GB of ram. On our new server (same system) Postgres 9.4.11 is using up to 40 GB Ram. Especially each idle process is consuming 2.4 GB: postgres 30764 8.3 2.4 3358400 3215920 ? Ss 21:58 0:24 postgres: testuser testdb [local] idle\n\nSumming up PG needs currently 72.14GB (also the slab_cache was increasing from 20 GB to 40 GB. ). For monitoring we are using munin. Is this a bug of 9.4.11 or what could be wrong?\n\npostgresmain.conf did not change:\n\nmax_connections = 40\neffective_cache_size = 64GB\nshared_buffers = 1GB\nwork_mem = 8MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 10min\ncheckpoint_completion_target = 0.9\ncheckpoint_warning = 30s\nconstraint_exclusion = off\n\nThanks, Hans\n\n<http://www.maps-for-free.com/>\n\n\n\n\n\n\n\n\nHello,\n\n\non our old server (120 GB RAM) PostgreSQL 9.4.5 was using less than 10 GB of ram. On our new server (same system) Postgres 9.4.11 is using up to 40 GB Ram. Especially each idle process is consuming 2.4 GB: postgres\n 30764 8.3 2.4 3358400 3215920 ? Ss 21:58 0:24 postgres: testuser testdb [local] idle \n\n\n\n\nSumming up PG needs currently 72.14GB (also the\n slab_cache was increasing from 20 GB to 40 GB. ). For monitoring we are using munin. Is this a bug of 9.4.11 or what could be wrong?\n\n\npostgresmain.conf did not change:\n\n\nmax_connections = 40 \n\n\n\neffective_cache_size = 64GB\nshared_buffers = 1GB\nwork_mem = 8MB\ncheckpoint_segments = 32\ncheckpoint_timeout = 10min\ncheckpoint_completion_target = 0.9\ncheckpoint_warning = 30s\nconstraint_exclusion = off\n\n\nThanks, Hans",
"msg_date": "Mon, 8 May 2017 20:23:14 +0000",
"msg_from": "Hans Braxmeier <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres uses too much RAM"
},
{
"msg_contents": "On Mon, May 8, 2017 at 3:23 PM, Hans Braxmeier\n<[email protected]> wrote:\n> Hello,\n>\n>\n> on our old server (120 GB RAM) PostgreSQL 9.4.5 was using less than 10 GB of\n> ram. On our new server (same system) Postgres 9.4.11 is using up to 40 GB\n> Ram. Especially each idle process is consuming 2.4 GB: postgres 30764 8.3\n> 2.4 3358400 3215920 ? Ss 21:58 0:24 postgres: testuser testdb [local] idle\n>\n>\n> Summing up PG needs currently 72.14GB (also the slab_cache was increasing\n> from 20 GB to 40 GB. ). For monitoring we are using munin. Is this a bug of\n> 9.4.11 or what could be wrong?\n\ncan you paste unredacted snippet from, say, 'top'? A common\nmeasuring error is to assume that shared memory usage is specific\ncumulative to each process rather than from a shared pool. It's hard\nto say either way from your info above.\n\nIf you do have extremely high resident memory usage, culprits might be:\n*) bona fide memory leak (although this is rare)\n*) bloat in the cache context (relcache, plancache, etc), especially\nif you have huge numbers of tables. workaround is to recycle\nprocesses occasionally and/or use pgbouncer\n*) 3rd party package attached to the postgres process (say, pl/java).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 May 2017 18:12:12 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres uses too much RAM"
}
] |
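Merlin's point that RES in top counts the shared pool once per backend can be checked rather than guessed; a small sketch, assuming nothing beyond a stock 9.4 install.

    -- Compare each backend's RES against shared_buffers before concluding
    -- there is a leak; the difference is the genuinely private part.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem',
                   'maintenance_work_mem', 'max_connections');

    -- Merlin's relcache/plancache-bloat theory scales with the number of
    -- relations a backend touches; a quick count:
    SELECT count(*) AS relations FROM pg_class;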
[
{
"msg_contents": "I have a weird case of query execution performance here. The query has a date\nvalues in the WHERE clause, and the speed of executing varies by values of\nthe date. Actualy,\n - for the dates from the range of the last 30 days execution takes aruond\n3 min\n - for the dates before the range of the last 30 days execution takes a few\nseconds\n\nThe query is listed below, with the date in the last 30 days range:\n\nselect\n sk2_.code as col_0_0_,\n bra4_.code as col_1_0_,\n st0_.quantity as col_2_0_,\n bat1_.forecast as col_3_0_ \nfrom\n TBL_st st0_,\n TBL_bat bat1_,\n TBL_sk sk2_,\n TBL_bra bra4_ \nwhere\n st0_.batc_id=bat1_.id \n and bat1_.sku_id=sk2_.id \n and bat1_.bran_id=bra4_.id \n and not (exists (select\n 1 \n from\n TBL_st st6_,\n TBL_bat bat7_,\n TBL_sk sk10_ \n where\n st6_.batc_id=bat7_.id \n and bat7_.sku_id=sk10_.id \n and bat7_.bran_id=bat1_.bran_id \n and sk10_.code=sk2_.code \n and st6_.date>st0_.date \n and sk10_.acco_id=1 \n and st6_.date>='2017-04-20' \n and st6_.date<='2017-04-30')) \n and sk2_.acco_id=1 \n and st0_.date>='2017-04-20' \n and st0_.date<='2017-04-30'\n\n\nand here is the plan for the query with the date in the last 30 days range:\n\t\nNested Loop (cost=289.06..19764.03 rows=1 width=430) (actual\ntime=3482.062..326049.246 rows=249 loops=1)\n -> Nested Loop Anti Join (cost=288.91..19763.86 rows=1 width=433)\n(actual time=3482.023..326048.023 rows=249 loops=1)\n Join Filter: ((st6_.date > st0_.date) AND ((sk10_.code)::text =\n(sk2_.code)::text))\n Rows Removed by Join Filter: 210558\n -> Nested Loop (cost=286.43..13719.38 rows=1 width=441) (actual\ntime=4.648..2212.042 rows=2474 loops=1)\n -> Nested Loop (cost=286.00..6871.33 rows=13335 width=436)\n(actual time=4.262..657.823 rows=666738 loops=1)\n -> Index Scan using uk_TBL_sk0_account_code on TBL_sk\nsk2_ (cost=0.14..12.53 rows=1 width=426) (actual time=1.036..1.084 rows=50\nloops=1)\n Index Cond: (acco_id = 1)\n -> Bitmap Heap Scan on TBL_bat bat1_ \n(cost=285.86..6707.27 rows=15153 width=26) (actual time=3.675..11.308\nrows=13335 loops=50)\n Recheck Cond: (sku_id = sk2_.id)\n Heap Blocks: exact=241295\n -> Bitmap Index Scan on ix_al_batc_sku_id \n(cost=0.00..282.07 rows=15153 width=0) (actual time=3.026..3.026 rows=13335\nloops=50)\n Index Cond: (sku_id = sk2_.id)\n -> Index Scan using ix_al_stle_batc_id on TBL_st st0_ \n(cost=0.42..0.50 rows=1 width=21) (actual time=0.002..0.002 rows=0\nloops=666738)\n Index Cond: (batc_id = bat1_.id)\n Filter: ((date >= '2017-04-20 00:00:00'::timestamp\nwithout time zone) AND (date <= '2017-04-30 00:00:00'::timestamp without\ntime zone))\n Rows Removed by Filter: 1\n -> Nested Loop (cost=2.49..3023.47 rows=1 width=434) (actual\ntime=111.345..130.883 rows=86 loops=2474)\n -> Hash Join (cost=2.06..2045.18 rows=1905 width=434)\n(actual time=0.010..28.028 rows=54853 loops=2474)\n Hash Cond: (bat7_.sku_id = sk10_.id)\n -> Index Scan using ix_al_batc_bran_id on TBL_bat bat7_ \n(cost=0.42..1667.31 rows=95248 width=24) (actual time=0.009..11.045\nrows=54853 loops=2474)\n Index Cond: (bran_id = bat1_.bran_id)\n -> Hash (cost=1.63..1.63 rows=1 width=426) (actual\ntime=0.026..0.026 rows=50 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\n -> Seq Scan on TBL_sk sk10_ (cost=0.00..1.63\nrows=1 width=426) (actual time=0.007..0.015 rows=50 loops=1)\n Filter: (acco_id = 1)\n -> Index Scan using ix_al_stle_batc_id on TBL_st st6_ \n(cost=0.42..0.50 rows=1 width=16) (actual time=0.002..0.002 rows=0\nloops=135706217)\n Index Cond: (batc_id = bat7_.id)\n Filter: 
((date >= '2017-04-20 00:00:00'::timestamp\nwithout time zone) AND (date <= '2017-04-30 00:00:00'::timestamp without\ntime zone))\n Rows Removed by Filter: 1\n -> Index Scan using TBL_bra_pk on TBL_bra bra4_ (cost=0.14..0.16 rows=1\nwidth=13) (actual time=0.003..0.003 rows=1 loops=249)\n Index Cond: (id = bat1_.bran_id)\nPlanning time: 8.108 ms\nExecution time: 326049.583 ms\n\n\nHere is the same query with the date before the last 30 days range:\n\nselect\n sk2_.code as col_0_0_,\n bra4_.code as col_1_0_,\n st0_.quantity as col_2_0_,\n bat1_.forecast as col_3_0_ \nfrom\n TBL_st st0_,\n TBL_bat bat1_,\n TBL_sk sk2_,\n TBL_bra bra4_ \nwhere\n st0_.batc_id=bat1_.id \n and bat1_.sku_id=sk2_.id \n and bat1_.bran_id=bra4_.id \n and not (exists (select\n 1 \n from\n TBL_st st6_,\n TBL_bat bat7_,\n TBL_sk sk10_ \n where\n st6_.batc_id=bat7_.id \n and bat7_.sku_id=sk10_.id \n and bat7_.bran_id=bat1_.bran_id \n and sk10_.code=sk2_.code \n and st6_.date>st0_.date \n and sk10_.acco_id=1 \n and st6_.date>='2017-01-20' \n and st6_.date<='2017-01-30')) \n and sk2_.acco_id=1 \n and st0_.date>='2017-01-20' \n and st0_.date<='2017-01-30'\n\t\n\nand here is the plan for the query with the date before the last 30 days\nrange:\n\t\n Hash Join (cost=576.33..27443.95 rows=48 width=430) (actual\ntime=132.732..3894.554 rows=250 loops=1)\n Hash Cond: (bat1_.bran_id = bra4_.id)\n -> Merge Anti Join (cost=572.85..27439.82 rows=48 width=433) (actual\ntime=132.679..3894.287 rows=250 loops=1)\n Merge Cond: ((sk2_.code)::text = (sk10_.code)::text)\n Join Filter: ((st6_.date > st0_.date) AND (bat7_.bran_id =\nbat1_.bran_id))\n Rows Removed by Join Filter: 84521\n -> Nested Loop (cost=286.43..13719.38 rows=48 width=441) (actual\ntime=26.105..1893.523 rows=2491 loops=1)\n -> Nested Loop (cost=286.00..6871.33 rows=13335 width=436)\n(actual time=1.159..445.683 rows=666738 loops=1)\n -> Index Scan using uk_TBL_sk0_account_code on TBL_sk\nsk2_ (cost=0.14..12.53 rows=1 width=426) (actual time=0.035..0.084 rows=50\nloops=1)\n Index Cond: (acco_id = 1)\n -> Bitmap Heap Scan on TBL_bat bat1_ \n(cost=285.86..6707.27 rows=15153 width=26) (actual time=1.741..7.148\nrows=13335 loops=50)\n Recheck Cond: (sku_id = sk2_.id)\n Heap Blocks: exact=241295\n -> Bitmap Index Scan on ix_al_batc_sku_id \n(cost=0.00..282.07 rows=15153 width=0) (actual time=1.119..1.119 rows=13335\nloops=50)\n Index Cond: (sku_id = sk2_.id)\n -> Index Scan using ix_al_stle_batc_id on TBL_st st0_ \n(cost=0.42..0.50 rows=1 width=21) (actual time=0.002..0.002 rows=0\nloops=666738)\n Index Cond: (batc_id = bat1_.id)\n Filter: ((date >= '2017-01-20 00:00:00'::timestamp\nwithout time zone) AND (date <= '2017-01-30 00:00:00'::timestamp without\ntime zone))\n Rows Removed by Filter: 1\n -> Materialize (cost=286.43..13719.50 rows=48 width=434) (actual\ntime=15.584..1986.953 rows=84560 loops=1)\n -> Nested Loop (cost=286.43..13719.38 rows=48 width=434)\n(actual time=15.577..1983.384 rows=2491 loops=1)\n -> Nested Loop (cost=286.00..6871.33 rows=13335\nwidth=434) (actual time=0.843..482.864 rows=666738 loops=1)\n -> Index Scan using uk_TBL_sk0_account_code on\nTBL_sk sk10_ (cost=0.14..12.53 rows=1 width=426) (actual time=0.005..0.052\nrows=50 loops=1)\n Index Cond: (acco_id = 1)\n -> Bitmap Heap Scan on TBL_bat bat7_ \n(cost=285.86..6707.27 rows=15153 width=24) (actual time=2.051..7.902\nrows=13335 loops=50)\n Recheck Cond: (sku_id = sk10_.id)\n Heap Blocks: exact=241295\n -> Bitmap Index Scan on ix_al_batc_sku_id \n(cost=0.00..282.07 rows=15153 width=0) (actual 
time=1.424..1.424 rows=13335\nloops=50)\n Index Cond: (sku_id = sk10_.id)\n -> Index Scan using ix_al_stle_batc_id on TBL_st st6_ \n(cost=0.42..0.50 rows=1 width=16) (actual time=0.002..0.002 rows=0\nloops=666738)\n Index Cond: (batc_id = bat7_.id)\n Filter: ((date >= '2017-01-20 00:00:00'::timestamp\nwithout time zone) AND (date <= '2017-01-30 00:00:00'::timestamp without\ntime zone))\n Rows Removed by Filter: 1\n -> Hash (cost=2.10..2.10 rows=110 width=13) (actual time=0.033..0.033\nrows=110 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 14kB\n -> Seq Scan on TBL_bra bra4_ (cost=0.00..2.10 rows=110 width=13)\n(actual time=0.004..0.013 rows=110 loops=1)\nPlanning time: 14.542 ms\nExecution time: 3894.793 ms\n\n\n\nDoes anyone have an idea why does this happens. \nDid anyone had an experience with anything similar?\n\nThank you very much.\nKind regards, Petar\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Speed-differences-between-in-executing-the-same-query-tp5960964.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 May 2017 06:54:02 -0700 (MST)",
"msg_from": "plukovic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Speed differences between in executing the same query"
},
{
"msg_contents": "plukovic <[email protected]> writes:\n> I have a weird case of query execution performance here.\n\nMy first thought is that you are getting a bad plan because of this\nestimation error:\n\n> -> Index Scan using uk_TBL_sk0_account_code on TBL_sk\n> sk2_ (cost=0.14..12.53 rows=1 width=426) (actual time=1.036..1.084 rows=50\n> loops=1)\n> Index Cond: (acco_id = 1)\n\nThat rowcount estimate is off by 50X, resulting in 50X errors for the\njoins above it too, and in misguided choices of nestloops when some\nother join method would be better. Probably things would improve with\na better estimate. Maybe you need to increase the stats target for\nthat table ... or maybe it just hasn't been ANALYZEd lately?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 May 2017 10:06:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Speed differences between in executing the same query"
},
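A minimal sketch of the two remedies suggested above, using the TBL_sk / acco_id names from the quoted plans; the statistics target of 1000 is only an illustrative value, and the pg_stats query simply shows what the planner currently believes about the column (unquoted identifiers fold to lower case in pg_stats.tablename):

-- What does the planner currently think acco_id looks like?
SELECT attname, n_distinct, most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'tbl_sk' AND attname = 'acco_id';

-- Re-collect statistics; if a plain ANALYZE is not enough, raise the
-- per-column statistics target (default 100) and analyze again.
ANALYZE TBL_sk;
ALTER TABLE TBL_sk ALTER COLUMN acco_id SET STATISTICS 1000;
ANALYZE TBL_sk;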
{
"msg_contents": "Thank you very much Tom. This is very helpful.\n\n\n\n--\nView this message in context: http://www.postgresql-archive.org/Speed-differences-between-in-executing-the-same-query-tp5960964p5961041.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 May 2017 10:11:27 -0700 (MST)",
"msg_from": "plukovic <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Speed differences between in executing the same query"
}
] |
[
{
"msg_contents": "I need to do a join between two foreign tables using columns of different\ntypes.\n\nselect data from remote2 join remote1 on ((remote2.id)::bigint=remote1.id)\nwhere cutoff > 0.9999;\n\nFor demonstration purposes, I use a loop-back foreign server, set up in the\nattached sql file.\n\nIf I do the join directly on the \"foreign\" server by specifying the\nschemaname where the physical tables live, it uses a sensible join plan,\nusing an index on cutoff column to get a handful of rows, then casting the\nid column and using in index on remote1.id to get each row there.\n\nexplain analyze select data from remote.remote2 join remote.remote1 on ((\nremote2.id)::bigint=remote1.id) where cutoff > 0.9999;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=5.56..1100.48 rows=100 width=8) (actual\ntime=0.303..5.598 rows=119 loops=1)\n -> Bitmap Heap Scan on remote2 (cost=5.13..334.85 rows=91 width=7)\n(actual time=0.112..0.899 rows=105 loops=1)\n Recheck Cond: (cutoff > '0.9999'::double precision)\n Heap Blocks: exact=105\n -> Bitmap Index Scan on remote2_cutoff_idx (cost=0.00..5.11\nrows=91 width=0) (actual time=0.062..0.062 rows=105 loops=1)\n Index Cond: (cutoff > '0.9999'::double precision)\n -> Index Scan using remote1_id_idx on remote1 (cost=0.43..8.40 rows=1\nwidth=16) (actual time=0.038..0.041 rows=1 loops=105)\n Index Cond: (id = (remote2.id)::bigint)\n\n\nBut if I go through the foreign machinery, it doesn't use a good plan:\n\nexplain analyze select data from remote2 join remote1 on ((remote2.id\n)::bigint=remote1.id) where cutoff > 0.9999;\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=537.81..76743.81 rows=455000 width=4) (actual\ntime=75.019..4659.802 rows=119 loops=1)\n Hash Cond: (remote1.id = (remote2.id)::bigint)\n -> Foreign Scan on remote1 (cost=100.00..35506.00 rows=1000000\nwidth=16) (actual time=1.110..4143.655 rows=1000000 loops=1)\n -> Hash (cost=436.67..436.67 rows=91 width=7) (actual\ntime=2.754..2.754 rows=105 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 13kB\n -> Foreign Scan on remote2 (cost=105.13..436.67 rows=91 width=7)\n(actual time=1.567..2.646 rows=105 loops=1)\n Planning time: 29.629 ms\n Execution time: 4660.433 ms\n\nI thought it would either push the entire join to the foreign side, or at\nleast do a foreign index scan on remote2_cutoff_idx, then loop over each\nrow and do a foreign index scans against remote1_id_idx.\n\nI've tried versions 9.6.3 and 10dev, and neither do what I expected. It\ndoesn't seem to be a planning problem where it thinks the fast plan is\nslower, it just doesn't seem to consider the faster plans as being options\nat all. Is there some setting to make it realize the cast is shippable?\nIs any of the work being done on postgres_fdw for V11 working towards\nfixing this?\n\nCheers,\n\nJeff\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Mon, 15 May 2017 14:42:14 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres_fdw and column casting shippability"
},
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> I've tried versions 9.6.3 and 10dev, and neither do what I expected. It\n> doesn't seem to be a planning problem where it thinks the fast plan is\n> slower, it just doesn't seem to consider the faster plans as being options\n> at all. Is there some setting to make it realize the cast is shippable?\n\nAFAICS, postgres_fdw doesn't have any knowledge of CoerceViaIO parse\nnodes, so it's never going to consider this type of brute-force cast\nas shippable. Normal casts would presumably be shippable if the\nunderlying function is considered safe.\n\nLooks like a round-tuit-shortage issue rather than anything fundamental.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 May 2017 18:22:48 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres_fdw and column casting shippability"
},
{
"msg_contents": "On Mon, May 15, 2017 at 3:22 PM, Tom Lane <[email protected]> wrote:\n\n> Jeff Janes <[email protected]> writes:\n> > I've tried versions 9.6.3 and 10dev, and neither do what I expected. It\n> > doesn't seem to be a planning problem where it thinks the fast plan is\n> > slower, it just doesn't seem to consider the faster plans as being\n> options\n> > at all. Is there some setting to make it realize the cast is shippable?\n>\n> AFAICS, postgres_fdw doesn't have any knowledge of CoerceViaIO parse\n> nodes, so it's never going to consider this type of brute-force cast\n> as shippable. Normal casts would presumably be shippable if the\n> underlying function is considered safe.\n>\n\nSo then, the secret is to write it like this:\n\nexplain analyze select data from remote2 join remote1 on (int8in(textout(\nremote2.id)) = remote1.id)\n where cutoff > 0.9999;\n\nThis works to have the join pushed to the foreign side in 9.6, but not\nbefore that.\n\nThanks,\n\nJeff\n\nOn Mon, May 15, 2017 at 3:22 PM, Tom Lane <[email protected]> wrote:Jeff Janes <[email protected]> writes:\n> I've tried versions 9.6.3 and 10dev, and neither do what I expected. It\n> doesn't seem to be a planning problem where it thinks the fast plan is\n> slower, it just doesn't seem to consider the faster plans as being options\n> at all. Is there some setting to make it realize the cast is shippable?\n\nAFAICS, postgres_fdw doesn't have any knowledge of CoerceViaIO parse\nnodes, so it's never going to consider this type of brute-force cast\nas shippable. Normal casts would presumably be shippable if the\nunderlying function is considered safe.So then, the secret is to write it like this:explain analyze select data from remote2 join remote1 on (int8in(textout(remote2.id)) = remote1.id) where cutoff > 0.9999;This works to have the join pushed to the foreign side in 9.6, but not before that.Thanks,Jeff",
"msg_date": "Tue, 16 May 2017 09:54:13 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres_fdw and column casting shippability"
}
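A quick way to confirm that the rewrite is actually shipped, sketched against the remote1/remote2 loop-back tables from the first post: EXPLAIN VERBOSE prints the Remote SQL that postgres_fdw will send, so a pushed-down join appears as a single Foreign Scan containing the join rather than two separate foreign scans.

EXPLAIN (VERBOSE, COSTS OFF)
SELECT data
FROM remote2
JOIN remote1 ON (int8in(textout(remote2.id)) = remote1.id)
WHERE cutoff > 0.9999;
-- Expect one "Foreign Scan" node whose "Remote SQL:" line contains the JOIN
-- and the WHERE clause; separate Foreign Scans on remote1 and remote2 mean
-- the join is still being performed locally.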
] |
[
{
"msg_contents": "Hey all, first off, i'm running: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu,\ncompiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bit\n\nAt the high level, I am having an issue with a query not using an index,\nand in a very hard to reproduce way.\n\nI have a function which builds two temp tables, fills each with data (in\nmultiple steps), creates a gin index on one of the tables, analyzes each\ntable, then runs a query joining the two.\n\nMy issue is, I am getting inconsistent results for if the query will use\nthe index or not (with the exact same data each time, and no differences in\nthe stats stored on the table between using the index or not).\n\nIf I just run the function, it will never use the index and the query will\nnot finish.\n\nIf I pull the queries out of the function and run them manually, it will\noften use the index, but sometimes it won't, I can't make any sense of\nwhy/when it will use it vs not using it.\n\nI tried to create a test case I could attach to this email by just saving\nthe results of the temp tables, pg_dumping them, and creating a script to\nre-create the temp tables with that data and continue on with the index\ncreation / analyze / query... but when I try that it runs perfectly (using\nthe index) every time.\n\nI've attached the test case, because it contains all the schema and query,\nregardless of if I can't make it reproducible.\n\nRun the query_help_dump.sql first to populate regular tables, then within\nthe query_help_test_case.sql I was attempting to replicate the same (very\nsimplified) workflow that happens in my function, to no avail.\n\n\nI cannot run an explain analyze on the query when it doesn't use the index,\nbecause it will not finish in a reasonable amount of time (let it run for\n12 hours so far).\n\nquery without index:\nGroupAggregate (cost=23622602.94..23622603.80 rows=43 width=48)\n Group Key: r.row_id\n -> Sort (cost=23622602.94..23622603.04 rows=43 width=20)\n Sort Key: r.row_id\n -> Nested Loop (cost=0.00..23622601.77 rows=43 width=20)\n Join Filter: ((r.delivery_date <@ con.date_range) AND\n(r.contractee_company_ids && con.contractee_company_id) AND\n((r.distributor_company_ids && con.distributor_company_id) OR\n(con.distributor_company_id IS NULL)) AND (r.product_ids && con.product_id))\n -> Seq Scan on _import_invoice_product_contract_match r\n (cost=0.00..3525.52 rows=86752 width=145)\n -> Materialize (cost=0.00..874.50 rows=12100 width=542)\n -> Seq Scan on _contract_claim_match con\n (cost=0.00..814.00 rows=12100 width=542)\n\n\nquery with index:\nGroupAggregate (cost=137639.13..137639.99 rows=43 width=48) (actual\ntime=3944.309..4093.798 rows=57966 loops=1)\n Group Key: r.row_id\n -> Sort (cost=137639.13..137639.24 rows=43 width=20) (actual\ntime=3944.280..3992.348 rows=145312 loops=1)\n Sort Key: r.row_id\n Sort Method: external merge Disk: 4256kB\n -> Nested Loop (cost=0.02..137637.97 rows=43 width=20) (actual\ntime=0.091..3701.039 rows=145312 loops=1)\n -> Seq Scan on _import_invoice_product_contract_match r\n (cost=0.00..3525.52 rows=86752 width=145) (actual time=0.011..46.663\nrows=86752 loops=1)\n -> Bitmap Heap Scan on _contract_claim_match con\n (cost=0.02..1.54 rows=1 width=542) (actual time=0.033..0.040 rows=2\nloops=86752)\n Recheck Cond: ((r.contractee_company_ids &&\ncontractee_company_id) AND (r.product_ids && product_id))\n Filter: ((r.delivery_date <@ date_range) AND\n((r.distributor_company_ids && distributor_company_id) OR\n(distributor_company_id IS NULL)))\n Rows Removed by Filter: 8\n Heap 
Blocks: exact=793072\n -> Bitmap Index Scan on idx_tmp_contract_claim_match\n (cost=0.00..0.02 rows=1 width=0) (actual time=0.023..0.023 rows=10\nloops=86752)\n Index Cond: ((r.contractee_company_ids &&\ncontractee_company_id) AND (r.product_ids && product_id))\nPlanning time: 0.804 ms\nExecution time: 4106.043 ms\n\n\n query_help_dump.sql\n<https://drive.google.com/file/d/0BzxeqZ1lbi6RazBDbjdBbUNMbFk/view?usp=drive_web>\n\n query_help_test_case.sql\n<https://drive.google.com/file/d/0BzxeqZ1lbi6RT3hMbXd1WVpqTjg/view?usp=drive_web>\n\n\nHey all, first off, i'm running: PostgreSQL 9.6.3 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.9.2-10) 4.9.2, 64-bitAt the high level, I am having an issue with a query not using an index, and in a very hard to reproduce way.I have a function which builds two temp tables, fills each with data (in multiple steps), creates a gin index on one of the tables, analyzes each table, then runs a query joining the two.My issue is, I am getting inconsistent results for if the query will use the index or not (with the exact same data each time, and no differences in the stats stored on the table between using the index or not).If I just run the function, it will never use the index and the query will not finish.If I pull the queries out of the function and run them manually, it will often use the index, but sometimes it won't, I can't make any sense of why/when it will use it vs not using it.I tried to create a test case I could attach to this email by just saving the results of the temp tables, pg_dumping them, and creating a script to re-create the temp tables with that data and continue on with the index creation / analyze / query... but when I try that it runs perfectly (using the index) every time.I've attached the test case, because it contains all the schema and query, regardless of if I can't make it reproducible.Run the query_help_dump.sql first to populate regular tables, then within the query_help_test_case.sql I was attempting to replicate the same (very simplified) workflow that happens in my function, to no avail.I cannot run an explain analyze on the query when it doesn't use the index, because it will not finish in a reasonable amount of time (let it run for 12 hours so far).query without index: GroupAggregate (cost=23622602.94..23622603.80 rows=43 width=48) Group Key: r.row_id -> Sort (cost=23622602.94..23622603.04 rows=43 width=20) Sort Key: r.row_id -> Nested Loop (cost=0.00..23622601.77 rows=43 width=20) Join Filter: ((r.delivery_date <@ con.date_range) AND (r.contractee_company_ids && con.contractee_company_id) AND ((r.distributor_company_ids && con.distributor_company_id) OR (con.distributor_company_id IS NULL)) AND (r.product_ids && con.product_id)) -> Seq Scan on _import_invoice_product_contract_match r (cost=0.00..3525.52 rows=86752 width=145) -> Materialize (cost=0.00..874.50 rows=12100 width=542) -> Seq Scan on _contract_claim_match con (cost=0.00..814.00 rows=12100 width=542)query with index:GroupAggregate (cost=137639.13..137639.99 rows=43 width=48) (actual time=3944.309..4093.798 rows=57966 loops=1) Group Key: r.row_id -> Sort (cost=137639.13..137639.24 rows=43 width=20) (actual time=3944.280..3992.348 rows=145312 loops=1) Sort Key: r.row_id Sort Method: external merge Disk: 4256kB -> Nested Loop (cost=0.02..137637.97 rows=43 width=20) (actual time=0.091..3701.039 rows=145312 loops=1) -> Seq Scan on _import_invoice_product_contract_match r (cost=0.00..3525.52 rows=86752 width=145) (actual time=0.011..46.663 rows=86752 loops=1) -> 
Bitmap Heap Scan on _contract_claim_match con (cost=0.02..1.54 rows=1 width=542) (actual time=0.033..0.040 rows=2 loops=86752) Recheck Cond: ((r.contractee_company_ids && contractee_company_id) AND (r.product_ids && product_id)) Filter: ((r.delivery_date <@ date_range) AND ((r.distributor_company_ids && distributor_company_id) OR (distributor_company_id IS NULL))) Rows Removed by Filter: 8 Heap Blocks: exact=793072 -> Bitmap Index Scan on idx_tmp_contract_claim_match (cost=0.00..0.02 rows=1 width=0) (actual time=0.023..0.023 rows=10 loops=86752) Index Cond: ((r.contractee_company_ids && contractee_company_id) AND (r.product_ids && product_id))Planning time: 0.804 msExecution time: 4106.043 ms query_help_dump.sql query_help_test_case.sql",
"msg_date": "Fri, 19 May 2017 11:14:05 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": true,
"msg_subject": "GIN index not used if created in the same transaction as query"
},
{
"msg_contents": "Adam Brusselback <[email protected]> writes:\n> I have a function which builds two temp tables, fills each with data (in\n> multiple steps), creates a gin index on one of the tables, analyzes each\n> table, then runs a query joining the two.\n> My issue is, I am getting inconsistent results for if the query will use\n> the index or not (with the exact same data each time, and no differences in\n> the stats stored on the table between using the index or not).\n\nDoes the \"multiple steps\" part involve UPDATEs on pre-existing rows?\nDo the updates change the column(s) used in the gin index?\n\nWhat this sounds like is that you're getting \"broken HOT chains\" in which\nthere's not a unique indexable value among the updated versions of a given\nrow, so there's an interval in which the new index isn't usable for\nqueries. If that's the correct diagnosis, what you need to do is create\nthe gin index before you start populating the table. Fortunately, that\nshouldn't create a really horrid performance penalty, because gin index\nbuild isn't optimized all that much anyway compared to just inserting\nthe data serially.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 May 2017 11:33:21 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used if created in the same transaction as query"
},
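A sketch of the reordering described above, with illustrative column types (the real temp-table definitions are only in the attached files): the GIN index is created while the temp table is still empty, the multi-step population and updates run afterwards, and the table is analyzed last, so the indexed columns are never updated after the index exists and no broken HOT chains are created.

CREATE TEMP TABLE _contract_claim_match (
    contractee_company_id   bigint[],
    distributor_company_id  bigint[],
    product_id              bigint[],
    date_range              daterange
);

-- Build the index first, on the still-empty table.
CREATE INDEX idx_tmp_contract_claim_match
    ON _contract_claim_match
    USING gin (contractee_company_id, product_id);

-- ... the original multi-step INSERT / UPDATE population goes here ...

ANALYZE _contract_claim_match;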
{
"msg_contents": ">\n> Does the \"multiple steps\" part involve UPDATEs on pre-existing rows?\n> Do the updates change the column(s) used in the gin index?\n>\n\n Yes they do, however the updates happen prior to the index creation.\n\nI just tried, and that looks like the solution. I really appreciate your\nhelp on this.\n\nIs there any easy way I can know if an index is usable or not? Are there\nany catalog views or anything I could check that in?\n\nThanks,\n-Adam\n\nDoes the \"multiple steps\" part involve UPDATEs on pre-existing rows?\nDo the updates change the column(s) used in the gin index? Yes they do, however the updates happen prior to the index creation. I just tried, and that looks like the solution. I really appreciate your help on this.Is there any easy way I can know if an index is usable or not? Are there any catalog views or anything I could check that in?Thanks,-Adam",
"msg_date": "Fri, 19 May 2017 11:49:32 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: GIN index not used if created in the same transaction\n as query"
},
{
"msg_contents": "Adam Brusselback <[email protected]> writes:\n> Is there any easy way I can know if an index is usable or not? Are there\n> any catalog views or anything I could check that in?\n\nIIRC, you can look at pg_index.indcheckxmin --- if that's set, then\nthe index had broken HOT chains during creation and may not be usable\nright away. Telling whether your own transaction can use it is harder\nfrom SQL level, but if you're in the same transaction that made the\nindex then the answer is probably always \"no\" :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 May 2017 12:09:58 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: GIN index not used if created in the same transaction as query"
}
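For reference, the flag mentioned above can be read straight from the catalogs; a small sketch, using the index name from the test case:

SELECT c.relname AS index_name,
       i.indcheckxmin,   -- true: built with broken HOT chains; may not be
                         -- usable while older snapshots can still see them
       i.indisvalid,
       i.indisready
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE c.relname = 'idx_tmp_contract_claim_match';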
] |
[
{
"msg_contents": "I'm spoiled by using pg_stat_statements to find the hotspot queries which\ncould use some attention.\n\nBut with some recent work, all of the hotspots are of the form \"FETCH 1000\nFROM c3\". The vast majority of the queries return less than 1000 rows, so\nonly one fetch is issued per execution.\n\nIs there an automated way to trace these back to the parent query, without\nhaving to strong-arm the driving application into changing its cursor-using\nways?\n\npg_stat_statements v1.4 and postgresql v9.6 (or 10beta1, if it makes a\ndifference)\n\nSometimes you can catch the DECLARE also being in pg_stat_statements, but\nit is not a sure thing and there is some risk the name got freed and reused.\n\nlog_min_duration_statement has the same issue.\n\nCheers,\n\nJeff\n\nI'm spoiled by using pg_stat_statements to find the hotspot queries which could use some attention.But with some recent work, all of the hotspots are of the form \"FETCH 1000 FROM c3\". The vast majority of the queries return less than 1000 rows, so only one fetch is issued per execution.Is there an automated way to trace these back to the parent query, without having to strong-arm the driving application into changing its cursor-using ways?pg_stat_statements v1.4 and postgresql v9.6 (or 10beta1, if it makes a difference)Sometimes you can catch the DECLARE also being in pg_stat_statements, but it is not a sure thing and there is some risk the name got freed and reused.log_min_duration_statement has the same issue.Cheers,Jeff",
"msg_date": "Fri, 19 May 2017 16:04:36 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_stat_statements with fetch"
},
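One low-tech workaround, assuming it is acceptable to log every statement for a while: the DECLARE and its FETCHes run in the same backend session, so putting the session id into log_line_prefix makes it possible to walk back from a slow "FETCH 1000 FROM c3" log line to the "DECLARE c3 CURSOR FOR ..." line logged earlier by the same session. A sketch of the settings (superuser required; this is not something pg_stat_statements itself can do):

ALTER SYSTEM SET log_min_duration_statement = 0;   -- log everything, including DECLARE
ALTER SYSTEM SET log_line_prefix = '%m [%p] %c ';  -- %c = session id
SELECT pg_reload_conf();
-- Then take the session id from a FETCH line in the log and search earlier
-- lines with the same id for the matching DECLARE ... CURSOR FOR statement.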
{
"msg_contents": "Would turning on logging of temp files help? That often reports the query\nthat is using the temp files:\nlog_temp_files = 0\n\nIt probably wouldn't help if the cursor query never pulls from a temp file,\nbut if it does ...\n\nOn Fri, May 19, 2017 at 7:04 PM, Jeff Janes <[email protected]> wrote:\n\n> I'm spoiled by using pg_stat_statements to find the hotspot queries which\n> could use some attention.\n>\n> But with some recent work, all of the hotspots are of the form \"FETCH 1000\n> FROM c3\". The vast majority of the queries return less than 1000 rows, so\n> only one fetch is issued per execution.\n>\n> Is there an automated way to trace these back to the parent query, without\n> having to strong-arm the driving application into changing its cursor-using\n> ways?\n>\n> pg_stat_statements v1.4 and postgresql v9.6 (or 10beta1, if it makes a\n> difference)\n>\n> Sometimes you can catch the DECLARE also being in pg_stat_statements, but\n> it is not a sure thing and there is some risk the name got freed and reused.\n>\n> log_min_duration_statement has the same issue.\n>\n> Cheers,\n>\n> Jeff\n>\n\nWould turning on logging of temp files help? That often reports the query that is using the temp files:log_temp_files = 0It probably wouldn't help if the cursor query never pulls from a temp file, but if it does ...On Fri, May 19, 2017 at 7:04 PM, Jeff Janes <[email protected]> wrote:I'm spoiled by using pg_stat_statements to find the hotspot queries which could use some attention.But with some recent work, all of the hotspots are of the form \"FETCH 1000 FROM c3\". The vast majority of the queries return less than 1000 rows, so only one fetch is issued per execution.Is there an automated way to trace these back to the parent query, without having to strong-arm the driving application into changing its cursor-using ways?pg_stat_statements v1.4 and postgresql v9.6 (or 10beta1, if it makes a difference)Sometimes you can catch the DECLARE also being in pg_stat_statements, but it is not a sure thing and there is some risk the name got freed and reused.log_min_duration_statement has the same issue.Cheers,Jeff",
"msg_date": "Sun, 21 May 2017 09:53:23 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_stat_statements with fetch"
}
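For completeness, a sketch of the setting suggested above; 0 logs every temporary file regardless of size, and each log entry includes the statement that caused it (for a cursor that may still be the FETCH rather than the original DECLARE):

ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();
-- or, only for the current (superuser) session while investigating:
SET log_temp_files = 0;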
] |
[
{
"msg_contents": "Hello,\n\nThe index on my composite type seems to be working most of the time, but\nthere's a query I need to run where it's not working even with\nenable_seqscan=false. The composite type uses int and numrange subcolumns,\nand is designed to operate primarily as a range type.\n\nIt will probably be easier for you to read the rest of this from my\nstackexchange post but I'll copy and paste the contents of it here as well.\nhttps://dba.stackexchange.com/questions/174099/postgres-composite-type-not-using-index\n\n\nThe queries in this example are for testing purposes. It's possible for me\nto get the index to work by using the int and numrange separately rather\nthan creating a new matchsecond_type, but using the composite type makes\nthings much easier further down the pipeline where I have to tie this in\nwith an ORM.\n\nThis should include everything necessary to test it out yourself.\n-----------------------------------------------\n\nI'm using: `PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc\n(Ubuntu 6.3.0-5ubuntu1) 6.3.0 20170124, 64-bit`\n\nAnd for the purposes of this testing `SET enable_seqscan=false`.\n\nThis uses the index:\n\n EXPLAIN ANALYZE SELECT * FROM shot\n WHERE lower(shot.matchsecond) <@ (0, numrange(5, 10))::matchsecond_type;\n\n Bitmap Heap Scan on shot (cost=471.17..790.19 rows=50 width=45)\n(actual time=2.601..29.555 rows=5 loops=1)\n Recheck Cond: (((matchsecond).match_id)::integer = (0)::integer)\n Filter: ((numrange(lower(((matchsecond).second)::numrange),\nlower(((matchsecond).second)::numrange), '[]'::text))::numrange <@\n('[5,10)'::numrange)::numrange)\n Rows Removed by Filter: 9996\n Heap Blocks: exact=94\n Buffers: shared hit=193\n -> Bitmap Index Scan on ix_shot_matchsecond (cost=0.00..471.16\nrows=10001 width=0) (actual time=2.516..2.516 rows=10001 loops=1)\n Index Cond: (((matchsecond).match_id)::integer = (0)::integer)\n Buffers: shared hit=99\n Planning time: 0.401 ms\n Execution time: 29.623 ms\n\n\n\nBut this doesn't:\n\n EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot\n WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id,\nnumrange(5, 10))::matchsecond_type;\n\n Seq Scan on shot (cost=10000000000.00..10000000319.02 rows=1 width=45)\n(actual time=0.091..20.003 rows=5 loops=1)\n Filter: ((((matchsecond).match_id)::integer =\n((matchsecond).match_id)::integer) AND\n((numrange(lower(((matchsecond).second)::numrange),\nlower(((matchsecond).second)::numrange), '[]'::text))::numrange <@\n('[5,10)'::numrange)::numrange))\n Rows Removed by Filter: 9996\n Buffers: shared hit=94\n Planning time: 0.351 ms\n Execution time: 20.075 ms\n\n\nNote the `0` in the first compared to `(shot.matchsecond).match_id` in the\nsecond on the right hand side of the `<@`. Interestingly, if the left hand\nside is simply `shot.matchsecond` instead of `lower(shot.matchsecond)`, the\nquery manages to use the index. 
The index is also used when constructing\nthe numrange with functions like\n`numrange(lower((shot.matchsecond).second), lower((shot.matchsecond).second\n+ 10))`.\n\nHere are the relevant definitions:\n\n CREATE DOMAIN matchsecond_match AS integer NOT NULL;\n CREATE DOMAIN matchsecond_second AS numrange NOT NULL CHECK(VALUE <>\nnumrange(0,0));\n\n CREATE TYPE matchsecond_type AS (\n match_id matchsecond_match,\n second matchsecond_second\n );\n\n CREATE OR REPLACE FUNCTION matchsecond_contains_range(matchsecond_type,\nmatchsecond_type)\n RETURNS BOOLEAN AS $$ SELECT $1.match_id = $2.match_id AND $1.second @>\n$2.second $$\n LANGUAGE SQL;\n\n CREATE OPERATOR @> (\n LEFTARG = matchsecond_type,\n RIGHTARG = matchsecond_type,\n PROCEDURE = matchsecond_contains_range,\n COMMUTATOR = <@,\n RESTRICT = eqsel,\n JOIN = eqjoinsel\n );\n\n CREATE OR REPLACE FUNCTION\nmatchsecond_contained_by_range(matchsecond_type, matchsecond_type)\n RETURNS BOOLEAN AS $$ SELECT $1.match_id = $2.match_id AND $1.second <@\n$2.second $$\n LANGUAGE SQL;\n\n CREATE OPERATOR <@ (\n LEFTARG = matchsecond_type,\n RIGHTARG = matchsecond_type,\n PROCEDURE = matchsecond_contained_by_range,\n COMMUTATOR = @>,\n RESTRICT = eqsel,\n JOIN = eqjoinsel\n );\n\n CREATE OR REPLACE FUNCTION lower(matchsecond_type)\n RETURNS matchsecond_type AS\n $$ SELECT ($1.match_id, numrange(lower($1.second), lower($1.second),\n'[]'))::matchsecond_type $$\n LANGUAGE SQL;\n\nAnd a test table:\n\nReminder: Use `CREATE EXTENSION btree_gist;`\n\n CREATE TABLE shot AS(\n SELECT i AS id, (0, numrange(i, i+1))::matchsecond_type AS\nmatchsecond\n FROM generate_series(0,10000) AS i\n );\n\n ALTER TABLE shot ADD PRIMARY KEY (id);\n CREATE INDEX ix_shot_matchsecond\n ON shot\n USING gist (((matchsecond).match_id), ((matchsecond).second));\n\n----------------------------------------------\n\nThank you\n\nHello,The index on my composite type seems to be working most of the time, but there's a query I need to run where it's not working even with enable_seqscan=false. The composite type uses int and numrange subcolumns, and is designed to operate primarily as a range type.It will probably be easier for you to read the rest of this from my stackexchange post but I'll copy and paste the contents of it here as well. https://dba.stackexchange.com/questions/174099/postgres-composite-type-not-using-index The queries in this example are for testing purposes. 
It's possible for me to get the index to work by using the int and numrange separately rather than creating a new matchsecond_type, but using the composite type makes things much easier further down the pipeline where I have to tie this in with an ORM.This should include everything necessary to test it out yourself.-----------------------------------------------I'm using: `PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 6.3.0-5ubuntu1) 6.3.0 20170124, 64-bit`And for the purposes of this testing `SET enable_seqscan=false`.This uses the index: EXPLAIN ANALYZE SELECT * FROM shot WHERE lower(shot.matchsecond) <@ (0, numrange(5, 10))::matchsecond_type; Bitmap Heap Scan on shot (cost=471.17..790.19 rows=50 width=45) (actual time=2.601..29.555 rows=5 loops=1) Recheck Cond: (((matchsecond).match_id)::integer = (0)::integer) Filter: ((numrange(lower(((matchsecond).second)::numrange), lower(((matchsecond).second)::numrange), '[]'::text))::numrange <@ ('[5,10)'::numrange)::numrange) Rows Removed by Filter: 9996 Heap Blocks: exact=94 Buffers: shared hit=193 -> Bitmap Index Scan on ix_shot_matchsecond (cost=0.00..471.16 rows=10001 width=0) (actual time=2.516..2.516 rows=10001 loops=1) Index Cond: (((matchsecond).match_id)::integer = (0)::integer) Buffers: shared hit=99 Planning time: 0.401 ms Execution time: 29.623 msBut this doesn't: EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id, numrange(5, 10))::matchsecond_type; Seq Scan on shot (cost=10000000000.00..10000000319.02 rows=1 width=45) (actual time=0.091..20.003 rows=5 loops=1) Filter: ((((matchsecond).match_id)::integer = ((matchsecond).match_id)::integer) AND ((numrange(lower(((matchsecond).second)::numrange), lower(((matchsecond).second)::numrange), '[]'::text))::numrange <@ ('[5,10)'::numrange)::numrange)) Rows Removed by Filter: 9996 Buffers: shared hit=94 Planning time: 0.351 ms Execution time: 20.075 msNote the `0` in the first compared to `(shot.matchsecond).match_id` in the second on the right hand side of the `<@`. Interestingly, if the left hand side is simply `shot.matchsecond` instead of `lower(shot.matchsecond)`, the query manages to use the index. 
The index is also used when constructing the numrange with functions like `numrange(lower((shot.matchsecond).second), lower((shot.matchsecond).second + 10))`.Here are the relevant definitions: CREATE DOMAIN matchsecond_match AS integer NOT NULL; CREATE DOMAIN matchsecond_second AS numrange NOT NULL CHECK(VALUE <> numrange(0,0)); CREATE TYPE matchsecond_type AS ( match_id matchsecond_match, second matchsecond_second ); CREATE OR REPLACE FUNCTION matchsecond_contains_range(matchsecond_type, matchsecond_type) RETURNS BOOLEAN AS $$ SELECT $1.match_id = $2.match_id AND $1.second @> $2.second $$ LANGUAGE SQL; CREATE OPERATOR @> ( LEFTARG = matchsecond_type, RIGHTARG = matchsecond_type, PROCEDURE = matchsecond_contains_range, COMMUTATOR = <@, RESTRICT = eqsel, JOIN = eqjoinsel ); CREATE OR REPLACE FUNCTION matchsecond_contained_by_range(matchsecond_type, matchsecond_type) RETURNS BOOLEAN AS $$ SELECT $1.match_id = $2.match_id AND $1.second <@ $2.second $$ LANGUAGE SQL; CREATE OPERATOR <@ ( LEFTARG = matchsecond_type, RIGHTARG = matchsecond_type, PROCEDURE = matchsecond_contained_by_range, COMMUTATOR = @>, RESTRICT = eqsel, JOIN = eqjoinsel ); CREATE OR REPLACE FUNCTION lower(matchsecond_type) RETURNS matchsecond_type AS $$ SELECT ($1.match_id, numrange(lower($1.second), lower($1.second), '[]'))::matchsecond_type $$ LANGUAGE SQL;And a test table:Reminder: Use `CREATE EXTENSION btree_gist;` CREATE TABLE shot AS( SELECT i AS id, (0, numrange(i, i+1))::matchsecond_type AS matchsecond FROM generate_series(0,10000) AS i ); ALTER TABLE shot ADD PRIMARY KEY (id); CREATE INDEX ix_shot_matchsecond ON shot USING gist (((matchsecond).match_id), ((matchsecond).second));----------------------------------------------Thank you",
"msg_date": "Sat, 20 May 2017 16:33:16 -0700",
"msg_from": "Zac Goldstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index not being used on composite type for particular query"
},
{
"msg_contents": "Zac Goldstein <[email protected]> writes:\n> This uses the index:\n> ...\n> But this doesn't:\n\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot\n> WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id,\n> numrange(5, 10))::matchsecond_type;\n\nWell, yeah. After inlining the SQL functions, what you have is\n\n> Filter: ((((matchsecond).match_id)::integer =\n> ((matchsecond).match_id)::integer) AND\n> ((numrange(lower(((matchsecond).second)::numrange),\n> lower(((matchsecond).second)::numrange), '[]'::text))::numrange <@\n> ('[5,10)'::numrange)::numrange))\n\nand neither half of the AND has the form \"indexed_value indexable_operator\nconstant\", which is the basic requirement for an index condition. We're a\nlittle bit permissive about what \"constant\" means, but that most certainly\ndoesn't extend to expressions involving columns of the table. So the\nfirst clause loses because it's got variables on both sides, and the\nsecond loses because the LHS expression is not what the index is on.\n\nYou could build an additional index on that expression, if this shape\nof query is important enough to you to justify maintaining another index.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 May 2017 20:00:02 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index not being used on composite type for particular query"
},
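A sketch of the extra expression index suggested above, written to match the inlined left-hand side that appears in the EXPLAIN output (numrange(lower(...), lower(...), '[]')). It can only help the range clause; the match_id = match_id comparison has variables on both sides and is never indexable, so whether the planner actually uses this index for the second query should be re-checked with EXPLAIN.

CREATE INDEX ix_shot_matchsecond_lower
    ON shot
    USING gist (
        (numrange(lower(((matchsecond).second)::numrange),
                  lower(((matchsecond).second)::numrange),
                  '[]'))
    );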
{
"msg_contents": "Thanks for the fast reply and explanation, Tom. Overall, I have been\npleasantly surprised with the leniency of indexes on range types.\n\nOn Sat, May 20, 2017 at 5:00 PM, Tom Lane <[email protected]> wrote:\n\n> Zac Goldstein <[email protected]> writes:\n> > This uses the index:\n> > ...\n> > But this doesn't:\n>\n> > EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot\n> > WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id,\n> > numrange(5, 10))::matchsecond_type;\n>\n> Well, yeah. After inlining the SQL functions, what you have is\n>\n> > Filter: ((((matchsecond).match_id)::integer =\n> > ((matchsecond).match_id)::integer) AND\n> > ((numrange(lower(((matchsecond).second)::numrange),\n> > lower(((matchsecond).second)::numrange), '[]'::text))::numrange <@\n> > ('[5,10)'::numrange)::numrange))\n>\n> and neither half of the AND has the form \"indexed_value indexable_operator\n> constant\", which is the basic requirement for an index condition. We're a\n> little bit permissive about what \"constant\" means, but that most certainly\n> doesn't extend to expressions involving columns of the table. So the\n> first clause loses because it's got variables on both sides, and the\n> second loses because the LHS expression is not what the index is on.\n>\n> You could build an additional index on that expression, if this shape\n> of query is important enough to you to justify maintaining another index.\n>\n> regards, tom lane\n>\n\nThanks for the fast reply and explanation, Tom. Overall, I have been pleasantly surprised with the leniency of indexes on range types.On Sat, May 20, 2017 at 5:00 PM, Tom Lane <[email protected]> wrote:Zac Goldstein <[email protected]> writes:\n> This uses the index:\n> ...\n> But this doesn't:\n\n> EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM shot\n> WHERE lower(shot.matchsecond) <@ ((shot.matchsecond).match_id,\n> numrange(5, 10))::matchsecond_type;\n\nWell, yeah. After inlining the SQL functions, what you have is\n\n> Filter: ((((matchsecond).match_id)::integer =\n> ((matchsecond).match_id)::integer) AND\n> ((numrange(lower(((matchsecond).second)::numrange),\n> lower(((matchsecond).second)::numrange), '[]'::text))::numrange <@\n> ('[5,10)'::numrange)::numrange))\n\nand neither half of the AND has the form \"indexed_value indexable_operator\nconstant\", which is the basic requirement for an index condition. We're a\nlittle bit permissive about what \"constant\" means, but that most certainly\ndoesn't extend to expressions involving columns of the table. So the\nfirst clause loses because it's got variables on both sides, and the\nsecond loses because the LHS expression is not what the index is on.\n\nYou could build an additional index on that expression, if this shape\nof query is important enough to you to justify maintaining another index.\n\n regards, tom lane",
"msg_date": "Sat, 20 May 2017 19:51:10 -0700",
"msg_from": "Zac Goldstein <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index not being used on composite type for particular query"
}
] |
[
{
"msg_contents": "Good day\n\nI’ve got a performance and best practice question. We’re busy writing our \npersistence framework for our application port to PostgreSQL.\nWe have business entities that are split over multiple tables and in an \neffort to not have multiple trips to the database we’re trying to batch \nthese together. Some of the extensions uses serials, necessitating output \nfrom the one query to be used in the other. It is at this point that we’re \nrunning into the PostgreSQL limitation of only declaring variables in \nfunctions.\n\nI've come up with generating functions on the go, but I'm concerned about \nthe performance impact of this. I first wanted to use an anonoumys code \nblock, but then I cannot do parameter binding from npgsql.\n\nExample:\ncreate table table1\n(\n id bigserial,\n value1 text\n);\n\ncreate table table2\n(\n id bigserial,\n value2 text\n);\n\ncreate table table3\n(\n id bigserial,\n value3 text,\n table1_id1 bigint,\n table1_id2 bigint,\n table2_id bigint\n);\n\nI then generate this on the fly to insert a new entity\n\nCREATE OR REPLACE FUNCTION \ntmp_641f51c9_d188_4386_93f3_c40001b191e7(table1_value1_0 Text, \ntable1_value1_1 Text, table2_value2_0 Text, table3_value3_0 Text)\nRETURNS BIGINT AS $$\nDECLARE\n _table1_id1 bigint;\n _table1_id2 bigint;\n _table2_id bigint;\n _id bigint;\n table1_value1_0 ALIAS FOR $1;\n table2_value2_0 ALIAS FOR $2;\n table3_value3_0 ALIAS FOR $3;\nBEGIN\n\n INSERT INTO public.table1 (value1) VALUES (table1_value1_0)\n RETURNING id INTO _table1_id1;\n\n INSERT INTO public.table1 (value1) VALUES (table1_value1_1)\n RETURNING id INTO _table1_id2;\n\n INSERT INTO public.table2 (value2) VALUES (table2_value2_0)\n RETURNING id INTO _table2_id;\n\n INSERT INTO public.table3 (value3, table1_id1, table1_id2, table2_id) \nVALUES (table3_value3_0, _table1_id1, _table1_id2, _table2_id)\n RETURNING id INTO _id;\n\n RETURN _id;\nEND;\n$$ LANGUAGE plpgsql;\n\nSELECT tmp_641f51c9_d188_4386_93f3_c40001b191e7(@table1_value1_0, \n@table1_value1_1, @table2_value2_0, @table3_value3_0);\n\nDROP FUNCTION IF EXISTS \ntmp_641f51c9_d188_4386_93f3_c40001b191e7(Text,Text,Text,Text);\n\nIs there a better way I'm missing and is \"temp\" function creation in \nPostgres a big performance concern, especially if a server is under load?\n\nRegards\nRiaan Stander\n\n\n\n\n\n\n\nGood\nday\nI’ve\ngot a performance and best practice question. We’re busy writing our\npersistence framework for our application port to PostgreSQL.\nWe have business entities that are split over multiple tables and in an\neffort to not have multiple trips to the database we’re trying to batch\nthese together. Some of the extensions uses serials, necessitating output\nfrom the one query to be used in the other. It is at this point that we’re\nrunning into the PostgreSQL limitation of only declaring variables in\nfunctions.\nI've\ncome up with generating functions on the go, but I'm concerned about the\nperformance impact of this. 
I first wanted to use an anonoumys code block,\nbut then I cannot do parameter binding from npgsql.\nExample:\ncreate table table1\n(\n id bigserial,\n value1 text\n);\ncreate\ntable table2\n(\n id bigserial,\n value2 text\n);\ncreate\ntable table3\n(\n id bigserial,\n value3 text,\n table1_id1 bigint,\n table1_id2 bigint,\n table2_id bigint\n);\nI\nthen generate this on the fly to insert a new entity\nCREATE\nOR REPLACE FUNCTION\ntmp_641f51c9_d188_4386_93f3_c40001b191e7(table1_value1_0 Text,\ntable1_value1_1 Text, table2_value2_0 Text, table3_value3_0 Text)\nRETURNS BIGINT AS $$\nDECLARE\n _table1_id1 bigint;\n _table1_id2 bigint;\n _table2_id bigint;\n _id bigint;\n table1_value1_0 ALIAS FOR $1;\n table2_value2_0 ALIAS FOR $2;\n table3_value3_0 ALIAS FOR $3;\nBEGIN\n \nINSERT INTO public.table1 (value1) VALUES (table1_value1_0)\n RETURNING id INTO _table1_id1;\n \nINSERT INTO public.table1 (value1) VALUES (table1_value1_1)\n RETURNING id INTO _table1_id2;\n \nINSERT INTO public.table2 (value2) VALUES (table2_value2_0)\n RETURNING id INTO _table2_id;\n \nINSERT INTO public.table3 (value3, table1_id1, table1_id2, table2_id)\nVALUES (table3_value3_0, _table1_id1, _table1_id2, _table2_id)\n RETURNING id INTO _id;\n \nRETURN _id;\nEND;\n$$ LANGUAGE plpgsql;\nSELECT\ntmp_641f51c9_d188_4386_93f3_c40001b191e7(@table1_value1_0,\n@table1_value1_1, @table2_value2_0, @table3_value3_0);\nDROP\nFUNCTION IF EXISTS\ntmp_641f51c9_d188_4386_93f3_c40001b191e7(Text,Text,Text,Text);\nIs\nthere a better way I'm missing and is \"temp\" function creation in Postgres\na big performance concern, especially if a server is under load?\nRegards\nRiaan Stander",
"msg_date": "Sun, 21 May 2017 08:25:49 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bulk persistence strategy"
},
{
"msg_contents": "Riaan Stander <[email protected]> writes:\n> I've come up with generating functions on the go, but I'm concerned about \n> the performance impact of this. I first wanted to use an anonoumys code \n> block, but then I cannot do parameter binding from npgsql.\n> ...\n> Is there a better way I'm missing and is \"temp\" function creation in \n> Postgres a big performance concern, especially if a server is under load?\n\nThe function itself is only one pg_proc row, but if you're expecting\nto do this thousands of times a minute you might have to adjust autovacuum\nsettings to avoid bad bloat in pg_proc.\n\nIf you're intending that these functions be use-once, it's fairly unclear\nto me why you bother, as opposed to just issuing the underlying SQL\nstatements.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 21 May 2017 10:33:06 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
},
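For what it's worth, one way to "just issue the underlying SQL statements" in a single round trip, without a function, is a data-modifying CTE chain: each INSERT returns its generated id and the final INSERT consumes them. A sketch against the table1/table2/table3 example from the first post, with $1..$4 standing for the bound parameters (not a drop-in replacement for the generated-function machinery, just the shape):

WITH t1a AS (
    INSERT INTO public.table1 (value1) VALUES ($1) RETURNING id
), t1b AS (
    INSERT INTO public.table1 (value1) VALUES ($2) RETURNING id
), t2 AS (
    INSERT INTO public.table2 (value2) VALUES ($3) RETURNING id
)
INSERT INTO public.table3 (value3, table1_id1, table1_id2, table2_id)
SELECT $4, t1a.id, t1b.id, t2.id
FROM t1a, t1b, t2          -- three single-row CTEs, so this produces one row
RETURNING id;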
{
"msg_contents": "> Riaan Stander <[email protected]> writes:\n>> I've come up with generating functions on the go, but I'm concerned about\n>> the performance impact of this. I first wanted to use an anonoumys code\n>> block, but then I cannot do parameter binding from npgsql.\n>> ...\n>> Is there a better way I'm missing and is \"temp\" function creation in\n>> Postgres a big performance concern, especially if a server is under load?\n> The function itself is only one pg_proc row, but if you're expecting\n> to do this thousands of times a minute you might have to adjust autovacuum\n> settings to avoid bad bloat in pg_proc.\n>\n> If you're intending that these functions be use-once, it's fairly unclear\n> to me why you bother, as opposed to just issuing the underlying SQL\n> statements.\n>\n> \t\t\tregards, tom lane\n\nThe intended use is use-once. The reason is that the statements might \ndiffer per call, especially when we start doing updates. The ideal would \nbe to just issue the sql statements, but I was trying to cut down on \nnetwork calls. To batch them together and get output from one query as \ninput for the others (declare variables), I have to wrap them in a \nfunction in Postgres. Or am I missing something? In SQL Server TSQL I \ncould declare variables in any statement as required.\n\n\n\n\n\n\n\n\n\nRiaan Stander <[email protected]> writes:\n\n\nI've come up with generating functions on the go, but I'm concerned about \nthe performance impact of this. I first wanted to use an anonoumys code \nblock, but then I cannot do parameter binding from npgsql.\n...\nIs there a better way I'm missing and is \"temp\" function creation in \nPostgres a big performance concern, especially if a server is under load?\n\n\n\nThe function itself is only one pg_proc row, but if you're expecting\nto do this thousands of times a minute you might have to adjust autovacuum\nsettings to avoid bad bloat in pg_proc.\n\nIf you're intending that these functions be use-once, it's fairly unclear\nto me why you bother, as opposed to just issuing the underlying SQL\nstatements.\n\n\t\t\tregards, tom lane\n\n\n\n The intended use is use-once. The reason is that the statements\n might differ per call, especially when we start doing updates. The\n ideal would be to just issue the sql statements, but I was trying\n to cut down on network calls. To batch them together and get\n output from one query as input for the others (declare variables),\n I have to wrap them in a function in Postgres. Or am I missing\n something? In SQL Server TSQL I could declare variables in any\n statement as required.",
"msg_date": "Sun, 21 May 2017 21:29:53 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "Riaan Stander <[email protected]> writes:\n> The intended use is use-once. The reason is that the statements might \n> differ per call, especially when we start doing updates. The ideal would \n> be to just issue the sql statements, but I was trying to cut down on \n> network calls. To batch them together and get output from one query as \n> input for the others (declare variables), I have to wrap them in a \n> function in Postgres. Or am I missing something? In SQL Server TSQL I \n> could declare variables in any statement as required.\n\nHm, well, feeding data forward to the next query without a network\nround trip is a valid concern.\n\nHow stylized are these commands? Have you considered pushing the\ngeneration logic into the function, so that you just have one (or\na few) persistent functions, and the variability slack is taken\nup through EXECUTE'd strings? That'd likely be significantly\nmore efficient than one-use functions. Even disregarding the\npg_proc update traffic, plpgsql isn't going to shine in that usage\nbecause it's optimized for repeated execution of functions.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 21 May 2017 18:37:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
},
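A rough sketch of the one-persistent-function shape suggested above, using EXECUTE ... USING so the statement texts can be built dynamically while the values stay as bound parameters. Table and parameter names are taken from the example in the first post; the real generation logic (changed-field tracking, optimistic concurrency) would obviously be more involved.

CREATE OR REPLACE FUNCTION save_entity(
    _value1_a text, _value1_b text, _value2 text, _value3 text)
RETURNS bigint AS $$
DECLARE
    _id1 bigint;
    _id2 bigint;
    _id3 bigint;
    _id  bigint;
BEGIN
    -- The strings handed to EXECUTE could just as well be assembled at run
    -- time (e.g. with format() and %I for identifiers); values always go
    -- through USING, never spliced into the statement text.
    EXECUTE 'INSERT INTO public.table1 (value1) VALUES ($1) RETURNING id'
        INTO _id1 USING _value1_a;
    EXECUTE 'INSERT INTO public.table1 (value1) VALUES ($1) RETURNING id'
        INTO _id2 USING _value1_b;
    EXECUTE 'INSERT INTO public.table2 (value2) VALUES ($1) RETURNING id'
        INTO _id3 USING _value2;
    EXECUTE 'INSERT INTO public.table3 (value3, table1_id1, table1_id2, table2_id)
             VALUES ($1, $2, $3, $4) RETURNING id'
        INTO _id USING _value3, _id1, _id2, _id3;
    RETURN _id;
END;
$$ LANGUAGE plpgsql;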
{
"msg_contents": "\n> Riaan Stander <[email protected]> writes:\n>> The intended use is use-once. The reason is that the statements might\n>> differ per call, especially when we start doing updates. The ideal would\n>> be to just issue the sql statements, but I was trying to cut down on\n>> network calls. To batch them together and get output from one query as\n>> input for the others (declare variables), I have to wrap them in a\n>> function in Postgres. Or am I missing something? In SQL Server TSQL I\n>> could declare variables in any statement as required.\n> Hm, well, feeding data forward to the next query without a network\n> round trip is a valid concern.\n>\n> How stylized are these commands? Have you considered pushing the\n> generation logic into the function, so that you just have one (or\n> a few) persistent functions, and the variability slack is taken\n> up through EXECUTE'd strings? That'd likely be significantly\n> more efficient than one-use functions. Even disregarding the\n> pg_proc update traffic, plpgsql isn't going to shine in that usage\n> because it's optimized for repeated execution of functions.\n>\n> \t\t\tregards, tom lane\nThe commands are generated from a complex object/type in the \napplication. Some of them can be quite large. With modifications they do \nstate tracking too, so that we only update fields that actually changed \nand can do optimistic concurrency checking.\n\nIt'll probably make more sense to try create a function per type of \nobject that deals with the query generation. That way I can create a \nPostgres type that maps from the application object.\n\nThanks for the advice. I'll give that a shot.\n\nRegards\nRiaan Stander\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 04:14:35 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "On 22 May 2017 at 03:14, Riaan Stander <[email protected]> wrote:\n>\n>> Riaan Stander <[email protected]> writes:\n>>>\n>>> The intended use is use-once. The reason is that the statements might\n>>> differ per call, especially when we start doing updates. The ideal would\n>>> be to just issue the sql statements, but I was trying to cut down on\n>>> network calls. To batch them together and get output from one query as\n>>> input for the others (declare variables), I have to wrap them in a\n>>> function in Postgres. Or am I missing something? In SQL Server TSQL I\n>>> could declare variables in any statement as required.\n>>\n>> Hm, well, feeding data forward to the next query without a network\n>> round trip is a valid concern.\n>>\n>> How stylized are these commands? Have you considered pushing the\n>> generation logic into the function, so that you just have one (or\n>> a few) persistent functions, and the variability slack is taken\n>> up through EXECUTE'd strings? That'd likely be significantly\n>> more efficient than one-use functions. Even disregarding the\n>> pg_proc update traffic, plpgsql isn't going to shine in that usage\n>> because it's optimized for repeated execution of functions.\n>>\n>> regards, tom lane\n>\n> The commands are generated from a complex object/type in the application.\n> Some of them can be quite large. With modifications they do state tracking\n> too, so that we only update fields that actually changed and can do\n> optimistic concurrency checking.\n>\n> It'll probably make more sense to try create a function per type of object\n> that deals with the query generation. That way I can create a Postgres type\n> that maps from the application object.\n>\n> Thanks for the advice. I'll give that a shot.\n\nIt sounds like you don't know about anonymous code blocks with DO\nhttps://www.postgresql.org/docs/devel/static/sql-do.html\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 06:15:26 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "\n> On 22 May 2017 at 03:14, Riaan Stander <[email protected]> wrote:\n>>> Riaan Stander <[email protected]> writes:\n>>>> The intended use is use-once. The reason is that the statements might\n>>>> differ per call, especially when we start doing updates. The ideal would\n>>>> be to just issue the sql statements, but I was trying to cut down on\n>>>> network calls. To batch them together and get output from one query as\n>>>> input for the others (declare variables), I have to wrap them in a\n>>>> function in Postgres. Or am I missing something? In SQL Server TSQL I\n>>>> could declare variables in any statement as required.\n>>> Hm, well, feeding data forward to the next query without a network\n>>> round trip is a valid concern.\n>>>\n>>> How stylized are these commands? Have you considered pushing the\n>>> generation logic into the function, so that you just have one (or\n>>> a few) persistent functions, and the variability slack is taken\n>>> up through EXECUTE'd strings? That'd likely be significantly\n>>> more efficient than one-use functions. Even disregarding the\n>>> pg_proc update traffic, plpgsql isn't going to shine in that usage\n>>> because it's optimized for repeated execution of functions.\n>>>\n>>> regards, tom lane\n>> The commands are generated from a complex object/type in the application.\n>> Some of them can be quite large. With modifications they do state tracking\n>> too, so that we only update fields that actually changed and can do\n>> optimistic concurrency checking.\n>>\n>> It'll probably make more sense to try create a function per type of object\n>> that deals with the query generation. That way I can create a Postgres type\n>> that maps from the application object.\n>>\n>> Thanks for the advice. I'll give that a shot.\n> It sounds like you don't know about anonymous code blocks with DO\n> https://www.postgresql.org/docs/devel/static/sql-do.html\n>\n\nYes I do know about that feature. My first implemented generated an \nanonymous code block, but to my utter dismay once I tried actually doing \nparameter binding from the application it did not work. This seems to be \na Postgres limitation actually stated in the documentation. The \nanonymous code block is treated as a function body with no parameters.\n\nThanks for the suggestion though.\n\nRegards\nRiaan Stander\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 10:06:17 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "On 22 May 2017 at 09:06, Riaan Stander <[email protected]> wrote:\n\n>> It sounds like you don't know about anonymous code blocks with DO\n>> https://www.postgresql.org/docs/devel/static/sql-do.html\n>>\n>\n> Yes I do know about that feature. My first implemented generated an\n> anonymous code block, but to my utter dismay once I tried actually doing\n> parameter binding from the application it did not work. This seems to be a\n> Postgres limitation actually stated in the documentation. The anonymous code\n> block is treated as a function body with no parameters.\n>\n> Thanks for the suggestion though.\n\nPerhaps we should look into parameterisable DO statements.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 09:27:25 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "\n> On 22 May 2017 at 09:06, Riaan Stander <[email protected]> wrote:\n>\n>>> It sounds like you don't know about anonymous code blocks with DO\n>>> https://www.postgresql.org/docs/devel/static/sql-do.html\n>>>\n>> Yes I do know about that feature. My first implemented generated an\n>> anonymous code block, but to my utter dismay once I tried actually doing\n>> parameter binding from the application it did not work. This seems to be a\n>> Postgres limitation actually stated in the documentation. The anonymous code\n>> block is treated as a function body with no parameters.\n>>\n>> Thanks for the suggestion though.\n> Perhaps we should look into parameterisable DO statements.\n>\nNow that I would second!!\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 11:36:01 +0200",
"msg_from": "Riaan Stander <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bulk persistence strategy"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> It sounds like you don't know about anonymous code blocks with DO\n> https://www.postgresql.org/docs/devel/static/sql-do.html\n\nNo, the problem was that there are also some parameters to be passed\nin from the application, and DO doesn't take any parameters; so that\nwould require inserting them manually into the DO text, with all the\nattendant hazards of getting-it-wrong.\n\nWe've speculated before about letting DO grow some parameter handling,\nbut it's not gotten to the top of anyone's to-do list.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 07:50:15 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
},
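Since DO takes no parameters, one stopgap some applications use (not something proposed in this thread, just a sketch) is to pass values through a custom setting with set_config()/current_setting(); any two-part setting name works on 9.2 and later. The setting name and value below are made up for illustration.

```sql
-- Hypothetical example: smuggle a value into an anonymous block via a
-- custom setting, since DO itself cannot take parameters.
SELECT set_config('app.user_id', '42', false);  -- false = visible for the whole session

DO $$
DECLARE
    v_user_id bigint := current_setting('app.user_id')::bigint;
BEGIN
    RAISE NOTICE 'running for user %', v_user_id;
END;
$$;
```

The obvious drawback is that everything still travels as text and has to be cast back inside the block, which is part of why the thread leans toward a persistent function instead.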
{
"msg_contents": "On Sun, May 21, 2017 at 5:37 PM, Tom Lane <[email protected]> wrote:\n> Riaan Stander <[email protected]> writes:\n>> The intended use is use-once. The reason is that the statements might\n>> differ per call, especially when we start doing updates. The ideal would\n>> be to just issue the sql statements, but I was trying to cut down on\n>> network calls.\n>\n> Hm, well, feeding data forward to the next query without a network\n> round trip is a valid concern.\n>\n> How stylized are these commands? Have you considered pushing the\n> generation logic into the function, so that you just have one (or\n> a few) persistent functions, and the variability slack is taken\n> up through EXECUTE'd strings?\n\n+1. If 'DO' could return a value and take arguments, we'd probably\njust use that. With the status quo however the SQL generation\nfacilities need to be moved into the database as dynamic SQL (that is,\nexecuted with EXECUTE). This will provide the speed benefits while\nmaintaining (albeit with some rethinking) your abstraction model\nPlease make liberal use of quote_ident() and quote_literal() to\nminimize security risks.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 10:37:52 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bulk persistence strategy"
}
] |
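As a rough illustration of the approach Tom and Merlin describe -- one persistent function that EXECUTEs generated SQL -- here is a minimal sketch. The function, table and column names are invented, and a real version would assemble much larger statements.

```sql
-- One persistent function instead of a throwaway function per call.
-- format('%I') quotes identifiers (equivalent to quote_ident()), and the
-- values are bound through EXECUTE ... USING rather than spliced into the
-- string, so quote_literal() is only needed when a value must be inlined.
CREATE OR REPLACE FUNCTION apply_update(p_table text, p_col text,
                                        p_val  text, p_id  bigint)
RETURNS bigint
LANGUAGE plpgsql AS
$$
DECLARE
    v_id bigint;
BEGIN
    EXECUTE format('UPDATE %I SET %I = $1 WHERE id = $2 RETURNING id',
                   p_table, p_col)
    INTO v_id
    USING p_val, p_id;
    RETURN v_id;
END;
$$;

-- e.g. SELECT apply_update('orders', 'status', 'shipped', 42);
```

Because the function persists, plpgsql can cache its plan machinery across calls, which is exactly the benefit the one-use functions were giving up.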
[
{
"msg_contents": "\nI need to be able to quickly find rows where a column is not null (only a \nsmall percent of the rows will have that column not null).\n\nShould I do:\n\nCREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL\n\nor:\n\nCREATE INDEX ON table (col) WHERE col IS NOT NULL\n\nI'm thinking the first index will make a smaller, simpler, index since I \ndon't actually need to index the value of the column. But are there any \ndrawbacks I may not be aware of? Or perhaps there are no actual benefits?\n\n \t-Ariel\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 11:17:31 -0400 (EDT)",
"msg_from": "Ariel <[email protected]>",
"msg_from_op": true,
"msg_subject": "index of only not null, use function index?"
},
{
"msg_contents": "On Mon, May 22, 2017 at 10:17 AM, Ariel <[email protected]> wrote:\n>\n> I need to be able to quickly find rows where a column is not null (only a\n> small percent of the rows will have that column not null).\n>\n> Should I do:\n>\n> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL\n>\n> or:\n>\n> CREATE INDEX ON table (col) WHERE col IS NOT NULL\n>\n> I'm thinking the first index will make a smaller, simpler, index since I\n> don't actually need to index the value of the column. But are there any\n> drawbacks I may not be aware of? Or perhaps there are no actual benefits?\n\n\nYou are correct. I don't see any downside to converting to bool; this\nwill be more efficient especially if 'col' is large at the small cost\nof some generality. Having said that, what I typically do in such\ncases (this comes a lot in database driven work queues) something like\nthis:\n\nCREATE INDEX ON table (OrderCol) WHERE col IS NOT NULL;\n\nWhere \"OrderCol\" is some field that defines some kind of order to the\nitems that you are marking off. This will give very good performance\nof queries in the form of:\n\nSELECT Col FROM table WHERE col IS NOT NULL ORDER BY OrderCol LIMIT 1;\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 8 Jun 2017 09:47:11 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index of only not null, use function index?"
},
{
"msg_contents": "Merlin Moncure <[email protected]> writes:\n> On Mon, May 22, 2017 at 10:17 AM, Ariel <[email protected]> wrote:\n>> Should I do:\n>> \n>> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL\n>> \n>> or:\n>> \n>> CREATE INDEX ON table (col) WHERE col IS NOT NULL\n>> \n>> I'm thinking the first index will make a smaller, simpler, index since I\n>> don't actually need to index the value of the column. But are there any\n>> drawbacks I may not be aware of? Or perhaps there are no actual benefits?\n\n> You are correct. I don't see any downside to converting to bool; this\n> will be more efficient especially if 'col' is large at the small cost\n> of some generality.\n\nDepends on the datatype really. Because of alignment considerations,\nthe index tuples will be the same size for any column value <= 4 bytes,\nor <= 8 bytes on 64-bit hardware. So if this is an integer column,\nor even bigint on 64-bit, you won't save any space with the first\nindex definition. If it's a text column with an average width larger\nthan what I just mentioned, you could save some space that way.\n\nIn general, indexes on expressions are a tad more expensive to maintain\nthan indexes on plain column values. And the second index at least has\nthe potential to be useful for other queries than the one you're thinking\nabout. So personally I'd go with the second definition unless you can\nshow that there's a really meaningful space savings with the first one.\n\n> Having said that, what I typically do in such\n> cases (this comes a lot in database driven work queues) something like\n> this:\n> CREATE INDEX ON table (OrderCol) WHERE col IS NOT NULL;\n\nRight, you can frequently get a lot of mileage out of indexing something\nthat's unrelated to the predicate condition, but is also needed by the\nquery you want to optimize.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 08 Jun 2017 10:58:40 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index of only not null, use function index?"
},
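A quick way to check Tom's point about space savings on a wide column is to build a toy table and compare the two index definitions directly; everything below is a made-up example.

```sql
-- ~1% of rows get a 32-character md5 value, the rest stay NULL.
CREATE TABLE t (id bigserial PRIMARY KEY, col text);
INSERT INTO t (col)
SELECT CASE WHEN random() < 0.01 THEN md5(random()::text) END
FROM generate_series(1, 1000000);

CREATE INDEX t_col_bool_idx ON t ((col IS NOT NULL)) WHERE col IS NOT NULL;
CREATE INDEX t_col_idx      ON t (col)               WHERE col IS NOT NULL;

SELECT relname, pg_size_pretty(pg_relation_size(oid)) AS index_size
FROM pg_class
WHERE relname IN ('t_col_bool_idx', 't_col_idx');
```

With a short column such as an integer, the two sizes should come out essentially identical, which is the alignment effect described above.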
{
"msg_contents": "Normally, I find that in these situations, it makes sense to index the\nprimary key of the table WHERE col is not null, because it will usually\ncover the largest number of cases, and is much better than a two-value\nboolean index, for example.\n\nOn Thu, Jun 8, 2017 at 9:58 AM, Tom Lane <[email protected]> wrote:\n\n> Merlin Moncure <[email protected]> writes:\n> > On Mon, May 22, 2017 at 10:17 AM, Ariel <[email protected]> wrote:\n> >> Should I do:\n> >>\n> >> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL\n> >>\n> >> or:\n> >>\n> >> CREATE INDEX ON table (col) WHERE col IS NOT NULL\n> >>\n> >> I'm thinking the first index will make a smaller, simpler, index since I\n> >> don't actually need to index the value of the column. But are there any\n> >> drawbacks I may not be aware of? Or perhaps there are no actual\n> benefits?\n>\n> > You are correct. I don't see any downside to converting to bool; this\n> > will be more efficient especially if 'col' is large at the small cost\n> > of some generality.\n>\n> Depends on the datatype really. Because of alignment considerations,\n> the index tuples will be the same size for any column value <= 4 bytes,\n> or <= 8 bytes on 64-bit hardware. So if this is an integer column,\n> or even bigint on 64-bit, you won't save any space with the first\n> index definition. If it's a text column with an average width larger\n> than what I just mentioned, you could save some space that way.\n>\n> In general, indexes on expressions are a tad more expensive to maintain\n> than indexes on plain column values. And the second index at least has\n> the potential to be useful for other queries than the one you're thinking\n> about. So personally I'd go with the second definition unless you can\n> show that there's a really meaningful space savings with the first one.\n>\n> > Having said that, what I typically do in such\n> > cases (this comes a lot in database driven work queues) something like\n> > this:\n> > CREATE INDEX ON table (OrderCol) WHERE col IS NOT NULL;\n>\n> Right, you can frequently get a lot of mileage out of indexing something\n> that's unrelated to the predicate condition, but is also needed by the\n> query you want to optimize.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nNormally, I find that in these situations, it makes sense to index the primary key of the table WHERE col is not null, because it will usually cover the largest number of cases, and is much better than a two-value boolean index, for example.On Thu, Jun 8, 2017 at 9:58 AM, Tom Lane <[email protected]> wrote:Merlin Moncure <[email protected]> writes:\n> On Mon, May 22, 2017 at 10:17 AM, Ariel <[email protected]> wrote:\n>> Should I do:\n>>\n>> CREATE INDEX ON table ((col IS NOT NULL)) WHERE col IS NOT NULL\n>>\n>> or:\n>>\n>> CREATE INDEX ON table (col) WHERE col IS NOT NULL\n>>\n>> I'm thinking the first index will make a smaller, simpler, index since I\n>> don't actually need to index the value of the column. But are there any\n>> drawbacks I may not be aware of? Or perhaps there are no actual benefits?\n\n> You are correct. I don't see any downside to converting to bool; this\n> will be more efficient especially if 'col' is large at the small cost\n> of some generality.\n\nDepends on the datatype really. 
Because of alignment considerations,\nthe index tuples will be the same size for any column value <= 4 bytes,\nor <= 8 bytes on 64-bit hardware. So if this is an integer column,\nor even bigint on 64-bit, you won't save any space with the first\nindex definition. If it's a text column with an average width larger\nthan what I just mentioned, you could save some space that way.\n\nIn general, indexes on expressions are a tad more expensive to maintain\nthan indexes on plain column values. And the second index at least has\nthe potential to be useful for other queries than the one you're thinking\nabout. So personally I'd go with the second definition unless you can\nshow that there's a really meaningful space savings with the first one.\n\n> Having said that, what I typically do in such\n> cases (this comes a lot in database driven work queues) something like\n> this:\n> CREATE INDEX ON table (OrderCol) WHERE col IS NOT NULL;\n\nRight, you can frequently get a lot of mileage out of indexing something\nthat's unrelated to the predicate condition, but is also needed by the\nquery you want to optimize.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 8 Jun 2017 11:05:40 -0500",
"msg_from": "Jeremy Finzel <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index of only not null, use function index?"
},
{
"msg_contents": "On Thu, Jun 8, 2017 at 11:05 AM, Jeremy Finzel <[email protected]> wrote:\n> On Thu, Jun 8, 2017 at 9:58 AM, Tom Lane <[email protected]> wrote:\n>> Merlin Moncure <[email protected]> writes:\n>> > Having said that, what I typically do in such\n>> > cases (this comes a lot in database driven work queues) something like\n>> > this:\n>> > CREATE INDEX ON table (OrderCol) WHERE col IS NOT NULL;\n>>\n>> Right, you can frequently get a lot of mileage out of indexing something\n>> that's unrelated to the predicate condition, but is also needed by the\n>> query you want to optimize.\n\n> Normally, I find that in these situations, it makes sense to index the\n> primary key of the table WHERE col is not null, because it will usually\n> cover the largest number of cases, and is much better than a two-value\n> boolean index, for example.\n\n[meta note: please try to avoid top-posting]\n\nYeah, if you index the primary key and query it like this:\n\nCREATE INDEX ON table (pkey) WHERE col IS NOT NULL;\n\nSELECT pkey FROM table WHERE col IS NOT NULL\nORDER BY pkey LIMIT n;\n\nThis can give the best possible results since this can qualify for an\nindex only scan :-).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 8 Jun 2017 14:57:38 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: index of only not null, use function index?"
}
] |
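To see the index-only scan Merlin mentions, a partial index on the ordering column (here the primary key of the toy table t from the earlier sketch) can be checked with EXPLAIN; note that the visibility map has to be reasonably current for the planner to use an Index Only Scan.

```sql
CREATE INDEX t_id_partial_idx ON t (id) WHERE col IS NOT NULL;
VACUUM t;   -- keeps the visibility map fresh enough for an Index Only Scan

EXPLAIN (ANALYZE, BUFFERS)
SELECT id
FROM t
WHERE col IS NOT NULL
ORDER BY id
LIMIT 10;
```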
[
{
"msg_contents": "I need to make a view which decorates rows from a parent table with\naggregated values from a child table. I can think of two ways to write it,\none which aggregates the child table to make a new query table and joins\nthe parent to that, as shown in \"view1\" below. Or does subselect in the\nselect list to aggregate just the currently matching rows, and returns that\nvalue, as in \"view2\" below.\n\nWhile these two are semantically equivalent, the planner doesn't understand\nthat, and always executes them pretty much the way you would naively do it\nbased on the text of the query.\n\nBut view1 is pretty slow if the WHERE clause is highly selective (like\n\"WHERE cutoff<0.00001\") because it has to summarize the entire child table\njust to pull out a few rows. But view2 is pretty slow if the entire view\nor most of it (like \"WHERE cutoff<0.9\") is being returned.\n\nIs there some 3rd way to write the query which allows the planner to switch\nbetween strategies (summarize whole table vs summarize values on demand)\ndepending on the known selectivity of the where clause?\n\nIn this case, the planner is getting the relative cost estimates roughly\ncorrect. It is not a problem of mis-estimation.\n\nI can always create two views, view_small and view_large, and swap between\nthem based on my own knowledge of how restrictive a query is likely to be,\nbut that is rather annoying. Especially in the real-world situation, which\nis quite a bit more complex than this.\n\ncreate table thing as select x as id, random() as cutoff from\ngenerate_series(1,2000000) f(x);\n\ncreate table thing_alias as select\nfloor(power(random()*power(2000000,5),0.2))::int thing_id, md5(x::text),\nrandom() as priority from generate_series(1,150000) f(x);\n\ncreate index on thing_alias (thing_id );\n\ncreate index on thing (cutoff );\n\nvacuum; analyze;\n\ncreate view view1 as select id, md5,cutoff from thing left join\n (\n select distinct on (thing_id) thing_id, md5 from thing_alias\n order by thing_id, priority desc\n ) as foo\n on (thing_id=id);\n\ncreate view view2 as select id,\n (\n select md5 from thing_alias where thing_id=id\n order by priority desc limit 1\n ) as md5,\n cutoff from thing;\n\nCheers,\n\nJeff\n\nI need to make a view which decorates rows from a parent table with aggregated values from a child table. I can think of two ways to write it, one which aggregates the child table to make a new query table and joins the parent to that, as shown in \"view1\" below. Or does subselect in the select list to aggregate just the currently matching rows, and returns that value, as in \"view2\" below.While these two are semantically equivalent, the planner doesn't understand that, and always executes them pretty much the way you would naively do it based on the text of the query.But view1 is pretty slow if the WHERE clause is highly selective (like \"WHERE cutoff<0.00001\") because it has to summarize the entire child table just to pull out a few rows. But view2 is pretty slow if the entire view or most of it (like \"WHERE cutoff<0.9\") is being returned.Is there some 3rd way to write the query which allows the planner to switch between strategies (summarize whole table vs summarize values on demand) depending on the known selectivity of the where clause?In this case, the planner is getting the relative cost estimates roughly correct. 
It is not a problem of mis-estimation.I can always create two views, view_small and view_large, and swap between them based on my own knowledge of how restrictive a query is likely to be, but that is rather annoying. Especially in the real-world situation, which is quite a bit more complex than this.create table thing as select x as id, random() as cutoff from generate_series(1,2000000) f(x);create table thing_alias as select floor(power(random()*power(2000000,5),0.2))::int thing_id, md5(x::text), random() as priority from generate_series(1,150000) f(x);create index on thing_alias (thing_id );create index on thing (cutoff );vacuum; analyze;create view view1 as select id, md5,cutoff from thing left join ( select distinct on (thing_id) thing_id, md5 from thing_alias order by thing_id, priority desc ) as foo on (thing_id=id);create view view2 as select id, ( select md5 from thing_alias where thing_id=id order by priority desc limit 1 ) as md5, cutoff from thing;Cheers,Jeff",
"msg_date": "Mon, 22 May 2017 12:57:39 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "select subquery versus join subquery"
},
{
"msg_contents": "Am 05/22/2017 um 09:57 PM schrieb Jeff Janes:\n> I need to make a view which decorates rows from a parent table with\n> aggregated values from a child table. I can think of two ways to write\n> it, one which aggregates the child table to make a new query table and\n> joins the parent to that, as shown in \"view1\" below. Or does subselect\n> in the select list to aggregate just the currently matching rows, and\n> returns that value, as in \"view2\" below.\n> \n> While these two are semantically equivalent, the planner doesn't\n> understand that, and always executes them pretty much the way you would\n> naively do it based on the text of the query.\n> \n> But view1 is pretty slow if the WHERE clause is highly selective (like\n> \"WHERE cutoff<0.00001\") because it has to summarize the entire child\n> table just to pull out a few rows. But view2 is pretty slow if the\n> entire view or most of it (like \"WHERE cutoff<0.9\") is being returned.\n> \n> Is there some 3rd way to write the query which allows the planner to\n> switch between strategies (summarize whole table vs summarize values on\n> demand) depending on the known selectivity of the where clause?\n> \n> In this case, the planner is getting the relative cost estimates roughly\n> correct. It is not a problem of mis-estimation.\n> \n> I can always create two views, view_small and view_large, and swap\n> between them based on my own knowledge of how restrictive a query is\n> likely to be, but that is rather annoying. Especially in the real-world\n> situation, which is quite a bit more complex than this.\n> \n> create table thing as select x as id, random() as cutoff from\n> generate_series(1,2000000) f(x);\n> \n> create table thing_alias as select\n> floor(power(random()*power(2000000,5),0.2))::int thing_id, md5(x::text),\n> random() as priority from generate_series(1,150000) f(x);\n> \n> create index on thing_alias (thing_id );\n> \n> create index on thing (cutoff );\n> \n> vacuum; analyze;\n> \n> create view view1 as select id, md5,cutoff from thing left join \n> (\n> select distinct on (thing_id) thing_id, md5 from thing_alias \n> order by thing_id, priority desc\n> ) as foo \n> on (thing_id=id);\n> \n> create view view2 as select id, \n> (\n> select md5 from thing_alias where thing_id=id \n> order by priority desc limit 1\n> ) as md5, \n> cutoff from thing;\n> \n> Cheers,\n> \n> Jeff\n\nHi Jeff,\n\nhow does something like\n\nCREATE OR REPLACE VIEW public.view3 AS\n SELECT thing.id,\n foo.md5,\n thing.cutoff\n FROM thing,\n LATERAL ( SELECT DISTINCT ON (thing_alias.thing_id)\nthing_alias.thing_id,\n thing_alias.md5\n FROM thing_alias\n WHERE thing_alias.thing_id = thing.id\n ORDER BY thing_alias.thing_id, thing_alias.priority DESC) foo\n\nwork for you? At least that's always using an index scan here, as\nopposed to view1, which (for me) defaults to a SeqScan on thing_alias at\na low cutoff.\n\n*****\nNote btw. that both view1 and view2 don't return any md5 values for me,\nwhile view3 does!\n*****\n\nResults (ms, median of 3 runs):\ncutoff< 0.1 0.9\nview1: 348 1022\nview2: 844 6484\nview3: 842 5976\n\nWith\n\n LATERAL ( SELECT string_agg(thing_alias.md5, ','::text) AS md5\n FROM thing_alias\n WHERE thing_alias.thing_id = thing.id\n GROUP BY thing_alias.thing_id) foo\n\n(which seems to make more sense ;-)\n\nI yield 483 (0.1) and 3410 (0.9) ms (and return md5-Aggregates).\n\nCheers,\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339",
"msg_date": "Tue, 23 May 2017 13:03:37 +0200",
"msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select subquery versus join subquery"
},
{
"msg_contents": "On Tue, May 23, 2017 at 4:03 AM, Gunnar \"Nick\" Bluth <\[email protected]> wrote:\n\n> Am 05/22/2017 um 09:57 PM schrieb Jeff Janes:\n> >\n> > create view view2 as select id,\n> > (\n> > select md5 from thing_alias where thing_id=id\n> > order by priority desc limit 1\n> > ) as md5,\n> > cutoff from thing;\n> >\n> > Cheers,\n> >\n> > Jeff\n>\n> Hi Jeff,\n>\n> how does something like\n>\n> CREATE OR REPLACE VIEW public.view3 AS\n> SELECT thing.id,\n> foo.md5,\n> thing.cutoff\n> FROM thing,\n> LATERAL ( SELECT DISTINCT ON (thing_alias.thing_id)\n> thing_alias.thing_id,\n> thing_alias.md5\n> FROM thing_alias\n> WHERE thing_alias.thing_id = thing.id\n> ORDER BY thing_alias.thing_id, thing_alias.priority DESC) foo\n>\n> work for you? At least that's always using an index scan here, as\n> opposed to view1, which (for me) defaults to a SeqScan on thing_alias at\n> a low cutoff.\n>\n\nUnfortunately that always uses the index scan, even at a high cutoff where\naggregation on the seq scan and then hash joining is more appropriate. So\nit is very similar to view2, except that it doesn't return the rows from\n\"thing\" which have zero corresponding rows in thing_alias.\n\n*****\n> Note btw. that both view1 and view2 don't return any md5 values for me,\n> while view3 does!\n> *****\n>\n\nBecause of the way I constructed the data, using the power transform of the\nuniform random distribution, the early rows of the view (if sorted by\nthing_id) are mostly null in the md5 column, so if you only look at the\nfirst few screen-fulls you might not see any md5. But your view does\neffectively an inner join rather than a left join, so your view gets rid of\nthe rows with a NULL md5. Most things don't have aliases; of the things\nthat do, most have 1; and some have a several.\n\n\n\nCheers,\n\nJeff\n\nOn Tue, May 23, 2017 at 4:03 AM, Gunnar \"Nick\" Bluth <[email protected]> wrote:Am 05/22/2017 um 09:57 PM schrieb Jeff Janes:>\n> create view view2 as select id,\n> (\n> select md5 from thing_alias where thing_id=id\n> order by priority desc limit 1\n> ) as md5,\n> cutoff from thing;\n>\n> Cheers,\n>\n> Jeff\n\nHi Jeff,\n\nhow does something like\n\nCREATE OR REPLACE VIEW public.view3 AS\n SELECT thing.id,\n foo.md5,\n thing.cutoff\n FROM thing,\n LATERAL ( SELECT DISTINCT ON (thing_alias.thing_id)\nthing_alias.thing_id,\n thing_alias.md5\n FROM thing_alias\n WHERE thing_alias.thing_id = thing.id\n ORDER BY thing_alias.thing_id, thing_alias.priority DESC) foo\n\nwork for you? At least that's always using an index scan here, as\nopposed to view1, which (for me) defaults to a SeqScan on thing_alias at\na low cutoff.Unfortunately that always uses the index scan, even at a high cutoff where aggregation on the seq scan and then hash joining is more appropriate. So it is very similar to view2, except that it doesn't return the rows from \"thing\" which have zero corresponding rows in thing_alias.\n*****\nNote btw. that both view1 and view2 don't return any md5 values for me,\nwhile view3 does!\n*****Because of the way I constructed the data, using the power transform of the uniform random distribution, the early rows of the view (if sorted by thing_id) are mostly null in the md5 column, so if you only look at the first few screen-fulls you might not see any md5. But your view does effectively an inner join rather than a left join, so your view gets rid of the rows with a NULL md5. Most things don't have aliases; of the things that do, most have 1; and some have a several. Cheers,Jeff",
"msg_date": "Tue, 23 May 2017 09:59:49 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: select subquery versus join subquery"
},
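A LEFT JOIN LATERAL variant (a sketch against the same test schema) keeps the rows of thing that have no alias, addressing the inner-join point above, although like view2 it still probes thing_alias row by row rather than switching to the aggregate-then-hash-join plan.

```sql
CREATE OR REPLACE VIEW view3b AS
SELECT thing.id, foo.md5, thing.cutoff
FROM thing
LEFT JOIN LATERAL (
    SELECT ta.md5
    FROM thing_alias ta
    WHERE ta.thing_id = thing.id
    ORDER BY ta.priority DESC
    LIMIT 1
) foo ON true;        -- the LEFT JOIN preserves things with zero aliases
```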
{
"msg_contents": "Am 05/23/2017 um 06:59 PM schrieb Jeff Janes:\n> On Tue, May 23, 2017 at 4:03 AM, Gunnar \"Nick\" Bluth\n> <[email protected] <mailto:[email protected]>>\n> wrote:\n8>< -----\n> \n> Unfortunately that always uses the index scan, even at a high cutoff\n> where aggregation on the seq scan and then hash joining is more\n> appropriate. So it is very similar to view2, except that it doesn't\n> return the rows from \"thing\" which have zero corresponding rows in\n> thing_alias.\n> \n> *****\n> Note btw. that both view1 and view2 don't return any md5 values for me,\n> while view3 does!\n> *****\n> \n> \n> Because of the way I constructed the data, using the power transform of\n> the uniform random distribution, the early rows of the view (if sorted\n> by thing_id) are mostly null in the md5 column, so if you only look at\n> the first few screen-fulls you might not see any md5. But your view\n> does effectively an inner join rather than a left join, so your view\n> gets rid of the rows with a NULL md5. Most things don't have aliases;\n> of the things that do, most have 1; and some have a several.\n\nD'oh, of course! My bad... shouldn't have looked at the results with\nLIMIT :-/\n\nMy next best guess would involve a MatView for the aggregates...\n-- \nGunnar \"Nick\" Bluth\nDBA ELSTER\n\nTel: +49 911/991-4665\nMobil: +49 172/8853339",
"msg_date": "Wed, 24 May 2017 08:41:37 +0200",
"msg_from": "\"Gunnar \\\"Nick\\\" Bluth\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: select subquery versus join subquery"
}
] |
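The materialized-view idea from the last reply could look roughly like this against the thread's test schema; it trades freshness for a cheap join in both the selective and the full-scan case.

```sql
-- Precompute the highest-priority alias per thing once.
CREATE MATERIALIZED VIEW thing_best_alias AS
SELECT DISTINCT ON (thing_id) thing_id, md5
FROM thing_alias
ORDER BY thing_id, priority DESC;

-- A unique index both speeds up the join and allows CONCURRENTLY below.
CREATE UNIQUE INDEX ON thing_best_alias (thing_id);

CREATE OR REPLACE VIEW view4 AS
SELECT t.id, b.md5, t.cutoff
FROM thing t
LEFT JOIN thing_best_alias b ON b.thing_id = t.id;

-- Re-run whenever thing_alias has changed enough to matter:
REFRESH MATERIALIZED VIEW CONCURRENTLY thing_best_alias;
```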
[
{
"msg_contents": "Hi,\n\nI have a letancy-sensitive legacy application, where the time consumed\nby query planning was always causing some headaches.\nCurrently it is running on postgresql-8.4 - will postgresql-10 support\ngenerating plans using multiple CPU cores to reduce the time required\nto generate a single plan?\n\nThank you in advance and best regards, Clemens\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 May 2017 22:21:45 +0200",
"msg_from": "Clemens Eisserer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Can postgresql plan a query using multiple CPU cores?"
},
{
"msg_contents": "On Monday, May 22, 2017, Clemens Eisserer <[email protected]> wrote:\n\n> Hi,\n>\n> I have a letancy-sensitive legacy application, where the time consumed\n> by query planning was always causing some headaches.\n> Currently it is running on postgresql-8.4 - will postgresql-10 support\n> generating plans using multiple CPU cores to reduce the time required\n> to generate a single plan?\n>\n>\nMy understanding, from both list monitoring and the release notes, is that\nquery parallelization happens only during execution, not planning. A\nsingle process is still responsible for evaluating all (possibly partial)\nplans and picking the best one - flagging those plan steps that can\nleverage parallelism for possible execution.\n\nDavid J.\n\nOn Monday, May 22, 2017, Clemens Eisserer <[email protected]> wrote:Hi,\n\nI have a letancy-sensitive legacy application, where the time consumed\nby query planning was always causing some headaches.\nCurrently it is running on postgresql-8.4 - will postgresql-10 support\ngenerating plans using multiple CPU cores to reduce the time required\nto generate a single plan?\nMy understanding, from both list monitoring and the release notes, is that query parallelization happens only during execution, not planning. A single process is still responsible for evaluating all (possibly partial) plans and picking the best one - flagging those plan steps that can leverage parallelism for possible execution.David J.",
"msg_date": "Mon, 22 May 2017 13:52:27 -0700",
"msg_from": "\"David G. Johnston\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can postgresql plan a query using multiple CPU cores?"
},
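For what execution-time parallelism looks like in practice, a small sketch (the table name is invented and the worker count is just a value to experiment with):

```sql
SET max_parallel_workers_per_gather = 4;   -- 9.6+: workers allowed per Gather node
-- SET max_parallel_workers = 8;           -- PostgreSQL 10+: cluster-wide worker cap

EXPLAIN
SELECT count(*) FROM big_table;   -- a parallel plan shows a Gather node
                                  -- with "Workers Planned: ..."
```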
{
"msg_contents": "On 23/05/17 08:21, Clemens Eisserer wrote:\n> Hi,\n>\n> I have a letancy-sensitive legacy application, where the time consumed\n> by query planning was always causing some headaches.\n> Currently it is running on postgresql-8.4 - will postgresql-10 support\n> generating plans using multiple CPU cores to reduce the time required\n> to generate a single plan?\n>\n> Thank you in advance and best regards, Clemens\n>\n>\nHi,\n\nMight be worthwhile posting an example (query + EXPLAIN ANALYZE etc), so\nwe can see what type of queries are resulting in long plan times.\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 May 2017 10:37:52 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can postgresql plan a query using multiple CPU cores?"
},
{
"msg_contents": "Hi\n\n2017-05-22 22:21 GMT+02:00 Clemens Eisserer <[email protected]>:\n\n> Hi,\n>\n> I have a letancy-sensitive legacy application, where the time consumed\n> by query planning was always causing some headaches.\n> Currently it is running on postgresql-8.4 - will postgresql-10 support\n> generating plans using multiple CPU cores to reduce the time required\n> to generate a single plan?\n>\n\n no. PostgreSQL 9.6 and higher uses more CPU only for execution.\n\nFor planner speed are important GUC parameters join_collapse_limit,\nfrom_collapse_limit and show geqo_threshold.\n\nYou can try to decrease geqo_threshold - with low geqo_threshold you can\nincrease join_collapse_limit and from_collapse_limit\n\nRegards\n\nPavel\n\n>\n> Thank you in advance and best regards, Clemens\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi2017-05-22 22:21 GMT+02:00 Clemens Eisserer <[email protected]>:Hi,\n\nI have a letancy-sensitive legacy application, where the time consumed\nby query planning was always causing some headaches.\nCurrently it is running on postgresql-8.4 - will postgresql-10 support\ngenerating plans using multiple CPU cores to reduce the time required\nto generate a single plan? no. PostgreSQL 9.6 and higher uses more CPU only for execution. For planner speed are important GUC parameters join_collapse_limit, from_collapse_limit and show geqo_threshold.You can try to decrease geqo_threshold - with low geqo_threshold you can increase join_collapse_limit and from_collapse_limitRegardsPavel\n\nThank you in advance and best regards, Clemens\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 23 May 2017 06:41:36 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Can postgresql plan a query using multiple CPU cores?"
}
] |
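A sketch of the knobs Pavel mentions; the numbers are placeholders to experiment with, not recommendations (the defaults are geqo_threshold = 12 and collapse limits of 8).

```sql
SET geqo_threshold      = 8;    -- hand joins of 8+ relations to GEQO sooner
SET join_collapse_limit = 12;   -- below that point, allow more join reordering
SET from_collapse_limit = 12;
-- \timing in psql, or the "Planning time" line of EXPLAIN ANALYZE (9.4+),
-- shows how these trade planning time against plan quality for a given query.
```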
[
{
"msg_contents": "Dear Expert,\n\nMay you please provide the solution for below query.\n\nI have to create a log for all the update query executed in database along with its username who has executed that query.\nHowever, I am able to log all the update queries in my pg_log file but it's not showing particular user who has run the query.\n\nI am using PostgreSQL 9.1 with Linux platform.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n\n\n\n\n\n\n\n\nDear Expert,\n \nMay you please provide the solution for below query.\n \nI have to create a log for all the update query executed in database along with its username who has executed that query.\nHowever, I am able to log all the update queries in my pg_log file but it’s not showing particular user who has run the query.\n \nI am using PostgreSQL 9.1 with Linux platform.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Tue, 23 May 2017 12:42:52 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Log update query along with username who has executed the same."
},
{
"msg_contents": "You need to include \"%u\" in the log_line_prefix setting in your\npostgresql.conf. Like this:\n\nlog_line_prefix = '%m %d %h %u '\n>\n> #log_line_prefix = '' # special values:\n>\n> # %a = application name\n>\n> # %u = user name\n>\n> # %d = database name\n>\n> # %r = remote host and port\n>\n> # %h = remote host\n>\n> # %p = process ID\n>\n> # %t = timestamp without milliseconds\n>\n> # %m = timestamp with milliseconds\n>\n> # %n = timestamp with milliseconds (as a Unix epoch)\n>\n> # %i = command tag\n>\n> # %e = SQL state\n>\n> # %c = session ID\n>\n> # %l = session line number\n>\n> # %s = session start timestamp\n>\n> # %v = virtual transaction ID\n>\n> # %x = transaction ID (0 if none)\n>\n> # %q = stop here in non-session\n>\n> # processes\n>\n> # %% = '%'\n>\n> # e.g. '<%u%%%d> '\n>\n>\n>\nAlso 9.1 is pretty old. You should think about upgrading as soon as is\npractical.\n\n\nOn Tue, May 23, 2017 at 8:42 AM, Dinesh Chandra 12108 <\[email protected]> wrote:\n\n> Dear Expert,\n>\n>\n>\n> May you please provide the solution for below query.\n>\n>\n>\n> I have to create a log for all the update query executed in database along\n> with its username who has executed that query.\n>\n> However, I am able to log all the update queries in my pg_log file but\n> it’s not showing particular user who has run the query.\n>\n>\n>\n> I am using PostgreSQL 9.1 with Linux platform.\n>\n>\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 <+91%2099539%2075849> | Ext 1078\n> |[email protected]\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n>\n\nYou need to include \"%u\" in the log_line_prefix setting in your postgresql.conf. Like this:log_line_prefix = '%m %d %h %u '#log_line_prefix = '' # special values: # %a = application name # %u = user name # %d = database name # %r = remote host and port # %h = remote host # %p = process ID # %t = timestamp without milliseconds # %m = timestamp with milliseconds # %n = timestamp with milliseconds (as a Unix epoch) # %i = command tag # %e = SQL state # %c = session ID # %l = session line number # %s = session start timestamp # %v = virtual transaction ID # %x = transaction ID (0 if none) # %q = stop here in non-session # processes # %% = '%' # e.g. '<%u%%%d> 'Also 9.1 is pretty old. You should think about upgrading as soon as is practical. On Tue, May 23, 2017 at 8:42 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\nDear Expert,\n \nMay you please provide the solution for below query.\n \nI have to create a log for all the update query executed in database along with its username who has executed that query.\nHowever, I am able to log all the update queries in my pg_log file but it’s not showing particular user who has run the query.\n \nI am using PostgreSQL 9.1 with Linux platform.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Tue, 23 May 2017 08:49:00 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Log update query along with username who has executed\n the same."
},
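Putting the relevant settings together, a postgresql.conf fragment along these lines covers both halves of the original question -- which statements get logged, and which user ran them. Both settings take effect on a server reload.

```
log_statement = 'mod'              # logs DDL plus INSERT/UPDATE/DELETE/TRUNCATE,
                                   # without logging every SELECT
log_line_prefix = '%m %u %d %h '   # timestamp, user, database, remote host
```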
{
"msg_contents": "Thank you so much Rick,\r\n\r\nIt’s working fine.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\nFrom: Rick Otten [mailto:[email protected]]\r\nSent: 23 May, 2017 6:19 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL] Re: [PERFORM] Log update query along with username who has executed the same.\r\n\r\nYou need to include \"%u\" in the log_line_prefix setting in your postgresql.conf. Like this:\r\n\r\nlog_line_prefix = '%m %d %h %u '\r\n#log_line_prefix = '' # special values:\r\n # %a = application name\r\n # %u = user name\r\n # %d = database name\r\n # %r = remote host and port\r\n # %h = remote host\r\n # %p = process ID\r\n # %t = timestamp without milliseconds\r\n # %m = timestamp with milliseconds\r\n # %n = timestamp with milliseconds (as a Unix epoch)\r\n # %i = command tag\r\n # %e = SQL state\r\n # %c = session ID\r\n # %l = session line number\r\n # %s = session start timestamp\r\n # %v = virtual transaction ID\r\n # %x = transaction ID (0 if none)\r\n # %q = stop here in non-session\r\n # processes\r\n # %% = '%'\r\n # e.g. '<%u%%%d> '\r\n\r\n\r\nAlso 9.1 is pretty old. You should think about upgrading as soon as is practical.\r\n\r\n\r\nOn Tue, May 23, 2017 at 8:42 AM, Dinesh Chandra 12108 <[email protected]<mailto:[email protected]>> wrote:\r\nDear Expert,\r\n\r\nMay you please provide the solution for below query.\r\n\r\nI have to create a log for all the update query executed in database along with its username who has executed that query.\r\nHowever, I am able to log all the update queries in my pg_log file but it’s not showing particular user who has run the query.\r\n\r\nI am using PostgreSQL 9.1 with Linux platform.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849<tel:+91%2099539%2075849> | Ext 1078 |[email protected]<mailto:%[email protected]>\r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n\r\n\n\n\n\n\n\n\n\n\nThank you so much Rick,\n \nIt’s working fine.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\r\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \nFrom: Rick Otten [mailto:[email protected]]\r\n\nSent: 23 May, 2017 6:19 PM\nTo: Dinesh Chandra 12108 <[email protected]>\nCc: [email protected]\nSubject: [EXTERNAL] Re: [PERFORM] Log update query along with username who has executed the same.\n \n\nYou need to include \"%u\" in the log_line_prefix setting in your postgresql.conf. 
Like this:\n\n \n\n\n\nlog_line_prefix = '%m %d %h %u '\n\n\n#log_line_prefix = '' # special values:\n\n\n # %a = application name\n\n\n # %u = user name\n\n\n # %d = database name\n\n\n # %r = remote host and port\n\n\n # %h = remote host\n\n\n # %p = process ID\n\n\n # %t = timestamp without milliseconds\n\n\n # %m = timestamp with milliseconds\n\n\n # %n = timestamp with milliseconds (as a Unix epoch)\n\n\n # %i = command tag\n\n\n # %e = SQL state\n\n\n # %c = session ID\n\n\n # %l = session line number\n\n\n # %s = session start timestamp\n\n\n # %v = virtual transaction ID\n\n\n # %x = transaction ID (0 if none)\n\n\n # %q = stop here in non-session\n\n\n # processes\n\n\n # %% = '%'\n\n\n # e.g. '<%u%%%d> '\n\n\n \n\n\n\n \n\n\nAlso 9.1 is pretty old. You should think about upgrading as soon as is practical.\n\n\n \n\n\n\n \n\nOn Tue, May 23, 2017 at 8:42 AM, Dinesh Chandra 12108 <[email protected]> wrote:\n\n\n\nDear Expert,\n \nMay you please provide the solution for below query.\n \nI have to create a log for all the update query executed in database along with its username who has executed that query.\nHowever, I am able to log all the update queries in my pg_log file but it’s not showing particular user who has run the query.\n \nI am using PostgreSQL 9.1 with Linux platform.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\r\n+91-9953975849 | Ext 1078 \r\n|[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Tue, 23 May 2017 13:07:00 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Log update query along with username who has executed the\n same."
}
] |
[
{
"msg_contents": "Hello!\n\nI've heavy loaded PostgreSQL server, which I want to upgrade, so it will\nhandle more traffic. Can I estimate what is better: more cores or\nhigher frequency ? I expect that pg_stat should give some tips, but\ndon't know where to start...\n\nbest regards\nJarek\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 May 2017 20:29:34 +0200",
"msg_from": "Jarek <[email protected]>",
"msg_from_op": true,
"msg_subject": "More cores or higer frequency ?"
},
{
"msg_contents": "The answer, as always, is \"it depends.\"\n\nCan you give us an overview of your setup? The appropriate setup for small\nnumbers of long-running analytical queries (typically faster CPUs) will be\ndifferent than a setup for handling numerous simultaneous connections\n(typically more cores).\n\nBut CPU is often not the limiting factor. With a better understanding of\nyour needs, people here can offer suggestions for memory, storage, pooling,\nnetwork, etc.\n\nCheers,\nSteve\n\n\nOn Tue, May 23, 2017 at 11:29 AM, Jarek <[email protected]> wrote:\n\n> Hello!\n>\n> I've heavy loaded PostgreSQL server, which I want to upgrade, so it will\n> handle more traffic. Can I estimate what is better: more cores or\n> higher frequency ? I expect that pg_stat should give some tips, but\n> don't know where to start...\n>\n> best regards\n> Jarek\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThe answer, as always, is \"it depends.\"Can you give us an overview of your setup? The appropriate setup for small numbers of long-running analytical queries (typically faster CPUs) will be different than a setup for handling numerous simultaneous connections (typically more cores).But CPU is often not the limiting factor. With a better understanding of your needs, people here can offer suggestions for memory, storage, pooling, network, etc.Cheers,SteveOn Tue, May 23, 2017 at 11:29 AM, Jarek <[email protected]> wrote:Hello!\n\nI've heavy loaded PostgreSQL server, which I want to upgrade, so it will\nhandle more traffic. Can I estimate what is better: more cores or\nhigher frequency ? I expect that pg_stat should give some tips, but\ndon't know where to start...\n\nbest regards\nJarek\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 23 May 2017 11:39:15 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More cores or higer frequency ?"
},
{
"msg_contents": "Dnia 2017-05-23, wto o godzinie 11:39 -0700, Steve Crawford pisze:\n> The answer, as always, is \"it depends.\"\n> \n> \n> Can you give us an overview of your setup? The appropriate setup for\n> small numbers of long-running analytical queries (typically faster\n> CPUs) will be different than a setup for handling numerous\n> simultaneous connections (typically more cores).\n\nI have pool of clients (~30) inserting to database about 50 records per\nsecond (in total from all clients) and small numer (<10) clients\nquerying database for those records once per 10s.\nOther queries are rare and irregular.\nThe biggest table has ~ 100mln records (older records are purged\nnightly). Database size is ~13GB.\nI near future I'm expecting ~150 clients and 250 inserts per second and\nmore clients querying database.\nServer is handling also apache with simple web application written in\npython.\nFor the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will be\nbetter ?\n\n\n\n> \n> But CPU is often not the limiting factor. With a better understanding\n> of your needs, people here can offer suggestions for memory, storage,\n> pooling, network, etc.\n> \n> \n> Cheers,\n> Steve\n> \n> \n> \n> On Tue, May 23, 2017 at 11:29 AM, Jarek <[email protected]> wrote:\n> Hello!\n> \n> I've heavy loaded PostgreSQL server, which I want to upgrade,\n> so it will\n> handle more traffic. Can I estimate what is better: more cores\n> or\n> higher frequency ? I expect that pg_stat should give some\n> tips, but\n> don't know where to start...\n> \n> best regards\n> Jarek\n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list\n> ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 May 2017 22:14:14 +0200",
"msg_from": "Jarek <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: More cores or higer frequency ?"
},
{
"msg_contents": "On 23.05.2017 22:14, Jarek wrote:\n> I have pool of clients (~30) inserting to database about 50 records per\n> second (in total from all clients) and small numer (<10) clients\n> querying database for those records once per 10s.\n> Other queries are rare and irregular.\n> The biggest table has ~ 100mln records (older records are purged\n> nightly). Database size is ~13GB.\n> I near future I'm expecting ~150 clients and 250 inserts per second and\n> more clients querying database.\n> Server is handling also apache with simple web application written in\n> python.\n> For the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will be\n> better ?\n>\n>\n>\n\nHi Jarek,\n\nin case of your increasing parallel requirements (more clients, more \ninserts), I would tend to the 14C setup. However, as usual take this \nwith a grain of salt. My word is not a guarantee for success in this case.\n\nBest would be to setup a test scenario where you can simulate with both \nhardware setups.\n\nRegards,\nSven\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 May 2017 22:49:58 +0200",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More cores or higer frequency ?"
},
{
"msg_contents": "On Tue, May 23, 2017 at 2:14 PM, Jarek <[email protected]> wrote:\n> Dnia 2017-05-23, wto o godzinie 11:39 -0700, Steve Crawford pisze:\n>> The answer, as always, is \"it depends.\"\n>>\n>>\n>> Can you give us an overview of your setup? The appropriate setup for\n>> small numbers of long-running analytical queries (typically faster\n>> CPUs) will be different than a setup for handling numerous\n>> simultaneous connections (typically more cores).\n>\n> I have pool of clients (~30) inserting to database about 50 records per\n> second (in total from all clients) and small numer (<10) clients\n> querying database for those records once per 10s.\n\nOK how many of those clients are typically hitting the db at the same\ntime? If you see 3 or 4 clients at a time working with the rest idle\nthat's a completely different load than if you've got 30 running near\nfull throttle.\n\nI'd say build a simple synthetic workload that approximates your\ncurrent work load and see what it does on a smaller machine first.\n\nIf all 30 clients are keeping the db busy at once then definitely more\ncores. But also faster memory. A 32 core machine running ancient\n800MHz memory is gonna get stomped by something with 16 cores running\nfaster GHz while the memory is 2000MHz or higher. Always pay attention\nto memory speed, esp in GB/s etc. DB CPUs are basically mostly data\npumps, moving data as fast as possible from place to place. Bigger\ninternal piping means just as much as core speed and number of cores.\n\nAlso if all 30 clients are working hard then see how it runs with a db\npooler. You should be able to find the approximate knee of performance\nby synthetic testing and usually best throughput will be at somewhere\naround 1x to 2x # of cores. Depending on io and memory.\n\n> Other queries are rare and irregular.\n> The biggest table has ~ 100mln records (older records are purged\n> nightly). Database size is ~13GB.\n> I near future I'm expecting ~150 clients and 250 inserts per second and\n\nOK so yeah definitely look at connection pooling. You don't want to\nstart out handling 150 backends on any server if you don't have to.\nPerformance-wise a 14c machine fall off a cliff by 28 or so active\nconnections.\n\n> more clients querying database.\n\nOK if you're gonna let users throw random sql at it, then you need\nconnection pooling even more. Assuming writes have a priority then\nyou'd want to limit reads to some number of cores etc to keep it out\nof your hair.\n\n> Server is handling also apache with simple web application written in\n> python.\n> For the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will be\n> better ?\n\nCPU names / models please. If intel look up on arc, look for memory bandwidth.\n\n\n>\n>\n>\nor so >>\n>> But CPU is often not the limiting factor. With a better understanding\n>> of your needs, people here can offer suggestions for memory, storage,\n>> pooling, network, etc.\n>>\n>>\n>> Cheers,\n>> Steve\n>>\n>>\n>>\n>> On Tue, May 23, 2017 at 11:29 AM, Jarek <[email protected]> wrote:\n>> Hello!\n>>\n>> I've heavy loaded PostgreSQL server, which I want to upgrade,\n>> so it will\n>> handle more traffic. Can I estimate what is better: more cores\n>> or\n>> higher frequency ? 
I expect that pg_stat should give some\n>> tips, but\n>> don't know where to start...\n>>\n>> best regards\n>> Jarek\n>>\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list\n>> ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 May 2017 15:36:19 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More cores or higer frequency ?"
},
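One way to build the synthetic workload suggested above is a custom pgbench script. The table and column names are stand-ins (assuming something like readings(client_id int, payload text, created_at timestamptz)), and the script is deliberately trivial: one small insert per transaction, mimicking "many clients doing ~50 inserts/s in total".

```sql
-- insert_test.sql (hypothetical custom pgbench script)
INSERT INTO readings (client_id, payload, created_at)
VALUES (trunc(random() * 150)::int, md5(random()::text), now());
```

Running it with something like pgbench -n -f insert_test.sql -c 30 -j 8 -T 300 mydb, and then again with -c 150 for the projected load, gives a rough feel for where throughput flattens out on a given CPU before committing to either the 8-core or the 14-core box.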
{
"msg_contents": "On Tue, May 23, 2017 at 1:49 PM, Sven R. Kunze <[email protected]> wrote:\n\n> On 23.05.2017 22:14, Jarek wrote:\n>\n>> I have pool of clients (~30) inserting to database about 50 records per\n>> second (in total from all clients) and small numer (<10) clients\n>> querying database for those records once per 10s.\n>> Other queries are rare and irregular.\n>> The biggest table has ~ 100mln records (older records are purged\n>> nightly). Database size is ~13GB.\n>> I near future I'm expecting ~150 clients and 250 inserts per second and\n>> more clients querying database.\n>> Server is handling also apache with simple web application written in\n>> python.\n>> For the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will be\n>> better ?\n>>\n>>\nI would start by trying a few things on your existing equipment.\n\nIf your inserts are coming from individual connections, say, via the\nweb-app in a connect-insert-disconnect fashion then pooling can be a huge\nwin. Connection overhead is a bigger factor than you might imagine and I've\nseen as much as a 10x improvement in small queries when pooling was added.\n\nIf the every-10-second queries are running on the recently inserted data\nthen partitioning by time range could substantially improve the speed of\ninserts, queries and purging. It's pretty easy to do, now, with pg_partman\nor similar but built-in auto-partitioning is coming in version 10.\n\nFast commit to disk is a win - think SSD or RAID with BBU cache and with a\nrelatively modest 13GB database you should be able to spec enough RAM to\nkeep everything in memory.\n\nCheers,\nSteve\n\nOn Tue, May 23, 2017 at 1:49 PM, Sven R. Kunze <[email protected]> wrote:On 23.05.2017 22:14, Jarek wrote:\n\nI have pool of clients (~30) inserting to database about 50 records per\nsecond (in total from all clients) and small numer (<10) clients\nquerying database for those records once per 10s.\nOther queries are rare and irregular.\nThe biggest table has ~ 100mln records (older records are purged\nnightly). Database size is ~13GB.\nI near future I'm expecting ~150 clients and 250 inserts per second and\nmore clients querying database.\nServer is handling also apache with simple web application written in\npython.\nFor the same price, I can get 8C/3.2GHz or 14C/2.6GHz. Which one will be\nbetter ?\nI would start by trying a few things on your existing equipment.If your inserts are coming from individual connections, say, via the web-app in a connect-insert-disconnect fashion then pooling can be a huge win. Connection overhead is a bigger factor than you might imagine and I've seen as much as a 10x improvement in small queries when pooling was added.If the every-10-second queries are running on the recently inserted data then partitioning by time range could substantially improve the speed of inserts, queries and purging. It's pretty easy to do, now, with pg_partman or similar but built-in auto-partitioning is coming in version 10.Fast commit to disk is a win - think SSD or RAID with BBU cache and with a relatively modest 13GB database you should be able to spec enough RAM to keep everything in memory.Cheers,Steve",
"msg_date": "Tue, 23 May 2017 14:40:53 -0700",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More cores or higer frequency ?"
},
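The time-range partitioning suggested in the previous message can be sketched with the declarative syntax that arrived in PostgreSQL 10 (pg_partman automates the same idea, via inheritance, on 9.x). This is a minimal sketch only; the table and column names are hypothetical stand-ins for the insert-heavy, purged-nightly table described in the thread:

    -- Hypothetical event table, partitioned by insert time (PostgreSQL 10+ syntax).
    CREATE TABLE events (
        client_id   integer     NOT NULL,
        payload     text,
        insert_time timestamptz NOT NULL DEFAULT now()
    ) PARTITION BY RANGE (insert_time);

    -- One partition per day. Inserts into the parent are routed to the matching
    -- partition, queries that filter on recent insert_time touch only the newest
    -- partitions, and the nightly purge becomes a cheap DROP TABLE (or
    -- ALTER TABLE ... DETACH PARTITION) of the oldest partition instead of a bulk DELETE.
    CREATE TABLE events_2017_05_23 PARTITION OF events
        FOR VALUES FROM ('2017-05-23') TO ('2017-05-24');
    CREATE TABLE events_2017_05_24 PARTITION OF events
        FOR VALUES FROM ('2017-05-24') TO ('2017-05-25');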
{
"msg_contents": "Are you already on SSDs? That will be the dominant factor I think. Then memory.... After that, more cores are good for parallelism (especially with 9.6, although that requires solid memory support). Faster cores will be better if you expect complex calculations in memory, i.e., some analytics perhaps, but for your fairly straightforward write-throughput scenario, I think SSDs and memory will be king.\r\n\r\nLDH\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Jarek\r\nSent: Tuesday, May 23, 2017 14:30\r\nTo: [email protected]\r\nSubject: [PERFORM] More cores or higer frequency ?\r\n\r\nHello!\r\n\r\nI've heavy loaded PostgreSQL server, which I want to upgrade, so it will handle more traffic. Can I estimate what is better: more cores or higher frequency ? I expect that pg_stat should give some tips, but don't know where to start...\r\n\r\nbest regards\r\nJarek\r\n\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 May 2017 17:51:38 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: More cores or higer frequency ?"
}
] |
[
{
"msg_contents": "Dear Expert,\n\nWhile executing the blow query, its taking too long time to fetch output.\nCould you please help to fine tune the same?\n\nSELECT\ndate_trunc('day', insert_time),\nworkflow.project.project_name,\nworkflow.tool_performance.project_id,\nworkflow.tool_performance.user_id,\nworkflow.tool_performance.step_id,\ncount(*),\nround(sum(execution_time)/1000) as Sum_time_sec,\nround(((round(sum(execution_time)/1000))/60)/count(*),2) as Efficency_Min,\nround (((round(sum(execution_time)/1000)))/count(*),2) as Efficency_sec\nFROM workflow.project,workflow.tool_performance,workflow.evidence_to_do\nWHERE\nworkflow.evidence_to_do.project_id = workflow.tool_performance.project_id AND\nworkflow.evidence_to_do.project_id = workflow.project.project_id AND\nworkflow.tool_performance.insert_time >'2017-05-19' AND\nworkflow.tool_performance.insert_time <'2017-05-20' AND\nworkflow.evidence_to_do.status_id in (15100,15150,15200,15300,15400,15500)\nGroup BY\ndate_trunc('day', insert_time),\nworkflow.project.project_name,\nworkflow.tool_performance.project_id,\nworkflow.tool_performance.user_id,\nworkflow.tool_performance.step_id\nORDER BY\nworkflow.tool_performance.project_id,\nworkflow.project.project_name,\nworkflow.tool_performance.step_id\n\nI am using PostgreSQL 9.1 with Linux Platform.\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n\n\n\n\n\n\n\n\nDear Expert,\n \nWhile executing the blow query, its taking too long time to fetch output.\n\nCould you please help to fine tune the same?\n \nSELECT \ndate_trunc('day', insert_time),\nworkflow.project.project_name,\nworkflow.tool_performance.project_id,\nworkflow.tool_performance.user_id,\nworkflow.tool_performance.step_id,\ncount(*),\nround(sum(execution_time)/1000) as Sum_time_sec,\nround(((round(sum(execution_time)/1000))/60)/count(*),2) as Efficency_Min,\nround (((round(sum(execution_time)/1000)))/count(*),2) as Efficency_sec\nFROM workflow.project,workflow.tool_performance,workflow.evidence_to_do\nWHERE \nworkflow.evidence_to_do.project_id = workflow.tool_performance.project_id AND\nworkflow.evidence_to_do.project_id = workflow.project.project_id AND\nworkflow.tool_performance.insert_time >'2017-05-19' AND\nworkflow.tool_performance.insert_time <'2017-05-20' AND\nworkflow.evidence_to_do.status_id in (15100,15150,15200,15300,15400,15500)\nGroup BY \ndate_trunc('day', insert_time),\nworkflow.project.project_name,\nworkflow.tool_performance.project_id,\nworkflow.tool_performance.user_id,\nworkflow.tool_performance.step_id\nORDER BY \nworkflow.tool_performance.project_id,\nworkflow.project.project_name,\nworkflow.tool_performance.step_id\n \nI am using PostgreSQL 9.1 with Linux Platform.\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.",
"msg_date": "Wed, 24 May 2017 17:04:04 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query is running very slow......"
},
{
"msg_contents": "Hi,\n\nOn 5/24/17 7:04 PM, Dinesh Chandra 12108 wrote:\n> Dear Expert,\n> \n> While executing the blow query, its taking too long time to fetch output.\n> \n> Could you please help to fine tune the same?\n> \n\nYou'll have to provide far more details - the query alone is certainly \nnot enough for anyone to guess why it's slow. Perhaps look at this:\n\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIn particular, you'll have to tell us\n\n(a) something about the hardware it's running on\n\n(b) amounts of data in the tables / databases\n\n(c) EXPLAIN or even better EXPLAIN ANALYZE of the query\n\n(d) configuration of the database (work_mem, shared_buffers etc.)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 May 2017 19:25:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query is running very slow......"
},
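Most of the details asked for above can be pulled straight from the server. A minimal sketch, assuming only the three table names visible in the original query; the slow statement itself would simply be re-run prefixed with EXPLAIN (ANALYZE, BUFFERS):

    -- Total on-disk size (table + indexes + TOAST) of the tables involved.
    SELECT relname,
           pg_size_pretty(pg_total_relation_size(oid)) AS total_size
    FROM pg_class
    WHERE relkind = 'r'
      AND relname IN ('project', 'tool_performance', 'evidence_to_do');

    -- Settings most relevant to the questions above.
    SHOW shared_buffers;
    SHOW work_mem;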
{
"msg_contents": "Hi Tomas,\n\nPlease find the below input for slow query.\n\n(a) something about the hardware it's running on\n RAM-->64 GB, CPU->40core\n\n(b) amounts of data in the tables / databases\nDatabase size \t:32GB\n-----------------\nTables size\n-----------------\nWorkflow.project\t\t: 8194 byte\nworkflow.tool_performance\t:175 MB\nworkflow.evidence_to_do\t:580 MB\n\n(c) EXPLAIN or even better EXPLAIN ANALYZE of the query\n\n\"GroupAggregate (cost=16583736169.63..18157894828.18 rows=5920110 width=69)\"\n\" -> Sort (cost=16583736169.63..16714893857.43 rows=52463075120 width=69)\"\n\" Sort Key: tool_performance.project_id, project.project_name, tool_performance.step_id, (date_trunc('day'::text, tool_performance.insert_time)), tool_performance.user_id\"\n\" -> Nested Loop (cost=2.42..787115179.07 rows=52463075120 width=69)\"\n\" -> Seq Scan on evidence_to_do (cost=0.00..119443.95 rows=558296 width=0)\"\n\" Filter: (status_id = ANY ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\n\" -> Materialize (cost=2.42..49843.24 rows=93970 width=69)\"\n\" -> Hash Join (cost=2.42..49373.39 rows=93970 width=69)\"\n\" Hash Cond: (tool_performance.project_id = project.project_id)\"\n\" -> Seq Scan on tool_performance (cost=0.00..48078.88 rows=93970 width=39)\"\n\" Filter: ((insert_time > '2017-05-01 00:00:00+05:30'::timestamp with time zone) AND (insert_time < '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\n\" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\n\" -> Seq Scan on project (cost=0.00..1.63 rows=63 width=38)\"\n\n(d) configuration of the database (work_mem, shared_buffers etc.)\n\nwork_mem = 32MB\nshared_buffers = 16GB\nmaintenance_work_mem = 8GB\ntemp_buffers = 64MB\nmax_connections=2000\t\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected] \nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Tomas Vondra\nSent: 24 May, 2017 10:56 PM\nTo: [email protected]\nSubject: [EXTERNAL] Re: [PERFORM] Query is running very slow......\n\nHi,\n\nOn 5/24/17 7:04 PM, Dinesh Chandra 12108 wrote:\n> Dear Expert,\n> \n> While executing the blow query, its taking too long time to fetch output.\n> \n> Could you please help to fine tune the same?\n> \n\nYou'll have to provide far more details - the query alone is certainly not enough for anyone to guess why it's slow. Perhaps look at this:\n\n https://wiki.postgresql.org/wiki/Slow_Query_Questions\n\nIn particular, you'll have to tell us\n\n(a) something about the hardware it's running on\n\n(b) amounts of data in the tables / databases\n\n(c) EXPLAIN or even better EXPLAIN ANALYZE of the query\n\n(d) configuration of the database (work_mem, shared_buffers etc.)\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 May 2017 12:26:54 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "FW: Re: Query is running very slow......"
},
{
"msg_contents": "\n\nOn 5/25/17 2:26 PM, Dinesh Chandra 12108 wrote:\n> Hi Tomas,\n> \n> Please find the below input for slow query.\n> \n> (a) something about the hardware it's running on\n> RAM-->64 GB, CPU->40core\n> \n> (b) amounts of data in the tables / databases\n> Database size \t:32GB\n> -----------------\n> Tables size\n> -----------------\n> Workflow.project\t\t: 8194 byte\n> workflow.tool_performance\t:175 MB\n> workflow.evidence_to_do\t:580 MB\n> \n> (c) EXPLAIN or even better EXPLAIN ANALYZE of the query\n> \n> \"GroupAggregate (cost=16583736169.63..18157894828.18 rows=5920110 width=69)\"\n> \" -> Sort (cost=16583736169.63..16714893857.43 rows=52463075120 width=69)\"\n> \" Sort Key: tool_performance.project_id, project.project_name, tool_performance.step_id, (date_trunc('day'::text, tool_performance.insert_time)), tool_performance.user_id\"\n> \" -> Nested Loop (cost=2.42..787115179.07 rows=52463075120 width=69)\"\n> \" -> Seq Scan on evidence_to_do (cost=0.00..119443.95 rows=558296 width=0)\"\n> \" Filter: (status_id = ANY ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\n> \" -> Materialize (cost=2.42..49843.24 rows=93970 width=69)\"\n> \" -> Hash Join (cost=2.42..49373.39 rows=93970 width=69)\"\n> \" Hash Cond: (tool_performance.project_id = project.project_id)\"\n> \" -> Seq Scan on tool_performance (cost=0.00..48078.88 rows=93970 width=39)\"\n> \" Filter: ((insert_time > '2017-05-01 00:00:00+05:30'::timestamp with time zone) AND (insert_time < '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\n> \" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\n> \" -> Seq Scan on project (cost=0.00..1.63 rows=63 width=38)\"\n> \n\nAre you sure this is the same query? The query you posted includes there \ntwo join conditions:\n\n evidence_to_do.project_id = tool_performance.project_id\n evidence_to_do.project_id = project.project_id\n\nBut the plan only seems to enforce the equality between 'project' and \n'tool_performance'. So when joining the evidence_to_do, it performs a \ncartesian product, producing ~52B rows (estimated). That can't be fast.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 May 2017 17:38:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Re: Query is running very slow......"
},
{
"msg_contents": "Hi Thomas,\r\n\r\nThanks for your reply.\r\n\r\nYes, the query is absolutely same which I posted.\r\nPlease suggest if something need to change in query.\r\n\r\nAs Per your comment...\r\nThe query you posted includes there two join conditions:\r\n\r\n evidence_to_do.project_id = tool_performance.project_id\r\n evidence_to_do.project_id = project.project_id\r\n\r\nBut the plan only seems to enforce the equality between 'project' and 'tool_performance'. So when joining the evidence_to_do, it performs a cartesian product, producing ~52B rows (estimated). That can't be fast.\r\n\r\nRegards,\r\nDinesh Chandra\r\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\r\n------------------------------------------------------------------\r\nMobile: +91-9953975849 | Ext 1078 |[email protected] \r\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\r\n\r\n-----Original Message-----\r\nFrom: Tomas Vondra [mailto:[email protected]] \r\nSent: 25 May, 2017 9:08 PM\r\nTo: Dinesh Chandra 12108 <[email protected]>\r\nCc: [email protected]\r\nSubject: [EXTERNAL] Re: FW: Re: [PERFORM] Query is running very slow......\r\n\r\n\r\n\r\nOn 5/25/17 2:26 PM, Dinesh Chandra 12108 wrote:\r\n> Hi Tomas,\r\n> \r\n> Please find the below input for slow query.\r\n> \r\n> (a) something about the hardware it's running on\r\n> RAM-->64 GB, CPU->40core\r\n> \r\n> (b) amounts of data in the tables / databases\r\n> Database size \t:32GB\r\n> -----------------\r\n> Tables size\r\n> -----------------\r\n> Workflow.project\t\t: 8194 byte\r\n> workflow.tool_performance\t:175 MB\r\n> workflow.evidence_to_do\t:580 MB\r\n> \r\n> (c) EXPLAIN or even better EXPLAIN ANALYZE of the query\r\n> \r\n> \"GroupAggregate (cost=16583736169.63..18157894828.18 rows=5920110 width=69)\"\r\n> \" -> Sort (cost=16583736169.63..16714893857.43 rows=52463075120 width=69)\"\r\n> \" Sort Key: tool_performance.project_id, project.project_name, tool_performance.step_id, (date_trunc('day'::text, tool_performance.insert_time)), tool_performance.user_id\"\r\n> \" -> Nested Loop (cost=2.42..787115179.07 rows=52463075120 width=69)\"\r\n> \" -> Seq Scan on evidence_to_do (cost=0.00..119443.95 rows=558296 width=0)\"\r\n> \" Filter: (status_id = ANY ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\r\n> \" -> Materialize (cost=2.42..49843.24 rows=93970 width=69)\"\r\n> \" -> Hash Join (cost=2.42..49373.39 rows=93970 width=69)\"\r\n> \" Hash Cond: (tool_performance.project_id = project.project_id)\"\r\n> \" -> Seq Scan on tool_performance (cost=0.00..48078.88 rows=93970 width=39)\"\r\n> \" Filter: ((insert_time > '2017-05-01 00:00:00+05:30'::timestamp with time zone) AND (insert_time < '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\r\n> \" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\r\n> \" -> Seq Scan on project (cost=0.00..1.63 rows=63 width=38)\"\r\n> \r\n\r\nAre you sure this is the same query? The query you posted includes there two join conditions:\r\n\r\n evidence_to_do.project_id = tool_performance.project_id\r\n evidence_to_do.project_id = project.project_id\r\n\r\nBut the plan only seems to enforce the equality between 'project' and 'tool_performance'. So when joining the evidence_to_do, it performs a cartesian product, producing ~52B rows (estimated). 
That can't be fast.\r\n\r\nregards\r\n\r\n-- \r\nTomas Vondra http://www.2ndQuadrant.com\r\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 May 2017 12:31:15 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: FW: Re: Query is running very slow......"
},
{
"msg_contents": "Am 26.05.2017 um 14:31 schrieb Dinesh Chandra 12108:\n> Hi Thomas,\n>\n> Thanks for your reply.\n>\n> Yes, the query is absolutely same which I posted.\n> Please suggest if something need to change in query.\n>\n> As Per your comment...\n> The query you posted includes there two join conditions:\n>\n> evidence_to_do.project_id = tool_performance.project_id\n> evidence_to_do.project_id = project.project_id\n>\n> But the plan only seems to enforce the equality between 'project' and 'tool_performance'. So when joining the evidence_to_do, it performs a cartesian product, producing ~52B rows (estimated). That can't be fast.\n>\n>\n\nDinesh, please check that again. Your colleague Daulat Ram posted a \nsimilar question with this WHERE-Condition:\n\n===\n\nWHERE workflow.project\n\n.project_id = workflow.tool_performance.project_id AND insert_time \n >'2017-05-01' AND insert_time <'2017-05-02' AND\n\nworkflow.evidence_to_do.status_id in (15100,15150,15200,15300,15400,15500)\n\n===\n\nThis condition would explain the query-plan. I have answered that \nquestion yesterday.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n\n\n\n\n\n\n\nAm 26.05.2017 um 14:31 schrieb Dinesh\n Chandra 12108:\n\n\nHi Thomas,\n\nThanks for your reply.\n\nYes, the query is absolutely same which I posted.\nPlease suggest if something need to change in query.\n\nAs Per your comment...\nThe query you posted includes there two join conditions:\n\n evidence_to_do.project_id = tool_performance.project_id\n evidence_to_do.project_id = project.project_id\n\nBut the plan only seems to enforce the equality between 'project' and 'tool_performance'. So when joining the evidence_to_do, it performs a cartesian product, producing ~52B rows (estimated). That can't be fast.\n\n\n\n\n\n Dinesh, please check that again. Your colleague Daulat Ram posted a\n similar question with this WHERE-Condition:\n\n ===\nWHERE workflow.project\n.project_id =\n workflow.tool_performance.project_id AND insert_time\n >'2017-05-01' AND insert_time <'2017-05-02' AND\n \nworkflow.evidence_to_do.status_id in\n (15100,15150,15200,15300,15400,15500) \n ===\n\n This condition would explain the query-plan. I have answered that\n question yesterday.\n\n\n Regards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Fri, 26 May 2017 15:10:34 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: FW: Re: Query is running very slow......"
}
] |
[
{
"msg_contents": "Hi team,\n\nWe are getting very slow response of this query.\n\nSELECT date_trunc('day', insert_time),workflow.project.project_name,\nworkflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id,count(*),\nround(sum(execution_time)/1000) as Sum_time_sec,round(((round(sum(execution_time)/1000))/60)/count(*),2) as Efficency_Min,round\n(((round(sum(execution_time)/1000)))/count(*),2)\nas Efficency_sec FROM workflow.project,workflow.tool_performance,workflow.evidence_to_do WHERE workflow.project\n.project_id = workflow.tool_performance.project_id AND insert_time >'2017-05-01' AND insert_time <'2017-05-02' AND\nworkflow.evidence_to_do.status_id in (15100,15150,15200,15300,15400,15500) Group BY date_trunc('day', insert_time),workflow.project.project_name,\nworkflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id ORDER BY\nworkflow.tool_performance.project_id,workflow.project.project_name,\nworkflow.tool_performance.step_id;\n\nThe following indexes are created on project & evidence_to_do table.\n\n\"CREATE INDEX project_id_idx ON workflow.project USING btree (project_id)\"\n\"CREATE INDEX evidence_to_do_status_id_index ON workflow.evidence_to_do USING btree (status_id)\"\n\n\nExplain plan of the Query is:\n\n\"GroupAggregate (cost=18675703613.60..20443753075.67 rows=6689718 width=69)\"\n\" -> Sort (cost=18675703613.60..18823015982.33 rows=58924947492 width=69)\"\n\" Sort Key: tool_performance.project_id, project.project_name, tool_performance.step_id, (date_trunc('day'::text, tool_performance.insert_time)), tool_performance.user_id\"\n\" -> Nested Loop (cost=2.42..884042104.67 rows=58924947492 width=69)\"\n\" -> Seq Scan on evidence_to_do (cost=0.00..118722.17 rows=554922 width=0)\"\n\" Filter: (status_id = ANY ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\n\" -> Materialize (cost=2.42..49435.58 rows=106186 width=69)\"\n\" -> Hash Join (cost=2.42..48904.65 rows=106186 width=69)\"\n\" Hash Cond: (tool_performance.project_id = project.project_id)\"\n\" -> Seq Scan on tool_performance (cost=0.00..47442.18 rows=106186 width=39)\"\n\" Filter: ((insert_time > '2017-05-01 00:00:00+05:30'::timestamp with time zone) AND (insert_time < '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\n\" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\n\" -> Seq Scan on project (cost=0.00..1.63 rows=63 width=38)\"\n\n\nWe have 64 GB of RAM &\n\nCPU(s): 40\nThread(s) per core: 2\nCore(s) per socket: 10\nSocket(s): 2\n\n\nPostgreSQL.conf parameter:\nshared_buffers =16GB\nwork_mem =32MB\n\nWould you please help how we can tune this query at database & code level.\n\nRegards Daulat\n\n\n\n\n\n\n\n\n\nHi team,\n \nWe are getting very slow response of this query.\n \nSELECT date_trunc('day', insert_time),workflow.project.project_name,\nworkflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id,count(*),\nround(sum(execution_time)/1000) as Sum_time_sec,round(((round(sum(execution_time)/1000))/60)/count(*),2) as Efficency_Min,round\n\n(((round(sum(execution_time)/1000)))/count(*),2) \nas Efficency_sec FROM workflow.project,workflow.tool_performance,workflow.evidence_to_do WHERE workflow.project\n.project_id = workflow.tool_performance.project_id AND insert_time >'2017-05-01' AND insert_time <'2017-05-02' AND\n\nworkflow.evidence_to_do.status_id in (15100,15150,15200,15300,15400,15500) Group BY date_trunc('day', 
insert_time),workflow.project.project_name,\nworkflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id ORDER BY\nworkflow.tool_performance.project_id,workflow.project.project_name,\nworkflow.tool_performance.step_id;\n \nThe following indexes are created on project & evidence_to_do table.\n\n \n\"CREATE INDEX project_id_idx ON workflow.project USING btree (project_id)\"\n\"CREATE INDEX evidence_to_do_status_id_index ON workflow.evidence_to_do USING btree (status_id)\"\n \n \nExplain plan of the Query is:\n \n\"GroupAggregate (cost=18675703613.60..20443753075.67 rows=6689718 width=69)\"\n\" -> Sort (cost=18675703613.60..18823015982.33 rows=58924947492 width=69)\"\n\" Sort Key: tool_performance.project_id, project.project_name, tool_performance.step_id, (date_trunc('day'::text, tool_performance.insert_time)), tool_performance.user_id\"\n\" -> Nested Loop (cost=2.42..884042104.67 rows=58924947492 width=69)\"\n\" -> Seq Scan on evidence_to_do (cost=0.00..118722.17 rows=554922 width=0)\"\n\" Filter: (status_id = ANY ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\n\" -> Materialize (cost=2.42..49435.58 rows=106186 width=69)\"\n\" -> Hash Join (cost=2.42..48904.65 rows=106186 width=69)\"\n\" Hash Cond: (tool_performance.project_id = project.project_id)\"\n\" -> Seq Scan on tool_performance (cost=0.00..47442.18 rows=106186 width=39)\"\n\" Filter: ((insert_time > '2017-05-01 00:00:00+05:30'::timestamp with time zone) AND (insert_time < '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\n\" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\n\" -> Seq Scan on project (cost=0.00..1.63 rows=63 width=38)\"\n \n \nWe have 64 GB of RAM & \n \nCPU(s): 40\nThread(s) per core: 2\nCore(s) per socket: 10\nSocket(s): 2\n\n\n\nPostgreSQL.conf parameter:\nshared_buffers =16GB\nwork_mem =32MB\n \nWould you please help how we can tune this query at database & code level.\n \nRegards Daulat",
"msg_date": "Thu, 25 May 2017 05:13:26 +0000",
"msg_from": "Daulat Ram <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query is very much slow"
},
{
"msg_contents": "Am 25.05.2017 um 07:13 schrieb Daulat Ram:\n>\n> Hi team,\n>\n> We are getting very slow response of this query.\n>\n> SELECT date_trunc('day', insert_time),workflow.project.project_name,\n>\n> workflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id,count(*),\n>\n> round(sum(execution_time)/1000) as \n> Sum_time_sec,round(((round(sum(execution_time)/1000))/60)/count(*),2) \n> as Efficency_Min,round\n>\n> (((round(sum(execution_time)/1000)))/count(*),2)\n>\n> as Efficency_sec FROM \n> workflow.project,workflow.tool_performance,workflow.evidence_to_do \n> WHERE workflow.project\n>\n> .project_id = workflow.tool_performance.project_id AND insert_time \n> >'2017-05-01' AND insert_time <'2017-05-02' AND\n>\n> workflow.evidence_to_do.status_id in \n> (15100,15150,15200,15300,15400,15500) Group BY date_trunc('day', \n> insert_time),workflow.project.project_name,\n>\n> workflow.tool_performance.project_id,workflow.tool_performance.user_id,workflow.tool_performance.step_id \n> ORDER BY\n>\n> workflow.tool_performance.project_id,workflow.project.project_name,\n>\n> workflow.tool_performance.step_id;\n>\n> *The following indexes are created on project & evidence_to_do table*.\n>\n> \"CREATE INDEX project_id_idx ON workflow.project USING btree (project_id)\"\n>\n> \"CREATE INDEX evidence_to_do_status_id_index ON \n> workflow.evidence_to_do USING btree (status_id)\"\n>\n> *Explain plan of the Query is:*\n>\n> \"GroupAggregate (cost=18675703613.60..20443753075.67 rows=6689718 \n> width=69)\"\n>\n> \" -> Sort (cost=18675703613.60..18823015982.33 rows=58924947492 \n> width=69)\"\n>\n> \" Sort Key: tool_performance.project_id, project.project_name, \n> tool_performance.step_id, (date_trunc('day'::text, \n> tool_performance.insert_time)), tool_performance.user_id\"\n>\n> \" -> Nested Loop (cost=2.42..884042104.67 rows=58924947492 \n> width=69)\"\n>\n> \" -> Seq Scan on evidence_to_do (cost=0.00..118722.17 \n> rows=554922 width=0)\"\n>\n> \" Filter: (status_id = ANY \n> ('{15100,15150,15200,15300,15400,15500}'::bigint[]))\"\n>\n> \" -> Materialize (cost=2.42..49435.58 rows=106186 width=69)\"\n>\n> \" -> Hash Join (cost=2.42..48904.65 rows=106186 \n> width=69)\"\n>\n> \" Hash Cond: (tool_performance.project_id = \n> project.project_id)\"\n>\n> \" -> Seq Scan on tool_performance \n> (cost=0.00..47442.18 rows=106186 width=39)\"\n>\n> \" Filter: ((insert_time > '2017-05-01 \n> 00:00:00+05:30'::timestamp with time zone) AND (insert_time < \n> '2017-05-02 00:00:00+05:30'::timestamp with time zone))\"\n>\n> \" -> Hash (cost=1.63..1.63 rows=63 width=38)\"\n>\n> \" -> Seq Scan on project \n> (cost=0.00..1.63 rows=63 width=38)\"\n>\n\nyou will get a so-called cross join with 106186 rows from \ntool_performance multiplied with 554922\nrows from evidence_to_do, resulting in 58.924.947.492 rows in total. Is \nthat really what you want?\n\nI think, there is a missing join-condition. 
It would be better to use \nexplicit JOIN syntax to prevent such errors.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com",
"msg_date": "Thu, 25 May 2017 09:49:37 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query is very much slow"
},
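With the explicit JOIN syntax recommended above, each JOIN requires an ON clause, so a forgotten condition shows up immediately instead of silently producing a cartesian product. A hedged rewrite of the query follows; the join column for evidence_to_do is not stated in this thread, so project_id is assumed (matching the near-identical query in the "Query is running very slow......" thread), execution_time is assumed to be a tool_performance column, and the select list is abbreviated:

    SELECT date_trunc('day', t.insert_time)        AS day,
           p.project_name,
           t.project_id,
           t.user_id,
           t.step_id,
           count(*),
           round(sum(t.execution_time) / 1000)     AS sum_time_sec
    FROM workflow.tool_performance t
    JOIN workflow.project        p ON p.project_id = t.project_id
    JOIN workflow.evidence_to_do e ON e.project_id = t.project_id  -- assumed join key
    WHERE t.insert_time > '2017-05-01'
      AND t.insert_time < '2017-05-02'
      AND e.status_id IN (15100, 15150, 15200, 15300, 15400, 15500)
    GROUP BY date_trunc('day', t.insert_time),
             p.project_name, t.project_id, t.user_id, t.step_id
    ORDER BY t.project_id, p.project_name, t.step_id;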
{
"msg_contents": "Hi,\n\nthere is a similar question from [email protected], but it is \nnot exact the same query.\n[PERFORM] Query is running very slow......, some hours ago.\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 May 2017 10:10:06 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query is very much slow"
}
] |
[
{
"msg_contents": "I created such table (similar to example from\nhttp://use-the-index-luke.com/sql/example-schema/postgresql/performance-testing-scalability\n)\n\nCREATE TABLE scale_data (\n section NUMERIC NOT NULL,\n id1 NUMERIC NOT NULL, -- unique values simulating ID or Timestamp\n id2 NUMERIC NOT NULL -- a kind of Type);\n\nPopulate it with:\n\nINSERT INTO scale_dataSELECT sections.sections,\nsections.sections*10000 + gen.gen\n , CEIL(RANDOM()*100)\n FROM GENERATE_SERIES(1, 300) sections,\n GENERATE_SERIES(1, 90000) gen\n WHERE gen <= sections * 300;\n\nIt generated 13545000 records.\n\nComposite index on it:\n\nCREATE INDEX id1_id2_idx\n ON public.scale_data\n USING btree\n (id1, id2);\n\nAnd select#1:\n\nselect id2 from scale_data where id2 in (50)order by id1 desc\nlimit 500\n\nExplain analyze:\n\n\"Limit (cost=0.56..1177.67 rows=500 width=11) (actual\ntime=0.046..5.124 rows=500 loops=1)\"\" -> Index Only Scan Backward\nusing id1_id2_idx on scale_data (cost=0.56..311588.74 rows=132353\nwidth=11) (actual time=0.045..5.060 rows=500 loops=1)\"\" Index\nCond: (id2 = '50'::numeric)\"\" Heap Fetches: 0\"\"Planning time:\n0.103 ms\"\"Execution time: 5.177 ms\"\n\nSelect#2 --more values in IN - plan has changed\n\nselect id2 from scale_data where id2 in (50, 52)order by id1 desc\nlimit 500\n\nExplain analyze#2:\n\n\"Limit (cost=0.56..857.20 rows=500 width=11) (actual\ntime=0.061..8.703 rows=500 loops=1)\"\" -> Index Only Scan Backward\nusing id1_id2_idx on scale_data (cost=0.56..445780.74 rows=260190\nwidth=11) (actual time=0.059..8.648 rows=500 loops=1)\"\" Filter:\n(id2 = ANY ('{50,52}'::numeric[]))\"\" Rows Removed by Filter:\n25030\"\" Heap Fetches: 0\"\"Planning time: 0.153 ms\"\"Execution\ntime: 8.771 ms\"\n\nWhy plan differs? Why in #1 it does show like *Index condition*, but in #2\n*Filter* and number of index scanned cells. Doesn't sql#1 traverse index in\nthe same way like explain for sql#2 shows?\n\nOn real/production DB #2 works much slower, even if search by 2 keys\nseparately is fast\n\nPG 9.5, CentOS 6.7\n\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail&utm_term=icon>\nVirus-free.\nwww.avast.com\n<https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail&utm_term=link>\n<#DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2>\n\n\nI created such table (similar to example from http://use-the-index-luke.com/sql/example-schema/postgresql/performance-testing-scalability )\nCREATE TABLE scale_data (\n section NUMERIC NOT NULL,\n id1 NUMERIC NOT NULL, -- unique values simulating ID or Timestamp\n id2 NUMERIC NOT NULL -- a kind of Type\n);\nPopulate it with: \nINSERT INTO scale_data\nSELECT sections.sections, sections.sections*10000 + gen.gen\n , CEIL(RANDOM()*100) \n FROM GENERATE_SERIES(1, 300) sections,\n GENERATE_SERIES(1, 90000) gen\n WHERE gen <= sections * 300;\nIt generated 13545000 records. 
\nComposite index on it: \nCREATE INDEX id1_id2_idx\n ON public.scale_data\n USING btree\n (id1, id2);\nAnd select#1: \nselect id2 from scale_data \nwhere id2 in (50)\norder by id1 desc\nlimit 500\nExplain analyze: \n\"Limit (cost=0.56..1177.67 rows=500 width=11) (actual time=0.046..5.124 rows=500 loops=1)\"\n\" -> Index Only Scan Backward using id1_id2_idx on scale_data (cost=0.56..311588.74 rows=132353 width=11) (actual time=0.045..5.060 rows=500 loops=1)\"\n\" Index Cond: (id2 = '50'::numeric)\"\n\" Heap Fetches: 0\"\n\"Planning time: 0.103 ms\"\n\"Execution time: 5.177 ms\"\nSelect#2 --more values in IN - plan has changed\nselect id2 from scale_data \nwhere id2 in (50, 52)\norder by id1 desc\nlimit 500\nExplain analyze#2: \n\"Limit (cost=0.56..857.20 rows=500 width=11) (actual time=0.061..8.703 rows=500 loops=1)\"\n\" -> Index Only Scan Backward using id1_id2_idx on scale_data (cost=0.56..445780.74 rows=260190 width=11) (actual time=0.059..8.648 rows=500 loops=1)\"\n\" Filter: (id2 = ANY ('{50,52}'::numeric[]))\"\n\" Rows Removed by Filter: 25030\"\n\" Heap Fetches: 0\"\n\"Planning time: 0.153 ms\"\n\"Execution time: 8.771 ms\"\nWhy plan differs? \nWhy in #1 it does show like Index condition, but in #2 Filter and number of index scanned cells. \nDoesn't sql#1 traverse index in the same way like explain for sql#2 shows?\nOn real/production DB #2 works much slower, even if search by 2 keys separately is fast\nPG 9.5, CentOS 6.7\n\n\n\n\nVirus-free. www.avast.com",
"msg_date": "Thu, 25 May 2017 21:29:53 +0300",
"msg_from": "Alexandru Lazarev <[email protected]>",
"msg_from_op": true,
"msg_subject": "Multicolumn B-Tree index - order by on 1st column and IN lookup for\n 2nd"
}
] |
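One hedged experiment for the multicolumn-index question above, not something tested in the thread: with id2 as the leading column, the id2 predicate can bound the B-tree scan and appear as an index condition instead of a filter that discards rows:

    -- Hypothetical alternative index for the scale_data example; with id2
    -- leading, "id2 = 50" or "id2 IN (50, 52)" can drive the index scan.
    CREATE INDEX id2_id1_idx ON public.scale_data USING btree (id2, id1);

For a single id2 value the matching entries are already in id1 order, so a backward scan can satisfy ORDER BY id1 DESC LIMIT 500 directly; with several values in the IN list the planner still needs a top-N sort over the rows matching those values, so whether this beats the existing (id1, id2) index depends on how many rows each id2 value matches.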
[
{
"msg_contents": "Hi,\n\nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n\nSpecifically I am interested in tools to help:\n\nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n\n\nThanks\nravi\n\n\n\n\n\n\n\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi",
"msg_date": "Thu, 25 May 2017 19:48:43 +0000",
"msg_from": "Ravi Tammineni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Monitoring tool for Postgres Database"
},
{
"msg_contents": "Ravi Tammineni schrieb am 25.05.2017 um 21:48:\n> What is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n> \n> Specifically I am interested in tools to help:\n> Alert DBAs to problems with both configuration and performance issues\n> Deadlocks, Long running queries etc.,\n> Monitoring of overall system performance\n> General performance tuning\n> Storage/Disk latencies\n\nTake a look at PoWA: http://dalibo.github.io/powa/\nand OPM: http://opm.io/\n\n\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Thu, 25 May 2017 22:48:11 +0200",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
},
{
"msg_contents": "Hi Ravi,\n\n\n> What is the best monitoring tool for Postgres database? Something like\n> Oracle Enterprise Manager.\n>\n\nIf you're an existing user of OEM, there is a PostgreSQL plugin for it by\nBlue Medora (believe it is commercial). You might like to have a look at\nit.\n\n>\n>\n> Specifically I am interested in tools to help:\n>\n>\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n>\n>\n>\n>\n> Thanks\n>\n> ravi\n>\n\nCheers\nGary\n\nHi Ravi, \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.If you're an existing user of OEM, there is a PostgreSQL plugin for it by Blue Medora (believe it is commercial). You might like to have a look at it. \n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi\n\n\nCheersGary",
"msg_date": "Fri, 26 May 2017 09:05:15 +1000",
"msg_from": "Gary Evans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Monitoring tool for Postgres Database"
},
{
"msg_contents": "On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> What is the best monitoring tool for Postgres database? Something like\n> Oracle Enterprise Manager.\n>\n>\n>\n> Specifically I am interested in tools to help:\n>\n>\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n>\n>\n>\n>\n> Thanks\n>\n> ravi\n>\n\nWe use Datadog. Their PostgreSQL plugin covers most of the most relevant\nstats. It is easy to configure and not very expensive at all. They have\nan easy GUI based configuration for monitors and alerts, and you can link\nit with something like Victorops and Slack for additional pager escalation\npolicies. We have all of our batch processing tied into Datadog as well,\nso we can get a picture of events, systems, and database internals all in\none dashboard.\n\nOn Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <[email protected]> wrote:\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi\n\n\nWe use Datadog. Their PostgreSQL plugin covers most of the most relevant stats. It is easy to configure and not very expensive at all. They have an easy GUI based configuration for monitors and alerts, and you can link it with something like Victorops and Slack for additional pager escalation policies. We have all of our batch processing tied into Datadog as well, so we can get a picture of events, systems, and database internals all in one dashboard.",
"msg_date": "Fri, 26 May 2017 06:19:56 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
},
{
"msg_contents": "Hi Ravi,\n\nWe use the nagios to monitor the Postgresql Database . Nagios provide the\ndedicated API for Postgresql and you can customize the same.\n\nThanks.\n\nOn May 26, 2017 1:19 AM, \"Ravi Tammineni\" <[email protected]>\nwrote:\n\n> Hi,\n>\n>\n>\n> What is the best monitoring tool for Postgres database? Something like\n> Oracle Enterprise Manager.\n>\n>\n>\n> Specifically I am interested in tools to help:\n>\n>\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n>\n>\n>\n>\n> Thanks\n>\n> ravi\n>\n\nHi Ravi,We use the nagios to monitor the Postgresql Database . Nagios provide the dedicated API for Postgresql and you can customize the same.Thanks.On May 26, 2017 1:19 AM, \"Ravi Tammineni\" <[email protected]> wrote:\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi",
"msg_date": "Fri, 26 May 2017 19:58:22 +0530",
"msg_from": "Ashish Tiwari <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Monitoring tool for Postgres Database"
},
{
"msg_contents": "+1 for Datadog. It is highly configurable, but out of the box the \npostgres integration collects a good amount of useful stuff. Tech \nsupport is also good.\n\nhttp://docs.datadoghq.com/integrations/postgresql/\n\nGrant Evans\n\nEnova Inc.\n\[email protected]\n\n\nOn 5/26/17 5:19 AM, Rick Otten wrote:\n> On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni \n> <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> What is the best monitoring tool for Postgres database? Something\n> like Oracle Enterprise Manager.\n>\n> Specifically I am interested in tools to help:\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n> Thanks\n>\n> ravi\n>\n>\n> We use Datadog. Their PostgreSQL plugin covers most of the most \n> relevant stats. It is easy to configure and not very expensive at \n> all. They have an easy GUI based configuration for monitors and \n> alerts, and you can link it with something like Victorops and Slack \n> for additional pager escalation policies. We have all of our batch \n> processing tied into Datadog as well, so we can get a picture of \n> events, systems, and database internals all in one dashboard.\n>\n\n\n\n\n\n\n\n+1 for Datadog. It is highly configurable,\n but out of the box the postgres integration collects a good\n amount of useful stuff. Tech support is also good.\n\nhttp://docs.datadoghq.com/integrations/postgresql/\nGrant Evans\nEnova Inc.\[email protected]\n\n\nOn 5/26/17 5:19 AM, Rick Otten wrote:\n\n\n\n\nOn Thu, May 25, 2017 at 3:48 PM, Ravi\n Tammineni <[email protected]>\n wrote:\n\n\n\nHi,\n \nWhat is the best monitoring tool\n for Postgres database? Something like Oracle\n Enterprise Manager.\n \nSpecifically I am interested in\n tools to help:\n \nAlert DBAs to problems with both\n configuration and performance issues\nDeadlocks, Long running queries\n etc.,\nMonitoring of overall system\n performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\n\nravi\n\n\n\n\n\n\nWe use Datadog. Their PostgreSQL\n plugin covers most of the most relevant stats. It is easy to\n configure and not very expensive at all. They have an easy\n GUI based configuration for monitors and alerts, and you can\n link it with something like Victorops and Slack for additional\n pager escalation policies. We have all of our batch\n processing tied into Datadog as well, so we can get a picture\n of events, systems, and database internals all in one\n dashboard.",
"msg_date": "Fri, 26 May 2017 09:31:48 -0500",
"msg_from": "gevans <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
},
{
"msg_contents": "We have been using Nagios to monitor the system level stats. The database\nlevel stats that we gather are custom scripts that we have nagios poll to\nget the database health. You could use pg badger to generate reports\nagainst your database logs as well. Pg_badger reports are your bffs for\nperformance related specs.. very close to AWR reports that oracle provides.\n\nSotrage/Disk latencies -- we have oracle's os watcher we running regularly\non these hosts to generate iostats as well.\n\nThanks.\n-Amrutha.\n\nOn Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> What is the best monitoring tool for Postgres database? Something like\n> Oracle Enterprise Manager.\n>\n>\n>\n> Specifically I am interested in tools to help:\n>\n>\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n>\n>\n>\n>\n> Thanks\n>\n> ravi\n>\n\nWe have been using Nagios to monitor the system level stats. The database level stats that we gather are custom scripts that we have nagios poll to get the database health. You could use pg badger to generate reports against your database logs as well. Pg_badger reports are your bffs for performance related specs.. very close to AWR reports that oracle provides.Sotrage/Disk latencies -- we have oracle's os watcher we running regularly on these hosts to generate iostats as well. Thanks.-Amrutha.On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <[email protected]> wrote:\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi",
"msg_date": "Fri, 26 May 2017 10:32:45 -0400",
"msg_from": "\"Sunkara, Amrutha\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Monitoring tool for Postgres Database"
},
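The custom checks mentioned above usually boil down to queries against the statistics views, wrapped in a small script that Nagios polls and alerts on. A minimal sketch covering two of the points in the original question, long-running queries and deadlocks; the five-minute threshold is an arbitrary placeholder:

    -- Sessions whose current statement has been running for more than 5 minutes.
    SELECT pid,
           usename,
           state,
           now() - query_start AS runtime,
           left(query, 80)     AS query
    FROM pg_stat_activity
    WHERE state <> 'idle'
      AND now() - query_start > interval '5 minutes'
    ORDER BY runtime DESC;

    -- Deadlocks per database since the statistics were last reset.
    SELECT datname, deadlocks
    FROM pg_stat_database
    ORDER BY deadlocks DESC;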
{
"msg_contents": "We've found pghero to be a good first line of defence. It doesn't have\nalerting yet, but it's great for a quick high level healthcheck.\n\nAlso +1 for Datadog. Extremely flexible and elegant UI + powerful alerting\ncapabilities.\n\nOn Fri, May 26, 2017 at 10:32 AM, Sunkara, Amrutha <[email protected]>\nwrote:\n\n> We have been using Nagios to monitor the system level stats. The database\n> level stats that we gather are custom scripts that we have nagios poll to\n> get the database health. You could use pg badger to generate reports\n> against your database logs as well. Pg_badger reports are your bffs for\n> performance related specs.. very close to AWR reports that oracle provides.\n>\n> Sotrage/Disk latencies -- we have oracle's os watcher we running regularly\n> on these hosts to generate iostats as well.\n>\n> Thanks.\n> -Amrutha.\n>\n> On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <\n> [email protected]> wrote:\n>\n>> Hi,\n>>\n>>\n>>\n>> What is the best monitoring tool for Postgres database? Something like\n>> Oracle Enterprise Manager.\n>>\n>>\n>>\n>> Specifically I am interested in tools to help:\n>>\n>>\n>>\n>> Alert DBAs to problems with both configuration and performance issues\n>>\n>> Deadlocks, Long running queries etc.,\n>>\n>> Monitoring of overall system performance\n>>\n>> General performance tuning\n>>\n>> Storage/Disk latencies\n>>\n>>\n>>\n>>\n>>\n>> Thanks\n>>\n>> ravi\n>>\n>\n>\n\nWe've found pghero to be a good first line of defence. It doesn't have alerting yet, but it's great for a quick high level healthcheck. Also +1 for Datadog. Extremely flexible and elegant UI + powerful alerting capabilities.On Fri, May 26, 2017 at 10:32 AM, Sunkara, Amrutha <[email protected]> wrote:We have been using Nagios to monitor the system level stats. The database level stats that we gather are custom scripts that we have nagios poll to get the database health. You could use pg badger to generate reports against your database logs as well. Pg_badger reports are your bffs for performance related specs.. very close to AWR reports that oracle provides.Sotrage/Disk latencies -- we have oracle's os watcher we running regularly on these hosts to generate iostats as well. Thanks.-Amrutha.On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <[email protected]> wrote:\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi",
"msg_date": "Sat, 27 May 2017 00:58:52 -0400",
"msg_from": "Dave Stibrany <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Monitoring tool for Postgres Database"
},
{
"msg_contents": "Hi Ravi,\n\nWe, at Dalibo, are contributing to the postgres community mostly through \nopen source administration tools development.\n\nAmong them :\n- pgBadger, from Gilles Darold, extracts a lot of data from the postgres \nlogs and restitutes them via a web interface \n(https://github.com/dalibo/pgbadger and http://dalibo.github.io/pgbadger/),\n- OPM - Open PostgreSQL Monitoring - monitors the activity of instances \nand sends alerts if needed (http://opm.io/, https://github.com/OPMDG and \nhttp://opm.readthedocs.io/index.html). It uses the check_pgactivity agent,\n- PoWA - PostgreSQL Workload Analyzer - captures and stores the SQL \nactivity of instances (using the pg_stat_statements extension) and \nreports it through a web interface. \n(http://powa.readthedocs.io/en/latest/ and \nhttps://github.com/dalibo/powa). Several plugins help the DBA in \nunderstanding and improving SQL statements performance:\n - pg_qualstats evaluates the selectivity of predicates or where \nclause encountered in SQL statements,\n - pg_stat_kcache captures additional statistics from the OS, like \nCPU and physical I/Os,\n - HypoPG allows to create hypothetical indexes and shows the access \nplan that the postgres optimizer would choose if these indexes would exist.\n- PgCluu, from Gilles Darold, performs a full audit of a PostgreSQL \nCluster performances. A collector grabs statistics on the PostgreSQL \ncluster using psql and sar, a grapher generates all HTML and charts \noutput. (http://pgcluu.darold.net/ and https://github.com/darold/pgcluu)\n\nFYI, we are also working on a new project named TemBoard \n(http://temboard.io/ and https://github.com/dalibo/temboard). It is not \nyet production ready. But it has been presented at the latest postgres \nconference in Russia (https://pgconf.ru/en/2017/93881).\n\nPhilippe Beaudoin.\n\nLe 27/05/2017 à 06:58, Dave Stibrany a écrit :\n> We've found pghero to be a good first line of defence. It doesn't have \n> alerting yet, but it's great for a quick high level healthcheck.\n>\n> Also +1 for Datadog. Extremely flexible and elegant UI + powerful \n> alerting capabilities.\n>\n> On Fri, May 26, 2017 at 10:32 AM, Sunkara, Amrutha \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> We have been using Nagios to monitor the system level stats. The\n> database level stats that we gather are custom scripts that we\n> have nagios poll to get the database health. 
You could use pg\n> badger to generate reports against your database logs as well.\n> Pg_badger reports are your bffs for performance related specs..\n> very close to AWR reports that oracle provides.\n>\n> Sotrage/Disk latencies -- we have oracle's os watcher we running\n> regularly on these hosts to generate iostats as well.\n>\n> Thanks.\n> -Amrutha.\n>\n> On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni\n> <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> Hi,\n>\n> What is the best monitoring tool for Postgres database?\n> Something like Oracle Enterprise Manager.\n>\n> Specifically I am interested in tools to help:\n>\n> Alert DBAs to problems with both configuration and performance\n> issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n> Thanks\n>\n> ravi\n>\n>\n>",
"msg_date": "Sat, 27 May 2017 17:42:54 +0200",
"msg_from": "phb07 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
},
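A minimal sketch of the hypothetical-index workflow that HypoPG (mentioned in the message above) enables, assuming the extension is installed on the server; the table and column names below are illustrative only, not taken from the thread:

    -- register a hypothetical index; no real index is built on disk
    CREATE EXTENSION IF NOT EXISTS hypopg;
    SELECT * FROM hypopg_create_index('CREATE INDEX ON orders (customer_id)');
    -- plain EXPLAIN (without ANALYZE) can now consider the hypothetical index
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- discard all hypothetical indexes for this session
    SELECT hypopg_reset();
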
{
"msg_contents": "We use OKmeter.io, their Postgres monitoring is really good.\n\nOn Thu, May 25, 2017 at 10:48 PM, Ravi Tammineni <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> What is the best monitoring tool for Postgres database?\n>\n\nWe use OKmeter.io, their Postgres monitoring is really good.On Thu, May 25, 2017 at 10:48 PM, Ravi Tammineni <[email protected]> wrote:\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database?",
"msg_date": "Sat, 3 Jun 2017 11:37:05 +0300",
"msg_from": "Nikolay Samokhvalov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
}
] |
[
{
"msg_contents": "Personally I push those off to my network monitor. Nagios in my case.\r\n\r\nI find a single integrated alert and escalation framework is better than individual tools, but that's just me.\r\n\r\nIf you're using Nagios, let me know, and I can pop you several stub scripts to help.\r\n\r\nOn May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]> wrote:\r\n\r\nHi,\r\n\r\n\r\n\r\nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\r\n\r\n\r\n\r\nSpecifically I am interested in tools to help:\r\n\r\n\r\n\r\nAlert DBAs to problems with both configuration and performance issues\r\n\r\nDeadlocks, Long running queries etc.,\r\n\r\nMonitoring of overall system performance\r\n\r\nGeneral performance tuning\r\n\r\nStorage/Disk latencies\r\n\r\n\r\n\r\n\r\n\r\nThanks\r\n\r\nravi\r\n\r\n\r\n\r\nJournyx, Inc.\r\n7600 Burnet Road #300\r\nAustin, TX 78757\r\nwww.journyx.com\r\n\r\np 512.834.8888\r\nf 512-834-8858\r\n\r\nDo you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/\r\n\n\n\n\n\n\nPersonally I push those off to my network monitor. Nagios in my case.\r\n\n\nI find a single integrated alert and escalation framework is better than individual tools, but that's just me.\n\n\nIf you're using Nagios, let me know, and I can pop you several stub scripts to help.\n\n\nOn May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]> wrote:\n\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi\n\n\n\n\n\n\n\n\n\n\n\nJournyx, Inc.\n7600 Burnet Road #300 \r\nAustin, TX 78757 \r\nwww.journyx.com \n\n\n\n\np 512.834.8888 \nf 512-834-8858 \n\n\n\nDo you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/",
"msg_date": "Thu, 25 May 2017 20:00:41 +0000",
"msg_from": "Scott Whitney <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [ADMIN] Monitoring tool for Postgres Database"
},
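As an illustration of the kind of check a Nagios-style stub script might run against the server (a sketch only, not the scripts offered above), the standard pg_stat_activity view covers the "long running queries" case from the original question; the 5-minute threshold is arbitrary:

    SELECT pid,
           now() - query_start AS runtime,
           state,
           left(query, 80) AS query
    FROM   pg_stat_activity
    WHERE  state <> 'idle'
      AND  now() - query_start > interval '5 minutes'
    ORDER  BY runtime DESC;
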
{
"msg_contents": "Hello Scott,\n\n Nagios is an alert system, I guess the check_postgres module is in use.\n EM Grid is a management console whose functions are not just alert ,\nbut also remote control, etc.\n\n Hello Ravi,\n\n From a open source standing, I propose pgcluu. But it doesn't include\nremote control .\n\nBest Regards,\nSteven\n\n2017-05-26 4:00 GMT+08:00 Scott Whitney <[email protected]>:\n\n> Personally I push those off to my network monitor. Nagios in my case.\n>\n> I find a single integrated alert and escalation framework is better than\n> individual tools, but that's just me.\n>\n> If you're using Nagios, let me know, and I can pop you several stub\n> scripts to help.\n>\n> On May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]>\n> wrote:\n>\n> Hi,\n>\n>\n>\n> What is the best monitoring tool for Postgres database? Something like\n> Oracle Enterprise Manager.\n>\n>\n>\n> Specifically I am interested in tools to help:\n>\n>\n>\n> Alert DBAs to problems with both configuration and performance issues\n>\n> Deadlocks, Long running queries etc.,\n>\n> Monitoring of overall system performance\n>\n> General performance tuning\n>\n> Storage/Disk latencies\n>\n>\n>\n>\n>\n> Thanks\n>\n> ravi\n>\n>\n>\n>\n> Journyx, Inc.\n> 7600 Burnet Road #300\n> Austin, TX 78757\n> www.journyx.com\n>\n> p 512.834.8888 <(512)%20834-8888>\n> f 512-834-8858 <(512)%20834-8858>\n>\n> Do you receive our promotional emails? You can subscribe or unsubscribe to\n> those emails at http://go.journyx.com/emailPreference/e/4932/714/\n>\n\nHello Scott, Nagios is an alert system, I guess the check_postgres module is in use. EM Grid is a management console whose functions are not just alert , but also remote control, etc. Hello Ravi, From a open source standing, I propose pgcluu. But it doesn't include remote control .Best Regards,Steven 2017-05-26 4:00 GMT+08:00 Scott Whitney <[email protected]>:\n\nPersonally I push those off to my network monitor. Nagios in my case.\n\n\nI find a single integrated alert and escalation framework is better than individual tools, but that's just me.\n\n\nIf you're using Nagios, let me know, and I can pop you several stub scripts to help.\n\n\nOn May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]> wrote:\n\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi\n\n\n\n\n\n\n\n\n\n\n\nJournyx, Inc.\n7600 Burnet Road #300 \nAustin, TX 78757 \nwww.journyx.com \n\n\n\n\np 512.834.8888 \nf 512-834-8858 \n\n\n\nDo you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/",
"msg_date": "Fri, 26 May 2017 09:20:55 +0800",
"msg_from": "Steven Chang <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Monitoring tool for Postgres Database"
},
{
"msg_contents": "On Thu, May 25, 2017 at 9:20 PM, Steven Chang <[email protected]>\nwrote:\n\n> Hello Scott,\n>\n> Nagios is an alert system, I guess the check_postgres module is in\n> use.\n> EM Grid is a management console whose functions are not just alert ,\n> but also remote control, etc.\n>\n> Hello Ravi,\n>\n> From a open source standing, I propose pgcluu. But it doesn't include\n> remote control .\n>\n> Best Regards,\n> Steven\n>\n> 2017-05-26 4:00 GMT+08:00 Scott Whitney <[email protected]>:\n>\n>> Personally I push those off to my network monitor. Nagios in my case.\n>>\n>> I find a single integrated alert and escalation framework is better than\n>> individual tools, but that's just me.\n>>\n>> If you're using Nagios, let me know, and I can pop you several stub\n>> scripts to help.\n>>\n>\nPersonally, I'm a huge fan of grafana and collectd It's definitely not a\npre-packaged solution, but it's simple, easy to use and very, VERY fast.\nAlerts with nagios work, but, nagios is awful with history and trending,\nbut, it beats most others for just alerts\n\n\n>\n>> On May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]>\n>> wrote:\n>>\n>> Hi,\n>>\n>>\n>>\n>> What is the best monitoring tool for Postgres database? Something like\n>> Oracle Enterprise Manager.\n>>\n>>\n>>\n>> Specifically I am interested in tools to help:\n>>\n>>\n>>\n>> Alert DBAs to problems with both configuration and performance issues\n>>\n>> Deadlocks, Long running queries etc.,\n>>\n>> Monitoring of overall system performance\n>>\n>> General performance tuning\n>>\n>> Storage/Disk latencies\n>>\n>>\n>>\n>>\n>>\n>> Thanks\n>>\n>> ravi\n>>\n>>\n>>\n>>\n>> Journyx, Inc.\n>> 7600 Burnet Road #300\n>> Austin, TX 78757\n>> www.journyx.com\n>>\n>> p 512.834.8888 <(512)%20834-8888>\n>> f 512-834-8858 <(512)%20834-8858>\n>>\n>> Do you receive our promotional emails? You can subscribe or unsubscribe\n>> to those emails at http://go.journyx.com/emailPreference/e/4932/714/\n>>\n>\n>\n\n\n-- \n--\nScott Mead\nSr. Architect\n*OpenSCG <http://openscg.com>*\nhttp://openscg.com\n\nOn Thu, May 25, 2017 at 9:20 PM, Steven Chang <[email protected]> wrote:Hello Scott, Nagios is an alert system, I guess the check_postgres module is in use. EM Grid is a management console whose functions are not just alert , but also remote control, etc. Hello Ravi, From a open source standing, I propose pgcluu. But it doesn't include remote control .Best Regards,Steven 2017-05-26 4:00 GMT+08:00 Scott Whitney <[email protected]>:\n\nPersonally I push those off to my network monitor. Nagios in my case.\n\n\nI find a single integrated alert and escalation framework is better than individual tools, but that's just me.\n\n\nIf you're using Nagios, let me know, and I can pop you several stub scripts to help.Personally, I'm a huge fan of grafana and collectd It's definitely not a pre-packaged solution, but it's simple, easy to use and very, VERY fast. Alerts with nagios work, but, nagios is awful with history and trending, but, it beats most others for just alerts \n\n\nOn May 25, 2017 2:50 PM, Ravi Tammineni <[email protected]> wrote:\n\n\n\nHi,\n \nWhat is the best monitoring tool for Postgres database? 
Something like Oracle Enterprise Manager.\n \nSpecifically I am interested in tools to help:\n \nAlert DBAs to problems with both configuration and performance issues\nDeadlocks, Long running queries etc.,\nMonitoring of overall system performance\nGeneral performance tuning\nStorage/Disk latencies\n \n \nThanks\nravi\n\n\n\n\n\n\n\n\n\n\n\nJournyx, Inc.\n7600 Burnet Road #300 \nAustin, TX 78757 \nwww.journyx.com \n\n\n\n\np 512.834.8888 \nf 512-834-8858 \n\n\n\nDo you receive our promotional emails? You can subscribe or unsubscribe to those emails at http://go.journyx.com/emailPreference/e/4932/714/ \n\n\n\n\n\n-- --Scott MeadSr. ArchitectOpenSCGhttp://openscg.com",
"msg_date": "Thu, 25 May 2017 22:15:54 -0400",
"msg_from": "Scott Mead <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Monitoring tool for Postgres Database"
},
{
"msg_contents": "On 05/25/2017 07:15 PM, Scott Mead wrote:\n\n> Thanks\n> \n> ravi\n> \n\nWe use Zabbix.\n\nJD\n\n\n\n> \n> \n> \n\n-- \nCommand Prompt, Inc. http://the.postgres.company/\n +1-503-667-4564\nPostgreSQL Centered full stack support, consulting and development.\nEveryone appreciates your honesty, until you are honest with them.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 May 2017 04:31:28 -0700",
"msg_from": "\"Joshua D. Drake\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Monitoring tool for Postgres Database"
},
{
"msg_contents": "> We use Zabbix.\n\nThere's a Zabbix template for PostgreSQL called \"pg_monz\".\n\nhttp://pg-monz.github.io/pg_monz/index-en.html\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 May 2017 23:14:27 +0900 (JST)",
"msg_from": "Tatsuo Ishii <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] Monitoring tool for Postgres Database"
}
] |
[
{
"msg_contents": "Hi,\n\n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n\n We have an application that does many small actions on the DB - and it's a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need to be low latency - unfortunately there are \"serial\" accesses where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to connect the client to the server.\n\nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing AND the cost imposed by postgres in \"formatting\" the messages between the client and server (transformation to network format?).\n\nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n\n\nCheers\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\n\n\nHi, \n \n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n \n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need to be low latency – unfortunately there are “serial” accesses\n where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to connect the client to the server.\n \nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing AND the cost imposed by postgres in “formatting”\n the messages between the client and server (transformation to network format?).\n \nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n \n \nCheers\n \n \n \n \n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. 
\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.",
"msg_date": "Fri, 26 May 2017 14:02:40 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Client Server performance & UDS"
},
{
"msg_contents": "You should have a layer such as pgbouncer between your pg instance and your\napplication. It is designed to mitigate the access latency issues you\ndescribe.\n\nOn May 26, 2017 10:03 AM, \"[email protected]\" <\[email protected]> wrote:\n\n> Hi,\n>\n>\n>\n> This is a general question around this performance area\n> rather than a specific performance problem.....so I apologise now for a\n> lack of a specific detail.\n>\n>\n>\n> We have an application that does many small actions on the\n> DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be\n> contained in memory. Accesses need to be low latency – unfortunately there\n> are “serial” accesses where the result of one access governs the next.\n> Luckily the work to be done by the DB is, we believe, very simple and\n> hence fast. Everything is running on one (large) server so we use UDS to\n> connect the client to the server.\n>\n>\n>\n> Out observation (suspicion) is that the latency of the access, as opposed\n> to the cost of the query, is high. Having done some investigation we\n> believe the UDS latency may be contributing AND the cost imposed by\n> postgres in “formatting” the messages between the client and server\n> (transformation to network format?).\n>\n>\n>\n> We will try and get underneath this with real results/measurements but I\n> would appreciate any comments pointers on what we are doing and how/if we\n> can optimise this style of applications\n>\n>\n>\n>\n>\n> Cheers\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> Unless otherwise stated, this email has been sent from Fujitsu Services\n> Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in\n> England No 2216100) both with registered offices at: 22 Baker Street,\n> London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and\n> Fujitsu Laboratories of Europe Limited (registered in England No. 4153469)\n> both with registered offices at: Hayes Park Central, Hayes End Road, Hayes,\n> Middlesex, UB4 8FE.\n> This email is only for the use of its intended recipient. Its contents are\n> subject to a duty of confidence and may be privileged. Fujitsu does not\n> guarantee that this email has not been intercepted and amended or that it\n> is virus-free.\n>\n\nYou should have a layer such as pgbouncer between your pg instance and your application. It is designed to mitigate the access latency issues you describe.On May 26, 2017 10:03 AM, \"[email protected]\" <[email protected]> wrote:\n\n\nHi, \n \n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n \n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need to be low latency – unfortunately there are “serial” accesses\n where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to connect the client to the server.\n \nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. 
Having done some investigation we believe the UDS latency may be contributing AND the cost imposed by postgres in “formatting”\n the messages between the client and server (transformation to network format?).\n \nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n \n \nCheers\n \n \n \n \n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.",
"msg_date": "Sat, 27 May 2017 08:26:31 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Client Server performance & UDS"
},
{
"msg_contents": "Hi Rick thanks for the reply.\n\nOur aim is to minimise latency hence we have a dedicated 1:1 relationship between the client and the server. If I use connection pooling surely this introduced latency – getting a server from the pool establishing the connection?\n\nAm I missing something?\n\n\nFrom: Rick Otten [mailto:[email protected]]\nSent: 27 May 2017 13:27\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n\nYou should have a layer such as pgbouncer between your pg instance and your application. It is designed to mitigate the access latency issues you describe.\n\nOn May 26, 2017 10:03 AM, \"[email protected]<mailto:[email protected]>\" <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n\n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need to be low latency – unfortunately there are “serial” accesses where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to connect the client to the server.\n\nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing AND the cost imposed by postgres in “formatting” the messages between the client and server (transformation to network format?).\n\nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n\n\nCheers\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\n\n\n\nHi Rick thanks for the reply.\n \nOur aim is to minimise latency hence we have a dedicated 1:1 relationship between the client and the server. 
If I use connection pooling\n surely this introduced latency – getting a server from the pool establishing the connection?\n \nAm I missing something?\n \n \nFrom: Rick Otten [mailto:[email protected]]\n\nSent: 27 May 2017 13:27\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n \n\nYou should have a layer such as pgbouncer between your pg instance and your application. It is designed to mitigate the access latency issues you describe.\n\n\n \n\nOn May 26, 2017 10:03 AM, \"[email protected]\" <[email protected]> wrote:\n\n\n\nHi,\n\n \n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n \n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need\n to be low latency – unfortunately there are “serial” accesses where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to\n connect the client to the server.\n \nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing\n AND the cost imposed by postgres in “formatting” the messages between the client and server (transformation to network format?).\n \nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n \n \nCheers\n \n \n \n \n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered\n in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\n\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.",
"msg_date": "Tue, 30 May 2017 07:34:26 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Client Server performance & UDS"
},
{
"msg_contents": "Establishing a connection with a PostgreSQL database is a very expensive\nprocess on the database server. On the other hand, establishing a\nconnection with pgbouncer is very fast. Offloading the connection\nmanagement to pgbouncer can significantly reduce the connection set up time.\n\nI've found it to help even with applications that have built-in connection\npooling.\n\nIf your clients are keeping persistent connections open to the database,\nand the latency you are experiencing is within the transaction itself, you\nmight look at disk I/O for your WAL (write ahead logs) and take a closer\nlook at WAL and checkpoint tuning.\n\n\nOn Tue, May 30, 2017 at 3:34 AM, [email protected] <\[email protected]> wrote:\n\n> Hi Rick thanks for the reply.\n>\n>\n>\n> Our aim is to minimise latency hence we have a dedicated 1:1 relationship\n> between the client and the server. If I use connection pooling surely this\n> introduced latency – getting a server from the pool establishing the\n> connection?\n>\n>\n>\n> Am I missing something?\n>\n>\n>\n>\n>\n> *From:* Rick Otten [mailto:[email protected]]\n> *Sent:* 27 May 2017 13:27\n> *To:* Hughes, Kevin <[email protected]>\n> *Cc:* pgsql-performa. <[email protected]>\n> *Subject:* Re: [PERFORM] Client Server performance & UDS\n>\n>\n>\n> You should have a layer such as pgbouncer between your pg instance and\n> your application. It is designed to mitigate the access latency issues you\n> describe.\n>\n>\n>\n> On May 26, 2017 10:03 AM, \"[email protected]\" <\n> [email protected]> wrote:\n>\n> Hi,\n>\n>\n>\n> This is a general question around this performance area\n> rather than a specific performance problem.....so I apologise now for a\n> lack of a specific detail.\n>\n>\n>\n> We have an application that does many small actions on the\n> DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be\n> contained in memory. Accesses need to be low latency – unfortunately there\n> are “serial” accesses where the result of one access governs the next.\n> Luckily the work to be done by the DB is, we believe, very simple and\n> hence fast. Everything is running on one (large) server so we use UDS to\n> connect the client to the server.\n>\n>\n>\n> Out observation (suspicion) is that the latency of the access, as opposed\n> to the cost of the query, is high. Having done some investigation we\n> believe the UDS latency may be contributing AND the cost imposed by\n> postgres in “formatting” the messages between the client and server\n> (transformation to network format?).\n>\n>\n>\n> We will try and get underneath this with real results/measurements but I\n> would appreciate any comments pointers on what we are doing and how/if we\n> can optimise this style of applications\n>\n>\n>\n>\n>\n> Cheers\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> Unless otherwise stated, this email has been sent from Fujitsu Services\n> Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in\n> England No 2216100) both with registered offices at: 22 Baker Street,\n> London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and\n> Fujitsu Laboratories of Europe Limited (registered in England No. 4153469)\n> both with registered offices at: Hayes Park Central, Hayes End Road, Hayes,\n> Middlesex, UB4 8FE.\n> This email is only for the use of its intended recipient. Its contents are\n> subject to a duty of confidence and may be privileged. 
Fujitsu does not\n> guarantee that this email has not been intercepted and amended or that it\n> is virus-free.\n>\n>\n> Unless otherwise stated, this email has been sent from Fujitsu Services\n> Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in\n> England No 2216100) both with registered offices at: 22 Baker Street,\n> London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and\n> Fujitsu Laboratories of Europe Limited (registered in England No. 4153469)\n> both with registered offices at: Hayes Park Central, Hayes End Road, Hayes,\n> Middlesex, UB4 8FE.\n> This email is only for the use of its intended recipient. Its contents are\n> subject to a duty of confidence and may be privileged. Fujitsu does not\n> guarantee that this email has not been intercepted and amended or that it\n> is virus-free.\n>\n\nEstablishing a connection with a PostgreSQL database is a very expensive process on the database server. On the other hand, establishing a connection with pgbouncer is very fast. Offloading the connection management to pgbouncer can significantly reduce the connection set up time.I've found it to help even with applications that have built-in connection pooling.If your clients are keeping persistent connections open to the database, and the latency you are experiencing is within the transaction itself, you might look at disk I/O for your WAL (write ahead logs) and take a closer look at WAL and checkpoint tuning.On Tue, May 30, 2017 at 3:34 AM, [email protected] <[email protected]> wrote:\n\n\nHi Rick thanks for the reply.\n \nOur aim is to minimise latency hence we have a dedicated 1:1 relationship between the client and the server. If I use connection pooling\n surely this introduced latency – getting a server from the pool establishing the connection?\n \nAm I missing something?\n \n \nFrom: Rick Otten [mailto:[email protected]]\n\nSent: 27 May 2017 13:27\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n \n\nYou should have a layer such as pgbouncer between your pg instance and your application. It is designed to mitigate the access latency issues you describe.\n\n\n \n\nOn May 26, 2017 10:03 AM, \"[email protected]\" <[email protected]> wrote:\n\n\n\nHi,\n\n \n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n \n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need\n to be low latency – unfortunately there are “serial” accesses where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to\n connect the client to the server.\n \nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. 
Having done some investigation we believe the UDS latency may be contributing\n AND the cost imposed by postgres in “formatting” the messages between the client and server (transformation to network format?).\n \nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n \n \nCheers\n \n \n \n \n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered\n in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\n\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.",
"msg_date": "Tue, 30 May 2017 06:17:37 -0400",
"msg_from": "Rick Otten <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Client Server performance & UDS"
},
{
"msg_contents": "On May 30, Rick Otten modulated:\n\n> If your clients are keeping persistent connections open to the\n> database, and the latency you are experiencing is within the\n> transaction itself, you might look at disk I/O for your WAL (write\n> ahead logs) and take a closer look at WAL and checkpoint tuning.\n> \n\nAlso, if you are doing similar operations over and over with literal\ndata in them, you may have more query planner overhead per transaction\nthan if you prepared statements when opening the persistent connection\nand then simply executed the statements over and over with different\nparameters for each request.\n\n\nKarl\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 May 2017 13:20:46 -0700",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Client Server performance & UDS"
},
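A minimal sketch of the prepared-statement pattern Karl describes, shown at the SQL level (most client drivers expose the same idea through their own prepare/execute APIs); the table and parameter names are illustrative:

    -- planned once per session...
    PREPARE get_item (int) AS
        SELECT * FROM items WHERE item_id = $1;
    -- ...then executed repeatedly with only the parameter changing
    EXECUTE get_item(1);
    EXECUTE get_item(2);
    DEALLOCATE get_item;
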
{
"msg_contents": "Well after some messing around I have eventually dug out some meaningful(?) information.\n\nWe engineered a simple test with a stored procedure that returned immediately – so the SQL is very simple (SELECT * from storedProc). This means no DB activity.\n\nOn a test machine that took ~44usecs....measured from the client. This sounds good BUT when I test the UDS on the same machine the round trip latency is ~3usecs. The implication is that the cost is in the message handling – at both ends.\n\nDigging into this further I ran ltrace against the postgres server......it was seriously busy but then again my client was spinning running the same request continually. The results from ltrace shows that the server is doing what appears to be data copies – it’s doing lots of strlen/strncpy/memcpy functions. Since the stored procedure is just returning it shouts that the server costs are dominated by message handling (copying in/out of buffers).\n\nIs there any way that this cost can be reduced? As I understand it Postgres the client/server protocol is platform independent EVEN when the client and server are on the same machine. Is there an optimisation that can avoid this overhead?\n\nAnother more radical solution to my admittedly very specific problem is to run the server and client in the same process.....don’t think that can be done with postgres\n\nAs always any help appreciated.\n\nCheers\n\n\n\n\n\n\n\nFrom: Rick Otten [mailto:[email protected]]\nSent: 30 May 2017 11:18\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n\nEstablishing a connection with a PostgreSQL database is a very expensive process on the database server. On the other hand, establishing a connection with pgbouncer is very fast. Offloading the connection management to pgbouncer can significantly reduce the connection set up time.\n\nI've found it to help even with applications that have built-in connection pooling.\n\nIf your clients are keeping persistent connections open to the database, and the latency you are experiencing is within the transaction itself, you might look at disk I/O for your WAL (write ahead logs) and take a closer look at WAL and checkpoint tuning.\n\n\nOn Tue, May 30, 2017 at 3:34 AM, [email protected]<mailto:[email protected]> <[email protected]<mailto:[email protected]>> wrote:\nHi Rick thanks for the reply.\n\nOur aim is to minimise latency hence we have a dedicated 1:1 relationship between the client and the server. If I use connection pooling surely this introduced latency – getting a server from the pool establishing the connection?\n\nAm I missing something?\n\n\nFrom: Rick Otten [mailto:[email protected]<mailto:[email protected]>]\nSent: 27 May 2017 13:27\nTo: Hughes, Kevin <[email protected]<mailto:[email protected]>>\nCc: pgsql-performa. <[email protected]<mailto:[email protected]>>\nSubject: Re: [PERFORM] Client Server performance & UDS\n\nYou should have a layer such as pgbouncer between your pg instance and your application. 
It is designed to mitigate the access latency issues you describe.\n\nOn May 26, 2017 10:03 AM, \"[email protected]<mailto:[email protected]>\" <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n\n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need to be low latency – unfortunately there are “serial” accesses where the result of one access governs the next. Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to connect the client to the server.\n\nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing AND the cost imposed by postgres in “formatting” the messages between the client and server (transformation to network format?).\n\nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n\n\nCheers\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\n\n\n\nWell after some messing around I have eventually dug out some meaningful(?) 
information.\n \nWe engineered a simple test with a stored procedure that returned immediately – so the SQL is very simple (SELECT * from storedProc).\n This means no DB activity.\n \nOn a test machine that took ~44usecs....measured from the client. This sounds good BUT when I test the UDS on the same machine the\n round trip latency is ~3usecs. The implication is that the cost is in the message handling – at both ends.\n \nDigging into this further I ran\nltrace against the postgres server......it was seriously busy but then again my client was spinning running the same request continually. The results from ltrace shows that the server is doing what appears to be data copies – it’s doing lots\n of strlen/strncpy/memcpy functions. Since the stored procedure is just returning it shouts that the server costs are dominated by message handling (copying in/out of buffers).\n \nIs there any way that this cost can be reduced? As I understand it Postgres the client/server protocol is platform independent EVEN\n when the client and server are on the same machine. Is there an optimisation that can avoid this overhead?\n \nAnother more radical solution to my admittedly very specific problem is to run the server and client in the same process.....don’t\n think that can be done with postgres\n \nAs always any help appreciated.\n \nCheers\n \n \n \n \n \n \n \nFrom: Rick Otten [mailto:[email protected]]\n\nSent: 30 May 2017 11:18\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n \n\nEstablishing a connection with a PostgreSQL database is a very expensive process on the database server. On the other hand, establishing a connection with pgbouncer is very fast. Offloading the connection management to pgbouncer can\n significantly reduce the connection set up time.\n\n \n\n\nI've found it to help even with applications that have built-in connection pooling.\n\n\n \n\n\nIf your clients are keeping persistent connections open to the database, and the latency you are experiencing is within the transaction itself, you might look at disk I/O for your WAL (write ahead logs) and take a closer look at WAL and\n checkpoint tuning.\n\n\n \n\n \n\nOn Tue, May 30, 2017 at 3:34 AM, \[email protected] <[email protected]> wrote:\n\n\n\nHi Rick thanks for the reply.\n \nOur aim is to minimise latency hence we have a dedicated 1:1 relationship between the client and the\n server. If I use connection pooling surely this introduced latency – getting a server from the pool establishing the connection?\n \nAm I missing something?\n \n \nFrom: Rick\n Otten [mailto:[email protected]]\n\nSent: 27 May 2017 13:27\nTo: Hughes, Kevin <[email protected]>\nCc: pgsql-performa. <[email protected]>\nSubject: Re: [PERFORM] Client Server performance & UDS\n \n\nYou should have a layer such as pgbouncer between your pg instance and your application. It is designed to mitigate the access latency issues you describe.\n\n\n \n\nOn May 26, 2017 10:03 AM, \"[email protected]\" <[email protected]>\n wrote:\n\n\n\nHi,\n\n \n This is a general question around this performance area rather than a specific performance problem.....so I apologise now for a lack of a specific detail.\n \n We have an application that does many small actions on the DB – and it’s a small DB (a 50/100 Mbytes) so we would expect it to be contained in memory. Accesses need\n to be low latency – unfortunately there are “serial” accesses where the result of one access governs the next. 
Luckily the work to be done by the DB is, we believe, very simple and hence fast. Everything is running on one (large) server so we use UDS to\n connect the client to the server.\n \nOut observation (suspicion) is that the latency of the access, as opposed to the cost of the query, is high. Having done some investigation we believe the UDS latency may be contributing\n AND the cost imposed by postgres in “formatting” the messages between the client and server (transformation to network format?).\n \nWe will try and get underneath this with real results/measurements but I would appreciate any comments pointers on what we are doing and how/if we can optimise this style of applications\n \n \nCheers\n \n \n \n \n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered\n in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\n\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered\n in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE.\n\nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.\n\n\n\n \n\n\n\n\n\nUnless otherwise stated, this email has been sent from Fujitsu Services Limited (registered in England No 96056); Fujitsu EMEA PLC (registered in England No 2216100) both with registered offices at: 22 Baker Street, London W1U 3BW; PFU (EMEA) Limited, (registered in England No 1578652) and Fujitsu Laboratories of Europe Limited (registered in England No. 4153469) both with registered offices at: Hayes Park Central, Hayes End Road, Hayes, Middlesex, UB4 8FE. \nThis email is only for the use of its intended recipient. Its contents are subject to a duty of confidence and may be privileged. Fujitsu does not guarantee that this email has not been intercepted and amended or that it is virus-free.",
"msg_date": "Fri, 9 Jun 2017 13:20:46 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Client Server performance & UDS"
},
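For reference, the "returns immediately" round-trip test described above can be reproduced from psql along these lines; the function below is an illustrative stand-in, not the original storedProc:

    CREATE OR REPLACE FUNCTION noop() RETURNS void
        LANGUAGE plpgsql AS $$ BEGIN RETURN; END $$;
    \timing on
    -- with no query work to do, the elapsed time is dominated by
    -- protocol handling and scheduling, not execution
    SELECT noop();
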
{
"msg_contents": "It is certainly the TCP loopback overhead you are experiencing.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n",
"msg_date": "Thu, 7 Jun 2018 02:40:39 -0700 (MST)",
"msg_from": "e-blokos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Client Server performance & UDS"
}
] |
[
{
"msg_contents": "Hi team:\r\n The following SQL is very slow in 9.6.1 for the plan has a “sort” node.\r\n\r\nSQL text:\r\nexplain(analyze, buffers, verbose, timing)WITH m as\r\n (SELECT date,accumulation,prod_type,IF,plan_code, mapping_code, channel, VARIABLE, up_load_data\r\n FROM sdm_actu_fore_up_act_nb\r\n WHERE fk_sdm_actu_fore_up_act_nb = 'd626e902-b3c5-495f-938f-3c6c74fa18da' ) ,\r\n a as\r\n (SELECT date,accumulation,prod_type,IF,plan_code, mapping_code, channel, VARIABLE, up_load_data\r\n FROM m\r\n WHERE date = '1' AND VARIABLE ='FYP_FAC') ,\r\n b as\r\n (SELECT date,mapping_code,channel,up_load_data\r\n FROM SDM_ACTU_FORE_UP_FYP_PROD\r\n WHERE FK_sdm_actu_fore_project_result = 'b9eece0c-60cc-403f-992f-9db9e9b78ee1'\r\n AND date >= '2017-01-31' ) ,\r\n n as\r\n (SELECT a.plan_code,a.mapping_code,a.channel,a.variable,b.date,\r\n CASE WHEN (a.up_load_data::numeric) = 0 THEN 0 ELSE b.up_load_data/(a.up_load_data::numeric) END AS fdyz\r\n FROM a, b\r\n WHERE /*a.plan_code = b.plan_code\r\n and*/ a.mapping_code = b.mapping_code\r\n AND a.channel=b.channel )\r\nSELECT 'b9eece0c-60cc-403f-992f-9db9e9b78ee1' FK_sdm_actu_fore_project_result,\r\n m.plan_code,\r\n m.mapping_code,\r\n m.accumulation,\r\n m.channel,\r\n m.prod_type,\r\n m.if,\r\n m.variable,\r\n 'PROF-IF' AS TYPE,\r\n\r\n ((date_trunc('month',add_months((n.date)::date,(m.date::numeric)))- interval '1 day')::date)::text,\r\n sum((m.up_load_data::numeric)*n.fdyz)\r\n FROM m, n\r\nWHERE m.mapping_code = n.mapping_code AND m.channel = n.channel\r\nGROUP BY m.plan_code,\r\n m.mapping_code,\r\n m.accumulation,\r\n m.channel,\r\n m.prod_type,\r\n m.if,\r\n m.variable,\r\n ((date_trunc('month',add_months((n.date)::date,(m.date::numeric)))- interval '1 day')::date)::text\r\n;\r\n===========\r\nPlan in 9.6.2:\r\n QUERY PL\r\nAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nGroupAggregate (cost=437554.59..437556.52 rows=22 width=352) (actual time=175322.440..192068.748 rows=1072820 loops=1)\r\n Output: 'b9eece0c-60cc-403f-992f-9db9e9b78ee1', m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, 'PROF-IF', ((((date_trunc('month'::text, ((((n.date\r\n)::date + ((((m.date)::numeric)::text || 'months'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text), sum(((m.up_load_data)::numeric * n.fdyz))\r\n Group Key: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, ((((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text || 'months\r\n'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text)\r\n Buffers: shared hit=29835, temp read=168320 written=168320\r\n CTE m\r\n -> Bitmap Heap Scan on public.sdm_actu_fore_up_act_nb (cost=22340.45..386925.95 rows=866760 width=60) (actual time=124.239..368.762 rows=895056 loops=1)\r\n Output: sdm_actu_fore_up_act_nb.date, sdm_actu_fore_up_act_nb.accumulation, sdm_actu_fore_up_act_nb.prod_type, sdm_actu_fore_up_act_nb.if, sdm_actu_fore_up_act_nb.plan_code, sdm_\r\nactu_fore_up_act_nb.mapping_code, sdm_actu_fore_up_act_nb.channel, sdm_actu_fore_up_act_nb.variable, sdm_actu_fore_up_act_nb.up_load_data\r\n Recheck Cond: 
(sdm_actu_fore_up_act_nb.fk_sdm_actu_fore_up_act_nb = 'd626e902-b3c5-495f-938f-3c6c74fa18da'::text)\r\n Heap Blocks: exact=23005\r\n Buffers: shared hit=29402\r\n -> Bitmap Index Scan on ix_sdm_actu_fore_up_act_nb (cost=0.00..22123.76 rows=866760 width=0) (actual time=119.406..119.406 rows=895056 loops=1)\r\n Index Cond: (sdm_actu_fore_up_act_nb.fk_sdm_actu_fore_up_act_nb = 'd626e902-b3c5-495f-938f-3c6c74fa18da'::text)\r\n Buffers: shared hit=6397\r\n CTE a\r\n -> CTE Scan on m m_1 (cost=0.00..21669.00 rows=22 width=288) (actual time=3.972..743.152 rows=289 loops=1)\r\n Output: m_1.date, m_1.accumulation, m_1.prod_type, m_1.if, m_1.plan_code, m_1.mapping_code, m_1.channel, m_1.variable, m_1.up_load_data\r\n Filter: ((m_1.date = '1'::text) AND (m_1.variable = 'FYP_FAC'::text))\r\n Rows Removed by Filter: 894767\r\n Buffers: shared hit=23004\r\n CTE b\r\n -> Bitmap Heap Scan on public.sdm_actu_fore_up_fyp_prod (cost=124.14..5052.60 rows=2497 width=33) (actual time=2.145..4.566 rows=4752 loops=1)\r\n Output: sdm_actu_fore_up_fyp_prod.date, sdm_actu_fore_up_fyp_prod.mapping_code, sdm_actu_fore_up_fyp_prod.channel, sdm_actu_fore_up_fyp_prod.up_load_data\r\n Recheck Cond: (sdm_actu_fore_up_fyp_prod.fk_sdm_actu_fore_project_result = 'b9eece0c-60cc-403f-992f-9db9e9b78ee1'::text)\r\n Filter: (sdm_actu_fore_up_fyp_prod.date >= '2017-01-31'::text)\r\n Heap Blocks: exact=315\r\n Buffers: shared hit=433\r\n -> Bitmap Index Scan on ix_sdm_actu_fore_up_fyp_prod (cost=0.00..123.52 rows=4746 width=0) (actual time=1.863..1.863 rows=14256 loops=1)\r\n Index Cond: (sdm_actu_fore_up_fyp_prod.fk_sdm_actu_fore_project_result = 'b9eece0c-60cc-403f-992f-9db9e9b78ee1'::text)\r\n Buffers: shared hit=118\r\n CTE n\r\n -> Hash Join (cost=0.77..69.46 rows=1 width=192) (actual time=745.835..756.304 rows=4764 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, b.date, CASE WHEN ((a.up_load_data)::numeric = '0'::numeric) THEN '0'::numeric ELSE (b.up_load_data / (a.up_load_data)\r\n::numeric) END\r\n Hash Cond: ((b.mapping_code = a.mapping_code) AND (b.channel = a.channel))\r\n Buffers: shared hit=23437\r\n -> CTE Scan on b (cost=0.00..49.94 rows=2497 width=128) (actual time=2.147..6.445 rows=4752 loops=1)\r\n Output: b.date, b.mapping_code, b.channel, b.up_load_data\r\n Buffers: shared hit=433\r\n -> Hash (cost=0.44..0.44 rows=22 width=160) (actual time=743.661..743.661 rows=289 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, a.up_load_data\r\n Buckets: 1024 Batches: 1 Memory Usage: 29kB\r\n Buffers: shared hit=23004\r\n -> CTE Scan on a (cost=0.00..0.44 rows=22 width=160) (actual time=3.974..743.380 rows=289 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, a.up_load_data\r\n Buffers: shared hit=23004\r\n -> Sort (cost=23837.58..23837.64 rows=22 width=320) (actual time=175322.411..178986.480 rows=14620032 loops=1)\r\n Output: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, ((((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text || 'mon\r\nths'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text), m.up_load_data, n.fdyz\r\n Sort Key: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, ((((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text || 'm\r\nonths'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text)\r\n Sort Method: external merge Disk: 1346544kB\r\n 
Buffers: shared hit=29835, temp read=168320 written=168320\r\n -> Hash Join (cost=0.04..23837.09 rows=22 width=320) (actual time=884.588..27338.979 rows=14620032 loops=1)\r\n Output: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, (((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text ||\r\n'months'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text, m.up_load_data, n.fdyz\r\n Hash Cond: ((m.mapping_code = n.mapping_code) AND (m.channel = n.channel))\r\n Buffers: shared hit=29835\r\n -> CTE Scan on m (cost=0.00..17335.20 rows=866760 width=288) (actual time=124.243..263.402 rows=895056 loops=1)\r\n Output: m.date, m.accumulation, m.prod_type, m.if, m.plan_code, m.mapping_code, m.channel, m.variable, m.up_load_data\r\n Buffers: shared hit=6398\r\n -> Hash (cost=0.02..0.02 rows=1 width=128) (actual time=760.302..760.302 rows=4764 loops=1)\r\n Output: n.date, n.fdyz, n.mapping_code, n.channel\r\n Buckets: 8192 (originally 1024) Batches: 1 (originally 1) Memory Usage: 389kB\r\n Buffers: shared hit=23437\r\n -> CTE Scan on n (cost=0.00..0.02 rows=1 width=128) (actual time=745.838..759.139 rows=4764 loops=1)\r\n Output: n.date, n.fdyz, n.mapping_code, n.channel\r\n Buffers: shared hit=23437\r\nPlanning time: 0.383 ms\r\nExecution time: 192187.911 ms\r\n(65 rows)\r\n\r\nTime: 192192.814 ms\r\n\r\n==========\r\nPlan in 9.4.1\r\n QUERY PL\r\nAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nHashAggregate (cost=478276.30..478278.89 rows=47 width=352) (actual time=92967.646..93660.910 rows=1072820 loops=1)\r\n Output: 'b9eece0c-60cc-403f-992f-9db9e9b78ee1', m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, 'PROF-IF', ((((date_trunc('month'::text, ((((n.date\r\n)::date + ((((m.date)::numeric)::text || 'months'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text), sum(((m.up_load_data)::numeric * n.fdyz))\r\n Group Key: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, (((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text || 'months'\r\n::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text\r\n Buffers: shared hit=30869, temp read=8103 written=8102\r\n CTE m\r\n -> Bitmap Heap Scan on public.sdm_actu_fore_up_act_nb (cost=37491.97..421474.67 rows=942376 width=60) (actual time=158.435..465.865 rows=895056 loops=1)\r\n Output: sdm_actu_fore_up_act_nb.date, sdm_actu_fore_up_act_nb.accumulation, sdm_actu_fore_up_act_nb.prod_type, sdm_actu_fore_up_act_nb.if, sdm_actu_fore_up_act_nb.plan_code, sdm_\r\nactu_fore_up_act_nb.mapping_code, sdm_actu_fore_up_act_nb.channel, sdm_actu_fore_up_act_nb.variable, sdm_actu_fore_up_act_nb.up_load_data\r\n Recheck Cond: (sdm_actu_fore_up_act_nb.fk_sdm_actu_fore_up_act_nb = 'd626e902-b3c5-495f-938f-3c6c74fa18da'::text)\r\n Heap Blocks: exact=23006\r\n Buffers: shared hit=30422\r\n -> Bitmap Index Scan on ix_sdm_actu_fore_up_act_nb (cost=0.00..37256.38 rows=942376 width=0) (actual time=153.180..153.180 rows=895056 loops=1)\r\n Index Cond: 
(sdm_actu_fore_up_act_nb.fk_sdm_actu_fore_up_act_nb = 'd626e902-b3c5-495f-938f-3c6c74fa18da'::text)\r\n Buffers: shared hit=7416\r\n CTE a\r\n -> CTE Scan on m m_1 (cost=0.00..23559.40 rows=24 width=288) (actual time=5.386..1227.412 rows=289 loops=1)\r\n Output: m_1.date, m_1.accumulation, m_1.prod_type, m_1.if, m_1.plan_code, m_1.mapping_code, m_1.channel, m_1.variable, m_1.up_load_data\r\n Filter: ((m_1.date = '1'::text) AND (m_1.variable = 'FYP_FAC'::text))\r\n Rows Removed by Filter: 894767\r\n Buffers: shared hit=23005, temp written=8101\r\n CTE b\r\n -> Bitmap Heap Scan on public.sdm_actu_fore_up_fyp_prod (cost=221.97..7251.24 rows=2575 width=33) (actual time=2.623..6.318 rows=4752 loops=1)\r\n Output: sdm_actu_fore_up_fyp_prod.date, sdm_actu_fore_up_fyp_prod.mapping_code, sdm_actu_fore_up_fyp_prod.channel, sdm_actu_fore_up_fyp_prod.up_load_data\r\n Recheck Cond: (sdm_actu_fore_up_fyp_prod.fk_sdm_actu_fore_project_result = 'b9eece0c-60cc-403f-992f-9db9e9b78ee1'::text)\r\n Filter: (sdm_actu_fore_up_fyp_prod.date >= '2017-01-31'::text)\r\n Heap Blocks: exact=327\r\n Buffers: shared hit=447\r\n -> Bitmap Index Scan on ix_sdm_actu_fore_up_fyp_prod (cost=0.00..221.32 rows=4920 width=0) (actual time=2.313..2.313 rows=14256 loops=1)\r\n Index Cond: (sdm_actu_fore_up_fyp_prod.fk_sdm_actu_fore_project_result = 'b9eece0c-60cc-403f-992f-9db9e9b78ee1'::text)\r\n Buffers: shared hit=120\r\n CTE n\r\n -> Hash Join (cost=0.84..71.70 rows=2 width=224) (actual time=1230.640..1245.947 rows=4764 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, b.date, CASE WHEN ((a.up_load_data)::numeric = 0::numeric) THEN 0::numeric ELSE (b.up_load_data / (a.up_load_data)::nu\r\nmeric) END\r\n Hash Cond: ((b.mapping_code = a.mapping_code) AND (b.channel = a.channel))\r\n Buffers: shared hit=23452, temp written=8101\r\n -> CTE Scan on b (cost=0.00..51.50 rows=2575 width=128) (actual time=2.626..8.904 rows=4752 loops=1)\r\n Output: b.date, b.mapping_code, b.channel, b.up_load_data\r\n Buffers: shared hit=447\r\n -> Hash (cost=0.48..0.48 rows=24 width=160) (actual time=1227.982..1227.982 rows=289 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, a.up_load_data\r\n Buckets: 1024 Batches: 1 Memory Usage: 21kB\r\n Buffers: shared hit=23005, temp written=8101\r\n -> CTE Scan on a (cost=0.00..0.48 rows=24 width=160) (actual time=5.387..1227.668 rows=289 loops=1)\r\n Output: a.plan_code, a.mapping_code, a.channel, a.variable, a.up_load_data\r\n Buffers: shared hit=23005, temp written=8101\r\n -> Hash Join (cost=0.07..25917.88 rows=47 width=352) (actual time=1410.018..61022.859 rows=14620032 loops=1)\r\n Output: m.plan_code, m.mapping_code, m.accumulation, m.channel, m.prod_type, m.if, m.variable, (((date_trunc('month'::text, ((((n.date)::date + ((((m.date)::numeric)::text || 'mont\r\nhs'::text))::interval))::date)::timestamp with time zone) - '1 day'::interval))::date)::text, m.up_load_data, n.fdyz\r\n Hash Cond: ((m.mapping_code = n.mapping_code) AND (m.channel = n.channel))\r\n Buffers: shared hit=30869, temp read=8103 written=8102\r\n -> CTE Scan on m (cost=0.00..18847.52 rows=942376 width=288) (actual time=158.442..558.052 rows=895056 loops=1)\r\n Output: m.date, m.accumulation, m.prod_type, m.if, m.plan_code, m.mapping_code, m.channel, m.variable, m.up_load_data\r\n Buffers: shared hit=7417, temp read=8103 written=1\r\n -> Hash (cost=0.04..0.04 rows=2 width=128) (actual time=1251.514..1251.514 rows=4764 loops=1)\r\n Output: n.date, n.fdyz, n.mapping_code, n.channel\r\n 
Buckets: 1024 Batches: 1 Memory Usage: 325kB\r\n Buffers: shared hit=23452, temp written=8101\r\n -> CTE Scan on n (cost=0.00..0.04 rows=2 width=128) (actual time=1230.643..1249.718 rows=4764 loops=1)\r\n Output: n.date, n.fdyz, n.mapping_code, n.channel\r\n Buffers: shared hit=23452, temp written=8101\r\nPlanning time: 0.666 ms\r\nExecution time: 93783.172 ms\r\n(60 rows)\r\n\r\nTime: 93790.518 ms",
"msg_date": "Sat, 27 May 2017 08:40:46 +0000",
"msg_from": "=?gb2312?B?wbq6o7Cyo6hLaWxsdWEgTGV1bmejqQ==?=\n\t<[email protected]>",
"msg_from_op": true,
"msg_subject": "Different plan between 9.6 and 9.4 when using \"Group by\""
},
{
"msg_contents": "On Sat, May 27, 2017 at 1:40 AM, 梁海安(Killua Leung) <\[email protected]> wrote:\n\n> Hi team:\n>\n> The following SQL is very slow in 9.6.1 for the plan has a “sort”\n> node.\n>\n\n\nThe difference is only a factor of 2. I wouldn't call it \"very\" slow.\n\nYour explain plans are unreadable, please try posting them as\nun-line-wrapped text files, or using something like\nhttps://explain.depesz.com/, to share them in a readable way. (Also,\nVERBOSE probably isn't doing us much\ngood here, and makes it much less readable).\n\nWriting your CTEs as inline subqueries might help the planner make some\nbetter choices here. Also, the estimate for CTE n is so bad, I'm guessing\nthat their is a high functional dependency on:\n\na.mapping_code = b.mapping_code AND a.channel=b.channel\n\nWhile the planner is assuming they are independent. You might be able to\nget better estimates there by doing something like:\n\na.mapping_code+0 = b.mapping_code+0 AND a.channel=b.channel\n\n(or using ||'' rather than +0 if the types are textual rather than\nnumerical). But I doubt it would be enough of a difference to change the\nplan, but it is an easy thing to try.\n\nCheers,\n\nJeff\n\nOn Sat, May 27, 2017 at 1:40 AM, 梁海安(Killua Leung) <[email protected]> wrote:\n\n\nHi team:\n The following SQL is very slow in 9.6.1 for the plan has a\n“sort” node.The difference is only a factor of 2. I wouldn't call it \"very\" slow.Your explain plans are unreadable, please try posting them as un-line-wrapped text files, or using something like https://explain.depesz.com/, to share them in a readable way. (Also, VERBOSE probably isn't doing us muchgood here, and makes it much less readable).Writing your CTEs as inline subqueries might help the planner make some better choices here. Also, the estimate for CTE n is so bad, I'm guessing that their is a high functional dependency on:a.mapping_code = b.mapping_code AND a.channel=b.channelWhile the planner is assuming they are independent. You might be able to get better estimates there by doing something like:a.mapping_code+0 = b.mapping_code+0 AND a.channel=b.channel(or using ||'' rather than +0 if the types are textual rather than numerical). But I doubt it would be enough of a difference to change the plan, but it is an easy thing to try.Cheers,Jeff",
"msg_date": "Mon, 29 May 2017 11:21:01 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Different plan between 9.6 and 9.4 when using \"Group by\""
}
] |
[
{
"msg_contents": "Dear Expert,\n\nIs there any way to rollback table data in PostgreSQL?\n\nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078 |[email protected]<mailto:%[email protected]>\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n\n\n________________________________\n\nDISCLAIMER:\n\nThis email may contain confidential information and is intended only for the use of the specific individual(s) to which it is addressed. If you are not the intended recipient of this email, you are hereby notified that any unauthorized use, dissemination or copying of this email or the information contained in it or attached to it is strictly prohibited. If you received this message in error, please immediately notify the sender at Infotech or [email protected] and delete the original message.\n\n\n\n\n\n\n\n\n\nDear Expert,\n \nIs there any way to rollback table data in PostgreSQL?\n \nRegards,\nDinesh Chandra\n|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile: +91-9953975849 | Ext 1078\n|[email protected]\n\nPlot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n \n\n\n\n\nDISCLAIMER:\n\nThis email may contain confidential information and is intended only for the use of the specific individual(s) to which it is addressed. If you are not the intended recipient of this email, you are hereby notified that any unauthorized use, dissemination or\n copying of this email or the information contained in it or attached to it is strictly prohibited. If you received this message in error, please immediately notify the sender at Infotech or [email protected] and delete the original message.",
"msg_date": "Wed, 7 Jun 2017 11:33:26 +0000",
"msg_from": "Dinesh Chandra 12108 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Rollback table data."
},
{
"msg_contents": "\n\n\n\n\n\n Il 07/06/2017 13:33, Dinesh Chandra 12108 ha scritto:\n\n\n\n\n\n\nDear Expert,\n�\nIs there any\n way to rollback table data in PostgreSQL?\n\n\n Not knowing anything else about your what you want to do and what\n context you're in, I can only say that AFAIK, once you COMMITted a\n transaction, no rollback is possible.\n\n A very quick google search gave me this:\nhttps://stackoverflow.com/questions/12472318/can-i-rollback-a-transaction-ive-already-committed-data-loss\n\n There's a long, detailed post from Craig Ringer that gives you some\n advise on how to proceed (given that a committed transaction cannot\n be ROLLBACKed)\n\n Take a look, I hope it's applicable to your scenario.\n\n HTH\n Moreno.-\n\n\n",
"msg_date": "Wed, 7 Jun 2017 14:42:44 +0200",
"msg_from": "Moreno Andreo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rollback table data."
},
{
"msg_contents": "\n\nAm 07.06.2017 um 13:33 schrieb Dinesh Chandra 12108:\n>\n> Dear Expert,\n>\n> Is there any way to rollback table data in PostgreSQL?\n>\n>\nif you are looking for somewhat similar to flashback in oracle the \nanswer is no.\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Jun 2017 14:48:41 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rollback table data."
},
{
"msg_contents": "HI, If you dont vaccum the table, You can read data modified with \npg_dirtyread extension, but be carefull ;-)\n\nhttps://github.com/omniti-labs/pgtreats/tree/master/contrib/pg_dirtyread\n\nRegards\nOn 07/06/17 07:33, Dinesh Chandra 12108 wrote:\n>\n> Dear Expert,\n>\n> Is there any way to rollback table data in PostgreSQL?\n>\n> *Regards,*\n>\n> *Dinesh Chandra*\n>\n> *|Database administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.*\n>\n> *------------------------------------------------------------------*\n>\n> Mobile: +91-9953975849 | Ext 1078 |[email protected] \n> <mailto:%[email protected]>\n>\n> Plot No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201 305,India.\n>\n>\n> ------------------------------------------------------------------------\n>\n> DISCLAIMER:\n>\n> This email may contain confidential information and is intended only \n> for the use of the specific individual(s) to which it is addressed. If \n> you are not the intended recipient of this email, you are hereby \n> notified that any unauthorized use, dissemination or copying of this \n> email or the information contained in it or attached to it is strictly \n> prohibited. If you received this message in error, please immediately \n> notify the sender at Infotech or [email protected] and delete the \n> original message.\n\n\n\n\n\n\n\n HI, If you dont vaccum the table, You can read data modified with\n pg_dirtyread extension, but be carefull ;-) \nhttps://github.com/omniti-labs/pgtreats/tree/master/contrib/pg_dirtyread\n\n Regards \nOn 07/06/17 07:33, Dinesh Chandra 12108\n wrote:\n\n\n\n\n\n\nDear Expert,\n�\nIs there any\n way to rollback table data in PostgreSQL?\n�\nRegards,\nDinesh\n Chandra\n|Database\n administrator (Oracle/PostgreSQL)| Cyient Ltd. Noida.\n------------------------------------------------------------------\nMobile:\n +91-9953975849 | Ext 1078\n |[email protected]\n\nPlot\n No. 7, NSEZ, Phase-II ,Noida-Dadri Road, Noida - 201\n 305,India.\n�\n\n\n\n\n DISCLAIMER:\n\n This email may contain confidential information and is intended\n only for the use of the specific individual(s) to which it is\n addressed. If you are not the intended recipient of this email,\n you are hereby notified that any unauthorized use, dissemination\n or copying of this email or the information contained in it or\n attached to it is strictly prohibited. If you received this\n message in error, please immediately notify the sender at\n Infotech or [email protected] and delete the original\n message.",
"msg_date": "Wed, 7 Jun 2017 09:23:03 -0400",
"msg_from": "Anthony Sotolongo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rollback table data."
},
{
"msg_contents": "On Wed, Jun 7, 2017 at 5:33 AM, Dinesh Chandra 12108\n<[email protected]> wrote:\n> Dear Expert,\n>\n>\n>\n> Is there any way to rollback table data in PostgreSQL?\n\nYou really need to give us more details. PostgreSQL has the ability,\nthrough continuous archiving, to roll back to a previous point in\ntime. This is for the whole database ccluster though and not just one\ntable. BUT you can do it on a whole other machine, get the table data\nyou want, and put it into the production database etc.\n\nGot more details?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 7 Jun 2017 18:15:44 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rollback table data."
},
{
"msg_contents": "Hi Dinesh,\n\nLe 07/06/2017 � 14:48, Andreas Kretschmer a �crit :\n>\n>\n> Am 07.06.2017 um 13:33 schrieb Dinesh Chandra 12108:\n>>\n>> Dear Expert,\n>>\n>> Is there any way to rollback table data in PostgreSQL?\n>>\n>>\n> if you are looking for somewhat similar to flashback in oracle the \n> answer is no.\n>\nWell, if this is what you are looking for, the E-Maj extension may help \nyou. In few words, it allows 1) to log updates on tables sets (using \ntriggers), 2) to set marks on these tables sets when they are in a \nstable state and 3) to travel back and forth to these marks.\nSome pointers:\n- pgxn to download a stable version : https://pgxn.org/dist/e-maj/ (the \ndownloadable zip file also contains a presentation that may help to \nquickly get a good view of the extension - doc/emaj.2.0.1_doc_en.pdf)\n- on-line documentation : http://emaj.readthedocs.io/\n- github projects : https://github.com/beaud76/emaj and \nhttps://github.com/beaud76/emaj_ppa_plugin\n\nBest regards. Philippe.\n> Regards, Andreas\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Jun 2017 09:04:51 +0200",
"msg_from": "phb07 <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Rollback table data."
}
] |
[
{
"msg_contents": "Hi all,\n\nI am trying to improve the runtime of a big data warehouse application. One\nsignificant bottleneck found was insert performance, so I am investigating\nways of getting Postgresql to insert data faster. I ran several tests on a\nfast machine to find out what performs best, and compared the results with\nthe same actions in Oracle on that same machine.\n\nSo far I am finding that PostgreSQL insert performance is several times\nslower than Oracle performance, and I would be grateful for some help\ntrying to decrease this gap...\n\nTo test I wrote a simple Java program which inserts data into a simple\ntable, using statement batching and delayed commits. The table looks as\nfollows:\n\ncreate table h_test(\nid_h integer\n, source_system_id integer\n, organisation_id integer\n, load_dts timestamp without time zone\n, boekingdetailid text\n);\nNo constraints, no indexes.\n\nThe java program and PostgreSQL run on the same machine. The best results\nI've got are:\n\nPostgreSQL inserts:\n\nCommit size 50.000 and batch size 10.000\nInserted 1000000 rows in 7500 milliseconds, 142857.14285714287 rows per\nsecond\nInserted 1000000 rows in 7410 milliseconds, 142857.14285714287 rows per\nsecond\n\nThe exact same test done on Oracle (on the same machine) reports:\n\nInserted 1000000 rows in 1072 milliseconds, 1000000.0 rows per second\n\nIncreasing the row count in Oracle decreases this number a bit, but it's\nstill fast:\nInserted 24000000 rows in 47155 milliseconds, 510638.2978723404 rows per\nsecond (oracle)\n\ncompared with:\nInserted 24000000 rows in 159929 milliseconds, 150943.3962264151 rows per\nsecond (postgresql)\n\nI also created a small pg/sql stored procedure to insert the same 1 million\nrows, which runs in about 4 seconds, resulting in 250.000 rows a second.\nThis is in the DB itself, but it still is twice as slow as Oracle with JDBC:\nCREATE or replace function test() returns void AS $$\nDECLARE\n count integer;\nBEGIN\n for count in 1..1000000 loop\n insert into\nh_test(id_h,source_system_id,organisation_id,load_dts,boekingdetailid)\n values(count, 1, 12, now(), 'boe' || count || 'king' || count);\n end loop;\nEND;\n$$ LANGUAGE plpgsql;\n\n\nI already changed the following config parameters:\nwork_mem 512MB\nsynchronous_commit off\nshared_buffers 512mb\ncommit_delay 100000\nautovacuum_naptime 10min\n\nPostgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K with 16GB\nmemory and an Intel 750 SSD. JDBC driver is postgresql-42.1.1.\n\n(btw: the actual load I'm trying to improve will load more than 132 million\nrows, and will grow).\n\nAny help is greatly appreciated!\n\nRegards,\n\nFrits\n\nHi all,I am trying to improve the runtime of a big data warehouse application. One significant bottleneck found was insert performance, so I am investigating ways of getting Postgresql to insert data faster. I ran several tests on a fast machine to find out what performs best, and compared the results with the same actions in Oracle on that same machine.So far I am finding that PostgreSQL insert performance is several times slower than Oracle performance, and I would be grateful for some help trying to decrease this gap...To test I wrote a simple Java program which inserts data into a simple table, using statement batching and delayed commits. 
The table looks as follows:create table h_test( id_h integer , source_system_id integer , organisation_id integer , load_dts timestamp without time zone , boekingdetailid text );No constraints, no indexes.The java program and PostgreSQL run on the same machine. The best results I've got are:PostgreSQL inserts:Commit size 50.000 and batch size 10.000Inserted 1000000 rows in 7500 milliseconds, 142857.14285714287 rows per secondInserted 1000000 rows in 7410 milliseconds, 142857.14285714287 rows per secondThe exact same test done on Oracle (on the same machine) reports:Inserted 1000000 rows in 1072 milliseconds, 1000000.0 rows per secondIncreasing the row count in Oracle decreases this number a bit, but it's still fast:Inserted 24000000 rows in 47155 milliseconds, 510638.2978723404 rows per second (oracle)compared with:Inserted 24000000 rows in 159929 milliseconds, 150943.3962264151 rows per second (postgresql)I also created a small pg/sql stored procedure to insert the same 1 million rows, which runs in about 4 seconds, resulting in 250.000 rows a second. This is in the DB itself, but it still is twice as slow as Oracle with JDBC:CREATE or replace function test() returns void AS $$DECLARE count integer;BEGIN for count in 1..1000000 loop insert into h_test(id_h,source_system_id,organisation_id,load_dts,boekingdetailid) values(count, 1, 12, now(), 'boe' || count || 'king' || count); end loop;END;$$ LANGUAGE plpgsql;I already changed the following config parameters:work_mem 512MBsynchronous_commit offshared_buffers 512mbcommit_delay 100000autovacuum_naptime 10minPostgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K with 16GB memory and an Intel 750 SSD. JDBC driver is postgresql-42.1.1.(btw: the actual load I'm trying to improve will load more than 132 million rows, and will grow).Any help is greatly appreciated!Regards,Frits",
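For comparison (not something measured in this thread), the same one million test rows can also be generated in a single set-based statement, which avoids the per-row loop entirely; this mirrors the data produced by the plpgsql function above:

INSERT INTO h_test (id_h, source_system_id, organisation_id, load_dts, boekingdetailid)
SELECT g, 1, 12, now(), 'boe' || g || 'king' || g
FROM generate_series(1, 1000000) AS g;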
"msg_date": "Fri, 09 Jun 2017 13:04:29 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Improving PostgreSQL insert performance"
},
{
"msg_contents": "\n\nAm 09.06.2017 um 15:04 schrieb Frits Jalvingh:\n> Hi all,\n>\n> I am trying to improve the runtime of a big data warehouse \n> application. One significant bottleneck found was insert performance, \n> so I am investigating ways of getting Postgresql to insert data faster.\n\n* use COPY instead of Insert, it is much faster\n* bundle all Insert into one transaction\n* use a separate disk/spindel for the transaction log\n\n\n\n>\n> I already changed the following config parameters:\n> work_mem 512MB\n> synchronous_commit off\n> shared_buffers 512mb\n> commit_delay 100000\n> autovacuum_naptime 10min\n>\n> Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K with \n> 16GB memory and an Intel 750 SSD. JDBC driver is postgresql-42.1.1.\n>\n\nincrease shared_buffers, with 16gb ram i would suggest 8gb\n\n\nRegards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 15:24:15 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:\n> \n> \n> Am 09.06.2017 um 15:04 schrieb Frits Jalvingh:\n> >Hi all,\n> >\n> >I am trying to improve the runtime of a big data warehouse\n> >application. One significant bottleneck found was insert\n> >performance, so I am investigating ways of getting Postgresql to\n> >insert data faster.\n> \n> * use COPY instead of Insert, it is much faster\n> * bundle all Insert into one transaction\n> * use a separate disk/spindel for the transaction log\n> \n> \n> \n> >\n> >I already changed the following config parameters:\n> >work_mem 512MB\n> >synchronous_commit off\n> >shared_buffers 512mb\n> >commit_delay 100000\n> >autovacuum_naptime 10min\n> >\n> >Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K\n> >with 16GB memory and an Intel 750 SSD. JDBC driver is\n> >postgresql-42.1.1.\n> >\n> \n> increase shared_buffers, with 16gb ram i would suggest 8gb\n\n+1 Without even checking, I think Oracle is configured to use a LOT\nmore memory than 512mb.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 08:28:57 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi Kenneth, Andreas,\n\nThanks for your tips!\n\nI increased shared_buffers to 8GB but it has no measurable effect at all. I\nthink that is logical: shared buffers are important for querying but not\nfor inserting; for that the speed to write to disk seems most important- no\nbig reason to cache the data if the commit requires a full write anyway.\nI also changed the code to do only one commit; this also has no effect I\ncan see.\n\nIt is true that Oracle had more memory assigned to it (1.5G), but unlike\nPostgres (which is completely on a fast SSD) Oracle runs on slower disk\n(ZFS)..\n\nI will try copy, but I first need to investigate how to use it- its\ninterface seems odd to say the least ;) I'll report back on that once done.\n\nAny other tips would be welcome!\n\nRegards,\n\nFrits\n\nOn Fri, Jun 9, 2017 at 3:30 PM Kenneth Marshall <[email protected]> wrote:\n\n> On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:\n> >\n> >\n> > Am 09.06.2017 um 15:04 schrieb Frits Jalvingh:\n> > >Hi all,\n> > >\n> > >I am trying to improve the runtime of a big data warehouse\n> > >application. One significant bottleneck found was insert\n> > >performance, so I am investigating ways of getting Postgresql to\n> > >insert data faster.\n> >\n> > * use COPY instead of Insert, it is much faster\n> > * bundle all Insert into one transaction\n> > * use a separate disk/spindel for the transaction log\n> >\n> >\n> >\n> > >\n> > >I already changed the following config parameters:\n> > >work_mem 512MB\n> > >synchronous_commit off\n> > >shared_buffers 512mb\n> > >commit_delay 100000\n> > >autovacuum_naptime 10min\n> > >\n> > >Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K\n> > >with 16GB memory and an Intel 750 SSD. JDBC driver is\n> > >postgresql-42.1.1.\n> > >\n> >\n> > increase shared_buffers, with 16gb ram i would suggest 8gb\n>\n> +1 Without even checking, I think Oracle is configured to use a LOT\n> more memory than 512mb.\n>\n> Regards,\n> Ken\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Kenneth, Andreas,Thanks for your tips!I increased shared_buffers to 8GB but it has no measurable effect at all. I think that is logical: shared buffers are important for querying but not for inserting; for that the speed to write to disk seems most important- no big reason to cache the data if the commit requires a full write anyway.I also changed the code to do only one commit; this also has no effect I can see.It is true that Oracle had more memory assigned to it (1.5G), but unlike Postgres (which is completely on a fast SSD) Oracle runs on slower disk (ZFS)..I will try copy, but I first need to investigate how to use it- its interface seems odd to say the least ;) I'll report back on that once done.Any other tips would be welcome!Regards,FritsOn Fri, Jun 9, 2017 at 3:30 PM Kenneth Marshall <[email protected]> wrote:On Fri, Jun 09, 2017 at 03:24:15PM +0200, Andreas Kretschmer wrote:\n>\n>\n> Am 09.06.2017 um 15:04 schrieb Frits Jalvingh:\n> >Hi all,\n> >\n> >I am trying to improve the runtime of a big data warehouse\n> >application. 
One significant bottleneck found was insert\n> >performance, so I am investigating ways of getting Postgresql to\n> >insert data faster.\n>\n> * use COPY instead of Insert, it is much faster\n> * bundle all Insert into one transaction\n> * use a separate disk/spindel for the transaction log\n>\n>\n>\n> >\n> >I already changed the following config parameters:\n> >work_mem 512MB\n> >synchronous_commit off\n> >shared_buffers 512mb\n> >commit_delay 100000\n> >autovacuum_naptime 10min\n> >\n> >Postgres version is 9.6.3 on Ubuntu 17.04 64 bit, on a i7-4790K\n> >with 16GB memory and an Intel 750 SSD. JDBC driver is\n> >postgresql-42.1.1.\n> >\n>\n> increase shared_buffers, with 16gb ram i would suggest 8gb\n\n+1 Without even checking, I think Oracle is configured to use a LOT\nmore memory than 512mb.\n\nRegards,\nKen\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 09 Jun 2017 13:56:58 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh <[email protected]> wrote:\n> Hi Kenneth, Andreas,\n>\n> Thanks for your tips!\n>\n> I increased shared_buffers to 8GB but it has no measurable effect at all. I\n> think that is logical: shared buffers are important for querying but not for\n> inserting; for that the speed to write to disk seems most important- no big\n> reason to cache the data if the commit requires a full write anyway.\n> I also changed the code to do only one commit; this also has no effect I can\n> see.\n>\n> It is true that Oracle had more memory assigned to it (1.5G), but unlike\n> Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n> (ZFS)..\n>\n> I will try copy, but I first need to investigate how to use it- its\n> interface seems odd to say the least ;) I'll report back on that once done.\n\nI you want an example of copy, just pg_dump a table:\n\npg_dump -d smarlowe -t test\n\n(SNIP)\nCOPY test (a, b) FROM stdin;\n1 abc\n2 xyz\n\\.\n(SNIP)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 08:33:14 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "On Fri, Jun 09, 2017 at 01:56:58PM +0000, Frits Jalvingh wrote:\n> Hi Kenneth, Andreas,\n> \n> Thanks for your tips!\n> \n> I increased shared_buffers to 8GB but it has no measurable effect at all. I\n> think that is logical: shared buffers are important for querying but not\n> for inserting; for that the speed to write to disk seems most important- no\n> big reason to cache the data if the commit requires a full write anyway.\n> I also changed the code to do only one commit; this also has no effect I\n> can see.\n> \n> It is true that Oracle had more memory assigned to it (1.5G), but unlike\n> Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n> (ZFS)..\n> \n> I will try copy, but I first need to investigate how to use it- its\n> interface seems odd to say the least ;) I'll report back on that once done.\n> \n> Any other tips would be welcome!\n> \n> Regards,\n> \n> Frits\n\nHi Frits,\n\nHere is an article that is still valid:\n\nhttps://www.depesz.com/2007/07/05/how-to-insert-data-to-database-as-fast-as-possible/\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 09:36:37 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi all,\n\nThanks a lot for the many responses!\n\nAbout preparing statements: this is done properly in Java, and pgsql does\nit by itself. So that cannot be done better ;)\n\nI tried the copy command, and that indeed works quite brilliantly:\nInserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per\nsecond\n\nThat's faster than Oracle. But with a very bad interface I have to say for\nnormal database work.. I will try to make this work in the tooling, but it\nneeds some very special code to format all possible values properly, and to\nmanage the end of the copy, so it is not usable in general which is a pity,\nI think.\n\nSo, I am still very interested in getting normal inserts faster, because\nthat will gain speed for all work.. If Oracle can do it, and Postgres is\nable to insert fast with copy- where lies the bottleneck with the insert\ncommand? There seems to be quite a performance hit with the JDBC driver\nitself (as the stored procedure is a lot faster), so I can look into that.\nBut even after that there is quite a gap..\n\nRegards,\n\nFrits\n\nOn Fri, Jun 9, 2017 at 4:33 PM Scott Marlowe <[email protected]>\nwrote:\n\n> On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh <[email protected]> wrote:\n> > Hi Kenneth, Andreas,\n> >\n> > Thanks for your tips!\n> >\n> > I increased shared_buffers to 8GB but it has no measurable effect at\n> all. I\n> > think that is logical: shared buffers are important for querying but not\n> for\n> > inserting; for that the speed to write to disk seems most important- no\n> big\n> > reason to cache the data if the commit requires a full write anyway.\n> > I also changed the code to do only one commit; this also has no effect I\n> can\n> > see.\n> >\n> > It is true that Oracle had more memory assigned to it (1.5G), but unlike\n> > Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n> > (ZFS)..\n> >\n> > I will try copy, but I first need to investigate how to use it- its\n> > interface seems odd to say the least ;) I'll report back on that once\n> done.\n>\n> I you want an example of copy, just pg_dump a table:\n>\n> pg_dump -d smarlowe -t test\n>\n> (SNIP)\n> COPY test (a, b) FROM stdin;\n> 1 abc\n> 2 xyz\n> \\.\n> (SNIP)\n>\n\nHi all,Thanks a lot for the many responses!About preparing statements: this is done properly in Java, and pgsql does it by itself. So that cannot be done better ;)I tried the copy command, and that indeed works quite brilliantly:Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per secondThat's faster than Oracle. But with a very bad interface I have to say for normal database work.. I will try to make this work in the tooling, but it needs some very special code to format all possible values properly, and to manage the end of the copy, so it is not usable in general which is a pity, I think.So, I am still very interested in getting normal inserts faster, because that will gain speed for all work.. If Oracle can do it, and Postgres is able to insert fast with copy- where lies the bottleneck with the insert command? There seems to be quite a performance hit with the JDBC driver itself (as the stored procedure is a lot faster), so I can look into that. 
But even after that there is quite a gap..Regards,FritsOn Fri, Jun 9, 2017 at 4:33 PM Scott Marlowe <[email protected]> wrote:On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh <[email protected]> wrote:\n> Hi Kenneth, Andreas,\n>\n> Thanks for your tips!\n>\n> I increased shared_buffers to 8GB but it has no measurable effect at all. I\n> think that is logical: shared buffers are important for querying but not for\n> inserting; for that the speed to write to disk seems most important- no big\n> reason to cache the data if the commit requires a full write anyway.\n> I also changed the code to do only one commit; this also has no effect I can\n> see.\n>\n> It is true that Oracle had more memory assigned to it (1.5G), but unlike\n> Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n> (ZFS)..\n>\n> I will try copy, but I first need to investigate how to use it- its\n> interface seems odd to say the least ;) I'll report back on that once done.\n\nI you want an example of copy, just pg_dump a table:\n\npg_dump -d smarlowe -t test\n\n(SNIP)\nCOPY test (a, b) FROM stdin;\n1 abc\n2 xyz\n\\.\n(SNIP)",
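On the value formatting and the end-of-data marker: in CSV mode the server handles quoting and NULLs, so the client mostly has to produce plain CSV. A small sketch with made-up rows (the trailing \. line ends the stream when feeding COPY FROM STDIN through psql):

COPY h_test (id_h, source_system_id, organisation_id, load_dts, boekingdetailid)
FROM STDIN WITH (FORMAT csv, NULL '');
1,1,12,2017-06-09 16:00:00,"boe1king1"
2,1,12,2017-06-09 16:00:00,
\.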
"msg_date": "Fri, 09 Jun 2017 14:39:37 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Frits,\n\nWhen you use the copy command, are you doing anything special to get the\nrun time that you are indicating?\n\nOn Fri, Jun 9, 2017 at 10:39 AM, Frits Jalvingh <[email protected]> wrote:\n\n> Hi all,\n>\n> Thanks a lot for the many responses!\n>\n> About preparing statements: this is done properly in Java, and pgsql does\n> it by itself. So that cannot be done better ;)\n>\n> I tried the copy command, and that indeed works quite brilliantly:\n> Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076\n> <(779)%20858-2076> rows per second\n>\n> That's faster than Oracle. But with a very bad interface I have to say for\n> normal database work.. I will try to make this work in the tooling, but it\n> needs some very special code to format all possible values properly, and to\n> manage the end of the copy, so it is not usable in general which is a pity,\n> I think.\n>\n> So, I am still very interested in getting normal inserts faster, because\n> that will gain speed for all work.. If Oracle can do it, and Postgres is\n> able to insert fast with copy- where lies the bottleneck with the insert\n> command? There seems to be quite a performance hit with the JDBC driver\n> itself (as the stored procedure is a lot faster), so I can look into that.\n> But even after that there is quite a gap..\n>\n> Regards,\n>\n> Frits\n>\n> On Fri, Jun 9, 2017 at 4:33 PM Scott Marlowe <[email protected]>\n> wrote:\n>\n>> On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh <[email protected]> wrote:\n>> > Hi Kenneth, Andreas,\n>> >\n>> > Thanks for your tips!\n>> >\n>> > I increased shared_buffers to 8GB but it has no measurable effect at\n>> all. I\n>> > think that is logical: shared buffers are important for querying but\n>> not for\n>> > inserting; for that the speed to write to disk seems most important- no\n>> big\n>> > reason to cache the data if the commit requires a full write anyway.\n>> > I also changed the code to do only one commit; this also has no effect\n>> I can\n>> > see.\n>> >\n>> > It is true that Oracle had more memory assigned to it (1.5G), but unlike\n>> > Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n>> > (ZFS)..\n>> >\n>> > I will try copy, but I first need to investigate how to use it- its\n>> > interface seems odd to say the least ;) I'll report back on that once\n>> done.\n>>\n>> I you want an example of copy, just pg_dump a table:\n>>\n>> pg_dump -d smarlowe -t test\n>>\n>> (SNIP)\n>> COPY test (a, b) FROM stdin;\n>> 1 abc\n>> 2 xyz\n>> \\.\n>> (SNIP)\n>>\n>\n\nFrits, When you use the copy command, are you doing anything special to get the run time that you are indicating?On Fri, Jun 9, 2017 at 10:39 AM, Frits Jalvingh <[email protected]> wrote:Hi all,Thanks a lot for the many responses!About preparing statements: this is done properly in Java, and pgsql does it by itself. So that cannot be done better ;)I tried the copy command, and that indeed works quite brilliantly:Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per secondThat's faster than Oracle. But with a very bad interface I have to say for normal database work.. I will try to make this work in the tooling, but it needs some very special code to format all possible values properly, and to manage the end of the copy, so it is not usable in general which is a pity, I think.So, I am still very interested in getting normal inserts faster, because that will gain speed for all work.. 
If Oracle can do it, and Postgres is able to insert fast with copy- where lies the bottleneck with the insert command? There seems to be quite a performance hit with the JDBC driver itself (as the stored procedure is a lot faster), so I can look into that. But even after that there is quite a gap..Regards,FritsOn Fri, Jun 9, 2017 at 4:33 PM Scott Marlowe <[email protected]> wrote:On Fri, Jun 9, 2017 at 7:56 AM, Frits Jalvingh <[email protected]> wrote:\n> Hi Kenneth, Andreas,\n>\n> Thanks for your tips!\n>\n> I increased shared_buffers to 8GB but it has no measurable effect at all. I\n> think that is logical: shared buffers are important for querying but not for\n> inserting; for that the speed to write to disk seems most important- no big\n> reason to cache the data if the commit requires a full write anyway.\n> I also changed the code to do only one commit; this also has no effect I can\n> see.\n>\n> It is true that Oracle had more memory assigned to it (1.5G), but unlike\n> Postgres (which is completely on a fast SSD) Oracle runs on slower disk\n> (ZFS)..\n>\n> I will try copy, but I first need to investigate how to use it- its\n> interface seems odd to say the least ;) I'll report back on that once done.\n\nI you want an example of copy, just pg_dump a table:\n\npg_dump -d smarlowe -t test\n\n(SNIP)\nCOPY test (a, b) FROM stdin;\n1 abc\n2 xyz\n\\.\n(SNIP)",
"msg_date": "Fri, 9 Jun 2017 10:46:49 -0400",
"msg_from": "\"Sunkara, Amrutha\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
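The COPY path Frits describes above can be driven from Java through pgjdbc's CopyManager, which avoids shelling out to psql. The sketch below is an illustration only: the table, columns, credentials and the caller-side escaping are assumptions, not something taken from the thread.

```java
import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

// Minimal sketch of driving COPY from JDBC via pgjdbc's CopyManager.
// Table, columns, URL and credentials are illustrative assumptions.
public class CopyExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret")) {
            CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();

            // COPY text format: tab-separated columns, one row per line.
            // Real data would need escaping of tabs, newlines and backslashes,
            // which is the "special code" the thread talks about.
            StringBuilder buf = new StringBuilder();
            for (int i = 0; i < 1000; i++) {
                buf.append(i).append('\t').append("name_").append(i).append('\n');
            }
            long rows = copy.copyIn("COPY test_table (id, name) FROM STDIN",
                    new StringReader(buf.toString()));
            System.out.println("Copied " + rows + " rows");
        }
    }
}
```

The awkward part Frits mentions is exactly that hand-formatting: any tab, newline or backslash in the data has to be escaped before it is written to the stream.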
{
"msg_contents": "On Fri, Jun 09, 2017 at 02:39:37PM +0000, Frits Jalvingh wrote:\n> Hi all,\n> \n> Thanks a lot for the many responses!\n> \n> About preparing statements: this is done properly in Java, and pgsql does\n> it by itself. So that cannot be done better ;)\n> \n> I tried the copy command, and that indeed works quite brilliantly:\n> Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per\n> second\n> \n> That's faster than Oracle. But with a very bad interface I have to say for\n> normal database work.. I will try to make this work in the tooling, but it\n> needs some very special code to format all possible values properly, and to\n> manage the end of the copy, so it is not usable in general which is a pity,\n> I think.\n> \n> So, I am still very interested in getting normal inserts faster, because\n> that will gain speed for all work.. If Oracle can do it, and Postgres is\n> able to insert fast with copy- where lies the bottleneck with the insert\n> command? There seems to be quite a performance hit with the JDBC driver\n> itself (as the stored procedure is a lot faster), so I can look into that.\n> But even after that there is quite a gap..\n> \n> Regards,\n> \n> Frits\n\nHi Frits,\n\nHave you looked at UNLOGGED tables and also having more that 1 insert\nstream running at a time. Sometimes multiple parallel inserts can be\nfaster.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 09:53:47 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
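A rough sketch of the two ideas Kenneth raises, an UNLOGGED target table plus several insert streams running in parallel, might look like the following. The schema, URL, thread count and batch size are invented for illustration; whether parallelism helps depends on where the bottleneck actually is.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Sketch only: UNLOGGED target table, several parallel insert streams.
public class ParallelUnloggedInsert {
    static final String URL = "jdbc:postgresql://localhost:5432/test";

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(URL, "postgres", "secret");
             Statement st = conn.createStatement()) {
            st.execute("CREATE UNLOGGED TABLE IF NOT EXISTS load_target(id bigint, payload text)");
        }

        int streams = 4;                       // number of parallel insert streams (assumed)
        int rowsPerStream = 1_000_000;
        Thread[] workers = new Thread[streams];
        for (int s = 0; s < streams; s++) {
            final int offset = s * rowsPerStream;
            workers[s] = new Thread(() -> insertStream(offset, rowsPerStream));
            workers[s].start();
        }
        for (Thread t : workers) {
            t.join();
        }
    }

    static void insertStream(int offset, int rows) {
        try (Connection conn = DriverManager.getConnection(URL, "postgres", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO load_target(id, payload) VALUES (?, ?)")) {
                for (int i = 0; i < rows; i++) {
                    ps.setLong(1, offset + i);
                    ps.setString(2, "row " + (offset + i));
                    ps.addBatch();
                    if (i % 10_000 == 0) {
                        ps.executeBatch();     // flush periodically to bound memory use
                    }
                }
                ps.executeBatch();
            }
            conn.commit();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```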
{
"msg_contents": "I am not doing anything special I guess. I am adding the results of the\ntests and the programs I'm using to the following page:\n\nhttps://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n\nThe copy example, in Java, is at the end. All of the examples use trivial\ndata and the same data. If you find fault please let me know ;) But the\ncopy does insert the records as they can be seen ;)\n\nOn Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]> wrote:\n\n> Frits,\n>\n> When you use the copy command, are you doing anything special to get the\n> run time that you are indicating?\n>\n\nI am not doing anything special I guess. I am adding the results of the tests and the programs I'm using to the following page:https://etc.to/confluence/display/~admjal/PostgreSQL+performance+testsThe copy example, in Java, is at the end. All of the examples use trivial data and the same data. If you find fault please let me know ;) But the copy does insert the records as they can be seen ;)On Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]> wrote:Frits, When you use the copy command, are you doing anything special to get the run time that you are indicating?",
"msg_date": "Fri, 09 Jun 2017 14:55:04 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi Kenneth,\n\nI tried unlogged before, but as long as the commit interval is long it had\nno discerning effect that I could see.\n\nHi Kenneth,I tried unlogged before, but as long as the commit interval is long it had no discerning effect that I could see.",
"msg_date": "Fri, 09 Jun 2017 14:57:58 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi Babu,\n\nNo, I did not, and the effect is quite great:\n\nInserted 1000000 rows in 2535 milliseconds, 394477.3175542406 rows per\nsecond\nInserted 1000000 rows in 2553 milliseconds, 391696.0438699569 rows per\nsecond\n\ncompared to (without your parameter):\nInserted 1000000 rows in 7643 milliseconds, 130838.67591259976 rows per\nsecond\n\nThat is quite an increase!! Thanks a lot for the tip!!\n\nFor those keeping score: we're now at 77% of Oracle's performance- without\ncopy ;)\n\n\n\n\n>\n\nHi Babu,No, I did not, and the effect is quite great:Inserted 1000000 rows in 2535 milliseconds, 394477.3175542406 rows per secondInserted 1000000 rows in 2553 milliseconds, 391696.0438699569 rows per secondcompared to (without your parameter):Inserted 1000000 rows in 7643 milliseconds, 130838.67591259976 rows per secondThat is quite an increase!! Thanks a lot for the tip!!For those keeping score: we're now at 77% of Oracle's performance- without copy ;)",
"msg_date": "Fri, 09 Jun 2017 15:05:13 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "You need to be careful with the setFetchSize we have tables with over 10 million rows and many columns and the PostgreSQL JDBC driver silently fails, ignores the fetch size and tries to read the entire table content into memory. I spent many agonizing days on this.\r\n\r\nps.setFetchSize(65536);\r\n\r\nRegards\r\nJohn\r\n\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Frits Jalvingh\r\nSent: Friday, June 09, 2017 7:55 AM\r\nTo: Sunkara, Amrutha; [email protected]\r\nSubject: Re: [PERFORM] Improving PostgreSQL insert performance\r\n\r\nI am not doing anything special I guess. I am adding the results of the tests and the programs I'm using to the following page:\r\n\r\nhttps://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\r\n\r\nThe copy example, in Java, is at the end. All of the examples use trivial data and the same data. If you find fault please let me know ;) But the copy does insert the records as they can be seen ;)\r\nOn Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]<mailto:[email protected]>> wrote:\r\nFrits,\r\n\r\nWhen you use the copy command, are you doing anything special to get the run time that you are indicating?\r\n\n\n\n\n\n\n\n\n\nYou need to be careful with the setFetchSize we have tables with over 10 million rows and many columns and the PostgreSQL JDBC driver silently fails, ignores the fetch size\r\n and tries to read the entire table content into memory. I spent many agonizing days on this.\n \nps.setFetchSize(65536);\n \nRegards\nJohn\r\n\n \n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Frits Jalvingh\nSent: Friday, June 09, 2017 7:55 AM\nTo: Sunkara, Amrutha; [email protected]\nSubject: Re: [PERFORM] Improving PostgreSQL insert performance\n \n\nI am not doing anything special I guess. I am adding the results of the tests and the programs I'm using to the following page:\n\n \n\n\nhttps://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n\n\n \n\n\nThe copy example, in Java, is at the end. All of the examples use trivial data and the same data. If you find fault please let me know ;) But the copy does insert the records as they can be seen ;)\n\n\nOn Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]> wrote:\n\n\n\nFrits, \n\n \n\n\nWhen you use the copy command, are you doing anything special to get the run time that you are indicating?",
"msg_date": "Fri, 9 Jun 2017 15:08:45 +0000",
"msg_from": "John Gorman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi John,\n\nYes, I was aware and amazed by that ;) It is actually the fetch size in\ncombination with autocommit being on; that dies the sweet OOM death as soon\nas the table gets big.\n\nBut Postgres read performance, with autocommit off and fetch size arond\n64K, is quite OK. But it's good to get this mentioned a lot, because as you\nsaid you can spend quite some time wondering about this!\n\nOn Fri, Jun 9, 2017 at 5:08 PM John Gorman <[email protected]> wrote:\n\n> You need to be careful with the setFetchSize we have tables with over 10\n> million rows and many columns and the PostgreSQL JDBC driver silently\n> fails, ignores the fetch size and tries to read the entire table content\n> into memory. I spent many agonizing days on this.\n>\n>\n>\n> ps.setFetchSize(65536);\n>\n>\n>\n> Regards\n>\n> John\n>\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Frits Jalvingh\n> *Sent:* Friday, June 09, 2017 7:55 AM\n> *To:* Sunkara, Amrutha; [email protected]\n>\n>\n> *Subject:* Re: [PERFORM] Improving PostgreSQL insert performance\n>\n>\n>\n> I am not doing anything special I guess. I am adding the results of the\n> tests and the programs I'm using to the following page:\n>\n>\n>\n> https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n>\n>\n>\n> The copy example, in Java, is at the end. All of the examples use trivial\n> data and the same data. If you find fault please let me know ;) But the\n> copy does insert the records as they can be seen ;)\n>\n> On Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]>\n> wrote:\n>\n> Frits,\n>\n>\n>\n> When you use the copy command, are you doing anything special to get the\n> run time that you are indicating?\n>\n>\n\nHi John,Yes, I was aware and amazed by that ;) It is actually the fetch size in combination with autocommit being on; that dies the sweet OOM death as soon as the table gets big.But Postgres read performance, with autocommit off and fetch size arond 64K, is quite OK. But it's good to get this mentioned a lot, because as you said you can spend quite some time wondering about this!On Fri, Jun 9, 2017 at 5:08 PM John Gorman <[email protected]> wrote:\n\n\nYou need to be careful with the setFetchSize we have tables with over 10 million rows and many columns and the PostgreSQL JDBC driver silently fails, ignores the fetch size\n and tries to read the entire table content into memory. I spent many agonizing days on this.\n \nps.setFetchSize(65536);\n \nRegards\nJohn\n\n \n \nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Frits Jalvingh\nSent: Friday, June 09, 2017 7:55 AM\nTo: Sunkara, Amrutha; [email protected]\nSubject: Re: [PERFORM] Improving PostgreSQL insert performance\n \n\nI am not doing anything special I guess. I am adding the results of the tests and the programs I'm using to the following page:\n\n \n\n\nhttps://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n\n\n \n\n\nThe copy example, in Java, is at the end. All of the examples use trivial data and the same data. If you find fault please let me know ;) But the copy does insert the records as they can be seen ;)\n\n\nOn Fri, Jun 9, 2017 at 4:47 PM Sunkara, Amrutha <[email protected]> wrote:\n\n\n\nFrits, \n\n \n\n\nWhen you use the copy command, are you doing anything special to get the run time that you are indicating?",
"msg_date": "Fri, 09 Jun 2017 15:12:33 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
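For completeness, the read-side pattern John and Frits are describing looks roughly like this: pgjdbc only streams rows in fetch-size chunks when autocommit is off and the (default forward-only) result set is consumed inside a transaction; otherwise it materializes the whole result set in memory. Table name and URL are assumptions.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch of cursor-based reading with pgjdbc. setFetchSize is honoured only
// when autocommit is off and the result set is forward-only (the default).
public class StreamingRead {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret")) {
            conn.setAutoCommit(false);                 // required for chunked fetching
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT id, payload FROM big_table")) {
                ps.setFetchSize(65536);                // rows fetched per round trip
                try (ResultSet rs = ps.executeQuery()) {
                    long count = 0;
                    while (rs.next()) {
                        count++;                       // process the row here
                    }
                    System.out.println("Read " + count + " rows");
                }
            }
            conn.commit();
        }
    }
}
```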
{
"msg_contents": "On Fri, Jun 9, 2017 at 9:12 AM, Frits Jalvingh <[email protected]> wrote:\n> Hi John,\n>\n> Yes, I was aware and amazed by that ;) It is actually the fetch size in\n> combination with autocommit being on; that dies the sweet OOM death as soon\n> as the table gets big.\n>\n> But Postgres read performance, with autocommit off and fetch size arond 64K,\n> is quite OK. But it's good to get this mentioned a lot, because as you said\n> you can spend quite some time wondering about this!\n\nNo production db server should have the oom killer enabled.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 09:16:51 -0600",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi Babu,\n\nThat was all already done, as it is common practice for JDBC. Your\nparameter was added to the code that already did all that - and worked\nbrilliantly there ;)\n\n\n>\n\nHi Babu,That was all already done, as it is common practice for JDBC. Your parameter was added to the code that already did all that - and worked brilliantly there ;)",
"msg_date": "Fri, 09 Jun 2017 15:22:35 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "On Fri, Jun 09, 2017 at 03:22:35PM +0000, Frits Jalvingh wrote:\n> Hi Babu,\n> \n> That was all already done, as it is common practice for JDBC. Your\n> parameter was added to the code that already did all that - and worked\n> brilliantly there ;)\n> \nHi Frits,\n\nWhat was the parameter? I did not see an Email in the thread from Babu.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jun 2017 10:33:33 -0500",
"msg_from": "Kenneth Marshall <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "The parameter was\nreWriteBatchedInserts = true\n\nEither added in connection properties of in the connection URL like\n\njdbc:postgresql://localhost:5432/datavault_12_tst?reWriteBatchedInserts=true\n\nBTW: It seems you need a recent driver for this; I'm\nusing postgresql-42.1.1.jar\n\nOn Fri, Jun 9, 2017 at 5:33 PM Kenneth Marshall <[email protected]> wrote:\n\n> On Fri, Jun 09, 2017 at 03:22:35PM +0000, Frits Jalvingh wrote:\n> > Hi Babu,\n> >\n> > That was all already done, as it is common practice for JDBC. Your\n> > parameter was added to the code that already did all that - and worked\n> > brilliantly there ;)\n> >\n> Hi Frits,\n>\n> What was the parameter? I did not see an Email in the thread from Babu.\n>\n> Regards,\n> Ken\n>\n\nThe parameter wasreWriteBatchedInserts = trueEither added in connection properties of in the connection URL likejdbc:postgresql://localhost:5432/datavault_12_tst?reWriteBatchedInserts=trueBTW: It seems you need a recent driver for this; I'm using postgresql-42.1.1.jarOn Fri, Jun 9, 2017 at 5:33 PM Kenneth Marshall <[email protected]> wrote:On Fri, Jun 09, 2017 at 03:22:35PM +0000, Frits Jalvingh wrote:\n> Hi Babu,\n>\n> That was all already done, as it is common practice for JDBC. Your\n> parameter was added to the code that already did all that - and worked\n> brilliantly there ;)\n>\nHi Frits,\n\nWhat was the parameter? I did not see an Email in the thread from Babu.\n\nRegards,\nKen",
"msg_date": "Fri, 09 Jun 2017 15:37:10 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
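Putting the pieces of the thread together, a minimal batched-insert sketch with the driver-side rewrite enabled could look like this. The table layout, batch size and credentials are assumptions; only the reWriteBatchedInserts property and the batching pattern come from the discussion.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Properties;

// Sketch of batched inserts with reWriteBatchedInserts=true, which lets
// pgjdbc fold many addBatch() rows into multi-row INSERT statements.
public class RewriteBatchedInserts {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "postgres");
        props.setProperty("password", "secret");
        props.setProperty("reWriteBatchedInserts", "true");   // or append ?reWriteBatchedInserts=true to the URL

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/datavault_12_tst", props)) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO insert_test(id, value) VALUES (?, ?)")) {
                int batchSize = 10_000;                        // assumed batch size
                for (int i = 0; i < 1_000_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "value " + i);
                    ps.addBatch();
                    if (i % batchSize == 0) {
                        ps.executeBatch();
                    }
                }
                ps.executeBatch();
            }
            conn.commit();
        }
    }
}
```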
{
"msg_contents": "On Fri, Jun 9, 2017 at 6:04 AM, Frits Jalvingh <[email protected]> wrote:\n\n>\n> I already changed the following config parameters:\n> work_mem 512MB\n> synchronous_commit off\n>\n\nSince you are already batching up commits into large chunks, this setting\nis not very useful, but does risk you losing supposedly-committed data upon\na crash. I would not do it.\n\n\n> shared_buffers 512mb\n>\n\nYou might try increasing wal_buffers, but the default for this size of\nshared_buffers is 16MB, which is usually big enough.\n\nOne thing you are missing is max_wal_size. The default value of that is\nprobably too small for what you are doing.\n\nBut if you are not using COPY, then maybe none of this matters as the\nbottleneck will be elsewhere.\n\nCheers,\n\nJeff\n\nOn Fri, Jun 9, 2017 at 6:04 AM, Frits Jalvingh <[email protected]> wrote:I already changed the following config parameters:work_mem 512MBsynchronous_commit offSince you are already batching up commits into large chunks, this setting is not very useful, but does risk you losing supposedly-committed data upon a crash. I would not do it. shared_buffers 512mbYou might try increasing wal_buffers, but the default for this size of shared_buffers is 16MB, which is usually big enough. One thing you are missing is max_wal_size. The default value of that is probably too small for what you are doing.But if you are not using COPY, then maybe none of this matters as the bottleneck will be elsewhere.Cheers,Jeff",
"msg_date": "Fri, 9 Jun 2017 10:07:07 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
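If one did want to experiment with the settings Jeff mentions from a JDBC session, a hypothetical sketch could look like the following; it needs a superuser role, the values are placeholders rather than recommendations, and wal_buffers only takes effect after a server restart.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch only; the values are illustrative, not tuning advice.
public class TuneWalSettings {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/postgres", "postgres", "secret");
             Statement st = conn.createStatement()) {
            st.execute("ALTER SYSTEM SET max_wal_size = '8GB'");   // sighup setting, picked up on reload
            st.execute("ALTER SYSTEM SET wal_buffers = '32MB'");   // requires a server restart to apply
            st.execute("SELECT pg_reload_conf()");                 // reload sighup-able settings
        }
    }
}
```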
{
"msg_contents": "Frits,\n\nWould you mind sharing the source code of your benchmark?\n\n>BTW: It seems you need a recent driver for this; I'm\nusing postgresql-42.1.1.jar\n\nTechnically speaking, reWriteBatchedInserts was introduced in 9.4.1209\n(2016-07-15)\n\nVladimir\n\nFrits,Would you mind sharing the source code of your benchmark?>BTW: It seems you need a recent driver for this; I'm using postgresql-42.1.1.jarTechnically speaking, reWriteBatchedInserts was introduced in 9.4.1209 (2016-07-15)Vladimir",
"msg_date": "Fri, 09 Jun 2017 22:08:34 +0000",
"msg_from": "Vladimir Sitnikov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "On Sat, Jun 10, 2017 at 12:08 AM Vladimir Sitnikov <\[email protected]> wrote:\n\n> Would you mind sharing the source code of your benchmark?\n>\n\nThe source code for the several tests, plus the numbers collected so far,\ncan be found at:\n\nhttps://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n\nRegards,\n\nFrits\n\nOn Sat, Jun 10, 2017 at 12:08 AM Vladimir Sitnikov <[email protected]> wrote:Would you mind sharing the source code of your benchmark?The source code for the several tests, plus the numbers collected so far, can be found at:https://etc.to/confluence/display/~admjal/PostgreSQL+performance+testsRegards,Frits",
"msg_date": "Sat, 10 Jun 2017 11:12:43 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "> I tried the copy command, and that indeed works quite brilliantly:\n> Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per\n> second\n> \n> That's faster than Oracle. But with a very bad interface I have to say for\n> normal database work.. I will try to make this work in the tooling, but it\n> needs some very special code to format all possible values properly, and to\n> manage the end of the copy, so it is not usable in general which is a pity, I\n> think.\n\nHave you thought about the COPY with binary format ? Thats looks more\nrobust than the text format you used in your benchmarks.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Jun 2017 22:12:35 +0200",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "I think binary is worse.. according to the postgres documentation:\n\nThe binary format option causes all data to be stored/read as binary format\nrather than as text. It is somewhat faster than the text and CSV formats,\nbut a binary-format file is less portable across machine architectures and\nPostgreSQL versions. Also, the binary format is very data type specific;\nfor example it will not work to output binary data from a smallint column\nand read it into an integer column, even though that would work fine in\ntext format.\n\nBy itself it is similar in badness as both require completely different\nsemantics than insert..\nOn Sat, 10 Jun 2017 at 22:12, Nicolas Paris <[email protected]> wrote:\n\n> > I tried the copy command, and that indeed works quite brilliantly:\n> > Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per\n> > second\n> >\n> > That's faster than Oracle. But with a very bad interface I have to say\n> for\n> > normal database work.. I will try to make this work in the tooling, but\n> it\n> > needs some very special code to format all possible values properly, and\n> to\n> > manage the end of the copy, so it is not usable in general which is a\n> pity, I\n> > think.\n>\n> Have you thought about the COPY with binary format ? Thats looks more\n> robust than the text format you used in your benchmarks.\n>\n\nI think binary is worse.. according to the postgres documentation:The binary format option causes all data to be stored/read as binary format rather than as text. It is somewhat faster than the text and CSV formats, but a binary-format file is less portable across machine architectures and PostgreSQL versions. Also, the binary format is very data type specific; for example it will not work to output binary data from a smallint column and read it into an integer column, even though that would work fine in text format.By itself it is similar in badness as both require completely different semantics than insert..On Sat, 10 Jun 2017 at 22:12, Nicolas Paris <[email protected]> wrote:> I tried the copy command, and that indeed works quite brilliantly:\n> Inserted 24000000 rows in 22004 milliseconds, 1090710.7798582076 rows per\n> second\n>\n> That's faster than Oracle. But with a very bad interface I have to say for\n> normal database work.. I will try to make this work in the tooling, but it\n> needs some very special code to format all possible values properly, and to\n> manage the end of the copy, so it is not usable in general which is a pity, I\n> think.\n\nHave you thought about the COPY with binary format ? Thats looks more\nrobust than the text format you used in your benchmarks.",
"msg_date": "Sat, 10 Jun 2017 20:37:48 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Frits Jalvingh wrote:\n\n> So, I am still very interested in getting normal inserts faster, because\n> that will gain speed for all work.. If Oracle can do it, and Postgres is\n> able to insert fast with copy- where lies the bottleneck with the insert\n> command? There seems to be quite a performance hit with the JDBC driver\n> itself (as the stored procedure is a lot faster), so I can look into that.\n> But even after that there is quite a gap..\n\nDid you try inserting multiple tuples in one command? Something like\nINSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')\nIt's supposed to be faster than single-row inserts, though I don't\nknow by how much.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 10 Jun 2017 22:32:14 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
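A hedged sketch of Alvaro's multi-row VALUES idea, done by hand from JDBC, is shown below; the table and the 100-rows-per-statement figure are assumptions. As Vladimir notes further down, reWriteBatchedInserts makes pgjdbc perform essentially this rewrite automatically.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch of a hand-built multi-row INSERT ... VALUES (?,?),(?,?),... statement.
public class MultiRowValuesInsert {
    public static void main(String[] args) throws Exception {
        int rowsPerStatement = 100;                 // assumed; 100 rows -> 200 parameters
        StringBuilder sql = new StringBuilder("INSERT INTO insert_test(id, value) VALUES ");
        for (int i = 0; i < rowsPerStatement; i++) {
            if (i > 0) sql.append(',');
            sql.append("(?,?)");
        }

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret")) {
            conn.setAutoCommit(false);
            try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
                for (int chunk = 0; chunk < 1000; chunk++) {        // 1000 * 100 rows total
                    int p = 1;
                    for (int i = 0; i < rowsPerStatement; i++) {
                        int id = chunk * rowsPerStatement + i;
                        ps.setInt(p++, id);
                        ps.setString(p++, "value " + id);
                    }
                    ps.executeUpdate();
                }
            }
            conn.commit();
        }
    }
}
```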
{
"msg_contents": "On 06/10/2017 07:32 PM, Alvaro Herrera wrote:\n> Frits Jalvingh wrote:\n> \n>> So, I am still very interested in getting normal inserts faster, because\n>> that will gain speed for all work.. If Oracle can do it, and Postgres is\n>> able to insert fast with copy- where lies the bottleneck with the insert\n>> command? There seems to be quite a performance hit with the JDBC driver\n>> itself (as the stored procedure is a lot faster), so I can look into that.\n>> But even after that there is quite a gap..\n> \n> Did you try inserting multiple tuples in one command? Something like\n> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')\n> It's supposed to be faster than single-row inserts, though I don't\n> know by how much.\n\nWhen I did the testing of the patch originally I saw significant\nimprovements, e.g. 8x in early versions. The thread is here:\nhttps://www.postgresql.org/message-id/flat/44C4451A.4010906%40joeconway.com#[email protected]\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Sat, 10 Jun 2017 21:15:18 -0700",
"msg_from": "Joe Conway <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Hi Alvaro,\n\nI did not try that, to be honest. I am using a single prepared statement so\nthat the database needs to parse it only once. All executes then use the\nbatched parameters.\nI will try this later on, but I wonder whether having to reparse the\nstatement every time compared to one prepared statement would actually be\nfaster.\n\nBut thanks for the tip; I will take a look.\n\nRegards,\n\nFrits\n\nHi Alvaro,I did not try that, to be honest. I am using a single prepared statement so that the database needs to parse it only once. All executes then use the batched parameters.I will try this later on, but I wonder whether having to reparse the statement every time compared to one prepared statement would actually be faster.But thanks for the tip; I will take a look.Regards,Frits",
"msg_date": "Sun, 11 Jun 2017 08:44:04 +0000",
"msg_from": "Frits Jalvingh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
{
"msg_contents": "Alvaro>Something like\nINSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')>I\ndid not\nFrits>try that, to be honest.\n\npgjdbc does automatically rewrite insert values(); into insert ...\nvalues(),(),(),() when reWriteBatchedInserts=true. I don't expect manual\nmultivalues to be noticeably faster there.\n\n\nFrits>https://etc.to/confluence/display/~admjal/PostgreSQL+performance+tests\n\nDo you really intend to measure just a single insert operation?\nIt looks odd, as typical applications would execute inserts for quite a\nwhile before they terminate.\n\nYou are including lots of warmup overheads (e.g. JIT-compilation), so your\napproach does not measure peak performance.\nOn the other hand, you are not measuring enough time to catch things like\n\"DB log switch\".\n\nWould you please use JMH as a load driver?\nHere's an example:\nhttps://github.com/pgjdbc/pgjdbc/blob/master/ubenchmark/src/main/java/org/postgresql/benchmark/statement/InsertBatch.java\n\n\nVladimir\n\n>\n\nAlvaro>Something likeINSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')>I did not Frits>try that, to be honest.pgjdbc does automatically rewrite insert values(); into insert ... values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual multivalues to be noticeably faster there.Frits>https://etc.to/confluence/display/~admjal/PostgreSQL+performance+testsDo you really intend to measure just a single insert operation?It looks odd, as typical applications would execute inserts for quite a while before they terminate.You are including lots of warmup overheads (e.g. JIT-compilation), so your approach does not measure peak performance.On the other hand, you are not measuring enough time to catch things like \"DB log switch\".Would you please use JMH as a load driver?Here's an example: https://github.com/pgjdbc/pgjdbc/blob/master/ubenchmark/src/main/java/org/postgresql/benchmark/statement/InsertBatch.java Vladimir",
"msg_date": "Sun, 11 Jun 2017 09:30:29 +0000",
"msg_from": "Vladimir Sitnikov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
},
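A bare-bones JMH skeleton in the spirit of Vladimir's suggestion might look like the following; it keeps warmup (JIT compilation, connection setup) out of the measured section. The table, URL and batch size are assumptions, and the pgjdbc ubenchmark linked above is the fuller reference.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

// Minimal JMH sketch: connection and prepared statement are set up once per
// trial, and only the batched insert itself is measured.
@State(Scope.Benchmark)
public class InsertBenchmark {
    Connection conn;
    PreparedStatement ps;

    @Setup
    public void setUp() throws Exception {
        conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test?reWriteBatchedInserts=true",
                "postgres", "secret");
        conn.setAutoCommit(false);
        ps = conn.prepareStatement("INSERT INTO insert_test(id, value) VALUES (?, ?)");
    }

    @Benchmark
    public int[] insertBatchOf1000() throws Exception {
        for (int i = 0; i < 1000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "value " + i);
            ps.addBatch();
        }
        int[] counts = ps.executeBatch();
        conn.commit();
        return counts;           // return a value so JMH cannot dead-code-eliminate the work
    }

    @TearDown
    public void tearDown() throws Exception {
        ps.close();
        conn.close();
    }
}
```

It would be built and run through the usual JMH harness (annotation processing plus the JMH runner).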
{
"msg_contents": "Vladimir Sitnikov wrote:\n> Alvaro>Something like\n> INSERT INTO .. VALUES ('col1', 'col2'), ('col1', 'col2'), ('col1', 'col2')>I\n> did not\n> Frits>try that, to be honest.\n> \n> pgjdbc does automatically rewrite insert values(); into insert ...\n> values(),(),(),() when reWriteBatchedInserts=true. I don't expect manual\n> multivalues to be noticeably faster there.\n\nAhh, so that's what that option does :-) Nice to know -- great feature.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 11 Jun 2017 08:36:48 -0400",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Improving PostgreSQL insert performance"
}
] |
[
{
"msg_contents": "Hi,\nWe have a big problem with managing the number of WAL products, during database activity, so I would try to write to you.\nWith pg_rman utility, we can't save thousands of WAL files every few hours, to be used for a possible recovery.\nWe are using a production environment 9.2 PostgreSQL server which processes a database of about 260 GB\nIs there a way to fix \"wal_segsize\" to about 1 Gb in 9.2. version, and \"rebuild\" postgreSQL server?\nThe goal is to drastically reduce the number of WALs.\n\nUpgrading to 9.5, is the only way to fix this issue?\n\n\nThank you.\n\n\n[Descrizione: cid:[email protected]]\n_______________________________\n\nGianfranco Cocco\nGruppo DBA Torino\nManaged Operations - Data Center Factory\[email protected]<mailto:[email protected]>\nEngineering.MO S.p.A\nCorso Mortara, 22 - 10149 Torino\nTel. 011 19449548 (SHORT CODE: 676548)\nwww.eng.it<http://www.eng.it>\n\n___________________________",
"msg_date": "Fri, 9 Jun 2017 13:55:04 +0000",
"msg_from": "Cocco Gianfranco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64"
},
{
"msg_contents": "On Fri, Jun 9, 2017 at 10:55 PM, Cocco Gianfranco\n<[email protected]> wrote:\n> Is there a way to fix “wal_segsize” to about 1 Gb in 9.2. version, and “rebuild” postgreSQL server?\n\nAs long as you are able to compile your own version of Postgres and\nyour distribution does not allow that, there is nothing preventing you\nto do so.\n\n> The goal is to drastically reduce the number of WALs.\n> Upgrading to 9.5, is the only way to fix this issue?\n\nNote that a server initialized with a segment size of X won't work\nwith a binary compiled with a size of Y. But you can always take a\nlogical dump of the server before the upgrade, and reload it in the\nversion of the server with a larger segment size. The cost here is\nmore downtime.\n-- \nMichael\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Sat, 10 Jun 2017 07:43:00 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64"
},
{
"msg_contents": "On Fri, Jun 9, 2017 at 3:43 PM, Michael Paquier <[email protected]>\nwrote:\n\n> On Fri, Jun 9, 2017 at 10:55 PM, Cocco Gianfranco\n> <[email protected]> wrote:\n> > Is there a way to fix “wal_segsize” to about 1 Gb in 9.2. version, and\n> “rebuild” postgreSQL server?\n>\n> As long as you are able to compile your own version of Postgres and\n> your distribution does not allow that, there is nothing preventing you\n> to do so.\n>\n\nBut there is something preventing it. wal_segsize cannot exceed 64MB in\n9.2. v10 will be the first version which will allow sizes above 64MB.\n\nCheers,\n\nJeff\n\nOn Fri, Jun 9, 2017 at 3:43 PM, Michael Paquier <[email protected]> wrote:On Fri, Jun 9, 2017 at 10:55 PM, Cocco Gianfranco\n<[email protected]> wrote:\n> Is there a way to fix “wal_segsize” to about 1 Gb in 9.2. version, and “rebuild” postgreSQL server?\n\nAs long as you are able to compile your own version of Postgres and\nyour distribution does not allow that, there is nothing preventing you\nto do so.But there is something preventing it. wal_segsize cannot exceed 64MB in 9.2. v10 will be the first version which will allow sizes above 64MB.Cheers,Jeff",
"msg_date": "Mon, 12 Jun 2017 10:27:58 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64"
},
{
"msg_contents": "On Tue, Jun 13, 2017 at 2:27 AM, Jeff Janes <[email protected]> wrote:\n> But there is something preventing it. wal_segsize cannot exceed 64MB in\n> 9.2. v10 will be the first version which will allow sizes above 64MB.\n\nYes, indeed. I have misread --with-segsize and --with-wal-segsize in\nthe docs. Sorry for the confusion.\n-- \nMichael\n\n\n-- \nSent via pgsql-bugs mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-bugs\n",
"msg_date": "Tue, 13 Jun 2017 06:25:57 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64"
},
{
"msg_contents": "Thank you.\r\n\r\nIf I understand it well, we can build a new postgreSQL server, by setting this value into \"configure\" file?\r\n\r\n\r\n\r\n--with-wal-segsize=SEGSIZE becomes --with-wal-segsize=1024 ??\r\n\r\n\r\n\r\n\r\n\r\n_______________________________\r\n\r\n\r\n\r\nGianfranco Cocco\r\n\r\nGruppo DBA Torino\r\n\r\nManaged Operations - Data Center Factory\r\n\r\[email protected]\r\n\r\nEngineering.MO S.p.A\r\n\r\nCorso Mortara, 22 - 10149 Torino\r\n\r\nTel. 011 19449548 (SHORT CODE: 676548)\r\n\r\nwww.eng.it\r\n\r\n\r\n\r\n___________________________\r\n\r\n\r\n\r\n-----Messaggio originale-----\r\nDa: Michael Paquier [mailto:[email protected]]\r\nInviato: lunedì 12 giugno 2017 23:26\r\nA: Jeff Janes <[email protected]>\r\nCc: Cocco Gianfranco <[email protected]>; [email protected]; [email protected]; DBA <[email protected]>\r\nOggetto: Re: [BUGS] Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64\r\n\r\n\r\n\r\nOn Tue, Jun 13, 2017 at 2:27 AM, Jeff Janes <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n> But there is something preventing it. wal_segsize cannot exceed 64MB\r\n\r\n> in 9.2. v10 will be the first version which will allow sizes above 64MB.\r\n\r\n\r\n\r\nYes, indeed. I have misread --with-segsize and --with-wal-segsize in the docs. Sorry for the confusion.\r\n\r\n--\r\n\r\nMichael\r\n\n\n\n\n\n\n\n\n\nThank you.\nIf I understand it well, we can build a new postgreSQL server, by setting this value into \"configure\" file?\n \n--with-wal-segsize=SEGSIZE becomes --with-wal-segsize=1024 ??\n \n \n_______________________________\n \nGianfranco Cocco\nGruppo DBA Torino\nManaged Operations - Data Center Factory\[email protected]\nEngineering.MO S.p.A\nCorso Mortara, 22 - 10149 Torino\nTel. 011 19449548 (SHORT CODE: 676548)\nwww.eng.it\n \n___________________________\n \n-----Messaggio originale-----\r\nDa: Michael Paquier [mailto:[email protected]] \r\nInviato: lunedì 12 giugno 2017 23:26\r\nA: Jeff Janes <[email protected]>\r\nCc: Cocco Gianfranco <[email protected]>; [email protected]; [email protected]; DBA <[email protected]>\r\nOggetto: Re: [BUGS] Invalid WAL segment size. Allowed values are 1,2,4,8,16,32,64\n \nOn Tue, Jun 13, 2017 at 2:27 AM, Jeff Janes <[email protected]> wrote:\n> But there is something preventing it. wal_segsize cannot exceed 64MB\r\n\n> in 9.2. v10 will be the first version which will allow sizes above 64MB.\n \nYes, indeed. I have misread --with-segsize and --with-wal-segsize in the docs. Sorry for the confusion.\n--\nMichael",
"msg_date": "Tue, 13 Jun 2017 09:10:50 +0000",
"msg_from": "Cocco Gianfranco <[email protected]>",
"msg_from_op": true,
"msg_subject": "R: Invalid WAL segment size. Allowed values are\n 1,2,4,8,16,32,64"
},
{
"msg_contents": "On Tue, Jun 13, 2017 at 6:10 PM, Cocco Gianfranco\n<[email protected]> wrote:\n> If I understand it well, we can build a new postgreSQL server, by setting\n> this value into \"configure\" file?\n>\n> --with-wal-segsize=SEGSIZE becomes --with-wal-segsize=1024 ??\n\nYes, but as Jeff has pointed out upthread, this value can just go up\nto 64 when using 9.5.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 13 Jun 2017 18:34:20 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Invalid WAL segment size. Allowed values are\n 1,2,4,8,16,32,64"
},
{
"msg_contents": "On Tue, Jun 13, 2017 at 10:39 PM, Cocco Gianfranco\n<[email protected]> wrote:\n> ./configure --with-wal-segsize=1024\n>\n> checking for WAL segment size... configure: error: Invalid WAL segment size.\n> Allowed values are 1,2,4,8,16,32,64.\n>\n> Please, how can I do?\n\nWhen trying to compile Postgres 9.6, the maximum value is 64. If you\nwant to allow 1GB of WAL segment size you will need to wait for 10, or\njust use 64MB.\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 14 Jun 2017 05:38:52 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [BUGS] Invalid WAL segment size. Allowed values are\n 1,2,4,8,16,32,64"
}
] |
[
{
"msg_contents": "Hi,\n\n\nI have two tables\n\ns {\n\n id bigint NOT NULL PRIMARY KEY,\n\n ...\n\n}\n\n\nsp {\n\n id bigint PRIMARY KEY,\n\n sid bigint REFERENCES s (id),\n\n i numeric,\n\n m numeric\n\n ...\n\n}\n\n\nI have for each entry in [s] on average around 120 entries in [sp]. And \nthat table has become the largest table in my database (8.81565*10^09 \nentries).\n\nData in [sp] are never changed. I can probably reduce the size by \nchanging datatypes from numeric to float but I was wondering if it would \nbe more efficient - primarily in terms of storage - to change the \nstructure to have two arrays in [s]. E.g.\n\ns {\n\n id bigint NOT NULL PRIMARY KEY,\n\n i numeric[],\n\n m numeric[],\n\n ...\n\n}\n\n\nI can probably reduce the size by changing datatypes from numeric to \nfloat/double. so final table would look like this:\n\n\ns {\n\n id bigint NOT NULL PRIMARY KEY,\n\n i float[],\n\n m double[],\n\n ...\n\n}\n\n\nI haven't really found anything yet how much space (e.g. how many bytes) \nan array will use compared to a table row in postgresql.\n\n\nThanks\n\nLutz\n\n\n\n\n\n\n-- \nThe University of Edinburgh is a charitable body, registered in\nScotland, with registration number SC005336.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jun 2017 10:06:39 +0200",
"msg_from": "Lutz Fischer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Using array instead of sub table (storage and speed)"
},
{
"msg_contents": "Greetings,\n\n* Lutz Fischer ([email protected]) wrote:\n> Data in [sp] are never changed. I can probably reduce the size by\n> changing datatypes from numeric to float but I was wondering if it\n> would be more efficient - primarily in terms of storage - to change\n> the structure to have two arrays in [s]. E.g.\n\nThe short answer is 'probably', but it really depends on how wide your\nrows are.\n\n> I haven't really found anything yet how much space (e.g. how many\n> bytes) an array will use compared to a table row in postgresql.\n\nThere's a 24-byte overhead for each tuple. If the width of the tuple's\ncolumns ends up being less than 24 bytes then half (or more) of the\nspace used is for the tuple header. Arrays have a bit of overhead\nthemsleves but are then densely packed.\n\nIn testing that I've done, a table which looks like:\n\nCREATE TABLE t1 (\n c1 int\n);\n\nWill end up with a couple hundred rows per 8k page (perhaps 250 or so),\nmeaning that you get ~1k of actual data for 8k of storage. Changing\nthis to an array, like so:\n\nCREATE TABLE t1 (\n c1 int[]\n);\n\nAnd then storing 3-4 tuples per 8k page (keeping each tuple under the 2k\nTOAST limit) lead to being able to store something like 450 ints per\ntuple with a subsequent result of 1800 ints per page and ~7.2k worth of\nactual data for 8k of storage, which was much more efficient for\nstorage.\n\nOf course, the tuple header is actually useful data in many\nenvironments- if you go with this approach then you have to work out how\nto deal with the fact that a given tuple is either visible or not, and\nall the ints in the array for that tuple are all visible and that an\nupdate to that tuple locks the entire tuple and that set of ints, etc.\nIf the data isn't changed after being loaded and you're able to load an\nentire tuple all at once then this could work.\n\nNote that arrays aren't more efficient than just using individual\ncolumns, and this technique is only going to be helpful if the tuple\noverhead in your situation is a large portion of the data and using this\ntechnique allows you to reduce the number of tuples stored.\n\nThanks!\n\nStephen",
"msg_date": "Thu, 15 Jun 2017 09:37:04 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using array instead of sub table (storage and speed)"
},
{
"msg_contents": "Hi Stephen,\n\n\nThanks for your reply. The data in the sub table (sp) are only read in \nas a block. Meaning I will always read in all entries in [sp] that \nbelong to one entry in [s]. Meaning I would not lose much in terms of \nwhat I could do with the data in [sp] and I could be saving around 2.8K \nper entry in [s] (just counting the overhead for each tuple in [sp]) per \nentry in [s]\n\n\nOne thing I would still wonder is in how far this would affect the \nperformance retrieving data from [s].\n\nI often need some data from [s] where I don't care about [sp]. So in how \nfar does having these arrays a part of [s] would make these queries \nslower. Or would be better to store the array data in a separate table \ne.g. have [s] as it is now but turn [sp] into an array aggregated table.\n\n\nThanks,\n\nLutz\n\n\n\nOn 15/06/17 15:37, Stephen Frost wrote:\n> Greetings,\n>\n> * Lutz Fischer ([email protected]) wrote:\n>> Data in [sp] are never changed. I can probably reduce the size by\n>> changing datatypes from numeric to float but I was wondering if it\n>> would be more efficient - primarily in terms of storage - to change\n>> the structure to have two arrays in [s]. E.g.\n> The short answer is 'probably', but it really depends on how wide your\n> rows are.\n>\n>> I haven't really found anything yet how much space (e.g. how many\n>> bytes) an array will use compared to a table row in postgresql.\n> There's a 24-byte overhead for each tuple. If the width of the tuple's\n> columns ends up being less than 24 bytes then half (or more) of the\n> space used is for the tuple header. Arrays have a bit of overhead\n> themsleves but are then densely packed.\n>\n> In testing that I've done, a table which looks like:\n>\n> CREATE TABLE t1 (\n> c1 int\n> );\n>\n> Will end up with a couple hundred rows per 8k page (perhaps 250 or so),\n> meaning that you get ~1k of actual data for 8k of storage. Changing\n> this to an array, like so:\n>\n> CREATE TABLE t1 (\n> c1 int[]\n> );\n>\n> And then storing 3-4 tuples per 8k page (keeping each tuple under the 2k\n> TOAST limit) lead to being able to store something like 450 ints per\n> tuple with a subsequent result of 1800 ints per page and ~7.2k worth of\n> actual data for 8k of storage, which was much more efficient for\n> storage.\n>\n> Of course, the tuple header is actually useful data in many\n> environments- if you go with this approach then you have to work out how\n> to deal with the fact that a given tuple is either visible or not, and\n> all the ints in the array for that tuple are all visible and that an\n> update to that tuple locks the entire tuple and that set of ints, etc.\n> If the data isn't changed after being loaded and you're able to load an\n> entire tuple all at once then this could work.\n>\n> Note that arrays aren't more efficient than just using individual\n> columns, and this technique is only going to be helpful if the tuple\n> overhead in your situation is a large portion of the data and using this\n> technique allows you to reduce the number of tuples stored.\n>\n> Thanks!\n>\n> Stephen\n\n\n-- \nThe University of Edinburgh is a charitable body, registered in\nScotland, with registration number SC005336.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jun 2017 12:37:47 +0200",
"msg_from": "Lutz Fischer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Using array instead of sub table (storage and speed)"
},
{
"msg_contents": "Greeting, Lutz!\n\nPlease don't top-post on the PG mailing lists, our style is to relpy\nin-line.\n\n* Lutz Fischer ([email protected]) wrote:\n> I often need some data from [s] where I don't care about [sp]. So in\n> how far does having these arrays a part of [s] would make these\n> queries slower. Or would be better to store the array data in a\n> separate table e.g. have [s] as it is now but turn [sp] into an\n> array aggregated table.\n\nIf that's the case then you would probably be better off putting the\narrays into an independent table, yes.\n\nThanks!\n\nStephen",
"msg_date": "Fri, 16 Jun 2017 08:05:37 -0400",
"msg_from": "Stephen Frost <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Using array instead of sub table (storage and speed)"
}
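One possible shape for the independent array table Stephen suggests, built once from the existing [sp] rows, is sketched below. Column names and the real/double precision types are assumptions based on the schemas given earlier in the thread; the ORDER BY inside array_agg keeps the two arrays aligned element by element.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch of folding the per-row [sp] data into one array row per [s] entry.
public class BuildArrayTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret");
             Statement st = conn.createStatement()) {
            st.execute("CREATE TABLE sp_arrays (" +
                       "  sid bigint PRIMARY KEY REFERENCES s (id)," +
                       "  i   real[]," +                 // float4 instead of numeric (assumed)
                       "  m   double precision[])");
            // One pass over [sp], ordered inside each group so the two arrays
            // stay aligned element by element.
            st.execute("INSERT INTO sp_arrays (sid, i, m) " +
                       "SELECT sid, " +
                       "       array_agg(i::real ORDER BY id), " +
                       "       array_agg(m::double precision ORDER BY id) " +
                       "FROM sp GROUP BY sid");
        }
    }
}
```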
] |
[
{
"msg_contents": "Hello all,\n\nI have a query with many joins, something like:\n\nSelect c1, c2, c3, sum(c5)\n From V1\n Join V2 on ...\n Left join V3 on ...\n Left join T4 on ...\n Join T5 on ...\n Join T6 on ...\n Left join T7 on ...\n Join T8 on ...\n Left join T9 on ...\nWhere ...\nGroup by c1, c2, c3\n\nThe join clauses are fairly innocuous and work directly on foreign key relationships, so there is no voodoo there. Same for the where clause. The views are similar and also join 3-4 tables each. All in all, there are 3 of all the tables involved that have millions of rows and all the other tables have thousands of rows. In particular, T9 is totally empty.\n\nIf I remove T9 from the query, it takes 9s to run. If I keep T9, the query takes over 30mn to run! If I switch the order of T8/T9, then the same happens with T8. So I don't think this has to do with the tables themselves. I have updated all the statistics and reindexed all involved tables.\n\nAny idea as to what could be causing this issue? Am I having one too many joins and tripping the query execution? The query plans are very large in both cases, so I figured I'd abstract the cases a bit for this question, but could provide those plans if someone thinks it'd be useful.\n\nThank you,\nLaurent.\n\n\n\n\n\n\n\n\n\n\nHello all,\n \nI have a query with many joins, something like:\n \nSelect c1, c2, c3, sum(c5)\n From V1\n Join V2 on …\n Left join V3 on …\n Left join T4 on …\n Join T5 on …\n Join T6 on …\n Left join T7 on …\n Join T8 on …\n Left join T9 on …\nWhere …\nGroup by c1, c2, c3\n \nThe join clauses are fairly innocuous and work directly on foreign key relationships, so there is no voodoo there. Same for the where clause. The views are similar and also join 3-4 tables each. All in all, there are 3 of all the tables\n involved that have millions of rows and all the other tables have thousands of rows. In particular, T9 is totally empty.\n \nIf I remove T9 from the query, it takes 9s to run. If I keep T9, the query takes over 30mn to run! If I switch the order of T8/T9, then the same happens with T8. So I don’t think this has to do with the tables themselves. I have updated\n all the statistics and reindexed all involved tables.\n \nAny idea as to what could be causing this issue? Am I having one too many joins and tripping the query execution? The query plans are very large in both cases, so I figured I’d abstract the cases a bit for this question, but could provide\n those plans if someone thinks it’d be useful.\n \nThank you,\nLaurent.",
"msg_date": "Thu, 15 Jun 2017 14:53:44 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sudden drastic change in performance"
},
{
"msg_contents": "\"[email protected]\" <[email protected]> writes:\n> I have a query with many joins, something like:\n\n> Select c1, c2, c3, sum(c5)\n> From V1\n> Join V2 on ...\n> Left join V3 on ...\n> Left join T4 on ...\n> Join T5 on ...\n> Join T6 on ...\n> Left join T7 on ...\n> Join T8 on ...\n> Left join T9 on ...\n> Where ...\n> Group by c1, c2, c3\n\n> The join clauses are fairly innocuous and work directly on foreign key relationships, so there is no voodoo there. Same for the where clause. The views are similar and also join 3-4 tables each. All in all, there are 3 of all the tables involved that have millions of rows and all the other tables have thousands of rows. In particular, T9 is totally empty.\n\n> If I remove T9 from the query, it takes 9s to run. If I keep T9, the query takes over 30mn to run! If I switch the order of T8/T9, then the same happens with T8. So I don't think this has to do with the tables themselves. I have updated all the statistics and reindexed all involved tables.\n\nYou need to raise join_collapse_limit to keep the planner from operating\nwith its stupid cap on. Usually people also increase from_collapse_limit\nif they have to touch either, but I think for this specific query syntax\nonly the former matters.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jun 2017 11:09:50 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drastic change in performance"
},
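A minimal sketch of applying Tom's advice from an application session follows. The value 12 is an assumption and only needs to exceed the number of relations being joined; the SET commands are per-session, so the big query has to run on the same connection afterwards.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Sketch: raise the collapse limits so the planner considers the whole join tree.
public class RaiseCollapseLimits {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/test", "postgres", "secret");
             Statement st = conn.createStatement()) {
            st.execute("SET join_collapse_limit = 12");
            st.execute("SET from_collapse_limit = 12");
            // ... then run the original Select c1, c2, c3, sum(c5) ... query
            // on this same connection, since SET only affects this session.
        }
    }
}
```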
{
"msg_contents": "\n\nAm 15. Juni 2017 16:53:44 MESZ schrieb \"[email protected]\" <[email protected]>:\n>Hello all,\n>\n>I have a query with many joins, something like:\n>\n>Select c1, c2, c3, sum(c5)\n> From V1\n> Join V2 on ...\n> Left join V3 on ...\n> Left join T4 on ...\n> Join T5 on ...\n> Join T6 on ...\n> Left join T7 on ...\n> Join T8 on ...\n> Left join T9 on ...\n>Where ...\n>Group by c1, c2, c3\n>\n>\n>Thank you,\n>Laurent.\n\nPlease show us the explain analyse for the queries.\n\n\nRegards, Andreas\n\n-- \nDiese Nachricht wurde von meinem Android-Mobiltelefon mit K-9 Mail gesendet.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jun 2017 17:14:14 +0200",
"msg_from": "Andreas Kretschmer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sudden drastic change in performance"
}
] |
[
{
"msg_contents": "Good morning,\n\nI have an index I created on the prefix of a column:\n\ncreate index location_geo_idx ON locations( substring(geohash, 0, 5));\n\nI currently use a similar to query, but I wanted to know if there is a\nfaster way to query multiple value using this index than this?\n\nselect l.geohash from locations l where l.geohash similar to '(dr7g|dr7e)%';\n\nMy goal is to utilize 9 values each time for the geohash adjacent squares.\n\nBest regards,\n\nTy\n\nGood morning,I have an index I created on the prefix of a column:create index location_geo_idx ON locations( substring(geohash, 0, 5));I currently use a similar to query, but I wanted to know if there is a faster way to query multiple value using this index than this?select l.geohash from locations l where l.geohash similar to '(dr7g|dr7e)%';My goal is to utilize 9 values each time for the geohash adjacent squares.Best regards,Ty",
"msg_date": "Tue, 20 Jun 2017 07:51:06 -0400",
"msg_from": "Tieson Molly <[email protected]>",
"msg_from_op": true,
"msg_subject": "substring index what is better way to query"
},
{
"msg_contents": "Tieson Molly <[email protected]> writes:\n> I have an index I created on the prefix of a column:\n\n> create index location_geo_idx ON locations( substring(geohash, 0, 5));\n\n> I currently use a similar to query, but I wanted to know if there is a\n> faster way to query multiple value using this index than this?\n\n> select l.geohash from locations l where l.geohash similar to '(dr7g|dr7e)%';\n\nWell, you've got a couple of problems there. The most basic one is that\nthat index doesn't match that query at all. You need to arrange things\nso that the lefthand side of the SIMILAR TO operator is exactly the\nindexed value, not something that's related to it. (Yes, in principle\nthat index could be used to answer this query, but it would require a\ngreat deal more intimate knowledge than the planner has about the\nsemantics of both substring() and SIMILAR TO.) IOW, you need to write\n\nselect l.geohash from locations l\n where substring(l.geohash, 0, 5) similar to '(dr7g|dr7e)%';\n\nThe other possible solution would be to just index the geohash strings\nverbatim; unless they are quite long, that's what I'd recommend, usually.\n\nSecondly, if you're using a non-C locale, you're likely not getting an\nindexscan plan anyway; check it with EXPLAIN. To get an indexed prefix\nsearch out of a pattern match, the index has to use C sorting rules,\nwhich you can force with a COLLATE or text_pattern_ops option if the\ndatabase's prevailing locale isn't C.\n\nThirdly, if you experiment with EXPLAIN a little bit, you'll soon realize\nthat the planner is not great at extracting common prefix strings out of\nOR'd pattern branches:\n\nregression=# create table loc (f1 text unique);\nCREATE TABLE\nregression=# explain select * from loc where f1 similar to '(dr7g|dr7e)%'; \n QUERY PLAN \n-------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n Filter: (f1 ~ '^(?:(?:dr7g|dr7e).*)$'::text)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'd'::text) AND (f1 < 'e'::text))\n(4 rows)\n\nThe useful part of this for speed purposes is the \"Index Cond\", and\nyou can see that it's only enforcing that the first character be \"d\".\nI don't remember that code very well at the moment, but I'm a bit\nsurprised that it's even figured out that the \"d\" is common to both\nbranches. 
You can get a lot more traction if you factor the common\nprefix manually:\n\nregression=# explain select * from loc where f1 similar to 'dr7(g|e)%';\n QUERY PLAN \n-------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n Filter: (f1 ~ '^(?:dr7(?:g|e).*)$'::text)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7'::text) AND (f1 < 'dr8'::text))\n(4 rows)\n\nor maybe even\n\nregression=# explain select * from loc where f1 similar to 'dr7g%' or f1 similar to 'dr7e%';\n QUERY PLAN \n-------------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=8.45..19.04 rows=2 width=32)\n Recheck Cond: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~ '^(?:dr7e.*)$'::text))\n Filter: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~ '^(?:dr7e.*)$'::text))\n -> BitmapOr (cost=8.45..8.45 rows=14 width=0)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7g'::text) AND (f1 < 'dr7h'::text))\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7e'::text) AND (f1 < 'dr7f'::text))\n(8 rows)\n\nWhether this is worth the trouble depends a lot on your data distribution,\nbut any of them are probably better than the seqscan you're no doubt\ngetting right now.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Jun 2017 10:19:09 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: substring index what is better way to query"
},
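A minimal sketch of the approach Tom Lane describes above: index the geohash strings verbatim with text_pattern_ops so anchored prefix searches can use the btree even under a non-C locale, and write each prefix as its own branch so the planner gets tight index ranges. The index name and the LIKE form of the prefix tests are illustrative assumptions, not taken from the original posts.

-- index the raw geohash column; text_pattern_ops gives C-style comparison
-- semantics so anchored prefix patterns can use the index in any locale
CREATE INDEX locations_geohash_pattern_idx
    ON locations (geohash text_pattern_ops);

-- each branch is a plain anchored prefix, so the planner can turn the OR
-- into a BitmapOr of two narrow index range scans (cf. the plan above)
SELECT l.geohash
FROM locations l
WHERE l.geohash LIKE 'dr7g%' OR l.geohash LIKE 'dr7e%';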
{
"msg_contents": "Tom,\nis there a different construct than the Similar To that would work?\n\nI know for certain that the first few characters could be different due to\nthe nature of geohashes. So I may not be able to optimize the prefix\naspect in some cases.\n\nBest regards,\n\nTy\n\nOn Jun 20, 2017 10:19 AM, \"Tom Lane\" <[email protected]> wrote:\n\n> Tieson Molly <[email protected]> writes:\n> > I have an index I created on the prefix of a column:\n>\n> > create index location_geo_idx ON locations( substring(geohash, 0, 5));\n>\n> > I currently use a similar to query, but I wanted to know if there is a\n> > faster way to query multiple value using this index than this?\n>\n> > select l.geohash from locations l where l.geohash similar to\n> '(dr7g|dr7e)%';\n>\n> Well, you've got a couple of problems there. The most basic one is that\n> that index doesn't match that query at all. You need to arrange things\n> so that the lefthand side of the SIMILAR TO operator is exactly the\n> indexed value, not something that's related to it. (Yes, in principle\n> that index could be used to answer this query, but it would require a\n> great deal more intimate knowledge than the planner has about the\n> semantics of both substring() and SIMILAR TO.) IOW, you need to write\n>\n> select l.geohash from locations l\n> where substring(l.geohash, 0, 5) similar to '(dr7g|dr7e)%';\n>\n> The other possible solution would be to just index the geohash strings\n> verbatim; unless they are quite long, that's what I'd recommend, usually.\n>\n> Secondly, if you're using a non-C locale, you're likely not getting an\n> indexscan plan anyway; check it with EXPLAIN. To get an indexed prefix\n> search out of a pattern match, the index has to use C sorting rules,\n> which you can force with a COLLATE or text_pattern_ops option if the\n> database's prevailing locale isn't C.\n>\n> Thirdly, if you experiment with EXPLAIN a little bit, you'll soon realize\n> that the planner is not great at extracting common prefix strings out of\n> OR'd pattern branches:\n>\n> regression=# create table loc (f1 text unique);\n> CREATE TABLE\n> regression=# explain select * from loc where f1 similar to '(dr7g|dr7e)%';\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n> Filter: (f1 ~ '^(?:(?:dr7g|dr7e).*)$'::text)\n> -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n> Index Cond: ((f1 >= 'd'::text) AND (f1 < 'e'::text))\n> (4 rows)\n>\n> The useful part of this for speed purposes is the \"Index Cond\", and\n> you can see that it's only enforcing that the first character be \"d\".\n> I don't remember that code very well at the moment, but I'm a bit\n> surprised that it's even figured out that the \"d\" is common to both\n> branches. 
You can get a lot more traction if you factor the common\n> prefix manually:\n>\n> regression=# explain select * from loc where f1 similar to 'dr7(g|e)%';\n> QUERY PLAN\n> -------------------------------------------------------------------------\n> Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n> Filter: (f1 ~ '^(?:dr7(?:g|e).*)$'::text)\n> -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n> Index Cond: ((f1 >= 'dr7'::text) AND (f1 < 'dr8'::text))\n> (4 rows)\n>\n> or maybe even\n>\n> regression=# explain select * from loc where f1 similar to 'dr7g%' or f1\n> similar to 'dr7e%';\n> QUERY PLAN\n> ------------------------------------------------------------\n> -------------------\n> Bitmap Heap Scan on loc (cost=8.45..19.04 rows=2 width=32)\n> Recheck Cond: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~\n> '^(?:dr7e.*)$'::text))\n> Filter: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~ '^(?:dr7e.*)$'::text))\n> -> BitmapOr (cost=8.45..8.45 rows=14 width=0)\n> -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7\n> width=0)\n> Index Cond: ((f1 >= 'dr7g'::text) AND (f1 < 'dr7h'::text))\n> -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7\n> width=0)\n> Index Cond: ((f1 >= 'dr7e'::text) AND (f1 < 'dr7f'::text))\n> (8 rows)\n>\n> Whether this is worth the trouble depends a lot on your data distribution,\n> but any of them are probably better than the seqscan you're no doubt\n> getting right now.\n>\n> regards, tom lane\n>\n\nTom, is there a different construct than the Similar To that would work?I know for certain that the first few characters could be different due to the nature of geohashes. So I may not be able to optimize the prefix aspect in some cases.Best regards,TyOn Jun 20, 2017 10:19 AM, \"Tom Lane\" <[email protected]> wrote:Tieson Molly <[email protected]> writes:\n> I have an index I created on the prefix of a column:\n\n> create index location_geo_idx ON locations( substring(geohash, 0, 5));\n\n> I currently use a similar to query, but I wanted to know if there is a\n> faster way to query multiple value using this index than this?\n\n> select l.geohash from locations l where l.geohash similar to '(dr7g|dr7e)%';\n\nWell, you've got a couple of problems there. The most basic one is that\nthat index doesn't match that query at all. You need to arrange things\nso that the lefthand side of the SIMILAR TO operator is exactly the\nindexed value, not something that's related to it. (Yes, in principle\nthat index could be used to answer this query, but it would require a\ngreat deal more intimate knowledge than the planner has about the\nsemantics of both substring() and SIMILAR TO.) IOW, you need to write\n\nselect l.geohash from locations l\n where substring(l.geohash, 0, 5) similar to '(dr7g|dr7e)%';\n\nThe other possible solution would be to just index the geohash strings\nverbatim; unless they are quite long, that's what I'd recommend, usually.\n\nSecondly, if you're using a non-C locale, you're likely not getting an\nindexscan plan anyway; check it with EXPLAIN. 
To get an indexed prefix\nsearch out of a pattern match, the index has to use C sorting rules,\nwhich you can force with a COLLATE or text_pattern_ops option if the\ndatabase's prevailing locale isn't C.\n\nThirdly, if you experiment with EXPLAIN a little bit, you'll soon realize\nthat the planner is not great at extracting common prefix strings out of\nOR'd pattern branches:\n\nregression=# create table loc (f1 text unique);\nCREATE TABLE\nregression=# explain select * from loc where f1 similar to '(dr7g|dr7e)%';\n QUERY PLAN\n-------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n Filter: (f1 ~ '^(?:(?:dr7g|dr7e).*)$'::text)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'd'::text) AND (f1 < 'e'::text))\n(4 rows)\n\nThe useful part of this for speed purposes is the \"Index Cond\", and\nyou can see that it's only enforcing that the first character be \"d\".\nI don't remember that code very well at the moment, but I'm a bit\nsurprised that it's even figured out that the \"d\" is common to both\nbranches. You can get a lot more traction if you factor the common\nprefix manually:\n\nregression=# explain select * from loc where f1 similar to 'dr7(g|e)%';\n QUERY PLAN\n-------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=4.22..14.37 rows=1 width=32)\n Filter: (f1 ~ '^(?:dr7(?:g|e).*)$'::text)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7'::text) AND (f1 < 'dr8'::text))\n(4 rows)\n\nor maybe even\n\nregression=# explain select * from loc where f1 similar to 'dr7g%' or f1 similar to 'dr7e%';\n QUERY PLAN\n-------------------------------------------------------------------------------\n Bitmap Heap Scan on loc (cost=8.45..19.04 rows=2 width=32)\n Recheck Cond: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~ '^(?:dr7e.*)$'::text))\n Filter: ((f1 ~ '^(?:dr7g.*)$'::text) OR (f1 ~ '^(?:dr7e.*)$'::text))\n -> BitmapOr (cost=8.45..8.45 rows=14 width=0)\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7g'::text) AND (f1 < 'dr7h'::text))\n -> Bitmap Index Scan on loc_f1_key (cost=0.00..4.22 rows=7 width=0)\n Index Cond: ((f1 >= 'dr7e'::text) AND (f1 < 'dr7f'::text))\n(8 rows)\n\nWhether this is worth the trouble depends a lot on your data distribution,\nbut any of them are probably better than the seqscan you're no doubt\ngetting right now.\n\n regards, tom lane",
"msg_date": "Tue, 20 Jun 2017 10:40:43 -0400",
"msg_from": "Tieson Molly <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: substring index what is better way to query"
},
{
"msg_contents": "Tieson Molly <[email protected]> writes:\n> is there a different construct than the Similar To that would work?\n\n> I know for certain that the first few characters could be different due to\n> the nature of geohashes. So I may not be able to optimize the prefix\n> aspect in some cases.\n\nDepending on what patterns you're looking for, it's possible that a\ntrigram index (contrib/pg_trgm) would work better.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 20 Jun 2017 10:45:47 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: substring index what is better way to query"
}
] |
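A brief sketch of the contrib/pg_trgm route Tom Lane points to at the end of this thread, for cases where the variable part of the geohash is not a shared prefix. The index name and the example pattern are assumptions for illustration only.

CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- a GIN trigram index can serve LIKE and regex searches that are not
-- anchored to a common prefix
CREATE INDEX locations_geohash_trgm_idx
    ON locations USING gin (geohash gin_trgm_ops);

SELECT l.geohash FROM locations l WHERE l.geohash LIKE '%r7g%';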
[
{
"msg_contents": "Both the first run and subsequent run takes same amount of time.\n\n*First Run:*\n\n\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\ntime=8.760..98.582 rows=121788 loops=1)\"\n\" *Buffers: shared read=2521*\"\n\"Planning time: 16.820 ms\"\n\"Execution time: 108.626 ms\"\n\n\n*Second Run:*\n\n\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\ntime=0.010..18.456 rows=121788 loops=1)\"\n\" *Buffers: shared hit=2521*\"\n\"Planning time: 0.083 ms\"\n\"Execution time: 27.288 ms\"\n\n\nCan anyone please help me understand and fix this.\n\n\nThanks & Regards,\nSumeet Shukla\n\nBoth the first run and subsequent run takes same amount of time.First Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=8.760..98.582 rows=121788 loops=1)\"\" Buffers: shared read=2521\"\"Planning time: 16.820 ms\"\"Execution time: 108.626 ms\"Second Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=0.010..18.456 rows=121788 loops=1)\"\" Buffers: shared hit=2521\"\"Planning time: 0.083 ms\"\"Execution time: 27.288 ms\"Can anyone please help me understand and fix this.Thanks & Regards,Sumeet Shukla",
"msg_date": "Thu, 22 Jun 2017 19:40:35 -0500",
"msg_from": "Sumeet Shukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Dataset is fetched from cache but still takes same time to fetch\n records as first run"
},
{
"msg_contents": "The numbers you posted look exactly as I would expect. The first read hits\ndisk and takes 108ms, the second read hits the cache and takes 27ms.\n\nOn Thu, Jun 22, 2017 at 8:40 PM, Sumeet Shukla <[email protected]>\nwrote:\n\n> Both the first run and subsequent run takes same amount of time.\n>\n> *First Run:*\n>\n> \"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\n> time=8.760..98.582 rows=121788 loops=1)\"\n> \" *Buffers: shared read=2521*\"\n> \"Planning time: 16.820 ms\"\n> \"Execution time: 108.626 ms\"\n>\n>\n> *Second Run:*\n>\n> \"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\n> time=0.010..18.456 rows=121788 loops=1)\"\n> \" *Buffers: shared hit=2521*\"\n> \"Planning time: 0.083 ms\"\n> \"Execution time: 27.288 ms\"\n>\n>\n> Can anyone please help me understand and fix this.\n>\n>\n> Thanks & Regards,\n> Sumeet Shukla\n>\n>\n\nThe numbers you posted look exactly as I would expect. The first read hits disk and takes 108ms, the second read hits the cache and takes 27ms.On Thu, Jun 22, 2017 at 8:40 PM, Sumeet Shukla <[email protected]> wrote:Both the first run and subsequent run takes same amount of time.First Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=8.760..98.582 rows=121788 loops=1)\"\" Buffers: shared read=2521\"\"Planning time: 16.820 ms\"\"Execution time: 108.626 ms\"Second Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=0.010..18.456 rows=121788 loops=1)\"\" Buffers: shared hit=2521\"\"Planning time: 0.083 ms\"\"Execution time: 27.288 ms\"Can anyone please help me understand and fix this.Thanks & Regards,Sumeet Shukla",
"msg_date": "Thu, 22 Jun 2017 20:54:41 -0400",
"msg_from": "Dave Stibrany <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dataset is fetched from cache but still takes same time\n to fetch records as first run"
},
{
"msg_contents": "Yes, but when I actually execute the query in pgAdmin3, it takes exactly\nthe same time of 19.5 secs.\nHow is that possible?\n\nThanks & Regards,\nSumeet Shukla\n\n\nOn Thu, Jun 22, 2017 at 7:54 PM, Dave Stibrany <[email protected]> wrote:\n\n> The numbers you posted look exactly as I would expect. The first read hits\n> disk and takes 108ms, the second read hits the cache and takes 27ms.\n>\n> On Thu, Jun 22, 2017 at 8:40 PM, Sumeet Shukla <[email protected]>\n> wrote:\n>\n>> Both the first run and subsequent run takes same amount of time.\n>>\n>> *First Run:*\n>>\n>> \"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\n>> time=8.760..98.582 rows=121788 loops=1)\"\n>> \" *Buffers: shared read=2521*\"\n>> \"Planning time: 16.820 ms\"\n>> \"Execution time: 108.626 ms\"\n>>\n>>\n>> *Second Run:*\n>>\n>> \"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual\n>> time=0.010..18.456 rows=121788 loops=1)\"\n>> \" *Buffers: shared hit=2521*\"\n>> \"Planning time: 0.083 ms\"\n>> \"Execution time: 27.288 ms\"\n>>\n>>\n>> Can anyone please help me understand and fix this.\n>>\n>>\n>> Thanks & Regards,\n>> Sumeet Shukla\n>>\n>>\n>\n\nYes, but when I actually execute the query in pgAdmin3, it takes exactly the same time of 19.5 secs.How is that possible?Thanks & Regards,Sumeet Shukla\nOn Thu, Jun 22, 2017 at 7:54 PM, Dave Stibrany <[email protected]> wrote:The numbers you posted look exactly as I would expect. The first read hits disk and takes 108ms, the second read hits the cache and takes 27ms.On Thu, Jun 22, 2017 at 8:40 PM, Sumeet Shukla <[email protected]> wrote:Both the first run and subsequent run takes same amount of time.First Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=8.760..98.582 rows=121788 loops=1)\"\" Buffers: shared read=2521\"\"Planning time: 16.820 ms\"\"Execution time: 108.626 ms\"Second Run:\"Seq Scan on d_payer (cost=0.00..8610.40 rows=121788 width=133) (actual time=0.010..18.456 rows=121788 loops=1)\"\" Buffers: shared hit=2521\"\"Planning time: 0.083 ms\"\"Execution time: 27.288 ms\"Can anyone please help me understand and fix this.Thanks & Regards,Sumeet Shukla",
"msg_date": "Thu, 22 Jun 2017 23:35:29 -0500",
"msg_from": "Sumeet Shukla <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dataset is fetched from cache but still takes same time\n to fetch records as first run"
},
{
"msg_contents": "Sumeet Shukla <[email protected]> writes:\n> Yes, but when I actually execute the query in pgAdmin3, it takes exactly\n> the same time of 19.5 secs.\n\npgAdmin is well known to be horribly inefficient at displaying large\nquery results (and 121788 rows qualifies as \"large\" for this purpose,\nI believe). The circa-tenth-of-a-second savings on the server side\nis getting swamped by client-side processing.\n\nIt's possible that pgAdmin4 has improved matters in this area.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 00:50:31 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dataset is fetched from cache but still takes same time to fetch\n records as first run"
},
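One way to check Tom Lane's diagnosis from the command line: let psql report timing while the rows are discarded, so client-side rendering does not dominate the measurement. This is an illustrative psql session, not something from the original thread; d_payer is the table the poster queried.

\timing on
-- fetch all 121788 rows but write them to a throwaway file, so the reported
-- time is server execution plus transfer, with no GUI rendering
SELECT * FROM d_payer \g /dev/null
-- or return a single aggregate row instead of the whole result set
SELECT count(*) FROM d_payer;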
{
"msg_contents": "On Fri, Jun 23, 2017 at 12:50 AM, Tom Lane <[email protected]> wrote:\n>\n> It's possible that pgAdmin4 has improved matters in this area.\n>\n\nSadly, not in my experience. It's actually considerably worse than\npgAdminIII in my experience when selecting a lot of rows, especially when\nvery wide (20+ columns).\n\nOn Fri, Jun 23, 2017 at 12:50 AM, Tom Lane <[email protected]> wrote:\nIt's possible that pgAdmin4 has improved matters in this area.\nSadly, not in my experience. It's actually considerably worse than pgAdminIII in my experience when selecting a lot of rows, especially when very wide (20+ columns).",
"msg_date": "Fri, 23 Jun 2017 08:09:49 -0400",
"msg_from": "Adam Brusselback <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dataset is fetched from cache but still takes same time\n to fetch records as first run"
},
{
"msg_contents": "ditto here... much slower, and crashes too often. We run an evergreen shop where I work, but everyone has moved back to III.\n\nSent from my BlackBerry KEYone - the most secure mobile device\nFrom: [email protected]\nSent: June 23, 2017 8:11 AM\nTo: [email protected]\nCc: [email protected]; [email protected]; [email protected]\nSubject: Re: [PERFORM] Dataset is fetched from cache but still takes same time to fetch records as first run\n\n\nOn Fri, Jun 23, 2017 at 12:50 AM, Tom Lane <[email protected]<mailto:[email protected]>> wrote:\nIt's possible that pgAdmin4 has improved matters in this area.\n\nSadly, not in my experience. It's actually considerably worse than pgAdminIII in my experience when selecting a lot of rows, especially when very wide (20+ columns).\n\n\n\n\n\n\n\n\n\n\nditto here... much slower, and crashes too often. We run an evergreen shop where I work, but everyone has moved back to III.\n\n\n\n\n\nSent from my BlackBerry KEYone - the most secure mobile device\n\n\n\n\n\n\n\n\nFrom: [email protected]\nSent: June 23, 2017 8:11 AM\nTo: [email protected]\nCc: [email protected]; [email protected]; [email protected]\nSubject: Re: [PERFORM] Dataset is fetched from cache but still takes same time to fetch records as first run\n\n\n\n\n\n\n\n\n\n\n\n\nOn Fri, Jun 23, 2017 at 12:50 AM, Tom Lane \n<[email protected]> wrote:\n\nIt's possible that pgAdmin4 has improved matters in this area.\n\n\n\n\nSadly, not in my experience. It's actually considerably worse than pgAdminIII in my experience when selecting a lot of rows, especially when very wide (20+ columns).",
"msg_date": "Fri, 23 Jun 2017 16:55:03 +0000",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Dataset is fetched from cache but still takes same time\n to fetch records as first run"
}
] |
[
{
"msg_contents": ">From: Tom Lane <[email protected]>\n>To: Sumeet Shukla <[email protected]>\n>Cc: Dave Stibrany <[email protected]>; [email protected]\n>Sent: Friday, 23 June 2017, 5:50\n>Subject: Re: [PERFORM] Dataset is fetched from cache but still takes same time to fetch records as first run\n> Sumeet Shukla <[email protected]> writes:>\n>> Yes, but when I actually execute the query in pgAdmin3, it takes exactly\n>> the same time of 19.5 secs.\n>\n>pgAdmin is well known to be horribly inefficient at displaying large\n>query results (and 121788 rows qualifies as \"large\" for this purpose,\n>I believe). The circa-tenth-of-a-second savings on the server side\n>is getting swamped by client-side processing.\n>\n>It's possible that pgAdmin4 has improved matters in this area.\n\n>\n\nIt's also possibly time taken for the results to be tranferred over a network if the data is large.\n\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 07:52:59 +0000 (UTC)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Dataset is fetched from cache but still takes same\n time to fetch records as first run"
}
] |
[
{
"msg_contents": "Hi all,\n\nI am having a problem with nested loop join.\n\nA database has 2 tables: \"posts\" and \"follows\".\nTable \"posts\" have two columns: \"timestamp\" and \"account\".\nTable \"follows\" have two columns: \"target_account\" and \"owner_account\".\nThe database also has an index on \"posts\" (\"account\", \"timestamp\"), one \non \"posts\"(\"timestamp\") and on \"follows\" (\"owner_account\", \n\"target_account\").\n\nTable \"posts\" is so big and have 10 million records.\nThe number of Records with the same value for \"owner_accounts\" in table \n\"follows\" is about 100 by average.\n\nI issue the following query:\n\nSELECT \"posts\".*\n FROM \"posts\"\n JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\n WHERE \"follows\".\"owner_account\" = $1\n ORDER BY \"posts\".\"timestamp\"\n LIMIT 100\n\nThat results in a nested loop join with table \"posts\" as the inner and \n\"follows\" as the outer, which queried for each loop. EXPlAIN ANALYZE \nsays the actual number of rows queried from table \"posts\" is 500,000. \nThis behavior is problematic.\n\nFor performance, it may be better to retrieve 100 records joined with a \nrecord in table \"follows\", and then to retrieve those whose \n\"posts\".\"timestamp\" is greater than the one of last record we already \nhave, or 100 records, joined with another record in table \"follows\", and \nso on. It would end up querying 10,000 records from table \"posts\" at \nmost. The number could be even smaller in some cases.\n\nNow I have these tough questions:\n* Is the \"ideal\" operation I suggested possible for PostgreSQL?\n* If so, I think that could be achieved by letting PostgreSQL use \n\"follows\" as the inner in the loops. How could I achieve that?\n* Is there any other way to improve the performance of the query?\n\nAnswers are greatly appreciated.\n\nRegards,\nAkihiko Odaki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 19:31:40 +0900",
"msg_from": "Akihiko Odaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Inappropriate inner table for nested loop join"
},
{
"msg_contents": "Akihiko Odaki wrote:\r\n> I am having a problem with nested loop join.\r\n> \r\n> A database has 2 tables: \"posts\" and \"follows\".\r\n> Table \"posts\" have two columns: \"timestamp\" and \"account\".\r\n> Table \"follows\" have two columns: \"target_account\" and \"owner_account\".\r\n> The database also has an index on \"posts\" (\"account\", \"timestamp\"), one\r\n> on \"posts\"(\"timestamp\") and on \"follows\" (\"owner_account\",\r\n> \"target_account\").\r\n> \r\n> Table \"posts\" is so big and have 10 million records.\r\n> The number of Records with the same value for \"owner_accounts\" in table\r\n> \"follows\" is about 100 by average.\r\n> \r\n> I issue the following query:\r\n> \r\n> SELECT \"posts\".*\r\n> FROM \"posts\"\r\n> JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\r\n> WHERE \"follows\".\"owner_account\" = $1\r\n> ORDER BY \"posts\".\"timestamp\"\r\n> LIMIT 100\r\n> \r\n> That results in a nested loop join with table \"posts\" as the inner and\r\n> \"follows\" as the outer, which queried for each loop. EXPlAIN ANALYZE\r\n> says the actual number of rows queried from table \"posts\" is 500,000.\r\n> This behavior is problematic.\r\n> \r\n> For performance, it may be better to retrieve 100 records joined with a\r\n> record in table \"follows\", and then to retrieve those whose\r\n> \"posts\".\"timestamp\" is greater than the one of last record we already\r\n> have, or 100 records, joined with another record in table \"follows\", and\r\n> so on. It would end up querying 10,000 records from table \"posts\" at\r\n> most. The number could be even smaller in some cases.\r\n> \r\n> Now I have these tough questions:\r\n> * Is the \"ideal\" operation I suggested possible for PostgreSQL?\r\n> * If so, I think that could be achieved by letting PostgreSQL use\r\n> \"follows\" as the inner in the loops. How could I achieve that?\r\n> * Is there any other way to improve the performance of the query?\r\n\r\nPostgreSQL`s plan is to use the index on \"posts\".\"timestamp\" to find the\r\nrows with the lowest \"timestamp\", match with rows from \"posts\" in\r\na nested loop and stop as soon as it has found 100 matches.\r\n\r\nNow it must be that the rows in \"posts\" that match with rows in \"follows\"\r\nhave high values of \"timestamp\".\r\n\r\nPostgreSQL doesn't know that, because it has no estimates how\r\nvalues correlate across tables, so it has to scan much more of the index\r\nthan it had expected to, and the query performs poorly.\r\n\r\nYou could either try to do something like\r\n\r\nSELECT *\r\nFROM (SELECT \"posts\".*\r\n FROM \"posts\"\r\n JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\r\n WHERE \"follows\".\"owner_account\" = $1\r\n OFFSET 0) q\r\nORDER BY \"posts\".\"timestamp\" \r\nLIMIT 100;\r\n\r\nOr you could frop the index on \"posts\".\"timestamp\" and see if that helps.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 11:20:13 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inappropriate inner table for nested loop join"
},
{
"msg_contents": "Thank you for your quick reply. Your solution works for me!\n\nOn 2017-06-23 20:20, Albe Laurenz wrote:\n > PostgreSQL`s plan is to use the index on \"posts\".\"timestamp\" to find the\n > rows with the lowest \"timestamp\", match with rows from \"posts\" in\n > a nested loop and stop as soon as it has found 100 matches.\n >\n > Now it must be that the rows in \"posts\" that match with rows in \"follows\"\n > have high values of \"timestamp\".\n\nI mistakenly dropped DESC. The actual query should be:\n\nSELECT \"posts\".*\n FROM \"posts\"\n JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\n WHERE \"follows\".\"owner_account\" = $1\n ORDER BY \"posts\".\"timestamp\" DESC\n LIMIT 100\n\nI note that here since that may be confusion to understand the later \npart of my first post.\n\n > PostgreSQL doesn't know that, because it has no estimates how\n > values correlate across tables, so it has to scan much more of the index\n > than it had expected to, and the query performs poorly.\n\nThat is exactly the problem what I have encountered.\n\n > You could either try to do something like\n >\n > SELECT *\n > FROM (SELECT \"posts\".*\n > FROM \"posts\"\n > JOIN \"follows\" ON \"follows\".\"target_account\" = \n\"posts\".\"account\"\n > WHERE \"follows\".\"owner_account\" = $1\n > OFFSET 0) q\n > ORDER BY \"posts\".\"timestamp\"\n > LIMIT 100;\n\nIt works. I had to replace \"posts\".\"timestamp\" with \"timestamp\", but \nthat is trivial. Anything else is fine.\n\n > Or you could frop the index on \"posts\".\"timestamp\" and see if that helps.\n\nThat is not a solution for me because it was used by other queries, but \nmay make sense in other cases.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 20:36:41 +0900",
"msg_from": "Akihiko Odaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inappropriate inner table for nested loop join"
},
{
"msg_contents": "On 2017-06-23 20:20, Albe Laurenz wrote:\n> You could either try to do something like\n>\n> SELECT *\n> FROM (SELECT \"posts\".*\n> FROM \"posts\"\n> JOIN \"follows\" ON \"follows\".\"target_account\" = \n\"posts\".\"account\"\n> WHERE \"follows\".\"owner_account\" = $1\n> OFFSET 0) q\n> ORDER BY \"posts\".\"timestamp\"\n> LIMIT 100;\n\nNow I wonder whether it actually sorted or not. As you said, I want to \n\"find rows with the greatest 'timestamp', match with rows from 'posts' \nin a nested loop and stop as soon as it has found 100 matches\".\n\nHowever, it seems to query 100 records without any consideration for \n\"timestamp\", and then sorts them. That is not expected. Here is a \nabstract query plan:\n\n Limit\n -> Sort\n Sort Key: posts.id DESC\n -> Nested Loop\n -> Seq Scan on follows\n Filter: (owner_account = $1)\n -> Index Scan using index_posts_on_account on posts\n Index Cond: (account_id = follows.target_account)\n\nindex_posts_on_account is an obsolete index on \"posts\" and only for \n\"account\". So it does nothing for sorting \"timestamp\".\n\nRegards,\nAkihiko Odaki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 20:54:39 +0900",
"msg_from": "Akihiko Odaki <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Inappropriate inner table for nested loop join"
},
{
"msg_contents": "Akihiko Odaki wrote:\r\n> On 2017-06-23 20:20, Albe Laurenz wrote:\r\n>> You could either try to do something like\r\n>>\r\n>> SELECT *\r\n>> FROM (SELECT \"posts\".*\r\n>> FROM \"posts\"\r\n>> JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\r\n>> WHERE \"follows\".\"owner_account\" = $1\r\n>> OFFSET 0) q\r\n>> ORDER BY \"posts\".\"timestamp\"\r\n>> LIMIT 100;\r\n> \r\n> Now I wonder whether it actually sorted or not. As you said, I want to\r\n> \"find rows with the greatest 'timestamp', match with rows from 'posts'\r\n> in a nested loop and stop as soon as it has found 100 matches\".\r\n> \r\n> However, it seems to query 100 records without any consideration for\r\n> \"timestamp\", and then sorts them. That is not expected. Here is a\r\n> abstract query plan:\r\n> \r\n> Limit\r\n> -> Sort\r\n> Sort Key: posts.id DESC\r\n> -> Nested Loop\r\n> -> Seq Scan on follows\r\n> Filter: (owner_account = $1)\r\n> -> Index Scan using index_posts_on_account on posts\r\n> Index Cond: (account_id = follows.target_account)\r\n> \r\n> index_posts_on_account is an obsolete index on \"posts\" and only for\r\n> \"account\". So it does nothing for sorting \"timestamp\".\r\n\r\nYes, if you replace posts.timestamp with q.timestamp, it should\r\nsort by that.\r\n\r\nCould you send CREATE TABLE and CREATE INDEX statements so I can try it?\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 13:05:49 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inappropriate inner table for nested loop join"
},
{
"msg_contents": "Akihiko Odaki wrote:\r\n> On 2017-06-23 20:20, Albe Laurenz wrote:\r\n>> You could either try to do something like\r\n>>\r\n>> SELECT *\r\n>> FROM (SELECT \"posts\".*\r\n>> FROM \"posts\"\r\n>> JOIN \"follows\" ON \"follows\".\"target_account\" = \"posts\".\"account\"\r\n>> WHERE \"follows\".\"owner_account\" = $1\r\n>> OFFSET 0) q\r\n>> ORDER BY \"posts\".\"timestamp\"\r\n>> LIMIT 100;\r\n> \r\n> Now I wonder whether it actually sorted or not. As you said, I want to\r\n> \"find rows with the greatest 'timestamp', match with rows from 'posts'\r\n> in a nested loop and stop as soon as it has found 100 matches\".\r\n> \r\n> However, it seems to query 100 records without any consideration for\r\n> \"timestamp\", and then sorts them. That is not expected. Here is a\r\n> abstract query plan:\r\n> \r\n> Limit\r\n> -> Sort\r\n> Sort Key: posts.id DESC\r\n> -> Nested Loop\r\n> -> Seq Scan on follows\r\n> Filter: (owner_account = $1)\r\n> -> Index Scan using index_posts_on_account on posts\r\n> Index Cond: (account_id = follows.target_account)\r\n> \r\n> index_posts_on_account is an obsolete index on \"posts\" and only for\r\n> \"account\". So it does nothing for sorting \"timestamp\".\r\n\r\nThat should be fine.\r\n\r\nIt fetches all rows from \"follows\" that match the condition,\r\nThen joins them will all matching rows from \"posts\", sorts the\r\nresult descending by \"id\" and returns the 100 rows with the largest\r\nvalue for \"id\".\r\n\r\nSo you will get those 100 rows from \"posts\" with the largest \"id\"\r\nthat have a match in \"follows\" where the condition is fulfilled.\r\n\r\nIt is just a different plan to do the same thing that is more efficient\r\nin your case.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Jun 2017 08:55:00 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Inappropriate inner table for nested loop join"
}
] |
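Putting the pieces of this thread together: Albe Laurenz's OFFSET 0 subquery acts as an optimization fence so the planner stops driving the query from the timestamp index, and the outer ORDER BY has to reference the subquery's column with the DESC direction the poster later said was intended. A consolidated sketch, with table and column names as used in the thread:

SELECT *
FROM (SELECT posts.*
      FROM posts
      JOIN follows ON follows.target_account = posts.account
      WHERE follows.owner_account = $1
      OFFSET 0) q                 -- fence: evaluate the join first
ORDER BY q."timestamp" DESC       -- then sort the joined rows by timestamp
LIMIT 100;                        -- and keep only the newest 100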
[
{
"msg_contents": "Dear pgsql-performance list,\n\nI think I've found a case where the query planner chooses quite a\nsuboptimal plan for joining three tables. The main \"fact\" table\n(metric_value) links to two others with far fewer rows (like an OLAP/star\ndesign). We retrieve and summarise a large fraction of rows from the main\ntable, and sort according to an index on that table, and I'd like to speed\nit up, since we will need to run this query many times per day. I would\nreally appreciate your advice, thank you in advance!\n\nThe following SQL creates test data which can be used to reproduce the\nproblem:\n\ndrop table if exists metric_pos;\ncreate table metric_pos (id serial primary key, pos integer);\ninsert into metric_pos (pos) SELECT (random() * 1000)::integer from\ngenerate_series(1,100);\ncreate index idx_metric_pos_id_pos on metric_pos (id, pos);\n\ndrop table if exists asset_pos;\ncreate table asset_pos (id serial primary key, pos integer);\ninsert into asset_pos (pos) SELECT (random() * 1000)::integer from\ngenerate_series(1,100);\n\ndrop TABLE if exists metric_value;\nCREATE TABLE metric_value\n(\n id_asset integer NOT NULL,\n id_metric integer NOT NULL,\n value double precision NOT NULL,\n date date NOT NULL,\n timerange_transaction tstzrange NOT NULL,\n id bigserial NOT NULL,\n CONSTRAINT cons_metric_value_pk PRIMARY KEY (id)\n)\nWITH (\n OIDS=FALSE\n);\n\ninsert into metric_value (id_asset, id_metric, date, value,\ntimerange_transaction)\nselect asset_pos.id, metric_pos.id, generate_series('2015-06-01'::date,\n'2017-06-01'::date, '1 day'), random() * 1000, tstzrange(current_timestamp,\nNULL)\nfrom metric_pos, asset_pos;\n\nCREATE INDEX idx_metric_value_id_metric_id_asset_date ON metric_value\n(id_metric, id_asset, date, timerange_transaction, value);\n\n\nThis is an example of the kind of query we would like to speed up:\n\nSELECT metric_pos.pos AS pos_metric, asset_pos.pos AS pos_asset, date,\nvalue\nFROM metric_value\nINNER JOIN asset_pos ON asset_pos.id = metric_value.id_asset\nINNER JOIN metric_pos ON metric_pos.id = metric_value.id_metric\nWHERE\ndate >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY metric_value.id_metric, metric_value.id_asset, date\n\n\nThis takes ~12 seconds from psql. Wrapping it in \"SELECT SUM(value) FROM\n(...) 
AS t\" reduces that to ~8 seconds, so the rest is probably data\ntransfer overhead which is unavoidable.\n\nThe actual query plan selected is (explain.depesz.com\n<https://explain.depesz.com/s/EoLH>):\n\n Sort (cost=378949.08..382749.26 rows=1520071 width=28) (actual\ntime=7917.686..8400.254 rows=1520000 loops=1)\n Sort Key: metric_value.id_metric, metric_value.id_asset,\nmetric_value.date\n Sort Method: external merge Disk: 62408kB\n Buffers: shared hit=24421 read=52392, temp read=7803 written=7803\n -> Hash Join (cost=3.31..222870.41 rows=1520071 width=28) (actual\ntime=0.295..6049.550 rows=1520000 loops=1)\n Hash Cond: (metric_value.id_asset = asset_pos.id)\n Buffers: shared hit=24421 read=52392\n -> Nested Loop (cost=0.56..201966.69 <056%202019%206669>\nrows=1520071 width=24) (actual time=0.174..4671.452 <01744%20671452>\nrows=1520000 loops=1)\n Buffers: shared hit=24420 read=52392\n -> Seq Scan on metric_pos (cost=0.00..1.50 rows=100\nwidth=8) (actual time=0.015..0.125 rows=100 loops=1)\n Buffers: shared hit=1\n -> Index Only Scan using\nidx_metric_value_id_metric_id_asset_date\non metric_value (cost=0.56..1867.64 rows=15201 width=20) (actual\ntime=0.090..40.978 rows=15200 loops=100)\n Index Cond: ((id_metric = metric_pos.id) AND (date >=\n'2016-01-01'::date) AND (date < '2016-06-01'::date))\n Filter: (timerange_transaction @> now())\n Heap Fetches: 1520000\n Buffers: shared hit=24419 read=52392\n -> Hash (cost=1.50..1.50 rows=100 width=8) (actual\ntime=0.102..0.102 rows=100 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 12kB\n Buffers: shared hit=1\n -> Seq Scan on asset_pos (cost=0.00..1.50 rows=100\nwidth=8) (actual time=0.012..0.052 rows=100 loops=1)\n Buffers: shared hit=1\n Planning time: 1.498 ms\n Execution time: 8992.846 ms\n\n\nOr visually:\n\n[image: Inline images 2]\n\nWhat I find interesting about this query plan is:\n\nThe records can already be read in order from idx_metric_value.... If this\nwas selected as the primary table, and metric_pos was joined to it, then\nthe output would also be in order, and no sort would be needed.\n\nWe should be able to use a merge join to metric_pos, because it can be read\nin order of id_metric (its primary key, and the first column in\nidx_metric_value...). If not, a hash join should be faster than a nested\nloop, if we only have to hash ~100 records.\n\nI think that the joins should be fairly trivial: easily held in memory and\nindexed by relatively small integers. They would probably be temporary\ntables in our real use case. But removing them (and just selecting the IDs\nfrom metric_value) cuts 4 seconds off the query time (to 3.3 seconds). Why\nare they slow?\n\nIf I remove one of the joins (asset_pos) then I get a merge join between\ntwo indexes, as expected, but it has a materialize just before it which\nmakes no sense to me. Why do we need to materialize here? And why\nmaterialise 100 rows into 1.5 million rows? 
(explain.depesz.com\n<https://explain.depesz.com/s/7mkM>)\n\nSELECT metric_pos.pos AS pos_metric, id_asset AS pos_asset, date, value\nFROM metric_value\nINNER JOIN metric_pos ON metric_pos.id = metric_value.id_metric\nWHERE\ndate >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY metric_value.id_metric, metric_value.id_asset, date\n\n Merge Join (cost=0.70..209302.76 <070%202093%200276> rows=1520071\nwidth=28) (actual time=0.097..4899.972 rows=1520000 loops=1)\n Merge Cond: (metric_value.id_metric = metric_pos.id)\n Buffers: shared hit=76403\n -> Index Only Scan using idx_metric_value_id_metric_id_asset_date on\nmetric_value (cost=0.56..182696.87 <056%201826%209687> rows=1520071\nwidth=20) (actual time=0.074..3259.870 rows=1520000 lo\nops=1)\n Index Cond: ((date >= '2016-01-01'::date) AND (date <\n'2016-06-01'::date))\n Filter: (timerange_transaction @> now())\n Heap Fetches: 1520000\n Buffers: shared hit=76401\n -> Materialize (cost=0.14..4.89 rows=100 width=8) (actual\ntime=0.018..228.265 rows=1504801 loops=1)\n Buffers: shared hit=2\n -> Index Only Scan using idx_metric_pos_id_pos on metric_pos\n (cost=0.14..4.64 rows=100 width=8) (actual time=0.013..0.133 rows=100\nloops=1)\n Heap Fetches: 100\n Buffers: shared hit=2\n Planning time: 0.761 ms\n Execution time: 5253.260 ms\n\n\nThe size of the result set is approximately 91 MB (measured with psql -c |\nwc -c). Why does it take 4 seconds to transfer this much data over a UNIX\nsocket on the same box? Can it be made faster? The data is quite redundant\n(it's sorted for a start) so compression makes a big difference, and simple\nprefix elimination could probably reduce the volume of redundant data sent\nback to the client.\n\nStandard background info:\n\n - PostgreSQL 9.6.2 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5\n 20150623 (Red Hat 4.8.5-4), 64-bit, compiled from source.\n - shared_buffers = 15GB, work_mem = 100MB, seq_page_cost =\n 0.5, random_page_cost = 1.0, cpu_tuple_cost = 0.01.\n - HP ProLiant DL580 G7, Xeon(R) CPU E7- 4850 @ 2.00GHz * 80 cores,\n hardware RAID, 3.6 TB SAS array.\n\nThanks again in advance for any suggestions, hints or questions.\n\nCheers, Chris.",
"msg_date": "Fri, 23 Jun 2017 21:09:57 +0100",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Slow query from ~7M rows, joined to two tables of ~100 rows each"
},
{
"msg_contents": "On Jun 23, Chris Wilson modulated:\n> ...\n> create table metric_pos (id serial primary key, pos integer);\n> create index idx_metric_pos_id_pos on metric_pos (id, pos);\n> ...\n> create table asset_pos (id serial primary key, pos integer);\n> ...\n\nDid you only omit a CREATE INDEX statement on asset_pos (id, pos) from\nyour problem statement or also from your actual tests? Without any\nindex, you are forcing the query planner to do that join the hard way.\n\n\n> CREATE TABLE metric_value\n> (\n> id_asset integer NOT NULL,\n> id_metric integer NOT NULL,\n> value double precision NOT NULL,\n> date date NOT NULL,\n> timerange_transaction tstzrange NOT NULL,\n> id bigserial NOT NULL,\n> CONSTRAINT cons_metric_value_pk PRIMARY KEY (id)\n> )\n> WITH (\n> OIDS=FALSE\n> );\n>\n> ...\n> CREATE INDEX idx_metric_value_id_metric_id_asset_date ON\n> metric_value (id_metric, id_asset, date, timerange_transaction,\n> value);\n> ...\n\nHave you tried adding a foreign key constraint on the id_asset and\nid_metric columns? I wonder if you'd get a better query plan if the\nDB knew that the inner join would not change the number of result\nrows. I think it's doing the join inside the filter step because\nit assumes that the inner join may drop rows.\n\nAlso, did you include an ANALYZE step between your table creation\nstatements and your query benchmarks? Since you are dropping and\nrecreating test data, you have no stats on anything.\n\n\n> This is an example of the kind of query we would like to speed up:\n>\n>\n> SELECT metric_pos.pos AS pos_metric, asset_pos.pos AS pos_asset,\n> date, value\n> FROM metric_value\n> INNER JOIN asset_pos ON asset_pos.id = metric_value.id_asset\n> INNER JOIN metric_pos ON metric_pos.id = metric_value.id_metric\n> WHERE\n> date >= '2016-01-01' and date < '2016-06-01'\n> AND timerange_transaction @> current_timestamp\n> ORDER BY metric_value.id_metric, metric_value.id_asset, date\n>\n\nHow sparse is the typical result set selected by these date and\ntimerange predicates? If it is sparse, I'd think you want your\ncompound index to start with those two columns.\n\nFinally, your subject line said you were joining hundreds of rows to\nmillions. In queries where we used a similarly small dimension table\nin the WHERE clause, we saw massive speedup by pre-evaluating that\ndimension query to produce an array of keys, the in-lining the actual\nkey constants in the where clause of a main fact table query that\nno longer had the join in it.\n\nIn your case, the equivalent hack would be to compile the small\ndimension tables into big CASE statements I suppose...\n\n\nKarl\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 19:01:16 -0700",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Slow query from ~7M rows, joined to two tables of ~100 rows\n each"
},
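A sketch of the two suggestions in Karl's reply, expressed as DDL against the tables from the original post. The constraint names are invented for illustration, and validating the foreign keys will scan the large fact table once, so whether this is worthwhile depends on the workload.

-- tell the planner that every fact row matches a row in each dimension table
ALTER TABLE metric_value
    ADD CONSTRAINT metric_value_id_asset_fkey
        FOREIGN KEY (id_asset) REFERENCES asset_pos (id),
    ADD CONSTRAINT metric_value_id_metric_fkey
        FOREIGN KEY (id_metric) REFERENCES metric_pos (id);

-- refresh planner statistics after the bulk load
ANALYZE metric_value;
ANALYZE metric_pos;
ANALYZE asset_pos;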
{
"msg_contents": "On Fri, Jun 23, 2017 at 1:09 PM, Chris Wilson <[email protected]>\nwrote:\n\n>\n> The records can already be read in order from idx_metric_value.... If this\n> was selected as the primary table, and metric_pos was joined to it, then\n> the output would also be in order, and no sort would be needed.\n>\n> We should be able to use a merge join to metric_pos, because it can be\n> read in order of id_metric (its primary key, and the first column in\n> idx_metric_value...). If not, a hash join should be faster than a nested\n> loop, if we only have to hash ~100 records.\n>\n\nHash joins do not preserve order. They could preserve the order of their\n\"first\" input, but only if the hash join is all run in one batch and\ndoesn't spill to disk. But the hash join code is never prepared to make a\nguarantee that it won't spill to disk, and so never considers it to\npreserve order. It thinks it only needs to hash 100 rows, but it is never\nabsolutely certain of that, until it actually executes.\n\nIf I set enable_sort to false, then I do get the merge join you want (but\nwith asset_pos joined by nested loop index scan, not a hash join, for the\nreason just stated above) but that is slower than the plan with the sort in\nit, just like PostgreSQL thinks it will be.\n\nIf I vacuum your fact table, then it can switch to use index only scans. I\nthen get a different plan, still using a sort, which runs in 1.6 seconds.\nSorting is not the slow step you think it is.\n\nBe warned that \"explain (analyze)\" can substantially slow down and distort\nthis type of query, especially when sorting. You should run \"explain\n(analyze, timing off)\" first, and then only trust \"explain (analyze)\" if\nthe overall execution times between them are similar.\n\n\n\n> If I remove one of the joins (asset_pos) then I get a merge join between\n> two indexes, as expected, but it has a materialize just before it which\n> makes no sense to me. Why do we need to materialize here? And why\n> materialise 100 rows into 1.5 million rows? (explain.depesz.com\n> <https://explain.depesz.com/s/7mkM>)\n>\n\n\n -> Materialize (cost=0.14..4.89 rows=100 width=8) (actual\n> time=0.018..228.265 rows=1504801 loops=1)\n> Buffers: shared hit=2\n> -> Index Only Scan using idx_metric_pos_id_pos on metric_pos\n> (cost=0.14..4.64 rows=100 width=8) (actual time=0.013..0.133 rows=100\n> loops=1)\n> Heap Fetches: 100\n> Buffers: shared hit=2\n>\n>\nIt doesn't need to materialize, it does it simply because it thinks it will\nbe faster (which it is, slightly). You can prevent it from doing so by set\nenable_materialize to off. The reason it is faster is that with the\nmaterialize, it can check all the visibility filters at once, rather than\nhaving to do it repeatedly. It is only materializing 100 rows, the 1504801\ncomes from the number of rows the projected out of the materialized table\n(one for each row in the other side of the join, in this case), rather than\nthe number of rows contained within it.\n\nAnd again, vacuum your tables. Heap fetches aren't cheap.\n\n\n> The size of the result set is approximately 91 MB (measured with psql -c |\n> wc -c). Why does it take 4 seconds to transfer this much data over a UNIX\n> socket on the same box?\n>\n\nIt has to convert the data to a format used for the wire protocol (hardware\nindependent, and able to support user defined and composite types), and\nthen back again.\n\n> work_mem = 100MB\n\nCan you give it more than that? 
How many simultaneous connections do you\nexpect?\n\nCheers,\n\nJeff\n\nOn Fri, Jun 23, 2017 at 1:09 PM, Chris Wilson <[email protected]> wrote:The records can already be read in order from idx_metric_value.... If this was selected as the primary table, and metric_pos was joined to it, then the output would also be in order, and no sort would be needed.We should be able to use a merge join to metric_pos, because it can be read in order of id_metric (its primary key, and the first column in idx_metric_value...). If not, a hash join should be faster than a nested loop, if we only have to hash ~100 records.Hash joins do not preserve order. They could preserve the order of their \"first\" input, but only if the hash join is all run in one batch and doesn't spill to disk. But the hash join code is never prepared to make a guarantee that it won't spill to disk, and so never considers it to preserve order. It thinks it only needs to hash 100 rows, but it is never absolutely certain of that, until it actually executes.If I set enable_sort to false, then I do get the merge join you want (but with asset_pos joined by nested loop index scan, not a hash join, for the reason just stated above) but that is slower than the plan with the sort in it, just like PostgreSQL thinks it will be.If I vacuum your fact table, then it can switch to use index only scans. I then get a different plan, still using a sort, which runs in 1.6 seconds. Sorting is not the slow step you think it is.Be warned that \"explain (analyze)\" can substantially slow down and distort this type of query, especially when sorting. You should run \"explain (analyze, timing off)\" first, and then only trust \"explain (analyze)\" if the overall execution times between them are similar. If I remove one of the joins (asset_pos) then I get a merge join between two indexes, as expected, but it has a materialize just before it which makes no sense to me. Why do we need to materialize here? And why materialise 100 rows into 1.5 million rows? (explain.depesz.com) -> Materialize (cost=0.14..4.89 rows=100 width=8) (actual time=0.018..228.265 rows=1504801 loops=1) Buffers: shared hit=2 -> Index Only Scan using idx_metric_pos_id_pos on metric_pos (cost=0.14..4.64 rows=100 width=8) (actual time=0.013..0.133 rows=100 loops=1) Heap Fetches: 100 Buffers: shared hit=2It doesn't need to materialize, it does it simply because it thinks it will be faster (which it is, slightly). You can prevent it from doing so by set enable_materialize to off. The reason it is faster is that with the materialize, it can check all the visibility filters at once, rather than having to do it repeatedly. It is only materializing 100 rows, the 1504801 comes from the number of rows the projected out of the materialized table (one for each row in the other side of the join, in this case), rather than the number of rows contained within it.And again, vacuum your tables. Heap fetches aren't cheap.The size of the result set is approximately 91 MB (measured with psql -c | wc -c). Why does it take 4 seconds to transfer this much data over a UNIX socket on the same box?It has to convert the data to a format used for the wire protocol (hardware independent, and able to support user defined and composite types), and then back again.> work_mem = 100MBCan you give it more than that? How many simultaneous connections do you expect?Cheers,Jeff",
"msg_date": "Mon, 26 Jun 2017 14:22:51 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Slow query from ~7M rows, joined to two tables of\n ~100 rows each"
}
] |
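Following Jeff's advice at the end of this thread: vacuum the fact table so the visibility map is set and the index-only scan stops paying for heap fetches, and consider a larger work_mem for this kind of reporting query. The 512MB figure below is only an illustration and has to be weighed against the number of concurrent sessions.

VACUUM (ANALYZE) metric_value;

-- session-level setting for the reporting connection only
SET work_mem = '512MB';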
[
{
"msg_contents": "Let's say I have the following table and index:\n\ncreate table foo(s text, i integer);\ncreate index foo_idx on foo (s, i);\n\nIf I run the following commands:\n\nstart transaction;\nset local enable_sort = off;\nexplain analyze select * from foo where s = 'a' order by i;\nend;\n\nI get the following query plan:\n\nIndex Only Scan using foo_idx on foo (cost=0.14..8. 15 row=1 width=36)\n(actual time=0.008..0.0 10 rows=3 loops=1)\n\n Index Cond: (s = 'a'::text)\n\n Heap Fetches: 3\n\n\nThat's a good plan because it's not doing a quick sort. Instead, it's just\nreading the sort order off of the index, which is exactly what I want. (I\nhad to disable enable_sort because I didn't have enough rows of test data\nin the table to get Postgres to use the index. But if I had enough rows,\nthe enable_sort stuff wouldn't be necessary. My real table has lots of rows\nand doesn't need enable_sort turned off to do the sort with the index.)\n\nBut, if I run the following commands:\n\nstart transaction;\nset local enable_sort = off;\nexplain analyze select * from foo where s = 'a' or s = 'b' order by i;\nend;\n\nI get the following plan:\n\nSort (cost=10000000001.16..10000000001.16 rows=2 width=36) (actual\ntime=0.020..0.021 rows=7 loops=1)\n Sort Key: i\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on foo (cost=0.00..1.15 rows=2 width=36) (actual\ntime=0.009..0.011 rows=7 loops=1)\n Filter: ((s = 'a'::text) OR (s = 'b'::text))\n Rows Removed by Filter: 3\n\nHere, it's loading the full result set into memory and doing a quick sort.\n(I think that's what it's doing, at least. If that's not the case, let me\nknow.) That's not good.\n\nWhat I'd really like Postgres to do is use the index to get a sorted list\nof rows where s = 'a'. Then, use the index again to get a sorted list of\nrows where s = 'b'. Then it seems like Postgres should be able to merge the\nsorted lists into a single sorted result set in O(n) time and O(1) memory\nusing a single merge operation.\n\nAm I doing something wrong here? Is there a way to get Postgres to not do a\nquick sort here?\n\nMy concern is that my real table has a significant number of rows, and the\nresult set will not fit into memory. So instead of getting a quick sort,\nI'll end up getting a slow, disk-based merge sort. I really need the bulk\nof the sort operation to come off of the index so that time and memory are\nsmall.\n\nThanks for any help on this issue.\n\nLet's say I have the following table and index:create table foo(s text, i integer);create index foo_idx on foo (s, i);If I run the following commands:start transaction;set local enable_sort = off;explain analyze select * from foo where s = 'a' order by i;end;I get the following query plan:Index Only Scan using foo_idx on foo (cost=0.14..8. 15 row=1 width=36) (actual time=0.008..0.0 10 rows=3 loops=1) Index Cond: (s = 'a'::text) Heap Fetches: 3That's a good plan because it's not doing a quick sort. Instead, it's just reading the sort order off of the index, which is exactly what I want. (I had to disable enable_sort because I didn't have enough rows of test data in the table to get Postgres to use the index. But if I had enough rows, the enable_sort stuff wouldn't be necessary. 
My real table has lots of rows and doesn't need enable_sort turned off to do the sort with the index.)But, if I run the following commands:start transaction;set local enable_sort = off;explain analyze select * from foo where s = 'a' or s = 'b' order by i;end;I get the following plan:Sort (cost=10000000001.16..10000000001.16 rows=2 width=36) (actual time=0.020..0.021 rows=7 loops=1) Sort Key: i Sort Method: quicksort Memory: 25kB -> Seq Scan on foo (cost=0.00..1.15 rows=2 width=36) (actual time=0.009..0.011 rows=7 loops=1) Filter: ((s = 'a'::text) OR (s = 'b'::text)) Rows Removed by Filter: 3Here, it's loading the full result set into memory and doing a quick sort. (I think that's what it's doing, at least. If that's not the case, let me know.) That's not good.What I'd really like Postgres to do is use the index to get a sorted list of rows where s = 'a'. Then, use the index again to get a sorted list of rows where s = 'b'. Then it seems like Postgres should be able to merge the sorted lists into a single sorted result set in O(n) time and O(1) memory using a single merge operation.Am I doing something wrong here? Is there a way to get Postgres to not do a quick sort here?My concern is that my real table has a significant number of rows, and the result set will not fit into memory. So instead of getting a quick sort, I'll end up getting a slow, disk-based merge sort. I really need the bulk of the sort operation to come off of the index so that time and memory are small.Thanks for any help on this issue.",
"msg_date": "Fri, 23 Jun 2017 17:58:28 -0500",
"msg_from": "Clint Miller <[email protected]>",
"msg_from_op": true,
"msg_subject": "Efficiently merging and sorting collections of sorted rows"
},
{
"msg_contents": "On Fri, Jun 23, 2017 at 3:58 PM, Clint Miller <[email protected]> wrote:\n> Here, it's loading the full result set into memory and doing a quick sort.\n> (I think that's what it's doing, at least. If that's not the case, let me\n> know.) That's not good.\n\nIt's not sorting stuff that doesn't need to be read into memory in the\nfirst place. In the case of your plan with the sequential scan, some\nrows are eliminated early, before being input to the sort node.\n\n> What I'd really like Postgres to do is use the index to get a sorted list of\n> rows where s = 'a'. Then, use the index again to get a sorted list of rows\n> where s = 'b'. Then it seems like Postgres should be able to merge the\n> sorted lists into a single sorted result set in O(n) time and O(1) memory\n> using a single merge operation.\n>\n> Am I doing something wrong here? Is there a way to get Postgres to not do a\n> quick sort here?\n\nI would like that too. There is a patch that does what I think you're\ndescribing, but it seems to be in limbo:\n\nhttps://commitfest.postgresql.org/11/409/\n\n-- \nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 16:32:13 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently merging and sorting collections of sorted rows"
},
{
"msg_contents": "Clint Miller <[email protected]> writes:\n> That's a good plan because it's not doing a quick sort. Instead, it's just\n> reading the sort order off of the index, which is exactly what I want. (I\n> had to disable enable_sort because I didn't have enough rows of test data\n> in the table to get Postgres to use the index. But if I had enough rows,\n> the enable_sort stuff wouldn't be necessary. My real table has lots of rows\n> and doesn't need enable_sort turned off to do the sort with the index.)\n\nTBH, I think this whole argument is proceeding from false premises.\nUsing an indexscan as a substitute for an explicit sort of lots of\nrows isn't all that attractive, because it implies a whole lot of\nrandom access to the table (unless the table is nearly in index\norder, which isn't a condition you can count on without expending\na lot of maintenance effort to keep it that way). seqscan-and-sort\nis often a superior alternative, especially if you're willing to give\nthe sort a reasonable amount of work_mem.\n\n> What I'd really like Postgres to do is use the index to get a sorted list\n> of rows where s = 'a'. Then, use the index again to get a sorted list of\n> rows where s = 'b'. Then it seems like Postgres should be able to merge the\n> sorted lists into a single sorted result set in O(n) time and O(1) memory\n> using a single merge operation.\n\nIf there's no duplicates to remove, I think this will work:\n\nexplain\n(select * from foo a where s = 'a' order by i)\nunion all\n(select * from foo b where s = 'b' order by i)\norder by i;\n\n Merge Append (cost=0.32..48.73 rows=12 width=36)\n Sort Key: a.i\n -> Index Only Scan using foo_idx on foo a (cost=0.15..24.26 rows=6 width=36)\n Index Cond: (s = 'a'::text)\n -> Index Only Scan using foo_idx on foo b (cost=0.15..24.26 rows=6 width=36)\n Index Cond: (s = 'b'::text)\n\nIn this case it's pretty obvious that the two union arms can never\nreturn the same row, but optimizing OR into UNION in general is\ndifficult because of the possibility of duplicates. I wouldn't\nrecommend holding your breath waiting for the planner to do this\nfor you.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 23 Jun 2017 19:33:25 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently merging and sorting collections of sorted rows"
},
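The rewrite generalizes to any number of disjoint key values; a sketch against the same foo table, where each arm would be expected to appear as its own Index Only Scan under a single Merge Append:

explain
(select * from foo where s = 'a' order by i)
union all
(select * from foo where s = 'b' order by i)
union all
(select * from foo where s = 'c' order by i)
order by i;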
{
"msg_contents": "On Fri, Jun 23, 2017 at 6:33 PM, Tom Lane <[email protected]> wrote:\n> Clint Miller <[email protected]> writes:\n>> That's a good plan because it's not doing a quick sort. Instead, it's just\n>> reading the sort order off of the index, which is exactly what I want. (I\n>> had to disable enable_sort because I didn't have enough rows of test data\n>> in the table to get Postgres to use the index. But if I had enough rows,\n>> the enable_sort stuff wouldn't be necessary. My real table has lots of rows\n>> and doesn't need enable_sort turned off to do the sort with the index.)\n>\n> TBH, I think this whole argument is proceeding from false premises.\n> Using an indexscan as a substitute for an explicit sort of lots of\n> rows isn't all that attractive, because it implies a whole lot of\n> random access to the table (unless the table is nearly in index\n> order, which isn't a condition you can count on without expending\n> a lot of maintenance effort to keep it that way). seqscan-and-sort\n> is often a superior alternative, especially if you're willing to give\n> the sort a reasonable amount of work_mem.\n\nHm, if he reverses the index terms he gets his sort order for free and\na guaranteed IOS. This would only be sensible to do only if several\nconditions applied, you'd have to live under the IOS criteria\ngenerally, the number of rows returned to what relative to what was\nthrown out would have to be reasonably high (this is key), and the\nresult set would have to be large making the sort an expensive\nconsideration relative to the filtering. You'd also have to be\nuninterested in explicit filters on 's' or be willing to create\nanother index to do that if you were.\n\nmerlin\n\npostgres=# \\d foo\n Table \"public.foo\"\n Column │ Type │ Modifiers\n────────┼─────────┼───────────\n s │ text │\n i │ integer │\nIndexes:\n \"foo_i_s_idx\" btree (i, s) -- reversed\n\npostgres=# set enable_sort = false;\nSET\n\npostgres=# explain analyze select * from foo where s = 'a' or s = 'b'\norder by i;\n QUERY PLAN\n─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Index Only Scan using foo_i_s_idx on foo (cost=0.15..68.75 rows=12\nwidth=36) (actual time=0.004..0.004 rows=0 loops=1)\n Filter: ((s = 'a'::text) OR (s = 'b'::text))\n Heap Fetches: 0\n Planning time: 0.215 ms\n Execution time: 0.025 ms\n\n\n\n\n\n\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jun 2017 08:13:40 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Efficiently merging and sorting collections of sorted rows"
}
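Spelled out, the reversed-index variant is just the following (the index name comes from the psql session above; whether the planner actually chooses the index-only scan depends on the conditions Merlin lists):

create index foo_i_s_idx on foo (i, s);  -- ORDER BY column first, filter column second
vacuum foo;                              -- keep the visibility map current so Heap Fetches stays low

explain analyze
select * from foo where s = 'a' or s = 'b' order by i;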
] |
[
{
"msg_contents": "Is it possible to speed up deletes which have null references so they don't check if a reference is valid?\n\nI had this scenario:\n\n--large table not referenced from other places\nCREATE TABLE large_table\n(\n id bigserial primary key,\n ref_1 bigint not null,\n ref_2 bigint not null,\n at_1 timestamptz not null,\n at_2 timestamptz not null,\n amount numeric not null,\n type_1 int not null,\n type_2 int not null,\n undo_id bigint references large_table\n);\n--some random data with some self references\ninsert into large_table\nselect i, i/10, i/100, now() , now(), i%1000, i%10, i%20, case when i%1000 = 3 then i -1 else null end\nfrom generate_series(1, 1000000) i;\n\n--create unique index ix_undo on large_table(undo_id) where undo_id is not null;\n\nanalyze large_table;\n\n--some new data with unique type_1 which don't have self references\ninsert into large_table\nselect 1000000 + i, i/10, i/100, now(), now(), i%1000, 11, i%20, null\nfrom generate_series(1, 100000) i;\n\ndelete from large_table where type_1 = 11;\n\nI had to cancel the last delete and create an index on undo_id for the last query to run fast.\n(I was actually expecting that commented out index to exists, but for some reason it didn't)\n\nRegards,\nRikard\n\n-- \nRikard Pavelic\nhttps://dsl-platform.com/\nhttp://templater.info/",
"msg_date": "Sat, 24 Jun 2017 09:28:12 +0200",
"msg_from": "Rikard Pavelic <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow delete due to reference"
},
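A hypothetical helper for spotting this situation before it bites: the catalog query below (simplified to single-column foreign keys) lists FK columns whose referencing table has no index leading with that column, which is what forces the delete above to scan large_table once per deleted row.

SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS fk_column,
       c.conname            AS constraint_name
FROM pg_constraint c
JOIN pg_attribute a
  ON a.attrelid = c.conrelid
 AND a.attnum   = c.conkey[1]
WHERE c.contype = 'f'
  AND array_length(c.conkey, 1) = 1          -- single-column FKs only, to keep it readable
  AND NOT EXISTS (
        SELECT 1
        FROM pg_index i
        WHERE i.indrelid = c.conrelid
          AND i.indkey[0] = c.conkey[1]      -- an index already leads with the FK column
      );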
{
"msg_contents": "Rikard Pavelic <[email protected]> writes:\n> Is it possible to speed up deletes which have null references so they don't check if a reference is valid?\n\nYou're thinking about the problem backwards. Since the table is\nself-referential, each row is both a PK (referenced) row and an FK\n(referencing) row. In its role as an FK row, a delete requires no work,\nnull referencing column or otherwise --- but in its role as a PK row, a\ndelete does require work. The PK column here is \"id\" which is not null in\nany row, so for every row, the FK trigger must check to see whether that\nid is referenced by any FK row. With no index on the FK column (undo_id)\nthat requires an expensive seqscan.\n\nThere are optimizations to skip the check when deleting a null PK value,\nbut that case never occurs in your example.\n\n> --create unique index ix_undo on large_table(undo_id) where undo_id is not null;\n> (I was actually expecting that commented out index to exists, but for some reason it didn't)\n\nIt would've done the job if you'd had it, I believe.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 24 Jun 2017 12:14:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow delete due to reference"
}
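Concretely, the commented-out index from the original post is the fix; a sketch of the sequence (the partial predicate only keeps the index small, a plain index on undo_id would satisfy the RI check just as well):

create unique index ix_undo on large_table(undo_id) where undo_id is not null;
-- with the index in place, each per-row "is this id still referenced?" check
-- becomes an index probe instead of a sequential scan over large_table
delete from large_table where type_1 = 11;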
] |
[
{
"msg_contents": "Hi Karl,\n\nThanks for the quick reply! Answers inline.\n\nMy starting point, having executed exactly the preparation query in my\nemail, was that the sample EXPLAIN (ANALYZE, BUFFERS) SELECT query ran in\n15.3 seconds (best of 5), and did two nested loops\n<https://explain.depesz.com/s/KmXc>.\n\nOn 24 June 2017 at 03:01, Karl Czajkowski <[email protected]> wrote:\n\n> Also, did you include an ANALYZE step between your table creation\n> statements and your query benchmarks? Since you are dropping and\n> recreating test data, you have no stats on anything.\n\n\nI tried this suggestion first, as it's the hardest to undo, and could also\nbe done automatically by a background ANALYZE while I wasn't looking. It\ndid result in a switch to using hash joins\n<https://explain.depesz.com/s/hCiI> (instead of nested loops), and to\nstarting with the metric_value table (the fact table), which are both\nchanges that I thought would help, and the EXPLAIN ... SELECT speeded up to\n13.2 seconds (2 seconds faster; best of 5 again).\n\nDid you only omit a CREATE INDEX statement on asset_pos (id, pos) from\n> your problem statement or also from your actual tests? Without any\n> index, you are forcing the query planner to do that join the hard way.\n>\n\nI omitted it from my previous tests and the preparation script because I\ndidn't expect it to make much difference. There was already a primary key\non ID, so this would only enable an index scan to be changed into an\nindex-only scan, but the query plan wasn't doing an index scan.\n\nIt didn't appear to change the query plan or performance\n<https://explain.depesz.com/s/mSU>.\n\nHave you tried adding a foreign key constraint on the id_asset and\n> id_metric columns? I wonder if you'd get a better query plan if the\n> DB knew that the inner join would not change the number of result\n> rows. I think it's doing the join inside the filter step because\n> it assumes that the inner join may drop rows.\n>\n\nThis didn't appear to change the query plan or performance\n<https://explain.depesz.com/s/xZL> either.\n\n\n> > This is an example of the kind of query we would like to speed up:\n> >\n> >\n> > SELECT metric_pos.pos AS pos_metric, asset_pos.pos AS pos_asset,\n> > date, value\n> > FROM metric_value\n> > INNER JOIN asset_pos ON asset_pos.id = metric_value.id_asset\n> > INNER JOIN metric_pos ON metric_pos.id = metric_value.id_metric\n> > WHERE\n> > date >= '2016-01-01' and date < '2016-06-01'\n> > AND timerange_transaction @> current_timestamp\n> > ORDER BY metric_value.id_metric, metric_value.id_asset, date\n> >\n>\n> How sparse is the typical result set selected by these date and\n> timerange predicates? If it is sparse, I'd think you want your\n> compound index to start with those two columns.\n>\n\nI'm not sure what \"sparse\" means? The date is a significant fraction (25%)\nof the total table contents in this test example, although we're flexible\nabout date ranges (if it improves performance per day) since we'll end up\nprocessing a big chunk of the entire table anyway, batched by date. Almost\nno rows will be removed by the timerange_transaction filter (none in our\ntest example). 
We expect to have rows in this table for most metric and\nasset combinations (in the test example we populate metric_value using the\ncartesian product of these tables to simulate this).\n\nI created the index starting with date and it did make a big difference:\ndown to 10.3 seconds using a bitmap index scan and bitmap heap scan\n<https://explain.depesz.com/s/mGZT> (and then two hash joins as before).\n\nI was also able to shave another 1.1 seconds off\n<https://explain.depesz.com/s/xTig> (down to 9.2 seconds) by materialising\nthe cartesian product of id_asset and id_metric, and joining to\nmetric_value, but I don't really understand why this helps. It's\nunfortunate that this requires materialisation (using a subquery isn't\nenough) and takes more time than it saves from the query (6 seconds)\nalthough it might be partially reusable in our case.\n\nCREATE TABLE cartesian AS\nSELECT DISTINCT id_metric, id_asset FROM metric_value;\n\nSELECT metric_pos.pos AS pos_metric, asset_pos.pos AS pos_asset, date,\nvalue\nFROM cartesian\nINNER JOIN metric_value ON metric_value.id_metric = cartesian.id_metric AND\nmetric_value.id_asset = cartesian.id_asset\nINNER JOIN asset_pos ON asset_pos.id = metric_value.id_asset\nINNER JOIN metric_pos ON metric_pos.id = metric_value.id_metric\nWHERE\ndate >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY metric_value.id_metric, metric_value.id_asset, date;\n\n\nAnd I was able to shave another 3.7 seconds off\n<https://explain.depesz.com/s/lqGw> (down to 5.6 seconds) by making the\nonly two columns of the cartesian table into its primary key, although\nagain I don't understand why:\n\nalter table cartesian add primary key (id_metric, id_asset);\n\n\n[image: Inline images 1]\n\nThis uses merge joins instead, which supports the hypothesis that merge\njoins could be faster than hash joins if only we can persuade Postgres to\nuse them. It also contains two materialize steps that I don't understand.\n\n\n> Finally, your subject line said you were joining hundreds of rows to\n> millions. In queries where we used a similarly small dimension table\n> in the WHERE clause, we saw massive speedup by pre-evaluating that\n> dimension query to produce an array of keys, the in-lining the actual\n> key constants in the where clause of a main fact table query that\n> no longer had the join in it.\n>\n> In your case, the equivalent hack would be to compile the small\n> dimension tables into big CASE statements I suppose...\n>\n\nNice idea! I tried this but unfortunately it made the query 16 seconds\nslower <https://explain.depesz.com/s/EXLG> (up to 22 seconds) instead of\nfaster. 
I'm not sure why, perhaps the CASE expression is just very slow to\nevaluate?\n\nSELECT\ncase metric_value.id_metric when 1 then 565 when 2 then 422 when 3 then\n798 when 4 then 161 when 5 then 853 when 6 then 994 when 7 then 869\n when 8 then 909 when 9 then 226 when 10 then 32 when 11\n then 592 when 12 then 247 when 13 then 350 when 14 then 964 when 15\nthen 692 when 16 then 759 when 17 then 744 when 18 then 192 when 19\nthen 390 when 20 then 804 when 21 then 892 when 22 then 219 when 23\nthen 48 when 24 then 272 when 25 then 256 when 26 then 955 when 27 then\n258 when 28 then 858 when 29 then 298 when 30 then 200 when 31 then 681\n when 32 then 862\n when 33 then 621 when 34 then 478 when 35 then 23 when 36 then 474\n when 37 then 472 when 38 then 892 when 39 then 383 when 40 then 699\n when 41 then 924 when 42 then 976 when 43 then\n 946 when 44 then 275 when 45 then 940 when 46 then 637 when 47 then 34\n when 48 then 684 when 49 then 829 when 50 then 423 when 51 then 487\n when 52 then 721 when 53 then 642 when 54\nthen 535 when 55 then 992 when 56 then 898 when 57 then 490 when 58\nthen 251 when 59 then 756 when 60 then 788 when 61 then 451 when 62\nthen 437 when 63 then 650 when 64 then 72 when\n 65 then 915 when 66 then 673 when 67 then 546 when 68 then 387 when 69\nthen 565 when 70 then 929 when 71 then 86 when 72 then 490 when 73 then\n905 when 74 then 32 when 75 then 764 when 76 then 845 when 77 then 669\n when 78 then 798 when 79 then 529 when 80 then 498 when 81 then 221\n when 82 then 16 when 83 then 219 when 84 then 864 when 85 then 551\n when 86 then 211 when 87 then 762 when 88 then 42 when 89 then 462\n when 90 then 518 when 91 then 830 when 92 then 912 when 93 then 954\n when 94 then 480 when 95 then 984 when 96 then 869 when 97 then 153\n when 98 then 530 when 99 then 257 when 100 then 718 end AS pos_metric,\n\ncase metric_value.id_asset when 1 then 460 when 2 then 342 when 3 then\n208 when 4 then 365 when 5 then 374 when 6 then 972 when 7 then 210\n when 8 then 43 when 9 then 770 when 10 then 738 when 11\nthen 540 when 12 then 991 when 13 then 754 when 14 then 759 when 15\nthen 855 when 16 then 305 when 17 then 970 when 18 then 617 when 19\nthen 347 when 20 then 431 when 21 then 134 when 22 then 176 when 23\nthen 343 when 24 then 88 when 25 then 656 when 26 then 328 when 27 then\n958 when 28 then 809 when 29 then 858 when 30 then 214 when 31 then 527\n when 32 then 318\n when 33 then 557 when 34 then 735 when 35 then 683 when 36 then 930\n when 37 then 707 when 38 then 892 when 39 then 973 when 40 then 477\n when 41 then 631 when 42 then 513 when 43 then\n 469 when 44 then 385 when 45 then 272 when 46 then 324 when 47 then\n690 when 48 then 242 when 49 then 940 when 50 then 36 when 51 then 674\n when 52 then 74 when 53 then 212 when 54 then 17 when 55 then 163 when\n56 then 868 when 57 then 345 when 58 then 120 when 59 then 677 when 60\nthen 202 when 61 then 335 when 62 then 204 when 63 then 520 when 64\nthen 891 when\n65 then 938 when 66 then 203 when 67 then 822 when 68 then 645 when 69\nthen 95 when 70 then 795 when 71 then 123 when 72 then 726 when 73 then\n308 when 74 then 591 when 75 then 110 when 76 then 581 when 77 then 915\n when 78 then 800 when 79 then 823 when 80 then 855 when 81 then 836\n when 82 then 496 when 83 then 929 when 84 then 48 when 85 then 513\n when 86 then 92\n when 87 then 916 when 88 then 858 when 89 then 213 when 90 then 593\n when 91 then 60 when 92 then 547 when 93 then 796 when 94 then 581\n when 95 then 438 when 96 then 735 when 97 
then\n 783 when 98 then 260 when 99 then 380 when 100 then 878 end AS\npos_asset,\n\ndate, value\nFROM metric_value\nWHERE\ndate >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY metric_value.id_metric, metric_value.id_asset, date;\n\n\nThanks again for the suggestions :) I'm still very happy for any ideas on\nhow to get back the 2 seconds longer <https://explain.depesz.com/s/NgfZ>\nthan it takes without any joins to the dimension tables (3.7 seconds), or\nexplain why the cartesian join helps and/or how we can get the same speedup\nwithout materialising it.\n\nSELECT id_metric, id_asset, date, value\nFROM metric_value\nWHERE\ndate >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY date, metric_value.id_metric;\n\n\nCheers, Chris.",
"msg_date": "Mon, 26 Jun 2017 16:43:04 +0100",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Fwd: Slow query from ~7M rows,\n joined to two tables of ~100 rows each"
},
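For reference, the two structural changes that gave the biggest wins in the experiments above, collected in one place; the columns trailing date in the new index are an assumption (the thread only says the index starts with date):

CREATE INDEX metric_value_date_idx ON metric_value (date, id_metric, id_asset);
ANALYZE metric_value;

CREATE TABLE cartesian AS
SELECT DISTINCT id_metric, id_asset FROM metric_value;
ALTER TABLE cartesian ADD PRIMARY KEY (id_metric, id_asset);
ANALYZE cartesian;

Adding the primary key is what flipped the plan to merge joins; a plausible reading is that it hands the planner a unique, pre-sorted path over (id_metric, id_asset), though, as noted above, that is a guess.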
{
"msg_contents": "On Jun 26, Chris Wilson modulated:\n> ...\n> In your case, the equivalent hack would be to compile the small\n> dimension tables into big CASE statements I suppose...\n> \n> \n> Nice idea! I tried this but unfortunately it made the query 16 seconds\n> slower�(up to 22 seconds) instead of faster.\n\nOther possible rewrites to try instead of joins:\n\n -- replace the case statement with a scalar subquery\n\n -- replace the case statement with a stored procedure wrapping that scalar subquery\n and declare the procedure as STABLE or even IMMUTABLE\n\nThese are shots in the dark, but seem easy enough to experiment with and might\nbehave differently if the query planner realizes it can cache results for \nrepeated use of the same ~100 input values.\n\n\nKarl\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Jun 2017 10:01:20 -0700",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Slow query from ~7M rows, joined to two tables of ~100 rows\n each"
},
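Sketches of the two rewrites suggested above, with table and column names taken from the thread and the id/pos types assumed to be integers:

-- 1) scalar subqueries in place of the joins / CASE statement
SELECT (SELECT mp.pos FROM metric_pos mp WHERE mp.id = v.id_metric) AS pos_metric,
       (SELECT ap.pos FROM asset_pos ap WHERE ap.id = v.id_asset) AS pos_asset,
       v.date, v.value
FROM metric_value v
WHERE v.date >= '2016-01-01' AND v.date < '2016-06-01'
  AND v.timerange_transaction @> current_timestamp
ORDER BY v.id_metric, v.id_asset, v.date;

-- 2) the same lookup wrapped in a STABLE SQL function,
--    used as: SELECT metric_pos_of(id_metric) AS pos_metric, ... FROM metric_value ...
CREATE FUNCTION metric_pos_of(p_id integer) RETURNS integer
LANGUAGE sql STABLE
AS $$ SELECT pos FROM metric_pos WHERE id = p_id $$;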
{
"msg_contents": "On Jun 26, Chris Wilson modulated:\n\n> I created the index starting with date and it did make a big\n> difference: down to 10.3 seconds using a bitmap index scan and bitmap\n> heap scan (and then two hash joins as before).\n> \n\nBy the way, what kind of machine are you using? CPU, RAM, backing\nstorage?\n\nI tried running your original test code and the query completed in\nabout 8 seconds, and adding the index changes and analyze statement\nbrought it down to around 2.3 seconds on my workstation with Postgres\n9.5.7. On an unrelated development VM with Postgres 9.6.3, the final\nform took around 4 seconds.\n\n\nKarl\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Jun 2017 13:32:21 -0700",
"msg_from": "Karl Czajkowski <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Fwd: Slow query from ~7M rows, joined to two tables of ~100 rows\n each"
}
] |
[
{
"msg_contents": "Hi Karl and Jeff,\n\nOn 26 June 2017 at 22:22, Jeff Janes <[email protected]> wrote:\n\n> Be warned that \"explain (analyze)\" can substantially slow down and distort\n> this type of query, especially when sorting. You should run \"explain\n> (analyze, timing off)\" first, and then only trust \"explain (analyze)\" if\n> the overall execution times between them are similar.\n>\n\nThanks, I didn't realise that. I will use TIMING OFF from now on.\n\nOn 26 June 2017 at 21:32, Karl Czajkowski <[email protected]> wrote:\n\n> > I created the index starting with date and it did make a big\n> > difference: down to 10.3 seconds using a bitmap index scan and bitmap\n> > heap scan (and then two hash joins as before).\n>\n> By the way, what kind of machine are you using? CPU, RAM, backing\n> storage?\n>\n> I tried running your original test code and the query completed in\n> about 8 seconds, and adding the index changes and analyze statement\n> brought it down to around 2.3 seconds on my workstation with Postgres\n> 9.5.7. On an unrelated development VM with Postgres 9.6.3, the final\n> form took around 4 seconds.\n>\n\nThis is very interesting. I'm using a powerful box:\n\n - HP ProLiant DL580 G7, Xeon(R) CPU E7- 4850 @ 2.00GHz * 80 cores, 128\n GB RAM, hardware RAID, 3.6 TB SAS array.\n\n total used free shared buff/cache\navailable\nMem: 125G 2.2G 834M 30G 122G\n91G\nSwap: 9.3G 98M 9.2G\n\n\nAnd disk I/O is fast:\n\n$ dd if=/dev/zero of=/local/tmp/bigfile bs=1M count=100k\n107374182400 bytes (107 GB) copied, 234.751 s, 457 MB/s\n\n\nBut your question let me to investigate and discover that we were compiling\nPostgres with no optimisations! I've built a new one with -O2 and got the\ntime down to 3.6 seconds (EXPLAIN with either TIMING OFF or BUFFERS,\nthere's no material difference).\n\nAnd again, vacuum your tables. Heap fetches aren't cheap.\n>\n\nSorry, I don't understand, why does VACUUM help on a table with no deleted\nrows? Do you mean ANALYZE?\n\n\n> > work_mem = 100MB\n>\n> Can you give it more than that? How many simultaneous connections do you\n> expect?\n>\n\nYes, I can and it does help! By increasing work_mem to 200 MB, I managed to\nconvert the external merge sort (on disk) to a quicksort in memory, and\nreached 3.3 seconds.\n\nThe cartestian join is slightly faster at 3.0 seconds, but not enough to be\nworth the overhead of creating the join table. I still wish I understood\nwhy it helps.\n\nJeff, thanks for the explanation about hash joins and sorting. 
I wish I\nunderstood why a hash join wouldn't preserve order in the first table even\nif it has to be done incrementally, since I expect that we'd still be\nreading records from the first table in order, but just in batches.\n\nOther possible rewrites to try instead of joins:\n>\n> -- replace the case statement with a scalar subquery\n>\n> -- replace the case statement with a stored procedure wrapping that\n> scalar subquery\n> and declare the procedure as STABLE or even IMMUTABLE\n>\n> These are shots in the dark, but seem easy enough to experiment with and\n> might\n> behave differently if the query planner realizes it can cache results for\n> repeated use of the same ~100 input values.\n\n\nI hit a jackpot with jsonb_object_agg, getting down to 2.1 seconds (2.8\nwith BUFFERS and TIMING <https://explain.depesz.com/s/uWyM>):\n\nexplain (analyze, timing off)\nwith metric as (select jsonb_object_agg(id, pos) AS metric_lookup from\nmetric_pos),\n asset as (select jsonb_object_agg(id, pos) AS asset_lookup from\nasset_pos)\nSELECT metric_lookup->id_metric AS pos_metric, asset_lookup->id_asset AS\npos_asset, date, value\nFROM metric_value, metric, asset\nWHERE date >= '2016-01-01' and date < '2016-06-01'\nAND timerange_transaction @> current_timestamp\nORDER BY metric_value.id_metric, metric_value.id_asset, date;\n\n\nWhich is awesome! Thank you so much for your help, both of you!\n\nNow if only we could make hash joins as fast as JSONB hash lookups :)\n\nCheers, Chris.\n\nHi Karl and Jeff,On 26 June 2017 at 22:22, Jeff Janes <[email protected]> wrote:Be warned that \"explain (analyze)\" can substantially slow down and distort this type of query, especially when sorting. You should run \"explain (analyze, timing off)\" first, and then only trust \"explain (analyze)\" if the overall execution times between them are similar.Thanks, I didn't realise that. I will use TIMING OFF from now on.On 26 June 2017 at 21:32, Karl Czajkowski <[email protected]> wrote:> I created the index starting with date and it did make a big> difference: down to 10.3 seconds using a bitmap index scan and bitmap> heap scan (and then two hash joins as before).By the way, what kind of machine are you using? CPU, RAM, backingstorage?I tried running your original test code and the query completed inabout 8 seconds, and adding the index changes and analyze statementbrought it down to around 2.3 seconds on my workstation with Postgres9.5.7. On an unrelated development VM with Postgres 9.6.3, the finalform took around 4 seconds.This is very interesting. I'm using a powerful box:HP ProLiant DL580 G7, Xeon(R) CPU E7- 4850 @ 2.00GHz * 80 cores, 128 GB RAM, hardware RAID, 3.6 TB SAS array. total used free shared buff/cache availableMem: 125G 2.2G 834M 30G 122G 91GSwap: 9.3G 98M 9.2GAnd disk I/O is fast:$ dd if=/dev/zero of=/local/tmp/bigfile bs=1M count=100k107374182400 bytes (107 GB) copied, 234.751 s, 457 MB/sBut your question let me to investigate and discover that we were compiling Postgres with no optimisations! I've built a new one with -O2 and got the time down to 3.6 seconds (EXPLAIN with either TIMING OFF or BUFFERS, there's no material difference).And again, vacuum your tables. Heap fetches aren't cheap.Sorry, I don't understand, why does VACUUM help on a table with no deleted rows? Do you mean ANALYZE? > work_mem = 100MBCan you give it more than that? How many simultaneous connections do you expect?Yes, I can and it does help! 
By increasing work_mem to 200 MB, I managed to convert the external merge sort (on disk) to a quicksort in memory, and reached 3.3 seconds.The cartestian join is slightly faster at 3.0 seconds, but not enough to be worth the overhead of creating the join table. I still wish I understood why it helps.Jeff, thanks for the explanation about hash joins and sorting. I wish I understood why a hash join wouldn't preserve order in the first table even if it has to be done incrementally, since I expect that we'd still be reading records from the first table in order, but just in batches.Other possible rewrites to try instead of joins: -- replace the case statement with a scalar subquery -- replace the case statement with a stored procedure wrapping that scalar subquery and declare the procedure as STABLE or even IMMUTABLEThese are shots in the dark, but seem easy enough to experiment with and mightbehave differently if the query planner realizes it can cache results forrepeated use of the same ~100 input values.I hit a jackpot with jsonb_object_agg, getting down to 2.1 seconds (2.8 with BUFFERS and TIMING):explain (analyze, timing off)with metric as (select jsonb_object_agg(id, pos) AS metric_lookup from metric_pos), asset as (select jsonb_object_agg(id, pos) AS asset_lookup from asset_pos)SELECT metric_lookup->id_metric AS pos_metric, asset_lookup->id_asset AS pos_asset, date, valueFROM metric_value, metric, assetWHERE date >= '2016-01-01' and date < '2016-06-01'AND timerange_transaction @> current_timestampORDER BY metric_value.id_metric, metric_value.id_asset, date; Which is awesome! Thank you so much for your help, both of you!Now if only we could make hash joins as fast as JSONB hash lookups :)Cheers, Chris.",
"msg_date": "Tue, 27 Jun 2017 14:15:04 +0100",
"msg_from": "Chris Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Fwd: Stalled post to pgsql-performance"
}
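The work_mem experiment above can be repeated per session without touching postgresql.conf; the Sort Method line in the plan shows whether the external merge on disk became an in-memory quicksort:

SET work_mem = '200MB';   -- session-local

EXPLAIN (ANALYZE, TIMING OFF)
SELECT id_metric, id_asset, date, value
FROM metric_value
WHERE date >= '2016-01-01' AND date < '2016-06-01'
  AND timerange_transaction @> current_timestamp
ORDER BY id_metric, id_asset, date;

RESET work_mem;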
] |
[
{
"msg_contents": "Hi all,\n\nwe use schemata to separate our customers in a multi-tenant setup (9.5.7,\nDebian stable). Each tenant is managed in his own schema with all the\ntables that only he can access. All tables in all schemata are the same in\nterms of their DDL: Every tenant uses e.g. his own table 'address'. We\ncurrently manage around 1200 schemata (i.e. tenants) on one cluster. Every\nschema consists currently of ~200 tables - so we end up with ~240000 tables\nplus constraints, indexes, sequences et al.\n\nOur current approach is quite nice in terms of data privacy because every\ntenant is isolated from all other tenants. A tenant uses his own user that\ngives him only access to the corresponding schema. Performance is great for\nus - we didn't expect Postgres to scale so well!\n\nBut performance is pretty bad when we query things in the\ninformation_schema:\n\nSELECT\n *\nFROM information_schema.tables\nWHERE table_schema = 'foo'\nAND table_name = 'bar';``\n\nAbove query results in a large sequence scan with a filter that removes\n1305161 rows:\n\n\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265) (actual\ntime=383.505..383.505 rows=0 loops=1)\n -> Nested Loop (cost=0.00..101144.65 rows=3 width=141) (actual\ntime=383.504..383.504 rows=0 loops=1)\n Join Filter: (nc.oid = c.relnamespace)\n -> Seq Scan on pg_class c (cost=0.00..101023.01 rows=867\nwidth=77) (actual time=383.502..383.502 rows=0 loops=1)\n Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND\n(((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND\n(pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT,\nINSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n Rows Removed by Filter: 1305161\n -> Materialize (cost=0.00..56.62 rows=5 width=68) (never\nexecuted)\n -> Seq Scan on pg_namespace nc (cost=0.00..56.60 rows=5\nwidth=68) (never executed)\n Filter: ((NOT pg_is_other_temp_schema(oid)) AND\n(((nspname)::information_schema.sql_identifier)::text = 'foo'::text))\n -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never executed)\n -> Index Scan using pg_type_oid_index on pg_type t\n (cost=0.42..8.12 rows=1 width=72) (never executed)\n Index Cond: (c.reloftype = oid)\n -> Index Scan using pg_namespace_oid_index on pg_namespace nt\n (cost=0.28..0.30 rows=1 width=68) (never executed)\n Index Cond: (oid = t.typnamespace)\n Planning time: 0.624 ms\n Execution time: 383.784 ms\n(16 rows)\n\nWe noticed the degraded performance first when using the psql cli. Pressing\ntab after beginning a WHERE clause results in a query against the\ninformation_schema which is pretty slow and ends in \"lag\" when trying to\nenter queries.\n\nWe also use Flyway (https://flywaydb.org/) to handle our database\nmigrations. Unfortunately Flyway is querying the information_schema to\ncheck if specific tables exist (I guess this is one of the reasons\ninformation_schema exists) and therefore vastly slows down the migration of\nour tenants. 
Our last migration run on all tenants (schemata) almost took\n2h because the above query is executed multiple times per tenant. The\nmigration run consisted of multiple sql files to be executed and triggered\nmore than 10 queries on information_schema per tenant.\n\nI don't think that Flyway is to blame because querying the\ninformation_schema should be a fast operation (and was fast for us when we\nhad less schemata). I tried to speedup querying pg_class by adding indexes\n(after enabling allow_system_table_mods) but didn't succeed. The function\ncall 'pg_has_role' is probably not easy to optimize.\n\nPostgres is really doing a great job to handle those many schemata and\ntables but doesn't scale well when querying information_schema. I actually\ndon't want to change my current multi-tenant setup (one schema per tenant)\nas it is working great but the slow information_schema is killing our\ndeployments.\n\nAre there any other options besides switching from\none-schema-per-tenant-approach? Any help is greatly appreciated!\n\nRegards,\nUlf\n\nHi all,we use schemata to separate our customers in a multi-tenant setup (9.5.7, Debian stable). Each tenant is managed in his own schema with all the tables that only he can access. All tables in all schemata are the same in terms of their DDL: Every tenant uses e.g. his own table 'address'. We currently manage around 1200 schemata (i.e. tenants) on one cluster. Every schema consists currently of ~200 tables - so we end up with ~240000 tables plus constraints, indexes, sequences et al.Our current approach is quite nice in terms of data privacy because every tenant is isolated from all other tenants. A tenant uses his own user that gives him only access to the corresponding schema. Performance is great for us - we didn't expect Postgres to scale so well!But performance is pretty bad when we query things in the information_schema:SELECT *FROM information_schema.tablesWHERE table_schema = 'foo'AND table_name = 'bar';``Above query results in a large sequence scan with a filter that removes 1305161 rows: QUERY PLAN------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265) (actual time=383.505..383.505 rows=0 loops=1) -> Nested Loop (cost=0.00..101144.65 rows=3 width=141) (actual time=383.504..383.504 rows=0 loops=1) Join Filter: (nc.oid = c.relnamespace) -> Seq Scan on pg_class c (cost=0.00..101023.01 rows=867 width=77) (actual time=383.502..383.502 rows=0 loops=1) Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND (((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text))) Rows Removed by Filter: 1305161 -> Materialize (cost=0.00..56.62 rows=5 width=68) (never executed) -> Seq Scan on pg_namespace nc (cost=0.00..56.60 rows=5 width=68) (never executed) Filter: ((NOT pg_is_other_temp_schema(oid)) AND (((nspname)::information_schema.sql_identifier)::text = 'foo'::text)) -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never executed) -> Index Scan using pg_type_oid_index on 
pg_type t (cost=0.42..8.12 rows=1 width=72) (never executed) Index Cond: (c.reloftype = oid) -> Index Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.28..0.30 rows=1 width=68) (never executed) Index Cond: (oid = t.typnamespace) Planning time: 0.624 ms Execution time: 383.784 ms(16 rows)We noticed the degraded performance first when using the psql cli. Pressing tab after beginning a WHERE clause results in a query against the information_schema which is pretty slow and ends in \"lag\" when trying to enter queries.We also use Flyway (https://flywaydb.org/) to handle our database migrations. Unfortunately Flyway is querying the information_schema to check if specific tables exist (I guess this is one of the reasons information_schema exists) and therefore vastly slows down the migration of our tenants. Our last migration run on all tenants (schemata) almost took 2h because the above query is executed multiple times per tenant. The migration run consisted of multiple sql files to be executed and triggered more than 10 queries on information_schema per tenant.I don't think that Flyway is to blame because querying the information_schema should be a fast operation (and was fast for us when we had less schemata). I tried to speedup querying pg_class by adding indexes (after enabling allow_system_table_mods) but didn't succeed. The function call 'pg_has_role' is probably not easy to optimize.Postgres is really doing a great job to handle those many schemata and tables but doesn't scale well when querying information_schema. I actually don't want to change my current multi-tenant setup (one schema per tenant) as it is working great but the slow information_schema is killing our deployments.Are there any other options besides switching from one-schema-per-tenant-approach? Any help is greatly appreciated!Regards,Ulf",
"msg_date": "Wed, 28 Jun 2017 01:57:46 +0200",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of information_schema with many schemata and tables"
},
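A possible workaround when the check is only "does this table exist in this schema?": going straight to the catalogs (or using to_regclass, available since 9.4) sidesteps the pg_has_role/privilege filter that information_schema.tables applies to every pg_class row. A sketch with the placeholder names from the post:

-- returns NULL when the relation does not exist
SELECT to_regclass('foo.bar') IS NOT NULL AS table_exists;

-- or an explicit lookup that can use pg_class_relname_nsp_index
SELECT EXISTS (
    SELECT 1
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE n.nspname = 'foo'
      AND c.relname = 'bar'
      AND c.relkind IN ('r', 'v', 'f')
) AS table_exists;

This only helps callers you can change, and it skips the view's privilege checks, so it is a workaround for tooling under your control rather than a fix for Flyway itself.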
{
"msg_contents": "On Wednesday 28 June 2017 05:27 AM, Ulf Lohbrügge wrote:\n> Hi all,\n>\n> we use schemata to separate our customers in a multi-tenant setup (9.5.7, Debian stable). Each tenant is managed in his own schema with all the tables that only he can access. All tables in all schemata are the same in terms of their DDL: Every tenant uses e.g. his own table 'address'. We currently manage around 1200 schemata (i.e. tenants) on one cluster. Every schema consists currently of ~200 tables - so we end up with ~240000 tables plus constraints, indexes, sequences et al.\n>\n> Our current approach is quite nice in terms of data privacy because every tenant is isolated from all other tenants. A tenant uses his own user that gives him only access to the corresponding schema. Performance is great for us - we didn't expect Postgres to scale so well!\n>\n> But performance is pretty bad when we query things in the information_schema:\n>\n> SELECT\n> *\n> FROM information_schema.tables\n> WHERE table_schema = 'foo'\n> AND table_name = 'bar';``\n>\n> Above query results in a large sequence scan with a filter that removes 1305161 rows:\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265) (actual time=383.505..383.505 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..101144.65 rows=3 width=141) (actual time=383.504..383.504 rows=0 loops=1)\n> Join Filter: (nc.oid = c.relnamespace)\n> -> Seq Scan on pg_class c (cost=0.00..101023.01 rows=867 width=77) (actual time=383.502..383.502 rows=0 loops=1)\n> Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND (((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n> Rows Removed by Filter: 1305161\n> -> Materialize (cost=0.00..56.62 rows=5 width=68) (never executed)\n> -> Seq Scan on pg_namespace nc (cost=0.00..56.60 rows=5 width=68) (never executed)\n> Filter: ((NOT pg_is_other_temp_schema(oid)) AND (((nspname)::information_schema.sql_identifier)::text = 'foo'::text))\n> -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never executed)\n> -> Index Scan using pg_type_oid_index on pg_type t (cost=0.42..8.12 rows=1 width=72) (never executed)\n> Index Cond: (c.reloftype = oid)\n> -> Index Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.28..0.30 rows=1 width=68) (never executed)\n> Index Cond: (oid = t.typnamespace)\n> Planning time: 0.624 ms\n> Execution time: 383.784 ms\n> (16 rows)\n>\n> We noticed the degraded performance first when using the psql cli. Pressing tab after beginning a WHERE clause results in a query against the information_schema which is pretty slow and ends in \"lag\" when trying to enter queries.\n>\n> We also use Flyway (https://flywaydb.org/) to handle our database migrations. Unfortunately Flyway is querying the information_schema to check if specific tables exist (I guess this is one of the reasons information_schema exists) and therefore vastly slows down the migration of our tenants. 
Our last migration run on all tenants (schemata) almost took 2h because the above query is executed multiple times per tenant. The migration run consisted of multiple sql files to be executed and triggered more than 10 queries on information_schema per tenant.\n>\n> I don't think that Flyway is to blame because querying the information_schema should be a fast operation (and was fast for us when we had less schemata). I tried to speedup querying pg_class by adding indexes (after enabling allow_system_table_mods) but didn't succeed. The function call 'pg_has_role' is probably not easy to optimize.\n>\n> Postgres is really doing a great job to handle those many schemata and tables but doesn't scale well when querying information_schema. I actually don't want to change my current multi-tenant setup (one schema per tenant) as it is working great but the slow information_schema is killing our deployments.\n>\n> Are there any other options besides switching from one-schema-per-tenant-approach? Any help is greatly appreciated!\n\nHave you tried a `REINDEX SYSTEM <dbname>`?\n\n>\n> Regards,\n> Ulf\n\n-- \n#!/usr/bin/env regards\nChhatoi Pritam Baral\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jun 2017 07:01:39 +0530",
"msg_from": "Pritam Baral <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of information_schema with many schemata\n and tables"
},
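For completeness, what the suggestion amounts to, plus a quick way to list the indexes pg_class currently carries ("mydb" is a placeholder database name):

REINDEX SYSTEM mydb;

SELECT indexname, indexdef
FROM pg_indexes
WHERE schemaname = 'pg_catalog' AND tablename = 'pg_class';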
{
"msg_contents": "Nope, I didn't try that yet. But I don't have the impression that\nreindexing the indexes in information_schema will help. The table\ninformation_schema.tables consists of the following indexes:\n\n \"pg_class_oid_index\" UNIQUE, btree (oid)\n \"pg_class_relname_nsp_index\" UNIQUE, btree (relname, relnamespace)\n \"pg_class_tblspc_relfilenode_index\" btree (reltablespace, relfilenode)\n\nThe costly sequence scan in question on pg_class happens with the following\nWHERE clause:\n\nWHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'f'::\"char\"])) AND\nNOT pg_is_other_temp_schema(nc.oid) AND (pg_has_role(c.relowner,\n'USAGE'::text) OR has_table_privilege(c.oid, 'SELECT, INSERT, UPDATE,\nDELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(c.oid,\n'SELECT, INSERT, UPDATE, REFERENCES'::text));\n\nBesides pg_class_oid_index none of the referenced columns is indexed. I\ntried to add an index on relowner but didn't succeed because the column is\nused in the function call pg_has_role and the query is still forced to do a\nsequence scan.\n\nRegards,\nUlf\n\n2017-06-28 3:31 GMT+02:00 Pritam Baral <[email protected]>:\n\n> On Wednesday 28 June 2017 05:27 AM, Ulf Lohbrügge wrote:\n> > Hi all,\n> >\n> > we use schemata to separate our customers in a multi-tenant setup\n> (9.5.7, Debian stable). Each tenant is managed in his own schema with all\n> the tables that only he can access. All tables in all schemata are the same\n> in terms of their DDL: Every tenant uses e.g. his own table 'address'. We\n> currently manage around 1200 schemata (i.e. tenants) on one cluster. Every\n> schema consists currently of ~200 tables - so we end up with ~240000 tables\n> plus constraints, indexes, sequences et al.\n> >\n> > Our current approach is quite nice in terms of data privacy because\n> every tenant is isolated from all other tenants. A tenant uses his own user\n> that gives him only access to the corresponding schema. 
Performance is\n> great for us - we didn't expect Postgres to scale so well!\n> >\n> > But performance is pretty bad when we query things in the\n> information_schema:\n> >\n> > SELECT\n> > *\n> > FROM information_schema.tables\n> > WHERE table_schema = 'foo'\n> > AND table_name = 'bar';``\n> >\n> > Above query results in a large sequence scan with a filter that removes\n> 1305161 rows:\n> >\n> >\n>\n> QUERY PLAN\n> > ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> -------------------------------------------------------\n> > Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265) (actual\n> time=383.505..383.505 rows=0 loops=1)\n> > -> Nested Loop (cost=0.00..101144.65 rows=3 width=141) (actual\n> time=383.504..383.504 rows=0 loops=1)\n> > Join Filter: (nc.oid = c.relnamespace)\n> > -> Seq Scan on pg_class c (cost=0.00..101023.01 rows=867\n> width=77) (actual time=383.502..383.502 rows=0 loops=1)\n> > Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND\n> (((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND\n> (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT,\n> INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\n> has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n> > Rows Removed by Filter: 1305161\n> > -> Materialize (cost=0.00..56.62 rows=5 width=68) (never\n> executed)\n> > -> Seq Scan on pg_namespace nc (cost=0.00..56.60 rows=5\n> width=68) (never executed)\n> > Filter: ((NOT pg_is_other_temp_schema(oid)) AND\n> (((nspname)::information_schema.sql_identifier)::text = 'foo'::text))\n> > -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never executed)\n> > -> Index Scan using pg_type_oid_index on pg_type t\n> (cost=0.42..8.12 rows=1 width=72) (never executed)\n> > Index Cond: (c.reloftype = oid)\n> > -> Index Scan using pg_namespace_oid_index on pg_namespace nt\n> (cost=0.28..0.30 rows=1 width=68) (never executed)\n> > Index Cond: (oid = t.typnamespace)\n> > Planning time: 0.624 ms\n> > Execution time: 383.784 ms\n> > (16 rows)\n> >\n> > We noticed the degraded performance first when using the psql cli.\n> Pressing tab after beginning a WHERE clause results in a query against the\n> information_schema which is pretty slow and ends in \"lag\" when trying to\n> enter queries.\n> >\n> > We also use Flyway (https://flywaydb.org/) to handle our database\n> migrations. Unfortunately Flyway is querying the information_schema to\n> check if specific tables exist (I guess this is one of the reasons\n> information_schema exists) and therefore vastly slows down the migration of\n> our tenants. Our last migration run on all tenants (schemata) almost took\n> 2h because the above query is executed multiple times per tenant. The\n> migration run consisted of multiple sql files to be executed and triggered\n> more than 10 queries on information_schema per tenant.\n> >\n> > I don't think that Flyway is to blame because querying the\n> information_schema should be a fast operation (and was fast for us when we\n> had less schemata). I tried to speedup querying pg_class by adding indexes\n> (after enabling allow_system_table_mods) but didn't succeed. 
The function\n> call 'pg_has_role' is probably not easy to optimize.\n> >\n> > Postgres is really doing a great job to handle those many schemata and\n> tables but doesn't scale well when querying information_schema. I actually\n> don't want to change my current multi-tenant setup (one schema per tenant)\n> as it is working great but the slow information_schema is killing our\n> deployments.\n> >\n> > Are there any other options besides switching from one-schema-per-tenant-approach?\n> Any help is greatly appreciated!\n>\n> Have you tried a `REINDEX SYSTEM <dbname>`?\n>\n> >\n> > Regards,\n> > Ulf\n>\n> --\n> #!/usr/bin/env regards\n> Chhatoi Pritam Baral\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nNope, I didn't try that yet. But I don't have the impression that reindexing the indexes in information_schema will help. The table information_schema.tables consists of the following indexes: \"pg_class_oid_index\" UNIQUE, btree (oid) \"pg_class_relname_nsp_index\" UNIQUE, btree (relname, relnamespace) \"pg_class_tblspc_relfilenode_index\" btree (reltablespace, relfilenode)The costly sequence scan in question on pg_class happens with the following WHERE clause:WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'f'::\"char\"])) AND NOT pg_is_other_temp_schema(nc.oid) AND (pg_has_role(c.relowner, 'USAGE'::text) OR has_table_privilege(c.oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text));Besides pg_class_oid_index none of the referenced columns is indexed. I tried to add an index on relowner but didn't succeed because the column is used in the function call pg_has_role and the query is still forced to do a sequence scan.Regards,Ulf2017-06-28 3:31 GMT+02:00 Pritam Baral <[email protected]>:On Wednesday 28 June 2017 05:27 AM, Ulf Lohbrügge wrote:\n> Hi all,\n>\n> we use schemata to separate our customers in a multi-tenant setup (9.5.7, Debian stable). Each tenant is managed in his own schema with all the tables that only he can access. All tables in all schemata are the same in terms of their DDL: Every tenant uses e.g. his own table 'address'. We currently manage around 1200 schemata (i.e. tenants) on one cluster. Every schema consists currently of ~200 tables - so we end up with ~240000 tables plus constraints, indexes, sequences et al.\n>\n> Our current approach is quite nice in terms of data privacy because every tenant is isolated from all other tenants. A tenant uses his own user that gives him only access to the corresponding schema. 
Performance is great for us - we didn't expect Postgres to scale so well!\n>\n> But performance is pretty bad when we query things in the information_schema:\n>\n> SELECT\n> *\n> FROM information_schema.tables\n> WHERE table_schema = 'foo'\n> AND table_name = 'bar';``\n>\n> Above query results in a large sequence scan with a filter that removes 1305161 rows:\n>\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265) (actual time=383.505..383.505 rows=0 loops=1)\n> -> Nested Loop (cost=0.00..101144.65 rows=3 width=141) (actual time=383.504..383.504 rows=0 loops=1)\n> Join Filter: (nc.oid = c.relnamespace)\n> -> Seq Scan on pg_class c (cost=0.00..101023.01 rows=867 width=77) (actual time=383.502..383.502 rows=0 loops=1)\n> Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND (((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n> Rows Removed by Filter: 1305161\n> -> Materialize (cost=0.00..56.62 rows=5 width=68) (never executed)\n> -> Seq Scan on pg_namespace nc (cost=0.00..56.60 rows=5 width=68) (never executed)\n> Filter: ((NOT pg_is_other_temp_schema(oid)) AND (((nspname)::information_schema.sql_identifier)::text = 'foo'::text))\n> -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never executed)\n> -> Index Scan using pg_type_oid_index on pg_type t (cost=0.42..8.12 rows=1 width=72) (never executed)\n> Index Cond: (c.reloftype = oid)\n> -> Index Scan using pg_namespace_oid_index on pg_namespace nt (cost=0.28..0.30 rows=1 width=68) (never executed)\n> Index Cond: (oid = t.typnamespace)\n> Planning time: 0.624 ms\n> Execution time: 383.784 ms\n> (16 rows)\n>\n> We noticed the degraded performance first when using the psql cli. Pressing tab after beginning a WHERE clause results in a query against the information_schema which is pretty slow and ends in \"lag\" when trying to enter queries.\n>\n> We also use Flyway (https://flywaydb.org/) to handle our database migrations. Unfortunately Flyway is querying the information_schema to check if specific tables exist (I guess this is one of the reasons information_schema exists) and therefore vastly slows down the migration of our tenants. Our last migration run on all tenants (schemata) almost took 2h because the above query is executed multiple times per tenant. The migration run consisted of multiple sql files to be executed and triggered more than 10 queries on information_schema per tenant.\n>\n> I don't think that Flyway is to blame because querying the information_schema should be a fast operation (and was fast for us when we had less schemata). I tried to speedup querying pg_class by adding indexes (after enabling allow_system_table_mods) but didn't succeed. The function call 'pg_has_role' is probably not easy to optimize.\n>\n> Postgres is really doing a great job to handle those many schemata and tables but doesn't scale well when querying information_schema. 
I actually don't want to change my current multi-tenant setup (one schema per tenant) as it is working great but the slow information_schema is killing our deployments.\n>\n> Are there any other options besides switching from one-schema-per-tenant-approach? Any help is greatly appreciated!\n\nHave you tried a `REINDEX SYSTEM <dbname>`?\n\n>\n> Regards,\n> Ulf\n\n--\n#!/usr/bin/env regards\nChhatoi Pritam Baral\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 28 Jun 2017 11:38:14 +0200",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of information_schema with many schemata\n and tables"
},
{
"msg_contents": "2017-06-28 10:43 GMT+02:00 Pritam Baral <[email protected]>:\n\n>\n>\n> On Wednesday 28 June 2017 02:00 PM, Ulf Lohbrügge wrote:\n> > Nope, I didn't try that yet. But I don't have the impression that\n> reindexing the indexes in information_schema will help. The table\n> information_schema.tables consists of the following indexes:\n> >\n> > \"pg_class_oid_index\" UNIQUE, btree (oid)\n> > \"pg_class_relname_nsp_index\" UNIQUE, btree (relname, relnamespace)\n> > \"pg_class_tblspc_relfilenode_index\" btree (reltablespace,\n> relfilenode)\n>\n> information_schema.tables is not a table, it's a view; at least on 9.5[0].\n> These indexes you list are actually indexes on the pg_catalog.pg_class\n> table.\n>\n\nYes, it's a view. \\d+ information_schema.tables gives:\n\nView definition:\n SELECT current_database()::information_schema.sql_identifier AS\ntable_catalog,\n nc.nspname::information_schema.sql_identifier AS table_schema,\n c.relname::information_schema.sql_identifier AS table_name,\n CASE\n WHEN nc.oid = pg_my_temp_schema() THEN 'LOCAL TEMPORARY'::text\n WHEN c.relkind = 'r'::\"char\" THEN 'BASE TABLE'::text\n WHEN c.relkind = 'v'::\"char\" THEN 'VIEW'::text\n WHEN c.relkind = 'f'::\"char\" THEN 'FOREIGN TABLE'::text\n ELSE NULL::text\n END::information_schema.character_data AS table_type,\n NULL::character varying::information_schema.sql_identifier AS\nself_referencing_column_name,\n NULL::character varying::information_schema.character_data AS\nreference_generation,\n CASE\n WHEN t.typname IS NOT NULL THEN current_database()\n ELSE NULL::name\n END::information_schema.sql_identifier AS user_defined_type_catalog,\n nt.nspname::information_schema.sql_identifier AS\nuser_defined_type_schema,\n t.typname::information_schema.sql_identifier AS user_defined_type_name,\n CASE\n WHEN c.relkind = 'r'::\"char\" OR (c.relkind = ANY\n(ARRAY['v'::\"char\", 'f'::\"char\"])) AND\n(pg_relation_is_updatable(c.oid::regclass, false) & 8) = 8 THEN 'YES'::text\n ELSE 'NO'::text\n END::information_schema.yes_or_no AS is_insertable_into,\n CASE\n WHEN t.typname IS NOT NULL THEN 'YES'::text\n ELSE 'NO'::text\n END::information_schema.yes_or_no AS is_typed,\n NULL::character varying::information_schema.character_data AS\ncommit_action\n FROM pg_namespace nc\n JOIN pg_class c ON nc.oid = c.relnamespace\n LEFT JOIN (pg_type t\n JOIN pg_namespace nt ON t.typnamespace = nt.oid) ON c.reloftype = t.oid\n WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'f'::\"char\"]))\nAND NOT pg_is_other_temp_schema(nc.oid) AND (pg_has_role(c.relowner,\n'USAGE'::text) OR has_table_privilege(c.oid, 'SELECT, INSERT, UPDATE,\nDELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\nhas_any_column_privilege(c.oid, 'SELECT, INSERT, UPDATE,\nREFERENCES'::text));\n\n\n>\n> >\n> > The costly sequence scan in question on pg_class happens with the\n> following WHERE clause:\n> >\n> > WHERE (c.relkind = ANY (ARRAY['r'::\"char\", 'v'::\"char\", 'f'::\"char\"]))\n> AND NOT pg_is_other_temp_schema(nc.oid) AND (pg_has_role(c.relowner,\n> 'USAGE'::text) OR has_table_privilege(c.oid, 'SELECT, INSERT, UPDATE,\n> DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(c.oid,\n> 'SELECT, INSERT, UPDATE, REFERENCES'::text));\n>\n> This is not the bottleneck WHERE clause the query plan from your first\n> mail shows. 
That one is:\n>\n> ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND (((relname)::information_\n> schema.sql_identifier)::text = 'bar'::text) AND (pg_has_role(relowner,\n> 'USAGE'::text) OR has_table_privilege(oid, 'SELECT, INSERT, UPDATE, DELETE,\n> TRUNCATE, REFERENCES, TRIGGER'::text) OR has_any_column_privilege(oid,\n> 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n>\n\nThe part you copied is from the EXPLAIN ANALYZE output. The WHERE clause I\nposted earlier (or see view definition) above does unfortunately not\ncontain the relname.\n\n\n>\n> I can say with certainty that an index on pg_catalog.pg_class.relname is\n> going to speed this up. Postgres doesn't allow modifying system catalogs,\n> but the `REINDEX SYSTEM <dbname>;` command should rebuild the system\n> indexes and pg_catalog.pg_class.relname should be included in them (I\n> tested on 9.6).\n>\n> Do try that once. If you still see sequential scans, check what indexes\n> are present on pg_catalog.pg_class.\n>\n\nI just fired a 'REINDEX SYSTEM <dbname>;' but the output of EXPLAIN ANALYZE\nis unchanged and the query duration did not change.\n\nBest Regards,\nUlf\n\n\n>\n>\n> >\n> > Besides pg_class_oid_index none of the referenced columns is indexed. I\n> tried to add an index on relowner but didn't succeed because the column is\n> used in the function call pg_has_role and the query is still forced to do a\n> sequence scan.\n> >\n> > Regards,\n> > Ulf\n> >\n> > 2017-06-28 3:31 GMT+02:00 Pritam Baral <[email protected] <mailto:\n> [email protected]>>:\n> >\n> > On Wednesday 28 June 2017 05:27 AM, Ulf Lohbrügge wrote:\n> > > Hi all,\n> > >\n> > > we use schemata to separate our customers in a multi-tenant setup\n> (9.5.7, Debian stable). Each tenant is managed in his own schema with all\n> the tables that only he can access. All tables in all schemata are the same\n> in terms of their DDL: Every tenant uses e.g. his own table 'address'. We\n> currently manage around 1200 schemata (i.e. tenants) on one cluster. Every\n> schema consists currently of ~200 tables - so we end up with ~240000 tables\n> plus constraints, indexes, sequences et al.\n> > >\n> > > Our current approach is quite nice in terms of data privacy\n> because every tenant is isolated from all other tenants. 
A tenant uses his\n> own user that gives him only access to the corresponding schema.\n> Performance is great for us - we didn't expect Postgres to scale so well!\n> > >\n> > > But performance is pretty bad when we query things in the\n> information_schema:\n> > >\n> > > SELECT\n> > > *\n> > > FROM information_schema.tables\n> > > WHERE table_schema = 'foo'\n> > > AND table_name = 'bar';``\n> > >\n> > > Above query results in a large sequence scan with a filter that\n> removes 1305161 rows:\n> > >\n> > >\n>\n> QUERY PLAN\n> > > ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> ------------------------------------------------------------\n> -------------------------------------------------------\n> > > Nested Loop Left Join (cost=0.70..101170.18 rows=3 width=265)\n> (actual time=383.505..383.505 rows=0 loops=1)\n> > > -> Nested Loop (cost=0.00..101144.65 rows=3 width=141)\n> (actual time=383.504..383.504 rows=0 loops=1)\n> > > Join Filter: (nc.oid = c.relnamespace)\n> > > -> Seq Scan on pg_class c (cost=0.00..101023.01\n> rows=867 width=77) (actual time=383.502..383.502 rows=0 loops=1)\n> > > Filter: ((relkind = ANY ('{r,v,f}'::\"char\"[])) AND\n> (((relname)::information_schema.sql_identifier)::text = 'bar'::text) AND\n> (pg_has_role(relowner, 'USAGE'::text) OR has_table_privilege(oid, 'SELECT,\n> INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER'::text) OR\n> has_any_column_privilege(oid, 'SELECT, INSERT, UPDATE, REFERENCES'::text)))\n> > > Rows Removed by Filter: 1305161\n> > > -> Materialize (cost=0.00..56.62 rows=5 width=68)\n> (never executed)\n> > > -> Seq Scan on pg_namespace nc (cost=0.00..56.60\n> rows=5 width=68) (never executed)\n> > > Filter: ((NOT pg_is_other_temp_schema(oid))\n> AND (((nspname)::information_schema.sql_identifier)::text = 'foo'::text))\n> > > -> Nested Loop (cost=0.70..8.43 rows=1 width=132) (never\n> executed)\n> > > -> Index Scan using pg_type_oid_index on pg_type t\n> (cost=0.42..8.12 rows=1 width=72) (never executed)\n> > > Index Cond: (c.reloftype = oid)\n> > > -> Index Scan using pg_namespace_oid_index on\n> pg_namespace nt (cost=0.28..0.30 rows=1 width=68) (never executed)\n> > > Index Cond: (oid = t.typnamespace)\n> > > Planning time: 0.624 ms\n> > > Execution time: 383.784 ms\n> > > (16 rows)\n> > >\n> > > We noticed the degraded performance first when using the psql cli.\n> Pressing tab after beginning a WHERE clause results in a query against the\n> information_schema which is pretty slow and ends in \"lag\" when trying to\n> enter queries.\n> > >\n> > > We also use Flyway (https://flywaydb.org/) to handle our database\n> migrations. Unfortunately Flyway is querying the information_schema to\n> check if specific tables exist (I guess this is one of the reasons\n> information_schema exists) and therefore vastly slows down the migration of\n> our tenants. Our last migration run on all tenants (schemata) almost took\n> 2h because the above query is executed multiple times per tenant. The\n> migration run consisted of multiple sql files to be executed and triggered\n> more than 10 queries on information_schema per tenant.\n> > >\n> > > I don't think that Flyway is to blame because querying the\n> information_schema should be a fast operation (and was fast for us when we\n> had less schemata). 
I tried to speedup querying pg_class by adding indexes\n> (after enabling allow_system_table_mods) but didn't succeed. The function\n> call 'pg_has_role' is probably not easy to optimize.\n> > >\n> > > Postgres is really doing a great job to handle those many schemata\n> and tables but doesn't scale well when querying information_schema. I\n> actually don't want to change my current multi-tenant setup (one schema per\n> tenant) as it is working great but the slow information_schema is killing\n> our deployments.\n> > >\n> > > Are there any other options besides switching from\n> one-schema-per-tenant-approach? Any help is greatly appreciated!\n> >\n> > Have you tried a `REINDEX SYSTEM <dbname>`?\n> >\n> > >\n> > > Regards,\n> > > Ulf\n> >\n> > --\n> > #!/usr/bin/env regards\n> > Chhatoi Pritam Baral\n> >\n> >\n> >\n> > --\n> > Sent via pgsql-performance mailing list (\n> [email protected] <mailto:[email protected]\n> >)\n> > To make changes to your subscription:\n> > http://www.postgresql.org/mailpref/pgsql-performance <\n> http://www.postgresql.org/mailpref/pgsql-performance>\n> >\n> >\n>\n> [0]: https://www.postgresql.org/docs/9.5/static/infoschema-tables.html\n>\n> --\n> #!/usr/bin/env regards\n> Chhatoi Pritam Baral",
"msg_date": "Wed, 28 Jun 2017 16:25:15 +0200",
"msg_from": "=?UTF-8?Q?Ulf_Lohbr=C3=BCgge?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance of information_schema with many schemata\n and tables"
}
]
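
A workaround that follows from the discussion above, given here as a sketch rather than something either participant posted: information_schema.tables is only a view over pg_catalog.pg_class and pg_catalog.pg_namespace, and its filter casts relname to information_schema.sql_identifier, which is why the existing pg_class_relname_nsp_index cannot be used. An existence check written directly against the catalogs avoids that cast and can use the index. The schema name 'foo' and table name 'bar' are the placeholder names from the thread.

    -- Minimal sketch: does table 'bar' exist in schema 'foo'?
    -- The plain equality on relname (no sql_identifier cast) lets the planner
    -- use pg_class_relname_nsp_index instead of scanning pg_class sequentially.
    SELECT c.oid::regclass
      FROM pg_catalog.pg_class c
      JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
     WHERE c.relname = 'bar'
       AND n.nspname = 'foo'
       AND c.relkind IN ('r', 'v', 'f');

    -- Shorter alternative (PostgreSQL 9.4+): returns NULL when the relation
    -- does not exist instead of raising an error.
    SELECT to_regclass('foo.bar');

This does not help with tools such as Flyway that issue their own information_schema queries, but it keeps hand-written existence checks fast on clusters with hundreds of thousands of pg_class rows.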