[
{
"msg_contents": "Hi,\n\nBelow are two query plan for same SQL with and without an index. I noticed the Hash join order has changed since index has been created and this is not what I want. As it's hashing the big table and to provoke records in a small table. in Oracle, it's simple to add hint to point the table you'd like to be used as the provoke table. However, in Postgres, I don't know how to change the behavior.\n\n--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n\ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\n\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)\n -> Hash Join (cost=954343.95..2100211.04 rows=5 width=4) (actual time=24088.912..27095.206 rows=160 loops=1)\n Hash Cond: (((n.b_id)::text = (s.b_id)::text) AND (n.date = s.date))\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.016..10923.091 rows=37828459 loops=1)\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.015..1229.217 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=5.225..1177.539 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=5.756..1274.135 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=4.269..1131.570 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=9.383..1110.435 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=8.137..947.724 rows=4042442 loops=1)\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=7.717..791.339 rows=4118149 loops=1)\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.004..637.297 rows=3874278 loops=1)\n -> Hash (cost=954343.47..954343.47 rows=32 width=61) (actual time=10604.327..10604.327 rows=553 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 43kB\n -> Append (cost=0.00..954343.47 rows=32 width=61) (actual time=10.009..10602.075 rows=553 loops=1)\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (term = 'cat'::text)\n -> Seq Scan on term_weekly_20140503 s_1 (cost=0.00..262030.12 rows=8 width=46) (actual time=10.007..3738.945 rows=166 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 8516324\n -> Seq Scan on term_weekly_20140510 s_2 (cost=0.00..246131.35 rows=8 width=46) (actual time=52.059..2316.793 rows=152 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 8010196\n -> Seq Scan on term_weekly_20140517 s_3 (cost=0.00..233644.94 rows=8 width=46) (actual time=26.661..2504.273 rows=135 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 7632420\n -> Seq Scan on term_weekly_20140524 s_4 (cost=0.00..212537.06 rows=7 width=46) (actual time=49.773..2041.578 
rows=100 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 6950865\nTotal runtime: 27095.639 ms\n(31 rows)\n\n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. The advantage get from index has been totally lost because of this join order.\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)\n -> Hash Join (cost=1429580.49..1429795.15 rows=5 width=4) (actual time=22963.340..22991.214 rows=160 loops=1)\n Hash Cond: (((s.b_id)::text = (n.b_id)::text) AND (s.date = n.date))\n -> Append (cost=0.00..142.77 rows=32 width=61) (actual time=0.052..1.125 rows=553 loops=1)\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140503_3 on term_weekly_20140503 s_1 (cost=0.56..36.70 rows=8 width=46) (actual time=0.051..0.353 rows=166 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140510_3 on term_weekly_20140510 s_2 (cost=0.56..36.70 rows=8 width=46) (actual time=0.043..0.293 rows=152 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140517_3 on term_weekly_20140517 s_3 (cost=0.56..36.70 rows=8 width=46) (actual time=0.029..0.244 rows=135 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140524_3 on term_weekly_20140524 s_4 (cost=0.56..32.68 rows=7 width=46) (actual time=0.024..0.192 rows=100 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Hash (cost=862153.59..862153.59 rows=37828460 width=52) (actual time=22939.457..22939.457 rows=37828459 loops=1)\n Buckets: 4194304 Batches: 1 Memory Usage: 3144960kB\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.010..9100.690 rows=37828459 loops=1)\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.008..1099.194 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..861.678 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..860.374 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.003..852.169 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.005..835.201 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=0.005..663.261 rows=4042442 loops=1)\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=0.006..678.281 rows=4118149 loops=1)\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.003..635.296 rows=3874278 loops=1)\nTotal runtime: 22995.361 ms\n(27 rows)\n\nThanks,\nSuya\n\n\n\n\n\n\n\n\n\nHi,\n \nBelow are two query plan for same SQL with and without an index. 
I noticed the Hash join order has changed since index has been created and this is not what I want. As it’s hashing the big table and to provoke records in a small table.\n in Oracle, it’s simple to add hint to point the table you’d like to be used as the provoke table. However, in Postgres, I don’t know how to change the behavior.\n \n--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n \ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\n \n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)\n -> Hash Join (cost=954343.95..2100211.04 rows=5 width=4) (actual time=24088.912..27095.206 rows=160 loops=1)\n Hash Cond: (((n.b_id)::text = (s.b_id)::text) AND (n.date = s.date))\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.016..10923.091 rows=37828459 loops=1)\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.015..1229.217 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=5.225..1177.539 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=5.756..1274.135 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=4.269..1131.570 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=9.383..1110.435 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=8.137..947.724 rows=4042442 loops=1)\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=7.717..791.339 rows=4118149 loops=1)\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.004..637.297 rows=3874278 loops=1)\n -> Hash (cost=954343.47..954343.47 rows=32 width=61) (actual time=10604.327..10604.327 rows=553 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 43kB\n -> Append (cost=0.00..954343.47 rows=32 width=61) (actual time=10.009..10602.075 rows=553 loops=1)\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (term = 'cat'::text)\n -> Seq Scan on term_weekly_20140503 s_1 (cost=0.00..262030.12 rows=8 width=46) (actual time=10.007..3738.945 rows=166 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 8516324\n -> Seq Scan on term_weekly_20140510 s_2 (cost=0.00..246131.35 rows=8 width=46) (actual time=52.059..2316.793 rows=152 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 8010196\n -> Seq Scan on term_weekly_20140517 s_3 (cost=0.00..233644.94 rows=8 width=46) (actual time=26.661..2504.273 rows=135 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 7632420\n -> Seq Scan on term_weekly_20140524 s_4 (cost=0.00..212537.06 rows=7 width=46) (actual time=49.773..2041.578 rows=100 loops=1)\n Filter: (term = 'cat'::text)\n Rows Removed by Filter: 
6950865\nTotal runtime: 27095.639 ms\n(31 rows)\n \n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. The advantage get from index has been totally lost because of this join\n order.\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)\n -> Hash Join (cost=1429580.49..1429795.15 rows=5 width=4) (actual time=22963.340..22991.214 rows=160 loops=1)\n Hash Cond: (((s.b_id)::text = (n.b_id)::text) AND (s.date = n.date))\n -> Append (cost=0.00..142.77 rows=32 width=61) (actual time=0.052..1.125 rows=553 loops=1)\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140503_3 on term_weekly_20140503 s_1 (cost=0.56..36.70 rows=8 width=46) (actual time=0.051..0.353 rows=166 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140510_3 on term_weekly_20140510 s_2 (cost=0.56..36.70 rows=8 width=46) (actual time=0.043..0.293 rows=152 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140517_3 on term_weekly_20140517 s_3 (cost=0.56..36.70 rows=8 width=46) (actual time=0.029..0.244 rows=135 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140524_3 on term_weekly_20140524 s_4 (cost=0.56..32.68 rows=7 width=46) (actual time=0.024..0.192 rows=100 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Hash (cost=862153.59..862153.59 rows=37828460 width=52) (actual time=22939.457..22939.457 rows=37828459 loops=1)\n Buckets: 4194304 Batches: 1 Memory Usage: 3144960kB\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.010..9100.690 rows=37828459 loops=1)\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.008..1099.194 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..861.678 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..860.374 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.003..852.169 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.005..835.201 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=0.005..663.261 rows=4042442 loops=1)\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=0.006..678.281 rows=4118149 loops=1)\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.003..635.296 rows=3874278 loops=1)\nTotal runtime: 22995.361 ms\n(27 rows)\n \nThanks,\nSuya",
"msg_date": "Thu, 11 Sep 2014 01:05:12 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "how to change the provoke table in hash join"
},
{
"msg_contents": "On Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]>\nwrote:\n\n> --plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n>\n>\n>\n> dev=# explain analyze select distinct cs_id from lookup_weekly n inner\n> join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in\n> ('cat'::text);\n>\n>\n>\n>\n>\n>\n> QUERY PLAN\n>\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> HashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual\n> time=27095.470..27095.487 rows=138 loops=1)\n> ...\n>\n>\n>\n> --plan 2, only 1 second spent on index scan of term_weekly table, however,\n> as it selects the big table to do the hashing, it takes 22 seconds for the\n> hash to complete. The advantage get from index has been totally lost\n> because of this join order.\n>\n>\n>\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>\n> HashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual\n> time=22991.289..22991.307 rows=138 loops=1)\n> ...\n\n\nAm I reading something wrong here? I haven't looked all the plan, but the\nsecond is faster (overall), so why do you think you need a hint or change\nwhat the planner choose? For me looks like using the index is the best for\nthis situation. Could you try running this multiple times and taking the\nmin/max/avg time of both?\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]> wrote:--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n \ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\n \n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)...\n \n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. The advantage get from index has been totally lost because of this join\n order.\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)...Am I reading something wrong here? I haven't looked all the plan, but the second is faster (overall), so why do you think you need a hint or change what the planner choose? For me looks like using the index is the best for this situation. Could you try running this multiple times and taking the min/max/avg time of both?Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Thu, 11 Sep 2014 11:09:52 -0300",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the provoke table in hash join"
},
{
"msg_contents": "On Thu, Sep 11, 2014 at 7:09 AM, Matheus de Oliveira <\[email protected]> wrote:\n\n>\n> On Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]>\n> wrote:\n>\n>> --plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n>>\n>>\n>>\n>> dev=# explain analyze select distinct cs_id from lookup_weekly n inner\n>> join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in\n>> ('cat'::text);\n>>\n>>\n>>\n>>\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> HashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual\n>> time=27095.470..27095.487 rows=138 loops=1)\n>> ...\n>>\n>>\n>>\n>> --plan 2, only 1 second spent on index scan of term_weekly table,\n>> however, as it selects the big table to do the hashing, it takes 22 seconds\n>> for the hash to complete. The advantage get from index has been totally\n>> lost because of this join order.\n>>\n>>\n>>\n>>\n>> QUERY PLAN\n>>\n>>\n>> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>>\n>> HashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual\n>> time=22991.289..22991.307 rows=138 loops=1)\n>> ...\n>\n>\n> Am I reading something wrong here? I haven't looked all the plan, but the\n> second is faster (overall), so why do you think you need a hint or change\n> what the planner choose? For me looks like using the index is the best for\n> this situation. Could you try running this multiple times and taking the\n> min/max/avg time of both?\n>\n\nThe difference in time could be a caching effect, not a reproducible\ndifference.\n\nThe 2nd plan uses 3GB of memory, and there might be better uses for that\nmemory.\n\nCurrently memory is un-costed, other than \"cliff costing\" once you thinks\nit will exceed work_mem, which I think is a problem. Just because I will\nlet you use 4GB of memory if you will really benefit from it, doesn't mean\nyou should use 4GB gratuitously.\n\n\nSuya, what happens if you lower work_mem setting? Does it revert to the\nplan you want?\n\nCheers,\n\nJeff\n\nOn Thu, Sep 11, 2014 at 7:09 AM, Matheus de Oliveira <[email protected]> wrote:On Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]> wrote:--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n \ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\n \n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)...\n \n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. 
The advantage get from index has been totally lost because of this join\n order.\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)...Am I reading something wrong here? I haven't looked all the plan, but the second is faster (overall), so why do you think you need a hint or change what the planner choose? For me looks like using the index is the best for this situation. Could you try running this multiple times and taking the min/max/avg time of both?The difference in time could be a caching effect, not a reproducible difference.The 2nd plan uses 3GB of memory, and there might be better uses for that memory.Currently memory is un-costed, other than \"cliff costing\" once you thinks it will exceed work_mem, which I think is a problem. Just because I will let you use 4GB of memory if you will really benefit from it, doesn't mean you should use 4GB gratuitously.Suya, what happens if you lower work_mem setting? Does it revert to the plan you want?Cheers,Jeff",
"msg_date": "Thu, 11 Sep 2014 11:09:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: how to change the provoke table in hash join"
},
{
"msg_contents": "From: Jeff Janes [mailto:[email protected]]\r\nSent: Friday, September 12, 2014 4:09 AM\r\nTo: Matheus de Oliveira\r\nCc: Huang, Suya; [email protected]\r\nSubject: Re: [PERFORM] how to change the provoke table in hash join\r\n\r\n\r\nOn Thu, Sep 11, 2014 at 7:09 AM, Matheus de Oliveira <[email protected]<mailto:[email protected]>> wrote:\r\n\r\nOn Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]<mailto:[email protected]>> wrote:\r\n--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\r\n\r\ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\r\n\r\n\r\n QUERY PLAN\r\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)\r\n...\r\n\r\n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. The advantage get from index has been totally lost because of this join order.\r\n\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)\r\n...\r\n\r\nAm I reading something wrong here? I haven't looked all the plan, but the second is faster (overall), so why do you think you need a hint or change what the planner choose? For me looks like using the index is the best for this situation. Could you try running this multiple times and taking the min/max/avg time of both?\r\n\r\nThe difference in time could be a caching effect, not a reproducible difference.\r\n\r\nThe 2nd plan uses 3GB of memory, and there might be better uses for that memory.\r\n\r\nCurrently memory is un-costed, other than \"cliff costing\" once you thinks it will exceed work_mem, which I think is a problem. Just because I will let you use 4GB of memory if you will really benefit from it, doesn't mean you should use 4GB gratuitously.\r\n\r\n\r\nSuya, what happens if you lower work_mem setting? Does it revert to the plan you want?\r\n\r\nCheers,\r\n\r\nJeff\r\n\r\n________________________________\r\n\r\nHey Jeff,\r\n\r\nIt’s quite interesting, after I reduced the work_mem to 1GB, it chose the right plan. Also, if I create a temporary table and then join it with the temporary table, it also chose the right plan. Is this a defect of PG optimizer? 
While doing hash join, it’s unable to pick the small table to be the hash probe table while the query is complicated (not really that complicated in this case)\r\n\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\r\nHashAggregate (cost=1524294.96..1524295.01 rows=5 width=4) (actual time=13409.960..13409.979 rows=138 loops=1)\r\n -> Hash Join (cost=143.25..1524294.94 rows=5 width=4) (actual time=10648.440..13409.718 rows=160 loops=1)\r\n Hash Cond: (((n.b_id)::text = (s.b_id)::text) AND (n.date = s.date))\r\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.006..8152.938 rows=37828459 loops=1)\r\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.000..0.000 rows=0 loops=1)\r\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.006..743.985 rows=5158718 loops=1)\r\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.003..894.061 rows=5158718 loops=1)\r\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.008..746.660 rows=5158718 loops=1)\r\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..750.305 rows=5158718 loops=1)\r\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..741.233 rows=5158718 loops=1)\r\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=0.010..595.792 rows=4042442 loops=1)\r\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=0.009..598.208 rows=4118149 loops=1)\r\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.004..574.846 rows=3874278 loops=1)\r\n -> Hash (cost=142.77..142.77 rows=32 width=61) (actual time=0.924..0.924 rows=553 loops=1)\r\n Buckets: 1024 Batches: 1 Memory Usage: 43kB\r\n -> Append (cost=0.00..142.77 rows=32 width=61) (actual time=0.031..0.752 rows=553 loops=1)\r\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.000..0.000 rows=0 loops=1)\r\n Filter: (term = 'cat'::text)\r\n -> Index Scan using idx_term_weekly_20140503_3 on term_weekly_20140503 s_1 (cost=0.56..36.70 rows=8 width=46) (actual time=0.031..0.225 rows=166 loops=1)\r\n Index Cond: (term = 'cat'::text)\r\n -> Index Scan using idx_term_weekly_20140510_3 on term_weekly_20140510 s_2 (cost=0.56..36.70 rows=8 width=46) (actual time=0.023..0.192 rows=152 loops=1)\r\n Index Cond: (term = 'cat'::text)\r\n -> Index Scan using idx_term_weekly_20140517_3 on term_weekly_20140517 s_3 (cost=0.56..36.70 rows=8 width=46) (actual time=0.022..0.176 rows=135 loops=1)\r\n Index Cond: (term = 'cat'::text)\r\n -> Index Scan using idx_term_weekly_20140524_3 on term_weekly_20140524 s_4 (cost=0.56..32.68 rows=7 width=46) (actual time=0.016..0.126 rows=100 loops=1)\r\n Index Cond: (term = 'cat'::text)\r\nTotal runtime: 13410.097 ms\r\n\r\nThanks,\r\nSuya\r\n\r\n\n\n\n\n\n\n\n\n\nFrom: Jeff Janes [mailto:[email protected]]\r\n\nSent: Friday, September 12, 2014 4:09 AM\nTo: Matheus de Oliveira\nCc: Huang, Suya; [email protected]\nSubject: Re: [PERFORM] how to change the provoke table in hash join\n \n\n\n \n\nOn Thu, Sep 11, 2014 at 7:09 AM, Matheus de Oliveira 
<[email protected]> wrote:\n\n\n \n\nOn Wed, Sep 10, 2014 at 10:05 PM, Huang, Suya <[email protected]> wrote:\n--plan 1, 10 seconds were spent on sequential scan on term_weekly table.\n \ndev=# explain analyze select distinct cs_id from lookup_weekly n inner join term_weekly s on s.b_id=n.b_id and s.date=n.date where term in ('cat'::text);\n \n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=2100211.06..2100211.11 rows=5 width=4) (actual time=27095.470..27095.487 rows=138 loops=1)\n... \n \n--plan 2, only 1 second spent on index scan of term_weekly table, however, as it selects the big table to do the hashing, it takes 22 seconds for the hash to complete. The advantage\r\n get from index has been totally lost because of this join order.\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1429795.17..1429795.22 rows=5 width=4) (actual time=22991.289..22991.307 rows=138 loops=1)\n...\n\n \n\n\nAm I reading something wrong here? I haven't looked all the plan, but the second is faster (overall), so why do you think you need a hint or change what the planner choose? For me looks like using the index is the best for this situation.\r\n Could you try running this multiple times and taking the min/max/avg time of both?\n\n\n\n \n\n\nThe difference in time could be a caching effect, not a reproducible difference.\n\n\n \n\n\nThe 2nd plan uses 3GB of memory, and there might be better uses for that memory.\n\n\n \n\n\nCurrently memory is un-costed, other than \"cliff costing\" once you thinks it will exceed work_mem, which I think is a problem. Just because I will let you use 4GB of memory if you will really benefit from it, doesn't mean you should use\r\n 4GB gratuitously.\n\n\n \n\n\n \n\n\nSuya, what happens if you lower work_mem setting? Does it revert to the plan you want?\n\n\n \n\n\nCheers,\n\n\n \n\n\nJeff\n \n\n\n\n \nHey Jeff,\n \nIt’s quite interesting, after I reduced the work_mem to 1GB, it chose the right plan. Also, if I create a temporary table and then join it with the temporary\r\n table, it also chose the right plan. Is this a defect of PG optimizer? 
While doing hash join, it’s unable to pick the small table to be the hash probe table while the query is complicated (not really that complicated in this case)\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=1524294.96..1524295.01 rows=5 width=4) (actual time=13409.960..13409.979 rows=138 loops=1)\n -> Hash Join (cost=143.25..1524294.94 rows=5 width=4) (actual time=10648.440..13409.718 rows=160 loops=1)\n Hash Cond: (((n.b_id)::text = (s.b_id)::text) AND (n.date = s.date))\n -> Append (cost=0.00..862153.59 rows=37828460 width=52) (actual time=0.006..8152.938 rows=37828459 loops=1)\n -> Seq Scan on lookup_weekly n (cost=0.00..0.00 rows=1 width=524) (actual time=0.000..0.000 rows=0 loops=1)\n -> Seq Scan on lookup_weekly_20131130 n_1 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.006..743.985 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131207 n_2 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.003..894.061 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131214 n_3 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.008..746.660 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131221 n_4 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..750.305 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20131228 n_5 (cost=0.00..117764.18 rows=5158718 width=52) (actual time=0.004..741.233 rows=5158718 loops=1)\n -> Seq Scan on lookup_weekly_20140426 n_6 (cost=0.00..91715.42 rows=4042442 width=52) (actual time=0.010..595.792 rows=4042442 loops=1)\n -> Seq Scan on lookup_weekly_20140503 n_7 (cost=0.00..93516.49 rows=4118149 width=52) (actual time=0.009..598.208 rows=4118149 loops=1)\n -> Seq Scan on lookup_weekly_20140329 n_8 (cost=0.00..88100.78 rows=3874278 width=52) (actual time=0.004..574.846 rows=3874278 loops=1)\n -> Hash (cost=142.77..142.77 rows=32 width=61) (actual time=0.924..0.924 rows=553 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 43kB\n -> Append (cost=0.00..142.77 rows=32 width=61) (actual time=0.031..0.752 rows=553 loops=1)\n -> Seq Scan on term_weekly s (cost=0.00..0.00 rows=1 width=520) (actual time=0.000..0.000 rows=0 loops=1)\n Filter: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140503_3 on term_weekly_20140503 s_1 (cost=0.56..36.70 rows=8 width=46) (actual time=0.031..0.225\r\n rows=166 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140510_3 on term_weekly_20140510 s_2 (cost=0.56..36.70 rows=8 width=46) (actual time=0.023..0.192\r\n rows=152 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140517_3 on term_weekly_20140517 s_3 (cost=0.56..36.70 rows=8 width=46) (actual time=0.022..0.176\r\n rows=135 loops=1)\n Index Cond: (term = 'cat'::text)\n -> Index Scan using idx_term_weekly_20140524_3 on term_weekly_20140524 s_4 (cost=0.56..32.68 rows=7 width=46) (actual time=0.016..0.126\r\n rows=100 loops=1)\n Index Cond: (term = 'cat'::text)\nTotal runtime: 13410.097 ms\n \nThanks,\nSuya",
"msg_date": "Fri, 12 Sep 2014 03:33:42 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: how to change the provoke table in hash join"
}
]
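A note on reproducing the fix from this thread: lowering work_mem is what finally flipped the build side of the hash join, and it can be trialled per session with SET LOCAL, without touching postgresql.conf. A minimal sketch, assuming the table and column names from the thread:

    BEGIN;
    -- 1GB is the value Suya reported as working; tune for your own system.
    -- With a smaller work_mem, the planner stops considering a single-batch
    -- hash of the ~37M-row lookup_weekly side and builds the hash on the
    -- small term_weekly result instead.
    SET LOCAL work_mem = '1GB';
    EXPLAIN ANALYZE
    SELECT DISTINCT cs_id
    FROM lookup_weekly n
    JOIN term_weekly s ON s.b_id = n.b_id AND s.date = n.date
    WHERE term IN ('cat'::text);
    COMMIT;

SET LOCAL confines the change to the transaction, so concurrent queries keep the server-wide setting.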
[
{
"msg_contents": "Hi,\n\nCan someone figure out why the first query runs so slow comparing to the second one? They generate the same result...\n\ndev=# explain analyze select count(distinct wid) from terms_weekly_20140503 a join port_terms b on a.term=b.terms;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=2226181.12..2226181.13 rows=1 width=516) (actual time=18757.318..18757.319 rows=1 loops=1)\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.758..2496.190 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.009..951.875 rows=8516481 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.690..0.690 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n -> Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.283 rows=1000 loops=1)\nTotal runtime: 18757.367 ms\n(8 rows)\n\nTime: 18758.068 ms\n\ndev=# explain analyze with x as (select distinct wid from terms_weekly_20140503 a join port_terms b on a.term=b.terms) select count(*) from x;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=2226187.62..2226187.63 rows=1 width=0) (actual time=2976.011..2976.011 rows=1 loops=1)\n CTE x\n -> HashAggregate (cost=2226181.12..2226183.12 rows=200 width=516) (actual time=2827.958..2896.747 rows=212249 loops=1)\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.734..2470.533 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.009..916.028 rows=8516481 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.669..0.669 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n -> Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.269 rows=1000 loops=1)\n -> CTE Scan on x (cost=0.00..4.00 rows=200 width=0) (actual time=2827.961..2963.878 rows=212249 loops=1)\nTotal runtime: 2980.681 ms\n(11 rows)\n\nThanks,\nSuya\n\n\n\n\n\n\n\n\n\nHi,\n \nCan someone figure out why the first query runs so slow comparing to the second one? 
They generate the same result…\n \ndev=# explain analyze select count(distinct wid) from terms_weekly_20140503 a join port_terms b on a.term=b.terms;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=2226181.12..2226181.13 rows=1 width=516) (actual time=18757.318..18757.319 rows=1 loops=1)\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.758..2496.190 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.009..951.875 rows=8516481 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.690..0.690 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n -> Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.283 rows=1000 loops=1)\nTotal runtime: 18757.367 ms\n(8 rows)\n \nTime: 18758.068 ms\n \ndev=# explain analyze with x as (select distinct wid from terms_weekly_20140503 a join port_terms b on a.term=b.terms) select count(*) from x;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\nAggregate (cost=2226187.62..2226187.63 rows=1 width=0) (actual time=2976.011..2976.011 rows=1 loops=1)\n CTE x\n -> HashAggregate (cost=2226181.12..2226183.12 rows=200 width=516) (actual time=2827.958..2896.747 rows=212249 loops=1)\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.734..2470.533 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.009..916.028 rows=8516481 loops=1)\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.669..0.669 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n -> Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.269 rows=1000 loops=1)\n -> CTE Scan on x (cost=0.00..4.00 rows=200 width=0) (actual time=2827.961..2963.878 rows=212249 loops=1)\nTotal runtime: 2980.681 ms\n(11 rows)\n \nThanks,\nSuya",
"msg_date": "Fri, 12 Sep 2014 02:26:04 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "weird execution plan"
},
{
"msg_contents": "Huang, Suya wrote\n> Can someone figure out why the first query runs so slow comparing to the\n> second one? They generate the same result...\n\nTry: EXPLAIN (ANALYZE, BUFFERS)\n\nI believe you are only seeing caching effects.\n\nDavid J.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/weird-execution-plan-tp5818730p5818733.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Sep 2014 19:45:05 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird execution plan"
},
{
"msg_contents": "-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David G Johnston\nSent: Friday, September 12, 2014 12:45 PM\nTo: [email protected]\nSubject: Re: [PERFORM] weird execution plan\n\nHuang, Suya wrote\n> Can someone figure out why the first query runs so slow comparing to \n> the second one? They generate the same result...\n\nTry: EXPLAIN (ANALYZE, BUFFERS)\n\nI believe you are only seeing caching effects.\n\nDavid J.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/weird-execution-plan-tp5818730p5818733.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n=================================================================================================================\nBoth queries have been run several times so cache would have same effect on both of them? Below is the plan with buffer information.\n\ndev=# explain (ANALYZE, BUFFERS) with x as (select distinct wid from terms_weekly_20140503 a join port_terms b on a.term=b.terms) select count(*) from x;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2226187.62..2226187.63 rows=1 width=0) (actual time=3533.998..3533.999 rows=1 loops=1)\n Buffers: shared hit=86752 read=68837\n CTE x\n -> HashAggregate (cost=2226181.12..2226183.12 rows=200 width=516) (actual time=3383.700..3448.554 rows=212249 loops=1)\n Buffers: shared hit=86752 read=68837\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.799..3010.906 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n Buffers: shared hit=86752 read=68837\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.023..1277.352 rows=8516481 loops=1)\n Buffers: shared hit=86739 read=68835\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.699..0.699 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n Buffers: shared hit=7\n -> Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.270 rows=1000 loops=1)\n Buffers: shared hit=7\n -> CTE Scan on x (cost=0.00..4.00 rows=200 width=0) (actual time=3383.707..3518.884 rows=212249 loops=1)\n Buffers: shared hit=86752 read=68837\n Total runtime: 3541.277 ms\n(18 rows)\n\nTime: 3552.505 ms\ndev=# explain (analyze,buffers) select count(distinct w_id) from terms_weekly_20140503 a join port_terms b on a.term=b.terms;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2226181.12..2226181.13 rows=1 width=516) (actual time=18914.881..18914.882 rows=1 loops=1)\n Buffers: shared hit=155589\n -> Hash Join (cost=37.67..2095240.22 rows=52376358 width=516) (actual time=0.802..2616.410 rows=1067696 loops=1)\n Hash Cond: (a.term = b.terms)\n Buffers: shared hit=155589\n -> Seq Scan on terms_weekly_20140503 a (cost=0.00..240738.81 rows=8516481 width=548) (actual time=0.010..966.380 rows=8516481 loops=1)\n Buffers: shared hit=155574\n -> Hash (cost=22.30..22.30 rows=1230 width=32) (actual time=0.729..0.729 rows=1000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 51kB\n Buffers: shared hit=7\n -> 
Seq Scan on port_terms b (cost=0.00..22.30 rows=1230 width=32) (actual time=0.009..0.300 rows=1000 loops=1)\n Buffers: shared hit=7\n Total runtime: 18914.933 ms\n(13 rows)\n\nTime: 18915.712 ms\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Sep 2014 03:41:08 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird execution plan"
},
{
"msg_contents": "Huang, Suya wrote\n> Both queries have been run several times so cache would have same effect\n> on both of them? Below is the plan with buffer information.\n\nNot everyone does so its nice to make certain - especially since I'm not all\nthat familiar with the code involved. But since no one else has answered I\nwill theorize.\n\nSELECT count(*) FROM ( SELECT DISTINCT col FROM tbl )\n\nvs\n\nSELECT count(DISTINCT col) FROM tbl\n\nThe code for \"SELECT DISTINCT col\" is likely highly efficient because it\nworks on complete sets of records.\n\nThe code for \"SELECT count(DISTINCT col)\" is at a relative disadvantage\nsince it must evaluate one row at a time and remember whether it had seen\nthe same value previously before deciding whether to increment a counter.\n\nWith a large number of duplicate rows the process of making the row set\nsmaller before counting the end result will perform better since fewer rows\nmust be evaluated in the less efficient count(DISTINCT) expression - the\ntime saved there more than offset by the fact that you are effectively\npassing over that subset of the data a second time.\n\nHashAggregate(1M rows) + Aggregate(200k rows) < Aggregate(1M rows)\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/weird-execution-plan-tp5818730p5818905.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Sep 2014 14:34:28 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: weird execution plan"
},
{
"msg_contents": "-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of David G Johnston\nSent: Saturday, September 13, 2014 7:34 AM\nTo: [email protected]\nSubject: Re: [PERFORM] weird execution plan\n\n>Not everyone does so its nice to make certain - especially since I'm not all that familiar with the code involved. But since no one else has answered I will theorize.\n>\n>SELECT count(*) FROM ( SELECT DISTINCT col FROM tbl )\n>\n>vs\n>\n>SELECT count(DISTINCT col) FROM tbl\n>\n>The code for \"SELECT DISTINCT col\" is likely highly efficient because it works on complete sets of records.\n>\n>The code for \"SELECT count(DISTINCT col)\" is at a relative disadvantage since it must evaluate one row at a time and remember whether it had seen the same value previously before deciding whether to >increment a counter.\n>\n>With a large number of duplicate rows the process of making the row set smaller before counting the end result will perform better since fewer rows must be evaluated in the less efficient count(DISTINCT) >expression - the time saved there more than offset by the fact that you are effectively passing over that subset of the data a second time.\n>\n>HashAggregate(1M rows) + Aggregate(200k rows) < Aggregate(1M rows)\n>\n>David J.\n\nThanks David!\n\nI am so surprised to the findings you put here. Just did an explain plan on the example you gave and pasted the result below, you're correct. \n\n\"Select count(distinct col1)\" is really a very common SQL statement we write daily, in Postgres, we need to rewrite it so that the aggregate doesn't happen on a very large data sets... I am wondering if this is something to be improved from the optimizer ifself, instead of developers to rewrite SQL. Like having the optimizer just do the counting in the end instead of doing it each time. 
I used Oracle before, never saw this issue...\n\nBut really thank you for pointing this out, very valuable lesson-learnt in PG SQL writing for me and our developers.\n\ndev=# explain analyze select count(*) from (select distinct wid from terms_weekly) foo;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1278656.00..1278656.01 rows=1 width=0) (actual time=24316.335..24316.336 rows=1 loops=1)\n -> HashAggregate (cost=1278651.50..1278653.50 rows=200 width=42) (actual time=23899.916..24242.010 rows=1298124 loops=1)\n -> Append (cost=0.00..1171738.20 rows=42765321 width=42) (actual time=0.028..13631.898 rows=42765320 loops=1)\n -> Seq Scan on search_terms_weekly (cost=0.00..0.00 rows=1 width=516) (actual time=0.001..0.001 rows=0 loops=1)\n -> Seq Scan on search_terms_weekly_20140503 (cost=0.00..293352.90 rows=10702190 width=42) (actual time=0.026..2195.460 rows=10702190 loops=1)\n -> Seq Scan on search_terms_weekly_20140510 (cost=0.00..298773.53 rows=10878953 width=42) (actual time=8.244..3163.087 rows=10878953 loops=1)\n -> Seq Scan on search_terms_weekly_20140517 (cost=0.00..288321.17 rows=10537717 width=41) (actual time=7.345..2520.531 rows=10537717 loops=1)\n -> Seq Scan on search_terms_weekly_20140524 (cost=0.00..291290.60 rows=10646460 width=41) (actual time=8.543..2693.833 rows=10646460 loops=1)\n Total runtime: 24333.830 ms\n(9 rows)\n\ndev=# explain analyze select count(distinct wid) from terms_weekly;\n\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=1278651.50..1278651.51 rows=1 width=42) (actual time=585774.511..585774.511 rows=1 loops=1)\n -> Append (cost=0.00..1171738.20 rows=42765321 width=42) (actual time=0.019..10656.782 rows=42765320 loops=1)\n -> Seq Scan on search_terms_weekly (cost=0.00..0.00 rows=1 width=516) (actual time=0.002..0.002 rows=0 loops=1)\n -> Seq Scan on search_terms_weekly_20140503 (cost=0.00..293352.90 rows=10702190 width=42) (actual time=0.017..2225.397 rows=10702190 loops=1)\n -> Seq Scan on search_terms_weekly_20140510 (cost=0.00..298773.53 rows=10878953 width=42) (actual time=0.009..2244.918 rows=10878953 loops=1)\n -> Seq Scan on search_terms_weekly_20140517 (cost=0.00..288321.17 rows=10537717 width=41) (actual time=0.008..1822.088 rows=10537717 loops=1)\n -> Seq Scan on search_terms_weekly_20140524 (cost=0.00..291290.60 rows=10646460 width=41) (actual time=0.006..1561.229 rows=10646460 loops=1)\n Total runtime: 585774.568 ms\n(8 rows)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Sep 2014 00:39:39 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: weird execution plan"
}
]
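The rewrite David describes generalizes to any count(DISTINCT col) over a large, duplicate-heavy input. A sketch of both forms, using the table and column names from the thread:

    -- Slow form: the aggregate must check every input row against the set of
    -- values it has already seen before deciding whether to count it.
    SELECT count(DISTINCT wid) FROM terms_weekly;

    -- Usually much faster on duplicate-heavy data: deduplicate first with a
    -- HashAggregate, then count the far smaller distinct set.
    SELECT count(*) FROM (SELECT DISTINCT wid FROM terms_weekly) AS dedup;

Suya's CTE version (WITH x AS (...) SELECT count(*) FROM x) is the same idea; the plain subquery form is equivalent here and avoids relying on CTE behavior.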
[
{
"msg_contents": "Hi All,\n\nPlease see the output from the following query analysis :\n=# explain analyze select count(1) from jbpmprocess.jbpm_taskinstance ti \njoin jbpmprocess.jbpm_task task on (ti.task_ = task.id_ ) join \njbpmprocess.jbpm_processinstance pi on ti.procinst_ = pi.id_ where \nti.isopen_ = true;\nQUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=47372.04..47372.05 rows=1 width=0) (actual \ntime=647.070..647.071 rows=1 loops=1)\n -> Hash Join (cost=44806.99..47336.72 rows=14127 width=0) (actual \ntime=605.077..645.410 rows=20359 loops=1)\n Hash Cond: (ti.task_ = task.id_)\n -> Hash Join (cost=44779.80..47115.28 rows=14127 width=8) \n(actual time=604.874..640.541 rows=20359 loops=1)\n Hash Cond: (ti.procinst_ = pi.id_)\n -> Index Scan using idx_task_instance_isopen on \njbpm_taskinstance ti (cost=0.00..1995.84 rows=22672 width=16) (actual \ntime=0.011..16.606 rows=20359 loops=1)\n Index Cond: (isopen_ = true)\n Filter: isopen_\n -> Hash (cost=28274.91..28274.91 rows=1320391 width=8) \n(actual time=604.601..604.601 rows=1320391 loops=1)\n Buckets: 262144 Batches: 1 Memory Usage: 51578kB\n -> Seq Scan on jbpm_processinstance pi \n(cost=0.00..28274.91 rows=1320391 width=8) (actual time=0.004..192.166 \nrows=1320391 loops=1)\n -> Hash (cost=18.75..18.75 rows=675 width=8) (actual \ntime=0.196..0.196 rows=675 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on jbpm_task task (cost=0.00..18.75 \nrows=675 width=8) (actual time=0.003..0.106 rows=675 loops=1)\n Total runtime: 652.266 ms\n(15 rows)\n\n\nI'm not sure why the planner insists on doing the sequential scan on \njbpm_processinstance even though the 22672 rows from jbpm_taskinstance \nit has to match it against, is only 1% of the number of rows in \njbpm_processinstance. 
So far I think it is because the values in \nprocinst_ of jbpm_taskinstance are not entirely unique.\n\nThe very strange thing, though, is the way the query plan changes if I \nrepeat the where clause:\n\nexplain analyze select count(1) from jbpmprocess.jbpm_taskinstance ti \njoin jbpmprocess.jbpm_task task on (ti.task_ = task.id_ ) join \njbpmprocess.jbpm_processinstance pi on ti.procinst_ = pi.id_ where \nti.isopen_ = true and ti.isopen_ = true;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=2074.61..2074.62 rows=1 width=0) (actual \ntime=80.126..80.126 rows=1 loops=1)\n -> Hash Join (cost=27.19..2074.24 rows=151 width=0) (actual \ntime=0.217..77.959 rows=20359 loops=1)\n Hash Cond: (ti.task_ = task.id_)\n -> Nested Loop (cost=0.00..2044.97 rows=151 width=8) (actual \ntime=0.016..71.429 rows=20359 loops=1)\n -> Index Scan using idx_task_instance_isopen on \njbpm_taskinstance ti (cost=0.00..29.72 rows=243 width=16) (actual \ntime=0.012..16.928 rows=20359 loops=1)\n Index Cond: ((isopen_ = true) AND (isopen_ = true))\n Filter: (isopen_ AND isopen_)\n -> Index Scan using jbpm_processinstance_pkey on \njbpm_processinstance pi (cost=0.00..8.28 rows=1 width=8) (actual \ntime=0.002..0.002 rows=1 loops=20359)\n Index Cond: (id_ = ti.procinst_)\n -> Hash (cost=18.75..18.75 rows=675 width=8) (actual \ntime=0.196..0.196 rows=675 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 27kB\n -> Seq Scan on jbpm_task task (cost=0.00..18.75 \nrows=675 width=8) (actual time=0.002..0.107 rows=675 loops=1)\n Total runtime: 80.170 ms\n\nI get a similar plan selected on the original query if I set \nenable_seqscan to off. I much prefer the second result.\nMy questions are:\n1. Why is this happening?\n2. How can I encourage the behavior of the second query without changing \nthe original query? Is there some column level setting I can set?\n\n(BTW the tables are analyzed, and I currently have no special \nsettings/attributes set for any of the tables.)\n\n-- \nKind Regards\nStefan\n\nCell : 072-380-1479\nDesk : 087-577-7241\n",
"msg_date": "Mon, 15 Sep 2014 09:38:30 +0000",
"msg_from": "\"Van Der Berg, Stefan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange performance problem with query"
},
{
"msg_contents": "\"Van Der Berg, Stefan\" <[email protected]> wrote:\n\n> I get a similar plan selected on the original query if I set\n\n> enable_seqscan to off. I much prefer the second result.\n> My questions are:\n> 1. Why is this happening?\n\nYour cost factors don't accurately model actual costs.\n\n> 2. How can I encourage the behavior of the second query without\n> changing the original query?\n\nYou didn't give enough information to really give solid advice, but\nwhen people see what you are seeing, some common tuning needed is:\n\nSet shared_buffers to about 25% of system RAM or 8GB, whichever is\nlower.\n\nSet effective_cache_size to 50% to 75% of system RAM.\n\nSet work_mem to about 25% of system RAM divided by max_connections.\n\nIf you have a high cache hit ratio (which you apparently do) reduce\nrandom_page_cost, possibly to something near or equal to\nseq_page_cost.\n\nIncrease cpu_tuple_cost, perhaps to 0.03.\n\nYou might want to play with the above, and if you still have a\nproblem, read this page and post with more detail:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n> Is there some column level setting I can set?\n\nThe statistics looked pretty accurate, so that shouldn't be\nnecessary.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Sep 2014 06:25:44 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange performance problem with query"
},
{
"msg_contents": "Hi Kevin,\n\nThanks for the advice.\n\nI opted for setting the random_page_cost a bit lower, as that made the \nmost sense in the context of the current setup where there is quite a \nhigh cache hit ratio. Is 97% high enough?:\n\n=# SELECT\n 'cache hit rate' AS name,\n sum(heap_blks_hit) / (sum(heap_blks_hit) + sum(heap_blks_read)) AS \nratio\nFROM pg_statio_user_tables;\n name | ratio\n----------------+------------------------\n cache hit rate | 0.97344836172381212996\n\nWhen I set the random_page_cost down from 4 to 2, the query plan changes \nto the faster one.\n\nKind Regards\nStefan\n\nCell : 072-380-1479\nDesk : 087-577-7241\n\nOn 2014/09/15 03:25 PM, Kevin Grittner wrote:\n> \"Van Der Berg, Stefan\" <[email protected]> wrote:\n>\n>> I get a similar plan selected on the original query if I set\n>> enable_seqscan to off. I much prefer the second result.\n>> My questions are:\n>> 1. Why is this happening?\n> Your cost factors don't accurately model actual costs.\n>\n>> 2. How can I encourage the behavior of the second query without\n>> changing the original query?\n> You didn't give enough information to really give solid advice, but\n> when people see what you are seeing, some common tuning needed is:\n>\n> Set shared_buffers to about 25% of system RAM or 8GB, whichever is\n> lower.\n>\n> Set effective_cache_size to 50% to 75% of system RAM.\n>\n> Set work_mem to about 25% of system RAM divided by max_connections.\n>\n> If you have a high cache hit ratio (which you apparently do) reduce\n> random_page_cost, possibly to something near or equal to\n> seq_page_cost.\n>\n> Increase cpu_tuple_cost, perhaps to 0.03.\n>\n> You might want to play with the above, and if you still have a\n> problem, read this page and post with more detail:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n>> Is there some column level setting I can set?\n> The statistics looked pretty accurate, so that shouldn't be\n> necessary.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\nTo read FirstRand Bank's Disclaimer for this email click on the following address or copy into your Internet browser: \nhttps://www.fnb.co.za/disclaimer.html \n\nIf you are unable to access the Disclaimer, send a blank e-mail to\[email protected] and we will send you a copy of the Disclaimer.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Sep 2014 06:51:07 +0000",
"msg_from": "\"Van Der Berg, Stefan\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange performance problem with query"
}
] |
[
{
"msg_contents": "Hi,\n\nSorry for send this email twice but it seems it fits the performance group than admin group...\n\nI was reading an article of Gregory Smith http://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm and tried to do some analysis on our database.\n\npostgres=# select * from pg_stat_bgwriter;\n-[ RECORD 1 ]------+------------\ncheckpoints_timed | 42435\ncheckpoints_req | 629448\nbuffers_checkpoint | 1821978480\nbuffers_clean | 117710078\nmaxwritten_clean | 23796\nbuffers_backend | 1284631340\nbuffers_alloc | 32829025268\n\npostgres=# show checkpoint_segments ;\n-[ RECORD 1 ]-------+----\ncheckpoint_segments | 128\n\n\npostgres=# show checkpoint_timeout ;\n-[ RECORD 1 ]------+------\ncheckpoint_timeout | 10min\n\nbgwriter_delay bgwriter_lru_maxpages bgwriter_lru_multiplier\npostgres=# show bgwriter_delay;\n-[ RECORD 1 ]--+------\nbgwriter_delay | 100ms\n\npostgres=# show bgwriter_lru_maxpages;\n-[ RECORD 1 ]---------+-----\nbgwriter_lru_maxpages | 1000\n\npostgres=# show bgwriter_lru_multiplier;\n-[ RECORD 1 ]-----------+--\nbgwriter_lru_multiplier | 5\n\nbased on one snapshot, below are my thoughts after reading the example reading the example Greg used, it might be completely wrong as I'm just starting the learning process of checkpoint mechanism in PG. If anything missing/wrong, appreciate if you can help to point out.\n\n# checkpoints_req is much bigger than checkpoints_timed, suggest that I may increase checkpoint_segments in our system\n#maxwritten_clean is high, suggests increase bgwriter_lru_maxpages\n# buffers_backend is much smaller than buffers_alloc, suggests increasing bgwriter_lru_maxpages, bgwriter_lru_multiplier, and decreasing bgwriter_delay.\n\n\nThanks,\nSuya\n\n\n\n\n\n\n\n\n\n\nHi,\n \nSorry for send this email twice but it seems it fits the performance group than admin group…\n \nI was reading an article of Gregory Smith \nhttp://www.westnet.com/~gsmith/content/postgresql/chkp-bgw-83.htm and tried to do some analysis on our database.\n \npostgres=# select * from pg_stat_bgwriter;\n-[ RECORD 1 ]------+------------\ncheckpoints_timed | 42435\ncheckpoints_req | 629448\nbuffers_checkpoint | 1821978480\nbuffers_clean | 117710078\nmaxwritten_clean | 23796\nbuffers_backend | 1284631340\nbuffers_alloc | 32829025268\n \npostgres=# show checkpoint_segments ;\n-[ RECORD 1 ]-------+----\ncheckpoint_segments | 128\n \n \npostgres=# show checkpoint_timeout ;\n-[ RECORD 1 ]------+------\ncheckpoint_timeout | 10min\n \nbgwriter_delay bgwriter_lru_maxpages bgwriter_lru_multiplier\npostgres=# show bgwriter_delay;\n-[ RECORD 1 ]--+------\nbgwriter_delay | 100ms\n \npostgres=# show bgwriter_lru_maxpages;\n-[ RECORD 1 ]---------+-----\nbgwriter_lru_maxpages | 1000\n \npostgres=# show bgwriter_lru_multiplier;\n-[ RECORD 1 ]-----------+--\nbgwriter_lru_multiplier | 5\n \nbased on one snapshot, below are my thoughts after reading the example reading the example Greg used, it might be completely wrong as I’m just starting the learning process of checkpoint mechanism in PG. If anything missing/wrong, appreciate\n if you can help to point out.\n \n# checkpoints_req is much bigger than checkpoints_timed, suggest that I may increase checkpoint_segments in our system\n\n#maxwritten_clean is high, suggests increase bgwriter_lru_maxpages\n# buffers_backend is much smaller than buffers_alloc, suggests increasing bgwriter_lru_maxpages, bgwriter_lru_multiplier, and decreasing bgwriter_delay.\n \n \nThanks,\nSuya",
"msg_date": "Wed, 17 Sep 2014 00:21:58 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to interpret view pg_stat_bgwriter "
}
] |
[
{
"msg_contents": "Hello,\n\nI have a table of tree nodes with a tsquery column. To get a subtree's\ntsquery, I need to OR all of its nodes' tsqueries together.\n\nI defined a custom aggregate using tsquery_or:\n\n CREATE AGGREGATE tsquery_or_agg (tsquery)\n (\n sfunc = tsquery_or,\n stype = tsquery\n );\n\nbut I've found that\n\n tsquery_or_agg(query)\n\nis about a hundred times slower than this:\n\n ('(' || string_agg(query::text, ')|(') || ')')::tsquery\n\nThat works perfectly so I'm happy to continue doing it, but I'm curious to\nknow why the difference is so great and if anything can be done about it?\n\nCheers,\nAlex\n\nHello,I have a table of tree nodes with a tsquery column. To get a subtree's tsquery, I need to OR all of its nodes' tsqueries together.I defined a custom aggregate using tsquery_or: CREATE AGGREGATE tsquery_or_agg (tsquery) ( sfunc = tsquery_or, stype = tsquery );but I've found that tsquery_or_agg(query)is about a hundred times slower than this: ('(' || string_agg(query::text, ')|(') || ')')::tsqueryThat works perfectly so I'm happy to continue doing it, but I'm curious to know why the difference is so great and if anything can be done about it?Cheers,Alex",
"msg_date": "Wed, 17 Sep 2014 12:56:31 +0800",
"msg_from": "Alexander Hill <[email protected]>",
"msg_from_op": true,
"msg_subject": "Aggregating tsqueries"
},
{
"msg_contents": "On 09/17/2014 07:56 AM, Alexander Hill wrote:\n> Hello,\n>\n> I have a table of tree nodes with a tsquery column. To get a subtree's\n> tsquery, I need to OR all of its nodes' tsqueries together.\n>\n> I defined a custom aggregate using tsquery_or:\n>\n> CREATE AGGREGATE tsquery_or_agg (tsquery)\n> (\n> sfunc = tsquery_or,\n> stype = tsquery\n> );\n>\n> but I've found that\n>\n> tsquery_or_agg(query)\n>\n> is about a hundred times slower than this:\n>\n> ('(' || string_agg(query::text, ')|(') || ')')::tsquery\n>\n> That works perfectly so I'm happy to continue doing it, but I'm curious to\n> know why the difference is so great and if anything can be done about it?\n\nstring_agg's state transition function uses a buffer that's expanded as \nneeded. At every step, the next string is appended to the buffer. Your \ncustom aggregate is less efficient, because it constructs a new tsquery \nobject at every step. In every step, a new tsquery object is allocated \nand the old result and the next source tsquery are copied to it. That's \nmuch more expensive.\n\nIf you're not shy of writing C code, you could write a more efficient \nversion of tsquery_or_agg too, using a similar technique.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 10:20:41 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Aggregating tsqueries"
}
] |
[
{
"msg_contents": "Folks,\n\nJust encountered another case of critical fail for abort-early query\nplans. In this case, it will completely prevent a user from upgrading\nto 9.3; this is their most common query, and on 9.3 it takes 1000X longer.\n\nMaybe we should think about removing abort-early plans from 9.5?\nClearly we don't understand them well enough for them to work for users.\n\nQuery:\n\nSELECT \"categories\".* FROM \"categories\" WHERE \"categories\".\"user_id\" IN\n( SELECT to_user_id FROM \"tags\" WHERE \"tags\".\"from_user_id\" = 53529975 )\nORDER BY recorded_on DESC LIMIT 20;\n\nHere's the plan from 9.1:\n\n Limit (cost=1613.10..1613.15 rows=20 width=194) (actual\ntime=0.503..0.509 rows=20 loops=1)\n -> Sort (cost=1613.10..1736.14 rows=49215 width=194) (actual\ntime=0.502..0.505 rows=20 loops=1)\n Sort Key: categories.recorded_on\n Sort Method: top-N heapsort Memory: 30kB\n -> Nested Loop (cost=248.80..303.51 rows=49215 width=194)\n(actual time=0.069..0.347 rows=81 loops=1)\n -> HashAggregate (cost=248.80..248.81 rows=1 width=4)\n(actual time=0.050..0.054 rows=8 loops=1)\n -> Index Scan using unique_index_tags on tags\n(cost=0.00..248.54 rows=103 width=4) (actual time=0.020..0.033 rows=8\nloops=1)\n Index Cond: (from_user_id = 53529975)\n -> Index Scan using index_categories_on_user_id on\ncategories (cost=0.00..54.34 rows=29 width=194) (actual\ntime=0.010..0.028 rows=10 loops=8)\n Index Cond: (user_id = tags.to_user_id)\n Total runtime: 0.641 ms\n\nAnd from 9.3:\n\n Limit (cost=1.00..2641.10 rows=20 width=202) (actual\ntime=9.933..711.372 rows=20 loops=1)\n -> Nested Loop Semi Join (cost=1.00..9641758.39 rows=73041\nwidth=202) (actual time=9.931..711.361 rows=20 loops=1)\n -> Index Scan Backward using index_categories_on_recorded_on\non categories (cost=0.43..406943.98 rows=4199200 width=202) (actual\ntime=0.018..275.020 rows=170995 loops=1)\n -> Index Scan using unique_index_tags on tags\n(cost=0.57..2.20 rows=1 width=4) (actual time=0.002..0.002 rows=0\nloops=170995)\n Index Cond: ((from_user_id = 53529975) AND (to_user_id =\ncategories.user_id))\n Total runtime: 711.457 ms\n\nSo, here's what's happening here:\n\nAs usual, PostgreSQL is dramatically undercounting n_distinct: it shows\nchapters.user_id at 146,000 and the ratio of to_user_id:from_user_id as\nbeing 1:105 (as opposed to 1:6, which is about the real ratio). This\nmeans that PostgreSQL thinks it can find the 20 rows within the first 2%\nof the index ... whereas it actually needs to scan 50% of the index to\nfind them.\n\nRemoving LIMIT causes 9.3 to revert to the \"good\" plan, as expected.\n\nThis is the core issue with abort-early plans; they depend on our\nstatistics being extremely accurate, which we know they are not. And if\nthey're wrong, the execution time climbs by 1000X or more. Abort-early\nplans are inherently riskier than other types of query plans.\n\nWhat I'm not clear on is why upgrading from 9.1 to 9.3 would bring about\nthis change. The stats are no more than 10% different across the\nversion change.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Sep 2014 17:11:33 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus <[email protected]> wrote:\n> Folks,\n>\n> Just encountered another case of critical fail for abort-early query\n> plans. In this case, it will completely prevent a user from upgrading\n> to 9.3; this is their most common query, and on 9.3 it takes 1000X longer.\n>\n> Maybe we should think about removing abort-early plans from 9.5?\n> Clearly we don't understand them well enough for them to work for users.\n>\n> Query:\n>\n> SELECT \"categories\".* FROM \"categories\" WHERE \"categories\".\"user_id\" IN\n> ( SELECT to_user_id FROM \"tags\" WHERE \"tags\".\"from_user_id\" = 53529975 )\n> ORDER BY recorded_on DESC LIMIT 20;\n>\n> Here's the plan from 9.1:\n>\n> Limit (cost=1613.10..1613.15 rows=20 width=194) (actual\n> time=0.503..0.509 rows=20 loops=1)\n> -> Sort (cost=1613.10..1736.14 rows=49215 width=194) (actual\n> time=0.502..0.505 rows=20 loops=1)\n> Sort Key: categories.recorded_on\n> Sort Method: top-N heapsort Memory: 30kB\n> -> Nested Loop (cost=248.80..303.51 rows=49215 width=194)\n> (actual time=0.069..0.347 rows=81 loops=1)\n> -> HashAggregate (cost=248.80..248.81 rows=1 width=4)\n> (actual time=0.050..0.054 rows=8 loops=1)\n> -> Index Scan using unique_index_tags on tags\n> (cost=0.00..248.54 rows=103 width=4) (actual time=0.020..0.033 rows=8\n> loops=1)\n> Index Cond: (from_user_id = 53529975)\n> -> Index Scan using index_categories_on_user_id on\n> categories (cost=0.00..54.34 rows=29 width=194) (actual\n> time=0.010..0.028 rows=10 loops=8)\n> Index Cond: (user_id = tags.to_user_id)\n> Total runtime: 0.641 ms\n>\n> And from 9.3:\n>\n> Limit (cost=1.00..2641.10 rows=20 width=202) (actual\n> time=9.933..711.372 rows=20 loops=1)\n> -> Nested Loop Semi Join (cost=1.00..9641758.39 rows=73041\n> width=202) (actual time=9.931..711.361 rows=20 loops=1)\n> -> Index Scan Backward using index_categories_on_recorded_on\n> on categories (cost=0.43..406943.98 rows=4199200 width=202) (actual\n> time=0.018..275.020 rows=170995 loops=1)\n> -> Index Scan using unique_index_tags on tags\n> (cost=0.57..2.20 rows=1 width=4) (actual time=0.002..0.002 rows=0\n> loops=170995)\n> Index Cond: ((from_user_id = 53529975) AND (to_user_id =\n> categories.user_id))\n> Total runtime: 711.457 ms\n>\n> So, here's what's happening here:\n>\n> As usual, PostgreSQL is dramatically undercounting n_distinct: it shows\n> chapters.user_id at 146,000 and the ratio of to_user_id:from_user_id as\n> being 1:105 (as opposed to 1:6, which is about the real ratio). This\n> means that PostgreSQL thinks it can find the 20 rows within the first 2%\n> of the index ... whereas it actually needs to scan 50% of the index to\n> find them.\n>\n> Removing LIMIT causes 9.3 to revert to the \"good\" plan, as expected.\n>\n> This is the core issue with abort-early plans; they depend on our\n> statistics being extremely accurate, which we know they are not. And if\n> they're wrong, the execution time climbs by 1000X or more. Abort-early\n> plans are inherently riskier than other types of query plans.\n>\n> What I'm not clear on is why upgrading from 9.1 to 9.3 would bring about\n> this change. The stats are no more than 10% different across the\n> version change.\n\nAmusingly on-topic rant I happened to read immediately after this by chance:\n\nhttp://wp.sigmod.org/?p=1075\n\nIs there a canonical case of where 'abort early' plans help? 
(I'm new\nto that term -- is it a recent planner innovation...got any handy\nlinks?)\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 12:15:34 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 09/19/2014 10:15 AM, Merlin Moncure wrote:\n> On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus <[email protected]> wrote:\n>> This is the core issue with abort-early plans; they depend on our\n>> statistics being extremely accurate, which we know they are not. And if\n>> they're wrong, the execution time climbs by 1000X or more. Abort-early\n>> plans are inherently riskier than other types of query plans.\n>>\n>> What I'm not clear on is why upgrading from 9.1 to 9.3 would bring about\n>> this change. The stats are no more than 10% different across the\n>> version change.\n> \n> Amusingly on-topic rant I happened to read immediately after this by chance:\n> \n> http://wp.sigmod.org/?p=1075\n> \n> Is there a canonical case of where 'abort early' plans help? (I'm new\n> to that term -- is it a recent planner innovation...got any handy\n> links?)\n\nYeah, here's an example of the canonical case:\n\nTable t1 ( a, b, c )\n\n- \"b\" is low-cardinality\n- \"c\" is high-cardinality\n- There are separate indexes on both b and c.\n\nSELECT a, b, c FROM t1\nWHERE b = 2\nORDER BY c LIMIT 1;\n\nIn this case, the fastest plan is usually to use the index on C and\nreturn the first row where the filter condition matches the filter on b.\n This can be an order of magnitude faster than using the index on b and\nthen resorting by c and taking the first row, if (b=2) happens to match\n20% of the table.\n\nThis is called an \"abort early\" plan because we expect to never finish\nthe scan on the index on c. We expect to scan the index on c, find the\nfirst row that matches b=2 and exit.\n\nThe problem with such plans is that they are \"risky\". As in, if we are\nwrong about our (b=2) stats, then we've just adopted a query plan which\nwill be 10X to 1000X slower than the more conventional plan.\n\nWe can see this in the bad plan I posted:\n\n Limit (cost=1.00..2641.10 rows=20 width=202) (actual\ntime=9.933..711.372 rows=20 loops=1)\n -> Nested Loop Semi Join (cost=1.00..9641758.39 rows=73041\nwidth=202) (actual time=9.931..711.361 rows=20 loops=1)\n -> Index Scan Backward using index_categories_on_recorded_on\non categories (cost=0.43..406943.98 rows=4199200 width=202) (actual\ntime=0.018..275.020 rows=170995 loops=1)\n\nNotice how the total cost of the plan is a fraction of the cost of the\ntwo steps which preceeded it? This is an indication that the planner\nexpects to be able to abort the index scan and nestloop join before it's\nmore than 2% through it.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 11:40:17 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 19 Sep 2014 19:40, \"Josh Berkus\" <[email protected]> wrote:\n>\n> On 09/19/2014 10:15 AM, Merlin Moncure wrote:\n> > On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus <[email protected]> wrote:\n> >> This is the core issue with abort-early plans; they depend on our\n> >> statistics being extremely accurate, which we know they are not. And if\n> >> they're wrong, the execution time climbs by 1000X or more. Abort-early\n> >> plans are inherently riskier than other types of query plans.\n\nAll plans are risky if the stats are wrong. It's one of the perennial\ndigressions that many postgres newcomers make to track worst case costs and\nprovide a knob for planner aggressiveness but it always breaks down when\nyou try to quantify the level of risk because you discover that even such\nsimple things as indeed scans versus sequential scans can be equally risky\neither way.\n\n> >> What I'm not clear on is why upgrading from 9.1 to 9.3 would bring\nabout\n> >> this change. The stats are no more than 10% different across the\n> >> version change.\n\nThere's no difference. Postgres has been estimating LIMIT costs this way\nsince before I came to postgres in 7.3.\n\n\n\n> > Is there a canonical case of where 'abort early' plans help? (I'm new\n> > to that term -- is it a recent planner innovation...got any handy\n> > links?)\n>\n> Yeah, here's an example of the canonical case:\n>\n> Table t1 ( a, b, c )\n>\n> - \"b\" is low-cardinality\n> - \"c\" is high-cardinality\n> - There are separate indexes on both b and c.\n>\n> SELECT a, b, c FROM t1\n> WHERE b = 2\n> ORDER BY c LIMIT 1;\n\nYou badly want a partial index on c WHERE b=2 for each value of 2 which\nappears in your queries.\n\nIt would be neat to have an opclass which worked like that. Which would\namount to having prefix compression perhaps.\n\nWhat plan does 9.1 come up with?\n\n\nOn 19 Sep 2014 19:40, \"Josh Berkus\" <[email protected]> wrote:\n>\n> On 09/19/2014 10:15 AM, Merlin Moncure wrote:\n> > On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus <[email protected]> wrote:\n> >> This is the core issue with abort-early plans; they depend on our\n> >> statistics being extremely accurate, which we know they are not. And if\n> >> they're wrong, the execution time climbs by 1000X or more. Abort-early\n> >> plans are inherently riskier than other types of query plans.\nAll plans are risky if the stats are wrong. It's one of the perennial digressions that many postgres newcomers make to track worst case costs and provide a knob for planner aggressiveness but it always breaks down when you try to quantify the level of risk because you discover that even such simple things as indeed scans versus sequential scans can be equally risky either way.\n> >> What I'm not clear on is why upgrading from 9.1 to 9.3 would bring about\n> >> this change. The stats are no more than 10% different across the\n> >> version change.\nThere's no difference. Postgres has been estimating LIMIT costs this way since before I came to postgres in 7.3.\n> > Is there a canonical case of where 'abort early' plans help? 
(I'm new\n> > to that term -- is it a recent planner innovation...got any handy\n> > links?)\n>\n> Yeah, here's an example of the canonical case:\n>\n> Table t1 ( a, b, c )\n>\n> - \"b\" is low-cardinality\n> - \"c\" is high-cardinality\n> - There are separate indexes on both b and c.\n>\n> SELECT a, b, c FROM t1\n> WHERE b = 2\n> ORDER BY c LIMIT 1;\nYou badly want a partial index on c WHERE b=2 for each value of 2 which appears in your queries.\nIt would be neat to have an opclass which worked like that. Which would amount to having prefix compression perhaps.\nWhat plan does 9.1 come up with?",
"msg_date": "Sat, 20 Sep 2014 07:38:03 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Sat, Sep 20, 2014 at 3:38 AM, Greg Stark <[email protected]> wrote:\n>> > Is there a canonical case of where 'abort early' plans help? (I'm new\n>> > to that term -- is it a recent planner innovation...got any handy\n>> > links?)\n>>\n>> Yeah, here's an example of the canonical case:\n>>\n>> Table t1 ( a, b, c )\n>>\n>> - \"b\" is low-cardinality\n>> - \"c\" is high-cardinality\n>> - There are separate indexes on both b and c.\n>>\n>> SELECT a, b, c FROM t1\n>> WHERE b = 2\n>> ORDER BY c LIMIT 1;\n>\n> You badly want a partial index on c WHERE b=2 for each value of 2 which\n> appears in your queries.\n>\n> It would be neat to have an opclass which worked like that. Which would\n> amount to having prefix compression perhaps.\n\nI've been looking at that exactly.\n\nOne complexity of it, is that splitting becomes much harder. As in, recursive.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 08:51:49 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "Greg Stark <[email protected]> writes:\n> On 19 Sep 2014 19:40, \"Josh Berkus\" <[email protected]> wrote:\n>> Yeah, here's an example of the canonical case:\n>> \n>> Table t1 ( a, b, c )\n>> \n>> - \"b\" is low-cardinality\n>> - \"c\" is high-cardinality\n>> - There are separate indexes on both b and c.\n>> \n>> SELECT a, b, c FROM t1\n>> WHERE b = 2\n>> ORDER BY c LIMIT 1;\n\n> You badly want a partial index on c WHERE b=2 for each value of 2 which\n> appears in your queries.\n\nWell, if it's *only* b = 2 that you ever search for, then maybe a partial\nindex would be a good answer. Personally I'd use a plain btree index on\n(b, c). The planner's been able to match this type of query to\nmulticolumn indexes for a long time.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 11:01:01 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 09/19/2014 11:38 PM, Greg Stark wrote:\n> \n> On 19 Sep 2014 19:40, \"Josh Berkus\" <[email protected]\n> <mailto:[email protected]>> wrote:\n>>\n>> On 09/19/2014 10:15 AM, Merlin Moncure wrote:\n>> > On Wed, Sep 17, 2014 at 7:11 PM, Josh Berkus <[email protected]\n> <mailto:[email protected]>> wrote:\n>> >> This is the core issue with abort-early plans; they depend on our\n>> >> statistics being extremely accurate, which we know they are not. And if\n>> >> they're wrong, the execution time climbs by 1000X or more. Abort-early\n>> >> plans are inherently riskier than other types of query plans.\n> \n> All plans are risky if the stats are wrong. It's one of the perennial\n> digressions that many postgres newcomers make to track worst case costs\n> and provide a knob for planner aggressiveness but it always breaks down\n> when you try to quantify the level of risk because you discover that\n> even such simple things as indeed scans versus sequential scans can be\n> equally risky either way.\n\nI've had a *wee* bit more experience with query plans than most Postgres\nnewcomers, Greg.\n\nWhile all query plan changes can result in regressions if they're bad,\nthere are certain kinds of plans which depend more on accurate stats\nthan others. Abort-early plans are the most extreme of these. Most\nlikely we should adjust the cost model for abort-early plans to take in\nour level of uncertainty, especially since we *know* that our n-distinct\nestimation is crap. For example, we could increase the estimated cost\nfor an abort-early index scan by 10X, to reflect our weak confidence in\nits correctness.\n\nWe could also probably do the same for plans which depend on column\ncorrelation estimates being accurate.\n\n> \n>> >> What I'm not clear on is why upgrading from 9.1 to 9.3 would bring\n> about\n>> >> this change. The stats are no more than 10% different across the\n>> >> version change.\n> \n> There's no difference. Postgres has been estimating LIMIT costs this way\n> since before I came to postgres in 7.3.\n\nThen why is the plan different in 9.1 and 9.3 with identical stats (I\ntested)?\n\n> \n> It would be neat to have an opclass which worked like that. Which would\n> amount to having prefix compression perhaps.\n> \n> What plan does 9.1 come up with?\n\nThat was the \"good\" plan from my original post.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 11:33:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Sat, Sep 20, 2014 at 1:33 PM, Josh Berkus <[email protected]> wrote:\n> For example, we could increase the estimated cost\n> for an abort-early index scan by 10X, to reflect our weak confidence in\n> its correctness.\n\nHas any progress been made on the performance farm? The problem with\nsuggestions like this (which seem pretty reasonable to me) is that\nwe've got no way of quantifying the downside. I think this is one\nexample of a class of plans that are high risk. Another one off the\ntop of my head is nestloop joins based on assumed selectivity of\nmultiple stacked quals. About 90% of the time, my reflective\nworkaround to these types of problems is to 'disable_nestloop' which\nworks around 90% of the time and the result are solved with monkeying\naround with 'OFFSET 0' etc. In the past, a GUC controlling planner\nrisk has been much discussed -- maybe it's still worth considering?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Sep 2014 08:55:59 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 09/22/2014 06:55 AM, Merlin Moncure wrote:\n> Has any progress been made on the performance farm? The problem with\n> suggestions like this (which seem pretty reasonable to me) is that\n> we've got no way of quantifying the downside. \n\nYeah, that's certainly an issue. The problem is that we'd need a\nbenchmark which actually created complex query plans. I believe that\nMark Wong is working on TPCH-based benchmarks, so maybe we'll get that.\n\n> I think this is one\n> example of a class of plans that are high risk. Another one off the\n> top of my head is nestloop joins based on assumed selectivity of\n> multiple stacked quals. \n\nYeah, that's another good example.\n\n> About 90% of the time, my reflective\n> workaround to these types of problems is to 'disable_nestloop' which\n> works around 90% of the time and the result are solved with monkeying\n> around with 'OFFSET 0' etc. In the past, a GUC controlling planner\n> risk has been much discussed -- maybe it's still worth considering?\n\nWe've hashed that out a bit, but frankly I think it's much more\nprofitable to pursue fixing the actual problem than providing a\nworkaround like \"risk\", such as:\n\na) fixing n_distinct estimation\nb) estimating stacked quals using better math (i.e. not assuming total\nrandomness)\nc) developing some kind of correlation stats\n\nOtherwise we would be just providing users with another knob there's no\nrational way to set.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Sep 2014 16:56:12 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 23 September 2014 00:56, Josh Berkus <[email protected]> wrote:\n\n> We've hashed that out a bit, but frankly I think it's much more\n> profitable to pursue fixing the actual problem than providing a\n> workaround like \"risk\", such as:\n>\n> a) fixing n_distinct estimation\n> b) estimating stacked quals using better math (i.e. not assuming total\n> randomness)\n> c) developing some kind of correlation stats\n>\n> Otherwise we would be just providing users with another knob there's no\n> rational way to set.\n\nI believe this is a serious issue for PostgreSQL users and one that\nneeds to be addressed.\n\nn_distinct can be fixed manually, so that is less of an issue.\n\nThe problem, as I see it, is different. We assume that if there are\n100 distinct values and you use LIMIT 1 that you would only need to\nscan 1% of rows. We assume that the data is arranged in the table in a\nvery homogenous layout. When data is not, and it seldom is, we get\nproblems.\n\nSimply put, assuming that LIMIT will reduce the size of all scans is\njust way wrong. I've seen many plans where increasing the LIMIT\ndramatically improves the plan.\n\nIf we can at least agree it is a problem, we can try to move forwards.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Sep 2014 09:06:07 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Fri, Sep 26, 2014 at 3:06 AM, Simon Riggs <[email protected]> wrote:\n> The problem, as I see it, is different. We assume that if there are\n> 100 distinct values and you use LIMIT 1 that you would only need to\n> scan 1% of rows. We assume that the data is arranged in the table in a\n> very homogenous layout. When data is not, and it seldom is, we get\n> problems.\n\nHm, good point -- 'data proximity'. At least in theory, can't this be\nmeasured and quantified? For example, given a number of distinct\nvalues, you could estimate the % of pages read (or maybe non\nsequential seeks relative to the number of pages) you'd need to read\nall instances of a particular value in the average (or perhaps the\nworst) case. One way of trying to calculate that would be to look at\nproximity of values in sampled pages (and maybe a penalty assigned for\nhigh update activity relative to table size). Data proximity would\nthen become a cost coefficient to the benefits of LIMIT.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Sep 2014 10:00:26 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 29 September 2014 16:00, Merlin Moncure <[email protected]> wrote:\n> On Fri, Sep 26, 2014 at 3:06 AM, Simon Riggs <[email protected]> wrote:\n>> The problem, as I see it, is different. We assume that if there are\n>> 100 distinct values and you use LIMIT 1 that you would only need to\n>> scan 1% of rows. We assume that the data is arranged in the table in a\n>> very homogenous layout. When data is not, and it seldom is, we get\n>> problems.\n>\n> Hm, good point -- 'data proximity'. At least in theory, can't this be\n> measured and quantified? For example, given a number of distinct\n> values, you could estimate the % of pages read (or maybe non\n> sequential seeks relative to the number of pages) you'd need to read\n> all instances of a particular value in the average (or perhaps the\n> worst) case. One way of trying to calculate that would be to look at\n> proximity of values in sampled pages (and maybe a penalty assigned for\n> high update activity relative to table size). Data proximity would\n> then become a cost coefficient to the benefits of LIMIT.\n\nThe necessary first step to this is to realise that we can't simply\napply the LIMIT as a reduction in query cost, in all cases.\n\nThe way I'm seeing it, you can't assume the LIMIT will apply to any\nIndexScan that doesn't have an index condition. If it has just a\nfilter, or nothing at all, just an ordering then it could easily scan\nthe whole index if the stats are wrong.\n\nSo plans like this could be wrong, by assuming the scan will end\nearlier because of the LIMIT than it actually will.\n\nLimit\n IndexScan (no index cond)\n\nLimit\n NestJoin\n IndexScan (no index cond)\n SomeScan\n\nLimit\n NestJoin\n NestJoin\n IndexScan (no index cond)\n SomeScan\n SomeScan\n\nand deeper...\n\nI'm looking for a way to identify and exclude such plans, assuming\nthat this captures at least some of the problem plans.\n\nComments? Test Cases?\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Sep 2014 21:53:10 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 09/26/2014 01:06 AM, Simon Riggs wrote:\n> On 23 September 2014 00:56, Josh Berkus <[email protected]> wrote:\n> \n>> We've hashed that out a bit, but frankly I think it's much more\n>> profitable to pursue fixing the actual problem than providing a\n>> workaround like \"risk\", such as:\n>>\n>> a) fixing n_distinct estimation\n>> b) estimating stacked quals using better math (i.e. not assuming total\n>> randomness)\n>> c) developing some kind of correlation stats\n>>\n>> Otherwise we would be just providing users with another knob there's no\n>> rational way to set.\n> \n> I believe this is a serious issue for PostgreSQL users and one that\n> needs to be addressed.\n> \n> n_distinct can be fixed manually, so that is less of an issue.\n\nIt's an issue for the 99.8% of our users who don't know what n_distinct\nis, let alone how to calculate it. Also, changing it requires an\nexclusive lock on the table. Of course, you and I have been over this\nissue before.\n\nOne thing I'm wondering is why our estimator is creates n_distinct as a\n% so seldom. Really, any time n_distinct is over 10K we should be\nestimating a % instead. Now, estimating that % has its own issues, but\nit does seem like a peculiar quirk of our stats model.\n\nAnyway, in the particular case I posted fixing n_distinct to realistic\nnumbers (%) fixed the query plan.\n\n> \n> The problem, as I see it, is different. We assume that if there are\n> 100 distinct values and you use LIMIT 1 that you would only need to\n> scan 1% of rows. We assume that the data is arranged in the table in a\n> very homogenous layout. When data is not, and it seldom is, we get\n> problems.\n> \n> Simply put, assuming that LIMIT will reduce the size of all scans is\n> just way wrong. I've seen many plans where increasing the LIMIT\n> dramatically improves the plan.\n> \n> If we can at least agree it is a problem, we can try to move forwards.\n\nThat is certainly another problem. Does correlation stat figure in the\nLIMIT calculation at all, currently? That's what correlation stat is\nfor, no?\n\nAlso, to be fair, physical correlation of rows can also lead to\nabort-early plans being extra fast, if everything we want is towards the\nbeginning of the index. Which means we'd need working multi-column\ncorrelation, which is a known hard problem.\n\nFor example, consider the query:\n\nSELECT id, updated_on FROM audit_log\nWHERE updated_on < '2010-01-01'\nORDER BY id LIMIT 10;\n\nIn an append-only table, that query is liable to be very fast with an\nabort-early plan scanning on an ID index (AEP from now on), since the\noldest rows are likely to correspond with the smallest IDs. But then\nthe user does this:\n\nSELECT id, updated_on FROM audit_log\nWHERE updated_on < '2010-01-01'\nORDER BY id DESC LIMIT 10;\n\n... and a completely different plan is called for, because using an AEP\nwill result in reverse scanning most of the index. However, I'm not\nsure the planner knows the difference, since it's only comparing the\nestimated selectivity of (updated_on < '2010-01-01') and seeing that\nit's 20% of rows. I bet you'd get an AEP in the 2nd case too.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Sep 2014 14:54:31 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "Simon Riggs <[email protected]> writes:\n> The way I'm seeing it, you can't assume the LIMIT will apply to any\n> IndexScan that doesn't have an index condition. If it has just a\n> filter, or nothing at all, just an ordering then it could easily scan\n> the whole index if the stats are wrong.\n\nThat statement applies with equal force to *any* plan with a LIMIT;\nit's not just index scans.\n\nThe real question is to what extent are the tuples satisfying the extra\nfilter condition randomly distributed with respect to the index order\n(or physical order, if it's a seqscan). The existing cost estimation\ncode effectively assumes that they're perfectly uniformly distributed;\nwhich is a good average-case assumption but can be horribly wrong in\nthe worst case.\n\nIf we could settle on some other model for the probable distribution\nof the matching tuples, we could adjust the cost estimates for LIMIT\naccordingly. I have not enough statistics background to know what a\nrealistic alternative would be.\n\nAnother possibility is to still assume a uniform distribution but estimate\nfor, say, a 90% probability instead of 50% probability that we'll find\nenough tuples after scanning X amount of the table. Again, I'm not too\nsure what that translates to in terms of the actual math, but it sounds\nlike something a statistics person could do in their sleep.\n\nI do not think we should estimate for the worst case though. If we do,\nwe'll hear cries of anguish from a lot of people, including many of the\nsame ones complaining now, because the planner stopped picking fast-start\nplans even for cases where they are orders of magnitude faster than the\nalternatives.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Sep 2014 19:00:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 30/09/14 12:00, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n>> The way I'm seeing it, you can't assume the LIMIT will apply to any\n>> IndexScan that doesn't have an index condition. If it has just a\n>> filter, or nothing at all, just an ordering then it could easily scan\n>> the whole index if the stats are wrong.\n> That statement applies with equal force to *any* plan with a LIMIT;\n> it's not just index scans.\n>\n> The real question is to what extent are the tuples satisfying the extra\n> filter condition randomly distributed with respect to the index order\n> (or physical order, if it's a seqscan). The existing cost estimation\n> code effectively assumes that they're perfectly uniformly distributed;\n> which is a good average-case assumption but can be horribly wrong in\n> the worst case.\n>\n> If we could settle on some other model for the probable distribution\n> of the matching tuples, we could adjust the cost estimates for LIMIT\n> accordingly. I have not enough statistics background to know what a\n> realistic alternative would be.\n>\n> Another possibility is to still assume a uniform distribution but estimate\n> for, say, a 90% probability instead of 50% probability that we'll find\n> enough tuples after scanning X amount of the table. Again, I'm not too\n> sure what that translates to in terms of the actual math, but it sounds\n> like something a statistics person could do in their sleep.\n>\n> I do not think we should estimate for the worst case though. If we do,\n> we'll hear cries of anguish from a lot of people, including many of the\n> same ones complaining now, because the planner stopped picking fast-start\n> plans even for cases where they are orders of magnitude faster than the\n> alternatives.\n>\n> \t\t\tregards, tom lane\n>\n\nIf you analyzed the tables in most production databases, you would find that they are almost invariably not uniformly distributed.\n\nMost likely they will be clumped, and if you plotted the frequency of values of a given column in a given range against the number of blocks, you are likely to see one or more distinct peaks. If the table has been CLUSTERed on that column, then they will very likely to be in one clump spanning contiguous blocks.\n\nI suspect that there are two distinct populations: one relating to values present before the last VACUUM, and ones added since.\n\nThere are so many factors to consider, pattern of CRUD operations, range of values in query, ... that probably prevent using very sophisticated approaches, but I would be happy to be proved wrong!\n\nThough I am fairly confident that if the distribution is not known in advance for a given table, then the percentage required to process to satisfy the limit is likely to be a lot larger than a uniform distribution would suggest.\n\nWould it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. 
But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n\nI have a nasty feeling that assuming a uniform distribution, may still \nend up being the best we can do - but I maybe being unduly pessimistic!.\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 30/09/14 12:00, Tom Lane wrote:\n\n\nSimon Riggs <[email protected]> writes:\n\n\nThe way I'm seeing it, you can't assume the LIMIT will apply to any\nIndexScan that doesn't have an index condition. If it has just a\nfilter, or nothing at all, just an ordering then it could easily scan\nthe whole index if the stats are wrong.\n\n\n\nThat statement applies with equal force to *any* plan with a LIMIT;\nit's not just index scans.\n\nThe real question is to what extent are the tuples satisfying the extra\nfilter condition randomly distributed with respect to the index order\n(or physical order, if it's a seqscan). The existing cost estimation\ncode effectively assumes that they're perfectly uniformly distributed;\nwhich is a good average-case assumption but can be horribly wrong in\nthe worst case.\n\nIf we could settle on some other model for the probable distribution\nof the matching tuples, we could adjust the cost estimates for LIMIT\naccordingly. I have not enough statistics background to know what a\nrealistic alternative would be.\n\nAnother possibility is to still assume a uniform distribution but estimate\nfor, say, a 90% probability instead of 50% probability that we'll find\nenough tuples after scanning X amount of the table. Again, I'm not too\nsure what that translates to in terms of the actual math, but it sounds\nlike something a statistics person could do in their sleep.\n\nI do not think we should estimate for the worst case though. If we do,\nwe'll hear cries of anguish from a lot of people, including many of the\nsame ones complaining now, because the planner stopped picking fast-start\nplans even for cases where they are orders of magnitude faster than the\nalternatives.\n\n\t\t\tregards, tom lane\n\n\n\n\nIf you analyzed the tables in most production databases, you would find that they are almost invariably not uniformly distributed.\n\nMost likely they will be clumped, and if you plotted the frequency of values of a given column in a given range against the number of blocks, you are likely to see one or more distinct peaks. If the table has been CLUSTERed on that column, then they will very likely to be in one clump spanning contiguous blocks.\n\nI suspect that there are two distinct populations: one relating to values present before the last VACUUM, and ones added since.\n\nThere are so many factors to consider, pattern of CRUD operations, range of values in query, ... that probably prevent using very sophisticated approaches, but I would be happy to be proved wrong!\n\nThough I am fairly confident that if the distribution is not known in advance for a given table, then the percentage required to process to satisfy the limit is likely to be a lot larger than a uniform distribution would suggest.\n\nWould it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. 
But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n\n\n I have a nasty feeling that assuming a uniform distribution, may\n still end up being the best we can do - but I maybe being unduly\n pessimistic!.\n\n\n Cheers,\n Gavin",
"msg_date": "Tue, 30 Sep 2014 15:12:00 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Fri, Sep 26, 2014 at 9:06 AM, Simon Riggs <[email protected]> wrote:\n> If we can at least agree it is a problem, we can try to move forwards.\n\nWell that's a good question. I don't think we do and I think the\nreason why is because we haven't actually pinned down exactly what is\nthe problem.\n\nThe real problem here is that the ideal index for the query isn't there\nand Postgres is trying to choose between two inappropriatedoes not\nexist indexes where that decision is very very hard for it to make. If\nit guesses\nwrong in *either* direction it'll go very poorly. We can try to\nimprove the frequency of getting the right decision but it'll never be\n100% and even if it was it'll still not perform as well as the right\nindex would have.\n\nI have seen plenty of applications where the slowdown was in the\nreverse direction --\nwhere a query like \"find the last login for the current user\" was\nplanned just as Josh is asking for by retrieving all the records for\nthe user and sorting by login time and it caused large problems in\nproduction when some users had a disproportionately large number of\nrecords.\n\nThe real solution for users is to create the compound index on both columns (or\npartial index in some cases). Trying to make do with an ordered scan\nor a index scan and sort are both going to cause problems in real\nworld usage.\n\nIn fact I think the real story here is that Postgres is doing a\nsurprisingly good job at making do without the right index and that's\ncausing users to get surprisingly far before they run into problems.\nThat may not be the best thing for users in the long run but that's a\nproblem that should be solved by better development tools to help\nusers identify scalability problems early.\n\n\n\n-- \ngreg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 04:11:58 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 29 September 2014 22:54, Josh Berkus <[email protected]> wrote:\n> On 09/26/2014 01:06 AM, Simon Riggs wrote:\n>> On 23 September 2014 00:56, Josh Berkus <[email protected]> wrote:\n>>\n>>> We've hashed that out a bit, but frankly I think it's much more\n>>> profitable to pursue fixing the actual problem than providing a\n>>> workaround like \"risk\", such as:\n>>>\n>>> a) fixing n_distinct estimation\n>>> b) estimating stacked quals using better math (i.e. not assuming total\n>>> randomness)\n>>> c) developing some kind of correlation stats\n>>>\n>>> Otherwise we would be just providing users with another knob there's no\n>>> rational way to set.\n>>\n>> I believe this is a serious issue for PostgreSQL users and one that\n>> needs to be addressed.\n>>\n>> n_distinct can be fixed manually, so that is less of an issue.\n>\n> It's an issue for the 99.8% of our users who don't know what n_distinct\n> is, let alone how to calculate it. Also, changing it requires an\n> exclusive lock on the table. Of course, you and I have been over this\n> issue before.\n\nIn 9.4 you'll be able to set n_distinct using only a Share Update\nExclusive lock.\n\nSo that's no longer a problem.\n\nThe quality of the n_distinct itself is an issue, but with no current\nsolution, but then that is why you can set it manually,\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 08:09:10 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 30 September 2014 00:00, Tom Lane <[email protected]> wrote:\n> Simon Riggs <[email protected]> writes:\n>> The way I'm seeing it, you can't assume the LIMIT will apply to any\n>> IndexScan that doesn't have an index condition. If it has just a\n>> filter, or nothing at all, just an ordering then it could easily scan\n>> the whole index if the stats are wrong.\n>\n> That statement applies with equal force to *any* plan with a LIMIT;\n> it's not just index scans.\n\nAgreed\n\n> The real question is to what extent are the tuples satisfying the extra\n> filter condition randomly distributed with respect to the index order\n> (or physical order, if it's a seqscan).\n\nAgreed\n\n> The existing cost estimation\n> code effectively assumes that they're perfectly uniformly distributed;\n> which is a good average-case assumption but can be horribly wrong in\n> the worst case.\n\nAgreed. This is the main observation from which we can work.\n\n> If we could settle on some other model for the probable distribution\n> of the matching tuples, we could adjust the cost estimates for LIMIT\n> accordingly. I have not enough statistics background to know what a\n> realistic alternative would be.\n\nI'm not sure that the correlation alone is sufficient to be able to do\nthat. We'd need to estimate where the values looked for are likely to\nbe wrt other values, then increase estimate accordingly. That sounds\nlike a lot of pushups grovelling through quals and comparing against\nstats. So my thinking is actually to rule that out, unless you've some\nideas for how to do that?\n\n> Another possibility is to still assume a uniform distribution but estimate\n> for, say, a 90% probability instead of 50% probability that we'll find\n> enough tuples after scanning X amount of the table. Again, I'm not too\n> sure what that translates to in terms of the actual math, but it sounds\n> like something a statistics person could do in their sleep.\n>\n> I do not think we should estimate for the worst case though. If we do,\n> we'll hear cries of anguish from a lot of people, including many of the\n> same ones complaining now, because the planner stopped picking fast-start\n> plans even for cases where they are orders of magnitude faster than the\n> alternatives.\n\nFast start plans still make sense when performing an IndexScan with no\nfilter conditions. Those types of plan should not be changed from\ncurrent costing - they are accurate, good and very important because\nof their frequency in real workloads.\n\nWhat I think we are seeing is Ordered plans being selected too often\nin preference to Sorted plans when we make selectivity or stats\nerrors. As well as data distributions that aren't correctly described\nby the statistics causing much longer execution times.\n\nHere are some plan selection strategies\n\n* Cost based - attempt to exactly calculate the cost based upon\nexisting stats - increase the complexity of cost calc to cover other\naspects. Even if we do that, these may not be that helpful in covering\nthe cases where the stats turn out to be wrong.\n\n* Risk based - A risk adjusted viewpoint would be that we should treat\nthe cost as mid-way between the best and the worst. 
The worst is\nclearly scanning (100% - N) of the tuples, the best is just N tuples.\nSo we should be costing scans with excess filter conditions as a (100%\nScan)/2, no matter the conditions, based purely upon risk.\n\n* Simplified heuristic - deselect ordered plans when they are driven\nfrom scans without quals or indexscans with filters, since the risk\nadjusted cost is likely to be higher than the sorted cost. Inspecting\nthe plan tree for this could be quite costly, so would only be done\nwhen the total cost is $high, prior to it being adjusted by LIMIT.\n\n\nIn terms of practical steps... I suggest the following:\n\n* Implement enable_orderedscan = on (default) | off. A switch to allow\nplans to de-select ordered plans, so we can more easily see the\neffects of such plans in the wild.\n\n* Code heuristic approach - I can see where to add my heuristic in the\ngrouping planner. So we just need to do a left? deep search of the\nplan tree looking for scans of the appropriate type and bail out if we\nfind one.\n\nThoughts?\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 10:25:23 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
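A sketch of the risk-based costing Simon outlines above, treating the cost of a filtered or qual-less ordered scan feeding a LIMIT as the midpoint between the best and worst cases (illustrative Python, assuming N matching tuples are needed):

```python
def risk_adjusted_rows_scanned(total_rows, needed_rows):
    """Simon's risk-based proposal: cost the scan midway between the
    best case (the matches are right at the start of the scan) and the
    worst case (they sit at the very end)."""
    best = needed_rows                   # the best is just N tuples
    worst = total_rows - needed_rows     # the worst is (100% - N) of the tuples
    return (best + worst) / 2.0          # ~ (100% scan) / 2

# Compare with the uniform assumption: for a 10M-row table with 1M
# matches and LIMIT 10 it predicts ~100 rows scanned, while the
# risk-adjusted figure stays near 5M no matter what the conditions are.
```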
{
"msg_contents": "\n>> The existing cost estimation\n>> code effectively assumes that they're perfectly uniformly distributed;\n>> which is a good average-case assumption but can be horribly wrong in\n>> the worst case.\n\n\nSorry, just an outsider jumping in with a quick comment.\n\nEvery year or two the core count goes up. Can/should/does postgres ever attempt two strategies in parallel, in cases where strategy A is generally good but strategy B prevents bad worst case behaviour? Kind of like a Schrödinger's Cat approach to scheduling. What problems would it raise?\n\nGraeme. \n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 11:34:48 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Tue, Sep 30, 2014 at 8:34 AM, Graeme B. Bell <[email protected]> wrote:\n>\n>>> The existing cost estimation\n>>> code effectively assumes that they're perfectly uniformly distributed;\n>>> which is a good average-case assumption but can be horribly wrong in\n>>> the worst case.\n>\n>\n> Sorry, just an outsider jumping in with a quick comment.\n>\n> Every year or two the core count goes up. Can/should/does postgres ever attempt two strategies in parallel, in cases where strategy A is generally good but strategy B prevents bad worst case behaviour? Kind of like a Schrödinger's Cat approach to scheduling.\n\n> What problems would it raise?\n\nInterleaved I/O, that would kill performance for both plans if it\nhappens on rotating media.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 12:59:34 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "\"Graeme B. Bell\" <[email protected]> writes:\n> Every year or two the core count goes up. Can/should/does postgres ever attempt two strategies in parallel, in cases where strategy A is generally good but strategy B prevents bad worst case behaviour? Kind of like a Schr�dinger's Cat approach to scheduling. What problems would it raise?\n\nYou can't run two plans and have them both returning rows to the client,\nor performing inserts/updates/deletes as the case may be.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 12:32:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower <[email protected]\n> wrote:\n\n>\n> Would it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n>\n> I have a nasty feeling that assuming a uniform distribution, may still\n> end up being the best we can do - but I maybe being unduly pessimistic!.\n>\n\nAs a semi-competent statistician, my gut feeling is that our best bet would\nbe not to rely on the competence of statisticians for too much, and instead\ntry to give the executor the ability to abandon a fruitless path and pick a\ndifferent plan instead. Of course this option is foreclosed once a tuple is\nreturned to the client (unless the ctid is also cached, so we can make sure\nnot to send it again on the new plan).\n\nI think that the exponential explosion of possibilities is going to be too\ngreat to analyze in any rigorous way.\n\nCheers,\n\nJeff\n\nOn Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower <[email protected]> wrote:\n\nWould it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n\n\n I have a nasty feeling that assuming a uniform distribution, may\n still end up being the best we can do - but I maybe being unduly\n pessimistic!.As a semi-competent statistician, my gut feeling is that our best bet would be not to rely on the competence of statisticians for too much, and instead try to give the executor the ability to abandon a fruitless path and pick a different plan instead. Of course this option is foreclosed once a tuple is returned to the client (unless the ctid is also cached, so we can make sure not to send it again on the new plan).I think that the exponential explosion of possibilities is going to be too great to analyze in any rigorous way.Cheers,Jeff",
"msg_date": "Tue, 30 Sep 2014 09:54:44 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
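The ctid-caching escape hatch Jeff mentions could look roughly like this (a conceptual Python sketch; `first_plan`, `second_plan` and the fruitlessness test are hypothetical stand-ins, not executor APIs):

```python
def run_with_plan_switch(first_plan, second_plan, looks_fruitless):
    """Abandon a fruitless plan mid-flight. Rows already returned are
    remembered by their ctid so the replacement plan can skip them and
    the client never sees a duplicate."""
    sent = set()
    for ctid, row in first_plan():
        sent.add(ctid)
        yield row
        if looks_fruitless():      # e.g. far more work done than predicted
            break
    else:
        return                     # first plan finished normally
    for ctid, row in second_plan():
        if ctid not in sent:       # don't send it again on the new plan
            yield row
```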
{
"msg_contents": "\nThanks for your replies everyone. \n\n> You can't run two plans and have them both returning rows to the client,\n\nThat wasn't what I had in mind. \n\n\bI can envisage cases where the worst case behaviour of one plan results in zero rows by the time the alternative plan has generated the complete result, never mind a single row (e.g. anything with LIMIT in it could fall into that category). Maybe it's enough to alleviate the problems caused by planning heuristics known to have bad worst-case performance that is hard to avoid with a single-threaded approach?\n\nProviding we're not modifying data in the query, and providing we kill the 'loser' thread when either (the first result / all results) come in, maybe there's value in letting them race and picking the best plan retrospectively.\n\n\nI guess it's going into another topic, but I wonder what % of DBs/queries look like this: \n\n- little or no I/O thrash (e.g. tuples mostly in memory already or DB configured to have a relatively low 'random_page_cost')\n- ordered results, or, the whole result set is being produced at once.\n- SELECTs only\n\n\bIn my own work (national scale GIS) this is what most of our queries & query environments look like. \n\nGraeme\n\n\nOn 30 Sep 2014, at 18:32, Tom Lane <[email protected]> wrote:\n\n> \"Graeme B. Bell\" <[email protected]> writes:\n>> Every year or two the core count goes up. Can/should/does postgres ever attempt two strategies in parallel, in cases where strategy A is generally good but strategy B prevents bad worst case behaviour? Kind of like a Schrödinger's Cat approach to scheduling. What problems would it raise?\n> \n> You can't run two plans and have them both returning rows to the client,\n> or performing inserts/updates/deletes as the case may be.\n> \n> \t\t\tregards, tom lane\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 17:07:09 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
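Graeme's race-and-kill scheme, reduced to its skeleton (hedged Python sketch; real cancellation of a running executor would take far more than `Future.cancel`, which is one of the practical obstacles):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def race_plans(plan_a, plan_b):
    """Run two read-only strategies concurrently and keep whichever
    produces its complete result set first."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(plan_a), pool.submit(plan_b)]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for loser in pending:
            # Best effort only: a thread that is already running is not
            # interrupted, so the 'loser' keeps consuming resources.
            loser.cancel()
        return next(iter(done)).result()
```

As Graeme notes, this only makes sense for SELECT-only workloads, and as Claudio points out, the two racers can also destroy each other's I/O patterns on rotating media.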
{
"msg_contents": "On Mon, Sep 29, 2014 at 2:54 PM, Josh Berkus <[email protected]> wrote:\n\n> On 09/26/2014 01:06 AM, Simon Riggs wrote:\n> > On 23 September 2014 00:56, Josh Berkus <[email protected]> wrote:\n> >\n> >> We've hashed that out a bit, but frankly I think it's much more\n> >> profitable to pursue fixing the actual problem than providing a\n> >> workaround like \"risk\", such as:\n> >>\n> >> a) fixing n_distinct estimation\n> >> b) estimating stacked quals using better math (i.e. not assuming total\n> >> randomness)\n> >> c) developing some kind of correlation stats\n> >>\n> >> Otherwise we would be just providing users with another knob there's no\n> >> rational way to set.\n> >\n> > I believe this is a serious issue for PostgreSQL users and one that\n> > needs to be addressed.\n> >\n> > n_distinct can be fixed manually, so that is less of an issue.\n>\n> It's an issue for the 99.8% of our users who don't know what n_distinct\n> is, let alone how to calculate it. Also, changing it requires an\n> exclusive lock on the table. Of course, you and I have been over this\n> issue before.\n>\n\n\nIf 99.6% of our users don't have a problem with n_distinct in their system,\nthat would mean that only 50% of the people with the problem don't know how\nto solve it. And those people can usually get excellent free help on the\ninternet.\n\nBut if the problem not with n_distinct, but rather with most_common_freqs\n(which I encounter more often than problems with n_distinct), all I can do\nis shrug and say \"yeah I know about that problem. Either crank up\nstatistics target as high as it will go, or it sucks to be you.\"\n\n\n>\n> One thing I'm wondering is why our estimator is creates n_distinct as a\n> % so seldom. Really, any time n_distinct is over 10K we should be\n> estimating a % instead. Now, estimating that % has its own issues, but\n> it does seem like a peculiar quirk of our stats model.\n>\n> Anyway, in the particular case I posted fixing n_distinct to realistic\n> numbers (%) fixed the query plan.\n>\n\nBut wouldn't fixing the absolute number also have fixed the plan? If you\nare going to set a number manually and then nail it in place so that\nanalyze stops changing it, then I can certainly see how the fractional\nmethod is desirable. But if the goal is not to do that but have the\ncorrect value estimated in the first place, I don't really see much benefit\nfrom converting the estimate into a fraction and then back again.\n\n\n> >\n> > The problem, as I see it, is different. We assume that if there are\n> > 100 distinct values and you use LIMIT 1 that you would only need to\n> > scan 1% of rows. We assume that the data is arranged in the table in a\n> > very homogenous layout. When data is not, and it seldom is, we get\n> > problems.\n> >\n> > Simply put, assuming that LIMIT will reduce the size of all scans is\n> > just way wrong. I've seen many plans where increasing the LIMIT\n> > dramatically improves the plan.\n> >\n> > If we can at least agree it is a problem, we can try to move forwards.\n>\n\nI don't think anyone doubts there is a problem (many more than one of\nthem), there is just disagreement about the priority and what can be done\nabout it.\n\n\n>\n> That is certainly another problem. Does correlation stat figure in the\n> LIMIT calculation at all, currently? That's what correlation stat is\n> for, no?\n>\n\nI don't think correlation is up to the task as a complete solution,\nalthough it might help a little. 
There is no way a simple correlation can\nencode that John retired 15 years ago and hasn't logged on since, while\nJohannes was hired yesterday and never logged on before then.\n\n Cheers,\n\nJeff\n\nOn Mon, Sep 29, 2014 at 2:54 PM, Josh Berkus <[email protected]> wrote:On 09/26/2014 01:06 AM, Simon Riggs wrote:\n> On 23 September 2014 00:56, Josh Berkus <[email protected]> wrote:\n>\n>> We've hashed that out a bit, but frankly I think it's much more\n>> profitable to pursue fixing the actual problem than providing a\n>> workaround like \"risk\", such as:\n>>\n>> a) fixing n_distinct estimation\n>> b) estimating stacked quals using better math (i.e. not assuming total\n>> randomness)\n>> c) developing some kind of correlation stats\n>>\n>> Otherwise we would be just providing users with another knob there's no\n>> rational way to set.\n>\n> I believe this is a serious issue for PostgreSQL users and one that\n> needs to be addressed.\n>\n> n_distinct can be fixed manually, so that is less of an issue.\n\nIt's an issue for the 99.8% of our users who don't know what n_distinct\nis, let alone how to calculate it. Also, changing it requires an\nexclusive lock on the table. Of course, you and I have been over this\nissue before.If 99.6% of our users don't have a problem with n_distinct in their system, that would mean that only 50% of the people with the problem don't know how to solve it. And those people can usually get excellent free help on the internet.But if the problem not with n_distinct, but rather with most_common_freqs (which I encounter more often than problems with n_distinct), all I can do is shrug and say \"yeah I know about that problem. Either crank up statistics target as high as it will go, or it sucks to be you.\" \n\nOne thing I'm wondering is why our estimator is creates n_distinct as a\n% so seldom. Really, any time n_distinct is over 10K we should be\nestimating a % instead. Now, estimating that % has its own issues, but\nit does seem like a peculiar quirk of our stats model.\n\nAnyway, in the particular case I posted fixing n_distinct to realistic\nnumbers (%) fixed the query plan.But wouldn't fixing the absolute number also have fixed the plan? If you are going to set a number manually and then nail it in place so that analyze stops changing it, then I can certainly see how the fractional method is desirable. But if the goal is not to do that but have the correct value estimated in the first place, I don't really see much benefit from converting the estimate into a fraction and then back again.\n\n>\n> The problem, as I see it, is different. We assume that if there are\n> 100 distinct values and you use LIMIT 1 that you would only need to\n> scan 1% of rows. We assume that the data is arranged in the table in a\n> very homogenous layout. When data is not, and it seldom is, we get\n> problems.\n>\n> Simply put, assuming that LIMIT will reduce the size of all scans is\n> just way wrong. I've seen many plans where increasing the LIMIT\n> dramatically improves the plan.\n>\n> If we can at least agree it is a problem, we can try to move forwards.I don't think anyone doubts there is a problem (many more than one of them), there is just disagreement about the priority and what can be done about it. \n\nThat is certainly another problem. Does correlation stat figure in the\nLIMIT calculation at all, currently? That's what correlation stat is\nfor, no?I don't think correlation is up to the task as a complete solution, although it might help a little. 
There is no way a simple correlation can encode that John retired 15 years ago and hasn't logged on since, while Johannes was hired yesterday and never logged on before then. Cheers,Jeff",
"msg_date": "Tue, 30 Sep 2014 10:28:02 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
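The homogeneity assumption being debated here boils down to one line of arithmetic. A sketch (Python, illustrative) of the scan-size estimate implied by the quoted "100 distinct values, LIMIT 1" example:

```python
def rows_scanned_uniform(total_rows, matching_rows, limit):
    """Planner-style estimate: if the M matching rows are spread evenly
    through the scan, collecting `limit` of them visits roughly
    N * limit / M rows."""
    if matching_rows <= 0:
        return total_rows
    return min(total_rows, total_rows * limit / matching_rows)

# 10M rows, 100 equally common values (so 100k matches each), LIMIT 1:
#   rows_scanned_uniform(10_000_000, 100_000, 1)  ->  100
# If the wanted value actually sits at the far end of the scan, the true
# figure approaches 10M -- the abort-early disaster of the subject line.
```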
{
"msg_contents": "On 01/10/14 05:54, Jeff Janes wrote:\n> On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n>\n> Would it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n>\n> I have a nasty feeling that assuming a uniform distribution, may\n> still end up being the best we can do - but I maybe being unduly\n> pessimistic!.\n>\n>\n> As a semi-competent statistician, my gut feeling is that our best bet \n> would be not to rely on the competence of statisticians for too much, \n> and instead try to give the executor the ability to abandon a \n> fruitless path and pick a different plan instead. Of course this \n> option is foreclosed once a tuple is returned to the client (unless \n> the ctid is also cached, so we can make sure not to send it again on \n> the new plan).\n>\n> I think that the exponential explosion of possibilities is going to be \n> too great to analyze in any rigorous way.\n>\n> Cheers,\n>\n> Jeff\nMany moons ago, I passed several 300 level statistics papers.\n\nI looked at this problem and found it was too hard to even properly \ncharacterise the problem (looks 'simple' - if you don't look too \nclosely), and ended up feeling it was definitely 'way above my pay \ngrade'! :-)\n\nIt might be possible to tackle it more pragmatically, instead of trying \nto be all analytic and rigorously list all the possible influences, have \na look at queries of this nature that are taking far too long. Then get \na feel for combinations of issues involved and how they contribute. If \nyou have enough data, you might be able to use something like Principle \nComponent Analysis (I was fortunate to meet a scientist who had got \nheavily into this area of statistics). Such an approach might yield \nvaluable insights, even if the problem is not fully characterised, let \nalone 'solved'.\n\n\nCheers,\nGavin\n\n\n\n\n\n\nOn 01/10/14 05:54, Jeff Janes wrote:\n\n\n\n\nOn Mon, Sep 29, 2014 at 7:12 PM,\n Gavin Flower <[email protected]>\n wrote:\n\n\n\n\n\n\n\n\nWould it be feasible to get a competent statistician to advise what data to collect, and to analyze it? Maybe it is possible to get a better estimate on how much of a table needs to be scanned, based on some fairly simple statistics. But unless research is done, it is probably impossible to determine what statistics might be useful, and how effective a better estimate could be.\n\n I have a\n nasty feeling that assuming a uniform distribution,\n may still end up being the best we can do - but I\n maybe being unduly pessimistic!.\n\n\n\n\nAs a semi-competent statistician, my gut feeling is\n that our best bet would be not to rely on the competence\n of statisticians for too much, and instead try to give the\n executor the ability to abandon a fruitless path and pick\n a different plan instead. 
Of course this option is\n foreclosed once a tuple is returned to the client (unless\n the ctid is also cached, so we can make sure not to send\n it again on the new plan).\n\n\nI think that the exponential explosion of possibilities\n is going to be too great to analyze in any rigorous way.\n\n\nCheers,\n\n\nJeff\n\n\n\n\n Many moons ago, I passed several 300 level statistics papers.\n\n I looked at this problem and found it was too hard to even properly\n characterise the problem (looks 'simple' - if you don't look too\n closely), and ended up feeling it was definitely 'way above my pay\n grade'! :-)\n\n It might be possible to tackle it more pragmatically, instead of\n trying to be all analytic and rigorously list all the possible\n influences, have a look at queries of this nature that are taking\n far too long. Then get a feel for combinations of issues involved\n and how they contribute. If you have enough data, you might be able\n to use something like Principle Component Analysis (I was fortunate\n to meet a scientist who had got heavily into this area of\n statistics). Such an approach might yield valuable insights, even\n if the problem is not fully characterised, let alone 'solved'.\n\n\n Cheers,\n Gavin",
"msg_date": "Wed, 01 Oct 2014 10:11:20 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Tue, Sep 30, 2014 at 11:54 AM, Jeff Janes <[email protected]> wrote:\n> On Mon, Sep 29, 2014 at 7:12 PM, Gavin Flower\n> <[email protected]> wrote:\n>>\n>>\n>> Would it be feasible to get a competent statistician to advise what data\n>> to collect, and to analyze it? Maybe it is possible to get a better\n>> estimate on how much of a table needs to be scanned, based on some fairly\n>> simple statistics. But unless research is done, it is probably impossible\n>> to determine what statistics might be useful, and how effective a better\n>> estimate could be.\n>>\n>> I have a nasty feeling that assuming a uniform distribution, may still end\n>> up being the best we can do - but I maybe being unduly pessimistic!.\n>\n> As a semi-competent statistician, my gut feeling is that our best bet would\n> be not to rely on the competence of statisticians for too much, and instead\n> try to give the executor the ability to abandon a fruitless path and pick a\n> different plan instead. Of course this option is foreclosed once a tuple is\n> returned to the client (unless the ctid is also cached, so we can make sure\n> not to send it again on the new plan).\n>\n> I think that the exponential explosion of possibilities is going to be too\n> great to analyze in any rigorous way.\n\nCall it the 'Parking in Manhattan' strategy -- you know when it's time\nto pull forward when you've smacked into the car behind you.\n\nKidding aside, this might be the path forward since it's A. more\ngeneral and can catch all kinds of problem cases that our statistics\nsystem won't/can't catch and B. At least in my case it seems like more\ncomplicated plans tend to not return much data until the inner most\nrisky parts have been involved. Even if that wasn't the case,\nwithholding data to the client until a user configurable time\nthreshold had been passed (giving the planner time to back up if\nnecessary) would be a reasonable user facing tradeoff via GUC:\n'max_planner_retry_time'.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Sep 2014 17:14:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 30 September 2014 18:28, Jeff Janes <[email protected]> wrote:\n\n>> Anyway, in the particular case I posted fixing n_distinct to realistic\n>> numbers (%) fixed the query plan.\n>\n>\n> But wouldn't fixing the absolute number also have fixed the plan?\n\nThere are two causes of this issue.\n\n1. Poor estimates of n_distinct. Fixable by user.\n\n2. Poor assumption of homogeneous distribution. No way for user to\nfix. Insufficient stats detail to be able to solve in current planner.\n\nI see (2) as the main source of issues, since as we observe, (1) is fixable.\n\nAn example is a social media application where the business query is\n\"Display the last 10 posts\". If the user is a frequent, recent user\nthen the query could come back very quickly, so a reverse scan on\npost_id would work great. If the user hasn't logged on for ages, then\nthat plan needs to scan lots and lots of data to get to find 10 posts.\nThat gives the problem that only certain users experience poor\nperformance - even the data isn't consistent in its distribution, so\nstats wouldn't help much, even if we could capture the profile of the\n\"typical user\".\n\n>> > The problem, as I see it, is different. We assume that if there are\n>> > 100 distinct values and you use LIMIT 1 that you would only need to\n>> > scan 1% of rows. We assume that the data is arranged in the table in a\n>> > very homogenous layout. When data is not, and it seldom is, we get\n>> > problems.\n>> >\n>> > Simply put, assuming that LIMIT will reduce the size of all scans is\n>> > just way wrong. I've seen many plans where increasing the LIMIT\n>> > dramatically improves the plan.\n>> >\n>> > If we can at least agree it is a problem, we can try to move forwards.\n>\n>\n> I don't think anyone doubts there is a problem (many more than one of them),\n> there is just disagreement about the priority and what can be done about it.\n\n\n>> That is certainly another problem. Does correlation stat figure in the\n>> LIMIT calculation at all, currently? That's what correlation stat is\n>> for, no?\n>\n>\n> I don't think correlation is up to the task as a complete solution, although\n> it might help a little. There is no way a simple correlation can encode\n> that John retired 15 years ago and hasn't logged on since, while Johannes\n> was hired yesterday and never logged on before then.\n\nAh, OK, essentially the same example.\n\nWhich is why I ruled out correlation stats based approaches and\nsuggested a risk-weighted cost approach.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 1 Oct 2014 00:01:46 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
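Simon's social-media example can be made concrete in a few lines (hypothetical Python; positions are counted backwards from the newest post):

```python
def backward_scan_depth(user_post_positions, want=10):
    """How many index entries a reverse scan on post_id must visit to
    collect `want` posts by one user. Positions are offsets from the
    newest post, so a frequent recent poster has small offsets."""
    pos = sorted(user_post_positions)
    if len(pos) < want:
        return None            # scans the whole index and still comes up short
    return pos[want - 1] + 1

# Active user who posted all week:      backward_scan_depth(range(10)) -> 10
# Dormant user whose last posts are 9M entries back:
#   backward_scan_depth(range(9_000_000, 9_000_010))      -> 9_000_010
```

Same query, same plan, five or six orders of magnitude apart; no per-column statistic captures which case a given execution will hit.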
{
"msg_contents": "On 09/30/2014 04:01 PM, Simon Riggs wrote:\n> On 30 September 2014 18:28, Jeff Janes <[email protected]> wrote:\n> \n>>> Anyway, in the particular case I posted fixing n_distinct to realistic\n>>> numbers (%) fixed the query plan.\n>>\n>>\n>> But wouldn't fixing the absolute number also have fixed the plan?\n> \n> There are two causes of this issue.\n> \n> 1. Poor estimates of n_distinct. Fixable by user.\n> \n> 2. Poor assumption of homogeneous distribution. No way for user to\n> fix. Insufficient stats detail to be able to solve in current planner.\n> \n> I see (2) as the main source of issues, since as we observe, (1) is fixable.\n\nI disagree that (1) is not worth fixing just because we've provided\nusers with an API to override the stats. It would unquestionably be\nbetter for us to have a better n_distinct estimate in the first place.\nFurther, this is an easier problem to solve, and fixing n_distinct\nestimates would fix a large minority of currently pathological queries.\n It's like saying \"hey, we don't need to fix the leak in your radiator,\nwe've given you a funnel in the dashboard you can pour water into.\"\n\nI do agree that (2) is worth fixing *as well*. In a first\napproximation, one possibility (as Tom suggests) would be to come up\nwith a mathematical model for a selectivity estimate which was somewhere\n*between* homogenous distribution and the worst case. While that\nwouldn't solve a lot of cases, it would be a start towards having a\nbetter model.\n\n>> I don't think correlation is up to the task as a complete solution, although\n>> it might help a little. There is no way a simple correlation can encode\n>> that John retired 15 years ago and hasn't logged on since, while Johannes\n>> was hired yesterday and never logged on before then.\n> \n> Ah, OK, essentially the same example.\n> \n> Which is why I ruled out correlation stats based approaches and\n> suggested a risk-weighted cost approach.\n\nBy \"risk-weighted\" you mean just adjusting cost estimates based on what\nthe worst case cost looks like, correct? That seemed to be your\nproposal from an earlier post. If so, we're in violent agreement here.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 01 Oct 2014 11:56:32 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
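One way to explore the "somewhere between uniform and worst case" model: keep the uniform-placement assumption but cost for a high quantile of the scan depth rather than its mean, as Tom suggested upthread. A rough Monte Carlo sketch (Python, illustrative only; assumes matches >= limit):

```python
import random

def scan_fraction_quantile(total, matches, limit, q=0.9, trials=2000):
    """Fraction of the table that must be scanned so that, with
    probability q, `limit` matches have already been found -- still
    assuming the matches land uniformly at random."""
    depths = []
    for _ in range(trials):
        positions = sorted(random.sample(range(total), matches))
        depths.append(positions[limit - 1] + 1)  # depth of the limit-th hit
    depths.sort()
    return depths[min(trials - 1, int(q * trials))] / total

# With 1,000 matches in 1M rows and LIMIT 10, the mean depth is ~1% of
# the table, but the 90th percentile is noticeably deeper -- a cheap way
# to penalize fast-start plans without assuming the outright worst case.
```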
{
"msg_contents": "On 1 October 2014 19:56, Josh Berkus <[email protected]> wrote:\n> On 09/30/2014 04:01 PM, Simon Riggs wrote:\n>> On 30 September 2014 18:28, Jeff Janes <[email protected]> wrote:\n>>\n>>>> Anyway, in the particular case I posted fixing n_distinct to realistic\n>>>> numbers (%) fixed the query plan.\n>>>\n>>>\n>>> But wouldn't fixing the absolute number also have fixed the plan?\n>>\n>> There are two causes of this issue.\n>>\n>> 1. Poor estimates of n_distinct. Fixable by user.\n>>\n>> 2. Poor assumption of homogeneous distribution. No way for user to\n>> fix. Insufficient stats detail to be able to solve in current planner.\n>>\n>> I see (2) as the main source of issues, since as we observe, (1) is fixable.\n>\n> I disagree that (1) is not worth fixing just because we've provided\n> users with an API to override the stats. It would unquestionably be\n> better for us to have a better n_distinct estimate in the first place.\n> Further, this is an easier problem to solve, and fixing n_distinct\n> estimates would fix a large minority of currently pathological queries.\n> It's like saying \"hey, we don't need to fix the leak in your radiator,\n> we've given you a funnel in the dashboard you can pour water into.\"\n\nHaving read papers on it, I believe the problem is intractable. Coding\nis not the issue. To anyone: please prove me wrong, in detail, with\nreferences so it can be coded.\n\n> I do agree that (2) is worth fixing *as well*. In a first\n> approximation, one possibility (as Tom suggests) would be to come up\n> with a mathematical model for a selectivity estimate which was somewhere\n> *between* homogenous distribution and the worst case. While that\n> wouldn't solve a lot of cases, it would be a start towards having a\n> better model.\n\nThis may have a reasonable solution, but I don't know it. A more\naccurate mathematical model will still avoid the main problem: it is a\nguess, not certain knowledge and the risk will still remain.\n\n>>> I don't think correlation is up to the task as a complete solution, although\n>>> it might help a little. There is no way a simple correlation can encode\n>>> that John retired 15 years ago and hasn't logged on since, while Johannes\n>>> was hired yesterday and never logged on before then.\n>>\n>> Ah, OK, essentially the same example.\n>>\n>> Which is why I ruled out correlation stats based approaches and\n>> suggested a risk-weighted cost approach.\n>\n> By \"risk-weighted\" you mean just adjusting cost estimates based on what\n> the worst case cost looks like, correct? That seemed to be your\n> proposal from an earlier post. If so, we're in violent agreement here.\n\nI proposed a clear path for this earlier in the thread and received no\ncomments as yet. Please look at that.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Oct 2014 09:19:30 +0100",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Thu, Oct 2, 2014 at 1:19 AM, Simon Riggs <[email protected]> wrote:\n>> I disagree that (1) is not worth fixing just because we've provided\n>> users with an API to override the stats. It would unquestionably be\n>> better for us to have a better n_distinct estimate in the first place.\n>> Further, this is an easier problem to solve, and fixing n_distinct\n>> estimates would fix a large minority of currently pathological queries.\n>> It's like saying \"hey, we don't need to fix the leak in your radiator,\n>> we've given you a funnel in the dashboard you can pour water into.\"\n>\n> Having read papers on it, I believe the problem is intractable. Coding\n> is not the issue. To anyone: please prove me wrong, in detail, with\n> references so it can be coded.\n\nI think it might be close to intractable if you're determined to use a\nsampling model. HyperLogLog looks very interesting for n_distinct\nestimation, though. My abbreviated key patch estimates the cardinality\nof abbreviated keys (and original strings that are to be sorted) with\nhigh precision and fixed overhead. Maybe we can figure out a way to\ndo opportunistic streaming of HLL. Believe it or not, the way I use\nHLL for estimating cardinality is virtually free. Hashing is really\ncheap when the CPU is bottlenecked on memory bandwidth.\n\nIf you're interested, download the patch, and enable the debug traces.\nYou'll see HyperLogLog accurately indicate the cardinality of text\ndatums as they're copied into local memory before sorting.\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Oct 2014 02:30:02 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
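For the curious, HyperLogLog fits in a page. A minimal sketch (Python, illustrative; the abbreviated-keys work Peter describes is C code inside the sort path, not this):

```python
import hashlib
import math

def _hash64(value):
    # 64-bit hash of the value's string form (stand-in for a real hash)
    return int.from_bytes(hashlib.sha1(str(value).encode()).digest()[:8], "big")

class HyperLogLog:
    """Minimal HLL sketch: fixed memory, streaming, mergeable."""
    def __init__(self, b=12):                # m = 2^b registers
        self.b = b
        self.m = 1 << b
        self.registers = [0] * self.m

    def add(self, value):
        h = _hash64(value)
        idx = h >> (64 - self.b)             # first b bits pick a register
        rest = h & ((1 << (64 - self.b)) - 1)
        # rank = position of the leftmost 1-bit in the remaining bits
        rank = (64 - self.b) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def estimate(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)   # bias correction, large m
        raw = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        if raw <= 2.5 * self.m:                 # small-range correction
            zeros = self.registers.count(0)
            if zeros:
                return self.m * math.log(self.m / zeros)
        return raw
```

The standard error is about 1.04/sqrt(m), so b=12 (4096 registers) gives roughly ±1.6% using a few kilobytes of state, which is why the per-value cost is close to free once you are streaming the data anyway.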
{
"msg_contents": "On 29/09/2014 9:00 AM, Merlin Moncure wrote:\n> On Fri, Sep 26, 2014 at 3:06 AM, Simon Riggs <[email protected]> wrote:\n>> The problem, as I see it, is different. We assume that if there are\n>> 100 distinct values and you use LIMIT 1 that you would only need to\n>> scan 1% of rows. We assume that the data is arranged in the table in a\n>> very homogenous layout. When data is not, and it seldom is, we get\n>> problems.\n> Hm, good point -- 'data proximity'. At least in theory, can't this be\n> measured and quantified? For example, given a number of distinct\n> values, you could estimate the % of pages read (or maybe non\n> sequential seeks relative to the number of pages) you'd need to read\n> all instances of a particular value in the average (or perhaps the\n> worst) case. One way of trying to calculate that would be to look at\n> proximity of values in sampled pages (and maybe a penalty assigned for\n> high update activity relative to table size). Data proximity would\n> then become a cost coefficient to the benefits of LIMIT.\nLatecomer to the conversation here, but it seems like this issue (unlike \nsome) is really easy to recognize at runtime. The optimizer assumed the \nscan would access O(1) pages; if the scan has not returned enough \nresults after k pages, that would be a really good indication that it's \ntime to rethink the plan, and probably before too much work has been \ndone higher in the plan (esp. if there's any kind of buffering between \noperators, perhaps intentionally so in special cases like this)\n\nNot sure pgsql has any dynamic reoptimization infrastructure in place, \ntho. If not, these sorts of dangerous plans are best left alone IMO.\n\nRyan\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 Oct 2014 07:59:00 -0600",
"msg_from": "Ryan Johnson <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
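Ryan's runtime check is easy to express, even if wiring it into the executor is not (conceptual Python; `ReplanNeeded` and the page iterator are hypothetical names, not PostgreSQL infrastructure):

```python
class ReplanNeeded(Exception):
    """Signal that the optimizer's O(1)-pages promise has been broken."""

def scan_with_bailout(pages, predicate, limit, max_pages):
    """Abort-early scan that actually aborts: if `limit` matches have not
    appeared within `max_pages` pages, hand control back before more
    work is sunk into a losing plan."""
    found = []
    for n_read, page in enumerate(pages, start=1):
        found.extend(row for row in page if predicate(row))
        if len(found) >= limit:
            return found[:limit]
        if n_read >= max_pages:
            raise ReplanNeeded(f"{len(found)}/{limit} rows after {n_read} pages")
    return found
```

The appeal is that `max_pages` can be derived from the plan's own cost estimate, so the check only fires when the estimate has already been proven wrong.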
{
"msg_contents": "On 10/02/2014 02:30 AM, Peter Geoghegan wrote:\n> On Thu, Oct 2, 2014 at 1:19 AM, Simon Riggs <[email protected]> wrote:\n>> Having read papers on it, I believe the problem is intractable. Coding\n>> is not the issue. To anyone: please prove me wrong, in detail, with\n>> references so it can be coded.\n> \n> I think it might be close to intractable if you're determined to use a\n> sampling model. HyperLogLog looks very interesting for n_distinct\n> estimation, though. My abbreviated key patch estimates the cardinality\n> of abbreviated keys (and original strings that are to be sorted) with\n> high precision and fixed overhead. Maybe we can figure out a way to\n> do opportunistic streaming of HLL. Believe it or not, the way I use\n> HLL for estimating cardinality is virtually free. Hashing is really\n> cheap when the CPU is bottlenecked on memory bandwidth.\n\nYes, it's only intractable if you're wedded to the idea of a tiny,\nfixed-size sample. If we're allowed to sample, say, 1% of the table, we\ncan get a MUCH more accurate n_distinct estimate using multiple\nalgorithms, of which HLL is one. While n_distinct will still have some\nvariance, it'll be over a much smaller range.\n\nThe n_distinct algo we use in Postgres is specifically designed (by its\nauthor) to choose the smallest reasonable number of distinct values\ncapable of producing the observed distribution. This made sense when we\nadded it because we didn't have query plans where underestimating\nn_distinct produced a penalty. Now we do, and we ought to change algos.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 Oct 2014 12:56:27 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus <[email protected]> wrote:\n> Yes, it's only intractable if you're wedded to the idea of a tiny,\n> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n> can get a MUCH more accurate n_distinct estimate using multiple\n> algorithms, of which HLL is one. While n_distinct will still have some\n> variance, it'll be over a much smaller range.\n\nI think that HyperLogLog, as a streaming algorithm, will always\nrequire that the entire set be streamed. This doesn't need to be a big\ndeal - in the case of my abbreviated key patch, it appears to\nbasically be free because of the fact that we were streaming\neverything anyway. It's a very cool algorithm, with fixed overhead and\nconstant memory usage. It makes very useful guarantees around\naccuracy.\n\nI have this intuition that even though I'm more or less not paying\nanything for a great cardinality estimate, it's kind of a shame that I\nstill throw it away after the sort, each and every time. I have a hard\ntime actually putting my finger on how it could be put to further use,\nthough. And besides, this only helps if you happen to need to do a\nsort (or something that requires a sequential scan, since the cost\ncertainly isn't anywhere near \"free\" when you didn't need to do that\nanyway).\n\nOur current lack of block-based sampling probably implies that we are\nalmost as badly off as if we *did* a sequential scan. Not that I'm\nsuggesting that we give up on the idea of sampling (which would be\ncrazy).\n\nStreaming algorithms like HyperLogLog are very recent ideas, as these\nthings go. I wouldn't be all that discouraged by the fact that it\nmight not have been put to use in this way (for database statistics)\nby somebody before now.\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Oct 2014 17:54:06 -0700",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus <[email protected]> wrote:\n\n> On 10/02/2014 02:30 AM, Peter Geoghegan wrote:\n> > On Thu, Oct 2, 2014 at 1:19 AM, Simon Riggs <[email protected]>\n> wrote:\n> >> Having read papers on it, I believe the problem is intractable. Coding\n> >> is not the issue. To anyone: please prove me wrong, in detail, with\n> >> references so it can be coded.\n> >\n> > I think it might be close to intractable if you're determined to use a\n> > sampling model. HyperLogLog looks very interesting for n_distinct\n> > estimation, though. My abbreviated key patch estimates the cardinality\n> > of abbreviated keys (and original strings that are to be sorted) with\n> > high precision and fixed overhead. Maybe we can figure out a way to\n> > do opportunistic streaming of HLL. Believe it or not, the way I use\n> > HLL for estimating cardinality is virtually free. Hashing is really\n> > cheap when the CPU is bottlenecked on memory bandwidth.\n>\n> Yes, it's only intractable if you're wedded to the idea of a tiny,\n> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n> can get a MUCH more accurate n_distinct estimate using multiple\n> algorithms, of which HLL is one. While n_distinct will still have some\n> variance, it'll be over a much smaller range.\n>\n\nIn my hands, the problems with poor n_distinct were not due to the\ninsufficient size of the sample, but the insufficient randomness of it.\nIncreasing default_statistics_target did help but not because it increases\nthe number of rows sampled, but rather because it increases the number of\nblocks sampled. Once substantially all of the blocks are part of the block\nsampling, the bias is eliminated even though it was still sampling a small\nfraction of the rows (roughly one per block).\n\nSo one idea would be go get rid of the 2-stage sampling algorithm (sample\nblocks, sample rows from the chosen blocks) and just read the whole table\nand sample rows from it unbiased, at least under some conditions. Some low\nlevel benchmarking on my favorite server showed that reading 1% of a\nsystem's blocks (in block number order within each file) was no faster than\nreading all of them from an IO perspective. But that is a virtualized\nserver that wasn't really speced out to be an IO intensive database server\nin the first place. It would be interesting to see what people get on real\nhardware that they actually designed for the task.\n\nA problem right now is that we only have one knob. I want to compute more\naccurate n_distinct and most_common_freqs, but I don't want to store huge\nnumbers entries for most_common_vals and histogram_bounds.\n\nCheers,\n\nJeff\n\nOn Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus <[email protected]> wrote:On 10/02/2014 02:30 AM, Peter Geoghegan wrote:\n> On Thu, Oct 2, 2014 at 1:19 AM, Simon Riggs <[email protected]> wrote:\n>> Having read papers on it, I believe the problem is intractable. Coding\n>> is not the issue. To anyone: please prove me wrong, in detail, with\n>> references so it can be coded.\n>\n> I think it might be close to intractable if you're determined to use a\n> sampling model. HyperLogLog looks very interesting for n_distinct\n> estimation, though. My abbreviated key patch estimates the cardinality\n> of abbreviated keys (and original strings that are to be sorted) with\n> high precision and fixed overhead. Maybe we can figure out a way to\n> do opportunistic streaming of HLL. 
Believe it or not, the way I use\n> HLL for estimating cardinality is virtually free. Hashing is really\n> cheap when the CPU is bottlenecked on memory bandwidth.\n\nYes, it's only intractable if you're wedded to the idea of a tiny,\nfixed-size sample. If we're allowed to sample, say, 1% of the table, we\ncan get a MUCH more accurate n_distinct estimate using multiple\nalgorithms, of which HLL is one. While n_distinct will still have some\nvariance, it'll be over a much smaller range.In my hands, the problems with poor n_distinct were not due to the insufficient size of the sample, but the insufficient randomness of it. Increasing default_statistics_target did help but not because it increases the number of rows sampled, but rather because it increases the number of blocks sampled. Once substantially all of the blocks are part of the block sampling, the bias is eliminated even though it was still sampling a small fraction of the rows (roughly one per block).So one idea would be go get rid of the 2-stage sampling algorithm (sample blocks, sample rows from the chosen blocks) and just read the whole table and sample rows from it unbiased, at least under some conditions. Some low level benchmarking on my favorite server showed that reading 1% of a system's blocks (in block number order within each file) was no faster than reading all of them from an IO perspective. But that is a virtualized server that wasn't really speced out to be an IO intensive database server in the first place. It would be interesting to see what people get on real hardware that they actually designed for the task.A problem right now is that we only have one knob. I want to compute more accurate n_distinct and most_common_freqs, but I don't want to store huge numbers entries for most_common_vals and histogram_bounds. Cheers,Jeff",
"msg_date": "Fri, 3 Oct 2014 12:58:46 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
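The two-stage scheme Jeff describes, and the clustering bias it suffers from, in sketch form (Python, illustrative of ANALYZE's general shape rather than its exact code; block and sample sizes are made up for the demo):

```python
import random

def two_stage_sample(blocks, n_blocks, rows_per_block):
    """Postgres-style sampling shape: choose blocks first, then rows from
    the chosen blocks. Cheap on I/O, but not uniform over rows when
    values cluster within blocks."""
    sample = []
    for block in random.sample(blocks, n_blocks):
        sample.extend(random.sample(block, min(rows_per_block, len(block))))
    return sample

# A clustered table: 10,000 distinct values, each filling one 100-row block.
blocks = [[v] * 100 for v in range(10_000)]
rows = [r for b in blocks for r in b]

block_sample = two_stage_sample(blocks, 300, 10)     # 3,000 rows
row_sample = random.sample(rows, 3_000)              # 3,000 rows, unbiased

# ~300 distinct vs ~2,600 distinct: the block sample sees each value ~10
# times, so any estimator concludes duplicates are common and lowballs
# n_distinct; the uniform sample sees mostly singletons and extrapolates
# far more accurately.
print(len(set(block_sample)), len(set(row_sample)))
```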
{
"msg_contents": "On 3.10.2014 21:58, Jeff Janes wrote:\n> On Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> Yes, it's only intractable if you're wedded to the idea of a tiny,\n> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n> can get a MUCH more accurate n_distinct estimate using multiple\n> algorithms, of which HLL is one. While n_distinct will still have some\n> variance, it'll be over a much smaller range.\n> \n> \n> In my hands, the problems with poor n_distinct were not due to the\n> insufficient size of the sample, but the insufficient randomness of it. \n> Increasing default_statistics_target did help but not because it\n> increases the number of rows sampled, but rather because it increases\n> the number of blocks sampled. Once substantially all of the blocks are\n> part of the block sampling, the bias is eliminated even though it was\n> still sampling a small fraction of the rows (roughly one per block).\n\nI don't think that's entirely accurate. According to [1], there's a\nlower boundary on ratio error, depending on the number of sampled rows.\n\nSay there's a table with 10M rows, we sample 30k rows (which is the\ndefault). Then with probability 5% we'll get ratio error over 20. That\nis, we may either estimate <5% or >200% of the actual ndistinct value.\nCombined with our arbitrary 10% limit that we use to decide whether\nndistinct scales with the number of rows, this sometimes explodes.\n\nBy increasing the statistics target, you get much larger sample and thus\nlower probability of such error. But nevertheless, it breaks from time\nto time, and the fact that statistics target is static (and not scaling\nwith the size of the table to get appropriate sample size) is not really\nhelping IMHO. Static sample size may work for histograms, for ndistinct\nnot so much.\n\n[1]\nhttp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/pods/towardsestimatimosur.pdf\n\n\n> So one idea would be go get rid of the 2-stage sampling algorithm \n> (sample blocks, sample rows from the chosen blocks) and just read\n> the whole table and sample rows from it unbiased, at least under\n> some conditions. Some low level benchmarking on my favorite server\n> showed that reading 1% of a system's blocks (in block number order\n> within each file) was no faster than reading all of them from an IO\n> perspective. But that is a virtualized server that wasn't really\n> speced out to be an IO intensive database server in the first place.\n> It would be interesting to see what people get on real hardware that\n> they actually designed for the task.\n\nI think there was a discussion about the sampling on pgsql-hackers a\nwhile ago ... and yes, here it is [2]. However it seems there was no\nclear conclusion on how to change it at that time ...\n\n\n[2]\nhttp://www.postgresql.org/message-id/CA+TgmoZaqyGSuaL2v+YFVsX06DQDQh-pEV0nobGPws-dNwAwBw@mail.gmail.com\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 04 Oct 2014 01:30:34 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
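For reference, the estimator whose error bounds are being discussed here is small enough to sketch. This is the shape of the Haas & Stokes "Duj1" estimator that ANALYZE computes, per the comments in analyze.c (Python, simplified; PostgreSQL's edge-case handling is trimmed):

```python
from collections import Counter

def duj1_ndistinct(sample, total_rows):
    """Haas & Stokes 'Duj1':  D = n*d / (n - f1 + f1*n/N)
    where n = sample size, d = distinct values in the sample,
    f1 = values seen exactly once, N = total rows."""
    n = len(sample)
    counts = Counter(sample)
    d = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    if f1 == n:
        return total_rows     # whole sample distinct: assume the column is too
    denom = n - f1 + f1 * n / total_rows
    return min(total_rows, n * d / denom)
```

The instability Tomas quantifies comes straight from f1: two samples of the same table can disagree wildly on how many singletons they contain, and f1 drives the whole estimate.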
{
"msg_contents": "On 3.10.2014 02:54, Peter Geoghegan wrote:\n> On Thu, Oct 2, 2014 at 12:56 PM, Josh Berkus <[email protected]> wrote:\n>> Yes, it's only intractable if you're wedded to the idea of a tiny, \n>> fixed-size sample. If we're allowed to sample, say, 1% of the\n>> table, we can get a MUCH more accurate n_distinct estimate using\n>> multiple algorithms, of which HLL is one. While n_distinct will\n>> still have some variance, it'll be over a much smaller range.\n> \n> I think that HyperLogLog, as a streaming algorithm, will always \n> require that the entire set be streamed. This doesn't need to be a\n> big deal - in the case of my abbreviated key patch, it appears to \n> basically be free because of the fact that we were streaming \n> everything anyway. It's a very cool algorithm, with fixed overhead\n> and constant memory usage. It makes very useful guarantees around \n> accuracy.\n\nI think you're mixing two things here - estimating the number of\ndistinct values in a sample (which can be done very efficiently using\nHLL) and estimating the number of distinct values in the whole table.\nFor that HLL is not usable, unless you process all the data.\n\nSadly HLL is rather incompatible with the usual estimators, because the\nones I'm aware of need to know the number of occurences for the distinct\nvalues etc.\n\nBut couldn't we just piggyback this on autovacuum? One of the nice HLL\nfeatures is that it's additive - you can build \"partial counters\" for\nranges of blocks (say, a few MBs per range), and then merge them when\nneeded. By keeping the parts it's possible to rebuild it separately.\n\nBut maybe this idea is way too crazy ...\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 04 Oct 2014 01:41:50 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
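The additivity Tomas relies on is just an elementwise max over registers. Reusing the `HyperLogLog` class from the sketch earlier in the thread (illustrative Python):

```python
def merge_hlls(sketches):
    """Union of HLL sketches: take the elementwise max of registers.
    This is what makes per-block-range counters workable -- ranges can
    be recounted independently (say, by autovacuum) and merged on demand."""
    merged = HyperLogLog(sketches[0].b)
    for sketch in sketches:
        for i, rank in enumerate(sketch.registers):
            merged.registers[i] = max(merged.registers[i], rank)
    return merged

# merge_hlls(per_range_sketches).estimate() approximates the distinct
# count of the whole table without rescanning unchanged block ranges.
```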
{
"msg_contents": "On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n> Yes, it's only intractable if you're wedded to the idea of a tiny,\n> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n> can get a MUCH more accurate n_distinct estimate using multiple\n> algorithms, of which HLL is one. While n_distinct will still have some\n> variance, it'll be over a much smaller range.\n\nI've gone looking for papers on this topic but from what I read this\nisn't so. To get any noticeable improvement you need to read 10-50% of\nthe table and that's effectively the same as reading the entire table\n-- and it still had pretty poor results. All the research I could find\nwent into how to analyze the whole table while using a reasonable\namount of scratch space and how to do it incrementally.\n\n-- \ngreg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Oct 2014 12:16:08 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "Dne 10 Říjen 2014, 13:16, Greg Stark napsal(a):\n> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n>> can get a MUCH more accurate n_distinct estimate using multiple\n>> algorithms, of which HLL is one. While n_distinct will still have some\n>> variance, it'll be over a much smaller range.\n>\n> I've gone looking for papers on this topic but from what I read this\n> isn't so. To get any noticeable improvement you need to read 10-50% of\n> the table and that's effectively the same as reading the entire table\n> -- and it still had pretty poor results. All the research I could find\n> went into how to analyze the whole table while using a reasonable\n> amount of scratch space and how to do it incrementally.\n\nI think it's really difficult to discuss the estimation without some basic\nagreement on what are the goals. Naturally, we can't get a perfect\nestimator with small samples (especially when the sample size is fixed and\nnot scaling with the table). But maybe we can improve the estimates\nwithout scanning most of the table?\n\nFWIW I've been playing with the adaptive estimator described in [1] and\nthe results looks really interesting, IMHO. So far I was testing it on\nsynthetic datasets outside the database, but I plan to use it instead of\nour estimator, and do some more tests.\n\nWould be helpful to get a collection of test cases that currently perform\npoorly. I have collected a few from the archives, but if those who follow\nthis thread can provide additional test cases / point to a thread\ndescribing related etc. that'd be great.\n\nIt certainly won't be perfect, but if it considerably improves the\nestimates then I believe it's step forward. Ultimately, it's impossible to\nimprove the estimates without increasing the sample size.\n\n[1]\nhttp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/pods/towardsestimatimosur.pdf\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Oct 2014 14:10:13 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Fri, Oct 10, 2014 at 5:10 AM, Tomas Vondra <[email protected]> wrote:\n\n> > I've gone looking for papers on this topic but from what I read this\n> > isn't so. To get any noticeable improvement you need to read 10-50% of\n> > the table and that's effectively the same as reading the entire table\n> > -- and it still had pretty poor results. All the research I could find\n> > went into how to analyze the whole table while using a reasonable\n> > amount of scratch space and how to do it incrementally.\n>\n> I think it's really difficult to discuss the estimation without some basic\n> agreement on what are the goals. Naturally, we can't get a perfect\n> estimator with small samples (especially when the sample size is fixed and\n> not scaling with the table). But maybe we can improve the estimates\n> without scanning most of the table?\n>\n> FWIW I've been playing with the adaptive estimator described in [1] and\n> the results looks really interesting, IMHO. So far I was testing it on\n> synthetic datasets outside the database, but I plan to use it instead of\n> our estimator, and do some more tests.\n>\n\nWe've solved this problem using an external (non-Postgres) dynamically\noptimizing index. In addition to the \"early abort,\" we also require an\nefficient \"late start\", the equivalent of \"offset 100 limit 10\". It's a\ncommon problem for web sites that let users page through data with just a\ntiny amount of state information (a cookie).\n\nOur index is for chemical structures. Chemicals are indexed on chemical\nfragments <http://emolecules.com/info/molecular-informatics>. A search\ntypically starts with 50-200 indexed \"columns\" (chemical fragments). The\nquery is always flat, \"A and B and ... and Z\". The indexed fragments are\nboth correlated (the existence of one strongly raises the chances of\nanother) and anti-correlated (certain combinations are very rare).\n\nThe dynamic optimizer watches the performance of each index in real time.\nIt promotes highly selective indexes and demotes or removes redundant\nindexes. In a typical query, the initial 50-200 indexes are reduced to 5-10\nindexes within the first 100-200 rows examined. The remaining indexes have\nlittle correlation yet retain most of the selectivity. (One critical factor\nwith a dynamic optimizer is that the data must be randomized before it's\npresented to the optimizer. Databases tend to have clusters of similar\ndata. If the optimizer starts in such a cluster, it will optimize poorly.)\n\nOur query is simple (a flat AND) compared to what Postgres has to handle.\nEven so, a dynamic optimizer is the only effective solution.\n\nStatic planners simply can't handle the \"early abort\" condition, even with\ngood statistics. Many have pointed out that data are \"lumpy\" rather than\nwell distributed. A more subtle problem is that you can have evenly\ndistributed data, but badly distributed correlations. \"Agnes\" and \"Bob\" may\nbe names that are distributed well in a real-estate database, but it might\nhappen that all of the information about homes whose owners' names are\n\"Agnes\" and \"Bob\" occurs at the very end of all of your data because they\njust got married and bought a house.\n\nThe end result is that even with perfect statistics on each column, you're\nstill screwed. 
The combinatorial explosion of possible correlations between\nindexes is intractable.\n\nCraig\n\nOn Fri, Oct 10, 2014 at 5:10 AM, Tomas Vondra <[email protected]> wrote:> I've gone looking for papers on this topic but from what I read this\n> isn't so. To get any noticeable improvement you need to read 10-50% of\n> the table and that's effectively the same as reading the entire table\n> -- and it still had pretty poor results. All the research I could find\n> went into how to analyze the whole table while using a reasonable\n> amount of scratch space and how to do it incrementally.\n\nI think it's really difficult to discuss the estimation without some basic\nagreement on what are the goals. Naturally, we can't get a perfect\nestimator with small samples (especially when the sample size is fixed and\nnot scaling with the table). But maybe we can improve the estimates\nwithout scanning most of the table?\n\nFWIW I've been playing with the adaptive estimator described in [1] and\nthe results looks really interesting, IMHO. So far I was testing it on\nsynthetic datasets outside the database, but I plan to use it instead of\nour estimator, and do some more tests.We've solved this problem using an external (non-Postgres) dynamically optimizing index. In addition to the \"early abort,\" we also require an efficient \"late start\", the equivalent of \"offset 100 limit 10\". It's a common problem for web sites that let users page through data with just a tiny amount of state information (a cookie).Our index is for chemical structures. Chemicals are indexed on chemical fragments. A search typically starts with 50-200 indexed \"columns\" (chemical fragments). The query is always flat, \"A and B and ... and Z\". The indexed fragments are both correlated (the existence of one strongly raises the chances of another) and anti-correlated (certain combinations are very rare).The dynamic optimizer watches the performance of each index in real time. It promotes highly selective indexes and demotes or removes redundant indexes. In a typical query, the initial 50-200 indexes are reduced to 5-10 indexes within the first 100-200 rows examined. The remaining indexes have little correlation yet retain most of the selectivity. (One critical factor with a dynamic optimizer is that the data must be randomized before it's presented to the optimizer. Databases tend to have clusters of similar data. If the optimizer starts in such a cluster, it will optimize poorly.)Our query is simple (a flat AND) compared to what Postgres has to handle. Even so, a dynamic optimizer is the only effective solution.Static planners simply can't handle the \"early abort\" condition, even with good statistics. Many have pointed out that data are \"lumpy\" rather than well distributed. A more subtle problem is that you can have evenly distributed data, but badly distributed correlations. \"Agnes\" and \"Bob\" may be names that are distributed well in a real-estate database, but it might happen that all of the information about homes whose owners' names are \"Agnes\" and \"Bob\" occurs at the very end of all of your data because they just got married and bought a house.The end result is that even with perfect statistics on each column, you're still screwed. The combinatorial explosion of possible correlations between indexes is intractable.Craig",
"msg_date": "Fri, 10 Oct 2014 07:21:05 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "\nOn 10.10.2014 16:21, Craig James wrote:\n> On Fri, Oct 10, 2014 at 5:10 AM, Tomas Vondra <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> > I've gone looking for papers on this topic but from what I read this\n> > isn't so. To get any noticeable improvement you need to read 10-50% of\n> > the table and that's effectively the same as reading the entire table\n> > -- and it still had pretty poor results. All the research I could find\n> > went into how to analyze the whole table while using a reasonable\n> > amount of scratch space and how to do it incrementally.\n> \n> I think it's really difficult to discuss the estimation without some\n> basic\n> agreement on what are the goals. Naturally, we can't get a perfect\n> estimator with small samples (especially when the sample size is\n> fixed and\n> not scaling with the table). But maybe we can improve the estimates\n> without scanning most of the table?\n> \n> FWIW I've been playing with the adaptive estimator described in [1] and\n> the results looks really interesting, IMHO. So far I was testing it on\n> synthetic datasets outside the database, but I plan to use it instead of\n> our estimator, and do some more tests.\n> \n> \n> We've solved this problem using an external (non-Postgres) dynamically\n> optimizing index. In addition to the \"early abort,\" we also require an\n> efficient \"late start\", the equivalent of \"offset 100 limit 10\". It's a\n> common problem for web sites that let users page through data with just\n> a tiny amount of state information (a cookie).\n\nYeah, paging is a known example, both for the inefficiency once you get\nto pages far away, and because of the planning challenges. I think there\nare known solutions to this problem\n(http://use-the-index-luke.com/blog/2013-07/pagination-done-the-postgresql-way),\nalthough those are not applicable to all cases.\n\nBut I'm not sure how that's related to the ndistinct estimation problem,\ndiscussed in this thread (or rather in this subthread)?\n\n> Our index is for chemical structures. Chemicals are indexed on\n> chemical fragments\n> <http://emolecules.com/info/molecular-informatics>. A search \n> typically starts with 50-200 indexed \"columns\" (chemical fragments).\n> The query is always flat, \"A and B and ... and Z\". The indexed\n> fragments are both correlated (the existence of one strongly raises\n> the chances of another) and anti-correlated (certain combinations are\n> very rare).\n\nMaybe I don't understand the problem well enough, but isn't this a\nperfect match for GIN indexes? I mean, you essentially need to do\nqueries like \"WHERE substance @@ ('A & B & !C')\" etc. Which is exactly\nwhat GIN does, because it keeps pointers to tuples for each fragment.\n\n> Static planners simply can't handle the \"early abort\" condition,\n> even with good statistics. Many have pointed out that data are\n> \"lumpy\" rather than well distributed. A more subtle problem is that\n> you can have evenly distributed data, but badly distributed\n> correlations. \"Agnes\" and \"Bob\" may be names that are distributed\n> well in a real-estate database, but it might happen that all of the\n> information about homes whose owners' names are \"Agnes\" and \"Bob\"\n> occurs at the very end of all of your data because they just got\n> married and bought a house.\n> \n> The end result is that even with perfect statistics on each column,\n> you're still screwed. 
The combinatorial explosion of possible\n> correlations between indexes is intractable.\n\nStatic planners clearly have limitations, but we don't have dynamic\nplanning in PostgreSQL, so we have to live with them. And if we could\nimprove the quality of estimates - lowering the probability of poorly\nperforming plans, it's probably good to do that.\n\nIt won't be perfect, but until we have dynamic planning it's better than\nnothing.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Oct 2014 18:53:51 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
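The "pagination done the PostgreSQL way" technique Tomas links to (keyset pagination) replaces OFFSET with a WHERE clause on the last key the client saw, so the index scan resumes where the previous page ended instead of scanning and discarding rows. A generic sketch, with hypothetical table and column names:

    -- first page
    SELECT id, name FROM items ORDER BY name, id LIMIT 10;

    -- next page: resume after the last (name, id) the client saw;
    -- the row-value comparison can use a btree index on (name, id)
    SELECT id, name FROM items
    WHERE (name, id) > ('last name seen', 12345)
    ORDER BY name, id
    LIMIT 10;

This sidesteps the early-abort estimation problem for paging, though as Tomas notes it is not applicable to all cases (e.g. jumping straight to page N).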
{
"msg_contents": "On 10.10.2014 14:10, Tomas Vondra wrote:\n> Dne 10 Říjen 2014, 13:16, Greg Stark napsal(a):\n>> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n>>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n>>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n>>> can get a MUCH more accurate n_distinct estimate using multiple\n>>> algorithms, of which HLL is one. While n_distinct will still have some\n>>> variance, it'll be over a much smaller range.\n>>\n>> I've gone looking for papers on this topic but from what I read this\n>> isn't so. To get any noticeable improvement you need to read 10-50% of\n>> the table and that's effectively the same as reading the entire table\n>> -- and it still had pretty poor results. All the research I could find\n>> went into how to analyze the whole table while using a reasonable\n>> amount of scratch space and how to do it incrementally.\n> \n> I think it's really difficult to discuss the estimation without some basic\n> agreement on what are the goals. Naturally, we can't get a perfect\n> estimator with small samples (especially when the sample size is fixed and\n> not scaling with the table). But maybe we can improve the estimates\n> without scanning most of the table?\n> \n> FWIW I've been playing with the adaptive estimator described in [1] and\n> the results looks really interesting, IMHO. So far I was testing it on\n> synthetic datasets outside the database, but I plan to use it instead of\n> our estimator, and do some more tests.\n\nAttached is an experimental patch implementing the adaptive estimator.\n\nIt was fairly simple (although it's a bit messy). It only computes the\nestimates for the \"scalar\" case (i.e. data types that we can sort).\nImplementing this for the \"minimal\" case is possible, but requires a bit\nmore work.\n\nIt only computes the estimate and prints a WARNING with both the current\nand new estimate, but the old estimate is stored.\n\nI also attach a few synthetic examples of synthetic datasets with\ndistributions stored in various ways, that I used for testing. 
In all\ncases there's a single table with 10M rows and a single INT column.\nThere are three kinds of skew:\n\n1) smooth skew\n\n - N distinct values (100, 10.000 and 100.000 values)\n - average moves to 0 as 'k' increases ('k' between 1 and 9)\n - smooth distribution of frequencies\n\n INSERT INTO test\n SELECT pow(random(),k) * 10000 FROM generate_series(1,10000000);\n\n2) step skew\n\n - a few very frequent values, many rare values\n - for example this generates 5 very frequent and ~10k rare values\n\n INSERT INTO test\n SELECT (CASE WHEN (v < 90000) THEN MOD(v,5) ELSE v END)\n FROM (\n SELECT (random()*100000)::int AS v\n FROM generate_series(1,10000000)\n ) foo;\n\n\nResults\n=======\n\nI tested this with various statistics target settings (10, 100, 1000),\nwhich translates to different sample sizes.\n\nstatistics target 100 (default, 30k rows, 0.3% sample)\n======================================================\n\na) smooth skew, 101 values, different skew ('k')\n\n k current adaptive\n -------------------------\n 1 101 102\n 3 101 102\n 5 101 102\n 7 101 102\n 9 101 102\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n -------------------------\n 1 9986 10542\n 3 8902 10883\n 5 7579 10824\n 7 6639 10188\n 9 5947 10013\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 106 107\n 106 35 104\n 1006 259 1262\n 10006 2823 11047\n\n\nstatistics target 10 (3k rows, 0.03% sample)\n============================================\n\na) smooth skew, 101 values, different skew ('k')\n\n k current adaptive\n -------------------------\n 1 101 102\n 3 101 102\n 5 101 102\n 7 101 102\n 9 101 102\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n -------------------------\n 1 9846 10014\n 3 4399 7190\n 5 2532 5477\n 7 1938 4932\n 9 1623 1623\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 100 114\n 106 5 5\n 1006 37 532\n 10006 323 20970\n\nstatistics target 1000 (300k rows, 3% sample)\n=============================================\n\n k current adaptive\n -------------------------\n 1 101 102\n 3 101 102\n 5 101 102\n 7 101 102\n 9 101 102\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n -------------------------\n 1 10001 10002\n 3 10000 10000\n 5 9998 10011\n 7 9973 10045\n 9 9939 10114\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 106 107\n 106 101 107\n 1006 957 1096\n 10006 9551 10550\n\n\nSummary\n=======\n\nI'm yet to see an example where the adaptive estimator produces worse\nresults than the current estimator, irrespectedly of the distribution\nand sample size / statistics target.\n\nWhat really matters is the sample size, with respect to the table size,\nso I'll use the 0.03%, 0.3%, and 3% instead of the statistics target.\n\nFor the large sample (3%) both estimators produce reasonably accurate\nresults. This however won't work as the tables grow, because the number\nof rows we sample is static (does not grow with the table).\n\nAs the sample decreases, the adaptive estimator starts winning. For the\n0.3% sample the difference is easily 3x for the high-skew cases. E.g.\nfor one of the \"step skew\" distributions the actual ndistinct value is\n10006, current estimator gives 2823 while adaptive gives 11047. That's\nratio error ~3.5 vs. 
1.1x.\n\nFor the tiny 0.03% sample the difference gets even more siginficant,\nespecially for the step-skew cases, where the improvement is often an\norder of magnitude.\n\n\nProposal\n========\n\nI think the adaptive estimator works very well, and I plan to submit it\nto the next commitfest after a bit of polishing. Examples of\ndistributions that break it are welcome of course.\n\nAlso, I think it's clear it'd be useful to be able to scale the sample\nproportionally to the table (instead of only allowing the current\nstatistics target approach). I understand it may result in scanning\nlarge part of the table, but I don't see a problem in that's not a\ndefault behavior (and clearly documented). I don't see a way around that\n- small samples simply result in poor estimates.\n\nI was thinking that this could be done using statistics target, but it's\nalready used for other things (e.g. size of MCV list, histogram) and we\ndon't want to break that by adding yet another function.\n\nIdeas?\n\nregards\nTomas\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 10 Oct 2014 19:53:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
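To run comparisons like the tables above against your own data, the current estimate and the exact count can be pulled side by side. The table and column names here are hypothetical, and the statistics must be fresh:

    ANALYZE test;

    -- the planner's estimate; a negative value means a fraction
    -- of the row count rather than an absolute number
    SELECT n_distinct FROM pg_stats
    WHERE tablename = 'test' AND attname = 'val';

    -- the exact value, for comparison (requires a full scan)
    SELECT count(DISTINCT val) FROM test;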
{
"msg_contents": "On Fri, Oct 10, 2014 at 9:53 AM, Tomas Vondra <[email protected]> wrote:\n\n>\n> On 10.10.2014 16:21, Craig James wrote:\n> > Our index is for chemical structures. Chemicals are indexed on\n> > chemical fragments\n> > <http://emolecules.com/info/molecular-informatics>. A search\n> > typically starts with 50-200 indexed \"columns\" (chemical fragments).\n> > The query is always flat, \"A and B and ... and Z\". The indexed\n> > fragments are both correlated (the existence of one strongly raises\n> > the chances of another) and anti-correlated (certain combinations are\n> > very rare).\n>\n> Maybe I don't understand the problem well enough, but isn't this a\n> perfect match for GIN indexes? I mean, you essentially need to do\n> queries like \"WHERE substance @@ ('A & B & !C')\" etc. Which is exactly\n> what GIN does, because it keeps pointers to tuples for each fragment.\n>\n\nOn the day our web site opened we were using tsearch. Before the end of the\nday we realized it was a bad idea, for the very reasons discussed here. The\nearly-abort/late-start problem (\"offset N limit M\") could take minutes to\nreturn the requested page. With the external dynamically-optimized index,\nwe can almost always get answers in less than a couple seconds, often in\n0.1 seconds.\n\nCraig\n\nOn Fri, Oct 10, 2014 at 9:53 AM, Tomas Vondra <[email protected]> wrote:\nOn 10.10.2014 16:21, Craig James wrote:> Our index is for chemical structures. Chemicals are indexed on\n> chemical fragments\n> <http://emolecules.com/info/molecular-informatics>. A search\n> typically starts with 50-200 indexed \"columns\" (chemical fragments).\n> The query is always flat, \"A and B and ... and Z\". The indexed\n> fragments are both correlated (the existence of one strongly raises\n> the chances of another) and anti-correlated (certain combinations are\n> very rare).\n\nMaybe I don't understand the problem well enough, but isn't this a\nperfect match for GIN indexes? I mean, you essentially need to do\nqueries like \"WHERE substance @@ ('A & B & !C')\" etc. Which is exactly\nwhat GIN does, because it keeps pointers to tuples for each fragment.On the day our web site opened we were using tsearch. Before the end of the day we realized it was a bad idea, for the very reasons discussed here. The early-abort/late-start problem (\"offset N limit M\") could take minutes to return the requested page. With the external dynamically-optimized index, we can almost always get answers in less than a couple seconds, often in 0.1 seconds.Craig",
"msg_date": "Fri, 10 Oct 2014 10:59:52 -0700",
"msg_from": "Craig James <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 10.10.2014 19:59, Craig James wrote:\n> On Fri, Oct 10, 2014 at 9:53 AM, Tomas Vondra <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> \n> On 10.10.2014 16:21, Craig James wrote:\n> > Our index is for chemical structures. Chemicals are indexed on\n> > chemical fragments\n> > <http://emolecules.com/info/molecular-informatics>. A search\n> > typically starts with 50-200 indexed \"columns\" (chemical fragments).\n> > The query is always flat, \"A and B and ... and Z\". The indexed\n> > fragments are both correlated (the existence of one strongly raises\n> > the chances of another) and anti-correlated (certain combinations are\n> > very rare).\n> \n> Maybe I don't understand the problem well enough, but isn't this a\n> perfect match for GIN indexes? I mean, you essentially need to do\n> queries like \"WHERE substance @@ ('A & B & !C')\" etc. Which is exactly\n> what GIN does, because it keeps pointers to tuples for each fragment.\n> \n> \n> On the day our web site opened we were using tsearch. Before the end of\n> the day we realized it was a bad idea, for the very reasons discussed\n> here. The early-abort/late-start problem (\"offset N limit M\") could take\n> minutes to return the requested page. With the external\n> dynamically-optimized index, we can almost always get answers in less\n> than a couple seconds, often in 0.1 seconds.\n\nIn the early days of tsearch, it did not support GIN indexes, and AFAIK\nGiST are not nearly as fast for such queries. Also, the GIN fastscan\nimplemented by Alexander Korotkov in 9.4 makes a huge difference for\nqueries combining frequent and rare terms.\n\nMaybe it'd be interesting to try this on 9.4. I'm not saying it will\nmake it faster than the optimized index, but it might be an interesting\ncomparison.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 10 Oct 2014 20:13:24 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
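For readers wanting to try the GIN approach Tomas describes, a rough sketch using the built-in GIN support for arrays; the schema is hypothetical, and a real fragment index would be considerably more involved:

    CREATE TABLE mols (id serial PRIMARY KEY, fragments text[]);
    CREATE INDEX mols_fragments_gin ON mols USING gin (fragments);

    -- "A and B and not C": the containment test (@>) can use the
    -- index, while the negated overlap test is applied as a filter
    SELECT id
    FROM mols
    WHERE fragments @> ARRAY['A','B']
      AND NOT fragments && ARRAY['C'];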
{
"msg_contents": "On 10/10/2014 04:16 AM, Greg Stark wrote:\n> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n>> can get a MUCH more accurate n_distinct estimate using multiple\n>> algorithms, of which HLL is one. While n_distinct will still have some\n>> variance, it'll be over a much smaller range.\n> \n> I've gone looking for papers on this topic but from what I read this\n> isn't so. To get any noticeable improvement you need to read 10-50% of\n> the table and that's effectively the same as reading the entire table\n> -- and it still had pretty poor results. All the research I could find\n> went into how to analyze the whole table while using a reasonable\n> amount of scratch space and how to do it incrementally.\n\nSo, right now our estimation is off on large tables by -10X to -10000X.\n First, the fact that it's *always* low is an indication we're using the\nwrong algorithm. Second, we can most certainly do better than a median\nof -1000X.\n\nOne interesting set of algorithms is block-based sampling. That is, you\nread 5% of the physical table in random blocks, reading every row in the\nblock. The block size is determined by your storage block size, so\nyou're not actually reading any more physically than you are logically;\nit really is just 5% of the table, especially on SSD.\n\nThen you apply algorithms which first estimate the correlation of common\nvalues in the block (i.e. how likely is it that the table is completely\nsorted?), and then estimates of how many values there might be total\nbased on the correlation estimate.\n\nI no longer have my ACM membership, so I can't link this, but\nresearchers were able to get +/- 3X accuracy for a TPCH workload using\nthis approach. A real database would be more variable, of course, but\neven so we should be able to achieve +/- 50X, which would be an order of\nmagnitude better than we're doing now.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Oct 2014 10:20:18 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
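Block-based sampling of the kind Josh describes later became available natively: the TABLESAMPLE clause added in PostgreSQL 9.5 (after this thread) reads whole random blocks when the SYSTEM method is used. A sketch, with a hypothetical table and column:

    -- read roughly 5% of the table's blocks, every row in each block
    SELECT count(DISTINCT val)
    FROM big_table TABLESAMPLE SYSTEM (5);

Note this exhibits exactly the clumping problem discussed in the replies below: values correlated with physical position are over-represented within the sampled blocks, which is why the correlation-estimating step Josh mentions is essential.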
{
"msg_contents": "On 15.10.2014 19:20, Josh Berkus wrote:\n> On 10/10/2014 04:16 AM, Greg Stark wrote:\n>> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n>>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n>>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n>>> can get a MUCH more accurate n_distinct estimate using multiple\n>>> algorithms, of which HLL is one. While n_distinct will still have some\n>>> variance, it'll be over a much smaller range.\n>>\n>> I've gone looking for papers on this topic but from what I read this\n>> isn't so. To get any noticeable improvement you need to read 10-50% of\n>> the table and that's effectively the same as reading the entire table\n>> -- and it still had pretty poor results. All the research I could find\n>> went into how to analyze the whole table while using a reasonable\n>> amount of scratch space and how to do it incrementally.\n> \n> So, right now our estimation is off on large tables by -10X to\n> -10000X. First, the fact that it's *always* low is an indication\n> we're using the wrong algorithm. Second, we can most certainly do\n> better than a median of -1000X.\n\nA few days ago I posted an experimental patch with the adaptive\nestimator, described in [1]. Not perfect, but based on the testing I did\nI believe it's a superior algorithm to the one we use now. Would be nice\nto identify a few datasets where the current estimate is way off.\n\n[1]\nhttp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/pods/towardsestimatimosur.pdf\n\n\n> One interesting set of algorithms is block-based sampling. That is,\n> you read 5% of the physical table in random blocks, reading every row\n> in the block. The block size is determined by your storage block\n> size, so you're not actually reading any more physically than you are\n> logically; it really is just 5% of the table, especially on SSD.\n> \n> Then you apply algorithms which first estimate the correlation of\n> common values in the block (i.e. how likely is it that the table is\n> completely sorted?), and then estimates of how many values there\n> might be total based on the correlation estimate.\n\nI think we might also use a different approach - instead of sampling the\ndata when ANALYZE kicks in, we might collect a requested sample of rows\non the fly. Say we want 1% sample - whenever you insert a new row, you\ndo [random() < 0.01] and if it happens to be true you keep a copy of the\nrow aside. Then, when you need the sample, you simply read the sample\nand you're done - no random access to the main table, no problems with\nestimated being off due to block-level sampling, etc.\n\nNot sure how to track deletions/updates, though. Maybe rebuilding the\nsample if the number of deletions exceeds some threshold, but that\ncontradicts the whole idea a bit.\n\n> I no longer have my ACM membership, so I can't link this, but \n> researchers were able to get +/- 3X accuracy for a TPCH workload \n> using this approach. A real database would be more variable, of \n> course, but even so we should be able to achieve +/- 50X, which\n> would be an order of magnitude better than we're doing now.\n\nIf you know the title of the article, it's usually available elsewhere\non the web - either at the university site, or elsewhere. 
I found these\ntwo articles about block-based sampling:\n\n\nhttp://ranger.uta.edu/~gdas/websitepages/preprints-papers/p287-chaudhuri.pdf\n\n https://www.stat.washington.edu/research/reports/1999/tr355.pdf\n\nMaybe there are more, but most of the other links were about how Oracle\ndoes this in 11g.\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Oct 2014 20:02:30 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
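Tomas's insert-time sampling idea can be prototyped today with a row trigger. A minimal PL/pgSQL sketch, assuming the single-INT-column test table from the earlier examples; the column name val and the other identifiers are hypothetical, and as he notes it ignores updates and deletes entirely:

    CREATE TABLE test_sample (val int);

    CREATE FUNCTION test_sample_fn() RETURNS trigger AS $$
    BEGIN
        IF random() < 0.01 THEN            -- keep ~1% of inserted rows
            INSERT INTO test_sample VALUES (NEW.val);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER test_sample_trg
        AFTER INSERT ON test
        FOR EACH ROW EXECUTE PROCEDURE test_sample_fn();

ANALYZE-time work would then reduce to reading test_sample, at the cost of a small per-insert overhead.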
{
"msg_contents": "On Wed, Oct 15, 2014 at 7:02 PM, Tomas Vondra <[email protected]> wrote:\n> If you know the title of the article, it's usually available elsewhere\n> on the web - either at the university site, or elsewhere. I found these\n> two articles about block-based sampling:\n>\n>\n> http://ranger.uta.edu/~gdas/websitepages/preprints-papers/p287-chaudhuri.pdf\n\nThere are a series of papers with Chaudhuri as lead author which I\nagree sounds like what Josh is talking about. Note that he's Microsoft\nResearch's database group lead and it would be a pretty safe bet\nanything published from there is going to be covered by patents from\nhere till next Tuesday (and seventeen years beyond).\n\nI think this is all putting the cart before the horse however. If we\ncould fix our current sampling to use the data more efficiently that\nwould be a good start before we start trying to read even more data.\n\nWe currently read just one row from each block on average instead of\nusing the whole block. That's what would be needed in the worst case\nif the blocks were a very biased sample (which indeed they probably\nare in most databases due to the way Postgres handles updates). But we\ncould at least give users the option to use more than one row per\nblock when they know it's ok (such as data that hasn't been updated)\nor detect when it's ok (such as by carefully thinking about how\nPostgres's storage mechanism would bias the data).\n\nBut I looked into this and ran into a problem. I think our algorithm\nfor calculating the most frequent values list is bunk. The more rows I\npicked from each block the more biased that list was towards values\nseen earlier in the table. What's worse, when that list was biased it\nthrew off the histogram since we exclude the most frequent values from\nthe histogram, so all estimates were thrown off.\n\nIf we could fix the most frequent values collection to not be biased\nwhen it sees values in a clumpy way then I think we would be okay to\nset the row sample size in Vitter's algorithm to a factor of N larger\nthan the block sample size where N is somewhat smaller than the\naverage number of rows per block. In fact even if we used all the rows\nin the block I think I've convinced myself that the results would be\naccurate in most circumstances.\n\nI think to calcualte the most frequent values more accurately it would\ntake a two pass approach. Scan some random sample of blocks with a\ncounting bloom filter then do a second pass (possibly for the same\nsample?) keeping counts only for values that the counting bloom filter\nsaid hashed to the most common hash values. That might not be exactly\nthe most common values but should be at least a representative sample\nof the most common values.\n\n-- \ngreg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 17 Oct 2014 18:25:45 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
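A rough SQL analogue of Greg's two-pass idea, with the counting Bloom filter collapsed to a single array of hash buckets: pass one counts rows per bucket, pass two keeps exact counts only for values that land in heavy buckets. hashtext() is PostgreSQL's internal hash function; the table, column, and thresholds are illustrative:

    WITH pass1 AS (
        -- pass 1: approximate counts per hash bucket (1024 buckets)
        SELECT hashtext(val::text) & 1023 AS bucket, count(*) AS n
        FROM sample_rows
        GROUP BY 1
    )
    SELECT val, count(*) AS n        -- pass 2: exact counts, candidates only
    FROM sample_rows
    WHERE (hashtext(val::text) & 1023) IN
          (SELECT bucket FROM pass1 WHERE n >= 100)
    GROUP BY val
    ORDER BY n DESC
    LIMIT 10;

As Greg says, bucket collisions mean the candidates are a superset that may include some non-frequent values, but a genuinely frequent value cannot be missed, since its bucket count is at least its own count.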
{
"msg_contents": "On 17.10.2014 19:25, Greg Stark wrote:\n> On Wed, Oct 15, 2014 at 7:02 PM, Tomas Vondra <[email protected]> wrote:\n>> If you know the title of the article, it's usually available\n>> elsewhere on the web - either at the university site, or elsewhere.\n>> I found these two articles about block-based sampling:\n>> \n>> \n>> http://ranger.uta.edu/~gdas/websitepages/preprints-papers/p287-chaudhuri.pdf\n>\n>> \n> There are a series of papers with Chaudhuri as lead author which I \n> agree sounds like what Josh is talking about. Note that he's\n> Microsoft Research's database group lead and it would be a pretty\n> safe bet anything published from there is going to be covered by\n> patents from here till next Tuesday (and seventeen years beyond).\n\nHmmm. I have 0 experience with handling patents and related issues. Any\nidea how to address that?\n\n> I think this is all putting the cart before the horse however. If we \n> could fix our current sampling to use the data more efficiently that \n> would be a good start before we start trying to read even more data.\n> \n> We currently read just one row from each block on average instead of \n> using the whole block. That's what would be needed in the worst case \n> if the blocks were a very biased sample (which indeed they probably \n> are in most databases due to the way Postgres handles updates). But\n> we could at least give users the option to use more than one row per \n> block when they know it's ok (such as data that hasn't been updated) \n> or detect when it's ok (such as by carefully thinking about how \n> Postgres's storage mechanism would bias the data).\n\nI think this will be very tricky, and in fact it may make the estimates\nmuch worse easily, because all the algorithms assume random sampling.\n\nFor example the ndistinct estimator uses the low-frequency values (that\nwere observed only once or twice in the sample). By using multiple rows\nfrom each block, you'll significantly influence this probability for\ncolumns with values correlated to block (which is quite common.\n\nTake for example fact tables in data warehouses - those are usually\ndenormalized, mostly append-only. Say each row has \"date_id\" which is a\nsequential number of a day, with 0 sometime in the past. Daily\nincrements are usually stored on many consecutive blocks, so on each\nblock there's usually a single date_id value.\n\nBy sampling all rows on a block you gain exactly nothing, and in fact it\nresults in observing no low-frequency values, making the estimator\nabsolutely useless.\n\nI can imagine fixing this (although I don't see how exactly), but the\nthing is we need to fix *all* the estimators we have, not just\nndistinct. And that's going to be tough.\n\nI don't think adding a knob to tune the number of tuples sampled per\nblock is a good approach. Either we can solve the issues I described\n(and in that case it's unnecessary), or we can't solve them and it turns\ninto a massive foot gun.\n\n> But I looked into this and ran into a problem. I think our algorithm \n> for calculating the most frequent values list is bunk. The more rows\n> I picked from each block the more biased that list was towards\n> values seen earlier in the table. 
What's worse, when that list was\n> biased it threw off the histogram since we exclude the most frequent\n> values from the histogram, so all estimates were thrown off.\n\nI think the 'minimal' stats (when we have just '=' for the type) behaves\nlike this, but fixing it by switching to a two-pass approach should not\nbe that difficult (but would cost a few more CPU cycles).\n\nOr do you suggest that even the scalar MCV algorithm behaves has this\nbias issue? I doubt that, because the MCV works with an array sorted by\nnumber of occurences, so the position within the table is irrelevant.\n\n> If we could fix the most frequent values collection to not be biased \n> when it sees values in a clumpy way then I think we would be okay to \n> set the row sample size in Vitter's algorithm to a factor of N\n> larger than the block sample size where N is somewhat smaller than\n> the average number of rows per block. In fact even if we used all the\n> rows in the block I think I've convinced myself that the results\n> would be accurate in most circumstances.\n\nI don't expect fixing the MCV to be overly difficult (although it will\nneed a few more CPU cycles).\n\nBut making it work with the block sampling will be much harder, because\nof the bias. The phrase 'in most circumstances' doesn't sound really\nconvincing to me ...\n\n> I think to calcualte the most frequent values more accurately it \n> would take a two pass approach. Scan some random sample of blocks \n> with a counting bloom filter then do a second pass (possibly for the \n> same sample?) keeping counts only for values that the counting bloom \n> filter said hashed to the most common hash values. That might not be \n> exactly the most common values but should be at least a \n> representative sample of the most common values.\n\nI don't see why the counting bloom filter would be necessary, in a two\npass approach?\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 18 Oct 2014 19:01:26 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
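Tomas's fact-table scenario is easy to reproduce. The following generates data where tens of thousands of consecutive rows share one date_id, so nearly every block contains a single value - exactly the layout that makes rows-per-block sampling observe too few low-frequency values (names are hypothetical):

    CREATE TABLE fact (date_id int, payload text);

    -- ~40000 consecutive rows per date_id => many whole blocks per value
    INSERT INTO fact
    SELECT i / 40000, repeat('x', 100)
    FROM generate_series(1, 10000000) AS i;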
{
"msg_contents": "On Sat, Oct 18, 2014 at 6:01 PM, Tomas Vondra <[email protected]> wrote:\n\n> Hmmm. I have 0 experience with handling patents and related issues. Any\n> idea how to address that?\n\nWell there's no real way to address it. But to summarize: 1) We should\nnot go searching for patents, knowing that something is patented\nincreases your liability if you end up infringing on it 2) You should\nconsult your lawyer for advise which he can give under attorney-client\nprivilege and not expose you to such liability. 3) The whole patent\nsystem is fundamentally broken and serves only to protect incumbents\nfrom innovative competitors :(\n\nI think realistically software patents are so vague and cover so many\nbasic algorithms which are obvious to anyone in the field because\nthey're part of the basic knowledge taught in every algorithms course.\nSo Postgres is probably infringing on hundreds of patents which would\nall easily be declared invalid if the owners ever sought to use them\nto attack widely used free software.\n\nThat doesn't mean we should be specifically seeking out specific\nalgorithms that are known to have been published in papers coming out\nof major proprietary database vendors though. That just seems like\nasking for trouble. We should be looking for solutions that seem\nobvious to us or were published 17+ years ago or were published by\npeople who specifically claim they're not patented.\n\n\n> I think this will be very tricky, and in fact it may make the estimates\n> much worse easily, because all the algorithms assume random sampling.\n\nWell this is where Josh and I agree but come to different conclusions.\nHe wants to use more of each block so he can take larger samples in\nthe hopes it will produce better statistics. I want to use more of\neach block so we can continue to take rigorously justified sample\nsizes but without having to read such a large portion of the table.\n\nEither way our current sampling method isn't meeting anyone's needs.\nIt requires you to do copious amounts of I/O which makes running\nanalyze after an upgrade or data load a major contributor to your\noutage window.\n\n>\n> For example the ndistinct estimator\n\nYes, well, the ndistinct estimator just sucks. It's always going to\nsuck. Unless it has a chance to see every row of the table there's\nabsolutely no way it's ever going to not suck. Any attempt to estimate\nndistinct from a sample of any size is going to suck. That's just the\nway it is.\n\nOur sample sizes are based on the size needed to build the histogram.\nThat requires only a fairly small and slow growing sample though our\nblock based sampling inflates the amount of I/O that leads to.\nndistinct and MCV are a different kind of problem that would require a\nproportional sample where the sample needs to grow linearly as the\ntable grows. To get a good estimate of ndistinct requires a large\nenough proportion to effectively require reading the whole table. To\nget decent MCV stats only requires a smaller proportion but it's still\na proportion that grows linearly which is going to be impractical for\nlarge data.\n\nThere are two strategies for dealing with ndistinct that I can see\nhere. Either a) We decide that to estimate ndistinct we need to scan\nthe entire table and just make that a separate type of ANALYZE that\nyou run less frequently. 
Basically this is what we did with VACUUM and\nindeed we could even invest in infrastructure like the FSM which\nallows you to process only the changed pages so it can be\nincrementally.\n\nOr we decide gathering periodic statistics isn't the way to handle\nndistinct and instead watch the incoming inserts and outgoing deletes\nand keep the statistic up to date incrementally. That can be done\nthough obviously it's a bit tricky. But there's tons of published\nresearch on updating database statistics incrementally -- which brings\nus back to patents...\n\n> I think the 'minimal' stats (when we have just '=' for the type) behaves\n> like this, but fixing it by switching to a two-pass approach should not\n> be that difficult (but would cost a few more CPU cycles).\n>\n> Or do you suggest that even the scalar MCV algorithm behaves has this\n> bias issue? I doubt that, because the MCV works with an array sorted by\n> number of occurences, so the position within the table is irrelevant.\n\nHum. I'll have to reconstruct the tests I was doing back then and see\nwhat was going on. I never really understand what was causing it.\n\nWhat I observed was that if I increased the number of rows read per\nsampled block then the MCV was more and more dominated by the rows\nfound early in the table. This was in a column where the values were\nactually uniformly distributed so there should have been only any\nvalues which happened to be sampled substantially more than the mean\nfrequency. They were integers though so should have been covered by\nthe sorting approach.\n\n\n> I don't see why the counting bloom filter would be necessary, in a two\n> pass approach?\n\nI guess I was imagining it was not keeping all the values in memory\nfor at the same time. Isn't that the whole point of the lossy =\nalgorithm? But now that I think of it I don't understand why that\nalgorithm is needed at all.\n\n-- \ngreg\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 19 Oct 2014 23:24:51 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On Fri, Oct 10, 2014 at 10:53 AM, Tomas Vondra <[email protected]> wrote:\n\n> On 10.10.2014 14:10, Tomas Vondra wrote:\n> > Dne 10 Říjen 2014, 13:16, Greg Stark napsal(a):\n> >> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n> >>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n> >>> fixed-size sample. If we're allowed to sample, say, 1% of the table,\n> we\n> >>> can get a MUCH more accurate n_distinct estimate using multiple\n> >>> algorithms, of which HLL is one. While n_distinct will still have some\n> >>> variance, it'll be over a much smaller range.\n> >>\n> >> I've gone looking for papers on this topic but from what I read this\n> >> isn't so. To get any noticeable improvement you need to read 10-50% of\n> >> the table and that's effectively the same as reading the entire table\n> >> -- and it still had pretty poor results. All the research I could find\n> >> went into how to analyze the whole table while using a reasonable\n> >> amount of scratch space and how to do it incrementally.\n> >\n> > I think it's really difficult to discuss the estimation without some\n> basic\n> > agreement on what are the goals. Naturally, we can't get a perfect\n> > estimator with small samples (especially when the sample size is fixed\n> and\n> > not scaling with the table). But maybe we can improve the estimates\n> > without scanning most of the table?\n> >\n> > FWIW I've been playing with the adaptive estimator described in [1] and\n> > the results looks really interesting, IMHO. So far I was testing it on\n> > synthetic datasets outside the database, but I plan to use it instead of\n> > our estimator, and do some more tests.\n>\n> Attached is an experimental patch implementing the adaptive estimator.\n>\n> It was fairly simple (although it's a bit messy). It only computes the\n> estimates for the \"scalar\" case (i.e. data types that we can sort).\n> Implementing this for the \"minimal\" case is possible, but requires a bit\n> more work.\n>\n> It only computes the estimate and prints a WARNING with both the current\n> and new estimate, but the old estimate is stored.\n>\n\nWhen I run this patch on the regression database, I get a case where the\ncurrent method is exact but the adaptive one is off:\n\nWARNING: ndistinct estimate current=676.00 adaptive=906.00\n\nselect count(distinct stringu1) from onek;\n676\n\nIt should be seeing every single row, so I don't know why the adaptive\nmethod is off. Seems like a bug.\n\n\nCheers,\n\nJeff\n\nOn Fri, Oct 10, 2014 at 10:53 AM, Tomas Vondra <[email protected]> wrote:On 10.10.2014 14:10, Tomas Vondra wrote:\n> Dne 10 Říjen 2014, 13:16, Greg Stark napsal(a):\n>> On Thu, Oct 2, 2014 at 8:56 PM, Josh Berkus <[email protected]> wrote:\n>>> Yes, it's only intractable if you're wedded to the idea of a tiny,\n>>> fixed-size sample. If we're allowed to sample, say, 1% of the table, we\n>>> can get a MUCH more accurate n_distinct estimate using multiple\n>>> algorithms, of which HLL is one. While n_distinct will still have some\n>>> variance, it'll be over a much smaller range.\n>>\n>> I've gone looking for papers on this topic but from what I read this\n>> isn't so. To get any noticeable improvement you need to read 10-50% of\n>> the table and that's effectively the same as reading the entire table\n>> -- and it still had pretty poor results. 
All the research I could find\n>> went into how to analyze the whole table while using a reasonable\n>> amount of scratch space and how to do it incrementally.\n>\n> I think it's really difficult to discuss the estimation without some basic\n> agreement on what are the goals. Naturally, we can't get a perfect\n> estimator with small samples (especially when the sample size is fixed and\n> not scaling with the table). But maybe we can improve the estimates\n> without scanning most of the table?\n>\n> FWIW I've been playing with the adaptive estimator described in [1] and\n> the results looks really interesting, IMHO. So far I was testing it on\n> synthetic datasets outside the database, but I plan to use it instead of\n> our estimator, and do some more tests.\n\nAttached is an experimental patch implementing the adaptive estimator.\n\nIt was fairly simple (although it's a bit messy). It only computes the\nestimates for the \"scalar\" case (i.e. data types that we can sort).\nImplementing this for the \"minimal\" case is possible, but requires a bit\nmore work.\n\nIt only computes the estimate and prints a WARNING with both the current\nand new estimate, but the old estimate is stored.When I run this patch on the regression database, I get a case where the current method is exact but the adaptive one is off:WARNING: ndistinct estimate current=676.00 adaptive=906.00select count(distinct stringu1) from onek;676It should be seeing every single row, so I don't know why the adaptive method is off. Seems like a bug. Cheers,Jeff",
"msg_date": "Fri, 21 Nov 2014 10:38:27 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 21.11.2014 19:38, Jeff Janes wrote:\n>\n> When I run this patch on the regression database, I get a case where \n> the current method is exact but the adaptive one is off:\n>\n> WARNING: ndistinct estimate current=676.00 adaptive=906.00\n> \n> select count(distinct stringu1) from onek;\n> 676\n> \n> It should be seeing every single row, so I don't know why the\n> adaptive method is off. Seems like a bug.\n\nThanks for noticing this. I wouldn't call it a bug, but there's clearly\nroom for improvement.\n\nThe estimator, as described in the original paper, does not expect the\nsampling to be done \"our\" way (using fixed number of rows) but assumes\nto get a fixed percentage of rows. Thus it does not expect the number of\nsampled rows to get so close (or equal) to the total number of rows.\n\nI think the only way to fix this is by checking if samplerows is close\nto totalrows, and use a straightforward estimate in that case (instead\nof a more sophisticated one). Something along these lines:\n\n\tif (samplerows >= 0.95 * totalrows)\n\t\tstats->stadistinct = (d + d/0.95) / 2;\n\nwhich means \"if we sampled >= 95% of the table, use the number of\nobserved distinct values directly\".\n\nI have modified the estimator to do the adaptive estimation, and then do\nthis correction too (and print the values). And with that in place I get\nthese results\n\n WARNING: ndistinct estimate current=676.00 adaptive=996.00\n WARNING: corrected ndistinct estimate current=676.00 adaptive=693.79\n\nSo it gets fairly close to the original estimate (and exact value).\n\nIn the end, this check should be performed before calling the adaptive\nestimator at all (and not calling it in case we sampled most of the rows).\n\nI also discovered an actual bug in the optimize_estimate() function,\nusing 'f_max' instead of the number of sampled rows.\n\nAttached is a patch fixing the bug, and implementing the sample size check.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sun, 23 Nov 2014 21:19:56 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 30 September 2014 at 05:53, Simon Riggs <[email protected]> wrote:\n> On 29 September 2014 16:00, Merlin Moncure <[email protected]> wrote:\n>> On Fri, Sep 26, 2014 at 3:06 AM, Simon Riggs <[email protected]> wrote:\n>>> The problem, as I see it, is different. We assume that if there are\n>>> 100 distinct values and you use LIMIT 1 that you would only need to\n>>> scan 1% of rows. We assume that the data is arranged in the table in a\n>>> very homogenous layout. When data is not, and it seldom is, we get\n>>> problems.\n>>\n>> Hm, good point -- 'data proximity'. At least in theory, can't this be\n>> measured and quantified? For example, given a number of distinct\n>> values, you could estimate the % of pages read (or maybe non\n>> sequential seeks relative to the number of pages) you'd need to read\n>> all instances of a particular value in the average (or perhaps the\n>> worst) case. One way of trying to calculate that would be to look at\n>> proximity of values in sampled pages (and maybe a penalty assigned for\n>> high update activity relative to table size). Data proximity would\n>> then become a cost coefficient to the benefits of LIMIT.\n>\n> The necessary first step to this is to realise that we can't simply\n> apply the LIMIT as a reduction in query cost, in all cases.\n>\n> The way I'm seeing it, you can't assume the LIMIT will apply to any\n> IndexScan that doesn't have an index condition. If it has just a\n> filter, or nothing at all, just an ordering then it could easily scan\n> the whole index if the stats are wrong.\n>\n> So plans like this could be wrong, by assuming the scan will end\n> earlier because of the LIMIT than it actually will.\n>\n> Limit\n> IndexScan (no index cond)\n>\n> Limit\n> NestJoin\n> IndexScan (no index cond)\n> SomeScan\n>\n> Limit\n> NestJoin\n> NestJoin\n> IndexScan (no index cond)\n> SomeScan\n> SomeScan\n>\n> and deeper...\n>\n> I'm looking for a way to identify and exclude such plans, assuming\n> that this captures at least some of the problem plans.\n\nAfter looking at this for some time I now have a patch that solves this.\n\nIt relies on the observation that index scans with no bounded quals\ndon't play nicely with LIMIT. The solution relies upon the point that\nLIMIT does not reduce the startup cost of plans, only the total cost.\nSo we can solve the problem by keeping the total cost estimate, just\nmove some of that into startup cost so LIMIT does not reduce costs as\nmuch as before.\n\nIt's a simple patch, but it solves the test cases I know about and\ndoes almost nothing to planning time.\n\nI tried much less subtle approaches involving direct prevention of\nLIMIT pushdown but the code was much too complex for my liking.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 5 Dec 2014 15:46:15 +0900",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
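For context on why shifting cost into startup works: the planner discounts a plan under LIMIT roughly as cost = startup_cost + (total_cost - startup_cost) * (rows_fetched / rows_expected), so only the portion above the startup cost gets scaled down. As a rough worked illustration (the numbers are invented): with total_cost = 1000, startup_cost = 0, and a LIMIT expected to fetch 1% of the rows, the limited cost is 0 + 1000 * 0.01 = 10; shift 500 of that total into startup cost and it becomes 500 + 500 * 0.01 = 505, so an unbounded index scan no longer looks misleadingly cheap.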
{
"msg_contents": "On Fri, Dec 5, 2014 at 12:46 AM, Simon Riggs <[email protected]> wrote:\n> On 30 September 2014 at 05:53, Simon Riggs <[email protected]> wrote:\n>> On 29 September 2014 16:00, Merlin Moncure <[email protected]> wrote:\n>>> On Fri, Sep 26, 2014 at 3:06 AM, Simon Riggs <[email protected]> wrote:\n>>>> The problem, as I see it, is different. We assume that if there are\n>>>> 100 distinct values and you use LIMIT 1 that you would only need to\n>>>> scan 1% of rows. We assume that the data is arranged in the table in a\n>>>> very homogenous layout. When data is not, and it seldom is, we get\n>>>> problems.\n>>>\n>>> Hm, good point -- 'data proximity'. At least in theory, can't this be\n>>> measured and quantified? For example, given a number of distinct\n>>> values, you could estimate the % of pages read (or maybe non\n>>> sequential seeks relative to the number of pages) you'd need to read\n>>> all instances of a particular value in the average (or perhaps the\n>>> worst) case. One way of trying to calculate that would be to look at\n>>> proximity of values in sampled pages (and maybe a penalty assigned for\n>>> high update activity relative to table size). Data proximity would\n>>> then become a cost coefficient to the benefits of LIMIT.\n>>\n>> The necessary first step to this is to realise that we can't simply\n>> apply the LIMIT as a reduction in query cost, in all cases.\n>>\n>> The way I'm seeing it, you can't assume the LIMIT will apply to any\n>> IndexScan that doesn't have an index condition. If it has just a\n>> filter, or nothing at all, just an ordering then it could easily scan\n>> the whole index if the stats are wrong.\n>>\n>> So plans like this could be wrong, by assuming the scan will end\n>> earlier because of the LIMIT than it actually will.\n>>\n>> Limit\n>> IndexScan (no index cond)\n>>\n>> Limit\n>> NestJoin\n>> IndexScan (no index cond)\n>> SomeScan\n>>\n>> Limit\n>> NestJoin\n>> NestJoin\n>> IndexScan (no index cond)\n>> SomeScan\n>> SomeScan\n>>\n>> and deeper...\n>>\n>> I'm looking for a way to identify and exclude such plans, assuming\n>> that this captures at least some of the problem plans.\n>\n> After looking at this for some time I now have a patch that solves this.\n>\n> It relies on the observation that index scans with no bounded quals\n> don't play nicely with LIMIT. The solution relies upon the point that\n> LIMIT does not reduce the startup cost of plans, only the total cost.\n> So we can solve the problem by keeping the total cost estimate, just\n> move some of that into startup cost so LIMIT does not reduce costs as\n> much as before.\n>\n> It's a simple patch, but it solves the test cases I know about and\n> does almost nothing to planning time.\n>\n> I tried much less subtle approaches involving direct prevention of\n> LIMIT pushdown but the code was much too complex for my liking.\n\nNeat -- got any test cases (would this have prevented OP's problem)?\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 5 Dec 2014 09:45:23 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 6 December 2014 at 00:45, Merlin Moncure <[email protected]> wrote:\n\n> Neat -- got any test cases (would this have prevented OP's problem)?\n\nNo test case was posted, so I am unable to confirm.\n\nA test case I produced that appears to be the same issue is fixed.\n\nI await confirmation from the OP.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 6 Dec 2014 01:04:59 +0900",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "Hi!\n\nThis was initially posted to pgsql-performance in this thread:\n\n http://www.postgresql.org/message-id/[email protected]\n\nbut pgsql-hackers seems like a more appropriate place for further\ndiscussion.\n\nAnyways, attached is v3 of the patch implementing the adaptive ndistinct\nestimator. Just like the previous version, the original estimate is the\none stored/used, and the alternative one is just printed, to make it\npossible to compare the results.\n\nChanges in this version:\n\n1) implementing compute_minimal_stats\n\n - So far only the 'scalar' (more common) case was handled.\n\n - The algorithm requires more detailed input data, the MCV-based\n stats insufficient, so the code hashes the values and then\n determines the f1, f2, ..., fN coefficients by sorting and\n walking the array of hashes.\n\n2) handling wide values properly (now are counted into f1)\n\n3) compensating for NULL values when calling optimize_estimate\n\n - The estimator has no notion of NULL values, so it's necessary to\n remove them both from the total number of rows, and sampled rows.\n\n4) some minor fixes and refactorings\n\n\nI also repeated the tests comparing the results to the current estimator\n- full results are at the end of the post.\n\nThe one interesting case is the 'step skew' with statistics_target=10,\ni.e. estimates based on mere 3000 rows. In that case, the adaptive\nestimator significantly overestimates:\n\n values current adaptive\n ------------------------------\n 106 99 107\n 106 8 6449190\n 1006 38 6449190\n 10006 327 42441\n\nI don't know why I didn't get these errors in the previous runs, because\nwhen I repeat the tests with the old patches I get similar results with\na 'good' result from time to time. Apparently I had a lucky day back\nthen :-/\n\nI've been messing with the code for a few hours, and I haven't found any\nsignificant error in the implementation, so it seems that the estimator\ndoes not perform terribly well for very small samples (in this case it's\n3000 rows out of 10.000.000 (i.e. ~0.03%).\n\nHowever, I've been able to come up with a simple way to limit such\nerrors, because the number of distinct values is naturally bounded by\n\n (totalrows / samplerows) * ndistinct\n\nwhere ndistinct is the number of distinct values in the sample. This\nessentially means that if you slice the table into sets of samplerows\nrows, you get different ndistinct values.\n\nBTW, this also fixes the issue reported by Jeff Janes on 21/11.\n\nWith this additional sanity check, the results look like this:\n\n values current adaptive\n ------------------------------\n 106 99 116\n 106 8 23331\n 1006 38 96657\n 10006 327 12400\n\nWhich is much better, but clearly still a bit on the high side.\n\nSo either the estimator really is a bit unstable for such small samples\n(it tends to overestimate a bit in all the tests), or there's a bug in\nthe implementation - I'd be grateful if someone could peek at the code\nand maybe compare it to the paper describing the estimator. 
I've spent a\nfair amount of time analyzing it, but found nothing.\n\nBut maybe the estimator really is unstable for such small samples - in\nthat case we could probably use the current estimator as a fallback.\nAfter all, this only happens when someone explicitly decreases the\nstatistics target to 10 - with the default statistics target it's damn\naccurate.\n\nkind regards\nTomas\n\n\nstatistics_target = 10\n======================\n\na) smooth skew, 101 values, different skew ('k')\n\n - defaults to the current estimator\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n -----------------------\n 1 10231 11259\n 2 6327 8543\n 3 4364 7707\n 4 3436 7052\n 5 2725 5868\n 6 2223 5071\n 7 1979 5011\n 8 1802 5017\n 9 1581 4546\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 99 107\n 106 8 6449190\n 1006 38 6449190\n 10006 327 42441\n\n patched:\n\n values current adaptive\n ------------------------------\n 106 99 116\n 106 8 23331\n 1006 38 96657\n 10006 327 12400\n\n\nstatistics_target = 100\n=======================\n\na) smooth skew, 101 values, different skew ('k')\n\n - defaults to the current estimator\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n -----------------------------\n 1 10011 10655\n 2 9641 10944\n 3 8837 10846\n 4 8315 10992\n 5 7654 10760\n 6 7162 10524\n 7 6650 10375\n 8 6268 10275\n 9 5871 9783\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 30 70\n 1006 271 1181\n 10006 2804 10312\n\n\nstatistics_target = 1000\n========================\n\na) smooth skew, 101 values, different skew ('k')\n\n - defaults to the current estimator\n\nb) smooth skew, 10.001 values, different skew ('k')\n\n k current adaptive\n ---------------------------\n 3 10001 10002\n 4 10000 10003\n 5 9996 10008\n 6 9985 10013\n 7 9973 10047\n 8 9954 10082\n 9 9932 10100\n\nc) step skew (different numbers of values)\n\n values current adaptive\n ------------------------------\n 106 105 113\n 1006 958 1077\n 10006 9592 10840\n\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Sun, 07 Dec 2014 02:54:14 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "PATCH: adaptive ndistinct estimator v3 (WAS: Re: [PERFORM] Yet\n another\n abort-early plan disaster on 9.3)"
},
{
"msg_contents": "On 12/05/2014 08:04 AM, Simon Riggs wrote:\n> On 6 December 2014 at 00:45, Merlin Moncure <[email protected]> wrote:\n> \n>> Neat -- got any test cases (would this have prevented OP's problem)?\n> \n> No test case was posted, so I am unable to confirm.\n> \n> A test case I produced that appears to be the same issue is fixed.\n> \n> I await confirmation from the OP.\n> \n\nSo that's proprietary/confidential data. However, the company involved\nhas a large testbed and I could test their data using a patched version\nof Postgres. In 3 months their data distribution has drifted, so I'll\nneed to do some work to recreate the original bad plan circumstances.\nI'll keep you posted on how the patch works for that setup.\n\nIt would be great to come up with a generic/public test for a bad\nabort-early situation. Ideas?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 09 Dec 2014 17:46:34 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 10 December 2014 at 10:46, Josh Berkus <[email protected]> wrote:\n> On 12/05/2014 08:04 AM, Simon Riggs wrote:\n>> On 6 December 2014 at 00:45, Merlin Moncure <[email protected]> wrote:\n>>\n>>> Neat -- got any test cases (would this have prevented OP's problem)?\n>>\n>> No test case was posted, so I am unable to confirm.\n>>\n>> A test case I produced that appears to be the same issue is fixed.\n>>\n>> I await confirmation from the OP.\n>>\n>\n> So that's proprietary/confidential data. However, the company involved\n> has a large testbed and I could test their data using a patched version\n> of Postgres. In 3 months their data distribution has drifted, so I'll\n> need to do some work to recreate the original bad plan circumstances.\n> I'll keep you posted on how the patch works for that setup.\n>\n> It would be great to come up with a generic/public test for a bad\n> abort-early situation. Ideas?\n\nIf you could contribute that, it would be welcome.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 11:09:01 +0900",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 30 September 2014 at 10:25, Simon Riggs <[email protected]> wrote:\n> On 30 September 2014 00:00, Tom Lane <[email protected]> wrote:\n\n>> The existing cost estimation\n>> code effectively assumes that they're perfectly uniformly distributed;\n>> which is a good average-case assumption but can be horribly wrong in\n>> the worst case.\n>\n> Agreed. This is the main observation from which we can work.\n>\n>> If we could settle on some other model for the probable distribution\n>> of the matching tuples, we could adjust the cost estimates for LIMIT\n>> accordingly. I have not enough statistics background to know what a\n>> realistic alternative would be.\n>\n> I'm not sure that the correlation alone is sufficient to be able to do\n> that. We'd need to estimate where the values looked for are likely to\n> be wrt other values, then increase estimate accordingly. That sounds\n> like a lot of pushups grovelling through quals and comparing against\n> stats. So my thinking is actually to rule that out, unless you've some\n> ideas for how to do that?\n>\n>> Another possibility is to still assume a uniform distribution but estimate\n>> for, say, a 90% probability instead of 50% probability that we'll find\n>> enough tuples after scanning X amount of the table. Again, I'm not too\n>> sure what that translates to in terms of the actual math, but it sounds\n>> like something a statistics person could do in their sleep.\n\n\nThe problem is one of risk. Whatever distribution we use, it will be\nwrong in some cases and good in others.\n\nFor example, if we look at \"10 Most Recent Calls\" for a user, then\nfrequent users would have one distribution, infrequent users another.\nSo we have multiple distributions in the same data. We just can't hold\nenough information to make sense of this.\n\nThink about how much data needs to be scanned if the user has only done 9 calls.\n\nWhat I've done in the past is to rewrite the query in different ways\nto force different plans, then call each plan depending upon the user\ncharacteristics. This is can also be done with hints, in a more\nignorant way.\n\n\n>> I do not think we should estimate for the worst case though. If we do,\n>> we'll hear cries of anguish from a lot of people, including many of the\n>> same ones complaining now, because the planner stopped picking fast-start\n>> plans even for cases where they are orders of magnitude faster than the\n>> alternatives.\n>\n> Fast start plans still make sense when performing an IndexScan with no\n> filter conditions. Those types of plan should not be changed from\n> current costing - they are accurate, good and very important because\n> of their frequency in real workloads.\n>\n> What I think we are seeing is Ordered plans being selected too often\n> in preference to Sorted plans when we make selectivity or stats\n> errors. As well as data distributions that aren't correctly described\n> by the statistics causing much longer execution times.\n>\n> Here are some plan selection strategies\n>\n> * Cost based - attempt to exactly calculate the cost based upon\n> existing stats - increase the complexity of cost calc to cover other\n> aspects. Even if we do that, these may not be that helpful in covering\n> the cases where the stats turn out to be wrong.\n>\n> * Risk based - A risk adjusted viewpoint would be that we should treat\n> the cost as mid-way between the best and the worst. 
The worst is\n> clearly scanning (100% - N) of the tuples, the best is just N tuples.\n> So we should be costing scans with excess filter conditions as a (100%\n> Scan)/2, no matter the conditions, based purely upon risk.\n>\n> * Simplified heuristic - deselect ordered plans when they are driven\n> from scans without quals or indexscans with filters, since the risk\n> adjusted cost is likely to be higher than the sorted cost. Inspecting\n> the plan tree for this could be quite costly, so would only be done\n> when the total cost is $high, prior to it being adjusted by LIMIT.\n>\n>\n> In terms of practical steps... I suggest the following:\n>\n> * Implement enable_orderedscan = on (default) | off. A switch to allow\n> plans to de-select ordered plans, so we can more easily see the\n> effects of such plans in the wild.\n>\n> * Code heuristic approach - I can see where to add my heuristic in the\n> grouping planner. So we just need to do a left-deep (?) search of the\n> plan tree looking for scans of the appropriate type and bail out if we\n> find one.\n\nAfter looking at this for some time I now have a patch that solves this.\n\nIt relies on the observation that index scans with no bounded quals\ndon't play nicely with LIMIT. The solution relies upon the point that\nLIMIT does not reduce the startup cost of plans, only the total cost.\nSo we can solve the problem by keeping the total cost estimate, just\nmove some of that into startup cost so LIMIT does not reduce costs as\nmuch as before.\n\nIt's a simple patch, but it solves the test cases I know about and\ndoes almost nothing to planning time.\n\nI tried much less subtle approaches involving direct prevention of\nLIMIT pushdown but the code was much too complex for my liking.\n\n- - -\n\nThe only other practical way to do this would be to have a\nLimitPlanDisaster node\n\nLimitPlanDisaster\n-> PreSortedPath\n-> CheapestPath\n\nThe PlanDisaster node would read the PreSortedPath for costlimit C.\nAfter we reach the limit we switch to the CheapestPath and execute\nthat instead for the remainder of the Limit.\n\nOr we could do time limits, just harder to make that make sense.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 12 Dec 2014 03:22:44 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 12 December 2014 at 03:22, Simon Riggs <[email protected]> wrote:\n\n> It's a simple patch, but it solves the test cases I know about and\n> does almost nothing to planning time.\n\nTest cases attached. The files marked \"pettus_*\" are written up from\nChristophe Pettus' blog.\nThe other test case is one of my own devising, based upon recent\ncustomer problems.\n\nThe \"10 most recent calls\" is a restatement of actual problems seen in the past.\n\n\nAlso attached is a new parameter called enable_sortedpath which can be\nused to turn on/off the sorted path generated by the planner.\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 12 Dec 2014 03:31:45 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 12 December 2014 at 03:31, Simon Riggs <[email protected]> wrote:\n\n> Also attached is a new parameter called enable_sortedpath which can be\n> used to turn on/off the sorted path generated by the planner.\n\nNow with attachment. (Thanks Jeff!)\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 17 Dec 2014 07:55:29 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 12/07/2014 03:54 AM, Tomas Vondra wrote:\n> The one interesting case is the 'step skew' with statistics_target=10,\n> i.e. estimates based on mere 3000 rows. In that case, the adaptive\n> estimator significantly overestimates:\n>\n> values current adaptive\n> ------------------------------\n> 106 99 107\n> 106 8 6449190\n> 1006 38 6449190\n> 10006 327 42441\n>\n> I don't know why I didn't get these errors in the previous runs, because\n> when I repeat the tests with the old patches I get similar results with\n> a 'good' result from time to time. Apparently I had a lucky day back\n> then :-/\n>\n> I've been messing with the code for a few hours, and I haven't found any\n> significant error in the implementation, so it seems that the estimator\n> does not perform terribly well for very small samples (in this case it's\n> 3000 rows out of 10.000.000 (i.e. ~0.03%).\n\nThe paper [1] gives an equation for an upper bound of the error of this \nGEE estimator. How do the above numbers compare with that bound?\n\n[1] \nhttp://ftp.cse.buffalo.edu/users/azhang/disc/disc01/cd1/out/papers/pods/towardsestimatimosur.pdf\n\n- Heikki\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Tue, 23 Dec 2014 12:28:42 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v3 (WAS: Re: [PERFORM]\n Yet another abort-early plan disaster on 9.3)"
},
{
"msg_contents": "On Wed, Dec 17, 2014 at 4:55 PM, Simon Riggs <[email protected]> wrote:\n\n> On 12 December 2014 at 03:31, Simon Riggs <[email protected]> wrote:\n>\n> > Also attached is a new parameter called enable_sortedpath which can be\n> > used to turn on/off the sorted path generated by the planner.\n>\n> Now with attachment. (Thanks Jeff!)\n>\n\nMoved this patch to CF 2015-02 because it did not receive any reviews.\nBetter to not lose track of it.\n-- \nMichael\n\nOn Wed, Dec 17, 2014 at 4:55 PM, Simon Riggs <[email protected]> wrote:On 12 December 2014 at 03:31, Simon Riggs <[email protected]> wrote:\n\n> Also attached is a new parameter called enable_sortedpath which can be\n> used to turn on/off the sorted path generated by the planner.\n\nNow with attachment. (Thanks Jeff!)Moved this patch to CF 2015-02 because it did not receive any reviews. Better to not lose track of it.-- Michael",
"msg_date": "Fri, 13 Feb 2015 16:16:28 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
},
{
"msg_contents": "On 23.12.2014 11:28, Heikki Linnakangas wrote:\n> On 12/07/2014 03:54 AM, Tomas Vondra wrote:\n>> The one interesting case is the 'step skew' with statistics_target=10,\n>> i.e. estimates based on mere 3000 rows. In that case, the adaptive\n>> estimator significantly overestimates:\n>>\n>> values current adaptive\n>> ------------------------------\n>> 106 99 107\n>> 106 8 6449190\n>> 1006 38 6449190\n>> 10006 327 42441\n>>\n>> I don't know why I didn't get these errors in the previous runs, because\n>> when I repeat the tests with the old patches I get similar results with\n>> a 'good' result from time to time. Apparently I had a lucky day back\n>> then :-/\n>>\n>> I've been messing with the code for a few hours, and I haven't\n>> found any significant error in the implementation, so it seems that\n>> the estimator does not perform terribly well for very small samples\n>> (in this case it's 3000 rows out of 10.000.000 (i.e. ~0.03%).\n> \n> The paper [1] gives an equation for an upper bound of the error of this\n> GEE estimator. How do the above numbers compare with that bound?\n\nWell, that's a bit more complicated because the \"Theorem 1\" you mention\ndoes not directly specify upper boundary for a single estimate. It's\nformulated like this:\n\n Assume table with \"N\" rows, D distinct values and sample of \"r\"\n rows (all those values are fixed). Then there exists a dataset with\n those features, so that \"ratio error\"\n\n error(D, D') = max(D'/D, D/D')\n\n is greater than f(N, r, P) with probability at least \"P\". I.e. if you\n randomly choose a sample of 'r' rows, you'll get an error exceeding\n the ratio with probability P.\n\nSo it's not not a hard limit, but speaks about probability of estimation\nerror with some (unknown) dataset dataset. So it describes what you can\nachieve at best - if you think your estimator is better, there'll always\nbe a dataset hiding in the shadows hissing \"Theorem 1\".\n\n\nLet's say we're looking for boundary that's crossed only in 1% (or 5%)\nof measurements. Applying this to the sample data I posted before, i.e.\n10M rows with three sample sizes 'r' (3000, 30.000 and 300.000 rows),\nthe ratio error boundary per the the paper is\n\n 3.000 30.000 300.000\n ----------------------------------------\n 1% 88 28 9\n 5% 70 22 7\n\n\nAt least that's what I get if I compute it using this python function:\n\n def err(N, r, p):\n return sqrt((N-r)/(2.0*r) * log(1.0/p))\n\n\nSo the estimates I posted before are not terribly good, I guess,\nespecially the ones returning 6449190. I wonder whether there really is\nsome stupid bug in the implementation.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 19 Feb 2015 04:08:33 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v3 (WAS: Re: [PERFORM]\n Yet another abort-early plan disaster on 9.3)"
},
{
"msg_contents": "Hi all,\n\nattached is v4 of the patch implementing adaptive ndistinct estimator.\n\nI've been looking into the strange estimates, mentioned on 2014/12/07:\n\n> values current adaptive\n> ------------------------------\n> 106 99 107\n> 106 8 6449190\n> 1006 38 6449190\n> 10006 327 42441\n\nI suspected this might be some sort of rounding error in the numerical\noptimization (looking for 'm' solving the equation from paper), but\nturns out that's not the case.\n\nThe adaptive estimator is a bit unstable for skewed distributions, that\nare not sufficiently smooth. Whenever f[1] or f[2] was 0 (i.e. there\nwere no values occuring exactly once or twice in the sample), the result\nwas rather off.\n\nThe simple workaround for this was adding a fallback to GEE when f[1] or\nf[2] is 0. GEE is another estimator described in the paper, behaving\nmuch better in those cases.\n\nWith the current version, I do get this (with statistics_target=10):\n\n values current adaptive\n ------------------------------\n 106 99 108\n 106 8 178\n 1006 38 2083\n 10006 327 11120\n\nThe results do change a bit based on the sample, but these values are a\ngood example of the values I'm getting.\n\nThe other examples (with skewed but smooth distributions) work as good\nas before.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Tue, 31 Mar 2015 21:02:29 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "> The simple workaround for this was adding a fallback to GEE when f[1] or\nf[2] is 0. GEE is another estimator described in the paper, behaving much\nbetter in those cases.\n\nFor completeness, what's the downside in just always using GEE?\n\n> The simple workaround for this was adding a fallback to GEE when f[1] or f[2] is 0. GEE is another estimator described in the paper, behaving much better in those cases.\nFor completeness, what's the downside in just always using GEE?",
"msg_date": "Fri, 3 Apr 2015 14:46:45 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "Hi,\n\nOn 04/03/15 15:46, Greg Stark wrote:\n> > The simple workaround for this was adding a fallback to GEE when f[1]\n> or f[2] is 0. GEE is another estimator described in the paper, behaving\n> much better in those cases.\n>\n> For completeness, what's the downside in just always using GEE?\n\nThat's a good question.\n\nGEE is the estimator with minimal average error, as defined in Theorem 1 \nin that paper. The exact formulation of the theorem is a bit complex, \nbut it essentially says that knowing just the sizes of the data set and \nsample, there's an accuracy limit.\n\nOr put another way, it's possible to construct the data set so that the \nestimator gives you estimates with error exceeding some limit (with a \ncertain probability).\n\nKnowledge of how much the data set is skewed gives us opportunity to \nimprove the estimates by choosing an estimator performing better with \nsuch data sets. The problem is we don't know the skew - we can only \nestimate it from the sample, which is what the hybrid estimators do.\n\nThe AE estimator (presented in the paper and implemented in the patch) \nis an example of such hybrid estimators, but based on my experiments it \ndoes not work terribly well with one particular type of skew that I'd \nexpect to be relatively common (few very common values, many very rare \nvalues).\n\nLuckily, GEE performs pretty well in this case, but we can use the AE \notherwise (ISTM it gives really good estimates).\n\nBut of course - there'll always be data sets that are poorly estimated \n(pretty much as Theorem 1 in the paper says). I'd be nice to do more \ntesting on real-world data sets, to see if this performs better or worse \nthan our current estimator.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Sun, 05 Apr 2015 01:57:46 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Tue, Mar 31, 2015 at 12:02 PM, Tomas Vondra <[email protected]\n> wrote:\n\n> Hi all,\n>\n> attached is v4 of the patch implementing adaptive ndistinct estimator.\n>\n\nHi Tomas,\n\nI have a case here where the adaptive algorithm underestimates ndistinct by\na factor of 7 while the default estimator is pretty close.\n\n5MB file:\n\nhttps://drive.google.com/file/d/0Bzqrh1SO9FcETU1VYnQxU2RZSWM/view?usp=sharing\n\n# create table foo2 (x text);\n# \\copy foo2 from program 'bzcat ~/temp/foo1.txt.bz2'\n# analyze verbose foo2;\nINFO: analyzing \"public.foo2\"\nINFO: \"foo2\": scanned 6021 of 6021 pages, containing 1113772 live rows and\n0 dead rows; 30000 rows in sample, 1113772 estimated total rows\nWARNING: ndistinct estimate current=998951.78 adaptive=135819.00\n\nCheers,\n\nJeff\n\nOn Tue, Mar 31, 2015 at 12:02 PM, Tomas Vondra <[email protected]> wrote:Hi all,\n\nattached is v4 of the patch implementing adaptive ndistinct estimator.Hi Tomas,I have a case here where the adaptive algorithm underestimates ndistinct by a factor of 7 while the default estimator is pretty close.5MB file:https://drive.google.com/file/d/0Bzqrh1SO9FcETU1VYnQxU2RZSWM/view?usp=sharing# create table foo2 (x text);# \\copy foo2 from program 'bzcat ~/temp/foo1.txt.bz2'# analyze verbose foo2;INFO: analyzing \"public.foo2\"INFO: \"foo2\": scanned 6021 of 6021 pages, containing 1113772 live rows and 0 dead rows; 30000 rows in sample, 1113772 estimated total rowsWARNING: ndistinct estimate current=998951.78 adaptive=135819.00Cheers,Jeff",
"msg_date": "Tue, 14 Apr 2015 23:45:55 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Tue, Apr 14, 2015 at 11:45 PM, Jeff Janes <[email protected]> wrote:\n\n> On Tue, Mar 31, 2015 at 12:02 PM, Tomas Vondra <\n> [email protected]> wrote:\n>\n>> Hi all,\n>>\n>> attached is v4 of the patch implementing adaptive ndistinct estimator.\n>>\n>\n> Hi Tomas,\n>\n> I have a case here where the adaptive algorithm underestimates ndistinct\n> by a factor of 7 while the default estimator is pretty close.\n>\n> 5MB file:\n>\n>\n> https://drive.google.com/file/d/0Bzqrh1SO9FcETU1VYnQxU2RZSWM/view?usp=sharing\n>\n> # create table foo2 (x text);\n> # \\copy foo2 from program 'bzcat ~/temp/foo1.txt.bz2'\n> # analyze verbose foo2;\n> INFO: analyzing \"public.foo2\"\n> INFO: \"foo2\": scanned 6021 of 6021 pages, containing 1113772 live rows\n> and 0 dead rows; 30000 rows in sample, 1113772 estimated total rows\n> WARNING: ndistinct estimate current=998951.78 adaptive=135819.00\n>\n\nI've done a more complete analysis with a real world database I have access\nto.\n\nI've analyzed patched and current with default_statistics_target of 100 and\n10000. (I also have some of the same data under 9.2, but that is not\nmeaningfully different than unpatched head).\n\nFor easier interpretation I hacked up the analyzer so that it just reports\nthe estimated number, never converting to the negative fraction.\n\nSee the spreadsheet here:\n\nhttps://docs.google.com/spreadsheets/d/1qUcBoQkRFFcSDq7GtkiQkHqlLTbxQYl5hh6S0byff2M/edit?usp=sharing\n\nThe 10000 target was initially collected in an attempt to discern the truth\nwhen the 100 target methods disagreed, but I decided to just collect the\ngold-standard truth.\n\nThe truth is given by:\nselect count(*) from (select distinct column from schema.table where column\nis not null) foo;\n\nAnd number_not_null is given by:\nselect count(*) from schema.table where column is not null;\n\nIt looks like the proposed method sometimes overestimates, although never\nby a huge amount, while the old one never overestimated. Overall it mostly\nseems to be more accurate, but occasionally it does substantially worse\nthan the current method. I suspect most of the problems are related to the\nsame issue reported in the last email. There are a lot of places where\nboth underestimate, but where the new method does so by less than head.\n\nIf there are any columns anyone wants to examine further, give me the token\nfor it and I'll try to create a generator that generates data with the same\ndistribution (and clustering, if that seems relevant).\n\nCheers,\n\nJeff\n\nOn Tue, Apr 14, 2015 at 11:45 PM, Jeff Janes <[email protected]> wrote:On Tue, Mar 31, 2015 at 12:02 PM, Tomas Vondra <[email protected]> wrote:Hi all,\n\nattached is v4 of the patch implementing adaptive ndistinct estimator.Hi Tomas,I have a case here where the adaptive algorithm underestimates ndistinct by a factor of 7 while the default estimator is pretty close.5MB file:https://drive.google.com/file/d/0Bzqrh1SO9FcETU1VYnQxU2RZSWM/view?usp=sharing# create table foo2 (x text);# \\copy foo2 from program 'bzcat ~/temp/foo1.txt.bz2'# analyze verbose foo2;INFO: analyzing \"public.foo2\"INFO: \"foo2\": scanned 6021 of 6021 pages, containing 1113772 live rows and 0 dead rows; 30000 rows in sample, 1113772 estimated total rowsWARNING: ndistinct estimate current=998951.78 adaptive=135819.00I've done a more complete analysis with a real world database I have access to.I've analyzed patched and current with default_statistics_target of 100 and 10000. 
(I also have some of the same data under 9.2, but that is not meaningfully different than unpatched head).For easier interpretation I hacked up the analyzer so that it just reports the estimated number, never converting to the negative fraction.See the spreadsheet here:https://docs.google.com/spreadsheets/d/1qUcBoQkRFFcSDq7GtkiQkHqlLTbxQYl5hh6S0byff2M/edit?usp=sharingThe 10000 target was initially collected in an attempt to discern the truth when the 100 target methods disagreed, but I decided to just collect the gold-standard truth.The truth is given by:select count(*) from (select distinct column from schema.table where column is not null) foo;And number_not_null is given by:select count(*) from schema.table where column is not null; It looks like the proposed method sometimes overestimates, although never by a huge amount, while the old one never overestimated. Overall it mostly seems to be more accurate, but occasionally it does substantially worse than the current method. I suspect most of the problems are related to the same issue reported in the last email. There are a lot of places where both underestimate, but where the new method does so by less than head.If there are any columns anyone wants to examine further, give me the token for it and I'll try to create a generator that generates data with the same distribution (and clustering, if that seems relevant).Cheers,Jeff",
"msg_date": "Sun, 19 Apr 2015 22:13:37 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Tue, Mar 31, 2015 at 3:02 PM, Tomas Vondra\n<[email protected]> wrote:\n> attached is v4 of the patch implementing adaptive ndistinct estimator.\n\nSo, I took a look at this today. It's interesting work, but it looks\nmore like a research project than something we can commit to 9.5. As\nfar as I can see, this still computes the estimate the way we do\ntoday, but then also computes the estimate using this new method. The\nestimate computed the new way isn't stored anywhere, so this doesn't\nreally change behavior, except for printing out a WARNING comparing\nthe values produced by the two estimators.\n\nIMHO, the comments in this patch are pretty inscrutable. I believe\nthis is because they presume more knowledge of what the patch is doing\nthan I myself possess. For example:\n\n+ * The AEE estimator is based on solving this equality (for \"m\")\n+ *\n+ * m - f1 - f2 = f1 * (A + A(m)) / (B + B(m))\n+ *\n+ * where A, B are effectively constants (not depending on m), and A(m)\n+ * and B(m) are functions. This is equal to solving\n+ *\n+ * 0 = f1 * (A + A(m)) / (B + B(m)) - (m - f1 - f2)\n\nPerhaps I am just a dummy, but I have no idea what any of that means.\nI think that needs to be fixed so that someone who knows what\nn_distinct is but knows nothing about the details of these estimators\ncan get an idea of how they are doing their thing without too much\neffort. I think a lot of the comments share this problem.\n\nAside from the problems mentioned above, there's the broader question\nof how to evaluate the quality of the estimates produced by this\nestimator vs. what we're doing right now. I see that Jeff Janes has\npointed out some cases that seem to regress with this patch; those\npresumably need some investigation, or at least some comment. And I\nthink some testing from other people would be good too, before we\ncommit to this.\n\nLeaving that aside, at some point, you'll say, \"OK, there may be some\nregressions left but overall I believe this is going to be a win in\nmost cases\". It's going to be really hard for anyone, no matter how\nsmart, to figure out through code review whether that is true. So\ncommitting this figures to be extremely frightening. It's just not\ngoing to be reasonably possible to know what percentage of users are\ngoing to be more happy after this change and what percentage are going\nto be less happy.\n\nTherefore, I think that:\n\n1. This should be committed near the beginning of a release cycle, not\nnear the end. That way, if there are problem cases, we'll have a year\nor so of developer test to shake them out.\n\n2. There should be a compatibility GUC to restore the old behavior.\nThe new behavior should be the default, because if we're not confident\nthat the new behavior will be better for most people, we have no\nbusiness installing it in the first place (plus few people will try\nit). But just in case it turns out to suck for some people, we should\nprovide an escape hatch, at least for a few releases.\n\n3. There should be some clear documentation in the comments indicating\nwhy we believe that this is a whole lot better than what we do today.\nMaybe this has been discussed adequately on the thread and maybe it\nhasn't, but whoever goes to look at the committed code should not have\nto go root through hackers threads to understand why we replaced the\nexisting estimator. It should be right there in the code. 
If,\nhypothetically speaking, I were to commit this, and if, again strictly\nhypothetically, another distinguished committer were to write back a\nyear or two later, \"clearly Robert was an idiot to commit this because\nit's no better than what we had before\" then I want to be able to say\n\"clearly, you have not read what got committed very carefully, because the\ncomment for function <blat> clearly explains that this new technology\nis teh awesome\".\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 30 Apr 2015 16:57:19 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On 04/30/2015 01:57 PM, Robert Haas wrote:\n>\n> 2. There should be a compatibility GUC to restore the old behavior.\n> The new behavior should be the default, because if we're not confident\n> that the new behavior will be better for most people, we have no\n> business installing it in the first place (plus few people will try\n> it). But just in case it turns out to suck for some people, we should\n> provide an escape hatch, at least for a few releases.\n\nYou can override the ndistinct estimate with ALTER TABLE. I think that's \nenough for an escape hatch.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 30 Apr 2015 14:31:48 -0700",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Thu, Apr 30, 2015 at 5:31 PM, Heikki Linnakangas <[email protected]> wrote:\n> On 04/30/2015 01:57 PM, Robert Haas wrote:\n>> 2. There should be a compatibility GUC to restore the old behavior.\n>> The new behavior should be the default, because if we're not confident\n>> that the new behavior will be better for most people, we have no\n>> business installing it in the first place (plus few people will try\n>> it). But just in case it turns out to suck for some people, we should\n>> provide an escape hatch, at least for a few releases.\n>\n> You can override the ndistinct estimate with ALTER TABLE. I think that's\n> enough for an escape hatch.\n\nI'm not saying that isn't nice to have, but I don't think it really\nhelps much here. Setting the value manually requires that you know\nwhat value to set, and you might not. If, on some workloads, the old\nalgorithm beats the new one reliably, you want to be able to actually\ngo back to the old algorithm, not manually override every wrong\ndecision it makes. A GUC for this is pretty cheap insurance.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 30 Apr 2015 18:18:43 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "Hi,\n\nOn 04/30/15 22:57, Robert Haas wrote:\n> On Tue, Mar 31, 2015 at 3:02 PM, Tomas Vondra\n> <[email protected]> wrote:\n>> attached is v4 of the patch implementing adaptive ndistinct estimator.\n>\n> So, I took a look at this today. It's interesting work, but it looks\n> more like a research project than something we can commit to 9.5. As\n> far as I can see, this still computes the estimate the way we do\n> today, but then also computes the estimate using this new method.\n> The estimate computed the new way isn't stored anywhere, so this\n> doesn't really change behavior, except for printing out a WARNING\n> comparing the values produced by the two estimators.\n\nI agree that this is not ready for 9.5 - it was meant as an experiment\n(hence printing the estimate in a WARNING, to make it easier to compare\nthe value to the current estimator). Without that it'd be much more\ncomplicated to compare the old/new estimates, but you're right this is\nnot suitable for commit.\n\nSo far it received only reviews from Jeff Janes (thanks!), but I think a \nchange like this really requires a more thorough review, including the \nmath part. I don't expect that to happen at the very end of the last CF \nbefore the freeze.\n\n> IMHO, the comments in this patch are pretty inscrutable. I believe\n> this is because they presume more knowledge of what the patch is doing\n> than I myself possess. For example:\n>\n> + * The AEE estimator is based on solving this equality (for \"m\")\n> + *\n> + * m - f1 - f2 = f1 * (A + A(m)) / (B + B(m))\n> + *\n> + * where A, B are effectively constants (not depending on m), and A(m)\n> + * and B(m) are functions. This is equal to solving\n> + *\n> + * 0 = f1 * (A + A(m)) / (B + B(m)) - (m - f1 - f2)\n>\n> Perhaps I am just a dummy, but I have no idea what any of that means.\n> I think that needs to be fixed so that someone who knows what\n> n_distinct is but knows nothing about the details of these estimators\n> can get an idea of how they are doing their thing without too much\n> effort. I think a lot of the comments share this problem.\n\nWell, I don't think you're dummy, but this requires reading the paper \ndescribing the estimator. Explaining that fully would essentially mean \ncopying a large portion of the paper in the comment, and I suppose \nthat's not a good idea. The explanation might be perhaps a bit more \ndetailed, though - not sure what's the right balance.\n\n> Aside from the problems mentioned above, there's the broader question\n> of how to evaluate the quality of the estimates produced by this\n> estimator vs. what we're doing right now. I see that Jeff Janes has\n> pointed out some cases that seem to regress with this patch; those\n> presumably need some investigation, or at least some comment. And I\n> think some testing from other people would be good too, before we\n> commit to this.\n\nYeah, evaluating is difficult. I think that Jeff's approach - i.e. \ntesting the estimator on real-world data sets - is the right approach \nhere. Testing on synthetic data sets has it's value too (if only to \nbetter understand the failure cases).\n\n> Leaving that aside, at some point, you'll say, \"OK, there may be some\n> regressions left but overall I believe this is going to be a win in\n> most cases\". It's going to be really hard for anyone, no matter how\n> smart, to figure out through code review whether that is true. So\n> committing this figures to be extremely frightening. 
It's just not\n> going to be reasonably possible to know what percentage of users are\n> going to be more happy after this change and what percentage are\n> going to be less happy.\n\nFor every pair of estimators you can find cases where one of them is \nbetter than the other one. It's pretty much impossible to find an \nestimator that beats all other estimators on all possible inputs.\n\nThere's no way to make this an improvement for everyone - it will \nproduce worse estimates in some cases, and we have to admit that. If we \nthink this is unacceptable, we're effectively stuck with the current \nestimator forever.\n\n> Therefore, I think that:\n>\n> 1. This should be committed near the beginning of a release cycle,\n> not near the end. That way, if there are problem cases, we'll have a\n> year or so of developer test to shake them out.\n\n+1\n\n> 2. There should be a compatibility GUC to restore the old behavior.\n> The new behavior should be the default, because if we're not\n> confident that the new behavior will be better for most people, we\n> have no business installing it in the first place (plus few people\n> will try it). But just in case it turns out to suck for some people,\n> we should provide an escape hatch, at least for a few releases.\n\nI think a \"compatibility GUC\" is a damn poor solution, IMNSHO.\n\nFor example, GUCs are database-wide, but I do expect the estimator to \nperform worse only on a few data sets / columns. So making this a \ncolumn-level settings would be more appropriate, I think.\n\nBut it might work during the development cycle, as it would make \ncomparing the estimators possible (just run the tests with the GUC set \ndifferently). Assuming we'll re-evaluate it at the end, and remove the \nGUC if possible.\n\n>\n> 3. There should be some clear documentation in the comments indicating\n> why we believe that this is a whole lot better than what we do today.\n> Maybe this has been discussed adequately on the thread and maybe it\n> hasn't, but whoever goes to look at the committed code should not have\n> to go root through hackers threads to understand why we replaced the\n> existing estimator. It should be right there in the code. If,\n> hypothetically speaking, I were to commit this, and if, again strictly\n> hypothetically, another distinguished committer were to write back a\n> year or two later, \"clearly Robert was an idiot to commit this because\n> it's no better than what we had before\" then I want to be able to say\n> \"clearly, you have not what got committed very carefully, because the\n> comment for function <blat> clearly explains that this new technology\n> is teh awesome\".\n\nI certainly can add such comment to the patch ;-) Choose a function.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 01 May 2015 03:20:15 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "\n\nOn 05/01/15 00:18, Robert Haas wrote:\n> On Thu, Apr 30, 2015 at 5:31 PM, Heikki Linnakangas <[email protected]> wrote:\n>>\n>> You can override the ndistinct estimate with ALTER TABLE. I think\n>> that's enough for an escape hatch.\n>\n> I'm not saying that isn't nice to have, but I don't think it really\n> helps much here. Setting the value manually requires that you know\n> what value to set, and you might not. If, on some workloads, the old\n> algorithm beats the new one reliably, you want to be able to\n> actually go back to the old algorithm, not manually override every\n> wrong decision it makes. A GUC for this is pretty cheap insurance.\n\nIMHO this is exactly the same situation as with the current ndistinct \nestimator. If we find out we'd have to use this workaround more \nfrequently than before, then clearly the new estimator is rubbish and \nshould not be committed.\n\nIn other words, I agree with Heikki.\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 01 May 2015 03:24:14 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Thu, Apr 30, 2015 at 9:20 PM, Tomas Vondra\n<[email protected]> wrote:\n> I agree that this is not ready for 9.5 - it was meant as an experiment\n> (hence printing the estimate in a WARNING, to make it easier to compare\n> the value to the current estimator). Without that it'd be much more\n> complicated to compare the old/new estimates, but you're right this is\n> not suitable for commit.\n>\n> So far it received only reviews from Jeff Janes (thanks!), but I think a\n> change like this really requires a more thorough review, including the math\n> part. I don't expect that to happen at the very end of the last CF before\n> the freeze.\n\nOK.\n\n>> IMHO, the comments in this patch are pretty inscrutable. I believe\n>> this is because they presume more knowledge of what the patch is doing\n>> than I myself possess. For example:\n>>\n>> + * The AEE estimator is based on solving this equality (for \"m\")\n>> + *\n>> + * m - f1 - f2 = f1 * (A + A(m)) / (B + B(m))\n>> + *\n>> + * where A, B are effectively constants (not depending on m), and A(m)\n>> + * and B(m) are functions. This is equal to solving\n>> + *\n>> + * 0 = f1 * (A + A(m)) / (B + B(m)) - (m - f1 - f2)\n>>\n>> Perhaps I am just a dummy, but I have no idea what any of that means.\n>> I think that needs to be fixed so that someone who knows what\n>> n_distinct is but knows nothing about the details of these estimators\n>> can get an idea of how they are doing their thing without too much\n>> effort. I think a lot of the comments share this problem.\n>\n> Well, I don't think you're dummy, but this requires reading the paper\n> describing the estimator. Explaining that fully would essentially mean\n> copying a large portion of the paper in the comment, and I suppose that's\n> not a good idea. The explanation might be perhaps a bit more detailed,\n> though - not sure what's the right balance.\n\nWell, I think the problem in this case is that the comment describes\nwhat the values are mathematically without explaining what they are\nconceptually. For example, in s=1/2at^2+v_0t+s_0, we could say that a\nis meant to be the rate of change of an unseen variable v, while v_0\nis the initial vale of v, and that s_0 is meant to be the starting\nvalue of s, changing at a rate described by v. That's basically the\nkind of explanation you have right now. It's all correct, but what\ndoes it really mean? It's more helpful to say that we're trying to\nproject the position of a body at a given time (t) given its initial\nposition (s_0), its initial velocity (v), and its rate of acceleration\n(a).\n\n>> 3. There should be some clear documentation in the comments indicating\n>> why we believe that this is a whole lot better than what we do today.\n>> Maybe this has been discussed adequately on the thread and maybe it\n>> hasn't, but whoever goes to look at the committed code should not have\n>> to go root through hackers threads to understand why we replaced the\n>> existing estimator. It should be right there in the code. 
If,\n>> hypothetically speaking, I were to commit this, and if, again strictly\n>> hypothetically, another distinguished committer were to write back a\n>> year or two later, \"clearly Robert was an idiot to commit this because\n>> it's no better than what we had before\" then I want to be able to say\n>> \"clearly, you have not what got committed very carefully, because the\n>> comment for function <blat> clearly explains that this new technology\n>> is teh awesome\".\n>\n> I certainly can add such comment to the patch ;-) Choose a function.\n\nWell, at least the way things are organized right now,\nadaptive_estimator seems like the place.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 1 May 2015 07:55:49 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Thu, Apr 30, 2015 at 6:20 PM, Tomas Vondra <[email protected]>\nwrote:\n\n> Hi,\n>\n> On 04/30/15 22:57, Robert Haas wrote:\n>\n>> On Tue, Mar 31, 2015 at 3:02 PM, Tomas Vondra\n>> <[email protected]> wrote:\n>>\n>>> attached is v4 of the patch implementing adaptive ndistinct estimator.\n>>>\n>>\n>> So, I took a look at this today. It's interesting work, but it looks\n>> more like a research project than something we can commit to 9.5. As\n>> far as I can see, this still computes the estimate the way we do\n>> today, but then also computes the estimate using this new method.\n>> The estimate computed the new way isn't stored anywhere, so this\n>> doesn't really change behavior, except for printing out a WARNING\n>> comparing the values produced by the two estimators.\n>>\n>\n> I agree that this is not ready for 9.5 - it was meant as an experiment\n> (hence printing the estimate in a WARNING, to make it easier to compare\n> the value to the current estimator). Without that it'd be much more\n> complicated to compare the old/new estimates, but you're right this is\n> not suitable for commit.\n>\n\nWith the warning it is very hard to correlate the discrepancy you do see\nwith which column is causing it, as the warnings don't include table or\ncolumn names (Assuming of course that you run it on a substantial\ndatabase--if you just run it on a few toy cases then the warning works\nwell).\n\nIf we want to have an explicitly experimental patch which we want people\nwith interesting real-world databases to report back on, what kind of patch\nwould it have to be to encourage that to happen? Or are we never going to\nget such feedback no matter how friendly we make it? Another problem is\nthat you really need to have the gold standard to compare them to, and\ngetting that is expensive (which is why we resort to sampling in the first\nplace). I don't think there is much to be done on that front other than\nbite the bullet and just do it--perhaps only for the tables which have\ndiscrepancies.\n\nSome of the regressions I've seen are at least partly a bug:\n\n+ /* find the 'm' value minimizing the difference */\n+ for (m = 1; m <= total_rows; m += step)\n+ {\n+ double q = k / (sample_rows * m);\n\nsample_rows and m are both integers, and their product overflows\nvigorously. A simple cast to double before the multiplication fixes the\nfirst example I produced. The estimate goes from 137,177 to 1,108,076.\nThe reality is 1,062,223.\n\nPerhaps m should be just be declared a double, as it is frequently used in\ndouble arithmetic.\n\n\n\n> Leaving that aside, at some point, you'll say, \"OK, there may be some\n>> regressions left but overall I believe this is going to be a win in\n>> most cases\". It's going to be really hard for anyone, no matter how\n>> smart, to figure out through code review whether that is true. So\n>> committing this figures to be extremely frightening. It's just not\n>> going to be reasonably possible to know what percentage of users are\n>> going to be more happy after this change and what percentage are\n>> going to be less happy.\n>>\n>\n> For every pair of estimators you can find cases where one of them is\n> better than the other one. It's pretty much impossible to find an estimator\n> that beats all other estimators on all possible inputs.\n>\n> There's no way to make this an improvement for everyone - it will produce\n> worse estimates in some cases, and we have to admit that. 
If we think this\n> is unacceptable, we're effectively stuck with the current estimator forever.\n>\n> Therefore, I think that:\n>>\n>> 1. This should be committed near the beginning of a release cycle,\n>> not near the end. That way, if there are problem cases, we'll have a\n>> year or so of developer test to shake them out.\n>>\n>\nIt can't hurt, but how effective will it be? Will developers know or care\nwhether ndistinct happened to get better or worse while they are working on\nother things? I would think that problems will be found by focused\ntesting, or during beta, and probably not by accidental discovery during\nthe development cycle. It can't hurt, but I don't know how much it will\nhelp.\n\n\n\n> 2. There should be a compatibility GUC to restore the old behavior.\n>> The new behavior should be the default, because if we're not\n>> confident that the new behavior will be better for most people, we\n>> have no business installing it in the first place (plus few people\n>> will try it). But just in case it turns out to suck for some people,\n>> we should provide an escape hatch, at least for a few releases.\n>>\n>\n> I think a \"compatibility GUC\" is a damn poor solution, IMNSHO.\n>\n\n> For example, GUCs are database-wide, but I do expect the estimator to\n> perform worse only on a few data sets / columns. So making this a\n> column-level settings would be more appropriate, I think.\n>\n> But it might work during the development cycle, as it would make comparing\n> the estimators possible (just run the tests with the GUC set differently).\n> Assuming we'll re-evaluate it at the end, and remove the GUC if possible.\n\n\nI agree with the \"experimental GUC\". That way if hackers do happen to see\nsomething suspicious, they can just turn it off and see what difference it\nmakes. If they have to reverse out a patch from 6 months ago in an area of\nthe code they aren't particularly interested in and then recompile their\ncode and then juggle two different sets of binaries, they will likely just\nshrug it off without investigation.\n\nCheers,\n\nJeff\n\nOn Thu, Apr 30, 2015 at 6:20 PM, Tomas Vondra <[email protected]> wrote:Hi,\n\nOn 04/30/15 22:57, Robert Haas wrote:\n\nOn Tue, Mar 31, 2015 at 3:02 PM, Tomas Vondra\n<[email protected]> wrote:\n\nattached is v4 of the patch implementing adaptive ndistinct estimator.\n\n\nSo, I took a look at this today. It's interesting work, but it looks\nmore like a research project than something we can commit to 9.5. As\nfar as I can see, this still computes the estimate the way we do\ntoday, but then also computes the estimate using this new method.\nThe estimate computed the new way isn't stored anywhere, so this\ndoesn't really change behavior, except for printing out a WARNING\ncomparing the values produced by the two estimators.\n\n\nI agree that this is not ready for 9.5 - it was meant as an experiment\n(hence printing the estimate in a WARNING, to make it easier to compare\nthe value to the current estimator). Without that it'd be much more\ncomplicated to compare the old/new estimates, but you're right this is\nnot suitable for commit.With the warning it is very hard to correlate the discrepancy you do see with which column is causing it, as the warnings don't include table or column names (Assuming of course that you run it on a substantial database--if you just run it on a few toy cases then the warning works well). 
If we want to have an explicitly experimental patch which we want people with interesting real-world databases to report back on, what kind of patch would it have to be to encourage that to happen? Or are we never going to get such feedback no matter how friendly we make it? Another problem is that you really need to have the gold standard to compare them to, and getting that is expensive (which is why we resort to sampling in the first place). I don't think there is much to be done on that front other than bite the bullet and just do it--perhaps only for the tables which have discrepancies.Some of the regressions I've seen are at least partly a bug:+ /* find the 'm' value minimizing the difference */+ for (m = 1; m <= total_rows; m += step)+ {+ double q = k / (sample_rows * m);sample_rows and m are both integers, and their product overflows vigorously. A simple cast to double before the multiplication fixes the first example I produced. The estimate goes from 137,177 to 1,108,076. The reality is 1,062,223.Perhaps m should be just be declared a double, as it is frequently used in double arithmetic. \n\nLeaving that aside, at some point, you'll say, \"OK, there may be some\nregressions left but overall I believe this is going to be a win in\nmost cases\". It's going to be really hard for anyone, no matter how\nsmart, to figure out through code review whether that is true. So\ncommitting this figures to be extremely frightening. It's just not\ngoing to be reasonably possible to know what percentage of users are\ngoing to be more happy after this change and what percentage are\ngoing to be less happy.\n\n\nFor every pair of estimators you can find cases where one of them is better than the other one. It's pretty much impossible to find an estimator that beats all other estimators on all possible inputs.\n\nThere's no way to make this an improvement for everyone - it will produce worse estimates in some cases, and we have to admit that. If we think this is unacceptable, we're effectively stuck with the current estimator forever.\n\n\nTherefore, I think that:\n\n1. This should be committed near the beginning of a release cycle,\nnot near the end. That way, if there are problem cases, we'll have a\nyear or so of developer test to shake them out.It can't hurt, but how effective will it be? Will developers know or care whether ndistinct happened to get better or worse while they are working on other things? I would think that problems will be found by focused testing, or during beta, and probably not by accidental discovery during the development cycle. It can't hurt, but I don't know how much it will help. \n\n2. There should be a compatibility GUC to restore the old behavior.\nThe new behavior should be the default, because if we're not\nconfident that the new behavior will be better for most people, we\nhave no business installing it in the first place (plus few people\nwill try it). But just in case it turns out to suck for some people,\nwe should provide an escape hatch, at least for a few releases.\n\n\nI think a \"compatibility GUC\" is a damn poor solution, IMNSHO.\nFor example, GUCs are database-wide, but I do expect the estimator to perform worse only on a few data sets / columns. So making this a column-level settings would be more appropriate, I think.\n\nBut it might work during the development cycle, as it would make comparing the estimators possible (just run the tests with the GUC set differently). 
Assuming we'll re-evaluate it at the end, and remove the GUC if possible.I agree with the \"experimental GUC\". That way if hackers do happen to see something suspicious, they can just turn it off and see what difference it makes. If they have to reverse out a patch from 6 months ago in an area of the code they aren't particularly interested in and then recompile their code and then juggle two different sets of binaries, they will likely just shrug it off without investigation.Cheers,Jeff",
"msg_date": "Wed, 13 May 2015 14:07:47 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Wed, May 13, 2015 at 5:07 PM, Jeff Janes <[email protected]> wrote:\n> With the warning it is very hard to correlate the discrepancy you do see\n> with which column is causing it, as the warnings don't include table or\n> column names (Assuming of course that you run it on a substantial\n> database--if you just run it on a few toy cases then the warning works\n> well).\n\nPresumably the warning is going to go away before we actually commit this thing.\n\n> If we want to have an explicitly experimental patch which we want people\n> with interesting real-world databases to report back on, what kind of patch\n> would it have to be to encourage that to happen? Or are we never going to\n> get such feedback no matter how friendly we make it? Another problem is\n> that you really need to have the gold standard to compare them to, and\n> getting that is expensive (which is why we resort to sampling in the first\n> place). I don't think there is much to be done on that front other than\n> bite the bullet and just do it--perhaps only for the tables which have\n> discrepancies.\n\nIf we stick with the idea of a GUC to control the behavior, then\nsomebody can run ANALYZE, save the ndistinct estimates, run ANALYZE\nagain, and compare. They can also run SQL queries against the tables\nthemselves to check the real value. We could even provide a script\nfor all of that. I think that would be quite handy.\n\n> It can't hurt, but how effective will it be? Will developers know or care\n> whether ndistinct happened to get better or worse while they are working on\n> other things? I would think that problems will be found by focused testing,\n> or during beta, and probably not by accidental discovery during the\n> development cycle. It can't hurt, but I don't know how much it will help.\n\nOnce we enter beta (or even feature freeze), it's too late to whack\naround the algorithm heavily. We're pretty much committed to\nreleasing and supporting whatever we have got at that point. I guess\nwe could revert it if it doesn't work out, but that's about the only\noption at that point. We have more flexibility during the main part\nof the development cycle. But your point is certainly valid and I\ndon't mean to dispute it.\n\n> I agree with the \"experimental GUC\". That way if hackers do happen to see\n> something suspicious, they can just turn it off and see what difference it\n> makes. If they have to reverse out a patch from 6 months ago in an area of\n> the code they aren't particularly interested in and then recompile their\n> code and then juggle two different sets of binaries, they will likely just\n> shrug it off without investigation.\n\nYep. Users, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 15 May 2015 14:30:02 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
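A sketch of the before/after comparison Robert describes, using only the stock pg_stats view. The schema name and the final count query's table/column are placeholders, and the GUC toggle is hypothetical (the patch as posted only logs a WARNING and defines no such setting):

    -- Save the estimates produced by the first ANALYZE.
    CREATE TEMP TABLE ndistinct_before AS
        SELECT tablename, attname, n_distinct
        FROM pg_stats
        WHERE schemaname = 'public';

    -- ... flip the (hypothetical) estimator GUC here, then ...
    ANALYZE;

    -- Report every column whose estimate moved.
    -- (Negative n_distinct values are fractions of the row count.)
    SELECT s.tablename, s.attname,
           b.n_distinct AS old_estimate,
           s.n_distinct AS new_estimate
    FROM pg_stats s
    JOIN ndistinct_before b USING (tablename, attname)
    WHERE s.schemaname = 'public'
      AND s.n_distinct IS DISTINCT FROM b.n_distinct;

    -- Gold standard for one suspect column (expensive: full scan):
    -- SELECT count(DISTINCT some_column) FROM some_table;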
{
"msg_contents": "On 05/15/2015 11:30 AM, Robert Haas wrote:\n> Once we enter beta (or even feature freeze), it's too late to whack\n> around the algorithm heavily. We're pretty much committed to\n> releasing and supporting whatever we have got at that point. I guess\n> we could revert it if it doesn't work out, but that's about the only\n> option at that point. We have more flexibility during the main part\n> of the development cycle. But your point is certainly valid and I\n> don't mean to dispute it.\n\nI will finally have a customer workload available to test this on this\nweekend. That's been rather delayed by the availability of customer\nhardware,because I'm not allowed to copy out the database. However,\nthis is a database which suffers from multiple ndistinct estimation\nissues in production, so I should be able to get a set of stats back by\nMonday which would show how much of a general improvement it is.\n\nI realize that's after the deadline, but there wasn't much I could do\nabout it. I've tried to simulate the kind of estimation issues I've\nseen, but they don't simulate well.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 15 May 2015 12:35:50 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On Fri, May 15, 2015 at 3:35 PM, Josh Berkus <[email protected]> wrote:\n> On 05/15/2015 11:30 AM, Robert Haas wrote:\n>> Once we enter beta (or even feature freeze), it's too late to whack\n>> around the algorithm heavily. We're pretty much committed to\n>> releasing and supporting whatever we have got at that point. I guess\n>> we could revert it if it doesn't work out, but that's about the only\n>> option at that point. We have more flexibility during the main part\n>> of the development cycle. But your point is certainly valid and I\n>> don't mean to dispute it.\n>\n> I will finally have a customer workload available to test this on this\n> weekend. That's been rather delayed by the availability of customer\n> hardware,because I'm not allowed to copy out the database. However,\n> this is a database which suffers from multiple ndistinct estimation\n> issues in production, so I should be able to get a set of stats back by\n> Monday which would show how much of a general improvement it is.\n>\n> I realize that's after the deadline, but there wasn't much I could do\n> about it. I've tried to simulate the kind of estimation issues I've\n> seen, but they don't simulate well.\n\nThis is clearly 9.6 material at this point, and has been for a while.\nThe patch - at least the last version I looked at - didn't store\nanything different in pg_statistic. It just logged what it would have\nstored. So testing is good, but there's not a question of pushing\nthis into 9.5.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 15 May 2015 15:58:40 -0400",
"msg_from": "Robert Haas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "On 05/15/2015 12:58 PM, Robert Haas wrote:\n> On Fri, May 15, 2015 at 3:35 PM, Josh Berkus <[email protected]> wrote:\n>> On 05/15/2015 11:30 AM, Robert Haas wrote:\n>>> Once we enter beta (or even feature freeze), it's too late to whack\n>>> around the algorithm heavily. We're pretty much committed to\n>>> releasing and supporting whatever we have got at that point. I guess\n>>> we could revert it if it doesn't work out, but that's about the only\n>>> option at that point. We have more flexibility during the main part\n>>> of the development cycle. But your point is certainly valid and I\n>>> don't mean to dispute it.\n>>\n>> I will finally have a customer workload available to test this on this\n>> weekend. That's been rather delayed by the availability of customer\n>> hardware,because I'm not allowed to copy out the database. However,\n>> this is a database which suffers from multiple ndistinct estimation\n>> issues in production, so I should be able to get a set of stats back by\n>> Monday which would show how much of a general improvement it is.\n>>\n>> I realize that's after the deadline, but there wasn't much I could do\n>> about it. I've tried to simulate the kind of estimation issues I've\n>> seen, but they don't simulate well.\n> \n> This is clearly 9.6 material at this point, and has been for a while.\n> The patch - at least the last version I looked at - didn't store\n> anything different in pg_statistic. It just logged what it would have\n> stored. So testing is good, but there's not a question of pushing\n> this into 9.5.\n\nI'm personally OK with that. The last thing we want to do is make query\ncosting changes *in haste*.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 15 May 2015 13:00:19 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
{
"msg_contents": "Hi,\n\nOn 05/13/15 23:07, Jeff Janes wrote:\n> With the warning it is very hard to correlate the discrepancy you do\n> see with which column is causing it, as the warnings don't include\n> table or column names (Assuming of course that you run it on a\n> substantial database--if you just run it on a few toy cases then the\n> warning works well).\n\nThat's true. I've added attnum/attname to the warning in the attached \nversion of the patch.\n\n> If we want to have an explicitly experimental patch which we want\n> people with interesting real-world databases to report back on, what\n> kind of patch would it have to be to encourage that to happen? Or are\n> we never going to get such feedback no matter how friendly we make\n> it? Another problem is that you really need to have the gold standard\n> to compare them to, and getting that is expensive (which is why we\n> resort to sampling in the first place). I don't think there is much\n> to be done on that front other than bite the bullet and just do\n> it--perhaps only for the tables which have discrepancies.\n\nNot sure. The \"experimental\" part of the patch was not really aimed at \nthe users outside the development community - it was meant to be used by \nmembers of the community, possibly testing it on customer databases I \ndon't think adding the GUC into the final release is a good idea, it's \njust a noise in the config no-one would actually use.\n\n> Some of the regressions I've seen are at least partly a bug:\n>\n> + /* find the 'm' value minimizing the difference */\n> + for (m = 1; m <= total_rows; m += step)\n> + {\n> + double q = k / (sample_rows * m);\n>\n> sample_rows and m are both integers, and their product overflows\n> vigorously. A simple cast to double before the multiplication fixes\n> the first example I produced. The estimate goes from 137,177 to\n> 1,108,076. The reality is 1,062,223.\n>\n> Perhaps m should be just be declared a double, as it is frequently\n> used in double arithmetic.\n\nYeah, I just discovered this bug independently. There's another bug that \nthe adaptive_estimator takes total_rows as integer, so it breaks for \ntables with more than INT_MAX rows. Both are fixed in the v5.\n\n>\n> Therefore, I think that:\n>\n> 1. This should be committed near the beginning of a release cycle,\n> not near the end. That way, if there are problem cases, we'll have a\n> year or so of developer test to shake them out.\n>\n>\n> It can't hurt, but how effective will it be? Will developers know or\n> care whether ndistinct happened to get better or worse while they\n> are working on other things? I would think that problems will be\n> found by focused testing, or during beta, and probably not by\n> accidental discovery during the development cycle. It can't hurt, but\n> I don't know how much it will help.\n\nI agree with that - it's unlikely the regressions will get discovered \nrandomly. OTOH I'd expect non-trivial number of people on this list to \nhave a few examples of ndistinct failures, and testing those would be \nmore useful I guess. But that's unlikely to find the cases that worked \nOK before and got broken by the new estimator :-(\n\n> I agree with the \"experimental GUC\". That way if hackers do happen to\n> see something suspicious, they can just turn it off and see what\n> difference it makes. 
If they have to reverse out a patch from 6 months\n> ago in an area of the code they aren't particularly interested in and\n> then recompile their code and then juggle two different sets of\n> binaries, they will likely just shrug it off without investigation.\n\n+1\n\n\n--\nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Wed, 17 Jun 2015 16:47:28 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PATCH: adaptive ndistinct estimator v4"
},
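For reference, a minimal standalone C sketch of the overflow Jeff found. The variable names follow the snippet quoted above, the constants are invented for illustration, and this is not the actual v5 fix:

    #include <stdio.h>

    int main(void)
    {
        int    sample_rows = 30000;   /* e.g. 300 * default_statistics_target */
        int    m           = 100000;  /* candidate ndistinct value being probed */
        double k           = 25000.0; /* distinct values seen in the sample */

        /* Buggy: sample_rows * m is evaluated in int arithmetic and
         * overflows (3,000,000,000 is well past INT_MAX), so q is garbage. */
        double q_bad  = k / (sample_rows * m);

        /* Fixed: force the product into double arithmetic - or simply
         * declare m (and total_rows) as double, as discussed above. */
        double q_good = k / ((double) sample_rows * m);

        printf("overflowed: %g  correct: %g\n", q_bad, q_good);
        return 0;
    }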
{
"msg_contents": "Sorry for disrupting the thread,\n\ni am wondering will it be possible to use BRIN indexes to better estimate distribution?\n\nI mean create btree index and brin index,\nprobe brin during planning and estimate if abort early plan with btree will be better. \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Nov 2015 12:45:31 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Yet another abort-early plan disaster on 9.3"
}
] |
[
{
"msg_contents": "\n\nHi Folk, \n\nI am trying to investigate some performance issues which we have with postgres\n(a different topic by itself) and tried postgres.9.4beta2, with a hope that it\nperform better.\n\nTurned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n\nSome technical details:\n\n Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one 9.4 )\n\npostgres tweaks:\n\n\ndefault_statistics_target = 100\nwal_writer_delay = 10s\nvacuum_cost_delay = 50\nsynchronous_commit = off\nmaintenance_work_mem = 2GB\ncheckpoint_completion_target = 0.9\neffective_cache_size = 94GB\nwork_mem = 402MB\nwal_buffers = 16MB\ncheckpoint_segments = 64\nshared_buffers = 8GB\nmax_connections = 100\nrandom_page_cost = 1.5\n# other goodies\nlog_line_prefix = '%m <%d %u %r> %%'\nlog_temp_files = 0\nlog_min_duration_statement = 5\n\nin both cases databases are fresh - no data.\n\nHere is a results with pgbench.\n\n\n9.3.5:\n\n# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 96361\ntps = 1605.972262 (including connections establishing)\ntps = 1606.064501 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.001391\t\\set nbranches 1 * :scale\n\t0.000473\t\\set ntellers 10 * :scale\n\t0.000430\t\\set naccounts 100000 * :scale\n\t0.000533\t\\setrandom aid 1 :naccounts\n\t0.000393\t\\setrandom bid 1 :nbranches\n\t0.000468\t\\setrandom tid 1 :ntellers\n\t0.000447\t\\setrandom delta -5000 5000\n\t0.025161\tBEGIN;\n\t0.131317\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n\t0.100211\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.117406\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n\t0.114332\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n\t0.086660\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.035940\tEND;\n\n\n9.4beta2:\n\n# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 34017\ntps = 566.948384 (including connections establishing)\ntps = 567.008666 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.001879\t\\set nbranches 1 * :scale\n\t0.000526\t\\set ntellers 10 * :scale\n\t0.000490\t\\set naccounts 100000 * :scale\n\t0.000595\t\\setrandom aid 1 :naccounts\n\t0.000421\t\\setrandom bid 1 :nbranches\n\t0.000480\t\\setrandom tid 1 :ntellers\n\t0.000484\t\\setrandom delta -5000 5000\n\t0.055047\tBEGIN;\n\t0.172179\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n\t0.135392\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.157224\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n\t0.147969\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n\t0.123001\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.957854\tEND;\n\nany ideas?\n\nTigran.\n\n\n-- \nSent via pgsql-performance mailing list ([email 
protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 11:58:21 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 18/09/14 21:58, Mkrtchyan, Tigran wrote:\n>\n>\n> Hi Folk,\n>\n> I am trying to investigate some performance issues which we have with postgres\n> (a different topic by itself) and tried postgres.9.4beta2, with a hope that it\n> perform better.\n>\n> Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n>\n> Some technical details:\n>\n> Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n> 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n> 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one 9.4 )\n>\n> postgres tweaks:\n>\n>\n> default_statistics_target = 100\n> wal_writer_delay = 10s\n> vacuum_cost_delay = 50\n> synchronous_commit = off\n> maintenance_work_mem = 2GB\n> checkpoint_completion_target = 0.9\n> effective_cache_size = 94GB\n> work_mem = 402MB\n> wal_buffers = 16MB\n> checkpoint_segments = 64\n> shared_buffers = 8GB\n> max_connections = 100\n> random_page_cost = 1.5\n> # other goodies\n> log_line_prefix = '%m <%d %u %r> %%'\n> log_temp_files = 0\n> log_min_duration_statement = 5\n>\n> in both cases databases are fresh - no data.\n>\n> Here is a results with pgbench.\n>\n>\n> 9.3.5:\n>\n> # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 96361\n> tps = 1605.972262 (including connections establishing)\n> tps = 1606.064501 (excluding connections establishing)\n> statement latencies in milliseconds:\n> \t0.001391\t\\set nbranches 1 * :scale\n> \t0.000473\t\\set ntellers 10 * :scale\n> \t0.000430\t\\set naccounts 100000 * :scale\n> \t0.000533\t\\setrandom aid 1 :naccounts\n> \t0.000393\t\\setrandom bid 1 :nbranches\n> \t0.000468\t\\setrandom tid 1 :ntellers\n> \t0.000447\t\\setrandom delta -5000 5000\n> \t0.025161\tBEGIN;\n> \t0.131317\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n> \t0.100211\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> \t0.117406\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n> \t0.114332\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n> \t0.086660\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> \t0.035940\tEND;\n>\n>\n> 9.4beta2:\n>\n> # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 34017\n> tps = 566.948384 (including connections establishing)\n> tps = 567.008666 (excluding connections establishing)\n> statement latencies in milliseconds:\n> \t0.001879\t\\set nbranches 1 * :scale\n> \t0.000526\t\\set ntellers 10 * :scale\n> \t0.000490\t\\set naccounts 100000 * :scale\n> \t0.000595\t\\setrandom aid 1 :naccounts\n> \t0.000421\t\\setrandom bid 1 :nbranches\n> \t0.000480\t\\setrandom tid 1 :ntellers\n> \t0.000484\t\\setrandom delta -5000 5000\n> \t0.055047\tBEGIN;\n> \t0.172179\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n> \t0.135392\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> \t0.157224\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n> \t0.147969\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n> 
\t0.123001\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> \t0.957854\tEND;\n>\n> any ideas?\n>\n\nHi Tigran,\n\nSome ideas:\n\n60s is too short for reliable results (the default checkpoint timeout \nis 300s, so 600s is the typical elapsed time to get reasonably repeatable \nnumbers, ensuring you get about 1 checkpoint in your run). In addition \nI usually do\n\npsql <<!\nCHECKPOINT;\n!\n\nPlus\n\n$ sleep 10\n\nbefore each run so that I've got some confidence that we are starting \nfrom approximately the same state each time (and getting hopefully only \n*one* checkpoint per run)!\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 22:17:45 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
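Putting Mark's advice together, one possible harness; the binary path, database name, and run count are assumptions, the point is the checkpoint and settle time before each timed run:

    #!/bin/sh
    for i in 1 2 3; do
        psql -c 'CHECKPOINT;' postgres
        sleep 10
        /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 600
    done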
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>, [email protected]\n> Sent: Thursday, September 18, 2014 12:17:45 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 18/09/14 21:58, Mkrtchyan, Tigran wrote:\n> >\n> >\n> > Hi Folk,\n> >\n> > I am trying to investigate some performance issues which we have with\n> > postgres\n> > (a different topic by itself) and tried postgres.9.4beta2, with a hope that\n> > it\n> > perform better.\n> >\n> > Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n> >\n> > Some technical details:\n> >\n> > Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n> > 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n> > 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one\n> > 9.4 )\n> >\n> > postgres tweaks:\n> >\n> >\n> > default_statistics_target = 100\n> > wal_writer_delay = 10s\n> > vacuum_cost_delay = 50\n> > synchronous_commit = off\n> > maintenance_work_mem = 2GB\n> > checkpoint_completion_target = 0.9\n> > effective_cache_size = 94GB\n> > work_mem = 402MB\n> > wal_buffers = 16MB\n> > checkpoint_segments = 64\n> > shared_buffers = 8GB\n> > max_connections = 100\n> > random_page_cost = 1.5\n> > # other goodies\n> > log_line_prefix = '%m <%d %u %r> %%'\n> > log_temp_files = 0\n> > log_min_duration_statement = 5\n> >\n> > in both cases databases are fresh - no data.\n> >\n> > Here is a results with pgbench.\n> >\n> >\n> > 9.3.5:\n> >\n> > # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 1\n> > query mode: simple\n> > number of clients: 1\n> > number of threads: 1\n> > duration: 60 s\n> > number of transactions actually processed: 96361\n> > tps = 1605.972262 (including connections establishing)\n> > tps = 1606.064501 (excluding connections establishing)\n> > statement latencies in milliseconds:\n> > \t0.001391\t\\set nbranches 1 * :scale\n> > \t0.000473\t\\set ntellers 10 * :scale\n> > \t0.000430\t\\set naccounts 100000 * :scale\n> > \t0.000533\t\\setrandom aid 1 :naccounts\n> > \t0.000393\t\\setrandom bid 1 :nbranches\n> > \t0.000468\t\\setrandom tid 1 :ntellers\n> > \t0.000447\t\\setrandom delta -5000 5000\n> > \t0.025161\tBEGIN;\n> > \t0.131317\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE\n> > \taid = :aid;\n> > \t0.100211\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> > \t0.117406\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid\n> > \t= :tid;\n> > \t0.114332\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE\n> > \tbid = :bid;\n> > \t0.086660\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES\n> > \t(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> > \t0.035940\tEND;\n> >\n> >\n> > 9.4beta2:\n> >\n> > # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n> > starting vacuum...end.\n> > transaction type: TPC-B (sort of)\n> > scaling factor: 1\n> > query mode: simple\n> > number of clients: 1\n> > number of threads: 1\n> > duration: 60 s\n> > number of transactions actually processed: 34017\n> > tps = 566.948384 (including connections establishing)\n> > tps = 567.008666 (excluding connections establishing)\n> > statement latencies in milliseconds:\n> > \t0.001879\t\\set nbranches 1 * :scale\n> > \t0.000526\t\\set ntellers 10 * :scale\n> > \t0.000490\t\\set naccounts 100000 * :scale\n> > \t0.000595\t\\setrandom aid 1 :naccounts\n> > \t0.000421\t\\setrandom bid 1 
:nbranches\n> > \t0.000480\t\\setrandom tid 1 :ntellers\n> > \t0.000484\t\\setrandom delta -5000 5000\n> > \t0.055047\tBEGIN;\n> > \t0.172179\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE\n> > \taid = :aid;\n> > \t0.135392\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> > \t0.157224\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid\n> > \t= :tid;\n> > \t0.147969\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE\n> > \tbid = :bid;\n> > \t0.123001\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES\n> > \t(:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> > \t0.957854\tEND;\n> >\n> > any ideas?\n> >\n> \n> Hi Tigran,\n> \n> Some ideas:\n> \n> 60s is too short for reliable results (default settings for checkpoints\n> is 300s so 600s is the typical elapsed time to get reasonably repeatable\n> numbers (to ensure you get about 1 checkpoint in your run). In addition\n> I usually do\n> \n> psql <<!\n> CHECKPOINT;\n> !\n> \n> Plus\n> \n> $ sleep 10\n> \n> before each run so that I've got some confidence that we are starting\n> from approximately the same state each time (and getting hopefully only\n> *one* checkpoint per run)!\n\n\nSure, I can run a longer tests with longer breaks in between.\n\n\n\n\n\n\n9.3.5\n\n# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 600\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 600 s\nnumber of transactions actually processed: 1037297\ntps = 1728.826406 (including connections establishing)\ntps = 1728.836277 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.001471\t\\set nbranches 1 * :scale\n\t0.000456\t\\set ntellers 10 * :scale\n\t0.000411\t\\set naccounts 100000 * :scale\n\t0.000524\t\\setrandom aid 1 :naccounts\n\t0.000364\t\\setrandom bid 1 :nbranches\n\t0.000437\t\\setrandom tid 1 :ntellers\n\t0.000424\t\\setrandom delta -5000 5000\n\t0.024217\tBEGIN;\n\t0.118966\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n\t0.092483\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.108232\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n\t0.107978\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;\n\t0.080137\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.034015\tEND;\n\n9.4beta2\n\n# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 600\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 600 s\nnumber of transactions actually processed: 373454\ntps = 622.422377 (including connections establishing)\ntps = 622.429494 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.001252\t\\set nbranches 1 * :scale\n\t0.000417\t\\set ntellers 10 * :scale\n\t0.000384\t\\set naccounts 100000 * :scale\n\t0.000466\t\\setrandom aid 1 :naccounts\n\t0.000344\t\\setrandom bid 1 :nbranches\n\t0.000411\t\\setrandom tid 1 :ntellers\n\t0.000397\t\\setrandom delta -5000 5000\n\t0.047489\tBEGIN;\n\t0.157164\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;\n\t0.119992\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.141147\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;\n\t0.132492\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid 
= :bid;\n\t0.108917\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.889112\tEND;\n\n\n\nTigran.\n\n\n> \n> Cheers\n> \n> Mark\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 15:49:24 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On Thu, Sep 18, 2014 at 2:58 AM, Mkrtchyan, Tigran <[email protected]\n> wrote:\n\n>\n>\n> Hi Folk,\n>\n> I am trying to investigate some performance issues which we have with\n> postgres\n> (a different topic by itself) and tried postgres.9.4beta2, with a hope\n> that it\n> perform better.\n>\n> Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n>\n> Some technical details:\n>\n> Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n> 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n> 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one\n> 9.4 )\n>\n\nWhy are the versions segregated that way? Are you sure they are configured\nidentically?\n\n\n>\n> postgres tweaks:\n>\n>\n> default_statistics_target = 100\n> wal_writer_delay = 10s\n> vacuum_cost_delay = 50\n> synchronous_commit = off\n>\n\nAre you sure that synchronous_commit is actually off on the 9.4 instance?\n\n9.3.5:\n>\n> # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n>\n\n...\n\n\n> 0.035940 END;\n>\n>\n> 9.4beta2:\n>\n...\n\n> 0.957854 END;\n>\n\nLooks like IO.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 18, 2014 at 2:58 AM, Mkrtchyan, Tigran <[email protected]> wrote:\n\nHi Folk,\n\nI am trying to investigate some performance issues which we have with postgres\n(a different topic by itself) and tried postgres.9.4beta2, with a hope that it\nperform better.\n\nTurned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n\nSome technical details:\n\n Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one 9.4 )Why are the versions segregated that way? Are you sure they are configured identically? \n\npostgres tweaks:\n\n\ndefault_statistics_target = 100\nwal_writer_delay = 10s\nvacuum_cost_delay = 50\nsynchronous_commit = offAre you sure that synchronous_commit is actually off on the 9.4 instance?9.3.5:\n\n# /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60 ... 0.035940 END;\n\n\n9.4beta2:... 0.957854 END;Looks like IO.Cheers,Jeff",
"msg_date": "Thu, 18 Sep 2014 07:56:22 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Jeff Janes\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: [email protected]\n> Sent: Thursday, September 18, 2014 4:56:22 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On Thu, Sep 18, 2014 at 2:58 AM, Mkrtchyan, Tigran <[email protected]\n> > wrote:\n> \n> >\n> >\n> > Hi Folk,\n> >\n> > I am trying to investigate some performance issues which we have with\n> > postgres\n> > (a different topic by itself) and tried postgres.9.4beta2, with a hope\n> > that it\n> > perform better.\n> >\n> > Turned out that 9.4 is 2x slower than 9.3.5 on the same hardware.\n> >\n> > Some technical details:\n> >\n> > Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n> > 256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n> > 2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH ( on one 9.3, on an other one\n> > 9.4 )\n> >\n> \n> Why are the versions segregated that way? Are you sure they are configured\n> identically?\n\n\nes, they are configured identically\n> \n> \n> >\n> > postgres tweaks:\n> >\n> >\n> > default_statistics_target = 100\n> > wal_writer_delay = 10s\n> > vacuum_cost_delay = 50\n> > synchronous_commit = off\n> >\n> \n> Are you sure that synchronous_commit is actually off on the 9.4 instance?\n\n\nyes, synchronous_commit is off.\n\n> \n> 9.3.5:\n> >\n> > # /usr/pgsql-9.3/bin/pgbench -r -j 1 -c 1 -T 60\n> >\n> \n> ...\n> \n> \n> > 0.035940 END;\n> >\n> >\n> > 9.4beta2:\n> >\n> ...\n> \n> > 0.957854 END;\n> >\n> \n> Looks like IO.\n\nPostgres internal IO? May be. We get 600MB/s on this SSDs.\n\n\nTigran.\n\n> \n> Cheers,\n> \n> Jeff\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 17:09:20 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote:\n>>> 9.4beta2:\n>>> > >\n>> > ...\n>> > \n>>> > > 0.957854 END;\n>>> > >\n>> > \n>> > Looks like IO.\n> Postgres internal IO? May be. We get 600MB/s on this SSDs.\n\nWhile it's possible that this is a Postgres issue, my first thought is\nthat the two SSDs are not actually identical. The 9.4 one may either\nhave a fault, or may be mostly full and heavily fragmented. Or the Dell\nPCIe card may have an issue.\n\nYou are using \"scale 1\" which is a < 1MB database, and one client and 1\nthread, which is an interesting test I wouldn't necessarily have done\nmyself. I'll throw the same test on one of my machines and see how it does.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 10:54:24 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Josh Berkus\" <[email protected]>\n> To: [email protected]\n> Sent: Thursday, September 18, 2014 7:54:24 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote:\n> >>> 9.4beta2:\n> >>> > >\n> >> > ...\n> >> > \n> >>> > > 0.957854 END;\n> >>> > >\n> >> > \n> >> > Looks like IO.\n> > Postgres internal IO? May be. We get 600MB/s on this SSDs.\n> \n> While it's possible that this is a Postgres issue, my first thought is\n> that the two SSDs are not actually identical. The 9.4 one may either\n> have a fault, or may be mostly full and heavily fragmented. Or the Dell\n> PCIe card may have an issue.\n\n\nWe have tested both SSDs and they have identical IO characteristics and\nas I already mentioned, both databases are fresh, including filesystem.\n\n> \n> You are using \"scale 1\" which is a < 1MB database, and one client and 1\n> thread, which is an interesting test I wouldn't necessarily have done\n> myself. I'll throw the same test on one of my machines and see how it does.\n\nthis scenario corresponds to our use case. We need a high transaction rate\nper for a single client. Currently I can get only ~1500 tps. Unfortunately, \nposgtress does not tell me where the bottleneck is. Is this is defensively\nnot the disk IO.\n\n\nThanks for the help,\nTigran.\n\n> \n> --\n> Josh Berkus\n> PostgreSQL Experts Inc.\n> http://pgexperts.com\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 21:09:24 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\nOn 09/18/2014 03:09 PM, Mkrtchyan, Tigran wrote:\n>\n> ----- Original Message -----\n>> From: \"Josh Berkus\" <[email protected]>\n>> To: [email protected]\n>> Sent: Thursday, September 18, 2014 7:54:24 PM\n>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>\n>> On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote:\n>>>>> 9.4beta2:\n>>>>> ...\n>>>>>\n>>>>>>> 0.957854 END;\n>>>>>>>\n>>>>> Looks like IO.\n>>> Postgres internal IO? May be. We get 600MB/s on this SSDs.\n>> While it's possible that this is a Postgres issue, my first thought is\n>> that the two SSDs are not actually identical. The 9.4 one may either\n>> have a fault, or may be mostly full and heavily fragmented. Or the Dell\n>> PCIe card may have an issue.\n>\n> We have tested both SSDs and they have identical IO characteristics and\n> as I already mentioned, both databases are fresh, including filesystem.\n>\n>> You are using \"scale 1\" which is a < 1MB database, and one client and 1\n>> thread, which is an interesting test I wouldn't necessarily have done\n>> myself. I'll throw the same test on one of my machines and see how it does.\n> this scenario corresponds to our use case. We need a high transaction rate\n> per for a single client. Currently I can get only ~1500 tps. Unfortunately,\n> posgtress does not tell me where the bottleneck is. Is this is defensively\n> not the disk IO.\n>\n>\n>\n\n\nThis is when you dig out tools like perf, maybe.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 15:32:17 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
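For example, something along these lines; the pid comes from pg_stat_activity, and the flags are plain perf usage, nothing Postgres-specific:

    -- find the backend serving the pgbench connection
    SELECT pid, query FROM pg_stat_activity;

    # then, as root on the server, profile it for 30s with call graphs
    perf record -g -p <backend_pid> -- sleep 30
    perf report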
{
"msg_contents": "On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n<[email protected]> wrote:\n>\n> 9.3.5:\n> 0.035940 END;\n>\n>\n> 9.4beta2:\n> 0.957854 END;\n\n\ntime being spent on 'END' is definitely suggesting i/o related issues.\nThis is making me very skeptical that postgres is the source of the\nproblem. I also thing synchronous_commit is not set properly on the\nnew instance (or possibly there is a bug or some such). Can you\nverify via:\n\nselect * from pg_settings where name = 'synchronous_commit';\n\non both servers?\n\nWhat is iowait? For pci-e SSD, these drives don't seem very fast...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 15:32:20 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
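For the iowait question, watching the box while pgbench runs is usually enough (assuming the sysstat tools are installed):

    iostat -x 1     # per-device utilization and await
    vmstat 1        # the 'wa' column is iowait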
{
"msg_contents": "On 19/09/14 08:32, Merlin Moncure wrote:\n> On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> <[email protected]> wrote:\n>>\n>> 9.3.5:\n>> 0.035940 END;\n>>\n>>\n>> 9.4beta2:\n>> 0.957854 END;\n>\n>\n> time being spent on 'END' is definitely suggesting i/o related issues.\n> This is making me very skeptical that postgres is the source of the\n> problem. I also thing synchronous_commit is not set properly on the\n> new instance (or possibly there is a bug or some such). Can you\n> verify via:\n>\n> select * from pg_settings where name = 'synchronous_commit';\n>\n> on both servers?\n>\n\nYes, does look suspicious. It *could* be that the 9.4 case is getting \nunlucky and checkpointing just before the end of the 60s run, and 9.3 \nisn't.\n\n> What is iowait? For pci-e SSD, these drives don't seem very fast...\n>\n>\n>\n\nThese look like rebranded Micron P320's and should be extremely \nfast...However I note that my Crucial/Micron M550's are very fast for \nmost writes *but* are much slower for sync writes (and fsync) that \nhappen at commit...\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 08:56:36 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Merlin Moncure\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Thursday, September 18, 2014 10:32:20 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> <[email protected]> wrote:\n> >\n> > 9.3.5:\n> > 0.035940 END;\n> >\n> >\n> > 9.4beta2:\n> > 0.957854 END;\n> \n> \n> time being spent on 'END' is definitely suggesting i/o related issues.\n> This is making me very skeptical that postgres is the source of the\n> problem. I also thing synchronous_commit is not set properly on the\n> new instance (or possibly there is a bug or some such). Can you\n> verify via:\n> \n> select * from pg_settings where name = 'synchronous_commit';\n> \n> on both servers?\n\n\nhere you are:\n\n9.4beta2\npostgres=# select version();\n version \n-----------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.4beta2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n(1 row)\n\npostgres=# select * from pg_settings where name = 'synchronous_commit';\n name | setting | unit | category | short_desc | extra_desc | context | vartype | \nsource | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline \n--------------------+---------+------+----------------------------+-------------------------------------------------------+------------+---------+---------+-------\n-------------+---------+---------+-----------------------------+----------+-----------+-----------------------------------------+------------\n synchronous_commit | off | | Write-Ahead Log / Settings | Sets the current transaction's synchronization level. | | user | enum | config\nuration file | | | {local,remote_write,on,off} | on | off | /var/lib/pgsql/9.4/data/postgresql.conf | 622\n(1 row)\n\n\n9.3.5\npostgres=# select version();\n version \n--------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4), 64-bit\n(1 row)\n\npostgres=# select * from pg_settings where name = 'synchronous_commit';\n name | setting | unit | category | short_desc | extra_desc | context | vartype | \nsource | min_val | max_val | enumvals | boot_val | reset_val | sourcefile | sourceline \n--------------------+---------+------+----------------------------+-------------------------------------------------------+------------+---------+---------+-------\n-------------+---------+---------+-----------------------------+----------+-----------+-----------------------------------------+------------\n synchronous_commit | off | | Write-Ahead Log / Settings | Sets the current transaction's synchronization level. | | user | enum | config\nuration file | | | {local,remote_write,on,off} | on | off | /var/lib/pgsql/9.3/data/postgresql.conf | 166\n(1 row)\n\n\n\n\n> \n> What is iowait? For pci-e SSD, these drives don't seem very fast...\n\niostat, top and pg_top never show iowait greater than 0.7%\n\nTigran.\n\n> \n> merlin\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 23:01:34 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Thursday, September 18, 2014 10:56:36 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 19/09/14 08:32, Merlin Moncure wrote:\n> > On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> > <[email protected]> wrote:\n> >>\n> >> 9.3.5:\n> >> 0.035940 END;\n> >>\n> >>\n> >> 9.4beta2:\n> >> 0.957854 END;\n> >\n> >\n> > time being spent on 'END' is definitely suggesting i/o related issues.\n> > This is making me very skeptical that postgres is the source of the\n> > problem. I also thing synchronous_commit is not set properly on the\n> > new instance (or possibly there is a bug or some such). Can you\n> > verify via:\n> >\n> > select * from pg_settings where name = 'synchronous_commit';\n> >\n> > on both servers?\n> >\n> \n> Yes, does look suspicious. It *could* be that the 9.4 case is getting\n> unlucky and checkpointing just before the end of the 60s run, and 9.3\n> isn't.\n\n10 minutes run had the same results.\n\nIs there some kind of statistics which can tell there time is spend?\nOr the only way is to run on solaris with dtrace? For me it's more important\nto find why I get only 1500tps with 9.3. The test with 9.4 was just a hope for\na magic code change that will give me a better performance.\n\nTigran.\n\n\n> \n> > What is iowait? For pci-e SSD, these drives don't seem very fast...\n> >\n> >\n> >\n> \n> These look like rebranded Micron P320's and should be extremely\n> fast...However I note that my Crucial/Micron M550's are very fast for\n> most writes *but* are much slower for sync writes (and fsync) that\n> happen at commit...\n> \n> Cheers\n> \n> Mark\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 23:10:54 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On Thu, Sep 18, 2014 at 11:10 PM, Mkrtchyan, Tigran\n<[email protected]> wrote:\n>\n>\n> ----- Original Message -----\n>> From: \"Mark Kirkwood\" <[email protected]>\n>> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\" <[email protected]>\n>> Cc: \"postgres performance list\" <[email protected]>\n>> Sent: Thursday, September 18, 2014 10:56:36 PM\n>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>\n>> On 19/09/14 08:32, Merlin Moncure wrote:\n>> > On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n>> > <[email protected]> wrote:\n>> >>\n>> >> 9.3.5:\n>> >> 0.035940 END;\n>> >>\n>> >>\n>> >> 9.4beta2:\n>> >> 0.957854 END;\n>> >\n>\n> 10 minutes run had the same results.\n>\n> Is there some kind of statistics which can tell there time is spend?\n> Or the only way is to run on solaris with dtrace? For me it's more important\n> to find why I get only 1500tps with 9.3. The test with 9.4 was just a hope for\n> a magic code change that will give me a better performance.\n\nCan you test 9.3 on the 9.4 computer?\n\nRegards\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 23:28:41 +0200",
"msg_from": "didier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On Thu, Sep 18, 2014 at 2:10 PM, Mkrtchyan, Tigran <[email protected]\n> wrote:\n\n>\n>\n> ----- Original Message -----\n> > From: \"Mark Kirkwood\" <[email protected]>\n> > To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\" <\n> [email protected]>\n> > Cc: \"postgres performance list\" <[email protected]>\n> > Sent: Thursday, September 18, 2014 10:56:36 PM\n> > Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> >\n> > On 19/09/14 08:32, Merlin Moncure wrote:\n> > > On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> > > <[email protected]> wrote:\n> > >>\n> > >> 9.3.5:\n> > >> 0.035940 END;\n> > >>\n> > >>\n> > >> 9.4beta2:\n> > >> 0.957854 END;\n> > >\n> > >\n> > > time being spent on 'END' is definitely suggesting i/o related issues.\n> > > This is making me very skeptical that postgres is the source of the\n> > > problem. I also thing synchronous_commit is not set properly on the\n> > > new instance (or possibly there is a bug or some such). Can you\n> > > verify via:\n> > >\n> > > select * from pg_settings where name = 'synchronous_commit';\n> > >\n> > > on both servers?\n> > >\n> >\n> > Yes, does look suspicious. It *could* be that the 9.4 case is getting\n> > unlucky and checkpointing just before the end of the 60s run, and 9.3\n> > isn't.\n>\n> 10 minutes run had the same results.\n>\n> Is there some kind of statistics which can tell there time is spend?\n>\n\nProbably the first thing I'd so is strace -p the backend process with -T\nand -ttt while pgbench is running and watch a few seconds go by to see if\nanything stands out. Then strace -c and see what that shows.\n\npg_test_fsync with the file put in each of the pg_xlog directory.\n (Actually, that is probably the first thing to do.)\n\nrun pgbench with -l and see if the throughput is smooth or spiky.\n\nWhat does sar, top or vmstat say?\n\nRun with track_io_timing = on and with pg_stat_statements and see what they\nshow. Also turn on log_checkpoints.\n\nCheers,\n\nJeff\n\nOn Thu, Sep 18, 2014 at 2:10 PM, Mkrtchyan, Tigran <[email protected]> wrote:\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Thursday, September 18, 2014 10:56:36 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>\n> On 19/09/14 08:32, Merlin Moncure wrote:\n> > On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> > <[email protected]> wrote:\n> >>\n> >> 9.3.5:\n> >> 0.035940 END;\n> >>\n> >>\n> >> 9.4beta2:\n> >> 0.957854 END;\n> >\n> >\n> > time being spent on 'END' is definitely suggesting i/o related issues.\n> > This is making me very skeptical that postgres is the source of the\n> > problem. I also thing synchronous_commit is not set properly on the\n> > new instance (or possibly there is a bug or some such). Can you\n> > verify via:\n> >\n> > select * from pg_settings where name = 'synchronous_commit';\n> >\n> > on both servers?\n> >\n>\n> Yes, does look suspicious. It *could* be that the 9.4 case is getting\n> unlucky and checkpointing just before the end of the 60s run, and 9.3\n> isn't.\n\n10 minutes run had the same results.\n\nIs there some kind of statistics which can tell there time is spend?Probably the first thing I'd so is strace -p the backend process with -T and -ttt while pgbench is running and watch a few seconds go by to see if anything stands out. 
Then strace -c and see what that shows.pg_test_fsync with the file put in each of the pg_xlog directory. (Actually, that is probably the first thing to do.)run pgbench with -l and see if the throughput is smooth or spiky.What does sar, top or vmstat say?Run with track_io_timing = on and with pg_stat_statements and see what they show. Also turn on log_checkpoints.Cheers,Jeff",
"msg_date": "Thu, 18 Sep 2014 14:34:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
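Of these, pg_test_fsync is probably the quickest differential test between the two cards; a possible invocation, with the test file placed on each pg_xlog filesystem as Jeff suggests (the paths are assumptions):

    /usr/pgsql-9.3/bin/pg_test_fsync -f /path/to/9.3/pg_xlog/fsync.out
    /usr/pgsql-9.4/bin/pg_test_fsync -f /path/to/9.4/pg_xlog/fsync.out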
{
"msg_contents": "On 19/09/14 09:10, Mkrtchyan, Tigran wrote:\n>\n>\n> ----- Original Message -----\n>> From: \"Mark Kirkwood\" <[email protected]>\n>> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\" <[email protected]>\n>> Cc: \"postgres performance list\" <[email protected]>\n>> Sent: Thursday, September 18, 2014 10:56:36 PM\n>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>\n>> On 19/09/14 08:32, Merlin Moncure wrote:\n>>> On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n>>> <[email protected]> wrote:\n>>>>\n>>>> 9.3.5:\n>>>> 0.035940 END;\n>>>>\n>>>>\n>>>> 9.4beta2:\n>>>> 0.957854 END;\n>>>\n>>>\n>>> time being spent on 'END' is definitely suggesting i/o related issues.\n>>> This is making me very skeptical that postgres is the source of the\n>>> problem. I also thing synchronous_commit is not set properly on the\n>>> new instance (or possibly there is a bug or some such). Can you\n>>> verify via:\n>>>\n>>> select * from pg_settings where name = 'synchronous_commit';\n>>>\n>>> on both servers?\n>>>\n>>\n>> Yes, does look suspicious. It *could* be that the 9.4 case is getting\n>> unlucky and checkpointing just before the end of the 60s run, and 9.3\n>> isn't.\n>\n> 10 minutes run had the same results.\n>\n> Is there some kind of statistics which can tell there time is spend?\n> Or the only way is to run on solaris with dtrace? For me it's more important\n> to find why I get only 1500tps with 9.3. The test with 9.4 was just a hope for\n> a magic code change that will give me a better performance.\n>\n>\n\nInteresting. With respect to dtrace, you can use systemtap on Linux to \nachieve similar things.\n\nHowever before getting too carried away with that - we already *know* \nthat 9.4 is spending longer in END (i.e commit) than 9.3 is. I'd \nrecommend you see what wal_sync_method is set to on both systems. If it \nis the same, then my suspicion is that one of the SSD's needs to be \ntrimmed [1]. You can do this by running:\n\n$ fstrim /mountpoint\n\nAlso - are you using the same filesystem and mount options on each SSD?\n\nCheers\n\nMark\n\n[1] if fact, for the paranoid - I usually secure erase any SSD before \nperformance testing, and then check the SMART counters too...\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 10:16:13 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
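Concretely, along the lines Mark suggests (the ports and mountpoints are assumptions for a two-cluster setup):

    psql -p 5432 -c 'SHOW wal_sync_method;'   # 9.3 cluster
    psql -p 5433 -c 'SHOW wal_sync_method;'   # 9.4 cluster

    fstrim -v /mnt/pg93
    fstrim -v /mnt/pg94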
{
"msg_contents": "On 19/09/14 10:16, Mark Kirkwood wrote:\n> On 19/09/14 09:10, Mkrtchyan, Tigran wrote:\n>>\n>>\n>> ----- Original Message -----\n>>> From: \"Mark Kirkwood\" <[email protected]>\n>>> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\"\n>>> <[email protected]>\n>>> Cc: \"postgres performance list\" <[email protected]>\n>>> Sent: Thursday, September 18, 2014 10:56:36 PM\n>>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>>\n>>> On 19/09/14 08:32, Merlin Moncure wrote:\n>>>> On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n>>>> <[email protected]> wrote:\n>>>>>\n>>>>> 9.3.5:\n>>>>> 0.035940 END;\n>>>>>\n>>>>>\n>>>>> 9.4beta2:\n>>>>> 0.957854 END;\n>>>>\n>>>>\n>>>> time being spent on 'END' is definitely suggesting i/o related issues.\n>>>> This is making me very skeptical that postgres is the source of the\n>>>> problem. I also thing synchronous_commit is not set properly on the\n>>>> new instance (or possibly there is a bug or some such). Can you\n>>>> verify via:\n>>>>\n>>>> select * from pg_settings where name = 'synchronous_commit';\n>>>>\n>>>> on both servers?\n>>>>\n>>>\n>>> Yes, does look suspicious. It *could* be that the 9.4 case is getting\n>>> unlucky and checkpointing just before the end of the 60s run, and 9.3\n>>> isn't.\n>>\n>> 10 minutes run had the same results.\n>>\n>> Is there some kind of statistics which can tell there time is spend?\n>> Or the only way is to run on solaris with dtrace? For me it's more\n>> important\n>> to find why I get only 1500tps with 9.3. The test with 9.4 was just a\n>> hope for\n>> a magic code change that will give me a better performance.\n>>\n>>\n>\n> Interesting. With respect to dtrace, you can use systemtap on Linux to\n> achieve similar things.\n>\n> However before getting too carried away with that - we already *know*\n> that 9.4 is spending longer in END (i.e commit) than 9.3 is. I'd\n> recommend you see what wal_sync_method is set to on both systems. If it\n> is the same, then my suspicion is that one of the SSD's needs to be\n> trimmed [1]. 
You can do this by running:\n>\n> $ fstrim /mountpoint\n>\n> Also - are you using the same filesystem and mount options on each SSD?\n>\n> Cheers\n>\n> Mark\n>\n> [1] if fact, for the paranoid - I usually secure erase any SSD before\n> performance testing, and then check the SMART counters too...\n>\n\nFurther to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3 \none for 9.4), see below for results.\n\nI'm running xfs on them with trim/discard enabled:\n\n$ mount|grep pg\n/dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n/dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n\n\nI'm *not* seeing any significant difference between 9.3 and 9.4, and the \nnumbers are both about 2x your best number, which is food for thought \n(those P320's should toast my M550 for write performance...).\n\n\n9.3:\n\n$ pgbench -r -j 1 -c 1 -T 60 bench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 194615\ntps = 3243.567115 (including connections establishing)\ntps = 3243.771688 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.000798\t\\set nbranches 1 * :scale\n\t0.000302\t\\set ntellers 10 * :scale\n\t0.000276\t\\set naccounts 100000 * :scale\n\t0.000330\t\\setrandom aid 1 :naccounts\n\t0.000265\t\\setrandom bid 1 :nbranches\n\t0.000278\t\\setrandom tid 1 :ntellers\n\t0.000298\t\\setrandom delta -5000 5000\n\t0.012818\tBEGIN;\n\t0.065403\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE \naid = :aid;\n\t0.048516\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.058343\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE \ntid = :tid;\n\t0.057763\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE \nbid = :bid;\n\t0.043293\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) \nVALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.017087\tEND;\n\n\n9.4:\n\n$ pgbench -r -j 1 -c 1 -T 60 bench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 60 s\nnumber of transactions actually processed: 194130\nlatency average: 0.309 ms\ntps = 3235.488190 (including connections establishing)\ntps = 3235.560235 (excluding connections establishing)\nstatement latencies in milliseconds:\n\t0.000460\t\\set nbranches 1 * :scale\n\t0.000231\t\\set ntellers 10 * :scale\n\t0.000224\t\\set naccounts 100000 * :scale\n\t0.000258\t\\setrandom aid 1 :naccounts\n\t0.000252\t\\setrandom bid 1 :nbranches\n\t0.000266\t\\setrandom tid 1 :ntellers\n\t0.000272\t\\setrandom delta -5000 5000\n\t0.011724\tBEGIN;\n\t0.083750\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE \naid = :aid;\n\t0.045553\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\t0.054412\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE \ntid = :tid;\n\t0.053371\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE \nbid = :bid;\n\t0.041501\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime) \nVALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n\t0.015273\tEND;\n\nconfiguration:\n\nlogging_collector = 'on'\nwal_writer_delay = '10s'\nvacuum_cost_delay = 50\nsynchronous_commit = 'off'\nwal_buffers = '16MB'\ncheckpoint_segments = 64\nshared_buffers = '2GB'\nmax_connections = 100\nrandom_page_cost = 1.5\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make 
changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 10:49:05 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\" <[email protected]>\n> Sent: Friday, September 19, 2014 12:49:05 AM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 19/09/14 10:16, Mark Kirkwood wrote:\n> > On 19/09/14 09:10, Mkrtchyan, Tigran wrote:\n> >>\n> >>\n> >> ----- Original Message -----\n> >>> From: \"Mark Kirkwood\" <[email protected]>\n> >>> To: \"Merlin Moncure\" <[email protected]>, \"Tigran Mkrtchyan\"\n> >>> <[email protected]>\n> >>> Cc: \"postgres performance list\" <[email protected]>\n> >>> Sent: Thursday, September 18, 2014 10:56:36 PM\n> >>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> >>>\n> >>> On 19/09/14 08:32, Merlin Moncure wrote:\n> >>>> On Thu, Sep 18, 2014 at 4:58 AM, Mkrtchyan, Tigran\n> >>>> <[email protected]> wrote:\n> >>>>>\n> >>>>> 9.3.5:\n> >>>>> 0.035940 END;\n> >>>>>\n> >>>>>\n> >>>>> 9.4beta2:\n> >>>>> 0.957854 END;\n> >>>>\n> >>>>\n> >>>> time being spent on 'END' is definitely suggesting i/o related issues.\n> >>>> This is making me very skeptical that postgres is the source of the\n> >>>> problem. I also thing synchronous_commit is not set properly on the\n> >>>> new instance (or possibly there is a bug or some such). Can you\n> >>>> verify via:\n> >>>>\n> >>>> select * from pg_settings where name = 'synchronous_commit';\n> >>>>\n> >>>> on both servers?\n> >>>>\n> >>>\n> >>> Yes, does look suspicious. It *could* be that the 9.4 case is getting\n> >>> unlucky and checkpointing just before the end of the 60s run, and 9.3\n> >>> isn't.\n> >>\n> >> 10 minutes run had the same results.\n> >>\n> >> Is there some kind of statistics which can tell there time is spend?\n> >> Or the only way is to run on solaris with dtrace? For me it's more\n> >> important\n> >> to find why I get only 1500tps with 9.3. The test with 9.4 was just a\n> >> hope for\n> >> a magic code change that will give me a better performance.\n> >>\n> >>\n> >\n> > Interesting. With respect to dtrace, you can use systemtap on Linux to\n> > achieve similar things.\n> >\n> > However before getting too carried away with that - we already *know*\n> > that 9.4 is spending longer in END (i.e commit) than 9.3 is. I'd\n> > recommend you see what wal_sync_method is set to on both systems. If it\n> > is the same, then my suspicion is that one of the SSD's needs to be\n> > trimmed [1]. You can do this by running:\n> >\n> > $ fstrim /mountpoint\n> >\n> > Also - are you using the same filesystem and mount options on each SSD?\n> >\n> > Cheers\n> >\n> > Mark\n> >\n> > [1] if fact, for the paranoid - I usually secure erase any SSD before\n> > performance testing, and then check the SMART counters too...\n> >\n> \n> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n> one for 9.4), see below for results.\n> \n> I'm running xfs on them with trim/discard enabled:\n> \n> $ mount|grep pg\n> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n> \n> \n> I'm *not* seeing any significant difference between 9.3 and 9.4, and the\n> numbers are both about 2x your best number, which is food for thought\n> (those P320's should toast my M550 for write performance...).\n\ncool! any details on OS and other options? 
I still get the same numbers\nas before.\n\nTigran.\n\n> \n> \n> 9.3:\n> \n> $ pgbench -r -j 1 -c 1 -T 60 bench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 194615\n> tps = 3243.567115 (including connections establishing)\n> tps = 3243.771688 (excluding connections establishing)\n> statement latencies in milliseconds:\n> \t0.000798\t\\set nbranches 1 * :scale\n> \t0.000302\t\\set ntellers 10 * :scale\n> \t0.000276\t\\set naccounts 100000 * :scale\n> \t0.000330\t\\setrandom aid 1 :naccounts\n> \t0.000265\t\\setrandom bid 1 :nbranches\n> \t0.000278\t\\setrandom tid 1 :ntellers\n> \t0.000298\t\\setrandom delta -5000 5000\n> \t0.012818\tBEGIN;\n> \t0.065403\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE\n> aid = :aid;\n> \t0.048516\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> \t0.058343\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE\n> tid = :tid;\n> \t0.057763\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE\n> bid = :bid;\n> \t0.043293\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\n> VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> \t0.017087\tEND;\n> \n> \n> 9.4:\n> \n> $ pgbench -r -j 1 -c 1 -T 60 bench\n> starting vacuum...end.\n> transaction type: TPC-B (sort of)\n> scaling factor: 1\n> query mode: simple\n> number of clients: 1\n> number of threads: 1\n> duration: 60 s\n> number of transactions actually processed: 194130\n> latency average: 0.309 ms\n> tps = 3235.488190 (including connections establishing)\n> tps = 3235.560235 (excluding connections establishing)\n> statement latencies in milliseconds:\n> \t0.000460\t\\set nbranches 1 * :scale\n> \t0.000231\t\\set ntellers 10 * :scale\n> \t0.000224\t\\set naccounts 100000 * :scale\n> \t0.000258\t\\setrandom aid 1 :naccounts\n> \t0.000252\t\\setrandom bid 1 :nbranches\n> \t0.000266\t\\setrandom tid 1 :ntellers\n> \t0.000272\t\\setrandom delta -5000 5000\n> \t0.011724\tBEGIN;\n> \t0.083750\tUPDATE pgbench_accounts SET abalance = abalance + :delta WHERE\n> aid = :aid;\n> \t0.045553\tSELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n> \t0.054412\tUPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE\n> tid = :tid;\n> \t0.053371\tUPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE\n> bid = :bid;\n> \t0.041501\tINSERT INTO pgbench_history (tid, bid, aid, delta, mtime)\n> VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n> \t0.015273\tEND;\n> \n> configuration:\n> \n> logging_collector = 'on'\n> wal_writer_delay = '10s'\n> vacuum_cost_delay = 50\n> synchronous_commit = 'off'\n> wal_buffers = '16MB'\n> checkpoint_segments = 64\n> shared_buffers = '2GB'\n> max_connections = 100\n> random_page_cost = 1.5\n> \n> \n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 07:53:38 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n>\n>\n> ----- Original Message -----\n>> From: \"Mark Kirkwood\" <[email protected]>\n\n>> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n>> one for 9.4), see below for results.\n>>\n>> I'm running xfs on them with trim/discard enabled:\n>>\n>> $ mount|grep pg\n>> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n>> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n>>\n>>\n>> I'm *not* seeing any significant difference between 9.3 and 9.4, and the\n>> numbers are both about 2x your best number, which is food for thought\n>> (those P320's should toast my M550 for write performance...).\n>\n> cool! any details on OS and other options? I still get the same numbers\n> as before.\n>\n\nSorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my workstation).\n\nI saw the suggestion that Didier made to run 9.3 on the SSD that you \nwere using for 9.4, and see if it suddenly goes slow - then we'd know \nit's something about the disk (or filesystem/mount options). Can you \ntest this?\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 18:26:27 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\" <[email protected]>\n> Sent: Friday, September 19, 2014 8:26:27 AM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n> >\n> >\n> > ----- Original Message -----\n> >> From: \"Mark Kirkwood\" <[email protected]>\n> \n> >> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n> >> one for 9.4), see below for results.\n> >>\n> >> I'm running xfs on them with trim/discard enabled:\n> >>\n> >> $ mount|grep pg\n> >> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n> >> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n> >>\n> >>\n> >> I'm *not* seeing any significant difference between 9.3 and 9.4, and the\n> >> numbers are both about 2x your best number, which is food for thought\n> >> (those P320's should toast my M550 for write performance...).\n> >\n> > cool! any details on OS and other options? I still get the same numbers\n> > as before.\n> >\n> \n> Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my workstation).\n> \n> I saw the suggestion that Didier made to run 9.3 on the SSD that you\n> were using for 9.4, and see if it suddenly goes slow - then we'd know\n> it's something about the disk (or filesystem/mount options). Can you\n> test this?\n\n\nswapping the disks did not change the results.\nNevertheless, I run the same test on my fedora20 laptop\n8GB RAM, i7 2.2GHz and got 2600tps! I am totally\nconfused now! Is it kernel version? libc?\n\n\nTigran.\n> \n> Cheers\n> \n> Mark\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 09:24:01 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 19/09/14 19:24, Mkrtchyan, Tigran wrote:\n>\n>\n> ----- Original Message -----\n>> From: \"Mark Kirkwood\" <[email protected]>\n>> To: \"Tigran Mkrtchyan\" <[email protected]>\n>> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\" <[email protected]>\n>> Sent: Friday, September 19, 2014 8:26:27 AM\n>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>\n>> On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n>>>\n>>>\n>>> ----- Original Message -----\n>>>> From: \"Mark Kirkwood\" <[email protected]>\n>>\n>>>> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n>>>> one for 9.4), see below for results.\n>>>>\n>>>> I'm running xfs on them with trim/discard enabled:\n>>>>\n>>>> $ mount|grep pg\n>>>> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n>>>> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n>>>>\n>>>>\n>>>> I'm *not* seeing any significant difference between 9.3 and 9.4, and the\n>>>> numbers are both about 2x your best number, which is food for thought\n>>>> (those P320's should toast my M550 for write performance...).\n>>>\n>>> cool! any details on OS and other options? I still get the same numbers\n>>> as before.\n>>>\n>>\n>> Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my workstation).\n>>\n>> I saw the suggestion that Didier made to run 9.3 on the SSD that you\n>> were using for 9.4, and see if it suddenly goes slow - then we'd know\n>> it's something about the disk (or filesystem/mount options). Can you\n>> test this?\n>\n>\n> swapping the disks did not change the results.\n> Nevertheless, I run the same test on my fedora20 laptop\n> 8GB RAM, i7 2.2GHz and got 2600tps! I am totally\n> confused now! Is it kernel version? libc?\n>\n>\n\nWell, that's progress anyway!\n\nI guess you could try fedora 20 on the Dell server and see if that makes \nany difference. But yes, confusing. Having been dealing with a high end \nDell server myself recently (R920), some re-reading of any manuals you \ncan find might be useful, we were continually surprised how easy it was \nto have everything configured *slow*... and the detail in the \nmanuals...could be better!\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 19:53:27 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 19/09/14 19:24, Mkrtchyan, Tigran wrote:\n>\n>\n> ----- Original Message -----\n>> From: \"Mark Kirkwood\" <[email protected]>\n>> To: \"Tigran Mkrtchyan\" <[email protected]>\n>> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\" <[email protected]>\n>> Sent: Friday, September 19, 2014 8:26:27 AM\n>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>\n>> On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n>>>\n>>>\n>>> ----- Original Message -----\n>>>> From: \"Mark Kirkwood\" <[email protected]>\n>>\n>>>> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n>>>> one for 9.4), see below for results.\n>>>>\n>>>> I'm running xfs on them with trim/discard enabled:\n>>>>\n>>>> $ mount|grep pg\n>>>> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n>>>> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n>>>>\n>>>>\n>>>> I'm *not* seeing any significant difference between 9.3 and 9.4, and the\n>>>> numbers are both about 2x your best number, which is food for thought\n>>>> (those P320's should toast my M550 for write performance...).\n>>>\n>>> cool! any details on OS and other options? I still get the same numbers\n>>> as before.\n>>>\n>>\n>> Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my workstation).\n>>\n>> I saw the suggestion that Didier made to run 9.3 on the SSD that you\n>> were using for 9.4, and see if it suddenly goes slow - then we'd know\n>> it's something about the disk (or filesystem/mount options). Can you\n>> test this?\n>\n>\n> swapping the disks did not change the results.\n>\n>\n\nDo you mean that 9.3 was still faster using the disk that 9.4 had used? \nIf so that strongly suggests that there is something you have configured \ndifferently in the 9.4 installation [1]. Not wanting to sound mean - but \nit is really easy to accidentally connect to the wrong instance when \nthere are two on the same box (ahem, yes , done it myself). So perhaps \nanother look at the 9.4 vs 9.3 setup (or even posti the config files \npostgresql.conf + postgresql.auto.conf for 9.4 here).\n\nRegards\n\nMark\n\n[1] In the light of my previous test of (essentially) your config + \nnumerous other folk have been benchmarking 9.4.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 11:58:48 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On Fri, Sep 19, 2014 at 6:58 PM, Mark Kirkwood\n<[email protected]> wrote:\n> On 19/09/14 19:24, Mkrtchyan, Tigran wrote:\n>>\n>>\n>>\n>> ----- Original Message -----\n>>>\n>>> From: \"Mark Kirkwood\" <[email protected]>\n>>> To: \"Tigran Mkrtchyan\" <[email protected]>\n>>> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\"\n>>> <[email protected]>\n>>> Sent: Friday, September 19, 2014 8:26:27 AM\n>>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>>>\n>>> On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n>>>>\n>>>>\n>>>>\n>>>> ----- Original Message -----\n>>>>>\n>>>>> From: \"Mark Kirkwood\" <[email protected]>\n>>>\n>>>\n>>>>> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n>>>>> one for 9.4), see below for results.\n>>>>>\n>>>>> I'm running xfs on them with trim/discard enabled:\n>>>>>\n>>>>> $ mount|grep pg\n>>>>> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n>>>>> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n>>>>>\n>>>>>\n>>>>> I'm *not* seeing any significant difference between 9.3 and 9.4, and\n>>>>> the\n>>>>> numbers are both about 2x your best number, which is food for thought\n>>>>> (those P320's should toast my M550 for write performance...).\n>>>>\n>>>>\n>>>> cool! any details on OS and other options? I still get the same numbers\n>>>> as before.\n>>>>\n>>>\n>>> Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my\n>>> workstation).\n>>>\n>>> I saw the suggestion that Didier made to run 9.3 on the SSD that you\n>>> were using for 9.4, and see if it suddenly goes slow - then we'd know\n>>> it's something about the disk (or filesystem/mount options). Can you\n>>> test this?\n>>\n>>\n>>\n>> swapping the disks did not change the results.\n>>\n>>\n>\n> Do you mean that 9.3 was still faster using the disk that 9.4 had used? If\n> so that strongly suggests that there is something you have configured\n> differently in the 9.4 installation [1]. Not wanting to sound mean - but it\n> is really easy to accidentally connect to the wrong instance when there are\n> two on the same box (ahem, yes , done it myself). So perhaps another look at\n> the 9.4 vs 9.3 setup (or even posti the config files postgresql.conf +\n> postgresql.auto.conf for 9.4 here).\n\nHuh. Where did the 9.4 build come from? I wonder if there are some\ndebugging options set. Can you check 9.4 pg_settings for value\nof\"debug_assertions\"? If it's set true, you might want to consider\nhand compiling postgres until 9.4 is released...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Sep 2014 08:37:50 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "Hi Merlin,\n\nyou are right, in 9.4 the debug_assertions are on:\n\n# /etc/init.d/postgresql-9.4 start\nStarting postgresql-9.4 service: [ OK ]\n# psql -U postgres \npsql (9.4beta2)\nType \"help\" for help.\n\npostgres=# select name,setting from pg_settings where name='debug_assertions';\n name | setting \n------------------+---------\n debug_assertions | on\n(1 row)\n\npostgres=# \\q\n# /etc/init.d/postgresql-9.4 stop\nStopping postgresql-9.4 service: [ OK ]\n# /etc/init.d/postgresql-9.3 start\nStarting postgresql-9.3 service: [ OK ]\n# psql -U postgres \npsql (9.4beta2, server 9.3.5)\nType \"help\" for help.\n\npostgres=# select name,setting from pg_settings where name='debug_assertions';\n name | setting \n------------------+---------\n debug_assertions | off\n(1 row)\n\npostgres=# \\q\n# \n\n\nThe rpms are coming from Postgres official repo:\n\nhttp://yum.postgresql.org/9.4/redhat/rhel-$releasever-$basearch\n\n\nTigran.\n\n\n----- Original Message -----\n> From: \"Merlin Moncure\" <[email protected]>\n> To: \"Mark Kirkwood\" <[email protected]>\n> Cc: \"Tigran Mkrtchyan\" <[email protected]>, \"postgres performance list\" <[email protected]>\n> Sent: Monday, September 22, 2014 3:37:50 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On Fri, Sep 19, 2014 at 6:58 PM, Mark Kirkwood\n> <[email protected]> wrote:\n> > On 19/09/14 19:24, Mkrtchyan, Tigran wrote:\n> >>\n> >>\n> >>\n> >> ----- Original Message -----\n> >>>\n> >>> From: \"Mark Kirkwood\" <[email protected]>\n> >>> To: \"Tigran Mkrtchyan\" <[email protected]>\n> >>> Cc: \"Merlin Moncure\" <[email protected]>, \"postgres performance list\"\n> >>> <[email protected]>\n> >>> Sent: Friday, September 19, 2014 8:26:27 AM\n> >>> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> >>>\n> >>> On 19/09/14 17:53, Mkrtchyan, Tigran wrote:\n> >>>>\n> >>>>\n> >>>>\n> >>>> ----- Original Message -----\n> >>>>>\n> >>>>> From: \"Mark Kirkwood\" <[email protected]>\n> >>>\n> >>>\n> >>>>> Further to the confusion, here's my 9.3 vs 9.4 on two M550 (one for 9.3\n> >>>>> one for 9.4), see below for results.\n> >>>>>\n> >>>>> I'm running xfs on them with trim/discard enabled:\n> >>>>>\n> >>>>> $ mount|grep pg\n> >>>>> /dev/sdd4 on /mnt/pg94 type xfs (rw,discard)\n> >>>>> /dev/sdc4 on /mnt/pg93 type xfs (rw,discard)\n> >>>>>\n> >>>>>\n> >>>>> I'm *not* seeing any significant difference between 9.3 and 9.4, and\n> >>>>> the\n> >>>>> numbers are both about 2x your best number, which is food for thought\n> >>>>> (those P320's should toast my M550 for write performance...).\n> >>>>\n> >>>>\n> >>>> cool! any details on OS and other options? I still get the same numbers\n> >>>> as before.\n> >>>>\n> >>>\n> >>> Sorry, Ubuntu 14.04 on a single socket i7 3.4 Ghz, 16G (i.e my\n> >>> workstation).\n> >>>\n> >>> I saw the suggestion that Didier made to run 9.3 on the SSD that you\n> >>> were using for 9.4, and see if it suddenly goes slow - then we'd know\n> >>> it's something about the disk (or filesystem/mount options). Can you\n> >>> test this?\n> >>\n> >>\n> >>\n> >> swapping the disks did not change the results.\n> >>\n> >>\n> >\n> > Do you mean that 9.3 was still faster using the disk that 9.4 had used? If\n> > so that strongly suggests that there is something you have configured\n> > differently in the 9.4 installation [1]. Not wanting to sound mean - but it\n> > is really easy to accidentally connect to the wrong instance when there are\n> > two on the same box (ahem, yes , done it myself). 
So perhaps another look\n> > at\n> > the 9.4 vs 9.3 setup (or even posti the config files postgresql.conf +\n> > postgresql.auto.conf for 9.4 here).\n> \n> Huh. Where did the 9.4 build come from? I wonder if there are some\n> debugging options set. Can you check 9.4 pg_settings for value\n> of\"debug_assertions\"? If it's set true, you might want to consider\n> hand compiling postgres until 9.4 is released...\n> \n> merlin\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 Sep 2014 14:58:05 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On Tue, Sep 23, 2014 at 7:58 AM, Mkrtchyan, Tigran\n<[email protected]> wrote:\n> Hi Merlin,\n>\n> you are right, in 9.4 the debug_assertions are on:\n>\n> # /etc/init.d/postgresql-9.4 start\n> Starting postgresql-9.4 service: [ OK ]\n> # psql -U postgres\n> psql (9.4beta2)\n> Type \"help\" for help.\n>\n> postgres=# select name,setting from pg_settings where name='debug_assertions';\n> name | setting\n> ------------------+---------\n> debug_assertions | on\n> (1 row)\n\n\n(plz try to not top-post).\n\nThat's not not really unexpected: 9.4 is still in beta. If you're\njust doing raw performance testing consider building a postgres\ninstance from source (but, instead of compiling into /usr/local/bin,\nI'd keep it all in private user folder for easy removal).\n\nFor example, if I downloaded the source into /home/mmoncure/pgdev/src,\ni'd approximately do:\n\ncd /home/mmoncure/pgdev/src\n./configure --prefix=/home/mmoncure/pgdev\n# if configure gripes about missing readline, go grab the\nlibreadline-dev rpm etc and repeat above\nmake -j4 && make install\nexport PATH=/home/mmoncure/pgdev/bin:$PATH\nexport PGDATA=/home/mmoncure/pgdev/data\n# use C locale. may not be appropriate in your case\ninitdb --no-locale --encoding=UTF8\npg_ctl start\n\nThis should suffice any beta performance testing you need to do. When\n9.4 proper comes out, just stop the database and kill the pgdev folder\n(taking a backup first if you need to preserve stuff).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 Sep 2014 09:21:13 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "Hi Merlin et al.\n\nafter building postgres 9.4 myself from sources I get the same performance as\nwith 9.3. The difference was in the value of debug_assertions setting. \n\nNow the next step. Why my 3 years old laptop gets x1.8 times more tps than my one month old server?\nAnd Mark Kirkwood's desktop gets x2 times more tps as well? Is there some special optimization\nfor i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?\n\n\nThanks,\n Tigran.\n\n----- Original Message -----\n> From: \"Merlin Moncure\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>\n> Cc: \"Mark Kirkwood\" <[email protected]>, \"postgres performance list\" <[email protected]>\n> Sent: Tuesday, September 23, 2014 4:21:13 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On Tue, Sep 23, 2014 at 7:58 AM, Mkrtchyan, Tigran\n> <[email protected]> wrote:\n> > Hi Merlin,\n> >\n> > you are right, in 9.4 the debug_assertions are on:\n> >\n> > # /etc/init.d/postgresql-9.4 start\n> > Starting postgresql-9.4 service: [ OK ]\n> > # psql -U postgres\n> > psql (9.4beta2)\n> > Type \"help\" for help.\n> >\n> > postgres=# select name,setting from pg_settings where\n> > name='debug_assertions';\n> > name | setting\n> > ------------------+---------\n> > debug_assertions | on\n> > (1 row)\n> \n> \n> (plz try to not top-post).\n> \n> That's not not really unexpected: 9.4 is still in beta. If you're\n> just doing raw performance testing consider building a postgres\n> instance from source (but, instead of compiling into /usr/local/bin,\n> I'd keep it all in private user folder for easy removal).\n> \n> For example, if I downloaded the source into /home/mmoncure/pgdev/src,\n> i'd approximately do:\n> \n> cd /home/mmoncure/pgdev/src\n> ./configure --prefix=/home/mmoncure/pgdev\n> # if configure gripes about missing readline, go grab the\n> libreadline-dev rpm etc and repeat above\n> make -j4 && make install\n> export PATH=/home/mmoncure/pgdev/bin:$PATH\n> export PGDATA=/home/mmoncure/pgdev/data\n> # use C locale. may not be appropriate in your case\n> initdb --no-locale --encoding=UTF8\n> pg_ctl start\n> \n> This should suffice any beta performance testing you need to do. When\n> 9.4 proper comes out, just stop the database and kill the pgdev folder\n> (taking a backup first if you need to preserve stuff).\n> \n> merlin\n> \n> \n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Sep 2014 11:23:48 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 24/09/14 21:23, Mkrtchyan, Tigran wrote:\n> Hi Merlin et al.\n>\n> after building postgres 9.4 myself from sources I get the same performance as\n> with 9.3. The difference was in the value of debug_assertions setting.\n>\n> Now the next step. Why my 3 years old laptop gets x1.8 times more tps than my one month old server?\n> And Mark Kirkwood's desktop gets x2 times more tps as well? Is there some special optimization\n> for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?\n>\n>\n\nYes - firstly, nicely done re finding the assertions (my 9.4 beta2 was \nbuilt from src - never thought to mention sorry)!\n\nI'd guess that you are seeing some bios setting re the p320 SSD - it \n*should* be seriously fast...but does not seem to be. You could try \nrunning some pure IO benchmarks to confirm this (e.g fio). Also see if \nthe manual for however it is attached to the system allows for some \noptimized-for-ssd settings that tend to work better (altho these usually \nimply the drive is plugged into an adapter card of some kind - mind you \nyour p320 *does* used a custom connector that does 2.5\" SATA to PCIe \nstyle interconnect so I'd look to debug that first).\n\nCheers\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Sep 2014 22:04:12 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "\nWith pg_test_timing I can see, that overhead is 48 nsec on my server and 32 nsec on the laptop.\nwhat makes this difference and have it any influence on the overall performance?\n\nTigran.\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>, \"Merlin Moncure\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Wednesday, September 24, 2014 12:04:12 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> \n> On 24/09/14 21:23, Mkrtchyan, Tigran wrote:\n> > Hi Merlin et al.\n> >\n> > after building postgres 9.4 myself from sources I get the same performance\n> > as\n> > with 9.3. The difference was in the value of debug_assertions setting.\n> >\n> > Now the next step. Why my 3 years old laptop gets x1.8 times more tps than\n> > my one month old server?\n> > And Mark Kirkwood's desktop gets x2 times more tps as well? Is there some\n> > special optimization\n> > for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?\n> >\n> >\n> \n> Yes - firstly, nicely done re finding the assertions (my 9.4 beta2 was\n> built from src - never thought to mention sorry)!\n> \n> I'd guess that you are seeing some bios setting re the p320 SSD - it\n> *should* be seriously fast...but does not seem to be. You could try\n> running some pure IO benchmarks to confirm this (e.g fio). Also see if\n> the manual for however it is attached to the system allows for some\n> optimized-for-ssd settings that tend to work better (altho these usually\n> imply the drive is plugged into an adapter card of some kind - mind you\n> your p320 *does* used a custom connector that does 2.5\" SATA to PCIe\n> style interconnect so I'd look to debug that first).\n> \n> Cheers\n> \n> Mark\n> \n> \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Sep 2014 15:03:23 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "Hi Tigran,\n\nmy debugging tips:\n\n>Some technical details:\n>Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64\n>256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz\n>2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAH\n\nAs I know PCIe SSD DELL_P320h = Micron P320h\n(\nhttp://www.micron.com/products/solid-state-storage/enterprise-pcie-ssd/p320h-25-inch-pcie-ssd\n)\n\nso my suggestions:\n\n#1. Check/Upgrade to the latest Micron kernel driver\n**\nhttps://www.google.com/search?q=p420m_p320h_hhhl_installation_guide%2520_.pdf\n*** \"RHEL version 6.1–6.5: kmod-mtip32xx-<version>.el6.x86_64_rhel6ux.rpm\"\n\n#2. And check \"Technical Note P320h/P420m SSD Performance Optimization and\nTesting\"\n**\nhttps://www.google.com/search?q=tnfd15_micron_pciessd_performance_testing.pdf\n\nstrange :\n**** \"Table 1: PCIe SSD Hardware and Software Requirements \"\n***** \"Processors with clock speeds greater than 3 GHz (recommended for\nbest performance)\" * ( you have now: 2.20GHz )*\n***** \"*Up to 8 CPU cores *(logical + physical) with hyperthreading\n(recommended)\"* ( you have 20! )*\n\nso check performance with* \"Turning off Hyper-Threading\" *\n\nImre\n\n2014-09-24 15:03 GMT+02:00 Mkrtchyan, Tigran <[email protected]>:\n\n>\n> With pg_test_timing I can see, that overhead is 48 nsec on my server and\n> 32 nsec on the laptop.\n> what makes this difference and have it any influence on the overall\n> performance?\n>\n> Tigran.\n>\n> ----- Original Message -----\n> > From: \"Mark Kirkwood\" <[email protected]>\n> > To: \"Tigran Mkrtchyan\" <[email protected]>, \"Merlin Moncure\" <\n> [email protected]>\n> > Cc: \"postgres performance list\" <[email protected]>\n> > Sent: Wednesday, September 24, 2014 12:04:12 PM\n> > Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n> >\n> > On 24/09/14 21:23, Mkrtchyan, Tigran wrote:\n> > > Hi Merlin et al.\n> > >\n> > > after building postgres 9.4 myself from sources I get the same\n> performance\n> > > as\n> > > with 9.3. The difference was in the value of debug_assertions setting.\n> > >\n> > > Now the next step. Why my 3 years old laptop gets x1.8 times more tps\n> than\n> > > my one month old server?\n> > > And Mark Kirkwood's desktop gets x2 times more tps as well? Is there\n> some\n> > > special optimization\n> > > for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?\n> > >\n> > >\n> >\n> > Yes - firstly, nicely done re finding the assertions (my 9.4 beta2 was\n> > built from src - never thought to mention sorry)!\n> >\n> > I'd guess that you are seeing some bios setting re the p320 SSD - it\n> > *should* be seriously fast...but does not seem to be. You could try\n> > running some pure IO benchmarks to confirm this (e.g fio). 
Also see if\n> > the manual for however it is attached to the system allows for some\n> > optimized-for-ssd settings that tend to work better (altho these usually\n> > imply the drive is plugged into an adapter card of some kind - mind you\n> > your p320 *does* used a custom connector that does 2.5\" SATA to PCIe\n> > style interconnect so I'd look to debug that first).\n> >\n> > Cheers\n> >\n> > Mark\n> >\n> >\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi Tigran,my debugging tips:>Some technical details:>Host: rhel 6.5 2.6.32-431.23.3.el6.x86_64>256 GB RAM, 40 cores, Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz>2x160GB PCIe SSD DELL_P320h-MTFDGAL175SAHAs I know PCIe SSD DELL_P320h = Micron P320h ( http://www.micron.com/products/solid-state-storage/enterprise-pcie-ssd/p320h-25-inch-pcie-ssd )so my suggestions:#1. Check/Upgrade to the latest Micron kernel driver ** https://www.google.com/search?q=p420m_p320h_hhhl_installation_guide%2520_.pdf*** \"RHEL version 6.1–6.5: kmod-mtip32xx-<version>.el6.x86_64_rhel6ux.rpm\"#2. And check \"Technical Note P320h/P420m SSD Performance Optimization and Testing\"** https://www.google.com/search?q=tnfd15_micron_pciessd_performance_testing.pdfstrange :**** \"Table 1: PCIe SSD Hardware and Software Requirements \"***** \"Processors with clock speeds greater than 3 GHz (recommended for best performance)\" ( you have now: 2.20GHz )***** \"Up to 8 CPU cores (logical + physical) with hyperthreading (recommended)\" ( you have 20! ) so check performance with \"Turning off Hyper-Threading\" Imre2014-09-24 15:03 GMT+02:00 Mkrtchyan, Tigran <[email protected]>:\nWith pg_test_timing I can see, that overhead is 48 nsec on my server and 32 nsec on the laptop.\nwhat makes this difference and have it any influence on the overall performance?\n\nTigran.\n\n----- Original Message -----\n> From: \"Mark Kirkwood\" <[email protected]>\n> To: \"Tigran Mkrtchyan\" <[email protected]>, \"Merlin Moncure\" <[email protected]>\n> Cc: \"postgres performance list\" <[email protected]>\n> Sent: Wednesday, September 24, 2014 12:04:12 PM\n> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4\n>\n> On 24/09/14 21:23, Mkrtchyan, Tigran wrote:\n> > Hi Merlin et al.\n> >\n> > after building postgres 9.4 myself from sources I get the same performance\n> > as\n> > with 9.3. The difference was in the value of debug_assertions setting.\n> >\n> > Now the next step. Why my 3 years old laptop gets x1.8 times more tps than\n> > my one month old server?\n> > And Mark Kirkwood's desktop gets x2 times more tps as well? Is there some\n> > special optimization\n> > for i7 which does not work with Intel(R) Xeon(R) CPU E5-2660?\n> >\n> >\n>\n> Yes - firstly, nicely done re finding the assertions (my 9.4 beta2 was\n> built from src - never thought to mention sorry)!\n>\n> I'd guess that you are seeing some bios setting re the p320 SSD - it\n> *should* be seriously fast...but does not seem to be. You could try\n> running some pure IO benchmarks to confirm this (e.g fio). 
Also see if\n> the manual for however it is attached to the system allows for some\n> optimized-for-ssd settings that tend to work better (altho these usually\n> imply the drive is plugged into an adapter card of some kind - mind you\n> your p320 *does* used a custom connector that does 2.5\" SATA to PCIe\n> style interconnect so I'd look to debug that first).\n>\n> Cheers\n>\n> Mark\n>\n>\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
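Two quick checks for Imre's points, sketched with standard tools (mtip32xx is the driver module named in the installation guide he links):

  $ modinfo mtip32xx | grep -i version     # version of the Micron PCIe SSD driver
  $ lscpu | grep -i 'thread(s) per core'   # 2 means hyperthreading is enabled

If the loaded driver is older than the one Micron ships for RHEL 6.5, upgrading it is the cheapest experiment to try first.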
"msg_date": "Wed, 24 Sep 2014 16:38:39 +0200",
"msg_from": "Imre Samu <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
},
{
"msg_contents": "On 25/09/14 01:03, Mkrtchyan, Tigran wrote:\n>\n> With pg_test_timing I can see, that overhead is 48 nsec on my server and 32 nsec on the laptop.\n> what makes this difference and have it any influence on the overall performance?\n>\n\nHmm - 22 nsec for my workstation, so while it could be a factor, your \nlaptop and my workstation performed the pgbench about the same, so I'd \nlook elsewhere - in particlular sync IO performance:\n\n\n$ cd <where ssd mounted>\n$ pg_test_fsync\n5 seconds per test\nO_DIRECT supported on this platform for open_datasync and open_sync.\n\nCompare file sync methods using one 8kB write:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 140.231 ops/sec 7131 \nusecs/op\n fdatasync 138.159 ops/sec 7238 \nusecs/op\n fsync 137.680 ops/sec 7263 \nusecs/op\n fsync_writethrough n/a\n open_sync 137.202 ops/sec 7289 \nusecs/op\n\nCompare file sync methods using two 8kB writes:\n(in wal_sync_method preference order, except fdatasync\nis Linux's default)\n open_datasync 68.832 ops/sec 14528 \nusecs/op\n fdatasync 135.994 ops/sec 7353 \nusecs/op\n fsync 137.454 ops/sec 7275 \nusecs/op\n fsync_writethrough n/a\n open_sync 69.092 ops/sec 14473 \nusecs/op\n\nCompare open_sync with different write sizes:\n(This is designed to compare the cost of writing 16kB\nin different write open_sync sizes.)\n 1 * 16kB open_sync write 136.904 ops/sec 7304 \nusecs/op\n 2 * 8kB open_sync writes 68.857 ops/sec 14523 \nusecs/op\n 4 * 4kB open_sync writes 34.744 ops/sec 28782 \nusecs/op\n 8 * 2kB open_sync writes write failed: Invalid argument\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 25 Sep 2014 16:39:40 +1200",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
}
] |
[
{
"msg_contents": "\nOn Sep 18, 2014 9:32 PM, Andrew Dunstan <[email protected]> wrote:\n>\n>\n> On 09/18/2014 03:09 PM, Mkrtchyan, Tigran wrote: \n> > \n> > ----- Original Message ----- \n> >> From: \"Josh Berkus\" <[email protected]> \n> >> To: [email protected] \n> >> Sent: Thursday, September 18, 2014 7:54:24 PM \n> >> Subject: Re: [PERFORM] postgres 9.3 vs. 9.4 \n> >> \n> >> On 09/18/2014 08:09 AM, Mkrtchyan, Tigran wrote: \n> >>>>> 9.4beta2: \n> >>>>> ... \n> >>>>> \n> >>>>>>> 0.957854 END; \n> >>>>>>> \n> >>>>> Looks like IO. \n> >>> Postgres internal IO? May be. We get 600MB/s on this SSDs. \n> >> While it's possible that this is a Postgres issue, my first thought is \n> >> that the two SSDs are not actually identical. The 9.4 one may either \n> >> have a fault, or may be mostly full and heavily fragmented. Or the Dell \n> >> PCIe card may have an issue. \n> > \n> > We have tested both SSDs and they have identical IO characteristics and \n> > as I already mentioned, both databases are fresh, including filesystem. \n> > \n> >> You are using \"scale 1\" which is a < 1MB database, and one client and 1 \n> >> thread, which is an interesting test I wouldn't necessarily have done \n> >> myself. I'll throw the same test on one of my machines and see how it does. \n> > this scenario corresponds to our use case. We need a high transaction rate \n> > per for a single client. Currently I can get only ~1500 tps. Unfortunately, \n> > posgtress does not tell me where the bottleneck is. Is this is defensively \n> > not the disk IO. \n> > \n> > \n> > \n>\n>\n> This is when you dig out tools like perf, maybe.\n\nDo you have a better suggestions ?\n\n>\n> cheers \n>\n> andrew \n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Sep 2014 21:50:35 +0200 (CEST)",
"msg_from": "\"Mkrtchyan, Tigran\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: postgres 9.3 vs. 9.4"
}
] |
[
{
"msg_contents": "Hi mailing list,\n\nI am relatively new to postgres. I have a table with 500 coulmns and \nabout 40 mio rows. I call this cache table where one column is a unique \nkey (indexed) and the 499 columns (type integer) are some values \nbelonging to this key.\n\nNow I have a second (temporary) table (only 2 columns one is the key of \nmy cache table) and I want do an inner join between my temporary table \nand the large cache table and export all matching rows. I found out, \nthat the performance increases when I limit the join to lots of small parts.\nBut it seems that the databases needs a lot of disk io to gather all 499 \ndata columns.\nIs there a possibilty to tell the databases that all these colums are \nalways treated as tuples and I always want to get the whole row? Perhaps \nthe disk oraganization could then be optimized?\n\n\nThank you for feedback and ideas\nBest\nNeo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 13:51:33 +0200",
"msg_from": "=?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "query a table with lots of coulmns"
},
{
"msg_contents": "On 19 September 2014 13:51, Björn Wittich <[email protected]> wrote:\n\n> Hi mailing list,\n>\n> I am relatively new to postgres. I have a table with 500 coulmns and about\n> 40 mio rows. I call this cache table where one column is a unique key\n> (indexed) and the 499 columns (type integer) are some values belonging to\n> this key.\n>\n> Now I have a second (temporary) table (only 2 columns one is the key of my\n> cache table) and I want do an inner join between my temporary table and\n> the large cache table and export all matching rows. I found out, that the\n> performance increases when I limit the join to lots of small parts.\n> But it seems that the databases needs a lot of disk io to gather all 499\n> data columns.\n> Is there a possibilty to tell the databases that all these colums are\n> always treated as tuples and I always want to get the whole row? Perhaps\n> the disk oraganization could then be optimized?\n>\n>\nHi,\ndo you have indexes on the columns you use for joins?\n\nSzymon\n\nOn 19 September 2014 13:51, Björn Wittich <[email protected]> wrote:Hi mailing list,\n\nI am relatively new to postgres. I have a table with 500 coulmns and about 40 mio rows. I call this cache table where one column is a unique key (indexed) and the 499 columns (type integer) are some values belonging to this key.\n\nNow I have a second (temporary) table (only 2 columns one is the key of my cache table) and I want do an inner join between my temporary table and the large cache table and export all matching rows. I found out, that the performance increases when I limit the join to lots of small parts.\nBut it seems that the databases needs a lot of disk io to gather all 499 data columns.\nIs there a possibilty to tell the databases that all these colums are always treated as tuples and I always want to get the whole row? Perhaps the disk oraganization could then be optimized?\n Hi,do you have indexes on the columns you use for joins? Szymon",
"msg_date": "Fri, 19 Sep 2014 14:04:30 +0200",
"msg_from": "Szymon Guz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
{
"msg_contents": "Hi Szymon,\n\nyes I have indexes on both columns (one in each table) which I am using \nfor join operation.\n\nAm 19.09.2014 14:04, schrieb Szymon Guz:\n>\n>\n> On 19 September 2014 13:51, Björn Wittich <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hi mailing list,\n>\n> I am relatively new to postgres. I have a table with 500 coulmns\n> and about 40 mio rows. I call this cache table where one column is\n> a unique key (indexed) and the 499 columns (type integer) are some\n> values belonging to this key.\n>\n> Now I have a second (temporary) table (only 2 columns one is the\n> key of my cache table) and I want do an inner join between my\n> temporary table and the large cache table and export all matching\n> rows. I found out, that the performance increases when I limit the\n> join to lots of small parts.\n> But it seems that the databases needs a lot of disk io to gather\n> all 499 data columns.\n> Is there a possibilty to tell the databases that all these colums\n> are always treated as tuples and I always want to get the whole\n> row? Perhaps the disk oraganization could then be optimized?\n>\n> Hi,\n> do you have indexes on the columns you use for joins?\n>\n> Szymon\n\n\n\n\n\n\n\nHi Szymon,\n\n yes I have indexes on both columns (one in each table) which I am\n using for join operation.\n\n Am 19.09.2014 14:04, schrieb Szymon Guz:\n\n\n\n\nOn 19 September 2014 13:51, Björn\n Wittich <[email protected]>\n wrote:\nHi\n mailing list,\n\n I am relatively new to postgres. I have a table with 500\n coulmns and about 40 mio rows. I call this cache table\n where one column is a unique key (indexed) and the 499\n columns (type integer) are some values belonging to this\n key.\n\n Now I have a second (temporary) table (only 2 columns one\n is the key of my cache table) and I want do an inner join\n between my temporary table and the large cache table and\n export all matching rows. I found out, that the\n performance increases when I limit the join to lots of\n small parts.\n But it seems that the databases needs a lot of disk io to\n gather all 499 data columns.\n Is there a possibilty to tell the databases that all these\n colums are always treated as tuples and I always want to\n get the whole row? Perhaps the disk oraganization could\n then be optimized?\n\n\n \nHi,\n do you have indexes on the columns you use for joins? \n\n\nSzymon",
"msg_date": "Fri, 19 Sep 2014 14:48:03 +0200",
"msg_from": "=?UTF-8?B?QmrDtnJuIFdpdHRpY2g=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
{
"msg_contents": "2014-09-19 13:51 GMT+02:00 Björn Wittich <[email protected]>:\n\n> Hi mailing list,\n>\n> I am relatively new to postgres. I have a table with 500 coulmns and about\n> 40 mio rows. I call this cache table where one column is a unique key\n> (indexed) and the 499 columns (type integer) are some values belonging to\n> this key.\n>\n> Now I have a second (temporary) table (only 2 columns one is the key of my\n> cache table) and I want do an inner join between my temporary table and\n> the large cache table and export all matching rows. I found out, that the\n> performance increases when I limit the join to lots of small parts.\n> But it seems that the databases needs a lot of disk io to gather all 499\n> data columns.\n> Is there a possibilty to tell the databases that all these colums are\n> always treated as tuples and I always want to get the whole row? Perhaps\n> the disk oraganization could then be optimized?\n>\n\nsorry for offtopic\n\narray databases are maybe better for your purpose\n\nhttp://rasdaman.com/\nhttp://www.scidb.org/\n\n\n>\n>\n> Thank you for feedback and ideas\n> Best\n> Neo\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2014-09-19 13:51 GMT+02:00 Björn Wittich <[email protected]>:Hi mailing list,\n\nI am relatively new to postgres. I have a table with 500 coulmns and about 40 mio rows. I call this cache table where one column is a unique key (indexed) and the 499 columns (type integer) are some values belonging to this key.\n\nNow I have a second (temporary) table (only 2 columns one is the key of my cache table) and I want do an inner join between my temporary table and the large cache table and export all matching rows. I found out, that the performance increases when I limit the join to lots of small parts.\nBut it seems that the databases needs a lot of disk io to gather all 499 data columns.\nIs there a possibilty to tell the databases that all these colums are always treated as tuples and I always want to get the whole row? Perhaps the disk oraganization could then be optimized?sorry for offtopic array databases are maybe better for your purpose http://rasdaman.com/http://www.scidb.org/ \n\n\nThank you for feedback and ideas\nBest\nNeo\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 19 Sep 2014 15:32:00 +0200",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
{
"msg_contents": "On 09/19/2014 04:51 AM, Björn Wittich wrote:\n> \n> I am relatively new to postgres. I have a table with 500 coulmns and\n> about 40 mio rows. I call this cache table where one column is a unique\n> key (indexed) and the 499 columns (type integer) are some values\n> belonging to this key.\n> \n> Now I have a second (temporary) table (only 2 columns one is the key of\n> my cache table) and I want do an inner join between my temporary table\n> and the large cache table and export all matching rows. I found out,\n> that the performance increases when I limit the join to lots of small\n> parts.\n> But it seems that the databases needs a lot of disk io to gather all 499\n> data columns.\n> Is there a possibilty to tell the databases that all these colums are\n> always treated as tuples and I always want to get the whole row? Perhaps\n> the disk oraganization could then be optimized?\n\nPostgreSQL is already a row store, which means by default you're getting\nall of the columns, and the columns are stored physically adjacent to\neach other.\n\nIf requesting only 1 or two columns is faster than requesting all of\nthem, that's pretty much certainly due to transmission time, not disk\nIO. Otherwise, please post your schema (well, a truncated version) and\nyour queries.\n\nBTW, in cases like yours I've used a INT array instead of 500 columns to\ngood effect; it works slightly better with PostgreSQL's compression.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 19 Sep 2014 14:40:49 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
{
"msg_contents": "At first, thanks for your fast and comprehensive help.\n\nThe structure of my cache table is\n\na text , b text NOT NULL , c text , d text , e timestamp without \ntimezone DEFAULT now(), f text, s1 integer DEFAULT 0, s2 integer \nDEFAULT 0, s3 integer DEFAULT 0, ... ,s512 DEFAULT 0\n\nadditional constraints: primary key (b) , Unique(b), Unique(a)\nIndexes : Index on a, Index on b\n\nThis table has 30 Mio rows ( will increase to 50 Mio) in future\n\nMy working table is\n\nb text, g integer\n\nIndexes on b and c\n\n\nThis table has 5 Mio rows\n\nScenario:\n\nWhat I want to achieve :\n\nSELECT s1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable> \nUSING(b) ORDER BY g\n\n\nThe inner join will match at least 95 % of columns of the smaller \nworktable in this example 4,75 mio rows.\n\nRunning this query takes several hours until I receive the first \nresults. Query analyzing shows that the execution plan is doing 2 seq \ntable scans on cache and work table.\n\n\nWhen I divide this huge statement into\n\nSELECT s1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable> \nUSING(b) WHERE g BETWEEN 1 and 10000 ORDER BY g, SELECT \ns1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable> USING(b) \nWHERE g BETWEEN 10001 and 20000 ORDER BY g, ....\n\n(I can do this because g i unique and continous id from 1 to N)\n\nThe result is fast but fireing parallel requests (4-8 times parallel) \nslows down the retrieval.\n\nExecution plan changes when adding \"BETWEEN 1 and 10000\" to use the indexes.\n\n\n\nOne remark which might help: overall 90 - 95 % of the s1-s512 columns \nare 0. I am only interested in columns not equals 0. Perhaps it would \nmake sense to use and array of json and enumerate only values not equals 0.\n\nStatistics on the large table:\ntable size: 80 GB\ntoast-tablesize: 37 GB\nsize of indexes: 17 GB\n\n\nThanks for your help and ideas\n\nBjörn\n\n\n\n\n\nAm 19.09.2014 23:40, schrieb Josh Berkus:\n> On 09/19/2014 04:51 AM, Björn Wittich wrote:\n>> I am relatively new to postgres. I have a table with 500 coulmns and\n>> about 40 mio rows. I call this cache table where one column is a unique\n>> key (indexed) and the 499 columns (type integer) are some values\n>> belonging to this key.\n>>\n>> Now I have a second (temporary) table (only 2 columns one is the key of\n>> my cache table) and I want do an inner join between my temporary table\n>> and the large cache table and export all matching rows. I found out,\n>> that the performance increases when I limit the join to lots of small\n>> parts.\n>> But it seems that the databases needs a lot of disk io to gather all 499\n>> data columns.\n>> Is there a possibilty to tell the databases that all these colums are\n>> always treated as tuples and I always want to get the whole row? Perhaps\n>> the disk oraganization could then be optimized?\n> PostgreSQL is already a row store, which means by default you're getting\n> all of the columns, and the columns are stored physically adjacent to\n> each other.\n>\n> If requesting only 1 or two columns is faster than requesting all of\n> them, that's pretty much certainly due to transmission time, not disk\n> IO. 
Otherwise, please post your schema (well, a truncated version) and\n> your queries.\n>\n> BTW, in cases like yours I've used a INT array instead of 500 columns to\n> good effect; it works slightly better with PostgreSQL's compression.\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 09:19:09 +0200",
"msg_from": "=?UTF-8?B?QmrDtnJuIFdpdHRpY2g=?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
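Björn's manual split into BETWEEN ranges can be generated instead of typed by hand; a small sketch, assuming g really is a gapless id from 1 to N (here N = 5,000,000 and a chunk size of 10,000; only a few of the s-columns are listed for brevity):

-- Emit one chunked query per 10k range of g:
SELECT format(
    'SELECT s1, s2, s3, g, d FROM worktable JOIN cachetable USING (b) '
    'WHERE g BETWEEN %s AND %s ORDER BY g;',
    lo, lo + 9999) AS chunk_query
FROM generate_series(1, 5000000, 10000) AS lo;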
{
"msg_contents": ">At first, thanks for your fast and comprehensive help.\n>\n>The structure of my cache table is\n>\n>a text , b text NOT NULL , c text , d text , e timestamp without\n>timezone DEFAULT now(), f text, s1 integer DEFAULT 0, s2 integer\n>DEFAULT 0, s3 integer DEFAULT 0, ... ,s512 DEFAULT 0\n\n\n>additional constraints: primary key (b) , Unique(b), Unique(a)\n>Indexes : Index on a, Index on b\n\nThis looks redundant. e.g. you don't need a unique index on b if you already have a primary key on it.\nCan you post the complete table definition ?\n\n...\n\n>One remark which might help: overall 90 - 95 % of the s1-s512 columns\n>are 0. I am only interested in columns not equals 0. Perhaps it would\n>make sense to use and array of json and enumerate only values not equals 0.\n\nCould you change that to replace 0 values with NULLs? \nThis would greatly reduce your table space as Postgres is very efficient about NULLs storage:\nIt marks all null values in a bit map within the row header so you just need about one bit per null\ninstead of 4 bytes for zeros, and hence get rid of your I/O issue.\n\nregards,\n\nMarc Mamin\n________________________________________\nVon: [email protected] [[email protected]]" im Auftrag von "Björn Wittich [[email protected]]\nGesendet: Samstag, 20. September 2014 09:19\nAn: Josh Berkus; [email protected]\nBetreff: Re: [PERFORM] query a table with lots of coulmns\n\nAt first, thanks for your fast and comprehensive help.\n\nThe structure of my cache table is\n\na text , b text NOT NULL , c text , d text , e timestamp without\ntimezone DEFAULT now(), f text, s1 integer DEFAULT 0, s2 integer\nDEFAULT 0, s3 integer DEFAULT 0, ... ,s512 DEFAULT 0\n\nadditional constraints: primary key (b) , Unique(b), Unique(a)\nIndexes : Index on a, Index on b\n\nThis table has 30 Mio rows ( will increase to 50 Mio) in future\n\nMy working table is\n\nb text, g integer\n\nIndexes on b and c\n\n\nThis table has 5 Mio rows\n\nScenario:\n\nWhat I want to achieve :\n\nSELECT s1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable>\nUSING(b) ORDER BY g\n\n\nThe inner join will match at least 95 % of columns of the smaller\nworktable in this example 4,75 mio rows.\n\nRunning this query takes several hours until I receive the first\nresults. Query analyzing shows that the execution plan is doing 2 seq\ntable scans on cache and work table.\n\n\nWhen I divide this huge statement into\n\nSELECT s1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable>\nUSING(b) WHERE g BETWEEN 1 and 10000 ORDER BY g, SELECT\ns1,s2,s3,...s512,g,d from <worktable> INNER JOIN <cachetable> USING(b)\nWHERE g BETWEEN 10001 and 20000 ORDER BY g, ....\n\n(I can do this because g i unique and continous id from 1 to N)\n\nThe result is fast but fireing parallel requests (4-8 times parallel)\nslows down the retrieval.\n\nExecution plan changes when adding \"BETWEEN 1 and 10000\" to use the indexes.\n\n\n\nOne remark which might help: overall 90 - 95 % of the s1-s512 columns\nare 0. I am only interested in columns not equals 0. Perhaps it would\nmake sense to use and array of json and enumerate only values not equals 0.\n\nStatistics on the large table:\ntable size: 80 GB\ntoast-tablesize: 37 GB\nsize of indexes: 17 GB\n\n\nThanks for your help and ideas\n\nBjörn\n\n\n\n\n\nAm 19.09.2014 23:40, schrieb Josh Berkus:\n> On 09/19/2014 04:51 AM, Björn Wittich wrote:\n>> I am relatively new to postgres. I have a table with 500 coulmns and\n>> about 40 mio rows. 
I call this cache table where one column is a unique\n>> key (indexed) and the 499 columns (type integer) are some values\n>> belonging to this key.\n>>\n>> Now I have a second (temporary) table (only 2 columns one is the key of\n>> my cache table) and I want do an inner join between my temporary table\n>> and the large cache table and export all matching rows. I found out,\n>> that the performance increases when I limit the join to lots of small\n>> parts.\n>> But it seems that the databases needs a lot of disk io to gather all 499\n>> data columns.\n>> Is there a possibilty to tell the databases that all these colums are\n>> always treated as tuples and I always want to get the whole row? Perhaps\n>> the disk oraganization could then be optimized?\n> PostgreSQL is already a row store, which means by default you're getting\n> all of the columns, and the columns are stored physically adjacent to\n> each other.\n>\n> If requesting only 1 or two columns is faster than requesting all of\n> them, that's pretty much certainly due to transmission time, not disk\n> IO. Otherwise, please post your schema (well, a truncated version) and\n> your queries.\n>\n> BTW, in cases like yours I've used a INT array instead of 500 columns to\n> good effect; it works slightly better with PostgreSQL's compression.\n>\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 10:36:34 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
},
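A sketch of Marc's suggestion in SQL, shown for one column (the table and column names come from the schema posted below; repeating this for s2 .. s512 is mechanical and could be generated from the catalog). Dropping the zero default and converting existing zeros to NULL lets the per-row null bitmap replace the 4-byte zeros:

-- Stop writing zeros for "no value" ...
ALTER TABLE cachetable ALTER COLUMN s1 SET DEFAULT NULL;
-- ... and convert the existing zeros:
UPDATE cachetable SET s1 = NULL WHERE s1 = 0;

Note that a full-table UPDATE like this rewrites every affected row, so on a table of this size it is best done once, followed by a VACUUM.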
{
"msg_contents": "Hi,\n\nok here are my schemata : cachetable : 30 - 50 Mio rows, worktable 5 Mio \n- 25 Mio\n\n\nCREATE TABLE cachetable\n(\n a text,\n b text NOT NULL,\n c text,\n d text,\n e timestamp without time zone DEFAULT now(),\n f text,\n s1 integer DEFAULT 0,\n s2 integer DEFAULT 0,\n s3 integer DEFAULT 0,\n s4 integer DEFAULT 0,\n s5 integer DEFAULT 0,\n s6 integer DEFAULT 0,\n s7 integer DEFAULT 0,\n s8 integer DEFAULT 0,\n s9 integer DEFAULT 0,\n s10 integer DEFAULT 0,\n s11 integer DEFAULT 0,\n s12 integer DEFAULT 0,\n s13 integer DEFAULT 0,\n s14 integer DEFAULT 0,\n s15 integer DEFAULT 0,\n s16 integer DEFAULT 0,\n s17 integer DEFAULT 0,\n s18 integer DEFAULT 0,\n s19 integer DEFAULT 0,\n s20 integer DEFAULT 0,\n s21 integer DEFAULT 0,\n s22 integer DEFAULT 0,\n s23 integer DEFAULT 0,\n s24 integer DEFAULT 0,\n s25 integer DEFAULT 0,\n s26 integer DEFAULT 0,\n s27 integer DEFAULT 0,\n s28 integer DEFAULT 0,\n s29 integer DEFAULT 0,\n s30 integer DEFAULT 0,\n s31 integer DEFAULT 0,\n s32 integer DEFAULT 0,\n s33 integer DEFAULT 0,\n s34 integer DEFAULT 0,\n s35 integer DEFAULT 0,\n s36 integer DEFAULT 0,\n s37 integer DEFAULT 0,\n s38 integer DEFAULT 0,\n s39 integer DEFAULT 0,\n s40 integer DEFAULT 0,\n s41 integer DEFAULT 0,\n s42 integer DEFAULT 0,\n s43 integer DEFAULT 0,\n s44 integer DEFAULT 0,\n s45 integer DEFAULT 0,\n s46 integer DEFAULT 0,\n s47 integer DEFAULT 0,\n s48 integer DEFAULT 0,\n s49 integer DEFAULT 0,\n s50 integer DEFAULT 0,\n s51 integer DEFAULT 0,\n s52 integer DEFAULT 0,\n s53 integer DEFAULT 0,\n s54 integer DEFAULT 0,\n s55 integer DEFAULT 0,\n s56 integer DEFAULT 0,\n s57 integer DEFAULT 0,\n s58 integer DEFAULT 0,\n s59 integer DEFAULT 0,\n s60 integer DEFAULT 0,\n s61 integer DEFAULT 0,\n s62 integer DEFAULT 0,\n s63 integer DEFAULT 0,\n s64 integer DEFAULT 0,\n s65 integer DEFAULT 0,\n s66 integer DEFAULT 0,\n s67 integer DEFAULT 0,\n s68 integer DEFAULT 0,\n s69 integer DEFAULT 0,\n s70 integer DEFAULT 0,\n s71 integer DEFAULT 0,\n s72 integer DEFAULT 0,\n s73 integer DEFAULT 0,\n s74 integer DEFAULT 0,\n s75 integer DEFAULT 0,\n s76 integer DEFAULT 0,\n s77 integer DEFAULT 0,\n s78 integer DEFAULT 0,\n s79 integer DEFAULT 0,\n s80 integer DEFAULT 0,\n s81 integer DEFAULT 0,\n s82 integer DEFAULT 0,\n s83 integer DEFAULT 0,\n s84 integer DEFAULT 0,\n s85 integer DEFAULT 0,\n s86 integer DEFAULT 0,\n s87 integer DEFAULT 0,\n s88 integer DEFAULT 0,\n s89 integer DEFAULT 0,\n s90 integer DEFAULT 0,\n s91 integer DEFAULT 0,\n s92 integer DEFAULT 0,\n s93 integer DEFAULT 0,\n s94 integer DEFAULT 0,\n s95 integer DEFAULT 0,\n s96 integer DEFAULT 0,\n s97 integer DEFAULT 0,\n s98 integer DEFAULT 0,\n s99 integer DEFAULT 0,\n s100 integer DEFAULT 0,\n s101 integer DEFAULT 0,\n s102 integer DEFAULT 0,\n s103 integer DEFAULT 0,\n s104 integer DEFAULT 0,\n s105 integer DEFAULT 0,\n s106 integer DEFAULT 0,\n s107 integer DEFAULT 0,\n s108 integer DEFAULT 0,\n s109 integer DEFAULT 0,\n s110 integer DEFAULT 0,\n s111 integer DEFAULT 0,\n s112 integer DEFAULT 0,\n s113 integer DEFAULT 0,\n s114 integer DEFAULT 0,\n s115 integer DEFAULT 0,\n s116 integer DEFAULT 0,\n s117 integer DEFAULT 0,\n s118 integer DEFAULT 0,\n s119 integer DEFAULT 0,\n s120 integer DEFAULT 0,\n s121 integer DEFAULT 0,\n s122 integer DEFAULT 0,\n s123 integer DEFAULT 0,\n s124 integer DEFAULT 0,\n s125 integer DEFAULT 0,\n s126 integer DEFAULT 0,\n s127 integer DEFAULT 0,\n s128 integer DEFAULT 0,\n s129 integer DEFAULT 0,\n s130 integer DEFAULT 0,\n s131 integer DEFAULT 0,\n s132 
integer DEFAULT 0,\n s133 integer DEFAULT 0,\n s134 integer DEFAULT 0,\n s135 integer DEFAULT 0,\n s136 integer DEFAULT 0,\n s137 integer DEFAULT 0,\n s138 integer DEFAULT 0,\n s139 integer DEFAULT 0,\n s140 integer DEFAULT 0,\n s141 integer DEFAULT 0,\n s142 integer DEFAULT 0,\n s143 integer DEFAULT 0,\n s144 integer DEFAULT 0,\n s145 integer DEFAULT 0,\n s146 integer DEFAULT 0,\n s147 integer DEFAULT 0,\n s148 integer DEFAULT 0,\n s149 integer DEFAULT 0,\n s150 integer DEFAULT 0,\n s151 integer DEFAULT 0,\n s152 integer DEFAULT 0,\n s153 integer DEFAULT 0,\n s154 integer DEFAULT 0,\n s155 integer DEFAULT 0,\n s156 integer DEFAULT 0,\n s157 integer DEFAULT 0,\n s158 integer DEFAULT 0,\n s159 integer DEFAULT 0,\n s160 integer DEFAULT 0,\n s161 integer DEFAULT 0,\n s162 integer DEFAULT 0,\n s163 integer DEFAULT 0,\n s164 integer DEFAULT 0,\n s165 integer DEFAULT 0,\n s166 integer DEFAULT 0,\n s167 integer DEFAULT 0,\n s168 integer DEFAULT 0,\n s169 integer DEFAULT 0,\n s170 integer DEFAULT 0,\n s171 integer DEFAULT 0,\n s172 integer DEFAULT 0,\n s173 integer DEFAULT 0,\n s174 integer DEFAULT 0,\n s175 integer DEFAULT 0,\n s176 integer DEFAULT 0,\n s177 integer DEFAULT 0,\n s178 integer DEFAULT 0,\n s179 integer DEFAULT 0,\n s180 integer DEFAULT 0,\n s181 integer DEFAULT 0,\n s182 integer DEFAULT 0,\n s183 integer DEFAULT 0,\n s184 integer DEFAULT 0,\n s185 integer DEFAULT 0,\n s186 integer DEFAULT 0,\n s187 integer DEFAULT 0,\n s188 integer DEFAULT 0,\n s189 integer DEFAULT 0,\n s190 integer DEFAULT 0,\n s191 integer DEFAULT 0,\n s192 integer DEFAULT 0,\n s193 integer DEFAULT 0,\n s194 integer DEFAULT 0,\n s195 integer DEFAULT 0,\n s196 integer DEFAULT 0,\n s197 integer DEFAULT 0,\n s198 integer DEFAULT 0,\n s199 integer DEFAULT 0,\n s200 integer DEFAULT 0,\n s201 integer DEFAULT 0,\n s202 integer DEFAULT 0,\n s203 integer DEFAULT 0,\n s204 integer DEFAULT 0,\n s205 integer DEFAULT 0,\n s206 integer DEFAULT 0,\n s207 integer DEFAULT 0,\n s208 integer DEFAULT 0,\n s209 integer DEFAULT 0,\n s210 integer DEFAULT 0,\n s211 integer DEFAULT 0,\n s212 integer DEFAULT 0,\n s213 integer DEFAULT 0,\n s214 integer DEFAULT 0,\n s215 integer DEFAULT 0,\n s216 integer DEFAULT 0,\n s217 integer DEFAULT 0,\n s218 integer DEFAULT 0,\n s219 integer DEFAULT 0,\n s220 integer DEFAULT 0,\n s221 integer DEFAULT 0,\n s222 integer DEFAULT 0,\n s223 integer DEFAULT 0,\n s224 integer DEFAULT 0,\n s225 integer DEFAULT 0,\n s226 integer DEFAULT 0,\n s227 integer DEFAULT 0,\n s228 integer DEFAULT 0,\n s229 integer DEFAULT 0,\n s230 integer DEFAULT 0,\n s231 integer DEFAULT 0,\n s232 integer DEFAULT 0,\n s233 integer DEFAULT 0,\n s234 integer DEFAULT 0,\n s235 integer DEFAULT 0,\n s236 integer DEFAULT 0,\n s237 integer DEFAULT 0,\n s238 integer DEFAULT 0,\n s239 integer DEFAULT 0,\n s240 integer DEFAULT 0,\n s241 integer DEFAULT 0,\n s242 integer DEFAULT 0,\n s243 integer DEFAULT 0,\n s244 integer DEFAULT 0,\n s245 integer DEFAULT 0,\n s246 integer DEFAULT 0,\n s247 integer DEFAULT 0,\n s248 integer DEFAULT 0,\n s249 integer DEFAULT 0,\n s250 integer DEFAULT 0,\n s251 integer DEFAULT 0,\n s252 integer DEFAULT 0,\n s253 integer DEFAULT 0,\n s254 integer DEFAULT 0,\n s255 integer DEFAULT 0,\n s256 integer DEFAULT 0,\n s257 integer DEFAULT 0,\n s258 integer DEFAULT 0,\n s259 integer DEFAULT 0,\n s260 integer DEFAULT 0,\n s261 integer DEFAULT 0,\n s262 integer DEFAULT 0,\n s263 integer DEFAULT 0,\n s264 integer DEFAULT 0,\n s265 integer DEFAULT 0,\n s266 integer DEFAULT 0,\n s267 integer DEFAULT 0,\n s268 integer DEFAULT 
0,\n s269 integer DEFAULT 0,\n s270 integer DEFAULT 0,\n s271 integer DEFAULT 0,\n s272 integer DEFAULT 0,\n s273 integer DEFAULT 0,\n s274 integer DEFAULT 0,\n s275 integer DEFAULT 0,\n s276 integer DEFAULT 0,\n s277 integer DEFAULT 0,\n s278 integer DEFAULT 0,\n s279 integer DEFAULT 0,\n s280 integer DEFAULT 0,\n s281 integer DEFAULT 0,\n s282 integer DEFAULT 0,\n s283 integer DEFAULT 0,\n s284 integer DEFAULT 0,\n s285 integer DEFAULT 0,\n s286 integer DEFAULT 0,\n s287 integer DEFAULT 0,\n s288 integer DEFAULT 0,\n s289 integer DEFAULT 0,\n s290 integer DEFAULT 0,\n s291 integer DEFAULT 0,\n s292 integer DEFAULT 0,\n s293 integer DEFAULT 0,\n s294 integer DEFAULT 0,\n s295 integer DEFAULT 0,\n s296 integer DEFAULT 0,\n s297 integer DEFAULT 0,\n s298 integer DEFAULT 0,\n s299 integer DEFAULT 0,\n s300 integer DEFAULT 0,\n s301 integer DEFAULT 0,\n s302 integer DEFAULT 0,\n s303 integer DEFAULT 0,\n s304 integer DEFAULT 0,\n s305 integer DEFAULT 0,\n s306 integer DEFAULT 0,\n s307 integer DEFAULT 0,\n s308 integer DEFAULT 0,\n s309 integer DEFAULT 0,\n s310 integer DEFAULT 0,\n s311 integer DEFAULT 0,\n s312 integer DEFAULT 0,\n s313 integer DEFAULT 0,\n s314 integer DEFAULT 0,\n s315 integer DEFAULT 0,\n s316 integer DEFAULT 0,\n s317 integer DEFAULT 0,\n s318 integer DEFAULT 0,\n s319 integer DEFAULT 0,\n s320 integer DEFAULT 0,\n s321 integer DEFAULT 0,\n s322 integer DEFAULT 0,\n s323 integer DEFAULT 0,\n s324 integer DEFAULT 0,\n s325 integer DEFAULT 0,\n s326 integer DEFAULT 0,\n s327 integer DEFAULT 0,\n s328 integer DEFAULT 0,\n s329 integer DEFAULT 0,\n s330 integer DEFAULT 0,\n s331 integer DEFAULT 0,\n s332 integer DEFAULT 0,\n s333 integer DEFAULT 0,\n s334 integer DEFAULT 0,\n s335 integer DEFAULT 0,\n s336 integer DEFAULT 0,\n s337 integer DEFAULT 0,\n s338 integer DEFAULT 0,\n s339 integer DEFAULT 0,\n s340 integer DEFAULT 0,\n s341 integer DEFAULT 0,\n s342 integer DEFAULT 0,\n s343 integer DEFAULT 0,\n s344 integer DEFAULT 0,\n s345 integer DEFAULT 0,\n s346 integer DEFAULT 0,\n s347 integer DEFAULT 0,\n s348 integer DEFAULT 0,\n s349 integer DEFAULT 0,\n s350 integer DEFAULT 0,\n s351 integer DEFAULT 0,\n s352 integer DEFAULT 0,\n s353 integer DEFAULT 0,\n s354 integer DEFAULT 0,\n s355 integer DEFAULT 0,\n s356 integer DEFAULT 0,\n s357 integer DEFAULT 0,\n s358 integer DEFAULT 0,\n s359 integer DEFAULT 0,\n s360 integer DEFAULT 0,\n s361 integer DEFAULT 0,\n s362 integer DEFAULT 0,\n s363 integer DEFAULT 0,\n s364 integer DEFAULT 0,\n s365 integer DEFAULT 0,\n s366 integer DEFAULT 0,\n s367 integer DEFAULT 0,\n s368 integer DEFAULT 0,\n s369 integer DEFAULT 0,\n s370 integer DEFAULT 0,\n s371 integer DEFAULT 0,\n s372 integer DEFAULT 0,\n s373 integer DEFAULT 0,\n s374 integer DEFAULT 0,\n s375 integer DEFAULT 0,\n s376 integer DEFAULT 0,\n s377 integer DEFAULT 0,\n s378 integer DEFAULT 0,\n s379 integer DEFAULT 0,\n s380 integer DEFAULT 0,\n s381 integer DEFAULT 0,\n s382 integer DEFAULT 0,\n s383 integer DEFAULT 0,\n s384 integer DEFAULT 0,\n s385 integer DEFAULT 0,\n s386 integer DEFAULT 0,\n s387 integer DEFAULT 0,\n s388 integer DEFAULT 0,\n s389 integer DEFAULT 0,\n s390 integer DEFAULT 0,\n s391 integer DEFAULT 0,\n s392 integer DEFAULT 0,\n s393 integer DEFAULT 0,\n s394 integer DEFAULT 0,\n s395 integer DEFAULT 0,\n s396 integer DEFAULT 0,\n s397 integer DEFAULT 0,\n s398 integer DEFAULT 0,\n s399 integer DEFAULT 0,\n s400 integer DEFAULT 0,\n s401 integer DEFAULT 0,\n s402 integer DEFAULT 0,\n s403 integer DEFAULT 0,\n s404 integer DEFAULT 0,\n s405 integer 
DEFAULT 0,\n s406 integer DEFAULT 0,\n s407 integer DEFAULT 0,\n s408 integer DEFAULT 0,\n s409 integer DEFAULT 0,\n s410 integer DEFAULT 0,\n s411 integer DEFAULT 0,\n s412 integer DEFAULT 0,\n s413 integer DEFAULT 0,\n s414 integer DEFAULT 0,\n s415 integer DEFAULT 0,\n s416 integer DEFAULT 0,\n s417 integer DEFAULT 0,\n s418 integer DEFAULT 0,\n s419 integer DEFAULT 0,\n s420 integer DEFAULT 0,\n s421 integer DEFAULT 0,\n s422 integer DEFAULT 0,\n s423 integer DEFAULT 0,\n s424 integer DEFAULT 0,\n s425 integer DEFAULT 0,\n s426 integer DEFAULT 0,\n s427 integer DEFAULT 0,\n s428 integer DEFAULT 0,\n s429 integer DEFAULT 0,\n s430 integer DEFAULT 0,\n s431 integer DEFAULT 0,\n s432 integer DEFAULT 0,\n s433 integer DEFAULT 0,\n s434 integer DEFAULT 0,\n s435 integer DEFAULT 0,\n s436 integer DEFAULT 0,\n s437 integer DEFAULT 0,\n s438 integer DEFAULT 0,\n s439 integer DEFAULT 0,\n s440 integer DEFAULT 0,\n s441 integer DEFAULT 0,\n s442 integer DEFAULT 0,\n s443 integer DEFAULT 0,\n s444 integer DEFAULT 0,\n s445 integer DEFAULT 0,\n s446 integer DEFAULT 0,\n s447 integer DEFAULT 0,\n s448 integer DEFAULT 0,\n s449 integer DEFAULT 0,\n s450 integer DEFAULT 0,\n s451 integer DEFAULT 0,\n s452 integer DEFAULT 0,\n s453 integer DEFAULT 0,\n s454 integer DEFAULT 0,\n s455 integer DEFAULT 0,\n s456 integer DEFAULT 0,\n s457 integer DEFAULT 0,\n s458 integer DEFAULT 0,\n s459 integer DEFAULT 0,\n s460 integer DEFAULT 0,\n s461 integer DEFAULT 0,\n s462 integer DEFAULT 0,\n s463 integer DEFAULT 0,\n s464 integer DEFAULT 0,\n s465 integer DEFAULT 0,\n s466 integer DEFAULT 0,\n s467 integer DEFAULT 0,\n s468 integer DEFAULT 0,\n s469 integer DEFAULT 0,\n s470 integer DEFAULT 0,\n s471 integer DEFAULT 0,\n s472 integer DEFAULT 0,\n s473 integer DEFAULT 0,\n s474 integer DEFAULT 0,\n s475 integer DEFAULT 0,\n s476 integer DEFAULT 0,\n s477 integer DEFAULT 0,\n s478 integer DEFAULT 0,\n s479 integer DEFAULT 0,\n s480 integer DEFAULT 0,\n s481 integer DEFAULT 0,\n s482 integer DEFAULT 0,\n s483 integer DEFAULT 0,\n s484 integer DEFAULT 0,\n s485 integer DEFAULT 0,\n s486 integer DEFAULT 0,\n s487 integer DEFAULT 0,\n s488 integer DEFAULT 0,\n s489 integer DEFAULT 0,\n s490 integer DEFAULT 0,\n s491 integer DEFAULT 0,\n s492 integer DEFAULT 0,\n s493 integer DEFAULT 0,\n s494 integer DEFAULT 0,\n s495 integer DEFAULT 0,\n s496 integer DEFAULT 0,\n s497 integer DEFAULT 0,\n s498 integer DEFAULT 0,\n s499 integer DEFAULT 0,\n s500 integer DEFAULT 0,\n s501 integer DEFAULT 0,\n s502 integer DEFAULT 0,\n s503 integer DEFAULT 0,\n s504 integer DEFAULT 0,\n s505 integer DEFAULT 0,\n s506 integer DEFAULT 0,\n s507 integer DEFAULT 0,\n s508 integer DEFAULT 0,\n s509 integer DEFAULT 0,\n s510 integer DEFAULT 0,\n s511 integer DEFAULT 0,\n s512 integer DEFAULT 0,\n\n CONSTRAINT primkey PRIMARY KEY (b),\n CONSTRAINT uniqueb UNIQUE (b),\n CONSTRAINT uniquea UNIQUE (a)\n)\n\nWITH (\n OIDS=FALSE\n);\n\nALTER TABLE cachetable\n OWNER TO myuser;\n\n\n\n\nCREATE INDEX test_index\n ON cachetable\n USING btree\n (b COLLATE pg_catalog.\"default\");\n\n\n\nCREATE INDEX test2_index\n ON cachetable\n USING btree\n (a COLLATE pg_catalog.\"default\");\n\n\n\n\n\nand my worktable\n\n\n\n\n\nCREATE TABLE worktable\n(\n b text,\n g integer\n)\n\nWITH (\n OIDS=FALSE\n);\n\nALTER TABLE worktable\n OWNER TO myuser;\n\n\n\n\n\n\n\nCREATE INDEX worktable_rh_index\n ON worktable\n USING btree\n (b COLLATE pg_catalog.\"default\");\n\n\n\n\n\n\n\nCREATE INDEX worktable_tn_index\n ON worktable\n USING btree\n 
(g);\n\n\n\nBest\nBjörn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 20 Sep 2014 13:51:04 +0200",
"msg_from": "=?ISO-8859-1?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query a table with lots of coulmns"
}
] |
[
{
"msg_contents": "Maybe someone can explain this. The following SQL will reproduce our issue:\nDROP TABLE IF EXISTS t1 CASCADE;\nCREATE TABLE t1 (name text,\n state text);\nCREATE INDEX t1_name ON t1(name);\nCREATE INDEX t1_state ON t1(state);\nCREATE INDEX t1_name_state ON t1(name,state);\n\n-- Create some sample data\nDO $$\nDECLARE\nstates text[] := array['UNKNOWN', 'TODO', 'DONE', 'UNKNOWN'];\nBEGIN\nFOR v IN 1..200000 LOOP\n INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\nEND LOOP;\nEND $$;\n\n\nCREATE OR REPLACE FUNCTION state_to_int(state character varying) RETURNS\ninteger\n LANGUAGE plpgsql IMMUTABLE STRICT\n AS $$BEGIN\nIF state = 'UNKNOWN' THEN RETURN 0;\nELSIF state = 'TODO' THEN RETURN 1;\nELSIF state = 'DONE' THEN RETURN 2;\nELSIF state = 'NOT REQUIRED' THEN RETURN 3;\nELSE RAISE EXCEPTION 'state_to_int called with invalid state value';\nEND IF;\nEND;$$;\n\nCREATE OR REPLACE FUNCTION int_to_state(state integer) RETURNS character\nvarying\n LANGUAGE plpgsql IMMUTABLE STRICT\n AS $$BEGIN\nIF state = 0 THEN RETURN 'UNKNOWN';\nELSIF state = 1 THEN RETURN 'TODO';\nELSIF state = 2 THEN RETURN 'DONE';\nELSIF state = 3 THEN RETURN 'NOT REQUIRED';\nELSE RAISE EXCEPTION 'int_to_state called with invalid state value';\nEND IF;\nEND;$$;\n\n-- Why is this a lot slower\nexplain (analyse, buffers) select name,\nint_to_state(min(state_to_int(state))) as status from t1 group by t1.name;\n\n-- Than this?\nexplain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',\n'NOT REQUIRED'])[min(\nCASE state\nWHEN 'UNKNOWN' THEN 0\nWHEN 'TODO' THEN 1\nWHEN 'DONE' THEN 2\nWHEN 'NOT REQUIRED' THEN 3\nEND)] AS status from t1 group by t1.name;\n\n-- This is also very much slower\nexplain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',\n'NOT REQUIRED'])[min(state_to_int(state))] AS status from t1 group by\nt1.name;\n\nThis was done on:\nPostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu\n4.8.2-19ubuntu1) 4.8.2, 64-bit\n\nWe get results like this:\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.42..280042.62 rows=208120 width=15) (actual\ntime=0.076..2439.066 rows=200000 loops=1)\n Buffers: shared hit=53146\n -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\nwidth=15) (actual time=0.009..229.477 rows=800000 loops=1)\n Buffers: shared hit=53146\n Total runtime: 2460.860 ms\n(5 rows)\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.42..36012.62 rows=208120 width=15) (actual\ntime=0.017..559.384 rows=200000 loops=1)\n Buffers: shared hit=53146\n -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\nwidth=15) (actual time=0.008..197.133 rows=800000 loops=1)\n Buffers: shared hit=53146\n Total runtime: 574.550 ms\n(5 rows)\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=0.42..228012.62 rows=208120 width=15) (actual\ntime=0.042..2089.367 rows=200000 loops=1)\n Buffers: shared hit=53146\n -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\nwidth=15) 
(actual time=0.008..237.854 rows=800000 loops=1)\n Buffers: shared hit=53146\n Total runtime: 2111.004 ms\n(5 rows)\n\n\nWe cannot change our table structure to reflect something more sensible.\nWhat we would really like to know is why using functions is so much slower\nthan the unreadable method.\n\nRegards\n\nRoss\n",
"msg_date": "Tue, 23 Sep 2014 13:21:31 +0100",
"msg_from": "Ross Elliott <[email protected]>",
"msg_from_op": true,
"msg_subject": "Slow query"
},
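One thing worth noting about the sample-data generator, independent of the timing question: Postgres arrays are 1-based, so states[floor(random()*4)] produces subscripts 0..3, subscript 0 yields NULL, and 'NOT REQUIRED' is never generated because the array lists 'UNKNOWN' twice. A corrected sketch covering all four states evenly (not what was actually run for the numbers above; one insert per loop iteration shown for brevity):

DO $$
DECLARE
  states text[] := array['UNKNOWN', 'TODO', 'DONE', 'NOT REQUIRED'];
BEGIN
  FOR v IN 1..200000 LOOP
    -- +1 shifts the 0..3 range onto the 1-based array subscripts 1..4
    INSERT INTO t1 VALUES ('user' || v, states[floor(random()*4)::int + 1]);
  END LOOP;
END $$;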
{
"msg_contents": "Ross Elliott-2 wrote\n> Maybe someone can explain this. The following SQL will reproduce our\n> issue:\n> DROP TABLE IF EXISTS t1 CASCADE;\n> CREATE TABLE t1 (name text,\n> state text);\n> CREATE INDEX t1_name ON t1(name);\n> CREATE INDEX t1_state ON t1(state);\n> CREATE INDEX t1_name_state ON t1(name,state);\n> \n> -- Create some sample data\n> DO $$\n> DECLARE\n> states text[] := array['UNKNOWN', 'TODO', 'DONE', 'UNKNOWN'];\n> BEGIN\n> FOR v IN 1..200000 LOOP\n> INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n> INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n> INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n> INSERT INTO t1 VALUES('user'||v, states[floor(random()*4)]);\n> END LOOP;\n> END $$;\n> \n> \n> CREATE OR REPLACE FUNCTION state_to_int(state character varying) RETURNS\n> integer\n> LANGUAGE plpgsql IMMUTABLE STRICT\n> AS $$BEGIN\n> IF state = 'UNKNOWN' THEN RETURN 0;\n> ELSIF state = 'TODO' THEN RETURN 1;\n> ELSIF state = 'DONE' THEN RETURN 2;\n> ELSIF state = 'NOT REQUIRED' THEN RETURN 3;\n> ELSE RAISE EXCEPTION 'state_to_int called with invalid state value';\n> END IF;\n> END;$$;\n> \n> CREATE OR REPLACE FUNCTION int_to_state(state integer) RETURNS character\n> varying\n> LANGUAGE plpgsql IMMUTABLE STRICT\n> AS $$BEGIN\n> IF state = 0 THEN RETURN 'UNKNOWN';\n> ELSIF state = 1 THEN RETURN 'TODO';\n> ELSIF state = 2 THEN RETURN 'DONE';\n> ELSIF state = 3 THEN RETURN 'NOT REQUIRED';\n> ELSE RAISE EXCEPTION 'int_to_state called with invalid state value';\n> END IF;\n> END;$$;\n> \n> -- Why is this a lot slower\n> explain (analyse, buffers) select name,\n> int_to_state(min(state_to_int(state))) as status from t1 group by t1.name;\n> \n> -- Than this?\n> explain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',\n> 'NOT REQUIRED'])[min(\n> CASE state\n> WHEN 'UNKNOWN' THEN 0\n> WHEN 'TODO' THEN 1\n> WHEN 'DONE' THEN 2\n> WHEN 'NOT REQUIRED' THEN 3\n> END)] AS status from t1 group by t1.name;\n> \n> -- This is also very much slower\n> explain (analyze, buffers) select name, (array['UNKNOWN', 'TODO', 'DONE',\n> 'NOT REQUIRED'])[min(state_to_int(state))] AS status from t1 group by\n> t1.name;\n> \n> This was done on:\n> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu\n> 4.8.2-19ubuntu1) 4.8.2, 64-bit\n> \n> We get results like this:\n> QUERY PLAN\n> \n> -----------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.42..280042.62 rows=208120 width=15) (actual\n> time=0.076..2439.066 rows=200000 loops=1)\n> Buffers: shared hit=53146\n> -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\n> width=15) (actual time=0.009..229.477 rows=800000 loops=1)\n> Buffers: shared hit=53146\n> Total runtime: 2460.860 ms\n> (5 rows)\n> \n> QUERY PLAN\n> \n> -----------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=0.42..36012.62 rows=208120 width=15) (actual\n> time=0.017..559.384 rows=200000 loops=1)\n> Buffers: shared hit=53146\n> -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\n> width=15) (actual time=0.008..197.133 rows=800000 loops=1)\n> Buffers: shared hit=53146\n> Total runtime: 574.550 ms\n> (5 rows)\n> \n> QUERY PLAN\n> \n> -----------------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate 
(cost=0.42..228012.62 rows=208120 width=15) (actual\n> time=0.042..2089.367 rows=200000 loops=1)\n> Buffers: shared hit=53146\n> -> Index Scan using t1_name on t1 (cost=0.42..21931.42 rows=800000\n> width=15) (actual time=0.008..237.854 rows=800000 loops=1)\n> Buffers: shared hit=53146\n> Total runtime: 2111.004 ms\n> (5 rows)\n> \n> \n> We cannot change our table structure to reflect something more sensible.\n> What we would really like to know is why using functions is so much slower\n> than the unreadable method.\n> \n> Regards\n> \n> Ross\n\nPl/pgsql functions are black boxes and expensive to execute; you should\ndefine these functions as SQL functions and see if that helps.\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Slow-query-tp5820086p5820096.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 Sep 2014 06:05:16 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Slow query"
}
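A sketch of what David describes, keeping the original signatures so CREATE OR REPLACE works. Two caveats worth flagging: the CASE returns NULL for unexpected input instead of raising an error as the plpgsql versions do, and STRICT is dropped deliberately, because a STRICT SQL function whose body cannot be proven strict (a CASE cannot) will not be inlined by the planner:

-- LANGUAGE sql versions, which the planner can inline into the query
-- instead of invoking the plpgsql interpreter once per row:
CREATE OR REPLACE FUNCTION state_to_int(state character varying) RETURNS integer
LANGUAGE sql IMMUTABLE AS $$
    SELECT CASE state
        WHEN 'UNKNOWN'      THEN 0
        WHEN 'TODO'         THEN 1
        WHEN 'DONE'         THEN 2
        WHEN 'NOT REQUIRED' THEN 3
    END;
$$;

CREATE OR REPLACE FUNCTION int_to_state(state integer) RETURNS character varying
LANGUAGE sql IMMUTABLE AS $$
    -- array subscripts are 1-based, hence state + 1
    SELECT (array['UNKNOWN', 'TODO', 'DONE', 'NOT REQUIRED'])[state + 1];
$$;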
] |
[
{
"msg_contents": "\n\n\n\n\nHello list,\n \n\n For a big table with more than 1,000,000 records, may I know\n which update is quicker please?\n \n\n (1) update t1\n \n set c1 = a.c1\n \n from a\n \n where pk and\n \n t1.c1 <> a.c1;\n \n ......\n \n update t1\n \n set c_N = a.c_N\n \n from a\n \n where pk and\n \n t1.c_N <> a.c_N;\n \n\n\n (2) update t1\n \n set c1 = a.c1 ,\n \n c2 = a.c2,\n \n ...\n \n c_N = a.c_N\n \n from a\n \n where pk AND\n \n ( t1.c1 <> a.c1 OR t1.c2 <>\n a.c2..... t1.c_N <> a.c_N)\n \n\n\n Or other quicker way for update action?\n \n\n Thank you\n \n Emi\n \n\n\n\n",
"msg_date": "Tue, 23 Sep 2014 16:37:15 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Which update action quicker?"
},
{
"msg_contents": "On 09/23/2014 11:37 PM, Emi Lu wrote:\n> Hello list,\n>\n> For a big table with more than 1,000,000 records, may I know which update is\n> quicker please?\n>\n> (1) update t1\n> set c1 = a.c1\n> from a\n> where pk and\n> t1.c1 <> a.c1;\n> ......\n> update t1\n> set c_N = a.c_N\n> from a\n> where pk and\n> t1.c_N <> a.c_N;\n>\n>\n> (2) update t1\n> set c1 = a.c1 ,\n> c2 = a.c2,\n> ...\n> c_N = a.c_N\n> from a\n> where pk AND\n> ( t1.c1 <> a.c1 OR t1.c2 <> a.c2..... t1.c_N <> a.c_N)\n\nProbably (2). <> is not indexable, so each update will have to perform a \nsequential scan of the table. With (2), you only need to scan it once, \nwith (1) you have to scan it N times. Also, method (1) will update the \nsame row multiple times, if it needs to have more than one column updated.\n\n> Or other quicker way for update action?\n\nIf a large percentage of the table needs to be updated, it can be faster \nto create a new table, insert all the rows with the right values, drop \nthe old table and rename the new one in its place. All in one transaction.\n\n- Heikki\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Sep 2014 16:48:36 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Which update action quicker?"
},
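A minimal sketch of the rewrite approach Heikki describes, with hypothetical table and column names; doing everything inside one transaction means other sessions never see the table missing:

BEGIN;
CREATE TABLE t1_new AS
    SELECT t1.pk, a.c1, a.c2, a.c3   -- take the new values from a
    FROM   t1 JOIN a USING (pk);
DROP TABLE t1;
ALTER TABLE t1_new RENAME TO t1;
COMMIT;

Indexes, constraints, and permissions are not copied by CREATE TABLE AS, so they have to be recreated on the new table afterwards.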
{
"msg_contents": "Hello,\n\n> For a big table with more than 10 Million records, may I know which update is\n> quicker please?\n> (1) update t1\n> set c1 = a.c1\n> from a\n> where pk and\n> t1.c1 <> a.c1;\n> ......\n> update t1\n> set c_N = a.c_N\n> from a\n> where pk and\n> t1.c_N <> a.c_N;\n>\n>\n> (2) update t1\n> set c1 = a.c1 ,\n> c2 = a.c2,\n> ...\n> c_N = a.c_N\n> from a\n> where pk AND\n> (t1.c1, c2...c_N) <> (a.c1, c2... c_N)\n\nProbably (2). <> is not indexable, so each update will have to perform a\nsequential scan of the table. With (2), you only need to scan it once,\nwith (1) you have to scan it N times. Also, method (1) will update the\nsame row multiple times, if it needs to have more than one column updated.\n\n> Or other quicker way for update action?\n\nIf a large percentage of the table needs to be updated, it can be faster\nto create a new table, insert all the rows with the right values, drop\nthe old table and rename the new one in its place. All in one transaction.\n\nThe situation is:\n(t1.c1, c2, ... c_N) <> (a.c1, c2...c_N) won't return too many diff records. So, the calculation will only be query most of the case.\n\nBut if truncate/delete and copy will cause definitely write all more than 10 million data.\n\nIf for situation like this, will it still be quicker to delete/insert quicker?\nThank you\nEmi\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 24 Sep 2014 10:13:05 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Which update action quicker?"
}
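On the row-wise comparison in Emi's rewrite: plain <> between row constructors evaluates to unknown when any compared column pair contains a NULL, so such rows are silently skipped. If NULLs can occur, IS DISTINCT FROM is the null-safe spelling; a sketch with three hypothetical columns:

UPDATE t1
SET    c1 = a.c1, c2 = a.c2, c3 = a.c3
FROM   a
WHERE  t1.pk = a.pk
  -- null-safe: NULL vs. non-NULL counts as "distinct" and gets updated
  AND  (t1.c1, t1.c2, t1.c3) IS DISTINCT FROM (a.c1, a.c2, a.c3);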
] |
[
{
"msg_contents": "Help, please can anyone offer suggestions on how to speed this query up.\n\nthanks\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 26 Sep 2014 13:04:24 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Very slow postgreSQL 9.3.4 query"
},
{
"msg_contents": "\nA good way to start would be to introduce the query - describe what it is meant to do, give some performance data (your measurements of time taken, amount of data being processed, hardware used etc).\n\nGraeme.\n\n\nOn 26 Sep 2014, at 15:04, Burgess, Freddie <[email protected]> wrote:\n\n> Help, please can anyone offer suggestions on how to speed this query up.\n> \n> thanks\n> \n> \n> <Poor Pref query.txt>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Sep 2014 13:55:55 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
},
{
"msg_contents": "Workflow description:\n\n1.) User draws a polygon around an area of interest, via UI.\n2.) UI responses with how many sensors reside within the area of the polygon.\n3.) Hibernate generates the count query detailed in the attachment.\n\nPerformance data is included in the attachment, via EXPLAIN PLAN, query takes approx 6 minutes to return count to UI.\nAmount of data processed is also included in the attachment, 185 million row partition.\n\nHardware\n\nVM \n80GB memory\n8 CPU Xeon\nLinux 2.6.32-431.3.1.el6.x86-64\n40TB disk, Database size: 8TB \nPostgreSQL 9.3.4 with POSTGIS 2.1.1, Red Hat 4.4.7-4, 64 bit \nstreaming replication\n\nPostgresql.conf\n\nmax_connection = 100\nshared_buffers = 32GB\nwork_mem = 16MB\nmaintenance_work_mem = 1GB\nseq_page_cost = 1.0\nrandom_page_cost = 2.0\ncpu_tuple_cost = 0.03\neffective_cache_size = 48GB\n\n________________________________________\nFrom: Graeme B. Bell [[email protected]]\nSent: Friday, September 26, 2014 9:55 AM\nTo: Burgess, Freddie\nCc: [email protected]\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\nA good way to start would be to introduce the query - describe what it is meant to do, give some performance data (your measurements of time taken, amount of data being processed, hardware used etc).\n\nGraeme.\n\n\nOn 26 Sep 2014, at 15:04, Burgess, Freddie <[email protected]> wrote:\n\n> Help, please can anyone offer suggestions on how to speed this query up.\n>\n> thanks\n>\n>\n> <Poor Pref query.txt>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Sep 2014 16:17:12 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
},
{
"msg_contents": "I have a cron job that updates the statistics on the \"doti_sensor_report\" table on the first Saturday of every month. Do you think I should re-generate these statistics more often? This table receives streaming inserts to the volume of about 350 million tuples per-month.\n\nI'll generate new stat's over the weekend, and then execute a new plan\n\nthanks\n\n________________________________\nFrom: Victor Yegorov [[email protected]]\nSent: Friday, September 26, 2014 3:15 PM\nTo: Burgess, Freddie\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\n2014-09-26 19:17 GMT+03:00 Burgess, Freddie <[email protected]<mailto:[email protected]>>:\nPerformance data is included in the attachment, via EXPLAIN PLAN, query takes approx 6 minutes to return count to UI.\nAmount of data processed is also included in the attachment, 185 million row partition.\n\nIt looks like your statistics are off:\n\n-> Index Scan using idx_sensor_report_query_y2014m09 on doti_sensor_report_y2014m09 this__1 (cost=0.57..137498.17 rows=3883 width=0) (actual time=168.416..348873.308 rows=443542 loops=1)\n\nOptimizer expects to find ~ 4k rows, while in reality there're 2 orders of magnitude more rows that matches the condition.\nPerhaps BitmapIndexScan could be faster here.\n\n\n--\nVictor Y. Yegorov\n\n\n\n\n\n\n\nI have a cron job that updates the statistics on the\n\"doti_sensor_report\" table on the\nfirst Saturday of every month. Do you think I should re-generate these statistics more often? This table receives streaming inserts to the volume of about 350 million tuples per-month.\n\nI'll generate new stat's over the weekend, and then execute a new plan\n\nthanks\n\n\n\nFrom: Victor Yegorov [[email protected]]\nSent: Friday, September 26, 2014 3:15 PM\nTo: Burgess, Freddie\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\n\n\n\n\n\n2014-09-26 19:17 GMT+03:00 Burgess, Freddie \n<[email protected]>:\n\nPerformance data is included in the attachment, via EXPLAIN PLAN, query takes approx 6 minutes to return count to UI.\nAmount of data processed is also included in the attachment, 185 million row partition.\n\n\n\n\nIt looks like your statistics are off:\n\n\n-> Index Scan using idx_sensor_report_query_y2014m09 on doti_sensor_report_y2014m09 this__1 (cost=0.57..137498.17 rows=3883\n width=0) (actual time=168.416..348873.308 rows=443542 loops=1)\n\nOptimizer expects to find ~ 4k rows, while in reality there're 2 orders of magnitude more rows that matches the condition.\nPerhaps BitmapIndexScan could be faster here.\n\n\n\n-- \nVictor Y. Yegorov",
"msg_date": "Fri, 26 Sep 2014 20:07:11 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
},
{
"msg_contents": "We also have in our postgresql.conf file\n\nautovaccum = on\ndefault_statistics_target = 100\n\nDo you recommend any changes?\n\nThis partitioned table doti_sensor_report contains in total approximately 15 billion rows, autovaccum current has three processes that are running continuously on the box and specifically targeting this table to keep up.\n\nWhat does the upgrade to 9.3.5 buy us in terms of performance improvements?\n\nthanks Victor\n\nFreddie\n\n________________________________\nFrom: Victor Yegorov [[email protected]]\nSent: Friday, September 26, 2014 4:25 PM\nTo: Burgess, Freddie\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\n2014-09-26 23:07 GMT+03:00 Burgess, Freddie <[email protected]<mailto:[email protected]>>:\nI have a cron job that updates the statistics on the \"doti_sensor_report\" table on the first Saturday of every month. Do you think I should re-generate these statistics more often? This table receives streaming inserts to the volume of about 350 million tuples per-month.\n\nThere's autovacuum that does the same job for you, I hope you have it enabled on your upgraded DB.\nIf not, then once-a-month stats is definitely not enough.\n\nI recommend you to look into autovacuum instead of using cron and tune per-table autovacuum settings:\nhttp://www.postgresql.org/docs/current/interactive/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n\nDefault parameters will cause autovacuum to process the table if 10% (analyze) or 20% (vacuum) of the table had changed. The bigger the table,\nthe longer it'll take to reach the threshold. Try lowering scale factors on a per-table basis, like:\n\n ALTER TABLE doti_sensor_report_y2014m09 SET (autovacuum_analyze_scale_factor=0.02, autovacuum_vacuum_scale_factor=0.05);\n\nAlso, given your tables are quite big, I would recommend to increase statistics targets for commonly used columns, like:\n\n ALTER TABLE doti_sensor_report_y2014m09 ALTER node_date_time SET STATISTICS 1000;\n\nHave a look at the docs on these topics and pick they way that suits you most.\n\n\nP.S. Consider upgrading to 9.3.5 also, it is a minor one: only restart is required.\n\n\n--\nVictor Y. Yegorov\n\n\n\n\n\n\n\nWe also have in our postgresql.conf file\n\nautovaccum = on\ndefault_statistics_target = 100\n\nDo you recommend any changes?\n\nThis partitioned table doti_sensor_report contains in total approximately 15 billion rows, autovaccum current has three processes that are running continuously on the box and specifically targeting this table to keep up.\n\nWhat does the upgrade to 9.3.5 buy us in terms of performance improvements?\n\nthanks Victor\n\nFreddie\n\n\n\nFrom: Victor Yegorov [[email protected]]\nSent: Friday, September 26, 2014 4:25 PM\nTo: Burgess, Freddie\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\n\n\n\n\n\n2014-09-26 23:07 GMT+03:00 Burgess, Freddie \n<[email protected]>:\n\n\nI have a cron job that updates the statistics on the \n\"doti_sensor_report\" table on the first Saturday of every month. Do you think I should re-generate these statistics more often? 
This table receives streaming inserts to the volume of about 350 million tuples per-month.\n\n\n\nThere's autovacuum that does the same job for you, I hope you have it enabled on your upgraded DB.\nIf not, then once-a-month stats is definitely not enough.\n\n\nI recommend you to look into autovacuum instead of using cron and tune per-table autovacuum settings:\nhttp://www.postgresql.org/docs/current/interactive/sql-createtable.html#SQL-CREATETABLE-STORAGE-PARAMETERS\n\n\nDefault parameters will cause autovacuum to process the table if 10% (analyze) or 20% (vacuum) of the table had changed. The bigger the table,\nthe longer it'll take to reach the threshold. Try lowering scale factors on a per-table basis, like:\n\n\n ALTER TABLE doti_sensor_report_y2014m09 SET (autovacuum_analyze_scale_factor=0.02, autovacuum_vacuum_scale_factor=0.05);\n\n\nAlso, given your tables are quite big, I would recommend to increase statistics targets for commonly used columns, like:\n\n\n ALTER TABLE doti_sensor_report_y2014m09 ALTER node_date_time SET STATISTICS 1000;\n\n\nHave a look at the docs on these topics and pick they way that suits you most. \n\n\n\n\nP.S. Consider upgrading to 9.3.5 also, it is a minor one: only restart is required.\n\n\n\n-- \nVictor Y. Yegorov",
"msg_date": "Fri, 26 Sep 2014 22:25:17 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
},
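Before tuning, it can help to confirm what autovacuum has actually been doing on the partitions; a sketch using the standard pg_stat_user_tables view (the LIKE pattern assumes the partition naming seen in the thread):

SELECT relname, last_analyze, last_autoanalyze,
       last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM   pg_stat_user_tables
WHERE  relname LIKE 'doti_sensor_report%'
ORDER  BY relname;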
{
"msg_contents": "\nHi,\n\nTwo things:\n\n- Make sure you are creating a GIST index on your geometry column in postgis.\n- Try using st_intersects rather than &&. I've noticed that && isn't using indices correctly in some situations e.g. function indices for st_transform'd geo columns.\n\nGraeme\n\n\nOn 26 Sep 2014, at 18:17, Burgess, Freddie <[email protected]> wrote:\n\n> Workflow description:\n> \n> 1.) User draws a polygon around an area of interest, via UI.\n> 2.) UI responses with how many sensors reside within the area of the polygon.\n> 3.) Hibernate generates the count query detailed in the attachment.\n> \n> Performance data is included in the attachment, via EXPLAIN PLAN, query takes approx 6 minutes to return count to UI.\n> Amount of data processed is also included in the attachment, 185 million row partition.\n> \n> Hardware\n> \n> VM \n> 80GB memory\n> 8 CPU Xeon\n> Linux 2.6.32-431.3.1.el6.x86-64\n> 40TB disk, Database size: 8TB \n> PostgreSQL 9.3.4 with POSTGIS 2.1.1, Red Hat 4.4.7-4, 64 bit \n> streaming replication\n> \n> Postgresql.conf\n> \n> max_connection = 100\n> shared_buffers = 32GB\n> work_mem = 16MB\n> maintenance_work_mem = 1GB\n> seq_page_cost = 1.0\n> random_page_cost = 2.0\n> cpu_tuple_cost = 0.03\n> effective_cache_size = 48GB\n> \n> ________________________________________\n> From: Graeme B. Bell [[email protected]]\n> Sent: Friday, September 26, 2014 9:55 AM\n> To: Burgess, Freddie\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n> \n> A good way to start would be to introduce the query - describe what it is meant to do, give some performance data (your measurements of time taken, amount of data being processed, hardware used etc).\n> \n> Graeme.\n> \n> \n> On 26 Sep 2014, at 15:04, Burgess, Freddie <[email protected]> wrote:\n> \n>> Help, please can anyone offer suggestions on how to speed this query up.\n>> \n>> thanks\n>> \n>> \n>> <Poor Pref query.txt>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 29 Sep 2014 11:08:22 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
},
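A sketch of Graeme's two suggestions; the geometry column name (geom), the sample polygon, and the SRID are assumptions, not details from the thread:

-- GiST index on the geometry column of one partition:
CREATE INDEX idx_doti_sensor_report_geom
    ON doti_sensor_report_y2014m09 USING gist (geom);

-- ST_Intersects instead of a bare &&; it still uses the index via the
-- bounding-box test internally, then applies the exact geometry check:
SELECT count(*)
FROM   doti_sensor_report_y2014m09
WHERE  ST_Intersects(geom,
           ST_GeomFromText('POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))', 4326));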
{
"msg_contents": "I changed the query from (st_within or st_touches) to ST_intersects, that sped up the execution. Reference progress in Attachment please.\n\nThanks\n________________________________________\nFrom: Graeme B. Bell [[email protected]]\nSent: Monday, September 29, 2014 7:08 AM\nTo: Burgess, Freddie\nCc: [email protected]\nSubject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n\nHi,\n\nTwo things:\n\n- Make sure you are creating a GIST index on your geometry column in postgis.\n- Try using st_intersects rather than &&. I've noticed that && isn't using indices correctly in some situations e.g. function indices for st_transform'd geo columns.\n\nGraeme\n\n\nOn 26 Sep 2014, at 18:17, Burgess, Freddie <[email protected]> wrote:\n\n> Workflow description:\n>\n> 1.) User draws a polygon around an area of interest, via UI.\n> 2.) UI responses with how many sensors reside within the area of the polygon.\n> 3.) Hibernate generates the count query detailed in the attachment.\n>\n> Performance data is included in the attachment, via EXPLAIN PLAN, query takes approx 6 minutes to return count to UI.\n> Amount of data processed is also included in the attachment, 185 million row partition.\n>\n> Hardware\n>\n> VM\n> 80GB memory\n> 8 CPU Xeon\n> Linux 2.6.32-431.3.1.el6.x86-64\n> 40TB disk, Database size: 8TB\n> PostgreSQL 9.3.4 with POSTGIS 2.1.1, Red Hat 4.4.7-4, 64 bit\n> streaming replication\n>\n> Postgresql.conf\n>\n> max_connection = 100\n> shared_buffers = 32GB\n> work_mem = 16MB\n> maintenance_work_mem = 1GB\n> seq_page_cost = 1.0\n> random_page_cost = 2.0\n> cpu_tuple_cost = 0.03\n> effective_cache_size = 48GB\n>\n> ________________________________________\n> From: Graeme B. Bell [[email protected]]\n> Sent: Friday, September 26, 2014 9:55 AM\n> To: Burgess, Freddie\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Very slow postgreSQL 9.3.4 query\n>\n> A good way to start would be to introduce the query - describe what it is meant to do, give some performance data (your measurements of time taken, amount of data being processed, hardware used etc).\n>\n> Graeme.\n>\n>\n> On 26 Sep 2014, at 15:04, Burgess, Freddie <[email protected]> wrote:\n>\n>> Help, please can anyone offer suggestions on how to speed this query up.\n>>\n>> thanks\n>>\n>>\n>> <Poor Pref query.txt>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 30 Sep 2014 02:59:19 +0000",
"msg_from": "\"Burgess, Freddie\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Very slow postgreSQL 9.3.4 query"
}
] |
[
{
"msg_contents": "Hello,\nI am having a performance issue after upgrade from 8.4.20-1 -> 9.3.5. I am running on CentOS 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux.\nUpgrade was without any issues, I used pg_upgrade.\n\nOne of my queries now takes cca 100x more time than it used to. The query is:\nhttp://pastebin.com/uUe16SkR\n\nexplain from postgre 8.4.20-1:\nhttp://pastebin.com/r3WRHzSM\n\nexplain from postgre 9.3.5:\nhttp://pastebin.com/hmNxFiDL\n\nThe problematic part seems to be this (postgresql 93 version):\n SubPlan 17\n -> Limit (cost=8.29..8.41 rows=1 width=11)\n InitPlan 16 (returns $19)\n -> Index Scan using t_store_info_pkey on t_store_info s_7 (cost=0.28..8.29 rows=1 width=8)\n Index Cond: (id = 87::bigint)\n -> Nested Loop (cost=0.00..72351.91 rows=624663 width=11)\n -> Seq Scan on t_pn pn (cost=0.00..37498.65 rows=1 width=11) <<-----!!!!\n Filter: ((max(w.item_ean) = ean) AND (company_fk = $19))\n -> Seq Scan on t_weighting w4 (cost=0.00..28606.63 rows=624663 width=0)\n\nthis row: Seq Scan on t_pn pn (cost=0.00..37498.65 rows=1 width=11) in 8.4 explain looks like this:\n-> Index Scan using inx_pn_companyfk_ean on t_pn pn (cost=0.00..8.64 rows=1 width=11)\n Index Cond: ((company_fk = $19) AND ($20 = ean))\n\nAs You can see, 8.4 is using index scan on the table, 9.3 is using seq scan. The relevant index does exist in both databases.\nSo I tried to force 9.3 to use the index by:\nset enable_seqscan = off;\n\nNow explain analyze looks like this:\nhttp://pastebin.com/kR7qr39u\n\nthe relevant problematic part is:\n SubPlan 17w.stat_count_entered IS NULL AND w.stat_weight_start IS NULL))\n -> Limit (cost=9.15..9.31 rows=1 width=11)\n InitPlan 16 (returns $19)\n -> Index Scan using t_store_info_pkey on t_store_info s_7 (cost=0.28..8.29 rows=1 width=8)\n Index Cond: (id = 87::bigint)\n -> Nested Loop (cost=0.85..102881.78 rows=624667 width=11)\n -> Index Only Scan using int_t_weighting_coordinates on t_weighting w4 (cost=0.42..95064.99 rows=624667 <<---- !!!\n -> Materialize (cost=0.43..8.45 rows=1 width=11)\n -> Index Scan using inx_pn_companyfk_ean on t_pn pn (cost=0.43..8.45 rows=1 width=11)\n Index Cond: ((company_fk = $19) AND (max(w.item_ean) = ean))\n\nSo planner is now using index scan.\n\nQuery execution time with this is around 4.2 s (roughly same as in postgre 8.4) , with enable_seqscan=on it is around 360s (2 orders of magnitude higher than with postgre 8.4). What is interesting is, that query cost is roughly the same in both situations.\n\nMy questions are:\n 1. how to set postgresql / modify query / create some indexes / whatever, to get the same query running time in postgresql 9.3 as I had in 8.4\n 2. how is it possible for analyze to get same costs when the query running time is almost 100x higher.\n\nThank You for any ideas on this.\n--\nMatúš Svrček\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 26 Sep 2014 15:04:08 +0100 (GMT+01:00)",
"msg_from": "=?utf-8?Q?Mat=C3=BA=C5=A1_Svr=C4=8Dek?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "after upgrade 8.4->9.3 query is slow not using index scan"
},
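One way to compare the two plans without flipping the setting globally is a transaction-local toggle; this is a generic sketch, with the query itself abbreviated:

    BEGIN;
    SET LOCAL enable_seqscan = off;         -- affects only this transaction
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- the query from the pastebin
    ROLLBACK;                               -- the setting is discarded here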
{
"msg_contents": "2014-09-26 17:04 GMT+03:00 Matúš Svrček <[email protected]>:\n\n> I am having a performance issue after upgrade from 8.4.20-1 -> 9.3.5.\n\n\nFirst, make sure you have your statistics up to date — execute manual\n`VACUUM ANALYZE`.\n\nAnd then provide `EXPLAIN analyze` for 8.4 and `EXPLAIN (analyze, buffers)`\nfor 9.3 output.\n\n\n-- \nVictor Y. Yegorov\n\n2014-09-26 17:04 GMT+03:00 Matúš Svrček <[email protected]>:I am having a performance issue after upgrade from 8.4.20-1 -> 9.3.5. First, make sure you have your statistics up to date — execute manual `VACUUM ANALYZE`.And then provide `EXPLAIN analyze` for 8.4 and `EXPLAIN (analyze, buffers)` for 9.3 output.-- Victor Y. Yegorov",
"msg_date": "Fri, 26 Sep 2014 22:03:17 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: after upgrade 8.4->9.3 query is slow not using index scan"
}
] |
[
{
"msg_contents": "Hello,\nat the moment, I am not able to replicate the results posted. Explain analyze results are very similar at the moment on both databases, query runtime is also almost same.\nThis has happened after re-running VACUUM ANALYZE VERBOSE. However, right after the migration, I did run the script, which was generated by pg_upgrade script, which should have ran VACUUM on the database, the script name is analyze_new_cluster.sh. \nMaybe after re-running vacuum analyze the statistics somehow changed?\n\nCase closed, thank You.\n--\nMatúš Svrček\nPlainText s.r.o. [www.plaintext.sk]\[email protected]\n\n\n----- \"Victor Yegorov\" <[email protected]> wrote:\n\n> 2014-09-26 17:04 GMT+03:00 Matúš Svrček < [email protected] > :\n> \n> \n> I am having a performance issue after upgrade from 8.4.20-1 -> 9.3.5.\n> \n> \n> First, make sure you have your statistics up to date — execute manual\n> `VACUUM ANALYZE`.\n> And then provide `EXPLAIN analyze` for 8.4 and `EXPLAIN (analyze,\n> buffers)` for 9.3 output.\n> \n> \n> \n> --\n> Victor Y. Yegorov\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 28 Sep 2014 21:24:15 +0100 (GMT+01:00)",
"msg_from": "=?utf-8?Q?Mat=C3=BA=C5=A1_Svr=C4=8Dek?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: after upgrade 8.4->9.3 query is slow not using index\n scan"
}
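If the statistics did change, it should show up in the analyze timestamps; a generic check (t_pn is one of the tables from the plans above):

    SELECT relname, last_analyze, last_autoanalyze
    FROM pg_stat_user_tables
    WHERE relname = 't_pn';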
] |
[
{
"msg_contents": "Hello, I have a table that receives lots of updates and inserts.\nAuto vaccum is always being cancelled on that table.\nOne day the database went on standby and I had to act manually to recover.\n\nWhat should I do to avoid auto vaccum cancel?\n\nHello, I have a table that receives lots of updates and inserts.Auto vaccum is always being cancelled on that table. One day the database went on standby and I had to act manually to recover.What should I do to avoid auto vaccum cancel?",
"msg_date": "Thu, 2 Oct 2014 01:43:20 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "auto vaccum is dying"
},
{
"msg_contents": "On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:\n> Hello, I have a table that receives lots of updates and inserts.\n> Auto vaccum is always being cancelled on that table.\n> One day the database went on standby and I had to act manually to recover.\n>\n> What should I do to avoid auto vaccum cancel?\n\nCancellation happens when you run a command that requires an a stronger \non the table, like ALTER or TRUNCATE. Plain UPDATEs or INSERTS will not \ncause cancellations. There must be something else going on, causing the \ncancellations.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Oct 2014 09:53:03 +0300",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vaccum is dying"
},
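To see in the server log what autovacuum is doing, and which statements it is cancelled for, one setting can be enabled; this is a generic suggestion, not something proposed in the thread:

    # postgresql.conf
    log_autovacuum_min_duration = 0   # log every autovacuum action (and cancellations)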
{
"msg_contents": "I think I've read that when auto-vacuum takes too long, run it more often.\n\nOn Thu, Oct 2, 2014 at 8:53 AM, Heikki Linnakangas <[email protected]>\nwrote:\n\n> On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:\n>\n>> Hello, I have a table that receives lots of updates and inserts.\n>> Auto vaccum is always being cancelled on that table.\n>> One day the database went on standby and I had to act manually to recover.\n>>\n>> What should I do to avoid auto vaccum cancel?\n>>\n>\n> Cancellation happens when you run a command that requires an a stronger on\n> the table, like ALTER or TRUNCATE. Plain UPDATEs or INSERTS will not cause\n> cancellations. There must be something else going on, causing the\n> cancellations.\n>\n> - Heikki\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI think I've read that when auto-vacuum takes too long, run it more often.On Thu, Oct 2, 2014 at 8:53 AM, Heikki Linnakangas <[email protected]> wrote:On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:\n\nHello, I have a table that receives lots of updates and inserts.\nAuto vaccum is always being cancelled on that table.\nOne day the database went on standby and I had to act manually to recover.\n\nWhat should I do to avoid auto vaccum cancel?\n\n\nCancellation happens when you run a command that requires an a stronger on the table, like ALTER or TRUNCATE. Plain UPDATEs or INSERTS will not cause cancellations. There must be something else going on, causing the cancellations.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 2 Oct 2014 09:42:35 +0200",
"msg_from": "Dorian Hoxha <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vaccum is dying"
},
{
"msg_contents": "On Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]>\nwrote:\n\n> Hello, I have a table that receives lots of updates and inserts.\n> Auto vaccum is always being cancelled on that table.\n>\n\nDo you have a scheduled task that clusters or reindexes the table?\n\nNewer versions of PostgreSQL will log the conflicting statement that caused\nthe vacuum to cancel.\n\n\n\n> One day the database went on standby and I had to act manually to recover.\n>\n\nI'm not sure what that means. Do you mean it stopped accepting commands to\nprevent \"wrap around\" data loss? Once autovacuum starts running on a table\nin \"prevent wrap around\", then it no longer voluntarily yields to other\nprocesses trying to take a conflicting lock.\n\n\n>\n> What should I do to avoid auto vaccum cancel?\n>\n\nIf you have scheduled jobs that do something on the table that requires a\nlock which conflicts with autovac, then you might want to include a manual\nVACUUM in that job.\n\nAlso, what full version are you running?\n\nCheers,\n\nJeff\n\nOn Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]> wrote:Hello, I have a table that receives lots of updates and inserts.Auto vaccum is always being cancelled on that table. Do you have a scheduled task that clusters or reindexes the table?Newer versions of PostgreSQL will log the conflicting statement that caused the vacuum to cancel. One day the database went on standby and I had to act manually to recover.I'm not sure what that means. Do you mean it stopped accepting commands to prevent \"wrap around\" data loss? Once autovacuum starts running on a table in \"prevent wrap around\", then it no longer voluntarily yields to other processes trying to take a conflicting lock. What should I do to avoid auto vaccum cancel?\nIf you have scheduled jobs that do something on the table that requires a lock which conflicts with autovac, then you might want to include a manual VACUUM in that job.Also, what full version are you running?Cheers,Jeff",
"msg_date": "Thu, 2 Oct 2014 08:34:35 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vaccum is dying"
},
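A sketch of the manual-VACUUM-in-the-job idea, with a hypothetical table name and retention period (both invented for illustration):

    -- scheduled cleanup job
    DELETE FROM my_busy_table WHERE created_at < now() - interval '30 days';
    VACUUM ANALYZE my_busy_table;  -- reclaim the dead tuples right after the bulk delete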
{
"msg_contents": "On Thu, Oct 2, 2014 at 3:53 AM, Heikki Linnakangas <[email protected]>\nwrote:\n\n> On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:\n>\n>> Hello, I have a table that receives lots of updates and inserts.\n>> Auto vaccum is always being cancelled on that table.\n>> One day the database went on standby and I had to act manually to recover.\n>>\n>> What should I do to avoid auto vaccum cancel?\n>>\n>\n> Cancellation happens when you run a command that requires an a stronger on\n> the table, like ALTER or TRUNCATE. Plain UPDATEs or INSERTS will not cause\n> cancellations. There must be something else going on, causing the\n> cancellations.\n>\n> - Heikki\n>\n>\nI only do updates, inserts and deletes.\n\nOn Thu, Oct 2, 2014 at 3:53 AM, Heikki Linnakangas <[email protected]> wrote:On 10/02/2014 07:43 AM, Rodrigo Barboza wrote:\n\nHello, I have a table that receives lots of updates and inserts.\nAuto vaccum is always being cancelled on that table.\nOne day the database went on standby and I had to act manually to recover.\n\nWhat should I do to avoid auto vaccum cancel?\n\n\nCancellation happens when you run a command that requires an a stronger on the table, like ALTER or TRUNCATE. Plain UPDATEs or INSERTS will not cause cancellations. There must be something else going on, causing the cancellations.\n\n- Heikki\n\nI only do updates, inserts and deletes.",
"msg_date": "Sat, 4 Oct 2014 14:31:09 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: auto vaccum is dying"
},
{
"msg_contents": "On Thu, Oct 2, 2014 at 12:34 PM, Jeff Janes <[email protected]> wrote:\n\n> On Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]>\n> wrote:\n>\n>> Hello, I have a table that receives lots of updates and inserts.\n>> Auto vaccum is always being cancelled on that table.\n>>\n>\n> Do you have a scheduled task that clusters or reindexes the table?\n>\n> Newer versions of PostgreSQL will log the conflicting statement that\n> caused the vacuum to cancel.\n>\n>\nI have nothing scheduled, only auto vacuum, but with the default parameters.\n\n\n>\n>\n>> One day the database went on standby and I had to act manually to recover.\n>>\n>\n> I'm not sure what that means. Do you mean it stopped accepting commands\n> to prevent \"wrap around\" data loss? Once autovacuum starts running on a\n> table in \"prevent wrap around\", then it no longer voluntarily yields to\n> other processes trying to take a conflicting lock.\n>\n>\n\nExactly, stopped to prevent wrap around. I think it was because auto vacuum\nis being canceled.\n\n\n>\n>> What should I do to avoid auto vaccum cancel?\n>>\n>\n> If you have scheduled jobs that do something on the table that requires a\n> lock which conflicts with autovac, then you might want to include a manual\n> VACUUM in that job.\n>\n> Also, what full version are you running?\n>\n>\nI am running postgres 9.1.4 with default auto vacuum parameters. I have\nonly a scheduled job that runs delete for old tuples. Sometimes it a lot of\ntuples. Beside that, no other tasks.\n\nCheers,\n>\n> Jeff\n>\n\nOn Thu, Oct 2, 2014 at 12:34 PM, Jeff Janes <[email protected]> wrote:On Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]> wrote:Hello, I have a table that receives lots of updates and inserts.Auto vaccum is always being cancelled on that table. Do you have a scheduled task that clusters or reindexes the table?Newer versions of PostgreSQL will log the conflicting statement that caused the vacuum to cancel.I have nothing scheduled, only auto vacuum, but with the default parameters. One day the database went on standby and I had to act manually to recover.I'm not sure what that means. Do you mean it stopped accepting commands to prevent \"wrap around\" data loss? Once autovacuum starts running on a table in \"prevent wrap around\", then it no longer voluntarily yields to other processes trying to take a conflicting lock. Exactly, stopped to prevent wrap around. I think it was because auto vacuum is being canceled. What should I do to avoid auto vaccum cancel?\nIf you have scheduled jobs that do something on the table that requires a lock which conflicts with autovac, then you might want to include a manual VACUUM in that job.Also, what full version are you running?I am running postgres 9.1.4 with default auto vacuum parameters. I have only a scheduled job that runs delete for old tuples. Sometimes it a lot of tuples. Beside that, no other tasks.Cheers,Jeff",
"msg_date": "Sat, 4 Oct 2014 14:31:33 -0300",
"msg_from": "Rodrigo Barboza <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: auto vaccum is dying"
},
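A quick way to see how close a cluster is to the wraparound shutdown threshold (a generic query, not from the thread):

    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY 2 DESC;   -- protection kicks in as this approaches roughly 2 billion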
{
"msg_contents": "On Sat, Oct 4, 2014 at 10:31 AM, Rodrigo Barboza <[email protected]>\nwrote:\n\n>\n> On Thu, Oct 2, 2014 at 12:34 PM, Jeff Janes <[email protected]> wrote:\n>\n>> On Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]>\n>> wrote:\n>>\n>>> Hello, I have a table that receives lots of updates and inserts.\n>>> Auto vaccum is always being cancelled on that table.\n>>>\n>>\n>> Do you have a scheduled task that clusters or reindexes the table?\n>>\n>> Newer versions of PostgreSQL will log the conflicting statement that\n>> caused the vacuum to cancel.\n>>\n>>\n> I have nothing scheduled, only auto vacuum, but with the default\n> parameters.\n>\n\nSo what is in the log files pertaining to this?\n\n\n>\n>> Also, what full version are you running?\n>>\n>>\n> I am running postgres 9.1.4 with default auto vacuum parameters. I have\n> only a scheduled job that runs delete for old tuples. Sometimes it a lot of\n> tuples. Beside that, no other tasks.\n>\n\n\nYou are missing 10 minor releases worth of bug fixes, some of which are\nrelated to autovacuuming.\n\nCheers,\n\nJeff\n\nOn Sat, Oct 4, 2014 at 10:31 AM, Rodrigo Barboza <[email protected]> wrote:On Thu, Oct 2, 2014 at 12:34 PM, Jeff Janes <[email protected]> wrote:On Wed, Oct 1, 2014 at 9:43 PM, Rodrigo Barboza <[email protected]> wrote:Hello, I have a table that receives lots of updates and inserts.Auto vaccum is always being cancelled on that table. Do you have a scheduled task that clusters or reindexes the table?Newer versions of PostgreSQL will log the conflicting statement that caused the vacuum to cancel.I have nothing scheduled, only auto vacuum, but with the default parameters.So what is in the log files pertaining to this? Also, what full version are you running?I am running postgres 9.1.4 with default auto vacuum parameters. I have only a scheduled job that runs delete for old tuples. Sometimes it a lot of tuples. Beside that, no other tasks.You are missing 10 minor releases worth of bug fixes, some of which are related to autovacuuming. Cheers,Jeff",
"msg_date": "Sat, 4 Oct 2014 11:12:26 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: auto vaccum is dying"
}
] |
[
{
"msg_contents": "Hi,\n\nI am using Postgresql 9.3.5 on Ubuntu and I have a sudden, unexplained \nfailure in a function that has been working for a long time.\n\n--------------- code ----------------\nCREATE OR REPLACE FUNCTION gen_random()\n RETURNS double precision AS\n$BODY$\nDECLARE\n num float8 := 0;\n den float8 := 281474976710655; -- 0xFFFFFFFFFFFF\n bytes bytea[6];\nBEGIN\n -- get random bytes from crypto module\n bytes := ext.gen_random_bytes(6);\n\n -- assemble a double precision value\n num := num + get_byte( bytes, 0 );\n FOR i IN 1..5 LOOP\n num := num * 256;\n num := num + get_byte( bytes, i );\n END LOOP;\n\n -- normalize value to range 0.0 .. 1.0\n RETURN num / den;\nEND;\n$BODY$\n LANGUAGE plpgsql VOLATILE;\n--------------- code ----------------\n\nThe error is:\nERROR: array value must start with \"{\" or dimension information\nSQL state: 22P02\nContext: PL/pgSQL function gen_random() line 8 at assignment\n\nwhich, if I'm counting correctly, is\nbytes := ext.gen_random_bytes(6);\n\nIf I comment out that line, it then tells me get_byte() is undefined, \nwhich should be impossible because it's built in.\n\n\nThis gen_random() function is in public, the pgcrypto function \ngen_random_bytes() is in a separate utility schema \"ext\". This is in a \ntest database which I am in process of modifying, but it works perfectly \nwhen dumped and restored to a different computer. This gen_random() \nfunction - and its environment - has been working in multiple systems \nfor quite a while.\n\nI suspect that the Postgresql installation somehow has been hosed and \nthat I'm looking at a reinstall, but I have no idea how I managed it. \nI'd like to know what happened so I can (try to) avoid it going \nforward. There haven't been any recent system updates, and AFAIK there \nhaven't been any crashes either. Occasionally pgAdmin3 does hang up, \nbut that happens very infrequently and has occurred on all the working \nsystems as well. I have been adding new tables and functions to the \npublic schema on this test system, but I haven't touched anything that \nwas already working.\n\nIt seems like Postgresql just snapped. Any ideas? Anything in \nparticular I might look at for a clue?\n\nThanks,\nGeorge\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 Oct 2014 19:00:58 -0400",
"msg_from": "George Neuner <[email protected]>",
"msg_from_op": true,
"msg_subject": "help: function failing"
},
{
"msg_contents": "On Thu, Oct 2, 2014 at 4:00 PM, George Neuner <[email protected]> wrote:\n> --------------- code ----------------\n> CREATE OR REPLACE FUNCTION gen_random()\n> RETURNS double precision AS\n> $BODY$\n> DECLARE\n> num float8 := 0;\n> den float8 := 281474976710655; -- 0xFFFFFFFFFFFF\n> bytes bytea[6];\n> BEGIN\n> -- get random bytes from crypto module\n> bytes := ext.gen_random_bytes(6);\n>\n> -- assemble a double precision value\n> num := num + get_byte( bytes, 0 );\n> FOR i IN 1..5 LOOP\n> num := num * 256;\n> num := num + get_byte( bytes, i );\n> END LOOP;\n>\n> -- normalize value to range 0.0 .. 1.0\n> RETURN num / den;\n> END;\n> $BODY$\n> LANGUAGE plpgsql VOLATILE;\n> --------------- code ----------------\n>\n> The error is:\n> ERROR: array value must start with \"{\" or dimension information\n> SQL state: 22P02\n> Context: PL/pgSQL function gen_random() line 8 at assignment\n>\n> which, if I'm counting correctly, is\n> bytes := ext.gen_random_bytes(6);\n\nGuessing on the name of ext.gen_random_bytes(6) it returns a value\nthat is incompatible with bytea[] array representation time from time,\nso take a closer look at ext.gen_random_bytes() first. You can test\nthe case using DO block.\n\n> If I comment out that line, it then tells me get_byte() is undefined,\n> which should be impossible because it's built in.\n\nFeels like somewhere inside ext.gen_random_bytes() you set a\nsearch_path that allows to see get_byte() and the search_path that was\nset before the gen_random() call doesn't allow it.\n\n-- \nKind regards,\nSergey Konoplev\nPostgreSQL Consultant and DBA\n\nhttp://www.linkedin.com/in/grayhemp\n+1 (415) 867-9984, +7 (499) 346-7196, +7 (988) 888-1979\[email protected]\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Oct 2014 13:41:04 -0700",
"msg_from": "Sergey Konoplev <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help: function failing"
},
{
"msg_contents": "\nOn 10/07/2014 04:41 PM, Sergey Konoplev wrote:\n> On Thu, Oct 2, 2014 at 4:00 PM, George Neuner <[email protected]> wrote:\n>> --------------- code ----------------\n>> CREATE OR REPLACE FUNCTION gen_random()\n>> RETURNS double precision AS\n>> $BODY$\n>> DECLARE\n>> num float8 := 0;\n>> den float8 := 281474976710655; -- 0xFFFFFFFFFFFF\n>> bytes bytea[6];\n>> BEGIN\n>> -- get random bytes from crypto module\n>> bytes := ext.gen_random_bytes(6);\n>>\n>> -- assemble a double precision value\n>> num := num + get_byte( bytes, 0 );\n>> FOR i IN 1..5 LOOP\n>> num := num * 256;\n>> num := num + get_byte( bytes, i );\n>> END LOOP;\n>>\n>> -- normalize value to range 0.0 .. 1.0\n>> RETURN num / den;\n>> END;\n>> $BODY$\n>> LANGUAGE plpgsql VOLATILE;\n>> --------------- code ----------------\n>>\n>> The error is:\n>> ERROR: array value must start with \"{\" or dimension information\n>> SQL state: 22P02\n>> Context: PL/pgSQL function gen_random() line 8 at assignment\n>>\n>> which, if I'm counting correctly, is\n>> bytes := ext.gen_random_bytes(6);\n> Guessing on the name of ext.gen_random_bytes(6) it returns a value\n> that is incompatible with bytea[] array representation time from time,\n> so take a closer look at ext.gen_random_bytes() first. You can test\n> the case using DO block.\n>\n>> If I comment out that line, it then tells me get_byte() is undefined,\n>> which should be impossible because it's built in.\n> Feels like somewhere inside ext.gen_random_bytes() you set a\n> search_path that allows to see get_byte() and the search_path that was\n> set before the gen_random() call doesn't allow it.\n>\n\nWhy does this code want an array of byteas?\n\nIt looks like the code thinks bytea[6] is a declaration of a bytea of \nlength 6, which of course it is not. Shouldn't it just be declared as:\n\n bytes bytea;\n\n?\n\n\nOh, and pgsql-performance is completely the wrong forum for this query. \nusage questions should be on pgsql-general.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Oct 2014 17:27:58 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: help: function failing"
}
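Putting Andrew's observation together, a corrected version of the function might look like this; only the declaration changes, the rest is the code from the original post:

    CREATE OR REPLACE FUNCTION gen_random()
      RETURNS double precision AS
    $BODY$
    DECLARE
        num   float8 := 0;
        den   float8 := 281474976710655; -- 0xFFFFFFFFFFFF
        bytes bytea;                     -- a single bytea, not bytea[6]
    BEGIN
        -- get random bytes from the crypto module
        bytes := ext.gen_random_bytes(6);

        -- assemble a double precision value
        num := num + get_byte(bytes, 0);
        FOR i IN 1..5 LOOP
            num := num * 256;
            num := num + get_byte(bytes, i);
        END LOOP;

        -- normalize value to range 0.0 .. 1.0
        RETURN num / den;
    END;
    $BODY$
      LANGUAGE plpgsql VOLATILE;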
] |
[
{
"msg_contents": "I ran into this oddity lately that goes against everything I thought I\nunderstood and was wondering if anyone had any insight. Version/env\ndetails at the end.\n\nThe root of it is these query times:\n\nmarcs=# select * from ccrimes offset 5140000 limit 1;\n[...data omitted...]\n(1 row)\nTime: 650.280 ms\nmarcs=# select description from ccrimes offset 5140000 limit 1;\n description\n-------------------------------------\n FINANCIAL IDENTITY THEFT OVER $ 300\n(1 row)\n\nTime: 1298.672 ms\n\nThese times are all from data that is cached and are very repeatable.\nYes, I know that offset and limit without an order by isn't useful for\npaging through data.\n\nAnd an explain on them both... everything looks the same other than\nthe width and actual times:\n\nmarcs=# explain (analyze,buffers) select * from ccrimes offset 5140000 limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=204146.73..204146.73 rows=1 width=202) (actual\ntime=1067.901..1067.901 rows=1 loops=1)\n Buffers: shared hit=152743\n -> Seq Scan on ccrimes (cost=0.00..204146.73 rows=5139873\nwidth=202) (actual time=0.014..810.672 rows=5140001 loops=1)\n Buffers: shared hit=152743\n Total runtime: 1067.951 ms\n(5 rows)\n\nTime: 1068.612 ms\nmarcs=# explain (analyze,buffers) select description from ccrimes\noffset 5140000 limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Limit (cost=204146.73..204146.73 rows=1 width=17) (actual\ntime=1713.027..1713.027 rows=1 loops=1)\n Buffers: shared hit=152743\n -> Seq Scan on ccrimes (cost=0.00..204146.73 rows=5139873\nwidth=17) (actual time=0.013..1457.521 rows=5140001 loops=1)\n Buffers: shared hit=152743\n Total runtime: 1713.053 ms\n(5 rows)\n\nTime: 1713.612 ms\n\nWhen I run the query and capture a profile using \"perf\" and compare\nthe two, the thing that stands out is the slot_getsomeattrs call that\ndominates the trace in the slow query but not in the faster \"SELECT *\"\nversion:\n\n- 39.25% postgres postgres [.] _start\n - _start\n - 99.47% slot_getsomeattrs\n ExecProject\n ExecScan\n ExecProcNode\n ExecLimit\n ExecProcNode\n standard_ExecutorRun\n 0x7f4315f7c427\n PortalRun\n PostgresMain\n PostmasterMain\n main\n __libc_start_main\n + 0.53% ExecProject\n+ 18.82% postgres postgres [.] HeapTupleSatisfiesMVCC\n+ 12.01% postgres postgres [.] 0xb6353\n+ 9.47% postgres postgres [.] 
ExecProject\n\n\n The table is defined as:\n\n Column | Type | Modifiers\n----------------------+--------------------------------+-----------\n s_updated_at_0 | timestamp(3) with time zone |\n s_version_1 | bigint |\n s_id_2 | bigint |\n s_created_at_3 | timestamp(3) with time zone |\n id | numeric |\n case_number | text |\n date | timestamp(3) without time zone |\n block | text |\n iucr | text |\n primary_type | text |\n description | text |\n location_description | text |\n arrest | boolean |\n domestic | boolean |\n beat | text |\n district | text |\n ward | numeric |\n community_area | text |\n fbi_code | text |\n x_coordinate | numeric |\n y_coordinate | numeric |\n year | numeric |\n updated_on | timestamp(3) without time zone |\n latitude | numeric |\n longitude | numeric |\n location_lat | double precision |\n location_long | double precision |\n\n\nI've been testing this against Postgres 9.3.5 on Ubuntu 12.04 LTS\nrunning with a 3.2.0 kernel, and get similar results on both raw\nhardware and in Azure VMs. This repros on boxes with no other load.\n\nAny suggestions about what is going on or where to dig further would\nbe appreciated. I can make a pgdump of the data I'm using if anyone\nis interested.\n\nThanks.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 2 Oct 2014 19:17:48 -0700",
"msg_from": "Marc Slemko <[email protected]>",
"msg_from_op": true,
"msg_subject": "performance of SELECT * much faster than SELECT <colname> with large\n offset"
},
{
"msg_contents": "Marc Slemko <[email protected]> writes:\n> I ran into this oddity lately that goes against everything I thought I\n> understood and was wondering if anyone had any insight.\n\nSELECT * avoids a projection step ... see ExecAssignScanProjectionInfo.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 02 Oct 2014 22:39:37 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of SELECT * much faster than SELECT <colname> with\n large offset"
},
{
"msg_contents": "On Fri, Oct 3, 2014 at 5:39 AM, Tom Lane <[email protected]> wrote:\n> Marc Slemko <[email protected]> writes:\n>> I ran into this oddity lately that goes against everything I thought I\n>> understood and was wondering if anyone had any insight.\n>\n> SELECT * avoids a projection step ... see ExecAssignScanProjectionInfo.\n\nIt would be cool if OFFSET could somehow signal the child nodes \"don't\nbother constructing the actual tuple\". Not sure if that could work in\nmore complex queries. But this is just one of many performance\nproblems with large OFFSETs.\n\nOf course you can always work around this using a subquery...\nselect description from (\n select * from ccrimes offset 5140000 limit 1\n) subq;\n\nBut most of the time it's better to use scalable paging techniques:\nhttp://use-the-index-luke.com/sql/partial-results/fetch-next-page\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 11:40:04 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: performance of SELECT * much faster than SELECT\n <colname> with large offset"
}
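For completeness, a sketch of the keyset ('fetch next page') approach from the link above, assuming an index on ccrimes(id):

    -- instead of a large OFFSET, remember the last key the client saw
    SELECT * FROM ccrimes
    WHERE id > :last_seen_id
    ORDER BY id
    LIMIT 100;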
] |
[
{
"msg_contents": "Dear Pg people,\n\nI would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.\nWhat do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?\n\nAny real help will be really precious and appreciated.\nRoberto\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 10:55:04 +0200 (CEST)",
"msg_from": "Roberto Grandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Planning for Scalability"
},
{
"msg_contents": "Hi Roberto, \n\nHardware etc. is a solution; but you have not yet characterised the problem. \n\nYou should investigate if the events are mostly... \n\n- reads\n- writes\n- computationally intensive\n- memory intensive\n- I/O intensive\n- network I/O intensive\n- independent? (e.g. does it matter if you split the database in two?)\n\nYou should also find out if the current server comfortably supports 3 million events per day or if you already have problems there that need addressed. \nWhereas if it handles 3 million with plenty of spare I/O, memory, CPU, network bandwidth, then maybe it will handle 5 million without changing anything.\n\nOnce you've gathered this information (using tools like pg_stat_statements, top, iotop, ... and by thinking about what the tables are doing), look at it and see if the answer is obvious.\nIf not, think about what is confusing for a while, and then write your thoughts and data as a new question to the list.\n\nGraeme.\n\n\n\nOn 03 Oct 2014, at 10:55, Roberto Grandi <[email protected]> wrote:\n\n> Dear Pg people,\n> \n> I would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.\n> What do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?\n> \n> Any real help will be really precious and appreciated.\n> Roberto\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 10:47:54 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning for Scalability"
},
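As a concrete starting point for this kind of characterisation, a typical pg_stat_statements query (the extension has to be installed and preloaded; column names as in the 9.x versions):

    SELECT query, calls, total_time, rows
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 10;   -- the statements consuming the most total execution time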
{
"msg_contents": "On Fri, Oct 03, 2014 at 10:55:04AM +0200, Roberto Grandi wrote:\n> Dear Pg people,\n> \n> I would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.\n> What do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?\n> \n> Any real help will be really precious and appreciated.\n> Roberto\n> \n\nHi Roberto,\n\nThis change is within a factor of 2 of your existing load. I would start with\nanalyzing the load on your existing system to determine where your bottlenecks\nare. 5M/day is 57/sec evenly distributed or 174/sec in an 8 hour period. This\ndoes not seems like a lot, but you have given us no details on your actual\nworkload.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 08:00:03 -0500",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning for Scalability"
},
{
"msg_contents": "Dear All\n\nthanks for your precious help. I'll come back to the list once analyzed our system.\n\nRoberto\n\n----- Messaggio originale -----\nDa: [email protected]\nA: \"Roberto Grandi\" <[email protected]>\nCc: [email protected]\nInviato: Venerdì, 3 ottobre 2014 15:00:03\nOggetto: Re: [PERFORM] Planning for Scalability\n\nOn Fri, Oct 03, 2014 at 10:55:04AM +0200, Roberto Grandi wrote:\n> Dear Pg people,\n> \n> I would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.\n> What do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?\n> \n> Any real help will be really precious and appreciated.\n> Roberto\n> \n\nHi Roberto,\n\nThis change is within a factor of 2 of your existing load. I would start with\nanalyzing the load on your existing system to determine where your bottlenecks\nare. 5M/day is 57/sec evenly distributed or 174/sec in an 8 hour period. This\ndoes not seems like a lot, but you have given us no details on your actual\nworkload.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 15:43:49 +0200 (CEST)",
"msg_from": "Roberto Grandi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Planning for Scalability"
},
{
"msg_contents": "On Fri, Oct 3, 2014 at 5:55 AM, Roberto Grandi\n<[email protected]> wrote:\n> Dear Pg people,\n>\n> I would ask for your help considering this scaling issue. We are planning to move from 3Millions of events/day instance of postgres (8 CPU, 65 gb ram) to 5 millions of items/day.\n\nThe most important hardware part there is your I/O subsystem, which\nyou didn't include. Lets assume you put whatever works.\n\n> What do you suggest in order to plan this switch? Add separate server? Increase RAM? Use SSD?\n\nWith that kind of hardware, and a RAID10 of 4 SSDs, we're handling\nabout 6000 peak (1300 sustained) read transactions per second. They're\nnot trivial reads. They each process quite a lot of data. Write load\nis not huge, steady at 15 writes per second, but we've got lots of\nbulk inserts/update as well. Peak write thoughput is about 30 qps, but\neach query bulk-loads so it's probably equivalent to 3000 or so.\n\nIn essence, unless your I/O subsystem sucks, I think you'll be fine.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 3 Oct 2014 13:33:18 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Planning for Scalability"
}
] |
[
{
"msg_contents": "Hi,\nI have similar problem as in\nhttp://www.postgresql.org/message-id/flat/[email protected]#[email protected]\n\nserver version is 9.3.4\n\nHere is only two quite simple tables:\n\ndb_new=# \\d activities_example\n Table \"public.activities_example\"\n Column | Type | Modifiers\n----------------+---------+-----------\n id | integer |\n order_chain_id | integer |\nIndexes:\n \"activities_example_idx\" btree (order_chain_id)\n\ndb_new=# \\d orders_example\nTable \"public.orders_example\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n\nNumber of rows as below:\n\ndb_new=# select count(*) from activities_example ;\n count\n---------\n 3059965\n\ndb_new=# select count(*) from orders_example ;\n count\n-------\n 19038\n\ndb_new=# select count(*) from activities_example where order_chain_id in\n(select id from orders_example);\n count\n-------\n 91426\n(1 row)\n\n\nand I can see that planner uses hashjoin with all enabled options and\nnested loop with disabled parameter:\n\ndb_new=# explain analyze select * from activities_example where\norder_chain_id in (select id from orders_example);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=513.36..57547.59 rows=89551 width=8) (actual\ntime=18.340..966.367 rows=91426 loops=1)\n Hash Cond: (activities_example.order_chain_id = orders_example.id)\n -> Seq Scan on activities_example (cost=0.00..44139.65 rows=3059965\nwidth=8) (actual time=0.018..294.216 rows=3059965 loops=1)\n -> Hash (cost=275.38..275.38 rows=19038 width=4) (actual\ntime=5.458..5.458 rows=19038 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 670kB\n -> Seq Scan on orders_example (cost=0.00..275.38 rows=19038\nwidth=4) (actual time=0.015..2.308 rows=19038 loops=1)\n Total runtime: 970.234 ms\n(7 rows)\n\ndb_new=# set enable_hashjoin = off;\nSET\ndb_new=# explain analyze select * from activities_example where\norder_chain_id in (select id from orders_example);\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1629.09..166451.01 rows=89551 width=8) (actual\ntime=16.091..116.476 rows=91426 loops=1)\n -> Unique (cost=1628.66..1723.85 rows=19038 width=4) (actual\ntime=15.929..23.156 rows=19038 loops=1)\n -> Sort (cost=1628.66..1676.25 rows=19038 width=4) (actual\ntime=15.892..19.884 rows=19038 loops=1)\n Sort Key: orders_example.id\n Sort Method: external sort Disk: 264kB\n -> Seq Scan on orders_example (cost=0.00..275.38\nrows=19038 width=4) (actual time=0.015..2.747 rows=19038 loops=1)\n -> Index Scan using activities_example_idx on activities_example\n (cost=0.43..8.60 rows=5 width=8) (actual time=0.002..0.004 rows=5\nloops=19038)\n Index Cond: (order_chain_id = orders_example.id)\n Total runtime: 121.366 ms\n(9 rows)\n\nsecond runtime is much more quicker.\n\nWhat is the reason of \"Seq Scan on activities_example\" in the first case?\nIs it possible to force optimizer choose the second plan without doing\n \"set enable_hashjoin = off;\" ?\n\nIncreasing of 'effective_cache_size' leads to similar thing with\nmergejoin,\nother options (work_mem, shared_buffers. 
etc) do not change anything.\n\nThanks in advance.\n\n-- \nRegards, Andrey Lizenko",
"msg_date": "Fri, 3 Oct 2014 19:38:15 +0400",
"msg_from": "Andrey Lizenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan question, nested loop vs hash join"
},
{
"msg_contents": "On Fri, Oct 3, 2014 at 6:38 PM, Andrey Lizenko <[email protected]> wrote:\n> Is it possible to force optimizer choose the second plan without doing \"set\n> enable_hashjoin = off;\" ?\n>\n> Increasing of 'effective_cache_size' leads to similar thing with mergejoin,\n> other options (work_mem, shared_buffers. etc) do not change anything.\n\nHave you tried changing random_page_cost?\n\nIn small databases where most of the data is cached anyway, lowering\nrandom_page_cost to somewhere between 1 and 2 usually leads to better\nplanner decisions.\n\nRegards,\nMarti\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Oct 2014 23:26:38 +0300",
"msg_from": "Marti Raudsepp <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan question, nested loop vs hash join"
},
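Both settings can be tried per session before committing them to postgresql.conf; the values below are only an example:

    SET random_page_cost = 1.5;          -- between 1 and 2 for mostly-cached data
    SET effective_cache_size = '48GB';   -- roughly shared_buffers plus the OS cache
    EXPLAIN ANALYZE
    SELECT * FROM activities_example
    WHERE order_chain_id IN (SELECT id FROM orders_example);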
{
"msg_contents": "Thanks for your reply, Marti, as I answered to Tom couple of days ago\nadjusting of 'effective_cache_size' to 80% of RAM and 'random_page_cost'\nfrom 2 to 1 helped me.\n\n\nOn 8 October 2014 00:26, Marti Raudsepp <[email protected]> wrote:\n\n> On Fri, Oct 3, 2014 at 6:38 PM, Andrey Lizenko <[email protected]>\n> wrote:\n> > Is it possible to force optimizer choose the second plan without doing\n> \"set\n> > enable_hashjoin = off;\" ?\n> >\n> > Increasing of 'effective_cache_size' leads to similar thing with\n> mergejoin,\n> > other options (work_mem, shared_buffers. etc) do not change anything.\n>\n> Have you tried changing random_page_cost?\n>\n> In small databases where most of the data is cached anyway, lowering\n> random_page_cost to somewhere between 1 and 2 usually leads to better\n> planner decisions.\n>\n> Regards,\n> Marti\n>\n\n\n\n-- \nС уважением, Андрей Лизенко\n\nThanks for your reply, Marti, as I answered to Tom couple of days ago adjusting of 'effective_cache_size' to 80% of RAM and 'random_page_cost' from 2 to 1 helped me.On 8 October 2014 00:26, Marti Raudsepp <[email protected]> wrote:On Fri, Oct 3, 2014 at 6:38 PM, Andrey Lizenko <[email protected]> wrote:\n> Is it possible to force optimizer choose the second plan without doing \"set\n> enable_hashjoin = off;\" ?\n>\n> Increasing of 'effective_cache_size' leads to similar thing with mergejoin,\n> other options (work_mem, shared_buffers. etc) do not change anything.\n\nHave you tried changing random_page_cost?\n\nIn small databases where most of the data is cached anyway, lowering\nrandom_page_cost to somewhere between 1 and 2 usually leads to better\nplanner decisions.\n\nRegards,\nMarti\n-- С уважением, Андрей Лизенко",
"msg_date": "Wed, 8 Oct 2014 13:48:35 +0400",
"msg_from": "Andrey Lizenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan question, nested loop vs hash join"
}
] |
[
{
"msg_contents": "Hi,\nI have similar problem as in\nhttp://www.postgresql.org/message-id/flat/[email protected]#[email protected]\n\nserver version is 9.3.4\n\nHere is only two quite simple tables:\n\ndb_new=# \\d activities_example\n Table \"public.activities_example\"\n Column | Type | Modifiers\n----------------+---------+-----------\n id | integer |\n order_chain_id | integer |\nIndexes:\n \"activities_example_idx\" btree (order_chain_id)\n\ndb_new=# \\d orders_example\nTable \"public.orders_example\"\n Column | Type | Modifiers\n--------+---------+-----------\n id | integer |\n\nNumber of rows as below:\n\ndb_new=# select count(*) from activities_example ;\n count\n---------\n 3059965\n\ndb_new=# select count(*) from orders_example ;\n count\n-------\n 19038\n\ndb_new=# select count(*) from activities_example where order_chain_id in\n(select id from orders_example);\n count\n-------\n 91426\n(1 row)\n\n\nand I can see that planner uses hashjoin with all enabled options and\nnested loop with disabled parameter:\n\ndb_new=# explain analyze select * from activities_example where\norder_chain_id in (select id from orders_example);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=513.36..57547.59 rows=89551 width=8) (actual\ntime=18.340..966.367 rows=91426 loops=1)\n Hash Cond: (activities_example.order_chain_id = orders_example.id)\n -> Seq Scan on activities_example (cost=0.00..44139.65 rows=3059965\nwidth=8) (actual time=0.018..294.216 rows=3059965 loops=1)\n -> Hash (cost=275.38..275.38 rows=19038 width=4) (actual\ntime=5.458..5.458 rows=19038 loops=1)\n Buckets: 2048 Batches: 1 Memory Usage: 670kB\n -> Seq Scan on orders_example (cost=0.00..275.38 rows=19038\nwidth=4) (actual time=0.015..2.308 rows=19038 loops=1)\n Total runtime: 970.234 ms\n(7 rows)\n\ndb_new=# set enable_hashjoin = off;\nSET\ndb_new=# explain analyze select * from activities_example where\norder_chain_id in (select id from orders_example);\n QUERY\nPLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=1629.09..166451.01 rows=89551 width=8) (actual\ntime=16.091..116.476 rows=91426 loops=1)\n -> Unique (cost=1628.66..1723.85 rows=19038 width=4) (actual\ntime=15.929..23.156 rows=19038 loops=1)\n -> Sort (cost=1628.66..1676.25 rows=19038 width=4) (actual\ntime=15.892..19.884 rows=19038 loops=1)\n Sort Key: orders_example.id\n Sort Method: external sort Disk: 264kB\n -> Seq Scan on orders_example (cost=0.00..275.38\nrows=19038 width=4) (actual time=0.015..2.747 rows=19038 loops=1)\n -> Index Scan using activities_example_idx on activities_example\n (cost=0.43..8.60 rows=5 width=8) (actual time=0.002..0.004 rows=5\nloops=19038)\n Index Cond: (order_chain_id = orders_example.id)\n Total runtime: 121.366 ms\n(9 rows)\n\nsecond runtime is much more quicker.\n\nWhat is the reason of \"Seq Scan on activities_example\" in the first case?\nIs it possible to force optimizer choose the second plan without doing\n \"set enable_hashjoin = off;\" ?\n\nIncreasing of 'effective_cache_size' leads to similar thing with\nmergejoin,\nother options (work_mem, shared_buffers. 
etc) do not change anything.\n\nThanks in advance.\n\n-- \nRegards, Andrey Lizenko",
"msg_date": "Sun, 5 Oct 2014 22:57:18 +0400",
"msg_from": "Andrey Lizenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "query plan question, nested loop vs hash join"
},
{
"msg_contents": "2014-10-05 21:57 GMT+03:00 Andrey Lizenko <[email protected]>:\n\n> Increasing of 'effective_cache_size' leads to similar thing with\n> mergejoin,\n> other options (work_mem, shared_buffers. etc) do not change anything.\n>\n\nI think increasing `work_mem` should have effects, as plan with `Nested\nLoop` is using disk-based sort.\nIncrease it till you'll stop seeing `external sort` in the EXPLAIN output.\nSomething like '10MB' should do.\n\nAlso, it'd be handy if you could provide `EXPLAIN (analyze, buffers)`\noutput along with the results of these queries:\n\n SELECT name,setting,source FROM pg_settings WHERE name ~ 'cost' AND NOT\nname ~ 'vacuum';\n SELECT name,setting,source FROM pg_settings WHERE NOT source IN\n('default','override');\n\nAnd describe your setup: what OS? how much RAM? what kind of disks? RAID?\n\n-- \nVictor Y. Yegorov\n\n2014-10-05 21:57 GMT+03:00 Andrey Lizenko <[email protected]>:Increasing of 'effective_cache_size' leads to similar thing with mergejoin, other options (work_mem, shared_buffers. etc) do not change anything.I think increasing `work_mem` should have effects, as plan with `Nested Loop` is using disk-based sort.Increase it till you'll stop seeing `external sort` in the EXPLAIN output. Something like '10MB' should do.Also, it'd be handy if you could provide `EXPLAIN (analyze, buffers)` output along with the results of these queries: SELECT name,setting,source FROM pg_settings WHERE name ~ 'cost' AND NOT name ~ 'vacuum'; SELECT name,setting,source FROM pg_settings WHERE NOT source IN ('default','override');And describe your setup: what OS? how much RAM? what kind of disks? RAID?-- Victor Y. Yegorov",
"msg_date": "Sun, 5 Oct 2014 22:18:32 +0300",
"msg_from": "Victor Yegorov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan question, nested loop vs hash join"
},
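Victor's work_mem suggestion can likewise be tested per session; '10MB' is his figure, the rest is a sketch using the tables from the original post:

    SET work_mem = '10MB';   -- enough for the ~19k-row sort to stay in memory
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM activities_example
    WHERE order_chain_id IN (SELECT id FROM orders_example);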
{
"msg_contents": "Andrey Lizenko <[email protected]> writes:\n> What is the reason of \"Seq Scan on activities_example\" in the first case?\n> Is it possible to force optimizer choose the second plan without doing\n> \"set enable_hashjoin = off;\" ?\n\nDisabling hashjoins altogether would be a pretty dangerous \"fix\".\n\nI think the real issue here is that you have an entirely cached-in-memory\ndatabase and therefore you ought to reduce random_page_cost. The\nplanner's estimates for the first query seem to more or less match reality\n(on the assumption that 1 msec equals about 100 cost units on your\nmachine). The cost estimates for the second one are way off though,\nmainly in that the repeated indexscans are far cheaper than the planner\nthinks. Getting that cost estimate down requires reducing random_page_cost\nor increasing effective_cache_size or some combination.\n\nYou can find the conventional wisdow about this sort of thing at\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 05 Oct 2014 15:47:52 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query plan question, nested loop vs hash join"
},
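A sketch of the tuning Tom describes for a fully cached database; the specific values here are illustrative assumptions, not recommendations:

db_new=# SET random_page_cost = 1.1;        -- random reads cost about the same as sequential when cached
db_new=# SET effective_cache_size = '4GB';  -- roughly shared_buffers plus OS file cache
db_new=# EXPLAIN ANALYZE SELECT * FROM activities_example
         WHERE order_chain_id IN (SELECT id FROM orders_example);
-- with cheaper random I/O, the repeated index scans should be costed low
-- enough that the planner picks the nested-loop plan without any enable_* hacks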
{
"msg_contents": "Thanks a lot, Tom,\nreducing 'random_page_cost' from 2 to 1 and increasing 'effective_cache_size'\nfrom 70% to 80% of RAM solved this at least on my virtual sandbox.\nBy the way, why increasing of cache only (with the same random_page_cost=2) can\nlead to mergejoin selection?\n\n\nOn 5 October 2014 23:47, Tom Lane <[email protected]> wrote:\n\n> Andrey Lizenko <[email protected]> writes:\n> > What is the reason of \"Seq Scan on activities_example\" in the first case?\n> > Is it possible to force optimizer choose the second plan without doing\n> > \"set enable_hashjoin = off;\" ?\n>\n> Disabling hashjoins altogether would be a pretty dangerous \"fix\".\n>\n> I think the real issue here is that you have an entirely cached-in-memory\n> database and therefore you ought to reduce random_page_cost. The\n> planner's estimates for the first query seem to more or less match reality\n> (on the assumption that 1 msec equals about 100 cost units on your\n> machine). The cost estimates for the second one are way off though,\n> mainly in that the repeated indexscans are far cheaper than the planner\n> thinks. Getting that cost estimate down requires reducing random_page_cost\n> or increasing effective_cache_size or some combination.\n>\n> You can find the conventional wisdow about this sort of thing at\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> regards, tom lane\n>\n\n\n\n-- \nС уважением, Андрей Лизенко\n\nThanks a lot, Tom,reducing 'random_page_cost' from 2 to 1 and increasing 'effective_cache_size' from 70% to 80% of RAM solved this at least on my virtual sandbox.By the way, why increasing of cache only (with the same random_page_cost=2) can lead to mergejoin selection? On 5 October 2014 23:47, Tom Lane <[email protected]> wrote:Andrey Lizenko <[email protected]> writes:\n> What is the reason of \"Seq Scan on activities_example\" in the first case?\n> Is it possible to force optimizer choose the second plan without doing\n> \"set enable_hashjoin = off;\" ?\n\nDisabling hashjoins altogether would be a pretty dangerous \"fix\".\n\nI think the real issue here is that you have an entirely cached-in-memory\ndatabase and therefore you ought to reduce random_page_cost. The\nplanner's estimates for the first query seem to more or less match reality\n(on the assumption that 1 msec equals about 100 cost units on your\nmachine). The cost estimates for the second one are way off though,\nmainly in that the repeated indexscans are far cheaper than the planner\nthinks. Getting that cost estimate down requires reducing random_page_cost\nor increasing effective_cache_size or some combination.\n\nYou can find the conventional wisdow about this sort of thing at\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n regards, tom lane\n-- С уважением, Андрей Лизенко",
"msg_date": "Mon, 6 Oct 2014 16:50:18 +0400",
"msg_from": "Andrey Lizenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan question, nested loop vs hash join"
},
{
"msg_contents": "As I answered to Tom few moments ago:\n>reducing 'random_page_cost' from 2 to 1 and increasing\n'effective_cache_size' from 70% to 80% of RAM solved this at least on my\nvirtual sandbox.\nI've observed same behaviour both on weak virtual machine and on the quite\npowerfull stress test platform.\nThe first one is Ubuntu 12.04 LTS, second one is RedHat 6.4\nOf course, RAM. RAID, CPUs and so on are different enough, so I believe the\nroot clause of this issue is not connected with hardware at all.\n\nThanks for your idea with external sort, I'll test it\n\n\nOn 5 October 2014 23:18, Victor Yegorov <[email protected]> wrote:\n\n> 2014-10-05 21:57 GMT+03:00 Andrey Lizenko <[email protected]>:\n>\n>> Increasing of 'effective_cache_size' leads to similar thing with\n>> mergejoin,\n>> other options (work_mem, shared_buffers. etc) do not change anything.\n>>\n>\n> I think increasing `work_mem` should have effects, as plan with `Nested\n> Loop` is using disk-based sort.\n> Increase it till you'll stop seeing `external sort` in the EXPLAIN output.\n> Something like '10MB' should do.\n>\n> Also, it'd be handy if you could provide `EXPLAIN (analyze, buffers)`\n> output along with the results of these queries:\n>\n> SELECT name,setting,source FROM pg_settings WHERE name ~ 'cost' AND\n> NOT name ~ 'vacuum';\n> SELECT name,setting,source FROM pg_settings WHERE NOT source IN\n> ('default','override');\n>\n> And describe your setup: what OS? how much RAM? what kind of disks? RAID?\n>\n> --\n> Victor Y. Yegorov\n>\n\n\n\n-- \nС уважением, Андрей Лизенко\n\nAs I answered to Tom few moments ago:>reducing 'random_page_cost' from 2 to 1 and increasing 'effective_cache_size' from 70% to 80% of RAM solved this at least on my virtual sandbox.I've observed same behaviour both on weak virtual machine and on the quite powerfull stress test platform.The first one is Ubuntu 12.04 LTS, second one is RedHat 6.4Of course, RAM. RAID, CPUs and so on are different enough, so I believe the root clause of this issue is not connected with hardware at all.Thanks for your idea with external sort, I'll test itOn 5 October 2014 23:18, Victor Yegorov <[email protected]> wrote:2014-10-05 21:57 GMT+03:00 Andrey Lizenko <[email protected]>:Increasing of 'effective_cache_size' leads to similar thing with mergejoin, other options (work_mem, shared_buffers. etc) do not change anything.I think increasing `work_mem` should have effects, as plan with `Nested Loop` is using disk-based sort.Increase it till you'll stop seeing `external sort` in the EXPLAIN output. Something like '10MB' should do.Also, it'd be handy if you could provide `EXPLAIN (analyze, buffers)` output along with the results of these queries: SELECT name,setting,source FROM pg_settings WHERE name ~ 'cost' AND NOT name ~ 'vacuum'; SELECT name,setting,source FROM pg_settings WHERE NOT source IN ('default','override');And describe your setup: what OS? how much RAM? what kind of disks? RAID?-- Victor Y. Yegorov\n\n-- С уважением, Андрей Лизенко",
"msg_date": "Mon, 6 Oct 2014 16:58:12 +0400",
"msg_from": "Andrey Lizenko <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: query plan question, nested loop vs hash join"
}
] |
[
{
"msg_contents": "\n\n\n\n\nHello List, \n\n May I know will <idle> cause any potential performance\n issues for psql8.3 please?\n version (PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu, compiled\n by GCC 4.1.2)\n\n E.g., got 10 idle connections for 10 days. \n select current_query from pg_stat_activity where usename\n ='test';\n \n current_query \n--------------------------------------------------------------------------\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n\n Thanks a lot!\n Emi \n\n\n",
"msg_date": "Mon, 06 Oct 2014 10:54:22 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "<idle> issue?"
},
{
"msg_contents": "2014-10-06 11:54 GMT-03:00 Emi Lu <[email protected]>:\n\n> Hello List,\n>\n> May I know will <idle> cause any potential performance issues for psql8.3\n> please?\n> version (PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu, compiled by GCC\n> 4.1.2)\n>\n> E.g., got 10 idle connections for 10 days.\n> select current_query from pg_stat_activity where usename ='test';\n> current_query\n> --------------------------------------------------------------------------\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n>\n> Thanks a lot!\n> Emi\n>\n\nHi Emi,\n\nAs far as I know, it wont affect your performance.\n\nIt will affect the overall quantity of users that can connect to the\ndatabase though (since there is a limit that you can set up on\npostgres.conf).\n\nBR,\n\nFelipe\n\n2014-10-06 11:54 GMT-03:00 Emi Lu <[email protected]>:\n\nHello List, \n\n May I know will <idle> cause any potential performance\n issues for psql8.3 please?\n version (PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu, compiled\n by GCC 4.1.2)\n\n E.g., got 10 idle connections for 10 days. \n select current_query from pg_stat_activity where usename\n ='test';\n \n current_query \n--------------------------------------------------------------------------\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n <IDLE>\n\n Thanks a lot!\n Emi \n\nHi Emi,As far as I know, it wont affect your performance.It will affect the overall quantity of users that can connect to the database though (since there is a limit that you can set up on postgres.conf).BR,Felipe",
"msg_date": "Mon, 6 Oct 2014 12:07:34 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: <idle> issue?"
},
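On 8.3, where pg_stat_activity exposes current_query rather than the later state column, a sketch like this shows how close idle backends bring you to the connection limit:

SELECT current_setting('max_connections') AS max_connections,
       count(*) AS total_backends,
       sum(CASE WHEN current_query = '<IDLE>' THEN 1 ELSE 0 END) AS idle_backends
FROM pg_stat_activity;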
{
"msg_contents": "El lun, 06-10-2014 a las 10:54 -0400, Emi Lu escribió:\n> Hello List, \n> \n> May I know will <idle> cause any potential performance issues for\n> psql8.3 please?\n> version (PostgreSQL 8.3.18 on x86_64-unknown-linux-gnu, compiled by\n> GCC 4.1.2)\n> \n> E.g., got 10 idle connections for 10 days. \n> select current_query from pg_stat_activity where usename ='test';\n> \n> current_query \n> --------------------------------------------------------------------------\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> <IDLE>\n> \n> Thanks a lot!\n> Emi \n\nMaybe your application is using a pool of 10 connections but it doesn't\naffect any potential performance issues.\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 06 Oct 2014 23:10:38 +0200",
"msg_from": "jaime soler <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: <idle> issue?"
}
] |
[
{
"msg_contents": "I'm seeing some strange behavior running pg_basebackup on 9.3.5. It\nappears that there are long pauses - up to a couple of minutes - between\nrelatively short bursts of disk activity.\n\nI'm not noticing any lock requests outstanding. What might I be missing?\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\nI'm seeing some strange behavior running pg_basebackup on 9.3.5. It appears that there are long pauses - up to a couple of minutes - between relatively short bursts of disk activity. I'm not noticing any lock requests outstanding. What might I be missing?__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Mon, 6 Oct 2014 13:53:24 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "pg_basebackup - odd performance"
},
{
"msg_contents": "what is the cmd?the default checkpoint method is spread,so pg_basebackup\nwill wait the checkpoint complete on the master.you can set the checkpoint\nmethod to fast to avoid the waiting.\n2014年10月7日 2:54 AM于 \"Mike Blackwell\" <[email protected]>写道:\n\n> I'm seeing some strange behavior running pg_basebackup on 9.3.5. It\n> appears that there are long pauses - up to a couple of minutes - between\n> relatively short bursts of disk activity.\n>\n> I'm not noticing any lock requests outstanding. What might I be missing?\n>\n>\n> __________________________________________________________________________________\n> *Mike Blackwell | Technical Analyst, Distribution Services/Rollout\n> Management | RR Donnelley*\n> 1750 Wallace Ave | St Charles, IL 60174-3401\n> Office: 630.313.7818\n> [email protected]\n> http://www.rrdonnelley.com\n>\n>\n> <http://www.rrdonnelley.com/>\n> * <[email protected]>*\n>\n\nwhat is the cmd?the default checkpoint method is spread,so pg_basebackup will wait the checkpoint complete on the master.you can set the checkpoint method to fast to avoid the waiting.\n2014年10月7日 2:54 AM于 \"Mike Blackwell\" <[email protected]>写道:I'm seeing some strange behavior running pg_basebackup on 9.3.5. It appears that there are long pauses - up to a couple of minutes - between relatively short bursts of disk activity. I'm not noticing any lock requests outstanding. What might I be missing?__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Tue, 7 Oct 2014 10:59:30 +0800",
"msg_from": "Jov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pg_basebackup - odd performance"
},
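A sketch of Jov's suggestion; the host, user, and target directory are placeholders, while -c, -X, and -P are standard pg_basebackup options in 9.3:

# -c fast requests an immediate checkpoint instead of the default spread one;
# -X stream streams WAL alongside the base backup, -P reports progress
$ pg_basebackup -h master.example.com -U replicator \
      -D /var/lib/pgsql/backup -c fast -X stream -P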
{
"msg_contents": "Thanks for your reply. Adding '-c fast' does seem to improve the initial\ndelay. I'm still seeing delays of several minutes between write bursts.\nThe server has light OLTP loading.\n\n__________________________________________________________________________________\n*Mike Blackwell | Technical Analyst, Distribution Services/Rollout\nManagement | RR Donnelley*\n1750 Wallace Ave | St Charles, IL 60174-3401\nOffice: 630.313.7818\[email protected]\nhttp://www.rrdonnelley.com\n\n\n<http://www.rrdonnelley.com/>\n* <[email protected]>*\n\nOn Mon, Oct 6, 2014 at 9:59 PM, Jov <[email protected]> wrote:\n\n> what is the cmd?the default checkpoint method is spread,so pg_basebackup\n> will wait the checkpoint complete on the master.you can set the checkpoint\n> method to fast to avoid the waiting.\n> 2014年10月7日 2:54 AM于 \"Mike Blackwell\" <[email protected]>写道:\n>\n> I'm seeing some strange behavior running pg_basebackup on 9.3.5. It\n>> appears that there are long pauses - up to a couple of minutes - between\n>> relatively short bursts of disk activity.\n>>\n>> I'm not noticing any lock requests outstanding. What might I be missing?\n>>\n>>\n>> __________________________________________________________________________________\n>> *Mike Blackwell | Technical Analyst, Distribution Services/Rollout\n>> Management | RR Donnelley*\n>> 1750 Wallace Ave | St Charles, IL 60174-3401\n>> Office: 630.313.7818\n>> [email protected]\n>> http://www.rrdonnelley.com\n>>\n>>\n>> <http://www.rrdonnelley.com/>\n>> * <[email protected]>*\n>>\n>\n\nThanks for your reply. Adding '-c fast' does seem to improve the initial delay. I'm still seeing delays of several minutes between write bursts. The server has light OLTP loading.__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com\nOn Mon, Oct 6, 2014 at 9:59 PM, Jov <[email protected]> wrote:what is the cmd?the default checkpoint method is spread,so pg_basebackup will wait the checkpoint complete on the master.you can set the checkpoint method to fast to avoid the waiting.\n2014年10月7日 2:54 AM于 \"Mike Blackwell\" <[email protected]>写道:I'm seeing some strange behavior running pg_basebackup on 9.3.5. It appears that there are long pauses - up to a couple of minutes - between relatively short bursts of disk activity. I'm not noticing any lock requests outstanding. What might I be missing?__________________________________________________________________________________\nMike Blackwell | Technical Analyst, Distribution Services/Rollout Management | RR Donnelley\n1750 Wallace Ave | St Charles, IL 60174-3401 \nOffice: 630.313.7818 \[email protected]\nhttp://www.rrdonnelley.com",
"msg_date": "Tue, 7 Oct 2014 14:55:48 -0500",
"msg_from": "Mike Blackwell <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: pg_basebackup - odd performance"
}
] |
[
{
"msg_contents": "After upgrade from 9.3.1 to 9.3.5 we expirienced a slight performance degradation of all queries. Query time increased to some amount of ms, mostly in range of 100ms. Some actions in our application results in a lot of small queries and in such cases performance degradation is very significant - total action performs for a 2-3 times longer then before (15s -> 40s, etc).\n\nUsing git-bisect we've found a bad revision causes performance drop: it is 324577f39bc8738ed0ec24c36c5cb2c2f81ec660\n\nAll tests were performed on the same server with same postgresql.conf, the only load on this server is postgresql test setup.\n\nHere is example query plan of one query: http://explain.depesz.com/s/CWA\nAnecdotally, when such a query executed in psql, it shows different results than when executes as a part of application-induced batch of queries. For example, the above query takes 24ms on \"fast\" postgres version, and 80ms on \"slow\" postgres versions. But when executed in \"standalone\" mode from psql, it takes 9.5-13 ms independently on postgres version. So we're logged all statements from our test suite using auto_explain module.\n\nHere is query time difference on different postgresql versions:\n# grep \"duration: \" 9.3-fast.log |awk '{sum += $6}; END { print sum}'\n8309.05\n# grep \"duration: \" 9.3.5.log |awk '{sum += $6}; END { print sum}'\n24142\n\nLog from postgres from 1 revision before bad: http://tgt72.ru/static/tmp/9.3-fast.log\nLog from 9.3.5: http://tgt72.ru/static/tmp/9.3.5.log\nDatabase schema: http://tgt72.ru/static/tmp/gits.sql\n\npostgresql.conf:\ndata_directory = '/var/lib/postgresql/9.3/main' # use data in another directory\nhba_file = '/etc/postgresql/9.3/main/pg_hba.conf' # host-based authentication file\nident_file = '/etc/postgresql/9.3/main/pg_ident.conf' # ident configuration file\nexternal_pid_file = '/var/run/postgresql/9.3-main.pid' # write an extra PID file\nlisten_addresses = '*' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nunix_socket_directories = '/var/run/postgresql' # comma-separated list of directories\nssl = true # (change requires restart)\nssl_cert_file = '/etc/ssl/certs/ssl-cert-snakeoil.pem' # (change requires restart)\nssl_key_file = '/etc/ssl/private/ssl-cert-snakeoil.key' # (change requires restart)\nvacuum_cost_delay = 50\nvacuum_cost_page_hit = 0\nvacuum_cost_limit = 600\nfsync = on # turns forced synchronization on or off\nsynchronous_commit = off # synchronization level;\nlog_line_prefix = '%t:%r:%u@%d:[%p]: '\nlog_statement = 'none' # none, ddl, mod, all\nlog_timezone = 'localtime'\nautovacuum_max_workers = 4\nautovacuum_vacuum_scale_factor = 0.0195\nautovacuum_analyze_scale_factor = 0.05\nautovacuum_freeze_max_age = 1000000000\nautovacuum_vacuum_cost_limit = 300\nvacuum_freeze_table_age = 500000000\ndatestyle = 'iso, dmy'\ntimezone = 'localtime'\nclient_encoding = utf8 # actually, defaults to database\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'ru_RU.UTF-8' # locale for monetary formatting\nlc_numeric = 'ru_RU.UTF-8' # locale for number formatting\nlc_time = 'ru_RU.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.russian'\nmax_locks_per_transaction = 128 # min 10\ndefault_statistics_target = 50 # pgtune wizard 2013-11-20\nmaintenance_work_mem = 1GB # pgtune wizard 2013-11-20\nconstraint_exclusion = on # pgtune wizard 2013-11-20\ncheckpoint_completion_target = 0.9 # pgtune wizard 2013-11-20\neffective_cache_size = 12GB # pgtune wizard 
2013-11-20\nwork_mem = 96MB # pgtune wizard 2013-11-20\nwal_buffers = 8MB # pgtune wizard 2013-11-20\ncheckpoint_segments = 24\nshared_buffers = 4GB # pgtune wizard 2013-11-20\nmax_connections = 300 # pgtune wizard 2013-11-20\nshared_preload_libraries = 'auto_explain'\nauto_explain.log_analyze = 1\nauto_explain.log_min_duration = 0\nauto_explain.log_buffers = 1\nauto_explain.log_nested_statements = 1\n\n\nHere is number of tuples estimation:\nSELECT relname,reltuples::numeric FROM pg_class order by reltuples DESC limit 100;\n relname | reltuples \n-----------------------------------------------------------+-----------\n schedule_line_audit_2013_06_time_fact_idx | 80649500\n schedule_line_audit_2013_06_checkpoint_idx | 80649500\n schedule_line_audit_2013_06_audit_timestamp_idx | 80649500\n schedule_line_audit_2013_06_time_plan_idx | 80649500\n schedule_line_audit_2013_06_pk | 80649500\n schedule_line_audit_2013_06 | 80649500\n schedule_line_audit_2013_06_schedule_end_datetime_idx | 80649500\n schedule_line_audit_2013_06_id_idx | 80649500\n tl_detector_zone_history_zone_id | 38235000\n tl_detector_zone_history | 38235000\n tl_detector_zone_history_pkey | 38235000\n tl_detector_zone_history_datetime | 38235000\n matching_matchingevent_2014_07_pk | 36870100\n matching_matchingevent_2014_07 | 36870100\n matching_matchingevent_2014_07_start_datetime_idx | 36870100\n matching_matchingevent_2014_07_user_ignored_idx | 36870100\n matching_matchingevent_2014_07_device_datetime_unique_idx | 36870100\n matching_matchingevent_2014_07_device_idx | 36870100\n matching_matchingevent_2014_09_start_datetime_idx | 36453900\n matching_matchingevent_2014_09_user_ignored_idx | 36453900\n matching_matchingevent_2014_09_device_datetime_unique_idx | 36453900\n matching_matchingevent_2014_09_pk | 36453900\n matching_matchingevent_2014_09 | 36453900\n matching_matchingevent_2014_09_device_idx | 36453900\n matching_matchingevent_2014_08_device_datetime_unique_idx | 36102100\n matching_matchingevent_2014_08_device_idx | 36102100\n matching_matchingevent_2014_08 | 36102100\n matching_matchingevent_2014_08_start_datetime_idx | 36102100\n matching_matchingevent_2014_08_user_ignored_idx | 36102100\n matching_matchingevent_2014_08_pk | 36102100\n schedule_line_audit_2013_03_schedule_end_datetime_idx | 30608400\n schedule_line_audit_2013_03 | 30608400\n schedule_line_audit_2013_03_audit_timestamp_idx | 30608400\n schedule_line_audit_2013_03_time_fact_idx | 30608400\n schedule_line_audit_2013_03_checkpoint_idx | 30608400\n schedule_line_audit_2013_03_pk | 30608400\n schedule_line_audit_2013_03_id_idx | 30608400\n schedule_line_audit_2013_03_time_plan_idx | 30608400\n schedule_line_audit_2014_07_checkpoint_idx | 29604500\n schedule_line_audit_2014_07_time_plan_idx | 29604500\n schedule_line_audit_2014_07_time_fact_idx | 29604500\n schedule_line_audit_2014_07_pk | 29604500\n schedule_line_audit_2014_07 | 29604500\n schedule_line_audit_2014_07_id_idx | 29604500\n schedule_line_audit_2014_07_audit_timestamp_idx | 29604500\n schedule_line_audit_2014_07_schedule_end_datetime_idx | 29604500\n schedule_line_audit_2014_09_id_idx | 28739900\n schedule_line_audit_2014_09_audit_timestamp_idx | 28739900\n schedule_line_audit_2014_09_schedule_end_datetime_idx | 28739900\n schedule_line_audit_2014_09_time_fact_idx | 28739900\n schedule_line_audit_2014_09_checkpoint_idx | 28739900\n schedule_line_audit_2014_09_pk | 28739900\n schedule_line_audit_2014_09_time_plan_idx | 28739900\n schedule_line_audit_2014_09 | 28739900\n 
matching_matchingevent_2014_06 | 27963800\n matching_matchingevent_2014_06_user_ignored_idx | 27963800\n matching_matchingevent_2014_06_device_idx | 27963800\n matching_matchingevent_2014_06_pk | 27963800\n matching_matchingevent_2014_06_start_datetime_idx | 27963800\n matching_matchingevent_2014_06_device_datetime_unique_idx | 27963800\n schedule_line_audit_2014_08 | 27197700\n schedule_line_audit_2014_08_checkpoint_idx | 27197700\n schedule_line_audit_2014_08_schedule_end_datetime_idx | 27197700\n schedule_line_audit_2014_08_pk | 27197700\n schedule_line_audit_2014_08_time_plan_idx | 27197700\n schedule_line_audit_2014_08_time_fact_idx | 27197700\n schedule_line_audit_2014_08_id_idx | 27197700\n schedule_line_audit_2014_08_audit_timestamp_idx | 27197700\n matching_matchingevent_2014_05_user_ignored_idx | 26968500\n matching_matchingevent_2014_05_pk | 26968500\n matching_matchingevent_2014_05_device_idx | 26968500\n matching_matchingevent_2014_05_device_datetime_unique_idx | 26968500\n matching_matchingevent_2014_05 | 26968500\n matching_matchingevent_2014_05_start_datetime_idx | 26968500\n schedule_line_audit_2014_06_audit_timestamp_idx | 25498800\n schedule_line_audit_2014_06_time_plan_idx | 25498800\n schedule_line_audit_2014_06_schedule_end_datetime_idx | 25498800\n schedule_line_audit_2014_06_time_fact_idx | 25498800\n schedule_line_audit_2014_06_id_idx | 25498800\n schedule_line_audit_2014_06_pk | 25498800\n schedule_line_audit_2014_06 | 25498800\n schedule_line_audit_2014_06_checkpoint_idx | 25498800\n schedule_line_audit_2013_08_audit_timestamp_idx | 25396100\n schedule_line_audit_2013_08_pk | 25396100\n schedule_line_audit_2013_08_time_plan_idx | 25396100\n schedule_line_audit_2013_08_schedule_end_datetime_idx | 25396100\n schedule_line_audit_2013_08_id_idx | 25396100\n schedule_line_audit_2013_08_time_fact_idx | 25396100\n schedule_line_audit_2013_08_checkpoint_idx | 25396100\n schedule_line_audit_2013_08 | 25396100\n schedule_line_audit_2014_05_id_idx | 24859700\n schedule_line_audit_2014_05_pk | 24859700\n schedule_line_audit_2014_05_checkpoint_idx | 24859700\n schedule_line_audit_2014_05 | 24859700\n schedule_line_audit_2014_05_time_fact_idx | 24859700\n schedule_line_audit_2014_05_audit_timestamp_idx | 24859700\n schedule_line_audit_2014_05_schedule_end_datetime_idx | 24859700\n schedule_line_audit_2014_05_time_plan_idx | 24859700\n matching_matchingevent_2014_04_device_idx | 24449500\n matching_matchingevent_2014_04_start_datetime_idx | 24449500\n(100 rows)\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Oct 2014 16:13:35 +0600",
"msg_from": "Vladimir Kamarzin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance degradation in 324577f39bc8738ed0ec24c36c5cb2c2f81ec660"
},
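A sketch of the bisection procedure Vladimir describes, assuming the community git repository and its release tags; at each step one would build, install, re-run the test suite, and compare the summed durations from the log:

$ git clone git://git.postgresql.org/git/postgresql.git
$ cd postgresql
$ git bisect start
$ git bisect bad REL9_3_5     # known-slow release
$ git bisect good REL9_3_1    # known-fast release
# build, install, run the test suite, sum the durations, then:
$ git bisect good             # or 'git bisect bad', until git names the first bad commit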
{
"msg_contents": "Vladimir Kamarzin <[email protected]> writes:\n> After upgrade from 9.3.1 to 9.3.5 we expirienced a slight performance degradation of all queries. Query time increased to some amount of ms, mostly in range of 100ms. Some actions in our application results in a lot of small queries and in such cases performance degradation is very significant - total action performs for a 2-3 times longer then before (15s -> 40s, etc).\n> Using git-bisect we've found a bad revision causes performance drop: it is 324577f39bc8738ed0ec24c36c5cb2c2f81ec660\n\nHm. If you're going to do queries that involve update/delete across large\ninheritance trees, that bug fix is unavoidably going to cost you some\ncycles. Having said that, though, the append_rel_list data structures\naren't especially large or complex, so it's a mite astonishing that you\ncould notice this extra copying cost in the context of everything else\nthat happens in a large inherited UPDATE. I am wondering if you've\nmisidentified the commit that made the difference --- especially since you\nclaim there's a penalty for \"all\" queries, which there manifestly couldn't\nbe with this particular patch. If not, there must be something rather\nunusual about your queries or schema. Care to provide a self-contained\ntest case?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 07 Oct 2014 09:56:34 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance degradation in\n 324577f39bc8738ed0ec24c36c5cb2c2f81ec660"
},
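To make Tom's point concrete, the commit only adds planner work for statements like the sketch below, where an UPDATE or DELETE targets an inheritance parent and is planned once per child table; the column names here are guessed from the index names in Vladimir's listing:

-- planned once per partition of the inheritance tree, so the planner
-- overhead Tom describes scales with the number of children
UPDATE matching_matchingevent SET user_ignored = true
WHERE device = 42 AND start_datetime >= '2014-08-01';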
{
"msg_contents": "\n07.10.2014, 19:59, \"Tom Lane\" <[email protected]>:\n> Vladimir Kamarzin <[email protected]> writes:\n>> О©╫After upgrade from 9.3.1 to 9.3.5 we expirienced a slight performance degradation of all queries. Query time increased to some amount of ms, mostly in range of 100ms. Some actions in our application results in a lot of small queries and in such cases performance degradation is very significant - total action performs for a 2-3 times longer then before (15s -> 40s, etc).\n>> О©╫Using git-bisect we've found a bad revision causes performance drop: it is 324577f39bc8738ed0ec24c36c5cb2c2f81ec660\n>\n> Hm. О©╫If you're going to do queries that involve update/delete across large\n> inheritance trees, that bug fix is unavoidably going to cost you some\n> cycles.\n\nYeah, we're actually noticed significantly increased CPU load while running on 9.3.5.\n\n> I am wondering if you've\n> misidentified the commit that made the difference --- especially since you\n> claim there's a penalty for \"all\" queries, which there manifestly couldn't\n> be with this particular patch.\n\nNo, problem appears exactly on this commit. Actually I don't really sure about \"all\": we don't see degradation when performing plain SELECTs manually,\nbut comparing logged query time of some SELECTs we see the differences.\n\nHere is example 42ms -> 250ms:\nhttp://pastebin.ca/2855292\nhttp://pastebin.ca/2855290\n\n> О©╫If not, there must be something rather\n> unusual about your queries or schema. О©╫Care to provide a self-contained\n> test case?\n\nI'm afraid we cannot do this now. If you wish, we can give you ssh access to the test server to investigate the problem.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 08 Oct 2014 14:40:23 +0600",
"msg_from": "Vladimir Kamarzin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Performance degradation in\n 324577f39bc8738ed0ec24c36c5cb2c2f81ec660"
}
] |
[
{
"msg_contents": "see\nhttp://stackoverflow.com/questions/26237463/bad-optimization-planning-on-postgres-window-based-queries-partition-by-group\nfor details\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Bad-optimization-planning-on-Postgres-window-based-queries-partition-by-group-by-1000x-speedup-tp5822190.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 7 Oct 2014 23:35:02 -0700 (PDT)",
"msg_from": "and <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad optimization/planning on Postgres window-based queries\n (partition by(, group by?)) - 1000x speedup"
}
] |
[
{
"msg_contents": "\n\n\n\n\nGood morning, \n\n For performance point of view, are there big differences between:\n char(N), varchar(N), varchar, text? \n\nSome comments from google shows: \n No difference, under the hood it's all varlena. Check this\n article from Depesz: http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/\n A couple of highlights:\n\n\n \nTo sum it all up:\n \n\nchar(n) – takes too much space when dealing with\n values shorter than n, and can lead to subtle errors because\n of adding trailing spaces, plus it is problematic to change\n the limit\nvarchar(n) – it's problematic to change the limit in\n live environment\nvarchar – just like text\ntext – for me a winner – over (n) data types because\n it lacks their problems, and over varchar – because it has\n distinct name\n\n\n So, can I assume no big performance differences? \n Thanks alot!\n Emi\n\n\n\n",
"msg_date": "Wed, 08 Oct 2014 10:22:44 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "char(N), varchar(N), varchar, text"
},
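A quick sketch of the trailing-space subtlety behind the char(n) bullet, using standard bpchar/varchar semantics:

-- char(n) pads to length n, and the padding is ignored in comparisons:
SELECT 'abc'::char(5) = 'abc  '::char(5);       -- true
-- varchar keeps trailing spaces significant:
SELECT 'abc'::varchar(5) = 'abc  '::varchar(5); -- false
-- and the padding is stripped when char(n) is measured or cast to text:
SELECT length('abc'::char(5));                  -- 3, not 5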
{
"msg_contents": "\nOn 10/08/2014 10:22 AM, Emi Lu wrote:\n> Good morning,\n>\n> For performance point of view, are there big differences between: \n> char(N), varchar(N), varchar, text?\n>\n> Some comments from google shows:\n> No difference, under the hood it's all varlena. Check this article \n> from Depesz: \n> http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/\n> A couple of highlights:\n>\n> To sum it all up:\n>\n> * char(n) – takes too much space when dealing with values\n> shorter than n, and can lead to subtle errors because of\n> adding trailing spaces, plus it is problematic to change the limit\n> * varchar(n) – it's problematic to change the limit in live\n> environment\n> * varchar – just like text\n> * text – for me a winner – over (n) data types because it lacks\n> their problems, and over varchar – because it has distinct name\n>\n> So, can I assume no big performance differences?\n> Thanks alot!\n> Emi\n>\n\n\nWhy do you need to ask if you already have the answer? Depesz is right.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 08 Oct 2014 10:30:11 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: char(N), varchar(N), varchar, text"
},
{
"msg_contents": "\n>\n>>\n>> For performance point of view, are there big differences between: \n>> char(N), varchar(N), varchar, text?\n>>\n>> Some comments from google shows:\n>> No difference, under the hood it's all varlena. Check this article \n>> from Depesz: \n>> http://www.depesz.com/index.php/2010/03/02/charx-vs-varcharx-vs-varchar-vs-text/\n>> A couple of highlights:\n>>\n>> To sum it all up:\n>>\n>> * char(n) � takes too much space when dealing with values\n>> shorter than n, and can lead to subtle errors because of\n>> adding trailing spaces, plus it is problematic to change the \n>> limit\n>> * varchar(n) � it's problematic to change the limit in live\n>> environment\n>> * varchar � just like text\n>> * text � for me a winner � over (n) data types because it lacks\n>> their problems, and over varchar � because it has distinct name\n>>\n>> So, can I assume no big performance differences?\n>> Thanks alot!\n>> Emi\n>>\n>\n>\n> Why do you need to ask if you already have the answer? Depesz is right.\nGood to hear this. Well, sorry I saw the time is:/2010/03 (might changes \nfor diff/newer versions).\n\nThank you for the confirmation.\nEmi\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 08 Oct 2014 10:42:55 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: char(N), varchar(N), varchar, text"
}
] |
[
{
"msg_contents": "I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\nwork_mem and partitions interact.\n\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\nThe above wiki states that \"if a query involves doing merge sorts of 8\ntables, that requires 8 times work_mem.\" If I have a table that is\npartitioned does each partition count as a \"table\" and get its on work_mem?\n\nFor example, say I have the following table partitioned by the time column:\nCREATE TABLE values (time TIMESTAMP, value INTEGER);\nIf I do the following query will it require 1 work_mem or N work_mem's\n(where N is the number of partitions)?\nSELECT * FROM values ORDER BY time;\n\nThanks,\nDave\n\nI'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how work_mem and partitions interact.https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_memThe above wiki states that \"if a query involves doing merge sorts of 8 tables, that requires 8 times work_mem.\" If I have a table that is partitioned does each partition count as a \"table\" and get its on work_mem?For example, say I have the following table partitioned by the time column:CREATE TABLE values (time TIMESTAMP, value INTEGER);If I do the following query will it require 1 work_mem or N work_mem's (where N is the number of partitions)?SELECT * FROM values ORDER BY time;Thanks,Dave",
"msg_date": "Tue, 14 Oct 2014 10:08:26 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitions and work_mem?"
},
{
"msg_contents": "On Tue, Oct 14, 2014 at 10:08 AM, Dave Johansen <[email protected]>\nwrote:\n\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> work_mem and partitions interact.\n>\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> The above wiki states that \"if a query involves doing merge sorts of 8\n> tables, that requires 8 times work_mem.\" If I have a table that is\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n>\n> For example, say I have the following table partitioned by the time column:\n> CREATE TABLE values (time TIMESTAMP, value INTEGER);\n> If I do the following query will it require 1 work_mem or N work_mem's\n> (where N is the number of partitions)?\n> SELECT * FROM values ORDER BY time;\n>\n\nThe specific query you show should do the append first and then the sort on\nthe result, and so would only use 1 work_mem.\n\nHowever, other queries could cause it to use one (or more) sorts per\npartition, for example a self-join which it decides to run as a sort-merge\njoin.\n\nCheers,\n\nJeff\n\nOn Tue, Oct 14, 2014 at 10:08 AM, Dave Johansen <[email protected]> wrote:I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how work_mem and partitions interact.https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_memThe above wiki states that \"if a query involves doing merge sorts of 8 tables, that requires 8 times work_mem.\" If I have a table that is partitioned does each partition count as a \"table\" and get its on work_mem?For example, say I have the following table partitioned by the time column:CREATE TABLE values (time TIMESTAMP, value INTEGER);If I do the following query will it require 1 work_mem or N work_mem's (where N is the number of partitions)?SELECT * FROM values ORDER BY time;The specific query you show should do the append first and then the sort on the result, and so would only use 1 work_mem.However, other queries could cause it to use one (or more) sorts per partition, for example a self-join which it decides to run as a sort-merge join. Cheers,Jeff",
"msg_date": "Tue, 14 Oct 2014 11:29:42 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
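A sketch of the distinction Jeff draws, using an 8.4-style inheritance setup; the child tables and CHECK constraints are illustrative, and "values" must be quoted because it is a reserved word:

CREATE TABLE "values" (time TIMESTAMP, value INTEGER);
CREATE TABLE values_2014_01
    (CHECK (time >= '2014-01-01' AND time < '2014-02-01')) INHERITS ("values");
CREATE TABLE values_2014_02
    (CHECK (time >= '2014-02-01' AND time < '2014-03-01')) INHERITS ("values");

EXPLAIN SELECT * FROM "values" ORDER BY time;
-- expected shape: one Sort above an Append of per-partition scans,
-- i.e. a single sort and a single work_mem allocation for the whole query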
{
"msg_contents": "On 10/14/2014 10:08 AM, Dave Johansen wrote:\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> work_mem and partitions interact.\n> \n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> The above wiki states that \"if a query involves doing merge sorts of 8\n> tables, that requires 8 times work_mem.\" If I have a table that is\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\nIn theory, this could happen. In practice, based on tests I did at Sun\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\npartly because the level of parallelism in postgres is extremely\nlimited, so we can't actually sort 8 partitions at the same time.\n\nBTW, 8.4 is EOL. Maybe time to upgrade?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Oct 2014 10:10:58 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\n> On 10/14/2014 10:08 AM, Dave Johansen wrote:\n> > I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> > work_mem and partitions interact.\n> >\n> > https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> > The above wiki states that \"if a query involves doing merge sorts of 8\n> > tables, that requires 8 times work_mem.\" If I have a table that is\n> > partitioned does each partition count as a \"table\" and get its on\n> work_mem?\n>\n> In theory, this could happen. In practice, based on tests I did at Sun\n> with DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\n> partly because the level of parallelism in postgres is extremely\n> limited, so we can't actually sort 8 partitions at the same time.\n>\n\nThanks for the feedback. That's very helpful.\n\n\n> BTW, 8.4 is EOL. Maybe time to upgrade?\n>\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow\nprocess that will probably take quite a bit of time, if it ever happens.\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:On 10/14/2014 10:08 AM, Dave Johansen wrote:\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> work_mem and partitions interact.\n>\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> The above wiki states that \"if a query involves doing merge sorts of 8\n> tables, that requires 8 times work_mem.\" If I have a table that is\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\nIn theory, this could happen. In practice, based on tests I did at Sun\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\npartly because the level of parallelism in postgres is extremely\nlimited, so we can't actually sort 8 partitions at the same time.Thanks for the feedback. That's very helpful. \n\nBTW, 8.4 is EOL. Maybe time to upgrade?\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.",
"msg_date": "Wed, 15 Oct 2014 13:05:07 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Dave Johansen\r\nSent: Wednesday, October 15, 2014 4:05 PM\r\nTo: Josh Berkus\r\nCc: pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]<mailto:[email protected]>> wrote:\r\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\r\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\r\n\r\nThanks for the feedback. That's very helpful.\r\n\r\nBTW, 8.4 is EOL. Maybe time to upgrade?\r\n\r\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\r\n\r\n\r\nPostgres 8.4 is EOL (RHEL).\r\n\r\nIgor Neyman\r\n\n\n\n\n\n\n\n\n\n \n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Dave Johansen\nSent: Wednesday, October 15, 2014 4:05 PM\nTo: Josh Berkus\nCc: pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> \r\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\n\n\n \n\n\nThanks for the feedback. That's very helpful.\n\n\n \n\n\nBTW, 8.4 is EOL. Maybe time to upgrade? \n\n\n \n\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\n \n \nPostgres 8.4 is EOL (RHEL).\n \nIgor Neyman",
"msg_date": "Wed, 15 Oct 2014 20:08:54 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Dave Johansen\n> *Sent:* Wednesday, October 15, 2014 4:05 PM\n> *To:* Josh Berkus\n> *Cc:* pgsql-performance\n> *Subject:* Re: [PERFORM] Partitions and work_mem?\n>\n>\n>\n> On Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n>\n> On 10/14/2014 10:08 AM, Dave Johansen wrote:\n> > I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> > work_mem and partitions interact.\n> >\n> > https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> > The above wiki states that \"if a query involves doing merge sorts of 8\n> > tables, that requires 8 times work_mem.\" If I have a table that is\n> > partitioned does each partition count as a \"table\" and get its on\n> work_mem?\n>\n> In theory, this could happen. In practice, based on tests I did at Sun\n> with DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\n> partly because the level of parallelism in postgres is extremely\n> limited, so we can't actually sort 8 partitions at the same time.\n>\n>\n>\n> Thanks for the feedback. That's very helpful.\n>\n>\n>\n> BTW, 8.4 is EOL. Maybe time to upgrade?\n>\n>\n>\n> RHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow\n> process that will probably take quite a bit of time, if it ever happens.\n>\n>\n>\n>\n>\n> Postgres 8.4 is EOL (RHEL).\n>\n\nSorry I don't understand what you mean by that. My understanding is that\nRedHat maintains fixes for security and other major issues for packages\nthat have been EOLed. Are you implying that that's not the case? Or\nsomething else?\n\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n \nFrom: [email protected] [mailto:[email protected]]\nOn Behalf Of Dave Johansen\nSent: Wednesday, October 15, 2014 4:05 PM\nTo: Josh Berkus\nCc: pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> work_mem and partitions interact.\n>\n> \nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> The above wiki states that \"if a query involves doing merge sorts of 8\n> tables, that requires 8 times work_mem.\" If I have a table that is\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\nIn theory, this could happen. In practice, based on tests I did at Sun\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\npartly because the level of parallelism in postgres is extremely\nlimited, so we can't actually sort 8 partitions at the same time.\n\n\n \n\n\nThanks for the feedback. That's very helpful.\n\n\n \n\n\nBTW, 8.4 is EOL. Maybe time to upgrade? \n\n\n \n\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\n \n \nPostgres 8.4 is EOL (RHEL).Sorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying that that's not the case? Or something else?",
"msg_date": "Wed, 15 Oct 2014 13:19:37 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "From: Dave Johansen [mailto:[email protected]]\r\nSent: Wednesday, October 15, 2014 4:20 PM\r\nTo: Igor Neyman\r\nCc: Josh Berkus; pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Dave Johansen\r\nSent: Wednesday, October 15, 2014 4:05 PM\r\nTo: Josh Berkus\r\nCc: pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]<mailto:[email protected]>> wrote:\r\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\r\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\r\n\r\nThanks for the feedback. That's very helpful.\r\n\r\nBTW, 8.4 is EOL. Maybe time to upgrade?\r\n\r\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\r\n\r\n\r\nPostgres 8.4 is EOL (RHEL).\r\n\r\nSorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying that that's not the case? Or something else?\r\n\r\nI don’t think that RedHat can maintain Postgres version which was EOLed.\r\nPostgres 8.4 is not supported by PostgreSQL community.\r\n\r\nIgor Neyman\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \n \nFrom: Dave Johansen [mailto:[email protected]]\r\n\nSent: Wednesday, October 15, 2014 4:20 PM\nTo: Igor Neyman\nCc: Josh Berkus; pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]> wrote:\n\n\n\n \n \nFrom:\[email protected] [mailto:[email protected]]\r\nOn Behalf Of Dave Johansen\nSent: Wednesday, October 15, 2014 4:05 PM\nTo: Josh Berkus\nCc: pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\n\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> \r\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. 
This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\n\n\n \n\n\nThanks for the feedback. That's very helpful.\n\n\n \n\n\nBTW, 8.4 is EOL. Maybe time to upgrade?\r\n\n\n\n \n\n\n\n\n\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\n \n \n\n\nPostgres 8.4 is EOL (RHEL).\n\n\n\n\n\n\n \n\n\nSorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying that that's not the case? Or something else?\n \nI don’t think that RedHat can maintain Postgres version which was EOLed.\nPostgres 8.4 is not supported by PostgreSQL community.\n \nIgor Neyman",
"msg_date": "Wed, 15 Oct 2014 20:36:56 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On Wed, Oct 15, 2014 at 1:36 PM, Igor Neyman <[email protected]> wrote:\n\n>\n>\n>\n>\n> *From:* Dave Johansen [mailto:[email protected]]\n> *Sent:* Wednesday, October 15, 2014 4:20 PM\n> *To:* Igor Neyman\n> *Cc:* Josh Berkus; pgsql-performance\n> *Subject:* Re: [PERFORM] Partitions and work_mem?\n>\n>\n>\n> On Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]>\n> wrote:\n>\n>\n>\n>\n>\n> *From:* [email protected] [mailto:\n> [email protected]] *On Behalf Of *Dave Johansen\n> *Sent:* Wednesday, October 15, 2014 4:05 PM\n> *To:* Josh Berkus\n> *Cc:* pgsql-performance\n> *Subject:* Re: [PERFORM] Partitions and work_mem?\n>\n>\n>\n> On Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n>\n> On 10/14/2014 10:08 AM, Dave Johansen wrote:\n> > I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> > work_mem and partitions interact.\n> >\n> > https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> > The above wiki states that \"if a query involves doing merge sorts of 8\n> > tables, that requires 8 times work_mem.\" If I have a table that is\n> > partitioned does each partition count as a \"table\" and get its on\n> work_mem?\n>\n> In theory, this could happen. In practice, based on tests I did at Sun\n> with DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\n> partly because the level of parallelism in postgres is extremely\n> limited, so we can't actually sort 8 partitions at the same time.\n>\n>\n>\n> Thanks for the feedback. That's very helpful.\n>\n>\n>\n> BTW, 8.4 is EOL. Maybe time to upgrade?\n>\n>\n>\n> RHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow\n> process that will probably take quite a bit of time, if it ever happens.\n>\n>\n>\n>\n>\n> Postgres 8.4 is EOL (RHEL).\n>\n>\n>\n> Sorry I don't understand what you mean by that. My understanding is that\n> RedHat maintains fixes for security and other major issues for packages\n> that have been EOLed. Are you implying that that's not the case? Or\n> something else?\n>\n>\n>\n> I don’t think that RedHat can maintain Postgres version which was EOLed.\n>\n> Postgres 8.4 is not supported by PostgreSQL community.\n>\n\nThis conversation has probably become a bit off topic, but my understanding\nis that what you're paying RedHat for is a stable platform for a long\nperiod of time. That means creating/backporting of fixes for security and\nother critical issues for packages that have been EOLed.\n\nAssuming the above is true, (which I beleve to be the case\nhttps://access.redhat.com/support/policy/updates/errata ), I don't see what\nwould prevent RedHat from making a patch and applying it to the latest 8.4\nrelease to resolve any newly discovered issues. Isn't that the whole point\nof open source and RedHat being able to do with the code what it wishes as\nlong as it meets the requirements of the license? So are you claiming that\nRedHat doesn't/won't do this? Is incapable of doing this? 
Or am I missing\nsomething?\n\nOn Wed, Oct 15, 2014 at 1:36 PM, Igor Neyman <[email protected]> wrote:\n\n\n \n \nFrom: Dave Johansen [mailto:[email protected]]\n\nSent: Wednesday, October 15, 2014 4:20 PM\nTo: Igor Neyman\nCc: Josh Berkus; pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]> wrote:\n\n\n\n \n \nFrom:\[email protected] [mailto:[email protected]]\nOn Behalf Of Dave Johansen\nSent: Wednesday, October 15, 2014 4:05 PM\nTo: Josh Berkus\nCc: pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\n\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\n> work_mem and partitions interact.\n>\n> \nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\n> The above wiki states that \"if a query involves doing merge sorts of 8\n> tables, that requires 8 times work_mem.\" If I have a table that is\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\nIn theory, this could happen. In practice, based on tests I did at Sun\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\npartly because the level of parallelism in postgres is extremely\nlimited, so we can't actually sort 8 partitions at the same time.\n\n\n \n\n\nThanks for the feedback. That's very helpful.\n\n\n \n\n\nBTW, 8.4 is EOL. Maybe time to upgrade?\n\n\n\n \n\n\n\n\n\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\n \n \n\n\nPostgres 8.4 is EOL (RHEL).\n\n\n\n\n\n\n \n\n\nSorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying that that's not the case? Or something else?\n \nI don’t think that RedHat can maintain Postgres version which was EOLed.\nPostgres 8.4 is not supported by PostgreSQL community.\nThis conversation has probably become a bit off topic, but my understanding is that what you're paying RedHat for is a stable platform for a long period of time. That means creating/backporting of fixes for security and other critical issues for packages that have been EOLed.Assuming the above is true, (which I beleve to be the case https://access.redhat.com/support/policy/updates/errata ), I don't see what would prevent RedHat from making a patch and applying it to the latest 8.4 release to resolve any newly discovered issues. Isn't that the whole point of open source and RedHat being able to do with the code what it wishes as long as it meets the requirements of the license? So are you claiming that RedHat doesn't/won't do this? Is incapable of doing this? Or am I missing something?",
"msg_date": "Wed, 15 Oct 2014 13:49:11 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "From: Dave Johansen [mailto:[email protected]]\r\nSent: Wednesday, October 15, 2014 4:49 PM\r\nTo: Igor Neyman\r\nCc: Josh Berkus; pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 1:36 PM, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: Dave Johansen [mailto:[email protected]<mailto:[email protected]>]\r\nSent: Wednesday, October 15, 2014 4:20 PM\r\nTo: Igor Neyman\r\nCc: Josh Berkus; pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]<mailto:[email protected]>> wrote:\r\n\r\n\r\nFrom: [email protected]<mailto:[email protected]> [mailto:[email protected]<mailto:[email protected]>] On Behalf Of Dave Johansen\r\nSent: Wednesday, October 15, 2014 4:05 PM\r\nTo: Josh Berkus\r\nCc: pgsql-performance\r\nSubject: Re: [PERFORM] Partitions and work_mem?\r\n\r\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]<mailto:[email protected]>> wrote:\r\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\r\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\r\n\r\nThanks for the feedback. That's very helpful.\r\n\r\nBTW, 8.4 is EOL. Maybe time to upgrade?\r\n\r\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\r\n\r\n\r\nPostgres 8.4 is EOL (RHEL).\r\n\r\nSorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying that that's not the case? Or something else?\r\n\r\nI don’t think that RedHat can maintain Postgres version which was EOLed.\r\nPostgres 8.4 is not supported by PostgreSQL community.\r\n\r\nThis conversation has probably become a bit off topic, but my understanding is that what you're paying RedHat for is a stable platform for a long period of time. That means creating/backporting of fixes for security and other critical issues for packages that have been EOLed.\r\nAssuming the above is true, (which I beleve to be the case https://access.redhat.com/support/policy/updates/errata ), I don't see what would prevent RedHat from making a patch and applying it to the latest 8.4 release to resolve any newly discovered issues. Isn't that the whole point of open source and RedHat being able to do with the code what it wishes as long as it meets the requirements of the license? So are you claiming that RedHat doesn't/won't do this? Is incapable of doing this? 
Or am I missing something?\r\n\r\n\r\nTom Lane is probably better authority on this issue.\r\nLet’s wait and see what he says.\r\n\r\n\r\n\n\n\n\n\n\n\n\n\n \n \nFrom: Dave Johansen [mailto:[email protected]]\r\n\nSent: Wednesday, October 15, 2014 4:49 PM\nTo: Igor Neyman\nCc: Josh Berkus; pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 1:36 PM, Igor Neyman <[email protected]> wrote:\n\n\n\n \n \nFrom: Dave Johansen [mailto:[email protected]]\r\n\nSent: Wednesday, October 15, 2014 4:20 PM\nTo: Igor Neyman\nCc: Josh Berkus; pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\nOn Wed, Oct 15, 2014 at 1:08 PM, Igor Neyman <[email protected]> wrote:\n\n\n\n \n \nFrom:\[email protected] [mailto:[email protected]]\r\nOn Behalf Of Dave Johansen\nSent: Wednesday, October 15, 2014 4:05 PM\nTo: Josh Berkus\nCc: pgsql-performance\nSubject: Re: [PERFORM] Partitions and work_mem?\n \n\n\n\n\n\nOn Wed, Oct 15, 2014 at 10:10 AM, Josh Berkus <[email protected]> wrote:\n\nOn 10/14/2014 10:08 AM, Dave Johansen wrote:\r\n> I'm running Postgres 8.4 on RHEL 6 64-bit and I had a question about how\r\n> work_mem and partitions interact.\r\n>\r\n> \r\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server#work_mem\r\n> The above wiki states that \"if a query involves doing merge sorts of 8\r\n> tables, that requires 8 times work_mem.\" If I have a table that is\r\n> partitioned does each partition count as a \"table\" and get its on work_mem?\n\r\nIn theory, this could happen. In practice, based on tests I did at Sun\r\nwith DBT3 and 8.3, no backend ever used more than 3X work_mem. This is\r\npartly because the level of parallelism in postgres is extremely\r\nlimited, so we can't actually sort 8 partitions at the same time.\n\n\n \n\n\nThanks for the feedback. That's very helpful.\n\n\n \n\n\nBTW, 8.4 is EOL. Maybe time to upgrade?\r\n\n\n\n \n\n\n\n\n\n\nRHEL 6 isn't EOLed and we're working on moving to RHEL 7 but it's a slow process that will probably take quite a bit of time, if it ever happens.\n \n \n\n\nPostgres 8.4 is EOL (RHEL).\n\n\n\n\n\n\n \n\n\nSorry I don't understand what you mean by that. My understanding is that RedHat maintains fixes for security and other major issues for packages that have been EOLed. Are you implying\r\n that that's not the case? Or something else?\n \nI don’t think that RedHat can maintain Postgres version which was EOLed.\nPostgres 8.4 is not supported by PostgreSQL community.\n\n\n\n\n\n\n \n\n\nThis conversation has probably become a bit off topic, but my understanding is that what you're paying RedHat for is a stable platform for a long period of time. That means creating/backporting of fixes for\r\n security and other critical issues for packages that have been EOLed.\n\n\nAssuming the above is true, (which I beleve to be the case \r\nhttps://access.redhat.com/support/policy/updates/errata ), I don't see what would prevent RedHat from making a patch and applying it to the latest 8.4 release to resolve any newly discovered issues. Isn't that the whole point of open source and RedHat being\r\n able to do with the code what it wishes as long as it meets the requirements of the license? So are you claiming that RedHat doesn't/won't do this? Is incapable of doing this? Or am I missing something?\n \n \nTom Lane is probably better authority on this issue.\nLet’s wait and see what he says.",
"msg_date": "Wed, 15 Oct 2014 21:00:21 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On 10/15/2014 01:19 PM, Dave Johansen wrote:\n> Sorry I don't understand what you mean by that. My understanding is that\n> RedHat maintains fixes for security and other major issues for packages\n> that have been EOLed. Are you implying that that's not the case? Or\n> something else?\n\nRH probably backpatches our fixes as they come out. They did in the\npast, anyway.\n\nI just had the impression from your original post that this was a new\nsystem; if so, it would make sense to build it on a version of Postgres\nwhich wasn't already EOL.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Oct 2014 15:25:06 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "Igor Neyman <[email protected]> writes:\n> From: Dave Johansen [mailto:[email protected]]\n> This conversation has probably become a bit off topic, but my understanding is that what you're paying RedHat for is a stable platform for a long period of time. That means creating/backporting of fixes for security and other critical issues for packages that have been EOLed.\n> Assuming the above is true, (which I beleve to be the case https://access.redhat.com/support/policy/updates/errata ), I don't see what would prevent RedHat from making a patch and applying it to the latest 8.4 release to resolve any newly discovered issues. Isn't that the whole point of open source and RedHat being able to do with the code what it wishes as long as it meets the requirements of the license? So are you claiming that RedHat doesn't/won't do this? Is incapable of doing this? Or am I missing something?\n\n> Tom Lane is probably better authority on this issue.\n> Let’s wait and see what he says.\n\nThat is in fact exactly what people pay Red Hat to do, and it was my job\nto do it for Postgres when I worked there. I don't work there any more,\nbut I'm sure my replacement is entirely capable of back-patching fixes as\nneeded.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 15 Oct 2014 18:58:14 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On Wed, Oct 15, 2014 at 3:25 PM, Josh Berkus <[email protected]> wrote:\n\n> On 10/15/2014 01:19 PM, Dave Johansen wrote:\n> > Sorry I don't understand what you mean by that. My understanding is that\n> > RedHat maintains fixes for security and other major issues for packages\n> > that have been EOLed. Are you implying that that's not the case? Or\n> > something else?\n>\n> RH probably backpatches our fixes as they come out. They did in the\n> past, anyway.\n>\n> I just had the impression from your original post that this was a new\n> system; if so, it would make sense to build it on a version of Postgres\n> which wasn't already EOL.\n>\n\nSorry for not being more clear in the original post. This is a system that\nhas been running for just over a year and the beta for RHEL 7 hadn't even\nbeen released when we started things so 8.4 was the only real option.\n\nHaving said all of that, we recently had an increase in the number of users\nand we had experienced database restarts because the Linux Out of Memory\nKiller would kill a query on occassion. I just wanted to make sure that the\nchanges we were making were based on sound logic and we wouldn't be\nexperiencing these restarts anymore.\n\nThanks everyone for the help,\nDave\n\nOn Wed, Oct 15, 2014 at 3:25 PM, Josh Berkus <[email protected]> wrote:On 10/15/2014 01:19 PM, Dave Johansen wrote:\n> Sorry I don't understand what you mean by that. My understanding is that\n> RedHat maintains fixes for security and other major issues for packages\n> that have been EOLed. Are you implying that that's not the case? Or\n> something else?\n\nRH probably backpatches our fixes as they come out. They did in the\npast, anyway.\n\nI just had the impression from your original post that this was a new\nsystem; if so, it would make sense to build it on a version of Postgres\nwhich wasn't already EOL.Sorry for not being more clear in the original post. This is a system that has been running for just over a year and the beta for RHEL 7 hadn't even been released when we started things so 8.4 was the only real option.Having said all of that, we recently had an increase in the number of users and we had experienced database restarts because the Linux Out of Memory Killer would kill a query on occassion. I just wanted to make sure that the changes we were making were based on sound logic and we wouldn't be experiencing these restarts anymore.Thanks everyone for the help,Dave",
"msg_date": "Sun, 16 Nov 2014 15:25:45 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions and work_mem?"
},
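Given Josh's observation above that a backend rarely used more than about 3x work_mem even with partitioned merge sorts, a practical guard against the OOM killer is to size work_mem so that max_connections times a few multiples of it still fits in physical RAM. A minimal sketch (the value is an illustrative assumption, not a recommendation; it works on 8.4 as well):

    -- Per-session cap; choose a value so that
    -- max_connections * (3 * work_mem) stays well inside physical memory.
    SET work_mem = '32MB';
    SHOW work_mem;
    -- Cluster-wide on 8.4: set work_mem in postgresql.conf, then reload:
    -- SELECT pg_reload_conf();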
{
"msg_contents": "On Oct 16, 2014 12:58 AM, \"Tom Lane\" <[email protected]> wrote:\n>\n> Igor Neyman <[email protected]> writes:\n> > From: Dave Johansen [mailto:[email protected]]\n> > This conversation has probably become a bit off topic, but my\nunderstanding is that what you're paying RedHat for is a stable platform\nfor a long period of time. That means creating/backporting of fixes for\nsecurity and other critical issues for packages that have been EOLed.\n> > Assuming the above is true, (which I beleve to be the case\nhttps://access.redhat.com/support/policy/updates/errata ), I don't see what\nwould prevent RedHat from making a patch and applying it to the latest 8.4\nrelease to resolve any newly discovered issues. Isn't that the whole point\nof open source and RedHat being able to do with the code what it wishes as\nlong as it meets the requirements of the license? So are you claiming that\nRedHat doesn't/won't do this? Is incapable of doing this? Or am I missing\nsomething?\n>\n> > Tom Lane is probably better authority on this issue.\n> > Let’s wait and see what he says.\n>\n> That is in fact exactly what people pay Red Hat to do, and it was my job\n> to do it for Postgres when I worked there. I don't work there any more,\n> but I'm sure my replacement is entirely capable of back-patching fixes as\n> needed.\n>\n\nDo they backpatch everything, or just things like security issues? (in sure\nthey can do either, but do you know what the policy says?)\n\nEither way it does also mean that the support requests for such versions\nwould need to go to redhat rather than the community lists at some point -\nright now their 8.4 would be almost the same as ours, but down the road\nthey'll start separating more and more of course.\n\nFor the op - of you haven't already, is suggest you take a look at\nyum.postgresql.org which will get you a modern, supported, postgresql\nversion for rhel 6. Regardless of the support, you get all the other\nimprovements in postgresql.\n\n/Magnus\n\n\nOn Oct 16, 2014 12:58 AM, \"Tom Lane\" <[email protected]> wrote:\n>\n> Igor Neyman <[email protected]> writes:\n> > From: Dave Johansen [mailto:[email protected]]\n> > This conversation has probably become a bit off topic, but my understanding is that what you're paying RedHat for is a stable platform for a long period of time. That means creating/backporting of fixes for security and other critical issues for packages that have been EOLed.\n> > Assuming the above is true, (which I beleve to be the case https://access.redhat.com/support/policy/updates/errata ), I don't see what would prevent RedHat from making a patch and applying it to the latest 8.4 release to resolve any newly discovered issues. Isn't that the whole point of open source and RedHat being able to do with the code what it wishes as long as it meets the requirements of the license? So are you claiming that RedHat doesn't/won't do this? Is incapable of doing this? Or am I missing something?\n>\n> > Tom Lane is probably better authority on this issue.\n> > Let’s wait and see what he says.\n>\n> That is in fact exactly what people pay Red Hat to do, and it was my job\n> to do it for Postgres when I worked there. I don't work there any more,\n> but I'm sure my replacement is entirely capable of back-patching fixes as\n> needed.\n>\nDo they backpatch everything, or just things like security issues? (in sure they can do either, but do you know what the policy says?) 
\nEither way it does also mean that the support requests for such versions would need to go to redhat rather than the community lists at some point - right now their 8.4 would be almost the same as ours, but down the road they'll start separating more and more of course. \nFor the op - of you haven't already, is suggest you take a look at yum.postgresql.org which will get you a modern, supported, postgresql version for rhel 6. Regardless of the support, you get all the other improvements in postgresql. \n/Magnus",
"msg_date": "Mon, 17 Nov 2014 07:57:01 +0100",
"msg_from": "Magnus Hagander <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "Magnus Hagander <[email protected]> writes:\n> On Oct 16, 2014 12:58 AM, \"Tom Lane\" <[email protected]> wrote:\n>> That is in fact exactly what people pay Red Hat to do, and it was my job\n>> to do it for Postgres when I worked there. I don't work there any more,\n>> but I'm sure my replacement is entirely capable of back-patching fixes as\n>> needed.\n\n> Do they backpatch everything, or just things like security issues? (in sure\n> they can do either, but do you know what the policy says?)\n\nSecurity issues are high priority to fix, otherwise it takes (usually)\ncomplaints from paying customers and/or effective lobbying from the\npackage's maintainer. They have finite bandwidth for package updates,\nand they also take seriously the idea that a RHEL release series is\nsupposed to be a stable platform. When I was there I was usually able\nto get them to update to new PG minor releases only when said releases\ninvolved security fixes, otherwise the can got kicked down the road...\n\n> Either way it does also mean that the support requests for such versions\n> would need to go to redhat rather than the community lists at some point -\n> right now their 8.4 would be almost the same as ours, but down the road\n> they'll start separating more and more of course.\n\nIf you want a fix in Red Hat's version of 8.4, you need to be talking to\nthem *now*, not \"at some point\". The community lost any input into that\nwhen we stopped updating 8.4.\n\n> For the op - of you haven't already, is suggest you take a look at\n> yum.postgresql.org which will get you a modern, supported, postgresql\n> version for rhel 6. Regardless of the support, you get all the other\n> improvements in postgresql.\n\nYeah. Also, Red Hat is shipping a newer version (I think 9.2.something)\nas part of their \"software collections\" packaging initiative. I do not\nknow whether that's included in a standard RHEL subscription or costs\nextra.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 17 Nov 2014 10:13:42 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitions and work_mem?"
},
{
"msg_contents": "On Mon, Nov 17, 2014 at 8:13 AM, Tom Lane <[email protected]> wrote:\n\n> Magnus Hagander <[email protected]> writes:\n> > On Oct 16, 2014 12:58 AM, \"Tom Lane\" <[email protected]> wrote:\n> >> That is in fact exactly what people pay Red Hat to do, and it was my job\n> >> to do it for Postgres when I worked there. I don't work there any more,\n> >> but I'm sure my replacement is entirely capable of back-patching fixes\n> as\n> >> needed.\n>\n> > Do they backpatch everything, or just things like security issues? (in\n> sure\n> > they can do either, but do you know what the policy says?)\n>\n> Security issues are high priority to fix, otherwise it takes (usually)\n> complaints from paying customers and/or effective lobbying from the\n> package's maintainer. They have finite bandwidth for package updates,\n> and they also take seriously the idea that a RHEL release series is\n> supposed to be a stable platform. When I was there I was usually able\n> to get them to update to new PG minor releases only when said releases\n> involved security fixes, otherwise the can got kicked down the road...\n>\n> > Either way it does also mean that the support requests for such versions\n> > would need to go to redhat rather than the community lists at some point\n> -\n> > right now their 8.4 would be almost the same as ours, but down the road\n> > they'll start separating more and more of course.\n>\n> If you want a fix in Red Hat's version of 8.4, you need to be talking to\n> them *now*, not \"at some point\". The community lost any input into that\n> when we stopped updating 8.4.\n>\n> > For the op - of you haven't already, is suggest you take a look at\n> > yum.postgresql.org which will get you a modern, supported, postgresql\n> > version for rhel 6. Regardless of the support, you get all the other\n> > improvements in postgresql.\n>\n> Yeah. Also, Red Hat is shipping a newer version (I think 9.2.something)\n> as part of their \"software collections\" packaging initiative. I do not\n> know whether that's included in a standard RHEL subscription or costs\n> extra.\n>\n\nWe've looked into both the repos at yum.postgresql.org and Red Hat's SCL,\nbut as most people are already aware, the problem is just that it takes a\nLONG time to move a production system to a new version of a major\ncomponent, if it ever happens at all.\n\nOn a side note, the SCL stuff does require the right type of subsciption (\nhttps://access.redhat.com/solutions/472793 ) and has a MUCH shorter life\ncycle than the rest of RHEL (\nhttps://access.redhat.com/support/policy/updates/rhscl ) so it's honestly\nkind of hard to use in most production environments.\n\nOn Mon, Nov 17, 2014 at 8:13 AM, Tom Lane <[email protected]> wrote:Magnus Hagander <[email protected]> writes:\n> On Oct 16, 2014 12:58 AM, \"Tom Lane\" <[email protected]> wrote:\n>> That is in fact exactly what people pay Red Hat to do, and it was my job\n>> to do it for Postgres when I worked there. I don't work there any more,\n>> but I'm sure my replacement is entirely capable of back-patching fixes as\n>> needed.\n\n> Do they backpatch everything, or just things like security issues? (in sure\n> they can do either, but do you know what the policy says?)\n\nSecurity issues are high priority to fix, otherwise it takes (usually)\ncomplaints from paying customers and/or effective lobbying from the\npackage's maintainer. 
They have finite bandwidth for package updates,\nand they also take seriously the idea that a RHEL release series is\nsupposed to be a stable platform. When I was there I was usually able\nto get them to update to new PG minor releases only when said releases\ninvolved security fixes, otherwise the can got kicked down the road...\n\n> Either way it does also mean that the support requests for such versions\n> would need to go to redhat rather than the community lists at some point -\n> right now their 8.4 would be almost the same as ours, but down the road\n> they'll start separating more and more of course.\n\nIf you want a fix in Red Hat's version of 8.4, you need to be talking to\nthem *now*, not \"at some point\". The community lost any input into that\nwhen we stopped updating 8.4.\n\n> For the op - of you haven't already, is suggest you take a look at\n> yum.postgresql.org which will get you a modern, supported, postgresql\n> version for rhel 6. Regardless of the support, you get all the other\n> improvements in postgresql.\n\nYeah. Also, Red Hat is shipping a newer version (I think 9.2.something)\nas part of their \"software collections\" packaging initiative. I do not\nknow whether that's included in a standard RHEL subscription or costs\nextra.We've looked into both the repos at yum.postgresql.org and Red Hat's SCL, but as most people are already aware, the problem is just that it takes a LONG time to move a production system to a new version of a major component, if it ever happens at all.On a side note, the SCL stuff does require the right type of subsciption ( https://access.redhat.com/solutions/472793 ) and has a MUCH shorter life cycle than the rest of RHEL ( https://access.redhat.com/support/policy/updates/rhscl ) so it's honestly kind of hard to use in most production environments.",
"msg_date": "Mon, 17 Nov 2014 20:34:06 -0700",
"msg_from": "Dave Johansen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitions and work_mem?"
}
] |
[
{
"msg_contents": "Hi,\n\nlets imagine that we have some table, partitioned by timestamp field, and\nwe query it with SELECT with ordering by that field (DESC for example),\nwith some modest limit.\nLets further say that required amount of rows is found in the first table\nthat query encounters (say, latest one).\nI am just wondering, why nevertheless PostgreSQL does read couple of\nbuffers from each of the older tables?\n\nBest regards,\nDmitriy Shalashov\n\nHi,lets imagine that we have some table, partitioned by timestamp field, and we query it with SELECT with ordering by that field (DESC for example), with some modest limit.Lets further say that required amount of rows is found in the first table that query encounters (say, latest one).I am just wondering, why nevertheless PostgreSQL does read couple of buffers from each of the older tables?Best regards,Dmitriy Shalashov",
"msg_date": "Thu, 16 Oct 2014 16:35:13 +0400",
"msg_from": "=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Partitioned tables and SELECT ... ORDER BY ... LIMIT"
},
{
"msg_contents": "Hello!\n\nLe 2014-10-16 à 08:35, Дмитрий Шалашов <[email protected]> a écrit :\n> lets imagine that we have some table, partitioned by timestamp field, and we query it with SELECT with ordering by that field (DESC for example), with some modest limit.\n> Lets further say that required amount of rows is found in the first table that query encounters (say, latest one).\n> I am just wondering, why nevertheless PostgreSQL does read couple of buffers from each of the older tables?\n\nCould you share a specific plan with us, as well as your PostgreSQL version? It would make the conversation much easier.\n\nCan you also confirm your constraint_exclusion parameter is set to either 'partition' or 'on'?\n\nThanks!\nFrançois Beausoleil\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 Oct 2014 09:19:12 -0400",
"msg_from": "=?utf-8?Q?Fran=C3=A7ois_Beausoleil?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned tables and SELECT ... ORDER BY ... LIMIT"
},
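A quick way to answer François's question about the parameter, shown as a hedged sketch (the SET form is session-local; set it in postgresql.conf to persist):

    SHOW constraint_exclusion;              -- expect 'partition' (the default) or 'on'
    SET constraint_exclusion = 'partition'; -- apply constraints only to inheritance children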
{
"msg_contents": "Hi!\n\nSo this is not obviously normal, I guess?)\n\nMy version is 9.2.9, constraint_exclusion set to 'partition'.\n\n\n# \\d user_feed_master\n Table \"public.user_feed_master\"\n Column | Type |\nModifiers\n------------+-----------------------------+---------------------------------------------------------------\n id | bigint | not null default\nnextval('user_feed_master_id_seq'::regclass)\n user_id | integer | not null\n type | smallint | not null\n added | timestamp without time zone | not null\n active_id | integer | not null\n url_id | integer | not null\n channel_id | integer |\n updated | timestamp without time zone | default now()\n activity | text |\nNumber of child tables: 11 (Use \\d+ to list them.)\n\n\n# \\d user_feed_201406 -- one of partitions\n Table \"public.user_feed_201406\"\n Column | Type |\nModifiers\n------------+-----------------------------+---------------------------------------------------------------\n id | bigint | not null default\nnextval('user_feed_master_id_seq'::regclass)\n user_id | integer | not null\n type | smallint | not null\n added | timestamp without time zone | not null\n active_id | integer | not null\n url_id | integer | not null\n channel_id | integer |\n updated | timestamp without time zone | default now()\n activity | text |\nIndexes:\n \"user_feed_201406_pkey\" PRIMARY KEY, btree (id)\n \"user_feed_201406_url_id_user_id_idx\" btree (url_id, user_id)\n \"user_feed_201406_user_id_active_id_added_idx\" btree (user_id,\nactive_id, added DESC)\n \"user_feed_201406_user_id_added_idx\" btree (user_id, added DESC)\nCheck constraints:\n \"user_feed_201406_added_check\" CHECK (added >= '2014-06-01'::date AND\nadded < '2014-07-01'::date)\nInherits: user_feed_master\n\n\n# SELECT count(*) FROM user_feed_201406 WHERE user_id = 83586;\n count\n-------\n909\n(1 row)\n\n\nEXPLAIN (ANALYZE,BUFFERS) SELECT url_id FROM user_feed_master WHERE user_id\n= 83586 AND added <= '2014-06-30 23:59:59.99999' ORDER BY added DESC LIMIT\n100;\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------------\n--------------------------------------------------------------\n Limit (cost=0.13..397.23 rows=100 width=12) (actual time=107.442..107.706\nrows=100 loops=1)\n Buffers: shared hit=104 read=18\n I/O Timings: read=107.131\n -> Result (cost=0.13..19664.74 rows=4952 width=12) (actual\ntime=107.442..107.695 rows=100 loops=1)\n Buffers: shared hit=104 read=18\n I/O Timings: read=107.131\n -> Merge Append (cost=0.13..19664.74 rows=4952 width=12) (actual\ntime=107.440..107.683 rows=100 loops=1)\n Sort Key: public.user_feed_master.added\n Buffers: shared hit=104 read=18\n I/O Timings: read=107.131\n -> Sort (cost=0.01..0.02 rows=1 width=12) (actual\ntime=0.006..0.006 rows=0 loops=1)\n Sort Key: public.user_feed_master.added\n Sort Method: quicksort Memory: 25kB\n -> Seq Scan on user_feed_master (cost=0.00..0.00\nrows=1 width=12) (actual time=0.001..0.001 rows=0 loops=1)\n Filter: ((added <= '2014-06-30\n23:59:59.63551'::timestamp without time zone) AND (user_id = 83586))\n -> Index Scan using user_feed_201312_user_id_added_idx on\nuser_feed_201312 user_feed_master (cost=0.00..1525.71 row\ns=392 width=12) (actual time=15.020..15.020 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=1 read=3\n I/O Timings: read=14.980\n -> Index Scan using user_feed_201401_user_id_added_idx 
on\nuser_feed_201401 user_feed_master (cost=0.00..966.92 rows\n=272 width=12) (actual time=8.703..8.703 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=2 read=2\n I/O Timings: read=8.667\n -> Index Scan using user_feed_201402_user_id_added_idx on\nuser_feed_201402 user_feed_master (cost=0.00..1356.38 row\ns=396 width=12) (actual time=12.818..12.818 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=2 read=2\n I/O Timings: read=12.782\n -> Index Scan using user_feed_201403_user_id_added_idx on\nuser_feed_201403 user_feed_master (cost=0.00..4400.92 row\ns=1116 width=12) (actual time=16.959..16.959 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=2 read=3\n I/O Timings: read=16.921\n -> Index Scan using user_feed_201404_user_id_added_idx on\nuser_feed_201404 user_feed_master (cost=0.00..5576.73 row\ns=1375 width=12) (actual time=15.534..15.534 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=2 read=3\n I/O Timings: read=15.485\n -> Index Scan using user_feed_201405_user_id_added_idx on\nuser_feed_201405 user_feed_master (cost=0.00..2895.72 row\ns=714 width=12) (actual time=17.328..17.328 rows=1 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=2 read=3\n I/O Timings: read=17.281\n -> Index Scan using user_feed_201406_user_id_added_idx on\nuser_feed_201406 user_feed_master (cost=0.00..2781.28 row\ns=686 width=12) (actual time=21.064..21.276 rows=100 loops=1)\n Index Cond: ((user_id = 83586) AND (added <=\n'2014-06-30 23:59:59.63551'::timestamp without time zone))\n Buffers: shared hit=93 read=2\n I/O Timings: read=21.015\n Total runtime: 107.797 ms\n(44 rows)\n\n\nBest regards,\nDmitriy Shalashov\n\n2014-10-16 17:19 GMT+04:00 François Beausoleil <[email protected]>:\n\n> Hello!\n>\n> Le 2014-10-16 à 08:35, Дмитрий Шалашов <[email protected]> a écrit :\n> > lets imagine that we have some table, partitioned by timestamp field,\n> and we query it with SELECT with ordering by that field (DESC for example),\n> with some modest limit.\n> > Lets further say that required amount of rows is found in the first\n> table that query encounters (say, latest one).\n> > I am just wondering, why nevertheless PostgreSQL does read couple of\n> buffers from each of the older tables?\n>\n> Could you share a specific plan with us, as well as your PostgreSQL\n> version? 
It would make the conversation much easier.\n>\n> Can you also confirm your constraint_exclusion parameter is set to either\n> 'partition' or 'on'?\n>\n> Thanks!\n> François Beausoleil\n>\n>",
"msg_date": "Thu, 16 Oct 2014 17:33:09 +0400",
"msg_from": "=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioned tables and SELECT ... ORDER BY ... LIMIT"
},
{
"msg_contents": "On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]> wrote:\n\n> Hi,\n>\n> lets imagine that we have some table, partitioned by timestamp field, and\n> we query it with SELECT with ordering by that field (DESC for example),\n> with some modest limit.\n> Lets further say that required amount of rows is found in the first table\n> that query encounters (say, latest one).\n> I am just wondering, why nevertheless PostgreSQL does read couple of\n> buffers from each of the older tables?\n>\n\nThe planner only does partition pruning statically, not dynamically.The\nLIMIT has to be implemented dynamically--it cannot prove absolutely that\nthe \"first\" partition will have enough rows, so it cannot eliminate the\nothers.\n\nThe \"Merge Append\" does a priority queue merge, and so needs to read the\n\"first\" row (according to the ORDER BY) from each partition in order to seed\nthe priority queue. I guess what it could be made to do in the case where\nthere are suitable check constraints on a partition, is seed the priority\nqueue with a dummy value constructed from the constraint. If the merge\nnever gets far enough to draw upon that dummy value, then that whole plan\nnode never needs to get started up.\n\nIn your case that would save very little, as reading a few blocks for each\npartition is not much of a burden. Especially as it the same few blocks\nevery time, so they should be well cached. There may be other case where\nthis would be more helpful. But it isn't clear to me how the planner could\nbuild such a feature into its cost estimates, and the whole thing would be\na rather complex and esoteric optimization to make for uncertain gain.\n\nCheers,\n\nJeff\n\nOn Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]> wrote:Hi,lets imagine that we have some table, partitioned by timestamp field, and we query it with SELECT with ordering by that field (DESC for example), with some modest limit.Lets further say that required amount of rows is found in the first table that query encounters (say, latest one).I am just wondering, why nevertheless PostgreSQL does read couple of buffers from each of the older tables?The planner only does partition pruning statically, not dynamically.The LIMIT has to be implemented dynamically--it cannot prove absolutely that the \"first\" partition will have enough rows, so it cannot eliminate the others.The \"Merge Append\" does a priority queue merge, and so needs to read the \"first\" row (according to the ORDER BY) from each partition in order to seed the priority queue. I guess what it could be made to do in the case where there are suitable check constraints on a partition, is seed the priority queue with a dummy value constructed from the constraint. If the merge never gets far enough to draw upon that dummy value, then that whole plan node never needs to get started up.In your case that would save very little, as reading a few blocks for each partition is not much of a burden. Especially as it the same few blocks every time, so they should be well cached. There may be other case where this would be more helpful. But it isn't clear to me how the planner could build such a feature into its cost estimates, and the whole thing would be a rather complex and esoteric optimization to make for uncertain gain.Cheers,Jeff",
"msg_date": "Thu, 16 Oct 2014 10:04:45 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned tables and SELECT ... ORDER BY ... LIMIT"
},
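To illustrate Jeff's point that pruning is static: if the query itself constrains the partition key in a way that matches the CHECK constraints, the planner can exclude the older partitions at plan time, so the Merge Append never opens them. A sketch against the user_feed_* layout shown earlier in the thread (the bounds are illustrative):

    -- Adding a lower bound on "added" that lines up with
    -- user_feed_201406's CHECK constraint lets constraint
    -- exclusion drop the other children statically.
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT url_id
      FROM user_feed_master
     WHERE user_id = 83586
       AND added >= '2014-06-01'
       AND added <= '2014-06-30 23:59:59.99999'
     ORDER BY added DESC
     LIMIT 100;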
{
"msg_contents": "2014-10-16 14:04 GMT-03:00 Jeff Janes <[email protected]>:\n\n> On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> lets imagine that we have some table, partitioned by timestamp field, and\n>> we query it with SELECT with ordering by that field (DESC for example),\n>> with some modest limit.\n>> Lets further say that required amount of rows is found in the first table\n>> that query encounters (say, latest one).\n>> I am just wondering, why nevertheless PostgreSQL does read couple of\n>> buffers from each of the older tables?\n>>\n>\n> The planner only does partition pruning statically, not dynamically.The\n> LIMIT has to be implemented dynamically--it cannot prove absolutely that\n> the \"first\" partition will have enough rows, so it cannot eliminate the\n> others.\n>\n> The \"Merge Append\" does a priority queue merge, and so needs to read the\n> \"first\" row (according to the ORDER BY) from each partition in order to seed\n> the priority queue. I guess what it could be made to do in the case where\n> there are suitable check constraints on a partition, is seed the priority\n> queue with a dummy value constructed from the constraint. If the merge\n> never gets far enough to draw upon that dummy value, then that whole plan\n> node never needs to get started up.\n>\n> In your case that would save very little, as reading a few blocks for each\n> partition is not much of a burden. Especially as it the same few blocks\n> every time, so they should be well cached. There may be other case where\n> this would be more helpful. But it isn't clear to me how the planner could\n> build such a feature into its cost estimates, and the whole thing would be\n> a rather complex and esoteric optimization to make for uncertain gain.\n>\n> Cheers,\n>\n> Jeff\n>\n\nLike Jeff said, it shouldn't be much of a burden.\n\nIf you think it is, than you can query only the last partition (since\npartitions are tables themselves).\n\nIt seems to me that your application is querying some sample data from the\nlast date to show something in your application and this approach would do\nfor that purpose.\n\n2014-10-16 14:04 GMT-03:00 Jeff Janes <[email protected]>:On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]> wrote:Hi,lets imagine that we have some table, partitioned by timestamp field, and we query it with SELECT with ordering by that field (DESC for example), with some modest limit.Lets further say that required amount of rows is found in the first table that query encounters (say, latest one).I am just wondering, why nevertheless PostgreSQL does read couple of buffers from each of the older tables?The planner only does partition pruning statically, not dynamically.The LIMIT has to be implemented dynamically--it cannot prove absolutely that the \"first\" partition will have enough rows, so it cannot eliminate the others.The \"Merge Append\" does a priority queue merge, and so needs to read the \"first\" row (according to the ORDER BY) from each partition in order to seed the priority queue. I guess what it could be made to do in the case where there are suitable check constraints on a partition, is seed the priority queue with a dummy value constructed from the constraint. If the merge never gets far enough to draw upon that dummy value, then that whole plan node never needs to get started up.In your case that would save very little, as reading a few blocks for each partition is not much of a burden. 
Especially as it the same few blocks every time, so they should be well cached. There may be other case where this would be more helpful. But it isn't clear to me how the planner could build such a feature into its cost estimates, and the whole thing would be a rather complex and esoteric optimization to make for uncertain gain.Cheers,Jeff\nLike Jeff said, it shouldn't be much of a burden.If you think it is, than you can query only the last partition (since partitions are tables themselves).It seems to me that your application is querying some sample data from the last date to show something in your application and this approach would do for that purpose.",
"msg_date": "Thu, 16 Oct 2014 14:15:43 -0300",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Partitioned tables and SELECT ... ORDER BY ... LIMIT"
},
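Felipe's suggestion in SQL form, as a sketch: since each partition is an ordinary table, the query can target the newest child directly and skip the Merge Append over the other children entirely:

    -- Hit the newest partition from the thread directly.
    SELECT url_id
      FROM user_feed_201406
     WHERE user_id = 83586
     ORDER BY added DESC
     LIMIT 100;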
{
"msg_contents": "Hi Jeff,\n\nThanks for clarifications!\n\nIn my case yes, it's just few blocks, but different ones every time I\nchange user_id value in my WHERE clause. When I change user_id - buffers\nare no longer \"shared hit\" in EXPLAIN. This is a bit more worrying.\n\nBut if there is no easy fix - well, OK.\n\n\nBest regards,\nDmitriy Shalashov\n\n2014-10-16 21:04 GMT+04:00 Jeff Janes <[email protected]>:\n\n> On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]>\n> wrote:\n>\n>> Hi,\n>>\n>> lets imagine that we have some table, partitioned by timestamp field, and\n>> we query it with SELECT with ordering by that field (DESC for example),\n>> with some modest limit.\n>> Lets further say that required amount of rows is found in the first table\n>> that query encounters (say, latest one).\n>> I am just wondering, why nevertheless PostgreSQL does read couple of\n>> buffers from each of the older tables?\n>>\n>\n> The planner only does partition pruning statically, not dynamically.The\n> LIMIT has to be implemented dynamically--it cannot prove absolutely that\n> the \"first\" partition will have enough rows, so it cannot eliminate the\n> others.\n>\n> The \"Merge Append\" does a priority queue merge, and so needs to read the\n> \"first\" row (according to the ORDER BY) from each partition in order to seed\n> the priority queue. I guess what it could be made to do in the case where\n> there are suitable check constraints on a partition, is seed the priority\n> queue with a dummy value constructed from the constraint. If the merge\n> never gets far enough to draw upon that dummy value, then that whole plan\n> node never needs to get started up.\n>\n> In your case that would save very little, as reading a few blocks for each\n> partition is not much of a burden. Especially as it the same few blocks\n> every time, so they should be well cached. There may be other case where\n> this would be more helpful. But it isn't clear to me how the planner could\n> build such a feature into its cost estimates, and the whole thing would be\n> a rather complex and esoteric optimization to make for uncertain gain.\n>\n> Cheers,\n>\n> Jeff\n>\n\nHi Jeff,Thanks for clarifications!In my case yes, it's just few blocks, but different ones every time I change user_id value in my WHERE clause. When I change user_id - buffers are no longer \"shared hit\" in EXPLAIN. This is a bit more worrying.But if there is no easy fix - well, OK.Best regards,Dmitriy Shalashov\n2014-10-16 21:04 GMT+04:00 Jeff Janes <[email protected]>:On Thu, Oct 16, 2014 at 5:35 AM, Дмитрий Шалашов <[email protected]> wrote:Hi,lets imagine that we have some table, partitioned by timestamp field, and we query it with SELECT with ordering by that field (DESC for example), with some modest limit.Lets further say that required amount of rows is found in the first table that query encounters (say, latest one).I am just wondering, why nevertheless PostgreSQL does read couple of buffers from each of the older tables?The planner only does partition pruning statically, not dynamically.The LIMIT has to be implemented dynamically--it cannot prove absolutely that the \"first\" partition will have enough rows, so it cannot eliminate the others.The \"Merge Append\" does a priority queue merge, and so needs to read the \"first\" row (according to the ORDER BY) from each partition in order to seed the priority queue. 
I guess what it could be made to do in the case where there are suitable check constraints on a partition, is seed the priority queue with a dummy value constructed from the constraint. If the merge never gets far enough to draw upon that dummy value, then that whole plan node never needs to get started up.In your case that would save very little, as reading a few blocks for each partition is not much of a burden. Especially as it the same few blocks every time, so they should be well cached. There may be other case where this would be more helpful. But it isn't clear to me how the planner could build such a feature into its cost estimates, and the whole thing would be a rather complex and esoteric optimization to make for uncertain gain.Cheers,Jeff",
"msg_date": "Thu, 16 Oct 2014 21:22:20 +0400",
"msg_from": "=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Partitioned tables and SELECT ... ORDER BY ... LIMIT"
}
] |
[
{
"msg_contents": "Hello,\n\nTwo options for data (>1M), may I know which one better please?\n\n(1) copyOut (JDBC copyManager)\n t1 into a.csv\n delete t2 where pk.cols in t1\n copyIn t2 from a.csv\n\n(2) setautoCommit(false);\n delete t2 where pk.cols in t1;\n insert t2 select * from t1;\n\nThank you\nEmi\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 16 Oct 2014 14:30:54 -0400",
"msg_from": "Emi Lu <[email protected]>",
"msg_from_op": true,
"msg_subject": "CopyManager(In/out) vs. delete/insert directly"
}
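Both options boil down to the same server-side work; option (2) done in a single transaction avoids round-tripping >1M rows through the client and a CSV file, so it is usually the cheaper of the two. A sketch of the SQL option (2) executes, where pk1/pk2 stand in for the unstated primary-key columns:

    BEGIN;
    -- Remove the rows of t2 whose key already appears in t1.
    DELETE FROM t2
     USING t1
     WHERE t2.pk1 = t1.pk1
       AND t2.pk2 = t1.pk2;  -- pk1/pk2 are placeholder key columns
    -- Then copy everything over in the same transaction.
    INSERT INTO t2
    SELECT * FROM t1;
    COMMIT;

CopyManager mainly pays off when data must actually cross the client boundary, e.g. loading from an external file; when both tables already live in the same database, the data never needs to leave the server.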
] |
[
{
"msg_contents": "All,\n\nThought I'd share some pgbench runs I did on two servers, one running\n9.3 and one running 9.4.\n\nA small (512MB) pgbench test didn't show much difference between the two:\n\n9.3.5:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 200\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 600 s\nnumber of transactions actually processed: 7686217\nlatency average: 1.249 ms\ntps = 12810.135226 (including connections establishing)\ntps = 12810.277332 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001833 \\set nbranches 1 * :scale\n 0.000513 \\set ntellers 10 * :scale\n 0.000447 \\set naccounts 100000 * :scale\n 0.000597 \\setrandom aid 1 :naccounts\n 0.000585 \\setrandom bid 1 :nbranches\n 0.000506 \\setrandom tid 1 :ntellers\n 0.000507 \\setrandom delta -5000 5000\n 0.053684 BEGIN;\n 0.161115 UPDATE pgbench_accounts SET abalance = abalance\n+ :delta WHERE aid = :aid;\n 0.143763 SELECT abalance FROM pgbench_accounts WHERE aid\n= :aid;\n 0.168801 UPDATE pgbench_tellers SET tbalance = tbalance +\n:delta WHERE tid = :tid;\n 0.183900 UPDATE pgbench_branches SET bbalance = bbalance\n+ :delta WHERE bid = :bid;\n 0.137570 INSERT INTO pgbench_history (tid, bid, aid,\ndelta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n 0.389587 END;\n\n9.4b3:\n\ntransaction type: TPC-B (sort of)\nscaling factor: 200\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 600 s\nnumber of transactions actually processed: 7822118\nlatency average: 1.227 ms\ntps = 13036.312006 (including connections establishing)\ntps = 13036.498067 (excluding connections establishing)\nstatement latencies in milliseconds:\n 0.001817 \\set nbranches 1 * :scale\n 0.000506 \\set ntellers 10 * :scale\n 0.000439 \\set naccounts 100000 * :scale\n 0.000587 \\setrandom aid 1 :naccounts\n 0.000497 \\setrandom bid 1 :nbranches\n 0.000487 \\setrandom tid 1 :ntellers\n 0.000506 \\setrandom delta -5000 5000\n 0.053509 BEGIN;\n 0.160929 UPDATE pgbench_accounts SET abalance = abalance\n+ :delta WHERE aid = :aid;\n 0.145014 SELECT abalance FROM pgbench_accounts WHERE aid\n= :aid;\n 0.169506 UPDATE pgbench_tellers SET tbalance = tbalance +\n:delta WHERE tid = :tid;\n 0.188648 UPDATE pgbench_branches SET bbalance = bbalance\n+ :delta WHERE bid = :bid;\n 0.141014 INSERT INTO pgbench_history (tid, bid, aid,\ndelta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);\n 0.358340 END;\n\nHowever, on a big disk-bound database, 9.4 was 20% better throughput.\nThe database in this case is around 200GB, for a server with 128GB RAM:\n\n9.3.5:\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10000\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 3600 s\nnumber of transactions actually processed: 1944320\nlatency average: 29.625 ms\ntps = 539.675140 (including connections establishing)\ntps = 539.677426 (excluding connections establishing)\n\n9.4b3:\n\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 10000\nquery mode: simple\nnumber of clients: 16\nnumber of threads: 4\nduration: 3600 s\nnumber of transactions actually processed: 2422502\nlatency average: 23.777 ms\ntps = 672.816239 (including connections establishing)\ntps = 672.821433 (excluding connections establishing)\n\nI suspect this is due to the improvements in writing less to WAL. 
If\nso, good work, guys!\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n",
"msg_date": "Thu, 16 Oct 2014 13:06:14 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.4 performance improvements test"
}
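One way to test the writing-less-WAL hypothesis directly is to measure the WAL generated by identical runs on each server; a sketch using functions present in both 9.3 and 9.4 (the \gset step needs a 9.3+ psql):

    -- Capture the WAL position before the benchmark run.
    SELECT pg_current_xlog_location() AS start_lsn \gset
    -- ... run the pgbench workload ...
    SELECT pg_size_pretty(
             pg_xlog_location_diff(pg_current_xlog_location(), :'start_lsn')::bigint
           ) AS wal_written;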
] |
[
{
"msg_contents": "Hello there,\n\nI have a strange query plan involving an IS NOT NULL and a LEFT JOIN.\n\nI grant you that the query can be written without the JOIN on \nuser_user_info,\nbut it is generated like this by hibernate. Just changing the IS NOT \nNULL condition\nto the other side of useless JOIN makes a big difference in the query plan :\n\n-- THE BAD ONE : given the selectivity on c.name and c.email, barely \nmore than one row will ever be returned\nexplain analyze select c.*\n from contact_contact c\n left outer join user_user_info u on c.user_info=u.id\n left outer join contact_address a on c.address=a.id\n where lower(c.name)='martelli'\n and c.email='[email protected]' or u.id is not null;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.83..2246.76 rows=59412 width=4012) (actual \ntime=53.645..53.645 rows=0 loops=1)\n Hash Cond: (c.user_info = u.id)\n Filter: (((lower((c.name)::text) = 'martelli'::text) AND \n((c.email)::text = '[email protected]'::text)) OR (u.id IS NOT NULL))\n Rows Removed by Filter: 58247\n -> Seq Scan on contact_contact c (cost=0.00..2022.12 rows=59412 \nwidth=4012) (actual time=0.007..6.892 rows=58247 loops=1)\n -> Hash (cost=1.37..1.37 rows=37 width=8) (actual \ntime=0.029..0.029 rows=37 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on user_user_info u (cost=0.00..1.37 rows=37 \nwidth=8) (actual time=0.004..0.015 rows=37 loops=1)\n Planning time: 0.790 ms\n Execution time: 53.712 ms\n\n-- THE GOOD ONE (test IS NOT NULL on contact0_.user_info instead of \nuserinfo1_.id)\nexplain analyze select c.*\n from contact_contact c\n left outer join user_user_info u on c.user_info=u.id\n left outer join contact_address a on c.address=a.id\n where lower(c.name)='martelli'\n and c.email='[email protected]' or c.user_info is not null;\nQUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on contact_contact c (cost=8.60..16.41 rows=1 \nwidth=4012) (actual time=0.037..0.037 rows=0 loops=1)\n Recheck Cond: (((email)::text = '[email protected]'::text) OR \n(user_info IS NOT NULL))\n Filter: (((lower((name)::text) = 'martelli'::text) AND \n((email)::text = '[email protected]'::text)) OR (user_info IS NOT NULL))\n -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual \ntime=0.034..0.034 rows=0 loops=1)\n -> Bitmap Index Scan on idx_contact_email (cost=0.00..4.30 \nrows=2 width=0) (actual time=0.027..0.027 rows=0 loops=1)\n Index Cond: ((email)::text = '[email protected]'::text)\n -> Bitmap Index Scan on contact_contact_user_info_idx \n(cost=0.00..4.30 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (user_info IS NOT NULL)\n Planning time: 0.602 ms\n Execution time: 0.118 ms\n\nMy tables are as follow, and I use postgres 9.4 :\n\n Table � public.contact_contact �\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description\n------------------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | |\n archived | boolean | | plain | |\n version | integer | | plain | |\n created_on | timestamp without time zone | | plain | |\n updated_on | timestamp without time zone | | plain | |\n actor_ref | character varying(255) | | extended | |\n addressl1 | character varying(255) | | extended | |\n 
comment | text | | extended | |\n contact_partner_ok | boolean | | plain | |\n date_of_birth | date | | plain | |\n email | character varying(255) | | extended | |\n email_pro | character varying(255) | | extended | |\n fax | character varying(255) | | extended | |\n first_name | character varying(255) | | extended | |\n fixed_phone1 | character varying(255) | | extended | |\n fixed_phone2 | character varying(255) | | extended | |\n fixed_phone_pro | character varying(255) | | extended | |\n import_key1 | character varying(255) | | extended | |\n import_key2 | character varying(255) | | extended | |\n koala_id | character varying(255) | | extended | |\n mobile_phone_perso | character varying(255) | | extended | |\n mobile_phone_pro | character varying(255) | | extended | |\n name | character varying(255) | non NULL | extended | |\n ola_email | character varying(255) | | extended | |\n ola_phone | character varying(255) | | extended | |\n person_category_select | character varying(255) | | extended | |\n web_site | character varying(255) | | extended | |\n year_of_birth | integer | | plain | |\n created_by | bigint | | plain | |\n updated_by | bigint | | plain | |\n action_event_source | bigint | | plain | |\n address | bigint | | plain | |\n address_pro | bigint | | plain | |\n jobtitle | bigint | | plain | |\n merged_with | bigint | | plain | |\n nationality_country | bigint | | plain | |\n origin | bigint | | plain | |\n place_of_birth_address | bigint | | plain | |\n title | bigint | | plain | |\n user_info | bigint | | plain | |\n import_origin | character varying(255) | | extended | |\n duplicates | bigint | | plain | |\nIndex :\n \"contact_contact_pkey\" PRIMARY KEY, btree (id)\n \"uk_bx19539x7h0y0w4p4uw9gnqbo\" UNIQUE CONSTRAINT, btree (koala_id)\n \"uk_vg25de8jcu18m89o9dy2n4fe\" UNIQUE CONSTRAINT, btree (import_key1)\n \"contact_contact_action_event_source_idx\" btree (action_event_source)\n \"contact_contact_address_idx\" btree (address)\n \"contact_contact_address_l1_idx\" btree (addressl1)\n \"contact_contact_address_pro_idx\" btree (address_pro)\n \"contact_contact_jobtitle_idx\" btree (jobtitle)\n \"contact_contact_merged_with_idx\" btree (merged_with)\n \"contact_contact_name_idx\" btree (name)\n \"contact_contact_nationality_country_idx\" btree (nationality_country)\n \"contact_contact_origin_idx\" btree (origin)\n \"contact_contact_place_of_birth_address_idx\" btree (place_of_birth_address)\n \"contact_contact_title_idx\" btree (title)\n \"contact_contact_user_info_idx\" btree (user_info)\n \"idx_contact_email\" btree (email)\n \"idx_contact_lower_name\" btree (lower(name::text))\n \"idx_contact_search_name\" btree (lower(name::text), lower(first_name::text))\nContraintes de cl�s �trang�res :\n \"fk_8dj7rw3jrdxk4vxbi6vony0ne\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n \"fk_9s1dhwrvw6lq74fvty6oj2wc5\" FOREIGN KEY (address_pro) REFERENCES contact_address(id)\n \"fk_9wjsgh8lt5ixbshx9pjwmjtk1\" FOREIGN KEY (origin) REFERENCES crm_origin(id)\n \"fk_ad53x8tdando1w1jdlyxcop9v\" FOREIGN KEY (duplicates) REFERENCES contact_contact(id)\n \"fk_edusucr1gdfj99vtm0a70gggg\" FOREIGN KEY (title) REFERENCES contact_title(id)\n \"fk_g7u75rjd754m7evn2alckjvka\" FOREIGN KEY (merged_with) REFERENCES contact_contact(id)\n \"fk_j72hkuq0337v6utjbf85hhvxg\" FOREIGN KEY (action_event_source) REFERENCES crm_action_event_source(id)\n \"fk_k73mcu7swia6uf6qpp4v6lwxf\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_mvpl7wudcdqgitmmsd900od97\" FOREIGN KEY (place_of_birth_address) 
REFERENCES contact_address(id)\n \"fk_onriw4jpgeuvhfk827amxry8k\" FOREIGN KEY (address) REFERENCES contact_address(id)\n \"fk_rpkvno8705gap9ejj4wnnb7hl\" FOREIGN KEY (nationality_country) REFERENCES territory_country(id)\n \"fk_s9fsy33u5a9ke8wee9mc2vpsx\" FOREIGN KEY (user_info) REFERENCES user_user_info(id)\n \"fk_t8uexb8lmgaftjsnn63eoty90\" FOREIGN KEY (jobtitle) REFERENCES contact_jobtitle(id)\n\ncoopener=# \\d+ user_user_info\n Table � public.user_user_info �\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description\n-----------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | |\n archived | boolean | | plain | |\n version | integer | | plain | |\n created_on | timestamp without time zone | | plain | |\n updated_on | timestamp without time zone | | plain | |\n full_name | character varying(255) | | extended | |\n import_key | character varying(255) | | extended | |\n import_username | character varying(255) | | extended | |\n today | timestamp without time zone | | plain | |\n user_system_ok | boolean | | plain | |\n created_by | bigint | | plain | |\n updated_by | bigint | | plain | |\n active_company | bigint | | plain | |\n agency | bigint | | plain | |\n internal_user | bigint | non NULL | plain | |\nIndex :\n \"user_user_info_pkey\" PRIMARY KEY, btree (id)\n \"uk_99o17944ddytysui6b05lxyb2\" UNIQUE CONSTRAINT, btree (import_key)\n \"uk_cqgrw75h35ts19uixn03rkjsu\" UNIQUE CONSTRAINT, btree (internal_user)\n \"uk_jtsvu4r7s12nnh9o2sloqyqv4\" UNIQUE CONSTRAINT, btree (import_username)\n \"user_user_info_active_company_idx\" btree (active_company)\n \"user_user_info_agency_idx\" btree (agency)\n \"user_user_info_full_name_idx\" btree (full_name)\nContraintes de cl�s �trang�res :\n \"fk_cojxp4r7d8n2l135gy4xa4vak\" FOREIGN KEY (active_company) REFERENCES contact_company(id)\n \"fk_cqgrw75h35ts19uixn03rkjsu\" FOREIGN KEY (internal_user) REFERENCES auth_user(id)\n \"fk_k3riohsx7jrhxkxdmxyeqflq1\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_r3e16hs6puibteaby3rk42yg0\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n \"fk_t389sdkhi9owy0xbhec2nqp5w\" FOREIGN KEY (agency) REFERENCES contact_agency(id)\n\ncoopener=# \\d+ contact_address\n Table � public.contact_address �\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description\n----------------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | |\n archived | boolean | | plain | |\n version | integer | | plain | |\n created_on | timestamp without time zone | | plain | |\n updated_on | timestamp without time zone | | plain | |\n addressl2 | character varying(255) | | extended | |\n addressl3 | character varying(255) | | extended | |\n addressl4 | character varying(255) | | extended | |\n addressl5 | character varying(255) | | extended | |\n addressl6 | character varying(255) | | extended | |\n certified_ok | boolean | | plain | |\n consumption_place_ok | boolean | | plain | |\n full_name | character varying(255) | | extended | |\n insee_code | character varying(255) | | extended | |\n koala_id | character varying(255) | | extended | |\n created_by | bigint | | plain | |\n updated_by | bigint | | plain | |\n addressl7country | bigint | | plain | |\n commune | bigint | | plain | |\nIndex :\n \"contact_address_pkey\" PRIMARY KEY, btree (id)\n \"contact_address_address_l7_country_idx\" btree (addressl7country)\n 
\"contact_address_commune_idx\" btree (commune)\n \"contact_address_full_name_idx\" btree (full_name)\nContraintes de cl�s �trang�res :\n \"fk_4yx7nnewflhyjdm5tue5qntbg\" FOREIGN KEY (commune) REFERENCES territory_commune(id)\n \"fk_5lwaygtve0ol8ma53picsdef\" FOREIGN KEY (addressl7country) REFERENCES territory_country(id)\n \"fk_p9svu5ssynimpuu0is3j396lt\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_rm0lcgnys2n97ad62jkm53qlt\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n\n\nRegards,\nLaurent\n\n\n\n\n\n\n\nHello there,\n\n I have a strange query plan involving an IS NOT NULL and a LEFT\n JOIN. \n\n I grant you that the query can be written without the JOIN on user_user_info,\n but it is generated like this by hibernate. Just changing the IS\n NOT NULL condition \n to the other side of useless JOIN makes a big difference in the\n query plan :\n\n -- THE BAD ONE : given the selectivity on c.name and c.email,\n barely more than one row will ever be returned\n explain analyze select c.* \n from contact_contact c \n left outer join user_user_info u on c.user_info=u.id \n left outer join contact_address a on c.address=a.id \n where lower(c.name)='martelli' \n and c.email='[email protected]' or u.id is not null;\n QUERY\n PLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.83..2246.76 rows=59412 width=4012)\n (actual time=53.645..53.645 rows=0 loops=1)\n Hash Cond: (c.user_info = u.id)\n Filter: (((lower((c.name)::text) = 'martelli'::text) AND\n ((c.email)::text = '[email protected]'::text)) OR (u.id IS NOT\n NULL))\n Rows Removed by Filter: 58247\n -> Seq Scan on contact_contact c (cost=0.00..2022.12\n rows=59412 width=4012) (actual time=0.007..6.892 rows=58247\n loops=1)\n -> Hash (cost=1.37..1.37 rows=37 width=8) (actual\n time=0.029..0.029 rows=37 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on user_user_info u (cost=0.00..1.37\n rows=37 width=8) (actual time=0.004..0.015 rows=37 loops=1)\n Planning time: 0.790 ms\n Execution time: 53.712 ms\n\n -- THE GOOD ONE (test IS NOT NULL on contact0_.user_info\n instead of userinfo1_.id)\n explain analyze select c.* \n from contact_contact c \n left outer join user_user_info u on\n c.user_info=u.id \n left outer join contact_address a on\n c.address=a.id \n where lower(c.name)='martelli' \n and c.email='[email protected]' or c.user_info is not null;\n \n QUERY\n PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on contact_contact c (cost=8.60..16.41 rows=1\n width=4012) (actual time=0.037..0.037 rows=0 loops=1)\n Recheck Cond: (((email)::text = '[email protected]'::text) OR\n (user_info IS NOT NULL))\n Filter: (((lower((name)::text) = 'martelli'::text) AND\n ((email)::text = '[email protected]'::text)) OR (user_info IS NOT\n NULL))\n -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual\n time=0.034..0.034 rows=0 loops=1)\n -> Bitmap Index Scan on idx_contact_email \n (cost=0.00..4.30 rows=2 width=0) (actual time=0.027..0.027 rows=0\n loops=1)\n Index Cond: ((email)::text =\n '[email protected]'::text)\n -> Bitmap Index Scan on\n contact_contact_user_info_idx (cost=0.00..4.30 rows=1 width=0)\n (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (user_info IS NOT NULL)\n Planning time: 0.602 ms\n Execution time: 0.118 ms\nMy tables are as follow, and I use postgres 9.4 :\n\n Table « 
public.contact_contact »\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description \n------------------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | | \n archived | boolean | | plain | | \n version | integer | | plain | | \n created_on | timestamp without time zone | | plain | | \n updated_on | timestamp without time zone | | plain | | \n actor_ref | character varying(255) | | extended | | \n addressl1 | character varying(255) | | extended | | \n comment | text | | extended | | \n contact_partner_ok | boolean | | plain | | \n date_of_birth | date | | plain | | \n email | character varying(255) | | extended | | \n email_pro | character varying(255) | | extended | | \n fax | character varying(255) | | extended | | \n first_name | character varying(255) | | extended | | \n fixed_phone1 | character varying(255) | | extended | | \n fixed_phone2 | character varying(255) | | extended | | \n fixed_phone_pro | character varying(255) | | extended | | \n import_key1 | character varying(255) | | extended | | \n import_key2 | character varying(255) | | extended | | \n koala_id | character varying(255) | | extended | | \n mobile_phone_perso | character varying(255) | | extended | | \n mobile_phone_pro | character varying(255) | | extended | | \n name | character varying(255) | non NULL | extended | | \n ola_email | character varying(255) | | extended | | \n ola_phone | character varying(255) | | extended | | \n person_category_select | character varying(255) | | extended | | \n web_site | character varying(255) | | extended | | \n year_of_birth | integer | | plain | | \n created_by | bigint | | plain | | \n updated_by | bigint | | plain | | \n action_event_source | bigint | | plain | | \n address | bigint | | plain | | \n address_pro | bigint | | plain | | \n jobtitle | bigint | | plain | | \n merged_with | bigint | | plain | | \n nationality_country | bigint | | plain | | \n origin | bigint | | plain | | \n place_of_birth_address | bigint | | plain | | \n title | bigint | | plain | | \n user_info | bigint | | plain | | \n import_origin | character varying(255) | | extended | | \n duplicates | bigint | | plain | | \nIndex :\n \"contact_contact_pkey\" PRIMARY KEY, btree (id)\n \"uk_bx19539x7h0y0w4p4uw9gnqbo\" UNIQUE CONSTRAINT, btree (koala_id)\n \"uk_vg25de8jcu18m89o9dy2n4fe\" UNIQUE CONSTRAINT, btree (import_key1)\n \"contact_contact_action_event_source_idx\" btree (action_event_source)\n \"contact_contact_address_idx\" btree (address)\n \"contact_contact_address_l1_idx\" btree (addressl1)\n \"contact_contact_address_pro_idx\" btree (address_pro)\n \"contact_contact_jobtitle_idx\" btree (jobtitle)\n \"contact_contact_merged_with_idx\" btree (merged_with)\n \"contact_contact_name_idx\" btree (name)\n \"contact_contact_nationality_country_idx\" btree (nationality_country)\n \"contact_contact_origin_idx\" btree (origin)\n \"contact_contact_place_of_birth_address_idx\" btree (place_of_birth_address)\n \"contact_contact_title_idx\" btree (title)\n \"contact_contact_user_info_idx\" btree (user_info)\n \"idx_contact_email\" btree (email)\n \"idx_contact_lower_name\" btree (lower(name::text))\n \"idx_contact_search_name\" btree (lower(name::text), lower(first_name::text))\nContraintes de clés étrangères :\n \"fk_8dj7rw3jrdxk4vxbi6vony0ne\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n \"fk_9s1dhwrvw6lq74fvty6oj2wc5\" FOREIGN KEY (address_pro) REFERENCES contact_address(id)\n 
\"fk_9wjsgh8lt5ixbshx9pjwmjtk1\" FOREIGN KEY (origin) REFERENCES crm_origin(id)\n \"fk_ad53x8tdando1w1jdlyxcop9v\" FOREIGN KEY (duplicates) REFERENCES contact_contact(id)\n \"fk_edusucr1gdfj99vtm0a70gggg\" FOREIGN KEY (title) REFERENCES contact_title(id)\n \"fk_g7u75rjd754m7evn2alckjvka\" FOREIGN KEY (merged_with) REFERENCES contact_contact(id)\n \"fk_j72hkuq0337v6utjbf85hhvxg\" FOREIGN KEY (action_event_source) REFERENCES crm_action_event_source(id)\n \"fk_k73mcu7swia6uf6qpp4v6lwxf\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_mvpl7wudcdqgitmmsd900od97\" FOREIGN KEY (place_of_birth_address) REFERENCES contact_address(id)\n \"fk_onriw4jpgeuvhfk827amxry8k\" FOREIGN KEY (address) REFERENCES contact_address(id)\n \"fk_rpkvno8705gap9ejj4wnnb7hl\" FOREIGN KEY (nationality_country) REFERENCES territory_country(id)\n \"fk_s9fsy33u5a9ke8wee9mc2vpsx\" FOREIGN KEY (user_info) REFERENCES user_user_info(id)\n \"fk_t8uexb8lmgaftjsnn63eoty90\" FOREIGN KEY (jobtitle) REFERENCES contact_jobtitle(id)\n\ncoopener=# \\d+ user_user_info\n Table « public.user_user_info »\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description \n-----------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | | \n archived | boolean | | plain | | \n version | integer | | plain | | \n created_on | timestamp without time zone | | plain | | \n updated_on | timestamp without time zone | | plain | | \n full_name | character varying(255) | | extended | | \n import_key | character varying(255) | | extended | | \n import_username | character varying(255) | | extended | | \n today | timestamp without time zone | | plain | | \n user_system_ok | boolean | | plain | | \n created_by | bigint | | plain | | \n updated_by | bigint | | plain | | \n active_company | bigint | | plain | | \n agency | bigint | | plain | | \n internal_user | bigint | non NULL | plain | | \nIndex :\n \"user_user_info_pkey\" PRIMARY KEY, btree (id)\n \"uk_99o17944ddytysui6b05lxyb2\" UNIQUE CONSTRAINT, btree (import_key)\n \"uk_cqgrw75h35ts19uixn03rkjsu\" UNIQUE CONSTRAINT, btree (internal_user)\n \"uk_jtsvu4r7s12nnh9o2sloqyqv4\" UNIQUE CONSTRAINT, btree (import_username)\n \"user_user_info_active_company_idx\" btree (active_company)\n \"user_user_info_agency_idx\" btree (agency)\n \"user_user_info_full_name_idx\" btree (full_name)\nContraintes de clés étrangères :\n \"fk_cojxp4r7d8n2l135gy4xa4vak\" FOREIGN KEY (active_company) REFERENCES contact_company(id)\n \"fk_cqgrw75h35ts19uixn03rkjsu\" FOREIGN KEY (internal_user) REFERENCES auth_user(id)\n \"fk_k3riohsx7jrhxkxdmxyeqflq1\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_r3e16hs6puibteaby3rk42yg0\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n \"fk_t389sdkhi9owy0xbhec2nqp5w\" FOREIGN KEY (agency) REFERENCES contact_agency(id)\n\ncoopener=# \\d+ contact_address\n Table « public.contact_address »\n Colonne | Type | Modificateurs | Stockage | Cible de statistiques | Description \n----------------------+-----------------------------+---------------+----------+-----------------------+-------------\n id | bigint | non NULL | plain | | \n archived | boolean | | plain | | \n version | integer | | plain | | \n created_on | timestamp without time zone | | plain | | \n updated_on | timestamp without time zone | | plain | | \n addressl2 | character varying(255) | | extended | | \n addressl3 | character varying(255) | | extended | | \n addressl4 | character varying(255) | | extended | | \n 
addressl5 | character varying(255) | | extended | | \n addressl6 | character varying(255) | | extended | | \n certified_ok | boolean | | plain | | \n consumption_place_ok | boolean | | plain | | \n full_name | character varying(255) | | extended | | \n insee_code | character varying(255) | | extended | | \n koala_id | character varying(255) | | extended | | \n created_by | bigint | | plain | | \n updated_by | bigint | | plain | | \n addressl7country | bigint | | plain | | \n commune | bigint | | plain | | \nIndex :\n \"contact_address_pkey\" PRIMARY KEY, btree (id)\n \"contact_address_address_l7_country_idx\" btree (addressl7country)\n \"contact_address_commune_idx\" btree (commune)\n \"contact_address_full_name_idx\" btree (full_name)\nContraintes de clés étrangères :\n \"fk_4yx7nnewflhyjdm5tue5qntbg\" FOREIGN KEY (commune) REFERENCES territory_commune(id)\n \"fk_5lwaygtve0ol8ma53picsdef\" FOREIGN KEY (addressl7country) REFERENCES territory_country(id)\n \"fk_p9svu5ssynimpuu0is3j396lt\" FOREIGN KEY (updated_by) REFERENCES auth_user(id)\n \"fk_rm0lcgnys2n97ad62jkm53qlt\" FOREIGN KEY (created_by) REFERENCES auth_user(id)\n\n\nRegards,\nLaurent",
"msg_date": "Sun, 19 Oct 2014 06:10:50 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "On Sun, Oct 19, 2014 at 5:10 PM, Laurent Martelli <\[email protected]> wrote:\n\n> Hello there,\n>\n> I have a strange query plan involving an IS NOT NULL and a LEFT JOIN.\n>\n> I grant you that the query can be written without the JOIN on\n> user_user_info,\n> but it is generated like this by hibernate. Just changing the IS NOT NULL\n> condition\n> to the other side of useless JOIN makes a big difference in the query plan\n> :\n>\n> -- THE BAD ONE : given the selectivity on c.name and c.email, barely more\n> than one row will ever be returned\n>\n\nBut it looks like you're ignoring the fact that the OR condition would\nforce the query to match not only the user and the email, but also any row\nthat finds a match in the user_user_info table, which going by the\nplanner's estimates, that's every row in the contract_contract table. This\nis why the planner chooses a seqscan on the contract_contract table instead\nof using the index on lower(name).\n\nIs it really your intention to get all rows that find a this martelli\ncontract that has this email, and along with that, get every contract that\nhas a not null user_info record?\n\nI see that you have a foreign key on c.user_info to reference the user, so\nthis should be matching everything with a non null user_info record.\n\n\nexplain analyze select c.*\n> from contact_contact c\n> left outer join user_user_info u on c.user_info=u.id\n> left outer join contact_address a on c.address=a.id\n> where lower(c.name)='martelli'\n> and c.email='[email protected]' or u.id is not null;\n>\n QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=1.83..2246.76 rows=59412 width=4012) (actual\n> time=53.645..53.645 rows=0 loops=1)\n> Hash Cond: (c.user_info = u.id)\n> Filter: (((lower((c.name)::text) = 'martelli'::text) AND\n> ((c.email)::text = '[email protected]'::text)) OR (u.id IS NOT NULL))\n> Rows Removed by Filter: 58247\n> -> Seq Scan on contact_contact c (cost=0.00..2022.12 rows=59412\n> width=4012) (actual time=0.007..6.892 rows=58247 loops=1)\n> -> Hash (cost=1.37..1.37 rows=37 width=8) (actual time=0.029..0.029\n> rows=37 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 2kB\n> -> Seq Scan on user_user_info u (cost=0.00..1.37 rows=37\n> width=8) (actual time=0.004..0.015 rows=37 loops=1)\n> Planning time: 0.790 ms\n> Execution time: 53.712 ms\n>\n> -- THE GOOD ONE (test IS NOT NULL on contact0_.user_info instead of\n> userinfo1_.id)\n> explain analyze select c.*\n> from contact_contact c\n> left outer join user_user_info u on c.user_info=u.id\n> left outer join contact_address a on c.address=a.id\n> where lower(c.name)='martelli'\n> and c.email='[email protected]' or c.user_info is not null;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on contact_contact c (cost=8.60..16.41 rows=1\n> width=4012) (actual time=0.037..0.037 rows=0 loops=1)\n> Recheck Cond: (((email)::text = '[email protected]'::text) OR (user_info\n> IS NOT NULL))\n> Filter: (((lower((name)::text) = 'martelli'::text) AND ((email)::text =\n> '[email protected]'::text)) OR (user_info IS NOT NULL))\n> -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual\n> time=0.034..0.034 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_contact_email (cost=0.00..4.30\n> rows=2 width=0) (actual time=0.027..0.027 rows=0 loops=1)\n> 
Index Cond: ((email)::text = '[email protected]'::text)\n> -> Bitmap Index Scan on contact_contact_user_info_idx\n> (cost=0.00..4.30 rows=1 width=0) (actual time=0.007..0.007 rows=0 loops=1)\n> Index Cond: (user_info IS NOT NULL)\n> Planning time: 0.602 ms\n> Execution time: 0.118 ms\n>\n>\n>\nIf you look closely at the 2nd query plan, you'll see that no joins are\nperformed, and it's only the contact_contact table that's looked at. This\nis because PostgreSQL sees that none of the columns from the 2 tables\nwhich are being left joined to are used, and also that the columns that\nyou're joining to on these tables are unique, therefore joining to them\ncannot duplicate any rows, and since these are left joined, if there was no\nmatching row, then it wouldn't filter out rows from the contact_contact\ntable, as it would with INNER JOINs. The planner sees that these left joins\nare pointless, so just removes them from the plan.\n
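\nTo see that join removal in isolation, here is a minimal sketch (parent and\nchild are hypothetical tables, not your schema):\n\ncreate table parent (id bigint primary key);\ncreate table child (id bigint primary key,\n parent_id bigint references parent(id));\n\n-- p.id is unique and no parent column is referenced, so the planner can\n-- drop the left join entirely; the plan should show only a scan of child:\nexplain select c.* from child c left join parent p on c.parent_id = p.id;\n\n-- but reference p.id anywhere, and the join has to stay:\nexplain select c.* from child c left join parent p on c.parent_id = p.id\n where p.id is not null;\n\nRegards\n\nDavid Rowley",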
\n explain analyze select c.* \n from contact_contact c \n left outer join user_user_info u on c.user_info=u.id \n left outer join contact_address a on c.address=a.id \n where lower(c.name)='martelli' \n and c.email='[email protected]' or u.id is not null; QUERY\n PLAN \n--------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=1.83..2246.76 rows=59412 width=4012)\n (actual time=53.645..53.645 rows=0 loops=1)\n Hash Cond: (c.user_info = u.id)\n Filter: (((lower((c.name)::text) = 'martelli'::text) AND\n ((c.email)::text = '[email protected]'::text)) OR (u.id IS NOT\n NULL))\n Rows Removed by Filter: 58247\n -> Seq Scan on contact_contact c (cost=0.00..2022.12\n rows=59412 width=4012) (actual time=0.007..6.892 rows=58247\n loops=1)\n -> Hash (cost=1.37..1.37 rows=37 width=8) (actual\n time=0.029..0.029 rows=37 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 2kB\n -> Seq Scan on user_user_info u (cost=0.00..1.37\n rows=37 width=8) (actual time=0.004..0.015 rows=37 loops=1)\n Planning time: 0.790 ms\n Execution time: 53.712 ms\n\n -- THE GOOD ONE (test IS NOT NULL on contact0_.user_info\n instead of userinfo1_.id)\n explain analyze select c.* \n from contact_contact c \n left outer join user_user_info u on\n c.user_info=u.id \n left outer join contact_address a on\n c.address=a.id \n where lower(c.name)='martelli' \n and c.email='[email protected]' or c.user_info is not null;\n \n QUERY\n PLAN \n \n--------------------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on contact_contact c (cost=8.60..16.41 rows=1\n width=4012) (actual time=0.037..0.037 rows=0 loops=1)\n Recheck Cond: (((email)::text = '[email protected]'::text) OR\n (user_info IS NOT NULL))\n Filter: (((lower((name)::text) = 'martelli'::text) AND\n ((email)::text = '[email protected]'::text)) OR (user_info IS NOT\n NULL))\n -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual\n time=0.034..0.034 rows=0 loops=1)\n -> Bitmap Index Scan on idx_contact_email \n (cost=0.00..4.30 rows=2 width=0) (actual time=0.027..0.027 rows=0\n loops=1)\n Index Cond: ((email)::text =\n '[email protected]'::text)\n -> Bitmap Index Scan on\n contact_contact_user_info_idx (cost=0.00..4.30 rows=1 width=0)\n (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (user_info IS NOT NULL)\n Planning time: 0.602 ms\n Execution time: 0.118 ms\nIf you look closely at the 2nd query plan, you'll see that no joins are performed, and it's only the contract_contract table that's looked at. This is because PostgresSQL sees that none of the columns from the 2 tables which are being left joined to are used, and also that the columns that you're joining to on these tables are unique, therefore joining to them cannot duplicate any rows, and since these are left joined, if there was no matching row, then it wouldn't filter out rows from the contract_contract table, as it would with INNER JOINs. The planner sees that these left joins are pointless, so just removes them from the plan.RegardsDavid Rowley",
"msg_date": "Sun, 19 Oct 2014 21:41:57 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "Hi David,\n\nDo we agree that both queries are identical ? Since we join on \nc.user_info=u.id <http://u.id> having u.id <http://u.id> is not null or \nc.user_info is not null in the where clause is the same, isn't it ?\n\nSince c.user_info=u.id <http://u.id> the condition onu.id is not null \ndoes not use any *new* information from user_user_info.\n\nRegards,\nLaurent\n\nLe 19/10/2014 10:41, David Rowley a écrit :\n> On Sun, Oct 19, 2014 at 5:10 PM, Laurent Martelli \n> <[email protected] <mailto:[email protected]>> \n> wrote:\n>\n> Hello there,\n>\n> I have a strange query plan involving an IS NOT NULL and a LEFT JOIN.\n>\n> I grant you that the query can be written without the JOIN on\n> user_user_info,\n> but it is generated like this by hibernate. Just changing the IS\n> NOT NULL condition\n> to the other side of useless JOIN makes a big difference in the\n> query plan :\n>\n> -- THE BAD ONE : given the selectivity on c.name <http://c.name>\n> and c.email, barely more than one row will ever be returned\n>\n>\n> But it looks like you're ignoring the fact that the OR condition would \n> force the query to match not only the user and the email, but also any \n> row that finds a match in the user_user_info table, which going by the \n> planner's estimates, that's every row in the contract_contract table. \n> This is why the planner chooses a seqscan on the contract_contract \n> table instead of using the index on lower(name).\n>\n> Is it really your intention to get all rows that find a this martelli \n> contract that has this email, and along with that, get every contract \n> that has a not null user_info record?\n>\n> I see that you have a foreign key on c.user_info to reference the \n> user, so this should be matching everything with a non null user_info \n> record.\n>\n> explain analyze select c.*\n> from contact_contact c\n> left outer join user_user_info u on c.user_info=u.id\n> <http://u.id>\n> left outer join contact_address a on c.address=a.id\n> <http://a.id>\n> where lower(c.name <http://c.name>)='martelli'\n> and c.email='[email protected] <mailto:[email protected]>' or\n> u.id <http://u.id> is not null;\n>\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------\n> Hash Left Join (cost=1.83..2246.76 rows=59412 width=4012)\n> (actual time=53.645..53.645 rows=0 loops=1)\n> Hash Cond: (c.user_info = u.id <http://u.id>)\n> Filter: (((lower((c.name <http://c.name>)::text) =\n> 'martelli'::text) AND ((c.email)::text = '[email protected]\n> <mailto:[email protected]>'::text)) OR (u.id <http://u.id> IS NOT NULL))\n> Rows Removed by Filter: 58247\n> -> Seq Scan on contact_contact c (cost=0.00..2022.12\n> rows=59412 width=4012) (actual time=0.007..6.892 rows=58247 loops=1)\n> -> Hash (cost=1.37..1.37 rows=37 width=8) (actual\n> time=0.029..0.029 rows=37 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 2kB\n> -> Seq Scan on user_user_info u (cost=0.00..1.37 rows=37\n> width=8) (actual time=0.004..0.015 rows=37 loops=1)\n> Planning time: 0.790 ms\n> Execution time: 53.712 ms\n>\n> -- THE GOOD ONE (test IS NOT NULL on contact0_.user_info instead\n> of userinfo1_.id)\n> explain analyze select c.*\n> from contact_contact c\n> left outer join user_user_info u on c.user_info=u.id\n> <http://u.id>\n> left outer join contact_address a on c.address=a.id\n> <http://a.id>\n> where lower(c.name <http://c.name>)='martelli'\n> and c.email='[email protected] <mailto:[email protected]>' or\n> 
c.user_info is not null;\n> QUERY PLAN\n> --------------------------------------------------------------------------------------------------------------------------------------------\n> Bitmap Heap Scan on contact_contact c (cost=8.60..16.41 rows=1\n> width=4012) (actual time=0.037..0.037 rows=0 loops=1)\n> Recheck Cond: (((email)::text = '[email protected]'::text) OR (user_info IS NOT NULL))\n> Filter: (((lower((name)::text) = 'martelli'::text) AND\n> ((email)::text = '[email protected]'::text)) OR (user_info IS NOT NULL))\n> -> BitmapOr (cost=8.60..8.60 rows=2 width=0) (actual\n> time=0.034..0.034 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_contact_email \n> (cost=0.00..4.30 rows=2 width=0) (actual time=0.027..0.027 rows=0\n> loops=1)\n> Index Cond: ((email)::text = '[email protected]'::text)\n> -> Bitmap Index Scan on contact_contact_user_info_idx \n> (cost=0.00..4.30 rows=1 width=0) (actual time=0.007..0.007 rows=0\n> loops=1)\n> Index Cond: (user_info IS NOT NULL)\n> Planning time: 0.602 ms\n> Execution time: 0.118 ms\n>\n>\n> If you look closely at the 2nd query plan, you'll see that no joins \n> are performed, and it's only the contact_contact table that's looked \n> at. This is because PostgreSQL sees that none of the columns from the \n> 2 tables which are being left joined to are used, and also that the \n> columns that you're joining to on these tables are unique, therefore \n> joining to them cannot duplicate any rows, and since these are left \n> joined, if there was no matching row, then it wouldn't filter out rows \n> from the contact_contact table, as it would with INNER JOINs. The \n> planner sees that these left joins are pointless, so just removes them \n> from the plan.\n>\n> Regards\n>\n> David Rowley",
"msg_date": "Mon, 20 Oct 2014 11:29:35 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "Laurent Martelli <[email protected]> writes:\n> Do we agree that both queries are identical ?\n\nNo, they *aren't* identical. Go consult any SQL reference. Left join\nconditions don't work the way you seem to be thinking: after the join,\nthe RHS column might be null, rather than equal to the LHS column.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Oct 2014 09:58:08 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "Le 20/10/2014 15:58, Tom Lane a �crit :\n> Laurent Martelli <[email protected]> writes:\n>> Do we agree that both queries are identical ?\n> No, they *aren't* identical. Go consult any SQL reference. Left join\n> conditions don't work the way you seem to be thinking: after the join,\n> the RHS column might be null, rather than equal to the LHS column.\nYes, I was wrong to assume that c.user_info=u.id because of the LEFT JOIN.\n\nBut since I only want rows where u.id IS NOT NULL, in any case I will \nalso have c.user_info IS NOT NULL.\n\nAlso, having a foreign key, if c.user_info is not null, it will have a \nmatch in u. So in that case, either both c.user_info and c.id are null \nin the result rows, or they are equal.\n\nRegards,\nLaurent\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Oct 2014 23:02:33 +0200",
"msg_from": "Laurent Martelli <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "Laurent Martelli wrote\n> Le 20/10/2014 15:58, Tom Lane a écrit :\n>> Laurent Martelli <\n\n> laurent.martelli@\n\n> > writes:\n>>> Do we agree that both queries are identical ?\n>> No, they *aren't* identical. Go consult any SQL reference. Left join\n>> conditions don't work the way you seem to be thinking: after the join,\n>> the RHS column might be null, rather than equal to the LHS column.\n> Yes, I was wrong to assume that c.user_info=u.id because of the LEFT JOIN.\n> \n> But since I only want rows where u.id IS NOT NULL, in any case I will \n> also have c.user_info IS NOT NULL.\n> \n> Also, having a foreign key, if c.user_info is not null, it will have a \n> match in u. So in that case, either both c.user_info and c.id are null \n> in the result rows, or they are equal.\n\nThe planner only expends so much effort converting between equivalent query\nforms. By adding u.id IS NOT NULL you are saying that you really meant to\nuse INNER JOIN instead of LEFT JOIN but whether the planner can and/or does\nact on that information in the WHERE clause to modify its joins is beyond my\nknowledge. It doesn't seem to and probably correctly isn't worth adding the\nplanner cycles to fix a poorly written/generated query on-the-fly.\n\n\nNow that it has been pointed out that the two queries you supplied are\nsemantically different it is unclear what your point here is. It is known\nthat Hibernate (and humans too) will generate sub-optimal plans that can be\nrewritten using relational algebra and better optimized for having done so. \nBut such work takes resources that would be expended for every single query\nwhile manually rewriting the sub-optimal query solves the problem\nonce-and-for-all.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/IS-NOT-NULL-and-LEFT-JOIN-tp5823591p5823737.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Oct 2014 15:19:50 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "David G Johnston wrote\n> \n> Laurent Martelli wrote\n>> Le 20/10/2014 15:58, Tom Lane a écrit :\n>>> Laurent Martelli <\n\n>> laurent.martelli@\n\n>> > writes:\n>>>> Do we agree that both queries are identical ?\n>>> No, they *aren't* identical. Go consult any SQL reference. Left join\n>>> conditions don't work the way you seem to be thinking: after the join,\n>>> the RHS column might be null, rather than equal to the LHS column.\n>> Yes, I was wrong to assume that c.user_info=u.id because of the LEFT\n>> JOIN.\n>> \n>> But since I only want rows where u.id IS NOT NULL, in any case I will \n>> also have c.user_info IS NOT NULL.\n>> \n>> Also, having a foreign key, if c.user_info is not null, it will have a \n>> match in u. So in that case, either both c.user_info and c.id are null \n>> in the result rows, or they are equal.\n> The planner only expends so much effort converting between equivalent\n> query forms. By adding u.id IS NOT NULL you are saying that you really\n> meant to use INNER JOIN instead of LEFT JOIN but whether the planner can\n> and/or does act on that information in the WHERE clause to modify its\n> joins is beyond my knowledge. It doesn't seem to and probably correctly\n> isn't worth adding the planner cycles to fix a poorly written/generated\n> query on-the-fly.\n> \n> \n> Now that it has been pointed out that the two queries you supplied are\n> semantically different it is unclear what your point here is. It is known\n> that Hibernate (and humans too) will generate sub-optimal plans that can\n> be rewritten using relational algebra and better optimized for having done\n> so. But such work takes resources that would be expended for every single\n> query while manually rewriting the sub-optimal query solves the problem\n> once-and-for-all.\n> \n> David J.\n\nDidn't sound right what I wrote above...\n\nThe presence of the \"OR\" screws things up even further since it does force\nthe use of LEFT JOIN mechanics for the single case where the name and e-mail\nmatch.\n\nI would maybe try a UNION DISTINCT query instead of an OR clause if you want\nto have a query that performs better than the Hibernate one...otherwise\nothers more knowledgeable than myself have not made any indication that the\nplanner is unintentionally deficient in its handling of your original query.\n\nYou may try posting your actual question, and not the SQL, and see if that\nsparks any suggestions.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/IS-NOT-NULL-and-LEFT-JOIN-tp5823591p5823739.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Oct 2014 15:30:05 -0700 (PDT)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "On Tue, Oct 21, 2014 at 2:58 AM, Tom Lane <[email protected]> wrote:\n\n> Laurent Martelli <[email protected]> writes:\n> > Do we agree that both queries are identical ?\n>\n> No, they *aren't* identical. Go consult any SQL reference. Left join\n> conditions don't work the way you seem to be thinking: after the join,\n> the RHS column might be null, rather than equal to the LHS column.\n>\n>\n>\nFor what it's worth I'd say they are identical, at least, if you discount\ndeferring foreign key constraints or also executing the query from within\na volatile function which was called by a query which just updated the\nuser_info table to break referential integrity.\n\nThe presence of the foreign key on contract_contract.user_info which\nreferences user_user_info.id means that any non-null\ncontract_contract.user_info record must reference a valid user_user_info\nrecord, therefore the join is not required to prove that a non nulled\nuser_info contract records match a user info record, therefore the join to\ncheck it exists is pretty much pointless in just about all cases that\nyou're likely to care about.\n\nAlthough, saying that I'm still a bit confused about the question. Are you\nasking if there's some way to get PostgreSQL to run the 1st query faster?\nOr are you asking if both queries are equivalent?\n\nRegards\n\nDavid Rowley\n\nOn Tue, Oct 21, 2014 at 2:58 AM, Tom Lane <[email protected]> wrote:Laurent Martelli <[email protected]> writes:\n> Do we agree that both queries are identical ?\n\nNo, they *aren't* identical. Go consult any SQL reference. Left join\nconditions don't work the way you seem to be thinking: after the join,\nthe RHS column might be null, rather than equal to the LHS column.\nFor what it's worth I'd say they are identical, at least, if you discount deferring foreign key constraints or also executing the query from within a volatile function which was called by a query which just updated the user_info table to break referential integrity.The presence of the foreign key on contract_contract.user_info which references user_user_info.id means that any non-null contract_contract.user_info record must reference a valid user_user_info record, therefore the join is not required to prove that a non nulled user_info contract records match a user info record, therefore the join to check it exists is pretty much pointless in just about all cases that you're likely to care about.Although, saying that I'm still a bit confused about the question. Are you asking if there's some way to get PostgreSQL to run the 1st query faster? Or are you asking if both queries are equivalent?RegardsDavid Rowley",
"msg_date": "Tue, 21 Oct 2014 21:44:51 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
},
{
"msg_contents": "\nLe Mardi 21 Octobre 2014 10:44 CEST, David Rowley <[email protected]> a écrit:\n\n> For what it's worth I'd say they are identical, at least, if you discount\n> deferring foreign key constraints or also executing the query from within\n> a volatile function which was called by a query which just updated the\n> user_info table to break referential integrity.\n\nI must say I had not thought of that.\n\n> The presence of the foreign key on contract_contract.user_info which\n> references user_user_info.id means that any non-null\n> contract_contract.user_info record must reference a valid user_user_info\n> record, therefore the join is not required to prove that a non nulled\n> user_info contract records match a user info record, therefore the join to\n> check it exists is pretty much pointless in just about all cases that\n> you're likely to care about.\n>\n> Although, saying that I'm still a bit confused about the question. Are you\n> asking if there's some way to get PostgreSQL to run the 1st query faster?\n> Or are you asking if both queries are equivalent?\n\nI was asking for a way to make it run faster. Given that it returns at most a few rows found by an index, I was thinking it could be made to run faster.\n\nBut I agree that the query is not well written (well generated by hibernate) considering the result I want.\n\nRegards,\nLaurent\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 08:29:18 +0200",
"msg_from": "\"Laurent Martelli\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: IS NOT NULL and LEFT JOIN"
}
] |
[
{
"msg_contents": "We are using Postgres for the first time after being SQLServer users for a long time so forgive for being noobs.\n\nWe are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins Postgres query planner can't possibly use an exhaustive search so it drops into a heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available.\n\nQuery plan here (sorry had to anonymize):\nhttp://explain.depesz.com/s/Uml\n\nLine 30 is one of the pain points where a full table scan is running on 4.2 million rows even though there are indexes on oscar_bravo.foxtrot_four and oscar_charlie.foxtrot_four\n\nWe've tried to play around with the join_collapse_limit value by upping it from the default of 8 to 10 or 12 but it doesn't seem to help much. Cranking the value up to an unreasonable value of 20 does shave some seconds off the query time but not substantially (explain plan with the value set to 20: http://explain.depesz.com/s/sW6).\n\nWe haven't tried playing around with the geqo_threshold at this point.\n\nAny thoughts on ways to speed up the run time of this query or any other Postgres settings we should be aware of when dealing with this unusually large number of joins?\n\nThanks in advance\n\n\n\nMarco Di Cesare\n\n\n\n\n\n\n\n\n\n\n\nWe are using Postgres for the first time after being SQLServer users for a long time so forgive for being noobs.\n \nWe are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins Postgres query planner can't possibly use an exhaustive search so it drops into a heuristics algorithm.\n Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available. \n\n \nQuery plan here (sorry had to anonymize):\nhttp://explain.depesz.com/s/Uml\n \nLine 30 is one of the pain points where a full table scan is running on 4.2 million rows even though there are indexes on oscar_bravo.foxtrot_four and oscar_charlie.foxtrot_four\n \nWe've tried to play around with the join_collapse_limit value by upping it from the default of 8 to 10 or 12 but it doesn't seem to help much. Cranking the value up to an unreasonable value of 20 does shave some seconds off the query time\n but not substantially (explain plan with the value set to 20: http://explain.depesz.com/s/sW6).\n \nWe haven't tried playing around with the geqo_threshold at this point.\n\n \nAny thoughts on ways to speed up the run time of this query or any other Postgres settings we should be aware of when dealing with this unusually large number of joins?\n\n \nThanks in advance\n \n \n \nMarco Di Cesare",
"msg_date": "Mon, 20 Oct 2014 20:32:56 +0000",
"msg_from": "Marco Di Cesare <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query with large number of joins"
},
{
"msg_contents": "Marco Di Cesare <[email protected]> writes:\n> We are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins Postgres query planner can't possibly use an exhaustive search so it drops into a heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available.\n\n> Query plan here (sorry had to anonymize):\n> http://explain.depesz.com/s/Uml\n\nIt's difficult to make any detailed comments when you've shown us only an\nallegedly-bad query plan, and not either the query itself or the table\ndefinitions.\n\nHowever, it appears to me that the query plan is aggregating over a rather\nlarge number of join rows, and there are very few constraints that would\nallow eliminating rows. So I'm not at all sure there is a significantly\nbetter plan available. Are you claiming this query was instantaneous\non SQL Server?\n\nThe only thing that jumps out at me as possibly improvable is that with\na further increase in work_mem, you could probably get it to change the\nlast aggregation step from Sort+GroupAggregate into HashAggregate,\nwhich'd likely run faster ... assuming you can spare some more memory.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 20 Oct 2014 19:59:57 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
{
"msg_contents": "2014-10-20 21:59 GMT-02:00 Tom Lane <[email protected]>:\n\n> Marco Di Cesare <[email protected]> writes:\n> > We are using a BI tool that generates a query with an unusually large\n> number of joins. My understanding is that with this many joins Postgres\n> query planner can't possibly use an exhaustive search so it drops into a\n> heuristics algorithm. Unfortunately, the query runs quite slow (~35\n> seconds) and seems to ignore using primary keys and indexes where available.\n>\n> > Query plan here (sorry had to anonymize):\n> > http://explain.depesz.com/s/Uml\n>\n> It's difficult to make any detailed comments when you've shown us only an\n> allegedly-bad query plan, and not either the query itself or the table\n> definitions.\n>\n> However, it appears to me that the query plan is aggregating over a rather\n> large number of join rows, and there are very few constraints that would\n> allow eliminating rows. So I'm not at all sure there is a significantly\n> better plan available. Are you claiming this query was instantaneous\n> on SQL Server?\n>\n> The only thing that jumps out at me as possibly improvable is that with\n> a further increase in work_mem, you could probably get it to change the\n> last aggregation step from Sort+GroupAggregate into HashAggregate,\n> which'd likely run faster ... assuming you can spare some more memory.\n>\n> regards, tom lane\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\nHi,\n\nAs Tom said, WORK_MEM seems a nice place to start.\n\nHere are other considerations you might take in account:\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nThere's also the opportunity to tune the query itself (if it's not\nautomatically generated by your BI tool). You can always speed up a query\nresponse by using filtered sub-selects instead of calling the the entire\ntables themselves on the joins.\n\nBR\n\nFelipe\n\n2014-10-20 21:59 GMT-02:00 Tom Lane <[email protected]>:Marco Di Cesare <[email protected]> writes:\n> We are using a BI tool that generates a query with an unusually large number of joins. My understanding is that with this many joins Postgres query planner can't possibly use an exhaustive search so it drops into a heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds) and seems to ignore using primary keys and indexes where available.\n\n> Query plan here (sorry had to anonymize):\n> http://explain.depesz.com/s/Uml\n\nIt's difficult to make any detailed comments when you've shown us only an\nallegedly-bad query plan, and not either the query itself or the table\ndefinitions.\n\nHowever, it appears to me that the query plan is aggregating over a rather\nlarge number of join rows, and there are very few constraints that would\nallow eliminating rows. So I'm not at all sure there is a significantly\nbetter plan available. Are you claiming this query was instantaneous\non SQL Server?\n\nThe only thing that jumps out at me as possibly improvable is that with\na further increase in work_mem, you could probably get it to change the\nlast aggregation step from Sort+GroupAggregate into HashAggregate,\nwhich'd likely run faster ... 
assuming you can spare some more memory.\n\n regards, tom lane\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\nHi,As Tom said, WORK_MEM seems a nice place to start.Here are other considerations you might take in account:https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_ServerThere's also the opportunity to tune the query itself (if it's not automatically generated by your BI tool). You can always speed up a query response by using filtered sub-selects instead of calling the the entire tables themselves on the joins.BRFelipe",
"msg_date": "Tue, 21 Oct 2014 08:45:01 -0200",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
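A sketch of the filtered sub-select rewrite Felipe suggests; small_table, big_table, and the status filter are hypothetical names for illustration, not objects from the thread:

    -- Instead of joining the entire table and filtering afterwards,
    -- push the filter into a sub-select so the join sees fewer rows.
    SELECT s.id, b.val
    FROM small_table s
    JOIN (SELECT sid, val
          FROM big_table
          WHERE status = 'active') b ON b.sid = s.id;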
{
"msg_contents": "On Mon, Oct 20, 2014 at 3:32 PM, Marco Di Cesare\n<[email protected]> wrote:\n> We are using Postgres for the first time after being SQLServer users for a\n> long time so forgive for being noobs.\n>\n>\n>\n> We are using a BI tool that generates a query with an unusually large number\n> of joins. My understanding is that with this many joins Postgres query\n> planner can't possibly use an exhaustive search so it drops into a\n> heuristics algorithm. Unfortunately, the query runs quite slow (~35 seconds)\n> and seems to ignore using primary keys and indexes where available.\n>\n>\n>\n> Query plan here (sorry had to anonymize):\n>\n> http://explain.depesz.com/s/Uml\n>\n>\n>\n> Line 30 is one of the pain points where a full table scan is running on 4.2\n> million rows even though there are indexes on oscar_bravo.foxtrot_four and\n> oscar_charlie.foxtrot_four\n>\n>\n>\n> We've tried to play around with the join_collapse_limit value by upping it\n> from the default of 8 to 10 or 12 but it doesn't seem to help much. Cranking\n> the value up to an unreasonable value of 20 does shave some seconds off the\n> query time but not substantially (explain plan with the value set to 20:\n> http://explain.depesz.com/s/sW6).\n\nYou always have the option of disabling geqo completely.\n\nHowever, in this case, can you fetch out the relevant fields for\n\"oscar_bravo\" that are participating in the join? I'd like to see the\nfield name/type in the source table and the destination table. Also.\nI'd like to see the index definition and the snippit of the query that\npresents the join condition.\n\nYou can encourage the server to favor index scans vs seq scans by\nlowering 'random_page_cost'. The nuclear option is to disable\nsequential scans completely (which is generally a bad idea but can be\nuseful to try and fetch out queries that are inadvertently forced into\na seqscan for some reason).\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 08:39:18 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
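The knobs Merlin mentions can be tried per-session before changing postgresql.conf (the 2.0 value is an illustrative assumption; the default random_page_cost is 4.0):

    SET geqo = off;              -- disable the genetic query optimizer entirely
    SET random_page_cost = 2.0;  -- makes index scans look cheaper than seq scans
    SET enable_seqscan = off;    -- the "nuclear option", for diagnosis only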
{
"msg_contents": "I did not mean to imply this works any better on SQL Server. We never tried. I just meant to say this is the first time we are using Postgres so we don't have much experience with it.\r\n\r\nWe tried with work_mem set to 1GB (even as high as 3GB) but it didn't change the GroupAggregate and Sort or query run time.\r\n\r\nSorry, I had to sanitize the query and a few of the relevant tables so hopefully I got it all right. \r\n\r\nSELECT\r\n \"foxtrot_india\".\"juliet_alpha\", \r\n \"foxtrot_india\".\"foxtrot_yankee\", \r\n \"foxtrot_india\".\"hotel_sierra\", \r\n \"foxtrot_india\".\"juliet_alpha\", \r\n\t\t\t\t\"foxtrot_india\".\"bravo_romeo\", \r\n \"oscar_bravo\".\"golf_foxtrot\", \r\n \"seven_kilo\".\"november_lima\", \r\n \"foxtrot_india\".\"echo_six\", \r\n \"uniform_six\".\"seven_six\", \r\n\t\t\t\t\"oscar_charlie\".\"foxtrot_charlie\", \r\n COUNT(DISTINCT \"foxtrot_india\".\"bravo_romeo\") \r\nFROM\r\n \"public\".\"seven_kilo\" \"seven_kilo\"\r\n INNER JOIN \"public\".\"papa_sierra\" \"papa_sierra\" ON (\"seven_kilo\".\"golf_bravo\" = \"papa_sierra\".\"golf_bravo\")\r\n LEFT JOIN \"public\".\"golf_two\" \"golf_two\" ON (\"seven_kilo\".\"lima\" = \"golf_two\".\"lima\")\r\n LEFT JOIN \"public\".\"bravo_xray\" \"bravo_xray\" ON (\"seven_kilo\".\"lima\" = \"bravo_xray\".\"lima\")\r\n LEFT JOIN \"public\".\"foo1\" \"foo1\" ON ((\"seven_kilo\".\"bar1\" = \"foo1\".\"bar1\") AND (\"seven_kilo\".\"golf_bravo\" = \"foo1\".\"golf_bravo\"))\r\n INNER JOIN \"public\".\"oscar_charlie\" \"oscar_charlie\" ON (\"seven_kilo\".\"lima\" = \"oscar_charlie\".\"lima\")\r\n INNER JOIN \"public\".\"oscar_bravo\" \"oscar_bravo\" ON (\"oscar_charlie\".\"foxtrot_four\" = \"oscar_bravo\".\"foxtrot_four\")\r\n INNER JOIN \"public\".\"foxtrot_india\" \"foxtrot_india\" ON (\"oscar_bravo\".\"sierra\" = \"foxtrot_india\".\"sierra\")\r\n INNER JOIN \"public\".\"hotel_romeo\" \"hotel_romeo\" ON (\"oscar_charlie\".\"foxtrot_charlie\" = \"hotel_romeo\".\"foxtrot_charlie\")\r\n INNER JOIN \"public\".\"uniform_six\" \"uniform_six\" ON (\"hotel_romeo\".\"hotel_lima\" = \"uniform_six\".\"hotel_lima\")\r\n LEFT JOIN \"public\".\"lookup\" \"foo2\" ON (\"foxtrot_india\".\"bar2\" = \"foo2\".\"lookup_id\")\r\n LEFT JOIN \"public\".\"uniform_two\" \"uniform_two\" ON (\"foxtrot_india\".\"sierra\" = \"uniform_two\".\"sierra\")\r\n INNER JOIN \"public\".\"lookup\" \"four_xray\" ON (\"uniform_two\".\"quebec\" = \"four_xray\".\"quebec\")\r\n LEFT JOIN \"public\".\"papa_four\" \"papa_four\" ON (\"foxtrot_india\".\"sierra\" = \"papa_four\".\"sierra\")\r\n INNER JOIN \"public\".\"lookup\" \"romeo_bravo\" ON (\"papa_four\".\"quebec\" = \"romeo_bravo\".\"quebec\")\r\n LEFT JOIN \"public\".\"juliet_two\" \"juliet_two\" ON (\"foxtrot_india\".\"sierra\" = \"juliet_two\".\"sierra\")\r\n INNER JOIN \"public\".\"lookup\" \"four_delta\" ON (\"juliet_two\".\"quebec\" = \"four_delta\".\"quebec\")\r\n LEFT JOIN \"public\".\"foo3\" \"foo3\" ON (\"foxtrot_india\".\"bar3\" = \"foo3\".\"bar3\")\r\n INNER JOIN \"public\".\"xray\" \"xray\" ON (\"seven_kilo\".\"lima\" = \"xray\".\"lima\")\r\n INNER JOIN \"public\".\"romeo_echo\" \"romeo_echo\" ON (\"xray\".\"echo_sierra\" = \"romeo_echo\".\"echo_sierra\") \r\nWHERE\r\n (((\"xray\".\"echo_sierra\" = 'november_foxtrot')\r\n AND (\"romeo_echo\".\"hotel_oscar\" = 'zulu')\r\n AND (\"oscar_charlie\".\"five\" = 6)\r\n AND (\"oscar_charlie\".\"whiskey\" = 'four_romeo')\r\n AND (\"oscar_charlie\".\"charlie_romeo\" = 2014))) \r\nGROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\r\n\r\n\r\n Table 
\"public.oscar_bravo\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n\r\n-------------------------+-----------------------+-----------+----------+--------------+------------\r\n-\r\n foxtrot_four | character varying(60) | not null | extended | |\r\n sierra \t | character varying(40) | not null | extended | |\r\n foo\t\t | boolean | not null | plain | |\r\n bar\t\t\t\t | numeric(3,2) | | main | |\r\n baz\t\t | integer | not null | plain | |\r\n \r\nIndexes:\r\n \"foxtrot_four_sierra_PK_IX\" PRIMARY KEY, btree (foxtrot_four, sierra)\r\n \"foxtrot_four_idx\" btree (foxtrot_four)\r\n \"sierra_idx\" btree (sierra) CLUSTER\r\n\r\nForeign-key constraints:\r\n \"sierra_FK\" FOREIGN KEY (sierra) REFERENCES foxtrot_india(sierra)\r\n \"foxtrot_four_FK\" FOREIGN KEY (foxtrot_four) REFERENCES oscar_charlie(foxtrot_four )\r\nHas OIDs: no\r\n\r\n\r\n Table \"public.oscar_charlie\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n-------------------+-----------------------+-----------+----------+--------------+-------------\r\n foxtrot_four | character varying(60) | not null | extended | |\r\n foxtrot_charlie | character varying(10) | not null | extended | |\r\n lima\t\t | character varying(30) | not null | extended | |\r\n whiskey \t | character varying(3) | not null | extended | |\r\n charlie_romeo | numeric(4,0) | not null | main | |\r\n five \t\t | numeric(2,0) | not null | main | |\r\n period_end_date | date | not null | plain | |\r\n qm_score | numeric(5,2) | not null | main | |\r\n revision_date | date | not null | plain | |\r\nIndexes:\r\n \"foxtrot_four_PK_IX\" PRIMARY KEY, btree (foxtrot_four)\r\n \"foxtrot_charlie_UQ_IX\" UNIQUE CONSTRAINT, btree (foxtrot_charlie, lima, whiskey, charlie_romeo, five)\r\n \"target_period_idx\" btree (five, whiskey, charlie_romeo) CLUSTER\r\n \"foxtrot_charlie_idx\" btree (foxtrot_charlie)\r\nForeign-key constraints:\r\nReferenced by:\r\n TABLE \"oscar_bravo\" CONSTRAINT \"foxtrot_four_FK\" FOREIGN KEY (foxtrot_four) REFERENCES oscar_charlie(foxtrot_four)\r\nHas OIDs: no\r\n\r\n\r\n Table \"public.foxtrot_india\"\r\n Column | Type | Modifiers | Storage | Stats target | Description\r\n----------------------+------------------------+-----------+----------+--------------+-------------\r\n sierra\t\t\t | character varying(40) | not null | extended | |\r\n lima\t\t | character varying(30) | not null | extended | |\r\n global_client_id | character varying(40) | not null | extended | |\r\n org_assess_id | integer | | plain | |\r\n org_client_id | integer | | plain | |\r\n assess_ref_date | date | not null | plain | |\r\n assess_type | character varying(10) | not null | extended | |\r\n client_name | character varying(100) | not null | extended | |\r\n gender | character(1) | not null | extended | |\r\n date_of_birth | date | not null | plain | |\r\n room | character varying(10) | not null | extended | |\r\n bed | character varying(10) | | extended | |\r\n org_floor_id | integer | | plain | |\r\n org_unit_id | integer | | plain | |\r\n global_physician_id | character varying(20) | | extended | |\r\n payer_type_lookup_id | character varying(10) | | extended | |\r\n mds_version | character varying(10) | not null | extended | |\r\nIndexes:\r\n \"sierra_PK_IX\" PRIMARY KEY, btree (sierra)\r\n \"lima_FK_IX\" hash (lima) \r\n\r\nReferenced by:\r\n TABLE \"oscar_bravo\" CONSTRAINT \"oscar_sierra_FK\" FOREIGN KEY (sierra) REFERENCES foxtrot_india(sierra)\r\nHas OIDs: no\r\n\r\n\r\nWe did also attempt to change the pk's from large 
varchar to ints but it didn't make any noticeable difference. If you are wondering why we went with that data type it's a long story. :-)\r\n\r\n\r\n-----Original Message-----\r\nFrom: Merlin Moncure [mailto:[email protected]] \r\nSent: Tuesday, October 21, 2014 9:39 AM\r\nTo: Marco Di Cesare\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Query with large number of joins\r\n\r\nOn Mon, Oct 20, 2014 at 3:32 PM, Marco Di Cesare <[email protected]> wrote:\r\n> We are using Postgres for the first time after being SQLServer users \r\n> for a long time so forgive for being noobs.\r\n>\r\n>\r\n>\r\n> We are using a BI tool that generates a query with an unusually large \r\n> number of joins. My understanding is that with this many joins \r\n> Postgres query planner can't possibly use an exhaustive search so it \r\n> drops into a heuristics algorithm. Unfortunately, the query runs quite \r\n> slow (~35 seconds) and seems to ignore using primary keys and indexes where available.\r\n>\r\n>\r\n>\r\n> Query plan here (sorry had to anonymize):\r\n>\r\n> http://explain.depesz.com/s/Uml\r\n>\r\n>\r\n>\r\n> Line 30 is one of the pain points where a full table scan is running \r\n> on 4.2 million rows even though there are indexes on \r\n> oscar_bravo.foxtrot_four and oscar_charlie.foxtrot_four\r\n>\r\n>\r\n>\r\n> We've tried to play around with the join_collapse_limit value by \r\n> upping it from the default of 8 to 10 or 12 but it doesn't seem to \r\n> help much. Cranking the value up to an unreasonable value of 20 does \r\n> shave some seconds off the query time but not substantially (explain plan with the value set to 20:\r\n> http://explain.depesz.com/s/sW6).\r\n\r\nYou always have the option of disabling geqo completely.\r\n\r\nHowever, in this case, can you fetch out the relevant fields for \"oscar_bravo\" that are participating in the join? I'd like to see the field name/type in the source table and the destination table. Also.\r\nI'd like to see the index definition and the snippit of the query that presents the join condition.\r\n\r\nYou can encourage the server to favor index scans vs seq scans by lowering 'random_page_cost'. The nuclear option is to disable sequential scans completely (which is generally a bad idea but can be useful to try and fetch out queries that are inadvertently forced into a seqscan for some reason).\r\n\r\nmerlin\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 16:09:40 +0000",
"msg_from": "Marco Di Cesare <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with large number of joins"
},
{
"msg_contents": "\nOn 10/21/2014 12:09 PM, Marco Di Cesare wrote:\n> I did not mean to imply this works any better on SQL Server. We never tried. I just meant to say this is the first time we are using Postgres so we don't have much experience with it.\n>\n> We tried with work_mem set to 1GB (even as high as 3GB) but it didn't change the GroupAggregate and Sort or query run time.\n>\n> Sorry, I had to sanitize the query and a few of the relevant tables so hopefully I got it all right.\n>\n> SELECT\n> \"foxtrot_india\".\"juliet_alpha\",\n> \"foxtrot_india\".\"foxtrot_yankee\",\n> \"foxtrot_india\".\"hotel_sierra\",\n> \"foxtrot_india\".\"juliet_alpha\",\n> \t\t\t\t\"foxtrot_india\".\"bravo_romeo\",\n> \"oscar_bravo\".\"golf_foxtrot\",\n> \"seven_kilo\".\"november_lima\",\n> \"foxtrot_india\".\"echo_six\",\n> \"uniform_six\".\"seven_six\",\n> \t\t\t\t\"oscar_charlie\".\"foxtrot_charlie\",\n> COUNT(DISTINCT \"foxtrot_india\".\"bravo_romeo\")\n> FROM\n> \"public\".\"seven_kilo\" \"seven_kilo\"\n> INNER JOIN \"public\".\"papa_sierra\" \"papa_sierra\" ON (\"seven_kilo\".\"golf_bravo\" = \"papa_sierra\".\"golf_bravo\")\n> LEFT JOIN \"public\".\"golf_two\" \"golf_two\" ON (\"seven_kilo\".\"lima\" = \"golf_two\".\"lima\")\n> LEFT JOIN \"public\".\"bravo_xray\" \"bravo_xray\" ON (\"seven_kilo\".\"lima\" = \"bravo_xray\".\"lima\")\n> LEFT JOIN \"public\".\"foo1\" \"foo1\" ON ((\"seven_kilo\".\"bar1\" = \"foo1\".\"bar1\") AND (\"seven_kilo\".\"golf_bravo\" = \"foo1\".\"golf_bravo\"))\n> INNER JOIN \"public\".\"oscar_charlie\" \"oscar_charlie\" ON (\"seven_kilo\".\"lima\" = \"oscar_charlie\".\"lima\")\n> INNER JOIN \"public\".\"oscar_bravo\" \"oscar_bravo\" ON (\"oscar_charlie\".\"foxtrot_four\" = \"oscar_bravo\".\"foxtrot_four\")\n> INNER JOIN \"public\".\"foxtrot_india\" \"foxtrot_india\" ON (\"oscar_bravo\".\"sierra\" = \"foxtrot_india\".\"sierra\")\n> INNER JOIN \"public\".\"hotel_romeo\" \"hotel_romeo\" ON (\"oscar_charlie\".\"foxtrot_charlie\" = \"hotel_romeo\".\"foxtrot_charlie\")\n> INNER JOIN \"public\".\"uniform_six\" \"uniform_six\" ON (\"hotel_romeo\".\"hotel_lima\" = \"uniform_six\".\"hotel_lima\")\n> LEFT JOIN \"public\".\"lookup\" \"foo2\" ON (\"foxtrot_india\".\"bar2\" = \"foo2\".\"lookup_id\")\n> LEFT JOIN \"public\".\"uniform_two\" \"uniform_two\" ON (\"foxtrot_india\".\"sierra\" = \"uniform_two\".\"sierra\")\n> INNER JOIN \"public\".\"lookup\" \"four_xray\" ON (\"uniform_two\".\"quebec\" = \"four_xray\".\"quebec\")\n> LEFT JOIN \"public\".\"papa_four\" \"papa_four\" ON (\"foxtrot_india\".\"sierra\" = \"papa_four\".\"sierra\")\n> INNER JOIN \"public\".\"lookup\" \"romeo_bravo\" ON (\"papa_four\".\"quebec\" = \"romeo_bravo\".\"quebec\")\n> LEFT JOIN \"public\".\"juliet_two\" \"juliet_two\" ON (\"foxtrot_india\".\"sierra\" = \"juliet_two\".\"sierra\")\n> INNER JOIN \"public\".\"lookup\" \"four_delta\" ON (\"juliet_two\".\"quebec\" = \"four_delta\".\"quebec\")\n> LEFT JOIN \"public\".\"foo3\" \"foo3\" ON (\"foxtrot_india\".\"bar3\" = \"foo3\".\"bar3\")\n> INNER JOIN \"public\".\"xray\" \"xray\" ON (\"seven_kilo\".\"lima\" = \"xray\".\"lima\")\n> INNER JOIN \"public\".\"romeo_echo\" \"romeo_echo\" ON (\"xray\".\"echo_sierra\" = \"romeo_echo\".\"echo_sierra\")\n> WHERE\n> (((\"xray\".\"echo_sierra\" = 'november_foxtrot')\n> AND (\"romeo_echo\".\"hotel_oscar\" = 'zulu')\n> AND (\"oscar_charlie\".\"five\" = 6)\n> AND (\"oscar_charlie\".\"whiskey\" = 'four_romeo')\n> AND (\"oscar_charlie\".\"charlie_romeo\" = 2014)))\n> GROUP BY 1, 2, 3, 4, 5, 6, 7, 8, 9, 10\n\n\nPlease don't top-post on the 
PostgreSQL lists. See \n<http://idallen.com/topposting.html>\n\nHave you tried a) either turning off geqo or setting geqo_threshold \nfairly high, or b) setting join_collapse_limit fairly high (assuming all the \nabove join targets are tables and not views, setting it to something \nlike 25 should do the trick)?\n\nYou also haven't told us what settings you have for things like \neffective_cache_size, which can dramatically affect query plans.\n\ncheers\n\nandrew\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 12:31:06 -0400",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> Have you tried a) either turning off geqo or setting geqo_threshold \n> fairly high b) setting join_collapse_limit fairly high (assuming all the \n> above join targets are tables and not views, setting it to something \n> like 25 should do the trick.\n\nYou'd have to do both, I think, to get an exhaustive plan search.\n\nIn any case, this query is going to result in full table scans of most\nof the tables, because there just aren't very many WHERE constraints;\nso expecting it to run instantaneously is a pipe dream. I'm not sure\nthat there's a significantly better plan to be had.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 12:46:46 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
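Doing both for a single session might look like this (25 echoes Andrew's suggested value; raising geqo_threshold above the number of joined tables keeps GEQO from kicking in):

    SET geqo_threshold = 25;
    SET join_collapse_limit = 25;
    -- then re-run EXPLAIN ANALYZE to get an exhaustive plan search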
{
"msg_contents": "Marco Di Cesare <[email protected]> writes:\n> COUNT(DISTINCT \"foxtrot_india\".\"bravo_romeo\") \n\nAh. That explains why the planner doesn't want to use a hash aggregation\nstep --- DISTINCT aggregates aren't supported with those.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 12:50:32 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
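One shape the rewrite can take (a sketch; the grouping columns are abbreviated to g1..g3 and joined_rows stands for the original multi-table join, so these names are placeholders): deduplicate the DISTINCT column in a subquery so the outer aggregate is a plain count and hash aggregation becomes possible at both levels:

    SELECT g1, g2, g3, count(bravo_romeo) AS distinct_bravo_romeo
    FROM (SELECT DISTINCT g1, g2, g3, bravo_romeo
          FROM joined_rows) AS dedup
    GROUP BY g1, g2, g3;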
{
"msg_contents": "\r\nOn 10/21/2014 12:31 PM, Andrew Dunstan wrote:\r\n> Please don't top-post on the PostgreSQL lists. See <http://idallen.com/topposting.html>\r\n\r\nOops, sorry.\r\n\r\n>Have you tried a) either turning off geqo or setting geqo_threshold fairly high b) setting join_collapse_limit fairly high (assuming \r\n>all the above join targets are tables and not views, setting it to something like 25 should do the trick.\r\n\r\nI did try various combinations of these settings but none yielded any significant query run time improvements. \r\n\r\n> You also haven't told us what settings you have for things like effective_cache_size, which can dramatically affect query plans.\r\n\r\neffective_cache_size = 4096MB\r\n\r\nI tried bumping this up as well but again no significant query run time improvements. \r\n\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 20:02:53 +0000",
"msg_from": "Marco Di Cesare <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with large number of joins"
},
{
"msg_contents": "\nAndrew Dunstan <[email protected]> writes:\n> Have you tried a) either turning off geqo or setting geqo_threshold \n> fairly high b) setting join_collapse_limit fairly high (assuming all \n> the above join targets are tables and not views, setting it to \n> something like 25 should do the trick.\n\nTom Lane < [email protected]> writes:\n> You'd have to do both, I think, to get an exhaustive plan search.\n\n>In any case, this query is going to result in full table scans of most of the tables, because there just aren't very many WHERE constraints; so >\n>expecting it to run instantaneously is a pipe dream. I'm not sure that there's a significantly better plan to be had.\n\n>\t\t\tregards, tom lane\n\nI get that same feeling. Just wanted to be sure there was nothing obvious in terms of settings we might have missed.\n\nThe BI tool we use wants to load as much raw data as needed and then apply filters (where clauses) on top of that. The numerous joins support those filters and a good number of those joins are one-to-many tables causing a Cartesian product. \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 20:06:58 +0000",
"msg_from": "Marco Di Cesare <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query with large number of joins"
},
{
"msg_contents": "\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Marco Di Cesare\r\nSent: Tuesday, October 21, 2014 4:03 PM\r\nTo: Andrew Dunstan; Merlin Moncure\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Query with large number of joins\r\n\r\n\r\nOn 10/21/2014 12:31 PM, Andrew Dunstan wrote:\r\n> Please don't top-post on the PostgreSQL lists. See \r\n> <http://idallen.com/topposting.html>\r\n\r\nOops, sorry.\r\n\r\n>Have you tried a) either turning off geqo or setting geqo_threshold \r\n>fairly high b) setting join_collapse_limit fairly high (assuming all the above join targets are tables and not views, setting it to something like 25 should do the trick.\r\n\r\nI did try various combinations of these settings but none yielded any significant query run time improvements. \r\n\r\n> You also haven't told us what settings you have for things like effective_cache_size, which can dramatically affect query plans.\r\n\r\neffective_cache_size = 4096MB\r\n\r\nI tried bumping this up as well but again no significant query run time improvements. \r\n\r\n\r\n\r\nMarco,\r\n\r\nDidn't you mention, that you have something like 48GB RAM?\r\nIn this case (if that's dedicated db server), you should try and set effective_cache_size around 40GB (not 4GB).\r\n\r\nRegards,\r\nIgor Neyman\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 20:12:21 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
},
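effective_cache_size only influences the planner's costing, so Igor's suggestion can be tried per-session before editing postgresql.conf (40GB assumes a dedicated server with roughly 48GB RAM, per his message):

    SET effective_cache_size = '40GB';
    -- re-run EXPLAIN; if index-based plans appear, persist the setting
    -- in postgresql.conf and reload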
{
"msg_contents": "On Tue, Oct 21, 2014 at 11:50 AM, Tom Lane <[email protected]> wrote:\n> Marco Di Cesare <[email protected]> writes:\n>> COUNT(DISTINCT \"foxtrot_india\".\"bravo_romeo\")\n>\n> Ah. That explains why the planner doesn't want to use a hash aggregation\n> step --- DISTINCT aggregates aren't supported with those.\n\nyup. With this query, the planner statistics are pretty good for the\nmost part. Considering that the query is generated and amount of data\nis significant the runtime isn't too bad. The query could be\nrewritten to support a hash aggregate...\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 08:22:50 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query with large number of joins"
}
] |
[
{
"msg_contents": "Hi all,I'm experimenting with table partitioning though inheritance. I'm testing a query as follows:explain (analyze, buffers)select response.idfrom claim.responsewhere response.account_id = 4766and response.expire_timestamp is nulland response.create_timestamp >= DATE '2014-08-01'order by create_timestamp;The response table looks like this:\"account_id\";\"integer\"\"file_type_id\";\"integer\"\"receiver_inbound_detail_id\";\"integer\"\"processing_status_id\";\"integer\"\"processing\";\"boolean\"\"expire_timestamp\";\"timestamp without time zone\"\"last_mod_timestamp\";\"timestamp without time zone\"\"create_timestamp\";\"timestamp without time zone\"\"response_trace_nbr\";\"character varying\"\"posted_timestamp\";\"timestamp without time zone\"\"need_to_post\";\"boolean\"\"response_message\";\"text\"\"worked\";\"boolean\"\"response_status_id\";\"integer\"\"response_type_id\";\"integer\"\"outbound_claim_detail_id\";\"bigint\"\"id\";\"bigint\"Here are some rowcounts:SELECT count(*) from claim_response.response_201408; count--------- 4585746(1 row)Time: 7271.054 msSELECT count(*) from claim_response.response_201409; count--------- 3523370(1 row)Time: 4341.116 msSELECT count(*) from claim_response.response_201410; count------- 154(1 row)Time: 0.258 msThe entire table has 225,665,512 rows. I read that a partitioning rule of thumb is that benefits of partitioning occur starting around 100 million rows.SELECT count(*) from claim.response; count----------- 225665512(1 row)Time: 685064.637 msThe partitioning is on the create_timestamp field.The server is Red Hat Enterprise Linux Server release 6.2 (Santiago) on a VM machine - 8 GB RAM with 2 CPUs:Architecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 2On-line CPU(s) list: 0,1Thread(s) per core: 1Core(s) per socket: 2CPU socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 44Stepping: 2CPU MHz: 2660.000BogoMIPS: 5320.00L1d cache: 32KL1i cache: 32KL2 cache: 256KL3 cache: 12288KNUMA node0 CPU(s): 0,12 users, load average: 0.00, 0.12, 0.37Please see the following for the explain analysis :http://explain.depesz.com/s/I3SLI'm trying to understand why I'm getting the yellow, orange, and red on the inclusive, and the yellow on the exclusive. (referring to the explain.depesz.com/s/I3SL page.)I'm relatively new to PostgreSQL, but I've been an Oracle DBA for some time. I suspect the I/O may be dragging but I don't know how to dig that information out from here. Please point out anything else you can decipher from this. Thanks,John\n",
"msg_date": "Tue, 21 Oct 2014 05:57:06 -0700",
"msg_from": "<[email protected]>",
"msg_from_op": true,
"msg_subject": "Query Performance Problem"
},
{
"msg_contents": "2014-10-21 10:57 GMT-02:00 <[email protected]>:\n\n>\n>\n> Hi all,\n>\n> I'm experimenting with table partitioning though inheritance. I'm testing\n> a query as follows:\n>\n> explain (analyze, buffers)\n> select response.id\n> from claim.response\n> where response.account_id = 4766\n> and response.expire_timestamp is null\n> and response.create_timestamp >= DATE '2014-08-01'\n> order by create_timestamp;\n>\n> The response table looks like this:\n> \"account_id\";\"integer\"\n> \"file_type_id\";\"integer\"\n> \"receiver_inbound_detail_id\";\"integer\"\n> \"processing_status_id\";\"integer\"\n> \"processing\";\"boolean\"\n> \"expire_timestamp\";\"timestamp without time zone\"\n> \"last_mod_timestamp\";\"timestamp without time zone\"\n> \"create_timestamp\";\"timestamp without time zone\"\n> \"response_trace_nbr\";\"character varying\"\n> \"posted_timestamp\";\"timestamp without time zone\"\n> \"need_to_post\";\"boolean\"\n> \"response_message\";\"text\"\n> \"worked\";\"boolean\"\n> \"response_status_id\";\"integer\"\n> \"response_type_id\";\"integer\"\n> \"outbound_claim_detail_id\";\"bigint\"\n> \"id\";\"bigint\"\n>\n> Here are some rowcounts:\n>\n> SELECT count(*) from claim_response.response_201408;\n> count\n> ---------\n> 4585746\n> (1 row)\n>\n> Time: 7271.054 ms\n> SELECT count(*) from claim_response.response_201409;\n> count\n> ---------\n> 3523370\n> (1 row)\n>\n> Time: 4341.116 ms\n> SELECT count(*) from claim_response.response_201410;\n> count\n> -------\n> 154\n> (1 row)\n>\n> Time: 0.258 ms\n>\n> The entire table has 225,665,512 rows. I read that a partitioning rule of\n> thumb is that benefits of partitioning occur starting around 100 million\n> rows.\n>\n> SELECT count(*) from claim.response;\n> count\n> -----------\n> 225665512\n> (1 row)\n>\n> Time: 685064.637 ms\n>\n>\n> The partitioning is on the create_timestamp field.\n>\n> The server is Red Hat Enterprise Linux Server release 6.2 (Santiago) on a\n> VM machine - 8 GB RAM with 2 CPUs:\n>\n> Architecture: x86_64\n> CPU op-mode(s): 32-bit, 64-bit\n> Byte Order: Little Endian\n> CPU(s): 2\n> On-line CPU(s) list: 0,1\n> Thread(s) per core: 1\n> Core(s) per socket: 2\n> CPU socket(s): 1\n> NUMA node(s): 1\n> Vendor ID: GenuineIntel\n> CPU family: 6\n> Model: 44\n> Stepping: 2\n> CPU MHz: 2660.000\n> BogoMIPS: 5320.00\n> L1d cache: 32K\n> L1i cache: 32K\n> L2 cache: 256K\n> L3 cache: 12288K\n> NUMA node0 CPU(s): 0,1\n>\n>\n>\n> 2 users, load average: 0.00, 0.12, 0.37\n>\n>\n> Please see the following for the explain analysis :\n>\n> http://explain.depesz.com/s/I3SL\n>\n> I'm trying to understand why I'm getting the yellow, orange, and red on\n> the inclusive, and the yellow on the exclusive. (referring to the\n> explain.depesz.com/s/I3SL page.)\n> I'm relatively new to PostgreSQL, but I've been an Oracle DBA for some\n> time. I suspect the I/O may be dragging but I don't know how to dig that\n> information out from here. Please point out anything else you can decipher\n> from this.\n>\n> Thanks,\n>\n> John\n>\n\n\nHi John,\n\nDont know about the colors, but the Stats tab looks fine. You've got\nyourself 5 Index Scans, which are a very fast way to dig data.\n\n I noticed you've also cast your filter field \"(create_timestamp >=\n'2014-08-01'::date)\". As far as I know, Postgresql doesn't need this kind\nof explicit conversion. 
You would be fine with just \"(create_timestamp >=\n'2014-08-01')\".\n\nRegards,\n\nFelipe\n\n2014-10-21 10:57 GMT-02:00 <[email protected]>:Hi all,I'm experimenting with table partitioning though inheritance. I'm testing a query as follows:explain (analyze, buffers)select response.idfrom claim.responsewhere response.account_id = 4766and response.expire_timestamp is nulland response.create_timestamp >= DATE '2014-08-01'order by create_timestamp;The response table looks like this:\"account_id\";\"integer\"\"file_type_id\";\"integer\"\"receiver_inbound_detail_id\";\"integer\"\"processing_status_id\";\"integer\"\"processing\";\"boolean\"\"expire_timestamp\";\"timestamp without time zone\"\"last_mod_timestamp\";\"timestamp without time zone\"\"create_timestamp\";\"timestamp without time zone\"\"response_trace_nbr\";\"character varying\"\"posted_timestamp\";\"timestamp without time zone\"\"need_to_post\";\"boolean\"\"response_message\";\"text\"\"worked\";\"boolean\"\"response_status_id\";\"integer\"\"response_type_id\";\"integer\"\"outbound_claim_detail_id\";\"bigint\"\"id\";\"bigint\"Here are some rowcounts:SELECT count(*) from claim_response.response_201408; count--------- 4585746(1 row)Time: 7271.054 msSELECT count(*) from claim_response.response_201409; count--------- 3523370(1 row)Time: 4341.116 msSELECT count(*) from claim_response.response_201410; count------- 154(1 row)Time: 0.258 msThe entire table has 225,665,512 rows. I read that a partitioning rule of thumb is that benefits of partitioning occur starting around 100 million rows.SELECT count(*) from claim.response; count----------- 225665512(1 row)Time: 685064.637 msThe partitioning is on the create_timestamp field.The server is Red Hat Enterprise Linux Server release 6.2 (Santiago) on a VM machine - 8 GB RAM with 2 CPUs:Architecture: x86_64CPU op-mode(s): 32-bit, 64-bitByte Order: Little EndianCPU(s): 2On-line CPU(s) list: 0,1Thread(s) per core: 1Core(s) per socket: 2CPU socket(s): 1NUMA node(s): 1Vendor ID: GenuineIntelCPU family: 6Model: 44Stepping: 2CPU MHz: 2660.000BogoMIPS: 5320.00L1d cache: 32KL1i cache: 32KL2 cache: 256KL3 cache: 12288KNUMA node0 CPU(s): 0,12 users, load average: 0.00, 0.12, 0.37Please see the following for the explain analysis :http://explain.depesz.com/s/I3SLI'm trying to understand why I'm getting the yellow, orange, and red on the inclusive, and the yellow on the exclusive. (referring to the explain.depesz.com/s/I3SL page.)I'm relatively new to PostgreSQL, but I've been an Oracle DBA for some time. I suspect the I/O may be dragging but I don't know how to dig that information out from here. Please point out anything else you can decipher from this. Thanks,JohnHi John,Dont know about the colors, but the Stats tab looks fine. You've got yourself 5 Index Scans, which are a very fast way to dig data. I noticed you've also cast your filter field \"(create_timestamp >= '2014-08-01'::date)\". As far as I know, Postgresql doesn't need this kind of explicit conversion. You would be fine with just \"(create_timestamp >= '2014-08-01')\".Regards,Felipe",
"msg_date": "Tue, 21 Oct 2014 11:16:49 -0200",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query Performance Problem"
}
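For illustration, the predicate Felipe means, with the explicit cast dropped (the string literal is coerced to the column's timestamp type automatically):

    where response.create_timestamp >= '2014-08-01'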
] |
[
{
"msg_contents": "Hi newsgroup,\n\nI have a very huge table (70 mio rows ) with a key (text length about 30 \ncharacters each key). A select on this indexed column \"myprimkey\" (index \non column mycolumn) took more than 30 mins.\n\nHere is the explain (analyze,buffers) select mycolumn from myhugetable\n\n\"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82 \nrows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999 \nloops=1)\"\n\n\" Heap Fetches: 356861\"\n\n\"Total runtime: 2503009.611 ms\"\n\n\nEven repeating the query does not show a performance improvement. I \nassume that the index itself is too large for my db cache. What can I do \nto gain performance? Which parameters can I adapt? Having a huge Linux \nmachine with 72 GB RAM.\n\nNote: This select is just for testing. My final statement will be a join \non this table via the \"mycolumn\" column.\n\nThanks for your help\nBjᅵrn\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 19:26:03 +0200",
"msg_from": "=?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "extremly bad select performance on huge table"
},
{
"msg_contents": "Sorry forget to copy the buffer information:\n\n\" Heap Fetches: 356861\"\n\n\" Buffers: shared hit=71799472 read=613813\"\n\n\n\n\n> Hi newsgroup,\n>\n> I have a very huge table (70 mio rows ) with a key (text length about \n> 30 characters each key). A select on this indexed column \"myprimkey\" \n> (index on column mycolumn) took more than 30 mins.\n>\n> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n>\n> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82 \n> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999 \n> loops=1)\"\n>\n> \" Heap Fetches: 356861\"\n>\n> \"Total runtime: 2503009.611 ms\"\n>\n>\n> Even repeating the query does not show a performance improvement. I \n> assume that the index itself is too large for my db cache. What can I \n> do to gain performance? Which parameters can I adapt? Having a huge \n> Linux machine with 72 GB RAM.\n>\n> Note: This select is just for testing. My final statement will be a \n> join on this table via the \"mycolumn\" column.\n>\n> Thanks for your help\n> Bjᅵrn\n>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 19:35:03 +0200",
"msg_from": "=?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Björn Wittich\nSent: Tuesday, October 21, 2014 1:35 PM\nTo: [email protected]\nSubject: Re: [PERFORM] extremly bad select performance on huge table\n\nSorry forget to copy the buffer information:\n\n\" Heap Fetches: 356861\"\n\n\" Buffers: shared hit=71799472 read=613813\"\n\n\n\n\n> Hi newsgroup,\n>\n> I have a very huge table (70 mio rows ) with a key (text length about\n> 30 characters each key). A select on this indexed column \"myprimkey\" \n> (index on column mycolumn) took more than 30 mins.\n>\n> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n>\n> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82\n> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999 \n> loops=1)\"\n>\n> \" Heap Fetches: 356861\"\n>\n> \"Total runtime: 2503009.611 ms\"\n>\n>\n> Even repeating the query does not show a performance improvement. I \n> assume that the index itself is too large for my db cache. What can I \n> do to gain performance? Which parameters can I adapt? Having a huge \n> Linux machine with 72 GB RAM.\n>\n> Note: This select is just for testing. My final statement will be a \n> join on this table via the \"mycolumn\" column.\n>\n> Thanks for your help\n> Björn\n>\n>\n>\n>\n\nDid you check the bloat in your myprimkey index?\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 17:39:58 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
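One way to answer Igor's question (a sketch assuming the pgstattuple contrib extension can be installed; a low avg_leaf_density points to a bloated index):

    CREATE EXTENSION IF NOT EXISTS pgstattuple;
    SELECT avg_leaf_density, leaf_fragmentation
    FROM pgstatindex('myprimkey');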
{
"msg_contents": "=?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]> writes:\n> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82 \n> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999 \n> loops=1)\"\n> \" Heap Fetches: 356861\"\n> \" Buffers: shared hit=71799472 read=613813\"\n> \"Total runtime: 2503009.611 ms\"\n\nSo that works out to about 4 msec per page fetched considering only I/O\ncosts, which is about as good as you're likely to get if the data is\nsitting on spinning rust.\n\nYou could potentially make it faster with a VACUUM (to mark all pages\nall-visible and eliminate the \"heap fetches\" costs), or a REINDEX\n(so that the index scan becomes more nearly sequential instead of random\naccess). However, unless the data is nearly static those will just be\ntemporary fixes: the time will degrade again as you update the table.\n\n> Note: This select is just for testing. My final statement will be a join \n> on this table via the \"mycolumn\" column.\n\nIn that case it's probably a waste of time to worry about the performance\nof this query as such. In the first place, a join is not likely to use\nthe index at all unless it's fetching a relatively small number of rows,\nand in the second place it seems unlikely that the join query can use\nan IndexOnlyScan on this index --- I imagine that the purpose of the join\nwill require fetching additional columns.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 15:07:42 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
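The two maintenance commands Tom suggests, as they would be issued against the objects in this thread (note that REINDEX blocks writes to the table while it runs):

    VACUUM ANALYZE myhugetable;  -- sets visibility-map bits, cutting heap fetches
    REINDEX INDEX myprimkey;     -- rebuilds the index in physical order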
{
"msg_contents": "Hi Tom and Igor,\n\nthanks for your help. With the reindex the select query running time was \nreduced from 5200 sec to 130 sec. Impressive!\n\nEven a join on this table is now fast.\n\nUnfortunately, there is now another problem: The table in my example has \n500 columns which I want to retrieve with my join command.\n\nExample which is fast \"select value from smallertable inner join \nmyhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\nExample which is slow \"select value,c1,c2,c3,...,c10 from smallertable \ninner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n\nWhich is the number of columns to fetch so bad ? Which action is done in \nthe db system when querying this via pgadmin? I think that there is no \nreal retrieval included, why is the number of additional columns so bad \nfor the join performance?\n\n> =?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]> writes:\n>> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n>> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82\n>> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999\n>> loops=1)\"\n>> \" Heap Fetches: 356861\"\n>> \" Buffers: shared hit=71799472 read=613813\"\n>> \"Total runtime: 2503009.611 ms\"\n> So that works out to about 4 msec per page fetched considering only I/O\n> costs, which is about as good as you're likely to get if the data is\n> sitting on spinning rust.\n>\n> You could potentially make it faster with a VACUUM (to mark all pages\n> all-visible and eliminate the \"heap fetches\" costs), or a REINDEX\n> (so that the index scan becomes more nearly sequential instead of random\n> access). However, unless the data is nearly static those will just be\n> temporary fixes: the time will degrade again as you update the table.\n>\n>> Note: This select is just for testing. My final statement will be a join\n>> on this table via the \"mycolumn\" column.\n> In that case it's probably a waste of time to worry about the performance\n> of this query as such. In the first place, a join is not likely to use\n> the index at all unless it's fetching a relatively small number of rows,\n> and in the second place it seems unlikely that the join query can use\n> an IndexOnlyScan on this index --- I imagine that the purpose of the join\n> will require fetching additional columns.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 21:32:03 +0200",
"msg_from": "=?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Björn Wittich\nSent: Tuesday, October 21, 2014 3:32 PM\nTo: [email protected]\nSubject: Re: [PERFORM] extremly bad select performance on huge table\n\nHi Tom and Igor,\n\nthanks for your help. With the reindex the select query running time was reduced from 5200 sec to 130 sec. Impressive!\n\nEven a join on this table is now fast.\n\nUnfortunately, there is now another problem: The table in my example has\n500 columns which I want to retrieve with my join command.\n\nExample which is fast \"select value from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\nExample which is slow \"select value,c1,c2,c3,...,c10 from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n\nWhich is the number of columns to fetch so bad ? Which action is done in \nthe db system when querying this via pgadmin? I think that there is no \nreal retrieval included, why is the number of additional columns so bad \nfor the join performance?\n\n> =?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]> writes:\n>> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n>> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82\n>> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999\n>> loops=1)\"\n>> \" Heap Fetches: 356861\"\n>> \" Buffers: shared hit=71799472 read=613813\"\n>> \"Total runtime: 2503009.611 ms\"\n> So that works out to about 4 msec per page fetched considering only I/O\n> costs, which is about as good as you're likely to get if the data is\n> sitting on spinning rust.\n>\n> You could potentially make it faster with a VACUUM (to mark all pages\n> all-visible and eliminate the \"heap fetches\" costs), or a REINDEX\n> (so that the index scan becomes more nearly sequential instead of random\n> access). However, unless the data is nearly static those will just be\n> temporary fixes: the time will degrade again as you update the table.\n>\n>> Note: This select is just for testing. My final statement will be a join\n>> on this table via the \"mycolumn\" column.\n> In that case it's probably a waste of time to worry about the performance\n> of this query as such. In the first place, a join is not likely to use\n> the index at all unless it's fetching a relatively small number of rows,\n> and in the second place it seems unlikely that the join query can use\n> an IndexOnlyScan on this index --- I imagine that the purpose of the join\n> will require fetching additional columns.\n>\n> \t\t\tregards, tom lane\n>\n>\n\nBjörn,\n\nI think, the timing difference you see between 2 queries is caused by delivering to the front-end (PgAdmin) and displaying all additional columns that you include in the second query (much bigger amount of data to pass from the db to the client).\nPretty sure, if you do explain analyze on both queries, you'll see the same timing, because it'll reflect only db time without what's spent on delivering data to the client.\n\nRegards,\nIgor Neyman\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 19:53:48 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "Hi Igor,\n\nthat was also my assumption, but unfortunately this isn't true.\nI am using the explain analyze.\n\nExample which is fast \"explain analyze select value from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n130 - 140 sec\n\nExample which is fast \"explain analyze select value,c1 from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n\ndoes not complete after several hours although the c1 coulmn should only \nbe relevant for retrieval.\n\nComparing the explain comparison of both statements gave me a hint:\n\nadding the c1 column changes the query planner to make a sequential scan \non myhugetable as well as on smallertable. This is much slower.\n\nWhen I set enable_seqscan=false the queryplanner shows the same query \nplan for both statements but the statement including the c1 column does \nnot complete after several hours.\n\nHow can this be explained?\n\nI do not want the db server to prepare the whole query result at once, \nmy intention is that the asynchronous retrieval starts as fast as possible.\n\nThanks\nBj�rn\n\n\n\n\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Bj�rn Wittich\n> Sent: Tuesday, October 21, 2014 3:32 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] extremly bad select performance on huge table\n>\n> Hi Tom and Igor,\n>\n> thanks for your help. With the reindex the select query running time was reduced from 5200 sec to 130 sec. Impressive!\n>\n> Even a join on this table is now fast.\n>\n> Unfortunately, there is now another problem: The table in my example has\n> 500 columns which I want to retrieve with my join command.\n>\n> Example which is fast \"select value from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n>\n> Example which is slow \"select value,c1,c2,c3,...,c10 from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n>\n>\n> Which is the number of columns to fetch so bad ? Which action is done in\n> the db system when querying this via pgadmin? I think that there is no\n> real retrieval included, why is the number of additional columns so bad\n> for the join performance?\n>\n>> =?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]> writes:\n>>> Here is the explain (analyze,buffers) select mycolumn from myhugetable\n>>> \"Index Only Scan using myprimkey on myhugetable (cost=0.00..8224444.82\n>>> rows=71768080 width=33) (actual time=16.722..2456300.778 rows=71825999\n>>> loops=1)\"\n>>> \" Heap Fetches: 356861\"\n>>> \" Buffers: shared hit=71799472 read=613813\"\n>>> \"Total runtime: 2503009.611 ms\"\n>> So that works out to about 4 msec per page fetched considering only I/O\n>> costs, which is about as good as you're likely to get if the data is\n>> sitting on spinning rust.\n>>\n>> You could potentially make it faster with a VACUUM (to mark all pages\n>> all-visible and eliminate the \"heap fetches\" costs), or a REINDEX\n>> (so that the index scan becomes more nearly sequential instead of random\n>> access). However, unless the data is nearly static those will just be\n>> temporary fixes: the time will degrade again as you update the table.\n>>\n>>> Note: This select is just for testing. My final statement will be a join\n>>> on this table via the \"mycolumn\" column.\n>> In that case it's probably a waste of time to worry about the performance\n>> of this query as such. 
In the first place, a join is not likely to use\n>> the index at all unless it's fetching a relatively small number of rows,\n>> and in the second place it seems unlikely that the join query can use\n>> an IndexOnlyScan on this index --- I imagine that the purpose of the join\n>> will require fetching additional columns.\n>>\n>> \t\t\tregards, tom lane\n>>\n>>\n> Björn,\n>\n> I think, the timing difference you see between 2 queries is caused by delivering to the front-end (PgAdmin) and displaying all additional columns that you include in the second query (much bigger amount of data to pass from the db to the client).\n> Pretty sure, if you do explain analyze on both queries, you'll see the same timing, because it'll reflect only db time without what's spent on delivering data to the client.\n>\n> Regards,\n> Igor Neyman\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 07:05:50 +0200",
"msg_from": "=?ISO-8859-1?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Björn Wittich\nSent: Wednesday, October 22, 2014 1:06 AM\nTo: [email protected]\nSubject: Re: [PERFORM] extremly bad select performance on huge table\n\nHi Igor,\n\nthat was also my assumption, but unfortunately this isn't true.\nI am using the explain analyze.\n\nExample which is fast \"explain analyze select value from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n130 - 140 sec\n\nExample which is fast \"explain analyze select value,c1 from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n\n\ndoes not complete after several hours although the c1 coulmn should only be relevant for retrieval.\n\nComparing the explain comparison of both statements gave me a hint:\n\nadding the c1 column changes the query planner to make a sequential scan on myhugetable as well as on smallertable. This is much slower.\n\nWhen I set enable_seqscan=false the queryplanner shows the same query plan for both statements but the statement including the c1 column does not complete after several hours.\n\nHow can this be explained?\n\nI do not want the db server to prepare the whole query result at once, my intention is that the asynchronous retrieval starts as fast as possible.\n\nThanks\nBjörn\n\n\n\n\n>\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Björn \n> Wittich\n> Sent: Tuesday, October 21, 2014 3:32 PM\n> To: [email protected]\n> Subject: Re: [PERFORM] extremly bad select performance on huge table\n>\n> Hi Tom and Igor,\n>\n> thanks for your help. With the reindex the select query running time was reduced from 5200 sec to 130 sec. Impressive!\n>\n> Even a join on this table is now fast.\n>\n> Unfortunately, there is now another problem: The table in my example \n> has\n> 500 columns which I want to retrieve with my join command.\n>\n> Example which is fast \"select value from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n>\n> Example which is slow \"select value,c1,c2,c3,...,c10 from smallertable inner join myhugetable ON smallertable.mycolumn = myhugetable.mycolumn\"\n>\n>\n> Which is the number of columns to fetch so bad ? Which action is done \n> in the db system when querying this via pgadmin? I think that there is \n> no real retrieval included, why is the number of additional columns so \n> bad for the join performance?\n>\n>> =?ISO-8859-15?Q?Bj=F6rn_Wittich?= <[email protected]> writes:\n>>> Here is the explain (analyze,buffers) select mycolumn from \n>>> myhugetable \"Index Only Scan using myprimkey on myhugetable \n>>> (cost=0.00..8224444.82\n>>> rows=71768080 width=33) (actual time=16.722..2456300.778 \n>>> rows=71825999 loops=1)\"\n>>> \" Heap Fetches: 356861\"\n>>> \" Buffers: shared hit=71799472 read=613813\"\n>>> \"Total runtime: 2503009.611 ms\"\n>> So that works out to about 4 msec per page fetched considering only \n>> I/O costs, which is about as good as you're likely to get if the data \n>> is sitting on spinning rust.\n>>\n>> You could potentially make it faster with a VACUUM (to mark all pages \n>> all-visible and eliminate the \"heap fetches\" costs), or a REINDEX (so \n>> that the index scan becomes more nearly sequential instead of random \n>> access). 
However, unless the data is nearly static those will just \n>> be temporary fixes: the time will degrade again as you update the table.\n>>\n>>> Note: This select is just for testing. My final statement will be a \n>>> join on this table via the \"mycolumn\" column.\n>> In that case it's probably a waste of time to worry about the \n>> performance of this query as such.  In the first place, a join is not \n>> likely to use the index at all unless it's fetching a relatively \n>> small number of rows, and in the second place it seems unlikely that \n>> the join query can use an IndexOnlyScan on this index --- I imagine \n>> that the purpose of the join will require fetching additional columns.\n>>\n>> \t\t\tregards, tom lane\n>>\n>>\n> Björn,\n>\n> I think, the timing difference you see between 2 queries is caused by delivering to the front-end (PgAdmin) and displaying all additional columns that you include in the second query (much bigger amount of data to pass from the db to the client).\n> Pretty sure, if you do explain analyze on both queries, you'll see the same timing, because it'll reflect only db time without what's spent on delivering data to the client.\n>\n> Regards,\n> Igor Neyman\n>\n>\n>\n\n\nOkay,\n\nSo, REINDEX helped with the original query, whose execution plan used an Index Only Scan, if I remember correctly, since you asked only for the column in the PK index.\nNow, when you add some other column which is not in the index, it switches to a Sequential Scan.\nSo, check the bloat on the table. Maybe performance could be improved if you VACUUM the bloated table.\n\nRegards,\nIgor Neyman\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 13:57:34 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "Björn Wittich <[email protected]> wrote:\n\n> I do not want the db server to prepare the whole query result at\n> once, my intention is that the asynchronous retrieval starts as\n> fast as possible.\n\nThen you probably should be using a cursor.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 07:53:59 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
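A minimal sketch of the cursor approach Kevin suggests, using the table and column names from this thread; the batch size of 1000 is an arbitrary assumption:

    BEGIN;
    DECLARE bigjoin CURSOR FOR
        SELECT value, c1
        FROM smallertable
        INNER JOIN myhugetable ON smallertable.mycolumn = myhugetable.mycolumn;
    -- The client can start consuming rows immediately, batch by batch:
    FETCH 1000 FROM bigjoin;   -- repeat until it returns no rows
    CLOSE bigjoin;
    COMMIT;

Client libraries expose the same streaming behaviour without explicit cursors, e.g. a JDBC fetch size or libpq's row-at-a-time retrieval.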
{
"msg_contents": "Hi Kevin,\n\n\nthis is what I need (I think). Hopefully a cursor can operate on a \njoin. Will read docu now.\n\nThanks!\n\n\nBj�rn\n\nAm 22.10.2014 16:53, schrieb Kevin Grittner:\n> Bj�rn Wittich <[email protected]> wrote:\n>\n>> I do not want the db server to prepare the whole query result at\n>> once, my intention is that the asynchronous retrieval starts as\n>> fast as possible.\n> Then you probably should be using a cursor.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 17:13:42 +0200",
"msg_from": "=?ISO-8859-1?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
},
{
"msg_contents": "Hi,\n\nwith a cursor the behaviour is the same. So I would like to ask a more \ngeneral question:\n\nMy client needs to receive data from a huge join. The time the client \nwaits for being able to fetch the first row is very long. When the \nretrieval starts after about 10 mins, the client itself is I/O bound so \nit is not able to catch up the elapsed time.\n\nMy workaround was to build a queue of small joins (assuming the huge \njoin delivers 10 mio rows I now have 10000 joins delivering 1000 rows ). \nSo the general question is: Is there a better solution then my crude \nworkaround?\n\n\nThank you\n\n> Hi Kevin,\n>\n>\n> this is what I need (I think). Hopefully a cursor can operate on a \n> join. Will read docu now.\n>\n> Thanks!\n>\n>\n> Bj�rn\n>\n> Am 22.10.2014 16:53, schrieb Kevin Grittner:\n>> Bj�rn Wittich <[email protected]> wrote:\n>>\n>>> I do not want the db server to prepare the whole query result at\n>>> once, my intention is that the asynchronous retrieval starts as\n>>> fast as possible.\n>> Then you probably should be using a cursor.\n>>\n>> -- \n>> Kevin Grittner\n>> EDB: http://www.enterprisedb.com\n>> The Enterprise PostgreSQL Company\n>>\n>>\n>\n>\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 24 Oct 2014 07:16:48 +0200",
"msg_from": "=?ISO-8859-1?Q?Bj=F6rn_Wittich?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: extremly bad select performance on huge table"
}
] |
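Björn's queue-of-small-joins workaround in the last message amounts to keyset pagination on the join column. A hedged sketch, assuming "mycolumn" is unique (it is the primary key in this thread) and ":last_seen" is a parameter the client carries over from the previous batch:

    SELECT value, c1, m.mycolumn
    FROM smallertable s
    INNER JOIN myhugetable m ON s.mycolumn = m.mycolumn
    WHERE m.mycolumn > :last_seen     -- resume after the previous batch
    ORDER BY m.mycolumn
    LIMIT 1000;

Unlike OFFSET-based paging, each batch starts at an index position, so the cost per batch stays roughly constant.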
[
{
"msg_contents": "I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel 3.16.3.\nI receive numerous Error: out of memory messages in the log, which are\naborting client requests, even though there appears to be 23GB available in\nthe OS cache.\n\nThere is no swap on the box. Postgres is behind pgbouncer to protect from\nthe 200 real clients, which limits connections to 32, although there are\nrarely more than 20 active connections, even though postgres\nmax_connections is set very high for historic reasons. There is also a 4GB\njava process running on the box.\n\n\n\n\nrelevant postgresql.conf:\n\nmax_connections = 1000 # (change requires restart)\nshared_buffers = 7GB # min 128kB\nwork_mem = 40MB # min 64kB\nmaintenance_work_mem = 1GB # min 1MB\neffective_cache_size = 20GB\n\n\n\nsysctl.conf:\n\nvm.swappiness = 0\nvm.overcommit_memory = 2\nkernel.shmmax=34359738368\nkernel.shmall=8388608\n\n\n\nlog example:\n\nERROR: out of memory\nDETAIL: Failed on request of size 67108864.\nSTATEMENT: SELECT \"package_texts\".* FROM \"package_texts\" WHERE\n\"package_texts\".\"id\" = $1 LIMIT 1\n\n\n\nexample pg_top, showing 23GB available in cache:\n\nlast pid: 6607; load avg: 3.59, 2.32, 2.61; up 16+09:17:29\n20:49:51\n18 processes: 1 running, 17 sleeping\nCPU states: 22.5% user, 0.0% nice, 4.9% system, 63.2% idle, 9.4% iowait\nMemory: 29G used, 186M free, 7648K buffers, 23G cached\nDB activity: 2479 tps, 1 rollbs/s, 217 buffer r/s, 99 hit%, 11994 row\nr/s, 3820 row w/s\nDB I/O: 0 reads/s, 0 KB/s, 0 writes/s, 0 KB/s\nDB disk: 149.8 GB total, 46.7 GB free (68% used)\nSwap:\n\n\n\nexample top showing the only other significant 4GB process on the box:\n\ntop - 21:05:09 up 16 days, 9:32, 2 users, load average: 2.73, 2.91, 2.88\nTasks: 147 total, 3 running, 244 sleeping, 0 stopped, 0 zombie\n%Cpu(s): 22.1 us, 4.1 sy, 0.0 ni, 62.9 id, 9.8 wa, 0.0 hi, 0.7 si,\n 0.3 st\nKiB Mem: 30827220 total, 30642584 used, 184636 free, 7292 buffers\nKiB Swap: 0 total, 0 used, 0 free. 23449636 cached Mem\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 7407 postgres 20 0 7604928 10172 7932 S 29.6 0.0\n 2:51.27 postgres\n10469 postgres 20 0 7617716 176032 160328 R 11.6 0.6 0:01.48 postgres\n10211 postgres 20 0 7630352 237736 208704 S 10.6 0.8 0:03.64 postgres\n18202 elastic+ 20 0 8726984 4.223g 4248 S 9.6 14.4 883:06.79 java\n9711 postgres 20 0 7619500 354188 335856 S 7.0 1.1 0:08.03 postgres\n3638 postgres 20 0 7634552 1.162g 1.127g S 6.6 4.0 0:50.42 postgres\n\nI'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel 3.16.3. I receive numerous Error: out of memory messages in the log, which are aborting client requests, even though there appears to be 23GB available in the OS cache.There is no swap on the box. Postgres is behind pgbouncer to protect from the 200 real clients, which limits connections to 32, although there are rarely more than 20 active connections, even though postgres max_connections is set very high for historic reasons. 
There is also a 4GB java process running on the box.relevant postgresql.conf:max_connections = 1000 # (change requires restart)shared_buffers = 7GB # min 128kBwork_mem = 40MB # min 64kBmaintenance_work_mem = 1GB # min 1MBeffective_cache_size = 20GBsysctl.conf:vm.swappiness = 0vm.overcommit_memory = 2kernel.shmmax=34359738368kernel.shmall=8388608log example:ERROR: out of memoryDETAIL: Failed on request of size 67108864.STATEMENT: SELECT \"package_texts\".* FROM \"package_texts\" WHERE \"package_texts\".\"id\" = $1 LIMIT 1example pg_top, showing 23GB available in cache:last pid: 6607; load avg: 3.59, 2.32, 2.61; up 16+09:17:29 20:49:5118 processes: 1 running, 17 sleepingCPU states: 22.5% user, 0.0% nice, 4.9% system, 63.2% idle, 9.4% iowaitMemory: 29G used, 186M free, 7648K buffers, 23G cachedDB activity: 2479 tps, 1 rollbs/s, 217 buffer r/s, 99 hit%, 11994 row r/s, 3820 row w/s DB I/O: 0 reads/s, 0 KB/s, 0 writes/s, 0 KB/s DB disk: 149.8 GB total, 46.7 GB free (68% used)Swap:example top showing the only other significant 4GB process on the box:top - 21:05:09 up 16 days, 9:32, 2 users, load average: 2.73, 2.91, 2.88Tasks: 147 total, 3 running, 244 sleeping, 0 stopped, 0 zombie%Cpu(s): 22.1 us, 4.1 sy, 0.0 ni, 62.9 id, 9.8 wa, 0.0 hi, 0.7 si, 0.3 stKiB Mem: 30827220 total, 30642584 used, 184636 free, 7292 buffersKiB Swap: 0 total, 0 used, 0 free. 23449636 cached Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 7407 postgres 20 0 7604928 10172 7932 S 29.6 0.0 2:51.27 postgres10469 postgres 20 0 7617716 176032 160328 R 11.6 0.6 0:01.48 postgres10211 postgres 20 0 7630352 237736 208704 S 10.6 0.8 0:03.64 postgres18202 elastic+ 20 0 8726984 4.223g 4248 S 9.6 14.4 883:06.79 java 9711 postgres 20 0 7619500 354188 335856 S 7.0 1.1 0:08.03 postgres3638 postgres 20 0 7634552 1.162g 1.127g S 6.6 4.0 0:50.42 postgres",
"msg_date": "Tue, 21 Oct 2014 15:25:01 -0700",
"msg_from": "Montana Low <[email protected]>",
"msg_from_op": true,
"msg_subject": "ERROR: out of memory | with 23GB cached 7GB reserved on 30GB machine"
},
{
"msg_contents": "Montana Low <[email protected]> writes:\n> I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel 3.16.3.\n> I receive numerous Error: out of memory messages in the log, which are\n> aborting client requests, even though there appears to be 23GB available in\n> the OS cache.\n\nPerhaps the postmaster is being started with a ulimit setting that\nrestricts process size?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 21 Oct 2014 18:35:10 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: out of memory | with 23GB cached 7GB reserved on 30GB\n machine"
},
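A quick way to test Tom's ulimit theory, sketched for a stock Linux install; $PGDATA is assumed to point at the data directory:

    # The first line of postmaster.pid is the postmaster's PID
    PGPID=$(head -1 "$PGDATA/postmaster.pid")
    # A restrictive ulimit would show up in these rows
    grep -E 'Max (address space|data size)' "/proc/$PGPID/limits"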
{
"msg_contents": "Dne 22 Říjen 2014, 0:25, Montana Low napsal(a):\n> I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel\n> 3.16.3.\n> I receive numerous Error: out of memory messages in the log, which are\n> aborting client requests, even though there appears to be 23GB available\n> in\n> the OS cache.\n>\n> There is no swap on the box. Postgres is behind pgbouncer to protect from\n> the 200 real clients, which limits connections to 32, although there are\n> rarely more than 20 active connections, even though postgres\n> max_connections is set very high for historic reasons. There is also a 4GB\n> java process running on the box.\n>\n>\n>\n>\n> relevant postgresql.conf:\n>\n> max_connections = 1000 # (change requires restart)\n> shared_buffers = 7GB # min 128kB\n> work_mem = 40MB # min 64kB\n> maintenance_work_mem = 1GB # min 1MB\n> effective_cache_size = 20GB\n>\n>\n>\n> sysctl.conf:\n>\n> vm.swappiness = 0\n> vm.overcommit_memory = 2\n\nThis means you have 'no overcommit', so the amount of memory is limited by\novercommit_ratio + swap. The default value for overcommit_ratio is 50%\nRAM, and as you have no swap that effectively means only 50% of the RAM is\navailable to the system.\n\nIf you want to verify this, check /proc/meminfo - see the lines\nCommitLimit (the current limit) and Commited_AS (committed address space).\nOnce the committed_as reaches the limit, it's game over.\n\nThere are different ways to fix this, or at least improve that:\n\n(1) increasing the overcommit_ratio (clearly, 50% is way too low -\nsomething 90% might be more appropriate on 30GB RAM without swap)\n\n(2) adding swap (say a small ephemeral drive, with swappiness=10 or\nsomething like that)\n\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 22 Oct 2014 00:46:04 +0200",
"msg_from": "\"Tomas Vondra\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: out of memory | with 23GB cached 7GB reserved\n on 30GB machine"
},
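Tomas's point can be checked against the numbers posted later in this thread. With vm.overcommit_memory=2 the kernel computes, to a first approximation (ignoring hugepage and admin reservations):

    # CommitLimit ~= SwapTotal + MemTotal * overcommit_ratio / 100
    #             =  0 kB      + 30827220 kB * 50 / 100
    #             =  15413610 kB  (~14.7 GB)
    # which matches, to within rounding, the CommitLimit of 15413608 kB
    # in the meminfo output below; allocations start failing once
    # Committed_AS approaches that line.
    grep -E 'CommitLimit|Committed_AS' /proc/meminfo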
{
"msg_contents": "I didn't realize that about overcommit_ratio. It was at 50, I've changed it\nto 95. I'll see if that clears up the problem moving forward.\n\n# cat /proc/meminfo\nMemTotal: 30827220 kB\nMemFree: 153524 kB\nMemAvailable: 17941864 kB\nBuffers: 6188 kB\nCached: 24560208 kB\nSwapCached: 0 kB\nActive: 20971256 kB\nInactive: 8538660 kB\nActive(anon): 12460680 kB\nInactive(anon): 36612 kB\nActive(file): 8510576 kB\nInactive(file): 8502048 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 0 kB\nSwapFree: 0 kB\nDirty: 50088 kB\nWriteback: 160 kB\nAnonPages: 4943740 kB\nMapped: 7571496 kB\nShmem: 7553176 kB\nSlab: 886428 kB\nSReclaimable: 858936 kB\nSUnreclaim: 27492 kB\nKernelStack: 4208 kB\nPageTables: 188352 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 15413608 kB\nCommitted_AS: 14690544 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 59012 kB\nVmallocChunk: 34359642367 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 31465472 kB\nDirectMap2M: 0 kB\n\n\n\n# sysctl -a:\n\nvm.admin_reserve_kbytes = 8192\n\nvm.block_dump = 0\n\nvm.dirty_background_bytes = 0\n\nvm.dirty_background_ratio = 10\n\nvm.dirty_bytes = 0\n\nvm.dirty_expire_centisecs = 3000\n\nvm.dirty_ratio = 20\n\nvm.dirty_writeback_centisecs = 500\n\nvm.drop_caches = 0\n\nvm.extfrag_threshold = 500\n\nvm.hugepages_treat_as_movable = 0\n\nvm.hugetlb_shm_group = 0\n\nvm.laptop_mode = 0\n\nvm.legacy_va_layout = 0\n\nvm.lowmem_reserve_ratio = 256 256 32\n\nvm.max_map_count = 65530\n\nvm.min_free_kbytes = 22207\n\nvm.min_slab_ratio = 5\n\nvm.min_unmapped_ratio = 1\n\nvm.mmap_min_addr = 4096\n\nvm.nr_hugepages = 0\n\nvm.nr_hugepages_mempolicy = 0\n\nvm.nr_overcommit_hugepages = 0\n\nvm.nr_pdflush_threads = 0\n\nvm.numa_zonelist_order = default\n\nvm.oom_dump_tasks = 1\n\nvm.oom_kill_allocating_task = 0\n\nvm.overcommit_kbytes = 0\n\nvm.overcommit_memory = 2\n\nvm.overcommit_ratio = 50\n\nvm.page-cluster = 3\n\nvm.panic_on_oom = 0\n\nvm.percpu_pagelist_fraction = 0\n\nvm.scan_unevictable_pages = 0\n\nvm.stat_interval = 1\n\nvm.swappiness = 0\n\nvm.user_reserve_kbytes = 131072\n\nvm.vfs_cache_pressure = 100\n\nvm.zone_reclaim_mode = 0\n\n\n\n\n\n\nOn Tue, Oct 21, 2014 at 3:46 PM, Tomas Vondra <[email protected]> wrote:\n>\n> Dne 22 Říjen 2014, 0:25, Montana Low napsal(a):\n> > I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel\n> > 3.16.3.\n> > I receive numerous Error: out of memory messages in the log, which are\n> > aborting client requests, even though there appears to be 23GB available\n> > in\n> > the OS cache.\n> >\n> > There is no swap on the box. Postgres is behind pgbouncer to protect\nfrom\n> > the 200 real clients, which limits connections to 32, although there are\n> > rarely more than 20 active connections, even though postgres\n> > max_connections is set very high for historic reasons. There is also a\n4GB\n> > java process running on the box.\n> >\n> >\n> >\n> >\n> > relevant postgresql.conf:\n> >\n> > max_connections = 1000 # (change requires restart)\n> > shared_buffers = 7GB # min 128kB\n> > work_mem = 40MB # min 64kB\n> > maintenance_work_mem = 1GB # min 1MB\n> > effective_cache_size = 20GB\n> >\n> >\n> >\n> > sysctl.conf:\n> >\n> > vm.swappiness = 0\n> > vm.overcommit_memory = 2\n>\n> This means you have 'no overcommit', so the amount of memory is limited by\n> overcommit_ratio + swap. 
The default value for overcommit_ratio is 50%\n> RAM, and as you have no swap that effectively means only 50% of the RAM is\n> available to the system.\n>\n> If you want to verify this, check /proc/meminfo - see the lines\n> CommitLimit (the current limit) and Commited_AS (committed address space).\n> Once the committed_as reaches the limit, it's game over.\n>\n> There are different ways to fix this, or at least improve that:\n>\n> (1) increasing the overcommit_ratio (clearly, 50% is way too low -\n> something 90% might be more appropriate on 30GB RAM without swap)\n>\n> (2) adding swap (say a small ephemeral drive, with swappiness=10 or\n> something like that)\n>\n> Tomas\n>\n",
"msg_date": "Tue, 21 Oct 2014 15:55:18 -0700",
"msg_from": "Montana Low <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: out of memory | with 23GB cached 7GB reserved on\n 30GB machine"
},
{
"msg_contents": "increasing overcommit_ratio to 95 solved the problem, the box is now using\nit's memory as expected without needing to resort to swap.\n\nOn Tue, Oct 21, 2014 at 3:55 PM, Montana Low <[email protected]> wrote:\n\n> I didn't realize that about overcommit_ratio. It was at 50, I've changed\n> it to 95. I'll see if that clears up the problem moving forward.\n>\n> # cat /proc/meminfo\n> MemTotal: 30827220 kB\n> MemFree: 153524 kB\n> MemAvailable: 17941864 kB\n> Buffers: 6188 kB\n> Cached: 24560208 kB\n> SwapCached: 0 kB\n> Active: 20971256 kB\n> Inactive: 8538660 kB\n> Active(anon): 12460680 kB\n> Inactive(anon): 36612 kB\n> Active(file): 8510576 kB\n> Inactive(file): 8502048 kB\n> Unevictable: 0 kB\n> Mlocked: 0 kB\n> SwapTotal: 0 kB\n> SwapFree: 0 kB\n> Dirty: 50088 kB\n> Writeback: 160 kB\n> AnonPages: 4943740 kB\n> Mapped: 7571496 kB\n> Shmem: 7553176 kB\n> Slab: 886428 kB\n> SReclaimable: 858936 kB\n> SUnreclaim: 27492 kB\n> KernelStack: 4208 kB\n> PageTables: 188352 kB\n> NFS_Unstable: 0 kB\n> Bounce: 0 kB\n> WritebackTmp: 0 kB\n> CommitLimit: 15413608 kB\n> Committed_AS: 14690544 kB\n> VmallocTotal: 34359738367 kB\n> VmallocUsed: 59012 kB\n> VmallocChunk: 34359642367 kB\n> HugePages_Total: 0\n> HugePages_Free: 0\n> HugePages_Rsvd: 0\n> HugePages_Surp: 0\n> Hugepagesize: 2048 kB\n> DirectMap4k: 31465472 kB\n> DirectMap2M: 0 kB\n>\n>\n>\n> # sysctl -a:\n>\n> vm.admin_reserve_kbytes = 8192\n>\n> vm.block_dump = 0\n>\n> vm.dirty_background_bytes = 0\n>\n> vm.dirty_background_ratio = 10\n>\n> vm.dirty_bytes = 0\n>\n> vm.dirty_expire_centisecs = 3000\n>\n> vm.dirty_ratio = 20\n>\n> vm.dirty_writeback_centisecs = 500\n>\n> vm.drop_caches = 0\n>\n> vm.extfrag_threshold = 500\n>\n> vm.hugepages_treat_as_movable = 0\n>\n> vm.hugetlb_shm_group = 0\n>\n> vm.laptop_mode = 0\n>\n> vm.legacy_va_layout = 0\n>\n> vm.lowmem_reserve_ratio = 256 256 32\n>\n> vm.max_map_count = 65530\n>\n> vm.min_free_kbytes = 22207\n>\n> vm.min_slab_ratio = 5\n>\n> vm.min_unmapped_ratio = 1\n>\n> vm.mmap_min_addr = 4096\n>\n> vm.nr_hugepages = 0\n>\n> vm.nr_hugepages_mempolicy = 0\n>\n> vm.nr_overcommit_hugepages = 0\n>\n> vm.nr_pdflush_threads = 0\n>\n> vm.numa_zonelist_order = default\n>\n> vm.oom_dump_tasks = 1\n>\n> vm.oom_kill_allocating_task = 0\n>\n> vm.overcommit_kbytes = 0\n>\n> vm.overcommit_memory = 2\n>\n> vm.overcommit_ratio = 50\n>\n> vm.page-cluster = 3\n>\n> vm.panic_on_oom = 0\n>\n> vm.percpu_pagelist_fraction = 0\n>\n> vm.scan_unevictable_pages = 0\n>\n> vm.stat_interval = 1\n>\n> vm.swappiness = 0\n>\n> vm.user_reserve_kbytes = 131072\n>\n> vm.vfs_cache_pressure = 100\n>\n> vm.zone_reclaim_mode = 0\n>\n>\n>\n>\n>\n>\n> On Tue, Oct 21, 2014 at 3:46 PM, Tomas Vondra <[email protected]> wrote:\n> >\n> > Dne 22 Říjen 2014, 0:25, Montana Low napsal(a):\n> > > I'm running postgres-9.3 on a 30GB ec2 xen instance w/ linux kernel\n> > > 3.16.3.\n> > > I receive numerous Error: out of memory messages in the log, which are\n> > > aborting client requests, even though there appears to be 23GB\n> available\n> > > in\n> > > the OS cache.\n> > >\n> > > There is no swap on the box. Postgres is behind pgbouncer to protect\n> from\n> > > the 200 real clients, which limits connections to 32, although there\n> are\n> > > rarely more than 20 active connections, even though postgres\n> > > max_connections is set very high for historic reasons. 
There is also a\n> 4GB\n> > > java process running on the box.\n> > >\n> > >\n> > >\n> > >\n> > > relevant postgresql.conf:\n> > >\n> > > max_connections = 1000 # (change requires restart)\n> > > shared_buffers = 7GB # min 128kB\n> > > work_mem = 40MB # min 64kB\n> > > maintenance_work_mem = 1GB # min 1MB\n> > > effective_cache_size = 20GB\n> > >\n> > >\n> > >\n> > > sysctl.conf:\n> > >\n> > > vm.swappiness = 0\n> > > vm.overcommit_memory = 2\n> >\n> > This means you have 'no overcommit', so the amount of memory is limited\n> by\n> > overcommit_ratio + swap. The default value for overcommit_ratio is 50%\n> > RAM, and as you have no swap that effectively means only 50% of the RAM\n> is\n> > available to the system.\n> >\n> > If you want to verify this, check /proc/meminfo - see the lines\n> > CommitLimit (the current limit) and Commited_AS (committed address\n> space).\n> > Once the committed_as reaches the limit, it's game over.\n> >\n> > There are different ways to fix this, or at least improve that:\n> >\n> > (1) increasing the overcommit_ratio (clearly, 50% is way too low -\n> > something 90% might be more appropriate on 30GB RAM without swap)\n> >\n> > (2) adding swap (say a small ephemeral drive, with swappiness=10 or\n> > something like that)\n> >\n> > Tomas\n> >\n>\n",
"msg_date": "Tue, 21 Oct 2014 23:23:56 -0700",
"msg_from": "Montana Low <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: ERROR: out of memory | with 23GB cached 7GB reserved on\n 30GB machine"
}
] |
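For completeness, the fix that resolved the thread above is usually made persistent via sysctl; a sketch using the value arrived at in the thread:

    # apply immediately
    sysctl -w vm.overcommit_ratio=95
    # persist across reboots
    echo 'vm.overcommit_ratio = 95' >> /etc/sysctl.conf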
[
{
"msg_contents": "I have saved data from pg_stat_bgwriter view following Greg Smith's advice\nfrom his book:\nselect now(),* from pg_stat_bgwriter; \n\nand then aggregated the data with query from his book as well.\n\ncheckpoint segments was first 30 and next day I have increased it to 200,\nand results has changed:\n\n\n<http://postgresql.1045698.n5.nabble.com/file/n5824026/Auswahl_235.png> \n\n\nnow percent of checkpoints required because of number of segments is bigger\nand backend writer share is also too high- I assume it's not what should\nhappen.\nI'm not sure how to interpret correlation between allocation and written\ndata?\nThe bigger amount of data written per sec is a good sign?\n \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Checkpoints-tuning-tp5824026.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 23 Oct 2014 06:24:00 -0700 (PDT)",
"msg_from": "pinker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Checkpoints tuning"
}
] |
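The ratios pinker plots can be computed directly from the cumulative counters in pg_stat_bgwriter; a sketch (column names are the view's own in 9.x):

    -- share of checkpoints forced by running out of WAL segments, and
    -- share of buffer writes done by backends rather than the checkpointer
    -- or the background writer
    SELECT checkpoints_req::numeric
               / nullif(checkpoints_timed + checkpoints_req, 0) AS pct_forced_by_segments,
           buffers_backend::numeric
               / nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0)
               AS backend_write_share,
           buffers_alloc
    FROM pg_stat_bgwriter;

Subtracting two timestamped snapshots, as described in the message above, gives the same ratios per interval rather than since the last stats reset.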
[
{
"msg_contents": "Hi,\n\nThis is the Greenplum database 4.3.1.0.\n\nTables :\n\ndev=# \\d+ visits_weekly_new_3\nAppend-Only Columnar Table \"uk.visits_weekly_new_3\"\nColumn | Type | Modifiers | Storage | Compression Type | Compression Level | Block Size | Description\n------------------+------------------------+-----------+----------+------------------+-------------------+------------+-------------\ndate | date | | plain | none | 0 | 32768 |\nhw_id | character varying(256) | | extended | none | 0 | 32768 |\nchannel | character varying(256) | | extended | none | 0 | 32768 |\nindustries | integer[] | | extended | none | 0 | 32768 |\nweighted_visits | double precision | | plain | none | 0 | 32768 |\nprojected_visits | double precision | | plain | none | 0 | 32768 |\nChecksum: f\nChild tables: visits_weekly_new_3_1_prt_1,\nvisits_weekly_new_3_1_prt_2,\nvisits_weekly_new_3_1_prt_3,\nvisits_weekly_new_3_1_prt_4,\nvisits_weekly_new_3_1_prt_5,\nvisits_weekly_new_3_1_prt_6,\nvisits_weekly_new_3_1_prt_7,\nvisits_weekly_new_3_1_prt_8,\nvisits_weekly_new_3_1_prt_9\nHas OIDs: no\nOptions: appendonly=true, orientation=column\nDistributed by: (date, channel)\n\ndev=# \\d+ temp.tmp_hw_channel\nTable \"temp.tmp_hw_channel\"\nColumn | Type | Modifiers | Storage | Description\n--------+------------------------+-----------+----------+-------------\nid | character varying(256) | | extended |\nHas OIDs: no\nDistributed by: (id)\n\nBelow is the execution plan for two SQL, the only difference between two SQL is that one has 2 group by columns and the other one has 3 group by columns. However, one is use hash aggregate, the other is doing sorting and group aggregate. It leads to very different performance although it has the same result set.\n\n\ndev=# explain ANALYZE\n\nSELECT v.date,\n\n channel,\n\n SUM(weighted_visits) AS weighted_visits,\n\n SUM(projected_visits) AS projected_visits\n\nFROM visits_weekly_new_3 v\n\nINNER JOIN temp.tmp_hw_channel id ON v.hw_id = id.id\n\nWHERE v.date >= '2014-05-03'\n\n AND v.date<= '2014-05-24'\n\nGROUP BY v.date,\n\n channel;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=31286842.08..31287447.81 rows=1683 width=536)\n Rows out: 15380160 rows at destination with 14860 ms to first row, 23856 ms to end, start offset by 104 ms.\n -> HashAggregate (cost=31286842.08..31287447.81 rows=1683 width=536)\n Group By: v.date, v.channel\n Rows out: Avg 640840.0 rows x 24 workers. Max 642307 rows (seg14) with 18979 ms to first row, 19365 ms to end, start offset by 57 ms.\n Executor memory: 66688K bytes avg, 66794K bytes max (seg0).\n -> Hash Join (cost=299802.88..28414086.88 rows=11969814 width=132)\n Hash Cond: v.hw_id::text = id.id::text\n Rows out: Avg 6657725.2 rows x 24 workers. Max 7363985 rows (seg10) with 1225 ms to first row, 18839 ms to end, start offset by 63 ms.\n Executor memory: 35037K bytes avg, 35037K bytes max (seg0).\n Work_mem used: 35037K bytes avg, 35037K bytes max (seg0). Workfile: (0 spilling, 0 reused)\n (seg10) Hash chain length 1.3 avg, 7 max, using 389733 of 1048589 buckets.\n -> Append (cost=0.00..5297308.80 rows=11969814 width=87)\n Rows out: Avg 11969813.7 rows x 24 workers. 
Max 13482240 rows (seg10) with 1.284 ms to first row, 8168 ms to end, start offset by 1287 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_1 v (cost=0.00..1324327.20 rows=2992454 width=87)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3623583 rows (seg21) with 1.232 ms to first row, 1299 ms to end, start offset by 1279 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_2 v (cost=0.00..1324327.20 rows=2992454 width=87)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3767678 rows (seg10) with 0.312 ms to first row, 2123 ms to end, start offset by 5966 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_3 v (cost=0.00..1324328.20 rows=2992454 width=87)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 4283207 rows (seg15) with 0.295 ms to first row, 1444 ms to end, start offset by 9383 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_4 v (cost=0.00..1324326.20 rows=2992454 width=87)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3760361 rows (seg12) with 0.299 ms to first row, 1309 ms to end, start offset by 14026 ms.\n -> Hash (cost=127888.98..127888.98 rows=487373 width=45)\n Rows in: Avg 487556.0 rows x 24 workers. Max 487556 rows (seg0) with 1188 ms to end, start offset by 86 ms.\n -> Broadcast Motion 24:24 (slice1; segments: 24) (cost=0.00..127888.98 rows=487373 width=45)\n Rows out: Avg 487556.0 rows x 24 workers at destination. Max 487556 rows (seg0) with 0.094 ms to first row, 590 ms to end, start offset by 86 ms.\n -> Seq Scan on tmp_hw_channel id (cost=0.00..6045.73 rows=20308 width=45)\n Rows out: Avg 20314.8 rows x 24 workers. Max 20536 rows (seg23) with 0.131 ms to first row, 6.642 ms to end, start offset by 69 ms.\nSlice statistics:\n (slice0) Executor memory: 286K bytes.\n (slice1) Executor memory: 774K bytes avg x 24 workers, 774K bytes max (seg0).\n (slice2) Executor memory: 149541K bytes avg x 24 workers, 149658K bytes max (seg0). Work_mem: 35037K bytes max.\nStatement statistics:\n Memory used: 1048576K bytes\nSettings: enable_bitmapscan=on; enable_indexscan=on; enable_sort=off\nTotal runtime: 25374.000 ms\n(40 rows)\n\nTime: 25383.704 ms\n\n\ndev=# explain ANALYZE\nSELECT v.date,\n channel,\n industries,\n SUM(weighted_visits) AS weighted_visits,\n SUM(projected_visits) AS projected_visits\nFROM visits_weekly_new_3 v\nINNER JOIN temp.tmp_hw_channel id ON v.hw_id = id.id\nWHERE v.date >= '2014-05-03'\n AND v.date<= '2014-05-24'\nGROUP BY v.date,\n channel,\n industries;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=152269717.33..157009763.41 rows=1196982 width=568)\n Rows out: 15380160 rows at destination with 35320 ms to first row, 70091 ms to end, start offset by 102 ms.\n -> GroupAggregate (cost=152269717.33..157009763.41 rows=1196982 width=568)\n Group By: v.date, v.channel, v.industries\n Rows out: Avg 640840.0 rows x 24 workers. 
Max 642307 rows (seg14) with 48843 ms to first row, 54853 ms to end, start offset by 54 ms.\n -> Sort (cost=152269717.33..152987906.13 rows=11969814 width=155)\n Sort Key: v.date, v.channel, v.industries\n Rows out: Avg 6657725.2 rows x 24 workers. Max 7363985 rows (seg10) with 64604 ms to first row, 65912 ms to end, start offset by 62 ms.\n Executor memory: 692755K bytes avg, 760338K bytes max (seg15).\n Work_mem used: 692755K bytes avg, 760338K bytes max (seg15). Workfile: (24 spilling, 0 reused)\n Work_mem wanted: 1603070K bytes avg, 1782291K bytes max (seg10) to lessen workfile I/O affecting 24 workers.\n -> Hash Join (cost=299802.88..28834900.88 rows=11969814 width=155)\n Hash Cond: v.hw_id::text = id.id::text\n Rows out: Avg 6657725.2 rows x 24 workers. Max 7363985 rows (seg10) with 1226 ms to first row, 24249 ms to end, start offset by 62 ms.\n Executor memory: 35037K bytes avg, 35037K bytes max (seg0).\n Work_mem used: 35037K bytes avg, 35037K bytes max (seg0). Workfile: (0 spilling, 0 reused)\n (seg10) Hash chain length 1.3 avg, 7 max, using 389733 of 1048589 buckets.\n (seg15) Hash chain length 1.3 avg, 7 max, using 389733 of 1048589 buckets.\n -> Append (cost=0.00..5297308.80 rows=11969814 width=111)\n Rows out: Avg 11969813.7 rows x 24 workers. Max 13482240 rows (seg10) with 0.846 ms to first row, 11214 ms to end, start offset by 1287 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_1 v (cost=0.00..1324327.20 rows=2992454 width=111)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3623583 rows (seg21) with 0.624 ms to first row, 1465 ms to end, start offset by 1264 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_2 v (cost=0.00..1324327.20 rows=2992454 width=110)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3767678 rows (seg10) with 0.486 ms to first row, 2419 ms to end, start offset by 8616 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_3 v (cost=0.00..1324328.20 rows=2992454 width=111)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 4283207 rows (seg15) with 0.453 ms to first row, 2357 ms to end, start offset by 13242 ms.\n -> Append-only Columnar Scan on visits_weekly_new_3_1_prt_4 v (cost=0.00..1324326.20 rows=2992454 width=110)\n Filter: date >= '2014-05-03'::date AND date <= '2014-05-24'::date\n Rows out: Avg 2992453.4 rows x 24 workers. Max 3760361 rows (seg12) with 0.440 ms to first row, 2532 ms to end, start offset by 35558 ms.\n -> Hash (cost=127888.98..127888.98 rows=487373 width=45)\n Rows in: Avg 487556.0 rows x 24 workers. Max 487556 rows (seg0) with 1184 ms to end, start offset by 74 ms.\n -> Broadcast Motion 24:24 (slice1; segments: 24) (cost=0.00..127888.98 rows=487373 width=45)\n Rows out: Avg 487556.0 rows x 24 workers at destination. Max 487556 rows (seg0) with 0.168 ms to first row, 622 ms to end, start offset by 74 ms.\n -> Seq Scan on tmp_hw_channel id (cost=0.00..6045.73 rows=20308 width=45)\n Rows out: Avg 20314.8 rows x 24 workers. Max 20536 rows (seg23) with 0.263 ms to first row, 6.508 ms to end, start offset by 70 ms.\nSlice statistics:\n (slice0) Executor memory: 286K bytes.\n (slice1) Executor memory: 774K bytes avg x 24 workers, 774K bytes max (seg0).\n (slice2) * Executor memory: 771617K bytes avg x 24 workers, 843298K bytes max (seg15). 
Work_mem: 760338K bytes max, 1782291K bytes wanted.\nStatement statistics:\n Memory used: 1048576K bytes\n Memory wanted: 3565580K bytes\nSettings: enable_bitmapscan=on; enable_indexscan=on; enable_sort=off\nTotal runtime: 72071.845 ms\n(47 rows)\n\nTime: 72078.079 ms\n",
"msg_date": "Tue, 28 Oct 2014 06:26:48 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "unnecessary sort in the execution plan when doing group by"
},
{
"msg_contents": "On Tue, Oct 28, 2014 at 7:26 PM, Huang, Suya <[email protected]>\nwrote:\n\n> Hi,\n>\n>\n>\n> This is the Greenplum database 4.3.1.0.\n>\n>\nLikely this is the wrong place to ask for help. The plan output that you've\npasted below looks very different to PostgreSQL's EXPLAIN output.\n\n\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nGather Motion 24:1 (slice2; segments: 24) (cost=31286842.08..31287447.81\nrows=1683 width=536)\n\n Rows out: 15380160 rows at destination with 14860 ms to first row,\n23856 ms to end, start offset by 104 ms.\n\n -> HashAggregate (cost=31286842.08..31287447.81 rows=1683 width=536)\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\nGather Motion 24:1 (slice2; segments: 24)\n(cost=152269717.33..157009763.41 rows=1196982 width=568)\n\n Rows out: 15380160 rows at destination with 35320 ms to first row,\n70091 ms to end, start offset by 102 ms.\n\n -> GroupAggregate (cost=152269717.33..157009763.41 rows=1196982\nwidth=568)\n\n\nMost likely the reason you're getting the difference in plan is because the\nplanner is probably decided that there will be too many hash entries for a\nhash table based on the 3 grouping columns... Look at the estimates, 1683 with\n2 columns and 1196982 with the 3 columns. If those estimates turned out to\nbe true, then the hash table for 3 columns will be massively bigger than it\nwould be with 2 columns. With PostgreSQL you might see the plan changing if\nyou increased the work_mem setting. For greenplum, I've no idea if that's\nthe same.\n\nDatabases are often not very good at knowing with the number of distinct\nvalues would be over more than 1 column. Certain databases have solved this\nwith multi column statistics, but PostgreSQL does not have these. Although\nI just noticed last night that someone is working on them.\n\nRegards\n\nDavid Rowley\n\nOn Tue, Oct 28, 2014 at 7:26 PM, Huang, Suya <[email protected]> wrote:\n\n\nHi,\n \nThis is the Greenplum database 4.3.1.0.\nLikely this is the wrong place to ask for help. The plan output that you've pasted below looks very different to PostgreSQL's EXPLAIN output. QUERY PLAN-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Gather Motion 24:1 (slice2; segments: 24) (cost=31286842.08..31287447.81 rows=1683 width=536) Rows out: 15380160 rows at destination with 14860 ms to first row, 23856 ms to end, start offset by 104 ms. -> HashAggregate (cost=31286842.08..31287447.81 rows=1683 width=536)-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------Gather Motion 24:1 (slice2; segments: 24) (cost=152269717.33..157009763.41 rows=1196982 width=568) Rows out: 15380160 rows at destination with 35320 ms to first row, 70091 ms to end, start offset by 102 ms. -> GroupAggregate (cost=152269717.33..157009763.41 rows=1196982 width=568)Most likely the reason you're getting the difference in plan is because the planner is probably decided that there will be too many hash entries for a hash table based on the 3 grouping columns... 
Look at the estimates, 1683 with 2 columns and 1196982 with the 3 columns. If those estimates turned out to be true, then the hash table for 3 columns will be massively bigger than it would be with 2 columns. With PostgreSQL you might see the plan changing if you increased the work_mem setting. For greenplum, I've no idea if that's the same.Databases are often not very good at knowing with the number of distinct values would be over more than 1 column. Certain databases have solved this with multi column statistics, but PostgreSQL does not have these. Although I just noticed last night that someone is working on them.RegardsDavid Rowley",
"msg_date": "Tue, 28 Oct 2014 20:06:43 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unnecessary sort in the execution plan when doing group by"
},
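A post-script for later readers: the multi-column statistics David mentions at the end eventually shipped as CREATE STATISTICS in PostgreSQL 10, which targets exactly this kind of multi-column GROUP BY misestimate. A sketch using the grouping columns from this thread (not available in the Greenplum 4.3 / PostgreSQL 9.x versions discussed here; the statistics name is arbitrary):

    CREATE STATISTICS visits_grp_stats (ndistinct)
        ON date, channel, industries
        FROM visits_weekly_new_3;
    ANALYZE visits_weekly_new_3;   -- populates the extended statistics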
{
"msg_contents": "Thank you Dave. I've opened an SR with GP and see if they have any good suggestion on changing the plan.\n\nThanks,\nSuya\n________________________________\nFrom: David Rowley [[email protected]]\nSent: Tuesday, October 28, 2014 6:06 PM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] unnecessary sort in the execution plan when doing group by\n\nOn Tue, Oct 28, 2014 at 7:26 PM, Huang, Suya <[email protected]<mailto:[email protected]>> wrote:\nHi,\n\nThis is the Greenplum database 4.3.1.0.\n\nLikely this is the wrong place to ask for help. The plan output that you've pasted below looks very different to PostgreSQL's EXPLAIN output.\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=31286842.08..31287447.81 rows=1683 width=536)\n Rows out: 15380160 rows at destination with 14860 ms to first row, 23856 ms to end, start offset by 104 ms.\n -> HashAggregate (cost=31286842.08..31287447.81 rows=1683 width=536)\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=152269717.33..157009763.41 rows=1196982 width=568)\n Rows out: 15380160 rows at destination with 35320 ms to first row, 70091 ms to end, start offset by 102 ms.\n -> GroupAggregate (cost=152269717.33..157009763.41 rows=1196982 width=568)\n\n\nMost likely the reason you're getting the difference in plan is because the planner is probably decided that there will be too many hash entries for a hash table based on the 3 grouping columns... Look at the estimates, 1683 with 2 columns and 1196982 with the 3 columns. If those estimates turned out to be true, then the hash table for 3 columns will be massively bigger than it would be with 2 columns. With PostgreSQL you might see the plan changing if you increased the work_mem setting. For greenplum, I've no idea if that's the same.\n\nDatabases are often not very good at knowing with the number of distinct values would be over more than 1 column. Certain databases have solved this with multi column statistics, but PostgreSQL does not have these. Although I just noticed last night that someone is working on them.\n\nRegards\n\nDavid Rowley\n\n\n\n\n\n\n\n\nThank you Dave. I've opened an SR with GP and see if they have any good suggestion on changing the plan.\n\nThanks,\nSuya\n\n\nFrom: David Rowley [[email protected]]\nSent: Tuesday, October 28, 2014 6:06 PM\nTo: Huang, Suya\nCc: [email protected]\nSubject: Re: [PERFORM] unnecessary sort in the execution plan when doing group by\n\n\n\n\n\n\nOn Tue, Oct 28, 2014 at 7:26 PM, Huang, Suya \n<[email protected]> wrote:\n\n\n\nHi,\n \nThis is the Greenplum database 4.3.1.0.\n\n\n\n\n\n\nLikely this is the wrong place to ask for help. 
The plan output that you've pasted below looks very different to PostgreSQL's EXPLAIN output.\n \n\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=31286842.08..31287447.81 rows=1683 width=536)\n Rows out: 15380160 rows at destination with 14860 ms to first row, 23856 ms to end, start offset by 104 ms.\n -> HashAggregate (cost=31286842.08..31287447.81 rows=1683 width=536)\n\n\n\n\n\n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nGather Motion 24:1 (slice2; segments: 24) (cost=152269717.33..157009763.41 rows=1196982 width=568)\n Rows out: 15380160 rows at destination with 35320 ms to first row, 70091 ms to end, start offset by 102 ms.\n -> GroupAggregate (cost=152269717.33..157009763.41 rows=1196982 width=568)\n\n\n\n\n\nMost likely the reason you're getting the difference in plan is because the planner is probably decided that there will be too many hash entries for a hash table based on the 3 grouping columns... Look at the estimates, 1683 with\n 2 columns and 1196982 with the 3 columns. If those estimates turned out to be true, then the hash table for 3 columns will be massively bigger than it would be with 2 columns. With PostgreSQL you might see the plan changing if you increased the work_mem\n setting. For greenplum, I've no idea if that's the same.\n\n\nDatabases are often not very good at knowing with the number of distinct values would be over more than 1 column. Certain databases have solved this with multi column statistics, but PostgreSQL does not have these. Although I just noticed last night that\n someone is working on them.\n\n\nRegards\n\n\nDavid Rowley",
"msg_date": "Tue, 28 Oct 2014 23:43:22 +0000",
"msg_from": "\"Huang, Suya\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: unnecessary sort in the execution plan when doing\n group by"
},
{
"msg_contents": "On 28 October 2014 06:26, Huang, Suya <[email protected]> wrote:\n\n> Memory wanted: 3565580K bytes\n\nThis means \"increase work_mem to this value\".\n\n-- \n Simon Riggs http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 4 Nov 2014 16:02:26 +0000",
"msg_from": "Simon Riggs <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: unnecessary sort in the execution plan when doing group by"
}
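Reading Simon's hint literally, a session-level bump just above the reported figure would look like the sketch below; 3565580K is roughly 3.5GB, and this is the PostgreSQL knob, since Greenplum may budget query memory differently:

    -- "Memory wanted: 3565580K bytes" => give the session a bit more than that
    SET work_mem = '3600MB';
    -- rerun the GROUP BY here, then put the budget back
    RESET work_mem;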
] |
[
{
"msg_contents": "Hi, we have a nightly job that restores current production data to the\ndevelopment databases in a 'warm spare' database so that if the developers\nneed fresh data, it's ready during the day. When we moved from 9.0 to 9.2\nsuddenly the restores began to take from a few hours to more like 15 hours\nor so. We're in Amazon EC2, I've tried new EBS volumes, warmed them up,\nthrew IOPS at them, pretty much all the standard stuff to get more disk\nperformance.\n\nHere's the thing, the disk isn't saturated. The behavior I'm seeing seems\nvery odd to me; I'm seeing the source disk which holds the dump saturated by\nreads, which is great, but then I just see nothing being written to the\npostgres volume. Just nothing happening, then a small burst. There is no\nwrite queue backup on the destination disk either. if I look at\npg_stat_activity I'll see something like:\n\nCOPY salesforce_reconciliation (salesforce_id, email, advisor_salesforce_id,\nprocessed) FROM stdin\n\nand even for small tables, that seems to take a very long time even though\nthe destination disk is almost at 0 utilization.\n\nThe dumps are created with pg_dump -Fc and restored with pg_restore -d db -j\n2 -O -U postgres PostgreSQL-db.sql.\n\nIs it possible that some default settings were changed from 9.0 to 9.2 that\nwould cause this kind of behavior? I'm stumped here. Thanks in advance for\nany consideration here.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Oct 2014 13:55:39 -0700 (PDT)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Incredibly slow restore times after 9.0>9.2 upgrade"
},
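For reference, the usual levers for speeding up this kind of restore; the -j value is a placeholder to be matched to the instance's cores, and the config lines are a sketch of common temporary settings for bulk loads, not a diagnosis of the regression described above:

    # custom-format dump, as in the original job
    pg_dump -Fc -f PostgreSQL-db.sql db

    # restore with more parallel jobs if the box has spare cores (-j 2 was used above)
    pg_restore -d db -j 4 -O -U postgres PostgreSQL-db.sql

    # postgresql.conf settings often raised temporarily for a bulk restore:
    #   maintenance_work_mem = 2GB    # faster index builds
    #   checkpoint_segments  = 64     # fewer checkpoints during COPY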
{
"msg_contents": "On 28.10.2014 21:55, jmcdonagh wrote:\n> Hi, we have a nightly job that restores current production data to\n> the development databases in a 'warm spare' database so that if the\n> developers need fresh data, it's ready during the day. When we moved\n> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n> volumes, warmed them up, threw IOPS at them, pretty much all the\n> standard stuff to get more disk performance.\n\nSo, if I understand it correctly, you've been restoring into 9.0, then\nyou switched to 9.2 and it's much slower?\n\nIs the 9.2 configured equally to 9.0? If you do something like this\n\n SELECT name, setting\n FROM pg_settings\n WHERE source = 'configuration file';\n\non both versions, what do you get?\n\n> Here's the thing, the disk isn't saturated. The behavior I'm seeing \n> seems very odd to me; I'm seeing the source disk which holds the dump\n> saturated by reads, which is great, but then I just see nothing being\n> written to the postgres volume. Just nothing happening, then a\n> small burst. There is no write queue backup on the destination disk\n> either. if I look at pg_stat_activity I'll see something like:\n> \n> COPY salesforce_reconciliation (salesforce_id, email,\n> advisor_salesforce_id, processed) FROM stdin\n> \n> and even for small tables, that seems to take a very long time even\n> though the destination disk is almost at 0 utilization.\n\nSo, where's the bottleneck? Clearly, there's one, so is it a CPU, a disk\nor something else? Or maybe network, because you're using EBS?\n\nWhat do you mean by 'utilization'? How do you measure that?\n\n\n> The dumps are created with pg_dump -Fc and restored with pg_restore\n> -d db -j 2 -O -U postgres PostgreSQL-db.sql.\n\nOK\n\n> Is it possible that some default settings were changed from 9.0 to \n> 9.2 that would cause this kind of behavior? I'm stumped here. Thanks\n> in advance for any consideration here.\n\nI doubt that. There probably were some changes (after all, we're talking\nabout 2 major versions), but we generally don't change it in a way\nthat'd hurt performance.\n\nregards\nTomas\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Oct 2014 22:19:10 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
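One way to run Tomas's pg_settings query on each server and diff the results; the database name and file names are placeholders:

    psql -At -d db -c "SELECT name || '=' || setting
                       FROM pg_settings
                       WHERE source = 'configuration file'
                       ORDER BY name" > settings-9.2.txt
    diff settings-9.0.txt settings-9.2.txt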
{
"msg_contents": "Hi Tomas- thank you for your thoughtful response!\n\n\nTomas Vondra wrote\n> On 28.10.2014 21:55, jmcdonagh wrote:\n>> Hi, we have a nightly job that restores current production data to\n>> the development databases in a 'warm spare' database so that if the\n>> developers need fresh data, it's ready during the day. When we moved\n>> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n>> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n>> volumes, warmed them up, threw IOPS at them, pretty much all the\n>> standard stuff to get more disk performance.\n> \n> So, if I understand it correctly, you've been restoring into 9.0, then\n> you switched to 9.2 and it's much slower?\n\nYes- but since the move was done utilizing snapshots so the move involves\nnew volumes, but I have created new volumes since then to rule out a single\nbad volume.\n\n\nTomas Vondra wrote\n> Is the 9.2 configured equally to 9.0? If you do something like this\n> \n> SELECT name, setting\n> FROM pg_settings\n> WHERE source = 'configuration file';\n> \n> on both versions, what do you get?\n\nI no longer have the 9.0 box up but we do track configuration via puppet and\ngit. The only configuration change made for 9.2 is:\n\n-#standard_conforming_strings = off\n+standard_conforming_strings = off\n\nCause we have an old app that needs this setting on otherwise we'd spend a\nlot of time trying to fix it.\n\n\nTomas Vondra wrote\n>> Here's the thing, the disk isn't saturated. The behavior I'm seeing \n>> seems very odd to me; I'm seeing the source disk which holds the dump\n>> saturated by reads, which is great, but then I just see nothing being\n>> written to the postgres volume. Just nothing happening, then a\n>> small burst. There is no write queue backup on the destination disk\n>> either. if I look at pg_stat_activity I'll see something like:\n>> \n>> COPY salesforce_reconciliation (salesforce_id, email,\n>> advisor_salesforce_id, processed) FROM stdin\n>> \n>> and even for small tables, that seems to take a very long time even\n>> though the destination disk is almost at 0 utilization.\n> \n> So, where's the bottleneck? Clearly, there's one, so is it a CPU, a disk\n> or something else? Or maybe network, because you're using EBS?\n> \n> What do you mean by 'utilization'? How do you measure that?\n\nThe bottleneck is I/O somehow. I say somehow, because I see iowait averaging\nabout 50% between two CPUs, but there is just no writes to the destination\nEBS volume really happening, just reads from the disk where the source dump\nis located, then bursts of writes to the destination volume every so often.\nIt's kind of puzzling. This is happening on multiple database servers, in\nmultiple availability zones. Driving me bonkers.\n\nWhat I mean by utilization is util% from iostat -m -x 1.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5824847.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Oct 2014 08:12:23 -0700 (PDT)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
{
"msg_contents": "Is the instance ebs-optimized? I am wondering if its a configuration on the\ninstance not postgres or ebs.\n\nOn Wed, Oct 29, 2014 at 10:12 AM, jmcdonagh <[email protected]>\nwrote:\n\n> Hi Tomas- thank you for your thoughtful response!\n>\n>\n> Tomas Vondra wrote\n> > On 28.10.2014 21:55, jmcdonagh wrote:\n> >> Hi, we have a nightly job that restores current production data to\n> >> the development databases in a 'warm spare' database so that if the\n> >> developers need fresh data, it's ready during the day. When we moved\n> >> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n> >> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n> >> volumes, warmed them up, threw IOPS at them, pretty much all the\n> >> standard stuff to get more disk performance.\n> >\n> > So, if I understand it correctly, you've been restoring into 9.0, then\n> > you switched to 9.2 and it's much slower?\n>\n> Yes- but since the move was done utilizing snapshots so the move involves\n> new volumes, but I have created new volumes since then to rule out a single\n> bad volume.\n>\n>\n> Tomas Vondra wrote\n> > Is the 9.2 configured equally to 9.0? If you do something like this\n> >\n> > SELECT name, setting\n> > FROM pg_settings\n> > WHERE source = 'configuration file';\n> >\n> > on both versions, what do you get?\n>\n> I no longer have the 9.0 box up but we do track configuration via puppet\n> and\n> git. The only configuration change made for 9.2 is:\n>\n> -#standard_conforming_strings = off\n> +standard_conforming_strings = off\n>\n> Cause we have an old app that needs this setting on otherwise we'd spend a\n> lot of time trying to fix it.\n>\n>\n> Tomas Vondra wrote\n> >> Here's the thing, the disk isn't saturated. The behavior I'm seeing\n> >> seems very odd to me; I'm seeing the source disk which holds the dump\n> >> saturated by reads, which is great, but then I just see nothing being\n> >> written to the postgres volume. Just nothing happening, then a\n> >> small burst. There is no write queue backup on the destination disk\n> >> either. if I look at pg_stat_activity I'll see something like:\n> >>\n> >> COPY salesforce_reconciliation (salesforce_id, email,\n> >> advisor_salesforce_id, processed) FROM stdin\n> >>\n> >> and even for small tables, that seems to take a very long time even\n> >> though the destination disk is almost at 0 utilization.\n> >\n> > So, where's the bottleneck? Clearly, there's one, so is it a CPU, a disk\n> > or something else? Or maybe network, because you're using EBS?\n> >\n> > What do you mean by 'utilization'? How do you measure that?\n>\n> The bottleneck is I/O somehow. I say somehow, because I see iowait\n> averaging\n> about 50% between two CPUs, but there is just no writes to the destination\n> EBS volume really happening, just reads from the disk where the source dump\n> is located, then bursts of writes to the destination volume every so often.\n> It's kind of puzzling. This is happening on multiple database servers, in\n> multiple availability zones. 
Driving me bonkers.\n>\n> What I mean by utilization is util% from iostat -m -x 1.\n>\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5824847.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nIs the instance ebs-optimized? I am wondering if its a configuration on the instance not postgres or ebs. On Wed, Oct 29, 2014 at 10:12 AM, jmcdonagh <[email protected]> wrote:Hi Tomas- thank you for your thoughtful response!\n\n\nTomas Vondra wrote\n> On 28.10.2014 21:55, jmcdonagh wrote:\n>> Hi, we have a nightly job that restores current production data to\n>> the development databases in a 'warm spare' database so that if the\n>> developers need fresh data, it's ready during the day. When we moved\n>> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n>> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n>> volumes, warmed them up, threw IOPS at them, pretty much all the\n>> standard stuff to get more disk performance.\n>\n> So, if I understand it correctly, you've been restoring into 9.0, then\n> you switched to 9.2 and it's much slower?\n\nYes- but since the move was done utilizing snapshots so the move involves\nnew volumes, but I have created new volumes since then to rule out a single\nbad volume.\n\n\nTomas Vondra wrote\n> Is the 9.2 configured equally to 9.0? If you do something like this\n>\n> SELECT name, setting\n> FROM pg_settings\n> WHERE source = 'configuration file';\n>\n> on both versions, what do you get?\n\nI no longer have the 9.0 box up but we do track configuration via puppet and\ngit. The only configuration change made for 9.2 is:\n\n-#standard_conforming_strings = off\n+standard_conforming_strings = off\n\nCause we have an old app that needs this setting on otherwise we'd spend a\nlot of time trying to fix it.\n\n\nTomas Vondra wrote\n>> Here's the thing, the disk isn't saturated. The behavior I'm seeing\n>> seems very odd to me; I'm seeing the source disk which holds the dump\n>> saturated by reads, which is great, but then I just see nothing being\n>> written to the postgres volume. Just nothing happening, then a\n>> small burst. There is no write queue backup on the destination disk\n>> either. if I look at pg_stat_activity I'll see something like:\n>>\n>> COPY salesforce_reconciliation (salesforce_id, email,\n>> advisor_salesforce_id, processed) FROM stdin\n>>\n>> and even for small tables, that seems to take a very long time even\n>> though the destination disk is almost at 0 utilization.\n>\n> So, where's the bottleneck? Clearly, there's one, so is it a CPU, a disk\n> or something else? Or maybe network, because you're using EBS?\n>\n> What do you mean by 'utilization'? How do you measure that?\n\nThe bottleneck is I/O somehow. I say somehow, because I see iowait averaging\nabout 50% between two CPUs, but there is just no writes to the destination\nEBS volume really happening, just reads from the disk where the source dump\nis located, then bursts of writes to the destination volume every so often.\nIt's kind of puzzling. This is happening on multiple database servers, in\nmultiple availability zones. 
Driving me bonkers.\n\nWhat I mean by utilization is util% from iostat -m -x 1.\n\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5824847.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 29 Oct 2014 10:34:07 -0500",
"msg_from": "\"Mathis, Jason\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
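If it helps, Jason's question can also be answered from the AWS API side; the instance ID below is a placeholder:

    aws ec2 describe-instance-attribute \
        --instance-id i-0123456789abcdef0 \
        --attribute ebsOptimized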
{
"msg_contents": "Hi Jason, oddly enough the setting on or off does not affect this particular\nissue. As a rule I generally enable this option on my instances that support\nit. I recently tried upping the nodes to the latest generation (m3) to try\nand rectify/improve this issue. Unfortunately right now m3 won't work\nbecause we rely on a lot of space in mnt for temporary data work and the new\ninstances don't have much space there (though it is much faster). So I went\nback to m1.large and left EBS optimized off. I'm not seeing any note-worthy\nchange in performance.\n\nSo I went and fired up an RDS postgres instance here. Eventually I want to\nmove to RDS anyways, but it's not a good short term solution to the right\nnow issue. Restore is running now, so I'll know within the next day or so if\nthis is much faster.\n\nI'm puzzled by the change from 9.0 to 9.2 coinciding with this though.\nBefore the upgrade this job never had any issues. But I am 100% aware that\ncould be a red herring.\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5824871.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Oct 2014 09:33:01 -0700 (PDT)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
{
"msg_contents": "I just had a thought- I know some of these tables are in need of a vacuuming.\nCould it be that the dump is dumping a bunch of garbage that the restore has\nto sift through on the restore? I don't know enough details to know if this\nis a dumb thought or not.\n\nThe restore to RDS took roughly the same amount of time. My next move is to\ntry on a fast instance store, and also do a postgres 9 restore of a pure SQL\ndump, but that won't really be a great test since I use custom format. I'm\nassuming here that I can't take the custom dump from 9.2 and apply it to\n9.0, or can I?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5825052.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 Oct 2014 09:23:22 -0700 (PDT)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
{
"msg_contents": "jmcdonagh <[email protected]> writes:\n\n> I just had a thought- I know some of these tables are in need of a vacuuming.\n> Could it be that the dump is dumping a bunch of garbage that the restore has\n> to sift through on the restore? I don't know enough details to know if this\n> is a dumb thought or not.\n\nNo. However it's true that the dump will take a bit longer having to\nscan a bloated table rather than a tight one.\n\nDump will only output the live rows. psql or pg_restore whatever you're\nusing on the target side will not have to step over any junk.\n\nHTH\n\n>\n> The restore to RDS took roughly the same amount of time. My next move is to\n> try on a fast instance store, and also do a postgres 9 restore of a pure SQL\n> dump, but that won't really be a great test since I use custom format. I'm\n> assuming here that I can't take the custom dump from 9.2 and apply it to\n> 9.0, or can I?\n>\n>\n>\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5825052.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 Oct 2014 11:58:11 -0500",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
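To check the bloat guess directly, the dead-tuple counters are enough; a sketch to run against the production source database, not the restore target:

    SELECT relname, n_live_tup, n_dead_tup,
           last_vacuum, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;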
{
"msg_contents": "On 29.10.2014 16:12, jmcdonagh wrote:\n> Hi Tomas- thank you for your thoughtful response!\n> \n> \n> Tomas Vondra wrote\n>> On 28.10.2014 21:55, jmcdonagh wrote:\n>>> Hi, we have a nightly job that restores current production data to\n>>> the development databases in a 'warm spare' database so that if the\n>>> developers need fresh data, it's ready during the day. When we moved\n>>> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n>>> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n>>> volumes, warmed them up, threw IOPS at them, pretty much all the\n>>> standard stuff to get more disk performance.\n>>\n>> So, if I understand it correctly, you've been restoring into 9.0, then\n>> you switched to 9.2 and it's much slower?\n> \n> Yes- but since the move was done utilizing snapshots so the move\n> involves new volumes, but I have created new volumes since then to\n> rule out a single bad volume.\n\nMy advice would be to do some basic low-level performance tests to rule\nthis out. Use dd or (better) fio to test basic I/O performance, it's\nmuch easier to spot issues that way.\n\n> Tomas Vondra wrote\n>> Is the 9.2 configured equally to 9.0? If you do something like this\n>>\n>> SELECT name, setting\n>> FROM pg_settings\n>> WHERE source = 'configuration file';\n>>\n>> on both versions, what do you get?\n> \n> I no longer have the 9.0 box up but we do track configuration via\n> puppet and git. The only configuration change made for 9.2 is:\n> \n> -#standard_conforming_strings = off\n> +standard_conforming_strings = off\n\nCompared to 9.0, I suppose? Anyway, post the non-default config values\nat least for 9.2, please.\n\n> Cause we have an old app that needs this setting on otherwise we'd\n> spend a lot of time trying to fix it.\n\nI doubt standard_conforming_strings has anything to do with the issues.\n\n> Tomas Vondra wrote\n>>> Here's the thing, the disk isn't saturated. The behavior I'm seeing \n>>> seems very odd to me; I'm seeing the source disk which holds the dump\n>>> saturated by reads, which is great, but then I just see nothing being\n>>> written to the postgres volume. Just nothing happening, then a\n>>> small burst. There is no write queue backup on the destination disk\n>>> either. if I look at pg_stat_activity I'll see something like:\n>>>\n>>> COPY salesforce_reconciliation (salesforce_id, email,\n>>> advisor_salesforce_id, processed) FROM stdin\n>>>\n>>> and even for small tables, that seems to take a very long time even\n>>> though the destination disk is almost at 0 utilization.\n>>\n>> So, where's the bottleneck? Clearly, there's one, so is it a CPU, a\n>> disk or something else? Or maybe network, because you're using EBS?\n>>\n>> What do you mean by 'utilization'? How do you measure that?\n> \n> The bottleneck is I/O somehow. I say somehow, because I see iowait \n> averaging about 50% between two CPUs, but there is just no writes to \n> the destination EBS volume really happening, just reads from the\n> disk where the source dump is located, then bursts of writes to the\n> destination volume every so often. It's kind of puzzling. This is\n> happening on multiple database servers, in multiple availability\n> zones. Driving me bonkers.\n> \n> What I mean by utilization is util% from iostat -m -x 1.\n\nI find this rather contradictory. At one moment you say the disk isn't\nsaturated, the next moment you say you're I/O bound.\n\nAlso, iowait (as reported e.g. 
by 'top') is tricky to interpret\ncorrectly, especially on multi-cpu systems (nice intro to the complexity\n[1]). It's really difficult to interpret the 50% iowait without more\ninfo about what's happening on the machine.\n\nIMHO, the utilization (as reported by iotop) is much easier to\ninterpret, because it means '% of time the device was servicing\nrequests'. It has issues too, because 100% does not mean 'saturated'\n(especially on RAID arrays that can service multiple requests in\nparallel), but it's better than iowait.\n\nIf I had to guess based from your info, I'd bet you're CPU bound, so\nthere's very little idle time and about 50% of it is spent waiting for\nI/O requests (hence the 50% iowait). But in total the amount of I/O is\nvery small, so %util is ~0.\n\nPlease, post a few lines of 'iostat -x -k 1' output. Samples from 'top'\nand 'vmstat 1' would be handy too.\n\nregards\nTomas\n\n[1] http://veithen.blogspot.cz/2013/11/iowait-linux.html\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 30 Oct 2014 20:08:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
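A minimal fio run of the kind Tomas suggests, using 8kB blocks (PostgreSQL's page size) and bypassing the page cache; the directory is a placeholder for a mount point on the EBS volume under test:

    fio --name=pgwrite --directory=/mnt/postgresql/test \
        --rw=randwrite --bs=8k --size=2G \
        --ioengine=libaio --direct=1 --runtime=60 --time_based

Comparing this random-write result against a --rw=write (sequential) run usually makes it clear whether the volume or the workload shape is the limit.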
{
"msg_contents": "Thanks for the confirmation Jerry. \n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5825615.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 4 Nov 2014 08:28:05 -0800 (PST)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
},
{
"msg_contents": "Tomas Vondra wrote\n> On 29.10.2014 16:12, jmcdonagh wrote:\n>> Hi Tomas- thank you for your thoughtful response!\n>> \n>> \n>> Tomas Vondra wrote\n>>> On 28.10.2014 21:55, jmcdonagh wrote:\n>>>> Hi, we have a nightly job that restores current production data to\n>>>> the development databases in a 'warm spare' database so that if the\n>>>> developers need fresh data, it's ready during the day. When we moved\n>>>> from 9.0 to 9.2 suddenly the restores began to take from a few hours\n>>>> to more like 15 hours or so. We're in Amazon EC2, I've tried new EBS\n>>>> volumes, warmed them up, threw IOPS at them, pretty much all the\n>>>> standard stuff to get more disk performance.\n>>>\n>>> So, if I understand it correctly, you've been restoring into 9.0, then\n>>> you switched to 9.2 and it's much slower?\n>> \n>> Yes- but since the move was done utilizing snapshots so the move\n>> involves new volumes, but I have created new volumes since then to\n>> rule out a single bad volume.\n> \n> My advice would be to do some basic low-level performance tests to rule\n> this out. Use dd or (better) fio to test basic I/O performance, it's\n> much easier to spot issues that way.\n\nI've done dd tests and the volumes perform fine. \n\n\nTomas Vondra wrote\n>> Tomas Vondra wrote\n>>> Is the 9.2 configured equally to 9.0? If you do something like this\n>>>\n>>> SELECT name, setting\n>>> FROM pg_settings\n>>> WHERE source = 'configuration file';\n>>>\n>>> on both versions, what do you get?\n>> \n>> I no longer have the 9.0 box up but we do track configuration via\n>> puppet and git. The only configuration change made for 9.2 is:\n>> \n>> -#standard_conforming_strings = off\n>> +standard_conforming_strings = off\n> \n> Compared to 9.0, I suppose? Anyway, post the non-default config values\n> at least for 9.2, please.\n\nYea, so in comparison to the only change was that. Here are the non-default\nsettings (some of them are probably defaults, but these are the uncommented\nlines from postgresql.conf):\n\ndata_directory = '/mnt/postgresql/9.2/main' # use data in another\ndirectory\nhba_file = '/etc/postgresql/9.2/main/pg_hba.conf' # host-based\nauthentication file\nident_file = '/etc/postgresql/9.2/main/pg_ident.conf' # ident\nconfiguration file\nexternal_pid_file = '/var/run/postgresql/9.2-main.pid' # write an extra\nPID file\nlisten_addresses = '*' # what IP address(es) to listen on;\nport = 5432 # (change requires restart)\nmax_connections = 300 # (change requires restart)\nunix_socket_directory = '/var/run/postgresql' # (change requires\nrestart)\nssl = true # (change requires restart)\nshared_buffers = 4GB # min 128kB\ntemp_buffers = 128MB # min 800kB\nwork_mem = 256MB # min 64kB\nmaintenance_work_mem = 256MB # min 1MB\nwal_buffers = 512kB # min 32kB\ncommit_delay = 50000 # range 0-100000, in microseconds\ncommit_siblings = 1 # range 1-1000\nrandom_page_cost = 2.0 # same scale as above\neffective_cache_size = 16GB\nfrom_collapse_limit = 10\njoin_collapse_limit = 10 # 1 disables collapsing of explicit\nlog_destination = 'stderr' # Valid values are combinations of\nclient_min_messages = warning # values in order of decreasing detail:\nlog_min_messages = warning # values in order of decreasing detail:\nlog_min_duration_statement = 1000 # -1 is disabled, 0 logs all statements\nlog_line_prefix = '%t ' # special values:\nautovacuum = on # Enable autovacuum subprocess? 
'on'\ndatestyle = 'iso, mdy'\ntimezone = EST5EDT # actually, defaults to TZ environment\nclient_encoding = sql_ascii # actually, defaults to database\nlc_messages = 'en_US.UTF-8' # locale for system error message\nlc_monetary = 'en_US.UTF-8' # locale for monetary formatting\nlc_numeric = 'en_US.UTF-8' # locale for number formatting\nlc_time = 'en_US.UTF-8' # locale for time formatting\ndefault_text_search_config = 'pg_catalog.english'\nstandard_conforming_strings = off\n\n\nTomas Vondra wrote\n>> Tomas Vondra wrote\n>>>> Here's the thing, the disk isn't saturated. The behavior I'm seeing \n>>>> seems very odd to me; I'm seeing the source disk which holds the dump\n>>>> saturated by reads, which is great, but then I just see nothing being\n>>>> written to the postgres volume. Just nothing happening, then a\n>>>> small burst. There is no write queue backup on the destination disk\n>>>> either. if I look at pg_stat_activity I'll see something like:\n>>>>\n>>>> COPY salesforce_reconciliation (salesforce_id, email,\n>>>> advisor_salesforce_id, processed) FROM stdin\n>>>>\n>>>> and even for small tables, that seems to take a very long time even\n>>>> though the destination disk is almost at 0 utilization.\n>>>\n>>> So, where's the bottleneck? Clearly, there's one, so is it a CPU, a\n>>> disk or something else? Or maybe network, because you're using EBS?\n>>>\n>>> What do you mean by 'utilization'? How do you measure that?\n>> \n>> The bottleneck is I/O somehow. I say somehow, because I see iowait \n>> averaging about 50% between two CPUs, but there is just no writes to \n>> the destination EBS volume really happening, just reads from the\n>> disk where the source dump is located, then bursts of writes to the\n>> destination volume every so often. It's kind of puzzling. This is\n>> happening on multiple database servers, in multiple availability\n>> zones. Driving me bonkers.\n>> \n>> What I mean by utilization is util% from iostat -m -x 1.\n> \n> I find this rather contradictory. At one moment you say the disk isn't\n> saturated, the next moment you say you're I/O bound.\n> \n> Also, iowait (as reported e.g. by 'top') is tricky to interpret\n> correctly, especially on multi-cpu systems (nice intro to the complexity\n> [1]). It's really difficult to interpret the 50% iowait without more\n> info about what's happening on the machine.\n> \n> IMHO, the utilization (as reported by iotop) is much easier to\n> interpret, because it means '% of time the device was servicing\n> requests'. It has issues too, because 100% does not mean 'saturated'\n> (especially on RAID arrays that can service multiple requests in\n> parallel), but it's better than iowait.\n> \n> If I had to guess based from your info, I'd bet you're CPU bound, so\n> there's very little idle time and about 50% of it is spent waiting for\n> I/O requests (hence the 50% iowait). But in total the amount of I/O is\n> very small, so %util is ~0.\n> \n> Please, post a few lines of 'iostat -x -k 1' output. Samples from 'top'\n> and 'vmstat 1' would be handy too.\n> \n> regards\n> Tomas\n\nWell I'm confused too by this whole thing which is why I came here. 
I can\ngather those statistics but I have a quick short question, could this be\ncaused by frivolous indexes or something like that?\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Incredibly-slow-restore-times-after-9-0-9-2-upgrade-tp5824701p5825657.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 4 Nov 2014 10:59:32 -0800 (PST)",
"msg_from": "jmcdonagh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Incredibly slow restore times after 9.0>9.2 upgrade"
}
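For completeness, the three samples Tomas asks for can be captured side by side while a restore runs; 30 one-second samples of each is usually enough:

    iostat -x -k 1 30 > iostat.log &
    vmstat 1 30       > vmstat.log &
    top -b -d 1 -n 30 > top.log &
    wait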
] |
[
{
"msg_contents": "Hi friends!\n\nI'd love to get a sanity check on whether a fat select query I'm doing\nmakes sense given the infrastructure that we have.\n\nWe have 3 big tables that we typically join together for certain queries: a\n~40 million row photos table, a ~20 million row users table, and a ~50\nmillion row photo_to_album table that maps photos to albums.\n\nWe like to display real time analytics, which often results in a query like:\n\nselect (random aggregations )\nfrom\nphoto_to_album join photos on photos.id = photo_to_album.photo_id\njoin users on users.id = photos.user_id\nwhere\nphoto_to_album.album_id = <something>\nand\nphotos.created_at between <some dates>\nand <other junk>\n\nWe have indexes on all of the joins, and the where clauses.\n\nOne of these queries that should be targeting something like 300K photos\ntakes 38 seconds to run (with an aggregate/nested loop taking effectively\nall of that time), and then upon second execution with a warm cache, 4\nseconds.\n\nAlso worryingly, it spikes read IOPS to almost 1500/sec during the time and\nwrite IOPS 200/sec. When not running the query, steady level read iops\nbasically nil, write hovers around 50-100.\n\nThis also increases the queue depth from basically 0 up to 6. Keeping the\nqueue depth high seems to cause timeouts in other queries. The CPU is\nbarely if at all affected, hovering around 20%. Memory also barely\naffected.\n\nWe have a RDS Postgres database, m3.2xlarge with 2000 Provisioned IOPS and\n400GB storage. This translates to 8 virtual CPUs, 30GiB memory, and all\nSSD drives.\n\nSeveral questions here:\n\n1) Is that level of IOPS normal?\n2) Is it bad that the level of iops can queue requests that screw up the\nwhole database even if it's just select queries? Especially when the CPU\nand Memory are still plentiful?\n3) What is up with the huge difference between cold and warm cache?\n\nAny help is appreciated!\n\n- jzc\n\nHi friends!I'd love to get a sanity check on whether a fat select query I'm doing makes sense given the infrastructure that we have.We have 3 big tables that we typically join together for certain queries: a ~40 million row photos table, a ~20 million row users table, and a ~50 million row photo_to_album table that maps photos to albums.We like to display real time analytics, which often results in a query like:select (random aggregations )fromphoto_to_album join photos on photos.id = photo_to_album.photo_idjoin users on users.id = photos.user_idwherephoto_to_album.album_id = <something>andphotos.created_at between <some dates>and <other junk>We have indexes on all of the joins, and the where clauses.One of these queries that should be targeting something like 300K photos takes 38 seconds to run (with an aggregate/nested loop taking effectively all of that time), and then upon second execution with a warm cache, 4 seconds.Also worryingly, it spikes read IOPS to almost 1500/sec during the time and write IOPS 200/sec. When not running the query, steady level read iops basically nil, write hovers around 50-100.This also increases the queue depth from basically 0 up to 6. Keeping the queue depth high seems to cause timeouts in other queries. The CPU is barely if at all affected, hovering around 20%. Memory also barely affected.We have a RDS Postgres database, m3.2xlarge with 2000 Provisioned IOPS and 400GB storage. 
This translates to 8 virtual CPUs, 30GiB memory, and all SSD drives.Several questions here:1) Is that level of IOPS normal?2) Is it bad that the level of iops can queue requests that screw up the whole database even if it's just select queries? Especially when the CPU and Memory are still plentiful?3) What is up with the huge difference between cold and warm cache?Any help is appreciated!- jzc",
"msg_date": "Tue, 28 Oct 2014 14:15:53 -0700",
"msg_from": "Jeff Chen <[email protected]>",
"msg_from_op": true,
"msg_subject": "Sanity checking big select performance"
},
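The usual first step for a query like Jeff's is an EXPLAIN with buffer counts on the same shape; the aggregate, album id, and dates below are placeholders:

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM photo_to_album
    JOIN photos ON photos.id = photo_to_album.photo_id
    JOIN users  ON users.id  = photos.user_id
    WHERE photo_to_album.album_id = 12345
      AND photos.created_at BETWEEN '2014-10-01' AND '2014-10-28';
    -- "shared read" counts are blocks fetched from disk (the cold-cache cost);
    -- "shared hit" counts are blocks already found in shared_buffers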
{
"msg_contents": "On 28.10.2014 22:15, Jeff Chen wrote:\n> Hi friends!\n> \n> I'd love to get a sanity check on whether a fat select query I'm doing\n> makes sense given the infrastructure that we have.\n> \n> We have 3 big tables that we typically join together for certain\n> queries: a ~40 million row photos table, a ~20 million row users table,\n> and a ~50 million row photo_to_album table that maps photos to albums.\n\nSo how much data is it? Does it fit within RAM (after loading into DB,\nwith all the indexes)?\n\n> We like to display real time analytics, which often results in a query like:\n> \n> select (random aggregations )\n> from\n> photo_to_album join photos on photos.id <http://photos.id> =\n> photo_to_album.photo_id\n> join users on users.id <http://users.id> = photos.user_id\n> where\n> photo_to_album.album_id = <something>\n> and\n> photos.created_at between <some dates>\n> and <other junk>\n> \n> We have indexes on all of the joins, and the where clauses.\n\nCan we get EXPLAIN (and ideally EXPLAIN ANALYZE) for such queries?\n\n> One of these queries that should be targeting something like 300K\n> photos takes 38 seconds to run (with an aggregate/nested loop taking \n> effectively all of that time), and then upon second execution with a \n> warm cache, 4 seconds.\n\nWell, if you're hitting disk, it's going to be slow. As you observed,\nafter loading it into page cache, it's much faster.\n\n> Also worryingly, it spikes read IOPS to almost 1500/sec during the time\n> and write IOPS 200/sec. When not running the query, steady level read\n> iops basically nil, write hovers around 50-100.\n> \n> This also increases the queue depth from basically 0 up to 6. Keeping\n> the queue depth high seems to cause timeouts in other queries. The CPU\n> is barely if at all affected, hovering around 20%. Memory also barely\n> affected.\n\n20% is ~2 CPU cores (as you have 8 of them).\n\n> We have a RDS Postgres database, m3.2xlarge with 2000 Provisioned IOPS\n> and 400GB storage. This translates to 8 virtual CPUs, 30GiB memory, and\n> all SSD drives.\n\nAFAIK there are two PostgreSQL major versions supported on RDS - 9.1 and\n9.3. Which one are you using?\n\nAlso, can you list values for some basic parameters (shared_buffers,\nwork_mem)? We don't know what are the default values on RDS, neither if\nyou somehow modified them.\n\n> Several questions here:\n> \n> 1) Is that level of IOPS normal?\n\nUmmmmm, why wouldn't it be? Each IO request works with 16 KB (on EBS),\nand you're reading/writing a certain amount of data.\n\n> 2) Is it bad that the level of iops can queue requests that screw up the\n> whole database even if it's just select queries? Especially when the\n> CPU and Memory are still plentiful?\n\nYou're saturating a particular resource. If you hit I/O wall, you can't\nuse the CPU / memory. The fact that it slows down your queries is\nsomehow expected.\n\nIs it bad? Well, if you need to minimize impact on other queries, then\nprobably yes.\n\n> 3) What is up with the huge difference between cold and warm cache?\n\nI don't understand why you're surprised by this? The EBS performance on\nm3.2xlarge (with EBS-Optimized networking, i.e. 1 Gbit dedicated to EBS)\nyou get up to ~120 MB/s, except that you set 2000 IOPS, which is ~32\nMB/s. Memory is orders of magnitude faster, hence the difference.\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 28 Oct 2014 22:42:47 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity checking big select performance"
},
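Tomas's ~32 MB/s figure is just the IOPS arithmetic; with EBS counting one I/O per 16kB request, 2000 provisioned IOPS caps throughput at about 31 MB/s:

    SELECT 2000 * 16 / 1024.0 AS max_mb_per_sec;  -- 2000 IOPS x 16kB = 31.25 MB/s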
{
"msg_contents": "Jeff Chen <[email protected]> wrote:\n\n> One of these queries that should be targeting something like 300K\n> photos takes 38 seconds to run (with an aggregate/nested loop\n> taking effectively all of that time),\n\nWith the seek time of commodity disk drives typically being 9ms, a\nnaive approach using random access to join to 300k rows on a single\nthread with no caching would take 45 minutes; so the fact that you\nare seeing much better than that implies some benefit from cache,\nsome sequential scanning, faster drives, or concurrent access to\nmultiple spindles.\n\n> and then upon second execution with a warm cache, 4 seconds.\n\nThis shows that it is faster to access data in RAM than on disk,\nand that your data wasn't already all in cache (most likely because\nit doesn't all fit in RAM).\n\n> Also worryingly, it spikes read IOPS to almost 1500/sec during\n> the time and write IOPS 200/sec. When not running the query,\n> steady level read iops basically nil, write hovers around 50-100.\n\nThe reads are just another symptom of not having the data fully\ncached. The writes are more interesting. The two obvious\npossibilities are that the query needed to use work files (for\nsorts or hash tables) or that you were accessing a fair amount of\ndata which had not been vacuumed since it was last modified. To\nhelp improve performance for the first, you might want to consider\nincreasing work_mem (although this will reduce RAM available for\ncaching). To improve performance for the second you might want to\nmake autovacuum more aggressive.\n\nTo get more specific advice, you may want to read this page and\nfollow the advice there:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 29 Oct 2014 06:47:49 -0700",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Sanity checking big select performance"
}
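Two of Kevin's suggestions map onto concrete settings; the values below are illustrative starting points assuming the 30GiB instance described above, not tuned recommendations:

    # postgresql.conf (sketch)
    log_temp_files = 0                     # log every work file, to confirm sorts/hashes spill
    work_mem = 64MB                        # per-sort/hash budget; raise cautiously, it applies per operation
    autovacuum_vacuum_scale_factor = 0.05  # vacuum after ~5% of a table changes (default 0.2)
    autovacuum_naptime = 30s               # look for work more often (default 1min)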
] |
[
{
"msg_contents": "Greetings all,\n\nI'm trying to wrap my head around updating my configuration files, which\nhave been probably fairly static since before 8.4.\n\nI've got some beefy hardware but have some tables that are over 57GB raw\nand end up at 140GB size after indexes are applied. One index creation took\n7 hours today. So it's time to dive in and see where i'm lacking and what I\nshould be tweaking.\n\nI looked at pgtune again today and the numbers it's spitting out took me\nback, they are huge. From all historical conversations and attempts a few\nof these larger numbers netted reduced performance vs better performance\n(but that was on older versions of Postgres).\n\nSo I come here today to seek out some type of affirmation that these\nnumbers look good and I should look at putting them into my config, staged\nand or in one fell swoop.\n\nI will start at the same time migrating my config to the latest 9.3\ntemplate...\n\nPostgres Version: 9.3.4, Slony 2.1.3 (migrating to 2.2).\nCentOS 6.x, 2.6.32-431.5.1.el6.x86_64\nBig HP Boxen.\n\n32 core, 256GB of Ram DB is roughly 175GB in size but many tables are\nhundreds of millions of rows.\n\nThe pgtune configurations that were spit out based on the information above;\n\nmax_connections = 300\nshared_buffers = 64GB\neffective_cache_size = 192GB\nwork_mem = 223696kB\nmaintenance_work_mem = 2GB\ncheckpoint_segments = 32\ncheckpoint_completion_target = 0.7\nwal_buffers = 16MB\ndefault_statistics_target = 100\n\n*my current configuration:*\n\nmax_connections = 300\nshared_buffers = 2000MB\neffective_cache_size = 7GB\nwork_mem = 6GB\nmaintenance_work_mem = 10GB <-- bumped this to try to get my reindexes\ndone\ncheckpoint_segments = 100\n#wal_buffers = 64kB\n#default_statistics_target = 10\n\nHere is my complete configuration (This is my slon slave server, so fsync\nis off and archive is off, but on my primary fsync=on and archive=on).\n\nlisten_addresses = '*'\nmax_connections = 300\nshared_buffers = 2000MB\nmax_prepared_transactions = 0\nwork_mem = 6GB\nmaintenance_work_mem = 10GB\nfsync = off\ncheckpoint_segments = 100\ncheckpoint_timeout = 10min\ncheckpoint_warning = 3600s\nwal_level archive\narchive_mode = off\narchive_command = 'tar -czvpf /pg_archives/%f.tgz %p'\narchive_timeout = 10min\nrandom_page_cost = 2.0\neffective_cache_size = 7GB\nlog_destination = 'stderr'\nlogging_collector = on\nlog_directory = '/data/logs'\nlog_filename = 'pgsql-%m-%d.log'\nlog_truncate_on_rotation = on\nlog_rotation_age = 1d\nlog_min_messages = info\nlog_min_duration_statement = 15s\nlog_line_prefix = '%t %d %u %r %p %m'\nlog_lock_waits = on\nlog_timezone = 'US/Pacific'\nautovacuum_max_workers = 3\nautovacuum_vacuum_threshold = 1000\nautovacuum_analyze_threshold = 2000\ndatestyle = 'iso, mdy'\ntimezone = 'US/Pacific'\nlc_messages = 'en_US.UTF-8'\nlc_monetary = 'en_US.UTF-8'\nlc_numeric = 'en_US.UTF-8'\nlc_time = 'en_US.UTF-8'\ndeadlock_timeout = 5s\n\nAlso while it doesn't matter in 9.3 anymore apparently my sysctl.conf has\n\nkernel.shmmax = 68719476736\nkernel.shmall = 4294967296\n\nAnd PGTune recommended;\n\nkernel.shmmax=137438953472\nkernel.shmall=33554432\n\nAlso of note in my sysctl.conf config:\n\nvm.zone_reclaim_mode = 0\nvm.swappiness = 10\n\nThanks for the assistance, watching these index creations crawl along when\nyou know you have so many more compute cycles to provide makes one go\ncrazy.'\n\nTory\n\nGreetings all,I'm trying to wrap my head around updating my configuration files, which have been probably fairly static since before 8.4.I've 
got some beefy hardware but have some tables that are over 57GB raw and end up at 140GB size after indexes are applied. One index creation took 7 hours today. So it's time to dive in and see where i'm lacking and what I should be tweaking.I looked at pgtune again today and the numbers it's spitting out took me back, they are huge. From all historical conversations and attempts a few of these larger numbers netted reduced performance vs better performance (but that was on older versions of Postgres).So I come here today to seek out some type of affirmation that these numbers look good and I should look at putting them into my config, staged and or in one fell swoop.I will start at the same time migrating my config to the latest 9.3 template...Postgres Version: 9.3.4, Slony 2.1.3 (migrating to 2.2).CentOS 6.x, 2.6.32-431.5.1.el6.x86_64Big HP Boxen.32 core, 256GB of Ram DB is roughly 175GB in size but many tables are hundreds of millions of rows.The pgtune configurations that were spit out based on the information above;max_connections = 300shared_buffers = 64GBeffective_cache_size = 192GBwork_mem = 223696kBmaintenance_work_mem = 2GBcheckpoint_segments = 32checkpoint_completion_target = 0.7wal_buffers = 16MBdefault_statistics_target = 100my current configuration:max_connections = 300shared_buffers = 2000MB effective_cache_size = 7GBwork_mem = 6GB maintenance_work_mem = 10GB <-- bumped this to try to get my reindexes donecheckpoint_segments = 100 #wal_buffers = 64kB #default_statistics_target = 10 Here is my complete configuration (This is my slon slave server, so fsync is off and archive is off, but on my primary fsync=on and archive=on).listen_addresses = '*'max_connections = 300shared_buffers = 2000MBmax_prepared_transactions = 0work_mem = 6GBmaintenance_work_mem = 10GBfsync = offcheckpoint_segments = 100checkpoint_timeout = 10mincheckpoint_warning = 3600swal_level archivearchive_mode = offarchive_command = 'tar -czvpf /pg_archives/%f.tgz %p'archive_timeout = 10minrandom_page_cost = 2.0effective_cache_size = 7GBlog_destination = 'stderr'logging_collector = onlog_directory = '/data/logs'log_filename = 'pgsql-%m-%d.log'log_truncate_on_rotation = onlog_rotation_age = 1dlog_min_messages = infolog_min_duration_statement = 15slog_line_prefix = '%t %d %u %r %p %m'log_lock_waits = onlog_timezone = 'US/Pacific'autovacuum_max_workers = 3autovacuum_vacuum_threshold = 1000autovacuum_analyze_threshold = 2000datestyle = 'iso, mdy'timezone = 'US/Pacific'lc_messages = 'en_US.UTF-8'lc_monetary = 'en_US.UTF-8'lc_numeric = 'en_US.UTF-8'lc_time = 'en_US.UTF-8'deadlock_timeout = 5sAlso while it doesn't matter in 9.3 anymore apparently my sysctl.conf haskernel.shmmax = 68719476736kernel.shmall = 4294967296And PGTune recommended;kernel.shmmax=137438953472kernel.shmall=33554432Also of note in my sysctl.conf config:vm.zone_reclaim_mode = 0vm.swappiness = 10 Thanks for the assistance, watching these index creations crawl along when you know you have so many more compute cycles to provide makes one go crazy.'Tory",
"msg_date": "Wed, 29 Oct 2014 23:49:13 -0700",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "pgtune + configurations with 9.3"
},
{
"msg_contents": "Tory M Blue wrote:\r\n> I've got some beefy hardware but have some tables that are over 57GB raw and end up at 140GB size\r\n> after indexes are applied. One index creation took 7 hours today. So it's time to dive in and see\r\n> where i'm lacking and what I should be tweaking.\r\n> \r\n> I looked at pgtune again today and the numbers it's spitting out took me back, they are huge. From all\r\n> historical conversations and attempts a few of these larger numbers netted reduced performance vs\r\n> better performance (but that was on older versions of Postgres).\r\n> \r\n> So I come here today to seek out some type of affirmation that these numbers look good and I should\r\n> look at putting them into my config, staged and or in one fell swoop.\r\n> \r\n> I will start at the same time migrating my config to the latest 9.3 template...\r\n> \r\n> Postgres Version: 9.3.4, Slony 2.1.3 (migrating to 2.2).\r\n> CentOS 6.x, 2.6.32-431.5.1.el6.x86_64\r\n> Big HP Boxen.\r\n> \r\n> 32 core, 256GB of Ram DB is roughly 175GB in size but many tables are hundreds of millions of rows.\r\n> \r\n> The pgtune configurations that were spit out based on the information above;\r\n> \r\n> max_connections = 300\r\n\r\nThat's a lot, but equals what you currently have.\r\nIt is probably ok, but can have repercussions if used with large work_mem:\r\nEvery backend can allocate that much memory, maybe even several times for a complicated query.\r\n\r\n> shared_buffers = 64GB\r\n\r\nThat seems a bit on the large side.\r\nI would start with something like 4GB and run (realistic) performance tests, doubling the value each time.\r\nSee where you come out best.\r\nYou can use the pg_buffercache contrib to see how your shared buffers are used.\r\n\r\n> effective_cache_size = 192GB\r\n\r\nThat should be all the memory in the machine that is available to PostgreSQL,\r\nso on an exclusive database machine it could be even higher.\r\n\r\n> work_mem = 223696kB\r\n\r\nThat looks ok, but performance testing wouldn't harm.\r\nIdeally you log temporary file creation and have this parameter big enough so that\r\nnormal queries don't need temp files, but low enough so that the file system cache still has\r\nsome RAM left.\r\n\r\n> maintenance_work_mem = 2GB\r\n\r\nThat's particularly helpful for your problem, index creation.\r\n\r\n> checkpoint_segments = 32\r\n\r\nCheck.\r\nYou want checkpoints to be time triggered, so don't be afraid to go higher\r\nif you get warnings unless a very short restore time is of paramount importance.\r\n\r\n> checkpoint_completion_target = 0.7\r\n\r\nCheck.\r\n\r\n> wal_buffers = 16MB\r\n\r\nThat's fine too, although with 9.3 you might as well leave it default.\r\nWith that much RAM it will be autotuned to the maximum anyway.\r\n\r\n> default_statistics_target = 100\r\n\r\nThat's the default value.\r\nIncrease only if you get bad plans because of insufficient statistics.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 31 Oct 2014 09:43:56 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
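The pg_buffercache check Laurenz mentions can be as simple as the query below, adapted from the contrib module's documentation; it shows which relations currently occupy the most shared buffers:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT c.relname, count(*) AS buffers
    FROM pg_buffercache b
    JOIN pg_class c
      ON b.relfilenode = pg_relation_filenode(c.oid)
     AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                               WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;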
{
"msg_contents": "On 10/29/2014 11:49 PM, Tory M Blue wrote:\n> I looked at pgtune again today and the numbers it's spitting out took me\n> back, they are huge. From all historical conversations and attempts a few\n> of these larger numbers netted reduced performance vs better performance\n> (but that was on older versions of Postgres).\n\nYeah, pgTune is pretty badly out of date. It's been on my TODO list, as\nI'm sure it has been on Greg's.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Nov 2014 14:01:38 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "> Yeah, pgTune is pretty badly out of date. It's been on my TODO list, as\n> I'm sure it has been on Greg's.\n\nYeah. And unfortunately the recommendations it gives have been spreading. Take a look at the online version:\n\nhttp://pgtune.leopard.in.ua/\n\nI entered a pretty typical 92GB system, and it recommended 23GB of shared buffers. I tried to tell the author the performance guidelines have since changed, but it didn't help.\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Nov 2014 14:13:20 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": " Fri, 7 Nov 2014 14:13:20 +0000 от Shaun Thomas <[email protected]>:\n>> Yeah, pgTune is pretty badly out of date. It's been on my TODO list, as\n>> I'm sure it has been on Greg's.\n>\n>Yeah. And unfortunately the recommendations it gives have been spreading. Take a look at the online version:\n>\n>http://pgtune.leopard.in.ua/\n>\n>I entered a pretty typical 92GB system, and it recommended 23GB of shared buffers. I tried to tell the author the performance guidelines have since changed, but it didn't help.\n>\n>\n>______________________________________________\n>\n>See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n\nHello, author of http://pgtune.leopard.in.ua/ is here.\n\nI think everyone can do pull request to it. Old one take 25% for shared_buffers and 75% for effective_cache_size. I think I can even add selector with version of postgresql (9.0 - 9.4) and in this case change formulas for 9.4 (for example).\n\nBut I don't know what type of calculation should be in this case. Does we have in some place this information? Or someone can provide it? Because this generator should be valid for most users.\n\nThanks.\n---\nAlexey Vasiliev\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 17:23:56 +0300",
"msg_from": "=?UTF-8?B?QWxleGV5IFZhc2lsaWV2?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBwZ3R1bmUgKyBjb25maWd1cmF0aW9ucyB3aXRo?=\n =?UTF-8?B?IDkuMw==?="
},
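For reference, the old pgtune formula described above, next to the min(ram/4, 8GB) rule of thumb that comes up later in the thread, can be tabulated with a throwaway query (the RAM sizes are purely illustrative):

```sql
-- Old pgtune rule (25% / 75%) vs. the capped rule of thumb.
SELECT ram_gb,
       ram_gb / 4.0           AS old_shared_buffers_gb,    -- 25% of RAM
       least(ram_gb / 4.0, 8) AS capped_shared_buffers_gb, -- min(ram/4, 8GB)
       ram_gb * 3 / 4.0       AS effective_cache_size_gb   -- 75% of RAM
FROM (VALUES (8), (32), (92), (256)) AS t(ram_gb);
```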
{
"msg_contents": "Alexey,\n\nThe issue is that the 1/4 memory suggestion hasn't been a recommendation in quite a while. Now that much larger amounts of RAM are readily available, tests have been finding out that more than 8GB of RAM in shared_buffers has diminishing or even worse returns. This is true for any version. Further, since PostgreSQL manages its own memory, and the Linux Kernel also manages various caches, there's significant risk of storing the same memory both in shared_buffers, and in file cache.\n\nThere are other tweaks the tool probably needs, but I think this, more than anything else, needs to be updated. Until PG solves the issue of double-buffering (which is somewhat in progress since they're somewhat involved with the Linux kernel devs) you can actually give it too much memory.\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 16:28:16 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
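One rough way to check whether shared_buffers is oversized for the hot working set, sketched here on the assumption that the pg_buffercache module is installed: if most buffers sit at usage count 0 or 1, the cache is larger than the data it is protecting, and the double-buffering risk described above grows.

```sql
-- Distribution of buffer usage counts (NULL usagecount = unused buffer).
SELECT usagecount,
       count(*)                                 AS buffers,
       sum(CASE WHEN isdirty THEN 1 ELSE 0 END) AS dirty_buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;
```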
{
"msg_contents": "\n\n\nFri, 14 Nov 2014 16:28:16 +0000 от Shaun Thomas <[email protected]>:\n> Alexey,\n> \n> The issue is that the 1/4 memory suggestion hasn't been a recommendation in quite a while. Now that much larger amounts of RAM are readily available, tests have been finding out that more than 8GB of RAM in shared_buffers has diminishing or even worse returns. This is true for any version. Further, since PostgreSQL manages its own memory, and the Linux Kernel also manages various caches, there's significant risk of storing the same memory both in shared_buffers, and in file cache.\n> \n> There are other tweaks the tool probably needs, but I think this, more than anything else, needs to be updated. Until PG solves the issue of double-buffering (which is somewhat in progress since they're somewhat involved with the Linux kernel devs) you can actually give it too much memory.\n> \n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nSeveral months ago I asked question in this channel \"Why shared_buffers max is 8GB?\". Many persons said, what this is apocrypha, what 8GB is maximum value for shared_buffers. This is archive of this chat: http://www.postgresql.org/message-id/[email protected]\n\nWhat is why so hard to understand what to do with pgtune calculation.\n\n-- \nAlexey Vasiliev\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 19:40:19 +0300",
"msg_from": "=?UTF-8?B?QWxleGV5IFZhc2lsaWV2?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?W1BFUkZPUk1dIHBndHVuZSArIGNvbmZpZ3VyYXRpb25zIHdpdGggOS4z?="
},
{
"msg_contents": "Alexey,\n\nThe issue is not that 8GB is the maximum. You *can* set it higher. What I'm saying, and I'm not alone in this, is that setting it higher can actually decrease performance for various reasons. Setting it to 25% of memory on a system with 512GB of RAM for instance, would be tantamount to disaster. A checkpoint with a setting that high could overwhelm pretty much any disk controller and end up completely ruining DB performance. And that's just *one* of the drawbacks.\n\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 17:06:54 +0000",
"msg_from": "Shaun Thomas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
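The checkpoint pressure described above can be watched directly in the statistics collector; a sketch against pg_stat_bgwriter (column names as of 9.2, and 8192 assumes the default block size):

```sql
-- Many requested (rather than timed) checkpoints, together with a large
-- buffers_checkpoint relative to bgwriter/backend writes, is the
-- write-flood pattern a huge shared_buffers can produce.
SELECT checkpoints_timed,
       checkpoints_req,
       pg_size_pretty(buffers_checkpoint * 8192) AS written_by_checkpoints,
       pg_size_pretty(buffers_clean * 8192)      AS written_by_bgwriter,
       pg_size_pretty(buffers_backend * 8192)    AS written_by_backends,
       stats_reset
FROM pg_stat_bgwriter;
```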
{
"msg_contents": "\n\n\nFri, 14 Nov 2014 17:06:54 +0000 от Shaun Thomas <[email protected]>:\n> Alexey,\n> \n> The issue is not that 8GB is the maximum. You *can* set it higher. What I'm saying, and I'm not alone in this, is that setting it higher can actually decrease performance for various reasons. Setting it to 25% of memory on a system with 512GB of RAM for instance, would be tantamount to disaster. A checkpoint with a setting that high could overwhelm pretty much any disk controller and end up completely ruining DB performance. And that's just *one* of the drawbacks.\n> \n> \n> \n> ______________________________________________\n> \n> See http://www.peak6.com/email_disclaimer/ for terms and conditions related to this email\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n\nOk. Just need to know what think another developers about this - should pgtune care about this case? Because I am not sure, what users with 512GB will use pgtune.\n\n-- \nAlexey Vasiliev\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 22:10:24 +0300",
"msg_from": "=?UTF-8?B?QWxleGV5IFZhc2lsaWV2?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtQRVJGT1JNXSBwZ3R1bmUgKyBjb25maWd1cmF0aW9ucyB3aXRo?=\n =?UTF-8?B?IDkuMw==?="
},
{
"msg_contents": "On 15/11/14 06:06, Shaun Thomas wrote:\n> Alexey,\n>\n> The issue is not that 8GB is the maximum. You *can* set it higher. What I'm saying, and I'm not alone in this, is that setting it higher can actually decrease performance for various reasons. Setting it to 25% of memory on a system with 512GB of RAM for instance, would be tantamount to disaster. A checkpoint with a setting that high could overwhelm pretty much any disk controller and end up completely ruining DB performance. And that's just *one* of the drawbacks.\n>\n\nIt is probably time to revisit this 8GB limit with some benchmarking. We \ndon't really have a hard and fast rule that is known to be correct, and \nthat makes Alexey's job really difficult. Informally folk (including \nmyself at times) have suggested:\n\nmin(ram/4, 8GB)\n\nas the 'rule of thumb' for setting shared_buffers. However I was \nrecently benchmarking a machine with a lot of ram (1TB) and entirely SSD \nstorage [1], and that seemed quite happy with 50GB of shared buffers \n(better performance than with 8GB). Now shared_buffers was not the \nvariable we were concentrating on so I didn't get too carried away and \ntry much bigger than about 100GB - but this seems like a good thing to \ncome out with some numbers for i.e pgbench read write and read only tps \nvs shared_buffers 1 -> 100 GB in size.\n\nCheers\n\nMark\n\n[1] I may be in a position to benchmark the machines these replaced at \nsome not to distant time. These are the previous generation (0.5TB ram, \n32 cores and all SSD storage) but probably still good for this test.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Nov 2014 12:00:28 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "On 11/14/14, 5:00 PM, Mark Kirkwood wrote:\n>\n> as the 'rule of thumb' for setting shared_buffers. However I was recently benchmarking a machine with a lot of ram (1TB) and entirely SSD storage [1], and that seemed quite happy with 50GB of shared buffers (better performance than with 8GB). Now shared_buffers was not the variable we were concentrating on so I didn't get too carried away and try much bigger than about 100GB - but this seems like a good thing to come out with some numbers for i.e pgbench read write and read only tps vs shared_buffers 1 -> 100 GB in size.\n\nWhat PG version?\n\nOne of the huge issues with large shared_buffers is the immense overhead you end up with for running the clock sweep, and on most systems that overhead is born by every backend individually. You will only see that overhead if your database is larger than shared bufers, because you only pay it when you need to evict a buffer. I suspect you'd actually need a database at least 2x > shared_buffers for it to really start showing up.\n\n> [1] I may be in a position to benchmark the machines these replaced at some not to distant time. These are the previous generation (0.5TB ram, 32 cores and all SSD storage) but probably still good for this test.\n\nAwesome! If there's possibility of developers getting direct access, I suspect folks on -hackers would be interested. If not but you're willing to run tests for folks, they'd still be interested. :)\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 20:08:59 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
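A quick check of where a given system sits relative to the 2x threshold Jim suggests (just a sketch; run it in the database of interest):

```sql
-- If the database comfortably fits in shared_buffers, eviction (and the
-- clock-sweep overhead that goes with it) barely happens at all.
SELECT current_setting('shared_buffers')                    AS shared_buffers,
       pg_size_pretty(pg_database_size(current_database())) AS database_size;
```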
{
"msg_contents": "On 15/11/14 15:08, Jim Nasby wrote:\n> On 11/14/14, 5:00 PM, Mark Kirkwood wrote:\n>>\n>> as the 'rule of thumb' for setting shared_buffers. However I was\n>> recently benchmarking a machine with a lot of ram (1TB) and entirely\n>> SSD storage [1], and that seemed quite happy with 50GB of shared\n>> buffers (better performance than with 8GB). Now shared_buffers was not\n>> the variable we were concentrating on so I didn't get too carried away\n>> and try much bigger than about 100GB - but this seems like a good\n>> thing to come out with some numbers for i.e pgbench read write and\n>> read only tps vs shared_buffers 1 -> 100 GB in size.\n>\n> What PG version?\n>\n> One of the huge issues with large shared_buffers is the immense overhead\n> you end up with for running the clock sweep, and on most systems that\n> overhead is born by every backend individually. You will only see that\n> overhead if your database is larger than shared bufers, because you only\n> pay it when you need to evict a buffer. I suspect you'd actually need a\n> database at least 2x > shared_buffers for it to really start showing up.\n>\n\nThat was 9.4 beta1 and2.\n\nA variety of db sizes were tried, some just fitting inside \nshared_buffers and some a bit over 2x larger, and one variant where we \nsized the db to 600GB, and used 4,8 and 50GB shared_buffers (50 was the \nbest by a small margin...and certainly no worse).\n\nNow we were mainly looking at 60 core performance issues (see thread \"60 \ncore performance with 9.3\"), and possibly some detrimental effects of \nlarger shared_buffers may have been masked by this - but performance was \ncertainly not hurt with larger shared_buffers.\n\nregards\n\nMark\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Nov 2014 16:29:26 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "On 15 November 2014 02:10, Alexey Vasiliev <[email protected]> wrote:\n\n> Ok. Just need to know what think another developers about this - should pgtune care about this case? Because I am not sure, what users with 512GB will use pgtune.\n\npgtune should certainly care about working with large amounts of RAM.\nBest practice does not stop at 32GB of RAM, but instead becomes more\nand more important. I am not interested in edge cases or unusual\nconfigurations. I am interested in setting decent defaults to provide\na good starting point to administrators on all sizes of hardware.\n\nI use pgtune to configure automatically deployed cloud instances. My\ngoal is to prepare instances that have been tuned according to best\npractice for standard types of load. Administrators will ideally not\nneed to tweak anything themselves, but at a minimum have been\nprovided with a good starting point. pgtune does a great job of this,\napart from the insanely high shared_buffers. At the moment I run\npgtune, and then must reduce shared_buffers to 8GB if pgtune tried to\nselect a higher value. The values it is currently choosing on higher\nRAM boxes are not best practice and quite wrong.\n\nThe work_mem settings also seem to be very high, but so far have not\nposed a problem and may well be correct. I'm trusting pgtune here\nrather than my outdated guesses.\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 17 Nov 2014 11:52:45 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re[2]: [PERFORM] pgtune + configurations with 9.3"
},
{
"msg_contents": "On 15 November 2014 06:00, Mark Kirkwood <[email protected]> wrote:\n\n> It is probably time to revisit this 8GB limit with some benchmarking. We\n> don't really have a hard and fast rule that is known to be correct, and that\n> makes Alexey's job really difficult. Informally folk (including myself at\n> times) have suggested:\n>\n> min(ram/4, 8GB)\n>\n> as the 'rule of thumb' for setting shared_buffers. However I was recently\n\nIt would be nice to have more benchmarking and improve the rule of\nthumb. I do, however, believe this is orthogonal to fixing pgtune\nwhich I think should be using the current rule of thumb (which is\noverwhelmingly min(ram/4, 8GB) as you suggest).\n\n\n\n> benchmarking a machine with a lot of ram (1TB) and entirely SSD storage [1],\n> and that seemed quite happy with 50GB of shared buffers (better performance\n> than with 8GB). Now shared_buffers was not the variable we were\n> concentrating on so I didn't get too carried away and try much bigger than\n> about 100GB - but this seems like a good thing to come out with some numbers\n> for i.e pgbench read write and read only tps vs shared_buffers 1 -> 100 GB\n> in size.\n\nI've always thought the shared_buffers setting would need to factor in\nthings like CPU speed and memory access, since the rational for the\n8GB cap has always been the cost to scan the data structures. And the\nkernel would factor in too, since the PG specific algorithms are in\ncompetition with the generic OS algorithms. And size of the hot set,\nsince this gets pinned in shared_buffers. Urgh, so many variables.\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 17 Nov 2014 12:17:32 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "I have done some tests using pgbench-tools with different configurations on\nour new server with 768G RAM and it seems for our purpose 32G\nshared_buffers would give the best results.\n\nRegards\nJohann\n\nOn 17 November 2014 at 07:17, Stuart Bishop <[email protected]> wrote:\n\n> On 15 November 2014 06:00, Mark Kirkwood <[email protected]>\n> wrote:\n>\n> > It is probably time to revisit this 8GB limit with some benchmarking. We\n> > don't really have a hard and fast rule that is known to be correct, and\n> that\n> > makes Alexey's job really difficult. Informally folk (including myself at\n> > times) have suggested:\n> >\n> > min(ram/4, 8GB)\n> >\n> > as the 'rule of thumb' for setting shared_buffers. However I was recently\n>\n> It would be nice to have more benchmarking and improve the rule of\n> thumb. I do, however, believe this is orthogonal to fixing pgtune\n> which I think should be using the current rule of thumb (which is\n> overwhelmingly min(ram/4, 8GB) as you suggest).\n>\n>\n>\n> > benchmarking a machine with a lot of ram (1TB) and entirely SSD storage\n> [1],\n> > and that seemed quite happy with 50GB of shared buffers (better\n> performance\n> > than with 8GB). Now shared_buffers was not the variable we were\n> > concentrating on so I didn't get too carried away and try much bigger\n> than\n> > about 100GB - but this seems like a good thing to come out with some\n> numbers\n> > for i.e pgbench read write and read only tps vs shared_buffers 1 -> 100\n> GB\n> > in size.\n>\n> I've always thought the shared_buffers setting would need to factor in\n> things like CPU speed and memory access, since the rational for the\n> 8GB cap has always been the cost to scan the data structures. And the\n> kernel would factor in too, since the PG specific algorithms are in\n> competition with the generic OS algorithms. And size of the hot set,\n> since this gets pinned in shared_buffers. Urgh, so many variables.\n>\n> --\n> Stuart Bishop <[email protected]>\n> http://www.stuartbishop.net/\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\nI have done some tests using pgbench-tools with different configurations on our new server with 768G RAM and it seems for our purpose 32G shared_buffers would give the best results.RegardsJohannOn 17 November 2014 at 07:17, Stuart Bishop <[email protected]> wrote:On 15 November 2014 06:00, Mark Kirkwood <[email protected]> wrote:\n\n> It is probably time to revisit this 8GB limit with some benchmarking. We\n> don't really have a hard and fast rule that is known to be correct, and that\n> makes Alexey's job really difficult. Informally folk (including myself at\n> times) have suggested:\n>\n> min(ram/4, 8GB)\n>\n> as the 'rule of thumb' for setting shared_buffers. However I was recently\n\nIt would be nice to have more benchmarking and improve the rule of\nthumb. I do, however, believe this is orthogonal to fixing pgtune\nwhich I think should be using the current rule of thumb (which is\noverwhelmingly min(ram/4, 8GB) as you suggest).\n\n\n\n> benchmarking a machine with a lot of ram (1TB) and entirely SSD storage [1],\n> and that seemed quite happy with 50GB of shared buffers (better performance\n> than with 8GB). 
Now shared_buffers was not the variable we were\n> concentrating on so I didn't get too carried away and try much bigger than\n> about 100GB - but this seems like a good thing to come out with some numbers\n> for i.e pgbench read write and read only tps vs shared_buffers 1 -> 100 GB\n> in size.\n\nI've always thought the shared_buffers setting would need to factor in\nthings like CPU speed and memory access, since the rational for the\n8GB cap has always been the cost to scan the data structures. And the\nkernel would factor in too, since the PG specific algorithms are in\ncompetition with the generic OS algorithms. And size of the hot set,\nsince this gets pinned in shared_buffers. Urgh, so many variables.\n\n--\nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Because experiencing your loyal love is better than life itself, my lips will praise you. (Psalm 63:3)",
"msg_date": "Mon, 24 Nov 2014 09:01:56 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "Hello Greame,\n\nIt's probably helpful if everyone sharing this information can post their\n> measurement process / settings and the results as completely as possible,\n> for comparison and reference.\n>\n\nApologies. I have only changed one parameter in postgresql.conf for the\ntests and that was shared_buffers:\n\n shared_buffers = 32GB # min 128k\nshared_preload_libraries = 'auto_explain' # (change requires\nrestart)\nvacuum_cost_delay = 5 # 0-100 milliseconds\nwal_sync_method = open_sync # the default is the first option\n wal_buffers = -1 # min 32kB, -1 sets based\non shared_buffers\n checkpoint_completion_target = 0.9 # checkpoint target\nduration, 0.0 - 1.0\n checkpoint_warning = 30s # 0 disables\n default_statistics_target = 100 # range 1-10000\n log_line_prefix = '%t ' # special values:\n log_statement = 'all' # none, ddl, mod, all\n log_timezone = 'localtime'\n autovacuum_vacuum_scale_factor = 0.1 # fraction of table size\nbefore vacuum\n autovacuum_vacuum_cost_delay = 5ms # default vacuum cost delay\nfor\n datestyle = 'iso, dmy'\n timezone = 'localtime'\n lc_messages = 'en_ZA.UTF-8' # locale for system\nerror message\n lc_monetary = 'en_ZA.UTF-8' # locale for\nmonetary formatting\n lc_numeric = 'en_ZA.UTF-8' # locale for number\nformatting\n lc_time = 'en_ZA.UTF-8' # locale for time\nformatting\n default_text_search_config = 'pg_catalog.english'\n auto_explain.log_min_duration = '6s' # Gregory Smith page 180\n effective_cache_size = 512GB # pgtune wizard 2014-09-25\n work_mem = 4608MB # pgtune wizard 2014-09-25\n checkpoint_segments = 16 # pgtune wizard 2014-09-25\n max_connections = 80 # pgtune wizard 2014-09-25\n\nAnd pgbench-tools - the default configuration:\n\nBASEDIR=`pwd`\nPGBENCHBIN=`which pgbench`\nTESTDIR=\"tests\"\nSKIPINIT=0\nTABBED=0\nOSDATA=1\nTESTHOST=localhost\nTESTUSER=`whoami`\nTESTPORT=5432\nTESTDB=pgbench\nRESULTHOST=\"$TESTHOST\"\nRESULTUSER=\"$TESTUSER\"\nRESULTPORT=\"$TESTPORT\"\nRESULTDB=results\nMAX_WORKERS=\"\"\nSCRIPT=\"select.sql\"\nSCALES=\"1 10 100 1000\"\nSETCLIENTS=\"1 2 4 8 16 32\"\nSETTIMES=3\nRUNTIME=60\nTOTTRANS=\"\"\nSETRATES=\"\"\n\n\nThe server:\n\n# See Gregory Smith: High Performans Postgresql 9.0 pages 81,82 for the\nnext lines\nvm.swappiness=0\nvm.overcommit_memory=2\nvm.dirty_ratio = 2\nvm.dirty_background_ratio=1\n# Maximum shared segment size in bytes\nkernel.shmmax = 406622322688\n# Maximum number of shared memory segments in pages\nkernel.shmall = 99273028\n\n$ free\n total used free shared buffers cached\nMem: 794184164 792406416 1777748 0 123676 788079892\n-/+ buffers/cache: 4202848 789981316\nSwap: 7906300 0 7906300\n\nI have attached the resulting graphs.\n\nRegards\nJohann\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 26 Nov 2014 15:34:05 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
},
{
"msg_contents": "Another apology:\n\nMy pg_version is 9.3\nand here are more up to date png's.\n\nOn 26 November 2014 at 15:34, Johann Spies <[email protected]> wrote:\n\n> Hello Greame,\n>\n> It's probably helpful if everyone sharing this information can post their\n>> measurement process / settings and the results as completely as possible,\n>> for comparison and reference.\n>>\n>\n> Apologies. I have only changed one parameter in postgresql.conf for the\n> tests and that was shared_buffers:\n>\n> shared_buffers = 32GB # min 128k\n> shared_preload_libraries = 'auto_explain' # (change requires\n> restart)\n> vacuum_cost_delay = 5 # 0-100 milliseconds\n> wal_sync_method = open_sync # the default is the first option\n> wal_buffers = -1 # min 32kB, -1 sets based\n> on shared_buffers\n> checkpoint_completion_target = 0.9 # checkpoint target\n> duration, 0.0 - 1.0\n> checkpoint_warning = 30s # 0 disables\n> default_statistics_target = 100 # range 1-10000\n> log_line_prefix = '%t ' # special values:\n> log_statement = 'all' # none, ddl, mod, all\n> log_timezone = 'localtime'\n> autovacuum_vacuum_scale_factor = 0.1 # fraction of table size\n> before vacuum\n> autovacuum_vacuum_cost_delay = 5ms # default vacuum cost\n> delay for\n> datestyle = 'iso, dmy'\n> timezone = 'localtime'\n> lc_messages = 'en_ZA.UTF-8' # locale for\n> system error message\n> lc_monetary = 'en_ZA.UTF-8' # locale for\n> monetary formatting\n> lc_numeric = 'en_ZA.UTF-8' # locale for\n> number formatting\n> lc_time = 'en_ZA.UTF-8' # locale for time\n> formatting\n> default_text_search_config = 'pg_catalog.english'\n> auto_explain.log_min_duration = '6s' # Gregory Smith page 180\n> effective_cache_size = 512GB # pgtune wizard 2014-09-25\n> work_mem = 4608MB # pgtune wizard 2014-09-25\n> checkpoint_segments = 16 # pgtune wizard 2014-09-25\n> max_connections = 80 # pgtune wizard 2014-09-25\n>\n> And pgbench-tools - the default configuration:\n>\n> BASEDIR=`pwd`\n> PGBENCHBIN=`which pgbench`\n> TESTDIR=\"tests\"\n> SKIPINIT=0\n> TABBED=0\n> OSDATA=1\n> TESTHOST=localhost\n> TESTUSER=`whoami`\n> TESTPORT=5432\n> TESTDB=pgbench\n> RESULTHOST=\"$TESTHOST\"\n> RESULTUSER=\"$TESTUSER\"\n> RESULTPORT=\"$TESTPORT\"\n> RESULTDB=results\n> MAX_WORKERS=\"\"\n> SCRIPT=\"select.sql\"\n> SCALES=\"1 10 100 1000\"\n> SETCLIENTS=\"1 2 4 8 16 32\"\n> SETTIMES=3\n> RUNTIME=60\n> TOTTRANS=\"\"\n> SETRATES=\"\"\n>\n>\n> The server:\n>\n> # See Gregory Smith: High Performans Postgresql 9.0 pages 81,82 for the\n> next lines\n> vm.swappiness=0\n> vm.overcommit_memory=2\n> vm.dirty_ratio = 2\n> vm.dirty_background_ratio=1\n> # Maximum shared segment size in bytes\n> kernel.shmmax = 406622322688\n> # Maximum number of shared memory segments in pages\n> kernel.shmall = 99273028\n>\n> $ free\n> total used free shared buffers cached\n> Mem: 794184164 792406416 1777748 0 123676 788079892\n> -/+ buffers/cache: 4202848 789981316\n> Swap: 7906300 0 7906300\n>\n> I have attached the resulting graphs.\n>\n> Regards\n> Johann\n>\n> --\n> Because experiencing your loyal love is better than life itself,\n> my lips will praise you. (Psalm 63:3)\n>\n\n\n\n-- \nBecause experiencing your loyal love is better than life itself,\nmy lips will praise you. (Psalm 63:3)\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 26 Nov 2014 15:39:42 +0200",
"msg_from": "Johann Spies <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pgtune + configurations with 9.3"
}
] |
[
{
"msg_contents": "I have two 9.3.4 PG instances that back a large internet website that has very seasonal traffic and can generate large query loads. My instances are in a master-slave streaming replication setup and are stable and in general perform very well. The only issues we have with the boxes is that when the master is busy the slave may start to lag excessively. I can give specifics as to what heavily loaded means and additionally the postgresql.conf for both boxes but my basic questions are:\n * What causes streaming replication lag to increase? \n * What parameters can be tuned to reduce streaming replication lag?\n * Can a loaded slave affect lag adversely?\n * Can increasing max_wal_senders help reduce lag?\n\nThe reason I ask this is that as mentioned above the servers are stable and are real troopers in general as they back a very popular web site that puts the master under heavy seasonal load at times. At those times though we see an almost exponential growth in streaming replication lag compared to load on the master. \n\nFor example, the master is a very beefy Solaris:\n * 4 Recent Intel Zeons (16 physical cores)\n * 256 GB of ECC RAM\n * 12 TB of ZFS (spindle and SSD internal storage)\n * DB on disk size is 2TB\n * ZFS ARC cache of roughly 250G.\n * ZFS ARC2/ZIL configured for SSD’s (this is awesome by the way)\n\nBasic PG Config:\n shared_buffers = 2GB\n work_mem = 128MB\n max_connections = 1700 (supports roughly 100 web servers)\n wal_keep_segments = 256 (roughly enough for 24 hours of operation under heavy load)\n wal_sender_timeout = 60s\n replication_timeout=(not set)\n wal_receiver_status_interval=10s\n max_wal_senders=6\n * wal archiving is off\n * 98% of the queries on the master complete in under 500ms.\n * No hung or very long running queries in general.\n\nThe master on a normal day maintains a load of about 0.5, during which replication lag to the slave is in hundreds milliseconds. When the production db server is heavily hit though the load may go as high as 4 on the master and the streaming replication lag may increase to more than 2 hours relatively quickly. Load on the slave is generally below 1 even when the master is heavily loaded. The traffic to the master is primarily read with about 10% DML (new users, purchase records, etc). DML statements increase proportionally when under load though. The master and slave are connected via dedicated 10G fiber link and even under heavy load the utilization of the link is nowhere near close to saturation. BTW, the slave does run some reported related queries throughout the day that might take up to a minute to complete.\n\nI have the task of figuring out why this otherwise healthy DB starts to lag so badly under load and if there is anything that we could do about it. I’ve been wondering particularly if we should up the max_wal_senders but from the docs it is unclear if that would help. In my testing with pg_bench on our dev boxes which were the previous production hardware for these servers I have determined that it doesn’t take much DML load on the master to get the slave to start lagging severely. I was wondering if this was expected and/or some design consideration? Possibly streaming replication isn’t meant to be used for heavily hit databases and maintain small lag times? I would like to believe that the fault is something we have done though and that there is some parameter we could tune to reduce this lag.\n\nAny recommendations would be very helpful. 
\n \nMike Wilson\nPredicate Logic Consulting",
"msg_date": "Sat, 1 Nov 2014 15:33:04 -0700",
"msg_from": "Mike Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Replication Lag Causes"
},
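A starting point for quantifying the lag itself, sketched with the 9.3-era function names (the pg_xlog_* functions were renamed to pg_wal_* in PostgreSQL 10):

```sql
-- On the master: per-standby lag, in bytes of WAL.
SELECT application_name,
       state,
       pg_xlog_location_diff(pg_current_xlog_location(), sent_location)   AS send_lag_bytes,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS replay_lag_bytes
FROM pg_stat_replication;

-- On the standby: wall-clock time since the last replayed commit.
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;
```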
{
"msg_contents": "Hello Mike,\n\nwhat kind of load does the slave get?\n\nwhat does the recovery process do on the slave during the times when lag is\nbeing observed? Does it use 100% of the CPU?\n\nWAL can be replayed by only one process, so no need to increase the\nmax_wal_senders.\n\nCheers,\n\n-- Valentine Gogichashvili\n\nOn Sun, Nov 2, 2014 at 1:33 AM, Mike Wilson <[email protected]> wrote:\n\n> I have two 9.3.4 PG instances that back a large internet website that has\n> very seasonal traffic and can generate large query loads. My instances are\n> in a master-slave streaming replication setup and are stable and in\n> general perform very well. The only issues we have with the boxes is that\n> when the master is busy the slave may start to lag excessively. I can give\n> specifics as to what heavily loaded means and additionally the\n> postgresql.conf for both boxes but my basic questions are:\n> * What causes streaming replication lag to increase?\n> * What parameters can be tuned to reduce streaming replication lag?\n> * Can a loaded slave affect lag adversely?\n> * Can increasing max_wal_senders help reduce lag?\n>\n> The reason I ask this is that as mentioned above the servers are stable\n> and are real troopers in general as they back a very popular web site that\n> puts the master under heavy seasonal load at times. At those times though\n> we see an almost exponential growth in streaming replication lag compared\n> to load on the master.\n>\n> For example, the master is a very beefy Solaris:\n> * 4 Recent Intel Zeons (16 physical cores)\n> * 256 GB of ECC RAM\n> * 12 TB of ZFS (spindle and SSD internal storage)\n> * DB on disk size is 2TB\n> * ZFS ARC cache of roughly 250G.\n> * ZFS ARC2/ZIL configured for SSD’s (this is awesome by the way)\n>\n> Basic PG Config:\n> shared_buffers = 2GB\n> work_mem = 128MB\n> max_connections = 1700 (supports roughly 100 web servers)\n> wal_keep_segments = 256 (roughly enough for 24 hours of operation under\n> heavy load)\n> wal_sender_timeout = 60s\n> replication_timeout=(not set)\n> wal_receiver_status_interval=10s\n> max_wal_senders=6\n> * wal archiving is off\n> * 98% of the queries on the master complete in under 500ms.\n> * No hung or very long running queries in general.\n>\n> The master on a normal day maintains a load of about 0.5, during which\n> replication lag to the slave is in hundreds milliseconds. When the\n> production db server is heavily hit though the load may go as high as 4 on\n> the master and the streaming replication lag may increase to more than 2\n> hours relatively quickly. Load on the slave is generally below 1 even when\n> the master is heavily loaded. The traffic to the master is primarily read\n> with about 10% DML (new users, purchase records, etc). DML statements\n> increase proportionally when under load though. The master and slave are\n> connected via dedicated 10G fiber link and even under heavy load the\n> utilization of the link is nowhere near close to saturation. BTW, the\n> slave does run some reported related queries throughout the day that might\n> take up to a minute to complete.\n>\n> I have the task of figuring out why this otherwise healthy DB starts to\n> lag so badly under load and if there is anything that we could do about\n> it. I’ve been wondering particularly if we should up the max_wal_senders\n> but from the docs it is unclear if that would help. 
In my testing with\n> pgbench on our dev boxes which were the previous production hardware for\n> these servers I have determined that it doesn’t take much DML load on the\n> master to get the slave to start lagging severely. I was wondering if this\n> was expected and/or some design consideration? Possibly streaming\n> replication isn’t meant to be used for heavily hit databases while maintaining\n> small lag times? I would like to believe that the fault is something we\n> have done though and that there is some parameter we could tune to reduce\n> this lag.\n>\n> Any recommendations would be very helpful.\n>\n> Mike Wilson\n> Predicate Logic Consulting\n>\n>\n>\n>\n",
"msg_date": "Sun, 2 Nov 2014 03:14:24 +0400",
"msg_from": "Valentine Gogichashvili <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication Lag Causes"
},
{
"msg_contents": "Load on the slave is relatively light. It averages about 1.0 due to some data ware house select queries running against it frequently. Previously only the load on the master seems to have affected our replication lag no matter what the slave was doing. \n\nIn thinking about this a bit more, the load on the master does cause increasing lag but only if the query mix begins to change to more DML than SELECTS. Basically, the amount of DML is what really appears to cause the replication to lag. This is an OLTP system backing a rather heavy commercial website where memberships are sold and when the purchase traffic increases that is when we start to see extreme lag develop on the slave.\n\nCPU utilization on the slave during extreme lag is similar to normal operation even if the slave is lagging more than usual.\n\nThanks for the info on max_wal_senders. That’s good to know.\n\nMike Wilson\n\n\n\n\n> On Nov 1, 2014, at 4:14 PM, Valentine Gogichashvili <[email protected]> wrote:\n> \n> Hello Mike, \n> \n> what kind of load does the slave get?\n> \n> what does the recovery process do on the slave during the times when lag is being observed? Does it use 100% of the CPU?\n> \n> WAL can be replayed by only one process, so no need to increase the max_wal_senders.\n> \n> Cheers,\n> \n> -- Valentine Gogichashvili\n> \n> On Sun, Nov 2, 2014 at 1:33 AM, Mike Wilson <[email protected] <mailto:[email protected]>> wrote:\n> I have two 9.3.4 PG instances that back a large internet website that has very seasonal traffic and can generate large query loads. My instances are in a master-slave streaming replication setup and are stable and in general perform very well. The only issues we have with the boxes is that when the master is busy the slave may start to lag excessively. I can give specifics as to what heavily loaded means and additionally the postgresql.conf for both boxes but my basic questions are:\n> * What causes streaming replication lag to increase? \n> * What parameters can be tuned to reduce streaming replication lag?\n> * Can a loaded slave affect lag adversely?\n> * Can increasing max_wal_senders help reduce lag?\n> \n> The reason I ask this is that as mentioned above the servers are stable and are real troopers in general as they back a very popular web site that puts the master under heavy seasonal load at times. At those times though we see an almost exponential growth in streaming replication lag compared to load on the master. \n> \n> For example, the master is a very beefy Solaris:\n> * 4 Recent Intel Zeons (16 physical cores)\n> * 256 GB of ECC RAM\n> * 12 TB of ZFS (spindle and SSD internal storage)\n> * DB on disk size is 2TB\n> * ZFS ARC cache of roughly 250G.\n> * ZFS ARC2/ZIL configured for SSD’s (this is awesome by the way)\n> \n> Basic PG Config:\n> shared_buffers = 2GB\n> work_mem = 128MB\n> max_connections = 1700 (supports roughly 100 web servers)\n> wal_keep_segments = 256 (roughly enough for 24 hours of operation under heavy load)\n> wal_sender_timeout = 60s\n> replication_timeout=(not set)\n> wal_receiver_status_interval=10s\n> max_wal_senders=6\n> * wal archiving is off\n> * 98% of the queries on the master complete in under 500ms.\n> * No hung or very long running queries in general.\n> \n> The master on a normal day maintains a load of about 0.5, during which replication lag to the slave is in hundreds milliseconds. 
When the production db server is heavily hit though the load may go as high as 4 on the master and the streaming replication lag may increase to more than 2\n> hours relatively quickly. Load on the slave is generally below 1 even when the master is heavily loaded. The traffic to the master is primarily reads with about 10% DML (new users, purchase records, etc). DML statements\n> increase proportionally when under load though. The master and slave are\n> connected via a dedicated 10G fiber link and even under heavy load the\n> utilization of the link is nowhere near saturation. BTW, the slave does run some report-related queries throughout the day that might take up to a minute to complete.\n> \n> I have the task of figuring out why this otherwise healthy DB starts to lag so badly under load and whether there is anything that we could do about it. I’ve been wondering particularly if we should up max_wal_senders, but from the docs it is unclear if that would help. In my testing with\n> pgbench on our dev boxes which were the previous production hardware for\n> these servers I have determined that it doesn’t take much DML load on the\n> master to get the slave to start lagging severely. I was wondering if this\n> was expected and/or some design consideration? Possibly streaming\n> replication isn’t meant to be used for heavily hit databases while maintaining\n> small lag times? I would like to believe that the fault is something we\n> have done though and that there is some parameter we could tune to reduce\n> this lag.\n> \n> Any recommendations would be very helpful. \n> \n> Mike Wilson\n> Predicate Logic Consulting\n> \n> \n> \n> \n",
"msg_date": "Sun, 2 Nov 2014 12:58:31 -0800",
"msg_from": "Mike Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replication Lag Causes"
},
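One diagnostic worth capturing during a lag spike, sketched for 9.3: if the standby receives WAL as fast as the master sends it but replay falls behind, the bottleneck is the single-threaded recovery process (or a conflict with the standby's reporting queries) rather than the network.

```sql
-- On the standby: how much received-but-not-yet-replayed WAL is queued.
SELECT pg_last_xlog_receive_location() AS received,
       pg_last_xlog_replay_location()  AS replayed,
       pg_xlog_location_diff(pg_last_xlog_receive_location(),
                             pg_last_xlog_replay_location()) AS replay_backlog_bytes;

-- On the standby: recovery conflicts caused by local queries. Long reporting
-- queries can also stall replay for up to max_standby_streaming_delay.
SELECT datname, confl_lock, confl_snapshot, confl_bufferpin, confl_deadlock
FROM pg_stat_database_conflicts;
```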
{
"msg_contents": "Thanks for the information Greg.\n\nUnfortunately modifying the application stack this close to the holiday season won’t be an option so I’m left with:\n 1) Trying to optimize the settings I have for the query mix I have.\n 2) Optimize any long running DML queries (if any) to prevent lag due to locks.\n 3) Getting a better understanding of “what” causes lag.\n\n#3 will probably be central to at least minimizing lag during heavy DML load. If anyone has a good resource to describe when a slave would start to lag potentially that would help me hunt for the cause. I know long running DML on the master may cause lag but I’m uncertain as to the specifics of why. During periods of lag we do have more DML than usual running against the master but the queries themselves are very quick although there might be 20-30 DML operations per second against some of our central tables that store user account information. Even under heavy DML the queries still return in under a second. Possibly a large volume of of short running DML cause replication lag issues for large tables (~20M)?\n\nThanks again for your help. BDR looks interesting but probably too cutting edge for my client.\n\nMike Wilson\n\n\n\n\n> On Nov 2, 2014, at 12:33 PM, Greg Spiegelberg <[email protected]> wrote:\n> \n> Hi Mike,\n> \n> Sounds very familiar. Our master fans out to 16 slaves (cascading) and we had great success with segregating database queries to different slaves and some based on network latency. I'd suggest, if possible, alter the application to use the slave for simple SELECT's and FUNCTION's performing SELECT-like only work while limiting those applications and queries that perform DML to the master (obviously). If the load on the slave increases too much, spin up another slave. I'd mention from experience that it could be the load on the slave that is giving the appearance of replication lag. This is what led us to having (1) slave per application.\n> \n> There is also the BDR multi-master available in 9.4beta if you're wanting to live on the edge.\n> \n> -Greg\n> \n> On Sat, Nov 1, 2014 at 4:33 PM, Mike Wilson <[email protected] <mailto:[email protected]>> wrote:\n> I have two 9.3.4 PG instances that back a large internet website that has very seasonal traffic and can generate large query loads. My instances are in a master-slave streaming replication setup and are stable and in general perform very well. The only issues we have with the boxes is that when the master is busy the slave may start to lag excessively. I can give specifics as to what heavily loaded means and additionally the postgresql.conf for both boxes but my basic questions are:\n> * What causes streaming replication lag to increase? \n> * What parameters can be tuned to reduce streaming replication lag?\n> * Can a loaded slave affect lag adversely?\n> * Can increasing max_wal_senders help reduce lag?\n> \n> The reason I ask this is that as mentioned above the servers are stable and are real troopers in general as they back a very popular web site that puts the master under heavy seasonal load at times. At those times though we see an almost exponential growth in streaming replication lag compared to load on the master. 
\n> \n> For example, the master is a very beefy Solaris:\n> * 4 Recent Intel Zeons (16 physical cores)\n> * 256 GB of ECC RAM\n> * 12 TB of ZFS (spindle and SSD internal storage)\n> * DB on disk size is 2TB\n> * ZFS ARC cache of roughly 250G.\n> * ZFS ARC2/ZIL configured for SSD’s (this is awesome by the way)\n> \n> Basic PG Config:\n> shared_buffers = 2GB\n> work_mem = 128MB\n> max_connections = 1700 (supports roughly 100 web servers)\n> wal_keep_segments = 256 (roughly enough for 24 hours of operation under heavy load)\n> wal_sender_timeout = 60s\n> replication_timeout=(not set)\n> wal_receiver_status_interval=10s\n> max_wal_senders=6\n> * wal archiving is off\n> * 98% of the queries on the master complete in under 500ms.\n> * No hung or very long running queries in general.\n> \n> The master on a normal day maintains a load of about 0.5, during which replication lag to the slave is in hundreds milliseconds. When the production db server is heavily hit though the load may go as high as 4 on the master and the streaming replication lag may increase to more than 2 hours relatively quickly. Load on the slave is generally below 1 even when the master is heavily loaded. The traffic to the master is primarily read with about 10% DML (new users, purchase records, etc). DML statements increase proportionally when under load though. The master and slave are connected via dedicated 10G fiber link and even under heavy load the utilization of the link is nowhere near close to saturation. BTW, the slave does run some reported related queries throughout the day that might take up to a minute to complete.\n> \n> I have the task of figuring out why this otherwise healthy DB starts to lag so badly under load and if there is anything that we could do about it. I’ve been wondering particularly if we should up the max_wal_senders but from the docs it is unclear if that would help. In my testing with pg_bench on our dev boxes which were the previous production hardware for these servers I have determined that it doesn’t take much DML load on the master to get the slave to start lagging severely. I was wondering if this was expected and/or some design consideration? Possibly streaming replication isn’t meant to be used for heavily hit databases and maintain small lag times? I would like to believe that the fault is something we have done though and that there is some parameter we could tune to reduce this lag.\n> \n> Any recommendations would be very helpful. \n> \n> Mike Wilson\n> Predicate Logic Consulting\n> \n> \n> \n> \n",
"msg_date": "Sun, 2 Nov 2014 13:16:24 -0800",
"msg_from": "Mike Wilson <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Replication Lag Causes"
},
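One way to tell "the master generates WAL faster than the sender ships it"
apart from "the standby cannot apply it fast enough" is to sample the
master's WAL position around a heavy DML window; a sketch, where the two
positions in the final call are placeholders for the sampled values:

-- Sample before and after the window of interest
SELECT now() AS sampled_at, pg_current_xlog_location() AS wal_position;

-- Diff the two sampled positions to get bytes of WAL generated
SELECT pg_xlog_location_diff('1A/2B000000', '1A/28000000') AS wal_bytes;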
{
"msg_contents": "2014-11-02 19:16 GMT-02:00 Mike Wilson <[email protected]>:\n\n> Thanks for the information Greg.\n>\n> Unfortunately modifying the application stack this close to the holiday\n> season won’t be an option so I’m left with:\n> 1) Trying to optimize the settings I have for the query mix I have.\n> 2) Optimize any long running DML queries (if any) to prevent lag due to\n> locks.\n> 3) Getting a better understanding of “what” causes lag.\n>\n> #3 will probably be central to at least minimizing lag during heavy DML\n> load. If anyone has a good resource to describe when a slave would start\n> to lag potentially that would help me hunt for the cause. I know long\n> running DML on the master may cause lag but I’m uncertain as to the\n> specifics of why. During periods of lag we do have more DML than usual\n> running against the master but the queries themselves are very quick\n> although there might be 20-30 DML operations per second against some of our\n> central tables that store user account information. Even under heavy DML\n> the queries still return in under a second. Possibly a large volume of of\n> short running DML cause replication lag issues for large tables (~20M)?\n>\n> Thanks again for your help. BDR looks interesting but probably too\n> cutting edge for my client.\n>\n> Mike Wilson\n>\n>\n>\n>\n> On Nov 2, 2014, at 12:33 PM, Greg Spiegelberg <[email protected]>\n> wrote:\n>\n> Hi Mike,\n>\n> Sounds very familiar. Our master fans out to 16 slaves (cascading) and we\n> had great success with segregating database queries to different slaves and\n> some based on network latency. I'd suggest, if possible, alter the\n> application to use the slave for simple SELECT's and FUNCTION's performing\n> SELECT-like only work while limiting those applications and queries that\n> perform DML to the master (obviously). If the load on the slave increases\n> too much, spin up another slave. I'd mention from experience that it could\n> be the load on the slave that is giving the appearance of replication lag.\n> This is what led us to having (1) slave per application.\n>\n> There is also the BDR multi-master available in 9.4beta if you're wanting\n> to live on the edge.\n>\n> -Greg\n>\n> On Sat, Nov 1, 2014 at 4:33 PM, Mike Wilson <[email protected]> wrote:\n>\n>> I have two 9.3.4 PG instances that back a large internet website that has\n>> very seasonal traffic and can generate large query loads. My instances are\n>> in a master-slave streaming replication setup and are stable and in\n>> general perform very well. The only issues we have with the boxes is that\n>> when the master is busy the slave may start to lag excessively. I can give\n>> specifics as to what heavily loaded means and additionally the\n>> postgresql.conf for both boxes but my basic questions are:\n>> * What causes streaming replication lag to increase?\n>> * What parameters can be tuned to reduce streaming replication lag?\n>> * Can a loaded slave affect lag adversely?\n>> * Can increasing max_wal_senders help reduce lag?\n>>\n>> The reason I ask this is that as mentioned above the servers are stable\n>> and are real troopers in general as they back a very popular web site that\n>> puts the master under heavy seasonal load at times. 
At those times though\n>> we see an almost exponential growth in streaming replication lag compared\n>> to load on the master.\n>>\n>> For example, the master is a very beefy Solaris:\n>> * 4 Recent Intel Zeons (16 physical cores)\n>> * 256 GB of ECC RAM\n>> * 12 TB of ZFS (spindle and SSD internal storage)\n>> * DB on disk size is 2TB\n>> * ZFS ARC cache of roughly 250G.\n>> * ZFS ARC2/ZIL configured for SSD’s (this is awesome by the way)\n>>\n>> Basic PG Config:\n>> shared_buffers = 2GB\n>> work_mem = 128MB\n>> max_connections = 1700 (supports roughly 100 web servers)\n>> wal_keep_segments = 256 (roughly enough for 24 hours of operation\n>> under heavy load)\n>> wal_sender_timeout = 60s\n>> replication_timeout=(not set)\n>> wal_receiver_status_interval=10s\n>> max_wal_senders=6\n>> * wal archiving is off\n>> * 98% of the queries on the master complete in under 500ms.\n>> * No hung or very long running queries in general.\n>>\n>> The master on a normal day maintains a load of about 0.5, during which\n>> replication lag to the slave is in hundreds milliseconds. When the\n>> production db server is heavily hit though the load may go as high as 4 on\n>> the master and the streaming replication lag may increase to more than 2\n>> hours relatively quickly. Load on the slave is generally below 1 even when\n>> the master is heavily loaded. The traffic to the master is primarily read\n>> with about 10% DML (new users, purchase records, etc). DML statements\n>> increase proportionally when under load though. The master and slave are\n>> connected via dedicated 10G fiber link and even under heavy load the\n>> utilization of the link is nowhere near close to saturation. BTW, the\n>> slave does run some reported related queries throughout the day that might\n>> take up to a minute to complete.\n>>\n>> I have the task of figuring out why this otherwise healthy DB starts to\n>> lag so badly under load and if there is anything that we could do about\n>> it. I’ve been wondering particularly if we should up the max_wal_senders\n>> but from the docs it is unclear if that would help. In my testing with\n>> pg_bench on our dev boxes which were the previous production hardware for\n>> these servers I have determined that it doesn’t take much DML load on the\n>> master to get the slave to start lagging severely. I was wondering if this\n>> was expected and/or some design consideration? Possibly streaming\n>> replication isn’t meant to be used for heavily hit databases and maintain\n>> small lag times? I would like to believe that the fault is something we\n>> have done though and that there is some parameter we could tune to reduce\n>> this lag.\n>>\n>> Any recommendations would be very helpful.\n>>\n>> Mike Wilson\n>> Predicate Logic Consulting\n>>\n>>\n>>\n>>\n>\nHi Mike,\n\n\"Basically, the amount of DML is what really appears to cause the\nreplication to lag.\"\n\nYou said that you are using streaming replication, and streaming means\nreading the logfiles and applying changes on the slave. 
Since DML is also\nlogged in the logfiles, maybe you are locking yourself by trying to read\nand write to the same file(s).\n\nYou can try changing your replication type from streaming to Archive-WAL;\nthat way you will send archived WALs from the master to the slave, while\nleaving the \"hot\" WAL available for your DML operations only.\n\nI've changed my replication type in the past and had no significant\nincrease in lag.\n\nCheers",
"msg_date": "Mon, 3 Nov 2014 12:34:44 -0200",
"msg_from": "Felipe Santos <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication Lag Causes"
},
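A sketch of the archive-based shipping Felipe describes, with a placeholder
archive directory; archive_mode requires a restart to change, and
restore_command belongs in the standby's recovery.conf:

# master postgresql.conf (/archive is a placeholder path)
archive_mode = on
archive_command = 'cp %p /archive/%f'

# standby recovery.conf
standby_mode = 'on'
restore_command = 'cp /archive/%f %p'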
{
"msg_contents": "On 2 November 2014 05:33, Mike Wilson <[email protected]> wrote:\n\n> Any recommendations would be very helpful.\n\nTry using ionice and renice to increase the priority of the WAL sender\nprocess on the master. If it helps, you are lagging because not enough\nresources are being used by the sender process (rather than the slave\nhaving trouble, for example). Lowering the number of concurrent\nconnections in your pgbouncer connection pool could help here.\n\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 4 Nov 2014 19:34:24 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Replication Lag Causes"
}
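Applying Stuart's suggestion first requires the WAL sender's PID, which is
visible from SQL; the renice/ionice invocations themselves happen at the OS
level and are not shown here:

-- One row per connected standby; feed pid to renice/ionice on the master
SELECT pid, application_name, client_addr, state
FROM pg_stat_replication;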
] |
[
{
"msg_contents": "Hello,\n\nI am having some hard time understanding how postgresql handles null \nvalues. As much I understand null values are stored in b-tree as simple \nvalues (put as last or first depending on index). But it seems that \nthere is something really specific about them as postgresql deliberately \nignores obvious (I think...) optimizations concerning index order after \nusing one of them in a query. As a simple example look at table below:\n\narturas=# drop table if exists test;\nDROP TABLE\narturas=# create table test (\narturas(# a int not null,\narturas(# b int,\narturas(# c int not null\narturas(# );\nCREATE TABLE\n\nAfter filling this table with random data (actual distribution of \nnull's/real values seams not to matter):\n\narturas=# insert into test (a, b, c)\narturas-# select\narturas-# case when random() < 0.5 then 1 else 2 end\narturas-# , case when random() < 0.5 then null else 1 end\narturas-# , case when random() < 0.5 then 1 else 2 end\narturas-# from generate_series(1, 1000000, 1) as gen;\nINSERT 0 1000000\n\nAnd creating index:\n\narturas=# create index test_idx on test (a, b nulls first, c);\nCREATE INDEX\n\nWe get fast queries with `order by` on c:\n\narturas=# explain analyze verbose select * from test where a = 1 and b = 1 order by c limit 1;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..0.53 rows=1 width=12) (actual time=0.052..0.052 rows=1 loops=1)\n Output: a, b, c\n -> Index Only Scan using test_idx on public.test (cost=0.42..25890.42 rows=251433 width=12) (actual time=0.051..0.051 rows=1 loops=1)\n Output: a, b, c\n Index Cond: ((test.a = 1) AND (test.b = 1))\n Heap Fetches: 1\n Total runtime: 0.084 ms\n(7 rows)\n\nBut really slow ones if we search for null values of b:\n\narturas=# explain analyze verbose select * from test where a = 1 and b is null order by c limit 1;\n QUERY PLAN \n---------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=15632.47..15632.47 rows=1 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n Output: a, b, c\n -> Sort (cost=15632.47..16253.55 rows=248434 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n Output: a, b, c\n Sort Key: test.c\n Sort Method: top-N heapsort Memory: 25kB\n -> Bitmap Heap Scan on public.test (cost=6378.87..14390.30 rows=248434 width=12) (actual time=47.083..88.986 rows=249243 loops=1)\n Output: a, b, c\n Recheck Cond: ((test.a = 1) AND (test.b IS NULL))\n -> Bitmap Index Scan on test_idx (cost=0.00..6316.77 rows=248434 width=0) (actual time=46.015..46.015 rows=249243 loops=1)\n Index Cond: ((test.a = 1) AND (test.b IS NULL))\n Total runtime: 138.200 ms\n(12 rows)\n\nCan someone please give some insight on this problem :)\n\nP.S. I am using `select version()` => PostgreSQL 9.3.5 on \nx86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, \n64-bit, compiled from source with no default configuration changes.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 3 Nov 2014 02:40:22 +0100",
"msg_from": "=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index order ignored after `is null` in query"
}
] |
[
{
"msg_contents": "\nI found out today that direct assignment to a composite type is (at \nleast in my test) about 70% faster than setting it via SELECT INTO. That \nseems like an enormous difference in speed, which I haven't really been \nable to account for.\n\nTest case:\n\n andrew=# \\d abc\n Table \"public.abc\"\n Column | Type | Modifiers\n --------+---------+-----------\n x | text |\n y | text |\n z | integer |\n andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$;\n DO\n Time: 63731.434 ms\n andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop r\n := ('a','b',i); end loop; end; $x$;\n DO\n Time: 18744.151 ms\n\n\nIs it simply because the SELECT is in effect three assignments, so it \ntakes nearly 3 times as long?\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 03 Nov 2014 15:00:14 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "assignment vs SELECT INTO"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n> select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$;\n> DO\n> Time: 63731.434 ms\n> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop r\n> := ('a','b',i); end loop; end; $x$;\n> DO\n> Time: 18744.151 ms\n\n> Is it simply because the SELECT is in effect three assignments, so it \n> takes nearly 3 times as long?\n\nI think it's more likely that the second example is treated as a \"simple\nexpression\" so it has less overhead than a SELECT.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 03 Nov 2014 15:24:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: assignment vs SELECT INTO"
},
{
"msg_contents": "On Mon, Nov 3, 2014 at 6:00 PM, Andrew Dunstan <[email protected]> wrote:\n\n> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n> select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$;\n> DO\n> Time: 63731.434 ms\n> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop r\n> := ('a','b',i); end loop; end; $x$;\n> DO\n> Time: 18744.151 ms\n>\n>\n> Is it simply because the SELECT is in effect three assignments, so it\n> takes nearly 3 times as long?\n>\n\n\nI don't think so, because this take pretty much the same time:\n\n SELECT ('a','b',i) INTO r;\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Nov 3, 2014 at 6:00 PM, Andrew Dunstan <[email protected]> wrote:\n andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$;\n DO\n Time: 63731.434 ms\n andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop r\n := ('a','b',i); end loop; end; $x$;\n DO\n Time: 18744.151 ms\n\n\nIs it simply because the SELECT is in effect three assignments, so it takes nearly 3 times as long?I don't think so, because this take pretty much the same time: SELECT ('a','b',i) INTO r;Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 3 Nov 2014 18:27:03 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: assignment vs SELECT INTO"
},
{
"msg_contents": "\nOn 11/03/2014 03:24 PM, Tom Lane wrote:\n> Andrew Dunstan <[email protected]> writes:\n>> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n>> select 'a','b',i into r.x,r.y,r.z; end loop; end; $x$;\n>> DO\n>> Time: 63731.434 ms\n>> andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop r\n>> := ('a','b',i); end loop; end; $x$;\n>> DO\n>> Time: 18744.151 ms\n>> Is it simply because the SELECT is in effect three assignments, so it\n>> takes nearly 3 times as long?\n> I think it's more likely that the second example is treated as a \"simple\n> expression\" so it has less overhead than a SELECT.\n>\n> \t\t\t\n\n\nWell, I accidetally left out this case:\n\n andrew=# do $x$ declare r abc; begin for i in 1 .. 10000000 loop\n select row('a','b',i) into r; end loop; end; $x$;\n DO\n Time: 81919.721 ms\n\n\nwhich is slower still.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 03 Nov 2014 15:31:16 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: assignment vs SELECT INTO"
}
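If the simple-expression explanation is right, splitting the SELECT into
three plain assignments should land near the row-constructor timing, since
each right-hand side below also qualifies as a simple expression; a sketch
(timings will of course vary by machine):

DO $x$
DECLARE r abc;
BEGIN
  FOR i IN 1 .. 10000000 LOOP
    -- three field assignments, no SPI-executed SELECT involved
    r.x := 'a';
    r.y := 'b';
    r.z := i;
  END LOOP;
END;
$x$;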
] |
[
{
"msg_contents": "Not sure what is going on but other than upgrading to 9.3.4 from 9.2.4, i'm\nseeing major slowness in basic queries and seeing a ton of the bind and\nparse in my logs. These are standard lookups and should take micro seconds.\nI'm logging all queries that take over a second and this seems to be\ngetting worse, seems like it's snowballing.\n\n\n2014-11-04 08:54:52 PST clsdb cls 216.0.0.50(33569) 14857 2014-11-04\n08:54:52.476 PSTLOG: duration: 2206.070 ms parse dbdpg_p12768_2: SELECT\ncontact_seq_id, status FROM cls.contacts\nWHERE\ncust_seq_id = $1\nAND contact_id = $2\n\n2014-11-04 08:54:52 PST clsdb cls 216.20.0.50(48450) 14882 2014-11-04\n08:54:52.394 PSTLOG: duration: 1624.847 ms bind dbdpg_p21610_2: SELECT\ncontact_seq_id, status FROM cls.contacts\nWHERE\ncust_seq_id = $1\nAND contact_id = $2\n\n\nCentOS 6.x\nPostgres: 9.3.4\n256GB Mem\n32Core\n\nI'm tearing up my system to see what is happening, the postgres is the only\nchange, the processes or the data is not. I tried tuning postgresql.conf\nand modifed effective cache etc, but I've reverted it all, so again just\nlooking for some assistance.\n\nAgain no I/O have plenty of memory etc. The server is running hotter than\nit used to as well.\n\nThanks\nTory\n\nNot sure what is going on but other than upgrading to 9.3.4 from 9.2.4, i'm seeing major slowness in basic queries and seeing a ton of the bind and parse in my logs. These are standard lookups and should take micro seconds. I'm logging all queries that take over a second and this seems to be getting worse, seems like it's snowballing.2014-11-04 08:54:52 PST clsdb cls 216.0.0.50(33569) 14857 2014-11-04 08:54:52.476 PSTLOG: duration: 2206.070 ms parse dbdpg_p12768_2: SELECT contact_seq_id, status FROM cls.contacts WHERE cust_seq_id = $1 AND contact_id = $22014-11-04 08:54:52 PST clsdb cls 216.20.0.50(48450) 14882 2014-11-04 08:54:52.394 PSTLOG: duration: 1624.847 ms bind dbdpg_p21610_2: SELECT contact_seq_id, status FROM cls.contacts WHERE cust_seq_id = $1 AND contact_id = $2CentOS 6.xPostgres: 9.3.4256GB Mem32CoreI'm tearing up my system to see what is happening, the postgres is the only change, the processes or the data is not. I tried tuning postgresql.conf and modifed effective cache etc, but I've reverted it all, so again just looking for some assistance.Again no I/O have plenty of memory etc. The server is running hotter than it used to as well.ThanksTory",
"msg_date": "Tue, 4 Nov 2014 09:01:41 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.3 performance issues, lots of bind and parse log entries"
},
{
"msg_contents": "On Tue, Nov 4, 2014 at 9:01 AM, Tory M Blue <[email protected]> wrote:\n\n> Not sure what is going on but other than upgrading to 9.3.4 from 9.2.4,\n> i'm seeing major slowness in basic queries and seeing a ton of the bind and\n> parse in my logs. These are standard lookups and should take micro seconds.\n> I'm logging all queries that take over a second and this seems to be\n> getting worse, seems like it's snowballing.\n>\n>\n> 2014-11-04 08:54:52 PST clsdb cls 216.0.0.50(33569) 14857 2014-11-04\n> 08:54:52.476 PSTLOG: duration: 2206.070 ms parse dbdpg_p12768_2: SELECT\n> contact_seq_id, status FROM cls.contacts\n> WHERE\n> cust_seq_id = $1\n> AND contact_id = $2\n>\n> 2014-11-04 08:54:52 PST clsdb cls 216.20.0.50(48450) 14882 2014-11-04\n> 08:54:52.394 PSTLOG: duration: 1624.847 ms bind dbdpg_p21610_2: SELECT\n> contact_seq_id, status FROM cls.contacts\n> WHERE\n> cust_seq_id = $1\n> AND contact_id = $2\n>\n>\n> CentOS 6.x\n> Postgres: 9.3.4\n> 256GB Mem\n> 32Core\n>\n> I'm tearing up my system to see what is happening, the postgres is the\n> only change, the processes or the data is not. I tried tuning\n> postgresql.conf and modifed effective cache etc, but I've reverted it all,\n> so again just looking for some assistance.\n>\n> Again no I/O have plenty of memory etc. The server is running hotter than\n> it used to as well.\n>\n> Thanks\n> Tory\n>\n\nWell after fighting this all day and dealing with a really sluggish db\nwhere even my slon processes were taking several seconds, I reduced my\nshared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to 2GB. i\nactually undid all my changes, including dropping my effective_cache back\nto 7GB and restarted.\n\nI have 300 connections configured, we will use around 87 normally with some\nspikes, but I'm wondering if the 10GB shared memory caused me some grief, I\ndon't believe it was the work_mem and don't believe it was the effective\ncache, but something caused my DB to run into issues with basic queries,\nsame queries after restart are finishing in milliseconds instead of 2-3\nseconds. No disk issues seen,.\n\n\nSo if this is not a 9.3 issue, it's an issue with me upping my config\nparams to a level I thought would give a nice bump..\n\nCentOS 6.x\nPostgres: 9.3.4\n256GB Mem\n32Core\n\nOn Tue, Nov 4, 2014 at 9:01 AM, Tory M Blue <[email protected]> wrote:Not sure what is going on but other than upgrading to 9.3.4 from 9.2.4, i'm seeing major slowness in basic queries and seeing a ton of the bind and parse in my logs. These are standard lookups and should take micro seconds. I'm logging all queries that take over a second and this seems to be getting worse, seems like it's snowballing.2014-11-04 08:54:52 PST clsdb cls 216.0.0.50(33569) 14857 2014-11-04 08:54:52.476 PSTLOG: duration: 2206.070 ms parse dbdpg_p12768_2: SELECT contact_seq_id, status FROM cls.contacts WHERE cust_seq_id = $1 AND contact_id = $22014-11-04 08:54:52 PST clsdb cls 216.20.0.50(48450) 14882 2014-11-04 08:54:52.394 PSTLOG: duration: 1624.847 ms bind dbdpg_p21610_2: SELECT contact_seq_id, status FROM cls.contacts WHERE cust_seq_id = $1 AND contact_id = $2CentOS 6.xPostgres: 9.3.4256GB Mem32CoreI'm tearing up my system to see what is happening, the postgres is the only change, the processes or the data is not. I tried tuning postgresql.conf and modifed effective cache etc, but I've reverted it all, so again just looking for some assistance.Again no I/O have plenty of memory etc. 
The server is running hotter than it used to as well.ThanksToryWell after fighting this all day and dealing with a really sluggish db where even my slon processes were taking several seconds, I reduced my shared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to 2GB. i actually undid all my changes, including dropping my effective_cache back to 7GB and restarted.I have 300 connections configured, we will use around 87 normally with some spikes, but I'm wondering if the 10GB shared memory caused me some grief, I don't believe it was the work_mem and don't believe it was the effective cache, but something caused my DB to run into issues with basic queries, same queries after restart are finishing in milliseconds instead of 2-3 seconds. No disk issues seen,.So if this is not a 9.3 issue, it's an issue with me upping my config params to a level I thought would give a nice bump..CentOS 6.xPostgres: 9.3.4256GB Mem32Core",
"msg_date": "Tue, 4 Nov 2014 12:07:33 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.3 performance issues, lots of bind and parse log entries"
},
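When juggling settings like this it can help to confirm what the running
server actually ended up with; a quick check, nothing version-specific:

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('shared_buffers', 'work_mem',
               'effective_cache_size', 'max_connections');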
{
"msg_contents": "Hi Tory,\n\nOn 4.11.2014 21:07, Tory M Blue wrote:\n> Well after fighting this all day and dealing with a really sluggish db\n> where even my slon processes were taking several seconds, I reduced my\n> shared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to 2GB.\n> i actually undid all my changes, including dropping my effective_cache\n> back to 7GB and restarted.\n\nHave you been using the same parameter values on 9.2, or have you bumped\nthem up only on the new 9.3? I'm wondering whether 9.2 was performing\nbetter with the values?\n\n> I have 300 connections configured, we will use around 87 normally\n> with some spikes, but I'm wondering if the 10GB shared memory caused\n> me some grief, I don't believe it was the work_mem and don't believe\n> it was the effective cache, but something caused my DB to run into\n> issues with basic queries, same queries after restart are finishing\n> in milliseconds instead of 2-3 seconds. No disk issues seen,.\n\nI assume only some of the connections will be active (running queries)\nat the same time. If you expect >> 32 active queries at the same time,\nyou're only increasing latency.\n\nBased on your description I assume you're CPU bound (otherwise the\nmachine would not get \"hotter\", and planning is not about I/O).\n\nI'm not sure if this is a production machine or how much you can\nexperiment with it, but it'd be helpful if you could provide some\nprofiling information\n\n $ iostat -x -k 1\n $ vmstat 1\n\nand such data. A perf profile would be even better, but to get the most\nuseful info it may be necessary to recompile the postgres with debug\ninfo and '-fno-omit-frame-pointer'. Then this should do the trick:\n\n perf record -a -g (for a few seconds, then Ctrl-C)\n perf report\n\nor just \"perf top\" to see what functions are at the top.\n\n\n> So if this is not a 9.3 issue, it's an issue with me upping my config\n> params to a level I thought would give a nice bump..\n> \n> CentOS 6.x\n> Postgres: 9.3.4\n> 256GB Mem\n> 32Core\n\nWhat kernel version are you using? I assume 6.x means 6.5, or are you\nusing an older CentOS version?\n\nAre you using transparent huge pages, NUMA or similar features?\nAlthought, that'd probably impact 9.2 too.\n\nAlso, what package is this? Is it coming from the CentOS repository,\nyum.postgresql.org or some other repository?\n\nregards\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 04 Nov 2014 21:31:44 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.3 performance issues, lots of bind and parse log\n entries"
},
{
"msg_contents": "Thanks Thomas,\n\n>\n> On 4.11.2014 21:07, Tory M Blue wrote:\n> > Well after fighting this all day and dealing with a really sluggish db\n> > where even my slon processes were taking several seconds, I reduced my\n> > shared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to 2GB.\n> > i actually undid all my changes, including dropping my effective_cache\n> > back to 7GB and restarted.\n>\n> Have you been using the same parameter values on 9.2, or have you bumped\n> them up only on the new 9.3? I'm wondering whether 9.2 was performing\n> better with the values?\n>\n>\nThings seem to have been running better on 9.2 at this point I'm using the\nsame config file from 9.2 and I'm still experiencing slowness under heavier\nwrite access. And my disk subsystem has not changed. Hardware has not\nchanged, heck i'm even running the old version of slony (have not upgraded\nit yet).\n\nBut since the upgrade to 9.3 even calls to my sl_log tables which are tiny\ncan take:\n\n2014-11-04 02:58:40 PST clsdb postgres 10.13.200.242(52022) 21642\n2014-11-04 02:58:40.515 PSTLOG: duration: 1627.019 ms statement: fetch\n500 from LOG; (log had 145K items).\n\n> I have 300 connections configured, we will use around 87 normally\n> > with some spikes, but I'm wondering if the 10GB shared memory caused\n> > me some grief, I don't believe it was the work_mem and don't believe\n> > it was the effective cache, but something caused my DB to run into\n> > issues with basic queries, same queries after restart are finishing\n> > in milliseconds instead of 2-3 seconds. No disk issues seen,.\n>\n> I assume only some of the connections will be active (running queries)\n> at the same time. If you expect >> 32 active queries at the same time,\n> you're only increasing latency.\n>\n> Based on your description I assume you're CPU bound (otherwise the\n> machine would not get \"hotter\", and planning is not about I/O).\n>\n> I'm not sure if this is a production machine or how much you can\n> experiment with it, but it'd be helpful if you could provide some\n> profiling information\n>\n> $ iostat -x -k 1\n> $ vmstat 1\n>\n> and such data. A perf profile would be even better, but to get the most\n> useful info it may be necessary to recompile the postgres with debug\n> info and '-fno-omit-frame-pointer'. Then this should do the trick:\n>\n> perf record -a -g (for a few seconds, then Ctrl-C)\n> perf report\n>\n> or just \"perf top\" to see what functions are at the top.\n>\n>\nThis is a production server, but it was not really CPU bound with 9.2 so\nsomething is odd and I'm starting to stress, because it is a production\nenvironment :)\n\nConnections correct, I have less than 20 or so active requests at a time,\nbut i would say active queries are in the handful. I was was not seeing IO,\nbut was seeing load increase as queries started taking longer, but nothing\nin iostat or vmstat/free showed any contention. Heck even Top while showed\nsome cores as busy, nothing was sitting at over 60% utilized. And we are\ntalking a load of 12-14 here on a 32 core system, when it's normally asleep!\n\nThis is my master slon insert server, so I can run commands, tweak configs\nbut any type of rebuild or restart of postgres is a scheduled affair.\n\nThese work loads that seem to be creating the issues run between midnight\nand now almost 6am, prior to 9.3 it was taking maybe 4 hours, now it's\ntaking 6. 
So tomorrow AM, I'll grab some stats when I see that it's\nstruggling.\n\nBut even now with almost no connections or really any major access I'm\nseeing the sl_log grab of 500 rows take 1-3 seconds, which is just plain silly\n(but it's not a constant, so I may see 1 of these alerts every hour)\n\n\n>\n> > So if this is not a 9.3 issue, it's an issue with me upping my config\n> > params to a level I thought would give a nice bump..\n> >\n> > CentOS 6.x\n> > Postgres: 9.3.4\n> > 256GB Mem\n> > 32Core\n>\n> What kernel version are you using? I assume 6.x means 6.5, or are you\n> using an older CentOS version?\n>\n\n6.5 yes sir.. 2.6.32-431.5.1.el6.x86_64\n\n\n> Are you using transparent huge pages, NUMA or similar features?\n> Althought, that'd probably impact 9.2 too.\n>\n\nya nothing here. No difference from the 9.2 to 9.3 roll. My sysctl.conf is\npretty boring.\n\nAlso, what package is this? Is it coming from the CentOS repository,\n> yum.postgresql.org or some other repository?\n>\n\nIt's a self-spun RPM. It follows the same procedures since earlier 7.x, with\nrequired includes added as it went along. We spin this RPM together with our\nslon package.\n\nThanks Tomas\n\nTory Blue",
"msg_date": "Wed, 5 Nov 2014 11:16:00 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 9.3 performance issues, lots of bind and parse log entries"
},
{
"msg_contents": "On 5.11.2014 20:16, Tory M Blue wrote:\n> \n> Thanks Thomas, \n> \n> \n> On 4.11.2014 21:07, Tory M Blue wrote:\n> > Well after fighting this all day and dealing with a really sluggish db\n> > where even my slon processes were taking several seconds, I reduced my\n> > shared_buffers back to 2GB from 10GB and my work_mem from 7.5GB to\n> 2GB.\n> > i actually undid all my changes, including dropping my effective_cache\n> > back to 7GB and restarted.\n> \n> Have you been using the same parameter values on 9.2, or have you bumped\n> them up only on the new 9.3? I'm wondering whether 9.2 was performing\n> better with the values?\n> \n> \n> Things seem to have been running better on 9.2 at this point I'm using\n> the same config file from 9.2 and I'm still experiencing slowness under\n> heavier write access. And my disk subsystem has not changed. Hardware\n> has not changed, heck i'm even running the old version of slony (have\n> not upgraded it yet).\n\nSo with shared_buffers=10GB and work_mem=7.5GB you saw significant\nslowdown both for read and write queries, and after reverting to lower\nvalues the read queries are OK but writes still take much longer?\n\n> But since the upgrade to 9.3 even calls to my sl_log tables which\n> are tiny can take:\n> \n> 2014-11-04 02:58:40 PST clsdb postgres 10.13.200.242(52022) 21642\n> 2014-11-04 02:58:40.515 PSTLOG: duration: 1627.019 ms statement: fetch\n> 500 from LOG; (log had 145K items).\n> \n> > I have 300 connections configured, we will use around 87 normally\n> > with some spikes, but I'm wondering if the 10GB shared memory caused\n> > me some grief, I don't believe it was the work_mem and don't believe\n> > it was the effective cache, but something caused my DB to run into\n> > issues with basic queries, same queries after restart are finishing\n> > in milliseconds instead of 2-3 seconds. No disk issues seen,.\n> \n> I assume only some of the connections will be active (running queries)\n> at the same time. If you expect >> 32 active queries at the same time,\n> you're only increasing latency.\n> \n> Based on your description I assume you're CPU bound (otherwise the\n> machine would not get \"hotter\", and planning is not about I/O).\n> \n> I'm not sure if this is a production machine or how much you can\n> experiment with it, but it'd be helpful if you could provide some\n> profiling information\n> \n> $ iostat -x -k 1\n> $ vmstat 1\n> \n> and such data. A perf profile would be even better, but to get the most\n> useful info it may be necessary to recompile the postgres with debug\n> info and '-fno-omit-frame-pointer'. Then this should do the trick:\n> \n> perf record -a -g (for a few seconds, then Ctrl-C)\n> perf report\n> \n> or just \"perf top\" to see what functions are at the top.\n> \n> \n> This is a production server, but it was not really CPU bound with 9.2\n> so something is odd and I'm starting to stress, because it is a\n> production environment :)\n\nYeah, I was talking about the 9.3 - that's clearly CPU bound.\n\n> Connections correct, I have less than 20 or so active requests at a\n> time, but i would say active queries are in the handful. I was was not\n> seeing IO, but was seeing load increase as queries started taking\n> longer, but nothing in iostat or vmstat/free showed any contention. Heck\n> even Top while showed some cores as busy, nothing was sitting at over\n> 60% utilized. And we are talking a load of 12-14 here on a 32 core\n> system, when it's normally asleep!\n\nRight. 
That's consistent with being CPU bound.\n\n> This is my master slon insert server, so I can run commands, tweak \n> configs but any type of rebuild or restart of postgres is a\n> scheduled affair.\n\nOK, understood. That however mostly rules out recompiling with debug\ninfo and frame pointers, as that might make it significantly slower.\nThat's not something you'd like to do on production.\n\n> These work loads that seem to be creating the issues run between \n> midnight and now almost 6am, prior to 9.3 it was taking maybe 4\n> hours, now it's taking 6. So tomorrow AM , I'll grab some stats when\n> I see that it's struggling.\n> \n> But even now with almost no connections or really any major access\n> i'm seeing the sl_log grab 500 rows take 1-3 seconds, which is just\n> plain silly (but it's not a constant, so I may see 1 of these alerts\n> every hour)\n\nIs that plain \"SELECT * FROM sl_log\" or something more complex? When you\ndo explain analyze on the query, what do you see?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Nov 2014 02:54:53 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.3 performance issues, lots of bind and parse log\n entries"
},
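The slow fetch comes from a cursor, so answering the explain-analyze
question means explaining the statement the cursor wraps; a sketch, where
the schema name and the final query are placeholders for the actual Slony
cursor definition:

-- Capture the statement behind the slow fetch (9.3 column names)
SELECT pid, query FROM pg_stat_activity WHERE query LIKE 'fetch 500 from LOG%';

-- Then explain the SELECT the cursor was declared on (placeholder query)
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM _clsdb.sl_log_1 ORDER BY log_actionseq LIMIT 500;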
{
"msg_contents": "Tory,\n\nDo you know if your workload involves a lot of lock-blocking,\nparticularly blocking on locks related to FKs? I'm tracing down a\nproblem which sounds similar to yours.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 10:56:42 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.3 performance issues, lots of bind and parse log\n entries"
}
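A rough way to check for that on 9.3, which has no pg_blocking_pids() yet,
is to self-join pg_locks; this simplified version matches only a few lock
fields and can produce false pairings, but it catches waits in the act:

SELECT blocked.pid    AS blocked_pid,
       blocked.query  AS blocked_query,
       blocking.pid   AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_locks bl
JOIN pg_stat_activity blocked ON blocked.pid = bl.pid
JOIN pg_locks kl
  ON  kl.locktype = bl.locktype
  AND kl.database      IS NOT DISTINCT FROM bl.database
  AND kl.relation      IS NOT DISTINCT FROM bl.relation
  AND kl.transactionid IS NOT DISTINCT FROM bl.transactionid
  AND kl.granted
  AND kl.pid <> bl.pid
JOIN pg_stat_activity blocking ON blocking.pid = kl.pid
WHERE NOT bl.granted;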
] |
[
{
"msg_contents": "I recently sourced a 300gb intel s3500 ssd to do some performance\ntesting. I didn't see a lot of results on the web so I thought I'd\npost some numbers. Testing machine is my workstation crapbox with 4\ncores and 8GB ram (of which about 4 is usable by the ~ 50gb database).\nThe drive cost 260$ at newegg (sub 1$/gb) and is write durable.\n\n\nSingle thread 'select only' results are pretty stable 2200 tps isn't\nbad. of particular note is the sub millisecond latency of the read.\nPer iostat I'm getting ~ 55mb/sec read off the device and around 4100\ndevice tps:\n\ntransaction type: SELECT only\nscaling factor: 3000\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nduration: 10 s\nnumber of transactions actually processed: 22061\ntps = 2206.019701 (including connections establishing)\ntps = 2206.534467 (excluding connections establishing)\nstatement latencies in milliseconds:\n0.003143 \\set naccounts 100000 * :scale\n0.000776 \\setrandom aid 1 :naccounts\n0.447513 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\nMulti thread 'select only' results are also pretty stable: I get\naround 16-17k tps, but of note:\n*) iowait in the mid 40's\n*) cpu bound\n*) consistent 430mb/sec off the device per iostat !! that's\nincredible!! (some of the latency may in fact be from SATA).\n\ntransaction type: SELECT only\nscaling factor: 3000\nquery mode: simple\nnumber of clients: 32\nnumber of threads: 32\nduration: 20 s\nnumber of transactions actually processed: 321823\ntps = 16052.052818 (including connections establishing)\ntps = 16062.973737 (excluding connections establishing)\nstatement latencies in milliseconds:\n0.002469 \\set naccounts 100000 * :scale\n0.000528 \\setrandom aid 1 :naccounts\n1.984443 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;\n\nFor random write tests, I see around 1000tps for single thread and ~\n4700 with 32 threads. These results are more volatile and,\nimportantly, I disable synchronous commit feature. For the price,\nunless you are doing tons and tons of writing (in which case i'd opt\nfor a more expensive drive like the S3700). This drive is perfectly\nsuited for OLAP work IMO since ssds like the big sequential loads and\nrandom access of the data is no problem.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Nov 2014 11:40:59 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "intel s3500 -- hot stuff"
},
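For reference, runs like the two above can be reproduced with stock pgbench; the database name and the -n/-r flags are assumptions, but the scale factor, client counts, and durations match the posted output:

    pgbench -i -s 3000 bench                   # one-time init, ~45GB of data
    pgbench -S -n -r -c 1 -j 1 -T 10 bench     # single-client select-only run
    pgbench -S -n -r -c 32 -j 32 -T 20 bench   # 32 clients / 32 threads, 20s

-S selects the built-in select-only script and -r prints the per-statement latencies shown in the output above.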
{
"msg_contents": "On Wed, Nov 5, 2014 at 11:40 AM, Merlin Moncure <[email protected]> wrote:\n> I recently sourced a 300gb intel s3500 ssd to do some performance\n> testing. I didn't see a lot of results on the web so I thought I'd\n> post some numbers. Testing machine is my workstation crapbox with 4\n> cores and 8GB ram (of which about 4 is usable by the ~ 50gb database).\n> The drive cost 260$ at newegg (sub 1$/gb) and is write durable.\n\nHere's another fascinating data point. I was playing around\neffective_io_concurrency for the device with bitmap heap scans on the\nscale 3000 database (again, the performance numbers are very stable\nacross runs):\nbench=# explain (analyze, buffers) select * from pgbench_accounts\nwhere aid between 1000 and 50000000 and abalance != 0;\n\nQUERY PLAN\n────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────\n Bitmap Heap Scan on pgbench_accounts (cost=1059541.66..6929604.57\nrows=1 width=97) (actual time=5040.128..23089.651 rows=1420738\nloops=1)\n Recheck Cond: ((aid >= 1000) AND (aid <= 50000000))\n Rows Removed by Index Recheck: 3394823\n Filter: (abalance <> 0)\n Rows Removed by Filter: 48578263\n Buffers: shared hit=3 read=1023980\n -> Bitmap Index Scan on pgbench_accounts_pkey\n(cost=0.00..1059541.66 rows=50532109 width=0) (actual\ntime=5038.707..5038.707 rows=49999001 loops=1)\n Index Cond: ((aid >= 1000) AND (aid <= 50000000))\n Buffers: shared hit=3 read=136611\n Total runtime: 46251.375 ms\n\neffective_io_concurrency 1: 46.3 sec, ~ 170 mb/sec peak via iostat\neffective_io_concurrency 2: 49.3 sec, ~ 158 mb/sec peak via iostat\neffective_io_concurrency 4: 29.1 sec, ~ 291 mb/sec peak via iostat\neffective_io_concurrency 8: 23.2 sec, ~ 385 mb/sec peak via iostat\neffective_io_concurrency 16: 22.1 sec, ~ 409 mb/sec peak via iostat\neffective_io_concurrency 32: 20.7 sec, ~ 447 mb/sec peak via iostat\neffective_io_concurrency 64: 20.0 sec, ~ 468 mb/sec peak via iostat\neffective_io_concurrency 128: 19.3 sec, ~ 488 mb/sec peak via iostat\neffective_io_concurrency 256: 19.2 sec, ~ 494 mb/sec peak via iostat\n\nDid not see consistent measurable gains > 256\neffective_io_concurrency. Interesting that at setting of '2' (the\nlowest possible setting with the feature actually working) is\npessimal.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Nov 2014 12:09:16 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
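A sweep like the one above needs nothing more than a session-level SET between runs; a sketch (the GUC is standard, the query mirrors the posted plan):

    SET effective_io_concurrency = 32;   -- repeated for 1, 2, 4, ... 256
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT * FROM pgbench_accounts
    WHERE aid BETWEEN 1000 AND 50000000 AND abalance != 0;

On a box with more RAM than the dataset, the OS page cache would presumably have to be dropped between runs (echo 3 > /proc/sys/vm/drop_caches as root) to keep the numbers comparable; here the ~50GB database dwarfs the 8GB of RAM, which presumably is why the posted runs are so stable.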
{
"msg_contents": "On Wed, Nov 5, 2014 at 12:09:16PM -0600, Merlin Moncure wrote:\n> effective_io_concurrency 1: 46.3 sec, ~ 170 mb/sec peak via iostat\n> effective_io_concurrency 2: 49.3 sec, ~ 158 mb/sec peak via iostat\n> effective_io_concurrency 4: 29.1 sec, ~ 291 mb/sec peak via iostat\n> effective_io_concurrency 8: 23.2 sec, ~ 385 mb/sec peak via iostat\n> effective_io_concurrency 16: 22.1 sec, ~ 409 mb/sec peak via iostat\n> effective_io_concurrency 32: 20.7 sec, ~ 447 mb/sec peak via iostat\n> effective_io_concurrency 64: 20.0 sec, ~ 468 mb/sec peak via iostat\n> effective_io_concurrency 128: 19.3 sec, ~ 488 mb/sec peak via iostat\n> effective_io_concurrency 256: 19.2 sec, ~ 494 mb/sec peak via iostat\n> \n> Did not see consistent measurable gains > 256\n> effective_io_concurrency. Interesting that at setting of '2' (the\n> lowest possible setting with the feature actually working) is\n> pessimal.\n\nVery interesting. When we added a per-tablespace random_page_cost,\nthere was a suggestion that we might want to add per-tablespace\neffective_io_concurrency someday:\n\n\tcommit d86d51a95810caebcea587498068ff32fe28293e\n\tAuthor: Robert Haas <[email protected]>\n\tDate: Tue Jan 5 21:54:00 2010 +0000\n\t\n\t Support ALTER TABLESPACE name SET/RESET ( tablespace_options ).\n\t\n\t This patch only supports seq_page_cost and random_page_cost as parameters,\n\t but it provides the infrastructure to scalably support many more.\n\t In particular, we may want to add support for effective_io_concurrency,\n\t but I'm leaving that as future work for now.\n\t\n\t Thanks to Tom Lane for design help and Alvaro Herrera for the review.\n\nIt seems that time has come.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 6 Dec 2014 08:08:28 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
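The syntax that commit introduced, and which the suggestion above would extend, looks like this (tablespace name illustrative):

    ALTER TABLESPACE fast_ssd SET (random_page_cost = 1.1, seq_page_cost = 1.0);
    ALTER TABLESPACE fast_ssd RESET (random_page_cost);

Adding effective_io_concurrency as another recognized option would follow the same pattern.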
{
"msg_contents": "On Sat, Dec 6, 2014 at 7:08 AM, Bruce Momjian <[email protected]> wrote:\n> On Wed, Nov 5, 2014 at 12:09:16PM -0600, Merlin Moncure wrote:\n>> effective_io_concurrency 1: 46.3 sec, ~ 170 mb/sec peak via iostat\n>> effective_io_concurrency 2: 49.3 sec, ~ 158 mb/sec peak via iostat\n>> effective_io_concurrency 4: 29.1 sec, ~ 291 mb/sec peak via iostat\n>> effective_io_concurrency 8: 23.2 sec, ~ 385 mb/sec peak via iostat\n>> effective_io_concurrency 16: 22.1 sec, ~ 409 mb/sec peak via iostat\n>> effective_io_concurrency 32: 20.7 sec, ~ 447 mb/sec peak via iostat\n>> effective_io_concurrency 64: 20.0 sec, ~ 468 mb/sec peak via iostat\n>> effective_io_concurrency 128: 19.3 sec, ~ 488 mb/sec peak via iostat\n>> effective_io_concurrency 256: 19.2 sec, ~ 494 mb/sec peak via iostat\n>>\n>> Did not see consistent measurable gains > 256\n>> effective_io_concurrency. Interesting that at setting of '2' (the\n>> lowest possible setting with the feature actually working) is\n>> pessimal.\n>\n> Very interesting. When we added a per-tablespace random_page_cost,\n> there was a suggestion that we might want to add per-tablespace\n> effective_io_concurrency someday:\n\nWhat I'd really like to see is to have effective_io_concurrency work\non other types of scans. It's clearly a barn burner on fast storage\nand perhaps the default should be something other than '1'. Spinning\nstorage is clearly dead and ssd seem to really benefit from the posix\nreadhead api.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 8 Dec 2014 15:40:43 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
{
"msg_contents": "On Mon, Dec 8, 2014 at 03:40:43PM -0600, Merlin Moncure wrote:\n> >> Did not see consistent measurable gains > 256\n> >> effective_io_concurrency. Interesting that at setting of '2' (the\n> >> lowest possible setting with the feature actually working) is\n> >> pessimal.\n> >\n> > Very interesting. When we added a per-tablespace random_page_cost,\n> > there was a suggestion that we might want to add per-tablespace\n> > effective_io_concurrency someday:\n> \n> What I'd really like to see is to have effective_io_concurrency work\n> on other types of scans. It's clearly a barn burner on fast storage\n> and perhaps the default should be something other than '1'. Spinning\n> storage is clearly dead and ssd seem to really benefit from the posix\n> readhead api.\n\nWell, the real question is knowing which blocks to request before\nactually needing them. With a bitmap scan, that is easy --- I am\nunclear how to do it for other scans. We already have kernel read-ahead\nfor sequential scans, and any index scan that hits multiple rows will\nprobably already be using a bitmap heap scan.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 9 Dec 2014 15:43:37 -0500",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
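For context, the prefetch primitive underlying this discussion is posix_fadvise(POSIX_FADV_WILLNEED), which is what PostgreSQL's prefetching uses where available. A minimal standalone sketch (file descriptor and block list assumed; not PostgreSQL source):

    #include <fcntl.h>      /* posix_fadvise, POSIX_FADV_WILLNEED */

    #define BLCKSZ 8192     /* PostgreSQL's default block size */

    /* Hint the kernel to start reading the given blocks now; the actual
     * reads happen later and, ideally, never block on I/O. */
    static void
    prefetch_blocks(int fd, const unsigned *blocknums, int nblocks)
    {
        for (int i = 0; i < nblocks; i++)
            (void) posix_fadvise(fd, (off_t) blocknums[i] * BLCKSZ,
                                 BLCKSZ, POSIX_FADV_WILLNEED);
    }

A bitmap heap scan can fill such a block list straight from the bitmap, which is Bruce's point: the blocks are known well before they are read.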
{
"msg_contents": "On Tue, Dec 9, 2014 at 12:43 PM, Bruce Momjian <[email protected]> wrote:\n\n> On Mon, Dec 8, 2014 at 03:40:43PM -0600, Merlin Moncure wrote:\n> > >> Did not see consistent measurable gains > 256\n> > >> effective_io_concurrency. Interesting that at setting of '2' (the\n> > >> lowest possible setting with the feature actually working) is\n> > >> pessimal.\n> > >\n> > > Very interesting. When we added a per-tablespace random_page_cost,\n> > > there was a suggestion that we might want to add per-tablespace\n> > > effective_io_concurrency someday:\n> >\n> > What I'd really like to see is to have effective_io_concurrency work\n> > on other types of scans. It's clearly a barn burner on fast storage\n> > and perhaps the default should be something other than '1'. Spinning\n> > storage is clearly dead and ssd seem to really benefit from the posix\n> > readhead api.\n>\n\nI haven't played much with SSD, but effective_io_concurrency can be a big\nwin even on spinning disk.\n\n\n>\n> Well, the real question is knowing which blocks to request before\n> actually needing them. With a bitmap scan, that is easy --- I am\n> unclear how to do it for other scans. We already have kernel read-ahead\n> for sequential scans, and any index scan that hits multiple rows will\n> probably already be using a bitmap heap scan.\n>\n\nIf the index scan is used to provide ordering as well as selectivity than\nit will resist being converted to an bitmap scan. Also it won't convert to\na bitmap scan solely to get credit for the use of effective_io_concurrency,\nas that setting doesn't enter into planning decisions.\n\nFor a regular index scan, it should be easy to prefetch table blocks for\nall the tuples that will need to be retrieved based on the current index\nleaf page, for example. Looking ahead across leaf page boundaries would be\nharder.\n\nCheers,\n\nJeff\n\nOn Tue, Dec 9, 2014 at 12:43 PM, Bruce Momjian <[email protected]> wrote:On Mon, Dec 8, 2014 at 03:40:43PM -0600, Merlin Moncure wrote:\n> >> Did not see consistent measurable gains > 256\n> >> effective_io_concurrency. Interesting that at setting of '2' (the\n> >> lowest possible setting with the feature actually working) is\n> >> pessimal.\n> >\n> > Very interesting. When we added a per-tablespace random_page_cost,\n> > there was a suggestion that we might want to add per-tablespace\n> > effective_io_concurrency someday:\n>\n> What I'd really like to see is to have effective_io_concurrency work\n> on other types of scans. It's clearly a barn burner on fast storage\n> and perhaps the default should be something other than '1'. Spinning\n> storage is clearly dead and ssd seem to really benefit from the posix\n> readhead api.I haven't played much with SSD, but effective_io_concurrency can be a big win even on spinning disk. \n\nWell, the real question is knowing which blocks to request before\nactually needing them. With a bitmap scan, that is easy --- I am\nunclear how to do it for other scans. We already have kernel read-ahead\nfor sequential scans, and any index scan that hits multiple rows will\nprobably already be using a bitmap heap scan.If the index scan is used to provide ordering as well as selectivity than it will resist being converted to an bitmap scan. Also it won't convert to a bitmap scan solely to get credit for the use of effective_io_concurrency, as that setting doesn't enter into planning decisions. 
For a regular index scan, it should be easy to prefetch table blocks for all the tuples that will need to be retrieved based on the current index leaf page, for example. Looking ahead across leaf page boundaries would be harder.Cheers,Jeff",
"msg_date": "Wed, 10 Dec 2014 08:52:13 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
{
"msg_contents": "On 10/12/2014 17:52, Jeff Janes wrote:\n> On Tue, Dec 9, 2014 at 12:43 PM, Bruce Momjian <[email protected]\n> <mailto:[email protected]>> wrote:\n> \n> On Mon, Dec 8, 2014 at 03:40:43PM -0600, Merlin Moncure wrote:\n> > >> Did not see consistent measurable gains > 256\n> > >> effective_io_concurrency. Interesting that at setting of '2' (the\n> > >> lowest possible setting with the feature actually working) is\n> > >> pessimal.\n> > >\n> > > Very interesting. When we added a per-tablespace random_page_cost,\n> > > there was a suggestion that we might want to add per-tablespace\n> > > effective_io_concurrency someday:\n> >\n> > What I'd really like to see is to have effective_io_concurrency work\n> > on other types of scans. It's clearly a barn burner on fast storage\n> > and perhaps the default should be something other than '1'. Spinning\n> > storage is clearly dead and ssd seem to really benefit from the posix\n> > readhead api.\n> \n> \n> I haven't played much with SSD, but effective_io_concurrency can be a\n> big win even on spinning disk.\n> \n> \n> \n> Well, the real question is knowing which blocks to request before\n> actually needing them. With a bitmap scan, that is easy --- I am\n> unclear how to do it for other scans. We already have kernel read-ahead\n> for sequential scans, and any index scan that hits multiple rows will\n> probably already be using a bitmap heap scan.\n> \n> \n> If the index scan is used to provide ordering as well as selectivity\n> than it will resist being converted to an bitmap scan. Also it won't\n> convert to a bitmap scan solely to get credit for the use of\n> effective_io_concurrency, as that setting doesn't enter into planning\n> decisions. \n> \n> For a regular index scan, it should be easy to prefetch table blocks for\n> all the tuples that will need to be retrieved based on the current index\n> leaf page, for example. Looking ahead across leaf page boundaries would\n> be harder.\n> \n\nI also think that having effective_io_concurrency for other nodes that\nbitmap scan would be really great, but for now\nhaving a per-tablespace effective_io_concurrency is simpler to implement\nand will already help, so here's a patch to implement it. I'm also\nadding it to the next commitfest.\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Sat, 18 Jul 2015 12:03:21 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: intel s3500 -- hot stuff"
},
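Usage-wise, what the patch enables is an extra tablespace option alongside the existing cost settings (tablespace name illustrative):

    ALTER TABLESPACE ssd_space SET (effective_io_concurrency = 128);

so a bitmap heap scan on a relation stored there computes its prefetch target from 128 instead of from the global GUC.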
{
"msg_contents": "On 18/07/2015 12:03, Julien Rouhaud wrote:\n> On 10/12/2014 17:52, Jeff Janes wrote:\n>> On Tue, Dec 9, 2014 at 12:43 PM, Bruce Momjian <[email protected]\n>> <mailto:[email protected]>> wrote:\n>>\n>> On Mon, Dec 8, 2014 at 03:40:43PM -0600, Merlin Moncure wrote:\n>> > >> Did not see consistent measurable gains > 256\n>> > >> effective_io_concurrency. Interesting that at setting of '2' (the\n>> > >> lowest possible setting with the feature actually working) is\n>> > >> pessimal.\n>> > >\n>> > > Very interesting. When we added a per-tablespace random_page_cost,\n>> > > there was a suggestion that we might want to add per-tablespace\n>> > > effective_io_concurrency someday:\n>> >\n>> > What I'd really like to see is to have effective_io_concurrency work\n>> > on other types of scans. It's clearly a barn burner on fast storage\n>> > and perhaps the default should be something other than '1'. Spinning\n>> > storage is clearly dead and ssd seem to really benefit from the posix\n>> > readhead api.\n>>\n>>\n>> I haven't played much with SSD, but effective_io_concurrency can be a\n>> big win even on spinning disk.\n>> \n>>\n>>\n>> Well, the real question is knowing which blocks to request before\n>> actually needing them. With a bitmap scan, that is easy --- I am\n>> unclear how to do it for other scans. We already have kernel read-ahead\n>> for sequential scans, and any index scan that hits multiple rows will\n>> probably already be using a bitmap heap scan.\n>>\n>>\n>> If the index scan is used to provide ordering as well as selectivity\n>> than it will resist being converted to an bitmap scan. Also it won't\n>> convert to a bitmap scan solely to get credit for the use of\n>> effective_io_concurrency, as that setting doesn't enter into planning\n>> decisions. \n>>\n>> For a regular index scan, it should be easy to prefetch table blocks for\n>> all the tuples that will need to be retrieved based on the current index\n>> leaf page, for example. Looking ahead across leaf page boundaries would\n>> be harder.\n>>\n> \n> I also think that having effective_io_concurrency for other nodes that\n> bitmap scan would be really great, but for now\n> having a per-tablespace effective_io_concurrency is simpler to implement\n> and will already help, so here's a patch to implement it. I'm also\n> adding it to the next commitfest.\n> \n\nI didn't know that the thread must exists on -hackers to be able to add\na commitfest entry, so I transfer the thread here.\n\nSorry the double post.\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Sat, 18 Jul 2015 12:17:39 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] intel s3500 -- hot stuff"
},
{
"msg_contents": "\nHi,\n\nOn 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:\n> I didn't know that the thread must exists on -hackers to be able to add\n> a commitfest entry, so I transfer the thread here.\n\nPlease, in the future, also update the title of the thread to something\nfitting.\n\n> @@ -539,6 +541,9 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n> {\n> \tBitmapHeapScanState *scanstate;\n> \tRelation\tcurrentRelation;\n> +#ifdef USE_PREFETCH\n> +\tint new_io_concurrency;\n> +#endif\n> \n> \t/* check for unsupported flags */\n> \tAssert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));\n> @@ -598,6 +603,25 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n> \t */\n> \tcurrentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags);\n> \n> +#ifdef USE_PREFETCH\n> +\t/* check if the effective_io_concurrency has been overloaded for the\n> +\t * tablespace storing the relation and compute the target_prefetch_pages,\n> +\t * or just get the current target_prefetch_pages\n> +\t */\n> +\tnew_io_concurrency = get_tablespace_io_concurrency(\n> +\t\t\tcurrentRelation->rd_rel->reltablespace);\n> +\n> +\n> +\tscanstate->target_prefetch_pages = target_prefetch_pages;\n> +\n> +\tif (new_io_concurrency != effective_io_concurrency)\n> +\t{\n> +\t\tdouble prefetch_pages;\n> +\t if (compute_io_concurrency(new_io_concurrency, &prefetch_pages))\n> +\t\t\tscanstate->target_prefetch_pages = rint(prefetch_pages);\n> +\t}\n> +#endif\n\nMaybe it's just me - but imo there should be as few USE_PREFETCH\ndependant places in the code as possible. It'll just be 0 when not\nsupported, that's fine? Especially changing the size of externally\nvisible structs depending on a configure detected ifdef seems wrong to\nme.\n\n> +bool\n> +compute_io_concurrency(int io_concurrency, double *target_prefetch_pages)\n> +{\n> +\tdouble\t\tnew_prefetch_pages = 0.0;\n> +\tint\t\t\ti;\n> +\n> +\t/* make sure the io_concurrency value is correct, it may have been forced\n> +\t * with a pg_tablespace UPDATE\n> +\t */\n\nNitpick: Wrong comment style (/* stands on its own line).\n\n> +\tif (io_concurrency > MAX_IO_CONCURRENCY)\n> +\t\tio_concurrency = MAX_IO_CONCURRENCY;\n> +\n> +\t/*----------\n> +\t * The user-visible GUC parameter is the number of drives (spindles),\n> +\t * which we need to translate to a number-of-pages-to-prefetch target.\n> +\t * The target value is stashed in *extra and then assigned to the actual\n> +\t * variable by assign_effective_io_concurrency.\n> +\t *\n> +\t * The expected number of prefetch pages needed to keep N drives busy is:\n> +\t *\n> +\t * drives | I/O requests\n> +\t * -------+----------------\n> +\t *\t\t1 | 1\n> +\t *\t\t2 | 2/1 + 2/2 = 3\n> +\t *\t\t3 | 3/1 + 3/2 + 3/3 = 5 1/2\n> +\t *\t\t4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3\n> +\t *\t\tn | n * H(n)\n\nI know you just moved this code. But: I don't buy this formula. Like at\nall. Doesn't queuing and reordering entirely invalidate the logic here?\n\nPerhaps more relevantly: Imo nodeBitmapHeapscan.c is the wrong place for\nthis. bufmgr.c maybe?\n\nYou also didn't touch\n/*\n * How many buffers PrefetchBuffer callers should try to stay ahead of their\n * ReadBuffer calls by. This is maintained by the assign hook for\n * effective_io_concurrency. 
Zero means \"never prefetch\".\n */\nint\t\t\ttarget_prefetch_pages = 0;\nwhich surely doesn't make sense anymore after these changes.\n\nBut do we even need that variable now?\n\n> diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h\n> index dc167f9..57008fc 100644\n> --- a/src/include/utils/guc.h\n> +++ b/src/include/utils/guc.h\n> @@ -26,6 +26,9 @@\n> #define MAX_KILOBYTES\t(INT_MAX / 1024)\n> #endif\n> \n> +/* upper limit for effective_io_concurrency */\n> +#define MAX_IO_CONCURRENCY 1000\n> +\n> /*\n> * Automatic configuration file name for ALTER SYSTEM.\n> * This file will be used to store values of configuration parameters\n> @@ -256,6 +259,8 @@ extern int\ttemp_file_limit;\n> \n> extern int\tnum_temp_buffers;\n> \n> +extern int\teffective_io_concurrency;\n> +\n\ntarget_prefetch_pages is declared in bufmgr.h - that seems like a better\nplace for these.\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 2 Sep 2015 15:53:51 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "Hi\n\nOn 09/02/2015 03:53 PM, Andres Freund wrote:\n>\n> Hi,\n>\n> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:\n>> I didn't know that the thread must exists on -hackers to be able to add\n>> a commitfest entry, so I transfer the thread here.\n>\n> Please, in the future, also update the title of the thread to something\n> fitting.\n>\n>> @@ -539,6 +541,9 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n>> {\n>> \tBitmapHeapScanState *scanstate;\n>> \tRelation\tcurrentRelation;\n>> +#ifdef USE_PREFETCH\n>> +\tint new_io_concurrency;\n>> +#endif\n>>\n>> \t/* check for unsupported flags */\n>> \tAssert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));\n>> @@ -598,6 +603,25 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n>> \t */\n>> \tcurrentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags);\n>>\n>> +#ifdef USE_PREFETCH\n>> +\t/* check if the effective_io_concurrency has been overloaded for the\n>> +\t * tablespace storing the relation and compute the target_prefetch_pages,\n>> +\t * or just get the current target_prefetch_pages\n>> +\t */\n>> +\tnew_io_concurrency = get_tablespace_io_concurrency(\n>> +\t\t\tcurrentRelation->rd_rel->reltablespace);\n>> +\n>> +\n>> +\tscanstate->target_prefetch_pages = target_prefetch_pages;\n>> +\n>> +\tif (new_io_concurrency != effective_io_concurrency)\n>> +\t{\n>> +\t\tdouble prefetch_pages;\n>> +\t if (compute_io_concurrency(new_io_concurrency, &prefetch_pages))\n>> +\t\t\tscanstate->target_prefetch_pages = rint(prefetch_pages);\n>> +\t}\n>> +#endif\n>\n> Maybe it's just me - but imo there should be as few USE_PREFETCH\n> dependant places in the code as possible. It'll just be 0 when not\n> supported, that's fine?\n\nIt's not just you. Dealing with code with plenty of ifdefs is annoying - \nit compiles just fine most of the time, until you compile it with some \nrare configuration. Then it either starts producing strange warnings or \nthe compilation fails entirely.\n\nIt might make a tiny difference on builds without prefetching support \nbecause of code size, but realistically I think it's noise.\n\n> Especially changing the size of externally visible structs depending\n> ona configure detected ifdef seems wrong to me.\n\n+100 to that\n\n>> +\t/*----------\n>> +\t * The user-visible GUC parameter is the number of drives (spindles),\n>> +\t * which we need to translate to a number-of-pages-to-prefetch target.\n>> +\t * The target value is stashed in *extra and then assigned to the actual\n>> +\t * variable by assign_effective_io_concurrency.\n>> +\t *\n>> +\t * The expected number of prefetch pages needed to keep N drives busy is:\n>> +\t *\n>> +\t * drives | I/O requests\n>> +\t * -------+----------------\n>> +\t *\t\t1 | 1\n>> +\t *\t\t2 | 2/1 + 2/2 = 3\n>> +\t *\t\t3 | 3/1 + 3/2 + 3/3 = 5 1/2\n>> +\t *\t\t4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3\n>> +\t *\t\tn | n * H(n)\n>\n> I know you just moved this code. But: I don't buy this formula. Like at\n> all. Doesn't queuing and reordering entirely invalidate the logic here?\n\nWell, even the comment right next after the formula says that:\n\n * Experimental results show that both of these formulas aren't\n * aggressiveenough, but we don't really have any better proposals.\n\nThat's the reason why users generally either use 0 or some rather high \nvalue (16 or 32 are the most common values see). 
The problem is that we \ndon't really care about the number of spindles (and not just because \nSSDs don't have them at all), but about the target queue length per \ndevice. Spinning rust uses TCQ/NCQ to optimize the head movement, SSDs \nare parallel by nature (stacking multiple chips with separate channels).\n\nI doubt we can really improve the formula, except maybe for saying \"we \nwant 16 requests per device\" and multiplying the number by that. We \ndon't really have the necessary introspection to determine better values \n(and it's not really possible, because the devices may be hidden behind \na RAID controller (or a SAN). So we can't really do much.\n\nMaybe the best thing we can do is just completely abandon the \"number of \nspindles\" idea, and just say \"number of I/O requests to prefetch\". \nPossibly with an explanation of how to estimate it (devices * queue length).\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 02 Sep 2015 18:06:54 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On 2015-09-02 18:06:54 +0200, Tomas Vondra wrote:\n> Maybe the best thing we can do is just completely abandon the \"number of\n> spindles\" idea, and just say \"number of I/O requests to prefetch\". Possibly\n> with an explanation of how to estimate it (devices * queue length).\n\nI think that'd be a lot better.\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 2 Sep 2015 18:45:56 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On 2 Sep 2015 14:54, \"Andres Freund\" <[email protected]> wrote:\n>\n>\n> > + /*----------\n> > + * The user-visible GUC parameter is the number of drives (spindles),\n> > + * which we need to translate to a number-of-pages-to-prefetch target.\n> > + * The target value is stashed in *extra and then assigned to the actual\n> > + * variable by assign_effective_io_concurrency.\n> > + *\n> > + * The expected number of prefetch pages needed to keep N drives busy is:\n> > + *\n> > + * drives | I/O requests\n> > + * -------+----------------\n> > + * 1 | 1\n> > + * 2 | 2/1 + 2/2 = 3\n> > + * 3 | 3/1 + 3/2 + 3/3 = 5 1/2\n> > + * 4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3\n> > + * n | n * H(n)\n>\n> I know you just moved this code. But: I don't buy this formula. Like at\n> all. Doesn't queuing and reordering entirely invalidate the logic here?\n\nI can take the blame for this formula.\n\nIt's called the \"Coupon Collector Problem\". If you hit get a random\ncoupon from a set of n possible coupons, how many random coupons would\nyou have to collect before you expect to have at least one of each.\n\nThis computation model assumes we have no information about which\nspindle each block will hit. That's basically true for the case of\nbitmapheapscan for most cases because the idea of bitmapheapscan is to\nbe picking a sparse set of blocks and there's no reason the blocks\nbeing read will have any regularity that causes them all to fall on\nthe same spindles. If in fact you're reading a fairly dense set then\nbitmapheapscan probably is a waste of time and simply reading\nsequentially would be exactly as fast or even faster.\n\nWe talked about this quite a bit back then and there was no dispute\nthat the aim is to provide GUCs that mean something meaningful to the\nDBA who can actually measure them. They know how many spindles they\nhave. They do not know what the optimal prefetch depth is and the only\nway to determine it would be to experiment with Postgres. Worse, I\nthink the above formula works for essentially random I/O but for more\npredictable I/O it might be possible to use a different formula. But\nif we made the GUC something low level like \"how many blocks to\nprefetch\" then we're left in the dark about how to handle that\ndifferent access pattern.\n\nI did speak to a dm developer and he suggested that the kernel could\nhelp out with an API. He suggested something of the form \"how many\nblocks do I have to read before the end of the current device\". I\nwasn't sure exactly what we would do with something like that but it\nwould be better than just guessing how many I/O operations we need to\nissue to keep all the spindles busy.\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 2 Sep 2015 19:49:13 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
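The quoted table is just n * H(n), with H(n) the n-th harmonic number; a few lines of C reproduce it (standalone illustration, not PostgreSQL source):

    #include <stdio.h>

    int main(void)
    {
        for (int n = 1; n <= 4; n++)
        {
            double h = 0.0;                /* harmonic number H(n) */

            for (int i = 1; i <= n; i++)
                h += 1.0 / i;
            printf("%d drives -> %.2f I/O requests\n", n, n * h);
        }
        return 0;
    }

which prints 1.00, 3.00, 5.50 and 8.33, matching the comment in the code.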
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nOn 02/09/2015 18:06, Tomas Vondra wrote:\n> Hi\n> \n> On 09/02/2015 03:53 PM, Andres Freund wrote:\n>> \n>> Hi,\n>> \n>> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:\n>>> I didn't know that the thread must exists on -hackers to be\n>>> able to add a commitfest entry, so I transfer the thread here.\n>> \n>> Please, in the future, also update the title of the thread to\n>> something fitting.\n>> \n\nSorry for that.\n\n>>> @@ -539,6 +541,9 @@ ExecInitBitmapHeapScan(BitmapHeapScan\n>>> *node, EState *estate, int eflags) { BitmapHeapScanState\n>>> *scanstate; Relation currentRelation; +#ifdef USE_PREFETCH +\n>>> int new_io_concurrency; +#endif\n>>> \n>>> /* check for unsupported flags */ Assert(!(eflags &\n>>> (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK))); @@ -598,6 +603,25 @@\n>>> ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate,\n>>> int eflags) */ currentRelation = ExecOpenScanRelation(estate, \n>>> node->scan.scanrelid, eflags);\n>>> \n>>> +#ifdef USE_PREFETCH + /* check if the\n>>> effective_io_concurrency has been overloaded for the + *\n>>> tablespace storing the relation and compute the \n>>> target_prefetch_pages, + * or just get the current\n>>> target_prefetch_pages + */ + new_io_concurrency =\n>>> get_tablespace_io_concurrency( +\n>>> currentRelation->rd_rel->reltablespace); + + +\n>>> scanstate->target_prefetch_pages = target_prefetch_pages; + +\n>>> if (new_io_concurrency != effective_io_concurrency) + { +\n>>> double prefetch_pages; + if\n>>> (compute_io_concurrency(new_io_concurrency, &prefetch_pages)) +\n>>> scanstate->target_prefetch_pages = rint(prefetch_pages); +\n>>> } +#endif\n>> \n>> Maybe it's just me - but imo there should be as few USE_PREFETCH \n>> dependant places in the code as possible. It'll just be 0 when\n>> not supported, that's fine?\n> \n> It's not just you. Dealing with code with plenty of ifdefs is\n> annoying - it compiles just fine most of the time, until you\n> compile it with some rare configuration. Then it either starts\n> producing strange warnings or the compilation fails entirely.\n> \n> It might make a tiny difference on builds without prefetching\n> support because of code size, but realistically I think it's\n> noise.\n> \n>> Especially changing the size of externally visible structs\n>> depending ona configure detected ifdef seems wrong to me.\n> \n> +100 to that\n> \n\nI totally agree. I'll remove the ifdefs.\n\n>> Nitpick: Wrong comment style (/* stands on its own line).\n\nI did run pgindent before submitting patch, but apparently I picked\nthe wrong one. Already fixed in local branch.\n\n>>> + /*---------- + * The user-visible GUC parameter is the\n>>> number of drives (spindles), + * which we need to translate\n>>> to a number-of-pages-to-prefetch target. + * The target\n>>> value is stashed in *extra and then assigned to the actual +\n>>> * variable by assign_effective_io_concurrency. + * + *\n>>> The expected number of prefetch pages needed to keep N drives \n>>> busy is: + * + * drives | I/O requests + *\n>>> -------+---------------- + * 1 | 1 + *\n>>> 2 | 2/1 + 2/2 = 3 + * 3 | 3/1 + 3/2 + 3/3 = 5\n>>> 1/2 + * 4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3 + *\n>>> n | n * H(n)\n>> \n>> I know you just moved this code. But: I don't buy this formula.\n>> Like at all. 
Doesn't queuing and reordering entirely invalidate\n>> the logic here?\n> \n> Well, even the comment right next after the formula says that:\n> \n> * Experimental results show that both of these formulas aren't *\n> aggressiveenough, but we don't really have any better proposals.\n> \n> That's the reason why users generally either use 0 or some rather\n> high value (16 or 32 are the most common values see). The problem\n> is that we don't really care about the number of spindles (and not\n> just because SSDs don't have them at all), but about the target\n> queue length per device. Spinning rust uses TCQ/NCQ to optimize the\n> head movement, SSDs are parallel by nature (stacking multiple chips\n> with separate channels).\n> \n> I doubt we can really improve the formula, except maybe for saying\n> \"we want 16 requests per device\" and multiplying the number by\n> that. We don't really have the necessary introspection to determine\n> better values (and it's not really possible, because the devices\n> may be hidden behind a RAID controller (or a SAN). So we can't\n> really do much.\n> \n> Maybe the best thing we can do is just completely abandon the\n> \"number of spindles\" idea, and just say \"number of I/O requests to\n> prefetch\". Possibly with an explanation of how to estimate it\n> (devices * queue length).\n> \n>> I think that'd be a lot better.\n\n+1 for that too.\n\nIf everone's ok with this change, I can submit a patch for that too.\nShould I split that into two patches, and/or start a new thread?\n\n\n\n- -- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.17 (GNU/Linux)\n\niQEcBAEBAgAGBQJV50opAAoJELGaJ8vfEpOqve4H/0ZJCoFb0wHtArkGye6ietks\n9uahdJy5ixO4J+AZsf2mVxV/DZK7dhK8rWIXt6yS3kfYfPDB79cRFWU5EgjEGAHB\nqcB7wXCa5HibqLySgQct3zhVDj3CN3ucKT3LVp9OC9mrH2mnGtAp7PYkjd/HqBwd\nAzW05pu21Ivi/z2gUBOdxNEEDxCLu8T1OtYq3WY9l7Yc4HfLG3huYLQD2LoRFRFn\nlWwhQifML6uKzz7w3MfZrK4i2ffGGv/r1afHcpZvN3UsB5te1fSzr8KcUeJL7+Zg\nxJTKwppiEMHpxokn5lw4LzYkjYD7W1fvwLnJhzRrs7dXGPl6rZtLmasyCld4FVk=\n=r2jE\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 02 Sep 2015 21:12:41 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 02/09/2015 15:53, Andres Freund wrote:\n> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:\n> \n> You also didn't touch /* * How many buffers PrefetchBuffer callers\n> should try to stay ahead of their * ReadBuffer calls by. This is\n> maintained by the assign hook for * effective_io_concurrency. Zero\n> means \"never prefetch\". */ int\t\t\ttarget_prefetch_pages = 0; which\n> surely doesn't make sense anymore after these changes.\n> \n> But do we even need that variable now?\n> \n\nI thought this was related to the effective_io_concurrency GUC\n(possibly overloaded by the per-tablespace setting), so I didn't make\nany change on that.\n\nI also just found an issue with my previous patch, the global\neffective_io_concurrency GUC was ignored if the tablespace had a\nspecific seq_page_cost or random_page_cost setting, I just fixed that\nin my local branch.\n\n\n>> diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h \n>> index dc167f9..57008fc 100644 --- a/src/include/utils/guc.h +++\n>> b/src/include/utils/guc.h @@ -26,6 +26,9 @@ #define MAX_KILOBYTES\n>> (INT_MAX / 1024) #endif\n>> \n>> +/* upper limit for effective_io_concurrency */ +#define\n>> MAX_IO_CONCURRENCY 1000 + /* * Automatic configuration file name\n>> for ALTER SYSTEM. * This file will be used to store values of\n>> configuration parameters @@ -256,6 +259,8 @@ extern int\n>> temp_file_limit;\n>> \n>> extern int\tnum_temp_buffers;\n>> \n>> +extern int\teffective_io_concurrency; +\n> \n> target_prefetch_pages is declared in bufmgr.h - that seems like a\n> better place for these.\n> \n\nI was rather sceptical about that too. I'll move these in bufmgr.h.\n\n\nRegards.\n\n\n- -- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v2.0.17 (GNU/Linux)\n\niQEcBAEBAgAGBQJV51pcAAoJELGaJ8vfEpOqIV0H/Rj1e/DtJS60X2mReWDyfooD\nG3j0Ptblhy+brYIIxo9Bdp9hVeYFmEqlOJIht9T/3gjfkg5IMz+5bV2waEbAan/m\n9uedR/RmS9sz2YpwGgpd21bfSt2LwB+UC448t3rq8KtuzwmXgSVVEflmDmN1qV3z\nPseUFzS74HeIJWfxLRLGsJ5amN0hJ8bdolIfxdFR0FyFDv0tRv1DzppdMebVJmHs\nuIdJOU49sIDHjcnsUcq67jkP+IfTUon+nnwvk5FYVVKdBX2ka1Q/1VAvTfmWo0oV\nWZSlIjQdMUnlTX91zke0NdmsTnagIeRy1oISn/K1v+YmSqnsPqPAcZ6FFQhUMqI=\n=4ofZ\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 02 Sep 2015 22:21:48 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "Hi,\n\nOn 09/02/2015 08:49 PM, Greg Stark wrote:\n> On 2 Sep 2015 14:54, \"Andres Freund\" <[email protected]> wrote:\n>>\n>>\n>>> + /*----------\n>>> + * The user-visible GUC parameter is the number of drives (spindles),\n>>> + * which we need to translate to a number-of-pages-to-prefetch target.\n>>> + * The target value is stashed in *extra and then assigned to the actual\n>>> + * variable by assign_effective_io_concurrency.\n>>> + *\n>>> + * The expected number of prefetch pages needed to keep N drives busy is:\n>>> + *\n>>> + * drives | I/O requests\n>>> + * -------+----------------\n>>> + * 1 | 1\n>>> + * 2 | 2/1 + 2/2 = 3\n>>> + * 3 | 3/1 + 3/2 + 3/3 = 5 1/2\n>>> + * 4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3\n>>> + * n | n * H(n)\n>>\n>> I know you just moved this code. But: I don't buy this formula. Like at\n>> all. Doesn't queuing and reordering entirely invalidate the logic here?\n>\n> I can take the blame for this formula.\n>\n> It's called the \"Coupon Collector Problem\". If you hit get a random\n> coupon from a set of n possible coupons, how many random coupons would\n> you have to collect before you expect to have at least one of each.\n>\n> This computation model assumes we have no information about which\n> spindle each block will hit. That's basically true for the case of\n> bitmapheapscan for most cases because the idea of bitmapheapscan is to\n> be picking a sparse set of blocks and there's no reason the blocks\n> being read will have any regularity that causes them all to fall on\n> the same spindles. If in fact you're reading a fairly dense set then\n> bitmapheapscan probably is a waste of time and simply reading\n> sequentially would be exactly as fast or even faster.\n\nThere are different meanings of \"busy\". If I get the coupon collector \nproblem right (after quickly skimming the wikipedia article today), it \neffectively makes sure that each \"spindle\" has at least 1 request in the \nqueue. Which sucks in practice, because on spinning rust it makes \nqueuing (TCQ/NCQ) totally inefficient, and on SSDs it only saturates one \nof the multiple channels.\n\nOn spinning drives, it's usually good to keep the iodepth>=4. For \nexample this 10k Seagate drive [1] can do ~450 random IOPS with \niodepth=16, while 10k drive should be able to do ~150 IOPS (with \niodepth=1). The other SAS drives behave quite similarly.\n\n[1] \nhttp://www.storagereview.com/seagate_enterprise_performance_10k_hdd_savvio_10k6_review\n\nOn SSDs the good values usually start at 16, depending on the model (and \ncontroller), and size (large SSDs are basically multiple small ones \nglued together, thus have more channels).\n\nThis is why the numbers from coupon collector are way too low in many \ncases. (OTOH this is done per backend, so if there are multiple backends \ndoing prefetching ...)\n\n>\n> We talked about this quite a bit back then and there was no dispute\n> that the aim is to provide GUCs that mean something meaningful to the\n> DBA who can actually measure them. They know how many spindles they\n> have. They do not know what the optimal prefetch depth is and the only\n> way to determine it would be to experiment with Postgres. Worse, I\n\nAs I explained, spindles have very little to do with it - you need \nmultiple I/O requests per device, to get the benefit. Sure, the DBAs \nshould know how many spindles they have and should be able to determine \noptimal IO depth. 
But we actually say this in the docs:\n\n A good starting point for this setting is the number of separate\n drives comprising a RAID 0 stripe or RAID 1 mirror being used for\n the database. (For RAID 5 the parity drive should not be counted.)\n However, if the database is often busy with multiple queries\n issued in concurrent sessions, lower values may be sufficient to\n keep the disk array busy. A value higher than needed to keep the\n disks busy will only result in extra CPU overhead.\n\nSo we recommend number of drives as a good starting value, and then warn \nagainst increasing the value further.\n\nMoreover, ISTM it's very unclear what value to use even if you know the \nnumber of devices and optimal iodepth. Setting (devices * iodepth) \ndoesn't really make much sense, because that effectively computes\n\n (devices*iodepth) * H(devices*iodepth)\n\nwhich says \"there are (devices*iodepth) devices, make sure there's at \nleast one request for each of them\", right? I guess we actually want\n\n (devices*iodepth) * H(devices)\n\nSadly that means we'd have to introduce another GUC, because we need \ntrack both ndevices and iodepth.\n\nThere probably is a value X so that\n\n X * H(X) ~= (devices*iodepth) * H(devices)\n\nbut it's far from clear that's what we need (it surely is not in the docs).\n\n\n> think the above formula works for essentially random I/O but for\n> more predictable I/O it might be possible to use a different formula.\n> But if we made the GUC something low level like \"how many blocks to\n> prefetch\" then we're left in the dark about how to handle that\n> different access pattern.\n\nMaybe. We only use this in Bitmap Index Scan at this point, and I don't \nsee any proposals to introduce this to other places. So no opinion.\n\n>\n> I did speak to a dm developer and he suggested that the kernel could\n> help out with an API. He suggested something of the form \"how many\n> blocks do I have to read before the end of the current device\". I\n> wasn't sure exactly what we would do with something like that but it\n> would be better than just guessing how many I/O operations we need\n> to issue to keep all the spindles busy.\n\nI don't really see how that would help us?\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 02 Sep 2015 23:25:36 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
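To make the two candidate formulas above concrete: for 4 devices with a target queue depth of 16, the suggested (devices * iodepth) * H(devices) gives 64 * (1 + 1/2 + 1/3 + 1/4) = 64 * 2.083 = ~133 pages of prefetch, while naively setting effective_io_concurrency = 64 under the current n * H(n) rule gives 64 * H(64) = ~304 pages - more than double the intended depth.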
{
"msg_contents": "On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n> \n> As I explained, spindles have very little to do with it - you need\n> multiple I/O requests per device, to get the benefit. Sure, the DBAs\n> should know how many spindles they have and should be able to determine\n> optimal IO depth. But we actually say this in the docs:\n\nMy experience with performance tuning is that values above 3 have no\nreal effect on how queries are executed.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 2 Sep 2015 14:31:35 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On Wed, Sep 2, 2015 at 4:31 PM, Josh Berkus <[email protected]> wrote:\n> On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n>>\n>> As I explained, spindles have very little to do with it - you need\n>> multiple I/O requests per device, to get the benefit. Sure, the DBAs\n>> should know how many spindles they have and should be able to determine\n>> optimal IO depth. But we actually say this in the docs:\n>\n> My experience with performance tuning is that values above 3 have no\n> real effect on how queries are executed.\n\nThat's the exact opposite of my findings on intel S3500 (see:\nhttp://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com).\n\nmerlin\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Wed, 2 Sep 2015 16:58:33 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On Wed, Sep 2, 2015 at 2:31 PM, Josh Berkus <[email protected]> wrote:\n\n> On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n> >\n> > As I explained, spindles have very little to do with it - you need\n> > multiple I/O requests per device, to get the benefit. Sure, the DBAs\n> > should know how many spindles they have and should be able to determine\n> > optimal IO depth. But we actually say this in the docs:\n>\n> My experience with performance tuning is that values above 3 have no\n> real effect on how queries are executed.\n>\n\nPerhaps one reason is that the planner assumes it will get no benefit from\nthis setting, meaning it is somewhat unlikely to choose the types of plans\nwhich would actually show a benefit from higher values.\n\nCheers,\n\nJeff\n\nOn Wed, Sep 2, 2015 at 2:31 PM, Josh Berkus <[email protected]> wrote:On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n>\n> As I explained, spindles have very little to do with it - you need\n> multiple I/O requests per device, to get the benefit. Sure, the DBAs\n> should know how many spindles they have and should be able to determine\n> optimal IO depth. But we actually say this in the docs:\n\nMy experience with performance tuning is that values above 3 have no\nreal effect on how queries are executed.Perhaps one reason is that the planner assumes it will get no benefit from this setting, meaning it is somewhat unlikely to choose the types of plans which would actually show a benefit from higher values.Cheers,Jeff",
"msg_date": "Wed, 2 Sep 2015 15:10:08 -0700",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On 2015-09-02 14:31:35 -0700, Josh Berkus wrote:\n> On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n> > \n> > As I explained, spindles have very little to do with it - you need\n> > multiple I/O requests per device, to get the benefit. Sure, the DBAs\n> > should know how many spindles they have and should be able to determine\n> > optimal IO depth. But we actually say this in the docs:\n> \n> My experience with performance tuning is that values above 3 have no\n> real effect on how queries are executed.\n\nI saw pretty much the opposite - the benefits seldomly were significant\nbelow 30 or so. Even on single disks. Which actually isn't that\nsurprising - to be actually beneficial (that is, turn an IO into a CPU\nbound workload) the prefetched buffer needs to actually have been read\nin by the time its needed. In many queries processing a single heap page\ntakes far shorter than prefetching the data from storage, even if it's\non good SSDs.\n\nTherefore what you actually need is a queue of prefetches for the next\nXX buffers so that between starting a prefetch and actually needing the\nbuffer ienough time has passed that the data is completely read in. And\nthe point is that that's the case even for a single rotating disk!\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 3 Sep 2015 00:23:29 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On 2015-09-02 19:49:13 +0100, Greg Stark wrote:\n> I can take the blame for this formula.\n> \n> It's called the \"Coupon Collector Problem\". If you hit get a random\n> coupon from a set of n possible coupons, how many random coupons would\n> you have to collect before you expect to have at least one of each.\n\nMy point is that that's just the entirely wrong way to model\nprefetching. Prefetching can be massively beneficial even if you only\nhave a single platter! Even if there were no queues on the hardware or\nOS level! Concurrency isn't the right way to look at prefetching.\n\nYou need to prefetch so far ahead that you'll never block on reading\nheap pages - and that's only the case if processing the next N heap\nblocks takes longer than the prefetch of the N+1 th page. That doesn't\nmean there continously have to be N+1 prefetches in progress - in fact\nthat actually often will only be the case for the first few, after that\nyou hopefully are bottlnecked on CPU.\n\nIf you additionally take into account hardware realities where you have\nmultiple platters, multiple spindles, command queueing etc, that's even\nmore true. A single rotation of a single platter with command queuing\ncan often read several non consecutive blocks if they're on a similar\n\n- Andres\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 3 Sep 2015 00:38:12 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "\n\nOn 09/03/2015 12:23 AM, Andres Freund wrote:\n> On 2015-09-02 14:31:35 -0700, Josh Berkus wrote:\n>> On 09/02/2015 02:25 PM, Tomas Vondra wrote:\n>>>\n>>> As I explained, spindles have very little to do with it - you need\n>>> multiple I/O requests per device, to get the benefit. Sure, the DBAs\n>>> should know how many spindles they have and should be able to determine\n>>> optimal IO depth. But we actually say this in the docs:\n>>\n>> My experience with performance tuning is that values above 3 have no\n>> real effect on how queries are executed.\n>\n> I saw pretty much the opposite - the benefits seldomly were\n> significant below 30 or so. Even on single disks.\n\nThat's a bit surprising, especially considering that e_i_c=30 means ~100 \npages to prefetch if I'm doing the math right.\n\nAFAIK queue depth for SATA drives generally is 32 (so prefetching 100 \npages should not make a difference), 256 for SAS drives and ~1000 for \nmost current RAID controllers.\n\nI'm not entirely surprised that values beyond 30 make a difference, but \nthat you seldomly saw significant improvements below this value.\n\nNo doubt there are workloads like that, but I'd expect them to be quite \nrare and not prevalent as you're claiming.\n\n> Which actually isn't that surprising - to be actually beneficial\n> (that is, turn an IO into a CPU bound workload) the prefetched buffer\n> needs to actually have been read in by the time its needed. In many\n> queries processing a single heap page takes far shorter than\n> prefetching the data from storage, even if it's on good SSDs.\n >\n> Therefore what you actually need is a queue of prefetches for the\n> next XX buffers so that between starting a prefetch and actually\n> needing the buffer ienough time has passed that the data is\n> completely read in. And the point is that that's the case even for a\n> single rotating disk!\n\nSo instead of \"How many blocks I need to prefetch to saturate the \ndevices?\" you're asking \"How many blocks I need to prefetch to never \nactually wait for the I/O?\"\n\nI do like this view, but I'm not really sure how could we determine the \nright value? It seems to be very dependent on hardware and workload.\n\nFor spinning drives the speedup comes from optimizing random seeks to a \nmore optimal path (thanks to NCQ/TCQ), and on SSDs thanks to using the \nparallel channels (and possibly faster access to the same block).\n\nI guess the best thing we could do at this level is simply keep the \non-device queues fully saturated, no?\n\nregards\n\n--\nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 03 Sep 2015 01:59:13 +0200",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On 2015-09-03 01:59:13 +0200, Tomas Vondra wrote:\n> That's a bit surprising, especially considering that e_i_c=30 means ~100\n> pages to prefetch if I'm doing the math right.\n> \n> AFAIK queue depth for SATA drives generally is 32 (so prefetching 100 pages\n> should not make a difference), 256 for SAS drives and ~1000 for most current\n> RAID controllers.\n\nI think the point is that after an initial buildup - we'll again only\nprefetch pages in smaller increments because we already prefetched most\npages. The actual number of concurrent reads will then be determined by\nhow fast pages are processed. A prefetch_target of a 100 does *not*\nmean 100 requests are continously in flight. And even if it would, the\nOS won't issue many more requests than the queue depth, so they'll just\nsit in the OS queue.\n\n> So instead of \"How many blocks I need to prefetch to saturate the devices?\"\n> you're asking \"How many blocks I need to prefetch to never actually wait for\n> the I/O?\"\n\nYes, pretty much.\n\n> I do like this view, but I'm not really sure how could we determine the\n> right value? It seems to be very dependent on hardware and workload.\n\nRight.\n\n> For spinning drives the speedup comes from optimizing random seeks to a more\n> optimal path (thanks to NCQ/TCQ), and on SSDs thanks to using the parallel\n> channels (and possibly faster access to the same block).\n\n+ OS reordering & coalescing.\n\nDon't forget that the OS processes the OS IO queues while the userland\nprocess isn't scheduled - in a concurrent workload with more processes\nthan hardware threads that means that pending OS requests are being\nissued while the query isn't actively being processed.\n\n> I guess the best thing we could do at this level is simply keep the\n> on-device queues fully saturated, no?\n\nWell, being too aggressive can hurt throughput and latency of concurrent\nprocesses, without being beneficial.\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 3 Sep 2015 02:24:48 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "That doesn't match any of the empirical tests I did at the time. I posted\ngraphs of the throughput for varying numbers of spindles with varying\namount of prefetch. In every case more prefetching increases throuput up to\nN times the single platter throuput where N was the number of spindles.\n\nThere can be a small speedup from overlapping CPU with I/O but that's a\nreally small effect. At most that can be a single request and it would be a\nvery rarely would the amount of CPU time be even a moderate fraction of the\nI/O latency. The only case where they're comparable is when your reading\nsequentially and then hopefully you wouldn't be using postgres prefetching\nat all which is only really intended to help random I/O.\nOn 2 Sep 2015 23:38, \"Andres Freund\" <[email protected]> wrote:\n\n> On 2015-09-02 19:49:13 +0100, Greg Stark wrote:\n> > I can take the blame for this formula.\n> >\n> > It's called the \"Coupon Collector Problem\". If you hit get a random\n> > coupon from a set of n possible coupons, how many random coupons would\n> > you have to collect before you expect to have at least one of each.\n>\n> My point is that that's just the entirely wrong way to model\n> prefetching. Prefetching can be massively beneficial even if you only\n> have a single platter! Even if there were no queues on the hardware or\n> OS level! Concurrency isn't the right way to look at prefetching.\n>\n> You need to prefetch so far ahead that you'll never block on reading\n> heap pages - and that's only the case if processing the next N heap\n> blocks takes longer than the prefetch of the N+1 th page. That doesn't\n> mean there continously have to be N+1 prefetches in progress - in fact\n> that actually often will only be the case for the first few, after that\n> you hopefully are bottlnecked on CPU.\n>\n> If you additionally take into account hardware realities where you have\n> multiple platters, multiple spindles, command queueing etc, that's even\n> more true. A single rotation of a single platter with command queuing\n> can often read several non consecutive blocks if they're on a similar\n>\n> - Andres\n>\n\nThat doesn't match any of the empirical tests I did at the time. I posted graphs of the throughput for varying numbers of spindles with varying amount of prefetch. In every case more prefetching increases throuput up to N times the single platter throuput where N was the number of spindles.\nThere can be a small speedup from overlapping CPU with I/O but that's a really small effect. At most that can be a single request and it would be a very rarely would the amount of CPU time be even a moderate fraction of the I/O latency. The only case where they're comparable is when your reading sequentially and then hopefully you wouldn't be using postgres prefetching at all which is only really intended to help random I/O.\n\nOn 2 Sep 2015 23:38, \"Andres Freund\" <[email protected]> wrote:On 2015-09-02 19:49:13 +0100, Greg Stark wrote:\n> I can take the blame for this formula.\n>\n> It's called the \"Coupon Collector Problem\". If you hit get a random\n> coupon from a set of n possible coupons, how many random coupons would\n> you have to collect before you expect to have at least one of each.\n\nMy point is that that's just the entirely wrong way to model\nprefetching. Prefetching can be massively beneficial even if you only\nhave a single platter! Even if there were no queues on the hardware or\nOS level! 
Concurrency isn't the right way to look at prefetching.\n\nYou need to prefetch so far ahead that you'll never block on reading\nheap pages - and that's only the case if processing the next N heap\nblocks takes longer than the prefetch of the N+1 th page. That doesn't\nmean there continously have to be N+1 prefetches in progress - in fact\nthat actually often will only be the case for the first few, after that\nyou hopefully are bottlnecked on CPU.\n\nIf you additionally take into account hardware realities where you have\nmultiple platters, multiple spindles, command queueing etc, that's even\nmore true. A single rotation of a single platter with command queuing\ncan often read several non consecutive blocks if they're on a similar\n\n- Andres",
"msg_date": "Thu, 3 Sep 2015 02:44:04 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
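The "Coupon Collector" expectation Greg refers to is n * H(n), where H(n) is the n-th harmonic number; the resulting drives-to-I/O-requests table is quoted verbatim in the patch review below. A short illustrative query, not from the original thread, reproduces its first four rows:

    -- expected I/O requests needed to keep n drives busy: n * H(n)
    select n, round(n * h.h, 2) as io_requests
      from generate_series(1, 4) as n,
           lateral (select sum(1.0 / i) as h from generate_series(1, n) as i) as h;
    -- rows: (1, 1.00), (2, 3.00), (3, 5.50), (4, 8.33)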
{
"msg_contents": "On Wed, Sep 2, 2015 at 5:38 PM, Andres Freund <[email protected]> wrote:\n> If you additionally take into account hardware realities where you have\n> multiple platters, multiple spindles, command queueing etc, that's even\n> more true. A single rotation of a single platter with command queuing\n> can often read several non consecutive blocks if they're on a similar\n\nYeah. And in the case of solid state disks, it's really a much more\nsimple case of, \"synchronously reading from the disk block by block\ndoes not fully utilize the drive because of various introduced\nlatencies\". I find this talk of platters and spindles to be somewhat\nbaroque; for a 200$ part I have to work pretty hard to max out the\ndrive when reading and I'm still not completely sure if it's the drive\nitself, postgres, cpu, or sata interface bottlenecking me. This will\nrequire a rethink of e_i_o configuration; in the old days there were\nphysical limitations of the drives that were in the way regardless of\nthe software stack but we are in a new era, I think. I'm convinced\nprefetching works and we're going to want to aggressively prefetch\nanything and everything possible. SSD controllers (at least the intel\nones) are very smart.\n\nmerlin\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 3 Sep 2015 08:13:52 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On Thu, Sep 3, 2015 at 2:13 PM, Merlin Moncure <[email protected]> wrote:\n> I find this talk of platters and spindles to be somewhat\n> baroque; for a 200$ part I have to work pretty hard to max out the\n> drive when reading and I'm still not completely sure if it's the drive\n> itself, postgres, cpu, or sata interface bottlenecking me. This will\n> require a rethink of e_i_o configuration; in the old days there were\n> physical limitations of the drives that were in the way regardless of\n> the software stack but we are in a new era, I think. I'm convinced\n> prefetching works and we're going to want to aggressively prefetch\n> anything and everything possible. SSD controllers (at least the intel\n> ones) are very smart.\n\n\nWouldn't SSDs need much *less* aggressive prefetching? There's still\nlatency and there are multiple I/O channels so they will still need\nsome. But spinning media gives latencies measured in milliseconds. You\ncan process a lot of tuples in milliseconds. If you have a hundred\nspindles you want them all busy doing seeks because in the 5ms it\ntakes them to do that you can proess all the results on a single cpu\nand the rest of time is spend waiting.\n\nWhen your media has latency on the order of microseconds then you only\nneed to have a small handful of I/O requests in flight to keep your\nprocessor busy.\n\n-- \ngreg\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 4 Sep 2015 17:21:38 +0100",
"msg_from": "Greg Stark <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "On Fri, Sep 4, 2015 at 05:21:38PM +0100, Greg Stark wrote:\n> Wouldn't SSDs need much *less* aggressive prefetching? There's still\n> latency and there are multiple I/O channels so they will still need\n> some. But spinning media gives latencies measured in milliseconds. You\n> can process a lot of tuples in milliseconds. If you have a hundred\n> spindles you want them all busy doing seeks because in the 5ms it\n> takes them to do that you can proess all the results on a single cpu\n> and the rest of time is spend waiting.\n> \n> When your media has latency on the order of microseconds then you only\n> need to have a small handful of I/O requests in flight to keep your\n> processor busy.\n\nWell, there is still the processing time of getting that data ready. \nAll I know is that people have reported that prefetching is even more\nuseful for SSDs.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + Everyone has their own god. +\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 4 Sep 2015 12:23:38 -0400",
"msg_from": "Bruce Momjian <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency\n setting"
},
{
"msg_contents": "On 2015-09-04 17:21:38 +0100, Greg Stark wrote:\n> Wouldn't SSDs need much *less* aggressive prefetching? There's still\n> latency and there are multiple I/O channels so they will still need\n> some. But spinning media gives latencies measured in milliseconds. You\n> can process a lot of tuples in milliseconds. If you have a hundred\n> spindles you want them all busy doing seeks because in the 5ms it\n> takes them to do that you can proess all the results on a single cpu\n> and the rest of time is spend waiting.\n> \n> When your media has latency on the order of microseconds then you only\n> need to have a small handful of I/O requests in flight to keep your\n> processor busy.\n\nMost(?) SSDs have latencies between 0.1 and 0.4 ms for individual random\nreads, often significantly more when a lot of IO is going on. In that\ntime you can still process a good number of pages on the CPU level.\n\nSure, there's few workloads where you need dozens of SSDs to parallelize\nrandom reads like in the rotating media days. But that doesn't mean you\ndon't need prefetching?\n\nGreetings,\n\nAndres Freund\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Fri, 4 Sep 2015 18:37:25 +0200",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
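Andres's figures suggest a back-of-envelope way to size the prefetch window for an SSD under the "never wait for I/O" view from earlier in the thread. The CPU cost per page below is a made-up illustrative number, not something measured here: with a 200 microsecond random read and roughly 10 microseconds of CPU work per heap page, about 20 reads must already be in flight for the scan to stop blocking:

    -- illustrative only: reads in flight = read latency / CPU time per page
    select ceil(200.0 / 10) as reads_in_flight;  -- 20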
{
"msg_contents": "On Fri, Sep 4, 2015 at 11:21 AM, Greg Stark <[email protected]> wrote:\n> On Thu, Sep 3, 2015 at 2:13 PM, Merlin Moncure <[email protected]> wrote:\n>> I find this talk of platters and spindles to be somewhat\n>> baroque; for a 200$ part I have to work pretty hard to max out the\n>> drive when reading and I'm still not completely sure if it's the drive\n>> itself, postgres, cpu, or sata interface bottlenecking me. This will\n>> require a rethink of e_i_o configuration; in the old days there were\n>> physical limitations of the drives that were in the way regardless of\n>> the software stack but we are in a new era, I think. I'm convinced\n>> prefetching works and we're going to want to aggressively prefetch\n>> anything and everything possible. SSD controllers (at least the intel\n>> ones) are very smart.\n>\n>\n> Wouldn't SSDs need much *less* aggressive prefetching? There's still\n> latency and there are multiple I/O channels so they will still need\n> some. But spinning media gives latencies measured in milliseconds. You\n> can process a lot of tuples in milliseconds. If you have a hundred\n> spindles you want them all busy doing seeks because in the 5ms it\n> takes them to do that you can proess all the results on a single cpu\n> and the rest of time is spend waiting.\n>\n> When your media has latency on the order of microseconds then you only\n> need to have a small handful of I/O requests in flight to keep your\n> processor busy.\n\nI'm not sure I agree with that. 100 micosecond latency is still a\npretty big deal when memory latency is measured in nanoseconds. This\nis why we have pcie storage devices. (if anyone has a fast pice flash\ndevice I'd be interested in seing some e_i_o tests...I bet they'd\nstill show some impact but it'd be less).\n\nFor spinning media, any random i/o workload on large datasets is a\nfreight train to 100% iowait. As each device has to process each data\nset synchronously minus what small gains can be eeked out by\nreordering and combining writes. This drives data transfer rates\nunder a megabyte per second.\n\nFlash devices OTOH are essentially giant raid 0 devices of electronic\nmedia underneath the flash controller. Each device can functionally\ndo many read operations in parallel so it makes perfect sense that\nthey perform better when given multiple overlapping requests; unlike\nspinning media they can actually make use of that information and\nthose assumptions are well supported by measurement.\n\nFlash is getting better all the time and storage technologies that\nremove its one weakness, random writes, appear to be right around the\ncorner. The age of storage being a bottleneck for database systems\nwithout massive high cost engineering and optimization has come to an\nend. This is good for software developers and especially good for\ndatabases because most of the perceived mythology of databases being\n'slow' are in fact storage issues. With that problem gone focus will\nreturn to clean data processing models which is where database systems\nexcel with their many decades of refinement.\n\nmerlin\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Sat, 5 Sep 2015 13:12:36 -0500",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "Hi,\n\nPlease find attached a v2 of the patch. See below for changes.\n\n\nOn 02/09/2015 15:53, Andres Freund wrote:\n> \n> Hi,\n> \n> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:\n>> I didn't know that the thread must exists on -hackers to be able to add\n>> a commitfest entry, so I transfer the thread here.\n> \n> Please, in the future, also update the title of the thread to something\n> fitting.\n> \n>> @@ -539,6 +541,9 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n>> {\n>> \tBitmapHeapScanState *scanstate;\n>> \tRelation\tcurrentRelation;\n>> +#ifdef USE_PREFETCH\n>> +\tint new_io_concurrency;\n>> +#endif\n>> \n>> \t/* check for unsupported flags */\n>> \tAssert(!(eflags & (EXEC_FLAG_BACKWARD | EXEC_FLAG_MARK)));\n>> @@ -598,6 +603,25 @@ ExecInitBitmapHeapScan(BitmapHeapScan *node, EState *estate, int eflags)\n>> \t */\n>> \tcurrentRelation = ExecOpenScanRelation(estate, node->scan.scanrelid, eflags);\n>> \n>> +#ifdef USE_PREFETCH\n>> +\t/* check if the effective_io_concurrency has been overloaded for the\n>> +\t * tablespace storing the relation and compute the target_prefetch_pages,\n>> +\t * or just get the current target_prefetch_pages\n>> +\t */\n>> +\tnew_io_concurrency = get_tablespace_io_concurrency(\n>> +\t\t\tcurrentRelation->rd_rel->reltablespace);\n>> +\n>> +\n>> +\tscanstate->target_prefetch_pages = target_prefetch_pages;\n>> +\n>> +\tif (new_io_concurrency != effective_io_concurrency)\n>> +\t{\n>> +\t\tdouble prefetch_pages;\n>> +\t if (compute_io_concurrency(new_io_concurrency, &prefetch_pages))\n>> +\t\t\tscanstate->target_prefetch_pages = rint(prefetch_pages);\n>> +\t}\n>> +#endif\n> \n> Maybe it's just me - but imo there should be as few USE_PREFETCH\n> dependant places in the code as possible. It'll just be 0 when not\n> supported, that's fine? Especially changing the size of externally\n> visible structs depending on a configure detected ifdef seems wrong to\n> me.\n> \n\nI removed these ifdefs, and the more problematic one in the struct.\n\n>> +bool\n>> +compute_io_concurrency(int io_concurrency, double *target_prefetch_pages)\n>> +{\n>> +\tdouble\t\tnew_prefetch_pages = 0.0;\n>> +\tint\t\t\ti;\n>> +\n>> +\t/* make sure the io_concurrency value is correct, it may have been forced\n>> +\t * with a pg_tablespace UPDATE\n>> +\t */\n> \n> Nitpick: Wrong comment style (/* stands on its own line).\n> \n>> +\tif (io_concurrency > MAX_IO_CONCURRENCY)\n>> +\t\tio_concurrency = MAX_IO_CONCURRENCY;\n>> +\n>> +\t/*----------\n>> +\t * The user-visible GUC parameter is the number of drives (spindles),\n>> +\t * which we need to translate to a number-of-pages-to-prefetch target.\n>> +\t * The target value is stashed in *extra and then assigned to the actual\n>> +\t * variable by assign_effective_io_concurrency.\n>> +\t *\n>> +\t * The expected number of prefetch pages needed to keep N drives busy is:\n>> +\t *\n>> +\t * drives | I/O requests\n>> +\t * -------+----------------\n>> +\t *\t\t1 | 1\n>> +\t *\t\t2 | 2/1 + 2/2 = 3\n>> +\t *\t\t3 | 3/1 + 3/2 + 3/3 = 5 1/2\n>> +\t *\t\t4 | 4/1 + 4/2 + 4/3 + 4/4 = 8 1/3\n>> +\t *\t\tn | n * H(n)\n> \n> I know you just moved this code. But: I don't buy this formula. Like at\n> all. Doesn't queuing and reordering entirely invalidate the logic here?\n> \n\nChanging the formula, or changing the GUC to a number of pages to\nprefetch is still discussed, so no change here.\n\n> Perhaps more relevantly: Imo nodeBitmapHeapscan.c is the wrong place for\n> this. 
bufmgr.c maybe?\n> \n\nMoved to bufmgr.c\n\n\n> You also didn't touch\n> /*\n> * How many buffers PrefetchBuffer callers should try to stay ahead of their\n> * ReadBuffer calls by. This is maintained by the assign hook for\n> * effective_io_concurrency. Zero means \"never prefetch\".\n> */\n> int\t\t\ttarget_prefetch_pages = 0;\n> which surely doesn't make sense anymore after these changes.\n> \n> But do we even need that variable now?\n\nI slighty updated the comment. If the table doesn't belong to a\ntablespace with an overloaded effective_io_concurrency, keeping this\npre-computed target_prefetch_pages can save a few cycles on each\nexecution, so I think it's better to keep it.\n\n> \n>> diff --git a/src/include/utils/guc.h b/src/include/utils/guc.h\n>> index dc167f9..57008fc 100644\n>> --- a/src/include/utils/guc.h\n>> +++ b/src/include/utils/guc.h\n>> @@ -26,6 +26,9 @@\n>> #define MAX_KILOBYTES\t(INT_MAX / 1024)\n>> #endif\n>> \n>> +/* upper limit for effective_io_concurrency */\n>> +#define MAX_IO_CONCURRENCY 1000\n>> +\n>> /*\n>> * Automatic configuration file name for ALTER SYSTEM.\n>> * This file will be used to store values of configuration parameters\n>> @@ -256,6 +259,8 @@ extern int\ttemp_file_limit;\n>> \n>> extern int\tnum_temp_buffers;\n>> \n>> +extern int\teffective_io_concurrency;\n>> +\n> \n> target_prefetch_pages is declared in bufmgr.h - that seems like a better\n> place for these.\n> \n\nMoved to bufmgr.h\n\n\nAs said in a previous mail, I also fixed a problem when having settings\nother than effective_io_concurrency for a tablespace lead to ignore the\nregular effective_io_concurrency.\n\nI also added the forgotten lock level (AccessExclusiveLock) for this\ntablespace setting, which was leading to a failed assert during initdb.\n\nRegards.\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers",
"msg_date": "Sun, 06 Sep 2015 15:49:54 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
{
"msg_contents": "Julien Rouhaud wrote:\n> Hi,\n> \n> Please find attached a v2 of the patch. See below for changes.\n\nPushed after smallish tweaks. Please test to verify I didn't break\nanything.\n\n(It's a pity that we can't add a regression test with a value other than\n0.)\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Tue, 8 Sep 2015 13:00:38 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
},
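For reference, the committed feature is driven through the ordinary tablespace-options syntax; a minimal usage sketch, with a made-up tablespace name and value:

    -- raise the prefetch aggressiveness for one SSD-backed tablespace
    ALTER TABLESPACE ssd_space SET (effective_io_concurrency = 100);

    -- fall back to the server-wide GUC
    ALTER TABLESPACE ssd_space RESET (effective_io_concurrency);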
{
"msg_contents": "On 08/09/2015 18:00, Alvaro Herrera wrote:\n> Julien Rouhaud wrote:\n>> Hi,\n>>\n>> Please find attached a v2 of the patch. See below for changes.\n> \n> Pushed after smallish tweaks. Please test to verify I didn't break\n> anything.\n> \n\nI just tried with all the cases I could think of, everything works fine.\nThanks!\n\n> (It's a pity that we can't add a regression test with a value other than\n> 0.)\n> \n\n\n-- \nJulien Rouhaud\nhttp://dalibo.com - http://dalibo.org\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Tue, 08 Sep 2015 20:26:17 +0200",
"msg_from": "Julien Rouhaud <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Allow a per-tablespace effective_io_concurrency setting"
}
] |
[
{
"msg_contents": "log_temp_files (integer)\nControls logging of temporary file names and sizes. Temporary files can be\ncreated for sorts, hashes, and temporary query results. A log entry is made\nfor each temporary file when it is deleted. A value of zero logs all\ntemporary file information\n\nso I've set this to;\n\nlog_temp_files = 0 # log temporary files equal or\nlarger\n\nReloaded the config and still have not seen a creation or deletion of a log\nfile. Is this still valid in 9.3 and do I need to change anything else?\n\nI've got duration queries spitting out;\n\n2014-11-05 12:11:32 PST rvtempdb postgres [local] 31338 2014-11-05\n12:11:32.257 PSTLOG: duration: 1609.707 ms statement: COPY adgroups\n(adgroup_id, name, status, campaign_id, create_date, modify_date) FROM\nstdin;\n\nSo logging is working.\n\nI'm set to info ;\n\nlog_min_messages = info\n\nSo what would be the cause of not seeing anything ,and how can one turn\nwork_mem without seeing these entries?\n\nThanks!\nTory\n\nlog_temp_files (integer)Controls logging of temporary file names and sizes. Temporary files can be created for sorts, hashes, and temporary query results. A log entry is made for each temporary file when it is deleted. A value of zero logs all temporary file informationso I've set this to;log_temp_files = 0 # log temporary files equal or largerReloaded the config and still have not seen a creation or deletion of a log file. Is this still valid in 9.3 and do I need to change anything else?I've got duration queries spitting out;2014-11-05 12:11:32 PST rvtempdb postgres [local] 31338 2014-11-05 12:11:32.257 PSTLOG: duration: 1609.707 ms statement: COPY adgroups (adgroup_id, name, status, campaign_id, create_date, modify_date) FROM stdin;So logging is working.I'm set to info ;log_min_messages = info So what would be the cause of not seeing anything ,and how can one turn work_mem without seeing these entries?Thanks!Tory",
"msg_date": "Wed, 5 Nov 2014 13:32:37 -0800",
"msg_from": "Tory M Blue <[email protected]>",
"msg_from_op": true,
"msg_subject": "log_temp_files (integer), tuning work_mem"
},
{
"msg_contents": "Hi,\n\nLe 5 nov. 2014 22:34, \"Tory M Blue\" <[email protected]> a écrit :\n>\n> log_temp_files (integer)\n> Controls logging of temporary file names and sizes. Temporary files can\nbe created for sorts, hashes, and temporary query results. A log entry is\nmade for each temporary file when it is deleted. A value of zero logs all\ntemporary file information\n>\n> so I've set this to;\n>\n> log_temp_files = 0 # log temporary files equal or\nlarger\n>\n> Reloaded the config and still have not seen a creation or deletion of a\nlog file. Is this still valid in 9.3 and do I need to change anything else?\n>\n\nStill works (though only shows creation, not deletion).\n\n> I've got duration queries spitting out;\n>\n> 2014-11-05 12:11:32 PST rvtempdb postgres [local] 31338 2014-11-05\n12:11:32.257 PSTLOG: duration: 1609.707 ms statement: COPY adgroups\n(adgroup_id, name, status, campaign_id, create_date, modify_date) FROM\nstdin;\n>\n> So logging is working.\n>\n> I'm set to info ;\n>\n> log_min_messages = info\n>\n> So what would be the cause of not seeing anything ,and how can one turn\nwork_mem without seeing these entries?\n>\n\nMy best guess would be your queries are happy enough with your current\nwork_mem setting.\n\nWith the default value, an easy enough example that shows such a message is:\n\ncreate table t1(id integer);\ninsert into t1 select generate_series (1,1000000);\nselect * from t1 order by id;\n\nWith the last query, you should get a temporary file log message in your\nlog file.\n\nHi,\nLe 5 nov. 2014 22:34, \"Tory M Blue\" <[email protected]> a écrit :\n>\n> log_temp_files (integer)\n> Controls logging of temporary file names and sizes. Temporary files can be created for sorts, hashes, and temporary query results. A log entry is made for each temporary file when it is deleted. A value of zero logs all temporary file information\n>\n> so I've set this to;\n>\n> log_temp_files = 0 # log temporary files equal or larger\n>\n> Reloaded the config and still have not seen a creation or deletion of a log file. Is this still valid in 9.3 and do I need to change anything else?\n>\nStill works (though only shows creation, not deletion).\n> I've got duration queries spitting out;\n>\n> 2014-11-05 12:11:32 PST rvtempdb postgres [local] 31338 2014-11-05 12:11:32.257 PSTLOG: duration: 1609.707 ms statement: COPY adgroups (adgroup_id, name, status, campaign_id, create_date, modify_date) FROM stdin;\n>\n> So logging is working.\n>\n> I'm set to info ;\n>\n> log_min_messages = info \n>\n> So what would be the cause of not seeing anything ,and how can one turn work_mem without seeing these entries?\n>\nMy best guess would be your queries are happy enough with your current work_mem setting.\nWith the default value, an easy enough example that shows such a message is:\ncreate table t1(id integer);\ninsert into t1 select generate_series (1,1000000);\nselect * from t1 order by id;\nWith the last query, you should get a temporary file log message in your log file.",
"msg_date": "Thu, 6 Nov 2014 07:31:56 +0100",
"msg_from": "Guillaume Lelarge <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: log_temp_files (integer), tuning work_mem"
}
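For reference, the entry that log_temp_files = 0 produces looks like the following; the path, size, and statement here are illustrative rather than taken from the thread:

    LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp28641.0", size 14032896
    STATEMENT:  select * from t1 order by id;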
] |
[
{
"msg_contents": "Hello,\n\nI am having some hard time understanding how postgresql handles null \nvalues. As much I understand null values are stored in b-tree as simple \nvalues (put as last or first depending on index). But it seems that \nthere is something really specific about them as postgresql deliberately \nignores obvious (I think...) optimizations concerning index order after \nusing one of them in a query. As a simple example look at table below:\n\n\tarturas=# drop table if exists test;\n\tDROP TABLE\n\tarturas=# create table test (\n\tarturas(# a int not null,\n\tarturas(# b int,\n\tarturas(# c int not null\n\tarturas(# );\n\tCREATE TABLE\n\nAfter filling this table with random data (actual distribution of \nnull's/real values seams not to matter):\n\n\tarturas=# insert into test (a, b, c)\n\tarturas-# select\n\tarturas-# case when random() < 0.5 then 1 else 2 end\n\tarturas-# , case when random() < 0.5 then null else 1 end\n\tarturas-# , case when random() < 0.5 then 1 else 2 end\n\tarturas-# from generate_series(1, 1000000, 1) as gen;\n\tINSERT 0 1000000\n\nAnd creating index:\n\n\tarturas=# create index test_idx on test (a, b nulls first, c);\n\tCREATE INDEX\n\nWe get fast queries with `order by` on c:\n\n\tarturas=# explain analyze verbose select * from test where a = 1 and b = 1 order by c limit 1;\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tQUERY PLAN \n\t-------------------------------------------------------------------------------------------------------------------------------------------\n\t Limit (cost=0.42..0.53 rows=1 width=12) (actual time=0.052..0.052 rows=1 loops=1)\n\t Output: a, b, c\n\t -> Index Only Scan using test_idx on public.test (cost=0.42..25890.42 rows=251433 width=12) (actual time=0.051..0.051 rows=1 loops=1)\n\t\t\t Output: a, b, c\n\t\t\t Index Cond: ((test.a = 1) AND (test.b = 1))\n\t\t\t Heap Fetches: 1\n\t Total runtime: 0.084 ms\n\t(7 rows)\n\nBut really slow ones if we search for null values of b:\n\n\tarturas=# explain analyze verbose select * from test where a = 1 and b is null order by c limit 1;\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t QUERY PLAN \n\t---------------------------------------------------------------------------------------------------------------------------------------------\n\t Limit (cost=15632.47..15632.47 rows=1 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n\t Output: a, b, c\n\t -> Sort (cost=15632.47..16253.55 rows=248434 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n\t\t\t Output: a, b, c\n\t\t\t Sort Key: test.c\n\t\t\t Sort Method: top-N heapsort Memory: 25kB\n\t\t\t -> Bitmap Heap Scan on public.test (cost=6378.87..14390.30 rows=248434 width=12) (actual time=47.083..88.986 rows=249243 loops=1)\n\t\t\t\t Output: a, b, c\n\t\t\t\t Recheck Cond: ((test.a = 1) AND (test.b IS NULL))\n\t\t\t\t -> Bitmap Index Scan on test_idx (cost=0.00..6316.77 rows=248434 width=0) (actual time=46.015..46.015 rows=249243 loops=1)\n\t\t\t\t\t\t Index Cond: ((test.a = 1) AND (test.b IS NULL))\n\t Total runtime: 138.200 ms\n\t(12 rows)\n\nCan someone please give some insight on this problem :)\n\nP.S. I am using `select version()` => PostgreSQL 9.3.5 on \nx86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, \n64-bit, compiled from source with no default configuration changes.\n\n-- \nBest Regard,\nArtūras Lapinskas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 5 Nov 2014 22:42:43 +0100",
"msg_from": "=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Index order ignored after `is null` in query"
},
{
"msg_contents": "After some more investigation my wild guess would be that then nulls are \ninvolved in query postgresql wants to double check whatever they are \nreally nulls in actual relation (maybe because of dead tuples). To do \nthat it has to go and fetch pages from disk and the best way to do that \nis to use bitmap index. Sadly bitmaps tend to be not the best option \nwhen using limit in queries. Which would make sense, if it is really a \nneed to synchronize index with relation...\n\n-- \nBest Regard,\nArtūras Lapinskas\n\nOn Wed, Nov 05, 2014 at 10:42:43PM +0100, Artūras Lapinskas wrote:\n>Hello,\n>\n>I am having some hard time understanding how postgresql handles null \n>values. As much I understand null values are stored in b-tree as \n>simple values (put as last or first depending on index). But it seems \n>that there is something really specific about them as postgresql \n>deliberately ignores obvious (I think...) optimizations concerning \n>index order after using one of them in a query. As a simple example \n>look at table below:\n>\n>\tarturas=# drop table if exists test;\n>\tDROP TABLE\n>\tarturas=# create table test (\n>\tarturas(# a int not null,\n>\tarturas(# b int,\n>\tarturas(# c int not null\n>\tarturas(# );\n>\tCREATE TABLE\n>\n>After filling this table with random data (actual distribution of \n>null's/real values seams not to matter):\n>\n>\tarturas=# insert into test (a, b, c)\n>\tarturas-# select\n>\tarturas-# case when random() < 0.5 then 1 else 2 end\n>\tarturas-# , case when random() < 0.5 then null else 1 end\n>\tarturas-# , case when random() < 0.5 then 1 else 2 end\n>\tarturas-# from generate_series(1, 1000000, 1) as gen;\n>\tINSERT 0 1000000\n>\n>And creating index:\n>\n>\tarturas=# create index test_idx on test (a, b nulls first, c);\n>\tCREATE INDEX\n>\n>We get fast queries with `order by` on c:\n>\n>\tarturas=# explain analyze verbose select * from test where a = 1 and b = 1 order by c limit 1;\n>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tQUERY PLAN \t-------------------------------------------------------------------------------------------------------------------------------------------\n>\t Limit (cost=0.42..0.53 rows=1 width=12) (actual time=0.052..0.052 rows=1 loops=1)\n>\t Output: a, b, c\n>\t -> Index Only Scan using test_idx on public.test (cost=0.42..25890.42 rows=251433 width=12) (actual time=0.051..0.051 rows=1 loops=1)\n>\t\t\t Output: a, b, c\n>\t\t\t Index Cond: ((test.a = 1) AND (test.b = 1))\n>\t\t\t Heap Fetches: 1\n>\t Total runtime: 0.084 ms\n>\t(7 rows)\n>\n>But really slow ones if we search for null values of b:\n>\n>\tarturas=# explain analyze verbose select * from test where a = 1 and b is null order by c limit 1;\n>\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t QUERY PLAN \t---------------------------------------------------------------------------------------------------------------------------------------------\n>\t Limit (cost=15632.47..15632.47 rows=1 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n>\t Output: a, b, c\n>\t -> Sort (cost=15632.47..16253.55 rows=248434 width=12) (actual time=138.127..138.127 rows=1 loops=1)\n>\t\t\t Output: a, b, c\n>\t\t\t Sort Key: test.c\n>\t\t\t Sort Method: top-N heapsort Memory: 25kB\n>\t\t\t -> Bitmap Heap Scan on public.test (cost=6378.87..14390.30 rows=248434 width=12) (actual time=47.083..88.986 rows=249243 loops=1)\n>\t\t\t\t Output: a, b, c\n>\t\t\t\t Recheck Cond: ((test.a = 1) AND (test.b IS NULL))\n>\t\t\t\t -> Bitmap Index Scan on test_idx (cost=0.00..6316.77 rows=248434 width=0) (actual 
time=46.015..46.015 rows=249243 loops=1)\n>\t\t\t\t\t\t Index Cond: ((test.a = 1) AND (test.b IS NULL))\n>\t Total runtime: 138.200 ms\n>\t(12 rows)\n>\n>Can someone please give some insight on this problem :)\n>\n>P.S. I am using `select version()` => PostgreSQL 9.3.5 on \n>x86_64-unknown-linux-gnu, compiled by gcc (Debian 4.7.2-5) 4.7.2, \n>64-bit, compiled from source with no default configuration changes.\n>\n>-- \n>Best Regard,\n>Artūras Lapinskas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 6 Nov 2014 18:06:12 +0100",
"msg_from": "=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index order ignored after `is null` in query"
},
{
"msg_contents": "=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]> writes:\n> After some more investigation my wild guess would be that then nulls are \n> involved in query postgresql wants to double check whatever they are \n> really nulls in actual relation (maybe because of dead tuples).\n\nNo, it's much simpler than that: IS NULL is not an equality operator,\nso it's not treated as constraining sort order.\n\nWhat you're asking for amounts to building in an assumption that \"all\nnulls are equal\", which is exactly not what the SQL semantics for NULL\nsay. So I feel that you have probably chosen a bogus data design\nthat is misusing NULL for a purpose at variance with the SQL semantics.\nThat's likely to bite you on the rear in many more ways than this.\n\nEven disregarding the question of whether it's semantically appropriate,\ngetting the planner to handle IS NULL this way would be a significant\namount of work.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 06 Nov 2014 12:23:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index order ignored after `is null` in query"
},
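A two-line illustration of the semantics Tom is pointing at: NULL never compares equal to anything, including another NULL, which is why the planner cannot fold "b is null" into an equality key on the index.

    select null = null;   -- yields null, not true
    select null is null;  -- yields true: IS NULL is a test, not an equality operator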
{
"msg_contents": "Hi,\n\nthanks for your time and answer. Not treating IS NULL as equality \noperator definitely helps me to make more sense out of previous \nexplains.\n\n-- \nBest Regard,\nArtūras Lapinskas\n\nOn Thu, Nov 06, 2014 at 12:23:12PM -0500, Tom Lane wrote:\n>=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]> writes:\n>> After some more investigation my wild guess would be that then nulls are\n>> involved in query postgresql wants to double check whatever they are\n>> really nulls in actual relation (maybe because of dead tuples).\n>\n>No, it's much simpler than that: IS NULL is not an equality operator,\n>so it's not treated as constraining sort order.\n>\n>What you're asking for amounts to building in an assumption that \"all\n>nulls are equal\", which is exactly not what the SQL semantics for NULL\n>say. So I feel that you have probably chosen a bogus data design\n>that is misusing NULL for a purpose at variance with the SQL semantics.\n>That's likely to bite you on the rear in many more ways than this.\n>\n>Even disregarding the question of whether it's semantically appropriate,\n>getting the planner to handle IS NULL this way would be a significant\n>amount of work.\n>\n>\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Nov 2014 12:14:51 +0100",
"msg_from": "=?utf-8?Q?Art=C5=ABras?= Lapinskas <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Index order ignored after `is null` in query"
},
{
"msg_contents": "On 11/7/14, 5:14 AM, Artūras Lapinskas wrote:\n> thanks for your time and answer. Not treating IS NULL as equality operator definitely helps me to make more sense out of previous explains.\n\nYou can also try creating a partial index WHERE b IS NULL. WHERE b IS NOT NULL can also sometimes be useful, though for different reasons.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 20:12:32 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Index order ignored after `is null` in query"
}
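A concrete sketch of Jim's suggestion against the test table from the start of this thread, with a made-up index name: once the nullable column moves into the index predicate, c becomes the second key column, so the ORDER BY ... LIMIT query can be answered by a plain index scan instead of a bitmap scan plus sort.

    -- partial index covering exactly the "b is null" rows
    create index test_b_null_idx on test (a, c) where b is null;

    -- the slow query from the first message can now walk the index in c order
    select * from test where a = 1 and b is null order by c limit 1;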
] |
[
{
"msg_contents": "Hello,\n\nI have just came across interesting Postgres behaviour with \nOR-conditions. Are there any chances that the optimizer will handle this \nsituation in the future?\n\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time <= '2011-11-07 10:39:07.285022+08'\norder by fd.creation_time desc\nlimit 200\n\n\"Limit (cost=4.30..44.50 rows=200 width=67) (actual time=0.032..1.376 \nrows=200 loops=1)\"\n\" -> Index Scan Backward using financial_document_creation_time_index \non financial_documents fd (cost=4.30..292076.25 rows=1453075 width=67) \n(actual time=0.027..0.683 rows=200 loops=1)\"\n\" Index Cond: (creation_time <= '2011-11-07 \n11:39:07.285022+09'::timestamp with time zone)\"\n\"Total runtime: 1.740 ms\"\n\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time = '2011-11-07 10:39:07.285022+08'\n or fd.creation_time < '2011-11-07 10:39:07.285022+08'\norder by fd.creation_time desc\nlimit 200\n\n\"Limit (cost=4.30..71.76 rows=200 width=67) (actual \ntime=1067.935..1069.126 rows=200 loops=1)\"\n\" -> Index Scan Backward using financial_document_creation_time_index \non financial_documents fd (cost=4.30..490104.07 rows=1453075 width=67) \n(actual time=1067.927..1068.532 rows=200 loops=1)\"\n\" Filter: ((creation_time = '2011-11-07 \n11:39:07.285022+09'::timestamp with time zone) OR (creation_time < \n'2011-11-07 11:39:07.285022+09'::timestamp with time zone))\"\n\" Rows Removed by Filter: 776785\"\n\"Total runtime: 1069.480 ms\"\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 07 Nov 2014 12:16:27 +0800",
"msg_from": "arhipov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres does not use indexes with OR-conditions"
},
{
"msg_contents": "On Fri, Nov 7, 2014 at 5:16 PM, arhipov <[email protected]> wrote:\n\n> Hello,\n>\n> I have just came across interesting Postgres behaviour with OR-conditions.\n> Are there any chances that the optimizer will handle this situation in the\n> future?\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n> order by fd.creation_time desc\n> limit 200\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time = '2011-11-07 10:39:07.285022+08'\n> or fd.creation_time < '2011-11-07 10:39:07.285022+08'\n> order by fd.creation_time desc\n> limit 200\n>\n\n It would certainly be possible, providing the constants compare equally,\nbut... Question: Would you really want to pay a, say 1% increase in\nplanning time for ALL queries, so that you could have this unique case of\nqueries perform better at execution time?\n\nIs there a valid reason why you don't just write the query with the <=\noperator?\n\nRegards\n\nDavid Rowley\n\nOn Fri, Nov 7, 2014 at 5:16 PM, arhipov <[email protected]> wrote:Hello,\n\nI have just came across interesting Postgres behaviour with OR-conditions. Are there any chances that the optimizer will handle this situation in the future?\n\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time <= '2011-11-07 10:39:07.285022+08'\norder by fd.creation_time desc\nlimit 200\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time = '2011-11-07 10:39:07.285022+08'\n or fd.creation_time < '2011-11-07 10:39:07.285022+08'\norder by fd.creation_time desc\nlimit 200 It would certainly be possible, providing the constants compare equally, but... Question: Would you really want to pay a, say 1% increase in planning time for ALL queries, so that you could have this unique case of queries perform better at execution time?Is there a valid reason why you don't just write the query with the <= operator?RegardsDavid Rowley",
"msg_date": "Fri, 7 Nov 2014 17:38:13 +1300",
"msg_from": "David Rowley <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
},
{
"msg_contents": "It was just a minimal example. The real query looks like this.\n\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time < '2011-11-07 10:39:07.285022+08'\n or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and \nfd.financial_document_id < 100)\norder by fd.creation_time desc\nlimit 200\n\nI need to rewrite it in the way below to make Postgres use the index.\n\nselect *\nfrom commons.financial_documents fd\nwhere fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n and (\n fd.creation_time < '2011-11-07 10:39:07.285022+08'\n or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and \nfd.financial_document_id < 100)\n )\norder by fd.creation_time desc\nlimit 200\n\nOn 11/07/2014 12:38 PM, David Rowley wrote:\n> On Fri, Nov 7, 2014 at 5:16 PM, arhipov <[email protected] \n> <mailto:[email protected]>> wrote:\n>\n> Hello,\n>\n> I have just came across interesting Postgres behaviour with\n> OR-conditions. Are there any chances that the optimizer will\n> handle this situation in the future?\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n> order by fd.creation_time desc\n> limit 200\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time = '2011-11-07 10:39:07.285022+08'\n> or fd.creation_time < '2011-11-07 10:39:07.285022+08'\n> order by fd.creation_time desc\n> limit 200\n>\n>\n> It would certainly be possible, providing the constants compare \n> equally, but... Question: Would you really want to pay a, say 1% \n> increase in planning time for ALL queries, so that you could have this \n> unique case of queries perform better at execution time?\n>\n> Is there a valid reason why you don't just write the query with the <= \n> operator?\n>\n> Regards\n>\n> David Rowley\n\n\n\n\n\n\n\n It was just a minimal example. The real query looks like this.\n\n select *\n from commons.financial_documents fd\n where fd.creation_time < '2011-11-07 10:39:07.285022+08'\n or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and\n fd.financial_document_id < 100)\n order by fd.creation_time desc\n limit 200\n\n I need to rewrite it in the way below to make Postgres use the\n index.\n\n select *\n from commons.financial_documents fd\n where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n and (\n fd.creation_time < '2011-11-07 10:39:07.285022+08'\n or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and\n fd.financial_document_id < 100)\n )\n order by fd.creation_time desc\n limit 200\n\nOn 11/07/2014 12:38 PM, David Rowley\n wrote:\n\n\n\n\nOn Fri, Nov 7, 2014 at 5:16 PM,\n arhipov <[email protected]>\n wrote:\nHello,\n\n I have just came across interesting Postgres behaviour\n with OR-conditions. Are there any chances that the\n optimizer will handle this situation in the future?\n\n select *\n from commons.financial_documents fd\n where fd.creation_time <= '2011-11-07\n 10:39:07.285022+08'\n order by fd.creation_time desc\n limit 200\n\n select *\n from commons.financial_documents fd\n where fd.creation_time = '2011-11-07 10:39:07.285022+08'\n or fd.creation_time < '2011-11-07\n 10:39:07.285022+08'\n order by fd.creation_time desc\n limit 200\n\n\n\n It would certainly be possible, providing the\n constants compare equally, but... 
Question: Would you\n really want to pay a, say 1% increase in planning time for\n ALL queries, so that you could have this unique case of\n queries perform better at execution time?\n\n\nIs there a valid reason why you don't just write the\n query with the <= operator?\n\n\nRegards\n\n\nDavid Rowley",
"msg_date": "Fri, 07 Nov 2014 13:06:26 +0800",
"msg_from": "Vlad Arkhipov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
},
{
"msg_contents": "\nOn 11/07/2014 12:06 AM, Vlad Arkhipov wrote:\n> It was just a minimal example. The real query looks like this.\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time < '2011-11-07 10:39:07.285022+08'\n> or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and \n> fd.financial_document_id < 100)\n> order by fd.creation_time desc\n> limit 200\n>\n> I need to rewrite it in the way below to make Postgres use the index.\n>\n> select *\n> from commons.financial_documents fd\n> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n> and (\n> fd.creation_time < '2011-11-07 10:39:07.285022+08'\n> or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and \n> fd.financial_document_id < 100)\n> )\n> order by fd.creation_time desc\n> limit 200\n>\n\nFirst, please do not top-post on the PostgreSQL lists. See \n<http://idallen.com/topposting.html>\n\nSecond, the last test for fd.creation_time in your query seems \nredundant. Could you not rewrite it as something this?:\n\n where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n and (fd.creation_time < '2011-11-07 10:39:07.285022+08'\n or fd.financial_document_id < 100)\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 07 Nov 2014 08:55:32 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> wrote:\n> On 11/07/2014 12:06 AM, Vlad Arkhipov wrote:\n\n>> I need to rewrite it in the way below to make Postgres use the index.\n>>\n>> select *\n>> from commons.financial_documents fd\n>> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n>> and (\n>> fd.creation_time < '2011-11-07 10:39:07.285022+08'\n>> or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and\n>> fd.financial_document_id < 100)\n>> )\n>> order by fd.creation_time desc\n>> limit 200\n>\n> Could you not rewrite it as something this?:\n>\n> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n> and (fd.creation_time < '2011-11-07 10:39:07.285022+08'\n> or fd.financial_document_id < 100)\n\nYeah, when there are two ways to write a query that are logically\nequivalent, it is better to put the AND at the higher level than\nthe OR. On the other hand, why not simply write it as?:\n\nselect *\n from commons.financial_documents fd\n where (fd.creation_time, fd.financial_document_id)\n < ('2011-11-07 10:39:07.285022+08', 100)\n order by fd.creation_time desc\n limit 200\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Nov 2014 06:17:31 -0800",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
},
{
"msg_contents": "Kevin Grittner <[email protected]> writes:\n> On the other hand, why not simply write it as?:\n\n> select *\n> from commons.financial_documents fd\n> where (fd.creation_time, fd.financial_document_id)\n> < ('2011-11-07 10:39:07.285022+08', 100)\n> order by fd.creation_time desc\n> limit 200\n\nThat's the way to do it, not only because it's simpler and clearer,\nbut because the planner will recognize the relevance of the\ncondition to an index on creation_time, financial_document_id ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 07 Nov 2014 10:11:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
},
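For the row-wise comparison to pay off the way Tom describes, both columns need to be covered by a single composite index; a minimal sketch with a made-up index name (the plans earlier in the thread show only an index on creation_time alone):

    create index financial_documents_time_id_idx
        on commons.financial_documents (creation_time, financial_document_id);

With that index in place the row comparison becomes an index boundary condition, and the ORDER BY fd.creation_time DESC LIMIT 200 can be served by a backward index scan rather than filtering away the 776785 rows seen in the original plan.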
{
"msg_contents": "Kevin Grittner-5 wrote\n> Andrew Dunstan <\n\n> andrew@\n\n> > wrote:\n>> On 11/07/2014 12:06 AM, Vlad Arkhipov wrote:\n> \n>>> I need to rewrite it in the way below to make Postgres use the index.\n>>>\n>>> select *\n>>> from commons.financial_documents fd\n>>> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n>>> and (\n>>> fd.creation_time < '2011-11-07 10:39:07.285022+08'\n>>> or (fd.creation_time = '2011-11-07 10:39:07.285022+08' and\n>>> fd.financial_document_id < 100)\n>>> )\n>>> order by fd.creation_time desc\n>>> limit 200\n>>\n>> Could you not rewrite it as something this?:\n>>\n>> where fd.creation_time <= '2011-11-07 10:39:07.285022+08'\n>> and (fd.creation_time < '2011-11-07 10:39:07.285022+08'\n>> or fd.financial_document_id < 100)\n> \n> Yeah, when there are two ways to write a query that are logically\n> equivalent, it is better to put the AND at the higher level than\n> the OR. On the other hand, why not simply write it as?:\n> \n> select *\n> from commons.financial_documents fd\n> where (fd.creation_time, fd.financial_document_id)\n> < ('2011-11-07 10:39:07.285022+08', 100)\n> order by fd.creation_time desc\n> limit 200\n\n From personal experience and observation on these lists record inequality is\nnot particularly intuitive. I'm also not sure someone is likely to really\n\"get it\" until they have a problem for which the above is the solution.\n\nThat said is there a place where we supply solutions and idioms to common\nqueries? This query as well as pagination-oriented queries are two that\ncome to mind. I think the material would fit well in the tutorial section\nbut having some kind of quick synopsis and cross reference in the\nperformance chapter would aid someone whose looking to solve a problem and\nnot in general education mode.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Postgres-does-not-use-indexes-with-OR-conditions-tp5826027p5826065.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 7 Nov 2014 08:50:26 -0800 (PST)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres does not use indexes with OR-conditions"
}
] |
[
{
"msg_contents": "Hello,\n\nI have built up a hot-standby between a master running 9.2.4 and a slave running 9.2.9. I did the initial copy using \"pg_basebackup\" . My recovery.conf looks like:\n\nstandby_mode = 'on'\nprimary_conninfo = 'host=pXXXXXXXX port=XXXX user=replicator'\ntrigger_file = 'failover.now'\nrestore_command = 'test -f /ORA/dbs02/PUPDBTST/archive/%f && ln -fns /ORA/dbs02/PUPDBTST/archive/%f %p'\narchive_cleanup_command = '/usr/local/pgsql/postgresql-9.2.9/bin/pg_archivecleanup /ORA/dbs03/PUPDBTST/data %r'\n\nThe slave (I don't have control on the master) is using 2 NFS file systems, one for WALs and another one for the data, on Netapp controllers:\n\ndbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\ndbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\n\nThe master produces quite a lot of WALs. This is what I get on the slave (number of WAL files, date-hour, Total size in MB), so per day is more than 400GB:\n\n 1582 20140930-17 25312MB\n 1328 20140930-18 21248MB\n 1960 20140930-19 31360MB\n 1201 20140930-20 19216MB\n 1467 20140930-21 23472MB\n 1298 20140930-22 20768MB\n 1579 20140930-23 25264MB\n 1646 20141001-00 26336MB\n 1274 20141001-01 20384MB\n 1652 20141001-02 26432MB\n 1756 20141001-03 28096MB\n 1628 20141001-04 26048MB\n 1015 20141001-05 16240MB\n 1624 20141001-06 25984MB\n 1652 20141001-07 26432MB\n 1286 20141001-08 20576MB\n 1485 20141001-09 23760MB\n 1987 20141001-10 31792MB\n 1432 20141001-11 22912MB\n 1235 20141001-12 19760MB\n 1690 20141001-13 27040MB\n 1442 20141001-14 23072MB\n 1500 20141001-15 24000MB\n 1306 20141001-16 20896MB\n 1491 20141001-17 23856MB\n 1535 20141001-18 24560MB\n 1548 20141001-19 24768MB\n 1068 20141001-20 17088MB\n 1519 20141001-21 24304MB\n 1608 20141001-22 25728MB\n 1019 20141001-23 16304MB\n 1568 20141002-00 25088MB\n 1411 20141002-01 22576MB\n 1781 20141002-02 28496MB\n 1280 20141002-03 20480MB\n 1556 20141002-04 24896MB\n 1114 20141002-05 17824MB\n 1906 20141002-06 30496MB\n 1316 20141002-07 21056MB\n 1483 20141002-08 23728MB\n 1245 20141002-09 19920MB\n\nThe slave is running on a server with 48GB, and 8 cores (Intel(R) Xeon(R) CPU E5630 @ 2.53GHz) running red hat 5.10, herewith the postgresql.conf:\npostgres=# show all;\n name | setting | description\n---------------------------------+------------------------------------------+-------------------------------------------------------------------------------------------------------------------------------\nallow_system_table_mods | off | Allows modifications of the structure of system tables.\napplication_name | psql | Sets the application name to be reported in statistics and logs.\narchive_command | (disabled) | Sets the shell command that will be called to archive a WAL file.\narchive_mode | off | Allows archiving of WAL files using archive_command.\narchive_timeout | 0 | Forces a switch to the next xlog file if a new file has not been started within N seconds.\narray_nulls | on | Enable input of NULL elements in arrays.\nauthentication_timeout | 1min | Sets the maximum allowed time to complete client authentication.\nautovacuum | on | Starts the autovacuum subprocess.\nautovacuum_analyze_scale_factor | 0.1 | Number of tuple inserts, updates, or deletes prior to analyze as a fraction of reltuples.\nautovacuum_analyze_threshold | 50 | Minimum number of tuple inserts, updates, or deletes prior to 
analyze.\nautovacuum_freeze_max_age | 200000000 | Age at which to autovacuum a table to prevent transaction ID wraparound.\nautovacuum_max_workers | 3 | Sets the maximum number of simultaneously running autovacuum worker processes.\nautovacuum_naptime | 20s | Time to sleep between autovacuum runs.\nautovacuum_vacuum_cost_delay | 5ms | Vacuum cost delay in milliseconds, for autovacuum.\nautovacuum_vacuum_cost_limit | 1000 | Vacuum cost amount available before napping, for autovacuum.\nautovacuum_vacuum_scale_factor | 0.1 | Number of tuple updates or deletes prior to vacuum as a fraction of reltuples.\nautovacuum_vacuum_threshold | 50 | Minimum number of tuple updates or deletes prior to vacuum.\nbackslash_quote | safe_encoding | Sets whether \"\\'\" is allowed in string literals.\nbgwriter_delay | 200ms | Background writer sleep time between rounds.\nbgwriter_lru_maxpages | 100 | Background writer maximum number of LRU pages to flush per round.\nbgwriter_lru_multiplier | 2 | Multiple of the average buffer usage to free per round.\nblock_size | 8192 | Shows the size of a disk block.\nbonjour | off | Enables advertising the server via Bonjour.\nbonjour_name | | Sets the Bonjour service name.\nbytea_output | hex | Sets the output format for bytea.\ncheck_function_bodies | on | Check function bodies during CREATE FUNCTION.\ncheckpoint_completion_target | 0.9 | Time spent flushing dirty buffers during checkpoint, as fraction of checkpoint interval.\ncheckpoint_segments | 64 | Sets the maximum distance in log segments between automatic WAL checkpoints.\ncheckpoint_timeout | 10min | Sets the maximum time between automatic WAL checkpoints.\ncheckpoint_warning | 1min | Enables warnings if checkpoint segments are filled more frequently than this.\nclient_encoding | UTF8 | Sets the client's character set encoding.\nclient_min_messages | notice | Sets the message levels that are sent to the client.\ncommit_delay | 0 | Sets the delay in microseconds between transaction commit and flushing WAL to disk.\ncommit_siblings | 5 | Sets the minimum concurrent open transactions before performing commit_delay.\nconfig_file | /ORA/dbs03/PUPDBTST/data/postgresql.conf | Sets the server's main configuration file.\nconstraint_exclusion | on | Enables the planner to use constraints to optimize queries.\ncpu_index_tuple_cost | 0.005 | Sets the planner's estimate of the cost of processing each index entry during an index scan.\ncpu_operator_cost | 0.0025 | Sets the planner's estimate of the cost of processing each operator or function call.\ncpu_tuple_cost | 0.01 | Sets the planner's estimate of the cost of processing each tuple (row).\ncursor_tuple_fraction | 0.1 | Sets the planner's estimate of the fraction of a cursor's rows that will be retrieved.\ndata_directory | /ORA/dbs03/PUPDBTST/data | Sets the server's data directory.\nDateStyle | ISO, MDY | Sets the display format for date and time values.\ndb_user_namespace | off | Enables per-database user names.\ndeadlock_timeout | 1s | Sets the time to wait on a lock before checking for deadlock.\ndebug_assertions | off | Turns on various assertion checks.\ndebug_pretty_print | on | Indents parse and plan tree displays.\ndebug_print_parse | off | Logs each query's parse tree.\ndebug_print_plan | off | Logs each query's execution plan.\ndebug_print_rewritten | off | Logs each query's rewritten parse tree.\ndefault_statistics_target | 50 | Sets the default statistics target.\ndefault_tablespace | | Sets the default tablespace to create tables and indexes 
in.\ndefault_text_search_config | pg_catalog.english | Sets default text search configuration.\ndefault_transaction_deferrable | off | Sets the default deferrable status of new transactions.\ndefault_transaction_isolation | read committed | Sets the transaction isolation level of each new transaction.\ndefault_transaction_read_only | off | Sets the default read-only status of new transactions.\ndefault_with_oids | off | Create new tables with OIDs by default.\ndynamic_library_path | $libdir | Sets the path for dynamically loadable modules.\neffective_cache_size | 36GB | Sets the planner's assumption about the size of the disk cache.\neffective_io_concurrency | 1 | Number of simultaneous requests that can be handled efficiently by the disk subsystem.\nenable_bitmapscan | on | Enables the planner's use of bitmap-scan plans.\nenable_hashagg | on | Enables the planner's use of hashed aggregation plans.\nenable_hashjoin | on | Enables the planner's use of hash join plans.\nenable_indexonlyscan | on | Enables the planner's use of index-only-scan plans.\nenable_indexscan | on | Enables the planner's use of index-scan plans.\nenable_material | on | Enables the planner's use of materialization.\nenable_mergejoin | on | Enables the planner's use of merge join plans.\nenable_nestloop | on | Enables the planner's use of nested-loop join plans.\nenable_seqscan | on | Enables the planner's use of sequential-scan plans.\nenable_sort | on | Enables the planner's use of explicit sort steps.\nenable_tidscan | on | Enables the planner's use of TID scan plans.\nescape_string_warning | on | Warn about backslash escapes in ordinary string literals.\nevent_source | PostgreSQL | Sets the application name used to identify PostgreSQL messages in the event log.\nexit_on_error | off | Terminate session on any error.\nexternal_pid_file | | Writes the postmaster PID to the specified file.\nextra_float_digits | 0 | Sets the number of digits displayed for floating-point values.\nfrom_collapse_limit | 8 | Sets the FROM-list size beyond which subqueries are not collapsed.\nfsync | off | Forces synchronization of updates to disk.\nfull_page_writes | on | Writes full pages to WAL when first modified after a checkpoint.\ngeqo | on | Enables genetic query optimization.\ngeqo_effort | 5 | GEQO: effort is used to set the default for other GEQO parameters.\ngeqo_generations | 0 | GEQO: number of iterations of the algorithm.\ngeqo_pool_size | 0 | GEQO: number of individuals in the population.\ngeqo_seed | 0 | GEQO: seed for random path selection.\ngeqo_selection_bias | 2 | GEQO: selective pressure within the population.\ngeqo_threshold | 12 | Sets the threshold of FROM items beyond which GEQO is used.\ngin_fuzzy_search_limit | 0 | Sets the maximum allowed result for exact search by GIN.\nhba_file | /ORA/dbs03/PUPDBTST/data/pg_hba.conf | Sets the server's \"hba\" configuration file.\nhot_standby | on | Allows connections and queries during recovery.\nhot_standby_feedback | off | Allows feedback from a hot standby to the primary that will avoid query conflicts.\nident_file | /ORA/dbs03/PUPDBTST/data/pg_ident.conf | Sets the server's \"ident\" configuration file.\nignore_system_indexes | off | Disables reading from system indexes.\ninteger_datetimes | on | Datetimes are integer based.\nIntervalStyle | postgres | Sets the display format for interval values.\njoin_collapse_limit | 8 | Sets the FROM-list size beyond which JOIN constructs are not flattened.\nkrb_caseins_users | off | Sets whether Kerberos and GSSAPI user names should be 
treated as case-insensitive.\nkrb_server_keyfile | FILE:/etc/postgresql/krb5.keytab | Sets the location of the Kerberos server key file.\nkrb_srvname | postgres | Sets the name of the Kerberos service.\nlc_collate | en_US.UTF-8 | Shows the collation order locale.\nlc_ctype | en_US.UTF-8 | Shows the character classification and case conversion locale.\nlc_messages | en_US.UTF-8 | Sets the language in which messages are displayed.\nlc_monetary | en_US.UTF-8 | Sets the locale for formatting monetary amounts.\nlc_numeric | en_US.UTF-8 | Sets the locale for formatting numbers.\nlc_time | en_US.UTF-8 | Sets the locale for formatting date and time values.\nlisten_addresses | * | Sets the host name or IP address(es) to listen to.\nlo_compat_privileges | off | Enables backward compatibility mode for privilege checks on large objects.\nlocal_preload_libraries | | Lists shared libraries to preload into each backend.\nlog_autovacuum_min_duration | 0 | Sets the minimum execution time above which autovacuum actions will be logged.\nlog_checkpoints | off | Logs each checkpoint.\nlog_connections | off | Logs each successful connection.\nlog_destination | stderr | Sets the destination for server log output.\nlog_directory | pg_log | Sets the destination directory for log files.\nlog_disconnections | off | Logs end of a session, including duration.\nlog_duration | off | Logs the duration of each completed SQL statement.\nlog_error_verbosity | default | Sets the verbosity of logged messages.\nlog_executor_stats | off | Writes executor performance statistics to the server log.\nlog_file_mode | 0600 | Sets the file permissions for log files.\nlog_filename | postgresql-%Y-%m-%d_%H%M%S.log | Sets the file name pattern for log files.\nlog_hostname | off | Logs the host name in the connection logs.\nlog_line_prefix | <%m> | Controls information prefixed to each log line.\nlog_lock_waits | off | Logs long lock waits.\nlog_min_duration_statement | -1 | Sets the minimum execution time above which statements will be logged.\nlog_min_error_statement | error | Causes all statements generating error at or above this level to be logged.\nlog_min_messages | warning | Sets the message levels that are logged.\nlog_parser_stats | off | Writes parser performance statistics to the server log.\nlog_planner_stats | off | Writes planner performance statistics to the server log.\nlog_rotation_age | 1d | Automatic log file rotation will occur after N minutes.\nlog_rotation_size | 10MB | Automatic log file rotation will occur after N kilobytes.\nlog_statement | none | Sets the type of statements logged.\nlog_statement_stats | off | Writes cumulative performance statistics to the server log.\nlog_temp_files | -1 | Log the use of temporary files larger than this number of kilobytes.\nlog_timezone | Europe/Paris | Sets the time zone to use in log messages.\nlog_truncate_on_rotation | off | Truncate existing log files of same name during log rotation.\nlogging_collector | off | Start a subprocess to capture stderr output and/or csvlogs into log files.\nmaintenance_work_mem | 1GB | Sets the maximum memory to be used for maintenance operations.\nmax_connections | 80 | Sets the maximum number of concurrent connections.\nmax_files_per_process | 1000 | Sets the maximum number of simultaneously open files for each server process.\nmax_function_args | 100 | Shows the maximum number of function arguments.\nmax_identifier_length | 63 | Shows the maximum identifier length.\nmax_index_keys | 32 | Shows the maximum number of index 
keys.\nmax_locks_per_transaction | 64 | Sets the maximum number of locks per transaction.\nmax_pred_locks_per_transaction | 64 | Sets the maximum number of predicate locks per transaction.\nmax_prepared_transactions | 0 | Sets the maximum number of simultaneously prepared transactions.\nmax_stack_depth | 2MB | Sets the maximum stack depth, in kilobytes.\nmax_standby_archive_delay | 30s | Sets the maximum delay before canceling queries when a hot standby server is processing archived WAL data.\nmax_standby_streaming_delay | 30s | Sets the maximum delay before canceling queries when a hot standby server is processing streamed WAL data.\nmax_wal_senders | 0 | Sets the maximum number of simultaneously running WAL sender processes.\npassword_encryption | on | Encrypt passwords.\nport | 6600 | Sets the TCP port the server listens on.\npost_auth_delay | 0 | Waits N seconds on connection startup after authentication.\npre_auth_delay | 0 | Waits N seconds on connection startup before authentication.\nquote_all_identifiers | off | When generating SQL fragments, quote all identifiers.\nrandom_page_cost | 4 | Sets the planner's estimate of the cost of a nonsequentially fetched disk page.\nreplication_timeout | 1min | Sets the maximum time to wait for WAL replication.\nrestart_after_crash | on | Reinitialize server after backend crash.\nsearch_path | \"$user\",public | Sets the schema search order for names that are not schema-qualified.\nsegment_size | 1GB | Shows the number of pages per disk file.\nseq_page_cost | 1 | Sets the planner's estimate of the cost of a sequentially fetched disk page.\nserver_encoding | UTF8 | Sets the server (database) character set encoding.\nserver_version | 9.2.9 | Shows the server version.\nserver_version_num | 90209 | Shows the server version as an integer.\nsession_replication_role | origin | Sets the session's behavior for triggers and rewrite rules.\nshared_buffers | 12GB | Sets the number of shared memory buffers used by the server.\nshared_preload_libraries | | Lists shared libraries to preload into server.\nsql_inheritance | on | Causes subtables to be included by default in various commands.\nssl | off | Enables SSL connections.\nssl_ca_file | | Location of the SSL certificate authority file.\nssl_cert_file | server.crt | Location of the SSL server certificate file.\nssl_ciphers | ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH | Sets the list of allowed SSL ciphers.\nssl_crl_file | | Location of the SSL certificate revocation list file.\nssl_key_file | server.key | Location of the SSL server private key file.\nssl_renegotiation_limit | 512MB | Set the amount of traffic to send and receive before renegotiating the encryption keys.\nstandard_conforming_strings | on | Causes '...' 
strings to treat backslashes literally.\nstatement_timeout | 0 | Sets the maximum allowed duration of any statement.\nstats_temp_directory | pg_stat_tmp | Writes temporary statistics files to the specified directory.\nsuperuser_reserved_connections | 3 | Sets the number of connection slots reserved for superusers.\nsynchronize_seqscans | on | Enable synchronized sequential scans.\nsynchronous_commit | off | Sets the current transaction's synchronization level.\nsynchronous_standby_names | | List of names of potential synchronous standbys.\nsyslog_facility | local0 | Sets the syslog \"facility\" to be used when syslog enabled.\nsyslog_ident | postgres | Sets the program name used to identify PostgreSQL messages in syslog.\ntcp_keepalives_count | 0 | Maximum number of TCP keepalive retransmits.\ntcp_keepalives_idle | 0 | Time between issuing TCP keepalives.\ntcp_keepalives_interval | 0 | Time between TCP keepalive retransmits.\ntemp_buffers | 8MB | Sets the maximum number of temporary buffers used by each session.\ntemp_file_limit | -1 | Limits the total size of all temporary files used by each session.\ntemp_tablespaces | | Sets the tablespace(s) to use for temporary tables and sort files.\nTimeZone | Europe/Paris | Sets the time zone for displaying and interpreting time stamps.\ntimezone_abbreviations | Default | Selects a file of time zone abbreviations.\ntrace_notify | off | Generates debugging output for LISTEN and NOTIFY.\ntrace_recovery_messages | log | Enables logging of recovery-related debugging information.\ntrace_sort | off | Emit information about resource usage in sorting.\ntrack_activities | on | Collects information about executing commands.\ntrack_activity_query_size | 1024 | Sets the size reserved for pg_stat_activity.query, in bytes.\ntrack_counts | on | Collects statistics on database activity.\ntrack_functions | none | Collects function-level statistics on database activity.\ntrack_io_timing | off | Collects timing statistics for database I/O activity.\ntransaction_deferrable | off | Whether to defer a read-only serializable transaction until it can be executed with no possible serialization failures.\ntransaction_isolation | read committed | Sets the current transaction's isolation level.\ntransaction_read_only | on | Sets the current transaction's read-only status.\ntransform_null_equals | off | Treats \"expr=NULL\" as \"expr IS NULL\".\nunix_socket_directory | /var/lib/pgsql | Sets the directory where the Unix-domain socket will be created.\nunix_socket_group | | Sets the owning group of the Unix-domain socket.\nunix_socket_permissions | 0777 | Sets the access permissions of the Unix-domain socket.\nupdate_process_title | on | Updates the process title to show the active SQL command.\nvacuum_cost_delay | 0 | Vacuum cost delay in milliseconds.\nvacuum_cost_limit | 200 | Vacuum cost amount available before napping.\nvacuum_cost_page_dirty | 20 | Vacuum cost for a page dirtied by vacuum.\nvacuum_cost_page_hit | 1 | Vacuum cost for a page found in the buffer cache.\nvacuum_cost_page_miss | 10 | Vacuum cost for a page not found in the buffer cache.\nvacuum_defer_cleanup_age | 0 | Number of transactions by which VACUUM and HOT cleanup should be deferred, if any.\nvacuum_freeze_min_age | 50000000 | Minimum age at which VACUUM should freeze a table row.\nvacuum_freeze_table_age | 150000000 | Age at which VACUUM should scan whole table to freeze tuples.\nwal_block_size | 8192 | Shows the block size in the write ahead log.\nwal_buffers | 8MB | Sets the number of disk-page 
buffers in shared memory for WAL.\nwal_keep_segments | 0 | Sets the number of WAL files held for standby servers.\nwal_level | hot_standby | Set the level of information written to the WAL.\nwal_receiver_status_interval | 10s | Sets the maximum interval between WAL receiver status reports to the primary.\nwal_segment_size | 16MB | Shows the number of pages per write ahead log segment.\nwal_sync_method | fdatasync | Selects the method used for forcing WAL updates to disk.\nwal_writer_delay | 200ms | WAL writer sleep time between WAL flushes.\nwork_mem | 288MB | Sets the maximum memory to be used for query workspaces.\nxmlbinary | base64 | Sets how binary values are to be encoded in XML.\nxmloption | content | Sets whether XML data in implicit parsing and serialization operations is to be considered as documents or content fragments.\nzero_damaged_pages | off | Continues processing past damaged page headers.\n\nThe streaming is working perfectly:\n\n/usr/local/pgsql/postgresql-9.2.9/bin/psql -U admin -h XXXXX -p XXXXX -d puppetdb -c \"select pid,usesysid, usename, application_name, client_addr, state, sent_location,write_location,replay_location,sync_state from pg_stat_replication;\"\n pid | usesysid | usename | application_name | client_addr | state | sent_location | write_location | replay_location | sync_state\n--------+----------+------------+------------------+-------------+-----------+---------------+----------------+-----------------+------------\n117659 | 16384 | replicator | walreceiver | 10.16.7.137 | streaming | AA74/DD630978 | AA74/DD630978 | A977/F84F0BE0 | async\n(1 row)\n\nBut the lag is increasing constantly, it looks the replay can not cope with:\n\n/usr/local/pgsql/postgresql-9.2.9/bin/psql -U postgres -h /var/lib/pgsql/ -p 6600 -d puppetdb -c \"SELECT now(), now() - pg_last_xact_replay_timestamp() AS time_lag\" | perl -ne 'if (/\\|\\s+(\\d{2}):(\\d{2}):(\\d{2})\\.\\d+/) {$hour=$1;$min=$2;$sec=$3; print $_;}'\n\n2014-09-22 18:40:02.004482+02 | 00:00:03.166557\n2014-09-22 18:50:02.001836+02 | 00:00:00.765323\n2014-09-22 19:00:01.229943+02 | 00:00:00.600354\n2014-09-22 19:10:01.655343+02 | 00:00:07.85969\n2014-09-22 19:20:01.872653+02 | 00:07:23.727307\n2014-09-22 19:30:01.458236+02 | 00:14:16.244349\n2014-09-22 19:40:01.715044+02 | 00:20:31.616665\n2014-09-22 19:50:01.949646+02 | 00:28:20.129136\n2014-09-22 20:00:01.216571+02 | 00:31:11.289404\n...\n2014-09-30 14:00:01.67156+02 | 23:20:11.229815\n2014-09-30 14:10:01.884168+02 | 23:23:59.162476\n2014-09-30 14:20:02.022649+02 | 23:26:39.082807\n2014-09-30 14:30:01.263667+02 | 23:34:02.517874\n2014-09-30 14:40:01.421162+02 | 23:41:30.496393\n2014-09-30 14:50:01.650239+02 | 23:49:11.102335\n2014-09-30 15:00:02.055655+02 | 23:56:07.012862\n\n\nI tried to play with how the IO is handled, making it less strict setting synchronous_commit and fsync to off with not much success.\nI have also done a second test increasing shared_buffers from 12GB to 24GB (we are running on a 48GB, 8 cores server).\n\nPlease let me know if you can see something obvious I am missing.\nThanks for your help,\nRuben",
"msg_date": "Sat, 8 Nov 2014 13:11:25 +0000",
"msg_from": "Ruben Domingo Gaspar Aparicio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Postgres slave not catching up (on 9.2)"
},
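[The time-based lag query in the message above also counts idle periods and clock skew as lag; a byte-based check on the standby separates receive lag from replay lag. A minimal sketch in the same psql style as the thread — connection parameters are placeholders, and pg_xlog_location_diff / pg_last_xlog_*_location are the 9.2-era function names:

/usr/local/pgsql/postgresql-9.2.9/bin/psql -U postgres -h /var/lib/pgsql/ -p 6600 -d puppetdb -c \
  "SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(),
                                pg_last_xlog_replay_location()) AS replay_lag_bytes;"

A growing replay_lag_bytes while receive keeps up confirms that replay, not the network, is the bottleneck.]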
{
"msg_contents": "On Sat, Nov 8, 2014 at 2:11 PM, Ruben Domingo Gaspar Aparicio wrote:\n\n> The slave (I don't have control on the master) is using 2 NFS file systems,\n> one for WALs and another one for the data, on Netapp controllers:\n>\n> dbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs\n> (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\n>\n> dbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs\n> (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\n\nYou should use noatime to avoid unnecessary IO.\n\n> The master produces quite a lot of WALs. This is what I get on the slave\n> (number of WAL files, date-hour, Total size in MB), so per day is more than\n> 400GB:\n\n> I tried to play with how the IO is handled, making it less strict setting\n> synchronous_commit and fsync to off with not much success.\n>\n> I have also done a second test increasing shared_buffers from 12GB to 24GB\n> (we are running on a 48GB, 8 cores server).\n\n> Please let me know if you can see something obvious I am missing.\n\nYour IO system needs to be able to deliver sustained IO bandwith at\nleast as large as you need to read and write all the changes. What raw\nIO bandwidth do those NFS file systems deliver _long term_? I am not\ntalking about spikes because there are buffers. I am talking about the\nminimum of network throughput on one hand and raw disk IO those boxes\ncan do on the other hand. Then, how much of it is available to your\nslave? Did you do the math to ensure that the IO bandwidth you have\navailable on the slave is at least as high as what is needed? Note\nthat it's not simply the WAL size that needs to be written and read\nbut also data pages.\n\nKind regards\n\nrobert\n\n-- \n[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 8 Nov 2014 21:35:47 +0100",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
},
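[The bandwidth math Robert asks for is quick to sanity-check with the thread's own numbers, and the noatime change he suggests can be applied online. A sketch, assuming root on the slave and the volumes listed above:

# ~400 GB of WAL per day must be fetched and replayed, before counting the
# data-page writes that replay itself generates:
echo "scale=2; 400 * 1024 / 86400" | bc    # ~4.74 MB/s sustained, for WAL alone

# Remount the NFS volumes with noatime so reads stop triggering inode updates:
mount -o remount,noatime dbnasg403-12a:/vol/dodpupdbtst03 /ORA/dbs03/PUPDBTST
mount -o remount,noatime dbnasg401-12a:/vol/dodpupdbtst02 /ORA/dbs02/PUPDBTST]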
{
"msg_contents": "Hi,\n\nOn 2014-11-08 13:11:25 +0000, Ruben Domingo Gaspar Aparicio wrote:\n> Hello,\n> \n> I have built up a hot-standby between a master running 9.2.4 and a slave running 9.2.9. I did the initial copy using \"pg_basebackup\" . My recovery.conf looks like:\n> \n> standby_mode = 'on'\n> primary_conninfo = 'host=pXXXXXXXX port=XXXX user=replicator'\n> trigger_file = 'failover.now'\n> restore_command = 'test -f /ORA/dbs02/PUPDBTST/archive/%f && ln -fns /ORA/dbs02/PUPDBTST/archive/%f %p'\n> archive_cleanup_command = '/usr/local/pgsql/postgresql-9.2.9/bin/pg_archivecleanup /ORA/dbs03/PUPDBTST/data %r'\n> \n> The slave (I don't have control on the master) is using 2 NFS file systems, one for WALs and another one for the data, on Netapp controllers:\n> \n> dbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\n> dbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=600)\n> \n> The streaming is working perfectly:\n> \n> /usr/local/pgsql/postgresql-9.2.9/bin/psql -U admin -h XXXXX -p XXXXX -d puppetdb -c \"select pid,usesysid, usename, application_name, client_addr, state, sent_location,write_location,replay_location,sync_state from pg_stat_replication;\"\n> pid | usesysid | usename | application_name | client_addr | state | sent_location | write_location | replay_location | sync_state\n> --------+----------+------------+------------------+-------------+-----------+---------------+----------------+-----------------+------------\n> 117659 | 16384 | replicator | walreceiver | 10.16.7.137 | streaming | AA74/DD630978 | AA74/DD630978 | A977/F84F0BE0 | async\n> (1 row)\n> \n> But the lag is increasing constantly, it looks the replay can not cope with:\n\nI have a couple of questions:\n1) Is the standby actually used for querying? Is it possible that replay\n frequently conflicts with active queries? As you don't have\n hot_standby_feedback enabled that seems quite possible.\n2) Is the startup process on the standby CPU or IO bound?\n3) Does the workload involve loads of temporary tables or generally\n transactions locking lots of tables exclusively in one transaction?\n\nGreetings,\n\nAndres Freund\n\n-- \n Andres Freund\t http://www.2ndQuadrant.com/\n PostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 9 Nov 2014 23:44:46 +0100",
"msg_from": "Andres Freund <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
},
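[A sketch of how Andres's first two questions could be checked on the standby. Connection parameters are the placeholders used elsewhere in the thread; the pgrep pattern assumes the default 9.2 process title and is an illustration, not something from the thread:

# 1) Recovery conflicts, in case standby queries were cancelling or delaying replay:
/usr/local/pgsql/postgresql-9.2.9/bin/psql -U postgres -h /var/lib/pgsql/ -p 6600 -d puppetdb \
  -c "SELECT * FROM pg_stat_database_conflicts WHERE datname = 'puppetdb';"

# 2) Whether the startup (recovery) process is CPU- or IO-bound:
top -b -n 1 -p "$(pgrep -f 'startup process')"]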
{
"msg_contents": "Indeed I could save some IO with noatime. I must say I haven’t found any recommendation about mount options for postgresql, likely because this is not encourage. The ones you see are taking from a Oracle cluster configuration where several nodes see the same files. It's not the case on this setup.\r\n\r\nThe IO is not an issue. The storage is not at all saturated. Slave gets streaming perfectly but the apply is quite slow, looks like always working with pages of 8k at a time:\r\n\r\n--datafiles\r\n[root@ ~]# /ORA/dbs01/syscontrol/projects/dfm/bin/smetrics -i 5 -n 100 -o vol dbnasg403-12a:/vol/dodpupdbtst03 Instance total_ops read_ops write_ops read_data write_data avg_latency read_latency write_latenc\r\n /s /s /s b/s b/s us us us\r\ndodpupdbtst03 6466 0 162 0 2350619 31.53 0 764.70\r\ndodpupdbtst03 6762 0 843 0 8751169 48.32 0 263.10\r\ndodpupdbtst03 7023 0 1547 0 14914498 112.88 0 303.16\r\ndodpupdbtst03 5373 0 321 6501 3809930 58.44 11287.75 467.21\r\ndodpupdbtst03 5618 0 183 0 1661200 20.91 0 265.61\r\ndodpupdbtst03 5538 0 214 0 3471380 29.24 0 374.27\r\ndodpupdbtst03 5753 0 425 0 4973131 45.36 0 351.08\r\ndodpupdbtst03 6110 0 142 0 2331695 20.96 0 378.95\r\n\r\n--WALs\r\nBye Bye[root@ ~]# /ORA/dbs01/syscontrol/projects/dfm/bin/smetrics -i 5 -n 100 -o vol dbnasg401-12a:/vol/dodpupdbtst02 Instance total_ops read_ops write_ops read_data write_data avg_latency read_latency write_latenc\r\n /s /s /s b/s b/s us us us\r\ndodpupdbtst02 1017 202 93 5915899 2637111 2033.22 10116.09 172.61\r\ndodpupdbtst02 1284 242 141 7368712 4309409 1235.11 6306.37 172.89\r\ndodpupdbtst02 1357 231 268 6869816 8489466 957.55 5104.09 192.26\r\ndodpupdbtst02 1566 264 288 8142965 9008529 747.96 4069.78 180.00\r\ndodpupdbtst02 1333 235 153 7601051 4755791 993.81 5394.99 176.99\r\ndodpupdbtst02 1261 199 287 6124821 9075170 896.32 5150.28 203.81\r\ndodpupdbtst02 963 161 192 4955996 6066333 1757.66 10035.06 213.12\r\ndodpupdbtst02 924 159 157 4782617 4807262 1092.61 5804.85 236.91\r\ndodpupdbtst02 591 97 137 2899085 4275046 1218.24 6980.66 190.20\r\n\r\nWrites are usually fast (us as they use the NVRAM )and reads are about 5 ms which is quite ok considering SATA disks (they have a flash cache of 512GB, this is why we get this average).\r\n\r\nThank you,\r\nRuben\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 10:14:24 +0000",
"msg_from": "Ruben Domingo Gaspar Aparicio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
},
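[The filer-side smetrics figures above can be cross-checked from the NFS client with stock tools; a sketch, assuming nfsstat (nfs-utils) and vmstat (procps) are installed on the slave:

nfsstat -c    # cumulative client-side RPC counts per NFS operation
vmstat 5      # CPU, iowait and context switches on the slave, sampled every 5 s]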
{
"msg_contents": "Hi Andres,\n\nSorry for my delay to reply. Here below my replies:\n\n> I have a couple of questions:\n> 1) Is the standby actually used for querying? Is it possible that replay\n> frequently conflicts with active queries? As you don't have\n> hot_standby_feedback enabled that seems quite possible.\n\nNowadays we don't manage to have a decent lag so the standby is not use at all. No clients connect to it.\n\n> 2) Is the startup process on the standby CPU or IO bound?\n\nThe servers is almost idle. I don't see any bottle neck either on CPU or IO.\n\n> 3) Does the workload involve loads of temporary tables or generally\n> transactions locking lots of tables exclusively in one transaction?\n\nWe have monitored the master for a couple of days we haven't detected any \"create temp table\" statement.\nFor the locks I see it's also not the case in my opinion, at a given point in time I don't see many tables lock in exclusive mode:\n\npuppetdb=# SELECT locktype, relation::regclass, mode, transactionid AS tid, datname,\nvirtualtransaction AS vtid, pid, granted\n FROM pg_catalog.pg_locks l LEFT JOIN pg_catalog.pg_database db\n ON db.oid = l.database WHERE NOT pid = pg_backend_pid();\n locktype | relation | mode | tid | datname | vtid | pid | granted\n---------------+----------------------------------------------+------------------+-----------+----------+------------+--------+---------\n relation | resource_params_cache_pkey | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | resource_params_cache | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalog_resources_pkey | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalog_resources_exported_true | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalog_resources_resource | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalog_resources_type | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalog_resources_type_title | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalog_resources | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalogs_transaction_uuid | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | idx_catalogs_transaction_uuid | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_certname_key | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_certname_key | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_hash_key | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_hash_key | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_pkey | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs_pkey | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | catalogs | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | certnames_pkey | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | certnames_pkey | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n relation | certnames | AccessShareLock | | puppetdb | 5/3099716 | 54422 | t\n relation | certnames | RowExclusiveLock | | puppetdb | 5/3099716 | 54422 | t\n virtualxid | | ExclusiveLock | | | 5/3099716 | 54422 | t\n relation | resource_params_cache_pkey | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | 
resource_params_cache | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalog_resources_pkey | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalog_resources_exported_true | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalog_resources_resource | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalog_resources_type | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalog_resources_type_title | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalog_resources | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalogs_transaction_uuid | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | idx_catalogs_transaction_uuid | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_certname_key | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_certname_key | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_hash_key | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_hash_key | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_pkey | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs_pkey | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | catalogs | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | certnames_pkey | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | certnames_pkey | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n relation | certnames | AccessShareLock | | puppetdb | 20/2901642 | 100098 | t\n relation | certnames | RowExclusiveLock | | puppetdb | 20/2901642 | 100098 | t\n virtualxid | | ExclusiveLock | | | 20/2901642 | 100098 | t\n relation | idx_catalogs_transaction_uuid | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | catalogs_certname_key | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | catalogs_hash_key | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | catalogs_pkey | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | catalogs | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | certnames_pkey | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | certnames_pkey | RowExclusiveLock | | puppetdb | 21/2767248 | 13044 | t\n relation | certnames | AccessShareLock | | puppetdb | 21/2767248 | 13044 | t\n relation | certnames | RowExclusiveLock | | puppetdb | 21/2767248 | 13044 | t\n virtualxid | | ExclusiveLock | | | 21/2767248 | 13044 | t\n relation | edges | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | resource_params_cache_pkey | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | resource_params_cache | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | resource_params_cache | RowShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalog_resources_pkey | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalog_resources_pkey | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_exported_true | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_exported_true | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | 
idx_catalog_resources_resource | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_resource | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_type | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_type | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_type_title | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalog_resources_type_title | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalog_resources | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalog_resources | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalogs_transaction_uuid | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | idx_catalogs_transaction_uuid | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_certname_key | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_certname_key | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_hash_key | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_hash_key | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_pkey | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs_pkey | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | catalogs | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | certnames_pkey | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | certnames_pkey | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n relation | certnames | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | certnames | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n virtualxid | | ExclusiveLock | | | 27/1597400 | 77873 | t\n relation | certnames | RowShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | edges | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n transactionid | | ExclusiveLock | 191755866 | | 27/1597400 | 77873 | t\n transactionid | | ExclusiveLock | 191755880 | | 20/2901642 | 100098 | t\n relation | edges_certname_source_target_type_unique_key | AccessShareLock | | puppetdb | 27/1597400 | 77873 | t\n relation | edges_certname_source_target_type_unique_key | RowExclusiveLock | | puppetdb | 27/1597400 | 77873 | t\n transactionid | | ExclusiveLock | 191755874 | | 5/3099716 | 54422 | t\n transactionid | | ExclusiveLock | 191755883 | | 21/2767248 | 13044 | t\n(95 rows)\n\nJust to comment that we are running a DBaaS; we don't know much about the apps running on our servers.\nThank you,\nRuben\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 28 Nov 2014 16:21:43 +0000",
"msg_from": "Ruben Domingo Gaspar Aparicio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
},
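The lock listing above has the shape of a straight pg_locks dump. A minimal sketch of a query that produces that layout (the pg_database join to resolve the database name is an assumption about how the report was generated):

SELECT l.locktype,
       l.relation::regclass AS relation,
       l.mode,
       l.transactionid,
       d.datname AS database,
       l.virtualtransaction,
       l.pid,
       l.granted
FROM pg_locks l
LEFT JOIN pg_database d ON d.oid = l.database
ORDER BY l.pid, l.locktype;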
{
"msg_contents": " \r\n> > The slave (I don't have control on the master) is using 2 NFS file\r\n> > systems, one for WALs and another one for the data, on Netapp controllers:\r\n> >\r\n> > dbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs\r\n> >\r\n> (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=60\r\n> > 0)\r\n> >\r\n> > dbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs\r\n> >\r\n> (rw,remount,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,timeo=60\r\n> > 0)\r\n> \r\n> You should use noatime to avoid unnecessary IO.\r\n> \r\n\r\nJust to mention that changing the mount points from:\r\n\r\ndbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs (rw, actimeo=0,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\ndbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs (rw, actimeo=0,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n\r\nto\r\n\r\ndbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs (rw,noatime,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\ndbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs (rw,noatime,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n\r\nit did have a big impact. The profile of the recovery process on terms of calls changed quite a lot:\r\n\r\nFrom: \r\n\r\n[postgres@itrac1202 tmp]$ strace -p 9596 -c\r\nProcess 9596 attached - interrupt to quit\r\nProcess 9596 detached\r\n% time seconds usecs/call calls errors syscall\r\n------ ----------- ----------- --------- --------- ----------------\r\n78.73 0.217824 0 456855 381376 read\r\n17.87 0.049453 0 515320 lseek\r\n 2.89 0.007989 12 669 669 poll\r\n 0.33 0.000912 25 36 open\r\n 0.07 0.000206 0 994 994 stat\r\n 0.05 0.000151 0 995 787 rt_sigreturn\r\n 0.05 0.000133 0 673 write\r\n 0.00 0.000000 0 36 close\r\n 0.00 0.000000 0 52 kill\r\n------ ----------- ----------- --------- --------- ----------------\r\n100.00 0.276668 975630 383826 total\r\n\r\n\r\nTo:\r\n\r\n[postgres@itrac1202 tmp]$ strace -p 9596 -c\r\nProcess 9596 attached - interrupt to quit\r\nProcess 9596 detached\r\n% time seconds usecs/call calls errors syscall\r\n------ ----------- ----------- --------- --------- ----------------\r\n78.73 0.217824 0 456855 381376 read\r\n17.87 0.049453 0 515320 lseek\r\n 2.89 0.007989 12 669 669 poll\r\n 0.33 0.000912 25 36 open\r\n 0.07 0.000206 0 994 994 stat\r\n 0.05 0.000151 0 995 787 rt_sigreturn\r\n 0.05 0.000133 0 673 write\r\n 0.00 0.000000 0 36 close\r\n 0.00 0.000000 0 52 kill\r\n------ ----------- ----------- --------- --------- ----------------\r\n100.00 0.276668 975630 383826 total\r\n\r\nWe did also increased the shared_buffers from 12 to 24GB.\r\n\r\nThe lag has decreased most of the time:\r\n\r\n*/10 * * * * /usr/local/pgsql/postgresql-9.2.9/bin/psql -U postgres -h /var/lib/pgsql/ -p 6600 -d puppetdb -c \"SELECT now(), now() - pg_last_xact_replay_timestamp() AS time_lag\" | perl -ne 'if (/\\|\\s+(\\d{2}):(\\d{2}):(\\d{2})\\.\\d+/) {$hour=$1;$min=$2;$sec=$3; print $_;}' >> /tmp/lag929morememmount.log\r\n\r\n\r\n...\r\n2014-12-14 14:10:01.688947+01 | 00:00:00.096524\r\n 2014-12-14 14:20:01.798223+01 | 00:00:00.024083\r\n 2014-12-14 14:30:01.884448+01 | 00:00:00.420791\r\n 2014-12-14 14:40:01.960623+01 | 00:00:00.168318\r\n 2014-12-14 14:50:01.191487+01 | 00:00:00.163832\r\n 2014-12-14 15:00:02.146436+01 | 00:00:00.026934\r\n 2014-12-14 15:10:01.277963+01 | 00:00:00.332185\r\n 2014-12-14 15:20:01.353979+01 | 00:00:00.020616\r\n 2014-12-14 15:30:01.417092+01 | 00:00:00.584768\r\n 2014-12-14 
15:40:01.575347+01 | 00:00:00.151685\r\n 2014-12-14 15:50:01.205507+01 | 00:00:00.102073\r\n 2014-12-14 16:00:01.321511+01 | 00:00:00.590677\r\n 2014-12-14 16:10:01.570474+01 | 00:00:00.182683\r\n 2014-12-14 16:20:01.640095+01 | 00:00:00.420185\r\n 2014-12-14 16:30:01.767033+01 | 00:00:00.015989\r\n 2014-12-14 16:40:01.849532+01 | 00:00:00.106296\r\n 2014-12-14 16:50:01.920876+01 | 00:00:00.258851\r\n 2014-12-14 17:00:02.000278+01 | 00:00:00.119841\r\n 2014-12-14 17:10:01.894227+01 | 00:00:00.091599\r\n 2014-12-14 17:20:01.61729+01 | 00:00:00.367367\r\n 2014-12-14 17:30:01.683326+01 | 00:00:00.103884\r\n 2014-12-14 17:40:01.755904+01 | 00:00:00.051262\r\n 2014-12-14 17:50:01.833825+01 | 00:00:00.06901\r\n 2014-12-14 18:00:01.901236+01 | 00:00:00.17467\r\n 2014-12-14 18:10:01.186283+01 | 00:00:00.214941\r\n 2014-12-14 18:20:01.145413+01 | 00:00:00.03517\r\n 2014-12-14 18:30:01.241746+01 | 00:00:00.207842\r\n 2014-12-14 18:40:01.299413+01 | 00:00:00.147878\r\n 2014-12-14 18:50:01.368541+01 | 00:00:00.393893\r\n 2014-12-14 19:00:01.430736+01 | 00:00:00.031226\r\n 2014-12-14 19:10:01.672117+01 | 00:05:03.512832\r\n 2014-12-14 19:20:01.9195+01 | 00:06:39.08761\r\n 2014-12-14 19:30:02.184486+01 | 00:00:00.307668\r\n 2014-12-14 19:40:01.227278+01 | 00:00:00.054831\r\n 2014-12-14 19:50:01.305485+01 | 00:00:00.425595\r\n 2014-12-14 20:00:01.410501+01 | 00:00:00.394526\r\n 2014-12-14 20:10:01.984196+01 | 00:00:00.388844\r\n 2014-12-14 20:20:01.031042+01 | 00:00:00.503092\r\n 2014-12-14 20:30:01.225871+01 | 00:00:00.241493\r\n 2014-12-14 20:40:01.305696+01 | 00:00:00.280656\r\n 2014-12-14 20:50:01.379617+01 | 00:00:00.151103\r\n 2014-12-14 21:00:01.468849+01 | 00:00:00.014412\r\n 2014-12-14 21:10:01.724514+01 | 00:00:00.147476\r\n 2014-12-14 21:20:01.799292+01 | 00:00:00.08696\r\n 2014-12-14 21:30:01.866336+01 | 00:00:00.035226\r\n 2014-12-14 21:40:01.942882+01 | 00:00:00.111701\r\n 2014-12-14 21:50:02.010419+01 | 00:00:00.215121\r\n 2014-12-14 22:00:01.110033+01 | 00:00:16.460612\r\n 2014-12-14 22:10:01.568286+01 | 00:00:00.077897\r\n 2014-12-14 22:20:01.682714+01 | 00:00:00.104112\r\n 2014-12-14 22:30:01.758958+01 | 00:00:00.061474\r\n 2014-12-14 22:40:01.970545+01 | 00:00:00.108613\r\n 2014-12-14 22:50:01.038908+01 | 00:00:00.039637\r\n 2014-12-14 23:00:01.120295+01 | 00:00:00.338731\r\n 2014-12-14 23:10:01.365371+01 | 00:00:00.680065\r\n 2014-12-14 23:20:01.423365+01 | 00:00:00.154614\r\n 2014-12-14 23:30:01.48998+01 | 00:00:00.014643\r\n 2014-12-14 23:40:01.569452+01 | 00:00:00.126961\r\n 2014-12-14 23:50:01.63047+01 | 00:00:00.303156\r\n 2014-12-15 00:00:01.278047+01 | 00:00:00.351391\r\n 2014-12-15 00:10:01.382566+01 | 00:00:00.012265\r\n 2014-12-15 00:20:01.444746+01 | 00:07:39.002651\r\n 2014-12-15 00:30:01.510413+01 | 00:16:13.476753\r\n 2014-12-15 00:40:01.97735+01 | 00:00:00.105011\r\n 2014-12-15 00:50:01.082528+01 | 00:01:10.313796\r\n 2014-12-15 01:00:01.124843+01 | 00:00:01.508016\r\n 2014-12-15 01:10:01.818415+01 | 00:00:00.082441\r\n 2014-12-15 01:20:01.961064+01 | 00:00:00.048221\r\n 2014-12-15 01:30:01.355472+01 | 00:00:00.37941\r\n 2014-12-15 01:40:01.42728+01 | 00:00:00.013836\r\n 2014-12-15 01:50:01.486446+01 | 00:00:00.110321\r\n 2014-12-15 02:00:01.566731+01 | 00:00:00.290281\r\n 2014-12-15 02:10:01.236574+01 | 00:01:15.954532\r\n 2014-12-15 02:20:01.440259+01 | 00:00:00.471677\r\n 2014-12-15 02:30:01.5733+01 | 00:00:00.208574\r\n 2014-12-15 02:40:01.662425+01 | 00:00:00.591091\r\n 2014-12-15 02:50:01.263385+01 | 00:00:00.050648\r\n 2014-12-15 03:00:01.340777+01 | 
00:00:00.289115\r\n 2014-12-15 03:10:01.993079+01 | 00:00:00.790201\r\n 2014-12-15 03:20:01.061826+01 | 00:00:00.043176\r\n 2014-12-15 03:30:01.125639+01 | 00:00:00.172924\r\n 2014-12-15 03:40:01.252033+01 | 00:03:05.113579\r\n 2014-12-15 03:50:01.362396+01 | 00:00:00.254974\r\n 2014-12-15 04:00:01.370922+01 | 00:00:00.208254\r\n 2014-12-15 04:10:01.472816+01 | 00:00:00.077214\r\n 2014-12-15 04:20:01.553443+01 | 00:00:00.135887\r\n 2014-12-15 04:30:01.63607+01 | 00:00:00.027272\r\n 2014-12-15 04:40:01.696442+01 | 00:00:00.130954\r\n 2014-12-15 04:50:01.786961+01 | 00:00:00.572573\r\n 2014-12-15 05:00:01.790753+01 | 00:00:00.491799\r\n 2014-12-15 05:10:01.078332+01 | 00:07:58.438202 **** likely autovacuum\r\n 2014-12-15 05:20:01.139541+01 | 00:00:00.057486\r\n 2014-12-15 05:30:01.251079+01 | 00:00:00.053462\r\n 2014-12-15 05:40:01.322349+01 | 00:00:00.084701\r\n 2014-12-15 05:50:01.607937+01 | 00:00:00.205241\r\n 2014-12-15 06:00:01.699406+01 | 00:00:00.121415\r\n 2014-12-15 06:10:01.756047+01 | 00:00:00.20769\r\n 2014-12-15 06:20:01.854222+01 | 00:00:00.03397\r\n 2014-12-15 06:30:02.041054+01 | 00:03:07.271295\r\n 2014-12-15 06:40:01.891882+01 | 00:00:00.263748\r\n 2014-12-15 06:50:01.987809+01 | 00:00:00.155619\r\n 2014-12-15 07:00:01.068556+01 | 00:00:00.119866\r\n 2014-12-15 07:10:01.318478+01 | 00:00:00.092856\r\n 2014-12-15 07:20:01.704899+01 | 00:00:00.106533\r\n 2014-12-15 07:30:01.773268+01 | 00:00:00.135743\r\n 2014-12-15 07:40:01.730152+01 | 00:00:00.06358\r\n 2014-12-15 07:50:01.798179+01 | 00:00:00.529685\r\n 2014-12-15 08:00:01.868205+01 | 00:00:00.194482\r\n 2014-12-15 08:10:01.219339+01 | 00:00:00.063553\r\n 2014-12-15 08:20:01.309426+01 | 00:00:00.056698\r\n 2014-12-15 08:30:01.120431+01 | 00:00:00.425596\r\n 2014-12-15 08:40:01.201882+01 | 00:00:00.00909\r\n 2014-12-15 08:50:01.272526+01 | 00:00:00.019492\r\n 2014-12-15 09:00:01.361022+01 | 00:00:00.423997\r\n 2014-12-15 09:10:01.603702+01 | 00:00:00.066705\r\n 2014-12-15 09:20:01.682277+01 | 00:00:09.251202\r\n 2014-12-15 09:30:01.934477+01 | 00:00:00.311553\r\n 2014-12-15 09:40:02.03221+01 | 00:00:00.125678\r\n 2014-12-15 09:50:01.105372+01 | 00:00:00.294006\r\n 2014-12-15 10:00:01.201109+01 | 00:00:00.014641\r\n 2014-12-15 10:10:01.164478+01 | 00:01:51.375378\r\n 2014-12-15 10:20:01.264589+01 | 00:09:54.476361 **** likely autovacuum\r\n 2014-12-15 10:30:01.351103+01 | 00:00:00.213636\r\n 2014-12-15 10:40:01.623903+01 | 00:00:00.488103\r\n 2014-12-15 10:50:01.768132+01 | 00:00:00.080799\r\n 2014-12-15 11:00:01.880247+01 | 00:00:20.401738\r\n 2014-12-15 11:10:01.215509+01 | 00:00:00.036288\r\n 2014-12-15 11:20:01.265607+01 | 00:00:00.057142\r\n 2014-12-15 11:30:01.343731+01 | 00:00:00.036609\r\n 2014-12-15 11:40:01.41248+01 | 00:00:00.218139\r\n 2014-12-15 11:50:01.48113+01 | 00:00:00.242754\r\n 2014-12-15 12:00:01.685114+01 | 00:00:00.82528\r\n 2014-12-15 12:10:01.995243+01 | 00:02:29.971448\r\n 2014-12-15 12:20:01.962833+01 | 00:00:00.118112\r\n 2014-12-15 12:30:01.100587+01 | 00:00:00.214437\r\n 2014-12-15 12:40:01.226111+01 | 00:00:00.052599\r\n 2014-12-15 12:50:01.300061+01 | 00:00:00.162205\r\n 2014-12-15 13:00:01.4007+01 | 00:00:00.707891\r\n 2014-12-15 13:10:02.005526+01 | 00:00:00.162238\r\n 2014-12-15 13:20:01.072375+01 | 00:00:00.214978\r\n 2014-12-15 13:30:01.446005+01 | 00:00:00.121816\r\n 2014-12-15 13:40:01.483524+01 | 00:00:00.650178\r\n 2014-12-15 13:50:01.796143+01 | 00:00:00.065482\r\n 2014-12-15 14:00:01.886071+01 | 00:00:00.237577\r\n 2014-12-15 14:10:01.134148+01 | 00:00:00.193941\r\n 2014-12-15 
14:20:01.199047+01 | 00:00:00.068058\r\n 2014-12-15 14:30:01.27777+01 | 00:00:00.022991\r\n 2014-12-15 14:40:01.361959+01 | 00:00:00.439753\r\n 2014-12-15 14:50:01.421515+01 | 00:00:00.037749\r\n 2014-12-15 15:00:01.500559+01 | 00:00:00.174448\r\n 2014-12-15 15:10:01.811804+01 | 00:06:09.196648 **** likely autovacuum\r\n..\r\n\r\n\r\n\r\nIt goes up till a maximum of 25 minutes (for the last two weeks), it looks correlated with an autovacuum at the master in one of the big tables of the schema. It happens at about 5hours interval. Is there a way to avoid this ? Should I ask to the master db dba to try to have a more active autovacuum policy?\r\n\r\n\r\nThank you,\r\nRuben\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Dec 2014 14:18:13 +0000",
"msg_from": "Ruben Domingo Gaspar Aparicio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
},
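If the lag spikes really do line up with autovacuum of one large table on the master, per-table storage parameters are one way to trade a few huge vacuum passes for many small ones, which also smooths the WAL bursts the standby has to replay. A sketch only — the table name is a guess at one of the large puppetdb tables seen earlier in the thread, and the threshold is illustrative:

-- run on the master; table name and threshold are assumptions
ALTER TABLE catalog_resources SET (
    autovacuum_vacuum_scale_factor = 0.02  -- vacuum at ~2% dead rows instead of the 20% default
);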
{
"msg_contents": "> dbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs (rw,\r\n> actimeo=0,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n> dbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs (rw,\r\n> actimeo=0,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n> \r\n> to\r\n> \r\n> dbnasg403-12a:/vol/dodpupdbtst03 on /ORA/dbs03/PUPDBTST type nfs\r\n> (rw,noatime,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n> dbnasg401-12a:/vol/dodpupdbtst02 on /ORA/dbs02/PUPDBTST type nfs\r\n> (rw,noatime,hard,nointr,rsize=65536,wsize=65536,tcp,timeo=600)\r\n> \r\n> it did have a big impact. The profile of the recovery process on terms of calls\r\n> changed quite a lot:\r\n> \r\n> From:\r\n> \r\n> [postgres@itrac1202 tmp]$ strace -p 9596 -c Process 9596 attached -\r\n> interrupt to quit Process 9596 detached\r\n> % time seconds usecs/call calls errors syscall\r\n> ------ ----------- ----------- --------- --------- ----------------\r\n> 78.73 0.217824 0 456855 381376 read\r\n> 17.87 0.049453 0 515320 lseek\r\n> 2.89 0.007989 12 669 669 poll\r\n> 0.33 0.000912 25 36 open\r\n> 0.07 0.000206 0 994 994 stat\r\n> 0.05 0.000151 0 995 787 rt_sigreturn\r\n> 0.05 0.000133 0 673 write\r\n> 0.00 0.000000 0 36 close\r\n> 0.00 0.000000 0 52 kill\r\n> ------ ----------- ----------- --------- --------- ----------------\r\n> 100.00 0.276668 975630 383826 total\r\n> \r\n\r\nThis one should read:\r\n\r\n[root@itrac1202 ~]# strace -c -p 28073\r\nProcess 28073 attached - interrupt to quit\r\n\r\nProcess 28073 detached\r\n% time seconds usecs/call calls errors syscall\r\n------ ----------- ----------- --------- --------- ----------------\r\n59.16 10.756007 5 2201974 1202832 read\r\n40.69 7.398247 3 2367885 lseek\r\n 0.14 0.025970 154 169 open\r\n 0.00 0.000057 0 169 close\r\n 0.00 0.000038 0 169 kill\r\n 0.00 0.000033 1 29 write\r\n 0.00 0.000000 0 1 semop\r\n------ ----------- ----------- --------- --------- ----------------\r\n100.00 18.180352 4570396 1202832 total\r\n\r\n\r\nApologies for the confusion.\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Dec 2014 14:32:02 +0000",
"msg_from": "Ruben Domingo Gaspar Aparicio <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Postgres slave not catching up (on 9.2)"
}
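Since the corrected profile is dominated by read() and lseek() from the startup process, one way to gauge whether the shared_buffers increase is absorbing those reads is the per-database hit ratio. A sketch, with the caveat that recovery-side reads are not necessarily all attributed to these counters:

SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
ORDER BY blks_read DESC;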
] |
[
{
"msg_contents": "Hi,\nI have created a sample database with test data to help benchmark our\napplication. The database has ten million records, and is running on a\ndedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty\nslow with this amount of data and is my job to get them to run to at\nacceptable speed. First thing that I notice was that the planner's row\nestimates are off by a large number or records (millions) I have updated\nthe statistics target but didn't seem to make a difference. The relevant\noutput follows.\nAm I looking in the wrong place, something else I should be trying?\nThanks in advance for your comments/suggestions,\nEric.\n\n\n=# show work_mem;\n work_mem\n----------\n 1GB\n(1 row)\n=# show effective_cache_size;\n effective_cache_size\n----------------------\n 5GB\n(1 row)\n\n=#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN\nhousehold_member_first_name SET STATISTICS 5000;\n=# vacuum analyse TAR_MVW_TARGETING_RECORD;\n\n=# \\d tar_mvw_targeting_record;\n Table \"public.tar_mvw_targeting_record\"\n Column | Type | Modifiers\n-----------------------------+-----------------------+-----------\n household_member_id | bigint |\n form_id | bigint |\n status | character varying(64) |\n gender | character varying(64) |\n household_member_first_name | character varying(64) |\n household_member_last_name | character varying(64) |\n\nIndexes:\n \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE, btree\n(form_id, household_member_id)\n \"tar_mvw_targeting_record_lower_idx\" gist\n(lower(household_member_first_name::text) extensions.gist_trgm_ops)\n WHERE status::text <> 'ANULLED'::text\n \"tar_mvw_targeting_record_lower_idx1\" gist\n(lower(household_member_last_name::text) extensions.gist_trgm_ops)\n WHERE status::text <> 'ANULLED'::text\n\n\n=# explain (analyse on,buffers on)select T.form_id from\nTAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\nLOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND\nT.gender='FEMALE' group by T.form_id;\n\nQUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------\n HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual\ntime=11932.959..12061.206 rows=442453 loops=1)\n Buffers: shared hit=307404 read=109743\n -> Bitmap Heap Scan on tar_mvw_targeting_record t\n(cost=110866.33..448495.37 rows=999592 width=8) (actual\ntime=3577.301..11629.132 row\ns=500373 loops=1)\n Recheck Cond: ((lower((household_member_last_name)::text) ~~\n'%tu%'::text) AND ((status)::text <> 'ANULLED'::text))\n Rows Removed by Index Recheck: 9000079\n Filter: ((gender)::text = 'FEMALE'::text)\n Rows Removed by Filter: 499560\n Buffers: shared hit=307404 read=109743\n -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1\n(cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3\n471.142 rows=10000012 loops=1)\n Index Cond: (lower((household_member_last_name)::text) ~~\n'%tu%'::text)\n Buffers: shared hit=36583 read=82935\n Total runtime: 12092.059 ms\n(12 rows)\n\nTime: 12093.107 ms\n\np.s. this plan was ran three times, first time took 74 seconds.\n\nHi,I have created a sample database with test data to help benchmark our application. The database has ten million records, and is running on a dedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty slow with this amount of data and is my job to get them to run to at acceptable speed. 
First thing that I notice was that the planner's row estimates are off by a large number or records (millions) I have updated the statistics target but didn't seem to make a difference. The relevant output follows.Am I looking in the wrong place, something else I should be trying? Thanks in advance for your comments/suggestions,Eric. =# show work_mem; work_mem ---------- 1GB(1 row)=# show effective_cache_size; effective_cache_size ---------------------- 5GB(1 row)=#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN household_member_first_name SET STATISTICS 5000;=# vacuum analyse TAR_MVW_TARGETING_RECORD; =# \\d tar_mvw_targeting_record; Table \"public.tar_mvw_targeting_record\" Column | Type | Modifiers -----------------------------+-----------------------+----------- household_member_id | bigint | form_id | bigint | status | character varying(64) | gender | character varying(64) | household_member_first_name | character varying(64) | household_member_last_name | character varying(64) | Indexes: \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE, btree (form_id, household_member_id) \"tar_mvw_targeting_record_lower_idx\" gist (lower(household_member_first_name::text) extensions.gist_trgm_ops) WHERE status::text <> 'ANULLED'::text \"tar_mvw_targeting_record_lower_idx1\" gist (lower(household_member_last_name::text) extensions.gist_trgm_ops) WHERE status::text <> 'ANULLED'::text=# explain (analyse on,buffers on)select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND T.gender='FEMALE' group by T.form_id; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual time=11932.959..12061.206 rows=442453 loops=1) Buffers: shared hit=307404 read=109743 -> Bitmap Heap Scan on tar_mvw_targeting_record t (cost=110866.33..448495.37 rows=999592 width=8) (actual time=3577.301..11629.132 rows=500373 loops=1) Recheck Cond: ((lower((household_member_last_name)::text) ~~ '%tu%'::text) AND ((status)::text <> 'ANULLED'::text)) Rows Removed by Index Recheck: 9000079 Filter: ((gender)::text = 'FEMALE'::text) Rows Removed by Filter: 499560 Buffers: shared hit=307404 read=109743 -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1 (cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3471.142 rows=10000012 loops=1) Index Cond: (lower((household_member_last_name)::text) ~~ '%tu%'::text) Buffers: shared hit=36583 read=82935 Total runtime: 12092.059 ms(12 rows)Time: 12093.107 msp.s. this plan was ran three times, first time took 74 seconds.",
"msg_date": "Mon, 10 Nov 2014 12:43:00 -0500",
"msg_from": "Eric Ramirez <[email protected]>",
"msg_from_op": true,
"msg_subject": "updating statistics on slow running query"
},
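One detail worth flagging in the message above: the SET STATISTICS bump targets household_member_first_name, while the slow predicate filters household_member_last_name. A sketch of the analogous change for the column the query actually uses (5000 kept for symmetry; the ANALYZE is required for it to take effect):

ALTER TABLE tar_mvw_targeting_record
    ALTER COLUMN household_member_last_name SET STATISTICS 5000;
ANALYZE tar_mvw_targeting_record;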
{
"msg_contents": "2014-11-10 18:43 GMT+01:00 Eric Ramirez <[email protected]>:\n\n>\n> Hi,\n> I have created a sample database with test data to help benchmark our\n> application. The database has ten million records, and is running on a\n> dedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty\n> slow with this amount of data and is my job to get them to run to at\n> acceptable speed. First thing that I notice was that the planner's row\n> estimates are off by a large number or records (millions) I have updated\n> the statistics target but didn't seem to make a difference. The relevant\n> output follows.\n> Am I looking in the wrong place, something else I should be trying?\n> Thanks in advance for your comments/suggestions,\n> Eric.\n>\n>\n> =# show work_mem;\n> work_mem\n> ----------\n> 1GB\n> (1 row)\n> =# show effective_cache_size;\n> effective_cache_size\n> ----------------------\n> 5GB\n> (1 row)\n>\n> =#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN\n> household_member_first_name SET STATISTICS 5000;\n> =# vacuum analyse TAR_MVW_TARGETING_RECORD;\n>\n> =# \\d tar_mvw_targeting_record;\n> Table \"public.tar_mvw_targeting_record\"\n> Column | Type | Modifiers\n> -----------------------------+-----------------------+-----------\n> household_member_id | bigint |\n> form_id | bigint |\n> status | character varying(64) |\n> gender | character varying(64) |\n> household_member_first_name | character varying(64) |\n> household_member_last_name | character varying(64) |\n>\n> Indexes:\n> \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE,\n> btree (form_id, household_member_id)\n> \"tar_mvw_targeting_record_lower_idx\" gist\n> (lower(household_member_first_name::text) extensions.gist_trgm_ops)\n> WHERE status::text <> 'ANULLED'::text\n> \"tar_mvw_targeting_record_lower_idx1\" gist\n> (lower(household_member_last_name::text) extensions.gist_trgm_ops)\n> WHERE status::text <> 'ANULLED'::text\n>\n>\n> =# explain (analyse on,buffers on)select T.form_id from\n> TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\n> LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND\n> T.gender='FEMALE' group by T.form_id;\n>\n> QUERY PLAN\n>\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> -------------------------------\n> HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual\n> time=11932.959..12061.206 rows=442453 loops=1)\n> Buffers: shared hit=307404 read=109743\n> -> Bitmap Heap Scan on tar_mvw_targeting_record t\n> (cost=110866.33..448495.37 rows=999592 width=8) (actual\n> time=3577.301..11629.132 row\n> s=500373 loops=1)\n> Recheck Cond: ((lower((household_member_last_name)::text) ~~\n> '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))\n> Rows Removed by Index Recheck: 9000079\n> Filter: ((gender)::text = 'FEMALE'::text)\n> Rows Removed by Filter: 499560\n> Buffers: shared hit=307404 read=109743\n> -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1\n> (cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3\n> 471.142 rows=10000012 loops=1)\n> Index Cond: (lower((household_member_last_name)::text) ~~\n> '%tu%'::text)\n> Buffers: shared hit=36583 read=82935\n> Total runtime: 12092.059 ms\n> (12 rows)\n>\n> Time: 12093.107 ms\n>\n> p.s. this plan was ran three times, first time took 74 seconds.\n>\n>\n>\nHello Eric,\n did you try with gin index instead ? 
so you could\navoid, if possible, the recheck condition (almost the gin index is not\nlossy ), further if you always use a predicate like \"gender=\" , you could\nthink to partition the indexes based on that predicate (where status NOT IN\n('ANULLED') and gender='FEMALE', in the other case it wil be where status\nNOT IN ('ANULLED') and gender='MALE' ) . Moreover you could avoid also the\n\"lower\" operator and try use directly the ilike , instead of \"like\".\n\nCREATE INDEX tar_mvw_targeting_record_idx02 ON\ntar_mvw_targeting_record USING gin ( status gin_trgm_ops) where\nstatus NOT IN ('ANULLED') and gender='FEMALE' ;\nCREATE INDEX tar_mvw_targeting_record_idx03 ON\ntar_mvw_targeting_record USING gin ( status gin_trgm_ops) where\nstatus NOT IN ('ANULLED') and gender='MALE' ;\n\n\nexplain (analyse on,buffers on) select T.form_id from\nTAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\nT.household_member_last_name ilike LOWER('%tu%') AND T.gender='FEMALE'\ngroup by T.form_id;\n\n\n I hope it works\n\nhave a nice day\n\n\n-- \nMatteo Durighetto\n\n- - - - - - - - - - - - - - - - - - - - - - -\n\nItalian PostgreSQL User Group <http://www.itpug.org/index.it.html>\nItalian Community for Geographic Free/Open-Source Software\n<http://www.gfoss.it>\n",
"msg_date": "Mon, 10 Nov 2014 19:57:08 +0100",
"msg_from": "desmodemone <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: updating statistics on slow running query"
},
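A caveat on the CREATE INDEX statements above: as written they attach gin_trgm_ops to the status column, while the trigram match in the query is against the last name, so the planner could not use them for the LIKE/ILIKE — presumably a slip. A corrected sketch (index names hypothetical; the operator class may need schema qualification in this setup, since the existing indexes use extensions.gist_trgm_ops):

CREATE INDEX tar_mvw_last_name_trgm_f_idx
    ON tar_mvw_targeting_record
    USING gin (lower(household_member_last_name) gin_trgm_ops)
    WHERE status NOT IN ('ANULLED') AND gender = 'FEMALE';

CREATE INDEX tar_mvw_last_name_trgm_m_idx
    ON tar_mvw_targeting_record
    USING gin (lower(household_member_last_name) gin_trgm_ops)
    WHERE status NOT IN ('ANULLED') AND gender = 'MALE';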
{
"msg_contents": "Hi Matteo,\nThanks for your suggestions, I just run some test with ILIKE and LIKE, and\nILIKE is consistently slower so I think I will keep the Lower functions.\nAs per your suggestion, I have switched indexes to use GIN type index,\nthey seem to build/read a bit faster, still the Recheck task continues to\nhappen in the query plan though. I have removed the Gender column from the\nquery since is not relevant in my tests. With all this playing around it\nlooks like the stats are now a bit more accurate.\nThe query went down to 9 seconds, ideally I would like to get to execute in\n2 seconds..., any thoughts on what else I could try?\nThanks again,\nEric\n\n=# explain (analyse on,buffers on)select T.form_id from\nTAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\nLOWER(T.household_member_last_name) LIKE LOWER('%tu%') group by T.form_id;\n\nQUERY PLAN\n\n-------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------\n HashAggregate (cost=557677.27..561360.83 rows=368356 width=8) (actual\ntime=10172.672..10410.068 rows=786669 loops=1)\n Buffers: shared hit=304998\n -> Bitmap Heap Scan on tar_mvw_targeting_record t\n(cost=80048.06..552677.27 rows=2000002 width=8) (actual\ntime=2481.418..9564.280 rows\n=999933 loops=1)\n Recheck Cond: ((status)::text <> 'ANULLED'::text)\n Filter: (lower((household_member_last_name)::text) ~~ '%tu%'::text)\n Rows Removed by Filter: 9000079\n Buffers: shared hit=304998\n -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx4\n(cost=0.00..79548.06 rows=10000012 width=0) (actual time=2375.399..2\n375.399 rows=10000012 loops=1)\n Buffers: shared hit=7369\n Total runtime: 10475.240 ms\n\n\n\n\nOn Mon, Nov 10, 2014 at 1:57 PM, desmodemone <[email protected]> wrote:\n\n>\n>\n> 2014-11-10 18:43 GMT+01:00 Eric Ramirez <[email protected]>:\n>\n>>\n>> Hi,\n>> I have created a sample database with test data to help benchmark our\n>> application. The database has ten million records, and is running on a\n>> dedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty\n>> slow with this amount of data and is my job to get them to run to at\n>> acceptable speed. First thing that I notice was that the planner's row\n>> estimates are off by a large number or records (millions) I have updated\n>> the statistics target but didn't seem to make a difference. 
The relevant\n>> output follows.\n>> Am I looking in the wrong place, something else I should be trying?\n>> Thanks in advance for your comments/suggestions,\n>> Eric.\n>>\n>>\n>> =# show work_mem;\n>> work_mem\n>> ----------\n>> 1GB\n>> (1 row)\n>> =# show effective_cache_size;\n>> effective_cache_size\n>> ----------------------\n>> 5GB\n>> (1 row)\n>>\n>> =#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN\n>> household_member_first_name SET STATISTICS 5000;\n>> =# vacuum analyse TAR_MVW_TARGETING_RECORD;\n>>\n>> =# \\d tar_mvw_targeting_record;\n>> Table \"public.tar_mvw_targeting_record\"\n>> Column | Type | Modifiers\n>> -----------------------------+-----------------------+-----------\n>> household_member_id | bigint |\n>> form_id | bigint |\n>> status | character varying(64) |\n>> gender | character varying(64) |\n>> household_member_first_name | character varying(64) |\n>> household_member_last_name | character varying(64) |\n>>\n>> Indexes:\n>> \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE,\n>> btree (form_id, household_member_id)\n>> \"tar_mvw_targeting_record_lower_idx\" gist\n>> (lower(household_member_first_name::text) extensions.gist_trgm_ops)\n>> WHERE status::text <> 'ANULLED'::text\n>> \"tar_mvw_targeting_record_lower_idx1\" gist\n>> (lower(household_member_last_name::text) extensions.gist_trgm_ops)\n>> WHERE status::text <> 'ANULLED'::text\n>>\n>>\n>> =# explain (analyse on,buffers on)select T.form_id from\n>> TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\n>> LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND\n>> T.gender='FEMALE' group by T.form_id;\n>>\n>> QUERY PLAN\n>>\n>>\n>> -------------------------------------------------------------------------------------------------------------------------------------------\n>> -------------------------------\n>> HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual\n>> time=11932.959..12061.206 rows=442453 loops=1)\n>> Buffers: shared hit=307404 read=109743\n>> -> Bitmap Heap Scan on tar_mvw_targeting_record t\n>> (cost=110866.33..448495.37 rows=999592 width=8) (actual\n>> time=3577.301..11629.132 row\n>> s=500373 loops=1)\n>> Recheck Cond: ((lower((household_member_last_name)::text) ~~\n>> '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))\n>> Rows Removed by Index Recheck: 9000079\n>> Filter: ((gender)::text = 'FEMALE'::text)\n>> Rows Removed by Filter: 499560\n>> Buffers: shared hit=307404 read=109743\n>> -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1\n>> (cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3\n>> 471.142 rows=10000012 loops=1)\n>> Index Cond: (lower((household_member_last_name)::text) ~~\n>> '%tu%'::text)\n>> Buffers: shared hit=36583 read=82935\n>> Total runtime: 12092.059 ms\n>> (12 rows)\n>>\n>> Time: 12093.107 ms\n>>\n>> p.s. this plan was ran three times, first time took 74 seconds.\n>>\n>>\n>>\n> Hello Eric,\n> did you try with gin index instead ? so you could\n> avoid, if possible, the recheck condition (almost the gin index is not\n> lossy ), further if you always use a predicate like \"gender=\" , you could\n> think to partition the indexes based on that predicate (where status NOT IN\n> ('ANULLED') and gender='FEMALE', in the other case it wil be where status\n> NOT IN ('ANULLED') and gender='MALE' ) . 
Moreover you could avoid also the\n> \"lower\" operator and try use directly the ilike , instead of \"like\".\n>\n> CREATE INDEX tar_mvw_targeting_record_idx02 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) where status NOT IN ('ANULLED') and gender='FEMALE' ;\n> CREATE INDEX tar_mvw_targeting_record_idx03 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) where status NOT IN ('ANULLED') and gender='MALE' ;\n>\n>\n> explain (analyse on,buffers on) select T.form_id from\n> TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND\n> T.household_member_last_name ilike LOWER('%tu%') AND T.gender='FEMALE'\n> group by T.form_id;\n>\n>\n> I hope it works\n>\n> have a nice day\n>\n>\n> --\n> Matteo Durighetto\n>\n> - - - - - - - - - - - - - - - - - - - - - - -\n>\n> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n> Italian Community for Geographic Free/Open-Source Software\n> <http://www.gfoss.it>\n>\n\nHi Matteo,Thanks for your suggestions, I just run some test with ILIKE and LIKE, and ILIKE is consistently slower so I think I will keep the Lower functions. As per your suggestion, I have switched indexes to use GIN type index, they seem to build/read a bit faster, still the Recheck task continues to happen in the query plan though. I have removed the Gender column from the query since is not relevant in my tests. With all this playing around it looks like the stats are now a bit more accurate.The query went down to 9 seconds, ideally I would like to get to execute in 2 seconds..., any thoughts on what else I could try?Thanks again,Eric =# explain (analyse on,buffers on)select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND LOWER(T.household_member_last_name) LIKE LOWER('%tu%') group by T.form_id; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=557677.27..561360.83 rows=368356 width=8) (actual time=10172.672..10410.068 rows=786669 loops=1) Buffers: shared hit=304998 -> Bitmap Heap Scan on tar_mvw_targeting_record t (cost=80048.06..552677.27 rows=2000002 width=8) (actual time=2481.418..9564.280 rows=999933 loops=1) Recheck Cond: ((status)::text <> 'ANULLED'::text) Filter: (lower((household_member_last_name)::text) ~~ '%tu%'::text) Rows Removed by Filter: 9000079 Buffers: shared hit=304998 -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx4 (cost=0.00..79548.06 rows=10000012 width=0) (actual time=2375.399..2375.399 rows=10000012 loops=1) Buffers: shared hit=7369 Total runtime: 10475.240 msOn Mon, Nov 10, 2014 at 1:57 PM, desmodemone <[email protected]> wrote:2014-11-10 18:43 GMT+01:00 Eric Ramirez <[email protected]>:Hi,I have created a sample database with test data to help benchmark our application. The database has ten million records, and is running on a dedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty slow with this amount of data and is my job to get them to run to at acceptable speed. First thing that I notice was that the planner's row estimates are off by a large number or records (millions) I have updated the statistics target but didn't seem to make a difference. The relevant output follows.Am I looking in the wrong place, something else I should be trying? Thanks in advance for your comments/suggestions,Eric. 
=# show work_mem; work_mem ---------- 1GB(1 row)=# show effective_cache_size; effective_cache_size ---------------------- 5GB(1 row)=#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN household_member_first_name SET STATISTICS 5000;=# vacuum analyse TAR_MVW_TARGETING_RECORD; =# \\d tar_mvw_targeting_record; Table \"public.tar_mvw_targeting_record\" Column | Type | Modifiers -----------------------------+-----------------------+----------- household_member_id | bigint | form_id | bigint | status | character varying(64) | gender | character varying(64) | household_member_first_name | character varying(64) | household_member_last_name | character varying(64) | Indexes: \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE, btree (form_id, household_member_id) \"tar_mvw_targeting_record_lower_idx\" gist (lower(household_member_first_name::text) extensions.gist_trgm_ops) WHERE status::text <> 'ANULLED'::text \"tar_mvw_targeting_record_lower_idx1\" gist (lower(household_member_last_name::text) extensions.gist_trgm_ops) WHERE status::text <> 'ANULLED'::text=# explain (analyse on,buffers on)select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND T.gender='FEMALE' group by T.form_id; QUERY PLAN -------------------------------------------------------------------------------------------------------------------------------------------------------------------------- HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual time=11932.959..12061.206 rows=442453 loops=1) Buffers: shared hit=307404 read=109743 -> Bitmap Heap Scan on tar_mvw_targeting_record t (cost=110866.33..448495.37 rows=999592 width=8) (actual time=3577.301..11629.132 rows=500373 loops=1) Recheck Cond: ((lower((household_member_last_name)::text) ~~ '%tu%'::text) AND ((status)::text <> 'ANULLED'::text)) Rows Removed by Index Recheck: 9000079 Filter: ((gender)::text = 'FEMALE'::text) Rows Removed by Filter: 499560 Buffers: shared hit=307404 read=109743 -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1 (cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3471.142 rows=10000012 loops=1) Index Cond: (lower((household_member_last_name)::text) ~~ '%tu%'::text) Buffers: shared hit=36583 read=82935 Total runtime: 12092.059 ms(12 rows)Time: 12093.107 msp.s. this plan was ran three times, first time took 74 seconds.\nHello Eric, did you try with gin index instead ? so you could avoid, if possible, the recheck condition (almost the gin index is not lossy ), further if you always use a predicate like \"gender=\" , you could think to partition the indexes based on that predicate (where status NOT IN ('ANULLED') and gender='FEMALE', in the other case it wil be where status NOT IN ('ANULLED') and gender='MALE' ) . 
Moreover you could avoid also the \"lower\" operator and try use directly the ilike , instead of \"like\".CREATE INDEX tar_mvw_targeting_record_idx02 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) where status NOT IN ('ANULLED') and gender='FEMALE' ;CREATE INDEX tar_mvw_targeting_record_idx03 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) where status NOT IN ('ANULLED') and gender='MALE' ;explain (analyse on,buffers on) select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND T.household_member_last_name ilike LOWER('%tu%') AND T.gender='FEMALE' group by T.form_id; I hope it workshave a nice day-- Matteo Durighetto - - - - - - - - - - - - - - - - - - - - - - -Italian PostgreSQL User GroupItalian Community for Geographic Free/Open-Source Software",
"msg_date": "Mon, 10 Nov 2014 17:52:18 -0500",
"msg_from": "Eric Ramirez <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: updating statistics on slow running query"
},
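In the plan above the Bitmap Index Scan returns all 10,000,012 rows and the trigram pattern shows up only as a Filter, which usually means the new index's expression does not match the query's lower(...) expression, so the index is serving nothing beyond its partial predicate. A quick check, using the index name from the plan:

SELECT indexdef
FROM pg_indexes
WHERE indexname = 'tar_mvw_targeting_record_lower_idx4';

-- the LIKE can only become an Index Cond if the indexed expression is
-- exactly lower(household_member_last_name) with a trigram opclass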
{
"msg_contents": "On 11/10/14, 4:52 PM, Eric Ramirez wrote:\n> Hi Matteo,\n> Thanks for your suggestions, I just run some test with ILIKE and LIKE, and ILIKE is consistently slower so I think I will keep the Lower functions. As per your suggestion, I have switched indexes to use GIN type index, they seem to build/read a bit faster, still the Recheck task continues to happen in the query plan though. I have removed the Gender column from the query since is not relevant in my tests. With all this playing around it looks like the stats are now a bit more accurate.\n> The query went down to 9 seconds, ideally I would like to get to execute in 2 seconds..., any thoughts on what else I could try?\n> Thanks again,\n> Eric\n\nPlease don't top-post.\n\nYou might try the trigram contrib module: http://www.postgresql.org/docs/9.1/static/pgtrgm.html\n\nBTW, converting status and gender to enums will likely save you a non-trivial amount of space. It won't help in this query, but if there's other stuff the server will be doing it's probably worth-while.\n\n> =# explain (analyse on,buffers on)select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND LOWER(T.household_member_last_name) LIKE LOWER('%tu%') group by T.form_id;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> -------------------------------\n> HashAggregate (cost=557677.27..561360.83 rows=368356 width=8) (actual time=10172.672..10410.068 rows=786669 loops=1)\n> Buffers: shared hit=304998\n> -> Bitmap Heap Scan on tar_mvw_targeting_record t (cost=80048.06..552677.27 rows=2000002 width=8) (actual time=2481.418..9564.280 rows\n> =999933 loops=1)\n> Recheck Cond: ((status)::text <> 'ANULLED'::text)\n> Filter: (lower((household_member_last_name)::text) ~~ '%tu%'::text)\n> Rows Removed by Filter: 9000079\n> Buffers: shared hit=304998\n> -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx4 (cost=0.00..79548.06 rows=10000012 width=0) (actual time=2375.399..2\n> 375.399 rows=10000012 loops=1)\n> Buffers: shared hit=7369\n> Total runtime: 10475.240 ms\n>\n>\n>\n>\n> On Mon, Nov 10, 2014 at 1:57 PM, desmodemone <[email protected] <mailto:[email protected]>> wrote:\n>\n>\n>\n> 2014-11-10 18:43 GMT+01:00 Eric Ramirez <[email protected] <mailto:[email protected]>>:\n>\n>\n> Hi,\n> I have created a sample database with test data to help benchmark our application. The database has ten million records, and is running on a dedicated server(postgres 9.3) with 8GB of RAM. Our queries are pretty slow with this amount of data and is my job to get them to run to at acceptable speed. First thing that I notice was that the planner's row estimates are off by a large number or records (millions) I have updated the statistics target but didn't seem to make a difference. 
The relevant output follows.\n> Am I looking in the wrong place, something else I should be trying?\n> Thanks in advance for your comments/suggestions,\n> Eric.\n>\n>\n> =# show work_mem;\n> work_mem\n> ----------\n> 1GB\n> (1 row)\n> =# show effective_cache_size;\n> effective_cache_size\n> ----------------------\n> 5GB\n> (1 row)\n>\n> =#ALTER TABLE TAR_MVW_TARGETING_RECORD ALTER COLUMN household_member_first_name SET STATISTICS 5000;\n> =# vacuum analyse TAR_MVW_TARGETING_RECORD;\n>\n> =# \\d tar_mvw_targeting_record;\n> Table \"public.tar_mvw_targeting_record\"\n> Column | Type | Modifiers\n> -----------------------------+-----------------------+-----------\n> household_member_id | bigint |\n> form_id | bigint |\n> status | character varying(64) |\n> gender | character varying(64) |\n> household_member_first_name | character varying(64) |\n> household_member_last_name | character varying(64) |\n>\n> Indexes:\n> \"tar_mvw_targeting_record_form_id_household_member_id_idx\" UNIQUE, btree (form_id, household_member_id)\n> \"tar_mvw_targeting_record_lower_idx\" gist (lower(household_member_first_name::text) extensions.gist_trgm_ops)\n> WHERE status::text <> 'ANULLED'::text\n> \"tar_mvw_targeting_record_lower_idx1\" gist (lower(household_member_last_name::text) extensions.gist_trgm_ops)\n> WHERE status::text <> 'ANULLED'::text\n>\n>\n> =# explain (analyse on,buffers on)select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND LOWER(T.household_member_last_name) LIKE LOWER('%tu%') AND T.gender='FEMALE' group by T.form_id;\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------\n> -------------------------------\n> HashAggregate (cost=450994.35..452834.96 rows=184061 width=8) (actual time=11932.959..12061.206 rows=442453 loops=1)\n> Buffers: shared hit=307404 read=109743\n> -> Bitmap Heap Scan on tar_mvw_targeting_record t (cost=110866.33..448495.37 rows=999592 width=8) (actual time=3577.301..11629.132 row\n> s=500373 loops=1)\n> Recheck Cond: ((lower((household_member_last_name)::text) ~~ '%tu%'::text) AND ((status)::text <> 'ANULLED'::text))\n> Rows Removed by Index Recheck: 9000079\n> Filter: ((gender)::text = 'FEMALE'::text)\n> Rows Removed by Filter: 499560\n> Buffers: shared hit=307404 read=109743\n> -> Bitmap Index Scan on tar_mvw_targeting_record_lower_idx1 (cost=0.00..110616.43 rows=2000002 width=0) (actual time=3471.142..3\n> 471.142 rows=10000012 loops=1)\n> Index Cond: (lower((household_member_last_name)::text) ~~ '%tu%'::text)\n> Buffers: shared hit=36583 read=82935\n> Total runtime: 12092.059 ms\n> (12 rows)\n>\n> Time: 12093.107 ms\n>\n> p.s. this plan was ran three times, first time took 74 seconds.\n>\n>\n>\n> Hello Eric,\n> did you try with gin index instead ? so you could avoid, if possible, the recheck condition (almost the gin index is not lossy ), further if you always use a predicate like \"gender=\" , you could think to partition the indexes based on that predicate (where status NOT IN ('ANULLED') and gender='FEMALE', in the other case it wil be where status NOT IN ('ANULLED') and gender='MALE' ) . 
Moreover you could avoid also the \"lower\" operator and try use directly the ilike , instead of \"like\".\n>\n> CREATE INDEX tar_mvw_targeting_record_idx02 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) wherestatus NOT IN ('ANULLED') and gender='FEMALE' ;\n> CREATE INDEX tar_mvw_targeting_record_idx03 ON tar_mvw_targeting_record USING gin ( status gin_trgm_ops) wherestatus NOT IN ('ANULLED') and gender='MALE' ;\n>\n>\n> explain (analyse on,buffers on) select T.form_id from TAR_MVW_targeting_record AS T where T.status NOT IN ('ANULLED') AND T.household_member_last_name ilike LOWER('%tu%') AND T.gender='FEMALE' group by T.form_id;\n>\n>\n> I hope it works\n>\n> have a nice day\n>\n>\n> --\n> Matteo Durighetto\n>\n> - - - - - - - - - - - - - - - - - - - - - - -\n>\n> Italian PostgreSQL User Group <http://www.itpug.org/index.it.html>\n> Italian Community for Geographic Free/Open-Source Software <http://www.gfoss.it>\n>\n>\n\n\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 19:53:07 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: updating statistics on slow running query"
}
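A sketch of the enum conversion Jim suggests, with labels assumed from the values visible in the thread; note the ALTER rewrites the whole table, so it wants a maintenance window:

CREATE TYPE gender_t AS ENUM ('FEMALE', 'MALE');
ALTER TABLE tar_mvw_targeting_record
    ALTER COLUMN gender TYPE gender_t
    USING gender::text::gender_t;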
] |
[
{
"msg_contents": "All,\n\npg version: 9.3.5\nRHEL 6.5\n128GB/32 cores\nConfigured with shared_buffers=16GB\nJava/Tomcat/JDBC application\n\nServer has an issue that whenever we get lock waits (transaction lock\nwaits, usually on an FK dependancy) lasting over a minute or more than\n10 at once, *all* queries on the server slow to a crawl, taking 100X to\n400X normal execution times.\n\nOther info:\n* This applies even to queries which are against other databases, so\nit's not purely a lock blocking issue.\n* this database routinely has a LOT of lock conlicts, churning through 1\nmillion multixacts per day\n* pgBouncer is also involved in this stack, and may be contributing to\nthe problem in some way\n* at no time is the DB server out of CPU (max usage = 38%), RAM, or\ndoing major IO (max %util = 22%).\n* BIND statements can be slow as well as EXECUTEs.\n\nI don't have full query logs from a stall period yet, so I'll have more\ninformation when I do: for example, is it ALL queries which are slow or\njust some of them?\n\nHowever, I thought this list would have some other ideas where to look.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 11:50:26 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Lock pileup causes server to stall"
},
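For mapping out who blocks whom during one of these pileups on 9.3 (pg_blocking_pids() arrived in a later release), a sketch that covers the transactionid waits typical of FK checks:

SELECT waiter.pid AS waiting_pid,
       holder.pid AS blocking_pid,
       act.query  AS blocking_query,
       now() - act.query_start AS blocking_for
FROM pg_locks waiter
JOIN pg_locks holder
  ON holder.locktype = 'transactionid'
 AND holder.transactionid = waiter.transactionid
 AND holder.granted
JOIN pg_stat_activity act ON act.pid = holder.pid
WHERE waiter.locktype = 'transactionid'
  AND NOT waiter.granted;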
{
"msg_contents": "Josh Berkus wrote:\n> All,\n> \n> pg version: 9.3.5\n> RHEL 6.5\n> 128GB/32 cores\n> Configured with shared_buffers=16GB\n> Java/Tomcat/JDBC application\n> \n> Server has an issue that whenever we get lock waits (transaction lock\n> waits, usually on an FK dependancy) lasting over a minute or more than\n> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n> 400X normal execution times.\n\nCurrent FK checking makes you wait if the referenced tuple is modified\non any indexed column, not just those that are actually used in\nforeign keys. Maybe this case would be sped up if we optimized that.\n\n> * This applies even to queries which are against other databases, so\n> it's not purely a lock blocking issue.\n\nOh.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 10 Nov 2014 18:40:01 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock pileup causes server to stall"
},
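A minimal repro sketch of the behaviour Alvaro describes, on hypothetical tables. The updated column is not referenced by any foreign key, only indexed (a unique index, per Alvaro's recollection later in the thread), yet the child's FK check still blocks:

CREATE TABLE parent (id int PRIMARY KEY, alt_code text);
CREATE UNIQUE INDEX parent_alt_code_idx ON parent (alt_code);
CREATE TABLE child (id serial PRIMARY KEY,
                    parent_id int REFERENCES parent (id));
INSERT INTO parent VALUES (1, 'a');

-- session 1: update a uniquely indexed, non-FK column
BEGIN;
UPDATE parent SET alt_code = 'b' WHERE id = 1;

-- session 2: the FK check wants KEY SHARE on the parent row and waits
INSERT INTO child (parent_id) VALUES (1);  -- blocks until session 1 ends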
{
"msg_contents": "On 11/10/2014 01:40 PM, Alvaro Herrera wrote:\n> Josh Berkus wrote:\n>> All,\n>>\n>> pg version: 9.3.5\n>> RHEL 6.5\n>> 128GB/32 cores\n>> Configured with shared_buffers=16GB\n>> Java/Tomcat/JDBC application\n>>\n>> Server has an issue that whenever we get lock waits (transaction lock\n>> waits, usually on an FK dependancy) lasting over a minute or more than\n>> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n>> 400X normal execution times.\n> \n> Current FK checking makes you wait if the referenced tuple is modified\n> on any indexed column, not just those that are actually used in\n> foreign keys. Maybe this case would be sped up if we optimized that.\n> \n>> * This applies even to queries which are against other databases, so\n>> it's not purely a lock blocking issue.\n> \n> Oh.\n\nYeah, I think this is more likely a problem with the general lock table\nand shared_buffers than anything to do with actual lock-blocks.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Nov 2014 09:11:03 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Lock pileup causes server to stall"
},
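If the suspicion is the shared lock table itself, two quick numbers to watch while a stall is happening (a sketch; the lock table's capacity is roughly max_locks_per_transaction * (max_connections + max_prepared_transactions)):

SHOW max_locks_per_transaction;  -- 64 by default
SELECT granted, count(*)
FROM pg_locks
GROUP BY granted;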
{
"msg_contents": "On Tue, Nov 11, 2014 at 9:11 AM, Josh Berkus <[email protected]> wrote:\n\n> On 11/10/2014 01:40 PM, Alvaro Herrera wrote:\n> > Josh Berkus wrote:\n> >> All,\n> >>\n> >> pg version: 9.3.5\n> >> RHEL 6.5\n> >> 128GB/32 cores\n> >> Configured with shared_buffers=16GB\n> >> Java/Tomcat/JDBC application\n> >>\n> >> Server has an issue that whenever we get lock waits (transaction lock\n> >> waits, usually on an FK dependancy) lasting over a minute or more than\n> >> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n> >> 400X normal execution times.\n> >\n> > Current FK checking makes you wait if the referenced tuple is modified\n> > on any indexed column, not just those that are actually used in\n> > foreign keys. Maybe this case would be sped up if we optimized that.\n> >\n> >> * This applies even to queries which are against other databases, so\n> >> it's not purely a lock blocking issue.\n> >\n> > Oh.\n>\n> Yeah, I think this is more likely a problem with the general lock table\n> and shared_buffers than anything to do with actual lock-blocks.\n>\n>\nAny chance you can run 'perf record -a' on it?\n\nCheers,\n\nJeff\n\nOn Tue, Nov 11, 2014 at 9:11 AM, Josh Berkus <[email protected]> wrote:On 11/10/2014 01:40 PM, Alvaro Herrera wrote:\n> Josh Berkus wrote:\n>> All,\n>>\n>> pg version: 9.3.5\n>> RHEL 6.5\n>> 128GB/32 cores\n>> Configured with shared_buffers=16GB\n>> Java/Tomcat/JDBC application\n>>\n>> Server has an issue that whenever we get lock waits (transaction lock\n>> waits, usually on an FK dependancy) lasting over a minute or more than\n>> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n>> 400X normal execution times.\n>\n> Current FK checking makes you wait if the referenced tuple is modified\n> on any indexed column, not just those that are actually used in\n> foreign keys. Maybe this case would be sped up if we optimized that.\n>\n>> * This applies even to queries which are against other databases, so\n>> it's not purely a lock blocking issue.\n>\n> Oh.\n\nYeah, I think this is more likely a problem with the general lock table\nand shared_buffers than anything to do with actual lock-blocks.\nAny chance you can run 'perf record -a' on it?Cheers,Jeff",
"msg_date": "Tue, 11 Nov 2014 09:33:08 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock pileup causes server to stall"
},
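A quick way to see who is waiting on whom during such a pileup, assuming you can still get a connection in while it is happening: the query below is a simplified sketch of the usual pg_locks self-join (it can over-match on some lock types, and pg_blocking_pids() is not available on 9.3, arriving only in 9.6), so treat it as a diagnostic aid rather than an exact tool.

    -- waiters (granted = false) matched against the holders of the same lock
    SELECT waiting.pid        AS waiting_pid,
           waiting_act.query  AS waiting_query,
           blocking.pid       AS blocking_pid,
           blocking_act.query AS blocking_query
    FROM pg_locks waiting
    JOIN pg_stat_activity waiting_act  ON waiting_act.pid = waiting.pid
    JOIN pg_locks blocking
      ON  blocking.locktype = waiting.locktype
      AND blocking.database      IS NOT DISTINCT FROM waiting.database
      AND blocking.relation      IS NOT DISTINCT FROM waiting.relation
      AND blocking.transactionid IS NOT DISTINCT FROM waiting.transactionid
      AND blocking.pid <> waiting.pid
    JOIN pg_stat_activity blocking_act ON blocking_act.pid = blocking.pid
    WHERE NOT waiting.granted
      AND blocking.granted;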
{
"msg_contents": "\n> On 10/11/2014, at 22.40, Alvaro Herrera <[email protected]> wrote:\n> \n> Josh Berkus wrote:\n>> All,\n>> \n>> pg version: 9.3.5\n>> RHEL 6.5\n>> 128GB/32 cores\n>> Configured with shared_buffers=16GB\n>> Java/Tomcat/JDBC application\n>> \n>> Server has an issue that whenever we get lock waits (transaction lock\n>> waits, usually on an FK dependancy) lasting over a minute or more than\n>> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n>> 400X normal execution times.\n> \n> Current FK checking makes you wait if the referenced tuple is modified\n> on any indexed column, not just those that are actually used in\n> foreign keys. Maybe this case would be sped up if we optimized that.\n\nEven if it is an gin index that is being modified? seems like a harsh limitation to me.\n\nJesper\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Nov 2014 07:49:29 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock pileup causes server to stall"
},
{
"msg_contents": "Jesper Krogh wrote:\n> \n> > On 10/11/2014, at 22.40, Alvaro Herrera <[email protected]> wrote:\n> > \n> > Josh Berkus wrote:\n> >> All,\n> >> \n> >> pg version: 9.3.5\n> >> RHEL 6.5\n> >> 128GB/32 cores\n> >> Configured with shared_buffers=16GB\n> >> Java/Tomcat/JDBC application\n> >> \n> >> Server has an issue that whenever we get lock waits (transaction lock\n> >> waits, usually on an FK dependancy) lasting over a minute or more than\n> >> 10 at once, *all* queries on the server slow to a crawl, taking 100X to\n> >> 400X normal execution times.\n> > \n> > Current FK checking makes you wait if the referenced tuple is modified\n> > on any indexed column, not just those that are actually used in\n> > foreign keys. Maybe this case would be sped up if we optimized that.\n> \n> Even if it is an gin index that is being modified? seems like a harsh limitation to me.\n\nWell, as I recall it's only unique indexes, so it's not *that* harsh.\n\nAnyway, the fklocks patch was stupidly complex (and still got much stuff\nwrong). I didn't want to add more ground to objections by additionally\nbreaking the abstraction between heapam and the concept of \"columns\nreferenced by a foreign key constraint\". So it was discussed and\ndecided we'd leave that for future improvement. Patches are welcome,\nparticularly if they come from the future.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Nov 2014 10:51:10 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock pileup causes server to stall"
},
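To make the FK-locking behaviour concrete, here is a minimal two-session sketch; the table and column names are illustrative, not taken from the original report. On 9.3 the INSERT's foreign-key check takes FOR KEY SHARE on the referenced row, and an UPDATE that changes a uniquely indexed column (per Alvaro's recollection above) is treated as a key update, which conflicts with it:

    CREATE TABLE parent (id int PRIMARY KEY, alt_code text UNIQUE, note text);
    CREATE TABLE child  (id serial PRIMARY KEY,
                         parent_id int REFERENCES parent(id));
    INSERT INTO parent VALUES (1, 'A-1', 'x');

    -- session 1: change a uniquely indexed column that no FK points at
    BEGIN;
    UPDATE parent SET alt_code = 'A-2' WHERE id = 1;

    -- session 2: the FK check behind this INSERT now waits until
    -- session 1 commits or rolls back
    INSERT INTO child (parent_id) VALUES (1);

    -- by contrast, "UPDATE parent SET note = 'y' WHERE id = 1" in
    -- session 1 takes only FOR NO KEY UPDATE and would not block it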
{
"msg_contents": "On 11/12/2014 05:51 AM, Alvaro Herrera wrote:\n> Anyway, the fklocks patch was stupidly complex (and still got much stuff\n> wrong). I didn't want to add more ground to objections by additionally\n> breaking the abstraction between heapam and the concept of \"columns\n> referenced by a foreign key constraint\".\n\nOh, come on. We had hardly any problems with that patch!\n\n;-)\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 12 Nov 2014 09:04:06 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Lock pileup causes server to stall"
},
{
"msg_contents": ">>>> \n>>> \n>>> Current FK checking makes you wait if the referenced tuple is modified\n>>> on any indexed column, not just those that are actually used in\n>>> foreign keys. Maybe this case would be sped up if we optimized that.\n>> \n>> Even if it is an gin index that is being modified? seems like a harsh limitation to me.\n> \n> Well, as I recall it's only unique indexes, so it's not *that* harsh.\n> \nSounds good. Indices are there for all kinds of reasons, unique ones are more related to referential integrity, so even not 100% accurate, at least 90% of the way in my world.\n\nWe do have an \"star\"-schema in the db with some amount of information needed in the center that needs updates, apart from that a massive update activity on the sorrounding columns, locks on the center entity has quite high impact on the sorrounding updates. (9.2 moving to 9.3 reallly soon and looking forward for this enhancement.\n\nJesper\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Nov 2014 07:53:41 +0100",
"msg_from": "Jesper Krogh <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Lock pileup causes server to stall"
}
] |
[
{
"msg_contents": "hi,\n\nin the pgsql documentation\n(http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)\n\ni haven't seen anything referring to: how is affected the data inserted in\nthe new table by a trigger Before Insert compared with a trigger After\nInsert? and anything related to performance\n\nfor example:\n\ntables: actuals (summarize the total running hours), log (the functional\nhours are inserted in LOG as time)\n function: sum\nview: timeview (where running hours are calculated as a difference)\n\n-- Function: sum()\n\n-- DROP FUNCTION sum();\n\nCREATE OR REPLACE FUNCTION sum()\n RETURNS trigger AS\n$BODY$begin\nupdate actuals\nset\nhours = hours + (select time from time_view\nwhere idlog = (select max(idlog) from timeview))\nwhere actuals.idmac =\n(SELECT idmac FROM selectedmac) ;\nreturn new;\nend$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\nALTER FUNCTION sum()\n OWNER TO user;\n\n\n\n\n--trigger\nCREATE TRIGGER update_actuals_tg01\n AFTER INSERT\n ON log\n FOR EACH ROW\n EXECUTE PROCEDURE sum();\n\n\nI read somewhere (I don't find the link anymore) that if the trigger is\nAfter Insert, the data available in the table LOG might not be available\nanymore to run the trigger. is that correct? or I might understood wrong?\n\nwhat's the difference related to performance concerning a trigger Before\nInsert compared with a trigger After Insert?\n\nthank you\nhave a sunny day\n\nhi,in the pgsql documentation(http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)i haven't seen anything referring to: how is affected the data inserted in the new table by a trigger Before Insert compared with a trigger After Insert? and anything related to performancefor example:tables: actuals (summarize the total running hours), log (the functional hours are inserted in LOG as time) function: sumview: timeview (where running hours are calculated as a difference)-- Function: sum()\n\n-- DROP FUNCTION sum();\n\nCREATE OR REPLACE FUNCTION sum()\n RETURNS trigger AS\n$BODY$begin\nupdate actuals\nset \nhours = hours + (select time from time_view \nwhere idlog = (select max(idlog) from timeview))where actuals.idmac = (SELECT idmac FROM selectedmac) ;\nreturn new;\nend$BODY$\n LANGUAGE plpgsql VOLATILE\n COST 100;\nALTER FUNCTION sum()\n OWNER TO user;--triggerCREATE TRIGGER update_actuals_tg01 AFTER INSERT ON log FOR EACH ROW EXECUTE PROCEDURE sum();I read somewhere (I don't find the link anymore) that if the trigger is After Insert, the data available in the table LOG might not be available anymore to run the trigger. is that correct? or I might understood wrong?what's the difference related to performance concerning a trigger Before Insert compared with a trigger After Insert?thank youhave a sunny day",
"msg_date": "Tue, 11 Nov 2014 07:38:11 +0100",
"msg_from": "avpro avpro <[email protected]>",
"msg_from_op": true,
"msg_subject": "trigger Before or After"
},
{
"msg_contents": "avpro avpro wrote:\r\n> in the pgsql documentation\r\n> (http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)\r\n> \r\n> \r\n> i haven't seen anything referring to: how is affected the data inserted in the new table by a trigger\r\n> Before Insert compared with a trigger After Insert? and anything related to performance\r\n\r\nIn your example (the trigger updates a second table) it should make\r\nno difference if the trigger is BEFORE or AFTER INSERT.\r\n\r\nThe difference is that in a BEFORE trigger you can modify the values that\r\nwill be inserted before the INSERT actually happens.\r\n\r\n> I read somewhere (I don't find the link anymore) that if the trigger is After Insert, the data\r\n> available in the table LOG might not be available anymore to run the trigger. is that correct? or I\r\n> might understood wrong?\r\n\r\nI don't quite understand.\r\nYou will have access to the OLD and NEW values in both BEFORE and AFTER triggers.\r\nIn an AFTER trigger, the table row has already been modified.\r\n\r\n> what's the difference related to performance concerning a trigger Before Insert compared with a\r\n> trigger After Insert?\r\n\r\nI don't think that there is a big difference, but you can easily test it:\r\nInsert 100000 rows with a BEFORE trigger on the table and compare the\r\ntime it takes to inserting 100000 rows with an AFTER trigger.\r\n\r\nYours,\r\nLaurenz Albe\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 11 Nov 2014 08:19:43 +0000",
"msg_from": "Albe Laurenz <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] trigger Before or After"
},
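A minimal sketch of the test Laurenz describes, with illustrative names (none of these objects are from the original post); run each INSERT with \timing enabled in psql and compare the two timings:

    CREATE TABLE trig_test (id int, payload text);

    CREATE OR REPLACE FUNCTION noop_trig() RETURNS trigger AS $$
    BEGIN
      RETURN NEW;  -- do nothing, just measure the trigger overhead
    END $$ LANGUAGE plpgsql;

    CREATE TRIGGER t_before BEFORE INSERT ON trig_test
      FOR EACH ROW EXECUTE PROCEDURE noop_trig();

    INSERT INTO trig_test SELECT s, 'x' FROM generate_series(1, 100000) s;

    -- swap in the AFTER variant and repeat on an empty table
    DROP TRIGGER t_before ON trig_test;
    TRUNCATE trig_test;
    CREATE TRIGGER t_after AFTER INSERT ON trig_test
      FOR EACH ROW EXECUTE PROCEDURE noop_trig();

    INSERT INTO trig_test SELECT s, 'x' FROM generate_series(1, 100000) s;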
{
"msg_contents": "On 11/10/2014 10:38 PM, avpro avpro wrote:\n> hi,\n>\n> in the pgsql documentation\n> (http://www.postgresql.org/docs/9.1/static/sql-createtrigger.html)\n>\n> i haven't seen anything referring to: how is affected the data inserted\n> in the new table by a trigger Before Insert compared with a trigger\n> After Insert? and anything related to performance\n\nSee bottom of above page and here:\n\nhttp://www.postgresql.org/docs/9.1/static/trigger-definition.html\n\n\n>\n> thank you\n> have a sunny day\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Tue, 11 Nov 2014 05:58:49 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trigger Before or After"
}
] |
[
{
"msg_contents": "Hi\n\nAfter upgrading our 9.0 database server\n\nfrom:\nopenSUSE 11.4, kernel 2.6.37.6-24-default, Pg 9.0.13\n\nto:\nopenSUSE 13.1, kernel v 3.11.10-21-default, Pg 9.0.15\n\n... and overall server load is +1 after that.\n\nWe did not add any new services/daemons.\n\nIt's hard to track down to individual queries - when I tested most\nindividual query times are same as before the migration.\n\n\nAny - ANY - hints will be much appreciated.\n\nThanks\nFilip\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Nov 2014 09:10:23 +0100",
"msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "9.0 performance degradation with kernel 3.11"
},
{
"msg_contents": "> From: Filip Rembiałkowski <[email protected]>\n>To: [email protected] \n>Sent: Thursday, 13 November 2014, 8:10\n>Subject: [PERFORM] 9.0 performance degradation with kernel 3.11\n> \n>\n>Hi\n>\n>After upgrading our 9.0 database server\n>\n>from:\n>openSUSE 11.4, kernel 2.6.37.6-24-default, Pg 9.0.13\n>\n>to:\n>openSUSE 13.1, kernel v 3.11.10-21-default, Pg 9.0.15\n>\n>... and overall server load is +1 after that.\n>\n>We did not add any new services/daemons.\n>\n>It's hard to track down to individual queries - when I tested most\n>individual query times are same as before the migration.\n>\n>\n>Any - ANY - hints will be much appreciated.\n>\n>Thanks\n>Filip\n>\n\nIt's hard to say much going on the little information, but assuming everything was rosy for you with your 2.6 version, and you've kept the basics like hardware, filesystem, io scheduler etc the same, there are a few kernel tunables to tweak on later kernels.\n\nUsually defragmentation of transparent huge pages causes an issue and it's best to turn off the defrag option:\n\n\n echo always > /sys/kernel/mm/transparent_hugepage/enabled \n echo madvise > /sys/kernel/mm/transparent_hugepage/defrag\n\n\nIt's also recommended to increase the value of sched_migration_cost (I think now called sched_migration_cost_ns in 3.11+) and disable sched_autogroup_enabled.\n\n\n kernel.sched_migration_cost=5000000 \n kernel.sched_autogroup_enabled=0\n\nAlso disable vm.zone_reclaim_mode\n\n vm.zone_reclaim_mode=0\n\n\nOn some of our systems I also saw marked improvements increasing the values of kernel.sched_min_granularity_ns and kernel.sched_wakeup_granularity_ns too, on some other systems this had no effect. So you may want to try to see if some larger values there help.\n\nA lot of the earlier 3.x kernels aren't great with PostgreSQL, one of the noted issues being a \"stable pages\" feature that blocks processes modifying pages that are currently being written back until the write completes. I think people have noted this gets better in 3.9 onwards, but I personally didn't see much of a marked improvement until 3.16.\n\n\n\n>Thanks\n>Filip\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 16:08:29 +0000",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 9.0 performance degradation with kernel 3.11"
}
] |
[
{
"msg_contents": "All;\n\nWe have a large db server with 128GB of ram running complex functions.\n\nwith the server set to have the following we were seeing a somewhat low \nhit ratio and lots of temp buffers\n\nshared_buffers = 18GB\nwork_mem = 75MB\neffective_cache_size = 105GB\ncheckpoint_segments = 128\n\n\nwhen we increased the values to these not only did the hit ratio drop \nbut query times are now longer as well:\n\n\nshared_buffers = 28GB\nwork_mem = 150MB\neffective_cache_size = 105GB\ncheckpoint_segments = 256\n\nThis does not seem to make sense to me, anyone have any thoughts on why \nmore memory resources would cause worse performance?\n\nThanks in advance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Nov 2014 16:09:18 -0700",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Increased shared_buffer setting = lower hit ratio ?"
},
{
"msg_contents": "This is on a CentOS 6.5 box running PostgreSQL 9.2\n\n\nOn 11/13/14 4:09 PM, CS DBA wrote:\n> All;\n>\n> We have a large db server with 128GB of ram running complex functions.\n>\n> with the server set to have the following we were seeing a somewhat \n> low hit ratio and lots of temp buffers\n>\n> shared_buffers = 18GB\n> work_mem = 75MB\n> effective_cache_size = 105GB\n> checkpoint_segments = 128\n>\n>\n> when we increased the values to these not only did the hit ratio drop \n> but query times are now longer as well:\n>\n>\n> shared_buffers = 28GB\n> work_mem = 150MB\n> effective_cache_size = 105GB\n> checkpoint_segments = 256\n>\n> This does not seem to make sense to me, anyone have any thoughts on \n> why more memory resources would cause worse performance?\n>\n> Thanks in advance\n>\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 13 Nov 2014 16:16:11 -0700",
"msg_from": "CS DBA <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Increased shared_buffer setting = lower hit ratio ?"
},
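Since the server is on 9.2, one way to put numbers behind the "lots of temp buffers" observation is the per-database temp-file counters (present since 9.2); a small sketch:

    SELECT datname,
           temp_files,
           pg_size_pretty(temp_bytes) AS temp_spill
    FROM pg_stat_database
    ORDER BY temp_bytes DESC;

Setting log_temp_files = 0 additionally logs every spill together with the statement that caused it, which helps tie the spills to specific queries.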
{
"msg_contents": "Hi,\n\nOn 14.11.2014 00:16, CS DBA wrote:\n> This is on a CentOS 6.5 box running PostgreSQL 9.2\n> \n> \n> On 11/13/14 4:09 PM, CS DBA wrote:\n>> All;\n>>\n>> We have a large db server with 128GB of ram running complex\n>> functions.\n>>\n>> with the server set to have the following we were seeing a\n>> somewhat low hit ratio and lots of temp buffers\n>>\n>> shared_buffers = 18GB\n>> work_mem = 75MB\n>> effective_cache_size = 105GB\n>> checkpoint_segments = 128\n>>\n>>\n>> when we increased the values to these not only did the hit ratio\n>> drop but query times are now longer as well:\n>>\n>>\n>> shared_buffers = 28GB\n>> work_mem = 150MB\n>> effective_cache_size = 105GB\n>> checkpoint_segments = 256\n>>\n>> This does not seem to make sense to me, anyone have any thoughts\n>> on why more memory resources would cause worse performance?\n\nwhat exactly do you mean by hit ratio - is that the page cache hit ratio\n(filesystem cache), or shared buffers hit ratio (measured e.g. using\npg_buffercache)?\n\nRegarding the unexpected decrease of performance after increasing\nshared_buffers - that's actually quite common behavior. First, the\nmanagement of shared buffers is not free, and the more pieces you need\nto manage the more expensive it is. Also, by using larger shared buffers\nyou make that memory unusable for page cache etc. There are also other\nnegative consequences - double buffering, accumulating more changes for\na checkpoint etc.\n\nThe common wisdom (which some claim to be obsolete) is not to set shared\nbuffers over ~10GB of RAM. It's however very workload-dependent so your\nmileage may vary.\n\nTo get some basic idea of the shared_buffers utilization, it's possible\nto compute stats using pg_buffercache. Also pg_stat_bgwriter contains\nuseful data.\n\nBTW, it's difficult to say why a query is slow - can you post explain\nanalyze of the query with both shared_buffers settings?\n\nAnd just to check - what kind of hardware/kernel version is this? Do you\nhave numa / transparent huge pages or similar trouble-indicing issues?\n\nTomas\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 14 Nov 2014 00:35:11 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increased shared_buffer setting = lower hit ratio ?"
},
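A sketch of the pg_buffercache inspection Tomas mentions (it requires CREATE EXTENSION pg_buffercache, and the 8192 below assumes the default block size):

    -- top relations by space held in shared_buffers
    SELECT c.relname,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS buffered
    FROM pg_buffercache b
    JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
    WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                                WHERE datname = current_database()))
    GROUP BY c.relname
    ORDER BY buffers DESC
    LIMIT 10;

    -- shared-buffers hit percentage per database
    SELECT datname,
           round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2) AS hit_pct
    FROM pg_stat_database;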
{
"msg_contents": "On Thu, Nov 13, 2014 at 3:09 PM, CS DBA <[email protected]> wrote:\n\n> All;\n>\n> We have a large db server with 128GB of ram running complex functions.\n>\n> with the server set to have the following we were seeing a somewhat low\n> hit ratio and lots of temp buffers\n>\n> shared_buffers = 18GB\n> work_mem = 75MB\n> effective_cache_size = 105GB\n> checkpoint_segments = 128\n>\n>\n> when we increased the values to these not only did the hit ratio drop but\n> query times are now longer as well:\n>\n>\n> shared_buffers = 28GB\n> work_mem = 150MB\n> effective_cache_size = 105GB\n> checkpoint_segments = 256\n>\n> This does not seem to make sense to me, anyone have any thoughts on why\n> more memory resources would cause worse performance?\n>\n\nYou should try changing those things separately, there isn't much reason\nthat shared_buffers and work_mem should be changed together.\n\nThere are many reasons the hit ratio and the performance could have gotten\nworse, without more info we can just speculate. I'd guess it is just as\nlikely as not that the two observations actually have different causes,\nrather than both being caused by the same thing. Can you figure out which\nspecific queries changed performance? Barring that, which objects changed\nhit ratios the most? And how did the actual buffer hit statistics change?\nLooking at just the ratio obscures more than it enlightens.\n\nLarge sorts are often slower when given more memory. If you give it so\nmuch more memory that it becomes an in-memory sort, it will get faster.\nBut if you change it from (for example) a 12-way merge of X sized runs to a\n6-way merge of X*2 size runs it could very well be slower because you are\nmaking poor use of the CPU cache and spending more time waiting on main\nmemory while building those runs. But that shouldn't show up hit ratios,\njust in performance.\n\nA higher work_mem might also prompt a plan to read an entire table and hash\nit, rather than do a nested loop probing its index. If the index was\nwell-cached in shared buffers but the whole table is not, this could make\nthe buffer hit ratio look worse.\n\nCheers,\n\nJeff\n\nOn Thu, Nov 13, 2014 at 3:09 PM, CS DBA <[email protected]> wrote:All;\n\nWe have a large db server with 128GB of ram running complex functions.\n\nwith the server set to have the following we were seeing a somewhat low hit ratio and lots of temp buffers\n\nshared_buffers = 18GB\nwork_mem = 75MB\neffective_cache_size = 105GB\ncheckpoint_segments = 128\n\n\nwhen we increased the values to these not only did the hit ratio drop but query times are now longer as well:\n\n\nshared_buffers = 28GB\nwork_mem = 150MB\neffective_cache_size = 105GB\ncheckpoint_segments = 256\n\nThis does not seem to make sense to me, anyone have any thoughts on why more memory resources would cause worse performance?You should try changing those things separately, there isn't much reason that shared_buffers and work_mem should be changed together.There are many reasons the hit ratio and the performance could have gotten worse, without more info we can just speculate. I'd guess it is just as likely as not that the two observations actually have different causes, rather than both being caused by the same thing. Can you figure out which specific queries changed performance? Barring that, which objects changed hit ratios the most? And how did the actual buffer hit statistics change? Looking at just the ratio obscures more than it enlightens.Large sorts are often slower when given more memory. 
If you give it so much more memory that it becomes an in-memory sort, it will get faster. But if you change it from (for example) a 12-way merge of X sized runs to a 6-way merge of X*2 size runs it could very well be slower because you are making poor use of the CPU cache and spending more time waiting on main memory while building those runs. But that shouldn't show up hit ratios, just in performance.A higher work_mem might also prompt a plan to read an entire table and hash it, rather than do a nested loop probing its index. If the index was well-cached in shared buffers but the whole table is not, this could make the buffer hit ratio look worse.Cheers,Jeff",
"msg_date": "Thu, 13 Nov 2014 16:01:59 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Increased shared_buffer setting = lower hit ratio ?"
}
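Following Jeff's advice to change one variable at a time, a per-transaction sketch for re-testing a single suspect query under each work_mem value without touching the server default (the SELECT is a placeholder for the query being investigated):

    BEGIN;
    SET LOCAL work_mem = '75MB';
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- suspect query here
    ROLLBACK;

    BEGIN;
    SET LOCAL work_mem = '150MB';
    EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- same query again
    ROLLBACK;

Worth watching in the output: whether the join strategy flips (nested loop vs. hash) and whether sorts switch between "external merge" and an in-memory method.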
] |
[
{
"msg_contents": "Hi all,\n\nExcuse me if I made any mistakes, as this is my first time posting to a\nmailing list.\n\nI'm a user of Quassel, a IRC client that uses postgres a backing store\nfor IRC logs and am running into heavy intermittent performance\nproblems. I've tracked it down to a query that takes a very long time\n(around 4 minutes) to complete when its data isn't cached.\n\nThis is the layout of the table being queried and EXPLAIN ANALYZE result\nfor the problematic query:\n\nquassel=> \\d backlog Table \"public.backlog\" Column | Type | Modifiers\n-----------+-----------------------------+-------------------------------------------------------------\nmessageid | integer | not null default\nnextval('backlog_messageid_seq'::regclass) time | timestamp without time\nzone | not null bufferid | integer | not null type | integer | not null\nflags | integer | not null senderid | integer | not null message | text\n| Indexes: \"backlog_pkey\" PRIMARY KEY, btree (messageid)\n\"backlog_bufferid_idx\" btree (bufferid, messageid DESC) Foreign-key\nconstraints: \"backlog_bufferid_fkey\" FOREIGN KEY (bufferid) REFERENCES\nbuffer(bufferid) ON DELETE CASCADE \"backlog_senderid_fkey\" FOREIGN KEY\n(senderid) REFERENCES sender(senderid) ON DELETE SET NULL\n\nquassel=> explain (analyze, buffers) SELECT messageid, time, type,\nflags, sender, message FROM backlog LEFT JOIN sender ON backlog.senderid\n= sender.senderid WHERE bufferid = 39 ORDER BY messageid DESC LIMIT 10;\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.72..37.78 rows=10 width=102) (actual\ntime=154410.353..154410.424 rows=10 loops=1) Buffers: shared hit=13952\nread=19244 -> Nested Loop Left Join (cost=0.72..145800.61 rows=39345\nwidth=102) (actual time=154410.350..154410.414 rows=10 loops=1)\nBuffers: shared hit=13952 read=19244 -> Index Scan Backward using\nbacklog_pkey on backlog (cost=0.43..63830.21 rows=39345 width=62)\n(actual time=154410.327..154410.341 rows=10 loops=1) Filter: (bufferid\n= 39) Rows Removed by Filter: 1248320 Buffers: shared hit=13921\nread=19244 -> Index Scan using sender_pkey on sender (cost=0.29..2.07\nrows=1 width=48) (actual time=0.005..0.005 rows=1 loops=10) Index Cond:\n(backlog.senderid = senderid) Buffers: shared hit=31 Total runtime:\n154410.477 ms (12 rows)\n\nThis plan is consistently chosen, even after ANALYZEing and REINDEXing\nthe table. It looks like Postgres is opting to do a sequential scan of\nthe backlog_pkey index, filtering rows by bufferid, instead of directly\nusing the backlog_bufferid_idx index that directly maps to the operation\nbeing made by the query. 
I was advised on IRC to try dropping the\nbacklog_pkey index to force Postgres to use the correct one, and that\nuses a better plan:\n\nquassel=> begin; BEGIN quassel=> alter table backlog drop constraint\nbacklog_pkey; ALTER TABLE quassel=> explain analyze SELECT messageid,\ntime, type, flags, sender, message FROM backlog LEFT JOIN sender ON\nbacklog.senderid = sender.senderid WHERE bufferid = 39 ORDER BY\nmessageid DESC LIMIT 10; QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\nLimit (cost=0.72..40.50 rows=10 width=102) (actual time=63.826..162.134\nrows=10 loops=1) -> Nested Loop Left Join (cost=0.72..156518.91\nrows=39345 width=102) (actual time=63.823..162.126 rows=10 loops=1) ->\nIndex Scan using backlog_bufferid_idx on backlog (cost=0.43..74548.51\nrows=39345 width=62) (actual time=63.798..63.814 rows=10 loops=1) Index\nCond: (bufferid = 39) -> Index Scan using sender_pkey on sender\n(cost=0.29..2.07 rows=1 width=48) (actual time=8.532..9.825 rows=1\nloops=10) Index Cond: (backlog.senderid = senderid) Total runtime:\n162.377 ms (7 rows)\n\nquassel=> rollback; ROLLBACK\n\n(This query was also run with empty caches.) bufferid=39 in particular\nhas this issue because it hasn't had any messages posted to for a long\ntime, so scanning backlog upwards will take a long time to gather 10\nmessages from it. In contrast, most other bufferid's have their messages\ninterleaved on the last entries of backlog. I believe this might be\nthrowing Postgres' estimates off.\n\nDoes anyone know if there's any tweaking I can do in Postgres so that it\nuses the appropriate plan?\n\nInfo about my setup: PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu,\ncompiled by gcc (GCC) 4.9.1, 64-bit Arch Linux, PostgreSQL installed\nfrom the official repositories, running inside a Xen HVM VPS.\nConnecting to PostgreSQL using psql via UNIX socket. Changed options:\n(locale-related ones omitted) listen_addresses = max_stack_depth = 2MB\nshared_buffers = 256MB (Issue is also present with default value)\nTotal RAM: 1GB\n\n\nThanks, --yuriks\n\n\n\n\n\nHi all,\n \nExcuse me if I made any mistakes, as this is my first time posting to a mailing list.\n \nI'm a user of Quassel, a IRC client that uses postgres a backing store for IRC logs and am running into heavy intermittent performance problems. 
I've tracked it down to a query that takes a very long time (around 4 minutes) to complete when its data isn't cached.\n \nThis is the layout of the table being queried and EXPLAIN ANALYZE result for the problematic query:\n \nquassel=> \\d backlog\n Table \"public.backlog\"\n Column | Type | Modifiers\n-----------+-----------------------------+-------------------------------------------------------------\n messageid | integer | not null default nextval('backlog_messageid_seq'::regclass)\n time | timestamp without time zone | not null\n bufferid | integer | not null\n type | integer | not null\n flags | integer | not null\n senderid | integer | not null\n message | text |\nIndexes:\n \"backlog_pkey\" PRIMARY KEY, btree (messageid)\n \"backlog_bufferid_idx\" btree (bufferid, messageid DESC)\nForeign-key constraints:\n \"backlog_bufferid_fkey\" FOREIGN KEY (bufferid) REFERENCES buffer(bufferid) ON DELETE CASCADE\n \"backlog_senderid_fkey\" FOREIGN KEY (senderid) REFERENCES sender(senderid) ON DELETE SET NULL\n \nquassel=> explain (analyze, buffers) SELECT messageid, time, type, flags, sender, message\nFROM backlog\nLEFT JOIN sender ON backlog.senderid = sender.senderid\nWHERE bufferid = 39\nORDER BY messageid DESC LIMIT 10;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.72..37.78 rows=10 width=102) (actual time=154410.353..154410.424 rows=10 loops=1)\n Buffers: shared hit=13952 read=19244\n -> Nested Loop Left Join (cost=0.72..145800.61 rows=39345 width=102) (actual time=154410.350..154410.414 rows=10 loops=1)\n Buffers: shared hit=13952 read=19244\n -> Index Scan Backward using backlog_pkey on backlog (cost=0.43..63830.21 rows=39345 width=62) (actual time=154410.327..154410.341 rows=10 loops=1)\n Filter: (bufferid = 39)\n Rows Removed by Filter: 1248320\n Buffers: shared hit=13921 read=19244\n -> Index Scan using sender_pkey on sender (cost=0.29..2.07 rows=1 width=48) (actual time=0.005..0.005 rows=1 loops=10)\n Index Cond: (backlog.senderid = senderid)\n Buffers: shared hit=31\n Total runtime: 154410.477 ms\n(12 rows)\n \nThis plan is consistently chosen, even after ANALYZEing and REINDEXing the table. It looks like Postgres is opting to do a sequential scan of the backlog_pkey index, filtering rows by bufferid, instead of directly using the backlog_bufferid_idx index that directly maps to the operation being made by the query. 
I was advised on IRC to try dropping the backlog_pkey index to force Postgres to use the correct one, and that uses a better plan:\n \nquassel=> begin;\nBEGIN\nquassel=> alter table backlog drop constraint backlog_pkey;\nALTER TABLE\nquassel=> explain analyze SELECT messageid, time, type, flags, sender, message\nFROM backlog\nLEFT JOIN sender ON backlog.senderid = sender.senderid\nWHERE bufferid = 39\nORDER BY messageid DESC LIMIT 10;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.72..40.50 rows=10 width=102) (actual time=63.826..162.134 rows=10 loops=1)\n -> Nested Loop Left Join (cost=0.72..156518.91 rows=39345 width=102) (actual time=63.823..162.126 rows=10 loops=1)\n -> Index Scan using backlog_bufferid_idx on backlog (cost=0.43..74548.51 rows=39345 width=62) (actual time=63.798..63.814 rows=10 loops=1)\n Index Cond: (bufferid = 39)\n -> Index Scan using sender_pkey on sender (cost=0.29..2.07 rows=1 width=48) (actual time=8.532..9.825 rows=1 loops=10)\n Index Cond: (backlog.senderid = senderid)\n Total runtime: 162.377 ms\n(7 rows)\n \nquassel=> rollback;\nROLLBACK\n \n(This query was also run with empty caches.) bufferid=39 in particular has this issue because it hasn't had any messages posted to for a long time, so scanning backlog upwards will take a long time to gather 10 messages from it. In contrast, most other bufferid's have their messages interleaved on the last entries of backlog. I believe this might be throwing Postgres' estimates off.\n \nDoes anyone know if there's any tweaking I can do in Postgres so that it uses the appropriate plan?\n \nInfo about my setup:\nPostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.9.1, 64-bit\nArch Linux, PostgreSQL installed from the official repositories, running inside a Xen HVM VPS.\nConnecting to PostgreSQL using psql via UNIX socket.\nChanged options: (locale-related ones omitted)\n listen_addresses = \n max_stack_depth = 2MB\n shared_buffers = 256MB (Issue is also present with default value)\nTotal RAM: 1GB\n \n \nThanks,\n--yuriks",
"msg_date": "Sat, 15 Nov 2014 21:16:00 -0200",
"msg_from": "Yuri Kunde Schlesner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Plan uses wrong index, preferring to scan pkey index instead"
},
{
"msg_contents": "Yuri Kunde Schlesner <[email protected]> writes:\n> Does anyone know if there's any tweaking I can do in Postgres so that it\n> uses the appropriate plan?\n\nI suspect that the reason the planner likes the backlog_pkey is that it's\nalmost perfectly correlated with table order, which greatly reduces the\nnumber of table fetches that need to happen over the course of a indexscan\ncompared to using the less-well-correlated bufferid+messageid index.\nSo that way is estimated to be cheaper than using the less-correlated\nindex ... and that may even be true except for outlier bufferid values\nwith no recent messages.\n\nYou could try fooling around with the planner cost parameters\n(particularly random_page_cost) to see if that changes the decision;\nbut it's usually a bad idea to alter cost parameters on the basis of\ntweaking a single query, and even more so for tweaking an outlier\ncase of a single query.\n\nWhat I think might be a workable solution, assuming you can stand a little\ndowntime to do it, is to CLUSTER the table on the bufferid+messageid\nindex. This would reverse the correlation advantage and thereby solve\nyour problem. Now, ordinarily CLUSTER is only a temporary solution\nbecause the cluster-induced ordering degrades over time. But I think it\nwould likely be a very long time until you accumulate so many new messages\nthat the table as a whole looks well-correlated on messageid alone.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 16 Nov 2014 12:18:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Plan uses wrong index, preferring to scan pkey index instead"
},
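Spelled out with the names from the thread, Tom's suggestion comes down to the two statements below; note that CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole table, hence the downtime caveat:

    CLUSTER backlog USING backlog_bufferid_idx;
    ANALYZE backlog;  -- refresh the correlation statistics afterwards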
{
"msg_contents": "On Sun, Nov 16, 2014, at 03:18 PM, Tom Lane wrote:\n> I suspect that the reason the planner likes the backlog_pkey is that it's\n> almost perfectly correlated with table order, which greatly reduces the\n> number of table fetches that need to happen over the course of a\n> indexscan\n> compared to using the less-well-correlated bufferid+messageid index.\n> So that way is estimated to be cheaper than using the less-correlated\n> index ... and that may even be true except for outlier bufferid values\n> with no recent messages.\nIndeed, and I can imagine that this is more advantageous in the general\ncase, as I described in my last message. The problem is that the\nvariance is too high, with a 500x slowdown between the best and the\nworst cases for that plan.\n\n> What I think might be a workable solution, assuming you can stand a\n> little\n> downtime to do it, is to CLUSTER the table on the bufferid+messageid\n> index. This would reverse the correlation advantage and thereby solve\n> your problem. Now, ordinarily CLUSTER is only a temporary solution\n> because the cluster-induced ordering degrades over time. But I think it\n> would likely be a very long time until you accumulate so many new\n> messages\n> that the table as a whole looks well-correlated on messageid alone.\nI tried this and it seems to have solved my problem! The better plan is\nconsistently chosen now, and it's as fast as former plan on the fast\ncases, and much faster on the slow case. I will continue monitoring the\nDB to see if it eventually switches back to the former scheme, and if it\ndoes I can just include a re-cluster on my maintenance schedule. Thanks\nso much for the suggestion.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 18 Nov 2014 22:02:34 -0200",
"msg_from": "Yuri Kunde Schlesner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan uses wrong index,\n preferring to scan pkey index instead"
}
] |
[
{
"msg_contents": "Looks like my client mangled (reflowed) the command outputs, so I\nrequoted them in this message sent as plain text. In case it happens\nagain I also posted them to a pastebin here:\nhttps://gist.github.com/yuriks/e86fb0c3cefb8d348c34\n\n--yuriks\n\n\nOn Sat, Nov 15, 2014, at 09:16 PM, Yuri Kunde Schlesner wrote:\n> Hi all,\n> \n> Excuse me if I made any mistakes, as this is my first time posting to a mailing list.\n> \n> I'm a user of Quassel, a IRC client that uses postgres a backing store for IRC logs and am running into heavy intermittent performance problems. I've tracked it down to a query that takes a very long time (around 4 minutes) to complete when its data isn't cached.\n> \n> This is the layout of the table being queried and EXPLAIN ANALYZE result for the problematic query:\n> \n> quassel=> \\d backlog\n> Table \"public.backlog\"\n> Column | Type | Modifiers\n> -----------+-----------------------------+-------------------------------------------------------------\n> messageid | integer | not null default nextval('backlog_messageid_seq'::regclass)\n> time | timestamp without time zone | not null\n> bufferid | integer | not null\n> type | integer | not null\n> flags | integer | not null\n> senderid | integer | not null\n> message | text |\n> Indexes:\n> \"backlog_pkey\" PRIMARY KEY, btree (messageid)\n> \"backlog_bufferid_idx\" btree (bufferid, messageid DESC)\n> Foreign-key constraints:\n> \"backlog_bufferid_fkey\" FOREIGN KEY (bufferid) REFERENCES buffer(bufferid) ON DELETE CASCADE\n> \"backlog_senderid_fkey\" FOREIGN KEY (senderid) REFERENCES sender(senderid) ON DELETE SET NULL\n> \n> quassel=> explain (analyze, buffers) SELECT messageid, time, type, flags, sender, message\n> FROM backlog\n> LEFT JOIN sender ON backlog.senderid = sender.senderid\n> WHERE bufferid = 39\n> ORDER BY messageid DESC LIMIT 10;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.72..37.78 rows=10 width=102) (actual time=154410.353..154410.424 rows=10 loops=1)\n> Buffers: shared hit=13952 read=19244\n> -> Nested Loop Left Join (cost=0.72..145800.61 rows=39345 width=102) (actual time=154410.350..154410.414 rows=10 loops=1)\n> Buffers: shared hit=13952 read=19244\n> -> Index Scan Backward using backlog_pkey on backlog (cost=0.43..63830.21 rows=39345 width=62) (actual time=154410.327..154410.341 rows=10 loops=1)\n> Filter: (bufferid = 39)\n> Rows Removed by Filter: 1248320\n> Buffers: shared hit=13921 read=19244\n> -> Index Scan using sender_pkey on sender (cost=0.29..2.07 rows=1 width=48) (actual time=0.005..0.005 rows=1 loops=10)\n> Index Cond: (backlog.senderid = senderid)\n> Buffers: shared hit=31\n> Total runtime: 154410.477 ms\n> (12 rows)\n> \n> This plan is consistently chosen, even after ANALYZEing and REINDEXing the table. It looks like Postgres is opting to do a sequential scan of the backlog_pkey index, filtering rows by bufferid, instead of directly using the backlog_bufferid_idx index that directly maps to the operation being made by the query. 
I was advised on IRC to try dropping the backlog_pkey index to force Postgres to use the correct one, and that uses a better plan:\n> \n> quassel=> begin;\n> BEGIN\n> quassel=> alter table backlog drop constraint backlog_pkey;\n> ALTER TABLE\n> quassel=> explain analyze SELECT messageid, time, type, flags, sender, message\n> FROM backlog\n> LEFT JOIN sender ON backlog.senderid = sender.senderid\n> WHERE bufferid = 39\n> ORDER BY messageid DESC LIMIT 10;\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.72..40.50 rows=10 width=102) (actual time=63.826..162.134 rows=10 loops=1)\n> -> Nested Loop Left Join (cost=0.72..156518.91 rows=39345 width=102) (actual time=63.823..162.126 rows=10 loops=1)\n> -> Index Scan using backlog_bufferid_idx on backlog (cost=0.43..74548.51 rows=39345 width=62) (actual time=63.798..63.814 rows=10 loops=1)\n> Index Cond: (bufferid = 39)\n> -> Index Scan using sender_pkey on sender (cost=0.29..2.07 rows=1 width=48) (actual time=8.532..9.825 rows=1 loops=10)\n> Index Cond: (backlog.senderid = senderid)\n> Total runtime: 162.377 ms\n> (7 rows)\n> \n> quassel=> rollback;\n> ROLLBACK\n> \n> (This query was also run with empty caches.) bufferid=39 in particular has this issue because it hasn't had any messages posted to for a long time, so scanning backlog upwards will take a long time to gather 10 messages from it. In contrast, most other bufferid's have their messages interleaved on the last entries of backlog. I believe this might be throwing Postgres' estimates off.\n> \n> Does anyone know if there's any tweaking I can do in Postgres so that it uses the appropriate plan?\n> \n> Info about my setup:\n> PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.9.1, 64-bit\n> Arch Linux, PostgreSQL installed from the official repositories, running inside a Xen HVM VPS.\n> Connecting to PostgreSQL using psql via UNIX socket.\n> Changed options: (locale-related ones omitted)\n> listen_addresses = \n> max_stack_depth = 2MB\n> shared_buffers = 256MB (Issue is also present with default value)\n> Total RAM: 1GB\n> \n> \n> Thanks,\n> --yuriks\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 15 Nov 2014 21:24:04 -0200",
"msg_from": "Yuri Kunde Schlesner <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Plan uses wrong index,\n preferring to scan pkey index instead"
}
] |
[
{
"msg_contents": "Another day, another timing out query rewritten to force a more stable\nquery plan.\n\nWhile I know that the planner almost always chooses a good plan, I\ntend to think it is trying too hard. While 99% of the queries might be\n10% faster, 1% might be timing out which makes my users cross and my\nlife difficult. I'd much rather have systems that are less efficient\noverall, but stable with a very low rate of timeouts.\n\nI was wondering if the planner should be much more pessimistic,\ntrusting in Murphy's Law and assuming the worst case is the likely\ncase? Would this give me a much more consistent system? Would it\nconsistently grind to a halt doing full table scans? Do we actually\nknow the worst cases, and would it be a relatively easy task to update\nthe planner so we can optionally enable this behavior per transaction\nor across a system? Boolean choice between pessimistic or optimistic,\nor is pessimism a dial?\n\n-- \nStuart Bishop <[email protected]>\nhttp://www.stuartbishop.net/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 21 Nov 2014 12:07:27 +0700",
"msg_from": "Stuart Bishop <[email protected]>",
"msg_from_op": true,
"msg_subject": "A pessimistic planner"
},
{
"msg_contents": "Stuart Bishop <[email protected]> writes:\n> I was wondering if the planner should be much more pessimistic,\n> trusting in Murphy's Law and assuming the worst case is the likely\n> case? Would this give me a much more consistent system?\n\nWell, it would give uniformly awful plans for cases where fast-start\nplans are actually useful ... and there are plenty.\n\nWe do need to think of ways to account for risk, but I don't think\nthat jamming the tiller all the way over is likely to be a net\nimprovement.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 21 Nov 2014 00:12:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: A pessimistic planner"
}
] |
[
{
"msg_contents": "Hello,\n\nI wonder why Postgres does not use index in the query below? It is a \nquite common use-case when you want to sort records by an arbitrary set \nof columns but do not want to create a lot of compound indexes for all \npossible combinations of them. It seems that if, for instance, your \nquery's ORDER BY is x, y, z then any of these indexes could be used to \nimprove the performance: (x); (x, y); (x, y, z).\n\ncreate temp table t as\nselect s as x, s % 10 as y, s % 100 as z\nfrom generate_series(1, 1000000) s;\n\nanalyze t;\ncreate index idx1 on t (y);\n\nselect *\nfrom t\norder by y desc, x\nlimit 10;\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Nov 2014 19:02:18 +0800",
"msg_from": "Vlad Arkhipov <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why don't use index on x when ORDER BY x, y?"
},
{
"msg_contents": "On Mon, Nov 24, 2014 at 12:02 PM, Vlad Arkhipov wrote:\n> Hello,\n>\n> I wonder why Postgres does not use index in the query below? It is a quite\n> common use-case when you want to sort records by an arbitrary set of\n> columns but do not want to create a lot of compound indexes for all possible\n> combinations of them. It seems that if, for instance, your query's ORDER BY\n> is x, y, z then any of these indexes could be used to improve the\n> performance: (x); (x, y); (x, y, z).\n>\n> create temp table t as\n> select s as x, s % 10 as y, s % 100 as z\n> from generate_series(1, 1000000) s;\n>\n> analyze t;\n> create index idx1 on t (y);\n>\n> select *\n> from t\n> order by y desc, x\n> limit 10;\n\nFor the record, here's the plan:\n\nrklemme=> explain analyze\nrklemme-> select *\nrklemme-> from t\nrklemme-> order by y desc, x\nrklemme-> limit 10;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------\n Limit (cost=36511.64..36511.67 rows=10 width=12) (actual\ntime=4058.863..4058.917 rows=10 loops=1)\n -> Sort (cost=36511.64..39011.64 rows=1000000 width=12) (actual\ntime=4058.849..4058.868 rows=10 loops=1)\n Sort Key: y, x\n Sort Method: top-N heapsort Memory: 17kB\n -> Seq Scan on t (cost=0.00..14902.00 rows=1000000\nwidth=12) (actual time=0.031..2025.639 rows=1000000 loops=1)\n Total runtime: 4058.992 ms\n(6 rows)\n\nYour index does not cover y AND x. If you remove \"order by x\" the\nindex will be used:\n\nrklemme=> explain analyze\nselect *\nfrom t\norder by y desc\nlimit 10;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..0.86 rows=10 width=12) (actual time=0.081..0.284\nrows=10 loops=1)\n -> Index Scan Backward using idx1 on t (cost=0.42..43268.33\nrows=1000000 width=12) (actual time=0.066..0.125 rows=10 loops=1)\n Total runtime: 0.403 ms\n(3 rows)\n\n\nNow, with a different index:\n\nrklemme=> create index idx2 on t (y desc, x);\nCREATE INDEX\nrklemme=> explain analyze\nselect *\nfrom t\norder by y desc, x\nlimit 10;\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.42..0.88 rows=10 width=12) (actual time=0.127..0.290\nrows=10 loops=1)\n -> Index Scan using idx2 on t (cost=0.42..45514.12 rows=1000000\nwidth=12) (actual time=0.112..0.165 rows=10 loops=1)\n Total runtime: 0.404 ms\n(3 rows)\n\n\nrklemme=> select version();\n version\n-----------------------------------------------------------------------------------------------\n PostgreSQL 9.3.5 on i686-pc-linux-gnu, compiled by gcc (Ubuntu\n4.8.2-19ubuntu1) 4.8.2, 32-bit\n(1 row)\n\nKind regards\n\n\n-- \n[guy, jim].each {|him| remember.him do |as, often| as.you_can - without end}\nhttp://blog.rubybestpractices.com/\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Nov 2014 14:39:11 +0100",
"msg_from": "Robert Klemme <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why don't use index on x when ORDER BY x, y?"
},
{
"msg_contents": "Vlad Arkhipov <[email protected]> writes:\n> I wonder why Postgres does not use index in the query below?\n\nBecause it's useless: you'd still have to do a sort, and an indexscan\nis going to be a slower source of data for the sort than a seqscan.\n\nThere's been some experimentation of late with a \"partial sort\" capability\nthat could take advantage of partially-ordered input, which might make\nthis kind of thing interesting after all. But it's not committed and\nmight never be: it's far from clear that it'd be a win in many cases.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 24 Nov 2014 10:38:46 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why don't use index on x when ORDER BY x, y?"
}
] |
[
{
"msg_contents": "OK so there's a simple set of tree functions we use at work. They're\nquite fast in 8.4 and they've gotten about 40% slower in 9.2. They're\na simple mix of sql and plpgsql functions which are at\nhttp://pastebin.com/SXTnNhd5 and which I've attached.\n\nHere's a test query:\n\nselect tree_ancestor_keys('000000000000000100000001');\n\nAccording to explain analyze on both 8.4 and 9.2 they have the same\nplan. However, on the same machine the query is about 40% slower on\n9.2. Note we're not hitting the disks, or even buffers here. It's pure\nin memory plpsql and sql that we're running.\n\nexplain analyze select tree_ancestor_keys('000000000000000100000001')\nfrom generate_series(1,1000);\n\nOn 8.4 runs in about 280 to 300 ms. (you can run it once and get the\nsame diff, it's just easier to see with the generate series forcing it\nto run 1000 times to kind of even out the noise.)\n\nOn 9.2, same machine, clean fresh dbs etc, it runs in ~400 ms. And\nthat difference seems to be there on all plpgsql and sql functions.\n\nIn our application, these tree functions get called millions and\nmillions of times a day, and a 40% performance penalty is a pretty big\ndeal.\n\nWe're already using the trick of telling the query planner that this\nfunction will return 1 row with alter function rows 1 etc. That helps\na lot but it doesn't fix this underlying performance issue.\n\nServer versions are 8.4.22 (last I think) and 9.2.9.\n\nIf anyone has any suggestions I'd love to hear them.\n-- \nTo understand recursion, one must first understand recursion.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 25 Nov 2014 13:36:03 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Small performance regression in 9.2 has a big impact"
},
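For reference, the "alter function rows 1" trick Scott mentions is spelled like this; the single-argument signature is inferred from the test query above, so treat it as illustrative:

    ALTER FUNCTION tree_ancestor_keys(bit varying) ROWS 1;

This tells the planner to expect one row from the set-returning function instead of the default estimate of 1000 rows.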
{
"msg_contents": "On 11/25/2014 10:36 PM, Scott Marlowe wrote:\n> OK so there's a simple set of tree functions we use at work. They're\n> quite fast in 8.4 and they've gotten about 40% slower in 9.2. They're\n> a simple mix of sql and plpgsql functions which are at\n> http://pastebin.com/SXTnNhd5 and which I've attached.\n>\n> Here's a test query:\n>\n> select tree_ancestor_keys('000000000000000100000001');\n>\n> According to explain analyze on both 8.4 and 9.2 they have the same\n> plan. However, on the same machine the query is about 40% slower on\n> 9.2. Note we're not hitting the disks, or even buffers here. It's pure\n> in memory plpsql and sql that we're running.\n>\n> explain analyze select tree_ancestor_keys('000000000000000100000001')\n> from generate_series(1,1000);\n>\n> On 8.4 runs in about 280 to 300 ms. (you can run it once and get the\n> same diff, it's just easier to see with the generate series forcing it\n> to run 1000 times to kind of even out the noise.)\n>\n> On 9.2, same machine, clean fresh dbs etc, it runs in ~400 ms. And\n> that difference seems to be there on all plpgsql and sql functions.\n>\n> In our application, these tree functions get called millions and\n> millions of times a day, and a 40% performance penalty is a pretty big\n> deal.\n>\n> We're already using the trick of telling the query planner that this\n> function will return 1 row with alter function rows 1 etc. That helps\n> a lot but it doesn't fix this underlying performance issue.\n>\n> Server versions are 8.4.22 (last I think) and 9.2.9.\n>\n> If anyone has any suggestions I'd love to hear them.\n\nI don't know why this regressed between those versions, but looking at \nthe functions, there's some low-hanging fruit:\n\n1. tree_ancestor_keys() could use UNION ALL instead of UNION. (I believe \nduplicates are expected here, although I'm not 100% sure).\n\n2. tree_ancestor_keys() calculates tree_level($1) every time it \nrecurses. Would be cheaper to calculate once, and pass it as argument.\n\nPut together:\n\nCREATE FUNCTION tree_ancestor_keys(bit varying, integer, integer) \nRETURNS SETOF bit varying\n LANGUAGE sql IMMUTABLE STRICT\n AS $_$\n select tree_ancestor_key($1, $2)\n union all\n select tree_ancestor_keys($1, $2 + 1, $3)\n where $2 < $3\n$_$;\n\nCREATE or replace FUNCTION tree_ancestor_keys(bit varying, integer) \nRETURNS SETOF bit varying\n LANGUAGE sql IMMUTABLE STRICT\n AS $_$\n select tree_ancestor_keys($1, $2 + 1, tree_level($1))\n$_$;\n\nThese changes make your test query go about 2x faster on my laptop, with \ngit master. I'm sure you could optimize the functions further, but those \nat least seem like fairly safe and simple changes.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Nov 2014 22:58:36 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> OK so there's a simple set of tree functions we use at work. They're\n> quite fast in 8.4 and they've gotten about 40% slower in 9.2. They're\n> a simple mix of sql and plpgsql functions which are at\n> http://pastebin.com/SXTnNhd5 and which I've attached.\n\n> Here's a test query:\n\n> select tree_ancestor_keys('000000000000000100000001');\n\n> According to explain analyze on both 8.4 and 9.2 they have the same\n> plan. However, on the same machine the query is about 40% slower on\n> 9.2. Note we're not hitting the disks, or even buffers here. It's pure\n> in memory plpsql and sql that we're running.\n\n> explain analyze select tree_ancestor_keys('000000000000000100000001')\n> from generate_series(1,1000);\n\nHmm, I don't like the trend here. For the repeat-1000x query, I get\nthese reported execution times:\n\n8.4\t360 ms\n9.0\t365 ms\n9.1\t440 ms\n9.2\t510 ms\n9.3\t550 ms\n9.4\t570 ms\nhead\t570 ms\n\n(This is in assert-enabled builds, I'm too lazy to rebuild all the\nbranches without that.)\n\noprofile isn't showing any smoking gun AFAICS; it seems like things\nare just generally slower. Still, we've not seen similar reports\nelsewhere, so somehow this usage style is getting penalized in newer\nbranches compared to other cases. If it were all on 9.2's head\nI'd be suspicious of the plancache mechanism, but evidently that's\nnot it, or at least not the whole story.\n\nHEAD profile entries above 1%:\n\nsamples % image name symbol name\n7573 7.2448 postgres AllocSetAlloc\n6059 5.7964 postgres SearchCatCache\n3533 3.3799 postgres AllocSetCheck\n3420 3.2718 postgres base_yyparse\n2104 2.0128 postgres AllocSetFree\n1613 1.5431 postgres palloc\n1523 1.4570 postgres ExecMakeFunctionResultNoSets\n1313 1.2561 postgres check_list_invariants\n1241 1.1872 postgres palloc0\n1213 1.1604 postgres pfree\n1157 1.1069 postgres SPI_plan_get_cached_plan\n1136 1.0868 postgres GetPrivateRefCountEntry\n1098 1.0504 postgres sentinel_ok\n1085 1.0380 postgres hash_any\n1057 1.0112 postgres core_yylex\n1053 1.0074 postgres expression_tree_walker\n1046 1.0007 postgres hash_search_with_hash_value\n\n8.4 profile entries above 1%:\n\nsamples % image name symbol name\n11782 10.3680 postgres AllocSetAlloc\n7369 6.4846 postgres AllocSetCheck\n5623 4.9482 postgres base_yyparse\n4166 3.6660 postgres SearchCatCache\n2671 2.3504 postgres ExecMakeFunctionResultNoSets\n2060 1.8128 postgres MemoryContextAllocZeroAligned\n2030 1.7864 postgres MemoryContextAlloc\n1679 1.4775 postgres ExecEvalParam\n1607 1.4141 postgres base_yylex\n1389 1.2223 postgres check_list_invariants\n1348 1.1862 postgres RevalidateCachedPlan\n1341 1.1801 postgres AcquireExecutorLocks\n1266 1.1141 postgres MemoryContextCreate\n1256 1.1053 postgres hash_any\n1255 1.1044 postgres expression_tree_walker\n1202 1.0577 postgres expression_tree_mutator\n1191 1.0481 postgres AllocSetReset\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Nov 2014 16:55:15 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
{
"msg_contents": "On Tue, Nov 25, 2014 at 1:58 PM, Heikki Linnakangas\n<[email protected]> wrote:\n> On 11/25/2014 10:36 PM, Scott Marlowe wrote:\n>>\n>> OK so there's a simple set of tree functions we use at work. They're\n>> quite fast in 8.4 and they've gotten about 40% slower in 9.2. They're\n>> a simple mix of sql and plpgsql functions which are at\n>> http://pastebin.com/SXTnNhd5 and which I've attached.\n>>\n>> Here's a test query:\n>>\n>> select tree_ancestor_keys('000000000000000100000001');\n>>\n>> According to explain analyze on both 8.4 and 9.2 they have the same\n>> plan. However, on the same machine the query is about 40% slower on\n>> 9.2. Note we're not hitting the disks, or even buffers here. It's pure\n>> in memory plpsql and sql that we're running.\n>>\n>> explain analyze select tree_ancestor_keys('000000000000000100000001')\n>> from generate_series(1,1000);\n>>\n>> On 8.4 runs in about 280 to 300 ms. (you can run it once and get the\n>> same diff, it's just easier to see with the generate series forcing it\n>> to run 1000 times to kind of even out the noise.)\n>>\n>> On 9.2, same machine, clean fresh dbs etc, it runs in ~400 ms. And\n>> that difference seems to be there on all plpgsql and sql functions.\n>>\n>> In our application, these tree functions get called millions and\n>> millions of times a day, and a 40% performance penalty is a pretty big\n>> deal.\n>>\n>> We're already using the trick of telling the query planner that this\n>> function will return 1 row with alter function rows 1 etc. That helps\n>> a lot but it doesn't fix this underlying performance issue.\n>>\n>> Server versions are 8.4.22 (last I think) and 9.2.9.\n>>\n>> If anyone has any suggestions I'd love to hear them.\n>\n>\n> I don't know why this regressed between those versions, but looking at the\n> functions, there's some low-hanging fruit:\n>\n> 1. tree_ancestor_keys() could use UNION ALL instead of UNION. (I believe\n> duplicates are expected here, although I'm not 100% sure).\n>\n> 2. tree_ancestor_keys() calculates tree_level($1) every time it recurses.\n> Would be cheaper to calculate once, and pass it as argument.\n>\n> Put together:\n>\n> CREATE FUNCTION tree_ancestor_keys(bit varying, integer, integer) RETURNS\n> SETOF bit varying\n> LANGUAGE sql IMMUTABLE STRICT\n> AS $_$\n> select tree_ancestor_key($1, $2)\n> union all\n> select tree_ancestor_keys($1, $2 + 1, $3)\n> where $2 < $3\n> $_$;\n>\n> CREATE or replace FUNCTION tree_ancestor_keys(bit varying, integer) RETURNS\n> SETOF bit varying\n> LANGUAGE sql IMMUTABLE STRICT\n> AS $_$\n> select tree_ancestor_keys($1, $2 + 1, tree_level($1))\n> $_$;\n>\n> These changes make your test query go about 2x faster on my laptop, with git\n> master. I'm sure you could optimize the functions further, but those at\n> least seem like fairly safe and simple changes.\n\nWow that made a huge difference. About a 50% increase across the\nboard. 
Sadly, 9.2 is still way behind 8.4 (see Tom's email)\n\nHere's the results of 10 runs, 9.2 old functions, 9.2 new functions,\n8.4 old functions, 8.4 new functions:\n\n402.454 ms 217.718 ms 283.289 ms 167.108 ms\n390.828 ms 219.644 ms 282.994 ms 165.524 ms\n397.987 ms 216.864 ms 289.053 ms 170.821 ms\n391.670 ms 220.943 ms 296.410 ms 164.190 ms\n437.099 ms 233.360 ms 284.279 ms 183.919 ms\n473.945 ms 291.199 ms 347.916 ms 168.300 ms\n453.974 ms 231.350 ms 367.517 ms 158.717 ms\n377.221 ms 226.697 ms 297.255 ms 164.196 ms\n396.812 ms 262.638 ms 300.073 ms 161.325 ms\n436.822 ms 227.489 ms 292.553 ms 179.194 ms\n405.929 ms 233.461 ms 267.355 ms 162.847 ms\n\nso about 400 versus about 220, and about 290 versus about 165 or so.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Nov 2014 15:25:47 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
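The "rows 1" trick Scott refers to upthread is a per-function planner estimate; a hedged sketch of that knob (the single-argument signature is assumed from the pastebin):

ALTER FUNCTION tree_ancestor_keys(bit varying) ROWS 1;
-- overrides the default estimate of 1000 rows that the planner
-- otherwise assumes for set-returning functions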
{
"msg_contents": "I wrote:\n> Scott Marlowe <[email protected]> writes:\n>> OK so there's a simple set of tree functions we use at work. They're\n>> quite fast in 8.4 and they've gotten about 40% slower in 9.2.\n\n> Hmm, I don't like the trend here. For the repeat-1000x query, I get\n> these reported execution times:\n\n> 8.4\t360 ms\n> 9.0\t365 ms\n> 9.1\t440 ms\n> 9.2\t510 ms\n> 9.3\t550 ms\n> 9.4\t570 ms\n> head\t570 ms\n\nI found part of the issue: you're doing a lot of UNIONs on varbit\ncolumns, and every time we parse one of those, get_sort_group_operators\nasks the typcache about hash operators for the type. typcache finds\nout that varbit has no default hash opclass ... but *it doesn't cache\nnegative answers*. So that means a physical lookup in pg_opclass every\ntime :-(. That is actually the only bufmgr traffic induced by this\ntest query, once the catalog caches are loaded. Versions before 9.1\ndon't have that hit because they didn't consider hashing for UNIONs.\n\nI made a quick-hack patch to suppress redundant GetDefaultOpclass calls\nin typcache.c, and found that that brought HEAD's runtime down to 460ms.\nI don't think I'd want to commit this in its current form, but with\nsome additions to flush the cache after pg_opclass updates it would\nbe a credible improvement.\n\nSo that probably explains the jump from 9.0 to 9.1. Don't know yet\nabout the other lossage.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Nov 2014 17:57:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
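Tom's typcache finding can be observed from the outside: varbit has a btree opclass but no default hash opclass, so a deduplicating UNION over it cannot hash. A minimal sketch:

CREATE TEMP TABLE vb (b varbit);
EXPLAIN SELECT b FROM vb UNION SELECT b FROM vb;
-- no hash opclass for varbit, so expect Unique over Sort; on 9.1 and
-- later the parser also probes for a default hash opclass each time
EXPLAIN SELECT b FROM vb UNION ALL SELECT b FROM vb;
-- plain Append: no deduplication and no hash-operator lookup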
{
"msg_contents": "Scott Marlowe <[email protected]> writes:\n> On Tue, Nov 25, 2014 at 1:58 PM, Heikki Linnakangas\n> <[email protected]> wrote:\n>> I don't know why this regressed between those versions, but looking at the\n>> functions, there's some low-hanging fruit:\n>> \n>> 1. tree_ancestor_keys() could use UNION ALL instead of UNION. (I believe\n>> duplicates are expected here, although I'm not 100% sure).\n>> \n>> 2. tree_ancestor_keys() calculates tree_level($1) every time it recurses.\n>> Would be cheaper to calculate once, and pass it as argument.\n\n> Wow that made a huge difference. About a 50% increase across the\n> board. Sadly, 9.2 is still way behind 8.4 (see Tom's email)\n\nSwitching from UNION to UNION ALL would dodge the varbit hash-opclass\ncaching issue, I think. But there's still something else going on.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 25 Nov 2014 18:02:22 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
{
"msg_contents": "I wrote:\n>> Hmm, I don't like the trend here. For the repeat-1000x query, I get\n>> these reported execution times:\n\n>> 8.4\t360 ms\n>> 9.0\t365 ms\n>> 9.1\t440 ms\n>> 9.2\t510 ms\n>> 9.3\t550 ms\n>> 9.4\t570 ms\n>> head\t570 ms\n\n> I made a quick-hack patch to suppress redundant GetDefaultOpclass calls\n> in typcache.c, and found that that brought HEAD's runtime down to 460ms.\n\nI found some additional low-hanging fruit by comparing gprof call counts\nin 8.4 and HEAD:\n\n* OverrideSearchPathMatchesCurrent(), which is not there at all in 8.4\nor 9.2, accounts for a depressingly large amount of palloc/pfree traffic.\nThe implementation was quick-n-dirty to begin with, but workloads\nlike this one call it often enough to make it a pain point.\n\n* plpgsql's setup_param_list() contributes another large fraction of\nadded palloc/pfree traffic; this is evidently caused by the temporary\nbitmapset needed for its bms_first_member() loop, which was not there\nin 8.4 but is there in 9.2.\n\nI've been able to bring HEAD's runtime down to about 415 ms with the\ncollection of more-or-less quick hacks attached. None of them are\nready to commit but I thought I'd post them for the record.\n\nAfter review of all this, I think the aspect of your example that is\ncausing performance issues is that there are a lot of non-inline-able\nSQL-language function calls. That's not a case that we've put much\nthought into lately. I doubt we are going to get all the way back to\nwhere 8.4 was in the short term, because I can see that there is a\nsignificant amount of new computation associated with collation\nmanagement during parsing (catcache lookups driven by get_typcollation,\nassign_collations_walker, etc). The long-term answer to that is to\nimprove the SQL-language function support so that it can cache the results\nof parsing the function body; we have surely got enough plancache support\nfor that now, but no one's attempted to apply it in functions.c.\n\n\t\t\tregards, tom lane\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 25 Nov 2014 22:59:30 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
},
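The inlining distinction Tom draws is visible in EXPLAIN VERBOSE: a simple scalar SQL function disappears into the calling query, while set-returning, recursive functions like the ones here stay as opaque calls. A hedged sketch:

CREATE FUNCTION add_one(int) RETURNS int
    LANGUAGE sql IMMUTABLE AS 'SELECT $1 + 1';
EXPLAIN VERBOSE SELECT add_one(g) FROM generate_series(1, 3) g;
-- the Output line shows (g.g + 1): the body was inlined, so nothing is
-- parsed or planned per call; tree_ancestor_keys() does not qualify for
-- inlining and so re-enters the SQL-function machinery on every call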
{
"msg_contents": "On Tue, Nov 25, 2014 at 8:59 PM, Tom Lane <[email protected]> wrote:\n> I wrote:\n>>> Hmm, I don't like the trend here. For the repeat-1000x query, I get\n>>> these reported execution times:\n>\n>>> 8.4 360 ms\n>>> 9.0 365 ms\n>>> 9.1 440 ms\n>>> 9.2 510 ms\n>>> 9.3 550 ms\n>>> 9.4 570 ms\n>>> head 570 ms\n>\n>> I made a quick-hack patch to suppress redundant GetDefaultOpclass calls\n>> in typcache.c, and found that that brought HEAD's runtime down to 460ms.\n>\n> I found some additional low-hanging fruit by comparing gprof call counts\n> in 8.4 and HEAD:\n>\n> * OverrideSearchPathMatchesCurrent(), which is not there at all in 8.4\n> or 9.2, accounts for a depressingly large amount of palloc/pfree traffic.\n> The implementation was quick-n-dirty to begin with, but workloads\n> like this one call it often enough to make it a pain point.\n>\n> * plpgsql's setup_param_list() contributes another large fraction of\n> added palloc/pfree traffic; this is evidently caused by the temporary\n> bitmapset needed for its bms_first_member() loop, which was not there\n> in 8.4 but is there in 9.2.\n>\n> I've been able to bring HEAD's runtime down to about 415 ms with the\n> collection of more-or-less quick hacks attached. None of them are\n> ready to commit but I thought I'd post them for the record.\n>\n> After review of all this, I think the aspect of your example that is\n> causing performance issues is that there are a lot of non-inline-able\n> SQL-language function calls. That's not a case that we've put much\n> thought into lately. I doubt we are going to get all the way back to\n> where 8.4 was in the short term, because I can see that there is a\n> significant amount of new computation associated with collation\n> management during parsing (catcache lookups driven by get_typcollation,\n> assign_collations_walker, etc). The long-term answer to that is to\n> improve the SQL-language function support so that it can cache the results\n> of parsing the function body; we have surely got enough plancache support\n> for that now, but no one's attempted to apply it in functions.c.\n\nThanks so much for the work on this. We won't be applying a patch in\nprod but we can definitely get a feel for the change on some test\nboxes.\n\nAnd if I didn't say it, Thanks to Heikki for his advice. Huge\ndifference there too.\n\nThis is exactly why I love using Postgres so much. The community\nsupport. No other software package has this kind of support.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 26 Nov 2014 10:21:29 -0700",
"msg_from": "Scott Marlowe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Small performance regression in 9.2 has a big impact"
}
] |
[
{
"msg_contents": "Hi all,\n\nWe are facing following issue in postgresql 9.1.3 in using arrow key in Solaris platform.\nCan you please help us to resolve it or any new release has fix for this or any workaround for this?\n\nissue: psql client generates a core when up arrow is used twice.\n============\nPlatfrom: Solaris X86\n\nSteps to reproduce:\n=====================\n1. Login to any postgres database\n2. execute any quer say \"\\list\"\n3. press up arrow twice.\n4. segmentation fault occurs and core is generated. Also session is terminated.\n\nPLease find example below\n\n# ./psql -U super -d mgrdb\nPassword for user super:\npsql (9.1.3)\nType \"help\" for help.\n\nmgrdb=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileg\nes\n-----------+----------+----------+-------------+-------------+------------------\n-----\nmgrdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\npostgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\ntemplate0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres\n +\n | | | | | postgres=CTc/post\ngres\ntemplate1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres\n +\n | | | | | postgres=CTc/post\ngres\n(4 rows)\n\nmgrdb=#\nmgrdb=# select count(1) from operator_msm;Segmentation Fault (core dumped)\n\nRegards\nTarkeshwar\n\n\n\n\n\n\n\n\n\nHi all,\n \nWe are facing following issue in postgresql 9.1.3 in using arrow key in Solaris platform.\n\nCan you please help us to resolve it or any new release has fix for this or any workaround for this?\n \nissue: psql client generates a core when up arrow is used twice.\n\n============\nPlatfrom: Solaris X86\n \nSteps to reproduce:\n=====================\n1. Login to any postgres database\n2. execute any quer say \"\\list\"\n3. press up arrow twice.\n4. segmentation fault occurs and core is generated. Also session is terminated.\n \nPLease find example below\n \n# ./psql -U super -d mgrdb\nPassword for user super:\npsql (9.1.3)\nType \"help\" for help.\n \nmgrdb=# \\l\n List of databases\n Name | Owner | Encoding | Collate | Ctype | Access privileg\nes\n-----------+----------+----------+-------------+-------------+------------------\n-----\nmgrdb | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\npostgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |\ntemplate0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres\n +\n | | | | | postgres=CTc/post\ngres\ntemplate1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres\n +\n | | | | | postgres=CTc/post\ngres\n(4 rows)\n \nmgrdb=#\nmgrdb=# select count(1) from operator_msm;Segmentation Fault (core dumped)\n \nRegards\nTarkeshwar",
"msg_date": "Wed, 26 Nov 2014 10:16:57 +0000",
"msg_from": "M Tarkeshwar Rao <[email protected]>",
"msg_from_op": true,
"msg_subject": "issue in postgresql 9.1.3 in using arrow key in Solaris platform"
},
{
"msg_contents": "On 11/26/2014 02:16 AM, M Tarkeshwar Rao wrote:\n> Hi all,\n>\n> We are facing following issue in postgresql 9.1.3 in using arrow key in\n> Solaris platform.\n>\n> *Can you please help us to resolve it or any new release has fix for\n> this or any workaround for this?*\n\nWould seem to me to be an interaction between Postgres and readline. Not \nsure exactly what, but some information would be helpful for those that \nmight know:\n\n1) What version of Solaris?\n\n2) How was Postgres installed and from what source?\n\n3) What version of readline is installed?\n\n4) Are you using a psql client that is the same version as the server?\n\n\n>\n> issue: psql client generates a core when up arrow is used twice.\n>\n\n>\n> Regards\n>\n> Tarkeshwar\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Wed, 26 Nov 2014 06:37:31 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue in postgresql 9.1.3 in using arrow key in Solaris platform"
},
{
"msg_contents": "M Tarkeshwar Rao <[email protected]> writes:\n> We are facing following issue in postgresql 9.1.3 in using arrow key in Solaris platform.\n> Can you please help us to resolve it or any new release has fix for this or any workaround for this?\n> issue: psql client generates a core when up arrow is used twice.\n\nAlmost certainly, this is not psql's fault, but rather a bug in the\nreadline or libedit library it's using for command history.\n\nIf you're using libedit, I can't say that I'm astonished, as we've\nseen a depressingly large number of bugs reported in various versions\nof libedit.\n\nIn any case, try to get a newer version of that library; or if you've\nlinked psql to libedit, consider rebuilding against libreadline.\n\n\t\t\tregards, tom lane\n\nPS: this was cross-posted inappropriately. I've trimmed the cc\nlist to just pgsql-general.\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Wed, 26 Nov 2014 10:27:51 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue in postgresql 9.1.3 in using arrow key in Solaris platform"
},
{
"msg_contents": "On 11/26/2014 02:16 AM, M Tarkeshwar Rao wrote:\n> Hi all,\n> \n> \n> \n> We are facing following issue in postgresql 9.1.3 in using arrow key in\n> Solaris platform.\n> \n> *Can you please help us to resolve it or any new release has fix for\n> this or any workaround for this?*\n\nMr. Rao:\n\n1) Please do not cross-post to multiple mailing lists. In the future,\nthis may cause you to be banned from the PostgreSQL mailing lists.\n\n2) PostgreSQL 9.1.3 is 11 patch releases behind and contains multiple\npublished security holes.\n\n3) Sounds like there's a bug in the readline or libedit libraries for\nyour platform. How did you build PostgreSQL?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Dec 2014 11:10:54 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: issue in postgresql 9.1.3 in using arrow key in Solaris platform"
}
] |
[
{
"msg_contents": "Hi all -- I am trying to do a better job of understanding why the\nplanner chooses some plans over others, and I ran into this issue this\nmorning where the planner ends up choosing a query that's 15000x\nslower. I have a table that represents nodes (here called\n\"components\") in a tree. Each node has a parent_id; the root node has\na NULL parent_id. I wanted to find the route from each node to its\nroot. Here is my query:\n\n# explain analyze WITH RECURSIVE path(start, id, internal_id,\nparent_id, document_id, depth) AS (\n SELECT t.id, t.id, t.internal_id, t.parent_id, t.document_id, 1\n FROM component t\n WHERE id < 6361197\n UNION ALL\n SELECT path.start, t.id, t.internal_id, t.parent_id,\nt.document_id, path.depth+1\n FROM component t, path\n WHERE t.internal_id = path.parent_id AND t.document_id=path.document_id\n)\nSELECT * FROM path ;\n\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on path (cost=61484650.85..61484654.39 rows=177 width=24)\n(actual time=0.007..36.958 rows=1007 loops=1)\n CTE path\n -> Recursive Union (cost=0.57..61484650.85 rows=177 width=24)\n(actual time=0.007..36.755 rows=1007 loops=1)\n -> Index Scan using component_pkey on component t\n(cost=0.57..644.56 rows=167 width=16) (actual time=0.006..0.076\nrows=218 loops=1)\n Index Cond: (id < 6361197)\n -> Nested Loop (cost=0.57..6148400.28 rows=1 width=24)\n(actual time=0.063..4.054 rows=88 loops=9)\n -> WorkTable Scan on path path_1 (cost=0.00..33.40\nrows=1670 width=16) (actual time=0.000..0.006 rows=112 loops=9)\n -> Index Scan using component_document_id on\ncomponent t_1 (cost=0.57..3681.65 rows=1 width=16) (actual\ntime=0.023..0.036 rows=1 loops=1007)\n Index Cond: (document_id = path_1.document_id)\n Filter: (path_1.parent_id = internal_id)\n Rows Removed by Filter: 237\n Total runtime: 37.039 ms\n\n\nHowever, when I add one more row to the seed query of the CTE, it\nchanges the plan to this:\n\n# explain analyze WITH RECURSIVE path(start, id, internal_id,\nparent_id, document_id, depth) AS (\n SELECT t.id, t.id, t.internal_id, t.parent_id, t.document_id, 1\n FROM component t\n WHERE id < 6361198\n UNION ALL\n SELECT path.start, t.id, t.internal_id, t.parent_id,\nt.document_id, path.depth+1\n FROM component t, path\n WHERE t.internal_id = path.parent_id AND t.document_id=path.document_id\n)\nSELECT * FROM path ;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------\n CTE Scan on path (cost=61122640.05..61122643.61 rows=178 width=24)\n(actual time=0.008..587814.729 rows=1008 loops=1)\n CTE path\n -> Recursive Union (cost=0.57..61122640.05 rows=178 width=24)\n(actual time=0.006..587814.346 rows=1008 loops=1)\n -> Index Scan using component_pkey on component t\n(cost=0.57..648.36 rows=168 width=16) (actual time=0.006..0.076\nrows=219 loops=1)\n Index Cond: (id < 6361198)\n -> Hash Join (cost=5543292.40..6112198.81 rows=1\nwidth=24) (actual time=64743.932..65312.625 rows=88 loops=9)\n Hash Cond: ((path_1.parent_id = t_1.internal_id) AND\n(path_1.document_id = t_1.document_id))\n -> WorkTable Scan on path path_1 (cost=0.00..33.60\nrows=1680 width=16) (actual time=0.001..0.015 rows=112 loops=9)\n -> Hash (cost=3627866.96..3627866.96 rows=96335696\nwidth=16) (actual time=64572.641..64572.641 rows=94613537 loops=9)\n Buckets: 
4096 Batches: 8192 Memory Usage: 537kB\n -> Seq Scan on component t_1\n(cost=0.00..3627866.96 rows=96335696 width=16) (actual\ntime=0.005..43364.346 rows=94613537 loops=9)\n Total runtime: 587814.885 ms\n\nI would think that it has decided that the document_id index is not\nvery selective for the given mix of rows, however I checked the\nstatistics for the table and I found that n_distinct for document_id\nis 101559 (the true value is 162545). The value of pg_class.reltuples\nfor the table is 96055600, which is very close to the true value\n94613537.\n\nIn the first query, it appears to me that postgres thinks the index\nscan is much more expensive than it really is. However, given the\naccurate statistics, I can't see how.\n\nBTW I tried playing with random_page_cost. If I lower it to 2.0 then\nit chooses the fast plan. At 3.0 it chooses the slow plan.\n\nThanks!\nPatrick\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 2 Dec 2014 12:59:23 -0800",
"msg_from": "Patrick Krecker <[email protected]>",
"msg_from_op": true,
"msg_subject": "CTE query plan ignores selective index"
},
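The random_page_cost experiment mentioned at the end of the report can be scoped to one transaction so nothing leaks into other sessions; a sketch:

BEGIN;
SET LOCAL random_page_cost = 2.0;  -- cheaper random I/O tilts the choice back to the nested loop
-- re-run the recursive CTE from above and compare EXPLAIN ANALYZE output here
ROLLBACK;  -- SET LOCAL reverts automatically at transaction end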
{
"msg_contents": "Patrick Krecker <[email protected]> writes:\n> -> Nested Loop (cost=0.57..6148400.28 rows=1 width=24)\n> (actual time=0.063..4.054 rows=88 loops=9)\n> -> WorkTable Scan on path path_1 (cost=0.00..33.40\n> rows=1670 width=16) (actual time=0.000..0.006 rows=112 loops=9)\n> -> Index Scan using component_document_id on\n> component t_1 (cost=0.57..3681.65 rows=1 width=16) (actual\n> time=0.023..0.036 rows=1 loops=1007)\n> Index Cond: (document_id = path_1.document_id)\n> Filter: (path_1.parent_id = internal_id)\n> Rows Removed by Filter: 237\n\n> I would think that it has decided that the document_id index is not\n> very selective for the given mix of rows, however I checked the\n> statistics for the table and I found that n_distinct for document_id\n> is 101559 (the true value is 162545). The value of pg_class.reltuples\n> for the table is 96055600, which is very close to the true value\n> 94613537.\n\n> In the first query, it appears to me that postgres thinks the index\n> scan is much more expensive than it really is. However, given the\n> accurate statistics, I can't see how.\n\nI think the problem is that it doesn't have any stats for the output of\npath_1, so it's probably falling back on some rather generic assumptions\nabout how many component rows will match each of the two join conditions.\nThat causes it to think that the indexscan will reject a lot of rows at\nthe filter step and therefore be expensive. Possibly that could be\nimproved, but it won't happen overnight.\n\nThe most expeditious way to fix this would likely be to provide an\nindex on component(document_id, internal_id). The planner should\nthen think an indexscan on that is cheap, regardless of whether the\ncheck on internal_id is really doing much of anything.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 02 Dec 2014 17:07:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: CTE query plan ignores selective index"
}
] |
[
{
"msg_contents": "Hi,\n\nApologies if this is the wrong list for this time of query (first time\nposting).\n\nI'm currently experimenting with hstore on Posgtres 9.4rc1. I've created a\ntable with an hstore column, with and index on that column (tried both gin\nand btree indexes) and the explain plan says that the index is never used\nfor the lookup and falls to a sequential scan every time (table has 1 000\n000 rows). The query plans and execution time for btree index, gin index\nand unindexed are the same. Is there something I'm doing wrong or missing\nin order to get indexes to work on hstore columns?\n\nDetails:\n\n0) Postgres version:\n\nbarkerm=# select version();\n version\n\n---------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.4rc1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3\n20140911 (Red Hat 4.8.3-7), 64-bit\n(1 row)\n\n1) Created table with hstore column and btree index.\n\nbarkerm=# \\d audit\n Table \"public.audit\"\n Column | Type |\nModifiers\n---------------+-----------------------------+----------------------------------------------------\n id | integer | not null default\nnextval('audit_id_seq'::regclass)\n principal_id | integer |\n created_at | timestamp without time zone |\n root | character varying(255) |\n template_code | character(3) |\n attributes | hstore |\n args | character varying(255)[] |\nIndexes:\n \"audit_pkey\" PRIMARY KEY, btree (id)\n \"audit_attributes_idx\" btree (attributes)\n\n2) Insert 1 000 000 rows\n\nbarkerm=# select count(*) from audit;\n count\n---------\n 1000000\n(1 row)\n\n3) Run analyse.\n\n4) Pick a row somewhere in the middle:\n\nbarkerm=# select id, attributes from audit where id = 500000;\n id | attributes\n--------+---------------------------------------------------------\n 500000 | \"accountId\"=>\"1879355460\", \"instrumentId\"=>\"1625557725\"\n(1 row)\n\n5) Explain query using the attributes column in the where clause (uses Seq\nScan).\n\nbarkerm=# explain analyse select * from audit where attributes->'accountId'\n= '1879355460';\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual\ntime=114.314..218.821 rows=1 loops=1)\n Filter: ((attributes -> 'accountId'::text) = '1879355460'::text)\n Rows Removed by Filter: 999999\n Planning time: 0.074 ms\n Execution time: 218.843 ms\n(5 rows)\n\n6) Rebuild the data using a gin index.\n\nbarkerm=# \\d audit\n Table \"public.audit\"\n Column | Type |\nModifiers\n---------------+-----------------------------+----------------------------------------------------\n id | integer | not null default\nnextval('audit_id_seq'::regclass)\n principal_id | integer |\n created_at | timestamp without time zone |\n root | character varying(255) |\n template_code | character(3) |\n attributes | hstore |\n args | character varying(255)[] |\nIndexes:\n \"audit_pkey\" PRIMARY KEY, btree (id)\n \"audit_attributes_idx\" gin (attributes)\n\n7) Again explain the selection of a single row using a constraint that\nreferences the hstore column. 
Seq Scan is still used.\n\nbarkerm=# explain analyse select * from audit where attributes->'accountId'\n= '1238334838';\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual\ntime=122.173..226.363 rows=1 loops=1)\n Filter: ((attributes -> 'accountId'::text) = '1238334838'::text)\n Rows Removed by Filter: 999999\n Planning time: 0.164 ms\n Execution time: 226.392 ms\n(5 rows)\n\n8) Drop index an query as a baseline.\n\nbarkerm=# explain analyse select * from audit where attributes->'accountId'\n= '1238334838';\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------\n Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual\ntime=109.115..212.666 rows=1 loops=1)\n Filter: ((attributes -> 'accountId'::text) = '1238334838'::text)\n Rows Removed by Filter: 999999\n Planning time: 0.113 ms\n Execution time: 212.701 ms\n(5 rows)\n\nRegards,\nMichael Barker.\n\nHi,Apologies if this is the wrong list for this time of query (first time posting).I'm currently experimenting with hstore on Posgtres 9.4rc1. I've created a table with an hstore column, with and index on that column (tried both gin and btree indexes) and the explain plan says that the index is never used for the lookup and falls to a sequential scan every time (table has 1 000 000 rows). The query plans and execution time for btree index, gin index and unindexed are the same. Is there something I'm doing wrong or missing in order to get indexes to work on hstore columns?Details:0) Postgres version:barkerm=# select version(); version --------------------------------------------------------------------------------------------------------------- PostgreSQL 9.4rc1 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.8.3 20140911 (Red Hat 4.8.3-7), 64-bit(1 row)1) Created table with hstore column and btree index.barkerm=# \\d audit Table \"public.audit\" Column | Type | Modifiers ---------------+-----------------------------+---------------------------------------------------- id | integer | not null default nextval('audit_id_seq'::regclass) principal_id | integer | created_at | timestamp without time zone | root | character varying(255) | template_code | character(3) | attributes | hstore | args | character varying(255)[] | Indexes: \"audit_pkey\" PRIMARY KEY, btree (id) \"audit_attributes_idx\" btree (attributes)2) Insert 1 000 000 rowsbarkerm=# select count(*) from audit; count --------- 1000000(1 row)3) Run analyse.4) Pick a row somewhere in the middle:barkerm=# select id, attributes from audit where id = 500000; id | attributes --------+--------------------------------------------------------- 500000 | \"accountId\"=>\"1879355460\", \"instrumentId\"=>\"1625557725\"(1 row)5) Explain query using the attributes column in the where clause (uses Seq Scan).barkerm=# explain analyse select * from audit where attributes->'accountId' = '1879355460'; QUERY PLAN ------------------------------------------------------------------------------------------------------------ Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual time=114.314..218.821 rows=1 loops=1) Filter: ((attributes -> 'accountId'::text) = '1879355460'::text) Rows Removed by Filter: 999999 Planning time: 0.074 ms Execution time: 218.843 ms(5 rows)6) Rebuild the data using a gin index.barkerm=# \\d audit Table \"public.audit\" Column | Type | Modifiers 
---------------+-----------------------------+---------------------------------------------------- id | integer | not null default nextval('audit_id_seq'::regclass) principal_id | integer | created_at | timestamp without time zone | root | character varying(255) | template_code | character(3) | attributes | hstore | args | character varying(255)[] | Indexes: \"audit_pkey\" PRIMARY KEY, btree (id) \"audit_attributes_idx\" gin (attributes)7) Again explain the selection of a single row using a constraint that references the hstore column. Seq Scan is still used.barkerm=# explain analyse select * from audit where attributes->'accountId' = '1238334838'; QUERY PLAN ------------------------------------------------------------------------------------------------------------ Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual time=122.173..226.363 rows=1 loops=1) Filter: ((attributes -> 'accountId'::text) = '1238334838'::text) Rows Removed by Filter: 999999 Planning time: 0.164 ms Execution time: 226.392 ms(5 rows)8) Drop index an query as a baseline.barkerm=# explain analyse select * from audit where attributes->'accountId' = '1238334838'; QUERY PLAN ------------------------------------------------------------------------------------------------------------ Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual time=109.115..212.666 rows=1 loops=1) Filter: ((attributes -> 'accountId'::text) = '1238334838'::text) Rows Removed by Filter: 999999 Planning time: 0.113 ms Execution time: 212.701 ms(5 rows)Regards,Michael Barker.",
"msg_date": "Fri, 5 Dec 2014 09:42:20 +1300",
"msg_from": "Michael Barker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query doesn't use index on hstore column"
},
{
"msg_contents": "On Fri, Dec 05, 2014 at 09:42:20AM +1300, Michael Barker wrote:\n> 1) Created table with hstore column and btree index.\n> \n> barkerm=# \\d audit\n> Table \"public.audit\"\n> Column | Type |\n> Modifiers\n> ---------------+-----------------------------+----------------------------------------------------\n> id | integer | not null default\n> nextval('audit_id_seq'::regclass)\n> principal_id | integer |\n> created_at | timestamp without time zone |\n> root | character varying(255) |\n> template_code | character(3) |\n> attributes | hstore |\n> args | character varying(255)[] |\n> Indexes:\n> \"audit_pkey\" PRIMARY KEY, btree (id)\n> \"audit_attributes_idx\" btree (attributes)\n> \n> ...\n> 5) Explain query using the attributes column in the where clause (uses Seq\n> Scan).\n> \n> barkerm=# explain analyse select * from audit where attributes->'accountId'\n> = '1879355460';\n> QUERY PLAN\n> \n> ------------------------------------------------------------------------------------------------------------\n> Seq Scan on audit (cost=0.00..35409.00 rows=5000 width=133) (actual\n> time=114.314..218.821 rows=1 loops=1)\n> Filter: ((attributes -> 'accountId'::text) = '1879355460'::text)\n> Rows Removed by Filter: 999999\n> Planning time: 0.074 ms\n> Execution time: 218.843 ms\n> (5 rows)\n> \nHi Michael,\n\nI think your index definitions need to be on the particular attribute from\nattributes and not attributes itself. That works but it does not apply to\nthe query you show above. I think that the binary json type in 9.4 will\ndo what you want. I have not worked with it myself, just looked at the docs.\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 4 Dec 2014 15:46:25 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query doesn't use index on hstore column"
},
{
"msg_contents": "Michael Barker <[email protected]> writes:\n> I'm currently experimenting with hstore on Posgtres 9.4rc1. I've created a\n> table with an hstore column, with and index on that column (tried both gin\n> and btree indexes) and the explain plan says that the index is never used\n> for the lookup and falls to a sequential scan every time (table has 1 000\n> 000 rows). The query plans and execution time for btree index, gin index\n> and unindexed are the same. Is there something I'm doing wrong or missing\n> in order to get indexes to work on hstore columns?\n\nWell, first off, a btree index is fairly useless for this query,\nbecause btree has no concept that the hstore has any sub-structure.\nA GIN index or GIST index could work though. Secondly, you have to\nremember that indexable WHERE conditions in Postgres are *always* of\nthe form \"WHERE indexed_column indexable_operator some_comparison_value\".\nSo the trick is to recast the condition you have into something that\nlooks like that. Instead of\n\n\tWHERE attributes->'accountId' = '1879355460'\n\nyou could do\n\n\tWHERE attributes @> 'accountId=>1879355460'\n\n(@> being the hstore containment operator, ie \"does attributes contain\na pair that looks like this?\") or equivalently but possibly easier to\ngenerate,\n\n\tWHERE attributes @> hstore('accountId', '1879355460')\n\nAnother possibility if you're only concerned about indexing searches\nfor one or a few specific keys is to use expression indexes:\n\n\tCREATE INDEX ON audit ((attributes->'accountId'));\n\nwhereupon your original query works, since the left-hand side of\nthe '=' operator is now the indexed expression. (Here, since you\nare testing plain equality on the indexed value, a btree works fine.)\n\nYou might care to read\nhttp://www.postgresql.org/docs/9.4/static/indexes.html\nto get a better handle on what Postgres indexes can and can't do.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 04 Dec 2014 20:32:50 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query doesn't use index on hstore column"
},
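A quick way to confirm the rewritten predicate actually reaches an index, assuming the gin variant of audit_attributes_idx from the original post is in place:

EXPLAIN ANALYSE
SELECT * FROM audit
 WHERE attributes @> hstore('accountId', '1879355460');
-- expect a Bitmap Index Scan on the gin index in place of the Seq Scan
-- seen earlier in the thread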
{
"msg_contents": ">\n> Well, first off, a btree index is fairly useless for this query,\n> because btree has no concept that the hstore has any sub-structure.\n> A GIN index or GIST index could work though. Secondly, you have to\n> remember that indexable WHERE conditions in Postgres are *always* of\n> the form \"WHERE indexed_column indexable_operator some_comparison_value\".\n>\n\nAnd the student was enlightened....\n\nCheers, seeing sensible explain plans now.\n\n\n> You might care to read\n> http://www.postgresql.org/docs/9.4/static/indexes.html\n> to get a better handle on what Postgres indexes can and can't do.\n>\n\nWill do, thanks again.\n\nMike.\n\nWell, first off, a btree index is fairly useless for this query,\nbecause btree has no concept that the hstore has any sub-structure.\nA GIN index or GIST index could work though. Secondly, you have to\nremember that indexable WHERE conditions in Postgres are *always* of\nthe form \"WHERE indexed_column indexable_operator some_comparison_value\".And the student was enlightened....Cheers, seeing sensible explain plans now. \nYou might care to read\nhttp://www.postgresql.org/docs/9.4/static/indexes.html\nto get a better handle on what Postgres indexes can and can't do.Will do, thanks again.Mike.",
"msg_date": "Sat, 6 Dec 2014 13:05:02 +1300",
"msg_from": "Michael Barker <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query doesn't use index on hstore column"
}
] |
[
{
"msg_contents": "I was doing some performance profiling regarding querying against jsonb \ncolumns and found something I can't explain.\nI created json version and standard column versions of some data, and \nindexed the json 'fields' and the normal columns and executed equivalent \nqueries against both.\nI find that the json version is quite a bit (approx 3x) slower which I \ncan't explain as both should (and are according to plans are) working \nagainst what I would expect are equivalent indexes.\n\nCan anyone explain this?\n\nExample code is here:\n\n\ncreate table json_test (\nid SERIAL,\nassay1_ic50 FLOAT,\nassay2_ic50 FLOAT,\ndata JSONB\n);\n\nDO\n$do$\nDECLARE\nval1 FLOAT;\nval2 FLOAT;\nBEGIN\nfor i in 1..10000000 LOOP\nval1 = random() * 100;\nval2 = random() * 100;\nINSERT INTO json_test (assay1_ic50, assay2_ic50, data) VALUES\n (val1, val2, ('{\"assay1_ic50\": ' || val1 || ', \"assay2_ic50\": ' || \nval2 || ', \"mod\": \"=\"}')::jsonb);\nend LOOP;\nEND\n$do$\n\ncreate index idx_data_json_assay1_ic50 on json_test (((data ->> \n'assay1_ic50')::float));\ncreate index idx_data_json_assay2_ic50 on json_test (((data ->> \n'assay2_ic50')::float));\n\ncreate index idx_data_col_assay1_ic50 on json_test (assay1_ic50);\ncreate index idx_data_col_assay2_ic50 on json_test (assay2_ic50);\n\nselect count(*) from json_test;\nselect * from json_test limit 10;\n\nselect count(*) from json_test where (data->>'assay1_ic50')::float > 90 \nand (data->>'assay2_ic50')::float < 10;\nselect count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n\n\n\nThanks\nTim\n\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 19:59:38 -0300",
"msg_from": "Tim Dudgeon <[email protected]>",
"msg_from_op": true,
"msg_subject": "querying with index on jsonb slower than standard column. Why?"
},
{
"msg_contents": "On 12/07/2014 02:59 PM, Tim Dudgeon wrote:\n> I was doing some performance profiling regarding querying against jsonb\n> columns and found something I can't explain.\n> I created json version and standard column versions of some data, and\n> indexed the json 'fields' and the normal columns and executed equivalent\n> queries against both.\n> I find that the json version is quite a bit (approx 3x) slower which I\n> can't explain as both should (and are according to plans are) working\n> against what I would expect are equivalent indexes.\n>\n> Can anyone explain this?\n\nThe docs can:\n\nhttp://www.postgresql.org/docs/9.4/interactive/datatype-json.html#JSON-INDEXING\n\n>\n> Example code is here:\n>\n>\n> create table json_test (\n> id SERIAL,\n> assay1_ic50 FLOAT,\n> assay2_ic50 FLOAT,\n> data JSONB\n> );\n>\n> DO\n> $do$\n> DECLARE\n> val1 FLOAT;\n> val2 FLOAT;\n> BEGIN\n> for i in 1..10000000 LOOP\n> val1 = random() * 100;\n> val2 = random() * 100;\n> INSERT INTO json_test (assay1_ic50, assay2_ic50, data) VALUES\n> (val1, val2, ('{\"assay1_ic50\": ' || val1 || ', \"assay2_ic50\": ' ||\n> val2 || ', \"mod\": \"=\"}')::jsonb);\n> end LOOP;\n> END\n> $do$\n>\n> create index idx_data_json_assay1_ic50 on json_test (((data ->>\n> 'assay1_ic50')::float));\n> create index idx_data_json_assay2_ic50 on json_test (((data ->>\n> 'assay2_ic50')::float));\n>\n> create index idx_data_col_assay1_ic50 on json_test (assay1_ic50);\n> create index idx_data_col_assay2_ic50 on json_test (assay2_ic50);\n>\n> select count(*) from json_test;\n> select * from json_test limit 10;\n>\n> select count(*) from json_test where (data->>'assay1_ic50')::float > 90\n> and (data->>'assay2_ic50')::float < 10;\n> select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n>\n>\n>\n> Thanks\n> Tim\n>\n>\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 16:19:51 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
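The docs section Adrian cites draws a line between what a GIN index over the whole jsonb column can serve and what it cannot; a hedged sketch against the json_test table from the first message:

CREATE INDEX json_test_data_gin ON json_test USING gin (data);
EXPLAIN SELECT count(*) FROM json_test WHERE data @> '{"mod": "="}';
-- containment (@>) is indexable by the default jsonb GIN opclass
EXPLAIN SELECT count(*) FROM json_test
 WHERE (data->>'assay1_ic50')::float > 90;
-- a range test on an extracted value is not, which is exactly what the
-- btree expression indexes in the original post exist for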
{
"msg_contents": "\nOn 07/12/2014 21:19, Adrian Klaver wrote:\n> On 12/07/2014 02:59 PM, Tim Dudgeon wrote:\n>> I was doing some performance profiling regarding querying against jsonb\n>> columns and found something I can't explain.\n>> I created json version and standard column versions of some data, and\n>> indexed the json 'fields' and the normal columns and executed equivalent\n>> queries against both.\n>> I find that the json version is quite a bit (approx 3x) slower which I\n>> can't explain as both should (and are according to plans are) working\n>> against what I would expect are equivalent indexes.\n>>\n>> Can anyone explain this?\n>\n> The docs can:\n>\n> http://www.postgresql.org/docs/9.4/interactive/datatype-json.html#JSON-INDEXING \n>\n\nIf so them I'm missing it.\nThe index created is not a gin index. Its a standard btree index on the \ndata extracted from the json. So the indexes on the standard columns and \nthe ones on the 'fields' extracted from the json seem to be equivalent. \nBut perform differently.\n\nTim\n>\n>>\n>> Example code is here:\n>>\n>>\n>> create table json_test (\n>> id SERIAL,\n>> assay1_ic50 FLOAT,\n>> assay2_ic50 FLOAT,\n>> data JSONB\n>> );\n>>\n>> DO\n>> $do$\n>> DECLARE\n>> val1 FLOAT;\n>> val2 FLOAT;\n>> BEGIN\n>> for i in 1..10000000 LOOP\n>> val1 = random() * 100;\n>> val2 = random() * 100;\n>> INSERT INTO json_test (assay1_ic50, assay2_ic50, data) VALUES\n>> (val1, val2, ('{\"assay1_ic50\": ' || val1 || ', \"assay2_ic50\": ' ||\n>> val2 || ', \"mod\": \"=\"}')::jsonb);\n>> end LOOP;\n>> END\n>> $do$\n>>\n>> create index idx_data_json_assay1_ic50 on json_test (((data ->>\n>> 'assay1_ic50')::float));\n>> create index idx_data_json_assay2_ic50 on json_test (((data ->>\n>> 'assay2_ic50')::float));\n>>\n>> create index idx_data_col_assay1_ic50 on json_test (assay1_ic50);\n>> create index idx_data_col_assay2_ic50 on json_test (assay2_ic50);\n>>\n>> select count(*) from json_test;\n>> select * from json_test limit 10;\n>>\n>> select count(*) from json_test where (data->>'assay1_ic50')::float > 90\n>> and (data->>'assay2_ic50')::float < 10;\n>> select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 \n>> < 10;\n>>\n>>\n>>\n>> Thanks\n>> Tim\n>>\n>>\n>>\n>\n>\n\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 21:43:35 -0300",
"msg_from": "Tim Dudgeon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "On 12/07/2014 04:43 PM, Tim Dudgeon wrote:\n>\n> On 07/12/2014 21:19, Adrian Klaver wrote:\n>> On 12/07/2014 02:59 PM, Tim Dudgeon wrote:\n>>> I was doing some performance profiling regarding querying against jsonb\n>>> columns and found something I can't explain.\n>>> I created json version and standard column versions of some data, and\n>>> indexed the json 'fields' and the normal columns and executed equivalent\n>>> queries against both.\n>>> I find that the json version is quite a bit (approx 3x) slower which I\n>>> can't explain as both should (and are according to plans are) working\n>>> against what I would expect are equivalent indexes.\n>>>\n>>> Can anyone explain this?\n>>\n>> The docs can:\n>>\n>> http://www.postgresql.org/docs/9.4/interactive/datatype-json.html#JSON-INDEXING\n>>\n>\n> If so them I'm missing it.\n> The index created is not a gin index. Its a standard btree index on the\n> data extracted from the json. So the indexes on the standard columns and\n> the ones on the 'fields' extracted from the json seem to be equivalent.\n> But perform differently.\n\nDown into the section there is this:\n\n\"jsonb also supports btree and hash indexes. These are usually useful \nonly if it's important to check equality of complete JSON documents. The \nbtree ordering for jsonb datums is seldom of great interest, but for \ncompleteness it is:\n\nObject > Array > Boolean > Number > String > Null\n\nObject with n pairs > object with n - 1 pairs\n\nArray with n elements > array with n - 1 elements\n\nObjects with equal numbers of pairs are compared in the order:\n\nkey-1, value-1, key-2 ...\n\nNote that object keys are compared in their storage order; in \nparticular, since shorter keys are stored before longer keys, this can \nlead to results that might be unintuitive, such as:\n\n{ \"aa\": 1, \"c\": 1} > {\"b\": 1, \"d\": 1}\n\nSimilarly, arrays with equal numbers of elements are compared in the order:\n\nelement-1, element-2 ...\n\nPrimitive JSON values are compared using the same comparison rules as \nfor the underlying PostgreSQL data type. 
Strings are compared using the \ndefault database collation.\n\"\n\nAs I understand it to get useful indexing into the jsonb datum(document) \nyou need to use the GIN indexes.\n\n>\n> Tim\n>>\n>>>\n>>> Example code is here:\n>>>\n>>>\n>>> create table json_test (\n>>> id SERIAL,\n>>> assay1_ic50 FLOAT,\n>>> assay2_ic50 FLOAT,\n>>> data JSONB\n>>> );\n>>>\n>>> DO\n>>> $do$\n>>> DECLARE\n>>> val1 FLOAT;\n>>> val2 FLOAT;\n>>> BEGIN\n>>> for i in 1..10000000 LOOP\n>>> val1 = random() * 100;\n>>> val2 = random() * 100;\n>>> INSERT INTO json_test (assay1_ic50, assay2_ic50, data) VALUES\n>>> (val1, val2, ('{\"assay1_ic50\": ' || val1 || ', \"assay2_ic50\": ' ||\n>>> val2 || ', \"mod\": \"=\"}')::jsonb);\n>>> end LOOP;\n>>> END\n>>> $do$\n>>>\n>>> create index idx_data_json_assay1_ic50 on json_test (((data ->>\n>>> 'assay1_ic50')::float));\n>>> create index idx_data_json_assay2_ic50 on json_test (((data ->>\n>>> 'assay2_ic50')::float));\n>>>\n>>> create index idx_data_col_assay1_ic50 on json_test (assay1_ic50);\n>>> create index idx_data_col_assay2_ic50 on json_test (assay2_ic50);\n>>>\n>>> select count(*) from json_test;\n>>> select * from json_test limit 10;\n>>>\n>>> select count(*) from json_test where (data->>'assay1_ic50')::float > 90\n>>> and (data->>'assay2_ic50')::float < 10;\n>>> select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50\n>>> < 10;\n>>>\n>>>\n>>>\n>>> Thanks\n>>> Tim\n>>>\n>>>\n>>>\n>>\n>>\n>\n>\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 16:53:51 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
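The storage-order comparison rule quoted from the docs can be checked directly:

SELECT '{ "aa": 1, "c": 1}'::jsonb > '{"b": 1, "d": 1}'::jsonb;
-- returns true, matching the docs example quoted above: shorter keys
-- are stored (and therefore compared) first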
{
"msg_contents": "\nOn 07/12/2014 21:53, Adrian Klaver wrote:\n> On 12/07/2014 04:43 PM, Tim Dudgeon wrote:\n>>\n>> On 07/12/2014 21:19, Adrian Klaver wrote:\n>>> On 12/07/2014 02:59 PM, Tim Dudgeon wrote:\n>>>> I was doing some performance profiling regarding querying against \n>>>> jsonb\n>>>> columns and found something I can't explain.\n>>>> I created json version and standard column versions of some data, and\n>>>> indexed the json 'fields' and the normal columns and executed \n>>>> equivalent\n>>>> queries against both.\n>>>> I find that the json version is quite a bit (approx 3x) slower which I\n>>>> can't explain as both should (and are according to plans are) working\n>>>> against what I would expect are equivalent indexes.\n>>>>\n>>>> Can anyone explain this?\n>>>\n>>> The docs can:\n>>>\n>>> http://www.postgresql.org/docs/9.4/interactive/datatype-json.html#JSON-INDEXING \n>>>\n>>>\n>>\n>> If so them I'm missing it.\n>> The index created is not a gin index. Its a standard btree index on the\n>> data extracted from the json. So the indexes on the standard columns and\n>> the ones on the 'fields' extracted from the json seem to be equivalent.\n>> But perform differently.\n>\n> Down into the section there is this:\n>\n> \"jsonb also supports btree and hash indexes. These are usually useful \n> only if it's important to check equality of complete JSON documents. \n> The btree ordering for jsonb datums is seldom of great interest, but \n> for completeness it is:\n>\n> Object > Array > Boolean > Number > String > Null\n>\n> Object with n pairs > object with n - 1 pairs\n>\n> Array with n elements > array with n - 1 elements\n>\n> Objects with equal numbers of pairs are compared in the order:\n>\n> key-1, value-1, key-2 ...\n>\n> Note that object keys are compared in their storage order; in \n> particular, since shorter keys are stored before longer keys, this can \n> lead to results that might be unintuitive, such as:\n>\n> { \"aa\": 1, \"c\": 1} > {\"b\": 1, \"d\": 1}\n>\n> Similarly, arrays with equal numbers of elements are compared in the \n> order:\n>\n> element-1, element-2 ...\n>\n> Primitive JSON values are compared using the same comparison rules as \n> for the underlying PostgreSQL data type. 
Strings are compared using \n> the default database collation.\n> \"\n>\n> As I understand it to get useful indexing into the jsonb \n> datum(document) you need to use the GIN indexes.\n\nYes, but if my understanding is correct I'm not indexing the JSON, I'm \nindexing the PostgreSQL float type extracted from a field of the JSON, \nand indexing using a btree index:\n\ncreate index idx_data_json_assay2_ic50 on json_test (((data ->>\n'assay2_ic50')::float));\n\nThe data ->> 'assay2_ic50' bit extracts the value from the JSON as text, \nthe ::float bit casts to a float, and the index is built on the \nresulting float type.\n\nAnd the index is being used, and is reasonably fast, just not as fast as \nthe equivalent index on the 'normal' float column.\n\nTim\n>\n>>\n>> Tim\n>>>\n>>>>\n>>>> Example code is here:\n>>>>\n>>>>\n>>>> create table json_test (\n>>>> id SERIAL,\n>>>> assay1_ic50 FLOAT,\n>>>> assay2_ic50 FLOAT,\n>>>> data JSONB\n>>>> );\n>>>>\n>>>> DO\n>>>> $do$\n>>>> DECLARE\n>>>> val1 FLOAT;\n>>>> val2 FLOAT;\n>>>> BEGIN\n>>>> for i in 1..10000000 LOOP\n>>>> val1 = random() * 100;\n>>>> val2 = random() * 100;\n>>>> INSERT INTO json_test (assay1_ic50, assay2_ic50, data) VALUES\n>>>> (val1, val2, ('{\"assay1_ic50\": ' || val1 || ', \"assay2_ic50\": \n>>>> ' ||\n>>>> val2 || ', \"mod\": \"=\"}')::jsonb);\n>>>> end LOOP;\n>>>> END\n>>>> $do$\n>>>>\n>>>> create index idx_data_json_assay1_ic50 on json_test (((data ->>\n>>>> 'assay1_ic50')::float));\n>>>> create index idx_data_json_assay2_ic50 on json_test (((data ->>\n>>>> 'assay2_ic50')::float));\n>>>>\n>>>> create index idx_data_col_assay1_ic50 on json_test (assay1_ic50);\n>>>> create index idx_data_col_assay2_ic50 on json_test (assay2_ic50);\n>>>>\n>>>> select count(*) from json_test;\n>>>> select * from json_test limit 10;\n>>>>\n>>>> select count(*) from json_test where (data->>'assay1_ic50')::float \n>>>> > 90\n>>>> and (data->>'assay2_ic50')::float < 10;\n>>>> select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50\n>>>> < 10;\n>>>>\n>>>>\n>>>>\n>>>> Thanks\n>>>> Tim\n>>>>\n>>>>\n>>>>\n>>>\n>>>\n>>\n>>\n>>\n>\n>\n\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 22:05:39 -0300",
"msg_from": "Tim Dudgeon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
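One subtlety with the expression-index approach: the query has to spell the expression exactly as it appears in the index definition, cast included. A hedged sketch:

EXPLAIN SELECT count(*) FROM json_test
 WHERE (data->>'assay1_ic50')::float > 90;
-- matches idx_data_json_assay1_ic50, so the btree expression index is usable
EXPLAIN SELECT count(*) FROM json_test
 WHERE (data->>'assay1_ic50')::numeric > 90;
-- a different cast no longer matches the indexed expression, and the
-- planner falls back to a sequential scan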
{
"msg_contents": "On 12/07/2014 05:05 PM, Tim Dudgeon wrote:\n>\n> On 07/12/2014 21:53, Adrian Klaver wrote:\n>> On 12/07/2014 04:43 PM, Tim Dudgeon wrote:\n>>>\n>>> On 07/12/2014 21:19, Adrian Klaver wrote:\n>>>> On 12/07/2014 02:59 PM, Tim Dudgeon wrote:\n>>>>> I was doing some performance profiling regarding querying against\n>>>>> jsonb\n>>>>> columns and found something I can't explain.\n>>>>> I created json version and standard column versions of some data, and\n>>>>> indexed the json 'fields' and the normal columns and executed\n>>>>> equivalent\n>>>>> queries against both.\n>>>>> I find that the json version is quite a bit (approx 3x) slower which I\n>>>>> can't explain as both should (and are according to plans are) working\n>>>>> against what I would expect are equivalent indexes.\n>>>>>\n>>>>> Can anyone explain this?\n>>>>\n>>>> The docs can:\n>>>>\n>>>> http://www.postgresql.org/docs/9.4/interactive/datatype-json.html#JSON-INDEXING\n>>>>\n>>>>\n>>>\n>>> If so them I'm missing it.\n>>> The index created is not a gin index. Its a standard btree index on the\n>>> data extracted from the json. So the indexes on the standard columns and\n>>> the ones on the 'fields' extracted from the json seem to be equivalent.\n>>> But perform differently.\n>>\n>> Down into the section there is this:\n>>\n>> \"jsonb also supports btree and hash indexes. These are usually useful\n>> only if it's important to check equality of complete JSON documents.\n>> The btree ordering for jsonb datums is seldom of great interest, but\n>> for completeness it is:\n>>\n>> Object > Array > Boolean > Number > String > Null\n>>\n>> Object with n pairs > object with n - 1 pairs\n>>\n>> Array with n elements > array with n - 1 elements\n>>\n>> Objects with equal numbers of pairs are compared in the order:\n>>\n>> key-1, value-1, key-2 ...\n>>\n>> Note that object keys are compared in their storage order; in\n>> particular, since shorter keys are stored before longer keys, this can\n>> lead to results that might be unintuitive, such as:\n>>\n>> { \"aa\": 1, \"c\": 1} > {\"b\": 1, \"d\": 1}\n>>\n>> Similarly, arrays with equal numbers of elements are compared in the\n>> order:\n>>\n>> element-1, element-2 ...\n>>\n>> Primitive JSON values are compared using the same comparison rules as\n>> for the underlying PostgreSQL data type. Strings are compared using\n>> the default database collation.\n>> \"\n>>\n>> As I understand it to get useful indexing into the jsonb\n>> datum(document) you need to use the GIN indexes.\n>\n> Yes, but if my understanding is correct I'm not indexing the JSON, I'm\n> indexing the PostgreSQL float type extracted from a field of the JSON,\n> and indexing using a btree index:\n>\n> create index idx_data_json_assay2_ic50 on json_test (((data ->>\n> 'assay2_ic50')::float));\n>\n> The data ->> 'assay2_ic50' bit extracts the value from the JSON as text,\n\nWhich is where I would say your slow down happens. I have not spent a \nlot of time jsonb as I have been waiting on the dust to settle from the \nrecent big changes, so my empirical evidence is lacking.\n\n> the ::float bit casts to a float, and the index is built on the\n> resulting float type.\n>\n> And the index is being used, and is reasonably fast, just not as fast as\n> the equivalent index on the 'normal' float column.\n>\n> Tim\n>>\n>>>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 17:15:17 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
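For readers following along, the schema under discussion can be reconstructed from the fragments quoted in this thread. A minimal sketch, assuming a layout like the OP's: the table name json_test, the jsonb column data, and the index names all come from the quoted statements and plans, while the plain-column types are inferred from the float8 casts in the plans.

-- one jsonb document column plus equivalent plain float8 columns
CREATE TABLE json_test (
    assay1_ic50 float8,
    assay2_ic50 float8,
    data        jsonb
);

-- btree expression indexes on values extracted from the jsonb ...
CREATE INDEX idx_data_json_assay1_ic50 ON json_test (((data ->> 'assay1_ic50')::float));
CREATE INDEX idx_data_json_assay2_ic50 ON json_test (((data ->> 'assay2_ic50')::float));

-- ... and ordinary btree indexes on the plain columns
CREATE INDEX idx_data_col_assay1_ic50 ON json_test (assay1_ic50);
CREATE INDEX idx_data_col_assay2_ic50 ON json_test (assay2_ic50);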
{
"msg_contents": "Tim Dudgeon <[email protected]> writes:\n> The index created is not a gin index. Its a standard btree index on the \n> data extracted from the json. So the indexes on the standard columns and \n> the ones on the 'fields' extracted from the json seem to be equivalent. \n> But perform differently.\n\nI don't see any particular difference ...\n\nregression=# explain analyze select count(*) from json_test where (data->>'assay1_ic50')::float > 90 \nand (data->>'assay2_ic50')::float < 10;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=341613.79..341613.80 rows=1 width=0) (actual time=901.207..901.208 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=123684.69..338836.02 rows=1111111 width=0) (actual time=497.982..887.128 rows=100690 loops=1)\n Recheck Cond: ((((data ->> 'assay2_ic50'::text))::double precision < 10::double precision) AND (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision))\n Heap Blocks: exact=77578\n -> BitmapAnd (cost=123684.69..123684.69 rows=1111111 width=0) (actual time=476.585..476.585 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_json_assay2_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=219.287..219.287 rows=999795 loops=1)\n Index Cond: (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision)\n -> Bitmap Index Scan on idx_data_json_assay1_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=208.197..208.197 rows=1000231 loops=1)\n Index Cond: (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision)\n Planning time: 0.128 ms\n Execution time: 904.196 ms\n(11 rows)\n\nregression=# explain analyze select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=197251.24..197251.25 rows=1 width=0) (actual time=895.238..895.238 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36847.25..197003.24 rows=99197 width=0) (actual time=495.427..881.033 rows=100690 loops=1)\n Recheck Cond: ((assay2_ic50 < 10::double precision) AND (assay1_ic50 > 90::double precision))\n Heap Blocks: exact=77578\n -> BitmapAnd (cost=36847.25..36847.25 rows=99197 width=0) (actual time=474.201..474.201 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_col_assay2_ic50 (cost=0.00..18203.19 rows=985434 width=0) (actual time=219.060..219.060 rows=999795 loops=1)\n Index Cond: (assay2_ic50 < 10::double precision)\n -> Bitmap Index Scan on idx_data_col_assay1_ic50 (cost=0.00..18594.21 rows=1006637 width=0) (actual time=206.066..206.066 rows=1000231 loops=1)\n Index Cond: (assay1_ic50 > 90::double precision)\n Planning time: 0.129 ms\n Execution time: 898.237 ms\n(11 rows)\n\nregression=# \\timing\nTiming is on.\nregression=# select count(*) from json_test where (data->>'assay1_ic50')::float > 90 \nand (data->>'assay2_ic50')::float < 10;\n count \n--------\n 100690\n(1 row)\n\nTime: 882.607 ms\nregression=# select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n count \n--------\n 100690\n(1 row)\n\nTime: 881.071 ms\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Sun, 07 Dec 2014 20:28:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column. Why?"
},
{
"msg_contents": "On 12/07/2014 05:28 PM, Tom Lane wrote:\n> Tim Dudgeon <[email protected]> writes:\n>> The index created is not a gin index. Its a standard btree index on the\n>> data extracted from the json. So the indexes on the standard columns and\n>> the ones on the 'fields' extracted from the json seem to be equivalent.\n>> But perform differently.\n> \n> I don't see any particular difference ...\n> \n> regression=# explain analyze select count(*) from json_test where (data->>'assay1_ic50')::float > 90\n> and (data->>'assay2_ic50')::float < 10;\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=341613.79..341613.80 rows=1 width=0) (actual time=901.207..901.208 rows=1 loops=1)\n> -> Bitmap Heap Scan on json_test (cost=123684.69..338836.02 rows=1111111 width=0) (actual time=497.982..887.128 rows=100690 loops=1)\n> Recheck Cond: ((((data ->> 'assay2_ic50'::text))::double precision < 10::double precision) AND (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision))\n> Heap Blocks: exact=77578\n> -> BitmapAnd (cost=123684.69..123684.69 rows=1111111 width=0) (actual time=476.585..476.585 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_data_json_assay2_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=219.287..219.287 rows=999795 loops=1)\n> Index Cond: (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision)\n> -> Bitmap Index Scan on idx_data_json_assay1_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=208.197..208.197 rows=1000231 loops=1)\n> Index Cond: (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision)\n> Planning time: 0.128 ms\n> Execution time: 904.196 ms\n> (11 rows)\n> \n> regression=# explain analyze select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Aggregate (cost=197251.24..197251.25 rows=1 width=0) (actual time=895.238..895.238 rows=1 loops=1)\n> -> Bitmap Heap Scan on json_test (cost=36847.25..197003.24 rows=99197 width=0) (actual time=495.427..881.033 rows=100690 loops=1)\n> Recheck Cond: ((assay2_ic50 < 10::double precision) AND (assay1_ic50 > 90::double precision))\n> Heap Blocks: exact=77578\n> -> BitmapAnd (cost=36847.25..36847.25 rows=99197 width=0) (actual time=474.201..474.201 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_data_col_assay2_ic50 (cost=0.00..18203.19 rows=985434 width=0) (actual time=219.060..219.060 rows=999795 loops=1)\n> Index Cond: (assay2_ic50 < 10::double precision)\n> -> Bitmap Index Scan on idx_data_col_assay1_ic50 (cost=0.00..18594.21 rows=1006637 width=0) (actual time=206.066..206.066 rows=1000231 loops=1)\n> Index Cond: (assay1_ic50 > 90::double precision)\n> Planning time: 0.129 ms\n> Execution time: 898.237 ms\n> (11 rows)\n> \n> regression=# \\timing\n> Timing is on.\n> regression=# select count(*) from json_test where (data->>'assay1_ic50')::float > 90\n> and (data->>'assay2_ic50')::float < 10;\n> count\n> --------\n> 100690\n> (1 row)\n> \n> Time: 882.607 ms\n> regression=# select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n> count\n> --------\n> 100690\n> (1 row)\n> \n> Time: 881.071 ms\n> \n> \t\t\tregards, tom lane\n> \n> \n\nRunning the above on my machine 
I do see the slow down the OP reports. I ran it several times \nand it stayed around 3.5x. It might be interesting to get the OS and architecture information \nfrom the OP.\n\ntest=# select version(); \n version \n------------------------------------------------------------------------------------------------------------------------------ \n PostgreSQL 9.4rc1 on i686-pc-linux-gnu, compiled by gcc (SUSE Linux) 4.8.1 20130909 [gcc-4_8-branch revision 202388], 32-bit \n(1 row) \n\n\ntest=# \\timing \nTiming is on.\ntest=# select count(*) from json_test where (data->>'assay1_ic50')::float > 90 \ntest-# and (data->>'assay2_ic50')::float < 10;\n count \n-------\n 99288\n(1 row)\n\nTime: 9092.966 ms\n\n\ntest=# select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n count \n-------\n 99288\n(1 row)\n \nTime: 2542.294 ms \n\n\nexplain analyze select count(*) from json_test where (data->>'assay1_ic50')::float > 90 \nand (data->>'assay2_ic50')::float < 10;\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=332209.79..332209.80 rows=1 width=0) (actual time=8980.009..8980.009 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=123684.69..329432.02 rows=1111111 width=0) (actual time=538.688..8960.308 rows=99288 loops=1)\n Recheck Cond: ((((data ->> 'assay2_ic50'::text))::double precision < 10::double precision) AND (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision))\n Rows Removed by Index Recheck: 7588045\n Heap Blocks: exact=20894 lossy=131886\n -> BitmapAnd (cost=123684.69..123684.69 rows=1111111 width=0) (actual time=531.066..531.066 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_json_assay2_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=258.717..258.717 rows=998690 loops=1)\n Index Cond: (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision)\n -> Bitmap Index Scan on idx_data_json_assay1_ic50 (cost=0.00..61564.44 rows=3333333 width=0) (actual time=251.664..251.664 rows=997880 loops=1)\n Index Cond: (((data ->> 'assay1_ic50'::text))::double precision > 90::double precision)\n Planning time: 0.391 ms\n Execution time: 8980.391 ms\n(12 rows)\n\n\n\ntest=# explain analyze select count(*) from json_test where assay1_ic50 > 90 and assay2_ic50 < 10;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=196566.38..196566.39 rows=1 width=0) (actual time=2609.545..2609.545 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=37869.00..196304.39 rows=104796 width=0) (actual time=550.273..2590.093 rows=99288 loops=1)\n Recheck Cond: ((assay2_ic50 < 10::double precision) AND (assay1_ic50 > 90::double precision))\n Rows Removed by Index Recheck: 7588045\n Heap Blocks: exact=20894 lossy=131886\n -> BitmapAnd (cost=37869.00..37869.00 rows=104796 width=0) (actual time=542.666..542.666 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_col_assay2_ic50 (cost=0.00..18871.73 rows=1021773 width=0) (actual time=263.959..263.959 rows=998690 loops=1)\n Index Cond: (assay2_ic50 < 10::double precision)\n -> Bitmap Index Scan on idx_data_col_assay1_ic50 (cost=0.00..18944.62 rows=1025624 width=0) (actual time=257.912..257.912 rows=997880 loops=1)\n Index Cond: (assay1_ic50 > 90::double precision)\n Planning time: 0.834 ms\n 
Execution time: 2609.960 ms\n(12 rows)\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 07:31:21 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "Adrian Klaver <[email protected]> writes:\n> On 12/07/2014 05:28 PM, Tom Lane wrote:\n>> I don't see any particular difference ...\n\n> Running the above on my machine I do see the slow down the OP reports. I\n> ran it several times and it stayed around 3.5x.\n\nInteresting. A couple of points that might be worth checking:\n\n* I tried this on a 64-bit build, whereas you were evidently using 32-bit.\n\n* The EXPLAIN ANALYZE output shows that my bitmaps didn't go lossy,\nwhereas yours did. This is likely because I had cranked up work_mem to\nmake the index builds go faster.\n\nIt's not apparent to me why either of those things would have an effect\nlike this, but *something* weird is happening here.\n\n(Thinks for a bit...) A possible theory, seeing that the majority of the\nblocks are lossy in your runs, is that the reduction to lossy form is\nmaking worse choices about which blocks to make lossy in one case than in\nthe other. I don't remember exactly how those decisions are made.\n\nAnother thing that seems odd about your printout is the discrepancy\nin planning time ... the two cases have just about the same planning\ntime for me, but not for you.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 10:46:12 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column. Why?"
},
{
"msg_contents": "On 12/08/2014 07:46 AM, Tom Lane wrote:\n> Adrian Klaver <[email protected]> writes:\n>> On 12/07/2014 05:28 PM, Tom Lane wrote:\n>>> I don't see any particular difference ...\n>\n>> Running the above on my machine I do see the slow down the OP reports. I\n>> ran it several times and it stayed around 3.5x.\n>\n> Interesting. A couple of points that might be worth checking:\n>\n> * I tried this on a 64-bit build, whereas you were evidently using 32-bit.\n\nMy laptop is 64-bit, so when I get a chance I will setup the test there \nand run it to see what happens.\n\n>\n> * The EXPLAIN ANALYZE output shows that my bitmaps didn't go lossy,\n> whereas yours did. This is likely because I had cranked up work_mem to\n> make the index builds go faster.\n>\n> It's not apparent to me why either of those things would have an effect\n> like this, but *something* weird is happening here.\n>\n> (Thinks for a bit...) A possible theory, seeing that the majority of the\n> blocks are lossy in your runs, is that the reduction to lossy form is\n> making worse choices about which blocks to make lossy in one case than in\n> the other. I don't remember exactly how those decisions are made.\n>\n> Another thing that seems odd about your printout is the discrepancy\n> in planning time ... the two cases have just about the same planning\n> time for me, but not for you.\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 07:50:18 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "On 12/08/2014 07:50 AM, Adrian Klaver wrote:\n> On 12/08/2014 07:46 AM, Tom Lane wrote:\n>> Adrian Klaver <[email protected]> writes:\n>>> On 12/07/2014 05:28 PM, Tom Lane wrote:\n>>>> I don't see any particular difference ...\n>>\n>>> Running the above on my machine I do see the slow down the OP reports. I\n>>> ran it several times and it stayed around 3.5x.\n>>\n>> Interesting. A couple of points that might be worth checking:\n>>\n>> * I tried this on a 64-bit build, whereas you were evidently using\n>> 32-bit.\n>\n> My laptop is 64-bit, so when I get a chance I will setup the test there\n> and run it to see what happens.\n>\n>>\n\nSeems work_mem is the key:\n\npostgres@test=# select version();\n version \n\n-------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.4rc1 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE \nLinux) 4.8.1 20130909 [gcc-4_8-branch revision 202388], 64-bit\n(1 row)\n\nThe default:\n\npostgres@test=# show work_mem ;\n work_mem\n----------\n 4MB\n(1 row)\n\n\npostgres@test=# \\timing\nTiming is on.\npostgres@test=# explain analyze select count(*) from json_test where \n(data->>'assay1_ic50')::float > 90\ntest-# and (data->>'assay2_ic50')::float < 10;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=198713.45..198713.46 rows=1 width=0) (actual \ntime=8564.799..8564.799 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.42..198465.53 \nrows=99168 width=0) (actual time=1043.226..8550.183 rows=99781 loops=1)\n Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double \nprecision > 90::double precision) AND (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision))\n Rows Removed by Index Recheck: 7236280\n Heap Blocks: exact=30252 lossy=131908\n -> BitmapAnd (cost=36841.42..36841.42 rows=99168 width=0) \n(actual time=1034.738..1034.738 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_json_assay1_ic50 \n(cost=0.00..18157.96 rows=983136 width=0) (actual time=513.878..513.878 \nrows=1001237 loops=1)\n Index Cond: (((data ->> \n'assay1_ic50'::text))::double precision > 90::double precision)\n -> Bitmap Index Scan on idx_data_json_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=502.396..502.396 \nrows=1000930 loops=1)\n Index Cond: (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision)\n Planning time: 121.962 ms\n Execution time: 8565.609 ms\n(12 rows)\n\nTime: 9110.408 ms\npostgres@test=# explain analyze select count(*) from json_test where \nassay1_ic50 > 90 and assay2_ic50 < 10;\n \n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=197225.91..197225.92 rows=1 width=0) (actual \ntime=1848.769..1848.769 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.41..196977.99 \nrows=99168 width=0) (actual time=405.110..1839.299 rows=99781 loops=1)\n Recheck Cond: ((assay1_ic50 > 90::double precision) AND \n(assay2_ic50 < 10::double precision))\n Rows Removed by Index Recheck: 7236280\n Heap Blocks: exact=30252 lossy=131908\n -> BitmapAnd (cost=36841.41..36841.41 rows=99168 width=0) \n(actual time=397.138..397.138 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_col_assay1_ic50 
\n(cost=0.00..18157.96 rows=983136 width=0) (actual time=196.304..196.304 \nrows=1001237 loops=1)\n Index Cond: (assay1_ic50 > 90::double precision)\n -> Bitmap Index Scan on idx_data_col_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=182.845..182.845 \nrows=1000930 loops=1)\n Index Cond: (assay2_ic50 < 10::double precision)\n Planning time: 0.212 ms\n Execution time: 1848.814 ms\n(12 rows)\n\nTime: 1849.570 ms\n\n\n****************************************************************************\n\nSet work_mem up:\n\npostgres@test=# set work_mem='16MB';\nSET\nTime: 0.143 ms\npostgres@test=# show work_mem;\n work_mem\n----------\n 16MB\n(1 row)\n\npostgres@test=# explain analyze select count(*) from json_test where \n(data->>'assay1_ic50')::float > 90\nand (data->>'assay2_ic50')::float < 10;\n \n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=198713.45..198713.46 rows=1 width=0) (actual \ntime=861.413..861.413 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.42..198465.53 \nrows=99168 width=0) (actual time=588.969..852.720 rows=99781 loops=1)\n Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double \nprecision > 90::double precision) AND (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision))\n Heap Blocks: exact=77216\n -> BitmapAnd (cost=36841.42..36841.42 rows=99168 width=0) \n(actual time=564.927..564.927 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_json_assay1_ic50 \n(cost=0.00..18157.96 rows=983136 width=0) (actual time=265.318..265.318 \nrows=1001237 loops=1)\n Index Cond: (((data ->> \n'assay1_ic50'::text))::double precision > 90::double precision)\n -> Bitmap Index Scan on idx_data_json_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=256.225..256.225 \nrows=1000930 loops=1)\n Index Cond: (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision)\n Planning time: 0.126 ms\n Execution time: 861.453 ms\n(11 rows)\n\nTime: 861.965 ms\npostgres@test=# explain analyze select count(*) from json_test where \nassay1_ic50 > 90 and assay2_ic50 < 10;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=197225.91..197225.92 rows=1 width=0) (actual \ntime=848.410..848.410 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.41..196977.99 \nrows=99168 width=0) (actual time=578.360..839.659 rows=99781 loops=1)\n Recheck Cond: ((assay1_ic50 > 90::double precision) AND \n(assay2_ic50 < 10::double precision))\n Heap Blocks: exact=77216\n -> BitmapAnd (cost=36841.41..36841.41 rows=99168 width=0) \n(actual time=554.387..554.387 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_col_assay1_ic50 \n(cost=0.00..18157.96 rows=983136 width=0) (actual time=263.961..263.961 \nrows=1001237 loops=1)\n Index Cond: (assay1_ic50 > 90::double precision)\n -> Bitmap Index Scan on idx_data_col_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=247.268..247.268 \nrows=1000930 loops=1)\n Index Cond: (assay2_ic50 < 10::double precision)\n Planning time: 0.128 ms\n Execution time: 848.453 ms\n(11 rows)\n\n\n*****************************************************************\n\nThen set it back:\n\npostgres@test=# set work_mem='4MB';\nSET\nTime: 0.213 ms\npostgres@test=# show 
work_mem ;\n work_mem\n----------\n 4MB\n(1 row)\n\n\npostgres@test=# explain analyze select count(*) from json_test where \n(data->>'assay1_ic50')::float > 90\nand (data->>'assay2_ic50')::float < 10;\n \n QUERY PLAN \n\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=198713.45..198713.46 rows=1 width=0) (actual \ntime=6607.650..6607.650 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.42..198465.53 \nrows=99168 width=0) (actual time=400.598..6594.442 rows=99781 loops=1)\n Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double \nprecision > 90::double precision) AND (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision))\n Rows Removed by Index Recheck: 7236280\n Heap Blocks: exact=30252 lossy=131908\n -> BitmapAnd (cost=36841.42..36841.42 rows=99168 width=0) \n(actual time=392.622..392.622 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_json_assay1_ic50 \n(cost=0.00..18157.96 rows=983136 width=0) (actual time=191.598..191.598 \nrows=1001237 loops=1)\n Index Cond: (((data ->> \n'assay1_ic50'::text))::double precision > 90::double precision)\n -> Bitmap Index Scan on idx_data_json_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=183.107..183.107 \nrows=1000930 loops=1)\n Index Cond: (((data ->> \n'assay2_ic50'::text))::double precision < 10::double precision)\n Planning time: 0.126 ms\n Execution time: 6607.692 ms\n(12 rows)\n\nTime: 6608.197 ms\npostgres@test=# explain analyze select count(*) from json_test where \nassay1_ic50 > 90 and assay2_ic50 < 10;\n \n QUERY PLAN \n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=197225.91..197225.92 rows=1 width=0) (actual \ntime=1836.383..1836.383 rows=1 loops=1)\n -> Bitmap Heap Scan on json_test (cost=36841.41..196977.99 \nrows=99168 width=0) (actual time=396.414..1826.818 rows=99781 loops=1)\n Recheck Cond: ((assay1_ic50 > 90::double precision) AND \n(assay2_ic50 < 10::double precision))\n Rows Removed by Index Recheck: 7236280\n Heap Blocks: exact=30252 lossy=131908\n -> BitmapAnd (cost=36841.41..36841.41 rows=99168 width=0) \n(actual time=388.498..388.498 rows=0 loops=1)\n -> Bitmap Index Scan on idx_data_col_assay1_ic50 \n(cost=0.00..18157.96 rows=983136 width=0) (actual time=187.928..187.928 \nrows=1001237 loops=1)\n Index Cond: (assay1_ic50 > 90::double precision)\n -> Bitmap Index Scan on idx_data_col_assay2_ic50 \n(cost=0.00..18633.62 rows=1008691 width=0) (actual time=182.743..182.743 \nrows=1000930 loops=1)\n Index Cond: (assay2_ic50 < 10::double precision)\n Planning time: 0.109 ms\n Execution time: 1836.422 ms\n(12 rows)\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 08:44:52 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "Adrian Klaver <[email protected]> writes:\n> Seems work_mem is the key:\n\nFascinating. So there's some bad behavior in the lossy-bitmap stuff\nthat's exposed by one case but not the other. The set of heap rows we\nactually need to examine is presumably identical in both cases. The\nonly idea that comes to mind is that the order in which the TIDs get\ninserted into the bitmaps might be entirely different between the two\nindex types. We might have to write it off as bad luck, if the\nlossification algorithm doesn't have enough information to do better;\nbut it seems worth looking into.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 11:56:04 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column. Why?"
},
{
"msg_contents": "I wrote:\n> Adrian Klaver <[email protected]> writes:\n>> Seems work_mem is the key:\n\n> Fascinating. So there's some bad behavior in the lossy-bitmap stuff\n> that's exposed by one case but not the other.\n\nMeh. I was overthinking it. A bit of investigation with oprofile exposed\nthe true cause of the problem: whenever the bitmap goes lossy, we have to\nexecute the \"recheck\" condition for each tuple in the page(s) that the\nbitmap has a lossy reference to. So in the fast case we are talking about\n\nRecheck Cond: ((assay1_ic50 > 90::double precision) AND (assay2_ic50 < 10::double precision))\n\nwhich involves little except pulling the float8 values out of the tuple\nand executing float8gt and float8lt. In the slow case we have got\n\nRecheck Cond: ((((data ->> 'assay1_ic50'::text))::double precision > 90::double precision) AND (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision))\n\nwhich means we have to pull the JSONB value out of the tuple, search\nit to find the 'assay1_ic50' key, convert the associated value to text\n(which is not exactly cheap because *the value is stored as a numeric*),\nthen reparse that text string into a float8, after which we can use\nfloat8gt. And then probably do an equivalent amount of work on the way\nto making the other comparison.\n\nSo this says nothing much about the lossy-bitmap code, and a lot about\nhow the JSONB code isn't very well optimized yet. In particular, the\ndecision not to provide an operator that could extract a numeric field\nwithout conversion to text is looking pretty bad here.\n\nFor reference, the oprofile results down to the 1% level for\nthe jsonb query:\n\nsamples % symbol name\n7646 8.1187 get_str_from_var\n7055 7.4911 AllocSetAlloc\n4447 4.7219 AllocSetCheck\n4000 4.2473 BitmapHeapNext\n3945 4.1889 lengthCompareJsonbStringValue\n3713 3.9425 findJsonbValueFromContainer\n3637 3.8618 ExecMakeFunctionResultNoSets\n3624 3.8480 hash_search_with_hash_value\n3452 3.6654 cstring_to_text\n2993 3.1780 slot_deform_tuple\n2566 2.7246 jsonb_object_field_text\n2225 2.3625 palloc\n2176 2.3105 heap_tuple_untoast_attr\n1993 2.1162 AllocSetReset\n1926 2.0451 findJsonbValueFromContainerLen\n1846 1.9601 GetPrivateRefCountEntry\n1563 1.6596 float8gt\n1486 1.5779 float8in\n1477 1.5683 InputFunctionCall\n1365 1.4494 getJsonbOffset\n1137 1.2073 slot_getattr\n1083 1.1500 init_var_from_num\n1058 1.1234 ExecEvalConst\n1056 1.1213 float8_cmp_internal\n1053 1.1181 cstring_to_text_with_len\n1032 1.0958 text_to_cstring\n988 1.0491 ExecClearTuple\n969 1.0289 ResourceOwnerForgetBuffer\n\nand for the other:\n\nsamples % symbol name\n14010 12.1898 BitmapHeapNext\n13479 11.7278 hash_search_with_hash_value\n8201 7.1355 GetPrivateRefCountEntry\n7524 6.5465 slot_deform_tuple\n6091 5.2997 ExecMakeFunctionResultNoSets\n4459 3.8797 ExecClearTuple\n4456 3.8771 slot_getattr\n3876 3.3724 ExecStoreTuple\n3112 2.7077 ReleaseBuffer\n3086 2.6851 float8_cmp_internal\n2890 2.5145 ExecQual\n2794 2.4310 HeapTupleSatisfiesMVCC\n2737 2.3814 float8gt\n2130 1.8533 ExecEvalScalarVarFast\n2102 1.8289 IncrBufferRefCount\n2100 1.8272 ResourceOwnerForgetBuffer\n1896 1.6497 hash_any\n1752 1.5244 ResourceOwnerRememberBuffer\n1567 1.3634 DatumGetFloat8\n1543 1.3425 ExecEvalConst\n1486 1.2929 LWLockAcquire\n1454 1.2651 _bt_checkkeys\n1424 1.2390 check_stack_depth\n1374 1.1955 ResourceOwnerEnlargeBuffers\n1354 1.1781 pgstat_end_function_usage\n1164 1.0128 tbm_iterate\n1158 1.0076 CheckForSerializableConflictOut\n\nJust to add insult to injury, this is 
only counting cycles in postgres\nproper; it appears that in the jsonb case 30% of the overall runtime is\nspent in strtod() :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Dec 2014 15:53:09 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "On 12/08/2014 12:53 PM, Tom Lane wrote:\n> I wrote:\n>> Adrian Klaver <[email protected]> writes:\n>>> Seems work_mem is the key:\n> \n>> Fascinating. So there's some bad behavior in the lossy-bitmap stuff\n>> that's exposed by one case but not the other.\n> \n> Meh. I was overthinking it. A bit of investigation with oprofile exposed\n> the true cause of the problem: whenever the bitmap goes lossy, we have to\n> execute the \"recheck\" condition for each tuple in the page(s) that the\n> bitmap has a lossy reference to. So in the fast case we are talking about\n> \n> Recheck Cond: ((assay1_ic50 > 90::double precision) AND (assay2_ic50 < 10::double precision))\n> \n> which involves little except pulling the float8 values out of the tuple\n> and executing float8gt and float8lt. In the slow case we have got\n> \n> Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double precision > 90::double precision) AND (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision))\n> \n> which means we have to pull the JSONB value out of the tuple, search\n> it to find the 'assay1_ic50' key, convert the associated value to text\n> (which is not exactly cheap because *the value is stored as a numeric*),\n> then reparse that text string into a float8, after which we can use\n> float8gt. And then probably do an equivalent amount of work on the way\n> to making the other comparison.\n> \n> So this says nothing much about the lossy-bitmap code, and a lot about\n> how the JSONB code isn't very well optimized yet. In particular, the\n> decision not to provide an operator that could extract a numeric field\n> without conversion to text is looking pretty bad here.\n> \n\nI think I understand the above.\n\nI redid the test on my 32-bit machine, setting work_mem=16MB, and I got comparable results\nto what I saw on the 64-bit machine. So, what I am still am puzzled by is why work_mem seems \nto make the two paths equivalent in time?:\n\nFast case, assay1_ic50 > 90 and assay2_ic50 < 10:\n1183.997 ms\n\nSlow case, (data->>'assay1_ic50')::float > 90 and (data->>'assay2_ic50')::float < 10;:\n1190.187 ms\n\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 13:14:48 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "Adrian Klaver <[email protected]> writes:\n> I redid the test on my 32-bit machine, setting work_mem=16MB, and I got\n> comparable results to what I saw on the 64-bit machine. So, what I am\n> still am puzzled by is why work_mem seems to make the two paths\n> equivalent in time?:\n\nIf work_mem is large enough that we never have to go through\ntbm_lossify(), then the recheck condition will never be executed,\nso its speed doesn't matter.\n\n(So the near-term workaround for Tim is to raise work_mem when\nworking with tables of this size.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 08 Dec 2014 16:22:56 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] querying with index on jsonb slower than standard column.\n Why?"
},
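A minimal sketch of the near-term workaround Tom Lane describes, runnable against the json_test table sketched earlier. The 16MB value is the one Adrian Klaver found sufficient for this data set; size it to your own table.

-- raise work_mem for this session only, so the TID bitmap never goes
-- lossy and the expensive jsonb recheck condition is never executed
SET work_mem = '16MB';

SELECT count(*)
FROM json_test
WHERE (data ->> 'assay1_ic50')::float > 90
  AND (data ->> 'assay2_ic50')::float < 10;

RESET work_mem;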
{
"msg_contents": "On 12/08/2014 01:22 PM, Tom Lane wrote:\n> Adrian Klaver <[email protected]> writes:\n>> I redid the test on my 32-bit machine, setting work_mem=16MB, and I got\n>> comparable results to what I saw on the 64-bit machine. So, what I am\n>> still am puzzled by is why work_mem seems to make the two paths\n>> equivalent in time?:\n>\n> If work_mem is large enough that we never have to go through\n> tbm_lossify(), then the recheck condition will never be executed,\n> so its speed doesn't matter.\n\nAah, peeking into tidbitmap.c is enlightening. Thanks.\n\n>\n> (So the near-term workaround for Tim is to raise work_mem when\n> working with tables of this size.)\n>\n> \t\t\tregards, tom lane\n>\n>\n\n\n-- \nAdrian Klaver\[email protected]\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Mon, 08 Dec 2014 13:34:38 -0800",
"msg_from": "Adrian Klaver <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "On 08/12/2014 18:14, Adrian Klaver wrote:\n> Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double precision > 90::double precision) AND (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision))\n> >\n> >which means we have to pull the JSONB value out of the tuple, search\n> >it to find the 'assay1_ic50' key, convert the associated value to text\n> >(which is not exactly cheap because*the value is stored as a numeric*),\n> >then reparse that text string into a float8, after which we can use\n> >float8gt. And then probably do an equivalent amount of work on the way\n> >to making the other comparison.\n> >\n> >So this says nothing much about the lossy-bitmap code, and a lot about\n> >how the JSONB code isn't very well optimized yet. In particular, the\n> >decision not to provide an operator that could extract a numeric field\n> >without conversion to text is looking pretty bad here.\nYes, that bit seemed strange to me. As I understand the value is stored \ninternally as numeric, but the only way to access it is as text and then \ncast back to numeric.\nI *think* this is the only way to do it presently?\n\nTim\n\n\n\n\n\n\n On 08/12/2014 18:14, Adrian Klaver wrote:\n\nRecheck Cond: ((((data ->> 'assay1_ic50'::text))::double precision > 90::double precision) AND (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision))\n> \n> which means we have to pull the JSONB value out of the tuple, search\n> it to find the 'assay1_ic50' key, convert the associated value to text\n> (which is not exactly cheap because *the value is stored as a numeric*),\n> then reparse that text string into a float8, after which we can use\n> float8gt. And then probably do an equivalent amount of work on the way\n> to making the other comparison.\n> \n> So this says nothing much about the lossy-bitmap code, and a lot about\n> how the JSONB code isn't very well optimized yet. In particular, the\n> decision not to provide an operator that could extract a numeric field\n> without conversion to text is looking pretty bad here.\n\n Yes, that bit seemed strange to me. As I understand the value is\n stored internally as numeric, but the only way to access it is as\n text and then cast back to numeric.\n I *think* this is the only way to do it presently?\n\n Tim",
"msg_date": "Mon, 08 Dec 2014 18:39:13 -0300",
"msg_from": "Tim Dudgeon <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] querying with index on jsonb slower than standard column.\n Why?"
},
{
"msg_contents": "On 12/08/2014 01:39 PM, Tim Dudgeon wrote:\n> On 08/12/2014 18:14, Adrian Klaver wrote:\n>> Recheck Cond: ((((data ->> 'assay1_ic50'::text))::double precision > 90::double precision) AND (((data ->> 'assay2_ic50'::text))::double precision < 10::double precision))\n>> > \n>> > which means we have to pull the JSONB value out of the tuple, search\n>> > it to find the 'assay1_ic50' key, convert the associated value to text\n>> > (which is not exactly cheap because *the value is stored as a numeric*),\n>> > then reparse that text string into a float8, after which we can use\n>> > float8gt. And then probably do an equivalent amount of work on the way\n>> > to making the other comparison.\n>> > \n>> > So this says nothing much about the lossy-bitmap code, and a lot about\n>> > how the JSONB code isn't very well optimized yet. In particular, the\n>> > decision not to provide an operator that could extract a numeric field\n>> > without conversion to text is looking pretty bad here.\n> Yes, that bit seemed strange to me. As I understand the value is stored\n> internally as numeric, but the only way to access it is as text and then\n> cast back to numeric.\n> I *think* this is the only way to do it presently?\n\nYeah, I believe the core problem is that Postgres currently doesn't have\nany way to have variadic return times from a function which don't match\nvariadic input types. Returning a value as an actual numeric from JSONB\nwould require returning a numeric from a function whose input type is\ntext or json. So a known issue but one which would require a lot of\nreplumbing to fix.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 13:24:04 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than\n standard column. Why?"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> Yeah, I believe the core problem is that Postgres currently doesn't have\n> any way to have variadic return times from a function which don't match\n> variadic input types. Returning a value as an actual numeric from JSONB\n> would require returning a numeric from a function whose input type is\n> text or json. So a known issue but one which would require a lot of\n> replumbing to fix.\n\nWell, it'd be easy to fix if we were willing to invent distinct operators\ndepending on which type you wanted out (perhaps ->> for text output as\ntoday, add ->># for numeric output, etc). Doesn't seem terribly nice\nfrom a usability standpoint though.\n\nThe usability issue could be fixed by teaching the planner to fold a\nconstruct like (jsonb ->> 'foo')::numeric into (jsonb ->># 'foo').\nBut I'm not sure how we do that except in a really ugly and ad-hoc\nfashion.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 16:44:10 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than standard\n column. Why?"
},
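To make the idea concrete, usage could look like the sketch below. This is purely hypothetical: no ->># operator exists in any released PostgreSQL, and the spelling is just the placeholder Tom Lane proposes above.

-- today: extract the value as text, then reparse it into float8
SELECT count(*) FROM json_test WHERE (data ->> 'assay1_ic50')::float > 90;

-- proposed (hypothetical): extract the stored numeric directly,
-- skipping the numeric-to-text-to-float8 round trip
-- SELECT count(*) FROM json_test WHERE (data ->># 'assay1_ic50') > 90;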
{
"msg_contents": "On Fri, Dec 12, 2014 at 6:44 PM, Tom Lane <[email protected]> wrote:\n> The usability issue could be fixed by teaching the planner to fold a\n> construct like (jsonb ->> 'foo')::numeric into (jsonb ->># 'foo').\n> But I'm not sure how we do that except in a really ugly and ad-hoc\n> fashion.\n\nIt would be doable if you could have polymorphism on return type, and\nteach the planner to interpret (jsonb ->> 'foo')::numeric as the\noperator with a numeric return type.\n\nThat's a trickier business even, but it could be far more useful and\ngenerically helpful than ->>#.\n\nTricky part is what to do when the cast is missing.\n\n\n-- \nSent via pgsql-sql mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-sql\n",
"msg_date": "Fri, 12 Dec 2014 19:10:29 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] Re: querying with index on jsonb slower than\n standard column. Why?"
},
{
"msg_contents": "\nOn 12/12/2014 04:44 PM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> Yeah, I believe the core problem is that Postgres currently doesn't have\n>> any way to have variadic return times from a function which don't match\n>> variadic input types. Returning a value as an actual numeric from JSONB\n>> would require returning a numeric from a function whose input type is\n>> text or json. So a known issue but one which would require a lot of\n>> replumbing to fix.\n> Well, it'd be easy to fix if we were willing to invent distinct operators\n> depending on which type you wanted out (perhaps ->> for text output as\n> today, add ->># for numeric output, etc).\n\nThat was my immediate reaction. Not sure about the operator name. I'd \ntentatively suggest -># (taking an int or text argument) and #># taking \na text[] argument, both returning numeric, and erroring out if the value \nis a string, boolean, object or array.\n\n\n> Doesn't seem terribly nice\n> from a usability standpoint though.\n>\n> The usability issue could be fixed by teaching the planner to fold a\n> construct like (jsonb ->> 'foo')::numeric into (jsonb ->># 'foo').\n> But I'm not sure how we do that except in a really ugly and ad-hoc\n> fashion.\n>\n> \t\t\t\n\n\nI would be inclined to add the operator and see how cumbersome people \nfind it. I suspect in many cases it might be sufficient.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 18:27:31 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than\n standard column. Why?"
},
{
"msg_contents": "Andrew Dunstan <[email protected]> writes:\n> On 12/12/2014 04:44 PM, Tom Lane wrote:\n>> Well, it'd be easy to fix if we were willing to invent distinct operators\n>> depending on which type you wanted out (perhaps ->> for text output as\n>> today, add ->># for numeric output, etc).\n\n> That was my immediate reaction. Not sure about the operator name. I'd \n> tentatively suggest -># (taking an int or text argument) and #># taking \n> a text[] argument, both returning numeric, and erroring out if the value \n> is a string, boolean, object or array.\n\n>> The usability issue could be fixed by teaching the planner to fold a\n>> construct like (jsonb ->> 'foo')::numeric into (jsonb ->># 'foo').\n>> But I'm not sure how we do that except in a really ugly and ad-hoc\n>> fashion.\n\n> I would be inclined to add the operator and see how cumbersome people \n> find it. I suspect in many cases it might be sufficient.\n\nWe can't just add the operator and worry about usability later;\nif we're thinking we might want to introduce such an automatic\ntransformation, we have to be sure the new operator is defined in a\nway that allows the transformation to not change any semantics.\nWhat that means in this case is that if (jsonb ->> 'foo')::numeric\nwould have succeeded, (jsonb ->># 'foo') has to succeed; which means\nit'd better be willing to attempt conversion of string values to\nnumeric, not just throw an error on sight.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 20:20:37 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than standard\n column. Why?"
},
{
"msg_contents": "\nOn 12/12/2014 08:20 PM, Tom Lane wrote:\n> We can't just add the operator and worry about usability later;\n> if we're thinking we might want to introduce such an automatic\n> transformation, we have to be sure the new operator is defined in a\n> way that allows the transformation to not change any semantics.\n> What that means in this case is that if (jsonb ->> 'foo')::numeric\n> would have succeeded, (jsonb ->># 'foo') has to succeed; which means\n> it'd better be willing to attempt conversion of string values to\n> numeric, not just throw an error on sight.\n>\n> \t\t\t\n\nWell, I'm not 100% convinced about the magic transformation being a good \nthing.\n\nJson numbers are distinct from strings, and part of the justification \nfor this is to extract a numeric datum from jsonb exactly as stored, on \nperformance grounds. So turning round now and making that turn a string \ninto a number if possible seems to me to be going in the wrong direction.\n\ncheers\n\nandrew\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 22:05:20 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than\n standard column. Why?"
},
{
"msg_contents": "On Sat, Dec 13, 2014 at 12:05 AM, Andrew Dunstan <[email protected]> wrote:\n> On 12/12/2014 08:20 PM, Tom Lane wrote:\n>>\n>> We can't just add the operator and worry about usability later;\n>> if we're thinking we might want to introduce such an automatic\n>> transformation, we have to be sure the new operator is defined in a\n>> way that allows the transformation to not change any semantics.\n>> What that means in this case is that if (jsonb ->> 'foo')::numeric\n>> would have succeeded, (jsonb ->># 'foo') has to succeed; which means\n>> it'd better be willing to attempt conversion of string values to\n>> numeric, not just throw an error on sight.\n>>\n>>\n>\n>\n> Well, I'm not 100% convinced about the magic transformation being a good\n> thing.\n>\n> Json numbers are distinct from strings, and part of the justification for\n> this is to extract a numeric datum from jsonb exactly as stored, on\n> performance grounds. So turning round now and making that turn a string into\n> a number if possible seems to me to be going in the wrong direction.\n\nIt's still better than doing the conversion every time. The niceness\nof that implementation aside, I don't see how it can be considered the\nwrong direction.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 13 Dec 2014 02:38:41 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [SQL] querying with index on jsonb slower than\n standard column. Why?"
}
] |
[
{
"msg_contents": "Hi Team,\n\n\n\nWe are thinking of shifting our data warehouse solution from Netezza to\nPostgreSQL. I am reading a lot about PostgreSQL lately.\n\n\n\nCan you please let us know the minimum[overall performance should be good]\nhardware requirements for the below mentioned statistics. My question is\nactually divided into two parts.\n\n\n\n1. What hardware entities[RAM, Storage, Disk, RAID level etc. ]\nshould we keep in mind while finalizing the requirement? Sorry for my\nignorance here as I am totally new to this territory.\n\n2. What should be the plan of action for performance benchmarking?\n\n3. What should be the minimum hardware requirements for doing a POC\nand comparing performance benchmark with Netezza?\n\n\n\n*Parameter*\n\n*Numbers on 7th Dec 2014*\n\nTotal number of Users\n\n222\n\nTotal number of Application Users\n\n110\n\nIndividual Accounts\n\n112\n\nTotal number of Successful Queries on 7 December\n\n425124\n\nTotal number of Unsuccessful Queries\n\n2591\n\nMaximum number of Queries in an *Hour*\n\n79920\n\nMaximum number of Queries in a *minute* when load was maximum in an hour\n\n3143\n\nMaximum number of Queries in a *second* when load was maximum in an hour\n\n87\n\nNumber of Databases\n\n82\n\nUsed Space\n\n~1057 GB\n\nAllocated Space\n\n~4453 GB\n\n\n\n*What I actually want to achieve is right now is that running the same load\non Netezza and PostgreSQL [300 GB data, 20 concurrent queries, and 30k-40K\nqueries in an hour]. *\n\n\n\nI have asked for the following configuration right now.\n\n\n\n*Operating System* (Linux 64 bit)\n\n*RAM* : 8 GB\n\n*Storage*: 500 GB\n\n*CPU Cores*: Minimum 4\n\n*RAID*: Level 10\n\n*Disk Type*: SATA\n\n\n\nThese figures are for POC only.\n\n\n\nDoes that sound okay? Once again, my trivial questions could be irritating\nbut this is only a start.\n\n\n\nWarm Regards,\n\n\nVivekanand Joshi\n+919654227927\n\n\n\n[image: Zeta Interactive]\n\n185 Madison Ave. New York, NY 10016\n\nwww.zetainteractive.com",
"msg_date": "Wed, 10 Dec 2014 00:12:46 +0530",
"msg_from": "Vivekanand Joshi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Hardware Requirements"
},
{
"msg_contents": "On Tue, Dec 9, 2014 at 3:42 PM, Vivekanand Joshi <[email protected]\n> wrote:\n\n> What I actually want to achieve is right now is that running the same load\n> on Netezza and PostgreSQL [300 GB data, 20 concurrent queries, and 30k-40K\n> queries in an hour].\n\n\n\nYou will need to provide far more information about the type of queries\nyou're referring to.\n\nQuerying one single row against a PK isn't the same as performing complex\nanalytics on big chunks of historical data.\n\nOn Tue, Dec 9, 2014 at 3:42 PM, Vivekanand Joshi <[email protected]> wrote:What I actually want to achieve is right now is that running the same load on Netezza and PostgreSQL [300 GB data, 20 concurrent queries, and 30k-40K queries in an hour]. You will need to provide far more information about the type of queries you're referring to.Querying one single row against a PK isn't the same as performing complex analytics on big chunks of historical data.",
"msg_date": "Tue, 9 Dec 2014 15:55:47 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Hardware Requirements"
}
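One way to gather the workload information Claudio asks for is the pg_stat_statements contrib module. A sketch, assuming the extension is installed and listed in shared_preload_libraries:

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- the most expensive statements by cumulative time; the calls column
-- separates many cheap PK lookups from a few heavy analytic queries
SELECT calls,
       total_time / calls AS avg_ms,
       rows,
       query
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;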
] |
[
{
"msg_contents": "I have a beast of a Dell server with the following specifications:\n\n - 4x Xeon E5-4657LV2 (48 cores total)\n - 196GB RAM\n - 2x SCSI 900GB in RAID1 (for the OS)\n - 8x Intel S3500 SSD 240GB in RAID10\n - H710p RAID controller, 1GB cache\n\nCentos 6.6, RAID10 SSDs uses XFS (mkfs.xfs -i size=512 /dev/sdb).\n\nHere are some relevant postgresql.conf settings:\nshared_buffers = 8GB\nwork_mem = 64MB\nmaintenance_work_mem = 1GB\nsynchronous_commit = off\ncheckpoint_segments = 256\ncheckpoint_timeout = 10min\ncheckpoint_completion_target = 0.9\nseq_page_cost = 1.0\neffective_cache_size = 100GB\n\nI ran some \"fast\" pgbench tests with 4, 6 and 8 drives in RAID10 and here\nare the results:\n\ntime /usr/pgsql-9.1/bin/pgbench -U postgres -i -s 12000 pgbench # 292GB DB\n\n4 drives6 drives8 drives105 min98 min94 min\n\n/usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -N pgbench # Write\ntest\n\n4 drives6 drives8 drives656774278073\n\n/usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 pgbench # Read/Write\ntest\n\n4 drives6 drives8 drives365154747203\n\n/usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -S pgbench # Read test\n\n4 drives6 drives8 drives176282548228698\n\n\nA few notes:\n\n - I ran these tests only once, so take these number with reserve. I\n didn't have the time to run them more times, because I had to test how the\n server works with our app and it takes a considerable amount of time to run\n them all.\n - I wanted to use a bigger scale factor, but there is a bug in pgbench\n with big scale factors.\n - Postgres 9.1 was chosen, since the app which will run on this server\n uses 9.1.\n - These tests are with the H710p controller set to write-back (WB) and\n with adaptive read ahead (ADRA). I ran a few tests with write-through (WT)\n and no read ahead (NORA), but the results were worse.\n - All tests were run using 96 clients as recommended on the pgbench wiki\n page, but I'm sure I would get better results if I used 48 clients (1 for\n each core), which I tried with the R/W test and got 7986 on 8 drives, which\n is almost 800TPS better than with 96 clients.\n\n\nSince our app is tied to the Postgres performance a lot, I'm currently\ntrying to optimize it. Do you have any suggestions what Postgres/system\nsettings I could try to tweak to increase performance? 
I have a feeling I\ncould get more performance out of this system.\n\n\nRegards,\nStrahinja\n",
"msg_date": "Wed, 10 Dec 2014 00:28:47 +0100",
"msg_from": "=?UTF-8?Q?Strahinja_Kustudi=C4=87?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "8xIntel S3500 SSD in RAID10 on Dell H710p"
},
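Given the checkpoint settings quoted above, one thing worth watching during the pgbench runs is whether checkpoints are being forced by WAL volume rather than by the timeout. A sketch using the standard pg_stat_bgwriter view (present in 9.1); sample it before and after a run and compare the deltas:

-- checkpoints_req counts checkpoints forced by filling
-- checkpoint_segments; if it climbs during a run, checkpoint
-- pressure rather than the SSDs may be limiting TPS
SELECT checkpoints_timed,
       checkpoints_req,
       buffers_checkpoint,
       buffers_backend
FROM pg_stat_bgwriter;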
{
"msg_contents": "On 10/12/14 12:28, Strahinja Kustudić wrote:\n\n> * These tests are with the H710p controller set to write-back (WB) and\n> with adaptive read ahead (ADRA). I ran a few tests with\n> write-through (WT) and no read ahead (NORA), but the results were worse.\n\nThat is interesting: I've done some testing on this type of card with 16 \n(slightly faster Hitachi) SSD attached. Setting WT and NORA should \nenable the so-called 'fastpath' mode for the card [1]. We saw \nperformance improve markedly (300MB/s random write go to 1300MB/s).\n\nThis *might* be related to the fact that 16 SSD can put out more IOPS \nthan the card can actually handle - whereas your 8 S3500 is probably the \nperfect number (e.g 8*11000 = 88000 which the card can handle ok).\n\n\n[1] If you make the change while there are no outstanding background \noperations (array rebuild etc) in progress (see \nhttp://www.flagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf).\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 16:55:14 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
},
{
"msg_contents": "On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <\[email protected]> wrote:\n\n> That is interesting: I've done some testing on this type of card with 16\n> (slightly faster Hitachi) SSD attached. Setting WT and NORA should enable\n> the so-called 'fastpath' mode for the card [1]. We saw performance improve\n> markedly (300MB/s random write go to 1300MB/s).\n>\n> This *might* be related to the fact that 16 SSD can put out more IOPS than\n> the card can actually handle - whereas your 8 S3500 is probably the perfect\n> number (e.g 8*11000 = 88000 which the card can handle ok).\n>\n>\n> [1] If you make the change while there are no outstanding background\n> operations (array rebuild etc) in progress (see\n> http://www.flagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf\n> ).\n\n\nI read that guide too, which is the reason why I tried with WT/NORA, but\nthe document also states: \"NOTE: RAID 10, RAID 50, and RAID 60 virtual\ndisks cannot use FastPath.\" Which is a little odd, since usually if you\nwant performance with reliability, you go RAID10.\n\nDo you have any suggestions what I could try to tweak to get more\nperformance?\n\nOn Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <[email protected]> wrote:\nThat is interesting: I've done some testing on this type of card with 16 (slightly faster Hitachi) SSD attached. Setting WT and NORA should enable the so-called 'fastpath' mode for the card [1]. We saw performance improve markedly (300MB/s random write go to 1300MB/s).\n\nThis *might* be related to the fact that 16 SSD can put out more IOPS than the card can actually handle - whereas your 8 S3500 is probably the perfect number (e.g 8*11000 = 88000 which the card can handle ok).\n\n\n[1] If you make the change while there are no outstanding background operations (array rebuild etc) in progress (see http://www.flagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf).I read that guide too, which is the reason why I tried with WT/NORA, but the document also states: \"NOTE: RAID 10, RAID 50, and RAID 60 virtual disks cannot use FastPath.\" Which is a little odd, since usually if you want performance with reliability, you go RAID10.Do you have any suggestions what I could try to tweak to get more performance?",
"msg_date": "Wed, 10 Dec 2014 09:30:07 +0100",
"msg_from": "=?UTF-8?Q?Strahinja_Kustudi=C4=87?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
},
{
"msg_contents": "On 10/12/14 21:30, Strahinja Kustudić wrote:\n> On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood <\n> [email protected]> wrote:\n>\n>> That is interesting: I've done some testing on this type of card with 16\n>> (slightly faster Hitachi) SSD attached. Setting WT and NORA should enable\n>> the so-called 'fastpath' mode for the card [1]. We saw performance improve\n>> markedly (300MB/s random write go to 1300MB/s).\n>>\n>> This *might* be related to the fact that 16 SSD can put out more IOPS than\n>> the card can actually handle - whereas your 8 S3500 is probably the perfect\n>> number (e.g 8*11000 = 88000 which the card can handle ok).\n>>\n>>\n>> [1] If you make the change while there are no outstanding background\n>> operations (array rebuild etc) in progress (see\n>> http://www.flagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf\n>> ).\n>\n>\n> I read that guide too, which is the reason why I tried with WT/NORA, but\n> the document also states: \"NOTE: RAID 10, RAID 50, and RAID 60 virtual\n> disks cannot use FastPath.\" Which is a little odd, since usually if you\n> want performance with reliability, you go RAID10.\n>\n> Do you have any suggestions what I could try to tweak to get more\n> performance?\n>\n\nWe are using these configured as *individual* drives on RAID0 that are \nthen md raided in a (software) RAID 10 array. Maybe try that out (as \nfastpath only cares about the HW RAID setup).\n\nInterestingly we were also seeing better performance on a fully HW RAID \n10 array with WT/NORA...so (I guess) our Hitachi SSD probably have lower \nlatency than the S3500 does.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 11:05:14 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
},
{
"msg_contents": "On Wed, Dec 10, 2014 at 2:30 AM, Strahinja Kustudić\n<[email protected]> wrote:\n> On Wed, Dec 10, 2014 at 4:55 AM, Mark Kirkwood\n> <[email protected]> wrote:\n>>\n>> That is interesting: I've done some testing on this type of card with 16\n>> (slightly faster Hitachi) SSD attached. Setting WT and NORA should enable\n>> the so-called 'fastpath' mode for the card [1]. We saw performance improve\n>> markedly (300MB/s random write go to 1300MB/s).\n>>\n>> This *might* be related to the fact that 16 SSD can put out more IOPS than\n>> the card can actually handle - whereas your 8 S3500 is probably the perfect\n>> number (e.g 8*11000 = 88000 which the card can handle ok).\n>>\n>>\n>> [1] If you make the change while there are no outstanding background\n>> operations (array rebuild etc) in progress (see\n>> http://www.flagshiptech.com/eBay/Dell/poweredgeh310h710h810UsersGuide.pdf).\n>\n>\n> I read that guide too, which is the reason why I tried with WT/NORA, but the\n> document also states: \"NOTE: RAID 10, RAID 50, and RAID 60 virtual disks\n> cannot use FastPath.\" Which is a little odd, since usually if you want\n> performance with reliability, you go RAID10.\n>\n> Do you have any suggestions what I could try to tweak to get more\n> performance?\n\nDefinitely crank effective_io_concurrency. It will not help stock\npgbench test since it doesn't involve bitmap heap scans but when it\nkicks in it's much faster.\n\nhttp://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\nAs it pertains to random read performance, I think you'll find that\nyou're getting pretty close to maxing out what the computer is\nbasically capable of -- I highly doubt you'll be read bound on storage\nfor any application; the classic techniques of optimizing queries,\nindexes and tables is where focus your energy. Sequential write will\nalso be no problem.\n\nThe only area where the s3500 falls short is random writes. If your\nrandom write i/o requirements are extreme, you've bought the wrong\ndrive, I'd have shelled out for the S3700 (but it's never too late;\nyou can stack one on and move high write activity tables to the s3700\ndriven tablespace).\n\nmerlni\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 15:06:42 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
},
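Merlin's effective_io_concurrency suggestion can be tried per session before committing it to postgresql.conf (on 9.1 the persistent setting has to go in the config file; ALTER SYSTEM only arrived in 9.4). A minimal sketch — the value 256 and the query are illustrative only, since the setting helps only plans that use bitmap heap scans:

    SET effective_io_concurrency = 256;  -- illustrative; tune for the array
    -- Compare timings on a query that produces a bitmap heap scan, e.g. a
    -- wide range scan over the pgbench accounts table:
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM pgbench_accounts WHERE aid BETWEEN 1000000 AND 2000000;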
{
"msg_contents": "> \r\n> I have a beast of a Dell server with the following specifications:\r\n> \t• 4x Xeon E5-4657LV2 (48 cores total)\r\n> \t• 196GB RAM\r\n> \t• 2x SCSI 900GB in RAID1 (for the OS)\r\n> \t• 8x Intel S3500 SSD 240GB in RAID10\r\n> \t• H710p RAID controller, 1GB cache\r\n> Centos 6.6, RAID10 SSDs uses XFS (mkfs.xfs -i size=512 /dev/sdb).\r\n\r\nThings to check\r\n\r\n- disk cache settings (EnDskCache - for SSD should be on or you're going to lose 90% of your performance)\r\n\r\n- OS settings e.g. \r\n\r\necho noop > /sys/block/sda/queue/scheduler\r\necho 975 > /sys/block/sda/queue/nr_requests\r\nblockdev --setra 16384 /dev/sdb\r\n\r\n- OS kernel version \r\n\r\nWe use H710Ps with SSDs as well, and these settings make a measurable difference to our performance here (though we measure more than just pgbench since it's a poor proxy for our use cases).\r\n\r\nAlso\r\n\r\n- SSDs - is the filesystem aligned and block size chosen correctly (you don't want to be forced to read 2 blocks of SSD to get every data block)? RAID stripe size? May make a small difference. \r\n\r\n- are the SSDs all sitting on different SATA channels? You don't want them to be forced to share one channel's worth of bandwidth. The H710P has 8 SATA channels I think (?) and you mention 10 devices above. \r\n\r\nGraeme Bell.\r\n\r\nOn 10 Dec 2014, at 00:28, Strahinja Kustudić <[email protected]> wrote:\r\n\r\n> I have a beast of a Dell server with the following specifications:\r\n> \t• 4x Xeon E5-4657LV2 (48 cores total)\r\n> \t• 196GB RAM\r\n> \t• 2x SCSI 900GB in RAID1 (for the OS)\r\n> \t• 8x Intel S3500 SSD 240GB in RAID10\r\n> \t• H710p RAID controller, 1GB cache\r\n> Centos 6.6, RAID10 SSDs uses XFS (mkfs.xfs -i size=512 /dev/sdb).\r\n> \r\n> Here are some relevant postgresql.conf settings:\r\n> shared_buffers = 8GB\r\n> work_mem = 64MB\r\n> maintenance_work_mem = 1GB\r\n> synchronous_commit = off\r\n> checkpoint_segments = 256\r\n> checkpoint_timeout = 10min\r\n> checkpoint_completion_target = 0.9\r\n> seq_page_cost = 1.0\r\n> effective_cache_size = 100GB\r\n> \r\n> I ran some \"fast\" pgbench tests with 4, 6 and 8 drives in RAID10 and here are the results:\r\n> \r\n> time /usr/pgsql-9.1/bin/pgbench -U postgres -i -s 12000 pgbench # 292GB DB\r\n> \r\n> 4 drives\t6 drives\t8 drives\r\n> 105 min\t98 min\t94 min\r\n> \r\n> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -N pgbench # Write test\r\n> \r\n> 4 drives\t6 drives\t8 drives\r\n> 6567\t7427\t8073\r\n> \r\n> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 pgbench # Read/Write test\r\n> \r\n> 4 drives\t6 drives\t8 drives\r\n> 3651\t5474\t7203\r\n> \r\n> /usr/pgsql-9.1/bin/pgbench -U postgres -c 96 -T 600 -S pgbench # Read test\r\n> \r\n> 4 drives\t6 drives\t8 drives\r\n> 17628\t25482\t28698\r\n> \r\n> \r\n> A few notes:\r\n> \t• I ran these tests only once, so take these number with reserve. I didn't have the time to run them more times, because I had to test how the server works with our app and it takes a considerable amount of time to run them all.\r\n> \t• I wanted to use a bigger scale factor, but there is a bug in pgbench with big scale factors.\r\n> \t• Postgres 9.1 was chosen, since the app which will run on this server uses 9.1.\r\n> \t• These tests are with the H710p controller set to write-back (WB) and with adaptive read ahead (ADRA). 
I ran a few tests with write-through (WT) and no read ahead (NORA), but the results were worse.\r\n> \t• All tests were run using 96 clients as recommended on the pgbench wiki page, but I'm sure I would get better results if I used 48 clients (1 for each core), which I tried with the R/W test and got 7986 on 8 drives, which is almost 800TPS better than with 96 clients.\r\n> \r\n> Since our app is tied to the Postgres performance a lot, I'm currently trying to optimize it. Do you have any suggestions what Postgres/system settings I could try to tweak to increase performance? I have a feeling I could get more performance out of this system.\r\n> \r\n> \r\n> Regards,\r\n> Strahinja\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Dec 2014 11:47:12 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
},
{
"msg_contents": ">\n> - disk cache settings (EnDskCache - for SSD should be on or you're going\n> to lose 90% of your performance)\n>\n\nDisk cache is enabled, I know there is a huge performance impact.\n\n\n> - OS settings e.g.\n>\n> echo noop > /sys/block/sda/queue/scheduler\n> echo 975 > /sys/block/sda/queue/nr_requests\n> blockdev --setra 16384 /dev/sdb\n>\n\nI'll try to play with these as well. I haven't tried noop yet, but it was\non my checklist.\n\n\n\n> - OS kernel version\n>\n> We use H710Ps with SSDs as well, and these settings make a measurable\n> difference to our performance here (though we measure more than just\n> pgbench since it's a poor proxy for our use cases).\n>\n> Also\n>\n> - SSDs - is the filesystem aligned and block size chosen correctly (you\n> don't want to be forced to read 2 blocks of SSD to get every data block)?\n> RAID stripe size? May make a small difference.\n>\n\nI read a lot about filesystem alignment, but as far as I understand if I\njust format the whole drive and not created any partitions, all should be\naligned. I'll check what is the data block size of SSDs and try set block\nsize to that, but I'm not exactly sure how does block size work with\nRAID10, it seem logical to me that it works differently.\n\n\n- are the SSDs all sitting on different SATA channels? You don't want them\n> to be forced to share one channel's worth of bandwidth. The H710P has 8\n> SATA channels I think (?) and you mention 10 devices above.\n>\n\nGood question. Since the server is not physical at my place, I will have to\ncheck with the people who assembled it.\n\n\nThanks for your help. Once I do more testing, I'll return more details what\nhelped.\n\n- disk cache settings (EnDskCache - for SSD should be on or you're going to lose 90% of your performance)Disk cache is enabled, I know there is a huge performance impact. \n- OS settings e.g.\n\necho noop > /sys/block/sda/queue/scheduler\necho 975 > /sys/block/sda/queue/nr_requests\nblockdev --setra 16384 /dev/sdbI'll try to play with these as well. I haven't tried noop yet, but it was on my checklist. \n- OS kernel version\n\nWe use H710Ps with SSDs as well, and these settings make a measurable difference to our performance here (though we measure more than just pgbench since it's a poor proxy for our use cases).\n\nAlso\n\n- SSDs - is the filesystem aligned and block size chosen correctly (you don't want to be forced to read 2 blocks of SSD to get every data block)? RAID stripe size? May make a small difference.I read a lot about filesystem alignment, but as far as I understand if I just format the whole drive and not created any partitions, all should be aligned. I'll check what is the data block size of SSDs and try set block size to that, but I'm not exactly sure how does block size work with RAID10, it seem logical to me that it works differently. \n- are the SSDs all sitting on different SATA channels? You don't want them to be forced to share one channel's worth of bandwidth. The H710P has 8 SATA channels I think (?) and you mention 10 devices above.Good question. Since the server is not physical at my place, I will have to check with the people who assembled it.Thanks for your help. Once I do more testing, I'll return more details what helped.",
"msg_date": "Tue, 16 Dec 2014 20:46:05 +0100",
"msg_from": "=?UTF-8?Q?Strahinja_Kustudi=C4=87?= <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: 8xIntel S3500 SSD in RAID10 on Dell H710p"
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHello.\nI need to tune a postgres installation I've just made to get a better\nperformance. I use two identical servers with a hot replication\nconfiguration. The two servers have the following hardware:\n\nDual Processor Intel Xeon E5-2640V2 20Mb cache 2.00Ghz,\nRam Mem. 32Gb DDR-3 Ecc Registered,\nController MegaRaid 8-ports 1Gb cache,\n4 Enterprise Hdd NL Sas 600 4Tb Sata,\n2 Samsung SSD 840 Pro Series 512Gb,\n2 Hdd 500 Gb\n\nI made a software raid with the last two hard disks with ext4 and I\ninstalled Ubuntu 14.04.1 LTS (I have to use this SO) on it. I made a\nhardware raid with the four SAS hard disks and I mount the partition\non it with ext4 without journaling and I put the database on it.\n\nNow I have two more steps to do.\n\n1- could you please help tuning the configuration? What are the best\nvalue I should use for wal_buffers and shared_buffers?\n2- I would like to use the two SDD to store the wal file. Do you think\nit is useful or how should I use them?\n\nThank you for your answers.\n\nBest Regards,\nMaila Fatticcioni\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niEYEARECAAYFAlSII/gACgkQi2q3wPb3FcPUuACgg2m2o9dQWavLrN2EmmmCpGEv\nYnMAoN0R/gejcKwnxf0qFPKXtaGaIG1A\n=oLxU\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 11:44:08 +0100",
"msg_from": "Maila Fatticcioni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Tuning the configuration"
},
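For the wal_buffers/shared_buffers question, the commonly cited starting points on a 32GB machine are roughly 25% of RAM for shared_buffers and the auto-tuned wal_buffers of 9.1+ (-1, meaning 1/32 of shared_buffers capped at 16MB). These are assumptions to benchmark against the actual workload, not definitive numbers; a sketch to inspect what the server is using:

    SHOW shared_buffers;        -- starting point on this box: ~8GB
    SHOW wal_buffers;           -- -1 (auto) is usually fine on 9.1+
    SHOW effective_cache_size;  -- often set to ~50-75% of RAM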
{
"msg_contents": "On Wed, Dec 10, 2014 at 2:44 AM, Maila Fatticcioni\n<[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello.\n> I need to tune a postgres installation I've just made to get a better\n> performance. I use two identical servers with a hot replication\n> configuration. The two servers have the following hardware:\n>\n> Dual Processor Intel Xeon E5-2640V2 20Mb cache 2.00Ghz,\n> Ram Mem. 32Gb DDR-3 Ecc Registered,\n> Controller MegaRaid 8-ports 1Gb cache,\n> 4 Enterprise Hdd NL Sas 600 4Tb Sata,\n> 2 Samsung SSD 840 Pro Series 512Gb,\n> 2 Hdd 500 Gb\n>\n> I made a software raid with the last two hard disks with ext4 and I\n> installed Ubuntu 14.04.1 LTS (I have to use this SO) on it. I made a\n> hardware raid with the four SAS hard disks and I mount the partition\n> on it with ext4 without journaling and I put the database on it.\n>\n> Now I have two more steps to do.\n>\n> 1- could you please help tuning the configuration? What are the best\n> value I should use for wal_buffers and shared_buffers?\n> 2- I would like to use the two SDD to store the wal file. Do you think\n> it is useful or how should I use them?\n>\n> Thank you for your answers.\n>\n> Best Regards,\n> Maila Fatticcioni\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1\n>\n> iEYEARECAAYFAlSII/gACgkQi2q3wPb3FcPUuACgg2m2o9dQWavLrN2EmmmCpGEv\n> YnMAoN0R/gejcKwnxf0qFPKXtaGaIG1A\n> =oLxU\n> -----END PGP SIGNATURE-----\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nWe used [1] to great effect when setting our server up. We have not\nhad to diverge much from the recommendations in that document.\n\nGenerally, the specifics of tuning depend on the workload of your\nspecific instance.\n\n[1] https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 09:47:06 -0800",
"msg_from": "Patrick Krecker <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/10/2014 06:47 PM, Patrick Krecker wrote:\n> On Wed, Dec 10, 2014 at 2:44 AM, Maila Fatticcioni \n> <[email protected]> wrote: Hello. I need to tune a postgres\n> installation I've just made to get a better performance. I use two\n> identical servers with a hot replication configuration. The two\n> servers have the following hardware:\n> \n> Dual Processor Intel Xeon E5-2640V2 20Mb cache 2.00Ghz, Ram Mem.\n> 32Gb DDR-3 Ecc Registered, Controller MegaRaid 8-ports 1Gb cache, 4\n> Enterprise Hdd NL Sas 600 4Tb Sata, 2 Samsung SSD 840 Pro Series\n> 512Gb, 2 Hdd 500 Gb\n> \n> I made a software raid with the last two hard disks with ext4 and\n> I installed Ubuntu 14.04.1 LTS (I have to use this SO) on it. I\n> made a hardware raid with the four SAS hard disks and I mount the\n> partition on it with ext4 without journaling and I put the database\n> on it.\n> \n> Now I have two more steps to do.\n> \n> 1- could you please help tuning the configuration? What are the\n> best value I should use for wal_buffers and shared_buffers? 2- I\n> would like to use the two SDD to store the wal file. Do you think \n> it is useful or how should I use them?\n> \n> Thank you for your answers.\n> \n> Best Regards, Maila Fatticcioni\n>> \n>> \n>> -- Sent via pgsql-performance mailing list\n>> ([email protected]) To make changes to your\n>> subscription: \n>> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> We used [1] to great effect when setting our server up. We have\n> not had to diverge much from the recommendations in that document.\n> \n> Generally, the specifics of tuning depend on the workload of your \n> specific instance.\n> \n> [1] https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n> \n\nHello.\nIndeed I followed this document to set up my configuration. I am glad\nthat you recommend this as well.\n\nEventually I use this setup:\n\nmax_connections = 150\nshared_buffers = 8GB\nwork_mem = 32MB\ncheckpoint_segments = 128\ncheckpoint_completion_target = 0.9\n\nBest Regards,\nMaila Fatticcioni\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niEYEARECAAYFAlSJUaEACgkQi2q3wPb3FcPsuQCeLR5P49d60anErETNiX0iHNLe\nEu4An0QN3nzr/kvlPUTm9Q1A0GkjB/gw\n=kdGU\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 09:11:19 +0100",
"msg_from": "Maila Fatticcioni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning the configuration"
},
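With checkpoint_segments = 128 it is worth verifying afterwards that checkpoints are mostly triggered by checkpoint_timeout rather than by running out of segments. A quick check (a sketch):

    -- A high checkpoints_req count relative to checkpoints_timed means the
    -- server is still hitting the segment limit under write load:
    SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;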
{
"msg_contents": "On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Hello.\n> I need to tune a postgres installation I've just made to get a better\n> performance. I use two identical servers with a hot replication\n> configuration. The two servers have the following hardware:\n>\n> Dual Processor Intel Xeon E5-2640V2 20Mb cache 2.00Ghz,\n> Ram Mem. 32Gb DDR-3 Ecc Registered,\n> Controller MegaRaid 8-ports 1Gb cache,\n> 4 Enterprise Hdd NL Sas 600 4Tb Sata,\n> 2 Samsung SSD 840 Pro Series 512Gb,\n> 2 Hdd 500 Gb\n>\n> I made a software raid with the last two hard disks with ext4 and I\n> installed Ubuntu 14.04.1 LTS (I have to use this SO) on it. I made a\n> hardware raid with the four SAS hard disks and I mount the partition\n> on it with ext4 without journaling and I put the database on it.\n\nLeaving aside all the valid points Patrick already made, as of late I've found\nxfs a better choice for Postgres, performance wise.\n\n> Now I have two more steps to do.\n>\n> 1- could you please help tuning the configuration? What are the best\n> value I should use for wal_buffers and shared_buffers?\n\nit's probably outdated but you could try to read Greg Smith's\n\"PostgreSQL 9.0 High Performance\", because at least you\ncould have an idea of almost all the attack-points you\ncould use to increase you overall performance.\n\nEven in the archive of this very mailinglist you'll surely find\na lot of good advice, e.g. one that I've read here recently is\navoid using any kernels between ver 3.0 and 3.8\n(http://www.databasesoup.com/2014/09/why-you-need-to-avoid-linux-kernel-32.html)\n\n> 2- I would like to use the two SDD to store the wal file. Do you think\n> it is useful or how should I use them?\n\nI definitely would give it a try.\n\nAndrea\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 13:02:31 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "\n> On 11 Dec 2014, at 15:02, Andrea Suisani <[email protected]> wrote:\n> \n> On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n>> 2- I would like to use the two SDD to store the wal file. Do you think\n>> it is useful or how should I use them?\n> \n> I definitely would give it a try.\n> \n\n\nI don't understand the logic behind using drives, \nwhich are best for random io, for sequent io workloads.\n\nBetter use 10k sas with BBU raid for wal, money wise.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 15:11:05 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn 12/11/2014 01:11 PM, Evgeniy Shishkin wrote:\n> \n>> On 11 Dec 2014, at 15:02, Andrea Suisani <[email protected]>\n>> wrote:\n>> \n>> On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n>>> 2- I would like to use the two SDD to store the wal file. Do\n>>> you think it is useful or how should I use them?\n>> \n>> I definitely would give it a try.\n>> \n> \n> \n> I don't understand the logic behind using drives, which are best\n> for random io, for sequent io workloads.\n> \n> Better use 10k sas with BBU raid for wal, money wise.\n> \n> \n> \n\nWould you mind to explain me better why you do suggest me to use the\nsas raid for wal please?\n\nThanks,\nM.\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1\n\niEYEARECAAYFAlSJkokACgkQi2q3wPb3FcOOZQCgrhy3sOP3Jds1eGlPqjSW+GhM\nxFIAn3YbZgEFAlwTC+SX7GG2My0pElys\n=Bsn7\n-----END PGP SIGNATURE-----\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 13:48:09 +0100",
"msg_from": "Maila Fatticcioni <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "\n> Would you mind to explain me better why you do suggest me to use the\n> sas raid for wal please?\n\nSSDs are known to shine when they have to deal with random access pattern\nrather than sequential, on the other hand 10/15K rpm SAS disk is known to be\nbetter for sequential io workloads (in general \"rotating\" disk use to be\nbetter at sequential rather than random access)\n\nHaving said that it seems that SSDs are catching up, see:\n\nhttp://www.anandtech.com/show/6935/seagate-600-ssd-review/5\n\nAndrea\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 14:16:56 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "On 12/11/2014 01:11 PM, Evgeniy Shishkin wrote:\n>\n>> On 11 Dec 2014, at 15:02, Andrea Suisani <[email protected]> wrote:\n>>\n>> On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n>>> 2- I would like to use the two SDD to store the wal file. Do you think\n>>> it is useful or how should I use them?\n>>\n>> I definitely would give it a try.\n>>\n>\n>\n> I don't understand the logic behind using drives,\n> which are best for random io, for sequent io workloads.\n>\n> Better use 10k sas with BBU raid for wal, money wise.\n\nWell since Malia had already used the 4 sas hd for the DB,\nI thought that it'd be quite quick to setup a raid1 array\n(even at software level, e.g. using md), placing pg_xlog\nin such array and measure the performance.\n\nAs a following step, depending on the time constraints involved,\nMalia could rearrange the disk setup enterly and use the SAS\ndisks as location for pg_xlog.\n\n\nAndrea\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 14:26:57 +0100",
"msg_from": "Andrea Suisani <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "\n________________________________________\nFrom: [email protected] <[email protected]> on behalf of Evgeniy Shishkin <[email protected]>\nSent: Thursday, December 11, 2014 7:11 AM\nTo: Andrea Suisani\nCc: [email protected]; [email protected]\nSubject: Re: [PERFORM] Tuning the configuration\n\n> On 11 Dec 2014, at 15:02, Andrea Suisani <[email protected]> wrote:\n>\n> On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n>> 2- I would like to use the two SDD to store the wal file. Do you think\n>> it is useful or how should I use them?\n>\n> I definitely would give it a try.\n>\n\n\n> I don't understand the logic behind using drives,\n> which are best for random io, for sequent io workloads.\n\n> Better use 10k sas with BBU raid for wal, money wise.\n\nVery much agree with this. Because SSD is fast doesn't make it suited for certain things, and a streaming sequential 100% write workload is one of them. I've worked with everything from local disk to high-end SAN and even at the high end we've always put any DB logs on spinning disk. RAID1 is generally sufficient. SSD is king for read heavy random I/O workload.\n\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 11 Dec 2014 22:36:51 +0000",
"msg_from": "Eric Pierce <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "On 12/12/14 11:36, Eric Pierce wrote:\n>\n> ________________________________________\n> From: [email protected] <[email protected]> on behalf of Evgeniy Shishkin <[email protected]>\n> Sent: Thursday, December 11, 2014 7:11 AM\n> To: Andrea Suisani\n> Cc: [email protected]; [email protected]\n> Subject: Re: [PERFORM] Tuning the configuration\n>\n>> On 11 Dec 2014, at 15:02, Andrea Suisani <[email protected]> wrote:\n>>\n>> On 12/10/2014 11:44 AM, Maila Fatticcioni wrote:\n>>> 2- I would like to use the two SDD to store the wal file. Do you think\n>>> it is useful or how should I use them?\n>>\n>> I definitely would give it a try.\n>>\n>\n>\n>> I don't understand the logic behind using drives,\n>> which are best for random io, for sequent io workloads.\n>\n>> Better use 10k sas with BBU raid for wal, money wise.\n>\n> Very much agree with this. Because SSD is fast doesn't make it suited for certain things, and a streaming sequential 100% write workload is one of them. I've worked with everything from local disk to high-end SAN and even at the high end we've always put any DB logs on spinning disk. RAID1 is generally sufficient. SSD is king for read heavy random I/O workload.\n>\n\n\nMind you wal is a little different - the limiting factor is (usually) \nnot raw sequential speed but fsync latency. These days a modern SSD has \nfsync response pretty much equal to that of a card with BBU + spinners - \nand has \"more\" high speed storage available (cards usually have only a \n1G or so of RAM on them).\n\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 12 Dec 2014 13:04:19 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
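Mark's fsync-latency point can be put into numbers without special tooling (the contrib program pg_test_fsync measures it more directly). A rough sketch, run in psql with \timing on — the table name is made up, and each single-statement INSERT is its own transaction, so its reported time approximates one WAL flush on the device holding pg_xlog:

    CREATE TABLE IF NOT EXISTS wal_latency_probe (v int);
    SET synchronous_commit = on;               -- make each commit wait for the WAL flush
    INSERT INTO wal_latency_probe VALUES (1);  -- repeat a few times and average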
{
"msg_contents": "> Very much agree with this. Because SSD is fast doesn't make it suited for certain things, and a streaming sequential 100% write workload is one of them. I've worked with everything from local disk to high-end SAN and even at the high end we've always put any DB logs on spinning disk. RAID1 is generally sufficient. SSD is king for read heavy random I/O workload.\n\n\n1. Here we found SSD sustained serial writes were faster on SSD than to disk, by a factor of 3, both in RAID and single disk configurations. \n\n2. Also, something to watch out for is extended stalls due to synchronous write activity / clearing out of cache, when a lot of data has been building up in write caches. By placing the WAL on the same disk as the ordinary database, you avoid having too much dirty cache building up because the WAL forces the disk to flush more often. So you can trade off some DB filesystem performance here to avoid blocking / IO lag spikes.\n\n3. There's also the question of disk bays. When you have extra disks for OS, for logs, etc. , in some situations you're using up disks that could be used to extend your main database filesystem, particularly when those disks also need to be protected by the appropriate RAID mirrors and RAID hotspares. It can be cheaper to put the logs to SSD than to have 1 extra hdd + its RAID1 mirror + its hotspare + possible shelfspare, plus pay for a bigger chassis to have 3 more disk bays.\n\n4. Finally there's the issue of simplicity. If you get a fast SSD and run OS/logs/DB off a single RAID volume, there's less chance for error when some unlucky person has to do an emergency fix/rebuild later, than if they have to check disk caching policy etc across a range of devices and ensure different parts of the filesystem are mounted in all the right places. Makes documentation easier. \n\nGraeme Bell\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 15 Dec 2014 13:36:45 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "> \n> I don't understand the logic behind using drives, \n> which are best for random io, for sequent io workloads.\n\nBecause they are also best for sequential IO. I get 1.3-1.4GB/second from 4 SSDs in RAID or >500MB/s for single disk systems, even with cheap models. \nAre you getting more than that from high-end spinning rust?\n\nGraeme.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Dec 2014 11:51:19 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
},
{
"msg_contents": "\n> On 16 Dec 2014, at 14:51, Graeme B. Bell <[email protected]> wrote:\n> \n>> \n>> I don't understand the logic behind using drives, \n>> which are best for random io, for sequent io workloads.\n> \n> Because they are also best for sequential IO. I get 1.3-1.4GB/second from 4 SSDs in RAID or >500MB/s for single disk systems, even with cheap models. \n> Are you getting more than that from high-end spinning rust?\n\n\nI better use ssd for random iops when database doesn't fit in ram.\nFor wal logs i use raid with bbu cache and couple of sas drives.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 16 Dec 2014 19:37:28 +0300",
"msg_from": "Evgeniy Shishkin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Tuning the configuration"
}
] |
[
{
"msg_contents": "Hi everybody,\n\nDenis from SO (see his latest comment) advised me to post my question \nhere: \nhttp://stackoverflow.com/questions/27363263/when-does-postgresql-collapse-subqueries-to-joins-and-when-not \nPlease also read all the comments as they contain valuable data as well.\n\nWhat we actually have now is that PostgreSQL collapses subqueries to \njoins but a way differently than using \"normal joins\" by using \"Merge \nSemi Joins\" or \"Nested Loop Semi Joins\" (btw an explanation of these \nwould be great here :) ). The queries given in the post are reduced to \nthe problem at hand and the by-PostgreSQL-optimized version performed \nvery well (bit slower than \"normal joins\"). Regarding our actual query \nhowever, the still-different query plan leads to a big performance \nissue. We actually need the complete rows of a instead of a.id. I \nprepared the query plans for you (please note, that querie are executed \nempty file and mem chaches):\n\n\n\n################ Perfect Plan ###############\nWe assume all our queries to be equivalent and therefore want PostgreSQL \nto re-plan the others to this one.\n\nexplain analyze verbose select * from a where a.id in (select a.id from \na inner join text_b b1 on (a.id=b1.a_id) inner join text_b b2 on \n(a.id=b2.a_id) where b1.x='x1' and b1.y='y1' and b2.x='x2' and b2.y='y2' \norder by a.date desc limit 20);\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=183.30..191.34 rows=1 width=135) (actual \ntime=812.486..918.561 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9\n -> HashAggregate (cost=182.87..182.88 rows=1 width=4) (actual \ntime=804.866..804.884 rows=20 loops=1)\n Output: a_1.id\n Group Key: a_1.id\n -> Limit (cost=182.85..182.86 rows=1 width=8) (actual \ntime=804.825..804.839 rows=20 loops=1)\n Output: a_1.id, a_1.date\n -> Sort (cost=182.85..182.86 rows=1 width=8) (actual \ntime=804.823..804.829 rows=20 loops=1)\n Output: a_1.id, a_1.date\n Sort Key: a_1.date\n Sort Method: top-N heapsort Memory: 25kB\n -> Nested Loop (cost=1.57..182.84 rows=1 \nwidth=8) (actual time=96.737..803.871 rows=739 loops=1)\n Output: a_1.id, a_1.date\n -> Merge Join (cost=1.14..178.74 rows=1 \nwidth=8) (actual time=64.829..83.489 rows=739 loops=1)\n Output: b1.a_id, b2.a_id\n Merge Cond: (b1.a_id = b2.a_id)\n -> Index Only Scan using text_b_y_x_y \non public.text_b b1 (cost=0.57..163.29 rows=3936 width=4) (actual \ntime=34.811..47.328 rows=15195 loops=1)\n Output: b1.x, b1.y, b1.a_id\n Index Cond: ((b1.x = 'x1'::text) \nAND (b1.y = 'y1'::text))\n Heap Fetches: 0\n -> Index Only Scan using text_b_y_x_y \non public.text_b b2 (cost=0.57..5.49 rows=46 width=4) (actual \ntime=22.123..30.940 rows=1009 loops=1)\n Output: b2.x, b2.y, b2.a_id\n Index Cond: ((b2.x = 'x2'::text) \nAND (b2.y = 'y2'::text))\n Heap Fetches: 0\n -> Index Only Scan using a_id_date on \npublic.a a_1 (cost=0.43..4.09 rows=1 width=8) (actual time=0.970..0.973 \nrows=1 loops=739)\n Output: a_1.id, a_1.date\n Index Cond: (a_1.id = b1.a_id)\n Heap Fetches: 0\n -> Index Scan using a_id_date on public.a (cost=0.43..8.45 rows=1 \nwidth=135) (actual time=5.677..5.679 rows=1 loops=20)\n Output: 
a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9\n Index Cond: (a.id = a_1.id)\n Planning time: 331.190 ms\n Execution time: 918.694 ms\n\n\n###################### Not so perfect Plan ##################\nBecause PostgreSQL does not re-plan the id-only query from SO to the \nperfect query, we also see here a performance degradation.\n\nexplain analyze verbose select * from a where a.id in (select a.id from \na where a.id in (select text_b.a_id from text_b where text_b.x='x1' and \ntext_b.y='y1') and a.id in (select text_b.a_id from text_b where \ntext_b.x='x2' and text_b.y='y2') order by a.date desc limit 20);\n\nQUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=384.48..392.51 rows=1 width=135) (actual \ntime=1311.680..1426.135 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9\n -> HashAggregate (cost=384.04..384.05 rows=1 width=4) (actual \ntime=1298.447..1298.470 rows=20 loops=1)\n Output: a_1.id\n Group Key: a_1.id\n -> Limit (cost=384.03..384.03 rows=1 width=8) (actual \ntime=1298.411..1298.426 rows=20 loops=1)\n Output: a_1.id, a_1.date\n -> Sort (cost=384.03..384.03 rows=1 width=8) (actual \ntime=1298.409..1298.416 rows=20 loops=1)\n Output: a_1.id, a_1.date\n Sort Key: a_1.date\n Sort Method: top-N heapsort Memory: 25kB\n -> Merge Semi Join (cost=1.57..384.02 rows=1 \nwidth=8) (actual time=160.186..1297.628 rows=739 loops=1)\n Output: a_1.id, a_1.date\n Merge Cond: (a_1.id = text_b.a_id)\n -> Nested Loop (cost=1.00..210.76 rows=46 \nwidth=12) (actual time=80.587..1236.967 rows=1009 loops=1)\n Output: a_1.id, a_1.date, text_b_1.a_id\n -> Index Only Scan using text_b_y_x_y \non public.text_b text_b_1 (cost=0.57..5.49 rows=46 width=4) (actual \ntime=51.190..63.400 rows=1009 loops=1)\n Output: text_b_1.x, text_b_1.y, \ntext_b_1.a_id\n Index Cond: ((text_b_1.x = \n'x2'::text) AND (text_b_1.y = 'y2'::text))\n Heap Fetches: 0\n -> Index Only Scan using a_id_date on \npublic.a a_1 (cost=0.43..4.45 rows=1 width=8) (actual time=1.158..1.160 \nrows=1 loops=1009)\n Output: a_1.id, a_1.date\n Index Cond: (a_1.id = text_b_1.a_id)\n Heap Fetches: 0\n -> Index Only Scan using text_b_y_x_y on \npublic.text_b (cost=0.57..163.29 rows=3936 width=4) (actual \ntime=36.963..54.396 rows=15194 loops=1)\n Output: text_b.x, text_b.y, text_b.a_id\n Index Cond: ((text_b.x = 'x1'::text) \nAND (text_b.y = 'y1'::text))\n Heap Fetches: 0\n -> Index Scan using a_id_date on public.a (cost=0.43..8.45 rows=1 \nwidth=135) (actual time=6.376..6.378 rows=1 loops=20)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9\n Index Cond: (a.id = a_1.id)\n Planning time: 248.279 ms\n Execution time: 1426.337 ms\n\n\n################### Slow Joins ##########################\nDirectly querying from the join performs worse.\n\nexplain analyze verbose select * from a inner join text_b b1 on \n(a.id=b1.a_id) inner join text_b b2 on (a.id=b2.a_id) where b1.x='x1' \nand b1.y='y1' and b2.x='x2' and b2.y='y2' order by a.date desc limit 20;\n\nQUERY 
PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=186.83..186.83 rows=1 width=177) (actual \ntime=4133.420..4133.434 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9, \nb1.a_id, b1.x, b1.y, b2.a_id, b2.x, b2.y, a.date\n -> Sort (cost=186.83..186.83 rows=1 width=177) (actual \ntime=4133.417..4133.423 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9, \nb1.a_id, b1.x, b1.y, b2.a_id, b2.x, b2.y, a.date\n Sort Key: a.date\n Sort Method: top-N heapsort Memory: 34kB\n -> Nested Loop (cost=1.57..186.82 rows=1 width=177) (actual \ntime=109.094..4130.290 rows=739 loops=1)\n Output: a.id, a.date, a.content1, a.content2, \na.content3, a.content4, a.content5, a.content6n, a.content7, a.content8, \na.content9, b1.a_id, b1.x, b1.y, b2.a_id, b2.x, b2.y, a.date\n -> Merge Join (cost=1.14..178.74 rows=1 width=42) \n(actual time=72.023..94.234 rows=739 loops=1)\n Output: b1.a_id, b1.x, b1.y, b2.a_id, b2.x, b2.y\n Merge Cond: (b1.a_id = b2.a_id)\n -> Index Only Scan using text_b_y_x_y on \npublic.text_b b1 (cost=0.57..163.29 rows=3936 width=21) (actual \ntime=36.084..50.308 rows=15195 loops=1)\n Output: b1.x, b1.y, b1.a_id\n Index Cond: ((b1.x = 'x1'::text) AND (b1.y = \n'y1'::text))\n Heap Fetches: 0\n -> Index Only Scan using text_b_y_x_y on \npublic.text_b b2 (cost=0.57..5.49 rows=46 width=21) (actual \ntime=20.227..37.654 rows=1009 loops=1)\n Output: b2.x, b2.y, b2.a_id\n Index Cond: ((b2.x = 'x2'::text) AND (b2.y = \n'y2'::text))\n Heap Fetches: 0\n -> Index Scan using a_id_date on public.a \n(cost=0.43..8.07 rows=1 width=135) (actual time=5.454..5.457 rows=1 \nloops=739)\n Output: a.id, a.date, a.content1, a.content2, \na.content3, a.content4, a.content5, a.content6n, a.content7, a.content8, \na.content9\n Index Cond: (a.id = b1.a_id)\n Planning time: 332.545 ms\n Execution time: 4133.574 ms\n\n################### Slow Subqueries ##########################\nDirectly querying from the subqueries performs even worse.\n\n\nexplain analyze verbose select * from a where a.id in (select \ntext_b.a_id from text_b where text_b.x='x1' and text_b.y='y1') and a.id \nin (select text_b.a_id from text_b where text_b.x='x2' and \ntext_b.y='y2') order by a.date desc limit 20;\n\nQUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=568.02..568.03 rows=1 width=135) (actual \ntime=9765.174..9765.190 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, a.content6n, a.content7, a.content8, a.content9\n -> Sort (cost=568.02..568.03 rows=1 width=135) (actual \ntime=9765.173..9765.180 rows=20 loops=1)\n Output: a.id, a.date, a.content1, a.content2, a.content3, \na.content4, a.content5, 
a.content6n, a.content7, a.content8, a.content9\n Sort Key: a.date\n Sort Method: top-N heapsort Memory: 30kB\n -> Merge Semi Join (cost=1.57..568.01 rows=1 width=135) \n(actual time=294.909..9762.978 rows=739 loops=1)\n Output: a.id, a.date, a.content1, a.content2, \na.content3, a.content4, a.content5, a.content6n, a.content7, a.content8, \na.content9\n Merge Cond: (a.id = text_b.a_id)\n -> Nested Loop (cost=1.00..394.76 rows=46 width=139) \n(actual time=94.441..9668.179 rows=1009 loops=1)\n Output: a.id, a.date, a.content1, a.content2, \na.content3, a.content4, a.content5, a.content6n, a.content7, a.content8, \na.content9, text_b_1.a_id\n -> Index Only Scan using text_b_y_x_y on \npublic.text_b text_b_1 (cost=0.57..5.49 rows=46 width=4) (actual \ntime=52.588..67.307 rows=1009 loops=1)\n Output: text_b_1.x, text_b_1.y, text_b_1.a_id\n Index Cond: ((text_b_1.x = 'x2'::text) AND \n(text_b_1.y = 'y2'::text))\n Heap Fetches: 0\n -> Index Scan using a_id_date on public.a \n(cost=0.43..8.45 rows=1 width=135) (actual time=9.485..9.511 rows=1 \nloops=1009)\n Output: a.id, a.date, a.content1, \na.content2, a.content3, a.content4, a.content5, a.content6n, a.content7, \na.content8, a.content9\n Index Cond: (a.id = text_b_1.a_id)\n -> Index Only Scan using text_b_y_x_y on public.text_b \n(cost=0.57..163.29 rows=3936 width=4) (actual time=22.705..86.822 \nrows=15194 loops=1)\n Output: text_b.x, text_b.y, text_b.a_id\n Index Cond: ((text_b.x = 'x1'::text) AND (text_b.y \n= 'y1'::text))\n Heap Fetches: 0\n Planning time: 267.442 ms\n Execution time: 9765.339 ms\n\n\nWhat needs to be done in order to feed PostgreSQL with the last query \nand achieve the performance of the first one?\n\nBest regards,\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09130 Chemnitz\nTel: +49 (0)371 5347916, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 12:31:14 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "When does PostgreSQL collapse subqueries to join?"
},
{
"msg_contents": "\"Sven R. Kunze\" <[email protected]> writes:\n> ################ Perfect Plan ###############\n> We assume all our queries to be equivalent and therefore want PostgreSQL \n> to re-plan the others to this one.\n\n> explain analyze verbose select * from a where a.id in (select a.id from \n> a inner join text_b b1 on (a.id=b1.a_id) inner join text_b b2 on \n> (a.id=b2.a_id) where b1.x='x1' and b1.y='y1' and b2.x='x2' and b2.y='y2' \n> order by a.date desc limit 20);\n\n> [ ... other variant cases ... ]\n\n> ################### Slow Subqueries ##########################\n> Directly querying from the subqueries performs even worse.\n\n> explain analyze verbose select * from a where a.id in (select \n> text_b.a_id from text_b where text_b.x='x1' and text_b.y='y1') and a.id \n> in (select text_b.a_id from text_b where text_b.x='x2' and \n> text_b.y='y2') order by a.date desc limit 20;\n\n> What needs to be done in order to feed PostgreSQL with the last query \n> and achieve the performance of the first one?\n\nPostgres will *never* turn the last query into the first one, because\nthey are not in fact equivalent. Putting the ORDER BY/LIMIT inside the\nsubquery has entirely different effects than putting it outside. There's\nno guarantee at all that the first query returns only 20 rows, nor that\nthe returned rows are in any particular order.\n\nI'm a bit suspicious of the other aspect of your manual transformation\nhere too: in general semijoins (IN joins) don't commute with inner joins.\nIt's possible that it's okay here given the specific forms of the join\nclauses, but the planner won't assume that.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 10 Dec 2014 10:34:49 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: When does PostgreSQL collapse subqueries to join?"
}
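For completeness: a formulation of the requirement that is genuinely equivalent to the "Slow Subqueries" query (the ORDER BY/LIMIT stays outside, preserving the semantics Tom describes) while still giving the planner freedom to use semi-joins. A sketch using the thread's table and column names:

    SELECT *
    FROM a
    WHERE EXISTS (SELECT 1 FROM text_b b1
                  WHERE b1.a_id = a.id AND b1.x = 'x1' AND b1.y = 'y1')
      AND EXISTS (SELECT 1 FROM text_b b2
                  WHERE b2.a_id = a.id AND b2.x = 'x2' AND b2.y = 'y2')
    ORDER BY a.date DESC
    LIMIT 20;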
] |
[
{
"msg_contents": "Hi,\n\n\nSoftware and hardware running postgresql are:\n- postgresql92-9.2.3-1.1.1.x86_64\n- openSuSE 12.3 x64_86\n- 16 GB of RAM\n- 2 GB of swap\n- 8-core Intel(R) Xeon(R) CPU E5-2407 0 @ 2.20GHz\n- ext4 filesystem hold on a hardware Dell PERC H710 RAID10 with 4x4TB SATA HDs.\n- 2 GB of RAM are reserved for a virtual machine.\n\nThe single database used was created by\nCREATE FUNCTION msg_function() RETURNS trigger\n LANGUAGE plpgsql\n AS $_$ DECLARE _tablename text; _date text; _slot timestamp; BEGIN _slot := \nNEW.slot; _date := to_char(_slot, 'YYYY-MM-DD'); _tablename := 'MSG_'||_date; \nPERFORM 1 FROM pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n \nON n.oid = c.relnamespace WHERE c.relkind = 'r' AND c.relname = _tablename \nAND n.nspname = 'public'; IF NOT FOUND THEN EXECUTE 'CREATE TABLE \npublic.' || quote_ident(_tablename) || ' ( ) INHERITS (public.MSG)'; EXECUTE 'ALTER \nTABLE public.' || quote_ident(_tablename) || ' OWNER TO seviri'; EXECUTE 'GRANT \nALL ON TABLE public.' || quote_ident(_tablename) || ' TO seviri'; EXECUTE 'ALTER \nTABLE ONLY public.' || quote_ident(_tablename) || ' ADD CONSTRAINT ' || \nquote_ident(_tablename||'_pkey') || ' PRIMARY KEY (slot,msg)'; END IF; EXECUTE \n'INSERT INTO public.' || quote_ident(_tablename) || ' VALUES ($1.*)' USING NEW; \nRETURN NULL; END; $_$;\n\nCREATE TABLE msg (\n slot timestamp(0) without time zone NOT NULL,\n msg integer NOT NULL,\n hrv bytea,\n vis006 bytea,\n vis008 bytea,\n ir_016 bytea,\n ir_039 bytea,\n wv_062 bytea,\n wv_073 bytea,\n ir_087 bytea,\n ir_097 bytea,\n ir_108 bytea,\n ir_120 bytea,\n ir_134 bytea,\n pro bytea,\n epi bytea,\n clm bytea,\n tape character varying(10)\n);\n\nBasically, this database consists of daily tables with the date stamp appended in their \nnames, i.e.\nMSG_YYYY-MM-DD and a global table MSG linked to these tables allowing to list all \nthe records.\n\nA cron script performing a single insert (upsert, see log excerpt below) runs every 15 \nminutes and\nnever had any issue.\n\nHowever, I also need to submit historical records. This is achieved by a bash script \nparsing a text file\nand building insert commands which are submitted 10 at a time to the database \nusing psql through a\ntemp file in a BEGIN; ...; COMMIT block. When running this script, I noticed that the \nINSERT\nsubprocess can reached around 4GB of memory using htop (see attached \nscreenshot). 
After a while,\nthe script inevitably crashes with the following messages\npsql:/tmp/tmp.a0ZrivBZhD:10: connection to server was lost\nCould not submit SQL request file /tmp/tmp.a0ZrivBZhD to database\n\nand the associated entries in the log:\n2014-12-15 17:54:07 GMT LOG: server process (PID 21897) was terminated by \nsignal 9: Killed\n2014-12-15 17:54:07 GMT DETAIL: Failed process was running: WITH upsert AS \n(update MSG set \n(slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_1\n08,IR_120,IR_134,PRO,EPI,CLM,TAPE) = (to_timestamp('201212032145', \n'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\x01\n','\\x01','\\x7f','LTO5_020') where slot=to_timestamp('201212032145', \n'YYYYMMDDHH24MI') and MSG=2 RETURNING *) insert into MSG \n(slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_1\n08,IR_120,IR_134,PRO,EPI,CLM,TAPE) select to_timestamp('201212032145', \n'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\x01\n','\\x01','\\x7f','LTO5_020' WHERE NOT EXISTS (SELECT * FROM upsert);\n2014-12-15 17:54:07 GMT LOG: terminating any other active server processes\n2014-12-15 17:54:07 GMT WARNING: terminating connection because of crash of \nanother server process\n2014-12-15 17:54:07 GMT DETAIL: The postmaster has commanded this server \nprocess to roll back the current transaction and exit, because another server process \nexited abnormally and possibly corrupted shared memory.\n2014-12-15 17:54:07 GMT HINT: In a moment you should be able to reconnect to \nthe database and repeat your command.\n2014-12-15 17:54:07 GMT seviri seviri WARNING: terminating connection because \nof crash of another server process\n2014-12-15 17:54:07 GMT seviri seviri DETAIL: The postmaster has commanded \nthis server process to roll back the current transaction and exit, because another \nserver process exited abnormally and possibly corrupted shared memory.\n2014-12-15 17:54:07 GMT seviri seviri HINT: In a moment you should be able to \nreconnect to the database and repeat your command.\n2014-12-15 17:54:07 GMT LOG: all server processes terminated; reinitializing\n2014-12-15 17:54:08 GMT LOG: database system was interrupted; last known up \nat 2014-12-15 17:49:38 GMT\n2014-12-15 17:54:08 GMT LOG: database system was not properly shut down; \nautomatic recovery in progress\n2014-12-15 17:54:08 GMT LOG: redo starts at 0/58C1C060\n2014-12-15 17:54:08 GMT LOG: record with zero length at 0/58C27950\n2014-12-15 17:54:08 GMT LOG: redo done at 0/58C27920\n2014-12-15 17:54:08 GMT LOG: last completed transaction was at log time \n2014-12-15 17:53:33.898086+00\n2014-12-15 17:54:08 GMT LOG: autovacuum launcher started\n2014-12-15 17:54:08 GMT LOG: database system is ready to accept connections\n\nMy postgresql.conf contains the following modified parameters:\nlisten_addresses = '*'\nmax_connections = 100\nshared_buffers = 96MB # increased from the default value of 24MB, because script \nwas failing in the beginning\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Wed, 17 Dec 2014 16:14:09 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Excessive memory used for INSERT"
},
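The statement in the DETAIL line above is the standard pre-9.5 writable-CTE upsert idiom; de-garbled and reduced to a few columns for readability (the values are the ones from the log; the real statement lists all 18 columns):

    WITH upsert AS (
        UPDATE msg
           SET (slot, msg, tape) = (to_timestamp('201212032145','YYYYMMDDHH24MI'), 2, 'LTO5_020')
         WHERE slot = to_timestamp('201212032145','YYYYMMDDHH24MI') AND msg = 2
     RETURNING *
    )
    INSERT INTO msg (slot, msg, tape)
    SELECT to_timestamp('201212032145','YYYYMMDDHH24MI'), 2, 'LTO5_020'
    WHERE NOT EXISTS (SELECT 1 FROM upsert);
    -- note: this idiom is not concurrency-safe without extra locking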
{
"msg_contents": "Hello Alessandro,\n\n> 2014-12-15 17:54:07 GMT DETAIL: Failed process was running: WITH upsert\n> AS (update MSG set\n> (slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_108,IR_120,IR_134,PRO,EPI,CLM,TAPE)\n> = (to_timestamp('201212032145',\n> 'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\x01','\\x01','\\x7f','LTO5_020')\n> where slot=to_timestamp('201212032145', 'YYYYMMDDHH24MI') and MSG=2\n> RETURNING *) insert into MSG\n> (slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_108,IR_120,IR_134,PRO,EPI,CLM,TAPE)\n> select to_timestamp('201212032145',\n> 'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\\x01','\\x01','\\x7f','LTO5_020'\n> WHERE NOT EXISTS (SELECT * FROM upsert);\n\nHow many rows is \"(SELECT * FROM upsert)\" returning? Without knowing \nmore i would guess, that the result-set is very big and that could be \nthe reason for the memory usage.\n\nI would add an WHERE clause to reduce the result-set (an correct index \ncan fasten this method even more).\n\nGreetings,\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Dec 2014 16:26:32 +0100",
"msg_from": "Torsten Zuehlsdorff <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Torsten Zuehlsdorff <[email protected]> writes:\n> How many rows is \"(SELECT * FROM upsert)\" returning? Without knowing \n> more i would guess, that the result-set is very big and that could be \n> the reason for the memory usage.\n\nResult sets are not ordinarily accumulated on the server side.\n\nAlessandro didn't show the trigger definition, but my guess is that it's\nan AFTER trigger, which means that a trigger event record is accumulated\nin server memory for each inserted/updated row. If you're trying to\nupdate a huge number of rows in one command (or one transaction, if it's\na DEFERRED trigger) you'll eventually run out of memory for the event\nqueue.\n\nAn easy workaround is to make it a BEFORE trigger instead. This isn't\nreally nice from a theoretical standpoint; but as long as you make sure\nthere are no other BEFORE triggers that might fire after it, it'll work\nwell enough.\n\nAlternatively, you might want to reconsider the concept of updating\nhundreds of millions of rows in a single operation ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Dec 2014 10:41:31 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
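Concretely, the workaround Tom describes would look something like this sketch (the trigger and function names are the ones that surface later in this thread, and it only applies if the existing trigger is in fact an AFTER trigger):

  -- re-create the per-row trigger as BEFORE, so no event record has to be
  -- queued in server memory for each inserted row
  DROP TRIGGER IF EXISTS msg_trigger ON msg;
  CREATE TRIGGER msg_trigger
      BEFORE INSERT ON msg
      FOR EACH ROW EXECUTE PROCEDURE msg_function();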
{
"msg_contents": "Hi Torsten,\n\n\nThanks for your answer.\n\nI have modified\n(SELECT * FROM upsert)\nto\n(SELECT * FROM upsert WHERE slot=to_timestamp('201212032145', 'YYYYMMDDHH24MI') and MSG=2)\naccording to your suggestion to reduce the result-set to a single row. However, the INSERT process is still consuming the same amount of RAM.\n\n\nRegards,\n\n\nAlessandro.\n\n\nOn Wednesday 17 December 2014 16:26:32 Torsten Zuehlsdorff wrote:\n> Hello Alessandro,\n> \n> > 2014-12-15 17:54:07 GMT DETAIL: Failed process was running: WITH upsert\n> > AS (update MSG set\n> > (slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_1\n> > 08,IR_120,IR_134,PRO,EPI,CLM,TAPE) = (to_timestamp('201212032145',\n> > 'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\n> > \\xff','\\xff','\\xff','\\xff','\\xff','\\x01','\\x01','\\x7f','LTO5_020') where\n> > slot=to_timestamp('201212032145', 'YYYYMMDDHH24MI') and MSG=2 RETURNING\n> > *) insert into MSG\n> > (slot,MSG,HRV,VIS006,VIS008,IR_016,IR_039,WV_062,WV_073,IR_087,IR_097,IR_1\n> > 08,IR_120,IR_134,PRO,EPI,CLM,TAPE) select to_timestamp('201212032145',\n> > 'YYYYMMDDHH24MI'),2,'\\xffffff','\\xff','\\xff','\\xff','\\xff','\\xff','\\xff','\n> > \\xff','\\xff','\\xff','\\xff','\\xff','\\x01','\\x01','\\x7f','LTO5_020' WHERE\n> > NOT EXISTS (SELECT * FROM upsert);\n> \n> How many rows is \"(SELECT * FROM upsert)\" returning? Without knowing\n> more i would guess, that the result-set is very big and that could be\n> the reason for the memory usage.\n> \n> I would add an WHERE clause to reduce the result-set (an correct index\n> can fasten this method even more).\n> \n> Greetings,\n> Torsten\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Dec 2014 17:57:51 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Hi,\n\n\nMy dtrigger definition is\nCREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE PROCEDURE msg_function();\nso it seems that it is a BEFORE trigger.\n\nTo be totally honest, I have \"really\" limited knownledge in SQL and postgresql and all these were gathered from recipes found on the web...\n\n\nRegards,\n\n\nAlessandro.\n\n\nOn Wednesday 17 December 2014 10:41:31 Tom Lane wrote:\n> Torsten Zuehlsdorff <[email protected]> writes:\n> > How many rows is \"(SELECT * FROM upsert)\" returning? Without knowing\n> > more i would guess, that the result-set is very big and that could be\n> > the reason for the memory usage.\n> \n> Result sets are not ordinarily accumulated on the server side.\n> \n> Alessandro didn't show the trigger definition, but my guess is that it's\n> an AFTER trigger, which means that a trigger event record is accumulated\n> in server memory for each inserted/updated row. If you're trying to\n> update a huge number of rows in one command (or one transaction, if it's\n> a DEFERRED trigger) you'll eventually run out of memory for the event\n> queue.\n> \n> An easy workaround is to make it a BEFORE trigger instead. This isn't\n> really nice from a theoretical standpoint; but as long as you make sure\n> there are no other BEFORE triggers that might fire after it, it'll work\n> well enough.\n> \n> Alternatively, you might want to reconsider the concept of updating\n> hundreds of millions of rows in a single operation ...\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Dec 2014 18:04:42 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Alessandro Ipe <[email protected]> writes:\n> My dtrigger definition is\n> CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE PROCEDURE msg_function();\n> so it seems that it is a BEFORE trigger.\n\nHm, no AFTER triggers anywhere? Are there foreign keys, perhaps?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 17 Dec 2014 12:49:03 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "On 17/12/14 16:14, Alessandro Ipe wrote:\n> 2014-12-15 17:54:07 GMT LOG: server process (PID 21897) was terminated\n> by signal 9: Killed\n\nsince it was killed by SIGKILL, maybe it's the kernel's OOM killer?\n\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Dec 2014 08:51:47 +0100",
"msg_from": "=?ISO-8859-15?Q?Torsten_F=F6rtsch?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Hi,\n\n\nA grep in a nightly dump of this database did not return any AFTER trigger.\nThe only keys are primary on each daily table, through\nADD CONSTRAINT \"MSG_YYYY-MM-DD_pkey\" PRIMARY KEY (slot, msg);\nand on the global table\nADD CONSTRAINT msg_pkey PRIMARY KEY (slot, msg);\n\n\nRegards,\n\n\nA.\n\n\nOn Wednesday 17 December 2014 12:49:03 Tom Lane wrote:\n> Alessandro Ipe <[email protected]> writes:\n> > My dtrigger definition is\n> > CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE\n> > PROCEDURE msg_function(); so it seems that it is a BEFORE trigger.\n> \n> Hm, no AFTER triggers anywhere? Are there foreign keys, perhaps?\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Dec 2014 12:16:49 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "On Thursday 18 December 2014 08:51:47 Torsten Förtsch wrote:\n> On 17/12/14 16:14, Alessandro Ipe wrote:\n> > 2014-12-15 17:54:07 GMT LOG: server process (PID 21897) was \nterminated\n> > by signal 9: Killed\n> \n> since it was killed by SIGKILL, maybe it's the kernel's OOM killer?\n\nIndeed and this hopefully prevented postgresql to crash my whole system due to \nRAM exhaustion. But the problem remains : why an INSERT requires that huge \namount of memory ?\n\n\nRegards,\n\n\nA.\n\n\nOn Thursday 18 December 2014 08:51:47 Torsten Förtsch wrote:\n> On 17/12/14 16:14, Alessandro Ipe wrote:\n> > 2014-12-15 17:54:07 GMT LOG: server process (PID 21897) was terminated\n> > by signal 9: Killed\n> \n> since it was killed by SIGKILL, maybe it's the kernel's OOM killer?\n \nIndeed and this hopefully prevented postgresql to crash my whole system due to RAM exhaustion. But the problem remains : why an INSERT requires that huge amount of memory ?\n \n \nRegards,\n \n \nA.",
"msg_date": "Thu, 18 Dec 2014 16:42:23 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
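As an aside for anyone else whose backend dies from signal 9: the PostgreSQL manual recommends disabling memory overcommit on dedicated Linux servers, so the OOM killer cannot pick off a backend and force a crash-restart of the whole cluster. A sysctl sketch (the ratio is an assumption to be tuned per machine):

  # /etc/sysctl.conf, applied with sysctl -p
  vm.overcommit_memory = 2
  vm.overcommit_ratio = 80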
{
"msg_contents": "Hi,\n\n\nI tried also with an upsert function\nCREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS void\n LANGUAGE plpgsql\n AS $$\nBEGIN\nEXECUTE sql_update;\nIF FOUND THEN\n RETURN;\n END IF;\n BEGIN\nEXECUTE sql_insert;\nEXCEPTION WHEN OTHERS THEN\nEXECUTE sql_update;\nEND;\n RETURN;\nEND;\n$$;\nwith the same result on the memory used... \n\nThe tables hold 355000 rows in total.\n\n\nRegards,\n\n\nA.\n\n\nOn Thursday 18 December 2014 12:16:49 Alessandro Ipe wrote:\n> Hi,\n> \n> \n> A grep in a nightly dump of this database did not return any AFTER trigger.\n> The only keys are primary on each daily table, through\n> ADD CONSTRAINT \"MSG_YYYY-MM-DD_pkey\" PRIMARY KEY (slot, msg);\n> and on the global table\n> ADD CONSTRAINT msg_pkey PRIMARY KEY (slot, msg);\n> \n> \n> Regards,\n> \n> \n> A.\n> \n> On Wednesday 17 December 2014 12:49:03 Tom Lane wrote:\n> > Alessandro Ipe <[email protected]> writes:\n> > > My dtrigger definition is\n> > > CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH \nROW EXECUTE\n> > > PROCEDURE msg_function(); so it seems that it is a BEFORE trigger.\n> > \n> > Hm, no AFTER triggers anywhere? Are there foreign keys, perhaps?\n> > \n> > \t\t\tregards, tom lane\n\n\n\nHi,\n \n \nI tried also with an upsert function\nCREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS void\n LANGUAGE plpgsql\n AS $$\nBEGIN\nEXECUTE sql_update;\nIF FOUND THEN\n RETURN;\n END IF;\n BEGIN\nEXECUTE sql_insert;\nEXCEPTION WHEN OTHERS THEN\nEXECUTE sql_update;\nEND;\n RETURN;\nEND;\n$$;\nwith the same result on the memory used... \n \nThe tables hold 355000 rows in total.\n \n \nRegards,\n \n \nA.\n \n \nOn Thursday 18 December 2014 12:16:49 Alessandro Ipe wrote:\n> Hi,\n> \n> \n> A grep in a nightly dump of this database did not return any AFTER trigger.\n> The only keys are primary on each daily table, through\n> ADD CONSTRAINT \"MSG_YYYY-MM-DD_pkey\" PRIMARY KEY (slot, msg);\n> and on the global table\n> ADD CONSTRAINT msg_pkey PRIMARY KEY (slot, msg);\n> \n> \n> Regards,\n> \n> \n> A.\n> \n> On Wednesday 17 December 2014 12:49:03 Tom Lane wrote:\n> > Alessandro Ipe <[email protected]> writes:\n> > > My dtrigger definition is\n> > > CREATE TRIGGER msg_trigger BEFORE INSERT ON msg FOR EACH ROW EXECUTE\n> > > PROCEDURE msg_function(); so it seems that it is a BEFORE trigger.\n> > \n> > Hm, no AFTER triggers anywhere? Are there foreign keys, perhaps?\n> > \n> > \t\t\tregards, tom lane",
"msg_date": "Thu, 18 Dec 2014 17:31:46 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
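For completeness, a function like upsert_func would be called once per row, with both statements passed as dollar-quoted text (a sketch using made-up column values in the style of the log excerpt earlier in this thread):

  SELECT upsert_func(
      $i$ INSERT INTO msg (slot, msg, tape)
          VALUES (to_timestamp('201212032145', 'YYYYMMDDHH24MI'), 2, 'LTO5_020') $i$,
      $u$ UPDATE msg SET tape = 'LTO5_020'
          WHERE slot = to_timestamp('201212032145', 'YYYYMMDDHH24MI') AND msg = 2 $u$
  );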
{
"msg_contents": "Alessandro Ipe <[email protected]> writes:\n> Hi,\n> I tried also with an upsert function\n> CREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS void\n> LANGUAGE plpgsql\n> AS $$\n> BEGIN\n> EXECUTE sql_update;\n> IF FOUND THEN\n> RETURN;\n> END IF;\n> BEGIN\n> EXECUTE sql_insert;\n> EXCEPTION WHEN OTHERS THEN\n> EXECUTE sql_update;\n> END;\n> RETURN;\n> END;\n> $$;\n> with the same result on the memory used... \n\nIf you want to provide a self-contained test case, possibly we could look\ninto it, but these fragmentary bits of what you're doing don't really\nconstitute an investigatable problem statement.\n\nI will note that EXCEPTION blocks aren't terribly cheap, so if you're\nreaching the \"EXECUTE sql_insert\" a lot of times that might have something\nto do with it.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Dec 2014 12:05:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Hi,\n\n\nI can send a full dump of my database (< 2MB) if it is OK for you.\n\n\nThanks,\n\n\nA.\n\n\nOn Thursday 18 December 2014 12:05:45 Tom Lane wrote:\n> Alessandro Ipe <[email protected]> writes:\n> > Hi,\n> > I tried also with an upsert function\n> > CREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS \nvoid\n> > \n> > LANGUAGE plpgsql\n> > AS $$\n> > \n> > BEGIN\n> > EXECUTE sql_update;\n> > IF FOUND THEN\n> > \n> > RETURN;\n> > \n> > END IF;\n> > BEGIN\n> > \n> > EXECUTE sql_insert;\n> > EXCEPTION WHEN OTHERS THEN\n> > EXECUTE sql_update;\n> > END;\n> > \n> > RETURN;\n> > \n> > END;\n> > $$;\n> > with the same result on the memory used...\n> \n> If you want to provide a self-contained test case, possibly we could look\n> into it, but these fragmentary bits of what you're doing don't really\n> constitute an investigatable problem statement.\n> \n> I will note that EXCEPTION blocks aren't terribly cheap, so if you're\n> reaching the \"EXECUTE sql_insert\" a lot of times that might have something\n> to do with it.\n> \n> \t\t\tregards, tom lane\n\n\n\nHi,\n \n \nI can send a full dump of my database (< 2MB) if it is OK for you.\n \n \nThanks,\n \n \nA.\n \n \nOn Thursday 18 December 2014 12:05:45 Tom Lane wrote:\n> Alessandro Ipe <[email protected]> writes:\n> > Hi,\n> > I tried also with an upsert function\n> > CREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS void\n> > \n> > LANGUAGE plpgsql\n> > AS $$\n> > \n> > BEGIN\n> > EXECUTE sql_update;\n> > IF FOUND THEN\n> > \n> > RETURN;\n> > \n> > END IF;\n> > BEGIN\n> > \n> > EXECUTE sql_insert;\n> > EXCEPTION WHEN OTHERS THEN\n> > EXECUTE sql_update;\n> > END;\n> > \n> > RETURN;\n> > \n> > END;\n> > $$;\n> > with the same result on the memory used...\n> \n> If you want to provide a self-contained test case, possibly we could look\n> into it, but these fragmentary bits of what you're doing don't really\n> constitute an investigatable problem statement.\n> \n> I will note that EXCEPTION blocks aren't terribly cheap, so if you're\n> reaching the \"EXECUTE sql_insert\" a lot of times that might have something\n> to do with it.\n> \n> \t\t\tregards, tom lane",
"msg_date": "Thu, 18 Dec 2014 18:31:42 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Hi,\n\n\nI guess the memory consumption is depending on the size of my database, so \nonly giving a reduced version of it won't allow to hit the issue.\n\nThe pg_dumpall file of my database can be found at the address\nhttps://gerb.oma.be/owncloud/public.php?service=files&t=5e0e9e1bb06dce1d12c95662a9ee1c03\n\nThe queries causing the issue are given in files\n- tmp.OqOavPYbHa (with the new upsert_func function)\n- tmp.f60wlgEDWB (with WITH .. AS statement)\n\nI hope it will help. Thanks.\n\n\nRegards,\n\n\nA.\n\n\nOn Thursday 18 December 2014 12:05:45 Tom Lane wrote:\n> Alessandro Ipe <[email protected]> writes:\n> > Hi,\n> > I tried also with an upsert function\n> > CREATE FUNCTION upsert_func(sql_insert text, sql_update text) RETURNS void\n> > \n> > LANGUAGE plpgsql\n> > AS $$\n> > \n> > BEGIN\n> > EXECUTE sql_update;\n> > IF FOUND THEN\n> > \n> > RETURN;\n> > \n> > END IF;\n> > BEGIN\n> > \n> > EXECUTE sql_insert;\n> > EXCEPTION WHEN OTHERS THEN\n> > EXECUTE sql_update;\n> > END;\n> > \n> > RETURN;\n> > \n> > END;\n> > $$;\n> > with the same result on the memory used...\n> \n> If you want to provide a self-contained test case, possibly we could look\n> into it, but these fragmentary bits of what you're doing don't really\n> constitute an investigatable problem statement.\n> \n> I will note that EXCEPTION blocks aren't terribly cheap, so if you're\n> reaching the \"EXECUTE sql_insert\" a lot of times that might have something\n> to do with it.\n> \n> \t\t\tregards, tom lane\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Tue, 23 Dec 2014 12:56:03 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
},
{
"msg_contents": "Alessandro Ipe <[email protected]> writes:\n> I guess the memory consumption is depending on the size of my database, so \n> only giving a reduced version of it won't allow to hit the issue.\n\n> The pg_dumpall file of my database can be found at the address\n> https://gerb.oma.be/owncloud/public.php?service=files&t=5e0e9e1bb06dce1d12c95662a9ee1c03\n\n> The queries causing the issue are given in files\n> - tmp.OqOavPYbHa (with the new upsert_func function)\n> - tmp.f60wlgEDWB (with WITH .. AS statement)\n\nWell, the core of the problem here is that you've chosen to partition the\nMSG table at an unreasonably small grain: it's got 3711 child tables and\nit looks like you plan to add another one every day. For forty-some\nmegabytes worth of data, I'd have said you shouldn't be partitioning at\nall; for sure you shouldn't be partitioning like this. PG's inheritance\nmechanisms are only meant to cope with order-of-a-hundred child tables at\nmost. Moreover, the only good reason to partition is if you want to do\nbulk data management by, say, dropping the oldest partition every so\noften. It doesn't look like you're planning to do that at all, and I'm\nsure if you do, you don't need 1-day granularity of the drop.\n\nI'd recommend you either dispense with partitioning entirely (which would\nsimplify your life a great deal, since you'd not need all this hacky\npartition management code), or scale it back to something like one\npartition per year.\n\nHaving said that, it looks like the reason for the memory bloat is O(N^2)\nspace consumption in inheritance_planner() while trying to plan the\n\"UPDATE msg SET\" commands. We got rid of a leading term in that\nfunction's space consumption for many children awhile ago, but it looks\nlike you've found the next largest term :-(. I might be able to do\nsomething about that. In the meantime, if you want to stick with this\npartitioning design, couldn't you improve that code so the UPDATE is\nonly applied to the one child table it's needed for?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 23 Dec 2014 15:27:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Excessive memory used for INSERT"
},
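Concretely, Tom's last suggestion means having the load script name the one daily child table instead of the parent (a sketch; the "MSG_YYYY-MM-DD" naming is inferred from the primary-key names quoted earlier in the thread):

  -- the planner only has to consider this one table, not ~3700 children
  UPDATE "MSG_2012-12-03"
  SET tape = 'LTO5_020'
  WHERE slot = to_timestamp('201212032145', 'YYYYMMDDHH24MI') AND msg = 2;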
{
"msg_contents": "Hi,\n\n\nDoing the UPDATE on the child table (provided that the table does exist) as \nyou recommended solved all my memory consumption issue.\n\n\nThanks a lot,\n\n\nAlessandro.\n\n\nOn Tuesday 23 December 2014 15:27:41 Tom Lane wrote:\n> Alessandro Ipe <[email protected]> writes:\n> > I guess the memory consumption is depending on the size of my database, so\n> > only giving a reduced version of it won't allow to hit the issue.\n> > \n> > The pg_dumpall file of my database can be found at the address\n> > https://gerb.oma.be/owncloud/public.php?service=files&t=5e0e9e1bb06dce1d12\n> > c95662a9ee1c03\n> > \n> > The queries causing the issue are given in files\n> > - tmp.OqOavPYbHa (with the new upsert_func function)\n> > - tmp.f60wlgEDWB (with WITH .. AS statement)\n> \n> Well, the core of the problem here is that you've chosen to partition the\n> MSG table at an unreasonably small grain: it's got 3711 child tables and\n> it looks like you plan to add another one every day. For forty-some\n> megabytes worth of data, I'd have said you shouldn't be partitioning at\n> all; for sure you shouldn't be partitioning like this. PG's inheritance\n> mechanisms are only meant to cope with order-of-a-hundred child tables at\n> most. Moreover, the only good reason to partition is if you want to do\n> bulk data management by, say, dropping the oldest partition every so\n> often. It doesn't look like you're planning to do that at all, and I'm\n> sure if you do, you don't need 1-day granularity of the drop.\n> \n> I'd recommend you either dispense with partitioning entirely (which would\n> simplify your life a great deal, since you'd not need all this hacky\n> partition management code), or scale it back to something like one\n> partition per year.\n> \n> Having said that, it looks like the reason for the memory bloat is O(N^2)\n> space consumption in inheritance_planner() while trying to plan the\n> \"UPDATE msg SET\" commands. We got rid of a leading term in that\n> function's space consumption for many children awhile ago, but it looks\n> like you've found the next largest term :-(. I might be able to do\n> something about that. In the meantime, if you want to stick with this\n> partitioning design, couldn't you improve that code so the UPDATE is\n> only applied to the one child table it's needed for?\n> \n> \t\t\tregards, tom lane\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 06 Jan 2015 18:40:05 +0100",
"msg_from": "Alessandro Ipe <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Excessive memory used for INSERT"
}
] |
[
{
"msg_contents": "So, for my use case I simply need to search for a case insensitive\nsubstring. It need not be super exact. It seems like there are two ways I\ncan do this:\n\nCREATE INDEX idx_users_name ON users USING GIST(lower(name) gist_trgm_ops);\nSELECT * FROM users WHERE lower(name) LIKE '%john%';\n\nOr I can do it like this:\n\nCREATE INDEX idx_users_name ON users USING GIST(name gist_trgm_ops);\nSELECT * FROM users WHERE name % 'john';\n\nUnfortunately I cannot find any documentation on the trade-offs between\nthese two approaches. For my test dataset of 75K records the query speed\nseems pretty damn similar.\n\nSo, I guess my question is, what is the difference for querying and insert\nfor the two approaches?\n\nThanks!\n\nSo, for my use case I simply need to search for a case insensitive substring. It need not be super exact. It seems like there are two ways I can do this:CREATE INDEX idx_users_name ON users USING GIST(lower(name) gist_trgm_ops);SELECT * FROM users WHERE lower(name) LIKE '%john%';Or I can do it like this:CREATE INDEX idx_users_name ON users USING GIST(name gist_trgm_ops);SELECT * FROM users WHERE name % 'john';Unfortunately I cannot find any documentation on the trade-offs between these two approaches. For my test dataset of 75K records the query speed seems pretty damn similar. So, I guess my question is, what is the difference for querying and insert for the two approaches?Thanks!",
"msg_date": "Thu, 18 Dec 2014 09:12:16 -0800",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Question about trigram GIST index"
},
{
"msg_contents": "Robert DiFalco <[email protected]> writes:\n> So, for my use case I simply need to search for a case insensitive\n> substring. It need not be super exact. It seems like there are two ways I\n> can do this:\n\n> CREATE INDEX idx_users_name ON users USING GIST(lower(name) gist_trgm_ops);\n> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n\n> Or I can do it like this:\n\n> CREATE INDEX idx_users_name ON users USING GIST(name gist_trgm_ops);\n> SELECT * FROM users WHERE name % 'john';\n\nHm, I don't see anything in the pg_trgm docs suggesting that % is\ncase-insensitive. But in any case, I'd go with the former as being\nmore understandable to someone who knows standard SQL.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Dec 2014 12:18:52 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about trigram GIST index"
},
{
"msg_contents": "I know! I was surprised that % 'John' or % 'JOHN' or even % 'jOhn' all\nreturned the same result.\n\nBesides readability would there be any technical differences between a GIST\nindex that is lower or not and using LIKE vs. %?\n\nThanks!\n\n\nOn Thu, Dec 18, 2014 at 9:18 AM, Tom Lane <[email protected]> wrote:\n>\n> Robert DiFalco <[email protected]> writes:\n> > So, for my use case I simply need to search for a case insensitive\n> > substring. It need not be super exact. It seems like there are two ways I\n> > can do this:\n>\n> > CREATE INDEX idx_users_name ON users USING GIST(lower(name)\n> gist_trgm_ops);\n> > SELECT * FROM users WHERE lower(name) LIKE '%john%';\n>\n> > Or I can do it like this:\n>\n> > CREATE INDEX idx_users_name ON users USING GIST(name gist_trgm_ops);\n> > SELECT * FROM users WHERE name % 'john';\n>\n> Hm, I don't see anything in the pg_trgm docs suggesting that % is\n> case-insensitive. But in any case, I'd go with the former as being\n> more understandable to someone who knows standard SQL.\n>\n> regards, tom lane\n>\n\nI know! I was surprised that % 'John' or % 'JOHN' or even % 'jOhn' all returned the same result.Besides readability would there be any technical differences between a GIST index that is lower or not and using LIKE vs. %?Thanks!On Thu, Dec 18, 2014 at 9:18 AM, Tom Lane <[email protected]> wrote:Robert DiFalco <[email protected]> writes:\n> So, for my use case I simply need to search for a case insensitive\n> substring. It need not be super exact. It seems like there are two ways I\n> can do this:\n\n> CREATE INDEX idx_users_name ON users USING GIST(lower(name) gist_trgm_ops);\n> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n\n> Or I can do it like this:\n\n> CREATE INDEX idx_users_name ON users USING GIST(name gist_trgm_ops);\n> SELECT * FROM users WHERE name % 'john';\n\nHm, I don't see anything in the pg_trgm docs suggesting that % is\ncase-insensitive. But in any case, I'd go with the former as being\nmore understandable to someone who knows standard SQL.\n\n regards, tom lane",
"msg_date": "Thu, 18 Dec 2014 09:32:39 -0800",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about trigram GIST index"
},
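The case-insensitivity Robert observed comes from pg_trgm itself: the module is normally compiled with IGNORECASE defined, so trigrams are extracted from the lower-cased text, and both the % operator and a trigram index on name compare those. A sketch that makes this visible:

  SELECT similarity('John', 'JOHN');  -- 1, the two strings have identical trigram sets
  SELECT show_limit();                -- the similarity threshold % uses, 0.3 by default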
{
"msg_contents": "I'm not sure about the '%' operator, but I'm sure that the GIST index will\nnever be used in the\n\n SELECT * FROM users WHERE lower(name) LIKE '%john%';\n\nquery; it is used for left or right anchored search, such as 'john%' or\n'%john'.\n\nGiuseppe.\n-- \nGiuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it\n\nI'm not sure about the '%' operator, but I'm sure that the GIST index will never be used in the SELECT * FROM users WHERE lower(name) LIKE '%john%';query; it is used for left or right anchored search, such as 'john%' or '%john'.Giuseppe.-- Giuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Thu, 18 Dec 2014 19:00:00 +0100",
"msg_from": "Giuseppe Broccolo <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about trigram GIST index"
},
{
"msg_contents": "I'm pretty sure '%John%' uses the index.\n\nexplain analyze verbose SELECT name FROM wai_users WHERE lower(name) LIKE\n'%john%';\n\n QUERY PLAN\n\n------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on public.wai_users (cost=53.45..1345.46 rows=900\nwidth=14) (actual time=18.474..32.093 rows=1596 loops=1)\n Output: name\n Recheck Cond: (lower((wai_users.name)::text) ~~ '%john%'::text)\n -> Bitmap Index Scan on idx_user_name (cost=0.00..53.41 rows=900\nwidth=0) (actual time=18.227..18.227 rows=1596 loops=1)\n Index Cond: (lower((wai_users.name)::text) ~~ '%john%'::text)\n Total runtime: 33.662 ms\n(6 rows)\n\n\nOn Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <\[email protected]> wrote:\n>\n> I'm not sure about the '%' operator, but I'm sure that the GIST index will\n> never be used in the\n>\n> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n>\n> query; it is used for left or right anchored search, such as 'john%' or\n> '%john'.\n>\n> Giuseppe.\n> --\n> Giuseppe Broccolo - 2ndQuadrant Italy\n> PostgreSQL Training, Services and Support\n> [email protected] | www.2ndQuadrant.it\n>\n\nI'm pretty sure '%John%' uses the index.explain analyze verbose SELECT name FROM wai_users WHERE lower(name) LIKE '%john%'; QUERY PLAN ------------------------------------------------------------------------------------------------------------------------------ Bitmap Heap Scan on public.wai_users (cost=53.45..1345.46 rows=900 width=14) (actual time=18.474..32.093 rows=1596 loops=1) Output: name Recheck Cond: (lower((wai_users.name)::text) ~~ '%john%'::text) -> Bitmap Index Scan on idx_user_name (cost=0.00..53.41 rows=900 width=0) (actual time=18.227..18.227 rows=1596 loops=1) Index Cond: (lower((wai_users.name)::text) ~~ '%john%'::text) Total runtime: 33.662 ms(6 rows)On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <[email protected]> wrote:I'm not sure about the '%' operator, but I'm sure that the GIST index will never be used in the SELECT * FROM users WHERE lower(name) LIKE '%john%';query; it is used for left or right anchored search, such as 'john%' or '%john'.Giuseppe.-- Giuseppe Broccolo - 2ndQuadrant Italy\nPostgreSQL Training, Services and Support\[email protected] | www.2ndQuadrant.it",
"msg_date": "Thu, 18 Dec 2014 10:08:38 -0800",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about trigram GIST index"
},
{
"msg_contents": "On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <\[email protected]> wrote:\n>\n> I'm not sure about the '%' operator, but I'm sure that the GIST index will\n> never be used in the\n>\n> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n>\n> query; it is used for left or right anchored search, such as 'john%' or\n> '%john'.\n>\n\nThe point of the gist_trgm_ops operator is specifically to overcome that\nlimitation.\n\nIt is pretty awesome.\n\nCheers,\n\nJeff\n>\n>\n\nOn Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <[email protected]> wrote:I'm not sure about the '%' operator, but I'm sure that the GIST index will never be used in the SELECT * FROM users WHERE lower(name) LIKE '%john%';query; it is used for left or right anchored search, such as 'john%' or '%john'.The point of the gist_trgm_ops operator is specifically to overcome that limitation.It is pretty awesome.Cheers,Jeff",
"msg_date": "Thu, 18 Dec 2014 10:33:33 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about trigram GIST index"
},
{
"msg_contents": "Jeff, I'm not seeing that limitation.\n\nOn Thu, Dec 18, 2014 at 10:33 AM, Jeff Janes <[email protected]> wrote:\n>\n> On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <\n> [email protected]> wrote:\n>>\n>> I'm not sure about the '%' operator, but I'm sure that the GIST index\n>> will never be used in the\n>>\n>> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n>>\n>> query; it is used for left or right anchored search, such as 'john%' or\n>> '%john'.\n>>\n>\n> The point of the gist_trgm_ops operator is specifically to overcome that\n> limitation.\n>\n> It is pretty awesome.\n>\n> Cheers,\n>\n> Jeff\n>>\n>>\n\nJeff, I'm not seeing that limitation.On Thu, Dec 18, 2014 at 10:33 AM, Jeff Janes <[email protected]> wrote:On Thu, Dec 18, 2014 at 10:00 AM, Giuseppe Broccolo <[email protected]> wrote:I'm not sure about the '%' operator, but I'm sure that the GIST index will never be used in the SELECT * FROM users WHERE lower(name) LIKE '%john%';query; it is used for left or right anchored search, such as 'john%' or '%john'.The point of the gist_trgm_ops operator is specifically to overcome that limitation.It is pretty awesome.Cheers,Jeff",
"msg_date": "Thu, 18 Dec 2014 11:33:39 -0800",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Question about trigram GIST index"
},
{
"msg_contents": "Giuseppe Broccolo wrote:\n\n> I'm not sure about the '%' operator, but I'm sure that the GIST\n> index will never be used in the\n>\n> SELECT * FROM users WHERE lower(name) LIKE '%john%';\n>\n> query; it is used for left or right anchored search, such as\n> 'john%' or '%john'.\nIt *will* use a *trigram* index for a non-anchored search.\n\ntest=# create table words (word text not null);\nCREATE TABLE\ntest=# copy words from '/usr/share/dict/words';\nCOPY 99171\ntest=# CREATE EXTENSION pg_trgm;\nCREATE EXTENSION\ntest=# CREATE INDEX words_trgm ON words USING gist (word gist_trgm_ops);\nCREATE INDEX\ntest=# vacuum analyze words;\nVACUUM\ntest=# explain analyze select * from words where word like '%john%';\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on words (cost=4.36..40.24 rows=10 width=9) (actual time=17.758..17.772 rows=8 loops=1)\n Recheck Cond: (word ~~ '%john%'::text)\n Rows Removed by Index Recheck: 16\n Heap Blocks: exact=4\n -> Bitmap Index Scan on words_trgm (cost=0.00..4.36 rows=10 width=0) (actual time=17.708..17.708 rows=24 loops=1)\n Index Cond: (word ~~ '%john%'::text)\nPlanning time: 0.227 ms\nExecution time: 17.862 ms\n(8 rows)\n\ntest=# explain analyze select * from words where word ilike '%john%';\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------\nBitmap Heap Scan on words (cost=44.05..556.57 rows=1002 width=9) (actual time=12.151..12.197 rows=24 loops=1)\n Recheck Cond: (word ~~* '%john%'::text)\n Heap Blocks: exact=4\n -> Bitmap Index Scan on words_trgm (cost=0.00..43.80 rows=1002 width=0) (actual time=12.124..12.124 rows=24 loops=1)\n Index Cond: (word ~~* '%john%'::text)\nPlanning time: 0.392 ms\nExecution time: 12.252 ms\n(7 rows)\n\nNote that a trigram index is case-insensitive; doing a\ncase-sensitive search requires an extra Recheck node to eliminate\nthe rows that match in the case-insensitive index scan but have\ndifferent capitalization. Because of that case-sensitive is\nslower.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 18 Dec 2014 20:03:13 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Question about trigram GIST index"
}
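Worth noting alongside Kevin's GiST demonstration: pg_trgm also ships a GIN operator class, which is usually faster to query (at the cost of slower index builds and updates), so it can be the better fit for mostly-read workloads. A sketch on the same table:

  CREATE INDEX words_trgm_gin ON words USING gin (word gin_trgm_ops);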
] |
[
{
"msg_contents": "This may fall into the category of over-optimization but I've become\ncurious.\n\nI have a user table with about 14 columns that are all 1:1 data - so they\ncan't be normalized.\n\nWhen I insert a row all columns need to be set. But when I update, I\nsometimes only update 1-2 columns at a time. Does the number of columns\nimpact update speed?\n\nFor example:\n UPDATE users SET email = ? WHERE id = ?;\n\nI can easily break this up into logical tables like user_profile,\nuser_credential, user_contact_info, user_summary, etc with each table only\nhaving 1-4 columns. But with the multiple tables I would often be joining\nthem to bring back a collection of columns.\n\nI know I'm over thinking this but I'm curious of what the performance trade\noffs are for breaking up a table into smaller logically grouped tables.\n\nThanks.\n\nThis may fall into the category of over-optimization but I've become curious.I have a user table with about 14 columns that are all 1:1 data - so they can't be normalized.When I insert a row all columns need to be set. But when I update, I sometimes only update 1-2 columns at a time. Does the number of columns impact update speed?For example: UPDATE users SET email = ? WHERE id = ?;I can easily break this up into logical tables like user_profile, user_credential, user_contact_info, user_summary, etc with each table only having 1-4 columns. But with the multiple tables I would often be joining them to bring back a collection of columns. I know I'm over thinking this but I'm curious of what the performance trade offs are for breaking up a table into smaller logically grouped tables.Thanks.",
"msg_date": "Mon, 22 Dec 2014 12:53:03 -0800",
"msg_from": "Robert DiFalco <[email protected]>",
"msg_from_op": true,
"msg_subject": "Number of Columns and Update"
},
{
"msg_contents": "On 12/22/2014 10:53 PM, Robert DiFalco wrote:\n> This may fall into the category of over-optimization but I've become\n> curious.\n>\n> I have a user table with about 14 columns that are all 1:1 data - so they\n> can't be normalized.\n>\n> When I insert a row all columns need to be set. But when I update, I\n> sometimes only update 1-2 columns at a time. Does the number of columns\n> impact update speed?\n>\n> For example:\n> UPDATE users SET email = ? WHERE id = ?;\n\nYes, the number of columns in the table matters. The update is just as \nexpensive regardless of how many of the columns you update.\n\nWhen a row is updated, PostgreSQL creates a new version of the whole \nrow. The new row version takes more space when the table has more \ncolumns, leading to more bloating of the table, which generally slows \nthings down. In most applications the difference isn't big enough to \nmatter, but it can be significant if you have very wide rows, and you \nupdate a lot.\n\nPostgreSQL 9.4 made an improvement on this. In earlier versions, the new \nrow version was also included completely in the WAL record, which added \noverhead. In 9.4, any columns at the beginning or end of the row that \nare not modified are left out of the WAL record, as long as the new row \nversion is stored on the same page as the old one (which is common). For \nupdating a single column, or a few columns that are next to each other, \nthat's the same as saying that only the modified part of the row is \nWAL-logged.\n\n> I can easily break this up into logical tables like user_profile,\n> user_credential, user_contact_info, user_summary, etc with each table only\n> having 1-4 columns. But with the multiple tables I would often be joining\n> them to bring back a collection of columns.\n\nThat would help with the above-mentioned issues, but dealing with \nmultiple tables certainly adds a lot of overhead too. Most likely you're \nbetter off just having the single table, after all.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Dec 2014 23:40:13 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of Columns and Update"
},
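One way to put numbers on Heikki's point is to measure the WAL generated by a narrow UPDATE (a psql sketch with the 9.x function names; the users table and its columns are the hypothetical ones from this thread):

  SELECT pg_current_xlog_location() AS before_lsn \gset
  UPDATE users SET email = 'new@example.com' WHERE id = 1;
  -- bytes of WAL produced by that one statement
  SELECT pg_xlog_location_diff(pg_current_xlog_location(), :'before_lsn') AS wal_bytes;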
{
"msg_contents": "\nOn 12/22/2014 03:53 PM, Robert DiFalco wrote:\n> This may fall into the category of over-optimization but I've become \n> curious.\n>\n> I have a user table with about 14 columns that are all 1:1 data - so \n> they can't be normalized.\n>\n> When I insert a row all columns need to be set. But when I update, I \n> sometimes only update 1-2 columns at a time. Does the number of \n> columns impact update speed?\n>\n> For example:\n> UPDATE users SET email = ? WHERE id = ?;\n>\n> I can easily break this up into logical tables like user_profile, \n> user_credential, user_contact_info, user_summary, etc with each table \n> only having 1-4 columns. But with the multiple tables I would often be \n> joining them to bring back a collection of columns.\n>\n> I know I'm over thinking this but I'm curious of what the performance \n> trade offs are for breaking up a table into smaller logically grouped \n> tables.\n>\n>\n\nAn update rewrites the whole row, not just the updated columns.\n\nI think you are overthinking it.\n\ncheers\n\nandrew\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 22 Dec 2014 16:41:54 -0500",
"msg_from": "Andrew Dunstan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Number of Columns and Update"
}
] |
[
{
"msg_contents": "Hello, I´ve installed postgresql 9.1 on ubuntu 12.04 with pgpoolII-3.3.3 and\npgPoolAdmin.\n\nI´m trying to make a test with pgbench-tools to measure the performance of\npostgresql.\n\nSo I move to the directory where is pgbench-tools and configure the config\nfile.\n\nI try to execute this order:\n\nsudo -u postgres ./runset\n\nAfter this it appears a message \"Removing old pgbench tables\"\n\nFirst error message (seems not to be important) is: ERROR: table \"accounts\ndoes not exist\"\n\nAfter this it appears a message: VACUUM creating new pgbench tables\n\nAfter this \n\ncreating tables\n10000 tuples done\n20000 tuples done\n...\n100000 tuples done\n...\nvacuum...done.\nRun set #1 of 2 with 2 clients scale=1\nRunning tests using: psql -h localhost -U postgres -p 5432 pgbench\nStoring results using: psql -h localhost -U postgres -p 5432 pgbench\n\nAnd after this it comes \"the crash\":\n\nERROR: relation \"branches\" does not exist\nLINE 1: select count(*) for branches\nERROR: Attempt to determine database scale returned \"\", aborting\n\nit´s maybe and stupid issue and I´m not being able to solve it as i don´t\nhave a high level of knowledge on those systems.\n\nAny idea about what to try?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/trying-to-run-pgbench-tools-postgresql-ubuntu-ERROR-relation-branches-does-not-exist-tp5832477.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 30 Dec 2014 13:42:15 -0700 (MST)",
"msg_from": "cesar <[email protected]>",
"msg_from_op": true,
"msg_subject": "trying to run pgbench-tools postgresql ubuntu ERROR: relation\n \"branches\" does not exist"
},
{
"msg_contents": "On Wed, Dec 31, 2014 at 5:42 AM, cesar <[email protected]> wrote:\n> After this it appears a message \"Removing old pgbench tables\"\n> First error message (seems not to be important) is: ERROR: table \"accounts\n> does not exist\"\n> ERROR: relation \"branches\" does not exist\n> LINE 1: select count(*) for branches\n> ERROR: Attempt to determine database scale returned \"\", aborting\n\nThe tables \"accounts\" and \"branches\" refer to the table names in\npgbench shipped with Postgres 8.3 and older. It has been changed to\npgbench_* after that. So it seems that you are facing a version\nmismatch if this script uses pgbench 9.1.\n\n> Any idea about what to try?\nDiscussing this issue with the maintainers of pgbench-tools may be a\ngood thing as well:\nhttps://github.com/gregs1104/pgbench-tools/\nRegards,\n-- \nMichael\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Jan 2015 16:17:35 +0900",
"msg_from": "Michael Paquier <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: trying to run pgbench-tools postgresql ubuntu ERROR:\n relation \"branches\" does not exist"
},
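A quick way to confirm which naming convention the installed pgbench created (a sketch to run in the test database):

  SELECT tablename FROM pg_tables
  WHERE tablename IN ('branches', 'pgbench_branches');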
{
"msg_contents": "You´ve been very helpful for me. Solved after downloading pgbench tools from\ngithub https://github.com/gregs1104/pgbench-tools, it seems like I was using\nan old version of pgbench-tools. Thanks for your help! \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/trying-to-run-pgbench-tools-postgresql-ubuntu-ERROR-relation-branches-does-not-exist-tp5832477p5832629.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 1 Jan 2015 13:45:47 -0700 (MST)",
"msg_from": "cesar <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: trying to run pgbench-tools postgresql ubuntu ERROR: relation\n \"branches\" does not exist"
}
] |
[
{
"msg_contents": "I will soon be migrating to some recently acquired hardware and seek \ninput from those who have gone before.\n\nA quick overview: the dataset size is ~100GB, (~250-million tuples) with \na workload that consists of about 2/3 writes, mostly single record \ninserts into various indexed tables, and 1/3 reads. Queries per second \npeak around 2,000 and our application typically demands fast response - \nfor many of these queries the timeout is set to 2-seconds and the \napplication moves forward and recovers later if that is exceeded.\n\nAlthough by count they are minimal, every hour there are dozens both of \nimport and of analysis queries involving multiple tables and tens of \nthousands of records. These queries may take up to a few minutes on our \ncurrent hardware.\n\nOld hardware is 4-core, 24GB RAM, battery-backed RAID-10 with four 15k \ndrives.\n\nNew hardware is quite different. 2x10-core E5-2660v3 @2.6GHz, 128GB \nDDR4-2133 RAM and 800GB Intel DC P3700 NVMe PCIe SSD. In essence, the \ndataset will fit in RAM and will be backed by exceedingly fast storage.\n\nThis new machine is very different than any we've had before so any \ncurrent thinking on optimization would be appreciated. Do I leave \nindexes as is and evaluate which ones to drop later? Any recommendations \non distribution and/or kernels (and kernel tuning)? PostgreSQL tuning \nstarting points? Whatever comes to mind.\n\nThanks,\nSteve\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 09 Jan 2015 11:26:13 -0800",
"msg_from": "Steve Crawford <[email protected]>",
"msg_from_op": true,
"msg_subject": "New server optimization advice"
},
{
"msg_contents": "On Fri, Jan 9, 2015 at 4:26 PM, Steve Crawford\n<[email protected]> wrote:\n> New hardware is quite different. 2x10-core E5-2660v3 @2.6GHz, 128GB\n> DDR4-2133 RAM and 800GB Intel DC P3700 NVMe PCIe SSD. In essence, the\n> dataset will fit in RAM and will be backed by exceedingly fast storage.\n>\n> This new machine is very different than any we've had before so any current\n> thinking on optimization would be appreciated. Do I leave indexes as is and\n> evaluate which ones to drop later? Any recommendations on distribution\n> and/or kernels (and kernel tuning)? PostgreSQL tuning starting points?\n> Whatever comes to mind.\n\n\nThat's always a good idea (don't optimize prematurely).\n\nStill, you may want to tweak random_page_cost to bring it closer to\nseq's cost to get plans that are more suited to your exceedingly fast\nstorage (not to mention effective_cache_size, which should be a\ngiven).\n\nYou'll most likely be CPU-bound, so optimization will involve tweaking\ndata types.\n\nSince you mention lots of writes, I'd imagine you will also want to\ntweak shared_buffers and checkpoint_segments to adapt it to your NVM\ncard's buffering, and as with everything new, test or reasearch into\nthe card's crash behavior (ie: what happens when you pull the plug).\nI've heard of SSD storage solutions that got hopelessly corrupted with\npull the plug tests, so be careful with that, but you do certainly\nwant to know how this card would behave in a power outage.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 9 Jan 2015 16:48:11 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server optimization advice"
},
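To make those pointers concrete, a starting-point sketch for a 128GB RAM / NVMe box of that era; every value below is an assumption to validate under the real workload, not a recommendation:

  shared_buffers = 16GB              # then watch checkpoint and eviction behaviour
  effective_cache_size = 96GB        # most of RAM, since the dataset fits
  random_page_cost = 1.1             # close to seq_page_cost on fast SSD
  checkpoint_segments = 64           # pre-9.5 knob; raise if checkpoints come too often
  checkpoint_completion_target = 0.9
  work_mem = 32MB                    # per sort/hash node, so keep modest at ~2000 qps
  maintenance_work_mem = 1GB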
{
"msg_contents": "On Fri, Jan 9, 2015 at 1:48 PM, Claudio Freire <[email protected]> wrote:\n> On Fri, Jan 9, 2015 at 4:26 PM, Steve Crawford\n> <[email protected]> wrote:\n>> New hardware is quite different. 2x10-core E5-2660v3 @2.6GHz, 128GB\n>> DDR4-2133 RAM and 800GB Intel DC P3700 NVMe PCIe SSD. In essence, the\n>> dataset will fit in RAM and will be backed by exceedingly fast storage.\n>>\n>> This new machine is very different than any we've had before so any current\n>> thinking on optimization would be appreciated. Do I leave indexes as is and\n>> evaluate which ones to drop later? Any recommendations on distribution\n>> and/or kernels (and kernel tuning)? PostgreSQL tuning starting points?\n>> Whatever comes to mind.\n>\n>\n> That's always a good idea (don't optimize prematurely).\n>\n> Still, you may want to tweak random_page_cost to bring it closer to\n> seq's cost to get plans that are more suited to your exceedingly fast\n> storage (not to mention effective_cache_size, which should be a\n> given).\n>\n> You'll most likely be CPU-bound, so optimization will involve tweaking\n> data types.\n>\n> Since you mention lots of writes, I'd imagine you will also want to\n> tweak shared_buffers and checkpoint_segments to adapt it to your NVM\n> card's buffering, and as with everything new, test or reasearch into\n> the card's crash behavior (ie: what happens when you pull the plug).\n> I've heard of SSD storage solutions that got hopelessly corrupted with\n> pull the plug tests, so be careful with that, but you do certainly\n> want to know how this card would behave in a power outage.\n\nThe intel DC branded SSD drives so far have an excellent safety record\n(for example see here: http://lkcl.net/reports/ssd_analysis.html).\nShould still test it carefully though, hopefully that will validate\nprevious results.\n\nFor fast SSD, I'd also set effective_io_concurrency to 256. This only\naffects bitmap heap scans but can double or triple performance of\nthem. See: http://www.postgresql.org/message-id/CAHyXU0yiVvfQAnR9cyH=HWh1WbLRsioe=mzRJTHwtr=2azsTdQ@mail.gmail.com\n\nIt'd be nice if you could bench and report some numbers for this\ndevice, particularly:\nlarge scale (at least 2x>ram) pgbench select only test (-S), one for\nsingle client, one for many clients\nlarge scale pgbench standard test, single client, many clients\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 12 Jan 2015 10:02:12 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: New server optimization advice"
}
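For anyone reproducing the numbers Merlin asks for, the runs would look roughly like this (a sketch; one pgbench scale unit is about 15MB, so a scale around 17000 gives a dataset near 2x the 128GB of RAM):

  pgbench -i -s 17000 bench            # initialize at ~2x RAM
  pgbench -S -c 1 -T 300 bench         # select-only, single client
  pgbench -S -c 32 -j 8 -T 300 bench   # select-only, many clients
  pgbench -c 32 -j 8 -T 300 bench      # standard TPC-B-like mix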
] |
[
{
"msg_contents": "Hello,\n\nIf possible, I would need your help/suggestions for this problem :\n\nI'm experiencing a serious performance problem using postgresql foreign data wrapper.\nIn particular, a simple query performed via fdw lasts 80 times more than the same query performed directly on the local server.\n\nHere are the details :\n\nI have two postgresql servers both located on the same server farm, based on Vmware Esxi 5.1.\nThey're communicating directly on the same subnet, so network should't interfere with performance.\nOn the first server, which I'll call LOCAL, I defined a postgres_fdw foreign server pointing to the other server, which I'll call REMOTE.\nBoth servers are running Postgresql 9.3.5 (see bottom for complete details on server configuration)\n\nOn the local server I defined a foreign table \"v_mdn_colli_testata\" pointing to the remote server.\nThe foreign table is defined in this way :\n\nCREATE FOREIGN TABLE logimat.v_mdn_colli_testata\n (id bigint ,\n collo character varying(20) ,\n stato character(1) ,\n id_spedizione bigint ,\n id_es_rientro bigint ,\n peso numeric(15,3) ,\n volume numeric(15,3) ,\n ordine character varying(20) ,\n data timestamp without time zone ,\n capoconto text ,\n conto text ,\n causale character varying(10) ,\n descrizione character varying(50) ,\n tipo character varying(10) ,\n capoconto_v text ,\n conto_v text ,\n magazzino character varying(5) ,\n tipo_spedizione integer ,\n data_spedizione date ,\n consegna_spedizione character varying(2) ,\n documento character varying(20) ,\n data_documento timestamp without time zone ,\n borderau character varying(15) ,\n data_borderau timestamp without time zone )\n SERVER fdw_remote_server\n OPTIONS (schema_name 'public', table_name 'v_mdn_colli_testata');\nALTER FOREIGN TABLE logimat.v_mdn_colli_testata\n OWNER TO dba;\n\nThe table pointed on the remote server by the foreign table is actually a view defined in this way :\n\nCREATE OR REPLACE VIEW v_mdn_colli_testata AS\n SELECT uds.id,\n uds.codice AS collo,\n uds.flag1 AS stato,\n uds.id_spedizione,\n uds.id_reso AS id_es_rientro,\n uds.peso_netto AS peso,\n uds.volume,\n o.ordine,\n o.data,\n \"substring\"(o.destinatario::text, 1, 6) AS capoconto,\n \"substring\"(o.destinatario::text, 7, 7) AS conto,\n o.causale,\n o.desc_causale AS descrizione,\n o.tipo_ordine AS tipo,\n \"substring\"(o.corriere::text, 1, 6) AS capoconto_v,\n \"substring\"(o.corriere::text, 7, 7) AS conto_v,\n o.magazzino_prespedizione AS magazzino,\n o.priorita_codice AS tipo_spedizione,\n o.priorita_data AS data_spedizione,\n o.priorita_consegna AS consegna_spedizione,\n doc.ddt AS documento,\n doc.data_ddt AS data_documento,\n doc.borderau,\n doc.data_borderau\n FROM ordine_allestimento o\n LEFT JOIN allestimento al ON o.azienda::text = al.azienda::text AND o.divisione::text = al.divisione::text AND o.sezione::text = al.sezione::text AND o.ordine::text = al.ordine::text AND o.riga::text = al.riga::text\n LEFT JOIN azienda az ON az.codice::text = o.azienda::text\n LEFT JOIN soggetto s ON s.id_azienda = az.id AND s.soggetto::text = o.destinatario::text\n LEFT JOIN lista_allestimento_riga lr ON lr.id = al.id_riga_lista\n LEFT JOIN lista_allestimento la ON la.id = lr.id_lista\n LEFT JOIN documento_riga dr ON dr.id_riga_lista = lr.id\n LEFT JOIN documento doc ON doc.id = dr.id_documento\n LEFT JOIN packing_list_riga pr ON pr.id_riga_lista = lr.id\n LEFT JOIN oper_pian op ON op.id = pr.id_oper_pian\n LEFT JOIN uds ON uds.id = pr.id_uds\n LEFT JOIN packing_list pl ON pl.id = 
uds.id_packing_list\n WHERE la.id_tipo_lista = 147916620 AND uds.id IS NOT NULL;\n\n\nAnd here's is the problem :\n\non the REMOTE SERVER, the query :\n\nselect * from public.v_mdn_colli_testata where collo='U0019502'\n\nhas an execution time slightly greater than 100 ms :\n\nINFOLOG=# select * from public.v_mdn_colli_testata where collo='U0019502';\n-[ RECORD 1 ]-------+------------------------------\nid | 165999157\ncollo | U0019502\nstato | P\nid_spedizione |\nid_es_rientro |\npeso | 0.500\nvolume | 0.000\nordine | 001824\ndata | 2015-01-08 16:56:03.714\ncapoconto | 000100\nconto | 0001401\ncausale | PMP\ndescrizione | INVIO MATERIALE PUBBLICITARIO\ntipo | ORDT\ncapoconto_v | 000200\nconto_v | 0006128\nmagazzino | 00039\ntipo_spedizione | 0\ndata_spedizione |\nconsegna_spedizione |\ndocumento | 00000026\ndata_documento | 2015-01-09 15:54:17.706\nborderau | 00003212\ndata_borderau | 2015-01-09 00:00:00\n\nTime: 104.907 ms\n***************************************************************\n\non the LOCAL server instead, the same query performed on the foreign table lasts much longer :\n\nmdn=# select * from logimat.v_mdn_colli_testata where collo='U0019502';\n-[ RECORD 1 ]-------+------------------------------\nid | 165999157\ncollo | U0019502\nstato | P\nid_spedizione |\nid_es_rientro |\npeso | 0.500\nvolume | 0.000\nordine | 001824\ndata | 2015-01-08 16:56:03.714\ncapoconto | 000100\nconto | 0001401\ncausale | PMP\ndescrizione | INVIO MATERIALE PUBBLICITARIO\ntipo | ORDT\ncapoconto_v | 000200\nconto_v | 0006128\nmagazzino | 00039\ntipo_spedizione | 0\ndata_spedizione |\nconsegna_spedizione |\ndocumento | 00000026\ndata_documento | 2015-01-09 15:54:17.706\nborderau | 00003212\ndata_borderau | 2015-01-09 00:00:00\n\nTime: 9887.533 ms\n***************************************************************\n\nBoth query were issued repeatedly to get rid of disk access and database connection overhead time.\n\nActivating duration and statement logging on the remote server I can see that the query issued through the fdw from the LOCAL SERVER\nis actually performed by opening a cursor :\n\n2015-01-14 13:53:31 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: statement: START TRANSACTION ISOLATION LEVEL REPEATABLE READ\n2015-01-14 13:53:31 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: execute <unnamed>: DECLARE c1 CURSOR FOR SELECT id, collo, stato, id_spedizione, id_es_rientro, peso, volume, ordine, data, capoconto, conto, causale, descrizione, tipo, capoconto_v, conto_v, magazzino, tipo_spedizione, data_spedizione, consegna_spedizione, documento, data_documento, borderau, data_borderau FROM public.v_mdn_colli_testata WHERE ((collo = 'U0019502'::text))\n2015-01-14 13:53:31 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: statement: FETCH 100 FROM c1\n2015-01-14 13:53:41 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: duration: 9887.533 ms\n2015-01-14 13:53:41 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: statement: CLOSE c1\n2015-01-14 13:53:41 GMT 192.168.2.31(58031) mdn INFOLOG 0 54b64297.327c - LOG: statement: COMMIT TRANSACTION\n\n\nMy questions are :\n\nIS THIS THE EXPECT BEHAVIOUR ?\n IS THERE ANY WAY TO MODIFY IT AND IMPROVE THE PERFORMANCE ?\n\n I hope everything is clear\nTHANKS VERY MUCH IN ADVANCE\n\nMarco\n\n\nI include information about my environment :\n\nHARDWARE:\n---------\nBoth servers are virtual machines running on Vmware Esxi 5.1\nLOCAL : 2 vCpu Intel Xeon E5-2690v2 3.00 Ghz , 6GB RAM\nREMOTE : 2 cpu, Intel Xeon E7-2860 2.27 
GHz , 6GB RAM\n\nStorage on SAN-based datastore\n\n\nPOSTGRESQL VERSION :\n---------------------\n\nLOCAL SERVER : PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit\nREMOTE SERVER: PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3), 64-bit\n\nPostgresql compiled from source with :\n\nOPTIONS :\n ./configure --with-python --with-gssapi --with-krb-srvnam=POSTGRES\nMODULES :\n - pg_upgrade\n - adminpack\n - pg_upgrade_support\n - pgrowlocks\n - pg_archivecleanup\n - pg_test_fsync\n - pg_buffercache\n - postgres_fdw\n - pg_buffercache.sql\n - itcodes/italian_codes\n\nPOSTGRESQL CONFIGURATION :\n---------------------------\n\nLOCAL SERVER:\n\nmdn=# SELECT name, current_setting(name), source\nmdn-# FROM pg_settings\nmdn-# WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n--------------------------------+--------------------------------+----------------------\napplication_name | psql | client\nautovacuum | on | configuration file\nautovacuum_max_workers | 7 | configuration file\nautovacuum_naptime | 10min | configuration file\nautovacuum_vacuum_cost_delay | 20ms | configuration file\nautovacuum_vacuum_cost_limit | 200 | configuration file\nbytea_output | escape | configuration file\ncheckpoint_completion_target | 0.8 | configuration file\ncheckpoint_segments | 32 | configuration file\ncheckpoint_timeout | 10min | configuration file\ncheckpoint_warning | 30s | configuration file\nclient_encoding | UTF8 | client\nclient_min_messages | warning | configuration file\ndebug_pretty_print | off | configuration file\ndebug_print_parse | off | configuration file\ndebug_print_plan | off | configuration file\ndebug_print_rewritten | off | configuration file\ndefault_statistics_target | 200 | configuration file\neffective_cache_size | 5870MB | configuration file\nfsync | on | configuration file\nfull_page_writes | on | configuration file\nkrb_server_keyfile | /usr/local/pgconf/PGpgsviltab | configuration file\nkrb_srvname | POSTGRES | configuration file\nlc_messages | en_US.UTF-8 | configuration file\nlc_monetary | en_US.UTF-8 | configuration file\nlc_numeric | en_US.UTF-8 | configuration file\nlc_time | en_US.UTF-8 | configuration file\nlisten_addresses | * | configuration file\nlog_autovacuum_min_duration | 1s | configuration file\nlog_checkpoints | on | configuration file\nlog_connections | off | configuration file\nlog_destination | stderr,syslog | configuration file\nlog_directory | /dbms/logs | configuration file\nlog_disconnections | off | configuration file\nlog_duration | off | configuration file\nlog_error_verbosity | default | configuration file\nlog_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\nlog_line_prefix | %t %r %u %d %x %c - | configuration file\nlog_min_duration_statement | 6s | configuration file\nlog_min_error_statement | error | configuration file\nlog_min_messages | warning | configuration file\nlog_rotation_age | 1d | configuration file\nlog_rotation_size | 0 | configuration file\nlog_statement | none | configuration file\nlog_truncate_on_rotation | off | configuration file\nlogging_collector | on | configuration file\nmaintenance_work_mem | 300MB | configuration file\nmax_connections | 100 | configuration file\nmax_stack_depth | 2MB | environment variable\nport | 5432 | configuration file\nrandom_page_cost | 2 | configuration file\nshared_buffers | 1GB | configuration file\nsuperuser_reserved_connections | 3 | configuration 
file\nsynchronous_commit | off | configuration file\nsyslog_facility | local1 | configuration file\nsyslog_ident | postgres | configuration file\ntemp_buffers | 8MB | configuration file\nTimeZone | Europe/Rome | configuration file\ntrack_activities | on | configuration file\ntrack_counts | on | configuration file\nvacuum_cost_delay | 0 | configuration file\nvacuum_cost_limit | 200 | configuration file\nwal_buffers | 16MB | configuration file\nwork_mem | 12MB | configuration file\n(64 rows)\n\n\nREMOTE SERVER:\n\nINFOLOG=# SELECT name, current_setting(name), source\nINFOLOG-# FROM pg_settings\nINFOLOG-# WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n--------------------------------+-----------------------------------------------+----------------------\napplication_name | psql | client\narchive_command | /usr/local/bin/pg_wal_archive_script.sh %p %f | configuration file\narchive_mode | on | configuration file\narchive_timeout | 0 | configuration file\nautovacuum | on | configuration file\nautovacuum_max_workers | 7 | configuration file\nautovacuum_naptime | 10min | configuration file\nautovacuum_vacuum_cost_delay | 20ms | configuration file\nautovacuum_vacuum_cost_limit | 200 | configuration file\nbytea_output | escape | configuration file\ncheckpoint_completion_target | 0.8 | configuration file\ncheckpoint_segments | 32 | configuration file\ncheckpoint_timeout | 10min | configuration file\nclient_encoding | UTF8 | client\nclient_min_messages | warning | configuration file\ndebug_pretty_print | off | configuration file\ndebug_print_parse | off | configuration file\ndebug_print_plan | off | configuration file\ndebug_print_rewritten | off | configuration file\ndefault_statistics_target | 200 | configuration file\neffective_cache_size | 5870MB | configuration file\nfsync | on | configuration file\nfull_page_writes | on | configuration file\nlc_messages | en_US.UTF-8 | configuration file\nlc_monetary | en_US.UTF-8 | configuration file\nlc_numeric | en_US.UTF-8 | configuration file\nlc_time | en_US.UTF-8 | configuration file\nlisten_addresses | * | configuration file\nlog_autovacuum_min_duration | 1s | configuration file\nlog_checkpoints | on | configuration file\nlog_connections | off | configuration file\nlog_destination | stderr,syslog | configuration file\nlog_directory | /dbms/logs | configuration file\nlog_disconnections | off | configuration file\nlog_duration | off | configuration file\nlog_error_verbosity | default | configuration file\nlog_filename | postgresql-%Y-%m-%d_%H%M%S.log | configuration file\nlog_line_prefix | %t %r %u %d %x %c - | configuration file\nlog_min_duration_statement | 6s | configuration file\nlog_min_error_statement | error | configuration file\nlog_min_messages | error | configuration file\nlog_rotation_age | 1d | configuration file\nlog_rotation_size | 0 | configuration file\nlog_statement | none | configuration file\nlog_truncate_on_rotation | off | configuration file\nlogging_collector | on | configuration file\nmaintenance_work_mem | 300MB | configuration file\nmax_connections | 250 | configuration file\nmax_stack_depth | 2MB | environment variable\nmax_wal_senders | 5 | configuration file\nport | 5432 | configuration file\nrandom_page_cost | 2 | configuration file\nshared_buffers | 1GB | configuration file\nsuperuser_reserved_connections | 6 | configuration file\nsynchronous_commit | off | configuration file\nsyslog_facility | local4 | configuration file\nsyslog_ident | postgres | configuration file\ntemp_buffers | 8MB | 
configuration file\ntrack_activities | on | configuration file\ntrack_counts | on | configuration file\nvacuum_cost_delay | 0 | configuration file\nvacuum_cost_limit | 200 | configuration file\nwal_buffers | 16MB | configuration file\nwal_keep_segments | 10 | configuration file\nwal_level | hot_standby | configuration file\nwork_mem | 12MB | configuration file\n(66 rows)\n\n\nOPERATING SYSTEM :\n----------------------\n\nLOCAL SERVER :\n\nCentos 6.4\nuname -a\nLinux pg64test1.manord.com 2.6.32-358.14.1.el6.x86_64 #1 SMP Tue Jul 16 23:51:20 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n\nREMOTE SERVER :\n\nCentos 6.5\nuname -a\nLinux pg64infolog.manord.com 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux",
"msg_date": "Wed, 14 Jan 2015 16:48:10 +0000",
"msg_from": "\"Cassiano, Marco\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Performance of Postgresql Foreign Data Wrapper"
},
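[Editor's note: one possible explanation, offered as a guess rather than something established in this thread. postgres_fdw fetches rows through a cursor, and the remote planner costs cursor queries on the assumption that only a fraction of the result will be fetched (the cursor_tuple_fraction setting, default 0.1), so the cursor can get a very different plan than the interactive query. A sketch of the experiment, applied on the REMOTE server; because the fdw opens its own remote session, a plain SET in psql will not reach it, so the setting is attached to the login role instead:

    -- "dba" is taken from the OWNER TO clause above and is only a guess
    -- at the role the user mapping actually connects as
    ALTER ROLE dba SET cursor_tuple_fraction = 1.0;
    -- then re-run the foreign-table query from the LOCAL server and compare timings]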
{
"msg_contents": "On 1/14/15 10:48 AM, Cassiano, Marco wrote:\n> Both query were issued repeatedly to get rid of disk access and database\n> connection overhead time.\n>\n> Activating duration and statement logging on the remote server I can see\n> that the query issued through the fdw from the LOCAL SERVER\n>\n> is actually performed by opening a cursor :\n\nI don't think the cursor is the issue here, but can you try running \nthose same commands directly on the remote server to make sure?\n\nIt looks like it's the fetch itself that's slow, which makes me wonder \nif there's some network or other problem.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 20:01:16 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Performance of Postgresql Foreign Data Wrapper"
}
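[Editor's note: the direct reproduction Jim asks for, reconstructed from the remote server's log quoted above; run it in psql on the REMOTE server:

    START TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    -- the fdw names every column explicitly; SELECT * is equivalent for this view
    DECLARE c1 CURSOR FOR
        SELECT * FROM public.v_mdn_colli_testata WHERE collo = 'U0019502';
    FETCH 100 FROM c1;
    CLOSE c1;
    COMMIT;

If the FETCH alone takes ~10 seconds here as well, the slowness is in the plan the remote server picks for the cursor, not in the network or in the fdw machinery itself.]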
] |
[
{
"msg_contents": "Hi All\n\nI thought 'shared_buffers' sets how much memory that is dedicated to\nPostgreSQL to use for caching data, therefore not available to other\napplications.\n\nHowever, as shown in the following screenshots, The server (CentOS 6.6\n64bit) has 64GB of RAM, and 'shared_buffer' is set to 32GB, but the\nfree+buffer+cache is 60GB.\n\nShouldn't the maximum value for free+buffer+cache be 32GB ( 64 - 32)?\nIs 'shared_buffers' pre allocated to Postgres, and Postgres only?\n\nThanks\nHuan\n\n\n[image: Inline images 2]\n\n[image: Inline images 1]",
"msg_date": "Thu, 15 Jan 2015 22:30:55 +1100",
"msg_from": "Huan Ruan <[email protected]>",
"msg_from_op": true,
"msg_subject": "shared_buffers vs Linux file cache"
},
{
"msg_contents": "> From: Huan Ruan <[email protected]>\n>To: [email protected] \n>Sent: Thursday, 15 January 2015, 11:30\n>Subject: [PERFORM] shared_buffers vs Linux file cache\n> \n>\n>\n>Hi All\n>\n>\n>I thought 'shared_buffers' sets how much memory that is dedicated to PostgreSQL to use for caching data, therefore not available to other applications.\n>\n>\n>However, as shown in the following screenshots, The server (CentOS 6.6 64bit) has 64GB of RAM, and 'shared_buffer' is set to 32GB, but the free+buffer+cache is 60GB. \n>\n>\n>Shouldn't the maximum value for free+buffer+cache be 32GB ( 64 - 32)?\n>Is 'shared_buffers' pre allocated to Postgres, and Postgres only?\n\n>\n\nI've not looked at the images, but I think you're getting PostgreSQL shared_buffers and the OS buffercache mixed up; they are not the same.\n\nPostgreSQL shared_buffers is specific to postgres, whereas the OS buffercache will just use free memory to cache data pages from disk, and this is what you're seeing.\n\nSome reading for you: \nhttp://www.tldp.org/LDP/sag/html/buffer-cache.html\n\nGlyn\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jan 2015 17:10:31 +0000 (UTC)",
"msg_from": "Glyn Astill <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers vs Linux file cache"
},
{
"msg_contents": "On Thu, Jan 15, 2015 at 3:30 AM, Huan Ruan <[email protected]> wrote:\n\n> Hi All\n>\n> I thought 'shared_buffers' sets how much memory that is dedicated to\n> PostgreSQL to use for caching data, therefore not available to other\n> applications.\n>\n> However, as shown in the following screenshots, The server (CentOS 6.6\n> 64bit) has 64GB of RAM, and 'shared_buffer' is set to 32GB, but the\n> free+buffer+cache is 60GB.\n>\n> Shouldn't the maximum value for free+buffer+cache be 32GB ( 64 - 32)?\n> Is 'shared_buffers' pre allocated to Postgres, and Postgres only?\n>\n\nWhile PostgreSQL has reserves the right to use 32GB, as long as PostgreSQL\nhas not actually dirtied that RAM yet, then the kernel is free to keep\nusing it to cache files.\n\n\nCheers,\n\nJeff\n\nOn Thu, Jan 15, 2015 at 3:30 AM, Huan Ruan <[email protected]> wrote:Hi AllI thought 'shared_buffers' sets how much memory that is dedicated to PostgreSQL to use for caching data, therefore not available to other applications.However, as shown in the following screenshots, The server (CentOS 6.6 64bit) has 64GB of RAM, and 'shared_buffer' is set to 32GB, but the free+buffer+cache is 60GB. Shouldn't the maximum value for free+buffer+cache be 32GB ( 64 - 32)?Is 'shared_buffers' pre allocated to Postgres, and Postgres only?While PostgreSQL has reserves the right to use 32GB, as long as PostgreSQL has not actually dirtied that RAM yet, then the kernel is free to keep using it to cache files. Cheers,Jeff",
"msg_date": "Thu, 15 Jan 2015 14:22:17 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers vs Linux file cache"
},
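[Editor's note: a quick way to see how much of the 32GB arena has actually been populated, which is what determines how much physical memory the kernel has had to surrender; a sketch assuming the contrib pg_buffercache extension is available on this server:

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    -- buffers with a NULL relfilenode are allocated but have never held a page
    SELECT count(relfilenode) AS buffers_in_use,
           pg_size_pretty(count(relfilenode) * 8192) AS in_use,  -- assumes 8kB blocks
           count(*) AS buffers_total
    FROM pg_buffercache;]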
{
"msg_contents": "Jeff Janes <[email protected]> writes:\n> On Thu, Jan 15, 2015 at 3:30 AM, Huan Ruan <[email protected]> wrote:\n>> I thought 'shared_buffers' sets how much memory that is dedicated to\n>> PostgreSQL to use for caching data, therefore not available to other\n>> applications.\n\n> While PostgreSQL has reserves the right to use 32GB, as long as PostgreSQL\n> has not actually dirtied that RAM yet, then the kernel is free to keep\n> using it to cache files.\n\nAnother thing to keep in mind is that, even if Postgres *has* used the\nRAM, the kernel might decide to swap parts of it out if it's not being\nused heavily. This is pretty disastrous from a performance standpoint,\nso it's advisable to not make shared_buffers very much larger than what\nyour application will keep \"hot\".\n\nIdeally we'd lock the shared buffer arena into RAM to prevent that,\nbut such facilities are often unavailable or restricted to root.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jan 2015 18:24:05 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers vs Linux file cache"
},
{
"msg_contents": "Thanks very much, Glyn, Jeff, and Tom. That was very clearly explained.\n\nA related case, see the following top dump. The Postgres process is using\n87g residential memory, which I thought was the physical memory consumed by\na process that can't be shared with others. While, the free+cached is about\n155gb. But, (87 + 155) is bigger than the total available 198g RAM. Does\nthis mean some of the residential memory used by Postgres is actually\nshareable to others?\n\n\n>> Mem: 198311880k total, 183836408k used, 14475472k free, 8388k buffers\n>> Swap: 4194300k total, 314284k used, 3880016k free, 141105408k cached\n>>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 15338 postgres 20 0 97.9g 87g 87g S 0.3 46.4 21:47.44\n>> postgres: checkpointer process\n>> 27473 postgres 20 0 98.1g 29g 29g S 0.0 15.8 2:14.93\n>> postgres: xxxx idle\n>> 4710 postgres 20 0 98.1g 24g 23g S 0.0 12.7 1:17.41\n>> postgres: xxxx idle\n\n>> 26587 postgres 20 0 98.0g 15g 15g S 0.0 8.0 1:21.24\n\n\n>\n>\n\nThanks very much, Glyn, Jeff, and Tom. That was very clearly explained.A related case, see the following top dump. The Postgres process is using 87g residential memory, which I thought was the physical memory consumed by a process that can't be shared with others. While, the free+cached is about 155gb. But, (87 + 155) is bigger than the total available 198g RAM. Does this mean some of the residential memory used by Postgres is actually shareable to others? >> Mem: 198311880k total, 183836408k used, 14475472k free, 8388k buffers\n>> Swap: 4194300k total, 314284k used, 3880016k free, 141105408k cached\n>>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 15338 postgres 20 0 97.9g 87g 87g S 0.3 46.4 21:47.44 \n>> postgres: checkpointer process\n>> 27473 postgres 20 0 98.1g 29g 29g S 0.0 15.8 2:14.93 \n>> postgres: xxxx idle\n>> 4710 postgres 20 0 98.1g 24g 23g S 0.0 12.7 1:17.41 \n>> postgres: xxxx idle >> 26587 postgres 20 0 98.0g 15g 15g S 0.0 8.0 1:21.24",
"msg_date": "Fri, 16 Jan 2015 14:32:03 +1100",
"msg_from": "Huan Ruan <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: shared_buffers vs Linux file cache"
},
{
"msg_contents": "Huan,\n\nResidential memory is part of the process memory that is now swapped and is\nin RAM. This includes also memory shared with other processes so sum of RES\nfor all processes may be greater that total physical memory.\n\nI recommend this article\nhttp://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/ to get\nbetter understanding how linux manages memory.\n\nRegards,\nRoman\n\nOn Fri, Jan 16, 2015 at 5:32 AM, Huan Ruan <[email protected]> wrote:\n\n> Thanks very much, Glyn, Jeff, and Tom. That was very clearly explained.\n>\n> A related case, see the following top dump. The Postgres process is using\n> 87g residential memory, which I thought was the physical memory consumed by\n> a process that can't be shared with others. While, the free+cached is about\n> 155gb. But, (87 + 155) is bigger than the total available 198g RAM. Does\n> this mean some of the residential memory used by Postgres is actually\n> shareable to others?\n>\n>\n> >> Mem: 198311880k total, 183836408k used, 14475472k free, 8388k buffers\n> >> Swap: 4194300k total, 314284k used, 3880016k free, 141105408k cached\n> >>\n> >> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> >> 15338 postgres 20 0 97.9g 87g 87g S 0.3 46.4 21:47.44\n> >> postgres: checkpointer process\n> >> 27473 postgres 20 0 98.1g 29g 29g S 0.0 15.8 2:14.93\n> >> postgres: xxxx idle\n> >> 4710 postgres 20 0 98.1g 24g 23g S 0.0 12.7 1:17.41\n> >> postgres: xxxx idle\n>\n> >> 26587 postgres 20 0 98.0g 15g 15g S 0.0 8.0 1:21.24\n>\n>\n>>\n>>\n>\n>\n>\n\nHuan,Residential memory is part of the process memory that is now swapped and is in RAM. This includes also memory shared with other processes so sum of RES for all processes may be greater that total physical memory.I recommend this article http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/ to get better understanding how linux manages memory.Regards,RomanOn Fri, Jan 16, 2015 at 5:32 AM, Huan Ruan <[email protected]> wrote:Thanks very much, Glyn, Jeff, and Tom. That was very clearly explained.A related case, see the following top dump. The Postgres process is using 87g residential memory, which I thought was the physical memory consumed by a process that can't be shared with others. While, the free+cached is about 155gb. But, (87 + 155) is bigger than the total available 198g RAM. Does this mean some of the residential memory used by Postgres is actually shareable to others? >> Mem: 198311880k total, 183836408k used, 14475472k free, 8388k buffers\n>> Swap: 4194300k total, 314284k used, 3880016k free, 141105408k cached\n>>\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 15338 postgres 20 0 97.9g 87g 87g S 0.3 46.4 21:47.44 \n>> postgres: checkpointer process\n>> 27473 postgres 20 0 98.1g 29g 29g S 0.0 15.8 2:14.93 \n>> postgres: xxxx idle\n>> 4710 postgres 20 0 98.1g 24g 23g S 0.0 12.7 1:17.41 \n>> postgres: xxxx idle >> 26587 postgres 20 0 98.0g 15g 15g S 0.0 8.0 1:21.24",
"msg_date": "Fri, 16 Jan 2015 09:49:01 +0200",
"msg_from": "Roman Konoval <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: shared_buffers vs Linux file cache"
}
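[Editor's note: the measurement technique from the depesz article Roman links, reduced to one line; assumes Linux with a readable /proc/<pid>/smaps. It sums the Private_Clean and Private_Dirty mappings, i.e. the memory that really belongs to one backend alone:

    # substitute the pid from top; 15338 is the checkpointer in Huan's dump
    grep '^Private' /proc/15338/smaps | awk '{ sum += $2 } END { print sum " kB private" }'

For the checkpointer this should come out tiny: its 87g RES is almost entirely the shared_buffers arena (RES and SHR are both 87g in the dump), and top counts that arena again in every backend that has touched it.]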
] |
[
{
"msg_contents": "This is an obfuscation and mock up, but:\n\ntable files (\n\tid serial pk,\n\tfilename text not null,\n\tstate varchar(20) not null\n\t... 18 more columns\n)\n\nindex file_state on (state)\n\t(35GB in size)\nindex file_in_flight_state (state) where state in (\n'waiting','assigning', 'processing' )\n\t(600MB in size)\n... 10 more indexes\n\nMore important facts:\n* state = 'done' 95% of the time. thereform the partial index\nrepresents only 5% of the table\n* all indexes and the table are very bloated\n* server has 128GB RAM\n* Version 9.2.\n\nGiven this setup, I would expect the planner to *always* choose\nfile_in_flight_state over file_state for this query:\n\nSELECT id, filename FROM files WHERE state = 'waiting';\n\n... and yet it keeps selecting file_state based on extremely small\nchanges to the stats. This is important because the same query, using\nfile_state, is 20X to 50X slower, because that index frequently gets\npushed out of memory.\n\nWhat am I missing? Or is this potentially a planner bug for costing?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 11:30:08 +1300",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Strange choice of general index over partial index"
},
{
"msg_contents": "On Thu, Jan 15, 2015 at 2:30 PM, Josh Berkus <[email protected]> wrote:\n\n> This is an obfuscation and mock up, but:\n>\n> table files (\n> id serial pk,\n> filename text not null,\n> state varchar(20) not null\n> ... 18 more columns\n> )\n>\n> index file_state on (state)\n> (35GB in size)\n> index file_in_flight_state (state) where state in (\n> 'waiting','assigning', 'processing' )\n> (600MB in size)\n> ... 10 more indexes\n>\n> More important facts:\n> * state = 'done' 95% of the time. thereform the partial index\n> represents only 5% of the table\n> * all indexes and the table are very bloated\n> * server has 128GB RAM\n> * Version 9.2.\n>\n> Given this setup, I would expect the planner to *always* choose\n> file_in_flight_state over file_state for this query:\n>\n> SELECT id, filename FROM files WHERE state = 'waiting';\n>\n> ... and yet it keeps selecting file_state based on extremely small\n> changes to the stats. This is important because the same query, using\n> file_state, is 20X to 50X slower, because that index frequently gets\n> pushed out of memory.\n>\n> What am I missing? Or is this potentially a planner bug for costing?\n>\n\n\nI wonder if this could be related to 3e9960e9d935e7e7c12e78441, which first\nappeared in 9.2.3.\n\nBut I don't know why the small index *should* be better. If this query is\nfrequent, it should have no problem keeping just those leaf pages that\ncontain the 'waiting' rows out of the full index in memory, without having\nto keep the 'done' leaf pages around. And if it is not frequent, then it\nwould have just as much problem keeping the smaller index in memory as it\nwould a small portion of the large index.\n\nOf course if it randomly switches back and forth, now you have to keep\ntwice as much data in memory, the relevant parts of both indexes.\n\nWhat is the point of having the full index at all, in this case?\n\nCheers,\n\nJeff\n\nOn Thu, Jan 15, 2015 at 2:30 PM, Josh Berkus <[email protected]> wrote:This is an obfuscation and mock up, but:\n\ntable files (\n id serial pk,\n filename text not null,\n state varchar(20) not null\n ... 18 more columns\n)\n\nindex file_state on (state)\n (35GB in size)\nindex file_in_flight_state (state) where state in (\n'waiting','assigning', 'processing' )\n (600MB in size)\n... 10 more indexes\n\nMore important facts:\n* state = 'done' 95% of the time. thereform the partial index\nrepresents only 5% of the table\n* all indexes and the table are very bloated\n* server has 128GB RAM\n* Version 9.2.\n\nGiven this setup, I would expect the planner to *always* choose\nfile_in_flight_state over file_state for this query:\n\nSELECT id, filename FROM files WHERE state = 'waiting';\n\n... and yet it keeps selecting file_state based on extremely small\nchanges to the stats. This is important because the same query, using\nfile_state, is 20X to 50X slower, because that index frequently gets\npushed out of memory.\n\nWhat am I missing? Or is this potentially a planner bug for costing?I wonder if this could be related to 3e9960e9d935e7e7c12e78441, which first appeared in 9.2.3.But I don't know why the small index *should* be better. If this query is frequent, it should have no problem keeping just those leaf pages that contain the 'waiting' rows out of the full index in memory, without having to keep the 'done' leaf pages around. 
And if it is not frequent, then it would have just as much problem keeping the smaller index in memory as it would a small portion of the large index.Of course if it randomly switches back and forth, now you have to keep twice as much data in memory, the relevant parts of both indexes.What is the point of having the full index at all, in this case?Cheers,Jeff",
"msg_date": "Thu, 15 Jan 2015 15:39:36 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
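[Editor's note: Jeff's closing question can be checked from the statistics views; a sketch, assuming stats collection has been on long enough to be representative:

    SELECT indexrelname, idx_scan,
           pg_size_pretty(pg_relation_size(indexrelid)) AS size
    FROM pg_stat_user_indexes
    WHERE relname = 'files'
    ORDER BY idx_scan;

If file_state turns out to be scanned only by queries the 600MB partial index could also satisfy, dropping it frees 35GB and takes the choice away from the planner entirely.]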
{
"msg_contents": "On 16/01/15 11:30, Josh Berkus wrote:\n> This is an obfuscation and mock up, but:\n>\n> table files (\n> \tid serial pk,\n> \tfilename text not null,\n> \tstate varchar(20) not null\n> \t... 18 more columns\n> )\n>\n> index file_state on (state)\n> \t(35GB in size)\n> index file_in_flight_state (state) where state in (\n> 'waiting','assigning', 'processing' )\n> \t(600MB in size)\n> ... 10 more indexes\n>\n> More important facts:\n> * state = 'done' 95% of the time. thereform the partial index\n> represents only 5% of the table\n> * all indexes and the table are very bloated\n> * server has 128GB RAM\n> * Version 9.2.\n>\n> Given this setup, I would expect the planner to *always* choose\n> file_in_flight_state over file_state for this query:\n>\n> SELECT id, filename FROM files WHERE state = 'waiting';\n>\n> ... and yet it keeps selecting file_state based on extremely small\n> changes to the stats. This is important because the same query, using\n> file_state, is 20X to 50X slower, because that index frequently gets\n> pushed out of memory.\n>\n> What am I missing? Or is this potentially a planner bug for costing?\n>\n\nAre you seeing a bitmapscan access plan? If so see if disabling it gets \nyou a plan on the files_in_flight index. I'm seeing this scenario with a \nfake/generated dataset a bit like yours in 9.2 (9.5 uses the \nfiles_in_flight w/o any coercing).\n\nregards\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 13:37:06 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
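[Editor's note: the experiment Mark suggests, spelled out; enable_bitmapscan is session-local, so this is safe to try on a live server:

    SET enable_bitmapscan = off;
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT id, filename FROM files WHERE state = 'waiting';
    RESET enable_bitmapscan;

As it turns out further down the thread, Josh's plans are straight index scans either way, so this particular knob is not the difference in his case.]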
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> index file_state on (state)\n> \t(35GB in size)\n> index file_in_flight_state (state) where state in (\n> 'waiting','assigning', 'processing' )\n> \t(600MB in size)\n> ... 10 more indexes\n\n> More important facts:\n> * state = 'done' 95% of the time. thereform the partial index\n> represents only 5% of the table\n> * all indexes and the table are very bloated\n> * server has 128GB RAM\n> * Version 9.2.\n\n9.2.what? And how much of the table is 'waiting' state?\n\n> What am I missing? Or is this potentially a planner bug for costing?\n\nThe only real difference between the two cases is index descent costs:\nthe number of heap pages visited will be the same whichever index is\nused, and the number of index leaf pages visited is probably about the\nsame too. 9.3 is the first release that makes any real attempt to\nmodel index descent costs realistically. Before that there were some\ndubious fudge factors, which we're unlikely to change in long-stable\nbranches no matter how badly the results might suck in specific instances.\n\nHaving said that, though, I'd have thought that the old fudge factors\nwould strongly prefer the smaller index given such a large difference in\nindex size. Have you neglected to mention some nondefault planner cost\nsettings?\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jan 2015 20:48:24 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "On 16/01/15 13:37, Mark Kirkwood wrote:\n> On 16/01/15 11:30, Josh Berkus wrote:\n>> This is an obfuscation and mock up, but:\n>>\n>> table files (\n>> id serial pk,\n>> filename text not null,\n>> state varchar(20) not null\n>> ... 18 more columns\n>> )\n>>\n>> index file_state on (state)\n>> (35GB in size)\n>> index file_in_flight_state (state) where state in (\n>> 'waiting','assigning', 'processing' )\n>> (600MB in size)\n>> ... 10 more indexes\n>>\n>> More important facts:\n>> * state = 'done' 95% of the time. thereform the partial index\n>> represents only 5% of the table\n>> * all indexes and the table are very bloated\n>> * server has 128GB RAM\n>> * Version 9.2.\n>>\n>> Given this setup, I would expect the planner to *always* choose\n>> file_in_flight_state over file_state for this query:\n>>\n>> SELECT id, filename FROM files WHERE state = 'waiting';\n>>\n>> ... and yet it keeps selecting file_state based on extremely small\n>> changes to the stats. This is important because the same query, using\n>> file_state, is 20X to 50X slower, because that index frequently gets\n>> pushed out of memory.\n>>\n>> What am I missing? Or is this potentially a planner bug for costing?\n>>\n>\n> Are you seeing a bitmapscan access plan? If so see if disabling it gets\n> you a plan on the files_in_flight index. I'm seeing this scenario with a\n> fake/generated dataset a bit like yours in 9.2 (9.5 uses the\n> files_in_flight w/o any coercing).\n>\n\nFWIW: For me 9.2 and 9.3 (default config) generate plans like:\nstate=# EXPLAIN ANALYZE\nSELECT id, filename\nFROM files\nWHERE state = 'processing';\n QUERY PLAN \n\n----------------------------------------------------------------------------------------------------------------------------------\n Bitmap Heap Scan on files (cost=3102.02..89228.68 rows=164333 \nwidth=15) (actual time=26.629..803.507 rows=166696 loops=1)\n Recheck Cond: ((state)::text = 'processing'::text)\n Rows Removed by Index Recheck: 7714304\n -> Bitmap Index Scan on file_state (cost=0.00..3060.93 rows=164333 \nwidth=0) (actual time=25.682..25.682 rows=166696 loops=1)\n Index Cond: ((state)::text = 'processing'::text)\n Total runtime: 808.662 ms\n(6 rows)\n\n\nwhereas 9.4 and 9.5 get:\n\n QUERY \nPLAN\n\n---------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using file_in_flight on files (cost=0.42..62857.39 \nrows=158330 width=15) (actual time=0.055..202.732 rows=166696 loops=1)\n Index Cond: ((state)::text = 'processing'::text)\n Planning time: 24.203 ms\n Execution time: 208.926 ms\n(4 rows)\n\n\nThis is with each version loading exactly the same dataset (generated by \nthe attached scripty). Obviously this is a vast simplification of what \nJosh is looking at - but it is (hopefully) interesting that these later \nversions are doing so much better...\n\nCheers\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 16 Jan 2015 15:32:00 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "Mark Kirkwood <[email protected]> writes:\n> This is with each version loading exactly the same dataset (generated by \n> the attached scripty). Obviously this is a vast simplification of what \n> Josh is looking at - but it is (hopefully) interesting that these later \n> versions are doing so much better...\n\nActually, what I see when using this dataset is that both the estimated\ncost and the actual runtime of the query are within a percent or so of\nbeing the same when using either index. (Try forcing it to use the\nnon-preferred index by dropping the preferred one, and you'll see what\nI mean.) The absolute magnitude of the cost estimate varies across\nversions, but not the fact that we're getting about the same estimate\nfor both indexes.\n\nI suspect the same may be true for Josh's real-world database, meaning\nthat the index choice is depending on phase-of-the-moon factors like\nwhich index has the lower OID, which is doubtless contributing to\nhis frustration :-(\n\nI think that the real key to this problem lies in the index bloat pattern,\nwhich might be quite a bit different between the two indexes. This might\nmean traversing many more index leaf pages in one case than the other,\nwhich would account for the difference in real runtimes that he's seeing\nand I'm not. I don't recall at the moment whether 9.2's cost estimation\nrules would do a good job of accounting for such effects. (And even if\nit's trying, it'd be working from an average-case estimate, which might\nnot have much to do with reality for this specific query.)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 15 Jan 2015 22:03:53 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "On 16/01/15 15:32, Mark Kirkwood wrote:\n> On 16/01/15 13:37, Mark Kirkwood wrote:\n>> On 16/01/15 11:30, Josh Berkus wrote:\n>>> This is an obfuscation and mock up, but:\n>>>\n>>> table files (\n>>> id serial pk,\n>>> filename text not null,\n>>> state varchar(20) not null\n>>> ... 18 more columns\n>>> )\n>>>\n>>> index file_state on (state)\n>>> (35GB in size)\n>>> index file_in_flight_state (state) where state in (\n>>> 'waiting','assigning', 'processing' )\n>>> (600MB in size)\n>>> ... 10 more indexes\n>>>\n>>> More important facts:\n>>> * state = 'done' 95% of the time. thereform the partial index\n>>> represents only 5% of the table\n>>> * all indexes and the table are very bloated\n>>> * server has 128GB RAM\n>>> * Version 9.2.\n>>>\n>>> Given this setup, I would expect the planner to *always* choose\n>>> file_in_flight_state over file_state for this query:\n>>>\n>>> SELECT id, filename FROM files WHERE state = 'waiting';\n>>>\n>>> ... and yet it keeps selecting file_state based on extremely small\n>>> changes to the stats. This is important because the same query, using\n>>> file_state, is 20X to 50X slower, because that index frequently gets\n>>> pushed out of memory.\n>>>\n>>> What am I missing? Or is this potentially a planner bug for costing?\n>>>\n>>\n>> Are you seeing a bitmapscan access plan? If so see if disabling it gets\n>> you a plan on the files_in_flight index. I'm seeing this scenario with a\n>> fake/generated dataset a bit like yours in 9.2 (9.5 uses the\n>> files_in_flight w/o any coercing).\n>>\n>\n> FWIW: For me 9.2 and 9.3 (default config) generate plans like:\n> state=# EXPLAIN ANALYZE\n> SELECT id, filename\n> FROM files\n> WHERE state = 'processing';\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------------\n>\n> Bitmap Heap Scan on files (cost=3102.02..89228.68 rows=164333\n> width=15) (actual time=26.629..803.507 rows=166696 loops=1)\n> Recheck Cond: ((state)::text = 'processing'::text)\n> Rows Removed by Index Recheck: 7714304\n> -> Bitmap Index Scan on file_state (cost=0.00..3060.93 rows=164333\n> width=0) (actual time=25.682..25.682 rows=166696 loops=1)\n> Index Cond: ((state)::text = 'processing'::text)\n> Total runtime: 808.662 ms\n> (6 rows)\n>\n>\n> whereas 9.4 and 9.5 get:\n>\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------------------------------------\n>\n> Index Scan using file_in_flight on files (cost=0.42..62857.39\n> rows=158330 width=15) (actual time=0.055..202.732 rows=166696 loops=1)\n> Index Cond: ((state)::text = 'processing'::text)\n> Planning time: 24.203 ms\n> Execution time: 208.926 ms\n> (4 rows)\n>\n>\n> This is with each version loading exactly the same dataset (generated by\n> the attached scripty). Obviously this is a vast simplification of what\n> Josh is looking at - but it is (hopefully) interesting that these later\n> versions are doing so much better...\n>\n\nA bit more poking about shows that the major factor (which this fake \ndataset anyway) is the default for effective_cache_size (changes from \n128MB to 4GB in 9.4). 
Increasing this makes 9.2 start using the \nfiles_in_flight index in a plain index scan too.\n\nJosh - might be worth experimenting with this parameter.\n\nregards\n\nMark",
"msg_date": "Fri, 16 Jan 2015 16:06:26 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
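[Editor's note: effective_cache_size only feeds the planner's cost model and allocates nothing, so it can be tested per session; a sketch of the experiment Mark proposes:

    SET effective_cache_size = '4GB';  -- the 9.4 default Mark refers to; size to taste
    EXPLAIN ANALYZE SELECT id, filename FROM files WHERE state = 'waiting';
    RESET effective_cache_size;

Josh replies below that his server already runs with 100GB here, which is what points the discussion back at index bloat and 9.2's index descent costing.]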
{
"msg_contents": "On 16/01/15 16:06, Mark Kirkwood wrote:\n\n> A bit more poking about shows that the major factor (which this fake\n> dataset anyway) is the default for effective_cache_size (changes from\n> 128MB to 4GB in 9.4). Increasing this makes 9.2 start using the\n> files_in_flight index in a plain index scan too.\n>\n\nArrg - misread the planner output....in 9.2 what changes is a plan that \nuses an index scan on the *file_state* index (not \nfiles_in_flight)...which appears much faster than the bitmap scan on \nfile_state. Apologies for the confusion.\n\nI'm thinking that I'm seeing the effect Tom has just mentioned.\n\nregards\n\nMark\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 16:17:12 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "On 01/16/2015 04:17 PM, Mark Kirkwood wrote:\n> On 16/01/15 16:06, Mark Kirkwood wrote:\n> \n>> A bit more poking about shows that the major factor (which this fake\n>> dataset anyway) is the default for effective_cache_size (changes from\n>> 128MB to 4GB in 9.4). Increasing this makes 9.2 start using the\n>> files_in_flight index in a plain index scan too.\n>>\n> \n> Arrg - misread the planner output....in 9.2 what changes is a plan that\n> uses an index scan on the *file_state* index (not\n> files_in_flight)...which appears much faster than the bitmap scan on\n> file_state. Apologies for the confusion.\n> \n> I'm thinking that I'm seeing the effect Tom has just mentioned.\n\nIt's not using a bitmapscan in either case; it's a straight indexscan.\n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 16:28:34 +1300",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "On 16/01/15 16:28, Josh Berkus wrote:\n> On 01/16/2015 04:17 PM, Mark Kirkwood wrote:\n>> On 16/01/15 16:06, Mark Kirkwood wrote:\n>>\n>>> A bit more poking about shows that the major factor (which this fake\n>>> dataset anyway) is the default for effective_cache_size (changes from\n>>> 128MB to 4GB in 9.4). Increasing this makes 9.2 start using the\n>>> files_in_flight index in a plain index scan too.\n>>>\n>>\n>> Arrg - misread the planner output....in 9.2 what changes is a plan that\n>> uses an index scan on the *file_state* index (not\n>> files_in_flight)...which appears much faster than the bitmap scan on\n>> file_state. Apologies for the confusion.\n>>\n>> I'm thinking that I'm seeing the effect Tom has just mentioned.\n>\n> It's not using a bitmapscan in either case; it's a straight indexscan.\n>\n>\n\nRight, I suspect that bloating is possibly the significant factor then - \ncan you REINDEX?\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 17:00:44 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Strange choice of general index over partial index"
},
{
"msg_contents": "\n> Right, I suspect that bloating is possibly the significant factor then -\n> can you REINDEX?\n\nBelieve me, it's on the agenda. Of course, this is on a server with 90%\nsaturated IO, so doing a repack is going to take some finessing.\n\nBTW, effective_cache_size is set to 100GB. So I suspect that it's the\nother issue with Tom mentioned, which is that 9.2 really doesn't take\nphysical index size into account.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 16 Jan 2015 17:15:46 +1300",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Strange choice of general index over partial index"
}
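For reference on the REINDEX discussion above: on a busy server, the usual low-lock alternative is to build a replacement index concurrently and swap it, which is essentially what pg_repack automates. A minimal sketch, using placeholder table/index/column names rather than anything from this thread:

-- Rebuild a bloated index without holding an exclusive lock for the build.
-- some_table, some_index and some_col are placeholders.
CREATE INDEX CONCURRENTLY some_index_new ON some_table (some_col);
DROP INDEX CONCURRENTLY some_index;   -- DROP ... CONCURRENTLY is available from 9.2
ALTER INDEX some_index_new RENAME TO some_index;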
] |
[
{
"msg_contents": "Hi,\n\nWe implemented an autocompletion feature (case and accent insensitive)\nusing PostgreSQL full text search.\nThe query fetches patient ids matching the full text query that belong to a\ngiven patient base (rows contain a pg_array with patient_base_ids).\nOur table grew over time (6.2 million rows now) and the query got slower.\nWe are wondering if we have hit the limit or if there is still room for\nperformance improvement with better indexing or data partitioning for\ninstance.\nHere is a link to the \"explain (analyze, buffers)\" results from such a\nquery run on one of our servers : http://explain.depesz.com/s/a5Q9\nRunning analyze on the table doesn't change the results and the table is\nautovacuumed (last one was 2015-01-08 22:18).\n\nYou will find below additional information to bring context to my question.\nThank you in advance for your help.\n\nHere is the schema of the table :\n\nCREATE TABLE patients (\n id integer NOT NULL,\n first_name character varying(255),\n last_name character varying(255),\n regular_doctor_name character varying(255),\n regular_doctor_city character varying(255),\n email character varying(255),\n phone_number character varying(255),\n secondary_phone_number character varying(255),\n gender boolean,\n birthdate date,\n zipcode character varying(255),\n city character varying(255),\n created_at timestamp without time zone,\n updated_at timestamp without time zone,\n imported_at timestamp without time zone,\n import_error text,\n import_identifier character varying(255),\n address character varying(255),\n deleted_at timestamp without time zone,\n account_id integer,\n main boolean DEFAULT false NOT NULL,\n insurance_type character varying(255),\n patient_base_ids_cache integer[] DEFAULT '{}'::integer[],\n crucial_info character varying(255),\n referrer character varying(255),\n occupation character varying(255),\n custom_fields_values hstore DEFAULT ''::hstore NOT NULL,\n bounced_at timestamp without time zone,\n merged_at timestamp without time zone,\n maiden_name character varying(255)\n);\n\nHere is the dictionary definition we used for full text search :\n\nCREATE TEXT SEARCH CONFIGURATION custom_name_search (\n PARSER = pg_catalog.\"default\" );\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR asciiword WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR word WITH unaccent, simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR numword WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR email WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR url WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR host WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR sfloat WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR version WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR hword_numpart WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR hword_part WITH unaccent, simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR hword_asciipart WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR numhword WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR asciihword WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING 
FOR hword WITH unaccent, simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR url_path WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR file WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR \"float\" WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR \"int\" WITH simple;\n\nALTER TEXT SEARCH CONFIGURATION custom_name_search\n ADD MAPPING FOR uint WITH simple;\n\nHere are the indexes on the patients table :\n\nCREATE INDEX index_patients_on_account_id ON patients USING btree\n(account_id);\nCREATE INDEX index_patients_on_import_identifier ON patients USING btree\n(import_identifier);\nCREATE INDEX index_patients_on_patient_base_ids_cache ON patients USING gin\n(patient_base_ids_cache);\nCREATE INDEX index_patients_on_phone_number ON patients USING btree\n(phone_number);\nCREATE INDEX patients_clean_secondary_phone_number_index ON patients USING\nbtree (replace((secondary_phone_number)::text, ' '::text, ''::text));\nCREATE INDEX tsvector_on_patients ON patients USING gin\n(to_tsvector('custom_name_search'::regconfig, (((COALESCE(last_name,\n''::character varying))::text || ' '::text) || (COALESCE(first_name,\n''::character varying))::text)));\nCREATE INDEX tsvector_on_patients_and_patient_base_ids_cache ON patients\nUSING gin (to_tsvector('custom_name_search'::regconfig,\n(((COALESCE(last_name, ''::character varying))::text || ' '::text) ||\n(COALESCE(first_name, ''::character varying))::text)),\npatient_base_ids_cache);\nCREATE INDEX tsvector_on_patients_first_name ON patients USING gin\n(to_tsvector('custom_name_search'::regconfig, (COALESCE(first_name,\n''::character varying))::text));\nCREATE INDEX tsvector_on_patients_first_name_and_patient_base_ids_cache ON\npatients USING gin (to_tsvector('custom_name_search'::regconfig,\n(COALESCE(first_name, ''::character varying))::text),\npatient_base_ids_cache);\nCREATE INDEX tsvector_on_patients_last_name_and_patient_base_ids_cache ON\npatients USING gin (to_tsvector('custom_name_search'::regconfig,\n(COALESCE(last_name, ''::character varying))::text),\npatient_base_ids_cache);\n\n\nSELECT COUNT(id) FROM patients;\n count\n---------\n 6219569\n(1 row)\n\nSELECT version();\n version\n------------------------------------------------------------\n PostgreSQL 9.3.5 on x86_64-unknown-linux-gnu, compiled by gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3, 64-bit\n(1 row)\n\n=> SELECT name, current_setting(name), source\n-> FROM pg_settings\n-> WHERE source NOT IN ('default', 'override');\n name | current_setting | source\n--------------------------------+-----------------+----------------------\n application_name | psql | client\n archive_command | test -f /etc/postgresql/wal-e.d/ARCHIVING_OFF || envdir /etc/postgresql/wal-e.d/env wal-e wal-push %p | configuration file\n archive_mode | on | configuration file\n archive_timeout | 1min | configuration file\n bytea_output | escape | user\n checkpoint_completion_target | 0.7 | configuration file\n checkpoint_segments | 40 | configuration file\n checkpoint_timeout | 10min | configuration file\n client_encoding | UTF8 | client\n client_min_messages | notice | configuration file\n cpu_index_tuple_cost | 0.001 | configuration file\n cpu_operator_cost | 0.0005 | configuration file\n cpu_tuple_cost | 0.003 | configuration file\n DateStyle | ISO, MDY | configuration file\n default_text_search_config | pg_catalog.english | configuration file\n effective_cache_size | 10800000kB | configuration file\n hot_standby | on | configuration file\n hot_standby_feedback | on | configuration file\n lc_messages | en_US.UTF-8 | configuration file\n lc_monetary | en_US.UTF-8 | configuration file\n lc_numeric | en_US.UTF-8 | configuration file\n lc_time | en_US.UTF-8 | configuration file\n listen_addresses | * | configuration file\n local_preload_libraries | pgextwlist | configuration file\n log_checkpoints | on | configuration file\n log_connections | on | configuration file\n log_destination | stderr | configuration file\n log_line_prefix | %m %p %u [PINK] | configuration file\n log_lock_waits | on | configuration file\n log_min_duration_statement | 2s | configuration file\n log_min_messages | notice | configuration file\n log_rotation_age | 1d | configuration file\n log_rotation_size | 100MB | configuration file\n log_statement | ddl | configuration file\n log_temp_files | 10MB | configuration file\n log_timezone | UTC | configuration file\n log_truncate_on_rotation | off | configuration file\n logfebe.identity | c671acf1-c82e-4c2d-a3b3-f815580b6db5 | configuration file\n logfebe.unix_socket | /tmp/pg_logplexcollector/pg_logplexcollector.sock | configuration file\n logging_collector | on | configuration file\n maintenance_work_mem | 1700MB | configuration file\n max_connections | 500 | configuration file\n max_prepared_transactions | 0 | configuration file\n max_stack_depth | 2MB | environment variable\n max_standby_archive_delay | -1 | configuration file\n max_standby_streaming_delay | -1 | configuration file\n max_wal_senders | 20 | configuration file\n port | 5432 | configuration file\n random_page_cost | 2 | configuration file\n shared_buffers | 2929MB | configuration file\n ssl | on | configuration file\n ssl_renegotiation_limit | 0 | configuration file\n superuser_reserved_connections | 3 | configuration file\n synchronous_commit | local | configuration file\n synchronous_standby_names | follower | configuration file\n temp_tablespaces | ephemeral | database\n TimeZone | UTC | configuration file\n track_io_timing | on | configuration file\n wal_keep_segments | 61 | configuration file\n wal_level | hot_standby | configuration file\n work_mem | 100MB | configuration file\n(61 rows)",
"msg_date": "Fri, 16 Jan 2015 01:41:47 +0100",
"msg_from": "Ivan Schneider <[email protected]>",
"msg_from_op": true,
"msg_subject": "Autocompletion with full text search"
},
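The query behind the depesz link above is not reproduced in the post, but given the indexes described, a prefix-autocomplete lookup presumably looks roughly like the following; the search term 'dupon' and the patient base id 42 are made-up values. The tsvector expression has to match the indexed expression for the combined GIN index to be usable:

SELECT id
FROM patients
WHERE to_tsvector('custom_name_search',
        COALESCE(last_name, '') || ' ' || COALESCE(first_name, ''))
      @@ to_tsquery('custom_name_search', 'dupon:*')   -- :* requests prefix matching
  AND patient_base_ids_cache && ARRAY[42]              -- restrict to one patient base
LIMIT 10;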
{
"msg_contents": "On 1/15/15 6:41 PM, Ivan Schneider wrote:\n>\n> We implemented an autocompletion feature (case and accent insensitive)\n> using PostgreSQL full text search.\n> The query fetches patient ids matching the full text query that belong\n> to a given patient base (rows contain a pg_array with patient_base_ids).\n> Our table grew over time (6.2 million rows now) and the query got\n> slower. We are wondering if we have hit the limit or if there is still\n> room for performance improvement with better indexing or data\n> partitioning for instance.\n> Here is a link to the \"explain (analyze, buffers)\" results from such a\n> query run on one of our servers : http://explain.depesz.com/s/a5Q9\n> Running analyze on the table doesn't change the results and the table is\n> autovacuumed (last one was 2015-01-08 22:18).\n>\n\nThe query time is consumed by scanning the index, which at 152ms doesn't \nseem all that bad. Have you tried reindexing? That might help. You could \nalso try something like trigram \n(http://www.postgresql.org/docs/9.1/static/pgtrgm.html); it might be faster.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 19:57:31 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Autocompletion with full text search"
}
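A minimal sketch of the pg_trgm alternative Jim suggests, assuming the same combined-name expression as the existing tsvector indexes (the index name and the search string are made up). A GIN trigram index can serve LIKE/ILIKE, so it also covers infix matches, though search text shorter than 3 characters yields no trigrams and will not use the index effectively:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX CONCURRENTLY patients_name_trgm_idx
    ON patients
    USING gin ((COALESCE(last_name, '') || ' ' || COALESCE(first_name, '')) gin_trgm_ops);

SELECT id
FROM patients
WHERE (COALESCE(last_name, '') || ' ' || COALESCE(first_name, '')) ILIKE '%dupon%'
  AND patient_base_ids_cache && ARRAY[42];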
] |
[
{
"msg_contents": "Hi,\nI'm trying to create datas on an initial import and i'm encountering a\nperformance issue.\nI've 2 tables, my process create a record in each table and execute a sum\nwith join on this 2 tables. (and other requests but there are very fast)\n\nMy 2 tables are empty before the import.\n\nMy count query is :\nselect sum(quantitest0_.quantite_valeur) as col_0_0_ from dm5_quantitestock\nquantitest0_, dm5_caracteristiquearticlestock caracteris1_ where\nquantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\ncaracteris1_.id_article='4028804c4a311178014a346546967c59'\n\ni use parameterized request.\n\nMy process create only 6000 records in each table.\n\nDuring the whole process this sum request lasts longer and longer.\n\nThe auto-explain plan show an seq scan\n\n----------\n Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\ndm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\ncaracteris1_ where\nquantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\ncaracteris1_.id_article=$1\n Aggregate (cost=2.04..2.05 rows=1 width=26) (actual\ntime=862.621..862.621 rows=1 loops=1)\n Output: sum(quantitest0_.quantite_valeur)\n -> Nested Loop (cost=0.00..2.04 rows=1 width=26) (actual\ntime=862.618..862.618 rows=0 loops=1)\n Output: quantitest0_.quantite_valeur\n Join Filter:\n((quantitest0_.id_caracteristiquearticlestock)::text =\n(caracteris1_.id)::text)\n Rows Removed by Join Filter: 1869\n -> Seq Scan on public.dm5_quantitestock quantitest0_\n (cost=0.00..1.01 rows=1 width=164) (actual time=0.004..0.408 rows=1869\nloops=1)\n Output: quantitest0_.id,\nquantitest0_.datefinvalidite, quantitest0_.quantite_valeur,\nquantitest0_.id_caracteristiquearticlestock,\nquantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme,\nquantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme,\nquantitest0_.id_modifieparsysteme\n -> Seq Scan on public.dm5_caracteristiquearticlestock\ncaracteris1_ (cost=0.00..1.01 rows=1 width=42) (actual time=0.456..0.456\nrows=1 loops=1869)\n Output: caracteris1_.id,\ncaracteris1_.datefinvalidite, caracteris1_.id_lot, caracteris1_.id_article,\ncaracteris1_.id_numeroserie, caracteris1_.datecreationsysteme,\ncaracteris1_.datemodificationsysteme, caracteris1_.id_modifieparsysteme,\ncaracteris1_.id_creeparsysteme\n Filter: ((caracteris1_.id_article)::text = ($1)::text)\n Rows Removed by Filter: 1869\n-----------\n\nif a launch an analyse during the process, the explain use index, but the\ntime remains the same.\n\n---------\nQuery Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\ndm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\ncaracteris1_ where\nquantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\ncaracteris1_.id_article=$1\nAggregate (cost=16.55..16.56 rows=1 width=26) (actual\ntime=654.998..654.998 rows=1 loops=1)\n Output: sum(quantitest0_.quantite_valeur)\n -> Nested Loop (cost=0.00..16.55 rows=1 width=26) (actual\ntime=654.994..654.994 rows=0 loops=1)\n Output: quantitest0_.quantite_valeur\n Join Filter: ((quantitest0_.id_caracteristiquearticlestock)::text =\n(caracteris1_.id)::text)\n Rows Removed by Join Filter: 1651\n -> Index Scan using x_dm5_quantitestock_00 on\npublic.dm5_quantitestock quantitest0_ (cost=0.00..8.27 rows=1 width=164)\n(actual time=0.011..0.579 rows=1651 loops=1)\n Output: quantitest0_.id, quantitest0_.datefinvalidite,\nquantitest0_.quantite_valeur, 
quantitest0_.id_caracteristiquearticlestock,\nquantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme,\nquantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme,\nquantitest0_.id_modifieparsysteme\n -> Index Scan using dm5_caracteristiquearticlestock_pkey on\npublic.dm5_caracteristiquearticlestock caracteris1_ (cost=0.00..8.27\nrows=1 width=42) (actual time=0.395..0.395 rows=1 loops=1651)\n Output: caracteris1_.id, caracteris1_.datefinvalidite,\ncaracteris1_.id_lot, caracteris1_.id_article, caracteris1_.id_numeroserie,\ncaracteris1_.datecreationsysteme, caracteris1_.datemodificationsysteme,\ncaracteris1_.id_modifieparsysteme, caracteris1_.id_creeparsysteme\n Filter: ((caracteris1_.id_article)::text =\n'4028804c4a311178014a346547307cce'::text)\n Rows Removed by Filter: 1651\n\n----------\n\nIf i create the first 1000 records, commit and end transaction, the whole\nimport is very fast.\n\n\nI can't change my process to cut the process in little part...\n\nAn idea ?\n\nThanks.\n\n\n*Laurent CATHALA*Architecte\[email protected]\n\n\n\n\n\n*7 rue Marcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCETel\n: 05 63 53 08 18 - Fax : 05 63 53 07 42 - www.sylob.com\n<http://www.sylob.com/>Support : 05 63 53 78 35 - [email protected]\n<[email protected]>Entreprise certifiée ISO 9001 version 2008 par Bureau\nVeritas.*\n\n\n<http://twitter.com/SylobErp> <http://www.google.com/+sylob>\n<http://www.viadeo.com/fr/company/sylob-sas>\n<http://www.linkedin.com/company/sylob>\n\nHi, I'm trying to create datas on an initial import and i'm encountering a performance issue.I've 2 tables, my process create a record in each table and execute a sum with join on this 2 tables. (and other requests but there are very fast)My 2 tables are empty before the import.My count query is :select sum(quantitest0_.quantite_valeur) as col_0_0_ from dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock caracteris1_ where quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and caracteris1_.id_article='4028804c4a311178014a346546967c59'i use parameterized request.My process create only 6000 records in each table.During the whole process this sum request lasts longer and longer.The auto-explain plan show an seq scan ---------- Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock caracteris1_ where quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and caracteris1_.id_article=$1 Aggregate (cost=2.04..2.05 rows=1 width=26) (actual time=862.621..862.621 rows=1 loops=1) Output: sum(quantitest0_.quantite_valeur) -> Nested Loop (cost=0.00..2.04 rows=1 width=26) (actual time=862.618..862.618 rows=0 loops=1) Output: quantitest0_.quantite_valeur Join Filter: ((quantitest0_.id_caracteristiquearticlestock)::text = (caracteris1_.id)::text) Rows Removed by Join Filter: 1869 -> Seq Scan on public.dm5_quantitestock quantitest0_ (cost=0.00..1.01 rows=1 width=164) (actual time=0.004..0.408 rows=1869 loops=1) Output: quantitest0_.id, quantitest0_.datefinvalidite, quantitest0_.quantite_valeur, quantitest0_.id_caracteristiquearticlestock, quantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme, quantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme, quantitest0_.id_modifieparsysteme -> Seq Scan on public.dm5_caracteristiquearticlestock caracteris1_ (cost=0.00..1.01 rows=1 width=42) (actual time=0.456..0.456 rows=1 loops=1869) Output: caracteris1_.id, 
caracteris1_.datefinvalidite, caracteris1_.id_lot, caracteris1_.id_article, caracteris1_.id_numeroserie, caracteris1_.datecreationsysteme, caracteris1_.datemodificationsysteme, caracteris1_.id_modifieparsysteme, caracteris1_.id_creeparsysteme Filter: ((caracteris1_.id_article)::text = ($1)::text) Rows Removed by Filter: 1869-----------if a launch an analyse during the process, the explain use index, but the time remains the same.---------Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock caracteris1_ where quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and caracteris1_.id_article=$1 Aggregate (cost=16.55..16.56 rows=1 width=26) (actual time=654.998..654.998 rows=1 loops=1) Output: sum(quantitest0_.quantite_valeur) -> Nested Loop (cost=0.00..16.55 rows=1 width=26) (actual time=654.994..654.994 rows=0 loops=1) Output: quantitest0_.quantite_valeur Join Filter: ((quantitest0_.id_caracteristiquearticlestock)::text = (caracteris1_.id)::text) Rows Removed by Join Filter: 1651 -> Index Scan using x_dm5_quantitestock_00 on public.dm5_quantitestock quantitest0_ (cost=0.00..8.27 rows=1 width=164) (actual time=0.011..0.579 rows=1651 loops=1) Output: quantitest0_.id, quantitest0_.datefinvalidite, quantitest0_.quantite_valeur, quantitest0_.id_caracteristiquearticlestock, quantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme, quantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme, quantitest0_.id_modifieparsysteme -> Index Scan using dm5_caracteristiquearticlestock_pkey on public.dm5_caracteristiquearticlestock caracteris1_ (cost=0.00..8.27 rows=1 width=42) (actual time=0.395..0.395 rows=1 loops=1651) Output: caracteris1_.id, caracteris1_.datefinvalidite, caracteris1_.id_lot, caracteris1_.id_article, caracteris1_.id_numeroserie, caracteris1_.datecreationsysteme, caracteris1_.datemodificationsysteme, caracteris1_.id_modifieparsysteme, caracteris1_.id_creeparsysteme Filter: ((caracteris1_.id_article)::text = '4028804c4a311178014a346547307cce'::text) Rows Removed by Filter: 1651----------If i create the first 1000 records, commit and end transaction, the whole import is very fast.I can't change my process to cut the process in little part...An idea ? Thanks.Laurent [email protected] rue\nMarcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCETel : 05 63 53 08 18 -\nFax : 05 63 53 07 42 - www.sylob.comSupport : 05 63 53 78 35 - [email protected] certifiée\nISO 9001 version 2008 par Bureau Veritas.",
"msg_date": "Thu, 22 Jan 2015 17:46:12 +0100",
"msg_from": "Laurent Cathala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Initial insert"
},
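The plans quoted above come from the auto_explain module. For anyone trying to reproduce this, a session-level setup that produces this kind of output would look roughly like the following sketch; the 250ms threshold is illustrative, not the poster's actual configuration, and log_verbose is what adds the "Output:" lines seen in the plans:

LOAD 'auto_explain';
SET auto_explain.log_min_duration = '250ms';  -- log plans for statements slower than this
SET auto_explain.log_analyze = on;            -- include actual row counts and timings
SET auto_explain.log_verbose = on;            -- include Output: lines, as above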
{
"msg_contents": "Hello,\nI forgot to mention my version : 9.2\n\nthanks,\n\n\n*Laurent CATHALA*Architecte\[email protected]\n\n\n\n\n\n*7 rue Marcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCETel\n: 05 63 53 08 18 - Fax : 05 63 53 07 42 - www.sylob.com\n<http://www.sylob.com/>Support : 05 63 53 78 35 - [email protected]\n<[email protected]>Entreprise certifiée ISO 9001 version 2008 par Bureau\nVeritas.*\n\n\n<http://twitter.com/SylobErp> <http://www.google.com/+sylob>\n<http://www.viadeo.com/fr/company/sylob-sas>\n<http://www.linkedin.com/company/sylob>\n\n\n2015-01-22 17:46 GMT+01:00 Laurent Cathala <[email protected]>:\n\n> Hi,\n> I'm trying to create datas on an initial import and i'm encountering a\n> performance issue.\n> I've 2 tables, my process create a record in each table and execute a sum\n> with join on this 2 tables. (and other requests but there are very fast)\n>\n> My 2 tables are empty before the import.\n>\n> My count query is :\n> select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> caracteris1_.id_article='4028804c4a311178014a346546967c59'\n>\n> i use parameterized request.\n>\n> My process create only 6000 records in each table.\n>\n> During the whole process this sum request lasts longer and longer.\n>\n> The auto-explain plan show an seq scan\n>\n> ----------\n> Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> caracteris1_.id_article=$1\n> Aggregate (cost=2.04..2.05 rows=1 width=26) (actual\n> time=862.621..862.621 rows=1 loops=1)\n> Output: sum(quantitest0_.quantite_valeur)\n> -> Nested Loop (cost=0.00..2.04 rows=1 width=26) (actual\n> time=862.618..862.618 rows=0 loops=1)\n> Output: quantitest0_.quantite_valeur\n> Join Filter:\n> ((quantitest0_.id_caracteristiquearticlestock)::text =\n> (caracteris1_.id)::text)\n> Rows Removed by Join Filter: 1869\n> -> Seq Scan on public.dm5_quantitestock quantitest0_\n> (cost=0.00..1.01 rows=1 width=164) (actual time=0.004..0.408 rows=1869\n> loops=1)\n> Output: quantitest0_.id,\n> quantitest0_.datefinvalidite, quantitest0_.quantite_valeur,\n> quantitest0_.id_caracteristiquearticlestock,\n> quantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme,\n> quantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme,\n> quantitest0_.id_modifieparsysteme\n> -> Seq Scan on public.dm5_caracteristiquearticlestock\n> caracteris1_ (cost=0.00..1.01 rows=1 width=42) (actual time=0.456..0.456\n> rows=1 loops=1869)\n> Output: caracteris1_.id,\n> caracteris1_.datefinvalidite, caracteris1_.id_lot, caracteris1_.id_article,\n> caracteris1_.id_numeroserie, caracteris1_.datecreationsysteme,\n> caracteris1_.datemodificationsysteme, caracteris1_.id_modifieparsysteme,\n> caracteris1_.id_creeparsysteme\n> Filter: ((caracteris1_.id_article)::text =\n> ($1)::text)\n> Rows Removed by Filter: 1869\n> -----------\n>\n> if a launch an analyse during the process, the explain use index, but the\n> time remains the same.\n>\n> ---------\n> Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> 
caracteris1_.id_article=$1\n> Aggregate (cost=16.55..16.56 rows=1 width=26) (actual\n> time=654.998..654.998 rows=1 loops=1)\n> Output: sum(quantitest0_.quantite_valeur)\n> -> Nested Loop (cost=0.00..16.55 rows=1 width=26) (actual\n> time=654.994..654.994 rows=0 loops=1)\n> Output: quantitest0_.quantite_valeur\n> Join Filter: ((quantitest0_.id_caracteristiquearticlestock)::text\n> = (caracteris1_.id)::text)\n> Rows Removed by Join Filter: 1651\n> -> Index Scan using x_dm5_quantitestock_00 on\n> public.dm5_quantitestock quantitest0_ (cost=0.00..8.27 rows=1 width=164)\n> (actual time=0.011..0.579 rows=1651 loops=1)\n> Output: quantitest0_.id, quantitest0_.datefinvalidite,\n> quantitest0_.quantite_valeur, quantitest0_.id_caracteristiquearticlestock,\n> quantitest0_.id_caracteristiquelieustock, quantitest0_.datecreationsysteme,\n> quantitest0_.datemodificationsysteme, quantitest0_.id_creeparsysteme,\n> quantitest0_.id_modifieparsysteme\n> -> Index Scan using dm5_caracteristiquearticlestock_pkey on\n> public.dm5_caracteristiquearticlestock caracteris1_ (cost=0.00..8.27\n> rows=1 width=42) (actual time=0.395..0.395 rows=1 loops=1651)\n> Output: caracteris1_.id, caracteris1_.datefinvalidite,\n> caracteris1_.id_lot, caracteris1_.id_article, caracteris1_.id_numeroserie,\n> caracteris1_.datecreationsysteme, caracteris1_.datemodificationsysteme,\n> caracteris1_.id_modifieparsysteme, caracteris1_.id_creeparsysteme\n> Filter: ((caracteris1_.id_article)::text =\n> '4028804c4a311178014a346547307cce'::text)\n> Rows Removed by Filter: 1651\n>\n> ----------\n>\n> If i create the first 1000 records, commit and end transaction, the whole\n> import is very fast.\n>\n>\n> I can't change my process to cut the process in little part...\n>\n> An idea ?\n>\n> Thanks.\n>\n>\n> *Laurent CATHALA*Architecte\n> [email protected]\n>\n>\n>\n>\n>\n> *7 rue Marcel Dassault - Z.A. La Mouline - 81990 Cambon d'Albi - FRANCETel\n> : 05 63 53 08 18 - Fax : 05 63 53 07 42 - www.sylob.com\n> <http://www.sylob.com/>Support : 05 63 53 78 35 - [email protected]\n> <[email protected]>Entreprise certifiée ISO 9001 version 2008 par Bureau\n> Veritas.*\n>\n>\n> <http://twitter.com/SylobErp> <http://www.google.com/+sylob>\n> <http://www.viadeo.com/fr/company/sylob-sas>\n> <http://www.linkedin.com/company/sylob>\n>\n>\n",
"msg_date": "Fri, 23 Jan 2015 09:54:52 +0100",
"msg_from": "Laurent Cathala <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Initial insert"
},
{
"msg_contents": "Hi,\n\nOn 22.1.2015 17:46, Laurent Cathala wrote:\n> Hi, \n> I'm trying to create datas on an initial import and i'm encountering a\n> performance issue.\n> I've 2 tables, my process create a record in each table and execute a\n> sum with join on this 2 tables. (and other requests but there are very fast)\n> \n> My 2 tables are empty before the import.\n> \n> My count query is :\n> select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> caracteris1_.id_article='4028804c4a311178014a346546967c59'\n> \n> i use parameterized request.\n> \n> My process create only 6000 records in each table.\n> \n> During the whole process this sum request lasts longer and longer.\n> \n> The auto-explain plan show an seq scan \n> \n> ----------\n> Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> caracteris1_.id_article=$1\n> Aggregate (cost=2.04..2.05 rows=1 width=26) (actual\n> time=862.621..862.621 rows=1 loops=1)\n> Output: sum(quantitest0_.quantite_valeur)\n> -> Nested Loop (cost=0.00..2.04 rows=1 width=26) (actual\n> time=862.618..862.618 rows=0 loops=1)\n> Output: quantitest0_.quantite_valeur\n> Join Filter:\n> ((quantitest0_.id_caracteristiquearticlestock)::text =\n> (caracteris1_.id)::text)\n> Rows Removed by Join Filter: 1869\n> -> Seq Scan on public.dm5_quantitestock quantitest0_\n> (cost=0.00..1.01 rows=1 width=164) (actual time=0.004..0.408 rows=1869\n> loops=1)\n> Output: quantitest0_.id,\n> quantitest0_.datefinvalidite, quantitest0_.quantite_valeur,\n> quantitest0_.id_caracteristiquearticlestock,\n> quantitest0_.id_caracteristiquelieustock,\n> quantitest0_.datecreationsysteme, quantitest0_.datemodificationsysteme,\n> quantitest0_.id_creeparsysteme, quantitest0_.id_modifieparsysteme\n> -> Seq Scan on public.dm5_caracteristiquearticlestock\n> caracteris1_ (cost=0.00..1.01 rows=1 width=42) (actual\n> time=0.456..0.456 rows=1 loops=1869)\n> Output: caracteris1_.id,\n> caracteris1_.datefinvalidite, caracteris1_.id_lot,\n> caracteris1_.id_article, caracteris1_.id_numeroserie,\n> caracteris1_.datecreationsysteme, caracteris1_.datemodificationsysteme,\n> caracteris1_.id_modifieparsysteme, caracteris1_.id_creeparsysteme\n> Filter: ((caracteris1_.id_article)::text = ($1)::text)\n> Rows Removed by Filter: 1869\n> -----------\n> \n> if a launch an analyse during the process, the explain use index, but\n> the time remains the same.\n> \n> ---------\n> Query Text: select sum(quantitest0_.quantite_valeur) as col_0_0_ from\n> dm5_quantitestock quantitest0_, dm5_caracteristiquearticlestock\n> caracteris1_ where\n> quantitest0_.id_caracteristiquearticlestock=caracteris1_.id and\n> caracteris1_.id_article=$1\n> Aggregate (cost=16.55..16.56 rows=1 width=26) (actual\n> time=654.998..654.998 rows=1 loops=1)\n> Output: sum(quantitest0_.quantite_valeur)\n> -> Nested Loop (cost=0.00..16.55 rows=1 width=26) (actual\n> time=654.994..654.994 rows=0 loops=1)\n> Output: quantitest0_.quantite_valeur\n> Join Filter: ((quantitest0_.id_caracteristiquearticlestock)::text\n> = (caracteris1_.id)::text)\n> Rows Removed by Join Filter: 1651\n> -> Index Scan using x_dm5_quantitestock_00 on\n> public.dm5_quantitestock quantitest0_ (cost=0.00..8.27 rows=1\n> width=164) (actual 
time=0.011..0.579 rows=1651 loops=1)\n> Output: quantitest0_.id, quantitest0_.datefinvalidite,\n> quantitest0_.quantite_valeur,\n> quantitest0_.id_caracteristiquearticlestock,\n> quantitest0_.id_caracteristiquelieustock,\n> quantitest0_.datecreationsysteme, quantitest0_.datemodificationsysteme,\n> quantitest0_.id_creeparsysteme, quantitest0_.id_modifieparsysteme\n> -> Index Scan using dm5_caracteristiquearticlestock_pkey on\n> public.dm5_caracteristiquearticlestock caracteris1_ (cost=0.00..8.27\n> rows=1 width=42) (actual time=0.395..0.395 rows=1 loops=1651)\n> Output: caracteris1_.id, caracteris1_.datefinvalidite,\n> caracteris1_.id_lot, caracteris1_.id_article,\n> caracteris1_.id_numeroserie, caracteris1_.datecreationsysteme,\n> caracteris1_.datemodificationsysteme, caracteris1_.id_modifieparsysteme,\n> caracteris1_.id_creeparsysteme\n> Filter: ((caracteris1_.id_article)::text =\n> '4028804c4a311178014a346547307cce'::text)\n> Rows Removed by Filter: 1651\n> \n> ----------\n\nWhy is the first query using a parameter ($1) while the second one uses\na string literal? Have you executed them differently?\n\n> \n> If i create the first 1000 records, commit and end transaction, the \n> whole import is very fast.\n\nAnd what plans do the queries use?\n\n> \n> I can't change my process to cut the process in little part...\n\nWe're talking about a 600-800 ms query - even if you cut it to 1 ms, I\ndon't see how this would make a difference in a batch-style job.\n\nIf you're doing many such queries (with different id_article values),\nyou may do something like this:\n\nselect caracteris1_.id_article, sum(quantitest0_.quantite_valeur) as\ncol_0_0_ from dm5_quantitestock quantitest0_,\ndm5_caracteristiquearticlestock caracteris1_ where\nquantitest0_.id_caracteristiquearticlestock=caracteris1_.id\ngroup by caracteris1_.id_article\n\nand then query this (supposedly much smaller) table.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n",
"msg_date": "Sun, 25 Jan 2015 02:29:27 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Initial insert"
}
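One way to apply Tomas's suggestion inside the import is to materialize the aggregate once and then probe it per article. A sketch, where stock_par_article is a made-up temp table name:

CREATE TEMP TABLE stock_par_article AS
SELECT caracteris1_.id_article,
       sum(quantitest0_.quantite_valeur) AS total
FROM dm5_quantitestock quantitest0_
JOIN dm5_caracteristiquearticlestock caracteris1_
  ON quantitest0_.id_caracteristiquearticlestock = caracteris1_.id
GROUP BY caracteris1_.id_article;

CREATE INDEX ON stock_par_article (id_article);

-- each per-article lookup then becomes a cheap index probe:
SELECT total
FROM stock_par_article
WHERE id_article = '4028804c4a311178014a346546967c59';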
] |
[
{
"msg_contents": "The documentation states that \"The extent of analysis can be controlled by\nadjusting the default_statistics_target configuration variable\". It looks\nlike I can tell Postgres to create more histograms with more bins, and more\ndistinct values. This implicitly means that Postgres will use a larger\nrandom subset to calculate statistics. \n\nHowever, this is not what I want. My data may be quite skewed, and I want\nfull control over the size of the sample. I want to explicitly tell Postgres\nto analyze the whole table. How can I accomplish that?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-tell-ANALYZE-to-collect-statistics-from-the-whole-table-tp5835339.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 24 Jan 2015 16:33:19 -0700 (MST)",
"msg_from": "AlexK987 <[email protected]>",
"msg_from_op": true,
"msg_subject": "How to tell ANALYZE to collect statistics from the whole table?"
},
{
"msg_contents": "Hi,\n\nOn 25.1.2015 00:33, AlexK987 wrote:\n> The documentation states that \"The extent of analysis can be \n> controlled by adjusting the default_statistics_target configuration \n> variable\". It looks like I can tell Postgres to create more \n> histograms with more bins, and more distinct values. This implicitly\n> means that Postgres will use a larger random subset to calculate\n> statistics.\n> \n> However, this is not what I want. My data may be quite skewed, and I \n> want full control over the size of the sample. I want to explicitly \n> tell Postgres to analyze the whole table. How can I accomplish that?\n\nI don't think there's an official way to do that - at least I can't\nthink of one. The only thing you can do is increasing statistics target\n(either globally by setting default_statistics_target, or per column\nusing ALTER TABLE ... SET STATISTICS).\n\nAs you noticed, this however controls two things - sample size and how\ndetailed the statistics (MCV list / histogram) will be. The statistics\ntarget is used as upper bound for number of MCV items / histogram bins,\nand the number of sampled rows is (300 * statistics_target). With\ndefault_statistics_target = 10000 (which si the max allowed value since\n9.0), this produces very detailed stats and uses sample of ~3M rows.\n\nIt's a bit more complicated though, because there's an algorithm that\ndecides how many MCV items / histogram buckets to actually create, based\non the data. So you may not get more detailed stats, even when using\nlarger sample.\n\nThat being said, I really doubt increasing the statistics target above\n10000 (or even sampling the whole table) will help you in practice.\nMight be worth showing an example of a bad estimate with your data, or\nmaybe a test case to play with.\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 25 Jan 2015 01:49:23 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to tell ANALYZE to collect statistics from the\n whole table?"
},
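The per-column knob Tomas mentions looks like this in practice; my_table and my_column are placeholders, not anything from this thread:

ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS 10000;
ANALYZE my_table;   -- now samples roughly 300 * 10000 rows of my_table

-- -1 reverts the column to following default_statistics_target:
ALTER TABLE my_table ALTER COLUMN my_column SET STATISTICS -1;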
{
"msg_contents": "Tomas,\n\nThank you for a very useful reply. Right now I do not have a case of poor\nperformance caused by strong data skew which is not properly reflected in\nstatistics. I was being defensive, trying to prevent every possible thing\nthat might go wrong.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/How-to-tell-ANALYZE-to-collect-statistics-from-the-whole-table-tp5835339p5835344.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 24 Jan 2015 18:04:31 -0700 (MST)",
"msg_from": "AlexK987 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: How to tell ANALYZE to collect statistics from the whole table?"
},
{
"msg_contents": "On 25.1.2015 02:04, AlexK987 wrote:\n> Tomas,\n> \n> Thank you for a very useful reply. Right now I do not have a case of\n> poor performance caused by strong data skew which is not properly\n> reflected in statistics. I was being defensive, trying to prevent\n> every possible thing that might go wrong.\n\nOK. My recommendation is not to mess with default_statistics unless you\nactually have to (e.g. increasing the value on all tables, withouth a\nquery where the current value causes trouble). It increases time to plan\nthe queries, collect statistics (ANALYZE / autovacuum) etc.\n\nregards\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 25 Jan 2015 02:21:50 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to tell ANALYZE to collect statistics from the\n whole table?"
},
{
"msg_contents": "AlexK987 <[email protected]> writes:\n> The documentation states that \"The extent of analysis can be controlled by\n> adjusting the default_statistics_target configuration variable\". It looks\n> like I can tell Postgres to create more histograms with more bins, and more\n> distinct values. This implicitly means that Postgres will use a larger\n> random subset to calculate statistics. \n\n> However, this is not what I want. My data may be quite skewed, and I want\n> full control over the size of the sample. I want to explicitly tell Postgres\n> to analyze the whole table. How can I accomplish that?\n\nYou can't, and you wouldn't want to if you could, because that would\nresult in slurping the entire table into backend local memory. All\nthe rows constituting the \"random sample\" are held in memory while\ndoing the statistical calculations.\n\nIn practice, the only stat that would be materially improved by taking\nenormously large samples would be the number-of-distinct-values estimate.\nThere's already a way you can override ANALYZE's estimate of that number\nif you need to.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 25 Jan 2015 00:14:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to tell ANALYZE to collect statistics from the whole table?"
},
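The override Tom refers to is the per-column n_distinct storage parameter; my_table and my_column are placeholders. A positive value is taken as an absolute count, a negative value as a fraction of the row count, and either one takes effect at the next ANALYZE:

ALTER TABLE my_table ALTER COLUMN my_column SET (n_distinct = 500000);
-- or, as a fraction: -0.1 means "10% of the rows are distinct"
ALTER TABLE my_table ALTER COLUMN my_column SET (n_distinct = -0.1);
ANALYZE my_table;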
{
"msg_contents": "On Sat, Jan 24, 2015 at 9:14 PM, Tom Lane <[email protected]> wrote:\n\n> AlexK987 <[email protected]> writes:\n> > The documentation states that \"The extent of analysis can be controlled\n> by\n> > adjusting the default_statistics_target configuration variable\". It looks\n> > like I can tell Postgres to create more histograms with more bins, and\n> more\n> > distinct values. This implicitly means that Postgres will use a larger\n> > random subset to calculate statistics.\n>\n> > However, this is not what I want. My data may be quite skewed, and I want\n> > full control over the size of the sample. I want to explicitly tell\n> Postgres\n> > to analyze the whole table. How can I accomplish that?\n>\n> You can't, and you wouldn't want to if you could, because that would\n> result in slurping the entire table into backend local memory. All\n> the rows constituting the \"random sample\" are held in memory while\n> doing the statistical calculations.\n>\n> In practice, the only stat that would be materially improved by taking\n> enormously large samples would be the number-of-distinct-values estimate.\n> There's already a way you can override ANALYZE's estimate of that number\n> if you need to.\n>\n\nThe accuracy of the list of most common values could also be improved a lot\nby increasing the sample.\n\nCheers,\n\nJeff\n\nOn Sat, Jan 24, 2015 at 9:14 PM, Tom Lane <[email protected]> wrote:AlexK987 <[email protected]> writes:\n> The documentation states that \"The extent of analysis can be controlled by\n> adjusting the default_statistics_target configuration variable\". It looks\n> like I can tell Postgres to create more histograms with more bins, and more\n> distinct values. This implicitly means that Postgres will use a larger\n> random subset to calculate statistics.\n\n> However, this is not what I want. My data may be quite skewed, and I want\n> full control over the size of the sample. I want to explicitly tell Postgres\n> to analyze the whole table. How can I accomplish that?\n\nYou can't, and you wouldn't want to if you could, because that would\nresult in slurping the entire table into backend local memory. All\nthe rows constituting the \"random sample\" are held in memory while\ndoing the statistical calculations.\n\nIn practice, the only stat that would be materially improved by taking\nenormously large samples would be the number-of-distinct-values estimate.\nThere's already a way you can override ANALYZE's estimate of that number\nif you need to.The accuracy of the list of most common values could also be improved a lot by increasing the sample. Cheers,Jeff",
"msg_date": "Mon, 26 Jan 2015 08:51:00 -0800",
"msg_from": "Jeff Janes <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: How to tell ANALYZE to collect statistics from the\n whole table?"
}
] |
[
{
"msg_contents": "I have an events table that records page views and purchases (type =\n'viewed' or type='purchased'). I have a query that figures out \"people who\nbought/viewed this also bought/viewed that\".\n\nIt worked fine, taking about 0.1 seconds to complete, until a few hours ago\nwhen it started taking hours to complete. Vacuum/analyze didn't help.\nTurned out there was one session_id that had 400k rows in the system.\nDeleting that made the query performant again.\n\nIs there anything I can do to make the query work better in cases like\nthat? Missing index, or better query?\n\nThis is on 9.3.5.\n\nThe below is reproduced at the following URL if it's not formatted\ncorrectly in the email.\nhttps://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n\nexplain select\n e1.product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e1.product_id != e2.product_id\n group by e1.product_id, e2.product_id, e2.site_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n Sort Key: e1.product_id, e2.product_id, e2.site_id\n -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31\nrows=369 width=49)\n Recheck Cond: (product_id = '82503'::citext)\n -> Bitmap Index Scan on\nevents_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n Index Cond: (product_id = '82503'::citext)\n -> Index Scan using\nevents_session_id_type_product_id_idx on events e2 (cost=0.56..51.28\nrows=12 width=51)\n Index Cond: ((session_id = e1.session_id) AND\n(type = e1.type))\n Filter: (e1.product_id <> product_id)\n(11 rows)\n\nrecommender_production=> \\d events\n Table \"public.events\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default\nnextval('events_id_seq'::regclass)\n user_id | citext |\n session_id | citext | not null\n product_id | citext | not null\n site_id | citext | not null\n type | text | not null\n happened_at | timestamp with time zone | not null\n created_at | timestamp with time zone | not null\nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (id)\n \"events_product_id_site_id_idx\" btree (product_id, site_id)\n \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\nCheck constraints:\n \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text,\n'viewed'::text]))\n \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n\nI have an events table that records page views and purchases (type = 'viewed' or type='purchased'). I have a query that figures out \"people who bought/viewed this also bought/viewed that\".It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant again. 
Is there anything I can do to make the query work better in cases like that? Missing index, or better query?This is on 9.3.5.The below is reproduced at the following URL if it's not formatted correctly in the email. https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txtexplain select\n e1.product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e1.product_id != e2.product_id\n group by e1.product_id, e2.product_id, e2.site_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n Sort Key: e1.product_id, e2.product_id, e2.site_id\n -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n Recheck Cond: (product_id = '82503'::citext)\n -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n Index Cond: (product_id = '82503'::citext)\n -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n Filter: (e1.product_id <> product_id)\n(11 rows)\n\nrecommender_production=> \\d events\n Table \"public.events\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default nextval('events_id_seq'::regclass)\n user_id | citext |\n session_id | citext | not null\n product_id | citext | not null\n site_id | citext | not null\n type | text | not null\n happened_at | timestamp with time zone | not null\n created_at | timestamp with time zone | not null\nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (id)\n \"events_product_id_site_id_idx\" btree (product_id, site_id)\n \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\nCheck constraints:\n \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n \"events_user_id_check\" CHECK (length(user_id::text) < 255)",
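\n\n(For anyone reproducing this: a skew check along these lines will surface a\nrunaway session like that one -- names match the schema above:\n\n    select session_id, count(*) as n\n    from events\n    group by session_id\n    order by n desc\n    limit 10;\n)",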
"msg_date": "Sat, 24 Jan 2015 21:41:13 -0800",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Query performance"
},
{
"msg_contents": "On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n\n> I have an events table that records page views and purchases (type =\n> 'viewed' or type='purchased'). I have a query that figures out \"people who\n> bought/viewed this also bought/viewed that\".\n>\n> It worked fine, taking about 0.1 seconds to complete, until a few hours\n> ago when it started taking hours to complete. Vacuum/analyze didn't help.\n> Turned out there was one session_id that had 400k rows in the system.\n> Deleting that made the query performant again.\n>\n> Is there anything I can do to make the query work better in cases like\n> that? Missing index, or better query?\n>\n> This is on 9.3.5.\n>\n> The below is reproduced at the following URL if it's not formatted\n> correctly in the email.\n> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>\n> explain select\n> e1.product_id,\n> e2.site_id,\n> e2.product_id,\n> count(nullif(e2.type='viewed', false)) view_count,\n> count(nullif(e2.type='purchased', false)) purchase_count\n> from events e1\n> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n> where\n> e1.product_id = '82503' and\n> e1.product_id != e2.product_id\n> group by e1.product_id, e2.product_id, e2.site_id;\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n> Sort Key: e1.product_id, e2.product_id, e2.site_id\n> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n> Recheck Cond: (product_id = '82503'::citext)\n> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n> Index Cond: (product_id = '82503'::citext)\n> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n> Filter: (e1.product_id <> product_id)\n> (11 rows)\n>\n> recommender_production=> \\d events\n> Table \"public.events\"\n> Column | Type | Modifiers\n> -------------+--------------------------+-----------------------------------------------------\n> id | bigint | not null default nextval('events_id_seq'::regclass)\n> user_id | citext |\n> session_id | citext | not null\n> product_id | citext | not null\n> site_id | citext | not null\n> type | text | not null\n> happened_at | timestamp with time zone | not null\n> created_at | timestamp with time zone | not null\n> Indexes:\n> \"events_pkey\" PRIMARY KEY, btree (id)\n> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n> Check constraints:\n> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>\n>\n>\n>\nAfter removing the session with 400k events, I was able to do an explain\nanalyze, here is one of them:\nhttp://explain.depesz.com/s/PFNk\n\nOn Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:I have an events table that records page views and purchases (type = 'viewed' or type='purchased'). 
I have a query that figures out \"people who bought/viewed this also bought/viewed that\".It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant again. Is there anything I can do to make the query work better in cases like that? Missing index, or better query?This is on 9.3.5.The below is reproduced at the following URL if it's not formatted correctly in the email. https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txtexplain select\n e1.product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e1.product_id != e2.product_id\n group by e1.product_id, e2.product_id, e2.site_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n Sort Key: e1.product_id, e2.product_id, e2.site_id\n -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n Recheck Cond: (product_id = '82503'::citext)\n -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n Index Cond: (product_id = '82503'::citext)\n -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n Filter: (e1.product_id <> product_id)\n(11 rows)\n\nrecommender_production=> \\d events\n Table \"public.events\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default nextval('events_id_seq'::regclass)\n user_id | citext |\n session_id | citext | not null\n product_id | citext | not null\n site_id | citext | not null\n type | text | not null\n happened_at | timestamp with time zone | not null\n created_at | timestamp with time zone | not null\nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (id)\n \"events_product_id_site_id_idx\" btree (product_id, site_id)\n \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\nCheck constraints:\n \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n \"events_user_id_check\" CHECK (length(user_id::text) < 255)\nAfter removing the session with 400k events, I was able to do an explain analyze, here is one of them:http://explain.depesz.com/s/PFNk",
"msg_date": "Sat, 24 Jan 2015 21:43:17 -0800",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Oops, didn't run vacuum analyze after deleting the events. Here is another\n'explain analyze': http://explain.depesz.com/s/AviN\n\nOn Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n\n> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>\n>> I have an events table that records page views and purchases (type =\n>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>> bought/viewed this also bought/viewed that\".\n>>\n>> It worked fine, taking about 0.1 seconds to complete, until a few hours\n>> ago when it started taking hours to complete. Vacuum/analyze didn't help.\n>> Turned out there was one session_id that had 400k rows in the system.\n>> Deleting that made the query performant again.\n>>\n>> Is there anything I can do to make the query work better in cases like\n>> that? Missing index, or better query?\n>>\n>> This is on 9.3.5.\n>>\n>> The below is reproduced at the following URL if it's not formatted\n>> correctly in the email.\n>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>\n>> explain select\n>> e1.product_id,\n>> e2.site_id,\n>> e2.product_id,\n>> count(nullif(e2.type='viewed', false)) view_count,\n>> count(nullif(e2.type='purchased', false)) purchase_count\n>> from events e1\n>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>> where\n>> e1.product_id = '82503' and\n>> e1.product_id != e2.product_id\n>> group by e1.product_id, e2.product_id, e2.site_id;\n>> QUERY PLAN\n>> ----------------------------------------------------------------------------------------------------------------------------\n>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>> Recheck Cond: (product_id = '82503'::citext)\n>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>> Index Cond: (product_id = '82503'::citext)\n>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>> Filter: (e1.product_id <> product_id)\n>> (11 rows)\n>>\n>> recommender_production=> \\d events\n>> Table \"public.events\"\n>> Column | Type | Modifiers\n>> -------------+--------------------------+-----------------------------------------------------\n>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>> user_id | citext |\n>> session_id | citext | not null\n>> product_id | citext | not null\n>> site_id | citext | not null\n>> type | text | not null\n>> happened_at | timestamp with time zone | not null\n>> created_at | timestamp with time zone | not null\n>> Indexes:\n>> \"events_pkey\" PRIMARY KEY, btree (id)\n>> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>> Check constraints:\n>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>\n>>\n>>\n>>\n> After removing the session with 400k events, I was able to do an explain\n> 
analyze, here is one of them:\n> http://explain.depesz.com/s/PFNk\n>
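\n\n(For the archives, that was simply:\n\n    vacuum analyze events;\n\nrun after the delete and before taking the plan above.)",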
"msg_date": "Sat, 24 Jan 2015 21:45:50 -0800",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "Hi\n\nthis plan looks well\n\nRegards\n\nPavel\n\n2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:\n\n> Oops, didn't run vacuum analyze after deleting the events. Here is another\n> 'explain analyze': http://explain.depesz.com/s/AviN\n>\n> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n>\n>> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>>\n>>> I have an events table that records page views and purchases (type =\n>>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>>> bought/viewed this also bought/viewed that\".\n>>>\n>>> It worked fine, taking about 0.1 seconds to complete, until a few hours\n>>> ago when it started taking hours to complete. Vacuum/analyze didn't help.\n>>> Turned out there was one session_id that had 400k rows in the system.\n>>> Deleting that made the query performant again.\n>>>\n>>> Is there anything I can do to make the query work better in cases like\n>>> that? Missing index, or better query?\n>>>\n>>> This is on 9.3.5.\n>>>\n>>> The below is reproduced at the following URL if it's not formatted\n>>> correctly in the email.\n>>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>>\n>>> explain select\n>>> e1.product_id,\n>>> e2.site_id,\n>>> e2.product_id,\n>>> count(nullif(e2.type='viewed', false)) view_count,\n>>> count(nullif(e2.type='purchased', false)) purchase_count\n>>> from events e1\n>>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>>> where\n>>> e1.product_id = '82503' and\n>>> e1.product_id != e2.product_id\n>>> group by e1.product_id, e2.product_id, e2.site_id;\n>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------------------------------------------------------\n>>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>>> Recheck Cond: (product_id = '82503'::citext)\n>>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>>> Index Cond: (product_id = '82503'::citext)\n>>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>>> Filter: (e1.product_id <> product_id)\n>>> (11 rows)\n>>>\n>>> recommender_production=> \\d events\n>>> Table \"public.events\"\n>>> Column | Type | Modifiers\n>>> -------------+--------------------------+-----------------------------------------------------\n>>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>>> user_id | citext |\n>>> session_id | citext | not null\n>>> product_id | citext | not null\n>>> site_id | citext | not null\n>>> type | text | not null\n>>> happened_at | timestamp with time zone | not null\n>>> created_at | timestamp with time zone | not null\n>>> Indexes:\n>>> \"events_pkey\" PRIMARY KEY, btree (id)\n>>> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n>>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>>> Check constraints:\n>>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>>> \"events_type_check\" CHECK (type = ANY 
(ARRAY['purchased'::text, 'viewed'::text]))\n>>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>>\n>>>\n>>>\n>>>\n>> After removing the session with 400k events, I was able to do an explain\n>> analyze, here is one of them:\n>> http://explain.depesz.com/s/PFNk\n>>\n>",
"msg_date": "Sun, 25 Jan 2015 07:12:11 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]>\nwrote:\n\n> Hi\n>\n> this plan looks well\n>\n> Regards\n>\n> Pavel\n>\n\nHere's one that's not quite as well: http://explain.depesz.com/s/SgT\n\nJoe\n\n\n>\n> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:\n>\n>> Oops, didn't run vacuum analyze after deleting the events. Here is\n>> another 'explain analyze': http://explain.depesz.com/s/AviN\n>>\n>> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n>>\n>>> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>>>\n>>>> I have an events table that records page views and purchases (type =\n>>>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>>>> bought/viewed this also bought/viewed that\".\n>>>>\n>>>> It worked fine, taking about 0.1 seconds to complete, until a few hours\n>>>> ago when it started taking hours to complete. Vacuum/analyze didn't help.\n>>>> Turned out there was one session_id that had 400k rows in the system.\n>>>> Deleting that made the query performant again.\n>>>>\n>>>> Is there anything I can do to make the query work better in cases like\n>>>> that? Missing index, or better query?\n>>>>\n>>>> This is on 9.3.5.\n>>>>\n>>>> The below is reproduced at the following URL if it's not formatted\n>>>> correctly in the email.\n>>>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>>>\n>>>> explain select\n>>>> e1.product_id,\n>>>> e2.site_id,\n>>>> e2.product_id,\n>>>> count(nullif(e2.type='viewed', false)) view_count,\n>>>> count(nullif(e2.type='purchased', false)) purchase_count\n>>>> from events e1\n>>>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>>>> where\n>>>> e1.product_id = '82503' and\n>>>> e1.product_id != e2.product_id\n>>>> group by e1.product_id, e2.product_id, e2.site_id;\n>>>> QUERY PLAN\n>>>> ----------------------------------------------------------------------------------------------------------------------------\n>>>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>>>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>>>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>>>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>>>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>>>> Recheck Cond: (product_id = '82503'::citext)\n>>>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>>>> Index Cond: (product_id = '82503'::citext)\n>>>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>>>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>>>> Filter: (e1.product_id <> product_id)\n>>>> (11 rows)\n>>>>\n>>>> recommender_production=> \\d events\n>>>> Table \"public.events\"\n>>>> Column | Type | Modifiers\n>>>> -------------+--------------------------+-----------------------------------------------------\n>>>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>>>> user_id | citext |\n>>>> session_id | citext | not null\n>>>> product_id | citext | not null\n>>>> site_id | citext | not null\n>>>> type | text | not null\n>>>> happened_at | timestamp with time zone | not null\n>>>> created_at | timestamp with time zone | not null\n>>>> Indexes:\n>>>> \"events_pkey\" PRIMARY KEY, btree (id)\n>>>> \"events_product_id_site_id_idx\" btree (product_id, 
site_id)\n>>>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>>>> Check constraints:\n>>>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>>>> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n>>>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>>>\n>>>>\n>>>>\n>>>>\n>>> After removing the session with 400k events, I was able to do an explain\n>>> analyze, here is one of them:\n>>> http://explain.depesz.com/s/PFNk\n>>>\n>>\n>>\n>\n\nOn Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]> wrote:Hithis plan looks wellRegardsPavelHere's one that's not quite as well: http://explain.depesz.com/s/SgTJoe 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:Oops, didn't run vacuum analyze after deleting the events. Here is another 'explain analyze': http://explain.depesz.com/s/AviNOn Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:I have an events table that records page views and purchases (type = 'viewed' or type='purchased'). I have a query that figures out \"people who bought/viewed this also bought/viewed that\".It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant again. Is there anything I can do to make the query work better in cases like that? Missing index, or better query?This is on 9.3.5.The below is reproduced at the following URL if it's not formatted correctly in the email. https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txtexplain select\n e1.product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e1.product_id != e2.product_id\n group by e1.product_id, e2.product_id, e2.site_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n Sort Key: e1.product_id, e2.product_id, e2.site_id\n -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n Recheck Cond: (product_id = '82503'::citext)\n -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n Index Cond: (product_id = '82503'::citext)\n -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n Filter: (e1.product_id <> product_id)\n(11 rows)\n\nrecommender_production=> \\d events\n Table \"public.events\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default nextval('events_id_seq'::regclass)\n user_id | citext |\n session_id | citext | not null\n product_id | citext | not null\n site_id | citext | not null\n type | text | not null\n happened_at | timestamp with time zone | not null\n 
created_at | timestamp with time zone | not null\nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (id)\n \"events_product_id_site_id_idx\" btree (product_id, site_id)\n \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\nCheck constraints:\n \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n \"events_user_id_check\" CHECK (length(user_id::text) < 255)\nAfter removing the session with 400k events, I was able to do an explain analyze, here is one of them:http://explain.depesz.com/s/PFNk",
"msg_date": "Sat, 24 Jan 2015 22:38:04 -0800",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "2015-01-25 7:38 GMT+01:00 Joe Van Dyk <[email protected]>:\n\n>\n>\n> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]>\n> wrote:\n>\n>> Hi\n>>\n>> this plan looks well\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> Here's one that's not quite as well: http://explain.depesz.com/s/SgT\n>\n\nI see a possible issue\n\n(product_id <> '81716'::citext) .. this operation is CPU expensive and\nmaybe nonsense\n\nproduct_id should be integer -- and if it isn't - it should not be on 4M\nrows extremly fast - mainly on citext\n\ntry to force a opposite cast - you will safe a case insensitive text\ncomparation\n\nproduct_id::int <> 81716\n\nRegards\n\nPavel\n\n\n\n\n>\n> Joe\n>\n>\n>>\n>> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:\n>>\n>>> Oops, didn't run vacuum analyze after deleting the events. Here is\n>>> another 'explain analyze': http://explain.depesz.com/s/AviN\n>>>\n>>> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n>>>\n>>>> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>>>>\n>>>>> I have an events table that records page views and purchases (type =\n>>>>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>>>>> bought/viewed this also bought/viewed that\".\n>>>>>\n>>>>> It worked fine, taking about 0.1 seconds to complete, until a few\n>>>>> hours ago when it started taking hours to complete. Vacuum/analyze didn't\n>>>>> help. Turned out there was one session_id that had 400k rows in the\n>>>>> system. Deleting that made the query performant again.\n>>>>>\n>>>>> Is there anything I can do to make the query work better in cases like\n>>>>> that? Missing index, or better query?\n>>>>>\n>>>>> This is on 9.3.5.\n>>>>>\n>>>>> The below is reproduced at the following URL if it's not formatted\n>>>>> correctly in the email.\n>>>>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>>>>\n>>>>> explain select\n>>>>> e1.product_id,\n>>>>> e2.site_id,\n>>>>> e2.product_id,\n>>>>> count(nullif(e2.type='viewed', false)) view_count,\n>>>>> count(nullif(e2.type='purchased', false)) purchase_count\n>>>>> from events e1\n>>>>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>>>>> where\n>>>>> e1.product_id = '82503' and\n>>>>> e1.product_id != e2.product_id\n>>>>> group by e1.product_id, e2.product_id, e2.site_id;\n>>>>> QUERY PLAN\n>>>>> ----------------------------------------------------------------------------------------------------------------------------\n>>>>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>>>>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>>>>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>>>>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>>>>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>>>>> Recheck Cond: (product_id = '82503'::citext)\n>>>>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>>>>> Index Cond: (product_id = '82503'::citext)\n>>>>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>>>>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>>>>> Filter: (e1.product_id <> product_id)\n>>>>> (11 rows)\n>>>>>\n>>>>> recommender_production=> \\d events\n>>>>> Table \"public.events\"\n>>>>> Column | Type | Modifiers\n>>>>> 
-------------+--------------------------+-----------------------------------------------------\n>>>>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>>>>> user_id | citext |\n>>>>> session_id | citext | not null\n>>>>> product_id | citext | not null\n>>>>> site_id | citext | not null\n>>>>> type | text | not null\n>>>>> happened_at | timestamp with time zone | not null\n>>>>> created_at | timestamp with time zone | not null\n>>>>> Indexes:\n>>>>> \"events_pkey\" PRIMARY KEY, btree (id)\n>>>>> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n>>>>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>>>>> Check constraints:\n>>>>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>>>>> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n>>>>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>> After removing the session with 400k events, I was able to do an\n>>>> explain analyze, here is one of them:\n>>>> http://explain.depesz.com/s/PFNk\n>>>>\n>>>\n>>>\n>>\n>\n\n2015-01-25 7:38 GMT+01:00 Joe Van Dyk <[email protected]>:On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]> wrote:Hithis plan looks wellRegardsPavelHere's one that's not quite as well: http://explain.depesz.com/s/SgTI see a possible issue (product_id <> '81716'::citext) .. this operation is CPU expensive and maybe nonsense product_id should be integer -- and if it isn't - it should not be on 4M rows extremly fast - mainly on citext try to force a opposite cast - you will safe a case insensitive text comparationproduct_id::int <> 81716RegardsPavel Joe 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:Oops, didn't run vacuum analyze after deleting the events. Here is another 'explain analyze': http://explain.depesz.com/s/AviNOn Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:I have an events table that records page views and purchases (type = 'viewed' or type='purchased'). I have a query that figures out \"people who bought/viewed this also bought/viewed that\".It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant again. Is there anything I can do to make the query work better in cases like that? Missing index, or better query?This is on 9.3.5.The below is reproduced at the following URL if it's not formatted correctly in the email. 
https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txtexplain select\n e1.product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e1.product_id != e2.product_id\n group by e1.product_id, e2.product_id, e2.site_id;\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n Sort Key: e1.product_id, e2.product_id, e2.site_id\n -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n Recheck Cond: (product_id = '82503'::citext)\n -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n Index Cond: (product_id = '82503'::citext)\n -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n Filter: (e1.product_id <> product_id)\n(11 rows)\n\nrecommender_production=> \\d events\n Table \"public.events\"\n Column | Type | Modifiers\n-------------+--------------------------+-----------------------------------------------------\n id | bigint | not null default nextval('events_id_seq'::regclass)\n user_id | citext |\n session_id | citext | not null\n product_id | citext | not null\n site_id | citext | not null\n type | text | not null\n happened_at | timestamp with time zone | not null\n created_at | timestamp with time zone | not null\nIndexes:\n \"events_pkey\" PRIMARY KEY, btree (id)\n \"events_product_id_site_id_idx\" btree (product_id, site_id)\n \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\nCheck constraints:\n \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n \"events_user_id_check\" CHECK (length(user_id::text) < 255)\nAfter removing the session with 400k events, I was able to do an explain analyze, here is one of them:http://explain.depesz.com/s/PFNk",
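\n\nApplied to the posted query that becomes (a sketch -- the int cast only works\nwhen every product_id value really is numeric):\n\n    where e1.product_id = '82503' and\n          e2.product_id::int <> 82503\n\nor, if the values are not guaranteed numeric, a plain text cast still avoids\nthe case-insensitive citext comparison:\n\n    where e1.product_id = '82503' and\n          e2.product_id::text <> e1.product_id::text",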
"msg_date": "Sun, 25 Jan 2015 08:14:08 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule <[email protected]>\nwrote:\n\n>\n>\n> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk <[email protected]>:\n>\n>>\n>>\n>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]>\n>> wrote:\n>>\n>>> Hi\n>>>\n>>> this plan looks well\n>>>\n>>> Regards\n>>>\n>>> Pavel\n>>>\n>>\n>> Here's one that's not quite as well: http://explain.depesz.com/s/SgT\n>>\n>\n> I see a possible issue\n>\n> (product_id <> '81716'::citext) .. this operation is CPU expensive and\n> maybe nonsense\n>\n> product_id should be integer -- and if it isn't - it should not be on 4M\n> rows extremly fast - mainly on citext\n>\n> try to force a opposite cast - you will safe a case insensitive text\n> comparation\n>\n> product_id::int <> 81716\n>\n\nIt might not always be an integer, just happens to be so here. Should I try\ntext instead? I don't have to have the case-insensitive matching.\n\nJoe\n\n\n>\n> Regards\n>\n> Pavel\n>\n>\n>\n>\n>>\n>> Joe\n>>\n>>\n>>>\n>>> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:\n>>>\n>>>> Oops, didn't run vacuum analyze after deleting the events. Here is\n>>>> another 'explain analyze': http://explain.depesz.com/s/AviN\n>>>>\n>>>> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n>>>>\n>>>>> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>>>>>\n>>>>>> I have an events table that records page views and purchases (type =\n>>>>>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>>>>>> bought/viewed this also bought/viewed that\".\n>>>>>>\n>>>>>> It worked fine, taking about 0.1 seconds to complete, until a few\n>>>>>> hours ago when it started taking hours to complete. Vacuum/analyze didn't\n>>>>>> help. Turned out there was one session_id that had 400k rows in the\n>>>>>> system. Deleting that made the query performant again.\n>>>>>>\n>>>>>> Is there anything I can do to make the query work better in cases\n>>>>>> like that? 
Missing index, or better query?\n>>>>>>\n>>>>>> This is on 9.3.5.\n>>>>>>\n>>>>>> The below is reproduced at the following URL if it's not formatted\n>>>>>> correctly in the email.\n>>>>>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>>>>>\n>>>>>> explain select\n>>>>>> e1.product_id,\n>>>>>> e2.site_id,\n>>>>>> e2.product_id,\n>>>>>> count(nullif(e2.type='viewed', false)) view_count,\n>>>>>> count(nullif(e2.type='purchased', false)) purchase_count\n>>>>>> from events e1\n>>>>>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>>>>>> where\n>>>>>> e1.product_id = '82503' and\n>>>>>> e1.product_id != e2.product_id\n>>>>>> group by e1.product_id, e2.product_id, e2.site_id;\n>>>>>> QUERY PLAN\n>>>>>> ----------------------------------------------------------------------------------------------------------------------------\n>>>>>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>>>>>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>>>>>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>>>>>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>>>>>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>>>>>> Recheck Cond: (product_id = '82503'::citext)\n>>>>>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>>>>>> Index Cond: (product_id = '82503'::citext)\n>>>>>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>>>>>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>>>>>> Filter: (e1.product_id <> product_id)\n>>>>>> (11 rows)\n>>>>>>\n>>>>>> recommender_production=> \\d events\n>>>>>> Table \"public.events\"\n>>>>>> Column | Type | Modifiers\n>>>>>> -------------+--------------------------+-----------------------------------------------------\n>>>>>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>>>>>> user_id | citext |\n>>>>>> session_id | citext | not null\n>>>>>> product_id | citext | not null\n>>>>>> site_id | citext | not null\n>>>>>> type | text | not null\n>>>>>> happened_at | timestamp with time zone | not null\n>>>>>> created_at | timestamp with time zone | not null\n>>>>>> Indexes:\n>>>>>> \"events_pkey\" PRIMARY KEY, btree (id)\n>>>>>> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n>>>>>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>>>>>> Check constraints:\n>>>>>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>>>>>> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n>>>>>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>> After removing the session with 400k events, I was able to do an\n>>>>> explain analyze, here is one of them:\n>>>>> http://explain.depesz.com/s/PFNk\n>>>>>\n>>>>\n>>>>\n>>>\n>>\n>",
"msg_date": "Sat, 24 Jan 2015 23:20:59 -0800",
"msg_from": "Joe Van Dyk <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Query performance"
},
{
"msg_contents": "2015-01-25 8:20 GMT+01:00 Joe Van Dyk <[email protected]>:\n\n> On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule <[email protected]>\n> wrote:\n>\n>>\n>>\n>> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk <[email protected]>:\n>>\n>>>\n>>>\n>>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule <[email protected]\n>>> > wrote:\n>>>\n>>>> Hi\n>>>>\n>>>> this plan looks well\n>>>>\n>>>> Regards\n>>>>\n>>>> Pavel\n>>>>\n>>>\n>>> Here's one that's not quite as well: http://explain.depesz.com/s/SgT\n>>>\n>>\n>> I see a possible issue\n>>\n>> (product_id <> '81716'::citext) .. this operation is CPU expensive and\n>> maybe nonsense\n>>\n>> product_id should be integer -- and if it isn't - it should not be on 4M\n>> rows extremly fast - mainly on citext\n>>\n>> try to force a opposite cast - you will safe a case insensitive text\n>> comparation\n>>\n>> product_id::int <> 81716\n>>\n>\n> It might not always be an integer, just happens to be so here. Should I\n> try text instead? I don't have to have the case-insensitive matching.\n>\n\ntext can be better\n\nthis design is unhappy, but you cannot to change ot probably\n\n\n\n>\n> Joe\n>\n>\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>>\n>>\n>>\n>>>\n>>> Joe\n>>>\n>>>\n>>>>\n>>>> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk <[email protected]>:\n>>>>\n>>>>> Oops, didn't run vacuum analyze after deleting the events. Here is\n>>>>> another 'explain analyze': http://explain.depesz.com/s/AviN\n>>>>>\n>>>>> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk <[email protected]> wrote:\n>>>>>\n>>>>>> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk <[email protected]> wrote:\n>>>>>>\n>>>>>>> I have an events table that records page views and purchases (type =\n>>>>>>> 'viewed' or type='purchased'). I have a query that figures out \"people who\n>>>>>>> bought/viewed this also bought/viewed that\".\n>>>>>>>\n>>>>>>> It worked fine, taking about 0.1 seconds to complete, until a few\n>>>>>>> hours ago when it started taking hours to complete. Vacuum/analyze didn't\n>>>>>>> help. Turned out there was one session_id that had 400k rows in the\n>>>>>>> system. Deleting that made the query performant again.\n>>>>>>>\n>>>>>>> Is there anything I can do to make the query work better in cases\n>>>>>>> like that? 
Missing index, or better query?\n>>>>>>>\n>>>>>>> This is on 9.3.5.\n>>>>>>>\n>>>>>>> The below is reproduced at the following URL if it's not formatted\n>>>>>>> correctly in the email.\n>>>>>>> https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n>>>>>>>\n>>>>>>> explain select\n>>>>>>> e1.product_id,\n>>>>>>> e2.site_id,\n>>>>>>> e2.product_id,\n>>>>>>> count(nullif(e2.type='viewed', false)) view_count,\n>>>>>>> count(nullif(e2.type='purchased', false)) purchase_count\n>>>>>>> from events e1\n>>>>>>> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n>>>>>>> where\n>>>>>>> e1.product_id = '82503' and\n>>>>>>> e1.product_id != e2.product_id\n>>>>>>> group by e1.product_id, e2.product_id, e2.site_id;\n>>>>>>> QUERY PLAN\n>>>>>>> ----------------------------------------------------------------------------------------------------------------------------\n>>>>>>> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n>>>>>>> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n>>>>>>> Sort Key: e1.product_id, e2.product_id, e2.site_id\n>>>>>>> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n>>>>>>> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n>>>>>>> Recheck Cond: (product_id = '82503'::citext)\n>>>>>>> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n>>>>>>> Index Cond: (product_id = '82503'::citext)\n>>>>>>> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n>>>>>>> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n>>>>>>> Filter: (e1.product_id <> product_id)\n>>>>>>> (11 rows)\n>>>>>>>\n>>>>>>> recommender_production=> \\d events\n>>>>>>> Table \"public.events\"\n>>>>>>> Column | Type | Modifiers\n>>>>>>> -------------+--------------------------+-----------------------------------------------------\n>>>>>>> id | bigint | not null default nextval('events_id_seq'::regclass)\n>>>>>>> user_id | citext |\n>>>>>>> session_id | citext | not null\n>>>>>>> product_id | citext | not null\n>>>>>>> site_id | citext | not null\n>>>>>>> type | text | not null\n>>>>>>> happened_at | timestamp with time zone | not null\n>>>>>>> created_at | timestamp with time zone | not null\n>>>>>>> Indexes:\n>>>>>>> \"events_pkey\" PRIMARY KEY, btree (id)\n>>>>>>> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n>>>>>>> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>>>>>>> Check constraints:\n>>>>>>> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n>>>>>>> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n>>>>>>> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>>>\n>>>>>> After removing the session with 400k events, I was able to do an\n>>>>>> explain analyze, here is one of them:\n>>>>>> http://explain.depesz.com/s/PFNk\n>>>>>>\n>>>>>\n>>>>>\n>>>>\n>>>\n>>\n>\n",
"msg_date": "Sun, 25 Jan 2015 09:03:27 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
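A minimal sketch of the two casts discussed above, against the events table from this thread; whether product_id::int is safe depends on every product_id actually being numeric, which Joe says is not guaranteed:

-- Integer comparison (only valid while all product_id values are numeric):
explain analyze
select count(*) from events where product_id::int <> 81716;

-- Plain text comparison: drops citext's case-insensitive matching,
-- which Joe says he does not need:
explain analyze
select count(*) from events where product_id::text <> '81716';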
{
"msg_contents": "Hi,\n\nOn 25.1.2015 07:38, Joe Van Dyk wrote:\n> \n> Here's one that's not quite as well: http://explain.depesz.com/s/SgT\n\nAs Pavel already pointed out, the first problem is this part of the plan:\n\nSeq Scan on events e2 (cost=0.00..120,179.60 rows=4,450,241 width=51)\n(actual time=0.014..33,773.370 rows=4,450,865 loops=1)\n Filter: (product_id <> '81716'::citext)\n\nConsuming ~33 seconds of the runtime. If you can make this faster\nsomehow (e.g. by getting rid of the citext cast), that'd be nice.\n\nAnother issue is that the hashjoin is batched:\n\n Buckets: 65536 Batches: 8 Memory Usage: 46085kB\n\nThe hash preparation takes ~40 seconds, so maybe try to give it a bit\nmore memory - I assume you have work_mem=64MB, so try doubling that\n(ISTM 512MB should work with a single batch). Maybe this won't really\nimprove the performance, though. It still has to process ~4.5M rows.\n\nIncreasing the work mem could also result in switching to hash\naggregate, making the sort (~30 seconds) unnecessary.\n\nAnyway, ISTM this works as expected, i.e.\n\n(a) with rare product_id values the queries are fast\n(b) with common product_id values the queries are slow\n\nThat's expected, because (b) needs to process much more data. I don't\nthink you can magically make it run as fast as (a). The best solution\nmight be to keep a pre-aggregated results - I don't think you really\nneed exact answers when recommending \"similar\" products.\n\nI also wonder if you really need to join the tables? I mean, what if you\ndo something like this:\n\nCREATE TABLE events_aggregated AS SELECT\n site_id,\n array_agg(product_id) AS product_ids,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\nFROM events\nGROUP BY 1;\n\nand then using intarray with GIN indexes to query this table?\nSomething like this:\n\n CREATE products_agg_idx ON aggregated\n USING GIN (product_ids gin__int_ops);\n\n SELECT * FROM events_aggregated WHERE product_ids @> ARRAY['82503'];\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 25 Jan 2015 17:57:36 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
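A session-local sketch of the work_mem experiment suggested above; the 512MB figure comes from the message itself and should be checked against available RAM before being applied globally:

set work_mem = '512MB';  -- session only; revisit before touching postgresql.conf
explain (analyze, buffers)
select e1.product_id, e2.site_id, e2.product_id,
       count(nullif(e2.type='viewed', false)) view_count,
       count(nullif(e2.type='purchased', false)) purchase_count
from events e1
join events e2 on e1.session_id = e2.session_id and e1.type = e2.type
where e1.product_id = '82503'
  and e1.product_id != e2.product_id
group by e1.product_id, e2.product_id, e2.site_id;
reset work_mem;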
{
"msg_contents": ">I have an events table that records page views and purchases (type = 'viewed' or type='purchased'). I have a query that figures out \"people who bought/viewed this also bought/viewed that\".\n>\n>It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant again.\n>\n>Is there anything I can do to make the query work better in cases like that? Missing index, or better query?\n>\n>This is on 9.3.5.\n>\n>The below is reproduced at the following URL if it's not formatted correctly in the email. https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n\nHello,\n\nhere are 2 variations that should be somewhat faster.\n\n It seems you may have duplicate (site_id,session_id,product_id)\n which would false the result. In that case you'll need some more logic in the query.\n\n select\n '82503' as product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e2.product_id != '82503'\n group by e2.product_id, e2.site_id;\n\n\n OR:\n\n WITH SALL as(\n select\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503'\n group by e2.product_id, e2.site_id\n )\n SELECT\n '82503' as product_id_1,\n site_id,\n product_id,\n view_count,\n purchase_count\n FROM SALL\n WHERE product_id != '82503';\n\n\n regards,\n Marc Mamin\n\n\n\n>explain select\n> e1.product_id,\n> e2.site_id,\n> e2.product_id,\n> count(nullif(e2.type='viewed', false)) view_count,\n> count(nullif(e2.type='purchased', false)) purchase_count\n> from events e1\n> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n> where\n> e1.product_id = '82503' and\n> e1.product_id != e2.product_id\n> group by e1.product_id, e2.product_id, e2.site_id;\n> QUERY PLAN\n>----------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n> Sort Key: e1.product_id, e2.product_id, e2.site_id\n> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n> Recheck Cond: (product_id = '82503'::citext)\n> -> Bitmap Index Scan on events_product_id_site_id_idx (cost=0.00..11.20 rows=369 width=0)\n> Index Cond: (product_id = '82503'::citext)\n> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n> Filter: (e1.product_id <> product_id)\n>(11 rows)\n>\n>recommender_production=> \\d events\n> Table \"public.events\"\n> Column | Type | Modifiers\n>-------------+--------------------------+-----------------------------------------------------\n> id | bigint | not null default nextval('events_id_seq'::regclass)\n> user_id | citext |\n> session_id | citext | not null\n> product_id | 
citext | not null\n> site_id | citext | not null\n> type | text | not null\n> happened_at | timestamp with time zone | not null\n> created_at | timestamp with time zone | not null\n>Indexes:\n> \"events_pkey\" PRIMARY KEY, btree (id)\n> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>Check constraints:\n> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>\n>\n>\n>\n\n\n\n\n\n\n\n\n>I have an events table that records page views and purchases (type = 'viewed' or type='purchased'). I have a query that figures out \"people who bought/viewed this also bought/viewed that\".\n>\n>It worked fine, taking about 0.1 seconds to complete, until a few hours ago when it started taking hours to complete. Vacuum/analyze didn't help. Turned out there was one session_id that had 400k rows in the system. Deleting that made the query performant\n again. \n>\n>Is there anything I can do to make the query work better in cases like that? Missing index, or better query?\n>\n>This is on 9.3.5.\n>\n>The below is reproduced at the following URL if it's not formatted correctly in the email. https://gist.githubusercontent.com/joevandyk/cb8f4afdb6c1b178c606/raw/9940bbe033ebd56d38caa46e33c1ddfd9df36eda/gistfile1.txt\n\nHello,\n\nhere are 2 variations that should be somewhat faster.\n\n It seems you may have duplicate (site_id,session_id,product_id)\n which would false the result. In that case you'll need some more logic in the query.\n \n select\n '82503' as product_id,\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' and\n e2.product_id != '82503'\n group by e2.product_id, e2.site_id;\n \n \n OR:\n \n WITH SALL as(\n select\n e2.site_id,\n e2.product_id,\n count(nullif(e2.type='viewed', false)) view_count,\n count(nullif(e2.type='purchased', false)) purchase_count\n from events e1\n join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n where\n e1.product_id = '82503' \n group by e2.product_id, e2.site_id\n )\n SELECT \n '82503' as product_id_1,\n site_id,\n product_id,\n view_count,\n purchase_count\n FROM SALL\n WHERE product_id != '82503';\n \n\n regards,\n Marc Mamin\n \n\n\n>explain select\n> e1.product_id,\n> e2.site_id,\n> e2.product_id,\n> count(nullif(e2.type='viewed', false)) view_count,\n> count(nullif(e2.type='purchased', false)) purchase_count\n> from events e1\n> join events e2 on e1.session_id = e2.session_id and e1.type = e2.type\n> where\n> e1.product_id = '82503' and\n> e1.product_id != e2.product_id\n> group by e1.product_id, e2.product_id, e2.site_id;\n> QUERY PLAN\n>----------------------------------------------------------------------------------------------------------------------------\n> GroupAggregate (cost=828395.67..945838.90 rows=22110 width=19)\n> -> Sort (cost=828395.67..840117.89 rows=4688885 width=19)\n> Sort Key: e1.product_id, e2.product_id, e2.site_id\n> -> Nested Loop (cost=11.85..20371.14 rows=4688885 width=19)\n> -> Bitmap Heap Scan on events e1 (cost=11.29..1404.31 rows=369 width=49)\n> Recheck Cond: (product_id = '82503'::citext)\n> -> Bitmap Index Scan on events_product_id_site_id_idx 
(cost=0.00..11.20 rows=369 width=0)\n> Index Cond: (product_id = '82503'::citext)\n> -> Index Scan using events_session_id_type_product_id_idx on events e2 (cost=0.56..51.28 rows=12 width=51)\n> Index Cond: ((session_id = e1.session_id) AND (type = e1.type))\n> Filter: (e1.product_id <> product_id)\n>(11 rows)\n>\n>recommender_production=> \\d events\n> Table \"public.events\"\n> Column | Type | Modifiers\n>-------------+--------------------------+-----------------------------------------------------\n> id | bigint | not null default nextval('events_id_seq'::regclass)\n> user_id | citext |\n> session_id | citext | not null\n> product_id | citext | not null\n> site_id | citext | not null\n> type | text | not null\n> happened_at | timestamp with time zone | not null\n> created_at | timestamp with time zone | not null\n>Indexes:\n> \"events_pkey\" PRIMARY KEY, btree (id)\n> \"events_product_id_site_id_idx\" btree (product_id, site_id)\n> \"events_session_id_type_product_id_idx\" btree (session_id, type, product_id)\n>Check constraints:\n> \"events_session_id_check\" CHECK (length(session_id::text) < 255)\n> \"events_type_check\" CHECK (type = ANY (ARRAY['purchased'::text, 'viewed'::text]))\n> \"events_user_id_check\" CHECK (length(user_id::text) < 255)\n>\n>\n>\n>",
"msg_date": "Sun, 25 Jan 2015 21:07:06 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
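If the duplicate (site_id, session_id, product_id) rows Marc warns about really exist, one hedged way to add the extra logic he mentions is to collapse them before aggregating; this assumes counting each product at most once per session is the intended semantics:

with dedup as (
  select distinct e2.session_id, e2.site_id, e2.product_id, e2.type
  from events e1
  join events e2 on e1.session_id = e2.session_id and e1.type = e2.type
  where e1.product_id = '82503'
)
select '82503' as product_id_1,
       site_id,
       product_id,
       count(nullif(type='viewed', false)) as view_count,
       count(nullif(type='purchased', false)) as purchase_count
from dedup
where product_id != '82503'
group by site_id, product_id;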
{
"msg_contents": "On 1/25/15 2:03 AM, Pavel Stehule wrote:\n> It might not always be an integer, just happens to be so here.\n> Should I try text instead? I don't have to have the case-insensitive\n> matching.\n>\n>\n> text can be better\n\nbytea would be even better yet, because that will always be a straight \nbinary comparison. text will worry about conversion and what not \n(though, perhaps there's a way to force that to use C or SQL instead of \nsomething like UTF8, short of changing the encoding of the whole database).\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 19:40:13 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
},
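Two hedged ways to get the straight binary comparison Jim describes without re-encoding the database; the first changes only the comparison, while the second is a one-way schema change that loses citext semantics entirely:

-- Per-expression: compare as text under the C collation (byte-wise):
select count(*) from events
where product_id::text collate "C" <> '81716';

-- Or convert the column to bytea outright:
alter table events
  alter column product_id type bytea
  using convert_to(product_id::text, 'UTF8');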
{
"msg_contents": "2015-01-31 2:40 GMT+01:00 Jim Nasby <[email protected]>:\n\n> On 1/25/15 2:03 AM, Pavel Stehule wrote:\n>\n>> It might not always be an integer, just happens to be so here.\n>> Should I try text instead? I don't have to have the case-insensitive\n>> matching.\n>>\n>>\n>> text can be better\n>>\n>\n> bytea would be even better yet, because that will always be a straight\n> binary comparison. text will worry about conversion and what not (though,\n> perhaps there's a way to force that to use C or SQL instead of something\n> like UTF8, short of changing the encoding of the whole database).\n>\n\ntrue,\n\ngood idea\n\nRegards\n\nPavel\n\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\n2015-01-31 2:40 GMT+01:00 Jim Nasby <[email protected]>:On 1/25/15 2:03 AM, Pavel Stehule wrote:\n\n It might not always be an integer, just happens to be so here.\n Should I try text instead? I don't have to have the case-insensitive\n matching.\n\n\ntext can be better\n\n\nbytea would be even better yet, because that will always be a straight binary comparison. text will worry about conversion and what not (though, perhaps there's a way to force that to use C or SQL instead of something like UTF8, short of changing the encoding of the whole database).true,good ideaRegardsPavel \n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",
"msg_date": "Sat, 31 Jan 2015 07:28:08 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Query performance"
}
] |
[
{
"msg_contents": "Hi guys,\n\nCan I take a jab at the celebrated “why is Postgres not using my index” riddle?\n\nI’m using PostgreSQL 9.3.3 on an Amazon RDS “db.r3.xlarge” 64-bit instance. I have two tables, one with about 30M rows and two indexes (in fact a monthly partition):\n\nCREATE TABLE staging.mixpanel_events_201409 (\n date_day date NOT NULL,\n event_id int NOT NULL REFERENCES mixpanel_event_list,\n prop hstore\n);\n\nCREATE INDEX mixpanel_idx_date_201409\n ON mixpanel_events_201409\n USING btree\n (date_day);\n\nCREATE INDEX mixpanel_idx_event_201409\n ON mixpanel_events_201409\n USING btree\n (event_id);\n\n\nAnd a lookup table with about 600 rows:\n\nCREATE TABLE staging.mixpanel_event_list (\n id serial PRIMARY KEY,\n name text UNIQUE,\n source event_source NULL\n);\n\n\nNow when I select a subset of the possible event IDs in the big table, PG uses the appropriate index:\n\nselect *\n from mixpanel_events_201409\n where event_id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n\n\nBitmap Heap Scan on mixpanel_events_201409 (cost=7663.36..1102862.70 rows=410022 width=949)\n Recheck Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n -> Bitmap Index Scan on mixpanel_idx_event_201409 (cost=0.00..7560.85 rows=410022 width=0)\n Index Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n\n\nBut when I try to join the lookup table and select from it, the index is dismissed for a full table scan with a catastrophic effect on performance:\n\nselect *\nfrom mixpanel_events_201409 mp\n inner join mixpanel_event_list ev on ( ev.id = mp.event_id )\nwhere ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n\nHash Join (cost=20.73..2892183.32 rows=487288 width=1000)\n Hash Cond: (mp.event_id = ev.id)\n -> Seq Scan on mixpanel_events_201409 mp (cost=0.00..2809276.70 rows=20803470 width=949)\n -> Hash (cost=20.57..20.57 rows=13 width=51)\n -> Seq Scan on mixpanel_event_list ev (cost=0.00..20.57 rows=13 width=51)\n Filter: (id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n\n\nBoth tables have been vacuum analyzed.\n\nWhat gives?\n\nThanks a lot for your help,\nChris\n\n\nThis email is from Workshare Limited. The information contained in and accompanying this communication may be confidential, subject to legal privilege, or otherwise protected from disclosure, and is intended solely for the use of the intended recipient(s). If you are not the intended recipient of this communication, please delete and destroy all copies in your possession and note that any review or dissemination of, or the taking of any action in reliance on, this communication is expressly prohibited. Please contact the sender if you believe you have received this email in error. Workshare Limited is a limited liability company registered in England and Wales (registered number 3559880), its registered office is at 20 Fashion Street, London, E1 6PX for further information, please refer to http://www.workshare.com.\n\n\n\n\n\n\n\n\n\nHi guys,\n \nCan I take a jab at the celebrated “why is Postgres not using my index” riddle?\n \nI’m using PostgreSQL 9.3.3 on an Amazon RDS “db.r3.xlarge” 64-bit instance. 
I have two tables, one with about 30M rows and two indexes (in fact a monthly partition):\n \nCREATE TABLE staging.mixpanel_events_201409 (\n date_day date NOT NULL,\n event_id int NOT NULL REFERENCES mixpanel_event_list,\n prop hstore\n);\n \nCREATE INDEX mixpanel_idx_date_201409\n ON mixpanel_events_201409\n USING btree\n (date_day);\n \nCREATE INDEX mixpanel_idx_event_201409\n ON mixpanel_events_201409\n USING btree\n (event_id);\n \n \nAnd a lookup table with about 600 rows:\n \nCREATE TABLE staging.mixpanel_event_list (\n id serial PRIMARY KEY,\n name text UNIQUE,\n source event_source NULL\n);\n \n \nNow when I select a subset of the possible event IDs in the big table, PG uses the appropriate index:\n \nselect *\n from mixpanel_events_201409\n\n where event_id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n \n \nBitmap Heap Scan on mixpanel_events_201409 (cost=7663.36..1102862.70 rows=410022 width=949)\n Recheck Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n -> Bitmap Index Scan on mixpanel_idx_event_201409 (cost=0.00..7560.85 rows=410022 width=0)\n Index Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n \n \nBut when I try to join the lookup table and select from it, the index is dismissed for a full table scan with a catastrophic effect on performance:\n \nselect *\nfrom mixpanel_events_201409 mp\n\n inner join mixpanel_event_list ev on ( ev.id = mp.event_id )\nwhere ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n \nHash Join (cost=20.73..2892183.32 rows=487288 width=1000)\n Hash Cond: (mp.event_id = ev.id)\n -> Seq Scan on mixpanel_events_201409 mp (cost=0.00..2809276.70 rows=20803470 width=949)\n -> Hash (cost=20.57..20.57 rows=13 width=51)\n -> Seq Scan on mixpanel_event_list ev (cost=0.00..20.57 rows=13 width=51)\n Filter: (id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n \n \nBoth tables have been vacuum analyzed.\n \nWhat gives?\n \nThanks a lot for your help,\nChris\n \n\n\nThis email is from Workshare Limited. The information contained in and accompanying this communication may be confidential, subject to legal privilege, or otherwise protected from disclosure, and is intended solely for the use of the\n intended recipient(s). If you are not the intended recipient of this communication, please delete and destroy all copies in your possession and note that any review or dissemination of, or the taking of any action in reliance on, this communication is expressly\n prohibited. Please contact the sender if you believe you have received this email in error. Workshare Limited is a limited liability company registered in England and Wales (registered number 3559880), its registered office is at 20 Fashion Street, London,\n E1 6PX for further information, please refer to http://www.workshare.com.",
"msg_date": "Mon, 26 Jan 2015 16:32:22 +0000",
"msg_from": "\"Christian Roche\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is PostgreSQL not using my index?"
},
{
"msg_contents": "\"Christian Roche\" <[email protected]> writes:\n> Now when I select a subset of the possible event IDs in the big table, PG uses the appropriate index:\n\n> select *\n> from mixpanel_events_201409\n> where event_id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n\n> Bitmap Heap Scan on mixpanel_events_201409 (cost=7663.36..1102862.70 rows=410022 width=949)\n> Recheck Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n> -> Bitmap Index Scan on mixpanel_idx_event_201409 (cost=0.00..7560.85 rows=410022 width=0)\n> Index Cond: (event_id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n\n> But when I try to join the lookup table and select from it, the index is dismissed for a full table scan with a catastrophic effect on performance:\n\n> select *\n> from mixpanel_events_201409 mp\n> inner join mixpanel_event_list ev on ( ev.id = mp.event_id )\n> where ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n\n> Hash Join (cost=20.73..2892183.32 rows=487288 width=1000)\n> Hash Cond: (mp.event_id = ev.id)\n> -> Seq Scan on mixpanel_events_201409 mp (cost=0.00..2809276.70 rows=20803470 width=949)\n> -> Hash (cost=20.57..20.57 rows=13 width=51)\n> -> Seq Scan on mixpanel_event_list ev (cost=0.00..20.57 rows=13 width=51)\n> Filter: (id = ANY ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n\nGiven the estimated costs and rowcounts here, I'm far from convinced that\nthe planner made the wrong decision. You seem to be expecting that it\nwill go for a nestloop plan that would require 13 separate indexscans of\nthe large table. Those are unlikely to be only 1/13th the cost of the\nunified bitmap scan with =ANY; there's going to be overhead from repeated\nwork. If there's say a factor of 2 penalty for the repeated scans, that'd\nbe plenty enough to push the cost of that plan to be more than the\nhashjoin.\n\nIf, indeed, the hashjoin is slower, that may suggest that you need to dial\ndown random_page_cost to better represent your environment. But you\nshould be wary of making such an adjustment on the basis of a single\nexample; you might find that it makes other plan choices worse.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Jan 2015 15:31:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is PostgreSQL not using my index?"
},
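A session-local way to test Tom's random_page_cost suggestion before changing anything globally; 1.5 is just an illustrative value for storage where random reads are comparatively cheap:

set random_page_cost = 1.5;  -- default is 4; session only
explain analyze
select *
from mixpanel_events_201409 mp
inner join mixpanel_event_list ev on ( ev.id = mp.event_id )
where ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);
reset random_page_cost;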
{
"msg_contents": "Hi,\n\nOn 26.1.2015 17:32, Christian Roche wrote:\n> select *\n> \n> from mixpanel_events_201409 mp\n> \n> inner join mixpanel_event_list ev on ( ev.id = mp.event_id )\n> \n> where ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n> \n> \n> \n> Hash Join (cost=20.73..2892183.32 rows=487288 width=1000)\n> \n> Hash Cond: (mp.event_id = ev.id)\n> \n> -> Seq Scan on mixpanel_events_201409 mp (cost=0.00..2809276.70\n> rows=20803470 width=949)\n> \n> -> Hash (cost=20.57..20.57 rows=13 width=51)\n> \n> -> Seq Scan on mixpanel_event_list ev (cost=0.00..20.57\n> rows=13 width=51)\n> \n> Filter: (id = ANY\n> ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n> \n> \n> \n> \n> \n> Both tables have been vacuum analyzed.\n\nCan we get EXPLAIN ANALYZE please, and maybe some timings for the two\nplans? Otherwise we have no clue how accurate those estimates really\nare, making it difficult to judge the plan choice.\n\nYou might also use enable_hashjoin=off to force a different join\nalgorithm (it may not switch to nested loop immediately, so maybe try\nthe other enable_* options).\n\nThe estimated row counts are quite near each other (410k vs. 487k), but\nthe costs are not. I'm pretty sure that's because while the fist query\nhas WHERE condition directly on the event_id column, the second one\nmoves the condition to the 'list' table, forcing this particular plan.\n\nBut as the condition is on the join column, you may try moving it back:\n\n select *\n from mixpanel_events_201409 mp\n inner join mixpanel_event_list ev on ( ev.id = mp.event_id )\n where mp.event_id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);\n\nOf course, this only works on this particular column - it won't work for\nother columns in the 'list' table.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Jan 2015 05:25:02 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is PostgreSQL not using my index?"
},
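A sketch of the enable_* experiment Tomas describes; these switches are debugging aids, so they are set for the session only and reset afterwards:

set enable_hashjoin = off;
explain analyze
select *
from mixpanel_events_201409 mp
inner join mixpanel_event_list ev on ( ev.id = mp.event_id )
where ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);
-- if the planner falls back to a merge join rather than a nested loop, also:
set enable_mergejoin = off;
-- ... rerun the explain analyze, then restore the defaults:
reset enable_hashjoin;
reset enable_mergejoin;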
{
"msg_contents": "On Mon, Jan 26, 2015 at 10:32 AM, Christian Roche\n<[email protected]> wrote:\n> Bitmap Heap Scan on mixpanel_events_201409 (cost=7663.36..1102862.70\n> rows=410022 width=949)\n>\n> Recheck Cond: (event_id = ANY\n> ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n>\n> -> Bitmap Index Scan on mixpanel_idx_event_201409 (cost=0.00..7560.85\n> rows=410022 width=0)\n>\n> Index Cond: (event_id = ANY\n> ('{3,4,5,6,7,8,9,10,11,373,375,376,318}'::integer[]))\n>\n>\n> But when I try to join the lookup table and select from it, the index is\n> dismissed for a full table scan with a catastrophic effect on performance:\n\nBetter to post 'explain analyze' times than 'explain', so we can get a\nbetter understanding of what 'catastrophic' means. Other frequently\noverlooked planner influencing settings are effective_cache_size,\nwhich estimates amount memory available for caching and work_mem.\neffective_cache_size in particular is often dreadfully underset making\nthe server thing it's going to have to do expensive random i/o to\nfacilitate nestloops and will therefore tend to avoid them.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 27 Jan 2015 08:11:17 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is PostgreSQL not using my index?"
}
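Merlin's effective_cache_size point as a sketch; the setting is a planner hint only (no memory is allocated), and the 22GB value merely assumes most of the roughly 30GB of RAM on a db.r3.xlarge ends up as OS cache:

set effective_cache_size = '22GB';
explain analyze
select *
from mixpanel_events_201409 mp
inner join mixpanel_event_list ev on ( ev.id = mp.event_id )
where ev.id in (3, 4, 5, 6, 7, 8, 9, 10, 11, 373, 375, 376, 318);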
] |
[
{
"msg_contents": "Folks,\n\nCurrently, JSONB fields don't have statistics, and estimate a flat 1%\nselectivity. This can result in poor query plans, and I'm wondering if\nanyone has a suggested workaround for this short of hacking a new\nselectivity function. For example, take the common case of using JSONB\nto hold a list of \"tags\" for tagging documents:\n\n Table \"public.doc_tags_json\"\n Column | Type | Modifiers\n--------+---------+-----------\n doc_id | integer |\n tags | jsonb |\nIndexes:\n \"doc_tags_json_doc_id_idx\" UNIQUE, btree (doc_id)\n \"doc_tags_json_tags_idx\" gin (tags)\n\nThis query:\n\nselect doc_id\nfrom doc_tags_json\nwhere tags @> '[ \"math\", \"physics\" ]'\norder by doc_id desc limit 25;\n\nUses this plan:\n\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.43..709.79 rows=25 width=4) (actual time=24.529..340.499\nrows=25 loops=1)\n -> Index Scan Backward using doc_tags_json_doc_id_idx on\ndoc_tags_json (cost=0.43..283740.95 rows=10000 width=4) (actual\ntime=24.528..340.483 rows=25 loops=1)\n Filter: (tags @> '[\"math\", \"physics\"]'::jsonb)\n Rows Removed by Filter: 1011878\n Planning time: 0.090 ms\n Execution time: 340.528 ms\n\nIt does this because it expects @> '[\"math\", \"physics\"]' to match 10,000\nrows, which means that it expects to only scan 25,000 entries in the\ndoc_id index to return the top 25. However, the matching condition is\nmuch rarer than it thinks, so it's actually far faster to use the index\non the JSONB column:\n\ndrop index doc_tags_json_doc_id_idx;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=10517.08..10517.14 rows=25 width=4) (actual\ntime=7.594..7.602 rows=25 loops=1)\n -> Sort (cost=10517.08..10542.08 rows=10000 width=4) (actual\ntime=7.593..7.596 rows=25 loops=1)\n Sort Key: doc_id\n Sort Method: top-N heapsort Memory: 26kB\n -> Bitmap Heap Scan on doc_tags_json (cost=92.90..10234.89\nrows=10000 width=4) (actual time=6.733..7.475 rows=257 loops=1)\n Recheck Cond: (tags @> '[\"math\", \"physics\"]'::jsonb)\n Heap Blocks: exact=256\n -> Bitmap Index Scan on doc_tags_json_tags_idx\n(cost=0.00..90.40 rows=10000 width=0) (actual time=6.695..6.695 rows=257\nloops=1)\n Index Cond: (tags @> '[\"math\", \"physics\"]'::jsonb)\n Planning time: 0.093 ms\n Execution time: 7.632 ms\n\nOn a normal column, I'd raise n_distinct to reflect the higher\nselecivity of the search terms. However, since @> uses contsel,\nn_distinct is ignored. Anyone know a clever workaround I don't\ncurrently see?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 26 Jan 2015 23:06:09 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "working around JSONB's lack of stats?"
},
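One workaround nobody spells out in the thread: on these releases a CTE is an optimization fence, so forcing the @> scan into a CTE prevents the planner from picking the doc_id-index plan regardless of what contsel estimates. A sketch against the table above:

with matched as (   -- fenced: evaluated before the outer LIMIT is considered
  select doc_id
  from doc_tags_json
  where tags @> '[ "math", "physics" ]'
)
select doc_id
from matched
order by doc_id desc
limit 25;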
{
"msg_contents": "On 27.1.2015 08:06, Josh Berkus wrote:\n> Folks,\n> \n...\n>\n> On a normal column, I'd raise n_distinct to reflect the higher\n> selecivity of the search terms. However, since @> uses contsel,\n> n_distinct is ignored. Anyone know a clever workaround I don't\n> currently see?\n\nI don't see any reasonable workaround :-(\n\nISTM we'll have to invent a way to collect useful stats about contents\nof JSON/JSONB documents. JSONB is cool, but at the moment we're mostly\nrelying on defaults that may be reasonable, but still misfire in many\ncases. Do we have any ideas of how that might work?\n\nWe're already collecting stats about contents of arrays, and maybe we\ncould do something similar for JSONB? The nested nature of JSON makes\nthat rather incompatible with the flat MCV/histogram stats, though.\n\nregards\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jan 2015 20:48:02 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
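For comparison, the existing array statistics Tomas mentions are visible in pg_stats; a quick way to see what ANALYZE already collects for an ordinary array column (table and column names here are placeholders):

select most_common_elems, most_common_elem_freqs
from pg_stats
where tablename = 'some_table'
  and attname = 'some_array_column';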
{
"msg_contents": "On 01/28/2015 11:48 AM, Tomas Vondra wrote:\n> On 27.1.2015 08:06, Josh Berkus wrote:\n>> Folks,\n>>\n> ...\n>>\n>> On a normal column, I'd raise n_distinct to reflect the higher\n>> selecivity of the search terms. However, since @> uses contsel,\n>> n_distinct is ignored. Anyone know a clever workaround I don't\n>> currently see?\n> \n> I don't see any reasonable workaround :-(\n> \n> ISTM we'll have to invent a way to collect useful stats about contents\n> of JSON/JSONB documents. JSONB is cool, but at the moment we're mostly\n> relying on defaults that may be reasonable, but still misfire in many\n> cases. Do we have any ideas of how that might work?\n> \n> We're already collecting stats about contents of arrays, and maybe we\n> could do something similar for JSONB? The nested nature of JSON makes\n> that rather incompatible with the flat MCV/histogram stats, though.\n\nWell, I was thinking about this.\n\nWe already have most_common_elem (MCE) for arrays and tsearch. What if\nwe put JSONB's most common top-level keys (or array elements, depending)\nin the MCE array? Then we could still apply a simple rule for any path\ncriteria below the top-level keys, say assuming that any sub-key\ncriteria would match 10% of the time. While it wouldn't be perfect, it\nwould be better than what we have now.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jan 2015 15:03:06 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
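Until something like that exists in the planner, the top-level frequencies can at least be gathered by hand; a sketch over the doc_tags_json example, treating array elements as keys the way Josh describes:

select t.tag,
       count(*) as docs,
       round(100.0 * count(*) / (select count(*) from doc_tags_json), 2) as pct
from doc_tags_json,
     lateral jsonb_array_elements_text(tags) as t(tag)
group by t.tag
order by docs desc
limit 50;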
{
"msg_contents": "On Wed, Jan 28, 2015 at 3:03 PM, Josh Berkus <[email protected]> wrote:\n> We already have most_common_elem (MCE) for arrays and tsearch. What if\n> we put JSONB's most common top-level keys (or array elements, depending)\n> in the MCE array? Then we could still apply a simple rule for any path\n> criteria below the top-level keys, say assuming that any sub-key\n> criteria would match 10% of the time. While it wouldn't be perfect, it\n> would be better than what we have now.\n\nWell, the \"top-level keys\" would still be gathered for expression\nindexes. So yeah, maybe it would work alright for arrays of \"tags\",\nand things like that. I tend to think that that's a common enough\nuse-case.\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jan 2015 15:34:41 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On 01/28/2015 03:34 PM, Peter Geoghegan wrote:\n> On Wed, Jan 28, 2015 at 3:03 PM, Josh Berkus <[email protected]> wrote:\n>> We already have most_common_elem (MCE) for arrays and tsearch. What if\n>> we put JSONB's most common top-level keys (or array elements, depending)\n>> in the MCE array? Then we could still apply a simple rule for any path\n>> criteria below the top-level keys, say assuming that any sub-key\n>> criteria would match 10% of the time. While it wouldn't be perfect, it\n>> would be better than what we have now.\n> \n> Well, the \"top-level keys\" would still be gathered for expression\n> indexes. So yeah, maybe it would work alright for arrays of \"tags\",\n> and things like that. I tend to think that that's a common enough\n> use-case.\n\nYah, and even for cases where people have nested structures, currently\nwe require @> to start at the top. So we can at least compare top-level\nkeys to see if the key returned is in the MCEs or not, and take action\naccordingly.\n\nWe could start with a constant for anything below the key, where we\nassume that all values show up 10% of the time.\n\nthus:\n\njsonb_col @> '[ \"key1\" ]'\nor jsonb_col ? 'key1'\n\tif in MCE, assign % from MCE\n\totherwise assign 1% of non-MCE %\n\njsonb_col @> '{ \"key1\": \"value1\" }'\n\tif in MCE, assign MCE% * 0.1\n\totherwise assign 0.01 of non-MCE %\n\nDoes that make sense?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jan 2015 15:42:11 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On Wed, Jan 28, 2015 at 3:42 PM, Josh Berkus <[email protected]> wrote:\n> jsonb_col @> '[ \"key1\" ]'\n> or jsonb_col ? 'key1'\n> if in MCE, assign % from MCE\n> otherwise assign 1% of non-MCE %\n>\n> jsonb_col @> '{ \"key1\": \"value1\" }'\n> if in MCE, assign MCE% * 0.1\n> otherwise assign 0.01 of non-MCE %\n>\n> Does that make sense?\n\nI suspect it makes a lot less sense. The way people seem to want to\nuse jsonb is as a document store with a bit of flexibility. Individual\nJSON documents tend to be fairly homogeneous in structure within a\ntable, just like with systems like MongoDB. Strings within arrays are\nkeys for our purposes, and these are often used for tags and so on.\nBut Strings that are the key of an object/pair are much less useful to\nindex, in my estimation.\n\n-- \nRegards,\nPeter Geoghegan\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 28 Jan 2015 15:50:35 -0800",
"msg_from": "Peter Geoghegan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On 29.1.2015 00:03, Josh Berkus wrote:\n> On 01/28/2015 11:48 AM, Tomas Vondra wrote:\n>> On 27.1.2015 08:06, Josh Berkus wrote:\n>>> Folks,\n>>>\n>> ...\n>>>\n>>> On a normal column, I'd raise n_distinct to reflect the higher\n>>> selecivity of the search terms. However, since @> uses contsel,\n>>> n_distinct is ignored. Anyone know a clever workaround I don't\n>>> currently see?\n>>\n>> I don't see any reasonable workaround :-(\n>>\n>> ISTM we'll have to invent a way to collect useful stats about contents\n>> of JSON/JSONB documents. JSONB is cool, but at the moment we're mostly\n>> relying on defaults that may be reasonable, but still misfire in many\n>> cases. Do we have any ideas of how that might work?\n>>\n>> We're already collecting stats about contents of arrays, and maybe we\n>> could do something similar for JSONB? The nested nature of JSON makes\n>> that rather incompatible with the flat MCV/histogram stats, though.\n> \n> Well, I was thinking about this.\n> \n> We already have most_common_elem (MCE) for arrays and tsearch. What\n> if we put JSONB's most common top-level keys (or array elements,\n> depending) in the MCE array? Then we could still apply a simple rule\n> for any path criteria below the top-level keys, say assuming that any\n> sub-key criteria would match 10% of the time. While it wouldn't be\n> perfect, it would be better than what we have now.\n\nSo how would that work with your 'tags' example? ISTM most of your\ndocuments have 'tags' as top-level key, so that would end up in the MCV\nlist. But there's no info about the elements of the 'tags' array (thus\nthe 10% default, which may work in this particular case, but it's hardly\na general solution and I doubt it's somehow superior to the defaults\nwe're using right now).\n\nI think a 'proper' solution to JSONB stats needs to somehow reflect the\nnested structure. What I was thinking about is tracking MCV for\n\"complete paths\", i.e. for a document:\n\n {\n \"keyA\" : {\n \"keyB\" : \"x\",\n \"keyC\" : \"z\",\n }\n \"keyD\" : [1, 2, 3, 4]\n }\n\nWe'd extract three paths\n\n \"keyA.keyB\"\n \"keyA.keyC\"\n \"keyD\"\n\nand aggregate that over all the documents to select the MCV paths.\nAnd then, for each of those MCV paths track the most common values.\n\nISTM this would allow better estimations, but it has issues too:\n\nFirstly, it does not match the MCV structure, because it requires\nstoring (a) MCV paths and (b) MCV values for those paths. Moreover, (b)\nprobably stores different data types (some values are strings, some\nintegers, etc.). Arrays might be handled just like regular arrays, i.e.\ntracking stats of elements, but it's still mixed data types.\n\nSecondly, I think it's based on the assumption of independence (i.e.\nthat the occurence of one path does not depend on occurence of a\ndifferent path in the same document). Same for values x paths. Which may\nor may not be be true - it's essentially the same as assumption of\nindependence for predicates on multiple columns. 
While I do have ideas\non how to approach this in the multi-column case, handling this for\nJSONB is going to be much more complex I think.\n\nBut the first question (what stats to collect and how to store them) is\nthe most important at this point, I guess.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 29 Jan 2015 00:55:25 +0100",
"msg_from": "Tomas Vondra <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
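The path extraction Tomas sketches can be prototyped in plain SQL; this assumes a hypothetical table docs(body jsonb), and the CASE guards keep jsonb_each from being called on non-object values (which would error out):

with recursive paths(path, value) as (
  select e.key, e.value
  from docs,
       jsonb_each(case when jsonb_typeof(body) = 'object'
                       then body else '{}'::jsonb end) as e
  union all
  select p.path || '.' || e.key, e.value
  from paths p,
       jsonb_each(case when jsonb_typeof(p.value) = 'object'
                       then p.value else '{}'::jsonb end) as e
)
select path, count(*) as freq
from paths
group by path
order by freq desc;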
{
"msg_contents": "On 01/28/2015 03:50 PM, Peter Geoghegan wrote:\n> On Wed, Jan 28, 2015 at 3:42 PM, Josh Berkus <[email protected]> wrote:\n>> jsonb_col @> '[ \"key1\" ]'\n>> or jsonb_col ? 'key1'\n>> if in MCE, assign % from MCE\n>> otherwise assign 1% of non-MCE %\n>>\n>> jsonb_col @> '{ \"key1\": \"value1\" }'\n>> if in MCE, assign MCE% * 0.1\n>> otherwise assign 0.01 of non-MCE %\n>>\n>> Does that make sense?\n> \n> I suspect it makes a lot less sense. The way people seem to want to\n> use jsonb is as a document store with a bit of flexibility. Individual\n> JSON documents tend to be fairly homogeneous in structure within a\n> table, just like with systems like MongoDB. Strings within arrays are\n> keys for our purposes, and these are often used for tags and so on.\n> But Strings that are the key of an object/pair are much less useful to\n> index, in my estimation.\n\nYeah, I see your point; except for arrays, people are usually searching\nfor a key:value pair, and the existence of the key is not in doubt.\n\nThat would make the \"element\" the key:value pair, no? But\nrealistically, we would only want to do that for simple keys and values.\n\nAlthough: if you \"flatten\" a nested JSON structure into just keys with\nscalar values (and array items as their own thing), then you could have\na series of expanded key:value pairs to put into MCE.\n\nFor example:\n\n{ house : { city : San Francisco,\n sqft: 1200,\n color: blue,\n occupants: [ mom, dad, child1 ]\n }\n occupation: programmer\n}\n\n... would get flattened out into the following pairs:\n\ncity: san francisco\nsqft: 1200\ncolor: blue\noccupants: [ mom ]\noccupants: [ dad ]\noccupants: [ child1 ]\noccupation: programmer\n\nThis would probably work because there aren't a lot of data structures\nwhere people would have the same key:value pair in different locations\nin the JSON, and care about it stats-wise. Alternatetly, if the same\nkey-value pair appears multiple times in the same sample row, we could\ncut the MC% by that multiple.\n\nNo?\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 12:26:42 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On 1/30/15 2:26 PM, Josh Berkus wrote:\n> On 01/28/2015 03:50 PM, Peter Geoghegan wrote:\n>> On Wed, Jan 28, 2015 at 3:42 PM, Josh Berkus <[email protected]> wrote:\n>>> jsonb_col @> '[ \"key1\" ]'\n>>> or jsonb_col ? 'key1'\n>>> if in MCE, assign % from MCE\n>>> otherwise assign 1% of non-MCE %\n>>>\n>>> jsonb_col @> '{ \"key1\": \"value1\" }'\n>>> if in MCE, assign MCE% * 0.1\n>>> otherwise assign 0.01 of non-MCE %\n>>>\n>>> Does that make sense?\n>>\n>> I suspect it makes a lot less sense. The way people seem to want to\n>> use jsonb is as a document store with a bit of flexibility. Individual\n>> JSON documents tend to be fairly homogeneous in structure within a\n>> table, just like with systems like MongoDB. Strings within arrays are\n>> keys for our purposes, and these are often used for tags and so on.\n>> But Strings that are the key of an object/pair are much less useful to\n>> index, in my estimation.\n>\n> Yeah, I see your point; except for arrays, people are usually searching\n> for a key:value pair, and the existence of the key is not in doubt.\n>\n> That would make the \"element\" the key:value pair, no? But\n> realistically, we would only want to do that for simple keys and values.\n>\n> Although: if you \"flatten\" a nested JSON structure into just keys with\n> scalar values (and array items as their own thing), then you could have\n> a series of expanded key:value pairs to put into MCE.\n>\n> For example:\n>\n> { house : { city : San Francisco,\n> sqft: 1200,\n> color: blue,\n> occupants: [ mom, dad, child1 ]\n> }\n> occupation: programmer\n> }\n>\n> ... would get flattened out into the following pairs:\n>\n> city: san francisco\n> sqft: 1200\n> color: blue\n> occupants: [ mom ]\n> occupants: [ dad ]\n> occupants: [ child1 ]\n> occupation: programmer\n>\n> This would probably work because there aren't a lot of data structures\n> where people would have the same key:value pair in different locations\n> in the JSON, and care about it stats-wise. Alternatetly, if the same\n> key-value pair appears multiple times in the same sample row, we could\n> cut the MC% by that multiple.\n\nEven if there were multiple occurrences, this would probably still be an \nimprovement.\n\nAnother idea... at one time in the past when discussing statistics on \nmultiple columns, one idea was to build statistics on indexes. If we \nbuilt that, we could also do the same thing for at least JSONB (not sure \nabout JSON). Obviously doesn't help for stuff you haven't indexed, but \npresumably if you care about performance and have any significant size \nof data you've also indexed parts of the JSON, yes?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 19:34:40 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On 01/30/2015 05:34 PM, Jim Nasby wrote:\n> On 1/30/15 2:26 PM, Josh Berkus wrote:\n>> This would probably work because there aren't a lot of data structures\n>> where people would have the same key:value pair in different locations\n>> in the JSON, and care about it stats-wise. Alternatetly, if the same\n>> key-value pair appears multiple times in the same sample row, we could\n>> cut the MC% by that multiple.\n> \n> Even if there were multiple occurrences, this would probably still be an\n> improvement.\n> \n> Another idea... at one time in the past when discussing statistics on\n> multiple columns, one idea was to build statistics on indexes. If we\n> built that, we could also do the same thing for at least JSONB (not sure\n> about JSON). Obviously doesn't help for stuff you haven't indexed, but\n> presumably if you care about performance and have any significant size\n> of data you've also indexed parts of the JSON, yes?\n\nI'm not clear on what you're suggesting here. I'm discussing how the\nstats for a JSONB field would be stored and accessed; I don't understand\nwhat that has to do with indexing.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 01 Feb 2015 13:08:01 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On Tue, Jan 27, 2015 at 1:06 AM, Josh Berkus <[email protected]> wrote:\n> Folks,\n>\n> Currently, JSONB fields don't have statistics, and estimate a flat 1%\n> selectivity. This can result in poor query plans, and I'm wondering if\n> anyone has a suggested workaround for this short of hacking a new\n> selectivity function. For example, take the common case of using JSONB\n> to hold a list of \"tags\" for tagging documents:\n\nhm, Why stop at jsonb? What's needed is a way to override the\nplanner's row estimate in a general way.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 09:42:48 -0600",
"msg_from": "Merlin Moncure <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
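The closest thing the server already offers to a general row-estimate override is the ROWS clause on set-returning functions; wrapping the probe in a function hands the planner a fixed estimate. A sketch using the tags example (the function name and the 250-row guess are made up):

create or replace function docs_tagged(p_tags jsonb)
returns setof integer
language sql stable
rows 250   -- the planner uses this estimate instead of its default guess
as $$
  select doc_id from doc_tags_json where tags @> p_tags
$$;

select doc_id
from docs_tagged('[ "math", "physics" ]') as t(doc_id)
order by doc_id desc
limit 25;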
{
"msg_contents": "On 2/1/15 3:08 PM, Josh Berkus wrote:\n> On 01/30/2015 05:34 PM, Jim Nasby wrote:\n>> On 1/30/15 2:26 PM, Josh Berkus wrote:\n>>> This would probably work because there aren't a lot of data structures\n>>> where people would have the same key:value pair in different locations\n>>> in the JSON, and care about it stats-wise. Alternatetly, if the same\n>>> key-value pair appears multiple times in the same sample row, we could\n>>> cut the MC% by that multiple.\n>>\n>> Even if there were multiple occurrences, this would probably still be an\n>> improvement.\n>>\n>> Another idea... at one time in the past when discussing statistics on\n>> multiple columns, one idea was to build statistics on indexes. If we\n>> built that, we could also do the same thing for at least JSONB (not sure\n>> about JSON). Obviously doesn't help for stuff you haven't indexed, but\n>> presumably if you care about performance and have any significant size\n>> of data you've also indexed parts of the JSON, yes?\n>\n> I'm not clear on what you're suggesting here. I'm discussing how the\n> stats for a JSONB field would be stored and accessed; I don't understand\n> what that has to do with indexing.\n\nThe JSON problem is similar to the problem of doing multi-column \nstatistics: there's no way to simply try to keep statistics on all \npossible combinations because that's something that's can be extremely \nlarge.\n\nSomething that's been proposed is only trying to keep multi-column stats \non column combinations that we have an index on (which in a way is \nreally just keeping stats on the index itself).\n\nIf we built that, we could use the same technique for JSON by simply \ndefining the indexes you needed for whatever you were searching on.\n\nObviously that's not as ideal is simply keeping full statistics on \neverything in a JSON document, but it might be a heck of a lot easier to \naccomplish.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 19:48:19 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: working around JSONB's lack of stats?"
},
{
"msg_contents": "On 02/02/2015 05:48 PM, Jim Nasby wrote:\n> On 2/1/15 3:08 PM, Josh Berkus wrote:\n>> I'm not clear on what you're suggesting here. I'm discussing how the\n>> stats for a JSONB field would be stored and accessed; I don't understand\n>> what that has to do with indexing.\n> \n> The JSON problem is similar to the problem of doing multi-column\n> statistics: there's no way to simply try to keep statistics on all\n> possible combinations because that's something that's can be extremely\n> large.\n> \n> Something that's been proposed is only trying to keep multi-column stats\n> on column combinations that we have an index on (which in a way is\n> really just keeping stats on the index itself).\n\nThe difficulty with column correlation (as with value correlation for\nJSONB) is the combination of *values*, not the combination of *columns*.\n Via the brute force method, imagine you have one column with cardinalty\n100, and another with cardinality 100,000. This would require you do\nkeep 10 million different correlation coefficients in order to be able\nto estimate correctly. Even correlating MCVs would add up to quite a\nbit of stats in short order; these days people frequently set statistics\nto 1000. The same goes for JSON keys.\n\n> If we built that, we could use the same technique for JSON by simply\n> defining the indexes you needed for whatever you were searching on.\n> \n> Obviously that's not as ideal is simply keeping full statistics on\n> everything in a JSON document, but it might be a heck of a lot easier to\n> accomplish.\n\nWalk before trying to run, let alone fly, please. Right now we don't\nhave selectivity estimation for a *single* key; let's do that before we\nstart talking about better estimation for combinations of keys.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 03 Feb 2015 11:07:21 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: working around JSONB's lack of stats?"
}
] |
[
{
"msg_contents": "Hi,\nI have to deal with badly written system which regularly suffers from\ntransaction wraparound issue. This issue is happenning every 10-14 days and\nforces me to take system offline and vacuum in single-user mode.\nMain causes for this are (afaik):\n1) heavy transaction traffic + 100+GB of stale tables\n2) slow i/o (rotational drives)\n3) autovacuum can't keep up.\n\nBasically the database stores events data in daily partitioned table\n\"daily_events\".\nWhat I did, was - I ran vaccum freeze on all partitions (the tables are\nnever touched after they're done for a day). I have also scheduled\nvacuum-freeze for a partition after it's done writing.\n\nThis essentially set xmin in each partition to \"frozen\" value of \"2\".\nHowever, to my surprise, this was not enough!\nPostgres stores relfrozenxid in pg_class and this value apparently is\ngetting old pretty fast (due to high volume of transactions).\nAnd it seems that it doesn't really matter that xmin is frozen for a table,\nthe relfrozenxid is what causing transaction wraparound.\n\nWhy is that? and most importantly - why updating pg_class.relfrozenxid\nrequires huge amount of i/o by vacuum process for tables that are never\nupdated?\n\nIs it safe to just update pg_class.relfrozenxid for tables where xmin=2 for\nall rows? Same for linked toast table?\nThank you.\n\nHi,I have to deal with badly written system which regularly suffers from transaction wraparound issue. This issue is happenning every 10-14 days and forces me to take system offline and vacuum in single-user mode.Main causes for this are (afaik):1) heavy transaction traffic + 100+GB of stale tables2) slow i/o (rotational drives)3) autovacuum can't keep up.Basically the database stores events data in daily partitioned table \"daily_events\".What I did, was - I ran vaccum freeze on all partitions (the tables are never touched after they're done for a day). I have also scheduled vacuum-freeze for a partition after it's done writing.This essentially set xmin in each partition to \"frozen\" value of \"2\".However, to my surprise, this was not enough! Postgres stores relfrozenxid in pg_class and this value apparently is getting old pretty fast (due to high volume of transactions).And it seems that it doesn't really matter that xmin is frozen for a table, the relfrozenxid is what causing transaction wraparound.Why is that? and most importantly - why updating pg_class.relfrozenxid requires huge amount of i/o by vacuum process for tables that are never updated?Is it safe to just update pg_class.relfrozenxid for tables where xmin=2 for all rows? Same for linked toast table?Thank you.",
"msg_date": "Fri, 30 Jan 2015 15:44:00 -0800",
"msg_from": "Slava Mudry <[email protected]>",
"msg_from_op": true,
"msg_subject": "why pg_class.relfrozenxid needs to be updated for frozen tables\n (where all rows have xmin=2)?"
},
{
"msg_contents": "On 1/30/15 5:44 PM, Slava Mudry wrote:\n> Hi,\n> I have to deal with badly written system which regularly suffers from\n> transaction wraparound issue. This issue is happenning every 10-14 days\n> and forces me to take system offline and vacuum in single-user mode.\n> Main causes for this are (afaik):\n> 1) heavy transaction traffic + 100+GB of stale tables\n> 2) slow i/o (rotational drives)\n> 3) autovacuum can't keep up.\n>\n> Basically the database stores events data in daily partitioned table\n> \"daily_events\".\n> What I did, was - I ran vaccum freeze on all partitions (the tables are\n> never touched after they're done for a day). I have also scheduled\n> vacuum-freeze for a partition after it's done writing.\n>\n> This essentially set xmin in each partition to \"frozen\" value of \"2\".\n> However, to my surprise, this was not enough!\n> Postgres stores relfrozenxid in pg_class and this value apparently is\n> getting old pretty fast (due to high volume of transactions).\n> And it seems that it doesn't really matter that xmin is frozen for a\n> table, the relfrozenxid is what causing transaction wraparound.\n\nrelfrozenxid is only part of the picture. A database-wide freeze vacuum \nwill be controlled by pg_database.datfrozenxid.\n\nWhat version is this? You may also be suffering from multixact wrap.\n\n> Why is that? and most importantly - why updating pg_class.relfrozenxid\n> requires huge amount of i/o by vacuum process for tables that are never\n> updated?\n\nBecause it has to scan the entire table to see what the oldest XID is. \nWe don't check to see if relfrozenxid is already 2, though I suppose we \ncould add that.\n\n> Is it safe to just update pg_class.relfrozenxid for tables where xmin=2\n> for all rows? Same for linked toast table?\n\nThat would be a great way to lose data...\n\nYou need to look at relations where relfrozenxid is >= 3 and see why \nrelfrozenxid isn't advancing fast enough on them. Check your cost delay \nsettings as well as the *freeze* settings. It's very likely that on a \nsystem this busy autovac would never keep up with default settings.\n\nAlso, keep in mind that transaction and multixact IDs are cluster-wide, \nso this is going to affect all databases in that instance. You should \nthink about ways to move the heaviest transaction workload to a separate \ncluster; possibly putting the raw updates there and having a separate \nprocess that aggregates that data into fewer transactions for the main \ncluster.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 30 Jan 2015 19:28:26 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why pg_class.relfrozenxid needs to be updated for frozen\n tables (where all rows have xmin=2)?"
},
{
"msg_contents": "On 2/2/15 7:01 PM, Slava Mudry wrote:\n\nPlease don't top-post. It's much better to answer questions inline in an \nemail.\n\n> I am running PostgreSQL 9.3.2 on linux. Freeze values are defaults.\n> We cannot rely on autovacuum on our current hardware, so it's turned\n> down to 2 workers with naptime=10min. But we are running weekly vacuum\n> on whole db and daily vacuum freeze on new partitions.\n\nWhile you may not be able to rely on autovac for all your needs, \nchanging the naptime is unlikely to help much, unless you have an \nextremely large number of tables (like, 100k or more).\n\n> I agree that system is designed pretty bad in a way that it creates high\n> transaction volume instead of batching updates/inserts. However I still\n> feel that postgres should be able to do something more optimal in such\n> cases.\n>\n> Currently the fact that it needs to go back to old tables and FTS them\n> every 2B transactions (or rely on autovacuum for this) and you can't do\n> anything about it (like permanently freeze the tables) seems like a big\n> scalability issue. Does it not?\n\nUnfortunately it's not terribly easy to fix this. The problem is if we \ntry to play games here, we must have a 100% reliable method for changing \nrelfrozenxid as soon as someone inserts a new tuple in the relation. It \nmight be possible to tie this into the visibility map, but no one has \nlooked at this yet.\n\nPerhaps you'd be willing to investigate this, or sponsor the work?\n\n> Thank you.\n>\n> On Fri, Jan 30, 2015 at 5:28 PM, Jim Nasby <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> On 1/30/15 5:44 PM, Slava Mudry wrote:\n>\n> Hi,\n> I have to deal with badly written system which regularly suffers\n> from\n> transaction wraparound issue. This issue is happenning every\n> 10-14 days\n> and forces me to take system offline and vacuum in single-user mode.\n> Main causes for this are (afaik):\n> 1) heavy transaction traffic + 100+GB of stale tables\n> 2) slow i/o (rotational drives)\n> 3) autovacuum can't keep up.\n>\n> Basically the database stores events data in daily partitioned table\n> \"daily_events\".\n> What I did, was - I ran vaccum freeze on all partitions (the\n> tables are\n> never touched after they're done for a day). I have also scheduled\n> vacuum-freeze for a partition after it's done writing.\n>\n> This essentially set xmin in each partition to \"frozen\" value of\n> \"2\".\n> However, to my surprise, this was not enough!\n> Postgres stores relfrozenxid in pg_class and this value\n> apparently is\n> getting old pretty fast (due to high volume of transactions).\n> And it seems that it doesn't really matter that xmin is frozen for a\n> table, the relfrozenxid is what causing transaction wraparound.\n>\n>\n> relfrozenxid is only part of the picture. A database-wide freeze\n> vacuum will be controlled by pg_database.datfrozenxid.\n>\n> What version is this? You may also be suffering from multixact wrap.\n>\n> Why is that? and most importantly - why updating\n> pg_class.relfrozenxid\n> requires huge amount of i/o by vacuum process for tables that\n> are never\n> updated?\n>\n>\n> Because it has to scan the entire table to see what the oldest XID\n> is. We don't check to see if relfrozenxid is already 2, though I\n> suppose we could add that.\n>\n> Is it safe to just update pg_class.relfrozenxid for tables where\n> xmin=2\n> for all rows? 
Same for linked toast table?\n>\n>\n> That would be a great way to lose data...\n>\n> You need to look at relations where relfrozenxid is >= 3 and see why\n> relfrozenxid isn't advancing fast enough on them. Check your cost\n> delay settings as well as the *freeze* settings. It's very likely\n> that on a system this busy autovac would never keep up with default\n> settings.\n>\n> Also, keep in mind that transaction and multixact IDs are\n> cluster-wide, so this is going to affect all databases in that\n> instance. You should think about ways to move the heaviest\n> transaction workload to a separate cluster; possibly putting the raw\n> updates there and having a separate process that aggregates that\n> data into fewer transactions for the main cluster.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n>\n\n\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
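A hedged sketch of the scheduled per-partition freeze being described (the partition name is hypothetical); VACUUM FREEZE is equivalent to vacuuming with vacuum_freeze_min_age = 0, so it freezes every visible tuple and lets relfrozenxid advance as far as possible in one pass:

-- Run once a partition has stopped receiving writes:
VACUUM (FREEZE, VERBOSE) daily_events_20150202;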
"msg_date": "Mon, 2 Feb 2015 19:36:15 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why pg_class.relfrozenxid needs to be updated for frozen\n tables (where all rows have xmin=2)?"
},
{
"msg_contents": "On 2/2/15 7:36 PM, Jim Nasby wrote:\n>>\n>> Currently the fact that it needs to go back to old tables and FTS them\n>> every 2B transactions (or rely on autovacuum for this) and you can't do\n>> anything about it (like permanently freeze the tables) seems like a big\n>> scalability issue. Does it not?\n>\n> Unfortunately it's not terribly easy to fix this. The problem is if we\n> try to play games here, we must have a 100% reliable method for changing\n> relfrozenxid as soon as someone inserts a new tuple in the relation. It\n> might be possible to tie this into the visibility map, but no one has\n> looked at this yet.\n>\n> Perhaps you'd be willing to investigate this, or sponsor the work?\n\nOh, there is another possibility that's been discussed: read-only \ntables. If we had the ability to mark a table read-only, then a VACUUM \nFREEZE on such a table would be able to set that table's relfrozenxid to \nFrozenTransactionId and prevent any further attempts at vacuuming. This \nmight be easier than trying to do something automatic.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 19:52:32 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why pg_class.relfrozenxid needs to be updated for frozen\n tables (where all rows have xmin=2)?"
},
{
"msg_contents": "On Mon, Feb 2, 2015 at 5:52 PM, Jim Nasby <[email protected]> wrote:\n\n> On 2/2/15 7:36 PM, Jim Nasby wrote:\n>\n>>\n>>> Currently the fact that it needs to go back to old tables and FTS them\n>>> every 2B transactions (or rely on autovacuum for this) and you can't do\n>>> anything about it (like permanently freeze the tables) seems like a big\n>>> scalability issue. Does it not?\n>>>\n>>\n>> Unfortunately it's not terribly easy to fix this. The problem is if we\n>> try to play games here, we must have a 100% reliable method for changing\n>> relfrozenxid as soon as someone inserts a new tuple in the relation. It\n>> might be possible to tie this into the visibility map, but no one has\n>> looked at this yet.\n>>\n>> Perhaps you'd be willing to investigate this, or sponsor the work?\n>>\n> I'll see what I can do. Will talk to folks at pgDay in a month.\n\n\n>\n> Oh, there is another possibility that's been discussed: read-only tables.\n> If we had the ability to mark a table read-only, then a VACUUM FREEZE on\n> such a table would be able to set that table's relfrozenxid to\n> FrozenTransactionId and prevent any further attempts at vacuuming. This\n> might be easier than trying to do something automatic.\n>\n> I think if we could log \"last update/delete/insert\" timestamp for a table\n- we could use that to freeze tables that are not changed.\nI also wonder how pg_database.datfrozenxid is set? Is it equal to the\noldest pg_class.relfrozenxid for that database?\nI ask because I am willing to give a try and update relfrozenxid for the\ntables that are never updated and frozen. Currently we are looking at\n8-hour downtime to vacuum the whole db in single-user mode. High\navailability is more important that data loss in my case. [I still don't\nwant to lose data, but it won't be the end of world if it happens].\n\nHaving read-only tables would be great.\nI was able to get great performance from unlogged tables, similarly\nread-only tables would be able to address issue with high-transactions and\nmany large stale tables.\n\n\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nOn Mon, Feb 2, 2015 at 5:52 PM, Jim Nasby <[email protected]> wrote:On 2/2/15 7:36 PM, Jim Nasby wrote:\n\n\nCurrently the fact that it needs to go back to old tables and FTS them\nevery 2B transactions (or rely on autovacuum for this) and you can't do\nanything about it (like permanently freeze the tables) seems like a big\nscalability issue. Does it not?\n\n\nUnfortunately it's not terribly easy to fix this. The problem is if we\ntry to play games here, we must have a 100% reliable method for changing\nrelfrozenxid as soon as someone inserts a new tuple in the relation. It\nmight be possible to tie this into the visibility map, but no one has\nlooked at this yet.\n\nPerhaps you'd be willing to investigate this, or sponsor the work?I'll see what I can do. Will talk to folks at pgDay in a month. \n\n\nOh, there is another possibility that's been discussed: read-only tables. If we had the ability to mark a table read-only, then a VACUUM FREEZE on such a table would be able to set that table's relfrozenxid to FrozenTransactionId and prevent any further attempts at vacuuming. This might be easier than trying to do something automatic.I think if we could log \"last update/delete/insert\" timestamp for a table - we could use that to freeze tables that are not changed.I also wonder how pg_database.datfrozenxid is set? 
Is it equal to the oldest pg_class.relfrozenxid for that database?I ask because I am willing to give a try and update relfrozenxid for the tables that are never updated and frozen. Currently we are looking at 8-hour downtime to vacuum the whole db in single-user mode. High availability is more important that data loss in my case. [I still don't want to lose data, but it won't be the end of world if it happens].Having read-only tables would be great.I was able to get great performance from unlogged tables, similarly read-only tables would be able to address issue with high-transactions and many large stale tables. \n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",
"msg_date": "Mon, 2 Feb 2015 19:37:32 -0800",
"msg_from": "Slava Mudry <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: why pg_class.relfrozenxid needs to be updated for\n frozen tables (where all rows have xmin=2)?"
},
{
"msg_contents": "On 2/2/15 9:37 PM, Slava Mudry wrote:\n>\n> On Mon, Feb 2, 2015 at 5:52 PM, Jim Nasby <[email protected]\n> <mailto:[email protected]>> wrote:\n>\n> On 2/2/15 7:36 PM, Jim Nasby wrote:\n>\n>\n> Currently the fact that it needs to go back to old tables\n> and FTS them\n> every 2B transactions (or rely on autovacuum for this) and\n> you can't do\n> anything about it (like permanently freeze the tables) seems\n> like a big\n> scalability issue. Does it not?\n>\n>\n> Unfortunately it's not terribly easy to fix this. The problem is\n> if we\n> try to play games here, we must have a 100% reliable method for\n> changing\n> relfrozenxid as soon as someone inserts a new tuple in the\n> relation. It\n> might be possible to tie this into the visibility map, but no\n> one has\n> looked at this yet.\n>\n> Perhaps you'd be willing to investigate this, or sponsor the work?\n>\n> I'll see what I can do. Will talk to folks at pgDay in a month.\n>\n>\n> Oh, there is another possibility that's been discussed: read-only\n> tables. If we had the ability to mark a table read-only, then a\n> VACUUM FREEZE on such a table would be able to set that table's\n> relfrozenxid to FrozenTransactionId and prevent any further attempts\n> at vacuuming. This might be easier than trying to do something\n> automatic.\n>\n> I think if we could log \"last update/delete/insert\" timestamp for a\n> table - we could use that to freeze tables that are not changed.\n\nA timestamp wouldn't work; you need to have an exact XID.\n\nEven if it did work you still have the same problem: there's a huge, \nhairy race condition between what vacuum is trying to do and any DML.\n\n> I also wonder how pg_database.datfrozenxid is set? Is it equal to the\n> oldest pg_class.relfrozenxid for that database?\n\nCorrect.\n\n> I ask because I am willing to give a try and update relfrozenxid for the\n> tables that are never updated and frozen. Currently we are looking at\n> 8-hour downtime to vacuum the whole db in single-user mode. High\n> availability is more important that data loss in my case. [I still don't\n> want to lose data, but it won't be the end of world if it happens].\n\nWhy are you trying to go into single user mode? There's no reason to do \nthat.\n\nForcing relfrozenxid to 2 might work, but you're certainly playing with \nfire.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 22:17:32 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: why pg_class.relfrozenxid needs to be updated for frozen\n tables (where all rows have xmin=2)?"
}
] |
[
{
"msg_contents": "Hi all,\r\n\r\nThe pg version in question is the latest 9.4., running on Windows.\r\n\r\nFor testing out the NoSQL features of pg I have a simple table called ‘articles’ with a column called ‘data’.\r\n\r\nThere is an index on ‘data’ like this:\r\n CREATE INDEX idx_data ON articles USING gin (data jsonb_path_ops);\r\n\r\nThe current test data set has 32570 entires of JSON docs like this:\r\n{\r\n\"title\": \"Foo Bar\",\r\n\"locked\": true,\r\n\"valid_until\": \"2049-12-31T00:00:00\",\r\n\"art_number\": 12345678,\r\n\"valid_since\": \"2013-10-05T00:00:00\",\r\n\"number_valid\": false,\r\n\"combinations\": {\r\n\"var1\": \"4711\",\r\n\"var2\": \"4711\",\r\n\"var3\": \"0815\",\r\n\"int_art_number\": \"000001\"\r\n}\r\n}\r\n\r\nNothing too complex, I think.\r\n\r\nWhen I run a simple query:\r\n SELECT data #>> ‘{\"title\"}'\r\n FROM articles\r\n WHERE data @> '{ “locked\" : true }';\r\n\r\nReproducingly, it takes approx. 900ms to get the results back.\r\nHonestly, I was expecting a much faster query.\r\n\r\nAny opinions on this?\r\n\r\nThanks,\r\n-C.\r\n\r\n\n\n\n\n\n\n\n\nHi all,\n\n\nThe pg version in question is the latest 9.4., running on Windows.\n\n\nFor testing out the NoSQL features of pg I have a simple table called ‘articles’ with a column called ‘data’.\n\n\nThere is an index on ‘data’ like this:\n CREATE INDEX idx_data ON articles USING gin (data jsonb_path_ops);\n\n\n\n\n\n\r\nThe current test data set has 32570 entires of JSON docs like this:\n\n{\n\"title\": \"Foo Bar\",\n\"locked\": true, \n\"valid_until\": \"2049-12-31T00:00:00\", \n\"art_number\": 12345678, \n\"valid_since\": \"2013-10-05T00:00:00\", \n\"number_valid\": false, \n\"combinations\": {\n\"var1\": \"4711\", \n\"var2\": \"4711\", \n\"var3\": \"0815\", \n\"int_art_number\": \"000001\"\n}\n}\n\n\nNothing too complex, I think.\n\n\nWhen I run a simple query:\n\n SELECT data #>> ‘{\"title\"}' \n FROM articles\n WHERE data @> '{ “locked\" : true }';\n\n\n\nReproducingly, it takes approx. 900ms to get the results back.\nHonestly, I was expecting a much faster query.\n\n\nAny opinions on this?\n\n\nThanks,\n\n\r\n-C.",
"msg_date": "Sat, 31 Jan 2015 16:00:42 +0000",
"msg_from": "Christian Weyer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Unexpected (bad) performance when querying indexed JSONB column"
}
] |
[
{
"msg_contents": "Just checked: the execution time is the same when I drop the index.\r\n\r\nExecution plan with index:\r\n---\r\n\"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427)\"\r\n\" Recheck Cond: (data @> '{\"locked\": true}'::jsonb)\"\r\n\" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 width=0)\"\r\n\" Index Cond: (data @> '{\"locked\": true}'::jsonb)\"\r\n---\r\n\r\nAnd without the index:\r\n---\r\n\"Seq Scan on articles (cost=0.00..2289.21 rows=33 width=427)\"\r\n\" Filter: (data @> '{\"locked\": true}'::jsonb)\"\r\n---\r\n\r\n-C.\r\n\r\nFrom: Christian Weyer\r\nDate: Samstag, 31. Januar 2015 17:00\r\nTo: \"[email protected]<mailto:[email protected]>\"\r\nSubject: [PERFORM] Unexpected (bad) performance when querying indexed JSONB column\r\n\r\nHi all,\r\n\r\nThe pg version in question is the latest 9.4., running on Windows.\r\n\r\nFor testing out the NoSQL features of pg I have a simple table called ‘articles’ with a column called ‘data’.\r\n\r\nThere is an index on ‘data’ like this:\r\n CREATE INDEX idx_data ON articles USING gin (data jsonb_path_ops);\r\n\r\nThe current test data set has 32570 entires of JSON docs like this:\r\n{\r\n\"title\": \"Foo Bar\",\r\n\"locked\": true,\r\n\"valid_until\": \"2049-12-31T00:00:00\",\r\n\"art_number\": 12345678,\r\n\"valid_since\": \"2013-10-05T00:00:00\",\r\n\"number_valid\": false,\r\n\"combinations\": {\r\n\"var1\": \"4711\",\r\n\"var2\": \"4711\",\r\n\"var3\": \"0815\",\r\n\"int_art_number\": \"000001\"\r\n}\r\n}\r\n\r\nNothing too complex, I think.\r\n\r\nWhen I run a simple query:\r\n SELECT data #>> ‘{\"title\"}'\r\n FROM articles\r\n WHERE data @> '{ “locked\" : true }';\r\n\r\nReproducingly, it takes approx. 900ms to get the results back.\r\nHonestly, I was expecting a much faster query.\r\n\r\nAny opinions on this?\r\n\r\nThanks,\r\n-C.\r\n\r\n\n\n\n\n\n\n\n\n\nJust checked: the execution time is the same when I drop the index.\n\n\nExecution plan with index:\n---\n\"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427)\"\n\" Recheck Cond: (data @> '{\"locked\": true}'::jsonb)\"\n\" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 width=0)\"\n\" Index Cond: (data @> '{\"locked\": true}'::jsonb)\"\n---\n\n\nAnd without the index:\n---\n\"Seq Scan on articles (cost=0.00..2289.21 rows=33 width=427)\"\n\" Filter: (data @> '{\"locked\": true}'::jsonb)\"\n---\n\n\n\n\n\n\r\n-C.\n\n\n\n\n\n\n\n\n\nFrom: Christian Weyer\nDate: Samstag, 31. Januar 2015 17:00\nTo: \"[email protected]\"\nSubject: [PERFORM] Unexpected (bad) performance when querying indexed JSONB column\n\n\n\n\n\n\n\nHi all,\n\n\nThe pg version in question is the latest 9.4., running on Windows.\n\n\nFor testing out the NoSQL features of pg I have a simple table called ‘articles’ with a column called ‘data’.\n\n\nThere is an index on ‘data’ like this:\n CREATE INDEX idx_data ON articles USING gin (data jsonb_path_ops);\n\n\n\n\n\n\r\nThe current test data set has 32570 entires of JSON docs like this:\n\n{\n\"title\": \"Foo Bar\",\n\"locked\": true, \n\"valid_until\": \"2049-12-31T00:00:00\", \n\"art_number\": 12345678, \n\"valid_since\": \"2013-10-05T00:00:00\", \n\"number_valid\": false, \n\"combinations\": {\n\"var1\": \"4711\", \n\"var2\": \"4711\", \n\"var3\": \"0815\", \n\"int_art_number\": \"000001\"\n}\n}\n\n\nNothing too complex, I think.\n\n\nWhen I run a simple query:\n\n SELECT data #>> ‘{\"title\"}' \n FROM articles\n WHERE data @> '{ “locked\" : true }';\n\n\n\nReproducingly, it takes approx. 
900ms to get the results back.\nHonestly, I was expecting a much faster query.\n\n\nAny opinions on this?\n\n\nThanks,\n\n\r\n-C.",
"msg_date": "Sat, 31 Jan 2015 19:02:45 +0000",
"msg_from": "Christian Weyer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected (bad) performance when querying indexed\n JSONB column"
},
{
"msg_contents": "On 01/31/2015 11:02 AM, Christian Weyer wrote:\n> Just checked: the execution time is the same when I drop the index.\n> \n> Execution plan with index:\n> ---\n> \"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427)\"\n> \" Recheck Cond: (data @> '{\"locked\": true}'::jsonb)\"\n> \" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 width=0)\"\n> \" Index Cond: (data @> '{\"locked\": true}'::jsonb)\"\n> ---\n> \n> And without the index:\n> ---\n> \"Seq Scan on articles (cost=0.00..2289.21 rows=33 width=427)\"\n> \" Filter: (data @> '{\"locked\": true}'::jsonb)\"\n> ---\n\nPlease send us the output of EXPLAIN ( ANALYZE ON, BUFFERS ON ) so that\nwe can see what the query is actually doing, rather than just what the\nplan was.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 01 Feb 2015 13:06:23 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (bad) performance when querying indexed\n JSONB column"
},
{
"msg_contents": "On 01.02.15 22:06, \"Josh Berkus\" <[email protected]> wrote:\r\n\r\n\r\n\r\n>Please send us the output of EXPLAIN ( ANALYZE ON, BUFFERS ON ) so that\r\n>we can see what the query is actually doing, rather than just what the\r\n>plan was.\r\n>\r\n>-- \r\n>Josh Berkus\r\n>PostgreSQL Experts Inc.\r\n>http://pgexperts.com\r\n\r\n\r\nSure. Here we go:\r\n\r\n\"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427) \r\n(actual time=6.425..43.603 rows=18584 loops=1)\"\r\n\" Recheck Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\r\n\" Heap Blocks: exact=1496\"\r\n\" Buffers: shared hit=1504\"\r\n\" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 width=0) \r\n(actual time=6.090..6.090 rows=18584 loops=1)\"\r\n\" Index Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\r\n\" Buffers: shared hit=8\"\r\n\"Planning time: 0.348 ms\"\r\n\"Execution time: 47.788 ms\"\r\n\r\n\r\nThanks for looking into this.\r\n\r\n-C.\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Feb 2015 21:19:29 +0000",
"msg_from": "Christian Weyer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected (bad) performance when querying indexed\n JSONB column"
},
{
"msg_contents": "Christian Weyer <[email protected]> writes:\n> On 01.02.15 22:06, \"Josh Berkus\" <[email protected]> wrote:\n>> Please send us the output of EXPLAIN ( ANALYZE ON, BUFFERS ON ) so that\n>> we can see what the query is actually doing, rather than just what the\n>> plan was.\n\n> Sure. Here we go:\n\n> \"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427) \n> (actual time=6.425..43.603 rows=18584 loops=1)\"\n> \" Recheck Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\n> \" Heap Blocks: exact=1496\"\n> \" Buffers: shared hit=1504\"\n> \" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 width=0) \n> (actual time=6.090..6.090 rows=18584 loops=1)\"\n> \" Index Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\n> \" Buffers: shared hit=8\"\n> \"Planning time: 0.348 ms\"\n> \"Execution time: 47.788 ms\"\n\nSo that's showing a runtime of 48 ms, not 900. For retrieving 18584\nrows, doesn't sound that bad to me.\n\n(If the planner had had a better rowcount estimate, it'd likely have\nnot bothered with the index at all but just done a seqscan. This is\na consequence of the lack of any very useful stats for JSONB columns,\nwhich is something we hope to address soon; but it's not done in 9.4\nand likely won't be in 9.5 either ...)\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 01 Feb 2015 17:20:57 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Unexpected (bad) performance when querying indexed JSONB column"
},
{
"msg_contents": ">>\"Bitmap Heap Scan on articles (cost=16.25..135.64 rows=33 width=427) \r\n>> (actual time=6.425..43.603 rows=18584 loops=1)\"\r\n>> \" Recheck Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\r\n>> \" Heap Blocks: exact=1496\"\r\n>> \" Buffers: shared hit=1504\"\r\n>> \" -> Bitmap Index Scan on idx_data (cost=0.00..16.24 rows=33 \r\n>>width=0) \r\n>> (actual time=6.090..6.090 rows=18584 loops=1)\"\r\n>> \" Index Cond: (data @> ‘{\"locked\": true}'::jsonb)\"\r\n>> \" Buffers: shared hit=8\"\r\n>> \"Planning time: 0.348 ms\"\r\n>> \"Execution time: 47.788 ms\"\r\n>\r\n>So that's showing a runtime of 48 ms, not 900. For retrieving 18584\r\n>rows, doesn't sound that bad to me.\r\n>\r\n>(If the planner had had a better rowcount estimate, it'd likely have\r\n>not bothered with the index at all but just done a seqscan. This is\r\n>a consequence of the lack of any very useful stats for JSONB columns,\r\n>which is something we hope to address soon; but it's not done in 9.4\r\n>and likely won't be in 9.5 either ...)\r\n>\r\n>\t\t\tregards, tom lane\r\n\r\nThanks for your insights.\r\nThis greatly helped.\r\n\r\n-C.\r\n\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 06:31:24 +0000",
"msg_from": "Christian Weyer <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Unexpected (bad) performance when querying indexed\n JSONB column"
}
] |
[
{
"msg_contents": "I've created a GIN index on an INT[] column, but it slows down the selects.\nHere is my table:\n\ncreate table talent(person_id INT NOT NULL,\nskills INT[] NOT NULL);\n\ninsert into talent(person_id, skills)\nselect generate_series, array[0, 1] || generate_series\nfrom generate_series(3, 1048575);\n\ncreate index talent_skills on talent using gin(skills);\n\nanalyze talent;\n\nHere is my select:\n\n\nexplain analyze \nselect * from talent \nwhere skills <@ array[1, 15]\n\n\"Bitmap Heap Scan on talent (cost=52.00..56.01 rows=1 width=37) (actual\ntime=590.022..590.022 rows=0 loops=1)\"\n\" Recheck Cond: (skills <@ '{1,15}'::integer[])\"\n\" Rows Removed by Index Recheck: 1048573\"\n\" Heap Blocks: exact=8739\"\n\" -> Bitmap Index Scan on talent_skills (cost=0.00..52.00 rows=1 width=0)\n(actual time=207.661..207.661 rows=1048573 loops=1)\"\n\" Index Cond: (skills <@ '{1,15}'::integer[])\"\n\"Planning time: 1.310 ms\"\n\"Execution time: 590.078 ms\"\n\n\nIf I drop my GIN index, my select is faster:\n\n\ndrop index talent_skills\n\nexplain analyze \nselect * from talent \nwhere skills <@ array[1, 15]\n\n\"Seq Scan on talent (cost=0.00..21846.16 rows=1 width=37) (actual\ntime=347.442..347.442 rows=0 loops=1)\"\n\" Filter: (skills <@ '{1,15}'::integer[])\"\n\" Rows Removed by Filter: 1048573\"\n\"Planning time: 0.130 ms\"\n\"Execution time: 347.470 ms\"\n\nAm I missing something?\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Why-is-GIN-index-slowing-down-my-query-tp5836319.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Feb 2015 15:32:21 -0700 (MST)",
"msg_from": "AlexK987 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Why is GIN index slowing down my query?"
},
{
"msg_contents": "AlexK987 <[email protected]> writes:\n> I've created a GIN index on an INT[] column, but it slows down the selects.\n> Here is my table:\n\n> create table talent(person_id INT NOT NULL,\n> skills INT[] NOT NULL);\n\n> insert into talent(person_id, skills)\n> select generate_series, array[0, 1] || generate_series\n> from generate_series(3, 1048575);\n\n> create index talent_skills on talent using gin(skills);\n\n> analyze talent;\n\n> Here is my select:\n\n> explain analyze \n> select * from talent \n> where skills <@ array[1, 15]\n\nWell, that's pretty much going to suck given that data distribution.\nSince \"1\" is a member of every last entry, the GIN scan will end up\nexamining every entry, and then rejecting all of them as not being\ntrue subsets of [1,15]. I'm not sure whether it'd be practical to\nteach GIN about negative proofs, ie noticing that rows containing \"0\"\ncould be eliminated based on the index contents. But in any case it\ndoes not know that today.\n\nAnother problem, at least with the default statistics target, is that\nthe entries for \"0\" and \"1\" swamp everything else so that the planner\ndoesn't know that eg \"15\" is really rare.\n\nYou'd be much better off if you could refactor the data representation so\nthat whatever you mean by \"0\" and \"1\" is stored separately from whatever\nyou mean by the other entries, ie, don't keep both extremely common and\nextremely rare entries in the same array.\n\nAlso ... perhaps I'm reading too much into the name you chose for the\ncolumn, but I'm finding it hard to imagine why you'd care about the\nperformance of this query as opposed to \"where skills @> array[1, 15]\".\nThat is, wouldn't you typically be searching for people with at least\ncertain specified skills, rather than at most certain specified skills?\n\nAnother thing that maybe is a question for -hackers is why we consider\narraycontained to be an indexable operator at all. As this example\ndemonstrates, the current GIN infrastructure isn't really capable\nof coping efficiently, at least not in the general case. It might be\nall right in specific cases, but the planner doesn't have the smarts\nto tell which is which.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 01 Feb 2015 18:34:13 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is GIN index slowing down my query?"
},
{
"msg_contents": "Tom,\n\nThis is a realistic case: everyone have Python and Java skills, but PostGis\nand Haskell and Closure are rare. If we are looking for a person that has\nall the skills required for a task (array[1, 15]), that is \"skills <@\narray[1, 15] \" and not the opposite, right?\n\nAlso can you explain why \" entries for \"0\" and \"1\" swamp everything else so\nthat the planner \ndoesn't know that eg \"15\" is really rare. \" I thought that if a value is not\nfound in the histogram, than clearly that value is rare, correct? What am I\nmissing here?\n\nI hear what you are saying about \"don't keep both extremely common and \nextremely rare entries in the same array\", but I cannot predict the future,\nso I do not know which values are going to be common next year, or two years\nlater. So I think it would be very difficult to follow this advice.\n\nWhat do you think?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Why-is-GIN-index-slowing-down-my-query-tp5836319p5836323.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Feb 2015 18:37:26 -0700 (MST)",
"msg_from": "AlexK987 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is GIN index slowing down my query?"
},
{
"msg_contents": "Tom,\n\nOops, you were absolutely right: I needed to use @> instead of <@. Thanks\nagain!\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Why-is-GIN-index-slowing-down-my-query-tp5836319p5836327.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 1 Feb 2015 20:00:05 -0700 (MST)",
"msg_from": "AlexK987 <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Why is GIN index slowing down my query?"
},
{
"msg_contents": "AlexK987 <[email protected]> writes:\n> This is a realistic case: everyone have Python and Java skills, but PostGis\n> and Haskell and Closure are rare. If we are looking for a person that has\n> all the skills required for a task (array[1, 15]), that is \"skills <@\n> array[1, 15] \" and not the opposite, right?\n\nOne of us has this backwards. It might be me, but I don't think so.\nConsider a person who has the two desired skills plus skill #42:\n\nregression=# select array[1,15,42] <@ array[1,15];\n ?column? \n----------\n f\n(1 row)\n\nregression=# select array[1,15,42] @> array[1,15];\n ?column? \n----------\n t\n(1 row)\n\n> Also can you explain why \" entries for \"0\" and \"1\" swamp everything else so\n> that the planner \n> doesn't know that eg \"15\" is really rare. \" I thought that if a value is not\n> found in the histogram, than clearly that value is rare, correct? What am I\n> missing here?\n\nThe problem is *how* rare. The planner will take the lowest frequency\nseen among the most common elements as an upper bound for the frequency of\nunlisted elements --- but if all you have in the stats array is 0 and 1,\nand they both have frequency 1.0, that doesn't tell you anything. And\nthat's what I see for this example:\n\nregression=# select most_common_elems,most_common_elem_freqs from pg_stats where tablename = 'talent' and attname = 'skills';\n most_common_elems | most_common_elem_freqs \n-------------------+------------------------\n {0,1} | {1,1,1,1,0}\n(1 row)\n\nWith a less skewed distribution, that rule of thumb would work better :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 01 Feb 2015 22:15:45 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is GIN index slowing down my query?"
},
{
"msg_contents": "AlexK987 <[email protected]> writes:\n>> I've created a GIN index on an INT[] column, but it slows down the selects.\n>> Here is my table:\n>\n>> create table talent(person_id INT NOT NULL,\n>> skills INT[] NOT NULL);\n>\n>> insert into talent(person_id, skills)\n>> select generate_series, array[0, 1] || generate_series\n>> from generate_series(3, 1048575);\n>\n>> create index talent_skills on talent using gin(skills);\n>\n>> analyze talent;\n>\n>> Here is my select:\n>\n>> explain analyze \n>> select * from talent \n>> where skills <@ array[1, 15]\n>\n>Well, that's pretty much going to suck given that data distribution.\n>Since \"1\" is a member of every last entry, the GIN scan will end up\n>examining every entry, and then rejecting all of them as not being\n>true subsets of [1,15]. \n\nThis is equivalent and fast:\n\nexplain analyze\nWITH rare AS (\n select * from talent \n where skills @> array[15])\nselect * from rare\n where skills @> array[1]\n -- (with changed operator)\n\nYou might variate your query according to an additional table that keeps the occurrence count of all skills.\nNot really pretty though.\n\nregards,\n\nMarc Mamin\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 09:17:17 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is GIN index slowing down my query?"
},
{
"msg_contents": "AlexK987 <[email protected]> writes:\n>>> I've created a GIN index on an INT[] column, but it slows down the selects.\n>>> Here is my table:\n>>\n>>> create table talent(person_id INT NOT NULL,\n>>> skills INT[] NOT NULL);\n>>\n>>> insert into talent(person_id, skills)\n>>> select generate_series, array[0, 1] || generate_series\n>>> from generate_series(3, 1048575);\n>>\n>>> create index talent_skills on talent using gin(skills);\n>>\n>>> analyze talent;\n>>\n>>> Here is my select:\n>>\n>>> explain analyze \n>>> select * from talent \n>>> where skills <@ array[1, 15]\n>>\n>>Well, that's pretty much going to suck given that data distribution.\n>>Since \"1\" is a member of every last entry, the GIN scan will end up\n>>examining every entry, and then rejecting all of them as not being\n>>true subsets of [1,15]. \n>\n>This is equivalent and fast:\n>\n>explain analyze\n>WITH rare AS (\n> select * from talent\n> where skills @> array[15])\n>select * from rare\n> where skills @> array[1]\n> -- (with changed operator)\n>\n>You might variate your query according to an additional table that keeps the occurrence count of all skills.\n>Not really pretty though.\n\nI wonder if in such cases, the Bitmap Index Scan could discard entries that would result in a table scan\nand use them only in the recheck part:\n\nexplain\n select * from talent \n where skills @> array[1]\n \n Seq Scan on talent (cost=0.00..21846.16 rows=1048573 width=37)\n Filter: (skills @> '{1}'::integer[])\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 2 Feb 2015 10:31:10 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Why is GIN index slowing down my query?"
}
] |
[
{
"msg_contents": "I read that the query planner changes with every release. Was there a\nchange from 8.4 to 9.3 that would account for a major (2 orders of\nmagnitude) difference in execution time for nested views after we upgraded\nto 9.3?\n\nhttp://stackoverflow.com/questions/24067543/nested-views-much-slower-in-pg-9-3-4-than-8-4-8\n\nProd server running Red Hat Enterprise Linux Server release 5.11 (Tikanga)\nand Pg 9.3.4 on a 2 x 2.33GHZ processor, 24GB of RAM, 900 GB of RAID 5\nstorage on 6 drive server.\n\nPg configuration:\nautovacuum,on,configuration file\nautovacuum_analyze_scale_factor,0.1,configuration file\nautovacuum_analyze_threshold,50,configuration file\nautovacuum_max_workers,3,configuration file\nautovacuum_naptime,1min,configuration file\nautovacuum_vacuum_cost_delay,20ms,configuration file\nautovacuum_vacuum_cost_limit,-1,configuration file\nautovacuum_vacuum_scale_factor,0.2,configuration file\nautovacuum_vacuum_threshold,50,configuration file\ncheckpoint_completion_target,0.9,configuration file\ncheckpoint_segments,16,configuration file\nclient_encoding,UTF8,session\nclient_min_messages,warning,configuration file\nDateStyle,\"ISO, MDY\",configuration file\ndeadlock_timeout,5s,configuration file\ndefault_text_search_config,pg_catalog.english,configuration file\neffective_cache_size,4GB,configuration file\nfrom_collapse_limit,8,configuration file\ngeqo_effort,5,configuration file\ngeqo_threshold,12,configuration file\nhot_standby,on,configuration file\nlc_messages,en_US.UTF-8,configuration file\nlc_monetary,en_US.UTF-8,configuration file\nlc_numeric,en_US.UTF-8,configuration file\nlc_time,en_US.UTF-8,configuration file\nlisten_addresses,*,configuration file\nlog_connections,on,configuration file\nlog_destination,stderr,configuration file\nlog_directory,/dbms/postgresql/logs/dtfprod,configuration file\nlog_disconnections,on,configuration file\nlog_duration,off,configuration file\nlog_error_verbosity,terse,configuration file\nlog_filename,postgresql-%a.log,configuration file\nlog_hostname,on,configuration file\nlog_line_prefix,< %m %u %d %h >,configuration file\nlog_min_error_statement,error,configuration file\nlog_min_messages,error,configuration file\nlog_rotation_age,1d,configuration file\nlog_rotation_size,100MB,configuration file\nlog_timezone,US/Pacific,configuration file\nlog_truncate_on_rotation,on,configuration file\nlogging_collector,on,configuration file\nmaintenance_work_mem,256MB,configuration file\nmax_connections,200,configuration file\nmax_stack_depth,8MB,configuration file\nmax_wal_senders,5,configuration file\nport,5432,configuration file\nrandom_page_cost,2,configuration file\nshared_buffers,2GB,configuration file\nssl,on,configuration file\nstats_temp_directory,pg_stat_tmp,configuration file\ntemp_buffers,16MB,configuration file\nTimeZone,US/Pacific,configuration file\ntrack_activities,on,configuration file\ntrack_activity_query_size,1024,configuration file\ntrack_counts,on,configuration file\ntrack_functions,none,configuration file\ntrack_io_timing,off,configuration file\nupdate_process_title,on,configuration file\nwal_keep_segments,1920,configuration file\nwal_level,hot_standby,configuration file\nwal_sender_timeout,1min,configuration file\nwal_sync_method,fdatasync,configuration file\nwork_mem,5MB,configuration file\n\nthanks, PasDep\n\nI read that the query planner changes with every release. 
Was there a change from 8.4 to 9.3 that would account for a major (2 orders of magnitude) difference in execution time for nested views after we upgraded to 9.3?http://stackoverflow.com/questions/24067543/nested-views-much-slower-in-pg-9-3-4-than-8-4-8Prod server running Red Hat Enterprise Linux Server release 5.11 (Tikanga) and Pg 9.3.4 on a 2 x 2.33GHZ processor, 24GB of RAM, 900 GB of RAID 5 storage on 6 drive server.Pg configuration:autovacuum,on,configuration fileautovacuum_analyze_scale_factor,0.1,configuration fileautovacuum_analyze_threshold,50,configuration fileautovacuum_max_workers,3,configuration fileautovacuum_naptime,1min,configuration fileautovacuum_vacuum_cost_delay,20ms,configuration fileautovacuum_vacuum_cost_limit,-1,configuration fileautovacuum_vacuum_scale_factor,0.2,configuration fileautovacuum_vacuum_threshold,50,configuration filecheckpoint_completion_target,0.9,configuration filecheckpoint_segments,16,configuration fileclient_encoding,UTF8,sessionclient_min_messages,warning,configuration fileDateStyle,\"ISO, MDY\",configuration filedeadlock_timeout,5s,configuration filedefault_text_search_config,pg_catalog.english,configuration fileeffective_cache_size,4GB,configuration filefrom_collapse_limit,8,configuration filegeqo_effort,5,configuration filegeqo_threshold,12,configuration filehot_standby,on,configuration filelc_messages,en_US.UTF-8,configuration filelc_monetary,en_US.UTF-8,configuration filelc_numeric,en_US.UTF-8,configuration filelc_time,en_US.UTF-8,configuration filelisten_addresses,*,configuration filelog_connections,on,configuration filelog_destination,stderr,configuration filelog_directory,/dbms/postgresql/logs/dtfprod,configuration filelog_disconnections,on,configuration filelog_duration,off,configuration filelog_error_verbosity,terse,configuration filelog_filename,postgresql-%a.log,configuration filelog_hostname,on,configuration filelog_line_prefix,< %m %u %d %h >,configuration filelog_min_error_statement,error,configuration filelog_min_messages,error,configuration filelog_rotation_age,1d,configuration filelog_rotation_size,100MB,configuration filelog_timezone,US/Pacific,configuration filelog_truncate_on_rotation,on,configuration filelogging_collector,on,configuration filemaintenance_work_mem,256MB,configuration filemax_connections,200,configuration filemax_stack_depth,8MB,configuration filemax_wal_senders,5,configuration fileport,5432,configuration filerandom_page_cost,2,configuration fileshared_buffers,2GB,configuration filessl,on,configuration filestats_temp_directory,pg_stat_tmp,configuration filetemp_buffers,16MB,configuration fileTimeZone,US/Pacific,configuration filetrack_activities,on,configuration filetrack_activity_query_size,1024,configuration filetrack_counts,on,configuration filetrack_functions,none,configuration filetrack_io_timing,off,configuration fileupdate_process_title,on,configuration filewal_keep_segments,1920,configuration filewal_level,hot_standby,configuration filewal_sender_timeout,1min,configuration filewal_sync_method,fdatasync,configuration filework_mem,5MB,configuration filethanks, PasDep",
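One hedged diagnostic to try, since deeply nested views can push the flattened query past the planner's collapse limits (from_collapse_limit is 8 in the config above) and freeze a poor join order; the view name here is hypothetical:

SET from_collapse_limit = 16;
SET join_collapse_limit = 16;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM my_nested_view;  -- re-run the slow query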
"msg_date": "Thu, 5 Feb 2015 14:29:04 -0800",
"msg_from": "Pascal Depuis <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow nested views in 9.3"
}
] |
[
{
"msg_contents": "I made complex select using PGAdmin III Query Editor, Postgre server 9.3\n\n\nselect ... from mytable join .. join ... order by ....\n\nI get [Total query runtime: 8841 ms. 43602 rows retrieved.]\n\nbut when I use \n\ncopy ([same above select]) to '/x.txt' \nI get [Query returned successfully: 43602 rows affected, 683 ms execution\ntime.]\n\nthese test made on the same machine as the postgresql server.\n\n\ncan anyone explain huge difference in executing time?\n\nbest regards all \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 01:30:39 -0700 (MST)",
"msg_from": "belal <[email protected]>",
"msg_from_op": true,
"msg_subject": "Copy command Faster than original select"
},
{
"msg_contents": "Hi\n\n2015-02-06 9:30 GMT+01:00 belal <[email protected]>:\n\n> I made complex select using PGAdmin III Query Editor, Postgre server 9.3\n>\n>\n> select ... from mytable join .. join ... order by ....\n>\n> I get [Total query runtime: 8841 ms. 43602 rows retrieved.]\n>\n> but when I use\n>\n> copy ([same above select]) to '/x.txt'\n> I get [Query returned successfully: 43602 rows affected, 683 ms execution\n> time.]\n>\n> these test made on the same machine as the postgresql server.\n>\n>\n> can anyone explain huge difference in executing time?\n>\n\nprobably terrible uneffective execution plan\n\ncan you send a explain analyze of your slow query?\n\nRegards\n\nPavel\n\n\n\n>\n> best regards all\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nHi2015-02-06 9:30 GMT+01:00 belal <[email protected]>:I made complex select using PGAdmin III Query Editor, Postgre server 9.3\n\n\nselect ... from mytable join .. join ... order by ....\n\nI get [Total query runtime: 8841 ms. 43602 rows retrieved.]\n\nbut when I use\n\ncopy ([same above select]) to '/x.txt'\nI get [Query returned successfully: 43602 rows affected, 683 ms execution\ntime.]\n\nthese test made on the same machine as the postgresql server.\n\n\ncan anyone explain huge difference in executing time?probably terrible uneffective execution plancan you send a explain analyze of your slow query?RegardsPavel \n\nbest regards all\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Feb 2015 09:38:20 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "thanks,\n\nbut isn't copy use the same plan ???\n\nany way this is the query play\n\n\"Sort (cost=15402.76..15511.77 rows=43602 width=184)\"\n\" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n\"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\".\"JurM (...)\"\n\" Sort Key: \"VouItems\".\"ItmDate\", \"Vouchers\".\"VouID\",\n\"VouItems\".\"ItmNumber\"\"\n\" -> Hash Join (cost=4665.21..8164.77 rows=43602 width=184)\"\n\" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n\"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\" (...)\"\n\" Hash Cond: (\"VouItems\".\"ItmMaster\" = \"Vouchers\".\"VouID\")\"\n\" -> Seq Scan on public.\"VouItems\" (cost=0.00..1103.02 rows=43602\nwidth=89)\"\n\" Output: \"VouItems\".\"ItmMaster\", \"VouItems\".\"ItmNumber\",\n\"VouItems\".\"ItmCurDebit\", \"VouItems\".\"ItmCurCredit\",\n\"VouItems\".\"ItmAccount\", \"VouItems\".\"ItmBranch\", \"VouItems\".\"ItmSubAccount\",\n\"VouItems\".\"ItmMuniment\", \"VouItems\".\"ItmDate\", \"VouItem (...)\"\n\" -> Hash (cost=4107.41..4107.41 rows=20544 width=95)\"\n\" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n\"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n\"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n\"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouCredit\" (...)\"\n\" -> Hash Join (cost=1793.25..4107.41 rows=20544 width=95)\"\n\" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n\"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n\"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n\"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouC (...)\"\n\" Hash Cond: (\"Vouchers\".\"VouJournal\" =\n\"Journals\".\"JurID\")\"\n\" -> Hash Join (cost=1236.16..3165.12 rows=20544\nwidth=74)\"\n\" Output: \"Vouchers\".\"VouID\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\",\n\"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vouchers\" (...)\"\n\" Hash Cond: (\"Vouchers\".\"VouSession\" =\n\"Sessions\".\"SesID\")\"\n\" -> Seq Scan on public.\"Vouchers\" \n(cost=0.00..883.44 rows=20544 width=78)\"\n\" Output: \"Vouchers\".\"VouID\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\",\n\"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vou (...)\"\n\" -> Hash (cost=654.85..654.85 rows=33385\nwidth=12)\"\n\" Output: \"Sessions\".\"SesUser\",\n\"Sessions\".\"SesID\"\"\n\" -> Seq Scan on public.\"Sessions\" \n(cost=0.00..654.85 rows=33385 width=12)\"\n\" Output: \"Sessions\".\"SesUser\",\n\"Sessions\".\"SesID\"\"\n\" -> Hash (cost=417.04..417.04 rows=11204 width=29)\"\n\" Output: \"Journals\".\"JurMuniment\",\n\"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n\"Journals\".\"JurID\"\"\n\" -> Seq Scan on public.\"Journals\" \n(cost=0.00..417.04 rows=11204 width=29)\"\n\" Output: 
\"Journals\".\"JurMuniment\",\n\"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n\"Journals\".\"JurID\"\"\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836890.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 01:44:29 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "2015-02-06 9:44 GMT+01:00 Belal Al-Hamed <[email protected]>:\n\n> thanks,\n>\n> but isn't copy use the same plan ???\n>\n\naha - I was wrong,\n\nthis slowdown can be enforced by slow client (or slow network). pgAdmin is\nnot terrible fast. Try to execute your query from psql.\n\nRegards\n\nPavel\n\n\n>\n> any way this is the query play\n>\n> \"Sort (cost=15402.76..15511.77 rows=43602 width=184)\"\n> \" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n> \"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n> \"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n> \"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\".\"JurM (...)\"\n> \" Sort Key: \"VouItems\".\"ItmDate\", \"Vouchers\".\"VouID\",\n> \"VouItems\".\"ItmNumber\"\"\n> \" -> Hash Join (cost=4665.21..8164.77 rows=43602 width=184)\"\n> \" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n> \"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n> \"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n> \"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\" (...)\"\n> \" Hash Cond: (\"VouItems\".\"ItmMaster\" = \"Vouchers\".\"VouID\")\"\n> \" -> Seq Scan on public.\"VouItems\" (cost=0.00..1103.02 rows=43602\n> width=89)\"\n> \" Output: \"VouItems\".\"ItmMaster\", \"VouItems\".\"ItmNumber\",\n> \"VouItems\".\"ItmCurDebit\", \"VouItems\".\"ItmCurCredit\",\n> \"VouItems\".\"ItmAccount\", \"VouItems\".\"ItmBranch\",\n> \"VouItems\".\"ItmSubAccount\",\n> \"VouItems\".\"ItmMuniment\", \"VouItems\".\"ItmDate\", \"VouItem (...)\"\n> \" -> Hash (cost=4107.41..4107.41 rows=20544 width=95)\"\n> \" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n> \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n> \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n> \"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouCredit\" (...)\"\n> \" -> Hash Join (cost=1793.25..4107.41 rows=20544 width=95)\"\n> \" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n> \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n> \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n> \"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouC (...)\"\n> \" Hash Cond: (\"Vouchers\".\"VouJournal\" =\n> \"Journals\".\"JurID\")\"\n> \" -> Hash Join (cost=1236.16..3165.12 rows=20544\n> width=74)\"\n> \" Output: \"Vouchers\".\"VouID\",\n> \"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n> \"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\",\n> \"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vouchers\" (...)\"\n> \" Hash Cond: (\"Vouchers\".\"VouSession\" =\n> \"Sessions\".\"SesID\")\"\n> \" -> Seq Scan on public.\"Vouchers\"\n> (cost=0.00..883.44 rows=20544 width=78)\"\n> \" Output: \"Vouchers\".\"VouID\",\n> \"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n> \"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\",\n> \"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vou (...)\"\n> \" -> Hash (cost=654.85..654.85 rows=33385\n> width=12)\"\n> \" Output: \"Sessions\".\"SesUser\",\n> \"Sessions\".\"SesID\"\"\n> \" -> Seq Scan on public.\"Sessions\"\n> (cost=0.00..654.85 rows=33385 width=12)\"\n> \" Output: 
\"Sessions\".\"SesUser\",\n> \"Sessions\".\"SesID\"\"\n> \" -> Hash (cost=417.04..417.04 rows=11204 width=29)\"\n> \" Output: \"Journals\".\"JurMuniment\",\n> \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n> \"Journals\".\"JurID\"\"\n> \" -> Seq Scan on public.\"Journals\"\n> (cost=0.00..417.04 rows=11204 width=29)\"\n> \" Output: \"Journals\".\"JurMuniment\",\n> \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n> \"Journals\".\"JurID\"\"\n>\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836890.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-02-06 9:44 GMT+01:00 Belal Al-Hamed <[email protected]>:thanks,\n\nbut isn't copy use the same plan ???aha - I was wrong,this slowdown can be enforced by slow client (or slow network). pgAdmin is not terrible fast. Try to execute your query from psql.RegardsPavel \n\nany way this is the query play\n\n\"Sort (cost=15402.76..15511.77 rows=43602 width=184)\"\n\" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n\"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\".\"JurM (...)\"\n\" Sort Key: \"VouItems\".\"ItmDate\", \"Vouchers\".\"VouID\",\n\"VouItems\".\"ItmNumber\"\"\n\" -> Hash Join (cost=4665.21..8164.77 rows=43602 width=184)\"\n\" Output: \"Sessions\".\"SesUser\", \"Vouchers\".\"VouID\",\n\"Journals\".\"JurMuniment\", \"Journals\".\"JurRefID\", \"Journals\".\"JurDate\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Journals\" (...)\"\n\" Hash Cond: (\"VouItems\".\"ItmMaster\" = \"Vouchers\".\"VouID\")\"\n\" -> Seq Scan on public.\"VouItems\" (cost=0.00..1103.02 rows=43602\nwidth=89)\"\n\" Output: \"VouItems\".\"ItmMaster\", \"VouItems\".\"ItmNumber\",\n\"VouItems\".\"ItmCurDebit\", \"VouItems\".\"ItmCurCredit\",\n\"VouItems\".\"ItmAccount\", \"VouItems\".\"ItmBranch\", \"VouItems\".\"ItmSubAccount\",\n\"VouItems\".\"ItmMuniment\", \"VouItems\".\"ItmDate\", \"VouItem (...)\"\n\" -> Hash (cost=4107.41..4107.41 rows=20544 width=95)\"\n\" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n\"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n\"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n\"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouCredit\" (...)\"\n\" -> Hash Join (cost=1793.25..4107.41 rows=20544 width=95)\"\n\" Output: \"Vouchers\".\"VouID\", \"Vouchers\".\"VouJournal\",\n\"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\", \"Vouchers\".\"VouNote\",\n\"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\", \"Vouchers\".\"VouCreateDate\",\n\"Vouchers\".\"VouDebit\", \"Vouchers\".\"VouC (...)\"\n\" Hash Cond: (\"Vouchers\".\"VouJournal\" =\n\"Journals\".\"JurID\")\"\n\" -> Hash Join (cost=1236.16..3165.12 rows=20544\nwidth=74)\"\n\" Output: \"Vouchers\".\"VouID\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", 
\"Vouchers\".\"VouIsHold\",\n\"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vouchers\" (...)\"\n\" Hash Cond: (\"Vouchers\".\"VouSession\" =\n\"Sessions\".\"SesID\")\"\n\" -> Seq Scan on public.\"Vouchers\"\n(cost=0.00..883.44 rows=20544 width=78)\"\n\" Output: \"Vouchers\".\"VouID\",\n\"Vouchers\".\"VouJournal\", \"Vouchers\".\"VouMunNumber\", \"Vouchers\".\"VouDate\",\n\"Vouchers\".\"VouNote\", \"Vouchers\".\"VouDoc\", \"Vouchers\".\"VouIsHold\",\n\"Vouchers\".\"VouCreateDate\", \"Vouchers\".\"VouDebit\", \"Vou (...)\"\n\" -> Hash (cost=654.85..654.85 rows=33385\nwidth=12)\"\n\" Output: \"Sessions\".\"SesUser\",\n\"Sessions\".\"SesID\"\"\n\" -> Seq Scan on public.\"Sessions\"\n(cost=0.00..654.85 rows=33385 width=12)\"\n\" Output: \"Sessions\".\"SesUser\",\n\"Sessions\".\"SesID\"\"\n\" -> Hash (cost=417.04..417.04 rows=11204 width=29)\"\n\" Output: \"Journals\".\"JurMuniment\",\n\"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n\"Journals\".\"JurID\"\"\n\" -> Seq Scan on public.\"Journals\"\n(cost=0.00..417.04 rows=11204 width=29)\"\n\" Output: \"Journals\".\"JurMuniment\",\n\"Journals\".\"JurRefID\", \"Journals\".\"JurDate\", \"Journals\".\"JurMunNumber\",\n\"Journals\".\"JurID\"\"\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836890.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Feb 2015 09:54:36 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "\"this slowdown can be enforced by slow client (or slow network).\"\nAs I said i made the tow test on the same machine as the server using\nPGAdmin no network involved.\n\n\"pgAdmin is not terrible fast\"\nI also try the same query from my application using libpq I get same results\n\nregards\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836893.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 02:15:38 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "2015-02-06 10:15 GMT+01:00 Belal Al-Hamed <[email protected]>:\n\n> \"this slowdown can be enforced by slow client (or slow network).\"\n> As I said i made the tow test on the same machine as the server using\n> PGAdmin no network involved.\n>\n> \"pgAdmin is not terrible fast\"\n> I also try the same query from my application using libpq I get same\n> results\n>\n\nwhat is speed of\n\nCREATE TABLE xx AS SELECT /* your query */ ?\n\nregards\n\nPavel\n\n\n>\n> regards\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836893.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-02-06 10:15 GMT+01:00 Belal Al-Hamed <[email protected]>:\"this slowdown can be enforced by slow client (or slow network).\"\nAs I said i made the tow test on the same machine as the server using\nPGAdmin no network involved.\n\n\"pgAdmin is not terrible fast\"\nI also try the same query from my application using libpq I get same resultswhat is speed of CREATE TABLE xx AS SELECT /* your query */ ?regardsPavel \n\nregards\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836893.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Feb 2015 10:31:29 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "fast as\n\nQuery returned successfully: 43602 rows affected, 1089 ms execution time.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836902.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 02:50:05 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "2015-02-06 10:50 GMT+01:00 Belal Al-Hamed <[email protected]>:\n\n> fast as\n>\n> Query returned successfully: 43602 rows affected, 1089 ms execution time.\n>\n\nso bottle neck have to be some where between client and server\n\nPavel\n\n\n\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836902.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-02-06 10:50 GMT+01:00 Belal Al-Hamed <[email protected]>:fast as\n\nQuery returned successfully: 43602 rows affected, 1089 ms execution time.so bottle neck have to be some where between client and serverPavel \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836902.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Feb 2015 10:55:32 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "On Fri, Feb 6, 2015 at 6:44 AM, Belal Al-Hamed <[email protected]> wrote:\n\n>\n> but isn't copy use the same plan ???\n>\n> any way this is the query play\n>\n> \"Sort (cost=15402.76..15511.77 rows=43602 width=184)\"\n>\n\nCan you try again but with EXPLAIN *ANALYZE* (not only EXPLAIN)?\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Feb 6, 2015 at 6:44 AM, Belal Al-Hamed <[email protected]> wrote:\n\nbut isn't copy use the same plan ???\n\nany way this is the query play\n\n\"Sort (cost=15402.76..15511.77 rows=43602 width=184)\"Can you try again but with EXPLAIN *ANALYZE* (not only EXPLAIN)?Best regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 6 Feb 2015 08:36:39 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "\"so bottle neck have to be some where between client and server\"\nthat's what I need to know !\nwhere is the bug to made this performance\n\n\"Can you try again but with EXPLAIN *ANALYZE* (not only EXPLAIN)?\"\nit's not a matter of plan problem I think, it's related to sending data\nfrom server to client, perhaps in allocating buffers for data or problem in\nlibpq I don't know ...\nbecause why it's super fast exporting same select to file using copy\ncommand.\nagain I am using the same pc of the postgresql server\n\nregards to all\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836917.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 05:27:31 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "On Fri, Feb 6, 2015 at 10:27 AM, Belal Al-Hamed <[email protected]>\nwrote:\n\n> \"so bottle neck have to be some where between client and server\"\n> that's what I need to know !\n> where is the bug to made this performance\n>\n>\n\nDid you executed it from psql? Tried with \\copy also? (sorry if you\nanswered it already and I missed).\n\n\n\n> \"Can you try again but with EXPLAIN *ANALYZE* (not only EXPLAIN)?\"\n> it's not a matter of plan problem I think, it's related to sending data\n> from server to client, perhaps in allocating buffers for data or problem in\n> libpq I don't know ...\n> because why it's super fast exporting same select to file using copy\n> command.\n> again I am using the same pc of the postgresql server\n\n\nI'd like to see the EXPLAIN ANALYZE to see the real query execution time\nonly (discarding even the write of output of COPY command), and also see if\nthe query can be improved with an index or so, I haven't say it is a plan\nproblem, as nothing suggested that so far.\n\nAlso, what OS are you on? Are you connecting through TCP or domain socket?\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Feb 6, 2015 at 10:27 AM, Belal Al-Hamed <[email protected]> wrote:\"so bottle neck have to be some where between client and server\"\nthat's what I need to know !\nwhere is the bug to made this performance\nDid you executed it from psql? Tried with \\copy also? (sorry if you answered it already and I missed). \n\"Can you try again but with EXPLAIN *ANALYZE* (not only EXPLAIN)?\"\nit's not a matter of plan problem I think, it's related to sending data\nfrom server to client, perhaps in allocating buffers for data or problem in\nlibpq I don't know ...\nbecause why it's super fast exporting same select to file using copy\ncommand.\nagain I am using the same pc of the postgresql serverI'd like to see the EXPLAIN ANALYZE to see the real query execution time only (discarding even the write of output of COPY command), and also see if the query can be improved with an index or so, I haven't say it is a plan problem, as nothing suggested that so far.Also, what OS are you on? Are you connecting through TCP or domain socket?Best regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 6 Feb 2015 11:12:38 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "Let me change my question to this perhaps it would be clearer\n\nwhy writing data result of select statment from PG server to file on disk\nusing copy statement is much faster than getting same data through PGAdmin\nvia libpg on the same PC on the same system on the same connection\n(localhost) ?\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836933.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 06:39:30 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "2015-02-06 14:39 GMT+01:00 Belal Al-Hamed <[email protected]>:\n\n> Let me change my question to this perhaps it would be clearer\n>\n> why writing data result of select statment from PG server to file on disk\n> using copy statement is much faster than getting same data through PGAdmin\n> via libpg on the same PC on the same system on the same connection\n> (localhost) ?\n>\n\nCOPY to filesystem can use a more CPU, and on modern computers, a data are\nstored to write cache first - and real IO operation can be processed later.\n\nPgAdmin uses only one CPU and works with expensive interactive element -\ngrid - probably there are some space for optimization - usually fill 40K\nrows to pgAdmin is not good idea (it is not good idea for any client).\n\n>\n>\n>\n> --\n> View this message in context:\n> http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836933.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n2015-02-06 14:39 GMT+01:00 Belal Al-Hamed <[email protected]>:Let me change my question to this perhaps it would be clearer\n\nwhy writing data result of select statment from PG server to file on disk\nusing copy statement is much faster than getting same data through PGAdmin\nvia libpg on the same PC on the same system on the same connection\n(localhost) ?COPY to filesystem can use a more CPU, and on modern computers, a data are stored to write cache first - and real IO operation can be processed later.PgAdmin uses only one CPU and works with expensive interactive element - grid - probably there are some space for optimization - usually fill 40K rows to pgAdmin is not good idea (it is not good idea for any client). \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5836933.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Fri, 6 Feb 2015 14:49:33 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "On Fri, Feb 6, 2015 at 11:39 AM, Belal Al-Hamed <[email protected]>\nwrote:\n\n> Let me change my question to this perhaps it would be clearer\n\n\nPerhaps if you answer all the questions asked, we'll be able to spot where\nis the bottleneck you are seeing. Could be many factors.\n\nBest regards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Feb 6, 2015 at 11:39 AM, Belal Al-Hamed <[email protected]> wrote:Let me change my question to this perhaps it would be clearerPerhaps if you answer all the questions asked, we'll be able to spot where is the bottleneck you are seeing. Could be many factors.Best regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 6 Feb 2015 13:11:51 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "I think, it is the difference between writing 43602 records into the file and displaying 43602 records on screen.\nIf you wrap up your select into select count(a.*) from your select, e.g.:\n\nSelect count(a.*) from (select ... from mytable join .. join ... order by ....) as a;\n\nThis will exclude time to display all these rows, so you'll get the same (or better) performance as with \"copy\" into text file, which will prove this theory.\n\nRegards,\nIgor Neyman\n\n-----Original Message-----\nFrom: [email protected] [mailto:[email protected]] On Behalf Of belal\nSent: Friday, February 06, 2015 3:31 AM\nTo: [email protected]\nSubject: [PERFORM] Copy command Faster than original select\n\nI made complex select using PGAdmin III Query Editor, Postgre server 9.3\n\n\nselect ... from mytable join .. join ... order by ....\n\nI get [Total query runtime: 8841 ms. 43602 rows retrieved.]\n\nbut when I use \n\ncopy ([same above select]) to '/x.txt' \nI get [Query returned successfully: 43602 rows affected, 683 ms execution time.]\n\nthese test made on the same machine as the postgresql server.\n\n\ncan anyone explain huge difference in executing time?\n\nbest regards all \n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 6 Feb 2015 15:24:47 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "belal <[email protected]> writes:\n> I made complex select using PGAdmin III Query Editor, Postgre server 9.3\n> select ... from mytable join .. join ... order by ....\n> I get [Total query runtime: 8841 ms. 43602 rows retrieved.]\n\n> but when I use \n> copy ([same above select]) to '/x.txt' \n> I get [Query returned successfully: 43602 rows affected, 683 ms execution\n> time.]\n\n> these test made on the same machine as the postgresql server.\n\n> can anyone explain huge difference in executing time?\n\nIt's the time needed for PGAdmin to receive and display 43602 data rows,\nlikely. PGAdmin has a reputation of not being too speedy at that.\n\nYou could check this by trying some other client such as psql. Even\nin psql, the formatting options you use can make a very large difference\nin how fast it is. However, I think psql's \\timing option measures just\nthe server roundtrip time and not the time taken after that to format and\ndisplay the query result. PGAdmin is probably measuring the query time\ndifferently.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 06 Feb 2015 10:48:32 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
},
{
"msg_contents": "Executing \"Select count(a.*) from (select ... from mytable join .. join ...\norder by ....) as a;\" \n\nTotal query runtime: 454 ms.\n1 row retrieved.\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Copy-command-Faster-than-original-select-tp5836886p5837105.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sun, 8 Feb 2015 03:08:30 -0700 (MST)",
"msg_from": "Belal Al-Hamed <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Copy command Faster than original select"
}
]
[
{
"msg_contents": "Hi All\n\nWe are testing our Oracle compatible business applications on PostgreSQL\ndatabase,\n\nthe issue we are facing is <empty string> Vs NULL\n\nIn Oracle '' (<empty string>) and NULL are treated as NULL\n\nbut, in PostgreSQL '' <empty string> not treated as NULL\n\nI need some *implicit* way in PostgreSQL where ''<empty string> can be\ntreated as NULL\n\nPlease,\n\nThanks\nSridhar BN\n\nHi AllWe are testing our Oracle compatible business applications on PostgreSQL database,the issue we are facing is <empty string> Vs NULLIn Oracle '' (<empty string>) and NULL are treated as NULLbut, in PostgreSQL '' <empty string> not treated as NULLI need some implicit way in PostgreSQL where ''<empty string> can be treated as NULLPlease,ThanksSridhar BN",
"msg_date": "Mon, 9 Feb 2015 16:52:54 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "<empty string> Vs NULL"
},
{
"msg_contents": "Hi\n\n2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n\n> Hi All\n>\n> We are testing our Oracle compatible business applications on PostgreSQL\n> database,\n>\n> the issue we are facing is <empty string> Vs NULL\n>\n> In Oracle '' (<empty string>) and NULL are treated as NULL\n>\n> but, in PostgreSQL '' <empty string> not treated as NULL\n>\n> I need some *implicit* way in PostgreSQL where ''<empty string> can be\n> treated as NULL\n>\n\nIt is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard -\nOracle not.\n\nRegards\n\nPavel\n\np.s. theoretically you can overwrite a type operators to support Oracle\nbehave, but you should not be sure about unexpected negative side effects.\n\n\n\n>\n> Please,\n>\n> Thanks\n> Sridhar BN\n>\n>\n\nHi2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:Hi AllWe are testing our Oracle compatible business applications on PostgreSQL database,the issue we are facing is <empty string> Vs NULLIn Oracle '' (<empty string>) and NULL are treated as NULLbut, in PostgreSQL '' <empty string> not treated as NULLI need some implicit way in PostgreSQL where ''<empty string> can be treated as NULLIt is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.RegardsPavel p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects. Please,ThanksSridhar BN",
"msg_date": "Mon, 9 Feb 2015 12:53:16 +0100",
"msg_from": "Pavel Stehule <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "Its been a while since I really worked with Postgres, but could you write a\ntrigger to convert empty string to null on save? You'd have to carefully\napply it everywhere but it'd get you the searching for null finds empty. If\nthat is all you do the you've got it.\n\nEssentially, there isn't a switch for it but you can do it with some\nmechanisms.\n\nNik\nOn Feb 9, 2015 6:54 AM, \"Pavel Stehule\" <[email protected]> wrote:\n\n> Hi\n>\n> 2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>\n>> Hi All\n>>\n>> We are testing our Oracle compatible business applications on PostgreSQL\n>> database,\n>>\n>> the issue we are facing is <empty string> Vs NULL\n>>\n>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>\n>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>\n>> I need some *implicit* way in PostgreSQL where ''<empty string> can be\n>> treated as NULL\n>>\n>\n> It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard -\n> Oracle not.\n>\n> Regards\n>\n> Pavel\n>\n> p.s. theoretically you can overwrite a type operators to support Oracle\n> behave, but you should not be sure about unexpected negative side effects.\n>\n>\n>\n>>\n>> Please,\n>>\n>> Thanks\n>> Sridhar BN\n>>\n>>\n>\n\nIts been a while since I really worked with Postgres, but could you write a trigger to convert empty string to null on save? You'd have to carefully apply it everywhere but it'd get you the searching for null finds empty. If that is all you do the you've got it. \nEssentially, there isn't a switch for it but you can do it with some mechanisms. \nNik\nOn Feb 9, 2015 6:54 AM, \"Pavel Stehule\" <[email protected]> wrote:Hi2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:Hi AllWe are testing our Oracle compatible business applications on PostgreSQL database,the issue we are facing is <empty string> Vs NULLIn Oracle '' (<empty string>) and NULL are treated as NULLbut, in PostgreSQL '' <empty string> not treated as NULLI need some implicit way in PostgreSQL where ''<empty string> can be treated as NULLIt is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.RegardsPavel p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects. Please,ThanksSridhar BN",
"msg_date": "Mon, 9 Feb 2015 07:36:13 -0500",
"msg_from": "Nikolas Everett <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: <empty string> Vs NULL"
},
{
"msg_contents": ">>Hi\n>>\n>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>>\n>> Hi All\n>>\n>> We are testing our Oracle compatible business applications on PostgreSQL database,\n>>\n>> the issue we are facing is <empty string> Vs NULL\n>>\n>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>\n>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>\n>> I need some implicit way in PostgreSQL where ''<empty string> can be treated as NULL\n\n>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.\n>\n>Regards\n>\n>Pavel\n>\n>p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects.\n\n\nA clean way would be to disallow empty strings on the PG side.\nThis is somewhat combersome depending on how dynamic your model is\nand add some last on your db though.\n\n\nALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...) IS NULL)\n\n-- and to ensure compatibility with your app or migration:\n\nCREATE OR REPLACE FUNCTION tablename_setnull_trf()\n RETURNS trigger AS\n$BODY$\nBEGIN\n-- for all *string* columns\n NEW.colname1 = NULLIF (colname1,'');\n NEW.colname2 = NULLIF (colname2,'');\n NEW.colname3 = NULLIF (colname3,'');\nRETURN NEW;\nEND;\n$BODY$\n\nCREATE TRIGGER tablename_setnull_tr\n BEFORE INSERT OR UPDATE\n ON tablename\n FOR EACH ROW\n EXECUTE PROCEDURE tablename_setnull_trf();\n\nYou can query the pg catalog to generate all required statements.\nA possible issue is the order in which triggers are fired, when more than one exist for a given table:\n\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n\nregards,\n\nMarc Mamin\n\n\n\n\n\n\n\n\n>>Hi\n>>\n>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>>\n>> Hi All\n>>\n>> We are testing our Oracle compatible business applications on PostgreSQL database,\n>>\n>> the issue we are facing is <empty string> Vs NULL\n>>\n>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>\n>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>\n>> I need some implicit way in PostgreSQL where ''<empty string> can be treated as NULL\n\n>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.\n>\n>Regards\n>\n>Pavel\n>\n>p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects.\n\n\nA clean way would be to disallow empty strings on the PG side.\nThis is somewhat combersome depending on how dynamic your model is\nand add some last on your db though.\n\n\nALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck \n CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...) 
IS NULL)\n\n-- and to ensure compatibility with your app or migration:\n\nCREATE OR REPLACE FUNCTION tablename_setnull_trf()\n RETURNS trigger AS\n$BODY$\nBEGIN\n-- for all *string* columns\n NEW.colname1 = NULLIF (colname1,'');\n NEW.colname2 = NULLIF (colname2,'');\n NEW.colname3 = NULLIF (colname3,'');\nRETURN NEW;\nEND;\n$BODY$\n\nCREATE TRIGGER tablename_setnull_tr\n BEFORE INSERT OR UPDATE\n ON tablename\n FOR EACH ROW\n EXECUTE PROCEDURE tablename_setnull_trf();\n \nYou can query the pg catalog to generate all required statements.\nA possible issue is the order in which triggers are fired, when more than one exist for a given table:\n\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n\nregards,\n\nMarc Mamin",
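\nFor example, something like this sketch (it assumes the relevant columns are\ntext or varchar; paste the generated lines into the trigger function body):\n\nSELECT format('NEW.%I := NULLIF(NEW.%I, '''');', column_name, column_name)\nFROM information_schema.columns\nWHERE table_name = 'tablename'\n AND data_type IN ('text', 'character varying');\n\nA possible issue is the order in which triggers are fired, when more than one exist for a given table:\n\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n\nregards,\n\nMarc Mamin\n",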
"msg_date": "Mon, 9 Feb 2015 12:42:29 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "On 9 February 2015 at 11:22, sridhar bamandlapally <[email protected]>\nwrote:\n\n> the issue we are facing is <empty string> Vs NULL\n>\n> In Oracle '' (<empty string>) and NULL are treated as NULL\n>\n> but, in PostgreSQL '' <empty string> not treated as NULL\n>\n> I need some *implicit* way in PostgreSQL where ''<empty string> can be\n> treated as NULL\n>\n>\n\nThe Right Thing to do is to fix your application, and don't use broken\nDBMSes: NULL should not denote anything except \"this value is not set\". If\nyou count an empty string as null, how do you represent the empty string?\n\nOracle's own documentation suggests that developers should not rely on this\nbehaviour since it may change in the future.\n\nSo Do The Right Thing now, and you won't get bitten later.\n\nGeoff\n\nOn 9 February 2015 at 11:22, sridhar bamandlapally <[email protected]> wrote:the issue we are facing is <empty string> Vs NULLIn Oracle '' (<empty string>) and NULL are treated as NULLbut, in PostgreSQL '' <empty string> not treated as NULLI need some implicit way in PostgreSQL where ''<empty string> can be treated as NULLThe Right Thing to do is to fix your application, and don't use broken DBMSes: NULL should not denote anything except \"this value is not set\". If you count an empty string as null, how do you represent the empty string?Oracle's own documentation suggests that developers should not rely on this behaviour since it may change in the future.So Do The Right Thing now, and you won't get bitten later.Geoff",
"msg_date": "Mon, 9 Feb 2015 12:48:51 +0000",
"msg_from": "Geoff Winkless <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [ADMIN] <empty string> Vs NULL"
},
{
"msg_contents": "On 9 February 2015 at 12:48, Geoff Winkless <[email protected]> wrote:\n\n> Oracle's own documentation suggests that developers should not rely on\n> this behaviour since it may change in the future.\n>\n> \nJust in case you're looking for it:\n\nhttp://docs.oracle.com/database/121/SQLRF/sql_elements005.htm#SQLRF30037\n\n\nNote:\nOracle Database currently treats a character value with a length of zero as\nnull. However, this may not continue to be true in future releases, and\nOracle recommends that you do not treat empty strings the same as nulls.\n\nGeoff\n\nOn 9 February 2015 at 12:48, Geoff Winkless <[email protected]> wrote:Oracle's own documentation suggests that developers should not rely on this behaviour since it may change in the future.Just in case you're looking for it:http://docs.oracle.com/database/121/SQLRF/sql_elements005.htm#SQLRF30037Note:Oracle Database currently treats a character value with a length of zero as null. However, this may not continue to be true in future releases, and Oracle recommends that you do not treat empty strings the same as nulls.Geoff",
"msg_date": "Mon, 9 Feb 2015 12:57:19 +0000",
"msg_from": "Geoff Winkless <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: <empty string> Vs NULL"
},
{
"msg_contents": ">>>Hi\n>>>\n>>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>>>\n>>> Hi All\n>>>\n>>> We are testing our Oracle compatible business applications on PostgreSQL database,\n>>>\n>>> the issue we are facing is <empty string> Vs NULL\n>>>\n>>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>>\n>>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>>\n>>> I need some implicit way in PostgreSQL where ''<empty string> can be treated as NULL\n>\n>>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.\n>>\n>>Regards\n>>\n>>Pavel\n>>\n>>p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects.\n>\n>\n>A clean way would be to disallow empty strings on the PG side.\n>This is somewhat combersome depending on how dynamic your model is\n>and add some last on your db though.\n\nhmm, you could also consider disallowing NULLs, i.e. force empty strings.\nthis may result in a better compatibility although unwise from postgres point of view (see null storage in PG)\nand neither way allow a compatibility out of the box:\n\n Postgres ORACLE\n'' IS NULL false true\nNULL || 'foo' NULL 'foo'\n\nas mention in another post, you need to check/fix your application.\n\n>\n>ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...) IS NULL)\n\noops, this shold be\n CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...))\n\n>\n>-- and to ensure compatibility with your app or migration:\n>\n>CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n> RETURNS trigger AS\n>$BODY$\n>BEGIN\n>-- for all *string* columns\n> NEW.colname1 = NULLIF (colname1,'');\n> NEW.colname2 = NULLIF (colname2,'');\n> NEW.colname3 = NULLIF (colname3,'');\n>RETURN NEW;\n>END;\n>$BODY$\n>\n>CREATE TRIGGER tablename_setnull_tr\n> BEFORE INSERT OR UPDATE\n> ON tablename\n> FOR EACH ROW\n> EXECUTE PROCEDURE tablename_setnull_trf();\n>\n>You can query the pg catalog to generate all required statements.\n>A possible issue is the order in which triggers are fired, when more than one exist for a given table:\n>\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n>( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n>\n>regards,\n>\n>Marc Mamin\n\n\n\n\n\n\n\n\n>>>Hi\n>>>\n>>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>>>\n>>> Hi All\n>>>\n>>> We are testing our Oracle compatible business applications on PostgreSQL database,\n>>>\n>>> the issue we are facing is <empty string> Vs NULL\n>>>\n>>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>>\n>>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>>\n>>> I need some implicit way in PostgreSQL where ''<empty string> can be treated as NULL\n>\n>>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.\n>>\n>>Regards\n>>\n>>Pavel\n>>\n>>p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects.\n>\n>\n>A clean way would be to disallow empty strings on the PG side.\n>This is somewhat combersome depending on how dynamic your model is\n>and add some last on your db though.\n\nhmm, you could also consider disallowing NULLs, i.e. 
force empty strings.\nthis may result in a better compatibility although unwise from postgres point of view (see null storage in PG)\nand neither way allow a compatibility out of the box:\n\n Postgres ORACLE\n'' IS NULL false true\nNULL || 'foo' NULL 'foo'\n \nas mention in another post, you need to check/fix your application. \n\n\n>\n>ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck \n> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...) IS NULL)\n\noops, this shold be \n CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...))\n\n>\n>-- and to ensure compatibility with your app or migration:\n>\n>CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n> RETURNS trigger AS\n>$BODY$\n>BEGIN\n>-- for all *string* columns\n> NEW.colname1 = NULLIF (colname1,'');\n> NEW.colname2 = NULLIF (colname2,'');\n> NEW.colname3 = NULLIF (colname3,'');\n>RETURN NEW;\n>END;\n>$BODY$\n>\n>CREATE TRIGGER tablename_setnull_tr\n> BEFORE INSERT OR UPDATE\n> ON tablename\n> FOR EACH ROW\n> EXECUTE PROCEDURE tablename_setnull_trf();\n> \n>You can query the pg catalog to generate all required statements.\n>A possible issue is the order in which triggers are fired, when more than one exist for a given table:\n>\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n>( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n>\n>regards,\n>\n>Marc Mamin",
"msg_date": "Mon, 9 Feb 2015 13:02:06 +0000",
"msg_from": "Marc Mamin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "In application code is\n\nwhile inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty\nstring in PG, and in Oracle its NULL\n\nwhile selecting: SELECT ... WHERE column IS NULL / NOT NULL\n\nthe issue is, while DML its empty string and while SELECT its comparing\nwith NULL\n\n\n\n\n\nOn Mon, Feb 9, 2015 at 6:32 PM, Marc Mamin <[email protected]> wrote:\n\n>\n> >>>Hi\n> >>>\n> >>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]\n> >:\n> >>>\n> >>> Hi All\n> >>>\n> >>> We are testing our Oracle compatible business applications on\n> PostgreSQL database,\n> >>>\n> >>> the issue we are facing is <empty string> Vs NULL\n> >>>\n> >>> In Oracle '' (<empty string>) and NULL are treated as NULL\n> >>>\n> >>> but, in PostgreSQL '' <empty string> not treated as NULL\n> >>>\n> >>> I need some implicit way in PostgreSQL where ''<empty string> can\n> be treated as NULL\n> >\n> >>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard\n> - Oracle not.\n> >>\n> >>Regards\n> >>\n> >>Pavel\n> >>\n> >>p.s. theoretically you can overwrite a type operators to support Oracle\n> behave, but you should not be sure about unexpected negative side effects.\n> >\n> >\n> >A clean way would be to disallow empty strings on the PG side.\n> >This is somewhat combersome depending on how dynamic your model is\n> >and add some last on your db though.\n>\n> hmm, you could also consider disallowing NULLs, i.e. force empty strings.\n> this may result in a better compatibility although unwise from postgres\n> point of view (see null storage in PG)\n> and neither way allow a compatibility out of the box:\n>\n> Postgres ORACLE\n> '' IS NULL false true\n> NULL || 'foo' NULL 'foo'\n>\n> as mention in another post, you need to check/fix your application.\n>\n>\n> >\n> >ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n> > CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n> ...) IS NULL)\n>\n> oops, this shold be\n> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n> ...))\n>\n> >\n> >-- and to ensure compatibility with your app or migration:\n> >\n> >CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n> > RETURNS trigger AS\n> >$BODY$\n> >BEGIN\n> >-- for all *string* columns\n> > NEW.colname1 = NULLIF (colname1,'');\n> > NEW.colname2 = NULLIF (colname2,'');\n> > NEW.colname3 = NULLIF (colname3,'');\n> >RETURN NEW;\n> >END;\n> >$BODY$\n> >\n> >CREATE TRIGGER tablename_setnull_tr\n> > BEFORE INSERT OR UPDATE\n> > ON tablename\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE tablename_setnull_trf();\n> >\n> >You can query the pg catalog to generate all required statements.\n> >A possible issue is the order in which triggers are fired, when more than\n> one exist for a given table:\n> >\"If more than one trigger is defined for the same event on the same\n> relation, the triggers will be fired in alphabetical order by trigger name\"\n> >( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n> >\n> >regards,\n> >\n> >Marc Mamin\n>\n\nIn application code is while inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty string in PG, and in Oracle its NULLwhile selecting: SELECT ... 
WHERE column IS NULL / NOT NULLthe issue is, while DML its empty string and while SELECT its comparing with NULLOn Mon, Feb 9, 2015 at 6:32 PM, Marc Mamin <[email protected]> wrote:\n\n\n>>>Hi\n>>>\n>>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally <[email protected]>:\n>>>\n>>> Hi All\n>>>\n>>> We are testing our Oracle compatible business applications on PostgreSQL database,\n>>>\n>>> the issue we are facing is <empty string> Vs NULL\n>>>\n>>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>>>\n>>> but, in PostgreSQL '' <empty string> not treated as NULL\n>>>\n>>> I need some implicit way in PostgreSQL where ''<empty string> can be treated as NULL\n>\n>>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard - Oracle not.\n>>\n>>Regards\n>>\n>>Pavel\n>>\n>>p.s. theoretically you can overwrite a type operators to support Oracle behave, but you should not be sure about unexpected negative side effects.\n>\n>\n>A clean way would be to disallow empty strings on the PG side.\n>This is somewhat combersome depending on how dynamic your model is\n>and add some last on your db though.\n\nhmm, you could also consider disallowing NULLs, i.e. force empty strings.\nthis may result in a better compatibility although unwise from postgres point of view (see null storage in PG)\nand neither way allow a compatibility out of the box:\n\n Postgres ORACLE\n'' IS NULL false true\nNULL || 'foo' NULL 'foo'\n \nas mention in another post, you need to check/fix your application. \n\n\n>\n>ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck \n> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...) IS NULL)\n\noops, this shold be \n CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL ...))\n\n>\n>-- and to ensure compatibility with your app or migration:\n>\n>CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n> RETURNS trigger AS\n>$BODY$\n>BEGIN\n>-- for all *string* columns\n> NEW.colname1 = NULLIF (colname1,'');\n> NEW.colname2 = NULLIF (colname2,'');\n> NEW.colname3 = NULLIF (colname3,'');\n>RETURN NEW;\n>END;\n>$BODY$\n>\n>CREATE TRIGGER tablename_setnull_tr\n> BEFORE INSERT OR UPDATE\n> ON tablename\n> FOR EACH ROW\n> EXECUTE PROCEDURE tablename_setnull_trf();\n> \n>You can query the pg catalog to generate all required statements.\n>A possible issue is the order in which triggers are fired, when more than one exist for a given table:\n>\"If more than one trigger is defined for the same event on the same relation, the triggers will be fired in alphabetical order by trigger name\"\n>( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n>\n>regards,\n>\n>Marc Mamin",
"msg_date": "Tue, 10 Feb 2015 09:23:35 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "On Feb 9, 2015, at 8:53 PM, sridhar bamandlapally <[email protected]> wrote:\n> \n> the issue is, while DML its empty string and while SELECT its comparing with NULL\n\nThe issue is, empty string is NOT the same as null, and expecting select for null to match empty strings is a bug, which you need to fix.\n\n-- \nScott Ribe\[email protected]\nhttp://www.elevated-dev.com/\n(303) 722-0567 voice\n\n\n\n\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Mon, 9 Feb 2015 21:50:42 -0700",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "sridhar bamandlapally wrote\n> In application code is\n> \n> while inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty\n> string in PG, and in Oracle its NULL\n> \n> while selecting: SELECT ... WHERE column IS NULL / NOT NULL\n> \n> the issue is, while DML its empty string and while SELECT its comparing\n> with NULL\n\nIf this is the extent of your problem then you can add table triggers to\nchange the empty-string input so that the result of the insert/update is\nNULL. Then all of your selects can use IS NULL for their comparisons just\nlike they do now.\n\nThat is as \"implicit\" as you are going to get without actually fixing the\nunderlying problem.\n\nDavid J.\n\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/empty-string-Vs-NULL-tp5837188p5837308.html\nSent from the PostgreSQL - admin mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Mon, 9 Feb 2015 22:02:37 -0700 (MST)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "Hi,\n\nPlease take this to another list, this has little to do with\nPostgreSQL admin or performance.\n\nFlorent\n\n\n\nOn Tue, Feb 10, 2015 at 4:53 AM, sridhar bamandlapally\n<[email protected]> wrote:\n> In application code is\n>\n> while inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty\n> string in PG, and in Oracle its NULL\n>\n> while selecting: SELECT ... WHERE column IS NULL / NOT NULL\n>\n> the issue is, while DML its empty string and while SELECT its comparing with\n> NULL\n>\n>\n>\n>\n>\n> On Mon, Feb 9, 2015 at 6:32 PM, Marc Mamin <[email protected]> wrote:\n>>\n>>\n>> >>>Hi\n>> >>>\n>> >>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally\n>> >>> <[email protected]>:\n>> >>>\n>> >>> Hi All\n>> >>>\n>> >>> We are testing our Oracle compatible business applications on\n>> >>> PostgreSQL database,\n>> >>>\n>> >>> the issue we are facing is <empty string> Vs NULL\n>> >>>\n>> >>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>> >>>\n>> >>> but, in PostgreSQL '' <empty string> not treated as NULL\n>> >>>\n>> >>> I need some implicit way in PostgreSQL where ''<empty string> can\n>> >>> be treated as NULL\n>> >\n>> >>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard\n>> >> - Oracle not.\n>> >>\n>> >>Regards\n>> >>\n>> >>Pavel\n>> >>\n>> >>p.s. theoretically you can overwrite a type operators to support Oracle\n>> >> behave, but you should not be sure about unexpected negative side effects.\n>> >\n>> >\n>> >A clean way would be to disallow empty strings on the PG side.\n>> >This is somewhat combersome depending on how dynamic your model is\n>> >and add some last on your db though.\n>>\n>> hmm, you could also consider disallowing NULLs, i.e. force empty strings.\n>> this may result in a better compatibility although unwise from postgres\n>> point of view (see null storage in PG)\n>> and neither way allow a compatibility out of the box:\n>>\n>> Postgres ORACLE\n>> '' IS NULL false true\n>> NULL || 'foo' NULL 'foo'\n>>\n>> as mention in another post, you need to check/fix your application.\n>>\n>> >\n>> >ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n>> > CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n>> > ...) 
IS NULL)\n>>\n>> oops, this shold be\n>> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n>> ...))\n>>\n>> >\n>> >-- and to ensure compatibility with your app or migration:\n>> >\n>> >CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n>> > RETURNS trigger AS\n>> >$BODY$\n>> >BEGIN\n>> >-- for all *string* columns\n>> > NEW.colname1 = NULLIF (colname1,'');\n>> > NEW.colname2 = NULLIF (colname2,'');\n>> > NEW.colname3 = NULLIF (colname3,'');\n>> >RETURN NEW;\n>> >END;\n>> >$BODY$\n>> >\n>> >CREATE TRIGGER tablename_setnull_tr\n>> > BEFORE INSERT OR UPDATE\n>> > ON tablename\n>> > FOR EACH ROW\n>> > EXECUTE PROCEDURE tablename_setnull_trf();\n>> >\n>> >You can query the pg catalog to generate all required statements.\n>> >A possible issue is the order in which triggers are fired, when more than\n>> > one exist for a given table:\n>> >\"If more than one trigger is defined for the same event on the same\n>> > relation, the triggers will be fired in alphabetical order by trigger name\"\n>> >( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n>> >\n>> >regards,\n>> >\n>> >Marc Mamin\n>\n>\n\n\n\n-- \nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source Content Management Platform for Business Apps\nhttp://www.nuxeo.com http://community.nuxeo.com\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Tue, 10 Feb 2015 11:07:05 +0100",
"msg_from": "Florent Guillaume <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "The first contact of database migration/issues is DBA (admin),\n\naccept performance is not required\n\nThanks\nSridhar BN\n\n\n\nOn Tue, Feb 10, 2015 at 3:37 PM, Florent Guillaume <[email protected]> wrote:\n\n> Hi,\n>\n> Please take this to another list, this has little to do with\n> PostgreSQL admin or performance.\n>\n> Florent\n>\n>\n>\n> On Tue, Feb 10, 2015 at 4:53 AM, sridhar bamandlapally\n> <[email protected]> wrote:\n> > In application code is\n> >\n> > while inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty\n> > string in PG, and in Oracle its NULL\n> >\n> > while selecting: SELECT ... WHERE column IS NULL / NOT NULL\n> >\n> > the issue is, while DML its empty string and while SELECT its comparing\n> with\n> > NULL\n> >\n> >\n> >\n> >\n> >\n> > On Mon, Feb 9, 2015 at 6:32 PM, Marc Mamin <[email protected]> wrote:\n> >>\n> >>\n> >> >>>Hi\n> >> >>>\n> >> >>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally\n> >> >>> <[email protected]>:\n> >> >>>\n> >> >>> Hi All\n> >> >>>\n> >> >>> We are testing our Oracle compatible business applications on\n> >> >>> PostgreSQL database,\n> >> >>>\n> >> >>> the issue we are facing is <empty string> Vs NULL\n> >> >>>\n> >> >>> In Oracle '' (<empty string>) and NULL are treated as NULL\n> >> >>>\n> >> >>> but, in PostgreSQL '' <empty string> not treated as NULL\n> >> >>>\n> >> >>> I need some implicit way in PostgreSQL where ''<empty string> can\n> >> >>> be treated as NULL\n> >> >\n> >> >>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL\n> standard\n> >> >> - Oracle not.\n> >> >>\n> >> >>Regards\n> >> >>\n> >> >>Pavel\n> >> >>\n> >> >>p.s. theoretically you can overwrite a type operators to support\n> Oracle\n> >> >> behave, but you should not be sure about unexpected negative side\n> effects.\n> >> >\n> >> >\n> >> >A clean way would be to disallow empty strings on the PG side.\n> >> >This is somewhat combersome depending on how dynamic your model is\n> >> >and add some last on your db though.\n> >>\n> >> hmm, you could also consider disallowing NULLs, i.e. force empty\n> strings.\n> >> this may result in a better compatibility although unwise from postgres\n> >> point of view (see null storage in PG)\n> >> and neither way allow a compatibility out of the box:\n> >>\n> >> Postgres ORACLE\n> >> '' IS NULL false true\n> >> NULL || 'foo' NULL 'foo'\n> >>\n> >> as mention in another post, you need to check/fix your application.\n> >>\n> >> >\n> >> >ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n> >> > CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS\n> NULL\n> >> > ...) 
IS NULL)\n> >>\n> >> oops, this shold be\n> >> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS\n> NULL\n> >> ...))\n> >>\n> >> >\n> >> >-- and to ensure compatibility with your app or migration:\n> >> >\n> >> >CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n> >> > RETURNS trigger AS\n> >> >$BODY$\n> >> >BEGIN\n> >> >-- for all *string* columns\n> >> > NEW.colname1 = NULLIF (colname1,'');\n> >> > NEW.colname2 = NULLIF (colname2,'');\n> >> > NEW.colname3 = NULLIF (colname3,'');\n> >> >RETURN NEW;\n> >> >END;\n> >> >$BODY$\n> >> >\n> >> >CREATE TRIGGER tablename_setnull_tr\n> >> > BEFORE INSERT OR UPDATE\n> >> > ON tablename\n> >> > FOR EACH ROW\n> >> > EXECUTE PROCEDURE tablename_setnull_trf();\n> >> >\n> >> >You can query the pg catalog to generate all required statements.\n> >> >A possible issue is the order in which triggers are fired, when more\n> than\n> >> > one exist for a given table:\n> >> >\"If more than one trigger is defined for the same event on the same\n> >> > relation, the triggers will be fired in alphabetical order by trigger\n> name\"\n> >> >( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n> >> >\n> >> >regards,\n> >> >\n> >> >Marc Mamin\n> >\n> >\n>\n>\n>\n> --\n> Florent Guillaume, Director of R&D, Nuxeo\n> Open Source Content Management Platform for Business Apps\n> http://www.nuxeo.com http://community.nuxeo.com\n>\n\nThe first contact of database migration/issues is DBA (admin), accept performance is not requiredThanksSridhar BNOn Tue, Feb 10, 2015 at 3:37 PM, Florent Guillaume <[email protected]> wrote:Hi,\n\nPlease take this to another list, this has little to do with\nPostgreSQL admin or performance.\n\nFlorent\n\n\n\nOn Tue, Feb 10, 2015 at 4:53 AM, sridhar bamandlapally\n<[email protected]> wrote:\n> In application code is\n>\n> while inserting/updating: INSERT/UPDATE into ... ( '' ) - which is empty\n> string in PG, and in Oracle its NULL\n>\n> while selecting: SELECT ... WHERE column IS NULL / NOT NULL\n>\n> the issue is, while DML its empty string and while SELECT its comparing with\n> NULL\n>\n>\n>\n>\n>\n> On Mon, Feb 9, 2015 at 6:32 PM, Marc Mamin <[email protected]> wrote:\n>>\n>>\n>> >>>Hi\n>> >>>\n>> >>>2015-02-09 12:22 GMT+01:00 sridhar bamandlapally\n>> >>> <[email protected]>:\n>> >>>\n>> >>> Hi All\n>> >>>\n>> >>> We are testing our Oracle compatible business applications on\n>> >>> PostgreSQL database,\n>> >>>\n>> >>> the issue we are facing is <empty string> Vs NULL\n>> >>>\n>> >>> In Oracle '' (<empty string>) and NULL are treated as NULL\n>> >>>\n>> >>> but, in PostgreSQL '' <empty string> not treated as NULL\n>> >>>\n>> >>> I need some implicit way in PostgreSQL where ''<empty string> can\n>> >>> be treated as NULL\n>> >\n>> >>It is not possible in PostgreSQL. PostgreSQL respects ANSI SQL standard\n>> >> - Oracle not.\n>> >>\n>> >>Regards\n>> >>\n>> >>Pavel\n>> >>\n>> >>p.s. theoretically you can overwrite a type operators to support Oracle\n>> >> behave, but you should not be sure about unexpected negative side effects.\n>> >\n>> >\n>> >A clean way would be to disallow empty strings on the PG side.\n>> >This is somewhat combersome depending on how dynamic your model is\n>> >and add some last on your db though.\n>>\n>> hmm, you could also consider disallowing NULLs, i.e. 
force empty strings.\n>> this may result in a better compatibility although unwise from postgres\n>> point of view (see null storage in PG)\n>> and neither way allow a compatibility out of the box:\n>>\n>> Postgres ORACLE\n>> '' IS NULL false true\n>> NULL || 'foo' NULL 'foo'\n>>\n>> as mention in another post, you need to check/fix your application.\n>>\n>> >\n>> >ALTER TABLE tablename ADD CONSTRAINT tablename_not_empty_ck\n>> > CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n>> > ...) IS NULL)\n>>\n>> oops, this shold be\n>> CHECK (false= (colname1 IS NULL OR colname2 IS NULL OR colname3 IS NULL\n>> ...))\n>>\n>> >\n>> >-- and to ensure compatibility with your app or migration:\n>> >\n>> >CREATE OR REPLACE FUNCTION tablename_setnull_trf()\n>> > RETURNS trigger AS\n>> >$BODY$\n>> >BEGIN\n>> >-- for all *string* columns\n>> > NEW.colname1 = NULLIF (colname1,'');\n>> > NEW.colname2 = NULLIF (colname2,'');\n>> > NEW.colname3 = NULLIF (colname3,'');\n>> >RETURN NEW;\n>> >END;\n>> >$BODY$\n>> >\n>> >CREATE TRIGGER tablename_setnull_tr\n>> > BEFORE INSERT OR UPDATE\n>> > ON tablename\n>> > FOR EACH ROW\n>> > EXECUTE PROCEDURE tablename_setnull_trf();\n>> >\n>> >You can query the pg catalog to generate all required statements.\n>> >A possible issue is the order in which triggers are fired, when more than\n>> > one exist for a given table:\n>> >\"If more than one trigger is defined for the same event on the same\n>> > relation, the triggers will be fired in alphabetical order by trigger name\"\n>> >( http://www.postgresql.org/docs/9.3/static/trigger-definition.html )\n>> >\n>> >regards,\n>> >\n>> >Marc Mamin\n>\n>\n\n\n\n--\nFlorent Guillaume, Director of R&D, Nuxeo\nOpen Source Content Management Platform for Business Apps\nhttp://www.nuxeo.com http://community.nuxeo.com",
"msg_date": "Tue, 10 Feb 2015 16:10:39 +0530",
"msg_from": "sridhar bamandlapally <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "Hi all,\nI have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run \n\nselect client_addr from pg_stat_replication\n\nbut i have to connect as a superuser what's not desirable.\n\nAs i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. As i don't need role information, i've cutted the table from the query and got the following query:\nSELECT s.pid, s.client_addr\n FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port)\n ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state)\n WHERE s.pid = w.pid;\nWhen i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.\n\nUsing/granting pg_stat_get_backend_client_addr() is not solving the problem.\n\nIs there any way to get client_addr value running not as a superuser?\n\n\nRegards, Mikhail\n\n\nHi all,I have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run select client_addr from pg_stat_replicationbut i have to connect as a superuser what's not desirable.As i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. As i don't need role information, i've cutted the table from the query and got the following query:SELECT s.pid, s.client_addr FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port) ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state) WHERE s.pid = w.pid;When i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.Using/granting pg_stat_get_backend_client_addr() is not solving the problem.Is there any way to get client_addr value running not as a superuser?Regards, Mikhail",
"msg_date": "Tue, 10 Feb 2015 15:21:10 +0300",
"msg_from": "=?UTF-8?B?0JzQuNGF0LDQuNC7?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "=?UTF-8?B?Z2V0dGluZyBjbGllbnRfYWRkciBub3QgYXMgYSBzdXBlcnVzZXI=?="
},
{
"msg_contents": "you can create a function with security differ option as superuser.\n2015年2月10日 8:22 PM于 \"Михаил\" <[email protected]>写道:\n\n> Hi all,\n> I have a PG 9.3 and a streaming replication and I need standby ip address\n> in the monitoring. To get that i can run\n>\n> select client_addr from pg_stat_replication\n>\n> but i have to connect as a superuser what's not desirable.\n>\n> As i see in that view, it uses two functions: pg_stat_get_activity\n> and pg_stat_get_wal_senders and one table pg_authid. As i don't need role\n> information, i've cutted the table from the query and got the following\n> query:\n>\n> SELECT s.pid, s.client_addr\n> FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid,\n> application_name, state, query, waiting, xact_start, query_start,\n> backend_start, state_change, client_addr, client_hostname, client_port)\n> ,pg_stat_get_wal_senders() w(pid, state, sent_location,\n> write_location, flush_location, replay_location, sync_priority, sync_state)\n> WHERE s.pid = w.pid;\n>\n> When i run it as a superuser, everything is ok, when i run it as an\n> ordinary user, the client_addr is NULL. As the\n> function pg_stat_get_wal_senders() returns the result, the problem is in\n> receiving the address from pg_stat_get_activity.\n>\n> Using/granting pg_stat_get_backend_client_addr() is not solving the\n> problem.\n>\n> Is there any way to get client_addr value running not as a superuser?\n>\n>\n> Regards, Mikhail\n>\n>\n\nyou can create a function with security differ option as superuser.\n2015年2月10日 8:22 PM于 \"Михаил\" <[email protected]>写道:\nHi all,I have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run select client_addr from pg_stat_replicationbut i have to connect as a superuser what's not desirable.As i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. As i don't need role information, i've cutted the table from the query and got the following query:SELECT s.pid, s.client_addr FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port) ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state) WHERE s.pid = w.pid;When i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.Using/granting pg_stat_get_backend_client_addr() is not solving the problem.Is there any way to get client_addr value running not as a superuser?Regards, Mikhail",
"msg_date": "Tue, 10 Feb 2015 20:53:16 +0800",
"msg_from": "Jov <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: getting client_addr not as a superuser"
},
{
"msg_contents": "Hi,\n\nHave you try to put replication permissions?\n\nEx. CREATE ROLE username LOGIN\n PASSWORD 'bla'\n NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE REPLICATION;\n\n2015-02-10 10:53 GMT-02:00 Jov <[email protected]>:\n\n> you can create a function with security differ option as superuser.\n> 2015年2月10日 8:22 PM于 \"Михаил\" <[email protected]>写道:\n>\n> Hi all,\n>> I have a PG 9.3 and a streaming replication and I need standby ip address\n>> in the monitoring. To get that i can run\n>>\n>> select client_addr from pg_stat_replication\n>>\n>> but i have to connect as a superuser what's not desirable.\n>>\n>> As i see in that view, it uses two functions: pg_stat_get_activity\n>> and pg_stat_get_wal_senders and one table pg_authid. As i don't need role\n>> information, i've cutted the table from the query and got the following\n>> query:\n>>\n>> SELECT s.pid, s.client_addr\n>> FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid,\n>> application_name, state, query, waiting, xact_start, query_start,\n>> backend_start, state_change, client_addr, client_hostname, client_port)\n>> ,pg_stat_get_wal_senders() w(pid, state, sent_location,\n>> write_location, flush_location, replay_location, sync_priority, sync_state)\n>> WHERE s.pid = w.pid;\n>>\n>> When i run it as a superuser, everything is ok, when i run it as an\n>> ordinary user, the client_addr is NULL. As the\n>> function pg_stat_get_wal_senders() returns the result, the problem is in\n>> receiving the address from pg_stat_get_activity.\n>>\n>> Using/granting pg_stat_get_backend_client_addr() is not solving the\n>> problem.\n>>\n>> Is there any way to get client_addr value running not as a superuser?\n>>\n>>\n>> Regards, Mikhail\n>>\n>>\n\n\n-- \nLuis Antonio Dias de Sá Junior\n\nHi,Have you try to put replication permissions? Ex. CREATE ROLE username LOGIN PASSWORD 'bla' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE REPLICATION;2015-02-10 10:53 GMT-02:00 Jov <[email protected]>:you can create a function with security differ option as superuser.\n2015年2月10日 8:22 PM于 \"Михаил\" <[email protected]>写道:\nHi all,I have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run select client_addr from pg_stat_replicationbut i have to connect as a superuser what's not desirable.As i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. As i don't need role information, i've cutted the table from the query and got the following query:SELECT s.pid, s.client_addr FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port) ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state) WHERE s.pid = w.pid;When i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.Using/granting pg_stat_get_backend_client_addr() is not solving the problem.Is there any way to get client_addr value running not as a superuser?Regards, Mikhail\n\n-- Luis Antonio Dias de Sá Junior",
"msg_date": "Tue, 10 Feb 2015 12:11:23 -0200",
"msg_from": "=?UTF-8?Q?Luis_Antonio_Dias_de_S=C3=A1_Junior?=\n <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: getting client_addr not as a superuser"
},
{
"msg_contents": "On Feb 10, 2015, at 3:40 AM, sridhar bamandlapally <[email protected]> wrote:\n> \n> The first contact of database migration/issues is DBA (admin), \n\nThis is a SQL usage issue, not a db admin issue, and so is most appropriate for the pgsql-general list. (Anyway, your question has been answered 4 times now.)\n\n-- \nScott Ribe\[email protected]\nhttp://www.elevated-dev.com/\n(303) 722-0567 voice\n\n\n\n\n\n\n-- \nSent via pgsql-admin mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-admin\n",
"msg_date": "Tue, 10 Feb 2015 07:32:31 -0700",
"msg_from": "Scott Ribe <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] <empty string> Vs NULL"
},
{
"msg_contents": "Hi,\nreplication permissions doesn't help:\n=> \\du zabbix\nList of roles\nRole name │ Attributes │ Member of\n───────────┼─────────────┼───────────\nzabbix │ Replication │ {}\n[local] zabbix@postgres\n=> select client_addr from pg_stat_replication;\nclient_addr\n─────────────\nNULL\n(1 row)\nSeems like for that moment function with security definer is the only solution, though it smells like workaround.\n\n\nВторник, 10 февраля 2015, 12:11 -02:00 от Luis Antonio Dias de Sá Junior <[email protected]>:\n>Hi,\n>\n>Have you try to put replication permissions? \n>\n>Ex. CREATE ROLE username LOGIN\n> PASSWORD 'bla'\n> NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE REPLICATION;\n>\n>2015-02-10 10:53 GMT-02:00 Jov < [email protected] > :\n>>you can create a function with security differ option as superuser.\n>>2015年2月10日 8:22 PM于 \"Михаил\" < [email protected] >写道:\n>>\n>>>Hi all,\n>>>I have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run \n>>>\n>>>select client_addr from pg_stat_replication\n>>>\n>>>but i have to connect as a superuser what's not desirable.\n>>>\n>>>As i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. As i don't need role information, i've cutted the table from the query and got the following query:\n>>>SELECT s.pid, s.client_addr\n>>> FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port)\n>>> ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state)\n>>> WHERE s.pid = w.pid;\n>>>When i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.\n>>>\n>>>Using/granting pg_stat_get_backend_client_addr() is not solving the problem.\n>>>\n>>>Is there any way to get client_addr value running not as a superuser?\n>>>\n>>>\n>>>Regards, Mikhail\n>>>\n>\n>\n>\n>-- \n>Luis Antonio Dias de Sá Junior\n\nС уважением,\n\[email protected]\n\nHi,replication permissions doesn't help:=> \\du zabbix List of roles Role name │ Attributes │ Member of───────────┼─────────────┼─────────── zabbix │ Replication │ {}[local] zabbix@postgres=> select client_addr from pg_stat_replication; client_addr───────────── NULL(1 row)Seems like for that moment function with security definer is the only solution, though it smells like workaround.Вторник, 10 февраля 2015, 12:11 -02:00 от Luis Antonio Dias de Sá Junior <[email protected]>:\n\n\n\n\n\n\nHi,Have you try to put replication permissions? Ex. CREATE ROLE username LOGIN PASSWORD 'bla' NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE REPLICATION;2015-02-10 10:53 GMT-02:00 Jov <[email protected]>:you can create a function with security differ option as superuser.\n2015年2月10日 8:22 PM于 \"Михаил\" <[email protected]>写道:\nHi all,I have a PG 9.3 and a streaming replication and I need standby ip address in the monitoring. To get that i can run select client_addr from pg_stat_replicationbut i have to connect as a superuser what's not desirable.As i see in that view, it uses two functions: pg_stat_get_activity and pg_stat_get_wal_senders and one table pg_authid. 
As i don't need role information, i've cutted the table from the query and got the following query:SELECT s.pid, s.client_addr FROM pg_stat_get_activity(NULL::integer) s(datid, pid, usesysid, application_name, state, query, waiting, xact_start, query_start, backend_start, state_change, client_addr, client_hostname, client_port) ,pg_stat_get_wal_senders() w(pid, state, sent_location, write_location, flush_location, replay_location, sync_priority, sync_state) WHERE s.pid = w.pid;When i run it as a superuser, everything is ok, when i run it as an ordinary user, the client_addr is NULL. As the function pg_stat_get_wal_senders() returns the result, the problem is in receiving the address from pg_stat_get_activity.Using/granting pg_stat_get_backend_client_addr() is not solving the problem.Is there any way to get client_addr value running not as a superuser?Regards, Mikhail\n\n-- Luis Antonio Dias de Sá Junior\n\n\n\n\n\n\n\n\nС уважением, [email protected]",
"msg_date": "Fri, 13 Feb 2015 13:58:35 +0300",
"msg_from": "=?UTF-8?B?0JzQuNGF0LDQuNC7?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "\n =?UTF-8?B?UmVbMl06IFtBRE1JTl0gZ2V0dGluZyBjbGllbnRfYWRkciBub3QgYXMgYSBz?=\n =?UTF-8?B?dXBlcnVzZXI=?="
}
] |
[
{
"msg_contents": "I have a system that I am needing to convert from FoxPro files being accessed\nwith DAO to PostgreSQL.\n\nThis system serves 1,000 clients and will be expanding to 2,000 within the\nnext 18 months.\n\nThe current system has a directory with files that contain information of a\nglobal nature such as the list of clients, list of users and which client\nthey belong to, etc. Then each client has files within their own directory\nto keep the size of the tables manageable. Each client has 165 tables. These\ntables are all the same definition across the different groups.\n\nI have been trying to determine the best way to define this system within\nPostgreSQL.\n\nI have considered partitioning tables, but if I am correct that would result\nin 330,000 files and I am not certain if that will cause an issue with\ndegraded file system performance.\n\nIs it possible to create a tablespace for each group and then create\npartitioned tables for the groups within the group's tablespace to limit the\nnumber of files in a directory? I plan on utilizing the built-in streaming\nreplication, so I assume if I went the tablespace route I would need to\ncreate directories for all future groups from the outset since creating them\nindividually with code on the backup systems would be difficult.\n\nAnother option would be placing an extra field in each table identifying the\ngroup it belongs to and combining all of the separate tables of the same\ndefinition into one table. This would result in some tables having 300\nmillion entries currently and that would climb over the next 18 months.\n\nThe final option I can see is creating a schema for each of the different\nclients. I am not certain if this is a better option than partitioned\ntables. I haven't been able to determine if schema objects are stored in a\nsub directory or if they are in the same directory as all of the other\ntables. If they are in the same directory then the same issue arises as the\npartitioned tables.\n\nOf course, I am certain there are a number of other possibilities that I am\noverlooking. I am just trying to determine the best way to move this over\nand get things into a more modern system.\n\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Migrating-a-FoxPro-system-and-would-like-input-on-the-best-way-to-achieve-optimal-performance-tp5837211.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 07:21:34 -0700 (MST)",
"msg_from": "TonyS <[email protected]>",
"msg_from_op": true,
"msg_subject": "Migrating a FoxPro system and would like input on the best way to\n achieve optimal performance"
},
{
"msg_contents": "TonyS wrote\n> Then each client has files within their own directory to keep the size of\n> the tables manageable. Each client has 165 tables. These tables are all\n> the same definition across the different groups.\n> \n> I have considered partitioning tables, but if I am correct that would\n> result in 330,000 files and I am not certain if that will cause an issue\n> with degraded file system performance.\n\nI suggest you not think about \"files\" when pondering about PostgreSQL. That\nsaid, 330,000 tables within a single database, or even cluster, is likely to\nbe problematic.\n\n\n> Is it possible to create a tablespace for each group and then create\n> partitioned tables for the groups within the group's tablespace to limit\n> the number of files in a directory? \n\nSame point about ignoring \"files\" and \"directories\". Tablespaces let you\nplace different kinds of data onto different filesystems; using them for\n\"directory management\" is not particularly helpful.\n\nNote that I presume you are planning on leaving the database backend on\nWindows...my experience is more with Linux but your core issue is data model\nwhich is largely O/S agnostic.\n\n\n> I plan on utilizing the built-in streaming replication, so I assume if I\n> went the tablespace route I would need to create directories for all\n> future groups from the outset since creating them individually with code\n> on the backup systems would be difficult.\n\nWhich is another reason why tablespaces should not implement logical\nattributes of the system.\n\n\n> Another option would be placing an extra field in each table identifying\n> the group it belongs to and combining all of the separate tables of the\n> same definition into one table. This would result in some tables having\n> 300 million entries currently and that would climb over the next 18\n> months.\n\nThis is the canonical solution to multi-tenancy. Physical partitioning then\noccurs on a hash of whatever key you are using; you do not have one tenant\nper table.\n\n\n> The final option I can see is creating a schema for each of the different\n> clients. I am not certain if this is a better option than partitioned\n> tables. I haven't been able to determine if schema objects are stored in a\n> sub directory or if they are in the same directory as all of the other\n> tables. If they are in the same directory then the same issue arises as\n> the partitioned tables.\n\nDepending on whether clients are able to get access to the data directly you\ncan also consider having a separate database for each client. I would then\nrecommend using either dblink or postgres_fdw to connect to the single\nshared database - or just replicate the shared schema and data subset into\neach individual client database.\n\n\n> Of course, I am certain there are a number of other possibilities that I\n> am overlooking. I am just trying to determine the best way to move this\n> over and get things into a more modern system.\n\nWithout understanding how your application works and makes use of the\nexisting data it is difficult to suggest alternatives. Specifically around\ndata visibility and the mechanics behind how the application access\ndifferent clients' data.\n\nI would personally choose only between having different databases for each\nclient or using a \"client_id\" column in conjunction with a multi-tenant\ndatabase. 
Those are the two logical models; everything else (e.g.\npartitioning) are physical implementation details.\n\nDavid J.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Migrating-a-FoxPro-system-and-would-like-input-on-the-best-way-to-achieve-optimal-performance-tp5837211p5837241.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
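\n[Editor's note: a minimal sketch of the \"client_id\" multi-tenant model described above; all table and column names are illustrative:\n\nCREATE TABLE client (\n    client_id integer PRIMARY KEY,\n    name      text NOT NULL\n);\n\n-- one shared table replaces one table per client directory\nCREATE TABLE invoice (\n    client_id  integer NOT NULL REFERENCES client,\n    invoice_id integer NOT NULL,\n    amount     numeric(12,2),\n    PRIMARY KEY (client_id, invoice_id)\n);\n\n-- every query is scoped by tenant; the leading client_id key column keeps lookups fast:\nSELECT * FROM invoice WHERE client_id = 42;]\n",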
"msg_date": "Mon, 9 Feb 2015 11:17:52 -0700 (MST)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Migrating a FoxPro system and would like input on the best way\n to achieve optimal performance"
}
] |
[
{
"msg_contents": "For a large database with lots of activity (transactions), the XIDs are very\noften re-frozen by AutoVacuum. Even when autovacuum_freeze_max_age is set to\n2 billion, the XIDs can wrap every couple of days on an active database.\nThis causes unnecessary changes to otherwise unmodified files and archiving\nprocesses that use rsync or similiar processes have way more work to do.\nCouldn't postgres reserve a special XID that is never available for normal\ntransactions but that indicates that any transaction can see it because it\nis so old? Then instead of constantly having to freeze old XIDs each time\nthe XID is going to wrap, vacuum can just set it to the special XID and\nnever touch it again unless something really changes.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Avoiding-Refreezing-XIDs-Repeatedly-tp5837222.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 08:58:36 -0700 (MST)",
"msg_from": "bkrug <[email protected]>",
"msg_from_op": true,
"msg_subject": "Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "On Mon, Feb 9, 2015 at 1:58 PM, bkrug <[email protected]> wrote:\n\n> Couldn't postgres reserve a special XID that is never available for normal\n> transactions but that indicates that any transaction can see it because it\n> is so old? Then instead of constantly having to freeze old XIDs each time\n> the XID is going to wrap, vacuum can just set it to the special XID and\n> never touch it again unless something really changes.\n>\n\n\nIt changed in recent versions (9.3 or 9.4, I don't recall exactly which)\nand moved to tuple header, but what you described is exactly what was done,\nthe xid was 2.\n\nAnyway, an already frozen tuple won't be \"re-frozen\" again.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Feb 9, 2015 at 1:58 PM, bkrug <[email protected]> wrote:\nCouldn't postgres reserve a special XID that is never available for normal\ntransactions but that indicates that any transaction can see it because it\nis so old? Then instead of constantly having to freeze old XIDs each time\nthe XID is going to wrap, vacuum can just set it to the special XID and\nnever touch it again unless something really changes.It changed in recent versions (9.3 or 9.4, I don't recall exactly which) and moved to tuple header, but what you described is exactly what was done, the xid was 2.Anyway, an already frozen tuple won't be \"re-frozen\" again.Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 9 Feb 2015 14:48:17 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "Matheus de Oliveira wrote:\n> On Mon, Feb 9, 2015 at 1:58 PM, bkrug <[email protected]> wrote:\n> \n> > Couldn't postgres reserve a special XID that is never available for normal\n> > transactions but that indicates that any transaction can see it because it\n> > is so old? Then instead of constantly having to freeze old XIDs each time\n> > the XID is going to wrap, vacuum can just set it to the special XID and\n> > never touch it again unless something really changes.\n> >\n> \n> \n> It changed in recent versions (9.3 or 9.4, I don't recall exactly which)\n> and moved to tuple header, but what you described is exactly what was done,\n> the xid was 2.\n\nActually, it's been done this way for ages -- it was introduced in 2001\n(release 7.2) by these commits:\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL7_2 [2589735da] 2001-08-25 18:52:43 +0000\n\n Replace implementation of pg_log as a relation accessed through the\n buffer manager with 'pg_clog', a specialized access method modeled\n on pg_xlog. This simplifies startup (don't need to play games to\n open pg_log; among other things, OverrideTransactionSystem goes away),\n should improve performance a little, and opens the door to recycling\n commit log space by removing no-longer-needed segments of the commit\n log. Actual recycling is not there yet, but I felt I should commit\n this part separately since it'd still be useful if we chose not to\n do transaction ID wraparound.\n\n\nAuthor: Tom Lane <[email protected]>\nBranch: master Release: REL7_2 [bc7d37a52] 2001-08-26 16:56:03 +0000\n\n Transaction IDs wrap around, per my proposal of 13-Aug-01. More\n documentation to come, but the code is all here. initdb forced.\n\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 14:06:32 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "Matheus de Oliveira wrote\n> It changed in recent versions (9.3 or 9.4, I don't recall exactly which)\n> and moved to tuple header, but what you described is exactly what was\n> done,\n> the xid was 2.\n\nShould the relfrozenxid of pg_class then equal 2 for very old and already\nvacuumed tables? Because that is not what I am seeing.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Avoiding-Refreezing-XIDs-Repeatedly-tp5837222p5837247.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 11:45:53 -0700 (MST)",
"msg_from": "bkrug <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "bkrug wrote:\n> Matheus de Oliveira wrote\n> > It changed in recent versions (9.3 or 9.4, I don't recall exactly which)\n> > and moved to tuple header, but what you described is exactly what was\n> > done,\n> > the xid was 2.\n> \n> Should the relfrozenxid of pg_class then equal 2 for very old and already\n> vacuumed tables? Because that is not what I am seeing.\n\nNo. The problem is that it's not easy to change the relfrozenxid when\nan INSERT/UPDATE command creates a tuple with a non-frozen XID.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 15:53:05 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "On Mon, Feb 9, 2015 at 4:45 PM, bkrug <[email protected]> wrote:\n\n> Should the relfrozenxid of pg_class then equal 2 for very old and already\n> vacuumed tables? Because that is not what I am seeing.\n\n\nhm... You meant in the entire table? Like an static table?\n\nThen no, it is done tuple by tuple only. In older versions (I think up to\n9.2) it was setting xmin column to 2.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Mon, Feb 9, 2015 at 4:45 PM, bkrug <[email protected]> wrote:\nShould the relfrozenxid of pg_class then equal 2 for very old and already\nvacuumed tables? Because that is not what I am seeing.hm... You meant in the entire table? Like an static table?Then no, it is done tuple by tuple only. In older versions (I think up to 9.2) it was setting xmin column to 2.Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Mon, 9 Feb 2015 16:56:04 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "The problem I'm facing is that I have many large (several GB) tables that are\nnot being changed (they are several days old) but auto-vacuum keeps scanning\nand updating them every time the xid wraps around and thus my rsync back-up\nprocess sees that the disk files have changed and must copy them.\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/Avoiding-Refreezing-XIDs-Repeatedly-tp5837222p5837251.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 12:07:05 -0700 (MST)",
"msg_from": "bkrug <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "bkrug wrote:\n> The problem I'm facing is that I have many large (several GB) tables that are\n> not being changed (they are several days old) but auto-vacuum keeps scanning\n> and updating them every time the xid wraps around and thus my rsync back-up\n> process sees that the disk files have changed and must copy them.\n\nWe have considered changing this, but it needs a concerted effort. It's\nnot a simple problem.\n\n-- \n�lvaro Herrera http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 9 Feb 2015 16:10:35 -0300",
"msg_from": "Alvaro Herrera <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
},
{
"msg_contents": "Alvaro Herrera <[email protected]> writes:\n> bkrug wrote:\n>> The problem I'm facing is that I have many large (several GB) tables that are\n>> not being changed (they are several days old) but auto-vacuum keeps scanning\n>> and updating them every time the xid wraps around and thus my rsync back-up\n>> process sees that the disk files have changed and must copy them.\n\n> We have considered changing this, but it needs a concerted effort. It's\n> not a simple problem.\n\nI'm not following. Yes, the tables will be *scanned* at least once per\nXID wraparound cycle, but if they are in fact static then they should not\nbe changing once the tuples have been frozen the first time. If this is\nincurring continuing rsync work then something else is going on.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 09 Feb 2015 15:00:34 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Avoiding Refreezing XIDs Repeatedly"
}
] |
[
{
"msg_contents": "Hi,\n\nA survay: with pgbench using TPS-B, what is the maximum TPS you're ever\nseen?\n\nFor me: 12000 TPS.\n\n-- \nLuis Antonio Dias de Sá Junior\n\nHi,A survay: with pgbench using TPS-B, what is the maximum TPS you're ever seen?For me: 12000 TPS.-- Luis Antonio Dias de Sá Junior",
"msg_date": "Mon, 9 Feb 2015 17:30:13 -0200",
"msg_from": "=?UTF-8?Q?Luis_Antonio_Dias_de_S=C3=A1_Junior?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Survey: Max TPS you've ever seen"
},
{
"msg_contents": "On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n> Hi,\n>\n> A survay: with pgbench using TPS-B, what is the maximum TPS you're \n> ever seen?\n>\n> For me: 12000 TPS.\n>\n> -- \n> Luis Antonio Dias de Sá Junior\nImportant to specify:\n\n 1. O/S\n 2. version of PostgreSQL\n 3. PostgreSQL configuration\n 4. hardware configuration\n 5. anything else that might affect performance\n\nI suspect that Linux will out perform Microsoft on the same hardware, \nand optimum configuration for both O/S's...\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Feb 2015 10:29:00 +1300",
"msg_from": "Gavin Flower <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": "No problem with this. If anyone want to specify more details.\n\nBut I want to know how far postgres can go. No matter OS or other variables.\n\nGavin, you got more than 12000 TPS?\n\n2015-02-09 19:29 GMT-02:00 Gavin Flower <[email protected]>:\n\n> On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n>\n>> Hi,\n>>\n>> A survay: with pgbench using TPS-B, what is the maximum TPS you're ever\n>> seen?\n>>\n>> For me: 12000 TPS.\n>>\n>> --\n>> Luis Antonio Dias de Sá Junior\n>>\n> Important to specify:\n>\n> 1. O/S\n> 2. version of PostgreSQL\n> 3. PostgreSQL configuration\n> 4. hardware configuration\n> 5. anything else that might affect performance\n>\n> I suspect that Linux will out perform Microsoft on the same hardware, and\n> optimum configuration for both O/S's...\n>\n>\n> Cheers,\n> Gavin\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nLuis Antonio Dias de Sá Junior\n\nNo problem with this. If anyone want to specify more details.But I want to know how far postgres can go. No matter OS or other variables.Gavin, you got more than 12000 TPS?2015-02-09 19:29 GMT-02:00 Gavin Flower <[email protected]>:On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n\nHi,\n\nA survay: with pgbench using TPS-B, what is the maximum TPS you're ever seen?\n\nFor me: 12000 TPS.\n\n-- \nLuis Antonio Dias de Sá Junior\n\nImportant to specify:\n\n1. O/S\n2. version of PostgreSQL\n3. PostgreSQL configuration\n4. hardware configuration\n5. anything else that might affect performance\n\nI suspect that Linux will out perform Microsoft on the same hardware, and optimum configuration for both O/S's...\n\n\nCheers,\nGavin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n-- Luis Antonio Dias de Sá Junior",
"msg_date": "Tue, 10 Feb 2015 08:48:02 -0200",
"msg_from": "=?UTF-8?Q?Luis_Antonio_Dias_de_S=C3=A1_Junior?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": "I'd suggest you run it on a large ramdisk with fsync turned off on a 32 core computer, see what you get, that will be a good indication of a maximum.\n\nKeep in mind though that 'postgres' with fsync (vs. without) is such a different creature that the comparison isn't meaningful. \nSimilarly 'postgres' on volatile backing store vs. non-volatile isn't really a meaningful comparison. \n\nThere's also a question here about the 't' in TPS. If you have no fsync and volatile storage, are you really doing 'transactions'? Depending on the definition you take, a transaction may have some sense of 'reliability' or atomicity which isn't reflected well in a ramdisk/no-fsync benchmark. \n\nIt's probably not ideal to fill a mailing list with numbers that have no meaning attached to them, so why not set up a little web database or Google doc to record max TPS and how it was achieved?\n\nFor example, imagine I tell you that the highest I've achieved is 1240000 tps. How does it help you if I say that? \n\nGraeme Bell\n\nOn 10 Feb 2015, at 11:48, Luis Antonio Dias de Sá Junior <[email protected]> wrote:\n\n> No problem with this. If anyone want to specify more details.\n> \n> But I want to know how far postgres can go. No matter OS or other variables.\n> \n> Gavin, you got more than 12000 TPS?\n> \n> 2015-02-09 19:29 GMT-02:00 Gavin Flower <[email protected]>:\n> On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n> Hi,\n> \n> A survay: with pgbench using TPS-B, what is the maximum TPS you're ever seen?\n> \n> For me: 12000 TPS.\n> \n> -- \n> Luis Antonio Dias de Sá Junior\n> Important to specify:\n> \n> 1. O/S\n> 2. version of PostgreSQL\n> 3. PostgreSQL configuration\n> 4. hardware configuration\n> 5. anything else that might affect performance\n> \n> I suspect that Linux will out perform Microsoft on the same hardware, and optimum configuration for both O/S's...\n> \n> \n> Cheers,\n> Gavin\n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \n> \n> \n> -- \n> Luis Antonio Dias de Sá Junior\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Feb 2015 11:27:36 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": "On 10/02/15 10:29, Gavin Flower wrote:\n> On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n>> Hi,\n>>\n>> A survay: with pgbench using TPS-B, what is the maximum TPS you're\n>> ever seen?\n>>\n>> For me: 12000 TPS.\n>>\n>> --\n>> Luis Antonio Dias de Sá Junior\n> Important to specify:\n>\n> 1. O/S\n> 2. version of PostgreSQL\n> 3. PostgreSQL configuration\n> 4. hardware configuration\n> 5. anything else that might affect performance\n>\n> I suspect that Linux will out perform Microsoft on the same hardware,\n> and optimum configuration for both O/S's...\n>\n\n\nYes, exactly - and also the pgbench parameters:\n\n- scale\n- number of clients\n- number of threads\n- statement options (prepared or simple etc)\n- length of test\n\nWe've managed to get 40000 to 60000 TPS on some pretty serious hardware:\n\n- 60 core, 1 TB ram\n- 16 SSD + 4 PCIe SSD storage\n- Ubuntu 14.04\n- Postgres 9.4 (beta and rc)\n\n...with Postgres parameters customized:\n\n- checkpoint_segments 1920\n- checkpoint_completion_target 0.8\n- wal_buffers 256MB\n- wal_sync_method open_datasync\n- shared_buffers 10GB\n- max_connections 600\n- effective_io_concurrency 10\n\n..and finally pgbench parameters\n\n- scale 2000\n- clients 32, 64, 128, 256 (best results at 32 and 64 generally)\n- threads = 1/2 client number\n- prepared option\n- 10 minute test run time\n\nPoints to note, we did *not* disable fsync or prevent buffers being \nactually written (common dirty tricks in benchmarks). However, as others \nhave remarked - raw numbers mean little. Pgbench is very useful for \ntesting how tuning configurations are helping (or not) for a particular \nhardware and software setup, but is less useful for answering the \nquestion \"how many TPS can postgres do\"...\n\nRegards\n\nMark\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Feb 2015 13:31:15 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": "For me 12000 tps until now\n\n 24 core, 150 Gb ram\n- 5 ssd raid 5\n- Debian 7.8\n- Postgres 9.3.5\n\n...with Postgres parameters customized:\n\n- checkpoint_segments 1000\n- checkpoint_completion_target 0.9\n- wal_buffers 256MB\n- shared_buffers 31 gb\n- max_connections 500\n- effective_io_concurrency 15\n\n..and finally pgbench parameters\n\n- scale 350\n- clients 300\n- threads 30\n- 60 seconds test run time\nEm 10/02/2015 22:32, \"Mark Kirkwood\" <[email protected]>\nescreveu:\n\n> On 10/02/15 10:29, Gavin Flower wrote:\n>\n>> On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n>>\n>>> Hi,\n>>>\n>>> A survay: with pgbench using TPS-B, what is the maximum TPS you're\n>>> ever seen?\n>>>\n>>> For me: 12000 TPS.\n>>>\n>>> --\n>>> Luis Antonio Dias de Sá Junior\n>>>\n>> Important to specify:\n>>\n>> 1. O/S\n>> 2. version of PostgreSQL\n>> 3. PostgreSQL configuration\n>> 4. hardware configuration\n>> 5. anything else that might affect performance\n>>\n>> I suspect that Linux will out perform Microsoft on the same hardware,\n>> and optimum configuration for both O/S's...\n>>\n>>\n>\n> Yes, exactly - and also the pgbench parameters:\n>\n> - scale\n> - number of clients\n> - number of threads\n> - statement options (prepared or simple etc)\n> - length of test\n>\n> We've managed to get 40000 to 60000 TPS on some pretty serious hardware:\n>\n> - 60 core, 1 TB ram\n> - 16 SSD + 4 PCIe SSD storage\n> - Ubuntu 14.04\n> - Postgres 9.4 (beta and rc)\n>\n> ...with Postgres parameters customized:\n>\n> - checkpoint_segments 1920\n> - checkpoint_completion_target 0.8\n> - wal_buffers 256MB\n> - wal_sync_method open_datasync\n> - shared_buffers 10GB\n> - max_connections 600\n> - effective_io_concurrency 10\n>\n> ..and finally pgbench parameters\n>\n> - scale 2000\n> - clients 32, 64, 128, 256 (best results at 32 and 64 generally)\n> - threads = 1/2 client number\n> - prepared option\n> - 10 minute test run time\n>\n> Points to note, we did *not* disable fsync or prevent buffers being\n> actually written (common dirty tricks in benchmarks). However, as others\n> have remarked - raw numbers mean little. Pgbench is very useful for testing\n> how tuning configurations are helping (or not) for a particular hardware\n> and software setup, but is less useful for answering the question \"how many\n> TPS can postgres do\"...\n>\n> Regards\n>\n> Mark\n>\n>\n>\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nFor me 12000 tps until now\n 24 core, 150 Gb ram\n- 5 ssd raid 5\n- Debian 7.8\n- Postgres 9.3.5\n...with Postgres parameters customized:\n- checkpoint_segments 1000\n- checkpoint_completion_target 0.9\n- wal_buffers 256MB\n- shared_buffers 31 gb\n- max_connections 500\n- effective_io_concurrency 15\n..and finally pgbench parameters\n- scale 350\n- clients 300\n- threads 30\n- 60 seconds test run time\nEm 10/02/2015 22:32, \"Mark Kirkwood\" <[email protected]> escreveu:On 10/02/15 10:29, Gavin Flower wrote:\n\nOn 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n\nHi,\n\nA survay: with pgbench using TPS-B, what is the maximum TPS you're\never seen?\n\nFor me: 12000 TPS.\n\n--\nLuis Antonio Dias de Sá Junior\n\nImportant to specify:\n\n1. O/S\n2. version of PostgreSQL\n3. PostgreSQL configuration\n4. hardware configuration\n5. 
anything else that might affect performance\n\nI suspect that Linux will out perform Microsoft on the same hardware,\nand optimum configuration for both O/S's...\n\n\n\n\nYes, exactly - and also the pgbench parameters:\n\n- scale\n- number of clients\n- number of threads\n- statement options (prepared or simple etc)\n- length of test\n\nWe've managed to get 40000 to 60000 TPS on some pretty serious hardware:\n\n- 60 core, 1 TB ram\n- 16 SSD + 4 PCIe SSD storage\n- Ubuntu 14.04\n- Postgres 9.4 (beta and rc)\n\n...with Postgres parameters customized:\n\n- checkpoint_segments 1920\n- checkpoint_completion_target 0.8\n- wal_buffers 256MB\n- wal_sync_method open_datasync\n- shared_buffers 10GB\n- max_connections 600\n- effective_io_concurrency 10\n\n..and finally pgbench parameters\n\n- scale 2000\n- clients 32, 64, 128, 256 (best results at 32 and 64 generally)\n- threads = 1/2 client number\n- prepared option\n- 10 minute test run time\n\nPoints to note, we did *not* disable fsync or prevent buffers being actually written (common dirty tricks in benchmarks). However, as others have remarked - raw numbers mean little. Pgbench is very useful for testing how tuning configurations are helping (or not) for a particular hardware and software setup, but is less useful for answering the question \"how many TPS can postgres do\"...\n\nRegards\n\nMark\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 12 Feb 2015 09:12:33 -0200",
"msg_from": "=?UTF-8?Q?Luis_Antonio_Dias_de_S=C3=A1_Junior?=\n <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": "Hi all!\r\n\r\n> - checkpoint_segments 1000\r\n> - checkpoint_completion_target 0.9\r\n> - wal_buffers 256MB\r\n> - shared_buffers 31 gb\r\n> - max_connections 500\r\n\r\nI see that some of you are using wal_buffers = 256MB.\r\nI was under the impression that Postgres will not benefit from higher value than the segment size, i.e. 16MB. More than that will not do/help anything.\r\n\r\nWhat's the reasoning behind setting it to higher than 16MB? Do I have old information?\r\n\r\nBest regards, Martin\r\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 11:20:23 +0000",
"msg_from": "\"Gudmundsson Martin (mg)\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
{
"msg_contents": ">> 1. O/S\n\n\nUnder \"O/S\", don't forget to mention linux kernel version. \n\nWe saw a MASSIVE increase in TPS (I think it was a doubling? Don't have the data to hand right now) on our multicore RHEL6 servers, when moving from a stock RHEL6 kernel to an ELREPO 3.18 series kernel. That's what 10 years of kernel development will do for you. \n\n> - 16 SSD + 4 PCIe SSD storage\n\nSimilarly, it's useful to specify\n\n- exactly which drives were being used during the test (PCIe and SATA SSDs perform pretty differently!). Similarly if you're using e.g. a dell server with a ssd cache in front of the disks, remember to mention it. \n\n- Also exactly which PCI interface, now that there are different types of PCI attached SSD becoming available (traditional pciE SSD vs NVMe) with substantially different performance and overheads. \n\n(Performance junkies: Check out nvmE if you haven't heard of it) \n http://www.thessdreview.com/daily-news/latest-buzz/marvell-displays-88ss1094-nvme-ssd-controller-2-9gbs/\n http://www.thessdreview.com/daily-news/latest-buzz/memblaze-pmc-collaborate-pblaze4-pcie-ssd-hyperscale-data-centers-3-2gbs-reads-850000-iops/\n\n- Which firmware (some ssds exhibit noteable performance changes with firmware)\n\n- which filesystem and filesystem options (try benchmarking with a fresh ext4 filesystem and nobarriers - then compare against a mostly full filesystem with barriers on an SSD. You should see quite a difference)\n\n- which RAID controller. (Good luck if you're using an H710 with modern SSDs for example... the controller's write cache is the choke point for performance)\n\n- readahead settings (We *tripled* our read performance on large tables/transfers by changing this from the default value in linux up to around 16MB)\n\n- filesystem queue depth and scheduler ( e.g. shallow/deep queues on ssds and e.g. cfq vs. noop schedulers on ssds)\n\n- if anything else is running on the same server/filesystem (e.g. background db activity, web servers etc, operating system sharing the same disk)\n\n- even things like raid stripe size and filesystem block size can have a small impact if you're going for absolute maximum TPS. \n\nHowever honestly all of this is probably dwarfed by the question of what you're doing with your database. If what you do doesn't actually look like pgbench activity (e.g. your server is mostly burning clock cycles on running ancient legacy pl/sql code) then you're taking the wrong benchmark if you use pgbench. \n\n\n(Also, another note for performance junkies - some interesting news from the gaming world - spending extra money on 'fast memory' is probably a waste in the current generation of computers)\n\n http://www.anandtech.com/show/7364/memory-scaling-on-haswell/3\n\nGraeme Bell\n\nOn 11 Feb 2015, at 01:31, Mark Kirkwood <[email protected]> wrote:\n\n> On 10/02/15 10:29, Gavin Flower wrote:\n>> On 10/02/15 08:30, Luis Antonio Dias de Sá Junior wrote:\n>>> Hi,\n>>> \n>>> A survay: with pgbench using TPS-B, what is the maximum TPS you're\n>>> ever seen?\n>>> \n>>> For me: 12000 TPS.\n>>> \n>>> --\n>>> Luis Antonio Dias de Sá Junior\n>> Important to specify:\n>> \n>> 1. O/S\n>> 2. version of PostgreSQL\n>> 3. PostgreSQL configuration\n>> 4. hardware configuration\n>> 5. 
anything else that might affect performance\n>> \n>> I suspect that Linux will out perform Microsoft on the same hardware,\n>> and optimum configuration for both O/S's...\n>> \n> \n> \n> Yes, exactly - and also the pgbench parameters:\n> \n> - scale\n> - number of clients\n> - number of threads\n> - statement options (prepared or simple etc)\n> - length of test\n> \n> We've managed to get 40000 to 60000 TPS on some pretty serious hardware:\n> \n> - 60 core, 1 TB ram\n> - 16 SSD + 4 PCIe SSD storage\n> - Ubuntu 14.04\n> - Postgres 9.4 (beta and rc)\n> \n> ...with Postgres parameters customized:\n> \n> - checkpoint_segments 1920\n> - checkpoint_completion_target 0.8\n> - wal_buffers 256MB\n> - wal_sync_method open_datasync\n> - shared_buffers 10GB\n> - max_connections 600\n> - effective_io_concurrency 10\n> \n> ..and finally pgbench parameters\n> \n> - scale 2000\n> - clients 32, 64, 128, 256 (best results at 32 and 64 generally)\n> - threads = 1/2 client number\n> - prepared option\n> - 10 minute test run time\n> \n> Points to note, we did *not* disable fsync or prevent buffers being actually written (common dirty tricks in benchmarks). However, as others have remarked - raw numbers mean little. Pgbench is very useful for testing how tuning configurations are helping (or not) for a particular hardware and software setup, but is less useful for answering the question \"how many TPS can postgres do\"...\n> \n> Regards\n> \n> Mark\n> \n> \n> \n> \n> \n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 12:26:15 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
},
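A minimal SQL sketch (not from the thread) of capturing the software half of this checklist next to any pgbench result; the parameter list below is only a starting point, and checkpoint_segments exists only up to 9.4:

    -- record the server version and the settings most often tuned in this thread
    SELECT version();

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'wal_buffers', 'checkpoint_segments',
                   'checkpoint_completion_target', 'wal_sync_method',
                   'effective_io_concurrency', 'max_connections');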
{
"msg_contents": "On 13/02/15 00:20, Gudmundsson Martin (mg) wrote:\n> Hi all!\n>\n>> - checkpoint_segments 1000\n>> - checkpoint_completion_target 0.9\n>> - wal_buffers 256MB\n>> - shared_buffers 31 gb\n>> - max_connections 500\n>\n> I see that some of you are using wal_buffers = 256MB.\n> I was under the impression that Postgres will not benefit from higher value than the segment size, i.e. 16MB. More than that will not do/help anything.\n>\n> What's the reasoning behind setting it to higher than 16MB? Do I have old information?\n>\n> Best regards, Martin\n>\n\nThere was some discussion a while ago in which 32MB and 8MB both \ndemonstrated better performance than 16MB (probably related to the fact \nthe the default wal file size is 16MB). We just experimented further \nwith bigger values, and saw some improvement.\n\nCheers\n\nMark\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Feb 2015 12:10:46 +1300",
"msg_from": "Mark Kirkwood <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Survey: Max TPS you've ever seen"
}
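A minimal sketch of repeating Mark's experiment, assuming PostgreSQL 9.4's ALTER SYSTEM and superuser access; wal_buffers only takes effect after a server restart:

    SHOW wal_buffers;
    ALTER SYSTEM SET wal_buffers = '32MB';  -- one of the values reported to beat 16MB
    -- restart the server, then re-run the identical pgbench workload and compare TPS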
] |
[
{
"msg_contents": "Hi all. Using PG-9.4.0 I'm seeing this trying to delete from an \n\"entity\"-master table: *# explain analyze delete from onp_crm_entity \nwhere entity_id IN (select tmp.delivery_id from temp_delete_delivery_id tmp);\n QUERY \nPLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on onp_crm_entity (cost=0.43..5673.40 rows=1770 width=12) (actual \ntime=7.370..7.370 rows=0 loops=1)\n -> Nested Loop (cost=0.43..5673.40 rows=1770 width=12) (actual \ntime=0.050..1.374 rows=108 loops=1)\n -> Seq Scan on temp_delete_delivery_id tmp (cost=0.00..27.70 \nrows=1770 width=14) (actual time=0.014..0.080 rows=108 loops=1)\n -> Index Scan using onp_crm_entity_pkey on onp_crm_entity \n(cost=0.43..3.18 rows=1 width=14) (actual time=0.010..0.011 rows=1 loops=108)\n Index Cond: (entity_id = tmp.delivery_id)\n Planning time: 0.314 ms\n Trigger for constraint onp_crm_activity_entity_id_fkey: time=4.141 calls=108 \n Trigger for constraint ... Trigger for constraint ... Trigger for constraint \n... I have lots of tables referencing onp_crm_entity(entity_id) so I expect \nthe poor performance of deleting from it is caused by all the triggers firing \nto check FKI-constraints. Are there any ways around this or do people simply \navoid having FKs in schemas like this? Thanks. -- Andreas Joseph Krogh CTO \n/ Partner - Visena AS Mobile: +47 909 56 963 [email protected] \n<mailto:[email protected]> www.visena.com <https://www.visena.com> \n<https://www.visena.com>",
"msg_date": "Mon, 9 Feb 2015 22:12:55 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Poor performance when deleting from entity-attribute-value type\n master-table"
},
{
"msg_contents": "From: [email protected] [mailto:[email protected]] On Behalf Of Andreas Joseph Krogh\r\nSent: Monday, February 09, 2015 4:13 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] Poor performance when deleting from entity-attribute-value type master-table\r\n\r\nHi all.\r\n\r\nUsing PG-9.4.0 I'm seeing this trying to delete from an \"entity\"-master table:\r\n\r\n*# explain analyze delete from onp_crm_entity where entity_id IN (select tmp.delivery_id from temp_delete_delivery_id tmp);\r\n QUERY PLAN\r\n---------------------------------------------------------------------------------------------------------------------------------------------------\r\n Delete on onp_crm_entity (cost=0.43..5673.40 rows=1770 width=12) (actual time=7.370..7.370 rows=0 loops=1)\r\n -> Nested Loop (cost=0.43..5673.40 rows=1770 width=12) (actual time=0.050..1.374 rows=108 loops=1)\r\n -> Seq Scan on temp_delete_delivery_id tmp (cost=0.00..27.70 rows=1770 width=14) (actual time=0.014..0.080 rows=108 loops=1)\r\n -> Index Scan using onp_crm_entity_pkey on onp_crm_entity (cost=0.43..3.18 rows=1 width=14) (actual time=0.010..0.011 rows=1 loops=108)\r\n Index Cond: (entity_id = tmp.delivery_id)\r\n Planning time: 0.314 ms\r\n Trigger for constraint onp_crm_activity_entity_id_fkey: time=4.141 calls=108\r\n Trigger for constraint ...\r\n Trigger for constraint ...\r\n Trigger for constraint ...\r\n\r\n\r\nI have lots of tables referencing onp_crm_entity(entity_id) so I expect the poor performance of deleting from it is caused by all the triggers firing to check FKI-constraints.\r\n\r\n\r\nAndreas, do you have indexes on FK columns in child tables?\r\nIf not – there is your problem.\r\n\r\nRegards,\r\nIgor Neyman\r\n\r\n\n\n\n\n\n\n\n\n\n \n \nFrom: [email protected] [mailto:[email protected]]\r\nOn Behalf Of Andreas Joseph Krogh\nSent: Monday, February 09, 2015 4:13 PM\nTo: [email protected]\nSubject: [PERFORM] Poor performance when deleting from entity-attribute-value type master-table\n \n\n\nHi all.\n\n\n \n\n\nUsing PG-9.4.0 I'm seeing this trying to delete from an \"entity\"-master table:\n\n\n \n\n\n*# explain analyze delete from onp_crm_entity where entity_id IN (select tmp.delivery_id from temp_delete_delivery_id tmp);\r\n QUERY PLAN \r\n---------------------------------------------------------------------------------------------------------------------------------------------------\r\n Delete on onp_crm_entity (cost=0.43..5673.40 rows=1770 width=12) (actual time=7.370..7.370 rows=0 loops=1)\r\n -> Nested Loop (cost=0.43..5673.40 rows=1770 width=12) (actual time=0.050..1.374 rows=108 loops=1)\r\n -> Seq Scan on temp_delete_delivery_id tmp (cost=0.00..27.70 rows=1770 width=14) (actual time=0.014..0.080 rows=108 loops=1)\r\n -> Index Scan using onp_crm_entity_pkey on onp_crm_entity (cost=0.43..3.18 rows=1 width=14) (actual time=0.010..0.011 rows=1 loops=108)\r\n Index Cond: (entity_id = tmp.delivery_id)\r\n Planning time: 0.314 ms\r\n Trigger for constraint onp_crm_activity_entity_id_fkey: time=4.141 calls=108\n\n\n Trigger for constraint ...\n\n\n Trigger for constraint ...\n\n\n Trigger for constraint ...\n\n\n \n\n\n \n\n\nI have lots of tables referencing onp_crm_entity(entity_id) so I expect the poor performance of deleting from it is caused by all the triggers firing to check FKI-constraints.\n\n\n \n \nAndreas, do you have indexes on FK columns in child tables?\nIf not – there is your problem.\n \nRegards,\nIgor Neyman",
"msg_date": "Mon, 9 Feb 2015 21:36:55 +0000",
"msg_from": "Igor Neyman <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance when deleting from\n entity-attribute-value type master-table"
},
{
"msg_contents": "Andreas Joseph Krogh <[email protected]> writes:\n\n> Hi all.\n> \n> Using PG-9.4.0 I'm seeing this trying to delete from an \"entity\"-master table:\n> \n> *# explain analyze delete from onp_crm_entity where entity_id IN (select tmp.delivery_id from temp_delete_delivery_id tmp);\n> QUERY PLAN \n> ---------------------------------------------------------------------------------------------------------------------------------------------------\n> Delete on onp_crm_entity (cost=0.43..5673.40 rows=1770 width=12) (actual time=7.370..7.370 rows=0 loops=1)\n> -> Nested Loop (cost=0.43..5673.40 rows=1770 width=12) (actual time=0.050..1.374 rows=108 loops=1)\n> -> Seq Scan on temp_delete_delivery_id tmp (cost=0.00..27.70 rows=1770 width=14) (actual time=0.014..0.080 rows=108 loops=1)\n> -> Index Scan using onp_crm_entity_pkey on onp_crm_entity (cost=0.43..3.18 rows=1 width=14) (actual time=0.010..0.011 rows=1 loops=108)\n> Index Cond: (entity_id = tmp.delivery_id)\n> Planning time: 0.314 ms\n> Trigger for constraint onp_crm_activity_entity_id_fkey: time=4.141 calls=108\n> Trigger for constraint ...\n> Trigger for constraint ...\n> Trigger for constraint ...\n> \n> \n> I have lots of tables referencing onp_crm_entity(entity_id) so I expect the poor performance of deleting from it is caused by all the triggers firing to check\n> FKI-constraints.\n> \n> Are there any ways around this or do people simply avoid having FKs in schemas like this?\n\nThe classic problem is that one/more of your referring tables is\nnon-trivial in size and you are missing an index on the referring column(s).\n\nInsure that this condition does not exist before butchering your design :-)\n\n\n> Thanks.\n> \n> --\n> Andreas Joseph Krogh\n> CTO / Partner - Visena AS\n> Mobile: +47 909 56 963\n> [email protected]\n> www.visena.com\n> [cid]\n>\n\n-- \nJerry Sievers\nPostgres DBA/Development Consulting\ne: [email protected]\np: 312.241.7800\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 09 Feb 2015 15:42:28 -0600",
"msg_from": "Jerry Sievers <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance when deleting from entity-attribute-value type\n master-table"
},
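A minimal sketch of the check Igor and Jerry are describing, assuming onp_crm_activity.entity_id is the referencing column behind the constraint shown in the plan (the column name is inferred from the constraint name, not stated in the thread):

    -- without an index here, every row deleted from onp_crm_entity
    -- scans onp_crm_activity to enforce the FK
    CREATE INDEX ON onp_crm_activity (entity_id);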
{
"msg_contents": "På mandag 09. februar 2015 kl. 22:36:55, skrev Igor Neyman <\[email protected] <mailto:[email protected]>>: \n\n \n\nFrom: [email protected] \n[mailto:[email protected]]On Behalf Of Andreas Joseph Krogh\nSent: Monday, February 09, 2015 4:13 PM\nTo: [email protected]\nSubject: [PERFORM] Poor performance when deleting from entity-attribute-value \ntype master-table\n\n \n\nHi all.\n\n \n\nUsing PG-9.4.0 I'm seeing this trying to delete from an \"entity\"-master table:\n\n \n\n*# explain analyze delete from onp_crm_entity where entity_id IN (select \ntmp.delivery_id from temp_delete_delivery_id tmp);\n QUERY \nPLAN \n \n---------------------------------------------------------------------------------------------------------------------------------------------------\n Delete on onp_crm_entity (cost=0.43..5673.40 rows=1770 width=12) (actual \ntime=7.370..7.370 rows=0 loops=1)\n -> Nested Loop (cost=0.43..5673.40 rows=1770 width=12) (actual \ntime=0.050..1.374 rows=108 loops=1)\n -> Seq Scan on temp_delete_delivery_id tmp (cost=0.00..27.70 \nrows=1770 width=14) (actual time=0.014..0.080 rows=108 loops=1)\n -> Index Scan using onp_crm_entity_pkey on onp_crm_entity \n(cost=0.43..3.18 rows=1 width=14) (actual time=0.010..0.011 rows=1 loops=108)\n Index Cond: (entity_id = tmp.delivery_id)\n Planning time: 0.314 ms\n Trigger for constraint onp_crm_activity_entity_id_fkey: time=4.141 calls=108\n\n Trigger for constraint ...\n\n Trigger for constraint ...\n\n Trigger for constraint ...\n\n \n\n \n\nI have lots of tables referencing onp_crm_entity(entity_id) so I expect the \npoor performance of deleting from it is caused by all the triggers firing to \ncheck FKI-constraints.\n\n \n\n \n\nAndreas, do you have indexes on FK columns in child tables?\n\nIf not – there is your problem.\n\n Yes, they have indexes, but deleting 1M rows still results in calling the \ntriggers 1M times * number of FKs... -- Andreas Joseph Krogh CTO / Partner - \nVisena AS Mobile: +47 909 56 963 [email protected] <mailto:[email protected]> \nwww.visena.com <https://www.visena.com> <https://www.visena.com>",
"msg_date": "Mon, 9 Feb 2015 22:50:31 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Poor performance when deleting from\n entity-attribute-value type master-table"
},
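One possible workaround for the bulk case (not suggested in the thread; only a sketch): drop the FK before the mass delete and re-add it as NOT VALID, validating afterwards under a weaker lock. This assumes PostgreSQL 9.4 behaviour, that the hypothetical onp_crm_activity.entity_id column is the referencing one, and that no orphaned child rows remain afterwards, otherwise VALIDATE will fail:

    BEGIN;
    ALTER TABLE onp_crm_activity DROP CONSTRAINT onp_crm_activity_entity_id_fkey;
    -- the per-row FK triggers no longer fire during the bulk delete
    DELETE FROM onp_crm_entity
    WHERE entity_id IN (SELECT delivery_id FROM temp_delete_delivery_id);
    ALTER TABLE onp_crm_activity
      ADD CONSTRAINT onp_crm_activity_entity_id_fkey
      FOREIGN KEY (entity_id) REFERENCES onp_crm_entity (entity_id) NOT VALID;
    COMMIT;
    -- re-checks existing rows without blocking writes for the whole scan (9.4+)
    ALTER TABLE onp_crm_activity VALIDATE CONSTRAINT onp_crm_activity_entity_id_fkey;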
{
"msg_contents": "On 02/09/2015 01:12 PM, Andreas Joseph Krogh wrote:\n> Are there any ways around this or do people simply avoid having FKs in\n> schemas like this?\n\nDon't use EAV. It's a bad design pattern, especially for you, and\nyou've just discovered one of the reasons why.\n\n(In fact, I am just today dismantling an EAV database and normalizing\nit, and so far application throughput is up 500%)\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 09 Feb 2015 14:04:38 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Poor performance when deleting from entity-attribute-value\n type master-table"
}
] |
[
{
"msg_contents": "am connecting three tables in query. one table have 73000 records\n\nanother two tables have 138000 records.\n\nbut its take 12 sec for show 12402 rows in tables\n\nTables Structure:\n\nItems Table\n\nCREATE TABLE \"C_SAM_Master\".items\n(\n itemno integer NOT NULL,\n itemname character varying(250) NOT NULL,\n itemcode character varying(250) NOT NULL,\n shortname character varying(20) NOT NULL,\n aliasname character varying(250) NOT NULL,\n aliasnamelanguage character varying(250) NOT NULL,\n masteritemno integer NOT NULL,\n groupno1 smallint NOT NULL,\n groupno2 smallint NOT NULL,\n groupno3 smallint NOT NULL,\n commodityno smallint NOT NULL,\n unitno smallint NOT NULL,\n weighttype character(1) NOT NULL,\n altunitno smallint NOT NULL,\n weight double precision NOT NULL,\n reqmrp character(1) NOT NULL,\n reqbatch character(1) NOT NULL,\n reqmfrdate character(1) NOT NULL,\n mfrdateformat character varying(20) NOT NULL,\n reqexpdate character(1) NOT NULL,\n expdateformat character varying(20) NOT NULL,\n expdays1 smallint NOT NULL,\n expdays2 character(1) NOT NULL,\n expinfodays smallint NOT NULL,\n stdsaleratemethod smallint NOT NULL,\n salesrateper smallint NOT NULL,\n stdprofit1 double precision NOT NULL,\n stdprofit2 character(1) NOT NULL,\n includestockrep character(1) NOT NULL,\n minstock double precision NOT NULL,\n minstockunit smallint NOT NULL,\n minsaleqtynos double precision NOT NULL,\n minsaleqtyunit smallint NOT NULL,\n minsaleqty double precision NOT NULL,\n description text NOT NULL,\n remarks character varying(250) NOT NULL,\n actpurchaseorder character(1) NOT NULL,\n actpurchase character(1) NOT NULL,\n actpurchasereturn character(1) NOT NULL,\n actsalesorder character(1) NOT NULL,\n actsales character(1) NOT NULL,\n actsalesreturn character(1) NOT NULL,\n actreceiptnote character(1) NOT NULL,\n actdeliverynote character(1) NOT NULL,\n actconsumption character(1) NOT NULL,\n actproduction character(1) NOT NULL,\n actestimate character(1) NOT NULL,\n notifypurchaseorder character varying(250) NOT NULL,\n notifypurchase character varying(250) NOT NULL,\n notifypurchasereturn character varying(250) NOT NULL,\n notifysalesorder character varying(250) NOT NULL,\n notifysales character varying(250) NOT NULL,\n notifysalesreturn character varying(250) NOT NULL,\n notifyreceiptnote character varying(250) NOT NULL,\n notifydeliverynote character varying(250) NOT NULL,\n notifyconsumption character varying(250) NOT NULL,\n notifyproduction character varying(250) NOT NULL,\n notifyestimate character varying(250) NOT NULL,\n act boolean NOT NULL,\n recordowner smallint NOT NULL,\n lastmodified smallint NOT NULL,\n crdate timestamp without time zone NOT NULL,\n stdmaxprofit double precision NOT NULL,\n commodityname character varying(100) NOT NULL,\n lst double precision NOT NULL,\n unittype character(1) NOT NULL,\n unit1 character varying(15) NOT NULL,\n unit2 character varying(15) NOT NULL,\n units integer NOT NULL,\n unitname character varying(50) NOT NULL,\n decimals smallint NOT NULL,\n groupname1 character varying(50) NOT NULL,\n groupname2 character varying(50) NOT NULL,\n groupname3 character varying(50) NOT NULL,\n repgroupname character varying(160) NOT NULL,\n masteritemname character varying(100) NOT NULL,\n altunit1 character varying(15) NOT NULL,\n altunit2 character varying(15) NOT NULL,\n altunits integer NOT NULL,\n altunitname character varying(50) NOT NULL,\n altunitdecimals smallint NOT NULL,\n CONSTRAINT items_itemno_pk PRIMARY KEY (itemno),\n 
CONSTRAINT items_altunitno_fk FOREIGN KEY (altunitno)\n REFERENCES \"C_SAM_Master\".measureunits (unitno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_commodityno_fk FOREIGN KEY (commodityno)\n REFERENCES \"C_SAM_Master\".commodity (commodityno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno1_fk FOREIGN KEY (groupno1)\n REFERENCES \"C_SAM_Master\".itemgroup1 (groupno1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno2_fk FOREIGN KEY (groupno2)\n REFERENCES \"C_SAM_Master\".itemgroup2 (groupno2) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno3_fk FOREIGN KEY (groupno3)\n REFERENCES \"C_SAM_Master\".itemgroup3 (groupno3) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_lastmodified_fk FOREIGN KEY (lastmodified)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_masteritemno_fk FOREIGN KEY (masteritemno)\n REFERENCES \"C_SAM_Master\".masteritems (masteritemno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_recordowner_fk FOREIGN KEY (recordowner)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_unitno_fk FOREIGN KEY (unitno)\n REFERENCES \"C_SAM_Master\".measureunits (unitno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_actconsumption_ck CHECK (actconsumption::text <>\n''::text),\n CONSTRAINT items_actdeliverynote_ck CHECK (actdeliverynote::text <>\n''::text),\n CONSTRAINT items_actestimate_ck CHECK (actestimate::text <> ''::text),\n CONSTRAINT items_actproduction_ck CHECK (actproduction::text <>\n''::text),\n CONSTRAINT items_actpurchase_ck CHECK (actpurchase::text <> ''::text),\n CONSTRAINT items_actpurchaseorder_ck CHECK (actpurchaseorder::text <>\n''::text),\n CONSTRAINT items_actpurchasereturn_ck CHECK (actpurchasereturn::text <>\n''::text),\n CONSTRAINT items_actreceiptnote_ck CHECK (actreceiptnote::text <>\n''::text),\n CONSTRAINT items_actsales_ck CHECK (actsales::text <> ''::text),\n CONSTRAINT items_actsalesorder_ck CHECK (actsalesorder::text <>\n''::text),\n CONSTRAINT items_actsalesreturn_ck CHECK (actsalesreturn::text <>\n''::text),\n CONSTRAINT items_aliasname_ck CHECK (aliasname::text <> ''::text),\n CONSTRAINT items_altunitdecimals_ck CHECK (altunitdecimals >= 0 AND\naltunitdecimals <= 3),\n CONSTRAINT items_altunits_ck CHECK (altunits >= 0),\n CONSTRAINT items_commodityname_ck CHECK (commodityname::text <>\n''::text),\n CONSTRAINT items_decimals_ck CHECK (decimals >= 0 AND decimals <= 3),\n CONSTRAINT items_expdays1_ck CHECK (expdays1 >= 0),\n CONSTRAINT items_expinfodays_ck CHECK (expinfodays >= 0),\n CONSTRAINT items_includestockrep_ck CHECK (includestockrep::text <>\n''::text),\n CONSTRAINT items_itemcode_ck CHECK (itemcode::text <> ''::text),\n CONSTRAINT items_itemname_ck CHECK (itemname::text <> ''::text),\n CONSTRAINT items_itemno_ck CHECK (itemno > 0),\n CONSTRAINT items_lst_ck CHECK (lst >= 0::double precision),\n CONSTRAINT items_minsaleqty_ck CHECK (minsaleqty >= 0::double precision),\n CONSTRAINT items_minsaleqtynos_ck CHECK (minsaleqtynos >= 0::double\nprecision),\n CONSTRAINT items_minsaleqtyunit_ck CHECK (minsaleqtyunit >= 0 AND\nminsaleqtyunit <= 2),\n CONSTRAINT items_minstock_ck CHECK (minstock >= 0::double precision),\n CONSTRAINT items_minstockunit_ck CHECK (minstockunit >= 0 AND minstockunit\n<= 2),\n CONSTRAINT 
items_reqbatch_ck CHECK (reqbatch::text <> ''::text),\n CONSTRAINT items_reqexpdate_ck CHECK (reqexpdate::text <> ''::text),\n CONSTRAINT items_reqmfrdate_ck CHECK (reqmfrdate::text <> ''::text),\n CONSTRAINT items_reqmrp_ck CHECK (reqmrp::text <> ''::text),\n CONSTRAINT items_salesrateper_ck CHECK (salesrateper >= 0 AND salesrateper\n<= 4),\n CONSTRAINT items_stdsaleratemethod_ck CHECK (stdsaleratemethod >= 0 AND\nstdsaleratemethod <= 2),\n CONSTRAINT items_units_ck CHECK (units >= 0),\n CONSTRAINT items_unittype_ck CHECK (unittype::text <> ''::text),\n CONSTRAINT items_weight_ck CHECK (weight >= 0::double precision),\n CONSTRAINT items_weighttype_ck CHECK (weighttype::text <> ''::text)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_SAM\";\nALTER TABLE \"C_SAM_Master\".items\n OWNER TO gpro2user;\n\n-- Index: \"C_SAM_Master\".items_itemname_uq\n\n-- DROP INDEX \"C_SAM_Master\".items_itemname_uq;\n\nCREATE UNIQUE INDEX items_itemname_uq\n ON \"C_SAM_Master\".items\n USING btree\n (lower(itemname::text) COLLATE pg_catalog.\"default\");\n\n\n-- Rule: rule_del_items ON \"C_SAM_Master\".items\n\n-- DROP RULE rule_del_items ON \"C_SAM_Master\".items;\n\nCREATE OR REPLACE RULE rule_del_items AS\n ON DELETE TO \"C_SAM_Master\".items DO ( DELETE FROM\n\"C_SAM_Master\".itembarcode\n WHERE itembarcode.itemno = old.itemno;\n DELETE FROM \"C_SAM_Master\".pricelist\n WHERE pricelist.itemno = old.itemno;\n DELETE FROM \"C_SAM_Master\".pricelistreview\n WHERE pricelistreview.itemno = old.itemno;\n);\n\n-- Rule: rule_del_items_c_sam_2014_2015 ON \"C_SAM_Master\".items\n\n-- DROP RULE rule_del_items_c_sam_2014_2015 ON \"C_SAM_Master\".items;\n\nCREATE OR REPLACE RULE rule_del_items_c_sam_2014_2015 AS\n ON DELETE TO \"C_SAM_Master\".items DO ( DELETE FROM\n\"C_SAM_2014-2015\".openingstock\n WHERE openingstock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".stock\n WHERE stock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".packingsetup\n WHERE packingsetup.primeitemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".packingsetup\n WHERE packingsetup.packingitemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".itemsuppliers\n WHERE itemsuppliers.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".partyopeningstock\n WHERE partyopeningstock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".partystock\n WHERE partystock.itemno = old.itemno;\n);\n\nSales 1 Table\n\nCREATE TABLE \"C_KA_2014-2015\".sales1\n(\n vtno smallint NOT NULL,\n prefix character varying(5) NOT NULL,\n idno integer NOT NULL,\n suffix character varying(5) NOT NULL,\n txno character varying(20) NOT NULL,\n txdate timestamp without time zone NOT NULL,\n dracno integer NOT NULL,\n partyname character varying(100) NOT NULL,\n address1 character varying(100) NOT NULL,\n address2 character varying(100) NOT NULL,\n city character varying(50) NOT NULL,\n partytin character varying(30) NOT NULL,\n partycstno character varying(30) NOT NULL,\n mobileno character varying(15) NOT NULL,\n ponos character varying NOT NULL,\n pricelevelno smallint NOT NULL,\n invno character varying(20) NOT NULL,\n duedays smallint NOT NULL,\n duedate timestamp without time zone NOT NULL,\n paymentmode character varying(10) NOT NULL,\n bankrefno character varying(30) NOT NULL,\n bankrefdate character varying(10) NOT NULL,\n bankfavourname character varying(100) NOT NULL,\n bankcrossref character(1) NOT NULL,\n bankremarks character varying(100) NOT NULL,\n bankdate character varying(10) NOT NULL,\n bankstatus character(1) NOT NULL,\n bankreconcildate 
character varying(10) NOT NULL,\n stockpointno smallint NOT NULL,\n nettotal double precision NOT NULL,\n grosswt integer NOT NULL,\n tarewt integer NOT NULL,\n actualwt double precision NOT NULL,\n againstform character varying(15) NOT NULL,\n formseriesno character varying(15) NOT NULL,\n formno character varying(15) NOT NULL,\n formdate character varying(10) NOT NULL,\n totalqty double precision NOT NULL,\n totalqtyunit character varying(15) NOT NULL,\n totalfreeqty double precision NOT NULL,\n totalfreeqtyunit character varying(15) NOT NULL,\n totalaltqty double precision NOT NULL,\n totalaltqtyunit character varying(15) NOT NULL,\n orderby smallint NOT NULL,\n collectionby smallint NOT NULL,\n deliveredby1 character varying(30) NOT NULL,\n deliveredby2 character varying(50) NOT NULL,\n deliveredrefno character varying(30) NOT NULL,\n deliveredrefdate character varying(10) NOT NULL,\n goodsdelivered character(1) NOT NULL,\n deliveredto1 character varying(50) NOT NULL,\n deliveredto2 character varying(50) NOT NULL,\n cashrcvd double precision NOT NULL,\n remarks character varying(250) NOT NULL,\n totalstockvalue double precision NOT NULL,\n profit1 double precision NOT NULL,\n act boolean NOT NULL,\n totalassesvalue double precision NOT NULL,\n totaltax double precision NOT NULL,\n recordowner smallint NOT NULL,\n lastmodified smallint NOT NULL,\n crdate timestamp without time zone NOT NULL,\n lessadv double precision NOT NULL,\n lessadvpartyacno integer NOT NULL,\n rateadj double precision NOT NULL,\n jobcardtxno character varying(40) NOT NULL,\n txtime character varying(8) NOT NULL,\n CONSTRAINT sales1_txno_pk PRIMARY KEY (txno),\n CONSTRAINT sales1_collectionby_fk FOREIGN KEY (collectionby)\n REFERENCES \"G_KUMARANGROUPS_Master\".employee (empno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_dracno_fk FOREIGN KEY (dracno)\n REFERENCES \"C_KA_AcMaster\".acledger (acno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_lastmodified_fk FOREIGN KEY (lastmodified)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_orderby_fk FOREIGN KEY (orderby)\n REFERENCES \"G_KUMARANGROUPS_Master\".employee (empno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_pricelevelno_fk FOREIGN KEY (pricelevelno)\n REFERENCES \"C_KA_AcMaster\".acpricelevel (pricelevelno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_recordowner_fk FOREIGN KEY (recordowner)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_vto_fk FOREIGN KEY (vtno)\n REFERENCES \"C_KA_2014-2015\".acvouchertype (vtno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_vtnoprefixidnosuffix_uq UNIQUE (vtno, prefix, idno,\nsuffix),\n CONSTRAINT sales1_duedays_ck CHECK (duedays >= 0),\n CONSTRAINT sales1_idno_ck CHECK (idno > 0),\n CONSTRAINT sales1_lessadv_ck CHECK (lessadv >= 0::double precision),\n CONSTRAINT sales1_lessadvpartyacno_ck CHECK (lessadvpartyacno >= 0),\n CONSTRAINT sales1_partyname_ck CHECK (partyname::text <> ''::text),\n CONSTRAINT sales1_paymentmode_ck CHECK (paymentmode::text <> ''::text),\n CONSTRAINT sales1_stockpointno_ck CHECK (stockpointno >= 0)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_KA\";\nALTER TABLE \"C_KA_2014-2015\".sales1\n OWNER TO gpro2user;\n\n-- Index: \"C_KA_2014-2015\".sales1_acno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales1_acno;\n\nCREATE INDEX 
sales1_acno\n ON \"C_KA_2014-2015\".sales1\n USING btree\n (dracno);\n\n-- Index: \"C_KA_2014-2015\".sales1_txdate\n\n-- DROP INDEX \"C_KA_2014-2015\".sales1_txdate;\n\nCREATE INDEX sales1_txdate\n ON \"C_KA_2014-2015\".sales1\n USING btree\n (txdate);\n\n\n-- Rule: rule_del_sales ON \"C_KA_2014-2015\".sales1\n\n-- DROP RULE rule_del_sales ON \"C_KA_2014-2015\".sales1;\n\nCREATE OR REPLACE RULE rule_del_sales AS\n ON DELETE TO \"C_KA_2014-2015\".sales1 DO ( DELETE FROM\n\"C_KA_2014-2015\".packingitemsautopost\n WHERE packingitemsautopost.transtype::text = 'Sales'::text AND\npackingitemsautopost.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales6\n WHERE sales6.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales5\n WHERE sales5.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales4\n WHERE sales4.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales3\n WHERE sales3.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales2\n WHERE sales2.txno::text = old.txno::text;\n);\n\n\n-- Trigger: trg_sales1 on \"C_KA_2014-2015\".sales1\n\n-- DROP TRIGGER trg_sales1 ON \"C_KA_2014-2015\".sales1;\n\nCREATE TRIGGER trg_sales1\n AFTER UPDATE OF act\n ON \"C_KA_2014-2015\".sales1\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales('sales1');\n\n-- Trigger: trg_sales1acpost on \"C_KA_2014-2015\".sales1\n\n-- DROP TRIGGER trg_sales1acpost ON \"C_KA_2014-2015\".sales1;\n\nCREATE TRIGGER trg_sales1acpost\n AFTER INSERT OR UPDATE OF txdate OR DELETE\n ON \"C_KA_2014-2015\".sales1\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales1acpost();\n\nSales 2 Table\n\n\nCREATE TABLE \"C_KA_2014-2015\".sales2\n(\n txno character varying(20) NOT NULL,\n slno smallint NOT NULL,\n itemno integer NOT NULL,\n rowkey smallint NOT NULL,\n mrp double precision NOT NULL,\n batchno character varying(20) NOT NULL,\n expdate character varying(10) NOT NULL,\n qty1 double precision NOT NULL,\n qty2 double precision NOT NULL,\n freeqty1 double precision NOT NULL,\n freeqty2 double precision NOT NULL,\n altqty1 double precision NOT NULL,\n altqty2 double precision NOT NULL,\n rate double precision NOT NULL,\n rateper smallint NOT NULL,\n basedvalue double precision NOT NULL,\n tradedis1 double precision NOT NULL,\n tradedis2 double precision NOT NULL,\n totaltradis double precision NOT NULL,\n adnldis1 double precision NOT NULL,\n adnldis2 double precision NOT NULL,\n totaladnldis double precision NOT NULL,\n adnlcostbeforevat double precision NOT NULL,\n assesvalue double precision NOT NULL,\n cst1 double precision NOT NULL,\n cst2 double precision NOT NULL,\n lst1 double precision NOT NULL,\n lst2 double precision NOT NULL,\n amount double precision NOT NULL,\n itemdescription text NOT NULL,\n adnlcostafterevat double precision NOT NULL,\n nsr double precision NOT NULL,\n totalqty double precision NOT NULL,\n totalfreeqty double precision NOT NULL,\n totalaltqty double precision NOT NULL,\n primaryacno integer NOT NULL,\n taxacno integer NOT NULL,\n itemstockvalue double precision NOT NULL,\n itemprofit1 double precision NOT NULL,\n cliamscheme character(1) NOT NULL,\n netrate double precision NOT NULL,\n pricelistrate double precision NOT NULL,\n CONSTRAINT sales2_txnoslno_pk PRIMARY KEY (txno, slno),\n CONSTRAINT sales2_itemno_fk FOREIGN KEY (itemno)\n REFERENCES \"G_KUMARANGROUPS_Master\".items (itemno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales2_txno_fk FOREIGN KEY (txno)\n REFERENCES \"C_KA_2014-2015\".sales1 (txno) MATCH SIMPLE\n ON 
UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales2_rowkey_uq UNIQUE (rowkey, txno),\n CONSTRAINT sales2_cst1_ck CHECK (cst1 >= 0::double precision),\n CONSTRAINT sales2_lst1_ck CHECK (lst1 >= 0::double precision),\n CONSTRAINT sales2_mrp_ck CHECK (mrp >= 0::double precision),\n CONSTRAINT sales2_netrate_ck CHECK (netrate >= 0::double precision),\n CONSTRAINT sales2_nsr_ck CHECK (nsr >= 0::double precision),\n CONSTRAINT sales2_pricelistrate_ck CHECK (pricelistrate >= 0::double\nprecision),\n CONSTRAINT sales2_primaryacno_ck CHECK (primaryacno >= 0),\n CONSTRAINT sales2_rate_ck CHECK (rate >= 0::double precision),\n CONSTRAINT sales2_rateper_ck CHECK (rateper >= 0 AND rateper <= 4),\n CONSTRAINT sales2_rowkey_ck CHECK (rowkey > 0),\n CONSTRAINT sales2_slno_ck CHECK (slno > 0),\n CONSTRAINT sales2_taxacno_ck CHECK (taxacno >= 0),\n CONSTRAINT sales2_totalfreeqty_ck CHECK ((totalqty + totalfreeqty) <>\n0::double precision),\n CONSTRAINT sales2_totalqty_ck CHECK ((totalqty + totalfreeqty) <>\n0::double precision)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_KA\";\nALTER TABLE \"C_KA_2014-2015\".sales2\n OWNER TO gpro2user;\n\n-- Index: \"C_KA_2014-2015\".sales2_itemno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales2_itemno;\n\nCREATE INDEX sales2_itemno\n ON \"C_KA_2014-2015\".sales2\n USING btree\n (itemno);\n\n-- Index: \"C_KA_2014-2015\".sales2_txno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales2_txno;\n\nCREATE INDEX sales2_txno\n ON \"C_KA_2014-2015\".sales2\n USING btree\n (txno COLLATE pg_catalog.\"default\");\n\n\n-- Trigger: trg_sales2 on \"C_KA_2014-2015\".sales2\n\n-- DROP TRIGGER trg_sales2 ON \"C_KA_2014-2015\".sales2;\n\nCREATE TRIGGER trg_sales2\n AFTER INSERT OR DELETE\n ON \"C_KA_2014-2015\".sales2\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales2();\n\nQuery:\n\n select grp,disp,alisdisp,ord,'' as adnlorder,'' as calcorder,sum(case when\nord =3 then qty end) as qty,sum(case when ord=3 then freeqty end) as\nfreeqty,max(case when ord=3 then unit1 end) as unit1,sum(altqty) as\naltqty,max(altunit1) as altunit1,sum(discount) as discount,sum(amount) as\namount,sum(itemprofit) as itemprofit,0.00 as profitper,sum(itemstockvalue)\nas itemstockvalue from (select\nunnest(array[repgroupname,repgroupname||'-'||masteritemname,repgroupname||'-'||masteritemname||'-'||itemname])\nas grp,unnest(array[case when repgroupname ='' then 'UnGrouped' else\nrepgroupname end,masteritemname,itemname]) as\ndisp,unnest(array['','',aliasnamelanguage]) as alisdisp,unnest(array[1,2,3])\nas ord,cast(case when units > 1 then cast(case when sum(qty) > 0 then\nfloor(sum(qty)/units) else ceil(sum(qty)/units) end as text) when\n(mod(cast(sum(qty) as integer),units))<>0 then '.' ||\nabs(cast(mod(cast(sum(qty) as integer),units) as integer)) else\ncast(sum(qty) as text) end as double precision) as qty,cast(case when units\n> 1 then cast(case when sum(freeqty) > 0 then floor(sum(freeqty)/units) else\nceil(sum(freeqty)/units) end as text) when (mod(cast(sum(freeqty) as\ninteger),units))<>0 then '.' || abs(cast(mod(cast(sum(freeqty) as\ninteger),units) as integer)) else cast(sum(freeqty) as text) end as double\nprecision) as freeqty,unit1,cast(case when altunits > 1 then cast(case when\nsum(altqty) > 0 then floor(sum(altqty)/altunits) else\nceil(sum(altqty)/altunits) end as text) when (mod(cast(sum(altqty) as\ninteger),altunits))<>0 then '.' 
|| abs(cast(mod(cast(sum(altqty) as\ninteger),altunits) as integer)) else cast(sum(altqty) as text) end as double\nprecision) as altqty,altunit1,sum(discount) as discount,sum(itemstockvalue)\nas itemstockvalue,sum(itemprofit) as itemprofit,sum(amount) as\namount,shortname from (select\ni.repgroupname,i.aliasnamelanguage,i.masteritemname,i.itemname,i.groupname1,i.groupname2,i.groupname3,i.units,i.unit1,i.unit2,i.altunit1,i.altunit2,i.altunits,sum(s2.totalqty)\nas qty,sum(s2.totalfreeqty) as freeqty,sum(s2.totalaltqty) as\naltqty,sum(s2.totaltradis + totaladnldis) as discount,sum(itemstockvalue) as\nitemstockvalue,sum(itemprofit1) as itemprofit,sum(s2.amount) as\namount,'KA'::text as shortname from \"C_KA_2014-2015\".sales1 s1 inner join\n\"C_KA_2014-2015\".sales2 s2 on s1.txno=s2.txno inner join\n\"G_KUMARANGROUPS_Master\".items i on i.itemno=s2.itemno where s1.act='t' and\ns1.txdate >= '01/04/2014' and s1.txdate <= '30/01/2015'group by\ni.repgroupname,i.aliasnamelanguage,i.groupname1,i.groupname2,i.groupname3,i.units,i.unit1,i.unit2,i.altunit1,i.altunit2,i.altunits,i.itemname,i.masteritemname\n ) as tt group by\ngrp,disp,alisdisp,units,altunits,ord,shortname,unit1,altunit1 order by\ngrp,disp ) as tab where disp <> ''\ngroup by grp,disp,alisdisp,ord order by grp,disp\n\n\nExplain Analysis and Buffers\n\n\"GroupAggregate (cost=3586024.69..3617755.12 rows=72944 width=160) (actual\ntime=11819.837..11884.868 rows=12064 loops=1)\"\n\" Buffers: shared hit=4462 read=9825, temp read=6381 written=6361\"\n\" -> Sort (cost=3586024.69..3587848.28 rows=729435 width=160) (actual\ntime=11819.780..11831.894 rows=12068 loops=1)\"\n\" Sort Key: tab.grp, tab.disp, tab.alisdisp, tab.ord\"\n\" Sort Method: external sort Disk: 1336kB\"\n\" Buffers: shared hit=4462 read=9825, temp read=6381 written=6361\"\n\" -> Subquery Scan on tab (cost=2742202.68..3342958.78 rows=729435\nwidth=160) (actual time=11424.007..11727.170 rows=12068 loops=1)\"\n\" Filter: ((tab.disp)::text <> ''::text)\"\n\" Rows Removed by Filter: 7\"\n\" Buffers: shared hit=4462 read=9825, temp read=6214\nwritten=6194\"\n\" -> GroupAggregate (cost=2742202.68..3333795.03 rows=733100\nwidth=115) (actual time=11424.001..11703.904 rows=12075 loops=1)\"\n\" Buffers: shared hit=4462 read=9825, temp read=6214\nwritten=6194\"\n\" -> Sort (cost=2742202.68..2760528.43 rows=7330300\nwidth=115) (actual time=11423.951..11543.478 rows=36183 loops=1)\"\n\" Sort Key: (unnest(ARRAY[tt.repgroupname,\n((((tt.repgroupname)::text || '-'::text) ||\n(tt.masteritemname)::text))::character varying, ((((((tt.repgroupname)::text\n|| '-'::text) || (tt.masteritemname)::text) || '-'::text) || (tt.ite (...)\"\n\" Sort Method: external merge Disk: 3552kB\"\n\" Buffers: shared hit=4462 read=9825, temp\nread=6214 written=6194\"\n\" -> Subquery Scan on tt\n(cost=56047.61..102407.06 rows=7330300 width=115) (actual\ntime=8877.785..11023.746 rows=36183 loops=1)\"\n\" Buffers: shared hit=4462 read=9825, temp\nread=5768 written=5748\"\n\" -> GroupAggregate\n(cost=56047.61..63373.22 rows=73303 width=96) (actual\ntime=8877.762..10906.503 rows=12061 loops=1)\"\n\" Buffers: shared hit=4462 read=9825,\ntemp read=5768 written=5748\"\n\" -> Sort (cost=56047.61..56347.27\nrows=119865 width=96) (actual time=8877.576..10555.267 rows=119714\nloops=1)\"\n\" Sort Key: i.repgroupname,\ni.aliasnamelanguage, i.groupname1, i.groupname2, i.groupname3, i.units,\ni.unit1, i.unit2, i.altunit1, i.altunit2, i.altunits, i.itemname,\ni.masteritemname\"\n\" Sort Method: external merge\nDisk: 12432kB\"\n\" 
Buffers: shared hit=4462\nread=9825, temp read=5768 written=5748\"\n\"                              ->  Hash Join\n(cost=13948.80..33644.37 rows=119865 width=96) (actual\ntime=617.917..1756.039 rows=119714 loops=1)\"\n\"                                    Hash Cond: (s2.itemno =\ni.itemno)\"\n\"                                    Buffers: shared hit=4462\nread=9825, temp read=3098 written=3078\"\n\"                                    ->  Hash Join\n(cost=8849.48..23064.41 rows=119865 width=68) (actual time=339.948..1054.380\nrows=119714 loops=1)\"\n\"                                          Hash Cond:\n((s2.txno)::text = (s1.txno)::text)\"\n\"                                          Buffers: shared\nhit=1585 read=9825, temp read=1539 written=1533\"\n\"                                          ->  Seq Scan on\nsales2 s2 (cost=0.00..7490.64 rows=144964 width=76) (actual\ntime=0.023..262.043 rows=144964 loops=1)\"\n\"                                                Buffers:\nshared hit=814 read=5227\"\n\"                                          ->  Hash\n(cost=7196.35..7196.35 rows=100731 width=8) (actual time=339.873..339.873\nrows=100850 loops=1)\"\n\"                                                Buckets: 4096\n Batches: 4 Memory Usage: 803kB\"\n\"                                                Buffers:\nshared hit=771 read=4598, temp written=257\"\n\"                                                ->  Seq Scan\non sales1 s1 (cost=0.00..7196.35 rows=100731 width=8) (actual\ntime=0.029..230.250 rows=100850 loops=1)\"\n\"                                                      Filter:\n(act AND (txdate >= '2014-04-01 00:00:00'::timestamp without time zone) AND\n(txdate <= '2015-01-30 00:00:00'::timestamp without time zone))\"\n\"                                                      Rows\nRemoved by Filter: 20973\"\n\"\nBuffers: shared hit=771 read=4598\"\n\"                                    ->  Hash\n(cost=3610.03..3610.03 rows=73303 width=36) (actual time=277.327..277.327\nrows=73303 loops=1)\"\n\"                                          Buckets: 2048\nBatches: 8 Memory Usage: 593kB\"\n\"                                          Buffers: shared\nhit=2877, temp written=475\"\n\"                                          ->  Seq Scan on\nitems i (cost=0.00..3610.03 rows=73303 width=36) (actual\ntime=0.007..153.900 rows=73303 loops=1)\"\n\"                                                Buffers:\nshared hit=2877\"\n\"Total runtime: 11897.250 ms\"\n\nMy hardware is\n\nIntel Core\nCPU 3.10 GHz\n2.89 GB RAM\n
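Not an answer from the thread, but the plan itself points at a first experiment: every sort runs as an "external merge" on disk and both hash joins split into multiple batches, which usually means work_mem is too small for this query. A minimal sketch:

    SET work_mem = '32MB';  -- session-local; size it to what a 2.89 GB RAM machine can spare
    -- then re-run the report under EXPLAIN (ANALYZE, BUFFERS) and check whether
    -- the disk sorts become in-memory quicksorts and the hashes drop to one batch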
one table have 73000 records\n\nanother two tables have 138000 records.\n\nbut its take 12 sec for show 12402 rows in tables\n\nTables Structure:\n\nItems Table\n\nCREATE TABLE \"C_SAM_Master\".items\n(\n itemno integer NOT NULL,\n itemname character varying(250) NOT NULL,\n itemcode character varying(250) NOT NULL,\n shortname character varying(20) NOT NULL,\n aliasname character varying(250) NOT NULL,\n aliasnamelanguage character varying(250) NOT NULL,\n masteritemno integer NOT NULL,\n groupno1 smallint NOT NULL,\n groupno2 smallint NOT NULL,\n groupno3 smallint NOT NULL,\n commodityno smallint NOT NULL,\n unitno smallint NOT NULL,\n weighttype character(1) NOT NULL,\n altunitno smallint NOT NULL,\n weight double precision NOT NULL,\n reqmrp character(1) NOT NULL,\n reqbatch character(1) NOT NULL,\n reqmfrdate character(1) NOT NULL,\n mfrdateformat character varying(20) NOT NULL,\n reqexpdate character(1) NOT NULL,\n expdateformat character varying(20) NOT NULL,\n expdays1 smallint NOT NULL,\n expdays2 character(1) NOT NULL,\n expinfodays smallint NOT NULL,\n stdsaleratemethod smallint NOT NULL,\n salesrateper smallint NOT NULL,\n stdprofit1 double precision NOT NULL,\n stdprofit2 character(1) NOT NULL,\n includestockrep character(1) NOT NULL,\n minstock double precision NOT NULL,\n minstockunit smallint NOT NULL,\n minsaleqtynos double precision NOT NULL,\n minsaleqtyunit smallint NOT NULL,\n minsaleqty double precision NOT NULL,\n description text NOT NULL,\n remarks character varying(250) NOT NULL,\n actpurchaseorder character(1) NOT NULL,\n actpurchase character(1) NOT NULL,\n actpurchasereturn character(1) NOT NULL,\n actsalesorder character(1) NOT NULL,\n actsales character(1) NOT NULL,\n actsalesreturn character(1) NOT NULL,\n actreceiptnote character(1) NOT NULL,\n actdeliverynote character(1) NOT NULL,\n actconsumption character(1) NOT NULL,\n actproduction character(1) NOT NULL,\n actestimate character(1) NOT NULL,\n notifypurchaseorder character varying(250) NOT NULL,\n notifypurchase character varying(250) NOT NULL,\n notifypurchasereturn character varying(250) NOT NULL,\n notifysalesorder character varying(250) NOT NULL,\n notifysales character varying(250) NOT NULL,\n notifysalesreturn character varying(250) NOT NULL,\n notifyreceiptnote character varying(250) NOT NULL,\n notifydeliverynote character varying(250) NOT NULL,\n notifyconsumption character varying(250) NOT NULL,\n notifyproduction character varying(250) NOT NULL,\n notifyestimate character varying(250) NOT NULL,\n act boolean NOT NULL,\n recordowner smallint NOT NULL,\n lastmodified smallint NOT NULL,\n crdate timestamp without time zone NOT NULL,\n stdmaxprofit double precision NOT NULL,\n commodityname character varying(100) NOT NULL,\n lst double precision NOT NULL,\n unittype character(1) NOT NULL,\n unit1 character varying(15) NOT NULL,\n unit2 character varying(15) NOT NULL,\n units integer NOT NULL,\n unitname character varying(50) NOT NULL,\n decimals smallint NOT NULL,\n groupname1 character varying(50) NOT NULL,\n groupname2 character varying(50) NOT NULL,\n groupname3 character varying(50) NOT NULL,\n repgroupname character varying(160) NOT NULL,\n masteritemname character varying(100) NOT NULL,\n altunit1 character varying(15) NOT NULL,\n altunit2 character varying(15) NOT NULL,\n altunits integer NOT NULL,\n altunitname character varying(50) NOT NULL,\n altunitdecimals smallint NOT NULL,\n CONSTRAINT items_itemno_pk PRIMARY KEY (itemno),\n CONSTRAINT items_altunitno_fk FOREIGN KEY (altunitno)\n 
REFERENCES \"C_SAM_Master\".measureunits (unitno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_commodityno_fk FOREIGN KEY (commodityno)\n REFERENCES \"C_SAM_Master\".commodity (commodityno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno1_fk FOREIGN KEY (groupno1)\n REFERENCES \"C_SAM_Master\".itemgroup1 (groupno1) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno2_fk FOREIGN KEY (groupno2)\n REFERENCES \"C_SAM_Master\".itemgroup2 (groupno2) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_groupno3_fk FOREIGN KEY (groupno3)\n REFERENCES \"C_SAM_Master\".itemgroup3 (groupno3) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_lastmodified_fk FOREIGN KEY (lastmodified)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_masteritemno_fk FOREIGN KEY (masteritemno)\n REFERENCES \"C_SAM_Master\".masteritems (masteritemno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_recordowner_fk FOREIGN KEY (recordowner)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_unitno_fk FOREIGN KEY (unitno)\n REFERENCES \"C_SAM_Master\".measureunits (unitno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT items_actconsumption_ck CHECK (actconsumption::text <>\n''::text),\n CONSTRAINT items_actdeliverynote_ck CHECK (actdeliverynote::text <>\n''::text),\n CONSTRAINT items_actestimate_ck CHECK (actestimate::text <> ''::text),\n CONSTRAINT items_actproduction_ck CHECK (actproduction::text <>\n''::text),\n CONSTRAINT items_actpurchase_ck CHECK (actpurchase::text <> ''::text),\n CONSTRAINT items_actpurchaseorder_ck CHECK (actpurchaseorder::text <>\n''::text),\n CONSTRAINT items_actpurchasereturn_ck CHECK (actpurchasereturn::text <>\n''::text),\n CONSTRAINT items_actreceiptnote_ck CHECK (actreceiptnote::text <>\n''::text),\n CONSTRAINT items_actsales_ck CHECK (actsales::text <> ''::text),\n CONSTRAINT items_actsalesorder_ck CHECK (actsalesorder::text <>\n''::text),\n CONSTRAINT items_actsalesreturn_ck CHECK (actsalesreturn::text <>\n''::text),\n CONSTRAINT items_aliasname_ck CHECK (aliasname::text <> ''::text),\n CONSTRAINT items_altunitdecimals_ck CHECK (altunitdecimals >= 0 AND\naltunitdecimals <= 3),\n CONSTRAINT items_altunits_ck CHECK (altunits >= 0),\n CONSTRAINT items_commodityname_ck CHECK (commodityname::text <>\n''::text),\n CONSTRAINT items_decimals_ck CHECK (decimals >= 0 AND decimals <= 3),\n CONSTRAINT items_expdays1_ck CHECK (expdays1 >= 0),\n CONSTRAINT items_expinfodays_ck CHECK (expinfodays >= 0),\n CONSTRAINT items_includestockrep_ck CHECK (includestockrep::text <>\n''::text),\n CONSTRAINT items_itemcode_ck CHECK (itemcode::text <> ''::text),\n CONSTRAINT items_itemname_ck CHECK (itemname::text <> ''::text),\n CONSTRAINT items_itemno_ck CHECK (itemno > 0),\n CONSTRAINT items_lst_ck CHECK (lst >= 0::double precision),\n CONSTRAINT items_minsaleqty_ck CHECK (minsaleqty >= 0::double precision),\n CONSTRAINT items_minsaleqtynos_ck CHECK (minsaleqtynos >= 0::double\nprecision),\n CONSTRAINT items_minsaleqtyunit_ck CHECK (minsaleqtyunit >= 0 AND\nminsaleqtyunit <= 2),\n CONSTRAINT items_minstock_ck CHECK (minstock >= 0::double precision),\n CONSTRAINT items_minstockunit_ck CHECK (minstockunit >= 0 AND minstockunit\n<= 2),\n CONSTRAINT items_reqbatch_ck CHECK (reqbatch::text <> ''::text),\n CONSTRAINT 
items_reqexpdate_ck CHECK (reqexpdate::text <> ''::text),\n CONSTRAINT items_reqmfrdate_ck CHECK (reqmfrdate::text <> ''::text),\n CONSTRAINT items_reqmrp_ck CHECK (reqmrp::text <> ''::text),\n CONSTRAINT items_salesrateper_ck CHECK (salesrateper >= 0 AND salesrateper\n<= 4),\n CONSTRAINT items_stdsaleratemethod_ck CHECK (stdsaleratemethod >= 0 AND\nstdsaleratemethod <= 2),\n CONSTRAINT items_units_ck CHECK (units >= 0),\n CONSTRAINT items_unittype_ck CHECK (unittype::text <> ''::text),\n CONSTRAINT items_weight_ck CHECK (weight >= 0::double precision),\n CONSTRAINT items_weighttype_ck CHECK (weighttype::text <> ''::text)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_SAM\";\nALTER TABLE \"C_SAM_Master\".items\n OWNER TO gpro2user;\n\n-- Index: \"C_SAM_Master\".items_itemname_uq\n\n-- DROP INDEX \"C_SAM_Master\".items_itemname_uq;\n\nCREATE UNIQUE INDEX items_itemname_uq\n ON \"C_SAM_Master\".items\n USING btree\n (lower(itemname::text) COLLATE pg_catalog.\"default\");\n\n\n-- Rule: rule_del_items ON \"C_SAM_Master\".items\n\n-- DROP RULE rule_del_items ON \"C_SAM_Master\".items;\n\nCREATE OR REPLACE RULE rule_del_items AS\n ON DELETE TO \"C_SAM_Master\".items DO ( DELETE FROM\n\"C_SAM_Master\".itembarcode\n WHERE itembarcode.itemno = old.itemno;\n DELETE FROM \"C_SAM_Master\".pricelist\n WHERE pricelist.itemno = old.itemno;\n DELETE FROM \"C_SAM_Master\".pricelistreview\n WHERE pricelistreview.itemno = old.itemno;\n);\n\n-- Rule: rule_del_items_c_sam_2014_2015 ON \"C_SAM_Master\".items\n\n-- DROP RULE rule_del_items_c_sam_2014_2015 ON \"C_SAM_Master\".items;\n\nCREATE OR REPLACE RULE rule_del_items_c_sam_2014_2015 AS\n ON DELETE TO \"C_SAM_Master\".items DO ( DELETE FROM\n\"C_SAM_2014-2015\".openingstock\n WHERE openingstock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".stock\n WHERE stock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".packingsetup\n WHERE packingsetup.primeitemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".packingsetup\n WHERE packingsetup.packingitemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".itemsuppliers\n WHERE itemsuppliers.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".partyopeningstock\n WHERE partyopeningstock.itemno = old.itemno;\n DELETE FROM \"C_SAM_2014-2015\".partystock\n WHERE partystock.itemno = old.itemno;\n);\n\nSales 1 Table\n\nCREATE TABLE \"C_KA_2014-2015\".sales1\n(\n vtno smallint NOT NULL,\n prefix character varying(5) NOT NULL,\n idno integer NOT NULL,\n suffix character varying(5) NOT NULL,\n txno character varying(20) NOT NULL,\n txdate timestamp without time zone NOT NULL,\n dracno integer NOT NULL,\n partyname character varying(100) NOT NULL,\n address1 character varying(100) NOT NULL,\n address2 character varying(100) NOT NULL,\n city character varying(50) NOT NULL,\n partytin character varying(30) NOT NULL,\n partycstno character varying(30) NOT NULL,\n mobileno character varying(15) NOT NULL,\n ponos character varying NOT NULL,\n pricelevelno smallint NOT NULL,\n invno character varying(20) NOT NULL,\n duedays smallint NOT NULL,\n duedate timestamp without time zone NOT NULL,\n paymentmode character varying(10) NOT NULL,\n bankrefno character varying(30) NOT NULL,\n bankrefdate character varying(10) NOT NULL,\n bankfavourname character varying(100) NOT NULL,\n bankcrossref character(1) NOT NULL,\n bankremarks character varying(100) NOT NULL,\n bankdate character varying(10) NOT NULL,\n bankstatus character(1) NOT NULL,\n bankreconcildate character varying(10) NOT NULL,\n stockpointno smallint NOT NULL,\n 
nettotal double precision NOT NULL,\n grosswt integer NOT NULL,\n tarewt integer NOT NULL,\n actualwt double precision NOT NULL,\n againstform character varying(15) NOT NULL,\n formseriesno character varying(15) NOT NULL,\n formno character varying(15) NOT NULL,\n formdate character varying(10) NOT NULL,\n totalqty double precision NOT NULL,\n totalqtyunit character varying(15) NOT NULL,\n totalfreeqty double precision NOT NULL,\n totalfreeqtyunit character varying(15) NOT NULL,\n totalaltqty double precision NOT NULL,\n totalaltqtyunit character varying(15) NOT NULL,\n orderby smallint NOT NULL,\n collectionby smallint NOT NULL,\n deliveredby1 character varying(30) NOT NULL,\n deliveredby2 character varying(50) NOT NULL,\n deliveredrefno character varying(30) NOT NULL,\n deliveredrefdate character varying(10) NOT NULL,\n goodsdelivered character(1) NOT NULL,\n deliveredto1 character varying(50) NOT NULL,\n deliveredto2 character varying(50) NOT NULL,\n cashrcvd double precision NOT NULL,\n remarks character varying(250) NOT NULL,\n totalstockvalue double precision NOT NULL,\n profit1 double precision NOT NULL,\n act boolean NOT NULL,\n totalassesvalue double precision NOT NULL,\n totaltax double precision NOT NULL,\n recordowner smallint NOT NULL,\n lastmodified smallint NOT NULL,\n crdate timestamp without time zone NOT NULL,\n lessadv double precision NOT NULL,\n lessadvpartyacno integer NOT NULL,\n rateadj double precision NOT NULL,\n jobcardtxno character varying(40) NOT NULL,\n txtime character varying(8) NOT NULL,\n CONSTRAINT sales1_txno_pk PRIMARY KEY (txno),\n CONSTRAINT sales1_collectionby_fk FOREIGN KEY (collectionby)\n REFERENCES \"G_KUMARANGROUPS_Master\".employee (empno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_dracno_fk FOREIGN KEY (dracno)\n REFERENCES \"C_KA_AcMaster\".acledger (acno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_lastmodified_fk FOREIGN KEY (lastmodified)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_orderby_fk FOREIGN KEY (orderby)\n REFERENCES \"G_KUMARANGROUPS_Master\".employee (empno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_pricelevelno_fk FOREIGN KEY (pricelevelno)\n REFERENCES \"C_KA_AcMaster\".acpricelevel (pricelevelno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_recordowner_fk FOREIGN KEY (recordowner)\n REFERENCES appsetup.user1 (userno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_vto_fk FOREIGN KEY (vtno)\n REFERENCES \"C_KA_2014-2015\".acvouchertype (vtno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales1_vtnoprefixidnosuffix_uq UNIQUE (vtno, prefix, idno,\nsuffix),\n CONSTRAINT sales1_duedays_ck CHECK (duedays >= 0),\n CONSTRAINT sales1_idno_ck CHECK (idno > 0),\n CONSTRAINT sales1_lessadv_ck CHECK (lessadv >= 0::double precision),\n CONSTRAINT sales1_lessadvpartyacno_ck CHECK (lessadvpartyacno >= 0),\n CONSTRAINT sales1_partyname_ck CHECK (partyname::text <> ''::text),\n CONSTRAINT sales1_paymentmode_ck CHECK (paymentmode::text <> ''::text),\n CONSTRAINT sales1_stockpointno_ck CHECK (stockpointno >= 0)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_KA\";\nALTER TABLE \"C_KA_2014-2015\".sales1\n OWNER TO gpro2user;\n\n-- Index: \"C_KA_2014-2015\".sales1_acno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales1_acno;\n\nCREATE INDEX sales1_acno\n ON \"C_KA_2014-2015\".sales1\n USING btree\n (dracno);\n\n-- 
Index: \"C_KA_2014-2015\".sales1_txdate\n\n-- DROP INDEX \"C_KA_2014-2015\".sales1_txdate;\n\nCREATE INDEX sales1_txdate\n ON \"C_KA_2014-2015\".sales1\n USING btree\n (txdate);\n\n\n-- Rule: rule_del_sales ON \"C_KA_2014-2015\".sales1\n\n-- DROP RULE rule_del_sales ON \"C_KA_2014-2015\".sales1;\n\nCREATE OR REPLACE RULE rule_del_sales AS\n ON DELETE TO \"C_KA_2014-2015\".sales1 DO ( DELETE FROM\n\"C_KA_2014-2015\".packingitemsautopost\n WHERE packingitemsautopost.transtype::text = 'Sales'::text AND\npackingitemsautopost.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales6\n WHERE sales6.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales5\n WHERE sales5.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales4\n WHERE sales4.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales3\n WHERE sales3.txno::text = old.txno::text;\n DELETE FROM \"C_KA_2014-2015\".sales2\n WHERE sales2.txno::text = old.txno::text;\n);\n\n\n-- Trigger: trg_sales1 on \"C_KA_2014-2015\".sales1\n\n-- DROP TRIGGER trg_sales1 ON \"C_KA_2014-2015\".sales1;\n\nCREATE TRIGGER trg_sales1\n AFTER UPDATE OF act\n ON \"C_KA_2014-2015\".sales1\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales('sales1');\n\n-- Trigger: trg_sales1acpost on \"C_KA_2014-2015\".sales1\n\n-- DROP TRIGGER trg_sales1acpost ON \"C_KA_2014-2015\".sales1;\n\nCREATE TRIGGER trg_sales1acpost\n AFTER INSERT OR UPDATE OF txdate OR DELETE\n ON \"C_KA_2014-2015\".sales1\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales1acpost();\n\nSales 2 Table\n\n\nCREATE TABLE \"C_KA_2014-2015\".sales2\n(\n txno character varying(20) NOT NULL,\n slno smallint NOT NULL,\n itemno integer NOT NULL,\n rowkey smallint NOT NULL,\n mrp double precision NOT NULL,\n batchno character varying(20) NOT NULL,\n expdate character varying(10) NOT NULL,\n qty1 double precision NOT NULL,\n qty2 double precision NOT NULL,\n freeqty1 double precision NOT NULL,\n freeqty2 double precision NOT NULL,\n altqty1 double precision NOT NULL,\n altqty2 double precision NOT NULL,\n rate double precision NOT NULL,\n rateper smallint NOT NULL,\n basedvalue double precision NOT NULL,\n tradedis1 double precision NOT NULL,\n tradedis2 double precision NOT NULL,\n totaltradis double precision NOT NULL,\n adnldis1 double precision NOT NULL,\n adnldis2 double precision NOT NULL,\n totaladnldis double precision NOT NULL,\n adnlcostbeforevat double precision NOT NULL,\n assesvalue double precision NOT NULL,\n cst1 double precision NOT NULL,\n cst2 double precision NOT NULL,\n lst1 double precision NOT NULL,\n lst2 double precision NOT NULL,\n amount double precision NOT NULL,\n itemdescription text NOT NULL,\n adnlcostafterevat double precision NOT NULL,\n nsr double precision NOT NULL,\n totalqty double precision NOT NULL,\n totalfreeqty double precision NOT NULL,\n totalaltqty double precision NOT NULL,\n primaryacno integer NOT NULL,\n taxacno integer NOT NULL,\n itemstockvalue double precision NOT NULL,\n itemprofit1 double precision NOT NULL,\n cliamscheme character(1) NOT NULL,\n netrate double precision NOT NULL,\n pricelistrate double precision NOT NULL,\n CONSTRAINT sales2_txnoslno_pk PRIMARY KEY (txno, slno),\n CONSTRAINT sales2_itemno_fk FOREIGN KEY (itemno)\n REFERENCES \"G_KUMARANGROUPS_Master\".items (itemno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales2_txno_fk FOREIGN KEY (txno)\n REFERENCES \"C_KA_2014-2015\".sales1 (txno) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE RESTRICT,\n CONSTRAINT sales2_rowkey_uq UNIQUE 
(rowkey, txno),\n CONSTRAINT sales2_cst1_ck CHECK (cst1 >= 0::double precision),\n CONSTRAINT sales2_lst1_ck CHECK (lst1 >= 0::double precision),\n CONSTRAINT sales2_mrp_ck CHECK (mrp >= 0::double precision),\n CONSTRAINT sales2_netrate_ck CHECK (netrate >= 0::double precision),\n CONSTRAINT sales2_nsr_ck CHECK (nsr >= 0::double precision),\n CONSTRAINT sales2_pricelistrate_ck CHECK (pricelistrate >= 0::double\nprecision),\n CONSTRAINT sales2_primaryacno_ck CHECK (primaryacno >= 0),\n CONSTRAINT sales2_rate_ck CHECK (rate >= 0::double precision),\n CONSTRAINT sales2_rateper_ck CHECK (rateper >= 0 AND rateper <= 4),\n CONSTRAINT sales2_rowkey_ck CHECK (rowkey > 0),\n CONSTRAINT sales2_slno_ck CHECK (slno > 0),\n CONSTRAINT sales2_taxacno_ck CHECK (taxacno >= 0),\n CONSTRAINT sales2_totalfreeqty_ck CHECK ((totalqty + totalfreeqty) <>\n0::double precision),\n CONSTRAINT sales2_totalqty_ck CHECK ((totalqty + totalfreeqty) <>\n0::double precision)\n)\nWITH (\n OIDS=FALSE\n)\nTABLESPACE \"gpro2_KA\";\nALTER TABLE \"C_KA_2014-2015\".sales2\n OWNER TO gpro2user;\n\n-- Index: \"C_KA_2014-2015\".sales2_itemno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales2_itemno;\n\nCREATE INDEX sales2_itemno\n ON \"C_KA_2014-2015\".sales2\n USING btree\n (itemno);\n\n-- Index: \"C_KA_2014-2015\".sales2_txno\n\n-- DROP INDEX \"C_KA_2014-2015\".sales2_txno;\n\nCREATE INDEX sales2_txno\n ON \"C_KA_2014-2015\".sales2\n USING btree\n (txno COLLATE pg_catalog.\"default\");\n\n\n-- Trigger: trg_sales2 on \"C_KA_2014-2015\".sales2\n\n-- DROP TRIGGER trg_sales2 ON \"C_KA_2014-2015\".sales2;\n\nCREATE TRIGGER trg_sales2\n AFTER INSERT OR DELETE\n ON \"C_KA_2014-2015\".sales2\n FOR EACH ROW\n EXECUTE PROCEDURE fn_trg_sales2();\n\nQuery:\n\n select grp,disp,alisdisp,ord,'' as adnlorder,'' as calcorder,sum(case when\nord =3 then qty end) as qty,sum(case when ord=3 then freeqty end) as\nfreeqty,max(case when ord=3 then unit1 end) as unit1,sum(altqty) as\naltqty,max(altunit1) as altunit1,sum(discount) as discount,sum(amount) as\namount,sum(itemprofit) as itemprofit,0.00 as profitper,sum(itemstockvalue)\nas itemstockvalue from (select\nunnest(array[repgroupname,repgroupname||'-'||masteritemname,repgroupname||'-'||masteritemname||'-'||itemname])\nas grp,unnest(array[case when repgroupname ='' then 'UnGrouped' else\nrepgroupname end,masteritemname,itemname]) as\ndisp,unnest(array['','',aliasnamelanguage]) as alisdisp,unnest(array[1,2,3])\nas ord,cast(case when units > 1 then cast(case when sum(qty) > 0 then\nfloor(sum(qty)/units) else ceil(sum(qty)/units) end as text) when\n(mod(cast(sum(qty) as integer),units))<>0 then '.' ||\nabs(cast(mod(cast(sum(qty) as integer),units) as integer)) else\ncast(sum(qty) as text) end as double precision) as qty,cast(case when units\n> 1 then cast(case when sum(freeqty) > 0 then floor(sum(freeqty)/units) else\nceil(sum(freeqty)/units) end as text) when (mod(cast(sum(freeqty) as\ninteger),units))<>0 then '.' || abs(cast(mod(cast(sum(freeqty) as\ninteger),units) as integer)) else cast(sum(freeqty) as text) end as double\nprecision) as freeqty,unit1,cast(case when altunits > 1 then cast(case when\nsum(altqty) > 0 then floor(sum(altqty)/altunits) else\nceil(sum(altqty)/altunits) end as text) when (mod(cast(sum(altqty) as\ninteger),altunits))<>0 then '.' 
|| abs(cast(mod(cast(sum(altqty) as integer),altunits) as integer)) else cast(sum(altqty) as text) end as double precision) as altqty,\naltunit1,sum(discount) as discount,sum(itemstockvalue) as itemstockvalue,sum(itemprofit) as itemprofit,sum(amount) as amount,shortname\nfrom (select i.repgroupname,i.aliasnamelanguage,i.masteritemname,i.itemname,i.groupname1,i.groupname2,i.groupname3,i.units,i.unit1,i.unit2,i.altunit1,i.altunit2,i.altunits,\nsum(s2.totalqty) as qty,sum(s2.totalfreeqty) as freeqty,sum(s2.totalaltqty) as altqty,sum(s2.totaltradis + totaladnldis) as discount,sum(itemstockvalue) as itemstockvalue,sum(itemprofit1) as itemprofit,sum(s2.amount) as amount,'KA'::text as shortname\nfrom \"C_KA_2014-2015\".sales1 s1\ninner join \"C_KA_2014-2015\".sales2 s2 on s1.txno=s2.txno\ninner join \"G_KUMARANGROUPS_Master\".items i on i.itemno=s2.itemno\nwhere s1.act='t' and s1.txdate >= '01/04/2014' and s1.txdate <= '30/01/2015'\ngroup by i.repgroupname,i.aliasnamelanguage,i.groupname1,i.groupname2,i.groupname3,i.units,i.unit1,i.unit2,i.altunit1,i.altunit2,i.altunits,i.itemname,i.masteritemname\n) as tt\ngroup by grp,disp,alisdisp,units,altunits,ord,shortname,unit1,altunit1\norder by grp,disp ) as tab\nwhere disp <> ''\ngroup by grp,disp,alisdisp,ord order by grp,disp\n\n\nEXPLAIN (ANALYZE, BUFFERS):\n\n\"GroupAggregate  (cost=3586024.69..3617755.12 rows=72944 width=160) (actual time=11819.837..11884.868 rows=12064 loops=1)\"\n
\"  Buffers: shared hit=4462 read=9825, temp read=6381 written=6361\"\n
\"  ->  Sort  (cost=3586024.69..3587848.28 rows=729435 width=160) (actual time=11819.780..11831.894 rows=12068 loops=1)\"\n
\"        Sort Key: tab.grp, tab.disp, tab.alisdisp, tab.ord\"\n
\"        Sort Method: external sort  Disk: 1336kB\"\n
\"        Buffers: shared hit=4462 read=9825, temp read=6381 written=6361\"\n
\"        ->  Subquery Scan on tab  (cost=2742202.68..3342958.78 rows=729435 width=160) (actual time=11424.007..11727.170 rows=12068 loops=1)\"\n
\"              Filter: ((tab.disp)::text <> ''::text)\"\n
\"              Rows Removed by Filter: 7\"\n
\"              Buffers: shared hit=4462 read=9825, temp read=6214 written=6194\"\n
\"              ->  GroupAggregate  (cost=2742202.68..3333795.03 rows=733100 width=115) (actual time=11424.001..11703.904 rows=12075 loops=1)\"\n
\"                    Buffers: shared hit=4462 read=9825, temp read=6214 written=6194\"\n
\"                    ->  Sort  (cost=2742202.68..2760528.43 rows=7330300 width=115) (actual time=11423.951..11543.478 rows=36183 loops=1)\"\n
\"                          Sort Key: (unnest(ARRAY[tt.repgroupname, ((((tt.repgroupname)::text || '-'::text) || (tt.masteritemname)::text))::character varying, ((((((tt.repgroupname)::text || '-'::text) || (tt.masteritemname)::text) || '-'::text) || (tt.ite (...)\"\n
\"                          Sort Method: external merge  Disk: 3552kB\"\n
\"                          Buffers: shared hit=4462 read=9825, temp read=6214 written=6194\"\n
\"                          ->  Subquery Scan on tt  (cost=56047.61..102407.06 rows=7330300 width=115) (actual time=8877.785..11023.746 rows=36183 loops=1)\"\n
\"                                Buffers: shared hit=4462 read=9825, temp read=5768 written=5748\"\n
\"                                ->  GroupAggregate  (cost=56047.61..63373.22 rows=73303 width=96) (actual time=8877.762..10906.503 rows=12061 loops=1)\"\n
\"                                      Buffers: shared hit=4462 read=9825, temp read=5768 written=5748\"\n
\"                                      ->  Sort  (cost=56047.61..56347.27 rows=119865 width=96) (actual time=8877.576..10555.267 rows=119714 loops=1)\"\n
\"                                            Sort Key: i.repgroupname, i.aliasnamelanguage, i.groupname1, i.groupname2, i.groupname3, i.units, i.unit1, i.unit2, i.altunit1, i.altunit2, i.altunits, i.itemname, i.masteritemname\"\n
\"                                            Sort Method: external merge  Disk: 12432kB\"\n
\"                                            Buffers: shared hit=4462 read=9825, temp read=5768 written=5748\"\n
\"                                            ->  Hash Join  (cost=13948.80..33644.37 rows=119865 width=96) (actual time=617.917..1756.039 rows=119714 loops=1)\"\n
\"                                                  Hash Cond: (s2.itemno = i.itemno)\"\n
\"                                                  Buffers: shared hit=4462 read=9825, temp read=3098 written=3078\"\n
\"                                                  ->  Hash Join  (cost=8849.48..23064.41 rows=119865 width=68) (actual time=339.948..1054.380 rows=119714 loops=1)\"\n
\"                                                        Hash Cond: ((s2.txno)::text = (s1.txno)::text)\"\n
\"                                                        Buffers: shared hit=1585 read=9825, temp read=1539 written=1533\"\n
\"                                                        ->  Seq Scan on sales2 s2  (cost=0.00..7490.64 rows=144964 width=76) (actual time=0.023..262.043 rows=144964 loops=1)\"\n
\"                                                              Buffers: shared hit=814 read=5227\"\n
\"                                                        ->  Hash  (cost=7196.35..7196.35 rows=100731 width=8) (actual time=339.873..339.873 rows=100850 loops=1)\"\n
\"                                                              Buckets: 4096  Batches: 4  Memory Usage: 803kB\"\n
\"                                                              Buffers: shared hit=771 read=4598, temp written=257\"\n
\"                                                              ->  Seq Scan on sales1 s1  (cost=0.00..7196.35 rows=100731 width=8) (actual time=0.029..230.250 rows=100850 loops=1)\"\n
\"                                                                    Filter: (act AND (txdate >= '2014-04-01 00:00:00'::timestamp without time zone) AND (txdate <= '2015-01-30 00:00:00'::timestamp without time zone))\"\n
\"                                                                    Rows Removed by Filter: 20973\"\n
\"                                                                    Buffers: shared hit=771 read=4598\"\n
\"                                                  ->  Hash  (cost=3610.03..3610.03 rows=73303 width=36) (actual time=277.327..277.327 rows=73303 loops=1)\"\n
\"                                                        Buckets: 2048  Batches: 8  Memory Usage: 593kB\"\n
\"                                                        Buffers: shared hit=2877, temp written=475\"\n
\"                                                        ->  Seq Scan on items i  (cost=0.00..3610.03 rows=73303 width=36) (actual time=0.007..153.900 rows=73303 loops=1)\"\n
\"                                                              Buffers: shared hit=2877\"\n
\"Total runtime: 11897.250 ms\"\n\nMy hardware is:\n\nIntel Core CPU, 3.10 GHz\n2.89 GB RAM",
"msg_date": "Wed, 11 Feb 2015 12:11:27 +0530",
"msg_from": "Sathish Nelson <[email protected]>",
"msg_from_op": true,
"msg_subject": "slow query"
},
{
"msg_contents": "On 11/02/15 07:41, Sathish Nelson wrote:\n> am connecting three tables in query. one table have 73000 records\n> \n> another two tables have 138000 records.\n> \n> but its take 12 sec for show 12402 rows in tables\n\nyou need more work_mem. The query plan shows multiple external sort nodes.\n\nOtherwise, the query and the plan I received here are too badly\nformatted for me to comprehend.\n\nTorsten\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Feb 2015 14:33:05 +0100",
"msg_from": "=?UTF-8?B?VG9yc3RlbiBGw7ZydHNjaA==?= <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query"
},
{
"msg_contents": "Sathish Nelson <[email protected]> writes:\n> am connecting three tables in query. one table have 73000 records\n> another two tables have 138000 records.\n> but its take 12 sec for show 12402 rows in tables\n\nIncreasing work_mem would make those sort steps faster ...\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 11 Feb 2015 08:44:48 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: slow query"
}
] |
[
{
"msg_contents": "Hi\n\nI'm using cross join lateral with a non-trivial function in \nan attempt to limit calculation of that function, and am \nwondering about some aspects of how lateral is currently \nimplemented. \n\nNB these queries are generated by a certain ORM, and are\nusually embedded in much more complex queries...\n\n\nCase one: counting\n\n select count(alpha.id) \n from alpha\n cross join lateral some_function(alpha.id) as some_val\n where alpha.test\n\n Here the function is strict, and moreover its argument will never\n be null - hence there should always be a non-null value returned. \n\n I would expect that since the function doesn't impact on the \n number of rows (always one value returned for each row in alpha),\n then I'd hope the function is never called. EXPLAIN shows it being \n called for each row in the main table. \n\n\nCase two: pagination\n\n select alpha.*, some_val\n from alpha\n cross join lateral some_function(alpha.id) as some_val\n where alpha.test\n order by alpha.name asc\n limit 100 offset 100\n\n Same setup as above, and I'd expect that the ordering and\n selection of rows can be done first and the function only \n called on the rows that get selected. Again, EXPLAIN shows\n otherwise.\n\n\n\n\nSo: am I expecting too much for LATERAL, or have I missed a \ntrick somewhere? \n\nMany thanks in advance! \n\nPaul\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/query-laziness-of-lateral-join-with-function-tp5837706.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 11:07:09 -0700 (MST)",
"msg_from": "paulcc <[email protected]>",
"msg_from_op": true,
"msg_subject": "query - laziness of lateral join with function"
},
{
"msg_contents": "paulcc <[email protected]> writes:\n> select count(alpha.id) \n> from alpha\n> cross join lateral some_function(alpha.id) as some_val\n> where alpha.test\n\n> Here the function is strict, and moreover its argument will never\n> be null - hence there should always be a non-null value returned. \n\n> I would expect that since the function doesn't impact on the \n> number of rows (always one value returned for each row in alpha),\n> then I'd hope the function is never called. EXPLAIN shows it being \n> called for each row in the main table. \n\nYou're out of luck on that one at the moment, although testing it on\nHEAD suggests that commit 55d5b3c08279b487cfa44d4b6e6eea67a0af89e4\nmight have fixed it for you in future releases.\n\n> select alpha.*, some_val\n> from alpha\n> cross join lateral some_function(alpha.id) as some_val\n> where alpha.test\n> order by alpha.name asc\n> limit 100 offset 100\n\n> Same setup as above, and I'd expect that the ordering and\n> selection of rows can be done first and the function only \n> called on the rows that get selected.\n\nThe planner might produce such a result if there's an opportunity\nto perform the sorting via an index on \"alpha\" (ie, the ORDER BY\nmatches some index). If it has to do an explicit sort it's gonna\ndo the join first.\n\n(If you have such an index, and it's not going for the plan you want,\nyou might need to crank up the COST property of some_function to\npersuade the planner that it should try to minimize the number of calls\neven if that means a slower scan choice.)\n\nIn both cases though, I rather wonder why you're using LATERAL at all, as\nopposed to just calling the function in the main query when you want its\nresult. The query planner can't be expected to make up for arbitrary\namounts of stupidity in the formulation of the submitted query.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 16:17:41 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query - laziness of lateral join with function"
},
{
"msg_contents": "Tom Lane-2 wrote\n> paulcc <\n\n> paulcc.two@\n\n> > writes:\n>> select count(alpha.id) \n>> from alpha\n>> cross join lateral some_function(alpha.id) as some_val\n>> where alpha.test\n> \n>> Here the function is strict, and moreover its argument will never\n>> be null - hence there should always be a non-null value returned. \n> \n> In both cases though, I rather wonder why you're using LATERAL at all, as\n> opposed to just calling the function in the main query when you want its\n> result. The query planner can't be expected to make up for arbitrary\n> amounts of stupidity in the formulation of the submitted query.\n\nI'm trying to answer this with a bit more detail but cannot because the OP\nprovided too little information which is then causing Tom to make\nassumptions. I'm not sure to what degree the ORM is being stupid here since\nI do not know why it thinks LATERAL is more appropriate than a select-list\nfunction call for a non-SRF function (which I have to presume this is, but\nit is not stated).\n\nWith respect to \"the function will never return NULL\": this is not the\nissue. The issue is that the function could return nothing (i.e., zero\nrecords) in which case the CROSS JOIN would suppress the corresponding\ncorrelated row from the result.\n\nNon-SRF functions are more easily used within the select-list of the query\ninstead of attached to a LATERAL clause; the only issue there is when the\nfunction returns a composite and you try to immediately explode it into its\nconstituent parts - the function will be evaluated multiple times. I'm not\nsure if that is what Tom is saying above but the combination of that\nlimitation and limited optimizations if the function is in LATERAL seems to\nbe in conflict here.\n\nThere has been a recent uptick in interest in making PostgreSQL more ORM\nfriendly (i.e., more able to simply ignore stuff that is added to the query\neven though a particular call doesn't actually need it) but I haven't seen\nanyone looking into LATERAL. More detailed reports may at least bring\nexposure to what is being used in the wild and garner interest from other\nparties in improving things. Unfortunately this report is too limited to\nreally make a dent; lacking even the name of the ORM that is being used and\nthe entire queries that are being generated - and why.\n\nDavid J.\n\n\n\n\n--\nView this message in context: http://postgresql.nabble.com/query-laziness-of-lateral-join-with-function-tp5837706p5837735.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 15:16:06 -0700 (MST)",
"msg_from": "David G Johnston <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query - laziness of lateral join with function"
},
{
"msg_contents": "On Feb 12, 2015 9:17 PM, \"Tom Lane\" <[email protected]> wrote:\n\n> The planner might produce such a result if there's an opportunity\n> to perform the sorting via an index on \"alpha\" (ie, the ORDER BY\n> matches some index). If it has to do an explicit sort it's gonna\n> do the join first.\n>\n> (If you have such an index, and it's not going for the plan you want,\n> you might need to crank up the COST property of some_function to\n> persuade the planner that it should try to minimize the number of calls\n> even if that means a slower scan choice.)\n>\n> In both cases though, I rather wonder why you're using LATERAL at all, as\n> opposed to just calling the function in the main query when you want its\n> result. The query planner can't be expected to make up for arbitrary\n> amounts of stupidity in the formulation of the submitted query.\n\nUseful, many thanks. I'll try playing with cost changes and a more targeted\nindex.\n\nIn my real code, the function actually returns a json hash from which\nseveral fields are extracted in the main select (why? Unpacking in the\nquery saves some hassle in the app code...) So my plan was to use lateral\nto limit function calls to max once per row. Is there a better way, other\nthan using a nested query?\n\n\nOn Feb 12, 2015 9:17 PM, \"Tom Lane\" <[email protected]> wrote:\n> The planner might produce such a result if there's an opportunity\n> to perform the sorting via an index on \"alpha\" (ie, the ORDER BY\n> matches some index). If it has to do an explicit sort it's gonna\n> do the join first.\n>\n> (If you have such an index, and it's not going for the plan you want,\n> you might need to crank up the COST property of some_function to\n> persuade the planner that it should try to minimize the number of calls\n> even if that means a slower scan choice.)\n>\n> In both cases though, I rather wonder why you're using LATERAL at all, as\n> opposed to just calling the function in the main query when you want its\n> result. The query planner can't be expected to make up for arbitrary\n> amounts of stupidity in the formulation of the submitted query.\nUseful, many thanks. I'll try playing with cost changes and a more targeted index. \nIn my real code, the function actually returns a json hash from which several fields are extracted in the main select (why? Unpacking in the query saves some hassle in the app code...) So my plan was to use lateral to limit function calls to max once per row. Is there a better way, other than using a nested query?",
"msg_date": "Thu, 12 Feb 2015 23:28:19 +0000",
"msg_from": "Paul Callaghan <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: query - laziness of lateral join with function"
}
] |
[
{
"msg_contents": "Hello,\nI've been away from postgres for several years, so please forgive me if \nI forgot nearly everything:-)\n\nI've just inherited a database collecting environmental data. There's a \nbackground process continually inserting records (not so often, to say \nthe truth) and a web interface to query data.\nAt the moment the record count of the db is 250M and growing all the \ntime. The 3 main tables have just 3 columns.\n\nQueries get executed very very slowly, say 20 minutes. The most evident \nproblem I see is that io wait load is almost always 90+% while querying \ndata, 30-40% when \"idle\" (so to say).\nObviously disk access is to blame, but I'm a bit surprised because the \ncluster where this db is running is not at all old iron: it's a vmware \nVM with 16GB ram, 4cpu 2.2Ghz, 128GB disk (half of which used). The disk \nsystem underlying vmware is quite powerful, this postgres is the only \nsystem that runs slowly in this cluster.\nI can increase resources if necessary, but..\n\nEven before analyzing queries (that I did) I'd like to know if someone \nhas already succeeded in running postgres with 200-300M records with \nqueries running much faster than this. I'd like to compare the current \nconfiguration with a super-optimized one to identify the parameters that \nneed to be changed.\nAny link to a working configuration would be very appreciated.\n\nThanks for any help,\n Nico\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 23:25:54 +0100",
"msg_from": "Nico Sabbi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Configuration tips for very large database"
},
{
"msg_contents": "Nico Sabbi <[email protected]> wrote:\n\n> Queries get executed very very slowly, say 20 minutes.\n\n> I'd like to know if someone has already succeeded in running\n> postgres with 200-300M records with queries running much faster\n> than this.\n\nIf you go to the http://wcca.wicourts.gov/ web site, bring up any\ncase, and click the \"Court Record Events\" button, it will search a\ntable with hundreds of millions of rows. The table is not\npartitioned, but has several indexes on it which are useful for\nqueries such as the one that is used when you click the button.\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 22:38:01 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "On 02/12/2015 11:38 PM, Kevin Grittner wrote:\n>\n> If you go to the http://wcca.wicourts.gov/ web site, bring up any\n> case, and click the \"Court Record Events\" button, it will search a\n> table with hundreds of millions of rows. The table is not\n> partitioned, but has several indexes on it which are useful for\n> queries such as the one that is used when you click the button.\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nImpressive. Can you give any hint on the configuration and on the \nunderlying hardware?\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 23:48:13 +0100",
"msg_from": "Nico Sabbi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "Nico Sabbi <[email protected]> wrote:\n\n> Can you give any hint on the configuration and on the underlying \n> hardware?\n\nWell, this particular web site has millions of hits per day \n(running up to about 20 queries per hit) from thousands of \nconcurrent web users, while accepting logical replication from \nthousands of OLTP users via logical replication, so you probably \ndon't need equivalent hardware. If I recall correctly it is \nrunning 32 cores with 512GB RAM running two PostgreSQL clusters, \neach multiple TB, and each having a RAID 5 array of 40 drives, \nplus separate controllers and RAID for OS and WAL.\n\nFor server configuration, see these Wiki pages for the general\ntuning techniques used:\n\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nhttps://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\nThe best course to solve your problem would probably be to review \nthose and see what might apply, and if you still have a problem \npick a specific slow-running query and use the process described \nhere:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 22:55:20 +0000 (UTC)",
"msg_from": "Kevin Grittner <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "I can't speak to the numbers postgresql can or cannot do but the numbers\nabove sound very very doable. If you can get a hold of *greg smith's\npostgresql high performance*, I always liked his method of tuning buffers\nand checkpoints using the background writer stats. All of which can help\nwith the IO load and caching.\n\ngood luck!\n\n\n\nOn Thu, Feb 12, 2015 at 4:55 PM, Kevin Grittner <[email protected]> wrote:\n\n> Nico Sabbi <[email protected]> wrote:\n>\n> > Can you give any hint on the configuration and on the underlying\n> > hardware?\n>\n> Well, this particular web site has millions of hits per day\n> (running up to about 20 queries per hit) from thousands of\n> concurrent web users, while accepting logical replication from\n> thousands of OLTP users via logical replication, so you probably\n> don't need equivalent hardware. If I recall correctly it is\n> running 32 cores with 512GB RAM running two PostgreSQL clusters,\n> each multiple TB, and each having a RAID 5 array of 40 drives,\n> plus separate controllers and RAID for OS and WAL.\n>\n> For server configuration, see these Wiki pages for the general\n> tuning techniques used:\n>\n> https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n>\n> https://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n>\n> The best course to solve your problem would probably be to review\n> those and see what might apply, and if you still have a problem\n> pick a specific slow-running query and use the process described\n> here:\n>\n> https://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> --\n> Kevin Grittner\n> EDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nI can't speak to the numbers postgresql can or cannot do but the numbers above sound very very doable. If you can get a hold of greg smith's postgresql high performance, I always liked his method of tuning buffers and checkpoints using the background writer stats. All of which can help with the IO load and caching. good luck! On Thu, Feb 12, 2015 at 4:55 PM, Kevin Grittner <[email protected]> wrote:Nico Sabbi <[email protected]> wrote:\n\n> Can you give any hint on the configuration and on the underlying\n> hardware?\n\nWell, this particular web site has millions of hits per day\n(running up to about 20 queries per hit) from thousands of\nconcurrent web users, while accepting logical replication from\nthousands of OLTP users via logical replication, so you probably\ndon't need equivalent hardware. 
If I recall correctly it is\nrunning 32 cores with 512GB RAM running two PostgreSQL clusters,\neach multiple TB, and each having a RAID 5 array of 40 drives,\nplus separate controllers and RAID for OS and WAL.\n\nFor server configuration, see these Wiki pages for the general\ntuning techniques used:\n\nhttps://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n\nhttps://wiki.postgresql.org/wiki/Number_Of_Database_Connections\n\nThe best course to solve your problem would probably be to review\nthose and see what might apply, and if you still have a problem\npick a specific slow-running query and use the process described\nhere:\n\nhttps://wiki.postgresql.org/wiki/SlowQueryQuestions\n\n--\nKevin Grittner\nEDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance",
"msg_date": "Thu, 12 Feb 2015 17:14:00 -0600",
"msg_from": "\"Mathis, Jason\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "On Thu, Feb 12, 2015 at 7:38 PM, Kevin Grittner <[email protected]> wrote:\n> Nico Sabbi <[email protected]> wrote:\n>\n>> Queries get executed very very slowly, say 20 minutes.\n>\n>> I'd like to know if someone has already succeeded in running\n>> postgres with 200-300M records with queries running much faster\n>> than this.\n>\n> If you go to the http://wcca.wicourts.gov/ web site, bring up any\n> case, and click the \"Court Record Events\" button, it will search a\n> table with hundreds of millions of rows. The table is not\n> partitioned, but has several indexes on it which are useful for\n> queries such as the one that is used when you click the button.\n\nI have a table with ~800M rows, wide ones, that runs reporting queries\nquite efficiently (usually seconds).\n\nOf course, the queries don't traverse the whole table. That wouldn't\nbe efficient. That's probably the key there, don't make you database\nprocess the whole thing every time if you expect it to be scalable.\n\nWhat kind of queries are you running that have slowed down?\n\nPost an explain analyze so people can diagnose. Possibly it's a\nquery/indexing issue rather than a hardware one.\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 12 Feb 2015 20:19:46 -0300",
"msg_from": "Claudio Freire <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "On Thu, Feb 12, 2015 at 11:25:54PM +0100, Nico Sabbi wrote:\n> Hello,\n> I've been away from postgres for several years, so please forgive\n> me if I forgot nearly everything:-)\n> \n> I've just inherited a database collecting environmental data.\n> There's a background process continually inserting records (not so\n> often, to say the truth) and a web interface to query data.\n> At the moment the record count of the db is 250M and growing all the\n> time. The 3 main tables have just 3 columns.\n> \n> Queries get executed very very slowly, say 20 minutes. The most\n> evident problem I see is that io wait load is almost always 90+%\n> while querying data, 30-40% when \"idle\" (so to say).\n> Obviously disk access is to blame, but I'm a bit surprised because\n> the cluster where this db is running is not at all old iron: it's a\n> vmware VM with 16GB ram, 4cpu 2.2Ghz, 128GB disk (half of which\n> used). The disk system underlying vmware is quite powerful, this\n> postgres is the only system that runs slowly in this cluster.\n> I can increase resources if necessary, but..\n> \n> Even before analyzing queries (that I did) I'd like to know if\n> someone has already succeeded in running postgres with 200-300M\n> records with queries running much faster than this. I'd like to\n> compare the current configuration with a super-optimized one to\n> identify the parameters that need to be changed.\n> Any link to a working configuration would be very appreciated.\n> \n> Thanks for any help,\n> Nico\n> \n\nHi Nico,\n\nNo one has mentioned the elephant in the room, but a database can\nbe very I/O intensive and you may not be getting the performance\nyou need from your virtual disk running on your VMware disk subsystem.\nWhat do IOmeter or other disk performance evaluation software report?\n\nRegards,\nKen\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Feb 2015 08:01:15 -0600",
"msg_from": "\"[email protected]\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "\n>> \n> \n> Hi Nico,\n> \n> No one has mentioned the elephant in the room, but a database can\n> be very I/O intensive and you may not be getting the performance\n> you need from your virtual disk running on your VMware disk subsystem.\n> What do IOmeter or other disk performance evaluation software report?\n> \n> Regards,\n> Ken\n\nAnecdatum: \n\nMoving from a contended VMware hard-disk based filesystem running over the network, to a bare metal RAID10 SSD, resulted in many DB operations running 20-30x faster.\n\nTable sizes circa 10-20G, millions of rows. \n\nGraeme.\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 13 Feb 2015 14:09:32 +0000",
"msg_from": "\"Graeme B. Bell\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Configuration tips for very large database"
},
{
"msg_contents": "On 02/13/2015 12:19 AM, Claudio Freire wrote:\n> I have a table with ~800M rows, wide ones, that runs reporting queries\n> quite efficiently (usually seconds).\n>\n> Of course, the queries don't traverse the whole table. That wouldn't\n> be efficient. That's probably the key there, don't make you database\n> process the whole thing every time if you expect it to be scalable.\n>\n> What kind of queries are you running that have slowed down?\n>\n> Post an explain analyze so people can diagnose. Possibly it's a\n> query/indexing issue rather than a hardware one.\n>\n\nThanks everybody for the answers. At the moment I don't have the queries \nat hand (saturday:-) ).\nI'll post them next week.\n\nI'd really like to avoid data partitioning if possible. It's a thing \nthat gives me a strong stomach ache.\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Feb 2015 20:42:26 +0100",
"msg_from": "Nico Sabbi <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Configuration tips for very large database"
}
] |
[
{
"msg_contents": "Hi,\n\ndoes PostgreSQL support the concept of reverse key indexing as described \nhere? I couldn't find any documentation on this yet.\n\nhttp://www.toadworld.com/platforms/oracle/w/wiki/11075.reverse-key-index-from-the-concept-to-internals.aspx\n\nRegards,\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09130 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Feb 2015 19:13:10 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Reverse Key Index"
},
{
"msg_contents": "\"Sven R. Kunze\" <[email protected]> writes:\n> does PostgreSQL support the concept of reverse key indexing as described \n> here? I couldn't find any documentation on this yet.\n\n> http://www.toadworld.com/platforms/oracle/w/wiki/11075.reverse-key-index-from-the-concept-to-internals.aspx\n\nThere's nothing built-in for that (and frankly, it doesn't sound useful\nenough that we'd ever add it). You could get the effect easily enough\nwith an expression index on a byte-reversing function. A related thing\nthat people often do is create an index on a hash function.\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Feb 2015 13:18:19 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "Thanks for the immediate reply.\n\nI understand the use case is quite limited.\n\nOn the other hand, I see potential when it comes to applications which \nuse PostgreSQL. There, programmers would have to change a lot of code to \ntweak existing (and more importantly working) queries to hash/reverse an \nid column first. Using ORMs would make this change even more painful and \nmaybe even impossible.\n\nWhen reading \nhttps://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/ \ncarefully, it also seems to work with index scan partially in case of \nequality comparisons.\n\n\nOn 14.02.2015 19:18, Tom Lane wrote:\n> \"Sven R. Kunze\" <[email protected]> writes:\n>> does PostgreSQL support the concept of reverse key indexing as described\n>> here? I couldn't find any documentation on this yet.\n>> http://www.toadworld.com/platforms/oracle/w/wiki/11075.reverse-key-index-from-the-concept-to-internals.aspx\n> There's nothing built-in for that (and frankly, it doesn't sound useful\n> enough that we'd ever add it). You could get the effect easily enough\n> with an expression index on a byte-reversing function. A related thing\n> that people often do is create an index on a hash function.\n>\n> \t\t\tregards, tom lane\n\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09130 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 14 Feb 2015 19:35:05 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 02/14/2015 10:35 AM, Sven R. Kunze wrote:\n> Thanks for the immediate reply.\n> \n> I understand the use case is quite limited.\n> \n> On the other hand, I see potential when it comes to applications which\n> use PostgreSQL. There, programmers would have to change a lot of code to\n> tweak existing (and more importantly working) queries to hash/reverse an\n> id column first. Using ORMs would make this change even more painful and\n> maybe even impossible.\n> \n> When reading\n> https://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/\n> carefully, it also seems to work with index scan partially in case of\n> equality comparisons.\n\nSeems like a good use for SP-GiST. Go for it!\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Wed, 25 Feb 2015 14:31:33 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 25.02.2015 23:31, Josh Berkus wrote:\n> On 02/14/2015 10:35 AM, Sven R. Kunze wrote:\n>> Thanks for the immediate reply.\n>>\n>> I understand the use case is quite limited.\n>>\n>> On the other hand, I see potential when it comes to applications which\n>> use PostgreSQL. There, programmers would have to change a lot of code to\n>> tweak existing (and more importantly working) queries to hash/reverse an\n>> id column first. Using ORMs would make this change even more painful and\n>> maybe even impossible.\n>>\n>> When reading\n>> https://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/\n>> carefully, it also seems to work with index scan partially in case of\n>> equality comparisons.\n> Seems like a good use for SP-GiST. Go for it!\n>\n\nI just thought about btree indexes here mainly because they well-known \nand well-used in ORM frameworks. Considering the documentation and \nthird-party posts on GiST and btree_gist, at least to me, it seems as if \npeople would not want to use that for integers; which in turn is the \nmain use-case scenario for reverse key indexes.\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 12:04:01 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "Sven R. Kunze schrieb am 26.02.2015 um 12:04:\n> I just thought about btree indexes here mainly because they well-known and well-used in ORM frameworks. \n\nIf your ORM framework needs to know about the internals of an index definition or even requires a certain index type, then you should ditch that ORM framework.\n\nApart from indexes supporting business constraints (e.g. a unique index) neither the application nor the the ORM framework should care about indexes at all.\n\n> does PostgreSQL support the concept of reverse key indexing as described here? \n\nThe real question is: why do you think you need such an index? \nDo you have any performance problems with the existing BTree index? If yes, which problem exactly? \n\nThomas\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 12:45:46 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 26.02.2015 12:45, Thomas Kellerer wrote:\n> Sven R. Kunze schrieb am 26.02.2015 um 12:04:\n>> I just thought about btree indexes here mainly because they well-known and well-used in ORM frameworks.\n> If your ORM framework needs to know about the internals of an index definition or even requires a certain index type, then you should ditch that ORM framework.\n\nAs I said \"Considering the documentation and third-party posts on GiST \nand btree_gist, at least to me, it seems as if people would not want to \nuse that for integers; which in turn is the main use-case scenario for \nreverse key indexes.\"\n\n> Apart from indexes supporting business constraints (e.g. a unique index) neither the application nor the the ORM framework should care about indexes at all.\n\nWell, the world is not perfect: \nhttp://www.joelonsoftware.com/articles/LeakyAbstractions.html\n\n>> does PostgreSQL support the concept of reverse key indexing as described here?\n> The real question is: why do you think you need such an index?\n> Do you have any performance problems with the existing BTree index? If yes, which problem exactly?\n>\n\nThis is not the real question. I never said I personally have to solve \nissue around that. If so, I would have provide more detailed information \non the issue.\n\nHowever, I clearly see benefits of Oracle's solution over \"You could get \nthe effect easily enough with an expression index on a byte-reversing \nfunction. A related thing that people often do is create an index on a \nhash function.\"\n\nThese benefits, I described here: \"On the other hand, I see potential \nwhen it comes to applications which use PostgreSQL. There, programmers \nwould have to change a lot of code to tweak existing (and more \nimportantly working) queries to hash/reverse an id column first. Using \nORMs would make this change even more painful and maybe even impossible.\"\n\n\nSo, this discussion is more about what can PostgreSQL offer in \ncomparison to already existing solutions. I perfectly see Tom's proposal \nas a as-is solution but it has the drawbacks described above.\n\n\nIf you think Reverse Key Indexes have no usage here in PostgreSQL, you \nshould not support convenience features for easily improving performance \nwithout breaking the querying API or you won't have any intentions to \ninclude such a patch, just let me know and we can close the issue \nimmediately.\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 13:23:33 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 02/26/2015 12:31 AM, Josh Berkus wrote:\n> On 02/14/2015 10:35 AM, Sven R. Kunze wrote:\n>> Thanks for the immediate reply.\n>>\n>> I understand the use case is quite limited.\n>>\n>> On the other hand, I see potential when it comes to applications which\n>> use PostgreSQL. There, programmers would have to change a lot of code to\n>> tweak existing (and more importantly working) queries to hash/reverse an\n>> id column first. Using ORMs would make this change even more painful and\n>> maybe even impossible.\n>>\n>> When reading\n>> https://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/\n>> carefully, it also seems to work with index scan partially in case of\n>> equality comparisons.\n>\n> Seems like a good use for SP-GiST. Go for it!\n\nA b-tree opclass that just compares from right-to-left would work just\nas well, and perform better.\n\n- Heikki\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 14:37:04 +0200",
"msg_from": "Heikki Linnakangas <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "Sven R. Kunze schrieb am 26.02.2015 um 13:23:\n> If you think Reverse Key Indexes have no usage here in PostgreSQL, you should not support convenience features \n> for easily improving performance without breaking the querying API \n\nIt's also unclear to me which \"performance\" you are referring to.\nInsert performance? Retrieval performance? Concurrency? \n\nThe use-case for reverse indexes in Oracle is pretty small: it's _only_ about the contention when doing a lot of inserts with increasing numbers (because the different transactions will be blocked when accessing the blocks in question). \n\nAs Postgres manages inserts differently than Oracle I'm not so sure that this problem exists in Postgres the same way it does in Oracle.\nThat's why I asked if you have a _specific_ problem. \n\nRichard Footes blog post is mostly about the myth that _if_ you have a reverse index this is only used for equality operations. \nIt does not claim that a reverse index is faster than a regular index _if_ it is used for a range scan. \n\nThe question is: do you think you need a reverse index because you have a performance problem with when doing many, many inserts at the same time using \"close-by\" values into a table that uses a btree index on the column? \n\nOr do you think you need a reverse index to improve the performance of a range scan? If that is the then you can easily us a gin/gist index or even a simple btree index using a trigram index to speed up a \"LIKE '%abc%'\" (something Oracle can't do at all) without having to worry about obfuscation layers (aka ORM).\n\nThomas\n\n\n\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 13:48:03 +0100",
"msg_from": "Thomas Kellerer <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 26.02.2015 13:48, Thomas Kellerer wrote:\n> Sven R. Kunze schrieb am 26.02.2015 um 13:23:\n>> If you think Reverse Key Indexes have no usage here in PostgreSQL, you should not support convenience features\n>> for easily improving performance without breaking the querying API\n\nSorry for my bad English: The if-clause ends with \"just let me know and \nwe can close the issue immediately.\" You quoted an or'ed if-part.\n\nPoint was, if you see no benefits or you have no intention to include it \nanyway (patch provided or not), we can stop now. I am not married to \nthis features and right now I can live without it.\n\n> It's also unclear to me which \"performance\" you are referring to.\n> Insert performance? Retrieval performance? Concurrency?\n>\n> The use-case for reverse indexes in Oracle is pretty small: it's _only_ about the contention when doing a lot of inserts with increasing numbers (because the different transactions will be blocked when accessing the blocks in question).\nExactly. That would include logging databases and big/high-frequency \nOLTP systems.\n\n> As Postgres manages inserts differently than Oracle I'm not so sure that this problem exists in Postgres the same way it does in Oracle.\nMaybe, PostgreSQL internal experts can answer that question thoroughly.\n\n> That's why I asked if you have a _specific_ problem.\nI see. Answering explicitly: no, I don't.\n\n> Richard Footes blog post is mostly about the myth that _if_ you have a reverse index this is only used for equality operations.\n> It does not claim that a reverse index is faster than a regular index _if_ it is used for a range scan.\nCorrect.\n\n> The question is: do you think you need a reverse index because you have a performance problem with when doing many, many inserts at the same time using \"close-by\" values into a table that uses a btree index on the column?\n\nI presume that Oracle would not invest resources in implementing \nfeatures which would have no benefits for their customers. Thus, the \nresearch on this topic should already been done for us.\n\nThat given, if we can answer your question 'whether PostgreSQL handles \nit differently from Oracle so that the contention issue cannot arise' \ncan be answered with a no, I tend to say: yes.\n\n> Or do you think you need a reverse index to improve the performance of a range scan? If that is the then you can easily us a gin/gist index or even a simple btree index using a trigram index to speed up a \"LIKE '%abc%'\" (something Oracle can't do at all) without having to worry about obfuscation layers (aka ORM).\n\n From what I gather, reverse key indexes are not about improving range \nscans but about improving insertion speed due to diversification of \ninsertion location.\n\n\nI actually used Richard Foote's posts only to get a proper understanding \nof reverse key indexes and what can and cannot be done with them and \nwhere their issues are:\n\nhttps://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/\nhttps://richardfoote.wordpress.com/2008/01/16/introduction-to-reverse-key-indexes-part-ii-another-myth-bites-the-dust/\nhttps://richardfoote.wordpress.com/2008/01/18/introduction-to-reverse-key-indexes-part-iii-a-space-oddity/\nhttps://richardfoote.wordpress.com/2008/01/21/introduction-to-reverse-key-indexes-part-iv-cluster-one/\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 
210-212, 09126 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 26 Feb 2015 14:20:48 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reverse Key Index"
},
{
"msg_contents": "On 26.02.2015 13:37, Heikki Linnakangas wrote:\n> On 02/26/2015 12:31 AM, Josh Berkus wrote:\n>> On 02/14/2015 10:35 AM, Sven R. Kunze wrote:\n>>> Thanks for the immediate reply.\n>>>\n>>> I understand the use case is quite limited.\n>>>\n>>> On the other hand, I see potential when it comes to applications which\n>>> use PostgreSQL. There, programmers would have to change a lot of \n>>> code to\n>>> tweak existing (and more importantly working) queries to \n>>> hash/reverse an\n>>> id column first. Using ORMs would make this change even more painful \n>>> and\n>>> maybe even impossible.\n>>>\n>>> When reading\n>>> https://richardfoote.wordpress.com/2008/01/14/introduction-to-reverse-key-indexes-part-i/ \n>>>\n>>> carefully, it also seems to work with index scan partially in case of\n>>> equality comparisons.\n>>\n>> Seems like a good use for SP-GiST. Go for it!\n>\n> A b-tree opclass that just compares from right-to-left would work just\n> as well, and perform better.\n>\n> - Heikki\n>\n\n\nThanks for the hint. That also sounds easy to implement.\n\nRegards,\n\n-- \nSven R. Kunze\nTBZ-PARIV GmbH, Bernsdorfer Str. 210-212, 09126 Chemnitz\nTel: +49 (0)371 33714721, Fax: +49 (0)371 5347920\ne-mail: [email protected]\nweb: www.tbz-pariv.de\n\nGeschäftsführer: Dr. Reiner Wohlgemuth\nSitz der Gesellschaft: Chemnitz\nRegistergericht: Chemnitz HRB 8543\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 05 Mar 2015 10:17:51 +0100",
"msg_from": "\"Sven R. Kunze\" <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Reverse Key Index"
}
] |
[
{
"msg_contents": "Hello !\n\nI have a huge table, 1 bilion rows, with many indexes.\nI have many materialysed view (MV), subsets of this huge table, with same\nkind indexes.\nI have many users, querying thoses MV.\nI have a storage problem, because of MV physical memory use.\n\nI wonder :\nIf I replace MV with classical Views, the only indexes that will be used\nwill be the huge table's one. As all users will query on the same indexes,\nis will always be loaded in memory, right ? This will be shared, I mean if\n10 users query the same time, will it use 10*ram memory for indexes or\njuste 1 time that ram ?\n\nI terms of performances, will MV better than simple Views in my case ?\n\n\nThanks for explanation by advance\n\n\nNicolas PARIS\n\nHello !I have a huge table, 1 bilion rows, with many indexes.I have many materialysed view (MV), subsets of this huge table, with same kind indexes.I have many users, querying thoses MV.I have a storage problem, because of MV physical memory use.I wonder :If I replace MV with classical Views, the only indexes that will be used will be the huge table's one. As all users will query on the same indexes, is will always be loaded in memory, right ? This will be shared, I mean if 10 users query the same time, will it use 10*ram memory for indexes or juste 1 time that ram ? I terms of performances, will MV better than simple Views in my case ?Thanks for explanation by advanceNicolas PARIS",
"msg_date": "Fri, 20 Feb 2015 11:28:27 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "On Fri, Feb 20, 2015 at 8:28 AM, Nicolas Paris <[email protected]> wrote:\n\n> If I replace MV with classical Views, the only indexes that will be used\n> will be the huge table's one. As all users will query on the same indexes,\n> is will always be loaded in memory, right ? This will be shared, I mean if\n> 10 users query the same time, will it use 10*ram memory for indexes or\n> juste 1 time that ram ?\n>\n>\nOnce one user load pages into the shared_buffer (or even OS memory cache),\nsubsequent users that requests the same pages will read from there (from\nthe memory), it is valid from pages of any kind of relation (MVs, tables,\nindexes, etc.). So if 10 users use the same index, then the pages read from\nit will be loaded in memory only once (unless it doesn't fit\nram/shared_buffer, of course).\n\n\n\n> I terms of performances, will MV better than simple Views in my case ?\n>\n\nWe'd need a lot more of information to answer this question. I tend to\nrecommend people to try simpler approaches (in your case \"simple views\")\nand only move to more robust ones if the performance of this one is bad.\n\nBy the little information you gave, looks like the queries gets a well\ndefined subset of this big table, so you should also consider:\n\n- Creating partial indexes for the subsets, or at least the most accessed\nones;\n- Partitioning the table (be really careful with that and make sure you\nactually use the partition keys).\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Feb 20, 2015 at 8:28 AM, Nicolas Paris <[email protected]> wrote:If I replace MV with classical Views, the only indexes that will be used will be the huge table's one. As all users will query on the same indexes, is will always be loaded in memory, right ? This will be shared, I mean if 10 users query the same time, will it use 10*ram memory for indexes or juste 1 time that ram ? Once one user load pages into the shared_buffer (or even OS memory cache), subsequent users that requests the same pages will read from there (from the memory), it is valid from pages of any kind of relation (MVs, tables, indexes, etc.). So if 10 users use the same index, then the pages read from it will be loaded in memory only once (unless it doesn't fit ram/shared_buffer, of course). I terms of performances, will MV better than simple Views in my case ?We'd need a lot more of information to answer this question. I tend to recommend people to try simpler approaches (in your case \"simple views\") and only move to more robust ones if the performance of this one is bad.By the little information you gave, looks like the queries gets a well defined subset of this big table, so you should also consider:- Creating partial indexes for the subsets, or at least the most accessed ones;- Partitioning the table (be really careful with that and make sure you actually use the partition keys).Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 20 Feb 2015 10:36:40 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "Thanks,\n\nI like the idea of partial indexes mixed with simple Views\nSo question :\n\nhuge_table{\nid,\nfield\n}\nCREATE INDEX idx_huge_table ON huge_table(id)\nCREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN (1,2,3)\n\nCREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)\n\nDo the following query uses idx_huge_table_for_view1 ?\nSELECT * FROM view1\nWHERE field LIKE 'brillant idea'\n\nIn other words, do all queries on view1 will use the partial index (and\nnever the idx_hute_table ) ?\n\nNicolas PARIS\n\n2015-02-20 13:36 GMT+01:00 Matheus de Oliveira <[email protected]>:\n\n>\n> On Fri, Feb 20, 2015 at 8:28 AM, Nicolas Paris <[email protected]>\n> wrote:\n>\n>> If I replace MV with classical Views, the only indexes that will be used\n>> will be the huge table's one. As all users will query on the same indexes,\n>> is will always be loaded in memory, right ? This will be shared, I mean if\n>> 10 users query the same time, will it use 10*ram memory for indexes or\n>> juste 1 time that ram ?\n>>\n>>\n> Once one user load pages into the shared_buffer (or even OS memory cache),\n> subsequent users that requests the same pages will read from there (from\n> the memory), it is valid from pages of any kind of relation (MVs, tables,\n> indexes, etc.). So if 10 users use the same index, then the pages read from\n> it will be loaded in memory only once (unless it doesn't fit\n> ram/shared_buffer, of course).\n>\n>\n>\n>> I terms of performances, will MV better than simple Views in my case ?\n>>\n>\n> We'd need a lot more of information to answer this question. I tend to\n> recommend people to try simpler approaches (in your case \"simple views\")\n> and only move to more robust ones if the performance of this one is bad.\n>\n> By the little information you gave, looks like the queries gets a well\n> defined subset of this big table, so you should also consider:\n>\n> - Creating partial indexes for the subsets, or at least the most accessed\n> ones;\n> - Partitioning the table (be really careful with that and make sure you\n> actually use the partition keys).\n>\n> Regards,\n> --\n> Matheus de Oliveira\n> Analista de Banco de Dados\n> Dextra Sistemas - MPS.Br nível F!\n> www.dextra.com.br/postgres\n>\n>\n\nThanks,I like the idea of partial indexes mixed with simple ViewsSo question :huge_table{id,field}CREATE INDEX idx_huge_table ON huge_table(id)CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN (1,2,3)CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)Do the following query uses idx_huge_table_for_view1 ?SELECT * FROM view1 WHERE field LIKE 'brillant idea'In other words, do all queries on view1 will use the partial index (and never the idx_hute_table ) ?Nicolas PARIS\n2015-02-20 13:36 GMT+01:00 Matheus de Oliveira <[email protected]>:On Fri, Feb 20, 2015 at 8:28 AM, Nicolas Paris <[email protected]> wrote:If I replace MV with classical Views, the only indexes that will be used will be the huge table's one. As all users will query on the same indexes, is will always be loaded in memory, right ? This will be shared, I mean if 10 users query the same time, will it use 10*ram memory for indexes or juste 1 time that ram ? Once one user load pages into the shared_buffer (or even OS memory cache), subsequent users that requests the same pages will read from there (from the memory), it is valid from pages of any kind of relation (MVs, tables, indexes, etc.). 
So if 10 users use the same index, then the pages read from it will be loaded in memory only once (unless it doesn't fit ram/shared_buffer, of course). I terms of performances, will MV better than simple Views in my case ?We'd need a lot more of information to answer this question. I tend to recommend people to try simpler approaches (in your case \"simple views\") and only move to more robust ones if the performance of this one is bad.By the little information you gave, looks like the queries gets a well defined subset of this big table, so you should also consider:- Creating partial indexes for the subsets, or at least the most accessed ones;- Partitioning the table (be really careful with that and make sure you actually use the partition keys).Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 20 Feb 2015 14:06:23 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "On Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris <[email protected]> wrote:\n\n> Thanks,\n>\n> I like the idea of partial indexes mixed with simple Views\n> So question :\n>\n> huge_table{\n> id,\n> field\n> }\n> CREATE INDEX idx_huge_table ON huge_table(id)\n> CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN (1,2,3)\n>\n> CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)\n>\n> Do the following query uses idx_huge_table_for_view1 ?\n> SELECT * FROM view1\n> WHERE field LIKE 'brillant idea'\n>\n> In other words, do all queries on view1 will use the partial index (and\n> never the idx_hute_table ) ?\n>\n>\nYou can try that pretty easily:\n\n postgres=# CREATE TEMP TABLE huge_table(id int, field text);\n CREATE TABLE\n postgres=# CREATE INDEX huge_table_id_idx ON huge_table(id);\n CREATE INDEX\n postgres=# CREATE INDEX huge_table_id_partial_idx ON huge_table(id)\nWHERE id IN (1,2,3);\n CREATE INDEX\n postgres=# CREATE TEMP VIEW view1 AS SELECT * FROM huge_table WHERE id\nIN (1,2);\n CREATE VIEW\n postgres=# SET enable_seqscan TO off;\n SET\n postgres=# SET enable_bitmapscan To off;\n SET\n postgres=# EXPLAIN SELECT * FROM view1 WHERE field LIKE 'foo%';\n QUERY\nPLAN\n\n----------------------------------------------------------------------------------------------\n Index Scan using *huge_table_id_partial_idx* on huge_table\n(cost=0.12..36.41 rows=1 width=36)\n Index Cond: (id = ANY ('{1,2}'::integer[]))\n Filter: (field ~~ 'foo%'::text)\n (3 rows)\n\nI expect that to happen always, unless you have another index that matches\nbetter the filter from outside the view.\n\nRegards,\n-- \nMatheus de Oliveira\nAnalista de Banco de Dados\nDextra Sistemas - MPS.Br nível F!\nwww.dextra.com.br/postgres\n\nOn Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris <[email protected]> wrote:Thanks,I like the idea of partial indexes mixed with simple ViewsSo question :huge_table{id,field}CREATE INDEX idx_huge_table ON huge_table(id)CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN (1,2,3)CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)Do the following query uses idx_huge_table_for_view1 ?SELECT * FROM view1 WHERE field LIKE 'brillant idea'In other words, do all queries on view1 will use the partial index (and never the idx_hute_table ) ?You can try that pretty easily: postgres=# CREATE TEMP TABLE huge_table(id int, field text); CREATE TABLE postgres=# CREATE INDEX huge_table_id_idx ON huge_table(id); CREATE INDEX postgres=# CREATE INDEX huge_table_id_partial_idx ON huge_table(id) WHERE id IN (1,2,3); CREATE INDEX postgres=# CREATE TEMP VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2); CREATE VIEW postgres=# SET enable_seqscan TO off; SET postgres=# SET enable_bitmapscan To off; SET postgres=# EXPLAIN SELECT * FROM view1 WHERE field LIKE 'foo%'; QUERY PLAN ---------------------------------------------------------------------------------------------- Index Scan using huge_table_id_partial_idx on huge_table (cost=0.12..36.41 rows=1 width=36) Index Cond: (id = ANY ('{1,2}'::integer[])) Filter: (field ~~ 'foo%'::text) (3 rows)I expect that to happen always, unless you have another index that matches better the filter from outside the view.Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 20 Feb 2015 12:44:45 -0200",
"msg_from": "Matheus de Oliveira <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "It appears that in the predicate close (WHERE id IN (foo)), foo cannot\ndepend on other table (join or other). It must be a list. I anderstand why\n(this must be static).\nI can build a string value, but in some case, I will have a milion key list.\nPostgresql do not have limitation in query size, and IN(...) keys number.\n\nBut creating a partial index, with a query of bilion character length is\nnot an issue ? It looks like a little dirty, not ?\n\nThanks for all\n\n\nNicolas PARIS\n\n2015-02-20 15:44 GMT+01:00 Matheus de Oliveira <[email protected]>:\n\n>\n>\n> On Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris <[email protected]>\n> wrote:\n>\n>> Thanks,\n>>\n>> I like the idea of partial indexes mixed with simple Views\n>> So question :\n>>\n>> huge_table{\n>> id,\n>> field\n>> }\n>> CREATE INDEX idx_huge_table ON huge_table(id)\n>> CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN\n>> (1,2,3)\n>>\n>> CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)\n>>\n>> Do the following query uses idx_huge_table_for_view1 ?\n>> SELECT * FROM view1\n>> WHERE field LIKE 'brillant idea'\n>>\n>> In other words, do all queries on view1 will use the partial index (and\n>> never the idx_hute_table ) ?\n>>\n>>\n> You can try that pretty easily:\n>\n> postgres=# CREATE TEMP TABLE huge_table(id int, field text);\n> CREATE TABLE\n> postgres=# CREATE INDEX huge_table_id_idx ON huge_table(id);\n> CREATE INDEX\n> postgres=# CREATE INDEX huge_table_id_partial_idx ON huge_table(id)\n> WHERE id IN (1,2,3);\n> CREATE INDEX\n> postgres=# CREATE TEMP VIEW view1 AS SELECT * FROM huge_table WHERE id\n> IN (1,2);\n> CREATE VIEW\n> postgres=# SET enable_seqscan TO off;\n> SET\n> postgres=# SET enable_bitmapscan To off;\n> SET\n> postgres=# EXPLAIN SELECT * FROM view1 WHERE field LIKE 'foo%';\n> QUERY\n> PLAN\n>\n> ----------------------------------------------------------------------------------------------\n> Index Scan using *huge_table_id_partial_idx* on huge_table\n> (cost=0.12..36.41 rows=1 width=36)\n> Index Cond: (id = ANY ('{1,2}'::integer[]))\n> Filter: (field ~~ 'foo%'::text)\n> (3 rows)\n>\n> I expect that to happen always, unless you have another index that matches\n> better the filter from outside the view.\n>\n> Regards,\n> --\n> Matheus de Oliveira\n> Analista de Banco de Dados\n> Dextra Sistemas - MPS.Br nível F!\n> www.dextra.com.br/postgres\n>\n>\n\nIt appears that in the predicate close (WHERE id IN (foo)), foo cannot depend on other table (join or other). It must be a list. I anderstand why (this must be static).I can build a string value, but in some case, I will have a milion key list.Postgresql do not have limitation in query size, and IN(...) keys number.But creating a partial index, with a query of bilion character length is not an issue ? 
\n\nThanks for all\n\n\nNicolas PARIS",
"msg_date": "Fri, 20 Feb 2015 17:19:46 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "Well it seems that max query size for CREATE INDEX is 8160 character in my\n9.3 postgresql version.\nThen the only solution see is to add a new boolean field : huge_table.view1\nand change predicat to \"WHERE view1=1 \"\nBut I may have 800 views.. adding 800 new fields indexed to the huge table\nis actually not a good idea. Too bad\n\nAny idea to solve that partial view limitation?\n\nNicolas PARIS\n\n2015-02-20 17:19 GMT+01:00 Nicolas Paris <[email protected]>:\n\n> It appears that in the predicate close (WHERE id IN (foo)), foo cannot\n> depend on other table (join or other). It must be a list. I anderstand why\n> (this must be static).\n> I can build a string value, but in some case, I will have a milion key\n> list.\n> Postgresql do not have limitation in query size, and IN(...) keys number.\n>\n> But creating a partial index, with a query of bilion character length is\n> not an issue ? It looks like a little dirty, not ?\n>\n> Thanks for all\n>\n>\n> Nicolas PARIS\n>\n> 2015-02-20 15:44 GMT+01:00 Matheus de Oliveira <[email protected]>\n> :\n>\n>>\n>>\n>> On Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris <[email protected]>\n>> wrote:\n>>\n>>> Thanks,\n>>>\n>>> I like the idea of partial indexes mixed with simple Views\n>>> So question :\n>>>\n>>> huge_table{\n>>> id,\n>>> field\n>>> }\n>>> CREATE INDEX idx_huge_table ON huge_table(id)\n>>> CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN\n>>> (1,2,3)\n>>>\n>>> CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)\n>>>\n>>> Do the following query uses idx_huge_table_for_view1 ?\n>>> SELECT * FROM view1\n>>> WHERE field LIKE 'brillant idea'\n>>>\n>>> In other words, do all queries on view1 will use the partial index (and\n>>> never the idx_hute_table ) ?\n>>>\n>>>\n>> You can try that pretty easily:\n>>\n>> postgres=# CREATE TEMP TABLE huge_table(id int, field text);\n>> CREATE TABLE\n>> postgres=# CREATE INDEX huge_table_id_idx ON huge_table(id);\n>> CREATE INDEX\n>> postgres=# CREATE INDEX huge_table_id_partial_idx ON huge_table(id)\n>> WHERE id IN (1,2,3);\n>> CREATE INDEX\n>> postgres=# CREATE TEMP VIEW view1 AS SELECT * FROM huge_table WHERE\n>> id IN (1,2);\n>> CREATE VIEW\n>> postgres=# SET enable_seqscan TO off;\n>> SET\n>> postgres=# SET enable_bitmapscan To off;\n>> SET\n>> postgres=# EXPLAIN SELECT * FROM view1 WHERE field LIKE 'foo%';\n>> QUERY\n>> PLAN\n>>\n>> ----------------------------------------------------------------------------------------------\n>> Index Scan using *huge_table_id_partial_idx* on huge_table\n>> (cost=0.12..36.41 rows=1 width=36)\n>> Index Cond: (id = ANY ('{1,2}'::integer[]))\n>> Filter: (field ~~ 'foo%'::text)\n>> (3 rows)\n>>\n>> I expect that to happen always, unless you have another index that\n>> matches better the filter from outside the view.\n>>\n>> Regards,\n>> --\n>> Matheus de Oliveira\n>> Analista de Banco de Dados\n>> Dextra Sistemas - MPS.Br nível F!\n>> www.dextra.com.br/postgres\n>>\n>>\n>\n\nWell it seems that max query size for CREATE INDEX is 8160 character in my 9.3 postgresql version.Then the only solution see is to add a new boolean field : huge_table.view1and change predicat to \"WHERE view1=1 \"But I may have 800 views.. adding 800 new fields indexed to the huge table is actually not a good idea. 
Too badAny idea to solve that partial view limitation?Nicolas PARIS\n2015-02-20 17:19 GMT+01:00 Nicolas Paris <[email protected]>:It appears that in the predicate close (WHERE id IN (foo)), foo cannot depend on other table (join or other). It must be a list. I anderstand why (this must be static).I can build a string value, but in some case, I will have a milion key list.Postgresql do not have limitation in query size, and IN(...) keys number.But creating a partial index, with a query of bilion character length is not an issue ? It looks like a little dirty, not ?Thanks for all Nicolas PARIS\n2015-02-20 15:44 GMT+01:00 Matheus de Oliveira <[email protected]>:On Fri, Feb 20, 2015 at 11:06 AM, Nicolas Paris <[email protected]> wrote:Thanks,I like the idea of partial indexes mixed with simple ViewsSo question :huge_table{id,field}CREATE INDEX idx_huge_table ON huge_table(id)CREATE INDEX idx_huge_table_for_view1 ON huge_table(id) WHERE id IN (1,2,3)CREATE VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2,3)Do the following query uses idx_huge_table_for_view1 ?SELECT * FROM view1 WHERE field LIKE 'brillant idea'In other words, do all queries on view1 will use the partial index (and never the idx_hute_table ) ?You can try that pretty easily: postgres=# CREATE TEMP TABLE huge_table(id int, field text); CREATE TABLE postgres=# CREATE INDEX huge_table_id_idx ON huge_table(id); CREATE INDEX postgres=# CREATE INDEX huge_table_id_partial_idx ON huge_table(id) WHERE id IN (1,2,3); CREATE INDEX postgres=# CREATE TEMP VIEW view1 AS SELECT * FROM huge_table WHERE id IN (1,2); CREATE VIEW postgres=# SET enable_seqscan TO off; SET postgres=# SET enable_bitmapscan To off; SET postgres=# EXPLAIN SELECT * FROM view1 WHERE field LIKE 'foo%'; QUERY PLAN ---------------------------------------------------------------------------------------------- Index Scan using huge_table_id_partial_idx on huge_table (cost=0.12..36.41 rows=1 width=36) Index Cond: (id = ANY ('{1,2}'::integer[])) Filter: (field ~~ 'foo%'::text) (3 rows)I expect that to happen always, unless you have another index that matches better the filter from outside the view.Regards,-- Matheus de OliveiraAnalista de Banco de DadosDextra Sistemas - MPS.Br nível F!www.dextra.com.br/postgres",
"msg_date": "Fri, 20 Feb 2015 19:09:39 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "On 2/20/15 12:09 PM, Nicolas Paris wrote:\n> Well it seems that max query size for CREATE INDEX is 8160 character in\n> my 9.3 postgresql version.\n> Then the only solution see is to add a new boolean field : huge_table.view1\n> and change predicat to \"WHERE view1=1 \"\n> But I may have 800 views.. adding 800 new fields indexed to the huge\n> table is actually not a good idea. Too bad\n>\n> Any idea to solve that partial view limitation?\n\nIf you have that many different views I doubt you want that many indexes \nanyway.\n\nHave you tried just hitting the base table and indexes directly, either \nthrough plain views or just direct SQL?\n\nAlso, how frequently does data change in the huge table? This sounds \nlike a case where the visibility map could make a huge difference.\n\nBy the way, if all the Mat Views are in one schema that's already in the \nsearch path, a very easy way to test this would be to create an \nequivalent set of regular views in a different schema (which you can \nprobably do programmatically via pg_get_viewdef()) and then change the \nsearch_path to put the new schema before the old.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Thu, 5 Mar 2015 19:40:51 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared\n memory"
},
{
"msg_contents": ">\n> If you have that many different views I doubt you want that many indexes\n> anyway.\n\nIt's a datawarehouse, then each view is used by many user for each query.\nThose views must be subset of the huge material table. All indexes are\nneeded\n\n\n\n> Have you tried just hitting the base table and indexes directly, either\n> through plain views or just direct SQL?\n\nI have tried each. The performances are worst querying on a subset (the\nviews) than querying on whole huge table when using the huge indexes\n\n\n=> this is the solution I am implementing. (800 is not true, but in 10\nyears it maybe will be)\nActually, I have added a boolean column on the huge table for each views.\nThis is the way each view is a subset of huge table (Create View as Select\n * FROM hugeTable WHERE columnX is true --etc 800 times). Then I create\n800partials indexes on that column(create index...WHERE columnX is TRUE),\nfor each view.\nThis works great as the query planer chooses the partials indexes when\nquerying the little subset of the terrific table (potential 20bilion rows)\n\nThis is better than material views for some reasons :\n- saves places on hard drive (columnX is boolean +same indexes - data for\nMatViews)\n- saves time generating materialised views\n\nThis is quite more complicated because in the project, the number of view\nis increasing, and dynamic then :\n- then adding new mat views is simple\n- adding new views => adding new column on the huge table. It can take long\ntime to update boolean for each tuple. Then I need to truncate/bulk load\nall data each time I add a new View. Other problem is dynamic number column\ntable was a bit tricky to implement in an ETL soft such Talend, but the\nbenefits are I hope great.\n\n\nNicolas PARIS\n\n2015-03-06 2:40 GMT+01:00 Jim Nasby <[email protected]>:\n\n> On 2/20/15 12:09 PM, Nicolas Paris wrote:\n>\n>> Well it seems that max query size for CREATE INDEX is 8160 character in\n>> my 9.3 postgresql version.\n>> Then the only solution see is to add a new boolean field :\n>> huge_table.view1\n>> and change predicat to \"WHERE view1=1 \"\n>> But I may have 800 views.. adding 800 new fields indexed to the huge\n>> table is actually not a good idea. Too bad\n>>\n>> Any idea to solve that partial view limitation?\n>>\n>\n> If you have that many different views I doubt you want that many indexes\n> anyway.\n>\n> Have you tried just hitting the base table and indexes directly, either\n> through plain views or just direct SQL?\n>\n> Also, how frequently does data change in the huge table? This sounds like\n> a case where the visibility map could make a huge difference.\n>\n> By the way, if all the Mat Views are in one schema that's already in the\n> search path, a very easy way to test this would be to create an equivalent\n> set of regular views in a different schema (which you can probably do\n> programmatically via pg_get_viewdef()) and then change the search_path to\n> put the new schema before the old.\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nIf you have that many different views I doubt you want that many indexes anyway.It's a datawarehouse, then each view is used by many user for each query.Those views must be subset of the huge material table. All indexes are needed Have you tried just hitting the base table and indexes directly, either through plain views or just direct SQL?I have tried each. 
\n\nThis is better than materialized views for a few reasons:\n- it saves space on the hard drive (a boolean column + the same indexes,\nversus all the data of the mat views)\n- it saves the time spent generating the materialized views\n\nThis is a bit more complicated because in this project the number of views\nis increasing, and dynamic:\n- adding a new mat view is simple\n- adding a new view => adding a new column on the huge table. It can take a\nlong time to update the boolean for each tuple, so I need to truncate/bulk\nload all the data each time I add a new view. Another problem is that a\ntable with a dynamic number of columns was a bit tricky to implement in an\nETL tool such as Talend, but the benefits are, I hope, great.\n\n\nNicolas PARIS",
"msg_date": "Fri, 6 Mar 2015 09:16:59 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "On 3/6/15 2:16 AM, Nicolas Paris wrote:\n> If you have that many different views I doubt you want that many\n> indexes anyway.\n>\n> It's a datawarehouse, then each view is used by many user for each query.\n> Those views must be subset of the huge material table. All indexes are\n> needed\n\nYes, but they don't have to be partial.\n\n> Have you tried just hitting the base table and indexes directly,\n> either through plain views or just direct SQL?\n>\n> I have tried each. The performances are worst querying on a subset\n> (the views) than querying on whole huge table when using the huge indexes\n\nYou mean the materialized views, right? If so, that makes sense: Instead \nof having all your users hitting one common set of data (your fact \ntable) you had them hitting a bunch of other data (the mat views). But \nyou still had other stuff hitting the fact table. So now you were \ndealing with a lot more data than if you just stuck to the single fact \ntable.\n\n> => this is the solution I am implementing. (800 is not true, but in 10\n> years it maybe will be)\n\nIn 10 years we'll all be using quantum computers anyway... ;P\n\n> Actually, I have added a boolean column on the huge table for each\n> views. This is the way each view is a subset of huge table (Create View\n> as Select * FROM hugeTable WHERE columnX is true --etc 800 times). Then\n> I create 800partials indexes on that column(create index...WHERE columnX\n> is TRUE), for each view.\n> This works great as the query planer chooses the partials indexes when\n> querying the little subset of the terrific table (potential 20bilion rows)\n>\n> This is better than material views for some reasons :\n> - saves places on hard drive (columnX is boolean +same indexes - data\n> for MatViews)\n> - saves time generating materialised views\n\nBut this isn't better than the mat views because of a bunch of booleans; \nit's better because it means less stain on the disk cache.\n\n> This is quite more complicated because in the project, the number of\n> view is increasing, and dynamic then :\n> - then adding new mat views is simple\n> - adding new views => adding new column on the huge table. It can take\n> long time to update boolean for each tuple. Then I need to truncate/bulk\n> load all data each time I add a new View. Other problem is dynamic\n> number column table was a bit tricky to implement in an ETL soft such\n> Talend, but the benefits are I hope great.\n\nI think you'll ultimately be unhappy trying to go down this route, for \nthe reasons you mention, plus the very large amount of extra space \nyou'll be using. 800 booleans is 800 extra bytes for every row in your \nfact table. That's a lot. Even if you used a bitmap instead (which means \nyou have to mess around with tracking which bit means what and probably \nother problems as well) you're still looking at 100 bytes per row. \nThat's nothing to sneeze at.\n\nMy suggestion is to test using nothing but plain views and plain indexes \non the base table. I expect that some of those views will not perform \nadequately, but many (or most) of them will be fine. For the views that \nare too slow, look at what the expensive part of the view and \nmaterialize *only that*. 
 I suspect you'll find that when you do that\nyou'll discover that several views are slow because of the same thing, so\nif you materialize that one thing one time you can then use it to speed up\nseveral views.\n\nUsing that approach means you'll have a lot less data to read.
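\n\nFor example (just a sketch -- the shared expensive part here is an invented\naggregation over the toy huge_table(id, field) from earlier in the thread):\n\n    -- materialize only the shared expensive part, once\n    CREATE MATERIALIZED VIEW shared_agg AS\n      SELECT id, count(*) AS n\n      FROM huge_table GROUP BY id;\n    CREATE INDEX ON shared_agg (id);\n\n    -- several cheap plain views can then reuse it\n    CREATE VIEW view1 AS\n      SELECT h.*, s.n\n      FROM huge_table h JOIN shared_agg s USING (id)\n      WHERE h.id IN (1,2,3);\n\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",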
"msg_date": "Fri, 6 Mar 2015 03:32:04 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared\n memory"
},
{
"msg_contents": "Thanks Jim,\n\nMy suggestion is to test using nothing but plain views and plain indexes on\n> the base table\n\nActualy the way I choose subset rows for views is complicated in terms of\nquery. Then using simple views without partial indexes is terrible in terms\nof performance (I have tested that).\n\nYou mean the materialized views, right?\n\nWell I have tested matviews, views without partial indexes, views with\nhashjoin on a key, ..\n\n\n\n> I think you'll ultimately be unhappy trying to go down this route, for the\n> reasons you mention, plus the very large amount of extra space you'll be\n> using. 800 booleans is 800 extra bytes for every row in your fact table.\n> That's a lot. Even if you used a bitmap instead (which means you have to\n> mess around with tracking which bit means what and probably other problems\n> as well) you're still looking at 100 bytes per row. That's nothing to\n> sneeze at.\n\n\nSince each subset is about 5% (this number is decreasing when number of\nviews increase) of the fact table, most boolean rows are null. This means\n5% of 800 extra bytes, right ? I have choosen smallint, because of bulk\nload decrease csv size (true VS 1).\n\nFor the views that are too slow, look at what the expensive part of the\n> view and materialize *only that*.\n\nIt would be great if I could, but all will be automatic, then It will be\ndifficult to apply such rules that demands human analyse, and manual\ndatabase modification, for one subset\n\n\nHope I have well anderstand you\n\n\n\nNicolas PARIS\n\n2015-03-06 10:32 GMT+01:00 Jim Nasby <[email protected]>:\n\n> On 3/6/15 2:16 AM, Nicolas Paris wrote:\n>\n>> If you have that many different views I doubt you want that many\n>> indexes anyway.\n>>\n>> It's a datawarehouse, then each view is used by many user for each query.\n>> Those views must be subset of the huge material table. All indexes are\n>> needed\n>>\n>\n> Yes, but they don't have to be partial.\n>\n> Have you tried just hitting the base table and indexes directly,\n>> either through plain views or just direct SQL?\n>>\n>> I have tried each. The performances are worst querying on a subset\n>> (the views) than querying on whole huge table when using the huge indexes\n>>\n>\n> You mean the materialized views, right? If so, that makes sense: Instead\n> of having all your users hitting one common set of data (your fact table)\n> you had them hitting a bunch of other data (the mat views). But you still\n> had other stuff hitting the fact table. So now you were dealing with a lot\n> more data than if you just stuck to the single fact table.\n>\n> => this is the solution I am implementing. (800 is not true, but in 10\n>> years it maybe will be)\n>>\n>\n> In 10 years we'll all be using quantum computers anyway... ;P\n>\n> Actually, I have added a boolean column on the huge table for each\n>> views. This is the way each view is a subset of huge table (Create View\n>> as Select * FROM hugeTable WHERE columnX is true --etc 800 times). 
Then\n>> I create 800partials indexes on that column(create index...WHERE columnX\n>> is TRUE), for each view.\n>> This works great as the query planer chooses the partials indexes when\n>> querying the little subset of the terrific table (potential 20bilion rows)\n>>\n>> This is better than material views for some reasons :\n>> - saves places on hard drive (columnX is boolean +same indexes - data\n>> for MatViews)\n>> - saves time generating materialised views\n>>\n>\n> But this isn't better than the mat views because of a bunch of booleans;\n> it's better because it means less stain on the disk cache.\n>\n> This is quite more complicated because in the project, the number of\n>> view is increasing, and dynamic then :\n>> - then adding new mat views is simple\n>> - adding new views => adding new column on the huge table. It can take\n>> long time to update boolean for each tuple. Then I need to truncate/bulk\n>> load all data each time I add a new View. Other problem is dynamic\n>> number column table was a bit tricky to implement in an ETL soft such\n>> Talend, but the benefits are I hope great.\n>>\n>\n> I think you'll ultimately be unhappy trying to go down this route, for the\n> reasons you mention, plus the very large amount of extra space you'll be\n> using. 800 booleans is 800 extra bytes for every row in your fact table.\n> That's a lot. Even if you used a bitmap instead (which means you have to\n> mess around with tracking which bit means what and probably other problems\n> as well) you're still looking at 100 bytes per row. That's nothing to\n> sneeze at.\n>\n> My suggestion is to test using nothing but plain views and plain indexes\n> on the base table. I expect that some of those views will not perform\n> adequately, but many (or most) of them will be fine. For the views that are\n> too slow, look at what the expensive part of the view and materialize *only\n> that*. I suspect you'll find that when you do that you'll discover that\n> several views are slow because of the same thing, so if you materialize\n> that one thing one time you can then use it to speed up several views.\n>\n> Using that approach means you'll have a lot less data that you have to\n> read.\n>\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nThanks Jim,My suggestion is to test using nothing but plain views and plain indexes on the base tableActualy the way I choose subset rows for views is complicated in terms of query. Then using simple views without partial indexes is terrible in terms of performance (I have tested that).You mean the materialized views, right?Well I have tested matviews, views without partial indexes, views with hashjoin on a key, .. I think you'll ultimately be unhappy trying to go down this route, for the reasons you mention, plus the very large amount of extra space you'll be using. 800 booleans is 800 extra bytes for every row in your fact table. That's a lot. Even if you used a bitmap instead (which means you have to mess around with tracking which bit means what and probably other problems as well) you're still looking at 100 bytes per row. That's nothing to sneeze at.Since each subset is about 5% (this number is decreasing when number of views increase) of the fact table, most boolean rows are null. This means 5% of 800 extra bytes, right ? 
I have choosen smallint, because of bulk load decrease csv size (true VS 1).For the views that are too slow, look at what the expensive part of the view and materialize *only that*.It would be great if I could, but all will be automatic, then It will be difficult to apply such rules that demands human analyse, and manual database modification, for one subsetHope I have well anderstand you Nicolas PARIS\n2015-03-06 10:32 GMT+01:00 Jim Nasby <[email protected]>:On 3/6/15 2:16 AM, Nicolas Paris wrote:\n\n If you have that many different views I doubt you want that many\n indexes anyway.\n\nIt's a datawarehouse, then each view is used by many user for each query.\nThose views must be subset of the huge material table. All indexes are\nneeded\n\n\nYes, but they don't have to be partial.\n\n\n Have you tried just hitting the base table and indexes directly,\n either through plain views or just direct SQL?\n\n I have tried each. The performances are worst querying on a subset\n(the views) than querying on whole huge table when using the huge indexes\n\n\nYou mean the materialized views, right? If so, that makes sense: Instead of having all your users hitting one common set of data (your fact table) you had them hitting a bunch of other data (the mat views). But you still had other stuff hitting the fact table. So now you were dealing with a lot more data than if you just stuck to the single fact table.\n\n\n=> this is the solution I am implementing. (800 is not true, but in 10\nyears it maybe will be)\n\n\nIn 10 years we'll all be using quantum computers anyway... ;P\n\n\n Actually, I have added a boolean column on the huge table for each\nviews. This is the way each view is a subset of huge table (Create View\nas Select * FROM hugeTable WHERE columnX is true --etc 800 times). Then\nI create 800partials indexes on that column(create index...WHERE columnX\nis TRUE), for each view.\nThis works great as the query planer chooses the partials indexes when\nquerying the little subset of the terrific table (potential 20bilion rows)\n\nThis is better than material views for some reasons :\n- saves places on hard drive (columnX is boolean +same indexes - data\nfor MatViews)\n- saves time generating materialised views\n\n\nBut this isn't better than the mat views because of a bunch of booleans; it's better because it means less stain on the disk cache.\n\n\nThis is quite more complicated because in the project, the number of\nview is increasing, and dynamic then :\n- then adding new mat views is simple\n- adding new views => adding new column on the huge table. It can take\nlong time to update boolean for each tuple. Then I need to truncate/bulk\nload all data each time I add a new View. Other problem is dynamic\nnumber column table was a bit tricky to implement in an ETL soft such\nTalend, but the benefits are I hope great.\n\n\nI think you'll ultimately be unhappy trying to go down this route, for the reasons you mention, plus the very large amount of extra space you'll be using. 800 booleans is 800 extra bytes for every row in your fact table. That's a lot. Even if you used a bitmap instead (which means you have to mess around with tracking which bit means what and probably other problems as well) you're still looking at 100 bytes per row. That's nothing to sneeze at.\n\nMy suggestion is to test using nothing but plain views and plain indexes on the base table. I expect that some of those views will not perform adequately, but many (or most) of them will be fine. 
For the views that are too slow, look at what the expensive part of the view and materialize *only that*. I suspect you'll find that when you do that you'll discover that several views are slow because of the same thing, so if you materialize that one thing one time you can then use it to speed up several views.\n\nUsing that approach means you'll have a lot less data that you have to read.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",
"msg_date": "Fri, 6 Mar 2015 11:25:06 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "According to this link\nhttp://postgresql.nabble.com/NULL-saves-disk-space-td4344106.html\nNULL values do not take place if only one other column are null for that\nrow.\nBoolean takes 1 byte wheras smallint 2bytes.\nThen the space problem is not anymore a problem with boolean columns 95%\nempty\n\nOne thing that is really great with postgresql is transaction for drop\ntable cascade, that allow te restore all stuf index, views on a rollback if\nproblem in loading appears.\nI hope using one transaction to drop/load many table is not a performance\nissue ?\n\nNicolas PARIS\n\n2015-03-06 11:25 GMT+01:00 Nicolas Paris <[email protected]>:\n\n> Thanks Jim,\n>\n> My suggestion is to test using nothing but plain views and plain indexes\n>> on the base table\n>\n> Actualy the way I choose subset rows for views is complicated in terms of\n> query. Then using simple views without partial indexes is terrible in terms\n> of performance (I have tested that).\n>\n> You mean the materialized views, right?\n>\n> Well I have tested matviews, views without partial indexes, views with\n> hashjoin on a key, ..\n> \n>\n>\n>> I think you'll ultimately be unhappy trying to go down this route, for\n>> the reasons you mention, plus the very large amount of extra space you'll\n>> be using. 800 booleans is 800 extra bytes for every row in your fact table.\n>> That's a lot. Even if you used a bitmap instead (which means you have to\n>> mess around with tracking which bit means what and probably other problems\n>> as well) you're still looking at 100 bytes per row. That's nothing to\n>> sneeze at.\n>\n>\n> Since each subset is about 5% (this number is decreasing when number of\n> views increase) of the fact table, most boolean rows are null. This means\n> 5% of 800 extra bytes, right ? I have choosen smallint, because of bulk\n> load decrease csv size (true VS 1).\n>\n> For the views that are too slow, look at what the expensive part of the\n>> view and materialize *only that*.\n>\n> It would be great if I could, but all will be automatic, then It will be\n> difficult to apply such rules that demands human analyse, and manual\n> database modification, for one subset\n>\n>\n> Hope I have well anderstand you\n> \n>\n>\n> Nicolas PARIS\n>\n> 2015-03-06 10:32 GMT+01:00 Jim Nasby <[email protected]>:\n>\n>> On 3/6/15 2:16 AM, Nicolas Paris wrote:\n>>\n>>> If you have that many different views I doubt you want that many\n>>> indexes anyway.\n>>>\n>>> It's a datawarehouse, then each view is used by many user for each\n>>> query.\n>>> Those views must be subset of the huge material table. All indexes are\n>>> needed\n>>>\n>>\n>> Yes, but they don't have to be partial.\n>>\n>> Have you tried just hitting the base table and indexes directly,\n>>> either through plain views or just direct SQL?\n>>>\n>>> I have tried each. The performances are worst querying on a subset\n>>> (the views) than querying on whole huge table when using the huge indexes\n>>>\n>>\n>> You mean the materialized views, right? If so, that makes sense: Instead\n>> of having all your users hitting one common set of data (your fact table)\n>> you had them hitting a bunch of other data (the mat views). But you still\n>> had other stuff hitting the fact table. So now you were dealing with a lot\n>> more data than if you just stuck to the single fact table.\n>>\n>> => this is the solution I am implementing. (800 is not true, but in 10\n>>> years it maybe will be)\n>>>\n>>\n>> In 10 years we'll all be using quantum computers anyway... 
\n\nNicolas PARIS",
"msg_date": "Fri, 6 Mar 2015 21:26:27 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
{
"msg_contents": "(sorry for top-posting, gmail does not help.)\n\nThanks to your advice Jim, I have done an other test :\nNo partial indexes, just a partial index on boolean columns does the job.\n (I get same perfs as MV)\nCREATE INDEX ..ON (BoolColumnX) WHERE BoolColumnX IS TRUE\n\nThen VIEW =\nSELECT colA....colZ\nFROM huge_table\nWHERE BoolColumnX IS TRUE\n\nThen this only index is used 800times (for each bool col) and saves place\nas it does'nt indexes NULL values, and does no replicate. subsets. Moreover\nthe huge indexes are allways loaded in cache memory.\n\n\n\nNicolas PARIS\n\n2015-03-06 21:26 GMT+01:00 Nicolas Paris <[email protected]>:\n\n> According to this link\n> http://postgresql.nabble.com/NULL-saves-disk-space-td4344106.html\n> NULL values do not take place if only one other column are null for that\n> row.\n> Boolean takes 1 byte wheras smallint 2bytes.\n> Then the space problem is not anymore a problem with boolean columns 95%\n> empty\n>\n> One thing that is really great with postgresql is transaction for drop\n> table cascade, that allow te restore all stuf index, views on a rollback if\n> problem in loading appears.\n> I hope using one transaction to drop/load many table is not a performance\n> issue ?\n>\n> Nicolas PARIS\n>\n> 2015-03-06 11:25 GMT+01:00 Nicolas Paris <[email protected]>:\n>\n>> Thanks Jim,\n>>\n>> My suggestion is to test using nothing but plain views and plain indexes\n>>> on the base table\n>>\n>> Actualy the way I choose subset rows for views is complicated in terms of\n>> query. Then using simple views without partial indexes is terrible in terms\n>> of performance (I have tested that).\n>>\n>> You mean the materialized views, right?\n>>\n>> Well I have tested matviews, views without partial indexes, views with\n>> hashjoin on a key, ..\n>> \n>>\n>>\n>>> I think you'll ultimately be unhappy trying to go down this route, for\n>>> the reasons you mention, plus the very large amount of extra space you'll\n>>> be using. 800 booleans is 800 extra bytes for every row in your fact table.\n>>> That's a lot. Even if you used a bitmap instead (which means you have to\n>>> mess around with tracking which bit means what and probably other problems\n>>> as well) you're still looking at 100 bytes per row. That's nothing to\n>>> sneeze at.\n>>\n>>\n>> Since each subset is about 5% (this number is decreasing when number of\n>> views increase) of the fact table, most boolean rows are null. This means\n>> 5% of 800 extra bytes, right ? I have choosen smallint, because of bulk\n>> load decrease csv size (true VS 1).\n>>\n>> For the views that are too slow, look at what the expensive part of the\n>>> view and materialize *only that*.\n>>\n>> It would be great if I could, but all will be automatic, then It will be\n>> difficult to apply such rules that demands human analyse, and manual\n>> database modification, for one subset\n>>\n>>\n>> Hope I have well anderstand you\n>> \n>>\n>>\n>> Nicolas PARIS\n>>\n>> 2015-03-06 10:32 GMT+01:00 Jim Nasby <[email protected]>:\n>>\n>>> On 3/6/15 2:16 AM, Nicolas Paris wrote:\n>>>\n>>>> If you have that many different views I doubt you want that many\n>>>> indexes anyway.\n>>>>\n>>>> It's a datawarehouse, then each view is used by many user for each\n>>>> query.\n>>>> Those views must be subset of the huge material table. 
All indexes are\n>>>> needed\n>>>>\n>>>\n>>> Yes, but they don't have to be partial.\n>>>\n>>> Have you tried just hitting the base table and indexes directly,\n>>>> either through plain views or just direct SQL?\n>>>>\n>>>> I have tried each. The performances are worst querying on a subset\n>>>> (the views) than querying on whole huge table when using the huge\n>>>> indexes\n>>>>\n>>>\n>>> You mean the materialized views, right? If so, that makes sense: Instead\n>>> of having all your users hitting one common set of data (your fact table)\n>>> you had them hitting a bunch of other data (the mat views). But you still\n>>> had other stuff hitting the fact table. So now you were dealing with a lot\n>>> more data than if you just stuck to the single fact table.\n>>>\n>>> => this is the solution I am implementing. (800 is not true, but in 10\n>>>> years it maybe will be)\n>>>>\n>>>\n>>> In 10 years we'll all be using quantum computers anyway... ;P\n>>>\n>>> Actually, I have added a boolean column on the huge table for each\n>>>> views. This is the way each view is a subset of huge table (Create View\n>>>> as Select * FROM hugeTable WHERE columnX is true --etc 800 times). Then\n>>>> I create 800partials indexes on that column(create index...WHERE columnX\n>>>> is TRUE), for each view.\n>>>> This works great as the query planer chooses the partials indexes when\n>>>> querying the little subset of the terrific table (potential 20bilion\n>>>> rows)\n>>>>\n>>>> This is better than material views for some reasons :\n>>>> - saves places on hard drive (columnX is boolean +same indexes - data\n>>>> for MatViews)\n>>>> - saves time generating materialised views\n>>>>\n>>>\n>>> But this isn't better than the mat views because of a bunch of booleans;\n>>> it's better because it means less stain on the disk cache.\n>>>\n>>> This is quite more complicated because in the project, the number of\n>>>> view is increasing, and dynamic then :\n>>>> - then adding new mat views is simple\n>>>> - adding new views => adding new column on the huge table. It can take\n>>>> long time to update boolean for each tuple. Then I need to truncate/bulk\n>>>> load all data each time I add a new View. Other problem is dynamic\n>>>> number column table was a bit tricky to implement in an ETL soft such\n>>>> Talend, but the benefits are I hope great.\n>>>>\n>>>\n>>> I think you'll ultimately be unhappy trying to go down this route, for\n>>> the reasons you mention, plus the very large amount of extra space you'll\n>>> be using. 800 booleans is 800 extra bytes for every row in your fact table.\n>>> That's a lot. Even if you used a bitmap instead (which means you have to\n>>> mess around with tracking which bit means what and probably other problems\n>>> as well) you're still looking at 100 bytes per row. That's nothing to\n>>> sneeze at.\n>>>\n>>> My suggestion is to test using nothing but plain views and plain indexes\n>>> on the base table. I expect that some of those views will not perform\n>>> adequately, but many (or most) of them will be fine. For the views that are\n>>> too slow, look at what the expensive part of the view and materialize *only\n>>> that*. 
I suspect you'll find that when you do that you'll discover that\n>>> several views are slow because of the same thing, so if you materialize\n>>> that one thing one time you can then use it to speed up several views.\n>>>\n>>> Using that approach means you'll have a lot less data that you have to\n>>> read.\n>>>\n>>> --\n>>> Jim Nasby, Data Architect, Blue Treble Consulting\n>>> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>>>\n>>\n>>\n>\n\n(sorry for top-posting, gmail does not help.)Thanks to your advice Jim, I have done an other test :No partial indexes, just a partial index on boolean columns does the job. (I get same perfs as MV)CREATE INDEX ..ON (BoolColumnX) WHERE BoolColumnX IS TRUEThen VIEW = SELECT colA....colZ FROM huge_tableWHERE BoolColumnX IS TRUEThen this only index is used 800times (for each bool col) and saves place as it does'nt indexes NULL values, and does no replicate. subsets. Moreover the huge indexes are allways loaded in cache memory.Nicolas PARIS\n2015-03-06 21:26 GMT+01:00 Nicolas Paris <[email protected]>:According to this link http://postgresql.nabble.com/NULL-saves-disk-space-td4344106.htmlNULL values do not take place if only one other column are null for that row.Boolean takes 1 byte wheras smallint 2bytes.Then the space problem is not anymore a problem with boolean columns 95% emptyOne thing that is really great with postgresql is transaction for drop table cascade, that allow te restore all stuf index, views on a rollback if problem in loading appears.I hope using one transaction to drop/load many table is not a performance issue ?Nicolas PARIS\n2015-03-06 11:25 GMT+01:00 Nicolas Paris <[email protected]>:Thanks Jim,My suggestion is to test using nothing but plain views and plain indexes on the base tableActualy the way I choose subset rows for views is complicated in terms of query. Then using simple views without partial indexes is terrible in terms of performance (I have tested that).You mean the materialized views, right?Well I have tested matviews, views without partial indexes, views with hashjoin on a key, .. I think you'll ultimately be unhappy trying to go down this route, for the reasons you mention, plus the very large amount of extra space you'll be using. 800 booleans is 800 extra bytes for every row in your fact table. That's a lot. Even if you used a bitmap instead (which means you have to mess around with tracking which bit means what and probably other problems as well) you're still looking at 100 bytes per row. That's nothing to sneeze at.Since each subset is about 5% (this number is decreasing when number of views increase) of the fact table, most boolean rows are null. This means 5% of 800 extra bytes, right ? I have choosen smallint, because of bulk load decrease csv size (true VS 1).For the views that are too slow, look at what the expensive part of the view and materialize *only that*.It would be great if I could, but all will be automatic, then It will be difficult to apply such rules that demands human analyse, and manual database modification, for one subsetHope I have well anderstand you Nicolas PARIS\n2015-03-06 10:32 GMT+01:00 Jim Nasby <[email protected]>:On 3/6/15 2:16 AM, Nicolas Paris wrote:\n\n If you have that many different views I doubt you want that many\n indexes anyway.\n\nIt's a datawarehouse, then each view is used by many user for each query.\nThose views must be subset of the huge material table. 
All indexes are\nneeded\n\n\nYes, but they don't have to be partial.\n\n\n Have you tried just hitting the base table and indexes directly,\n either through plain views or just direct SQL?\n\n I have tried each. The performances are worst querying on a subset\n(the views) than querying on whole huge table when using the huge indexes\n\n\nYou mean the materialized views, right? If so, that makes sense: Instead of having all your users hitting one common set of data (your fact table) you had them hitting a bunch of other data (the mat views). But you still had other stuff hitting the fact table. So now you were dealing with a lot more data than if you just stuck to the single fact table.\n\n\n=> this is the solution I am implementing. (800 is not true, but in 10\nyears it maybe will be)\n\n\nIn 10 years we'll all be using quantum computers anyway... ;P\n\n\n Actually, I have added a boolean column on the huge table for each\nviews. This is the way each view is a subset of huge table (Create View\nas Select * FROM hugeTable WHERE columnX is true --etc 800 times). Then\nI create 800partials indexes on that column(create index...WHERE columnX\nis TRUE), for each view.\nThis works great as the query planer chooses the partials indexes when\nquerying the little subset of the terrific table (potential 20bilion rows)\n\nThis is better than material views for some reasons :\n- saves places on hard drive (columnX is boolean +same indexes - data\nfor MatViews)\n- saves time generating materialised views\n\n\nBut this isn't better than the mat views because of a bunch of booleans; it's better because it means less stain on the disk cache.\n\n\nThis is quite more complicated because in the project, the number of\nview is increasing, and dynamic then :\n- then adding new mat views is simple\n- adding new views => adding new column on the huge table. It can take\nlong time to update boolean for each tuple. Then I need to truncate/bulk\nload all data each time I add a new View. Other problem is dynamic\nnumber column table was a bit tricky to implement in an ETL soft such\nTalend, but the benefits are I hope great.\n\n\nI think you'll ultimately be unhappy trying to go down this route, for the reasons you mention, plus the very large amount of extra space you'll be using. 800 booleans is 800 extra bytes for every row in your fact table. That's a lot. Even if you used a bitmap instead (which means you have to mess around with tracking which bit means what and probably other problems as well) you're still looking at 100 bytes per row. That's nothing to sneeze at.\n\nMy suggestion is to test using nothing but plain views and plain indexes on the base table. I expect that some of those views will not perform adequately, but many (or most) of them will be fine. For the views that are too slow, look at what the expensive part of the view and materialize *only that*. I suspect you'll find that when you do that you'll discover that several views are slow because of the same thing, so if you materialize that one thing one time you can then use it to speed up several views.\n\nUsing that approach means you'll have a lot less data that you have to read.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",
"msg_date": "Mon, 9 Mar 2015 14:17:48 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
},
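As an aside, here is a minimal runnable sketch of the flag-column pattern the poster describes; all names (huge_table, subset_x, subset_x_view) are hypothetical stand-ins for the poster's schema:

-- One boolean flag per subset; NULL for rows outside the subset.
CREATE TABLE huge_table (
    id       bigint PRIMARY KEY,
    payload  text,
    subset_x boolean
);

-- Partial index: only TRUE rows are indexed, so the ~95% NULL rows cost nothing here.
CREATE INDEX huge_table_subset_x_idx ON huge_table (subset_x)
    WHERE subset_x IS TRUE;

-- Plain view over the subset; the planner can match its WHERE clause to the partial index.
CREATE VIEW subset_x_view AS
    SELECT id, payload
    FROM huge_table
    WHERE subset_x IS TRUE;

A query such as SELECT * FROM subset_x_view then only has to scan the small partial index instead of the whole table.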
{
"msg_contents": "On 3/9/15 8:17 AM, Nicolas Paris wrote:\n> (sorry for top-posting, gmail does not help.)\n\n*shakes fist at gmail*\n\n> Thanks to your advice Jim, I have done an other test :\n> No partial indexes, just a partial index on boolean columns does the\n> job. (I get same perfs as MV)\n> CREATE INDEX ..ON (BoolColumnX) WHERE BoolColumnX IS TRUE\n>\n> Then VIEW =\n> SELECT colA....colZ\n> FROM huge_table\n> WHERE BoolColumnX IS TRUE\n>\n> Then this only index is used 800times (for each bool col) and saves\n> place as it does'nt indexes NULL values, and does no replicate. subsets.\n> Moreover the huge indexes are allways loaded in cache memory.\n\nCool. :)\n\n> According to this link\n> http://postgresql.nabble.com/NULL-saves-disk-space-td4344106.html\n> NULL values do not take place if only one other column are null for\n> that row.\n> Boolean takes 1 byte wheras smallint 2bytes.\n> Then the space problem is not anymore a problem with boolean columns\n> 95% empty\n>\n> One thing that is really great with postgresql is transaction for\n> drop table cascade, that allow te restore all stuf index, views on a\n> rollback if problem in loading appears.\n> I hope using one transaction to drop/load many table is not a\n> performance issue ?\n\nWhy are you dropping and re-loading? You mentioned it before and it \nsounded like it had something to do with adding columns, but you don't \nhave to drop and reload to add a column.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Tue, 10 Mar 2015 03:31:29 -0500",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared\n memory"
},
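To illustrate Jim's point, a sketch under the same hypothetical schema as above: in PostgreSQL, adding a nullable column without a default is a catalog-only change, so no reload of the table is required (the membership condition in the UPDATE is a made-up placeholder):

-- Catalog-only: existing rows are not rewritten.
ALTER TABLE huge_table ADD COLUMN subset_y boolean;

-- Flag only the rows that belong to the new subset.
UPDATE huge_table SET subset_y = TRUE WHERE id % 20 = 0;

-- Matching partial index for the new view.
CREATE INDEX huge_table_subset_y_idx ON huge_table (subset_y)
    WHERE subset_y IS TRUE;

The UPDATE still rewrites the flagged rows, which is the cost Nicolas objects to below, but it touches only the ~5% of rows in the subset rather than the whole table.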
{
"msg_contents": ">\n> \n> Why are you dropping and re-loading? You mentioned it before and it\n> sounded like it had something to do with adding columns, but you\n\ndon't have to drop and reload to add a column.\n\n\nAdding a NULL column is fast. Dropping one too. I need to set some row as\nTRUE. I can do it with an update, but in postgresql update is done by\ndelete then insert with copy of the row. This is really slow. A drop\ncascade, then bulk load is better.\n\nThis is not the only reason. Drop & load simplify all the ETL process. No\nquestion of delta changes and no \"fuck brain\" when a problem occurs or a\nmodification of the table. I've tested, it loads 20milion rows in 5 min\n(without time for reindexing and time to retrieve datas)\n\n2015-03-10 9:31 GMT+01:00 Jim Nasby <[email protected]>:\n\n> On 3/9/15 8:17 AM, Nicolas Paris wrote:\n>\n>> (sorry for top-posting, gmail does not help.)\n>>\n>\n> *shakes fist at gmail*\n>\n> Thanks to your advice Jim, I have done an other test :\n>> No partial indexes, just a partial index on boolean columns does the\n>> job. (I get same perfs as MV)\n>> CREATE INDEX ..ON (BoolColumnX) WHERE BoolColumnX IS TRUE\n>>\n>> Then VIEW =\n>> SELECT colA....colZ\n>> FROM huge_table\n>> WHERE BoolColumnX IS TRUE\n>>\n>> Then this only index is used 800times (for each bool col) and saves\n>> place as it does'nt indexes NULL values, and does no replicate. subsets.\n>> Moreover the huge indexes are allways loaded in cache memory.\n>>\n>\n> Cool. :)\n>\n> According to this link\n>> http://postgresql.nabble.com/NULL-saves-disk-space-td4344106.html\n>> NULL values do not take place if only one other column are null for\n>> that row.\n>> Boolean takes 1 byte wheras smallint 2bytes.\n>> Then the space problem is not anymore a problem with boolean columns\n>> 95% empty\n>>\n>> One thing that is really great with postgresql is transaction for\n>> drop table cascade, that allow te restore all stuf index, views on a\n>> rollback if problem in loading appears.\n>> I hope using one transaction to drop/load many table is not a\n>> performance issue ?\n>>\n>\n> Why are you dropping and re-loading? You mentioned it before and it\n> sounded like it had something to do with adding columns, but you don't have\n> to drop and reload to add a column.\n>\n> --\n> Jim Nasby, Data Architect, Blue Treble Consulting\n> Data in Trouble? Get it in Treble! http://BlueTreble.com\n>\n\nWhy are you dropping and re-loading? You mentioned it before and it sounded like it had something to do with adding columns, but you don't have to drop and reload to add a column.Adding a NULL column is fast. Dropping one too. I need to set some row as TRUE. I can do it with an update, but in postgresql update is done by delete then insert with copy of the row. This is really slow. A drop cascade, then bulk load is better.This is not the only reason. Drop & load simplify all the ETL process. No question of delta changes and no \"fuck brain\" when a problem occurs or a modification of the table. I've tested, it loads 20milion rows in 5 min (without time for reindexing and time to retrieve datas)2015-03-10 9:31 GMT+01:00 Jim Nasby <[email protected]>:On 3/9/15 8:17 AM, Nicolas Paris wrote:\n\n(sorry for top-posting, gmail does not help.)\n\n\n*shakes fist at gmail*\n\n\nThanks to your advice Jim, I have done an other test :\nNo partial indexes, just a partial index on boolean columns does the\njob. 
(I get same perfs as MV)\nCREATE INDEX ..ON (BoolColumnX) WHERE BoolColumnX IS TRUE\n\nThen VIEW =\nSELECT colA....colZ\nFROM huge_table\nWHERE BoolColumnX IS TRUE\n\nThen this only index is used 800times (for each bool col) and saves\nplace as it does'nt indexes NULL values, and does no replicate. subsets.\nMoreover the huge indexes are allways loaded in cache memory.\n\n\nCool. :)\n\n\n According to this link\n http://postgresql.nabble.com/NULL-saves-disk-space-td4344106.html\n NULL values do not take place if only one other column are null for\n that row.\n Boolean takes 1 byte wheras smallint 2bytes.\n Then the space problem is not anymore a problem with boolean columns\n 95% empty\n\n One thing that is really great with postgresql is transaction for\n drop table cascade, that allow te restore all stuf index, views on a\n rollback if problem in loading appears.\n I hope using one transaction to drop/load many table is not a\n performance issue ?\n\n\nWhy are you dropping and re-loading? You mentioned it before and it sounded like it had something to do with adding columns, but you don't have to drop and reload to add a column.\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com",
"msg_date": "Tue, 10 Mar 2015 09:53:20 +0100",
"msg_from": "Nicolas Paris <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: PG 9.3 materialized view VS Views, indexes, shared memory"
}
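A sketch of the transactional drop-and-reload described in this thread, with hypothetical paths and names; because DDL is transactional in PostgreSQL, an error anywhere before COMMIT rolls back the DROP as well, leaving the old table, its indexes, and its dependent views intact:

BEGIN;
DROP TABLE IF EXISTS huge_table CASCADE;      -- also drops dependent views
CREATE TABLE huge_table (id bigint, payload text, subset_x boolean);
COPY huge_table FROM '/tmp/huge_table.csv' WITH (FORMAT csv);  -- bulk load
CREATE INDEX huge_table_subset_x_idx ON huge_table (subset_x)
    WHERE subset_x IS TRUE;
-- recreate the dependent views here ...
COMMIT;  -- any error before this point undoes everything, including the DROP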
] |
[
{
"msg_contents": "Hello,\nHas anybody used online backup feature of postgreSQL? In fact precise postgreSQL term is called: \"Point-in-time Recovery\" (PITR)\nThis means enabling following additional options in config:\n---\narchive_command = on\narchive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' # This is only example path\n---\n\nIf yes then may I know how it is used and how it impacts database performance?\n\n\nRegards:\nSaurabh\n\n\n\n\n\n\n\n\n\n\n\nHello,\nHas anybody used online backup feature of postgreSQL? In fact precise postgreSQL term is called: \"Point-in-time Recovery\" (PITR)\n\nThis means enabling following additional options in config: \n--- \narchive_command = on \narchive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' # This is only example path\n\n---\n \nIf yes then may I know how it is used and how it impacts database performance?\n \n \nRegards:\nSaurabh",
"msg_date": "Mon, 23 Feb 2015 11:38:42 +0000",
"msg_from": "Saurabh Gupta A <[email protected]>",
"msg_from_op": true,
"msg_subject": "Regarding \"Point-in-time Recovery\" feature"
},
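For concreteness, a minimal archiving setup along the lines of the snippet above (note the first parameter is archive_mode, not a second archive_command, and the directory is only an example). The 'test ! -f' guard follows the pattern shown in the PostgreSQL continuous-archiving documentation, so an already-archived segment is never overwritten:

# postgresql.conf (PostgreSQL 9.x)
wal_level = archive          # or hot_standby if the archive also feeds a standby
archive_mode = on            # changing this requires a server restart
archive_command = 'test ! -f /usr/local/pgsql/pgDataPITR/wals/%f && cp %p /usr/local/pgsql/pgDataPITR/wals/%f'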
{
"msg_contents": "Full explanation starts here:\nhttp://www.postgresql.org/docs/9.3/static/continuous-archiving.html.\n\nWhile there are some considerations to take in it does not relate to\nperformance. Generally most production systems run with archiving on. Take\na minute to read through the documentation and let us know if you have any\nquestions. Most questions about \"backup and recovery\" should be asked in\nthe admin mailing list.\n\nGood Luck,\njason\n\nOn Mon, Feb 23, 2015 at 5:38 AM, Saurabh Gupta A <\[email protected]> wrote:\n\n> Hello,\n>\n> Has anybody used online backup feature of postgreSQL? In fact precise\n> postgreSQL term is called: \"Point-in-time Recovery\" (PITR)\n> This means enabling following additional options in config:\n> ---\n> archive_command = on\n> archive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' # This is\n> only example path\n> ---\n>\n>\n>\n> If yes then may I know how it is used and how it impacts database\n> performance?\n>\n>\n>\n>\n>\n> Regards:\n>\n> Saurabh\n>\n>\n>\n>\n>\n\nFull explanation starts here: http://www.postgresql.org/docs/9.3/static/continuous-archiving.html.While there are some considerations to take in it does not relate to performance. Generally most production systems run with archiving on. Take a minute to read through the documentation and let us know if you have any questions. Most questions about \"backup and recovery\" should be asked in the admin mailing list.Good Luck,jasonOn Mon, Feb 23, 2015 at 5:38 AM, Saurabh Gupta A <[email protected]> wrote:\n\n\nHello,\nHas anybody used online backup feature of postgreSQL? In fact precise postgreSQL term is called: \"Point-in-time Recovery\" (PITR)\n\nThis means enabling following additional options in config: \n--- \narchive_command = on \narchive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' # This is only example path\n\n---\n \nIf yes then may I know how it is used and how it impacts database performance?\n \n \nRegards:\nSaurabh",
"msg_date": "Mon, 23 Feb 2015 10:03:35 -0600",
"msg_from": "\"Mathis, Jason\" <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Regarding \"Point-in-time Recovery\" feature"
},
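And for the restore side, a sketch of the matching 9.x recovery.conf, assuming the same example archive directory as above; PITR is the combination of a base backup plus replay of these archived WAL segments up to the chosen target:

# recovery.conf, placed in the restored data directory (PostgreSQL 9.x)
restore_command = 'cp /usr/local/pgsql/pgDataPITR/wals/%f %p'
recovery_target_time = '2015-02-23 11:00:00'   # example target; omit to replay to the end of WAL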
{
"msg_contents": "And there is an app to this: Barman --> http://www.pgbarman.org/\n\n\nOn Mon, 23 Feb 2015 10:03:35 -0600, Mathis, Jason wrote:\n> Full explanation starts\n> \n> here: http://www.postgresql.org/docs/9.3/static/continuous-archiving.html\n> [1].\n>\n> While there are some considerations to take in it does not relate to\n> performance. Generally most production systems run with archiving on.\n> Take a minute to read through the documentation and let us know if \n> you\n> have any questions. Most questions about \"backup and recovery\" should\n> be asked in the admin mailing list.\n>\n> Good Luck,\n>\n> jason\n>\n> On Mon, Feb 23, 2015 at 5:38 AM, Saurabh Gupta A wrote:\n>\n>> Hello,\n>>\n>> Has anybody used online backup feature of postgreSQL? In fact\n>> precise postgreSQL term is called: \"Point-in-time Recovery\" (PITR)\n>> This means enabling following additional options in config:\n>> ---\n>> archive_command = on\n>> archive_command = 'cp %p /usr/local/pgsql/pgDataPITR/wals/%f' #\n>> This is only example path\n>> ---\n>>\n>> \n>>\n>> If yes then may I know how it is used and how it impacts database\n>> performance?\n>>\n>> \n>>\n>> \n>>\n>> Regards:\n>>\n>> Saurabh\n>>\n>> \n>>\n>> \n>\n>\n>\n> Links:\n> ------\n> [1] \n> http://www.postgresql.org/docs/9.3/static/continuous-archiving.html\n> [2] mailto:[email protected]\n\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 23 Feb 2015 13:11:09 -0300",
"msg_from": "[email protected]",
"msg_from_op": false,
"msg_subject": "Re: Regarding \"Point-in-time Recovery\" feature"
}
] |
[
{
"msg_contents": "Hi all!\n\nMay someone help me with the issue in the apply process on the replica. \nWe have a stream replication and after vacuum stops working with a big \ntable we get a \"freeze\" in applying data on the replica database. It \nlooks like this:\n\nTue Feb 24 15:04:51 MSK 2015 Stream: MASTER-masterdb:79607136410456 \nSLAVE:79607136410456 Replay:79607136339456 :: REPLAY 69 KBytes \n(00:00:00.294485 seconds)\nTue Feb 24 15:04:52 MSK 2015 Stream: MASTER-masterdb:79607137892672 \nSLAVE:79607137715392 Replay:79607137715392 :: REPLAY 173 KBytes \n(00:00:00.142605 seconds)\nTue Feb 24 15:04:53 MSK 2015 Stream: MASTER-masterdb:79607139327776 \nSLAVE:79607139241816 Replay:79607139241816 :: REPLAY 84 KBytes \n(00:00:00.05223 seconds)\nTue Feb 24 15:04:54 MSK 2015 Stream: MASTER-masterdb:79607141134776 \nSLAVE:79607141073344 Replay:79607141080032 :: REPLAY 54 KBytes \n(00:00:00.010603 seconds)\nTue Feb 24 15:04:55 MSK 2015 Stream: MASTER-masterdb:79607143085176 \nSLAVE:79607143026440 Replay:79607143038040 :: REPLAY 46 KBytes \n(00:00:00.009506 seconds)\nTue Feb 24 15:04:56 MSK 2015 Stream: MASTER-masterdb:79607145111280 \nSLAVE:79607145021384 Replay:79607145025664 :: REPLAY 83 KBytes \n(00:00:00.006795 seconds)\nTue Feb 24 15:04:57 MSK 2015 Stream: MASTER-masterdb:79607146564424 \nSLAVE:79607146478336 Replay:79607146501264 :: REPLAY 61 KBytes \n(00:00:00.00701 seconds)\nTue Feb 24 15:04:58 MSK 2015 Stream: MASTER-masterdb:79607148160680 \nSLAVE:79607148108352 Replay:79607147369320 :: REPLAY 773 KBytes \n(00:00:00.449702 seconds)\nTue Feb 24 15:04:59 MSK 2015 Stream: MASTER-masterdb:79607150220688 \nSLAVE:79607150159632 Replay:79607150171312 :: REPLAY 48 KBytes \n(00:00:00.006594 seconds)\nTue Feb 24 15:05:00 MSK 2015 Stream: MASTER-masterdb:79607152365360 \nSLAVE:79607152262696 Replay:79607152285240 :: REPLAY 78 KBytes \n(00:00:00.007042 seconds)\nTue Feb 24 15:05:02 MSK 2015 Stream: MASTER-masterdb:79607154049848 \nSLAVE:79607154012624 Replay:79607153446800 :: REPLAY 589 KBytes \n(00:00:00.513637 seconds)\nTue Feb 24 15:05:03 MSK 2015 Stream: MASTER-masterdb:79607155229992 \nSLAVE:79607155187864 Replay:79607155188312 :: REPLAY 41 KBytes \n(00:00:00.004773 seconds)\nTue Feb 24 15:05:04 MSK 2015 Stream: MASTER-masterdb:79607156833968 \nSLAVE:79607156764128 Replay:79607156785488 :: REPLAY 47 KBytes \n(00:00:00.006846 seconds)\nTue Feb 24 15:05:05 MSK 2015 Stream: MASTER-masterdb:79607158419848 \nSLAVE:79607158344856 Replay:79607158396352 :: REPLAY 23 KBytes \n(00:00:00.005228 seconds)\nTue Feb 24 15:05:06 MSK 2015 Stream: MASTER-masterdb:79607160004776 \nSLAVE:79607159962400 Replay:79607159988888 :: REPLAY 16 KBytes \n(00:00:00.003162 seconds)\n*--here apply process just stops*\n\nTue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 \nSLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes \n(00:00:00.398376 seconds)\nTue Feb 24 15:05:08 MSK 2015 Stream: MASTER-masterdb:79607163272840 \nSLAVE:79607163231384 Replay:79607160986064 :: REPLAY 2233 KBytes \n(00:00:01.446759 seconds)\nTue Feb 24 15:05:09 MSK 2015 Stream: MASTER-masterdb:79607164958632 \nSLAVE:79607164904448 Replay:79607160986064 :: REPLAY 3879 KBytes \n(00:00:02.497181 seconds)\nTue Feb 24 15:05:10 MSK 2015 Stream: MASTER-masterdb:79607166819560 \nSLAVE:79607166777712 Replay:79607160986064 :: REPLAY 5697 KBytes \n(00:00:03.543107 seconds)\nTue Feb 24 15:05:11 MSK 2015 Stream: MASTER-masterdb:79607168595280 \nSLAVE:79607168566536 Replay:79607160986064 :: REPLAY 7431 KBytes \n(00:00:04.589736 
seconds)\nTue Feb 24 15:05:12 MSK 2015 Stream: MASTER-masterdb:79607170372064 \nSLAVE:79607170252480 Replay:79607160986064 :: REPLAY 9166 KBytes \n(00:00:05.635918 seconds)\nTue Feb 24 15:05:13 MSK 2015 Stream: MASTER-masterdb:79607171829480 \nSLAVE:79607171714144 Replay:79607160986064 :: REPLAY 10589 KBytes \n(00:00:06.688115 seconds)\nTue Feb 24 15:05:14 MSK 2015 Stream: MASTER-masterdb:79607173152488 \nSLAVE:79607173152488 Replay:79607160986064 :: REPLAY 11881 KBytes \n(00:00:07.736993 seconds)\nTue Feb 24 15:05:15 MSK 2015 Stream: MASTER-masterdb:79607174149968 \nSLAVE:79607174149968 Replay:79607160986064 :: REPLAY 12855 KBytes \n(00:00:08.78538 seconds)\nTue Feb 24 15:05:16 MSK 2015 Stream: MASTER-masterdb:79607176448344 \nSLAVE:79607176252088 Replay:79607160986064 :: REPLAY 15100 KBytes \n(00:00:09.835184 seconds)\nTue Feb 24 15:05:17 MSK 2015 Stream: MASTER-masterdb:79607177632216 \nSLAVE:79607177608224 Replay:79607160986064 :: REPLAY 16256 KBytes \n(00:00:10.926493 seconds)\nTue Feb 24 15:05:18 MSK 2015 Stream: MASTER-masterdb:79607179432960 \nSLAVE:79607179378096 Replay:79607160986064 :: REPLAY 18015 KBytes \n(00:00:11.97989 seconds)\nTue Feb 24 15:05:19 MSK 2015 Stream: MASTER-masterdb:79607180893384 \nSLAVE:79607180874256 Replay:79607160986064 :: REPLAY 19441 KBytes \n(00:00:13.028921 seconds)\nTue Feb 24 15:05:20 MSK 2015 Stream: MASTER-masterdb:79607182596224 \nSLAVE:79607182552272 Replay:79607160986064 :: REPLAY 21104 KBytes \n(00:00:14.079497 seconds)\nTue Feb 24 15:05:21 MSK 2015 Stream: MASTER-masterdb:79607183935312 \nSLAVE:79607183902592 Replay:79607160986064 :: REPLAY 22411 KBytes \n(00:00:15.127679 seconds)\nTue Feb 24 15:05:23 MSK 2015 Stream: MASTER-masterdb:79607185165880 \nSLAVE:79607185094032 Replay:79607160986064 :: REPLAY 23613 KBytes \n(00:00:16.175132 seconds)\nTue Feb 24 15:05:24 MSK 2015 Stream: MASTER-masterdb:79607187196920 \nSLAVE:79607187169368 Replay:79607160986064 :: REPLAY 25596 KBytes \n(00:00:17.221981 seconds)\nTue Feb 24 15:05:25 MSK 2015 Stream: MASTER-masterdb:79607188943856 \nSLAVE:79607188885952 Replay:79607160986064 :: REPLAY 27302 KBytes \n(00:00:18.274362 seconds)\nTue Feb 24 15:05:26 MSK 2015 Stream: MASTER-masterdb:79607190489400 \nSLAVE:79607190443160 Replay:79607160986064 :: REPLAY 28812 KBytes \n(00:00:19.319987 seconds)\nTue Feb 24 15:05:27 MSK 2015 Stream: MASTER-masterdb:79607192089312 \nSLAVE:79607192054048 Replay:79607160986064 :: REPLAY 30374 KBytes \n(00:00:20.372305 seconds)\nTue Feb 24 15:05:28 MSK 2015 Stream: MASTER-masterdb:79607193736800 \nSLAVE:79607193690056 Replay:79607160986064 :: REPLAY 31983 KBytes \n(00:00:21.421359 seconds)\nTue Feb 24 15:05:29 MSK 2015 Stream: MASTER-masterdb:79607195968648 \nSLAVE:79607195901296 Replay:79607160986064 :: REPLAY 34163 KBytes \n(00:00:22.471334 seconds)\nTue Feb 24 15:05:30 MSK 2015 Stream: MASTER-masterdb:79607197808840 \nSLAVE:79607197737720 Replay:79607160986064 :: REPLAY 35960 KBytes \n(00:00:23.52269 seconds)\nTue Feb 24 15:05:31 MSK 2015 Stream: MASTER-masterdb:79607199571144 \nSLAVE:79607199495976 Replay:79607160986064 :: REPLAY 37681 KBytes \n(00:00:24.577615 seconds)\nTue Feb 24 15:05:32 MSK 2015 Stream: MASTER-masterdb:79607201206104 \nSLAVE:79607201100392 Replay:79607160986064 :: REPLAY 39277 KBytes \n(00:00:25.624604 seconds)\nTue Feb 24 15:05:33 MSK 2015 Stream: MASTER-masterdb:79607203174208 \nSLAVE:79607203111136 Replay:79607160986064 :: REPLAY 41199 KBytes \n(00:00:26.67059 seconds)\nTue Feb 24 15:05:34 MSK 2015 Stream: MASTER-masterdb:79607204792888 
\nSLAVE:79607204741600 Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 seconds)\nTue Feb 24 15:05:35 MSK 2015 Stream: MASTER-masterdb:79607206453216 SLAVE:79607206409032 Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 seconds)\nTue Feb 24 15:05:36 MSK 2015 Stream: MASTER-masterdb:79607208225344 SLAVE:79607208142176 Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 seconds)\n\n\nperf shows the following functions at the top:\n+ 22.50% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n+ 8.48% postmaster postgres [.] hash_search_with_hash_value\n\n\nAfter 10 minutes or so the apply process continues to work:\n\nTue Feb 24 15:13:25 MSK 2015 Stream: MASTER-masterdb:79608758742560 SLAVE:79608758718008 Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653 seconds)\nTue Feb 24 15:13:26 MSK 2015 Stream: MASTER-masterdb:79608759203608 SLAVE:79608759189680 Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877 seconds)\nTue Feb 24 15:13:27 MSK 2015 Stream: MASTER-masterdb:79608759639680 SLAVE:79608759633224 Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723 seconds)\nTue Feb 24 15:13:28 MSK 2015 Stream: MASTER-masterdb:79608760271200 SLAVE:79608760264128 Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546 seconds)\nTue Feb 24 15:13:30 MSK 2015 Stream: MASTER-masterdb:79608760622920 SLAVE:79608760616656 Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645 seconds)\nTue Feb 24 15:13:31 MSK 2015 Stream: MASTER-masterdb:79608761122040 SLAVE:79608761084584 Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653 seconds)\nTue Feb 24 15:13:32 MSK 2015 Stream: MASTER-masterdb:79608761434200 SLAVE:79608761426080 Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429 seconds)\nTue Feb 24 15:13:33 MSK 2015 Stream: MASTER-masterdb:79608761931008 SLAVE:79608761904808 Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498 seconds)\n*--apply starts*\nTue Feb 24 15:13:34 MSK 2015 Stream: MASTER-masterdb:79608762360568 SLAVE:79608762325712 Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423 seconds)\nTue Feb 24 15:13:35 MSK 2015 Stream: MASTER-masterdb:79608762891224 SLAVE:79608762885928 Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046 seconds)\nTue Feb 24 15:13:36 MSK 2015 Stream: MASTER-masterdb:79608763681920 SLAVE:79608763667256 Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531 seconds)\nTue Feb 24 15:13:37 MSK 2015 Stream: MASTER-masterdb:79608764207088 SLAVE:79608764197744 Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428 seconds)\nTue Feb 24 15:13:38 MSK 2015 Stream: MASTER-masterdb:79608764857920 SLAVE:79608764832432 Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467 seconds)\nTue Feb 24 15:13:39 MSK 2015 Stream: MASTER-masterdb:79608765323360 SLAVE:79608765281408 Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874 seconds)\nTue Feb 24 15:13:40 MSK 2015 Stream: MASTER-masterdb:79608765848240 SLAVE:79608765824520 Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932 seconds)\n\n\nAll this is the result of the completion of \"vacuum verbose analyze master_table\" on the master site.\n\nAny help would be appreciated.\n\n-- \nBest regards,\nSergey Shchukin",
"msg_date": "Tue, 24 Feb 2015 16:42:06 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Issue with a hanging apply process on the replica db after vacuum\n works on primary"
},
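For readers wanting to reproduce the lag numbers in logs like the above, a sketch using the standard 9.x monitoring functions (pg_stat_replication and pg_xlog_location_diff on the primary, pg_last_xact_replay_timestamp on the standby):

-- On the primary: bytes of WAL sent but not yet replayed, per standby.
SELECT client_addr,
       pg_xlog_location_diff(sent_location, replay_location) AS replay_lag_bytes
FROM pg_stat_replication;

-- On the standby: wall-clock age of the last replayed transaction.
SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;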
{
"msg_contents": "Hi Radovan !\n\nThank you for the reply. The question is that this table is not a \nsubject for a massive updates/deletes.\n\nIs there any additional traces except from perf or pg_top to trace what \nreplica is doing at the particular moment when we are lagging in replay? \nTo see locks or spins or sleeps etc..\n\nThank you!\n\n-\n\nBest regards,\nSergey Shchukin\n\n24.02.2015 19:05, Radovan Jablonovsky пишет:\n> This looks like more issue for pgsql-general mailing list.\n>\n> Possible solutions\n> 1) Set specific autovacuum parameters on the big table. The autovacuum \n> could vacuum table on multiple runs based on the thresholds and cost \n> settings\n> Example of setting specific values of autovacuum and analyze for \n> table. It should be adjusted for your system, work load, table usage, etc:\n> alter table \"my_schema\".\"my_big_table\" set (fillfactor = 80, \n> autovacuum_enabled = true, autovacuum_vacuum_threshold = 200, \n> autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor = \n> 0.05, autovacuum_analyze_scale_factor = 0.005, \n> autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 5000);\n>\n> 2) Could be to partition the large table on master site and vacuum it \n> partition by partition.\n>\n> On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin \n> <[email protected] <mailto:[email protected]>> wrote:\n>\n> Hi all!\n>\n> May someone help me with the issue in the apply process on the\n> replica. We have a stream replication and after vacuum stops\n> working with a big table we get a \"freeze\" in applying data on the\n> replica database. It looks like this:\n>\n> Tue Feb 24 15:04:51 MSK 2015 Stream:\n> MASTER-masterdb:79607136410456 SLAVE:79607136410456\n> Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 seconds)\n> Tue Feb 24 15:04:52 MSK 2015 Stream:\n> MASTER-masterdb:79607137892672 SLAVE:79607137715392\n> Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 seconds)\n> Tue Feb 24 15:04:53 MSK 2015 Stream:\n> MASTER-masterdb:79607139327776 SLAVE:79607139241816\n> Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)\n> Tue Feb 24 15:04:54 MSK 2015 Stream:\n> MASTER-masterdb:79607141134776 SLAVE:79607141073344\n> Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 seconds)\n> Tue Feb 24 15:04:55 MSK 2015 Stream:\n> MASTER-masterdb:79607143085176 SLAVE:79607143026440\n> Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 seconds)\n> Tue Feb 24 15:04:56 MSK 2015 Stream:\n> MASTER-masterdb:79607145111280 SLAVE:79607145021384\n> Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 seconds)\n> Tue Feb 24 15:04:57 MSK 2015 Stream:\n> MASTER-masterdb:79607146564424 SLAVE:79607146478336\n> Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)\n> Tue Feb 24 15:04:58 MSK 2015 Stream:\n> MASTER-masterdb:79607148160680 SLAVE:79607148108352\n> Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 seconds)\n> Tue Feb 24 15:04:59 MSK 2015 Stream:\n> MASTER-masterdb:79607150220688 SLAVE:79607150159632\n> Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 seconds)\n> Tue Feb 24 15:05:00 MSK 2015 Stream:\n> MASTER-masterdb:79607152365360 SLAVE:79607152262696\n> Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 seconds)\n> Tue Feb 24 15:05:02 MSK 2015 Stream:\n> MASTER-masterdb:79607154049848 SLAVE:79607154012624\n> Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 seconds)\n> Tue Feb 24 15:05:03 MSK 2015 Stream:\n> MASTER-masterdb:79607155229992 
SLAVE:79607155187864\n> Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 seconds)\n> Tue Feb 24 15:05:04 MSK 2015 Stream:\n> MASTER-masterdb:79607156833968 SLAVE:79607156764128\n> Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 seconds)\n> Tue Feb 24 15:05:05 MSK 2015 Stream:\n> MASTER-masterdb:79607158419848 SLAVE:79607158344856\n> Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 seconds)\n> Tue Feb 24 15:05:06 MSK 2015 Stream:\n> MASTER-masterdb:79607160004776 SLAVE:79607159962400\n> Replay:79607159988888 :: REPLAY 16 KBytes (00:00:00.003162 seconds)\n> *--here apply process just stops*\n>\n> Tue Feb 24 15:05:07 MSK 2015 Stream:\n> MASTER-masterdb:79607161592048 SLAVE:79607161550576\n> Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds)\n> Tue Feb 24 15:05:08 MSK 2015 Stream:\n> MASTER-masterdb:79607163272840 SLAVE:79607163231384\n> Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 seconds)\n> Tue Feb 24 15:05:09 MSK 2015 Stream:\n> MASTER-masterdb:79607164958632 SLAVE:79607164904448\n> Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 seconds)\n> Tue Feb 24 15:05:10 MSK 2015 Stream:\n> MASTER-masterdb:79607166819560 SLAVE:79607166777712\n> Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 seconds)\n> Tue Feb 24 15:05:11 MSK 2015 Stream:\n> MASTER-masterdb:79607168595280 SLAVE:79607168566536\n> Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 seconds)\n> Tue Feb 24 15:05:12 MSK 2015 Stream:\n> MASTER-masterdb:79607170372064 SLAVE:79607170252480\n> Replay:79607160986064 :: REPLAY 9166 KBytes (00:00:05.635918 seconds)\n> Tue Feb 24 15:05:13 MSK 2015 Stream:\n> MASTER-masterdb:79607171829480 SLAVE:79607171714144\n> Replay:79607160986064 :: REPLAY 10589 KBytes (00:00:06.688115 seconds)\n> Tue Feb 24 15:05:14 MSK 2015 Stream:\n> MASTER-masterdb:79607173152488 SLAVE:79607173152488\n> Replay:79607160986064 :: REPLAY 11881 KBytes (00:00:07.736993 seconds)\n> Tue Feb 24 15:05:15 MSK 2015 Stream:\n> MASTER-masterdb:79607174149968 SLAVE:79607174149968\n> Replay:79607160986064 :: REPLAY 12855 KBytes (00:00:08.78538 seconds)\n> Tue Feb 24 15:05:16 MSK 2015 Stream:\n> MASTER-masterdb:79607176448344 SLAVE:79607176252088\n> Replay:79607160986064 :: REPLAY 15100 KBytes (00:00:09.835184 seconds)\n> Tue Feb 24 15:05:17 MSK 2015 Stream:\n> MASTER-masterdb:79607177632216 SLAVE:79607177608224\n> Replay:79607160986064 :: REPLAY 16256 KBytes (00:00:10.926493 seconds)\n> Tue Feb 24 15:05:18 MSK 2015 Stream:\n> MASTER-masterdb:79607179432960 SLAVE:79607179378096\n> Replay:79607160986064 :: REPLAY 18015 KBytes (00:00:11.97989 seconds)\n> Tue Feb 24 15:05:19 MSK 2015 Stream:\n> MASTER-masterdb:79607180893384 SLAVE:79607180874256\n> Replay:79607160986064 :: REPLAY 19441 KBytes (00:00:13.028921 seconds)\n> Tue Feb 24 15:05:20 MSK 2015 Stream:\n> MASTER-masterdb:79607182596224 SLAVE:79607182552272\n> Replay:79607160986064 :: REPLAY 21104 KBytes (00:00:14.079497 seconds)\n> Tue Feb 24 15:05:21 MSK 2015 Stream:\n> MASTER-masterdb:79607183935312 SLAVE:79607183902592\n> Replay:79607160986064 :: REPLAY 22411 KBytes (00:00:15.127679 seconds)\n> Tue Feb 24 15:05:23 MSK 2015 Stream:\n> MASTER-masterdb:79607185165880 SLAVE:79607185094032\n> Replay:79607160986064 :: REPLAY 23613 KBytes (00:00:16.175132 seconds)\n> Tue Feb 24 15:05:24 MSK 2015 Stream:\n> MASTER-masterdb:79607187196920 SLAVE:79607187169368\n> Replay:79607160986064 :: REPLAY 25596 KBytes (00:00:17.221981 seconds)\n> Tue Feb 24 15:05:25 MSK 2015 Stream:\n> 
MASTER-masterdb:79607188943856 SLAVE:79607188885952\n> Replay:79607160986064 :: REPLAY 27302 KBytes (00:00:18.274362 seconds)\n> Tue Feb 24 15:05:26 MSK 2015 Stream:\n> MASTER-masterdb:79607190489400 SLAVE:79607190443160\n> Replay:79607160986064 :: REPLAY 28812 KBytes (00:00:19.319987 seconds)\n> Tue Feb 24 15:05:27 MSK 2015 Stream:\n> MASTER-masterdb:79607192089312 SLAVE:79607192054048\n> Replay:79607160986064 :: REPLAY 30374 KBytes (00:00:20.372305 seconds)\n> Tue Feb 24 15:05:28 MSK 2015 Stream:\n> MASTER-masterdb:79607193736800 SLAVE:79607193690056\n> Replay:79607160986064 :: REPLAY 31983 KBytes (00:00:21.421359 seconds)\n> Tue Feb 24 15:05:29 MSK 2015 Stream:\n> MASTER-masterdb:79607195968648 SLAVE:79607195901296\n> Replay:79607160986064 :: REPLAY 34163 KBytes (00:00:22.471334 seconds)\n> Tue Feb 24 15:05:30 MSK 2015 Stream:\n> MASTER-masterdb:79607197808840 SLAVE:79607197737720\n> Replay:79607160986064 :: REPLAY 35960 KBytes (00:00:23.52269 seconds)\n> Tue Feb 24 15:05:31 MSK 2015 Stream:\n> MASTER-masterdb:79607199571144 SLAVE:79607199495976\n> Replay:79607160986064 :: REPLAY 37681 KBytes (00:00:24.577615 seconds)\n> Tue Feb 24 15:05:32 MSK 2015 Stream:\n> MASTER-masterdb:79607201206104 SLAVE:79607201100392\n> Replay:79607160986064 :: REPLAY 39277 KBytes (00:00:25.624604 seconds)\n> Tue Feb 24 15:05:33 MSK 2015 Stream:\n> MASTER-masterdb:79607203174208 SLAVE:79607203111136\n> Replay:79607160986064 :: REPLAY 41199 KBytes (00:00:26.67059 seconds)\n> Tue Feb 24 15:05:34 MSK 2015 Stream:\n> MASTER-masterdb:79607204792888 SLAVE:79607204741600\n> Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 seconds)\n> Tue Feb 24 15:05:35 MSK 2015 Stream:\n> MASTER-masterdb:79607206453216 SLAVE:79607206409032\n> Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 seconds)\n> Tue Feb 24 15:05:36 MSK 2015 Stream:\n> MASTER-masterdb:79607208225344 SLAVE:79607208142176\n> Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 seconds)\n>\n>\n> perf shows the following functions on the top\n> + 22.50% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n> + 8.48% postmaster postgres [.]\n> hash_search_with_hash_value\n>\n>\n> after 10 minutes or so the apply process continue to work\n>\n> Tue Feb 24 15:13:25 MSK 2015 Stream:\n> MASTER-masterdb:79608758742560 SLAVE:79608758718008\n> Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653\n> seconds)\n> Tue Feb 24 15:13:26 MSK 2015 Stream:\n> MASTER-masterdb:79608759203608 SLAVE:79608759189680\n> Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877\n> seconds)\n> Tue Feb 24 15:13:27 MSK 2015 Stream:\n> MASTER-masterdb:79608759639680 SLAVE:79608759633224\n> Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723\n> seconds)\n> Tue Feb 24 15:13:28 MSK 2015 Stream:\n> MASTER-masterdb:79608760271200 SLAVE:79608760264128\n> Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546\n> seconds)\n> Tue Feb 24 15:13:30 MSK 2015 Stream:\n> MASTER-masterdb:79608760622920 SLAVE:79608760616656\n> Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645\n> seconds)\n> Tue Feb 24 15:13:31 MSK 2015 Stream:\n> MASTER-masterdb:79608761122040 SLAVE:79608761084584\n> Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653\n> seconds)\n> Tue Feb 24 15:13:32 MSK 2015 Stream:\n> MASTER-masterdb:79608761434200 SLAVE:79608761426080\n> Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429\n> seconds)\n> Tue Feb 24 15:13:33 MSK 2015 Stream:\n> MASTER-masterdb:79608761931008 
SLAVE:79608761904808\n> Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498\n> seconds)\n> *--apply starts*\n> Tue Feb 24 15:13:34 MSK 2015 Stream:\n> MASTER-masterdb:79608762360568 SLAVE:79608762325712\n> Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423\n> seconds)\n> Tue Feb 24 15:13:35 MSK 2015 Stream:\n> MASTER-masterdb:79608762891224 SLAVE:79608762885928\n> Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046\n> seconds)\n> Tue Feb 24 15:13:36 MSK 2015 Stream:\n> MASTER-masterdb:79608763681920 SLAVE:79608763667256\n> Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531\n> seconds)\n> Tue Feb 24 15:13:37 MSK 2015 Stream:\n> MASTER-masterdb:79608764207088 SLAVE:79608764197744\n> Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428\n> seconds)\n> Tue Feb 24 15:13:38 MSK 2015 Stream:\n> MASTER-masterdb:79608764857920 SLAVE:79608764832432\n> Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467\n> seconds)\n> Tue Feb 24 15:13:39 MSK 2015 Stream:\n> MASTER-masterdb:79608765323360 SLAVE:79608765281408\n> Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874\n> seconds)\n> Tue Feb 24 15:13:40 MSK 2015 Stream:\n> MASTER-masterdb:79608765848240 SLAVE:79608765824520\n> Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932\n> seconds)\n>\n>\n> All this is a result of completion of \"vacuum verbose analyze\n> master_table\" on the master site\n>\n> Any help would be appreciated\n>\n> -- \n> Best regards,\n> Sergey Shchukin
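\n\nFor reference on the quoted ALTER TABLE: the per-table settings combine into a trigger condition, i.e. autovacuum queues a vacuum of the table once dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples (200 + 0.05 * N rows with the values above), and an analyze analogously via the analyze threshold/scale-factor pair. The overrides actually in effect on a table can be read back with a query along these lines (\"my_big_table\" is the placeholder name from the example):\n\nSELECT relname, reloptions   -- reloptions lists the per-table autovacuum overrides\n  FROM pg_class\n WHERE relname = 'my_big_table';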
",
"msg_date": "Thu, 26 Feb 2015 09:25:31 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [pgadmin-support] Issue with a hanging apply process on the\n replica db after vacuum works on primary"
},
{
"msg_contents": "On 2/26/15 12:25 AM, Sergey Shchukin wrote:\n> Hi Radovan !\n>\n> Thank you for the reply. The question is that this table is not a\n> subject for a massive updates/deletes.\n>\n> Is there any additional traces except from perf or pg_top to trace what\n> replica is doing at the particular moment when we are lagging in replay?\n> To see locks or spins or sleeps etc..\n\nPlease don't top-post.\n\nWhat version is this? What is max_standby_streaming_delay set to?\n\n> Thank you!\n>\n> -\n>\n> Best regards,\n> Sergey Shchukin\n>\n> 24.02.2015 19:05, Radovan Jablonovsky пишет:\n>> This looks like more issue for pgsql-general mailing list.\n>>\n>> Possible solutions\n>> 1) Set specific autovacuum parameters on the big table. The autovacuum\n>> could vacuum table on multiple runs based on the thresholds and cost\n>> settings\n>> Example of setting specific values of autovacuum and analyze for\n>> table. It should be adjusted for your system, work load, table usage, etc:\n>> alter table \"my_schema\".\"my_big_table\" set (fillfactor = 80,\n>> autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,\n>> autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =\n>> 0.05, autovacuum_analyze_scale_factor = 0.005,\n>> autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = 5000);\n>>\n>> 2) Could be to partition the large table on master site and vacuum it\n>> partition by partition.\n>>\n>> On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin\n>> <[email protected] <mailto:[email protected]>> wrote:\n>>\n>> Hi all!\n>>\n>> May someone help me with the issue in the apply process on the\n>> replica. We have a stream replication and after vacuum stops\n>> working with a big table we get a \"freeze\" in applying data on the\n>> replica database. 
It looks like this:\n>>\n>> Tue Feb 24 15:04:51 MSK 2015 Stream:\n>> MASTER-masterdb:79607136410456 SLAVE:79607136410456\n>> Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 seconds)\n>> Tue Feb 24 15:04:52 MSK 2015 Stream:\n>> MASTER-masterdb:79607137892672 SLAVE:79607137715392\n>> Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 seconds)\n>> Tue Feb 24 15:04:53 MSK 2015 Stream:\n>> MASTER-masterdb:79607139327776 SLAVE:79607139241816\n>> Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)\n>> Tue Feb 24 15:04:54 MSK 2015 Stream:\n>> MASTER-masterdb:79607141134776 SLAVE:79607141073344\n>> Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 seconds)\n>> Tue Feb 24 15:04:55 MSK 2015 Stream:\n>> MASTER-masterdb:79607143085176 SLAVE:79607143026440\n>> Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 seconds)\n>> Tue Feb 24 15:04:56 MSK 2015 Stream:\n>> MASTER-masterdb:79607145111280 SLAVE:79607145021384\n>> Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 seconds)\n>> Tue Feb 24 15:04:57 MSK 2015 Stream:\n>> MASTER-masterdb:79607146564424 SLAVE:79607146478336\n>> Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)\n>> Tue Feb 24 15:04:58 MSK 2015 Stream:\n>> MASTER-masterdb:79607148160680 SLAVE:79607148108352\n>> Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 seconds)\n>> Tue Feb 24 15:04:59 MSK 2015 Stream:\n>> MASTER-masterdb:79607150220688 SLAVE:79607150159632\n>> Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 seconds)\n>> Tue Feb 24 15:05:00 MSK 2015 Stream:\n>> MASTER-masterdb:79607152365360 SLAVE:79607152262696\n>> Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 seconds)\n>> Tue Feb 24 15:05:02 MSK 2015 Stream:\n>> MASTER-masterdb:79607154049848 SLAVE:79607154012624\n>> Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 seconds)\n>> Tue Feb 24 15:05:03 MSK 2015 Stream:\n>> MASTER-masterdb:79607155229992 SLAVE:79607155187864\n>> Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 seconds)\n>> Tue Feb 24 15:05:04 MSK 2015 Stream:\n>> MASTER-masterdb:79607156833968 SLAVE:79607156764128\n>> Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 seconds)\n>> Tue Feb 24 15:05:05 MSK 2015 Stream:\n>> MASTER-masterdb:79607158419848 SLAVE:79607158344856\n>> Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 seconds)\n>> Tue Feb 24 15:05:06 MSK 2015 Stream:\n>> MASTER-masterdb:79607160004776 SLAVE:79607159962400\n>> Replay:79607159988888 :: REPLAY 16 KBytes (00:00:00.003162 seconds)\n>> *--here apply process just stops*\n>>\n>> Tue Feb 24 15:05:07 MSK 2015 Stream:\n>> MASTER-masterdb:79607161592048 SLAVE:79607161550576\n>> Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds)\n>> Tue Feb 24 15:05:08 MSK 2015 Stream:\n>> MASTER-masterdb:79607163272840 SLAVE:79607163231384\n>> Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 seconds)\n>> Tue Feb 24 15:05:09 MSK 2015 Stream:\n>> MASTER-masterdb:79607164958632 SLAVE:79607164904448\n>> Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 seconds)\n>> Tue Feb 24 15:05:10 MSK 2015 Stream:\n>> MASTER-masterdb:79607166819560 SLAVE:79607166777712\n>> Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 seconds)\n>> Tue Feb 24 15:05:11 MSK 2015 Stream:\n>> MASTER-masterdb:79607168595280 SLAVE:79607168566536\n>> Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 seconds)\n>> Tue Feb 24 15:05:12 MSK 2015 Stream:\n>> MASTER-masterdb:79607170372064 
SLAVE:79607170252480\n>> Replay:79607160986064 :: REPLAY 9166 KBytes (00:00:05.635918 seconds)\n>> Tue Feb 24 15:05:13 MSK 2015 Stream:\n>> MASTER-masterdb:79607171829480 SLAVE:79607171714144\n>> Replay:79607160986064 :: REPLAY 10589 KBytes (00:00:06.688115 seconds)\n>> Tue Feb 24 15:05:14 MSK 2015 Stream:\n>> MASTER-masterdb:79607173152488 SLAVE:79607173152488\n>> Replay:79607160986064 :: REPLAY 11881 KBytes (00:00:07.736993 seconds)\n>> Tue Feb 24 15:05:15 MSK 2015 Stream:\n>> MASTER-masterdb:79607174149968 SLAVE:79607174149968\n>> Replay:79607160986064 :: REPLAY 12855 KBytes (00:00:08.78538 seconds)\n>> Tue Feb 24 15:05:16 MSK 2015 Stream:\n>> MASTER-masterdb:79607176448344 SLAVE:79607176252088\n>> Replay:79607160986064 :: REPLAY 15100 KBytes (00:00:09.835184 seconds)\n>> Tue Feb 24 15:05:17 MSK 2015 Stream:\n>> MASTER-masterdb:79607177632216 SLAVE:79607177608224\n>> Replay:79607160986064 :: REPLAY 16256 KBytes (00:00:10.926493 seconds)\n>> Tue Feb 24 15:05:18 MSK 2015 Stream:\n>> MASTER-masterdb:79607179432960 SLAVE:79607179378096\n>> Replay:79607160986064 :: REPLAY 18015 KBytes (00:00:11.97989 seconds)\n>> Tue Feb 24 15:05:19 MSK 2015 Stream:\n>> MASTER-masterdb:79607180893384 SLAVE:79607180874256\n>> Replay:79607160986064 :: REPLAY 19441 KBytes (00:00:13.028921 seconds)\n>> Tue Feb 24 15:05:20 MSK 2015 Stream:\n>> MASTER-masterdb:79607182596224 SLAVE:79607182552272\n>> Replay:79607160986064 :: REPLAY 21104 KBytes (00:00:14.079497 seconds)\n>> Tue Feb 24 15:05:21 MSK 2015 Stream:\n>> MASTER-masterdb:79607183935312 SLAVE:79607183902592\n>> Replay:79607160986064 :: REPLAY 22411 KBytes (00:00:15.127679 seconds)\n>> Tue Feb 24 15:05:23 MSK 2015 Stream:\n>> MASTER-masterdb:79607185165880 SLAVE:79607185094032\n>> Replay:79607160986064 :: REPLAY 23613 KBytes (00:00:16.175132 seconds)\n>> Tue Feb 24 15:05:24 MSK 2015 Stream:\n>> MASTER-masterdb:79607187196920 SLAVE:79607187169368\n>> Replay:79607160986064 :: REPLAY 25596 KBytes (00:00:17.221981 seconds)\n>> Tue Feb 24 15:05:25 MSK 2015 Stream:\n>> MASTER-masterdb:79607188943856 SLAVE:79607188885952\n>> Replay:79607160986064 :: REPLAY 27302 KBytes (00:00:18.274362 seconds)\n>> Tue Feb 24 15:05:26 MSK 2015 Stream:\n>> MASTER-masterdb:79607190489400 SLAVE:79607190443160\n>> Replay:79607160986064 :: REPLAY 28812 KBytes (00:00:19.319987 seconds)\n>> Tue Feb 24 15:05:27 MSK 2015 Stream:\n>> MASTER-masterdb:79607192089312 SLAVE:79607192054048\n>> Replay:79607160986064 :: REPLAY 30374 KBytes (00:00:20.372305 seconds)\n>> Tue Feb 24 15:05:28 MSK 2015 Stream:\n>> MASTER-masterdb:79607193736800 SLAVE:79607193690056\n>> Replay:79607160986064 :: REPLAY 31983 KBytes (00:00:21.421359 seconds)\n>> Tue Feb 24 15:05:29 MSK 2015 Stream:\n>> MASTER-masterdb:79607195968648 SLAVE:79607195901296\n>> Replay:79607160986064 :: REPLAY 34163 KBytes (00:00:22.471334 seconds)\n>> Tue Feb 24 15:05:30 MSK 2015 Stream:\n>> MASTER-masterdb:79607197808840 SLAVE:79607197737720\n>> Replay:79607160986064 :: REPLAY 35960 KBytes (00:00:23.52269 seconds)\n>> Tue Feb 24 15:05:31 MSK 2015 Stream:\n>> MASTER-masterdb:79607199571144 SLAVE:79607199495976\n>> Replay:79607160986064 :: REPLAY 37681 KBytes (00:00:24.577615 seconds)\n>> Tue Feb 24 15:05:32 MSK 2015 Stream:\n>> MASTER-masterdb:79607201206104 SLAVE:79607201100392\n>> Replay:79607160986064 :: REPLAY 39277 KBytes (00:00:25.624604 seconds)\n>> Tue Feb 24 15:05:33 MSK 2015 Stream:\n>> MASTER-masterdb:79607203174208 SLAVE:79607203111136\n>> Replay:79607160986064 :: REPLAY 41199 KBytes (00:00:26.67059 seconds)\n>> Tue Feb 24 15:05:34 
MSK 2015 Stream:\n>> MASTER-masterdb:79607204792888 SLAVE:79607204741600\n>> Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 seconds)\n>> Tue Feb 24 15:05:35 MSK 2015 Stream:\n>> MASTER-masterdb:79607206453216 SLAVE:79607206409032\n>> Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 seconds)\n>> Tue Feb 24 15:05:36 MSK 2015 Stream:\n>> MASTER-masterdb:79607208225344 SLAVE:79607208142176\n>> Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 seconds)\n>>\n>>\n>> perf shows the following functions on the top\n>> + 22.50% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>> + 8.48% postmaster postgres [.]\n>> hash_search_with_hash_value\n>>\n>>\n>> after 10 minutes or so the apply process continue to work\n>>\n>> Tue Feb 24 15:13:25 MSK 2015 Stream:\n>> MASTER-masterdb:79608758742560 SLAVE:79608758718008\n>> Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653\n>> seconds)\n>> Tue Feb 24 15:13:26 MSK 2015 Stream:\n>> MASTER-masterdb:79608759203608 SLAVE:79608759189680\n>> Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877\n>> seconds)\n>> Tue Feb 24 15:13:27 MSK 2015 Stream:\n>> MASTER-masterdb:79608759639680 SLAVE:79608759633224\n>> Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723\n>> seconds)\n>> Tue Feb 24 15:13:28 MSK 2015 Stream:\n>> MASTER-masterdb:79608760271200 SLAVE:79608760264128\n>> Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546\n>> seconds)\n>> Tue Feb 24 15:13:30 MSK 2015 Stream:\n>> MASTER-masterdb:79608760622920 SLAVE:79608760616656\n>> Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645\n>> seconds)\n>> Tue Feb 24 15:13:31 MSK 2015 Stream:\n>> MASTER-masterdb:79608761122040 SLAVE:79608761084584\n>> Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653\n>> seconds)\n>> Tue Feb 24 15:13:32 MSK 2015 Stream:\n>> MASTER-masterdb:79608761434200 SLAVE:79608761426080\n>> Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429\n>> seconds)\n>> Tue Feb 24 15:13:33 MSK 2015 Stream:\n>> MASTER-masterdb:79608761931008 SLAVE:79608761904808\n>> Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498\n>> seconds)\n>> *--apply starts*\n>> Tue Feb 24 15:13:34 MSK 2015 Stream:\n>> MASTER-masterdb:79608762360568 SLAVE:79608762325712\n>> Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423\n>> seconds)\n>> Tue Feb 24 15:13:35 MSK 2015 Stream:\n>> MASTER-masterdb:79608762891224 SLAVE:79608762885928\n>> Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046\n>> seconds)\n>> Tue Feb 24 15:13:36 MSK 2015 Stream:\n>> MASTER-masterdb:79608763681920 SLAVE:79608763667256\n>> Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531\n>> seconds)\n>> Tue Feb 24 15:13:37 MSK 2015 Stream:\n>> MASTER-masterdb:79608764207088 SLAVE:79608764197744\n>> Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428\n>> seconds)\n>> Tue Feb 24 15:13:38 MSK 2015 Stream:\n>> MASTER-masterdb:79608764857920 SLAVE:79608764832432\n>> Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467\n>> seconds)\n>> Tue Feb 24 15:13:39 MSK 2015 Stream:\n>> MASTER-masterdb:79608765323360 SLAVE:79608765281408\n>> Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874\n>> seconds)\n>> Tue Feb 24 15:13:40 MSK 2015 Stream:\n>> MASTER-masterdb:79608765848240 SLAVE:79608765824520\n>> Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932\n>> seconds)\n>>\n>>\n>> All this is a result of completion of \"vacuum verbose analyze\n>> master_table\" on the 
master site\n>>\n>> Any help would be appreciated\n>>\n>> --\n>> Best regards,\n>> Sergey Shchukin\n>\n\n\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Fri, 27 Feb 2015 02:52:34 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [pgadmin-support] Issue with a hanging apply process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "27.02.2015 11:52, Jim Nasby пишет:\n> On 2/26/15 12:25 AM, Sergey Shchukin wrote:\n>> Hi Radovan !\n>>\n>> Thank you for the reply. The question is that this table is not a\n>> subject for a massive updates/deletes.\n>>\n>> Is there any additional traces except from perf or pg_top to trace what\n>> replica is doing at the particular moment when we are lagging in replay?\n>> To see locks or spins or sleeps etc..\n>\n> Please don't top-post.\n>\n> What version is this? What is max_standby_streaming_delay set to?\n>\n>> Thank you!\n>>\n>> -\n>>\n>> Best regards,\n>> Sergey Shchukin\n>>\n>> 24.02.2015 19:05, Radovan Jablonovsky пишет:\n>>> This looks like more issue for pgsql-general mailing list.\n>>>\n>>> Possible solutions\n>>> 1) Set specific autovacuum parameters on the big table. The autovacuum\n>>> could vacuum table on multiple runs based on the thresholds and cost\n>>> settings\n>>> Example of setting specific values of autovacuum and analyze for\n>>> table. It should be adjusted for your system, work load, table \n>>> usage, etc:\n>>> alter table \"my_schema\".\"my_big_table\" set (fillfactor = 80,\n>>> autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,\n>>> autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =\n>>> 0.05, autovacuum_analyze_scale_factor = 0.005,\n>>> autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = \n>>> 5000);\n>>>\n>>> 2) Could be to partition the large table on master site and vacuum it\n>>> partition by partition.\n>>>\n>>> On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin\n>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>\n>>> Hi all!\n>>>\n>>> May someone help me with the issue in the apply process on the\n>>> replica. We have a stream replication and after vacuum stops\n>>> working with a big table we get a \"freeze\" in applying data on the\n>>> replica database. 
It looks like this:\n>>>\n>>> Tue Feb 24 15:04:51 MSK 2015 Stream:\n>>> MASTER-masterdb:79607136410456 SLAVE:79607136410456\n>>> Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 seconds)\n>>> Tue Feb 24 15:04:52 MSK 2015 Stream:\n>>> MASTER-masterdb:79607137892672 SLAVE:79607137715392\n>>> Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 \n>>> seconds)\n>>> Tue Feb 24 15:04:53 MSK 2015 Stream:\n>>> MASTER-masterdb:79607139327776 SLAVE:79607139241816\n>>> Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)\n>>> Tue Feb 24 15:04:54 MSK 2015 Stream:\n>>> MASTER-masterdb:79607141134776 SLAVE:79607141073344\n>>> Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 seconds)\n>>> Tue Feb 24 15:04:55 MSK 2015 Stream:\n>>> MASTER-masterdb:79607143085176 SLAVE:79607143026440\n>>> Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 seconds)\n>>> Tue Feb 24 15:04:56 MSK 2015 Stream:\n>>> MASTER-masterdb:79607145111280 SLAVE:79607145021384\n>>> Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 seconds)\n>>> Tue Feb 24 15:04:57 MSK 2015 Stream:\n>>> MASTER-masterdb:79607146564424 SLAVE:79607146478336\n>>> Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)\n>>> Tue Feb 24 15:04:58 MSK 2015 Stream:\n>>> MASTER-masterdb:79607148160680 SLAVE:79607148108352\n>>> Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 \n>>> seconds)\n>>> Tue Feb 24 15:04:59 MSK 2015 Stream:\n>>> MASTER-masterdb:79607150220688 SLAVE:79607150159632\n>>> Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 seconds)\n>>> Tue Feb 24 15:05:00 MSK 2015 Stream:\n>>> MASTER-masterdb:79607152365360 SLAVE:79607152262696\n>>> Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 seconds)\n>>> Tue Feb 24 15:05:02 MSK 2015 Stream:\n>>> MASTER-masterdb:79607154049848 SLAVE:79607154012624\n>>> Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 \n>>> seconds)\n>>> Tue Feb 24 15:05:03 MSK 2015 Stream:\n>>> MASTER-masterdb:79607155229992 SLAVE:79607155187864\n>>> Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 seconds)\n>>> Tue Feb 24 15:05:04 MSK 2015 Stream:\n>>> MASTER-masterdb:79607156833968 SLAVE:79607156764128\n>>> Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 seconds)\n>>> Tue Feb 24 15:05:05 MSK 2015 Stream:\n>>> MASTER-masterdb:79607158419848 SLAVE:79607158344856\n>>> Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 seconds)\n>>> Tue Feb 24 15:05:06 MSK 2015 Stream:\n>>> MASTER-masterdb:79607160004776 SLAVE:79607159962400\n>>> Replay:79607159988888 :: REPLAY 16 KBytes (00:00:00.003162 seconds)\n>>> *--here apply process just stops*\n>>>\n>>> Tue Feb 24 15:05:07 MSK 2015 Stream:\n>>> MASTER-masterdb:79607161592048 SLAVE:79607161550576\n>>> Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 \n>>> seconds)\n>>> Tue Feb 24 15:05:08 MSK 2015 Stream:\n>>> MASTER-masterdb:79607163272840 SLAVE:79607163231384\n>>> Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 \n>>> seconds)\n>>> Tue Feb 24 15:05:09 MSK 2015 Stream:\n>>> MASTER-masterdb:79607164958632 SLAVE:79607164904448\n>>> Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 \n>>> seconds)\n>>> Tue Feb 24 15:05:10 MSK 2015 Stream:\n>>> MASTER-masterdb:79607166819560 SLAVE:79607166777712\n>>> Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 \n>>> seconds)\n>>> Tue Feb 24 15:05:11 MSK 2015 Stream:\n>>> MASTER-masterdb:79607168595280 SLAVE:79607168566536\n>>> Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 
\n>>> seconds)\n>>> Tue Feb 24 15:05:12 MSK 2015 Stream:\n>>> MASTER-masterdb:79607170372064 SLAVE:79607170252480\n>>> Replay:79607160986064 :: REPLAY 9166 KBytes (00:00:05.635918 \n>>> seconds)\n>>> Tue Feb 24 15:05:13 MSK 2015 Stream:\n>>> MASTER-masterdb:79607171829480 SLAVE:79607171714144\n>>> Replay:79607160986064 :: REPLAY 10589 KBytes (00:00:06.688115 \n>>> seconds)\n>>> Tue Feb 24 15:05:14 MSK 2015 Stream:\n>>> MASTER-masterdb:79607173152488 SLAVE:79607173152488\n>>> Replay:79607160986064 :: REPLAY 11881 KBytes (00:00:07.736993 \n>>> seconds)\n>>> Tue Feb 24 15:05:15 MSK 2015 Stream:\n>>> MASTER-masterdb:79607174149968 SLAVE:79607174149968\n>>> Replay:79607160986064 :: REPLAY 12855 KBytes (00:00:08.78538 \n>>> seconds)\n>>> Tue Feb 24 15:05:16 MSK 2015 Stream:\n>>> MASTER-masterdb:79607176448344 SLAVE:79607176252088\n>>> Replay:79607160986064 :: REPLAY 15100 KBytes (00:00:09.835184 \n>>> seconds)\n>>> Tue Feb 24 15:05:17 MSK 2015 Stream:\n>>> MASTER-masterdb:79607177632216 SLAVE:79607177608224\n>>> Replay:79607160986064 :: REPLAY 16256 KBytes (00:00:10.926493 \n>>> seconds)\n>>> Tue Feb 24 15:05:18 MSK 2015 Stream:\n>>> MASTER-masterdb:79607179432960 SLAVE:79607179378096\n>>> Replay:79607160986064 :: REPLAY 18015 KBytes (00:00:11.97989 \n>>> seconds)\n>>> Tue Feb 24 15:05:19 MSK 2015 Stream:\n>>> MASTER-masterdb:79607180893384 SLAVE:79607180874256\n>>> Replay:79607160986064 :: REPLAY 19441 KBytes (00:00:13.028921 \n>>> seconds)\n>>> Tue Feb 24 15:05:20 MSK 2015 Stream:\n>>> MASTER-masterdb:79607182596224 SLAVE:79607182552272\n>>> Replay:79607160986064 :: REPLAY 21104 KBytes (00:00:14.079497 \n>>> seconds)\n>>> Tue Feb 24 15:05:21 MSK 2015 Stream:\n>>> MASTER-masterdb:79607183935312 SLAVE:79607183902592\n>>> Replay:79607160986064 :: REPLAY 22411 KBytes (00:00:15.127679 \n>>> seconds)\n>>> Tue Feb 24 15:05:23 MSK 2015 Stream:\n>>> MASTER-masterdb:79607185165880 SLAVE:79607185094032\n>>> Replay:79607160986064 :: REPLAY 23613 KBytes (00:00:16.175132 \n>>> seconds)\n>>> Tue Feb 24 15:05:24 MSK 2015 Stream:\n>>> MASTER-masterdb:79607187196920 SLAVE:79607187169368\n>>> Replay:79607160986064 :: REPLAY 25596 KBytes (00:00:17.221981 \n>>> seconds)\n>>> Tue Feb 24 15:05:25 MSK 2015 Stream:\n>>> MASTER-masterdb:79607188943856 SLAVE:79607188885952\n>>> Replay:79607160986064 :: REPLAY 27302 KBytes (00:00:18.274362 \n>>> seconds)\n>>> Tue Feb 24 15:05:26 MSK 2015 Stream:\n>>> MASTER-masterdb:79607190489400 SLAVE:79607190443160\n>>> Replay:79607160986064 :: REPLAY 28812 KBytes (00:00:19.319987 \n>>> seconds)\n>>> Tue Feb 24 15:05:27 MSK 2015 Stream:\n>>> MASTER-masterdb:79607192089312 SLAVE:79607192054048\n>>> Replay:79607160986064 :: REPLAY 30374 KBytes (00:00:20.372305 \n>>> seconds)\n>>> Tue Feb 24 15:05:28 MSK 2015 Stream:\n>>> MASTER-masterdb:79607193736800 SLAVE:79607193690056\n>>> Replay:79607160986064 :: REPLAY 31983 KBytes (00:00:21.421359 \n>>> seconds)\n>>> Tue Feb 24 15:05:29 MSK 2015 Stream:\n>>> MASTER-masterdb:79607195968648 SLAVE:79607195901296\n>>> Replay:79607160986064 :: REPLAY 34163 KBytes (00:00:22.471334 \n>>> seconds)\n>>> Tue Feb 24 15:05:30 MSK 2015 Stream:\n>>> MASTER-masterdb:79607197808840 SLAVE:79607197737720\n>>> Replay:79607160986064 :: REPLAY 35960 KBytes (00:00:23.52269 \n>>> seconds)\n>>> Tue Feb 24 15:05:31 MSK 2015 Stream:\n>>> MASTER-masterdb:79607199571144 SLAVE:79607199495976\n>>> Replay:79607160986064 :: REPLAY 37681 KBytes (00:00:24.577615 \n>>> seconds)\n>>> Tue Feb 24 15:05:32 MSK 2015 Stream:\n>>> MASTER-masterdb:79607201206104 SLAVE:79607201100392\n>>> 
Replay:79607160986064 :: REPLAY 39277 KBytes (00:00:25.624604 \n>>> seconds)\n>>> Tue Feb 24 15:05:33 MSK 2015 Stream:\n>>> MASTER-masterdb:79607203174208 SLAVE:79607203111136\n>>> Replay:79607160986064 :: REPLAY 41199 KBytes (00:00:26.67059 \n>>> seconds)\n>>> Tue Feb 24 15:05:34 MSK 2015 Stream:\n>>> MASTER-masterdb:79607204792888 SLAVE:79607204741600\n>>> Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 \n>>> seconds)\n>>> Tue Feb 24 15:05:35 MSK 2015 Stream:\n>>> MASTER-masterdb:79607206453216 SLAVE:79607206409032\n>>> Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 \n>>> seconds)\n>>> Tue Feb 24 15:05:36 MSK 2015 Stream:\n>>> MASTER-masterdb:79607208225344 SLAVE:79607208142176\n>>> Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 \n>>> seconds)\n>>>\n>>>\n>>> perf shows the following functions on the top\n>>> + 22.50% postmaster [kernel.kallsyms] [k] \n>>> copy_user_generic_string\n>>> + 8.48% postmaster postgres [.]\n>>> hash_search_with_hash_value\n>>>\n>>>\n>>> after 10 minutes or so the apply process continue to work\n>>>\n>>> Tue Feb 24 15:13:25 MSK 2015 Stream:\n>>> MASTER-masterdb:79608758742560 SLAVE:79608758718008\n>>> Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653\n>>> seconds)\n>>> Tue Feb 24 15:13:26 MSK 2015 Stream:\n>>> MASTER-masterdb:79608759203608 SLAVE:79608759189680\n>>> Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877\n>>> seconds)\n>>> Tue Feb 24 15:13:27 MSK 2015 Stream:\n>>> MASTER-masterdb:79608759639680 SLAVE:79608759633224\n>>> Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723\n>>> seconds)\n>>> Tue Feb 24 15:13:28 MSK 2015 Stream:\n>>> MASTER-masterdb:79608760271200 SLAVE:79608760264128\n>>> Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546\n>>> seconds)\n>>> Tue Feb 24 15:13:30 MSK 2015 Stream:\n>>> MASTER-masterdb:79608760622920 SLAVE:79608760616656\n>>> Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645\n>>> seconds)\n>>> Tue Feb 24 15:13:31 MSK 2015 Stream:\n>>> MASTER-masterdb:79608761122040 SLAVE:79608761084584\n>>> Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653\n>>> seconds)\n>>> Tue Feb 24 15:13:32 MSK 2015 Stream:\n>>> MASTER-masterdb:79608761434200 SLAVE:79608761426080\n>>> Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429\n>>> seconds)\n>>> Tue Feb 24 15:13:33 MSK 2015 Stream:\n>>> MASTER-masterdb:79608761931008 SLAVE:79608761904808\n>>> Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498\n>>> seconds)\n>>> *--apply starts*\n>>> Tue Feb 24 15:13:34 MSK 2015 Stream:\n>>> MASTER-masterdb:79608762360568 SLAVE:79608762325712\n>>> Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423\n>>> seconds)\n>>> Tue Feb 24 15:13:35 MSK 2015 Stream:\n>>> MASTER-masterdb:79608762891224 SLAVE:79608762885928\n>>> Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046\n>>> seconds)\n>>> Tue Feb 24 15:13:36 MSK 2015 Stream:\n>>> MASTER-masterdb:79608763681920 SLAVE:79608763667256\n>>> Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531\n>>> seconds)\n>>> Tue Feb 24 15:13:37 MSK 2015 Stream:\n>>> MASTER-masterdb:79608764207088 SLAVE:79608764197744\n>>> Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428\n>>> seconds)\n>>> Tue Feb 24 15:13:38 MSK 2015 Stream:\n>>> MASTER-masterdb:79608764857920 SLAVE:79608764832432\n>>> Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467\n>>> seconds)\n>>> Tue Feb 24 15:13:39 MSK 2015 Stream:\n>>> MASTER-masterdb:79608765323360 
SLAVE:79608765281408\n>>> Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874\n>>> seconds)\n>>> Tue Feb 24 15:13:40 MSK 2015 Stream:\n>>> MASTER-masterdb:79608765848240 SLAVE:79608765824520\n>>> Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932\n>>> seconds)\n>>>\n>>>\n>>> All this is a result of completion of \"vacuum verbose analyze\n>>> master_table\" on the master site\n>>>\n>>> Any help would be appreciated\n>>>\n>>> --\n>>> Best regards,\n>>> Sergey Shchukin\n\nHi Jim,\n\nThe version is _PostgreSQL 9.3.6_ on x86_64 RHEL 6.6\n\nshow max_standby_streaming_delay;\n max_standby_streaming_delay\n-----------------------------\n 30s
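\n\nFor completeness, the REPLAY lag figures in the log above can be reproduced on the master with a query along these lines (9.3 catalogs; pg_xlog_location_diff() gives the lag in bytes, and the lag_byte aliases are illustrative, not the exact monitoring script used here):\n\nSELECT pid, application_name, state, sent_location, replay_location,\n       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS lag_byte,\n       pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(), replay_location)::bigint) AS lag_byte_nice\n  FROM pg_stat_replication;\n\nand the apply delay itself is visible on the standby via:\n\nSELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;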
\n\n-\nBest regards,\nSergey Shchukin",
"msg_date": "Fri, 27 Feb 2015 14:11:14 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [pgadmin-support] Issue with a hanging apply process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "27.02.2015 14:11, Sergey Shchukin пишет:\n> 27.02.2015 11:52, Jim Nasby пишет:\n>> On 2/26/15 12:25 AM, Sergey Shchukin wrote:\n>>> Hi Radovan !\n>>>\n>>> Thank you for the reply. The question is that this table is not a\n>>> subject for a massive updates/deletes.\n>>>\n>>> Is there any additional traces except from perf or pg_top to trace what\n>>> replica is doing at the particular moment when we are lagging in \n>>> replay?\n>>> To see locks or spins or sleeps etc..\n>>\n>> Please don't top-post.\n>>\n>> What version is this? What is max_standby_streaming_delay set to?\n>>\n>>> Thank you!\n>>>\n>>> -\n>>>\n>>> Best regards,\n>>> Sergey Shchukin\n>>>\n>>> 24.02.2015 19:05, Radovan Jablonovsky пишет:\n>>>> This looks like more issue for pgsql-general mailing list.\n>>>>\n>>>> Possible solutions\n>>>> 1) Set specific autovacuum parameters on the big table. The autovacuum\n>>>> could vacuum table on multiple runs based on the thresholds and cost\n>>>> settings\n>>>> Example of setting specific values of autovacuum and analyze for\n>>>> table. It should be adjusted for your system, work load, table \n>>>> usage, etc:\n>>>> alter table \"my_schema\".\"my_big_table\" set (fillfactor = 80,\n>>>> autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,\n>>>> autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =\n>>>> 0.05, autovacuum_analyze_scale_factor = 0.005,\n>>>> autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = \n>>>> 5000);\n>>>>\n>>>> 2) Could be to partition the large table on master site and vacuum it\n>>>> partition by partition.\n>>>>\n>>>> On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin\n>>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>>\n>>>> Hi all!\n>>>>\n>>>> May someone help me with the issue in the apply process on the\n>>>> replica. We have a stream replication and after vacuum stops\n>>>> working with a big table we get a \"freeze\" in applying data on the\n>>>> replica database. 
It looks like this:\n>>>>\n>>>> Tue Feb 24 15:04:51 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607136410456 SLAVE:79607136410456\n>>>> Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:52 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607137892672 SLAVE:79607137715392\n>>>> Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:53 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607139327776 SLAVE:79607139241816\n>>>> Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 seconds)\n>>>> Tue Feb 24 15:04:54 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607141134776 SLAVE:79607141073344\n>>>> Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:55 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607143085176 SLAVE:79607143026440\n>>>> Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:56 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607145111280 SLAVE:79607145021384\n>>>> Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:57 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607146564424 SLAVE:79607146478336\n>>>> Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 seconds)\n>>>> Tue Feb 24 15:04:58 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607148160680 SLAVE:79607148108352\n>>>> Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 \n>>>> seconds)\n>>>> Tue Feb 24 15:04:59 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607150220688 SLAVE:79607150159632\n>>>> Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:00 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607152365360 SLAVE:79607152262696\n>>>> Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:02 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607154049848 SLAVE:79607154012624\n>>>> Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:03 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607155229992 SLAVE:79607155187864\n>>>> Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:04 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607156833968 SLAVE:79607156764128\n>>>> Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:05 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607158419848 SLAVE:79607158344856\n>>>> Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:06 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607160004776 SLAVE:79607159962400\n>>>> Replay:79607159988888 :: REPLAY 16 KBytes (00:00:00.003162 \n>>>> seconds)\n>>>> *--here apply process just stops*\n>>>>\n>>>> Tue Feb 24 15:05:07 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607161592048 SLAVE:79607161550576\n>>>> Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:08 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607163272840 SLAVE:79607163231384\n>>>> Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:09 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607164958632 SLAVE:79607164904448\n>>>> Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:10 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607166819560 SLAVE:79607166777712\n>>>> Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:11 MSK 
2015 Stream:\n>>>> MASTER-masterdb:79607168595280 SLAVE:79607168566536\n>>>> Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:12 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607170372064 SLAVE:79607170252480\n>>>> Replay:79607160986064 :: REPLAY 9166 KBytes (00:00:05.635918 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:13 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607171829480 SLAVE:79607171714144\n>>>> Replay:79607160986064 :: REPLAY 10589 KBytes (00:00:06.688115 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:14 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607173152488 SLAVE:79607173152488\n>>>> Replay:79607160986064 :: REPLAY 11881 KBytes (00:00:07.736993 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:15 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607174149968 SLAVE:79607174149968\n>>>> Replay:79607160986064 :: REPLAY 12855 KBytes (00:00:08.78538 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:16 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607176448344 SLAVE:79607176252088\n>>>> Replay:79607160986064 :: REPLAY 15100 KBytes (00:00:09.835184 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:17 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607177632216 SLAVE:79607177608224\n>>>> Replay:79607160986064 :: REPLAY 16256 KBytes (00:00:10.926493 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:18 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607179432960 SLAVE:79607179378096\n>>>> Replay:79607160986064 :: REPLAY 18015 KBytes (00:00:11.97989 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:19 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607180893384 SLAVE:79607180874256\n>>>> Replay:79607160986064 :: REPLAY 19441 KBytes (00:00:13.028921 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:20 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607182596224 SLAVE:79607182552272\n>>>> Replay:79607160986064 :: REPLAY 21104 KBytes (00:00:14.079497 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:21 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607183935312 SLAVE:79607183902592\n>>>> Replay:79607160986064 :: REPLAY 22411 KBytes (00:00:15.127679 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:23 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607185165880 SLAVE:79607185094032\n>>>> Replay:79607160986064 :: REPLAY 23613 KBytes (00:00:16.175132 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:24 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607187196920 SLAVE:79607187169368\n>>>> Replay:79607160986064 :: REPLAY 25596 KBytes (00:00:17.221981 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:25 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607188943856 SLAVE:79607188885952\n>>>> Replay:79607160986064 :: REPLAY 27302 KBytes (00:00:18.274362 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:26 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607190489400 SLAVE:79607190443160\n>>>> Replay:79607160986064 :: REPLAY 28812 KBytes (00:00:19.319987 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:27 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607192089312 SLAVE:79607192054048\n>>>> Replay:79607160986064 :: REPLAY 30374 KBytes (00:00:20.372305 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:28 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607193736800 SLAVE:79607193690056\n>>>> Replay:79607160986064 :: REPLAY 31983 KBytes (00:00:21.421359 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:29 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607195968648 SLAVE:79607195901296\n>>>> Replay:79607160986064 :: REPLAY 34163 KBytes (00:00:22.471334 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:30 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607197808840 SLAVE:79607197737720\n>>>> Replay:79607160986064 :: REPLAY 35960 KBytes (00:00:23.52269 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:31 MSK 2015 Stream:\n>>>> 
MASTER-masterdb:79607199571144 SLAVE:79607199495976\n>>>> Replay:79607160986064 :: REPLAY 37681 KBytes (00:00:24.577615 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:32 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607201206104 SLAVE:79607201100392\n>>>> Replay:79607160986064 :: REPLAY 39277 KBytes (00:00:25.624604 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:33 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607203174208 SLAVE:79607203111136\n>>>> Replay:79607160986064 :: REPLAY 41199 KBytes (00:00:26.67059 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:34 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607204792888 SLAVE:79607204741600\n>>>> Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:35 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607206453216 SLAVE:79607206409032\n>>>> Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 \n>>>> seconds)\n>>>> Tue Feb 24 15:05:36 MSK 2015 Stream:\n>>>> MASTER-masterdb:79607208225344 SLAVE:79607208142176\n>>>> Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 \n>>>> seconds)\n>>>>\n>>>>\n>>>> perf shows the following functions on the top\n>>>> + 22.50% postmaster [kernel.kallsyms] [k] \n>>>> copy_user_generic_string\n>>>> + 8.48% postmaster postgres [.]\n>>>> hash_search_with_hash_value\n>>>>\n>>>>\n>>>> after 10 minutes or so the apply process continue to work\n>>>>\n>>>> Tue Feb 24 15:13:25 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608758742560 SLAVE:79608758718008\n>>>> Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653\n>>>> seconds)\n>>>> Tue Feb 24 15:13:26 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608759203608 SLAVE:79608759189680\n>>>> Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877\n>>>> seconds)\n>>>> Tue Feb 24 15:13:27 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608759639680 SLAVE:79608759633224\n>>>> Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723\n>>>> seconds)\n>>>> Tue Feb 24 15:13:28 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608760271200 SLAVE:79608760264128\n>>>> Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546\n>>>> seconds)\n>>>> Tue Feb 24 15:13:30 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608760622920 SLAVE:79608760616656\n>>>> Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645\n>>>> seconds)\n>>>> Tue Feb 24 15:13:31 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608761122040 SLAVE:79608761084584\n>>>> Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653\n>>>> seconds)\n>>>> Tue Feb 24 15:13:32 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608761434200 SLAVE:79608761426080\n>>>> Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429\n>>>> seconds)\n>>>> Tue Feb 24 15:13:33 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608761931008 SLAVE:79608761904808\n>>>> Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498\n>>>> seconds)\n>>>> *--apply starts*\n>>>> Tue Feb 24 15:13:34 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608762360568 SLAVE:79608762325712\n>>>> Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423\n>>>> seconds)\n>>>> Tue Feb 24 15:13:35 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608762891224 SLAVE:79608762885928\n>>>> Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046\n>>>> seconds)\n>>>> Tue Feb 24 15:13:36 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608763681920 SLAVE:79608763667256\n>>>> Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531\n>>>> seconds)\n>>>> Tue Feb 24 15:13:37 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608764207088 SLAVE:79608764197744\n>>>> 
Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428\n>>>> seconds)\n>>>> Tue Feb 24 15:13:38 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608764857920 SLAVE:79608764832432\n>>>> Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467\n>>>> seconds)\n>>>> Tue Feb 24 15:13:39 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608765323360 SLAVE:79608765281408\n>>>> Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874\n>>>> seconds)\n>>>> Tue Feb 24 15:13:40 MSK 2015 Stream:\n>>>> MASTER-masterdb:79608765848240 SLAVE:79608765824520\n>>>> Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932\n>>>> seconds)\n>>>>\n>>>>\n>>>> All this is a result of completion of \"vacuum verbose analyze\n>>>> master_table\" on the master site\n>>>>\n>>>> Any help would be appreciated\n>>>>\n>>>> --\n>>>> Best regards,\n>>>> Sergey Shchukin\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> -- \n>>>>\n>>>> *Radovan Jablonovsky* | SaaS DBA | Phone 1-403-262-6519 (ext. 7256) |\n>>>> Fax 1-403-233-8046\n>>>>\n>>>>\n>>>>\n>>>\n>>\n>>\n> Hi Jim,\n>\n> The version is _PostgreSQL 9.3.6_ on x86_64 RHEL 6.6\n>\n> show max_standby_streaming_delay;\n> max_standby_streaming_delay\n> -----------------------------\n> 30s\n>\n\nAgain, after the vacuum finished on my table I got locks in the apply \nprocess on the replica - *see lag_byte*\n\nmasterdb01d/masterdb M # vacuum verbose rtable.rtable_uidl;\nINFO: 00000: vacuuming \"rtable.rtable_uidl\"\nLOCATION: lazy_scan_heap, vacuumlazy.c:438\nINFO: 00000: scanned index \"pk_rtable_uidl\" to remove 6 row versions\nDETAIL: CPU 240.80s/183.19u sec elapsed 703.85 sec.\nLOCATION: lazy_vacuum_index, vacuumlazy.c:1335\nINFO: 00000: \"rtable_uidl\": removed 6 row versions in 6 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nLOCATION: lazy_vacuum_heap, vacuumlazy.c:1169\nINFO: 00000: index \"pk_rtable_uidl\" now contains 3763411079 row \nversions in 32755911 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nLOCATION: lazy_cleanup_index, vacuumlazy.c:1387\nINFO: 00000: \"rtable_uidl\": found 6 removable, 1426488 nonremovable row \nversions in 12734 out of 26047416 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 3 unused item pointers.\n0 pages are entirely empty.\nCPU 241.37s/184.04u sec elapsed 705.45 sec.\nLOCATION: lazy_scan_heap, vacuumlazy.c:1101\nVACUUM\nTime: 705685.954 ms\n\n\n\nmasterdb01d/postgres M # \\g\n-[ RECORD 1 ]----+------------------------------\nprocpid | 21487\nusesysid | 16413\nusename | repl\napplication_name | walreceiver\nclient_addr |\nclient_hostname | masterdb01e\nclient_port | 35261\nbackend_start | 2015-02-27 13:02:27.203938+03\nstate | streaming\nsent_location | 494B/CB30B530\nwrite_location | 494B/CB30B530\nflush_location | 494B/CB30B530\nreplay_location | *494B/A02B9070 <<< stopped here!*\nsync_priority | 0\nsync_state | async\ntotal_lag_byte | 721757376\ntotal_lag_nice | 688 MB\nlag_byte | 721757376\nlag_byte_nice | 688 MB
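\n\nThe total_lag_byte and *_nice columns are not stock pg_stat_replication fields, so they presumably come from a wrapper query; an equivalent can be computed on the master with the stock 9.3 view - a minimal sketch, with illustrative output column names:\n\n-- on the master; pg_xlog_* names are the 9.3 spellings\nSELECT application_name, client_hostname, replay_location,\n       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS lag_byte,\n       pg_size_pretty(pg_xlog_location_diff(pg_current_xlog_location(), replay_location)::bigint) AS lag_byte_nice\nFROM pg_stat_replication;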
\n\n--\nBest regards,\nSergey Shchukin",
"msg_date": "Fri, 27 Feb 2015 14:42:39 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [pgadmin-support] Issue with a hanging apply process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "27.02.2015 14:42, Sergey Shchukin пишет:\n> 27.02.2015 14:11, Sergey Shchukin пишет:\n>> 27.02.2015 11:52, Jim Nasby пишет:\n>>> On 2/26/15 12:25 AM, Sergey Shchukin wrote:\n>>>> Hi Radovan !\n>>>>\n>>>> Thank you for the reply. The question is that this table is not a\n>>>> subject for a massive updates/deletes.\n>>>>\n>>>> Is there any additional traces except from perf or pg_top to trace \n>>>> what\n>>>> replica is doing at the particular moment when we are lagging in \n>>>> replay?\n>>>> To see locks or spins or sleeps etc..\n>>>\n>>> Please don't top-post.\n>>>\n>>> What version is this? What is max_standby_streaming_delay set to?\n>>>\n>>>> Thank you!\n>>>>\n>>>> -\n>>>>\n>>>> Best regards,\n>>>> Sergey Shchukin\n>>>>\n>>>> 24.02.2015 19:05, Radovan Jablonovsky пишет:\n>>>>> This looks like more issue for pgsql-general mailing list.\n>>>>>\n>>>>> Possible solutions\n>>>>> 1) Set specific autovacuum parameters on the big table. The \n>>>>> autovacuum\n>>>>> could vacuum table on multiple runs based on the thresholds and cost\n>>>>> settings\n>>>>> Example of setting specific values of autovacuum and analyze for\n>>>>> table. It should be adjusted for your system, work load, table \n>>>>> usage, etc:\n>>>>> alter table \"my_schema\".\"my_big_table\" set (fillfactor = 80,\n>>>>> autovacuum_enabled = true, autovacuum_vacuum_threshold = 200,\n>>>>> autovacuum_analyze_threshold = 400, autovacuum_vacuum_scale_factor =\n>>>>> 0.05, autovacuum_analyze_scale_factor = 0.005,\n>>>>> autovacuum_vacuum_cost_delay = 10, autovacuum_vacuum_cost_limit = \n>>>>> 5000);\n>>>>>\n>>>>> 2) Could be to partition the large table on master site and vacuum it\n>>>>> partition by partition.\n>>>>>\n>>>>> On Tue, Feb 24, 2015 at 6:42 AM, Sergey Shchukin\n>>>>> <[email protected] <mailto:[email protected]>> wrote:\n>>>>>\n>>>>> Hi all!\n>>>>>\n>>>>> May someone help me with the issue in the apply process on the\n>>>>> replica. We have a stream replication and after vacuum stops\n>>>>> working with a big table we get a \"freeze\" in applying data on \n>>>>> the\n>>>>> replica database. 
It looks like this:\n>>>>>\n>>>>> Tue Feb 24 15:04:51 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607136410456 SLAVE:79607136410456\n>>>>> Replay:79607136339456 :: REPLAY 69 KBytes (00:00:00.294485 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:52 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607137892672 SLAVE:79607137715392\n>>>>> Replay:79607137715392 :: REPLAY 173 KBytes (00:00:00.142605 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:53 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607139327776 SLAVE:79607139241816\n>>>>> Replay:79607139241816 :: REPLAY 84 KBytes (00:00:00.05223 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:54 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607141134776 SLAVE:79607141073344\n>>>>> Replay:79607141080032 :: REPLAY 54 KBytes (00:00:00.010603 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:55 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607143085176 SLAVE:79607143026440\n>>>>> Replay:79607143038040 :: REPLAY 46 KBytes (00:00:00.009506 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:56 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607145111280 SLAVE:79607145021384\n>>>>> Replay:79607145025664 :: REPLAY 83 KBytes (00:00:00.006795 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:57 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607146564424 SLAVE:79607146478336\n>>>>> Replay:79607146501264 :: REPLAY 61 KBytes (00:00:00.00701 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:58 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607148160680 SLAVE:79607148108352\n>>>>> Replay:79607147369320 :: REPLAY 773 KBytes (00:00:00.449702 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:04:59 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607150220688 SLAVE:79607150159632\n>>>>> Replay:79607150171312 :: REPLAY 48 KBytes (00:00:00.006594 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:00 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607152365360 SLAVE:79607152262696\n>>>>> Replay:79607152285240 :: REPLAY 78 KBytes (00:00:00.007042 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:02 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607154049848 SLAVE:79607154012624\n>>>>> Replay:79607153446800 :: REPLAY 589 KBytes (00:00:00.513637 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:03 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607155229992 SLAVE:79607155187864\n>>>>> Replay:79607155188312 :: REPLAY 41 KBytes (00:00:00.004773 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:04 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607156833968 SLAVE:79607156764128\n>>>>> Replay:79607156785488 :: REPLAY 47 KBytes (00:00:00.006846 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:05 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607158419848 SLAVE:79607158344856\n>>>>> Replay:79607158396352 :: REPLAY 23 KBytes (00:00:00.005228 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:06 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607160004776 SLAVE:79607159962400\n>>>>> Replay:79607159988888 :: REPLAY 16 KBytes (00:00:00.003162 \n>>>>> seconds)\n>>>>> *--here apply process just stops*\n>>>>>\n>>>>> Tue Feb 24 15:05:07 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607161592048 SLAVE:79607161550576\n>>>>> Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:08 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607163272840 SLAVE:79607163231384\n>>>>> Replay:79607160986064 :: REPLAY 2233 KBytes (00:00:01.446759 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:09 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607164958632 SLAVE:79607164904448\n>>>>> Replay:79607160986064 :: REPLAY 3879 KBytes (00:00:02.497181 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:10 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607166819560 SLAVE:79607166777712\n>>>>> 
Replay:79607160986064 :: REPLAY 5697 KBytes (00:00:03.543107 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:11 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607168595280 SLAVE:79607168566536\n>>>>> Replay:79607160986064 :: REPLAY 7431 KBytes (00:00:04.589736 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:12 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607170372064 SLAVE:79607170252480\n>>>>> Replay:79607160986064 :: REPLAY 9166 KBytes (00:00:05.635918 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:13 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607171829480 SLAVE:79607171714144\n>>>>> Replay:79607160986064 :: REPLAY 10589 KBytes (00:00:06.688115 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:14 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607173152488 SLAVE:79607173152488\n>>>>> Replay:79607160986064 :: REPLAY 11881 KBytes (00:00:07.736993 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:15 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607174149968 SLAVE:79607174149968\n>>>>> Replay:79607160986064 :: REPLAY 12855 KBytes (00:00:08.78538 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:16 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607176448344 SLAVE:79607176252088\n>>>>> Replay:79607160986064 :: REPLAY 15100 KBytes (00:00:09.835184 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:17 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607177632216 SLAVE:79607177608224\n>>>>> Replay:79607160986064 :: REPLAY 16256 KBytes (00:00:10.926493 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:18 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607179432960 SLAVE:79607179378096\n>>>>> Replay:79607160986064 :: REPLAY 18015 KBytes (00:00:11.97989 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:19 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607180893384 SLAVE:79607180874256\n>>>>> Replay:79607160986064 :: REPLAY 19441 KBytes (00:00:13.028921 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:20 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607182596224 SLAVE:79607182552272\n>>>>> Replay:79607160986064 :: REPLAY 21104 KBytes (00:00:14.079497 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:21 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607183935312 SLAVE:79607183902592\n>>>>> Replay:79607160986064 :: REPLAY 22411 KBytes (00:00:15.127679 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:23 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607185165880 SLAVE:79607185094032\n>>>>> Replay:79607160986064 :: REPLAY 23613 KBytes (00:00:16.175132 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:24 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607187196920 SLAVE:79607187169368\n>>>>> Replay:79607160986064 :: REPLAY 25596 KBytes (00:00:17.221981 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:25 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607188943856 SLAVE:79607188885952\n>>>>> Replay:79607160986064 :: REPLAY 27302 KBytes (00:00:18.274362 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:26 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607190489400 SLAVE:79607190443160\n>>>>> Replay:79607160986064 :: REPLAY 28812 KBytes (00:00:19.319987 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:27 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607192089312 SLAVE:79607192054048\n>>>>> Replay:79607160986064 :: REPLAY 30374 KBytes (00:00:20.372305 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:28 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607193736800 SLAVE:79607193690056\n>>>>> Replay:79607160986064 :: REPLAY 31983 KBytes (00:00:21.421359 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:29 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607195968648 SLAVE:79607195901296\n>>>>> Replay:79607160986064 :: REPLAY 34163 KBytes (00:00:22.471334 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:30 MSK 2015 Stream:\n>>>>> 
MASTER-masterdb:79607197808840 SLAVE:79607197737720\n>>>>> Replay:79607160986064 :: REPLAY 35960 KBytes (00:00:23.52269 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:31 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607199571144 SLAVE:79607199495976\n>>>>> Replay:79607160986064 :: REPLAY 37681 KBytes (00:00:24.577615 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:32 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607201206104 SLAVE:79607201100392\n>>>>> Replay:79607160986064 :: REPLAY 39277 KBytes (00:00:25.624604 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:33 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607203174208 SLAVE:79607203111136\n>>>>> Replay:79607160986064 :: REPLAY 41199 KBytes (00:00:26.67059 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:34 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607204792888 SLAVE:79607204741600\n>>>>> Replay:79607160986064 :: REPLAY 42780 KBytes (00:00:27.719088 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:35 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607206453216 SLAVE:79607206409032\n>>>>> Replay:79607160986064 :: REPLAY 44401 KBytes (00:00:28.766647 \n>>>>> seconds)\n>>>>> Tue Feb 24 15:05:36 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79607208225344 SLAVE:79607208142176\n>>>>> Replay:79607160986064 :: REPLAY 46132 KBytes (00:00:29.811434 \n>>>>> seconds)\n>>>>>\n>>>>>\n>>>>> perf shows the following functions on the top\n>>>>> + 22.50% postmaster [kernel.kallsyms] [k] \n>>>>> copy_user_generic_string\n>>>>> + 8.48% postmaster postgres [.]\n>>>>> hash_search_with_hash_value\n>>>>>\n>>>>>\n>>>>> after 10 minutes or so the apply process continue to work\n>>>>>\n>>>>> Tue Feb 24 15:13:25 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608758742560 SLAVE:79608758718008\n>>>>> Replay:79607160986064 :: REPLAY 1560309 KBytes (00:08:19.009653\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:26 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608759203608 SLAVE:79608759189680\n>>>>> Replay:79607160986064 :: REPLAY 1560759 KBytes (00:08:20.057877\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:27 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608759639680 SLAVE:79608759633224\n>>>>> Replay:79607160986064 :: REPLAY 1561185 KBytes (00:08:21.104723\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:28 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608760271200 SLAVE:79608760264128\n>>>>> Replay:79607160986064 :: REPLAY 1561802 KBytes (00:08:22.148546\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:30 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608760622920 SLAVE:79608760616656\n>>>>> Replay:79607160986064 :: REPLAY 1562145 KBytes (00:08:23.196645\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:31 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608761122040 SLAVE:79608761084584\n>>>>> Replay:79607160986064 :: REPLAY 1562633 KBytes (00:08:24.240653\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:32 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608761434200 SLAVE:79608761426080\n>>>>> Replay:79607160986064 :: REPLAY 1562938 KBytes (00:08:25.289429\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:33 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608761931008 SLAVE:79608761904808\n>>>>> Replay:79607160986064 :: REPLAY 1563423 KBytes (00:08:26.338498\n>>>>> seconds)\n>>>>> *--apply starts*\n>>>>> Tue Feb 24 15:13:34 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608762360568 SLAVE:79608762325712\n>>>>> Replay:79607163554680 :: REPLAY 1561334 KBytes (00:08:25.702423\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:35 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608762891224 SLAVE:79608762885928\n>>>>> Replay:79607166466488 :: REPLAY 1559008 KBytes (00:08:25.011046\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:36 MSK 2015 Stream:\n>>>>> 
MASTER-masterdb:79608763681920 SLAVE:79608763667256\n>>>>> Replay:79607167054056 :: REPLAY 1559207 KBytes (00:08:25.827531\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:37 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608764207088 SLAVE:79608764197744\n>>>>> Replay:79607175610296 :: REPLAY 1551364 KBytes (00:08:21.182428\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:38 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608764857920 SLAVE:79608764832432\n>>>>> Replay:79607183599632 :: REPLAY 1544197 KBytes (00:08:16.742467\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:39 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608765323360 SLAVE:79608765281408\n>>>>> Replay:79607186862176 :: REPLAY 1541466 KBytes (00:08:15.569874\n>>>>> seconds)\n>>>>> Tue Feb 24 15:13:40 MSK 2015 Stream:\n>>>>> MASTER-masterdb:79608765848240 SLAVE:79608765824520\n>>>>> Replay:79607186862176 :: REPLAY 1541978 KBytes (00:08:16.620932\n>>>>> seconds)\n>>>>>\n>>>>>\n>>>>> All this is a result of completion of \"vacuum verbose analyze\n>>>>> master_table\" on the master site\n>>>>>\n>>>>> Any help would be appreciated\n>>>>>\n>>>>> --\n>>>>> Best regards,\n>>>>> Sergey Shchukin\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> -- \n>>>>>\n>>>>> *Radovan Jablonovsky* | SaaS DBA | Phone 1-403-262-6519 (ext. 7256) |\n>>>>> Fax 1-403-233-8046\n>>>>>\n>>>>>\n>>>>>\n>>>>\n>>>\n>>>\n>> Hi Jim,\n>>\n>> The version is _PostgreSQL 9.3.6_ on x86_64 RHEL 6.6\n>>\n>> show max_standby_streaming_delay;\n>> max_standby_streaming_delay\n>> -----------------------------\n>> 30s\n>>\n>\n> Again, after the vacuum finished on my table I got locks in apply \n> process on replica - *see lag_byte *\n>\n> masterdb01d/masterdb M # vacuum verbose rtable.rtable_uidl;\n> INFO: 00000: vacuuming \"rtable.rtable_uidl\"\n> LOCATION: lazy_scan_heap, vacuumlazy.c:438\n> INFO: 00000: scanned index \"pk_rtable_uidl\" to remove 6 row versions\n> DETAIL: CPU 240.80s/183.19u sec elapsed 703.85 sec.\n> LOCATION: lazy_vacuum_index, vacuumlazy.c:1335\n> INFO: 00000: \"rtable_uidl\": removed 6 row versions in 6 pages\n> DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> LOCATION: lazy_vacuum_heap, vacuumlazy.c:1169\n> INFO: 00000: index \"pk_rtable_uidl\" now contains 3763411079 row \n> versions in 32755911 pages\n> DETAIL: 0 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> LOCATION: lazy_cleanup_index, vacuumlazy.c:1387\n> INFO: 00000: \"rtable_uidl\": found 6 removable, 1426488 nonremovable \n> row versions in 12734 out of 26047416 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 3 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 241.37s/184.04u sec elapsed 705.45 sec.\n> LOCATION: lazy_scan_heap, vacuumlazy.c:1101\n> VACUUM\n> Time: 705685.954 ms\n>\n>\n>\n> masterdb01d/postgres M # \\g\n> -[ RECORD 1 ]----+------------------------------\n> procpid | 21487\n> usesysid | 16413\n> usename | repl\n> application_name | walreceiver\n> client_addr |\n> client_hostname | masterdb01e\n> client_port | 35261\n> backend_start | 2015-02-27 13:02:27.203938+03\n> state | streaming\n> sent_location | 494B/CB30B530\n> write_location | 494B/CB30B530\n> flush_location | 494B/CB30B530\n> replay_location | *494B/A02B9070 <<< stopped here!1*\n> sync_priority | 0\n> sync_state | async\n> total_lag_byte | 721757376\n> total_lag_nice | 688 MB\n> lag_byte | 721757376\n> lag_byte_nice | 688 MB\n>\n> --\n> Best regards,\n> Sergey Shchukin\n\nHi All!\n\nThe issue is repeating still...\n\nSome updates:\n\nThe size of 
the rtable_uidl is about 3 773 185 761 rows, quite big\n\nBefore vacuum\n\n schema_size | schema_size_with_indexes\n-------------+--------------------------\n 199 GB | 450 GB\n\n\nrtabledb01e/rtabledb R # select * from pg_stat_all_tables where \nschemaname = 'rtable' and relname ='rtable_uidl';\n-[ RECORD 1 ]-----+-----------\nrelid | 16511\nschemaname | rtable\nrelname | rtable_uidl\nseq_scan | 2\nseq_tup_read | 7546016953\nidx_scan | 3008619\nidx_tup_fetch | 44139478\nn_tup_ins | 0\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 0\nn_dead_tup | 0\nlast_vacuum | [null]\nlast_autovacuum | [null]\nlast_analyze | [null]\nlast_autoanalyze | [null]\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 0\n\n\n\nServer spec: 2 Xeon E5-2660, 128GB RAM, disk subsystem\n\n/dev/md2 /var/lib/pgsql/9.3/data\n/dev/md3 /var/lib/pgsql/9.3/data/pg_xlog\n\nThe RAIDs are RAID10 on top of 8x300GB Intel SSD disks. Both RAIDs use the \nsame disks, which are split into two parts (ex. md2:sdb1.., md3:sdb2...).\n\nIt should be mentioned that we see significant disk activity on the replica \nDBs during the lockup of the apply process (see disks_load.png)\n\nOS: RHEL 6.6\n\nNon-default parameters - please check the attachment\n\nAfter the vacuum\nrtabledb01d/rtabledb M # vacuum verbose rtable.rtable_uidl;\nINFO: 00000: vacuuming \"rtable.rtable_uidl\"\nLOCATION: lazy_scan_heap, vacuumlazy.c:438\nINFO: 00000: scanned index \"pk_rtable_uidl\" to remove 3 row versions\nDETAIL: CPU 251.85s/165.91u sec elapsed 743.44 sec.\nLOCATION: lazy_vacuum_index, vacuumlazy.c:1335\nINFO: 00000: \"rtable_uidl\": removed 3 row versions in 3 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nLOCATION: lazy_vacuum_heap, vacuumlazy.c:1169\nINFO: 00000: index \"pk_rtable_uidl\" now contains 3773254753 row \nversions in 32846328 pages\nDETAIL: 0 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nLOCATION: lazy_cleanup_index, vacuumlazy.c:1387\nINFO: 00000: \"rtable_uidl\": found 3 removable, 2589545 nonremovable row \nversions in 20398 out of 26102225 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 253.20s/167.60u sec elapsed 749.44 sec.\nLOCATION: lazy_scan_heap, vacuumlazy.c:1101\nVACUUM\nTime: 750141.401 ms\n\n date\n------------\n 2015-03-03\n(1 row)\n\n timetz\n--------------------\n 16:35:50.245227+03\n(1 row)\n\n\nReplay stopped on replicas\nTue Mar 3 16:35:12 MSK 2015 Stream: MASTER-rtabledb01d:81829147840528 \nSLAVE:81829147765224 Replay:*81827156622184 *:: REPLAY 1944550 KBytes \n(00:07:49.816045 seconds)\nTue Mar 3 16:35:17 MSK 2015 Stream: MASTER-rtabledb01d:81829156031576 \nSLAVE:81829155989056 Replay:*81827156622184 *:: REPLAY 1952549 KBytes \n(00:07:54.87067 seconds)\nTue Mar 3 16:35:22 MSK 2015 Stream: MASTER-rtabledb01d:81829165013208 \nSLAVE:81829164949312 Replay:*81827156622184 *:: REPLAY 1961320 KBytes \n(00:07:59.927685 seconds)\nTue Mar 3 16:35:27 MSK 2015 Stream: MASTER-rtabledb01d:81829172941288 \nSLAVE:81829172887000 Replay:*81827156622184 *:: REPLAY 1969062 KBytes \n(00:08:04.977663 seconds)\nTue Mar 3 16:35:32 MSK 2015 Stream: MASTER-rtabledb01d:81829181088512 \nSLAVE:81829181063304 Replay:*81827156622184 *:: REPLAY 1977018 KBytes \n(00:08:10.033499 seconds)\nTue Mar 3 16:35:37 MSK 2015 Stream: MASTER-rtabledb01d:81829191442216 \nSLAVE:81829191364192 Replay:*81827156622184 *:: REPLAY 1987129 KBytes \n(00:08:15.085862 
seconds)\nTue Mar 3 16:35:42 MSK 2015 Stream: MASTER-rtabledb01d:81829204580736 \nSLAVE:81829204444136 Replay:*81827156622184 *:: REPLAY 1999960 KBytes \n(00:08:20.13815 seconds)\nTue Mar 3 16:35:47 MSK 2015 Stream: MASTER-rtabledb01d:81829218243240 \nSLAVE:81829218115272 Replay:*81827156622184 *:: REPLAY 2013302 KBytes \n(00:08:25.190515 seconds)\n\nAfter the vacuum\n\nschema_size | schema_size_with_indexes\n-------------+--------------------------\n 199 GB | 450 GB\n\nrtabledb01d/rtabledb M # select * from pg_stat_all_tables where \nschemaname = 'rtable' and relname ='rtable_uidl';\n-[ RECORD 1 ]-----+------------------------------\nrelid | 16511\nschemaname | rtable\nrelname | rtable_uidl\nseq_scan | 1\nseq_tup_read | 14223712\nidx_scan | 1370133\nidx_tup_fetch | 28596708\nn_tup_ins | 12473820\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 3771361529\nn_dead_tup | 0\nlast_vacuum | 2015-03-03 16:27:23.231513+03\nlast_autovacuum | 2015-03-03 07:11:50.891016+03\nlast_analyze | 2015-02-26 19:50:28.097629+03\nlast_autoanalyze | 2015-03-03 14:09:14.463953+03\nvacuum_count | 4\nautovacuum_count | 5\nanalyze_count | 1\nautoanalyze_count | 3\n\nThanks in advance!\n\nBest regards,\nSergey Shchukin",
"msg_date": "Tue, 03 Mar 2015 16:54:49 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [pgadmin-support] Issue with a hanging apply\n process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "On 2/27/15 5:11 AM, Sergey Shchukin wrote:\n>\n> show max_standby_streaming_delay;\n> max_standby_streaming_delay\n> -----------------------------\n> 30s\n\nWe both need to be more clear about which server we're talking about \n(master or replica).\n\nWhat are max_standby_streaming_delay and max_standby_archive_delay set \nto *on the replica*?\n\nMy hope is that one or both of those is set to somewhere around 8 \nminutes on the replica. That would explain everything.\n\nIf that's not the case then I suspect what's happening is there's \nsomething running on the replica that isn't checking for interrupts \nfrequently enough. That would also explain it.\n\nWhen replication hangs, is the replication process using a lot of CPU? \nOr is it just sitting there? What's the process status for the replay \nprocess show?\n\nCan you get a trace of the replay process on the replica when this is \nhappening to see where it's spending all it's time?\n\nHow are you generating these log lines?\n Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 \nSLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes \n(00:00:00.398376 seconds)\n\nDo you see the confl_* fields in pg_stat_database_conflicts on the \n*replica* increasing?\n-- \nJim Nasby, Data Architect, Blue Treble Consulting\nData in Trouble? Get it in Treble! http://BlueTreble.com\n\n\n-- \nSent via pgsql-general mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-general\n",
"msg_date": "Thu, 5 Mar 2015 02:25:42 -0600",
"msg_from": "Jim Nasby <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Re: [pgadmin-support] Issue with a hanging apply process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "05.03.2015 11:25, Jim Nasby пишет:\n> On 2/27/15 5:11 AM, Sergey Shchukin wrote:\n>>\n>> show max_standby_streaming_delay;\n>> max_standby_streaming_delay\n>> -----------------------------\n>> 30s\n>\n> We both need to be more clear about which server we're talking about \n> (master or replica).\n>\n> What are max_standby_streaming_delay and max_standby_archive_delay set \n> to *on the replica*?\n>\n> My hope is that one or both of those is set to somewhere around 8 \n> minutes on the replica. That would explain everything.\n>\n> If that's not the case then I suspect what's happening is there's \n> something running on the replica that isn't checking for interrupts \n> frequently enough. That would also explain it.\n>\n> When replication hangs, is the replication process using a lot of CPU? \n> Or is it just sitting there? What's the process status for the replay \n> process show?\n>\n> Can you get a trace of the replay process on the replica when this is \n> happening to see where it's spending all it's time?\n>\n> How are you generating these log lines?\n> Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 \n> SLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes \n> (00:00:00.398376 seconds)\n>\n> Do you see the confl_* fields in pg_stat_database_conflicts on the \n> *replica* increasing?\n\nHi Jim,\n\nmax_standby_streaming_delay and max_standby_archive_delay both are 30s \non master and replica dbs\n\nI don't see any specific or heavy workload during this issue with a \nhanging apply process. Just a normal queries as usual.\n\nBut I see an increased disk activity during the time when the apply \nissue is ongoing\n\nDSK | sdc | | *busy 61%* | read 11511 \n| | write 4534 | KiB/r 46 | | \nKiB/w 4 | MBr/s 52.78 | | MBw/s 1.88 | avq 1.45 \n| | avio 0.38 ms |\nDSK | sde | | *busy 60% * | read 11457 \n| | write 4398 | KiB/r 46 | | \nKiB/w 4 | MBr/s 51.97 | | MBw/s 1.83 | avq 1.47 \n| | avio 0.38 ms |\nDSK | sdd | |*busy 60%* | read 9673 \n| | write 4538 | KiB/r 61 | | \nKiB/w 4 | MBr/s 58.24 | | MBw/s 1.88 | avq 1.47 \n| | avio 0.42 ms |\nDSK | sdj | | *busy 59%* | read 9576 \n| | write 4177 | KiB/r 63 | | \nKiB/w 4 | MBr/s 59.30 | | MBw/s 1.75 | avq 1.48 \n| | avio 0.43 ms |\nDSK | sdh | | *busy 59%* | read 9615 \n| | write 4305 | KiB/r 63 | | \nKiB/w 4 | MBr/s 59.23 | | MBw/s 1.80 | avq 1.48 \n| | avio 0.42 ms |\nDSK | sdf | |*busy 59% * | read 9483 \n| | write 4404 | KiB/r 63 | | \nKiB/w 4 | MBr/s 59.11 | | MBw/s 1.83 | avq 1.47 \n| | avio 0.42 ms |\nDSK | sdi | | *busy 59%* | read 11273 \n| | write 4173 | KiB/r 46 | | \nKiB/w 4 | MBr/s 51.50 | | MBw/s 1.75 | avq 1.43 \n| | avio 0.38 ms |\nDSK | sdg | | *busy 59%* | read 11406 \n| | write 4297 | KiB/r 46 | | \nKiB/w 4 | MBr/s 51.66 | | MBw/s 1.80 | avq 1.46 \n| | avio 0.37 ms |\n\nAlthough it's not seems to be an upper IO limit.\n\nNormally disks are busy at 20-45%\n\nDSK | sde | | busy 29% | read 6524 \n| | write 14426 | KiB/r 26 | | KiB/w 5 | \nMBr/s 17.08 | | MBw/s 7.78 | avq 10.46 \n| | avio 0.14 ms |\nDSK | sdi | | busy 29% | read 6590 \n| | write 14391 | KiB/r 26 | | \nKiB/w 5 | MBr/s 17.19 | | MBw/s 7.76 | avq 8.75 \n| | avio 0.14 ms |\nDSK | sdg | | busy 29% | read 6547 \n| | write 14401 | KiB/r 26 | | \nKiB/w 5 | MBr/s 16.94 | | MBw/s 7.60 | avq 7.28 \n| | avio 0.14 ms |\nDSK | sdc | | busy 29% | read 6835 \n| | write 14283 | KiB/r 27 | | \nKiB/w 5 | MBr/s 18.08 | | MBw/s 7.74 | avq 8.77 \n| | avio 0.14 ms |\nDSK | sdf | | busy 23% | read 3808 \n| | write 14391 | KiB/r 36 | | \nKiB/w 5 | 
MBr/s 13.49 | | MBw/s 7.78 | avq 12.88 \n| | avio 0.13 ms |\nDSK | sdd | | busy 23% | read 3747 \n| | write 14229 | KiB/r 33 | | \nKiB/w 5 | MBr/s 12.32 | | MBw/s 7.74 | avq 10.07 \n| | avio 0.13 ms |\nDSK | sdj | | busy 23% | read 3737 \n| | write 14336 | KiB/r 36 | | \nKiB/w 5 | MBr/s 13.16 | | MBw/s 7.76 | avq 10.48 \n| | avio 0.13 ms |\nDSK | sdh | | busy 23% | read 3793 \n| | write 14362 | KiB/r 35 | | \nKiB/w 5 | MBr/s 13.29 | | MBw/s 7.60 | avq 8.61 \n| | avio 0.13 ms |\n\n\nAlso during the issue perf shows [k] copy_user_generic_string on the top \npositions\n 14.09% postmaster postgres [.] 0x00000000001b4569\n* 10.25% postmaster [kernel.kallsyms] [k] \ncopy_user_generic_string*\n 4.15% postmaster postgres [.] \nhash_search_with_hash_value\n 2.08% postmaster postgres [.] SearchCatCache\n 1.79% postmaster postgres [.] LWLockAcquire\n 1.18% postmaster libc-2.12.so [.] memcpy\n 1.12% postmaster postgres [.] mdnblocks\n\nIssue starts: at 19:43\nMon Mar 16 19:43:04 MSK 2015 Stream: MASTER-rdb04d:70837172337784 \nSLAVE:70837172314864 Replay:70837172316512 :: REPLAY 21 KBytes \n(00:00:00.006225 seconds)\nMon Mar 16 19:43:09 MSK 2015 Stream: MASTER-rdb04d:70837177455624 \nSLAVE:70837177390968 Replay:70837176794376 :: REPLAY 646 KBytes \n(00:00:00.367305 seconds)\n*Mon Mar 16 19:43:14 MSK 2015 Stream: MASTER-rdb04d:70837185005120 \nSLAVE:70837184961280 Replay:70837183253896 :: REPLAY 1710 KBytes \n(00:00:00.827881 seconds)*\nMon Mar 16 19:43:19 MSK 2015 Stream: MASTER-rdb04d:70837190417984 \nSLAVE:70837190230232 Replay:70837183253896 :: REPLAY 6996 KBytes \n(00:00:05.873169 seconds)\nMon Mar 16 19:43:24 MSK 2015 Stream: MASTER-rdb04d:70837198538232 \nSLAVE:70837198485000 Replay:70837183253896 :: REPLAY 14926 KBytes \n(00:00:11.025561 seconds)\nMon Mar 16 19:43:29 MSK 2015 Stream: MASTER-rdb04d:70837209961192 \nSLAVE:70837209869384 Replay:70837183253896 :: REPLAY 26081 KBytes \n(00:00:16.068014 seconds)\n\nWe see___[k] copy_user_generic_string_\n\n* 12.90% postmaster [kernel.kallsyms] [k] copy_user_generic_string*\n 11.49% postmaster postgres [.] 0x00000000001f40c1\n 4.74% postmaster postgres [.] \nhash_search_with_hash_value\n 1.86% postmaster postgres [.] mdnblocks\n 1.73% postmaster postgres [.] LWLockAcquire\n 1.67% postmaster postgres [.] SearchCatCache\n\n\n* 25.71% postmaster [kernel.kallsyms] [k] \ncopy_user_generic_string*\n 7.89% postmaster postgres [.] \nhash_search_with_hash_value\n 4.66% postmaster postgres [.] 0x00000000002108da\n 4.51% postmaster postgres [.] 
mdnblocks\n 3.36% postmaster [kernel.kallsyms] [k] put_page\n\nIssue stops: at 19:51:39\nMon Mar 16 19:51:24 MSK 2015 Stream: MASTER-rdb04d:70838904179344 \nSLAVE:70838903934392 Replay:70837183253896 :: REPLAY 1680591 KBytes \n(00:08:10.384679 seconds)\nMon Mar 16 19:51:29 MSK 2015 Stream: MASTER-rdb04d:70838929994336 \nSLAVE:70838929873624 Replay:70837183253896 :: REPLAY 1705801 KBytes \n(00:08:15.428773 seconds)\nMon Mar 16 19:51:34 MSK 2015 Stream: MASTER-rdb04d:70838951993624 \nSLAVE:70838951899768 Replay:70837183253896 :: REPLAY 1727285 KBytes \n(00:08:20.472567 seconds)\n*Mon Mar 16 19:51:39 MSK 2015 Stream: MASTER-rdb04d:70838975297912 \nSLAVE:70838975180384 Replay:70837208050872 :: REPLAY 1725827 KBytes \n(00:08:10.256935 seconds)*\nMon Mar 16 19:51:44 MSK 2015 Stream: MASTER-rdb04d:70839001502160 \nSLAVE:70839001412616 Replay:70837260116984 :: REPLAY 1700572 KBytes \n(00:07:49.849511 seconds)\nMon Mar 16 19:51:49 MSK 2015 Stream: MASTER-rdb04d:70839022866760 \nSLAVE:70839022751184 Replay:70837276732880 :: REPLAY 1705209 KBytes \n(00:07:42.307364 seconds)\n\nAnd copy_user_generic_string goes down\n+ 13.43% postmaster postgres [.] 0x000000000023dc9a\n*+ 3.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string*\n+ 2.46% init [kernel.kallsyms] [k] intel_idle\n+ 2.30% postmaster postgres [.] hash_search_with_hash_value\n+ 2.01% postmaster postgres [.] SearchCatCache\n\n\nCould you clarify what type of traces you meant? GDB?\n\nTo calculate slave and apply lag I use the following queries from the \nreplica site\n\nslave_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive\" $p_db)\nreplay_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay\" $p_db)\nreplay_timediff=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay\" $p_db)\nmaster_lag=$($psql -U monitor -h$p_host -p$p_port -A -t -c \"SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset\" $p_db)\necho \"$(date) Stream: MASTER-$p_host:$master_lag SLAVE:$slave_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag/1024-$replay_lag/1024) KBytes (${replay_timediff} seconds)\"
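\n\nFor what it's worth, the replica-side numbers can also be collected in a single round trip, which avoids mixing measurements from four separate psql calls - a minimal sketch using only the stock 9.3 functions:\n\n-- on the replica; one consistent snapshot of receive/replay state\nSELECT pg_last_xlog_receive_location() AS receive_location,\n       pg_last_xlog_replay_location() AS replay_location,\n       pg_xlog_location_diff(pg_last_xlog_receive_location(),\n                             pg_last_xlog_replay_location()) AS apply_lag_bytes,\n       now() - pg_last_xact_replay_timestamp() AS replay_delay;\n\nThe master position (pg_current_xlog_location()) still has to come from a separate connection, as in the script above.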
"msg_date": "Tue, 17 Mar 2015 13:22:27 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Re: [pgadmin-support] Issue with a hanging apply process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "17.03.2015 13:22, Sergey Shchukin пишет:\n> 05.03.2015 11:25, Jim Nasby пишет:\n>> On 2/27/15 5:11 AM, Sergey Shchukin wrote:\n>>>\n>>> show max_standby_streaming_delay;\n>>> max_standby_streaming_delay\n>>> -----------------------------\n>>> 30s\n>>\n>> We both need to be more clear about which server we're talking about \n>> (master or replica).\n>>\n>> What are max_standby_streaming_delay and max_standby_archive_delay \n>> set to *on the replica*?\n>>\n>> My hope is that one or both of those is set to somewhere around 8 \n>> minutes on the replica. That would explain everything.\n>>\n>> If that's not the case then I suspect what's happening is there's \n>> something running on the replica that isn't checking for interrupts \n>> frequently enough. That would also explain it.\n>>\n>> When replication hangs, is the replication process using a lot of \n>> CPU? Or is it just sitting there? What's the process status for the \n>> replay process show?\n>>\n>> Can you get a trace of the replay process on the replica when this is \n>> happening to see where it's spending all it's time?\n>>\n>> How are you generating these log lines?\n>> Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 \n>> SLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes \n>> (00:00:00.398376 seconds)\n>>\n>> Do you see the confl_* fields in pg_stat_database_conflicts on the \n>> *replica* increasing?\n>\n> Hi Jim,\n>\n> max_standby_streaming_delay and max_standby_archive_delay both are \n> 30s on master and replica dbs\n>\n> I don't see any specific or heavy workload during this issue with a \n> hanging apply process. Just a normal queries as usual.\n>\n> But I see an increased disk activity during the time when the apply \n> issue is ongoing\n>\n> DSK | sdc | | *busy 61%* | read 11511 \n> | | write 4534 | KiB/r 46 | | \n> KiB/w 4 | MBr/s 52.78 | | MBw/s 1.88 | avq \n> 1.45 | | avio 0.38 ms |\n> DSK | sde | | *busy 60% * | read 11457 \n> | | write 4398 | KiB/r 46 | | \n> KiB/w 4 | MBr/s 51.97 | | MBw/s 1.83 | avq \n> 1.47 | | avio 0.38 ms |\n> DSK | sdd | |*busy 60%* | read 9673 \n> | | write 4538 | KiB/r 61 | | \n> KiB/w 4 | MBr/s 58.24 | | MBw/s 1.88 | avq \n> 1.47 | | avio 0.42 ms |\n> DSK | sdj | | *busy 59%* | read 9576 \n> | | write 4177 | KiB/r 63 | | \n> KiB/w 4 | MBr/s 59.30 | | MBw/s 1.75 | avq \n> 1.48 | | avio 0.43 ms |\n> DSK | sdh | | *busy 59%* | read 9615 \n> | | write 4305 | KiB/r 63 | | \n> KiB/w 4 | MBr/s 59.23 | | MBw/s 1.80 | avq \n> 1.48 | | avio 0.42 ms |\n> DSK | sdf | |*busy 59% * | read 9483 \n> | | write 4404 | KiB/r 63 | | \n> KiB/w 4 | MBr/s 59.11 | | MBw/s 1.83 | avq \n> 1.47 | | avio 0.42 ms |\n> DSK | sdi | | *busy 59%* | read 11273 \n> | | write 4173 | KiB/r 46 | | \n> KiB/w 4 | MBr/s 51.50 | | MBw/s 1.75 | avq \n> 1.43 | | avio 0.38 ms |\n> DSK | sdg | | *busy 59%* | read 11406 \n> | | write 4297 | KiB/r 46 | | \n> KiB/w 4 | MBr/s 51.66 | | MBw/s 1.80 | avq \n> 1.46 | | avio 0.37 ms |\n>\n> Although it's not seems to be an upper IO limit.\n>\n> Normally disks are busy at 20-45%\n>\n> DSK | sde | | busy 29% | read 6524 \n> | | write 14426 | KiB/r 26 | | \n> KiB/w 5 | MBr/s 17.08 | | MBw/s 7.78 | avq \n> 10.46 | | avio 0.14 ms |\n> DSK | sdi | | busy 29% | read 6590 \n> | | write 14391 | KiB/r 26 | | \n> KiB/w 5 | MBr/s 17.19 | | MBw/s 7.76 | avq \n> 8.75 | | avio 0.14 ms |\n> DSK | sdg | | busy 29% | read 6547 \n> | | write 14401 | KiB/r 26 | | \n> KiB/w 5 | MBr/s 16.94 | | MBw/s 7.60 | avq \n> 7.28 | | avio 0.14 ms |\n> DSK | sdc | | busy 29% | 
read 6835 \n> | | write 14283 | KiB/r 27 | | \n> KiB/w 5 | MBr/s 18.08 | | MBw/s 7.74 | avq \n> 8.77 | | avio 0.14 ms |\n> DSK | sdf | | busy 23% | read 3808 \n> | | write 14391 | KiB/r 36 | | \n> KiB/w 5 | MBr/s 13.49 | | MBw/s 7.78 | avq \n> 12.88 | | avio 0.13 ms |\n> DSK | sdd | | busy 23% | read 3747 \n> | | write 14229 | KiB/r 33 | | \n> KiB/w 5 | MBr/s 12.32 | | MBw/s 7.74 | avq \n> 10.07 | | avio 0.13 ms |\n> DSK | sdj | | busy 23% | read 3737 \n> | | write 14336 | KiB/r 36 | | \n> KiB/w 5 | MBr/s 13.16 | | MBw/s 7.76 | avq \n> 10.48 | | avio 0.13 ms |\n> DSK | sdh | | busy 23% | read 3793 \n> | | write 14362 | KiB/r 35 | | \n> KiB/w 5 | MBr/s 13.29 | | MBw/s 7.60 | avq \n> 8.61 | | avio 0.13 ms |\n>\n>\n> Also during the issue perf shows [k] copy_user_generic_string on the \n> top positions\n> 14.09% postmaster postgres [.] 0x00000000001b4569\n> * 10.25% postmaster [kernel.kallsyms] [k] \n> copy_user_generic_string*\n> 4.15% postmaster postgres [.] \n> hash_search_with_hash_value\n> 2.08% postmaster postgres [.] SearchCatCache\n> 1.79% postmaster postgres [.] LWLockAcquire\n> 1.18% postmaster libc-2.12.so [.] memcpy\n> 1.12% postmaster postgres [.] mdnblocks\n>\n> Issue starts: at 19:43\n> Mon Mar 16 19:43:04 MSK 2015 Stream: MASTER-rdb04d:70837172337784 \n> SLAVE:70837172314864 Replay:70837172316512 :: REPLAY 21 KBytes \n> (00:00:00.006225 seconds)\n> Mon Mar 16 19:43:09 MSK 2015 Stream: MASTER-rdb04d:70837177455624 \n> SLAVE:70837177390968 Replay:70837176794376 :: REPLAY 646 KBytes \n> (00:00:00.367305 seconds)\n> *Mon Mar 16 19:43:14 MSK 2015 Stream: MASTER-rdb04d:70837185005120 \n> SLAVE:70837184961280 Replay:70837183253896 :: REPLAY 1710 KBytes \n> (00:00:00.827881 seconds)*\n> Mon Mar 16 19:43:19 MSK 2015 Stream: MASTER-rdb04d:70837190417984 \n> SLAVE:70837190230232 Replay:70837183253896 :: REPLAY 6996 KBytes \n> (00:00:05.873169 seconds)\n> Mon Mar 16 19:43:24 MSK 2015 Stream: MASTER-rdb04d:70837198538232 \n> SLAVE:70837198485000 Replay:70837183253896 :: REPLAY 14926 KBytes \n> (00:00:11.025561 seconds)\n> Mon Mar 16 19:43:29 MSK 2015 Stream: MASTER-rdb04d:70837209961192 \n> SLAVE:70837209869384 Replay:70837183253896 :: REPLAY 26081 KBytes \n> (00:00:16.068014 seconds)\n>\n> We see___[k] copy_user_generic_string_\n>\n> * 12.90% postmaster [kernel.kallsyms] [k] \n> copy_user_generic_string*\n> 11.49% postmaster postgres [.] 0x00000000001f40c1\n> 4.74% postmaster postgres [.] \n> hash_search_with_hash_value\n> 1.86% postmaster postgres [.] mdnblocks\n> 1.73% postmaster postgres [.] LWLockAcquire\n> 1.67% postmaster postgres [.] SearchCatCache\n>\n>\n> * 25.71% postmaster [kernel.kallsyms] [k] \n> copy_user_generic_string*\n> 7.89% postmaster postgres [.] \n> hash_search_with_hash_value\n> 4.66% postmaster postgres [.] 0x00000000002108da\n> 4.51% postmaster postgres [.] 
mdnblocks\n> 3.36% postmaster [kernel.kallsyms] [k] put_page\n>\n> Issue stops: at 19:51:39\n> Mon Mar 16 19:51:24 MSK 2015 Stream: MASTER-rdb04d:70838904179344 \n> SLAVE:70838903934392 Replay:70837183253896 :: REPLAY 1680591 KBytes \n> (00:08:10.384679 seconds)\n> Mon Mar 16 19:51:29 MSK 2015 Stream: MASTER-rdb04d:70838929994336 \n> SLAVE:70838929873624 Replay:70837183253896 :: REPLAY 1705801 KBytes \n> (00:08:15.428773 seconds)\n> Mon Mar 16 19:51:34 MSK 2015 Stream: MASTER-rdb04d:70838951993624 \n> SLAVE:70838951899768 Replay:70837183253896 :: REPLAY 1727285 KBytes \n> (00:08:20.472567 seconds)\n> *Mon Mar 16**19:51:39**MSK 2015 Stream: MASTER-rdb04d:70838975297912 \n> SLAVE:70838975180384 Replay:70837208050872 :: REPLAY 1725827 KBytes \n> (00:08:10.256935 seconds)*\n> Mon Mar 16 19:51:44 MSK 2015 Stream: MASTER-rdb04d:70839001502160 \n> SLAVE:70839001412616 Replay:70837260116984 :: REPLAY 1700572 KBytes \n> (00:07:49.849511 seconds)\n> Mon Mar 16 19:51:49 MSK 2015 Stream: MASTER-rdb04d:70839022866760 \n> SLAVE:70839022751184 Replay:70837276732880 :: REPLAY 1705209 KBytes \n> (00:07:42.307364 seconds)\n>\n> And copy_user_generic_string goes down\n> + 13.43% postmaster postgres [.] 0x000000000023dc9a\n> *+ 3.71% postmaster [kernel.kallsyms] [k] \n> copy_user_generic_string*\n> + 2.46% init [kernel.kallsyms] [k] intel_idle\n> + 2.30% postmaster postgres [.] \n> hash_search_with_hash_value\n> + 2.01% postmaster postgres [.] SearchCatCache\n>\n>\n> Could you clarify what types of traces did you mean? GDB?\n>\n> To calculate slave and apply lag I use the following query at the \n> replica site\n>\n> slave_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT \n> pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS \n> receive\" $p_db)\n> replay_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT \n> pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS \n> replay\" $p_db)\n> replay_timediff=$($psql -U monitor -h$s_host -p$s_port -A -t -c \n> \"SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay\" \n> $p_db)\n> master_lag=$($psql -U monitor -h$p_host -p$p_port -A -t -c \"SELECT \n> pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset\" $p_db)\n> echo \"$(date) Stream: MASTER-$p_host:$master_lag SLAVE:$slave_lag \n> Replay:$replay_lag :: REPLAY $(bc <<< \n> $master_lag/1024-$replay_lag/1024) KBytes (${replay_timediff} seconds)\"\n>\n> -\n> Best regards,\n> Sergey Shchukin\n>\n\nOne more thing\n\nWe have upgraded one of our shards to 9.4.1 and expectedly that did not help.\n\nA few things to notice which may be useful.\n\n1. When replay stops, startup process reads a lot from array with $PGDATA. In iotop and iostat we see the following:\n\nTotal DISK READ: 490.42 M/s | Total DISK WRITE: 3.82 M/s\n TID PRIO USER DISK READ> DISK WRITE SWAPIN IO COMMAND\n 3316 be/4 postgres 492.34 M/s 0.00 B/s 0.00 % 39.91 % postgres: startup process\n <...>\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n <...>\n md2 0.00 0.00 6501.00 7.00 339.90 0.03 106.97 0.00 0.00 0.00 0.00\n md3 0.00 0.00 0.00 1739.00 0.00 6.79 8.00 0.00 0.00 0.00 0.00\n\n root@rpopdb04g ~ # fgrep 9.4 /proc/mounts\n /dev/md2 /var/lib/pgsql/9.4/data ext4 rw,noatime,nodiratime,barrier=1,stripe=64,data=ordered 0 0\n /dev/md3 /var/lib/pgsql/9.4/data/pg_xlog ext4 rw,noatime,nodiratime,barrier=0,stripe=64,data=ordered 0 0\n root@rpopdb04g ~ #\n\n2. 
The state of the startup process is changing in such a way:\n\n root@rpopdb04g ~ # while true; do ps aux | grep '[s]tartup'; sleep 1; done\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:11 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:11 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:12 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:12 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:14 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:14 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:15 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:16 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:16 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:17 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ts 18:04 8:17 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:19 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:19 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:21 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:22 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:22 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:24 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Ts 18:04 8:24 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:25 postgres: startup process\n ^C\n root@rpopdb04g ~ #\n\n3. confl* fields in pg_stat_database_conflicts are always zero during the pausing of replay.\n\n4. The stack-traces taken with GDB are not really informative. We will recompile PostgreSQL with —enable-debug option and run it on one of our replicas if needed. Since it is a production system we would like to do it last of all. But we will do it if anybody would not give us any ideas.\n\n5. In one of the experiments replay stopped on 4115/56126DC0 xlog position. 
Here is a bit of pg_xlogdump output:\n\n rpopdb04d/rpopdb M # select pg_xlogfile_name('4115/56126DC0');\n pg_xlogfile_name\n --------------------------\n 000000060000411500000056\n (1 row)\n\n Time: 0.496 ms\n rpopdb04d/rpopdb M #\n\n root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 # /usr/pgsql-9.4/bin/pg_xlogdump 000000060000411500000056 000000060000411500000056 | fgrep 4115/56126DC0 -C10\n rmgr: Heap len (rec/tot): 36/ 3948, tx: 3930143874, lsn: 4115/561226C8, prev 4115/56122698, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 196267/4 xmax 3930143874 ; new tid 196267/10 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143874, lsn: 4115/56123638, prev 4115/561226C8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.725158 MSK\n rmgr: Heap len (rec/tot): 58/ 90, tx: 3930143875, lsn: 4115/56123668, prev 4115/56123638, bkp: 0000, desc: hot_update: rel 1663/16420/16737; tid 23624/3 xmax 3930143875 ; new tid 23624/21 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143875, lsn: 4115/561236C8, prev 4115/56123668, bkp: 0000, desc: commit: 2015-03-19 18:26:27.726432 MSK\n rmgr: Heap len (rec/tot): 36/ 2196, tx: 3930143876, lsn: 4115/561236F8, prev 4115/561236C8, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 123008/4 xmax 3930143876 ; new tid 123008/5 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143876, lsn: 4115/56123F90, prev 4115/561236F8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.727088 MSK\n rmgr: Heap len (rec/tot): 36/ 7108, tx: 3930143877, lsn: 4115/56123FC0, prev 4115/56123F90, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 34815/6 xmax 3930143877 ; new tid 34815/16 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143877, lsn: 4115/56125BA0, prev 4115/56123FC0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728178 MSK\n rmgr: Heap len (rec/tot): 36/ 4520, tx: 3930143878, lsn: 4115/56125BD0, prev 4115/56125BA0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 147863/5 xmax 3930143878 ; new tid 147863/16 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143878, lsn: 4115/56126D90, prev 4115/56125BD0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728339 MSK\n rmgr: Btree len (rec/tot): 20/ 52, tx: 0, lsn: 4115/56126DC0, prev 4115/56126D90, bkp: 0000, desc: vacuum: rel 1663/16420/16796; blk 31222118, lastBlockVacuumed 0\n rmgr: Heap len (rec/tot): 36/ 6112, tx: 3930143879, lsn: 4115/56126DF8, prev 4115/56126DC0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23461/26 xmax 3930143879 ; new tid 23461/22 xmax 0\n rmgr: Heap2 len (rec/tot): 24/ 8160, tx: 0, lsn: 4115/561285F0, prev 4115/56126DF8, bkp: 1000, desc: clean: rel 1663/16420/16782; blk 21997709 remxid 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143879, lsn: 4115/5612A5E8, prev 4115/561285F0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728805 MSK\n rmgr: Heap2 len (rec/tot): 20/ 8268, tx: 0, lsn: 4115/5612A618, prev 4115/5612A5E8, bkp: 1000, desc: visible: rel 1663/16420/16782; blk 21997709\n rmgr: Heap len (rec/tot): 36/ 7420, tx: 3930143880, lsn: 4115/5612C680, prev 4115/5612A618, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 37456/8 xmax 3930143880 ; new tid 37456/29 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143880, lsn: 4115/5612E398, prev 4115/5612C680, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729141 MSK\n rmgr: Heap len (rec/tot): 36/ 7272, tx: 3930143881, lsn: 4115/5612E3C8, prev 4115/5612E398, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23614/2 xmax 3930143881 ; new tid 23614/22 xmax 0\n rmgr: 
Heap len (rec/tot): 150/ 182, tx: 0, lsn: 4115/56130048, prev 4115/5612E3C8, bkp: 0000, desc: inplace: rel 1663/16420/12764; tid 11/31\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143881, lsn: 4115/56130100, prev 4115/56130048, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729340 MSK\n rmgr: Heap len (rec/tot): 43/ 75, tx: 3930143882, lsn: 4115/56130130, prev 4115/56130100, bkp: 0000, desc: insert: rel 1663/16420/16773; tid 10159950/26\n rmgr: Btree len (rec/tot): 42/ 74, tx: 3930143882, lsn: 4115/56130180, prev 4115/56130130, bkp: 0000, desc: insert: rel 1663/16420/16800; tid 12758988/260\n root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 #\n\n
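One more back-of-the-envelope observation on those records (our own arithmetic, so treat it as an estimate): the Btree record at exactly the LSN where replay stopped says vacuum: rel 1663/16420/16796; blk 31222118, lastBlockVacuumed 0. If replaying that single record has to visit every index block in between, that is about 31.2 million pages, i.e. roughly 31222118 * 8192 bytes ~ 238 GiB of reads, which would fit the read rates from item 1.\n\nAny help would be really appropriate. Thanks in advance.",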
"msg_date": "Thu, 19 Mar 2015 20:30:44 +0300",
"msg_from": "Sergey Shchukin <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: [GENERAL] Re: [pgadmin-support] Issue with a hanging apply\n process\n on the replica db after vacuum works on primary"
},
{
"msg_contents": "> 19 марта 2015 г., в 20:30, Sergey Shchukin <[email protected]> написал(а):\n> \n> 17.03.2015 13:22, Sergey Shchukin пишет:\n>> 05.03.2015 11:25, Jim Nasby пишет:\n>>> On 2/27/15 5:11 AM, Sergey Shchukin wrote: \n>>>> \n>>>> show max_standby_streaming_delay; \n>>>> max_standby_streaming_delay \n>>>> ----------------------------- \n>>>> 30s \n>>> \n>>> We both need to be more clear about which server we're talking about (master or replica). \n>>> \n>>> What are max_standby_streaming_delay and max_standby_archive_delay set to *on the replica*? \n>>> \n>>> My hope is that one or both of those is set to somewhere around 8 minutes on the replica. That would explain everything. \n>>> \n>>> If that's not the case then I suspect what's happening is there's something running on the replica that isn't checking for interrupts frequently enough. That would also explain it. \n>>> \n>>> When replication hangs, is the replication process using a lot of CPU? Or is it just sitting there? What's the process status for the replay process show? \n>>> \n>>> Can you get a trace of the replay process on the replica when this is happening to see where it's spending all it's time? \n>>> \n>>> How are you generating these log lines? \n>>> Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 SLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds) \n>>> \n>>> Do you see the confl_* fields in pg_stat_database_conflicts on the *replica* increasing? \n>> \n>> Hi Jim,\n>> \n>> max_standby_streaming_delay and max_standby_archive_delay both are 30s on master and replica dbs\n>> \n>> I don't see any specific or heavy workload during this issue with a hanging apply process. Just a normal queries as usual. \n>> \n>> But I see an increased disk activity during the time when the apply issue is ongoing\n>> \n>> DSK | sdc | | busy 61% | read 11511 | | write 4534 | KiB/r 46 | | KiB/w 4 | MBr/s 52.78 | | MBw/s 1.88 | avq 1.45 | | avio 0.38 ms |\n>> DSK | sde | | busy 60% | read 11457 | | write 4398 | KiB/r 46 | | KiB/w 4 | MBr/s 51.97 | | MBw/s 1.83 | avq 1.47 | | avio 0.38 ms |\n>> DSK | sdd | | busy 60% | read 9673 | | write 4538 | KiB/r 61 | | KiB/w 4 | MBr/s 58.24 | | MBw/s 1.88 | avq 1.47 | | avio 0.42 ms |\n>> DSK | sdj | | busy 59% | read 9576 | | write 4177 | KiB/r 63 | | KiB/w 4 | MBr/s 59.30 | | MBw/s 1.75 | avq 1.48 | | avio 0.43 ms |\n>> DSK | sdh | | busy 59% | read 9615 | | write 4305 | KiB/r 63 | | KiB/w 4 | MBr/s 59.23 | | MBw/s 1.80 | avq 1.48 | | avio 0.42 ms |\n>> DSK | sdf | | busy 59% | read 9483 | | write 4404 | KiB/r 63 | | KiB/w 4 | MBr/s 59.11 | | MBw/s 1.83 | avq 1.47 | | avio 0.42 ms |\n>> DSK | sdi | | busy 59% | read 11273 | | write 4173 | KiB/r 46 | | KiB/w 4 | MBr/s 51.50 | | MBw/s 1.75 | avq 1.43 | | avio 0.38 ms |\n>> DSK | sdg | | busy 59% | read 11406 | | write 4297 | KiB/r 46 | | KiB/w 4 | MBr/s 51.66 | | MBw/s 1.80 | avq 1.46 | | avio 0.37 ms |\n>> \n>> Although it's not seems to be an upper IO limit.\n>> \n>> Normally disks are busy at 20-45%\n>> \n>> DSK | sde | | busy 29% | read 6524 | | write 14426 | KiB/r 26 | | KiB/w 5 | MBr/s 17.08 | | MBw/s 7.78 | avq 10.46 | | avio 0.14 ms |\n>> DSK | sdi | | busy 29% | read 6590 | | write 14391 | KiB/r 26 | | KiB/w 5 | MBr/s 17.19 | | MBw/s 7.76 | avq 8.75 | | avio 0.14 ms |\n>> DSK | sdg | | busy 29% | read 6547 | | write 14401 | KiB/r 26 | | KiB/w 5 | MBr/s 16.94 | | MBw/s 7.60 | avq 7.28 | | avio 0.14 ms |\n>> DSK | sdc | | busy 29% | read 6835 | | write 14283 | KiB/r 27 | | KiB/w 5 
| MBr/s 18.08 | | MBw/s 7.74 | avq 8.77 | | avio 0.14 ms |\n>> DSK | sdf | | busy 23% | read 3808 | | write 14391 | KiB/r 36 | | KiB/w 5 | MBr/s 13.49 | | MBw/s 7.78 | avq 12.88 | | avio 0.13 ms |\n>> DSK | sdd | | busy 23% | read 3747 | | write 14229 | KiB/r 33 | | KiB/w 5 | MBr/s 12.32 | | MBw/s 7.74 | avq 10.07 | | avio 0.13 ms |\n>> DSK | sdj | | busy 23% | read 3737 | | write 14336 | KiB/r 36 | | KiB/w 5 | MBr/s 13.16 | | MBw/s 7.76 | avq 10.48 | | avio 0.13 ms |\n>> DSK | sdh | | busy 23% | read 3793 | | write 14362 | KiB/r 35 | | KiB/w 5 | MBr/s 13.29 | | MBw/s 7.60 | avq 8.61 | | avio 0.13 ms |\n>> \n>> \n>> Also during the issue perf shows [k] copy_user_generic_string on the top positions\n>> 14.09% postmaster postgres [.] 0x00000000001b4569\n>> 10.25% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>> 4.15% postmaster postgres [.] hash_search_with_hash_value\n>> 2.08% postmaster postgres [.] SearchCatCache\n>> 1.79% postmaster postgres [.] LWLockAcquire\n>> 1.18% postmaster libc-2.12.so [.] memcpy\n>> 1.12% postmaster postgres [.] mdnblocks\n>> \n>> Issue starts: at 19:43\n>> Mon Mar 16 19:43:04 MSK 2015 Stream: MASTER-rdb04d:70837172337784 SLAVE:70837172314864 Replay:70837172316512 :: REPLAY 21 KBytes (00:00:00.006225 seconds)\n>> Mon Mar 16 19:43:09 MSK 2015 Stream: MASTER-rdb04d:70837177455624 SLAVE:70837177390968 Replay:70837176794376 :: REPLAY 646 KBytes (00:00:00.367305 seconds)\n>> Mon Mar 16 19:43:14 MSK 2015 Stream: MASTER-rdb04d:70837185005120 SLAVE:70837184961280 Replay:70837183253896 :: REPLAY 1710 KBytes (00:00:00.827881 seconds)\n>> Mon Mar 16 19:43:19 MSK 2015 Stream: MASTER-rdb04d:70837190417984 SLAVE:70837190230232 Replay:70837183253896 :: REPLAY 6996 KBytes (00:00:05.873169 seconds)\n>> Mon Mar 16 19:43:24 MSK 2015 Stream: MASTER-rdb04d:70837198538232 SLAVE:70837198485000 Replay:70837183253896 :: REPLAY 14926 KBytes (00:00:11.025561 seconds)\n>> Mon Mar 16 19:43:29 MSK 2015 Stream: MASTER-rdb04d:70837209961192 SLAVE:70837209869384 Replay:70837183253896 :: REPLAY 26081 KBytes (00:00:16.068014 seconds)\n>> \n>> We see [k] copy_user_generic_string\n>> \n>> 12.90% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>> 11.49% postmaster postgres [.] 0x00000000001f40c1\n>> 4.74% postmaster postgres [.] hash_search_with_hash_value\n>> 1.86% postmaster postgres [.] mdnblocks\n>> 1.73% postmaster postgres [.] LWLockAcquire\n>> 1.67% postmaster postgres [.] SearchCatCache\n>> \n>> \n>> 25.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>> 7.89% postmaster postgres [.] hash_search_with_hash_value\n>> 4.66% postmaster postgres [.] 0x00000000002108da\n>> 4.51% postmaster postgres [.] 
mdnblocks\n>> 3.36% postmaster [kernel.kallsyms] [k] put_page\n>> \n>> Issue stops: at 19:51:39\n>> Mon Mar 16 19:51:24 MSK 2015 Stream: MASTER-rdb04d:70838904179344 SLAVE:70838903934392 Replay:70837183253896 :: REPLAY 1680591 KBytes (00:08:10.384679 seconds)\n>> Mon Mar 16 19:51:29 MSK 2015 Stream: MASTER-rdb04d:70838929994336 SLAVE:70838929873624 Replay:70837183253896 :: REPLAY 1705801 KBytes (00:08:15.428773 seconds)\n>> Mon Mar 16 19:51:34 MSK 2015 Stream: MASTER-rdb04d:70838951993624 SLAVE:70838951899768 Replay:70837183253896 :: REPLAY 1727285 KBytes (00:08:20.472567 seconds)\n>> Mon Mar 16 19:51:39 MSK 2015 Stream: MASTER-rdb04d:70838975297912 SLAVE:70838975180384 Replay:70837208050872 :: REPLAY 1725827 KBytes (00:08:10.256935 seconds)\n>> Mon Mar 16 19:51:44 MSK 2015 Stream: MASTER-rdb04d:70839001502160 SLAVE:70839001412616 Replay:70837260116984 :: REPLAY 1700572 KBytes (00:07:49.849511 seconds)\n>> Mon Mar 16 19:51:49 MSK 2015 Stream: MASTER-rdb04d:70839022866760 SLAVE:70839022751184 Replay:70837276732880 :: REPLAY 1705209 KBytes (00:07:42.307364 seconds)\n>> \n>> And copy_user_generic_string goes down\n>> + 13.43% postmaster postgres [.] 0x000000000023dc9a\n>> + 3.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>> + 2.46% init [kernel.kallsyms] [k] intel_idle\n>> + 2.30% postmaster postgres [.] hash_search_with_hash_value\n>> + 2.01% postmaster postgres [.] SearchCatCache\n>> \n>> \n>> Could you clarify what types of traces did you mean? GDB?\n>> \n>> To calculate slave and apply lag I use the following query at the replica site\n>> \n>> slave_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive\" $p_db)\n>> replay_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay\" $p_db)\n>> replay_timediff=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay\" $p_db)\n>> master_lag=$($psql -U monitor -h$p_host -p$p_port -A -t -c \"SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset\" $p_db)\n>> echo \"$(date) Stream: MASTER-$p_host:$master_lag SLAVE:$slave_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag/1024-$replay_lag/1024) KBytes (${replay_timediff} seconds)\" \n>> \n>> - \n>> Best regards,\n>> Sergey Shchukin \n>> \n> \n> One more thing\n> \n> We have upgraded one of our shards to 9.4.1 and expectedly that did not help.\n> \n> A few things to notice which may be useful.\n> \n> 1. When replay stops, startup process reads a lot from array with $PGDATA. In iotop and iostat we see the following:\n> \n> Total DISK READ: 490.42 M/s | Total DISK WRITE: 3.82 M/s\n> TID PRIO USER DISK READ> DISK WRITE SWAPIN IO COMMAND\n> 3316 be/4 postgres 492.34 M/s 0.00 B/s 0.00 % 39.91 % postgres: startup process\n> <...>\n> \n> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n> <...>\n> md2 0.00 0.00 6501.00 7.00 339.90 0.03 106.97 0.00 0.00 0.00 0.00\n> md3 0.00 0.00 0.00 1739.00 0.00 6.79 8.00 0.00 0.00 0.00 0.00\n> \n> root@rpopdb04g ~ # fgrep 9.4 /proc/mounts\n> /dev/md2 /var/lib/pgsql/9.4/data ext4 rw,noatime,nodiratime,barrier=1,stripe=64,data=ordered 0 0\n> /dev/md3 /var/lib/pgsql/9.4/data/pg_xlog ext4 rw,noatime,nodiratime,barrier=0,stripe=64,data=ordered 0 0\n> root@rpopdb04g ~ #\n> \n> 2. 
The state of the startup process is changing in such a way:\n> \n> root@rpopdb04g ~ # while true; do ps aux | grep '[s]tartup'; sleep 1; done\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:11 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:11 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:12 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:12 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:14 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:14 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:15 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:16 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:16 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:17 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Ts 18:04 8:17 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:19 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:19 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:21 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:22 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:22 postgres: startup process\n> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:24 postgres: startup process\n> postgres 3316 26.9 3.2 4732052 4299260 ? Ts 18:04 8:24 postgres: startup process\n> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:25 postgres: startup process\n> ^C\n> root@rpopdb04g ~ #\n> \n> 3. confl* fields in pg_stat_database_conflicts are always zero during the pausing of replay.\n> \n> 4. The stack-traces taken with GDB are not really informative. We will recompile PostgreSQL with —enable-debug option and run it on one of our replicas if needed. Since it is a production system we would like to do it last of all. But we will do it if anybody would not give us any ideas.\n\nWe did it. 
Most of the backtraces (taken while replay_location was not changing) look like this:\n\n[Thread debugging using libthread_db enabled]\n0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6\n#0 0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6\n#1 0x000000000065d2f5 in FileRead (file=<value optimized out>, buffer=0x7f5470e6ba20 \"\\004\", amount=8192) at fd.c:1286\n#2 0x000000000067acad in mdread (reln=<value optimized out>, forknum=<value optimized out>, blocknum=311995, buffer=0x7f5470e6ba20 \"\\004\") at md.c:679\n#3 0x0000000000659b4e in ReadBuffer_common (smgr=<value optimized out>, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=311995, mode=RBM_NORMAL_NO_LOG, strategy=0x0, hit=0x7fff898a912f \"\") at bufmgr.c:476\n#4 0x000000000065a61b in ReadBufferWithoutRelcache (rnode=..., forkNum=MAIN_FORKNUM, blockNum=311995, mode=<value optimized out>, strategy=<value optimized out>) at bufmgr.c:287\n#5 0x00000000004cfb78 in XLogReadBufferExtended (rnode=..., forknum=MAIN_FORKNUM, blkno=311995, mode=RBM_NORMAL_NO_LOG) at xlogutils.c:324\n#6 0x00000000004a3651 in btree_xlog_vacuum (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:522\n#7 btree_redo (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:1144\n#8 0x00000000004c903a in StartupXLOG () at xlog.c:6827\n#9 0x000000000062f8bf in StartupProcessMain () at startup.c:224\n#10 0x00000000004d3e9a in AuxiliaryProcessMain (argc=2, argv=0x7fff898a98a0) at bootstrap.c:416\n#11 0x000000000062a99c in StartChildProcess (type=StartupProcess) at postmaster.c:5146\n#12 0x000000000062e9e2 in PostmasterMain (argc=3, argv=<value optimized out>) at postmaster.c:1237\n#13 0x00000000005c7d68 in main (argc=3, argv=0x1e22910) at main.c:228\n\nSo the problem seems to be in this part of the code: http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/nbtxlog.c;h=5f9fc49e78ca1388ab482e24c8b5a873238ae0b6;hb=d0f83327d3739a45102fdd486947248c70e0249d#l507. I suppose that answers the question of why the startup process reads a lot from disk while replay is paused.\n\nSo the questions are:\n1. Is there anything we can tune right now, apart from not reading from the replicas and partitioning this table?\n2. Isn't there a function to determine that a buffer is not pinned in shared_buffers without reading it from disk? That could optimize the current behaviour in the future.\n\n
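For reference, here is roughly what that code does when hot standby is active (a paraphrased and simplified sketch of the 9.4 nbtxlog.c around the lines linked above, not the verbatim source):\n\nif (HotStandbyActiveInReplay())\n{\n    BlockNumber blkno;\n\n    /* Replaying one XLOG_BTREE_VACUUM record touches every block\n       between lastBlockVacuumed and the vacuumed block, to be sure no\n       index scan on the standby still holds a pin on a leaf page that\n       VACUUM skipped on the master. */\n    for (blkno = xlrec->lastBlockVacuumed + 1; blkno < xlrec->block; blkno++)\n    {\n        /* Reads the block from disk unless it is already in\n           shared_buffers - the __read_nocancel() in the backtrace. */\n        Buffer buffer = XLogReadBufferExtended(xlrec->node, MAIN_FORKNUM,\n                                               blkno, RBM_NORMAL_NO_LOG);\n\n        if (BufferIsValid(buffer))\n        {\n            LockBufferForCleanup(buffer); /* wait for pin count to drop */\n            UnlockReleaseBuffer(buffer);\n        }\n    }\n}\n\nIf that reading is right, the numbers add up: with lastBlockVacuumed = 0 and blk = 31222118 the loop visits about 31.2 million pages (~238 GiB), and at the ~490 MB/s that iotop shows this is on the order of eight minutes of pure reading - about the length of our replay pauses.\n\n> \n> 5. In one of the experiments replay stopped on 4115/56126DC0 xlog position. 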
Here is a bit of pg_xlogdump output:\n> \n> rpopdb04d/rpopdb M # select pg_xlogfile_name('4115/56126DC0');\n> pg_xlogfile_name\n> --------------------------\n> 000000060000411500000056\n> (1 row)\n> \n> Time: 0.496 ms\n> rpopdb04d/rpopdb M #\n> \n> root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 # /usr/pgsql-9.4/bin/pg_xlogdump 000000060000411500000056 000000060000411500000056 | fgrep 4115/56126DC0 -C10\n> rmgr: Heap len (rec/tot): 36/ 3948, tx: 3930143874, lsn: 4115/561226C8, prev 4115/56122698, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 196267/4 xmax 3930143874 ; new tid 196267/10 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143874, lsn: 4115/56123638, prev 4115/561226C8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.725158 MSK\n> rmgr: Heap len (rec/tot): 58/ 90, tx: 3930143875, lsn: 4115/56123668, prev 4115/56123638, bkp: 0000, desc: hot_update: rel 1663/16420/16737; tid 23624/3 xmax 3930143875 ; new tid 23624/21 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143875, lsn: 4115/561236C8, prev 4115/56123668, bkp: 0000, desc: commit: 2015-03-19 18:26:27.726432 MSK\n> rmgr: Heap len (rec/tot): 36/ 2196, tx: 3930143876, lsn: 4115/561236F8, prev 4115/561236C8, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 123008/4 xmax 3930143876 ; new tid 123008/5 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143876, lsn: 4115/56123F90, prev 4115/561236F8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.727088 MSK\n> rmgr: Heap len (rec/tot): 36/ 7108, tx: 3930143877, lsn: 4115/56123FC0, prev 4115/56123F90, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 34815/6 xmax 3930143877 ; new tid 34815/16 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143877, lsn: 4115/56125BA0, prev 4115/56123FC0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728178 MSK\n> rmgr: Heap len (rec/tot): 36/ 4520, tx: 3930143878, lsn: 4115/56125BD0, prev 4115/56125BA0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 147863/5 xmax 3930143878 ; new tid 147863/16 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143878, lsn: 4115/56126D90, prev 4115/56125BD0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728339 MSK\n> rmgr: Btree len (rec/tot): 20/ 52, tx: 0, lsn: 4115/56126DC0, prev 4115/56126D90, bkp: 0000, desc: vacuum: rel 1663/16420/16796; blk 31222118, lastBlockVacuumed 0\n> rmgr: Heap len (rec/tot): 36/ 6112, tx: 3930143879, lsn: 4115/56126DF8, prev 4115/56126DC0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23461/26 xmax 3930143879 ; new tid 23461/22 xmax 0\n> rmgr: Heap2 len (rec/tot): 24/ 8160, tx: 0, lsn: 4115/561285F0, prev 4115/56126DF8, bkp: 1000, desc: clean: rel 1663/16420/16782; blk 21997709 remxid 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143879, lsn: 4115/5612A5E8, prev 4115/561285F0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728805 MSK\n> rmgr: Heap2 len (rec/tot): 20/ 8268, tx: 0, lsn: 4115/5612A618, prev 4115/5612A5E8, bkp: 1000, desc: visible: rel 1663/16420/16782; blk 21997709\n> rmgr: Heap len (rec/tot): 36/ 7420, tx: 3930143880, lsn: 4115/5612C680, prev 4115/5612A618, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 37456/8 xmax 3930143880 ; new tid 37456/29 xmax 0\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143880, lsn: 4115/5612E398, prev 4115/5612C680, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729141 MSK\n> rmgr: Heap len (rec/tot): 36/ 7272, tx: 3930143881, lsn: 4115/5612E3C8, prev 4115/5612E398, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23614/2 xmax 3930143881 ; 
new tid 23614/22 xmax 0\n> rmgr: Heap len (rec/tot): 150/ 182, tx: 0, lsn: 4115/56130048, prev 4115/5612E3C8, bkp: 0000, desc: inplace: rel 1663/16420/12764; tid 11/31\n> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143881, lsn: 4115/56130100, prev 4115/56130048, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729340 MSK\n> rmgr: Heap len (rec/tot): 43/ 75, tx: 3930143882, lsn: 4115/56130130, prev 4115/56130100, bkp: 0000, desc: insert: rel 1663/16420/16773; tid 10159950/26\n> rmgr: Btree len (rec/tot): 42/ 74, tx: 3930143882, lsn: 4115/56130180, prev 4115/56130130, bkp: 0000, desc: insert: rel 1663/16420/16800; tid 12758988/260\n> root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 #\n> \n> Any help would be really appropriate. Thanks in advance.\n> \n\n\n--\nMay the force be with you…\nhttps://simply.name",
"msg_date": "Fri, 20 Mar 2015 18:00:16 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [pgadmin-support] Issue with a hanging apply process on the\n replica db after vacuum works on primary"
},
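Frame #6 of the backtrace above is the pin scan in btree_xlog_vacuum(). A condensed paraphrase of that loop, simplified from the 9.4 sources linked in the message (not a verbatim copy), shows why a vacuum record with lastBlockVacuumed 0 makes the startup process touch the whole index:

    /*
     * Condensed paraphrase of the 9.4 pin scan in btree_xlog_vacuum()
     * (src/backend/access/nbtree/nbtxlog.c, near line 507).  Simplified
     * for illustration; not a verbatim copy of the PostgreSQL sources.
     */
    if (HotStandbyActiveInReplay())
    {
        BlockNumber blkno;

        for (blkno = xlrec->lastBlockVacuumed + 1; blkno < xlrec->block; blkno++)
        {
            /*
             * XLogReadBufferExtended() probes shared_buffers and, on a
             * miss, reads the block from disk -- the FileRead()/mdread()
             * frames in the backtrace.  RBM_NORMAL_NO_LOG avoids logging
             * complaints about all-zero pages.
             */
            Buffer      buffer = XLogReadBufferExtended(xlrec->node,
                                                        MAIN_FORKNUM,
                                                        blkno,
                                                        RBM_NORMAL_NO_LOG);

            if (BufferIsValid(buffer))
            {
                /* wait until no standby backend holds a pin on the page */
                LockBufferForCleanup(buffer);
                UnlockReleaseBuffer(buffer);
            }
        }
    }

With blk 31222118 and lastBlockVacuumed 0, as in the pg_xlogdump output above, a single WAL record turns into roughly 31 million block visits, which matches the ~490 MB/s read rate seen in iotop.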
{
"msg_contents": "> 20 марта 2015 г., в 18:00, Vladimir Borodin <[email protected]> написал(а):\n> \n>> \n>> 19 марта 2015 г., в 20:30, Sergey Shchukin <[email protected] <mailto:[email protected]>> написал(а):\n>> \n>> 17.03.2015 13:22, Sergey Shchukin пишет:\n>>> 05.03.2015 11:25, Jim Nasby пишет:\n>>>> On 2/27/15 5:11 AM, Sergey Shchukin wrote: \n>>>>> \n>>>>> show max_standby_streaming_delay; \n>>>>> max_standby_streaming_delay \n>>>>> ----------------------------- \n>>>>> 30s \n>>>> \n>>>> We both need to be more clear about which server we're talking about (master or replica). \n>>>> \n>>>> What are max_standby_streaming_delay and max_standby_archive_delay set to *on the replica*? \n>>>> \n>>>> My hope is that one or both of those is set to somewhere around 8 minutes on the replica. That would explain everything. \n>>>> \n>>>> If that's not the case then I suspect what's happening is there's something running on the replica that isn't checking for interrupts frequently enough. That would also explain it. \n>>>> \n>>>> When replication hangs, is the replication process using a lot of CPU? Or is it just sitting there? What's the process status for the replay process show? \n>>>> \n>>>> Can you get a trace of the replay process on the replica when this is happening to see where it's spending all it's time? \n>>>> \n>>>> How are you generating these log lines? \n>>>> Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 SLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds) \n>>>> \n>>>> Do you see the confl_* fields in pg_stat_database_conflicts on the *replica* increasing? \n>>> \n>>> Hi Jim,\n>>> \n>>> max_standby_streaming_delay and max_standby_archive_delay both are 30s on master and replica dbs\n>>> \n>>> I don't see any specific or heavy workload during this issue with a hanging apply process. Just a normal queries as usual. 
\n>>> \n>>> But I see an increased disk activity during the time when the apply issue is ongoing\n>>> \n>>> DSK | sdc | | busy 61% | read 11511 | | write 4534 | KiB/r 46 | | KiB/w 4 | MBr/s 52.78 | | MBw/s 1.88 | avq 1.45 | | avio 0.38 ms |\n>>> DSK | sde | | busy 60% | read 11457 | | write 4398 | KiB/r 46 | | KiB/w 4 | MBr/s 51.97 | | MBw/s 1.83 | avq 1.47 | | avio 0.38 ms |\n>>> DSK | sdd | | busy 60% | read 9673 | | write 4538 | KiB/r 61 | | KiB/w 4 | MBr/s 58.24 | | MBw/s 1.88 | avq 1.47 | | avio 0.42 ms |\n>>> DSK | sdj | | busy 59% | read 9576 | | write 4177 | KiB/r 63 | | KiB/w 4 | MBr/s 59.30 | | MBw/s 1.75 | avq 1.48 | | avio 0.43 ms |\n>>> DSK | sdh | | busy 59% | read 9615 | | write 4305 | KiB/r 63 | | KiB/w 4 | MBr/s 59.23 | | MBw/s 1.80 | avq 1.48 | | avio 0.42 ms |\n>>> DSK | sdf | | busy 59% | read 9483 | | write 4404 | KiB/r 63 | | KiB/w 4 | MBr/s 59.11 | | MBw/s 1.83 | avq 1.47 | | avio 0.42 ms |\n>>> DSK | sdi | | busy 59% | read 11273 | | write 4173 | KiB/r 46 | | KiB/w 4 | MBr/s 51.50 | | MBw/s 1.75 | avq 1.43 | | avio 0.38 ms |\n>>> DSK | sdg | | busy 59% | read 11406 | | write 4297 | KiB/r 46 | | KiB/w 4 | MBr/s 51.66 | | MBw/s 1.80 | avq 1.46 | | avio 0.37 ms |\n>>> \n>>> Although it's not seems to be an upper IO limit.\n>>> \n>>> Normally disks are busy at 20-45%\n>>> \n>>> DSK | sde | | busy 29% | read 6524 | | write 14426 | KiB/r 26 | | KiB/w 5 | MBr/s 17.08 | | MBw/s 7.78 | avq 10.46 | | avio 0.14 ms |\n>>> DSK | sdi | | busy 29% | read 6590 | | write 14391 | KiB/r 26 | | KiB/w 5 | MBr/s 17.19 | | MBw/s 7.76 | avq 8.75 | | avio 0.14 ms |\n>>> DSK | sdg | | busy 29% | read 6547 | | write 14401 | KiB/r 26 | | KiB/w 5 | MBr/s 16.94 | | MBw/s 7.60 | avq 7.28 | | avio 0.14 ms |\n>>> DSK | sdc | | busy 29% | read 6835 | | write 14283 | KiB/r 27 | | KiB/w 5 | MBr/s 18.08 | | MBw/s 7.74 | avq 8.77 | | avio 0.14 ms |\n>>> DSK | sdf | | busy 23% | read 3808 | | write 14391 | KiB/r 36 | | KiB/w 5 | MBr/s 13.49 | | MBw/s 7.78 | avq 12.88 | | avio 0.13 ms |\n>>> DSK | sdd | | busy 23% | read 3747 | | write 14229 | KiB/r 33 | | KiB/w 5 | MBr/s 12.32 | | MBw/s 7.74 | avq 10.07 | | avio 0.13 ms |\n>>> DSK | sdj | | busy 23% | read 3737 | | write 14336 | KiB/r 36 | | KiB/w 5 | MBr/s 13.16 | | MBw/s 7.76 | avq 10.48 | | avio 0.13 ms |\n>>> DSK | sdh | | busy 23% | read 3793 | | write 14362 | KiB/r 35 | | KiB/w 5 | MBr/s 13.29 | | MBw/s 7.60 | avq 8.61 | | avio 0.13 ms |\n>>> \n>>> \n>>> Also during the issue perf shows [k] copy_user_generic_string on the top positions\n>>> 14.09% postmaster postgres [.] 0x00000000001b4569\n>>> 10.25% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>>> 4.15% postmaster postgres [.] hash_search_with_hash_value\n>>> 2.08% postmaster postgres [.] SearchCatCache\n>>> 1.79% postmaster postgres [.] LWLockAcquire\n>>> 1.18% postmaster libc-2.12.so [.] memcpy\n>>> 1.12% postmaster postgres [.] 
mdnblocks\n>>> \n>>> Issue starts: at 19:43\n>>> Mon Mar 16 19:43:04 MSK 2015 Stream: MASTER-rdb04d:70837172337784 SLAVE:70837172314864 Replay:70837172316512 :: REPLAY 21 KBytes (00:00:00.006225 seconds)\n>>> Mon Mar 16 19:43:09 MSK 2015 Stream: MASTER-rdb04d:70837177455624 SLAVE:70837177390968 Replay:70837176794376 :: REPLAY 646 KBytes (00:00:00.367305 seconds)\n>>> Mon Mar 16 19:43:14 MSK 2015 Stream: MASTER-rdb04d:70837185005120 SLAVE:70837184961280 Replay:70837183253896 :: REPLAY 1710 KBytes (00:00:00.827881 seconds)\n>>> Mon Mar 16 19:43:19 MSK 2015 Stream: MASTER-rdb04d:70837190417984 SLAVE:70837190230232 Replay:70837183253896 :: REPLAY 6996 KBytes (00:00:05.873169 seconds)\n>>> Mon Mar 16 19:43:24 MSK 2015 Stream: MASTER-rdb04d:70837198538232 SLAVE:70837198485000 Replay:70837183253896 :: REPLAY 14926 KBytes (00:00:11.025561 seconds)\n>>> Mon Mar 16 19:43:29 MSK 2015 Stream: MASTER-rdb04d:70837209961192 SLAVE:70837209869384 Replay:70837183253896 :: REPLAY 26081 KBytes (00:00:16.068014 seconds)\n>>> \n>>> We see [k] copy_user_generic_string\n>>> \n>>> 12.90% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>>> 11.49% postmaster postgres [.] 0x00000000001f40c1\n>>> 4.74% postmaster postgres [.] hash_search_with_hash_value\n>>> 1.86% postmaster postgres [.] mdnblocks\n>>> 1.73% postmaster postgres [.] LWLockAcquire\n>>> 1.67% postmaster postgres [.] SearchCatCache\n>>> \n>>> \n>>> 25.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>>> 7.89% postmaster postgres [.] hash_search_with_hash_value\n>>> 4.66% postmaster postgres [.] 0x00000000002108da\n>>> 4.51% postmaster postgres [.] mdnblocks\n>>> 3.36% postmaster [kernel.kallsyms] [k] put_page\n>>> \n>>> Issue stops: at 19:51:39\n>>> Mon Mar 16 19:51:24 MSK 2015 Stream: MASTER-rdb04d:70838904179344 SLAVE:70838903934392 Replay:70837183253896 :: REPLAY 1680591 KBytes (00:08:10.384679 seconds)\n>>> Mon Mar 16 19:51:29 MSK 2015 Stream: MASTER-rdb04d:70838929994336 SLAVE:70838929873624 Replay:70837183253896 :: REPLAY 1705801 KBytes (00:08:15.428773 seconds)\n>>> Mon Mar 16 19:51:34 MSK 2015 Stream: MASTER-rdb04d:70838951993624 SLAVE:70838951899768 Replay:70837183253896 :: REPLAY 1727285 KBytes (00:08:20.472567 seconds)\n>>> Mon Mar 16 19:51:39 MSK 2015 Stream: MASTER-rdb04d:70838975297912 SLAVE:70838975180384 Replay:70837208050872 :: REPLAY 1725827 KBytes (00:08:10.256935 seconds)\n>>> Mon Mar 16 19:51:44 MSK 2015 Stream: MASTER-rdb04d:70839001502160 SLAVE:70839001412616 Replay:70837260116984 :: REPLAY 1700572 KBytes (00:07:49.849511 seconds)\n>>> Mon Mar 16 19:51:49 MSK 2015 Stream: MASTER-rdb04d:70839022866760 SLAVE:70839022751184 Replay:70837276732880 :: REPLAY 1705209 KBytes (00:07:42.307364 seconds)\n>>> \n>>> And copy_user_generic_string goes down\n>>> + 13.43% postmaster postgres [.] 0x000000000023dc9a\n>>> + 3.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string\n>>> + 2.46% init [kernel.kallsyms] [k] intel_idle\n>>> + 2.30% postmaster postgres [.] hash_search_with_hash_value\n>>> + 2.01% postmaster postgres [.] SearchCatCache\n>>> \n>>> \n>>> Could you clarify what types of traces did you mean? 
GDB?\n>>> \n>>> To calculate slave and apply lag I use the following query at the replica site\n>>> \n>>> slave_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive\" $p_db)\n>>> replay_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay\" $p_db)\n>>> replay_timediff=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay\" $p_db)\n>>> master_lag=$($psql -U monitor -h$p_host -p$p_port -A -t -c \"SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset\" $p_db)\n>>> echo \"$(date) Stream: MASTER-$p_host:$master_lag SLAVE:$slave_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag/1024-$replay_lag/1024) KBytes (${replay_timediff} seconds)\" \n>>> \n>>> - \n>>> Best regards,\n>>> Sergey Shchukin \n>>> \n>> \n>> One more thing\n>> \n>> We have upgraded one of our shards to 9.4.1 and expectedly that did not help.\n>> \n>> A few things to notice which may be useful.\n>> \n>> 1. When replay stops, startup process reads a lot from array with $PGDATA. In iotop and iostat we see the following:\n>> \n>> Total DISK READ: 490.42 M/s | Total DISK WRITE: 3.82 M/s\n>> TID PRIO USER DISK READ> DISK WRITE SWAPIN IO COMMAND\n>> 3316 be/4 postgres 492.34 M/s 0.00 B/s 0.00 % 39.91 % postgres: startup process\n>> <...>\n>> \n>> Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n>> <...>\n>> md2 0.00 0.00 6501.00 7.00 339.90 0.03 106.97 0.00 0.00 0.00 0.00\n>> md3 0.00 0.00 0.00 1739.00 0.00 6.79 8.00 0.00 0.00 0.00 0.00\n>> \n>> root@rpopdb04g ~ # fgrep 9.4 /proc/mounts\n>> /dev/md2 /var/lib/pgsql/9.4/data ext4 rw,noatime,nodiratime,barrier=1,stripe=64,data=ordered 0 0\n>> /dev/md3 /var/lib/pgsql/9.4/data/pg_xlog ext4 rw,noatime,nodiratime,barrier=0,stripe=64,data=ordered 0 0\n>> root@rpopdb04g ~ #\n>> \n>> 2. The state of the startup process is changing in such a way:\n>> \n>> root@rpopdb04g ~ # while true; do ps aux | grep '[s]tartup'; sleep 1; done\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:11 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:11 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:12 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:12 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:14 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:14 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n>> postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:15 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:16 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:16 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:17 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Ts 18:04 8:17 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? 
Rs 18:04 8:18 postgres: startup process\n>> postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:19 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:19 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:21 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:22 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:22 postgres: startup process\n>> postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n>> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n>> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:24 postgres: startup process\n>> postgres 3316 26.9 3.2 4732052 4299260 ? Ts 18:04 8:24 postgres: startup process\n>> postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:25 postgres: startup process\n>> ^C\n>> root@rpopdb04g ~ #\n>> \n>> 3. confl* fields in pg_stat_database_conflicts are always zero during the pausing of replay.\n>> \n>> 4. The stack-traces taken with GDB are not really informative. We will recompile PostgreSQL with —enable-debug option and run it on one of our replicas if needed. Since it is a production system we would like to do it last of all. But we will do it if anybody would not give us any ideas.\n> \n> We did it. Most of the backtraces (taken while replay_location was not changing) looks like that:\n> \n> [Thread debugging using libthread_db enabled]\n> 0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6\n> #0 0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6\n> #1 0x000000000065d2f5 in FileRead (file=<value optimized out>, buffer=0x7f5470e6ba20 \"\\004\", amount=8192) at fd.c:1286\n> #2 0x000000000067acad in mdread (reln=<value optimized out>, forknum=<value optimized out>, blocknum=311995, buffer=0x7f5470e6ba20 \"\\004\") at md.c:679\n> #3 0x0000000000659b4e in ReadBuffer_common (smgr=<value optimized out>, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=311995, mode=RBM_NORMAL_NO_LOG, strategy=0x0, hit=0x7fff898a912f \"\") at bufmgr.c:476\n> #4 0x000000000065a61b in ReadBufferWithoutRelcache (rnode=..., forkNum=MAIN_FORKNUM, blockNum=311995, mode=<value optimized out>, strategy=<value optimized out>) at bufmgr.c:287\n> #5 0x00000000004cfb78 in XLogReadBufferExtended (rnode=..., forknum=MAIN_FORKNUM, blkno=311995, mode=RBM_NORMAL_NO_LOG) at xlogutils.c:324\n> #6 0x00000000004a3651 in btree_xlog_vacuum (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:522\n> #7 btree_redo (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:1144\n> #8 0x00000000004c903a in StartupXLOG () at xlog.c:6827\n> #9 0x000000000062f8bf in StartupProcessMain () at startup.c:224\n> #10 0x00000000004d3e9a in AuxiliaryProcessMain (argc=2, argv=0x7fff898a98a0) at bootstrap.c:416\n> #11 0x000000000062a99c in StartChildProcess (type=StartupProcess) at postmaster.c:5146\n> #12 0x000000000062e9e2 in PostmasterMain (argc=3, argv=<value optimized out>) at postmaster.c:1237\n> #13 0x00000000005c7d68 in main (argc=3, argv=0x1e22910) at main.c:228\n> \n> So the problem seems to be in this part of code - http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/nbtxlog.c;h=5f9fc49e78ca1388ab482e24c8b5a873238ae0b6;hb=d0f83327d3739a45102fdd486947248c70e0249d#l507 
<http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/nbtxlog.c;h=5f9fc49e78ca1388ab482e24c8b5a873238ae0b6;hb=d0f83327d3739a45102fdd486947248c70e0249d#l507>. I suppose, that answers the question why startup process reads a lot from disk while paused replay.\n> \n> So the questions are:\n> 1. Is there anything we can tune right now? Except for not reading from replicas and partitioning this table.\n> 2. Isn’t there still a function to determine that a buffer is not pinned in shared_buffers without reading it from disk? To optimize current behaviour in the future.\n\n-many\n+hackers\n\nCan anyone help?\n\n> \n>> 5. In one of the experiments replay stopped on 4115/56126DC0 xlog position. Here is a bit of pg_xlogdump output:\n>> \n>> rpopdb04d/rpopdb M # select pg_xlogfile_name('4115/56126DC0');\n>> pg_xlogfile_name\n>> --------------------------\n>> 000000060000411500000056\n>> (1 row)\n>> \n>> Time: 0.496 ms\n>> rpopdb04d/rpopdb M #\n>> \n>> root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 # /usr/pgsql-9.4/bin/pg_xlogdump 000000060000411500000056 000000060000411500000056 | fgrep 4115/56126DC0 -C10\n>> rmgr: Heap len (rec/tot): 36/ 3948, tx: 3930143874, lsn: 4115/561226C8, prev 4115/56122698, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 196267/4 xmax 3930143874 ; new tid 196267/10 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143874, lsn: 4115/56123638, prev 4115/561226C8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.725158 MSK\n>> rmgr: Heap len (rec/tot): 58/ 90, tx: 3930143875, lsn: 4115/56123668, prev 4115/56123638, bkp: 0000, desc: hot_update: rel 1663/16420/16737; tid 23624/3 xmax 3930143875 ; new tid 23624/21 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143875, lsn: 4115/561236C8, prev 4115/56123668, bkp: 0000, desc: commit: 2015-03-19 18:26:27.726432 MSK\n>> rmgr: Heap len (rec/tot): 36/ 2196, tx: 3930143876, lsn: 4115/561236F8, prev 4115/561236C8, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 123008/4 xmax 3930143876 ; new tid 123008/5 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143876, lsn: 4115/56123F90, prev 4115/561236F8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.727088 MSK\n>> rmgr: Heap len (rec/tot): 36/ 7108, tx: 3930143877, lsn: 4115/56123FC0, prev 4115/56123F90, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 34815/6 xmax 3930143877 ; new tid 34815/16 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143877, lsn: 4115/56125BA0, prev 4115/56123FC0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728178 MSK\n>> rmgr: Heap len (rec/tot): 36/ 4520, tx: 3930143878, lsn: 4115/56125BD0, prev 4115/56125BA0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 147863/5 xmax 3930143878 ; new tid 147863/16 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143878, lsn: 4115/56126D90, prev 4115/56125BD0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728339 MSK\n>> rmgr: Btree len (rec/tot): 20/ 52, tx: 0, lsn: 4115/56126DC0, prev 4115/56126D90, bkp: 0000, desc: vacuum: rel 1663/16420/16796; blk 31222118, lastBlockVacuumed 0\n>> rmgr: Heap len (rec/tot): 36/ 6112, tx: 3930143879, lsn: 4115/56126DF8, prev 4115/56126DC0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23461/26 xmax 3930143879 ; new tid 23461/22 xmax 0\n>> rmgr: Heap2 len (rec/tot): 24/ 8160, tx: 0, lsn: 4115/561285F0, prev 4115/56126DF8, bkp: 1000, desc: clean: rel 1663/16420/16782; blk 21997709 remxid 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143879, lsn: 4115/5612A5E8, prev 
4115/561285F0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728805 MSK\n>> rmgr: Heap2 len (rec/tot): 20/ 8268, tx: 0, lsn: 4115/5612A618, prev 4115/5612A5E8, bkp: 1000, desc: visible: rel 1663/16420/16782; blk 21997709\n>> rmgr: Heap len (rec/tot): 36/ 7420, tx: 3930143880, lsn: 4115/5612C680, prev 4115/5612A618, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 37456/8 xmax 3930143880 ; new tid 37456/29 xmax 0\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143880, lsn: 4115/5612E398, prev 4115/5612C680, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729141 MSK\n>> rmgr: Heap len (rec/tot): 36/ 7272, tx: 3930143881, lsn: 4115/5612E3C8, prev 4115/5612E398, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23614/2 xmax 3930143881 ; new tid 23614/22 xmax 0\n>> rmgr: Heap len (rec/tot): 150/ 182, tx: 0, lsn: 4115/56130048, prev 4115/5612E3C8, bkp: 0000, desc: inplace: rel 1663/16420/12764; tid 11/31\n>> rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143881, lsn: 4115/56130100, prev 4115/56130048, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729340 MSK\n>> rmgr: Heap len (rec/tot): 43/ 75, tx: 3930143882, lsn: 4115/56130130, prev 4115/56130100, bkp: 0000, desc: insert: rel 1663/16420/16773; tid 10159950/26\n>> rmgr: Btree len (rec/tot): 42/ 74, tx: 3930143882, lsn: 4115/56130180, prev 4115/56130130, bkp: 0000, desc: insert: rel 1663/16420/16800; tid 12758988/260\n>> root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 #\n>> \n>> Any help would be really appropriate. Thanks in advance.\n>> \n> \n> \n> --\n> May the force be with you…\n> https://simply.name <https://simply.name/>\n\n--\nMay the force be with you…\nhttps://simply.name\n\n\n20 марта 2015 г., в 18:00, Vladimir Borodin <[email protected]> написал(а):19 марта 2015 г., в 20:30, Sergey Shchukin <[email protected]> написал(а):17.03.2015 13:22, Sergey Shchukin пишет:05.03.2015 11:25, Jim Nasby пишет:On 2/27/15 5:11 AM, Sergey Shchukin wrote: show max_standby_streaming_delay; max_standby_streaming_delay ----------------------------- 30s We both need to be more clear about which server we're talking about (master or replica). What are max_standby_streaming_delay and max_standby_archive_delay set to *on the replica*? My hope is that one or both of those is set to somewhere around 8 minutes on the replica. That would explain everything. If that's not the case then I suspect what's happening is there's something running on the replica that isn't checking for interrupts frequently enough. That would also explain it. When replication hangs, is the replication process using a lot of CPU? Or is it just sitting there? What's the process status for the replay process show? Can you get a trace of the replay process on the replica when this is happening to see where it's spending all it's time? How are you generating these log lines? Tue Feb 24 15:05:07 MSK 2015 Stream: MASTER-masterdb:79607161592048 SLAVE:79607161550576 Replay:79607160986064 :: REPLAY 592 KBytes (00:00:00.398376 seconds) Do you see the confl_* fields in pg_stat_database_conflicts on the *replica* increasing? Hi Jim,max_standby_streaming_delay and max_standby_archive_delay both are 30s on master and replica dbsI don't see any specific or heavy workload during this issue with a hanging apply process. Just a normal queries as usual. 
But I see an increased disk activity during the time when the apply issue is ongoingDSK | sdc | | busy 61% | read 11511 | | write 4534 | KiB/r 46 | | KiB/w 4 | MBr/s 52.78 | | MBw/s 1.88 | avq 1.45 | | avio 0.38 ms |DSK | sde | | busy 60% | read 11457 | | write 4398 | KiB/r 46 | | KiB/w 4 | MBr/s 51.97 | | MBw/s 1.83 | avq 1.47 | | avio 0.38 ms |DSK | sdd | | busy 60% | read 9673 | | write 4538 | KiB/r 61 | | KiB/w 4 | MBr/s 58.24 | | MBw/s 1.88 | avq 1.47 | | avio 0.42 ms |DSK | sdj | | busy 59% | read 9576 | | write 4177 | KiB/r 63 | | KiB/w 4 | MBr/s 59.30 | | MBw/s 1.75 | avq 1.48 | | avio 0.43 ms |DSK | sdh | | busy 59% | read 9615 | | write 4305 | KiB/r 63 | | KiB/w 4 | MBr/s 59.23 | | MBw/s 1.80 | avq 1.48 | | avio 0.42 ms |DSK | sdf | | busy 59% | read 9483 | | write 4404 | KiB/r 63 | | KiB/w 4 | MBr/s 59.11 | | MBw/s 1.83 | avq 1.47 | | avio 0.42 ms |DSK | sdi | | busy 59% | read 11273 | | write 4173 | KiB/r 46 | | KiB/w 4 | MBr/s 51.50 | | MBw/s 1.75 | avq 1.43 | | avio 0.38 ms |DSK | sdg | | busy 59% | read 11406 | | write 4297 | KiB/r 46 | | KiB/w 4 | MBr/s 51.66 | | MBw/s 1.80 | avq 1.46 | | avio 0.37 ms |Although it's not seems to be an upper IO limit.Normally disks are busy at 20-45%DSK | sde | | busy 29% | read 6524 | | write 14426 | KiB/r 26 | | KiB/w 5 | MBr/s 17.08 | | MBw/s 7.78 | avq 10.46 | | avio 0.14 ms |DSK | sdi | | busy 29% | read 6590 | | write 14391 | KiB/r 26 | | KiB/w 5 | MBr/s 17.19 | | MBw/s 7.76 | avq 8.75 | | avio 0.14 ms |DSK | sdg | | busy 29% | read 6547 | | write 14401 | KiB/r 26 | | KiB/w 5 | MBr/s 16.94 | | MBw/s 7.60 | avq 7.28 | | avio 0.14 ms |DSK | sdc | | busy 29% | read 6835 | | write 14283 | KiB/r 27 | | KiB/w 5 | MBr/s 18.08 | | MBw/s 7.74 | avq 8.77 | | avio 0.14 ms |DSK | sdf | | busy 23% | read 3808 | | write 14391 | KiB/r 36 | | KiB/w 5 | MBr/s 13.49 | | MBw/s 7.78 | avq 12.88 | | avio 0.13 ms |DSK | sdd | | busy 23% | read 3747 | | write 14229 | KiB/r 33 | | KiB/w 5 | MBr/s 12.32 | | MBw/s 7.74 | avq 10.07 | | avio 0.13 ms |DSK | sdj | | busy 23% | read 3737 | | write 14336 | KiB/r 36 | | KiB/w 5 | MBr/s 13.16 | | MBw/s 7.76 | avq 10.48 | | avio 0.13 ms |DSK | sdh | | busy 23% | read 3793 | | write 14362 | KiB/r 35 | | KiB/w 5 | MBr/s 13.29 | | MBw/s 7.60 | avq 8.61 | | avio 0.13 ms |Also during the issue perf shows [k] copy_user_generic_string on the top positions 14.09% postmaster postgres [.] 0x00000000001b4569 10.25% postmaster [kernel.kallsyms] [k] copy_user_generic_string 4.15% postmaster postgres [.] hash_search_with_hash_value 2.08% postmaster postgres [.] SearchCatCache 1.79% postmaster postgres [.] LWLockAcquire 1.18% postmaster libc-2.12.so [.] memcpy 1.12% postmaster postgres [.] 
mdnblocksIssue starts: at 19:43Mon Mar 16 19:43:04 MSK 2015 Stream: MASTER-rdb04d:70837172337784 SLAVE:70837172314864 Replay:70837172316512 :: REPLAY 21 KBytes (00:00:00.006225 seconds)Mon Mar 16 19:43:09 MSK 2015 Stream: MASTER-rdb04d:70837177455624 SLAVE:70837177390968 Replay:70837176794376 :: REPLAY 646 KBytes (00:00:00.367305 seconds)Mon Mar 16 19:43:14 MSK 2015 Stream: MASTER-rdb04d:70837185005120 SLAVE:70837184961280 Replay:70837183253896 :: REPLAY 1710 KBytes (00:00:00.827881 seconds)Mon Mar 16 19:43:19 MSK 2015 Stream: MASTER-rdb04d:70837190417984 SLAVE:70837190230232 Replay:70837183253896 :: REPLAY 6996 KBytes (00:00:05.873169 seconds)Mon Mar 16 19:43:24 MSK 2015 Stream: MASTER-rdb04d:70837198538232 SLAVE:70837198485000 Replay:70837183253896 :: REPLAY 14926 KBytes (00:00:11.025561 seconds)Mon Mar 16 19:43:29 MSK 2015 Stream: MASTER-rdb04d:70837209961192 SLAVE:70837209869384 Replay:70837183253896 :: REPLAY 26081 KBytes (00:00:16.068014 seconds)We see [k] copy_user_generic_string 12.90% postmaster [kernel.kallsyms] [k] copy_user_generic_string 11.49% postmaster postgres [.] 0x00000000001f40c1 4.74% postmaster postgres [.] hash_search_with_hash_value 1.86% postmaster postgres [.] mdnblocks 1.73% postmaster postgres [.] LWLockAcquire 1.67% postmaster postgres [.] SearchCatCache 25.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string 7.89% postmaster postgres [.] hash_search_with_hash_value 4.66% postmaster postgres [.] 0x00000000002108da 4.51% postmaster postgres [.] mdnblocks 3.36% postmaster [kernel.kallsyms] [k] put_pageIssue stops: at 19:51:39Mon Mar 16 19:51:24 MSK 2015 Stream: MASTER-rdb04d:70838904179344 SLAVE:70838903934392 Replay:70837183253896 :: REPLAY 1680591 KBytes (00:08:10.384679 seconds)Mon Mar 16 19:51:29 MSK 2015 Stream: MASTER-rdb04d:70838929994336 SLAVE:70838929873624 Replay:70837183253896 :: REPLAY 1705801 KBytes (00:08:15.428773 seconds)Mon Mar 16 19:51:34 MSK 2015 Stream: MASTER-rdb04d:70838951993624 SLAVE:70838951899768 Replay:70837183253896 :: REPLAY 1727285 KBytes (00:08:20.472567 seconds)Mon Mar 16 19:51:39 MSK 2015 Stream: MASTER-rdb04d:70838975297912 SLAVE:70838975180384 Replay:70837208050872 :: REPLAY 1725827 KBytes (00:08:10.256935 seconds)Mon Mar 16 19:51:44 MSK 2015 Stream: MASTER-rdb04d:70839001502160 SLAVE:70839001412616 Replay:70837260116984 :: REPLAY 1700572 KBytes (00:07:49.849511 seconds)Mon Mar 16 19:51:49 MSK 2015 Stream: MASTER-rdb04d:70839022866760 SLAVE:70839022751184 Replay:70837276732880 :: REPLAY 1705209 KBytes (00:07:42.307364 seconds)And copy_user_generic_string goes down+ 13.43% postmaster postgres [.] 0x000000000023dc9a+ 3.71% postmaster [kernel.kallsyms] [k] copy_user_generic_string+ 2.46% init [kernel.kallsyms] [k] intel_idle+ 2.30% postmaster postgres [.] hash_search_with_hash_value+ 2.01% postmaster postgres [.] SearchCatCacheCould you clarify what types of traces did you mean? 
GDB?To calculate slave and apply lag I use the following query at the replica siteslave_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive\" $p_db)replay_lag=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay\" $p_db)replay_timediff=$($psql -U monitor -h$s_host -p$s_port -A -t -c \"SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay\" $p_db)master_lag=$($psql -U monitor -h$p_host -p$p_port -A -t -c \"SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset\" $p_db)echo \"$(date) Stream: MASTER-$p_host:$master_lag SLAVE:$slave_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag/1024-$replay_lag/1024) KBytes (${replay_timediff} seconds)\" - \nBest regards,\nSergey Shchukin \nOne more thingWe have upgraded one of our shards to 9.4.1 and expectedly that did not help.\n\nA few things to notice which may be useful.\n\n1. When replay stops, startup process reads a lot from array with $PGDATA. In iotop and iostat we see the following:\n\nTotal DISK READ: 490.42 M/s | Total DISK WRITE: 3.82 M/s\n TID PRIO USER DISK READ> DISK WRITE SWAPIN IO COMMAND\n 3316 be/4 postgres 492.34 M/s 0.00 B/s 0.00 % 39.91 % postgres: startup process\n <...>\n\n Device: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\n <...>\n md2 0.00 0.00 6501.00 7.00 339.90 0.03 106.97 0.00 0.00 0.00 0.00\n md3 0.00 0.00 0.00 1739.00 0.00 6.79 8.00 0.00 0.00 0.00 0.00\n\n root@rpopdb04g ~ # fgrep 9.4 /proc/mounts\n /dev/md2 /var/lib/pgsql/9.4/data ext4 rw,noatime,nodiratime,barrier=1,stripe=64,data=ordered 0 0\n /dev/md3 /var/lib/pgsql/9.4/data/pg_xlog ext4 rw,noatime,nodiratime,barrier=0,stripe=64,data=ordered 0 0\n root@rpopdb04g ~ #\n\n2. The state of the startup process is changing in such a way:\n\n root@rpopdb04g ~ # while true; do ps aux | grep '[s]tartup'; sleep 1; done\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:11 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:11 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:12 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:12 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:13 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:14 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ts 18:04 8:14 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n postgres 3316 26.6 3.2 4732052 4299260 ? Rs 18:04 8:15 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:15 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:16 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:16 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:17 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ts 18:04 8:17 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Rs 18:04 8:18 postgres: startup process\n postgres 3316 26.7 3.2 4732052 4299260 ? Ds 18:04 8:19 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? 
Rs 18:04 8:19 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:20 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:21 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Ds 18:04 8:22 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:22 postgres: startup process\n postgres 3316 26.8 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:23 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:24 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Ts 18:04 8:24 postgres: startup process\n postgres 3316 26.9 3.2 4732052 4299260 ? Rs 18:04 8:25 postgres: startup process\n ^C\n root@rpopdb04g ~ #\n\n3. confl* fields in pg_stat_database_conflicts are always zero during the pausing of replay.\n\n4. The stack-traces taken with GDB are not really informative. We will recompile PostgreSQL with —enable-debug option and run it on one of our replicas if needed. Since it is a production system we would like to do it last of all. But we will do it if anybody would not give us any ideas.\nWe did it. Most of the backtraces (taken while replay_location was not changing) looks like that:[Thread debugging using libthread_db enabled]0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6#0 0x00007f54a71444c0 in __read_nocancel () from /lib64/libc.so.6#1 0x000000000065d2f5 in FileRead (file=<value optimized out>, buffer=0x7f5470e6ba20 \"\\004\", amount=8192) at fd.c:1286#2 0x000000000067acad in mdread (reln=<value optimized out>, forknum=<value optimized out>, blocknum=311995, buffer=0x7f5470e6ba20 \"\\004\") at md.c:679#3 0x0000000000659b4e in ReadBuffer_common (smgr=<value optimized out>, relpersistence=112 'p', forkNum=MAIN_FORKNUM, blockNum=311995, mode=RBM_NORMAL_NO_LOG, strategy=0x0, hit=0x7fff898a912f \"\") at bufmgr.c:476#4 0x000000000065a61b in ReadBufferWithoutRelcache (rnode=..., forkNum=MAIN_FORKNUM, blockNum=311995, mode=<value optimized out>, strategy=<value optimized out>) at bufmgr.c:287#5 0x00000000004cfb78 in XLogReadBufferExtended (rnode=..., forknum=MAIN_FORKNUM, blkno=311995, mode=RBM_NORMAL_NO_LOG) at xlogutils.c:324#6 0x00000000004a3651 in btree_xlog_vacuum (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:522#7 btree_redo (lsn=71742288638464, record=0x1e48b78) at nbtxlog.c:1144#8 0x00000000004c903a in StartupXLOG () at xlog.c:6827#9 0x000000000062f8bf in StartupProcessMain () at startup.c:224#10 0x00000000004d3e9a in AuxiliaryProcessMain (argc=2, argv=0x7fff898a98a0) at bootstrap.c:416#11 0x000000000062a99c in StartChildProcess (type=StartupProcess) at postmaster.c:5146#12 0x000000000062e9e2 in PostmasterMain (argc=3, argv=<value optimized out>) at postmaster.c:1237#13 0x00000000005c7d68 in main (argc=3, argv=0x1e22910) at main.c:228So the problem seems to be in this part of code - http://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/nbtree/nbtxlog.c;h=5f9fc49e78ca1388ab482e24c8b5a873238ae0b6;hb=d0f83327d3739a45102fdd486947248c70e0249d#l507. I suppose, that answers the question why startup process reads a lot from disk while paused replay.So the questions are:1. Is there anything we can tune right now? Except for not reading from replicas and partitioning this table.2. 
Isn’t there still a function to determine that a buffer is not pinned in shared_buffers without reading it from disk? To optimize current behaviour in the future.-many+hackersCan anyone help?5. In one of the experiments replay stopped on 4115/56126DC0 xlog position. Here is a bit of pg_xlogdump output:\n\n rpopdb04d/rpopdb M # select pg_xlogfile_name('4115/56126DC0');\n pg_xlogfile_name\n --------------------------\n 000000060000411500000056\n (1 row)\n\n Time: 0.496 ms\n rpopdb04d/rpopdb M #\n\n root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 # /usr/pgsql-9.4/bin/pg_xlogdump 000000060000411500000056 000000060000411500000056 | fgrep 4115/56126DC0 -C10\n rmgr: Heap len (rec/tot): 36/ 3948, tx: 3930143874, lsn: 4115/561226C8, prev 4115/56122698, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 196267/4 xmax 3930143874 ; new tid 196267/10 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143874, lsn: 4115/56123638, prev 4115/561226C8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.725158 MSK\n rmgr: Heap len (rec/tot): 58/ 90, tx: 3930143875, lsn: 4115/56123668, prev 4115/56123638, bkp: 0000, desc: hot_update: rel 1663/16420/16737; tid 23624/3 xmax 3930143875 ; new tid 23624/21 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143875, lsn: 4115/561236C8, prev 4115/56123668, bkp: 0000, desc: commit: 2015-03-19 18:26:27.726432 MSK\n rmgr: Heap len (rec/tot): 36/ 2196, tx: 3930143876, lsn: 4115/561236F8, prev 4115/561236C8, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 123008/4 xmax 3930143876 ; new tid 123008/5 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143876, lsn: 4115/56123F90, prev 4115/561236F8, bkp: 0000, desc: commit: 2015-03-19 18:26:27.727088 MSK\n rmgr: Heap len (rec/tot): 36/ 7108, tx: 3930143877, lsn: 4115/56123FC0, prev 4115/56123F90, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 34815/6 xmax 3930143877 ; new tid 34815/16 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143877, lsn: 4115/56125BA0, prev 4115/56123FC0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728178 MSK\n rmgr: Heap len (rec/tot): 36/ 4520, tx: 3930143878, lsn: 4115/56125BD0, prev 4115/56125BA0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 147863/5 xmax 3930143878 ; new tid 147863/16 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143878, lsn: 4115/56126D90, prev 4115/56125BD0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728339 MSK\n rmgr: Btree len (rec/tot): 20/ 52, tx: 0, lsn: 4115/56126DC0, prev 4115/56126D90, bkp: 0000, desc: vacuum: rel 1663/16420/16796; blk 31222118, lastBlockVacuumed 0\n rmgr: Heap len (rec/tot): 36/ 6112, tx: 3930143879, lsn: 4115/56126DF8, prev 4115/56126DC0, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23461/26 xmax 3930143879 ; new tid 23461/22 xmax 0\n rmgr: Heap2 len (rec/tot): 24/ 8160, tx: 0, lsn: 4115/561285F0, prev 4115/56126DF8, bkp: 1000, desc: clean: rel 1663/16420/16782; blk 21997709 remxid 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143879, lsn: 4115/5612A5E8, prev 4115/561285F0, bkp: 0000, desc: commit: 2015-03-19 18:26:27.728805 MSK\n rmgr: Heap2 len (rec/tot): 20/ 8268, tx: 0, lsn: 4115/5612A618, prev 4115/5612A5E8, bkp: 1000, desc: visible: rel 1663/16420/16782; blk 21997709\n rmgr: Heap len (rec/tot): 36/ 7420, tx: 3930143880, lsn: 4115/5612C680, prev 4115/5612A618, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 37456/8 xmax 3930143880 ; new tid 37456/29 xmax 0\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143880, lsn: 4115/5612E398, prev 
4115/5612C680, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729141 MSK\n rmgr: Heap len (rec/tot): 36/ 7272, tx: 3930143881, lsn: 4115/5612E3C8, prev 4115/5612E398, bkp: 1000, desc: hot_update: rel 1663/16420/16737; tid 23614/2 xmax 3930143881 ; new tid 23614/22 xmax 0\n rmgr: Heap len (rec/tot): 150/ 182, tx: 0, lsn: 4115/56130048, prev 4115/5612E3C8, bkp: 0000, desc: inplace: rel 1663/16420/12764; tid 11/31\n rmgr: Transaction len (rec/tot): 12/ 44, tx: 3930143881, lsn: 4115/56130100, prev 4115/56130048, bkp: 0000, desc: commit: 2015-03-19 18:26:27.729340 MSK\n rmgr: Heap len (rec/tot): 43/ 75, tx: 3930143882, lsn: 4115/56130130, prev 4115/56130100, bkp: 0000, desc: insert: rel 1663/16420/16773; tid 10159950/26\n rmgr: Btree len (rec/tot): 42/ 74, tx: 3930143882, lsn: 4115/56130180, prev 4115/56130130, bkp: 0000, desc: insert: rel 1663/16420/16800; tid 12758988/260\n root@pg-backup04h /u0/rpopdb04/wals/0000000600004115 #\n\nAny help would be really appreciated. Thanks in advance.\n\n--\nMay the force be with you…\nhttps://simply.name",
"msg_date": "Mon, 23 Mar 2015 17:45:12 +0300",
"msg_from": "Vladimir Borodin <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] [pgadmin-support] Issue with a hanging apply process on\n the replica db after vacuum works on primary"
}
] |
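A practical aside for anyone reproducing the investigation above: instead of polling ps to see whether the startup process is advancing, the replica can be asked directly. The following is a minimal sketch, assuming a 9.4-era standby (these functions were renamed to the pg_last_wal_* forms in PostgreSQL 10):

    -- run on the replica; replay_delay keeps growing while apply is stuck
    select pg_is_in_recovery()                     as in_recovery,
           pg_last_xlog_receive_location()         as receive_lsn,
           pg_last_xlog_replay_location()          as replay_lsn,
           now() - pg_last_xact_replay_timestamp() as replay_delay;

If receive_lsn keeps moving while replay_lsn stays put, WAL is arriving but the startup process is blocked in redo, which is consistent with the btree_xlog_vacuum backtraces shown above.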
[
{
"msg_contents": "All:\n\nThis got posted to pgsql-bugs, but got no attention there[1], so I'm\nsending it to this list.\n\nTest case:\n\ncreatedb bench\npgbench -i -s bench\n\\c bench\n\nbench=# explain select * from pgbench_accounts where aid = 2;\n QUERY PLAN\n---------------------------------------------------------------\n Index Scan using pgbench_accounts_pkey on pgbench_accounts\n(cost=0.42..8.44 rows=1 width=97)\n Index Cond: (aid = 2)\n(2 rows)\n\nbench=# explain select * from pgbench_accounts where aid = 2 and false;\n\n QUERY PLAN\n-------------------------------------------------\n Result (cost=0.00..26394.00 rows=1 width=97)\n One-Time Filter: false\n -> Seq Scan on pgbench_accounts (cost=0.00..26394.00 rows=1 width=97)\n(3 rows)\n\nThis seems like a special case of the \"aborted plan cost\", that is, when\nthe planner expects to abort a plan early, it nevertheless returns the\nfull cost for the non-aborted version of the query, rather than the\nworking cost, which is based on the abort.\n\nFor example:\n\nbench=# create index on pgbench_accounts(bid);\nCREATE INDEX\nbench=# explain select * from pgbench_accounts where bid = 2;\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------\n Index Scan using pgbench_accounts_bid_idx on pgbench_accounts\n(cost=0.42..4612.10 rows=102667 width=97)\n Index Cond: (bid = 2)\n(2 rows)\n\nbench=# explain select * from pgbench_accounts where bid = 2 limit 1;\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n Limit (cost=0.00..0.28 rows=1 width=97)\n -> Seq Scan on pgbench_accounts (cost=0.00..28894.00 rows=102667\nwidth=97)\n Filter: (bid = 2)\n(3 rows)\n\nSo in this case, the top-level node returns a lower cost because the\nplanner knows that it will find a row with bid=2 fairly quickly in the\nseq scan. But in the WHERE FALSE example, that scan *is* the top-level\nnode, so the planner returns a fictitious cost for the whole query.\n\nOr is there something else at work here?\n\n[1]\nhttp://www.postgresql.org/message-id/[email protected]\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Fri, 27 Feb 2015 17:28:06 -0800",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Bad cost estimate with FALSE filter condition"
},
{
"msg_contents": "So ... should I assume my diagnosis is correct? Haven't heard any other\nsuggestions.\n\nOn 02/27/2015 05:28 PM, Josh Berkus wrote:\n> All:\n> \n> This got posted to pgsql-bugs, but got no attention there[1], so I'm\n> sending it to this list.\n> \n> Test case:\n> \n> createdb bench\n> pgbench -i -s bench\n> \\c bench\n> \n> bench=# explain select * from pgbench_accounts where aid = 2;\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Index Scan using pgbench_accounts_pkey on pgbench_accounts\n> (cost=0.42..8.44 rows=1 width=97)\n> Index Cond: (aid = 2)\n> (2 rows)\n> \n> bench=# explain select * from pgbench_accounts where aid = 2 and false;\n> \n> QUERY PLAN\n> -------------------------------------------------\n> Result (cost=0.00..26394.00 rows=1 width=97)\n> One-Time Filter: false\n> -> Seq Scan on pgbench_accounts (cost=0.00..26394.00 rows=1 width=97)\n> (3 rows)\n> \n> This seems like a special case of the \"aborted plan cost\", that is, when\n> the planner expects to abort a plan early, it nevertheless returns the\n> full cost for the non-aborted version of the query, rather than the\n> working cost, which is based on the abort.\n> \n> For example:\n> \n> bench=# create index on pgbench_accounts(bid);\n> CREATE INDEX\n> bench=# explain select * from pgbench_accounts where bid = 2;\n> QUERY PLAN\n> \n> ----------------------------------------------------------------------------------------------------------\n> Index Scan using pgbench_accounts_bid_idx on pgbench_accounts\n> (cost=0.42..4612.10 rows=102667 width=97)\n> Index Cond: (bid = 2)\n> (2 rows)\n> \n> bench=# explain select * from pgbench_accounts where bid = 2 limit 1;\n> QUERY PLAN\n> \n> --------------------------------------------------------------------------------\n> Limit (cost=0.00..0.28 rows=1 width=97)\n> -> Seq Scan on pgbench_accounts (cost=0.00..28894.00 rows=102667\n> width=97)\n> Filter: (bid = 2)\n> (3 rows)\n> \n> So in this case, the top-level node returns a lower cost because the\n> planner knows that it will find a row with bid=2 fairly quickly in the\n> seq scan. But in the WHERE FALSE example, that scan *is* the top-level\n> node, so the planner returns a fictitious cost for the whole query.\n> \n> Or is there something else at work here?\n> \n> [1]\n> http://www.postgresql.org/message-id/[email protected]\n> \n\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 10:15:53 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad cost estimate with FALSE filter condition"
},
{
"msg_contents": "Josh Berkus <[email protected]> writes:\n> So ... should I assume my diagnosis is correct? Haven't heard any other\n> suggestions.\n\nI don't see any reason to think this is worth worrying about, or worth\nspending planner cycles on to produce a cosmetically nicer cost estimate.\nOne-time filters always apply at the top plan level so they're unlikely\nto change any planner choices. Moreover, for any case other than the\nnot-terribly-interesting constant FALSE case, we're better off assuming\nthat the filter condition will be true (and so there's nothing to adjust).\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 14:26:55 -0400",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: Bad cost estimate with FALSE filter condition"
},
{
"msg_contents": "On 03/16/2015 11:26 AM, Tom Lane wrote:\n> Josh Berkus <[email protected]> writes:\n>> So ... should I assume my diagnosis is correct? Haven't heard any other\n>> suggestions.\n> \n> I don't see any reason to think this is worth worrying about, or worth\n> spending planner cycles on to produce a cosmetically nicer cost estimate.\n\nI wouldn't say it's critical, but there's two issues:\n\n1) users are confused when they see the plan, especially if it's chosen\nin preference to a lower-cost plan. It's counter-intuitive for EXPLAIN\nto not display the \"real\" estimated cost.\n\n2) Tools which attempt to do some kind of useful aggregation or event\nhandling around estimated plan cost have to write special workarounds\nfor these cases.\n\nIs there anything *useful* about the existing behavior such that we'd\nlike to preserve it? Or is it just a matter of Nobody's Submitted A\nPatch Yet?\n\nI ask because I'm thinking about a patch, so if changing this will break\na lot of stuff, that's a good thing to know.\n\n> One-time filters always apply at the top plan level so they're unlikely\n> to change any planner choices. Moreover, for any case other than the\n> not-terribly-interesting constant FALSE case, we're better off assuming\n> that the filter condition will be true (and so there's nothing to adjust).\n\nExcept that we *don't* get back the same estimate for a TRUE filter\ncondition.\n\n-- \nJosh Berkus\nPostgreSQL Experts Inc.\nhttp://pgexperts.com\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Mon, 16 Mar 2015 12:12:15 -0700",
"msg_from": "Josh Berkus <[email protected]>",
"msg_from_op": true,
"msg_subject": "Re: Bad cost estimate with FALSE filter condition"
}
] |
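A small follow-up sketch for the thread above: the expensive-looking child under a constant-false one-time filter is never actually run, which EXPLAIN ANALYZE makes visible. This assumes the same pgbench database as in the test case, and the exact output shape may vary by version:

    -- the top-level cost mirrors the child, but the scan never runs:
    explain analyze
    select * from pgbench_accounts where aid = 2 and false;
    --  Result  (cost=0.00..26394.00 rows=1 width=97)
    --          (actual time=0.00x..0.00x rows=0 loops=1)
    --    One-Time Filter: false
    --    ->  Seq Scan on pgbench_accounts  (cost=0.00..26394.00 ...)
    --        (never executed)

So the reported cost is cosmetic rather than a prediction of work done, which is why tools that aggregate plan costs may want to special-case a top-level 'One-Time Filter: false'.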
[
{
"msg_contents": "Hi all,\n I've noticed that order by / limit are not distributed to union subqueries\nby the planner:\n\nExample:\n\nq1: (select * from t1) union all (select * from t2) order by x limit 10;\nq2: (select * from t1 order by x limit 10) union all (select * from t2\norder by x limit 10)\n order by x limit 10;\n\nboth queries should be equivalent, but the planner provides hugely different\nplans. I was expecting that the planner could rewrite the first to the\nsecond.\nAm I overlooking something? If this is the case, can anyone explain why this\noptimization is not performed?\n\nThanks!\nPaolo\n\nHi all, I've noticed that order by / limit are not distributed to union subqueriesby the planner:Example:q1: (select * from t1) union all (select * from t2) order by x limit 10;q2: (select * from t1 order by x limit 10) union all (select * from t2 order by x limit 10) order by x limit 10;both queries should be equivalent, but the planner provides hugely differentplans. I was expecting that the planner could rewrite the first to the second.Am I overlooking something? If this is the case, can anyone explain why thisoptimization is not performed?Thanks!Paolo",
"msg_date": "Sat, 28 Feb 2015 10:08:30 +0100",
"msg_from": "Paolo Losi <[email protected]>",
"msg_from_op": true,
"msg_subject": "pushing order by + limit to union subqueries"
},
{
"msg_contents": "Paolo Losi <[email protected]> writes:\n> I've noticed that order by / limit are not distributed to union subqueries\n> by the planner:\n\n> Example:\n\n> q1: (select * from t1) union all (select * from t2) order by x limit 10;\n> q2: (select * from t1 order by x limit 10) union all (select * from t2\n> order by x limit 10)\n> order by x limit 10;\n\n> both queries should be equivalent, but the planner provides hugely different\n> plans. I was expecting that the planner could rewrite the first to the\n> second.\n> Am I overlooking something? If this is the case, can anyone explain why this\n> optimization is not performed?\n\nThere would be cases where that would be a win, and there would be cases\nwhere it wouldn't be, so I'd not be in favor of making the transformation\nblindly. Unfortunately, given the current state of the planner that's\nall we could do really, because the subqueries are planned at arm's\nlength and then we just mechanically combine them. Doing it \"right\" would\nentail fully planning each subquery twice, which would be very expensive.\n\nI have a longstanding desire to rewrite the upper levels of the planner to\nuse path generation and comparison, which should make it more practical\nfor the planner to compare alternative implementations of UNION and other\ntop-level constructs. But I've been saying I would do that for several\nyears now, so don't hold your breath :-(\n\n\t\t\tregards, tom lane\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n",
"msg_date": "Sat, 28 Feb 2015 11:24:01 -0500",
"msg_from": "Tom Lane <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: pushing order by + limit to union subqueries"
},
{
"msg_contents": "On Sat, Feb 28, 2015 at 8:24 AM, Tom Lane <[email protected]> wrote:\n>\n> There would be cases where that would be a win, and there would be cases\n> where it wouldn't be, so I'd not be in favor of making the transformation\n> blindly. Unfortunately, given the current state of the planner that's\n> all we could do really, because the subqueries are planned at arm's\n> length and then we just mechanically combine them. Doing it \"right\" would\n> entail fully planning each subquery twice, which would be very expensive.\n>\nYes, after pulling up, subqueries are planned independently and we\nglue them together finally.\n\n> I have a longstanding desire to rewrite the upper levels of the planner to\n> use path generation and comparison, which should make it more practical\n> for the planner to compare alternative implementations of UNION and other\n> top-level constructs. But I've been saying I would do that for several\n> years now, so don't hold your breath :-(\n>\nGreenPlum utilizes Cascades optimizer framework (also used in SQL\nServer and some others) to make the optimizer more modular and\nextensible. In our context here, it allows thorough optimization\nwithout pre-defined boundaries - no \"subquery planning then glue\nthem\". Is that something in your mind?\n\nRegards,\nQingqing\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n",
"msg_date": "Thu, 16 Apr 2015 11:49:25 -0700",
"msg_from": "Qingqing Zhou <[email protected]>",
"msg_from_op": false,
"msg_subject": "Re: [PERFORM] pushing order by + limit to union subqueries"
}
] |
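Until the planner learns this transformation, the pushdown from the first message can be done by hand. A minimal sketch, assuming hypothetical tables t1 and t2 that each carry a btree index on x:

    create index t1_x_idx on t1 (x);
    create index t2_x_idx on t2 (x);

    -- each branch satisfies its own LIMIT from an index scan, so the
    -- outer ORDER BY ... LIMIT only has to sort at most 20 rows:
    (select * from t1 order by x limit 10)
    union all
    (select * from t2 order by x limit 10)
    order by x limit 10;

This is precisely the case where the rewrite wins; as Tom notes, it can also lose (for instance when a per-branch sort costs more than one sort of the concatenated output), which is why applying it blindly in the planner would be risky.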