[ { "msg_contents": "Hello performance, I need help explaining the performance of a particular\nquery:\n\nselect * from messages where ((messages.topic = E'/x') AND\n(messages.processed = 'f')) ORDER BY messages.created_at ASC limit 10;\n\n\nTable Structure:\n\n Column | Type |\nModifiers\n------------+-----------------------------+--------------------------------------------------------------------\n id | integer | not null default\nnextval('landing_page.messages_id_seq'::regclass)\n processed | boolean |\n topic | character varying(255) |\n body | text |\n created_at | timestamp without time zone |\n updated_at | timestamp without time zone |\nIndexes:\n \"messages_pkey\" PRIMARY KEY, btree (id)\n \"idx_landing_page_messages_created_at\" btree (created_at)\n \"idx_messages_topic_processed\" btree (topic, processed)\n\n\nTable row count ~ 1million\n\nWhen I run the query with limit 10 it skips the\nidx_messages_topic_processed.\nWhen I run the query with no limit, or with a limit above 20 it uses the\ndesired index.\nOn a different system with a much smaller data set (~200,000) i have to use\na limit of about 35 to use the desired index.\n\nthis is the good plan with no limit or 'sweet spot' limit\n\n Limit (cost=2050.29..2050.38 rows=35 width=1266)\n -> Sort (cost=2050.29..2052.13 rows=737 width=1266)\n Sort Key: created_at\n -> Bitmap Heap Scan on messages (cost=25.86..2027.70 rows=737\nwidth=1266)\n Recheck Cond: ((topic)::text = 'x'::text)\n Filter: (NOT processed)\n -> Bitmap Index Scan on idx_messages_topic_processed\n (cost=0.00..25.68 rows=737 width=0)\n Index Cond: (((topic)::text = '/x'::text) AND\n(processed = false))\n\nThis is the bad plan with limit 10\n Limit (cost=0.00..1844.07 rows=30 width=1266)\n -> Index Scan using idx_landing_page_messages_created_at on messages\n (cost=0.00..45302.70 rows=737 width=1266)\n Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n\n\nNot sure if cost has anything to do with it, but this is set in\npostgresql.conf. 
I am hesitant to change this as I have inherited the\ndatabase from a previous dba and dont want to adversely affect things that\ncaused this to be set in a non default manner if possible.\n\n#seq_page_cost = 1.0 # measured on an arbitrary scale\nrandom_page_cost = 3.0 # same scale as above\n\n\n\nWhy does the smaller limit cause it to skip the index?\nIs there a way to help the planner choose the better plan?\n\nMuch appreciated,\nMike\n\nHello performance, I need help explaining the performance of a particular query:select * from messages where ((messages.topic = E'/x') AND (messages.processed = 'f'))  ORDER BY messages.created_at ASC limit 10;\nTable Structure:   Column   |            Type             |                             Modifiers                              ------------+-----------------------------+--------------------------------------------------------------------\n id         | integer                     | not null default nextval('landing_page.messages_id_seq'::regclass) processed  | boolean                     |  topic      | character varying(255)      | \n body       | text                        |  created_at | timestamp without time zone |  updated_at | timestamp without time zone | Indexes:    \"messages_pkey\" PRIMARY KEY, btree (id)\n    \"idx_landing_page_messages_created_at\" btree (created_at)    \"idx_messages_topic_processed\" btree (topic, processed)Table row count ~ 1million\nWhen I run the query with limit 10 it skips the idx_messages_topic_processed.When I run the query with no limit, or with a limit above 20 it uses the desired index.On a different system with a much smaller data set (~200,000) i have to use a limit of about 35 to use the desired index.\nthis is the good plan with no limit or 'sweet spot' limit Limit  (cost=2050.29..2050.38 rows=35 width=1266)   ->  Sort  (cost=2050.29..2052.13 rows=737 width=1266)\n         Sort Key: created_at         ->  Bitmap Heap Scan on messages  (cost=25.86..2027.70 rows=737 width=1266)               Recheck Cond: ((topic)::text = 'x'::text)               Filter: (NOT processed)\n               ->  Bitmap Index Scan on idx_messages_topic_processed  (cost=0.00..25.68 rows=737 width=0)                     Index Cond: (((topic)::text = '/x'::text) AND (processed = false))\nThis is the bad plan with limit 10 Limit  (cost=0.00..1844.07 rows=30 width=1266)   ->  Index Scan using idx_landing_page_messages_created_at on messages  (cost=0.00..45302.70 rows=737 width=1266)\n         Filter: ((NOT processed) AND ((topic)::text = 'x'::text))Not sure if cost has anything to do with it, but this is set in postgresql.conf.  
I am hesitant to change this as I have inherited the database from a previous dba and dont want to adversely affect things that caused this to be set in a non default manner if possible.\n#seq_page_cost = 1.0 # measured on an arbitrary scalerandom_page_cost = 3.0 # same scale as above\nWhy does the smaller limit cause it to skip the index?Is there a way to help the planner choose the better plan?Much appreciated, \nMike", "msg_date": "Wed, 5 Jan 2011 16:57:07 -0600", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "plan question - query with order by and limit not choosing index\n\tdepends on size of limit, table" }, { "msg_contents": "Mike Broers <[email protected]> wrote:\n \n> Hello performance, I need help explaining the performance of a\n> particular query\n \nYou provided some of the information needed, but you should review\nthis page and post a bit more:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \nIn particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\nAlso, showing all overrides in your postgresql.conf file is\nimportant, and some information about your hardware. How big is the\nactive portion of your database (the frequently read portion)?\n \n> Why does the smaller limit cause it to skip the index?\n \nBecause the optimizer thinks the query will return rows sooner that\nway.\n \n> Is there a way to help the planner choose the better plan?\n \nYou might get there by adjusting your memory settings and/or costing\nsettings, but we need to see more information to know that.\n \n-Kevin\n", "msg_date": "Wed, 05 Jan 2011 17:10:36 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan question - query with order by and limit\n\tnot choosing index depends on size of limit, table" }, { "msg_contents": "Thanks for the assistance.\n\nHere is an explain analyze of the query with the problem limit:\n\nproduction=# explain analyze select * from landing_page.messages where\n((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\nmessages.created_at ASC limit 10;\n\n\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------\n Limit (cost=0.00..2891.06 rows=10 width=1340) (actual\ntime=207922.586..207922.586 rows=0 loops=1)\n -> Index Scan using idx_landing_page_messages_created_at on messages\n (cost=0.00..449560.48 rows=1555 widt\nh=1340) (actual time=207922.581..207922.581 rows=0 loops=1)\n Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n Total runtime: 207949.413 ms\n(4 rows)\n\n\nand an explain analyze with a higher limit that hits the index:\n\n\nproduction=# explain analyze select * from landing_page.messages where\n((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\nmessages.created_at ASC limit 25;\n QUERY\nPLAN\n\n--------------------------------------------------------------------------------------------------------------\n-----------------------------------------\n Limit (cost=5885.47..5885.54 rows=25 width=1340) (actual\ntime=80.931..80.931 rows=0 loops=1)\n -> Sort (cost=5885.47..5889.36 rows=1555 width=1340) (actual\ntime=80.926..80.926 rows=0 loops=1)\n Sort Key: created_at\n Sort Method: quicksort Memory: 17kB\n -> Bitmap Heap Scan on messages (cost=60.45..5841.59 rows=1555\nwidth=1340) (actual time=64.404..64.\n404 rows=0 loops=1)\n Recheck Cond: ((topic)::text = 'x'::text)\n Filter: (NOT 
processed)\n -> Bitmap Index Scan on idx_messages_topic_processed\n (cost=0.00..60.06 rows=1550 width=0) (ac\ntual time=56.207..56.207 rows=0 loops=1)\n Index Cond: (((topic)::text = 'x'::text) AND (p\nrocessed = false))\n Total runtime: 88.051 ms\n(10 rows)\n\n\noverrides in postgresql.conf\n\nshared_buffers = 256MB\nwork_mem = 8MB\nmax_fsm_pages = 2000000\nmax_fsm_relations = 2000\ncheckpoint_segments = 10\narchive_mode = on\nrandom_page_cost = 3.0\neffective_cache_size = 6GB\ndefault_statistics_target = 250\nlogging_collector = on\n\n\nForgot to mention this is Postgres 8.3.8 with 6GB memory on the server.\n\nWhen you ask how big is the active portion of the database I am not sure how\nto answer. The whole database server is about 140GB, but there are other\napplications that use this database, this particular table is about 1.6GB\nand growing. Currently there are jobs that query from this table every\nminute.\n\nThanks again\nMike\n\n\n\n\n\n\nOn Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner\n<[email protected]>wrote:\n\n> Mike Broers <[email protected]> wrote:\n>\n> > Hello performance, I need help explaining the performance of a\n> > particular query\n>\n> You provided some of the information needed, but you should review\n> this page and post a bit more:\n>\n> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>\n> In particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\n> Also, showing all overrides in your postgresql.conf file is\n> important, and some information about your hardware. How big is the\n> active portion of your database (the frequently read portion)?\n>\n> > Why does the smaller limit cause it to skip the index?\n>\n> Because the optimizer thinks the query will return rows sooner that\n> way.\n>\n> > Is there a way to help the planner choose the better plan?\n>\n> You might get there by adjusting your memory settings and/or costing\n> settings, but we need to see more information to know that.\n>\n> -Kevin\n>\n\nThanks for the assistance.  
Here is an explain analyze of the query with the problem limit:production=# explain analyze select * from landing_page.messages where ((messages.topic = E'x') AND (messages.processed = 'f'))  ORDER BY messages.created_at ASC limit 10;\n                                                                                QUERY PLAN                                                                                --------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------ Limit  (cost=0.00..2891.06 rows=10 width=1340) (actual time=207922.586..207922.586 rows=0 loops=1)   ->  Index Scan using idx_landing_page_messages_created_at on messages  (cost=0.00..449560.48 rows=1555 widt\nh=1340) (actual time=207922.581..207922.581 rows=0 loops=1)         Filter: ((NOT processed) AND ((topic)::text = 'x'::text)) Total runtime: 207949.413 ms(4 rows)\nand an explain analyze with a higher limit that hits the index:production=# explain analyze select * from landing_page.messages where ((messages.topic = E'x') AND (messages.processed = 'f'))  ORDER BY messages.created_at ASC limit 25;\n                                                                      QUERY PLAN                                                                       --------------------------------------------------------------------------------------------------------------\n----------------------------------------- Limit  (cost=5885.47..5885.54 rows=25 width=1340) (actual time=80.931..80.931 rows=0 loops=1)   ->  Sort  (cost=5885.47..5889.36 rows=1555 width=1340) (actual time=80.926..80.926 rows=0 loops=1)\n         Sort Key: created_at         Sort Method:  quicksort  Memory: 17kB         ->  Bitmap Heap Scan on messages  (cost=60.45..5841.59 rows=1555 width=1340) (actual time=64.404..64.\n404 rows=0 loops=1)               Recheck Cond: ((topic)::text = 'x'::text)               Filter: (NOT processed)               ->  Bitmap Index Scan on idx_messages_topic_processed  (cost=0.00..60.06 rows=1550 width=0) (ac\ntual time=56.207..56.207 rows=0 loops=1)                     Index Cond: (((topic)::text = 'x'::text) AND (processed = false)) Total runtime: 88.051 ms(10 rows)\noverrides in postgresql.confshared_buffers = 256MBwork_mem = 8MBmax_fsm_pages = 2000000max_fsm_relations = 2000\ncheckpoint_segments = 10archive_mode = onrandom_page_cost = 3.0effective_cache_size = 6GBdefault_statistics_target = 250logging_collector = on\nForgot to mention this is Postgres 8.3.8 with 6GB memory on the server.When you ask how big is the active portion of the database I am not sure how to answer.  The whole database server is about 140GB, but there are other applications that use this database, this particular table is about 1.6GB and growing.  Currently there are jobs that query from this table every minute.\nThanks againMikeOn Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner <[email protected]> wrote:\nMike Broers <[email protected]> wrote:\n\n> Hello performance, I need help explaining the performance of a\n> particular query\n\nYou provided some of the information needed, but you should review\nthis page and post a bit more:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nIn particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\nAlso, showing all overrides in your postgresql.conf file is\nimportant, and some information about your hardware.  
How big is the\nactive portion of your database (the frequently read portion)?\n\n> Why does the smaller limit cause it to skip the index?\n\nBecause the optimizer thinks the query will return rows sooner that\nway.\n\n> Is there a way to help the planner choose the better plan?\n\nYou might get there by adjusting your memory settings and/or costing\nsettings, but we need to see more information to know that.\n\n-Kevin", "msg_date": "Thu, 6 Jan 2011 15:36:00 -0600", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan question - query with order by and limit not\n\tchoosing index depends on size of limit, table" }, { "msg_contents": "Try\norder by created_at+0\n\nOn 1/6/11, Mike Broers <[email protected]> wrote:\n> Thanks for the assistance.\n>\n> Here is an explain analyze of the query with the problem limit:\n>\n> production=# explain analyze select * from landing_page.messages where\n> ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\n> messages.created_at ASC limit 10;\n>\n>\n> QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------\n> ------------------------------------------------------------\n> Limit (cost=0.00..2891.06 rows=10 width=1340) (actual\n> time=207922.586..207922.586 rows=0 loops=1)\n> -> Index Scan using idx_landing_page_messages_created_at on messages\n> (cost=0.00..449560.48 rows=1555 widt\n> h=1340) (actual time=207922.581..207922.581 rows=0 loops=1)\n> Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n> Total runtime: 207949.413 ms\n> (4 rows)\n>\n>\n> and an explain analyze with a higher limit that hits the index:\n>\n>\n> production=# explain analyze select * from landing_page.messages where\n> ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\n> messages.created_at ASC limit 25;\n> QUERY\n> PLAN\n>\n> --------------------------------------------------------------------------------------------------------------\n> -----------------------------------------\n> Limit (cost=5885.47..5885.54 rows=25 width=1340) (actual\n> time=80.931..80.931 rows=0 loops=1)\n> -> Sort (cost=5885.47..5889.36 rows=1555 width=1340) (actual\n> time=80.926..80.926 rows=0 loops=1)\n> Sort Key: created_at\n> Sort Method: quicksort Memory: 17kB\n> -> Bitmap Heap Scan on messages (cost=60.45..5841.59 rows=1555\n> width=1340) (actual time=64.404..64.\n> 404 rows=0 loops=1)\n> Recheck Cond: ((topic)::text = 'x'::text)\n> Filter: (NOT processed)\n> -> Bitmap Index Scan on idx_messages_topic_processed\n> (cost=0.00..60.06 rows=1550 width=0) (ac\n> tual time=56.207..56.207 rows=0 loops=1)\n> Index Cond: (((topic)::text = 'x'::text) AND (p\n> rocessed = false))\n> Total runtime: 88.051 ms\n> (10 rows)\n>\n>\n> overrides in postgresql.conf\n>\n> shared_buffers = 256MB\n> work_mem = 8MB\n> max_fsm_pages = 2000000\n> max_fsm_relations = 2000\n> checkpoint_segments = 10\n> archive_mode = on\n> random_page_cost = 3.0\n> effective_cache_size = 6GB\n> default_statistics_target = 250\n> logging_collector = on\n>\n>\n> Forgot to mention this is Postgres 8.3.8 with 6GB memory on the server.\n>\n> When you ask how big is the active portion of the database I am not sure how\n> to answer. The whole database server is about 140GB, but there are other\n> applications that use this database, this particular table is about 1.6GB\n> and growing. 
Currently there are jobs that query from this table every\n> minute.\n>\n> Thanks again\n> Mike\n>\n>\n>\n>\n>\n>\n> On Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner\n> <[email protected]>wrote:\n>\n>> Mike Broers <[email protected]> wrote:\n>>\n>> > Hello performance, I need help explaining the performance of a\n>> > particular query\n>>\n>> You provided some of the information needed, but you should review\n>> this page and post a bit more:\n>>\n>> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>>\n>> In particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\n>> Also, showing all overrides in your postgresql.conf file is\n>> important, and some information about your hardware. How big is the\n>> active portion of your database (the frequently read portion)?\n>>\n>> > Why does the smaller limit cause it to skip the index?\n>>\n>> Because the optimizer thinks the query will return rows sooner that\n>> way.\n>>\n>> > Is there a way to help the planner choose the better plan?\n>>\n>> You might get there by adjusting your memory settings and/or costing\n>> settings, but we need to see more information to know that.\n>>\n>> -Kevin\n>>\n>\n\n-- \nSent from my mobile device\n\n------------\npasman\n", "msg_date": "Fri, 7 Jan 2011 15:00:22 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan question - query with order by and limit not\n\tchoosing index depends on size of limit, table" }, { "msg_contents": "Thanks for the suggestion,\n\ncreated_at is a timestamp without time zone type column. When I add +0 to\ncreated at I get a cast error. I am able to get the query to use the\ndesired index when increasing or removing the limit, and I am still looking\nfor the reason why that is happening. Any advice or more information I can\nsupply please let me know.\n\n\nERROR: operator does not exist: timestamp without time zone + integer\nLINE 1: ...es.processed = 'f')) ORDER BY messages.created_at+0 ASC lim...\n ^\nHINT: No operator matches the given name and argument type(s). 
You might\nneed to add explicit type casts.\n\n\n\n\nFrom: \"pasman pasmański\" <[email protected]>\nTo: [email protected]\nDate: Fri, 7 Jan 2011 15:00:22 +0100\nSubject: Re: plan question - query with order by and limit not choosing\nindex depends on size of limit, table\nTry\norder by created_at+0\n\nOn Thu, Jan 6, 2011 at 3:36 PM, Mike Broers <[email protected]> wrote:\n\n> Thanks for the assistance.\n>\n> Here is an explain analyze of the query with the problem limit:\n>\n> production=# explain analyze select * from landing_page.messages where\n> ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\n> messages.created_at ASC limit 10;\n>\n>\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n> ------------------------------------------------------------\n> Limit (cost=0.00..2891.06 rows=10 width=1340) (actual\n> time=207922.586..207922.586 rows=0 loops=1)\n> -> Index Scan using idx_landing_page_messages_created_at on messages\n> (cost=0.00..449560.48 rows=1555 widt\n> h=1340) (actual time=207922.581..207922.581 rows=0 loops=1)\n> Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n> Total runtime: 207949.413 ms\n> (4 rows)\n>\n>\n> and an explain analyze with a higher limit that hits the index:\n>\n>\n> production=# explain analyze select * from landing_page.messages where\n> ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\n> messages.created_at ASC limit 25;\n> QUERY\n> PLAN\n>\n>\n> --------------------------------------------------------------------------------------------------------------\n> -----------------------------------------\n> Limit (cost=5885.47..5885.54 rows=25 width=1340) (actual\n> time=80.931..80.931 rows=0 loops=1)\n> -> Sort (cost=5885.47..5889.36 rows=1555 width=1340) (actual\n> time=80.926..80.926 rows=0 loops=1)\n> Sort Key: created_at\n> Sort Method: quicksort Memory: 17kB\n> -> Bitmap Heap Scan on messages (cost=60.45..5841.59 rows=1555\n> width=1340) (actual time=64.404..64.\n> 404 rows=0 loops=1)\n> Recheck Cond: ((topic)::text = 'x'::text)\n> Filter: (NOT processed)\n> -> Bitmap Index Scan on idx_messages_topic_processed\n> (cost=0.00..60.06 rows=1550 width=0) (ac\n> tual time=56.207..56.207 rows=0 loops=1)\n> Index Cond: (((topic)::text = 'x'::text) AND (p\n> rocessed = false))\n> Total runtime: 88.051 ms\n> (10 rows)\n>\n>\n> overrides in postgresql.conf\n>\n> shared_buffers = 256MB\n> work_mem = 8MB\n> max_fsm_pages = 2000000\n> max_fsm_relations = 2000\n> checkpoint_segments = 10\n> archive_mode = on\n> random_page_cost = 3.0\n> effective_cache_size = 6GB\n> default_statistics_target = 250\n> logging_collector = on\n>\n>\n> Forgot to mention this is Postgres 8.3.8 with 6GB memory on the server.\n>\n> When you ask how big is the active portion of the database I am not sure\n> how to answer. The whole database server is about 140GB, but there are\n> other applications that use this database, this particular table is about\n> 1.6GB and growing. 
Currently there are jobs that query from this table\n> every minute.\n>\n> Thanks again\n> Mike\n>\n>\n>\n>\n>\n>\n> On Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Mike Broers <[email protected]> wrote:\n>>\n>> > Hello performance, I need help explaining the performance of a\n>> > particular query\n>>\n>> You provided some of the information needed, but you should review\n>> this page and post a bit more:\n>>\n>> http://wiki.postgresql.org/wiki/SlowQueryQuestions\n>>\n>> In particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\n>> Also, showing all overrides in your postgresql.conf file is\n>> important, and some information about your hardware. How big is the\n>> active portion of your database (the frequently read portion)?\n>>\n>> > Why does the smaller limit cause it to skip the index?\n>>\n>> Because the optimizer thinks the query will return rows sooner that\n>> way.\n>>\n>> > Is there a way to help the planner choose the better plan?\n>>\n>> You might get there by adjusting your memory settings and/or costing\n>> settings, but we need to see more information to know that.\n>>\n>> -Kevin\n>>\n>\n>\n\nThanks for the suggestion, \ncreated_at is a timestamp without time zone type column.  When I add +0 to created at I get a cast error.  I am able to get the query to use the desired index when increasing or removing the limit, and I am still looking for the reason why that is happening.  Any advice or more information I can supply please let me know.\n\nERROR:  operator does not exist: timestamp without time zone + integerLINE 1: ...es.processed = 'f'))  ORDER BY messages.created_at+0 ASC lim...                                                             ^\nHINT:  No operator matches the given name and argument type(s). You might need to add explicit type casts.\nFrom: \"pasman pasmański\" <[email protected]>\nTo: [email protected]\nDate: Fri, 7 Jan 2011 15:00:22 +0100Subject: Re: plan question - query with order by and limit not choosing index depends on size of limit, table\nTryorder by created_at+0\nOn Thu, Jan 6, 2011 at 3:36 PM, Mike Broers <[email protected]> wrote:\nThanks for the assistance.  
Here is an explain analyze of the query with the problem limit:production=# explain analyze select * from landing_page.messages where ((messages.topic = E'x') AND (messages.processed = 'f'))  ORDER BY messages.created_at ASC limit 10;\n                                                                                QUERY PLAN                                                                                --------------------------------------------------------------------------------------------------------------\n------------------------------------------------------------ Limit  (cost=0.00..2891.06 rows=10 width=1340) (actual time=207922.586..207922.586 rows=0 loops=1)   ->  Index Scan using idx_landing_page_messages_created_at on messages  (cost=0.00..449560.48 rows=1555 widt\nh=1340) (actual time=207922.581..207922.581 rows=0 loops=1)         Filter: ((NOT processed) AND ((topic)::text = 'x'::text)) Total runtime: 207949.413 ms\n(4 rows)\nand an explain analyze with a higher limit that hits the index:production=# explain analyze select * from landing_page.messages where ((messages.topic = E'x') AND (messages.processed = 'f'))  ORDER BY messages.created_at ASC limit 25;\n                                                                      QUERY PLAN                                                                       --------------------------------------------------------------------------------------------------------------\n----------------------------------------- Limit  (cost=5885.47..5885.54 rows=25 width=1340) (actual time=80.931..80.931 rows=0 loops=1)   ->  Sort  (cost=5885.47..5889.36 rows=1555 width=1340) (actual time=80.926..80.926 rows=0 loops=1)\n         Sort Key: created_at         Sort Method:  quicksort  Memory: 17kB         ->  Bitmap Heap Scan on messages  (cost=60.45..5841.59 rows=1555 width=1340) (actual time=64.404..64.\n404 rows=0 loops=1)               Recheck Cond: ((topic)::text = 'x'::text)               Filter: (NOT processed)               ->  Bitmap Index Scan on idx_messages_topic_processed  (cost=0.00..60.06 rows=1550 width=0) (ac\ntual time=56.207..56.207 rows=0 loops=1)                     Index Cond: (((topic)::text = 'x'::text) AND (processed = false)) Total runtime: 88.051 ms\n(10 rows)\noverrides in postgresql.confshared_buffers = 256MBwork_mem = 8MBmax_fsm_pages = 2000000max_fsm_relations = 2000\n\ncheckpoint_segments = 10archive_mode = onrandom_page_cost = 3.0effective_cache_size = 6GBdefault_statistics_target = 250logging_collector = on\n\nForgot to mention this is Postgres 8.3.8 with 6GB memory on the server.When you ask how big is the active portion of the database I am not sure how to answer.  The whole database server is about 140GB, but there are other applications that use this database, this particular table is about 1.6GB and growing.  Currently there are jobs that query from this table every minute.\nThanks againMikeOn Wed, Jan 5, 2011 at 5:10 PM, Kevin Grittner <[email protected]> wrote:\nMike Broers <[email protected]> wrote:\n\n> Hello performance, I need help explaining the performance of a\n> particular query\n\nYou provided some of the information needed, but you should review\nthis page and post a bit more:\n\nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n\nIn particular, post the result of EXPLAIN ANALYZE, not just EXPLAIN.\nAlso, showing all overrides in your postgresql.conf file is\nimportant, and some information about your hardware.  
How big is the\nactive portion of your database (the frequently read portion)?\n\n> Why does the smaller limit cause it to skip the index?\n\nBecause the optimizer thinks the query will return rows sooner that\nway.\n\n> Is there a way to help the planner choose the better plan?\n\nYou might get there by adjusting your memory settings and/or costing\nsettings, but we need to see more information to know that.\n\n-Kevin", "msg_date": "Mon, 10 Jan 2011 10:21:10 -0600", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan question - query with order by and limit not\n\tchoosing index depends on size of limit, table" }, { "msg_contents": "Thanks Robert, this is what I was looking for. I will try these suggestions\nand follow up if any of them are the silver bullet.\n\nOn Fri, Jan 14, 2011 at 7:11 AM, Robert Haas wrote:\n\n> On Thu, Jan 6, 2011 at 4:36 PM, Mike Broers <[email protected]> wrote:\n> > Thanks for the assistance.\n> > Here is an explain analyze of the query with the problem limit:\n> > production=# explain analyze select * from landing_page.messages where\n> > ((messages.topic = E'x') AND (messages.processed = 'f')) ORDER BY\n> > messages.created_at ASC limit 10;\n> >\n> > QUERY PLAN\n> >\n> >\n> --------------------------------------------------------------------------------------------------------------\n> > ------------------------------------------------------------\n> > Limit (cost=0.00..2891.06 rows=10 width=1340) (actual\n> > time=207922.586..207922.586 rows=0 loops=1)\n> > -> Index Scan using idx_landing_page_messages_created_at on messages\n> > (cost=0.00..449560.48 rows=1555 widt\n> > h=1340) (actual time=207922.581..207922.581 rows=0 loops=1)\n> > Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n> > Total runtime: 207949.413 ms\n> > (4 rows)\n>\n> You're not the first person to have been bitten by this. The\n> optimizer thinks that rows WHERE NOT processed and topic = 'x' are\n> reasonably common, so it figures that it can just index scan until it\n> finds 10 of them. But when it turns out that there are none at all,\n> it ends up having to scan the entire index, which stinks big-time.\n>\n> The alternative plan is to use a different index to find ALL the\n> relevant rows, sort them, and then take the top 10. That would suck\n> if there actually were tons of rows like this, but there aren't.\n>\n> So the root of the problem, in some sense, is that the planner's\n> estimate of the selectivity of \"NOT processed and topic = 'x'\" is not\n> very good. Some things to try:\n>\n> - increase the statistics target for the \"processed\" and \"topic\"\n> columns even higher\n> - put the processed rows in one table and the not processed rows in\n> another table\n> - do something like SELECT * FROM (SELECT .. LIMIT 200 OFFSET 0) LIMIT\n> 10 to try to fool the planner into planning based on the higher, inner\n> limit\n> - create a partial index on messages (topic) WHERE NOT processed and\n> see if the planner will use it\n>\n> ...Robert\n>\n\nThanks Robert, this is what I was looking for.  
I will try these suggestions and follow up if any of them are the silver bullet.On Fri, Jan 14, 2011 at 7:11 AM, Robert Haas wrote:\nOn Thu, Jan 6, 2011 at 4:36 PM, Mike Broers <[email protected]> wrote:\n> Thanks for the assistance.\n> Here is an explain analyze of the query with the problem limit:\n> production=# explain analyze select * from landing_page.messages where\n> ((messages.topic = E'x') AND (messages.processed = 'f'))  ORDER BY\n> messages.created_at ASC limit 10;\n>\n>    QUERY PLAN\n>\n> --------------------------------------------------------------------------------------------------------------\n> ------------------------------------------------------------\n>  Limit  (cost=0.00..2891.06 rows=10 width=1340) (actual\n> time=207922.586..207922.586 rows=0 loops=1)\n>    ->  Index Scan using idx_landing_page_messages_created_at on messages\n>  (cost=0.00..449560.48 rows=1555 widt\n> h=1340) (actual time=207922.581..207922.581 rows=0 loops=1)\n>          Filter: ((NOT processed) AND ((topic)::text = 'x'::text))\n>  Total runtime: 207949.413 ms\n> (4 rows)\n\nYou're not the first person to have been bitten by this.  The\noptimizer thinks that rows WHERE NOT processed and topic = 'x' are\nreasonably common, so it figures that it can just index scan until it\nfinds 10 of them.  But when it turns out that there are none at all,\nit ends up having to scan the entire index, which stinks big-time.\n\nThe alternative plan is to use a different index to find ALL the\nrelevant rows, sort them, and then take the top 10.   That would suck\nif there actually were tons of rows like this, but there aren't.\n\nSo the root of the problem, in some sense, is that the planner's\nestimate of the selectivity of \"NOT processed and topic = 'x'\" is not\nvery good.  Some things to try:\n\n- increase the statistics target for the \"processed\" and \"topic\"\ncolumns even higher\n- put the processed rows in one table and the not processed rows in\nanother table\n- do something like SELECT * FROM (SELECT .. LIMIT 200 OFFSET 0) LIMIT\n10 to try to fool the planner into planning based on the higher, inner\nlimit\n- create a partial index on messages (topic) WHERE NOT processed and\nsee if the planner will use it\n\n...Robert", "msg_date": "Fri, 14 Jan 2011 10:36:43 -0600", "msg_from": "Mike Broers <[email protected]>", "msg_from_op": true, "msg_subject": "Re: plan question - query with order by and limit not\n\tchoosing index depends on size of limit, table" }, { "msg_contents": "On Fri, Jan 14, 2011 at 11:36 AM, Mike Broers <[email protected]> wrote:\n> Thanks Robert, this is what I was looking for.  I will try these suggestions\n> and follow up if any of them are the silver bullet.\n\nNo problem - and sorry for the off-list reply. I was a little sleepy\nwhen I wrote that; thanks for getting it back on-list.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 14 Jan 2011 13:03:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: plan question - query with order by and limit not\n\tchoosing index depends on size of limit, table" } ]
[ { "msg_contents": "Hi all,\n\nI'm using postgres 8.4.2 on a Ubuntu Linux machine.\n\nI have several tables, one of which is named Document, which of course\nrepresents information I need about my documents. I also have another\ntable, similar to the first one, called Doc2. The schema of both tables is\nthe following:\n\nCREATE TABLE \"Document\"\n(\ndocid integer NOT NULL DEFAULT nextval('doc_id_seq'::regclass),\nhwdocid character varying(511) NOT NULL,\npubdate bigint,\nfinished boolean DEFAULT false,\n\"location\" character varying(200),\ntitle tsvector,\ndescription tsvector,\n\"content\" text,\nCONSTRAINT pk_docid PRIMARY KEY (docid),\nCONSTRAINT hwdocid_uniq UNIQUE (hwdocid)\n)\nWITH (\nOIDS=FALSE\n);\n\nThe hwdocid in this occasion is no longer than 12 characters. The reason for\nbeing 511 max, is because the same schema is used by other applications.\n\nWhat i wish to do is dump contents from Doc2 to Document, provided that\nthe hwdocid from Doc2 is not present in Document (as the entries will be\nsimilar). Doc2 contains ~100000 rows while Document contains ~1000000.\n\nNow, I wrote a simple query to do this, which is the following:\n\nINSERT INTO \"Document\" ( hwdocid, pubdate, finished, \"location\", title,\ndescription, \"content\" )\nSELECT hwdocid, pubdate, finished, \"location\", title, description, \"content\"\nFROM \"Doc2\" d2\nWHERE d2.hwdocid NOT IN (\n SELECT d.hwdocid\n FROM \"Document\" d\n)\n\nAfter running for about half an hour in pgadmin3, I stopped the execution,\nsince I saw that\nwhat I was doing was pretty dumb, as with every insert the Document would\nincrease (and I\nknow beforehand that data from Doc2 contain unique hwdocid values). At first\nI thought that each\nINSERT creates a new transaction, which is why it was taking so long. So I\nthough I should do\nsomething else..\n\nSo, I though that I should dump the documents I want to a temp table and\nthen simply insert them in\nthe Document table. Before that, I wanted to see however, how many documents\nI was trying to\ninsert (as an indication of why it took so long). So I simply did the select\npart for those documents.\n\nSELECT *\nFROM \"Doc2\" d2\nWHERE d2.hwdocid NOT IN (\n SELECT d.hwdocid\n FROM \"Document\" d\n)\n\nI submitted the query again and let it run. After running for 5 hours, I\nstopped the query and submitted\nthe \"explain query\". After running for ~10 minutes, I also stopped the query\nexplanation phase. So I\nre-wrote the query as:\n\nSELECT hwdocid, pubdate, finished, \"location\", title, description, \"content\"\nFROM \"Doc2\" d2\nWHERE NOT EXISTS (\n SELECT d.hwdocid\n FROM \"Document\" d\n WHERE d.hwdocid = d2.hwdocid\n)\n\nand asked for the explanation, which was:\n\nHash Anti Join (cost=72484.24..90988.89 rows=1 width=317) (actual\ntime=3815.471..9063.184 rows=63836 loops=1)\n Hash Cond: ((d2.hwdocid)::text = (d.hwdocid)::text)\n -> Seq Scan on \"Doc2\" d2 (cost=0.00..5142.54 rows=96454 width=317) (actual\ntime=0.016..186.781 rows=96454 loops=1)\n -> Hash (cost=56435.22..56435.22 rows=949922 width=12) (actual\ntime=3814.968..3814.968 rows=948336 loops=1)\n -> Seq Scan on \"Document\" d (cost=0.00..56435.22 rows=949922 width=12)\n(actual time=0.008..1926.191 rows=948336 loops=1)\nTotal runtime: 9159.050 ms\n\nI then submitted it normally and got a result back in ~5-6 seconds.\n\nSo my questions are:\n\n1) Why is it taking *so* long for the first query (with the \"NOT IN\" ) to do\neven the simple select?\n\n2) The result between the two queries should be the same. 
Since I am not\neven returned an explanation, could someone\nmake a (wild) guess on what is the \"NOT IN\" statement doing (trying to do)\nthat is taking so long?\n\n3) My intuition would be that, since there exists a unique constraint on\nhwdocid, which implies the existence of an index,\nthis index would be used. Isn't that so? I mean, since it is a unique field,\nshouldn't it just do a sequential scan on Doc2\nand then simply query the index if the value exists? What am I getting\nwrong?\n\nThank you very much in advance!\n\nRegards,\nGeorge Valkanas\n\nHi all,I'm using postgres 8.4.2 on a Ubuntu Linux machine.I have several tables, one of which is named Document, which of courserepresents information I need about my documents. I also have another\ntable, similar to the first one, called Doc2. The schema of both tables isthe following:CREATE TABLE \"Document\"( docid integer NOT NULL DEFAULT nextval('doc_id_seq'::regclass),\n hwdocid character varying(511) NOT NULL, pubdate bigint, finished boolean DEFAULT false, \"location\" character varying(200), title tsvector, description tsvector, \"content\" text,\n CONSTRAINT pk_docid PRIMARY KEY (docid), CONSTRAINT hwdocid_uniq UNIQUE (hwdocid))WITH ( OIDS=FALSE);The hwdocid in this occasion is no longer than 12 characters. The reason for\nbeing 511 max, is because the same schema is used by other applications.What i wish to do is dump contents from Doc2 to Document, provided thatthe hwdocid from Doc2 is not present in Document (as the entries will be\nsimilar). Doc2 contains ~100000 rows while Document contains  ~1000000.Now, I wrote a simple query to do this, which is the following:INSERT INTO \"Document\" ( hwdocid, pubdate, finished, \"location\", title, description, \"content\" ) \nSELECT hwdocid, pubdate, finished, \"location\", title, description, \"content\"FROM \"Doc2\" d2WHERE d2.hwdocid NOT IN (     SELECT d.hwdocid      FROM \"Document\" d)\nAfter running for about half an hour in pgadmin3, I stopped the execution, since I saw thatwhat I was doing was pretty dumb, as with every insert the Document would increase (and Iknow beforehand that data from Doc2 contain unique hwdocid values). At first I thought that each\nINSERT creates a new transaction, which is why it was taking so long. So I though I should dosomething else..So, I though that I should dump the documents I want to a temp table and then simply insert them in\nthe Document table. Before that, I wanted to see however, how many documents I was trying toinsert (as an indication of why it took so long). So I simply did the select part for those documents.\nSELECT *FROM \"Doc2\" d2WHERE d2.hwdocid NOT IN (     SELECT d.hwdocid      FROM \"Document\" d)I submitted the query again and let it run. After running for 5 hours, I stopped the query and submitted\nthe \"explain query\". After running for ~10 minutes, I also stopped the query explanation phase. 
So Ire-wrote the query as:SELECT hwdocid, pubdate, finished, \"location\", title, description, \"content\"\nFROM \"Doc2\" d2WHERE NOT EXISTS (     SELECT d.hwdocid      FROM \"Document\" d     WHERE d.hwdocid = d2.hwdocid)and asked for the explanation, which was: \nHash Anti Join (cost=72484.24..90988.89 rows=1 width=317) (actual time=3815.471..9063.184 rows=63836 loops=1) Hash Cond: ((d2.hwdocid)::text = (d.hwdocid)::text) -> Seq Scan on \"Doc2\" d2 (cost=0.00..5142.54 rows=96454 width=317) (actual time=0.016..186.781 rows=96454 loops=1)\n -> Hash (cost=56435.22..56435.22 rows=949922 width=12) (actual time=3814.968..3814.968 rows=948336 loops=1) -> Seq Scan on \"Document\" d (cost=0.00..56435.22 rows=949922 width=12) (actual time=0.008..1926.191 rows=948336 loops=1)\nTotal runtime: 9159.050 msI then submitted it normally and got a result back in ~5-6 seconds.So my questions are:1) Why is it taking *so* long for the first query (with the \"NOT IN\" ) to do even the simple select?\n2) The result between the two queries should be the same. Since I am not even returned an explanation, could someonemake a (wild) guess on what is the \"NOT IN\" statement doing (trying to do) that is taking so long?\n3) My intuition would be that, since there exists a unique constraint on hwdocid, which implies the existence of an index,this index would be used. Isn't that so? I mean, since it is a unique field, shouldn't it just do a sequential scan on Doc2\nand then simply query the index if the value exists? What am I getting wrong?Thank you very much in advance!Regards,George Valkanas", "msg_date": "Fri, 7 Jan 2011 04:36:52 +0200", "msg_from": "=?ISO-8859-7?B?w+n58ePv8iDC4evq4e3h8g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "\"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "On 1/6/2011 9:36 PM, οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ wrote:\n>\n> 1) Why is it taking *so* long for the first query (with the \"NOT IN\" ) \n> to do even the simple select?\nBecause NOT IN has to execute the correlated subquery for every row and \nthen check whether the requested value is in the result set, usually by \ndoing sequential comparison. The NOT EXIST plan is also bad because \nthere is no index but at least it can use very fast and efficient hash \nalgorithm. Indexing the \"hwdocid\" column on the \"Document\" table or, \nideally, making it a primary key, should provide an additional boost to \nyour query. If you already do have an index, you may consider using \nenable_seqscan=false for this session, so that the \"hwdocid\" index will \nbe used. It's a common wisdom that in the most cases NOT EXISTS will \nbeat NOT IN. That is so all over the database world. I've seen that in \nOracle applications, MS SQL applications and, of course MySQL \napplications. Optimizing queries is far from trivial.\n\nοΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 06 Jan 2011 23:25:45 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. 
WHERE NOT IN\" query running for hours" }, { "msg_contents": "Fair enough!\n\nI also turned seqscan off, so the new plan (for the NOT EXISTS) is:\n\nMerge Anti Join (cost=0.00..212686.89 rows=1 width=313) (actual\ntime=0.426..14921.344 rows=63836 loops=1)\n Merge Cond: ((d2.hwdocid)::text = (d.hwdocid)::text)\n -> Index Scan using hwdocid2_uniq on \"Doc2\" d2 (cost=0.00..19442.87\nrows=96454 width=313) (actual time=0.130..1248.783 rows=96454 loops=1)\n -> Index Scan using hwdocid_uniq on \"Document\" d (cost=0.00..189665.17\nrows=949272 width=12) (actual time=0.085..11158.740 rows=948336 loops=1)\nTotal runtime: 15062.925 ms\n\nHmm.. doesn't really seem to be such a great boost on performance. But i\nguess I'll be sticking to this one.\n\nSo my follow-up question on the subject is this:\n\nAre there any particular semantics for the \"NOT IN\" statement that cause the\ncorrelated query to execute for every row of the outter query, as opposed to\nthe \"NOT EXISTS\" ? Or are there any other practical reasons, related to \"IN\n/ NOT IN\", for this to be happening? Or is it simply due to implementation\ndetails of each RDBMS? I guess the former (or the 2nd one), since, as you\nsay, this is common in most databases, but I would most appreciate an answer\nto clarify this.\n\nThanks again!\n\nBest regards,\nGeorge\n\n\n\n2011/1/7 Mladen Gogala <[email protected]>\n\n> On 1/6/2011 9:36 PM, Γιωργος Βαλκανας wrote:\n>\n>>\n>> 1) Why is it taking *so* long for the first query (with the \"NOT IN\" ) to\n>> do even the simple select?\n>>\n> Because NOT IN has to execute the correlated subquery for every row and\n> then check whether the requested value is in the result set, usually by\n> doing sequential comparison. The NOT EXIST plan is also bad because there is\n> no index but at least it can use very fast and efficient hash algorithm.\n> Indexing the \"hwdocid\" column on the \"Document\" table or, ideally, making it\n> a primary key, should provide an additional boost to your query. If you\n> already do have an index, you may consider using enable_seqscan=false for\n> this session, so that the \"hwdocid\" index will be used. It's a common wisdom\n> that in the most cases NOT EXISTS will beat NOT IN. That is so all over the\n> database world. I've seen that in Oracle applications, MS SQL applications\n> and, of course MySQL applications. Optimizing queries is far from trivial.\n>\n> Μλαδεν Γογαλα\n>\n> --\n> Mladen Gogala\n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n>\n>\n\nFair enough!I also turned seqscan off, so the new plan (for the NOT EXISTS) is:Merge Anti Join (cost=0.00..212686.89 rows=1 width=313) (actual time=0.426..14921.344 rows=63836 loops=1)\n Merge Cond: ((d2.hwdocid)::text = (d.hwdocid)::text) -> Index Scan using hwdocid2_uniq on \"Doc2\" d2 (cost=0.00..19442.87 rows=96454 width=313) (actual time=0.130..1248.783 rows=96454 loops=1) -> Index Scan using hwdocid_uniq on \"Document\" d (cost=0.00..189665.17 rows=949272 width=12) (actual time=0.085..11158.740 rows=948336 loops=1)\nTotal runtime: 15062.925 msHmm.. doesn't really seem to be such a great boost on performance. But i guess I'll be sticking to this one.So my follow-up question on the subject is this:\nAre there any particular semantics for the \"NOT IN\" statement that cause the correlated query to execute for every row of the outter query, as opposed to the \"NOT EXISTS\" ? Or are there any other practical reasons, related to \"IN / NOT IN\", for this to be happening? 
Or is it simply due to implementation details of each RDBMS? I guess the former (or the 2nd one), since, as you say, this is common in most databases, but I would most appreciate an answer to clarify this.\nThanks again!Best regards,George2011/1/7 Mladen Gogala <[email protected]>\nOn 1/6/2011 9:36 PM, Γιωργος Βαλκανας wrote:\n\n\n1) Why is it taking *so* long for the first query (with the \"NOT IN\" ) to do even the simple select?\n\nBecause NOT IN has to execute the correlated subquery for every row and then check whether the requested value is in the result set, usually by doing sequential comparison. The NOT EXIST plan is also bad because there is no index but at least it can use very fast and efficient hash algorithm. Indexing the \"hwdocid\" column on the \"Document\" table or, ideally, making it a primary key, should provide an additional boost to your query. If you already do have an index, you may consider using enable_seqscan=false for this session, so that the \"hwdocid\" index will be used. It's a common wisdom that in the most cases NOT EXISTS will beat NOT IN. That is so all over the database world. I've seen that in Oracle applications, MS SQL applications and, of course MySQL applications. Optimizing queries is far from trivial.\n\nΜλαδεν Γογαλα\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com", "msg_date": "Fri, 7 Jan 2011 11:29:32 +0200", "msg_from": "=?ISO-8859-7?B?w+n58ePv8iDC4evq4e3h8g==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½οΏ½ wrote:\n>\n> Are there any particular semantics for the \"NOT IN\" statement that \n> cause the correlated query to execute for every row of the outter \n> query, as opposed to the \"NOT EXISTS\" ? Or are there any other \n> practical reasons, related to \"IN / NOT IN\", for this to be happening? \n> Or is it simply due to implementation details of each RDBMS? I guess \n> the former (or the 2nd one), since, as you say, this is common in most \n> databases, but I would most appreciate an answer to clarify this.\n>\n> Thanks again!\n>\n> Best regards,\n> George\n>\n>\n>\n\nWell, I really hoped that Bruce, Robert or Greg would take on this one, \nbut since there are no more qualified takers, I'll take a shot at this \none. For the \"NOT IN (result of a correlated sub-query)\", the sub-query \nneeds to be executed for every row matching the conditions on the \ndriving table, while the !EXISTS is just a complement of join. It's \nall in the basic set theory which serves as a model for the relational \ndatabases.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Mon, 10 Jan 2011 12:28:34 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "\n\nOn 1/7/11 1:29 AM, \"??????? 
????????\" <[email protected]> wrote:\n\n\n>\n>So my follow-up question on the subject is this:\n>\n>Are there any particular semantics for the \"NOT IN\" statement that cause\n>the correlated query to execute for every row of the outter query, as\n>opposed to the \"NOT EXISTS\" ?\n\n\n=> select * from explode_array(ARRAY[1,2,3,4,5]) where explode_array not\nin (0, 1, 2);\n explode_array \n---------------\n 3\n 4\n 5\n\n\n\n=> select * from explode_array(ARRAY[1,2,3,4,5]) where explode_array not\nin (0, 1, 2, null);\n explode_array \n---------------\n(0 rows)\n\n\n\nThe existence of a single NULL in the \"not in\" segment causes no results\nto be returned. Postgres isn't smart enough to analyze whether the\ncontents of the NOT IN() can contain null and choose a more optimal plan,\nand so it always scans ALL rows. Even if the NOT IN() is on a not null\nprimary key. NOT IN is generally dangerous because of this behavior --\nit results from the fact that '1 = null' is null, and 'not null' is equal\nto 'null':\n\n=> select (1 = 1);\n ?column? \n----------\n t\n\n\n\nselect NOT (1 = 1);\n ?column? \n----------\n f\n\n\n=> select (1 = null);\n ?column? \n----------\n \n(1 row)\n\n\n=> select NOT (1 = null);\n ?column? \n----------\n \n(1 row)\n\n\n\n\n\nNOT EXISTS doesn't have this problem, since NOT EXISTS essentially treats\nthe existence of null as false, where NOT IN treats the existence of null\nas true.\n\nrr=> select * from (select * from explode_array(ARRAY[1,2,3,4,5])) foo\nwhere not exists (select 1 where explode_array in (0, 1, 2, null));\n explode_array \n---------------\n 3\n 4\n 5\n(3 rows)\n\n\n\nOften, the best query plans result from 'LEFT JOIN WHERE right side is\nNULL' rather than NOT EXISTS however. I often get performance gains by\nswitching NOT EXISTS queries to LEFT JOIN form. Though sometimes it is\nless performant.\n\n\n\n\n\n", "msg_date": "Mon, 10 Jan 2011 12:24:10 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "Scott Carey <[email protected]> wrote:\n \n> Often, the best query plans result from 'LEFT JOIN WHERE right\n> side is NULL' rather than NOT EXISTS however. I often get\n> performance gains by switching NOT EXISTS queries to LEFT JOIN\n> form.\n \nEven in 8.4 and later? I would think that the anti-join that Tom\nadded in 8.4 would always perform at least as well as the LEFT JOIN\ntechnique you describe.\n \n-Kevin\n", "msg_date": "Mon, 10 Jan 2011 14:37:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "\n\nOn 1/10/11 12:37 PM, \"Kevin Grittner\" <[email protected]> wrote:\n\n>Scott Carey <[email protected]> wrote:\n> \n>> Often, the best query plans result from 'LEFT JOIN WHERE right\n>> side is NULL' rather than NOT EXISTS however. I often get\n>> performance gains by switching NOT EXISTS queries to LEFT JOIN\n>> form.\n> \n>Even in 8.4 and later? I would think that the anti-join that Tom\n>added in 8.4 would always perform at least as well as the LEFT JOIN\n>technique you describe.\n> \n>-Kevin\n\nYes, in 8.4. The query planner definitely does not treat the two as\nequivalent. I don't have a concrete example at hand right now, but I've\nbeen working exclusively on 8.4 since a month after it was released. It\ndoes often use an anti-join for NOT EXISTS, but does not seem to explore\nall avenues there. 
Or perhaps the statistics it has differ for some\nreason at that point. All I know, is that the resulting query plan\ndiffers sometimes and I'd say 3 out of 4 times the LEFT JOIN variant is\nmore optimal when they differ.\n\n>\n\n", "msg_date": "Mon, 10 Jan 2011 13:05:04 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" }, { "msg_contents": "2011/1/10 Mladen Gogala <[email protected]>:\n> Well, I really hoped that Bruce, Robert or Greg would take on this one, but\n> since there are no more qualified takers, I'll take a shot at this one. For\n> the \"NOT IN (result of a correlated sub-query)\", the sub-query needs to be\n> executed for every row matching the conditions on the driving table, while\n> the   !EXISTS is just a complement of join. It's all in the basic set theory\n> which serves as a model for the relational databases.\n\nAs Scott says, the real problem is the NULL handling. The semantics\nare otherwise similar.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 14 Jan 2011 14:01:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"SELECT .. WHERE NOT IN\" query running for hours" } ]
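A minimal sketch of the LEFT JOIN anti-join form Scott Carey mentions above, written against the "Document"/"Doc2" schema given in the first message of the thread; whether it beats the NOT EXISTS plan is workload-dependent, so it is something to compare with EXPLAIN ANALYZE rather than a definitive rewrite:

-- Anti-join via LEFT JOIN ... IS NULL; the IS NULL test is safe here
-- because hwdocid is declared NOT NULL in "Document".
SELECT d2.hwdocid, d2.pubdate, d2.finished, d2."location",
       d2.title, d2.description, d2."content"
FROM "Doc2" d2
LEFT JOIN "Document" d ON d.hwdocid = d2.hwdocid
WHERE d.hwdocid IS NULL;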
[ { "msg_contents": "On 9.0, this configuration\n\ncheckpoint_segments = 512 # in logfile segments, min 1, 16MB each\n\nresults in 1034 segments, so the effective logfile segment size is 32 MB.\n\nThe documentation says this:\n\n Maximum number of log file segments between automatic WAL\n checkpoints (each segment is normally 16 megabytes). The default\n is three segments. Increasing this parameter can increase the\n amount of time needed for crash recovery. This parameter can only\n be set in the postgresql.conf file or on the server command line.\n\nIt would probably make sense to change this to\n\ncheckpoint_segments = 3 # each one effectively needs about 32MB on disk\n\nand:\n\n Number of log file segments between automatic WAL checkpoints. The\n default is three segments. Increasing this parameter can increase\n the amount of time needed for crash recovery. This parameter can\n only be set in the postgresql.conf file or on the server command\n line.\n\n Each segment normally requires 16 megabytes on disk. Segments can\n only be recycled after a checkpoint has completed, so disk space\n is required for twice the number of configured segments, plus some\n reserve.\n\nPerhaps it would also make sense to mention that increasing the\nsegment count decreases WAL traffic, and that changing this value does\nnot have an impact on transaction sizes?\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 07 Jan 2011 12:45:25 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": true, "msg_subject": "Wrong docs on checkpoint_segments?" }, { "msg_contents": "On Friday, January 07, 2011 01:45:25 PM Florian Weimer wrote:\n> On 9.0, this configuration\n> \n> checkpoint_segments = 512 # in logfile segments, min 1, 16MB each\n> \n> results in 1034 segments, so the effective logfile segment size is 32 MB.\nUm. Is it possible that you redefined XLOG_SEG_SIZE or used --with-wal-\nsegsize=SEGSIZE?\n\nThe default is still:\nandres@alap2:~/src/postgresql$ grep XLOG_SEG_SIZE src/include/pg_config.h\n/* XLOG_SEG_SIZE is the size of a single WAL file. This must be a power of 2\n XLOG_BLCKSZ). Changing XLOG_SEG_SIZE requires an initdb. */\n#define XLOG_SEG_SIZE (16 * 1024 * 1024)\n\nAndres\n", "msg_date": "Fri, 7 Jan 2011 14:39:31 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on checkpoint_segments?" }, { "msg_contents": "* Andres Freund:\n\n> On Friday, January 07, 2011 01:45:25 PM Florian Weimer wrote:\n>> On 9.0, this configuration\n>> \n>> checkpoint_segments = 512 # in logfile segments, min 1, 16MB each\n>> \n>> results in 1034 segments, so the effective logfile segment size is 32 MB.\n> Um. Is it possible that you redefined XLOG_SEG_SIZE or used --with-wal-\n> segsize=SEGSIZE?\n\nNo, the individual files are still 16 MB. It's just that the\ncheckpoint_segments limit is not a hard limit, and you end up with\nslightly more than twice the configured number of segments on disk.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 07 Jan 2011 13:45:02 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Wrong docs on checkpoint_segments?" 
}, { "msg_contents": "On Friday, January 07, 2011 02:45:02 PM Florian Weimer wrote:\n> * Andres Freund:\n> > On Friday, January 07, 2011 01:45:25 PM Florian Weimer wrote:\n> >> On 9.0, this configuration\n> >> \n> >> checkpoint_segments = 512 # in logfile segments, min 1, 16MB each\n> >> \n> >> results in 1034 segments, so the effective logfile segment size is 32\n> >> MB.\n> > \n> > Um. Is it possible that you redefined XLOG_SEG_SIZE or used --with-wal-\n> > segsize=SEGSIZE?\n> \n> No, the individual files are still 16 MB. It's just that the\n> checkpoint_segments limit is not a hard limit, and you end up with\n> slightly more than twice the configured number of segments on disk.\nThats documented:\n\"\nThere will always be at least one WAL segment file, and will normally not be \nmore files than the higher of wal_keep_segments or (2 + \ncheckpoint_completion_target) * checkpoint_segments + 1. Each segment file is \nnormally 16 MB (though this size can be altered when building the server). You \ncan use this to estimate space requirements for WAL. Ordinarily, when old log \nsegment files are no longer needed, they are recycled (renamed to become the \nnext segments in the numbered sequence). If, due to a short-term peak of log \noutput rate, there are more than 3 * checkpoint_segments + 1 segment files, the \nunneeded segment files will be deleted instead of recycled until the system \ngets back under this limit. \n\"\n\nAndres\n", "msg_date": "Fri, 7 Jan 2011 14:47:24 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Wrong docs on checkpoint_segments?" } ]
[ { "msg_contents": "At one point I was working on a patch to pgbench to have it adopt 64-bit \nmath internally even when running on 32 bit platforms, which are \ncurrently limited to a dataabase scale of ~4000 before the whole process \ncrashes and burns. But since the range was still plenty high on a \n64-bit system, I stopped working on that. People who are only running \n32 bit servers at this point in time aren't doing anything serious \nanyway, right?\n\nSo what is the upper limit now? The way it degrades when you cross it \namuses me:\n\n$ pgbench -i -s 21475 pgbench\ncreating tables...\nset primary key...\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index \n\"pgbench_branches_pkey\" for table \"pgbench_branches\"\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index \n\"pgbench_tellers_pkey\" for table \"pgbench_tellers\"\nNOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index \n\"pgbench_accounts_pkey\" for table \"pgbench_accounts\"\nvacuum...done.\n$ pgbench -S -t 10 pgbench\nstarting vacuum...end.\nsetrandom: invalid maximum number -2147467296\n\nIt doesn't throw any error during the initialization step, neither via \nclient or database logs, even though it doesn't do anything whatsoever. \nIt just turns into the quickest pgbench init ever. That's the exact \nthreshold, because this works:\n\n$ pgbench -i -s 21474 pgbench\ncreating tables...\n10000 tuples done.\n20000 tuples done.\n30000 tuples done.\n...\n\nSo where we're at now is that the maximum database pgbench can create is \na scale of 21474. That makes approximately a 313GB database. I can \ntell you the size for sure when that init finishes running, which is not \ngoing to be soon. That's not quite as big as I'd like to exercise a \nsystem with 128GB of RAM, the biggest size I run into regularly now, but \nit's close enough for now. This limit will need to finally got pushed \nupward soon though, because 256GB servers are getting cheaper every \nday--and the current pgbench can't make a database big enough to really \nescape cache on one of them.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 07 Jan 2011 20:59:01 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "pgbench to the MAXINT" }, { "msg_contents": "Em 07-01-2011 22:59, Greg Smith escreveu:\n> setrandom: invalid maximum number -2147467296\n>\nIt is failing at atoi() circa pgbench.c:1036. But it just the first one. There \nare some variables and constants that need to be converted to int64 and some \nfunctions that must speak 64-bit such as getrand(). Are you working on a patch?\n\n> It doesn't throw any error during the initialization step, neither via\n> client or database logs, even though it doesn't do anything whatsoever.\n> It just turns into the quickest pgbench init ever. That's the exact\n> threshold, because this works:\n>\nAFAICS that is because atoi() is so fragile.\n\n> So where we're at now is that the maximum database pgbench can create is\n> a scale of 21474.\n>\nThat's because 21475 * 100,000 > INT_MAX. 
We must provide an alternative to \natoi() that deals with 64-bit integers.\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/\n", "msg_date": "Mon, 10 Jan 2011 02:17:14 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench to the MAXINT" }, { "msg_contents": "Euler Taveira de Oliveira wrote:\n> Em 07-01-2011 22:59, Greg Smith escreveu:\n>> setrandom: invalid maximum number -2147467296\n>>\n> It is failing at atoi() circa pgbench.c:1036. But it just the first \n> one. There are some variables and constants that need to be converted \n> to int64 and some functions that must speak 64-bit such as getrand(). \n> Are you working on a patch?\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-01/msg02868.php\nhttp://archives.postgresql.org/message-id/[email protected]\n\nI thought we really needed to fix that before 9.0 shipped, but it turned \nout the limit wasn't so bad after all on 64-bit systems; I dropped \nworrying about it for a while. It's starting to look like it's back on \nthe critical list for 9.1 again though.\n\nIf anyone here wanted to pick that up and help with review, I could \neasily update to it to current git HEAD and re-post. There's enough \npeople on this list who do tests on large machines that I was mainly \nalerting to where the breaking point is at, the fix required has already \nbeen worked on a bit. Someone with more patience than I to play around \nwith multi-platform string conversion trivia is what I think is really \nneeded next, followed by some performance tests on 32-bit systems.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 10 Jan 2011 03:25:23 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgbench to the MAXINT" }, { "msg_contents": "Em 10-01-2011 05:25, Greg Smith escreveu:\n> Euler Taveira de Oliveira wrote:\n>> Em 07-01-2011 22:59, Greg Smith escreveu:\n>>> setrandom: invalid maximum number -2147467296\n>>>\n>> It is failing at atoi() circa pgbench.c:1036. But it just the first\n>> one. There are some variables and constants that need to be converted\n>> to int64 and some functions that must speak 64-bit such as getrand().\n>> Are you working on a patch?\n>\n> http://archives.postgresql.org/pgsql-hackers/2010-01/msg02868.php\n> http://archives.postgresql.org/message-id/[email protected]\n>\nGreg, I just improved your patch. I tried to work around the problems pointed \nout in the above threads. Also, I want to raise some points:\n\n(i) If we want to support and scale factor greater than 21474 we have to \nconvert some columns to bigint; it will change the test. From the portability \npoint it is a pity but as we have never supported it I'm not too worried about \nit. Why? Because it will use bigint columns only if the scale factor is \ngreater than 21474. Is it a problem? I don't think so because generally people \ncompare tests with the same scale factor.\n\n(ii) From the performance perspective, we need to test if the modifications \ndon't impact performance. I don't create another code path for 64-bit \nmodifications (it is too ugly) and I'm afraid some modifications affect the \n32-bit performance. I'm in a position to test it though because I don't have a \nbig machine ATM. 
Greg, could you lead these tests?\n\n(iii) I decided to copy scanint8() (called strtoint64 there) from backend \n(Robert suggestion [1]) because Tom pointed out that strtoll() has portability \nissues. I replaced atoi() with strtoint64() but didn't do any performance tests.\n\nComments?\n\n\n[1] http://archives.postgresql.org/pgsql-hackers/2010-07/msg00173.php\n\n\n-- \n Euler Taveira de Oliveira\n http://www.timbira.com/", "msg_date": "Tue, 11 Jan 2011 18:34:17 -0300", "msg_from": "Euler Taveira de Oliveira <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench to the MAXINT" }, { "msg_contents": "Euler Taveira de Oliveira wrote:\n> (i) If we want to support and scale factor greater than 21474 we have \n> to convert some columns to bigint; it will change the test. From the \n> portability point it is a pity but as we have never supported it I'm \n> not too worried about it. Why? Because it will use bigint columns only \n> if the scale factor is greater than 21474. Is it a problem? I don't \n> think so because generally people compare tests with the same scale \n> factor.\n>\n> (ii) From the performance perspective, we need to test if the \n> modifications don't impact performance. I don't create another code \n> path for 64-bit modifications (it is too ugly) and I'm afraid some \n> modifications affect the 32-bit performance. I'm in a position to test \n> it though because I don't have a big machine ATM. Greg, could you lead \n> these tests?\n>\n> (iii) I decided to copy scanint8() (called strtoint64 there) from \n> backend (Robert suggestion [1]) because Tom pointed out that strtoll() \n> has portability issues. I replaced atoi() with strtoint64() but didn't \n> do any performance tests.\n\n(i): Completely agreed.\n\n(ii): There is no such thing as a \"big machine\" that is 32 bits now; \nanything that's 32 is a tiny system here in 2011. What I can do is \ncheck for degredation on the only 32-bit system I have left here, my \nlaptop. I'll pick a sensitive test case and take a look.\n\n(iii) This is an important thing to test, particularly given it has the \npotential to impact 64-bit results too.\n\nThanks for picking this up again and finishing the thing off. I'll add \nthis into my queue of performance tests to run and we can see if this is \nworth applying. Probably take a little longer than the usual CF review \ntime. But as this doesn't interfere with other code people are working \non and is sort of a bug fix, I don't think it will be a problem if it \ntakes a little longer to get this done.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 18 Jan 2011 13:42:59 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "On Tue, Jan 18, 2011 at 1:42 PM, Greg Smith <[email protected]> wrote:\n> Thanks for picking this up again and finishing the thing off.  I'll add this\n> into my queue of performance tests to run and we can see if this is worth\n> applying.  Probably take a little longer than the usual CF review time.  
But\n> as this doesn't interfere with other code people are working on and is sort\n> of a bug fix, I don't think it will be a problem if it takes a little longer\n> to get this done.\n\nAt least in my book, we need to get this committed in the next two\nweeks, or wait for 9.2.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 30 Jan 2011 15:09:20 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Robert Haas wrote:\n> At least in my book, we need to get this committed in the next two\n> weeks, or wait for 9.2.\n> \n\nYes, I was just suggesting that I was not going to get started in the \nfirst week or two given the other pgbench related tests I had queued up \nalready. Those are closing up nicely, and I'll start testing \nperformance of this change over the weekend.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 03 Feb 2011 22:28:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "The update on the work to push towards a bigger pgbench is that I now \nhave the patch running and generating databases larger than any \npreviously possible scale:\n\n$ time pgbench -i -s 25000 pgbench\n...\n2500000000 tuples done.\n...\nreal 258m46.350s\nuser 14m41.970s\nsys 0m21.310s\n\n$ psql -d pgbench -c \"select \npg_size_pretty(pg_relation_size('pgbench_accounts'));\"\n pg_size_pretty\n----------------\n 313 GB\n\n$ psql -d pgbench -c \"select \npg_size_pretty(pg_relation_size('pgbench_accounts_pkey'));\"\n pg_size_pretty\n----------------\n 52 GB\n\n$ time psql -d pgbench -c \"select count(*) from pgbench_accounts\"\n count \n------------\n 2500000000\n\nreal 18m48.363s\nuser 0m0.010s\nsys 0m0.000s\n\nThe only thing wrong with the patch sent already needed to reach this \npoint was this line:\n\n for (k = 0; k < naccounts * scale; k++)\n\nWhich needed a (int64) cast for the multiplied value in the middle there.\n\nUnfortunately the actual test itself doesn't run yet. Every line I see \nwhen running the SELECT-only test says:\n\nclient 0 sending SELECT abalance FROM pgbench_accounts WHERE aid = 1;\n\nSo something about the updated random generation code isn't quite right \nyet. Now that I have this monster built, I'm going to leave it on the \nserver until I can sort that out, which hopefully will finish up in the \nnext day or so.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 07 Feb 2011 11:03:42 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Attached is an updated 64-bit pgbench patch that works as expected for \nall of the most common pgbench operations, including support for scales \nabove the previous boundary of just over 21,000. 
Here's the patched \nversion running against a 303GB database with a previously unavailable \nscale factor:\n\n$ pgbench -T 300 -j 2 -c 4 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 25000\nquery mode: simple\nnumber of clients: 4\nnumber of threads: 2\nduration: 300 s\nnumber of transactions actually processed: 21681\ntps = 72.249999 (including connections establishing)\ntps = 72.250610 (excluding connections establishing)\n\nAnd some basic Q/A that the values it touched were in the right range:\n\n$ psql -d pgbench -c \"select min(aid),max(aid) from pgbench_accounts\";\n\n min | max \n-----+------------\n 1 | 2500000000\n\n$ psql -d pgbench -c \"select min(aid),max(aid),count(*) from \npgbench_accounts where abalance!=0\" &\n\n min | max | count\n-------+------------+-------\n 51091 | 2499989587 | 21678\n\n(This system was doing 300MB/s on reads while executing that count, and \nit still took 19 minutes)\n\nThe clever way Euler updated the patch, you don't pay for the larger \non-disk data (bigint columns) unless you use a range that requires it, \nwhich greatly reduces the number of ways the test results can suffer \nfrom this change. I felt the way that was coded was a bit more \ncomplicated than it needed to be though, as it made where that switch \nhappened at get computed at runtime based on the true size of the \nintegers. I took that complexity out and just put a hard line in there \ninstead: if scale>=20000, you get bigints. That's not very different \nfrom the real limit, and it made documenting when the switch happens \neasy to write and to remember.\n\nThe main performance concern with this change was whether using int64 \nmore internally for computations would slow things down on a 32-bit \nsystem. I thought I'd test that on my few years old laptop. It turns \nout that even though I've been running an i386 Linux on here, it's \nactually a 64-bit CPU. (I think that it has a 32-bit install may be an \nartifact of Adobe Flash install issues, sadly) So this may not be as \ngood of a test case as I'd hoped. Regardless, running a test aimed to \nstress simple SELECTs, the thing I'd expect to suffer most from \nadditional CPU overhead, didn't show any difference in performance:\n\n$ createdb pgbench\n$ pgbench -i -s 10 pgbench\n$ psql -c \"show shared_buffers\"\n shared_buffers\n----------------\n 256MB\n(1 row)\n$ pgbench -S -j 2 -c 4 -T 60 pgbench\n\ni386 x86_64\n6932 6924 \n6923 6926 \n6923 6922 \n6688 6772 \n6914 6791 \n6902 6916 \n6917 6909 \n6943 6837 \n6689 6744 \n \n6688 6744 min\n6943 6926 max\n6870 6860 average\n\nGiven the noise level of pgbench tests, I'm happy saying that is the \nsame speed. I suspect the real overhead in pgbench's processing relates \nto how it is constantly parsing text to turn them into statements, and \nthat how big the integers it uses are is barley detectable over that.\n\nSo...where does that leave this patch? I feel that pgbench will become \nless relevant very quickly in 9.1 unless something like this is \ncommitted. And there don't seem to be significant downsides to this in \nterms of performance. There are however a few rough points left in here \nthat might raise concern:\n\n1) A look into the expected range of the rand() function suggests the \nglibc implementation normally proves 30 bits of resolution, so about 1 \nbillion numbers. You'll have >1B rows in a pgbench database once the \nscale goes over 10,000. 
So without a major overhaul of how random \nnumber generation is treated here, people can expect the distribution of \nrows touched by a test run to get less even once the database scale gets \nvery large. I added another warning paragraph to the end of the docs in \nthis update to mention this. Long-term, I suspect we may need to adopt \na superior 64-bit RNG approach, something like a Mersenne Twister \nperhaps. That's a bit more than can be chewed on during 9.1 development \nthough.\n\n2) I'd rate odds are good there's one or more corner-case bugs in \n\\setrandom or \\setshell I haven't found yet, just from the way that code \nwas converted. Those have some changes I haven't specifically tested \nexhaustively yet. I don't see any issues when running the most common \ntwo pgbench tests, but that's doesn't mean every part of that 32 -> 64 \nbit conversion was done correctly.\n\nGiven how I use pgbench, for data generation and rough load testing, I'd \nsay neither of those concerns outweights the need to expand the size \nrange of this program. I would be happy to see this go in, followed by \nsome alpha and beta testing aimed to see if any of the rough spots I'm \nconcerned about actually appear. Unfortunately I can't fit all of those \ntests in right now, as throwing around one of these 300GB data sets is \npainful--when you're only getting 72 TPS, looking for large scale \npatterns in the transactions takes a long time to do. For example, if I \nreally wanted a good read on how bad the data distribution skew due to \nsmall random range is, I'd need to let some things run for a week just \nfor a first pass.\n\nI'd like to see this go in, but the problems I've spotted are such that \nI would completely understand this being considered not ready by \nothers. Just having this patch available here is a very useful step \nforward in my mind, because now people can always just grab it and do a \ncustom build if they run into a larger system.\n\nWavering between Returned with Feedback and Ready for Committer here. \nThoughts?\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 09 Feb 2011 03:38:25 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Greg,\n\n* Greg Smith ([email protected]) wrote:\n> I took that complexity out and just put a hard line\n> in there instead: if scale>=20000, you get bigints. That's not\n> very different from the real limit, and it made documenting when the\n> switch happens easy to write and to remember.\n\nAgreed completely on this.\n\n> It turns out that even though I've been running an i386 Linux on\n> here, it's actually a 64-bit CPU. (I think that it has a 32-bit\n> install may be an artifact of Adobe Flash install issues, sadly) So\n> this may not be as good of a test case as I'd hoped. \n\nActually, I would think it'd still be sufficient.. If you're under a\n32bit kernel you're not going to be using the extended registers, etc,\nthat would be available under a 64bit kernel.. That said, the idea that\nwe should care about 32-bit systems these days, in a benchmarking tool,\nis, well, silly, imv.\n\n> 1) A look into the expected range of the rand() function suggests\n> the glibc implementation normally proves 30 bits of resolution, so\n> about 1 billion numbers. 
You'll have >1B rows in a pgbench database\n> once the scale goes over 10,000. So without a major overhaul of how\n> random number generation is treated here, people can expect the\n> distribution of rows touched by a test run to get less even once the\n> database scale gets very large. \n\nJust wondering, did you consider just calling random() twice and\nsmashing the result together..?\n\n> I added another warning paragraph\n> to the end of the docs in this update to mention this. Long-term, I\n> suspect we may need to adopt a superior 64-bit RNG approach,\n> something like a Mersenne Twister perhaps. That's a bit more than\n> can be chewed on during 9.1 development though.\n\nI tend to agree that we should be able to improve the random number\ngeneration in the future. Additionally, imv, we should be able to say\n\"pg_bench version X isn't comparable to version Y\" in the release notes\nor something, or have seperate version #s for it which make it clear\nwhat can be compared to each other and what can't. Painting ourselves\ninto a corner by saying we can't ever make pgbench generate results that\ncan't be compared to every other released version of pgbench just isn't\npractical.\n\n> 2) I'd rate odds are good there's one or more corner-case bugs in\n> \\setrandom or \\setshell I haven't found yet, just from the way that\n> code was converted. Those have some changes I haven't specifically\n> tested exhaustively yet. I don't see any issues when running the\n> most common two pgbench tests, but that's doesn't mean every part of\n> that 32 -> 64 bit conversion was done correctly.\n\nI'll take a look. :)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Wed, 9 Feb 2011 14:40:08 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Stephen Frost wrote:\n> Just wondering, did you consider just calling random() twice and\n> smashing the result together..?\n> \n\nI did. The problem is that even within the 32 bits that random() \nreturns, it's not uniformly distributed. Combining two of them isn't \nreally going to solve the distribution problem, just move it around. 
\nSome number of lower-order bits are less random than the others, and \nwhich they are is implementation dependent.\n\nPoking around a bit more, I just discovered another possible approach is \nto use erand48 instead of rand in pgbench, which is either provided by \nthe OS or emulated in src/port/erand48.c That's way more resolution \nthan needed here, given that 2^48 pgbench accounts would be a scale of \n2.8M, which makes for a database of about 42 petabytes.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 10 Feb 2011 21:27:30 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Poking around a bit more, I just discovered another possible approach is \n> to use erand48 instead of rand in pgbench, which is either provided by \n> the OS or emulated in src/port/erand48.c That's way more resolution \n> than needed here, given that 2^48 pgbench accounts would be a scale of \n> 2.8M, which makes for a database of about 42 petabytes.\n\nI think that might be a good idea --- it'd reduce the cross-platform\nvariability of the results quite a bit, I suspect. random() is not\nto be trusted everywhere, but I think erand48 is pretty much the same\nwherever it exists at all (and src/port/ provides it elsewhere).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Feb 2011 22:18:38 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT " }, { "msg_contents": "Greg,\n\n* Tom Lane ([email protected]) wrote:\n> Greg Smith <[email protected]> writes:\n> > Poking around a bit more, I just discovered another possible approach is \n> > to use erand48 instead of rand in pgbench, which is either provided by \n> > the OS or emulated in src/port/erand48.c That's way more resolution \n> > than needed here, given that 2^48 pgbench accounts would be a scale of \n> > 2.8M, which makes for a database of about 42 petabytes.\n> \n> I think that might be a good idea --- it'd reduce the cross-platform\n> variability of the results quite a bit, I suspect. random() is not\n> to be trusted everywhere, but I think erand48 is pretty much the same\n> wherever it exists at all (and src/port/ provides it elsewhere).\n\nWorks for me. Greg, will you be able to work on this change? If not, I\nmight be able to.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 11 Feb 2011 08:35:51 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "On Fri, Feb 11, 2011 at 8:35 AM, Stephen Frost <[email protected]> wrote:\n> Greg,\n>\n> * Tom Lane ([email protected]) wrote:\n>> Greg Smith <[email protected]> writes:\n>> > Poking around a bit more, I just discovered another possible approach is\n>> > to use erand48 instead of rand in pgbench, which is either provided by\n>> > the OS or emulated in src/port/erand48.c  That's way more resolution\n>> > than needed here, given that 2^48 pgbench accounts would be a scale of\n>> > 2.8M, which makes for a database of about 42 petabytes.\n>>\n>> I think that might be a good idea --- it'd reduce the cross-platform\n>> variability of the results quite a bit, I suspect.  
random() is not\n>> to be trusted everywhere, but I think erand48 is pretty much the same\n>> wherever it exists at all (and src/port/ provides it elsewhere).\n>\n> Works for me.  Greg, will you be able to work on this change?  If not, I\n> might be able to.\n\nSeeing as how this patch has not been updated, I think it's time to\nmark this one Returned with Feedback.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 15 Feb 2011 21:41:24 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Tom Lane wrote:\n> I think that might be a good idea --- it'd reduce the cross-platform\n> variability of the results quite a bit, I suspect. random() is not\n> to be trusted everywhere, but I think erand48 is pretty much the same\n> wherever it exists at all (and src/port/ provides it elsewhere).\n> \n\nGiven that pgbench will run with threads in some multi-worker \nconfigurations, after some more portability research I think odds are \ngood we'd get nailed by \nhttp://sourceware.org/bugzilla/show_bug.cgi?id=10320 : \"erand48 \nimplementation not thread safe but POSIX says it should be\". The AIX \ndocs have a similar warning on them, so who knows how many versions of \nthat library have the same issue.\n\nMaybe we could make sure the one in src/port/ is thread safe and make \nsure pgbench only uses it. This whole area continues to be messy enough \nthat I think the patch needs to brew for another CF before it will all \nbe sorted out properly. I'll mark it accordingly and can pick this back \nup later.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\n\n\n", "msg_date": "Wed, 16 Feb 2011 08:15:41 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> Given that pgbench will run with threads in some multi-worker \n> configurations, after some more portability research I think odds are \n> good we'd get nailed by \n> http://sourceware.org/bugzilla/show_bug.cgi?id=10320 : \"erand48 \n> implementation not thread safe but POSIX says it should be\". The AIX \n> docs have a similar warning on them, so who knows how many versions of \n> that library have the same issue.\n\nFWIW, I think that bug report is effectively complaining that if you use\nboth drand48 and erand48, the former can impact the latter. If you use\nonly erand48, I don't see that there's any problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 16 Feb 2011 10:40:31 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT " }, { "msg_contents": "On Wed, Feb 16, 2011 at 8:15 AM, Greg Smith <[email protected]> wrote:\n\n> Tom Lane wrote:\n>\n>> I think that might be a good idea --- it'd reduce the cross-platform\n>> variability of the results quite a bit, I suspect. 
random() is not\n>> to be trusted everywhere, but I think erand48 is pretty much the same\n>> wherever it exists at all (and src/port/ provides it elsewhere).\n>>\n>>\n>\n> Given that pgbench will run with threads in some multi-worker\n> configurations, after some more portability research I think odds are good\n> we'd get nailed by http://sourceware.org/**bugzilla/show_bug.cgi?id=10320<http://sourceware.org/bugzilla/show_bug.cgi?id=10320>: \"erand48 implementation not thread safe but POSIX says it should be\".\n> The AIX docs have a similar warning on them, so who knows how many\n> versions of that library have the same issue.\n>\n> Maybe we could make sure the one in src/port/ is thread safe and make sure\n> pgbench only uses it. This whole area continues to be messy enough that I\n> think the patch needs to brew for another CF before it will all be sorted\n> out properly. I'll mark it accordingly and can pick this back up later.\n>\n\nHi Greg,\n\n I spent some time rebasing this patch to current master. Attached is\nthe patch, based on master couple of commits old.\n\n Your concern of using erand48() has been resolved since pgbench now\nuses thread-safe and concurrent pg_erand48() from src/port/.\n\n The patch is very much what you had posted, except for a couple of\ndifferences due to bit-rot. (i) I didn't have to #define MAX_RANDOM_VALUE64\nsince its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I\nused ternary operator in DDLs[] array to decide when to use bigint vs int\ncolumns.\n\n Please review.\n\n As for tests, I am currently running 'pgbench -i -s 21474' using\nunpatched pgbench, and am recording the time taken;Scale factor 21475 had\nactually failed to do anything meaningful using unpatched pgbench. Next\nI'll run with '-s 21475' on patched version to see if it does the right\nthing, and in acceptable time compared to '-s 21474'.\n\n What tests would you and others like to see, to get some confidence in\nthe patch? The machine that I have access to has 62 GB RAM, 16-core\n64-hw-threads, and about 900 GB of disk space.\n\nLinux <host> 3.2.6-3.fc16.ppc64 #1 SMP Fri Feb 17 21:41:20 UTC 2012 ppc64\nppc64 ppc64 GNU/Linux\n\nBest regards,\n\nPS: The primary source of patch is this branch:\nhttps://github.com/gurjeet/postgres/tree/64bit_pgbench\n-- \nGurjeet Singh\n\nhttp://gurjeet.singh.im/\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Fri, 21 Dec 2012 01:16:12 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "Hi,\n\nI have reviewed this patch.\n\nhttps://commitfest.postgresql.org/action/patch_view?id=1068\n\n2012/12/21 Gurjeet Singh <[email protected]>:\n> The patch is very much what you had posted, except for a couple of\n> differences due to bit-rot. (i) I didn't have to #define MAX_RANDOM_VALUE64\n> since its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I\n> used ternary operator in DDLs[] array to decide when to use bigint vs int\n> columns.\n>\n> Please review.\n>\n> As for tests, I am currently running 'pgbench -i -s 21474' using\n> unpatched pgbench, and am recording the time taken;Scale factor 21475 had\n> actually failed to do anything meaningful using unpatched pgbench. 
Next I'll\n> run with '-s 21475' on patched version to see if it does the right thing,\n> and in acceptable time compared to '-s 21474'.\n>\n> What tests would you and others like to see, to get some confidence in\n> the patch? The machine that I have access to has 62 GB RAM, 16-core\n> 64-hw-threads, and about 900 GB of disk space.\n\nI have tested this patch, and hvae confirmed that the columns\nfor aid would be switched to using bigint, instead of int,\nwhen the scalefactor >= 20,000.\n(aid columns would exeed the upper bound of int when sf>21474.)\n\nAlso, I added a few fixes on it.\n\n- Fixed to apply for the current git master.\n- Fixed to surpress few more warnings about INT64_FORMAT.\n- Minor improvement in the docs. (just my suggestion)\n\nI attached the revised one.\n\nRegards,\n-- \nSatoshi Nagayasu <[email protected]>\nUptime Technologies, LLC http://www.uptime.jp/\n\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers", "msg_date": "Sun, 27 Jan 2013 13:24:27 +0900", "msg_from": "Satoshi Nagayasu <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "On Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu <[email protected]> wrote:\n\n> Hi,\n>\n> I have reviewed this patch.\n>\n> https://commitfest.postgresql.org/action/patch_view?id=1068\n>\n> 2012/12/21 Gurjeet Singh <[email protected]>:\n> > The patch is very much what you had posted, except for a couple of\n> > differences due to bit-rot. (i) I didn't have to #define\n> MAX_RANDOM_VALUE64\n> > since its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I\n> > used ternary operator in DDLs[] array to decide when to use bigint vs int\n> > columns.\n> >\n> > Please review.\n> >\n> > As for tests, I am currently running 'pgbench -i -s 21474' using\n> > unpatched pgbench, and am recording the time taken;Scale factor 21475 had\n> > actually failed to do anything meaningful using unpatched pgbench. Next\n> I'll\n> > run with '-s 21475' on patched version to see if it does the right thing,\n> > and in acceptable time compared to '-s 21474'.\n> >\n> > What tests would you and others like to see, to get some confidence\n> in\n> > the patch? The machine that I have access to has 62 GB RAM, 16-core\n> > 64-hw-threads, and about 900 GB of disk space.\n>\n> I have tested this patch, and hvae confirmed that the columns\n> for aid would be switched to using bigint, instead of int,\n> when the scalefactor >= 20,000.\n> (aid columns would exeed the upper bound of int when sf>21474.)\n>\n> Also, I added a few fixes on it.\n>\n> - Fixed to apply for the current git master.\n> - Fixed to surpress few more warnings about INT64_FORMAT.\n> - Minor improvement in the docs. (just my suggestion)\n>\n> I attached the revised one.\n>\n\nLooks good to me. Thanks!\n\n-- \nGurjeet Singh\n\nhttp://gurjeet.singh.im/\n\nOn Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu <[email protected]> wrote:\n\nHi,\n\nI have reviewed this patch.\n\nhttps://commitfest.postgresql.org/action/patch_view?id=1068\n\n2012/12/21 Gurjeet Singh <[email protected]>:\n>     The patch is very much what you had posted, except for a couple of\n> differences due to bit-rot. 
(i) I didn't have to #define MAX_RANDOM_VALUE64\n> since its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I\n> used ternary operator in DDLs[] array to decide when to use bigint vs int\n> columns.\n>\n>     Please review.\n>\n>     As for tests, I am currently running 'pgbench -i -s 21474' using\n> unpatched pgbench, and am recording the time taken;Scale factor 21475 had\n> actually failed to do anything meaningful using unpatched pgbench. Next I'll\n> run with '-s 21475' on patched version to see if it does the right thing,\n> and in acceptable time compared to '-s 21474'.\n>\n>     What tests would you and others like to see, to get some confidence in\n> the patch? The machine that I have access to has 62 GB RAM, 16-core\n> 64-hw-threads, and about 900 GB of disk space.\n\nI have tested this patch, and hvae confirmed that the columns\nfor aid would be switched to using bigint, instead of int,\nwhen the scalefactor >= 20,000.\n(aid columns would exeed the upper bound of int when sf>21474.)\n\nAlso, I added a few fixes on it.\n\n- Fixed to apply for the current git master.\n- Fixed to surpress few more warnings about INT64_FORMAT.\n- Minor improvement in the docs. (just my suggestion)\n\nI attached the revised one.Looks good to me. Thanks!-- Gurjeet Singhhttp://gurjeet.singh.im/", "msg_date": "Mon, 28 Jan 2013 16:30:51 -0500", "msg_from": "Gurjeet Singh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" }, { "msg_contents": "On 28.01.2013 23:30, Gurjeet Singh wrote:\n> On Sat, Jan 26, 2013 at 11:24 PM, Satoshi Nagayasu<[email protected]> wrote:\n>\n>> 2012/12/21 Gurjeet Singh<[email protected]>:\n>>> The patch is very much what you had posted, except for a couple of\n>>> differences due to bit-rot. (i) I didn't have to #define\n>> MAX_RANDOM_VALUE64\n>>> since its cousin MAX_RANDOM_VALUE is not used by code anymore, and (ii) I\n>>> used ternary operator in DDLs[] array to decide when to use bigint vs int\n>>> columns.\n>>>\n>>> Please review.\n>>>\n>>> As for tests, I am currently running 'pgbench -i -s 21474' using\n>>> unpatched pgbench, and am recording the time taken;Scale factor 21475 had\n>>> actually failed to do anything meaningful using unpatched pgbench. Next\n>> I'll\n>>> run with '-s 21475' on patched version to see if it does the right thing,\n>>> and in acceptable time compared to '-s 21474'.\n>>>\n>>> What tests would you and others like to see, to get some confidence\n>> in\n>>> the patch? The machine that I have access to has 62 GB RAM, 16-core\n>>> 64-hw-threads, and about 900 GB of disk space.\n>>\n>> I have tested this patch, and hvae confirmed that the columns\n>> for aid would be switched to using bigint, instead of int,\n>> when the scalefactor>= 20,000.\n>> (aid columns would exeed the upper bound of int when sf>21474.)\n>>\n>> Also, I added a few fixes on it.\n>>\n>> - Fixed to apply for the current git master.\n>> - Fixed to surpress few more warnings about INT64_FORMAT.\n>> - Minor improvement in the docs. (just my suggestion)\n>>\n>> I attached the revised one.\n>\n> Looks good to me. 
Thanks!\n\nOk, committed.\n\nAt some point, we might want to have a strtoll() implementation in src/port.\n\n- Heikki\n\n\n-- \nSent via pgsql-hackers mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-hackers\n", "msg_date": "Tue, 29 Jan 2013 12:12:41 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] pgbench to the MAXINT" } ]
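For anyone who just wants to see where the 32-bit ceiling comes from and what the committed change means on disk, a short sketch (the DDL is a simplified rendering of pgbench's accounts table layout, not copied verbatim from the patch):

    -- pgbench generates aid values up to scale * 100,000, so a plain int4
    -- column runs out of room just above scale 21474:
    SELECT 21474::bigint * 100000 AS max_aid_ok,        -- 2,147,400,000
           21475::bigint * 100000 AS max_aid_overflow,  -- 2,147,500,000
           2147483647             AS int4_max;

    -- Below the threshold pgbench keeps the traditional layout; at
    -- scale >= 20000 the committed patch switches aid to bigint:
    CREATE TABLE pgbench_accounts_small (aid int    NOT NULL, bid int, abalance int, filler char(84));
    CREATE TABLE pgbench_accounts_big   (aid bigint NOT NULL, bid int, abalance int, filler char(84));

As noted earlier in the thread, results taken below and above the 20000 boundary are not directly comparable, since crossing it changes the row width and index size.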
[ { "msg_contents": "Hello all,\nI am trying to do comparative study of PostgreSQL and Oracle.\nHas anybody tried to use same binary for connecting to oracle as well as \nPostgreSQL at the same time?\nI mean the program should be able to create 1 connection to oracle and 1 \nconnection to postgreSQL at the same time.\nThe problem I can foresee may be symbol clash etc (it is a C program using libpq \nand OCI).\n\nHas anyone been successful in loading and using both libraries in the program at \nthe same time without symbol clash?\n\n Best Regards,\nDivakar\n\n\n\n \nHello all,I am trying to do comparative study of PostgreSQL and Oracle.Has anybody tried to use same binary for connecting to oracle as well as PostgreSQL at the same time?I mean the program should be able to create 1 connection to oracle and 1 connection to postgreSQL at the same time.The problem I can foresee may be symbol clash etc (it is a C program using libpq and OCI).Has anyone been successful in loading and using both libraries in the program at the same time without symbol clash? Best Regards,Divakar", "msg_date": "Tue, 11 Jan 2011 22:54:10 -0800 (PST)", "msg_from": "Divakar Singh <[email protected]>", "msg_from_op": true, "msg_subject": "Performance test of Oracle and PostgreSQL using same binary" }, { "msg_contents": "On Tue, Jan 11, 2011 at 10:54 PM, Divakar Singh <[email protected]> wrote:\n> Hello all,\n> I am trying to do comparative study of PostgreSQL and Oracle.\n> Has anybody tried to use same binary for connecting to oracle as well as\n> PostgreSQL at the same time?\n> I mean the program should be able to create 1 connection to oracle and 1\n> connection to postgreSQL at the same time.\n> The problem I can foresee may be symbol clash etc (it is a C program using\n> libpq and OCI).\n>\n> Has anyone been successful in loading and using both libraries in the\n> program at the same time without symbol clash?\n\nI've done this from Perl using DBI, DBD::Oracle, and DBD::Pg. As perl\nis written in C, that should be a good sign for you.\n\nCheers,\n\nJeff\n", "msg_date": "Wed, 12 Jan 2011 09:11:52 -0800", "msg_from": "Jeff Janes <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance test of Oracle and PostgreSQL using same binary" } ]
[ { "msg_contents": "This will be simple question to answer. :-) There is a single table:\n\nselect count(*) from product_price_history -- 12982555 rows\n\nThis table has exactly one index and on primary key constraint:\n\nCREATE INDEX idx_product_price_history_id_hdate\n ON product_price_history\n USING btree\n (id, hdate);\n\nALTER TABLE product_price_history\n ADD CONSTRAINT pk_product_price_history PRIMARY KEY(hid);\n\nNo more constraints or indexes defined on this table. Rows are never \nupdated or deleted in this table, they are only inserted. It was \nvacuum-ed and reindex-ed today.\n\nStats on the table:\n\nseq scans=13, index scans=108, table size=3770MB, toast table size=8192 \nbytes, indexes size=666MB\n\nThis query:\n\nselect hid from product_price_history where id=35547581\n\nReturns 759 rows in 8837 msec! How can this be that slow???\n\nThe query plan is:\n\n\"Bitmap Heap Scan on product_price_history (cost=13.90..1863.51 \nrows=472 width=8)\"\n\" Recheck Cond: (id = 35547581)\"\n\" -> Bitmap Index Scan on idx_product_price_history_id_hdate \n(cost=0.00..13.78 rows=472 width=0)\"\n\" Index Cond: (id = 35547581)\"\n\nI don't understand why PostgreSQL uses bitmap heap scan + bitmap index \nscan? Why not just use an regular index scan? Data in a btree index is \nalready sorted. A normal index scan should take no more than a few page \nreads. This sould never take 8 seconds.\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Wed, 12 Jan 2011 13:14:22 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query + why bitmap index scan??" }, { "msg_contents": "* Laszlo Nagy:\n\n> This query:\n>\n> select hid from product_price_history where id=35547581\n>\n> Returns 759 rows in 8837 msec! How can this be that slow???\n\nIf most records are on different heap pages, processing this query\nrequires many seeks. 11ms per seek is not too bad if most of them are\ncache misses.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Wed, 12 Jan 2011 13:42:42 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query + why bitmap index scan??" }, { "msg_contents": "On 2011-01-12 14:42, Florian Weimer wrote:\n> * Laszlo Nagy:\n>\n>> This query:\n>>\n>> select hid from product_price_history where id=35547581\n>>\n>> Returns 759 rows in 8837 msec! How can this be that slow???\n> If most records are on different heap pages, processing this query\n> requires many seeks. 11ms per seek is not too bad if most of them are\n> cache misses.\nHow about this:\n\nselect id,hdate from product_price_history where id=35547581 -- 759 \nrows, 8837 ms\nQuery time average: 3 sec.\nQuery plan:\n\n\"Bitmap Heap Scan on product_price_history (cost=13.91..1871.34 \nrows=474 width=16)\"\n\" Recheck Cond: (id = 35547582)\"\n\" -> Bitmap Index Scan on idx_product_price_history_id_hdate \n(cost=0.00..13.79 rows=474 width=0)\"\n\" Index Cond: (id = 35547582)\"\n\nWhy still the heap scan here? All fields in the query are in the \nindex... Wouldn't a simple index scan be faster? (This is only a \ntheoretical question, just I'm curious.)\n\nMy first idea to speed things up is to cluster this table regularly. \nThat would convert (most of the) rows into a few pages. Few page reads \n-> faster query. Is it a good idea?\n\nAnother question. 
Do you think that increasing shared_mem would make it \nfaster?\n\nCurrently we have:\n\nshared_mem = 6GB\nwork_mem = 512MB\ntotal system memory=24GB\n\nTotal database size about 30GB, but there are other programs running on \nthe system, and many other tables.\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Wed, 12 Jan 2011 15:21:45 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query + why bitmap index scan??" }, { "msg_contents": "On Wed, Jan 12, 2011 at 03:21:45PM +0100, Laszlo Nagy wrote:\n> On 2011-01-12 14:42, Florian Weimer wrote:\n>> * Laszlo Nagy:\n>>\n>>> This query:\n>>>\n>>> select hid from product_price_history where id=35547581\n>>>\n>>> Returns 759 rows in 8837 msec! How can this be that slow???\n>> If most records are on different heap pages, processing this query\n>> requires many seeks. 11ms per seek is not too bad if most of them are\n>> cache misses.\n> How about this:\n>\n> select id,hdate from product_price_history where id=35547581 -- 759 rows, \n> 8837 ms\n> Query time average: 3 sec.\n> Query plan:\n>\n> \"Bitmap Heap Scan on product_price_history (cost=13.91..1871.34 rows=474 \n> width=16)\"\n> \" Recheck Cond: (id = 35547582)\"\n> \" -> Bitmap Index Scan on idx_product_price_history_id_hdate \n> (cost=0.00..13.79 rows=474 width=0)\"\n> \" Index Cond: (id = 35547582)\"\n>\n> Why still the heap scan here? All fields in the query are in the index... \n> Wouldn't a simple index scan be faster? (This is only a theoretical \n> question, just I'm curious.)\n>\n\nBecause of PostgreSQL's MVCC design, it must visit each heap tuple\nto check its visibility as well as look it up in the index.\n\n> My first idea to speed things up is to cluster this table regularly. That \n> would convert (most of the) rows into a few pages. Few page reads -> faster \n> query. Is it a good idea?\n>\n\nYes, clustering this table would greatly speed up this type of query.\n\n> Another question. Do you think that increasing shared_mem would make it \n> faster?\n\nI doubt it.\n\n>\n> Currently we have:\n>\n> shared_mem = 6GB\n> work_mem = 512MB\n> total system memory=24GB\n>\n> Total database size about 30GB, but there are other programs running on the \n> system, and many other tables.\n>\n> Thanks,\n>\n> Laszlo\n>\n\nClustering is your best option until we get indexes with visibility\ninformation.\n\nCheers,\nKen\n", "msg_date": "Wed, 12 Jan 2011 08:26:54 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query + why bitmap index scan??" }, { "msg_contents": "Laszlo Nagy <[email protected]> wrote:\n \n> shared_mem = 6GB\n> work_mem = 512MB\n> total system memory=24GB\n \nIn addition to the good advice from Ken, I suggest that you set\neffective_cache_size (if you haven't already). Add whatever the OS\nshows as RAM used for cache to the shared_mem setting.\n \nBut yeah, for your immediate problem, if you can cluster the table\non the index involved, it will be much faster. Of course, if the\ntable is already in a useful order for some other query, that might\nget slower, and unlike some other products, CLUSTER in PostgreSQL\ndoesn't *maintain* that order for the data as new rows are added --\nso this should probably become a weekly (or monthly or some such)\nmaintenance operation.\n \n-Kevin\n", "msg_date": "Wed, 12 Jan 2011 08:36:55 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query + why bitmap index scan??" 
}, { "msg_contents": "On 2011-01-12 15:36, Kevin Grittner wrote:\n> Laszlo Nagy<[email protected]> wrote:\n>\n>> shared_mem = 6GB\n>> work_mem = 512MB\n>> total system memory=24GB\n>\n> In addition to the good advice from Ken, I suggest that you set\n> effective_cache_size (if you haven't already). Add whatever the OS\n> shows as RAM used for cache to the shared_mem setting.\nIt was 1GB. Now I changed to 2GB. Although the OS shows 9GB inactive \nmemory, we have many concurrent connections to the database server. I \nhope it is okay to use 2GB.\n>\n> But yeah, for your immediate problem, if you can cluster the table\n> on the index involved, it will be much faster. Of course, if the\n> table is already in a useful order for some other query, that might\n> get slower, and unlike some other products, CLUSTER in PostgreSQL\n> doesn't *maintain* that order for the data as new rows are added --\n> so this should probably become a weekly (or monthly or some such)\n> maintenance operation.\nThank you! After clustering, queries are really fast. I don't worry \nabout other queries. This is the only way we use this table - get \ndetails for a given id value. I put the CLUSTER command into a cron \nscript that runs daily. For the second time, it took 2 minutes to run so \nI guess it will be fine.\n\nThank you for your help.\n\n Laszlo\n\n", "msg_date": "Wed, 12 Jan 2011 16:20:26 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query + why bitmap index scan??" }, { "msg_contents": "Laszlo Nagy <[email protected]> wrote:\n \n>> In addition to the good advice from Ken, I suggest that you set\n>> effective_cache_size (if you haven't already). Add whatever the\n>> OS shows as RAM used for cache to the shared_mem setting.\n> It was 1GB. Now I changed to 2GB. Although the OS shows 9GB\n> inactive memory, we have many concurrent connections to the\n> database server. I hope it is okay to use 2GB.\n \neffective_cache_size doesn't cause any RAM to be allocated, it's\njust a hint to the costing routines. Higher values tend to favor\nindex use, while lower values tend to favor sequential scans. I\nsuppose that if you regularly have many large queries running at the\nsame moment you might not want to set it to the full amount of cache\nspace available, but I've usually had good luck setting to the sum\nof shared_buffers space and OS cache.\n \nSince it only affects plan choice, not memory allocations, changing\nit won't help if good plans are already being chosen.\n \n-Kevin\n", "msg_date": "Wed, 12 Jan 2011 10:31:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query + why bitmap index scan??" }, { "msg_contents": "2011/1/12 Kevin Grittner <[email protected]>:\n> Laszlo Nagy <[email protected]> wrote:\n>\n>>>  In addition to the good advice from Ken, I suggest that you set\n>>> effective_cache_size (if you haven't already).  Add whatever the\n>>> OS shows as RAM used for cache to the shared_mem setting.\n>> It was 1GB. Now I changed to 2GB. Although the OS shows 9GB\n>> inactive memory, we have many concurrent connections to the\n>> database server. I hope it is okay to use 2GB.\n>\n> effective_cache_size doesn't cause any RAM to be allocated, it's\n> just a hint to the costing routines.  Higher values tend to favor\n> index use, while lower values tend to favor sequential scans.  
I\n> suppose that if you regularly have many large queries running at the\n> same moment you might not want to set it to the full amount of cache\n> space available,\n> but I've usually had good luck setting to the sum\n> of shared_buffers space and OS cache.\n\nWhat is the OS used ? Do you have windows ? if yes the current\nparameters are not good, and linux should not have 9GB of 'inactive'\n(?) memory.\n\n>\n> Since it only affects plan choice, not memory allocations, changing\n> it won't help if good plans are already being chosen.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Wed, 12 Jan 2011 21:09:47 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query + why bitmap index scan??" } ]
[ { "msg_contents": "I am running a postgres update on one of my machines:\n\nDownloading Packages:\n(1/7): postgresql90-plpython-9.0.2-2PGDG.rhel5.x86_64.rp | 50 kB \n00:02 \n(2/7): postgresql90-plperl-9.0.2-2PGDG.rhel5.x86_64.rpm | 51 kB \n00:03 \n(3/7): postgresql90-libs-9.0.2-2PGDG.rhel5.x86_64.rpm | 217 kB \n00:14 \n(4/7): postgresql90-contrib-9.0.2-2PGDG.rhel5.x86_64.rpm | 451 kB \n00:40 \n(5/7): postgresql90-9.0.2-2PGDG.rhel5.x86_64.rpm | 1.4 MB \n01:57 \n(6/7): postgresql90-devel-9.0.2-2PGDG.rhel5.x86_64.rpm | 1.6 MB \n02:48 \n(7/7): postgresql90-se (68%) 44% [===== ] 7.0 kB/s | 2.2 MB \n06:33 ETA\n\n7 kilobytes per second??? That brings back the times of the good, old \n9600 USR modems and floppy disks.\n", "msg_date": "Wed, 12 Jan 2011 08:49:01 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "The good, old times" }, { "msg_contents": "Mladen Gogala <mladen.gogala 'at' vmsinfo.com> writes:\n\n> I am running a postgres update on one of my machines:\n>\n> Downloading Packages:\n> (1/7): postgresql90-plpython-9.0.2-2PGDG.rhel5.x86_64.rp | 50 kB\n> 00:02 (2/7): postgresql90-plperl-9.0.2-2PGDG.rhel5.x86_64.rpm |\n> 51 kB 00:03 (3/7):\n> postgresql90-libs-9.0.2-2PGDG.rhel5.x86_64.rpm | 217 kB 00:14\n> (4/7): postgresql90-contrib-9.0.2-2PGDG.rhel5.x86_64.rpm | 451 kB\n> 00:40 (5/7): postgresql90-9.0.2-2PGDG.rhel5.x86_64.rpm |\n> 1.4 MB 01:57 (6/7):\n> postgresql90-devel-9.0.2-2PGDG.rhel5.x86_64.rpm | 1.6 MB 02:48\n> (7/7): postgresql90-se (68%) 44% [===== ] 7.0 kB/s | 2.2 MB\n> 06:33 ETA\n>\n> 7 kilobytes per second??? That brings back the times of the good, old\n> 9600 USR modems and floppy disks.\n\nWhat's your point and in what is it related to that ML?\n\n-- \nGuillaume Cottenceau\n", "msg_date": "Wed, 12 Jan 2011 15:16:03 +0100", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" }, { "msg_contents": "On 01/12/2011 10:16 PM, Guillaume Cottenceau wrote:\n\n> What's your point and in what is it related to that ML?\n\nGiven the package names, I suspect this is a poorly-expressed complaint \nabout the performance of downloads from the pgdg/psqlrpms site. If that \nwas the original poster's intent, they would've been better served with \na post that included some minimal details like:\n\n- Information abut their local connectivity\n- mtr --report / traceroute output\n- tests from other available hosts\n\nIf that wasn't the original poster's intent, perhaps it'd be worth a \nsecond try to explain what they were *trying* to say? Was it just a joke \n- 'cos if so, it was kinda flat.\n\n--\nCraig Ringer\n", "msg_date": "Fri, 14 Jan 2011 14:02:28 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" }, { "msg_contents": "Craig Ringer wrote:\n> On 01/12/2011 10:16 PM, Guillaume Cottenceau wrote:\n>\n> \n>> What's your point and in what is it related to that ML?\n>> \n>\n> Given the package names, I suspect this is a poorly-expressed complaint \n> about the performance of downloads from the pgdg/psqlrpms site. 
If that \n> was the original poster's intent, they would've been better served with \n> a post that included some minimal details like:\n> \nYes, it was a complaint about the download speed.\n\n\n> - Information abut their local connectivity\n> - mtr --report / traceroute output\n> - tests from other available hosts\n> \n\nAs for the traceroute information, here it is:\n traceroute yum.pgrpms.org\ntraceroute to yum.pgrpms.org (77.79.103.58), 30 hops max, 40 byte packets\n 1 216.169.135.254 (216.169.135.254) 0.389 ms 0.404 ms 0.451 ms\n 2 host189.131.26.216.vmsinfo.com (216.26.131.189) 9.355 ms 9.357 ms \n9.368 ms\n 3 v11.lc2.lou.peak10.net (216.26.190.10) 9.645 ms 9.645 ms 9.637 ms\n 4 ge-7-41.car1.Cincinnati1.Level3.net (4.53.64.41) 13.002 ms 13.002 \nms 13.018 ms\n 5 ae-2-5.bar1.Cincinnati1.Level3.net (4.69.132.206) 13.101 ms 13.098 \nms 13.087 ms\n 6 ae-10-10.ebr2.Chicago1.Level3.net (4.69.136.214) 22.096 ms 21.358 \nms 21.329 ms\n 7 ae-1-100.ebr1.Chicago1.Level3.net (4.69.132.41) 27.729 ms 10.812 \nms 24.132 ms\n 8 ae-2-2.ebr2.NewYork2.Level3.net (4.69.132.66) 34.008 ms 33.960 ms \n34.088 ms\n 9 ae-1-100.ebr1.NewYork2.Level3.net (4.69.135.253) 34.152 ms 35.353 \nms 37.068 ms\n10 ae-4-4.ebr1.NewYork1.Level3.net (4.69.141.17) 36.998 ms 37.248 ms \n36.986 ms\n11 ae-43-43.ebr2.London1.Level3.net (4.69.137.73) 107.031 ms \nae-42-42.ebr2.London1.Level3.net (4.69.137.69) 104.624 ms 107.000 ms\n12 ae-2-52.edge4.London1.Level3.net (4.69.139.106) 107.506 ms 106.993 \nms 180.229 ms\n13 (195.50.122.174) 168.849 ms 160.917 ms 161.713 ms\n14 static.turktelekom.com.tr (212.156.103.42) 176.503 ms 179.012 ms \n179.394 ms\n15 gayrettepe-t3-1-gayrettepe-t2-1.turktelekom.com.tr (212.156.118.29) \n167.867 ms 167.870 ms 167.862 ms\n16 88.255.240.110 (88.255.240.110) 167.515 ms 168.172 ms 165.829 ms\n17 ns1.gunduz.org (77.79.103.58) 171.574 ms !X * *\n[mgogala@lpo-postgres-d01 ~]$\n\nAre there any good mirrors? Apparently, there is something slow in the \nforce.\n\n\n\n\n> If that wasn't the original poster's intent, perhaps it'd be worth a \n> second try to explain what they were *trying* to say? Was it just a joke \n> - 'cos if so, it was kinda flat.\n>\n> --\n> Craig Ringer\n>\n> \n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 14 Jan 2011 11:54:51 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: The good, old times" }, { "msg_contents": "On 01/15/2011 12:54 AM, Mladen Gogala wrote:\n\n> Yes, it was a complaint about the download speed.\n>\n>> - Information abut their local connectivity\n>> - mtr --report / traceroute output\n>> - tests from other available hosts\n\nOK, that's one out of three. How about the other two?\n\n> 15 gayrettepe-t3-1-gayrettepe-t2-1.turktelekom.com.tr (212.156.118.29)\n> 167.867 ms 167.870 ms 167.862 ms\n> 16 88.255.240.110 (88.255.240.110) 167.515 ms 168.172 ms 165.829 ms\n> 17 ns1.gunduz.org (77.79.103.58) 171.574 ms !X * *\n\nOutput from a smarter traceroute client like \"mtr\" would be helpful. See \nbelow for usage. \"mtr\" is available for most non-braindead unixes, and \nis pre-installed on many modern Linux variants.\n\n> Are there any good mirrors? Apparently, there is something slow in the\n> force.\n\nLooks fine from here, on iiNet Western Australian ADSL2+ via local \n802.11g . 
That latency is about what I expect for traffic from Western \nAustralia to Turkey via Sydney and the USA. I see similar results from \nother hosts. Performance when accessing the server is fine. This sample \nwas taken Sat Jan 15 2011, 21:35:55 +0800 time.\n\n> [craig@ayaki ~]$ mtr --report-wide --report 77.79.103.58\n> HOST: ayaki Loss% Snt Last Avg Best Wrst StDev\n> 1.|-- bob.iad 0.0% 10 1.9 1.9 1.2 3.7 0.7\n> 2.|-- nexthop.wa.iinet.net.au 0.0% 10 16.7 18.6 16.7 28.8 3.7\n> 3.|-- te7-2.per-qv1-bdr1.iinet.net.au 0.0% 10 17.7 17.8 17.1 18.6 0.4\n> 4.|-- te3-0-0.syd-ult-core1.iinet.net.au 0.0% 10 72.9 72.1 71.4 72.9 0.6\n> 5.|-- Bundle-Ether12.chw48.Sydney.telstra.net 0.0% 10 69.3 69.3 68.5 70.3 0.7\n> 6.|-- Bundle-Ether6.chw-core2.Sydney.telstra.net 0.0% 10 73.2 73.7 73.1 76.1 0.9\n> 7.|-- Bundle-Ether1.oxf-gw2.Sydney.telstra.net 0.0% 10 74.5 75.1 72.6 81.9 3.3\n> 8.|-- 203.50.13.102 0.0% 10 70.1 70.5 69.4 72.0 0.8\n> 9.|-- i-10-0-0.syd-core03.bi.reach.com 0.0% 10 75.9 75.8 75.0 76.5 0.5\n> 10.|-- i-0-3-0-0.1wlt-core01.bx.reach.com 0.0% 10 224.6 223.2 222.5 224.6 0.6\n> 11.|-- i-3-4.eqla01.bi.reach.com 0.0% 10 222.2 222.2 221.5 223.4 0.6\n> 12.|-- gblx-peer.eqla01.pr.reach.com 0.0% 10 248.0 247.8 246.9 248.4 0.5\n> 13.|-- 204.245.38.154 20.0% 10 638.2 520.0 465.1 638.2 71.4\n> 14.|-- static.turktelekom.com.tr 10.0% 10 475.1 475.1 471.9 479.8 2.1\n> 15.|-- gayrettepe-t3-1-gayrettepe-t2-1.turktelekom.com.tr 10.0% 10 476.2 475.3 472.0 476.8 1.8\n> 16.|-- 88.255.240.110 10.0% 10 481.9 490.0 480.6 528.8 15.0\n> 17.|-- ??? 100.0 10 0.0 0.0 0.0 0.0 0.0\n> 18.|-- ns1.gunduz.org 20.0% 10 483.5 482.3 476.7 485.0 2.7\n\n--\nCraig Ringer\n", "msg_date": "Sat, 15 Jan 2011 21:36:44 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" }, { "msg_contents": "On 01/15/2011 12:54 AM, Mladen Gogala wrote:\n> Craig Ringer wrote:\n>> On 01/12/2011 10:16 PM, Guillaume Cottenceau wrote:\n>>\n>>> What's your point and in what is it related to that ML?\n>>\n>> Given the package names, I suspect this is a poorly-expressed\n>> complaint about the performance of downloads from the pgdg/psqlrpms\n>> site. If that was the original poster's intent, they would've been\n>> better served with a post that included some minimal details like:\n> Yes, it was a complaint about the download speed.\n\nOK, I'm seeing issues too now. It's transient and intermittent - sigh. \nThe two swear words of IT.\n\n(3/7): postgresql90-9.0.2-2PGDG.f14.x86_64.rpm \n | 865 kB 02:04 \n\nhttp://yum.pgrpms.org/9.0/fedora/fedora-14-x86_64/postgresql90-contrib-9.0.2-2PGDG.f14.x86_64.rpm: \n[Errno 14] PYCURL ERROR 6 - \"\"\nTrying other mirror.\nhttp://yum.pgrpms.org/9.0/fedora/fedora-14-x86_64/postgresql90-libs-9.0.2-2PGDG.f14.x86_64.rpm: \n[Errno 14] PYCURL ERROR 6 - \"\"\nTrying other mirror.\n\n\nA later try worked. Unfortunately I don't have a timestamp for the \nfailed attempt, but it was some time yesterday. mtr output and \nperformance of downloads are presently fine, and I don't have any data \nfor the problematic period.\n\n--\nCraig Ringer\n", "msg_date": "Sun, 16 Jan 2011 14:50:51 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" }, { "msg_contents": "On Wed, 2011-01-12 at 08:49 -0500, Mladen Gogala wrote:\n\n<snip>\n> 7 kilobytes per second??? That brings back the times of the good, old\n> 9600 USR modems and floppy disks. 
\n\nThe machine is serving 40-50 Mbit/sec, and 90% of its traffic is for\npgrpms.org. I'm hosting the server in Turkey, and it is my own dedicated\nmachine -- but eventually it will be moved to a machine under\npostgresql.org infrastructure soon, so it will be faster, I believe.\nSorry for the current setup -- it is the only machine that I can host\nRPMs safely.\n\nWe are also *considering* to use FTP mirrors as RPM mirrors, too, but I\nwon't promise that now.\n\nPlease keep looking at http://yum.pgrpms.org for updates.\n\nRegards,\n\n-- \nDevrim GÜNDÜZ\nPostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer\nPostgreSQL RPM Repository: http://yum.pgrpms.org\nCommunity: devrim~PostgreSQL.org, devrim.gunduz~linux.org.tr\nhttp://www.gunduz.org Twitter: http://twitter.com/devrimgunduz", "msg_date": "Tue, 18 Jan 2011 10:43:00 +0200", "msg_from": "Devrim =?ISO-8859-1?Q?G=DCND=DCZ?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" }, { "msg_contents": "2011/1/18 Devrim GÜNDÜZ <[email protected]>:\n> On Wed, 2011-01-12 at 08:49 -0500, Mladen Gogala wrote:\n>\n> <snip>\n>> 7 kilobytes per second???  That brings back the times of the good, old\n>> 9600 USR modems and floppy disks.\n>\n> The machine is serving 40-50 Mbit/sec, and 90% of its traffic is for\n> pgrpms.org. I'm hosting the server in Turkey, and it is my own dedicated\n> machine -- but eventually it will be moved to a machine under\n> postgresql.org infrastructure soon, so it will be faster, I believe.\n> Sorry for the current setup -- it is the only machine that I can host\n> RPMs safely.\n\nFYI, we've just had more hardware converted to the new infrastructure\nplatform (literally last night), so hopefully we can provision this\nmachine soon.\n\n-- \nDave Page\nBlog: http://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEnterpriseDB UK: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 18 Jan 2011 08:57:15 +0000", "msg_from": "Dave Page <[email protected]>", "msg_from_op": false, "msg_subject": "Re: The good, old times" } ]
[ { "msg_contents": "I was recently asked to look into why one particular set of queries\nwas taking a long time.\nThe queries are all of the same form. They select the UNION of a few\ncolumns from around 100 tables.\nThe query in particular was taking some 7-8 minutes to run.\nOn a whim, I changed the query from this form:\n\nSELECT a, b FROM FOO_a WHERE <conditions>\nUNION\nSELECT a,b FROM FOO_b WHERE <conditions>\n....\n\nto:\n\nSELECT DISTINCT a,b FROM FOO_a WHERE <conditions>\nUNION\nSELECT DISTINCT a,b FROM FOO_b WHERE <conditions>\n\nand the query time dropped to under a minute.\n\nIn the former case, the query plan was a bitmap heap scan for each\ntable. Then those results were Appended, Sorted, Uniqued, Sorted\nagain, and then returned.\n\nIn the latter, before Appending, each table's results were run through\nHashAggregate.\n\nThe total number of result rows is in the 500K range. Each table holds\napproximately 150K matching rows (but this can vary a bit).\n\nWhat I'm asking is this: since adding DISTINCT to each participating\nmember of the UNION query reduced the total number of appended rows,\nis there some sort of heuristic that postgresql could use to do this\nautomatically? The 12x speedup was quite nice.\n\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 09:51:29 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "queries with lots of UNIONed relations" } ]
[ { "msg_contents": "I was recently asked to look into why one particular set of queries\nwas taking a long time. The queries are all of the same form. They\nselect the UNION of a few\ncolumns from around 100 tables.\n\nThe query in particular was taking some 7-8 minutes to run.\n\nOn a whim, I changed the query from this form:\n\nSELECT a, b FROM FOO_a WHERE <conditions>\nUNION\nSELECT a,b FROM FOO_b WHERE <conditions>\n....\n\nto:\n\nSELECT DISTINCT a,b FROM FOO_a WHERE <conditions>\nUNION\nSELECT DISTINCT a,b FROM FOO_b WHERE <conditions>\n...\n\nand the query time dropped to under a minute.\n\nIn the former case, the query plan was a bitmap heap scan for each\ntable. Then those results were Appended, Sorted, Uniqued, Sorted\nagain, and then returned.\n\nIn the latter, before Appending, each table's results were run through\nHashAggregate.\n\nThe total number of result rows is in the 500K range. Each table holds\napproximately 150K matching rows (but this can vary a bit).\n\nWhat I'm asking is this: since adding DISTINCT to each participating\nmember of the UNION query reduced the total number of appended rows,\nis there some sort of heuristic that postgresql could use to do this\nautomatically? The 12x speedup was quite nice.\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 09:55:31 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "queries with lots of UNIONed relations" }, { "msg_contents": "Jon Nelson <[email protected]> writes:\n> In the former case, the query plan was a bitmap heap scan for each\n> table. Then those results were Appended, Sorted, Uniqued, Sorted\n> again, and then returned.\n\n> In the latter, before Appending, each table's results were run through\n> HashAggregate.\n\nProbably the reason it did that is that each individual de-duplication\nlooked like it would fit in work_mem, but a single de-duplication\ndidn't. Consider raising work_mem, at least for this one query.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 12:13:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "On Thu, Jan 13, 2011 at 11:13 AM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> In the former case, the query plan was a bitmap heap scan for each\n>> table. Then those results were Appended, Sorted, Uniqued, Sorted\n>> again, and then returned.\n>\n>> In the latter, before Appending, each table's results were run through\n>> HashAggregate.\n>\n> Probably the reason it did that is that each individual de-duplication\n> looked like it would fit in work_mem, but a single de-duplication\n> didn't.  Consider raising work_mem, at least for this one query.\n\nI raised work_mem to as high as 512MB (SET LOCAL work_mem = '512MB',\nwithin the transaction). Nice. Instead of 7-10 minutes the result is\nnow about a minute (the same as with individual de-duplication).\n\nYour comment regarding \"each individual de-duplication looked like it\nwould fit in work_mem\" doesn't really make sense, exactly. 
Maybe I'm\nmisunderstanding you.\n\nWhat I'm asking is this: can postgresql apply a de-duplication to each\nmember of a UNION (as I did with SELECT DISTINCT) in order to reduce\nthe total number of rows that need to be de-duplicated when all of the\nrows have been Appended?\n\nThe results of the various plans/tweaks are:\n\nInitial state: (work_mem = 16MB, no DISTINCT, run time of 7-10 minutes):\nUnique (Sort (Append ( Lots of Bitmap Heap Scans Here ) ) )\n\nand (work_mem = 16MB, with DISTINCT, run time of ~ 1 minute):\nHashAggregate ( Append ( Lots Of HashAggregate( Bitmap Heap Scan ) ) )\n\nand (work_mem = 64kB, DISTINCT, run time of *15+ minutes*):\nUnique (Sort ( Append ( Lots Of HashAggregate( Bitmap Heap Scan ) ) ) )\n\nSo I take from this the following:\n\n1. if the result set fits in work_mem, hash aggregate is wicked fast.\nAbout 1 jillion times faster than Unique+Sort.\n\n2. it would be nifty if postgresql could be taught that, in a UNION,\nto de-duplicate each contributory relation so as to reduce the total\nset of rows that need to be re-de-duplicated. It's extra work, true,\nand maybe there are some tricks here, but it seems to make a big\ndifference. This is useful so that the total result set is small\nenough that hash aggregate might apply.\n\nNOTE:\n\nI have to have work_mem really low as a global on this machine because\nother queries involving the same tables (such as those that involve\nUNION ALL for SUM() or GROUP BY operations) cause the machine to run\nout of memory. Indeed, even with work_mem at 1MB I run the machine out\nof memory if I don't explicitly disable hashagg for some queries. Can\nanything be done about that?\n\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 13:54:03 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "Jon Nelson <[email protected]> writes:\n> Your comment regarding \"each individual de-duplication looked like it\n> would fit in work_mem\" doesn't really make sense, exactly. Maybe I'm\n> misunderstanding you.\n\nYeah. What I was suggesting was to NOT add the DISTINCT's, but instead\nraise work_mem high enough so you get just one HashAggregation step at\nthe top level. (Although I think this only works in 8.4 and up.)\nThat should be faster than two levels of de-duplication.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 15:05:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "On Thu, Jan 13, 2011 at 2:05 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> Your comment regarding \"each individual de-duplication looked like it\n>> would fit in work_mem\" doesn't really make sense, exactly. Maybe I'm\n>> misunderstanding you.\n>\n> Yeah.  What I was suggesting was to NOT add the DISTINCT's, but instead\n> raise work_mem high enough so you get just one HashAggregation step at\n> the top level.  
(Although I think this only works in 8.4 and up.)\n> That should be faster than two levels of de-duplication.\n\nGave it a try -- performance either way doesn't seem to change -\nalthough the final set that has to undergo de-duplication is rather\nlarger (WITHOUT DISTINCT) so I still run the risk of not getting Hash\nAggregation.\n\nSince having the DISTINCT doesn't seem to hurt, and it avoids\n(potential) significant pain, I'll keep it.\n\nI still think that having UNION do de-duplication of each contributory\nrelation is a beneficial thing to consider -- especially if postgresql\nthinks the uniqueness is not very high.\n\nThanks!\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 14:12:43 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <[email protected]> wrote:\n> I still think that having UNION do de-duplication of each contributory\n> relation is a beneficial thing to consider -- especially if postgresql\n> thinks the uniqueness is not very high.\n\nThis might be worth a TODO.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 13 Jan 2011 16:44:30 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <[email protected]> wrote:\n>> I still think that having UNION do de-duplication of each contributory\n>> relation is a beneficial thing to consider -- especially if postgresql\n>> thinks the uniqueness is not very high.\n\n> This might be worth a TODO.\n\nI don't believe there is any case where hashing each individual relation\nis a win compared to hashing them all together. If the optimizer were\nsmart enough to be considering the situation as a whole, it would always\ndo the latter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 17:26:22 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <[email protected]> wrote:\n>>> I still think that having UNION do de-duplication of each contributory\n>>> relation is a beneficial thing to consider -- especially if postgresql\n>>> thinks the uniqueness is not very high.\n>\n>> This might be worth a TODO.\n>\n> I don't believe there is any case where hashing each individual relation\n> is a win compared to hashing them all together.  If the optimizer were\n> smart enough to be considering the situation as a whole, it would always\n> do the latter.\n\nYou might be right, but I'm not sure. Suppose that there are 100\ninheritance children, and each has 10,000 distinct values, but none of\nthem are common between the tables. In that situation, de-duplicating\neach individual table requires a hash table that can hold 10,000\nentries. 
But deduplicating everything at once requires a hash table\nthat can hold 1,000,000 entries.\n\nOr am I all wet?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 13 Jan 2011 17:41:40 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Thu, Jan 13, 2011 at 5:41 PM, Robert Haas <[email protected]> wrote:\n> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <[email protected]> wrote:\n>> Robert Haas <[email protected]> writes:\n>>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson <[email protected]> wrote:\n>>>> I still think that having UNION do de-duplication of each contributory\n>>>> relation is a beneficial thing to consider -- especially if postgresql\n>>>> thinks the uniqueness is not very high.\n>>\n>>> This might be worth a TODO.\n>>\n>> I don't believe there is any case where hashing each individual relation\n>> is a win compared to hashing them all together.  If the optimizer were\n>> smart enough to be considering the situation as a whole, it would always\n>> do the latter.\n>\n> You might be right, but I'm not sure.  Suppose that there are 100\n> inheritance children, and each has 10,000 distinct values, but none of\n> them are common between the tables.  In that situation, de-duplicating\n> each individual table requires a hash table that can hold 10,000\n> entries.  But deduplicating everything at once requires a hash table\n> that can hold 1,000,000 entries.\n>\n> Or am I all wet?\n\nYeah, I'm all wet, because you'd still have to re-de-duplicate at the\nend. But then why did the OP get a speedup? *scratches head*\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 13 Jan 2011 17:42:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On 1/13/2011 4:42 PM, Robert Haas wrote:\n> On Thu, Jan 13, 2011 at 5:41 PM, Robert Haas<[email protected]> wrote:\n>> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane<[email protected]> wrote:\n>>> Robert Haas<[email protected]> writes:\n>>>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson<[email protected]> wrote:\n>>>>> I still think that having UNION do de-duplication of each contributory\n>>>>> relation is a beneficial thing to consider -- especially if postgresql\n>>>>> thinks the uniqueness is not very high.\n>>>\n>>>> This might be worth a TODO.\n>>>\n>>> I don't believe there is any case where hashing each individual relation\n>>> is a win compared to hashing them all together. If the optimizer were\n>>> smart enough to be considering the situation as a whole, it would always\n>>> do the latter.\n>>\n>> You might be right, but I'm not sure. Suppose that there are 100\n>> inheritance children, and each has 10,000 distinct values, but none of\n>> them are common between the tables. In that situation, de-duplicating\n>> each individual table requires a hash table that can hold 10,000\n>> entries. But deduplicating everything at once requires a hash table\n>> that can hold 1,000,000 entries.\n>>\n>> Or am I all wet?\n>\n> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the\n> end. But then why did the OP get a speedup? 
*scratches head*\n>\n\nBecause it all fix it memory and didnt swap to disk?\n\n-Andy\n", "msg_date": "Thu, 13 Jan 2011 16:47:31 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson <[email protected]> wrote:\n>>>> I don't believe there is any case where hashing each individual relation\n>>>> is a win compared to hashing them all together.  If the optimizer were\n>>>> smart enough to be considering the situation as a whole, it would always\n>>>> do the latter.\n>>>\n>>> You might be right, but I'm not sure.  Suppose that there are 100\n>>> inheritance children, and each has 10,000 distinct values, but none of\n>>> them are common between the tables.  In that situation, de-duplicating\n>>> each individual table requires a hash table that can hold 10,000\n>>> entries.  But deduplicating everything at once requires a hash table\n>>> that can hold 1,000,000 entries.\n>>>\n>>> Or am I all wet?\n>>\n>> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the\n>> end.  But then why did the OP get a speedup?  *scratches head*\n>\n> Because it all fix it memory and didnt swap to disk?\n\nDoesn't make sense. The re-de-duplication at the end should use the\nsame amount of memory regardless of whether the individual relations\nhave already been de-duplicated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 13 Jan 2011 17:49:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On 1/13/2011 4:49 PM, Robert Haas wrote:\n> On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson<[email protected]> wrote:\n>>>>> I don't believe there is any case where hashing each individual relation\n>>>>> is a win compared to hashing them all together. If the optimizer were\n>>>>> smart enough to be considering the situation as a whole, it would always\n>>>>> do the latter.\n>>>>\n>>>> You might be right, but I'm not sure. Suppose that there are 100\n>>>> inheritance children, and each has 10,000 distinct values, but none of\n>>>> them are common between the tables. In that situation, de-duplicating\n>>>> each individual table requires a hash table that can hold 10,000\n>>>> entries. But deduplicating everything at once requires a hash table\n>>>> that can hold 1,000,000 entries.\n>>>>\n>>>> Or am I all wet?\n>>>\n>>> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the\n>>> end. But then why did the OP get a speedup? *scratches head*\n>>\n>> Because it all fix it memory and didnt swap to disk?\n>\n> Doesn't make sense. The re-de-duplication at the end should use the\n> same amount of memory regardless of whether the individual relations\n> have already been de-duplicated.\n>\n\nUnless I missed something in the thread:\n\ndistinctList + distinctList + ... -> [fit in mem] -> last distinct -> \n[fit in mem]\n\nvs.\n\nfullList + fullList + ... 
-> [swapped to disk] -> last distinct -> [fit \nin mem]\n\n\n-Andy\n", "msg_date": "Thu, 13 Jan 2011 16:52:10 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Thu, Jan 13, 2011 at 4:49 PM, Robert Haas <[email protected]> wrote:\n> On Thu, Jan 13, 2011 at 5:47 PM, Andy Colson <[email protected]> wrote:\n>>>>> I don't believe there is any case where hashing each individual relation\n>>>>> is a win compared to hashing them all together.  If the optimizer were\n>>>>> smart enough to be considering the situation as a whole, it would always\n>>>>> do the latter.\n>>>>\n>>>> You might be right, but I'm not sure.  Suppose that there are 100\n>>>> inheritance children, and each has 10,000 distinct values, but none of\n>>>> them are common between the tables.  In that situation, de-duplicating\n>>>> each individual table requires a hash table that can hold 10,000\n>>>> entries.  But deduplicating everything at once requires a hash table\n>>>> that can hold 1,000,000 entries.\n>>>>\n>>>> Or am I all wet?\n>>>\n>>> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the\n>>> end.  But then why did the OP get a speedup?  *scratches head*\n>>\n>> Because it all fix it memory and didnt swap to disk?\n>\n> Doesn't make sense.  The re-de-duplication at the end should use the\n> same amount of memory regardless of whether the individual relations\n> have already been de-duplicated.\n\nI don't believe that to be true.\nAssume 100 tables each of which produces 10,000 rows from this query.\nFurthermore, let's assume that there are 3,000 duplicates per table.\n\nWithout DISTINCT:\nuniqify (100 * 10,000 = 1,000,000 rows)\n\nWith DISTINCT:\nuniqify (100 * (10,000 - 3,000) = 700,000 rows)\n\n300,000 rows times (say, 64 bytes/row) = 18.75MB.\nNot a lot, but more than the work_mem of 16MB.\n\nOr maybe *I'm* all wet?\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 16:53:22 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <[email protected]> wrote:\n>> I don't believe there is any case where hashing each individual relation\n>> is a win compared to hashing them all together. �If the optimizer were\n>> smart enough to be considering the situation as a whole, it would always\n>> do the latter.\n\n> You might be right, but I'm not sure. Suppose that there are 100\n> inheritance children, and each has 10,000 distinct values, but none of\n> them are common between the tables. In that situation, de-duplicating\n> each individual table requires a hash table that can hold 10,000\n> entries. But deduplicating everything at once requires a hash table\n> that can hold 1,000,000 entries.\n\n> Or am I all wet?\n\nIf you have enough memory to de-dup them individually, you surely have\nenough to de-dup all at once. 
It is not possible for a single hashtable\nto have worse memory consumption than N hashtables followed by a union\nhashtable, and in fact if there are no common values then the latter eats\ntwice as much space because every value appears twice in two different\nhashtables.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 18:05:04 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> Yeah, I'm all wet, because you'd still have to re-de-duplicate at the\n> end. But then why did the OP get a speedup? *scratches head*\n\nHe was reporting that 2 levels of hashing was faster than sort+uniq\n(with the sorts swapping to disk, no doubt). One level of hashing\nshould be faster yet, but maybe not by enough to be obvious as long\nas you don't start to swap.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 18:07:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Thu, Jan 13, 2011 at 5:26 PM, Tom Lane <[email protected]> wrote:\n>>> I don't believe there is any case where hashing each individual relation\n>>> is a win compared to hashing them all together.  If the optimizer were\n>>> smart enough to be considering the situation as a whole, it would always\n>>> do the latter.\n>\n>> You might be right, but I'm not sure.  Suppose that there are 100\n>> inheritance children, and each has 10,000 distinct values, but none of\n>> them are common between the tables.  In that situation, de-duplicating\n>> each individual table requires a hash table that can hold 10,000\n>> entries.  But deduplicating everything at once requires a hash table\n>> that can hold 1,000,000 entries.\n>\n>> Or am I all wet?\n>\n> If you have enough memory to de-dup them individually, you surely have\n> enough to de-dup all at once.  It is not possible for a single hashtable\n> to have worse memory consumption than N hashtables followed by a union\n> hashtable, and in fact if there are no common values then the latter eats\n> twice as much space because every value appears twice in two different\n> hashtables.\n\nIf everything were available up-front, sure.\nHowever, and please correct me if I'm wrong, but doesn't postgresql\nwork in a fairly linear fashion, moving from table to table performing\na series of operations on each? 
That seems to indicate that is what\nthe plan is:\n\nCompare:\n\nfor each table LOOP\n scan table for result rows, append to results\nEND LOOP\nhash / sort + unique results\n\nversus:\n\nfor each table LOOP\n scan table for result rows, append to table-results\n hash / sort+unique table-results, append to results\nEND LOOP\nhash / sort + unique results\n\nIn the former case, all of the result rows from all tables are\nappended together before the de-duplification process can start.\n\nIn the latter case, only enough memory for each table's result set is\nnecessary for de-duplification, and it would only be necessary to\nallocate it for that table.\n\nIs that not how this works?\n\n-- \nJon\n", "msg_date": "Thu, 13 Jan 2011 17:14:04 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "Jon Nelson <[email protected]> writes:\n> On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane <[email protected]> wrote:\n>> If you have enough memory to de-dup them individually, you surely have\n>> enough to de-dup all at once.\n\n> If everything were available up-front, sure.\n> However, and please correct me if I'm wrong, but doesn't postgresql\n> work in a fairly linear fashion, moving from table to table performing\n> a series of operations on each?\n\nDoing a single sort+uniq works like that. But the alternate plan you\nare proposing we should consider involves building all the lower\nhashtables, and then reading from them to fill the upper hashtable.\nMax memory consumption *is* worst case here. Remember HashAggregate\nis incapable of swapping to disk (and if it did, you wouldn't be nearly\nas pleased with its performance).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Jan 2011 19:10:27 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations " }, { "msg_contents": "On 1/13/2011 5:41 PM, Robert Haas wrote:\n> You might be right, but I'm not sure. Suppose that there are 100\n> inheritance children, and each has 10,000 distinct values, but none of\n> them are common between the tables. In that situation, de-duplicating\n> each individual table requires a hash table that can hold 10,000\n> entries. But deduplicating everything at once requires a hash table\n> that can hold 1,000,000 entries.\n>\n> Or am I all wet?\n>\n\nHave you considered using Google's map-reduce framework for things like \nthat? Union and group functions look like ideal candidates for such a \nthing. I am not sure whether map-reduce can be married to a relational \ndatabase, but I must say that I was impressed with the speed of MongoDB. \nI am not suggesting that PostgreSQL should sacrifice its ACID compliance \nfor speed, but Mongo sure does look like a speeding bullet.\nOn the other hand, the algorithms that have been paralleled for a long \ntime are precisely sort/merge and hash algorithms used for union and \ngroup by functions. This is what I have in mind:\nhttp://labs.google.com/papers/mapreduce.html\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 13 Jan 2011 22:19:19 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "14.01.11 00:26, Tom Lane написав(ла):\n> Robert Haas<[email protected]> writes:\n>> On Thu, Jan 13, 2011 at 3:12 PM, Jon Nelson<[email protected]> wrote:\n>>> I still think that having UNION do de-duplication of each contributory\n>>> relation is a beneficial thing to consider -- especially if postgresql\n>>> thinks the uniqueness is not very high.\n>> This might be worth a TODO.\n> I don't believe there is any case where hashing each individual relation\n> is a win compared to hashing them all together. If the optimizer were\n> smart enough to be considering the situation as a whole, it would always\n> do the latter.\n>\n>\nHow about cases when individual relations are already sorted? This will \nmean that they can be deduplicated fast and in streaming manner. Even \npartial sort order may help because you will need to deduplicate only \ngroups with equal sorted fields, and this will take much less memory and \nbe much more streaming. And if all individual deduplications are \nstreaming and are sorted in one way - you can simply do a merge on top.\n\nBest regards, Vitalii Tymchyshyn.\n\n", "msg_date": "Fri, 14 Jan 2011 13:39:04 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Thu, Jan 13, 2011 at 6:10 PM, Tom Lane <[email protected]> wrote:\n> Jon Nelson <[email protected]> writes:\n>> On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane <[email protected]> wrote:\n>>> If you have enough memory to de-dup them individually, you surely have\n>>> enough to de-dup all at once.\n>\n>> If everything were available up-front, sure.\n>> However, and please correct me if I'm wrong, but doesn't postgresql\n>> work in a fairly linear fashion, moving from table to table performing\n>> a series of operations on each?\n>\n> Doing a single sort+uniq works like that.  But the alternate plan you\n> are proposing we should consider involves building all the lower\n> hashtables, and then reading from them to fill the upper hashtable.\n> Max memory consumption *is* worst case here.  Remember HashAggregate\n> is incapable of swapping to disk (and if it did, you wouldn't be nearly\n> as pleased with its performance).\n\nThat's not exactly what I'm proposing - but it is probably due to a\nlack of understanding some of the underlying details of how postgresql\nworks. I guess I had assumed that the result of a HashAggregate or any\nother de-duplication process was a table-like structure.\n\nRegarding being pleased with hash aggregate - I am! - except when it\ngoes crazy and eats all of the memory in the machine. 
I'd trade a bit\nof performance loss for not using up all of the memory and crashing.\n\nHowever, maybe I'm misunderstanding how SELECT DISTINCT works internally.\nIn the case where a hashtable is used, does postgresql utilize\ntable-like structure or does it remain a hashtable in memory?\n\nIf it's a hashtable, couldn't the hashtable be built on-the-go rather\nthan only after all of the underlying tuples are available?\n\nI'd love a bit more explanation as to how this works.\n\nAnother example of where this might be useful: I'm currently running\na SELECT DISTINCT query over some 500 million rows (120 contributory\ntables). I expect a de-duplicated row count of well under 10% of that\n500 million, probably below 1%. The plan as it stands is to execute a\nseries of sequential scans, appending each of the rows from each\ncontributory table and then aggregating them. If the expected\ndistinctness of each contributory subquery is, say, 5% then instead of\naggregating over 500 million tuples the aggregation would take place\nover 25 million. In this case, that is a savings of 10 gigabytes,\napproximately.\n\nYes, it's true, the same amount of data has to be scanned. However,\nthe amount of data that needs to be stored (in memory or on disk) in\norder to provide a final de-duplication is much smaller.\n\n-- \nJon\n", "msg_date": "Fri, 14 Jan 2011 14:11:11 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" }, { "msg_contents": "On Fri, Jan 14, 2011 at 2:11 PM, Jon Nelson <[email protected]> wrote:\n> On Thu, Jan 13, 2011 at 6:10 PM, Tom Lane <[email protected]> wrote:\n>> Jon Nelson <[email protected]> writes:\n>>> On Thu, Jan 13, 2011 at 5:05 PM, Tom Lane <[email protected]> wrote:\n>>>> If you have enough memory to de-dup them individually, you surely have\n>>>> enough to de-dup all at once.\n>>\n>>> If everything were available up-front, sure.\n>>> However, and please correct me if I'm wrong, but doesn't postgresql\n>>> work in a fairly linear fashion, moving from table to table performing\n>>> a series of operations on each?\n>>\n>> Doing a single sort+uniq works like that.  But the alternate plan you\n>> are proposing we should consider involves building all the lower\n>> hashtables, and then reading from them to fill the upper hashtable.\n>> Max memory consumption *is* worst case here.  Remember HashAggregate\n>> is incapable of swapping to disk (and if it did, you wouldn't be nearly\n>> as pleased with its performance).\n>\n> That's not exactly what I'm proposing - but it is probably due to a\n> lack of understanding some of the underlying details of how postgresql\n> works. I guess I had assumed that the result of a HashAggregate or any\n> other de-duplication process was a table-like structure.\n\nAnd I assumed wrong, I think. I dug into the code (nodeHash.c and\nothers) and I think I understand now why HashAggregate works only in\ncertain circumstances, and I think I understand your comments a bit\nbetter now. Basically, HashAggregate doesn't stream unique Nodes the\nway nodeUnique.c does. nodeUnique basically emits Nodes and elides\nsubsequent, identical Nodes, which is why it relies on the input being\nsorted. HashAggregate works only on entire input sets at once, and\nnodeHash.c doesn't emit Nodes at all, really.\n\nThis makes me wonder if nodeHash.c and nodeHashjoin.c couldn't be\nmodified to output Nodes in a streaming fashion. 
The memory\nrequirements would not be any worse than now.\n\nDoes postgresql support any sort of merge sort? If it did, then if\nthe hashtable started consuming too much memory, it could be cleared\nand the nodes output from the new hashtable could be directed to\nanother temporary file, and then a merge sort could be performed on\nall of the temporary files (and thus Unique could be used to affect\nthe UNION operation).\n\n-- \nJon\n", "msg_date": "Sat, 15 Jan 2011 12:15:06 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: queries with lots of UNIONed relations" } ]
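The two remedies that worked in this thread can be written down compactly. This is only a sketch: foo_a, foo_b and the predicate stand in for the roughly 100 real tables and their conditions, and the 256MB figure is an assumption to be sized against the real de-duplicated result set.

    -- Option 1: de-duplicate each branch before the final UNION,
    -- so far fewer rows reach the top-level unique step
    SELECT DISTINCT a, b FROM foo_a WHERE c > 0
    UNION
    SELECT DISTINCT a, b FROM foo_b WHERE c > 0;

    -- Option 2: raise work_mem for this query only, so a single
    -- top-level HashAggregate fits, without touching the global setting
    -- (Tom notes the single top-level hash probably needs 8.4 or later;
    -- SET LOCAL lasts until the end of the transaction)
    BEGIN;
    SET LOCAL work_mem = '256MB';
    SELECT a, b FROM foo_a WHERE c > 0
    UNION
    SELECT a, b FROM foo_b WHERE c > 0;
    COMMIT;

Jon reported both forms running in about the same time once work_mem was large enough, so it is cheap to try each against the real tables.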
[ { "msg_contents": "Hi there,\nI have one log file per week and logging all statements >= 500 ms execution\ntime.\nBut, with \"normal\" statements are occuring something like this:\n\n2011-01-13 00:11:38 BRT LOG: duration: 2469.000 ms statement: FETCH 1000\nIN bloboid\n2011-01-13 00:12:01 BRT LOG: duration: 797.000 ms statement: SELECT\ntableoid, oid, nspname, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =\nnspowner) as rolname, nspacl FROM pg_namespace\n2011-01-13 00:12:06 BRT LOG: duration: 766.000 ms statement:\n*COPY*public.log (codlog, matricula, data, descricao, codcurso, ip)\nWITH OIDS\n*TO stdout; *\n2011-01-13 00:12:10 BRT LOG: duration: 2328.000 ms statement: FETCH 1000\nIN bloboid\n2011-01-13 00:12:34 BRT LOG: duration: 594.000 ms statement: SELECT\ntableoid, oid, nspname, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid =\nnspowner) as rolname, nspacl FROM pg_namespace\n2011-01-13 00:12:38 BRT LOG: duration: 672.000 ms statement:\n*COPY*public.avaliacao_topico_opcao (codavaliacao_topico_opcao,\ncodavaliacao_topico, descricao, selecao) WITH OIDS *TO stdout; *\n2011-01-13 00:12:39 BRT LOG: duration: 891.000 ms statement: COPY\npublic.log (codlog, matricula, data, descricao, codcurso, ip) WITH OIDS TO\nstdout;\n\nIs this normal? I'm afraid because my application doesn't run this kind of\nstatement, so how can I know what is doing these commands? Maybe pg_dump?\n\nThank you!\nFernando\n\nHi there,I have one log file per week and logging all statements >= 500 ms execution time.But, with \"normal\" statements are occuring something like this:2011-01-13 00:11:38 BRT LOG:  duration: 2469.000 ms  statement: FETCH 1000 IN bloboid\n2011-01-13 00:12:01 BRT LOG:  duration: 797.000 ms  statement: SELECT tableoid, oid, nspname, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = nspowner) as rolname, nspacl FROM pg_namespace\n2011-01-13 00:12:06 BRT LOG:  duration: 766.000 ms  statement: COPY public.log (codlog, matricula, data, descricao, codcurso, ip) WITH OIDS TO stdout;\n2011-01-13 00:12:10 BRT LOG:  duration: 2328.000 ms  statement: FETCH 1000 IN bloboid\n2011-01-13 00:12:34 BRT LOG:  duration: 594.000 ms  statement: SELECT tableoid, oid, nspname, (SELECT rolname FROM pg_catalog.pg_roles WHERE oid = nspowner) as rolname, nspacl FROM pg_namespace\n2011-01-13 00:12:38 BRT LOG:  duration: 672.000 ms  statement: COPY public.avaliacao_topico_opcao (codavaliacao_topico_opcao, codavaliacao_topico, descricao, selecao) WITH OIDS TO stdout;\n2011-01-13 00:12:39 BRT LOG:  duration: 891.000 ms  statement: COPY public.log (codlog, matricula, data, descricao, codcurso, ip) WITH OIDS TO stdout;\nIs this normal? I'm afraid because my application doesn't run this kind of statement, so how can I know what is doing these commands? Maybe pg_dump?\nThank you!Fernando", "msg_date": "Fri, 14 Jan 2011 17:42:23 -0200", "msg_from": "Fernando Mertins <[email protected]>", "msg_from_op": true, "msg_subject": "\"COPY TO stdout\" statements occurrence in log files" }, { "msg_contents": "> Is this normal? I'm afraid because my application doesn't run this kind of\n> statement, so how can I know what is doing these commands? Maybe pg_dump?\n\nI think pg_dump is likely, yes, if you have that scheduled. 
I don't\nthink anything in the log file will identify it as pg_dump explicitly\n(I believe as far as the server is concerned, pg_dump is just another\nclient), but if you're concerned about this, you can add the client\npid (%p) to log_line_prefix in postgresql.conf, log the pg_dump pid\nthrough whatever mechanism manages that, and compare.\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\[email protected]\nwww.truviso.com\n", "msg_date": "Fri, 14 Jan 2011 12:27:02 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"COPY TO stdout\" statements occurrence in log files" }, { "msg_contents": "[email protected] (Maciek Sakrejda) writes:\n>> Is this normal? I'm afraid because my application doesn't run this kind of\n>> statement, so how can I know what is doing these commands? Maybe pg_dump?\n>\n> I think pg_dump is likely, yes, if you have that scheduled. I don't\n> think anything in the log file will identify it as pg_dump explicitly\n> (I believe as far as the server is concerned, pg_dump is just another\n> client), but if you're concerned about this, you can add the client\n> pid (%p) to log_line_prefix in postgresql.conf, log the pg_dump pid\n> through whatever mechanism manages that, and compare.\n\nThat's an option... More are possible...\n\n1. Our DBAs have been known to create users specifically for doing\nbackups (\"dumpy\"). It doesn't seem like a *huge* proliferation of users\nto have some 'utility' user names for common processes.\n\n2. In 9.1, there will be a new answer, as there's a GUC to indicate the\n\"application_name\".\n-- \n\"Programming today is a race between software engineers striving to\nbuild bigger and better idiot-proof programs, and the Universe trying\nto produce bigger and better idiots. So far, the Universe is\nwinning.\" -- Rich Cook\n", "msg_date": "Fri, 14 Jan 2011 16:19:42 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"COPY TO stdout\" statements occurrence in log files" }, { "msg_contents": "On Fri, Jan 14, 2011 at 23:19, Chris Browne <[email protected]> wrote:\n> 2.  In 9.1, there will be a new answer, as there's a GUC to indicate the\n> \"application_name\".\n\nActually this was already introduced in PostgreSQL 9.0 :)\n\nYou can add application_name to your log_line_prefix with %a. For\npg_dump it will display \"pg_dump\"\n\nRegards,\nMarti\n", "msg_date": "Sat, 15 Jan 2011 00:42:43 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"COPY TO stdout\" statements occurrence in log files" } ]
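Putting the suggestions above together, here is a sketch of making the dump connection identify itself both in the log and in the activity view. The prefix string is only an example, and the pg_stat_activity column names are the 9.0/9.1 ones (they became pid and query in later releases).

    -- In postgresql.conf, include %a so each log line carries the
    -- client's application_name; pg_dump reports itself as "pg_dump":
    --     log_line_prefix = '%t [%p] %a '

    -- While a dump is running, the same name is visible here:
    SELECT procpid, usename, application_name, current_query
      FROM pg_stat_activity;

With that prefix in place, the COPY ... TO stdout lines above would show up tagged as pg_dump, which settles where they come from.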
[ { "msg_contents": "I'm using 8.3 and I have a table that contains many revisions of the same\nentity and I have a query that is super slow. Please help! I'm going to\npaste in some SQL to set up test cases and some plans below. If that isn't\nthe right way to post to this list please let me know and I'll revise.\n\nMy table looks kind of like this but wider:\nCREATE TEMPORARY TABLE test (revision SERIAL NOT NULL PRIMARY KEY, a INTEGER\nNOT NULL, b INTEGER NOT NULL, c INTEGER NOT NULL);\nINSERT INTO test (a, b, c) SELECT a, 1, 25 FROM generate_series(1, 100000)\nAS t1(a), generate_series(1, 10) as t2(b);\nCREATE INDEX test_a ON test (a);\nANALYZE test;\n\nI need to SELECT all the columns with the latest revision for a subset of\nAs. What is the right way to do this quickly?\n\nWhen I do it like this:\nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL);\nINSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);\nANALYZE request;\nSELECT *\n FROM request\n JOIN (SELECT a, MAX(b) as b FROM test GROUP BY a) max USING (a)\n JOIN test USING (a, b);\nDROP TABLE request;\n\nThe plan for the SELECT is pretty bad:\n\"Hash Join (cost=32792.50..77907.29 rows=62288 width=20) (actual\ntime=769.570..2222.050 rows=199 loops=1)\"\n\" Hash Cond: ((max(pg_temp_7.test.revision)) = pg_temp_7.test.revision)\"\n\" -> Hash Join (cost=5.48..38659.23 rows=62288 width=8) (actual\ntime=20.621..830.235 rows=199 loops=1)\"\n\" Hash Cond: (pg_temp_7.test.a = request.a)\"\n\" -> GroupAggregate (cost=0.00..37170.11 rows=62601 width=8)\n(actual time=16.847..808.475 rows=100000 loops=1)\"\n\" -> Index Scan using test_a on test (cost=0.00..31388.04\nrows=999912 width=8) (actual time=16.826..569.035 rows=1000000 loops=1)\"\n\" -> Hash (cost=2.99..2.99 rows=199 width=4) (actual\ntime=3.736..3.736 rows=199 loops=1)\"\n\" -> Seq Scan on request (cost=0.00..2.99 rows=199 width=4)\n(actual time=3.658..3.689 rows=199 loops=1)\"\n\" -> Hash (cost=15405.12..15405.12 rows=999912 width=16) (actual\ntime=723.673..723.673 rows=1000000 loops=1)\"\n\" -> Seq Scan on test (cost=0.00..15405.12 rows=999912 width=16)\n(actual time=0.006..290.313 rows=1000000 loops=1)\"\n\"Total runtime: 2222.267 ms\"\n\nIf I instead issue the query as:\nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL, revision INTEGER);\nINSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);\nUPDATE request SET revision = (SELECT MAX(revision) FROM test WHERE\nrequest.a = test.a);\nANALYZE request;\nSELECT *\n FROM request\n JOIN test USING (revision)\nDROP TABLE request;\n\nThe whole thing runs tons faster. The UPDATE uses the right index and is\nway sub second and the SELECT's plan is fine:\n\"Merge Join (cost=11.66..76.09 rows=199 width=20) (actual time=0.131..0.953\nrows=199 loops=1)\"\n\" Merge Cond: (test.revision = request.revision)\"\n\" -> Index Scan using test_pkey on test (cost=0.00..31388.04 rows=999912\nwidth=16) (actual time=0.017..0.407 rows=2001 loops=1)\"\n\" -> Sort (cost=11.59..12.09 rows=199 width=8) (actual time=0.102..0.133\nrows=199 loops=1)\"\n\" Sort Key: request.revision\"\n\" Sort Method: quicksort Memory: 34kB\"\n\" -> Seq Scan on request (cost=0.00..3.99 rows=199 width=8) (actual\ntime=0.020..0.050 rows=199 loops=1)\"\n\"Total runtime: 1.005 ms\"\n\nAm I missing something or is this really the best way to do this in 8.3?\n\nThanks for slogging through all this,\n\nNik Everett\n\nI'm using 8.3 and I have a table that contains many revisions of the same entity and I have a query that is super slow.  Please help!  
I'm going to paste in some SQL to set up test cases and some plans below.  If that isn't the right way to post to this list please let me know and I'll revise.\nMy table looks kind of like this but wider:CREATE TEMPORARY TABLE test (revision SERIAL NOT NULL PRIMARY KEY, a INTEGER NOT NULL, b INTEGER NOT NULL, c INTEGER NOT NULL);INSERT INTO test (a, b, c) SELECT a, 1, 25 FROM generate_series(1, 100000) AS t1(a), generate_series(1, 10) as t2(b);\nCREATE INDEX test_a ON test (a);ANALYZE test;I need to SELECT all the columns with the latest revision for a subset of As.  What is the right way to do this quickly?\nWhen I do it like this:CREATE TEMPORARY TABLE request (a INTEGER NOT NULL);INSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);ANALYZE request;SELECT *\n  FROM request  JOIN (SELECT a, MAX(b) as b FROM test GROUP BY a) max USING (a)  JOIN test USING (a, b);DROP TABLE request;The plan for the SELECT is pretty bad:\n\"Hash Join  (cost=32792.50..77907.29 rows=62288 width=20) (actual time=769.570..2222.050 rows=199 loops=1)\"\"  Hash Cond: ((max(pg_temp_7.test.revision)) = pg_temp_7.test.revision)\"\n\"  ->  Hash Join  (cost=5.48..38659.23 rows=62288 width=8) (actual time=20.621..830.235 rows=199 loops=1)\"\"        Hash Cond: (pg_temp_7.test.a = request.a)\"\"        ->  GroupAggregate  (cost=0.00..37170.11 rows=62601 width=8) (actual time=16.847..808.475 rows=100000 loops=1)\"\n\"              ->  Index Scan using test_a on test  (cost=0.00..31388.04 rows=999912 width=8) (actual time=16.826..569.035 rows=1000000 loops=1)\"\"        ->  Hash  (cost=2.99..2.99 rows=199 width=4) (actual time=3.736..3.736 rows=199 loops=1)\"\n\"              ->  Seq Scan on request  (cost=0.00..2.99 rows=199 width=4) (actual time=3.658..3.689 rows=199 loops=1)\"\"  ->  Hash  (cost=15405.12..15405.12 rows=999912 width=16) (actual time=723.673..723.673 rows=1000000 loops=1)\"\n\"        ->  Seq Scan on test  (cost=0.00..15405.12 rows=999912 width=16) (actual time=0.006..290.313 rows=1000000 loops=1)\"\"Total runtime: 2222.267 ms\"If I instead issue the query as:\nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL, revision INTEGER);INSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);UPDATE request SET revision = (SELECT MAX(revision) FROM test WHERE request.a = test.a);\nANALYZE request;SELECT *  FROM request  JOIN test USING (revision)DROP TABLE request;The whole thing runs tons faster.  
The UPDATE uses the right index and is way sub second and the SELECT's plan is fine:\n\"Merge Join  (cost=11.66..76.09 rows=199 width=20) (actual time=0.131..0.953 rows=199 loops=1)\"\"  Merge Cond: (test.revision = request.revision)\"\"  ->  Index Scan using test_pkey on test  (cost=0.00..31388.04 rows=999912 width=16) (actual time=0.017..0.407 rows=2001 loops=1)\"\n\"  ->  Sort  (cost=11.59..12.09 rows=199 width=8) (actual time=0.102..0.133 rows=199 loops=1)\"\"        Sort Key: request.revision\"\"        Sort Method:  quicksort  Memory: 34kB\"\n\"        ->  Seq Scan on request  (cost=0.00..3.99 rows=199 width=8) (actual time=0.020..0.050 rows=199 loops=1)\"\"Total runtime: 1.005 ms\"Am I missing something or is this really the best way to do this in 8.3?\nThanks for slogging through all this,Nik Everett", "msg_date": "Fri, 14 Jan 2011 16:17:28 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Best way to get the latest revision from a table" }, { "msg_contents": "Nikolas Everett <[email protected]> wrote:\n \n> Am I missing something or is this really the best way to do this in\n8.3?\n \nHow about this?:\n \nSELECT y.*\n from (select a, max(revision) as revision\n from test where a between 2 and 200\n group by a) x\n join test y using (a, revision);\n \n-Kevin\n", "msg_date": "Fri, 14 Jan 2011 16:30:49 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "On Fri, Jan 14, 2011 at 5:30 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> SELECT y.*\n> from (select a, max(revision) as revision\n> from test where a between 2 and 200\n> group by a) x\n> join test y using (a, revision);\n\n\nWhile certainly simpler than my temp table this really just exposes a flaw\nin my example - I'm really going to be doing this with an arbitrary list of\nAs.\n\nOn Fri, Jan 14, 2011 at 5:30 PM, Kevin Grittner <[email protected]> wrote:\nSELECT y.*\n  from (select a, max(revision) as revision\n          from test where a between 2 and 200\n          group by a) x\n  join test y using (a, revision);While certainly simpler than my temp table this really just exposes a flaw in my example - I'm really going to be doing this with an arbitrary list of As.", "msg_date": "Fri, 14 Jan 2011 17:40:08 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "Nikolas Everett <[email protected]> wrote:\n \n> I'm really going to be doing this with an arbitrary list of As.\n \nOK, how about this?:\n \nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL);\nINSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);\nANALYZE request;\nSELECT y.*\n from (select a, max(revision) as revision\n from test join request using (a)\n group by a) x\n join test y using (a, revision);\nDROP TABLE request;\n \n-Kevin\n", "msg_date": "Fri, 14 Jan 2011 16:57:11 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "On 01/14/2011 03:17 PM, Nikolas Everett wrote:\n\n> SELECT *\n> FROM request\n> JOIN (SELECT a, MAX(b) as b FROM test GROUP BY a) max USING (a)\n> JOIN test USING (a, b);\n\nThis actually looks like a perfect candidate for DISTINCT ON.\n\nSELECT DISTINCT ON (a, b) a, b, revision\n FROM test\n ORDER BY a, b DESC;\n\nMaybe I'm 
just misunderstanding your situation, though.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 14 Jan 2011 17:06:48 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "Shaun Thomas <[email protected]> wrote:\n \n> This actually looks like a perfect candidate for DISTINCT ON.\n> \n> SELECT DISTINCT ON (a, b) a, b, revision\n> FROM test\n> ORDER BY a, b DESC;\n \nI wouldn't say perfect. It runs about eight times slower than what\nI suggested and returns a fairly random value for revision instead\nof the max(revision).\n \n-Kevin\n", "msg_date": "Fri, 14 Jan 2011 17:33:27 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Shaun Thomas <[email protected]> wrote:\n>> This actually looks like a perfect candidate for DISTINCT ON.\n>> \n>> SELECT DISTINCT ON (a, b) a, b, revision\n>> FROM test\n>> ORDER BY a, b DESC;\n \n> I wouldn't say perfect. It runs about eight times slower than what\n> I suggested and returns a fairly random value for revision instead\n> of the max(revision).\n\nShaun's example is a bit off: normally, when using DISTINCT ON, you want\nan ORDER BY key that uses all the given DISTINCT keys and then some\nmore. To get the max revision for each a/b combination it ought to be\n\nSELECT DISTINCT ON (a, b) a, b, revision\n FROM test\n ORDER BY a, b, revision DESC;\n\nAs for speed, either one might be faster in a particular situation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 14 Jan 2011 18:40:43 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Shaun's example is a bit off\n \n> As for speed, either one might be faster in a particular\n> situation.\n \nAfter fixing a mistake in my testing and learning from Tom's example\nI generated queries against the OP's test data which produce\nidentical results, and I'm finding no significant difference between\nrun times for the two versions. 
The OP should definitely try both\nagainst the real tables.\n \nHere are the queries which run against the test set:\n \nDROP TABLE IF EXISTS request;\nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL);\nINSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);\nANALYZE request;\nSELECT y.*\n from (select a, max(revision) as revision\n from test join request using (a)\n group by a) x\n join test y using (a, revision)\n order by a, revision DESC;\n \nDROP TABLE IF EXISTS request;\nCREATE TEMPORARY TABLE request (a INTEGER NOT NULL);\nINSERT INTO request SELECT a FROM generate_series(2, 200) AS t(a);\nANALYZE request;\nSELECT DISTINCT ON (a, b, c) revision, a, b, c\n FROM test join request using (a)\n ORDER BY a, b, c, revision DESC;\n \nSorry for not sorting it out better initially.\n \n-Kevin\n", "msg_date": "Fri, 14 Jan 2011 18:59:50 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "On Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Tom Lane <[email protected]> wrote:\n>\n> > Shaun's example is a bit off\n>\n> > As for speed, either one might be faster in a particular\n> > situation.\n>\n> After fixing a mistake in my testing and learning from Tom's example\n> I generated queries against the OP's test data which produce\n> identical results, and I'm finding no significant difference between\n> run times for the two versions. The OP should definitely try both\n> against the real tables.\n>\n> <snip>\n\n> -Kevin\n>\n\nAfter trying both against the real tables DISTINCT ON seems to be about two\norders of magnitude faster than the other options.\n\nThanks so much!\n\nNik Everett\n\nOn Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner <[email protected]> wrote:\nTom Lane <[email protected]> wrote:\n\n> Shaun's example is a bit off\n\n> As for speed, either one might be faster in a particular\n> situation.\n\nAfter fixing a mistake in my testing and learning from Tom's example\nI generated queries against the OP's test data which produce\nidentical results, and I'm finding no significant difference between\nrun times for the two versions.  The OP should definitely try both\nagainst the real tables.<snip> \n-Kevin\nAfter trying both against the real tables DISTINCT ON seems to be about two orders of magnitude faster than the other options.Thanks so much!\nNik Everett", "msg_date": "Fri, 14 Jan 2011 20:50:02 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "\n> Shaun's example is a bit off: normally, when using DISTINCT ON, you want\n> an ORDER BY key that uses all the given DISTINCT keys and then some\n> more. To get the max revision for each a/b combination it ought to be\n\nHah, well i figured I was doing something wrong. I just thought about it a little bit, said to myself: \"Hey, I've used this before to get the most recent x for a bunch of y without a sub-query. We always used it to get the newest update to an event log.\n\nBut that's why I said I was probably misunderstanding something. :) I was trying to pick apart the logic to his temp tables and saw the max(b) and it threw me off. Glad you're around to set it straight. 
Heh.\n\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Sat, 15 Jan 2011 13:54:29 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table " }, { "msg_contents": "\n> After trying both against the real tables DISTINCT ON seems to be\n> about two orders of magnitude faster than the other options.\n\nGlad it worked. It looked at least naively similar to situations I've run into and DISTINCT ON always helped me out. It's all the fun of GROUP BY with the ability to discard non-aggregate results just by screwing around with your sorting. Still one of my favorite tricks.\n\n--\nShaun Thomas\nPeak6 | 141 W. Jackson Blvd. | Suite 800 | Chicago, IL 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Sat, 15 Jan 2011 13:58:45 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "On Fri, Jan 14, 2011 at 8:50 PM, Nikolas Everett <[email protected]> wrote:\n>\n>\n> On Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner\n> <[email protected]> wrote:\n>>\n>> Tom Lane <[email protected]> wrote:\n>>\n>> > Shaun's example is a bit off\n>>\n>> > As for speed, either one might be faster in a particular\n>> > situation.\n>>\n>> After fixing a mistake in my testing and learning from Tom's example\n>> I generated queries against the OP's test data which produce\n>> identical results, and I'm finding no significant difference between\n>> run times for the two versions.  The OP should definitely try both\n>> against the real tables.\n>>\n> <snip>\n>>\n>> -Kevin\n>\n> After trying both against the real tables DISTINCT ON seems to be about two\n> orders of magnitude faster than the other options.\n\nWhat I've often done in these situations is add a Boolean to the table\nthat defaults to true, and an ON INSERT trigger that flips the Boolean\nfor any existing row with the same key to false. Then you can just do\nsomething like \"SELECT * FROM tab WHERE latest\". And you can create\npartial indexes etc: CREATE INDEX foo ON tab (a) WHERE latest.\n\nAlthough if using DISTINCT ON is working, no reason to do anything\nmore complicated.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 21 Jan 2011 12:13:57 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Best way to get the latest revision from a table" }, { "msg_contents": "Distinct on is working really well! 
If I need to be able to index something\nI might start thinking along those lines.\n\nOn Fri, Jan 21, 2011 at 12:13 PM, Robert Haas <[email protected]> wrote:\n\n> On Fri, Jan 14, 2011 at 8:50 PM, Nikolas Everett <[email protected]>\n> wrote:\n> >\n> >\n> > On Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner\n> > <[email protected]> wrote:\n> >>\n> >> Tom Lane <[email protected]> wrote:\n> >>\n> >> > Shaun's example is a bit off\n> >>\n> >> > As for speed, either one might be faster in a particular\n> >> > situation.\n> >>\n> >> After fixing a mistake in my testing and learning from Tom's example\n> >> I generated queries against the OP's test data which produce\n> >> identical results, and I'm finding no significant difference between\n> >> run times for the two versions. The OP should definitely try both\n> >> against the real tables.\n> >>\n> > <snip>\n> >>\n> >> -Kevin\n> >\n> > After trying both against the real tables DISTINCT ON seems to be about\n> two\n> > orders of magnitude faster than the other options.\n>\n> What I've often done in these situations is add a Boolean to the table\n> that defaults to true, and an ON INSERT trigger that flips the Boolean\n> for any existing row with the same key to false. Then you can just do\n> something like \"SELECT * FROM tab WHERE latest\". And you can create\n> partial indexes etc: CREATE INDEX foo ON tab (a) WHERE latest.\n>\n> Although if using DISTINCT ON is working, no reason to do anything\n> more complicated.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n\nDistinct on is working really well!  If I need to be able to index something I might start thinking along those lines.On Fri, Jan 21, 2011 at 12:13 PM, Robert Haas <[email protected]> wrote:\nOn Fri, Jan 14, 2011 at 8:50 PM, Nikolas Everett <[email protected]> wrote:\n\n\n>\n>\n> On Fri, Jan 14, 2011 at 7:59 PM, Kevin Grittner\n> <[email protected]> wrote:\n>>\n>> Tom Lane <[email protected]> wrote:\n>>\n>> > Shaun's example is a bit off\n>>\n>> > As for speed, either one might be faster in a particular\n>> > situation.\n>>\n>> After fixing a mistake in my testing and learning from Tom's example\n>> I generated queries against the OP's test data which produce\n>> identical results, and I'm finding no significant difference between\n>> run times for the two versions.  The OP should definitely try both\n>> against the real tables.\n>>\n> <snip>\n>>\n>> -Kevin\n>\n> After trying both against the real tables DISTINCT ON seems to be about two\n> orders of magnitude faster than the other options.\n\nWhat I've often done in these situations is add a Boolean to the table\nthat defaults to true, and an ON INSERT trigger that flips the Boolean\nfor any existing row with the same key to false.  Then you can just do\nsomething like \"SELECT * FROM tab WHERE latest\".  And you can create\npartial indexes etc: CREATE INDEX foo ON tab (a) WHERE latest.\n\nAlthough if using DISTINCT ON is working, no reason to do anything\nmore complicated.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 21 Jan 2011 13:55:15 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Best way to get the latest revision from a table" } ]
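Robert Haas's "latest flag" idea in the thread above is described only in prose, so here is a minimal sketch of what it might look like. All object names are invented for illustration, the key is assumed to be a single integer column a, and concurrent inserts of the same key would need extra locking, so treat this as a starting point rather than the thread's tested solution (that was DISTINCT ON).

CREATE TABLE revisions (
    a        integer NOT NULL,
    revision integer NOT NULL,
    payload  text,
    latest   boolean NOT NULL DEFAULT true
);

CREATE OR REPLACE FUNCTION revisions_clear_latest() RETURNS trigger AS $$
BEGIN
    -- Supersede any existing "latest" row for the same key before the new row lands.
    UPDATE revisions SET latest = false WHERE a = NEW.a AND latest;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER revisions_latest_trg
    BEFORE INSERT ON revisions
    FOR EACH ROW EXECUTE PROCEDURE revisions_clear_latest();

-- Partial index so that "WHERE latest" lookups stay small and cheap.
CREATE INDEX revisions_latest_idx ON revisions (a) WHERE latest;

With this in place the newest revision per key is simply SELECT * FROM revisions WHERE latest, at the cost of one extra UPDATE per insert.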
[ { "msg_contents": "Hi there,\n\nCould someone please tell me why the following query won't work\n\nselect DISTINCT get_unit(unit) as unit, get_ingredient(ing) as ing, \nget_ing_aisle(1,ing) as aisle \n\nfrom recipe_ing where recipe in(1084, 1086, 1012, 618) and qtydec>0 and ing not \nin(select ing from excluded_ing where owner=1) \n\norder by aisle\n\nthe query returns the following with no values for aisle, but there should be \nsome.\n\n\"c\";\"Pumpkin Seeds\";\"\"\n\"tb\";\"Horseradish\";\"\"\n\"c\";\"Puffed Quinoa\";\"\"\n\"c\";\"Honey\";\"\"\n\"c\";\"sesame seeds\";\"\"\n\"\";\"Red Onion\";\"\"\n\"ts\";\"Spicy Mustard\";\"\"\n\"c\";\"Dry Oatmeal\";\"\"\n\"c\";\"Ketchup\";\"\"\n\"ts\";\"Pepper\";\"\"\n\"tb\";\"Brown Sugar\";\"\"\n\"c\";\"Pecans\";\"\"\n\"ts\";\"Dijon Mustard\";\"\"\n\"single\";\"Cadbury Flake Bar\";\"\"\n\"g\";\"Caster Sugar\";\"\"\n\"g\";\"Low-fat Mozzarella Cheese\";\"\"\n\"md\";\"Onion\";\"\"\n\"sm\";\"Whole-wheat Pita\";\"\"\n\"medium\";\"Lemon\";\"\"\n\"c\";\"Raisins\";\"\"\n\"c\";\"Almonds\";\"\"\n\"c\";\"Dates\";\"\"\n\"g\";\"Ham\";\"\"\n\"lb\";\"Ground Sirloin\";\"\"\n\"c\";\"Shredded Coconut\";\"\"\n\"c\";\"Sunflower Seeds\";\"\"\n\"\";\"Tomato\";\"\"\n\nThe function used to extract aisle is\n\nCREATE OR REPLACE FUNCTION get_ing_aisle(bigint, bigint)\n RETURNS character AS\n'SELECT get_aisle(aisle) as aisle FROM ingredient_owner WHERE ingredient=$1 and \nowner=$2'\n LANGUAGE 'sql' VOLATILE\n COST 100;\n\n Cheers\nBarb\n\n\n\n \nHi there,Could someone please tell me why the following query won't workselect DISTINCT get_unit(unit) as unit, get_ingredient(ing) as ing, get_ing_aisle(1,ing) as aisle from recipe_ing where recipe in(1084, 1086, 1012, 618) and qtydec>0 and ing not in(select ing from excluded_ing where owner=1) order by aislethe query returns the following with no values for aisle, but there should be some.\"c\";\"Pumpkin Seeds\";\"\"\"tb\";\"Horseradish\";\"\"\"c\";\"Puffed Quinoa\";\"\"\"c\";\"Honey\";\"\"\"c\";\"sesame seeds\";\"\"\"\";\"Red Onion\";\"\"\"ts\";\"Spicy Mustard\";\"\"\"c\";\"Dry Oatmeal\";\"\"\"c\";\"Ketchup\";\"\"\"ts\";\"Pepper\";\"\"\"tb\";\"Brown Sugar\";\"\"\"c\";\"Pecans\";\"\"\"ts\";\"Dijon Mustard\";\"\"\"single\";\"Cadbury Flake Bar\";\"\"\"g\";\"Caster\n Sugar\";\"\"\"g\";\"Low-fat Mozzarella Cheese\";\"\"\"md\";\"Onion\";\"\"\"sm\";\"Whole-wheat Pita\";\"\"\"medium\";\"Lemon\";\"\"\"c\";\"Raisins\";\"\"\"c\";\"Almonds\";\"\"\"c\";\"Dates\";\"\"\"g\";\"Ham\";\"\"\"lb\";\"Ground Sirloin\";\"\"\"c\";\"Shredded Coconut\";\"\"\"c\";\"Sunflower Seeds\";\"\"\"\";\"Tomato\";\"\"The function used to extract aisle isCREATE OR REPLACE FUNCTION get_ing_aisle(bigint, bigint)  RETURNS character AS'SELECT get_aisle(aisle) as aisle FROM ingredient_owner WHERE ingredient=$1 and owner=$2'  LANGUAGE 'sql' VOLATILE  COST 100; CheersBarb", "msg_date": "Sat, 15 Jan 2011 09:56:27 -0800 (PST)", "msg_from": "Barbara Woolums <[email protected]>", "msg_from_op": true, "msg_subject": "Problem with query" }, { "msg_contents": "On Sat, Jan 15, 2011 at 14:56, Barbara Woolums <[email protected]>wrote:\n\n> Hi there,\n>\n> Could someone please tell me why the following query won't work\n>\n> select DISTINCT get_unit(unit) as unit, get_ingredient(ing) as ing,\n> get_ing_aisle(1,ing) as aisle\n> from recipe_ing where recipe in(1084, 1086, 1012, 618) and qtydec>0 and ing\n> not in(select ing from excluded_ing where owner=1)\n> order by aisle\n>\n>\n\nYou seem to be mixing up the parameters for get_ing_aisle.\nTry *get_ing_aisle(ing,1) as aisle* 
instead\n\n\nOn Sat, Jan 15, 2011 at 14:56, Barbara Woolums <[email protected]> wrote:\n\n\nHi there,Could someone please tell me why the following query won't workselect DISTINCT get_unit(unit) as unit, get_ingredient(ing) as ing, get_ing_aisle(1,ing) as aisle \nfrom recipe_ing where recipe in(1084, 1086, 1012, 618) and qtydec>0 and ing not in(select ing from excluded_ing where owner=1) order by aisle\n \n \nYou seem to be mixing up the parameters for get_ing_aisle.\nTry get_ing_aisle(ing,1) as aisle instead", "msg_date": "Mon, 17 Jan 2011 20:42:44 -0300", "msg_from": "Fernando Hevia <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Problem with query" } ]
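Fernando's swapped-argument fix is all that is strictly needed here, but one hedged way to make this kind of mistake harder is to give the function named parameters and call it with named notation (available from PostgreSQL 9.0; the parameter names below are invented):

CREATE OR REPLACE FUNCTION get_ing_aisle(p_ingredient bigint, p_owner bigint)
  RETURNS character AS
'SELECT get_aisle(aisle) FROM ingredient_owner WHERE ingredient = $1 AND owner = $2'
  LANGUAGE sql STABLE;

-- The call site then documents itself and cannot silently swap the arguments:
-- SELECT get_ing_aisle(p_ingredient := ing, p_owner := 1) ...

Declaring it STABLE instead of the VOLATILE used above is an assumption that the lookup never modifies data; if that holds, it also lets the planner cache and optimize repeated calls.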
[ { "msg_contents": "Hi all,\n\nI've come to a dead end in trying to get a commonly used query to perform better. The query is against one table with 10 million rows. This table has been analysed. The table definition is:\n\nCREATE TABLE version_crs_coordinate_revision\n(\n _revision_created integer NOT NULL,\n _revision_expired integer,\n id integer NOT NULL,\n cos_id integer NOT NULL,\n nod_id integer NOT NULL,\n ort_type_1 character varying(4),\n ort_type_2 character varying(4),\n ort_type_3 character varying(4),\n status character varying(4) NOT NULL,\n sdc_status character(1) NOT NULL,\n source character varying(4),\n value1 numeric(22,12),\n value2 numeric(22,12),\n value3 numeric(22,12),\n wrk_id_created integer,\n cor_id integer,\n audit_id integer NOT NULL,\n CONSTRAINT pkey_version_crs_coordinate_revision PRIMARY KEY (_revision_created, id),\n CONSTRAINT version_crs_coordinate_revision_revision_created_fkey FOREIGN KEY (_revision_created)\n REFERENCES revision (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION,\n CONSTRAINT version_crs_coordinate_revision_revision_expired_fkey FOREIGN KEY (_revision_expired)\n REFERENCES revision (id) MATCH SIMPLE\n ON UPDATE NO ACTION ON DELETE NO ACTION\n)\nWITH (\n OIDS=FALSE\n);\nALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN _revision_created SET STATISTICS 1000;\nALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN _revision_expired SET STATISTICS 1000;\nALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN id SET STATISTICS 1000;\n\nCREATE INDEX idx_crs_coordinate_revision_created ON \"version\".version_crs_coordinate_revision USING btree (_revision_created);\nCREATE INDEX idx_crs_coordinate_revision_created_expired ON \"version\".version_crs_coordinate_revision USING btree (_revision_created, _revision_expired);\nCREATE INDEX idx_crs_coordinate_revision_expired ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired);\nCREATE INDEX idx_crs_coordinate_revision_expired_created ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired, _revision_created);\nCREATE INDEX idx_crs_coordinate_revision_expired_id ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired, id);\nCREATE INDEX idx_crs_coordinate_revision_id ON \"version\".version_crs_coordinate_revision USING btree (id);\nCREATE INDEX idx_crs_coordinate_revision_id_created ON \"version\".version_crs_coordinate_revision USING btree (id, _revision_created); \n\n\nThe distribution of the data is that all but 120,000 rows have null values in the _revision_expired column.\n\nThe query itself that I'm trying to optimise is below:\n\nEXPLAIN\nSELECT * FROM (\n SELECT\n row_number() OVER (PARTITION BY id ORDER BY _revision_created DESC) as row_number,\n * \n FROM\n version_crs_coordinate_revision\n WHERE (\n (_revision_created <= 16 AND _revision_expired > 16 AND _revision_expired <= 40) OR \n (_revision_created > 16 AND _revision_created <= 40)\n )\n) AS T \nWHERE row_number = 1;\n\nSubquery Scan t (cost=170692.25..175678.27 rows=767 width=205)\n Filter: (t.row_number = 1)\n -> WindowAgg (cost=170692.25..173760.57 rows=153416 width=86)\n -> Sort (cost=170692.25..171075.79 rows=153416 width=86)\n Sort Key: version_crs_coordinate_revision.id, version_crs_coordinate_revision._revision_created\n -> Bitmap Heap Scan on version_crs_coordinate_revision (cost=3319.13..157477.69 rows=153416 width=86)\n Recheck Cond: (((_revision_expired > 16) AND (_revision_expired <= 40)) OR 
((_revision_created > 16) AND (_revision_created <= 40)))\n Filter: (((_revision_created <= 16) AND (_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))\n -> BitmapOr (cost=3319.13..3319.13 rows=154372 width=0)\n -> Bitmap Index Scan on idx_crs_coordinate_revision_expired (cost=0.00..2331.76 rows=111041 width=0)\n Index Cond: ((_revision_expired > 16) AND (_revision_expired <= 40))\n -> Bitmap Index Scan on idx_crs_coordinate_revision_created (cost=0.00..910.66 rows=43331 width=0)\n Index Cond: ((_revision_created > 16) AND (_revision_created <= 40))\n\n\nOne thought I have is that maybe the idx_crs_coordinate_revision_expired_created index could be used instead of idx_crs_coordinate_revision_expired.\n\nDoes anyone have any suggestions what I could do to improve the plan? Or how I could force the use of the idx_crs_coordinate_revision_expired_created index to see if that is better.\n\nThanks\nJeremy\n\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Mon, 17 Jan 2011 16:21:56 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to improve query plan?" }, { "msg_contents": "On 01/16/2011 09:21 PM, Jeremy Palmer wrote:\n> Hi all,\n>\n> I've come to a dead end in trying to get a commonly used query to perform better. The query is against one table with 10 million rows. This table has been analysed. 
The table definition is:\n>\n> CREATE TABLE version_crs_coordinate_revision\n> (\n> _revision_created integer NOT NULL,\n> _revision_expired integer,\n> id integer NOT NULL,\n> cos_id integer NOT NULL,\n> nod_id integer NOT NULL,\n> ort_type_1 character varying(4),\n> ort_type_2 character varying(4),\n> ort_type_3 character varying(4),\n> status character varying(4) NOT NULL,\n> sdc_status character(1) NOT NULL,\n> source character varying(4),\n> value1 numeric(22,12),\n> value2 numeric(22,12),\n> value3 numeric(22,12),\n> wrk_id_created integer,\n> cor_id integer,\n> audit_id integer NOT NULL,\n> CONSTRAINT pkey_version_crs_coordinate_revision PRIMARY KEY (_revision_created, id),\n> CONSTRAINT version_crs_coordinate_revision_revision_created_fkey FOREIGN KEY (_revision_created)\n> REFERENCES revision (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION,\n> CONSTRAINT version_crs_coordinate_revision_revision_expired_fkey FOREIGN KEY (_revision_expired)\n> REFERENCES revision (id) MATCH SIMPLE\n> ON UPDATE NO ACTION ON DELETE NO ACTION\n> )\n> WITH (\n> OIDS=FALSE\n> );\n> ALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN _revision_created SET STATISTICS 1000;\n> ALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN _revision_expired SET STATISTICS 1000;\n> ALTER TABLE \"version\".version_crs_coordinate_revision ALTER COLUMN id SET STATISTICS 1000;\n>\n> CREATE INDEX idx_crs_coordinate_revision_created ON \"version\".version_crs_coordinate_revision USING btree (_revision_created);\n> CREATE INDEX idx_crs_coordinate_revision_created_expired ON \"version\".version_crs_coordinate_revision USING btree (_revision_created, _revision_expired);\n> CREATE INDEX idx_crs_coordinate_revision_expired ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired);\n> CREATE INDEX idx_crs_coordinate_revision_expired_created ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired, _revision_created);\n> CREATE INDEX idx_crs_coordinate_revision_expired_id ON \"version\".version_crs_coordinate_revision USING btree (_revision_expired, id);\n> CREATE INDEX idx_crs_coordinate_revision_id ON \"version\".version_crs_coordinate_revision USING btree (id);\n> CREATE INDEX idx_crs_coordinate_revision_id_created ON \"version\".version_crs_coordinate_revision USING btree (id, _revision_created);\n>\n>\n> The distribution of the data is that all but 120,000 rows have null values in the _revision_expired column.\n>\n> The query itself that I'm trying to optimise is below:\n>\n> EXPLAIN\n> SELECT * FROM (\n> SELECT\n> row_number() OVER (PARTITION BY id ORDER BY _revision_created DESC) as row_number,\n> *\n> FROM\n> version_crs_coordinate_revision\n> WHERE (\n> (_revision_created<= 16 AND _revision_expired> 16 AND _revision_expired<= 40) OR\n> (_revision_created> 16 AND _revision_created<= 40)\n> )\n> ) AS T\n> WHERE row_number = 1;\n>\n> Subquery Scan t (cost=170692.25..175678.27 rows=767 width=205)\n> Filter: (t.row_number = 1)\n> -> WindowAgg (cost=170692.25..173760.57 rows=153416 width=86)\n> -> Sort (cost=170692.25..171075.79 rows=153416 width=86)\n> Sort Key: version_crs_coordinate_revision.id, version_crs_coordinate_revision._revision_created\n> -> Bitmap Heap Scan on version_crs_coordinate_revision (cost=3319.13..157477.69 rows=153416 width=86)\n> Recheck Cond: (((_revision_expired> 16) AND (_revision_expired<= 40)) OR ((_revision_created> 16) AND (_revision_created<= 40)))\n> Filter: (((_revision_created<= 16) AND (_revision_expired> 
16) AND (_revision_expired<= 40)) OR ((_revision_created> 16) AND (_revision_created<= 40)))\n> -> BitmapOr (cost=3319.13..3319.13 rows=154372 width=0)\n> -> Bitmap Index Scan on idx_crs_coordinate_revision_expired (cost=0.00..2331.76 rows=111041 width=0)\n> Index Cond: ((_revision_expired> 16) AND (_revision_expired<= 40))\n> -> Bitmap Index Scan on idx_crs_coordinate_revision_created (cost=0.00..910.66 rows=43331 width=0)\n> Index Cond: ((_revision_created> 16) AND (_revision_created<= 40))\n>\n>\n> One thought I have is that maybe the idx_crs_coordinate_revision_expired_created index could be used instead of idx_crs_coordinate_revision_expired.\n>\n> Does anyone have any suggestions what I could do to improve the plan? Or how I could force the use of the idx_crs_coordinate_revision_expired_created index to see if that is better.\n>\n> Thanks\n> Jeremy\n\nFirst, wow, those are long names... I had a hard time keeping track.\n\nSecond: you have lots of duplicated indexes. I count _revision_created in 4 indexes? Not sure what other sql you are using, but have you tried one index for one column? PG will be able to Bitmap them together if it thinks it can use more than one. Was that because you were testing?\n\nThird: any chance we can get an \"explain analyze\"? It give's more info. (Also, have you seen http://explain.depesz.com/)\n\nLast: If you wanted to force the index usage, for a test, you could drop the other indexes. I assume this is on a test box so it should be ok. If its live, you could wrap it in a BEGIN ... ROLLBACK (in theory... never tried it myself)\n\n-Andy\n", "msg_date": "Sun, 16 Jan 2011 22:22:20 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Hi Andy,\n\nYeah sorry about the long name, there are all generated by function as part of a table versioning system. And yes I placed all possible indexes on the table to see which would be used by the planner. In production I will drop the unused indexes. 
\n\nYes simple drop the extra index :P I have dropped the index and it made the query slower :(\n\nHere is the explain analyse:\n\nSubquery Scan t (cost=170692.25..175678.27 rows=767 width=205) (actual time=13762.783..14322.315 rows=106299 loops=1)'\n Filter: (t.row_number = 1)'\n -> WindowAgg (cost=170692.25..173760.57 rows=153416 width=86) (actual time=13762.774..14208.522 rows=149557 loops=1)'\n -> Sort (cost=170692.25..171075.79 rows=153416 width=86) (actual time=13762.745..13828.584 rows=149557 loops=1)'\n Sort Key: version_crs_coordinate_revision.id, version_crs_coordinate_revision._revision_created'\n Sort Method: quicksort Memory: 23960kB\n -> Bitmap Heap Scan on version_crs_coordinate_revision (cost=3319.13..157477.69 rows=153416 width=86) (actual time=70.925..13531.720 rows=149557 loops=1)\n Recheck Cond: (((_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))\n Filter: (((_revision_created <= 16) AND (_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))\n -> BitmapOr (cost=3319.13..3319.13 rows=154372 width=0) (actual time=53.650..53.650 rows=0 loops=1)\n -> Bitmap Index Scan on idx_crs_coordinate_revision_expired (cost=0.00..2331.76 rows=111041 width=0) (actual time=37.773..37.773 rows=110326 loops=1)\n Index Cond: ((_revision_expired > 16) AND (_revision_expired <= 40))\n -> Bitmap Index Scan on idx_crs_coordinate_revision_created (cost=0.00..910.66 rows=43331 width=0) (actual time=15.872..15.872 rows=43258 loops=1)\n Index Cond: ((_revision_created > 16) AND (_revision_created <= 40))\nTotal runtime: 14359.747 ms\n\nhttp://explain.depesz.com/s/qpL says that the bitmap heap scan is bad. Not sure what to do about it.\n\nThanks,\nJeremy\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]] \nSent: Monday, 17 January 2011 5:22 p.m.\nTo: Jeremy Palmer\nCc: [email protected]\nSubject: Re: [PERFORM] Possible to improve query plan?\n\n\nFirst, wow, those are long names... I had a hard time keeping track.\n\nSecond: you have lots of duplicated indexes. I count _revision_created in 4 indexes? Not sure what other sql you are using, but have you tried one index for one column? PG will be able to Bitmap them together if it thinks it can use more than one. Was that because you were testing?\n\nThird: any chance we can get an \"explain analyze\"? It give's more info. (Also, have you seen http://explain.depesz.com/)\n\nLast: If you wanted to force the index usage, for a test, you could drop the other indexes. I assume this is on a test box so it should be ok. If its live, you could wrap it in a BEGIN ... ROLLBACK (in theory... never tried it myself)\n\n-Andy\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. 
\nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Mon, 17 Jan 2011 17:43:51 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Possible to improve query plan?" }, { "msg_contents": "> -----Original Message-----\n> From: Andy Colson [mailto:[email protected]]\n> Sent: Monday, 17 January 2011 5:22 p.m.\n> To: Jeremy Palmer\n> Cc: [email protected]\n> Subject: Re: [PERFORM] Possible to improve query plan?\n>\n>\n> First, wow, those are long names... I had a hard time keeping track.\n>\n> Second: you have lots of duplicated indexes. I count _revision_created in 4 indexes? Not sure what other sql you are using, but have you tried one index for one column? PG will be able to Bitmap them together if it thinks it can use more than one. Was that because you were testing?\n>\n> Third: any chance we can get an \"explain analyze\"? It give's more info. (Also, have you seen http://explain.depesz.com/)\n>\n> Last: If you wanted to force the index usage, for a test, you could drop the other indexes. I assume this is on a test box so it should be ok. If its live, you could wrap it in a BEGIN ... ROLLBACK (in theory... never tried it myself)\n>\n> -Andy\n\nOn 01/16/2011 10:43 PM, Jeremy Palmer wrote:\n> Hi Andy,\n>\n> Yeah sorry about the long name, there are all generated by function as part of a table versioning system. And yes I placed all possible indexes on the table to see which would be used by the planner. 
In production I will drop the unused indexes.\n>\n> Yes simple drop the extra index :P I have dropped the index and it made the query slower :(\n>\n> Here is the explain analyse:\n>\n> Subquery Scan t (cost=170692.25..175678.27 rows=767 width=205) (actual time=13762.783..14322.315 rows=106299 loops=1)'\n> Filter: (t.row_number = 1)'\n> -> WindowAgg (cost=170692.25..173760.57 rows=153416 width=86) (actual time=13762.774..14208.522 rows=149557 loops=1)'\n> -> Sort (cost=170692.25..171075.79 rows=153416 width=86) (actual time=13762.745..13828.584 rows=149557 loops=1)'\n> Sort Key: version_crs_coordinate_revision.id, version_crs_coordinate_revision._revision_created'\n> Sort Method: quicksort Memory: 23960kB\n> -> Bitmap Heap Scan on version_crs_coordinate_revision (cost=3319.13..157477.69 rows=153416 width=86) (actual time=70.925..13531.720 rows=149557 loops=1)\n> Recheck Cond: (((_revision_expired> 16) AND (_revision_expired<= 40)) OR ((_revision_created> 16) AND (_revision_created<= 40)))\n> Filter: (((_revision_created<= 16) AND (_revision_expired> 16) AND (_revision_expired<= 40)) OR ((_revision_created> 16) AND (_revision_created<= 40)))\n> -> BitmapOr (cost=3319.13..3319.13 rows=154372 width=0) (actual time=53.650..53.650 rows=0 loops=1)\n> -> Bitmap Index Scan on idx_crs_coordinate_revision_expired (cost=0.00..2331.76 rows=111041 width=0) (actual time=37.773..37.773 rows=110326 loops=1)\n> Index Cond: ((_revision_expired> 16) AND (_revision_expired<= 40))\n> -> Bitmap Index Scan on idx_crs_coordinate_revision_created (cost=0.00..910.66 rows=43331 width=0) (actual time=15.872..15.872 rows=43258 loops=1)\n> Index Cond: ((_revision_created> 16) AND (_revision_created<= 40))\n> Total runtime: 14359.747 ms\n>\n> http://explain.depesz.com/s/qpL says that the bitmap heap scan is bad. Not sure what to do about it.\n>\n> Thanks,\n> Jeremy\n>\n\n\nHum.. yeah it looks like it takes no time at all to pull data from the individual indexes, and them bitmap them. I'm not sure what the bitmap heap scan is, or why its slow. Hopefully someone smarter will come along.\n\nAlso its weird that explain.depesz.com didnt parse and show your entire plan. Hum.. you seem to have ending quotes on some of the lines?\n\nOne other though: quicksort Memory: 23960kB\nIt needs 20Meg to sort... It could be your sort is swapping to disk.\n\nWhat sort of PG version is this?\nWhat are you using for work_mem? (you could try to bump it up a little (its possible to set for session only, no need for server restart) and see if that'd help.\n\nAnd sorry, but its my bedtime, good luck though.\n\n-Andy\n\n", "msg_date": "Sun, 16 Jan 2011 22:57:29 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Hi Andy,\n\nYes important omissions:\n\nServer version: 8.4.6\nOS Windows Server 2003 Standard Ed :(\nThe work mem is 50mb.\n\nI tried setting the work_mem to 500mb, but it didn't make a huge difference in query execution time. But then again the OS disk caching is probably taking over here.\n\nOk here's the new plan with work_mem = 50mb:\n\nhttp://explain.depesz.com/s/xwv\n\nAnd here another plan with work_mem = 500mb:\n\nhttp://explain.depesz.com/s/VmO\n\nThanks,\nJeremy\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]] \nSent: Monday, 17 January 2011 5:57 p.m.\nTo: Jeremy Palmer\nCc: [email protected]\nSubject: Re: [PERFORM] Possible to improve query plan?\n\n\nHum.. 
yeah it looks like it takes no time at all to pull data from the individual indexes, and them bitmap them. I'm not sure what the bitmap heap scan is, or why its slow. Hopefully someone smarter will come along.\n\nAlso its weird that explain.depesz.com didnt parse and show your entire plan. Hum.. you seem to have ending quotes on some of the lines?\n\nOne other though: quicksort Memory: 23960kB\nIt needs 20Meg to sort... It could be your sort is swapping to disk.\n\nWhat sort of PG version is this?\nWhat are you using for work_mem? (you could try to bump it up a little (its possible to set for session only, no need for server restart) and see if that'd help.\n\nAnd sorry, but its my bedtime, good luck though.\n\n-Andy\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Mon, 17 Jan 2011 18:13:25 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Hello,\n\n> \n> The distribution of the data is that all but 120,000 rows have null \n> values in the _revision_expired column.\n> \n\nA shot in the dark - will a partial index on the above column help?\nhttp://www.postgresql.org/docs/current/interactive/indexes-partial.html\nhttp://en.wikipedia.org/wiki/Partial_index\n\nOne link with discussion about it...\nhttp://www.devheads.net/database/postgresql/general/when-can-postgresql-use-partial-not-null-index-seems-depend-size-clause-even-enable-seqscan.htm\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only for \nthe person to whom it is addressed and may contain confidential and/or \nprivileged material. If you have received this e-mail in error, kindly \ncontact the sender and destroy all copies of the original communication. \nIBS makes no warranty, express or implied, nor guarantees the accuracy, \nadequacy or completeness of the information contained in this email or any \nattachment and is not liable for any errors, defects, omissions, viruses \nor for resultant loss or damage, if any, direct or indirect.\"\n\n\n\n\n\nHello,\n\n> \n> The distribution of the data is that all but 120,000 rows have null\n\n> values in the _revision_expired column.\n> \n\nA shot in the dark - will a partial index on the above\ncolumn help?\nhttp://www.postgresql.org/docs/current/interactive/indexes-partial.html\nhttp://en.wikipedia.org/wiki/Partial_index\n\nOne link with discussion about it...\nhttp://www.devheads.net/database/postgresql/general/when-can-postgresql-use-partial-not-null-index-seems-depend-size-clause-even-enable-seqscan.htm\n\nRegards,\nJayadevan\n\n\n\n\n\nDISCLAIMER: \n\n\"The information in this e-mail and any attachment is intended only\nfor the person to whom it is addressed and may contain confidential and/or\nprivileged material. 
If you have received this e-mail in error, kindly\ncontact the sender and destroy all copies of the original communication.\nIBS makes no warranty, express or implied, nor guarantees the accuracy,\nadequacy or completeness of the information contained in this email or\nany attachment and is not liable for any errors, defects, omissions, viruses\nor for resultant loss or damage, if any, direct or indirect.\"", "msg_date": "Mon, 17 Jan 2011 14:14:05 +0530", "msg_from": "Jayadevan M <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Jeremy Palmer <[email protected]> writes:\n> I've come to a dead end in trying to get a commonly used query to\n> perform better.\n\n> EXPLAIN\n> SELECT * FROM (\n> SELECT\n> row_number() OVER (PARTITION BY id ORDER BY _revision_created DESC) as row_number,\n> * \n> FROM\n> version_crs_coordinate_revision\n> WHERE (\n> (_revision_created <= 16 AND _revision_expired > 16 AND _revision_expired <= 40) OR \n> (_revision_created > 16 AND _revision_created <= 40)\n> )\n> ) AS T \n> WHERE row_number = 1;\n\nIf I'm not mistaken, that's a DB2-ish locution for a query with DISTINCT\nON, ie, you're looking for the row with highest _revision_created for\neach value of id. It might perform well on DB2, but it's going to\nmostly suck on Postgres --- we don't optimize window-function queries\nvery much at all at the moment. Try writing it with DISTINCT ON instead\nof a window function, like so:\n\nSELECT DISTINCT ON (id)\n * \n FROM\n version_crs_coordinate_revision\n WHERE (\n (_revision_created <= 16 AND _revision_expired > 16 AND _revision_expired <= 40) OR \n (_revision_created > 16 AND _revision_created <= 40)\n )\nORDER BY id, _revision_created DESC;\n\nYou could also experiment with various forms of GROUP BY if you're loath\nto use any Postgres-specific syntax.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2011 15:24:26 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan? " }, { "msg_contents": "Thanks that seems to make the query 10-15% faster :)\n\nCheers\njeremy\n\n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: Tuesday, 18 January 2011 9:24 a.m.\nTo: Jeremy Palmer\nCc: [email protected]\nSubject: Re: [PERFORM] Possible to improve query plan? \n\nJeremy Palmer <[email protected]> writes:\n> I've come to a dead end in trying to get a commonly used query to\n> perform better.\n\n> EXPLAIN\n> SELECT * FROM (\n> SELECT\n> row_number() OVER (PARTITION BY id ORDER BY _revision_created DESC) as row_number,\n> * \n> FROM\n> version_crs_coordinate_revision\n> WHERE (\n> (_revision_created <= 16 AND _revision_expired > 16 AND _revision_expired <= 40) OR \n> (_revision_created > 16 AND _revision_created <= 40)\n> )\n> ) AS T \n> WHERE row_number = 1;\n\nIf I'm not mistaken, that's a DB2-ish locution for a query with DISTINCT\nON, ie, you're looking for the row with highest _revision_created for\neach value of id. It might perform well on DB2, but it's going to\nmostly suck on Postgres --- we don't optimize window-function queries\nvery much at all at the moment. 
Try writing it with DISTINCT ON instead\nof a window function, like so:\n\nSELECT DISTINCT ON (id)\n * \n FROM\n version_crs_coordinate_revision\n WHERE (\n (_revision_created <= 16 AND _revision_expired > 16 AND _revision_expired <= 40) OR \n (_revision_created > 16 AND _revision_created <= 40)\n )\nORDER BY id, _revision_created DESC;\n\nYou could also experiment with various forms of GROUP BY if you're loath\nto use any Postgres-specific syntax.\n\n\t\t\tregards, tom lane\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Tue, 18 Jan 2011 10:01:17 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan? " }, { "msg_contents": "Tom Lane wrote:\n> If I'm not mistaken, that's a DB2-ish locution \n\nIt could also be a part of the Oracle vernacular. I've seen queries like \nthat running against Oracle RDBMS, too.\n\n> for a query with DISTINCT\n> ON, ie, you're looking for the row with highest _revision_created for\n> each value of id. It might perform well on DB2, but it's going to\n> mostly suck on Postgres --- we don't optimize window-function queries\n> very much at all at the moment. \nHmmm, what optimizations do you have in mind? I thought that window \nfunctions are just clever tricks with memory? Anything that can be \nexpected for 9.0x?\n\n\n> Try writing it with DISTINCT ON instead\n> of a window function, like so:\n> \nWouldn't \"distinct\" necessarily bring about the sort/merge?\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Mon, 17 Jan 2011 17:11:09 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" } ]
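Jayadevan's partial-index suggestion in this thread was never spelled out against the table. Given that all but ~120,000 of the ~10 million rows have a null _revision_expired, one possible shape is below; the index name is invented and whether the planner actually picks it still needs checking with EXPLAIN ANALYZE.

CREATE INDEX idx_crs_coordinate_revision_expired_partial
    ON "version".version_crs_coordinate_revision (_revision_expired, id)
    WHERE _revision_expired IS NOT NULL;

The range condition _revision_expired > 16 AND _revision_expired <= 40 implies IS NOT NULL, so the planner should be able to match the partial predicate, and the index stays tiny compared with the full-table idx_crs_coordinate_revision_expired. It can be tried together with Tom Lane's DISTINCT ON rewrite above rather than the window-function form.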
[ { "msg_contents": "It`s just a sample.\n\nselect c.id from OneRow c join abstract a on a.id=AsInteger(c.id)\n\n\"Nested Loop (cost=0.00..786642.96 rows=1 width=4) (actual \ntime=91021.167..119601.344 rows=1 loops=1)\"\n\" Join Filter: ((a.id)::integer = asinteger((c.id)::integer))\"\n\" -> Seq Scan on onerow c (cost=0.00..1.01 rows=1 width=4) (actual \ntime=0.007..0.008 rows=1 loops=1)\"\n\" -> Seq Scan on abstract a (cost=0.00..442339.78 rows=22953478 \nwidth=4) (actual time=0.003..115193.283 rows=22953478 loops=1)\"\n\"Total runtime: 119601.428 ms\"\n\n\nselect c.id from OneRow c join abstract a on a.id=c.id\n\n\"Nested Loop (cost=0.00..13.85 rows=1 width=4) (actual \ntime=254.579..254.585 rows=1 loops=1)\"\n\" -> Seq Scan on onerow c (cost=0.00..1.01 rows=1 width=4) (actual \ntime=0.006..0.007 rows=1 loops=1)\"\n\" -> Index Scan using integ_1197 on abstract a (cost=0.00..12.83 \nrows=1 width=4) (actual time=254.559..254.563 rows=1 loops=1)\"\n\" Index Cond: ((a.id)::integer = (c.id)::integer)\"\n\"Total runtime: 254.648 ms\"\n\n\nOneRow Contains only one row,\nabstract contains 22 953 500 rows\n\nAsInteger is simple function on Delphi\nit just return input value\n\nCREATE OR REPLACE FUNCTION asinteger(integer)\n RETURNS integer AS\n'oeudfpg.dll', 'AsInteger'\n LANGUAGE c VOLATILE\n COST 1;\n\n\nWhy SeqScan???\n\nthis query is simple sample to show SLOW seq scan plan\nI have a real query what i don`t know when it will be done... but at \nfirebird this query with full fetch 1-2 minutes\nI can`t give you this real query and database (database size is more, \nthan 20 GB)\nas i see that query have same problem as this sample\nIt`s so sad, because I spend so much time to support posgtresql in my \nproject and now i see what more queries is slower more than 10 times...\nPlease HELP!\n\nPostgreSQL version 9.0.2\n\n-- \nС уважением,\nЗотов Роман Владимирович\nруководитель Отдела инструментария\nЗАО \"НПО Консультант\"\nг.Иваново, ул. Палехская, д. 10\nтел./факс: (4932) 41-01-21\nmailto: [email protected]\n\n\n\n\n\n\n\n It`s just a sample.\n\nselect c.id from OneRow c join abstract a on\n a.id=AsInteger(c.id)\n\n \"Nested Loop  (cost=0.00..786642.96 rows=1 width=4) (actual\n time=91021.167..119601.344 rows=1 loops=1)\"\n \"  Join Filter: ((a.id)::integer = asinteger((c.id)::integer))\"\n \"  ->  Seq Scan on onerow c  (cost=0.00..1.01 rows=1 width=4)\n (actual time=0.007..0.008 rows=1 loops=1)\"\n \"  ->  Seq Scan on abstract a  (cost=0.00..442339.78\n rows=22953478 width=4) (actual time=0.003..115193.283 rows=22953478\n loops=1)\"\n \"Total runtime: 119601.428 ms\"\n\n\nselect c.id from OneRow c join abstract a on\n a.id=c.id\n\n \"Nested Loop  (cost=0.00..13.85 rows=1 width=4) (actual\n time=254.579..254.585 rows=1 loops=1)\"\n \"  ->  Seq Scan on onerow c  (cost=0.00..1.01 rows=1 width=4)\n (actual time=0.006..0.007 rows=1 loops=1)\"\n \"  ->  Index Scan using integ_1197 on abstract a \n (cost=0.00..12.83 rows=1 width=4) (actual time=254.559..254.563\n rows=1 loops=1)\"\n \"        Index Cond: ((a.id)::integer = (c.id)::integer)\"\n \"Total runtime: 254.648 ms\"\n\n\n OneRow Contains only one row,\n abstract contains 22 953 500 rows\n\n AsInteger is simple function on Delphi\n it just return input value\n\n CREATE OR REPLACE FUNCTION asinteger(integer)\n   RETURNS integer AS\n 'oeudfpg.dll', 'AsInteger'\n   LANGUAGE c VOLATILE\n   COST 1;\n\n\n Why SeqScan???\n\n this query is simple sample to show SLOW seq scan plan \n I have a real query what i don`t know when it will be done... 
but at\n firebird this query with full fetch 1-2 minutes\n I can`t give you this real query and database (database size is\n more, than 20 GB) \n as i see that query have same problem as this sample\n It`s so sad, because I spend so much time to support posgtresql in\n my project and now i see what more queries is slower more than 10\n times...\n Please HELP!\n\n PostgreSQL version 9.0.2\n\n-- \nС уважением,\nЗотов Роман Владимирович\nруководитель Отдела инструментария \nЗАО \"НПО Консультант\"\nг.Иваново, ул. Палехская, д. 10\nтел./факс: (4932) 41-01-21\nmailto: [email protected]", "msg_date": "Mon, 17 Jan 2011 11:03:29 +0300", "msg_from": "Zotov <[email protected]>", "msg_from_op": true, "msg_subject": "Bad plan when join on function" }, { "msg_contents": "2011/1/17 Zotov <[email protected]>:\n> It`s just a sample.\n>\n> select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)\n>\n> \"Nested Loop  (cost=0.00..786642.96 rows=1 width=4) (actual\n> time=91021.167..119601.344 rows=1 loops=1)\"\n> \"  Join Filter: ((a.id)::integer = asinteger((c.id)::integer))\"\n> \"  ->  Seq Scan on onerow c  (cost=0.00..1.01 rows=1 width=4) (actual\n> time=0.007..0.008 rows=1 loops=1)\"\n> \"  ->  Seq Scan on abstract a  (cost=0.00..442339.78 rows=22953478 width=4)\n> (actual time=0.003..115193.283 rows=22953478 loops=1)\"\n> \"Total runtime: 119601.428 ms\"\n>\n>\n> select c.id from OneRow c join abstract a on a.id=c.id\n>\n> \"Nested Loop  (cost=0.00..13.85 rows=1 width=4) (actual\n> time=254.579..254.585 rows=1 loops=1)\"\n> \"  ->  Seq Scan on onerow c  (cost=0.00..1.01 rows=1 width=4) (actual\n> time=0.006..0.007 rows=1 loops=1)\"\n> \"  ->  Index Scan using integ_1197 on abstract a  (cost=0.00..12.83 rows=1\n> width=4) (actual time=254.559..254.563 rows=1 loops=1)\"\n> \"        Index Cond: ((a.id)::integer = (c.id)::integer)\"\n> \"Total runtime: 254.648 ms\"\n>\n>\n> OneRow Contains only one row,\n> abstract contains 22 953 500 rows\n>\n> AsInteger is simple function on Delphi\n> it just return input value\n>\n> CREATE OR REPLACE FUNCTION asinteger(integer)\n>   RETURNS integer AS\n> 'oeudfpg.dll', 'AsInteger'\n>   LANGUAGE c VOLATILE\n>   COST 1;\n\nare you sure so your function needs a VOLATILE flag?\n\nRegards\n\nPavel Stehule\n\n>\n>\n> Why SeqScan???\n>\n> this query is simple sample to show SLOW seq scan plan\n> I have a real query what i don`t know when it will be done... but at\n> firebird this query with full fetch 1-2 minutes\n> I can`t give you this real query and database (database size is more, than\n> 20 GB)\n> as i see that query have same problem as this sample\n> It`s so sad, because I spend so much time to support posgtresql in my\n> project and now i see what more queries is slower more than 10 times...\n> Please HELP!\n>\n> PostgreSQL version 9.0.2\n>\n> --\n> С уважением,\n> Зотов Роман Владимирович\n> руководитель Отдела инструментария\n> ЗАО \"НПО Консультант\"\n> г.Иваново, ул. Палехская, д. 10\n> тел./факс: (4932) 41-01-21\n> mailto: [email protected]\n", "msg_date": "Mon, 17 Jan 2011 21:12:31 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan when join on function" }, { "msg_contents": "On 01/17/2011 02:03 AM, Zotov wrote:\n\n> select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)\n>\n> OneRow Contains only one row,\n> abstract contains 22 953 500 rows\n>\n> AsInteger is simple function on Delphi\n> it just return input value\n\nOk... there has to be some kind of misunderstanding, here. 
First of all, \nif you're trying to cast a value to an integer, there are so many \nbuilt-in ways to do this, I can't imagine why you'd call a C function. \nThe most common for your example would be ::INT.\n\nSecond, you need to understand how the optimizer works. It doesn't know \nwhat the function will return, so it has to apply the function to every \nrow in your 'abstract' table. You can get around this by applying an \nindex to your table with the result of your function, but to do that, \nyou'll have to mark your function as STABLE or IMMUTABLE instead of \nVOLATILE.\n\nJoining on the result of a function will always do this. The database \ncan't know what your function will return. If you can avoid using a \nfunction in your join clause, do so.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Tue, 18 Jan 2011 08:15:21 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan when join on function" } ]
[ { "msg_contents": "Which is the type of your application? You can see it on the Performance Whackamole Presentation from Josh Berkus on the \nPgCon 2009:\n- Web application\n- Online Transaction Processing (OLTP)\n- Data WareHousing (DW)\n\nAnd based on the type of your application, you can configure the postgresql.conf to gain a better performance of your PostgreSQL server.\nPostgreSQL postgresql.conf baseline:\n shared_buffers = 25% RAM\n work_mem = 512K[W] 2 MB[O] 128 MB[D]\n - but no more that RAM/no_connections\n maintenance_work_mem = 1/16 RAM\n checkpoint_segments = 8 [W], 16-64 [O], [D]\n wal_buffer = 1 MB [W], 8 MB [O], [D]\n effective_cache_size = 2/3 RAM\n\nRegards\n \n\nIng. Marcos Luís Ortíz Valmaseda\nLinux User # 418229 && PostgreSQL DBA\nCentro de Tecnologías Gestión de Datos (DATEC)\nhttp://postgresql.uci.cu\nhttp://www.postgresql.org\nhttp://it.toolbox.com/blogs/sql-apprentice\n\n----- Mensaje original -----\nDe: \"Jeremy Palmer\" <[email protected]>\nPara: \"Andy Colson\" <[email protected]>\nCC: [email protected]\nEnviados: Lunes, 17 de Enero 2011 0:13:25 GMT -05:00 Región oriental EE. UU./Canadá\nAsunto: Re: [PERFORM] Possible to improve query plan?\n\nHi Andy,\n\nYes important omissions:\n\nServer version: 8.4.6\nOS Windows Server 2003 Standard Ed :(\nThe work mem is 50mb.\n\nI tried setting the work_mem to 500mb, but it didn't make a huge difference in query execution time. But then again the OS disk caching is probably taking over here.\n\nOk here's the new plan with work_mem = 50mb:\n\nhttp://explain.depesz.com/s/xwv\n\nAnd here another plan with work_mem = 500mb:\n\nhttp://explain.depesz.com/s/VmO\n\nThanks,\nJeremy\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]] \nSent: Monday, 17 January 2011 5:57 p.m.\nTo: Jeremy Palmer\nCc: [email protected]\nSubject: Re: [PERFORM] Possible to improve query plan?\n\n\nHum.. yeah it looks like it takes no time at all to pull data from the individual indexes, and them bitmap them. I'm not sure what the bitmap heap scan is, or why its slow. Hopefully someone smarter will come along.\n\nAlso its weird that explain.depesz.com didnt parse and show your entire plan. Hum.. you seem to have ending quotes on some of the lines?\n\nOne other though: quicksort Memory: 23960kB\nIt needs 20Meg to sort... It could be your sort is swapping to disk.\n\nWhat sort of PG version is this?\nWhat are you using for work_mem? (you could try to bump it up a little (its possible to set for session only, no need for server restart) and see if that'd help.\n\nAnd sorry, but its my bedtime, good luck though.\n\n-Andy\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. 
\nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 17 Jan 2011 08:37:35 -0500 (CST)", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "It fits a Data Warehousing type application. \n\nApart from work_mem, my other parameters are pretty close to these numbers. I had the work_mem down a little because a noticed some clients were getting out of memory errors with large queries which involved lots of sorting.\n\nThanks\nJeremy\n\n-----Original Message-----\nFrom: Ing. Marcos Ortiz Valmaseda [mailto:[email protected]] \nSent: Tuesday, 18 January 2011 2:38 a.m.\nTo: Jeremy Palmer\nCc: [email protected]; Andy Colson\nSubject: Re: [PERFORM] Possible to improve query plan?\n\nWhich is the type of your application? You can see it on the Performance Whackamole Presentation from Josh Berkus on the \nPgCon 2009:\n- Web application\n- Online Transaction Processing (OLTP)\n- Data WareHousing (DW)\n\nAnd based on the type of your application, you can configure the postgresql.conf to gain a better performance of your PostgreSQL server.\nPostgreSQL postgresql.conf baseline:\n shared_buffers = 25% RAM\n work_mem = 512K[W] 2 MB[O] 128 MB[D]\n - but no more that RAM/no_connections\n maintenance_work_mem = 1/16 RAM\n checkpoint_segments = 8 [W], 16-64 [O], [D]\n wal_buffer = 1 MB [W], 8 MB [O], [D]\n effective_cache_size = 2/3 RAM\n\nRegards\n \n\nIng. Marcos Luís Ortíz Valmaseda\nLinux User # 418229 && PostgreSQL DBA\nCentro de Tecnologías Gestión de Datos (DATEC)\nhttp://postgresql.uci.cu\nhttp://www.postgresql.org\nhttp://it.toolbox.com/blogs/sql-apprentice\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Tue, 18 Jan 2011 09:52:18 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" } ]
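Purely as an illustration of the percentages Marcos lists above, and not a recommendation for the Windows 2003 box in question, a dedicated 64 GB Linux data-warehouse server would work out to roughly the following; every number here is an assumption derived from those formulas, and work_mem in particular has to be weighed against the out-of-memory errors mentioned earlier.

# hypothetical postgresql.conf sketch for a 64 GB dedicated DW server
shared_buffers       = 16GB      # about 25% of RAM
work_mem             = 128MB     # DW starting point, per sort/hash node per backend
maintenance_work_mem = 4GB       # about 1/16 of RAM
checkpoint_segments  = 64        # assumption; the DW value is left open above
wal_buffers          = 8MB       # assumption; the DW value is left open above
effective_cache_size = 42GB      # about 2/3 of RAM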
[ { "msg_contents": "Hello,\njust coming from this thread : http://archives.postgresql.org/pgsql-admin/2011-01/msg00050.php\nIt started as an admin question and turned out to be a performance question.\nYou may look at it for a history of this issue. I will repost all data here.\n\nDescription of the machines involved:\n\n1) Prod machine (thereafter called LINUX_PROD) :\nSystem: Linux Suse 2.6.16.46-0.12-smp, 16 x Intel Xeon(R) X7350 @ 2.93GHz, 64GB memory\nDB: PostgreSQL 8.3.13, shared_buffers=16GB, work_mem=512MB, db size=94GB\n2) Dev machine (therafter called FBSD_DEV) :\nSystem : FreeBSD 6.3, Intel(R) Core(TM)2 Duo CPU @ 2.80GHz, 2GB memory\nDB: PostgreSQL 8.3.13, shared_buffers=512MB, work_mem=1MB, db size=76GB\n3) Test machine (thereafter called FBSD_TEST) :\nSystem: FreeBSD 8.1, 4 x AMD Phenom(tm) 965 @ 3.4 GHz, 8GB memory\nDB: PostgreSQL 9.0.2, shared_buffers=5GB, work_mem=512MB, db size=7GB\n4) Linux Test machine (thereafter called LINUX_TEST) :\nSystem : Debian GNU/Linux 5.0, 2x AMD athlon @2.2GZ, 4GB Mem\nDB: PostgreSQL 9.0.2, shared_buffers=2GB, work_mem=512MB, db size=7GB\n\n(all DBs in the last three systems are identical, originating from FBSD_DEV)\n(additiinally no paging or thrashing were observed during the tests)\n\nQuery is :\nSELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname from marinerstates ms,vessels vsl,mariner m \nwhere m.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and coalesce(ms.endtime,now())::date >= '2006-07-15' and \nms.starttime::date <= '2007-01-11' and m.marinertype='Mariner' and m.id not in \n(SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner mold where mold.id=msold.marinerid and msold.vslid=vslold.id and \nmsold.state='Active' and coalesce(msold.endtime,now())::date >= '2006-07-15' and msold.starttime::date <= '2007-01-11' and exists \n(select 1 from marinerstates msold2 where msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> msold.id and \nmsold2.starttime<msold.starttime AND (msold.starttime-msold2.endtime)<='18 months') and mold.marinertype='Mariner' ) \norder by m.surname,coalesce(m.givenname,''),coalesce(m.midname,''); \n\ni get the following execution times: (with \\timing) \nFBSD_DEV : query : 240.419 ms\nLINUX_PROD : query : 219.568 ms\nFBSD_TEST : query : 2285.509 ms\nLINUX_TEST : query : 5788.988 ms\n\nRe writing the query in the \"NOT EXIST\" variation like:\n\nSELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname from marinerstates ms,vessels vsl,mariner m where \nm.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and coalesce(ms.endtime,now())::date >= '2006-07-15' and \nms.starttime::date <= '2007-01-11'  and m.marinertype='Mariner'  and NOT EXISTS \n   (SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner mold  where mold.id=msold.marinerid and msold.vslid=vslold.id and \n   msold.state='Active' and coalesce(msold.endtime,now())::date >= '2006-07-15' and msold.starttime::date <= '2007-01-11' and \n   exists (select 1 from marinerstates msold2 where msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> msold.id and \n              msold2.starttime<msold.starttime AND (msold.starttime-msold2.endtime)<='18 months')  \n   and mold.marinertype='Mariner' AND mold.id=m.id) \norder by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');\ngives:\n\nFBSD_DEV : query : 154.000 ms\nLINUX_PROD : query : 153.408 ms\nFBSD_TEST : query : 137.000 ms\nLINUX_TEST : query : 404.000 
ms\n\nI found this query, since i observed that running the calling program was actually the first case that i \nencountered FBSD_TEST (while running a bigger database, a recent dump from LINUX_PROD) to be actually\nslower than LINUX_PROD.\nFrom the whole set of the tests involved, it seems like the \"NOT IN\" version of the query runs slow\nin any postgresql 9.0.2 tested.\n-- \nAchilleas Mantzios\n", "msg_date": "Mon, 17 Jan 2011 17:35:16 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": true, "msg_subject": "\"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS runs\n\tfast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Achilleas Mantzios wrote:\n> From the whole set of the tests involved, it seems like the \"NOT IN\" version of the query runs slow\n> in any postgresql 9.0.2 tested.\n> \nNot only that, it will run slower even using Oracle 11.2 or MySQL 5.5.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Mon, 17 Jan 2011 11:47:47 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13\n\t- NOT EXISTS runs fast in both 8.3.13 and 9.0.2" } ]
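A stripped-down illustration of the rewrite this thread converges on, using hypothetical tables a(id) and b(a_id) rather than the mariner schema. Two points hold for the PostgreSQL versions discussed: NOT EXISTS can be planned as a (hashed) anti-join, while NOT IN cannot, and NOT IN also changes meaning if the subquery column can yield NULL (a single NULL makes the NOT IN predicate return no rows).

-- NOT IN form: planned as a subplan, and sensitive to NULLs in b.a_id.
SELECT a.id
FROM a
WHERE a.id NOT IN (SELECT b.a_id FROM b);

-- Equivalent NOT EXISTS form (when b.a_id is never NULL); the planner can
-- turn this into an anti-join against b.
SELECT a.id
FROM a
WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);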
[ { "msg_contents": "Jeremy Palmer wrote:\n \n> WHERE (\n> (_revision_created <= 16\n> AND _revision_expired > 16\n> AND _revision_expired <= 40)\n> OR (_revision_created > 16\n> AND _revision_created <= 40))\n \n> -> Bitmap Heap Scan on version_crs_coordinate_revision\n> (actual time=70.925..13531.720 rows=149557 loops=1)\n \n> -> BitmapOr (actual time=53.650..53.650 rows=0 loops=1)\n \nThis plan actually looks pretty good for what you're doing. The\nBitmap Index Scans and BitmapOr determine which tuples in the heap\nneed to be visited. The Bitmap Heap Scan then visits the heap pages\nin physical order (to avoid repeated fetches of the same page and to\npossibly edge toward sequential access speeds). You don't seem to\nhave a lot of bloat, which could be a killer on this type of query,\nsince the rowcounts from the index scans aren't that much higher than\nthe counts after you check the heap.\n \nThe only thing I can think of which might help is to CLUSTER the\ntable on whichever of the two indexes used in the plan which is\ntypically more selective for such queries. (In the example query\nthat seems to be idx_crs_coordinate_revision_created.) That might\nreduce the number of heap pages which need to be accessed and/or put\nplace them close enough that you'll get some sequential readahead.\n \nI guess you could also try adjusting effective_io_concurrency upward\nto see if that helps.\n \n-Kevin\n", "msg_date": "Mon, 17 Jan 2011 10:48:08 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "On Mon, Jan 17, 2011 at 11:48 AM, Kevin Grittner\n<[email protected]> wrote:\n> Jeremy Palmer  wrote:\n>\n>>   WHERE (\n>>       (_revision_created <= 16\n>>        AND _revision_expired > 16\n>>        AND _revision_expired <= 40)\n>>    OR (_revision_created > 16\n>>        AND _revision_created <= 40))\n>\n>> -> Bitmap Heap Scan on version_crs_coordinate_revision\n>>      (actual time=70.925..13531.720 rows=149557 loops=1)\n>\n>> -> BitmapOr (actual time=53.650..53.650 rows=0 loops=1)\n>\n> This plan actually looks pretty good for what you're doing.  The\n> Bitmap Index Scans and BitmapOr determine which tuples in the heap\n> need to be visited.  The Bitmap Heap Scan then visits the heap pages\n> in physical order (to avoid repeated fetches of the same page and to\n> possibly edge toward sequential access speeds).  You don't seem to\n> have a lot of bloat, which could be a killer on this type of query,\n> since the rowcounts from the index scans aren't that much higher than\n> the counts after you check the heap.\n\nBut isn't 13.5 seconds awfully slow to scan 149557 rows? The sort is\nsorting 23960kB. Dividing that by 149557 rows gives ~169 bytes/per\nrow, or roughly 49 rows per block, which works out to 3k blows, or\nabout 24MB of data. Clearly we must be hitting a LOT more data than\nthat, or this would be much faster than it is, I would think.\n\nAny chance this is 9.0.X? It'd be interesting to see the EXPLAIN\n(ANALYZE, BUFFERS) output for this query.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 24 Jan 2011 11:33:28 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Robert Haas <[email protected]> wrote:\n \n> But isn't 13.5 seconds awfully slow to scan 149557 rows? The sort\n> is sorting 23960kB. 
Dividing that by 149557 rows gives ~169\n> bytes/per row\n \nYou're right. I would expect 9 ms as per tuple as a worst case if\nit doesn't need to go to TOAST data. Caching, multiple rows per\npage, or adjacent pages should all tend to bring it down from there.\nHow does it get to 90 ms per row with rows that narrow?\n \nIs the table perhaps horribly bloated? Jeremy, did you try my\nsuggestion of using CLUSTER on the index which will tend to be more\nselective?\n \n-Kevin\n", "msg_date": "Mon, 24 Jan 2011 10:50:41 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> But isn't 13.5 seconds awfully slow to scan 149557 rows?\n\nDepends on how many physical blocks they're scattered across, which\nis hard to tell from this printout. And on how many of the blocks\nare already in cache, and what sort of disk hardware he's got, etc.\n\n> Any chance this is 9.0.X? It'd be interesting to see the EXPLAIN\n> (ANALYZE, BUFFERS) output for this query.\n\nYeah.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Jan 2011 12:31:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan? " }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Robert Haas <[email protected]> wrote:\n>> But isn't 13.5 seconds awfully slow to scan 149557 rows? The sort\n>> is sorting 23960kB. Dividing that by 149557 rows gives ~169\n>> bytes/per row\n \n> You're right. I would expect 9 ms as per tuple as a worst case if\n> it doesn't need to go to TOAST data. Caching, multiple rows per\n> page, or adjacent pages should all tend to bring it down from there.\n> How does it get to 90 ms per row with rows that narrow?\n\nUm, that looks like 90 usec per row, not msec.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 24 Jan 2011 12:33:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan? " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> Um, that looks like 90 usec per row, not msec.\n \nOh, right. Well, having to do a random heap access for 1% of the\nrows would pretty much explain the run time, then.\n \n-Kevin\n", "msg_date": "Mon, 24 Jan 2011 11:38:14 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Might be a chance on 9.0 in a couple of weeks, when I do an upgrade on one our dev boxes. \n\nKevin I've now clustered the table. And the performance did increase quite a bit. My only question is how often will I need to re-cluster the table, because it comes at quite a cost. 
The setup I'm running will mean that 10,000 new rows will be inserted, and 2,500 rows will be updated on this table each day.\n\nHere is the new explain output once I have clustered on the idx_crs_coordinate_revision_created index:\n\n\nSubquery Scan t (cost=168227.04..173053.88 rows=743 width=205) (actual time=392.586..946.879 rows=106299 loops=1)\n Output: t.row_number, t._revision_created, t._revision_expired, t.id, t.cos_id, t.nod_id, t.ort_type_1, t.ort_type_2, t.ort_type_3, t.status, t.sdc_status, t.source, t.value1, t.value2, t.value3, t.wrk_id_created, t.cor_id, t.audit_id\n Filter: (t.row_number = 1)\n -> WindowAgg (cost=168227.04..171197.40 rows=148518 width=86) (actual time=392.577..834.477 rows=149557 loops=1)\n Output: row_number() OVER (?), table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id\n -> Sort (cost=168227.04..168598.34 rows=148518 width=86) (actual time=392.550..457.460 rows=149557 loops=1)\n Output: table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id\n Sort Key: table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision._revision_created\n Sort Method: quicksort Memory: 23960kB\n -> Bitmap Heap Scan on table_version_crs_coordinate_revision (cost=3215.29..155469.14 rows=148518 width=86) (actual time=38.808..196.993 rows=149557 loops=1)\n Output: table_version_crs_coordinate_revision._revision_created, table_version_crs_coordinate_revision._revision_expired, table_version_crs_coordinate_revision.id, table_version_crs_coordinate_revision.cos_id, table_version_crs_coordinate_revision.nod_id, table_version_crs_coordinate_revision.ort_type_1, table_version_crs_coordinate_revision.ort_type_2, table_version_crs_coordinate_revision.ort_type_3, table_version_crs_coordinate_revision.status, table_version_crs_coordinate_revision.sdc_status, table_version_crs_coordinate_revision.source, table_version_crs_coordinate_revision.value1, table_version_crs_coordinate_revision.value2, table_version_crs_coordinate_revision.value3, 
table_version_crs_coordinate_revision.wrk_id_created, table_version_crs_coordinate_revision.cor_id, table_version_crs_coordinate_revision.audit_id\n Recheck Cond: (((_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))\n Filter: (((_revision_created <= 16) AND (_revision_expired > 16) AND (_revision_expired <= 40)) OR ((_revision_created > 16) AND (_revision_created <= 40)))\n -> BitmapOr (cost=3215.29..3215.29 rows=149432 width=0) (actual time=27.330..27.330 rows=0 loops=1)\n -> Bitmap Index Scan on idx_crs_coordinate_revision_expired (cost=0.00..2225.36 rows=106001 width=0) (actual time=21.596..21.596 rows=110326 loops=1)\n Index Cond: ((_revision_expired > 16) AND (_revision_expired <= 40))\n -> Bitmap Index Scan on idx_crs_coordinate_revision_created (cost=0.00..915.67 rows=43432 width=0) (actual time=5.728..5.728 rows=43258 loops=1)\n Index Cond: ((_revision_created > 16) AND (_revision_created <= 40))\nTotal runtime: 985.671 ms\n\nThanks heaps,\nJeremy\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Tue, 25 Jan 2011 10:55:07 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Jeremy Palmer <[email protected]> wrote:\n \n> Kevin I've now clustered the table. And the performance did\n> increase quite a bit.\n \nYeah, that's enough to notice the difference.\n \n> My only question is how often will I need to re-cluster the table,\n> because it comes at quite a cost. The setup I'm running will mean\n> that 10,000 new rows will be inserted, and 2,500 rows will be\n> updated on this table each day.\n \nYou're going to see performance drop off as the data fragments. \nYou'll need to balance the performance against maintenance\ndown-time. I would guess, though, that if you have a weekly\nmaintenance window big enough to handle the CLUSTER, it might be\nworth doing it that often.\n \n-Kevin\n", "msg_date": "Mon, 24 Jan 2011 17:54:53 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Jeremy Palmer <[email protected]> wrote:\n \n> My only question is how often will I need to re-cluster the\n> table, because it comes at quite a cost.\n \nI probably should have mentioned that the CLUSTER will run faster if\nthe data is already mostly in the right sequence. You'll be doing a\nnearly sequential pass over the heap, which should minimize seek\ntime, especially if the OS notices the pattern and starts doing\nsequential read-ahead.\n \n-Kevin\n", "msg_date": "Mon, 24 Jan 2011 18:01:31 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "Thanks heaps for the advice. 
I will do some benchmarks to see how long it takes to cluster all of the database tables.\n\nCheers,\nJeremy\n\n-----Original Message-----\nFrom: Kevin Grittner [mailto:[email protected]] \nSent: Tuesday, 25 January 2011 1:02 p.m.\nTo: Jeremy Palmer; Tom Lane\nCc: Robert Haas; [email protected]; [email protected]\nSubject: RE: [PERFORM] Possible to improve query plan?\n\nJeremy Palmer <[email protected]> wrote:\n \n> My only question is how often will I need to re-cluster the\n> table, because it comes at quite a cost.\n \nI probably should have mentioned that the CLUSTER will run faster if\nthe data is already mostly in the right sequence. You'll be doing a\nnearly sequential pass over the heap, which should minimize seek\ntime, especially if the OS notices the pattern and starts doing\nsequential read-ahead.\n \n-Kevin\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n", "msg_date": "Tue, 25 Jan 2011 13:24:48 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" }, { "msg_contents": "2011/1/25 Kevin Grittner <[email protected]>:\n> Jeremy Palmer <[email protected]> wrote:\n>\n>> Kevin I've now clustered the table. And the performance did\n>> increase quite a bit.\n>\n> Yeah, that's enough to notice the difference.\n>\n>> My only question is how often will I need to re-cluster the table,\n>> because it comes at quite a cost. The setup I'm running will mean\n>> that 10,000 new rows will be inserted, and 2,500 rows will be\n>> updated on this table each day.\n>\n> You're going to see performance drop off as the data fragments.\n> You'll need to balance the performance against maintenance\n> down-time.  I would guess, though, that if you have a weekly\n> maintenance window big enough to handle the CLUSTER, it might be\n> worth doing it that often.\n\nWas FILLFACTOR already suggested regarding the INSERT vs UPDATE per day ?\n\nhttp://www.postgresql.org/docs/9.0/static/sql-altertable.html (and\nindex too, but they already have a default at 90% for btree)\n\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Tue, 25 Jan 2011 14:44:17 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible to improve query plan?" } ]
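A sketch of the maintenance discussed above. The table and index names are taken from the plans earlier in the thread; the fillfactor value is illustrative only. CLUSTER rewrites the table under an ACCESS EXCLUSIVE lock, so it belongs in the maintenance window mentioned, and lowering the heap fillfactor is the complementary idea raised at the end of the thread: free space on each page lets updated row versions stay on the same page, which slows the decay of the clustered ordering between runs.

-- Re-order the heap by the more selective index, then refresh statistics
-- (CLUSTER does not analyze the table itself).
CLUSTER table_version_crs_coordinate_revision
    USING idx_crs_coordinate_revision_created;
ANALYZE table_version_crs_coordinate_revision;

-- Illustrative: reserve roughly 20% free space per heap page for future UPDATEs.
-- Existing pages are unaffected until the table is next rewritten.
ALTER TABLE table_version_crs_coordinate_revision SET (fillfactor = 80);

Later maintenance runs can simply issue CLUSTER table_version_crs_coordinate_revision; PostgreSQL remembers the index used previously, and re-clustering mostly ordered data is considerably cheaper than the first pass.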
[ { "msg_contents": "Query is :\nSELECT distinct m.id,coalesce(m.givenname,''),\n coalesce(m.midname,''),\n m.surname from marinerstates ms,vessels vsl,mariner m \nWHERE m.id=ms.marinerid and ms.vslid=vsl.id \nAND ms.state='Active' and coalesce(ms.endtime,now())::date >= '2006-07-15'\nAND ms.starttime::date <= '2007-01-11' AND\n m.marinertype='Mariner' and m.id \nNOT IN (SELECT distinct mold.id\n FROM marinerstates msold,\n vessels vslold,\n mariner mold \n WHERE mold.id=msold.marinerid \n AND msold.vslid=vslold.id\n AND msold.state='Active' \n AND coalesce(msold.endtime,now())::date >= '2006-07-15' \n AND msold.starttime::date <= '2007-01-11' \n AND EXISTS (SELECT 1 \n FROM marinerstates msold2 \n WHERE msold2.marinerid=msold.marinerid\n AND msold2.state='Active' \n AND msold2.id <> msold.id \n AND msold2.starttime<msold.starttime\n AND (msold.starttime-msold2.endtime)<='18 months')\n AND mold.marinertype='Mariner' ) \n ORDER BY m.surname,coalesce(m.givenname,'')\n ,coalesce(m.midname,''); \n\ni get the following execution times: (with \\timing) \nFBSD_DEV : query : 240.419 ms\nLINUX_PROD : query : 219.568 ms\nFBSD_TEST : query : 2285.509 ms\nLINUX_TEST : query : 5788.988 ms\n\nRe writing the query in the \"NOT EXIST\" variation like:\n\nSELECT distinct m.id,coalesce(m.givenname,''),coalesce(m.midname,''),m.surname from marinerstates ms,vessels vsl,mariner m where \nm.id=ms.marinerid and ms.vslid=vsl.id and ms.state='Active' and coalesce(ms.endtime,now())::date >= '2006-07-15' and \nms.starttime::date <= '2007-01-11' and m.marinertype='Mariner' and NOT EXISTS \n (SELECT distinct mold.id from marinerstates msold,vessels vslold,mariner mold where mold.id=msold.marinerid and msold.vslid=vslold.id and \n msold.state='Active' and coalesce(msold.endtime,now())::date >= '2006-07-15' and msold.starttime::date <= '2007-01-11' and \n exists (select 1 from marinerstates msold2 where msold2.marinerid=msold.marinerid and msold2.state='Active' and msold2.id <> msold.id and \n msold2.starttime<msold.starttime AND (msold.starttime-msold2.endtime)<='18 months') \n and mold.marinertype='Mariner' AND mold.id=m.id) \norder by m.surname,coalesce(m.givenname,''),coalesce(m.midname,'');\ngives:\n\nFBSD_DEV : query : 154.000 ms\nLINUX_PROD : query : 153.408 ms\nFBSD_TEST : query : 137.000 ms\nLINUX_TEST : query : 404.000 ms\n\n\nWell, on the Release Notes on the PostgreSQL-8.4 Documentation, the developers recommend to use NOT EXISTS \ninstead NOT IN, because the first clause has a better performance. So, you can use it on that way.\n\nOther questions?\n- Do you have a partial index on marinerstates.marinerid where this condition is accomplished?\n- Do you have a index on mariner.id?\n- Can you provide a explain of these queries on the PostgreSQL-9.0 machines?\n\nRegards\n\n\nIng. Marcos Luís Ortíz Valmaseda\nLinux User # 418229 && PostgreSQL DBA\nCentro de Tecnologías Gestión de Datos (DATEC)\nhttp://postgresql.uci.cu\nhttp://www.postgresql.org\nhttp://it.toolbox.com/blogs/sql-apprentice\n", "msg_date": "Mon, 17 Jan 2011 11:52:27 -0500 (CST)", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 -\n\tNOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Thanx,\n\nΣτις Monday 17 January 2011 18:52:27 ο/η Ing. 
Marcos Ortiz Valmaseda έγραψε:\n> \n> Well, on the Release Notes on the PostgreSQL-8.4 Documentation, the developers recommend to use NOT EXISTS \n> instead NOT IN, because the first clause has a better performance. So, you can use it on that way.\n> \nYou mean this?\n(from 8.4 changes)\n\"Create explicit concepts of semi-joins and anti-joins (Tom)\n This work formalizes our previous ad-hoc treatment of IN (SELECT\n ...) clauses, and extends it to EXISTS and NOT EXISTS clauses. It\n should result in significantly better planning of EXISTS and NOT\n EXISTS queries. In general, logically equivalent IN and EXISTS\n clauses should now have similar performance, whereas previously IN\n often won.\"\n\nI haven't found any other recent reference to this issue. And this is far from what you suggest.\nHere the entry talks about \"similar\" performance.\n\nAlso a similar issue was hot back in 7.4 days :\n\"IN / NOT IN subqueries are now much more efficient\n In previous releases, IN/NOT IN subqueries were joined to the\n upper query by sequentially scanning the subquery looking for a\n match. The 7.4 code uses the same sophisticated techniques used\n by ordinary joins and so is much faster. An IN will now usually\n be as fast as or faster than an equivalent EXISTS subquery; this\n reverses the conventional wisdom that applied to previous\n releases.\"\n\n> Other questions?\n> - Do you have a partial index on marinerstates.marinerid where this condition is accomplished?\nNo, but i just tried it (on state='Active') with no impact. \n> - Do you have a index on mariner.id?\nYes, It is the primary key.\n> - Can you provide a explain of these queries on the PostgreSQL-9.0 machines?\nSure, first i'll post the table definitions and then some stats and then the epxlain analyze(s)\n\nmariner\n=====\n id | integer | not null default nextval(('public.mariner_id_seq'::text)::regclass)\n givenname | character varying(200) |\n midname | character varying(100) |\n surname | character varying(200) | not null\n...\nIndexes:\n \"mariner_pkey\" PRIMARY KEY, btree (id)\n \"mariner_smauid\" UNIQUE, btree (smauid)\n \"mariner_username_key\" UNIQUE, btree (username)\n \"mariner_nationalityid\" btree (nationalityid)\n \"mariner_parentid\" btree (parentid)\n \"mariner_surname\" btree (surname)\n\nmarinerstates\n========\n id | integer | not null default nextval(('public.marinerstates_id_seq'::text)::regclass)\n marinerid | integer | not null\n state | character varying(20) | not null\n vslid | integer |\n leave_period_days | integer |\n comment | text |\n starttime | timestamp with time zone | not null\n endtime | timestamp with time zone |\n trid | integer |\n sal_bw | real | not null default 0.0\n sal_ot | real | not null default 0.0\n sal_lp | real | not null default 0.0\n sal_misc | real | not null default 0.0\n rankid | integer |\n system_vslid | integer |\n startport | text |\n endport | text |\n.....\nIndexes:\n \"marinerstates_pkey\" PRIMARY KEY, btree (id)\n \"marinerstates_mariner_cur_state\" UNIQUE, btree (marinerid) WHERE endtime IS NULL\n \"marinerstates_system_vslid\" UNIQUE, btree (marinerid, system_vslid)\n \"marinerstates__system_vslid\" btree (system_vslid)\n \"marinerstates_cur_mariners_states\" btree (endtime) WHERE endtime IS NULL\n \"marinerstates_mariner_past_state\" btree (marinerid, starttime, endtime) WHERE endtime IS NOT NULL\n \"marinerstates_marinerid\" btree (marinerid)\n \"marinerstates_marinerid_starttime\" btree (marinerid, starttime)\n \"marinerstates_rankid\" btree (rankid)\n 
\"marinerstates_rankid_cur_mariners\" btree (rankid) WHERE endtime IS NULL\n \"marinerstates_rankid_past_state\" btree (rankid, starttime, endtime) WHERE endtime IS NOT NULL\n \"marinerstates_state\" btree (state)\n \"marinerstates_state_cur_mariners\" btree (state) WHERE endtime IS NULL\n \"marinerstates_state_past_state\" btree (state, starttime, endtime) WHERE endtime IS NOT NULL\n \"marinerstates_vslid\" btree (vslid)\n \"marinerstates_vslid_cur_mariners\" btree (vslid) WHERE endtime IS NULL\n \"marinerstates_vslid_past_state\" btree (vslid, starttime, endtime) WHERE endtime IS NOT NULL\n\nvessels\n=====\n name | character varying(200) | not null\n id | integer | not null default nextval(('public.vessels_id_seq'::text)::regclass)\n...\nIndexes:\n \"vessels_pkey\" PRIMARY KEY, btree (id)\n \"vessels_name_key\" UNIQUE, btree (name)\n \"idx_name\" btree (name)\n \"vessels_flag\" btree (flag)\n \"vessels_groupno\" btree (groupno)\n \"vessels_vslstatus_idx\" btree (vslstatus)\n\ndynacom=# SELECT count(*) from mariner;\n count\n-------\n 14447\n\ndynacom=# SELECT count(*) from marinerstates;\n count\n-------\n 51013\n\ndynacom=# SELECT avg(marqry.cnt),stddev(marqry.cnt) FROM (SELECT m.id,count(ms.id) as cnt from mariner m, marinerstates ms WHERE m.id=ms.marinerid group by m.id) AS marqry;\n avg | stddev\n--------------------+--------------------\n 3.5665944207508914 | 4.4416879361829170\n\n(vessels do not play any impact in the query, so i'll leave them out)\n\nSlow plan in 9.0.2 :\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=11525.09..11571.55 rows=3717 width=23) (actual time=10462.561..10462.937 rows=603 loops=1)\n -> Sort (cost=11525.09..11534.38 rows=3717 width=23) (actual time=10462.560..10462.664 rows=603 loops=1)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n Sort Method: quicksort Memory: 71kB\n -> Hash Join (cost=8281.98..11304.67 rows=3717 width=23) (actual time=10425.261..10461.621 rows=603 loops=1)\n Hash Cond: (ms.marinerid = m.id)\n -> Hash Join (cost=20.12..2963.83 rows=3717 width=4) (actual time=0.228..34.993 rows=2625 loops=1)\n Hash Cond: (ms.vslid = vsl.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2889.32 rows=4590 width=8) (actual time=0.011..33.494 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=16.72..16.72 rows=272 width=4) (actual time=0.207..0.207 rows=272 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on vessels vsl (cost=0.00..16.72 rows=272 width=4) (actual time=0.004..0.118 rows=272 loops=1)\n -> Hash (cost=8172.57..8172.57 rows=7143 width=23) (actual time=10424.994..10424.994 rows=12832 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 702kB\n -> Seq Scan on mariner m (cost=7614.86..8172.57 rows=7143 width=23) (actual time=10409.498..10419.971 rows=12832 loops=1)\n Filter: ((NOT (hashed SubPlan 1)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan 1\n -> Unique (cost=2768.00..7614.86 rows=1 width=4) (actual time=87.495..10408.446 rows=1454 loops=1)\n -> Nested Loop (cost=2768.00..7614.86 rows=1 width=4) (actual time=87.493..10407.517 rows=1835 loops=1)\n Join Filter: (msold.marinerid = mold.id)\n -> Index Scan using 
mariner_pkey on mariner mold (cost=0.00..1728.60 rows=14286 width=4) (actual time=0.007..13.931 rows=14286 loops=1)\n Filter: ((marinertype)::text = 'Mariner'::text)\n -> Materialize (cost=2768.00..5671.97 rows=1 width=8) (actual time=0.003..0.330 rows=1876 loops=14286)\n -> Nested Loop (cost=2768.00..5671.96 rows=1 width=8) (actual time=39.723..85.401 rows=1876 loops=1)\n -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.708..81.501 rows=1876 loops=1)\n Hash Cond: (msold.marinerid = msold2.marinerid)\n Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n -> Seq Scan on marinerstates msold (cost=0.00..2889.32 rows=4590 width=20) (actual time=0.003..33.952 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=2251.66..2251.66 rows=41307 width=24) (actual time=39.613..39.613 rows=41250 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 2246kB\n -> Seq Scan on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24) (actual time=0.002..24.882 rows=41250 loops=1)\n Filter: ((state)::text = 'Active'::text)\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.28 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1876)\n Index Cond: (vslold.id = msold.vslid)\n Total runtime: 10463.619 ms\n(37 rows)\n\nFast plan in 8.3.13 : \n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=633677.56..633700.48 rows=1834 width=23) (actual time=543.684..551.003 rows=603 loops=1)\n -> Sort (cost=633677.56..633682.14 rows=1834 width=23) (actual time=543.676..546.070 rows=603 loops=1)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n Sort Method: quicksort Memory: 53kB\n -> Hash Join (cost=630601.65..633578.15 rows=1834 width=23) (actual time=439.969..540.573 rows=603 loops=1)\n Hash Cond: (ms.vslid = vsl.id)\n -> Hash Join (cost=630580.33..633530.01 rows=2261 width=27) (actual time=437.459..532.847 rows=603 loops=1)\n Hash Cond: (ms.marinerid = m.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2875.32 rows=4599 width=8) (actual time=0.017..80.153 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=630491.54..630491.54 rows=7103 width=23) (actual time=437.307..437.307 rows=12832 loops=1)\n -> Index Scan using mariner_pkey on mariner m (cost=628776.89..630491.54 rows=7103 width=23) (actual time=311.023..380.168 rows=12832 loops=1)\n Filter: ((NOT (hashed subplan)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan\n -> Unique (cost=0.00..628772.30 rows=1834 width=4) (actual time=0.129..303.981 rows=1454 loops=1)\n -> Nested Loop (cost=0.00..628767.72 rows=1834 width=4) (actual time=0.120..289.961 rows=1835 loops=1)\n -> Nested Loop (cost=0.00..627027.98 rows=1865 width=4) (actual time=0.099..237.128 rows=1876 loops=1)\n -> Index Scan using marinerstates_marinerid on marinerstates msold (cost=0.00..626316.07 rows=2299 width=8) (actual time=0.079..186.150 rows=1876 loops=1)\n Filter: (((state)::text = 'Active'::text) AND 
((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date) AND (subplan))\n SubPlan\n -> Bitmap Heap Scan on marinerstates msold2 (cost=4.28..12.11 rows=1 width=0) (actual time=0.020..0.020 rows=1 loops=2625)\n Recheck Cond: ((marinerid = $0) AND (starttime < $2))\n Filter: ((id <> $1) AND ((state)::text = 'Active'::text) AND (($2 - endtime) <= '1 year 6 mons'::interval))\n -> Bitmap Index Scan on marinerstates_marinerid_starttime (cost=0.00..4.28 rows=2 width=0) (actual time=0.009..0.009 rows=6 loops=2625)\n Index Cond: ((marinerid = $0) AND (starttime < $2))\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.30 rows=1 width=4) (actual time=0.006..0.010 rows=1 loops=1876)\n Index Cond: (vslold.id = msold.vslid)\n -> Index Scan using mariner_pkey on mariner mold (cost=0.00..0.92 rows=1 width=4) (actual time=0.007..0.012 rows=1 loops=1876)\n Index Cond: (mold.id = msold.marinerid)\n Filter: ((mold.marinertype)::text = 'Mariner'::text)\n -> Hash (cost=17.81..17.81 rows=281 width=4) (actual time=2.491..2.491 rows=273 loops=1)\n -> Seq Scan on vessels vsl (cost=0.00..17.81 rows=281 width=4) (actual time=0.012..1.306 rows=273 loops=1)\n Total runtime: 553.601 ms\n(33 rows)\n\nIs there any other data i could post (pg_stat,...) that would help?\n\nthanx a lot.\n\n> \n> Regards\n> \n> \n> Ing. Marcos Luís Ortíz Valmaseda\n> Linux User # 418229 && PostgreSQL DBA\n> Centro de Tecnologías Gestión de Datos (DATEC)\n> http://postgresql.uci.cu\n> http://www.postgresql.org\n> http://it.toolbox.com/blogs/sql-apprentice\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Tue, 18 Jan 2011 10:06:03 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Achilleas Mantzios wrote:\n> Thanx,\n>\n> Στις Monday 17 January 2011 18:52:27 ο/η Ing. Marcos Ortiz Valmaseda έγραψε:\n> \n>> Well, on the Release Notes on the PostgreSQL-8.4 Documentation, the developers recommend to use NOT EXISTS\n>> instead NOT IN, because the first clause has a better performance. So, you can use it on that way.\n>>\n>> \n> You mean this?\n> (from 8.4 changes)\n> \"Create explicit concepts of semi-joins and anti-joins (Tom)\n> This work formalizes our previous ad-hoc treatment of IN (SELECT\n> ...) clauses, and extends it to EXISTS and NOT EXISTS clauses. It\n> should result in significantly better planning of EXISTS and NOT\n> EXISTS queries. In general, logically equivalent IN and EXISTS\n> clauses should now have similar performance, whereas previously IN\n> often won.\"\n>\n> I haven't found any other recent reference to this issue. And this is far from what you suggest.\n> Here the entry talks about \"similar\" performance.\n>\n> Also a similar issue was hot back in 7.4 days :\n> \"IN / NOT IN subqueries are now much more efficient\n> In previous releases, IN/NOT IN subqueries were joined to the\n> upper query by sequentially scanning the subquery looking for a\n> match. The 7.4 code uses the same sophisticated techniques used\n> by ordinary joins and so is much faster. 
An IN will now usually\n> be as fast as or faster than an equivalent EXISTS subquery; this\n> reverses the conventional wisdom that applied to previous\n> releases.\"\n>\n> \n>> Other questions?\n>> - Do you have a partial index on marinerstates.marinerid where this condition is accomplished?\n>> \n> No, but i just tried it (on state='Active') with no impact.\n> \n>> - Do you have a index on mariner.id?\n>> \n> Yes, It is the primary key.\n> \n>> - Can you provide a explain of these queries on the PostgreSQL-9.0 machines?\n>> \n> Sure, first i'll post the table definitions and then some stats and then the epxlain analyze(s)\n>\n> mariner\n> =====\n> id | integer | not null default nextval(('public.mariner_id_seq'::text)::regclass)\n> givenname | character varying(200) |\n> midname | character varying(100) |\n> surname | character varying(200) | not null\n> ...\n> Indexes:\n> \"mariner_pkey\" PRIMARY KEY, btree (id)\n> \"mariner_smauid\" UNIQUE, btree (smauid)\n> \"mariner_username_key\" UNIQUE, btree (username)\n> \"mariner_nationalityid\" btree (nationalityid)\n> \"mariner_parentid\" btree (parentid)\n> \"mariner_surname\" btree (surname)\n>\n> marinerstates\n> ========\n> id | integer | not null default nextval(('public.marinerstates_id_seq'::text)::regclass)\n> marinerid | integer | not null\n> state | character varying(20) | not null\n> vslid | integer |\n> leave_period_days | integer |\n> comment | text |\n> starttime | timestamp with time zone | not null\n> endtime | timestamp with time zone |\n> trid | integer |\n> sal_bw | real | not null default 0.0\n> sal_ot | real | not null default 0.0\n> sal_lp | real | not null default 0.0\n> sal_misc | real | not null default 0.0\n> rankid | integer |\n> system_vslid | integer |\n> startport | text |\n> endport | text |\n> .....\n> Indexes:\n> \"marinerstates_pkey\" PRIMARY KEY, btree (id)\n> \"marinerstates_mariner_cur_state\" UNIQUE, btree (marinerid) WHERE endtime IS NULL\n> \"marinerstates_system_vslid\" UNIQUE, btree (marinerid, system_vslid)\n> \"marinerstates__system_vslid\" btree (system_vslid)\n> \"marinerstates_cur_mariners_states\" btree (endtime) WHERE endtime IS NULL\n> \"marinerstates_mariner_past_state\" btree (marinerid, starttime, endtime) WHERE endtime IS NOT NULL\n> \"marinerstates_marinerid\" btree (marinerid)\n> \"marinerstates_marinerid_starttime\" btree (marinerid, starttime)\n> \"marinerstates_rankid\" btree (rankid)\n> \"marinerstates_rankid_cur_mariners\" btree (rankid) WHERE endtime IS NULL\n> \"marinerstates_rankid_past_state\" btree (rankid, starttime, endtime) WHERE endtime IS NOT NULL\n> \"marinerstates_state\" btree (state)\n> \"marinerstates_state_cur_mariners\" btree (state) WHERE endtime IS NULL\n> \"marinerstates_state_past_state\" btree (state, starttime, endtime) WHERE endtime IS NOT NULL\n> \"marinerstates_vslid\" btree (vslid)\n> \"marinerstates_vslid_cur_mariners\" btree (vslid) WHERE endtime IS NULL\n> \"marinerstates_vslid_past_state\" btree (vslid, starttime, endtime) WHERE endtime IS NOT NULL\n>\n> vessels\n> =====\n> name | character varying(200) | not null\n> id | integer | not null default nextval(('public.vessels_id_seq'::text)::regclass)\n> ...\n> Indexes:\n> \"vessels_pkey\" PRIMARY KEY, btree (id)\n> \"vessels_name_key\" UNIQUE, btree (name)\n> \"idx_name\" btree (name)\n> \"vessels_flag\" btree (flag)\n> \"vessels_groupno\" btree (groupno)\n> \"vessels_vslstatus_idx\" btree (vslstatus)\n>\n> dynacom=# SELECT count(*) from mariner;\n> count\n> -------\n> 14447\n>\n> dynacom=# SELECT 
count(*) from marinerstates;\n> count\n> -------\n> 51013\n>\n> dynacom=# SELECT avg(marqry.cnt),stddev(marqry.cnt) FROM (SELECT m.id,count(ms.id) as cnt from mariner m, marinerstates ms WHERE m.id=ms.marinerid group by m.id) AS marqry;\n> avg | stddev\n> --------------------+--------------------\n> 3.5665944207508914 | 4.4416879361829170\n>\n> (vessels do not play any impact in the query, so i'll leave them out)\n>\n> Slow plan in 9.0.2 :\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=11525.09..11571.55 rows=3717 width=23) (actual time=10462.561..10462.937 rows=603 loops=1)\n> -> Sort (cost=11525.09..11534.38 rows=3717 width=23) (actual time=10462.560..10462.664 rows=603 loops=1)\n> Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n> Sort Method: quicksort Memory: 71kB\n> -> Hash Join (cost=8281.98..11304.67 rows=3717 width=23) (actual time=10425.261..10461.621 rows=603 loops=1)\n> Hash Cond: (ms.marinerid = m.id)\n> -> Hash Join (cost=20.12..2963.83 rows=3717 width=4) (actual time=0.228..34.993 rows=2625 loops=1)\n> Hash Cond: (ms.vslid = vsl.id)\n> -> Seq Scan on marinerstates ms (cost=0.00..2889.32 rows=4590 width=8) (actual time=0.011..33.494 rows=2625 loops=1)\n> Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n> -> Hash (cost=16.72..16.72 rows=272 width=4) (actual time=0.207..0.207 rows=272 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 10kB\n> -> Seq Scan on vessels vsl (cost=0.00..16.72 rows=272 width=4) (actual time=0.004..0.118 rows=272 loops=1)\n> -> Hash (cost=8172.57..8172.57 rows=7143 width=23) (actual time=10424.994..10424.994 rows=12832 loops=1)\n> Buckets: 1024 Batches: 1 Memory Usage: 702kB\n> -> Seq Scan on mariner m (cost=7614.86..8172.57 rows=7143 width=23) (actual time=10409.498..10419.971 rows=12832 loops=1)\n> Filter: ((NOT (hashed SubPlan 1)) AND ((marinertype)::text = 'Mariner'::text))\n> SubPlan 1\n> -> Unique (cost=2768.00..7614.86 rows=1 width=4) (actual time=87.495..10408.446 rows=1454 loops=1)\n> -> Nested Loop (cost=2768.00..7614.86 rows=1 width=4) (actual time=87.493..10407.517 rows=1835 loops=1)\n> Join Filter: (msold.marinerid = mold.id)\n> -> Index Scan using mariner_pkey on mariner mold (cost=0.00..1728.60 rows=14286 width=4) (actual time=0.007..13.931 rows=14286 loops=1)\n> Filter: ((marinertype)::text = 'Mariner'::text)\n> -> Materialize (cost=2768.00..5671.97 rows=1 width=8) (actual time=0.003..0.330 rows=1876 loops=14286)\n> -> Nested Loop (cost=2768.00..5671.96 rows=1 width=8) (actual time=39.723..85.401 rows=1876 loops=1)\n> -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.708..81.501 rows=1876 loops=1)\n> Hash Cond: (msold.marinerid = msold2.marinerid)\n> Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n> -> Seq Scan on marinerstates msold (cost=0.00..2889.32 rows=4590 width=20) (actual time=0.003..33.952 rows=2625 loops=1)\n> Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n> -> Hash (cost=2251.66..2251.66 rows=41307 width=24) (actual 
time=39.613..39.613 rows=41250 loops=1)\n> Buckets: 8192 Batches: 1 Memory Usage: 2246kB\n> -> Seq Scan on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24) (actual time=0.002..24.882 rows=41250 loops=1)\n> Filter: ((state)::text = 'Active'::text)\n> -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.28 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1876)\n> Index Cond: (vslold.id = msold.vslid)\n> Total runtime: 10463.619 ms\n> (37 rows)\n>\n> Fast plan in 8.3.13 :\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Unique (cost=633677.56..633700.48 rows=1834 width=23) (actual time=543.684..551.003 rows=603 loops=1)\n> -> Sort (cost=633677.56..633682.14 rows=1834 width=23) (actual time=543.676..546.070 rows=603 loops=1)\n> Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n> Sort Method: quicksort Memory: 53kB\n> -> Hash Join (cost=630601.65..633578.15 rows=1834 width=23) (actual time=439.969..540.573 rows=603 loops=1)\n> Hash Cond: (ms.vslid = vsl.id)\n> -> Hash Join (cost=630580.33..633530.01 rows=2261 width=27) (actual time=437.459..532.847 rows=603 loops=1)\n> Hash Cond: (ms.marinerid = m.id)\n> -> Seq Scan on marinerstates ms (cost=0.00..2875.32 rows=4599 width=8) (actual time=0.017..80.153 rows=2625 loops=1)\n> Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n> -> Hash (cost=630491.54..630491.54 rows=7103 width=23) (actual time=437.307..437.307 rows=12832 loops=1)\n> -> Index Scan using mariner_pkey on mariner m (cost=628776.89..630491.54 rows=7103 width=23) (actual time=311.023..380.168 rows=12832 loops=1)\n> Filter: ((NOT (hashed subplan)) AND ((marinertype)::text = 'Mariner'::text))\n> SubPlan\n> -> Unique (cost=0.00..628772.30 rows=1834 width=4) (actual time=0.129..303.981 rows=1454 loops=1)\n> -> Nested Loop (cost=0.00..628767.72 rows=1834 width=4) (actual time=0.120..289.961 rows=1835 loops=1)\n> -> Nested Loop (cost=0.00..627027.98 rows=1865 width=4) (actual time=0.099..237.128 rows=1876 loops=1)\n> -> Index Scan using marinerstates_marinerid on marinerstates msold (cost=0.00..626316.07 rows=2299 width=8) (actual time=0.079..186.150 rows=1876 loops=1)\n> Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date) AND (subplan))\n> SubPlan\n> -> Bitmap Heap Scan on marinerstates msold2 (cost=4.28..12.11 rows=1 width=0) (actual time=0.020..0.020 rows=1 loops=2625)\n> Recheck Cond: ((marinerid = $0) AND (starttime < $2))\n> Filter: ((id <> $1) AND ((state)::text = 'Active'::text) AND (($2 - endtime) <= '1 year 6 mons'::interval))\n> -> Bitmap Index Scan on marinerstates_marinerid_starttime (cost=0.00..4.28 rows=2 width=0) (actual time=0.009..0.009 rows=6 loops=2625)\n> Index Cond: ((marinerid = $0) AND (starttime < $2))\n> -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.30 rows=1 width=4) (actual time=0.006..0.010 rows=1 loops=1876)\n> Index Cond: (vslold.id = msold.vslid)\n> -> Index Scan using mariner_pkey on mariner mold (cost=0.00..0.92 rows=1 width=4) (actual time=0.007..0.012 rows=1 loops=1876)\n> Index Cond: (mold.id = msold.marinerid)\n> Filter: ((mold.marinertype)::text = 
'Mariner'::text)\n> -> Hash (cost=17.81..17.81 rows=281 width=4) (actual time=2.491..2.491 rows=273 loops=1)\n> -> Seq Scan on vessels vsl (cost=0.00..17.81 rows=281 width=4) (actual time=0.012..1.306 rows=273 loops=1)\n> Total runtime: 553.601 ms\n> (33 rows)\n>\n> Is there any other data i could post (pg_stat,...) that would help?\n>\n> thanx a lot.\n>\n> \n>> Regards\n>>\n>>\n>> Ing. Marcos Luís Ortíz Valmaseda\n>> Linux User # 418229 && PostgreSQL DBA\n>> Centro de Tecnologías Gestión de Datos (DATEC)\n>> http://postgresql.uci.cu\n>> http://www.postgresql.org\n>> http://it.toolbox.com/blogs/sql-apprentice\n>>\n>> \n>\n>\n>\n> --\n> Achilleas Mantzios\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> \nAchilleas, here is the slow part from 9.02:\n\n -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.708..81.501 rows=1876 loops=1)\n Hash Cond: (msold.marinerid = msold2.marinerid)\n Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n -> Seq Scan on marinerstates msold (cost=0.00..2889.32 rows=4590 width=20) (actual time=0.003..33.952 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=2251.66..2251.66 rows=41307 width=24) (actual time=39.613..39.613 rows=41250 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 2246kB\n -> Seq Scan on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24) (actual time=0.002..24.882 \n\n\nThe same part from 8.3.3 looks like this:\n\nSeq Scan on marinerstates ms (cost=0.00..2875.32 rows=4599 width=8) (actual time=0.017..80.153 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=630491.54..630491.54 rows=7103 width=23) (actual time=437.307..437.307 rows=12832 loops=1)\n -> Index Scan using mariner_pkey on mariner m (cost=628776.89..630491.54 rows=7103 width=23) (actual time=311.023..380.168 rows=12832 loops=1)\n Filter: ((NOT (hashed subplan)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan\n -> Unique (cost=0.00..628772.30 rows=1834 width=4) (actual time=0.129..303.981 rows=1454 loops=1)\n -> Nested Loop (cost=0.00..628767.72 rows=1834 width=4) (actual time=0.120..289.961 rows=1835 loops=1)\n\n\nThis leads me to the conclusion that the queries differ significantly. \n8.3.3 mentions NOT hashed plan, I don't see it in 9.02 and the filtering \nconditions look differently. Are you sure that the plans are from the \nsame query?\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Tue, 18 Jan 2011 09:26:21 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13\n\t- NOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Στις Tuesday 18 January 2011 16:26:21 ο/η Mladen Gogala έγραψε:\n\n> This leads me to the conclusion that the queries differ significantly. \n> 8.3.3 mentions NOT hashed plan, I don't see it in 9.02 and the filtering \n> conditions look differently. 
Are you sure that the plans are from the \n> same query?\n\nFirst the num of rows in the two portions are different so you might be comparing apples and oranges here.\nAnyway, i will repost the EXPLAIN plans by copying pasting the query, without the analyze part.\n\n8.3.13\n\nUnique (cost=633677.56..633700.48 rows=1834 width=23)\n -> Sort (cost=633677.56..633682.14 rows=1834 width=23)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n -> Hash Join (cost=630601.65..633578.15 rows=1834 width=23)\n Hash Cond: (ms.vslid = vsl.id)\n -> Hash Join (cost=630580.33..633530.01 rows=2261 width=27)\n Hash Cond: (ms.marinerid = m.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2875.32 rows=4599 width=8)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=630491.54..630491.54 rows=7103 width=23)\n -> Index Scan using mariner_pkey on mariner m (cost=628776.89..630491.54 rows=7103 width=23)\n Filter: ((NOT (hashed subplan)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan\n -> Unique (cost=0.00..628772.30 rows=1834 width=4)\n -> Nested Loop (cost=0.00..628767.72 rows=1834 width=4)\n -> Nested Loop (cost=0.00..627027.98 rows=1865 width=4)\n -> Index Scan using marinerstates_marinerid on marinerstates msold (cost=0.00..626316.07 rows=2299 width=8)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date) AND (subplan))\n SubPlan\n -> Bitmap Heap Scan on marinerstates msold2 (cost=4.28..12.11 rows=1 width=0)\n Recheck Cond: ((marinerid = $0) AND (starttime < $2))\n Filter: ((id <> $1) AND ((state)::text = 'Active'::text) AND (($2 - endtime) <= '1 year 6 mons'::interval))\n -> Bitmap Index Scan on marinerstates_marinerid_starttime (cost=0.00..4.28 rows=2 width=0)\n Index Cond: ((marinerid = $0) AND (starttime < $2))\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.30 rows=1 width=4)\n Index Cond: (vslold.id = msold.vslid)\n -> Index Scan using mariner_pkey on mariner mold (cost=0.00..0.92 rows=1 width=4)\n Index Cond: (mold.id = msold.marinerid)\n Filter: ((mold.marinertype)::text = 'Mariner'::text)\n -> Hash (cost=17.81..17.81 rows=281 width=4)\n -> Seq Scan on vessels vsl (cost=0.00..17.81 rows=281 width=4)\n(31 rows)\n\n9.0.2\n\nUnique (cost=11525.09..11571.55 rows=3717 width=23)\n -> Sort (cost=11525.09..11534.38 rows=3717 width=23)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n -> Hash Join (cost=8281.98..11304.67 rows=3717 width=23)\n Hash Cond: (ms.marinerid = m.id)\n -> Hash Join (cost=20.12..2963.83 rows=3717 width=4)\n Hash Cond: (ms.vslid = vsl.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2889.32 rows=4590 width=8)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=16.72..16.72 rows=272 width=4)\n -> Seq Scan on vessels vsl (cost=0.00..16.72 rows=272 width=4)\n -> Hash (cost=8172.57..8172.57 rows=7143 width=23)\n -> Seq Scan on mariner m (cost=7614.86..8172.57 rows=7143 width=23)\n Filter: ((NOT (hashed SubPlan 1)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan 1\n -> Unique (cost=2768.00..7614.86 rows=1 width=4)\n -> Nested Loop (cost=2768.00..7614.86 rows=1 width=4)\n Join Filter: 
(msold.marinerid = mold.id)\n -> Index Scan using mariner_pkey on mariner mold (cost=0.00..1728.60 rows=14286 width=4)\n Filter: ((marinertype)::text = 'Mariner'::text)\n -> Materialize (cost=2768.00..5671.97 rows=1 width=8)\n -> Nested Loop (cost=2768.00..5671.96 rows=1 width=8)\n -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12)\n Hash Cond: (msold.marinerid = msold2.marinerid)\n Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n -> Seq Scan on marinerstates msold (cost=0.00..2889.32 rows=4590 width=20)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=2251.66..2251.66 rows=41307 width=24)\n -> Seq Scan on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24)\n Filter: ((state)::text = 'Active'::text)\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.28 rows=1 width=4)\n Index Cond: (vslold.id = msold.vslid)\n(32 rows)\n\n\n\n> \n> -- \n> Mladen Gogala \n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com \n> \n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Wed, 19 Jan 2011 11:10:05 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Achilleas Mantzios <[email protected]> writes:\n> Anyway, i will repost the EXPLAIN plans by copying pasting the query, without the analyze part.\n\nPlease show EXPLAIN ANALYZE, not just EXPLAIN, results. When\ncomplaining that the planner did the wrong thing, it's not very helpful\nto see only its estimates and not reality.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 19 Jan 2011 12:26:56 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Στις Wednesday 19 January 2011 19:26:56 ο/η Tom Lane έγραψε:\n> Achilleas Mantzios <[email protected]> writes:\n> > Anyway, i will repost the EXPLAIN plans by copying pasting the query, without the analyze part.\n> \n> Please show EXPLAIN ANALYZE, not just EXPLAIN, results. When\n> complaining that the planner did the wrong thing, it's not very helpful\n> to see only its estimates and not reality.\n\nI did so two posts before but one more won't do any harm. 
Here we go:\n\n9.0.2 - SLOW\n\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=11525.09..11571.55 rows=3717 width=23) (actual time=10439.797..10440.152 rows=603 loops=1)\n -> Sort (cost=11525.09..11534.38 rows=3717 width=23) (actual time=10439.795..10439.905 rows=603 loops=1)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n Sort Method: quicksort Memory: 71kB\n -> Hash Join (cost=8281.98..11304.67 rows=3717 width=23) (actual time=10402.338..10438.875 rows=603 loops=1)\n Hash Cond: (ms.marinerid = m.id)\n -> Hash Join (cost=20.12..2963.83 rows=3717 width=4) (actual time=0.228..35.178 rows=2625 loops=1)\n Hash Cond: (ms.vslid = vsl.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2889.32 rows=4590 width=8) (actual time=0.015..33.634 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=16.72..16.72 rows=272 width=4) (actual time=0.203..0.203 rows=272 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 10kB\n -> Seq Scan on vessels vsl (cost=0.00..16.72 rows=272 width=4) (actual time=0.004..0.117 rows=272 loops=1)\n -> Hash (cost=8172.57..8172.57 rows=7143 width=23) (actual time=10402.075..10402.075 rows=12832 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 702kB\n -> Seq Scan on mariner m (cost=7614.86..8172.57 rows=7143 width=23) (actual time=10386.549..10397.193 rows=12832 loops=1)\n Filter: ((NOT (hashed SubPlan 1)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan 1\n -> Unique (cost=2768.00..7614.86 rows=1 width=4) (actual time=86.937..10385.379 rows=1454 loops=1)\n -> Nested Loop (cost=2768.00..7614.86 rows=1 width=4) (actual time=86.936..10384.555 rows=1835 loops=1)\n Join Filter: (msold.marinerid = mold.id)\n -> Index Scan using mariner_pkey on mariner mold (cost=0.00..1728.60 rows=14286 width=4) (actual time=0.007..14.250 rows=14286 loops=1)\n Filter: ((marinertype)::text = 'Mariner'::text)\n -> Materialize (cost=2768.00..5671.97 rows=1 width=8) (actual time=0.003..0.328 rows=1876 loops=14286)\n -> Nested Loop (cost=2768.00..5671.96 rows=1 width=8) (actual time=39.259..84.889 rows=1876 loops=1)\n -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.249..81.025 rows=1876 loops=1)\n Hash Cond: (msold.marinerid = msold2.marinerid)\n Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n -> Seq Scan on marinerstates msold (cost=0.00..2889.32 rows=4590 width=20) (actual time=0.003..33.964 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=2251.66..2251.66 rows=41307 width=24) (actual time=39.156..39.156 rows=41250 loops=1)\n Buckets: 8192 Batches: 1 Memory Usage: 2246kB\n -> Seq Scan on marinerstates msold2 (cost=0.00..2251.66 rows=41307 width=24) (actual time=0.002..24.552 rows=41250 loops=1)\n Filter: ((state)::text = 'Active'::text)\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.28 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1876)\n Index Cond: (vslold.id = msold.vslid)\n Total runtime: 10440.690 
ms\n(37 rows)\n\n\n8.3.13 - FAST\n\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Unique (cost=633677.56..633700.48 rows=1834 width=23) (actual time=551.166..558.487 rows=603 loops=1)\n -> Sort (cost=633677.56..633682.14 rows=1834 width=23) (actual time=551.156..553.548 rows=603 loops=1)\n Sort Key: m.surname, (COALESCE(m.givenname, ''::character varying)), (COALESCE(m.midname, ''::character varying)), m.id\n Sort Method: quicksort Memory: 53kB\n -> Hash Join (cost=630601.65..633578.15 rows=1834 width=23) (actual time=447.773..547.934 rows=603 loops=1)\n Hash Cond: (ms.vslid = vsl.id)\n -> Hash Join (cost=630580.33..633530.01 rows=2261 width=27) (actual time=445.320..540.291 rows=603 loops=1)\n Hash Cond: (ms.marinerid = m.id)\n -> Seq Scan on marinerstates ms (cost=0.00..2875.32 rows=4599 width=8) (actual time=0.018..79.742 rows=2625 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n -> Hash (cost=630491.54..630491.54 rows=7103 width=23) (actual time=445.216..445.216 rows=12832 loops=1)\n -> Index Scan using mariner_pkey on mariner m (cost=628776.89..630491.54 rows=7103 width=23) (actual time=319.675..388.383 rows=12832 loops=1)\n Filter: ((NOT (hashed subplan)) AND ((marinertype)::text = 'Mariner'::text))\n SubPlan\n -> Unique (cost=0.00..628772.30 rows=1834 width=4) (actual time=0.196..312.728 rows=1454 loops=1)\n -> Nested Loop (cost=0.00..628767.72 rows=1834 width=4) (actual time=0.187..298.780 rows=1835 loops=1)\n -> Nested Loop (cost=0.00..627027.98 rows=1865 width=4) (actual time=0.165..244.706 rows=1876 loops=1)\n -> Index Scan using marinerstates_marinerid on marinerstates msold (cost=0.00..626316.07 rows=2299 width=8) (actual time=0.138..194.165 rows=1876 loops=1)\n Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date) AND (subplan))\n SubPlan\n -> Bitmap Heap Scan on marinerstates msold2 (cost=4.28..12.11 rows=1 width=0) (actual time=0.020..0.020 rows=1 loops=2625)\n Recheck Cond: ((marinerid = $0) AND (starttime < $2))\n Filter: ((id <> $1) AND ((state)::text = 'Active'::text) AND (($2 - endtime) <= '1 year 6 mons'::interval))\n -> Bitmap Index Scan on marinerstates_marinerid_starttime (cost=0.00..4.28 rows=2 width=0) (actual time=0.009..0.009 rows=6 loops=2625)\n Index Cond: ((marinerid = $0) AND (starttime < $2))\n -> Index Scan using vessels_pkey on vessels vslold (cost=0.00..0.30 rows=1 width=4) (actual time=0.006..0.010 rows=1 loops=1876)\n Index Cond: (vslold.id = msold.vslid)\n -> Index Scan using mariner_pkey on mariner mold (cost=0.00..0.92 rows=1 width=4) (actual time=0.008..0.012 rows=1 loops=1876)\n Index Cond: (mold.id = msold.marinerid)\n Filter: ((mold.marinertype)::text = 'Mariner'::text)\n -> Hash (cost=17.81..17.81 rows=281 width=4) (actual time=2.432..2.432 rows=273 loops=1)\n -> Seq Scan on vessels vsl (cost=0.00..17.81 rows=281 width=4) (actual time=0.033..1.220 rows=273 loops=1)\n Total runtime: 561.208 ms\n(33 rows)\n\n\n\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Thu, 20 Jan 2011 09:05:25 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially 
slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "On Thu, Jan 20, 2011 at 2:05 AM, Achilleas Mantzios\n<[email protected]> wrote:\n>                                                     ->  Hash Semi Join  (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.249..81.025 rows=1876 loops=1)\n>                                                           Hash Cond: (msold.marinerid = msold2.marinerid)\n>                                                           Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n>                                                           ->  Seq Scan on marinerstates msold  (cost=0.00..2889.32 rows=4590 width=20) (actual time=0.003..33.964 rows=2625 loops=1)\n>                                                                 Filter: (((state)::text = 'Active'::text) AND ((starttime)::date <= '2007-01-11'::date) AND ((COALESCE(endtime, now()))::date >= '2006-07-15'::date))\n>                                                           ->  Hash  (cost=2251.66..2251.66 rows=41307 width=24) (actual time=39.156..39.156 rows=41250 loops=1)\n>                                                                 Buckets: 8192  Batches: 1  Memory Usage: 2246kB\n>                                                                 ->  Seq Scan on marinerstates msold2  (cost=0.00..2251.66 rows=41307 width=24) (actual time=0.002..24.552 rows=41250 loops=1)\n>                                                                       Filter: ((state)::text = 'Active'::text)\n\nLooks like the bad selectivity estimate there is what's killing it.\nNot sure I completely understand why 9.0.2 is coming up with such a\nbad estimate, though.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 21 Jan 2011 12:09:12 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 -\n\tNOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "On 1/21/2011 12:09 PM, Robert Haas wrote:\n> Looks like the bad selectivity estimate there is what's killing it.\n> Not sure I completely understand why 9.0.2 is coming up with such a\n> bad estimate, though.\n>\n\nI would recommend setting default_statistics_target to 1024 and \neffective cache size to 20480MB and see what happens.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 21 Jan 2011 12:42:37 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13\n\t- NOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "On Fri, Jan 21, 2011 at 12:42 PM, Mladen Gogala\n<[email protected]> wrote:\n> On 1/21/2011 12:09 PM, Robert Haas wrote:\n>>\n>> Looks like the bad selectivity estimate there is what's killing it.\n>> Not sure I completely understand why 9.0.2 is coming up with such a\n>> bad estimate, though.\n>>\n>\n> I would recommend setting default_statistics_target to 1024 and effective\n> cache size to 20480MB and see what happens.\n\nI am starting to suspect that there is a bug in the join selectivity\nlogic in 9.0. 
We've had a few complaints where the join was projected\nto return more rows than the product of the inner side and outer side\nof the join, which is clearly nonsense. I read the function and I\ndon't see anything weird... and it clearly can't be too bad or we\nwould have had more complaints... but...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 21 Jan 2011 12:51:17 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 -\n\tNOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "On 1/21/2011 12:51 PM, Robert Haas wrote:\n> I am starting to suspect that there is a bug in the join selectivity\n> logic in 9.0. We've had a few complaints where the join was projected\n> to return more rows than the product of the inner side and outer side\n> of the join, which is clearly nonsense. I read the function and I\n> don't see anything weird... and it clearly can't be too bad or we\n> would have had more complaints... but...\n\nWell the way to test it would be to take the function from 8.3, input \nthe same arguments and see if there is any difference with the results.\n\n-- \n\nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com\nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 21 Jan 2011 13:12:55 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13\n\t- NOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Thu, Jan 20, 2011 at 2:05 AM, Achilleas Mantzios\n> <[email protected]> wrote:\n>> -> Hash Semi Join (cost=2768.00..5671.67 rows=1 width=12) (actual time=39.249..81.025 rows=1876 loops=1)\n>> Hash Cond: (msold.marinerid = msold2.marinerid)\n>> Join Filter: ((msold2.id <> msold.id) AND (msold2.starttime < msold.starttime) AND ((msold.starttime - msold2.endtime) <= '1 year 6 mons'::interval))\n\n> Looks like the bad selectivity estimate there is what's killing it.\n> Not sure I completely understand why 9.0.2 is coming up with such a\n> bad estimate, though.\n\nHm ... it's the <> clause. 
Look at this, in the regression database:\n\nregression=# explain analyze select * from tenk1 a where exists(select 1 from tenk1 b where a.hundred = b.hundred);\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------\n Nested Loop Semi Join (cost=0.00..1134.65 rows=10000 width=244) (actual time=0.362..960.732 rows=10000 loops=1)\n -> Seq Scan on tenk1 a (cost=0.00..458.00 rows=10000 width=244) (actual time=0.070..45.287 rows=10000 loops=1)\n -> Index Scan using tenk1_hundred on tenk1 b (cost=0.00..2.16 rows=100 width=4) (actual time=0.073..0.073 rows=1 loops=10000)\n Index Cond: (hundred = a.hundred)\n Total runtime: 996.990 ms\n(5 rows)\n\nregression=# explain analyze select * from tenk1 a where exists(select 1 from tenk1 b where a.hundred = b.hundred and a.thousand <> b.thousand);\n QUERY PLAN \n------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=583.00..1078.50 rows=1 width=244) (actual time=142.738..344.823 rows=10000 loops=1)\n Hash Cond: (a.hundred = b.hundred)\n Join Filter: (a.thousand <> b.thousand)\n -> Seq Scan on tenk1 a (cost=0.00..458.00 rows=10000 width=244) (actual time=0.051..44.137 rows=10000 loops=1)\n -> Hash (cost=458.00..458.00 rows=10000 width=8) (actual time=142.526..142.526 rows=10000 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 313kB\n -> Seq Scan on tenk1 b (cost=0.00..458.00 rows=10000 width=8) (actual time=0.027..71.778 rows=10000 loops=1)\n Total runtime: 384.017 ms\n(8 rows)\n\n(This is with enable_hashagg off, to make the two plans more obviously\ncomparable; but that's cosmetic. The important point is that the join\nrowcount estimate is dead on in the first case and dead wrong in the\nsecond.)\n\nSome digging turns up the fact that the semi-join selectivity of\n\"a.thousand <> b.thousand\" is being estimated at *zero*. This is\nbecause the semi-join selectivity of \"a.thousand = b.thousand\" is\nestimated at 1.0 (correctly: every row of a has at least one join\npartner in b). And then neqjoinsel is computed as 1 - eqjoinsel,\nwhich is a false conclusion for semijoins: joining to at least one row\ndoesn't mean joining to every row.\n\nI'm a bit inclined to fix this by having neqjoinsel hard-wire a result\nof 1 for semi and anti join cases --- that is, assume there's always\nat least one inner row that isn't equal to the outer row. That's\npresumably too high for real-world cases where the clause is probably\nbeing used together with other, correlated, clauses; but we've got no\ninfo available that would help narrow that down. The best we can do\nhere is a forced estimate. 
If it should be less than 1, then what?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jan 2011 14:26:46 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n \n> If it should be less than 1, then what?\n \n1 - (estimated tuples / estimated distinct values) ?\n \n-Kevin\n\n", "msg_date": "Fri, 21 Jan 2011 14:00:44 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than\n\t8.3.13 - NOT EXISTS runs fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> Tom Lane <[email protected]> wrote:\n>> If it should be less than 1, then what?\n \n> 1 - (estimated tuples / estimated distinct values) ?\n\nUh, no. The number we're after is the probability that an outer tuple\nhas at least one unequal value in the inner relation. This is not 1\nminus the probability that a *specific* inner value is equal, which is\nwhat I think your formula is estimating.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jan 2011 15:22:24 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" }, { "msg_contents": "Στις Friday 21 January 2011 22:22:24 ο/η Tom Lane έγραψε:\n> \"Kevin Grittner\" <[email protected]> writes:\n> > Tom Lane <[email protected]> wrote:\n> >> If it should be less than 1, then what?\n> \n> > 1 - (estimated tuples / estimated distinct values) ?\n> \n> Uh, no. The number we're after is the probability that an outer tuple\n> has at least one unequal value in the inner relation. This is not 1\n> minus the probability that a *specific* inner value is equal, which is\n> what I think your formula is estimating.\n\nIsn't this probablity (an outer tuple has at least one unequal value in the inner relation) \n= 1 - (probability that all values in the inner relation are equal to the value of the outer tuple)\n\nAnyways, glad to see smth came out of this.\nThx\n\n> \n> \t\t\tregards, tom lane\n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Mon, 24 Jan 2011 17:40:40 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: \"NOT IN\" substantially slower in 9.0.2 than 8.3.13 - NOT EXISTS\n\truns fast in both 8.3.13 and 9.0.2" } ]
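The query itself is never quoted in this thread, only its plans, so the sketch below is just the general shape of the NOT IN / NOT EXISTS rewrite being discussed, reusing the mariner/marinerstates names visible in the EXPLAIN output; the filter conditions are placeholders, not the real ones.

-- NOT IN form (the one that regressed on 9.0.2). Note also that NOT IN
-- returns no rows at all if the subquery can produce a NULL marinerid.
SELECT m.id
FROM   mariner m
WHERE  m.marinertype = 'Mariner'
  AND  m.id NOT IN (SELECT msold.marinerid
                    FROM   marinerstates msold
                    WHERE  msold.state = 'Active');   -- ...plus the real date conditions

-- NOT EXISTS form, reported fast on both 8.3.13 and 9.0.2:
SELECT m.id
FROM   mariner m
WHERE  m.marinertype = 'Mariner'
  AND  NOT EXISTS (SELECT 1
                   FROM   marinerstates msold
                   WHERE  msold.marinerid = m.id
                     AND  msold.state = 'Active');    -- ...plus the real date conditions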
[ { "msg_contents": "Zotov wrote:\n \n> select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)\n \n> Why SeqScan???\n \nBecause you don't have an index on AsInteger(c.id).\n \nIf your function is IMMUTABLE (each possible combination of input\nvalues always yields the same result), and you declare it such, then\nyou can index on the function, and it will perform at a speed similar\nto the other example.\n \n-Kevin\n", "msg_date": "Mon, 17 Jan 2011 14:51:24 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bad plan when join on function" }, { "msg_contents": "2011/1/17 Kevin Grittner <[email protected]>:\n> Zotov  wrote:\n>\n>> select c.id from OneRow c join abstract a on a.id=AsInteger(c.id)\n>\n>> Why SeqScan???\n>\n> Because you don't have an index on AsInteger(c.id).\n>\n> If your function is IMMUTABLE (each possible combination of input\n> values always yields the same result), and you declare it such, then\n> you can index on the function, and it will perform at a speed similar\n> to the other example.\n\nit should to work without functional index - but not sure about effectivity\n\npostgres=# explain select 1 from a join b on a.f = sin(b.f);\n QUERY PLAN\n-----------------------------------------------------------------------------\n Merge Join (cost=809.39..1352.64 rows=10000 width=0)\n Merge Cond: (a.f = (sin(b.f)))\n -> Index Scan using a_f_idx on a (cost=0.00..318.25 rows=10000 width=8)\n -> Sort (cost=809.39..834.39 rows=10000 width=8)\n Sort Key: (sin(b.f))\n -> Seq Scan on b (cost=0.00..145.00 rows=10000 width=8)\n(6 rows)\n\nbut functional index always helps\n\npostgres=# create index on b((sin(f)));\nCREATE INDEX\npostgres=# explain select 1 from a join b on a.f = sin(b.f);\n QUERY PLAN\n-------------------------------------------------------------------------------\n Merge Join (cost=0.00..968.50 rows=10000 width=0)\n Merge Cond: (a.f = sin(b.f))\n -> Index Scan using a_f_idx on a (cost=0.00..318.25 rows=10000 width=8)\n -> Index Scan using b_sin_idx on b (cost=0.00..450.25 rows=10000 width=8)\n(4 rows)\n\nregards\n\nPavel Stehule\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 17 Jan 2011 22:24:33 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan when join on function" }, { "msg_contents": "Pavel Stehule <[email protected]> writes:\n> it should to work without functional index - but not sure about effectivity\n\nAs long as the function is VOLATILE, the planner can't use any\nintelligent query plan. Merge or hash join both require at least\nstable join keys.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Jan 2011 16:33:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan when join on function " }, { "msg_contents": "2011/1/17 Tom Lane <[email protected]>:\n> Pavel Stehule <[email protected]> writes:\n>> it should to work without functional index - but not sure about effectivity\n>\n> As long as the function is VOLATILE, the planner can't use any\n> intelligent query plan.  
Merge or hash join both require at least\n> stable join keys.\n\nsure, my first advice was a question about function volatility - and\nmy sentence was related to using immutable function.\n\nregards\n\nPavel Stehule\n\n>\n>                        regards, tom lane\n>\n", "msg_date": "Mon, 17 Jan 2011 22:37:01 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad plan when join on function" } ]
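Kevin's and Pavel's suggestions, applied to the query at the top of this thread, would look roughly like the sketch below; the real definition of AsInteger is not shown anywhere in the thread, so the argument type used here is only an assumed stand-in.

-- Step 1: if AsInteger really does return the same result for the same
-- input every time, declare it so (signature is a guess, use the real one);
-- with a VOLATILE function the planner cannot use a merge or hash join:
ALTER FUNCTION asinteger(text) IMMUTABLE;

-- Step 2: an expression index on the join key then becomes possible,
-- mirroring Pavel's  create index on b((sin(f)))  example:
CREATE INDEX onerow_asinteger_id_idx ON onerow (asinteger(id));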
[ { "msg_contents": "Hi,\n\nWe are in the process of moving a web based application from a MySql to Postgresql database.\nOur main reason for moving to Postgresql is problems with MySql (MyISAM) table locking.\nWe will buy a new set of servers to run the Postgresql databases.\n\nThe current setup is five Dell PowerEdge 2950 with 2 * XEON E5410, 4GB RAM. PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1 spare).\n\nOne server is used for shared data.\nFour servers are used for sharded data. A user in the system only has data in one of the shards.\nThere is another server to which all data is replicated but I'll leave that one out of this discussion.\nThese are dedicated database servers. There are more or less no stored procedures. The shared database size is about 20GB and each shard database is about 40GB (total of 20 + 40 * 4 = 180GB). I would expect the size will grow 10%-15% this year. Server load might increase with 15%-30% this year. This setup is disk I/O bound. The overwhelming majority of sql statements are fast (typically single row selects, updates, inserts and deletes on primary key) but there are some slow long running (10min) queries.\n\nAs new server we are thinking of PowerEdge R510, 1 * Xeon X5650, 24Gb RAM, H700 512MB NV Cache.\nDell has offered two alternative SSDs:\nSamsung model SS805 (100GB Solid State Disk SATA 2.5\").\n(http://www.plianttechnology.com/lightning_lb.php)\nPliant model LB 150S (149GB Solid State Drive SAS 3Gbps 2.5\"). (http://www.samsung.com/global/business/semiconductor/products/SSD/Products_Enterprise_SSD.html)\n\nBoth are SLC drives. The price of the Pliant is about 2,3 times the price of the Samsung (does it have twice the performance?).\n\nOne alternative is 5 servers (1 shared and 4 shards) with 5 Samsung drives (4 in RAID 10 + 1 spare).\nAnother alternative would be 3 servers (1 shared and 2 shards) with 5 Pliant drives (4 in RAID 10 + 1 spare). This would be slightly more expensive than the first alternative but would be easier to upgrade with two new shard servers when it's needed.\n\nAnyone have experience using the Samsung or the Pliant SSD? Any information about degraded performance over time?\nAny comments on the setups? How would an alternative with 15K disks (6 RAID 10 + 1 spare, or even 10 RAID10 + 1 spare) compare?\nHow would these alternatives compare in I/O performance compared to the old setup?\nAnyone care to guess how the two alternatives would compare in performance running Postgresql?\nHow would the hardware usage of Postgresql compare to MySqls?\n\n\n\nRegards\n/Lars\n", "msg_date": "Tue, 18 Jan 2011 11:56:54 +0100", "msg_from": "Lars <[email protected]>", "msg_from_op": true, "msg_subject": "Migrating to Postgresql and new hardware" }, { "msg_contents": "On 1/18/2011 4:56 AM, Lars wrote:\n> Hi,\n>\n> We are in the process of moving a web based application from a MySql\n> to Postgresql database. Our main reason for moving to Postgresql is\n> problems with MySql (MyISAM) table locking. We will buy a new set of\n> servers to run the Postgresql databases.\n>\n> The current setup is five Dell PowerEdge 2950 with 2 * XEON E5410,\n> 4GB RAM. PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1\n> spare).\n>\n> One server is used for shared data. Four servers are used for sharded\n> data. A user in the system only has data in one of the shards. There\n> is another server to which all data is replicated but I'll leave that\n> one out of this discussion. 
These are dedicated database servers.\n> There are more or less no stored procedures. The shared database size\n> is about 20GB and each shard database is about 40GB (total of 20 + 40\n> * 4 = 180GB). I would expect the size will grow 10%-15% this year.\n> Server load might increase with 15%-30% this year. This setup is disk\n> I/O bound. The overwhelming majority of sql statements are fast\n> (typically single row selects, updates, inserts and deletes on\n> primary key) but there are some slow long running (10min) queries.\n>\n\nNo idea what mysql thinks a shard is, but in PG we have read-only hot\nstandby's.\n\nThe standby database is exactly the same as the master (save a bit of \ndata that has not been synced yet.) I assume you know this... but I'd \nreally recommend trying out PG's hot-standby and make sure it works the \nway you need (because I bet its different than mysql's).\n\nAssuming the \"shared\" and the \"sharded\" databases are totally different \n(lets call them database a and c), with the PG setup you'd have database \na on one computer, then one master with database b on it (where all \nwrites go), then several hot-standby's mirroring database b (that \nsupport read-only queries).\n\nAs for the hardware, you'd better test it. Got any old servers you \ncould put a real-world workload on? Or just buy one new server for \ntesting? Its pretty hard to guess what your usage pattern is (70% read, \n small columns, no big blobs (like photos), etc)... and even then we'd \nstill have to guess.\n\nI can tell you, however, having your readers and writers not block each \nother is really nice.\n\nNot only will I not compare apples to oranges, but I really wont compare \napples in Canada to oranges in Japan. :-)\n\n-Andy\n", "msg_date": "Tue, 18 Jan 2011 13:17:23 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "oops, call them database 'a' and database 'b'.\n", "msg_date": "Tue, 18 Jan 2011 13:19:18 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "Are you going to RAID the SSD drives at all? You would likely be better off with a couple of things.\r\n\r\n- increasing ram to 256GB on each server to cache most of the databases. (easy, and cheaper than SSD)\r\n- move to fusionIO\r\n- move to SLC based SSD, warning not many raid controllers will get the performance out of the SSD's at this time.\r\n\r\nOf the three I would suggest #1, and #2, the cost of a SLC SSD raid will cost more than the fusionIO drive and still not match the fusionIO drive performance.\r\n\r\nOf course this is based on my experience, and I have my fireproof suit since I mentioned the word fusionIO :)\r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Lars\r\nSent: Tuesday, January 18, 2011 4:57 AM\r\nTo: [email protected]\r\nSubject: [PERFORM] Migrating to Postgresql and new hardware\r\n\r\nHi,\r\n\r\nWe are in the process of moving a web based application from a MySql to Postgresql database.\r\nOur main reason for moving to Postgresql is problems with MySql (MyISAM) table locking.\r\nWe will buy a new set of servers to run the Postgresql databases.\r\n\r\nThe current setup is five Dell PowerEdge 2950 with 2 * XEON E5410, 4GB RAM. 
PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1 spare).\r\n\r\nOne server is used for shared data.\r\nFour servers are used for sharded data. A user in the system only has data in one of the shards.\r\nThere is another server to which all data is replicated but I'll leave that one out of this discussion.\r\nThese are dedicated database servers. There are more or less no stored procedures. The shared database size is about 20GB and each shard database is about 40GB (total of 20 + 40 * 4 = 180GB). I would expect the size will grow 10%-15% this year. Server load might increase with 15%-30% this year. This setup is disk I/O bound. The overwhelming majority of sql statements are fast (typically single row selects, updates, inserts and deletes on primary key) but there are some slow long running (10min) queries.\r\n\r\nAs new server we are thinking of PowerEdge R510, 1 * Xeon X5650, 24Gb RAM, H700 512MB NV Cache.\r\nDell has offered two alternative SSDs:\r\nSamsung model SS805 (100GB Solid State Disk SATA 2.5\").\r\n(http://www.plianttechnology.com/lightning_lb.php)\r\nPliant model LB 150S (149GB Solid State Drive SAS 3Gbps 2.5\"). (http://www.samsung.com/global/business/semiconductor/products/SSD/Products_Enterprise_SSD.html)\r\n\r\nBoth are SLC drives. The price of the Pliant is about 2,3 times the price of the Samsung (does it have twice the performance?).\r\n\r\nOne alternative is 5 servers (1 shared and 4 shards) with 5 Samsung drives (4 in RAID 10 + 1 spare).\r\nAnother alternative would be 3 servers (1 shared and 2 shards) with 5 Pliant drives (4 in RAID 10 + 1 spare). This would be slightly more expensive than the first alternative but would be easier to upgrade with two new shard servers when it's needed.\r\n\r\nAnyone have experience using the Samsung or the Pliant SSD? Any information about degraded performance over time?\r\nAny comments on the setups? How would an alternative with 15K disks (6 RAID 10 + 1 spare, or even 10 RAID10 + 1 spare) compare?\r\nHow would these alternatives compare in I/O performance compared to the old setup?\r\nAnyone care to guess how the two alternatives would compare in performance running Postgresql?\r\nHow would the hardware usage of Postgresql compare to MySqls?\r\n\r\n\r\n\r\nRegards\r\n/Lars\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. 
Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Tue, 18 Jan 2011 17:06:17 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On Tue, 18 Jan 2011 16:06:17 -0600, Strange, John W \n<[email protected]> wrote:\n\n> Of course this is based on my experience, and I have my fireproof suit \n> since I mentioned the word fusionIO :)\n\nI'll throw a fire blanket up as well. We have a customer who has been \nrunning Fusion IO with Postgres for about 2 years. They get amazing \nperformance, but also aren't running fsync. They haven't had corruption \nwith OS crashes (they're very abusive to their CentOS install), but did \nwith a power outage (a UPS of ours went up in smoke; they weren't paying \nfor N+1 power). Now their data is mostly static; they run analytics once a \nday and if they have a problem they can reload yesterday's data and run \nthe analytics again to get back up to speed. If this is the type of stuff \nyou're doing and you can easily get your data back to a sane state by all \nmeans give FusionIO a whirl!\n\nThis customer did discuss this with me in length last time they stopped in \nand also pointed out that FusionIO was announced as being a major part of \nsome trading company or bank firm's database performance junk. I don't \nknow the details, but I think he said they were out of Chicago. If anyone \nknows what I'm talking about please share the link. Either way, it seems \nthat people are actually doing money transactions on FusionIO, so you can \neither take that as comforting reassurance or you can start getting really \nnervous about the stock market :-)\n\n\nRegards,\n\n\nMark\n\n\nPS, don't turn off fsync unless you know what you're doing.\n", "msg_date": "Tue, 18 Jan 2011 16:53:29 -0600", "msg_from": "\"Mark Felder\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "Comments in line, take em for what you paid for em.\n\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Lars\n> Sent: Tuesday, January 18, 2011 3:57 AM\n> To: [email protected]\n> Subject: [PERFORM] Migrating to Postgresql and new hardware\n> \n> Hi,\n> \n> We are in the process of moving a web based application from a MySql to\n> Postgresql database.\n> Our main reason for moving to Postgresql is problems with MySql\n> (MyISAM) table locking.\n\nI would never try and talk someone out of switching but.... MyISAM? What\nversion of MySQL and did you pick MyISAM for a good reason or just happened\nto end up there?\n\n\n\n> We will buy a new set of servers to run the Postgresql databases.\n> \n> The current setup is five Dell PowerEdge 2950 with 2 * XEON E5410, 4GB\n> RAM. 
PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1 spare).\n> \n> One server is used for shared data.\n> Four servers are used for sharded data. A user in the system only has\n> data in one of the shards.\n> There is another server to which all data is replicated but I'll leave\n> that one out of this discussion.\n> These are dedicated database servers. There are more or less no stored\n> procedures. The shared database size is about 20GB and each shard\n> database is about 40GB (total of 20 + 40 * 4 = 180GB). I would expect\n> the size will grow 10%-15% this year. Server load might increase with\n> 15%-30% this year. This setup is disk I/O bound. The overwhelming\n> majority of sql statements are fast (typically single row selects,\n> updates, inserts and deletes on primary key) but there are some slow\n> long running (10min) queries.\n> \n> As new server we are thinking of PowerEdge R510, 1 * Xeon X5650, 24Gb\n> RAM, H700 512MB NV Cache.\n\nOne would think you should notice a nice speed improvement, ceteris paribus,\nsince the X5650 will have ->significantly<- more memory bandwidth than the\n5410s you are used to, and you are going to have a heck of a lot more ram\nfor things to cache in. I think the H700 is a step up in raid cards as well\nbut with only 4 disks your probably not maxing out there. \n\n\n\n> Dell has offered two alternative SSDs:\n> Samsung model SS805 (100GB Solid State Disk SATA 2.5\").\n> (http://www.plianttechnology.com/lightning_lb.php)\n> Pliant model LB 150S (149GB Solid State Drive SAS 3Gbps 2.5\").\n> (http://www.samsung.com/global/business/semiconductor/products/SSD/Prod\n> ucts_Enterprise_SSD.html)\n\nThe Samsung ones seems to indicate that they have protection in the event of\na power failure, and the pliant does not mention it. \n\nGranted I haven't done or seen any pull the plug under max load tests on\neither family, so I got nothing beyond that it is the first thing I have\nlooked at with every SSD that crosses my path.\n\n\n\n> \n> Both are SLC drives. The price of the Pliant is about 2,3 times the\n> price of the Samsung (does it have twice the performance?).\n> \n> One alternative is 5 servers (1 shared and 4 shards) with 5 Samsung\n> drives (4 in RAID 10 + 1 spare).\n> Another alternative would be 3 servers (1 shared and 2 shards) with 5\n> Pliant drives (4 in RAID 10 + 1 spare). This would be slightly more\n> expensive than the first alternative but would be easier to upgrade\n> with two new shard servers when it's needed.\n\nAs others have mentioned, how are you going to be doing your \"shards\"?\n\n\n\n> \n> Anyone have experience using the Samsung or the Pliant SSD? Any\n> information about degraded performance over time?\n> Any comments on the setups? How would an alternative with 15K disks (6\n> RAID 10 + 1 spare, or even 10 RAID10 + 1 spare) compare?\n\n\nYou still may find that breaking xlog out to its own logical drive (2 drives\nin raid 1) gives a speed improvement to the overall. YMMV - so tinker and\nfind out before you go deploying. \n\n> How would these alternatives compare in I/O performance compared to the\n> old setup?\n> Anyone care to guess how the two alternatives would compare in\n> performance running Postgresql?\n> How would the hardware usage of Postgresql compare to MySqls?\n\n\nI won't hazard a guess on the performance difference between PG w/ Fsync ON\nand MySQL running with MyISAM. \n\nIf you can get your OS and PG tuned you should be able to have a database\nthat can have pretty decent throughput for an OLTP workload. 
Since that\nseems to be the majority of your intended workload. \n\n\n-Mark\n\n> \n> \n> \n> Regards\n> /Lars\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Tue, 18 Jan 2011 21:09:38 -0700", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On 18/01/11 18:56, Lars wrote:\n> Hi,\n> \n> We are in the process of moving a web based application from a MySql to Postgresql database.\n> Our main reason for moving to Postgresql is problems with MySql (MyISAM) table locking.\n> We will buy a new set of servers to run the Postgresql databases.\n\nMost people seem to simply move over to InnoDB when facing these issues,\nsaving themselves LOTS of pain over MyISAM while minimizing transition\ncosts. I assume you've rejected that, but I'm interested in why.\n\n> The current setup is five Dell PowerEdge 2950 with 2 * XEON E5410, 4GB RAM. PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1 spare).\n> \n> One server is used for shared data.\n> Four servers are used for sharded data. A user in the system only has data in one of the shards.\n> There is another server to which all data is replicated but I'll leave that one out of this discussion.\n\nDon't, if you want to have a similar thing going in your Pg deployment\nlater. Replication in Pg remains ... interesting. An n-to-m (or n-to-1)\nreplication setup can't be achieved with the built-in replication in\n9.0; you need to use things like Slony-I, Bucardo, etc each of which\nhave their own limitations and quirks.\n\n> These are dedicated database servers. There are more or less no stored procedures. The shared database size is about 20GB and each shard database is about 40GB (total of 20 + 40 * 4 = 180GB). I would expect the size will grow 10%-15% this year. Server load might increase with 15%-30% this year. This setup is disk I/O bound. The overwhelming majority of sql statements are fast (typically single row selects, updates, inserts and deletes on primary key) but there are some slow long running (10min) queries.\n\nSince you're sharding (and thus clearly don't need strong cluster-wide\nACID) have you considered looking into relaxed semi-ACID / eventually\nconsistent database systems? If you're doing lots of simple queries and\nfew of the kind of heavy lifting reporting queries RDBMSs are great for,\nit may be worth considering.\n\nIf your app uses a data acesss layer, it should be pretty easy to\nprototype implementations on other databases and try them out.\n\nEven if you do go for PostgreSQL, if you're not using memcached yet\nyou're wasting money and effort. You might get lots more life out of\nyour hardware with a bit of memcached love.\n\n-- \nSystem & Network Administrator\nPOST Newspapers\n", "msg_date": "Wed, 19 Jan 2011 15:26:32 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On Tue, Jan 18, 2011 at 3:56 AM, Lars <[email protected]> wrote:\n\n> Any comments on the setups? 
How would an alternative with 15K disks (6 RAID 10 + 1 spare, or even 10 RAID10 + 1 spare) compare?\n\nRAID-10 is going to trounce RAID-5 for writes, which is where you\nusually have the most issues.\n\n> How would these alternatives compare in I/O performance compared to the old setup?\n\nOnly testing can tell, but I've seen 4 SATA drives in RAID-10 with no\ncaching or fancy controller beat a 4 disk RAID-5 with BBU controller\nmore than once.\n", "msg_date": "Wed, 19 Jan 2011 00:49:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "> No idea what mysql thinks a shard is, but in PG we have read-only hot standby's.\nI used sharding as an expression for partitioning data into several databases.\nEach user in the system is unaware of any other user. The user never accesses the private data of another user. Each user could in theory be assigned their own database server. This makes it easy to split the 40000 users over a number of database servers. There are some shared data that is stored in a special \"shared\" database.\n\n> The standby database is exactly the same as the master (save a bit of\n> data that has not been synced yet.) I assume you know this... but I'd\n> really recommend trying out PG's hot-standby and make sure it works the\n> way you need (because I bet its different than mysql's).\n\n> Assuming the \"shared\" and the \"sharded\" databases are totally different\n> (lets call them database a and c), with the PG setup you'd have database\n> a on one computer, then one master with database b on it (where all\n> writes go), then several hot-standby's mirroring database b (that\n> support read-only queries).\nAs our data is easily partitioned into any number of servers we do not plan to use replication for load balancing. We do however plan to use it to set up a backup site.\n\n> Its pretty hard to guess what your usage pattern is (70% read,\n> small columns, no big blobs (like photos), etc)... and even then we'd\n> still have to guess.\nIt's more like 40% read 60% write.\n\n> Not only will I not compare apples to oranges, but I really wont compare\n> apples in Canada to oranges in Japan. :-)\nHehe\n\n/Lars\n", "msg_date": "Wed, 19 Jan 2011 09:17:46 +0100", "msg_from": "Lars <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "\n> Are you going to RAID the SSD drives at all?\nYes, I was thinking four drives in RAID 10 and a (hot) spare drive...\n\n> Of course this is based on my experience, and I have my fireproof suit since\n> I mentioned the word fusionIO :)\nHehe\n\nFusionIO has some impressive stats!\nSSD in RAID10 provides redundancy in case of disc failure. How do you handle this with fusionIO? Two mirrored cards?\n\n/Lars\n", "msg_date": "Wed, 19 Jan 2011 09:45:35 +0100", "msg_from": "Lars <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "Thanks for the reply!\n\nMyISAM was chosen back in 2000. I'm not aware of the reasoning behind this choice...\n\nDell claims both the Samsung and the Pliant are safe to use.\nBelow is a quote from the Pliant datasheet:\n\"No Write Cache:\nPliant EFDs deliver outstanding\nwrite performance\nwithout any dependence on\nwrite cache and thus does\nnot use battery/supercap.\"\n\n> As others have mentioned, how are you going to be doing your \"shards\"?\nHmm... 
shards might not have been a good word to describe it. I'll paste what I wrote in another reply:\nI used sharding as an expression for partitioning data into several databases.\nEach user in the system is unaware of any other user. The user never accesses the private data of another user. Each user could in theory be assigned their own database server. This makes it easy to split the 40000 users over a number of database servers. There are some shared data that is stored in a special \"shared\" database.\n\n/Lars\n\n-----Ursprungligt meddelande-----\nFrån: mark [mailto:[email protected]]\nSkickat: den 19 januari 2011 05:10\nTill: Lars\nKopia: [email protected]\nÄmne: RE: [PERFORM] Migrating to Postgresql and new hardware\n\nComments in line, take em for what you paid for em.\n\n\n\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Lars\n> Sent: Tuesday, January 18, 2011 3:57 AM\n> To: [email protected]\n> Subject: [PERFORM] Migrating to Postgresql and new hardware\n>\n> Hi,\n>\n> We are in the process of moving a web based application from a MySql to\n> Postgresql database.\n> Our main reason for moving to Postgresql is problems with MySql\n> (MyISAM) table locking.\n\nI would never try and talk someone out of switching but.... MyISAM? What\nversion of MySQL and did you pick MyISAM for a good reason or just happened\nto end up there?\n\n\n\n> We will buy a new set of servers to run the Postgresql databases.\n>\n> The current setup is five Dell PowerEdge 2950 with 2 * XEON E5410, 4GB\n> RAM. PERC 5/I 256MB NV Cache, 4 * 10K Disks (3 in RAID 5 + 1 spare).\n>\n> One server is used for shared data.\n> Four servers are used for sharded data. A user in the system only has\n> data in one of the shards.\n> There is another server to which all data is replicated but I'll leave\n> that one out of this discussion.\n> These are dedicated database servers. There are more or less no stored\n> procedures. The shared database size is about 20GB and each shard\n> database is about 40GB (total of 20 + 40 * 4 = 180GB). I would expect\n> the size will grow 10%-15% this year. Server load might increase with\n> 15%-30% this year. This setup is disk I/O bound. The overwhelming\n> majority of sql statements are fast (typically single row selects,\n> updates, inserts and deletes on primary key) but there are some slow\n> long running (10min) queries.\n>\n> As new server we are thinking of PowerEdge R510, 1 * Xeon X5650, 24Gb\n> RAM, H700 512MB NV Cache.\n\nOne would think you should notice a nice speed improvement, ceteris paribus,\nsince the X5650 will have ->significantly<- more memory bandwidth than the\n5410s you are used to, and you are going to have a heck of a lot more ram\nfor things to cache in. 
I think the H700 is a step up in raid cards as well\nbut with only 4 disks your probably not maxing out there.\n\n\n\n> Dell has offered two alternative SSDs:\n> Samsung model SS805 (100GB Solid State Disk SATA 2.5\").\n> (http://www.plianttechnology.com/lightning_lb.php)\n> Pliant model LB 150S (149GB Solid State Drive SAS 3Gbps 2.5\").\n> (http://www.samsung.com/global/business/semiconductor/products/SSD/Prod\n> ucts_Enterprise_SSD.html)\n\nThe Samsung ones seems to indicate that they have protection in the event of\na power failure, and the pliant does not mention it.\n\nGranted I haven't done or seen any pull the plug under max load tests on\neither family, so I got nothing beyond that it is the first thing I have\nlooked at with every SSD that crosses my path.\n\n\n\n>\n> Both are SLC drives. The price of the Pliant is about 2,3 times the\n> price of the Samsung (does it have twice the performance?).\n>\n> One alternative is 5 servers (1 shared and 4 shards) with 5 Samsung\n> drives (4 in RAID 10 + 1 spare).\n> Another alternative would be 3 servers (1 shared and 2 shards) with 5\n> Pliant drives (4 in RAID 10 + 1 spare). This would be slightly more\n> expensive than the first alternative but would be easier to upgrade\n> with two new shard servers when it's needed.\n\nAs others have mentioned, how are you going to be doing your \"shards\"?\n\n\n\n>\n> Anyone have experience using the Samsung or the Pliant SSD? Any\n> information about degraded performance over time?\n> Any comments on the setups? How would an alternative with 15K disks (6\n> RAID 10 + 1 spare, or even 10 RAID10 + 1 spare) compare?\n\n\nYou still may find that breaking xlog out to its own logical drive (2 drives\nin raid 1) gives a speed improvement to the overall. YMMV - so tinker and\nfind out before you go deploying.\n\n> How would these alternatives compare in I/O performance compared to the\n> old setup?\n> Anyone care to guess how the two alternatives would compare in\n> performance running Postgresql?\n> How would the hardware usage of Postgresql compare to MySqls?\n\n\nI won't hazard a guess on the performance difference between PG w/ Fsync ON\nand MySQL running with MyISAM.\n\nIf you can get your OS and PG tuned you should be able to have a database\nthat can have pretty decent throughput for an OLTP workload. Since that\nseems to be the majority of your intended workload.\n\n\n-Mark\n\n>\n>\n>\n> Regards\n> /Lars\n>\n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 19 Jan 2011 10:09:38 +0100", "msg_from": "Lars <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On 01/19/2011 05:09 PM, Lars wrote:\n> Thanks for the reply!\n>\n> MyISAM was chosen back in 2000. I'm not aware of the reasoning behind this choice...\n>\n> Dell claims both the Samsung and the Pliant are safe to use.\n> Below is a quote from the Pliant datasheet:\n> \"No Write Cache:\n> Pliant EFDs deliver outstanding\n> write performance\n> without any dependence on\n> write cache and thus does\n> not use battery/supercap.\"\n\nEr ... magic? I wouldn't trust them without details on *how* it achieves \ngood performance, and what \"good\" is.\n\nIs there *any* device on the market that efficiently handles lots of \nsmall writes?\n\n>\n>> As others have mentioned, how are you going to be doing your \"shards\"?\n> Hmm... 
shards might not have been a good word to describe it. I'll paste what I wrote in another reply:\n> I used sharding as an expression for partitioning data into several databases.\n\n\"sharding\" or \"shards\" is pretty much the standard way that setup is \ndescribed. It doesn't come up on the Pg list a lot as most people doing \nweb-oriented horizontally scaled apps use MySQL or fashionable non-SQL \ndatabases, but it's pretty well known in wider circles.\n\n--\nCraig Ringer\n", "msg_date": "Thu, 20 Jan 2011 08:42:58 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On 1/19/2011 6:42 PM, Craig Ringer wrote:\n> On 01/19/2011 05:09 PM, Lars wrote:\n>> Thanks for the reply!\n>>\n>>\n>>> As others have mentioned, how are you going to be doing your \"shards\"?\n>> Hmm... shards might not have been a good word to describe it. I'll\n>> paste what I wrote in another reply:\n>> I used sharding as an expression for partitioning data into several\n>> databases.\n>\n> \"sharding\" or \"shards\" is pretty much the standard way that setup is\n> described. It doesn't come up on the Pg list a lot as most people doing\n> web-oriented horizontally scaled apps use MySQL or fashionable non-SQL\n> databases, but it's pretty well known in wider circles.\n>\n> --\n> Craig Ringer\n>\n\n\nOr... PG is just so good we've never had to use more than one database \nserver! :-)\n\n-Andy\n", "msg_date": "Thu, 20 Jan 2011 08:48:53 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "On Thu, Jan 20, 2011 at 7:48 AM, Andy Colson <[email protected]> wrote:\n\n> Or... PG is just so good we've never had to use more than one database\n> server!  :-)\n\nHehe, while you can do a lot with one server, there are some scenarios\nwhere sharding is the answer. I have a horror story about not\nsharding when we should have I can tell you over a beer sometime.\n", "msg_date": "Thu, 20 Jan 2011 08:43:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "Lars wrote:\n> Below is a quote from the Pliant datasheet:\n> \"No Write Cache:\n> Pliant EFDs deliver outstanding\n> write performance\n> without any dependence on\n> write cache and thus does\n> not use battery/supercap.\"\n> \n\nI liked the article The Register wrote about them, with the headline \n\"Pliant's SSDs are awesome, says Pliant\". Of course they do. Check out \nthe write benchmark figures in the information review at \nhttp://oliveraaltonen.com/2010/09/29/preliminary-benchmark-results-of-the-pliant-ssd-drives/ \nto see how badly performance suffers on their design from those \ndecisions. The Fusion I/O devices get nearly an order of magnitude more \nwrite IOPS in those tests.\n\nAs far as I've been able to tell, what Pliant does is just push writes \nout all the time without waiting for them to be aligned with block \nsizes, followed by cleaning up the wreckage later via their internal \nautomatic maintenance ASICs (it's sort of an always on TRIM \nimplementation if I'm guessing right). That has significant limitations \nboth in regards to total write speed as well as device longevity. For a \ndatabase, I'd much rather have a supercap and get ultimate write \nperformance without those downsides. 
Depends on the read/write ratio \nthough; I could see a heavily read-biased system work well with their \napproach. Of course, a heavily read-based system would be better served \nby having a ton of RAM instead in most cases.\n\nCould be worst though--they could be misleading about the whole topic of \nwrite durability like Intel is. I consider claiming high performance \nwhen you don't always really have it, what Pliant is doing here, to be a \nmuch lesser sin than losing data at random and not being clear about \nwhen that can happen. I'd like FusionIO to put a big \"expect your \nserver to be down for many minutes after a power interruption\" warning \non their drives, too, while I'm wishing for complete vendor transparency \nhere.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 28 Jan 2011 16:27:09 -0800", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Migrating to Postgresql and new hardware" }, { "msg_contents": "Interesting.\nWould have been nice if the test was with a raid-10 setup as raid-5 is not very good for writes...\n\nWould you get much of a performance increase with a write-cached ssd even if you got a raid controller with (battery-backed) cache?\n\n/Lars\n\n-----Ursprungligt meddelande-----\nFrån: Greg Smith [mailto:[email protected]]\nSkickat: den 29 januari 2011 01:27\nTill: Lars\nKopia: mark; [email protected]\nÄmne: Re: [PERFORM] Migrating to Postgresql and new hardware\n\nLars wrote:\n> Below is a quote from the Pliant datasheet:\n> \"No Write Cache:\n> Pliant EFDs deliver outstanding\n> write performance\n> without any dependence on\n> write cache and thus does\n> not use battery/supercap.\"\n>\n\nI liked the article The Register wrote about them, with the headline\n\"Pliant's SSDs are awesome, says Pliant\". Of course they do. Check out\nthe write benchmark figures in the information review at\nhttp://oliveraaltonen.com/2010/09/29/preliminary-benchmark-results-of-the-pliant-ssd-drives/\nto see how badly performance suffers on their design from those\ndecisions. The Fusion I/O devices get nearly an order of magnitude more\nwrite IOPS in those tests.\n\nAs far as I've been able to tell, what Pliant does is just push writes\nout all the time without waiting for them to be aligned with block\nsizes, followed by cleaning up the wreckage later via their internal\nautomatic maintenance ASICs (it's sort of an always on TRIM\nimplementation if I'm guessing right). That has significant limitations\nboth in regards to total write speed as well as device longevity. For a\ndatabase, I'd much rather have a supercap and get ultimate write\nperformance without those downsides. Depends on the read/write ratio\nthough; I could see a heavily read-biased system work well with their\napproach. Of course, a heavily read-based system would be better served\nby having a ton of RAM instead in most cases.\n\nCould be worst though--they could be misleading about the whole topic of\nwrite durability like Intel is. I consider claiming high performance\nwhen you don't always really have it, what Pliant is doing here, to be a\nmuch lesser sin than losing data at random and not being clear about\nwhen that can happen. 
I'd like FusionIO to put a big \"expect your\nserver to be down for many minutes after a power interruption\" warning\non their drives, too, while I'm wishing for complete vendor transparency\nhere.\n\n--\nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 31 Jan 2011 11:12:33 +0100", "msg_from": "Lars <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Migrating to Postgresql and new hardware" } ]
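A note on the write-cache discussion above: single-client commit rate is a quick, rough proxy for how fast a drive can genuinely flush WAL. The sketch below uses pgbench (bundled with PostgreSQL; the -T duration option is only in newer pgbench releases) against a scratch database whose name, ssdtest, is made up for illustration. On a plain 7200 RPM disk with honest flushing, a single client tops out at very roughly 100-150 TPS; dramatically higher numbers on a drive without battery or supercap protection usually mean commits are landing in a volatile cache.

# scratch database and standard pgbench schema -- name and scale are arbitrary
createdb ssdtest
pgbench -i -s 10 ssdtest

# one client: TPS is roughly bounded by the drive's true flush (fsync) rate
pgbench -c 1 -T 60 ssdtest

# many clients: group commit lets throughput climb well past the one-client rate
pgbench -c 16 -T 60 ssdtest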
[ { "msg_contents": "after i backdb->dropdb->restoredb and then vacuum analy+full -> vacuum\nfreeze\n\nthe XID had been increased by 4 billion in two weeks...is it noraml?\n\nwhat's the definetion of XID?\n\n\" select * from mybook\" SQL command also increase the XID ?\n\nreference:\nhttp://www.postgresql.org/docs/9.0/static/routine-vacuuming.html\n", "msg_date": "Wed, 19 Jan 2011 01:19:15 -0800 (PST)", "msg_from": "\"Charles.Hou\" <[email protected]>", "msg_from_op": true, "msg_subject": "the XID question" }, { "msg_contents": "On 1月19日, 下午5時19分, \"Charles.Hou\" <[email protected]> wrote:\n> after i backdb->dropdb->restoredb and then vacuum analy+full -> vacuum\n> freeze\n>\n> the XID had been increased by 4 billion in two weeks...is it noraml?\n>\n> what's the definetion of XID?\n>\n> \" select * from mybook\" SQL command also increase the XID ?\n>\n> reference:http://www.postgresql.org/docs/9.0/static/routine-vacuuming.html\n\nsorry... not 4 billion , is 4 hundred million\n", "msg_date": "Wed, 19 Jan 2011 02:21:55 -0800 (PST)", "msg_from": "\"Charles.Hou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the XID question" }, { "msg_contents": "2011/1/19 Charles.Hou <[email protected]>:\n> what's the definetion of XID?\n\nXID == \"Transaction ID\".\n\n> \" select * from mybook\" SQL command also increase the XID ?\n\nYes. Single SELECT is a transaction. Hence, it needs a transaction ID.\n\n\ngreets,\nFilip\n", "msg_date": "Wed, 19 Jan 2011 13:00:43 +0100", "msg_from": "=?UTF-8?Q?Filip_Rembia=C5=82kowski?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "Filip Rembia*kowski<[email protected]> wrote: \n> 2011/1/19 Charles.Hou <[email protected]>:\n \n>> \" select * from mybook\" SQL command also increase the XID ?\n> \n> Yes. Single SELECT is a transaction. Hence, it needs a transaction\n> ID.\n \nNo, not in recent versions of PostgreSQL. There's virtual\ntransaction ID, too; which is all that's needed unless the\ntransaction writes something.\n \nAlso, as a fine point, if you use explicit database transactions\n(with BEGIN or START TRANSACTION) then you normally get one XID for\nthe entire transaction, unless you use SAVEPOINTs.\n \n-Kevin\n", "msg_date": "Wed, 19 Jan 2011 08:39:47 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "On 1月19日, 下午10時39分, [email protected] (\"Kevin Grittner\")\nwrote:\n> Filip Rembia*kowski<[email protected]> wrote:\n> > 2011/1/19 Charles.Hou <[email protected]>:\n> >> \" select * from mybook\" SQL command also increase the XID ?\n>\n> > Yes. Single SELECT is a transaction. Hence, it needs a transaction\n> > ID.\n>\n> No, not in recent versions of PostgreSQL.  There's virtual\n> transaction ID, too; which is all that's needed unless the\n> transaction writes something.\n>\nmy postgresql version is 8.1.3\nyou means the newer version has a virtual transaction ID. 
and what's\nthe maxmium of this virtual id, also 4 billion ?\nshould i also vacuum freeze the virtual id in the new version when it\nreached the 4 billion?\n\n> Also, as a fine point, if you use explicit database transactions\n> (with BEGIN or START TRANSACTION) then you normally get one XID for\n> the entire transaction, unless you use SAVEPOINTs.\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 19 Jan 2011 06:54:46 -0800 (PST)", "msg_from": "\"Charles.Hou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the XID question" }, { "msg_contents": "[email protected] (\"Kevin Grittner\") writes:\n> Filip Rembia*kowski<[email protected]> wrote: \n>> 2011/1/19 Charles.Hou <[email protected]>:\n> \n>>> \" select * from mybook\" SQL command also increase the XID ?\n>> \n>> Yes. Single SELECT is a transaction. Hence, it needs a transaction\n>> ID.\n> \n> No, not in recent versions of PostgreSQL. There's virtual\n> transaction ID, too; which is all that's needed unless the\n> transaction writes something.\n> \n> Also, as a fine point, if you use explicit database transactions\n> (with BEGIN or START TRANSACTION) then you normally get one XID for\n> the entire transaction, unless you use SAVEPOINTs.\n\nErm, \"not *necessarily* in recent versions of PostgreSQL.\"\n\nA read-only transaction won't consume XIDs, but if you don't expressly\ndeclare it read-only, they're still liable to get eaten...\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"gmail.com\")\nhttp://www3.sympatico.ca/cbbrowne/lisp.html\nParenthesize to avoid ambiguity.\n", "msg_date": "Wed, 19 Jan 2011 13:06:58 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "On Wednesday, January 19, 2011 07:06:58 PM Chris Browne wrote:\n> [email protected] (\"Kevin Grittner\") writes:\n> > Filip Rembia*kowski<[email protected]> wrote:\n> >> 2011/1/19 Charles.Hou <[email protected]>:\n> >>> \" select * from mybook\" SQL command also increase the XID ?\n> >> \n> >> Yes. Single SELECT is a transaction. Hence, it needs a transaction\n> >> ID.\n> > \n> > No, not in recent versions of PostgreSQL. There's virtual\n> > transaction ID, too; which is all that's needed unless the\n> > transaction writes something.\n> > \n> > Also, as a fine point, if you use explicit database transactions\n> > (with BEGIN or START TRANSACTION) then you normally get one XID for\n> > the entire transaction, unless you use SAVEPOINTs.\n> \n> Erm, \"not *necessarily* in recent versions of PostgreSQL.\"\n> \n> A read-only transaction won't consume XIDs, but if you don't expressly\n> declare it read-only, they're still liable to get eaten...\nNo. The Xid is generally only allocated at the first place a real xid is \nneeded. See GetCurrentTransactionId, AssignTransactionId in xact.c and the \ncaller of the former.\n\nAndres\n", "msg_date": "Wed, 19 Jan 2011 19:31:51 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "Andres Freund <[email protected]> wrote:\n> On Wednesday, January 19, 2011 07:06:58 PM Chris Browne wrote:\n \n>> A read-only transaction won't consume XIDs, but if you don't\n>> expressly declare it read-only, they're still liable to get\n>> eaten...\n> No. The Xid is generally only allocated at the first place a real\n> xid is needed. 
See GetCurrentTransactionId, AssignTransactionId in\n> xact.c and the caller of the former.\n \nOr just test it in psql. BEGIN, run your query, look at pg_locks. \nIf an xid has been assigned, you'll see it there in the\ntransactionid column. You can easily satisfy yourself which\nstatements grab an xid....\n \n-Kevin\n", "msg_date": "Wed, 19 Jan 2011 12:41:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "Kevin Grittner wrote:\n> Or just test it in psql. BEGIN, run your query, look at pg_locks. \n> If an xid has been assigned, you'll see it there in the\n> transactionid column. You can easily satisfy yourself which\n> statements grab an xid...\n\nThat's a good way to double-check exactly what's happening, but it's not \neven that hard:\n\ngsmith=# select txid_current();\ntxid_current | 696\n\ngsmith=# select 1;\n?column? | 1\n\ngsmith=# select 1;\n?column? | 1\n\ngsmith=# select txid_current();\ntxid_current | 697\n\nCalling txid_current bumps the number up, but if you account for that \nyou can see whether the thing(s) in the middle grabbed a real txid by \nwhether the count increased by 1 or more than that. So here's what one \nthat did get a real xid looks like:\n\ngsmith=# select txid_current();\ntxid_current | 702\n\ngsmith=# insert into t(i) values(1);\nINSERT 0 1\ngsmith=# select txid_current();\ntxid_current | 704\n\nThat proves the INSERT in the middle was assigned one.\n\nThe commit message that added this feature to 8.3 has a good quick intro \nto what changed from earlier revs: \nhttp://archives.postgresql.org/pgsql-committers/2007-09/msg00026.php\n\nDon't have to actually read the source to learn a bit more, because it's \nactually documented! Mechanics are described at \npgsql/src/backend/access/transam/README ; you need to know a bit more \nabout subtransactions to follow all of it, but it gets the general idea \nacross regardless:\n\n= Transaction and Subtransaction Numbering =\n\nTransactions and subtransactions are assigned permanent XIDs only when/if\nthey first do something that requires one --- typically, \ninsert/update/delete\na tuple, though there are a few other places that need an XID assigned.\nIf a subtransaction requires an XID, we always first assign one to its\nparent. This maintains the invariant that child transactions have XIDs \nlater\nthan their parents, which is assumed in a number of places.\n\nThe subsidiary actions of obtaining a lock on the XID and and entering \nit into\npg_subtrans and PG_PROC are done at the time it is assigned.\n\nA transaction that has no XID still needs to be identified for various\npurposes, notably holding locks. For this purpose we assign a \"virtual\ntransaction ID\" or VXID to each top-level transaction. VXIDs are formed \nfrom\ntwo fields, the backendID and a backend-local counter; this arrangement \nallows\nassignment of a new VXID at transaction start without any contention for\nshared memory. To ensure that a VXID isn't re-used too soon after backend\nexit, we store the last local counter value into shared memory at backend\nexit, and initialize it from the previous value for the same backendID slot\nat backend start. All these counters go back to zero at shared memory\nre-initialization, but that's OK because VXIDs never appear anywhere \non-disk.\n\nInternally, a backend needs a way to identify subtransactions whether or not\nthey have XIDs; but this need only lasts as long as the parent top \ntransaction\nendures. 
Therefore, we have SubTransactionId, which is somewhat like\nCommandId in that it's generated from a counter that we reset at the \nstart of\neach top transaction. The top-level transaction itself has \nSubTransactionId 1,\nand subtransactions have IDs 2 and up. (Zero is reserved for\nInvalidSubTransactionId.) Note that subtransactions do not have their\nown VXIDs; they use the parent top transaction's VXID.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 19 Jan 2011 17:46:51 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "On 1月20日, 上午6時46分, [email protected] (Greg Smith) wrote:\n> Kevin Grittner wrote:\n> > Or just test it in psql.  BEGIN, run your query, look at pg_locks.\n> > If an xid has been assigned, you'll see it there in the\n> > transactionid column.  You can easily satisfy yourself which\n> > statements grab an xid...\n>\n> That's a good way to double-check exactly what's happening, but it's not\n> even that hard:\n>\n> gsmith=# select txid_current();\n> txid_current | 696\n>\n> gsmith=# select 1;\n> ?column? | 1\n>\n> gsmith=# select 1;\n> ?column? | 1\n>\n> gsmith=# select txid_current();\n> txid_current | 697\n>\n> Calling txid_current bumps the number up, but if you account for that\n> you can see whether the thing(s) in the middle grabbed a real txid by\n> whether the count increased by 1 or more than that.  So here's what one\n> that did get a real xid looks like:\n>\n> gsmith=# select txid_current();\n> txid_current | 702\n>\n> gsmith=# insert into t(i) values(1);\n> INSERT 0 1\n> gsmith=# select txid_current();\n> txid_current | 704\n>\n> That proves the INSERT in the middle was assigned one.\n>\n> The commit message that added this feature to 8.3 has a good quick intro\n> to what changed from earlier revs:http://archives.postgresql.org/pgsql-committers/2007-09/msg00026.php\n>\n> Don't have to actually read the source to learn a bit more, because it's\n> actually documented!  Mechanics are described at\n> pgsql/src/backend/access/transam/README ; you need to know a bit more\n> about subtransactions to follow all of it, but it gets the general idea\n> across regardless:\n>\n> = Transaction and Subtransaction Numbering =\n>\n> Transactions and subtransactions are assigned permanent XIDs only when/if\n> they first do something that requires one --- typically,\n> insert/update/delete\n> a tuple, though there are a few other places that need an XID assigned.\n> If a subtransaction requires an XID, we always first assign one to its\n> parent.  This maintains the invariant that child transactions have XIDs\n> later\n> than their parents, which is assumed in a number of places.\n>\n> The subsidiary actions of obtaining a lock on the XID and and entering\n> it into\n> pg_subtrans and PG_PROC are done at the time it is assigned.\n>\n> A transaction that has no XID still needs to be identified for various\n> purposes, notably holding locks.  For this purpose we assign a \"virtual\n> transaction ID\" or VXID to each top-level transaction.  VXIDs are formed\n> from\n> two fields, the backendID and a backend-local counter; this arrangement\n> allows\n> assignment of a new VXID at transaction start without any contention for\n> shared memory.  
To ensure that a VXID isn't re-used too soon after backend\n> exit, we store the last local counter value into shared memory at backend\n> exit, and initialize it from the previous value for the same backendID slot\n> at backend start.  All these counters go back to zero at shared memory\n> re-initialization, but that's OK because VXIDs never appear anywhere\n> on-disk.\n>\n> Internally, a backend needs a way to identify subtransactions whether or not\n> they have XIDs; but this need only lasts as long as the parent top\n> transaction\n> endures.  Therefore, we have SubTransactionId, which is somewhat like\n> CommandId in that it's generated from a counter that we reset at the\n> start of\n> each top transaction.  The top-level transaction itself has\n> SubTransactionId 1,\n> and subtransactions have IDs 2 and up.  (Zero is reserved for\n> InvalidSubTransactionId.)  Note that subtransactions do not have their\n> own VXIDs; they use the parent top transaction's VXID.\n>\n> --\n> Greg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\":http://www.2ndQuadrant.com/books\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance\n\nevery time, i execute this query string \"SELECT datname,\nage(datfrozenxid), FROM pg_database;\" in the sql query of\npgAdminIII , the age will be increased by 5 , not 1. why???\n", "msg_date": "Wed, 19 Jan 2011 23:26:57 -0800 (PST)", "msg_from": "\"Charles.Hou\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: the XID question" }, { "msg_contents": "\"Charles.Hou\" <[email protected]> wrote:\n \n> my postgresql version is 8.1.3\n \nOuch! That's getting pretty old; I hope it's not on Windows.\n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n \nhttp://www.postgresql.org/about/news.865\n \n> you means the newer version has a virtual transaction ID. and\n> what's the maxmium of this virtual id, also 4 billion ?\n> should i also vacuum freeze the virtual id in the new version when\n> it reached the 4 billion?\n \nThe point is to reduce maintenance, not increase it -- you don't\nneed to worry about cleaning these up.\n \n-Kevin\n", "msg_date": "Thu, 20 Jan 2011 11:04:59 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" }, { "msg_contents": "On Thu, Jan 20, 2011 at 12:04 PM, Kevin Grittner\n<[email protected]> wrote:\n> \"Charles.Hou\" <[email protected]> wrote:\n>\n>> my postgresql version is 8.1.3\n>\n> Ouch!  That's getting pretty old; I hope it's not on Windows.\n>\n> http://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n>\n> http://www.postgresql.org/about/news.865\n>\n>> you means the newer version has a virtual transaction ID. and\n>> what's the maxmium of this virtual id,  also 4 billion ?\n>> should i also vacuum freeze the virtual id in the new version when\n>> it reached the 4 billion?\n>\n> The point is to reduce maintenance, not increase it -- you don't\n> need to worry about cleaning these up.\n\nAnd in fact, in more recent releases - particularly 8.4 and 9.0, the\nneed to worry about vacuum in general is much less. 
There are many\nimprovements to both vacuum generally and autovacuum in particular\nthat make things much better, including enabling autovacuum by\ndefault, multiple autovacuum worker threads, the visibility map, and\nso on. It's fairly likely that everything that the OP is struggling\nwith on 8.1 would Just Work on 8.4 or 9.0.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 21 Jan 2011 12:49:02 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: the XID question" } ]
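For readers following the wraparound angle of this thread, here is a small sketch of the kind of check being discussed. The first query matches the one run above and works back to 8.1; the percentage is computed against 200 million only because that is the default autovacuum_freeze_max_age on releases that have the setting. The per-table variant needs 8.2 or later, where pg_class has relfrozenxid.

-- how far each database has advanced toward a forced anti-wraparound vacuum
SELECT datname,
       age(datfrozenxid) AS xid_age,
       round(100.0 * age(datfrozenxid) / 200000000, 1) AS pct_of_200m
  FROM pg_database
 ORDER BY age(datfrozenxid) DESC;

-- per-table detail (8.2+): the oldest tables are the ones autovacuum will
-- eventually freeze whether you ask it to or not
SELECT relname, age(relfrozenxid) AS xid_age
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY age(relfrozenxid) DESC
 LIMIT 10;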
[ { "msg_contents": "hello,\n\ni have a table with OID column.. I want to use the copy command to insert\nbunch of rows (1 million).\nbut iam unable to specify the correct format for the oid type (i have .jpg\nfiles to be stored in this column)..\n\nI tried giving the path to the file, lo_import('pathto file').. appreciate\nany tips.. i did search the archives , but couldnt find any answer on this\n\nThanks again\n\nhello,i have a table with OID column.. I want to use the copy command to insert bunch of rows (1 million).but iam unable to specify the correct format for the oid type (i have .jpg files to be stored in this column)..\nI tried giving the path to the file, lo_import('pathto file').. appreciate any tips.. i did search the archives , but couldnt find any answer on thisThanks again", "msg_date": "Thu, 20 Jan 2011 15:12:48 -0500", "msg_from": "Madhu Ramachandran <[email protected]>", "msg_from_op": true, "msg_subject": "copy command and blobs" }, { "msg_contents": "Madhu Ramachandran wrote:\n> hello,\n>\n> i have a table with OID column.. I want to use the copy command to \n> insert bunch of rows (1 million).\n> but iam unable to specify the correct format for the oid type (i have \n> .jpg files to be stored in this column)..\nHuh? oid is a keyword, an automatically generated row id, and is \ndeprecated. You shouldn't be doing anything with it, much less copying it.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Thu, 20 Jan 2011 15:17:24 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy command and blobs" }, { "msg_contents": "Mladen Gogala <[email protected]> writes:\n> Madhu Ramachandran wrote:\n>> i have a table with OID column.. I want to use the copy command to \n>> insert bunch of rows (1 million).\n>> but iam unable to specify the correct format for the oid type (i have \n>> .jpg files to be stored in this column)..\n\n> Huh? oid is a keyword, an automatically generated row id, and is \n> deprecated. You shouldn't be doing anything with it, much less copying it.\n\nI think what the OP actually means is he's thinking of importing some\nimages as large objects, then storing their OIDs in a user (not system)\ncolumn of type oid. COPY can't be used for that though.\n\nIt might be better to use a bytea column, if you're willing to deal with\nbytea's weird escaping rules.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Jan 2011 16:11:33 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy command and blobs " }, { "msg_contents": "i was looking at\nhttp://www.postgresql.org/files/documentation/books/aw_pgsql/node96.html\n\n<http://www.postgresql.org/files/documentation/books/aw_pgsql/node96.html>when\nthey talk about using OID type to store large blobs (in my case .jpg files )\n\n\nOn Thu, Jan 20, 2011 at 3:17 PM, Mladen Gogala <[email protected]>wrote:\n\n> Madhu Ramachandran wrote:\n>\n>> hello,\n>>\n>> i have a table with OID column.. I want to use the copy command to insert\n>> bunch of rows (1 million).\n>> but iam unable to specify the correct format for the oid type (i have .jpg\n>> files to be stored in this column)..\n>>\n> Huh? oid is a keyword, an automatically generated row id, and is\n> deprecated. You shouldn't be doing anything with it, much less copying it.\n>\n>\n> --\n>\n> Mladen Gogala Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> http://www.vmsinfo.com The Leader in Integrated Media Intelligence\n> Solutions\n>\n>\n>\n>\n\ni was looking at http://www.postgresql.org/files/documentation/books/aw_pgsql/node96.htmlwhen they talk about using OID type to store large blobs (in my case .jpg files )\nOn Thu, Jan 20, 2011 at 3:17 PM, Mladen Gogala <[email protected]> wrote:\nMadhu Ramachandran wrote:\n\nhello,\n\ni have a table with OID column.. I want to use the copy command to insert bunch of rows (1 million).\nbut iam unable to specify the correct format for the oid type (i have .jpg files to be stored in this column)..\n\nHuh? oid is a keyword, an automatically generated row id, and is deprecated. You shouldn't be doing anything with it, much less copying it.\n\n\n-- \n\nMladen Gogala Sr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com The Leader in Integrated Media Intelligence Solutions", "msg_date": "Fri, 21 Jan 2011 17:10:28 -0500", "msg_from": "Madhu Ramachandran <[email protected]>", "msg_from_op": true, "msg_subject": "Re: copy command and blobs" }, { "msg_contents": "On Fri, Jan 21, 2011 at 5:10 PM, Madhu Ramachandran <[email protected]> wrote:\n> i was looking at\n> http://www.postgresql.org/files/documentation/books/aw_pgsql/node96.html\n> when they talk about using OID type to store large blobs (in my case .jpg\n> files )\n\nIt's probably worth noting that that document is 9 years old. It\nmight be worth reading something a little more up-to-date. Perhaps:\n\nhttp://www.postgresql.org/docs/current/static/largeobjects.html\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sat, 22 Jan 2011 22:41:46 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy command and blobs" }, { "msg_contents": "On Sat, Jan 22, 2011 at 8:41 PM, Robert Haas <[email protected]> wrote:\n\n> On Fri, Jan 21, 2011 at 5:10 PM, Madhu Ramachandran <[email protected]>\n> wrote:\n> > i was looking at\n> > http://www.postgresql.org/files/documentation/books/aw_pgsql/node96.html\n> > when they talk about using OID type to store large blobs (in my case .jpg\n> > files )\n>\n> It's probably worth noting that that document is 9 years old. It\n> might be worth reading something a little more up-to-date. Perhaps:\n>\n> http://www.postgresql.org/docs/current/static/largeobjects.html\n>\n>\nA bit late to respond but better than never!\n\nAs of my latest testing in 8.3, I've found that the lo_* functions while\nadequate are a bit slow. Our implemented alternative that leverages\npg_read_file() is significantly faster. I believe it is because\npg_read_file() tells the database to go straight to the file system rather\nthan through the client connection. From memory, I seem to recall this\nbeing about 20% faster than the lo_* option or simple INSERTs.\n\nThe downside to pg_read_file() is that the file must be 1) on the same\nsystem as the database and 2) must be under the $PGDATA directory. 
We opted\nto create a directory $PGDATA/public with proper system-side permissions but\nopen enough to allow the database owner to read the files.\n\nFor example,\npostgres=# select pg_read_file('public/a_file', 0,\n(pg_stat_file('postgresql.conf')).size);\n\nWe use this method in conjunction with additional checks to store files in\ntables governed by the MD5 hash of the file to prevent duplication.\n\nHTH.\nGreg", "msg_date": "Sun, 6 Feb 2011 11:15:18 -0700", "msg_from": "Greg Spiegelberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: copy command and blobs" } ]
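Pulling the suggestions in this thread together, one hedged way to bulk-load a million images without hand-writing a million statements is to COPY a list of file paths into a staging table and let server-side lo_import() do the rest. The table and file names below (photo_paths, photos, /tmp/paths.txt) are invented for illustration; server-side COPY and lo_import() read files as the server user and require superuser, and the bytea route mentioned above avoids large objects entirely at the cost of escaping the binary data.

-- staging table: one uid and file path per line, tab separated
CREATE TABLE photo_paths (uid varchar(60) PRIMARY KEY, path text);
COPY photo_paths FROM '/tmp/paths.txt';

-- target table with an oid column pointing at one large object per image
CREATE TABLE photos (uid varchar(60) PRIMARY KEY, img oid);

-- lo_import(path) creates a large object from the file and returns its oid
INSERT INTO photos (uid, img)
SELECT uid, lo_import(path)
  FROM photo_paths;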
[ { "msg_contents": "I was doing a little testing to see how machine load affected the\nperformance of different types of queries, index range scans, hash joins,\nfull scans, a mix, etc.\n\nIn order to do this, I isolated different performance hits, spinning only\nCPU, loading the disk to create high I/O wait states, and using most of\nthe physical memory. This was on a 4 CPU Xen virtual machine running\n8.1.22 on CENTOS.\n\n\nHere is the fun part. When running 8 threads spinning calculating square\nroots (using the stress package), the full scan returned consistently 60%\nfaster than the machine with no load. It was returning 44,000 out of\n5,000,000 rows. Here is the explain analyze. I am hoping that this\ntriggers something (I can run more tests as needed) that can help us make\nit always better.\n\nIdling:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n(actual time=0.053..2915.966 rows=44320 loops=1)\n Filter: (schedule_type = '5X'::bpchar)\n Total runtime: 2986.764 ms\n\nLoaded:\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------\n Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n(actual time=0.034..1698.068 rows=44320 loops=1)\n Filter: (schedule_type = '5X'::bpchar)\n Total runtime: 1733.084 ms\n\n\n\n\n\n", "msg_date": "Fri, 21 Jan 2011 11:12:35 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Fun little performance IMPROVEMENT..." }, { "msg_contents": "On 1/21/2011 12:12 PM, [email protected] wrote:\n> I was doing a little testing to see how machine load affected the\n> performance of different types of queries, index range scans, hash joins,\n> full scans, a mix, etc.\n>\n> In order to do this, I isolated different performance hits, spinning only\n> CPU, loading the disk to create high I/O wait states, and using most of\n> the physical memory. This was on a 4 CPU Xen virtual machine running\n> 8.1.22 on CENTOS.\n>\n>\n> Here is the fun part. When running 8 threads spinning calculating square\n> roots (using the stress package), the full scan returned consistently 60%\n> faster than the machine with no load. It was returning 44,000 out of\n> 5,000,000 rows. Here is the explain analyze. I am hoping that this\n> triggers something (I can run more tests as needed) that can help us make\n> it always better.\n>\n> Idling:\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n> (actual time=0.053..2915.966 rows=44320 loops=1)\n> Filter: (schedule_type = '5X'::bpchar)\n> Total runtime: 2986.764 ms\n>\n> Loaded:\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n> (actual time=0.034..1698.068 rows=44320 loops=1)\n> Filter: (schedule_type = '5X'::bpchar)\n> Total runtime: 1733.084 ms\n>\n\nOdd. Did'ja by chance run the select more than once... 
maybe three or \nfour times, and always get the same (or close) results?\n\nIs the stress package running niced?\n\n-Andy\n", "msg_date": "Fri, 21 Jan 2011 12:29:15 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": "[email protected] writes:\n> Here is the fun part. When running 8 threads spinning calculating square\n> roots (using the stress package), the full scan returned consistently 60%\n> faster than the machine with no load.\n\nPossibly the synchronized-seqscans logic kicking in, resulting in this\nguy not having to do all his own I/Os. It would be difficult to make\nany trustworthy conclusions about performance in such cases from a view\nof only one process's results --- you'd need to look at the aggregate\nbehavior to understand what's happening.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Jan 2011 14:50:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fun little performance IMPROVEMENT... " }, { "msg_contents": ">\n> Odd. Did'ja by chance run the select more than once... maybe three or\n> four times, and always get the same (or close) results?\n>\n> Is the stress package running niced?\n>\nThe stress package is not running niced. I ran it initially 5 times each.\n It was very consistent. Initially, I just ran everything to files.\nLater when I looked over it, I was confused, so tried it again, several\ntimes on each, with very little deviation, and the process with the CPU\nstressed always being faster.\n\nThe only deviation, which is understandable, was that the first run of\nanything after memory stress (using 7G of the available 8G). was slow as\nit swapped back in, so I did a swapoff/swapon to clear up swap, and still\ngot the same results.\n\n\n", "msg_date": "Fri, 21 Jan 2011 13:18:55 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": "[email protected] wrote:\n> This was on a 4 CPU Xen virtual machine running\n> 8.1.22 on CENTOS.\n> \n\nYou're not going to get anyone to spend a minute trying to figure what's \nhappening on virtual hardware with an ancient version of PostgreSQL. If \nthis was an actual full test case against PostgreSQL 8.4 or later on a \nphysical machine, it might be possible to draw some conclusions about it \nthat impact current PostgreSQL development. Note where 8.1 is on \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy for \nexample.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 21 Jan 2011 15:19:21 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": "> [email protected] writes:\n>> Here is the fun part. When running 8 threads spinning calculating\n>> square\n>> roots (using the stress package), the full scan returned consistently\n>> 60%\n>> faster than the machine with no load.\n>\n> Possibly the synchronized-seqscans logic kicking in, resulting in this\n> guy not having to do all his own I/Os. 
It would be difficult to make\n> any trustworthy conclusions about performance in such cases from a view\n> of only one process's results --- you'd need to look at the aggregate\n> behavior to understand what's happening.\n>\n> \t\t\tregards, tom lane\n>\nMy though was that either:\n\n1) It was preventing some other I/O or memory intensive process from\nhappening, opening the resources up.\n2) It was keeping the machine busy from the hypervisor's point of view,\npreventing it from waiting for a slot on the host machine.\n3) The square roots happen quickly, resulting in more yields, and\ntherefore more time slices for my process than if the system was in its\nidle loop.\n\nAny way you look at it, it is fun and interesting that a load can make\nsomething unrelated happen more quickly. I will continue to try to find\nout why it is the case.\n\n\n", "msg_date": "Fri, 21 Jan 2011 13:23:15 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": "\n\nOn 1/21/11 12:23 PM, \"[email protected]\" <[email protected]> wrote:\n\n>> [email protected] writes:\n>>> Here is the fun part. When running 8 threads spinning calculating\n>>> square\n>>> roots (using the stress package), the full scan returned consistently\n>>> 60%\n>>> faster than the machine with no load.\n>>\n>> Possibly the synchronized-seqscans logic kicking in, resulting in this\n>> guy not having to do all his own I/Os. It would be difficult to make\n>> any trustworthy conclusions about performance in such cases from a view\n>> of only one process's results --- you'd need to look at the aggregate\n>> behavior to understand what's happening.\n>>\n>> regards, tom lane\n>>\n>My though was that either:\n>\n>1) It was preventing some other I/O or memory intensive process from\n>happening, opening the resources up.\n>2) It was keeping the machine busy from the hypervisor's point of view,\n>preventing it from waiting for a slot on the host machine.\n\nMy guess is its something hypervisor related. If this happened on direct\nhardware I'd be more surprised. Hypervisors have all sorts of stuff going\non, like throttling the number of CPU cycles a vm gets. In your idle\ncase, your VM might effectively occupy 1Ghz of a CPU, but 2Ghz in the\nloaded case.\n\n>3) The square roots happen quickly, resulting in more yields, and\n>therefore more time slices for my process than if the system was in its\n>idle loop.\n>\n>Any way you look at it, it is fun and interesting that a load can make\n>something unrelated happen more quickly. I will continue to try to find\n>out why it is the case.\n>\n>\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 21 Jan 2011 13:10:50 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": ">\n> Odd. Did'ja by chance run the select more than once... maybe three or\n> four times, and always get the same (or close) results?\n>\n> Is the stress package running niced?\n>\n> -Andy\n>\nI got a little crazy, and upgraded the DB to 8.4.5. 
It still reacts the\nsame.\n\nI am hoping someone has an idea of a metric I can run to see why it is\ndifferent.\n\n", "msg_date": "Fri, 21 Jan 2011 14:24:47 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": ">\n> My guess is its something hypervisor related. If this happened on direct\n> hardware I'd be more surprised. Hypervisors have all sorts of stuff going\n> on, like throttling the number of CPU cycles a vm gets. In your idle\n> case, your VM might effectively occupy 1Ghz of a CPU, but 2Ghz in the\n> loaded case.\n>\nI will be building a new machine this weekend on bare hardware. It won't\nbe very big on specs, but this is only 5 million rows, so it should be\nfine. I will try it there.\n\n", "msg_date": "Fri, 21 Jan 2011 14:26:30 -0700 (MST)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Fun little performance IMPROVEMENT..." }, { "msg_contents": "On 21/01/2011 19:12, [email protected] wrote:\n> I was doing a little testing to see how machine load affected the\n> performance of different types of queries, index range scans, hash joins,\n> full scans, a mix, etc.\n>\n> In order to do this, I isolated different performance hits, spinning only\n> CPU, loading the disk to create high I/O wait states, and using most of\n> the physical memory. This was on a 4 CPU Xen virtual machine running\n> 8.1.22 on CENTOS.\n>\n>\n> Here is the fun part. When running 8 threads spinning calculating square\n> roots (using the stress package), the full scan returned consistently 60%\n> faster than the machine with no load. It was returning 44,000 out of\n> 5,000,000 rows. Here is the explain analyze. I am hoping that this\n> triggers something (I can run more tests as needed) that can help us make\n> it always better.\n\nLooks like a virtualization artifact. Here's a list of some such noticed \nartifacts:\n\nhttp://wiki.freebsd.org/WhyNotBenchmarkUnderVMWare\n\n>\n> Idling:\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n> (actual time=0.053..2915.966 rows=44320 loops=1)\n> Filter: (schedule_type = '5X'::bpchar)\n> Total runtime: 2986.764 ms\n>\n> Loaded:\n> QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on schedule_details (cost=0.00..219437.90 rows=81386 width=187)\n> (actual time=0.034..1698.068 rows=44320 loops=1)\n> Filter: (schedule_type = '5X'::bpchar)\n> Total runtime: 1733.084 ms\n\nIn this case it looks like the IO generated by the VM is causing the \nHypervisor to frequently \"sleep\" the machine while waiting for the IO, \nbut if the machine is also generating CPU load, it is not put to sleep \nas often.\n\n", "msg_date": "Tue, 25 Jan 2011 16:27:15 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fun little performance IMPROVEMENT..." } ]
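For anyone trying to reproduce the effect above, a small sketch of a measurement that can separate hypervisor scheduling from anything PostgreSQL is doing: time the same query repeatedly while watching how much CPU the hypervisor steals from the guest, with and without the square-root load running. The table and filter come from the thread; vmstat's st (steal time) column is available on any reasonably recent Linux guest.

-- in psql on the guest, repeat a few times in each state:
\timing on
EXPLAIN ANALYZE SELECT * FROM schedule_details WHERE schedule_type = '5X';

# in a second terminal on the guest while the query runs: a high st column
# when the guest is otherwise idle supports the hypervisor-scheduling theory
vmstat 1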
[ { "msg_contents": "Hi folks,\n\nI have a table like so:\n\ncreate table tagRecord (\n uid varchar(60) primary key,\n [bunch of other fields]\n location varchar(32),\n creationTS timestamp\n);\ncreate index idx_tagdata_loc_creationTS on tagRecord(location, creationTS);\n\nThe number of individual values in location is small (e.g. 2).\n\nI want to simply get the latest \"creationTS\" for each location,\nbut that seems to result in a full table scan:\n\ntts_server_db=# explain analyze select location, max(creationTS) from tagrecord group by location;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n HashAggregate (cost=5330.53..5330.55 rows=2 width=18) (actual time=286.161..286.165 rows=3 loops=1)\n -> Seq Scan on tagrecord (cost=0.00..4771.35 rows=111835 width=18) (actual time=0.059..119.828 rows=111739 loops=1)\n Total runtime: 286.222 ms\n\n\nNow I have the idx_tagdata_loc_creationTS, and it seemed to me that\nit should be able to use it to quickly figure out the max creationTS\nfor each location.\n\nAny way I can make this more efficient?\n\nBTW, I am using postgresql-server-8.1.22-1.el5_5.1\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Mon, 24 Jan 2011 13:29:01 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "How to use indexes for GROUP BY" }, { "msg_contents": "On 01/24/2011 12:29 PM, Dimi Paun wrote:\n\n> I want to simply get the latest \"creationTS\" for each location,\n> but that seems to result in a full table scan:\n>\n> tts_server_db=# explain analyze select location, max(creationTS) from\n> tagrecord group by location;\n\nTry this, it *might* work:\n\nselect DISTINCT ON (location) location, creationTS\n from tagrecord\n order by location, creationTS DESC;\n\nSecondly... Postgresql 8.1? Really? If at all possible, upgrade. There \nis a lot you're missing from the last six years of PostgreSQL releases. \nFor instance, your MAX means a reverse index scan for each location, \nwhich is far more expensive than an ordered index scan, so the planner \nmay be ignoring it, if the planner in 8.1 is even that intelligent.\n\nIf you were running 8.3, for instance, your index could be:\n\ncreate index idx_tagdata_loc_creationTS on tagRecord(location, \ncreationTS DESC);\n\nAnd then suddenly it just has to use the first match for that index for \neach location. Older PG versions are... flaky when it comes to \noptimization. I'm not sure if 8.1 used MAX as an internal construct or \ntreated it like a function. If it's the latter, it has to read every \nvalue to find out which is the \"max\", which is why using ORDER BY *may* \nfix your problem.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Mon, 24 Jan 2011 13:27:38 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use indexes for GROUP BY" }, { "msg_contents": "On Mon, Jan 24, 2011 at 11:29 AM, Dimi Paun <[email protected]> wrote:\n\nTwo very quick points:\n\n> tts_server_db=# explain analyze select location, max(creationTS) from tagrecord group by location;\n>                                                       QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------\n>  HashAggregate  (cost=5330.53..5330.55 rows=2 width=18) (actual time=286.161..286.165 rows=3 loops=1)\n>   ->  Seq Scan on tagrecord  (cost=0.00..4771.35 rows=111835 width=18) (actual time=0.059..119.828 rows=111739 loops=1)\n>  Total runtime: 286.222 ms\n\nMost of your run time is the hashaggregate running, not the seq scan\n\n> BTW, I am using postgresql-server-8.1.22-1.el5_5.1\n\nAs another poster observed, you're running an ancient version of pgsql\nfrom a performance perspective. Upgrading to 8.4 or 9.0 would make a\nhuge difference in overall performance, not just with one or two\nqueries.\n", "msg_date": "Mon, 24 Jan 2011 12:33:05 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use indexes for GROUP BY" }, { "msg_contents": "On Mon, Jan 24, 2011 at 01:29:01PM -0500, Dimi Paun wrote:\n> Hi folks,\n> \n> I have a table like so:\n> \n> create table tagRecord (\n> uid varchar(60) primary key,\n> [bunch of other fields]\n> location varchar(32),\n> creationTS timestamp\n> );\n> create index idx_tagdata_loc_creationTS on tagRecord(location, creationTS);\n> \n> The number of individual values in location is small (e.g. 2).\n> \n> I want to simply get the latest \"creationTS\" for each location,\n> but that seems to result in a full table scan:\n> \n> tts_server_db=# explain analyze select location, max(creationTS) from tagrecord group by location;\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------\n> HashAggregate (cost=5330.53..5330.55 rows=2 width=18) (actual time=286.161..286.165 rows=3 loops=1)\n> -> Seq Scan on tagrecord (cost=0.00..4771.35 rows=111835 width=18) (actual time=0.059..119.828 rows=111739 loops=1)\n> Total runtime: 286.222 ms\n\nyou can use technique described in here:\nhttp://www.depesz.com/index.php/2009/07/10/getting-list-of-unique-elements/\n\nBest regards,\n\ndepesz\n\n", "msg_date": "Mon, 24 Jan 2011 21:07:36 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to use indexes for GROUP BY" }, { "msg_contents": "On Mon, 2011-01-24 at 12:33 -0700, Scott Marlowe wrote:\n> As another poster observed, you're running an ancient version of pgsql\n> from a performance perspective. Upgrading to 8.4 or 9.0 would make a\n> huge difference in overall performance, not just with one or two\n> queries. 
\n\nThanks for the tips.\n\nI'll first try to upgrade, and I'll report back if that doesn't help :)\n\n-- \nDimi Paun <[email protected]>\nLattica, Inc.\n\n", "msg_date": "Tue, 25 Jan 2011 00:03:26 -0500", "msg_from": "Dimi Paun <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to use indexes for GROUP BY" } ]
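To make the one-index-probe-per-group idea in this thread concrete, here is a hedged sketch. It assumes the tagRecord table and the (location, creationTS) index defined above, and a release recent enough that max() with a matching WHERE clause is planned as a backwards index scan; the DISTINCT ON form is the one already suggested earlier in the thread.

-- one index probe per distinct location instead of aggregating the whole table;
-- with only a handful of locations the outer DISTINCT is cheap, and a small
-- lookup table of locations would avoid even that scan
SELECT loc.location,
       (SELECT max(t.creationTS)
          FROM tagRecord t
         WHERE t.location = loc.location) AS latest
  FROM (SELECT DISTINCT location FROM tagRecord) AS loc;

-- the shortcut from earlier in the thread, index-friendly on newer versions
SELECT DISTINCT ON (location) location, creationTS
  FROM tagRecord
 ORDER BY location, creationTS DESC;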
[ { "msg_contents": "Folks,\n\nI'm doing a postmortem on an 8.3 database which recently had to be\nrebuilt. The database was over 200% bloated ... 176GB as opposed to\ndump/reload size of 55GB. What I find really interesting is *which*\ntables were bloated. Consider these two tables, for example, which\nconsist of one row which gets updated around 1000 times/day:\n\n-[ RECORD 2 ]----------+------------------------------\nschemaname | public\nrelname | general_info\nn_dead_tup | 12\nn_live_tup | 1\nchanged | 8817\nn_tup_hot_upd | 8817\npg_relation_size | 155648\npg_total_relation_size | 172032\n-[ RECORD 4 ]----------+------------------------------\nschemaname | public\nrelname | current_info\nn_dead_tup | 27\nn_live_tup | 1\nchanged | 3296\nn_tup_hot_upd | 3296\npg_relation_size | 385024\npg_total_relation_size | 409600\n\nAs you can see, in both cases almost all of the updates on these tables\nwere HOT updates. Yet these HOT updates led to bloat (hundreds of disk\npages instead of the one required for each table), and autovacuum\ndoesn't seem to think it needed to do anything about them ... neither\ntable was *ever* autovacuumed.\n\nIt looks to me like autovacuum doesn't ever consider when HOT updates\nlead to page splits, and so require vacuuming. Or am I diagnosing it wrong?\n\nmax_fsm_pages may also have been slightly undersized.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 24 Jan 2011 18:26:16 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Bloat issue on 8.3; autovac ignores HOT page splits?" }, { "msg_contents": "On Mon, Jan 24, 2011 at 9:26 PM, Josh Berkus <[email protected]> wrote:\n> It looks to me like autovacuum doesn't ever consider when HOT updates\n> lead to page splits, and so require vacuuming.  Or am I diagnosing it wrong?\n\nI'm not sure what you mean by a page split. An update wherein the new\nheap tuple won't fit on the same page as the existing heap tuple\nshould be treated as non-HOT. But nothing gets split in that case. I\nthink of a page split as an index event, and if these are HOT updates\nthere shouldn't be any index changes at all.\n\nCan we see those stats again with n_tup_ins/upd/del?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 25 Jan 2011 11:41:35 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT page splits?" }, { "msg_contents": "\n> Can we see those stats again with n_tup_ins/upd/del?\n\nSure:\n\n-[ RECORD 2 ]----------+------------------------------\nschemaname | public\nrelname | general_info\nn_dead_tup | 12\nn_live_tup | 1\nn_tup_upd | 8817\nn_tup_del | 0\nn_tup_ins | 0\nn_tup_hot_upd | 8817\npg_relation_size | 155648\npg_total_relation_size | 172032\n-[ RECORD 4 ]----------+------------------------------\nschemaname | public\nrelname | current_info\nn_dead_tup | 27\nn_live_tup | 1\nn_tup_upd | 3296\nn_tup_del | 0\nn_tup_ins | 0\nn_tup_hot_upd | 3296\npg_relation_size | 385024\npg_total_relation_size | 409600\n\nOne question: in 8.3 and earlier, is the FSM used to track dead_rows for\npg_stat_user_tables?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 25 Jan 2011 10:28:14 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT page splits?" 
}, { "msg_contents": "On 26/01/11 07:28, Josh Berkus wrote:\n>\n> One question: in 8.3 and earlier, is the FSM used to track dead_rows for\n> pg_stat_user_tables?\n>\n\nIf I'm understanding you correctly, ANALYZE is the main guy \ntracking/updating the dead row count.\n\nregards\n\nMark\n", "msg_date": "Wed, 26 Jan 2011 10:29:33 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT page splits?" }, { "msg_contents": "Robert, Mark,\n\nI have not been able to reproduce this issue in a clean test on 9.0. As\na result, I now think that it was related to the FSM being too small on\nthe user's 8.3 instance, and will consider it resolved.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 31 Jan 2011 10:27:33 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT new pages?" }, { "msg_contents": "On 01/02/11 07:27, Josh Berkus wrote:\n> Robert, Mark,\n>\n> I have not been able to reproduce this issue in a clean test on 9.0. As\n> a result, I now think that it was related to the FSM being too small on\n> the user's 8.3 instance, and will consider it resolved.\n>\n\nRight - it might be interesting to see if you can reproduce on 8.4. I \nwould hazard a guess that you will not (on disk FSM + visibility map \nvacuum improvements seem to make this whole area way better).\n\nCheers\n\nMark\n\n\n\n\n\n\n\n On 01/02/11 07:27, Josh Berkus wrote:\n \nRobert, Mark,\n\nI have not been able to reproduce this issue in a clean test on 9.0. As\na result, I now think that it was related to the FSM being too small on\nthe user's 8.3 instance, and will consider it resolved.\n\n\n\n\n Right - it might be interesting to see if you can reproduce on\n 8.4. I would hazard a guess that you will not (on disk FSM +\n visibility map vacuum improvements seem to make this whole area\n way better).\n\n Cheers\n\n Mark", "msg_date": "Tue, 01 Feb 2011 10:23:02 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT new pages?" }, { "msg_contents": "On Mon, Jan 31, 2011 at 11:27 AM, Josh Berkus <[email protected]> wrote:\n> Robert, Mark,\n>\n> I have not been able to reproduce this issue in a clean test on 9.0.  As\n> a result, I now think that it was related to the FSM being too small on\n> the user's 8.3 instance, and will consider it resolved.\n\nI used to try and size free space map to be a little bigger than it\nneeded to be. I now size 4 or 5 times what it needs to be. shared\nmemory is cheap. So is going to 8.4, but on legacy systems that you\ncan't upgrade, 8.3 with a huge FSM works well enough (with suitably\naggressive autovac).\n", "msg_date": "Mon, 31 Jan 2011 14:57:00 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT new pages?" }, { "msg_contents": "On 01/02/11 10:57, Scott Marlowe wrote:\n> On Mon, Jan 31, 2011 at 11:27 AM, Josh Berkus<[email protected]> wrote:\n>> Robert, Mark,\n>>\n>> I have not been able to reproduce this issue in a clean test on 9.0. As\n>> a result, I now think that it was related to the FSM being too small on\n>> the user's 8.3 instance, and will consider it resolved.\n> I used to try and size free space map to be a little bigger than it\n> needed to be. I now size 4 or 5 times what it needs to be. 
shared\n> memory is cheap. So is going to 8.4, but on legacy systems that you\n> can't upgrade, 8.3 with a huge FSM works well enough (with suitably\n> aggressive autovac).\n>\n\nYeah, 8.3 with very aggressive autovac my experience too - I've had the \nnaptime cranked down to 10s or even 1s in some cases, to try to tame \nbloat growth for web cache or session type tables that are heavily \nvolatile.\n\nregards\n\nMark", "msg_date": "Wed, 02 Feb 2011 12:37:30 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bloat issue on 8.3; autovac ignores HOT new pages?" } ]
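For readers on 8.3 wanting to check the too-small-FSM diagnosis on their own systems, a short sketch. The VACUUM VERBOSE summary is the 8.3-and-earlier way to see whether max_fsm_pages is adequate; the ALTER TABLE syntax for per-table autovacuum settings is 8.4 or later (on 8.3 the equivalent knobs live in the pg_autovacuum catalog). The table name general_info comes from earlier in the thread.

-- 8.3 and earlier: the tail of a database-wide VACUUM VERBOSE reports whether
-- the free space map is big enough, e.g.
--   NOTICE:  number of page slots needed (NNNNNN) exceeds max_fsm_pages (153600)
VACUUM VERBOSE;

-- 8.4 and later: make autovacuum far more aggressive for a hot single-row table
ALTER TABLE general_info
  SET (autovacuum_vacuum_threshold = 50,
       autovacuum_vacuum_scale_factor = 0.0,
       autovacuum_vacuum_cost_delay = 0);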
[ { "msg_contents": "Hi,\n\nWe are running some performances tests. With a lot of concurrent\naccess, queries get very slow. When there is no load, those queries run\nfast.\n\nWe kind of see a trend about these queries: it seems like the ones that\nbecome very slow have an ORDER BY or MAX in them.\n\n \n\nHere are our config settings:\n\n name | setting |\ndescription \n---------------------------------+--------------------------+-----------\n------------------------------------------------------------------------\n--------------------------------------------\nadd_missing_from | off |\nAutomatically adds missing table references to FROM clauses.\nallow_system_table_mods | off | Allows\nmodifications of the structure of system tables.\narchive_command | (disabled) | Sets the\nshell command that will be called to archive a WAL file.\narchive_mode | off | Allows\narchiving of WAL files using archive_command.\narchive_timeout | 0 | Forces a\nswitch to the next xlog file if a new file has not been started within N\nseconds.\narray_nulls | on | Enable\ninput of NULL elements in arrays.\nauthentication_timeout | 1min | Sets the\nmaximum allowed time to complete client authentication.\nautovacuum | on | Starts the\nautovacuum subprocess.\nautovacuum_analyze_scale_factor | 0.1 | Number of\ntuple inserts, updates or deletes prior to analyze as a fraction of\nreltuples.\nautovacuum_analyze_threshold | 250 | Minimum\nnumber of tuple inserts, updates or deletes prior to analyze.\nautovacuum_freeze_max_age | 200000000 | Age at\nwhich to autovacuum a table to prevent transaction ID wraparound.\nautovacuum_max_workers | 3 | Sets the\nmaximum number of simultaneously running autovacuum worker processes.\nautovacuum_naptime | 5min | Time to\nsleep between autovacuum runs.\nautovacuum_vacuum_cost_delay | 20ms | Vacuum cost\ndelay in milliseconds, for autovacuum.\nautovacuum_vacuum_cost_limit | -1 | Vacuum cost\namount available before napping, for autovacuum.\nautovacuum_vacuum_scale_factor | 0.2 | Number of\ntuple updates or deletes prior to vacuum as a fraction of reltuples.\nautovacuum_vacuum_threshold | 500 | Minimum\nnumber of tuple updates or deletes prior to vacuum.\nbackslash_quote | safe_encoding | Sets\nwhether \"\\'\" is allowed in string literals.\nbgwriter_delay | 200ms | Background\nwriter sleep time between rounds.\nbgwriter_lru_maxpages | 100 | Background\nwriter maximum number of LRU pages to flush per round.\nbgwriter_lru_multiplier | 2 | Background\nwriter multiplier on average buffers to scan per round.\nblock_size | 8192 | Shows the\nsize of a disk block.\nbonjour_name | | Sets the\nBonjour broadcast service name.\ncheck_function_bodies | on | Check\nfunction bodies during CREATE FUNCTION.\ncheckpoint_completion_target | 0.5 | Time spent\nflushing dirty buffers during checkpoint, as fraction of checkpoint\ninterval.\ncheckpoint_segments | 3 | Sets the\nmaximum distance in log segments between automatic WAL checkpoints.\ncheckpoint_timeout | 5min | Sets the\nmaximum time between automatic WAL checkpoints.\ncheckpoint_warning | 30s | Enables\nwarnings if checkpoint segments are filled more frequently than this.\nclient_encoding | UTF8 | Sets the\nclient's character set encoding.\nclient_min_messages | notice | Sets the\nmessage levels that are sent to the client.\ncommit_delay | 250 | Sets the\ndelay in microseconds between transaction commit and flushing WAL to\ndisk.\ncommit_siblings | 10 | Sets the\nminimum concurrent open transactions before performing 
commit_delay.\nconstraint_exclusion | off | Enables the\nplanner to use constraints to optimize queries.\ncpu_index_tuple_cost | 0.005 | Sets the\nplanner's estimate of the cost of processing each index entry during an\nindex scan.\ncpu_operator_cost | 0.0025 | Sets the\nplanner's estimate of the cost of processing each operator or function\ncall.\ncpu_tuple_cost | 0.01 | Sets the\nplanner's estimate of the cost of processing each tuple (row).\ncustom_variable_classes | | Sets the\nlist of known custom variable classes.\nDateStyle | ISO, MDY | Sets the\ndisplay format for date and time values.\ndb_user_namespace | off | Enables\nper-database user names.\ndeadlock_timeout | 1s | Sets the\ntime to wait on a lock before checking for deadlock.\ndebug_assertions | off | Turns on\nvarious assertion checks.\ndebug_pretty_print | off | Indents\nparse and plan tree displays.\ndebug_print_parse | off | Prints the\nparse tree to the server log.\ndebug_print_plan | off | Prints the\nexecution plan to server log.\ndebug_print_rewritten | off | Prints the\nparse tree after rewriting to server log.\ndefault_statistics_target | 10 | Sets the\ndefault statistics target.\ndefault_tablespace | | Sets the\ndefault tablespace to create tables and indexes in.\ndefault_text_search_config | pg_catalog.simple | Sets\ndefault text search configuration.\n\n \n\n \n\nand the box info:\n\n> cat /proc/meminfo\n\nMemTotal: 8177116 kB\nMemFree: 2830212 kB\nBuffers: 83212 kB\nCached: 2385740 kB\nSwapCached: 32 kB\nActive: 4037560 kB\nInactive: 1082912 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 8177116 kB\nLowFree: 2830212 kB\nSwapTotal: 2097112 kB\nSwapFree: 2096612 kB\nDirty: 4548 kB\nWriteback: 72 kB\nAnonPages: 2651288 kB\nMapped: 311824 kB\nSlab: 173968 kB\nPageTables: 20512 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nCommitLimit: 6185668 kB\nCommitted_AS: 3602784 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 263672 kB\nVmallocChunk: 34359474295 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugepagesize: 2048 kB\n\n \n\n> cat /proc/meminfo\n\nMemTotal: 8177116 kB\nMemFree: 2830212 kB\nBuffers: 83212 kB\nCached: 2385740 kB\nSwapCached: 32 kB\nActive: 4037560 kB\nInactive: 1082912 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 8177116 kB\nLowFree: 2830212 kB\nSwapTotal: 2097112 kB\nSwapFree: 2096612 kB\nDirty: 4548 kB\nWriteback: 72 kB\nAnonPages: 2651288 kB\nMapped: 311824 kB\nSlab: 173968 kB\nPageTables: 20512 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nCommitLimit: 6185668 kB\nCommitted_AS: 3602784 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 263672 kB\nVmallocChunk: 34359474295 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugepagesize: 2048 kB\n\n \n\n \n\n \n\nIt seems to me that we should try increasing shared_buffers. But do you\nhave any other suggestions? Or do you see anything wrong in our config?\n\n \n\n \n\nThanks,\n\nAnne\n
", "msg_date": "Tue, 25 Jan 2011 13:37:54 -0800", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "Queries becoming slow under heavy load" }, { "msg_contents": "\"Anne Rosset\" <[email protected]> wrote:\n \n> We are running some performances tests. With a lot of concurrent\n> access, queries get very slow. When there is no load, those\n> queries run fast.\n \nWhat's \"a lot\"?\n \n> We kind of see a trend about these queries: it seems like the\n> ones that become very slow have an ORDER BY or MAX in them.\n \nWithout knowing the PostgreSQL version or more details about the\nqueries, I would only be guessing at the cause.\n \n> It seems to me that we should try increasing shared_buffers. But\n> do you have any other suggestions? Or do you see anything wrong in\n> our config?\n \nI don't think you showed us your whole PostgreSQL configuration, and\nthe format was hard to read -- it's best to show the contents of\nyour postgresql.conf file, minus comments.\n \nIf you read this page and re-post we can probably be more helpful:\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Tue, 25 Jan 2011 16:12:24 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries becoming slow under heavy load" }, { "msg_contents": "On 1/25/2011 3:37 PM, Anne Rosset wrote:\n> Hi,\n>\n> We are running some performances tests. With a lot of concurrent access,\n> queries get very slow. When there is no load, those queries run fast.\n>\n> We kind of see a trend about these queries: it seems like the ones that\n> become very slow have an ORDER BY or MAX in them.\n>\n> Here are our config settings:\n>\n\n<SNIP>\n\n> It seems to me that we should try increasing shared_buffers. But do you\n> have any other suggestions? 
Or do you see anything wrong in our config?\n>\n> Thanks,\n>\n> Anne\n>\n\nWhile I applaud your attempt to get us lots of information, \nunfortunately the one property you ask about (shared_buffers), I \ncan't seem to find.\n\nSo, maybe you could post a bit more:\n\n1) how many concurrent clients?\n2) can we see an explain analyze for a query when it's fast, and then \nagain when it's slow?\n3) Is this box dedicated to PG or are there other services running?\n4) Looks like you have 8 Gig of ram, so I assume this is a 64 bit OS, \ncan you tell us what you have for:\n\nshared_buffers\neffective_cache_size\nwork_mem\n\n\n5) Once many clients start hitting the db, it might not all fit into ram \nand start hitting the HD, can you tell us what sort of IO you have \n(sata, scsi, raid, # of disks, etc).\n\nThe stats from /proc/meminfo:\nSwapTotal: 2097112 kB\nSwapFree: 2096612 kB\n\nWas this run when the system was busy? Looks like you are not using any \nswap, so that's good at least. Oh, wait, there are two cat \n/proc/meminfo's. Is one when it's fast and one when it's slow?\n\nLooks to me, in both cases, you are not using much memory at all. (If \nyou happen to have 'free', its output is a little more readable; if you \nwouldn't mind posting it, we only really need it for when the box is slow.)\n\n-Andy\n", "msg_date": "Tue, 25 Jan 2011 16:12:45 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries becoming slow under heavy load" }, { "msg_contents": "On 25/01/2011 22:37, Anne Rosset wrote:\n> Hi,\n>\n> We are running some performances tests. With a lot of concurrent\n> access, queries get very slow. When there is no load, those queries run\n> fast.\n\nAs others said, you need to state how many concurrent clients are working \non the database and also how many logical CPUs (CPU cores, \nhyperthreading) are present in the machine. So far, as a rule of thumb, \nif you have more concurrent active connections (i.e. 
doing queries, not \nidling), you will experience a sharp decline in performance if this \nnumber exceeds the number of logical CPUs in the machine.\n\n", "msg_date": "Wed, 26 Jan 2011 14:25:17 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries becoming slow under heavy load" }, { "msg_contents": "\nSorry it seems like the postgres configuration didn't come thru the\nfirst time.\n\nname\t|\tsetting\n---------------------------------\t+\n--------------------------\nadd_missing_from\t|\toff\nallow_system_table_mods\t|\toff\narchive_command\t|\t(disabled)\narchive_mode\t|\toff\narchive_timeout\t|\t0\narray_nulls\t|\ton\nauthentication_timeout\t|\t1min\nautovacuum\t|\ton\nautovacuum_analyze_scale_factor\t|\t0.1\nautovacuum_analyze_threshold\t|\t250\nautovacuum_freeze_max_age\t|\t200000000\nautovacuum_max_workers\t|\t3\nautovacuum_naptime\t|\t5min\nautovacuum_vacuum_cost_delay\t|\t20ms\nautovacuum_vacuum_cost_limit\t|\t-1\nautovacuum_vacuum_scale_factor\t|\t0.2\nautovacuum_vacuum_threshold\t|\t500\nbackslash_quote\t|\tsafe_encoding\nbgwriter_delay\t|\t200ms\nbgwriter_lru_maxpages\t|\t100\nbgwriter_lru_multiplier\t|\t2\nblock_size\t|\t8192\nbonjour_name\t|\t\ncheck_function_bodies\t|\ton\ncheckpoint_completion_target\t|\t0.5\ncheckpoint_segments\t|\t3\ncheckpoint_timeout\t|\t5min\ncheckpoint_warning\t|\t30s\nclient_encoding\t|\tUTF8\nclient_min_messages\t|\tnotice\ncommit_delay\t|\t250\ncommit_siblings\t|\t10\nconstraint_exclusion\t|\toff\ncpu_index_tuple_cost\t|\t0.005\ncpu_operator_cost\t|\t0.0025\ncpu_tuple_cost\t|\t0.01\ncustom_variable_classes\t|\t\nDateStyle\t|\tISO, MDY\ndb_user_namespace\t|\toff\ndeadlock_timeout\t|\t1s\ndebug_assertions\t|\toff\ndebug_pretty_print\t|\toff\ndebug_print_parse\t|\toff\ndebug_print_plan\t|\toff\ndebug_print_rewritten\t|\toff\ndefault_statistics_target\t|\t10\ndefault_tablespace\t|\t\ndefault_text_search_config\t|\tpg_catalog.simple\ndefault_transaction_isolation\t|\tread 
committed\ndefault_transaction_read_only\t|\toff\ndefault_with_oids\t|\toff\neffective_cache_size\t|\t4000000kB\nenable_bitmapscan\t|\ton\nenable_hashagg\t|\ton\nenable_hashjoin\t|\ton\nenable_indexscan\t|\ton\nenable_mergejoin\t|\toff\nenable_nestloop\t|\ton\nenable_seqscan\t|\ton\nenable_sort\t|\ton\nenable_tidscan\t|\ton\nescape_string_warning\t|\ton\nexplain_pretty_print\t|\ton\nextra_float_digits\t|\t0\nfrom_collapse_limit\t|\t8\nfsync\t|\ton\nfull_page_writes\t|\ton\ngeqo\t|\toff\ngeqo_effort\t|\t5\ngeqo_generations\t|\t0\ngeqo_pool_size\t|\t0\ngeqo_selection_bias\t|\t2\ngeqo_threshold\t|\t12\ngin_fuzzy_search_limit\t|\t0\nignore_system_indexes\t|\toff\ninteger_datetimes\t|\toff\njoin_collapse_limit\t|\t8\nkrb_caseins_users\t|\toff\nkrb_server_hostname\t|\t\nkrb_srvname\t|\tpostgres\nlc_collate\t|\ten_US.UTF-8\nlc_ctype\t|\ten_US.UTF-8\nlc_messages\t|\ten_US.UTF-8\nlc_monetary\t|\ten_US.UTF-8\nlc_numeric\t|\ten_US.UTF-8\nlc_time\t|\ten_US.UTF-8\nlisten_addresses\t|\t127.0.0.1,208.75.198.149\nlocal_preload_libraries\t|\t\nlog_autovacuum_min_duration\t|\t-1\nlog_checkpoints\t|\toff\nlog_connections\t|\toff\nlog_destination\t|\tstderr\nlog_disconnections\t|\toff\nlog_duration\t|\toff\nlog_error_verbosity\t|\tdefault\nlog_executor_stats\t|\toff\nlog_hostname\t|\toff\nlog_line_prefix\t|\t\nlog_lock_waits\t|\toff\nlog_min_duration_statement\t|\t-1\nlog_min_error_statement\t|\terror\nlog_min_messages\t|\tnotice\nlog_parser_stats\t|\toff\nlog_planner_stats\t|\toff\nlog_rotation_age\t|\t0\nlog_rotation_size\t|\t0\nlog_statement\t|\tnone\nlog_statement_stats\t|\toff\nlog_temp_files\t|\t-1\nlog_timezone\t|\tAsia/Kolkata\nlog_truncate_on_rotation\t|\toff\nlogging_collector\t|\ton\nmaintenance_work_mem\t|\t256MB\nmax_connections\t|\t100\nmax_files_per_process\t|\t1000\nmax_fsm_pages\t|\t500000\nmax_fsm_relations\t|\t500\nmax_function_args\t|\t100\nmax_identifier_length\t|\t63\nmax_index_keys\t|\t32\nmax_locks_per_transaction\t|\t64\nmax_prepared_transactions\t|\t5\nmax_stack_depth\t|\t5MB\npassword_encryption\t|\ton\nport\t|\t5432\npost_auth_delay\t|\t0\npre_auth_delay\t|\t0\nrandom_page_cost\t|\t4\nregex_flavor\t|\tadvanced\nsearch_path\t|\t\"$user\",public\nseq_page_cost\t|\t1\nserver_encoding\t|\tUTF8\nserver_version\t|\t8.3.8\nserver_version_num\t|\t80308\nsession_replication_role\t|\torigin\nshared_buffers\t|\t240MB\nsilent_mode\t|\toff\nsql_inheritance\t|\ton\nssl\t|\toff\nstandard_conforming_strings\t|\toff\nstatement_timeout\t|\t0\nsuperuser_reserved_connections\t|\t3\nsynchronize_seqscans\t|\ton\nsynchronous_commit\t|\ton\nsyslog_facility\t|\tLOCAL0\nsyslog_ident\t|\tpostgres\ntcp_keepalives_count\t|\t9\ntcp_keepalives_idle\t|\t7200\ntcp_keepalives_interval\t|\t75\ntemp_buffers\t|\t1024\ntemp_tablespaces\t|\t\nTimeZone\t|\tAsia/Kolkata\ntimezone_abbreviations\t|\tDefault\ntrace_notify\t|\toff\ntrace_sort\t|\toff\ntrack_activities\t|\ton\ntrack_counts\t|\ton\ntransaction_isolation\t|\tread committed\ntransaction_read_only\t|\toff\ntransform_null_equals\t|\toff\nunix_socket_group\t|\t\nunix_socket_permissions\t|\t511\nupdate_process_title\t|\ton\nvacuum_cost_delay\t|\t50ms\nvacuum_cost_limit\t|\t200\nvacuum_cost_page_dirty\t|\t20\nvacuum_cost_page_hit\t|\t1\nvacuum_cost_page_miss\t|\t10\nvacuum_freeze_min_age\t|\t100000000\nwal_buffers\t|\t10MB\nwal_sync_method\t|\tfdatasync\nwal_writer_delay\t|\t200ms\nwork_mem\t|\t64MB\nxmlbinary\t|\tbase64\nxmloption\t|\tcontent\nzero_damaged_pages\t|\toff\n(176 rows)\t\n\n\t\nToday we did more analysis and observed postgress processes that\ncontinually 
reported status 'D' in top. The corresponding vmstat showed\na proportionate amount of processes under the 'b' column,\n\"uninterruptible\" state.\n\nWe've been able to match long running database queries to such\nprocesses. This occurs under relatively low load average (say 4 out of\n8) and can involve as little as 1 single sql query.\nIt seems that many queries get into that state and that is causing our\nload average to spike very high.\nQueries are finishing even though we continue to see an increase in\npostgres processes in 'D' state.\nAre we facing some serious db locking? What could lead to this?\n(The box has 8G and 8 cores)\n\nThanks for any help,\nAnne\n\n\n\n\n-----Original Message-----\nFrom: Andy Colson [mailto:[email protected]]\nSent: Tuesday, January 25, 2011 2:13 PM\nTo: Anne Rosset\nCc: [email protected]\nSubject: Re: [PERFORM] Queries becoming slow under heavy load\n\nOn 1/25/2011 3:37 PM, Anne Rosset wrote:\n> Hi,\n>\n> We are running some performances tests. With a lot of concurrent \n> access, queries get very slow. When there is no load, those queries\nrun fast.\n>\n> We kind of see a trend about these queries: it seems like the ones \n> that become very slow have an ORDER BY or MAX in them.\n>\n> Here are our config settings:\n>\n\n<SNIP>\n\n> It seems to me that we should try increasing shared_buffers. But do \n> you have any other suggestions? Or do you see anything wrong in our\nconfig?\n>\n> Thanks,\n>\n> Anne\n>\n\nWhile I applaud your attempt to get us lots of information,\nunfortunately the the one property you ask about (shared_buffers), I\ncan't seem to find.\n\nSo, maybe you could post a bit more:\n\n1) how many concurrent clients?\n2) can we see an explain analyze for a query when its fast, and then\nagain when its slow?\n3) Is this box dedicated to PG or are there other services running?\n4) Looks like you have 8 Gig of ram, so I assume this is a 64 bit OS,\ncan you tell us what you have for:\n\nshared_buffers\neffective_cahce_size\nwork_mem\n\n\n5) Once many clients start hitting the db, it might not all fit into ram\nand start hitting the HD, can you tell us what sort of IO you have\n(sata, scsi, raid, # of disks, etc).\n\nThe stats from /proc/meminfo:\nSwapTotal: 2097112 kB\nSwapFree: 2096612 kB\n\nWas this run when the system was busy? Looks like you are not using any\nswap, so thats good at least. Oh, wait, there are two cat\n/proc/meminfo's. Is one when its fast and one when its slow?\n\nLooks to me, in both cases, you are not using much memory at all. (if\nyou happen to have 'free', its output is a little more readable, if you\nwouldn't mind posting it (only really need it for when the box is slow)\n\n-Andy\n", "msg_date": "Wed, 26 Jan 2011 08:04:41 -0800", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "FW: Queries becoming slow under heavy load" }, { "msg_contents": "On 01/26/2011 10:04 AM, Anne Rosset wrote:\n\n> We've been able to match long running database queries to such\n> processes. This occurs under relatively low load average (say 4 out of\n> 8) and can involve as little as 1 single sql query.\n\nThe b state means the process is blocking, waiting for... something. One \nthing you need to consider is far more than your CPU usage. If you have \nthe 'sar' utility, run it as 'sar 1 100' just to see how your system is \nworking. 
What you want to watch for is iowait.\n\nIf even one query is churning your disks, every single other query that \nhas to take even one block from disk instead of cache, is going to \nstall. If you see an iowait of anything greater than 5%, you'll want to \ncheck further on the device that contains your database with iostat. My \nfavorite use of this is 'iostat -dmx [device] 1' where [device] is the \nblock device where your data files are, if your WAL is somewhere else.\n\nAnd yeah, your shared_buffers are kinda on the lowish side. Your \neffective_cache_size is good, but you have a lot more room to increase \nPG-specific memory.\n\nWorse however, is your checkpoints. Lord. Increase checkpoint_segments \nto *at least* 20, and increase your checkpoint_completion_target to 0.7 \nor 0.8. Check your logs for checkpoint warnings, and I'll bet it's \nconstantly complaining about increasing your checkpoint segments. Every \ncheckpoint not started by the scheduled system risks a checkpoint spike, \nwhich can flood your system with IO regardless of which queries are \nrunning. That kind of IO storm will ruin your performance, and with only \n3 checkpoint segments on a busy database, are probably happening constantly.\n\nUnfortunately we still need to know more. This is just based on your PG \nsettings, and that's not really enough to know how \"busy\" your DB is. \nOne way to check is to log the contents of pg_stat_database, especially \nthe xact_commit and xact_rollback columns. Grab those with a timestamp. \nIf you get a snapshot of that every minute, you can figure out how many \nqueries you're processing per minute or per second pretty easily. We've \nhit 8600 TPS before and don't have nearly the trouble you've been reporting.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Wed, 26 Jan 2011 11:16:38 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "\n> -----Original Message-----\n> From: [email protected] [mailto:pgsql-performance-\n> [email protected]] On Behalf Of Ivan Voras\n> Sent: Wednesday, January 26, 2011 6:25 AM\n> To: [email protected]\n> Subject: Re: [PERFORM] Queries becoming slow under heavy load\n> \n> On 25/01/2011 22:37, Anne Rosset wrote:\n> > Hi,\n> >\n> > We are running some performances tests. With a lot of concurrent\n> > access, queries get very slow. When there is no load, those queries\n> run\n> > fast.\n> \n> As others said, you need to stat how many concurrent clients are\n> working\n> on the database and also the number of logical CPUs (CPU cores,\n> hyperthreading) are present in the machine. So far, as a rule of thumb,\n> if you have more concurrent active connections (i.e. doing queries, not\n> idling), you will experience a sharp decline in performance if this\n> number exceeds the number of logical CPUs in the machine.\n> \nDepending on what version the OP is running - I didn't see where a version was givin - if there is a \"lot\" number of idle connections it can affect things as well. Tom indicated to me this should be \"much better\" in 8.4 and later. 
\n\n\n<slight deviation to put idle connections overhead in prespecive>\nWe cut our idle connections from 600+ to a bit over 300 and saw a good drop in box load and query responsiveness. (still get large user cpu load spikes though when a few hundred idle connection processes are woken open because they all appear to be sleeping on the same semaphore and one of them has work to do. )\n\n(yeah I know get a pooler, to bad only bouncer seems to pool out idle connections with transaction pooling but then I lose prepared statements... I am still working on that part and getting off 8.3. yes our app tried to do its own quasi connection pooling. When we deployed the app on a few hundred boxes the error of this choice years ago when this app lived on much fewer machines is now pretty evident.)\n\n> \n> --\n> Sent via pgsql-performance mailing list (pgsql-\n> [email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Wed, 26 Jan 2011 19:34:42 -0700", "msg_from": "\"mark\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Queries becoming slow under heavy load" }, { "msg_contents": "On Wed, Jan 26, 2011 at 9:04 AM, Anne Rosset <[email protected]> wrote:\n\n<HUGE LIST OF SETTINGS DELETED>\n\nPLEASE post just the settings you changed. I'm not searching through\na list that big for the interesting bits.\n\n> Today we did more analysis and observed  postgress processes that\n> continually reported status 'D' in top.\n\nFull stop. The most likely problem here is that the query is now\nhitting the disks and waiting. If you have 1 disk and two users, the\naccess speed will drop by factors, usually much higher than 2.\n\nTo put it very simply, you need as many mirror pairs in your RAID-10\nor as many disks in your RAID5 or RAID 6 as you have users reading the\ndisk drives. If you're writing you need more and more disks too.\nMediating this issue we find things like SSD cache in ZFS or battery\nbacked RAID controllers. They allow the reads and writes to be\nstreamlined quite a bit to the spinning disks, making it appear the\nRAID array underneath it was much faster, had better access, and all\nthe sectors were near each other. To an extent.\n\nIf you have the answer to the previous poster's question \"can you tell\nus what sort of IO you have (sata, scsi, raid, # of disks, etc).\" you\nshould provide it. If you've got a pair of 5k RPM SATA drives in a\nRAID-1 you might need more hardware.\n\nSo, instead of just settings, show us a few carefully selected lines\nof output from vmstat or iostat while this is happening. Don't tell\nus what you see, show us.\n", "msg_date": "Wed, 26 Jan 2011 21:18:34 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "On Wed, Jan 26, 2011 at 10:16 AM, Shaun Thomas <[email protected]> wrote:\n> Worse however, is your checkpoints. Lord. Increase checkpoint_segments to\n> *at least* 20, and increase your checkpoint_completion_target to 0.7 or 0.8.\n> Check your logs for checkpoint warnings, and I'll bet it's constantly\n> complaining about increasing your checkpoint segments. Every checkpoint not\n> started by the scheduled system risks a checkpoint spike, which can flood\n> your system with IO regardless of which queries are running. 
That kind of IO\n> storm will ruin your performance, and with only 3 checkpoint segments on a\n> busy database, are probably happening constantly.\n\nTo Shaun:\n\nUnless she's not write bound but read bound. We can't tell because we\nhaven't seen the queries. We haven't seen the output of iostat or\nvmstat.\n\nTo Anne:\n\nAnother tool to recommend is sar. it's part of the sysstat package on\ndebian / ubuntu / rhel. you have to enable it in various ways, it'll\ntell you when you try to run it after installing it. It allows you\nto look back over the last 7 days, 5 minutes at a time, to see the\ntrends on your servers. Very useful stuff and easy to graph in a\nspreadsheet or web page. Or just read it.\n\nFor instance, here's the output of sar on the data drive of a slave\n(read only) server under slony replication.\n\nsar -d -f sa25|grep \"02:[01234].:.. AM.\\+dev251-1\"\nLinux 2.6.32-27-server () \t01/25/2011 \t_x86_64_\t(16 CPU)\n\n12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz\navgqu-sz await svctm %util\n09:45:01 AM dev251-1 481.21 6981.74 1745.82 18.14\n4.86 10.10 1.57 75.65\n09:55:01 AM dev251-1 620.16 28539.52 2135.67 49.46\n5.25 8.47 1.22 75.42\n10:05:01 AM dev251-1 1497.16 29283.95 1898.94 20.83\n13.89 9.28 0.64 96.52\n10:15:01 AM dev251-1 1040.47 17123.89 2286.10 18.66\n8.89 8.54 0.87 90.49\n10:25:01 AM dev251-1 562.97 8802.77 1515.50 18.33\n4.84 8.60 1.41 79.57\n\nLet me interpret for ya, in case it's not obvious. IO Utilization\nruns from about 50% to about 90%. when it's at 90% we are running 700\nto 1000 tps, reading at a maximum of 15MB a second and writing at a\npaltry 1M or so a second. Average wait stays around 10ms. If we use\nsar -u from the same time period, we cna match up iowait to this chart\nand see if we were really waiting on IO or not.\n\n12:00:01 AM CPU %user %nice %system %iowait %steal %idle\n09:45:01 AM all 47.44 0.00 5.20 4.94 0.00 42.42\n09:55:01 AM all 46.42 0.00 5.63 5.77 0.00 42.18\n10:05:01 AM all 48.64 0.00 6.35 11.87 0.00 33.15\n10:15:01 AM all 46.94 0.00 5.79 8.81 0.00 38.46\n10:25:01 AM all 48.68 0.00 5.58 5.42 0.00 40.32\n\nWe can see that we have at peak, 11% of our CPU power is waiting\nbehind IO. We have 16 CPUs, so each one is 100/16 or 6.25% of the\ntotal. So at 11% we have two cores on hold the whole time basically.\nIn real life on this machine we have ALL cpus waiting about 11% of the\ntime across the board. But the math comes out the same. We're\nwaiting on IO.\n\nHere's a heavy db server, lots of ram, same time period. sdh is one\nof a large number of disks in a RAID-10 array. md17 is that RAID-10\narray (actually the RAID0 at the top of a bunch of RAID-1s I still\ndon't trust linux's RAID-10 implementation).\n12:00:01 AM DEV tps rd_sec/s wr_sec/s avgrq-sz\navgqu-sz await svctm %util\n09:45:01 sdh 5.86 5.65 158.87 28.08\n0.21 35.07 3.36 1.97\n09:45:01 md17 253.78 168.69 1987.64 8.50\n0.00 0.00 0.00 0.00\n09:55:01 sdh 5.48 5.65 134.99 25.68\n0.16 30.00 3.31 1.81\n09:55:01 md17 215.13 157.56 1679.67 8.54\n0.00 0.00 0.00 0.00\n10:05:01 sdh 4.37 5.39 106.53 25.58\n0.09 21.61 3.57 1.56\n10:05:01 md17 170.81 133.76 1334.43 8.60\n0.00 0.00 0.00 0.00\n10:15:01 sdh 6.16 5.37 177.25 29.64\n0.25 40.95 3.38 2.08\n10:15:01 md17 280.63 137.88 2206.95 8.36\n0.00 0.00 0.00 0.00\n10:25:01 sdh 4.52 3.72 116.41 26.56\n0.09 20.64 3.58 1.62\n10:25:01 md17 187.65 107.59 1470.88 8.41\n0.00 0.00 0.00 0.00\n\n(Note that md devices do not show values for %util, svctm or await\nhence the need for sdh)\n\nThis db fits the data set in ram, the other machine doesn't. 
It had a\nRAID controller, but that caught fire, and burned down. The next\ncaught fire, burned down, and fell into the swamp. It now has a\nstraight up SAS controller with no caching. Numbers were even better\nwhen it had a caching RAID controller, but I got tired of replacing\nthem.\n\nOK, so md17 is handling only 280 tps, while the array on the other,\nsmaller server, was around 1,000. The master is reading 5 or 6\nsectors per second, while the slave is reading 30k sectors a second.\nThe master is writing at ~1500 to 2000 sectors a second, the slave is\nsimilar. The slave server here was IO bound very badly because it A:\ndidn't have enough memory to cache the data set, and B: hadn't had\ntime to warm up to get what memory it had to do the job. It was\nthrown into the mix mid morning rush and it fell flat on its ass. If\nit had been warmed up first (running it at 100:1 load factor by our\nload balancing module to start) it would have been ok. It would have\nstill had horrible IO numbers though. Once the caches load by\n1500hrs, the slave is reading at just 500 sectors / sec, child's play\nreally.\n\nSo, get sar running, and get some numbers from the machine when these\nthings are happening. Look for how it looks before during and after\nthe crisis.\n", "msg_date": "Wed, 26 Jan 2011 22:07:42 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "Scott,\nThanks for your response.\nWe are over NFS for our storage ...\n\nHere is what we see during our performance testing:\nThis is about 7 seconds after the query was sent to postgres:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 7090 root 25 0 689m 399m 10m R 89.9 5.0 3868:44 java \n 1846 postgres 16 0 474m 198m 103m R 75.2 2.5 0:28.69 postmaster \n 2170 postgres 15 0 391m 203m 188m R 44.0 2.6 0:17.63 postmaster \n 2555 httpd 18 0 298m 15m 4808 R 22.0 0.2 0:00.12 httpd \n 2558 root 15 0 29056 2324 1424 R 1.8 0.0 0:00.01 top \n 1207 httpd 15 0 337m 20m 7064 R 0.0 0.3 0:00.69 httpd \n28312 postgres 16 0 396m 183m 162m D 0.0 2.3 0:50.82 postmaster <---- this is the query here\n\nNotice the 0% CPU, also, notice the 183m RES memory.\n\nTen seconds later:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 7090 root 25 0 689m 399m 10m R 92.9 5.0 3868:53 java \n 2657 root 15 0 29056 2328 1424 R 1.9 0.0 0:00.01 top \n28312 postgres 16 0 396m 184m 162m D 0.0 2.3 0:50.84 postmaster <---- here\n\nTen seconds after that:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 7090 root 25 0 689m 399m 10m R 88.7 5.0 3869:02 java \n 1845 postgres 16 0 473m 223m 127m D 22.6 2.8 0:26.39 postmaster \n 2412 httpd 15 0 2245m 1.4g 16m R 18.9 17.8 0:02.48 java \n 966 postgres 15 0 395m 242m 221m D 0.0 3.0 1:02.31 postmaster \n 2680 root 15 0 29056 2336 1424 R 0.0 0.0 0:00.01 top \n28312 postgres 16 0 396m 184m 163m D 0.0 2.3 0:50.85 postmaster <--- here\n\netc....\n\nand it's not until around the 221 second mark that we see catch it consuming CPU:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 7090 root 25 0 689m 399m 10m R 93.4 5.0 3872:07 java \n28312 postgres 16 0 396m 225m 204m R 5.7 2.8 0:51.52 postmaster <----- here \n 3391 root 15 0 29056 2348 1424 R 1.9 0.0 0:00.01 top \n 4297 root 16 0 10228 740 632 D 0.0 0.0 12:53.66 hald-addon-stor \n26885 httpd 15 0 2263m 1.5g 16m R 0.0 19.0 0:00.01 java\n\nNote that the load average is fine during this timeframe, ~4 out of 8, so plenty of CPU.\n\nLooks like this is 
true \"halting\".\n\nFurther, or worse yet, this same behavior expands out to multiple processes, producing a true \"back up\". It can look \nsomething like this. Notice the 0% cpu consumption:\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND \n 7090 root 22 0 689m 399m 10m R 91.1 5.0 3874:32 java \n 4139 root 15 0 29080 2344 1424 R 1.9 0.0 0:00.01 top \n 1555 postgres 16 0 474m 258m 162m D 0.0 3.2 0:17.32 postmaster \n 1846 postgres 16 0 474m 285m 189m D 0.0 3.6 0:47.43 postmaster \n 2713 postgres 16 0 404m 202m 179m D 0.0 2.5 0:33.54 postmaster \n 2801 postgres 16 0 391m 146m 131m D 0.0 1.8 0:04.48 postmaster \n 2804 postgres 16 0 419m 172m 133m D 0.0 2.2 0:09.41 postmaster \n 2825 postgres 16 0 473m 142m 49m D 0.0 1.8 0:04.12 postmaster\n\nThanks for any additional explanation/advice,\nAnne\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: Wednesday, January 26, 2011 8:19 PM\nTo: Anne Rosset\nCc: [email protected]\nSubject: Re: FW: [PERFORM] Queries becoming slow under heavy load\n\nOn Wed, Jan 26, 2011 at 9:04 AM, Anne Rosset <[email protected]> wrote:\n\n<HUGE LIST OF SETTINGS DELETED>\n\nPLEASE post just the settings you changed. I'm not searching through a list that big for the interesting bits.\n\n> Today we did more analysis and observed  postgress processes that \n> continually reported status 'D' in top.\n\nFull stop. The most likely problem here is that the query is now hitting the disks and waiting. If you have 1 disk and two users, the access speed will drop by factors, usually much higher than 2.\n\nTo put it very simply, you need as many mirror pairs in your RAID-10 or as many disks in your RAID5 or RAID 6 as you have users reading the disk drives. If you're writing you need more and more disks too.\nMediating this issue we find things like SSD cache in ZFS or battery backed RAID controllers. They allow the reads and writes to be streamlined quite a bit to the spinning disks, making it appear the RAID array underneath it was much faster, had better access, and all\nthe sectors were near each other. To an extent.\n\nIf you have the answer to the previous poster's question \"can you tell us what sort of IO you have (sata, scsi, raid, # of disks, etc).\" you should provide it. If you've got a pair of 5k RPM SATA drives in a\nRAID-1 you might need more hardware.\n\nSo, instead of just settings, show us a few carefully selected lines of output from vmstat or iostat while this is happening. Don't tell us what you see, show us.\n", "msg_date": "Thu, 27 Jan 2011 21:12:36 -0800", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "On 01/27/2011 11:12 PM, Anne Rosset wrote:\n\n> Thanks for your response.\n> We are over NFS for our storage ...\n\nNFS? 
I'm not sure you know this, but NFS has major locking issues\nthat would make it a terrible candidate for hosting a database.\n\n> and it's not until around the 221 second mark that we see catch it consuming CPU:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 7090 root 25 0 689m 399m 10m R 93.4 5.0 3872:07 java\n> 28312 postgres 16 0 396m 225m 204m R 5.7 2.8 0:51.52 postmaster<----- here\n> 3391 root 15 0 29056 2348 1424 R 1.9 0.0 0:00.01 top\n> 4297 root 16 0 10228 740 632 D 0.0 0.0 12:53.66 hald-addon-stor\n> 26885 httpd 15 0 2263m 1.5g 16m R 0.0 19.0 0:00.01 java\n> \n> Note that the load average is fine during this timeframe, ~4 out of 8, so plenty of CPU.\n\nPlease listen to us. We asked you to use sar, or iostat, to tell\nus how much the disk IO is being utilized. From your other\nscreenshots, there were at least two other PG processes that \nwere running and could have been thrashing the disk or locking \ntables your \"slow\" query needed. If it's waiting for disk IO, the \nCPU will remain low until it gets what it needs.\n\nNot everything is about the CPU. Especially now that we know your DB is \nrunning on top of NFS.\n\n> Further, or worse yet, this same behavior expands out to multiple processes, \n> producing a true \"back up\". It can look\n> something like this. Notice the 0% cpu consumption:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 7090 root 22 0 689m 399m 10m R 91.1 5.0 3874:32 java\n> 4139 root 15 0 29080 2344 1424 R 1.9 0.0 0:00.01 top\n> 1555 postgres 16 0 474m 258m 162m D 0.0 3.2 0:17.32 postmaster\n> 1846 postgres 16 0 474m 285m 189m D 0.0 3.6 0:47.43 postmaster\n> 2713 postgres 16 0 404m 202m 179m D 0.0 2.5 0:33.54 postmaster\n> 2801 postgres 16 0 391m 146m 131m D 0.0 1.8 0:04.48 postmaster\n> 2804 postgres 16 0 419m 172m 133m D 0.0 2.2 0:09.41 postmaster\n> 2825 postgres 16 0 473m 142m 49m D 0.0 1.8 0:04.12 postmaster\n\nYes. And they could all be waiting for IO. Or NFS locking is blocking \nthe reads. Or... what is that Java app doing? We don't know the state \nof your IO, and when you have 0% or very low CPU usage, you either have \nlocking contention or you're being IO starved.\n\nAnd what queries are these connections performing? You can check it by \ngetting the contents of the pg_stat_activity system view. If they're \nselecting and still \"slow\", compare that against the iostat or sar \nresults. For instance, here's an IOSTAT of our system:\n\niostat -dmx dm-9 1\n\nLinux 2.6.18-92.el5 (oslchi6pedb1) \t01/28/2011\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\ndm-9 0.00 0.00 125.46 227.78 4.95 0.89 33.88 0.08 0.19 0.08 2.91\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\ndm-9 0.00 0.00 5.00 0.00 0.04 0.00 14.40 0.05 10.60 10.60 5.30\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\ndm-9 0.00 0.00 2.00 0.00 0.02 0.00 16.00 0.01 7.00 7.00 1.40\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\ndm-9 0.00 0.00 4.00 1184.00 0.04 4.62 8.04 27.23 11.73 0.06 6.80\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz avgqu-sz await svctm %util\ndm-9 0.00 0.00 11.00 847.00 0.09 3.31 8.10 29.31 49.65 0.79 67.90\n\n\nThat last column, %util, effectively tells us how saturated the\ncontroller is. If the percentage is high, it's really working \nhard to supply the data we're asking for, or trying to write. If \nit's low, we're probably working from memory cache, or getting \nless requests. 
There have been times our queries are \"slow\" and \nwhen we check this stat, it's often at or above 90%, sometimes \nfor minutes at a time. That's almost always a clear indicator \nyou have IO contention. Queries can't work without the data \nthey need to return your results.\n\nSending us more CPU charts isn't going to help us in helping\nyou.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 28 Jan 2011 09:31:10 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "Shaun Thomas wrote:\n> On 01/27/2011 11:12 PM, Anne Rosset wrote:\n>\n> \n>> Thanks for your response.\n>> We are over NFS for our storage ...\n>> \n>\n> NFS? I'm not sure you know this, but NFS has major locking issues\n> that would make it a terrible candidate for hosting a database.\n>\n> \nThat depends on the implementation. Vendor supported NAS, running NFS3 \nor NFS4 should be OK. There are other databases that can use it, too. \nSome databases even have a built-in NFS client.\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 28 Jan 2011 13:46:09 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "Thanks to all of you who replied and pointed NFS as a potential\nculprit.\nOur issue was that pgsql's temp dir (pgsql_tmp) was set to the default\nvalue ( $PSQL_DIR/base/pgsql_tmp/) which was located in NFS. \nMoving the temp dir to local disk got us a huge improvement. \n\nAnne\n\n-----Original Message-----\nFrom: Shaun Thomas [mailto:[email protected]] \nSent: Friday, January 28, 2011 7:31 AM\nTo: Anne Rosset\nCc: [email protected]\nSubject: Re: FW: [PERFORM] Queries becoming slow under heavy load\n\nOn 01/27/2011 11:12 PM, Anne Rosset wrote:\n\n> Thanks for your response.\n> We are over NFS for our storage ...\n\nNFS? I'm not sure you know this, but NFS has major locking issues that\nwould make it a terrible candidate for hosting a database.\n\n> and it's not until around the 221 second mark that we see catch it\nconsuming CPU:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 7090 root 25 0 689m 399m 10m R 93.4 5.0 3872:07 java\n> 28312 postgres 16 0 396m 225m 204m R 5.7 2.8 0:51.52\npostmaster<----- here\n> 3391 root 15 0 29056 2348 1424 R 1.9 0.0 0:00.01 top\n> 4297 root 16 0 10228 740 632 D 0.0 0.0 12:53.66\nhald-addon-stor\n> 26885 httpd 15 0 2263m 1.5g 16m R 0.0 19.0 0:00.01 java\n> \n> Note that the load average is fine during this timeframe, ~4 out of 8,\nso plenty of CPU.\n\nPlease listen to us. We asked you to use sar, or iostat, to tell us how\nmuch the disk IO is being utilized. From your other screenshots, there\nwere at least two other PG processes that were running and could have\nbeen thrashing the disk or locking tables your \"slow\" query needed. If\nit's waiting for disk IO, the CPU will remain low until it gets what it\nneeds.\n\nNot everything is about the CPU. 
Especially now that we know your DB is\nrunning on top of NFS.\n\n> Further, or worse yet, this same behavior expands out to multiple \n> processes, producing a true \"back up\". It can look something like \n> this. Notice the 0% cpu consumption:\n> \n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 7090 root 22 0 689m 399m 10m R 91.1 5.0 3874:32 java\n> 4139 root 15 0 29080 2344 1424 R 1.9 0.0 0:00.01 top\n> 1555 postgres 16 0 474m 258m 162m D 0.0 3.2 0:17.32\npostmaster\n> 1846 postgres 16 0 474m 285m 189m D 0.0 3.6 0:47.43\npostmaster\n> 2713 postgres 16 0 404m 202m 179m D 0.0 2.5 0:33.54\npostmaster\n> 2801 postgres 16 0 391m 146m 131m D 0.0 1.8 0:04.48\npostmaster\n> 2804 postgres 16 0 419m 172m 133m D 0.0 2.2 0:09.41\npostmaster\n> 2825 postgres 16 0 473m 142m 49m D 0.0 1.8 0:04.12\npostmaster\n\nYes. And they could all be waiting for IO. Or NFS locking is blocking\nthe reads. Or... what is that Java app doing? We don't know the state of\nyour IO, and when you have 0% or very low CPU usage, you either have\nlocking contention or you're being IO starved.\n\nAnd what queries are these connections performing? You can check it by\ngetting the contents of the pg_stat_activity system view. If they're\nselecting and still \"slow\", compare that against the iostat or sar\nresults. For instance, here's an IOSTAT of our system:\n\niostat -dmx dm-9 1\n\nLinux 2.6.18-92.el5 (oslchi6pedb1) \t01/28/2011\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\navgqu-sz await svctm %util\ndm-9 0.00 0.00 125.46 227.78 4.95 0.89 33.88\n0.08 0.19 0.08 2.91\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\navgqu-sz await svctm %util\ndm-9 0.00 0.00 5.00 0.00 0.04 0.00 14.40\n0.05 10.60 10.60 5.30\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\navgqu-sz await svctm %util\ndm-9 0.00 0.00 2.00 0.00 0.02 0.00 16.00\n0.01 7.00 7.00 1.40\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\navgqu-sz await svctm %util\ndm-9 0.00 0.00 4.00 1184.00 0.04 4.62 8.04\n27.23 11.73 0.06 6.80\n\nDevice: rrqm/s wrqm/s r/s w/s rMB/s wMB/s avgrq-sz\navgqu-sz await svctm %util\ndm-9 0.00 0.00 11.00 847.00 0.09 3.31 8.10\n29.31 49.65 0.79 67.90\n\n\nThat last column, %util, effectively tells us how saturated the\ncontroller is. If the percentage is high, it's really working hard to\nsupply the data we're asking for, or trying to write. If it's low, we're\nprobably working from memory cache, or getting less requests. There have\nbeen times our queries are \"slow\" and when we check this stat, it's\noften at or above 90%, sometimes for minutes at a time. That's almost\nalways a clear indicator you have IO contention. Queries can't work\nwithout the data they need to return your results.\n\nSending us more CPU charts isn't going to help us in helping you.\n\n--\nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 3 Feb 2011 09:40:02 -0800", "msg_from": "\"Anne Rosset\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: FW: Queries becoming slow under heavy load" }, { "msg_contents": "Excellent! And you learned a bit more about how to monitor your\nserver while you were at it. 
Win win!\n\nOn Thu, Feb 3, 2011 at 10:40 AM, Anne Rosset <[email protected]> wrote:\n> Thanks to all  of you who replied and pointed NFS as a potential\n> culprit.\n> Our issue was that  pgsql's temp dir (pgsql_tmp)  was set to the default\n> value ( $PSQL_DIR/base/pgsql_tmp/)  which was located in NFS.\n> Moving the temp dir to local disk got us  a huge improvement.\n>\n> Anne\n>\n> -----Original Message-----\n> From: Shaun Thomas [mailto:[email protected]]\n> Sent: Friday, January 28, 2011 7:31 AM\n> To: Anne Rosset\n> Cc: [email protected]\n> Subject: Re: FW: [PERFORM] Queries becoming slow under heavy load\n>\n> On 01/27/2011 11:12 PM, Anne Rosset wrote:\n>\n>> Thanks for your response.\n>> We are over NFS for our storage ...\n>\n> NFS? I'm not sure you know this, but NFS has major locking issues that\n> would make it a terrible candidate for hosting a database.\n>\n>> and it's not until around the 221 second mark that we see catch it\n> consuming CPU:\n>>\n>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n>>   7090 root      25   0  689m 399m  10m R 93.4  5.0   3872:07 java\n>> 28312 postgres  16   0  396m 225m 204m R  5.7  2.8   0:51.52\n> postmaster<----- here\n>>   3391 root      15   0 29056 2348 1424 R  1.9  0.0   0:00.01 top\n>>   4297 root      16   0 10228  740  632 D  0.0  0.0  12:53.66\n> hald-addon-stor\n>> 26885 httpd     15   0 2263m 1.5g  16m R  0.0 19.0   0:00.01 java\n>>\n>> Note that the load average is fine during this timeframe, ~4 out of 8,\n> so plenty of CPU.\n>\n> Please listen to us. We asked you to use sar, or iostat, to tell us how\n> much the disk IO is being utilized. From your other screenshots, there\n> were at least two other PG processes that were running and could have\n> been thrashing the disk or locking tables your \"slow\" query needed. If\n> it's waiting for disk IO, the CPU will remain low until it gets what it\n> needs.\n>\n> Not everything is about the CPU. Especially now that we know your DB is\n> running on top of NFS.\n>\n>> Further, or worse yet, this same behavior expands out to multiple\n>> processes, producing a true \"back up\". It can look something like\n>> this. Notice the 0% cpu consumption:\n>>\n>>    PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND\n>>   7090 root      22   0  689m 399m  10m R 91.1  5.0   3874:32 java\n>>   4139 root      15   0 29080 2344 1424 R  1.9  0.0   0:00.01 top\n>>   1555 postgres  16   0  474m 258m 162m D  0.0  3.2   0:17.32\n> postmaster\n>>   1846 postgres  16   0  474m 285m 189m D  0.0  3.6   0:47.43\n> postmaster\n>>   2713 postgres  16   0  404m 202m 179m D  0.0  2.5   0:33.54\n> postmaster\n>>   2801 postgres  16   0  391m 146m 131m D  0.0  1.8   0:04.48\n> postmaster\n>>   2804 postgres  16   0  419m 172m 133m D  0.0  2.2   0:09.41\n> postmaster\n>>   2825 postgres  16   0  473m 142m  49m D  0.0  1.8   0:04.12\n> postmaster\n>\n> Yes. And they could all be waiting for IO. Or NFS locking is blocking\n> the reads. Or... what is that Java app doing? We don't know the state of\n> your IO, and when you have 0% or very low CPU usage, you either have\n> locking contention or you're being IO starved.\n>\n> And what queries are these connections performing? You can check it by\n> getting the contents of the pg_stat_activity system view. If they're\n> selecting and still \"slow\", compare that against the iostat or sar\n> results. 
For instance, here's an IOSTAT of our system:\n>\n> iostat -dmx dm-9 1\n>\n> Linux 2.6.18-92.el5 (oslchi6pedb1)      01/28/2011\n>\n> Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz\n> avgqu-sz   await  svctm  %util\n> dm-9              0.00     0.00 125.46 227.78     4.95     0.89    33.88\n> 0.08    0.19   0.08   2.91\n>\n> Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz\n> avgqu-sz   await  svctm  %util\n> dm-9              0.00     0.00  5.00  0.00     0.04     0.00    14.40\n> 0.05   10.60  10.60   5.30\n>\n> Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz\n> avgqu-sz   await  svctm  %util\n> dm-9              0.00     0.00  2.00  0.00     0.02     0.00    16.00\n> 0.01    7.00   7.00   1.40\n>\n> Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz\n> avgqu-sz   await  svctm  %util\n> dm-9              0.00     0.00  4.00 1184.00     0.04     4.62     8.04\n> 27.23   11.73   0.06   6.80\n>\n> Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz\n> avgqu-sz   await  svctm  %util\n> dm-9              0.00     0.00 11.00 847.00     0.09     3.31     8.10\n> 29.31   49.65   0.79  67.90\n>\n>\n> That last column, %util, effectively tells us how saturated the\n> controller is. If the percentage is high, it's really working hard to\n> supply the data we're asking for, or trying to write. If it's low, we're\n> probably working from memory cache, or getting less requests. There have\n> been times our queries are \"slow\" and when we check this stat, it's\n> often at or above 90%, sometimes for minutes at a time. That's almost\n> always a clear indicator you have IO contention. Queries can't work\n> without the data they need to return your results.\n>\n> Sending us more CPU charts isn't going to help us in helping you.\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See  http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Thu, 3 Feb 2011 11:22:24 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: Queries becoming slow under heavy load" } ]
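For readers who reach this thread with the same symptom, here is a minimal sketch of the fix Anne describes, expressed in SQL rather than by relocating pgsql_tmp by hand: create a tablespace on local disk and point temp_tablespaces at it. The tablespace name and path below are assumptions for illustration, not values from the thread; the directory must already exist, be empty, and be owned by the postgres OS user, and temp_tablespaces is available from 8.3 on.

  -- run as a superuser; /var/lib/pgsql/local_tmp is an assumed local-disk path
  CREATE TABLESPACE local_tmp LOCATION '/var/lib/pgsql/local_tmp';

  -- per session, or set it in postgresql.conf, so sorts and hashes that spill
  -- past work_mem write their temporary files to local disk instead of NFS
  SET temp_tablespaces = 'local_tmp';

The data directory stays where it is; only the temporary-file traffic moves off the NFS mount.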
[ { "msg_contents": "\nWhen you say that with a lot of concurrent access, queries get very slow, How many concurrent connections to your server have you had?\nmore that max_connections´value?\nIf you want to have many concurrent connections, you should have consider to use a pooling connection system like pgbouncer or pgpool.\n\nWhich are the values for:\n- work_mem\n- shared_buffers\n- maintenance_work_mem\n- effective_cache_size\n- effective_io_concurrency\n- server_version\n\nWhich are your platform?\n\nRegards\n--\nIng. Marcos Luís Ortíz Valmaseda\nSystem Engineer -- Database Administrator\n\nCentro de Tecnologías de Gestión de Datos (DATEC)\nUniversidad de las Ciencias Informáticas\nhttp://postgresql.uci.cu\n\n", "msg_date": "Tue, 25 Jan 2011 17:27:01 -0500 (CST)", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Queries becoming slow under heavy load" } ]
[ { "msg_contents": "New to Postgres and am prototyping a migration from Oracle to Postgres 9.0.1 on Linux. Starting with the data warehouse. Current process is to load the data from\r\nour OLTP (Oracle), dump it into another instance of Oracle for staging and manipulation, then extract it and load it into Infobright. I am trying\r\nto replace the Oracle instance used for staging and manipulation with Postgres. Kettle (PDI), a Java ETL tool, is used for this process.\r\n\r\nCame across a problem I find perplexing. I recreated the dimensional tables in Oracle and the fields that are integers in Oracle became integers\r\nin Postgres. Was experiencing terrible performance during the load and narrowed down to a particular dimensional lookup problem. The table\r\ndim_carrier holds about 80k rows. You can see the actual query issued by Kettle below, but basically I am looking up using the business key from\r\nour OLTP system. This field is carrier_source_id and is indexed as you can see below. If I change this field from an integer to a real, I get\r\nabout a 70x increase in performance of the query. The EXPLAIN ANALYZE output is nearly identical, except for the casting of 1 to a real when the column\r\nis a real. In real life, this query is actually bound and parameterized, but I wished to simplify things a bit here (and don't yet know how to EXPLAIN ANALYZE a parameterized\r\nquery). Now in terms of actual performance, the same query executed about 25k times takes 7 seconds with the real column, and 500 seconds with the integer column.\r\n\r\nWhat gives here? Seems like integer (or serial) is a pretty common choice for primary key columns, and therefore what I'm experiencing must be an anomoly.\r\n\r\n\r\n\r\n Table \"hits_olap.dim_carrier\"\r\n Column | Type | Modifiers\r\n-------------------+-----------------------------+-----------\r\n carrier_id | integer | not null\r\n dim_version | smallint |\r\n dim_effect_date | timestamp without time zone |\r\n dim_expire_date | timestamp without time zone |\r\n carrier_source_id | integer |\r\n carrier_name | character varying(30) |\r\n carrier_type | character varying(30) |\r\n carrier_scac | character varying(4) |\r\n carrier_currency | character varying(3) |\r\n current_row | smallint | default 0\r\nIndexes:\r\n \"dim_carrier_pk\" PRIMARY KEY, btree (carrier_id)\r\n \"idx_dim_carrier_lookup\" btree (carrier_source_id)\r\n\r\nVACUUM\r\nANALYZE\r\nREINDEX\r\n\r\n EXPLAIN ANALYZE SELECT CARRIER_ID, DIM_VERSION FROM HITS_OLAP.DIM_CARRIER WHERE CARRIER_SOURCE_ID = '1' AND now() >= DIM_EFFECT_DATE\r\n AND now() < DIM_EXPIRE_DATE;\r\n\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------\r\n Index Scan using idx_dim_carrier_lookup on dim_carrier (cost=0.00..12.10 rows=2 width=6) (actual time=0.076..0.077 rows=1 loops=1)\r\n Index Cond: (carrier_source_id = 1)\r\n Filter: ((now() >= dim_effect_date) AND (now() < dim_expire_date)) Total runtime: 0.108 ms\r\n(4 rows)\r\n\r\nALTER TABLE\r\nALTER TABLE\r\n Table \"hits_olap.dim_carrier\"\r\n Column | Type | Modifiers\r\n-------------------+-----------------------------+-----------\r\n carrier_id | integer | not null\r\n dim_version | smallint |\r\n dim_effect_date | timestamp without time zone |\r\n dim_expire_date | timestamp without time zone |\r\n carrier_source_id | real |\r\n carrier_name | character varying(30) |\r\n carrier_type | character varying(30) |\r\n carrier_scac | character varying(4) |\r\n 
carrier_currency | character varying(3) |\r\n current_row | smallint | default 0\r\nIndexes:\r\n \"dim_carrier_pk\" PRIMARY KEY, btree (carrier_id)\r\n \"idx_dim_carrier_lookup\" btree (carrier_source_id)\r\n\r\nVACUUM\r\nANALYZE\r\nREINDEX\r\n\r\n EXPLAIN ANALYZE SELECT CARRIER_ID, DIM_VERSION FROM HITS_OLAP.DIM_CARRIER WHERE CARRIER_SOURCE_ID = '1' AND now() >= DIM_EFFECT_DATE\r\n AND now() < DIM_EXPIRE_DATE;\r\n\r\n QUERY PLAN\r\n-------------------------------------------------------------------------------------------------------------------------------------\r\n Index Scan using idx_dim_carrier_lookup on dim_carrier (cost=0.00..12.10 rows=2 width=6) (actual time=0.068..0.069 rows=1 loops=1)\r\n Index Cond: (carrier_source_id = 1::real)\r\n Filter: ((now() >= dim_effect_date) AND (now() < dim_expire_date)) Total runtime: 0.097 ms\r\n(4 rows)\r\n\r\n\r\n\r\nThanks for the help,\r\n\r\nDave Greco\r\n\n\n\n\n\n\n\n\n\n\nNew to Postgres and am prototyping a migration from Oracle\r\nto Postgres 9.0.1 on Linux. Starting with the data warehouse. Current process\r\nis to load the data from\nour OLTP (Oracle), dump it into another instance of Oracle\r\nfor staging and manipulation, then extract it and load it into Infobright. I am\r\ntrying\nto replace the Oracle instance used for staging and\r\nmanipulation with Postgres. Kettle (PDI), a Java ETL tool, is used for this\r\nprocess.\n \nCame across a problem I find perplexing. I recreated the\r\ndimensional tables in Oracle and the fields that are integers in Oracle became\r\nintegers\nin Postgres. Was experiencing terrible performance during\r\nthe load and narrowed down to a particular dimensional lookup problem. The table\ndim_carrier holds about 80k rows. You can see the actual\r\nquery issued by Kettle below, but basically I am looking up using the business\r\nkey from\nour OLTP system. This field is carrier_source_id and is\r\nindexed as you can see below. If I change this field from an integer to a real,\r\nI get\nabout a 70x increase in performance of the query. The\r\nEXPLAIN ANALYZE output is nearly identical, except for the casting of 1 to a\r\nreal when the column\nis a real. In real life, this query is actually bound and\r\nparameterized, but I wished to simplify things a bit here (and don't yet know\r\nhow to EXPLAIN ANALYZE a parameterized\nquery). Now in terms of actual performance, the same query\r\nexecuted about 25k times takes 7 seconds with the real column, and 500 seconds\r\nwith the integer column.\n \nWhat gives here? 
Seems like integer (or serial) is a pretty\r\ncommon choice for primary key columns, and therefore what I'm experiencing must\r\nbe an anomoly.\n \n \n \n                Table \"hits_olap.dim_carrier\"\n      Column       |            Type             | Modifiers\r\n\n-------------------+-----------------------------+-----------\n carrier_id        | integer                     | not null\n dim_version       | smallint                    | \n dim_effect_date   | timestamp without time zone | \n dim_expire_date   | timestamp without time zone | \n carrier_source_id | integer                     | \n carrier_name      | character varying(30)       | \n carrier_type      | character varying(30)       | \n carrier_scac      | character varying(4)        | \n carrier_currency  | character varying(3)        | \n current_row       | smallint                    | default 0\nIndexes:\n    \"dim_carrier_pk\" PRIMARY KEY, btree\r\n(carrier_id)\n    \"idx_dim_carrier_lookup\" btree\r\n(carrier_source_id)\n \nVACUUM\nANALYZE\nREINDEX\n \n EXPLAIN ANALYZE SELECT CARRIER_ID, DIM_VERSION FROM\r\nHITS_OLAP.DIM_CARRIER WHERE CARRIER_SOURCE_ID = '1'  AND now() >=\r\nDIM_EFFECT_DATE\n AND now() < DIM_EXPIRE_DATE;\n \n                                                            \r\nQUERY PLAN                                                              \n-------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_dim_carrier_lookup on dim_carrier \r\n(cost=0.00..12.10 rows=2 width=6) (actual time=0.076..0.077 rows=1 loops=1)\n   Index Cond: (carrier_source_id = 1)\n   Filter: ((now() >= dim_effect_date) AND (now() <\r\ndim_expire_date))  Total runtime: 0.108 ms\n(4 rows)\n \nALTER TABLE\nALTER TABLE\n                Table \"hits_olap.dim_carrier\"\n      Column       |            Type             | Modifiers\r\n\n-------------------+-----------------------------+-----------\n carrier_id        | integer                     | not null\n dim_version       | smallint                    | \n dim_effect_date   | timestamp without time zone | \n dim_expire_date   | timestamp without time zone | \n carrier_source_id | real                        | \n carrier_name      | character varying(30)       | \n carrier_type      | character varying(30)       | \n carrier_scac      | character varying(4)        | \n carrier_currency  | character varying(3)        | \n current_row       | smallint                    | default 0\nIndexes:\n    \"dim_carrier_pk\" PRIMARY KEY, btree\r\n(carrier_id)\n    \"idx_dim_carrier_lookup\" btree\r\n(carrier_source_id)\n \nVACUUM\nANALYZE\nREINDEX\n \n EXPLAIN ANALYZE SELECT CARRIER_ID, DIM_VERSION FROM\r\nHITS_OLAP.DIM_CARRIER WHERE CARRIER_SOURCE_ID = '1'  AND now() >=\r\nDIM_EFFECT_DATE\n AND now() < DIM_EXPIRE_DATE;\n \n                                                             QUERY\r\nPLAN\n-------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using idx_dim_carrier_lookup on dim_carrier  (cost=0.00..12.10\r\nrows=2 width=6) (actual time=0.068..0.069 rows=1 loops=1)\n   Index Cond: (carrier_source_id = 1::real)\n   Filter: ((now() >= dim_effect_date) AND (now() <\r\ndim_expire_date))  Total runtime: 0.097 ms\n(4 rows)\n \n \n \nThanks for the help,\n \nDave Greco", "msg_date": "Wed, 26 Jan 2011 11:31:58 -0800", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "Real 
vs Int performance" }, { "msg_contents": "David Greco <[email protected]> wrote:\n \n> If I change this field from an integer to a real, I get about a\n> 70x increase in performance of the query.\n \n> I wished to simplify things a bit here (and don't yet know how to\n> EXPLAIN ANALYZE a parameterized query).\n \n> carrier_source_id | integer |\n \n> runtime: 0.108 ms\n \n> carrier_source_id | real |\n \n> runtime: 0.097 ms\n \nThis doesn't show the problem, so it's hard to guess the cause. \nPerhaps you can do it with a prepared statement?:\n \nhttp://www.postgresql.org/docs/9.0/interactive/sql-prepare.html\n \nAlso, plans can be completely different based on the number of rows,\nwidth of the rows, distribution of values, etc. You may want to\nselect against the actual tables where you've seen the problem.\n \nOne tip -- if size permits, try to CLUSTER both tables to avoid any\nbloat issues, and VACUUM ANALYZE the tables to ensure that hint bits\nare set and statistics are up to date before running the tests. Run\neach test several times in a row to see what affect caching has on\nthe issue.\n \n-Kevin\n", "msg_date": "Wed, 26 Jan 2011 15:52:51 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance" }, { "msg_contents": "David Greco <[email protected]> writes:\n> Came across a problem I find perplexing. I recreated the dimensional tables in Oracle and the fields that are integers in Oracle became integers\n> in Postgres. Was experiencing terrible performance during the load and narrowed down to a particular dimensional lookup problem. The table\n> dim_carrier holds about 80k rows. You can see the actual query issued by Kettle below, but basically I am looking up using the business key from\n> our OLTP system. This field is carrier_source_id and is indexed as you can see below. If I change this field from an integer to a real, I get\n> about a 70x increase in performance of the query.\n\nThat's really, really hard to believe, given that all else is equal ---\nso I'm betting it isn't. I suspect that what is really happening is\nthat you're passing non-integral comparison constants in your queries.\nFor example, if carrier_id is an integer, then\n\n\tSELECT ... WHERE carrier_id = 42\n\nis indexable, but this isn't:\n\n\tSELECT ... WHERE carrier_id = 42.0\n\nThe latter case however *would* be indexable if carrier_id were float.\n\nThe examples you show fail to show any performance difference at all,\nbut that's probably because you used quoted literals ('42' not 42),\nwhich prevents the parser from deciding that a cross-type comparison\nis demanded.\n\nI believe Oracle handles such things differently, so running into this\ntype of issue during an Oracle port isn't too surprising.\n\n> In real life, this query is actually bound and parameterized,\n\nIn that case, an EXPLAIN using literal constants is next door to useless\nin terms of telling you what will happen in real life. You need to pay\nattention to exactly how the parameterization is done. Again, I'm\nsuspecting a wrong datatype indication.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Jan 2011 17:12:12 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance " }, { "msg_contents": "Right you are. Kettle is turning the number(11) field from Oracle into a BigNumber, which is a decimal. If I cast the field into an Integer in Kettle and keep the field an integer in Postgres, I get good performance. 
Suspect the correct course of action would simply be to make number(11) fields in Oracle numeric(11,0) fields in Postgres.\r\n\r\n\r\n-----Original Message-----\r\nFrom: Tom Lane [mailto:[email protected]] \r\nSent: Wednesday, January 26, 2011 5:12 PM\r\nTo: David Greco\r\nCc: [email protected]\r\nSubject: Re: [PERFORM] Real vs Int performance \r\n\r\nDavid Greco <[email protected]> writes:\r\n> Came across a problem I find perplexing. I recreated the dimensional tables in Oracle and the fields that are integers in Oracle became integers\r\n> in Postgres. Was experiencing terrible performance during the load and narrowed down to a particular dimensional lookup problem. The table\r\n> dim_carrier holds about 80k rows. You can see the actual query issued by Kettle below, but basically I am looking up using the business key from\r\n> our OLTP system. This field is carrier_source_id and is indexed as you can see below. If I change this field from an integer to a real, I get\r\n> about a 70x increase in performance of the query.\r\n\r\nThat's really, really hard to believe, given that all else is equal ---\r\nso I'm betting it isn't. I suspect that what is really happening is\r\nthat you're passing non-integral comparison constants in your queries.\r\nFor example, if carrier_id is an integer, then\r\n\r\n\tSELECT ... WHERE carrier_id = 42\r\n\r\nis indexable, but this isn't:\r\n\r\n\tSELECT ... WHERE carrier_id = 42.0\r\n\r\nThe latter case however *would* be indexable if carrier_id were float.\r\n\r\nThe examples you show fail to show any performance difference at all,\r\nbut that's probably because you used quoted literals ('42' not 42),\r\nwhich prevents the parser from deciding that a cross-type comparison\r\nis demanded.\r\n\r\nI believe Oracle handles such things differently, so running into this\r\ntype of issue during an Oracle port isn't too surprising.\r\n\r\n> In real life, this query is actually bound and parameterized,\r\n\r\nIn that case, an EXPLAIN using literal constants is next door to useless\r\nin terms of telling you what will happen in real life. You need to pay\r\nattention to exactly how the parameterization is done. Again, I'm\r\nsuspecting a wrong datatype indication.\r\n\r\n\t\t\tregards, tom lane\r\n\r\n", "msg_date": "Thu, 27 Jan 2011 05:48:00 -0800", "msg_from": "David Greco <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Real vs Int performance " }, { "msg_contents": "David Greco <[email protected]> writes:\n> Right you are. Kettle is turning the number(11) field from Oracle into\n> a BigNumber, which is a decimal. If I cast the field into an Integer\n> in Kettle and keep the field an integer in Postgres, I get good\n> performance. Suspect the correct course of action would simply be to\n> make number(11) fields in Oracle numeric(11,0) fields in Postgres.\n\nNot if you can persuade the client-side code to output integers as\nintegers. \"numeric\" type is orders of magnitude slower than integers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jan 2011 09:18:11 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance " }, { "msg_contents": "On 01/27/2011 08:18 AM, Tom Lane wrote:\n\n> Not if you can persuade the client-side code to output integers as\n> integers. \"numeric\" type is orders of magnitude slower than integers.\n\nI sadly have to vouch for this. 
My company converted an old Oracle app \nand they changed all their primary keys (and foreign keys, and random \nlarger int fields) to NUMERIC(19)'s. I've convinced them all new stuff \nshould be BIGINT if they need that level of coverage, but the damage is \nalready done.\n\nI'm not sure about orders of magnitude on the storage/index side, but my \ntests gave us a 10% boost if just the keys are switched over to INT or \nBIGINT.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 27 Jan 2011 08:30:15 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance" }, { "msg_contents": "On 1/27/2011 9:30 AM, Shaun Thomas wrote:\n> I'm not sure about orders of magnitude on the storage/index side, but my\n> tests gave us a 10% boost if just the keys are switched over to INT or\n> BIGINT.\n\nWell, it depends on what you're doing. Searching by an integer vs. \nsearching by a text string will probably not make much of a difference. \nHowever, if you are calculating sums or averages, there will be a huge \ndifference.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 10:11:12 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Tom Lane [mailto:[email protected]] \n> Sent: Wednesday, January 26, 2011 5:12 PM\n> To: David Greco\n> Cc: [email protected]\n> Subject: Re: Real vs Int performance \n> \n> David Greco <[email protected]> writes:\n> > Came across a problem I find perplexing. I recreated the \n> dimensional \n> > tables in Oracle and the fields that are integers in Oracle became \n> > integers in Postgres. Was experiencing terrible performance \n> during the \n> > load and narrowed down to a particular dimensional lookup \n> problem. \n> .......................................\n> .......................................\n> .......................................\n> .......................................\n> In real life, this query is actually bound and parameterized,\n> \n> In that case, an EXPLAIN using literal constants is next door \n> to useless in terms of telling you what will happen in real \n> life. You need to pay attention to exactly how the \n> parameterization is done. Again, I'm suspecting a wrong \n> datatype indication.\n> \n> \t\t\tregards, tom lane\n> \n\nTo see what happens with parametrized query in \"real life\" you could try\n\"auto_explain\" contrib module.\n\nRegards,\nIgor Neyman\n", "msg_date": "Thu, 27 Jan 2011 11:17:46 -0500", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Real vs Int performance " } ]
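Kevin's prepared-statement suggestion is also the easiest way to reproduce what Kettle actually sends, and it makes Tom's cross-type point visible: declare the parameter once as numeric (which is what a BigNumber binding amounts to) and once as integer, then compare the plans. A sketch against the dim_carrier table from the original post; the statement names are arbitrary:

    PREPARE lookup_num(numeric) AS
      SELECT carrier_id, dim_version
        FROM hits_olap.dim_carrier
       WHERE carrier_source_id = $1
         AND now() >= dim_effect_date
         AND now() <  dim_expire_date;

    PREPARE lookup_int(integer) AS
      SELECT carrier_id, dim_version
        FROM hits_olap.dim_carrier
       WHERE carrier_source_id = $1
         AND now() >= dim_effect_date
         AND now() <  dim_expire_date;

    -- integer column compared to a numeric parameter: expect no use of idx_dim_carrier_lookup
    EXPLAIN ANALYZE EXECUTE lookup_num(1);
    -- integer compared to integer: expect the index scan shown in the original post
    EXPLAIN ANALYZE EXECUTE lookup_int(1);

    DEALLOCATE lookup_num;
    DEALLOCATE lookup_int;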
[ { "msg_contents": "Hi all,\n\nwe are running a fairly big Ruby on Rails application on Postgres 8.4.\nOur traffic grew quite a bit lately, and since then we are facing DB\nperformance issues. System load occasionally explodes (around 170\nyesterday on a 16 core system), which seems to be caused by disk I/O\n(iowait in our Munin graphs goes up significantly during these\nperiods). At other times the laod stays rather low under pretty much\nthe same circumstances.\n\nThere are 6 application servers with 18 unicorns each, as well as 12\nbeanstalk workers talking to the DB. I know the problem description is\nvery vague, but so far we haven't consistently managed to reproduce\nthe problem. Turning of the beanstalk workers usually leads to a great\ndecreases in writes and system loads, but during yesterday's debugging\nsession they obviously ran fine (thanks, Murphy).\n\nBelow you'll find our system information and Postgres config, maybe\nsomeone could be so kind as to point out any obvious flaws in our\ncurrent configuration while I'm trying to get a better description of\nthe underlying problem.\n\nPostgres version: 8.4.6\n\nNumber of logical CPUs: 16 (4x Quadcore Xeon E5520 @ 2.27GHz)\n\nRAM: 16GB\n\n total used free shared buffers cached\nMem: 16461012 16399520 61492 0 72392 12546112\n-/+ buffers/cache: 3781016 12679996\nSwap: 999992 195336 804656\n\nHDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n\nConcurrent connections (according to our monitoring tool): 7 (min), 74\n(avg), 197 (max)\n\nOur config (all other settings at default value):\n\nmax_connections = 200\t\t\t\nssl = true\t\t\t\t\nshared_buffers = 4096MB\t\t\t\nwork_mem = 256MB\t\t\t\t\nmaintenance_work_mem = 512MB\t\t\nsynchronous_commit = off\t\nwal_buffers = 8MB\t\ncheckpoint_segments = 30\t\t\ncheckpoint_timeout = 15min\t\t\ncheckpoint_completion_target = 0.9\t\nrandom_page_cost = 2.0\t\t\t\neffective_cache_size = 8192MB\nlogging_collector = on\t\t\nlog_directory = '/var/log/postgresql'\t\t\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\t\nlog_min_duration_statement = 1000\t\nlog_connections = on\nlog_disconnections = on\nlog_line_prefix = '%t '\t\t\t\ndatestyle = 'iso, mdy'\ngin_fuzzy_search_limit = 10000\n\nThe config options are a mix of the article \"Configuring PostgreSQL\nfor Pretty Good Performance\" [1] and the talk \"PostgreSQL as a secret\nweapon for high-performance Ruby on Rails applications\" [2].\n\nThanks,\nMichael\n\n[1] http://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\n[2] http://www.pgcon.org/2010/schedule/events/210.en.html\n", "msg_date": "Thu, 27 Jan 2011 11:31:13 +0100", "msg_from": "Michael Kohl <[email protected]>", "msg_from_op": true, "msg_subject": "High load," }, { "msg_contents": "2011/1/27 Michael Kohl <[email protected]>:\n> Hi all,\n>\n> we are running a fairly big Ruby on Rails application on Postgres 8.4.\n> Our traffic grew quite a bit lately, and since then we are facing DB\n> performance issues. System load occasionally explodes (around 170\n> yesterday on a 16 core system), which seems to be caused by disk I/O\n> (iowait in our Munin graphs goes up significantly during these\n> periods). At other times the laod stays rather low under pretty much\n> the same circumstances.\n>\n> There are 6 application servers with 18 unicorns each, as well as 12\n> beanstalk workers talking to the DB. I know the problem description is\n> very vague, but so far we haven't consistently managed to reproduce\n> the problem. 
Turning of the beanstalk workers usually leads to a great\n> decreases in writes and system loads, but during yesterday's debugging\n> session they obviously ran fine (thanks, Murphy).\n>\n> Below you'll find our system information and Postgres config, maybe\n> someone could be so kind as to point out any obvious flaws in our\n> current configuration while I'm trying to get a better description of\n> the underlying problem.\n>\n> Postgres version: 8.4.6\n>\n> Number of logical CPUs: 16 (4x Quadcore Xeon E5520  @ 2.27GHz)\n>\n> RAM: 16GB\n>\n>             total       used       free     shared    buffers     cached\n> Mem:      16461012   16399520      61492          0      72392   12546112\n> -/+ buffers/cache:    3781016   12679996\n> Swap:       999992     195336     804656\n\nyou have swap used, IO on the swap partition ?\ncan you paste the /proc/meminfo ?\nAlso turn on log_checkpoint if it is not already and check the\nduration to write the data.\n\nYou didn't said the DB size (and size of active part of it), it would help here.\n\n>\n> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n>\n> Concurrent connections (according to our monitoring tool): 7 (min), 74\n> (avg), 197 (max)\n>\n> Our config (all other settings at default value):\n>\n> max_connections = 200\n> ssl = true\n> shared_buffers = 4096MB\n> work_mem = 256MB\n\nit is too much with 200 connections. you may experiment case where you\ntry to use more than the memory available.\nsee http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n--> work_mem maintainance_work_mem\n\n> maintenance_work_mem = 512MB\n\n128MB is usualy enough\n\n> synchronous_commit = off\n> wal_buffers = 8MB\n\n16MB should work well\n\n> checkpoint_segments = 30\n> checkpoint_timeout = 15min\n> checkpoint_completion_target = 0.9\n> random_page_cost = 2.0\n> effective_cache_size = 8192MB\n\n12-14GB looks better\n\n> logging_collector = on\n> log_directory = '/var/log/postgresql'\n> log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'\n> log_min_duration_statement = 1000\n> log_connections = on\n> log_disconnections = on\n> log_line_prefix = '%t '\n> datestyle = 'iso, mdy'\n> gin_fuzzy_search_limit = 10000\n\nyou use full_text_search ?\n\n>\n> The config options are a mix of the article \"Configuring PostgreSQL\n> for Pretty Good Performance\" [1] and the talk \"PostgreSQL as a secret\n> weapon for high-performance Ruby on Rails applications\" [2].\n\ndo you monitor the 'locks' ? 
and the commit/rollbacks ?\n\n>\n> Thanks,\n> Michael\n>\n> [1] http://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\n> [2] http://www.pgcon.org/2010/schedule/events/210.en.html\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 27 Jan 2011 12:24:10 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "Cédric, thanks a lot for your answer so far!\n\nOn Thu, Jan 27, 2011 at 12:24 PM, Cédric Villemain\n<[email protected]> wrote:\n\n> you have swap used, IO on the swap partition ?\n\nMemory-wise we are fine.\n\n> can you paste the /proc/meminfo ?\n\nSure:\n\n# cat /proc/meminfo\nMemTotal: 16461012 kB\nMemFree: 280440 kB\nBuffers: 60984 kB\nCached: 13757080 kB\nSwapCached: 6112 kB\nActive: 7049744 kB\nInactive: 7716308 kB\nActive(anon): 2743696 kB\nInactive(anon): 2498056 kB\nActive(file): 4306048 kB\nInactive(file): 5218252 kB\nUnevictable: 0 kB\nMlocked: 0 kB\nSwapTotal: 999992 kB\nSwapFree: 989496 kB\nDirty: 3500 kB\nWriteback: 0 kB\nAnonPages: 943752 kB\nMapped: 4114916 kB\nShmem: 4293312 kB\nSlab: 247036 kB\nSReclaimable: 212788 kB\nSUnreclaim: 34248 kB\nKernelStack: 3144 kB\nPageTables: 832768 kB\nNFS_Unstable: 0 kB\nBounce: 0 kB\nWritebackTmp: 0 kB\nCommitLimit: 9230496 kB\nCommitted_AS: 5651528 kB\nVmallocTotal: 34359738367 kB\nVmallocUsed: 51060 kB\nVmallocChunk: 34350787468 kB\nHardwareCorrupted: 0 kB\nHugePages_Total: 0\nHugePages_Free: 0\nHugePages_Rsvd: 0\nHugePages_Surp: 0\nHugepagesize: 2048 kB\nDirectMap4k: 7936 kB\nDirectMap2M: 16760832 kB\n\n> Also turn on log_checkpoint if it is not already and check the\n> duration to write the data.\n\nWill do, thanks!\n\n> You didn't said the DB size (and size of active part of it), it would help here.\n\n=> select pg_size_pretty(pg_database_size('xxx'));\n pg_size_pretty\n----------------\n 32 GB\n(1 row)\n\n> it is too much with 200 connections. you may experiment case where you\n> try to use more than the memory available.\n\nSo far memory never really was a problem, but I'll keep these\nsuggestions in mind.\n\n> 16MB should work well\n\nWe already thought of increasing that, will do so now.\n\n>> effective_cache_size = 8192MB\n>\n> 12-14GB looks better\n\nThank you, I was rather unsure on this on.\n\n> you use full_text_search ?\n\nNot anymore, probably a leftover.\n\n> do you monitor the 'locks' ? 
and the commit/rollbacks  ?\n\nNo, but I'll look into doing that.\n\nThanks a lot for the feedback again,\nMichael\n", "msg_date": "Thu, 27 Jan 2011 12:36:45 +0100", "msg_from": "Michael Kohl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load," }, { "msg_contents": "2011/1/27 Michael Kohl <[email protected]>:\n> Cédric, thanks a lot for your answer so far!\n>\n> On Thu, Jan 27, 2011 at 12:24 PM, Cédric Villemain\n> <[email protected]> wrote:\n>\n>> you have swap used, IO on the swap partition ?\n>\n> Memory-wise we are fine.\n>\n>> can you paste the /proc/meminfo ?\n>\n> Sure:\n>\n> # cat /proc/meminfo\n> MemTotal:       16461012 kB\n> MemFree:          280440 kB\n> Buffers:           60984 kB\n> Cached:         13757080 kB\n> SwapCached:         6112 kB\n> Active:          7049744 kB\n> Inactive:        7716308 kB\n> Active(anon):    2743696 kB\n> Inactive(anon):  2498056 kB\n> Active(file):    4306048 kB\n> Inactive(file):  5218252 kB\n> Unevictable:           0 kB\n> Mlocked:               0 kB\n> SwapTotal:        999992 kB\n> SwapFree:         989496 kB\n> Dirty:              3500 kB\n> Writeback:             0 kB\n> AnonPages:        943752 kB\n> Mapped:          4114916 kB\n> Shmem:           4293312 kB\n> Slab:             247036 kB\n> SReclaimable:     212788 kB\n> SUnreclaim:        34248 kB\n> KernelStack:        3144 kB\n> PageTables:       832768 kB\n> NFS_Unstable:          0 kB\n> Bounce:                0 kB\n> WritebackTmp:          0 kB\n> CommitLimit:     9230496 kB\n\nthe commitlimit looks to low, it is because your swap partition is small.\n\nYou need to either enlarge the swap partition, or change the\nvm.overcommit_ratio if you want to be able to use more of your mermory\n sanely.\n(\nsee kernel/Documentation/filesystems/proc.txt for the explanations on\nthe formula :\nCommitLimit = ('vm.overcommit_ratio' * Physical RAM) + Swap\n)\n\n> Committed_AS:    5651528 kB\n\nthis is way under CommitLimit so you are good. (it is rare to be\nlimited by that anyway, and your perf issues are not relative to that)\n\n> VmallocTotal:   34359738367 kB\n> VmallocUsed:       51060 kB\n> VmallocChunk:   34350787468 kB\n> HardwareCorrupted:     0 kB\n> HugePages_Total:       0\n> HugePages_Free:        0\n> HugePages_Rsvd:        0\n> HugePages_Surp:        0\n> Hugepagesize:       2048 kB\n> DirectMap4k:        7936 kB\n> DirectMap2M:    16760832 kB\n>\n>> Also turn on log_checkpoint if it is not already and check the\n>> duration to write the data.\n>\n> Will do, thanks!\n>\n>> You didn't said the DB size (and size of active part of it), it would help here.\n>\n> => select pg_size_pretty(pg_database_size('xxx'));\n>  pg_size_pretty\n> ----------------\n>  32 GB\n> (1 row)\n>\n>> it is too much with 200 connections. you may experiment case where you\n>> try to use more than the memory available.\n>\n> So far memory never really was a problem, but I'll keep these\n> suggestions in mind.\n>\n>> 16MB should work well\n>\n> We already thought of increasing that, will do so now.\n>\n>>> effective_cache_size = 8192MB\n>>\n>> 12-14GB looks better\n>\n> Thank you, I was rather unsure on this on.\n>\n>> you use full_text_search ?\n>\n> Not anymore, probably a leftover.\n>\n>> do you monitor the 'locks' ? 
and the commit/rollbacks  ?\n>\n> No, but I'll look into doing that.\n\nIt may help to find what is the issue.\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 27 Jan 2011 12:58:02 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "> Number of logical CPUs: 16 (4x Quadcore Xeon E5520  @ 2.27GHz)\n> RAM: 16GB\n> Concurrent connections (according to our monitoring tool): 7 (min), 74\n> (avg), 197 (max)\n\nYour current issue may be IO wait, but a connection pool isn't far off\nin your future either.\n\n> max_connections = 200\n> work_mem = 256MB\n\nThat is a foot-gun waiting to go off. If 32 queries manage to\nsimultaneously each need 256MB to sort, your cache is blown out and\nthe server is out of RAM. If your application is like most, you need a\nhuge work_mem for, maybe, 1% of your queries. You can request it high\non a per connection/per query basis for the queries that need it, and\nset the default to a low, safe figure.\n\n> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n> random_page_cost = 2.0\nI thought these drives were a lot better at random IO than this gives\nthem credit for. The are certainly no better at sequential IO than\n(good) conventional drives. You might have a lot of room to turn this\ndown even smaller.\n", "msg_date": "Thu, 27 Jan 2011 07:30:54 -0500", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thursday, January 27, 2011 12:24:10 PM Cédric Villemain wrote:\n> > maintenance_work_mem = 512MB\n> 128MB is usualy enough\nUhm, I don't want to be picky, but thats not really my experience. Sorts for \nindex creation are highly dependent on a high m_w_m. Quite regularly I find the \nexisting 1GB limit a probleme here...\n\nAndres\n", "msg_date": "Thu, 27 Jan 2011 13:35:24 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 1:30 PM, Justin Pitts <[email protected]> wrote:\n> That is a foot-gun waiting to go off.\n\nThanks, I had already changed this after Cedric's mail.\n\n>> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n>> random_page_cost = 2.0\n> I thought these drives were a lot better at random IO than this gives\n> them credit for.\n\nI'll look into that.\n\nThanks a lot,\nMichael\n", "msg_date": "Thu, 27 Jan 2011 13:56:35 +0100", "msg_from": "Michael Kohl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load," }, { "msg_contents": "2011/1/27 Andres Freund <[email protected]>:\n> On Thursday, January 27, 2011 12:24:10 PM Cédric Villemain wrote:\n>> > maintenance_work_mem = 512MB\n>> 128MB is usualy enough\n> Uhm, I don't want to be picky, but thats not really my experience. Sorts for\n> index creation are highly dependent on a high m_w_m. Quite regularly I find the\n> existing 1GB limit a probleme here...\n\nThat is right for index creation, but not for 'pure' maintenance\nstuff. 
Once the database is running as usual, there is no really point\nto give auto-vacuum or auto-analyze much more (depend on the raid card\nmemory too ...)\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Thu, 27 Jan 2011 14:23:48 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thursday, January 27, 2011 02:23:48 PM Cédric Villemain wrote:\n> 2011/1/27 Andres Freund <[email protected]>:\n> > On Thursday, January 27, 2011 12:24:10 PM Cédric Villemain wrote:\n> >> > maintenance_work_mem = 512MB\n> >> \n> >> 128MB is usualy enough\n> > \n> > Uhm, I don't want to be picky, but thats not really my experience. Sorts\n> > for index creation are highly dependent on a high m_w_m. Quite regularly\n> > I find the existing 1GB limit a probleme here...\n> \n> That is right for index creation, but not for 'pure' maintenance\n> stuff. Once the database is running as usual, there is no really point\n> to give auto-vacuum or auto-analyze much more (depend on the raid card\n> memory too ...)\nEven that I cannot agree with, sorry ;-). If you have a database with much \nchurn a high m_w_m helps to avoid multiple scans during vacuum of the database \nbecause the amount of dead tuples doesn't fit m_w_m.\n\nAndres\n", "msg_date": "Thu, 27 Jan 2011 14:26:38 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On 1/27/2011 4:31 AM, Michael Kohl wrote:\n> Hi all,\n>\n> we are running a fairly big Ruby on Rails application on Postgres 8.4.\n> Our traffic grew quite a bit lately, and since then we are facing DB\n> performance issues. System load occasionally explodes (around 170\n> yesterday on a 16 core system), which seems to be caused by disk I/O\n> (iowait in our Munin graphs goes up significantly during these\n> periods). At other times the laod stays rather low under pretty much\n> the same circumstances.\n>\n> There are 6 application servers with 18 unicorns each, as well as 12\n> beanstalk workers talking to the DB. I know the problem description is\n> very vague, but so far we haven't consistently managed to reproduce\n> the problem. Turning of the beanstalk workers usually leads to a great\n> decreases in writes and system loads, but during yesterday's debugging\n> session they obviously ran fine (thanks, Murphy).\n>\n> Below you'll find our system information and Postgres config, maybe\n> someone could be so kind as to point out any obvious flaws in our\n> current configuration while I'm trying to get a better description of\n> the underlying problem.\n>\n<SNIP>\n\nIf the suggestions below are not enough, you might have to check some of \nyour sql statements and make sure they are all behaving. 
You may not \nnotice a table scan when the user count is low, but you will when it \ngets higher.\n\nHave you run each of your queries through explain analyze lately?\n\nHave you checked for bloat?\n\nYou are vacuuming/autovacuuming, correct?\n\n-Andy\n", "msg_date": "Thu, 27 Jan 2011 09:06:38 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson <[email protected]> wrote:\n> Have you run each of your queries through explain analyze lately?\n\nA code review including checking of queries is on our agenda.\n\n> You are vacuuming/autovacuuming, correct?\n\nSure :-)\n\nThank you,\nMichael\n", "msg_date": "Thu, 27 Jan 2011 16:09:16 +0100", "msg_from": "Michael Kohl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 8:09 AM, Michael Kohl <[email protected]> wrote:\n> On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson <[email protected]> wrote:\n>> Have you run each of your queries through explain analyze lately?\n>\n> A code review including checking of queries is on our agenda.\n\nA good method to start is to log long running queries and then explain\nanalyze just them.\n", "msg_date": "Thu, 27 Jan 2011 10:05:23 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On 1/27/2011 9:09 AM, Michael Kohl wrote:\n> On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson<[email protected]> wrote:\n>> Have you run each of your queries through explain analyze lately?\n>\n> A code review including checking of queries is on our agenda.\n>\n>> You are vacuuming/autovacuuming, correct?\n>\n> Sure :-)\n>\n> Thank you,\n> Michael\n>\n\nOh, also, when the box is really busy, have you watched vmstat to see if \nyou start swapping?\n\n-Andy\n", "msg_date": "Thu, 27 Jan 2011 11:20:18 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "* Michael Kohl ([email protected]) wrote:\n> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n\nI'm amazed no one else has mentioned this yet, but you should look into\nsplitting your data and your WALs. Obviously, having another set of\nSSDs to put your WALs on would be ideal.\n\nYou should probably also be looking into adjustments to the background\nwriter. It sounds like you're getting hit by large checkpoint i/o\n(if you turn on logging of that, as someone else suggested, you'll be\nable to corrollate the times), which can be helped by increasing the\namount of writing done between checkpoints, so that the checkpoints\naren't as big and painful. That can be done by making the background\nwriter more aggressive.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 27 Jan 2011 12:54:22 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 10:54 AM, Stephen Frost <[email protected]> wrote:\n> * Michael Kohl ([email protected]) wrote:\n>> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n>\n> I'm amazed no one else has mentioned this yet, but you should look into\n> splitting your data and your WALs.  Obviously, having another set of\n> SSDs to put your WALs on would be ideal.\n\nActually spinning media would be a better choice. A pair of fast\n15krpm drives in a mirror will almost always outrun an SSD for\nsequential write speed. 
Even meh-grade 7200RPM SATA drives will win.\n\n> You should probably also be looking into adjustments to the background\n> writer.  It sounds like you're getting hit by large checkpoint i/o\n> (if you turn on logging of that, as someone else suggested, you'll be\n> able to corrollate the times), which can be helped by increasing the\n> amount of writing done between checkpoints, so that the checkpoints\n> aren't as big and painful.  That can be done by making the background\n> writer more aggressive.\n\nThis++. Increasing checkpoint segments can make a huge difference.\nWe run 64 segments in production and it's a world of difference from\nthe stock setting.\n", "msg_date": "Thu, 27 Jan 2011 11:13:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 10:20 AM, Andy Colson <[email protected]> wrote:\n> On 1/27/2011 9:09 AM, Michael Kohl wrote:\n>>\n>> On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson<[email protected]>  wrote:\n>>>\n>>> Have you run each of your queries through explain analyze lately?\n>>\n>> A code review including checking of queries is on our agenda.\n>>\n>>> You are vacuuming/autovacuuming, correct?\n>>\n>> Sure :-)\n>>\n>> Thank you,\n>> Michael\n>>\n>\n> Oh, also, when the box is really busy, have you watched vmstat to see if you\n> start swapping?\n\nSetting sysstat service to run so you can see what your disks were\ndoing in the last 7 days is useful too. Makes it much easier to\nfigure things out afterwards when you have history of what has been\nhappening.\n", "msg_date": "Thu, 27 Jan 2011 11:14:32 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thursday, January 27, 2011 07:13:17 PM Scott Marlowe wrote:\n> On Thu, Jan 27, 2011 at 10:54 AM, Stephen Frost <[email protected]> wrote:\n> > * Michael Kohl ([email protected]) wrote:\n> >> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n> > \n> > I'm amazed no one else has mentioned this yet, but you should look into\n> > splitting your data and your WALs. Obviously, having another set of\n> > SSDs to put your WALs on would be ideal.\n> \n> Actually spinning media would be a better choice. A pair of fast\n> 15krpm drives in a mirror will almost always outrun an SSD for\n> sequential write speed. Even meh-grade 7200RPM SATA drives will win.\nUnless he is bulk loading or running with synchronous_commit=off sequential \nspeed wont be the limit for WAL. The number of syncs will be the limit.\n\nAndres\n", "msg_date": "Thu, 27 Jan 2011 19:19:35 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 6:05 PM, Scott Marlowe <[email protected]> wrote:\n> A good method to start is to log long running queries and then explain\n> analyze just them.\n\nWe are already doing the logging part, we are just a bit behind on the\n\"explain analyze\" part of things. One day soon...\n\nThanks,\nMichael\n", "msg_date": "Fri, 28 Jan 2011 09:46:29 +0100", "msg_from": "Michael Kohl <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load," }, { "msg_contents": "Michael Kohl wrote:\n> We are already doing the logging part, we are just a bit behind on the\n> \"explain analyze\" part of things. One day soon...\n>\n> \nThere is, of course, the auto_explain module which will do that for you.\n\n-- \nMladen Gogala \nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Fri, 28 Jan 2011 08:17:24 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On 27/01/2011 11:31, Michael Kohl wrote:\n> Hi all,\n>\n> we are running a fairly big Ruby on Rails application on Postgres 8.4.\n> Our traffic grew quite a bit lately, and since then we are facing DB\n> performance issues. System load occasionally explodes (around 170\n> yesterday on a 16 core system), which seems to be caused by disk I/O\n> (iowait in our Munin graphs goes up significantly during these\n> periods). At other times the laod stays rather low under pretty much\n> the same circumstances.\n\nIs there any way you can moderate the number of total active connections \nto the database to approximately match the number of (logical) CPU cores \non your system? I.e. some kind of connection pool or connection \nlimiting? This should help you in more ways than one (limit PG lock \ncontention, limit parallel disk IO).\n\n\n", "msg_date": "Fri, 28 Jan 2011 15:06:07 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 5:31 AM, Michael Kohl <[email protected]> wrote:\n> we are running a fairly big Ruby on Rails application on Postgres 8.4.\n> Our traffic grew quite a bit lately, and since then we are facing DB\n> performance issues. System load occasionally explodes (around 170\n> yesterday on a 16 core system), which seems to be caused by disk I/O\n> (iowait in our Munin graphs goes up significantly during these\n> periods). At other times the laod stays rather low under pretty much\n> the same circumstances.\n[...]\n> [1] http://www.linux.com/learn/tutorials/394523-configuring-postgresql-for-pretty-good-performance\n> [2] http://www.pgcon.org/2010/schedule/events/210.en.html\n\nAt the risk of shameless self-promotion, you might also find this helpful:\n\nhttp://rhaas.blogspot.com/2010/12/troubleshooting-database.html\n\nIt's fairly basic but it might at least get you pointed in the right\ndirection...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 3 Feb 2011 13:04:03 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "Michael Kohl wrote:\n> HDD: 2x 120 GB OCZ Vertex 2 SSD; RAID 1\n> \n\nAs a general warning here, as far as I know the regular Vertex 2 SSD \ndoesn't cache writes properly for database use. It's possible to have a \ncrash that leaves the database corrupted, if the drive has writes queued \nup in its cache. The Vertex 2 Pro resolves this issue with a supercap, \nyou may have a model with concerns here. See \nhttp://wiki.postgresql.org/wiki/Reliable_Writes for more information.\n\nIn addition to the log_checkpoints suggestion already made, I'd also \nrecommend turning on log_lock_waits and log_temp_files on your server. \nAll three of those--checkpoints, locks, and unexpected temp file \nuse--can cause the sort of issue you're seeing. 
Well, not locks so much \ngiven you're seeing heavy disk I/O, but it's good to start logging those \nissues before they get bad, too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 07 Feb 2011 03:31:52 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," }, { "msg_contents": "On Thu, Jan 27, 2011 at 6:36 AM, Michael Kohl <[email protected]> wrote:\n> Cédric, thanks a lot for your answer so far!\n>\n> On Thu, Jan 27, 2011 at 12:24 PM, Cédric Villemain\n> <[email protected]> wrote:\n>\n>> you have swap used, IO on the swap partition ?\n>\n> Memory-wise we are fine.\n>\n>> can you paste the /proc/meminfo ?\n>\n> Sure:\n>\n> # cat /proc/meminfo\n> MemTotal:       16461012 kB\n> MemFree:          280440 kB\n> Buffers:           60984 kB\n> Cached:         13757080 kB\n> SwapCached:         6112 kB\n> Active:          7049744 kB\n> Inactive:        7716308 kB\n> Active(anon):    2743696 kB\n> Inactive(anon):  2498056 kB\n> Active(file):    4306048 kB\n> Inactive(file):  5218252 kB\n> Unevictable:           0 kB\n> Mlocked:               0 kB\n> SwapTotal:        999992 kB\n> SwapFree:         989496 kB\n> Dirty:              3500 kB\n> Writeback:             0 kB\n> AnonPages:        943752 kB\n> Mapped:          4114916 kB\n> Shmem:           4293312 kB\n> Slab:             247036 kB\n> SReclaimable:     212788 kB\n> SUnreclaim:        34248 kB\n> KernelStack:        3144 kB\n> PageTables:       832768 kB\n> NFS_Unstable:          0 kB\n> Bounce:                0 kB\n> WritebackTmp:          0 kB\n> CommitLimit:     9230496 kB\n> Committed_AS:    5651528 kB\n> VmallocTotal:   34359738367 kB\n> VmallocUsed:       51060 kB\n> VmallocChunk:   34350787468 kB\n> HardwareCorrupted:     0 kB\n> HugePages_Total:       0\n> HugePages_Free:        0\n> HugePages_Rsvd:        0\n> HugePages_Surp:        0\n> Hugepagesize:       2048 kB\n> DirectMap4k:        7936 kB\n> DirectMap2M:    16760832 kB\n>\n>> Also turn on log_checkpoint if it is not already and check the\n>> duration to write the data.\n>\n> Will do, thanks!\n>\n>> You didn't said the DB size (and size of active part of it), it would help here.\n>\n> => select pg_size_pretty(pg_database_size('xxx'));\n>  pg_size_pretty\n> ----------------\n>  32 GB\n> (1 row)\n>\n\n\nHere I am still a big fan of setting\nshared_buffers=8GB\n\nfor dbsize of 32GB that is a 25% in bufferpool ration\neffective cache size then will be more like 8GB.\n\nThe only time this will hurt is you have more sequential access than\nrandom which wont be populated in the shared_buffer but chances of\nthat being the problem is lowered with your random_page_cost set to\n2.0 or lower.\n\nAlso I am a big fan of separating the WAL and data separately which\ngives two advantages and monitoring the IO that way so you know where\nyour IO are coming from.. WAL or DATA and then further tuning can be\ndone according to what you see.\n\nAlso SSDs sometimes have trouble with varying sizes of WAL writes so\nresponse times for WAL writes varies quite a bit and can confuse SSDs.\n\n-Jignesh\n\n\n>> it is too much with 200 connections. 
you may experiment case where you\n>> try to use more than the memory available.\n>\n> So far memory never really was a problem, but I'll keep these\n> suggestions in mind.\n>\n>> 16MB should work well\n>\n> We already thought of increasing that, will do so now.\n>\n>>> effective_cache_size = 8192MB\n>>\n>> 12-14GB looks better\n>\n> Thank you, I was rather unsure on this on.\n>\n>> you use full_text_search ?\n>\n> Not anymore, probably a leftover.\n>\n>> do you monitor the 'locks' ? and the commit/rollbacks  ?\n>\n> No, but I'll look into doing that.\n>\n> Thanks a lot for the feedback again,\n> Michael\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Fri, 11 Feb 2011 12:04:59 -0500", "msg_from": "Jignesh Shah <[email protected]>", "msg_from_op": false, "msg_subject": "Re: High load," } ]
[ { "msg_contents": "Hi,\n\nYesterday I perform a crash test, but I lost the last pg_xlog file.\nLet me explain:\n\nFirst I did the pg_start_backup\nSecond I copied the phisical files to the slave server\nThird I did the pg_stop_backup\n\nSo I run a test,\n\n1) In the master server I created a 100.000 records table\n2) I Runned the \"checkpoint\" command to be sure the table will be saved.\n3) I check the pg_xlog, there was created 6 logs, but in the archive \nthere was only 5.\n4) I copied the achives files to the slave server and I created a \nrecovery.conf and started the slave postgres.\n5) Postgres recovered the 5 log files correctly, but the new table did \nnot came to the slave. I think because the last pg_xlog file was not \narchived.\n\nI did not understand why the last file wasnt archieved?\nThere is a command to be sure all pg_xlog is archived? or I need to copy \nthe last file to the slave before recovery?\n\nHow is the correctly way to perform a recovery in this case?\n\nThanks,\n\nWaldomiro Caraiani\n\n", "msg_date": "Thu, 27 Jan 2011 10:10:01 -0200", "msg_from": "Waldomiro <[email protected]>", "msg_from_op": true, "msg_subject": "Why I lost the last pg_xlog file?" }, { "msg_contents": "On 01/27/2011 06:10 AM, Waldomiro wrote:\n\n> 3) I check the pg_xlog, there was created 6 logs, but in the archive\n> there was only 5.\n\nxlogs are only \"archived\" when they'd normally be deleted. If you have \nreally high data turnover or very frequent checkpoints, that effectively \nhappens constantly. I'm not sure where the cutoff is, but there's a \ncertain amount of \"reserve\" xlog space based on your checkpoint_segments \nsetting.\n\nIt is an archive, after all. pg_start_backup/pg_stop_backup ensure your \nbackup is consistent, nothing else. Depending on your archives to \ncapture everything isn't going to work. If you really want everything, \nyou can either copy the xlogs manually (not safe) or initiate another \nbackup. You can kinda fudge it doing this:\n\n1. Call pg_current_xlog_location to get the current xlog.\n2. Call pg_switch_xlog to force a new xlog.\n3. Copy the file from step 1 and anything older than it to your \narchive/slave. Doing this *may* confuse the built-in archive system if \nyour archive_command is too strict.\n4. Profit.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 27 Jan 2011 08:17:23 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why I lost the last pg_xlog file?" }, { "msg_contents": "Waldomiro <[email protected]> wrote:\n \n> Yesterday I perform a crash test, but I lost the last pg_xlog\n> file.\n \nDid you follow the steps laid out in the documentation?:\n \nhttp://www.postgresql.org/docs/current/interactive/continuous-archiving.html#BACKUP-PITR-RECOVERY\n \nIn particular, I'm wondering if you followed steps 2 and 6 properly.\n \n-Kevin\n", "msg_date": "Thu, 27 Jan 2011 08:51:08 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Why I lost the last pg_xlog file?" } ]
[ { "msg_contents": "I have a table EMP, with 14 rows and a description like this:\nscott=> \\d+ emp\n Table \"public.emp\"\n Column | Type | Modifiers | Storage | \nDescription\n----------+-----------------------------+-----------+----------+-------------\n empno | smallint | not null | plain |\n ename | character varying(10) | | extended |\n job | character varying(9) | | extended |\n mgr | smallint | | plain |\n hiredate | timestamp without time zone | | plain |\n sal | double precision | | plain |\n comm | double precision | | plain |\n deptno | smallint | | plain |\nIndexes:\n \"emp_pkey\" PRIMARY KEY, btree (empno)\n \"emp_mgr_i\" btree (mgr)\nForeign-key constraints:\n \"fk_deptno\" FOREIGN KEY (deptno) REFERENCES dept(deptno)\nHas OIDs: no\n\nscott=>\n\nA recursive query doesn't use existing index on mgr:\nscott=> explain analyze\nwith recursive e(empno,ename,mgr,bossname,level) as (\nselect empno,ename,mgr,NULL::varchar,0 from emp where empno=7839\nunion\nselect emp.empno,emp.ename,emp.mgr,e.ename,e.level+1\nfrom emp,e\nwhere emp.mgr=e.empno)\nselect * from e;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n CTE Scan on e (cost=20.59..23.21 rows=131 width=78) (actual \ntime=0.020..0.143 rows=14 loops=1)\n CTE e\n -> Recursive Union (cost=0.00..20.59 rows=131 width=52) (actual \ntime=0.018..0.128 rows=14 loops=1)\n -> Seq Scan on emp (cost=0.00..1.18 rows=1 width=10) \n(actual time=0.013..0.015 rows=1 loops=1)\n Filter: (empno = 7839)\n -> Hash Join (cost=0.33..1.68 rows=13 width=52) (actual \ntime=0.016..0.021 rows=3 loops=4)\n Hash Cond: (public.emp.mgr = e.empno)\n -> Seq Scan on emp (cost=0.00..1.14 rows=14 \nwidth=10) (actual time=0.001..0.004 rows=14 loops=4)\n -> Hash (cost=0.20..0.20 rows=10 width=44) (actual \ntime=0.004..0.004 rows=4 loops=4)\n Buckets: 1024 Batches: 1 Memory Usage: 1kB\n -> WorkTable Scan on e (cost=0.00..0.20 \nrows=10 width=44) (actual time=0.001..0.002 rows=4 loops=4)\n Total runtime: 0.218 ms\n(12 rows)\n\nscott=>\n\nThe optimizer will not use index, not even when I turn off both hash and \nmerge joins. This is not particularly important for a table with 14 \nrows, but for a larger table, this is a problem. The\nonly way to actually force the use of index is by disabling seqscan, but \nthat chooses a wrong path\nagain, because it reads the \"outer\" table by primary key, which will be \nvery slow. Full table scan,\ndone by the primary key is probably the slowest thing around. I know \nabout the PostgreSQL philosophy\nwhich says \"hints are bad\", and I deeply disagree with it, but would it \nbe possible to have at\nleast one parameter that would change calculations in such a way that \nindexes are favored, where they exist?\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 10:41:08 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 10:41:08AM -0500, Mladen Gogala wrote:\n> I have a table EMP, with 14 rows and a description like this:\n> scott=> \\d+ emp\n> Table \"public.emp\"\n> Column | Type | Modifiers | Storage | \n> Description\n> ----------+-----------------------------+-----------+----------+-------------\n> empno | smallint | not null | plain |\n> ename | character varying(10) | | extended |\n> job | character varying(9) | | extended |\n> mgr | smallint | | plain |\n> hiredate | timestamp without time zone | | plain |\n> sal | double precision | | plain |\n> comm | double precision | | plain |\n> deptno | smallint | | plain |\n> Indexes:\n> \"emp_pkey\" PRIMARY KEY, btree (empno)\n> \"emp_mgr_i\" btree (mgr)\n> Foreign-key constraints:\n> \"fk_deptno\" FOREIGN KEY (deptno) REFERENCES dept(deptno)\n> Has OIDs: no\n>\n> scott=>\n>\n> A recursive query doesn't use existing index on mgr:\n> scott=> explain analyze\n> with recursive e(empno,ename,mgr,bossname,level) as (\n> select empno,ename,mgr,NULL::varchar,0 from emp where empno=7839\n> union\n> select emp.empno,emp.ename,emp.mgr,e.ename,e.level+1\n> from emp,e\n> where emp.mgr=e.empno)\n> select * from e;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n> CTE Scan on e (cost=20.59..23.21 rows=131 width=78) (actual \n> time=0.020..0.143 rows=14 loops=1)\n> CTE e\n> -> Recursive Union (cost=0.00..20.59 rows=131 width=52) (actual \n> time=0.018..0.128 rows=14 loops=1)\n> -> Seq Scan on emp (cost=0.00..1.18 rows=1 width=10) (actual \n> time=0.013..0.015 rows=1 loops=1)\n> Filter: (empno = 7839)\n> -> Hash Join (cost=0.33..1.68 rows=13 width=52) (actual \n> time=0.016..0.021 rows=3 loops=4)\n> Hash Cond: (public.emp.mgr = e.empno)\n> -> Seq Scan on emp (cost=0.00..1.14 rows=14 width=10) \n> (actual time=0.001..0.004 rows=14 loops=4)\n> -> Hash (cost=0.20..0.20 rows=10 width=44) (actual \n> time=0.004..0.004 rows=4 loops=4)\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -> WorkTable Scan on e (cost=0.00..0.20 rows=10 \n> width=44) (actual time=0.001..0.002 rows=4 loops=4)\n> Total runtime: 0.218 ms\n> (12 rows)\n>\n> scott=>\n>\n> The optimizer will not use index, not even when I turn off both hash and \n> merge joins. This is not particularly important for a table with 14 rows, \n> but for a larger table, this is a problem. The\n> only way to actually force the use of index is by disabling seqscan, but \n> that chooses a wrong path\n> again, because it reads the \"outer\" table by primary key, which will be \n> very slow. Full table scan,\n> done by the primary key is probably the slowest thing around. I know about \n> the PostgreSQL philosophy\n> which says \"hints are bad\", and I deeply disagree with it, but would it be \n> possible to have at\n> least one parameter that would change calculations in such a way that \n> indexes are favored, where they exist?\n>\n> -- \n> Mladen Gogala\n> Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n\nHi Mladen,\n\nPostgreSQL will only use an index if the planner thinks that it\nwill be faster than the alternative, a sequential scan in this case.\nFor 14 rows, a sequential scan is 1 read and should actually be\nfaster than the index. Did you try the query using EXPLAIN ANALYZE\nonce with index and once without? What were the timings? If they\ndo not match reality, adjusting cost parameters would be in order.\n\nRegards,\nKen\n", "msg_date": "Thu, 27 Jan 2011 09:45:39 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> The optimizer will not use index, not even when I turn off both\n> hash and merge joins. This is not particularly important for a\n> table with 14 rows, but for a larger table, this is a problem.\n \nIf it still does that with a larger table. Do you have an example\nof that? Showing that it goes straight to the data page when the\ntable only has one, without first wasting time going through the\nindex page, doesn't prove that it won't use the index when it might\nactually help -- much less point to the cause of the issue in the\nlarger table, which might lead to a solution.\n \n-Kevin\n", "msg_date": "Thu, 27 Jan 2011 09:49:38 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "Odds are that a table of 14 rows will more likely be cached in RAM\nthan a table of 14 million rows. PostgreSQL would certainly be more\n\"openminded\" to using an index if chances are low that the table is\ncached. If the table *is* cached, though, what point would there be\nin reading an index?\n\nAlso, if random_page_cost is set to default (4.0), the planner will\ntend towards sequential scans. 
You can drop this number a bit to\n\"help\" the planner be more selective of indexes...and there's also\ncpu_tuple_* settings that can be modified to pursuade the planner to\nuse indexes.\n\nDoubtful that any prodding will force an index scan with a cached\ntable of 14 rows, though...\n\nOn 1/27/11, Mladen Gogala <[email protected]> wrote:\n> I have a table EMP, with 14 rows and a description like this:\n> scott=> \\d+ emp\n> Table \"public.emp\"\n> Column | Type | Modifiers | Storage |\n> Description\n> ----------+-----------------------------+-----------+----------+-------------\n> empno | smallint | not null | plain |\n> ename | character varying(10) | | extended |\n> job | character varying(9) | | extended |\n> mgr | smallint | | plain |\n> hiredate | timestamp without time zone | | plain |\n> sal | double precision | | plain |\n> comm | double precision | | plain |\n> deptno | smallint | | plain |\n> Indexes:\n> \"emp_pkey\" PRIMARY KEY, btree (empno)\n> \"emp_mgr_i\" btree (mgr)\n> Foreign-key constraints:\n> \"fk_deptno\" FOREIGN KEY (deptno) REFERENCES dept(deptno)\n> Has OIDs: no\n>\n> scott=>\n>\n> A recursive query doesn't use existing index on mgr:\n> scott=> explain analyze\n> with recursive e(empno,ename,mgr,bossname,level) as (\n> select empno,ename,mgr,NULL::varchar,0 from emp where empno=7839\n> union\n> select emp.empno,emp.ename,emp.mgr,e.ename,e.level+1\n> from emp,e\n> where emp.mgr=e.empno)\n> select * from e;\n> QUERY PLAN\n> -----------------------------------------------------------------------------------------------------------------------------\n> CTE Scan on e (cost=20.59..23.21 rows=131 width=78) (actual\n> time=0.020..0.143 rows=14 loops=1)\n> CTE e\n> -> Recursive Union (cost=0.00..20.59 rows=131 width=52) (actual\n> time=0.018..0.128 rows=14 loops=1)\n> -> Seq Scan on emp (cost=0.00..1.18 rows=1 width=10)\n> (actual time=0.013..0.015 rows=1 loops=1)\n> Filter: (empno = 7839)\n> -> Hash Join (cost=0.33..1.68 rows=13 width=52) (actual\n> time=0.016..0.021 rows=3 loops=4)\n> Hash Cond: (public.emp.mgr = e.empno)\n> -> Seq Scan on emp (cost=0.00..1.14 rows=14\n> width=10) (actual time=0.001..0.004 rows=14 loops=4)\n> -> Hash (cost=0.20..0.20 rows=10 width=44) (actual\n> time=0.004..0.004 rows=4 loops=4)\n> Buckets: 1024 Batches: 1 Memory Usage: 1kB\n> -> WorkTable Scan on e (cost=0.00..0.20\n> rows=10 width=44) (actual time=0.001..0.002 rows=4 loops=4)\n> Total runtime: 0.218 ms\n> (12 rows)\n>\n> scott=>\n>\n> The optimizer will not use index, not even when I turn off both hash and\n> merge joins. This is not particularly important for a table with 14\n> rows, but for a larger table, this is a problem. The\n> only way to actually force the use of index is by disabling seqscan, but\n> that chooses a wrong path\n> again, because it reads the \"outer\" table by primary key, which will be\n> very slow. Full table scan,\n> done by the primary key is probably the slowest thing around. I know\n> about the PostgreSQL philosophy\n> which says \"hints are bad\", and I deeply disagree with it, but would it\n> be possible to have at\n> least one parameter that would change calculations in such a way that\n> indexes are favored, where they exist?\n>\n> --\n> Mladen Gogala\n> Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n-- \nComputers are like air conditioners...\nThey quit working when you open Windows.\n", "msg_date": "Thu, 27 Jan 2011 09:51:16 -0600", "msg_from": "J Sisson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 10:45 AM, Kenneth Marshall wrote:\n> PostgreSQL will only use an index if the planner thinks that it\n> will be faster than the alternative, a sequential scan in this case.\n> For 14 rows, a sequential scan is 1 read and should actually be\n> faster than the index. Did you try the query using EXPLAIN ANALYZE\n> once with index and once without? What were the timings? If they\n> do not match reality, adjusting cost parameters would be in order.\n>\nI did. I even tried with an almost equivalent outer join:\n\n explain analyze select e1.empno,e1.ename,e2.empno,e2.ename\nfrom emp e1 left outer join emp e2 on (e1.mgr=e2.empno);\n QUERY PLAN\n\n--------------------------------------------------------------------------------\n------------------------------\n Nested Loop Left Join (cost=0.00..7.25 rows=14 width=16) (actual \ntime=0.028..0\n.105 rows=14 loops=1)\n Join Filter: (e1.mgr = e2.empno)\n -> Seq Scan on emp e1 (cost=0.00..2.14 rows=14 width=10) (actual \ntime=0.006\n..0.010 rows=14 loops=1)\n -> Materialize (cost=0.00..2.21 rows=14 width=8) (actual \ntime=0.001..0.003\nrows=14 loops=14)\n -> Seq Scan on emp e2 (cost=0.00..2.14 rows=14 width=8) \n(actual time=\n0.001..0.005 rows=14 loops=1)\n Total runtime: 0.142 ms\n(6 rows)\n\nThis gives me the same result as the recursive version, minus the level \ncolumn. I am porting an application from Oracle, there is a fairly large \ntable that is accessed by \"connect by\". Rewriting it as a recursive join \nis not a problem, but the optimizer doesn't really use the indexes.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 10:56:11 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 10:51 AM, J Sisson wrote:\n> Also, if random_page_cost is set to default (4.0), the planner will\n> tend towards sequential scans.\nscott=> show random_page_cost;\n random_page_cost\n------------------\n 1\n(1 row)\n\nscott=> show seq_page_cost;\n seq_page_cost\n---------------\n 2\n(1 row)\n\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 10:57:50 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 10:56 AM, Mladen Gogala\n<[email protected]>wrote:\n\n> I even tried with an almost equivalent outer join:\n>\n> explain analyze select e1.empno,e1.ename,e2.empno,e2.ename\n> from emp e1 left outer join emp e2 on (e1.mgr=e2.empno);\n> QUERY PLAN\n>\n>\n> --------------------------------------------------------------------------------\n> ------------------------------\n> Nested Loop Left Join (cost=0.00..7.25 rows=14 width=16) (actual\n> time=0.028..0\n> .105 rows=14 loops=1)\n> Join Filter: (e1.mgr = e2.empno)\n> -> Seq Scan on emp e1 (cost=0.00..2.14 rows=14 width=10) (actual\n> time=0.006\n> ..0.010 rows=14 loops=1)\n> -> Materialize (cost=0.00..2.21 rows=14 width=8) (actual\n> time=0.001..0.003\n> rows=14 loops=14)\n> -> Seq Scan on emp e2 (cost=0.00..2.14 rows=14 width=8) (actual\n> time=\n> 0.001..0.005 rows=14 loops=1)\n> Total runtime: 0.142 ms\n> (6 rows)\n>\n> This gives me the same result as the recursive version, minus the level\n> column. I am porting an application from Oracle, there is a fairly large\n> table that is accessed by \"connect by\". Rewriting it as a recursive join is\n> not a problem, but the optimizer doesn't really use the indexes.\n>\n>\nYou're still using a 14 row table, though. Postgres isn't going to be stupid\nenough to use an index in this case when the seq scan is clearly faster\nunless you go out of your way to absolutely force it to do so. If the table\nis going to be \"fairly large\", that's the size you need to be testing and\ntuning with.\n\n-- \n- David T. Wilson\[email protected]\n\nOn Thu, Jan 27, 2011 at 10:56 AM, Mladen Gogala <[email protected]> wrote:\nI even tried with an almost equivalent outer join:\n\n explain analyze select e1.empno,e1.ename,e2.empno,e2.ename\nfrom emp e1 left outer join emp e2 on (e1.mgr=e2.empno);\n                                                  QUERY PLAN\n\n--------------------------------------------------------------------------------\n------------------------------\n Nested Loop Left Join  (cost=0.00..7.25 rows=14 width=16) (actual time=0.028..0\n.105 rows=14 loops=1)\n   Join Filter: (e1.mgr = e2.empno)\n   ->  Seq Scan on emp e1  (cost=0.00..2.14 rows=14 width=10) (actual time=0.006\n..0.010 rows=14 loops=1)\n   ->  Materialize  (cost=0.00..2.21 rows=14 width=8) (actual time=0.001..0.003\nrows=14 loops=14)\n         ->  Seq Scan on emp e2  (cost=0.00..2.14 rows=14 width=8) (actual time=\n0.001..0.005 rows=14 loops=1)\n Total runtime: 0.142 ms\n(6 rows)\n\nThis gives me the same result as the recursive version, minus the level column. I am porting an application from Oracle, there is a fairly large table that is accessed by \"connect by\". Rewriting it as a recursive join is not a problem, but the optimizer doesn't really use the indexes.\nYou're still using a 14 row table, though. Postgres isn't going to be stupid enough to use an index in this case when the seq scan is clearly faster unless you go out of your way to absolutely force it to do so. If the table is going to be \"fairly large\", that's the size you need to be testing and tuning with.\n-- - David T. 
[email protected]", "msg_date": "Thu, 27 Jan 2011 11:09:15 -0500", "msg_from": "David Wilson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "David Wilson <[email protected]> writes:\n> You're still using a 14 row table, though.\n\nExactly. Please note what it says in the fine manual:\n\n It is worth noting that EXPLAIN results should not be extrapolated\n to situations other than the one you are actually testing; for\n example, results on a toy-sized table cannot be assumed to apply to\n large tables. The planner's cost estimates are not linear and so it\n might choose a different plan for a larger or smaller table. An\n extreme example is that on a table that only occupies one disk page,\n you'll nearly always get a sequential scan plan whether indexes are\n available or not. The planner realizes that it's going to take one\n disk page read to process the table in any case, so there's no value\n in expending additional page reads to look at an index.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Jan 2011 11:40:47 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes " }, { "msg_contents": "On 1/27/2011 11:40 AM, Tom Lane wrote:\n> It is worth noting that EXPLAIN results should not be extrapolated\n> to situations other than the one you are actually testing; for\n> example, results on a toy-sized table cannot be assumed to apply to\n> large tables.\nWell, that's precisely what I tried. Bummer, I will have to copy a large \ntable over.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 12:00:06 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Mladen Gogala [mailto:[email protected]] \n> Sent: Thursday, January 27, 2011 12:00 PM\n> To: Tom Lane\n> Cc: David Wilson; Kenneth Marshall; [email protected]\n> Subject: Re: Postgres 9.0 has a bias against indexes\n> \n> On 1/27/2011 11:40 AM, Tom Lane wrote:\n> > It is worth noting that EXPLAIN results should not be extrapolated\n> > to situations other than the one you are actually testing; for\n> > example, results on a toy-sized table cannot be \n> assumed to apply to\n> > large tables.\n> Well, that's precisely what I tried. Bummer, I will have to \n> copy a large table over.\n> \n> --\n> Mladen Gogala\n> Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n> \n> \n\nMladen,\n\nI don't think, this is exclusive Postgres feature.\nI'm pretty sure, Oracle optimizer will do \"TABLE ACCESS (FULL)\" instead\nof using index on 14-row table either.\n\nRegards,\nIgor Neyman\n", "msg_date": "Thu, 27 Jan 2011 15:10:56 -0500", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 3:10 PM, Igor Neyman wrote:\n>\n> Mladen,\n>\n> I don't think, this is exclusive Postgres feature.\n> I'm pretty sure, Oracle optimizer will do \"TABLE ACCESS (FULL)\" instead\n> of using index on 14-row table either.\n>\n> Regards,\n> Igor Neyman\n\nWell, lets' see:\n\nSQL> select * from v$version;\n\nBANNER\n--------------------------------------------------------------------------------\nOracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production\nPL/SQL Release 11.2.0.2.0 - Production\nCORE 11.2.0.2.0 Production\nTNS for Linux: Version 11.2.0.2.0 - Production\nNLSRTL Version 11.2.0.2.0 - Production\n\nElapsed: 00:00:00.00\nSQL> set autotrace on explain\nSQL> with e(empno,ename,mgr,bossname,lev) as (\n 2 select empno,ename,mgr,NULL,0 from emp where empno=7839\n 3 union all\n 4 select emp.empno,emp.ename,emp.mgr,e.ename,e.lev+1\n 5 from emp,e\n 6 where emp.mgr=e.empno)\n 7 select * from e\n 8 /\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7839 KING 0\n 7566 JONES 7839 KING 1\n 7698 BLAKE 7839 KING 1\n 7782 CLARK 7839 KING 1\n 7499 ALLEN 7698 BLAKE 2\n 7521 WARD 7698 BLAKE 2\n 7654 MARTIN 7698 BLAKE 2\n 7788 SCOTT 7566 JONES 2\n 7844 TURNER 7698 BLAKE 2\n 7900 JAMES 7698 BLAKE 2\n 7902 FORD 7566 JONES 2\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7934 MILLER 7782 CLARK 2\n 7369 SMITH 7902 FORD 3\n 7876 ADAMS 7788 SCOTT 3\n\n14 rows selected.\n\nElapsed: 00:00:00.01\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2925328376\n\n--------------------------------------------------------------------------------\n--------------------\n\n| Id | Operation | Name | Rows | \nBytes | Cos\nt (%CPU)| Time |\n\n--------------------------------------------------------------------------------\n--------------------\n\n| 0 | SELECT STATEMENT | | 15 | 795 |\n 6 (17)| 00:00:56 |\n\n| 1 | VIEW | | 15 | 795 |\n 6 (17)| 00:00:56 |\n\n| 2 | UNION ALL (RECURSIVE WITH) BREADTH FIRST| | | |\n | |\n\n| 3 | TABLE ACCESS BY INDEX ROWID | EMP | 1 | 24 |\n 1 (0)| 00:00:11 |\n\n|* 4 | INDEX UNIQUE SCAN | PK_EMP | 1 | |\n 0 (0)| 00:00:01 |\n\n|* 5 | HASH JOIN | | 14 | 798 |\n 5 (20)| 00:00:46 |\n\n| 6 | RECURSIVE WITH PUMP | | | |\n | |\n\n| 7 | TABLE ACCESS FULL | EMP | 14 | 336 |\n 3 (0)| 00:00:31 |\n\n--------------------------------------------------------------------------------\n--------------------\n\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 4 - access(\"EMPNO\"=7839)\n 5 - access(\"EMP\".\"MGR\"=\"E\".\"EMPNO\")\n\nNote\n-----\n - SQL plan baseline \"SQL_PLAN_1tmxjj25531vff51d791e\" used for this \nstatement\n\nSQL> spool off\n\n\nThere is INDEX UNIQUE SCAN PK_EMP. Oracle will use an index.\n\n-- \nMladen Gogala\nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 15:31:40 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala\n<[email protected]> wrote:\n> There is INDEX UNIQUE SCAN PK_EMP.  Oracle will use an index.\n\nThat's because Oracle has covering indexes.\n", "msg_date": "Thu, 27 Jan 2011 13:37:22 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala\n> <[email protected]> wrote:\n>> There is INDEX UNIQUE SCAN PK_EMP. Oracle will use an index.\n> That's because Oracle has covering indexes.\n>\nI am not sure what you mean by \"covering indexes\" but I hope that for \nthe larger table I have in mind, indexes will be used. For a small \ntable like this, not using an index may actually be a better plan. I \ncannot compare because my development PostgreSQL cluster is on a much \nweaker machine than the development Oracle database.\nI even looked into Wikipedia for the notion of \"covering index\" and it \nis defined as an index which contains all the data requested in a query. \nThis is not the case, EMP is not an index-organized table. The only \nindex used was the primary key, also available in the PostgreSQL version \nof the table.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 15:44:10 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala\n<[email protected]> wrote:\n> On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n>>\n>> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala\n>> <[email protected]>  wrote:\n>>>\n>>> There is INDEX UNIQUE SCAN PK_EMP.  Oracle will use an index.\n>>\n>> That's because Oracle has covering indexes.\n>>\n> I am not sure what you mean by \"covering indexes\" but I hope that for the\n> larger table I have in mind,  indexes will be used.  For a small table like\n\nIn Oracle you can hit JUST the index to get the data you need (and\nmaybe rollback logs, which are generally pretty small)\n\nIn Pgsql, once you hit the index you must then hit the actual data\nstore to get the right version of your tuple. So, index access in pg\nis more expensive than in Oracle. However, updates are cheaper.\nAlways a trade off\n", "msg_date": "Thu, 27 Jan 2011 13:59:15 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]] \n> Sent: Thursday, January 27, 2011 3:59 PM\n> To: Mladen Gogala\n> Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall; \n> [email protected]\n> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> \n> On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala \n> <[email protected]> wrote:\n> > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n> >>\n> >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala \n> >> <[email protected]>  wrote:\n> >>>\n> >>> There is INDEX UNIQUE SCAN PK_EMP.  
Oracle will use an index.\n> >>\n> >> That's because Oracle has covering indexes.\n> >>\n> > I am not sure what you mean by \"covering indexes\" but I \n> hope that for \n> > the larger table I have in mind,  indexes will be used.  \n> For a small \n> > table like\n> \n> In Oracle you can hit JUST the index to get the data you need \n> (and maybe rollback logs, which are generally pretty small)\n> \n> In Pgsql, once you hit the index you must then hit the actual \n> data store to get the right version of your tuple. So, index \n> access in pg is more expensive than in Oracle. However, \n> updates are cheaper.\n> Always a trade off\n> \n> \n\nScott,\nWhat you describe here isn't about \"covering indexes\" - it's about different ways implementing MVCC in Oracle and PG.\n\nMladen, \nyou were right.\nFor recursive query like yours Oracle uses index even on small table.\nI made an assumption without testing it.\nHowever some other (non-recursive) queries against the same small table that also require reading all 14 rows do \"table scan\".\n\nRegards,\nIgor Neyman\n", "msg_date": "Thu, 27 Jan 2011 16:12:53 -0500", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 2:12 PM, Igor Neyman <[email protected]> wrote:\n>\n>\n>> -----Original Message-----\n>> From: Scott Marlowe [mailto:[email protected]]\n>> Sent: Thursday, January 27, 2011 3:59 PM\n>> To: Mladen Gogala\n>> Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall;\n>> [email protected]\n>> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n>>\n>> On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala\n>> <[email protected]> wrote:\n>> > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n>> >>\n>> >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala\n>> >> <[email protected]>  wrote:\n>> >>>\n>> >>> There is INDEX UNIQUE SCAN PK_EMP.  Oracle will use an index.\n>> >>\n>> >> That's because Oracle has covering indexes.\n>> >>\n>> > I am not sure what you mean by \"covering indexes\" but I\n>> hope that for\n>> > the larger table I have in mind,  indexes will be used.\n>> For a small\n>> > table like\n>>\n>> In Oracle you can hit JUST the index to get the data you need\n>> (and maybe rollback logs, which are generally pretty small)\n>>\n>> In Pgsql, once you hit the index you must then hit the actual\n>> data store to get the right version of your tuple.  So, index\n>> access in pg is more expensive than in Oracle.  
However,\n>> updates are cheaper.\n>> Always a trade off\n>>\n>>\n>\n> Scott,\n> What you describe here isn't about \"covering indexes\" - it's about different ways implementing MVCC in Oracle and PG.\n\nIt is about covering indexes AND it's about the difference in how MVCC\nis implemented in both databases.\n", "msg_date": "Thu, 27 Jan 2011 14:16:29 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]] \n> Sent: Thursday, January 27, 2011 4:16 PM\n> To: Igor Neyman\n> Cc: Mladen Gogala; Tom Lane; David Wilson; Kenneth Marshall; \n> [email protected]\n> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> \n> On Thu, Jan 27, 2011 at 2:12 PM, Igor Neyman \n> <[email protected]> wrote:\n> >\n> >\n> >> -----Original Message-----\n> >> From: Scott Marlowe [mailto:[email protected]]\n> >> Sent: Thursday, January 27, 2011 3:59 PM\n> >> To: Mladen Gogala\n> >> Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall; \n> >> [email protected]\n> >> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> >>\n> >> On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala \n> >> <[email protected]> wrote:\n> >> > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n> >> >>\n> >> >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala \n> >> >> <[email protected]>  wrote:\n> >> >>>\n> >> >>> There is INDEX UNIQUE SCAN PK_EMP.  Oracle will use an index.\n> >> >>\n> >> >> That's because Oracle has covering indexes.\n> >> >>\n> >> > I am not sure what you mean by \"covering indexes\" but I\n> >> hope that for\n> >> > the larger table I have in mind,  indexes will be used.\n> >> For a small\n> >> > table like\n> >>\n> >> In Oracle you can hit JUST the index to get the data you need (and \n> >> maybe rollback logs, which are generally pretty small)\n> >>\n> >> In Pgsql, once you hit the index you must then hit the actual data \n> >> store to get the right version of your tuple.  So, index \n> access in pg \n> >> is more expensive than in Oracle.  However, updates are cheaper.\n> >> Always a trade off\n> >>\n> >>\n> >\n> > Scott,\n> > What you describe here isn't about \"covering indexes\" - \n> it's about different ways implementing MVCC in Oracle and PG.\n> \n> It is about covering indexes AND it's about the difference in \n> how MVCC is implemented in both databases.\n> \n> \n\nWell, Mladen's query doesn't involve covering indexes.\n", "msg_date": "Thu, 27 Jan 2011 16:18:12 -0500", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 04:12:53PM -0500, Igor Neyman wrote:\n> \n> \n> > -----Original Message-----\n> > From: Scott Marlowe [mailto:[email protected]] \n> > Sent: Thursday, January 27, 2011 3:59 PM\n> > To: Mladen Gogala\n> > Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall; \n> > [email protected]\n> > Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> > \n> > On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala \n> > <[email protected]> wrote:\n> > > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n> > >>\n> > >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala \n> > >> <[email protected]> ?wrote:\n> > >>>\n> > >>> There is INDEX UNIQUE SCAN PK_EMP. 
?Oracle will use an index.\n> > >>\n> > >> That's because Oracle has covering indexes.\n> > >>\n> > > I am not sure what you mean by \"covering indexes\" but I \n> > hope that for \n> > > the larger table I have in mind, ?indexes will be used. ?\n> > For a small \n> > > table like\n> > \n> > In Oracle you can hit JUST the index to get the data you need \n> > (and maybe rollback logs, which are generally pretty small)\n> > \n> > In Pgsql, once you hit the index you must then hit the actual \n> > data store to get the right version of your tuple. So, index \n> > access in pg is more expensive than in Oracle. However, \n> > updates are cheaper.\n> > Always a trade off\n> > \n> > \n> \n> Scott,\n> What you describe here isn't about \"covering indexes\" - it's about different ways implementing MVCC in Oracle and PG.\n> \n> Mladen, \n> you were right.\n> For recursive query like yours Oracle uses index even on small table.\n> I made an assumption without testing it.\n> However some other (non-recursive) queries against the same small table that also require reading all 14 rows do \"table scan\".\n> \n> Regards,\n> Igor Neyman\n> \nInteresting. Can you force it to use a Seqential Scan and if so, how\ndoes that affect the timing? i.e. Is the index scan actually faster?\n\nCheers,\nKen\n", "msg_date": "Thu, 27 Jan 2011 15:20:53 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 2:18 PM, Igor Neyman <[email protected]> wrote:\n>\n>> -----Original Message-----\n>> From: Scott Marlowe [mailto:[email protected]]\n>> Sent: Thursday, January 27, 2011 4:16 PM\n>> To: Igor Neyman\n>> Cc: Mladen Gogala; Tom Lane; David Wilson; Kenneth Marshall;\n>> [email protected]\n>> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n>>\n>> On Thu, Jan 27, 2011 at 2:12 PM, Igor Neyman\n>> <[email protected]> wrote:\n>> >\n>> >\n>> >> -----Original Message-----\n>> >> From: Scott Marlowe [mailto:[email protected]]\n>> >> Sent: Thursday, January 27, 2011 3:59 PM\n>> >> To: Mladen Gogala\n>> >> Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall;\n>> >> [email protected]\n>> >> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n>> >>\n>> >> On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala\n>> >> <[email protected]> wrote:\n>> >> > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n>> >> >>\n>> >> >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala\n>> >> >> <[email protected]>  wrote:\n>> >> >>>\n>> >> >>> There is INDEX UNIQUE SCAN PK_EMP.  Oracle will use an index.\n>> >> >>\n>> >> >> That's because Oracle has covering indexes.\n>> >> >>\n>> >> > I am not sure what you mean by \"covering indexes\" but I\n>> >> hope that for\n>> >> > the larger table I have in mind,  indexes will be used.\n>> >> For a small\n>> >> > table like\n>> >>\n>> >> In Oracle you can hit JUST the index to get the data you need (and\n>> >> maybe rollback logs, which are generally pretty small)\n>> >>\n>> >> In Pgsql, once you hit the index you must then hit the actual data\n>> >> store to get the right version of your tuple.  So, index\n>> access in pg\n>> >> is more expensive than in Oracle.  
However, updates are cheaper.\n>> >> Always a trade off\n>> >>\n>> >>\n>> >\n>> > Scott,\n>> > What you describe here isn't about \"covering indexes\" -\n>> it's about different ways implementing MVCC in Oracle and PG.\n>>\n>> It is about covering indexes AND it's about the difference in\n>> how MVCC is implemented in both databases.\n>>\n>>\n>\n> Well, Mladen's query doesn't involve covering indexes.\n\nOn Oracle? Then how can it get the values it needs without having to\nhit the data store?\n", "msg_date": "Thu, 27 Jan 2011 14:25:04 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 4:20 PM, Kenneth Marshall wrote:\n> Interesting. Can you force it to use a Seqential Scan and if so, how\n> does that affect the timing? i.e. Is the index scan actually faster?\n>\n> Cheers,\n> Ken\nYes, Oracle can be forced into doing a sequential scan and it is \nactually faster than an index scan:\n\nSQL> set autotrace on explain\nSQL> with e(empno,ename,mgr,bossname,lev) as (\n 2 select empno,ename,mgr,NULL,0 from emp where empno=7839\n 3 union all\n 4 select emp.empno,emp.ename,emp.mgr,e.ename,e.lev+1\n 5 from emp,e\n 6 where emp.mgr=e.empno)\n 7 select * from e\n 8 /\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7839 KING 0\n 7566 JONES 7839 KING 1\n 7698 BLAKE 7839 KING 1\n 7782 CLARK 7839 KING 1\n 7499 ALLEN 7698 BLAKE 2\n 7521 WARD 7698 BLAKE 2\n 7654 MARTIN 7698 BLAKE 2\n 7788 SCOTT 7566 JONES 2\n 7844 TURNER 7698 BLAKE 2\n 7900 JAMES 7698 BLAKE 2\n 7902 FORD 7566 JONES 2\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7934 MILLER 7782 CLARK 2\n 7369 SMITH 7902 FORD 3\n 7876 ADAMS 7788 SCOTT 3\n\n14 rows selected.\n\nElapsed: 00:00:00.18\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2925328376\n\n--------------------------------------------------------------------------------\n--------------------\n\n| Id | Operation | Name | Rows | \nBytes | Cos\nt (%CPU)| Time |\n\n--------------------------------------------------------------------------------\n--------------------\n\n| 0 | SELECT STATEMENT | | 15 | 795 |\n 6 (17)| 00:00:56 |\n\n| 1 | VIEW | | 15 | 795 |\n 6 (17)| 00:00:56 |\n\n| 2 | UNION ALL (RECURSIVE WITH) BREADTH FIRST| | | |\n | |\n\n| 3 | TABLE ACCESS BY INDEX ROWID | EMP | 1 | 24 |\n 1 (0)| 00:00:11 |\n\n|* 4 | INDEX UNIQUE SCAN | PK_EMP | 1 | |\n 0 (0)| 00:00:01 |\n\n|* 5 | HASH JOIN | | 14 | 798 |\n 5 (20)| 00:00:46 |\n\n| 6 | RECURSIVE WITH PUMP | | | |\n | |\n\n| 7 | TABLE ACCESS FULL | EMP | 14 | 336 |\n 3 (0)| 00:00:31 |\n\n--------------------------------------------------------------------------------\n--------------------\n\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 4 - access(\"EMPNO\"=7839)\n 5 - access(\"EMP\".\"MGR\"=\"E\".\"EMPNO\")\n\nNote\n-----\n - SQL plan baseline \"SQL_PLAN_1tmxjj25531vff51d791e\" used for this \nstatement\n\nSQL>\nSQL> with e1(empno,ename,mgr,bossname,lev) as (\n 2 select /*+ full(emp) */ empno,ename,mgr,NULL,0 from emp where \nempno=7839\n 3 union all\n 4 select /*+ full(e2) */\n 5 e2.empno,e2.ename,e2.mgr,e1.ename,e1.lev+1\n 6 from emp e2,e1\n 7 where e2.mgr=e1.empno)\n 8 select * from e1\n 9 /\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7839 KING 0\n 7566 JONES 7839 KING 1\n 
7698 BLAKE 7839 KING 1\n 7782 CLARK 7839 KING 1\n 7499 ALLEN 7698 BLAKE 2\n 7521 WARD 7698 BLAKE 2\n 7654 MARTIN 7698 BLAKE 2\n 7788 SCOTT 7566 JONES 2\n 7844 TURNER 7698 BLAKE 2\n 7900 JAMES 7698 BLAKE 2\n 7902 FORD 7566 JONES 2\n\n EMPNO ENAME MGR BOSSNAME LEV\n---------- ---------- ---------- ---------- ----------\n 7934 MILLER 7782 CLARK 2\n 7369 SMITH 7902 FORD 3\n 7876 ADAMS 7788 SCOTT 3\n\n14 rows selected.\n\nElapsed: 00:00:00.14\n\nExecution Plan\n----------------------------------------------------------\nPlan hash value: 2042363665\n\n--------------------------------------------------------------------------------\n------------------\n\n| Id | Operation | Name | Rows | Bytes \n| Cost\n(%CPU)| Time |\n\n--------------------------------------------------------------------------------\n------------------\n\n| 0 | SELECT STATEMENT | | 15 | 795 \n| 10\n (10)| 00:01:36 |\n\n| 1 | VIEW | | 15 | 795 \n| 10\n (10)| 00:01:36 |\n\n| 2 | UNION ALL (RECURSIVE WITH) BREADTH FIRST| | | |\n | |\n\n|* 3 | TABLE ACCESS FULL | EMP | 1 | 24 \n| 3\n (0)| 00:00:31 |\n\n|* 4 | HASH JOIN | | 14 | 798 \n| 7\n (15)| 00:01:06 |\n\n| 5 | RECURSIVE WITH PUMP | | | |\n | |\n\n| 6 | TABLE ACCESS FULL | EMP | 14 | 336 \n| 3\n (0)| 00:00:31 |\n\n--------------------------------------------------------------------------------\n------------------\n\n\nPredicate Information (identified by operation id):\n---------------------------------------------------\n\n 3 - filter(\"EMPNO\"=7839)\n 4 - access(\"E2\".\"MGR\"=\"E1\".\"EMPNO\")\n\nSQL> spool off\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 16:31:24 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": " \n\n> -----Original Message-----\n> From: Scott Marlowe [mailto:[email protected]] \n> Sent: Thursday, January 27, 2011 4:25 PM\n> To: Igor Neyman\n> Cc: Mladen Gogala; Tom Lane; David Wilson; Kenneth Marshall; \n> [email protected]\n> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> \n> On Thu, Jan 27, 2011 at 2:18 PM, Igor Neyman \n> <[email protected]> wrote:\n> >\n> >> -----Original Message-----\n> >> From: Scott Marlowe [mailto:[email protected]]\n> >> Sent: Thursday, January 27, 2011 4:16 PM\n> >> To: Igor Neyman\n> >> Cc: Mladen Gogala; Tom Lane; David Wilson; Kenneth Marshall; \n> >> [email protected]\n> >> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> >>\n> >> On Thu, Jan 27, 2011 at 2:12 PM, Igor Neyman \n> <[email protected]> \n> >> wrote:\n> >> >\n> >> >\n> >> >> -----Original Message-----\n> >> >> From: Scott Marlowe [mailto:[email protected]]\n> >> >> Sent: Thursday, January 27, 2011 3:59 PM\n> >> >> To: Mladen Gogala\n> >> >> Cc: Igor Neyman; Tom Lane; David Wilson; Kenneth Marshall; \n> >> >> [email protected]\n> >> >> Subject: Re: [PERFORM] Postgres 9.0 has a bias against indexes\n> >> >>\n> >> >> On Thu, Jan 27, 2011 at 1:44 PM, Mladen Gogala \n> >> >> <[email protected]> wrote:\n> >> >> > On 1/27/2011 3:37 PM, Scott Marlowe wrote:\n> >> >> >>\n> >> >> >> On Thu, Jan 27, 2011 at 1:31 PM, Mladen Gogala \n> >> >> >> <[email protected]>  wrote:\n> >> >> >>>\n> >> >> >>> There is INDEX UNIQUE SCAN PK_EMP.  
Oracle will use \n> an index.\n> >> >> >>\n> >> >> >> That's because Oracle has covering indexes.\n> >> >> >>\n> >> >> > I am not sure what you mean by \"covering indexes\" but I\n> >> >> hope that for\n> >> >> > the larger table I have in mind,  indexes will be used.\n> >> >> For a small\n> >> >> > table like\n> >> >>\n> >> >> In Oracle you can hit JUST the index to get the data \n> you need (and \n> >> >> maybe rollback logs, which are generally pretty small)\n> >> >>\n> >> >> In Pgsql, once you hit the index you must then hit the \n> actual data \n> >> >> store to get the right version of your tuple.  So, index\n> >> access in pg\n> >> >> is more expensive than in Oracle.  However, updates are cheaper.\n> >> >> Always a trade off\n> >> >>\n> >> >>\n> >> >\n> >> > Scott,\n> >> > What you describe here isn't about \"covering indexes\" -\n> >> it's about different ways implementing MVCC in Oracle and PG.\n> >>\n> >> It is about covering indexes AND it's about the difference in how \n> >> MVCC is implemented in both databases.\n> >>\n> >>\n> >\n> > Well, Mladen's query doesn't involve covering indexes.\n> \n> On Oracle? Then how can it get the values it needs without \n> having to hit the data store?\n> \n> \n\nIt doesn't.\nIt does \"INDEX UNIQUE SCAN\" and then \"TABLE ACCESS BY INDEX ROWID\".\n", "msg_date": "Thu, 27 Jan 2011 16:32:02 -0500", "msg_from": "\"Igor Neyman\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On Thu, Jan 27, 2011 at 2:32 PM, Igor Neyman <[email protected]> wrote:\n>> On Oracle?  Then how can it get the values it needs without\n>> having to hit the data store?\n>\n> It doesn't.\n> It does \"INDEX UNIQUE SCAN\" and then \"TABLE ACCESS BY INDEX ROWID\".\n\nAhhh, ok. I thought Oracle used covering indexes by default.\n", "msg_date": "Thu, 27 Jan 2011 14:33:17 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "On 1/27/2011 4:25 PM, Scott Marlowe wrote:\n> On Oracle? Then how can it get the values it needs without having to\n> hit the data store?\n\nIt can't. It does hit the data store.\n\n-- \nMladen Gogala\nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com\n\n", "msg_date": "Thu, 27 Jan 2011 16:33:38 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" }, { "msg_contents": "Mladen Gogala <[email protected]> wrote:\n \n> Yes, Oracle can be forced into doing a sequential scan and it is \n> actually faster than an index scan:\n \nAnd PostgreSQL can be coerced to use an indexed scan. Its plans are\ncost-based, with user configurable cost factors; so if you tell it\nthat seq_page_cost and random_page_cost are both equal to some\nreally low value (like 0.001), you'll get an index scan. Part of\nthe process of tuning PostgreSQL is to discover the relative\n*actual* costs on *your environment* (which is largely dependent on\nthe degree of caching of the active portion of your database). When\nyou get your costing factors to approximate reality, the optimizer\nwill do a pretty good job of picking the fastest plan.\n \n-Kevin\n", "msg_date": "Thu, 27 Jan 2011 15:43:29 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 9.0 has a bias against indexes" } ]
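Following up on Kevin's last point, a rough way to check what the planner actually does once the table is no longer toy-sized; big_emp and the cost value below are illustrative placeholders rather than objects from this thread:

  -- per-session settings, so postgresql.conf stays untouched
  SET random_page_cost = 1.5;   -- closer to seq_page_cost when the table is mostly cached
  EXPLAIN ANALYZE
  WITH RECURSIVE e(empno, ename, mgr, bossname, lev) AS (
      SELECT empno, ename, mgr, NULL::varchar, 0 FROM big_emp WHERE empno = 7839
    UNION ALL
      SELECT b.empno, b.ename, b.mgr, e.ename, e.lev + 1
      FROM big_emp b JOIN e ON b.mgr = e.empno
  )
  SELECT * FROM e;
  -- SET enable_seqscan = off;  -- blunter: shows what the index plan would have cost

If the bitmap/index plan really is faster at realistic row counts, EXPLAIN ANALYZE will show it, and the cost settings can then be tuned to match the measured ratio instead of forcing a plan.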
[ { "msg_contents": "Pg 9.0.2 is performing better than pg8.4.1\n\nThere are more transactions per second in pg9.0.2 than in pg8.4.1, which is\na better thing.\n\nalso below are kernel parameters that i used.\n\n\n------ Shared Memory Limits --------\nmax number of segments = 4096\nmax seg size (kbytes) = 15099492\nmax total shared memory (kbytes) = 15099492\nmin seg size (bytes) = 1\n\n------ Semaphore Limits --------\nmax number of arrays = 8192\nmax semaphores per array = 250\nmax semaphores system wide = 2048000\nmax ops per semop call = 32\nsemaphore max value = 32767\n\n------ Messages: Limits --------\nmax queues system wide = 16\nmax size of message (bytes) = 65536\ndefault max size of queue (bytes) = 65536\n\n\nIs there anything that i can do to still improve 9.0.2 performance. the\nperformance (tps) that i got is only 10% is it ideal, or should i need to\nget more?\n\nThanks\nDeepak\n\nOn Wed, Jan 26, 2011 at 7:12 PM, DM <[email protected]> wrote:\n\n> Hello All,\n>\n> I did a pgbench marking test and by comparing the tps of both versions, it\n> looks like 8.4.1 performing better than 9.0.2.\n>\n> Let me know if I need to make any changes to Postgresql.conf of 9.0.2 file\n> to improve its performance\n>\n>\n> =========================================================================================\n> Server Information:\n> OS - CentOS - version 4.1.2\n> CPU - Intel(R) Xeon(R) CPU X5550 @ 2.67GHz\n> 16 CPUS total\n> RAM - 16GB\n> ===============================\n>\n> Postgresql 8.4.1\n> shared_buffers = 4GB\n> checkpoint_segments = 3\n> checkpoint_completion_target = 0.5\n> wal_buffers = 64kB\n> max_connections = 4096\n>\n> Postgresql 9.0.2\n> shared_buffers = 4GB\n> checkpoint_segments = 3\n> checkpoint_completion_target = 0.5\n> wal_buffers = 64KB\n> max_connections = 4096\n>\n> (rest parameters are default)\n> =====================================\n> 8.4.1 Analysis\n>\n> Iterations, Trans_type, Scale, Query_Mode, Clients, no.trans/client, no.\n> trans processed, tps (wih connections estab), tps (without connections\n> estab), DB Size\n> 1/1, SELECT, 1, simple, 32, 2000, 64000/64000, 66501.728525, 70463.861398,\n> 21 MB\n> 1/2, SELECT, 1, simple, 32, 2000, 64000/64000, 66743.003977, 70702.841481,\n> 21 MB\n> 1/3, SELECT, 1, simple, 32, 2000, 64000/64000, 67547.172201, 71925.063075,\n> 21 MB\n> 5/1, SELECT, 5, simple, 32, 2000, 64000/64000, 56964.639200, 60009.939146,\n> 80 MB\n> 5/2, SELECT, 5, simple, 32, 2000, 64000/64000, 62999.318820, 67349.775799,\n> 80 MB\n> 5/3, SELECT, 5, simple, 32, 2000, 64000/64000, 64178.222925, 68242.135894,\n> 80 MB\n> 10/1, SELECT, 10, simple, 32, 2000, 64000/64000, 63754.926064,\n> 67760.289506, 155 MB\n> 10/2, SELECT, 10, simple, 32, 2000, 64000/64000, 62776.794166,\n> 66902.637846, 155 MB\n> 10/3, SELECT, 10, simple, 32, 2000, 64000/64000, 63354.794770,\n> 67239.957345, 155 MB\n> 20/1, SELECT, 20, simple, 32, 2000, 64000/64000, 63525.843107,\n> 66996.134114, 305 MB\n> 20/2, SELECT, 20, simple, 32, 2000, 64000/64000, 62432.263433,\n> 66401.613559, 305 MB\n> 20/3, SELECT, 20, simple, 32, 2000, 64000/64000, 63381.083717,\n> 67308.339503, 305 MB\n> 30/1, SELECT, 30, simple, 32, 2000, 64000/64000, 61896.090005,\n> 65923.244742, 454 MB\n> 30/2, SELECT, 30, simple, 32, 2000, 64000/64000, 62743.314161,\n> 66192.699359, 454 MB\n> 30/3, SELECT, 30, simple, 32, 2000, 64000/64000, 62526.378316,\n> 66496.546336, 454 MB\n> 40/1, SELECT, 40, simple, 32, 2000, 64000/64000, 61668.201948,\n> 65381.511334, 604 MB\n> 40/2, SELECT, 40, simple, 32, 2000, 
64000/64000, 60185.106819,\n> 64128.449284, 604 MB\n> 40/3, SELECT, 40, simple, 32, 2000, 64000/64000, 60613.292874,\n> 64453.754431, 604 MB\n> 50/1, SELECT, 50, simple, 32, 2000, 64000/64000, 60863.172930,\n> 64428.319468, 753 MB\n> 50/2, SELECT, 50, simple, 32, 2000, 64000/64000, 61051.691704,\n> 64447.977894, 753 MB\n> 50/3, SELECT, 50, simple, 32, 2000, 64000/64000, 61442.988587,\n> 65375.166630, 753 MB\n> 75/1, SELECT, 75, simple, 32, 2000, 64000/64000, 59635.904169,\n> 62949.189185, 1127 MB\n> 75/2, SELECT, 75, simple, 32, 2000, 64000/64000, 60065.133129,\n> 63538.645892, 1127 MB\n> 75/3, SELECT, 75, simple, 32, 2000, 64000/64000, 61838.497170,\n> 65818.634695, 1127 MB\n> 100/1, SELECT, 100, simple, 32, 2000, 64000/64000, 57373.940935,\n> 60575.027377, 1501 MB\n> 100/2, SELECT, 100, simple, 32, 2000, 64000/64000, 58197.108149,\n> 61314.721760, 1501 MB\n> 100/3, SELECT, 100, simple, 32, 2000, 64000/64000, 57523.281200,\n> 60991.938581, 1501 MB\n> 200/1, SELECT, 200, simple, 32, 2000, 64000/64000, 52143.250545,\n> 54823.997834, 2996 MB\n> 200/2, SELECT, 200, simple, 32, 2000, 64000/64000, 51014.063940,\n> 53368.779097, 2996 MB\n> 200/3, SELECT, 200, simple, 32, 2000, 64000/64000, 56898.700754,\n> 59677.499065, 2996 MB\n> 500/1, SELECT, 500, simple, 32, 2000, 64000/64000, 53167.009206,\n> 55809.410862, 7482 MB\n> 500/2, SELECT, 500, simple, 32, 2000, 64000/64000, 53141.669047,\n> 55865.580430, 7482 MB\n> 500/3, SELECT, 500, simple, 32, 2000, 64000/64000, 53038.703336,\n> 55914.388083, 7482 MB\n>\n> =====================================\n> 9.0.2 Analysis\n>\n> Iterations, Trans_type, Scale, Query_Mode, Clients, no.trans/client, no.\n> trans processed, tps (wih connections estab), tps (without connections\n> estab), DB Size\n> 1/1, SELECT, 1, simple, 32, 2000, 64000/64000, 70763.426807, 76119.159787,\n> 21 MB\n> 1/2, SELECT, 1, simple, 32, 2000, 64000/64000, 70139.061649, 75282.249622,\n> 21 MB\n> 1/3, SELECT, 1, simple, 32, 2000, 64000/64000, 69998.140674, 75508.027447,\n> 21 MB\n> 5/1, SELECT, 5, simple, 32, 2000, 64000/64000, 71248.938224, 76835.989978,\n> 80 MB\n> 5/2, SELECT, 5, simple, 32, 2000, 64000/64000, 68324.678874, 73664.740257,\n> 80 MB\n> 5/3, SELECT, 5, simple, 32, 2000, 64000/64000, 67986.887029, 73594.855720,\n> 80 MB\n> 10/1, SELECT, 10, simple, 32, 2000, 64000/64000, 67766.818613,\n> 73131.991818, 155 MB\n> 10/2, SELECT, 10, simple, 32, 2000, 64000/64000, 69045.201952,\n> 74669.616117, 155 MB\n> 10/3, SELECT, 10, simple, 32, 2000, 64000/64000, 62094.807128,\n> 66287.996487, 155 MB\n> 20/1, SELECT, 20, simple, 32, 2000, 64000/64000, 66972.157372,\n> 72221.720682, 305 MB\n> 20/2, SELECT, 20, simple, 32, 2000, 64000/64000, 67587.975254,\n> 72683.167260, 305 MB\n> 20/3, SELECT, 20, simple, 32, 2000, 64000/64000, 67113.601305,\n> 71948.430962, 305 MB\n> 30/1, SELECT, 30, simple, 32, 2000, 64000/64000, 65509.670353,\n> 70293.133349, 454 MB\n> 30/2, SELECT, 30, simple, 32, 2000, 64000/64000, 67489.902878,\n> 72454.333958, 454 MB\n> 30/3, SELECT, 30, simple, 32, 2000, 64000/64000, 65234.497633,\n> 70089.363939, 454 MB\n> 40/1, SELECT, 40, simple, 32, 2000, 64000/64000, 65681.175365,\n> 70457.733066, 604 MB\n> 40/2, SELECT, 40, simple, 32, 2000, 64000/64000, 64592.963404,\n> 69444.519797, 604 MB\n> 40/3, SELECT, 40, simple, 32, 2000, 64000/64000, 66772.250287,\n> 71749.602855, 604 MB\n> 50/1, SELECT, 50, simple, 32, 2000, 64000/64000, 57715.060745,\n> 61701.317420, 753 MB\n> 50/2, SELECT, 50, simple, 32, 2000, 64000/64000, 64812.489367,\n> 69917.311854, 753 MB\n> 50/3, SELECT, 
50, simple, 32, 2000, 64000/64000, 65786.903883,\n> 70713.309460, 753 MB\n> 75/1, SELECT, 75, simple, 32, 2000, 64000/64000, 65105.491241,\n> 70354.023646, 1127 MB\n> 75/2, SELECT, 75, simple, 32, 2000, 64000/64000, 64134.747104,\n> 68658.772338, 1127 MB\n> 75/3, SELECT, 75, simple, 32, 2000, 64000/64000, 63974.154442,\n> 68779.264771, 1127 MB\n> 100/1, SELECT, 100, simple, 32, 2000, 64000/64000, 62137.309862,\n> 66605.264938, 1501 MB\n> 100/2, SELECT, 100, simple, 32, 2000, 64000/64000, 62003.667904,\n> 66372.002630, 1501 MB\n> 100/3, SELECT, 100, simple, 32, 2000, 64000/64000, 61511.372876,\n> 65768.109866, 1501 MB\n> 200/1, SELECT, 200, simple, 32, 2000, 64000/64000, 59470.544890,\n> 63584.980830, 2996 MB\n> 200/2, SELECT, 200, simple, 32, 2000, 64000/64000, 60463.204833,\n> 64584.359283, 2996 MB\n> 200/3, SELECT, 200, simple, 32, 2000, 64000/64000, 59025.725071,\n> 63048.783011, 2996 MB\n> 500/1, SELECT, 500, simple, 32, 2000, 64000/64000, 56162.668148,\n> 59781.963968, 7482 MB\n> 500/2, SELECT, 500, simple, 32, 2000, 64000/64000, 55649.899526,\n> 59268.808123, 7482 MB\n> 500/3, SELECT, 500, simple, 32, 2000, 64000/64000, 57373.632334,\n> 60672.421067, 7482 MB\n>\n>\n> I have also attached postgresql.conf file for both versions for refrence\n>\n> Thanks\n> Deepak\n>\n\nPg 9.0.2 is performing better than pg8.4.1There are more transactions per second in pg9.0.2 than in pg8.4.1, which is a better thing.also below are kernel parameters that i used.\n------ Shared Memory Limits --------max number of segments = 4096max seg size (kbytes) = 15099492max total shared memory (kbytes) = 15099492\nmin seg size (bytes) = 1------ Semaphore Limits --------max number of arrays = 8192max semaphores per array = 250max semaphores system wide = 2048000max ops per semop call = 32\nsemaphore max value = 32767------ Messages: Limits --------max queues system wide = 16max size of message (bytes) = 65536default max size of queue (bytes) = 65536\nIs there anything that i can do to still improve 9.0.2 performance. the performance (tps) that i got is only 10% is it ideal, or should i need to get more?\nThanksDeepakOn Wed, Jan 26, 2011 at 7:12 PM, DM <[email protected]> wrote:\nHello All,I did a pgbench marking test and by comparing the tps of both versions, it looks like 8.4.1 performing better than 9.0.2. Let me know if I need to make any changes to Postgresql.conf of 9.0.2 file to improve its performance \n=========================================================================================Server Information:OS - CentOS - version 4.1.2CPU - Intel(R) Xeon(R) CPU           X5550  @ 2.67GHz16 CPUS total\n\nRAM - 16GB===============================Postgresql 8.4.1shared_buffers = 4GBcheckpoint_segments = 3 checkpoint_completion_target = 0.5 wal_buffers = 64kB max_connections = 4096Postgresql 9.0.2\n\nshared_buffers = 4GBcheckpoint_segments = 3checkpoint_completion_target = 0.5wal_buffers = 64KBmax_connections = 4096(rest parameters are default)=====================================8.4.1 Analysis\nIterations, Trans_type, Scale, Query_Mode, Clients, no.trans/client, no. 
trans processed, tps (wih connections estab), tps (without connections estab), DB Size1/1, SELECT, 1, simple, 32, 2000, 64000/64000, 66501.728525, 70463.861398, 21 MB \n\n1/2, SELECT, 1, simple, 32, 2000, 64000/64000, 66743.003977, 70702.841481, 21 MB 1/3, SELECT, 1, simple, 32, 2000, 64000/64000, 67547.172201, 71925.063075, 21 MB 5/1, SELECT, 5, simple, 32, 2000, 64000/64000, 56964.639200, 60009.939146, 80 MB \n\n5/2, SELECT, 5, simple, 32, 2000, 64000/64000, 62999.318820, 67349.775799, 80 MB 5/3, SELECT, 5, simple, 32, 2000, 64000/64000, 64178.222925, 68242.135894, 80 MB 10/1, SELECT, 10, simple, 32, 2000, 64000/64000, 63754.926064, 67760.289506, 155 MB \n\n10/2, SELECT, 10, simple, 32, 2000, 64000/64000, 62776.794166, 66902.637846, 155 MB 10/3, SELECT, 10, simple, 32, 2000, 64000/64000, 63354.794770, 67239.957345, 155 MB 20/1, SELECT, 20, simple, 32, 2000, 64000/64000, 63525.843107, 66996.134114, 305 MB \n\n20/2, SELECT, 20, simple, 32, 2000, 64000/64000, 62432.263433, 66401.613559, 305 MB 20/3, SELECT, 20, simple, 32, 2000, 64000/64000, 63381.083717, 67308.339503, 305 MB 30/1, SELECT, 30, simple, 32, 2000, 64000/64000, 61896.090005, 65923.244742, 454 MB \n\n30/2, SELECT, 30, simple, 32, 2000, 64000/64000, 62743.314161, 66192.699359, 454 MB 30/3, SELECT, 30, simple, 32, 2000, 64000/64000, 62526.378316, 66496.546336, 454 MB 40/1, SELECT, 40, simple, 32, 2000, 64000/64000, 61668.201948, 65381.511334, 604 MB \n\n40/2, SELECT, 40, simple, 32, 2000, 64000/64000, 60185.106819, 64128.449284, 604 MB 40/3, SELECT, 40, simple, 32, 2000, 64000/64000, 60613.292874, 64453.754431, 604 MB 50/1, SELECT, 50, simple, 32, 2000, 64000/64000, 60863.172930, 64428.319468, 753 MB \n\n50/2, SELECT, 50, simple, 32, 2000, 64000/64000, 61051.691704, 64447.977894, 753 MB 50/3, SELECT, 50, simple, 32, 2000, 64000/64000, 61442.988587, 65375.166630, 753 MB 75/1, SELECT, 75, simple, 32, 2000, 64000/64000, 59635.904169, 62949.189185, 1127 MB \n\n75/2, SELECT, 75, simple, 32, 2000, 64000/64000, 60065.133129, 63538.645892, 1127 MB 75/3, SELECT, 75, simple, 32, 2000, 64000/64000, 61838.497170, 65818.634695, 1127 MB 100/1, SELECT, 100, simple, 32, 2000, 64000/64000, 57373.940935, 60575.027377, 1501 MB \n\n100/2, SELECT, 100, simple, 32, 2000, 64000/64000, 58197.108149, 61314.721760, 1501 MB 100/3, SELECT, 100, simple, 32, 2000, 64000/64000, 57523.281200, 60991.938581, 1501 MB 200/1, SELECT, 200, simple, 32, 2000, 64000/64000, 52143.250545, 54823.997834, 2996 MB \n\n200/2, SELECT, 200, simple, 32, 2000, 64000/64000, 51014.063940, 53368.779097, 2996 MB 200/3, SELECT, 200, simple, 32, 2000, 64000/64000, 56898.700754, 59677.499065, 2996 MB 500/1, SELECT, 500, simple, 32, 2000, 64000/64000, 53167.009206, 55809.410862, 7482 MB \n\n500/2, SELECT, 500, simple, 32, 2000, 64000/64000, 53141.669047, 55865.580430, 7482 MB 500/3, SELECT, 500, simple, 32, 2000, 64000/64000, 53038.703336, 55914.388083, 7482 MB =====================================\n\n9.0.2 AnalysisIterations, Trans_type, Scale, Query_Mode, Clients, no.trans/client, no. 
trans processed, tps (wih connections estab), tps (without connections estab), DB Size1/1, SELECT, 1, simple, 32, 2000, 64000/64000, 70763.426807, 76119.159787, 21 MB \n\n1/2, SELECT, 1, simple, 32, 2000, 64000/64000, 70139.061649, 75282.249622, 21 MB 1/3, SELECT, 1, simple, 32, 2000, 64000/64000, 69998.140674, 75508.027447, 21 MB 5/1, SELECT, 5, simple, 32, 2000, 64000/64000, 71248.938224, 76835.989978, 80 MB \n\n5/2, SELECT, 5, simple, 32, 2000, 64000/64000, 68324.678874, 73664.740257, 80 MB 5/3, SELECT, 5, simple, 32, 2000, 64000/64000, 67986.887029, 73594.855720, 80 MB 10/1, SELECT, 10, simple, 32, 2000, 64000/64000, 67766.818613, 73131.991818, 155 MB \n\n10/2, SELECT, 10, simple, 32, 2000, 64000/64000, 69045.201952, 74669.616117, 155 MB 10/3, SELECT, 10, simple, 32, 2000, 64000/64000, 62094.807128, 66287.996487, 155 MB 20/1, SELECT, 20, simple, 32, 2000, 64000/64000, 66972.157372, 72221.720682, 305 MB \n\n20/2, SELECT, 20, simple, 32, 2000, 64000/64000, 67587.975254, 72683.167260, 305 MB 20/3, SELECT, 20, simple, 32, 2000, 64000/64000, 67113.601305, 71948.430962, 305 MB 30/1, SELECT, 30, simple, 32, 2000, 64000/64000, 65509.670353, 70293.133349, 454 MB \n\n30/2, SELECT, 30, simple, 32, 2000, 64000/64000, 67489.902878, 72454.333958, 454 MB 30/3, SELECT, 30, simple, 32, 2000, 64000/64000, 65234.497633, 70089.363939, 454 MB 40/1, SELECT, 40, simple, 32, 2000, 64000/64000, 65681.175365, 70457.733066, 604 MB \n\n40/2, SELECT, 40, simple, 32, 2000, 64000/64000, 64592.963404, 69444.519797, 604 MB 40/3, SELECT, 40, simple, 32, 2000, 64000/64000, 66772.250287, 71749.602855, 604 MB 50/1, SELECT, 50, simple, 32, 2000, 64000/64000, 57715.060745, 61701.317420, 753 MB \n\n50/2, SELECT, 50, simple, 32, 2000, 64000/64000, 64812.489367, 69917.311854, 753 MB 50/3, SELECT, 50, simple, 32, 2000, 64000/64000, 65786.903883, 70713.309460, 753 MB 75/1, SELECT, 75, simple, 32, 2000, 64000/64000, 65105.491241, 70354.023646, 1127 MB \n\n75/2, SELECT, 75, simple, 32, 2000, 64000/64000, 64134.747104, 68658.772338, 1127 MB 75/3, SELECT, 75, simple, 32, 2000, 64000/64000, 63974.154442, 68779.264771, 1127 MB 100/1, SELECT, 100, simple, 32, 2000, 64000/64000, 62137.309862, 66605.264938, 1501 MB \n\n100/2, SELECT, 100, simple, 32, 2000, 64000/64000, 62003.667904, 66372.002630, 1501 MB 100/3, SELECT, 100, simple, 32, 2000, 64000/64000, 61511.372876, 65768.109866, 1501 MB 200/1, SELECT, 200, simple, 32, 2000, 64000/64000, 59470.544890, 63584.980830, 2996 MB \n\n200/2, SELECT, 200, simple, 32, 2000, 64000/64000, 60463.204833, 64584.359283, 2996 MB 200/3, SELECT, 200, simple, 32, 2000, 64000/64000, 59025.725071, 63048.783011, 2996 MB 500/1, SELECT, 500, simple, 32, 2000, 64000/64000, 56162.668148, 59781.963968, 7482 MB \n\n500/2, SELECT, 500, simple, 32, 2000, 64000/64000, 55649.899526, 59268.808123, 7482 MB 500/3, SELECT, 500, simple, 32, 2000, 64000/64000, 57373.632334, 60672.421067, 7482 MB I have also attached postgresql.conf file for both versions for refrence\nThanksDeepak", "msg_date": "Thu, 27 Jan 2011 11:26:43 -0800", "msg_from": "DM <[email protected]>", "msg_from_op": true, "msg_subject": "Re: pgbench - tps for Postgresql-9.0.2 is more than tps for\n\tPostgresql-8.4.1" }, { "msg_contents": "On Thu, Jan 27, 2011 at 2:26 PM, DM <[email protected]> wrote:\n> Is there anything that i can do to still improve 9.0.2 performance. 
the\n> performance (tps) that i got is only 10% is it ideal, or should i need to\n> get more?\n\nWell, the settings you specified don't sound like the values that we\nnormally recommend.\n\nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\nhttp://www.linux.com/learn/tutorials/394523:configuring-postgresql-for-pretty-good-performance\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 30 Jan 2011 14:25:23 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pgbench - tps for Postgresql-9.0.2 is more than tps for\n\tPostgresql-8.4.1" } ]
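For reference, the SELECT-only numbers quoted in this thread correspond to pgbench runs along the following lines. This is only a minimal sketch: the scale factor, client count, and per-client transaction count are taken from the results above, while the database name bench is an assumption rather than something stated in the thread.

    # initialize a fresh pgbench database at one of the tested scale factors (e.g. 100); bench is a placeholder name
    pgbench -i -s 100 bench

    # SELECT-only workload in simple query mode (the default): 32 clients, 2000 transactions per client
    pgbench -S -c 32 -t 2000 bench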
[ { "msg_contents": "Another advice is to look the presentation of Alexander Dymo, on the RailsConf2009 called: Advanced Performance Optimization of Rails Applications available on \nhttp://en.oreilly.com/rails2009/public/schedule/detail/8615\nThis talk are focused on Rails and PostgreSQL, based on the development of the Acunote ´s Project Management Platform\n\nhttp://blog.pluron.com\n\n\n----- Mensaje original -----\nDe: \"Andy Colson\" <[email protected]>\nPara: \"Michael Kohl\" <[email protected]>\nCC: [email protected]\nEnviados: Jueves, 27 de Enero 2011 12:20:18 GMT -05:00 Región oriental EE. UU./Canadá\nAsunto: Re: [PERFORM] High load,\n\nOn 1/27/2011 9:09 AM, Michael Kohl wrote:\n> On Thu, Jan 27, 2011 at 4:06 PM, Andy Colson<[email protected]> wrote:\n>> Have you run each of your queries through explain analyze lately?\n>\n> A code review including checking of queries is on our agenda.\n>\n>> You are vacuuming/autovacuuming, correct?\n>\n> Sure :-)\n>\n> Thank you,\n> Michael\n>\n\nOh, also, when the box is really busy, have you watched vmstat to see if \nyou start swapping?\n\n-Andy\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n\n-- \nIng. Marcos Luís Ortíz Valmaseda\nSystem Engineer -- Database Administrator\n\nCentro de Tecnologías de Gestión de Datos (DATEC)\nUniversidad de las Ciencias Informáticas\nhttp://postgresql.uci.cu\n\n", "msg_date": "Thu, 27 Jan 2011 15:21:16 -0500 (CST)", "msg_from": "\"Ing. Marcos Ortiz Valmaseda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: High load," } ]
[ { "msg_contents": "HI,\n\nI use PostgreSQL basically as a data warehouse to store all the genetic \ndata that our lab generates. The only person that accesses the database \nis myself and therefore I've had it housed on my workstation in my \noffice up till now. However, it's getting time to move it to bigger \nhardware. I currently have a server that is basically only storing \nbackup images of all our other workstations so I'm going to move my \ndatabase onto it. The server looks like this: Windows Server Enterprise \n2008 R2 64-bit, AMD 2350 quad-core x2, 32GB RAM. For my purposes the \nCPUs and RAM are fine. I currently have an Adaptec 52445+BBU controller \nthat has the OS (4 drive RAID5), FTP (4 drive RAID5) and two backup \narrays (8 drive each RAID0). The backup arrays are in a 16 drive \nexternal enclosure through an expander so I actually have 16 ports free \non the 52445 card. I plan to remove 3 of the drives from my backup \narrays to make room for 3 - 73GB 15k.5 drives (re-purposed from my \nworkstation). Two 16 drive enclosures with SAS2 expanders just arrived \nas well as 36 Seagate 15k.7 300GB drives (ST3300657SS). I also intend \non getting an Adaptec 6445 controller with the flash module when it \nbecomes available in about a month or two. I already have several \nAdaptec cards so I'd prefer to stick with them.\n\nHere's the way I was planning using the new hardware:\nxlog & wal: 3 - 73G 15k.5 RAID1+hot spare in enclosure A on 52445 \ncontroller\ndata: 22 - 300G 15k.7 RAID10 enclosure B&C on 6445 controller\nindexes: 8 - 300G 15k.7 RAID10 enclosure C on 6445 controller\n2 - 300G 15k.7 as hot spares enclosure C\n4 spare 15k.7 for on the shelf\n\nWith this configuration I figure I'll have ~3TB for my main data tables \nand 1TB for indexes. Right now my database is 500GB total. The 3:1 \nsplit reflects my current table structure and what I foresee coming down \nthe road in terms of new data.\n\nSo my questions are 1) am I'm crazy for doing this, 2) would you change \nanything and 3) is it acceptable to put the xlog & wal (and perhaps tmp \nfilespace) on a different controller than everything else? Please keep \nin mind I'm a geneticist who happens to know a little bit about \nbioinformatics and not the reverse. :-)\n\nThanks!\nBob\n\n-- \n\n*************************************************\nRobert Schnabel\nResearch Assistant Professor\nUniversity of Missouri-Columbia\nAnimal Sciences Unit, Rm.162\n920 East Campus Drive\nColumbia, MO 65211-5300\nPhone: 573-884-4106\nFax: 573-882-6827\nhttp://animalgenomics.missouri.edu\n\n\"...Socialist governments traditionally do make\na financial mess. They always run out of other\npeople's money.\"\n\nMargaret Thatcher, 5 February 1976\n*************************************************\n\n", "msg_date": "Thu, 27 Jan 2011 17:01:03 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On Thu, 27 Jan 2011, Robert Schnabel wrote:\n\n> HI,\n>\n> I use PostgreSQL basically as a data warehouse to store all the genetic data \n> that our lab generates. The only person that accesses the database is myself \n> and therefore I've had it housed on my workstation in my office up till now. \n> However, it's getting time to move it to bigger hardware. I currently have a \n> server that is basically only storing backup images of all our other \n> workstations so I'm going to move my database onto it. 
The server looks like \n> this: Windows Server Enterprise 2008 R2 64-bit, AMD 2350 quad-core x2, 32GB \n> RAM. For my purposes the CPUs and RAM are fine. I currently have an Adaptec \n> 52445+BBU controller that has the OS (4 drive RAID5), FTP (4 drive RAID5) and \n> two backup arrays (8 drive each RAID0). The backup arrays are in a 16 drive \n> external enclosure through an expander so I actually have 16 ports free on \n> the 52445 card. I plan to remove 3 of the drives from my backup arrays to \n> make room for 3 - 73GB 15k.5 drives (re-purposed from my workstation). Two \n> 16 drive enclosures with SAS2 expanders just arrived as well as 36 Seagate \n> 15k.7 300GB drives (ST3300657SS). I also intend on getting an Adaptec 6445 \n> controller with the flash module when it becomes available in about a month \n> or two. I already have several Adaptec cards so I'd prefer to stick with \n> them.\n>\n> Here's the way I was planning using the new hardware:\n> xlog & wal: 3 - 73G 15k.5 RAID1+hot spare in enclosure A on 52445 controller\n> data: 22 - 300G 15k.7 RAID10 enclosure B&C on 6445 controller\n> indexes: 8 - 300G 15k.7 RAID10 enclosure C on 6445 controller\n> 2 - 300G 15k.7 as hot spares enclosure C\n> 4 spare 15k.7 for on the shelf\n>\n> With this configuration I figure I'll have ~3TB for my main data tables and \n> 1TB for indexes. Right now my database is 500GB total. The 3:1 split \n> reflects my current table structure and what I foresee coming down the road \n> in terms of new data.\n>\n> So my questions are 1) am I'm crazy for doing this, 2) would you change \n> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp \n> filespace) on a different controller than everything else? Please keep in \n> mind I'm a geneticist who happens to know a little bit about bioinformatics \n> and not the reverse. :-)\n\na number of questions spring to mind\n\nhow much of the time are you expecting to spend inserting data into this \nsystem vs querying data from the system?\n\nis data arriving continuously, or is it a matter of receiving a bunch of \ndata, inserting it, then querying it?\n\nwhich do you need to optimize for, insert speed or query speed?\n\ndo you expect your queries to be searching for a subset of the data \nscattered randomly throughlut the input data, or do you expect it to be \n'grab this (relativly) sequential chunk of input data and manipulate it to \ngenerate a report' type of thing\n\nwhat is your connectvity to the raid enclosures? (does \nputting 22 drives on one cable mean that you will be limited due to the \nbandwidth of this cable rather than the performance of the drives)\n\ncan you do other forms of raid on these drives or only raid 10?\n\nhow critical is the data in this database? if it were to die would it just \nbe a matter of recreating it and reloading the data? or would you loose \nirreplaceable data?\n\nDavid Lang\n", "msg_date": "Thu, 27 Jan 2011 15:19:32 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On January 27, 2011, Robert Schnabel <[email protected]> wrote:\n> So my questions are 1) am I'm crazy for doing this, 2) would you change\n> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp\n> filespace) on a different controller than everything else? Please keep\n> in mind I'm a geneticist who happens to know a little bit about\n> bioinformatics and not the reverse. 
:-)\n> \n\nPutting the WAL on a second controller does help, if you're write-heavy.\n\nI tried separating indexes and data once on one server and didn't really \nnotice that it helped much. Managing the space was problematic. I would \nsuggest putting those together on a single RAID-10 of all the 300GB drives \n(minus a spare). It will probably outperform separate arrays most of the \ntime, and be much easier to manage.\n\n-- \nA hybrid Escalade is missing the point much in the same way that having a \ndiet soda with your extra large pepperoni pizza is missing the point.\n\n\nOn January 27, 2011, Robert Schnabel <[email protected]> wrote:\n> So my questions are 1) am I'm crazy for doing this, 2) would you change\n> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp\n> filespace) on a different controller than everything else? Please keep\n> in mind I'm a geneticist who happens to know a little bit about\n> bioinformatics and not the reverse. :-)\n> \n\nPutting the WAL on a second controller does help, if you're write-heavy.\n\nI tried separating indexes and data once on one server and didn't really notice that it helped much. Managing the space was problematic. I would suggest putting those together on a single RAID-10 of all the 300GB drives (minus a spare). It will probably outperform separate arrays most of the time, and be much easier to manage.\n\n-- \nA hybrid Escalade is missing the point much in the same way that having a diet soda with your extra large pepperoni pizza is missing the point.", "msg_date": "Thu, 27 Jan 2011 16:11:18 -0800", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\nOn 1/27/2011 5:19 PM, [email protected] wrote:\n> On Thu, 27 Jan 2011, Robert Schnabel wrote:\n>\n>> HI,\n>>\n>> I use PostgreSQL basically as a data warehouse to store all the genetic data\n>> that our lab generates. The only person that accesses the database is myself\n>> and therefore I've had it housed on my workstation in my office up till now.\n>> However, it's getting time to move it to bigger hardware. I currently have a\n>> server that is basically only storing backup images of all our other\n>> workstations so I'm going to move my database onto it. The server looks like\n>> this: Windows Server Enterprise 2008 R2 64-bit, AMD 2350 quad-core x2, 32GB\n>> RAM. For my purposes the CPUs and RAM are fine. I currently have an Adaptec\n>> 52445+BBU controller that has the OS (4 drive RAID5), FTP (4 drive RAID5) and\n>> two backup arrays (8 drive each RAID0). The backup arrays are in a 16 drive\n>> external enclosure through an expander so I actually have 16 ports free on\n>> the 52445 card. I plan to remove 3 of the drives from my backup arrays to\n>> make room for 3 - 73GB 15k.5 drives (re-purposed from my workstation). Two\n>> 16 drive enclosures with SAS2 expanders just arrived as well as 36 Seagate\n>> 15k.7 300GB drives (ST3300657SS). I also intend on getting an Adaptec 6445\n>> controller with the flash module when it becomes available in about a month\n>> or two. 
I already have several Adaptec cards so I'd prefer to stick with\n>> them.\n>>\n>> Here's the way I was planning using the new hardware:\n>> xlog& wal: 3 - 73G 15k.5 RAID1+hot spare in enclosure A on 52445 controller\n>> data: 22 - 300G 15k.7 RAID10 enclosure B&C on 6445 controller\n>> indexes: 8 - 300G 15k.7 RAID10 enclosure C on 6445 controller\n>> 2 - 300G 15k.7 as hot spares enclosure C\n>> 4 spare 15k.7 for on the shelf\n>>\n>> With this configuration I figure I'll have ~3TB for my main data tables and\n>> 1TB for indexes. Right now my database is 500GB total. The 3:1 split\n>> reflects my current table structure and what I foresee coming down the road\n>> in terms of new data.\n>>\n>> So my questions are 1) am I'm crazy for doing this, 2) would you change\n>> anything and 3) is it acceptable to put the xlog& wal (and perhaps tmp\n>> filespace) on a different controller than everything else? Please keep in\n>> mind I'm a geneticist who happens to know a little bit about bioinformatics\n>> and not the reverse. :-)\n> a number of questions spring to mind\n>\n> how much of the time are you expecting to spend inserting data into this\n> system vs querying data from the system?\n>\n> is data arriving continuously, or is it a matter of receiving a bunch of\n> data, inserting it, then querying it?\n>\n> which do you need to optimize for, insert speed or query speed?\n>\nBulk loads of GB of data via COPY from csv files once every couple \nweeks. I basically only have a couple different table \"types\" based on \nthe data going into them. Each type is set up as inherited tables so \nthere is a new child table for each \"sample\" that is added. Once the \nbulk data is inserted into the tables I generally do some updates on \ncolumns to set values which characterize the data. These columns then \nget indexed. Basically once the initial manipulation is done the table \nis then static and what I'm looking for is query speed.\n\n> do you expect your queries to be searching for a subset of the data\n> scattered randomly throughlut the input data, or do you expect it to be\n> 'grab this (relativly) sequential chunk of input data and manipulate it to\n> generate a report' type of thing\nGenerally it is grab a big sequential chunk of data and either dump it \nto a csv or insert into another table. I use external scripts to format \ndata. My two big table structures look like this:\n\nCREATE TABLE genotypes\n(\n snp_number integer NOT NULL,\n sample_id integer NOT NULL,\n genotype smallint NOT NULL\n)\n\nThere are ~58k unique snp_number. Other tables will have upwards of \n600-700k snp_number. The child tables have a constraint based on \nsample_id such as:\nCONSTRAINT check100 CHECK (sample_id > 100000000 AND sample_id < 101000000)\n\nThe data is sorted by snp_number, sample_id. So if I want the data for \na given sample_id it would be a block of ~58k rows. The size of the \ntable depends on how many sample_id's there are. My largest has ~30k \nsample_id by 58k snp_number per sample. The other big table (with \nchildren) is \"mutations\" and is set up similarly so that I can access \nindividual tables (samples) based on constraints. Each of these \nchildren have between 5-60M records.\n\n> what is your connectvity to the raid enclosures? 
(does\n> putting 22 drives on one cable mean that you will be limited due to the\n> bandwidth of this cable rather than the performance of the drives)\n>\n> can you do other forms of raid on these drives or only raid 10?\nThis is all direct attach storage via SAS2 so I'm guessing it's probably \nlimited to the single port link between the controller and the \nexpander. Again, geneticist here not computer scientist. ;-) The \nenclosures have Areca ARC-8026-16 expanders. I can basically do \nwhatever RAID level I want.\n\n> how critical is the data in this database? if it were to die would it just\n> be a matter of recreating it and reloading the data? or would you loose\n> irreplaceable data?\n>\n> David Lang\nAll of the data could be reloaded. Basically, once I get the data into \nthe database and I'm done manipulating it I create a backup copy/dump \nwhich then gets stored at a couple different locations. Like I said, I \nreally only do big loads/updates periodically so if it tanked all I'd be \nout is whatever I did since the last backup/dump and some time.\n\nMy goal is to 1) have a fairly robust system so that I don't have to \nspend my time rebuilding things and 2) be able to query the data \nquickly. Most of what I do are ad hoc queries. I have an idea... \"how \nmany X have Y in this set of Z samples\" and write the query to get the \nanswer. I can wait a couple minutes to get an answer but waiting an \nhour is becoming tiresome.\n\nBob\n\n\n\n\n", "msg_date": "Thu, 27 Jan 2011 18:27:13 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "sorry for not replying properly to your response, I managed to delete the \nmail.\n\nas I understand your data access pattern it's the following:\n\nfor the main table space:\n\nbulk loads every couple of weeks. if the data is lost you can just reload \nit.\n\nsearches tend to be extracting large sequential chunks of data, either to \nexternal files or into different tables spaces.\n\nfor this table space, you are basically only inserting every couple of \nweeks, and it sounds as if you do not really care how long it takes to \nload the data.\n\n\nfirst the disclaimer, I'm not a postgres expert, but I do have good \nexperiance with large amounts of data on linux systems (and especially \nrunning into the limitations when doing it on the cheap ;-)\n\n\nwith this data pattern your WAL is meaningless (as it's only relavent for \nisertes), and you may as well use raid6 as raid10 (both allow you to \nutalize all drives for reads, but raid6 gives you 2 drives worth of \nreducnancy while the wrong two drives on raid10 could kill the entire \narray). You may even want to disable fsync on imports. It will save you a \nlot of time, and if the system crashes during the load you can just \nreinitialize and reload the data.\n\nhowever, since you are going to be large sequential data transfers, you \nwant to be utalizing multiple SAS links, preferrably as evenly as \npossible, so rather than putting all your data drives on one port, you may \nwant to spread them between ports so that your aggragate bandwidth to the \ndrives is higher (with this many high speed drives, this is a significant \nlimitation)\n\n\nthe usual reason for keeping the index drives separate is to avoid having \nwrites interact with index reads. 
Since you are not going to be doing both \nat the same time, I don't know if it helps to separate your indexes.\n\n\nnow, if you pull the data from this main table into a smaller table for \nanalysis, you may want to do more interesting things with the drives that \nyou use for this smaller table as you are going to be loading data into \nthem more frequently.\n\nDavid Lang\n\n\nOn Thu, 27 Jan 2011, [email protected] wrote:\n\n> Date: Thu, 27 Jan 2011 15:19:32 -0800 (PST)\n> From: [email protected]\n> To: Robert Schnabel <[email protected]>\n> Cc: pgsql-performance <[email protected]>\n> Subject: Re: [PERFORM] How to best use 32 15k.7 300GB drives?\n> \n> On Thu, 27 Jan 2011, Robert Schnabel wrote:\n>\n>> HI,\n>> \n>> I use PostgreSQL basically as a data warehouse to store all the genetic \n>> data that our lab generates. The only person that accesses the database is \n>> myself and therefore I've had it housed on my workstation in my office up \n>> till now. However, it's getting time to move it to bigger hardware. I \n>> currently have a server that is basically only storing backup images of all \n>> our other workstations so I'm going to move my database onto it. The \n>> server looks like this: Windows Server Enterprise 2008 R2 64-bit, AMD 2350 \n>> quad-core x2, 32GB RAM. For my purposes the CPUs and RAM are fine. I \n>> currently have an Adaptec 52445+BBU controller that has the OS (4 drive \n>> RAID5), FTP (4 drive RAID5) and two backup arrays (8 drive each RAID0). \n>> The backup arrays are in a 16 drive external enclosure through an expander \n>> so I actually have 16 ports free on the 52445 card. I plan to remove 3 of \n>> the drives from my backup arrays to make room for 3 - 73GB 15k.5 drives \n>> (re-purposed from my workstation). Two 16 drive enclosures with SAS2 \n>> expanders just arrived as well as 36 Seagate 15k.7 300GB drives \n>> (ST3300657SS). I also intend on getting an Adaptec 6445 controller with \n>> the flash module when it becomes available in about a month or two. I \n>> already have several Adaptec cards so I'd prefer to stick with them.\n>> \n>> Here's the way I was planning using the new hardware:\n>> xlog & wal: 3 - 73G 15k.5 RAID1+hot spare in enclosure A on 52445 \n>> controller\n>> data: 22 - 300G 15k.7 RAID10 enclosure B&C on 6445 controller\n>> indexes: 8 - 300G 15k.7 RAID10 enclosure C on 6445 controller\n>> 2 - 300G 15k.7 as hot spares enclosure C\n>> 4 spare 15k.7 for on the shelf\n>> \n>> With this configuration I figure I'll have ~3TB for my main data tables and \n>> 1TB for indexes. Right now my database is 500GB total. The 3:1 split \n>> reflects my current table structure and what I foresee coming down the road \n>> in terms of new data.\n>> \n>> So my questions are 1) am I'm crazy for doing this, 2) would you change \n>> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp \n>> filespace) on a different controller than everything else? Please keep in \n>> mind I'm a geneticist who happens to know a little bit about bioinformatics \n>> and not the reverse. 
:-)\n>\n> a number of questions spring to mind\n>\n> how much of the time are you expecting to spend inserting data into this \n> system vs querying data from the system?\n>\n> is data arriving continuously, or is it a matter of receiving a bunch of \n> data, inserting it, then querying it?\n>\n> which do you need to optimize for, insert speed or query speed?\n>\n> do you expect your queries to be searching for a subset of the data scattered \n> randomly throughlut the input data, or do you expect it to be 'grab this \n> (relativly) sequential chunk of input data and manipulate it to generate a \n> report' type of thing\n>\n> what is your connectvity to the raid enclosures? (does putting 22 drives on \n> one cable mean that you will be limited due to the bandwidth of this cable \n> rather than the performance of the drives)\n>\n> can you do other forms of raid on these drives or only raid 10?\n>\n> how critical is the data in this database? if it were to die would it just be \n> a matter of recreating it and reloading the data? or would you loose \n> irreplaceable data?\n>\n> David Lang\n>\n", "msg_date": "Thu, 27 Jan 2011 16:53:00 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "Robert,\n\n* Robert Schnabel ([email protected]) wrote:\n> Once the bulk data is inserted into the tables I generally\n> do some updates on columns to set values which characterize the\n> data. \n\nPlease tell me you're not running actual full-table UPDATE statements...\nYou would be *much* better off either:\na) munging the data on the way in (if possible/reasonable)\nb) loading the data into temp tables first, and then using INSERT\n statements to move the data into the 'final' tables WITH the new\n columns/info you want\nc) considering if you can normalize the data into multiple tables and/or\n to cut down the columns to only what you need as you go through the\n above, too\n\nA full-table UPDATE means you're basically making the table twice as big\nas it needs to be, and it'll never get smaller..\n\n> These columns then get indexed. Basically once the initial\n> manipulation is done the table is then static and what I'm looking\n> for is query speed.\n\nSadly, this is the same type of DW needs that I've got (though with\ntelecomm data and phone calls, not genetic stuffs ;), and PG ends up\nbeing limited by the fact that it can only use one core/thread to go\nthrough the data with.\n\nYou might consider investing some time trying to figure out how to\nparallelize your queries. My approach to this has been to partition the\ndata (probably something you're doing already) into multiple tables and\nthen have shell/perl scripts which will run a given query against all of\nthe tables, dumping the results of that aggregation/analysis into other\ntables, and then having a final 'merge' query.\n\n> The data is sorted by snp_number, sample_id. So if I want the data\n> for a given sample_id it would be a block of ~58k rows. The size of\n> the table depends on how many sample_id's there are. My largest has\n> ~30k sample_id by 58k snp_number per sample. The other big table\n> (with children) is \"mutations\" and is set up similarly so that I can\n> access individual tables (samples) based on constraints. 
Each of\n> these children have between 5-60M records.\n\nUnderstand that indexes are only going to be used/useful, typically, if\nthe amount of records being returned is small relative to the size of\nthe table (eg: 5%).\n\n> This is all direct attach storage via SAS2 so I'm guessing it's\n> probably limited to the single port link between the controller and\n> the expander. Again, geneticist here not computer scientist. ;-)\n\nThat link certainly isn't going to help things.. You might consider how\nor if you can improve that.\n\n> All of the data could be reloaded. Basically, once I get the data\n> into the database and I'm done manipulating it I create a backup\n> copy/dump which then gets stored at a couple different locations.\n\nYou might consider turning fsync off while you're doing these massive\ndata loads.. and make sure that you issue your 'CREATE TABLE' and your\n'COPY' statements in the same transaction, and again, I suggest loading\ninto temporary (CREATE TEMPORARY TABLE) tables first, then doing the\nCREATE TABLE/INSERT statement for the 'real' table. Make sure that you\ncreate *both* your constraints *and* your indexes *after* the table is\npopulated.\n\nIf you turn fsync off, make sure you turn it back on. :)\n\n> My goal is to 1) have a fairly robust system so that I don't have to\n> spend my time rebuilding things and 2) be able to query the data\n> quickly. Most of what I do are ad hoc queries. I have an idea...\n> \"how many X have Y in this set of Z samples\" and write the query to\n> get the answer. I can wait a couple minutes to get an answer but\n> waiting an hour is becoming tiresome.\n\nHave you done any analysis to see what the bottleneck actually is? When\nyou run top, is your PG process constantly in 'D' state, or is it in 'R'\nstate, or what? Might help figure some of that out. Note that\nparallelizing the query will help regardless of if it's disk bound or\nCPU bound, when you're running on the kind of hardware you're talking\nabout (lots of spindles, multiple CPUs, etc).\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Jan 2011 08:14:10 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\n> Putting the WAL on a second controller does help, if you're write-heavy.\n>\n> I tried separating indexes and data once on one server and didn't \n> really notice that it helped much. Managing the space was problematic. \n> I would suggest putting those together on a single RAID-10 of all the \n> 300GB drives (minus a spare). It will probably outperform separate \n> arrays most of the time, and be much easier to manage.\n>\n> -- \n>\n>\nI like to use RAID 1, and let LVM do the striping. That way I can add \nmore drives later too.\n", "msg_date": "Fri, 28 Jan 2011 06:24:11 -0700", "msg_from": "Grant Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\n\n\n\n\n\n On 1/28/2011 7:14 AM, Stephen Frost wrote:\n \nRobert,\n\n* Robert Schnabel ([email protected]) wrote:\n\n\nOnce the bulk data is inserted into the tables I generally\ndo some updates on columns to set values which characterize the\ndata. 
\n\n\n\nPlease tell me you're not running actual full-table UPDATE statements...\nYou would be *much* better off either:\na) munging the data on the way in (if possible/reasonable)\nb) loading the data into temp tables first, and then using INSERT\n statements to move the data into the 'final' tables WITH the new\n columns/info you want\nc) considering if you can normalize the data into multiple tables and/or\n to cut down the columns to only what you need as you go through the\n above, too\n\nA full-table UPDATE means you're basically making the table twice as big\nas it needs to be, and it'll never get smaller..\n\n\n Depends on what you mean by that.  The tables that I'm concerned\n with look something like bigint x2, char var x13, int x24, real x8,\n smallint x4 by about 65M rows, each.  I only do the updates on one\n table at a time.  The real columns are actually null in the input\n csv file.  I run an update which basically uses some of the integer\n columns and calculates frequencies which go into the real columns. \n Ditto with some of the other columns.  I don't do this before I\n upload the data because 1) it's easier this way and 2) I can't\n because some of the updates involve joins to other tables to grab\n info that I can't do outside the database.  So yes, once the upload\n is done I run queries that update every row for certain columns, not\n every column.  After I'm done with a table I run a VACUUM ANALYZE. \n I'm really not worried about what my table looks like on disk.  I\n actually take other steps also to avoid what you're talking about.\n\n\n\n\nThese columns then get indexed. Basically once the initial\nmanipulation is done the table is then static and what I'm looking\nfor is query speed.\n\n\n\nSadly, this is the same type of DW needs that I've got (though with\ntelecomm data and phone calls, not genetic stuffs ;), and PG ends up\nbeing limited by the fact that it can only use one core/thread to go\nthrough the data with.\n\nYou might consider investing some time trying to figure out how to\nparallelize your queries. My approach to this has been to partition the\ndata (probably something you're doing already) into multiple tables and\nthen have shell/perl scripts which will run a given query against all of\nthe tables, dumping the results of that aggregation/analysis into other\ntables, and then having a final 'merge' query.\n\n\n Thanks for the advise but parallelizing/automating doesn't really do\n anything for me.  The data is already partitioned.  Think of it this\n way, you just got 65M new records with about 30 data points per\n record on an individual sample.  You put it in a new table of it's\n own and now you want to characterize those 65M data points.  The\n first update flags about 60M of the rows as uninteresting so you\n move them to their own *uninteresting* table and basically never\n really touch them again (but you cant get rid of them).  Now you're\n working with 5M that you're going to characterize into about 20\n categories based on what is in those 30 columns of data.  Do all the\n querying/updating then index and you're done.  Too long to describe\n but I cannot automate this.  I only update one partition at a time\n and only about every couple weeks or so.\n\n\n\n\n\nThe data is sorted by snp_number, sample_id. So if I want the data\nfor a given sample_id it would be a block of ~58k rows. The size of\nthe table depends on how many sample_id's there are. My largest has\n~30k sample_id by 58k snp_number per sample. 
The other big table\n(with children) is \"mutations\" and is set up similarly so that I can\naccess individual tables (samples) based on constraints. Each of\nthese children have between 5-60M records.\n\n\n\nUnderstand that indexes are only going to be used/useful, typically, if\nthe amount of records being returned is small relative to the size of\nthe table (eg: 5%).\n\n\n Yep, I understand that.  Even though they occupy a lot of space, I\n keep them around because there are times when I need them.\n\n\n\n\n\n\nThis is all direct attach storage via SAS2 so I'm guessing it's\nprobably limited to the single port link between the controller and\nthe expander. Again, geneticist here not computer scientist. ;-)\n\n\n\nThat link certainly isn't going to help things.. You might consider how\nor if you can improve that.\n\n\n Suggestions???  It was previously suggested to split the drives on\n each array across the two controller ports rather than have all the\n data drives on one port which makes sense.  Maybe I'm getting my\n terminology wrong here but I'm talking about a single SFF-8088 link\n to each 16 drive enclosure.  What about two controllers, one for\n each enclosure?  Don't know if I have enough empty slots though.\n\n\n\n\nAll of the data could be reloaded. Basically, once I get the data\ninto the database and I'm done manipulating it I create a backup\ncopy/dump which then gets stored at a couple different locations.\n\n\n\nYou might consider turning fsync off while you're doing these massive\ndata loads.. and make sure that you issue your 'CREATE TABLE' and your\n'COPY' statements in the same transaction, and again, I suggest loading\ninto temporary (CREATE TEMPORARY TABLE) tables first, then doing the\nCREATE TABLE/INSERT statement for the 'real' table. Make sure that you\ncreate *both* your constraints *and* your indexes *after* the table is\npopulated.\n\nIf you turn fsync off, make sure you turn it back on. :)\n\n\n\n I haven't messed with fsync but maybe I'll try.  In general, I\n create my indexes and constraints after I'm done doing all the\n updating I need to do.  I made the mistake *once* of copying\n millions of rows into a table that already had indexes.\n\n\n\n\nMy goal is to 1) have a fairly robust system so that I don't have to\nspend my time rebuilding things and 2) be able to query the data\nquickly. Most of what I do are ad hoc queries. I have an idea...\n\"how many X have Y in this set of Z samples\" and write the query to\nget the answer. I can wait a couple minutes to get an answer but\nwaiting an hour is becoming tiresome.\n\n\n\nHave you done any analysis to see what the bottleneck actually is? When\nyou run top, is your PG process constantly in 'D' state, or is it in 'R'\nstate, or what? Might help figure some of that out. Note that\nparallelizing the query will help regardless of if it's disk bound or\nCPU bound, when you're running on the kind of hardware you're talking\nabout (lots of spindles, multiple CPUs, etc).\n\n\tThanks,\n\n\t\tStephen\n\n\n It got lost from the original post but my database (9.0.0) is\n currently on my Windows XP 64-bit workstation in my office on a 16\n drive Seagate 15k.5 RAID5, no comments needed, I know, I'm moving it\n :-).  I'm moving it to my server which is Windows Ent Server 2008 R2\n 64-bit 8 AMD cores & 32G ram and these new drives/controller. So\n no top or lvm although I do keep an eye on things with Process\n Explorer.  Also, I don't have any single query that is a problem.  
I\n have my canned queries which I run manually to\n update/manipulate/move data around every couple weeks when I get a\n new chunk of data.  Other than that my queries are all ad hoc.  I'm\n just trying to get opinions on the best way to set up these\n drives/controllers/enclosures for basically large sequential reads\n that quite often use indexes.\n\n So far I'd summarize the consensus as:\n 1) putting WAL on a separate array is worthless since I do very\n little writes.  What about if I put my temp tablespace on the same\n array with WAL & xlog?  I've noticed a lot of the ad hoc queries\n I run create tmp files, sometimes tens of GB.  I appreciate the fact\n that managing multiple tablespaces is not as easy as managing one\n but if it helps...\n\n 2) Indexes on a separate array may not be all that useful since I'm\n not doing simultaneous reads/writes.\n\n 3) Since I can very easily recreate the database in case of\n crash/corruption RAID10 may not be the best option.  However, if I\n do go with RAID10 split the drives between the two enclosures (this\n assumes data & index arrays).  I've thought about RAID0 but\n quite frankly I really don't like having to rebuild things.  At some\n point my time becomes valuable.  RAID6 was suggested but rebuilding\n a 9TB RAID6 seems scary slow to me.\n\n I appreciate the comments thus far.\n Bob\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 28 Jan 2011 10:39:03 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On Fri, Jan 28, 2011 at 9:39 AM, Robert Schnabel <[email protected]> wrote:\n> I can't do outside the database.  So yes, once the upload is done I run\n> queries that update every row for certain columns, not every column.  After\n> I'm done with a table I run a VACUUM ANALYZE.  I'm really not worried about\n> what my table looks like on disk.  I actually take other steps also to avoid\n> what you're talking about.\n\nIt will still get bloated. If you update one column in one row in pg,\nyou now have two copies of that row in the database. If you date 1\ncolumn in 1M rows, you now have 2M rows in the database (1M \"dead\"\nrows, 1M \"live\" rows). vacuum analyze will not get rid of them, but\nwill free them up to be used in future updates / inserts. Vacuum full\nor cluster will free up the space, but will lock the table while it\ndoes so.\n\nThere's nothing wrong with whole table updates as part of an import\nprocess, you just have to know to \"clean up\" after you're done, and\nregular vacuum can't fix this issue, only vacuum full or reindex or\ncluster.\n", "msg_date": "Fri, 28 Jan 2011 10:00:18 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\nOn 1/28/2011 11:00 AM, Scott Marlowe wrote:\n> On Fri, Jan 28, 2011 at 9:39 AM, Robert Schnabel<[email protected]> wrote:\n>> I can't do outside the database. So yes, once the upload is done I run\n>> queries that update every row for certain columns, not every column. After\n>> I'm done with a table I run a VACUUM ANALYZE. I'm really not worried about\n>> what my table looks like on disk. I actually take other steps also to avoid\n>> what you're talking about.\n> It will still get bloated. If you update one column in one row in pg,\n> you now have two copies of that row in the database. 
If you date 1\n> column in 1M rows, you now have 2M rows in the database (1M \"dead\"\n> rows, 1M \"live\" rows). vacuum analyze will not get rid of them, but\n> will free them up to be used in future updates / inserts. Vacuum full\n> or cluster will free up the space, but will lock the table while it\n> does so.\n>\n> There's nothing wrong with whole table updates as part of an import\n> process, you just have to know to \"clean up\" after you're done, and\n> regular vacuum can't fix this issue, only vacuum full or reindex or\n> cluster.\n\nThose are exactly what I was referring to with my \"other steps\". I just \ndon't always do them as soon as I'm done updating because sometimes I \nwant to query the table right away to find out something. Yep, I found \nout the hard way that regular VACUUM didn't help.\n\n\n", "msg_date": "Fri, 28 Jan 2011 11:09:53 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "Robert,\n\n* Robert Schnabel ([email protected]) wrote:\n> Depends on what you mean by that.  The tables that I'm concerned with look\n> something like bigint x2, char var x13, int x24, real x8, smallint x4 by\n> about 65M rows, each.  I only do the updates on one table at a time.  The\n> real columns are actually null in the input csv file.  I run an update\n> which basically uses some of the integer columns and calculates\n> frequencies which go into the real columns.  \n\nErm, I'm pretty sure you're still increasing the size of the resulting\ntables by quite a bit by doing this process- which will slow down later\nqueries.\n\n> Ditto with some of the other\n> columns.  I don't do this before I upload the data because 1) it's easier\n> this way and 2) I can't because some of the updates involve joins to other\n> tables to grab info that I can't do outside the database.  \n\nThat's fine- just first load the data into temporary tables and then do\nINSERT INTO new_table SELECT <your query>;\n\ninstead.\n\n> So yes, once\n> the upload is done I run queries that update every row for certain\n> columns, not every column.  After I'm done with a table I run a VACUUM\n> ANALYZE.  I'm really not worried about what my table looks like on disk. \n\nI thought you wanted it fast..? If not, I'm not sure why you're\nbothering to post to this list. What it looks like on disk certainly\nimpacts how fast it is...\n\n> I actually take other steps also to avoid what you're talking about.\n\nIf you really don't feel like changing your process, you could just run\n'CLUSTER' on the table, on whatever index you use most frequently, and\nPG will rewrite the entire table for you, dropping all the dead rows,\netc. You should then run VACUUM FREEZE on it.\n\n> These columns then get indexed. Basically once the initial\n> manipulation is done the table is then static and what I'm looking\n> for is query speed.\n\nYes, I gathered that, making the table smaller on disk will improve\nquery speed.\n\n> Thanks for the advise but parallelizing/automating doesn't really do\n> anything for me.  The data is already partitioned.  Think of it this way,\n> you just got 65M new records with about 30 data points per record on an\n> individual sample.  You put it in a new table of it's own and now you want\n> to characterize those 65M data points.  
The first update flags about 60M\n> of the rows as uninteresting so you move them to their own *uninteresting*\n> table and basically never really touch them again (but you cant get rid of\n> them).  Now you're working with 5M that you're going to characterize into\n> about 20 categories based on what is in those 30 columns of data.  Do all\n> the querying/updating then index and you're done.  Too long to describe\n> but I cannot automate this.  I only update one partition at a time and\n> only about every couple weeks or so.\n\nI was referring to parallelizing queries *after* the data is all loaded,\netc. I wasn't talking about the queries that you use during the load.\n\nI presume that after the load you run some queries. You can probably\nparallelize those queries (most DW queries can be, be ime...).\n\n> That link certainly isn't going to help things.. You might consider how\n> or if you can improve that.\n> \n> Suggestions???  It was previously suggested to split the drives on each\n> array across the two controller ports rather than have all the data drives\n> on one port which makes sense.  Maybe I'm getting my terminology wrong\n> here but I'm talking about a single SFF-8088 link to each 16 drive\n> enclosure.  What about two controllers, one for each enclosure?  Don't\n> know if I have enough empty slots though.\n\nI don't know that you'd need a second controller (though it probably\nwouldn't hurt if you could). If there's only one way to attach the\nenclosure, then so be it. The issue is if the enclosures end up\nmulti-plexing the individual drives into fewer channels than there are\nactual drives, hence creating a bottle-neck. You would need different\nenclosures to deal with that, if that's the case.\n\n> I haven't messed with fsync but maybe I'll try.  In general, I create my\n> indexes and constraints after I'm done doing all the updating I need to\n> do.  I made the mistake *once* of copying millions of rows into a table\n> that already had indexes.\n\nYeah, I bet that took a while. As I said above, if you don't want to\nchange your process (which, tbh, I think would be faster if you were\ndoing INSERTs into a new table than full-table UPDATEs...), then you\nshould do a CLUSTER after you've created whatever is the most popular\nINDEX, and then create your other indexes after that.\n\n> It got lost from the original post but my database (9.0.0) is currently on\n> my Windows XP 64-bit workstation in my office on a 16 drive Seagate 15k.5\n> RAID5, no comments needed, I know, I'm moving it :-).  I'm moving it to my\n> server which is Windows Ent Server 2008 R2 64-bit 8 AMD cores & 32G ram\n> and these new drives/controller.\n\nUghh... No chance to get a Unix-based system (Linux, BSD, whatever) on\nthere instead? I really don't think Windows Server is going to help\nyour situation one bit.. :(\n\n> 1) putting WAL on a separate array is worthless since I do very little\n> writes.  What about if I put my temp tablespace on the same array with WAL\n> & xlog?  I've noticed a lot of the ad hoc queries I run create tmp files,\n> sometimes tens of GB.  I appreciate the fact that managing multiple\n> tablespaces is not as easy as managing one but if it helps...\n\nThat's not a bad idea but I'm not sure it'd make as much difference as\nyou think it would.. What would be better would be to *avoid*, at all\ncost, letting it spill out to on-disk for queries. 
The way to do that\nis to make sure your work_mem is as high as PG will actually use (1GB),\nand then to *parallelize* those queries using multiple PG connections,\nso that each one will be able to use up that much memory.\n\nFor example, say you need to summarize the values for each of your\nstrands (or whatever) across 5 different \"loads\". Your query might\nlook like:\n\nselect load,strand,sum(value) from parent_table group by load,strand;\n\nIdeally, PG will use a hash table, key'd on load+strand, to store the\nresulting summations in. If it doesn't think the hash table will fit in\nwork_mem, it's going to SORT ALL OF YOUR DATA ON DISK first instead, and\nthen WALK THROUGH IT, sum'ing each section, then spitting out the result\nto the client, and moving on. This is *not* a fast process. If doing\nthe same query on an individual child will use a hash table, then it'd\nbe hugely faster to query each load first, storing the results into\ntemporary tables. What would be even *faster* would be the run all 5 of\nthose queries against the child tables in parallel (given that you have\nover 5 CPUs and enough memory that you don't start swapping).\n\nIf it's still too big on the per-child basis, you might be able to use\nconditionals to do the first 100 strands, then the next hundred, etc.\n\n> I appreciate the comments thus far.\n\nLet's hope you'll always appreciate them. :)\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Jan 2011 12:14:11 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "* Scott Marlowe ([email protected]) wrote:\n> There's nothing wrong with whole table updates as part of an import\n> process, you just have to know to \"clean up\" after you're done, and\n> regular vacuum can't fix this issue, only vacuum full or reindex or\n> cluster.\n\nJust to share my experiences- I've found that creating a new table and\ninserting into it is actually faster than doing full-table updates, if\nthat's an option for you.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Fri, 28 Jan 2011 12:28:10 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On 1/27/11 4:11 PM, \"Alan Hodgson\" <[email protected]<mailto:[email protected]>> wrote:\n\n\nOn January 27, 2011, Robert Schnabel <[email protected]<mailto:[email protected]>> wrote:\n\n> So my questions are 1) am I'm crazy for doing this, 2) would you change\n\n> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp\n\n> filespace) on a different controller than everything else? Please keep\n\n> in mind I'm a geneticist who happens to know a little bit about\n\n> bioinformatics and not the reverse. :-)\n\n>\n\nPutting the WAL on a second controller does help, if you're write-heavy.\n\nI tried separating indexes and data once on one server and didn't really notice that it helped much. Managing the space was problematic. I would suggest putting those together on a single RAID-10 of all the 300GB drives (minus a spare). It will probably outperform separate arrays most of the time, and be much easier to manage.\n\nIf you go this route, I suggest two equally sized RAID 10's on different controllers fir index + data, with software raid-0 on top of that. RAID 10 will max out a controller after 6 to 10 drives, usually. 
Using the OS RAID 0 to aggregate the throughput of two controllers works great.\n\nWAL only has to be a little bit faster than your network in most cases. I've never seen it be a bottleneck on large bulk loads if it is on its own controller with 120MB/sec write throughput. I suppose a bulk load from COPY might stress it a bit more, but CPU ends up the bottleneck in postgres once you have I/O hardware this capable.\n\n\n\n--\n\nA hybrid Escalade is missing the point much in the same way that having a diet soda with your extra large pepperoni pizza is missing the point.\n\nOn 1/27/11 4:11 PM, \"Alan Hodgson\" <[email protected]> wrote:On January 27, 2011, Robert Schnabel <[email protected]> wrote:> So my questions are 1) am I'm crazy for doing this, 2) would you change> anything and 3) is it acceptable to put the xlog & wal (and perhaps tmp> filespace) on a different controller than everything else? Please keep> in mind I'm a geneticist who happens to know a little bit about> bioinformatics and not the reverse. :-)> Putting the WAL on a second controller does help, if you're write-heavy.I tried separating indexes and data once on one server and didn't really notice that it helped much. Managing the space was problematic. I would suggest putting those together on a single RAID-10 of all the 300GB drives (minus a spare). It will probably outperform separate arrays most of the time, and be much easier to manage.If you go this route, I suggest two equally sized RAID 10's on different controllers fir index + data, with software raid-0 on top of that.  RAID 10 will max out a controller after 6 to 10 drives, usually.  Using the OS RAID 0 to aggregate the throughput of two controllers works great.WAL only has to be a little bit faster than your network in most cases.  I've never seen it be a bottleneck on large bulk loads if it is on its own controller with 120MB/sec write throughput.  I suppose a bulk load from COPY might stress it a bit more, but CPU ends up the bottleneck in postgres once you have I/O hardware this capable.-- A hybrid Escalade is missing the point much in the same way that having a diet soda with your extra large pepperoni pizza is missing the point.", "msg_date": "Fri, 28 Jan 2011 09:44:33 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\n\nOn 1/28/11 9:00 AM, \"Scott Marlowe\" <[email protected]> wrote:\n\n>On Fri, Jan 28, 2011 at 9:39 AM, Robert Schnabel <[email protected]>\n>wrote:\n>> I can't do outside the database. So yes, once the upload is done I run\n>> queries that update every row for certain columns, not every column.\n>>After\n>> I'm done with a table I run a VACUUM ANALYZE. I'm really not worried\n>>about\n>> what my table looks like on disk. I actually take other steps also to\n>>avoid\n>> what you're talking about.\n>\n>It will still get bloated. If you update one column in one row in pg,\n>you now have two copies of that row in the database. If you date 1\n>column in 1M rows, you now have 2M rows in the database (1M \"dead\"\n>rows, 1M \"live\" rows). vacuum analyze will not get rid of them, but\n>will free them up to be used in future updates / inserts. 
Vacuum full\n>or cluster will free up the space, but will lock the table while it\n>does so.\n>\n>There's nothing wrong with whole table updates as part of an import\n>process, you just have to know to \"clean up\" after you're done, and\n>regular vacuum can't fix this issue, only vacuum full or reindex or\n>cluster.\n\n\nAlso note that HOT will come into play if you have FILLFACTOR set\nappropriately, so you won't get two copies of the row. This is true if\nthe column being updated is small enough and not indexed. It wastes some\nspace, but a lot less than the factor of two.\n\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n\n", "msg_date": "Fri, 28 Jan 2011 09:47:55 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\n\nOn 1/28/11 9:28 AM, \"Stephen Frost\" <[email protected]> wrote:\n\n>* Scott Marlowe ([email protected]) wrote:\n>> There's nothing wrong with whole table updates as part of an import\n>> process, you just have to know to \"clean up\" after you're done, and\n>> regular vacuum can't fix this issue, only vacuum full or reindex or\n>> cluster.\n>\n>Just to share my experiences- I've found that creating a new table and\n>inserting into it is actually faster than doing full-table updates, if\n>that's an option for you.\n\nI wonder if postgres could automatically optimize that, if it thought that\nit was going to update more than X% of a table, and HOT was not going to\nhelp, then just create a new table file for XID's = or higher than the one\nmaking the change, and leave the old one for old XIDs, then regular VACUUM\ncould toss out the old one if no more transactions could see it.\n\n\n>\n> Thanks,\n>\n> Stephen\n\n", "msg_date": "Fri, 28 Jan 2011 09:50:47 -0800", "msg_from": "Scott Carey <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On Fri, Jan 28, 2011 at 10:44 AM, Scott Carey <[email protected]> wrote:\n> If you go this route, I suggest two equally sized RAID 10's on different\n> controllers fir index + data, with software raid-0 on top of that.  RAID 10\n> will max out a controller after 6 to 10 drives, usually.  Using the OS RAID\n> 0 to aggregate the throughput of two controllers works great.\n\nI often go one step further and just create a bunch of RAID-1 pairs\nand use OS level RAID-0 on top of that. On the LSI8888 cards that was\nby far the fastest setup I tested.\n", "msg_date": "Fri, 28 Jan 2011 11:55:10 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\nOn 1/28/2011 11:14 AM, Stephen Frost wrote:\n\n>> It got lost from the original post but my database (9.0.0) is currently on\n>> my Windows XP 64-bit workstation in my office on a 16 drive Seagate 15k.5\n>> RAID5, no comments needed, I know, I'm moving it :-). I'm moving it to my\n>> server which is Windows Ent Server 2008 R2 64-bit 8 AMD cores& 32G ram\n>> and these new drives/controller.\n> Ughh... No chance to get a Unix-based system (Linux, BSD, whatever) on\n> there instead? I really don't think Windows Server is going to help\n> your situation one bit.. :(\n>\nAlmost zero chance. 
I basically admin the server myself so I can do \nwhatever I want but all permissions are controlled through campus active \ndirectory and our departmental IT person doesn't do *nix. So let's just \nassume I'm stuck with Windows. The main purpose of the server at the \nmoment is to house our backup images. I have two 9 TB arrays which I \nuse robocopy to mirror images once a day between our other server and my \nworkstation. There's really not much of anything else ever eating up \nCPUs on the server which is why I'm moving my database onto it.\n\n>> I appreciate the comments thus far.\n> Let's hope you'll always appreciate them. :)\n>\n> \tThanks,\n>\n> \t\tStephen\nUmm, that didn't quite read the way I meant it to when I wrote it. All \ncomments are appreciated. :-)\n\nSeriously though, there have been points made that have made me rethink \nhow I go about processing data which I'm sure will help. I'm in a \nfairly fortunate position in that I can put these new drives on the \nserver and play around with different configurations while I maintain my \ncurrent setup on my workstation. I guess I just need to experiment and \nsee what works.\n\nThanks again,\nBob\n\n\n", "msg_date": "Fri, 28 Jan 2011 15:19:49 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "\n\n\n\n\n\n On 1/28/2011 11:44 AM, Scott Carey wrote:\n \n\n\n\n\n\n\nOn 1/27/11 4:11 PM, \"Alan Hodgson\" <[email protected]>\n wrote:\n\n\n\n\n\n\n\nOn January 27,\n 2011, Robert Schnabel <[email protected]>\n wrote:\n> So my\n questions are 1) am I'm crazy for doing this, 2) would\n you change\n> anything\n and 3) is it acceptable to put the xlog & wal (and\n perhaps tmp\n> filespace)\n on a different controller than everything else? Please\n keep\n> in mind I'm\n a geneticist who happens to know a little bit about\n>\n bioinformatics and not the reverse. :-)\n> \nPutting the WAL\n on a second controller does help, if you're write-heavy.\nI tried\n separating indexes and data once on one server and\n didn't really notice that it helped much. Managing the\n space was problematic. I would suggest putting those\n together on a single RAID-10 of all the 300GB drives\n (minus a spare). It will probably outperform separate\n arrays most of the time, and be much easier to manage.\n\n\n\n\n\n\nIf you go this route, I suggest two equally sized RAID 10's\n on different controllers fir index + data, with software raid-0\n on top of that.  RAID 10 will max out a controller after 6 to 10\n drives, usually.  Using the OS RAID 0 to aggregate the\n throughput of two controllers works great.\n\n\nWAL only has to be a little bit faster than your network in\n most cases.  I've never seen it be a bottleneck on large bulk\n loads if it is on its own controller with 120MB/sec write\n throughput.  I suppose a bulk load from COPY might stress it a\n bit more, but CPU ends up the bottleneck in postgres once you\n have I/O hardware this capable.\n\n\n\n Do you mean 14 drives in one box as RAID10's on one controller, then\n 14 drives in the other box on a second controller, then software\n RAID0 each of the two RAID10's together essentially as a single 4 TB\n array?  Would you still recommend doing this with Windows?\n Bob\n\n\n\n\n\n", "msg_date": "Fri, 28 Jan 2011 15:33:28 -0600", "msg_from": "Robert Schnabel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" 
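However the arrays end up being carved, the index and data split itself is expressed with tablespaces on the PostgreSQL side, so the database layout can follow whatever the controllers end up looking like. A minimal sketch, with hypothetical directories and object names:

-- minimal sketch; the directories and names below are hypothetical
CREATE TABLESPACE data_ts  LOCATION 'D:/pg_data_ts';
CREATE TABLESPACE index_ts LOCATION 'E:/pg_index_ts';

CREATE TABLE genotypes (
    animal_id integer NOT NULL,
    snp_id    integer NOT NULL,
    call      smallint
) TABLESPACE data_ts;

CREATE INDEX genotypes_snp_idx ON genotypes (snp_id) TABLESPACE index_ts;

Each tablespace is just a directory on whichever volume it should live on, so the same schema works whether that ends up being one large RAID-10 or separate index and data arrays.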
}, { "msg_contents": "2011/1/28 Scott Carey <[email protected]>\n\n>\n>\n> On 1/28/11 9:28 AM, \"Stephen Frost\" <[email protected]> wrote:\n>\n> >* Scott Marlowe ([email protected]) wrote:\n> >> There's nothing wrong with whole table updates as part of an import\n> >> process, you just have to know to \"clean up\" after you're done, and\n> >> regular vacuum can't fix this issue, only vacuum full or reindex or\n> >> cluster.\n> >\n> >Just to share my experiences- I've found that creating a new table and\n> >inserting into it is actually faster than doing full-table updates, if\n> >that's an option for you.\n>\n> I wonder if postgres could automatically optimize that, if it thought that\n> it was going to update more than X% of a table, and HOT was not going to\n> help, then just create a new table file for XID's = or higher than the one\n> making the change, and leave the old one for old XIDs, then regular VACUUM\n> could toss out the old one if no more transactions could see it.\n>\n>\n> I was thinking if a table file could be deleted if it has no single live\nrow. And if this could be done by vacuum. In this case vacuum on table that\nwas fully updated recently could be almost as good as cluster - any scan\nwould skip such non-existing files really fast. Also almost no disk space\nwould be wasted.\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/1/28 Scott Carey <[email protected]>\n\n\nOn 1/28/11 9:28 AM, \"Stephen Frost\" <[email protected]> wrote:\n\n>* Scott Marlowe ([email protected]) wrote:\n>> There's nothing wrong with whole table updates as part of an import\n>> process, you just have to know to \"clean up\" after you're done, and\n>> regular vacuum can't fix this issue, only vacuum full or reindex or\n>> cluster.\n>\n>Just to share my experiences- I've found that creating a new table and\n>inserting into it is actually faster than doing full-table updates, if\n>that's an option for you.\n\nI wonder if postgres could automatically optimize that, if it thought that\nit was going to update more than X% of a table, and HOT was not going to\nhelp, then just create a new table file for XID's = or higher than the one\nmaking the change, and leave the old one for old XIDs, then regular VACUUM\ncould toss out the old one if no more transactions could see it.\nI was thinking if a table file could be deleted if it has no single live row. And if this could be done by vacuum. In this case vacuum on table that was fully updated recently could be almost as good as cluster - any scan would skip such non-existing files really fast. Also almost no disk space would be wasted. \n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Sun, 30 Jan 2011 18:26:24 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "2011/1/30 Віталій Тимчишин <[email protected]>:\n> I was thinking if a table file could be deleted if it has no single live\n> row. And if this could be done by vacuum. In this case vacuum on table that\n> was fully updated recently could be almost as good as cluster - any scan\n> would skip such non-existing files really fast. Also almost no disk space\n> would be wasted.\n\nVACUUM actually already does something along these lines. If there\nare 1 or any larger number of entirely-free pages at the end of a\ntable, VACUUM will truncate them away. 
In the degenerate case where\nALL pages are entirely-free, this results in zeroing out the file.\n\nThe problem with this is that it rarely does much. Consider a table\nwith 1,000,000 pages, 50% of which contain live rows. On average, how\nmany pages will this algorithm truncate away? Answer: if the pages\ncontaining live rows are randomly distributed, approximately one.\n(Proof: There is a 50% chance that the last page will contain live\nrows. If so, we can't truncate anything. If not, we can truncate one\npage, and maybe more. Now the chances of the next page being free are\n499,999 in 999,999, or roughly one-half. So we have an almost-25%\nchance of being able to truncate at least two pages. And so on. So\nyou get roughly 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 1 page.)\n\nYour idea of having a set of heaps rather than a single heap is an\ninteresting one, but it's pretty much catering to the very specific\ncase of a full-table update. I think the code changes needed would be\nfar too invasive to seriously contemplate doing it just for that one\ncase - although it is an important case that I would like to see us\nimprove. Tom Lane previously objected to the idea of on-line table\ncompaction on the grounds that people's apps might break if CTIDs\nchanged under them, but I think a brawl between all the people who\nwant on-line table compaction and all the people who want to avoid\nunexpected CTID changes would be pretty short. A bigger problem - or\nat least another problem - is that moving tuples this way is\ncumbersome and expensive. You basically have to move some tuples\n(inserting new index entries for them), vacuum away the old index\nentries (requiring a full scan of every index), and then repeat as\nmany times as necessary to shrink the table. This is not exactly a\nsmooth maintenance procedure, or one that can be done without\nsignificant disruption, but AFAIK nobody's come up with a better idea\nyet.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 3 Feb 2011 13:42:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "There is a process in Oracle which essentially allows you to do the\nequivalent of a CLUSTER in Postgres, but without locking the table, and so\nupdates can continue throughout the process. It requires a bit of manual\nsetup and fiddling (at least in Oracle 10g) .... this would probably scratch\na lot of people's itches in this area. Of course, it's not trivial at all to\nimplement :-(\n\nThe Oracle equivalent of \"too many dead rows\" is \"too many chained rows\" and\nthat's where I've seen it used.\n\nCheers\nDave\n\n2011/2/3 Robert Haas <[email protected]>\n\n> 2011/1/30 Віталій Тимчишин <[email protected]>:\n> > I was thinking if a table file could be deleted if it has no single live\n> > row. And if this could be done by vacuum. In this case vacuum on table\n> that\n> > was fully updated recently could be almost as good as cluster - any scan\n> > would skip such non-existing files really fast. Also almost no disk space\n> > would be wasted.\n>\n> VACUUM actually already does something along these lines. If there\n> are 1 or any larger number of entirely-free pages at the end of a\n> table, VACUUM will truncate them away. In the degenerate case where\n> ALL pages are entirely-free, this results in zeroing out the file.\n>\n> The problem with this is that it rarely does much. 
Consider a table\n> with 1,000,000 pages, 50% of which contain live rows. On average, how\n> many pages will this algorithm truncate away? Answer: if the pages\n> containing live rows are randomly distributed, approximately one.\n> (Proof: There is a 50% chance that the last page will contain live\n> rows. If so, we can't truncate anything. If not, we can truncate one\n> page, and maybe more. Now the chances of the next page being free are\n> 499,999 in 999,999, or roughly one-half. So we have an almost-25%\n> chance of being able to truncate at least two pages. And so on. So\n> you get roughly 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 1 page.)\n>\n> Your idea of having a set of heaps rather than a single heap is an\n> interesting one, but it's pretty much catering to the very specific\n> case of a full-table update. I think the code changes needed would be\n> far too invasive to seriously contemplate doing it just for that one\n> case - although it is an important case that I would like to see us\n> improve. Tom Lane previously objected to the idea of on-line table\n> compaction on the grounds that people's apps might break if CTIDs\n> changed under them, but I think a brawl between all the people who\n> want on-line table compaction and all the people who want to avoid\n> unexpected CTID changes would be pretty short. A bigger problem - or\n> at least another problem - is that moving tuples this way is\n> cumbersome and expensive. You basically have to move some tuples\n> (inserting new index entries for them), vacuum away the old index\n> entries (requiring a full scan of every index), and then repeat as\n> many times as necessary to shrink the table. This is not exactly a\n> smooth maintenance procedure, or one that can be done without\n> significant disruption, but AFAIK nobody's come up with a better idea\n> yet.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nThere is a process in Oracle which essentially allows you to do the equivalent of a CLUSTER in Postgres, but without locking the table, and so updates can continue throughout the process. It requires a bit of manual setup and fiddling (at least in Oracle 10g) .... this would probably scratch a lot of people's itches in this area. Of course, it's not trivial at all to implement :-(\nThe Oracle equivalent of \"too many dead rows\" is \"too many chained rows\" and that's where I've seen it used. CheersDave2011/2/3 Robert Haas <[email protected]>\n2011/1/30 Віталій Тимчишин <[email protected]>:\n> I was thinking if a table file could be deleted if it has no single live\n> row. And if this could be done by vacuum. In this case vacuum on table that\n> was fully updated recently could be almost as good as cluster - any scan\n> would skip such non-existing files really fast. Also almost no disk space\n> would be wasted.\n\nVACUUM actually already does something along these lines.  If there\nare 1 or any larger number of entirely-free pages at the end of a\ntable, VACUUM will truncate them away.  In the degenerate case where\nALL pages are entirely-free, this results in zeroing out the file.\n\nThe problem with this is that it rarely does much.  Consider a table\nwith 1,000,000 pages, 50% of which contain live rows.  On average, how\nmany pages will this algorithm truncate away?  
Answer: if the pages\ncontaining live rows are randomly distributed, approximately one.\n(Proof: There is a 50% chance that the last page will contain live\nrows.  If so, we can't truncate anything.  If not, we can truncate one\npage, and maybe more.  Now the chances of the next page being free are\n499,999 in 999,999, or roughly one-half.  So we have an almost-25%\nchance of being able to truncate at least two pages.  And so on.   So\nyou get roughly 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + ... = 1 page.)\n\nYour idea of having a set of heaps rather than a single heap is an\ninteresting one, but it's pretty much catering to the very specific\ncase of a full-table update.  I think the code changes needed would be\nfar too invasive to seriously contemplate doing it just for that one\ncase - although it is an important case that I would like to see us\nimprove.  Tom Lane previously objected to the idea of on-line table\ncompaction on the grounds that people's apps might break if CTIDs\nchanged under them, but I think a brawl between all the people who\nwant on-line table compaction and all the people who want to avoid\nunexpected CTID changes would be pretty short.  A bigger problem - or\nat least another problem - is that moving tuples this way is\ncumbersome and expensive.  You basically have to move some tuples\n(inserting new index entries for them), vacuum away the old index\nentries (requiring a full scan of every index), and then repeat as\nmany times as necessary to shrink the table.  This is not exactly a\nsmooth maintenance procedure, or one that can be done without\nsignificant disruption, but AFAIK nobody's come up with a better idea\nyet.\n\n--\nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Thu, 3 Feb 2011 13:06:49 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "03.02.11 20:42, Robert Haas написав(ла):\n> 2011/1/30 Віталій Тимчишин<[email protected]>:\n>> I was thinking if a table file could be deleted if it has no single live\n>> row. And if this could be done by vacuum. In this case vacuum on table that\n>> was fully updated recently could be almost as good as cluster - any scan\n>> would skip such non-existing files really fast. Also almost no disk space\n>> would be wasted.\n> VACUUM actually already does something along these lines. If there\n> are 1 or any larger number of entirely-free pages at the end of a\n> table, VACUUM will truncate them away. In the degenerate case where\n> ALL pages are entirely-free, this results in zeroing out the file.\n>\n> The problem with this is that it rarely does much. Consider a table\n> with 1,000,000 pages, 50% of which contain live rows. On average, how\n> many pages will this algorithm truncate away? Answer: if the pages\n> containing live rows are randomly distributed, approximately one.\nYes, but take into account operations on a (by different reasons) \nclustered tables, like removing archived data (yes I know, this is best \ndone with partitioning, but one must still go to a point when he will \ndecide to use partitioning :) ).\n> Your idea of having a set of heaps rather than a single heap is an\n> interesting one, but it's pretty much catering to the very specific\n> case of a full-table update. 
I think the code changes needed would be\n> far too invasive to seriously contemplate doing it just for that one\n> case - although it is an important case that I would like to see us\n> improve.\nWhy do you expect such a invasive code changes? I know little about \npostgresql code layering, but what I propose (with changing delete to \ntruncate) is:\n1) Leave tuple addressing as it is now\n2) Allow truncated files, treating non-existing part as if it contained \nnot used tuples\n3) Make vacuum truncate file if it has not used tuples at the end.\n\nThe only (relatively) tricky thing I can see is synchronizing truncation \nwith parallel ongoing scan.\n\nBest regards, Vitalii Tymchyshyn\n\n\n", "msg_date": "Fri, 04 Feb 2011 11:19:13 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "On Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> Why do you expect such a invasive code changes? I know little about\n> postgresql code layering, but what I propose (with changing delete to\n> truncate) is:\n> 1) Leave tuple addressing as it is now\n\ni.e. a block number and a slot position within the block?\n\nSeems like you'd need <file,block,slot>.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Feb 2011 13:49:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" }, { "msg_contents": "2011/2/4 Robert Haas <[email protected]>\n\n> On Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn <[email protected]>\n> wrote:\n> > Why do you expect such a invasive code changes? I know little about\n> > postgresql code layering, but what I propose (with changing delete to\n> > truncate) is:\n> > 1) Leave tuple addressing as it is now\n>\n> i.e. a block number and a slot position within the block?\n>\n> Seems like you'd need <file,block,slot>.\n>\n\nNo, that's what I mean. Leave as it is. You will have file logical length\n(fixed for all but the last one, 1GB currently) and file actual legth that\ncan be less (if file trucated). In the latter case you still have this\n\"empty\" blocks that don't exists at all. Actually the simplest\nimplementation could be to tell to file system \"drop this part of file and\npretend it's all zeros\", but I don't think many FSs (OSes?) supports this.\nSo, each file still have it's fixed N blocks. And filenumber is still\nblocknumber / N.\n\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n2011/2/4 Robert Haas <[email protected]>\nOn Fri, Feb 4, 2011 at 4:19 AM, Vitalii Tymchyshyn <[email protected]> wrote:\n> Why do you expect such a invasive code changes? I know little about\n> postgresql code layering, but what I propose (with changing delete to\n> truncate) is:\n> 1) Leave tuple addressing as it is now\n\ni.e. a block number and a slot position within the block?\n\nSeems like you'd need <file,block,slot>.\nNo, that's what I mean. Leave as it is. You will have file logical length (fixed for all but the last one, 1GB currently) and file actual legth that can be less (if file trucated). In the latter case you still have this \"empty\" blocks that don't exists at all. Actually the simplest implementation could be to tell to file system \"drop this part of file and pretend it's all zeros\", but I don't think many FSs (OSes?) supports this.\nSo, each  file still have it's fixed N blocks. 
And filenumber is still blocknumber / N.-- Best regards, Vitalii Tymchyshyn", "msg_date": "Sat, 5 Feb 2011 11:01:05 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to best use 32 15k.7 300GB drives?" } ]
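The segment layout under discussion is easy to inspect from SQL. A minimal sketch (the table name is hypothetical and the paths shown in the comments are just examples): pg_relation_filepath() returns the first segment's path relative to the data directory, and a relation that grows past 1GB continues in files named <relfilenode>.1, <relfilenode>.2 and so on in the same directory:

-- minimal sketch; 'genotypes' is a hypothetical table name
SELECT pg_relation_filepath('genotypes');              -- e.g. base/16384/24576
SELECT pg_size_pretty(pg_relation_size('genotypes'));  -- size of the table's main fork
-- beyond 1GB the data continues in 24576.1, 24576.2, ... alongside the first file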
[ { "msg_contents": "I am evaluating postgres 9 to migrate away from Oracle. The following query\nruns too slow, also please find the explain plan:\n\n****************************************************************\nexplain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS\nORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,\nEVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE,\nORIGIN.DEPTH,ORIGIN.EVTYPE,\nORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR,\nMAGNITUDE.ID AS MAGID,\nMAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE\nfrom event.event left join event.origin on event.id=origin.eventid left join\nevent.magnitude on origin.id=event.magnitude.origin_id\nWHERE EXISTS(select origin_id from event.magnitude where\n magnitude.magnitude>=7.2 and origin.id=origin_id)\norder by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID\n,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n\n\n\"Unique (cost=740549.86..741151.42 rows=15039 width=80) (actual\ntime=17791.557..17799.092 rows=5517 loops=1)\"\n\" -> Sort (cost=740549.86..740587.45 rows=15039 width=80) (actual\ntime=17791.556..17792.220 rows=5517 loops=1)\"\n\" Sort Key: origin.\"time\", event.magnitude.magnitude, event.id,\nevent.preferred_origin_id, origin.id, event.contributor, origin.latitude,\norigin.longitude, origin.depth, origin.evtype, origin.catalog,\norigin.author, origin.contributor, event.magnitude.id, event.magnitude.type\"\n\" Sort Method: quicksort Memory: 968kB\"\n\" -> Nested Loop Left Join (cost=34642.50..739506.42 rows=15039\nwidth=80) (actual time=6.927..17769.788 rows=5517 loops=1)\"\n\" -> Hash Semi Join (cost=34642.50..723750.23 rows=14382\nwidth=62) (actual time=6.912..17744.858 rows=2246 loops=1)\"\n\" Hash Cond: (origin.id = event.magnitude.origin_id)\"\n\" -> Merge Left Join (cost=0.00..641544.72 rows=6133105\nwidth=62) (actual time=0.036..16221.008 rows=6133105 loops=1)\"\n\" Merge Cond: (event.id = origin.eventid)\"\n\" -> Index Scan using event_key_index on event\n (cost=0.00..163046.53 rows=3272228 width=12) (actual time=0.017..1243.616\nrows=3276192 loops=1)\"\n\" -> Index Scan using origin_fk_index on origin\n (cost=0.00..393653.81 rows=6133105 width=54) (actual time=0.013..3033.657\nrows=6133105 loops=1)\"\n\" -> Hash (cost=34462.73..34462.73 rows=14382 width=4)\n(actual time=6.668..6.668 rows=3198 loops=1)\"\n\" Buckets: 2048 Batches: 1 Memory Usage: 113kB\"\n\" -> Bitmap Heap Scan on magnitude\n (cost=324.65..34462.73 rows=14382 width=4) (actual time=1.682..5.414\nrows=3198 loops=1)\"\n\" Recheck Cond: (magnitude >= 7.2)\"\n\" -> Bitmap Index Scan on mag_index\n (cost=0.00..321.05 rows=14382 width=0) (actual time=1.331..1.331 rows=3198\nloops=1)\"\n\" Index Cond: (magnitude >= 7.2)\"\n\" -> Index Scan using mag_fkey_index on magnitude\n (cost=0.00..1.06 rows=3 width=22) (actual time=0.007..0.009 rows=2\nloops=2246)\"\n\" Index Cond: (origin.id = event.magnitude.origin_id)\"\n\"Total runtime: 17799.669 ms\"\n****************************************************************\n\nThis query runs in Oracle in 1 second while takes 16 seconds in postgres,\nThe difference tells me that I am doing something wrong somewhere. This is\na new installation on a local Mac machine with 12G of RAM.\n\nI have:\neffective_cache_size=4096MB\nshared_buffer=2048MB\nwork_mem=100MB\n\nI am evaluating postgres 9 to migrate away from Oracle.  
The following query runs too slow, also please find the explain plan:****************************************************************\nexplain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN, EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, \nORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR,MAGNITUDE.ID AS MAGID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE from event.event left join event.origin on event.id=origin.eventid left join event.magnitude on origin.id=event.magnitude.origin_id \nWHERE EXISTS(select origin_id from event.magnitude where  magnitude.magnitude>=7.2 and origin.id=origin_id) order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n\"Unique  (cost=740549.86..741151.42 rows=15039 width=80) (actual time=17791.557..17799.092 rows=5517 loops=1)\"\"  ->  Sort  (cost=740549.86..740587.45 rows=15039 width=80) (actual time=17791.556..17792.220 rows=5517 loops=1)\"\n\"        Sort Key: origin.\"time\", event.magnitude.magnitude, event.id, event.preferred_origin_id, origin.id, event.contributor, origin.latitude, origin.longitude, origin.depth, origin.evtype, origin.catalog, origin.author, origin.contributor, event.magnitude.id, event.magnitude.type\"\n\"        Sort Method:  quicksort  Memory: 968kB\"\"        ->  Nested Loop Left Join  (cost=34642.50..739506.42 rows=15039 width=80) (actual time=6.927..17769.788 rows=5517 loops=1)\"\n\"              ->  Hash Semi Join  (cost=34642.50..723750.23 rows=14382 width=62) (actual time=6.912..17744.858 rows=2246 loops=1)\"\"                    Hash Cond: (origin.id = event.magnitude.origin_id)\"\n\"                    ->  Merge Left Join  (cost=0.00..641544.72 rows=6133105 width=62) (actual time=0.036..16221.008 rows=6133105 loops=1)\"\"                          Merge Cond: (event.id = origin.eventid)\"\n\"                          ->  Index Scan using event_key_index on event  (cost=0.00..163046.53 rows=3272228 width=12) (actual time=0.017..1243.616 rows=3276192 loops=1)\"\"                          ->  Index Scan using origin_fk_index on origin  (cost=0.00..393653.81 rows=6133105 width=54) (actual time=0.013..3033.657 rows=6133105 loops=1)\"\n\"                    ->  Hash  (cost=34462.73..34462.73 rows=14382 width=4) (actual time=6.668..6.668 rows=3198 loops=1)\"\"                          Buckets: 2048  Batches: 1  Memory Usage: 113kB\"\n\"                          ->  Bitmap Heap Scan on magnitude  (cost=324.65..34462.73 rows=14382 width=4) (actual time=1.682..5.414 rows=3198 loops=1)\"\"                                Recheck Cond: (magnitude >= 7.2)\"\n\"                                ->  Bitmap Index Scan on mag_index  (cost=0.00..321.05 rows=14382 width=0) (actual time=1.331..1.331 rows=3198 loops=1)\"\"                                      Index Cond: (magnitude >= 7.2)\"\n\"              ->  Index Scan using mag_fkey_index on magnitude  (cost=0.00..1.06 rows=3 width=22) (actual time=0.007..0.009 rows=2 loops=2246)\"\"                    Index Cond: (origin.id = event.magnitude.origin_id)\"\n\"Total runtime: 17799.669 ms\"****************************************************************This query runs in Oracle in 1 second while takes 16 seconds in postgres, The difference tells me that I am doing something wrong somewhere.  
This is a new installation on a local Mac machine with 12G of RAM.\nI have:effective_cache_size=4096MB shared_buffer=2048MBwork_mem=100MB", "msg_date": "Fri, 28 Jan 2011 09:30:19 -0800", "msg_from": "yazan suleiman <[email protected]>", "msg_from_op": true, "msg_subject": "postgres 9 query performance" }, { "msg_contents": "On Fri, Jan 28, 2011 at 10:30 AM, yazan suleiman\n<[email protected]> wrote:\n> I am evaluating postgres 9 to migrate away from Oracle.  The following query\n> runs too slow, also please find the explain plan:\n> ****************************************************************\n> explain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS\n> ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,\n> EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE,\n> ORIGIN.DEPTH,ORIGIN.EVTYPE,\n> \"Total runtime: 17799.669 ms\"\n> ****************************************************************\n> This query runs in Oracle in 1 second while takes 16 seconds in postgres,\n> The difference tells me that I am doing something wrong somewhere.  This is\n> a new installation on a local Mac machine with 12G of RAM.\n\nTry turning it into a group by instead of a distinct. i.e.\n\nselect a,b,c,d from xyz group by a,b,c,d\n\nand see if it's faster. There is some poor performance on large data\nsets for distinct. Don't know if they got fixed in 9.0 or not, if not\nthen definitely try a group by and see.\n", "msg_date": "Fri, 28 Jan 2011 13:33:57 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "On Fri, Jan 28, 2011 at 09:30:19AM -0800, yazan suleiman wrote:\n> I am evaluating postgres 9 to migrate away from Oracle. The following query\n> runs too slow, also please find the explain plan:\n> \n> ****************************************************************\n> explain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS\n> ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,\n> EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE,\n> ORIGIN.DEPTH,ORIGIN.EVTYPE,\n> ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR,\n> MAGNITUDE.ID AS MAGID,\n> MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE\n> from event.event left join event.origin on event.id=origin.eventid left join\n> event.magnitude on origin.id=event.magnitude.origin_id\n> WHERE EXISTS(select origin_id from event.magnitude where\n> magnitude.magnitude>=7.2 and origin.id=origin_id)\n> order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID\n> ,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n> \n> \n> \"Unique (cost=740549.86..741151.42 rows=15039 width=80) (actual\n> time=17791.557..17799.092 rows=5517 loops=1)\"\n> \" -> Sort (cost=740549.86..740587.45 rows=15039 width=80) (actual\n> time=17791.556..17792.220 rows=5517 loops=1)\"\n> \" Sort Key: origin.\"time\", event.magnitude.magnitude, event.id,\n> event.preferred_origin_id, origin.id, event.contributor, origin.latitude,\n> origin.longitude, origin.depth, origin.evtype, origin.catalog,\n> origin.author, origin.contributor, event.magnitude.id, event.magnitude.type\"\n> \" Sort Method: quicksort Memory: 968kB\"\n> \" -> Nested Loop Left Join (cost=34642.50..739506.42 rows=15039\n> width=80) (actual time=6.927..17769.788 rows=5517 loops=1)\"\n> \" -> Hash Semi Join (cost=34642.50..723750.23 rows=14382\n> width=62) (actual time=6.912..17744.858 rows=2246 loops=1)\"\n> \" Hash Cond: (origin.id = event.magnitude.origin_id)\"\n> \" -> Merge Left Join 
(cost=0.00..641544.72 rows=6133105\n> width=62) (actual time=0.036..16221.008 rows=6133105 loops=1)\"\n> \" Merge Cond: (event.id = origin.eventid)\"\n> \" -> Index Scan using event_key_index on event\n> (cost=0.00..163046.53 rows=3272228 width=12) (actual time=0.017..1243.616\n> rows=3276192 loops=1)\"\n> \" -> Index Scan using origin_fk_index on origin\n> (cost=0.00..393653.81 rows=6133105 width=54) (actual time=0.013..3033.657\n> rows=6133105 loops=1)\"\n> \" -> Hash (cost=34462.73..34462.73 rows=14382 width=4)\n> (actual time=6.668..6.668 rows=3198 loops=1)\"\n> \" Buckets: 2048 Batches: 1 Memory Usage: 113kB\"\n> \" -> Bitmap Heap Scan on magnitude\n> (cost=324.65..34462.73 rows=14382 width=4) (actual time=1.682..5.414\n> rows=3198 loops=1)\"\n> \" Recheck Cond: (magnitude >= 7.2)\"\n> \" -> Bitmap Index Scan on mag_index\n> (cost=0.00..321.05 rows=14382 width=0) (actual time=1.331..1.331 rows=3198\n> loops=1)\"\n> \" Index Cond: (magnitude >= 7.2)\"\n> \" -> Index Scan using mag_fkey_index on magnitude\n> (cost=0.00..1.06 rows=3 width=22) (actual time=0.007..0.009 rows=2\n> loops=2246)\"\n> \" Index Cond: (origin.id = event.magnitude.origin_id)\"\n> \"Total runtime: 17799.669 ms\"\n> ****************************************************************\n> \n> This query runs in Oracle in 1 second while takes 16 seconds in postgres,\n> The difference tells me that I am doing something wrong somewhere. This is\n> a new installation on a local Mac machine with 12G of RAM.\n> \n> I have:\n> effective_cache_size=4096MB\n> shared_buffer=2048MB\n> work_mem=100MB\n\nIt sounds like the queries are not doing the same thing. What is\nthe schema/index definition for Oracle versus PostgreSQL?\n\nKen\n", "msg_date": "Fri, 28 Jan 2011 14:50:28 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "They have the same indexes, foreign keys are indexed in addition to the\nsearch values like magnitude. Distinct does nothing to speed up the query.\nIf I remove the select in the where clause the time goes down to 98 ms:\n\nselect DISTINCT EVENT.ID, ORIGIN.ID AS ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS\nPREFERRED_ORIGIN, EVENT.CONTRIBUTOR,\nORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE,\nORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR,\nORIGIN.CONTRIBUTOR OCONTRIBUTOR,MAGNITUDE.ID AS\nMAGID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE\nfrom event.event left join event.origin on event.id=origin.eventid left join\nevent.magnitude on origin.id=event.magnitude.origin_id\nWHERE magnitude.magnitude>=7.2 order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE\ndesc,EVENT.ID,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n\nThe new query returns 4000 rows, so the result is still big. I am not sure\nif I am answering your question, but I don't have access to generate ddl\nfrom Oracle. Thanks for the reply.\n\nOn Fri, Jan 28, 2011 at 12:50 PM, Kenneth Marshall <[email protected]> wrote:\n\n> On Fri, Jan 28, 2011 at 09:30:19AM -0800, yazan suleiman wrote:\n> > I am evaluating postgres 9 to migrate away from Oracle. 
The following\n> query\n> > runs too slow, also please find the explain plan:\n> >\n> > ****************************************************************\n> > explain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS\n> > ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,\n> > EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE,\n> > ORIGIN.DEPTH,ORIGIN.EVTYPE,\n> > ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR,\n> > MAGNITUDE.ID AS MAGID,\n> > MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE\n> > from event.event left join event.origin on event.id=origin.eventid left\n> join\n> > event.magnitude on origin.id=event.magnitude.origin_id\n> > WHERE EXISTS(select origin_id from event.magnitude where\n> > magnitude.magnitude>=7.2 and origin.id=origin_id)\n> > order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID\n> > ,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n> >\n> >\n> > \"Unique (cost=740549.86..741151.42 rows=15039 width=80) (actual\n> > time=17791.557..17799.092 rows=5517 loops=1)\"\n> > \" -> Sort (cost=740549.86..740587.45 rows=15039 width=80) (actual\n> > time=17791.556..17792.220 rows=5517 loops=1)\"\n> > \" Sort Key: origin.\"time\", event.magnitude.magnitude, event.id,\n> > event.preferred_origin_id, origin.id, event.contributor,\n> origin.latitude,\n> > origin.longitude, origin.depth, origin.evtype, origin.catalog,\n> > origin.author, origin.contributor, event.magnitude.id,\n> event.magnitude.type\"\n> > \" Sort Method: quicksort Memory: 968kB\"\n> > \" -> Nested Loop Left Join (cost=34642.50..739506.42 rows=15039\n> > width=80) (actual time=6.927..17769.788 rows=5517 loops=1)\"\n> > \" -> Hash Semi Join (cost=34642.50..723750.23 rows=14382\n> > width=62) (actual time=6.912..17744.858 rows=2246 loops=1)\"\n> > \" Hash Cond: (origin.id = event.magnitude.origin_id)\"\n> > \" -> Merge Left Join (cost=0.00..641544.72\n> rows=6133105\n> > width=62) (actual time=0.036..16221.008 rows=6133105 loops=1)\"\n> > \" Merge Cond: (event.id = origin.eventid)\"\n> > \" -> Index Scan using event_key_index on event\n> > (cost=0.00..163046.53 rows=3272228 width=12) (actual\n> time=0.017..1243.616\n> > rows=3276192 loops=1)\"\n> > \" -> Index Scan using origin_fk_index on origin\n> > (cost=0.00..393653.81 rows=6133105 width=54) (actual\n> time=0.013..3033.657\n> > rows=6133105 loops=1)\"\n> > \" -> Hash (cost=34462.73..34462.73 rows=14382\n> width=4)\n> > (actual time=6.668..6.668 rows=3198 loops=1)\"\n> > \" Buckets: 2048 Batches: 1 Memory Usage:\n> 113kB\"\n> > \" -> Bitmap Heap Scan on magnitude\n> > (cost=324.65..34462.73 rows=14382 width=4) (actual time=1.682..5.414\n> > rows=3198 loops=1)\"\n> > \" Recheck Cond: (magnitude >= 7.2)\"\n> > \" -> Bitmap Index Scan on mag_index\n> > (cost=0.00..321.05 rows=14382 width=0) (actual time=1.331..1.331\n> rows=3198\n> > loops=1)\"\n> > \" Index Cond: (magnitude >= 7.2)\"\n> > \" -> Index Scan using mag_fkey_index on magnitude\n> > (cost=0.00..1.06 rows=3 width=22) (actual time=0.007..0.009 rows=2\n> > loops=2246)\"\n> > \" Index Cond: (origin.id =\n> event.magnitude.origin_id)\"\n> > \"Total runtime: 17799.669 ms\"\n> > ****************************************************************\n> >\n> > This query runs in Oracle in 1 second while takes 16 seconds in postgres,\n> > The difference tells me that I am doing something wrong somewhere. 
This\n> is\n> > a new installation on a local Mac machine with 12G of RAM.\n> >\n> > I have:\n> > effective_cache_size=4096MB\n> > shared_buffer=2048MB\n> > work_mem=100MB\n>\n> It sounds like the queries are not doing the same thing. What is\n> the schema/index definition for Oracle versus PostgreSQL?\n>\n> Ken\n>\n\nThey have the same indexes, foreign keys are indexed in addition to the search values like magnitude.  Distinct does nothing to speed up the query.  If I remove the select in the where clause the time goes down to 98 ms:\nselect DISTINCT EVENT.ID, ORIGIN.ID AS ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN, EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE, ORIGIN.DEPTH,ORIGIN.EVTYPE, ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, \nORIGIN.CONTRIBUTOR OCONTRIBUTOR,MAGNITUDE.ID AS MAGID,MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE from event.event left join event.origin on event.id=origin.eventid left join event.magnitude on origin.id=event.magnitude.origin_id \nWHERE magnitude.magnitude>=7.2 order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.IDThe new query returns 4000 rows, so the result is still big.  I am not sure if I am answering your question, but I don't have access to generate ddl from Oracle.  Thanks for the reply.\nOn Fri, Jan 28, 2011 at 12:50 PM, Kenneth Marshall <[email protected]> wrote:\nOn Fri, Jan 28, 2011 at 09:30:19AM -0800, yazan suleiman wrote:\n> I am evaluating postgres 9 to migrate away from Oracle.  The following query\n> runs too slow, also please find the explain plan:\n>\n> ****************************************************************\n> explain analyze select DISTINCT EVENT.ID, ORIGIN.ID AS\n> ORIGINID,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN,\n> EVENT.CONTRIBUTOR, ORIGIN.TIME, ORIGIN.LATITUDE, ORIGIN.LONGITUDE,\n> ORIGIN.DEPTH,ORIGIN.EVTYPE,\n> ORIGIN.CATALOG, ORIGIN.AUTHOR OAUTHOR, ORIGIN.CONTRIBUTOR OCONTRIBUTOR,\n> MAGNITUDE.ID AS MAGID,\n> MAGNITUDE.MAGNITUDE,MAGNITUDE.TYPE AS MAGTYPE\n> from event.event left join event.origin on event.id=origin.eventid left join\n> event.magnitude on origin.id=event.magnitude.origin_id\n> WHERE EXISTS(select origin_id from event.magnitude where\n>  magnitude.magnitude>=7.2 and origin.id=origin_id)\n> order by ORIGIN.TIME desc,MAGNITUDE.MAGNITUDE desc,EVENT.ID\n> ,EVENT.PREFERRED_ORIGIN_ID,ORIGIN.ID\n>\n>\n> \"Unique  (cost=740549.86..741151.42 rows=15039 width=80) (actual\n> time=17791.557..17799.092 rows=5517 loops=1)\"\n> \"  ->  Sort  (cost=740549.86..740587.45 rows=15039 width=80) (actual\n> time=17791.556..17792.220 rows=5517 loops=1)\"\n> \"        Sort Key: origin.\"time\", event.magnitude.magnitude, event.id,\n> event.preferred_origin_id, origin.id, event.contributor, origin.latitude,\n> origin.longitude, origin.depth, origin.evtype, origin.catalog,\n> origin.author, origin.contributor, event.magnitude.id, event.magnitude.type\"\n> \"        Sort Method:  quicksort  Memory: 968kB\"\n> \"        ->  Nested Loop Left Join  (cost=34642.50..739506.42 rows=15039\n> width=80) (actual time=6.927..17769.788 rows=5517 loops=1)\"\n> \"              ->  Hash Semi Join  (cost=34642.50..723750.23 rows=14382\n> width=62) (actual time=6.912..17744.858 rows=2246 loops=1)\"\n> \"                    Hash Cond: (origin.id = event.magnitude.origin_id)\"\n> \"                    ->  Merge Left Join  (cost=0.00..641544.72 rows=6133105\n> width=62) (actual time=0.036..16221.008 rows=6133105 loops=1)\"\n> \"                          Merge Cond: (event.id = 
origin.eventid)\"\n> \"                          ->  Index Scan using event_key_index on event\n>  (cost=0.00..163046.53 rows=3272228 width=12) (actual time=0.017..1243.616\n> rows=3276192 loops=1)\"\n> \"                          ->  Index Scan using origin_fk_index on origin\n>  (cost=0.00..393653.81 rows=6133105 width=54) (actual time=0.013..3033.657\n> rows=6133105 loops=1)\"\n> \"                    ->  Hash  (cost=34462.73..34462.73 rows=14382 width=4)\n> (actual time=6.668..6.668 rows=3198 loops=1)\"\n> \"                          Buckets: 2048  Batches: 1  Memory Usage: 113kB\"\n> \"                          ->  Bitmap Heap Scan on magnitude\n>  (cost=324.65..34462.73 rows=14382 width=4) (actual time=1.682..5.414\n> rows=3198 loops=1)\"\n> \"                                Recheck Cond: (magnitude >= 7.2)\"\n> \"                                ->  Bitmap Index Scan on mag_index\n>  (cost=0.00..321.05 rows=14382 width=0) (actual time=1.331..1.331 rows=3198\n> loops=1)\"\n> \"                                      Index Cond: (magnitude >= 7.2)\"\n> \"              ->  Index Scan using mag_fkey_index on magnitude\n>  (cost=0.00..1.06 rows=3 width=22) (actual time=0.007..0.009 rows=2\n> loops=2246)\"\n> \"                    Index Cond: (origin.id = event.magnitude.origin_id)\"\n> \"Total runtime: 17799.669 ms\"\n> ****************************************************************\n>\n> This query runs in Oracle in 1 second while takes 16 seconds in postgres,\n> The difference tells me that I am doing something wrong somewhere.  This is\n> a new installation on a local Mac machine with 12G of RAM.\n>\n> I have:\n> effective_cache_size=4096MB\n> shared_buffer=2048MB\n> work_mem=100MB\n\nIt sounds like the queries are not doing the same thing. What is\nthe schema/index definition for Oracle versus PostgreSQL?\n\nKen", "msg_date": "Fri, 28 Jan 2011 13:01:40 -0800", "msg_from": "yazan suleiman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "On Friday, January 28, 2011 06:30:19 PM yazan suleiman wrote:\n> I am evaluating postgres 9 to migrate away from Oracle. The following\n> query runs too slow, also please find the explain plan:\nFirst:\n\nexplain analyze\nSELECT DISTINCT\n EVENT.ID\n ,ORIGIN.ID AS ORIGINID\n ,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN\n ,EVENT.CONTRIBUTOR\n ,ORIGIN.TIME\n ,ORIGIN.LATITUDE\n ,ORIGIN.LONGITUDE\n ,ORIGIN.DEPTH\n ,ORIGIN.EVTYPE\n ,ORIGIN.CATALOG\n ,ORIGIN.AUTHOR OAUTHOR\n ,ORIGIN.CONTRIBUTOR OCONTRIBUTOR\n ,MAGNITUDE.ID AS MAGID\n ,MAGNITUDE.MAGNITUDE\n ,MAGNITUDE.TYPE AS MAGTYPE\nFROM\n event.event\n left join event.origin on event.id = origin.eventid\n left join event.magnitude on origin.id = event.magnitude.origin_id\nWHERE\n EXISTS(\n select origin_id\n from event.magnitude\n where magnitude.magnitude >= 7.2 and origin.id = origin_id\n )\norder by\n ORIGIN.TIME desc\n ,MAGNITUDE.MAGNITUDE desc\n ,EVENT.ID\n ,EVENT.PREFERRED_ORIGIN_ID\n ,ORIGIN.ID\n\nI am honestly stumped if anybody can figure something sensible out of the \noriginal formatting of the query...\n\nWhat happens if you change the\n left join event.origin on event.id = origin.eventid\ninto\n join event.origin on event.id = origin.eventid\n?\n\nThe EXISTS() requires that origin is not null anyway. 
(Not sure why the \nplanner doesn't recognize that though).\n\nAndres\n", "msg_date": "Fri, 28 Jan 2011 22:19:29 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "OK, that did it. Time is now 315 ms. I am so exited working with\npostgres. I really apologize for the format, my first time posting on the\nlist. That does not justify it though. Really thanks.\n\nOn Fri, Jan 28, 2011 at 1:19 PM, Andres Freund <[email protected]> wrote:\n\n> On Friday, January 28, 2011 06:30:19 PM yazan suleiman wrote:\n> > I am evaluating postgres 9 to migrate away from Oracle. The following\n> > query runs too slow, also please find the explain plan:\n> First:\n>\n> explain analyze\n> SELECT DISTINCT\n> EVENT.ID\n> ,ORIGIN.ID AS ORIGINID\n> ,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN\n> ,EVENT.CONTRIBUTOR\n> ,ORIGIN.TIME\n> ,ORIGIN.LATITUDE\n> ,ORIGIN.LONGITUDE\n> ,ORIGIN.DEPTH\n> ,ORIGIN.EVTYPE\n> ,ORIGIN.CATALOG\n> ,ORIGIN.AUTHOR OAUTHOR\n> ,ORIGIN.CONTRIBUTOR OCONTRIBUTOR\n> ,MAGNITUDE.ID AS MAGID\n> ,MAGNITUDE.MAGNITUDE\n> ,MAGNITUDE.TYPE AS MAGTYPE\n> FROM\n> event.event\n> left join event.origin on event.id = origin.eventid\n> left join event.magnitude on origin.id = event.magnitude.origin_id\n> WHERE\n> EXISTS(\n> select origin_id\n> from event.magnitude\n> where magnitude.magnitude >= 7.2 and origin.id = origin_id\n> )\n> order by\n> ORIGIN.TIME desc\n> ,MAGNITUDE.MAGNITUDE desc\n> ,EVENT.ID\n> ,EVENT.PREFERRED_ORIGIN_ID\n> ,ORIGIN.ID\n>\n> I am honestly stumped if anybody can figure something sensible out of the\n> original formatting of the query...\n>\n> What happens if you change the\n> left join event.origin on event.id = origin.eventid\n> into\n> join event.origin on event.id = origin.eventid\n> ?\n>\n> The EXISTS() requires that origin is not null anyway. (Not sure why the\n> planner doesn't recognize that though).\n>\n> Andres\n>\n\nOK, that did it.  Time is now 315 ms.  I am so exited working with postgres.  I really apologize for the format, my first time posting on the list.  That does not justify it though.  Really thanks.\nOn Fri, Jan 28, 2011 at 1:19 PM, Andres Freund <[email protected]> wrote:\nOn Friday, January 28, 2011 06:30:19 PM yazan suleiman wrote:\n> I am evaluating postgres 9 to migrate away from Oracle.  
The following\n> query runs too slow, also please find the explain plan:\nFirst:\n\nexplain analyze\nSELECT DISTINCT\n    EVENT.ID\n    ,ORIGIN.ID AS ORIGINID\n    ,EVENT.PREFERRED_ORIGIN_ID AS PREFERRED_ORIGIN\n    ,EVENT.CONTRIBUTOR\n    ,ORIGIN.TIME\n    ,ORIGIN.LATITUDE\n    ,ORIGIN.LONGITUDE\n    ,ORIGIN.DEPTH\n    ,ORIGIN.EVTYPE\n    ,ORIGIN.CATALOG\n    ,ORIGIN.AUTHOR OAUTHOR\n    ,ORIGIN.CONTRIBUTOR OCONTRIBUTOR\n    ,MAGNITUDE.ID AS MAGID\n    ,MAGNITUDE.MAGNITUDE\n    ,MAGNITUDE.TYPE AS MAGTYPE\nFROM\n    event.event\n    left join event.origin on event.id = origin.eventid\n    left join event.magnitude on origin.id = event.magnitude.origin_id\nWHERE\n    EXISTS(\n        select origin_id\n        from event.magnitude\n        where magnitude.magnitude >= 7.2 and origin.id = origin_id\n    )\norder by\n    ORIGIN.TIME desc\n    ,MAGNITUDE.MAGNITUDE desc\n    ,EVENT.ID\n    ,EVENT.PREFERRED_ORIGIN_ID\n    ,ORIGIN.ID\n\nI am honestly stumped if anybody can figure something sensible out of the\noriginal formatting of the query...\n\nWhat happens if you change the\n    left join event.origin on event.id = origin.eventid\ninto\n    join event.origin on event.id = origin.eventid\n?\n\nThe EXISTS() requires that origin is not null anyway. (Not sure why the\nplanner doesn't recognize that though).\n\nAndres", "msg_date": "Fri, 28 Jan 2011 13:34:45 -0800", "msg_from": "yazan suleiman <[email protected]>", "msg_from_op": true, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "Andres Freund <[email protected]> writes:\n> What happens if you change the\n> left join event.origin on event.id = origin.eventid\n> into\n> join event.origin on event.id = origin.eventid\n> ?\n\n> The EXISTS() requires that origin is not null anyway. (Not sure why the \n> planner doesn't recognize that though).\n\nSloppy thinking in reduce_outer_joins() is why. Fixed now:\nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=1df57f63f3f60c684aa8918910ac410e9c780713\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 30 Jan 2011 17:18:15 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance " }, { "msg_contents": "On Sunday 30 January 2011 23:18:15 Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > What happens if you change the\n> > \n> > left join event.origin on event.id = origin.eventid\n> > \n> > into\n> > \n> > join event.origin on event.id = origin.eventid\n> > \n> > ?\n> > \n> > The EXISTS() requires that origin is not null anyway. (Not sure why the\n> > planner doesn't recognize that though).\n> \n> Sloppy thinking in reduce_outer_joins() is why. \nWow. Nice one, thanks.\n\nAndres\n", "msg_date": "Sun, 30 Jan 2011 23:35:03 +0100", "msg_from": "Andres Freund <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance" }, { "msg_contents": "On Sun, Jan 30, 2011 at 05:18:15PM -0500, Tom Lane wrote:\n> Andres Freund <[email protected]> writes:\n> > What happens if you change the\n> > left join event.origin on event.id = origin.eventid\n> > into\n> > join event.origin on event.id = origin.eventid\n> > ?\n> \n> > The EXISTS() requires that origin is not null anyway. (Not sure why the \n> > planner doesn't recognize that though).\n> \n> Sloppy thinking in reduce_outer_joins() is why. 
Fixed now:\n> http://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=1df57f63f3f60c684aa8918910ac410e9c780713\n> \n> \t\t\tregards, tom lane\n\nThis is one of the reasons I love open source in general, and PostgreSQL\nin particular: Tom has the bandwidth to notice these kinds of\nworkarounds being discussed on support lists, and turn them immediately\ninto improvements in the planner. Partly because (I assume, based on\nthe commit message) Andres's parenthetical comment red-flagged it for\nhim, since he knew he could trust Andres's opinion that there was\nprobably a planner improvement hiding here. Amazing!\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n", "msg_date": "Tue, 1 Feb 2011 10:42:53 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres 9 query performance" } ]
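The rewrite that made the difference here generalizes. Because the EXISTS() predicate references a column from the nullable side of the LEFT JOIN, it can never be true for a NULL-extended row, so the outer join is effectively an inner join; writing it as one by hand (which the planner now does on its own after the commit above) lets the selective predicate drive the plan. A minimal sketch with hypothetical tables:

-- minimal sketch; 'orders', 'customers' and 'payments' are hypothetical tables
-- The EXISTS() is false whenever c.id is NULL, so no NULL-extended row survives:
SELECT o.id, c.name
  FROM orders o
  LEFT JOIN customers c ON c.id = o.customer_id
 WHERE EXISTS (SELECT 1 FROM payments p
               WHERE p.amount >= 100 AND p.customer_id = c.id);

-- which is why it can be written as the plain inner join that ran fast in this thread:
SELECT o.id, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
 WHERE EXISTS (SELECT 1 FROM payments p
               WHERE p.amount >= 100 AND p.customer_id = c.id);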
[ { "msg_contents": "Did anyone try using \"shake\" while the cluster is active? Any problems \nwith corruption or data loss? I ran the thing on my home directory and \nnothing was broken. I didn't develop any performance test, so cannot \nvouch for the effectiveness of the procedure. Did anyone play with that? \nAny positive or negative things to say about shake?\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sun, 30 Jan 2011 15:11:51 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Any experience using \"shake\" defragmenter?" }, { "msg_contents": "On Sun, 30 Jan 2011 14:11:51 -0600, Mladen Gogala \n<[email protected]> wrote:\n\n> Did anyone try using \"shake\" while the cluster is active? Any problems \n> with corruption or data loss? I ran the thing on my home directory and \n> nothing was broken. I didn't develop any performance test, so cannot \n> vouch for the effectiveness of the procedure. Did anyone play with that? \n> Any positive or negative things to say about shake?\n>\n\nWhy do you feel the need to defrag your *nix box?\n\n\nRegards,\n\n\nMark\n", "msg_date": "Sun, 30 Jan 2011 15:31:26 -0600", "msg_from": "\"Mark Felder\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "W dniu 2011-01-30 22:31, Mark Felder pisze:\n> Why do you feel the need to defrag your *nix box?\n\nI'm guessing, maybe he used filefrag and saw >30000 extents? :)\nNext question will be \"which fs do you use?\" and then flame will start:(\nRegards\n", "msg_date": "Sun, 30 Jan 2011 23:27:16 +0100", "msg_from": "=?ISO-8859-2?Q?Marcin_Miros=B3aw?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "Marcin Mirosďż˝aw wrote:\n> W dniu 2011-01-30 22:31, Mark Felder pisze:\n> \n>> Why do you feel the need to defrag your *nix box?\n>> \n>\n> I'm guessing, maybe he used filefrag and saw >30000 extents? :)\n> Next question will be \"which fs do you use?\" and then flame will start:(\n> Regards\n>\n> \nWith all due respect, I don't want to start a fruitless flame war. I am \nasking those who have used it about their experiences with the product. \nLet's leave discussion of my motivation for some other time. I guess \nit's all about my unhappy childhood. If you have used the defragmenter, \nI'd be grateful for your experience.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sun, 30 Jan 2011 23:33:02 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "Mark Felder wrote:\n> Why do you feel the need to defrag your *nix box?\n>\n>\n> \nLet's stick to the original question and leave my motivation for some \nother time. Have you used the product? If you have, I'd be happy to hear \nabout your experience with it.\n\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Sun, 30 Jan 2011 23:38:38 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" 
}, { "msg_contents": "On 01/30/2011 11:38 PM, Mladen Gogala wrote:\n> Mark Felder wrote:\n>> Why do you feel the need to defrag your *nix box?\n>>\n>>\n> Let's stick to the original question and leave my motivation for some other\n> time. Have you used the product? If you have, I'd be happy to hear about your\n> experience with it.\n\nThat seems a little harsh. You post to a discussion group but want to \nsuppress discussion?\n\nMaybe that works with paid tech-support staff, but here ...\n\n-- \nLew\nCeci n'est pas une fenêtre.\n.___________.\n|###] | [###|\n|##/ | *\\##|\n|#/ * | \\#|\n|#----|----#|\n|| | * ||\n|o * | o|\n|_____|_____|\n|===========|\n", "msg_date": "Mon, 31 Jan 2011 07:28:39 -0500", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "* Mark Felder:\n\n> Why do you feel the need to defrag your *nix box?\n\nSome file systems (such as XFS) read the whole extent list into RAM\nwhen a file is opened. When the extend list is long due to\nfragmentation, this can take a *long* time (in the order of minutes\nwith multi-gigabyte Oracle Berkeley DB files). This phenomenon is\nless pronounced with PostgreSQL because it splits large relations into\none-gigabyte chunks, and it writes the files sequentally. But a small\neffect is probably still there.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Mon, 31 Jan 2011 16:44:07 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "* Mladen Gogala:\n\n> Did anyone try using \"shake\" while the cluster is active?\n\nAs far as I can tell, it's totally unsafe.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Mon, 31 Jan 2011 16:49:53 +0000", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "Please reply to the list with list business.\n\nOn 01/31/2011 03:22 PM, Mladen Gogala wrote:\n> On 1/31/2011 7:28 AM, Lew wrote:\n>> That seems a little harsh.\n> Oh? How so?\n>> You post to a discussion group but want to\n>> suppress discussion?\n>\n> No, I just want to stick to the subject. My motivation for doing so, my\n> unhappy childhood or somebody's need for attention are not too important. If\n> you have had any experience with the product, I'd be extremely keen to learn\n> about it and grateful to you for sharing it. If not, then....well, I'll\n> explain my reasoning some other time. I have better things to do right now.\n>\n>> Maybe that works with paid tech-support staff, but here ...\n>>\n> Things have lived up to my expectation. Basically, the only people who replied\n> are those who have no experience with the product but apparently do have an\n> irresistible urge to discuss something that I am not particularly interested\n> in discussing.\n>\n\nI'm so very, very sorry that we insist on having a discussion instead of \nadhering to your ukase.\n\nPerhaps your dictatorial attitude discourages people from responding? I mean, \nMark Felder asked a perfectly reasonable question and now you're all snarky. 
\nWell, a big \"Harrumph!\" to that!\n\nI wish you the best of luck. You'll need it with that attitude.\n\n-- \nLew\nCeci n'est pas une fenêtre.\n.___________.\n|###] | [###|\n|##/ | *\\##|\n|#/ * | \\#|\n|#----|----#|\n|| | * ||\n|o * | o|\n|_____|_____|\n|===========|\n", "msg_date": "Mon, 31 Jan 2011 18:41:26 -0500", "msg_from": "Lew <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "Mladen Gogala wrote:\n> Did anyone try using \"shake\" while the cluster is active? Any problems \n> with corruption or data loss? I ran the thing on my home directory and \n> nothing was broken. I didn't develop any performance test, so cannot \n> vouch for the effectiveness of the procedure. Did anyone play with \n> that? Any positive or negative things to say about shake?\n>\n\nShake works by allocating a new file the size of the original, in what \nis presumed to be then be unfragmented space. It copies the original \nover to this new space and then gets rid of the original. That \nprocedure will cause database corruption if the server happens to access \nthe file it's moving while it's in the middle of doing so. If the \ndatabase isn't running, though, it is probably fine.\n\nOn ext3 you can measure whether it was useful or not by taking the \nfilesystem off-line and running fsck before/after using it. Look for \npercentages given for \"non-contiguous files\" and directories. If those \nwere low to begin with, little reason to run the utility. If they're \nhigh, running shake should bring them down afterwards if it's doing its \njob right.\n\nOn a PostgreSQL database system, you can get the same basic effect while \nleaving the server up--but just with the table locked--using CLUSTER. \nAnd that will clean up a bunch of other potential messes inside the \ndatabase that shake can't touch. I just do that instead if I'm worried \na particular table has become fragmented on disk.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 01 Feb 2011 14:24:44 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "On Tue, Feb 1, 2011 at 1:24 PM, Greg Smith <[email protected]> wrote:\n> Mladen Gogala wrote:\n>>\n>> Did anyone try using \"shake\" while the cluster is active? Any problems\n>> with corruption or data loss? I ran the thing on my home directory and\n>> nothing was broken. I didn't develop any performance test, so cannot vouch\n>> for the effectiveness of the procedure. Did anyone play with that? Any\n>> positive or negative things to say about shake?\n>>\n>\n> Shake works by allocating a new file the size of the original, in what is\n> presumed to be then be unfragmented space.  It copies the original over to\n> this new space and then gets rid of the original.  That procedure will cause\n> database corruption if the server happens to access the file it's moving\n> while it's in the middle of doing so.  If the database isn't running,\n> though, it is probably fine.\n>\n> On ext3 you can measure whether it was useful or not by taking the\n> filesystem off-line and running fsck before/after using it.  Look for\n> percentages given for \"non-contiguous files\" and directories.  If those were\n> low to begin with, little reason to run the utility.  
If they're high,\n> running shake should bring them down afterwards if it's doing its job right.\n>\n> On a PostgreSQL database system, you can get the same basic effect while\n> leaving the server up--but just with the table locked--using CLUSTER.  And\n> that will clean up a bunch of other potential messes inside the database\n> that shake can't touch.  I just do that instead if I'm worried a particular\n> table has become fragmented on disk.\n\nOne thing to note is that, in my experiments, ext4 handles large files\n(such as the 1GiB files postgresql uses for large relations) in a\n*vastly* improved manner over ext3. This is due to the use of\nextents. I found that, in some cases, heavily fragmented files under\next3 could not be effectively defragmented - and yes, I tried shake\nand some others (including one I wrote which *does* use fallocate /\nfallocate_posix). There was improvement, but by far the biggest\nimprovement was switching to ext4.\n\nInstead of something like 'shake' (which more or less works, even\nthough it doesn't use fallocate and friends) I frequently use either\nCLUSTER (which is what Greg Smith is suggesting) or a series of ALTER\nTABLE ... ALTER COLUMN... which rewrites the table. With PG 9 perhaps\nVACUUM FULL is more appropriate. Of course, the advice regarding\nusing 'shake' (or any other defragmenter) on a \"live\" postgresql data\ndirectory is excellent - the potential for causing damage if the\ndatabase is active during that time is very high.\n\n-- \nJon\n", "msg_date": "Tue, 1 Feb 2011 13:38:13 -0600", "msg_from": "Jon Nelson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "\n> Instead of something like 'shake' (which more or less works, even\n> though it doesn't use fallocate and friends) I frequently use either\n> CLUSTER (which is what Greg Smith is suggesting) or a series of ALTER\n> TABLE ... ALTER COLUMN... which rewrites the table. With PG 9 perhaps\n> VACUUM FULL is more appropriate. Of course, the advice regarding\n> using 'shake' (or any other defragmenter) on a \"live\" postgresql data\n> directory is excellent - the potential for causing damage if the\n> database is active during that time is very high.\n>\n> \nI agree that unless it makes sure there are no open file handles before \nmoving the file, there is a high chance of corrupting data, and if it \ndoes check, there is little chance it will do anything useful on a live \nDB, since it will skip every open file.\n\nDoes vacuum full rewrite the whole table, or only the blocks with free \nspace? If it only rewrites the blocks with free space, the only \nsolution may be exclusive table lock, alter table to new name, create \nold table name as select * from new table name. I also like the cluster \nidea, but I am not sure if it rewrites everything, or just the blocks \nthat have out of order rows, in which case, it would not work well the \nsecond time.\n\n", "msg_date": "Tue, 01 Feb 2011 13:31:22 -0700", "msg_from": "Grant Johnson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" 
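A rough sketch of the CLUSTER-based rewrite suggested in the message above; the table and index names are placeholders rather than objects from this thread, and the CLUSTER ... USING form assumes PostgreSQL 8.4 or later:

-- Rewrite the table in the order of one of its indexes; the table is
-- locked exclusively while the new, sequentially written copy is built.
CLUSTER big_table USING big_table_pkey;
-- CLUSTER does not refresh planner statistics, so analyze afterwards.
ANALYZE big_table;

Because the copy is done by the server itself, there is no window in which an outside tool could move a file the database still has open.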
}, { "msg_contents": "On Tue, Feb 1, 2011 at 3:31 PM, Grant Johnson <[email protected]> wrote:\n> Does vacuum full rewrite the whole table, or only the blocks with free\n> space?\n\nThe whole table.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 1 Feb 2011 17:33:11 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" }, { "msg_contents": "On 31/01/11 17:38, Mladen Gogala wrote:\n> Mark Felder wrote:\n>> Why do you feel the need to defrag your *nix box?\n>>\n>>\n> Let's stick to the original question and leave my motivation for some \n> other time. Have you used the product? If you have, I'd be happy to \n> hear about your experience with it.\n>\n>\n\nMladen sometimes asking about the motivation behind a question brings to \nlight new information that makes the original question no longer \nrelevant. In this case it might bring to light a better solution than \n\"Shake\", or else methods for determining if fragmentation is harmful or \nnot.\n\nI don't believe people are asking in order to either flame you or insult \nyour intelligence, but there is genuine interest in why you are wanting \nto defrag. There is a lot of expertise on this list - indulging a little \ncuriosity will only help you get better value for your questions.\n\nCheers\n\nMark\n\nP.s: I'm curious too :-)\n\n\n\n\n\n\n On 31/01/11 17:38, Mladen Gogala wrote:\n Mark\n Felder wrote:\n \nWhy do you feel the need to defrag your\n *nix box?\n \n\n\n ᅵ \n Let's stick to the original question and leave my motivation for\n some other time. Have you used the product? If you have, I'd be\n happy to hear about your experience with it.\n \n\n\n\n\n Mladen sometimes asking about the motivation behind a question\n brings to light new information that makes the original question\n no longer relevant. In this case it might bring to light a\n better solution than \"Shake\", or else methods for determining if\n fragmentation is harmful or not. \n\n I don't believe people are asking in order to either flame you\n or insult your intelligence, but there is genuine interest in\n why you are wanting to defrag. There is a lot of expertise on\n this list - indulging a little curiosity will only help you get\n better value for your questions.\n\n Cheers\n\n Mark\n\n P.s: I'm curious too :-)", "msg_date": "Wed, 02 Feb 2011 12:50:12 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Any experience using \"shake\" defragmenter?" } ]
[ { "msg_contents": "Scott,\n\nI don't know if you received my private email, but just in case you did not I am posting the infomration here.\n\n \n\nI have a new set of servers coming in - Dual Xeon E5620's, 96GB RAM, 18 spindles (1 RAID1 for OS - SATA, 12 disk RAID10 for data - SAS, RAID-1 for logs - SAS, 2 hot spares SAS). They are replacing a single Dual Xeon E5406 with 16GB RAM and 2x RAID1 - one for OS/Data, one for Logs.\n\nCurrent server is using 3840MB of shared buffers.\n\n \n\nIt will be running FreeBSD 8.1 x64, PG 9.0.2, running streaming replication to a like server.\n\nI have read the performance tuning book written by Greg Smith, and am using it as a guide to configure it for performance.\n\nThe main questions which I have are the following:\n\n \n\nIs the 25% RAM for shared memory still a good number to go with for this size server?\n\nThere are approximately 50 tables which get updated with almost 100% records updated every 5 minutes - what is a good number of autovacuum processes to have on these? The current server I am replacing only has 3 of them but I think I may gain a benefit from having more.\n\nCurrently I have what I believe to be an aggressive bgwriter setting as follows:\n\n \n\nbgwriter_delay = 200ms # 10-10000ms between rounds\n\nbgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round \n\nbgwriter_lru_multiplier = 10 # 0-10.0 multipler on buffers scanned/round\n\n \n\nDoes this look right?\n\n \n\nI have the following settings:\n\nwork_mem = 64MB # min 64kB\n\nmaintenance_work_mem = 128MB # min 1MB\n\n \n\nAnd, of course, some of the most critical ones - the WAL settings. Right now, in order to give the best performance to the end users due to the size of the current box, I have a very unoptimal setting in my opinion \n\n \n\nfsync = off # turns forced synchronization on or off\n\n#synchronous_commit = on # immediate fsync at commit\n\n#wal_sync_method = fsync # the default is the first option\n\n # supported by the operating system:\n\n # open_datasync\n\n # fdatasync\n\n # fsync\n\n # fsync_writethrough\n\n # open_sync\n\nfull_page_writes = on # recover from partial page writes\n\nwal_buffers = 16MB\n\n#wal_buffers = 1024KB # min 32kB\n\n # (change requires restart)\n\n# wal_writer_delay = 100ms # 1-10000 milliseconds\n\n \n\n#commit_delay = 0 # range 0-100000, in microseconds\n\n#commit_siblings = 5 # range 1-1000\n\n \n\n# - Checkpoints -\n\n \n\n#checkpoint_segments = 128 # in logfile segments, min 1, 16MB each\n\ncheckpoint_segments = 1024\n\ncheckpoint_timeout = 60min # range 30s-1h\n\n#checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0\n\ncheckpoint_completion_target = 0.1\n\ncheckpoint_warning = 45min # 0 disables\n\n \n\nThese are values which I arrived to by playing with them to make sure that the end user performance did not suffer. The checkpoints are taking about 8 minutes to complete, but between checkpoints the disk i/o on the data partition is very minimal - when I had lower segments running a 15 minute timeout with a .9 completion target, the platform was fairly slow vis-à-vis the end user.\n\n \n\nThe above configuration is using PG 8.4.\n\n \n\nThanks in advance for any insight.\n\n\nScott,I don’t know if you received my private email, but just in case you did not I am posting the infomration here. I have a new set of servers coming in – Dual Xeon E5620’s, 96GB RAM, 18 spindles (1 RAID1 for OS – SATA, 12 disk RAID10 for data – SAS, RAID-1 for logs – SAS, 2 hot spares SAS).  
They are replacing a single Dual Xeon E5406 with 16GB RAM and 2x RAID1 – one for OS/Data, one for Logs.Current server is using 3840MB of shared buffers. It will be running FreeBSD 8.1 x64, PG 9.0.2, running streaming replication to a like server.I have read the performance tuning book written by Greg Smith, and am using it as a guide to configure it for performance.The main questions which I have are the following: Is the 25% RAM for shared memory still a good number to go with for this size server?There are approximately 50 tables which get updated with almost 100% records updated every 5 minutes – what is a good number of autovacuum processes to have on these?  The current server I am replacing only has 3 of them but I think I may gain a benefit from having more.Currently I have what I believe to be an aggressive bgwriter setting as follows: bgwriter_delay = 200ms                  # 10-10000ms between roundsbgwriter_lru_maxpages = 1000            # 0-1000 max buffers written/round     bgwriter_lru_multiplier = 10            # 0-10.0 multipler on buffers scanned/round Does this look right? I have the following settings:work_mem = 64MB                         # min 64kBmaintenance_work_mem = 128MB            # min 1MB And, of course, some of the most critical ones – the WAL settings.  Right now, in order to give the best performance to the end users due to the size of the current box, I have a very unoptimal setting in my opinion  fsync = off                             # turns forced synchronization on or off#synchronous_commit = on                # immediate fsync at commit#wal_sync_method = fsync                # the default is the first option                                        # supported by the operating system:                                        #   open_datasync                                        #   fdatasync                                        #   fsync                                        #   fsync_writethrough                                        #   open_syncfull_page_writes = on                   # recover from partial page writeswal_buffers = 16MB#wal_buffers = 1024KB                   # min 32kB                                        # (change requires restart)# wal_writer_delay = 100ms              # 1-10000 milliseconds                                        #commit_delay = 0                       # range 0-100000, in microseconds#commit_siblings = 5                    # range 1-1000 # - Checkpoints - #checkpoint_segments = 128              # in logfile segments, min 1, 16MB eachcheckpoint_segments = 1024checkpoint_timeout = 60min              # range 30s-1h#checkpoint_completion_target = 0.5     # checkpoint target duration, 0.0 - 1.0checkpoint_completion_target = 0.1checkpoint_warning = 45min              # 0 disables These are values which I arrived to by playing with them to make sure that the end user performance did not suffer.  The checkpoints are taking about 8 minutes to complete, but between checkpoints the disk i/o on the data partition is very minimal – when I had lower segments running a 15 minute timeout with a .9 completion target, the platform was fairly slow vis-à-vis the end user. The above configuration is using PG 8.4. Thanks in advance for any insight.", "msg_date": "Mon, 31 Jan 2011 16:55:32 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Configuration for a new server." 
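Not part of the original exchange, but a quick way to confirm which of the settings under discussion are actually in effect on the running server; the parameter list below simply covers the values quoted in this message:

SELECT name, setting, unit, source
  FROM pg_settings
 WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                'bgwriter_delay', 'bgwriter_lru_maxpages',
                'bgwriter_lru_multiplier', 'fsync', 'wal_buffers',
                'checkpoint_segments', 'checkpoint_timeout',
                'checkpoint_completion_target', 'autovacuum_max_workers')
 ORDER BY name;

Running it before and after editing postgresql.conf (and reloading) makes it obvious whether a change actually took effect, which is easy to get wrong when carrying a configuration over to new hardware.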
}, { "msg_contents": "Benjamin Krajmalnik wrote:\n>\n> have a new set of servers coming in -- Dual Xeon E5620's, 96GB RAM, \n> 18 spindles (1 RAID1 for OS -- SATA, 12 disk RAID10 for data -- SAS, \n> RAID-1 for logs -- SAS, 2 hot spares SAS).\n>\n\nYou didn't mention the RAID controller and its cache setup. That's a \ncritical piece to get write, err, right. Presumably you've got a \nbattery-backed RAID cache on your SAS controller. Knowing that and what \nmodel it is (to make sure it's one of the ones that performs well) would \nbe good info to pass along here.\n \n>\n> Is the 25% RAM for shared memory still a good number to go with for \n> this size server?\n>\n\nSeveral people have reported to me they see drop-offs in performance \nbetween 8GB and 10GB for that setting. I currently recommend limiting \nshared_buffers to 8GB until we have more data on why that is. You \nsuggested already having checkpoint issues, too; if that's true, you \ndon't want to dedicate too much RAM to the database for that reason, too.\n\n> There are approximately 50 tables which get updated with almost 100% \n> records updated every 5 minutes -- what is a good number of autovacuum \n> processes to have on these? The current server I am replacing only \n> has 3 of them but I think I may gain a benefit from having more.\n>\n\nWatch pg_stat_user_tables and you can figure this out for your \nworkload. There are no generic answers in this area.\n\n> Currently I have what I believe to be an aggressive bgwriter setting \n> as follows:\n>\n> \n>\n> bgwriter_delay = 200ms # 10-10000ms between rounds\n>\n> bgwriter_lru_maxpages = 1000 # 0-1000 max buffers \n> written/round \n>\n> bgwriter_lru_multiplier = 10 # 0-10.0 multipler on buffers \n> scanned/round\n>\n> \n>\n> Does this look right?\n>\n\nYou'd probably be better off decreasing the delay rather than pushing up \nthe other two parameters. It's easy to tell if you did it right or not; \njust look at pg_stat_bgwriter. If buffers_backend is high relative to \nthe others, that means the multiplier or delay is wrong. Or if \nmaxwritten_clean is increasing fast, that means bgwriter_lru_maxpages is \ntoo low.\n\n\n> These are values which I arrived to by playing with them to make sure \n> that the end user performance did not suffer. The checkpoints are \n> taking about 8 minutes to complete, but between checkpoints the disk \n> i/o on the data partition is very minimal -- when I had lower segments \n> running a 15 minute timeout with a .9 completion target, the platform \n> was fairly slow vis-�-vis the end user.\n>\n\nThe completion target isn't the main driver here, the number of \nsegments/timeout is. When you space checkpoints out further, the actual \namount of total I/O the server does decreases, both to the WAL and to \nthe main database. So I suspect your tweaking the target had little \nimpact, and it's possible you might even get smoother performance if you \nput it back to a higher value again.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nBenjamin Krajmalnik wrote:\n\n\n\n\n\n have a new set of servers coming in – Dual Xeon\nE5620’s, 96GB RAM, 18 spindles (1 RAID1 for OS – SATA, 12 disk RAID10\nfor data – SAS, RAID-1 for logs – SAS, 2 hot spares SAS). \n\n\n\n\nYou didn't mention the RAID controller and its cache setup.  That's a\ncritical piece to get write, err, right.  
Presumably you've got a\nbattery-backed RAID cache on your SAS controller.  Knowing that and\nwhat model it is (to make sure it's one of the ones that performs well)\nwould be good info to pass along here.\n \n\n\nIs the 25% RAM for shared memory still a good\nnumber to go with for this size server?\n\n\n\nSeveral people have reported to me they see drop-offs in performance\nbetween 8GB and 10GB for that setting.  I currently recommend limiting\nshared_buffers to 8GB until we have more data on why that is.  You\nsuggested already having checkpoint issues, too; if that's true, you\ndon't want to dedicate too much RAM to the database for that reason,\ntoo.\n\n\n\nThere are approximately 50 tables which get\nupdated with almost 100% records updated every 5 minutes – what is a\ngood number of autovacuum processes to have on these?  The current\nserver I am replacing only has 3 of them but I think I may gain a\nbenefit from having more.\n\n\n\nWatch pg_stat_user_tables and you can figure this out for your\nworkload.  There are no generic answers in this area.\n\n\n\nCurrently I have what I believe to be an\naggressive bgwriter setting as follows:\n \nbgwriter_delay = 200ms                  #\n10-10000ms between rounds\nbgwriter_lru_maxpages = 1000            # 0-1000\nmax buffers written/round     \nbgwriter_lru_multiplier = 10            # 0-10.0\nmultipler on buffers scanned/round\n \nDoes this look right?\n\n\n\nYou'd probably be better off decreasing the delay rather than pushing\nup the other two parameters.  It's easy to tell if you did it right or\nnot; just look at pg_stat_bgwriter.  If buffers_backend is high\nrelative to the others, that means the multiplier or delay is wrong. \nOr if maxwritten_clean is increasing fast, that means\nbgwriter_lru_maxpages is too low.\n\n\n\n\nThese are values which I arrived to by playing\nwith them to make sure that the end user performance did not suffer. \nThe checkpoints are taking about 8 minutes to complete, but between\ncheckpoints the disk i/o on the data partition is very minimal – when I\nhad lower segments running a 15 minute timeout with a .9 completion\ntarget, the platform was fairly slow vis-à-vis the end user.\n\n\n\nThe completion target isn't the main driver here, the number of\nsegments/timeout is.  When you space checkpoints out further, the\nactual amount of total I/O the server does decreases, both to the WAL\nand to the main database.  So I suspect your tweaking the target had\nlittle impact, and it's possible you might even get smoother\nperformance if you put it back to a higher value again.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Tue, 01 Feb 2011 06:53:49 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration for a new server." }, { "msg_contents": "Greg,\n\n \n\nThank you very much for your quick response.\n\nThe servers are using Areca 1600 series controllers with battery backup and 2GB cache.\n\nI really enjoyed your book (actually, both of the books your company published). 
Found them extremely helpful and they filled a lot of gaps in my still gappy knowledge J\n\n \n\n \n\n \n\n \n\nFrom: Greg Smith [mailto:[email protected]] \nSent: Tuesday, February 01, 2011 4:54 AM\nTo: Benjamin Krajmalnik\nCc: [email protected]\nSubject: Re: [PERFORM] Configuration for a new server.\n\n \n\nBenjamin Krajmalnik wrote: \n\n \n\n have a new set of servers coming in - Dual Xeon E5620's, 96GB RAM, 18 spindles (1 RAID1 for OS - SATA, 12 disk RAID10 for data - SAS, RAID-1 for logs - SAS, 2 hot spares SAS). \n\n\nYou didn't mention the RAID controller and its cache setup. That's a critical piece to get write, err, right. Presumably you've got a battery-backed RAID cache on your SAS controller. Knowing that and what model it is (to make sure it's one of the ones that performs well) would be good info to pass along here.\n \n\nIs the 25% RAM for shared memory still a good number to go with for this size server?\n\n\nSeveral people have reported to me they see drop-offs in performance between 8GB and 10GB for that setting. I currently recommend limiting shared_buffers to 8GB until we have more data on why that is. You suggested already having checkpoint issues, too; if that's true, you don't want to dedicate too much RAM to the database for that reason, too.\n\n\n\n\nThere are approximately 50 tables which get updated with almost 100% records updated every 5 minutes - what is a good number of autovacuum processes to have on these? The current server I am replacing only has 3 of them but I think I may gain a benefit from having more.\n\n\nWatch pg_stat_user_tables and you can figure this out for your workload. There are no generic answers in this area.\n\n\n\n\nCurrently I have what I believe to be an aggressive bgwriter setting as follows:\n\n \n\nbgwriter_delay = 200ms # 10-10000ms between rounds\n\nbgwriter_lru_maxpages = 1000 # 0-1000 max buffers written/round \n\nbgwriter_lru_multiplier = 10 # 0-10.0 multipler on buffers scanned/round\n\n \n\nDoes this look right?\n\n\nYou'd probably be better off decreasing the delay rather than pushing up the other two parameters. It's easy to tell if you did it right or not; just look at pg_stat_bgwriter. If buffers_backend is high relative to the others, that means the multiplier or delay is wrong. Or if maxwritten_clean is increasing fast, that means bgwriter_lru_maxpages is too low.\n\n\n\n\n\nThese are values which I arrived to by playing with them to make sure that the end user performance did not suffer. The checkpoints are taking about 8 minutes to complete, but between checkpoints the disk i/o on the data partition is very minimal - when I had lower segments running a 15 minute timeout with a .9 completion target, the platform was fairly slow vis-à-vis the end user.\n\n\nThe completion target isn't the main driver here, the number of segments/timeout is. When you space checkpoints out further, the actual amount of total I/O the server does decreases, both to the WAL and to the main database. 
So I suspect your tweaking the target had little impact, and it's possible you might even get smoother performance if you put it back to a higher value again.\n\n\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\nGreg, Thank you very much for your quick response.The servers are using Areca 1600 series controllers with battery backup and 2GB cache.I really enjoyed your book (actually, both of the books your company published).  Found them extremely helpful and they filled a lot of gaps in my still gappy knowledge J    From: Greg Smith [mailto:[email protected]] Sent: Tuesday, February 01, 2011 4:54 AMTo: Benjamin KrajmalnikCc: [email protected]: Re: [PERFORM] Configuration for a new server. Benjamin Krajmalnik wrote:   have a new set of servers coming in – Dual Xeon E5620’s, 96GB RAM, 18 spindles (1 RAID1 for OS – SATA, 12 disk RAID10 for data – SAS, RAID-1 for logs – SAS, 2 hot spares SAS). You didn't mention the RAID controller and its cache setup.  That's a critical piece to get write, err, right.  Presumably you've got a battery-backed RAID cache on your SAS controller.  Knowing that and what model it is (to make sure it's one of the ones that performs well) would be good info to pass along here.  Is the 25% RAM for shared memory still a good number to go with for this size server?Several people have reported to me they see drop-offs in performance between 8GB and 10GB for that setting.  I currently recommend limiting shared_buffers to 8GB until we have more data on why that is.  You suggested already having checkpoint issues, too; if that's true, you don't want to dedicate too much RAM to the database for that reason, too.There are approximately 50 tables which get updated with almost 100% records updated every 5 minutes – what is a good number of autovacuum processes to have on these?  The current server I am replacing only has 3 of them but I think I may gain a benefit from having more.Watch pg_stat_user_tables and you can figure this out for your workload.  There are no generic answers in this area.Currently I have what I believe to be an aggressive bgwriter setting as follows: bgwriter_delay = 200ms                  # 10-10000ms between roundsbgwriter_lru_maxpages = 1000            # 0-1000 max buffers written/round     bgwriter_lru_multiplier = 10            # 0-10.0 multipler on buffers scanned/round Does this look right?You'd probably be better off decreasing the delay rather than pushing up the other two parameters.  It's easy to tell if you did it right or not; just look at pg_stat_bgwriter.  If buffers_backend is high relative to the others, that means the multiplier or delay is wrong.  Or if maxwritten_clean is increasing fast, that means bgwriter_lru_maxpages is too low.These are values which I arrived to by playing with them to make sure that the end user performance did not suffer.  The checkpoints are taking about 8 minutes to complete, but between checkpoints the disk i/o on the data partition is very minimal – when I had lower segments running a 15 minute timeout with a .9 completion target, the platform was fairly slow vis-à-vis the end user.The completion target isn't the main driver here, the number of segments/timeout is.  When you space checkpoints out further, the actual amount of total I/O the server does decreases, both to the WAL and to the main database.  
So I suspect your tweaking the target had little impact, and it's possible you might even get smoother performance if you put it back to a higher value again.-- Greg Smith   2ndQuadrant US    [email protected]   Baltimore, MDPostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Tue, 1 Feb 2011 09:46:59 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuration for a new server." }, { "msg_contents": "There are approximately 50 tables which get updated with almost 100%\nrecords updated every 5 minutes - what is a good number of autovacuum\nprocesses to have on these? The current server I am replacing only has\n3 of them but I think I may gain a benefit from having more.\n\n\nWatch pg_stat_user_tables and you can figure this out for your workload.\nThere are no generic answers in this area.\n\nWhat in particular should I be looking at to help me decide?\n\n \n\n \n\nCurrently I have what I believe to be an aggressive bgwriter setting as\nfollows:\n\n \n\nbgwriter_delay = 200ms # 10-10000ms between rounds\n\nbgwriter_lru_maxpages = 1000 # 0-1000 max buffers\nwritten/round \n\nbgwriter_lru_multiplier = 10 # 0-10.0 multipler on buffers\nscanned/round\n\n \n\nDoes this look right?\n\n\nYou'd probably be better off decreasing the delay rather than pushing up\nthe other two parameters. It's easy to tell if you did it right or not;\njust look at pg_stat_bgwriter. If buffers_backend is high relative to\nthe others, that means the multiplier or delay is wrong. Or if\nmaxwritten_clean is increasing fast, that means bgwriter_lru_maxpages is\ntoo low.\n\ncheckpoints_timed = 261\n\ncheckpoints_req = 0\n\nbuffers_checkpoint = 49058438\n\nbuffers_clean = 3562421\n\nmaxwritten_clean = 243\n\nbuffers_backend = 11774254\n\nbuffers_alloc = 42816578\n\n\nThere are approximately 50 tables which get updated with almost 100% records updated every 5 minutes – what is a good number of autovacuum processes to have on these?  The current server I am replacing only has 3 of them but I think I may gain a benefit from having more.Watch pg_stat_user_tables and you can figure this out for your workload.  There are no generic answers in this area.What in particular should I be looking at to help me decide?  Currently I have what I believe to be an aggressive bgwriter setting as follows: bgwriter_delay = 200ms                  # 10-10000ms between roundsbgwriter_lru_maxpages = 1000            # 0-1000 max buffers written/round     bgwriter_lru_multiplier = 10            # 0-10.0 multipler on buffers scanned/round Does this look right?You'd probably be better off decreasing the delay rather than pushing up the other two parameters.  It's easy to tell if you did it right or not; just look at pg_stat_bgwriter.  If buffers_backend is high relative to the others, that means the multiplier or delay is wrong.  Or if maxwritten_clean is increasing fast, that means bgwriter_lru_maxpages is too low.checkpoints_timed = 261checkpoints_req = 0buffers_checkpoint = 49058438buffers_clean = 3562421maxwritten_clean = 243buffers_backend = 11774254buffers_alloc = 42816578", "msg_date": "Tue, 1 Feb 2011 10:22:21 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuration for a new server." 
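The proportions that the reply below works out by eye can also be computed directly; this only assumes the pre-9.1 pg_stat_bgwriter columns quoted above:

SELECT buffers_checkpoint, buffers_clean, buffers_backend,
       round(100.0 * buffers_clean /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
           AS pct_cleaner,
       round(100.0 * buffers_backend /
             nullif(buffers_checkpoint + buffers_clean + buffers_backend, 0), 1)
           AS pct_backend
  FROM pg_stat_bgwriter;

With the counters quoted in this message that comes out to roughly 5.5% of buffers written by the cleaner and about 18% written directly by backends, which is the imbalance the next reply points at.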
}, { "msg_contents": "Benjamin Krajmalnik wrote:\n>\n> There are approximately 50 tables which get updated with almost 100% \n> records updated every 5 minutes -- what is a good number of autovacuum \n> processes to have on these? The current server I am replacing only \n> has 3 of them but I think I may gain a benefit from having more.\n>\n>\n> Watch pg_stat_user_tables and you can figure this out for your \n> workload. There are no generic answers in this area.\n>\n> What in particular should I be looking at to help me decide?\n>\n\n\nThe information reported that's related to vacuuming. If you don't have \nenough workers, you can watch the dead row counts pop upwards without \nenough \"last autovacuum time\" updates on enough tables to suggest it's \nkeeping up. If you see >20% dead rows on lots of tables and they're not \nbeing processed by AV and having their timestamps, that's your sign that \nyou don't have enough workers.\n\n\n> You'd probably be better off decreasing the delay rather than pushing \n> up the other two parameters. It's easy to tell if you did it right or \n> not; just look at pg_stat_bgwriter. If buffers_backend is high \n> relative to the others, that means the multiplier or delay is wrong. \n> Or if maxwritten_clean is increasing fast, that means \n> bgwriter_lru_maxpages is too low.\n>\n> checkpoints_timed = 261\n>\n> checkpoints_req = 0\n>\n> buffers_checkpoint = 49,058,438\n>\n> buffers_clean = 3,562,421\n>\n> maxwritten_clean = 243\n>\n> buffers_backend = 11,774,254\n>\n> buffers_alloc = 42,816,578\n>\n\nSee how buffers_backend is much larger than buffers_clean, even though \nmaxwritten_clean is low? That means the background writer isn't running \noften enough to keep up with cleaning things, even though it does a lot \nof work when it does kick in. In your situation I'd normally do a first \npass by cutting bgwriter_lru_maxpages to 1/4 of what it is now, cut \nbgwriter_delay to 1/4 as well (to 50ms), and then see how the \nproportions change. You can probably cut the multiplier, too, yet still \nsee more pages written by the cleaner.\n\nI recommend saving a snapsot of this data with a timestamp, i.e.:\n\nselect now(),* from pg_stat_bgwriter;\n\nAnytime you make a change to one of the background writer or checkpoint \ntiming parameters. That way you have a new baseline to compare \nagainst. These numbers aren't very useful with a single value, but once \nyou get two of them with timestamps you can compute all sorts of fun \nstatistics from the pair.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nBenjamin Krajmalnik wrote:\n\n\n\n\n\n\nThere are approximately 50 tables which get\nupdated with almost 100% records updated every 5 minutes – what is a\ngood number of autovacuum processes to have on these?  The current\nserver I am replacing only has 3 of them but I think I may gain a\nbenefit from having more.\n\nWatch pg_stat_user_tables and you can figure this out for your\nworkload.  There are no generic answers in this area.\n\nWhat\nin particular should I be looking at to help me decide?\n\n\n\n\n\nThe information reported that's related to vacuuming.  If you don't\nhave enough workers, you can watch the dead row counts pop upwards\nwithout enough \"last autovacuum time\" updates on enough tables to\nsuggest it's keeping up.  
If you see >20% dead rows on lots of\ntables and they're not being processed by AV and having their\ntimestamps, that's your sign that you don't have enough workers.\n\n\n\n\n\nYou'd\nprobably be better off decreasing the delay rather than pushing up the\nother two parameters.  It's easy to tell if you did it right or not;\njust look at pg_stat_bgwriter.  If buffers_backend is high relative to\nthe others, that means the multiplier or delay is wrong.  Or if\nmaxwritten_clean is increasing fast, that means bgwriter_lru_maxpages\nis too low.\n\ncheckpoints_timed\n= 261\ncheckpoints_req\n= 0\nbuffers_checkpoint\n= 49,058,438\nbuffers_clean\n= 3,562,421\nmaxwritten_clean\n= 243\nbuffers_backend\n= 11,774,254\nbuffers_alloc\n= 42,816,578\n\n\n\n\nSee how buffers_backend is much larger than buffers_clean, even though\nmaxwritten_clean is low?  That means the background writer isn't\nrunning often enough to keep up with cleaning things, even though it\ndoes a lot of work when it does kick in.  In your situation I'd\nnormally do a first pass by cutting bgwriter_lru_maxpages to 1/4 of\nwhat it is now, cut bgwriter_delay to 1/4 as well (to 50ms), and then\nsee how the proportions change.  You can probably cut the multiplier,\ntoo, yet still see more pages written by the cleaner.\n\nI recommend saving a snapsot of this data with a timestamp, i.e.:\n\nselect now(),* from pg_stat_bgwriter;\n\nAnytime you make a change to one of the background writer or checkpoint\ntiming parameters.  That way you have a new baseline to compare\nagainst.  These numbers aren't very useful with a single value, but\nonce you get two of them with timestamps you can compute all sorts of\nfun statistics from the pair.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Tue, 01 Feb 2011 20:16:26 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration for a new server." }, { "msg_contents": "\n\n>See how buffers_backend is much larger than buffers_clean, even though maxwritten_clean is low?  That means the background writer isn't running often enough to keep up with cleaning things, even though >it does a lot of work when it does kick in.  In your situation I'd normally do a first pass by cutting bgwriter_lru_maxpages to 1/4 of what it is now, cut bgwriter_delay to 1/4 as well (to 50ms), and >then see how the proportions change.  You can probably cut the multiplier, too, yet still see more pages written by the cleaner.\n\n>I recommend saving a snapsot of this data with a timestamp, i.e.:\n\n>select now(),* from pg_stat_bgwriter;\n\n>Anytime you make a change to one of the background writer or checkpoint timing parameters.  That way you have a new baseline to compare against.  These numbers aren't very useful with a single value, >but once you get two of them with timestamps you can compute all sorts of fun statistics from the pair.\n\nSo, if I understand correctly, I should strive for a relative increase in buffers_clean to buffers_backend\n\n\n", "msg_date": "Wed, 2 Feb 2011 10:46:06 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Configuration for a new server." }, { "msg_contents": "Benjamin Krajmalnik wrote:\n> So, if I understand correctly, I should strive for a relative increase in buffers_clean to buffers_backend\n> \n\nRight. 
Buffers written by a backend are the least efficient way to get \ndata out of the buffer cache, because the client running into that is \nstuck waiting for a write call before it can use the resulting free \nblock. You want to avoid those in favor of checkpoint and \nbackground-writer cleaner writes instead.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 02 Feb 2011 13:22:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Configuration for a new server." } ]
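A sketch of the snapshot-keeping idea suggested earlier in this thread, using an arbitrarily named history table; the delta query relies on the window functions available from 8.4 onward:

-- Seed a history table with the current counters.
CREATE TABLE bgwriter_snapshot AS
  SELECT now() AS snapped_at, * FROM pg_stat_bgwriter;

-- Later, after changing a background writer or checkpoint setting and
-- letting the server run with it for a while, append another snapshot.
INSERT INTO bgwriter_snapshot
  SELECT now(), * FROM pg_stat_bgwriter;

-- Per-interval deltas show where buffers were written between snapshots.
SELECT snapped_at,
       buffers_checkpoint - lag(buffers_checkpoint) OVER w AS checkpoint_writes,
       buffers_clean      - lag(buffers_clean)      OVER w AS cleaner_writes,
       buffers_backend    - lag(buffers_backend)    OVER w AS backend_writes
  FROM bgwriter_snapshot
WINDOW w AS (ORDER BY snapped_at)
 ORDER BY snapped_at;

Comparing the deltas before and after each change shows whether the share of backend writes is actually dropping, rather than judging from the raw lifetime counters.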
[ { "msg_contents": "Dear list,\n\nIs there an exhaustive list of what takes what locks and how long they last?\n I'm asking because we just had some trouble doing a hot db change to an\n8.3.6 system. I know it is an old version but it is what I have to work\nwith. You can reproduce it like so:\n\nFirst:\nDROP TABLE IF EXISTS foo;\nDROP TABLE IF EXISTS account;\n\nCREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING\nNOT NULL);\nCREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account\n(account_id), stuff CHARACTER VARYING);\n\nIn one connection:\nINSERT INTO account (name) SELECT generate_series FROM GENERATE_SERIES(0,\n10000000);\n\nIn another connection while that last one is running:\nDROP TABLE foo;\n\nAnd in another connection if you are feeling frisky:\n select\n pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid,\npg_locks.mode, pg_locks.granted,\n pg_stat_activity.usename,pg_stat_activity.current_query,\npg_stat_activity.query_start,\n age(now(),pg_stat_activity.query_start) as \"age\",\npg_stat_activity.procpid\n from pg_stat_activity,pg_locks left\n outer join pg_class on (pg_locks.relation = pg_class.oid)\n where pg_locks.pid=pg_stat_activity.procpid order by query_start;\n\nThat query shows that the DROP takes an AccessExclusiveLock on account.\n This isn't totally unexpected but it is unfortunate because it means we\nhave to wait for a downtime window to maintain constraints even if they are\nnot really in use.\n\nThis isn't exactly how our workload actually works. Ours is more deadlock\nprone. We have many connections all querying account and we do the\nmigration in a transaction. It looks as though the AccessExclusiveLock is\nheld until the transaction terminates.\n\nNik Everett\n\nDear list,Is there an exhaustive list of what takes what locks and how long they last?  I'm asking because we just had some trouble doing a hot db change to an 8.3.6 system.  I know it is an old version but it is what I have to work with.  You can reproduce it like so:\nFirst:DROP TABLE IF EXISTS foo;DROP TABLE IF EXISTS account;CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING NOT NULL);\nCREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account (account_id), stuff CHARACTER VARYING);In one connection:INSERT INTO account (name) SELECT generate_series FROM GENERATE_SERIES(0, 10000000);\nIn another connection while that last one is running:DROP TABLE foo;And in another connection if you are feeling frisky:   select \n     pg_stat_activity.datname,pg_class.relname,pg_locks.transactionid, pg_locks.mode, pg_locks.granted,\n     pg_stat_activity.usename,pg_stat_activity.current_query, pg_stat_activity.query_start,      age(now(),pg_stat_activity.query_start) as \"age\", pg_stat_activity.procpid    from pg_stat_activity,pg_locks left \n     outer join pg_class on (pg_locks.relation = pg_class.oid)     where pg_locks.pid=pg_stat_activity.procpid order by query_start;That query shows that the DROP takes an AccessExclusiveLock on account.  This isn't totally unexpected but it is unfortunate because it means we have to wait for a downtime window to maintain constraints even if they are not really in use.\nThis isn't exactly how our workload actually works.  Ours is more deadlock prone.  We have many connections all querying account and we do the migration in a transaction.  
It looks as though the AccessExclusiveLock is held until the transaction terminates.\nNik Everett", "msg_date": "Tue, 1 Feb 2011 14:18:37 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Exhaustive list of what takes what locks" }, { "msg_contents": "On Tue, Feb 01, 2011 at 02:18:37PM -0500, Nikolas Everett wrote:\n> Is there an exhaustive list of what takes what locks and how long they last?\n\nThis documents which commands take each lock type, but it is not exhaustive:\nhttp://www.postgresql.org/docs/current/interactive/explicit-locking.html\n\nAll locks on user-created database objects last until the transaction ends.\nThis does not apply to advisory locks. Also, many commands internally take\nlocks on system catalogs and release those locks as soon as possible.\n\n> CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING\n> NOT NULL);\n> CREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account\n> (account_id), stuff CHARACTER VARYING);\n\n> DROP TABLE foo;\n\n> That query shows that the DROP takes an AccessExclusiveLock on account.\n> This isn't totally unexpected but it is unfortunate because it means we\n> have to wait for a downtime window to maintain constraints even if they are\n> not really in use.\n\nPostgreSQL 9.1 will contain changes to make similar operations, though not that\none, take ShareRowExclusiveLock instead of AccessExclusiveLock. Offhand, the\nsame optimization probably could be arranged for it with minimal fuss. If\n\"account\" is heavily queried but seldom changed, that might be enough for you.\n\nThe internal implementation of a FOREIGN KEY constraint takes the form of\ntriggers on both tables. Each INSERT or UPDATE needs to know definitively\nwhether to fire a given trigger, so adding or removing an arbitrary trigger will\ncontinue to require at least ShareRowExclusiveLock. In the abstract, the\nspecial case of a FOREIGN KEY constraint could be looser still, but that would\nbe tricky to implement.\n\nnm\n", "msg_date": "Wed, 2 Feb 2011 00:20:04 -0500", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "Nikolas Everett wrote:\n> Is there an exhaustive list of what takes what locks and how long they \n> last? I'm asking because we just had some trouble doing a hot db \n> change to an 8.3.6 system. I know it is an old version but it is what \n> I have to work with.\n\nThere haven't been any major changes in this area since then, it \nwouldn't really matter if you were on a newer version. The short answer \nto your question is that no, there is no such list. The documentation \nat \nhttp://www.postgresql.org/docs/current/interactive/explicit-locking.html \nand \nhttp://www.postgresql.org/docs/current/interactive/view-pg-locks.html \nare unfortunately as good as it gets right now. The subject is a bit \nmore complicated even than it appears at first, given that you don't \njust need to take into account what statement is executing. You need to \nknow things like whether any foreign keys are involved as well as what \nindex type is used (see \nhttp://www.postgresql.org/docs/current/interactive/locking-indexes.html \n) to fully predict what the locking situation for your SQL is going to \nbecome. 
It's a fairly big grid of things to take into account.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 02 Feb 2011 13:58:48 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "Given that the a list would be difficult to maintain, is there some way I\ncan make Postgres spit out the list of what locks are taken?\n\n--Nik\n\nOn Wed, Feb 2, 2011 at 1:58 PM, Greg Smith <[email protected]> wrote:\n\n> Nikolas Everett wrote:\n>\n>> Is there an exhaustive list of what takes what locks and how long they\n>> last? I'm asking because we just had some trouble doing a hot db change to\n>> an 8.3.6 system. I know it is an old version but it is what I have to work\n>> with.\n>>\n>\n> There haven't been any major changes in this area since then, it wouldn't\n> really matter if you were on a newer version. The short answer to your\n> question is that no, there is no such list. The documentation at\n> http://www.postgresql.org/docs/current/interactive/explicit-locking.htmland\n> http://www.postgresql.org/docs/current/interactive/view-pg-locks.html are\n> unfortunately as good as it gets right now. The subject is a bit more\n> complicated even than it appears at first, given that you don't just need to\n> take into account what statement is executing. You need to know things like\n> whether any foreign keys are involved as well as what index type is used\n> (see\n> http://www.postgresql.org/docs/current/interactive/locking-indexes.html )\n> to fully predict what the locking situation for your SQL is going to become.\n> It's a fairly big grid of things to take into account.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n\nGiven that the a list would be difficult to maintain, is there some way I can make Postgres spit out the list of what locks are taken?--NikOn Wed, Feb 2, 2011 at 1:58 PM, Greg Smith <[email protected]> wrote:\nNikolas Everett wrote:\n\nIs there an exhaustive list of what takes what locks and how long they last?  I'm asking because we just had some trouble doing a hot db change to an 8.3.6 system.  I know it is an old version but it is what I have to work with.\n\n\nThere haven't been any major changes in this area since then, it wouldn't really matter if you were on a newer version.  The short answer to your question is that no, there is no such list.  The documentation at http://www.postgresql.org/docs/current/interactive/explicit-locking.html and http://www.postgresql.org/docs/current/interactive/view-pg-locks.html are unfortunately as good as it gets right now.  The subject is a bit more complicated even than it appears at first, given that you don't just need to take into account what statement is executing.  You need to know things like whether any foreign keys are involved as well as what index type is used (see http://www.postgresql.org/docs/current/interactive/locking-indexes.html ) to fully predict what the locking situation for your SQL is going to become.  
It's a fairly big grid of things to take into account.\n\n\n-- \nGreg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 2 Feb 2011 14:53:49 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "On Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett <[email protected]> wrote:\n\n> Given that the a list would be difficult to maintain, is there some way I\n> can make Postgres spit out the list of what locks are taken?\n>\n> --Nik\n>\n\nI just answered my own question -\ncompile with -DLOCK_DEBUG in your src/Makefile.custom and then SET\nTRACK_LOCKS=true when you want it.\n\n--Nik\n\nOn Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett <[email protected]> wrote:\nGiven that the a list would be difficult to maintain, is there some way I can make Postgres spit out the list of what locks are taken?--NikI just answered my own question -\ncompile with -DLOCK_DEBUG in your src/Makefile.custom and then SET TRACK_LOCKS=true when you want it.--Nik", "msg_date": "Wed, 2 Feb 2011 15:29:50 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "On Wed, Feb 2, 2011 at 3:29 PM, Nikolas Everett <[email protected]> wrote:\n\n>\n>\n> On Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett <[email protected]> wrote:\n>\n>> Given that the a list would be difficult to maintain, is there some way I\n>> can make Postgres spit out the list of what locks are taken?\n>>\n>> --Nik\n>>\n>\n> I just answered my own question -\n> compile with -DLOCK_DEBUG in your src/Makefile.custom and then SET\n> TRACK_LOCKS=true when you want it.\n>\n> --Nik\n>\n\nI just wrote a script to parse the output of postgres' log file into\nsomething more useful to me. I'm not sure that it is right but it certainly\nseems to be working.\n\nI shoved the script here in case it is useful to anyone:\nhttps://github.com/nik9000/Postgres-Tools\n\nOn Wed, Feb 2, 2011 at 3:29 PM, Nikolas Everett <[email protected]> wrote:\nOn Wed, Feb 2, 2011 at 2:53 PM, Nikolas Everett <[email protected]> wrote:\nGiven that the a list would be difficult to maintain, is there some way I can make Postgres spit out the list of what locks are taken?--NikI just answered my own question -\ncompile with -DLOCK_DEBUG in your src/Makefile.custom and then SET TRACK_LOCKS=true when you want it.--Nik \nI just wrote a script to parse the output of postgres' log file into something more useful to me.  I'm not sure that it is right but it certainly seems to be working.\nI shoved the script here in case it is useful to anyone:  https://github.com/nik9000/Postgres-Tools", "msg_date": "Wed, 2 Feb 2011 17:12:19 -0500", "msg_from": "Nikolas Everett <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "On Tue, Feb 1, 2011 at 2:18 PM, Nikolas Everett <[email protected]> wrote:\n> This isn't exactly how our workload actually works.  Ours is more deadlock\n> prone.  We have many connections all querying account and we do the\n> migration in a transaction.  It looks as though the AccessExclusiveLock is\n> held until the transaction terminates.\n\nUnfortunately, that's necessary for correctness. 
:-(\n\nI'd really like to figure out some way to make these cases work with\nless locking. 9.1 will have some improvements in this area, as\nregards ALTER TABLE, but dropping a constraint will still require\nAccessExclusiveLock.\n\nThere are even workloads where competition for AccessShareLock on the\ntarget table is a performance bottleneck (try pgbench -S -c 36 -j 36\nor so). I've been idly mulling over whether there's any way to\neliminate that locking or at least make it uncontended in the common\ncase, but so far haven't thought of a solution that I'm entirely happy\nwith.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 3 Feb 2011 12:52:34 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "On Wed, Feb 2, 2011 at 12:20 AM, Noah Misch <[email protected]> wrote:\n>> CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING\n>> NOT NULL);\n>> CREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account\n>> (account_id), stuff CHARACTER VARYING);\n>\n>> DROP TABLE foo;\n>\n>> That query shows that the DROP takes an AccessExclusiveLock on account.\n>>  This isn't totally unexpected but it is unfortunate because it means we\n>> have to wait for a downtime window to maintain constraints even if they are\n>> not really in use.\n>\n> PostgreSQL 9.1 will contain changes to make similar operations, though not that\n> one, take ShareRowExclusiveLock instead of AccessExclusiveLock.  Offhand, the\n> same optimization probably could be arranged for it with minimal fuss.  If\n> \"account\" is heavily queried but seldom changed, that might be enough for you.\n\nThe problem is that constraints can affect the query plan. If a\ntransaction sees the constraint in the system catalogs (under\nSnapshotNow) but the table data doesn't conform (under some earlier\nsnapshot) and if the chosen plan depends on the validity of the\nconstraint, then we've got trouble. At least when running at READ\nCOMMITTED, taking an AccessExclusiveLock protects us against that\nhazard (I'm not exactly sure what if anything protects us at higher\nisolation levels... but I hope there is something).\n\nNow, it's true that in the specific case of a foreign key constraint,\nwe don't currently have anything in the planner that depends on that.\nBut I'm hoping to get around to working on inner join removal again\none of these days.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 22 Feb 2011 22:18:36 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> The problem is that constraints can affect the query plan. If a\n> transaction sees the constraint in the system catalogs (under\n> SnapshotNow) but the table data doesn't conform (under some earlier\n> snapshot) and if the chosen plan depends on the validity of the\n> constraint, then we've got trouble. At least when running at READ\n> COMMITTED, taking an AccessExclusiveLock protects us against that\n> hazard (I'm not exactly sure what if anything protects us at higher\n> isolation levels... but I hope there is something).\n\nInteresting point. 
If we really wanted to make that work \"right\",\nwe might have to do something like the hack that's in place for CREATE\nINDEX CONCURRENTLY, wherein there's a notion that an index can't be used\nby a transaction with xmin before some horizon. Not entirely convinced\nit's worth the trouble, but ...\n\n> Now, it's true that in the specific case of a foreign key constraint,\n> we don't currently have anything in the planner that depends on that.\n> But I'm hoping to get around to working on inner join removal again\n> one of these days.\n\nYeah, that sort of thing will certainly be there eventually.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 22 Feb 2011 22:34:08 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks " }, { "msg_contents": "On Tue, Feb 22, 2011 at 10:18:36PM -0500, Robert Haas wrote:\n> On Wed, Feb 2, 2011 at 12:20 AM, Noah Misch <[email protected]> wrote:\n> >> CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING\n> >> NOT NULL);\n> >> CREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account\n> >> (account_id), stuff CHARACTER VARYING);\n> >\n> >> DROP TABLE foo;\n> >\n> >> That query shows that the DROP takes an AccessExclusiveLock on account.\n> >> ?This isn't totally unexpected but it is unfortunate because it means we\n> >> have to wait for a downtime window to maintain constraints even if they are\n> >> not really in use.\n> >\n> > PostgreSQL 9.1 will contain changes to make similar operations, though not that\n> > one, take ShareRowExclusiveLock instead of AccessExclusiveLock. ?Offhand, the\n> > same optimization probably could be arranged for it with minimal fuss. ?If\n> > \"account\" is heavily queried but seldom changed, that might be enough for you.\n> \n> The problem is that constraints can affect the query plan. If a\n> transaction sees the constraint in the system catalogs (under\n> SnapshotNow) but the table data doesn't conform (under some earlier\n> snapshot) and if the chosen plan depends on the validity of the\n> constraint, then we've got trouble. At least when running at READ\n> COMMITTED, taking an AccessExclusiveLock protects us against that\n> hazard (I'm not exactly sure what if anything protects us at higher\n> isolation levels... but I hope there is something).\n\nAccessExclusiveLock does not prevent that problem. We're already on thin ice in\nthis regard:\n\n-- session 1\nCREATE TABLE t (x) AS SELECT NULL::int;\nBEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\nSELECT 1;\n-- session 2\nDELETE FROM t;\nALTER TABLE t ALTER x SET NOT NULL;\n-- session 1\nTABLE t;\n\nWith contortions, we can coax the same from READ COMMITTED:\n\n-- session 1\nCREATE TABLE t (x) AS SELECT NULL::int;\nCREATE FUNCTION pg_temp.f() RETURNS int LANGUAGE sql\n\tSTABLE -- reuse snapshot\n\tAS 'SELECT 1; TABLE t'; -- extra statement to avoid inlining\nVALUES (pg_sleep(15), pg_temp.f());\n-- session 2\nDELETE FROM t;\nALTER TABLE t ALTER x SET NOT NULL;\n\nThe catalogs say x is NOT NULL, but we read a NULL value just the same. 
I'm not\nsure what anomalies this permits today, if any, but it's in the same vein.\n", "msg_date": "Tue, 22 Feb 2011 23:21:08 -0500", "msg_from": "Noah Misch <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "On Tue, Feb 22, 2011 at 11:21 PM, Noah Misch <[email protected]> wrote:\n> On Tue, Feb 22, 2011 at 10:18:36PM -0500, Robert Haas wrote:\n>> On Wed, Feb 2, 2011 at 12:20 AM, Noah Misch <[email protected]> wrote:\n>> >> CREATE TABLE account (account_id SERIAL PRIMARY KEY, name CHARACTER VARYING\n>> >> NOT NULL);\n>> >> CREATE TABLE foo (account_id INTEGER NOT NULL REFERENCES account\n>> >> (account_id), stuff CHARACTER VARYING);\n>> >\n>> >> DROP TABLE foo;\n>> >\n>> >> That query shows that the DROP takes an AccessExclusiveLock on account.\n>> >> ?This isn't totally unexpected but it is unfortunate because it means we\n>> >> have to wait for a downtime window to maintain constraints even if they are\n>> >> not really in use.\n>> >\n>> > PostgreSQL 9.1 will contain changes to make similar operations, though not that\n>> > one, take ShareRowExclusiveLock instead of AccessExclusiveLock. ?Offhand, the\n>> > same optimization probably could be arranged for it with minimal fuss. ?If\n>> > \"account\" is heavily queried but seldom changed, that might be enough for you.\n>>\n>> The problem is that constraints can affect the query plan.  If a\n>> transaction sees the constraint in the system catalogs (under\n>> SnapshotNow) but the table data doesn't conform (under some earlier\n>> snapshot) and if the chosen plan depends on the validity of the\n>> constraint, then we've got trouble.  At least when running at READ\n>> COMMITTED, taking an AccessExclusiveLock protects us against that\n>> hazard (I'm not exactly sure what if anything protects us at higher\n>> isolation levels... but I hope there is something).\n>\n> AccessExclusiveLock does not prevent that problem.  We're already on thin ice in\n> this regard:\n>\n> -- session 1\n> CREATE TABLE t (x) AS SELECT NULL::int;\n> BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;\n> SELECT 1;\n> -- session 2\n> DELETE FROM t;\n> ALTER TABLE t ALTER x SET NOT NULL;\n> -- session 1\n> TABLE t;\n>\n> With contortions, we can coax the same from READ COMMITTED:\n>\n> -- session 1\n> CREATE TABLE t (x) AS SELECT NULL::int;\n> CREATE FUNCTION pg_temp.f() RETURNS int LANGUAGE sql\n>        STABLE -- reuse snapshot\n>        AS 'SELECT 1; TABLE t'; -- extra statement to avoid inlining\n> VALUES (pg_sleep(15), pg_temp.f());\n> -- session 2\n> DELETE FROM t;\n> ALTER TABLE t ALTER x SET NOT NULL;\n>\n> The catalogs say x is NOT NULL, but we read a NULL value just the same.  I'm not\n> sure what anomalies this permits today, if any, but it's in the same vein.\n\nUgh. Well, I guess if we want to fix that we need the conxmin bit Tom\nwas just musing about. That sucks.\n\nI wonder if it'd be safe to reduce the locking strength for *dropping*\na constraint, though. The comment just says:\n\n case AT_DropConstraint: /* as DROP INDEX */\n\n...but that begs the question of why DROP INDEX needs an\nAccessExclusiveLock. 
It probably needs such a lock *on the index* but\nI don't see why we'd need it on the table.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 23 Feb 2011 12:21:12 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> ...but that begs the question of why DROP INDEX needs an\n> AccessExclusiveLock. It probably needs such a lock *on the index* but\n> I don't see why we'd need it on the table.\n\nSome other session might be in process of planning a query on the table.\nIt would be sad if the index it had chosen turned out to have vanished\nmeanwhile. You could perhaps confine DROP INDEX's ex-lock to the index,\nbut only at the price of making the planner take out a lock on every\nindex it considers even transiently. Which isn't going to be a net\nimprovement.\n\n(While we're on the subject, I have strong suspicions that most of what\nSimon did this cycle on ALTER TABLE lock strength reduction is\nhopelessly broken and will have to be reverted. It's on my to-do list\nto try to break that patch during beta, and I expect to succeed.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2011 12:31:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks " }, { "msg_contents": "On Wed, Feb 23, 2011 at 12:31 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> ...but that begs the question of why DROP INDEX needs an\n>> AccessExclusiveLock.  It probably needs such a lock *on the index* but\n>> I don't see why we'd need it on the table.\n>\n> Some other session might be in process of planning a query on the table.\n> It would be sad if the index it had chosen turned out to have vanished\n> meanwhile.  You could perhaps confine DROP INDEX's ex-lock to the index,\n> but only at the price of making the planner take out a lock on every\n> index it considers even transiently.  Which isn't going to be a net\n> improvement.\n\nOh. I assumed we were doing that anyway. If not, yeah.\n\n> (While we're on the subject, I have strong suspicions that most of what\n> Simon did this cycle on ALTER TABLE lock strength reduction is\n> hopelessly broken and will have to be reverted.  It's on my to-do list\n> to try to break that patch during beta, and I expect to succeed.)\n\nIt wouldn't surprise me if there are some holes there. But I'd like\nto try to preserve as much of it as we can, and I think there's\nprobably a good chunk of it that is OK.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 23 Feb 2011 12:59:31 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Exhaustive list of what takes what locks" } ]
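Short of recompiling with -DLOCK_DEBUG as above, the locks a statement takes can also be watched from a second session by leaving the statement open in a transaction and querying pg_locks. A minimal sketch, reusing the account/foo example from earlier in the thread (pg_locks and pg_backend_pid() are standard, so nothing here needs a debug build):

-- session 1: run the DDL and leave the transaction open so its locks are still held
BEGIN;
DROP TABLE foo;

-- session 2: show every lock held or awaited by other backends
SELECT l.locktype, l.mode, l.granted, l.pid, c.relname
FROM pg_locks l
LEFT JOIN pg_class c ON c.oid = l.relation
WHERE l.pid <> pg_backend_pid()
ORDER BY c.relname, l.mode;

-- session 1: release everything
ROLLBACK;

Among other things this makes the AccessExclusiveLock the DROP takes on account (via the foreign key) directly visible.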
[ { "msg_contents": "We're building a new database box. With the help of Gregory Smith's \nbook, we're benchmarking the box: We want to know that we've set it up \nright, we want numbers to go back to if we have trouble later, and we \nwant something to compare our _next_ box against. What I'd like to know \nis, are the performance numbers we're getting in the ballpark for the \nclass of hardware we've picked?\n\nFirst, the setup:\n\nCPU: Two AMD Opteron 6128 (Magny-Cours) 2000 mHz, eight cores each\nRAM: DDR3-1333 64 GB (ECC)\nRAID: 3Ware 9750 SAS2/SATA-II PCIe, 512 MB battery backed cache, \nwrite-back caching enabled.\nDrives: 16 Seagate ST3500414SS 500GB 7200RPM SAS, 16 MB cache:\n 2 RAID1 ($PG_DATA/xlog)\n 12 RAID10 ($PG_DATA)\n 2 hot spare\nPostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.3.real \n(Debian 4.3.4-2) 4.3.4, 64-bit\nFile system: XFS (nobarrier, noatime)\ni/o scheduler: noop\n\nDatabase config (differences from stock that might affect performance):\nshared_buffers = 8192MB\ntemp_buffers = 16MB\nwork_mem = 192MB\nmaintenance_work_mem = 5GB\nwal_buffers = 8MB\ncheckpoint_segments = 64\ncheckpoint_completion_target = 0.9\nrandom_page_cost = 1.0\nconstraint_exclusion = on\n\nNow, the test results:\n\nMemtest86+ says our memory bandwidth is:\n L1 32,788 MB/S\n L2 is 10,050 MB/S\n L3 is 6,826 MB/S\n\nStream v5.9 says:\n 1 core: 4,320\n 2 cores: 8,387\n 4 cores: 15,840\n 8 cores: 23,088\n 16 cores: 24,286\n\nBonnie++ (-f -n 0 -c 4)\n $PGDATA/xlog (RAID1)\n random seek: 369/sec\n block out: 87 MB/sec\n block in: 180 MB/sec\n $PGDATA (RAID10, 12 drives)\n random seek: 452\n block out: 439 MB/sec\n block in: 881 MB/sec\n\nsysbench test of fsync (commit) rate:\n\n $PGDATA/xlog (RAID1)\n cache off: 29 req/sec\n cache on: 9,342 req/sec\n $PGDATA (RAID10, 12 drives)\n cache off: 61 req/sec\n cache on: 8,191 req/sec\n\npgbench-tools:\n\n Averages for test set 1 by scale:\n avg_\n set \tclients tps \tlatency\t90%< \tmax_latency\n 1 \t1 \t29141 \t0.248 \t0.342 \t5.453\n 1 \t10 \t31467 \t0.263 \t0.361 \t7.148\n 1 \t100 \t31081 \t0.265 \t0.363 \t7.843\n 1 \t1000 \t29499 \t0.278 \t0.365 \t11.264\n\n Averages for test set 1 by clients:\n avg_\n set \tclients tps \tlatency\t90%< \tmax_latency\n 1 \t1 \t9527 \t0.102 \t0.105 \t1.5\n 1 \t2 \t13850 \t0.14 \t0.195 \t5.316\n 1 \t4 \t19148 \t0.19 \t0.251 \t2.228\n 1 \t8 \t44101 \t0.179 \t0.248 \t2.557\n 1 \t16 \t50311 \t0.315 \t0.381 \t11.057\n 1 \t32 \t47765 \t0.666 \t0.989 \t24.076\n\nWe've used Brad Fitzpatrick's diskchecker script to show that the i/o \nstack is telling the truth when it comes to fsync.\n\nAre there any nails sticking up that we need to pound on before we start \nmore extensive (real-world-ish) testing with this box?\n", "msg_date": "Tue, 01 Feb 2011 17:23:02 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Are we in the ballpark?" }, { "msg_contents": "Wayne Conrad wrote:\n> We're building a new database box. With the help of Gregory Smith's \n> book, we're benchmarking the box: We want to know that we've set it up \n> right, we want numbers to go back to if we have trouble later, and we \n> want something to compare our _next_ box against.\n\nDo you not want any excitement in your life?\n\n> PostgreSQL 8.4.1 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.3.real \n> (Debian 4.3.4-2) 4.3.4, 64-bit\n\n8.4.7 is current; there are a lot of useful fixes to be had. 
See if you \ncan get a newer Debian package installed before you go live with this.\n\n\n> File system: XFS (nobarrier, noatime)\n\nShould probably add \"logbufs=8\" in there too.\n\n\n> shared_buffers = 8192MB\n> temp_buffers = 16MB\n> work_mem = 192MB\n> maintenance_work_mem = 5GB\n> wal_buffers = 8MB\n> checkpoint_segments = 64\n> checkpoint_completion_target = 0.9\n> random_page_cost = 1.0\n> constraint_exclusion = on\n\nThat work_mem is a bit on the scary side of things, given how much \nmemory is allocated to other things. Just be careful you don't get a \nlot of connections and run out of server RAM.\n\nMight as well bump wal_buffers up to 16MB and be done with it.\n\nSetting random_page_cost to 1.0 is essentially telling the server the \nentire database is cached in RAM. If that's not true, you don't want to \ngo quite that far in reducing it.\n\nWith 8.4, you should be able to keep constraint_exclusion at its default \nof 'partition' and have that work as expected; any particular reason you \nforced it to always be 'on'?\n\n> Bonnie++ (-f -n 0 -c 4)\n> $PGDATA/xlog (RAID1)\n> random seek: 369/sec\n> block out: 87 MB/sec\n> block in: 180 MB/sec\n> $PGDATA (RAID10, 12 drives)\n> random seek: 452\n> block out: 439 MB/sec\n> block in: 881 MB/sec\n>\n> sysbench test of fsync (commit) rate:\n>\n> $PGDATA/xlog (RAID1)\n> cache off: 29 req/sec\n> cache on: 9,342 req/sec\n> $PGDATA (RAID10, 12 drives)\n> cache off: 61 req/sec\n> cache on: 8,191 req/sec\n\nThat random seek rate is a little low for 12 drives, but that's probably \nthe limitations of the 3ware controller kicking in there. Your \"cache \noff\" figures are really weird though; I'd expect those both to be around \n100. Makes me wonder if something weird is happening in the controller, \nor if there was a problem with your config when testing that. Not a big \ndeal, really--the cached numbers are normally going to be the important \nones--but it is odd.\n\nYour pgbench SELECT numbers look fine, but particularly given that \ncommit oddity here I'd recommend running some of the standard TPC-B-like \ntests, too, just to be completely sure there's no problem here. You \nshould get results that look like \"Set 3: Longer ext3 tests\" in the set \nI've published to http://www.2ndquadrant.us/pgbench-results/index.htm \npresuming you let those run for 10 minutes or so. The server those came \noff of has less RAM and disks than yours, so you'll fit larger database \nscales into memory before performance falls off, but that gives you \nsomething to compare against.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 01 Feb 2011 20:30:20 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we in the ballpark?" }, { "msg_contents": "Greg, It's so nice to get a reply from the author of *the book*. Thank \nyou for taking the time to help us out.\n\nOn 02/01/11 18:30, Greg Smith wrote:\n> Do you not want any excitement in your life?\n\nI've had database excitement enough to last a lifetime. That's why I'm \nmending my ways. Your book is the first step of our 12 step program.\n\n> 8.4.7 is current; there are a lot of useful fixes to be had. 
See if you\n> can get a newer Debian package installed before you go live with this.\n\nI'll look for 8.4.7, but we'll be switching to 9 before too long.\n\n>> File system: XFS (nobarrier, noatime)\n>\n> Should probably add \"logbufs=8\" in there too.\n\nWill do.\n\n>> work_mem = 192MB\n>> wal_buffers = 8MB\n>> random_page_cost = 1.0\n>\n> That work_mem is a bit on the scary side of things, given how much\n> memory is allocated to other things. Just be careful you don't get a lot\n> of connections and run out of server RAM.\n\nThat's a leftover from the days when we *really* didn't know what we're \ndoing (now we only *mostly* don't know what we're doing). I'll set \nwork_mem down to something less scary.\n\n> Might as well bump wal_buffers up to 16MB and be done with it.\n\nWill do.\n\n> Setting random_page_cost to 1.0 is essentially telling the server the\n> entire database is cached in RAM. If that's not true, you don't want to\n> go quite that far in reducing it.\n\nOops, that was a typo. We've set random_page_cost to 2, not 1.\n\n> With 8.4, you should be able to keep constraint_exclusion at its default\n> of 'partition' and have that work as expected; any particular reason you\n> forced it to always be 'on'?\n\nSee \"we really didn't know what we were doing.\" We'll leave \nconstraint_exclusion at its default.\n\n>> Bonnie++ (-f -n 0 -c 4)\n>> $PGDATA/xlog (RAID1)\n>> random seek: 369/sec\n>> block out: 87 MB/sec\n>> block in: 180 MB/sec\n>> $PGDATA (RAID10, 12 drives)\n>> random seek: 452\n>> block out: 439 MB/sec\n>> block in: 881 MB/sec\n>>\n>> sysbench test of fsync (commit) rate:\n>>\n>> $PGDATA/xlog (RAID1)\n>> cache off: 29 req/sec\n>> cache on: 9,342 req/sec\n>> $PGDATA (RAID10, 12 drives)\n>> cache off: 61 req/sec\n>> cache on: 8,191 req/sec\n>\n> That random seek rate is a little low for 12 drives, but that's probably\n> the limitations of the 3ware controller kicking in there. Your \"cache\n> off\" figures are really weird though; I'd expect those both to be around\n> 100. Makes me wonder if something weird is happening in the controller,\n> or if there was a problem with your config when testing that. Not a big\n> deal, really--the cached numbers are normally going to be the important\n> ones--but it is odd.\n\nI also thought the \"cache off\" figures were odd. I expected something \nmuch closer to 120 req/sec (7200 rpm drives). I probably won't \ninvestigate that with any vigor, since the cache-on numbers are OK.\n\n> Your pgbench SELECT numbers look fine, but particularly given that\n> commit oddity here I'd recommend running some of the standard TPC-B-like\n> tests, too, just to be completely sure there's no problem here. You\n> should get results that look like \"Set 3: Longer ext3 tests\" in the set\n> I've published to http://www.2ndquadrant.us/pgbench-results/index.htm\n> presuming you let those run for 10 minutes or so. The server those came\n> off of has less RAM and disks than yours, so you'll fit larger database\n> scales into memory before performance falls off, but that gives you\n> something to compare against.\n\nTCB-B-like tests, will do.\n\nGreg, Thanks a million.\n\n Wayne Conrad\n\n", "msg_date": "Wed, 02 Feb 2011 10:06:53 -0700", "msg_from": "Wayne Conrad <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Are we in the ballpark?" 
}, { "msg_contents": "On Wed, Feb 02, 2011 at 10:06:53AM -0700, Wayne Conrad wrote:\n> On 02/01/11 18:30, Greg Smith wrote:\n> >>Bonnie++ (-f -n 0 -c 4)\n> >>$PGDATA/xlog (RAID1)\n> >>random seek: 369/sec\n> >>block out: 87 MB/sec\n> >>block in: 180 MB/sec\n> >>$PGDATA (RAID10, 12 drives)\n> >>random seek: 452\n> >>block out: 439 MB/sec\n> >>block in: 881 MB/sec\n> >>\n> >>sysbench test of fsync (commit) rate:\n> >>\n> >>$PGDATA/xlog (RAID1)\n> >>cache off: 29 req/sec\n> >>cache on: 9,342 req/sec\n> >>$PGDATA (RAID10, 12 drives)\n> >>cache off: 61 req/sec\n> >>cache on: 8,191 req/sec\n> >\n> >That random seek rate is a little low for 12 drives, but that's probably\n> >the limitations of the 3ware controller kicking in there. Your \"cache\n> >off\" figures are really weird though; I'd expect those both to be around\n> >100. Makes me wonder if something weird is happening in the controller,\n> >or if there was a problem with your config when testing that. Not a big\n> >deal, really--the cached numbers are normally going to be the important\n> >ones--but it is odd.\n> \n> I also thought the \"cache off\" figures were odd. I expected\n> something much closer to 120 req/sec (7200 rpm drives). I probably\n> won't investigate that with any vigor, since the cache-on numbers\n> are OK.\n\nYou may want to look into the \"cache off\" figures a little more. We\nrun a number of battery backed raid controllers and we test the\nbatteries every 6 months or so. When we test the batteries, the cache\ngoes off line (as it should) to help keep the data valid.\n\nIf you need to test your raid card batteries (nothing like having a\nbattery with only a 6 hour runtime when it takes you a couple of days\nMTTR), can your database app survive with that low a commit rate? As\nyou said you ar expecting something almost 4-5x faster with 7200 rpm\ndisks.\n\n-- \n\t\t\t\t-- rouilj\n\nJohn Rouillard System Administrator\nRenesys Corporation 603-244-9084 (cell) 603-643-9300 x 111\n", "msg_date": "Wed, 2 Feb 2011 19:51:01 +0000", "msg_from": "John Rouillard <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Are we in the ballpark?" } ]
[ { "msg_contents": "Hi, im César, im developing an app that saves information included in\n\"pg_stat_activity\" view in order to monitor querys. The objective of this\napp is to gather information about querys that take to long to finish and\noverload the server. I was wandering if I could see somehwere the\nimplementation of \"pg_stat_activity\" view, in order to not save the\ninformation in another table, I need to use historical information about\nquerys.\nIf you could help looking about this, or if you know about an app that\nalready do this, please let me know.\n\nHi, im César, im developing an app that saves information included in \"pg_stat_activity\" view in order to monitor querys. The objective of this app is to gather information about querys that take to long to finish and overload the server. I was wandering if I could see somehwere the implementation of \"pg_stat_activity\" view, in order to not save the information in another table, I need to use historical information about querys. \nIf you could help looking about this, or if you know about an app that already do this, please let me know.", "msg_date": "Wed, 2 Feb 2011 12:21:47 -0300", "msg_from": "Cesar Arrieta <[email protected]>", "msg_from_op": true, "msg_subject": "About pg_stat_activity" }, { "msg_contents": "On Wednesday 02 February 2011 16:21:47 Cesar Arrieta wrote:\n\nHi,\n\n> If you could help looking about this, or if you know about an app that\n> already do this, please let me know.\n\nhave a look for http://pgfouine.projects.postgresql.org/ \nand http://pgfoundry.org/projects/pgstatspack/\n\nHTH, \nJens\n", "msg_date": "Wed, 2 Feb 2011 17:03:42 +0100", "msg_from": "Jens Wilke <[email protected]>", "msg_from_op": false, "msg_subject": "monitoring querys Re: About pg_stat_activity" }, { "msg_contents": ">I was wandering if I could see somehwere the implementation of \"pg_stat_activity\" view\n\n>From psql\n\n\\d+ pg_stat_activity\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\nwww.truviso.com\n", "msg_date": "Wed, 2 Feb 2011 09:15:01 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About pg_stat_activity" }, { "msg_contents": "Cesar Arrieta wrote:\n> im developing an app that saves information included in \n> \"pg_stat_activity\" view in order to monitor querys. The objective of \n> this app is to gather information about querys that take to long to \n> finish and overload the server.\n\nI hope you're already setting log_min_duration_statement, then analyzing \nthe resulting logs using something like pgFouine. If you have \nPostgreSQL 8.4 or later, possibly add loading the auto_explain module as \nwell, or collecting the data using pg_stat_statements instead can be \nuseful. Trying to grab this info in real-time from pg_stat_activity \ninstead is a lot of work and won't give you results as good. 
If you're \nalready doing something like that and are just looking to increase the \namount of info you collect by also looking at pg_stat_activity, that can \nbe worthwhile.\n\nMaciek just gave the quickest answer to your main question; I'll just \nadd that reading the source code to the file system_views.sql will show \nyou how pg_stat_activity as well as other interesting built-in views work.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 02 Feb 2011 12:24:26 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About pg_stat_activity" } ]
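If the immediate goal is just to catch statements that run too long, a periodic poll of pg_stat_activity is enough; a sketch using the 8.3/8.4 column names (procpid and current_query were later renamed pid and query, with a separate state column), with an arbitrary 30-second threshold:

SELECT procpid,
       usename,
       datname,
       now() - query_start AS runtime,
       current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>'
  AND query_start < now() - interval '30 seconds'
ORDER BY runtime DESC;

Historical data still has to come from the logs (log_min_duration_statement plus pgFouine, or pg_stat_statements), since the view only shows the current instant.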
[ { "msg_contents": "Hi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.\nWith Hardware:\n* 8GB RAM\n* 8 processors Intel Xeon E5520 @2.27GHz\n* 250GB SATA DISK\n\nActually, it serves at most 250 connections.\nThe problem happends when it serves many many connections at a time, tables\nand queries began to get blocked, then I have to kill some processes\nin order to allow other people continue working.\n\nWich recommendations could you give me for to configure postgresql.conf, and\ncould it be eficcient to buy another server with almost same hardware\nin order to use pgPool2 with replication, load balance and parallel query?.\n\nHi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.With Hardware:* 8GB RAM* 8 processors Intel Xeon E5520 @2.27GHz* 250GB SATA DISKActually, it serves at most 250 connections. \nThe problem happends when it serves many many connections at a time, tables and queries began to get blocked, then I have to kill some processes in order to allow other people continue working.Wich recommendations could you give me for to configure postgresql.conf, and could it be eficcient to buy another server with almost same hardware\nin order to use pgPool2 with replication, load balance and parallel query?.", "msg_date": "Wed, 2 Feb 2011 15:15:22 -0300", "msg_from": "Cesar Arrieta <[email protected]>", "msg_from_op": true, "msg_subject": "Server Configuration" }, { "msg_contents": "On Wed, Feb 02, 2011 at 03:15:22PM -0300, Cesar Arrieta wrote:\n> Hi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.\n> With Hardware:\n> * 8GB RAM\n> * 8 processors Intel Xeon E5520 @2.27GHz\n> * 250GB SATA DISK\n> \n> Actually, it serves at most 250 connections.\n> The problem happends when it serves many many connections at a time, tables\n> and queries began to get blocked, then I have to kill some processes\n> in order to allow other people continue working.\n> \n> Wich recommendations could you give me for to configure postgresql.conf, and\n> could it be eficcient to buy another server with almost same hardware\n> in order to use pgPool2 with replication, load balance and parallel query?.\n\nIt sounds like you may just need a connection pooler (pgpool, pgbouncer)\nand it might work just fine.\n\nCheers,\nKen\n", "msg_date": "Wed, 2 Feb 2011 12:32:17 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Configuration" }, { "msg_contents": "I would personally highly recommend using pgBouncer! I have been using\nthis in production migrating from MySQL and have had phenomenal success\nwith it combined with lighttpd and php as an internal information\nsystem. I am getting on average 300 requests per second very low load\naverage as it is maintaining between 10 and 20 DB connections throughout\nthe day.\n\nI would also recommend splitting the application server and the DB\nserver if at all possible. Doing this would make it possible to do\nhorizontal scaling by way of a reverse proxy in front of the application\nserver.\n\nThe overhead I experienced of natively running Postgres with PHP was a\nperformance killer. 
Connecting to the DB and disconnecting was killing\nthe CPU - as it seems with your problem.\n\nWhat is the nature of the application which will be running on this\nserver?\n\nBest of Luck!\nRichard Carnes\n\n\n-----Original Message-----\nFrom: Kenneth Marshall [mailto:[email protected]] \nSent: Wednesday, February 02, 2011 1:32 PM\nTo: Cesar Arrieta\nCc: [email protected]\nSubject: Re: Server Configuration\n\n\nOn Wed, Feb 02, 2011 at 03:15:22PM -0300, Cesar Arrieta wrote:\n> Hi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.\n> With Hardware:\n> * 8GB RAM\n> * 8 processors Intel Xeon E5520 @2.27GHz\n> * 250GB SATA DISK\n> \n> Actually, it serves at most 250 connections.\n> The problem happends when it serves many many connections at a time,\ntables\n> and queries began to get blocked, then I have to kill some processes\n> in order to allow other people continue working.\n> \n> Wich recommendations could you give me for to configure\npostgresql.conf, and\n> could it be eficcient to buy another server with almost same hardware\n> in order to use pgPool2 with replication, load balance and parallel\nquery?.\n\nIt sounds like you may just need a connection pooler (pgpool, pgbouncer)\nand it might work just fine.\n\nCheers,\nKen\n\nThe information contained in this electronic message from CCH Small Firm Services, and any attachments, contains information that may be confidential and/or privileged. If you are not the intended recipient, you are hereby notified that any disclosure, copying, distribution or use of this information is strictly prohibited. If you have received this communication in error, please notify CCH Small Firm Services immediately by e-mail or by telephone at 770-857-5000, and destroy this communication. Thank you.\n", "msg_date": "Wed, 2 Feb 2011 14:11:12 -0500", "msg_from": "\"Richard Carnes\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Configuration" }, { "msg_contents": "On Wed, 2011-02-02 at 15:15 -0300, Cesar Arrieta wrote:\n> Hi, I have a Server with Fedora Core 11, Tomcat and Postgresql 8.3.\n> With Hardware:\n> * 8GB RAM\n> * 8 processors Intel Xeon E5520 @2.27GHz\n> * 250GB SATA DISK\n> \n> Actually, it serves at most 250 connections. \n> The problem happends when it serves many many connections at a time,\n> tables and queries began to get blocked, then I have to kill some\n> processes \n> in order to allow other people continue working.\n> \n> Wich recommendations could you give me for to configure\n> postgresql.conf, and could it be eficcient to buy another server with\n> almost same hardware\n> in order to use pgPool2 with replication, load balance and parallel\n> query?.\nMy first recommedation is to update your PostgreSQL version to 9.0 and\nlater you can use the PgPool-II version 3.0 in order to use the Hot\nstandby/Streaming Replication features with it.\n\nHere is a example of the configuration of PgPool-II 3.0.1 and\nPostgreSQL-9.0.2\nhttp://lists.pgfoundry.org/pipermail/pgpool-general/2011-February/003338.html\n\nRegards\n\n\n\n-- \nIng. Marcos Luís Ortíz Valmaseda\nSystem Engineer -- Database Administrator\n\nCentro de Tecnologías de Gestión de Datos (DATEC)\nUniversidad de las Ciencias Informáticas\nhttp://postgresql.uci.cu\n\n", "msg_date": "Wed, 02 Feb 2011 21:49:12 -0430", "msg_from": "Marcos Ortiz <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Server Configuration" } ]
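As a concrete illustration of the pooling suggestion, a minimal pgbouncer.ini in transaction pooling mode could look like the sketch below; the database name, file paths and pool sizes are placeholders to adapt, not recommendations:

[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20

The application then connects to port 6432 instead of 5432, and the ~250 client connections get funneled into a pool of 20 backend connections. Transaction pooling assumes the application does not rely on session state such as named prepared statements or temporary tables; otherwise use pool_mode = session.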
[ { "msg_contents": "I mistakenly replied to sender only.\n\nJon Nelson wrote:\n> However, sometimes using an index results in a HORRIBLE HORRIBLE plan.\n> I recently encountered the issue myself, and plopping an ANALYZE\n> $tablename in there, since I was using a temporary table anyway, make\n> all the difference. The planner switched from an index-based query to\n> a sequential scan, and a sequential scan was (is) vastly more\n> efficient in this particular case.\n> \n\nThat can be fixed by modifying the query. One can write the query in \nsuch a way that optimizer cannot use an index.\n\n> Personally, I'd get rid of autovacuum/autoanalyze support on temporary\n> tables (they typically have short lives and are often accessed\n> immediately after creation preventing the auto* stuff from being\n> useful anyway), *AND* every time I ask I'm always told \"make sure\n> ANALYZE the table before you use it\".\n>\n> \nI consider that requirement very bad. I hate it when I have to do things \nlike this:\ntry {\n $tmprows=array();\n $db->StartTrans();\n foreach ($result[\"matches\"] as $doc => $docinfo) {\n $tmp=$result[\"matches\"][$doc][\"attrs\"][\"created\"];\n $tmprows[]=array(date($FMT,$tmp),$doc);\n }\n $db->Execute($TMPINS,$tmprows);\n $db->CommitTrans();\n\n// Why the heck is this needed?\n\n $db->Execute(\"analyze tempids\");\n\n $tmprows=array();\n if ($result[\"total_found\"]>$result[\"total\"]) {\n print \"Total results:\" . $result[\"total_found\"] . \"<br>\";\n print \"Returned results:\" . $result[\"total\"] . \"<br>\";\n }\n $result=array();\n $rs = $db->Execute($IGEN, array($beg, $end));\n show($fmt,$rs);\n }\n catch(Exception $e) {\n\nThe \"analyze tempids\" line makes my code ugly and slows it down.\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n\n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Wed, 02 Feb 2011 14:31:43 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": true, "msg_subject": "[Fwd: Re: [HACKERS] Slow count(*) again...]" } ]
[ { "msg_contents": "Mladen Gogala wrote:\n \n> I'm 6'4\", 235LBS so telling me that you disagree and that I am more\n> stupid than a computer program, would not be a smart thing to do.\n \nEven if you had used a smiley there, that would have been incredibly\ninappropriate. I've never seen a computer program do anything so\nstupid, actually; so I'm quite sure you're not always operating to\nthe level they can manage.\n \n-Kevin\n", "msg_date": "Wed, 02 Feb 2011 15:16:56 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Slow count(*) again..." }, { "msg_contents": "On Wed, Feb 2, 2011 at 4:16 PM, Kevin Grittner\n<[email protected]> wrote:\n> Mladen Gogala  wrote:\n>\n>> I'm 6'4\", 235LBS so telling me that you disagree and that I am more\n>> stupid than a computer program, would not be a smart thing to do.\n>\n> Even if you had used a smiley there, that would have been incredibly\n> inappropriate.  I've never seen a computer program do anything so\n> stupid, actually; so I'm quite sure you're not always operating to\n> the level they can manage.\n\nBeep, time out. Everybody take a step or three back and calm down.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Feb 2011 16:19:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow count(*) again..." }, { "msg_contents": "I direct anyone who thought Mladen was making a serious comment to \nhttp://www.nydailynews.com/news/politics/2009/01/08/2009-01-08_misunderestimate_tops_list_of_notable_bu-3.html \nif you want to get his little joke there. I plan to start using \n\"misunderestimate\" more in the future when talking about planner \nerrors. Might even try to slip it into the docs at some point in the \nfuture and see if anybody catches it.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 02 Feb 2011 19:17:05 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow count(*) again..." }, { "msg_contents": "\n\nOn 02/02/2011 07:17 PM, Greg Smith wrote:\n> I direct anyone who thought Mladen was making a serious comment to \n> http://www.nydailynews.com/news/politics/2009/01/08/2009-01-08_misunderestimate_tops_list_of_notable_bu-3.html \n> if you want to get his little joke there. I plan to start using \n> \"misunderestimate\" more in the future when talking about planner \n> errors. Might even try to slip it into the docs at some point in the \n> future and see if anybody catches it.\n\nMy wings take dream ...\n\n\ncheers\n\nandrew\n", "msg_date": "Wed, 02 Feb 2011 19:25:21 -0500", "msg_from": "Andrew Dunstan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow count(*) again..." }, { "msg_contents": "Andrew Dunstan wrote:\n> \n> \n> On 02/02/2011 07:17 PM, Greg Smith wrote:\n> > I direct anyone who thought Mladen was making a serious comment to \n> > http://www.nydailynews.com/news/politics/2009/01/08/2009-01-08_misunderestimate_tops_list_of_notable_bu-3.html \n> > if you want to get his little joke there. I plan to start using \n> > \"misunderestimate\" more in the future when talking about planner \n> > errors. 
Might even try to slip it into the docs at some point in the \n> > future and see if anybody catches it.\n> \n> My wings take dream ...\n\nI think this humorous video really nails it:\n\n\thttp://www.youtube.com/watch?v=Km26gMI847Y\n\tPresidential Speechalist \n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n", "msg_date": "Wed, 2 Feb 2011 20:19:42 -0500 (EST)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Slow count(*) again..." } ]
[ { "msg_contents": "I'm setting up a dedicated linux postgres box with 2x300GB 15k SAS drive in\na RAID 1, though if future load dictates we would like to be able to upgrade\nto RAID 10. The hosting provider offers the following options for a RAID\ncontroller (all are the same price):\n\n ADAPTEC 3405 RAID Controller\n ADAPTEC 4800 RAID Controller\n LSI MegaRaid 8308 RAID Controller\n ADAPTEC 5405 RAID Controller\n ADAPTEC 5405Z RAID Controller\n ADAPTEC 5805 RAID Controller\n ADAPTEC 5805Z RAID Controller\n\nHowever, they can't guarantee that any particular RAID controller would be\nin stock when they are building the machine, so basically I would like to\nknow if any of these cards are sufficiently better or worse than the others\nthat I should either a) wait for a particular card or b) avoid a card.\n\nAlso, I am planning on replicating this box to a standby machine with\nstreaming replication. Given this, is there any reason not to use the write\ncache on the RAID controller assuming it has battery backup? My though\nbeing even in a worst case scenario if the BBU fails and the master DB gets\ncorrupted, the standby (not using any write cache) would still be ok sans a\nfew seconds of data (assuming the replication was keeping up, which would be\nmonitored).\n\n-Dan\n\nI'm setting up a dedicated linux postgres box with 2x300GB 15k SAS drive in a RAID 1, though if future load dictates we would like to be able to upgrade to RAID 10.  The hosting provider offers the following options for a RAID controller (all are the same price):\n ADAPTEC 3405 RAID Controller ADAPTEC 4800 RAID Controller\n LSI MegaRaid 8308 RAID Controller ADAPTEC 5405 RAID Controller ADAPTEC 5405Z RAID Controller ADAPTEC 5805 RAID Controller ADAPTEC 5805Z RAID Controller\nHowever, they can't guarantee that any particular RAID controller would be in stock when they are building the machine, so basically I would like to know if any of these cards are sufficiently better or worse than the others that I should either a) wait for a particular card or b) avoid a card.\nAlso, I am planning on replicating this box to a standby machine with streaming replication.  Given this, is there any reason not to use the write cache on the RAID controller assuming it has battery backup?  My though being even in a worst case scenario if the BBU fails and the master DB gets corrupted, the standby (not using any write cache) would still be ok sans a few seconds of data (assuming the replication was keeping up, which would be monitored).\n-Dan", "msg_date": "Wed, 2 Feb 2011 15:15:26 -0800", "msg_from": "Dan Birken <[email protected]>", "msg_from_op": true, "msg_subject": "Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On 03/02/11 07:15, Dan Birken wrote:\n\n> However, they can't guarantee that any particular RAID controller would\n> be in stock when they are building the machine, so basically I would\n> like to know if any of these cards are sufficiently better or worse than\n> the others that I should either a) wait for a particular card or b)\n> avoid a card.\n\nI don't know the Adaptec parts above (I avoid them these days) but AFAIK\nthe LSI is at least OK.\n\nWhatever RAID controller you get, make sure you have a battery backup\nunit (BBU) installed so you can safely enable write-back caching.\nWithout that, you might as well use software RAID - it'll generally be\nfaster (and cheaper) than HW RAID w/o a BBU.\n\nI get great results with Linux software RAID 10 on my Pg server - but\nthen, I'm not loading it particularly hard. 
(I continually wish the `md'\ndriver could use a PCIe/SATA battery-backed DRAM cache, because it'd be\ncapable of massively outperforming most HW raid implementation if only\nit could offer safe write-back caching using persistent cache.)\n\n> Also, I am planning on replicating this box to a standby machine with\n> streaming replication. Given this, is there any reason not to use the\n> write cache on the RAID controller assuming it has battery backup? My\n> though being even in a worst case scenario if the BBU fails and the\n> master DB gets corrupted, the standby (not using any write cache) would\n> still be ok sans a few seconds of data (assuming the replication was\n> keeping up, which would be monitored).\n\nThat sounds about right. The standby would be fine, the master would be\ntotaled.\n\nIf you're doing write-back caching without a BBU and things go down,\nit's not going to be neatly time-warped back a few seconds. Your DB will\nbe corrupted, possibly massively. One big advantage of write-back\ncaching is that it lets the controller batch and re-order writes for\nhigher disk throughput as well as lower latencies at the app level. The\ncost of that is that you lose the safety of ordered writes to WAL then\nheap; it's quite possible for physical media writes to hit the heap\nbefore the WAL, for newer writes to hit the WAL before older writes, etc\netc. Because of the cache, the OS and Pg never see or care about the\ncrazily inconsistent state of the actual disk media - unless the cache\nis lost, in which case you're screwed.\n\nAssume that a BBU failure will require restoration from a backup or from\na standby server. If you can't afford that, you should operate in\nwrite-through cache mode, possibly using synchronous commit and/or\ncommit delay options if you can afford a few seconds data loss on crash.\n\n-- \nSystem & Network Administrator\nPOST Newspapers\n", "msg_date": "Thu, 03 Feb 2011 10:00:33 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Dan Birken wrote:\n> ADAPTEC 3405 RAID Controller\n> ADAPTEC 4800 RAID Controller\n\nThe 3405 and 4800 are two of Adaptec's older cards with only 128MB of \ncache on them. Those are on the slow side compared to the others listed.\n\n> LSI MegaRaid 8308 RAID Controller\n> ADAPTEC 5405 RAID Controller\n> ADAPTEC 5405Z RAID Controller\n> ADAPTEC 5805 RAID Controller\n> ADAPTEC 5805Z RAID Controller\n\nThe LSI cards are some of the most popular and known to work well with \nPostgreSQL ones around.\n\nI've recently tested a system based on the 5405, and as much as I've \nhated Adaptec controllers in the past I have to admit this latest line \nfrom them is pretty solid. My own benchmarks and the others I've seen \nsuggest it's easily capable of keeping up with the LSI and Areca \ncontrollers Adaptec used to be seriously outrun by. 
See \nhttp://www.tomshardware.com/reviews/adaptec-serial-controllers,1806-14.html \nfor example.\n\nThe \"Z\" variations use their \"Zero-Maintenance Cache\" instead of a \nstandard battery-backup unit; that's a small amount of flash memory and \na supercap, similar to the good caches on SSD: \nhttp://www.adaptec.com/NR/rdonlyres/7FD8C372-8231-4727-B12B-5ABF79D9325C/0/6514_Series5Z_1_7.pdf\n\n5405 has 256MB of cache, the others 512MB.\n\nThe 5405 and 5805 models do have a known problem where they overheat if \nyou don't have enough cooling in the server box, with the 5805 seeming \nto be the bigger source of such issues. See the reviews at \nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16816103099 for \nexample. Scott Marlowe was griping recently about a similar issue in \nsome of the LSI models, too. I suspect it's a problem impacting several \nof the larger RAID cards that use the big Intel IOP processors for their \nRAID computations, given that's the part with the heatsink on it.\n\nQuick summary: avoid the Adaptec 3405 and 4800. Rest are decent \ncards. Just make sure you monitor the temperatures in your case (and \nthe card too if arcconf lets you, I haven't checked for that yet) if you \nend up with a 5405/5805.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\n\nDan Birken wrote:\n\n\n\n\n\n\n\n ADAPTEC\n3405 RAID Controller\n ADAPTEC 4800 RAID Controller\n\n\n\n\n\n\nThe 3405 and 4800 are two of Adaptec's older cards with only 128MB of\ncache on them.  Those are on the slow side compared to the others\nlisted.\n\n\n\n\n\n \nLSI MegaRaid 8308 RAID Controller\n ADAPTEC 5405 RAID Controller\n ADAPTEC 5405Z RAID Controller\n ADAPTEC 5805 RAID Controller\n ADAPTEC 5805Z RAID Controller\n\n\n\n\n\nThe LSI cards are some of the most popular and known to work well with\nPostgreSQL ones around.\n\nI've recently tested a system based on the 5405, and as much as I've\nhated Adaptec controllers in the past I have to admit this latest line\nfrom them is pretty solid.  My own benchmarks and the others I've seen\nsuggest it's easily capable of keeping up with the LSI and Areca\ncontrollers Adaptec used to be seriously outrun by.  See\nhttp://www.tomshardware.com/reviews/adaptec-serial-controllers,1806-14.html\nfor example.\n\nThe \"Z\" variations use their \"Zero-Maintenance Cache\" instead of a\nstandard battery-backup unit; that's a small amount of flash memory and\na supercap, similar to the good caches on SSD: \nhttp://www.adaptec.com/NR/rdonlyres/7FD8C372-8231-4727-B12B-5ABF79D9325C/0/6514_Series5Z_1_7.pdf\n\n5405 has 256MB of cache, the others 512MB.\n\nThe 5405 and 5805 models do have a known problem where they overheat if\nyou don't have enough cooling in the server box, with the 5805 seeming\nto be the bigger source of such issues.  See the reviews at\nhttp://www.newegg.com/Product/Product.aspx?Item=N82E16816103099 for\nexample.  Scott Marlowe was griping recently about a similar issue in\nsome of the LSI models, too.  I suspect it's a problem impacting\nseveral of the larger RAID cards that use the big Intel IOP processors\nfor their RAID computations, given that's the part with the heatsink on\nit.\n\nQuick summary:  avoid the Adaptec 3405 and 4800.  Rest are decent\ncards.  
Just make sure you monitor the temperatures in your case (and\nthe card too if arcconf lets you, I haven't checked for that yet) if\nyou\nend up with a 5405/5805.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Thu, 03 Feb 2011 00:46:02 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> wrote:\n> example.  Scott Marlowe was griping recently about a similar issue in some\n> of the LSI models, too.  I suspect it's a problem impacting several of the\n> larger RAID cards that use the big Intel IOP processors for their RAID\n> computations, given that's the part with the heatsink on it.\n\nSpecifically the LSI 8888 in a case that gave very low amounts of air\nflow over the RAID card. The case right above the card was quite hot,\nand the multilane cable was warm enough to almost burn my fingers when\nI pulled it out the back. I'm not sure any RAID card would have\nsurvived there, but the LSI 8888 LP definitely did not.\n", "msg_date": "Wed, 2 Feb 2011 22:58:38 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Wed, Feb 2, 2011 at 7:00 PM, Craig Ringer\n<[email protected]> wrote:\n> Whatever RAID controller you get, make sure you have a battery backup\n> unit (BBU) installed so you can safely enable write-back caching.\n> Without that, you might as well use software RAID - it'll generally be\n> faster (and cheaper) than HW RAID w/o a BBU.\n\nRecently we had to pull our RAID controllers and go to plain SAS\ncards. While random access dropped a bit, sequential throughput\nskyrocketed, saturating the 4 lane cable we use. 4x300Gb/s =\n1200Gb/s or right around 1G of data a second off the array. VERY\nimpressive.\n", "msg_date": "Wed, 2 Feb 2011 23:02:19 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Scott Marlowe wrote:\n> On Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> wrote:\n> \n>> example. Scott Marlowe was griping recently about a similar issue in some\n>> of the LSI models, too. I suspect it's a problem impacting several of the\n>> larger RAID cards that use the big Intel IOP processors for their RAID\n>> computations, given that's the part with the heatsink on it.\n>> \n>\n> Specifically the LSI 8888 in a case that gave very low amounts of air\n> flow over the RAID card. The case right above the card was quite hot,\n> and the multilane cable was warm enough to almost burn my fingers whe\n\nInteresting...that shoots down my theory. Now that I check, the LSI \n8888 uses their SAS1078 controller, which is based on a PowerPC 440 \nprocessor--it's not one of the Intel IOP processors at all. The 8308 \nDan has as an option is using the very popular Intel IOP333 instead, \nwhich is also used in some Areca 1200 series cards (1220/1230/1260).\n\nThe Adaptec 5405 and 5805 cards both use the Intel IOP348, as does the \nAreca 1680. Areca puts a fan right on it; Adaptec does not. 
I \nsuspect the only reason the 5805 cards have gotten more reports of \noverheating than the 5405 ones is just because having more drives \ntypically connected increases their actual workload. I don't think \nthere's actually any difference between the cooling situation between \nthe two otherwise.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> wrote:\n \n\nexample.  Scott Marlowe was griping recently about a similar issue in some\nof the LSI models, too.  I suspect it's a problem impacting several of the\nlarger RAID cards that use the big Intel IOP processors for their RAID\ncomputations, given that's the part with the heatsink on it.\n \n\n\nSpecifically the LSI 8888 in a case that gave very low amounts of air\nflow over the RAID card. The case right above the card was quite hot,\nand the multilane cable was warm enough to almost burn my fingers whe\n\n\nInteresting...that shoots down my theory.  Now that I check, the LSI\n8888 uses their SAS1078 controller, which is based on a PowerPC 440\nprocessor--it's not one of the Intel IOP processors at all.  The 8308\nDan has as an option is using the very popular Intel IOP333 instead,\nwhich is also used in some Areca 1200 series cards (1220/1230/1260).\n\nThe Adaptec 5405 and 5805 cards both use the Intel IOP348, as does the\nAreca 1680.  Areca puts a fan right on it; Adaptec does not.    I\nsuspect the only reason the 5805 cards have gotten more reports of\noverheating than the 5405 ones is just because having more drives\ntypically connected increases their actual workload.  I don't think\nthere's actually any difference between the cooling situation between\nthe two otherwise.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Thu, 03 Feb 2011 01:15:32 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Thank you everybody for the detailed answers, the help is well appreciated.\n\nA couple of follow-up questions:\n- Is the supercap + flash memory considered superior to the BBU in practice?\n Is that type of system well tested?\n- Is the linux support of the LSI and Adaptec cards comparable?\n\n-Dan\n\nOn Wed, Feb 2, 2011 at 10:15 PM, Greg Smith <[email protected]> wrote:\n\n> Scott Marlowe wrote:\n>\n> On Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> <[email protected]> wrote:\n>\n>\n> example. Scott Marlowe was griping recently about a similar issue in some\n> of the LSI models, too. I suspect it's a problem impacting several of the\n> larger RAID cards that use the big Intel IOP processors for their RAID\n> computations, given that's the part with the heatsink on it.\n>\n>\n> Specifically the LSI 8888 in a case that gave very low amounts of air\n> flow over the RAID card. The case right above the card was quite hot,\n> and the multilane cable was warm enough to almost burn my fingers whe\n>\n>\n> Interesting...that shoots down my theory. Now that I check, the LSI 8888\n> uses their SAS1078 controller, which is based on a PowerPC 440\n> processor--it's not one of the Intel IOP processors at all. 
The 8308 Dan\n> has as an option is using the very popular Intel IOP333 instead, which is\n> also used in some Areca 1200 series cards (1220/1230/1260).\n>\n> The Adaptec 5405 and 5805 cards both use the Intel IOP348, as does the\n> Areca 1680. Areca puts a fan right on it; Adaptec does not. I suspect\n> the only reason the 5805 cards have gotten more reports of overheating than\n> the 5405 ones is just because having more drives typically connected\n> increases their actual workload. I don't think there's actually any\n> difference between the cooling situation between the two otherwise.\n>\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n\nThank you everybody for the detailed answers, the help is well appreciated.A couple of follow-up questions:- Is the supercap + flash memory considered superior to the BBU in practice?  Is that type of system well tested?\n- Is the linux support of the LSI and Adaptec cards comparable?-DanOn Wed, Feb 2, 2011 at 10:15 PM, Greg Smith <[email protected]> wrote:\n\n\nScott Marlowe wrote:\n\nOn Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> wrote:\n \n\nexample.  Scott Marlowe was griping recently about a similar issue in some\nof the LSI models, too.  I suspect it's a problem impacting several of the\nlarger RAID cards that use the big Intel IOP processors for their RAID\ncomputations, given that's the part with the heatsink on it.\n \n\nSpecifically the LSI 8888 in a case that gave very low amounts of air\nflow over the RAID card. The case right above the card was quite hot,\nand the multilane cable was warm enough to almost burn my fingers whe\n\n\nInteresting...that shoots down my theory.  Now that I check, the LSI\n8888 uses their SAS1078 controller, which is based on a PowerPC 440\nprocessor--it's not one of the Intel IOP processors at all.  The 8308\nDan has as an option is using the very popular Intel IOP333 instead,\nwhich is also used in some Areca 1200 series cards (1220/1230/1260).\n\nThe Adaptec 5405 and 5805 cards both use the Intel IOP348, as does the\nAreca 1680.  Areca puts a fan right on it; Adaptec does not.    I\nsuspect the only reason the 5805 cards have gotten more reports of\noverheating than the 5405 ones is just because having more drives\ntypically connected increases their actual workload.  I don't think\nthere's actually any difference between the cooling situation between\nthe two otherwise.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 2 Feb 2011 22:30:31 -0800", "msg_from": "Dan Birken <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Wed, Feb 2, 2011 at 11:15 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>\n> On Wed, Feb 2, 2011 at 10:46 PM, Greg Smith <[email protected]> wrote:\n>\n>\n> example.  Scott Marlowe was griping recently about a similar issue in some\n> of the LSI models, too.  
I suspect it's a problem impacting several of the\n> larger RAID cards that use the big Intel IOP processors for their RAID\n> computations, given that's the part with the heatsink on it.\n>\n>\n> Specifically the LSI 8888 in a case that gave very low amounts of air\n> flow over the RAID card. The case right above the card was quite hot,\n> and the multilane cable was warm enough to almost burn my fingers whe\n>\n> Interesting...that shoots down my theory.  Now that I check, the LSI 8888\n> uses their SAS1078 controller, which is based on a PowerPC 440\n> processor--it's not one of the Intel IOP processors at all.  The 8308 Dan\n> has as an option is using the very popular Intel IOP333 instead, which is\n> also used in some Areca 1200 series cards (1220/1230/1260).\n>\n> The Adaptec 5405 and 5805 cards both use the Intel IOP348, as does the Areca\n> 1680.  Areca puts a fan right on it; Adaptec does not.    I suspect the only\n> reason the 5805 cards have gotten more reports of overheating than the 5405\n> ones is just because having more drives typically connected increases their\n> actual workload.  I don't think there's actually any difference between the\n> cooling situation between the two otherwise.\n\nThe LSI 8888 has a fan right on it. But it was just moving 90C air\naround in place.\n", "msg_date": "Wed, 2 Feb 2011 23:31:05 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Thu, Feb 3, 2011 at 07:30, Dan Birken <[email protected]> wrote:\n> Thank you everybody for the detailed answers, the help is well appreciated.\n> A couple of follow-up questions:\n> - Is the supercap + flash memory considered superior to the BBU in practice?\n\nI think it's considered about equivalent.\n\nThe advantages of the flash one are no battery maintenance (=no\ndowntime every <n> years to replace it) and no \"timeout\" if you loose\npower (but the battery backed ones are usually good for 48 hours or\nso, and if you don't have a server up by then...)\n\n>  Is that type of system well tested?\n\nYes.\n\n> - Is the linux support of the LSI and Adaptec cards comparable?\n\nCan't comment on that one, sorry.\n\n-- \n Magnus Hagander\n Me: http://www.hagander.net/\n Work: http://www.redpill-linpro.com/\n", "msg_date": "Thu, 3 Feb 2011 07:45:04 +0100", "msg_from": "Magnus Hagander <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Am 03.02.2011 07:45, schrieb Magnus Hagander:\n> On Thu, Feb 3, 2011 at 07:30, Dan Birken<[email protected]> wrote:\n>> - Is the linux support of the LSI and Adaptec cards comparable?\n>\n> Can't comment on that one, sorry.\n\nWe dropped LSI in favour of Adaptec for exactly this reason. We run \nhundreds of machines in remote locations, and always again had problems \nwith LSI regarding support of new controller models and kernel versions \n(esp. on system installation), and because of occasional kernel panics \ntriggers by the LSI driver. The Adaptec-support integrated into the \nLinux kernel source tree does a flawless job here.\n\nOn the other hand, Adaptec is not perfect as well: when we attached a \n24-drive SAS storage unit to a 5805Z, it failed to properly handle drive \nfailures in this unit. 
We are using an LSI again there, which so far \nworks well...\n\n Joachim\n\n\n", "msg_date": "Thu, 03 Feb 2011 09:11:44 +0100", "msg_from": "Joachim Worringen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Am 03.02.2011 00:15, schrieb Dan Birken:\n> I'm setting up a dedicated linux postgres box with 2x300GB 15k SAS \n> drive in a RAID 1, though if future load dictates we would like to be \n> able to upgrade to RAID 10. The hosting provider offers the following \n> options for a RAID controller (all are the same price):\nAdaptec at least has good tools for managing the controller, and \nperformance in our RAID-1 (DB) and RAID-5 setups (Files) is very good. I \ndon't think you can do wrong with the Adaptec controllers.\n\nCan't say much regarding LSI, but avoid cheap HP controllers.\n\n", "msg_date": "Thu, 03 Feb 2011 15:57:57 +0100", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "Dan Birken wrote:\n> - Is the supercap + flash memory considered superior to the BBU in \n> practice? Is that type of system well tested?\n\nThe main risk is that it's a pretty new approach. The standard BBU \nsetup has been used for a long time now; this whole flash+supercap thing \nhas only showed up in the last couple of years. Theoretically it's \nbetter; the #1 weakness of the old battery setup was only surviving an \noutage of a few days, and storing to flash doesn't have that issue. \nIt's just got the usual risks of something new.\n\n> - Is the linux support of the LSI and Adaptec cards comparable?\n\nSeems to be. The latest versions of Adaptec's arcconf utility even \nprovide about the same quality of command-line tools as LSI's megactl, \nafter being behind in that area for a while. Only quirk, and I can't \nsay where this was the manufacturer of the box or not because they \ninstalled the base OS, is that the 5405 setup I saw didn't turn off the \nwrite caches on the individual drives of the system. That's the \nstandard safe practice and default for the LSI cards.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 03 Feb 2011 10:02:21 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "\n> On Wed, Feb 2, 2011 at 7:00 PM, Craig Ringer\n> <[email protected]> wrote:\n>> Whatever RAID controller you get, make sure you have a battery backup\n>> unit (BBU) installed so you can safely enable write-back caching.\n>> Without that, you might as well use software RAID - it'll generally be\n>> faster (and cheaper) than HW RAID w/o a BBU.\n> \n> Recently we had to pull our RAID controllers and go to plain SAS\n> cards. While random access dropped a bit, sequential throughput\n> skyrocketed, saturating the 4 lane cable we use. 4x300Gb/s =\n> 1200Gb/s or right around 1G of data a second off the array. VERY\n> impressive.\n\n\nThis is really surprising. Software raid generally outperform hardware raid without BBU? Why is that? 
My company uses hardware raid quite a bit without BBU and have never thought to compare with software raid =/\n\nThanks!\n\n--Royce", "msg_date": "Sun, 6 Feb 2011 20:39:29 +1100", "msg_from": "Royce Ausburn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Sun, Feb 6, 2011 at 2:39 AM, Royce Ausburn <[email protected]> wrote:\n>\n>> On Wed, Feb 2, 2011 at 7:00 PM, Craig Ringer\n>> <[email protected]> wrote:\n>>> Whatever RAID controller you get, make sure you have a battery backup\n>>> unit (BBU) installed so you can safely enable write-back caching.\n>>> Without that, you might as well use software RAID - it'll generally be\n>>> faster (and cheaper) than HW RAID w/o a BBU.\n>>\n>> Recently we had to pull our RAID controllers and go to plain SAS\n>> cards.  While random access dropped a bit, sequential throughput\n>> skyrocketed, saturating the 4 lane cable we use.    4x300Gb/s =\n>> 1200Gb/s or right around 1G of data a second off the array.  VERY\n>> impressive.\n>\n>\n> This is really surprising.  Software raid generally outperform hardware raid without BBU?  Why is that?  My company uses hardware raid quite a bit without BBU and have never thought to compare with software raid =/\n\nFor raw throughtput it's not uncommon to beat a RAID card whether it\nhas a battery backed cache or not. If I'm wiriting a 200G file to the\ndisks, a BBU cache isn't gonna make that any faster, it'll fill up in\na second and then it's got to write to disk. BBU Cache are for faster\nrandom writes, and will handily beat SW RAID. But for raw large file\nread and write SW RAID is the fastest thing I've seen.\n", "msg_date": "Sun, 6 Feb 2011 02:55:41 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" }, { "msg_contents": "On Sun, 6 Feb 2011, Scott Marlowe wrote:\n\n> On Sun, Feb 6, 2011 at 2:39 AM, Royce Ausburn <[email protected]> wrote:\n>>\n>>> On Wed, Feb 2, 2011 at 7:00 PM, Craig Ringer\n>>> <[email protected]> wrote:\n>>>> Whatever RAID controller you get, make sure you have a battery backup\n>>>> unit (BBU) installed so you can safely enable write-back caching.\n>>>> Without that, you might as well use software RAID - it'll generally be\n>>>> faster (and cheaper) than HW RAID w/o a BBU.\n>>>\n>>> Recently we had to pull our RAID controllers and go to plain SAS\n>>> cards. �While random access dropped a bit, sequential throughput\n>>> skyrocketed, saturating the 4 lane cable we use. � �4x300Gb/s =\n>>> 1200Gb/s or right around 1G of data a second off the array. �VERY\n>>> impressive.\n>>\n>>\n>> This is really surprising. �Software raid generally outperform hardware \n>> raid without BBU? �Why is that? �My company uses hardware raid quite a \n>> bit without BBU and have never thought to compare with software raid =/\n>\n> For raw throughtput it's not uncommon to beat a RAID card whether it\n> has a battery backed cache or not. If I'm wiriting a 200G file to the\n> disks, a BBU cache isn't gonna make that any faster, it'll fill up in\n> a second and then it's got to write to disk. BBU Cache are for faster\n> random writes, and will handily beat SW RAID. 
But for raw large file\n> read and write SW RAID is the fastest thing I've seen.\n>\n\nkeep in mind that hardware raid with BBU is safer than software raid.\n\nsince the updates to the drives do not all happen at the same time, there \nis a chance that a write to software raid may have happened on some drives \nand not others when the system crashes.\n\nwith hardware raid and BBU, the controller knows what it was trying to \nwrite where, and if it didn't get the acknowledgement, it will complete \nthe write when it comes up again.\n\nbut with software raid you will have updated some parts of the array and \nnot others. this will result in a corrupted stripe in the array.\n\nDavid Lang\n", "msg_date": "Sun, 6 Feb 2011 04:15:39 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" } ]
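The write-back cache discussion above is easy to sanity-check on a live box: count how many fsync() calls per second a volume will accept. With a working BBU or supercap cache the number is typically in the thousands; a bare 7200rpm drive is closer to 100-200. What follows is only a minimal sketch of that probe in Python -- the test file path, the 8KB block size and the 5-second duration are assumptions to adjust, and it is no substitute for proper vendor or benchmarking tools.

#!/usr/bin/env python
# Minimal fsync-rate probe: rewrite one 8KB block and fsync it in a loop,
# then report the rate.  A high rate suggests a write-back cache is
# absorbing the flushes; a low rate suggests writes are hitting the platters.
import os
import time

TEST_FILE = "/var/tmp/fsync_probe.dat"   # assumed path -- point it at the array under test
DURATION = 5                             # seconds to run, also an assumption

fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT, 0o600)
buf = b"x" * 8192                        # one PostgreSQL-sized page
count = 0
start = time.time()
while time.time() - start < DURATION:
    os.write(fd, buf)
    os.fsync(fd)                         # force the write through the cache layers
    os.lseek(fd, 0, os.SEEK_SET)         # rewrite the same block each pass
    count += 1
os.close(fd)
os.unlink(TEST_FILE)
print("%d fsync calls in %d seconds (%.0f/sec)" % (count, DURATION, count / float(DURATION)))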
[ { "msg_contents": "--- On Thu, 3/2/11, Greg Smith <[email protected]> wrote:\n> The 5405 and 5805 models do have a known problem where they overheat if\n> you don't have enough cooling in the server box, with the 5805 seeming\n> to be the bigger source of such issues.  See the reviews at\n> http://www.newegg.com/Product/Product.aspx?Item=N82E16816103099 for\n> example.  Scott Marlowe was griping recently about a similar issue in\n> some of the LSI models, too.  I suspect it's a problem impacting\n> several of the larger RAID cards that use the big Intel IOP processors\n> for their RAID computations, given that's the part with the heatsink on\n> it.\n\n> Quick summary:  avoid the Adaptec 3405 and 4800.  Rest are decent\n> cards.  Just make sure you monitor the temperatures in your case (and\n> the card too if arcconf lets you, I haven't checked for that yet) if\n> you end up with a 5405/5805.\n\n\nI can attest to the 5805 and 5805Z cards running a little hot, the ones we're running tend to run in the high 60s and low 70s Celsius with fairly good airflow over them.\n\nI've been running some 5805s for 3 years now, and 5805Zs for a year and they've been really good, stable, fast cards. I monitor everything on them (including temperature) with nagios and a simple script that uses the arcconf utility.\n\nGlyn\n\n\n\n \n", "msg_date": "Thu, 3 Feb 2011 09:54:31 +0000 (GMT)", "msg_from": "Glyn Astill <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Which RAID Controllers to pick/avoid?" } ]
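For anyone wanting to copy the arcconf-based monitoring mentioned above, here is a rough sketch of a Nagios-style check. It assumes "arcconf GETCONFIG 1 AD" prints an adapter-information block containing a temperature line; the exact wording of that line varies by card and firmware, so the regular expression and the 70C warning threshold are assumptions to adapt, not details taken from the thread.

#!/usr/bin/env python
# Rough Nagios-style controller temperature check built on Adaptec's arcconf CLI.
import re
import subprocess
import sys

WARN_C = 70  # assumed threshold -- pick one that matches your chassis and airflow

# Dump adapter information for controller 1 and look for a temperature figure.
output = subprocess.Popen(["arcconf", "GETCONFIG", "1", "AD"],
                          stdout=subprocess.PIPE).communicate()[0]
match = re.search(r"Temperature\D*(\d+)\s*C", output.decode("ascii", "ignore"))
if not match:
    print("UNKNOWN: no temperature line found in arcconf output")
    sys.exit(3)
temp = int(match.group(1))
if temp >= WARN_C:
    print("WARNING: controller temperature %d C" % temp)
    sys.exit(1)
print("OK: controller temperature %d C" % temp)
sys.exit(0)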
[ { "msg_contents": "\n Hi All,\n\nI'm working on a client program that iterates over master-detail \nrelationships in a loop chain.\n\nPseudo code:\n\nfor row_1 in table_1:\n table_2 = get_details(row_1,\"table2\")\n for row_2 in table_2:\n row_3 = get_details(row_2,\"table3\")\n .... etc.\n process_data(row1,row_2,row_3,....)\n\nMy task is to write the \"get_details\" iterator effectively. The obvious \nway to do it is to query details in every get_details() call, but that \nis not efficient. We have relationships where one master only has a few \ndetails. For 1 million master rows, that would result in execution of \nmillions of SQL SELECT commands, degrading the performance by \nmagnitudes. My idea was that the iterator should pre-fetch and cache \ndata for many master records at once. The get_details() would use the \ncached rows, thus reducing the number of SQL SELECT statements needed. \nActually I wrote the iterator, and it works fine in some cases. For example:\n\nproducers = get_rows(\"producer\")\nfor producer in producers:\n products = get_getails(producer,\"product\")\n for product in products:\n prices = get_details(product,\"prices\")\n for price in prices:\n process_product_price(producer,product,price)\n\nThis works fine if one producer has not more than 1000 products and one \nproduct has not more than 10 prices. I can easly keep 10 000 records in \nmemory. The actual code executes about 15 SQL queries while iterating \nover 1 million rows. Compared to the original \"obvious\" method, \nperformance is increased to 1500%\n\nBut sometimes it just doesn't work. If a producer has 1 million \nproducts, and one product has 100 prices, then it won't work, because I \ncannot keep 100 million prices in memory. My program should somehow \nfigure out, how much rows it will get for one master, and select between \nthe cached and not cached methods.\n\nSo here is the question: is there a way to get this information from \nPostgreSQL itself? I know that the query plan contains information about \nthis, but I'm not sure how to extract. Should I run an ANALYZE command \nof some kind, and parse the result as a string? For example:\n\nEXPLAIN select * from product where producer_id=1008;\n QUERY PLAN\n----------------------------------------------------------------------\n Seq Scan on product (cost=0.00..1018914.74 rows=4727498 width=1400)\n Filter: (producer_id = 1008)\n(2 rows)\n\n\nThen I could extract \"rows=4727498\" to get an idea about how much detail \nrows I'll get for the master.\n\nIs there any better way to do it? And how reliable is this?\n\n\nThanks,\n\n Laszlo\n\n", "msg_date": "Thu, 03 Feb 2011 12:40:12 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Get master-detail relationship metadata" }, { "msg_contents": "On 2/3/2011 5:40 AM, Laszlo Nagy wrote:\n>\n> Hi All,\n>\n> I'm working on a client program that iterates over master-detail\n> relationships in a loop chain.\n>\n> Pseudo code:\n>\n> for row_1 in table_1:\n> table_2 = get_details(row_1,\"table2\")\n> for row_2 in table_2:\n> row_3 = get_details(row_2,\"table3\")\n> .... etc.\n> process_data(row1,row_2,row_3,....)\n>\n> My task is to write the \"get_details\" iterator effectively. The obvious\n> way to do it is to query details in every get_details() call, but that\n> is not efficient. We have relationships where one master only has a few\n> details. 
For 1 million master rows, that would result in execution of\n> millions of SQL SELECT commands, degrading the performance by\n> magnitudes. My idea was that the iterator should pre-fetch and cache\n> data for many master records at once. The get_details() would use the\n> cached rows, thus reducing the number of SQL SELECT statements needed.\n> Actually I wrote the iterator, and it works fine in some cases. For\n> example:\n>\n> producers = get_rows(\"producer\")\n> for producer in producers:\n> products = get_getails(producer,\"product\")\n> for product in products:\n> prices = get_details(product,\"prices\")\n> for price in prices:\n> process_product_price(producer,product,price)\n>\n> This works fine if one producer has not more than 1000 products and one\n> product has not more than 10 prices. I can easly keep 10 000 records in\n> memory. The actual code executes about 15 SQL queries while iterating\n> over 1 million rows. Compared to the original \"obvious\" method,\n> performance is increased to 1500%\n>\n> But sometimes it just doesn't work. If a producer has 1 million\n> products, and one product has 100 prices, then it won't work, because I\n> cannot keep 100 million prices in memory. My program should somehow\n> figure out, how much rows it will get for one master, and select between\n> the cached and not cached methods.\n>\n> So here is the question: is there a way to get this information from\n> PostgreSQL itself? I know that the query plan contains information about\n> this, but I'm not sure how to extract. Should I run an ANALYZE command\n> of some kind, and parse the result as a string? For example:\n>\n> EXPLAIN select * from product where producer_id=1008;\n> QUERY PLAN\n> ----------------------------------------------------------------------\n> Seq Scan on product (cost=0.00..1018914.74 rows=4727498 width=1400)\n> Filter: (producer_id = 1008)\n> (2 rows)\n>\n>\n> Then I could extract \"rows=4727498\" to get an idea about how much detail\n> rows I'll get for the master.\n>\n> Is there any better way to do it? And how reliable is this?\n>\n>\n> Thanks,\n>\n> Laszlo\n>\n>\n\nOne way would be to join the master to the detail, and write your code \nexpecting duplicates.\n\nq = get_rows(\"select * from product inner join price ... order by \nproductid, priceid\");\n\nlastprodid = ''\nfor x in q:\n\tprodid = q.prodid\n\tif prodid <> lastprodid:\n\t\t# we saw the last product, prepare to move to the next product\n\t\tlastprodid = prodid\n\n... etc\n\n > Is there any better way to do it? And how reliable is this?\n\nIt makes the sql really easy, but the code complex... so pick your poison.\n\n-Andy\n", "msg_date": "Thu, 03 Feb 2011 09:26:26 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Get master-detail relationship metadata" } ]
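To make the "ask the planner first" idea from this thread concrete, here is a minimal sketch that runs EXPLAIN (FORMAT JSON) -- available from PostgreSQL 9.0 -- and reads the top node's "Plan Rows" estimate before choosing between the cached and streaming strategies. psycopg2 as the driver, the product/producer_id names, the connection string and the 100,000-row cutoff are assumptions carried over from the example in the question. The estimate comes from ANALYZE statistics, so treat it as a hint rather than a guarantee.

import json
import psycopg2

def estimated_rows(cur, sql, params):
    # Ask the planner for its row estimate without actually running the query.
    cur.execute("EXPLAIN (FORMAT JSON) " + sql, params)
    plan = cur.fetchone()[0]
    if not isinstance(plan, list):      # some driver/server combinations return text
        plan = json.loads(plan)
    return plan[0]["Plan"]["Plan Rows"]

conn = psycopg2.connect("dbname=test")  # assumed connection string
cur = conn.cursor()
sql = "SELECT * FROM product WHERE producer_id = %s"

if estimated_rows(cur, sql, (1008,)) <= 100000:
    # Few enough details: pull them all and cache them in memory.
    cur.execute(sql, (1008,))
    details = cur.fetchall()
else:
    # Too many to cache: stream them through a server-side (named) cursor.
    details = conn.cursor(name="product_details")
    details.execute(sql, (1008,))
    while True:
        batch = details.fetchmany(1000)
        if not batch:
            break
        for row in batch:
            pass                        # process one detail row at a time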
[ { "msg_contents": "\nEach night we run over a 100,000 \"saved searches\" against PostgreSQL\n9.0.x. These are all complex SELECTs using \"cube\" functions to perform a\ngeo-spatial search to help people find adoptable pets at shelters.\n\nAll of our machines in development in production have at least 2 cores\nin them, and I'm wondering about the best way to maximally engage all\nthe processors.\n\nNow we simply run the searches in serial. I realize PostgreSQL may be\ntaking advantage of the multiple cores some in this arrangement, but I'm\nseeking advice about the possibility and methods for running the\nsearches in parallel.\n\nOne naive I approach I considered was to use parallel cron scripts. One\nwould run the \"odd\" searches and the other would run the \"even\"\nsearches. This would be easy to implement, but perhaps there is a better\nway. To those who have covered this area already, what's the best way\nto put multiple cores to use when running repeated SELECTs with PostgreSQL?\n\nThanks!\n\n Mark\n\n", "msg_date": "Thu, 03 Feb 2011 10:08:35 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "getting the most of out multi-core systems for repeated complex\n SELECT\n\tstatements" }, { "msg_contents": "On 2/3/2011 9:08 AM, Mark Stosberg wrote:\n>\n> Each night we run over a 100,000 \"saved searches\" against PostgreSQL\n> 9.0.x. These are all complex SELECTs using \"cube\" functions to perform a\n> geo-spatial search to help people find adoptable pets at shelters.\n>\n> All of our machines in development in production have at least 2 cores\n> in them, and I'm wondering about the best way to maximally engage all\n> the processors.\n>\n> Now we simply run the searches in serial. I realize PostgreSQL may be\n> taking advantage of the multiple cores some in this arrangement, but I'm\n> seeking advice about the possibility and methods for running the\n> searches in parallel.\n>\n> One naive I approach I considered was to use parallel cron scripts. One\n> would run the \"odd\" searches and the other would run the \"even\"\n> searches. This would be easy to implement, but perhaps there is a better\n> way. To those who have covered this area already, what's the best way\n> to put multiple cores to use when running repeated SELECTs with PostgreSQL?\n>\n> Thanks!\n>\n> Mark\n>\n>\n\n1) I'm assuming this is all server side processing.\n2) One database connection will use one core. To use multiple cores you \nneed multiple database connections.\n3) If your jobs are IO bound, then running multiple jobs may hurt \nperformance.\n\nYour naive approach is the best. Just spawn off two jobs (or three, or \nwhatever). I think its also the only method. (If there is another \nmethod, I dont know what it would be)\n\n-Andy\n", "msg_date": "Thu, 03 Feb 2011 09:44:03 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "Mark,\n\nyou could try gevel module to get structure of GIST index and look if \nitems distributed more or less homogenous (see different levels). \nYou can visualize index like http://www.sai.msu.su/~megera/wiki/Rtree_Index\nAlso, if your searches are neighbourhood searches, them you could try knn, available\nin 9.1 development version.\n\n\nOleg\n\nOn Thu, 3 Feb 2011, Mark Stosberg wrote:\n\n>\n> Each night we run over a 100,000 \"saved searches\" against PostgreSQL\n> 9.0.x. 
These are all complex SELECTs using \"cube\" functions to perform a\n> geo-spatial search to help people find adoptable pets at shelters.\n>\n> All of our machines in development in production have at least 2 cores\n> in them, and I'm wondering about the best way to maximally engage all\n> the processors.\n>\n> Now we simply run the searches in serial. I realize PostgreSQL may be\n> taking advantage of the multiple cores some in this arrangement, but I'm\n> seeking advice about the possibility and methods for running the\n> searches in parallel.\n>\n> One naive I approach I considered was to use parallel cron scripts. One\n> would run the \"odd\" searches and the other would run the \"even\"\n> searches. This would be easy to implement, but perhaps there is a better\n> way. To those who have covered this area already, what's the best way\n> to put multiple cores to use when running repeated SELECTs with PostgreSQL?\n>\n> Thanks!\n>\n> Mark\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 3 Feb 2011 18:54:02 +0300 (MSK)", "msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On 02/03/2011 10:54 AM, Oleg Bartunov wrote:\n> Mark,\n> \n> you could try gevel module to get structure of GIST index and look if\n> items distributed more or less homogenous (see different levels). You\n> can visualize index like http://www.sai.msu.su/~megera/wiki/Rtree_Index\n> Also, if your searches are neighbourhood searches, them you could try\n> knn, available\n> in 9.1 development version.\n\nOleg,\n\nThose are interesting details to consider. I read more about KNN here:\n\nhttp://www.depesz.com/index.php/2010/12/11/waiting-for-9-1-knngist/\n\nWill I be able to use it improve the performance of finding nearby\nzipcodes? It sounds like KNN has great potential for performance\nimprovements!\n\n Mark\n", "msg_date": "Thu, 03 Feb 2011 11:16:15 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" } ]
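A sketch of the "odd/even cron scripts" idea generalized to one worker per core: each process opens its own connection (one backend, hence one core, per worker) and takes only the saved searches whose id falls in its modulo slice. The table and column names, the DSN and the empty run_search() body are placeholders, not the poster's real schema.

import multiprocessing
import psycopg2

WORKERS = 4   # roughly one worker per core you want to keep busy

def run_search(cur, search_id, params):
    # Placeholder for the real geo-spatial cube query.
    pass

def worker(slice_no):
    # Each worker must open its own connection; connections cannot be shared across forks.
    conn = psycopg2.connect("dbname=pets")       # assumed DSN
    cur = conn.cursor()
    # Take only the saved searches in this worker's modulo slice.
    cur.execute("SELECT id, params FROM saved_searches WHERE id %% %s = %s",
                (WORKERS, slice_no))
    for search_id, params in cur.fetchall():
        run_search(cur, search_id, params)
    conn.close()

if __name__ == "__main__":
    procs = [multiprocessing.Process(target=worker, args=(i,))
             for i in range(WORKERS)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()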
[ { "msg_contents": "Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later). For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level. It has to. And it has to because the Big Boys already do so, to some extent, and they've realized that the BCNF schema on such machines is supremely efficient. PG/MySql/OSEngineOfChoice will get left behind simply because the efficiency offered will be worth the price.\n\nI know this is far from trivial, and my C skills are such that I can offer no help. These machines have been the obvious \"current\" machine in waiting for at least 5 years, and those applications which benefit from parallelism (servers of all kinds, in particular) will filter out the winners and losers based on exploiting this parallelism.\n\nMuch as it pains me to say it, but the MicroSoft approach to software: write to the next generation processor and force users to upgrade, will be the winning strategy for database engines. There's just way too much to gain.\n\n-- Robert\n\n---- Original message ----\n>Date: Thu, 03 Feb 2011 09:44:03 -0600\n>From: [email protected] (on behalf of Andy Colson <[email protected]>)\n>Subject: Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements \n>To: Mark Stosberg <[email protected]>\n>Cc: [email protected]\n>\n>On 2/3/2011 9:08 AM, Mark Stosberg wrote:\n>>\n>> Each night we run over a 100,000 \"saved searches\" against PostgreSQL\n>> 9.0.x. These are all complex SELECTs using \"cube\" functions to perform a\n>> geo-spatial search to help people find adoptable pets at shelters.\n>>\n>> All of our machines in development in production have at least 2 cores\n>> in them, and I'm wondering about the best way to maximally engage all\n>> the processors.\n>>\n>> Now we simply run the searches in serial. I realize PostgreSQL may be\n>> taking advantage of the multiple cores some in this arrangement, but I'm\n>> seeking advice about the possibility and methods for running the\n>> searches in parallel.\n>>\n>> One naive I approach I considered was to use parallel cron scripts. One\n>> would run the \"odd\" searches and the other would run the \"even\"\n>> searches. This would be easy to implement, but perhaps there is a better\n>> way. To those who have covered this area already, what's the best way\n>> to put multiple cores to use when running repeated SELECTs with PostgreSQL?\n>>\n>> Thanks!\n>>\n>> Mark\n>>\n>>\n>\n>1) I'm assuming this is all server side processing.\n>2) One database connection will use one core. To use multiple cores you \n>need multiple database connections.\n>3) If your jobs are IO bound, then running multiple jobs may hurt \n>performance.\n>\n>Your naive approach is the best. Just spawn off two jobs (or three, or \n>whatever). I think its also the only method. (If there is another \n>method, I dont know what it would be)\n>\n>-Andy\n>\n>-- \n>Sent via pgsql-performance mailing list ([email protected])\n>To make changes to your subscription:\n>http://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Thu, 3 Feb 2011 10:57:04 -0500 (EST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: getting the most of out multi-core\n\tsystems for repeated complex SELECT statements" }, { "msg_contents": "On Thu, Feb 3, 2011 at 4:57 PM, <[email protected]> wrote:\n> Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later).  
For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level.  It has to.  And it has to because the Big Boys already do so, to some extent, and they've realized that the BCNF schema on such machines is supremely efficient.  PG/MySql/OSEngineOfChoice will get left behind simply because the efficiency offered will be worth the price.\n\nthis kind of view on what postgres community has to do can only be\ntrue if postgres has no intention to support \"cloud environments\" or\nany kind of hardware virtualization.\nwhile i'm sure targeting specific hardware features can greatly\nimprove postgres performance it should be an option not a requirement.\nforcing users to have specific hardware is basically telling users\nthat you can forget about using postgres in amazon/rackspace cloud\nenvironments (or any similar environment).\ni'm sure that a large part of postgres community doesn't care about\n\"cloud environments\" (although this is only my personal impression)\nbut if plan is to disable postgres usage in such environments you are\nbasically loosing a large part of developers/companies targeting\nglobal internet consumers with their online products.\ncloud environments are currently the best platform for internet\noriented developers/companies to start a new project or even to\nmigrate from custom hardware/dedicated data center.\n\n> Much as it pains me to say it, but the MicroSoft approach to software: write to the next generation processor and force users to upgrade, will be the winning strategy for database engines.  There's just way too much to gain.\n\nit can arguably be said that because of this approach microsoft is\nlosing ground in most of their businesses/strategies.\n\nAljosa Mohorovic\n", "msg_date": "Thu, 3 Feb 2011 18:56:34 +0100", "msg_from": "=?UTF-8?B?QWxqb8WhYSBNb2hvcm92acSH?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On Thu, Feb 3, 2011 at 8:57 AM, <[email protected]> wrote:\n> Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later).  For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level.  It has to.  And\n\nI'm pretty sure multi-core query processing is in the TODO list. Not\nsure anyone's working on it tho. Writing a big check might help.\n", "msg_date": "Thu, 3 Feb 2011 11:21:18 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "Scott Marlowe wrote:\n> On Thu, Feb 3, 2011 at 8:57 AM, <[email protected]> wrote:\n> \n>> Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later). For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level. It has to. And\n>> \n>\n> I'm pretty sure multi-core query processing is in the TODO list. Not\n> sure anyone's working on it tho. Writing a big check might help.\n> \n\nWork on the exciting parts people are interested in is blocked behind \ncompletely mundane tasks like coordinating how the multiple sessions are \ngoing to end up with a consistent view of the database. 
See \"Export \nsnapshots to other sessions\" at \nhttp://wiki.postgresql.org/wiki/ClusterFeatures for details on that one.\n\nParallel query works well for accelerating CPU-bound operations that are \nexecuting in RAM.  The reality here is that while the feature sounds \nimportant, these situations don't actually show up that often.  There \nare exactly zero clients I deal with regularly who would be helped out \nby this.  The ones running web applications whose workloads do fit into \nmemory are more concerned about supporting large numbers of users, not \noptimizing things for a single one.  And the ones who have so much data \nthat single users running large reports would seemingly benefit from \nthis are usually disk-bound instead.\n\nThe same sort of situation exists with SSDs.  Take out the potential \nusers whose data can fit in RAM instead, take out those who can't \npossibly get an SSD big enough to hold all their stuff anyway, and \nwhat's left in the middle is not very many people.  In a database \ncontext I still haven't found anything better to do with a SSD than to \nput mid-sized indexes on them, ones a bit too large for RAM but not so \nbig that only regular hard drives can hold them.\n\nI would rather strongly disagree with the suggestion that embracing \neither of these fancy but not really as functional as they appear at \nfirst approaches is critical to PostgreSQL's future.  They're \nspecialized techniques useful to only a limited number of people.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Thu, 03 Feb 2011 17:56:57 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On 02/03/2011 04:56 PM, Greg Smith wrote:\n> Scott Marlowe wrote:\n>> On Thu, Feb 3, 2011 at 8:57 AM,<[email protected]> wrote:\n>>\n>>> Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later). For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level. It has to. And\n>>>\n>>\n>> I'm pretty sure multi-core query processing is in the TODO list. Not\n>> sure anyone's working on it tho. Writing a big check might help.\n>>\n>\n> Work on the exciting parts people are interested in is blocked behind completely mundane tasks like coordinating how the multiple sessions are going to end up with a consistent view of the database. See \"Export snapshots to other sessions\" at http://wiki.postgresql.org/wiki/ClusterFeatures for details on that one.\n>\n> Parallel query works well for accelerating CPU-bound operations that are executing in RAM. The reality here is that while the feature sounds important, these situations don't actually show up that often. There are exactly zero clients I deal with regularly who would be helped out by this. The ones running web applications whose workloads do fit into memory are more concerned about supporting large numbers of users, not optimizing things for a single one. And the ones who have so much data that single users running large reports would seemingly benefit from this are usually disk-bound instead.\n>\n> The same sort of situation exists with SSDs. Take out the potential users whose data can fit in RAM instead, take out those who can't possibly get an SSD big enough to hold all their stuff anyway, and what's left in the middle is not very many people. In a database context I still haven't found anything better to do with a SSD than to put mid-sized indexes on them, ones a bit too large for RAM but not so big that only regular hard drives can hold them.\n>\n> I would rather strongly disagree with the suggestion that embracing either of these fancy but not really as functional as they appear at first approaches is critical to PostgreSQL's future. They're specialized techniques useful to only a limited number of people.\n>\n> --\n> Greg Smith 2ndQuadrant [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Supportwww.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\":http://www.2ndQuadrant.com/books\n>\n\n4 cores is cheap and popular now, 6 in a bit, 8 next year, 16/24 cores in 5 years. You can do 16 cores now, but its a bit expensive.
I figure hundreds of cores will be expensive in 5 years, but possible, and available.\n\nCpu's wont get faster, but HD's and SSD's will. To have one database connection, which runs one query, run fast, it's going to need multi-core support.\n\nThat's not to say we need \"parallel query's\". Or we need multiple backends to work on one query. We need one backend, working on one query, using mostly the same architecture, to just use more than one core.\n\nYou'll notice I used _mostly_ and _just_, and have no knowledge of PG internals, so I fully expect to be wrong.\n\nMy point is, there must be levels of threading, yes? If a backend has data to sort, has it collected, nothing locked, what would it hurt to use multi-core sorting?\n\n-- OR --\n\nThreading (and multicore), to me, always mean queues. What if new type's of backend's were created that did \"simple\" things, that normal backends could distribute work to, then go off and do other things, and come back to collect the results.\n\nI thought I read a paper someplace that said shared cache (L1/L2/etc) multicore cpu's would start getting really slow at 16/32 cores, and that message passing was the way forward past that. If PG started aiming for 128 core support right now, it should use some kinda message passing with queues thing, yes?\n\n-Andy\n", "msg_date": "Thu, 03 Feb 2011 21:21:21 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "Andy Colson wrote:\n> Cpu's wont get faster, but HD's and SSD's will. To have one database \n> connection, which runs one query, run fast, it's going to need \n> multi-core support.\n\nMy point was that situations where people need to run one query on one \ndatabase connection that aren't in fact limited by disk I/O are far less \ncommon than people think. My troublesome database servers aren't ones \nwith a single CPU at its max but wishing there were more workers, \nthey're the ones that have >25% waiting for I/O. And even that crowd is \nstill a subset, distinct from people who don't care about the speed of \nany one core, they need lots of connections to go at once.\n\n\n> That's not to say we need \"parallel query's\". Or we need multiple \n> backends to work on one query. We need one backend, working on one \n> query, using mostly the same architecture, to just use more than one \n> core.\n\nThat's exactly what we mean when we say \"parallel query\" in the context \nof a single server.\n\n> My point is, there must be levels of threading, yes? If a backend has \n> data to sort, has it collected, nothing locked, what would it hurt to \n> use multi-core sorting?\n\nOptimizer nodes don't run that way. The executor \"pulls\" rows out of \nthe top of the node tree, which then pulls from its children, etc. If \nyou just blindly ran off and executed every individual node to \ncompletion in parallel, that's not always going to be faster--could be a \nlot slower, if the original query never even needed to execute portions \nof the tree.\n\nWhen you start dealing with all of the types of nodes that are out there \nit gets very messy in a hurry. 
Decomposing the nodes of the query tree \ninto steps that can be executed in parallel usefully is the hard problem \nhiding behind the simple idea of \"use all the cores!\"\n\n> I thought I read a paper someplace that said shared cache (L1/L2/etc) \n> multicore cpu's would start getting really slow at 16/32 cores, and \n> that message passing was the way forward past that. If PG started \n> aiming for 128 core support right now, it should use some kinda \n> message passing with queues thing, yes?\n\nThere already is a TupleStore type that is going to serve as the message \nbeing sent between the client backends. Unfortunately we won't get \nanywhere near 128 cores without addressing the known scalability issues \nthat are in the code right now, ones you can easily run into even with 8 \ncores.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 03 Feb 2011 23:00:24 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On Thu, Feb 3, 2011 at 9:00 PM, Greg Smith <[email protected]> wrote:\n> Andy Colson wrote:\n>>\n>> Cpu's wont get faster, but HD's and SSD's will.  To have one database\n>> connection, which runs one query, run fast, it's going to need multi-core\n>> support.\n>\n> My point was that situations where people need to run one query on one\n> database connection that aren't in fact limited by disk I/O are far less\n> common than people think.  My troublesome database servers aren't ones with\n> a single CPU at its max but wishing there were more workers, they're the\n> ones that have >25% waiting for I/O.  And even that crowd is still a subset,\n> distinct from people who don't care about the speed of any one core, they\n> need lots of connections to go at once.\n\nThe most common case where I can use > 1 core is loading data. and\npg_restore supports parallel restore threads, so that takes care of\nthat pretty well.\n", "msg_date": "Thu, 3 Feb 2011 21:04:09 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On 02/03/2011 10:00 PM, Greg Smith wrote:\n> Andy Colson wrote:\n>> Cpu's wont get faster, but HD's and SSD's will. To have one database connection, which runs one query, run fast, it's going to need multi-core support.\n>\n> My point was that situations where people need to run one query on one database connection that aren't in fact limited by disk I/O are far less common than people think. My troublesome database servers aren't ones with a single CPU at its max but wishing there were more workers, they're the ones that have >25% waiting for I/O. And even that crowd is still a subset, distinct from people who don't care about the speed of any one core, they need lots of connections to go at once.\n>\n\nYes, I agree... for today. If you gaze into 5 years... double the core count (but not the speed), double the IO rate. What do you see?\n\n\n>> My point is, there must be levels of threading, yes? If a backend has data to sort, has it collected, nothing locked, what would it hurt to use multi-core sorting?\n>\n> Optimizer nodes don't run that way. 
The executor \"pulls\" rows out of the top of the node tree, which then pulls from its children, etc. If you just blindly ran off and executed every individual node to completion in parallel, that's not always going to be faster--could be a lot slower, if the original query never even needed to execute portions of the tree.\n>\n> When you start dealing with all of the types of nodes that are out there it gets very messy in a hurry. Decomposing the nodes of the query tree into steps that can be executed in parallel usefully is the hard problem hiding behind the simple idea of \"use all the cores!\"\n>\n\n\nWhat if... the nodes were run in separate threads, and interconnected via queues? A node would not have to run to completion either. A queue could be setup to have a max items. When a node adds 5 out of 5 items it would go to sleep. Its parent node, removing one of the items could wake it up.\n\n-Andy\n", "msg_date": "Thu, 03 Feb 2011 22:19:48 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On Thu, Feb 3, 2011 at 9:19 PM, Andy Colson <[email protected]> wrote:\n> On 02/03/2011 10:00 PM, Greg Smith wrote:\n>>\n>> Andy Colson wrote:\n>>>\n>>> Cpu's wont get faster, but HD's and SSD's will. To have one database\n>>> connection, which runs one query, run fast, it's going to need multi-core\n>>> support.\n>>\n>> My point was that situations where people need to run one query on one\n>> database connection that aren't in fact limited by disk I/O are far less\n>> common than people think. My troublesome database servers aren't ones with a\n>> single CPU at its max but wishing there were more workers, they're the ones\n>> that have >25% waiting for I/O. And even that crowd is still a subset,\n>> distinct from people who don't care about the speed of any one core, they\n>> need lots of connections to go at once.\n>>\n>\n> Yes, I agree... for today.  If you gaze into 5 years... double the core\n> count (but not the speed), double the IO rate.  What do you see?\n\nI run a cluster of pg servers under slony replication, and we have 112\ncores between three servers, soon to go to 144 cores. We have no need\nfor individual queries to span the cores, honestly. Our real limit is\nthe ability get all those cores working at the same time on individual\nqueries efficiently without thundering herd issues. Yeah, it's only\none datapoint, but for us, with a lot of cores, we need each one to\nrun one query as fast as it can.\n", "msg_date": "Thu, 3 Feb 2011 21:57:31 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "Andy Colson wrote:\n> Yes, I agree... for today. If you gaze into 5 years... double the \n> core count (but not the speed), double the IO rate. What do you see?\n\nFour more versions of PostgreSQL addressing problems people are having \nright now. When we reach the point where parallel query is the only way \naround the actual bottlenecks in the software people are running into, \nsomeone will finish parallel query. I am not a fan of speculative \ndevelopment in advance of real demand for it. 
There are multiple much \nmore serious bottlenecks impacting scalability in PostgreSQL that need \nto be addressed before this one is #1 on the development priority list \nto me.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 04 Feb 2011 06:33:48 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "On 02/03/2011 10:57 AM, [email protected] wrote:\n> For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level\n\nAs the person who brought up the original concern, I'll add that\n\"multi-core at the query level\" really isn't important for us. Most of\nour PostgreSQL usage is through a web application which fairly\nautomatically takes advantage of multiple cores, because there are\nseveral parallel connections.\n\nA smaller but important piece of what we do is run this cron script\nneeds to run hundreds of thousands of variations of the same complex\nSELECT as fast it can.\n\nWhat honestly would have helped most is not technical at all-- it would\nhave been some documentation on how to take advantage of multiple cores\nfor this case.\n\nIt looks like it's going to be trivial-- Divide up the data with a\nmodulo, and run multiple parallel cron scripts that each processes a\nslice of the data. A benchmark showed that this approach sped up our\nprocessing 3x when splitting the application 4 ways across 4 processors.\n(I think we failed to achieve a 4x improvement because the server was\nalready busy handling some other tasks).\n\nPart of our case is likely fairly common *today*: many servers are\nmulti-core now, but people don't necessarily understand how to take\nadvantage of that if it doesn't happen automatically.\n\n Mark\n", "msg_date": "Fri, 04 Feb 2011 16:18:18 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated complex\n\tSELECT statements" }, { "msg_contents": "On Fri, Feb 4, 2011 at 2:18 PM, Mark Stosberg <[email protected]> wrote:\n> It looks like it's going to be trivial-- Divide up the data with a\n> modulo, and run multiple parallel cron scripts that each processes a\n> slice of the data. A benchmark showed that this approach sped up our\n> processing 3x when splitting the application 4 ways across 4 processors.\n> (I think we failed to achieve a 4x improvement because the server was\n> already busy handling some other tasks).\n\nI once had about 2 months of machine work ahead of me for one server.\nLuckily it was easy to break up into chunks and run it on all the\nworkstations at night in the office, and we were done in < 1 week.\npgsql was the data store for it, and it was just like what you're\ntalking about, break it into chunks, spread it around.\n", "msg_date": "Fri, 4 Feb 2011 15:57:59 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: getting the most of out multi-core systems for\n\trepeated complex SELECT statements" }, { "msg_contents": "[email protected] writes:\n> Time for my pet meme to wiggle out of its hole (next to Phil's, and a\n> day later). 
For PG to prosper in the future, it has to embrace the\n> multi-core/processor/SSD machine at the query level. It has to. And\n> it has to because the Big Boys already do so, to some extent, and\n> they've realized that the BCNF schema on such machines is supremely\n> efficient. PG/MySql/OSEngineOfChoice will get left behind simply\n> because the efficiency offered will be worth the price.\n>\n> I know this is far from trivial, and my C skills are such that I can\n> offer no help. These machines have been the obvious \"current\" machine\n> in waiting for at least 5 years, and those applications which benefit\n> from parallelism (servers of all kinds, in particular) will filter out\n> the winners and losers based on exploiting this parallelism.\n>\n> Much as it pains me to say it, but the MicroSoft approach to software:\n> write to the next generation processor and force users to upgrade,\n> will be the winning strategy for database engines. There's just way\n> too much to gain.\n\nI'm not sure how true that is, really. (e.g. - \"too much to gain.\")\n\nI know that Jan Wieck and I have been bouncing thoughts on valid use of\nthreading off each other for *years*, now, and it tends to be\ninteresting but difficult to the point of impracticality.\n\nBut how things play out are quite fundamentally different for different\nusage models.\n\nIt's useful to cross items off the list, so we're left with the tough\nones that are actually a problem.\n\n1. For instance, OLTP applications, that generate a lot of concurrent\nconnections, already do perfectly well in scaling on multi-core systems.\nEach connection is a separate process, and that already harnesses\nmulti-core systems perfectly well. Things have improved a lot over the\nlast 10 years, and there may yet be further improvements to be found,\nbut it seems pretty reasonable to me to say that the OLTP scenario can\nbe treated as \"solved\" in this context.\n\nThe scenario where I can squint and see value in trying to multithread\nis the contrast to that, of OLAP. The case where we only use a single\ncore, today, is where there's only a single connection, and a single\nquery, running.\n\nBut that can reasonably be further constrained; not every\nsingle-connection query could be improved by trying to spread work\nacross cores. We need to add some further assumptions:\n\n2. The query needs to NOT be I/O-bound. If it's I/O bound, then your\nsystem is waiting for the data to come off disk, rather than to do\nprocessing of that data.\n\nThat condition can be somewhat further strengthened... It further needs\nto be a query where multi-processing would not increase the I/O burden.\n\nBetween those two assumptions, that cuts the scope of usefulness to a\nvery considerable degree.\n\nAnd if we *are* multiprocessing, we introduce several new problems, each\nof which is quite troublesome:\n\n - How do we decompose the query so that the pieces are processed in\n ways that improve processing time?\n\n In effect, how to generate a parallel query plan?\n\n It would be more than stupid to consider this to be \"obvious.\" We've\n got 15-ish years worth of query optimization efforts that have gone\n into Postgres, and many of those changes were not \"obvious\" until\n after they got thought through carefully. This multiplies the\n complexity, and opportunity for error.\n\n - Coordinating processing\n\n Becomes quite a bit more complex. 
Multiple threads/processes are\n accessing parts of the same data concurrently, so a \"parallelized\n query\" that harnesses 8 CPUs might generate 8x as many locks and\n analogous coordination points.\n\n - Platform specificity\n\n Threading is a problem in that each OS platform has its own\n implementation, and even when they claim to conform to common\n standards, they still have somewhat different interpretations. This\n tends to go in one of the following directions:\n\n a) You have to pick one platform to do threading on.\n\n Oops. There's now PostgreSQL-Linux, that is the only platform\n where our multiprocessing thing works. It could be worse than\n that; it might work on a particular version of a particular OS...\n\n b) You follow some apparently portable threading standard\n\n And find that things are hugely buggy because the platforms\n follow the standard a bit differently. And perhaps this means\n that, analogous to a), you've got a set of platforms where this\n \"works\" (for some value of \"works\"), and others where it can't.\n That's almost as evil as a).\n\n c) You follow some apparently portable threading standard\n\n And need to wrap things in a pretty thick safety blanket to make\n sure it is compatible with all the bugs in interpretation and\n implementation. Complexity++, and performance probably suffers.\n\n None of these are particularly palatable, which is why threading\n proposals get a lot of pushback.\n\nAt the end of the day, if this is only providing value for a subset of\nuse cases, involving peculiar-ish conditions, well, it's quite likely\nwiser for most would-be implementors to spend their time on improvements\nlikely to help a larger set of users that might, in fact, include those\nthat imagine that this parallelization would be helpful.\n-- \nselect 'cbbrowne' || '@' || 'acm.org';\nhttp://www3.sympatico.ca/cbbrowne/x.html\nFLORIDA: Where your vote counts and counts and counts.\n", "msg_date": "Fri, 04 Feb 2011 18:15:10 -0500", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated complex\n\tSELECT statements" }, { "msg_contents": "On Fri, 4 Feb 2011, Chris Browne wrote:\n\n> 2. The query needs to NOT be I/O-bound. If it's I/O bound, then your\n> system is waiting for the data to come off disk, rather than to do\n> processing of that data.\n\nyes and no on this one.\n\nit is very possible to have a situation where the process generating the \nI/O is waiting for the data to come off disk, but if there are still idle \nresources in the disk subsystem.\n\nit may be that the best way to address this is to have the process \ngenerating the I/O send off more requests, but that sometimes is \nsignificantly more complicated than splitting the work between two \nprocesses and letting them each generate I/O requests\n\nwith rotating disks, ideally you want to have at least two requests \noutstanding, one that the disk is working on now, and one for it to start \non as soon as it finishes the one that it's on (so that the disk doesn't \nsit idle while the process decides what the next read should be). 
In \npractice you tend to want to have even more outstanding from the \napplication so that they can be optimized (combined, reordered, etc) by \nthe lower layers.\n\nif you end up with a largish raid array (say 16 disks), this can translate \ninto a lot of outstanding requests that you want to have active to fully \nuntilize the array, but having the same number of requests outstanding \nwith a single disk would be counterproductive as the disk would not be \nable to see all the outstanding requests and therefor would not be able to \noptimize them as effectivly.\n\nDavid Lang\n", "msg_date": "Fri, 4 Feb 2011 23:15:52 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" }, { "msg_contents": "Hi, all\n\nMy small thoughts about parallelizing single query.\nAFAIK in the cases where it is needed, there is usually one single \noperation that takes a lot of CPU, e.g. hashing or sorting. And this are \nusually tasks that has well known algorithms to parallelize.\nThe main problem, as for me, is thread safety. First of all, operations \nthat are going to be parallelized, must be thread safe. Then functions \nand procedures they call must be thread safe too. So, a marker for a \nprocedure must be introduced and all standard ones should be \nchecked/fixed for parallel processing with marker set.\nThen, one should not forget optimizer checks for when to introduce \nparallelizing. How should it be accounted in the query plan? Should it \ninfluence optimizer decisions (should it count CPU or wall time when \noptimizing query plan)?\nOr can it simply be used by an operation when it can see it will benefit \nfrom it.\n\nBest regards, Vitalii Tymchyshyn\n", "msg_date": "Mon, 07 Feb 2011 12:02:57 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: getting the most of out multi-core systems for repeated\n\tcomplex SELECT statements" } ]
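The modulo split described in the thread above maps directly onto plain SQL. A minimal sketch, with a made-up work_queue table and column names standing in for the real data; each of the four parallel cron jobs runs the same statement with its own slice number, so the slices are disjoint and the workers need no coordination:

-- Hypothetical schema, only so the example runs.
CREATE TABLE work_queue (
    id        serial PRIMARY KEY,
    payload   text,
    processed boolean NOT NULL DEFAULT false
);

-- Worker 0 of 4; the other cron entries use 1, 2 and 3 in place of 0.
-- id % 4 splits the rows into disjoint sets, so each core processes
-- roughly a quarter of the work and no two workers touch the same row.
SELECT id, payload
FROM work_queue
WHERE NOT processed
  AND id % 4 = 0
ORDER BY id;

How far this scales depends on how much of the job is CPU rather than I/O, as the rest of the thread points out.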
[ { "msg_contents": "\r\n\r\n---- Original message ----\r\n>Date: Thu, 3 Feb 2011 18:56:34 +0100\r\n>From: [email protected] (on behalf of Aljoša Mohorović <[email protected]>)\r\n>Subject: Re: [PERFORM] getting the most of out multi-core systems for repeated complex SELECT statements \r\n>To: [email protected]\r\n>Cc: [email protected]\r\n>\r\n>On Thu, Feb 3, 2011 at 4:57 PM, <[email protected]> wrote:\r\n>> Time for my pet meme to wiggle out of its hole (next to Phil's, and a day later).  For PG to prosper in the future, it has to embrace the multi-core/processor/SSD machine at the query level.  It has to.  And it has to because the Big Boys already do so, to some extent, and they've realized that the BCNF schema on such machines is supremely efficient.  PG/MySql/OSEngineOfChoice will get left behind simply because the efficiency offered will be worth the price.\r\n>\r\n>this kind of view on what postgres community has to do can only be\r\n>true if postgres has no intention to support \"cloud environments\" or\r\n>any kind of hardware virtualization.\r\n>while i'm sure targeting specific hardware features can greatly\r\n>improve postgres performance it should be an option not a requirement.\r\n\r\nBeing an option is just fine. It's not there now. Asserting that the cloud meme, based on lowest cost marginal hardware, should dictate a database engine is putting the cart before the horse.\r\n\r\n\r\n>forcing users to have specific hardware is basically telling users\r\n>that you can forget about using postgres in amazon/rackspace cloud\r\n>environments (or any similar environment).\r\n\r\nJust not on cheap clouds, if they want maximal performance from the engine using BCNF schemas. Replicating COBOL/VSAM/flatfile applications in any relational database engine is merely deluding oneself. \r\n\r\n\r\n>i'm sure that a large part of postgres community doesn't care about\r\n>\"cloud environments\" (although this is only my personal impression)\r\n>but if plan is to disable postgres usage in such environments you are\r\n>basically loosing a large part of developers/companies targeting\r\n>global internet consumers with their online products.\r\n>cloud environments are currently the best platform for internet\r\n>oriented developers/companies to start a new project or even to\r\n>migrate from custom hardware/dedicated data center.\r\n>\r\n>> Much as it pains me to say it, but the MicroSoft approach to software: write to the next generation processor and force users to upgrade, will be the winning strategy for database engines.  There's just way too much to gain.\r\n>\r\n>it can arguably be said that because of this approach microsoft is\r\n>losing ground in most of their businesses/strategies.\r\n\r\nNot really. MicroSoft is losing ground for the same reason all other client/standalone applications are: such applications don't run any better on multi-core/processor machines. Add in the netbook/phone devices, and that they can't seem to make a version of windows that's markedly better than XP. Arguably MicroSoft is failing *because Office no longer requires* the next generation hardware to run right. Hmm? Linux prospers because it's a server OS, largely. Desktop may, or may not, remain relevant. Linux does make good use of such machines. MicroSoft applications? Not so much. 
\r\n>\r\n>Aljosa Mohorovic\r\n>\r\n>-- \r\n>Sent via pgsql-performance mailing list ([email protected])\r\n>To make changes to your subscription:\r\n>http://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Thu, 3 Feb 2011 14:21:45 -0500 (EST)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: getting the most of out multi-core\n\tsystems for repeated complex SELECT statements" } ]
[ { "msg_contents": "Hi, all.\n\nAll this optimizer vs hint thread reminded me about crazy idea that got to\nmy head some time ago.\nI currently has two problems with postgresql optimizer\n1) Dictionary tables. Very usual thing is something like \"select * from\nbig_table where distionary_id = (select id from dictionary where\nname=value)\". This works awful if dictionary_id distribution is not uniform.\nThe thing that helps is to retrieve subselect value and then simply do\n\"select * from big_table where dictionary_id=id_value\".\n2) Complex queries. If there are over 3 levels of subselects, optmizer\ncounts often become less and less correct as we go up on levels. On ~3rd\nlevel this often lead to wrong choises. The thing that helps is to create\ntemporary tables from subselects, analyze them and then do main select using\nthis temporary tables.\nWhile first one can be fixed by introducing some correlation statistics, I\ndon't think there is any simple way to fix second one.\n\nBut what if optimizer could in some cases tell \"fetch this and this and then\nI'll plan other part of the query based on statistics of what you've\nfetched\"?\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\nHi, all.All this optimizer vs hint thread reminded me about crazy idea that got to my head some time ago.I currently has two problems with postgresql optimizer1) Dictionary tables. Very usual thing is something like \"select * from big_table where distionary_id = (select id from dictionary where name=value)\". This works awful if dictionary_id distribution is not uniform. The thing that helps is to retrieve subselect value and then simply do \"select * from big_table where dictionary_id=id_value\".\n2) Complex queries. If there are over 3 levels of subselects, optmizer counts often become less and less correct as we go up on levels. On ~3rd level this often lead to wrong choises. The thing that helps is to create temporary tables from subselects, analyze them and then do main select using this temporary tables.\nWhile first one can be fixed by introducing some correlation statistics, I don't think there is any simple way to fix second one.But what if optimizer could in some cases tell \"fetch this and this and then I'll plan other part of the query based on statistics of what you've fetched\"?\n-- Best regards, Vitalii Tymchyshyn", "msg_date": "Fri, 4 Feb 2011 10:03:39 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Talking about optimizer, my long dream" }, { "msg_contents": "О©ҐО©ҐО©ҐО©ҐліО©Ґ О©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©ҐО©Ґ wrote:\n> Hi, all.\n>\n> All this optimizer vs hint thread\nThere is no \"optimizer vs. hint\". Hints are a necessary part of the \noptimizer in all other databases. Without hints Postgres will not get \nused in the company that I work for, period. I was willing to wait but \nthe fatwa against hints seems unyielding, so that's it. I am even \ninclined to believe that deep down under the hood, this fatwa has an \nulterior motive, which disgusts me deeply. With hints, there would be \nfar fewer consulting gigs.\n\nMladen Gogala \nSr. 
Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Fri, 04 Feb 2011 08:56:32 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "On 02/04/2011 07:56 AM, Mladen Gogala wrote:\n\n> Hints are a necessary part of the\n> optimizer in all other databases. Without hints Postgres will not get\n> used in the company that I work for, period.\n\nI've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the \nhints you seek, yet you seem to enjoy berating the PostgreSQL community \nas if it owes you something.\n\nAlso, we don't care if you don't use PostgreSQL. If I put something up \nfor free, some random guy not taking it won't exactly hurt my feelings.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 08:02:42 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "On 04 Feb, 2011,at 02:56 PM, Mladen Gogala <[email protected]> wrote:\n\n> Віталій Тимчишин wrote:\n> > Hi, all.\n> >\n> > All this optimizer vs hint thread\n> There is no \"optimizer vs. hint\". Hints are a necessary part of the\n> optimizer in all other databases. \n \nThat has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it doesn't have hints and everybody knows it.\n> Without hints Postgres will not get\n> used in the company that I work for, period. \n \nThat's up to you, that's fine. But why did you start with PostgreSQL in the first place? You knew PostgreSQL doesn't have hints and the wiki told you hints are not wanted as well. When hints are an essential requirement for your company, you should pick another product, EnterpriseDB Postgres Plus for example.\n> I was willing to wait but\n> the fatwa against hints seems unyielding, \n \nThere is no fatwa. The PostgreSQL project prefers to spend resources on a better optimizer to solve the real problems, not on hints for working around the problems. That has nothing to do with any fatwa or religion.\n> so that's it. I am even\n> inclined to believe that deep down under the hood, this fatwa has an\n> ulterior motive, which disgusts me deeply. With hints, there would be\n> far fewer consulting gigs.\n \nThe consulting guys are the ones who love hints: They know they have to come back the other month because the old hint does more harm than good when data changes. And data will change over time.\n\nYou said it's so simple to implement hints in PostgreSQL, so please, show us. Or ask/pay somebody to write this simple code for you to support hints, nobody will ever stop you from doing that. When you have a use case that proves the usage of hints will improve the performance of PostgreSQL and you have some code that can be maintained by the PostgreSQL project, it might be implemented in the contrib or even core. It's up to you, not somebody else.\n>\n>\n> Mladen Gogala\n> Sr. 
Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n>\n>\n> -- \n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n\nOn 04 Feb, 2011,at 02:56 PM, Mladen Gogala <[email protected]> wrote:Віталій Тимчишин wrote:\n> Hi, all.\n>\n> All this optimizer vs hint thread\nThere is no \"optimizer vs. hint\". Hints are a necessary part of the \noptimizer in all other databases.  That has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it doesn't have hints and everybody knows it.Without hints Postgres will not get \nused in the company that I work for, period.  That's up to you, that's fine. But why did you start with PostgreSQL in the first place? You knew PostgreSQL doesn't have hints and the wiki told you hints are not wanted as well. When hints are an essential requirement for your company, you should pick another product, EnterpriseDB Postgres Plus for example.I was willing to wait but \nthe fatwa against hints seems unyielding,  There is no fatwa. The PostgreSQL project prefers to spend resources on a better optimizer to solve the real problems, not on hints for working around the problems. That has nothing to do with any fatwa or religion.so that's it. I am even \ninclined to believe that deep down under the hood, this fatwa has an \nulterior motive, which disgusts me deeply. With hints, there would be \nfar fewer consulting gigs. The consulting guys are the ones who love hints: They know they have to come back the other month because the old hint does more harm than good when data changes. And data will change over time.You said it's so simple to implement hints in PostgreSQL, so please, show us. Or ask/pay somebody to write this simple code for you to support hints, nobody will ever stop you from doing that. When you have a use case that proves the usage of hints will improve the performance of PostgreSQL and you have some code that can be maintained by the PostgreSQL project, it might be implemented in the contrib or even core. It's up to you, not somebody else.\n\nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 04 Feb 2011 14:24:30 +0000 (GMT)", "msg_from": "Frank Heikens <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "Mladen Gogala wrote:\n> I am even inclined to believe that deep down under the hood, this \n> fatwa has an ulterior motive, which disgusts me deeply. With hints, \n> there would be far fewer consulting gigs.\n\nNow you're just being rude. Given that you have direct access to the \ndevelopers of the software, for free, on these mailing lists, the main \nreason there is consulting work anyway is because some companies can't \npublish their queries or data publicly. All of us doing PostgreSQL \nconsulting regularly take those confidental reports and turn them into \nfeedback to improve the core software. 
That is what our clients want, \ntoo: a better PostgreSQL capable of handling their problem, not just a \nhacked up application that works today, but will break later once data \nvolume or distribution changes.\n\nYou really just don't get how open-source development works at all if \nyou think money is involved in why people have their respective \ntechnical opinions on controversial subjects. Try and hire the \nsometimes leader of this particular \"fatwa\", Tom Lane, for a consulting \ngig if you think that's where his motivation lies. I would love to have \na recording of *that* phone call.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 04 Feb 2011 09:27:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "Shaun Thomas wrote:\n> On 02/04/2011 07:56 AM, Mladen Gogala wrote:\n>\n> \n>> Hints are a necessary part of the\n>> optimizer in all other databases. Without hints Postgres will not get\n>> used in the company that I work for, period.\n>> \n>\n> I've said repeatedly that EnterpriseDB, a fork of PostgreSQL, has the \n> hints you seek, yet you seem to enjoy berating the PostgreSQL community \n> as if it owes you something.\n>\n> Also, we don't care if you don't use PostgreSQL. If I put something up \n> for free, some random guy not taking it won't exactly hurt my feelings.\n>\n> \nShaun, I don't need to convince you or the Postgres community. I needed \nan argument to convince my boss.\nMy argument was that the sanctimonious and narrow minded Postgres \ncommunity is unwilling to even consider creating the tools I need for \nlarge porting projects, tools provided by other major databases. This \ndiscussion served my purpose wonderfully. Project is killed, here we \npart ways. No more problems for either of us. Good luck with the \n\"perfect optimizer\" and good riddance. My only regret is about the time \nI have wasted.\n\n-- \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nwww.vmsinfo.com \n\n", "msg_date": "Fri, 04 Feb 2011 09:36:10 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "2011/2/4 Frank Heikens <[email protected]>:\n>\n>\n> On 04 Feb, 2011,at 02:56 PM, Mladen Gogala <[email protected]>\n> wrote:\n>\n> Віталій Тимчишин wrote:\n>> Hi, all.\n>>\n>> All this optimizer vs hint thread\n> There is no \"optimizer vs. hint\". Hints are a necessary part of the\n> optimizer in all other databases.\n>\n>\n> That has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it\n> doesn't have hints and everybody knows it.\n>\n> Without hints Postgres will not get\n> used in the company that I work for, period.\n>\n>\n> That's up to you, that's fine. But why did you start with PostgreSQL in the\n> first place? You knew PostgreSQL doesn't have hints and the wiki told you\n> hints are not wanted as well. When hints are an essential requirement for\n> your company, you should pick another product, EnterpriseDB Postgres Plus\n> for example.\n>\n> I was willing to wait but\n> the fatwa against hints seems unyielding,\n>\n>\n> There is no fatwa. 
The PostgreSQL project prefers to spend resources on a\n> better optimizer to solve the real problems, not on hints for working around\n> the problems. That has nothing to do with any fatwa or religion.\n>\n> so that's it. I am even\n> inclined to believe that deep down under the hood, this fatwa has an\n> ulterior motive, which disgusts me deeply. With hints, there would be\n> far fewer consulting gigs.\n>\n>\n> The consulting guys are the ones who love hints: They know they have to come\n> back the other month because the old hint does more harm than good when data\n> changes. And data will change over time.\n>\n> You said it's so simple to implement hints in PostgreSQL, so please, show\n> us. Or ask/pay somebody to write this simple code for you to support hints,\n> nobody will ever stop you from doing that. When you have a use case that\n> proves the usage of hints will improve the performance of PostgreSQL and you\n> have some code that can be maintained by the PostgreSQL project, it might be\n> implemented in the contrib or even core. It's up to you, not somebody else.\n\nJust in case you miss it:\nhttp://www.sai.msu.su/~megera/wiki/plantuner\n\nBtw feel free to do how you want, it is open source, and BSD, you can\ntake PostgreSQL, add hints, go and sell that to your boss.\n\n\n>\n>\n>\n> Mladen Gogala\n> Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Fri, 4 Feb 2011 16:45:39 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "Please, don't include me on your emails. I unsubscribed from the list.\n\n\nCédric Villemain wrote:\n> 2011/2/4 Frank Heikens <[email protected]>:\n> \n>> On 04 Feb, 2011,at 02:56 PM, Mladen Gogala <[email protected]>\n>> wrote:\n>>\n>> Віталій Тимчишин wrote:\n>> \n>>> Hi, all.\n>>>\n>>> All this optimizer vs hint thread\n>>> \n>> There is no \"optimizer vs. hint\". Hints are a necessary part of the\n>> optimizer in all other databases.\n>>\n>>\n>> That has nothing to do with PostgreSQL: PostgreSQL = PostgreSQL. And it\n>> doesn't have hints and everybody knows it.\n>>\n>> Without hints Postgres will not get\n>> used in the company that I work for, period.\n>>\n>>\n>> That's up to you, that's fine. But why did you start with PostgreSQL in the\n>> first place? You knew PostgreSQL doesn't have hints and the wiki told you\n>> hints are not wanted as well. When hints are an essential requirement for\n>> your company, you should pick another product, EnterpriseDB Postgres Plus\n>> for example.\n>>\n>> I was willing to wait but\n>> the fatwa against hints seems unyielding,\n>>\n>>\n>> There is no fatwa. The PostgreSQL project prefers to spend resources on a\n>> better optimizer to solve the real problems, not on hints for working around\n>> the problems. That has nothing to do with any fatwa or religion.\n>>\n>> so that's it. I am even\n>> inclined to believe that deep down under the hood, this fatwa has an\n>> ulterior motive, which disgusts me deeply. 
With hints, there would be\n>> far fewer consulting gigs.\n>>\n>>\n>> The consulting guys are the ones who love hints: They know they have to come\n>> back the other month because the old hint does more harm than good when data\n>> changes. And data will change over time.\n>>\n>> You said it's so simple to implement hints in PostgreSQL, so please, show\n>> us. Or ask/pay somebody to write this simple code for you to support hints,\n>> nobody will ever stop you from doing that. When you have a use case that\n>> proves the usage of hints will improve the performance of PostgreSQL and you\n>> have some code that can be maintained by the PostgreSQL project, it might be\n>> implemented in the contrib or even core. It's up to you, not somebody else.\n>> \n>\n> Just in case you miss it:\n> http://www.sai.msu.su/~megera/wiki/plantuner\n>\n> Btw feel free to do how you want, it is open source, and BSD, you can\n> take PostgreSQL, add hints, go and sell that to your boss.\n>\n>\n> \n>>\n>> Mladen Gogala\n>> Sr. Oracle DBA\n>> 1500 Broadway\n>> New York, NY 10036\n>> (212) 329-5251\n>> www.vmsinfo.com\n>>\n>>\n>> --\n>> Sent via pgsql-performance mailing list ([email protected])\n>> To make changes to your subscription:\n>> http://www.postgresql.org/mailpref/pgsql-performance\n>>\n>> \n>\n>\n>\n> \n\n\n-- \n \nMladen Gogala \nSr. Oracle DBA\n1500 Broadway\nNew York, NY 10036\n(212) 329-5251\nhttp://www.vmsinfo.com \nThe Leader in Integrated Media Intelligence Solutions\n\n\n\n", "msg_date": "Fri, 04 Feb 2011 11:06:28 -0500", "msg_from": "Mladen Gogala <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "Greg, 1st off, thanx for your great book, and i really hope i find the time to read it\nthoroughly. (since i am still stuck somewhere in the middle of \"Administration Cookbook\" lol!)\nWell, people, speaking from the point of the occasional poster and frequent lurker\ni can see that smth is going a little bit out of hand in the lists.\nI remember my own thread 1-2 weeks ago about NOT IN working much better\nin 8.3 than 9.0, how much i was trying to convince people that it was not\nFreeBSD related, nor other setting related, trying to convince mladen\nthat i was not cheating with the explain analyze i posted, trying to answer politely to Tom\nwho was asking me to post smth that i had already posted 4-5 times till then,\nand i can feel the agony of certain members here. \nBefore i moved the thread from -admin over to -performance i had certain issues at my home's FreeBSD mail server.\nThank God, some reply i wrote one late night on -admin didn't make it to the list.\nAnyways, that's open source. Great products, access to source and knowledge at a negligible cost come with a price.\nSo here is my advice to people in similar situations (me mainly!) : take a deep breath, dont hit the send button unless you are \n100% certain you have smth new/positive to say, take some time to do more homework from your part, even try\nto read/hack the source, etc...\nIf this is not possible, then its better to seek for alternatives rather than turning angry.\n\njust my 3 euros!\n\nΣτις Friday 04 February 2011 16:27:55 ο/η Greg Smith έγραψε:\n> Mladen Gogala wrote:\n> > I am even inclined to believe that deep down under the hood, this \n> > fatwa has an ulterior motive, which disgusts me deeply. With hints, \n> > there would be far fewer consulting gigs.\n> \n> Now you're just being rude. 
Given that you have direct access to the \n> developers of the software, for free, on these mailing lists, the main \n> reason there is consulting work anyway is because some companies can't \n> publish their queries or data publicly. All of us doing PostgreSQL \n> consulting regularly take those confidental reports and turn them into \n> feedback to improve the core software. That is what our clients want, \n> too: a better PostgreSQL capable of handling their problem, not just a \n> hacked up application that works today, but will break later once data \n> volume or distribution changes.\n> \n> You really just don't get how open-source development works at all if \n> you think money is involved in why people have their respective \n> technical opinions on controversial subjects. Try and hire the \n> sometimes leader of this particular \"fatwa\", Tom Lane, for a consulting \n> gig if you think that's where his motivation lies. I would love to have \n> a recording of *that* phone call.\n> \n> -- \n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n> \n> \n\n\n\n-- \nAchilleas Mantzios\n", "msg_date": "Fri, 4 Feb 2011 18:19:46 +0200", "msg_from": "Achilleas Mantzios <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "On 05/02/11 03:36, Mladen Gogala wrote:\n> Shaun, I don't need to convince you or the Postgres community. I \n> needed an argument to convince my boss.\n> My argument was that the sanctimonious and narrow minded Postgres \n> community is unwilling to even consider creating the tools I need for \n> large porting projects, tools provided by other major databases. This \n> discussion served my purpose wonderfully. Project is killed, here we \n> part ways. No more problems for either of us. Good luck with the \n> \"perfect optimizer\" and good riddance. My only regret is about the \n> time I have wasted.\n>\n\nI think it is unlikely that your boss is going to dismiss Postgres on \nthe basis of some minor technical point (no optimizer hints). Bosses \nusually (and should) care about stuff like reference sites, product \npedigree and product usage in similar sized companies to theirs. \nPostgres will come out rather well if such an assessment is actually \nperformed I would think.\n\nThe real question you should be asking is this:\n\nGiven that there are no hints, what do I do to solve the problem of a \nslow query suddenly popping up in production? If and when this situation \noccurs, see how quickly the community steps in to help you solve it (and \nit'd bet it will solved be very quickly indeed).\n\nBest wishes\n\nMark\n\n\n\n", "msg_date": "Sat, 05 Feb 2011 12:44:19 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "2011/2/4 Mark Kirkwood <[email protected]>:\n> Given that there are no hints, what do I do to solve the problem of a slow\n> query suddenly popping up in production? If and when this situation occurs,\n> see how quickly the community steps in to help you solve it (and it'd bet it\n> will solved be very quickly indeed).\n\nThat is EXACTLY what happened to me. I had a query killing my\nproduction box because it was running VERY long by picking the wrong\nplan. 
Turned out it was ignoring the number of NULLs and this led to\nit thinking one access method that was wrong was the right one. I had\na patch within 24 hours of identifying the problem, and it took me < 1\nhour to have it applied and running in production.\n\nIf Oracle can patch their query planner for you in 24 hours, and you\ncan apply patch with confidence against your test then production\nservers in an hour or so, great. Til then I'll stick to a database\nthat has the absolutely, without a doubt, best coder support of any\nproject I've ever used.\n\nMy point in the other thread is that if you can identify a point where\na hint would help, like my situation above, you're often better off\npresenting a test case here and getting a patch to make it smarter.\n\nHowever, there are places where the planner just kind of guesses. And\nthose are the places to attack when you find a pathological behaviour.\n Or to rewrite your query or use a functional index.\n", "msg_date": "Fri, 4 Feb 2011 21:54:24 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "O\n> If Oracle can patch their query planner for you in 24 hours, and you\n> can apply patch with confidence against your test then production\n> servers in an hour or so, great. Til then I'll stick to a database\n> that has the absolutely, without a doubt, best coder support of any\n> project I've ever used.\n>\n> My point in the other thread is that if you can identify a point where\n> a hint would help, like my situation above, you're often better off\n> presenting a test case here and getting a patch to make it smarter.\n>\n\nBy way of contrast - I had a similar situation with DB2 (a few years \nago) with a bad plan being chosen for BETWEEN predicates in some cases. \nI found myself having to spend about a hour or two a week chasing the \nsupport organization for - wait for it - 6 months to get a planner patch!\n", "msg_date": "Sat, 05 Feb 2011 18:38:07 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "2011/2/4 Віталій Тимчишин <[email protected]>:\n> Hi, all.\n> All this optimizer vs hint thread reminded me about crazy idea that got to\n> my head some time ago.\n> I currently has two problems with postgresql optimizer\n> 1) Dictionary tables. Very usual thing is something like \"select * from\n> big_table where distionary_id = (select id from dictionary where\n> name=value)\". This works awful if dictionary_id distribution is not uniform.\n\nDoes it work better if you write it as a join?\n\nSELECT b.* FROM big_table b, dictionary d WHERE b.dictionary_id = d.id\nAND d.name = 'value'\n\nI would like to see a concrete example of this not working well,\nbecause I've been writing queries like this (with MANY tables) for\nyears and it's usually worked very well for me.\n\n> The thing that helps is to retrieve subselect value and then simply do\n> \"select * from big_table where dictionary_id=id_value\".\n> 2) Complex queries. If there are over 3 levels of subselects, optmizer\n> counts often become less and less correct as we go up on levels. On ~3rd\n> level this often lead to wrong choises. 
The thing that helps is to create\n> temporary tables from subselects, analyze them and then do main select using\n> this temporary tables.\n> While first one can be fixed by introducing some correlation statistics, I\n> don't think there is any simple way to fix second one.\n> But what if optimizer could in some cases tell \"fetch this and this and then\n> I'll plan other part of the query based on statistics of what you've\n> fetched\"?\n\nI've had that thought, too. It's pretty hard to see how to make ti\nwork, but I think there are cases where it could be beneficial.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 27 Feb 2011 12:59:28 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "27 лютого 2011 р. 19:59 Robert Haas <[email protected]> написав:\n\n> 2011/2/4 Віталій Тимчишин <[email protected]>:\n> > Hi, all.\n> > All this optimizer vs hint thread reminded me about crazy idea that got\n> to\n> > my head some time ago.\n> > I currently has two problems with postgresql optimizer\n> > 1) Dictionary tables. Very usual thing is something like \"select * from\n> > big_table where distionary_id = (select id from dictionary where\n> > name=value)\". This works awful if dictionary_id distribution is not\n> uniform.\n>\n> Does it work better if you write it as a join?\n>\n\n> SELECT b.* FROM big_table b, dictionary d WHERE b.dictionary_id = d.id\n> AND d.name = 'value'\n>\n> I would like to see a concrete example of this not working well,\n> because I've been writing queries like this (with MANY tables) for\n> years and it's usually worked very well for me.\n>\n> Here you are:\n PostgreSQL 8.4.7 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real\n(Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\ncreate table a(dict int4, val int4);\ncreate table b(dict int4, name text);\ncreate index c on a(dict);\ninsert into b values (1, 'small'), (2, 'large');\ninsert into a values (1,1);\ninsert into a select 2,generate_series(1,10000);\nanalyze a;\nanalyze b;\ntest=# explain analyze select * from a where dict=1;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------\n Index Scan using c on a (cost=0.00..8.27 rows=1 width=8) (actual\ntime=0.014..0.016 rows=1 loops=1)\n Index Cond: (dict = 1)\n Total runtime: 0.041 ms\n(3 rows)\ntest=# explain analyze select * from a where dict=2;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------\n Seq Scan on a (cost=0.00..170.01 rows=10000 width=8) (actual\ntime=0.014..6.876 rows=10000 loops=1)\n Filter: (dict = 2)\n Total runtime: 13.419 ms\n(3 rows)\ntest=# explain analyze select * from a,b where a.dict=b.dict and b.name\n='small';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.04..233.55 rows=5000 width=18) (actual\ntime=0.047..13.159 rows=1 loops=1)\n Hash Cond: (a.dict = b.dict)\n -> Seq Scan on a (cost=0.00..145.01 rows=10001 width=8) (actual\ntime=0.009..6.633 rows=10001 loops=1)\n -> Hash (cost=1.02..1.02 rows=1 width=10) (actual time=0.011..0.011\nrows=1 loops=1)\n -> Seq Scan on b (cost=0.00..1.02 rows=1 width=10) (actual\ntime=0.006..0.008 rows=1 loops=1)\n Filter: (name = 'small'::text)\n Total runtime: 13.197 ms\n(7 rows)\ntest=# explain analyze select * from a,b 
where a.dict=b.dict and b.name\n='large';\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------\n Hash Join (cost=1.04..233.55 rows=5000 width=18) (actual\ntime=0.074..21.476 rows=10000 loops=1)\n Hash Cond: (a.dict = b.dict)\n -> Seq Scan on a (cost=0.00..145.01 rows=10001 width=8) (actual\ntime=0.012..7.085 rows=10001 loops=1)\n -> Hash (cost=1.02..1.02 rows=1 width=10) (actual time=0.021..0.021\nrows=1 loops=1)\n -> Seq Scan on b (cost=0.00..1.02 rows=1 width=10) (actual\ntime=0.015..0.016 rows=1 loops=1)\n Filter: (name = 'large'::text)\n Total runtime: 28.293 ms\n(7 rows)\n\nIt simply don't know that small=1 and large=2, so it never uses nested loop\n+ iindex scan:\ntest=# set enable_hashjoin=false;\nSET\ntest=# explain analyze select * from a,b where a.dict=b.dict and b.name\n='small';\n QUERY PLAN\n\n----------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..253.28 rows=5000 width=18) (actual\ntime=0.041..0.047 rows=1 loops=1)\n -> Seq Scan on b (cost=0.00..1.02 rows=1 width=10) (actual\ntime=0.010..0.012 rows=1 loops=1)\n Filter: (name = 'small'::text)\n -> Index Scan using c on a (cost=0.00..189.75 rows=5000 width=8)\n(actual time=0.021..0.023 rows=1 loops=1)\n Index Cond: (a.dict = b.dict)\n Total runtime: 0.089 ms\n(6 rows)\n\n-- \nBest regards,\n Vitalii Tymchyshyn\n\n27 лютого 2011 р. 19:59 Robert Haas <[email protected]> написав:\n2011/2/4 Віталій Тимчишин <[email protected]>:\n> Hi, all.\n> All this optimizer vs hint thread reminded me about crazy idea that got to\n> my head some time ago.\n> I currently has two problems with postgresql optimizer\n> 1) Dictionary tables. Very usual thing is something like \"select * from\n> big_table where distionary_id = (select id from dictionary where\n> name=value)\". 
This works awful if dictionary_id distribution is not uniform.\n\nDoes it work better if you write it as a join?\nSELECT b.* FROM big_table b, dictionary d WHERE b.dictionary_id = d.id\nAND d.name = 'value'\n\nI would like to see a concrete example of this not working well,\nbecause I've been writing queries like this (with MANY tables) for\nyears and it's usually worked very well for me.\nHere you are: PostgreSQL 8.4.7 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bitcreate table a(dict int4, val int4);\ncreate table b(dict int4, name text);create index c on a(dict);insert into b values (1, 'small'), (2, 'large');insert into a values (1,1);insert into a select 2,generate_series(1,10000);\nanalyze a;analyze b;test=# explain analyze select * from a where dict=1;                                             QUERY PLAN                                              \n----------------------------------------------------------------------------------------------------- Index Scan using c on a  (cost=0.00..8.27 rows=1 width=8) (actual time=0.014..0.016 rows=1 loops=1)\n   Index Cond: (dict = 1) Total runtime: 0.041 ms(3 rows)test=# explain analyze select * from a where dict=2;                                             QUERY PLAN                                              \n----------------------------------------------------------------------------------------------------- Seq Scan on a  (cost=0.00..170.01 rows=10000 width=8) (actual time=0.014..6.876 rows=10000 loops=1)\n   Filter: (dict = 2) Total runtime: 13.419 ms(3 rows)test=# explain analyze select * from a,b where a.dict=b.dict and b.name='small'; \n                                                QUERY PLAN                                                 -----------------------------------------------------------------------------------------------------------\n Hash Join  (cost=1.04..233.55 rows=5000 width=18) (actual time=0.047..13.159 rows=1 loops=1)   Hash Cond: (a.dict = b.dict)   ->  Seq Scan on a  (cost=0.00..145.01 rows=10001 width=8) (actual time=0.009..6.633 rows=10001 loops=1)\n   ->  Hash  (cost=1.02..1.02 rows=1 width=10) (actual time=0.011..0.011 rows=1 loops=1)         ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual time=0.006..0.008 rows=1 loops=1)\n               Filter: (name = 'small'::text) Total runtime: 13.197 ms(7 rows)test=# explain analyze select * from a,b where a.dict=b.dict and b.name='large';\n                                                QUERY PLAN                                                 -----------------------------------------------------------------------------------------------------------\n Hash Join  (cost=1.04..233.55 rows=5000 width=18) (actual time=0.074..21.476 rows=10000 loops=1)   Hash Cond: (a.dict = b.dict)   ->  Seq Scan on a  (cost=0.00..145.01 rows=10001 width=8) (actual time=0.012..7.085 rows=10001 loops=1)\n   ->  Hash  (cost=1.02..1.02 rows=1 width=10) (actual time=0.021..0.021 rows=1 loops=1)         ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual time=0.015..0.016 rows=1 loops=1)\n               Filter: (name = 'large'::text) Total runtime: 28.293 ms(7 rows)It simply don't know that small=1 and large=2, so it never uses nested loop + iindex scan:\ntest=# set enable_hashjoin=false;SETtest=# explain analyze select * from a,b where a.dict=b.dict and b.name='small';                                                   QUERY PLAN                                                   
\n---------------------------------------------------------------------------------------------------------------- Nested Loop  (cost=0.00..253.28 rows=5000 width=18) (actual time=0.041..0.047 rows=1 loops=1)\n   ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual time=0.010..0.012 rows=1 loops=1)         Filter: (name = 'small'::text)   ->  Index Scan using c on a  (cost=0.00..189.75 rows=5000 width=8) (actual time=0.021..0.023 rows=1 loops=1)\n         Index Cond: (a.dict = b.dict) Total runtime: 0.089 ms(6 rows)-- Best regards, Vitalii Tymchyshyn", "msg_date": "Mon, 28 Feb 2011 00:20:54 +0200", "msg_from": "=?KOI8-U?B?96bUwcymyiD0yc3eydvJzg==?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "2011/2/27 Віталій Тимчишин <[email protected]>:\n>\n>\n> 27 лютого 2011 р. 19:59 Robert Haas <[email protected]> написав:\n>>\n>> 2011/2/4 Віталій Тимчишин <[email protected]>:\n>> > Hi, all.\n>> > All this optimizer vs hint thread reminded me about crazy idea that got\n>> > to\n>> > my head some time ago.\n>> > I currently has two problems with postgresql optimizer\n>> > 1) Dictionary tables. Very usual thing is something like \"select * from\n>> > big_table where distionary_id = (select id from dictionary where\n>> > name=value)\". This works awful if dictionary_id distribution is not\n>> > uniform.\n>>\n>> Does it work better if you write it as a join?\n>>\n>> SELECT b.* FROM big_table b, dictionary d WHERE b.dictionary_id = d.id\n>> AND d.name = 'value'\n>>\n>> I would like to see a concrete example of this not working well,\n>> because I've been writing queries like this (with MANY tables) for\n>> years and it's usually worked very well for me.\n>>\n> Here you are:\n>  PostgreSQL 8.4.7 on x86_64-pc-linux-gnu, compiled by GCC gcc-4.4.real\n> (Ubuntu 4.4.3-4ubuntu5) 4.4.3, 64-bit\n> create table a(dict int4, val int4);\n> create table b(dict int4, name text);\n> create index c on a(dict);\n> insert into b values (1, 'small'), (2, 'large');\n> insert into a values (1,1);\n> insert into a select 2,generate_series(1,10000);\n> analyze a;\n> analyze b;\n> test=# explain analyze select * from a where dict=1;\n>                                              QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------\n>  Index Scan using c on a  (cost=0.00..8.27 rows=1 width=8) (actual\n> time=0.014..0.016 rows=1 loops=1)\n>    Index Cond: (dict = 1)\n>  Total runtime: 0.041 ms\n> (3 rows)\n> test=# explain analyze select * from a where dict=2;\n>                                              QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------\n>  Seq Scan on a  (cost=0.00..170.01 rows=10000 width=8) (actual\n> time=0.014..6.876 rows=10000 loops=1)\n>    Filter: (dict = 2)\n>  Total runtime: 13.419 ms\n> (3 rows)\n> test=# explain analyze select * from a,b where a.dict=b.dict and\n> b.name='small';\n>                                                 QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n>  Hash Join  (cost=1.04..233.55 rows=5000 width=18) (actual\n> time=0.047..13.159 rows=1 loops=1)\n>    Hash Cond: (a.dict = b.dict)\n>    ->  Seq Scan on a  (cost=0.00..145.01 rows=10001 width=8) (actual\n> time=0.009..6.633 rows=10001 loops=1)\n>    ->  Hash  (cost=1.02..1.02 rows=1 width=10) (actual time=0.011..0.011\n> 
rows=1 loops=1)\n>          ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual\n> time=0.006..0.008 rows=1 loops=1)\n>                Filter: (name = 'small'::text)\n>  Total runtime: 13.197 ms\n> (7 rows)\n> test=# explain analyze select * from a,b where a.dict=b.dict and\n> b.name='large';\n>                                                 QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------\n>  Hash Join  (cost=1.04..233.55 rows=5000 width=18) (actual\n> time=0.074..21.476 rows=10000 loops=1)\n>    Hash Cond: (a.dict = b.dict)\n>    ->  Seq Scan on a  (cost=0.00..145.01 rows=10001 width=8) (actual\n> time=0.012..7.085 rows=10001 loops=1)\n>    ->  Hash  (cost=1.02..1.02 rows=1 width=10) (actual time=0.021..0.021\n> rows=1 loops=1)\n>          ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual\n> time=0.015..0.016 rows=1 loops=1)\n>                Filter: (name = 'large'::text)\n>  Total runtime: 28.293 ms\n> (7 rows)\n> It simply don't know that small=1 and large=2, so it never uses nested loop\n> + iindex scan:\n> test=# set enable_hashjoin=false;\n> SET\n> test=# explain analyze select * from a,b where a.dict=b.dict and\n> b.name='small';\n>                                                    QUERY PLAN\n>\n> ----------------------------------------------------------------------------------------------------------------\n>  Nested Loop  (cost=0.00..253.28 rows=5000 width=18) (actual\n> time=0.041..0.047 rows=1 loops=1)\n>    ->  Seq Scan on b  (cost=0.00..1.02 rows=1 width=10) (actual\n> time=0.010..0.012 rows=1 loops=1)\n>          Filter: (name = 'small'::text)\n>    ->  Index Scan using c on a  (cost=0.00..189.75 rows=5000 width=8)\n> (actual time=0.021..0.023 rows=1 loops=1)\n>          Index Cond: (a.dict = b.dict)\n>  Total runtime: 0.089 ms\n> (6 rows)\n\nOh, I see. Interesting example.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 28 Feb 2011 14:09:15 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" }, { "msg_contents": "2011/2/4 Mladen Gogala <[email protected]>:\n> Віталій Тимчишин wrote:\n>>\n>> Hi, all.\n>>\n>> All this optimizer vs hint thread\n>\n> There is no \"optimizer vs. hint\". Hints are a necessary part of the\n> optimizer in all other databases. Without hints Postgres will not get used\n> in the company that I work for, period. I was willing to wait but the fatwa\n> against hints seems unyielding, so that's it. I am even inclined to believe\n> that deep down under the hood, this fatwa has an ulterior motive, which\n> disgusts me deeply. With hints, there would be far fewer consulting gigs.\n>\n> Mladen Gogala Sr. Oracle DBA\n> 1500 Broadway\n> New York, NY 10036\n> (212) 329-5251\n> www.vmsinfo.com\n\nAh, that's too bad...we really will miss you here. With luck and a\ngood helping of Oracle expertise you should be finally be able to get\nthat 14 record table under control!\n\nmerlin\n", "msg_date": "Tue, 1 Mar 2011 09:37:00 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Talking about optimizer, my long dream" } ]
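The two workarounds described at the start of this thread can be sketched against the same toy tables a(dict, val) and b(dict, name) from the example above. The function name below is made up; the point is that the dict value reaches the planner as a literal (dynamic SQL in PL/pgSQL is re-planned on every call), so the per-value statistics on a.dict can drive the choice between the index scan and the seq scan:

CREATE OR REPLACE FUNCTION a_by_dict_name(p_name text)
RETURNS SETOF a
LANGUAGE plpgsql STABLE AS $$
DECLARE
    v_dict int4;
BEGIN
    -- Step 1: resolve the dictionary entry on its own.
    SELECT dict INTO v_dict FROM b WHERE name = p_name;
    IF v_dict IS NULL THEN
        RETURN;                        -- unknown name: no rows
    END IF;
    -- Step 2: run the big-table query with the value embedded as a
    -- literal, so it is planned knowing that value's actual selectivity.
    RETURN QUERY EXECUTE 'SELECT * FROM a WHERE dict = ' || v_dict;
END;
$$;

-- SELECT * FROM a_by_dict_name('small');   -- can use the index on a(dict)
-- SELECT * FROM a_by_dict_name('large');   -- free to fall back to a seq scan

The second workaround mentioned in the thread, for deeply nested subselects, is to materialize the inner result, ANALYZE it, and join against it, so the outer plan is built from gathered statistics rather than stacked guesses; shown here against the same toy tables only so it runs:

CREATE TEMP TABLE tmp_dict AS SELECT dict FROM b WHERE name = 'small';
ANALYZE tmp_dict;
SELECT a.* FROM a JOIN tmp_dict USING (dict);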
[ { "msg_contents": "I'm running all this on a 9.0 server with good enough hardware. The \nquery is:\n\nSELECT news.id AS news_id\n , news.layout_id\n , news.news_relation_id\n , news.author_id\n , news.date_created\n , news.date_published\n , news.lastedit\n , news.lastedit_user_id\n , news.lastedit_date\n , news.approved_by\n , news.state\n , news.visible_from\n , news.visible_to\n , news.archived_by\n , news.archived_date\n , news.priority\n , news.collection_id\n , news.comment\n , news.keywords\n , news.icon\n , news.icon_url\n , news.icon_width\n , news.icon_height\n , news.icon_position\n , news.icon_onclick\n , news.icon_newwindow\n , news.no_lead\n , news.content_exists\n , news.title, news.lead, news.content\n\n\n , author.public_name AS \nauthor_public_name\n , lastedit_user.public_name AS \nlastedit_user_public_name\n , approved_by_user.public_name AS \napproved_by_public_name\n , archived_by_user.public_name AS \narchived_by_public_name\n FROM news\n JOIN users AS author ON news.author_id \n= author.id\n LEFT JOIN users AS lastedit_user ON \nnews.lastedit_user_id = lastedit_user.id\n LEFT JOIN users AS approved_by_user ON \nnews.approved_by = approved_by_user.id\n LEFT JOIN users AS archived_by_user ON \nnews.archived_by = archived_by_user.id\n\n WHERE (news.layout_id = 8980) AND (state = \n2) AND (date_published <= 1296806570 AND (visible_from IS NULL OR \n1296806570 BETWEEN visible_f\nrom AND visible_to))\n ORDER BY priority DESC, date_published DESC\n;\n\nThe \"vanilla\" plan, with default settings is:\n\n Sort (cost=7325.84..7329.39 rows=1422 width=678) (actual \ntime=100.846..100.852 rows=7 loops=1)\n Sort Key: news.priority, news.date_published\n Sort Method: quicksort Memory: 38kB\n -> Hash Left Join (cost=2908.02..7251.37 rows=1422 width=678) \n(actual time=100.695..100.799 rows=7 loops=1)\n Hash Cond: (news.archived_by = archived_by_user.id)\n -> Hash Left Join (cost=2501.75..6819.47 rows=1422 \nwidth=667) (actual time=76.742..76.830 rows=7 loops=1)\n Hash Cond: (news.approved_by = approved_by_user.id)\n -> Hash Left Join (cost=2095.48..6377.69 rows=1422 \nwidth=656) (actual time=53.248..53.318 rows=7 loops=1)\n Hash Cond: (news.lastedit_user_id = lastedit_user.id)\n -> Hash Join (cost=1689.21..5935.87 rows=1422 \nwidth=645) (actual time=29.793..29.846 rows=7 loops=1)\n Hash Cond: (news.author_id = author.id)\n -> Bitmap Heap Scan on news \n(cost=1282.94..5494.05 rows=1422 width=634) (actual time=5.532..5.560 \nrows=7 loops=1)\n Recheck Cond: ((layout_id = 8980) AND \n(state = 2) AND ((visible_from IS NULL) OR (1296806570 <= visible_to)))\n Filter: ((date_published <= \n1296806570) AND ((visible_from IS NULL) OR ((1296806570 >= visible_from) \nAND (1296806570 <= visible_to))))\n -> BitmapAnd (cost=1282.94..1282.94 \nrows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)\n -> Bitmap Index Scan on \nnews_index_layout_id_state (cost=0.00..150.14 rows=2587 width=0) \n(actual time=0.909..0.909 rows=3464 loops=1)\n Index Cond: ((layout_id = \n8980) AND (state = 2))\n -> BitmapOr \n(cost=1132.20..1132.20 rows=20127 width=0) (actual time=4.136..4.136 \nrows=0 loops=1)\n -> Bitmap Index Scan on \nnews_visible_from (cost=0.00..1122.09 rows=19976 width=0) (actual \ntime=3.367..3.367 rows=19932 loops=1)\n Index Cond: \n(visible_from IS NULL)\n -> Bitmap Index Scan on \nnews_visible_to (cost=0.00..9.40 rows=151 width=0) (actual \ntime=0.766..0.766 rows=43 loops=1)\n Index Cond: \n(1296806570 <= visible_to)\n -> Hash (cost=281.12..281.12 rows=10012 \nwidth=15) (actual 
time=24.247..24.247 rows=10012 loops=1)\n Buckets: 1024 Batches: 1 Memory \nUsage: 482kB\n -> Seq Scan on users author \n(cost=0.00..281.12 rows=10012 width=15) (actual time=0.004..11.354 \nrows=10012 loops=1)\n -> Hash (cost=281.12..281.12 rows=10012 \nwidth=15) (actual time=23.444..23.444 rows=10012 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 482kB\n -> Seq Scan on users lastedit_user \n(cost=0.00..281.12 rows=10012 width=15) (actual time=0.004..10.752 \nrows=10012 loops=1)\n -> Hash (cost=281.12..281.12 rows=10012 width=15) \n(actual time=23.481..23.481 rows=10012 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 482kB\n -> Seq Scan on users approved_by_user \n(cost=0.00..281.12 rows=10012 width=15) (actual time=0.002..10.695 \nrows=10012 loops=1)\n -> Hash (cost=281.12..281.12 rows=10012 width=15) (actual \ntime=23.941..23.941 rows=10012 loops=1)\n Buckets: 1024 Batches: 1 Memory Usage: 482kB\n -> Seq Scan on users archived_by_user \n(cost=0.00..281.12 rows=10012 width=15) (actual time=0.003..10.673 \nrows=10012 loops=1)\n Total runtime: 101.302 ms\n(35 rows)\n\nBut with these changes:\n\nset enable_hashjoin=f;\nset enable_mergejoin=f;\n\nthe plan becomes:\n\n Sort (cost=9786.25..9789.87 rows=1446 width=678) (actual \ntime=5.408..5.414 rows=7 loops=1)\n Sort Key: news.priority, news.date_published\n Sort Method: quicksort Memory: 38kB\n -> Nested Loop Left Join (cost=439.10..9710.35 rows=1446 \nwidth=678) (actual time=5.133..5.364 rows=7 loops=1)\n -> Nested Loop Left Join (cost=439.10..8459.74 rows=1446 \nwidth=667) (actual time=5.128..5.330 rows=7 loops=1)\n -> Nested Loop Left Join (cost=439.10..7209.12 \nrows=1446 width=656) (actual time=5.122..5.271 rows=7 loops=1)\n -> Nested Loop (cost=439.10..5958.51 rows=1446 \nwidth=645) (actual time=5.112..5.204 rows=7 loops=1)\n -> Bitmap Heap Scan on news \n(cost=439.10..4707.89 rows=1446 width=634) (actual time=5.096..5.122 \nrows=7 loops=1)\n Recheck Cond: ((layout_id = 8980) AND \n(state = 2) AND ((visible_from IS NULL) OR (1296806570 <= visible_to)))\n Filter: ((date_published <= \n1296806570) AND ((visible_from IS NULL) OR ((1296806570 >= visible_from) \nAND (1296806570 <= visible_to))))\n -> BitmapAnd (cost=439.10..439.10 \nrows=1455 width=0) (actual time=5.073..5.073 rows=0 loops=1)\n -> Bitmap Index Scan on \nnews_index_layout_id_state (cost=0.00..58.62 rows=2637 width=0) (actual \ntime=0.880..0.880 rows=3464 loops=1)\n Index Cond: ((layout_id = \n8980) AND (state = 2))\n -> BitmapOr \n(cost=379.86..379.86 rows=20084 width=0) (actual time=3.734..3.734 \nrows=0 loops=1)\n -> Bitmap Index Scan on \nnews_visible_from (cost=0.00..373.74 rows=19932 width=0) (actual \ntime=3.255..3.255 rows=19932 loops=1)\n Index Cond: \n(visible_from IS NULL)\n -> Bitmap Index Scan on \nnews_visible_to (cost=0.00..5.39 rows=152 width=0) (actual \ntime=0.476..0.476 rows=43 loops=1)\n Index Cond: \n(1296806570 <= visible_to)\n -> Index Scan using users_pkey on users \nauthor (cost=0.00..0.85 rows=1 width=15) (actual time=0.006..0.007 \nrows=1 loops=7)\n Index Cond: (author.id = news.author_id)\n -> Index Scan using users_pkey on users \nlastedit_user (cost=0.00..0.85 rows=1 width=15) (actual \ntime=0.004..0.005 rows=1 loops=7)\n Index Cond: (news.lastedit_user_id = \nlastedit_user.id)\n -> Index Scan using users_pkey on users \napproved_by_user (cost=0.00..0.85 rows=1 width=15) (actual \ntime=0.002..0.004 rows=1 loops=7)\n Index Cond: (news.approved_by = approved_by_user.id)\n -> Index Scan using users_pkey on users archived_by_user 
\n(cost=0.00..0.85 rows=1 width=15) (actual time=0.001..0.001 rows=0 loops=7)\n Index Cond: (news.archived_by = archived_by_user.id)\n Total runtime: 5.605 ms\n(27 rows)\n\nNote the difference in execution times: 100 ms vs 5 ms.\n\nSo far, I've tried increasing statistics to 1000 on state, layout_id, \nauthor_id, lastedit_user_id, approved_by, archived_by fields, reindexing \nand vacuum analyze-ing it, but with the default settings the planner \nkeeps missing the mark.\n\nThe news table is:\n\n Table \"public.news\"\n Column | Type | \nModifiers\n------------------+------------------------+---------------------------------------------------\n id | integer | not null default \nnextval('news_id_seq'::regclass)\n layout_id | integer | not null\n news_relation_id | integer | not null\n author_id | integer | not null default 10\n date_created | integer | not null\n date_published | integer | not null\n lastedit | boolean | not null default false\n lastedit_user_id | integer | not null default 10\n lastedit_date | integer | not null\n approved_by | integer | default 10\n state | smallint | not null\n visible_from | integer |\n visible_to | integer |\n archived_by | integer | default 10\n archived_date | integer |\n priority | smallint | not null default 5\n collection_id | integer |\n comment | boolean | not null default false\n keywords | text | not null default ''::text\n icon | boolean | not null default false\n icon_url | text |\n icon_width | smallint |\n icon_height | smallint |\n icon_position | character(1) |\n icon_onclick | text |\n icon_newwindow | boolean |\n title | character varying(300) | not null\n no_lead | boolean | not null default false\n content_exists | boolean | not null default false\n lead | text | not null\n content | text | not null default ''::text\n _fts_ | tsvector |\nIndexes:\n \"news_pkey\" PRIMARY KEY, btree (id)\n \"news_layout_id_key\" UNIQUE, btree (layout_id, news_relation_id)\n \"forms_index_layout_id_state\" btree (layout_id, state)\n \"ii1\" btree (author_id)\n \"ii2\" btree (lastedit_user_id)\n \"ii3\" btree (approved_by)\n \"ii4\" btree (archived_by)\n \"news_fts\" gin (_fts_)\n \"news_index_date_published\" btree (date_published)\n \"news_index_lastedit\" btree (lastedit_date)\n \"news_index_layout_id\" btree (layout_id)\n \"news_index_layout_id_state\" btree (layout_id, state)\n \"news_index_priority\" btree (priority)\n \"news_visible_from\" btree (visible_from)\n \"news_visible_to\" btree (visible_to)\n\n", "msg_date": "Fri, 04 Feb 2011 13:08:23 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "Ivan Voras wrote:\n> The \"vanilla\" plan, with default settings is:\n\nPause here for a second: why default settings? A default PostgreSQL \nconfiguration is suitable for systems with about 128MB of RAM. Since \nyou say you have \"good enough hardware\", I'm assuming you have a bit \nmore than that. The first things to try here are the list at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad \nquery here looks like it might benefit from a large increase to \neffective_cache_size, and possibly an increase to work_mem as well. 
\nYour \"bad\" plan here is doing a lot of sequential scans instead of \nindexed lookups, which makes me wonder if the change in join types \nyou're forcing isn't fixing that part as a coincidence.\n\nNote that the estimated number of rows coming out of each form of plan \nis off by a factor of about 200X, so it's not that the other plan type \nis better estimating anything.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 04 Feb 2011 09:44:58 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "Sorry for the misunderstaning: of course not default \"normal\" settings; shared buffers, work mem, wal segments and others have been tuned according to available hardware (e.g. 4 GB, 32 MB, 10 for these settings, respectively). I meant \"planner default settings\" in the post.\n-- \nSent from my Android phone, please excuse my brevity.\n\nGreg Smith <[email protected]> wrote:\n\nIvan Voras wrote: > The \"vanilla\" plan, with default settings is: Pause here for a second: why default settings? A default PostgreSQL configuration is suitable for systems with about 128MB of RAM. Since you say you have \"good enough hardware\", I'm assuming you have a bit more than that. The first things to try here are the list at http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad query here looks like it might benefit from a large increase to effective_cache_size, and possibly an increase to work_mem as well. Your \"bad\" plan here is doing a lot of sequential scans instead of indexed lookups, which makes me wonder if the change in join types you're forcing isn't fixing that part as a coincidence. Note that the estimated number of rows coming out of each form of plan is off by a factor of about 200X, so it's not that the other plan type is better estimating anything. -- Greg Smith 2ndQuadrant US [email protected] Baltimore, MD PostgreSQL Training, Serv\n ices,\nand 24x7 Support www.2ndQuadrant.us \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books \n\n\nSorry for the misunderstaning: of course not default \"normal\" settings; shared buffers, work mem, wal segments and others have been tuned according to available hardware (e.g. 4 GB, 32 MB, 10 for these settings, respectively). I meant \"planner default settings\" in the post.\n-- \nSent from my Android phone, please excuse my brevity.Greg Smith <[email protected]> wrote:\nIvan Voras wrote:\n> The \"vanilla\" plan, with default settings is:\n\nPause here for a second: why default settings? A default PostgreSQL \nconfiguration is suitable for systems with about 128MB of RAM. Since you say you have \"good enough hardware\", I'm assuming you have a bit more than that. The first things to try here are the list at \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad query here looks like it might benefit from a large increase to \neffective_cache_size, and possibly an increase to work_mem as well. 
\nYour \"bad\" plan here is doing a lot of sequential scans instead of indexed lookups, which makes me wonder if the change in join types you're forcing isn't fixing that part as a coincidence.\n\nNote that the estimated number of rows coming out of each form of plan is off by a factor of about 200X, so it's not that the other plan type is better estimating anything.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Fri, 04 Feb 2011 16:13:19 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "On 04/02/2011 15:44, Greg Smith wrote:\n> Ivan Voras wrote:\n>> The \"vanilla\" plan, with default settings is:\n>\n> Pause here for a second: why default settings? A default PostgreSQL\n> configuration is suitable for systems with about 128MB of RAM. Since you\n> say you have \"good enough hardware\", I'm assuming you have a bit more\n> than that. The first things to try here are the list at\n> http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server ; your bad\n> query here looks like it might benefit from a large increase to\n> effective_cache_size, and possibly an increase to work_mem as well. Your\n> \"bad\" plan here is doing a lot of sequential scans instead of indexed\n> lookups, which makes me wonder if the change in join types you're\n> forcing isn't fixing that part as a coincidence.\n\nMy earlier message didn't get through so here's a repeat:\n\nSorry for the confusion, by \"default settings\" I meant \"planner default \nsettings\" not generic shared buffers, wal logs, work memory etc. - which \nare adequately tuned.\n\n> Note that the estimated number of rows coming out of each form of plan\n> is off by a factor of about 200X, so it's not that the other plan type\n> is better estimating anything.\n\nAny ideas how to fix the estimates? Or will I have to simulate hints by \nissuing \"set enable_hashjoin=f; set enable_mergejoin=f;\" for this query? 
:)\n\n\n", "msg_date": "Sat, 05 Feb 2011 03:50:50 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras <[email protected]> wrote:\n>                                 ->  BitmapAnd  (cost=1282.94..1282.94\n> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)\n>                                       ->  Bitmap Index Scan on\n> news_index_layout_id_state  (cost=0.00..150.14 rows=2587 width=0) (actual\n> time=0.909..0.909 rows=3464 loops=1)\n>                                             Index Cond: ((layout_id = 8980)\n> AND (state = 2))\n>                                       ->  BitmapOr (cost=1132.20..1132.20\n> rows=20127 width=0) (actual time=4.136..4.136 rows=0 loops=1)\n>                                             ->  Bitmap Index Scan on\n> news_visible_from  (cost=0.00..1122.09 rows=19976 width=0) (actual\n> time=3.367..3.367 rows=19932 loops=1)\n>                                                   Index Cond: (visible_from\n> IS NULL)\n>                                             ->  Bitmap Index Scan on\n> news_visible_to  (cost=0.00..9.40 rows=151 width=0) (actual\n> time=0.766..0.766 rows=43 loops=1)\n>                                                   Index Cond: (1296806570 <=\n> visible_to)\n\nI think this part of the query is the problem. Since the planner\ndoesn't support cross-column statistics, it can't spot the correlation\nbetween these different search conditions, resulting in a badly broken\nselectivity estimate.\n\nSometimes you can work around this by adding a single column, computed\nwith a trigger, that contains enough information to test the whole\nWHERE-clause condition using a single indexable test against the\ncolumn value. 
Or sometimes you can get around it by partitioning the\ndata into multiple tables, say with the visible_from IS NULL rows in a\ndifferent table from the rest.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 22 Feb 2011 22:07:32 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas <[email protected]> wrote:\n> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras <[email protected]> wrote:\n>>                                 ->  BitmapAnd  (cost=1282.94..1282.94\n>> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)\n>>                                       ->  Bitmap Index Scan on\n>> news_index_layout_id_state  (cost=0.00..150.14 rows=2587 width=0) (actual\n>> time=0.909..0.909 rows=3464 loops=1)\n>>                                             Index Cond: ((layout_id = 8980)\n>> AND (state = 2))\n>>                                       ->  BitmapOr (cost=1132.20..1132.20\n>> rows=20127 width=0) (actual time=4.136..4.136 rows=0 loops=1)\n>>                                             ->  Bitmap Index Scan on\n>> news_visible_from  (cost=0.00..1122.09 rows=19976 width=0) (actual\n>> time=3.367..3.367 rows=19932 loops=1)\n>>                                                   Index Cond: (visible_from\n>> IS NULL)\n>>                                             ->  Bitmap Index Scan on\n>> news_visible_to  (cost=0.00..9.40 rows=151 width=0) (actual\n>> time=0.766..0.766 rows=43 loops=1)\n>>                                                   Index Cond: (1296806570 <=\n>> visible_to)\n>\n> I think this part of the query is the problem.  Since the planner\n> doesn't support cross-column statistics, it can't spot the correlation\n> between these different search conditions, resulting in a badly broken\n> selectivity estimate.\n>\n> Sometimes you can work around this by adding a single column, computed\n> with a trigger, that contains enough information to test the whole\n> WHERE-clause condition using a single indexable test against the\n> column value.  Or sometimes you can get around it by partitioning the\n> data into multiple tables, say with the visible_from IS NULL rows in a\n> different table from the rest.\n\nWhy should you need cross column statistics for this case? You should\nbe able to multiple selectivity from left to right as long as you are\ndoing equality comparisons, yes?\n\nRight now the planner is treating\nselect * from foo where (a,b,c) between (1,1,1) and (9,9,9) the same\n(using selectivity on a) as\nselect * from foo where (a,b,c) between (1,1,5) and (1,1,7)\n\nbut they are not the same. 
since in the second query terms a,b are\nequal, shouldn't you able to multiply the selectivity through?\n\nmerlin\n", "msg_date": "Mon, 7 Mar 2011 14:40:48 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "On Mon, Mar 7, 2011 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\n> On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas <[email protected]> wrote:\n>> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras <[email protected]> wrote:\n>>>                                 ->  BitmapAnd  (cost=1282.94..1282.94\n>>> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)\n>>>                                       ->  Bitmap Index Scan on\n>>> news_index_layout_id_state  (cost=0.00..150.14 rows=2587 width=0) (actual\n>>> time=0.909..0.909 rows=3464 loops=1)\n>>>                                             Index Cond: ((layout_id = 8980)\n>>> AND (state = 2))\n>>>                                       ->  BitmapOr (cost=1132.20..1132.20\n>>> rows=20127 width=0) (actual time=4.136..4.136 rows=0 loops=1)\n>>>                                             ->  Bitmap Index Scan on\n>>> news_visible_from  (cost=0.00..1122.09 rows=19976 width=0) (actual\n>>> time=3.367..3.367 rows=19932 loops=1)\n>>>                                                   Index Cond: (visible_from\n>>> IS NULL)\n>>>                                             ->  Bitmap Index Scan on\n>>> news_visible_to  (cost=0.00..9.40 rows=151 width=0) (actual\n>>> time=0.766..0.766 rows=43 loops=1)\n>>>                                                   Index Cond: (1296806570 <=\n>>> visible_to)\n>>\n>> I think this part of the query is the problem.  Since the planner\n>> doesn't support cross-column statistics, it can't spot the correlation\n>> between these different search conditions, resulting in a badly broken\n>> selectivity estimate.\n>>\n>> Sometimes you can work around this by adding a single column, computed\n>> with a trigger, that contains enough information to test the whole\n>> WHERE-clause condition using a single indexable test against the\n>> column value.  Or sometimes you can get around it by partitioning the\n>> data into multiple tables, say with the visible_from IS NULL rows in a\n>> different table from the rest.\n>\n> Why should you need cross column statistics for this case?  You should\n> be able to multiple selectivity from left to right as long as you are\n> doing equality comparisons, yes?\n>\n> Right now the planner is treating\n> select * from foo where (a,b,c) between (1,1,1) and (9,9,9) the same\n> (using selectivity on a) as\n> select * from foo where (a,b,c) between (1,1,5) and (1,1,7)\n>\n> but they are not the same. since in the second query terms a,b are\n> equal, shouldn't you able to multiply the selectivity through?\n\nI'm not quite following that...\n\nThe reason I thought cross-column correlations might be relevant is\nthat the bitmap index scan on news_visible_from is quite accurate\n(19976 estimated vs. 19932 actual) and the bitmap index scan on\nnews_visible_to is tolerably accurate (151 estimated vs. 41 actual)\nbut the estimate on the BitmapOr is somehow totally wrong (20127\nestimated vs. 0 actual). But on further reflection that doesn't make\nmuch sense. 
How can the BitmapOr produce fewer rows than the sum of\nits constituent inputs?\n\n/me scratches head.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Tue, 8 Mar 2011 15:57:46 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> The reason I thought cross-column correlations might be relevant is\n> that the bitmap index scan on news_visible_from is quite accurate\n> (19976 estimated vs. 19932 actual) and the bitmap index scan on\n> news_visible_to is tolerably accurate (151 estimated vs. 41 actual)\n> but the estimate on the BitmapOr is somehow totally wrong (20127\n> estimated vs. 0 actual). But on further reflection that doesn't make\n> much sense. How can the BitmapOr produce fewer rows than the sum of\n> its constituent inputs?\n\nThat's not an estimation bug, that's a measurement bug. We don't try to\ncount the actual number of rows present in the result of a BitmapOr or\nBitmapAnd node. (It would be impractical in lossy cases anyway, not to\nmention expensive.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Mar 2011 16:24:25 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin " }, { "msg_contents": "On Tue, Mar 8, 2011 at 2:57 PM, Robert Haas <[email protected]> wrote:\n> On Mon, Mar 7, 2011 at 3:40 PM, Merlin Moncure <[email protected]> wrote:\n>> On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas <[email protected]> wrote:\n>>> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras <[email protected]> wrote:\n>>>>                                 ->  BitmapAnd  (cost=1282.94..1282.94\n>>>> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)\n>>>>                                       ->  Bitmap Index Scan on\n>>>> news_index_layout_id_state  (cost=0.00..150.14 rows=2587 width=0) (actual\n>>>> time=0.909..0.909 rows=3464 loops=1)\n>>>>                                             Index Cond: ((layout_id = 8980)\n>>>> AND (state = 2))\n>>>>                                       ->  BitmapOr (cost=1132.20..1132.20\n>>>> rows=20127 width=0) (actual time=4.136..4.136 rows=0 loops=1)\n>>>>                                             ->  Bitmap Index Scan on\n>>>> news_visible_from  (cost=0.00..1122.09 rows=19976 width=0) (actual\n>>>> time=3.367..3.367 rows=19932 loops=1)\n>>>>                                                   Index Cond: (visible_from\n>>>> IS NULL)\n>>>>                                             ->  Bitmap Index Scan on\n>>>> news_visible_to  (cost=0.00..9.40 rows=151 width=0) (actual\n>>>> time=0.766..0.766 rows=43 loops=1)\n>>>>                                                   Index Cond: (1296806570 <=\n>>>> visible_to)\n>>>\n>>> I think this part of the query is the problem.  Since the planner\n>>> doesn't support cross-column statistics, it can't spot the correlation\n>>> between these different search conditions, resulting in a badly broken\n>>> selectivity estimate.\n>>>\n>>> Sometimes you can work around this by adding a single column, computed\n>>> with a trigger, that contains enough information to test the whole\n>>> WHERE-clause condition using a single indexable test against the\n>>> column value.  
Or sometimes you can get around it by partitioning the\n>>> data into multiple tables, say with the visible_from IS NULL rows in a\n>>> different table from the rest.\n>>\n>> Why should you need cross column statistics for this case?  You should\n>> be able to multiple selectivity from left to right as long as you are\n>> doing equality comparisons, yes?\n>>\n>> Right now the planner is treating\n>> select * from foo where (a,b,c) between (1,1,1) and (9,9,9) the same\n>> (using selectivity on a) as\n>> select * from foo where (a,b,c) between (1,1,5) and (1,1,7)\n>>\n>> but they are not the same. since in the second query terms a,b are\n>> equal, shouldn't you able to multiply the selectivity through?\n>\n> I'm not quite following that...\n>\n> The reason I thought cross-column correlations might be relevant is\n> that the bitmap index scan on news_visible_from is quite accurate\n> (19976 estimated vs. 19932 actual) and the bitmap index scan on\n> news_visible_to is tolerably accurate (151 estimated vs. 41 actual)\n> but the estimate on the BitmapOr is somehow totally wrong (20127\n> estimated vs. 0 actual).  But on further reflection that doesn't make\n> much sense.  How can the BitmapOr produce fewer rows than the sum of\n> its constituent inputs?\n>\n> /me scratches head.\n\nmy fault -- the point i was making I think was valid but didn't apply\nto the op's question: I mistakenly where expression could be converted\nto row wise comparison type operation but that wasn't the case...\n\nmerlin\n", "msg_date": "Tue, 8 Mar 2011 15:56:51 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" }, { "msg_contents": "On Tue, Mar 8, 2011 at 4:24 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> The reason I thought cross-column correlations might be relevant is\n>> that the bitmap index scan on news_visible_from is quite accurate\n>> (19976 estimated vs. 19932 actual) and the bitmap index scan on\n>> news_visible_to is tolerably accurate (151 estimated vs. 41 actual)\n>> but the estimate on the BitmapOr is somehow totally wrong (20127\n>> estimated vs. 0 actual).  But on further reflection that doesn't make\n>> much sense.  How can the BitmapOr produce fewer rows than the sum of\n>> its constituent inputs?\n>\n> That's not an estimation bug, that's a measurement bug.  We don't try to\n> count the actual number of rows present in the result of a BitmapOr or\n> BitmapAnd node.  (It would be impractical in lossy cases anyway, not to\n> mention expensive.)\n\nMmm, OK. But I still think there's a problem with the selectivity\nestimate in there somewhere, because\n\n -> Bitmap Heap Scan on news\n(cost=1282.94..5494.05 rows=1422 width=634) (actual time=5.532..5.560\nrows=7 loops=1)\n\n...which may be why the planner is going wrong for the OP.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 9 Mar 2011 11:48:41 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query performance with disabled hashjoin and mergejoin" } ]
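A short aside on the workaround Ivan asks about near the end of this thread: the enable_hashjoin / enable_mergejoin switches do not have to be flipped for the whole session. Below is a minimal sketch (an assumption-level example, not something posted in the thread) that scopes the overrides to a single statement with SET LOCAL; the query shown is a simplified form of the one from the first message, using only columns that appear in the thread's schema.

BEGIN;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_mergejoin = off;
-- simplified form of the query from the first message; the full column
-- list and the three extra LEFT JOINs can be dropped back in unchanged
SELECT news.*, author.public_name AS author_public_name
  FROM news
  JOIN users AS author ON news.author_id = author.id
 WHERE news.layout_id = 8980
   AND news.state = 2
   AND news.date_published <= 1296806570
   AND (news.visible_from IS NULL
        OR 1296806570 BETWEEN news.visible_from AND news.visible_to)
 ORDER BY news.priority DESC, news.date_published DESC;
COMMIT;

Because SET LOCAL lasts only until COMMIT or ROLLBACK, the planner overrides cannot leak into other queries run on the same connection, which makes this a safer stopgap than a session-wide SET while the underlying selectivity estimate is investigated.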
[ { "msg_contents": "I am having huge performance problems with a table. Performance deteriorates\nevery day and I have to run REINDEX and ANALYZE on it every day. auto\nvacuum is on. yes, I am reading the other thread about count(*) :)\n\nbut obviously I'm doing something wrong here\n\n\nexplain analyze select count(*) from fastadder_fastadderstatus;\n\nAggregate (cost=62458.73..62458.74 rows=1 width=0) (actual\ntime=77130.000..77130.000 rows=1 loops=1)\n -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18\nrows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)\n Total runtime: *77250.000 ms*\n\ndirectly after REINDEX and ANALYZE:\n\n Aggregate (cost=62348.70..62348.71 rows=1 width=0) (actual\ntime=15830.000..15830.000 rows=1 loops=1)\n -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61613.16\nrows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)\n Total runtime: 15830.000 ms\n\nstill very bad for a 300k row table\n\na similar table:\n\nexplain analyze select count(*) from fastadder_fastadderstatuslog;\n\n Aggregate (cost=8332.53..8332.54 rows=1 width=0) (actual\ntime=1270.000..1270.000 rows=1 loops=1)\n -> Seq Scan on fastadder_fastadderstatuslog (cost=0.00..7389.02\nrows=377402 width=0) (actual time=0.000..910.000 rows=377033 loops=1)\n Total runtime: 1270.000 ms\n\n\nIt gets updated quite a bit each day, and this is perhaps the problem.\nTo me it doesn't seem like that many updates\n\n100-500 rows inserted per day\nno deletes\n\n10k-50k updates per day\nmostly of this sort: set priority=1 where id=12345\n\nis it perhaps this that is causing the performance problem ?\n\nI could rework the app to be more efficient and do updates using batches\nwhere id IN (1,2,3,4...)\n\nI assume that means a more efficient index update compared to individual\nupdates.\n\nThere is one routine that updates position_in_queue using a lot (too many)\nupdate statements.\nIs that likely to be the culprit ?\n\n*What else can I do to investigate ?*\n\n\n Table\n\"public.fastadder_fastadderstatus\"\n Column | Type |\n Modifiers\n-------------------+--------------------------+------------------------------------------------------------------------\n id | integer | not null default\nnextval('fastadder_fastadderstatus_id_seq'::regclass)\n apt_id | integer | not null\n service_id | integer | not null\n agent_priority | integer | not null\n priority | integer | not null\n last_validated | timestamp with time zone |\n last_sent | timestamp with time zone |\n last_checked | timestamp with time zone |\n last_modified | timestamp with time zone | not null\n running_status | integer |\n validation_status | integer |\n position_in_queue | integer |\n sent | boolean | not null default false\n built | boolean | not null default false\n webid_suffix | integer |\n build_cache | text |\nIndexes:\n \"fastadder_fastadderstatus_pkey\" PRIMARY KEY, btree (id)\n \"fastadder_fastadderstatus_apt_id_key\" UNIQUE, btree (apt_id,\nservice_id)\n \"fastadder_fastadderstatus_agent_priority\" btree (agent_priority)\n \"fastadder_fastadderstatus_apt_id\" btree (apt_id)\n \"fastadder_fastadderstatus_built\" btree (built)\n \"fastadder_fastadderstatus_last_checked\" btree (last_checked)\n \"fastadder_fastadderstatus_last_validated\" btree (last_validated)\n \"fastadder_fastadderstatus_position_in_queue\" btree (position_in_queue)\n \"fastadder_fastadderstatus_priority\" btree (priority)\n \"fastadder_fastadderstatus_running_status\" btree (running_status)\n 
\"fastadder_fastadderstatus_service_id\" btree (service_id)\nForeign-key constraints:\n \"fastadder_fastadderstatus_apt_id_fkey\" FOREIGN KEY (apt_id) REFERENCES\nnsproperties_apt(id) DEFERRABLE INITIALLY DEFERRED\n \"fastadder_fastadderstatus_service_id_fkey\" FOREIGN KEY (service_id)\nREFERENCES fastadder_fastadderservice(id) DEFERRABLE INITIALLY DEFERRED\n\n\nthanks !\n\nI am having huge performance problems with a table. Performance deteriorates every day and I have to run REINDEX and ANALYZE on it every day.  auto vacuum is on.  yes, I am reading the other thread about count(*) :)\nbut obviously I'm doing something wrong hereexplain analyze select count(*) from fastadder_fastadderstatus;Aggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual time=77130.000..77130.000 rows=1 loops=1)\n   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61701.18 rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1) Total runtime: 77250.000 msdirectly after REINDEX and ANALYZE:\n Aggregate  (cost=62348.70..62348.71 rows=1 width=0) (actual time=15830.000..15830.000 rows=1 loops=1)   ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61613.16 rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)\n Total runtime: 15830.000 msstill very bad for a 300k row tablea similar table:explain analyze select count(*) from fastadder_fastadderstatuslog;\n Aggregate  (cost=8332.53..8332.54 rows=1 width=0) (actual time=1270.000..1270.000 rows=1 loops=1)   ->  Seq Scan on fastadder_fastadderstatuslog  (cost=0.00..7389.02 rows=377402 width=0) (actual time=0.000..910.000 rows=377033 loops=1)\n Total runtime: 1270.000 msIt gets updated quite a bit each day, and this is perhaps the problem.To me it doesn't seem like that many updates\n100-500 rows inserted per dayno deletes10k-50k updates per daymostly of this sort:   set priority=1 where id=12345is it perhaps this that is causing the performance problem ?\nI could rework the app to be more efficient and do updates using batcheswhere id IN (1,2,3,4...)I assume that means a more efficient index update compared to individual updates.\nThere is one routine that updates position_in_queue using a lot (too many) update statements.Is that likely to be the culprit ?What else can I do to investigate ?\n                                       Table \"public.fastadder_fastadderstatus\"\n      Column       |           Type           |                               Modifiers                                -------------------+--------------------------+------------------------------------------------------------------------\n id                | integer                  | not null default nextval('fastadder_fastadderstatus_id_seq'::regclass) apt_id            | integer                  | not null\n service_id        | integer                  | not null agent_priority    | integer                  | not null\n priority          | integer                  | not null last_validated    | timestamp with time zone | \n last_sent         | timestamp with time zone |  last_checked      | timestamp with time zone | \n last_modified     | timestamp with time zone | not null running_status    | integer                  | \n validation_status | integer                  |  position_in_queue | integer                  | \n sent              | boolean                  | not null default false built             | boolean                  | not null default false\n webid_suffix      | integer                  |  build_cache       | text                     
| \nIndexes:    \"fastadder_fastadderstatus_pkey\" PRIMARY KEY, btree (id)\n    \"fastadder_fastadderstatus_apt_id_key\" UNIQUE, btree (apt_id, service_id)    \"fastadder_fastadderstatus_agent_priority\" btree (agent_priority)\n    \"fastadder_fastadderstatus_apt_id\" btree (apt_id)    \"fastadder_fastadderstatus_built\" btree (built)\n    \"fastadder_fastadderstatus_last_checked\" btree (last_checked)    \"fastadder_fastadderstatus_last_validated\" btree (last_validated)\n    \"fastadder_fastadderstatus_position_in_queue\" btree (position_in_queue)    \"fastadder_fastadderstatus_priority\" btree (priority)\n    \"fastadder_fastadderstatus_running_status\" btree (running_status)    \"fastadder_fastadderstatus_service_id\" btree (service_id)\nForeign-key constraints:    \"fastadder_fastadderstatus_apt_id_fkey\" FOREIGN KEY (apt_id) REFERENCES nsproperties_apt(id) DEFERRABLE INITIALLY DEFERRED\n    \"fastadder_fastadderstatus_service_id_fkey\" FOREIGN KEY (service_id) REFERENCES fastadder_fastadderservice(id) DEFERRABLE INITIALLY DEFERRED\nthanks !", "msg_date": "Fri, 4 Feb 2011 15:46:35 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 04, 2011 at 03:46:35PM +0100, felix wrote:\n> directly after REINDEX and ANALYZE:\n> \n> Aggregate (cost=62348.70..62348.71 rows=1 width=0) (actual\n> time=15830.000..15830.000 rows=1 loops=1)\n> -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61613.16\n> rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)\n> Total runtime: 15830.000 ms\n\ndo run vacuum of the table. reindex doesn't matter for seq scans, and\nanalyze, while can help choose different plan - will not help here\nanyway.\n\nBest regards,\n\ndepesz\n\n", "msg_date": "Fri, 4 Feb 2011 15:49:51 +0100", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "felix wrote:\n> explain analyze select count(*) from fastadder_fastadderstatus;\n>\n> Aggregate (cost=62458.73..62458.74 rows=1 width=0) (actual \n> time=77130.000..77130.000 rows=1 loops=1)\n> -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18 \n> rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)\n> Total runtime: *77250.000 ms*\n>\n\nPostgreSQL version? If you're running on 8.3 or earlier, I would be \nsuspicous that your Free Space Map has been overrun.\n\nWhat you are seeing is that the table itself is much larger on disk than \nit's supposed to be. That can be caused by frequent UPDATEs if you \ndon't have vacuum cleanup working effectively, you'll get lots of dead \nsections left behind from UPDATEs in the middle. The best way to fix \nall this is to run CLUSTER on the table. 
That will introduce a bit of \ndowntime while it holds a lock on the table (only a few minutes based on \nwhat you've shown here), but the copy you'll have afterwards won't be \nspread all over disk anymore.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nfelix wrote:\n\n\nexplain analyze select count(*) from fastadder_fastadderstatus;\n\n\nAggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual\ntime=77130.000..77130.000 rows=1 loops=1)\n   ->  Seq Scan on fastadder_fastadderstatus\n (cost=0.00..61701.18 rows=303018 width=0) (actual\ntime=50.000..76930.000 rows=302479 loops=1)\n Total runtime: 77250.000 ms\n\n\n\n\n\nPostgreSQL version?  If you're running on 8.3 or earlier, I would be\nsuspicous that your Free Space Map has been overrun.\n\nWhat you are seeing is that the table itself is much larger on disk\nthan it's supposed to be.  That can be caused by frequent UPDATEs if\nyou don't have vacuum cleanup working effectively, you'll get lots of\ndead sections left behind from UPDATEs in the middle.  The best way to\nfix all this is to run CLUSTER on the table.  That will introduce a bit\nof downtime while it holds a lock on the table (only a few minutes\nbased on what you've shown here), but the copy you'll have afterwards\nwon't be spread all over disk anymore.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Fri, 04 Feb 2011 09:56:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 08:46 AM, felix wrote:\n\n> explain analyze select count(*) from fastadder_fastadderstatus;\n>\n> Aggregate (cost=62458.73..62458.74 rows=1 width=0) (actual\n> time=77130.000..77130.000 rows=1 loops=1)\n> -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61701.18\n> rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)\n> Total runtime: *77250.000 ms*\n\nHow big is this table when it's acting all bloated and ugly?\n\nSELECT relpages*8/1024 FROM pg_class\n WHERE relname='fastadder_fastadderstatus';\n\nThat's the number of MB it's taking up that would immediately affect a \ncount statement.\n\n> directly after REINDEX and ANALYZE:\n>\n> Aggregate (cost=62348.70..62348.71 rows=1 width=0) (actual\n> time=15830.000..15830.000 rows=1 loops=1)\n> -> Seq Scan on fastadder_fastadderstatus (cost=0.00..61613.16\n> rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)\n> Total runtime: 15830.000 ms\n\nThat probably put it into cache, explaining the difference, but yeah... \nthat is pretty darn slow. Is this the only thing running when you're \ndoing your tests? What does your disk IO look like?\n\n> 10k-50k updates per day\n> mostly of this sort: set priority=1 where id=12345\n\nWell... that's up to 16% turnover per day, but even then, regular \nvacuuming should keep it manageable.\n\n> I could rework the app to be more efficient and do updates using batches\n> where id IN (1,2,3,4...)\n\nNo. Don't do that. 
You'd be better off loading everything into a temp \ntable and doing this:\n\nUPDATE fastadder_fastadderstatus s\n SET priority = 1\n FROM temp_statuses t\n WHERE t.id=s.id;\n\nIt's a better practice, but still doesn't really explain your \nperformance issues.\n\n> \"fastadder_fastadderstatus_pkey\" PRIMARY KEY, btree (id)\n> \"fastadder_fastadderstatus_apt_id_key\" UNIQUE, btree (apt_id, service_id)\n> \"fastadder_fastadderstatus_agent_priority\" btree (agent_priority)\n> \"fastadder_fastadderstatus_apt_id\" btree (apt_id)\n> \"fastadder_fastadderstatus_built\" btree (built)\n> \"fastadder_fastadderstatus_last_checked\" btree (last_checked)\n> \"fastadder_fastadderstatus_last_validated\" btree (last_validated)\n> \"fastadder_fastadderstatus_position_in_queue\" btree (position_in_queue)\n> \"fastadder_fastadderstatus_priority\" btree (priority)\n> \"fastadder_fastadderstatus_running_status\" btree (running_status)\n> \"fastadder_fastadderstatus_service_id\" btree (service_id)\n\nWhoh! Hold on, here. That looks like *way* too many indexes. Definitely \nwill slow down your insert/update performance. The index on 'built' for \nexample, is a boolean. If it's evenly distributed, that's 150k matches \nfor true or false, rendering it useless, yet still requiring space and \nmaintenance. I'm guessing the story is similar for quite a few of the \nothers.\n\nIt doesn't really explain your count speed, but it certainly isn't helping.\n\nSomething seems fishy, here.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 09:00:51 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 08:56 AM, Greg Smith wrote:\n\n> PostgreSQL version? If you're running on 8.3 or earlier, I would be\n> suspicous that your Free Space Map has been overrun.\n\nThat's my first inclination. If he says autovacuum is running, there's \nno way it should be bloating the table that much.\n\nFelix, If you're running a version before 8.4, what is your \nmax_fsm_pages setting? If it's too low, autovacuum won't save you, and \nyour tables will continue to grow daily unless you vacuum full \nregularly, and I wouldn't recommend that to my worst enemy. ;)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 09:03:49 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "sorry, reply was meant to go to the list.\n\n---------- Forwarded message ----------\nFrom: felix <[email protected]>\nDate: Fri, Feb 4, 2011 at 5:17 PM\nSubject: Re: [PERFORM] Really really slow select count(*)\nTo: [email protected]\n\n\n\n\nOn Fri, Feb 4, 2011 at 4:00 PM, Shaun Thomas <[email protected]> wrote:\n\n> How big is this table when it's acting all bloated and ugly?\n>\n458MB\n\n Is this the only thing running when you're doing your tests? What does your\n> disk IO look like?\n\n\nthis is on a live site. 
best not to scare the animals.\n\nI have the same config on the dev environment but not the same table size.\n\n\n> 10k-50k updates per day\n>> mostly of this sort: set priority=1 where id=12345\n>>\n>\n> Well... that's up to 16% turnover per day, but even then, regular vacuuming\n> should keep it manageable.\n\n\nsomething is definitely amiss with this table.\n\nI'm not sure if its something that happened at one point when killing an\ntask that was writing to it or if its something about the way the app is\nupdating. it SHOULDN'T be that much of a problem, though I can find ways to\nimprove it.\n\n\nNo. Don't do that. You'd be better off loading everything into a temp table\n> and doing this:\n>\n> UPDATE fastadder_fastadderstatus s\n> SET priority = 1\n> FROM temp_statuses t\n> WHERE t.id=s.id;\n>\n\nok, that is one the solutions I was thinking about.\n\nare updates of the where id IN (1,2,3,4) generally not efficient ?\nhow about for select queries ?\n\n\n \"fastadder_fastadderstatus_pkey\" PRIMARY KEY, btree (id)\n>> \"fastadder_fastadderstatus_apt_id_key\" UNIQUE, btree (apt_id, service_id)\n>> \"fastadder_fastadderstatus_agent_priority\" btree (agent_priority)\n>> \"fastadder_fastadderstatus_apt_id\" btree (apt_id)\n>> \"fastadder_fastadderstatus_built\" btree (built)\n>> \"fastadder_fastadderstatus_last_checked\" btree (last_checked)\n>> \"fastadder_fastadderstatus_last_validated\" btree (last_validated)\n>> \"fastadder_fastadderstatus_position_in_queue\" btree (position_in_queue)\n>> \"fastadder_fastadderstatus_priority\" btree (priority)\n>> \"fastadder_fastadderstatus_running_status\" btree (running_status)\n>> \"fastadder_fastadderstatus_service_id\" btree (service_id)\n>>\n>\n> Whoh! Hold on, here. That looks like *way* too many indexes.\n\n\nI actually just added most of those yesterday in an attempt to improve\nperformance. priority and agent_priority were missing indexes and that was a\nbig mistake.\n\noverall performance went way up on my primary selects\n\n\n> Definitely will slow down your insert/update performance.\n\n\nthere are a lot more selects happening throughout the day\n\n\n> The index on 'built' for example, is a boolean. If it's evenly distributed,\n> that's 150k matches for true or false,\n\n\nok,\n\nbuilt True is in the minority.\n\nhere is the test query that caused me to add indices to the booleans. this\nis a 30k table which is doing selects on two booleans constantly. 
again:\nTrue is the minority\n\nexplain analyze SELECT \"nsproperties_apt\".\"id\",\n\"nsproperties_apt\".\"display_address\", \"nsproperties_apt\".\"apt_num\",\n\"nsproperties_apt\".\"bldg_id\", \"nsproperties_apt\".\"is_rental\",\n\"nsproperties_apt\".\"is_furnished\", \"nsproperties_apt\".\"listing_type\",\n\"nsproperties_apt\".\"list_on_web\", \"nsproperties_apt\".\"is_approved\",\n\"nsproperties_apt\".\"status\", \"nsproperties_apt\".\"headline\",\n\"nsproperties_apt\".\"slug\", \"nsproperties_apt\".\"cross_street\",\n\"nsproperties_apt\".\"show_apt_num\", \"nsproperties_apt\".\"show_building_name\",\n\"nsproperties_apt\".\"external_url\", \"nsproperties_apt\".\"listed_on\",\n\"nsproperties_bldg\".\"id\", \"nsproperties_bldg\".\"name\" FROM \"nsproperties_apt\"\nLEFT OUTER JOIN \"nsproperties_bldg\" ON (\"nsproperties_apt\".\"bldg_id\" =\n\"nsproperties_bldg\".\"id\") WHERE (\"nsproperties_apt\".\"list_on_web\" = True AND\n\"nsproperties_apt\".\"is_available\" = True ) ;\n QUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join (cost=408.74..10062.18 rows=3344 width=152) (actual\ntime=12.688..2442.542 rows=2640 loops=1)\n Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)\n -> Seq Scan on nsproperties_apt (cost=0.00..9602.52 rows=3344\nwidth=139) (actual time=0.025..2411.644 rows=2640 loops=1)\n Filter: (list_on_web AND is_available)\n -> Hash (cost=346.66..346.66 rows=4966 width=13) (actual\ntime=12.646..12.646 rows=4966 loops=1)\n -> Seq Scan on nsproperties_bldg (cost=0.00..346.66 rows=4966\nwidth=13) (actual time=0.036..8.236 rows=4966 loops=1)\n Total runtime: 2444.067 ms\n(7 rows)\n\n=>\n\n Hash Left Join (cost=1232.45..9784.18 rows=5690 width=173) (actual\ntime=30.000..100.000 rows=5076 loops=1)\n Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)\n -> Bitmap Heap Scan on nsproperties_apt (cost=618.23..9075.84 rows=5690\nwidth=157) (actual time=10.000..60.000 rows=5076 loops=1)\n Filter: (list_on_web AND is_available)\n -> BitmapAnd (cost=618.23..618.23 rows=5690 width=0) (actual\ntime=10.000..10.000 rows=0 loops=1)\n -> Bitmap Index Scan on nsproperties_apt_is_available\n (cost=0.00..131.81 rows=6874 width=0) (actual time=0.000..0.000 rows=6545\nloops=1)\n Index Cond: (is_available = true)\n -> Bitmap Index Scan on nsproperties_apt_list_on_web\n (cost=0.00..483.32 rows=25476 width=0) (actual time=10.000..10.000\nrows=26010 loops=1)\n Index Cond: (list_on_web = true)\n -> Hash (cost=537.99..537.99 rows=6099 width=16) (actual\ntime=20.000..20.000 rows=6099 loops=1)\n -> Seq Scan on nsproperties_bldg (cost=0.00..537.99 rows=6099\nwidth=16) (actual time=0.000..10.000 rows=6099 loops=1)\n Total runtime: 100.000 ms\n(12 rows)\n\n\n\n\n> rendering it useless, yet still requiring space and maintenance. I'm\n> guessing the story is similar for quite a few of the others.\n>\n> It doesn't really explain your count speed, but it certainly isn't helping.\n>\n\nit shouldn't affect count speed at all\nit will affect the updates of course.\n\n\n>\n> Something seems fishy, here.\n>\n\nindeed\n\n\n\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nsorry, reply was meant to go to the list.---------- Forwarded message ----------From: felix <[email protected]>\nDate: Fri, Feb 4, 2011 at 5:17 PMSubject: Re: [PERFORM] Really really slow select count(*)To: [email protected] Fri, Feb 4, 2011 at 4:00 PM, Shaun Thomas <[email protected]> wrote:\n\nHow big is this table when it's acting all bloated and ugly?458MB\n\n Is this the only thing running when you're doing your tests? What does your disk IO look like?this is on a live site.  best not to scare the animals.I have the same config on the dev environment but not the same table size.\n\n\n\n10k-50k updates per day\nmostly of this sort:   set priority=1 where id=12345\n\n\nWell... that's up to 16% turnover per day, but even then, regular vacuuming should keep it manageable.something is definitely amiss with this table. I'm not sure if its something that happened at one point when killing an task that was writing to it or if its something about the way the app is updating.  it SHOULDN'T be that much of a problem, though I can find ways to improve it.\n\nNo. Don't do that. You'd be better off loading everything into a temp table and doing this:\n\nUPDATE fastadder_fastadderstatus s\n   SET priority = 1\n  FROM temp_statuses t\n WHERE t.id=s.id;ok, that is one the solutions I was thinking about.\nare updates of the where id IN (1,2,3,4) generally not efficient ?\nhow about for select queries ?\n\n\"fastadder_fastadderstatus_pkey\" PRIMARY KEY, btree (id)\n\"fastadder_fastadderstatus_apt_id_key\" UNIQUE, btree (apt_id, service_id)\n\"fastadder_fastadderstatus_agent_priority\" btree (agent_priority)\n\"fastadder_fastadderstatus_apt_id\" btree (apt_id)\n\"fastadder_fastadderstatus_built\" btree (built)\n\"fastadder_fastadderstatus_last_checked\" btree (last_checked)\n\"fastadder_fastadderstatus_last_validated\" btree (last_validated)\n\"fastadder_fastadderstatus_position_in_queue\" btree (position_in_queue)\n\"fastadder_fastadderstatus_priority\" btree (priority)\n\"fastadder_fastadderstatus_running_status\" btree (running_status)\n\"fastadder_fastadderstatus_service_id\" btree (service_id)\n\n\nWhoh! Hold on, here. That looks like *way* too many indexes.I actually just added most of those yesterday in an attempt to improve performance. priority and agent_priority were missing indexes and that was a big mistake.\noverall performance went way up on my primary selects  Definitely will slow down your insert/update performance. \nthere are a lot more selects happening throughout the day The index on 'built' for example, is a boolean. If it's evenly distributed, that's 150k matches for true or false,\nok,built True is in the minority.here is the test query that caused me to add indices to the booleans.  this is a 30k table which is doing selects on two booleans constantly.  
again: True is the minority\nexplain analyze SELECT \"nsproperties_apt\".\"id\", \"nsproperties_apt\".\"display_address\", \"nsproperties_apt\".\"apt_num\", \"nsproperties_apt\".\"bldg_id\", \"nsproperties_apt\".\"is_rental\", \"nsproperties_apt\".\"is_furnished\", \"nsproperties_apt\".\"listing_type\", \"nsproperties_apt\".\"list_on_web\", \"nsproperties_apt\".\"is_approved\", \"nsproperties_apt\".\"status\", \"nsproperties_apt\".\"headline\", \"nsproperties_apt\".\"slug\", \"nsproperties_apt\".\"cross_street\", \"nsproperties_apt\".\"show_apt_num\", \"nsproperties_apt\".\"show_building_name\", \"nsproperties_apt\".\"external_url\", \"nsproperties_apt\".\"listed_on\", \"nsproperties_bldg\".\"id\", \"nsproperties_bldg\".\"name\" FROM \"nsproperties_apt\" LEFT OUTER JOIN \"nsproperties_bldg\" ON (\"nsproperties_apt\".\"bldg_id\" = \"nsproperties_bldg\".\"id\") WHERE (\"nsproperties_apt\".\"list_on_web\" = True AND \"nsproperties_apt\".\"is_available\" = True ) ;\n                                                           QUERY PLAN                                                           --------------------------------------------------------------------------------------------------------------------------------\n Hash Left Join  (cost=408.74..10062.18 rows=3344 width=152) (actual time=12.688..2442.542 rows=2640 loops=1)   Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)\n   ->  Seq Scan on nsproperties_apt  (cost=0.00..9602.52 rows=3344 width=139) (actual time=0.025..2411.644 rows=2640 loops=1)         Filter: (list_on_web AND is_available)   ->  Hash  (cost=346.66..346.66 rows=4966 width=13) (actual time=12.646..12.646 rows=4966 loops=1)\n         ->  Seq Scan on nsproperties_bldg  (cost=0.00..346.66 rows=4966 width=13) (actual time=0.036..8.236 rows=4966 loops=1) Total runtime: 2444.067 ms(7 rows)=>\n Hash Left Join  (cost=1232.45..9784.18 rows=5690 width=173) (actual time=30.000..100.000 rows=5076 loops=1)   Hash Cond: (nsproperties_apt.bldg_id = nsproperties_bldg.id)\n   ->  Bitmap Heap Scan on nsproperties_apt  (cost=618.23..9075.84 rows=5690 width=157) (actual time=10.000..60.000 rows=5076 loops=1)         Filter: (list_on_web AND is_available)         ->  BitmapAnd  (cost=618.23..618.23 rows=5690 width=0) (actual time=10.000..10.000 rows=0 loops=1)\n               ->  Bitmap Index Scan on nsproperties_apt_is_available  (cost=0.00..131.81 rows=6874 width=0) (actual time=0.000..0.000 rows=6545 loops=1)                     Index Cond: (is_available = true)\n               ->  Bitmap Index Scan on nsproperties_apt_list_on_web  (cost=0.00..483.32 rows=25476 width=0) (actual time=10.000..10.000 rows=26010 loops=1)                     Index Cond: (list_on_web = true)\n   ->  Hash  (cost=537.99..537.99 rows=6099 width=16) (actual time=20.000..20.000 rows=6099 loops=1)         ->  Seq Scan on nsproperties_bldg  (cost=0.00..537.99 rows=6099 width=16) (actual time=0.000..10.000 rows=6099 loops=1)\n Total runtime: 100.000 ms(12 rows) \n rendering it useless, yet still requiring space and maintenance. I'm guessing the story is similar for quite a few of the others.\n\nIt doesn't really explain your count speed, but it certainly isn't helping.it shouldn't affect count speed at allit will affect the updates of course.\n\n \n\nSomething seems fishy, here.indeed \n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee  http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Fri, 4 Feb 2011 17:19:14 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: Really really slow select count(*)" }, { "msg_contents": "reply was meant for the list\n\n---------- Forwarded message ----------\nFrom: felix <[email protected]>\nDate: Fri, Feb 4, 2011 at 4:39 PM\nSubject: Re: [PERFORM] Really really slow select count(*)\nTo: Greg Smith <[email protected]>\n\n\n\n\nOn Fri, Feb 4, 2011 at 3:56 PM, Greg Smith <[email protected]> wrote:\n\n> PostgreSQL version? If you're running on 8.3 or earlier, I would be\n> suspicous that your Free Space Map has been overrun.\n>\n\n8.3\n\n\n\n>\n> What you are seeing is that the table itself is much larger on disk than\n> it's supposed to be.\n>\n\nwhich part of the explain told you that ?\n\n> shaun thomas\n\nSELECT relpages*8/1024 FROM pg_class\n WHERE relname='fastadder_fastadderstatus';\n\n458MB\n\nway too big. build_cache is text between 500-1k chars\n\n\n\n\n> That can be caused by frequent UPDATEs if you don't have vacuum cleanup\n> working effectively, you'll get lots of dead sections left behind from\n> UPDATEs in the middle.\n>\n\nok, I just vacuumed it (did this manually a few times as well). and auto is\non.\n\nstill:\n32840.000ms\nand still 458MB\n\n\n\n> The best way to fix all this is to run CLUSTER on the table.\n>\n\nhttp://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html\n\nnow that would order the data on disk by id (primary key)\nthe usage of the table is either by a query or by position_in_queue which is\nrewritten often (I might change this part of the app and pull it out of this\ntable)\n\nis this definitely the best way to fix this ?\n\nthanks for your help !\n\n\nThat will introduce a bit of downtime while it holds a lock on the table\n> (only a few minutes based on what you've shown here), but the copy you'll\n> have afterwards won't be spread all over disk anymore.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n\nreply was meant for the list---------- Forwarded message ----------From: felix <[email protected]>\nDate: Fri, Feb 4, 2011 at 4:39 PMSubject: Re: [PERFORM] Really really slow select count(*)To: Greg Smith <[email protected]>\nOn Fri, Feb 4, 2011 at 3:56 PM, Greg Smith <[email protected]> wrote:\nPostgreSQL version?  If you're running on 8.3 or earlier, I would be\nsuspicous that your Free Space Map has been overrun.8.3 \n\n\nWhat you are seeing is that the table itself is much larger on disk\nthan it's supposed to be. which part of the explain told you that ?> shaun thomas \n\nSELECT relpages*8/1024 FROM pg_class WHERE relname='fastadder_fastadderstatus';\n458MB\nway too big. build_cache is text between 500-1k chars\n \n That can be caused by frequent UPDATEs if\nyou don't have vacuum cleanup working effectively, you'll get lots of\ndead sections left behind from UPDATEs in the middle. ok, I just vacuumed it (did this manually a few times as well). and auto is on.still:\n32840.000ms\nand still 458MB  The best way to\nfix all this is to run CLUSTER on the table.  
http://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html\nnow that would order the data on disk by id (primary key) the usage of the table is either by a query or by position_in_queue which is rewritten often (I might change this part of the app and pull it out of this table)\nis this definitely the best way to fix this ?thanks for your help !\nThat will introduce a bit\nof downtime while it holds a lock on the table (only a few minutes\nbased on what you've shown here), but the copy you'll have afterwards\nwon't be spread all over disk anymore.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Fri, 4 Feb 2011 17:20:27 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 04, 2011 at 05:20:27PM +0100, felix wrote:\n> reply was meant for the list\n> \n> ---------- Forwarded message ----------\n> From: felix <[email protected]>\n> Date: Fri, Feb 4, 2011 at 4:39 PM\n> Subject: Re: [PERFORM] Really really slow select count(*)\n> To: Greg Smith <[email protected]>\n> \n> \n> \n> \n> On Fri, Feb 4, 2011 at 3:56 PM, Greg Smith <[email protected]> wrote:\n> \n> > PostgreSQL version? If you're running on 8.3 or earlier, I would be\n> > suspicous that your Free Space Map has been overrun.\n> >\n> \n> 8.3\n> \n> \n> \n> >\n> > What you are seeing is that the table itself is much larger on disk than\n> > it's supposed to be.\n> >\n> \n> which part of the explain told you that ?\n> \n> > shaun thomas\n> \n> SELECT relpages*8/1024 FROM pg_class\n> WHERE relname='fastadder_fastadderstatus';\n> \n> 458MB\n> \n> way too big. build_cache is text between 500-1k chars\n> \n\nAs has been suggested, you really need to CLUSTER the table\nto remove dead rows. VACUUM will not do that, VACUUM FULL will\nbut will take a full table lock and then you would need to\nREINDEX to fix index bloat. CLUSTER will do this in one shot.\nYou almost certainly have your free space map way too small,\nwhich is how you bloated in the first place.\n\nCheers,\nKen\n", "msg_date": "Fri, 4 Feb 2011 10:27:02 -0600", "msg_from": "Kenneth Marshall <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 10:17 AM, felix wrote:\n\n> > How big is this table when it's acting all bloated and ugly?\n>\n> 458MB\n\nWow! There's no way a table with 300k records should be that big unless \nit's just full of text. 70-seconds seems like a really long time to read \nhalf a gig, but that might be because it's fighting for IO with other \nprocesses.\n\nFor perspective, we have several 1-2 million row tables smaller than \nthat. Heck, I have a 11-million row table that's only 30% larger.\n\n> are updates of the where id IN (1,2,3,4) generally not efficient ?\n> how about for select queries ?\n\nWell, IN is notorious for being inefficient. It's been getting better, \nbut even EXISTS is a better bet than using IN. We've got a lot of stuff \nusing IN here, and we're slowly phasing it out. Every time I get rid of \nit, things get faster.\n\n> I actually just added most of those yesterday in an attempt to improve\n> performance. priority and agent_priority were missing indexes and that\n> was a big mistake.\n\nHaha. Well, that can always be true. 
Ironically one of the things you \nactually did by creating the indexes is create fast lookup values to \ncircumvent your table bloat. It would help with anything except sequence \nscans, which you saw with your count query.\n\n> ok,\n> built True is in the minority.\n\nOk, in that case, use a partial index. If a boolean value is only 1% of \nyour table or something, why bother indexing the rest anyway?\n\nCREATE INDEX fastadder_fastadderstatus_built\n ON fastadder_fastadderstatus\n WHERE built;\n\nBut only if it really is the vast minority. Check this way:\n\nSELECT built, count(1)\n FROM fastadder_fastadderstatus\n GROUP BY 1;\n\nWe used one of these to ignore a status that was over 90% of the table, \nwhere the other statuses combined were less than 10%. The index was 10x \nsmaller and much faster than before.\n\nIf you know both booleans are used together often, you can combine them \ninto a single index, again using a partial where it only indexes if both \nvalues are true. Much smaller, much faster index if it's more selective \nthan the other indexes.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 10:34:57 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 10:03 AM, felix wrote:\n\n> max_fsm_pages | 153600 | Sets the\n> maximum number of disk pages for which free space is tracked.\n> max_fsm_relations | 1000 | Sets the\n> maximum number of tables and indexes for which free space is tracked.\n>\n> how do I determine the best size or if that's the problem ?\n\nWell, the best way is to run:\n\nvacuumdb -a -v -z &>vacuum.log\n\nAnd at the end of the log, it'll tell you how many pages it wants, and \nhow many pages were available.\n\n From the sounds of your database, 150k is way too small. If a single \ntable is getting 10-50k updates per day, it's a good chance a ton of \nother tables are getting similar traffic. With max_fsm_pages at that \nsetting, any update beyond 150k effectively gets forgotten, and \nforgotten rows aren't reused by new inserts or updates.\n\nYour database has probably been slowly expanding for months without you \nrealizing it. The tables that get the most turnover will be hit the \nhardest, as it sounds like what happened here.\n\nYou can stop the bloating by setting the right max_fsm_pages setting, \nbut you'll either have to go through and VACUUM FULL every table in your \ndatabase, or dump/restore to regain all the lost space and performance \n(the later would actually be faster). Before I even touch an older \nPostgreSQL DB, I set it to some value over 3-million just as a starting \nvalue to be on the safe side. A little used memory is a small price to \npay for stopping gradual expansion.\n\nYour reindex was a good idea. Indexes do sometimes need that. But your \nbase tables need work too. Unless you're on 8.4 or above, auto_vacuum \nisn't enough.\n\nJust to share an anecdote, I was with a company about five years ago and \nthey also used the default max_fsm_pages setting. Their DB had expanded \nto 40GB and was filling their disk, only a couple weeks before \nexhausting it. 
I set the max_fsm_pages setting to 2-million, set up a \nbunch of scripts to vacuum-full the tables from smallest to largest (to \nmake enough space for the larger tables, you see) and the database ended \nup at less than 20GB.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 10:35:15 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas <[email protected]> wrote:\n\n>\n>\n> vacuumdb -a -v -z &>vacuum.log\n>\n> And at the end of the log, it'll tell you how many pages it wants, and how\n> many pages were available.\n>\n\nthis is the dev, not live. but this is after it gets done with that table:\n\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.fastadder_fastadderstatus\"\nINFO: \"fastadder_fastadderstatus\": scanned 2492 of 2492 pages, containing\n154378 live rows and 0 dead rows; 30000 rows in sample, 154378 estimated\ntotal rows\n\nand there's nothing at the end of the whole vacuum output about pages\n\nactual command:\n\nvacuumdb -U postgres -W -v -z djns4 &> vacuum.log\n\nI tried it with all databases too\n\n?\n\nthanks\n\nOn Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas <[email protected]> wrote:\n\n\nvacuumdb -a -v -z &>vacuum.log\n\nAnd at the end of the log, it'll tell you how many pages it wants, and how many pages were available.this is the dev, not live. but this is after it gets done with that table:\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO:  analyzing \"public.fastadder_fastadderstatus\"INFO:  \"fastadder_fastadderstatus\": scanned 2492 of 2492 pages, containing 154378 live rows and 0 dead rows; 30000 rows in sample, 154378 estimated total rows\nand there's nothing at the end of the whole vacuum output about pagesactual command:vacuumdb -U postgres -W -v -z djns4 &> vacuum.log\nI tried it with all databases too?thanks", "msg_date": "Fri, 4 Feb 2011 18:38:15 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 10:38 AM, felix <[email protected]> wrote:\n>\n>\n> On Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas <[email protected]> wrote:\n>>\n>>\n>> vacuumdb -a -v -z &>vacuum.log\n>>\n>> And at the end of the log, it'll tell you how many pages it wants, and how\n>> many pages were available.\n>\n> this is the dev, not live. 
but this is after it gets done with that table:\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO:  analyzing \"public.fastadder_fastadderstatus\"\n> INFO:  \"fastadder_fastadderstatus\": scanned 2492 of 2492 pages, containing\n> 154378 live rows and 0 dead rows; 30000 rows in sample, 154378 estimated\n> total rows\n> and there's nothing at the end of the whole vacuum output about pages\n> actual command:\n> vacuumdb -U postgres -W -v -z djns4 &> vacuum.log\n> I tried it with all databases too\n\nI believe you have to run it on the whole db to get that output.\n", "msg_date": "Fri, 4 Feb 2011 10:40:42 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "vacuumdb -a -v -z -U postgres -W &> vacuum.log\n\nthat's all, isn't it ?\n\nit did each db\n\n8.3 in case that matters\n\nthe very end:\n\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: analyzing \"public.seo_partnerlinkcategory\"\nINFO: \"seo_partnerlinkcategory\": scanned 0 of 0 pages, containing 0 live\nrows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n\n\n\nOn Fri, Feb 4, 2011 at 6:40 PM, Scott Marlowe <[email protected]>wrote:\n\n>\n> > I tried it with all databases too\n>\n> I believe you have to run it on the whole db to get that output.\n>\n\nvacuumdb -a -v -z -U postgres -W &> vacuum.logthat's all, isn't it ?it did each db8.3 in case that matters\nthe very end:There were 0 unused item pointers.0 pages are entirely empty.CPU 0.00s/0.00u sec elapsed 0.00 sec.INFO:  analyzing \"public.seo_partnerlinkcategory\"\nINFO:  \"seo_partnerlinkcategory\": scanned 0 of 0 pages, containing 0 live rows and 0 dead rows; 0 rows in sample, 0 estimated total rowsOn Fri, Feb 4, 2011 at 6:40 PM, Scott Marlowe <[email protected]> wrote:\n\n> I tried it with all databases too\n\nI believe you have to run it on the whole db to get that output.", "msg_date": "Fri, 4 Feb 2011 18:44:52 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 11:38 AM, felix wrote:\n\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: analyzing \"public.fastadder_fastadderstatus\"\n> INFO: \"fastadder_fastadderstatus\": scanned 2492 of 2492 pages,\n> containing 154378 live rows and 0 dead rows; 30000 rows in sample,\n> 154378 estimated total rows\n>\n> and there's nothing at the end of the whole vacuum output about pages\n\nI'm not sure if it gives it to you if you pick a single DB, but if you \nuse -a for all, you should get something at the very end like this:\n\nINFO: free space map contains 1365918 pages in 1507 relations\nDETAIL: A total of 1326656 page slots are in use (including overhead).\n1326656 page slots are required to track all free space.\nCurrent limits are: 3000000 page slots, 3500 relations, using 38784 kB.\nVACUUM\n\nThat's on our dev system. Your dev table seems properly sized, but prod \nprobably isn't. If you run an all-database vacuum after-hours, you'll \nsee the stuff at the end. And if your 'page slots are required' is \ngreater than your 'page slots are in use,' you've got a problem.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 11:45:22 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "vacuumdb -a -v -z -U postgres -W &> vacuum.log\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\nPassword:\ncruxnu:nsbuildout crucial$\n\ndo you think its possible that it just doesn't have anything to complain\nabout ?\nor the password is affecting it ?\n\nIn any case I'm not sure I want to run this even at night on production.\n\nwhat is the downside to estimating max_fsm_pages too high ?\n\n3000000 should be safe\nits certainly not 150k\n\nI have one very large table (10m) that is being analyzed before I warehouse\nit.\nthat could've been the monster that ate the free map.\nI think today I've learned that even unused tables affect postgres\nperformance.\n\n\nand do you agree that I should turn CLUSTER ON ?\nI have no problem to stop all tasks to this table at night and just reload\nit\n\n\n\nOn Fri, Feb 4, 2011 at 6:47 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 02/04/2011 11:44 AM, felix wrote:\n>\n> the very end:\n>>\n>> There were 0 unused item pointers.\n>> 0 pages are entirely empty.\n>> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n>> INFO: analyzing \"public.seo_partnerlinkcategory\"\n>> INFO: \"seo_partnerlinkcategory\": scanned 0 of 0 pages, containing 0 live\n>> rows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n>>\n>\n> That looks to me like it didn't finish. Did you fork it off with '&' or run\n> it and wait until it gave control back to you?\n>\n> It really should be telling you how many pages it wanted, and are in use.\n> If not, something odd is going on.\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nvacuumdb -a -v -z -U postgres -W &> vacuum.logPassword: Password: Password: Password: Password: Password: Password: Password: \nPassword: Password: Password: cruxnu:nsbuildout crucial$do you think its possible that it just doesn't have anything to complain about ?or the password is affecting it ?\nIn any case I'm not sure I want to run this even at night on production.what is the downside to estimating max_fsm_pages too high ?\n3000000 should be safe\nits certainly not 150kI have one very large table (10m) that is being analyzed before I warehouse it.that could've been the monster that ate the free map.\nI think today I've learned that even unused tables affect postgres performance.and do you agree that I should turn CLUSTER ON ?I have no problem to stop all tasks to this table at night and just reload it\nOn Fri, Feb 4, 2011 at 6:47 PM, Shaun Thomas <[email protected]> wrote:\nOn 02/04/2011 11:44 AM, felix wrote:\n\n\nthe very end:\n\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO:  analyzing \"public.seo_partnerlinkcategory\"\nINFO: \"seo_partnerlinkcategory\": scanned 0 of 0 pages, containing 0 live\nrows and 0 dead rows; 0 rows in sample, 0 estimated total rows\n\n\nThat looks to me like it didn't finish. Did you fork it off with '&' or run it and wait until it gave control back to you?\n\nIt really should be telling you how many pages it wanted, and are in use. If not, something odd is going on.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee  http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Fri, 4 Feb 2011 19:14:00 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "You can run vacuum verbose on just the postgres database and get the\nglobal numbers at the end. gotta be a superuser as well.\n\n# \\c postgres postgres\npostgres=# vacuum verbose;\n.... lots deleted.\nDETAIL: A total of 7664 page slots are in use (including overhead).\n7664 page slots are required to track all free space.\nCurrent limits are: 1004800 page slots, 5000 relations, using 6426 kB.\n", "msg_date": "Fri, 4 Feb 2011 11:33:38 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 12:14 PM, felix wrote:\n\n> do you think its possible that it just doesn't have anything to\n> complain about ? or the password is affecting it ?\n\nWhy is it asking for the password over and over again? It shouldn't be \ndoing that. And also, are you running this as a user with superuser \nprivileges? You might want to think about setting up a .pgpass file, or \nsetting up local trust for the postgres user so you can run maintenance \nwithout having to manually enter a password.\n\n> In any case I'm not sure I want to run this even at night on\n> production.\n\nYou should be. Even with auto vacuum turned on, all of our production \nsystems get a nightly vacuum over the entire list of databases. 
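A sketch of how such a nightly run might be wired up, assuming a ~/.pgpass file
for the postgres user so no password prompt appears (the file must be mode 0600
or libpq ignores it); paths and schedule are examples only:

    # ~/.pgpass  --  host:port:database:user:password
    localhost:5432:*:postgres:secret

    # /etc/cron.d/pg-nightly-vacuum  (3am, all databases, analyze + verbose)
    0 3 * * *  postgres  vacuumdb -a -z -v >> /var/log/pg_nightly_vacuum.log 2>&1

The point is simply that the job runs unattended during the quietest hours.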
It's non \ndestructive, and about the only thing that happens is disk IO. If your \napp has times where it's not very busy, say 3am, it's a good time.\n\nThis is especially true since your free space map is behind.\n\nWe actually turn off autovacuum because we have a very transactionally \nintense DB, and if autovacuum launches on a table in the middle of the \nday, our IO totally obliterates performance. We only run a nightly \nvacuum over all the databases when very few users or scripts are using \nanything.\n\n> what is the downside to estimating max_fsm_pages too high ?\n\nNothing really. It uses more memory to track it, but on modern servers, \nit's not a concern. The only risk is that you don't know what the real \nsetting should be, so you may not completely stop your bloating.\n\n> and do you agree that I should turn CLUSTER ON ?\n\nCluster isn't really something you turn on, but something you do. It's \nlike vacuum full, in that it basically rebuilds the table and all \nindexes from scratch. The major issue you'll run into is that it \nreorders the table by the index you chose, so you'd best select the \nprimary key unless you have reasons to use something else. And you have \nto do it table by table, which will really suck since we already know \nyour whole db has bloated, not just one or two tables.\n\nYou're going to be doing some scripting, buddy. :) Well, unless you just \ndo a dump/restore and start over with sane postgresql.conf settings.\n\n> I have no problem to stop all tasks to this table at night and just\n> reload it\n\nThat will work for this table. Just keep in mind all your tables have \nbeen suffering since you installed this database. Tables with the \nhighest turnover were hit hardest, but they all have non-ideal sizes \ncompared to what they would be if your maintenance was working.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 12:34:42 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "felix wrote:\n> and do you agree that I should turn CLUSTER ON ?\n> I have no problem to stop all tasks to this table at night and just \n> reload it\n\nYou don't turn it on; it's a one time operation that does a cleanup. It \nis by far the easiest way to clean up the mess you have right now. \nMoving forward, if you have max_fsm_pages set to an appropriate number, \nyou shouldn't end up back in this position again. But VACUUM along \nwon't get you out of there, and VACUUM FULL is always a worse way to \nclean this up than CLUSTER.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Fri, 04 Feb 2011 13:38:05 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 11:38 AM, Greg Smith <[email protected]> wrote:\n> You don't turn it on; it's a one time operation that does a cleanup.  It is\n> by far the easiest way to clean up the mess you have right now.  
Moving\n> forward, if you have max_fsm_pages set to an appropriate number, you\n> shouldn't end up back in this position again.  But VACUUM along won't get\n> you out of there, and VACUUM FULL is always a worse way to clean this up\n> than CLUSTER.\n\nnote that for large, randomly ordered tables, cluster can be pretty\nslow, and you might want to do the old:\n\nbegin;\nselect * into temporaryholdingtable order by somefield;\ntruncate oldtable;\ninsert into oldtables select * from temporaryholdingtable;\ncommit;\n\nfor fastest performance. I've had Cluster take hours to do that the\nabove does in 1/4th the time.\n", "msg_date": "Fri, 4 Feb 2011 12:01:08 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 01:01 PM, Scott Marlowe wrote:\n\n> begin;\n> select * into temporaryholdingtable order by somefield;\n> truncate oldtable;\n> insert into oldtables select * from temporaryholdingtable;\n> commit;\n\nThat's usually how I do it, except for larger tables, I also throw in a \nDROP INDEX for all the indexes on the table before the insert, and \nCREATE INDEX statements afterwards.\n\nWhich actually brings up a question I've been wondering to myself that I \nmay submit to [HACKERS]: Can we add a a parallel option to the reindexdb \ncommand? We added one to pg_restore, so we already know it works.\n\nI have a bunch of scripts that get all the indexes in the database and \norder them by size (so they're distributed evenly), round-robin them \ninto separate REINDEX sql files, and launches them all in parallel \ndepending on how many threads you want, but that's so hacky I feel dirty \nevery time I use it.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 13:14:14 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 7:34 PM, Shaun Thomas <[email protected]> wrote:\n\n> Why is it asking for the password over and over again? It shouldn't be\n> doing that.\n>\n\nbecause I asked it to: -W\non the production server I need to enter password and I'm testing on dev\nfirst.\n\nI just sudo tried it but still no report\n\n\n and do you agree that I should turn CLUSTER ON ?\n>>\n>\n> Cluster isn't really something you turn on, but something you do.\n\n\ndjns4=# cluster fastadder_fastadderstatus;\nERROR: there is no previously clustered index for table\n\"fastadder_fastadderstatus\"\n\nhttp://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html\n\ndjns4=# alter table fastadder_fastadderstatus CLUSTER ON\nfastadder_fastadderstatus_pkey; ALTER TABLE djns4=# CLUSTER\nfastadder_fastadderstatus; CLUSTER\n\nok, that's why I figured I was turning something on. the table has been\naltered.\n\nit will be pk ordered, new entries always at the end and no deletes\n\nbut this means I have to manually run cluster from time to time, right ? not\nthat there will be much or any reordering. or it should be fine going\nforward with vacuum and enlarging the free space memory map.\n\n\n\n> It's like vacuum full, in that it basically rebuilds the table and all\n> indexes from scratch. 
The major issue you'll run into is that it reorders\n> the table by the index you chose, so you'd best select the primary key\n> unless you have reasons to use something else. And you have to do it table\n> by table, which will really suck since we already know your whole db has\n> bloated, not just one or two tables.\n>\n\ndo we know that ? many of the tables are fairly static.\n\nonly this one is seriously borked, and yet other related tables seem to be\nfine.\n\n\n\n\n> You're going to be doing some scripting, buddy. :) Well, unless you just do\n> a dump/restore and start over with sane postgresql.conf settings.\n\n\nwell who knew the defaults were unsane ? :)\n\nscripting this is trivial, I already have the script\n\nI have made the mistake of doing VACUUM FULL in the past. in fact on this\ntable, and it had to be killed because it took down my entire website !\n that may well be the major borking event. a credit to postgres that the\ntable still functions if that's the case.\n\nscott marlowe:\n\nbegin;\n> select * into temporaryholdingtable order by somefield;\n> truncate oldtable;\n> insert into oldtables select * from temporaryholdingtable;\n> commit;\n\n\nthat sounds like a good approach.\n\ngentlemen, 300,000 + thanks for your generous time !\n(a small number, I know)\n\n-felix\n\nOn Fri, Feb 4, 2011 at 7:34 PM, Shaun Thomas <[email protected]> wrote:\nWhy is it asking for the password over and over again? It shouldn't be doing that.because I asked it to: -Won the production server I need to enter password and I'm testing on dev first.\nI just sudo tried it but still no report\n\nand do you agree that I should turn CLUSTER ON ?\n\n\nCluster isn't really something you turn on, but something you do. djns4=# cluster fastadder_fastadderstatus;ERROR:  there is no previously clustered index for table \"fastadder_fastadderstatus\"\nhttp://www.postgresonline.com/journal/archives/10-How-does-CLUSTER-ON-improve-index-performance.html\ndjns4=# alter table fastadder_fastadderstatus CLUSTER ON fastadder_fastadderstatus_pkey;\nALTER TABLE\ndjns4=# CLUSTER fastadder_fastadderstatus;\nCLUSTER\n\nok, that's why I figured I was turning something on. the table has been altered.\n\nit will be pk ordered, new entries always at the end and no deletesbut this means I have to manually run cluster from time to time, right ? not that there will be much or any reordering.  or it should be fine going forward with vacuum and enlarging the free space memory map.\n It's like vacuum full, in that it basically rebuilds the table and all indexes from scratch. The major issue you'll run into is that it reorders the table by the index you chose, so you'd best select the primary key unless you have reasons to use something else. And you have to do it table by table, which will really suck since we already know your whole db has bloated, not just one or two tables.\ndo we know that ?  many of the tables are fairly static. only this one is seriously borked, and yet other related tables seem to be fine.\n\n\nYou're going to be doing some scripting, buddy. :) Well, unless you just do a dump/restore and start over with sane postgresql.conf settings.well who knew the defaults were unsane ? :)\nscripting this is trivial, I already have the scriptI have made the mistake of doing VACUUM FULL in the past. in fact on this table, and it had to be killed because it took down my entire website !  that may well be the major borking event. 
a credit to postgres that the table still functions if that's the case.\n\nscott marlowe:\nbegin;\nselect * into temporaryholdingtable order by somefield;truncate oldtable;\ninsert into oldtables select * from temporaryholdingtable;commit;\nthat sounds like a good approach.gentlemen, 300,000 + thanks for your generous time !(a small number, I know)-felix", "msg_date": "Fri, 4 Feb 2011 20:26:17 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 12:26 PM, felix <[email protected]> wrote:\n> I just sudo tried it but still no report\n\nIt's not about who you are in Unix / Linux, it's about who you are in\nPostgresql. \\du will show you who is a superusr. psql -U username\nwill let you connect as that user.\n", "msg_date": "Fri, 4 Feb 2011 12:34:35 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 01:26 PM, felix wrote:\n\n> because I asked it to: -W on the production server I need to enter\n> password and I'm testing on dev first.\n\nRight. I'm just surprised it threw up the prompt so many times.\n\n> I just sudo tried it but still no report\n\nNono... you have to run the vacuum command with the -U for a superuser \nin the database. Like the postgres user.\n\n> but this means I have to manually run cluster from time to time, right ?\n> not that there will be much or any reordering. or it should be fine\n> going forward with vacuum and enlarging the free space memory map.\n\nIt should be fine going forward. You only need to re-cluster if you want \nto force the table to remain in the order you chose, since it doesn't \nmaintain the order for updates and new inserts. Since you're only doing \nit as a cleanup, that's not a concern for you.\n\n> do we know that ? many of the tables are fairly static. only this\n> one is seriously borked, and yet other related tables seem to be\n> fine.\n\nProbably not in your case. I just mean that any non-static table is \ngoing to have this problem. If you know what those are, great. I don't \nusually have that luxury, so I err on the side of assuming the whole DB \nis borked. :)\n\nAlso, here's a query you may find useful in the future. It reports the \ntop 20 tables by size, but also reports the row counts and what not. \nIt's a good way to find possibly bloated tables, or tables you could \narchive:\n\nSELECT n.nspname AS schema_name, c.relname AS table_name,\n c.reltuples AS row_count,\n c.relpages*8/1024 AS mb_used,\n pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used\n FROM pg_class c\n JOIN pg_namespace n ON (n.oid=c.relnamespace)\n WHERE c.relkind = 'r'\n ORDER BY total_mb_used DESC\n LIMIT 20;\n\nThe total_mb_used column is the table + all of the indexes and toast \ntable space. The mb_used is just for the table itself. This will also \nhelp you see index bloat, or if a table has too much toasted data.\n\n> well who knew the defaults were unsane ? :)\n\nNot really \"unsane,\" but for any large database, they're not ideal. This \nalso goes for the default_statistics_target setting. If you haven't \nalready, you may want to bump this up to 100 from the default of 10. Not \nenough stats can make the planner ignore indexes and other bad things, \nand it sounds like your DB is big enough to benefit from that.\n\nLater versions have made 100 the default, so you'd just be catching up. 
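For reference, a sketch of that change on 8.3 -- the setting only needs a
reload, not a restart, but new statistics are only gathered at the next analyze:

    # postgresql.conf
    default_statistics_target = 100

    -- then, as a superuser:
    SELECT pg_reload_conf();
    ANALYZE;   -- or let autovacuum / the nightly vacuumdb -z pick it up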
:)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 13:40:19 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "ah right, duh.\nyes, I did it as -U postgres, verified as a superuser\n\njust now did it from inside psql as postgres\n\n\\c djns4\nvacuum verbose analyze;\n\nstill no advice on the pages\n\n\n\nOn Fri, Feb 4, 2011 at 8:34 PM, Scott Marlowe <[email protected]>wrote:\n\n> On Fri, Feb 4, 2011 at 12:26 PM, felix <[email protected]> wrote:\n> > I just sudo tried it but still no report\n>\n> It's not about who you are in Unix / Linux, it's about who you are in\n> Postgresql. \\du will show you who is a superusr. psql -U username\n> will let you connect as that user.\n>\n\nah right, duh. yes, I did it as -U postgres, verified as a superuserjust now did it from inside psql as postgres\n\\c djns4vacuum verbose analyze;still no advice on the pages\nOn Fri, Feb 4, 2011 at 8:34 PM, Scott Marlowe <[email protected]> wrote:\nOn Fri, Feb 4, 2011 at 12:26 PM, felix <[email protected]> wrote:\n\n> I just sudo tried it but still no report\n\nIt's not about who you are in Unix / Linux, it's about who you are in\nPostgresql.  \\du will show you who is a superusr.  psql -U username\nwill let you connect as that user.", "msg_date": "Fri, 4 Feb 2011 20:59:45 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 01:59 PM, felix wrote:\n\n\n> still no advice on the pages\n\nI think it just hates you.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 14:00:38 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "it probably has good reason to hate me.\n\n\n\nns=> SELECT n.nspname AS schema_name, c.relname AS table_name,\nns-> c.reltuples AS row_count,\nns-> c.relpages*8/1024 AS mb_used,\nns-> pg_total_relation_size(c.oid)/1024/1024 AS total_mb_used\nns-> FROM pg_class c\nns-> JOIN pg_namespace n ON (n.oid=c.relnamespace)\nns-> WHERE c.relkind = 'r'\nns-> ORDER BY total_mb_used DESC\nns-> LIMIT 20;\n schema_name | table_name | row_count | mb_used |\ntotal_mb_used\n-------------+----------------------------------+-------------+---------+---------------\n public | django_session | 1.47843e+07 | 4122 |\n 18832\n public | traffic_tracking2010 | 9.81985e+06 | 811 |\n 1653\n public | mailer_mailingmessagelog | 7.20214e+06 | 441 |\n 1082\n public | auth_user | 3.20077e+06 | 572 |\n 791\n public | fastadder_fastadderstatus | 302479 | 458 |\n 693\n public | registration_registrationprofile | 3.01345e+06 | 248 |\n 404\n public | reporting_dp_6c93734c | 1.1741e+06 | 82 |\n 224\n public | peoplez_contact | 79759 | 18 |\n 221\n public | traffic_tracking201101 | 1.49972e+06 | 163 |\n 204\n public | reporting_dp_a3439e2a | 1.32739e+06 | 82 |\n 187\n public | nsproperties_apthistory | 44906 | 69 |\n 126\n public | nsproperties_apt | 30780 | 71 |\n 125\n public | clients_showingrequest | 85175 | 77 |\n 103\n public | reporting_dp_4ffe04ad | 330252 | 26 |\n 63\n public | fastadder_fastadderstatuslog | 377402 | 28 |\n 60\n public | nsmailings_officememotoagent | 268345 | 15 |\n 52\n public | celery_taskmeta | 5041 | 12 |\n 32\n public | mailer_messagelog | 168298 | 24 |\n 32\n public | datapoints_job | 9167 | 12 |\n 23\n public | fastadder_fastadderstatus_errors | 146314 | 7 |\n 21\n\noh and there in the footnotes to django they say \"dont' forget to run the\ndelete expired sessions management every once in a while\". thanks guys.\n\nit won't run now because its too big, I can delete them from psql though\n\nwell just think how sprightly my website will run tomorrow once I fix these.\n\n\n\n\nOn Fri, Feb 4, 2011 at 9:00 PM, Shaun Thomas <[email protected]> wrote:\n\n> On 02/04/2011 01:59 PM, felix wrote:\n>\n>\n> still no advice on the pages\n>>\n>\n> I think it just hates you.\n>\n>\n> --\n> Shaun Thomas\n> OptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n> 312-676-8870\n> [email protected]\n>\n> ______________________________________________\n>\n> See http://www.peak6.com/email_disclaimer.php\n> for terms and conditions related to this email\n>\n\nit probably has good reason to hate me.ns=> SELECT n.nspname AS schema_name, c.relname AS table_name,\nns->       c.reltuples AS row_count,ns->       c.relpages*8/1024 AS mb_used,\nns->       pg_total_relation_size(c.oid)/1024/1024 AS total_mb_usedns->  FROM pg_class c\nns->  JOIN pg_namespace n ON (n.oid=c.relnamespace)ns->  WHERE c.relkind = 'r'\nns->  ORDER BY total_mb_used DESCns->  LIMIT 20;\n schema_name |            table_name            |  row_count  | mb_used | total_mb_used -------------+----------------------------------+-------------+---------+---------------\n public      | django_session                   | 1.47843e+07 |    4122 |         18832 public      | traffic_tracking2010             | 9.81985e+06 |     811 |          1653\n public      | mailer_mailingmessagelog         | 7.20214e+06 |     441 |          1082 public      | auth_user                        | 3.20077e+06 |     572 |           791\n public      | fastadder_fastadderstatus        |      302479 |     458 |           693 public      | registration_registrationprofile | 3.01345e+06 |     248 |           404\n public      | reporting_dp_6c93734c            |  1.1741e+06 |      82 |           224 public      | peoplez_contact                  |       79759 |      18 |           221\n public      | traffic_tracking201101           | 1.49972e+06 |     163 |           204 public      | reporting_dp_a3439e2a            | 1.32739e+06 |      82 |           187\n public      | nsproperties_apthistory          |       44906 |      69 |           126 public      | nsproperties_apt                 |       30780 |      71 |           125\n public      | clients_showingrequest           |       85175 |      77 |           103 public      | reporting_dp_4ffe04ad            |      330252 |      26 |            63\n public      | fastadder_fastadderstatuslog     |      377402 |      28 |            60 public      | nsmailings_officememotoagent     |      268345 |      15 |            52\n public      | celery_taskmeta                  |        5041 |      12 |            32 public      | mailer_messagelog                |      168298 |      24 |            32\n public      | datapoints_job                   |        9167 |      12 |            23 public      | fastadder_fastadderstatus_errors |      146314 |       7 |            21\noh and there in the footnotes to django they say \"dont' forget to run the delete expired sessions management every once in a while\". thanks guys.\nit won't run now because its too big, I can delete them from psql thoughwell just think how sprightly my website will run tomorrow once I fix these.\nOn Fri, Feb 4, 2011 at 9:00 PM, Shaun Thomas <[email protected]> wrote:\nOn 02/04/2011 01:59 PM, felix wrote:\n\n\n\nstill no advice on the pages\n\n\nI think it just hates you.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee  http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email", "msg_date": "Fri, 4 Feb 2011 21:14:22 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/04/2011 02:14 PM, felix wrote:\n\n> oh and there in the footnotes to django they say \"dont' forget to run\n> the delete expired sessions management every once in a while\".\n> thanks guys.\n\nOh Django... :)\n\n> it won't run now because its too big, I can delete them from psql though\n\nYou might be better off deleting the inverse. You know, start a \ntransaction, select all the sessions that *aren't* expired, truncate the \ntable, insert them back into the session table, and commit.\n\nBEGIN;\nCREATE TEMP TABLE foo_1 AS\nSELECT * FROM django_session WHERE date_expired < CURRENT_DATE;\nTRUNCATE django_session;\nINSERT INTO django_session SELECT * from foo_1;\nCOMMIT;\n\nExcept I don't actually know what the expired column is. You can figure \nthat out pretty quick, I assume. That'll also have the benefit of \ncleaning up the indexes and the table all at once. If you just do a \ndelete, the table won't change at all, except that it'll have less \nactive records.\n\n> well just think how sprightly my website will run tomorrow once I fix\n> these.\n\nMaybe. :)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Fri, 4 Feb 2011 14:37:56 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, 04 Feb 2011 21:37:56 +0100, Shaun Thomas <[email protected]> wrote:\n\n> On 02/04/2011 02:14 PM, felix wrote:\n>\n>> oh and there in the footnotes to django they say \"dont' forget to run\n>> the delete expired sessions management every once in a while\".\n>> thanks guys.\n>\n> Oh Django... :)\n>\n>> it won't run now because its too big, I can delete them from psql though\n>\n> You might be better off deleting the inverse. You know, start a \n> transaction, select all the sessions that *aren't* expired, truncate the \n> table, insert them back into the session table, and commit.\n\nNote that for a session table, that is updated very often, you can use the \npostgres' HOT feature which will create a lot less dead rows. Look it up \nin the docs.\n", "msg_date": "Sat, 05 Feb 2011 10:06:19 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "BRUTAL\n\n\nhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html\nmax_fsm_pages\n\nSee Section 17.4.1<http://www.postgresql.org/docs/8.3/interactive/kernel-resources.html#SYSVIPC>\nfor\ninformation on how to adjust those parameters, if necessary.\n\nI see absolutely nothing in there about how to set those parameters.\n\nseveral hours later (\nwhere is my data directory ? 8.4 shows it in SHOW ALL; 8.3 does not.\nconf files ? \"in the data directory\" no, its in /etc/postgres/8.3/main\nwhere is pg_ctl ?\nwhat user do I need to be ? 
postgres\nthen why was it installed in the home dir of a user that does not have\npermissions to use it ??\n)\n\n\ncd /home/crucial/bin\n\n/home/crucial/bin/pg_ctl -D /var/lib/postgresql/8.3/main reload\n\nreload does not reset max_fsm_pages, I need to actually restart the server.\n\npostgres@nestseekers:/home/crucial/bin$ /home/crucial/bin/pg_ctl -D\n/var/lib/postgresql/8.3/main restart\nwaiting for server to shut\ndown............................................................... failed\npg_ctl: server does not shut down\n\n\nOK, my mistake. probably I have to disconnect all clients. I don't want\nto do a \"planned maintenance\" right now.\n\nso I go to sleep\n\nthe server restarts itself an hour later.\n\nbut no, it fails to restart because this memory setting you recommend is not\npossible without reconfiguring the kernel.\n\n\npostgres@nestseekers:/home/crucial/bin$ 2011-02-06 05:18:00 EST LOG: could\nnot load root certificate file \"root.crt\": No such file or directory\n2011-02-06 05:18:00 EST DETAIL: Will not verify client certificates.\n2011-02-06 05:18:00 EST FATAL: could not create shared memory segment:\nInvalid argument\n2011-02-06 05:18:00 EST DETAIL: Failed system call was shmget(key=5432001,\nsize=35463168, 03600).\n2011-02-06 05:18:00 EST HINT: This error usually means that PostgreSQL's\nrequest for a shared memory segment exceeded your kernel's SHMMAX parameter.\n You can either reduce the request size or reconfigure the kernel with\nlarger SHMMAX. To reduce the request size (currently 35463168 bytes),\nreduce PostgreSQL's shared_buffers parameter (currently 3072) and/or its\nmax_connections parameter (currently 103).\nIf the request size is already small, it's possible that it is less than\nyour kernel's SHMMIN parameter, in which case raising the request size or\nreconfiguring SHMMIN is called for.\nThe PostgreSQL documentation contains more information about shared memory\nconfiguration.\n^C\n\n*and the website is down for the next 6 hours while I sleep.*\n\ntotal disaster\n\nafter a few tries I get it to take an max_fsm_pages of 300k\n\npostgres@nestseekers:/home/crucial/bin$ 2011-02-06 05:19:26 EST LOG: could\nnot load root certificate file \"root.crt\": No such file or directory\n2011-02-06 05:19:26 EST DETAIL: Will not verify client certificates.\n2011-02-06 05:19:26 EST LOG: database system was shut down at 2011-02-06\n00:07:41 EST\n2011-02-06 05:19:27 EST LOG: autovacuum launcher started\n2011-02-06 05:19:27 EST LOG: database system is ready to accept connections\n^C\n\n\n\n2011-02-06 05:33:45 EST LOG: checkpoints are occurring too frequently (21\nseconds apart)\n2011-02-06 05:33:45 EST HINT: Consider increasing the configuration\nparameter \"checkpoint_segments\".\n\n\n??\n\n\n From my perspective: the defaults for postgres 8.3 result in a database that\ndoes not scale and fails dramatically after 6 months. changing that default\nis brutally difficult and can only really be done by adjusting something in\nthe kernel.\n\n\nI have clustered that table, its still unbelievably slow.\nI still don't know if this bloat due to the small free space map has\nanything to do with why the table is performing like this.\n\n\nOn Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas <[email protected]> wrote:\n\n>\n> You can stop the bloating by setting the right max_fsm_pages setting,\n>\n\n\n\n\n\n\n> but you'll either have to go through and VACUUM FULL every table in your\n> database, or dump/restore to regain all the lost space and performance (the\n> later would actually be faster). 
Before I even touch an older PostgreSQL DB,\n> I set it to some value over 3-million just as a starting value to be on the\n> safe side. A little used memory is a small price to pay for stopping gradual\n> expansion.\n>\n>\n\nBRUTALhttp://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html\nmax_fsm_pages\n\nSee Section 17.4.1 for information on how to adjust those parameters, if necessary.\nI see absolutely nothing in there about how to set those parameters.several hours later (where is my data directory ?  8.4 shows it in SHOW ALL; 8.3 does not.\nconf files ? \"in the data directory\" no, its in /etc/postgres/8.3/mainwhere is pg_ctl ? what user do I need to be ? postgresthen why was it installed in the home dir of a user that does not have permissions to use it ??  \n)\ncd /home/crucial/bin/home/crucial/bin/pg_ctl -D /var/lib/postgresql/8.3/main reloadreload does not reset max_fsm_pages, I need to actually restart the server.\npostgres@nestseekers:/home/crucial/bin$ /home/crucial/bin/pg_ctl -D /var/lib/postgresql/8.3/main restartwaiting for server to shut down............................................................... failed\npg_ctl: server does not shut downOK, my mistake.   probably I have to disconnect all clients.  I don't want to do a \"planned maintenance\" right now.\nso I go to sleepthe server restarts itself an hour later.but no, it fails to restart because this memory setting you recommend is not possible without reconfiguring the kernel.\npostgres@nestseekers:/home/crucial/bin$ 2011-02-06 05:18:00 EST LOG:  could not load root certificate file \"root.crt\": No such file or directory2011-02-06 05:18:00 EST DETAIL:  Will not verify client certificates.\n2011-02-06 05:18:00 EST FATAL:  could not create shared memory segment: Invalid argument2011-02-06 05:18:00 EST DETAIL:  Failed system call was shmget(key=5432001, size=35463168, 03600).2011-02-06 05:18:00 EST HINT:  This error usually means that PostgreSQL's request for a shared memory segment exceeded your kernel's SHMMAX parameter.  You can either reduce the request size or reconfigure the kernel with larger SHMMAX.  To reduce the request size (currently 35463168 bytes), reduce PostgreSQL's shared_buffers parameter (currently 3072) and/or its max_connections parameter (currently 103).\n If the request size is already small, it's possible that it is less than your kernel's SHMMIN parameter, in which case raising the request size or reconfiguring SHMMIN is called for.\n The PostgreSQL documentation contains more information about shared memory configuration.^Cand the website is down for the next 6 hours while I sleep.\ntotal disasterafter a few tries I get it to take an max_fsm_pages of 300kpostgres@nestseekers:/home/crucial/bin$ 2011-02-06 05:19:26 EST LOG:  could not load root certificate file \"root.crt\": No such file or directory\n2011-02-06 05:19:26 EST DETAIL:  Will not verify client certificates.2011-02-06 05:19:26 EST LOG:  database system was shut down at 2011-02-06 00:07:41 EST2011-02-06 05:19:27 EST LOG:  autovacuum launcher started\n2011-02-06 05:19:27 EST LOG:  database system is ready to accept connections^C2011-02-06 05:33:45 EST LOG:  checkpoints are occurring too frequently (21 seconds apart)\n2011-02-06 05:33:45 EST HINT:  Consider increasing the configuration parameter \"checkpoint_segments\".??From my perspective: the defaults for postgres 8.3 result in a database that does not scale and fails dramatically after 6 months.  
changing that default is brutally difficult and can only really be done by adjusting something in the kernel.\nI have clustered that table, its still unbelievably slow.I still don't know if this bloat due to the small free space map has anything to do with why the table is performing like this.\nOn Fri, Feb 4, 2011 at 5:35 PM, Shaun Thomas <[email protected]> wrote:\n\nYou can stop the bloating by setting the right max_fsm_pages setting,  \nbut you'll either have to go through and VACUUM FULL every table in your database, or dump/restore to regain all the lost space and performance (the later would actually be faster). Before I even touch an older PostgreSQL DB, I set it to some value over 3-million just as a starting value to be on the safe side. A little used memory is a small price to pay for stopping gradual expansion.", "msg_date": "Sun, 6 Feb 2011 11:48:50 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Sun, Feb 6, 2011 at 3:48 AM, felix <[email protected]> wrote:\n> BRUTAL\n>\nSNIP\n\n> OK, my mistake.   probably I have to disconnect all clients.  I don't want\n> to do a \"planned maintenance\" right now.\n> so I go to sleep\n> the server restarts itself an hour later.\n> but no, it fails to restart because this memory setting you recommend is not\n> possible without reconfiguring the kernel.\n\nSNIP\n\n> and the website is down for the next 6 hours while I sleep.\n> total disaster\n\nLet's review:\n1: No test or staging system used before production\n2: DB left in an unknown state (trying to shut down, not able)\n3: No monitoring software to tell you when the site is down\n4: I'm gonna just go ahead and guess no backups were taken either, or\nare regularly taken.\n\nThis website can't be very important, if that's the way you treat it.\nNumber 1 up there becomes even worse because it was your first time\ntrying to make this particular change in Postgresql. If it is\nimportant, you need to learn how to start treating it that way. Even\nthe most junior of sys admins or developers I work with know we test\nit a couple times outside of production before just trying it there.\nAnd my phone starts complaining a minute after the site stops\nresponding if something does go wrong the rest of the time. Do not\nlay this at anyone else's feet.\n\n> From my perspective: the defaults for postgres 8.3 result in a database that\n> does not scale and fails dramatically after 6 months.\n\nAgreed. Welcome to using shared memory and the ridiculously low\ndefaults on most flavors of unix or linux.\n\n>  changing that default\n> is brutally difficult and can only really be done by adjusting something in\n> the kernel.\n\nPlease, that's a gross exaggeration. The sum totoal to changing them is:\n\nrun sysctl -a|grep shm\ncopy out proper lines to cahnge\nedit sysctl.conf\nput new lines in there with changes\nsudo sysctl -p # applies changes\nedit the appropriate postgresql.conf, make changes\nsudo /etc/init.d/postgresql-8.3 stop\nsudo /etc/init.d/postgresql-8.3 start\n\n> I have clustered that table, its still unbelievably slow.\n\nDid you actually delete the old entries before clustering it? 
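To put numbers on those sysctl steps: the failed shmget() earlier in the thread
asked for about 34 MB, so any SHMMAX comfortably above that is enough. A
hypothetical /etc/sysctl.conf fragment (values are examples only):

    kernel.shmmax = 268435456    # largest single segment, in bytes (256 MB here)
    kernel.shmall = 65536        # total shared memory, in 4 kB pages (256 MB)

Apply with `sudo sysctl -p`, then restart only the PostgreSQL service; the
machine itself does not need a reboot.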
if it's\nstill got 4G of old sessions or whatever in it, clustering ain't gonna\nhelp.\n\n> I still don't know if this bloat due to the small free space map has\n> anything to do with why the table is performing like this.\n\nSince you haven't show us what changes, if any, have happened to the\ntable, neither do we :)\n", "msg_date": "Sun, 6 Feb 2011 08:23:12 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 1:14 PM, felix <[email protected]> wrote:\n>  schema_name |            table_name            |  row_count  | mb_used |\n> total_mb_used\n> -------------+----------------------------------+-------------+---------+---------------\n>  public      | django_session                   | 1.47843e+07 |    4122 |\n>       18832\n\nSo does this row still have 15M rows in it? Any old ones you can\ndelete, then run cluster on the table?\n", "msg_date": "Sun, 6 Feb 2011 12:02:37 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "\n>> I have clustered that table, its still unbelievably slow.\n>\n> Did you actually delete the old entries before clustering it? if it's\n> still got 4G of old sessions or whatever in it, clustering ain't gonna\n> help.\n\nAlso, IMHO it is a lot better to store sessions in something like \nmemcached, rather than imposing this rather large load on the main \ndatabase...\n\nPS : if your site has been down for 6 hours, you can TRUNCATE your \nsessions table...\n", "msg_date": "Sun, 06 Feb 2011 20:19:17 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Sun, Feb 6, 2011 at 12:19 PM, Pierre C <[email protected]> wrote:\n>\n>>> I have clustered that table, its still unbelievably slow.\n>>\n>> Did you actually delete the old entries before clustering it?  if it's\n>> still got 4G of old sessions or whatever in it, clustering ain't gonna\n>> help.\n>\n> Also, IMHO it is a lot better to store sessions in something like memcached,\n> rather than imposing this rather large load on the main database...\n>\n> PS : if your site has been down for 6 hours, you can TRUNCATE your sessions\n> table...\n\nAgreed. When I started where I am sessions were on pg and falling\nover all the time. Because I couldn't change it at the time, I was\nforced to make autovac MUCH more aggressive. I didn't have to crank\nup fsm a lot really but did a bit. Then just ran a vacuum full /\nreindex across the sessions table and everything was fine after that.\nBut we could handle 100x time the load for sessions with memcached I\nbet.\n", "msg_date": "Sun, 6 Feb 2011 12:23:53 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Sun, Feb 6, 2011 at 4:23 PM, Scott Marlowe <[email protected]>wrote:\n\n> Let's review:\n>\n\n\n1: No test or staging system used before production\n>\n\nno, I do not have a full ubuntu machine replicating the exact memory and\napplication load of the production server.\n\nthis was changing one configuration parameter. 
something I was advised to\ndo, read about quite a bit, tested on my development server (mac) and then\nproceeded to do at 6 am on Sunday morning, our slowest time.\n\n\n2: DB left in an unknown state (trying to shut down, not able)\n>\n\nwhat ?\n\nI checked the site, everything was normal. I went in via psql and tried\nsome queries for about half an hour and continued to monitor the site. then\nI went to bed at 7am (EU time).\n\nWhy did it shutdown so much later ?\n\nI have never restarted postgres before, so this was all new to me. I\napologize that I wasn't born innately with such knowledge.\n\nSo is it normal for postgres to report that it failed to shut down, operate\nfor an hour and then go ahead and restart itself ?\n\n3: No monitoring software to tell you when the site is down\n>\n\nof course I have monitoring software. both external and internal. but it\ndoesn't come and kick me out of bed. yes, I need an automated cel phone\ncall. that was the first thing I saw to afterwards.\n\n\n4: I'm gonna just go ahead and guess no backups were taken either, or\n> are regularly taken.\n>\n\nWTF ? of course I have backups. I just went through a very harsh down\nperiod event. I fail to see why it is now necessary for you to launch such\nan attack on me.\n\nPerhaps the tone of my post sounded like I was blaming you, or at least you\nfelt that way. Why do you feel that way ?\n\nWhy not respond with: \"ouch ! did you check this ... that....\" say\nsomething nice and helpful. correct my mistakes\n\n\n\n\n> This website can't be very important, if that's the way you treat it.\n>\n\njust to let you know, that is straight up offensive\n\nThis is high traffic real estate site. Downtime is unacceptable. I had\nless downtime than this when I migrated to the new platform.\n\nI spent rather a large amount of time reading and questioning here. I asked\nmany questions for clarification and didn't do ANYTHING until I was sure it\nwas the correct solution. I didn't just pull some shit off a blog and start\nchanging settings at random.\n\nI double checked opinions against different people and I searched for more\ndocs on that param. Amazingly none of the ones I found commented on the\nshared memory issue and I didn't even understand the docs discussing shared\nmemory because it didn't seem to apply to what I was doing. that's my\nmisunderstanding. I come her to share my misunderstanding.\n\n\n\n\n> And my phone starts complaining a minute after the site stops\n> responding if something does go wrong the rest of the time. Do not\n> lay this at anyone else's feet.\n>\n\nI didn't. There is not even the slightest hint of that in my post.\n\nI came here and posted the details of where I went wrong and what confused\nme about the documentation that I followed. That's so other people can\nfollow it and so somebody here can comment on it.\n\n\n\n> changing that default\n> > is brutally difficult and can only really be done by adjusting something\n> in\n> > the kernel.\n>\n> Please, that's a gross exaggeration. 
The sum totoal to changing them is:\n>\n> run sysctl -a|grep shm\n> copy out proper lines to cahnge\n> edit sysctl.conf\n> put new lines in there with changes\n> sudo sysctl -p # applies changes\n> edit the appropriate postgresql.conf, make changes\n> sudo /etc/init.d/postgresql-8.3 stop\n> sudo /etc/init.d/postgresql-8.3 start\n>\n\nConsidering how splendidly the experiment with changing fsm_max_pages went,\nI think you can understand that I have no desire to experiment with kernel\nsettings.\n\nIt is easy for you because you ALREADY KNOW everything involved. I am not a\nsysadmin and we don't have one. My apologies for that.\n\nso does the above mean that I don't have to restart the entire server, just\npostgres ? I assumed that changing kernel settings means rebooting the\nserver.\n\n\n\n> I have clustered that table, its still unbelievably slow.\n>\n> Did you actually delete the old entries before clustering it? if it's\n> still got 4G of old sessions or whatever in it, clustering ain't gonna\n> help.\n>\n\nits a different table. the problem one has only 300k rows\n\nthe problem is not the size, the problem is the speed is catastrophic\n\n\n\n> I still don't know if this bloat due to the small free space map has\n> > anything to do with why the table is performing like this.\n>\n> Since you haven't show us what changes, if any, have happened to the\n> table, neither do we :)\n>\n\nsorry, it didn't seem to be the most important topic when I got out of bed\n\nOn Sun, Feb 6, 2011 at 4:23 PM, Scott Marlowe <[email protected]> wrote:\nLet's review: \n1: No test or staging system used before productionno, I do not have a full ubuntu machine replicating the exact memory and application load of the production server.\nthis was changing one configuration parameter. something I was advised to do, read about quite a bit, tested on my development server (mac) and then proceeded to do at 6 am on Sunday morning, our slowest time.\n\n2: DB left in an unknown state (trying to shut down, not able)what ?I checked the site, everything was normal.  I went in via psql and tried some queries for about half an hour and continued to monitor the site.  then I went to bed at 7am (EU time).\nWhy did it shutdown so much later ?I have never restarted postgres before, so this was all new to me.  I apologize that I wasn't born innately with such knowledge.\nSo is it normal for postgres to report that it failed to shut down, operate for an hour and then go ahead and restart itself ?\n\n3: No monitoring software to tell you when the site is downof course I have monitoring software.  both external and internal.  but it doesn't come and kick me out of bed.  yes, I need an automated cel phone call.  that was the first thing I saw to afterwards.\n\n4: I'm gonna just go ahead and guess no backups were taken either, or\nare regularly taken.WTF ?   of course I have backups.  I just went through a very harsh down period event.  I fail to see why it is now necessary for you to launch such an attack on me.  \nPerhaps the tone of my post sounded like I was blaming you, or at least you felt that way.  Why do you feel that way ?Why not respond with:  \"ouch !  did you check this ... that....\"  say something nice and helpful.  correct my mistakes\n \nThis website can't be very important, if that's the way you treat it.just to let you know, that is straight up offensiveThis is high traffic real estate site.  Downtime is unacceptable.  
I had less downtime than this when I migrated to the new platform.\nI spent rather a large amount of time reading and questioning here.  I asked many questions for clarification and didn't do ANYTHING until I was sure it was the correct solution.  I didn't just pull some shit off a blog and start changing settings at random.\nI double checked opinions against different people and I searched for more docs on that param.  Amazingly none of the ones I found commented on the shared memory issue and I didn't even understand the docs discussing shared memory because it didn't seem to apply to what I was doing.  that's my misunderstanding.  I come her to share my misunderstanding.\n \nAnd my phone starts complaining a minute after the site stops\nresponding if something does go wrong the rest of the time.  Do not\nlay this at anyone else's feet.I didn't.  There is not even the slightest hint of that in my post.I came here and posted the details of where I went wrong and what confused me about the documentation that I followed.  That's so other people can follow it and so somebody here can comment on it.\n>  changing that default\n> is brutally difficult and can only really be done by adjusting something in\n> the kernel.\n\nPlease, that's a gross exaggeration.  The sum totoal to changing them is:\n\nrun sysctl -a|grep shm\ncopy out proper lines to cahnge\nedit sysctl.conf\nput new lines in there with changes\nsudo sysctl -p  # applies changes\nedit the appropriate postgresql.conf, make changes\nsudo /etc/init.d/postgresql-8.3 stop\nsudo /etc/init.d/postgresql-8.3 startConsidering how splendidly the experiment with changing fsm_max_pages went, I think you can understand that I have no desire to experiment with kernel settings.\nIt is easy for you because you ALREADY KNOW everything involved.  I am not a sysadmin and we don't have one.  My apologies for that.so does the above mean that I don't have to restart the entire server, just postgres ?  I assumed that changing kernel settings means rebooting the server.\n\n> I have clustered that table, its still unbelievably slow.\n\nDid you actually delete the old entries before clustering it?  if it's\nstill got 4G of old sessions or whatever in it, clustering ain't gonna\nhelp.its a different table.  the problem one has only 300k rowsthe problem is not the size, the problem is the speed is catastrophic\n\n> I still don't know if this bloat due to the small free space map has\n> anything to do with why the table is performing like this.\n\nSince you haven't show us what changes, if any, have happened to the\ntable, neither do we :)sorry, it didn't seem to be the most important topic when I got out of bed", "msg_date": "Mon, 7 Feb 2011 02:52:01 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "yeah, it already uses memcached with db save. nothing important in session\nanyway\n\nthe session table is not the issue\n\nand I never clustered that one or ever will\n\nthanks for the tip, also the other one about HOT\n\n\nOn Sun, Feb 6, 2011 at 8:19 PM, Pierre C <[email protected]> wrote:\n\n>\n> I have clustered that table, its still unbelievably slow.\n>>>\n>>\n>> Did you actually delete the old entries before clustering it? 
if it's\n>> still got 4G of old sessions or whatever in it, clustering ain't gonna\n>> help.\n>>\n>\n> Also, IMHO it is a lot better to store sessions in something like\n> memcached, rather than imposing this rather large load on the main\n> database...\n>\n> PS : if your site has been down for 6 hours, you can TRUNCATE your sessions\n> table...\n>\n\nyeah, it already uses memcached with db save.  nothing important in session anywaythe session table is not the issueand I never clustered that one or ever will\nthanks for the tip, also the other one about HOTOn Sun, Feb 6, 2011 at 8:19 PM, Pierre C <[email protected]> wrote:\n\n\nI have clustered that table, its still unbelievably slow.\n\n\nDid you actually delete the old entries before clustering it?  if it's\nstill got 4G of old sessions or whatever in it, clustering ain't gonna\nhelp.\n\n\nAlso, IMHO it is a lot better to store sessions in something like memcached, rather than imposing this rather large load on the main database...\n\nPS : if your site has been down for 6 hours, you can TRUNCATE your sessions table...", "msg_date": "Mon, 7 Feb 2011 02:55:57 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 07/02/11 09:52, felix wrote:\n\n> So is it normal for postgres to report that it failed to shut down,\n> operate for an hour and then go ahead and restart itself ?\n\nThat's pretty wacky. Did you shut it down via pg_ctl or using an init\nscript / \"service\" command in your OS?\n\nIt shouldn't matter, but it'd be good to know. If the problem is with an\ninit script, then knowing which OS and version you're on would help. If\nit was with psql directly, that's something that can be looked into.\n\n> this was changing one configuration parameter. something I was advised\n> to do, read about quite a bit, tested on my development server (mac) and\n> then proceeded to do at 6 am on Sunday morning, our slowest time.\n\nSystem V shared memory is awful - but it's really the only reasonable\nalternative for a multi-process (rather than multi-threaded) server.\n\nPostgreSQL could use mmap()ed temp files, but that'd add additional\noverheads and they'd potentially get flushed from main memory unless the\nmemory was mlock()ed. As mlock() has similar limits and configuration\nmethods to system V shared memory, you get back to the same problem in a\nslightly different form.\n\nWhat would possibly help would be if Pg could fall back to lower\nshared_buffers automatically, screaming about it in the logs but still\nlaunching. OTOH, many people don't check the logs, so they'd think their\nnew setting had taken effect and it hadn't - you've traded one usability\nproblem for another. Even if Pg issued WARNING messages to each client\nthat connected, lots of (non-psql) clients don't display them, so many\nusers would never know.\n\nDo you have a suggestion about how to do this better? The current\napproach is known to be rather unlovely, but nobody's come up with a\nbetter one that works reasonably and doesn't trample on other System V\nshared memory users that may exist on the system.\n\n> so does the above mean that I don't have to restart the entire server,\n> just postgres ? I assumed that changing kernel settings means rebooting\n> the server.\n\nNope. 
sysctl settings like shmmax may be changed on the fly.\n\n-- \nSystem & Network Administrator\nPOST Newspapers\n", "msg_date": "Mon, 07 Feb 2011 11:03:37 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Sun, Feb 6, 2011 at 6:52 PM, felix <[email protected]> wrote:\n> On Sun, Feb 6, 2011 at 4:23 PM, Scott Marlowe <[email protected]>\n> wrote:\n>>\n>> Let's review:\n>>\n>> 1: No test or staging system used before production\n>\n> no, I do not have a full ubuntu machine replicating the exact memory and\n> application load of the production server.\n> this was changing one configuration parameter. something I was advised to\n> do, read about quite a bit, tested on my development server (mac) and then\n> proceeded to do at 6 am on Sunday morning, our slowest time.\n\nI would strongly suggest you at least test these changes out\nelsewhere. It doesn't have to exactly match, but if you had a machine\nthat was even close to test on you'd have known what to expect.\nVirtual machines are dirt simple to set up now. So not having one\ninexcusable.\n\n>> 2: DB left in an unknown state (trying to shut down, not able)\n>\n> what ?\n\nYou told it to restart, which is a stop and a start. It didn't stop.\nIt was in an unknown state. With settings in its config file you\ndidn't know whether or not they worked because you hadn't tested them\nalready on somthing similar.\n\n> Why did it shutdown so much later ?\n\nBecause that's when the last open connection from before when you told\nit to shutdown / restart.\n\n> I have never restarted postgres before, so this was all new to me.\n\nWhich is why you use a virtual machine to build a test lab so you CAN\nmake these changes somewhere other than produciton.\n\n>  I apologize that I wasn't born innately with such knowledge.\n\nGuess what!? Neither was I! I do however know how to setup a test\nsystem so I don't test things on my production machine.\n\n> So is it normal for postgres to report that it failed to shut down, operate\n> for an hour and then go ahead and restart itself ?\n\nYes. It eventually finished your restart you told it to do.\n\n>> 3: No monitoring software to tell you when the site is down\n>\n> of course I have monitoring software.  both external and internal.  but it\n> doesn't come and kick me out of bed.  yes, I need an automated cel phone\n> call.  that was the first thing I saw to afterwards.\n\nMonitoring software that can't send you emails when things break is in\nneed of having that feature enabled.\n\n>\n>> 4: I'm gonna just go ahead and guess no backups were taken either, or\n>> are regularly taken.\n>\n> WTF ?   of course I have backups.  I just went through a very harsh down\n> period event.  I fail to see why it is now necessary for you to launch such\n> an attack on me.\n\nNo, it just seemed like your admin skills were pretty sloppy, so a\nlack of a backup wouldn't surprise me.\n\n> Perhaps the tone of my post sounded like I was blaming you, or at least you\n> felt that way.\n\nIt felt more like you were blaming PostgreSQL for being overly\ncomplex, but I wasn't taking it all that personally.\n\n>  Why do you feel that way ?\n\nI don't.\n\n> Why not respond with:  \"ouch !  did you check this ... that....\"  say\n> something nice and helpful.  correct my mistakes\n\nI'd be glad to, but your message wasn't looking for help. go back and\nread it. 
It's one long complaint.\n\n>> This website can't be very important, if that's the way you treat it.\n>\n> just to let you know, that is straight up offensive\n\nReally? I'd say performing maintenance with no plan or pre-testing is\nfar more offensive.\n\n> This is high traffic real estate site.  Downtime is unacceptable.  I had\n> less downtime than this when I migrated to the new platform.\n\nI expect you did more planning an testing?\n\n> I spent rather a large amount of time reading and questioning here.  I asked\n> many questions for clarification and didn't do ANYTHING until I was sure it\n> was the correct solution.  I didn't just pull some shit off a blog and start\n> changing settings at random.\n\nBut yet you failed to test it on even the simplest similar system\nsetup. And so you lacked the practical knowledge of how to make this\nchange in production safely.\n\n> I double checked opinions against different people and I searched for more\n> docs on that param.  Amazingly none of the ones I found commented on the\n> shared memory issue and I didn't even understand the docs discussing shared\n> memory because it didn't seem to apply to what I was doing.  that's my\n> misunderstanding.  I come her to share my misunderstanding.\n\nWell, that's useful. And I can see where there could be some changes\nmade to the docs or a user friendly howto on how to increase shared\nmemory and fsm and all that.\n\n>> Please, that's a gross exaggeration.  The sum totoal to changing them is:\n>>\n>> run sysctl -a|grep shm\n>> copy out proper lines to cahnge\n>> edit sysctl.conf\n>> put new lines in there with changes\n>> sudo sysctl -p  # applies changes\n>> edit the appropriate postgresql.conf, make changes\n>> sudo /etc/init.d/postgresql-8.3 stop\n>> sudo /etc/init.d/postgresql-8.3 start\n>\n> Considering how splendidly the experiment with changing fsm_max_pages went,\n> I think you can understand that I have no desire to experiment with kernel\n> settings.\n\nExperimenting is what you do on a test machine, not a production server.\n\n> It is easy for you because you ALREADY KNOW everything involved.\n\nBut this is important, it was NOT EASY the first time, and I certainly\ndidn't try to make changes on a production server the first time.\n\n> I am not a\n> sysadmin and we don't have one.  My apologies for that.\n\nNo need to apologize. Learn the skills needed to fill that role, or\nhire someone.\n\n> so does the above mean that I don't have to restart the entire server, just\n> postgres ?  I assumed that changing kernel settings means rebooting the\n> server.\n\nExactly. Just pgsql. You use sysctl -p to make the changes take effect.\n\n>> Did you actually delete the old entries before clustering it?  if it's\n>> still got 4G of old sessions or whatever in it, clustering ain't gonna\n>> help.\n>\n> its a different table.  the problem one has only 300k rows\n> the problem is not the size, the problem is the speed is catastrophic\n\nWell, is it bloated? Which table in that previous post is it?\n\n> sorry, it didn't seem to be the most important topic when I got out of bed\n\nIf it's not coffee, it's not an important topic when I get out of bed.\n", "msg_date": "Sun, 6 Feb 2011 20:14:39 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "You really got screwed by the default settings. 
You don’t actually need to “hack” the kernel, but you do have to make these changes, because the amount of memory PG has on your system is laughable. That might actually be the majority of your problem.\r\n\r\nIn your /etc/sysctl.conf, you need these lines:\r\n\r\nkernel.shmmax = 68719476736\r\nkernel.shmall = 4294967296\r\n\r\nThen you need to run\r\n\r\nsysctl -p\r\n\r\nThese changes can only be made as root, by the way. That will give you more than enough shared memory to restart PG. But it also tells me you’re using the default memory settings. If you have more than 4GB on that system, you need to set shared_buffers to 1G or so. In addition, you need to bump your effective_cache_size to something representing the remaining inode cache in your system. Run ‘free’ to see that.\r\n\r\nYou also need to know something about unix systems. If you’re running an ubuntu system, your control files are in /etc/init.d, and you can invoke them with:\r\n\r\nservice pg_cluster restart\r\n\r\nor the more ghetto:\r\n\r\n/etc/init.d/pg_cluster restart\r\n\r\nIt may also be named postgres, postgresql, or some other variant.\r\n\r\nThe problem you’ll run into with this is that PG tries to play nice, so it’ll wait for all connections to disconnect before it shuts down to restart. That means, of course, you need to do a fast shutdown, which forces all connections to disconnect, but the service control script won’t do that. So you’re left with the pg_ctl command again.\r\n\r\npg_ctl –D /my/pg/dir –m fast\r\n\r\nAnd yeah, your checkpoint segments probably are too low. Based on your session table, you should probably have that at 25 or higher.\r\n\r\nBut that’s part of the point. I highly recommend you scan around Google for pages on optimizing PostgreSQL installs. These are pretty much covered in all of them. Fixing the shmall and shmax kernel settings are also pretty well known in database circles, because they really are set to ridiculously low defaults for any machine that may eventually be a server of anything. I was surprised it blocked the memory request for the max_fsm_pages setting, but that just proves your system was unoptimized in several different ways that may have been slowing down your count(*) statements, among other things.\r\n\r\nPlease, for your own sanity and the safety of your systems, look this stuff up to the point you can do most of it without looking. You can clearly do well, because you picked your way through the manuals to know about the kernel settings, and that you could call pg_ctl, and so on.\r\n\r\n\r\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\r\n\nYou really got screwed by the default settings. You don’t actually need to “hack” the kernel, but you do have to make these changes, because the amount of memory PG has on your system is laughable. That might actually be the majority of your problem. In your /etc/sysctl.conf, you need these lines: kernel.shmmax = 68719476736kernel.shmall = 4294967296 Then you need to run sysctl -p These changes can only be made as root, by the way. That will give you more than enough shared memory to restart PG. But it also tells me you’re using the default memory settings. If you have more than 4GB on that system, you need to set shared_buffers to 1G or so. In addition, you need to bump your effective_cache_size to something representing the remaining inode cache in your system. Run ‘free’ to see that. 
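To pull those pieces together, here is a rough sketch of the whole sequence on an Ubuntu-style box. The kernel values are the ones quoted above; the postgresql.conf numbers are only illustrative guesses for a machine with roughly 4GB of RAM, not tuned settings, and the init script name varies by distro:

# /etc/sysctl.conf  (edit as root)
kernel.shmmax = 68719476736
kernel.shmall = 4294967296

# apply the kernel change without rebooting
sudo sysctl -p

# postgresql.conf  (illustrative values for ~4GB of RAM)
shared_buffers = 1GB
effective_cache_size = 3GB

# restart only postgres, not the whole server
sudo /etc/init.d/postgresql-8.3 stop
sudo /etc/init.d/postgresql-8.3 start
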
You also need to know something about unix systems. If you’re running an ubuntu system, your control files are in /etc/init.d, and you can invoke them with: service pg_cluster restart or the more ghetto: /etc/init.d/pg_cluster restart It may also be named postgres, postgresql, or some other variant. The problem you’ll run into with this is that PG tries to play nice, so it’ll wait for all connections to disconnect before it shuts down to restart. That means, of course, you need to do a fast shutdown, which forces all connections to disconnect, but the service control script won’t do that. So you’re left with the pg_ctl command again. pg_ctl –D /my/pg/dir –m fast And yeah, your checkpoint segments probably are too low. Based on your session table, you should probably have that at 25 or higher. But that’s part of the point. I highly recommend you scan around Google for pages on optimizing PostgreSQL installs. These are pretty much covered in all of them. Fixing the shmall and shmax kernel settings are also pretty well known in database circles, because they really are set to ridiculously low defaults for any machine that may eventually be a server of anything. I was surprised it blocked the memory request for the max_fsm_pages setting, but that just proves your system was unoptimized in several different ways that may have been slowing down your count(*) statements, among other things. Please, for your own sanity and the safety of your systems, look this stuff up to the point you can do most of it without looking. You can clearly do well, because you picked your way through the manuals to know about the kernel settings, and that you could call pg_ctl, and so on. \nSee http://www.peak6.com/email_disclaimer.php for terms and conditions related to this email", "msg_date": "Sun, 6 Feb 2011 22:50:08 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "> I checked the site, everything was normal. I went in via psql and tried some\r\n> queries for about half an hour and continued to monitor the site. then I went\r\n> to bed at 7am (EU time).\r\n>\r\n> Why did it shutdown so much later ?\r\n\r\nThat’s one of the things I talked about. To be safe, PG will start to shut down but disallow new connections, and *that’s all*. Old connections are grandfathered in until they disconnect, and when they all go away, it shuts down gracefully.\r\n\r\npg_ctl –D /my/pg/dir stop –m fast\r\npg_ctl –D /my/pg/dir start\r\n\r\nIs what you wanted.\r\n\r\n> I have never restarted postgres before, so this was all new to me. I apologize\r\n> that I wasn't born innately with such knowledge.\r\n\r\nForget about it. But you need to learn your tools. Restarting the DB server is something you’ll need to do occasionally. Just like restarting your Django proxy or app. You need to be fully knowledgeable about every part of your tool-chain, or at least the parts you’re responsible for.\r\n\r\n> I double checked opinions against different people and I searched for more docs\r\n> on that param. Amazingly none of the ones I found commented on the shared\r\n> memory issue and I didn't even understand the docs discussing shared memory\r\n> because it didn't seem to apply to what I was doing.\r\n\r\nThat’s no coincidence. I’ve seen that complaint if you increase shared_buffers, but not for max_fsm_pages. I guess I’m so used to bumping up shmmax and shmall that I forget how low default systems leave those values. 
But you do need to increase them. Every time. They’re crippling your install in more ways than just postgres.\r\n\r\nSo far as your Django install, have you activated the memcache contrib. module? Your pages should be lazy-caching and rarely depend on the DB, if they can. You should also rarely be doing count(*) on a 300k row table, even if everything is cached and speedy. 300k row tables have nasty habits of becoming 3M row tables (or more) after enough time, and no amount of cache will save you from counting that. It’ll take 1 second or more every time eventually, and then you’ll be in real trouble. That’s an application design issue you need to address before it’s too late, or you have to rush and implement a hasty fix.\r\n\r\nI suggest setting your log_min_duration to 1000, so every query that takes longer than 1 second to execute is logged in your postgres logs. You can use that to track down trouble spots before they get really bad. That’s normally aggressive enough to catch the real problem queries without flooding your logs with too much output.\r\n\r\nBeing a DBA sucks sometimes. ☺\r\n\r\n\r\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\r\n\n> I checked the site, everything was normal.  I went in via psql and tried some> queries for about half an hour and continued to monitor the site.  then I went> to bed at 7am (EU time).> > Why did it shutdown so much later ? That’s one of the things I talked about. To be safe, PG will start to shut down but disallow new connections, and *that’s all*. Old connections are grandfathered in until they disconnect, and when they all go away, it shuts down gracefully. pg_ctl –D /my/pg/dir stop –m fastpg_ctl –D /my/pg/dir start Is what you wanted. > I have never restarted postgres before, so this was all new to me.  I apologize> that I wasn't born innately with such knowledge. Forget about it. But you need to learn your tools. Restarting the DB server is something you’ll need to do occasionally. Just like restarting your Django proxy or app. You need to be fully knowledgeable about every part of your tool-chain, or at least the parts you’re responsible for. > I double checked opinions against different people and I searched for more docs> on that param.  Amazingly none of the ones I found commented on the shared > memory issue and I didn't even understand the docs discussing shared memory> because it didn't seem to apply to what I was doing. That’s no coincidence. I’ve seen that complaint if you increase shared_buffers, but not for max_fsm_pages. I guess I’m so used to bumping up shmmax and shmall that I forget how low default systems leave those values. But you do need to increase them. Every time. They’re crippling your install in more ways than just postgres. So far as your Django install, have you activated the memcache contrib. module? Your pages should be lazy-caching and rarely depend on the DB, if they can. You should also rarely be doing count(*) on a 300k row table, even if everything is cached and speedy. 300k row tables have nasty habits of becoming 3M row tables (or more) after enough time, and no amount of cache will save you from counting that. It’ll take 1 second or more every time eventually, and then you’ll be in real trouble. That’s an application design issue you need to address before it’s too late, or you have to rush and implement a hasty fix. 
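And if all you really need is a ballpark figure rather than an exact count, the planner's own statistics can answer without touching the table. A minimal sketch, with the table name as a placeholder, and with the caveat that the number is only as fresh as the last VACUUM or ANALYZE:

-- approximate row count from catalog statistics, no sequential scan
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'your_table';   -- placeholder: substitute the real table name
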
I suggest setting your log_min_duration to 1000, so every query that takes longer than 1 second to execute is logged in your postgres logs. You can use that to track down trouble spots before they get really bad. That’s normally aggressive enough to catch the real problem queries without flooding your logs with too much output. Being a DBA sucks sometimes. J \nSee http://www.peak6.com/email_disclaimer.php for terms and conditions related to this email", "msg_date": "Sun, 6 Feb 2011 23:05:12 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "felix wrote:\n> So is it normal for postgres to report that it failed to shut down, \n> operate for an hour and then go ahead and restart itself ?\n\nYou've already gotten a few explanations for why waiting for connections \ncan cause this. I'll only add that it is critical to be watching the \ndatabase log file when doing work like this with PostgreSQL. Go back \nand check it if you still have the data from when your problematic \nrestart attempt happened, normally you'll get some warnings about it \nstarting to shutdown. Try to look for the actual server shutdown \nmessage and then the restart one after doing this sort of thing. If you \ndon't see them when you do this again, you'll know something unexpected \nis happening, and then to look into what that is.\n\nAlso, as a general downtime commentary born from years of being the \nreceiving end of outages, I'd recommend against ever doing any server \nmaintenance operation for the first time just before bedtime. While \nthat may be convienent from a \"less users are using the site\" \nperspective, the downside is what you've seen here: mistakes can mean \nrather extended outages. Better to get up early and do this sort of \nthing instead, so you can watch the site afterwards for a few hours to \nmake sure nothing is broken. For similar reasons I try to avoid ever \ndoing major changes on a Friday.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 07 Feb 2011 02:51:50 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Mon, Feb 7, 2011 at 05:03, Craig Ringer <[email protected]> wrote:\n> What would possibly help would be if Pg could fall back to lower\n> shared_buffers automatically, screaming about it in the logs but still\n> launching. OTOH, many people don't check the logs, so they'd think their\n> new setting had taken effect and it hadn't - you've traded one usability\n> problem for another. Even if Pg issued WARNING messages to each client\n> that connected, lots of (non-psql) clients don't display them, so many\n> users would never know.\n>\n> Do you have a suggestion about how to do this better? The current\n> approach is known to be rather unlovely, but nobody's come up with a\n> better one that works reasonably and doesn't trample on other System V\n> shared memory users that may exist on the system.\n\nWe could do something similar to what Apache does -- provide distros\nwith a binary to check the configuration file in advance. 
This check\nprogram is launched before the \"restart\" command, and if it fails, the\nserver is not restarted.\n\nRegards,\nMarti\n", "msg_date": "Mon, 7 Feb 2011 12:30:25 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "+1\n\nthis is exactly what I was looking for at the time: a -t (configtest)\noption to pg_ctl\n\nand I think it should fall back to lower shared buffers and log it.\n\nSHOW ALL; would show the used value\n\n\n\nOn Mon, Feb 7, 2011 at 11:30 AM, Marti Raudsepp <[email protected]> wrote:\n\n> On Mon, Feb 7, 2011 at 05:03, Craig Ringer <[email protected]>\n> wrote:\n> > What would possibly help would be if Pg could fall back to lower\n> > shared_buffers automatically, screaming about it in the logs but still\n> > launching. OTOH, many people don't check the logs, so they'd think their\n> > new setting had taken effect and it hadn't - you've traded one usability\n> > problem for another. Even if Pg issued WARNING messages to each client\n> > that connected, lots of (non-psql) clients don't display them, so many\n> > users would never know.\n> >\n> > Do you have a suggestion about how to do this better? The current\n> > approach is known to be rather unlovely, but nobody's come up with a\n> > better one that works reasonably and doesn't trample on other System V\n> > shared memory users that may exist on the system.\n>\n> We could do something similar to what Apache does -- provide distros\n> with a binary to check the configuration file in advance. This check\n> program is launched before the \"restart\" command, and if it fails, the\n> server is not restarted.\n>\n> Regards,\n> Marti\n>\n\n+1 this is exactly what I was looking for at the time:  a -t (configtest) option to pg_ctland I think it should fall back to lower shared buffers and log it.  SHOW ALL; would show the used value\nOn Mon, Feb 7, 2011 at 11:30 AM, Marti Raudsepp <[email protected]> wrote:\nOn Mon, Feb 7, 2011 at 05:03, Craig Ringer <[email protected]> wrote:\n> What would possibly help would be if Pg could fall back to lower\n> shared_buffers automatically, screaming about it in the logs but still\n> launching. OTOH, many people don't check the logs, so they'd think their\n> new setting had taken effect and it hadn't - you've traded one usability\n> problem for another. Even if Pg issued WARNING messages to each client\n> that connected, lots of (non-psql) clients don't display them, so many\n> users would never know.\n>\n> Do you have a suggestion about how to do this better? The current\n> approach is known to be rather unlovely, but nobody's come up with a\n> better one that works reasonably and doesn't trample on other System V\n> shared memory users that may exist on the system.\n\nWe could do something similar to what Apache does -- provide distros\nwith a binary to check the configuration file in advance. 
This check\nprogram is launched before the \"restart\" command, and if it fails, the\nserver is not restarted.\n\nRegards,\nMarti", "msg_date": "Mon, 7 Feb 2011 16:05:07 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Mon, Feb 7, 2011 at 8:05 AM, felix <[email protected]> wrote:\n> +1\n> this is exactly what I was looking for at the time:  a -t (configtest)\n> option to pg_ctl\n> and I think it should fall back to lower shared buffers and log it.\n> SHOW ALL; would show the used value\n\nhowever, much like apache, this might not have gotten caught. In\norder to catch it we'd have to see how much shared mem was available,\nand I think you have to actually allocate it to find out if you can.\nSince pg is already running, allocating shared_buffers / fsm twice\nmight fail when allocating it once would succeed.\n", "msg_date": "Mon, 7 Feb 2011 11:42:02 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Craig Ringer wrote:\n> What would possibly help would be if Pg could fall back to lower\n> shared_buffers automatically, screaming about it in the logs but still\n> launching.\n\nThis is exactly what initdb does when it produces an initial setting for \nshared_buffers that goes into the postgresql.conf file. It wouldn't be \nhard to move that same logic into a loop that executed when startup \nfailed to allocated enough memory.\n\nThere are two problems here, one almost solved, the other more \nphilosphical. It used to be that max_fsm_pages and wal_buffers could be \nlarge enough components to the allocation that reducing them might \nactually be a necessary fix, too. With the removal of the former and a \nmethod to automatically set the latter now available, the remaining \ncomponents to the shared memory sizing computation are probably possible \nto try and fix automatically if the kernel limits are too low.\n\nBut it's unclear whether running in a degraded mode, where performance \nmight be terrible, with only a log message is preferrable to stopping \nand forcing the DBA's attention toward the mistake that was made \nimmediately. Log files get rotated out, and it's not hard to imagine \nthis problem coming to haunt someone only a month or two later--by which \ntime the change to shared_buffers is long forgotten, and the log message \ncomplaining about it lost too. Accordingly I would expect any serious \nattempt to add some auto-reduction behavior to be beset with argument, \nand I'd never consider writing such a thing as a result. Too many \nnon-controversial things I could work on instead.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 07 Feb 2011 14:05:25 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/07/2011 06:30 PM, Marti Raudsepp wrote:\n> On Mon, Feb 7, 2011 at 05:03, Craig Ringer<[email protected]> wrote:\n>> What would possibly help would be if Pg could fall back to lower\n>> shared_buffers automatically, screaming about it in the logs but still\n>> launching. 
OTOH, many people don't check the logs, so they'd think their\n>> new setting had taken effect and it hadn't - you've traded one usability\n>> problem for another. Even if Pg issued WARNING messages to each client\n>> that connected, lots of (non-psql) clients don't display them, so many\n>> users would never know.\n>>\n>> Do you have a suggestion about how to do this better? The current\n>> approach is known to be rather unlovely, but nobody's come up with a\n>> better one that works reasonably and doesn't trample on other System V\n>> shared memory users that may exist on the system.\n>\n> We could do something similar to what Apache does -- provide distros\n> with a binary to check the configuration file in advance. This check\n> program is launched before the \"restart\" command, and if it fails, the\n> server is not restarted.\n\nThat would work for config file errors (and would probably be a good \nidea) but won't help with bad shared memory configuration. When Pg is \nalready running, it's usually not possible for a test program to claim \nthe amount of shared memory the config file says to allocate, because Pg \nis already using it. Nonetheless, Pg will work fine when restarted.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 08 Feb 2011 07:49:27 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/08/2011 03:05 AM, Greg Smith wrote:\n\n> Accordingly I would expect any serious\n> attempt to add some auto-reduction behavior to be beset with argument,\n> and I'd never consider writing such a thing as a result. Too many\n> non-controversial things I could work on instead.\n\nYep. I expressed my own doubts in the post I suggested that in.\n\nIf Pg did auto-correct down, it'd be necessary to scream about it \nangrily and continuously, not just once during startup. Given that it's \nclear many people never even look at the logs (\"what logs? where are \nthey?\") I think Pg would also have to send notices to the client. \nProblem is, many clients don't process notices/warnings, so particularly \nslack admins won't see that either.\n\nI'm not particularly excited about the idea.\n\n--\nCraig Ringer\n", "msg_date": "Tue, 08 Feb 2011 07:55:56 +0800", "msg_from": "Craig Ringer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Mon, Feb 7, 2011 at 6:05 AM, Shaun Thomas <[email protected]> wrote:\n\n>\n> That’s one of the things I talked about. To be safe, PG will start to shut\n> down but disallow new connections, and **that’s all**. Old connections are\n> grandfathered in until they disconnect, and when they all go away, it shuts\n> down gracefully.\n>\n\n\nWell.... it said \"Failed to shutdown ...............\" and then returned\ncontrol.\nand then proceeded to run for about an hour.\n\nI'm not sure how graceful that is.\n\nI generally take programs at their word. \"Failed\" is clearly past tense.\n\n\n\nSo far as your Django install, have you activated the memcache contrib.\n> module? Your pages should be lazy-caching and rarely depend on the DB, if\n> they can.\n>\n\nyes thanks my web app is very finely tuned and is working splendidly.\nI've been working on very large sites sites since 1998 and this client has\nbeen with me for 10 years already. 
its a fairly high traffic site.\n\nI've only been using postgres since we migrated in May\n\nbut it is one particular table on postgres that has shit the sock drawer.\n\n\n\n\n> You should also rarely be doing count(*) on a 300k row table, even if\n> everything is cached and speedy.\n>\n\nI'm not\n\nthis is a test query that is obviously way out of bounds for acceptable\nresponse.\n\nthere is something very very wrong with this table and I need to solve it\nASAP.\nother tables that have less updates but similar sizes are not having this\nproblem.\n\nthere are foreign keys pointing to this table so its a bit tricky to just\nrefill it, but I can think of one way. I'll have to do that.\n\nits only conjecture that the issue is file space bloat or free map problems.\n those are overall issues that I will get to as soon as I can. but this is\ntable specific.\n\n\n That’s an application design issue you need to address before it’s too\n> late, or you have to rush and implement a hasty fix.\n>\n\nit is not an application design issue, though there are always improvements\nbeing made.\n\nBeing a DBA sucks sometimes. J\n>\n\nI am not a DBA, I'm just trying to query a 300k row table.\n\nthough I am happy to learn more. I know an awful lot about a lot of things.\n but you can't specialize in everything\n\nOn Mon, Feb 7, 2011 at 6:05 AM, Shaun Thomas <[email protected]> wrote:\nThat’s one of the things I talked about. To be safe, PG will start to shut down but disallow new connections, and *that’s all*. Old connections are grandfathered in until they disconnect, and when they all go away, it shuts down gracefully.\nWell.... it said \"Failed to shutdown ...............\"  and then returned control.and then proceeded to run for about an hour.\nI'm not sure how graceful that is.I generally take programs at their word.  \"Failed\" is clearly past tense.\nSo far as your Django install, have you activated the memcache contrib. module? Your pages should be lazy-caching and rarely depend on the DB, if they can. \nyes thanks my web app is very finely tuned and is working splendidly.I've been working on very large sites sites since 1998 and this client has been with me for 10 years already.  its a fairly high traffic site.\nI've only been using postgres since we migrated in Maybut it is one particular table on postgres that has shit the sock drawer.\n You should also rarely be doing count(*) on a 300k row table, even if everything is cached and speedy. \nI'm notthis is a test query that is obviously way out of bounds for acceptable response. there is something very very wrong with this table and I need to solve it ASAP.\nother tables that have less updates but similar sizes are not having this problem.there are foreign keys pointing to this table so its a bit tricky to just refill it, but I can think of one way.  I'll have to do that.  \nits only conjecture that the issue is file space bloat or free map problems.  those are overall issues that I will get to as soon as I can. but this is table specific.\n That’s an application design issue you need to address before it’s too late, or you have to rush and implement a hasty fix.\nit is not an application design issue, though there are always improvements being made.\nBeing a DBA sucks sometimes. J\nI am not a DBA, I'm just trying to query a 300k row table.though I am happy to learn more. I know an awful lot about a lot of things.  
but you can't specialize in everything", "msg_date": "Tue, 8 Feb 2011 04:17:46 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Mon, Feb 7, 2011 at 8:17 PM, felix <[email protected]> wrote:\n>\n> On Mon, Feb 7, 2011 at 6:05 AM, Shaun Thomas <[email protected]> wrote:\n>>\n>> That’s one of the things I talked about. To be safe, PG will start to shut\n>> down but disallow new connections, and *that’s all*. Old connections are\n>> grandfathered in until they disconnect, and when they all go away, it shuts\n>> down gracefully.\n>\n> Well.... it said \"Failed to shutdown ...............\"  and then returned\n> control.\n> and then proceeded to run for about an hour.\n> I'm not sure how graceful that is.\n> I generally take programs at their word.  \"Failed\" is clearly past tense.\n\nI agree that here what pg_ctl said and what it didn't aren't exactly\nthe same thing.\n\n> but it is one particular table on postgres that has shit the sock drawer.\n\nWhat queries are running slow, and what does explain analyze have to\nsay about them?\n\n>> You should also rarely be doing count(*) on a 300k row table, even if\n>> everything is cached and speedy.\n>\n> I'm not\n> this is a test query that is obviously way out of bounds for acceptable\n> response.\n> there is something very very wrong with this table and I need to solve it\n> ASAP.\n> other tables that have less updates but similar sizes are not having this\n> problem.\n\nIs this the same problem you had at the beginning and were trying to\nfix with clustering and increasing fsm, or is this now a different\ntable and a different problem?\n\n> there are foreign keys pointing to this table so its a bit tricky to just\n> refill it, but I can think of one way.  I'll have to do that.\n> its only conjecture that the issue is file space bloat or free map problems.\n>  those are overall issues that I will get to as soon as I can. but this is\n> table specific.\n\nWhat does the query you ran before that shows bloat show on this table now?\n\n>>  That’s an application design issue you need to address before it’s too\n>> late, or you have to rush and implement a hasty fix.\n>\n> it is not an application design issue, though there are always improvements\n> being made.\n\nIf your application is doing select count(*) with either no where\nclause or with a very non-selective one, then it is somewhat of a\ndesign issue, and there are ways to make that faster. if it's a\ndifferent query, show us what it and its explain analyze look like.\n\n>> Being a DBA sucks sometimes. J\n>\n> I am not a DBA, I'm just trying to query a 300k row table.\n> though I am happy to learn more. I know an awful lot about a lot of things.\n>  but you can't specialize in everything\n\nWell the good news is that there's a LOT less arcana involved in keep\npgsql happy than there is in keeping something like Oracle happy.\n", "msg_date": "Mon, 7 Feb 2011 21:25:45 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On 02/07/2011 09:17 PM, felix wrote:\n\n> Well.... it said \"Failed to shutdown ...............\" and then\n> returned control. and then proceeded to run for about an hour. I'm\n> not sure how graceful that is.\n\nAh, but that was just the control script that sends the database the \ncommand to shut down. 
The 'graceful' part, is that the database is being \nnice to everyone trying to do things with the data inside.\n\nThe control script has a timeout. So it'll send the command, wait a few \nseconds to see if the database responds, and then gives up. At that \npoint, you can use a fast shutdown to tell the database not to be so \nnice, and it'll force disconnect all users and shut down as quickly as \npossible while maintaining data integrity.\n\nThe easiest way to see this in action is to take a look at the postgres \nlog files. In most default installs, this is in /your/pg/dir/pg_log and \nthe files follow a postgresql-YYYY-MM-DD_HHMMSS.log format and generally \nauto-rotate. If not, set redirect_stderr to on, and make sure \nlog_directory and log_filename are both set. Those are in your \npostgresql.conf, by the way. :)\n\n> I've only been using postgres since we migrated in May\n\nAha. Yeah... relatively new installs tend to have the worst growing \npains. Once you shake this stuff out, you'll be much better off.\n\n> its only conjecture that the issue is file space bloat or free map\n> problems. those are overall issues that I will get to as soon as I can.\n> but this is table specific.\n\nWith 300k rows, count(*) isn't a good test, really. That's just on the \nedge of big-enough that it could be > 1-second to fetch from the disk \ncontroller, even if the table is fully vacuumed. And in your case, that \ntable really will likely come from the disk controller, as your \nshared_buffers are set way too low. The default settings are not going \nto cut it for a database of your size, with the volume you say it's getting.\n\nBut you need to put in those kernel parameters I suggested. And I know \nthis sucks, but you also have to raise your shared_buffers and possibly \nyour work_mem and then restart the DB. But this time, pg_ctl to invoke a \nfast stop, and then use the init script in /etc/init.d to restart it.\n\n> I am not a DBA,\n\nYou are now. :) You're administering a database, either as part of your \njob description, or because you have no choice because your company \ndoesn't have an official DBA. Either way, you'll need to know this \nstuff. Which is why we're helping out.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Tue, 8 Feb 2011 08:23:02 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": ">> Well.... it said \"Failed to shutdown ...............\" and then\n>> returned control. and then proceeded to run for about an hour. I'm\n>> not sure how graceful that is.\n>\n> Ah, but that was just the control script that sends the database the command\n> to shut down. The 'graceful' part, is that the database is being nice to\n> everyone trying to do things with the data inside.\n>\n> The control script has a timeout. So it'll send the command, wait a few\n> seconds to see if the database responds, and then gives up.\n\nFor what it's worth, I think that's the not-so-graceful part. The\ncontrol script gives up, but the actual shutdown still occurs\neventually, after all current connections have ended. 
I think most\nusers will take pg_ctl at its word, and assume \"Failed to shutdown\"\nmeans \"I couldn't shut down with this command, maybe you should try\nsomething else\", and not \"I couldn't shut down right now, although\nI'll get to it as soon as everyone disconnects.\".\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Tue, 8 Feb 2011 08:23:25 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Maciek Sakrejda <[email protected]> wrote:\n>>> Well.... it said \"Failed to shutdown ...............\" and then\n>>> returned control. and then proceeded to run for about an hour.\n>>> I'm not sure how graceful that is.\n>>\n>> Ah, but that was just the control script that sends the database\n>> the command to shut down. The 'graceful' part, is that the\n>> database is being nice to everyone trying to do things with the\n>> data inside.\n>>\n>> The control script has a timeout. So it'll send the command, wait\n>> a few seconds to see if the database responds, and then gives up.\n> \n> For what it's worth, I think that's the not-so-graceful part. The\n> control script gives up, but the actual shutdown still occurs\n> eventually, after all current connections have ended. I think most\n> users will take pg_ctl at its word, and assume \"Failed to\n> shutdown\" means \"I couldn't shut down with this command, maybe you\n> should try something else\", and not \"I couldn't shut down right\n> now, although I'll get to it as soon as everyone disconnects.\".\n \nYeah, current behavior with that shutdown option is the opposite of\nsmart for any production environment I've seen. (I can see where it\nwould be handy in development, though.) What's best in production\nis the equivalent of the fast option with escalation to immediate if\nnecessary to ensure shutdown within the time limit.\n \nIn my world, telling PostgreSQL to shut down PostgreSQL is most\noften because in a few minutes someone is going to pull the plug to\nmove the server, an electrician is going to flip the circuit off to\ndo some wiring, or (in one recent event) the building is on fire and\nthe fire department is about to cut electrical power. In such\nsituations, patiently waiting for a long-running query to complete\nis a Very Bad Idea, much less waiting for a connection pool to cycle\nall connections out. Telling the user that the shutdown failed,\nwhen what is really happening is that it will block new connections\nand keep waiting around indefinitely, with an actual shutdown at\nsome ill-defined future moment is adding insult to injury.\n \nIn my view, anyway....\n \n-Kevin\n", "msg_date": "Tue, 08 Feb 2011 10:36:09 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 18:36, Kevin Grittner\n<[email protected]> wrote:\n> Yeah, current behavior with that shutdown option is the opposite of\n> smart for any production environment I've seen.  (I can see where it\n> would be handy in development, though.)  
What's best in production\n> is the equivalent of the fast option with escalation to immediate if\n> necessary to ensure shutdown within the time limit.\n\n+1, we should call it \"dumb\" :)\n\nNot accepting new connections with \"the database system is shutting\ndown\" makes it even worse -- it means you can't log in to the server\nto inspect who's querying it or call pg_terminate_backend() on them.\n\nI couldn't find any past discussions about changing the default to \"fast\".\nAre there any reasons why that cannot be done in a future release?\n\nRegards,\nMarti\n", "msg_date": "Tue, 8 Feb 2011 18:50:23 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 9:50 AM, Marti Raudsepp <[email protected]> wrote:\n> On Tue, Feb 8, 2011 at 18:36, Kevin Grittner\n> <[email protected]> wrote:\n>> Yeah, current behavior with that shutdown option is the opposite of\n>> smart for any production environment I've seen.  (I can see where it\n>> would be handy in development, though.)  What's best in production\n>> is the equivalent of the fast option with escalation to immediate if\n>> necessary to ensure shutdown within the time limit.\n>\n> +1, we should call it \"dumb\" :)\n>\n> Not accepting new connections with \"the database system is shutting\n> down\" makes it even worse -- it means you can't log in to the server\n> to inspect who's querying it or call pg_terminate_backend() on them.\n>\n> I couldn't find any past discussions about changing the default to \"fast\".\n> Are there any reasons why that cannot be done in a future release?\n\nOr at least throw a hint the user's way that -m fast might be needed.\n", "msg_date": "Tue, 8 Feb 2011 10:31:25 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": ">> I couldn't find any past discussions about changing the default to \"fast\".\n>> Are there any reasons why that cannot be done in a future release?\n>\n> Or at least throw a hint the user's way that -m fast might be needed.\n\nI think there are several issues here:\n\n1. Does pg_ctl give a clear indication of the outcome of a failed\n\"smart\" mode shutdown?\n2. Is the current \"smart\" shutdown mode behavior useful?\n3. Should the default shutdown mode be changed to \"fast\"?\n\nI think felix mainly complained about (1), and that's what I was\ntalking about as well. The current message (I have only an 8.3 handy,\nbut I don't imagine this has changed much) is:\n\npg_ctl stop -t5\nwaiting for server to shut down........ failed\npg_ctl: server does not shut down\n\nThis leaves out crucial information (namely, \"but it will stop\naccepting new connections and shut down when all current connections\nare closed\"). It seems like something along those lines should be\nadded to the error message, or perhaps at least to pg_ctl\ndocumentation. Currently, the docs page (\nhttp://www.postgresql.org/docs/current/static/app-pg-ctl.html ) only\nhints at this, and pg_ctl --help does not really mention this at all.\n\nOf the two other issues, (3) seems reasonable (I have no strong\nfeelings there either way), and (2) is probably a moot point (the\nbehavior won't change in a backward-incompatible manner now, and if\nit's dethroned as default, that doesn't really matter).\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. 
Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Tue, 8 Feb 2011 09:58:27 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Marti Raudsepp <[email protected]> wrote:\n \n> I couldn't find any past discussions about changing the default to\n> \"fast\".\n \nIt's not entirely unrelated to the \"Linux LSB init script\" in August\nand September of 1009:\n \nhttp://archives.postgresql.org/pgsql-hackers/2009-08/msg01843.php\n \nhttp://archives.postgresql.org/pgsql-hackers/2009-09/msg01963.php\n \n-Kevin\n", "msg_date": "Tue, 08 Feb 2011 13:00:37 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Marti Raudsepp wrote:\n> I couldn't find any past discussions about changing the default to \"fast\".\n> Are there any reasons why that cannot be done in a future release?\n> \n\nWell, it won't actually help as much as you might think. It's possible \nfor clients to be in a state where fast shutdown doesn't work, either. \nYou either have to kill them manually or use an immediate shutdown.\n\nKevin and I both suggested a \"fast plus timeout then immediate\" behavior \nis what many users seem to want. My comments were at \nhttp://archives.postgresql.org/pgsql-hackers/2009-09/msg01145.php ; for \nan example of how fast shutdown can fail see \nhttp://archives.postgresql.org/pgsql-bugs/2009-03/msg00062.php\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 08 Feb 2011 15:09:40 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 22:09, Greg Smith <[email protected]> wrote:\n> Kevin and I both suggested a \"fast plus timeout then immediate\" behavior is\n> what many users seem to want.  My comments were at\n> http://archives.postgresql.org/pgsql-hackers/2009-09/msg01145.php ; for an\n> example of how fast shutdown can fail see\n> http://archives.postgresql.org/pgsql-bugs/2009-03/msg00062.php\n\nTrue, I've hit that a few times too.\n\nSeems that a better solution would be implementing a new -m option\nthat does this transparently?\n\nRegards,\nMarti\n", "msg_date": "Tue, 8 Feb 2011 23:09:46 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Marti Raudsepp <[email protected]> wrote:\n> Greg Smith <[email protected]> wrote:\n>> Kevin and I both suggested a \"fast plus timeout then immediate\"\n>> behavior is what many users seem to want.\n \n> Seems that a better solution would be implementing a new -m option\n> that does this transparently?\n \nMaybe. 
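For what it's worth, something close to that can already be scripted around pg_ctl today, since a timed-out stop returns a nonzero exit status. A rough sketch, with the data directory path as an assumption to adjust for the real cluster:

PGDATA=/var/lib/postgresql/8.3/main   # assumed path; point at your cluster

# try a fast shutdown for up to 30 seconds, then escalate to immediate
pg_ctl -D $PGDATA stop -m fast -t 30 || pg_ctl -D $PGDATA stop -m immediate
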
Another option might be to use -t or some new switch (or -t\nin combination with some new switch) as a time limit before\nescalating to the next shutdown mode.\n \n-Kevin\n", "msg_date": "Tue, 08 Feb 2011 15:20:23 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 1:09 PM, Greg Smith <[email protected]> wrote:\n> Marti Raudsepp wrote:\n>>\n>> I couldn't find any past discussions about changing the default to \"fast\".\n>> Are there any reasons why that cannot be done in a future release?\n>>\n> Kevin and I both suggested a \"fast plus timeout then immediate\" behavior is\n> what many users seem to want.  My comments were at\n> http://archives.postgresql.org/pgsql-hackers/2009-09/msg01145.php ; for an\n> example of how fast shutdown can fail see\n> http://archives.postgresql.org/pgsql-bugs/2009-03/msg00062.php\n\nAre there any settings in postgresql.conf that would make it unsafe to\nuse -m immediate?\n", "msg_date": "Tue, 8 Feb 2011 14:41:00 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Scott Marlowe <[email protected]> wrote:\n> Greg Smith <[email protected]> wrote:\n \n>> Kevin and I both suggested a \"fast plus timeout then immediate\"\n>> behavior is what many users seem to want.\n \n> Are there any settings in postgresql.conf that would make it\n> unsafe to use -m immediate?\n \nI don't think so. There could definitely be problems if someone\ncuts power before your shutdown completes, though. (I hear that\nthose firefighters like to cut power to a building before they grab\nthose big brass nozzles to spray a stream of water into a building. \nGo figure...)\n \n-Kevin\n", "msg_date": "Tue, 08 Feb 2011 15:52:31 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Scott Marlowe wrote:\n> Are there any settings in postgresql.conf that would make it unsafe to\n> use -m immediate?\n> \n\nTwo concerns:\n\n-Clients will be killed without any review, and data related to them lost\n\n-The server will have to go through recovery to start back up again, \nwhich could potentially take a long time. If you manage a successful \nshutdown that doesn't happen.\n\nShouldn't be unsafe, just has those issues.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Tue, 08 Feb 2011 17:08:25 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 3:08 PM, Greg Smith <[email protected]> wrote:\n> Scott Marlowe wrote:\n>>\n>> Are there any settings in postgresql.conf that would make it unsafe to\n>> use -m immediate?\n>>\n>\n> Two concerns:\n>\n> -Clients will be killed without any review, and data related to them lost\n>\n> -The server will have to go through recovery to start back up again, which\n> could potentially take a long time.  
If you manage a successful shutdown\n> that doesn't happen.\n>\n> Shouldn't be unsafe, just has those issues.\n\nGood, I was kinda worried about full_page_writes being off or fsync or\nsomething like that being a problem.\n", "msg_date": "Tue, 8 Feb 2011 15:14:46 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 8, 2011 at 3:23 PM, Shaun Thomas <[email protected]> wrote:\n\n>\n> With 300k rows, count(*) isn't a good test, really. That's just on the edge\n> of big-enough that it could be > 1-second to fetch from the disk controller,\n>\n\n\n1 second you say ? excellent, sign me up\n\n70 seconds is way out of bounds\n\nI don't want a more efficient query to test with, I want the shitty query\nthat performs badly that isolates an obvious problem.\n\nThe default settings are not going to cut it for a database of your size,\n> with the volume you say it's getting.\n>\n\nnot to mention the map reduce jobs I'm hammering it with all night :)\n\nbut I did pause those until this is solved\n\nBut you need to put in those kernel parameters I suggested. And I know this\n> sucks, but you also have to raise your shared_buffers and possibly your\n> work_mem and then restart the DB. But this time, pg_ctl to invoke a fast\n> stop, and then use the init script in /etc/init.d to restart it.\n\n\nI'm getting another slicehost slice. hopefully I can clone the whole thing\nover without doing a full install and go screw around with it there.\n\nits a fairly complicated install, even with buildout doing most of the\nconfiguration.\n\n\n=felix\n\nOn Tue, Feb 8, 2011 at 3:23 PM, Shaun Thomas <[email protected]> wrote:\n\nWith 300k rows, count(*) isn't a good test, really. That's just on the edge of big-enough that it could be > 1-second to fetch from the disk controller, 1 second you say ?  excellent, sign me up\n70 seconds is way out of bounds I don't want a more efficient query to test with, I want the shitty query that performs badly that isolates an obvious problem.\nThe default settings are not going to cut it for a database of your size, with the volume you say it's getting.\nnot to mention the map reduce jobs I'm hammering it with all night :)but I did pause those until this is solved\n\nBut you need to put in those kernel parameters I suggested. And I know this sucks, but you also have to raise your shared_buffers and possibly your work_mem and then restart the DB. But this time, pg_ctl to invoke a fast stop, and then use the init script in /etc/init.d to restart it.\nI'm getting another slicehost slice. hopefully I can clone the whole thing over without doing a full install and go screw around with it there.its a fairly complicated install, even with buildout doing most of the configuration.\n=felix", "msg_date": "Wed, 9 Feb 2011 23:54:29 +0100", "msg_from": "felix <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Tue, Feb 08, 2011 at 03:52:31PM -0600, Kevin Grittner wrote:\n> Scott Marlowe <[email protected]> wrote:\n> > Greg Smith <[email protected]> wrote:\n> \n> >> Kevin and I both suggested a \"fast plus timeout then immediate\"\n> >> behavior is what many users seem to want.\n> \n> > Are there any settings in postgresql.conf that would make it\n> > unsafe to use -m immediate?\n> \n> I don't think so. There could definitely be problems if someone\n> cuts power before your shutdown completes, though. 
(I hear that\n> those firefighters like to cut power to a building before they grab\n> those big brass nozzles to spray a stream of water into a building. \n> Go figure...)\n\nFollowing you off topic, I know of one admin type who has stated \"I don't\ncare what sort of fine the power company wants to give me, if my\nproperty's on fire, I'm going to pull the meter, in order to hand it to\nthe first responder, rather than have them sit there waiting for the\npower tech to arrive while my house burns.\"\n\nBack on topic, I like the the idea of a timed escalation. That means\nthere's two things to configure though, timeout(s?) and the set of\nstates to escalate through. I can see different use cases for different\nsets. Hmmm:\n\npg_ctl -m s:10:f:5:i restart\n\nfor smart, 5 sec. timeout, escalate to fast, 5 sec., then immediate?\nNot sure how rhat would interact w/ -t.\n\nPerhaps:\n\npg_ctl -t 10 -m s -t 5 -m f -m i restart\n\nSome video-processing tools do things like that: the order of options\nimpacts their interaction.\n\nRoss\n-- \nRoss Reedstrom, Ph.D. [email protected]\nSystems Engineer & Admin, Research Scientist phone: 713-348-6166\nConnexions http://cnx.org fax: 713-348-3665\nRice University MS-375, Houston, TX 77005\nGPG Key fingerprint = F023 82C8 9B0E 2CC6 0D8E F888 D3AE 810E 88F0 BEDE\n\n\n\n", "msg_date": "Wed, 16 Feb 2011 10:28:32 -0600", "msg_from": "\"Ross J. Reedstrom\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "Ross,\n\nWay off topic now, but from my time programming electrical meters I can tell you pulling the meter from its socket is potentially an extremely dangerous thing to do. If there is a load across the meter's poles the spark that results on disconnect could kill the puller instantly. (You don't want to know what happens if the person isn't killed.) \n\nI don't know what property your admin type is trying to protect, but I'm inclined to let it burn and live to work through the insurance collection process.\n\nOh, and +1 for timed escalation of a shutdown.\n\nBob Lunney\n\n--- On Wed, 2/16/11, Ross J. Reedstrom <[email protected]> wrote:\n\n> From: Ross J. Reedstrom <[email protected]>\n> Subject: Re: [PERFORM] Really really slow select count(*)\n\n<<big snip>>\n\n> \n> Following you off topic, I know of one admin type who has\n> stated \"I don't\n> care what sort of fine the power company wants to give me,\n> if my\n> property's on fire, I'm going to pull the meter, in order\n> to hand it to\n> the first responder, rather than have them sit there\n> waiting for the\n> power tech to arrive while my house burns.\"\n\n\n \n", "msg_date": "Wed, 16 Feb 2011 11:20:27 -0800 (PST)", "msg_from": "Bob Lunney <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" }, { "msg_contents": "On Fri, Feb 4, 2011 at 8:46 AM, felix <[email protected]> wrote:\n>\n> I am having huge performance problems with a table. Performance deteriorates\n> every day and I have to run REINDEX and ANALYZE on it every day.  auto\n> vacuum is on.  
yes, I am reading the other thread about count(*) :)\n> but obviously I'm doing something wrong here\n>\n> explain analyze select count(*) from fastadder_fastadderstatus;\n> Aggregate  (cost=62458.73..62458.74 rows=1 width=0) (actual\n> time=77130.000..77130.000 rows=1 loops=1)\n>    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61701.18\n> rows=303018 width=0) (actual time=50.000..76930.000 rows=302479 loops=1)\n>  Total runtime: 77250.000 ms\n> directly after REINDEX and ANALYZE:\n>  Aggregate  (cost=62348.70..62348.71 rows=1 width=0) (actual\n> time=15830.000..15830.000 rows=1 loops=1)\n>    ->  Seq Scan on fastadder_fastadderstatus  (cost=0.00..61613.16\n> rows=294216 width=0) (actual time=30.000..15570.000 rows=302479 loops=1)\n>  Total runtime: 15830.000 ms\n> still very bad for a 300k row table\n> a similar table:\n> explain analyze select count(*) from fastadder_fastadderstatuslog;\n>  Aggregate  (cost=8332.53..8332.54 rows=1 width=0) (actual\n> time=1270.000..1270.000 rows=1 loops=1)\n>    ->  Seq Scan on fastadder_fastadderstatuslog  (cost=0.00..7389.02\n> rows=377402 width=0) (actual time=0.000..910.000 rows=377033 loops=1)\n>  Total runtime: 1270.000 ms\n>\n> It gets updated quite a bit each day, and this is perhaps the problem.\n> To me it doesn't seem like that many updates\n> 100-500 rows inserted per day\n> no deletes\n> 10k-50k updates per day\n> mostly of this sort:   set priority=1 where id=12345\n> is it perhaps this that is causing the performance problem ?\n> I could rework the app to be more efficient and do updates using batches\n> where id IN (1,2,3,4...)\n> I assume that means a more efficient index update compared to individual\n> updates.\n> There is one routine that updates position_in_queue using a lot (too many)\n> update statements.\n> Is that likely to be the culprit ?\n> What else can I do to investigate ?\n\nI scanned the thread and I don't think anyone mentioned this: updates\nthat only hit unindexed columns are much cheaper long term in terms of\nbloat purposes than those that touch indexed columns in 8.3+ because\nof the 'hot' feature. Do you really need the priority index? If you\ndon't you are much better off without it if priority gets updated a\nlot. Of course, you might still need it -- it's going to depend on\nyour queries.\n\nOn my workstation, I can brute force a 90mb table in about 300ms.\nYour table can be much smaller than that if you keep the bloat down\nunless your text column is very large and not toasted (how large is it\non average?)\n\nmerlin\n", "msg_date": "Thu, 17 Feb 2011 09:22:17 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Really really slow select count(*)" } ]
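A quick way to check the two things raised in this thread (dead-row bloat from the update pattern, and whether the updates are HOT) is to query the statistics views. This is a minimal sketch, not taken from the thread itself; it assumes 8.3 or later and uses the table name given above:

-- How many dead rows are waiting for vacuum, and what share of updates were HOT
-- (i.e. did not have to touch any index)?
SELECT relname,
       n_live_tup,
       n_dead_tup,          -- dead row versions still to be reclaimed
       n_tup_upd,           -- total updates
       n_tup_hot_upd,       -- updates that avoided index maintenance
       last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'fastadder_fastadderstatus';

-- A cheap, approximate row count (the planner's estimate) as an alternative
-- to running count(*) over the whole table:
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'fastadder_fastadderstatus';

A low n_tup_hot_upd relative to n_tup_upd would support Merlin's point that updating an indexed column such as priority is what makes the frequent updates expensive.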
[ { "msg_contents": "Hi all,\n\nI have a execution planner related issue that I'd like to have some help\nin understanding a bit deeper -\n\nI have a table which basically contains fields for a value, a timestamp\nand\na record type which is an integer. I would like to do a query which\nretrieves\nthe newest record for each type, and the persistence framework that I'm\nusing\ndoes something which is structurally like\n\nSELECT * FROM table t1 WHERE 0 = (SELECT COUNT(*) FROM table t2 WHERE\n t2.type = t1.type AND t2.timestamp > t1.timestamp)\n\nOn all of the PostgreSQL versions I've tried (9.0.2 included) this\nexecutes\nin about 20-30 seconds, which isn't exactly stellar. I've tried the (I\nthink)\nequivalent\n\nSELECT * FROM table t1 WHERE NOT EXISTS (SELECT * FROM table t2 WHERE\n t2.type = t1.type AND t2.timestamp > t1.timestamp)\n\ninstead, and that executes in about 100 ms, so it's about 200 times\nfaster.\n\nThe two statements have completely different execution plans, so I\nunderstand\nwhy there is a difference in performance, but as I'm unable to modify the\nSQL that the persistence framework generates I'd like to know if there's\nanything that I can do in order to make the first query execute as fast as\nthe second one.\n\nI'm more specifically thinking whether I'm missing out on a crucial\nplanner\nconfiguration knob or something like that, which causes the planner to\ntreat the two cases differently.\n\nBest regards & thanks for an excellent database engine,\n Mikkel Lauritsen\n\n", "msg_date": "Fri, 04 Feb 2011 16:22:02 +0100", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Different execution plans for semantically equivalent queries" }, { "msg_contents": "Mikkel Lauritsen <[email protected]> writes:\n> I would like to do a query which retrieves the newest record for each\n> type, and the persistence framework that I'm using does something\n> which is structurally like\n\n> SELECT * FROM table t1 WHERE 0 = (SELECT COUNT(*) FROM table t2 WHERE\n> t2.type = t1.type AND t2.timestamp > t1.timestamp)\n\nI suspect that *any* database is going to have trouble optimizing that.\nYou'd be well advised to lobby the persistence framework's authors to\nproduce less brain-dead SQL. The NOT EXISTS formulation seems to\nexpress what's wanted much less indirectly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Feb 2011 16:29:01 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different execution plans for semantically equivalent queries " }, { "msg_contents": "Hi Tom et al,\n\nMany thanks for your prompt reply - you wrote:\n\n>> SELECT * FROM table t1 WHERE 0 = (SELECT COUNT(*) FROM table t2 WHERE\n>> t2.type = t1.type AND t2.timestamp > t1.timestamp)\n> \n> I suspect that *any* database is going to have trouble optimizing that.\n\nOkay, I expected that much.\n\nJust out of curiosity I've been looking a bit at the optimizer code\nin PostgreSQL, and it seems as if it would be at least theoretically\npossible to add support for things like transforming the query at\nhand into the NOT EXISTS form; a bit like how = NULL is converted\nto IS NULL.\n\nWould a change like that be accepted, or would you rather try to\nindirectly educate people into writing better SQL?\n\n> You'd be well advised to lobby the persistence framework's authors to\n> produce less brain-dead SQL. 
The NOT EXISTS formulation seems to\n> express what's wanted much less indirectly.\n\nWill do :-)\n\nFor now I guess I'll hack it by wrapping a proxy around the JDBC\ndriver and rewriting the SQL on the fly; I encounter other bugs in\nthe persistence layer that are probably best handled that way as\nwell.\n\nBest regards & thanks,\n Mikkel\n", "msg_date": "Sun, 06 Feb 2011 23:03:18 +0100", "msg_from": "Mikkel Lauritsen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Different execution plans for semantically equivalent queries" }, { "msg_contents": "On Mon, Feb 7, 2011 at 00:03, Mikkel Lauritsen <[email protected]> wrote:\n>>> SELECT * FROM table t1 WHERE 0 = (SELECT COUNT(*) FROM table t2 WHERE\n>>>     t2.type = t1.type AND t2.timestamp > t1.timestamp)\n>>\n>> I suspect that *any* database is going to have trouble optimizing that.\n\n> Just out of curiosity I've been looking a bit at the optimizer code\n> in PostgreSQL, and it seems as if it would be at least theoretically\n> possible to add support for things like transforming the query at\n> hand into the NOT EXISTS form; a bit like how = NULL is converted\n> to IS NULL.\n>\n> Would a change like that be accepted, or would you rather try to\n> indirectly educate people into writing better SQL?\n\nThere are some reasonable and generic optimizations that could be done\nhere. Being able to inline subqueries with aggregates into joins would\nbe a good thing e.g. transform your query into this:\n\nSELECT t1.* FROM table t1 JOIN table t2 ON (t2.type = t1.type)\nWHERE t2.timestamp > t1.timestamp\nGROUP BY t1.* HAVING COUNT(t2.*)=0\n\nHowever, this is probably still worse than a NOT EXISTS query.\n\nI am less excited about turning \"COUNT(x)=0\" query to NOT EXISTS\nbecause that's just a bad way to write a query.\n\nRegards,\nMarti\n", "msg_date": "Mon, 7 Feb 2011 12:47:26 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Different execution plans for semantically equivalent queries" } ]
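Since the persistence framework's SQL cannot be edited, the following is only for reference: PostgreSQL also has a DISTINCT ON construct that expresses "newest record per type" directly and is usually planned well. A sketch with hypothetical names, because the thread only calls the table "table":

-- Newest row per type; assumes columns named type and timestamp as in the thread.
SELECT DISTINCT ON (type) *
FROM records                          -- hypothetical table name
ORDER BY type, "timestamp" DESC;

-- An index like this can let the query run as a single ordered scan instead of
-- evaluating a correlated subquery for every row:
CREATE INDEX records_type_ts_idx ON records (type, "timestamp" DESC);

This is an alternative formulation, not the rewrite discussed above; the NOT EXISTS form from the thread remains a fine choice where it can be generated.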
[ { "msg_contents": "All,\n\nSeeing an issue which is new on me. On a mostly idle PostgreSQL server,\nthe stats collector is rewriting the entire stats file twice per second.\n\nVersion: 8.4.4\nServer: Ubuntu, kernel 2.6.32\nServer set up: ApacheMQ server. 25 databases, each of which hold 2-3\ntables.\nFilesystem: Ext4, defaults\nActive connections: around 15\nAutovacuum settings: defaults\n\nSymptoms: on a server which gets around 20 reads and 15 writes per\nminute, we are seeing average 500K/second writes by the stats collector\nto pg_stat.tmp. pg_stat.tmp is around 270K.\n\nAn strace of the stats collector process shows that the stats collector\nis, in fact, rewriting the entire stats file twice per second.\n\nAnyone seen anything like this before?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 04 Feb 2011 11:05:57 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "2011/2/4 Josh Berkus <[email protected]>:\n> All,\n>\n> Seeing an issue which is new on me.  On a mostly idle PostgreSQL server,\n> the stats collector is rewriting the entire stats file twice per second.\n>\n> Version: 8.4.4\n> Server: Ubuntu, kernel 2.6.32\n> Server set up: ApacheMQ server.  25 databases, each of which hold 2-3\n> tables.\n> Filesystem: Ext4, defaults\n> Active connections: around 15\n> Autovacuum settings: defaults\n>\n> Symptoms: on a server which gets around 20 reads and 15 writes per\n> minute, we are seeing average 500K/second writes by the stats collector\n> to pg_stat.tmp.  pg_stat.tmp is around 270K.\n>\n> An strace of the stats collector process shows that the stats collector\n> is, in fact, rewriting the entire stats file twice per second.\n>\n> Anyone seen anything like this before?\n>\n\nit is the expected behavior, IIRC\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Sat, 5 Feb 2011 23:15:15 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "\n>> Anyone seen anything like this before?\n>>\n> \n> it is the expected behavior, IIRC\n\nOK. It just seems kind of pathological for stats file writing to be 10X\nthe volume of data writing. I see why it's happening, but I think it's\nsomething we should fix.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 07 Feb 2011 14:58:40 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "On Mon, 2011-02-07 at 14:58 -0800, Josh Berkus wrote:\n> >> Anyone seen anything like this before?\n> >>\n> > \n> > it is the expected behavior, IIRC\n> \n> OK. It just seems kind of pathological for stats file writing to be 10X\n> the volume of data writing. I see why it's happening, but I think it's\n> something we should fix.\n\nI don't think it is expected. As I recall, it is something we fixed a\ncouple of major versions back (8.2?). It used to be that stats would\nwrite every 500ms. We changed that to when they are asked for (via a\nselect from the table or something). 
Specifically because it could cause\nthis type of problem.\n\nAm I thinking of something else?\n\nI remember going back and forth with tgl about this, tgl?\n\nJD\n\n> \n> -- \n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n> \n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Mon, 07 Feb 2011 15:12:36 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "2011/2/8 Joshua D. Drake <[email protected]>:\n> On Mon, 2011-02-07 at 14:58 -0800, Josh Berkus wrote:\n>> >> Anyone seen anything like this before?\n>> >>\n>> >\n>> > it is the expected behavior, IIRC\n>>\n>> OK.  It just seems kind of pathological for stats file writing to be 10X\n>> the volume of data writing.  I see why it's happening, but I think it's\n>> something we should fix.\n>\n> I don't think it is expected. As I recall, it is something we fixed a\n> couple of major versions back (8.2?). It used to be that stats would\n> write every 500ms. We changed that to when they are asked for (via a\n> select from the table or something). Specifically because it could cause\n> this type of problem.\n>\n> Am I thinking of something else?\n>\n> I remember going back and forth with tgl about this, tgl?\n\nOoops.\nIt looks like you are right, see ./src/backend/postmaster/pgstat.c\n\n3c2313f4 (Tom Lane 2008-11-03 01:17:08 +0000 2926)\n if (last_statwrite < last_statrequest)\n70d75697 (Magnus Hagander 2008-08-05 12:09:30 +0000 2927)\n pgstat_write_statsfile(false);\n\n\n>\n> JD\n>\n>>\n>> --\n>>                                   -- Josh Berkus\n>>                                      PostgreSQL Experts Inc.\n>>                                      http://www.pgexperts.com\n>>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Tue, 8 Feb 2011 00:24:37 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "\n> Ooops.\n> It looks like you are right, see ./src/backend/postmaster/pgstat.c\n> \n> 3c2313f4 (Tom Lane 2008-11-03 01:17:08 +0000 2926)\n> if (last_statwrite < last_statrequest)\n> 70d75697 (Magnus Hagander 2008-08-05 12:09:30 +0000 2927)\n> pgstat_write_statsfile(false);\n\nThis is a different issue. This is happening because we have a bunch of\ndatabases (25 to 35) and as a result autovacuum is requesting the stats\nfile rather frequently. 
And autovacuum for whatever reason won't accept\na stats file more than 10ms old, so it pretty much rewrites the stats\nfile on every request.\n\nAt least, that's my reading of it after poking around and talking to\nGierth.\n\nIt seems like that 10ms window for autovac is way too small.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 07 Feb 2011 17:39:45 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" }, { "msg_contents": "Hi Josh,\n\nit's \"known\" issue, see this thread:\n\nhttp://archives.postgresql.org/pgsql-hackers/2010-02/thrd6.php#01290\n\nHTH,\n\nKuba\n\nDne 8.2.2011 2:39, Josh Berkus napsal(a):\n>\n>> Ooops.\n>> It looks like you are right, see ./src/backend/postmaster/pgstat.c\n>>\n>> 3c2313f4 (Tom Lane 2008-11-03 01:17:08 +0000 2926)\n>> if (last_statwrite< last_statrequest)\n>> 70d75697 (Magnus Hagander 2008-08-05 12:09:30 +0000 2927)\n>> pgstat_write_statsfile(false);\n>\n> This is a different issue. This is happening because we have a bunch of\n> databases (25 to 35) and as a result autovacuum is requesting the stats\n> file rather frequently. And autovacuum for whatever reason won't accept\n> a stats file more than 10ms old, so it pretty much rewrites the stats\n> file on every request.\n>\n> At least, that's my reading of it after poking around and talking to\n> Gierth.\n>\n> It seems like that 10ms window for autovac is way too small.\n>\n\n", "msg_date": "Tue, 08 Feb 2011 09:41:12 +0100", "msg_from": "Jakub Ouhrabka <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Write-heavy pg_stats_collector on mostly idle server" } ]
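One common mitigation that did not come up in the thread: since 8.4 the temporary statistics file location is controlled by stats_temp_directory, so the constant rewrites can be pointed at a small RAM-backed filesystem instead of disk. A sketch with illustrative paths and sizes:

# Create a small tmpfs for the stats temp file (Linux example; size is arbitrary).
mkdir -p /var/run/pgsql_stats_tmp
chown postgres:postgres /var/run/pgsql_stats_tmp
mount -t tmpfs -o size=64M tmpfs /var/run/pgsql_stats_tmp

# postgresql.conf:
#   stats_temp_directory = '/var/run/pgsql_stats_tmp'

# The setting is picked up on a reload; no restart needed.
pg_ctl reload -D /path/to/data

This does not change how often autovacuum requests a fresh file, it only makes the writes cheap; the request frequency itself is the issue discussed above.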
[ { "msg_contents": "Greg (Smith),\n\nGiven your analysis of fsync'ing behavior on Ext3, would you say that it\nis better to set checkpoint_completion_target to 0.0 on Ext3?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Fri, 04 Feb 2011 13:22:29 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "checkpoint_completion_target and Ext3" }, { "msg_contents": "Josh Berkus wrote:\n> Given your analysis of fsync'ing behavior on Ext3, would you say that it\n> is better to set checkpoint_completion_target to 0.0 on Ext3?\n> \n\nSetting that to 0.0 gives the same basic behavior as in 8.2 and earlier \nversions. Those had even worst I/O spikes issues. Even on ext3, there \nis value to spreading the writes around over time, particularly if you \nhave a large setting for checkpoint_segments. Ideally the write phase \nwill be spread out over 2.5 minutes, if you've set the segments high \nenough that checkpoints are being driven by checkpoint_timeout. The \noriginal testing myself and Heikki did settled on the default of 0.5 for \ncheckpoint_completion_target on ext3, so that part hasn't really \nchanged. It's still better than just writing everything in one big \ndump, as you'd see with it set to 0.0.\n\nWhile Linux and ext3 aren't great about getting stuff to disk, doing \nsome writing in advance of sync will improve things at least a little. \nThe thing that I don't ever expect to work on ext3 is spreading the sync \nphase out over time.\n\nP.S. those of you who are into filesystem trivia but don't read \npgsql-hackers normally may enjoy \nhttp://blog.2ndquadrant.com/en/2011/01/tuning-linux-for-low-postgresq.html \nand \nhttp://archives.postgresql.org/message-id/[email protected] \nwhich has the research Josh is alluding to here. I also just wrote a \nrebuttal today to the \"PostgreSQL doesn't have hints\" meme at \nhttp://blog.2ndquadrant.com/en/2011/02/hinting-at-postgresql.html\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 06 Feb 2011 03:09:03 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: checkpoint_completion_target and Ext3" } ]
[ { "msg_contents": "I implemented table partitioning, and it caused havoc with a \"select\nmax(id)\" on the parent table - the query plan has changed from a\nlightningly fast backwards index scan to a deadly seq scan. Both\npartitions are set up with primary key index and draws new IDs from\nthe same sequence ... \"select max(id)\" on both partitions are fast.\nAre there any tricks I can do to speed up this query? I can't add the\nID to the table constraints, we may still get in \"old\" data causing\nrows with fresh IDs to get into the old table.\n\n(I decided to keep this short rather than include lots of details -\nbut at least worth mentioning that we're using PG9)\n", "msg_date": "Sat, 5 Feb 2011 02:24:44 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "table partitioning and select max(id)" }, { "msg_contents": "This is a known limitation of partitioning. One solution is to use a\nrecursive stored proc, which can use indexes. Such a solution is\ndiscussed here:\nhttp://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php\n\nRegards,\nKen\n\nhttp://archives.postgresql.org/pgsql-performance/2009-09/msg00036.php\n\nOn Fri, Feb 4, 2011 at 6:24 PM, Tobias Brox <[email protected]> wrote:\n> I implemented table partitioning, and it caused havoc with a \"select\n> max(id)\" on the parent table - the query plan has changed from a\n> lightningly fast backwards index scan to a deadly seq scan.  Both\n> partitions are set up with primary key index and draws new IDs from\n> the same sequence ... \"select max(id)\" on both partitions are fast.\n> Are there any tricks I can do to speed up this query?  I can't add the\n> ID to the table constraints, we may still get in \"old\" data causing\n> rows with fresh IDs to get into the old table.\n>\n> (I decided to keep this short rather than include lots of details -\n> but at least worth mentioning that we're using PG9)\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \n-Ken\n", "msg_date": "Fri, 4 Feb 2011 22:38:04 -0500", "msg_from": "Ken Cox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table partitioning and select max(id)" }, { "msg_contents": "Tobias Brox wrote:\n> I implemented table partitioning, and it caused havoc with a \"select\n> max(id)\" on the parent table - the query plan has changed from a\n> lightningly fast backwards index scan to a deadly seq scan. \n\nThis problem was fixed in the upcoming 9.1: \n\nhttp://archives.postgresql.org/pgsql-committers/2010-11/msg00028.php\nhttp://git.postgresql.org/gitweb?p=postgresql.git;a=commitdiff;h=034967bdcbb0c7be61d0500955226e1234ec5f04\n\nHere's the comment from that describing the main technique used to fix it:\n\n\"This module tries to replace MIN/MAX aggregate functions by subqueries \nof the form\n\n(SELECT col FROM tab WHERE ... ORDER BY col ASC/DESC LIMIT 1)\n\nGiven a suitable index on tab.col, this can be much faster than the \ngeneric scan-all-the-rows aggregation plan. We can handle multiple \nMIN/MAX aggregates by generating multiple subqueries, and their \norderings can be different. 
However, if the query contains any \nnon-optimizable aggregates, there's no point since we'll have to scan \nall the rows anyway.\"\n\nUnfortunately that change ends a series of 6 commits of optimizer \nrefactoring in this area, so it's not the case that you just apply this \none commit as a bug-fix to a 9.0 system. I have a project in process to \ndo the full backport needed I might be able to share with you if that \nworks out, and you're willing to run with a customer patched server \nprocess. Using one of the user-space ideas Ken suggested may very well \nbe easier for you. I'm stuck with an app I can't rewrite to do that.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sat, 05 Feb 2011 00:03:08 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table partitioning and select max(id)" }, { "msg_contents": "[Greg Smith]\n> Here's the comment from that describing the main technique used to fix it:\n>\n> \"This module tries to replace MIN/MAX aggregate functions by subqueries of\n> the form\n>\n> (SELECT col FROM tab WHERE ... ORDER BY col ASC/DESC LIMIT 1)\n\nHuh ... that sounds a bit like pg 8.0 to me ;-) I remember on 7.x one\nhad to write \"select id from table order by id desc limit 1\" to force\nthrough a quick index scan. This was fixed in 8.0 IIRC. I did test\n\"select id from table order by id desc limit 1\" on my parent table\nyesterday, it would still do the seq-scan. Even adding a\nwhere-restriction to make sure only one partition was queried I still\ngot the seq-scan.\n\n> Unfortunately that change ends a series of 6 commits of optimizer\n> refactoring in this area, so it's not the case that you just apply this one\n> commit as a bug-fix to a 9.0 system.  I have a project in process to do the\n> full backport needed I might be able to share with you if that works out,\n> and you're willing to run with a customer patched server process.\n\nIn this particular case, \"wait for 9.1\" seems to be the best option :-)\n", "msg_date": "Sat, 5 Feb 2011 11:16:09 +0300", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": true, "msg_subject": "Re: table partitioning and select max(id)" }, { "msg_contents": "Tobias Brox wrote:\n> I did test \"select id from table order by id desc limit 1\" on my parent table\n> yesterday, it would still do the seq-scan. Even adding a\n> where-restriction to make sure only one partition was queried I still\n> got the seq-scan.\n> \n\nRight; you actually have to direct the query toward the specific \npartition by name, nothing run against the parent table will work. The \nnew logic for 9.1 essentially splits the query into this alternate form, \nruns it against every partition individually, then combines the \nresults. If you can afford to wait for 9.1, that is certainly the easy \npath here. It just works out of the box in that version.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sat, 05 Feb 2011 03:49:24 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: table partitioning and select max(id)" } ]
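For anyone who has to stay on 8.4 or 9.0, here is a sketch of the user-space workaround referenced above: ask each partition for its own maximum (which can use a backwards index scan) and combine the results. The partition names are made up, since the thread does not give them:

-- Works only when the children are named explicitly; the parent table still
-- won't use the per-partition indexes for max() before 9.1.
SELECT max(id) FROM (
    SELECT (SELECT id FROM measurements_2010 ORDER BY id DESC LIMIT 1) AS id
    UNION ALL
    SELECT (SELECT id FROM measurements_2011 ORDER BY id DESC LIMIT 1)
) AS per_partition;

Each inner subquery is the "ORDER BY col DESC LIMIT 1" form from the commit message quoted above, just written out by hand per partition; a small plpgsql function looping over pg_inherits can generate the same thing when the partition list changes often.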
[ { "msg_contents": "Before I deploy some new servers, I figured I would do some\nbenchmarking.\n\nServer is a Dual E5620, 96GB RAM, 16 x 450GB SAS(15K) drives.\n\nController is an Areca 1680 with 2GB RAM and battery backup.\n\nSo far I have only run bonie++ since each cycle is quite long (writing\n192GB).\n\n \n\nMy data partition is 12 drives in RAID 1+0 (2.7TB) running UFS2.\nVfs.read_max has been set to 32, and no other tuning has been done.\n\nFiles system is not mounted with noatime at this point.\n\nBelow are the results:\n\n \n\n \n\ndb1# bonnie++ -d /usr/local/pgsql -c 4 -n 2:10000000:1000000:64 -u pgsql\n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\n\nConcurrency 4 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\n\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\ndb1.stackdump. 192G 860 99 213731 52 28518 45 1079 70 155479 34\n49.9 12\n\nLatency 10008us 2385ms 1190ms 457ms 2152ms\n231ms\n\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\n\ndb1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n\n2:10000000:1000000/64 49 33 128 96 277 97 57 39 130 90\n275 97\n\nLatency 660ms 13954us 13003us 904ms 334ms\n13365us\n\n \n\nNot having anything to compare it to, I do not know if these are decent\nnumbers or not - they are definitely slower than a similar setup which\nwas posted recently using XFS on Linux, but I have not found anything\nin FreeBSD using UFS2 to compare it to. What strikes me in particular\nis that the write performance is higher than the read performance - I\nwould have intuitively expected it to be the other way around.\n\n \n\nMy log partition is a RAID1, same drives. Performance follows:\n\n \n\nVersion 1.96 ------Sequential Output------ --Sequential Input-\n--Random-\n\nConcurrency 4 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block--\n--Seeks--\n\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP\n/sec %CP\n\ndb1.stackdump. 
192G 861 99 117023 28 20142 43 359 23 109719 24\n419.5 12\n\nLatency 9890us 13227ms 8944ms 3623ms 2236ms\n252ms\n\nVersion 1.96 ------Sequential Create------ --------Random\nCreate--------\n\ndb1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read---\n-Delete--\n\nfiles:max:min /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n/sec %CP\n\n2:10000000:1000000/64 24 16 121 93 276 97 22 15 134 93\n275 97\n\nLatency 4070ms 14029us 13079us 15016ms 573ms\n13369us\n\n \n\nAfter seeing these results, I decided to download the areca cli and\ncheck the actual setup\n\nInfo from the RAID controller follows:\n\n \n\nCLI> sys info\n\nThe System Information\n\n===========================================\n\nMain Processor : 1200MHz\n\nCPU ICache Size : 32KB\n\nCPU DCache Size : 32KB\n\nCPU SCache Size : 512KB\n\nSystem Memory : 2048MB/533MHz/ECC\n\nFirmware Version : V1.48 2010-10-21\n\nBOOT ROM Version : V1.48 2010-01-04\n\nSerial Number : Y051CABVAR600825\n\nController Name : ARC-1680\n\nCurrent IP Address : 192.168.1.100\n\n \n\nCLI> rsf info raid=2\n\nRaid Set Information \n\n===========================================\n\nRaid Set Name : Raid Set # 001 \n\nMember Disks : 12\n\nTotal Raw Capacity : 5400.0GB\n\nFree Raw Capacity : 0.0GB\n\nMin Member Disk Size : 450.0GB\n\nRaid Set State : Normal\n\n \n\nCLI> vsf info vol=2\n\nVolume Set Information \n\n===========================================\n\nVolume Set Name : ARC-1680-VOL#001\n\nRaid Set Name : Raid Set # 001 \n\nVolume Capacity : 2700.0GB\n\nSCSI Ch/Id/Lun : 00/00/01\n\nRaid Level : Raid1+0\n\nStripe Size : 8K\n\nMember Disks : 12\n\nCache Mode : Write Back\n\nTagged Queuing : Enabled\n\nVolume State : Normal\n\n===========================================\n\n \n\nHaving done this, I noticed that the stripe size is configured to 8K.\n\nI am thinking the problem may be due to the stripe size. I had asked\nthe vendor to set up the file system for these two arrays with 8K\nblocks, and I believe they may have misunderstood my request and set the\nstripe size to 8K. I assume increasing the stripe size will improve the\nperformance.\n\nWhat stripe sizes are you typically using? I was planning on setting it\nup with a 64K stripe size.\n\n \n\nTIA,\n\n \n\nBenjamin\n\n \n\n\nBefore I deploy some new servers, I figured I would do some benchmarking.Server is a Dual E5620, 96GB RAM, 16 x 450GB SAS(15K) drives.Controller is an Areca 1680 with 2GB RAM and battery backup.So far I have only run bonie++ since each cycle is quite long (writing 192GB). My data partition is 12 drives in RAID 1+0 (2.7TB) running  UFS2.  Vfs.read_max has been set to 32, and no other tuning has been done.Files system is not mounted with noatime at this point.Below are the results:  db1# bonnie++ -d /usr/local/pgsql -c 4 -n 2:10000000:1000000:64 -u pgsqlVersion  1.96       ------Sequential Output------ --Sequential Input- --Random-Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPdb1.stackdump. 
192G   860  99 213731  52 28518  45  1079  70 155479  34  49.9  12Latency             10008us    2385ms    1190ms     457ms    2152ms     231msVersion  1.96       ------Sequential Create------ --------Random Create--------db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP2:10000000:1000000/64    49  33   128  96   277  97    57  39   130  90   275  97Latency               660ms   13954us   13003us     904ms     334ms   13365us Not having anything to compare it to, I do not know if these are decent numbers or not – they are definitely slower than a similar setup which was posted recently using  XFS on Linux, but I have not found anything in FreeBSD using UFS2 to compare it  to.  What strikes me in particular is that the write performance is higher than the read performance – I would have intuitively expected it to be the other way around. My log partition is a RAID1, same drives.  Performance follows: Version  1.96       ------Sequential Output------ --Sequential Input- --Random-Concurrency   4     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPdb1.stackdump. 192G   861  99 117023  28 20142  43   359  23 109719  24 419.5  12Latency              9890us   13227ms    8944ms    3623ms    2236ms     252msVersion  1.96       ------Sequential Create------ --------Random Create--------db1.stackdump.local -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP2:10000000:1000000/64    24  16   121  93   276  97    22  15   134  93   275  97Latency              4070ms   14029us   13079us   15016ms     573ms   13369us After seeing these results, I decided to download the areca cli and check the actual setupInfo from the RAID controller follows: CLI> sys infoThe System Information===========================================Main Processor     : 1200MHzCPU ICache Size    : 32KBCPU DCache Size    : 32KBCPU SCache Size    : 512KBSystem Memory      : 2048MB/533MHz/ECCFirmware Version   : V1.48 2010-10-21BOOT ROM Version   : V1.48 2010-01-04Serial Number      : Y051CABVAR600825Controller Name    : ARC-1680Current IP Address : 192.168.1.100 CLI> rsf info raid=2Raid Set Information ===========================================Raid Set Name        : Raid Set # 001  Member Disks         : 12Total Raw Capacity   : 5400.0GBFree Raw Capacity    : 0.0GBMin Member Disk Size : 450.0GBRaid Set State       : Normal CLI> vsf info vol=2Volume Set Information ===========================================Volume Set Name : ARC-1680-VOL#001Raid Set Name   : Raid Set # 001  Volume Capacity : 2700.0GBSCSI Ch/Id/Lun  : 00/00/01Raid Level      : Raid1+0Stripe Size     : 8KMember Disks    : 12Cache Mode      : Write BackTagged Queuing  : EnabledVolume State    : Normal=========================================== Having done this, I noticed that the stripe size is configured to 8K.I am thinking the problem may be due to the stripe size.  I had asked the vendor to set up the file system for these two arrays with 8K blocks, and I believe they may have misunderstood my request and set the stripe size to 8K.  I assume increasing the stripe size will improve the performance.What stripe sizes are you typically using?  I was planning on setting it up with a 64K stripe size. 
TIA, Benjamin", "msg_date": "Sun, 6 Feb 2011 01:04:40 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Need some help analyzing some benchmarks" }, { "msg_contents": "Benjamin Krajmalnik wrote:\n>\n> My data partition is 12 drives in RAID 1+0 (2.7TB) running UFS2. \n> Vfs.read_max has been set to 32, and no other tuning has been done...\n>\n> Not having anything to compare it to, I do not know if these are \n> decent numbers or not -- they are definitely slower than a similar \n> setup which was posted recently using XFS on Linux, but I have not \n> found anything in FreeBSD using UFS2 to compare it to. What strikes \n> me in particular is that the write performance is higher than the read \n> performance -- I would have intuitively expected it to be the other \n> way around.\n>\n\nGenerally write speed higher than read means that volume read-ahead \nstill isn't high enough for the OS to keep the disk completely busy. \nTry increasing read_max further; I haven't done many such tests on \nFreeBSD, but as far as I know settings of 128 and 256 are generally \nwhere read performance peaks on that OS. You should see sequential read \nspeed go up as you increase that parameter, eventually levelling off. \nWhen you reach that point you've found the right setting.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nBenjamin Krajmalnik wrote:\n\n\n\n\n\nMy data partition is 12 drives in RAID 1+0\n(2.7TB) running  UFS2.  Vfs.read_max has been set to 32, and no other\ntuning has been done...\nNot having anything to compare it to, I do not\nknow if these are decent numbers or not – they are definitely slower\nthan a similar setup which was posted recently using  XFS on Linux, but\nI have not found anything in FreeBSD using UFS2 to compare it  to. \nWhat strikes me in particular is that the write performance is higher\nthan the read performance – I would have intuitively expected it to be\nthe other way around.\n\n\n\nGenerally write speed higher than read means that volume read-ahead\nstill isn't high enough for the OS to keep the disk completely busy. \nTry increasing read_max further; I haven't done many such tests on\nFreeBSD, but as far as I know settings of 128 and 256 are generally\nwhere read performance peaks on that OS.  You should see sequential\nread speed go up as you increase that parameter, eventually levelling\noff.  When you reach that point you've found the right setting.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Sun, 06 Feb 2011 20:36:55 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Need some help analyzing some benchmarks" } ]
[ { "msg_contents": "I am searching what would be the best hardware combination to a new server i \nhave to install, the new server will have a Postgresql 9.0 with a database of \nabout 10gb, the database part it is not the problem for me, in this size almost, \nthe part where i am a bit lost is that i have to share from the same server \nabout 20TB of data with samba or ISCSI (i have to test both ways to share when i \nhave the hardware) because this is to be the file-server of 8 Avid video \nworkstations.\n\nI am pretty sure that the Postgresql should be installed in a raid10 \nconfiguration wigh pg_xlog in a different physical volume, i know too that the \nmost important things to Postgresql are memory and disks, i will get a raid card \nwith BBU or flash cache to be safe with write caching.\n\nAbout the file-server part the files we are going to move are in the range of \n1gb~4gb in mostly a sequential access nature for what i have read of how avid \nworks, so i think the best would be a raid 6 here, but i am not sure what \nhardware to buy because i have not stored never this quantity of storage, i have \nread in this mailing list other times about external enclosure boxes to attach \nstorage and i have read raid cards recommendations before, lsi, 3ware or areca \nseems safe bets but i don't know how to put all the pieces together.\n\nI have searched in HP Servers but i can only find HP Smart Array controllers and \ni don't find a server that sill uses the best of the Smart Array P800 or Smart \nArray P812, so i don't know what server to buy (it is better to stick with a \nserver vendor or mount my own with hardware parts), what box enclosure to buy \nand how to connect the two. Could anyone please point me specific hardware parts \nthat he would recommend me?\n\nI am studying too the possibility of use an OCZ Vertex 2 Pro with Flashcache or \nBcache to use it like a second level filesystem cache, any comments on that please?\n\nThanks,\nMiguel Angel.\n", "msg_date": "Sun, 06 Feb 2011 19:16:23 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "general hardware advice" }, { "msg_contents": "On Sun, Feb 6, 2011 at 11:16 AM, Linos <[email protected]> wrote:\n> I am searching what would be the best hardware combination to a new server i\n> have to install, the new server will have a Postgresql 9.0 with a database\n> of about 10gb, the database part it is not the problem for me, in this size\n> almost, the part where i am a bit lost is that i have to share from the same\n> server about 20TB of data with samba or ISCSI (i have to test both ways to\n> share when i have the hardware) because this is to be the file-server of 8\n> Avid video workstations.\n>\n> I am pretty sure that the Postgresql should be installed in a raid10\n> configuration wigh pg_xlog in a different physical volume, i know too that\n> the most important things to Postgresql are memory and disks, i will get a\n> raid card with BBU or flash cache to be safe with write caching.\n\nI'd put all of pgsql on a different card than the file share.\n\n> About the file-server part the files we are going to move are in the range\n> of 1gb~4gb in mostly a sequential access nature for what i have read of how\n> avid works, so i think the best would be a raid 6 here, but i am not sure\n> what hardware to buy because i have not stored never this quantity of\n> storage, i have read in this mailing list other times about external\n> enclosure boxes to attach storage and i have read raid cards 
recommendations\n> before, lsi, 3ware or areca seems safe bets but i don't know how to put all\n> the pieces together.\n\nIf the data on this is considered disposable, then you could look at\nRAID-0 for the fastest performance. Even with SW RAID0 you'd get\nincredible throughput with a few disks.\n\n> I am studying too the possibility of use an OCZ Vertex 2 Pro with Flashcache\n> or Bcache to use it like a second level filesystem cache, any comments on\n> that please?\n\nMy coworkers RAVE over ZFS solaris with flash drives for cache and\nspinning drives for mass storage.\n", "msg_date": "Sun, 6 Feb 2011 12:05:37 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: general hardware advice" }, { "msg_contents": "On Sun, 06 Feb 2011 19:16:23 +0100, Linos <[email protected]> wrote:\n\n> I am searching what would be the best hardware combination to a new \n> server i have to install, the new server will have a Postgresql 9.0 with \n> a database of about 10gb, the database part it is not the problem for \n> me, in this size almost, the part where i am a bit lost is that i have \n> to share from the same server about 20TB of data with samba or ISCSI (i \n> have to test both ways to share when i have the hardware) because this \n> is to be the file-server of 8 Avid video workstations.\n\nWhat is the expected load on the postgresql instance ?\n\nAlso, for multiple high throughput concurrent streams (as in AVID), \nfilesystem choice is critical.\n", "msg_date": "Sun, 06 Feb 2011 20:24:49 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: general hardware advice" }, { "msg_contents": "El 06/02/11 20:05, Scott Marlowe escribi�:\n> On Sun, Feb 6, 2011 at 11:16 AM, Linos<[email protected]> wrote:\n>> I am searching what would be the best hardware combination to a new server i\n>> have to install, the new server will have a Postgresql 9.0 with a database\n>> of about 10gb, the database part it is not the problem for me, in this size\n>> almost, the part where i am a bit lost is that i have to share from the same\n>> server about 20TB of data with samba or ISCSI (i have to test both ways to\n>> share when i have the hardware) because this is to be the file-server of 8\n>> Avid video workstations.\n>>\n>> I am pretty sure that the Postgresql should be installed in a raid10\n>> configuration wigh pg_xlog in a different physical volume, i know too that\n>> the most important things to Postgresql are memory and disks, i will get a\n>> raid card with BBU or flash cache to be safe with write caching.\n>\n> I'd put all of pgsql on a different card than the file share.\n\nThis should not be a problem, i think you have reason here maybe i could use \ninternal disks and 1 raid card for Postgresql.\n\n>\n>> About the file-server part the files we are going to move are in the range\n>> of 1gb~4gb in mostly a sequential access nature for what i have read of how\n>> avid works, so i think the best would be a raid 6 here, but i am not sure\n>> what hardware to buy because i have not stored never this quantity of\n>> storage, i have read in this mailing list other times about external\n>> enclosure boxes to attach storage and i have read raid cards recommendations\n>> before, lsi, 3ware or areca seems safe bets but i don't know how to put all\n>> the pieces together.\n>\n> If the data on this is considered disposable, then you could look at\n> RAID-0 for the fastest performance. 
Even with SW RAID0 you'd get\n> incredible throughput with a few disks.\n>\n\nIt is not disposable, all the contrary, i have to take special care of the this \nfiles :)\n\n>> I am studying too the possibility of use an OCZ Vertex 2 Pro with Flashcache\n>> or Bcache to use it like a second level filesystem cache, any comments on\n>> that please?\n>\n> My coworkers RAVE over ZFS solaris with flash drives for cache and\n> spinning drives for mass storage.\n>\n\ni think neither flashcache nor bcache are at the level of zfs with l2arc in ssd \nbut should perform well anyway, what it is the preferred way of use good raid \ncards with zfs? jbod and configure in raid-z? i have not used zfs still.\n\nRegards,\nMiguel Angel.\n", "msg_date": "Sun, 06 Feb 2011 20:31:28 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: general hardware advice" }, { "msg_contents": "El 06/02/11 20:24, Pierre C escribió:\n> On Sun, 06 Feb 2011 19:16:23 +0100, Linos <[email protected]> wrote:\n>\n>> I am searching what would be the best hardware combination to a new server i\n>> have to install, the new server will have a Postgresql 9.0 with a database of\n>> about 10gb, the database part it is not the problem for me, in this size\n>> almost, the part where i am a bit lost is that i have to share from the same\n>> server about 20TB of data with samba or ISCSI (i have to test both ways to\n>> share when i have the hardware) because this is to be the file-server of 8\n>> Avid video workstations.\n>\n> What is the expected load on the postgresql instance ?\n>\n> Also, for multiple high throughput concurrent streams (as in AVID), filesystem\n> choice is critical.\n>\n\nThe load for the Postgresql instance it is to be low, maybe 5 or 6 simultaneous \nclients with a oltp application, but it is database intensive only when \nreporting not in the usual work.\n\nAbout the filesystem i was thinking to use xfs, i think would be the best for \nthis use-case in Linux, any advice on this please?\n\nRegards,\nMiguel Angel.\n", "msg_date": "Sun, 06 Feb 2011 20:34:46 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: general hardware advice" }, { "msg_contents": "\n--- On Sun, 2/6/11, Linos <[email protected]> wrote:\n \n> I am studying too the possibility of use an OCZ Vertex 2\n> Pro with Flashcache or Bcache to use it like a second level\n> filesystem cache, any comments on that please?\n> \n\nOCZ Vertex 2 Pro is a lot more expensive than other SSD of comparable performances because it comes with a supercapacitor that guarantees durability.\n\nIf you're just using the SSD as a cache, you don't need durability. 
You could save a lot of money by getting SSDs without supercapacitor, such as Corsair Force.\n\n\n \n", "msg_date": "Sun, 6 Feb 2011 12:38:18 -0800 (PST)", "msg_from": "Andy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: general hardware advice" }, { "msg_contents": "ah ok, i was not sure about this point, thanks.\n\nRegards,\nMiguel Angel\n\nEl 06/02/11 21:38, Andy escribi�:\n>\n> --- On Sun, 2/6/11, Linos<[email protected]> wrote:\n>\n>> I am studying too the possibility of use an OCZ Vertex 2\n>> Pro with Flashcache or Bcache to use it like a second level\n>> filesystem cache, any comments on that please?\n>>\n>\n> OCZ Vertex 2 Pro is a lot more expensive than other SSD of comparable performances because it comes with a supercapacitor that guarantees durability.\n>\n> If you're just using the SSD as a cache, you don't need durability. You could save a lot of money by getting SSDs without supercapacitor, such as Corsair Force.\n>\n>\n>\n>\n\n", "msg_date": "Mon, 07 Feb 2011 00:34:29 +0100", "msg_from": "Linos <[email protected]>", "msg_from_op": true, "msg_subject": "Re: general hardware advice" } ]
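One thing not raised in the thread: once the hardware is in place, a short pgbench run is a cheap way to confirm that the RAID10 volume and BBU write cache behave as expected before the real 10GB database goes on it. The scale factor and client counts below are illustrative only:

# Initialise a throwaway benchmark database (scale 100 is roughly 1.5GB of data)
# and run an OLTP-style load for five minutes.
createdb bench
pgbench -i -s 100 bench
pgbench -c 8 -j 2 -T 300 bench

With the write cache working, the reported tps should be far higher than what the disks could deliver if every commit had to reach the platters directly.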
[ { "msg_contents": "Hi,\n\nI am trying to understand how indexes works to get the most of them.\n\nFirst I would like to know if there is more advantage than overhead to\nsplit an index in several ones using conditions e.g. doing :\n\nCREATE INDEX directory_id_user_0_btree_idx ON mike.directory USING btree (id_user) WHERE id_user < 250000;\nCREATE INDEX directory_id_user_250000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 250000 AND id_user < 500000;\nCREATE INDEX directory_id_user_500000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 500000 AND id_user < 750000;\nCREATE INDEX directory_id_user_750000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 750000 AND id_user < 1000000;\n\ninstead of having only one index for all the id_user. the forecasts for\nthe table directory are +500 millions records and something like 1\nmillion distinct id_user.\n\nIf there is my idea was to do a repartition in the indexes using a\nconsistent hash algorithm in order to fill the indexes in parallel\ninstead of successively :\n\nCREATE OR REPLACE FUNCTION mike.__mod_cons_hash(\n IN in_dividend bigint,\n IN in_divisor integer,\n OUT remainder integer\n) AS $__$\n\nBEGIN\n SELECT in_dividend % in_divisor INTO remainder;\nEND;\n\n$__$ LANGUAGE plpgsql IMMUTABLE COST 10;\n\nCREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\nCREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\nCREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\nCREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n\nBut the thing is the indexes are not used :\n\nmike=# SELECT version();\n version \n-------------------------------------------------------------------------------------------------------------------\n PostgreSQL 8.4.7 on i686-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5, 32-bit\n(1 row)\n\nmike=# REINDEX INDEX directory_id_user_mod_cons_hash_0_btree_idx;\nLOG: duration: 14644.160 ms statement: REINDEX INDEX\ndirectory_id_user_mod_cons_hash_0_btree_idx;\nREINDEX\nmike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------\n Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual time=0.077..348.211 rows=10303 loops=1)\n Filter: (id_user = 4)\n Total runtime: 351.114 ms\n(3 rows)\n\nSo I also did this test :\n\nmike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 3 and id_user < 5;\nCREATE INDEX\nmike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using directory_id_user_4_btree_idx on directory (cost=0.00..10.58 rows=67 width=148) (actual time=0.169..7.753 rows=10303 loops=1)\n Index Cond: (id_user = 4)\n Total runtime: 10.973 ms\n(3 rows)\n\nmike=# DROP INDEX directory_id_user_4_btree_idx;\nDROP INDEX\nmike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user - 1 > 2 and id_user + 
1 < 6;\nCREATE INDEX\nmike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------\n Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual time=0.153..360.020 rows=10303 loops=1)\n Filter: (id_user = 4)\n Total runtime: 363.106 ms\n(3 rows)\n\nmike=# DROP INDEX directory_id_user_4_btree_idx;\nDROP INDEX\nmike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 2 + 1 and id_user < 6 - 1;\nCREATE INDEX\nmike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using directory_id_user_4_btree_idx on directory (cost=0.00..10.58 rows=67 width=148) (actual time=0.245..8.262 rows=10303 loops=1)\n Index Cond: (id_user = 4)\n Total runtime: 11.110 ms\n(3 rows)\n\nAs you see the index condition although, differently written, is the\nsame but the second index is not used apparently because the immutable\nfunction is applied on the column.\n\nSo do you know the reason why the planner is not able to use indexes\nwhich have immutable functions applied to the column in their\ncondition ?\n\nRegards.\n\n-- \nSylvain Rabot <[email protected]>", "msg_date": "Tue, 08 Feb 2011 01:14:58 +0100", "msg_from": "Sylvain Rabot <[email protected]>", "msg_from_op": true, "msg_subject": "Indexes with condition using immutable functions applied to column\n\tnot used" }, { "msg_contents": "On 2011-02-08 01:14, Sylvain Rabot wrote:\n> CREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\n> CREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\n> CREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\n> CREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n>\n\n> mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n\nShould be written as:\nselect * from directory where __mod_cons_hash(id_user,4) = 4%4;\n\nThen it should just work.\n\n-- \nJesper\n", "msg_date": "Tue, 08 Feb 2011 06:15:22 +0100", "msg_from": "Jesper Krogh <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes with condition using immutable functions applied\n\tto column not used" }, { "msg_contents": "On Tue, 2011-02-08 at 06:15 +0100, Jesper Krogh wrote:\n> On 2011-02-08 01:14, Sylvain Rabot wrote:\n> > CREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\n> > CREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\n> > CREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\n> > CREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n> >\n> \n> > mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n> \n> Should be written as:\n> select * from directory where __mod_cons_hash(id_user,4) = 4%4;\n> \n> 
Then it should just work.\n> \n> -- \n> Jesper\n> \n\nThe where clause you wrote selects all the directory records that have a\nid_user % 4 equivalent to 0 like 0, 4, 8, 16 ... etc. It does use the\nindexes but it is not was I want to select.\n\n-- \nSylvain Rabot <[email protected]>", "msg_date": "Tue, 08 Feb 2011 18:30:25 +0100", "msg_from": "Sylvain Rabot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes with condition using immutable functions\n\tapplied to column not used" }, { "msg_contents": "I also tried to do table partitioning using the same immutable function,\nit works well except for constraint exclusion.\n\nCREATE TABLE mike.directory_part_0 () INHERITS (mike.directory) WITH (fillfactor = 90);\nCREATE RULE directory_part_0_insert AS ON INSERT TO mike.directory WHERE (__mod_cons_hash(new.id_user::bigint, 2) = 0)\nDO INSTEAD INSERT INTO mike.directory_part_0 VALUES (new.*);\n\nCREATE TABLE mike.directory_part_1 () INHERITS (mike.directory) WITH (fillfactor = 90);\nCREATE RULE directory_part_1_insert AS ON INSERT TO mike.directory WHERE (__mod_cons_hash(new.id_user::bigint, 2) = 1)\nDO INSTEAD INSERT INTO mike.directory_part_1 VALUES (new.*);\n\nmike_part=# explain analyze select * from directory where id_user = 3;\n QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..310.21 rows=5226 width=141) (actual time=0.080..7.583 rows=2653 loops=1)\n -> Append (cost=0.00..310.21 rows=5226 width=141) (actual time=0.077..3.654 rows=2653 loops=1)\n -> Index Scan using directory_id_user_btree_idx on directory (cost=0.00..8.27 rows=1 width=141) (actual time=0.007..0.007 rows=0 loops=1)\n Index Cond: (id_user = 3)\n -> Index Scan using directory_part_0_id_user_btree_idx on directory_part_0 directory (cost=0.00..8.27 rows=1 width=150) (actual time=0.035..0.035 rows=0 loops=1)\n Index Cond: (id_user = 3)\n -> Index Scan using directory_part_1_id_user_btree_idx on directory_part_1 directory (cost=0.00..293.67 rows=5224 width=141) (actual time=0.035..2.037 rows=2653 loops=1)\n Index Cond: (id_user = 3)\n Total runtime: 8.807 ms\n(9 rows)\n\n\nOn Tue, 2011-02-08 at 01:14 +0100, Sylvain Rabot wrote:\n> Hi,\n> \n> I am trying to understand how indexes works to get the most of them.\n> \n> First I would like to know if there is more advantage than overhead to\n> split an index in several ones using conditions e.g. doing :\n> \n> CREATE INDEX directory_id_user_0_btree_idx ON mike.directory USING btree (id_user) WHERE id_user < 250000;\n> CREATE INDEX directory_id_user_250000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 250000 AND id_user < 500000;\n> CREATE INDEX directory_id_user_500000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 500000 AND id_user < 750000;\n> CREATE INDEX directory_id_user_750000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 750000 AND id_user < 1000000;\n> \n> instead of having only one index for all the id_user. 
the forecasts for\n> the table directory are +500 millions records and something like 1\n> million distinct id_user.\n> \n> If there is my idea was to do a repartition in the indexes using a\n> consistent hash algorithm in order to fill the indexes in parallel\n> instead of successively :\n> \n> CREATE OR REPLACE FUNCTION mike.__mod_cons_hash(\n> IN in_dividend bigint,\n> IN in_divisor integer,\n> OUT remainder integer\n> ) AS $__$\n> \n> BEGIN\n> SELECT in_dividend % in_divisor INTO remainder;\n> END;\n> \n> $__$ LANGUAGE plpgsql IMMUTABLE COST 10;\n> \n> CREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\n> CREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\n> CREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\n> CREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n> \n> But the thing is the indexes are not used :\n> \n> mike=# SELECT version();\n> version \n> -------------------------------------------------------------------------------------------------------------------\n> PostgreSQL 8.4.7 on i686-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5, 32-bit\n> (1 row)\n> \n> mike=# REINDEX INDEX directory_id_user_mod_cons_hash_0_btree_idx;\n> LOG: duration: 14644.160 ms statement: REINDEX INDEX\n> directory_id_user_mod_cons_hash_0_btree_idx;\n> REINDEX\n> mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------\n> Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual time=0.077..348.211 rows=10303 loops=1)\n> Filter: (id_user = 4)\n> Total runtime: 351.114 ms\n> (3 rows)\n> \n> So I also did this test :\n> \n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 3 and id_user < 5;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> QUERY PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using directory_id_user_4_btree_idx on directory (cost=0.00..10.58 rows=67 width=148) (actual time=0.169..7.753 rows=10303 loops=1)\n> Index Cond: (id_user = 4)\n> Total runtime: 10.973 ms\n> (3 rows)\n> \n> mike=# DROP INDEX directory_id_user_4_btree_idx;\n> DROP INDEX\n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user - 1 > 2 and id_user + 1 < 6;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> QUERY PLAN \n> ----------------------------------------------------------------------------------------------------------------\n> Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual time=0.153..360.020 rows=10303 loops=1)\n> Filter: (id_user = 4)\n> Total runtime: 363.106 ms\n> (3 rows)\n> \n> mike=# DROP INDEX directory_id_user_4_btree_idx;\n> DROP INDEX\n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 2 + 1 and id_user < 6 - 1;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> QUERY 
PLAN \n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using directory_id_user_4_btree_idx on directory (cost=0.00..10.58 rows=67 width=148) (actual time=0.245..8.262 rows=10303 loops=1)\n> Index Cond: (id_user = 4)\n> Total runtime: 11.110 ms\n> (3 rows)\n> \n> As you see the index condition although, differently written, is the\n> same but the second index is not used apparently because the immutable\n> function is applied on the column.\n> \n> So do you know the reason why the planner is not able to use indexes\n> which have immutable functions applied to the column in their\n> condition ?\n> \n> Regards.\n> \n\n-- \nSylvain Rabot <[email protected]>", "msg_date": "Tue, 08 Feb 2011 21:08:54 +0100", "msg_from": "Sylvain Rabot <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Indexes with condition using immutable functions applied to\n\tcolumn not used" }, { "msg_contents": "Should there be a Rule for Select to cause partitions to be excluded ?\n\n\nOn 8 February 2011 20:08, Sylvain Rabot <[email protected]> wrote:\n\n> I also tried to do table partitioning using the same immutable function,\n> it works well except for constraint exclusion.\n>\n> CREATE TABLE mike.directory_part_0 () INHERITS (mike.directory) WITH\n> (fillfactor = 90);\n> CREATE RULE directory_part_0_insert AS ON INSERT TO mike.directory WHERE\n> (__mod_cons_hash(new.id_user::bigint, 2) = 0)\n> DO INSTEAD INSERT INTO mike.directory_part_0 VALUES (new.*);\n>\n> CREATE TABLE mike.directory_part_1 () INHERITS (mike.directory) WITH\n> (fillfactor = 90);\n> CREATE RULE directory_part_1_insert AS ON INSERT TO mike.directory WHERE\n> (__mod_cons_hash(new.id_user::bigint, 2) = 1)\n> DO INSTEAD INSERT INTO mike.directory_part_1 VALUES (new.*);\n>\n> mike_part=# explain analyze select * from directory where id_user = 3;\n>\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..310.21 rows=5226 width=141) (actual time=0.080..7.583\n> rows=2653 loops=1)\n> -> Append (cost=0.00..310.21 rows=5226 width=141) (actual\n> time=0.077..3.654 rows=2653 loops=1)\n> -> Index Scan using directory_id_user_btree_idx on directory\n> (cost=0.00..8.27 rows=1 width=141) (actual time=0.007..0.007 rows=0\n> loops=1)\n> Index Cond: (id_user = 3)\n> -> Index Scan using directory_part_0_id_user_btree_idx on\n> directory_part_0 directory (cost=0.00..8.27 rows=1 width=150) (actual\n> time=0.035..0.035 rows=0 loops=1)\n> Index Cond: (id_user = 3)\n> -> Index Scan using directory_part_1_id_user_btree_idx on\n> directory_part_1 directory (cost=0.00..293.67 rows=5224 width=141) (actual\n> time=0.035..2.037 rows=2653 loops=1)\n> Index Cond: (id_user = 3)\n> Total runtime: 8.807 ms\n> (9 rows)\n>\n>\n> On Tue, 2011-02-08 at 01:14 +0100, Sylvain Rabot wrote:\n> > Hi,\n> >\n> > I am trying to understand how indexes works to get the most of them.\n> >\n> > First I would like to know if there is more advantage than overhead to\n> > split an index in several ones using conditions e.g. 
doing :\n> >\n> > CREATE INDEX directory_id_user_0_btree_idx ON mike.directory USING btree\n> (id_user) WHERE id_user < 250000;\n> > CREATE INDEX directory_id_user_250000_btree_idx ON mike.directory USING\n> btree (id_user) WHERE id_user >= 250000 AND id_user < 500000;\n> > CREATE INDEX directory_id_user_500000_btree_idx ON mike.directory USING\n> btree (id_user) WHERE id_user >= 500000 AND id_user < 750000;\n> > CREATE INDEX directory_id_user_750000_btree_idx ON mike.directory USING\n> btree (id_user) WHERE id_user >= 750000 AND id_user < 1000000;\n> >\n> > instead of having only one index for all the id_user. the forecasts for\n> > the table directory are +500 millions records and something like 1\n> > million distinct id_user.\n> >\n> > If there is my idea was to do a repartition in the indexes using a\n> > consistent hash algorithm in order to fill the indexes in parallel\n> > instead of successively :\n> >\n> > CREATE OR REPLACE FUNCTION mike.__mod_cons_hash(\n> > IN in_dividend bigint,\n> > IN in_divisor integer,\n> > OUT remainder integer\n> > ) AS $__$\n> >\n> > BEGIN\n> > SELECT in_dividend % in_divisor INTO remainder;\n> > END;\n> >\n> > $__$ LANGUAGE plpgsql IMMUTABLE COST 10;\n> >\n> > CREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON\n> mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\n> > CREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON\n> mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\n> > CREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON\n> mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\n> > CREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON\n> mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n> >\n> > But the thing is the indexes are not used :\n> >\n> > mike=# SELECT version();\n> > version\n> >\n> -------------------------------------------------------------------------------------------------------------------\n> > PostgreSQL 8.4.7 on i686-pc-linux-gnu, compiled by GCC gcc-4.4.real\n> (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5, 32-bit\n> > (1 row)\n> >\n> > mike=# REINDEX INDEX directory_id_user_mod_cons_hash_0_btree_idx;\n> > LOG: duration: 14644.160 ms statement: REINDEX INDEX\n> > directory_id_user_mod_cons_hash_0_btree_idx;\n> > REINDEX\n> > mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n> > QUERY PLAN\n> >\n> ----------------------------------------------------------------------------------------------------------------\n> > Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual\n> time=0.077..348.211 rows=10303 loops=1)\n> > Filter: (id_user = 4)\n> > Total runtime: 351.114 ms\n> > (3 rows)\n> >\n> > So I also did this test :\n> >\n> > mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING\n> btree (id_user) WHERE id_user > 3 and id_user < 5;\n> > CREATE INDEX\n> > mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> > QUERY\n> PLAN\n> >\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using directory_id_user_4_btree_idx on directory\n> (cost=0.00..10.58 rows=67 width=148) (actual time=0.169..7.753 rows=10303\n> loops=1)\n> > Index Cond: (id_user = 4)\n> > Total runtime: 10.973 ms\n> > (3 rows)\n> >\n> > mike=# DROP INDEX directory_id_user_4_btree_idx;\n> > DROP INDEX\n> > mike=# CREATE INDEX directory_id_user_4_btree_idx ON 
mike.directory USING\n> btree (id_user) WHERE id_user - 1 > 2 and id_user + 1 < 6;\n> > CREATE INDEX\n> > mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> > QUERY PLAN\n> >\n> ----------------------------------------------------------------------------------------------------------------\n> > Seq Scan on directory (cost=0.00..38140.66 rows=67 width=148) (actual\n> time=0.153..360.020 rows=10303 loops=1)\n> > Filter: (id_user = 4)\n> > Total runtime: 363.106 ms\n> > (3 rows)\n> >\n> > mike=# DROP INDEX directory_id_user_4_btree_idx;\n> > DROP INDEX\n> > mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING\n> btree (id_user) WHERE id_user > 2 + 1 and id_user < 6 - 1;\n> > CREATE INDEX\n> > mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n> > QUERY\n> PLAN\n> >\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using directory_id_user_4_btree_idx on directory\n> (cost=0.00..10.58 rows=67 width=148) (actual time=0.245..8.262 rows=10303\n> loops=1)\n> > Index Cond: (id_user = 4)\n> > Total runtime: 11.110 ms\n> > (3 rows)\n> >\n> > As you see the index condition although, differently written, is the\n> > same but the second index is not used apparently because the immutable\n> > function is applied on the column.\n> >\n> > So do you know the reason why the planner is not able to use indexes\n> > which have immutable functions applied to the column in their\n> > condition ?\n> >\n> > Regards.\n> >\n>\n> --\n> Sylvain Rabot <[email protected]>\n>\n\n\n\n-- \n\n\nNick Lello | Web Architect\no +44 (0) 8433309374 | m +44 (0) 7917 138319\nEmail: nick.lello at rentrak.com\nRENTRAK | www.rentrak.com | NASDAQ: RENT\n\nShould there be a Rule for Select to cause partitions to be excluded ?On 8 February 2011 20:08, Sylvain Rabot <[email protected]> wrote:\nI also tried to do table partitioning using the same immutable function,\nit works well except for constraint exclusion.\n\nCREATE TABLE mike.directory_part_0 () INHERITS (mike.directory) WITH (fillfactor = 90);\nCREATE RULE directory_part_0_insert AS ON INSERT TO mike.directory WHERE (__mod_cons_hash(new.id_user::bigint, 2) = 0)\nDO INSTEAD INSERT INTO mike.directory_part_0 VALUES (new.*);\n\nCREATE TABLE mike.directory_part_1 () INHERITS (mike.directory) WITH (fillfactor = 90);\nCREATE RULE directory_part_1_insert AS ON INSERT TO mike.directory WHERE (__mod_cons_hash(new.id_user::bigint, 2) = 1)\nDO INSTEAD INSERT INTO mike.directory_part_1 VALUES (new.*);\n\nmike_part=# explain analyze select * from directory where id_user = 3;\n                                                                                     QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.00..310.21 rows=5226 width=141) (actual time=0.080..7.583 rows=2653 loops=1)\n   ->  Append  (cost=0.00..310.21 rows=5226 width=141) (actual time=0.077..3.654 rows=2653 loops=1)\n         ->  Index Scan using directory_id_user_btree_idx on directory  (cost=0.00..8.27 rows=1 width=141) (actual time=0.007..0.007 rows=0 loops=1)\n               Index Cond: (id_user = 3)\n         ->  Index Scan using directory_part_0_id_user_btree_idx on directory_part_0 directory  (cost=0.00..8.27 rows=1 width=150) (actual time=0.035..0.035 rows=0 loops=1)\n               Index Cond: 
(id_user = 3)\n         ->  Index Scan using directory_part_1_id_user_btree_idx on directory_part_1 directory  (cost=0.00..293.67 rows=5224 width=141) (actual time=0.035..2.037 rows=2653 loops=1)\n               Index Cond: (id_user = 3)\n Total runtime: 8.807 ms\n(9 rows)\n\n\nOn Tue, 2011-02-08 at 01:14 +0100, Sylvain Rabot wrote:\n> Hi,\n>\n> I am trying to understand how indexes works to get the most of them.\n>\n> First I would like to know if there is more advantage than overhead to\n> split an index in several ones using conditions e.g. doing :\n>\n> CREATE INDEX directory_id_user_0_btree_idx ON mike.directory USING btree (id_user) WHERE id_user < 250000;\n> CREATE INDEX directory_id_user_250000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 250000 AND id_user < 500000;\n> CREATE INDEX directory_id_user_500000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 500000 AND id_user < 750000;\n> CREATE INDEX directory_id_user_750000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 750000 AND id_user < 1000000;\n>\n> instead of having only one index for all the id_user. the forecasts for\n> the table directory are +500 millions records and something like 1\n> million distinct id_user.\n>\n> If there is my idea was to do a repartition in the indexes using a\n> consistent hash algorithm in order to fill the indexes in parallel\n> instead of successively :\n>\n> CREATE OR REPLACE FUNCTION mike.__mod_cons_hash(\n>     IN  in_dividend     bigint,\n>     IN  in_divisor      integer,\n>     OUT remainder       integer\n> ) AS $__$\n>\n> BEGIN\n>     SELECT in_dividend % in_divisor INTO remainder;\n> END;\n>\n> $__$ LANGUAGE plpgsql IMMUTABLE COST 10;\n>\n> CREATE INDEX directory_id_user_mod_cons_hash_0_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 0;\n> CREATE INDEX directory_id_user_mod_cons_hash_1_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 1;\n> CREATE INDEX directory_id_user_mod_cons_hash_2_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 2;\n> CREATE INDEX directory_id_user_mod_cons_hash_3_btree_idx ON mike.directory USING btree (id_user) WHERE __mod_cons_hash(id_user, 4) = 3;\n>\n> But the thing is the indexes are not used :\n>\n> mike=# SELECT version();\n>                                                       version\n> -------------------------------------------------------------------------------------------------------------------\n>  PostgreSQL 8.4.7 on i686-pc-linux-gnu, compiled by GCC gcc-4.4.real (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5, 32-bit\n> (1 row)\n>\n> mike=# REINDEX INDEX directory_id_user_mod_cons_hash_0_btree_idx;\n> LOG:  duration: 14644.160 ms  statement: REINDEX INDEX\n> directory_id_user_mod_cons_hash_0_btree_idx;\n> REINDEX\n> mike=# EXPLAIN ANALYZE SELECT * FROM directory WHERE id_user = 4;\n>                                                    QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n>  Seq Scan on directory  (cost=0.00..38140.66 rows=67 width=148) (actual time=0.077..348.211 rows=10303 loops=1)\n>    Filter: (id_user = 4)\n>  Total runtime: 351.114 ms\n> (3 rows)\n>\n> So I also did this test :\n>\n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 3 and id_user < 5;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 
4;\n>                                                                    QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n>  Index Scan using directory_id_user_4_btree_idx on directory  (cost=0.00..10.58 rows=67 width=148) (actual time=0.169..7.753 rows=10303 loops=1)\n>    Index Cond: (id_user = 4)\n>  Total runtime: 10.973 ms\n> (3 rows)\n>\n> mike=# DROP INDEX directory_id_user_4_btree_idx;\n> DROP INDEX\n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user - 1 > 2 and id_user + 1 < 6;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n>                                                    QUERY PLAN\n> ----------------------------------------------------------------------------------------------------------------\n>  Seq Scan on directory  (cost=0.00..38140.66 rows=67 width=148) (actual time=0.153..360.020 rows=10303 loops=1)\n>    Filter: (id_user = 4)\n>  Total runtime: 363.106 ms\n> (3 rows)\n>\n> mike=# DROP INDEX directory_id_user_4_btree_idx;\n> DROP INDEX\n> mike=# CREATE INDEX directory_id_user_4_btree_idx ON mike.directory USING btree (id_user) WHERE id_user > 2 + 1 and id_user < 6 - 1;\n> CREATE INDEX\n> mike=# EXPLAIN ANALYZE select * from directory where id_user = 4;\n>                                                                    QUERY PLAN\n> -------------------------------------------------------------------------------------------------------------------------------------------------\n>  Index Scan using directory_id_user_4_btree_idx on directory  (cost=0.00..10.58 rows=67 width=148) (actual time=0.245..8.262 rows=10303 loops=1)\n>    Index Cond: (id_user = 4)\n>  Total runtime: 11.110 ms\n> (3 rows)\n>\n> As you see the index condition although, differently written, is the\n> same but the second index is not used apparently because the immutable\n> function is applied on the column.\n>\n> So do you know the reason why the planner is not able to use indexes\n> which have immutable functions applied to the column in their\n> condition ?\n>\n> Regards.\n>\n\n--\nSylvain Rabot <[email protected]>\n--   Nick Lello | Web Architecto +44 (0) 8433309374 | m +44 (0) 7917 138319Email: nick.lello at rentrak.com\n\nRENTRAK | www.rentrak.com | NASDAQ: RENT", "msg_date": "Wed, 9 Feb 2011 15:34:01 +0000", "msg_from": "Nick Lello <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: Indexes with condition using immutable functions\n\tapplied to column not used" }, { "msg_contents": "On Mon, Feb 7, 2011 at 7:14 PM, Sylvain Rabot <[email protected]> wrote:\n> First I would like to know if there is more advantage than overhead to\n> split an index in several ones using conditions\n\nI don't see why that would be any better than just defining one big index.\n\n> e.g. doing :\n>\n> CREATE INDEX directory_id_user_0_btree_idx ON mike.directory USING btree (id_user) WHERE id_user < 250000;\n> CREATE INDEX directory_id_user_250000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 250000 AND id_user < 500000;\n> CREATE INDEX directory_id_user_500000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 500000 AND id_user < 750000;\n> CREATE INDEX directory_id_user_750000_btree_idx ON mike.directory USING btree (id_user) WHERE id_user >= 750000 AND id_user < 1000000;\n>\n> instead of having only one index for all the id_user. 
the forecasts for\n> the table directory are +500 millions records and something like 1\n> million distinct id_user.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 27 Feb 2011 13:21:02 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Indexes with condition using immutable functions\n\tapplied to column not used" } ]
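A workable variant of the partial-index scheme in this thread is to keep the plain equality on id_user and add a second clause spelled with exactly the same expression that appears in the index predicate. Because __mod_cons_hash() is declared IMMUTABLE, a call with constant arguments should be folded to a constant at plan time, which lets the planner prove the predicate of the matching partial index while still using id_user = 4 as the index condition. This is essentially Jesper's __mod_cons_hash(id_user, 4) = 4 % 4 suggestion combined with the original id_user = 4 condition, so it returns only the rows Sylvain wants. A sketch, assuming the mike.__mod_cons_hash() function and the four partial indexes defined earlier (the extra clause is redundant for correctness and exists only so the index predicate can be matched):

SELECT *
  FROM mike.directory
 WHERE id_user = 4
   AND __mod_cons_hash(id_user, 4) = __mod_cons_hash(4, 4);  -- right-hand side should fold to 0 at plan time

Note that this only helps when the value is known at plan time; a generic prepared-statement parameter on 8.4 is not folded, so the partial indexes would again be skipped.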
[ { "msg_contents": "Hi.\nI do small test of plsql and perl.Result is that perl may be\n2xfaster in simple loops.\n\n\nCREATE OR REPLACE FUNCTION _.test1() RETURNS void AS\n$BODY$\ndeclare i integer; j bigint := 0;\nbegin\nfor i in 1..1000000 loop j:=j+i; end loop;\nend;\n$BODY$ LANGUAGE plpgsql VOLATILE COST 100;\n\n\"Result (cost=0.00..0.26 rows=1 width=0) (actual\ntime=1382.851..1382.853 rows=1 loops=1)\"\n\"Total runtime: 1383.167 ms\"\n\n\nCREATE OR REPLACE FUNCTION _.test2() RETURNS void AS\n$BODY$\n$j=0;\nfor($i=0;$i<1000000;$i++) {\n $j = $j + $i;\n}\n$BODY$ LANGUAGE plperlu VOLATILE COST 100;\n\n\"Result (cost=0.00..0.26 rows=1 width=0) (actual\ntime=584.272..584.275 rows=1 loops=1)\"\n\"Total runtime: 584.355 ms\"\n\n\n------------\npasman\n", "msg_date": "Tue, 8 Feb 2011 11:35:29 +0100", "msg_from": "=?ISO-8859-2?Q?pasman_pasma=F1ski?= <[email protected]>", "msg_from_op": true, "msg_subject": "compare languages" }, { "msg_contents": "Hello\n\nit isn't surprise. PL/pgSQL hasn't own arithmetic unit. All\nexpressions are transformed to simple SELECTs.\n\nProbably you can find a tasks, where Perl should be 10, 100, 1000x\nfaster than PL/pgSQL - array sort, array creation, ..\n\nOn second hand, PL/pgSQL is very fast with embeded SQL.\n\nSo if you need to calculate a numeric expensive task, then you need to\nuse Perl, maybe Python or C. If you need to join a embedded SQL, then\nPL/pgSQL is good tool.\n\nRegards\n\nPavel Stehule\n\np.s. Once I had to solve very slow statistical analysis. 99% of time\nneeded a bublesort implemented in PL/pgSQL. When I replaced it by\nbuildin quicksort in SQL language, the problem was solved.\n\n\n\n2011/2/8 pasman pasmański <[email protected]>:\n> Hi.\n> I do small test of plsql and perl.Result is that perl may be\n> 2xfaster in simple loops.\n>\n>\n> CREATE OR REPLACE FUNCTION _.test1()  RETURNS void AS\n> $BODY$\n> declare  i integer;  j bigint := 0;\n> begin\n> for i in 1..1000000 loop  j:=j+i; end loop;\n> end;\n> $BODY$ LANGUAGE plpgsql VOLATILE  COST 100;\n>\n> \"Result  (cost=0.00..0.26 rows=1 width=0) (actual\n> time=1382.851..1382.853 rows=1 loops=1)\"\n> \"Total runtime: 1383.167 ms\"\n>\n>\n> CREATE OR REPLACE FUNCTION _.test2()  RETURNS void AS\n> $BODY$\n> $j=0;\n> for($i=0;$i<1000000;$i++) {\n>    $j = $j + $i;\n> }\n> $BODY$  LANGUAGE plperlu VOLATILE COST 100;\n>\n> \"Result  (cost=0.00..0.26 rows=1 width=0) (actual\n> time=584.272..584.275 rows=1 loops=1)\"\n> \"Total runtime: 584.355 ms\"\n>\n>\n> ------------\n> pasman\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Tue, 8 Feb 2011 11:45:52 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: compare languages" } ]
[ { "msg_contents": "This query:\n\nselect p.id,p.producer_id,visa.variation_item_id, vi.qtyavail\n from variation_item_sellingsite_asin visa\n inner join product p on p.id = visa.product_id\n inner join variation_item vi on vi.id = \nvisa.variation_item_id\n where visa.id =4\n\nruns in 43 msec. The \"visa.id\" column has int4 datatype. The query plan \nuses an index condition:\n\n\"Nested Loop (cost=0.00..26.19 rows=1 width=28)\"\n\" -> Nested Loop (cost=0.00..17.75 rows=1 width=24)\"\n\" -> Index Scan using variation_item_sellingsite_asin_pkey on \nvariation_item_sellingsite_asin visa (cost=0.00..8.58 rows=1 width=16)\"\n\" Index Cond: (id = 4)\"\n\" -> Index Scan using pk_product_id on product p \n(cost=0.00..9.16 rows=1 width=16)\"\n\" Index Cond: (p.id = visa.product_id)\"\n\" -> Index Scan using pk_variation_item_id on variation_item vi \n(cost=0.00..8.43 rows=1 width=12)\"\n\" Index Cond: (vi.id = visa.variation_item_id)\"\n\n\nThis query:\n\nselect p.id,p.producer_id,visa.variation_item_id, vi.qtyavail\n from variation_item_sellingsite_asin visa\n inner join product p on p.id = visa.product_id\n inner join variation_item vi on vi.id = \nvisa.variation_item_id\n where visa.id =4.0\n\nRuns for 1144 msec! Query plan uses seq scan + filter:\n\n\"Nested Loop (cost=33957.27..226162.68 rows=14374 width=28)\"\n\" -> Hash Join (cost=33957.27..106190.76 rows=14374 width=20)\"\n\" Hash Cond: (visa.variation_item_id = vi.id)\"\n\" -> Seq Scan on variation_item_sellingsite_asin visa \n(cost=0.00..71928.04 rows=14374 width=16)\"\n\" Filter: ((id)::numeric = 4.0)\"\n\" -> Hash (cost=22026.01..22026.01 rows=954501 width=12)\"\n\" -> Seq Scan on variation_item vi (cost=0.00..22026.01 \nrows=954501 width=12)\"\n\" -> Index Scan using pk_product_id on product p (cost=0.00..8.33 \nrows=1 width=16)\"\n\" Index Cond: (p.id = visa.product_id)\"\n\n\nWhich is silly. I think that PostgreSQL converts the int side to a \nfloat, and then compares them.\n\nIt would be better to do this, for each item in the loop:\n\n * evaluate the right side (which is float)\n * tell if it is an integer or not\n * if not an integer, then discard the row immediately\n * otherwise use its integer value for the index scan\n\nThe result is identical, but it makes possible to use the index scan. Of \ncourse, I know that the query itself is wrong, because I sould not use a \nfloat where an int is expected. But this CAN be optimized, so I think it \nshould be! My idea for the query optimizer is not to use the \"wider\" \ndata type, but use the data type that has an index on it instead.\n\n(I spent an hour figuring out what is wrong with my program. In some \ncases it was slow, in other cases it was really fast, and I never got an \nerror message.)\n\nWhat do you think?\n\n Laszlo\n\n\n\n\n\n\n\n This query:\n\n select p.id,p.producer_id,visa.variation_item_id, vi.qtyavail\n                             from  variation_item_sellingsite_asin\n visa\n                             inner join product p on p.id =\n visa.product_id\n                             inner join variation_item vi on vi.id =\n visa.variation_item_id \n                             where visa.id =4\n\n runs in 43 msec. The \"visa.id\" column has int4 datatype. 
The query\n plan uses an index condition:\n\n \"Nested Loop  (cost=0.00..26.19 rows=1 width=28)\"\n \"  ->  Nested Loop  (cost=0.00..17.75 rows=1 width=24)\"\n \"        ->  Index Scan using\n variation_item_sellingsite_asin_pkey on\n variation_item_sellingsite_asin visa  (cost=0.00..8.58 rows=1\n width=16)\"\n \"              Index Cond: (id = 4)\"\n \"        ->  Index Scan using pk_product_id on product p \n (cost=0.00..9.16 rows=1 width=16)\"\n \"              Index Cond: (p.id = visa.product_id)\"\n \"  ->  Index Scan using pk_variation_item_id on variation_item\n vi  (cost=0.00..8.43 rows=1 width=12)\"\n \"        Index Cond: (vi.id = visa.variation_item_id)\"\n\n\n This query:\n\n select p.id,p.producer_id,visa.variation_item_id, vi.qtyavail\n                             from  variation_item_sellingsite_asin\n visa\n                             inner join product p on p.id =\n visa.product_id\n                             inner join variation_item vi on vi.id =\n visa.variation_item_id\n                             where visa.id =4.0\n\n Runs for  1144 msec! Query plan uses seq scan + filter:\n\n \"Nested Loop  (cost=33957.27..226162.68 rows=14374 width=28)\"\n \"  ->  Hash Join  (cost=33957.27..106190.76 rows=14374 width=20)\"\n \"        Hash Cond: (visa.variation_item_id = vi.id)\"\n \"        ->  Seq Scan on variation_item_sellingsite_asin visa \n (cost=0.00..71928.04 rows=14374 width=16)\"\n \"              Filter: ((id)::numeric = 4.0)\"\n \"        ->  Hash  (cost=22026.01..22026.01 rows=954501\n width=12)\"\n \"              ->  Seq Scan on variation_item vi \n (cost=0.00..22026.01 rows=954501 width=12)\"\n \"  ->  Index Scan using pk_product_id on product p \n (cost=0.00..8.33 rows=1 width=16)\"\n \"        Index Cond: (p.id = visa.product_id)\"\n\n\n Which is silly. I think that PostgreSQL converts the int side to a\n float, and then compares them.\n\n It would be better to do this, for each item in the loop:\n\nevaluate the right side (which is float)\ntell if it is an integer or not\nif not an integer, then discard the row immediately\n\notherwise use its integer value for the index scan\n\n\n The result is identical, but it makes possible to use the index\n scan. Of course, I know that the query itself is wrong, because I\n sould not use a float where an int is expected. But this CAN be\n optimized, so I think it should be! My idea for the query optimizer\n is not to use the \"wider\" data type, but use the data type that has\n an index on it instead.\n\n (I spent an hour figuring out what is wrong with my program. In some\n cases it was slow, in other cases it was really fast, and I never\n got an error message.)\n\n What do you think?\n\n    Laszlo", "msg_date": "Tue, 08 Feb 2011 15:15:27 +0100", "msg_from": "Laszlo Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Bad query plan when the wrong data type is used" }, { "msg_contents": "Laszlo,\n\n> Which is silly. I think that PostgreSQL converts the int side to a\n> float, and then compares them.\n> \n> It would be better to do this, for each item in the loop:\n> \n> * evaluate the right side (which is float)\n> * tell if it is an integer or not\n> * if not an integer, then discard the row immediately\n> * otherwise use its integer value for the index scan\n\nNot terribly likely, I'm afraid. Data type coercion is *way* more\ncomplex than you realize (consider the number of data types we have, and\nthe ability to add UDTs, and then square it). 
And the functionality you\npropose would break backwards compatibility; many people currently use\n\".0\" currently in order to force a coercion to Float or Numeric.\n\nI'm not saying that PostgreSQL couldn't do better on this kind of case,\nbut that doing better is a major project, not a minor one.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Tue, 08 Feb 2011 14:04:59 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" }, { "msg_contents": "You will get the same behaviour from any database product where the query as\nwritten requires type coercion - the coercion has to go in the direction of\nthe \"wider\" type. I have seen the exact same scenario with Oracle, and I\nview it as a problem with the way the query is written, not with the\ndatabase server.\n\nWhoever coded the application which is making this query presumably knows\nthat the visa.id field is an integer type in the schema they designed, so\nwhy are they passing a float? Convert the 4.0 to 4 on the application side\ninstead, it's one function call or cast.\n\nIt's not reasonable to expect the query compiler to pick up the slack for\npoorly written SQL.\n\nCheers\nDave\n\nOn Tue, Feb 8, 2011 at 4:04 PM, Josh Berkus <[email protected]> wrote:\n\n> Laszlo,\n>\n> > Which is silly. I think that PostgreSQL converts the int side to a\n> > float, and then compares them.\n> >\n> > It would be better to do this, for each item in the loop:\n> >\n> > * evaluate the right side (which is float)\n> > * tell if it is an integer or not\n> > * if not an integer, then discard the row immediately\n> > * otherwise use its integer value for the index scan\n>\n> Not terribly likely, I'm afraid. Data type coercion is *way* more\n> complex than you realize (consider the number of data types we have, and\n> the ability to add UDTs, and then square it). And the functionality you\n> propose would break backwards compatibility; many people currently use\n> \".0\" currently in order to force a coercion to Float or Numeric.\n>\n> I'm not saying that PostgreSQL couldn't do better on this kind of case,\n> but that doing better is a major project, not a minor one.\n>\n> --\n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nYou will get the same behaviour from any database product where the query as written requires type coercion - the coercion has to go in the direction of the \"wider\" type. I have seen the exact same scenario with Oracle, and I view it as a problem with the way the query is written, not with the database server.\nWhoever coded the application which is making this query presumably knows that the visa.id field is an integer type in the schema they designed, so why are they passing a float? Convert the 4.0 to 4 on the application side instead, it's one function call or cast.\nIt's not reasonable to expect the query compiler to pick up the slack for poorly written SQL.CheersDaveOn Tue, Feb 8, 2011 at 4:04 PM, Josh Berkus <[email protected]> wrote:\nLaszlo,\n\n> Which is silly. 
I think that PostgreSQL converts the int side to a\n> float, and then compares them.\n>\n> It would be better to do this, for each item in the loop:\n>\n>     * evaluate the right side (which is float)\n>     * tell if it is an integer or not\n>     * if not an integer, then discard the row immediately\n>     * otherwise use its integer value for the index scan\n\nNot terribly likely, I'm afraid.  Data type coercion is *way* more\ncomplex than you realize (consider the number of data types we have, and\nthe ability to add UDTs, and then square it).  And the functionality you\npropose would break backwards compatibility; many people currently use\n\".0\" currently in order to force a coercion to Float or Numeric.\n\nI'm not saying that PostgreSQL couldn't do better on this kind of case,\nbut that doing better is a major project, not a minor one.\n\n--\n                                  -- Josh Berkus\n                                     PostgreSQL Experts Inc.\n                                     http://www.pgexperts.com\n\n--\nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Tue, 8 Feb 2011 17:14:20 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" }, { "msg_contents": "09.02.11 01:14, Dave Crooke написав(ла):\n> You will get the same behaviour from any database product where the \n> query as written requires type coercion - the coercion has to go in \n> the direction of the \"wider\" type. I have seen the exact same scenario \n> with Oracle, and I view it as a problem with the way the query is \n> written, not with the database server.\n>\n> Whoever coded the application which is making this query presumably \n> knows that the visa.id <http://visa.id> field is an integer type in \n> the schema they designed, so why are they passing a float? Convert the \n> 4.0 to 4 on the application side instead, it's one function call or cast.\nActually the problem may be in layers, and the problem may even be not \nnoticed until it's late enough. As far as I remember from this list \nthere are problems with column being integer and parameter prepared as \nbigint or number. Same for number vs double vs float.\nAs for me it would be great for optimizer to consider the next:\n1) val1::narrow = val2::wide as (val1::narrow = val2::narrow and \nval2::narrow = val2::wide)\n2) val1::narrow < val2::wide as (val1::narrow < val2::narrow and \nval1::wide < val2::wide)\n3) val1::narrow > val2::wide as (val1::narrow + 1 > val2::narrow and \nval1::wide > val2::wide)\nOf course it should use additional check it this allows to use an index.\nSurely, this is not an easy thing to implement, but as for me similar \nquestion are raised quite often in this list.\n\nBest regards, Vitalii Tymchyshyn\n\n\n\n\n\n\n\n 09.02.11 01:14, Dave Crooke написав(ла):\n You will get the same behaviour from any database\n product where the query as written requires type coercion - the\n coercion has to go in the direction of the \"wider\" type. I have\n seen the exact same scenario with Oracle, and I view it as a\n problem with the way the query is written, not with the database\n server.\n\n Whoever coded the application which is making this query\n presumably knows that the visa.id field is an integer type in\n the schema they designed, so why are they passing a float? 
Convert\n the 4.0 to 4 on the application side instead, it's one function\n call or cast.\n\n Actually the problem may be in layers, and the  problem may even be\n not noticed until it's late enough. As far as I remember from this\n list there are problems with column being integer and parameter\n prepared as bigint or number. Same for number vs double vs float.\n As for me it would be great for optimizer to consider the next:\n 1) val1::narrow = val2::wide as (val1::narrow = val2::narrow and\n val2::narrow = val2::wide)\n 2) val1::narrow < val2::wide as (val1::narrow < val2::narrow\n and val1::wide < val2::wide)\n 3) val1::narrow > val2::wide as (val1::narrow + 1 >\n val2::narrow and val1::wide > val2::wide)\n Of course it should use additional check it this allows to use an\n index. \n Surely, this is not an easy thing to implement, but as for me\n similar question are raised quite often in this list.\n\n Best regards, Vitalii Tymchyshyn", "msg_date": "Wed, 09 Feb 2011 11:52:44 +0200", "msg_from": "Vitalii Tymchyshyn <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" }, { "msg_contents": "On Tue, Feb 8, 2011 at 5:04 PM, Josh Berkus <[email protected]> wrote:\n> Laszlo,\n>\n>> Which is silly. I think that PostgreSQL converts the int side to a\n>> float, and then compares them.\n>>\n>> It would be better to do this, for each item in the loop:\n>>\n>>     * evaluate the right side (which is float)\n>>     * tell if it is an integer or not\n>>     * if not an integer, then discard the row immediately\n>>     * otherwise use its integer value for the index scan\n>\n> Not terribly likely, I'm afraid.  Data type coercion is *way* more\n> complex than you realize (consider the number of data types we have, and\n> the ability to add UDTs, and then square it).  And the functionality you\n> propose would break backwards compatibility; many people currently use\n> \".0\" currently in order to force a coercion to Float or Numeric.\n>\n> I'm not saying that PostgreSQL couldn't do better on this kind of case,\n> but that doing better is a major project, not a minor one.\n\nSpecifically, the problem is that x = 4.0, where x is an integer, is\ndefined to mean x::numeric = 4.0, not x = 4.0::integer. If it meant\nthe latter, then testing x = 3.5 would throw an error, whereas what\nactually happens is it just returns false.\n\nNow, in this particular case, we all know that the only way x::numeric\n= 4.0 can be true is if x = 4::int. But that's a property of the\nnumeric and integer data types that doesn't hold in general. Consider\nt = 'foo'::citext, where t has type text. 
That could be true if t =\n'Foo' or t = 'foO' or t = 'FOO', etc.\n\nWe could fix this by adding some special case logic that understands\nproperties of integers and numeric values and optimizes x =\n4.0::numeric to x = 4::int and x = 3.5::numeric to constant false.\nThat would be cool, in a way, but I'm not sure it's really worth the\ncode it would take, unless it falls naturally out of some larger\nproject in that area.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Sun, 27 Feb 2011 13:16:55 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" }, { "msg_contents": "Robert Haas <[email protected]> writes:\n> On Tue, Feb 8, 2011 at 5:04 PM, Josh Berkus <[email protected]> wrote:\n>> I'm not saying that PostgreSQL couldn't do better on this kind of case,\n>> but that doing better is a major project, not a minor one.\n\n> Specifically, the problem is that x = 4.0, where x is an integer, is\n> defined to mean x::numeric = 4.0, not x = 4.0::integer. If it meant\n> the latter, then testing x = 3.5 would throw an error, whereas what\n> actually happens is it just returns false.\n\n> We could fix this by adding some special case logic that understands\n> properties of integers and numeric values and optimizes x =\n> 4.0::numeric to x = 4::int and x = 3.5::numeric to constant false.\n> That would be cool, in a way, but I'm not sure it's really worth the\n> code it would take, unless it falls naturally out of some larger\n> project in that area.\n\nI think that most of the practical problems around this case could be\nsolved without such a hack. What we should do instead is invent\ncross-type operators \"int = numeric\" etc and make them members of both\nthe integer and numeric index opclasses. There are reasons why that\nwouldn't work for integer versus float (read the last section of\nsrc/backend/access/nbtree/README) but right offhand it seems like it\nought to be safe enough for numeric. Now, it wouldn't be quite as fast\nas if we somehow downconverted numeric to integer beforehand, but at\nleast you'd only be talking about a slow comparison operator and not a\nfundamentally stupider plan. That's close enough for me, for what is\nin the end a stupidly written query.\n\nOf course, the above is still not exactly a small project, since you'd\nbe talking about something like 36 new operators to cover all of int2,\nint4, int8. But it's a straightforward extension.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 27 Feb 2011 13:39:41 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used " }, { "msg_contents": "On Sun, Feb 27, 2011 at 1:39 PM, Tom Lane <[email protected]> wrote:\n> Robert Haas <[email protected]> writes:\n>> On Tue, Feb 8, 2011 at 5:04 PM, Josh Berkus <[email protected]> wrote:\n>>> I'm not saying that PostgreSQL couldn't do better on this kind of case,\n>>> but that doing better is a major project, not a minor one.\n>\n>> Specifically, the problem is that x = 4.0, where x is an integer, is\n>> defined to mean x::numeric = 4.0, not x = 4.0::integer.  
If it meant\n>> the latter, then testing x = 3.5 would throw an error, whereas what\n>> actually happens is it just returns false.\n>\n>> We could fix this by adding some special case logic that understands\n>> properties of integers and numeric values and optimizes x =\n>> 4.0::numeric to x = 4::int and x = 3.5::numeric to constant false.\n>> That would be cool, in a way, but I'm not sure it's really worth the\n>> code it would take, unless it falls naturally out of some larger\n>> project in that area.\n>\n> I think that most of the practical problems around this case could be\n> solved without such a hack.  What we should do instead is invent\n> cross-type operators \"int = numeric\" etc and make them members of both\n> the integer and numeric index opclasses.  There are reasons why that\n> wouldn't work for integer versus float (read the last section of\n> src/backend/access/nbtree/README) but right offhand it seems like it\n> ought to be safe enough for numeric.  Now, it wouldn't be quite as fast\n> as if we somehow downconverted numeric to integer beforehand, but at\n> least you'd only be talking about a slow comparison operator and not a\n> fundamentally stupider plan.  That's close enough for me, for what is\n> in the end a stupidly written query.\n>\n> Of course, the above is still not exactly a small project, since you'd\n> be talking about something like 36 new operators to cover all of int2,\n> int4, int8.  But it's a straightforward extension.\n\nInteresting. Worth a TODO?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 28 Feb 2011 14:04:53 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" }, { "msg_contents": "On Mon, Feb 28, 2011 at 02:04:53PM -0500, Robert Haas wrote:\n> On Sun, Feb 27, 2011 at 1:39 PM, Tom Lane <[email protected]> wrote:\n> > Robert Haas <[email protected]> writes:\n> >> On Tue, Feb 8, 2011 at 5:04 PM, Josh Berkus <[email protected]> wrote:\n> >>> I'm not saying that PostgreSQL couldn't do better on this kind of case,\n> >>> but that doing better is a major project, not a minor one.\n> >\n> >> Specifically, the problem is that x = 4.0, where x is an integer, is\n> >> defined to mean x::numeric = 4.0, not x = 4.0::integer. �If it meant\n> >> the latter, then testing x = 3.5 would throw an error, whereas what\n> >> actually happens is it just returns false.\n> >\n> >> We could fix this by adding some special case logic that understands\n> >> properties of integers and numeric values and optimizes x =\n> >> 4.0::numeric to x = 4::int and x = 3.5::numeric to constant false.\n> >> That would be cool, in a way, but I'm not sure it's really worth the\n> >> code it would take, unless it falls naturally out of some larger\n> >> project in that area.\n> >\n> > I think that most of the practical problems around this case could be\n> > solved without such a hack. �What we should do instead is invent\n> > cross-type operators \"int = numeric\" etc and make them members of both\n> > the integer and numeric index opclasses. �There are reasons why that\n> > wouldn't work for integer versus float (read the last section of\n> > src/backend/access/nbtree/README) but right offhand it seems like it\n> > ought to be safe enough for numeric. 
 Now, it wouldn't be quite as fast\n> > as if we somehow downconverted numeric to integer beforehand, but at\n> > least you'd only be talking about a slow comparison operator and not a\n> > fundamentally stupider plan.  That's close enough for me, for what is\n> > in the end a stupidly written query.\n> >\n> > Of course, the above is still not exactly a small project, since you'd\n> > be talking about something like 36 new operators to cover all of int2,\n> > int4, int8.  But it's a straightforward extension.\n> \n> Interesting. Worth a TODO?\n\nSince we are discussing int2 casting, I wanted to bring up this other\ncasting issue from 2011, in case it helped the discussion.\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n + It's impossible for everything to be true. +\n\n", "msg_date": "Sat, 1 Sep 2012 12:25:37 -0400", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Bad query plan when the wrong data type is used" } ]
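A minimal sketch of the literal-typing issue discussed in this thread, for anyone who wants to reproduce it. The table t and its contents are made up for illustration; only the behaviour of the numeric literal matters here.

CREATE TABLE t (x integer PRIMARY KEY);
INSERT INTO t SELECT generate_series(1, 100000);
ANALYZE t;

-- The numeric literal forces the comparison to happen in numeric, so the
-- btree index on the integer column cannot be used and a seq scan results:
EXPLAIN SELECT * FROM t WHERE x = 4.0;

-- Writing the literal in the column's own type (or casting it explicitly)
-- keeps the comparison in integer and lets the planner use the index.
-- The cast form is only a safe rewrite when the literal is a whole number:
EXPLAIN SELECT * FROM t WHERE x = 4;
EXPLAIN SELECT * FROM t WHERE x = 4.0::integer;

Until cross-type int = numeric operators along the lines Tom describes exist, rewriting the literal on the application side is the practical workaround.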
[ { "msg_contents": "Howdy,\n\nEnvironment:\n\nPostgres 8.3.13\nSolaris 10\n\nI have a SELECT query that runs no problem standalone but when running\nwithin a perl script it intermittently core dumps. Random, no pattern\nto the timing of the core dumps. The perl script processes the rows\nfrom the query, if the rows satisfy a condition then the perl script\nadds the rows to another table. When the script works it runs for\nabout a minute. If the script fails, it runs for about 5 minutes and\ncore dumps. The core dump is in the perl error handling routines. We\nsuspect the bug is related to how the perl postgres libraries interact\nwith postgres.\n\nThe query:\n\nSELECT pa.tag,\n pa.name,\n pa.notices_sent,\n pa.parent,\n pa.contact,\n pa.adsl_type,\n pa.adsl_order_state,\n pa.adsl_line,\n pa.adsl_site_address,\n pa.subnet_addresses,\n pa.plan, pa.username,\n pa.product_type,\n pa.framed_routes,\n c.tag,\n c.contact,\n c.name,\n c.customer_type,\n pa.technology,\n pa.carrier,\n pa.dependent_services,\n pa.provisioning_email,\n pa.provisioning_mobile,\n pa.ull_termination_cable,\n pa.ull_termination_pair,\n pa.ull_termination_terminal_box\nFROM personal_adsl pa,\n client c\nWHERE pa.parent = c.tag\nAND pa.adsl_migration_id is null\nAND (pa.change_to not ilike '%IBC%' OR pa.change_to is null)\nAND pa.adsl_order_state in ('Confirmed', 'Churn-Ordered', 'Provisioned', 'Held')\nAND (pa.adsl_type <> 'IBC' OR pa.adsl_type is null)\nAND pa.active in ('Active', 'Pending')\nAND (c.contact not ilike '%noncontact%' OR c.contact is null)\nAND (pa.contact not ilike '%noncontact%' OR pa.contact is null)\nAND (pa.notices_sent is null OR\n (\n (pa.adsl_order_state in ('Confirmed', 'Churn-Ordered') AND\npa.notices_sent not similar to '%(Confirm|Provision)%') OR\n(pa.adsl_order_state = 'Provisioned'AND pa.notices_sent not ilike\n'%Provision%') OR\n (pa.adsl_order_state = 'Held' AND pa.notices_sent not ilike '%Held%')\n )\n );\n\nThe EXPLAIN ANALYZE:\n\n\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\nHash Join (cost=159798.93..612582.99 rows=17979 width=442) (actual\ntime=87639.667..90179.888 rows=219 loops=1)\n Hash Cond: (pa.parent = c.tag)\n -> Bitmap Heap Scan on personal_adsl pa (cost=94326.53..546467.23\nrows=46357 width=323) (actual time=85137.720..87676.712 rows=225\nloops=1)\n Recheck Cond: ((active = ANY ('{Active,Pending}'::text[]))\nAND (adsl_order_state = ANY\n('{Confirmed,Churn-Ordered,Provisioned,Held}'::text[])))\n Filter: ((adsl_migration_id IS NULL) AND ((change_to !~~*\n'%IBC%'::text) OR (change_to IS NULL)) AND ((adsl_type <> 'IBC'::text)\nOR (adsl_type IS NULL)) AND ((contact !~~* '%noncontact%'::text) OR\n(contact IS NULL)) AND ((notices_sent IS NULL) OR ((adsl_order_state =\nANY ('{Confirmed,Churn-Ordered}'::text[])) AND (notices_sent !~\n'***:^(?:.*(Confirm|Provision).*)$'::text)) OR ((adsl_order_state =\n'Provisioned'::text) AND (notices_sent !~~* '%Provision%'::text)) OR\n((adsl_order_state = 'Held'::text) AND (notices_sent !~~*\n'%Held%'::text))))\n -> BitmapAnd 
(cost=94326.53..94326.53 rows=185454 width=0)\n(actual time=85067.110..85067.110 rows=0 loops=1)\n -> Bitmap Index Scan on personal_adsl_t2\n(cost=0.00..43679.06 rows=481242 width=0) (actual\ntime=374.128..374.128 rows=858904 loops=1)\n Index Cond: (active = ANY ('{Active,Pending}'::text[]))\n -> Bitmap Index Scan on\npersonal_adsl_dsl_order_state_index (cost=0.00..50624.05 rows=481811\nwidth=0) (actual time=84651.450..84651.450 rows=854106 loops=1)\n Index Cond: (adsl_order_state = ANY\n('{Confirmed,Churn-Ordered,Provisioned,Held}'::text[]))\n -> Hash (cost=60834.43..60834.43 rows=371038 width=119) (actual\ntime=2501.358..2501.358 rows=337954 loops=1)\n -> Seq Scan on client c (cost=0.00..60834.43 rows=371038\nwidth=119) (actual time=0.056..2077.094 rows=337954 loops=1)\n Filter: ((contact !~~* '%noncontact%'::text) OR\n(contact IS NULL))\nTotal runtime: 90180.225 ms\n(14 rows)\n\nThe tables:\n\nsqlsnbs=# \\d personal_adsl\n Table \"public.personal_adsl\"\n Column | Type | Modifiers\n-------------------------------------+---------+-----------\ntag | text |\nadsl_type | text |\n_modified | integer |\nsubnet_addresses | text |\ninsidesales | text |\ncost_mb | text |\ntechnology | text |\nbase_hour | text |\ncharge | text |\n_excess_warning | text |\nnotify | text |\nactive | text |\nadsl_migration_to_id | text |\nadsl_order_state | text |\ninvoice_notes | text |\nhibis_timestamp_3 | text |\n_created_by | text |\nspeed_change_date | text |\nplan | text |\nadsl_exchange | text |\npaid_till | text |\nhibis_timestamp_2 | text |\nold_change_to | text |\nretired | text |\nunwired_eid | text |\nadsl_migration_to_date | text |\nadsl_speed | text |\nsetup_fee | text |\nhibis_status | text |\nsnbs_user | text |\nline_loss_estimate | text |\nadsl_detail_status | text |\nhibis_advice_method | text |\nparent | text |\ncommission_date_paid | text |\nannex_mask | text |\ngift | text |\nchanging_to | text |\nadsl_layer | text |\nline_loss_cpe | text |\nbase_mb | text |\ncca | text |\n_next_excess | text |\ncommission | text |\nadd_framed_route_auto | text |\noutsidesales | text |\ngst_exempt | text |\nexternal_snbs_reference | text |\ncost_hour | text |\nnotices_sent | text |\nadsl_xpair | text |\nname | text |\nchurn | text |\ncontact | text |\nhibis_cust_id | text |\naccesslist | text |\nearly_termination_end | text |\nexcess_checked | text |\ncarrier | text |\nstatus | text |\nadsl_line | text |\nproduct_type | text |\nchange_to | text |\ncontract_end | text |\nadsl_cpair | text |\nadsl_migration_id | text |\nsubnet_addresses_specify | text |\n_current_hour | text |\nusername | text |\nadsl_status_detail | text |\nadsl_migration_completion_date | text |\nearly_termination_length | text |\nemail | text |\nadsl_cable_id | text |\nsponsored_amount | text |\nsla | text |\nchange_in_progress | text |\nhibis_incentive_payment_retail | text |\n_created | integer |\nservice_id | text |\ncontract_length | text |\npriority | text |\nreport_pending | text |\nautoraise_date | text |\nframed_routes | text |\nadsl_migration_to_completion_date | text |\ndiscount | text |\nhibis_incentive_payment_wholesale | text |\nsponsored_by | text |\nhibis_timestamp_0 | text |\nadsl_site_address | text |\ndontsendtotelstra | text |\nservice_state | text |\ncidr_group | text |\nadsl_esa_code | text |\nupfront_commission | text |\ncommission_to | text |\n_current_mb | text |\nadsl_profile | text |\nadsl_migration_date | text |\nbilling_interval | text |\nadd_framed_route_specify | text |\nhibis_timestamp_1 | text 
|\nadd_framed_route_specify_skip_check | text |\nremove_framed_routes | text |\nadsl_do_not_migrate | text |\nwdsl_rsa | text |\nwdsl_mac | text |\nwdsl_gps_long | text |\nwdsl_gps_lat | text |\npaid_to_migrate | text |\nwdsl_verified | text |\nadsl_paid_to_migrate | text |\nlock_profile | text |\nextra_address_info | text |\nboris_record_id | text |\n_boris_record_id | text |\nusage_reference | text |\nl3exit_category | text |\nl3exit_cutoverdate | text |\nl3exit_l3serviceid | text |\nhibis_contract_expiry_date | text |\nl3exit_attributes | text |\nl3exit_l2serviceid | text |\null_ca_signed_date | text |\null_assurance_category | text |\null_power_indicator | text |\null_identifier | text |\nexternal_contract_type | text |\nexternal_contract_expiry_date | text |\null_call_diversion_number | text |\null_losing_fnn | text |\nexisting_equip | text |\null_cutover_date | text |\null_sub_request_type | text |\nlast_check_request | text |\null_dsl_service_id | text |\ncampaign_code | text |\nreseller | text |\ntransition_from_date | text |\ntransition_from_type | text |\ntransition_from_snbsid | text |\ntransition_to_snbsid | text |\ntransition_to_date | text |\ntransition_to_type | text |\null_boundary_point_details | text |\ndependent_services | text |\nparent_service_id | text |\ncontract_id | text |\nretirement_type_code | text |\nretirement_reason_code | text |\nretirement_date | text |\nearly_termination_fee | text |\nstaff_sold_by | text |\nprovisioning_mobile | text |\nprovisioning_email | text |\nadsl_parent_esa_code | text |\nplan_id | text |\nusers | text |\nearly_termination_schedule | text |\ninitial_payment_workflow | text |\nprovisioning_workflow | text |\nexternal_commission_schedule | text |\nadsl_dslam_type | text |\null_live_fnn_at_address | text |\ndata_usage_rating_scheme | text |\null_termination_terminal_box | text |\null_termination_pair | text |\null_termination_cable | text |\ndiscount_negotiated_by | text |\nno_discounted_status_on_invoice | text |\nsom_key_list | text |\nsom_id_list | text |\nstandalone_narration | text |\nnetsuite_id | text |\nopticomm_ref | text |\nprevious_charge | text |\nmulticast_enabled | text |\naddon_pack | text |\nIndexes:\n \"personal_adsl_adsl_carrier_index\" btree (carrier)\n \"personal_adsl_adsl_cidr_group_index\" btree (cidr_group)\n \"personal_adsl_adsl_parent_index\" btree (parent)\n \"personal_adsl_adsl_plan_index\" btree (plan)\n \"personal_adsl_adsl_retired_index\" btree (retired)\n \"personal_adsl_adsl_snbs_user_index\" btree (snbs_user)\n \"personal_adsl_adsl_subnet_addresses_index\" btree (subnet_addresses)\n \"personal_adsl_adsl_technology_index\" btree (technology)\n \"personal_adsl_change_to_index\" btree (change_to)\n \"personal_adsl_dsl_order_state_index\" btree (adsl_order_state)\n \"personal_adsl_exchange_index\" btree (adsl_exchange)\n \"personal_adsl_framed_routes\" btree (framed_routes)\n \"personal_adsl_layer_index\" btree (adsl_layer)\n \"personal_adsl_line_index\" btree (adsl_line)\n \"personal_adsl_migration_id_index\" btree (adsl_migration_id NULLS FIRST)\n \"personal_adsl_profile_index\" btree (adsl_profile)\n \"personal_adsl_speed_index\" btree (adsl_speed)\n \"personal_adsl_t1\" btree (parent)\n \"personal_adsl_t2\" btree (active)\n \"personal_adsl_type_index\" btree (adsl_type)\n \"personal_adsl_usage_ref\" btree (usage_reference)\n \"personal_adsl_username_simple_idx\" btree (username)\n \"tag_personal_adsl_adsl\" btree (tag)\n\nsqlsnbs=#\n\nsqlsnbs=# \\d client\n Table \"public.client\"\n Column | Type | 
Modifiers\n---------------------------------+---------+-----------\ntag | text |\ncontact | text |\n_modified | integer |\nstatus | text |\ninsidesales | text |\ntransaction_gst_exempt | text |\nresold_by | text |\ncapricorn_id | text |\ndd_name | text |\ncard_name | text |\nshipping_address | text |\ncard_4 | text |\nbilling_date_change | text |\npassword | text |\nusername | text |\nnotify | text |\ncard_3 | text |\n_card_debit_fail_warning | text |\nbilling_address | text |\ncredit_status | text |\nbilling_via | text |\ncard_2 | text |\nreferral | text |\ndd_account | text |\n_ccexpiry_impending_warning | text |\ntransaction_module | text |\n_created_by | text |\ntransaction_type | text |\ntransaction_amount | text |\n_created | integer |\npayment_method | text |\nextended_off | text |\nhomepop | text |\ncard_expiry | text |\ncustomer_type | text |\npriority | text |\ncard_1 | text |\nautoraise_date | text |\npending_suspension | text |\nsandl_member | text |\nrollover_balance | text |\nsnbs_user | text |\n_last_statement_time | text |\nbilling_dest | text |\ndiscount | text |\ndd_bsb | text |\nbilling_as | text |\ntransaction_service | text |\ncard_amount | text |\nbalance | text |\nnotes | text |\n_age_balance | text |\nlast_statement | text |\ncommission | text |\nbilling_date | text |\noutsidesales | text |\ncommission_to | text |\nbal | text |\ntransaction_comment | text |\nname | text |\nqdsnbs | text |\n_last_direct_debit | text |\nbilling_date_lock | text |\ninvoicing_style | text |\n_order_service | text |\nadsl_line | text |\njob_type | text |\n_order_client | text |\nstaff_sponsorship | text |\nndbm_sucks | text |\nallocation_method | text |\ninside_sales | text |\nexclude_from_promotional_emails | text |\naddress_sub_address_type | text |\naddress_address_type | text |\naddress_street_name | text |\naddress_validation_info | text |\naddress_validation_status | text |\naddress_street_type | text |\nactive | text |\naddress_state | text |\naddress_sub_address_number | text |\naddress_locality | text |\naddress_postcode | text |\naddress_street_number | text |\naddress_parent_updated | text |\nparent | text |\nadsl_type | text |\nexcess_checked | text |\ncarrier | text |\nadsl_speed | text |\nsetup_fee | text |\nchange_to | text |\ncharge | text |\nadsl_site_address | text |\nearly_termination_length | text |\nemail | text |\nbase_mb | text |\ncca | text |\nadsl_order_state | text |\nplan | text |\nbilling_interval | text |\ntransition_to_snbsid | text |\naccesslist | text |\nearly_termination_end | text |\n_kill_sessions | text |\nproduct_type | text |\nadsl_migration_id | text |\ntechnology | text |\n_excess_warning | text |\ntransition_from_date | text |\nsla | text |\nadsl_migration_to_id | text |\nadsl_exchange | text |\npaid_till | text |\nold_change_to | text |\nreport_pending | text |\nhibis_status | text |\ntransition_from_type | text |\ntransition_from_snbsid | text |\nservice_state | text |\ncidr_group | text |\n_next_excess | text |\nearly_termination_fee | text |\nadsl_esa_code | text |\ntransition_to_date | text |\ntransition_to_type | text |\nadsl_status_detail | text |\nusage_reference | text |\nretired | text |\noutside_sales | text |\ncontact_backup | text |\nsales_zone | text |\nbilling_destination | text |\ncard_type | text |\nstatement_hold | text |\ntransaction_id | text |\n_use_cba | text |\n_cba_cc_token | text |\n_pci_card_pan | text |\nnetsuite_id | text |\npom_id | text |\nIndexes:\n \"client_credit_status_index\" btree (credit_status)\n 
\"client_customer_type_index\" btree (customer_type)\n \"tag_client\" btree (tag)\n\nsqlsnbs=#\n\nAnyone have any ideas?\n\nThanks,\n\nSam\n", "msg_date": "Wed, 9 Feb 2011 10:20:15 +1030", "msg_from": "Sam Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Query Core Dumping" }, { "msg_contents": "Sam Stearns <[email protected]> writes:\n> I have a SELECT query that runs no problem standalone but when running\n> within a perl script it intermittently core dumps. Random, no pattern\n> to the timing of the core dumps. The perl script processes the rows\n> from the query, if the rows satisfy a condition then the perl script\n> adds the rows to another table. When the script works it runs for\n> about a minute. If the script fails, it runs for about 5 minutes and\n> core dumps. The core dump is in the perl error handling routines. We\n> suspect the bug is related to how the perl postgres libraries interact\n> with postgres.\n\nCan you get a stack trace from one of the core dumps?\n\nAlso, exactly which perl version are you using, and with what build\noptions? (\"perl -V\" output would be a good answer here.)\n\nBTW, this seems pretty off-topic for pgsql-performance.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Feb 2011 18:58:44 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query Core Dumping " }, { "msg_contents": "Thanks, Tom. Forwarded from pgsql-performance. Working on stack\ntrace. perl -V:\n\n> perl -V\nSummary of my perl5 (revision 5 version 8 subversion 8) configuration:\n Platform:\n osname=solaris, osvers=2.10, archname=i86pc-solaris\n uname='sunos katana7 5.10 generic_118855-15 i86pc i386 i86pc '\n config_args='-de -A prepend:libswanted=db\n-Dlocincpth=/usr/local/include/db_185\n-Dloclibpth=/usr/local/lib/db_185 -Dcc=gcc\n-Dprefix=/usr/local/stow/perl-5.8.8'\n hint=recommended, useposix=true, d_sigaction=define\n usethreads=undef use5005threads=undef useithreads=undef\nusemultiplicity=undef\n useperlio=define d_sfio=undef uselargefiles=define usesocks=undef\n use64bitint=undef use64bitall=undef uselongdouble=undef\n usemymalloc=n, bincompat5005=undef\n Compiler:\n cc='gcc', ccflags ='-fno-strict-aliasing -pipe\n-Wdeclaration-after-statement -I/usr/local/include/db_185\n-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DPERL_USE_SAFE_PUTENV\n-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV\n-DPERL_USE_SAFE_PUTENV',\n optimize='-O',\n cppflags='-fno-strict-aliasing -pipe -Wdeclaration-after-statement\n-I/usr/local/include/db_185'\n ccversion='', gccversion='3.4.3\n(csl-sol210-3_4-branch+sol_rpath)', gccosandvers='solaris2.10'\n intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234\n d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12\n ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',\nlseeksize=8\n alignbytes=4, prototype=define\n Linker and Libraries:\n ld='gcc', ldflags =' -L/usr/local/lib/db_185 '\n libpth=/usr/local/lib/db_185 /usr/lib /usr/ccs/lib /usr/local/lib\n libs=-lsocket -lnsl -ldl -lm -lc\n perllibs=-lsocket -lnsl -ldl -lm -lc\n libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a\n gnulibc_version=''\n Dynamic Linking:\n dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '\n cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib/db_185'\n\n\nCharacteristics of this binary (from libperl):\n Compile-time options: PERL_MALLOC_WRAP PERL_USE_SAFE_PUTENV\n USE_LARGE_FILES USE_PERLIO\n Built under solaris\n Compiled at Aug 29 2006 
21:33:23\n @INC:\n /usr/local/stow/perl-5.8.8/lib/5.8.8/i86pc-solaris\n /usr/local/stow/perl-5.8.8/lib/5.8.8\n /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris\n /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8\n /usr/local/stow/perl-5.8.8/lib/site_perl\n .\n>\n\nSam\n\nOn Wed, Feb 9, 2011 at 10:28 AM, Tom Lane <[email protected]> wrote:\n> Sam Stearns <[email protected]> writes:\n>> I have a SELECT query that runs no problem standalone but when running\n>> within a perl script it intermittently core dumps.  Random, no pattern\n>> to the timing of the core dumps.  The perl script processes the rows\n>> from the query, if the rows satisfy  a condition then the perl script\n>> adds the rows to another table.  When the script works it runs for\n>> about a minute.  If the script fails, it runs for about 5 minutes and\n>> core dumps.  The core dump is in the perl error handling routines.  We\n>> suspect the bug is related to how the perl postgres libraries interact\n>> with postgres.\n>\n> Can you get a stack trace from one of the core dumps?\n>\n> Also, exactly which perl version are you using, and with what build\n> options?  (\"perl -V\" output would be a good answer here.)\n>\n> BTW, this seems pretty off-topic for pgsql-performance.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Wed, 9 Feb 2011 10:33:04 +1030", "msg_from": "Sam Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Query Core Dumping" }, { "msg_contents": "Segmentation Fault - core dumped\n\n===================================\n> pollsys(0x080475B8, 1, 0x00000000, 0x00000000) = 1\n> recv(4, \" i t z _ l i s a @ h o t\".., 16005, 0) = 8192\n> brk(0x095FBCF0) = 0\n> brk(0x095FDCF0) = 0\n> pollsys(0x080475B8, 1, 0x00000000, 0x00000000) = 1\n> recv(4, \"\\0\\0\\007 U L L - N S W\\0\".., 16278, 0) = 3404\n> brk(0x095FDCF0) = 0\n> brk(0x095FFCF0) = 0\n> Incurred fault #6, FLTBOUNDS %pc = 0x080BF4EF\n> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000168\n> Received signal #11, SIGSEGV [default]\n> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000168\n>\n===================================\n\n>From gdb:\n\n#0 0x080bf4ef in Perl_sv_vcatpvfn ()\n#1 0x080bdb30 in Perl_sv_vsetpvfn ()\n#2 0x080a1363 in Perl_vmess ()\n#3 0x080a1d06 in Perl_vwarn ()\n#4 0x080a1fde in Perl_warn ()\n#5 0xfe7f9d58 in pg_warn () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#6 0xfe7c704e in defaultNoticeReceiver () from /usr/local/lib/libpq.so.4\n#7 0xfe7cf0bc in pqGetErrorNotice3 () from /usr/local/lib/libpq.so.4\n#8 0xfe7cf67f in pqParseInput3 () from /usr/local/lib/libpq.so.4\n#9 0xfe7c8398 in parseInput () from /usr/local/lib/libpq.so.4\n#10 0xfe7c8c63 in PQgetResult () from /usr/local/lib/libpq.so.4\n#11 0xfe7c8d5b in PQexecFinish () from /usr/local/lib/libpq.so.4\n#12 0xfe7fdcc1 in dbd_st_execute () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#13 0xfe7f512f in XS_DBD_Pg_db_selectall_arrayref () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#14 0xfecfb301 in XS_DBI_dispatch () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBI/DBI.so\n#15 0x080b3609 in Perl_pp_entersub ()\n#16 0x080ad365 in Perl_runops_standard ()\n#17 0x08068814 in S_run_body ()\n#18 0x0806852b in perl_run ()\n#19 0x08065ae0 in main ()\n\nSam\n\nOn Wed, Feb 9, 2011 at 10:33 AM, Sam Stearns <[email protected]> wrote:\n> Thanks, Tom.  Forwarded from pgsql-performance.  Working on stack\n> trace.  
perl -V:\n>\n>> perl -V\n> Summary of my perl5 (revision 5 version 8 subversion 8) configuration:\n>  Platform:\n>    osname=solaris, osvers=2.10, archname=i86pc-solaris\n>    uname='sunos katana7 5.10 generic_118855-15 i86pc i386 i86pc '\n>    config_args='-de -A prepend:libswanted=db\n> -Dlocincpth=/usr/local/include/db_185\n> -Dloclibpth=/usr/local/lib/db_185 -Dcc=gcc\n> -Dprefix=/usr/local/stow/perl-5.8.8'\n>    hint=recommended, useposix=true, d_sigaction=define\n>    usethreads=undef use5005threads=undef useithreads=undef\n> usemultiplicity=undef\n>    useperlio=define d_sfio=undef uselargefiles=define usesocks=undef\n>    use64bitint=undef use64bitall=undef uselongdouble=undef\n>    usemymalloc=n, bincompat5005=undef\n>  Compiler:\n>    cc='gcc', ccflags ='-fno-strict-aliasing -pipe\n> -Wdeclaration-after-statement -I/usr/local/include/db_185\n> -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DPERL_USE_SAFE_PUTENV\n> -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV\n> -DPERL_USE_SAFE_PUTENV',\n>    optimize='-O',\n>    cppflags='-fno-strict-aliasing -pipe -Wdeclaration-after-statement\n> -I/usr/local/include/db_185'\n>    ccversion='', gccversion='3.4.3\n> (csl-sol210-3_4-branch+sol_rpath)', gccosandvers='solaris2.10'\n>    intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234\n>    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=12\n>    ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',\n> lseeksize=8\n>    alignbytes=4, prototype=define\n>  Linker and Libraries:\n>    ld='gcc', ldflags =' -L/usr/local/lib/db_185 '\n>    libpth=/usr/local/lib/db_185 /usr/lib /usr/ccs/lib /usr/local/lib\n>    libs=-lsocket -lnsl -ldl -lm -lc\n>    perllibs=-lsocket -lnsl -ldl -lm -lc\n>    libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a\n>    gnulibc_version=''\n>  Dynamic Linking:\n>    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '\n>    cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib/db_185'\n>\n>\n> Characteristics of this binary (from libperl):\n>  Compile-time options: PERL_MALLOC_WRAP PERL_USE_SAFE_PUTENV\n>                        USE_LARGE_FILES USE_PERLIO\n>  Built under solaris\n>  Compiled at Aug 29 2006 21:33:23\n>  @INC:\n>    /usr/local/stow/perl-5.8.8/lib/5.8.8/i86pc-solaris\n>    /usr/local/stow/perl-5.8.8/lib/5.8.8\n>    /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris\n>    /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8\n>    /usr/local/stow/perl-5.8.8/lib/site_perl\n>    .\n>>\n>\n> Sam\n>\n> On Wed, Feb 9, 2011 at 10:28 AM, Tom Lane <[email protected]> wrote:\n>> Sam Stearns <[email protected]> writes:\n>>> I have a SELECT query that runs no problem standalone but when running\n>>> within a perl script it intermittently core dumps.  Random, no pattern\n>>> to the timing of the core dumps.  The perl script processes the rows\n>>> from the query, if the rows satisfy  a condition then the perl script\n>>> adds the rows to another table.  When the script works it runs for\n>>> about a minute.  If the script fails, it runs for about 5 minutes and\n>>> core dumps.  The core dump is in the perl error handling routines.  We\n>>> suspect the bug is related to how the perl postgres libraries interact\n>>> with postgres.\n>>\n>> Can you get a stack trace from one of the core dumps?\n>>\n>> Also, exactly which perl version are you using, and with what build\n>> options?  
(\"perl -V\" output would be a good answer here.)\n>>\n>> BTW, this seems pretty off-topic for pgsql-performance.\n>>\n>>                        regards, tom lane\n>>\n>\n", "msg_date": "Wed, 9 Feb 2011 10:37:50 +1030", "msg_from": "Sam Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Query Core Dumping" }, { "msg_contents": "Thanks, Tom. Forwarded from pgsl-performance.\n\nSegmentation Fault - core dumped\n\n===================================\n> pollsys(0x080475B8, 1, 0x00000000, 0x00000000) = 1\n> recv(4, \" i t z _ l i s a @ h o t\".., 16005, 0) = 8192\n> brk(0x095FBCF0) = 0\n> brk(0x095FDCF0) = 0\n> pollsys(0x080475B8, 1, 0x00000000, 0x00000000) = 1\n> recv(4, \"\\0\\0\\007 U L L - N S W\\0\".., 16278, 0) = 3404\n> brk(0x095FDCF0) = 0\n> brk(0x095FFCF0) = 0\n> Incurred fault #6, FLTBOUNDS %pc = 0x080BF4EF\n> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000168\n> Received signal #11, SIGSEGV [default]\n> siginfo: SIGSEGV SEGV_MAPERR addr=0x00000168\n>\n===================================\n\n>From gdb:\n\n#0 0x080bf4ef in Perl_sv_vcatpvfn ()\n#1 0x080bdb30 in Perl_sv_vsetpvfn ()\n#2 0x080a1363 in Perl_vmess ()\n#3 0x080a1d06 in Perl_vwarn ()\n#4 0x080a1fde in Perl_warn ()\n#5 0xfe7f9d58 in pg_warn () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#6 0xfe7c704e in defaultNoticeReceiver () from /usr/local/lib/libpq.so.4\n#7 0xfe7cf0bc in pqGetErrorNotice3 () from /usr/local/lib/libpq.so.4\n#8 0xfe7cf67f in pqParseInput3 () from /usr/local/lib/libpq.so.4\n#9 0xfe7c8398 in parseInput () from /usr/local/lib/libpq.so.4\n#10 0xfe7c8c63 in PQgetResult () from /usr/local/lib/libpq.so.4\n#11 0xfe7c8d5b in PQexecFinish () from /usr/local/lib/libpq.so.4\n#12 0xfe7fdcc1 in dbd_st_execute () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#13 0xfe7f512f in XS_DBD_Pg_db_selectall_arrayref () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBD/Pg/Pg.so\n#14 0xfecfb301 in XS_DBI_dispatch () from\n/usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris/auto/DBI/DBI.so\n#15 0x080b3609 in Perl_pp_entersub ()\n#16 0x080ad365 in Perl_runops_standard ()\n#17 0x08068814 in S_run_body ()\n#18 0x0806852b in perl_run ()\n#19 0x08065ae0 in main ()\n\n> perl -V\nSummary of my perl5 (revision 5 version 8 subversion 8) configuration:\n Platform:\n osname=solaris, osvers=2.10, archname=i86pc-solaris\n uname='sunos katana7 5.10 generic_118855-15 i86pc i386 i86pc '\n config_args='-de -A prepend:libswanted=db\n-Dlocincpth=/usr/local/include/db_185\n-Dloclibpth=/usr/local/lib/db_185 -Dcc=gcc\n-Dprefix=/usr/local/stow/perl-5.8.8'\n hint=recommended, useposix=true, d_sigaction=define\n usethreads=undef use5005threads=undef useithreads=undef\nusemultiplicity=undef\n useperlio=define d_sfio=undef uselargefiles=define usesocks=undef\n use64bitint=undef use64bitall=undef uselongdouble=undef\n usemymalloc=n, bincompat5005=undef\n Compiler:\n cc='gcc', ccflags ='-fno-strict-aliasing -pipe\n-Wdeclaration-after-statement -I/usr/local/include/db_185\n-D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -DPERL_USE_SAFE_PUTENV\n-DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV -DPERL_USE_SAFE_PUTENV\n-DPERL_USE_SAFE_PUTENV',\n optimize='-O',\n cppflags='-fno-strict-aliasing -pipe -Wdeclaration-after-statement\n-I/usr/local/include/db_185'\n ccversion='', gccversion='3.4.3\n(csl-sol210-3_4-branch+sol_rpath)', gccosandvers='solaris2.10'\n intsize=4, longsize=4, ptrsize=4, doublesize=8, byteorder=1234\n d_longlong=define, 
longlongsize=8, d_longdbl=define, longdblsize=12\n ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t',\nlseeksize=8\n alignbytes=4, prototype=define\n Linker and Libraries:\n ld='gcc', ldflags =' -L/usr/local/lib/db_185 '\n libpth=/usr/local/lib/db_185 /usr/lib /usr/ccs/lib /usr/local/lib\n libs=-lsocket -lnsl -ldl -lm -lc\n perllibs=-lsocket -lnsl -ldl -lm -lc\n libc=/lib/libc.so, so=so, useshrplib=false, libperl=libperl.a\n gnulibc_version=''\n Dynamic Linking:\n dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=undef, ccdlflags=' '\n cccdlflags='-fPIC', lddlflags='-G -L/usr/local/lib/db_185'\n\n\nCharacteristics of this binary (from libperl):\n Compile-time options: PERL_MALLOC_WRAP PERL_USE_SAFE_PUTENV\n USE_LARGE_FILES USE_PERLIO\n Built under solaris\n Compiled at Aug 29 2006 21:33:23\n @INC:\n /usr/local/stow/perl-5.8.8/lib/5.8.8/i86pc-solaris\n /usr/local/stow/perl-5.8.8/lib/5.8.8\n /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8/i86pc-solaris\n /usr/local/stow/perl-5.8.8/lib/site_perl/5.8.8\n /usr/local/stow/perl-5.8.8/lib/site_perl\n .\n>\n\nSam\n\nOn Wed, Feb 9, 2011 at 10:28 AM, Tom Lane <[email protected]> wrote:\n> Sam Stearns <[email protected]> writes:\n>> I have a SELECT query that runs no problem standalone but when running\n>> within a perl script it intermittently core dumps.  Random, no pattern\n>> to the timing of the core dumps.  The perl script processes the rows\n>> from the query, if the rows satisfy  a condition then the perl script\n>> adds the rows to another table.  When the script works it runs for\n>> about a minute.  If the script fails, it runs for about 5 minutes and\n>> core dumps.  The core dump is in the perl error handling routines.  We\n>> suspect the bug is related to how the perl postgres libraries interact\n>> with postgres.\n>\n> Can you get a stack trace from one of the core dumps?\n>\n> Also, exactly which perl version are you using, and with what build\n> options?  (\"perl -V\" output would be a good answer here.)\n>\n> BTW, this seems pretty off-topic for pgsql-performance.\n>\n>                        regards, tom lane\n>\n", "msg_date": "Wed, 9 Feb 2011 11:03:05 +1030", "msg_from": "Sam Stearns <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [PERFORM] Query Core Dumping" } ]
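Since the backtrace shows the crash inside the notice receiver path (pqGetErrorNotice3 -> defaultNoticeReceiver -> pg_warn), one thing that could be tried while the DBD::Pg/perl build is investigated is to stop the server from sending NOTICE-level messages to that session, so the failing code path is exercised less often. This is only a guess at a mitigation, not a fix for the underlying crash.

-- Plain SQL, so it can be issued through the existing database handle right
-- after connecting; it suppresses NOTICE messages, which are otherwise
-- delivered to the client through the notice receiver where the segfault occurs:
SET client_min_messages TO warning;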
[ { "msg_contents": "Hi, I wanted to know, if there is some configuration in order tables dont\nget blocked for too long time?\n\nHi, I wanted to know, if there is some configuration in order tables dont get blocked for too long time?", "msg_date": "Fri, 11 Feb 2011 15:35:35 -0300", "msg_from": "Cesar Arrieta <[email protected]>", "msg_from_op": true, "msg_subject": "Unblock tables" }, { "msg_contents": "On Fri, Feb 11, 2011 at 11:35 AM, Cesar Arrieta <[email protected]> wrote:\n> Hi, I wanted to know, if there is some configuration in order tables dont\n> get blocked for too long time?\n\nBlocked by what? And how exactly are they blocked? What's a long\ntime? I think we need more explanation of the problem before we can\ncome up with a possible solution.\n", "msg_date": "Fri, 11 Feb 2011 12:20:45 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unblock tables" } ]
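To make the question concrete: there is no single setting that forcibly releases locks early. What is usually done instead is (a) bound how long any one statement can run, so its locks cannot be held indefinitely, and (b) inspect pg_locks to see who is blocking whom. A rough sketch follows; note that the pg_stat_activity column names changed in 9.2 (procpid/current_query became pid/query), and the join below only catches waits identified by transactionid.

-- Per session (or in postgresql.conf): abort statements that run, and
-- therefore hold their locks, longer than 30 seconds:
SET statement_timeout = '30s';

-- Show blocked sessions and the queries blocking them (pre-9.2 column names):
SELECT bl.pid           AS blocked_pid,
       a.current_query  AS blocked_query,
       kl.pid           AS blocking_pid,
       ka.current_query AS blocking_query
FROM pg_locks bl
JOIN pg_stat_activity a  ON a.procpid = bl.pid
JOIN pg_locks kl         ON kl.transactionid = bl.transactionid
                        AND kl.pid <> bl.pid
JOIN pg_stat_activity ka ON ka.procpid = kl.pid
WHERE NOT bl.granted;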
[ { "msg_contents": "Hello,\n\nI got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will\nbe used solely by PostgresQL database and I am trying to choose the best\nRAID level for it.\n\nThe most priority is for read performance since we operate large data sets\n(tables, indexes) and we do lots of searches/scans, joins and nested\nqueries. With the old disks that we have now the most slowdowns happen on\nSELECTs.\n\nFault tolerance is less important, it can be 1 or 2 disks.\n\nSpace is the least important factor. Even 1T will be enough.\n\nWhich RAID level would you recommend in this situation. The current options\nare 60, 50 and 10, but probably other options can be even better.\n\nThank you!\n\nHello,I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will be used solely by PostgresQL database and I am trying to choose the best RAID level for it.The most priority is for read performance since we operate large data sets (tables, indexes) and we do lots of searches/scans, joins and nested queries. With the old disks that we have now the most slowdowns happen on SELECTs.\nFault tolerance is less important, it can be 1 or 2 disks.Space is the least important factor. Even 1T will be enough.Which RAID level would you recommend in this situation. The current options are 60, 50 and 10, but probably other options can be even better.\nThank you!", "msg_date": "Sun, 13 Feb 2011 22:12:06 +0200", "msg_from": "sergey <[email protected]>", "msg_from_op": true, "msg_subject": "choosing the right RAID level for PostgresQL database" }, { "msg_contents": "On Sun, Feb 13, 2011 at 1:12 PM, sergey <[email protected]> wrote:\n> Hello,\n>\n> I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will\n> be used solely by PostgresQL database and I am trying to choose the best\n> RAID level for it.\n>\n> The most priority is for read performance since we operate large data sets\n> (tables, indexes) and we do lots of searches/scans, joins and nested\n> queries. With the old disks that we have now the most slowdowns happen on\n> SELECTs.\n>\n> Fault tolerance is less important, it can be 1 or 2 disks.\n>\n> Space is the least important factor. Even 1T will be enough.\n>\n> Which RAID level would you recommend in this situation. The current options\n> are 60, 50 and 10, but probably other options can be even better.\n\nUnless testing shows some other level is better, RAID-10 is usually\nthe best. with software RAID-10 and 24 disks I can flood a 4 channel\nSAS cable with sequential transfers quite easily, and for random\naccess it's very good as well, allowing me to reach about 5 to 6k tps\nwith a large pgbench db (-i -s 4000) ~ 40Gig\n", "msg_date": "Sun, 13 Feb 2011 15:54:28 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right RAID level for PostgresQL database" }, { "msg_contents": "On Sun, Feb 13, 2011 at 3:54 PM, Scott Marlowe <[email protected]> wrote:\n> On Sun, Feb 13, 2011 at 1:12 PM, sergey <[email protected]> wrote:\n>> Hello,\n>>\n>> I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will\n>> be used solely by PostgresQL database and I am trying to choose the best\n>> RAID level for it.\n>>\n>> The most priority is for read performance since we operate large data sets\n>> (tables, indexes) and we do lots of searches/scans, joins and nested\n>> queries. 
With the old disks that we have now the most slowdowns happen on\n>> SELECTs.\n>>\n>> Fault tolerance is less important, it can be 1 or 2 disks.\n>>\n>> Space is the least important factor. Even 1T will be enough.\n>>\n>> Which RAID level would you recommend in this situation. The current options\n>> are 60, 50 and 10, but probably other options can be even better.\n>\n> Unless testing shows some other level is better, RAID-10 is usually\n> the best.  with software RAID-10 and 24 disks I can flood a 4 channel\n> SAS cable with sequential transfers quite easily, and for random\n> access it's very good as well, allowing me to reach about 5 to 6k tps\n> with a large pgbench db (-i -s 4000) ~ 40Gig\n\nAlso, keep in mind that even if RAID5,6,50,60 are faster when not\ndegraded, if they are degraded they will usually be quite a bit slower\nthan RAID-10 with a missing drive.\n", "msg_date": "Sun, 13 Feb 2011 15:59:25 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right RAID level for PostgresQL database" }, { "msg_contents": "sergey wrote:\n> I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It \n> will be used solely by PostgresQL database and I am trying to choose \n> the best RAID level for it.\n> ..\n> Space is the least important factor. Even 1T will be enough.\n\nUse RAID10, measure the speed of the whole array using the bonnie++ ZCAV \ntool, and only use the fastest part of each array to store the important \nstuff. You will improve worst-case performance in both sequential reads \nand seek time that way. Drives are nearly twice as fast at their \nbeginning as they are at the end, and with only 8 drives you should be \nable to setup a RAID10 array with all the fast parts aligned.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Sun, 13 Feb 2011 18:23:19 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right RAID level for PostgresQL database" }, { "msg_contents": "For any database, anywhere, the answer is pretty much always RAID-10.\n\nThe only time you would do anything else is for odd special cases.\n\nCheers\nDave\n\nOn Sun, Feb 13, 2011 at 2:12 PM, sergey <[email protected]> wrote:\n\n> Hello,\n>\n> I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will\n> be used solely by PostgresQL database and I am trying to choose the best\n> RAID level for it.\n>\n> The most priority is for read performance since we operate large data sets\n> (tables, indexes) and we do lots of searches/scans, joins and nested\n> queries. With the old disks that we have now the most slowdowns happen on\n> SELECTs.\n>\n> Fault tolerance is less important, it can be 1 or 2 disks.\n>\n> Space is the least important factor. Even 1T will be enough.\n>\n> Which RAID level would you recommend in this situation. The current options\n> are 60, 50 and 10, but probably other options can be even better.\n>\n> Thank you!\n>\n>\n\nFor any database, anywhere, the answer is pretty much always RAID-10.The only time you would do anything else is for odd special cases.CheersDaveOn Sun, Feb 13, 2011 at 2:12 PM, sergey <[email protected]> wrote:\nHello,I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). 
It will be used solely by PostgresQL database and I am trying to choose the best RAID level for it.\nThe most priority is for read performance since we operate large data sets (tables, indexes) and we do lots of searches/scans, joins and nested queries. With the old disks that we have now the most slowdowns happen on SELECTs.\nFault tolerance is less important, it can be 1 or 2 disks.Space is the least important factor. Even 1T will be enough.Which RAID level would you recommend in this situation. The current options are 60, 50 and 10, but probably other options can be even better.\nThank you!", "msg_date": "Sun, 13 Feb 2011 19:54:11 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: choosing the right RAID level for PostgresQL database" }, { "msg_contents": "On Sun, 13 Feb 2011, Dave Crooke wrote:\n\n> For any database, anywhere, the answer is pretty much always RAID-10.\n>\n> The only time you would do anything else is for odd special cases.\n\nthere are two situations where you would opt for something other than \nRAID-10\n\n1. if you need the space that raid 6 gives you compared to raid 10 you may \nnot have much choice\n\n2. if you do almost no updates to the disk during the time you are doing \nthe reads then raid 6 can be at least as fast as raid 10 in non-degraded \nmode (it could be faster if you are able to use faster parts of the disks \nin raid 6 than you could in raid 10). degraded mode suffers more, but you \ncan tolerate any 2 drives failing rather than just any 1 drive failing for \nraid 10 (the wrong two drives failing can kill a raid 10, while if the \nright drives fail you can loose a lot more drives in raid 10)\n\nwhere raid 6 is significantly slower than raid 10 is when you are doing \nsmall random writes. Also many the performance variation between raid \ncontrollers will be much higher with raid 6 than with raid 10\n\nDavid Lang\n\n> Cheers\n> Dave\n>\n> On Sun, Feb 13, 2011 at 2:12 PM, sergey <[email protected]> wrote:\n>\n>> Hello,\n>>\n>> I got a disk array appliance of 8 disks 1T each (UltraStor RS8IP4). It will\n>> be used solely by PostgresQL database and I am trying to choose the best\n>> RAID level for it.\n>>\n>> The most priority is for read performance since we operate large data sets\n>> (tables, indexes) and we do lots of searches/scans, joins and nested\n>> queries. With the old disks that we have now the most slowdowns happen on\n>> SELECTs.\n>>\n>> Fault tolerance is less important, it can be 1 or 2 disks.\n>>\n>> Space is the least important factor. Even 1T will be enough.\n>>\n>> Which RAID level would you recommend in this situation. The current options\n>> are 60, 50 and 10, but probably other options can be even better.\n>>\n>> Thank you!\n>>\n>>\n>\n", "msg_date": "Mon, 14 Feb 2011 00:02:39 -0800 (PST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: choosing the right RAID level for PostgresQL\n database" } ]
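A small follow-up to Greg's point about keeping the important data on the fastest part of the array: once the fast outer region has been partitioned and mounted separately, tablespaces are the PostgreSQL-side mechanism for steering the hot tables and indexes onto it. A sketch only; the mount point and object names below are placeholders.

-- Assumes a filesystem on the fast part of the RAID-10 array is mounted at
-- /fast and is writable by the postgres user (both hypothetical here):
CREATE TABLESPACE fast_space LOCATION '/fast/pg_tblspc';
ALTER TABLE hot_table SET TABLESPACE fast_space;
ALTER INDEX hot_table_idx SET TABLESPACE fast_space;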
[ { "msg_contents": "Hi\n\nMy question is: Was there any major optimizer change between 8.3.10 to 8.3.14?\n\nI'm getting a difference in explain plans that I need to account for.\n\nWe are running production pg8.3.10, and are considering upgrading to 8.4.x (maybe 9.0), because we expected to benefit from some of the performance fixes of 8.4, in particular the improved use of the posix fadvise on bitmap index scans, mentioned in the 8.4.0 release notes.\n\nSo, I installed the latest 8.3.14 and did a comparison query between the test machine and prod 8.3.10, to establish a machine power difference.\nTo do this, I am running a query across two pg 8.3.x installs - prod 8.3.10 and new 8.3.14.\nThe database used as the test in each instance is a new database, with an identical data import on both.\nThe 8.3.10 prod machine is a faster cpu ( prod 8.3.10: 3ghz intel E5700, 8.3.14: 2.2ghz intel E5345 (and less ram)).\nThe memory settings (shared_buffers, effective_cache_size, work_mem, maintenance_work_mem) are equal.\nThe results are against repeated queries, so there is no I/O component in the comparison - it is simply cpu and memory.\n\nSo, I expected the query response on 8.3.14 to be slower, due to being on a less powerful machine.\nHowever, not so: I am actually getting a faster result on the 8.3.14 installation (in spite of the machine being less powerful).\n\nLooking at the explain plan, something changed.\n\nFor some reason, the \"index scan\" and \"index cond\" ops used by 8.3.10 are replaced by a \"bitmap index scan\" and \"index cond\" in the 8.3.14.\nI'm pretty sure this is giving me the better result in 8.3.14.\n(in spite of the reduced machine power).\n\n\nObviously this result is quite unexpected and I am trying to work out why.\n(The only other mention that I have seen of bitmap index scan improvements was in the 8.4.0 release notes).\n\nSo, I am looking for information as to why this change occurred.\nI reckon either it is a real version difference between 8.3.10 and 8.3.14, or else a difference in configuration.\n\nDoes anyone have any comments?\n\n\n\nThe 8.3.10 plan is:\n\nexplain select * from view_v1 where action_date between '2010-10-01' and '2010-12-08'\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------\nHash Join (cost=755.53..546874.02 rows=3362295 width=99)\n Hash Cond: (gp.ql2_id = dt.ql2_id)\n -> Index Scan using gpn_nk_1 on data_stuff_new gp (cost=0.00..495732.14 rows=4465137 width=40)\n Index Cond: ((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))\n Filter: (action_hour = ANY ('{8,10,11,12,13,14,15,16}'::integer[]))\n -> Hash (cost=720.80..720.80 rows=2779 width=67)\n -> Hash Join (cost=561.38..720.80 rows=2779 width=67)\n Hash Cond: (dtxgm.ql2_id = dt.ql2_id)\n -> Seq Scan on data_thindt_xref_group_membership dtxgm (cost=0.00..93.41 rows=2779 width=10)\n Filter: (org_id = 1288539986)\n -> Hash (cost=451.17..451.17 rows=8817 width=57)\n -> Seq Scan on data_thing dt (cost=0.00..451.17 rows=8817 width=57)\n(12 rows)\n\n\nThe plan on 8.3.14 is:\n\nexplain select * from view_v1 where action_date between '2010-10-01' and '2010-12-08'\n\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------\nHash Join (cost=190151.42..684420.67 rows=3403329 width=99)\n Hash Cond: (gp.ql2_id = dt.ql2_id)\n -> Bitmap Heap Scan on data_stuff_new gp (cost=189395.38..633046.20 rows=4471358 width=40)\n Recheck Cond: 
((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))\n Filter: (action_hour = ANY ('{8,10,11,12,13,14,15,16}'::integer[]))\n -> Bitmap Index Scan on gpn_nk_1 (cost=0.00..188277.54 rows=7090513 width=0)\n Index Cond: ((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))\n -> Hash (cost=721.13..721.13 rows=2793 width=67)\n -> Hash Join (cost=561.38..721.13 rows=2793 width=67)\n Hash Cond: (dtxgm.ql2_id = dt.ql2_id)\n -> Seq Scan on data_thindt_xref_group_membership dtxgm (cost=0.00..93.41 rows=2793 width=10)\n Filter: (org_id = 1288539986)\n -> Hash (cost=451.17..451.17 rows=8817 width=57)\n -> Seq Scan on data_thing dt (cost=0.00..451.17 rows=8817 width=57)\n(14 rows)\n\n\n\n\nHi My question is:         Was there any major optimizer change between 8.3.10 to 8.3.14? I’m getting a difference in explain plans that I need to account for. We are running production pg8.3.10, and are considering upgrading to 8.4.x (maybe 9.0), because we expected to benefit from some of the performance fixes of 8.4, in particular the improved use of the posix fadvise on bitmap index scans, mentioned in the 8.4.0 release notes. So, I installed the latest 8.3.14 and did a comparison query between the test machine and prod 8.3.10, to establish a machine power difference.To do this, I am running a query across two pg 8.3.x installs – prod 8.3.10 and new 8.3.14.The database used as the test in each instance is a new database, with an identical data import on both.The 8.3.10 prod machine is a faster cpu ( prod 8.3.10: 3ghz intel E5700, 8.3.14: 2.2ghz intel E5345 (and less ram)).The memory settings (shared_buffers, effective_cache_size, work_mem, maintenance_work_mem) are equal.The results are against repeated queries, so there is no I/O component in the comparison – it is simply cpu and memory. So, I expected the query response on  8.3.14 to be slower, due to being on a less powerful machine.However, not so: I am actually getting a faster result on the 8.3.14 installation (in spite of the machine being less powerful). Looking at the explain plan, something changed. For some reason, the “index scan” and “index cond” ops used by 8.3.10 are replaced by a “bitmap index scan” and “index cond” in the 8.3.14.I’m pretty sure this is giving me the better result in 8.3.14.(in spite of the reduced machine power).   Obviously this result is quite unexpected and I am trying to work out why.(The only other mention that I have seen of bitmap index scan improvements was in the 8.4.0 release notes). So, I am looking for information as to why this change occurred.I reckon either it is a real version difference between 8.3.10 and 8.3.14, or else a difference in configuration. Does anyone have any comments?   
The 8.3.10 plan is: explain select * from view_v1 where action_date between '2010-10-01' and '2010-12-08'                                                  QUERY PLAN-------------------------------------------------------------------------------------------------------------- Hash Join  (cost=755.53..546874.02 rows=3362295 width=99)   Hash Cond: (gp.ql2_id = dt.ql2_id)   ->  Index Scan using gpn_nk_1 on data_stuff_new gp  (cost=0.00..495732.14 rows=4465137 width=40)         Index Cond: ((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))         Filter: (action_hour = ANY ('{8,10,11,12,13,14,15,16}'::integer[]))   ->  Hash  (cost=720.80..720.80 rows=2779 width=67)         ->  Hash Join  (cost=561.38..720.80 rows=2779 width=67)               Hash Cond: (dtxgm.ql2_id = dt.ql2_id)               ->  Seq Scan on data_thindt_xref_group_membership dtxgm  (cost=0.00..93.41 rows=2779 width=10)                     Filter: (org_id = 1288539986)               ->  Hash  (cost=451.17..451.17 rows=8817 width=57)                     ->  Seq Scan on data_thing dt  (cost=0.00..451.17 rows=8817 width=57)(12 rows)  The plan on 8.3.14 is: explain select * from view_v1 where action_date between '2010-10-01' and '2010-12-08'                                                    QUERY PLAN----------------------------------------------------------------------------------------------------------------- Hash Join  (cost=190151.42..684420.67 rows=3403329 width=99)   Hash Cond: (gp.ql2_id = dt.ql2_id)   ->  Bitmap Heap Scan on data_stuff_new gp  (cost=189395.38..633046.20 rows=4471358 width=40)         Recheck Cond: ((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))         Filter: (action_hour = ANY ('{8,10,11,12,13,14,15,16}'::integer[]))         ->  Bitmap Index Scan on gpn_nk_1  (cost=0.00..188277.54 rows=7090513 width=0)               Index Cond: ((action_date >= '2010-10-01'::date) AND (action_date <= '2010-12-08'::date))   ->  Hash  (cost=721.13..721.13 rows=2793 width=67)         ->  Hash Join  (cost=561.38..721.13 rows=2793 width=67)               Hash Cond: (dtxgm.ql2_id = dt.ql2_id)               ->  Seq Scan on data_thindt_xref_group_membership dtxgm  (cost=0.00..93.41 rows=2793 width=10)                     Filter: (org_id = 1288539986)               ->  Hash  (cost=451.17..451.17 rows=8817 width=57)                     ->  Seq Scan on data_thing dt  (cost=0.00..451.17 rows=8817 width=57)(14 rows)", "msg_date": "Sun, 13 Feb 2011 17:29:51 -0800", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "comparison of 8.3.10 to 8.3.14 reveals unexpected difference in\n\texplain plan" }, { "msg_contents": "If you diff the postgresql.conf files for both installs, what's different?\n", "msg_date": "Sun, 13 Feb 2011 19:12:36 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: comparison of 8.3.10 to 8.3.14 reveals unexpected\n\tdifference in explain plan" }, { "msg_contents": "Mark Rostron wrote:\n>\n> Was there any major optimizer change between 8.3.10 to 8.3.14? \n>\n> I'm getting a difference in explain plans that I need to account for.\n>\n\nThere were some major changes in terms of how hashing is used for some \ntypes of query plans. And one of the database parameters, \ndefault_statistics_target, increased from 10 to 100 between those two \nversions. You can check what setting you have on each by doing:\n\nshow default_statistics_target;\n\n From within psql. 
It's possible the 8.3 optimizer is just getting \nlucky running without many statistics, and collecting more of them is \nmaking things worse. It's also possible you're running into a situation \nwhere one of the new hash approaches in 8.4 just isn't working out well \nfor you.\n\nIt would be easier to suggest what might be wrong if you included \n\"EXPLAIN ANALYZE\" output instead of just EXPLAIN. It's not obvious \nwhether 8.3 or 8.4 is estimating things better.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nMark Rostron wrote:\n\n\n\n\n\nWas there any major optimizer change between\n8.3.10 to 8.3.14? \nI’m getting a difference in explain plans that I\nneed to account for.\n\n\n\nThere were some major changes in terms of how hashing is used for some\ntypes of query plans.  And one of the database parameters,\ndefault_statistics_target, increased from 10 to 100 between those two\nversions.  You can check what setting you have on each by doing:\n\nshow default_statistics_target;\n\n>From within psql.  It's possible the 8.3 optimizer is just getting\nlucky running without many statistics, and collecting more of them is\nmaking things worse.  It's also possible you're running into a\nsituation where one of the new hash approaches in 8.4 just isn't\nworking out well for you.\n\nIt would be easier to suggest what might be wrong if you included\n\"EXPLAIN ANALYZE\" output instead of just EXPLAIN.  It's not obvious\nwhether 8.3 or 8.4 is estimating things better.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Sun, 13 Feb 2011 21:33:39 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: comparison of 8.3.10 to 8.3.14 reveals unexpected difference\n\tin explain plan" }, { "msg_contents": "I found the difference.\nRandom_page_cost is 1 in the production 8.3.10, I guess weighting the decision to use \"index scan\".\nThanks for the replies, gentlemen.\n\n> If you diff the postgresql.conf files for both installs, what's different?\n\nIn the list below, 8.3.10 parameter value is in the clear, (8.3.14 is in brackets)\n\nMax_fsm_pages 819200 vs (204800)\nMax_fsm_relations 4000 vs (dflt 1000)\nSynchronous_commit off vs (dflt on)\nWal_buffers 256kb vs (dflt 64kb)\nCheckpoint_segments 128 vs (dflt 3)\nRandom_page_cost 1 vs (dflt 4) #!!! Actually this is the difference in the explain plans\nConstraint_exclusion on vs (dflt off)\n.... 
a bunch of logging parameters have been set ....\nAutovacuum_freeze_max_age 900000000 vs (dflt 200000000)\nvacuum_freeze_min_age = 50000000 vs (dflt 100000000)\ndeadlock_timeout = 20s (vs dflt 1s)\nadd_missing_from = on (vs dflt off)\n\n\n\n", "msg_date": "Sun, 13 Feb 2011 20:17:01 -0800", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: comparison of 8.3.10 to 8.3.14 reveals unexpected\n\tdifference in explain plan" }, { "msg_contents": "> It would be easier to suggest what might be wrong if you included \"EXPLAIN ANALYZE\" output instead of just EXPLAIN.\n> It's not obvious whether 8.3 or 8.4 is estimating things better.\n\nThanks for reply man\nTurns out random_page_cost was set low in the 8.3.10 version - when I reset it to 4(dflt), the explain plans are the same.\nWe'll double check our other queries, and then I'll see if I can reset it to dflt for the database.\n\n\n\n> It would be easier to suggest what might be wrong if you included \"EXPLAIN ANALYZE\" output instead of just EXPLAIN. > It's not obvious whether 8.3 or 8.4 is estimating things better.Thanks for reply manTurns out random_page_cost was set low in the 8.3.10 version – when I reset it to 4(dflt), the explain plans are the same.We’ll double check our other queries, and then I’ll see if I can reset it to dflt for the database.", "msg_date": "Sun, 13 Feb 2011 20:40:20 -0800", "msg_from": "Mark Rostron <[email protected]>", "msg_from_op": true, "msg_subject": "Re: comparison of 8.3.10 to 8.3.14 reveals unexpected\n\tdifference in explain plan" } ]
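For anyone chasing a similar plan difference: the quickest way to confirm that a planner cost setting, rather than the server version, is responsible is to flip it per session and re-run EXPLAIN, and once the preferred value is settled it can be pinned per database instead of editing postgresql.conf. The query is the one from this thread; the database name below is a placeholder.

SHOW random_page_cost;
EXPLAIN SELECT * FROM view_v1
 WHERE action_date BETWEEN '2010-10-01' AND '2010-12-08';

SET random_page_cost = 4;   -- the 8.3 default
EXPLAIN SELECT * FROM view_v1
 WHERE action_date BETWEEN '2010-10-01' AND '2010-12-08';
RESET random_page_cost;

-- Once the right value is known, apply it to just this database:
ALTER DATABASE mydb SET random_page_cost = 4;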
[ { "msg_contents": "Hi,\n\nHow can we boost performance of queries containing pattern matching\ncharacters? In my case, we're using a percent sign (%) that matches any\nstring of zero or more characters.\n\nQUERY: DELETE FROM MYTABLE WHERE EMAIL ILIKE '%domain.com%'\n\nEMAIL column is VARCHAR(256).\n\nAs it is clear from the above query, email is matched \"partially and\ncase-insensitively\", which my application requirement demands. \n\nIn case, if it were a full match, I could easily define a functional INDEX\non EMAIL column (lower(EMAIL)) and I could rewrite my DELETE where criteria\nlike lower(EMAIL) = '[email protected]'.\n\nMYTABLE currently contains 2 million records and grows consistently.\n\nRegards,\nGnanam\n\n", "msg_date": "Mon, 14 Feb 2011 12:29:04 +0530", "msg_from": "\"Gnanakumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": "On 14/02/11 06:59, Gnanakumar wrote:\n>\n> How can we boost performance of queries containing pattern matching\n> characters?\n\n> QUERY: DELETE FROM MYTABLE WHERE EMAIL ILIKE '%domain.com%'\n\n> As it is clear from the above query, email is matched \"partially and\n> case-insensitively\", which my application requirement demands.\n\nWell, for that exact pattern you're not going to find an index that's \nmuch help. Do you really need something so wide-ranging though? The \nabove will match all of the following:\n\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\[email protected]\n\nIs that really what you are after? Or, did you just want to match:\n [email protected]\n [email protected]\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Feb 2011 07:18:42 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "> Is that really what you are after? Or, did you just want to match:\n> [email protected]\n> [email protected]\n\nI understand that because I've (%) at the beginning and end, it's going to\nmatch unrelated domains, etc., which as you said rightly, it is\nwide-ranging. But my point here is that how can I improve performance of\nthe queries containing pattern matching characters.\n\n", "msg_date": "Mon, 14 Feb 2011 12:58:23 +0530", "msg_from": "\"Gnanakumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": ">How can we boost performance of queries containing pattern matching \n>characters? In my case, we're using a percent sign (%) that matches any\nstring of zero or more characters.\n>\n> QUERY: DELETE FROM MYTABLE WHERE EMAIL ILIKE '%domain.com%'\n>\n> EMAIL column is VARCHAR(256).\n> \n> As it is clear from the above query, email is matched \"partially and\ncase-insensitively\", which my application requirement demands. 
\n> \n> In case, if it were a full match, I could easily define a functional \n> INDEX on EMAIL column (lower(EMAIL)) and I could rewrite my DELETE where\ncriteria like lower(EMAIL) = '[email protected]'.\n> \n> MYTABLE currently contains 2 million records and grows consistently.\n\nI had almost the same problem.\nTo resolve it, I created my own text search parser (myftscfg) which divides\ntext in column into three letters parts, for example:\n\[email protected] is divided to som, ome,mee,eem,ema,mai,ail,il@,\nl@d,@do,dom,oma,mai,ain,in.,n.c,.co,com\n\nThere should be also index on email column:\n\nCREATE INDEX \"email _fts\" on mytable using gin\n(to_tsvector('myftscfg'::regconfig, email))\n\nEvery query like email ilike '%domain.com%' should be rewrited to:\n\nWHERE\nto_tsvector('myftscfg',email) @@ to_tsquery('dom') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('oma') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('mai') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('ain') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('in.') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('n.c') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('.co') AND\nto_tsvector('myftscfg',email) @@ to_tsquery('com') AND email ILIKE\n'%domain.com%';\n\nIndex is reducing number of records and clause email ILIKE '%domain.com%' is\nselecting only valid records.\n\nI didn't found better solution.\n\n-------------------------------------------\nArtur Zajac\n\n", "msg_date": "Mon, 14 Feb 2011 08:38:48 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": "On 14/02/11 07:28, Gnanakumar wrote:\n>> Is that really what you are after? Or, did you just want to match:\n>> [email protected]\n>> [email protected]\n>\n> I understand that because I've (%) at the beginning and end, it's going to\n> match unrelated domains, etc., which as you said rightly, it is\n> wide-ranging. But my point here is that how can I improve performance of\n> the queries containing pattern matching characters.\n\nIf you really need to match all those options, you can't use an index. A \nsubstring-matching index would need to have multiple entries per \ncharacter per value (since it doesn't know what you will search for). \nThe index-size becomes unmanageable very quickly.\n\nThat's why I asked what you really wanted to match.\n\nSo, I'll ask again: do you really want to match all of those options?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Feb 2011 07:39:33 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "> If you really need to match all those options, you can't use an index. A \n> substring-matching index would need to have multiple entries per \n> character per value (since it doesn't know what you will search for). \n> The index-size becomes unmanageable very quickly.\n\n> That's why I asked what you really wanted to match.\nTo be more specific, in fact, our current application allows to delete\nemail(s) with a minimum of 3 characters. 
There is a note/warning also given\nfor application Users' before deleting, explaining the implication of this\ndelete action (partial & case-insensitive, and it could be wide-ranging\ntoo).\n\n> So, I'll ask again: do you really want to match all of those options?\nYes, as explained above, I want to match all those.\n\n", "msg_date": "Mon, 14 Feb 2011 13:16:07 +0530", "msg_from": "\"Gnanakumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": "On 14/02/11 07:38, Artur Zajďż˝c wrote:\n> I had almost the same problem.\n> To resolve it, I created my own text search parser (myftscfg) which divides\n> text in column into three letters parts, for example:\n>\n> [email protected] is divided to som, ome,mee,eem,ema,mai,ail,il@,\n> l@d,@do,dom,oma,mai,ain,in.,n.c,.co,com\n>\n> There should be also index on email column:\n>\n> CREATE INDEX \"email _fts\" on mytable using gin\n> (to_tsvector('myftscfg'::regconfig, email))\n>\n> Every query like email ilike '%domain.com%' should be rewrited to:\n>\n> WHERE\n> to_tsvector('myftscfg',email) @@ to_tsquery('dom') AND\n> to_tsvector('myftscfg',email) @@ to_tsquery('oma') AND\n> to_tsvector('myftscfg',email) @@ to_tsquery('mai') AND\n...\n\nLooks like you've almost re-invented the trigram module:\n http://www.postgresql.org/docs/9.0/static/pgtrgm.html\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Feb 2011 07:49:54 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "On 14/02/11 07:46, Gnanakumar wrote:\n>> If you really need to match all those options, you can't use an index. A\n>> substring-matching index would need to have multiple entries per\n>> character per value (since it doesn't know what you will search for).\n>> The index-size becomes unmanageable very quickly.\n>\n>> That's why I asked what you really wanted to match.\n> To be more specific, in fact, our current application allows to delete\n> email(s) with a minimum of 3 characters. There is a note/warning also given\n> for application Users' before deleting, explaining the implication of this\n> delete action (partial& case-insensitive, and it could be wide-ranging\n> too).\n>\n>> So, I'll ask again: do you really want to match all of those options?\n> Yes, as explained above, I want to match all those.\n\nThen you can't use a simple index. If you did use an index it would \nprobably be much slower for \"com\" or \"yah\" or \"gma\" and so on.\n\nThe closest you can do is something like Artur's option (or the pg_trgm \nmodule - handy since you are looking at 3-chars and up) to select likely \nmatches combined with a separate search on '%domain.com%' to confirm \nthat fact.\n\nP.S. - I'd be inclined to just match the central domain parts, so for \n\"[email protected]\" you would index \"europe\" and \"megacorp\" and \nonly allow matching on the start of each string. 
Of course if your \napplication spec says you need to match on \"p.c\" too then that's what \nyou have to do.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Mon, 14 Feb 2011 07:56:56 +0000", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "> The closest you can do is something like Artur's option (or the pg_trgm \n> module - handy since you are looking at 3-chars and up) to select likely \n> matches combined with a separate search on '%domain.com%' to confirm \n> that fact.\n\nThanks for your suggestion. Our production server is currently running\nPostgreSQL v8.2.3. I think pg_trgm contrib module is not available for 8.2\nseries. \n\nAlso, I read about WildSpeed - fast wildcard search for LIKE operator. What\nis your opinion on that?\nhttp://www.sai.msu.su/~megera/wiki/wildspeed\n\n", "msg_date": "Mon, 14 Feb 2011 13:32:14 +0530", "msg_from": "\"Gnanakumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": "> Looks like you've almost re-invented the trigram module:\n> http://www.postgresql.org/docs/9.0/static/pgtrgm.html\n\nI didn't know about this module.\nIdea to use three letters strings and use Full Text Search is the same, but\nthe rest is not.\n\nIs the idea to use similarity for this problem is really correct? How should\nbe query constructed to return really all records matching ILIKE criteria?\n\n\n-------------------------------------------\nArtur Zajac\n\n", "msg_date": "Mon, 14 Feb 2011 09:09:55 +0100", "msg_from": "=?iso-8859-2?Q?Artur_Zaj=B1c?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" }, { "msg_contents": "On 02/14/2011 12:59 AM, Gnanakumar wrote:\n\n> QUERY: DELETE FROM MYTABLE WHERE EMAIL ILIKE '%domain.com%'\n> EMAIL column is VARCHAR(256).\n\nHonestly? You'd be better off normalizing this column and maybe hiding \nthat fact in a view if your app requires email as a single column. Split \nit like this:\n\nSo [email protected] becomes:\n\nemail_acct (user)\nemail_domain (gmail)\nemail_tld (com)\n\nThis would let you drop the first % on your like match and then \ntraditional indexes would work just fine. You could also differentiate \nbetween domains with different TLDs without using wildcards, which is \nalways faster.\n\nI might ask why you are checking email for wildcards after the TLD in \nthe first place. Is it really so common you are trying to match .com, \n.com.au, .com.bar.baz.edu, or whatever? At the very least, splitting the \naccount from the domain+tld would be beneficial, as it would remove the \nnecessity of the first wildcard, which is really what's hurting you.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Mon, 14 Feb 2011 07:55:36 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "Gnanakumar wrote:\n> Thanks for your suggestion. Our production server is currently running\n> PostgreSQL v8.2.3. 
I think pg_trgm contrib module is not available for 8.2\n> series. \n> \n\nYou're going to find that most of the useful answers here will not work \non 8.2. Full-text search was not fully integrated into the database \nuntil 8.3. Trying to run an app using it on 8.2 is going to be a \nconstant headache for you. Also, moving from 8.2 to 8.3 is just a \ngeneral performance boost in many ways.\n\n\n> Also, I read about WildSpeed - fast wildcard search for LIKE operator. What\n> is your opinion on that?\n> http://www.sai.msu.su/~megera/wiki/wildspeed\n> \n\nWildSpeed works fine if you can handle the massive disk space and \nmaintenance overhead it introduces. I consider it useful only for data \nsets that are almost static, where you can afford to build its large \nindex structure once and then use it to accelerate reads continuously, \nwith minimal updates.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Mon, 14 Feb 2011 10:16:27 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern\n\tmatching characters" }, { "msg_contents": "\nOn Feb 14, 2011, at 12:09 AM, Artur Zając wrote:\n\n>> Looks like you've almost re-invented the trigram module:\n>> http://www.postgresql.org/docs/9.0/static/pgtrgm.html\n> \n> I didn't know about this module.\n> Idea to use three letters strings and use Full Text Search is the same, but\n> the rest is not.\n> \n> Is the idea to use similarity for this problem is really correct? How should\n> be query constructed to return really all records matching ILIKE criteria?\n\nIf what you really want is the ability to select email addresses based on\nsubdomain, you might want to do this instead:\n\ncreate email_domain_idx on mytable (reverse(lower(split_part(email, '@', 2))));\n\nThen you can do things like this ...\n\ndelete from mytable where reverse(lower(split_part(email, '@', 2))) = reverse('aol.com');\n\n... to delete all aol.com users or like this ...\n\ndelete from mytable where reverse(lower(split_part(email, '@', 2))) like reverse('%.aol.com');\n\n... to delete all email addresses that are in a subdomain of \"aol.com\".\n\nYou need a reverse() function to do that. Here's one in plpgsql:\n\nCREATE OR REPLACE FUNCTION reverse(text) RETURNS text AS '\nDECLARE\n original alias for $1;\n reverse_str text;\n i int4;\nBEGIN\n reverse_str = '''';\n FOR i IN REVERSE LENGTH(original)..1 LOOP\n reverse_str = reverse_str || substr(original,i,1);\n END LOOP;\n return reverse_str;\nEND;'\nLANGUAGE 'plpgsql' IMMUTABLE;\n\n(Normalizing the email address so that you store local part and domain part separately is even better, but an index on the reverse of the domain is still useful for working with subdomains).\n\nCheers,\n Steve\n\n", "msg_date": "Mon, 14 Feb 2011 08:43:24 -0800", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How to boost performance of queries containing pattern matching\n\tcharacters" } ]
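For reference, a sketch of what the pg_trgm route looks like once the server is on a release that ships it comfortably, using the mytable/email names from the original post (the index name is made up). It assumes the contrib module is installed — by running its SQL script on 8.x, or CREATE EXTENSION pg_trgm on 9.1 and later:

    -- the GIN opclass needs 8.4+; older releases only have the GiST opclass gist_trgm_ops
    CREATE INDEX idx_mytable_email_trgm ON mytable USING gin (email gin_trgm_ops);

    -- on 9.1 and later the trigram index can drive ILIKE directly:
    DELETE FROM mytable WHERE email ILIKE '%domain.com%';

    -- on earlier releases only the similarity operator uses the index, so the
    -- ILIKE recheck is still needed; note that % applies a similarity threshold
    -- and can therefore miss long values, which is the caveat Artur raises above
    DELETE FROM mytable
    WHERE email % 'domain.com'
      AND email ILIKE '%domain.com%';

That threshold caveat is one more reason the upgrade Greg recommends is the cleaner path before tuning this particular query.
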
[ { "msg_contents": "\n\ncreate table a( address1 int,address2 int,address3 int)\ncreate table b(address int[3])\n\nI have created two tables. In the first table i am using many fields to\nstore 3 address. \nas well as in b table, i am using array data type to store 3 address. is\nthere any issue would face in performance related things.... which one will\ncause the performance issue.\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/performance-issue-in-the-fields-tp3384307p3384307.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Mon, 14 Feb 2011 03:33:12 -0800 (PST)", "msg_from": "dba <[email protected]>", "msg_from_op": true, "msg_subject": "performance issue in the fields." }, { "msg_contents": "Hello\n\n2011/2/14 dba <[email protected]>:\n>\n>\n> create table a( address1 int,address2 int,address3 int)\n> create table b(address int[3])\n>\n> I have created two tables. In the first table i am using many fields to\n> store 3 address.\n> as well as in b table, i am using array data type to store 3 address.  is\n> there any issue would face in performance related things.... which one will\n> cause the performance issue.\n\nyes, there is. Planner can not to work well with foreign keys stored in array.\n\nRegards\n\nPavel Stehule\n\n> --\n> View this message in context: http://postgresql.1045698.n5.nabble.com/performance-issue-in-the-fields-tp3384307p3384307.html\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Mon, 14 Feb 2011 12:36:50 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance issue in the fields." }, { "msg_contents": "On Mon, Feb 14, 2011 at 5:36 AM, Pavel Stehule <[email protected]> wrote:\n> Hello\n>\n> 2011/2/14 dba <[email protected]>:\n>>\n>>\n>> create table a( address1 int,address2 int,address3 int)\n>> create table b(address int[3])\n>>\n>> I have created two tables. In the first table i am using many fields to\n>> store 3 address.\n>> as well as in b table, i am using array data type to store 3 address.  is\n>> there any issue would face in performance related things.... which one will\n>> cause the performance issue.\n>\n> yes, there is. Planner can not to work well with foreign keys stored in array.\n\nalso the array variant is going to be bigger on disk. This is because\nas fields, all the important info about the fields is stored in the\ntable header (inside the system catalogs). But with the array,\nvarious header information specific to the array has to be stored with\neach row. This is largely due to some questionable design decisions\nmade in early array implementation that we are stuck with :-).\n\nmerlin\n", "msg_date": "Wed, 23 Feb 2011 13:59:12 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance issue in the fields." }, { "msg_contents": "\n>>> I have created two tables. In the first table i am using many fields to\n>>> store 3 address.\n>>> as well as in b table, i am using array data type to store 3 address. \n>>> is\n>>> there any issue would face in performance related things.... 
which one \n>>> will\n>>> cause the performance issue.\n\nThe array is interesting:\n- if you put a gist index on it and do searches like \"array contains \nvalues X and Y and Z\", the gist index has some special optimizations for this\n- if you might store a variable number of integers, and for some reason \nyou don't want a normalized one-line-per-value approach\n\n", "msg_date": "Thu, 24 Feb 2011 09:22:53 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance issue in the fields." } ]
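To make Pierre's "array contains" point concrete, a small sketch against the example table b from the original post (the index name is invented). A GIN index on an integer array supports containment searches out of the box, and the intarray contrib module additionally offers GiST opclasses for the same kind of query:

    CREATE INDEX idx_b_address ON b USING gin (address);

    -- rows whose address array contains both 10 and 20
    SELECT * FROM b WHERE address @> ARRAY[10, 20];

    -- rows whose address array overlaps any of these values
    SELECT * FROM b WHERE address && ARRAY[10, 30];

For plain lookups on one of three fixed address columns, though, table a with ordinary btree indexes stays simpler and smaller on disk, for the reasons Pavel and Merlin give.
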
[ { "msg_contents": "\n\nI have two identical tables. But the with of the fields are different. Need\nto know whether changing from varchar(100) to varchar(30) will increase the\nperformance, or its just the memory access.\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/Field-wise-checking-the-performance-tp3384348p3384348.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Mon, 14 Feb 2011 04:06:27 -0800 (PST)", "msg_from": "dba <[email protected]>", "msg_from_op": true, "msg_subject": "Field wise checking the performance." }, { "msg_contents": "On 14.02.2011 14:06, dba wrote:\n> I have two identical tables. But the with of the fields are different. Need\n> to know whether changing from varchar(100) to varchar(30) will increase the\n> performance, or its just the memory access.\n\nIt will make no difference. The max length is just a constraint on what \nvalues can be stored, it doesn't affect how the strings are stored. In \nboth cases, the strings are stored in a variable-length format.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 14 Feb 2011 14:30:03 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Field wise checking the performance." } ]
[ { "msg_contents": "During heavy writes times we get the checkpoint too often error, what's the real knock down effect of checkpointing too often? The documents don't really say what is wrong with checkpointing too often, does it cause block, io contention, etc, etc? From my understanding it's just IO contention, but I wanted to make sure.\r\n\r\n14.4.6. Increase checkpoint_segments\r\nTemporarily increasing the checkpoint_segments<http://www.postgresql.org/docs/8.4/static/runtime-config-wal.html#GUC-CHECKPOINT-SEGMENTS> configuration variable can also make large data loads faster. This is because loading a large amount of data into PostgreSQL will cause checkpoints to occur more often than the normal checkpoint frequency (specified by the checkpoint_timeout configuration variable). Whenever a checkpoint occurs, all dirty pages must be flushed to disk. By increasing checkpoint_segments temporarily during bulk data loads, the number of checkpoints that are required can be reduced.\r\n\r\n\r\n- John\r\n\r\n[ 2011-02-15 09:34:30.549 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog\r\n[ 2011-02-15 09:34:43.656 GMT ] :4d404b6e.47ef LOG: checkpoint complete: wrote 36135 buffers (0.4%); 0 transaction log file(s) added, 0 removed, 12 recycled; write=13.101 s, sync=0.000 s, total=13.107 s\r\n[ 2011-02-15 09:34:57.090 GMT ] :4d404b6e.47ef LOG: checkpoints are occurring too frequently (27 seconds apart)\r\n[ 2011-02-15 09:34:57.090 GMT ] :4d404b6e.47ef HINT: Consider increasing the configuration parameter \"checkpoint_segments\".\r\n[ 2011-02-15 09:34:57.090 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog\r\n[ 2011-02-15 09:35:11.492 GMT ] :4d404b6e.47ef LOG: checkpoint complete: wrote 54634 buffers (0.7%); 0 transaction log file(s) added, 0 removed, 30 recycled; write=14.290 s, sync=0.000 s, total=14.401 s\r\n[ 2011-02-15 09:35:25.496 GMT ] :4d404b6e.47ef LOG: checkpoints are occurring too frequently (28 seconds apart)\r\n[ 2011-02-15 09:35:25.496 GMT ] :4d404b6e.47ef HINT: Consider increasing the configuration parameter \"checkpoint_segments\".\r\n[ 2011-02-15 09:35:25.496 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog\r\n[ 2011-02-15 09:35:39.688 GMT ] :4d404b6e.47ef LOG: checkpoint complete: wrote 39352 buffers (0.5%); 0 transaction log file(s) added, 0 removed, 30 recycled; write=14.185 s, sync=0.000 s, total=14.192 s\r\n[ 2011-02-15 09:35:53.417 GMT ] :4d404b6e.47ef LOG: checkpoints are occurring too frequently (28 seconds apart)\r\n[ 2011-02-15 09:35:53.417 GMT ] :4d404b6e.47ef HINT: Consider increasing the configuration parameter \"checkpoint_segments\".\r\n[ 2011-02-15 09:35:53.417 GMT ] :4d404b6e.47ef LOG: checkpoint starting: xlog\r\n[ 2011-02-15 09:36:09.059 GMT ] :4d404b6e.47ef LOG: checkpoint complete: wrote 48803 buffers (0.6%); 0 transaction log file(s) added, 0 removed, 30 recycled; write=15.408 s, sync=0.000 s, total=15.641 s\r\n\r\n\r\n\r\n\r\n\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. 
Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n\n\n\n\n\n\n\n\n\nDuring\r\nheavy writes times we get the checkpoint too often error, what's the real knock\r\ndown effect of checkpointing too often?  The documents don’t really\r\nsay what is wrong with checkpointing too often, does it cause block, io\r\ncontention, etc, etc?  From my understanding it's just IO contention, but\r\nI wanted to make sure.\n \n14.4.6. Increase checkpoint_segments\nTemporarily\r\nincreasing the checkpoint_segments\r\nconfiguration variable can also make large data loads faster. This is because\r\nloading a large amount of data into PostgreSQL will cause checkpoints to occur\r\nmore often than the normal checkpoint frequency (specified by the checkpoint_timeout\r\nconfiguration variable). Whenever a checkpoint occurs, all dirty pages must be\r\nflushed to disk. By increasing checkpoint_segments temporarily during bulk data\r\nloads, the number of checkpoints that are required can be reduced. 
\n \n \n-\r\nJohn\n \n[\r\n2011-02-15 09:34:30.549 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\nstarting: xlog\n[\r\n2011-02-15 09:34:43.656 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\ncomplete: wrote 36135 buffers (0.4%); 0 transaction log file(s) added, 0\r\nremoved, 12 recycled; write=13.101 s, sync=0.000 s, total=13.107 s\n[\r\n2011-02-15 09:34:57.090 GMT ]  :4d404b6e.47ef LOG:  checkpoints are\r\noccurring too frequently (27 seconds apart)\n[\r\n2011-02-15 09:34:57.090 GMT ]  :4d404b6e.47ef HINT:  Consider\r\nincreasing the configuration parameter \"checkpoint_segments\".\n[\r\n2011-02-15 09:34:57.090 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\nstarting: xlog\n[\r\n2011-02-15 09:35:11.492 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\ncomplete: wrote 54634 buffers (0.7%); 0 transaction log file(s) added, 0\r\nremoved, 30 recycled; write=14.290 s, sync=0.000 s, total=14.401 s\n[\r\n2011-02-15 09:35:25.496 GMT ]  :4d404b6e.47ef LOG:  checkpoints are\r\noccurring too frequently (28 seconds apart)\n[\r\n2011-02-15 09:35:25.496 GMT ]  :4d404b6e.47ef HINT:  Consider\r\nincreasing the configuration parameter \"checkpoint_segments\".\n[\r\n2011-02-15 09:35:25.496 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\nstarting: xlog\n[\r\n2011-02-15 09:35:39.688 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\ncomplete: wrote 39352 buffers (0.5%); 0 transaction log file(s) added, 0\r\nremoved, 30 recycled; write=14.185 s, sync=0.000 s, total=14.192 s\n[\r\n2011-02-15 09:35:53.417 GMT ]  :4d404b6e.47ef LOG:  checkpoints are\r\noccurring too frequently (28 seconds apart)\n[\r\n2011-02-15 09:35:53.417 GMT ]  :4d404b6e.47ef HINT:  Consider\r\nincreasing the configuration parameter \"checkpoint_segments\".\n[\r\n2011-02-15 09:35:53.417 GMT ]  :4d404b6e.47ef LOG:  checkpoint starting:\r\nxlog\n[\r\n2011-02-15 09:36:09.059 GMT ]  :4d404b6e.47ef LOG:  checkpoint\r\ncomplete: wrote 48803 buffers (0.6%); 0 transaction log file(s) added, 0\r\nremoved, 30 recycled; write=15.408 s, sync=0.000 s, total=15.641 s", "msg_date": "Tue, 15 Feb 2011 06:45:28 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": true, "msg_subject": "Checkpointing question" }, { "msg_contents": "\"Strange, John W\" <[email protected]> wrote:\n \n> During heavy writes times we get the checkpoint too often error,\n> what's the real knock down effect of checkpointing too often?\n \nThe main concern is that it may cause an increase in disk writes,\npossibly to the point of causing blocking while waiting for the\ndisk.\n \nGenerally bigger checkpoint_segments settings improve performance,\nespecially as shared_buffers is increased. There are some\ncounter-examples, particularly during bulk loads, which haven't\nreally been explained:\n \nhttp://archives.postgresql.org/pgsql-hackers/2010-04/msg00848.php\n \nThat makes this an area where careful testing of your real workload\nwith different settings can be important, at least if you're trying\nto wring that last ounce of performance out of your server..\n \n-Kevin\n", "msg_date": "Tue, 15 Feb 2011 11:51:24 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Checkpointing question" } ]
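To put Kevin's advice in concrete terms: the log above shows roughly 30 segments being recycled every 27–28 seconds, so these forced "xlog" checkpoints are driven entirely by WAL volume. The settings below are only an illustration of the direction to move in, not tuned values for this workload:

    # postgresql.conf (8.4)
    checkpoint_segments = 64            # default is 3; ~30 fill in under 30 s here
    checkpoint_timeout = 15min          # let the timer, not xlog volume, trigger checkpoints
    checkpoint_completion_target = 0.9  # spread the writes across the interval
    log_checkpoints = on                # keep confirming the interval after the change

After a change like this the log lines should switch from "checkpoint starting: xlog" to "checkpoint starting: time", with the spacing approaching checkpoint_timeout, at the cost of a longer crash-recovery window.
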
[ { "msg_contents": "Hello list,\n\ndoes `postgres (PostgreSQL) 8.4.5' use the LIMIT of a query when it is run on a partitioned-table or am I doing something wrong? It looks as if postgres queries all partitions and then LIMITing the records afterwards!? This results in a long (>3 minutes) running query. What can I do to optimise this?\n\nThe query could look like this:\n\n EXPLAIN ANALYSE\n SELECT *\n FROM flexserver.unitstat\n WHERE nodeid = 'abcd'\n AND ts > '2010-01-01 00:00:00'\n AND ts < '2011-02-15 15:00:00'\n ORDER BY nodeid, ts\n LIMIT 1000;\n\nThis is the `EXPLAIN ANALYSE'-output:\n\n Limit (cost=232195.49..232197.99 rows=1000 width=194) (actual time=205846.722..205852.218 rows=1000 loops=1)\n -> Sort (cost=232195.49..232498.26 rows=121108 width=194) (actual time=205846.717..205848.684 rows=1000 loops=1)\n Sort Key: flexserver.unitstat.ts\n Sort Method: top-N heapsort Memory: 314kB\n -> Result (cost=0.00..225555.27 rows=121108 width=194) (actual time=444.969..205136.182 rows=203492 loops=1)\n -> Append (cost=0.00..225555.27 rows=121108 width=194) (actual time=444.963..204236.800 rows=203492 loops=1)\n -> Seq Scan on unitstat (cost=0.00..14.90 rows=1 width=258) (actual time=0.007..0.007 rows=0 loops=1)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone) AND ((nodeid)::text = 'abcd'::text))\n -> Bitmap Heap Scan on unitstat_y2011m01 unitstat (cost=116.47..8097.17 rows=4189 width=194) (actual time=444.949..9900.002 rows=5377 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2011m01_nodeid_gps_ts (cost=0.00..115.42 rows=4190 width=0) (actual time=426.599..426.599 rows=5377 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2011m02 unitstat (cost=52.67..3689.16 rows=1906 width=194) (actual time=73.512..3211.698 rows=796 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2011m02_nodeid_gps_ts (cost=0.00..52.20 rows=1906 width=0) (actual time=55.458..55.458 rows=796 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m02_nodeid_ts_fkey on unitstat_y2010m02 unitstat (cost=0.00..10179.11 rows=5257 width=193) (actual time=39.531..11660.741 rows=6524 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m01_nodeid_ts_fkey on unitstat_y2010m01 unitstat (cost=0.00..10324.31 rows=5358 width=193) (actual time=38.255..9808.237 rows=7128 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on unitstat_y2010m11 unitstat (cost=586.92..39314.99 rows=21965 width=195) (actual time=1417.528..26090.404 rows=24464 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m11_nodeid_gps_ts (cost=0.00..581.43 
rows=21970 width=0) (actual time=1400.898..1400.898 rows=24464 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m12 unitstat (cost=128.72..9050.29 rows=4683 width=194) (actual time=238.679..7472.936 rows=2014 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m12_nodeid_gps_ts (cost=0.00..127.55 rows=4684 width=0) (actual time=225.009..225.009 rows=2014 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m10 unitstat (cost=101.74..9686.81 rows=4987 width=194) (actual time=488.130..35826.742 rows=25279 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m10_nodeid_gps_ts (cost=0.00..100.49 rows=4988 width=0) (actual time=472.796..472.796 rows=25279 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m09 unitstat (cost=489.56..49567.74 rows=27466 width=194) (actual time=185.198..12753.315 rows=31099 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m09_nodeid_gps_ts (cost=0.00..482.69 rows=27472 width=0) (actual time=158.072..158.072 rows=31099 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m08_nodeid_ts_fkey on unitstat_y2010m08 unitstat (cost=0.00..9353.76 rows=4824 width=194) (actual time=31.351..10259.090 rows=17606 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m07_nodeid_ts_fkey on unitstat_y2010m07 unitstat (cost=0.00..8686.72 rows=4492 width=194) (actual time=41.572..9636.335 rows=9511 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on unitstat_y2010m06 unitstat (cost=311.50..32142.18 rows=17406 width=194) (actual time=113.857..12136.570 rows=17041 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m06_nodeid_gps_ts (cost=0.00..307.15 rows=17410 width=0) (actual time=91.638..91.638 rows=17041 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m05_nodeid_ts_fkey on unitstat_y2010m05 unitstat (cost=0.00..11942.82 rows=6279 width=193) (actual time=62.264..19887.675 rows=19246 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m04_nodeid_ts_fkey on unitstat_y2010m04 unitstat (cost=0.00..11840.93 rows=6194 width=193) (actual time=52.735..17302.361 rows=21936 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time 
zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m03_nodeid_ts_fkey on unitstat_y2010m03 unitstat (cost=0.00..11664.36 rows=6101 width=194) (actual time=66.613..17541.374 rows=15471 loops=1)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n Total runtime: 205855.569 ms\n\n\nRegards,\n\nKim\n", "msg_date": "Tue, 15 Feb 2011 15:23:40 +0100", "msg_from": "\"Kim A. Brandt\" <[email protected]>", "msg_from_op": true, "msg_subject": "LIMIT on partitioned-table!?" }, { "msg_contents": "On 02/15/2011 08:23 AM, Kim A. Brandt wrote:\n\n> does `postgres (PostgreSQL) 8.4.5' use the LIMIT of a query when it\n> is run on a partitioned-table or am I doing something wrong? It looks\n> as if postgres queries all partitions and then LIMITing the records\n> afterwards!? This results in a long (>3 minutes) running query. What\n> can I do to optimise this?\n\nMake sure you have constraint_exclusion set to 'on' in your config. \nAlso, what are your checks for your partitions? You've got a pretty wide \nrange in your 'ts' checks, so if you're using them as your partition \ndefinition, you're not helping yourself.\n\nThe main issue might just be that you've used an order clause. LIMIT \n1000 or not, even if it can restrict the result set based on your CHECK \ncriteria, it'll still need to select every matching row from every \nmatched partition, order the results, and chop off the first 1000.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Tue, 15 Feb 2011 08:49:37 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT on partitioned-table!?" }, { "msg_contents": "Thank you Shaun,\n\nremoving the ORDER BY worked. But I am afraid to ask this. How can I order by partition? It seams that the planner has picked a random(!?) order of partition to select from. The returned records, from the selected partition, are correctly sorted bythe index though.\n\nOn 2011-02-15 15:49, Shaun Thomas wrote:\n> On 02/15/2011 08:23 AM, Kim A. Brandt wrote:\n>\n>> does `postgres (PostgreSQL) 8.4.5' use the LIMIT of a query when it\n>> is run on a partitioned-table or am I doing something wrong? It looks\n>> as if postgres queries all partitions and then LIMITing the records\n>> afterwards!? This results in a long (>3 minutes) running query. What\n>> can I do to optimise this?\n>\n> Make sure you have constraint_exclusion set to 'on' in your config. Also, what are your checks for your partitions? You've got a pretty wide range in your 'ts' checks, so if you're using them as your partition definition, you're not helping yourself.\n\nThe parameter `constraint_exclusion' is set to `partition'. 
Postgres is on FreeBSD.\n\nMy checks (if I understand you right) are as follows:\n\n CREATE TABLE flexserver.unitstat_y2011m02\n (\n ts timestamp without time zone NOT NULL,\n nodeid character varying(10) NOT NULL,\n gps_ts timestamp without time zone NOT NULL,\n ...\n CONSTRAINT unitstat_y2011m02_ts_check CHECK (ts >= '2011-02-01 00:00:00'::timestamp without time zone AND ts < '2011-03-01 00:00:00'::timestamp without time zone)\n )\n INHERITS (flexserver.unitstat);\n\nEach partition is constrained to one month.\n\nAbout the wide range, I am aware of that. This probably has to change anyway!? So the current (and probably final solution) is to use a narrower search range. Thank you for the hint.\n\n> The main issue might just be that you've used an order clause. LIMIT 1000 or not, even if it can restrict the result set based on your CHECK criteria, it'll still need to select every matching row from every matched partition, order the results, and chop off the first 1000.\n\nThat was it. Just how can one order by partition if one would do a wide range search over multiple partitions?\n\nThe new query and EXPLAIN ANALYSE-output is:\n\n SELECT *\n FROM flexserver.unitstat\n WHERE nodeid = 'abcd'\n AND ts > '2010-01-01 00:00:00'\n AND ts < '2011-02-15 15:00:00'\n --ORDER BY nodeid, ts\n LIMIT 1000;\n\n\n Limit (cost=0.00..1862.46 rows=1000 width=194) (actual time=2.569..18.948 rows=1000 loops=1)\n -> Result (cost=0.00..225611.08 rows=121136 width=194) (actual time=2.566..15.412 rows=1000 loops=1)\n -> Append (cost=0.00..225611.08 rows=121136 width=194) (actual time=2.558..11.243 rows=1000 loops=1)\n -> Seq Scan on unitstat (cost=0.00..14.90 rows=1 width=258) (actual time=0.003..0.003 rows=0 loops=1)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone) AND ((nodeid)::text = 'abcd'::text))\n -> Bitmap Heap Scan on unitstat_y2011m01 unitstat (cost=116.47..8097.17 rows=4189 width=194) (actual time=2.550..7.701 rows=1000 loops=1)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2011m01_nodeid_gps_ts (cost=0.00..115.42 rows=4190 width=0) (actual time=1.706..1.706 rows=5377 loops=1)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2011m02 unitstat (cost=52.92..3744.97 rows=1934 width=194) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2011m02_nodeid_gps_ts (cost=0.00..52.44 rows=1935 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m02_nodeid_ts_fkey on unitstat_y2010m02 unitstat (cost=0.00..10179.11 rows=5257 width=193) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m01_nodeid_ts_fkey on unitstat_y2010m01 unitstat (cost=0.00..10324.31 rows=5358 width=193) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on unitstat_y2010m11 unitstat 
(cost=586.92..39314.99 rows=21965 width=195) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m11_nodeid_gps_ts (cost=0.00..581.43 rows=21970 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m12 unitstat (cost=128.72..9050.29 rows=4683 width=194) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m12_nodeid_gps_ts (cost=0.00..127.55 rows=4684 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m10 unitstat (cost=101.74..9686.81 rows=4987 width=194) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m10_nodeid_gps_ts (cost=0.00..100.49 rows=4988 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Bitmap Heap Scan on unitstat_y2010m09 unitstat (cost=489.56..49567.74 rows=27466 width=194) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m09_nodeid_gps_ts (cost=0.00..482.69 rows=27472 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m08_nodeid_ts_fkey on unitstat_y2010m08 unitstat (cost=0.00..9353.76 rows=4824 width=194) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m07_nodeid_ts_fkey on unitstat_y2010m07 unitstat (cost=0.00..8686.72 rows=4492 width=194) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Heap Scan on unitstat_y2010m06 unitstat (cost=311.50..32142.18 rows=17406 width=194) (never executed)\n Recheck Cond: ((nodeid)::text = 'abcd'::text)\n Filter: ((ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Bitmap Index Scan on idx_unitstat_y2010m06_nodeid_gps_ts (cost=0.00..307.15 rows=17410 width=0) (never executed)\n Index Cond: ((nodeid)::text = 'abcd'::text)\n -> Index Scan using fki_unitstat_y2010m05_nodeid_ts_fkey on unitstat_y2010m05 unitstat (cost=0.00..11942.82 rows=6279 width=193) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using fki_unitstat_y2010m04_nodeid_ts_fkey on unitstat_y2010m04 unitstat (cost=0.00..11840.93 rows=6194 width=193) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n -> Index Scan using 
fki_unitstat_y2010m03_nodeid_ts_fkey on unitstat_y2010m03 unitstat (cost=0.00..11664.36 rows=6101 width=194) (never executed)\n Index Cond: (((nodeid)::text = 'abcd'::text) AND (ts > '2010-01-01 00:00:00'::timestamp without time zone) AND (ts < '2011-02-15 15:00:00'::timestamp without time zone))\n Total runtime: 21.219 ms\n\n\nNow most partitions are not looked at (never executed). But how can one affect the order of partition (e.g. begin with the oldest)?\n\nSorry for asking the same thing thrice. I just need to understand this one. :)\n\n\nKind regards,\n\nKim\n", "msg_date": "Tue, 15 Feb 2011 20:33:27 +0100", "msg_from": "\"Kim A. Brandt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT on partitioned-table!?" }, { "msg_contents": "On Tue, Feb 15, 2011 at 21:33, Kim A. Brandt <[email protected]> wrote:\n> removing the ORDER BY worked. But I am afraid to ask this. How can I order\n> by partition? It seams that the planner has picked a random(!?) order of\n> partition to select from. The returned records, from the selected partition,\n> are correctly sorted bythe index though.\n\nIf a single query accesses more than one partition, PostgreSQL\ncurrently cannot read the values in index-sorted order. Hence with\nORDER BY and LIMIT, PostgreSQL cannot return *any* results before it\nhas read all matching rows and then sorted them. Adding a LIMIT\ndoesn't help much. Your only bet is to reduce the number of matched\nrows, or make sure that you only access a single partition.\n\nIncreasing work_mem may speed up the sort step if you're hitting the\ndisk (EXPLAIN ANALYZE VERBOSE will tell you whether that's the case).\n\nThis will change in PostgreSQL 9.1 which has a new Merge Append plan node.\n\nRegards,\nMarti\n", "msg_date": "Tue, 15 Feb 2011 23:13:50 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIMIT on partitioned-table!?" }, { "msg_contents": "Thank you Marti,\n\nI will go with the ``reduced number of matched rows'' and naturally be waiting for postgres 9.1 expectantly.\n\n\nKind regards,\n\nKim\n\n\n\nOn 2011-02-15 22:13, Marti Raudsepp wrote:\n> On Tue, Feb 15, 2011 at 21:33, Kim A. Brandt<[email protected]> wrote:\n>> removing the ORDER BY worked. But I am afraid to ask this. How can I order\n>> by partition? It seams that the planner has picked a random(!?) order of\n>> partition to select from. The returned records, from the selected partition,\n>> are correctly sorted bythe index though.\n>\n> If a single query accesses more than one partition, PostgreSQL\n> currently cannot read the values in index-sorted order. Hence with\n> ORDER BY and LIMIT, PostgreSQL cannot return *any* results before it\n> has read all matching rows and then sorted them. Adding a LIMIT\n> doesn't help much. Your only bet is to reduce the number of matched\n> rows, or make sure that you only access a single partition.\n>\n> Increasing work_mem may speed up the sort step if you're hitting the\n> disk (EXPLAIN ANALYZE VERBOSE will tell you whether that's the case).\n>\n> This will change in PostgreSQL 9.1 which has a new Merge Append plan node.\n>\n> Regards,\n> Marti\n", "msg_date": "Wed, 16 Feb 2011 08:24:06 +0100", "msg_from": "\"Kim A. Brandt\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: LIMIT on partitioned-table!?" } ]
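For completeness, a sketch of the "single partition" variant Marti mentions, i.e. the original query with the ts range narrowed so that constraint exclusion leaves only unitstat_y2011m02 (plus the empty parent):

    SELECT *
    FROM flexserver.unitstat
    WHERE nodeid = 'abcd'
      AND ts >= '2011-02-01 00:00:00'
      AND ts <  '2011-02-15 15:00:00'
    ORDER BY nodeid, ts
    LIMIT 1000;

Because only one child survives exclusion, the sort only has to handle that partition's matching rows instead of rows from every month. Querying the child table directly (FROM flexserver.unitstat_y2011m02) goes one step further and lets an index on (nodeid, ts), where one exists, satisfy the ORDER BY with no sort at all. Stepping through the months from the application, oldest range first, gives the same effect over a wider window until 9.1's Merge Append removes the need for it.
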
[ { "msg_contents": "Hi list,\n\n \n\nfirst time for me here, hope you're not dealing too severely with me regarding guidelines. Giving my best.\n\n \n\nWe are running PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit on a Supermicro SuperServer 8026B-6RF.\n\nThis version is downloaded from postgresql.org and selfcompiled, running for over a year now. The Server has 128 GB RAM and Four Intel® Xeon® X7550 with 64 logical cores.\n\nOperating System is \"Linux database1 2.6.32-bpo.5-amd64 #1 SMP Mon Dec 13 17:10:39 UTC 2010 x86_64 GNU/Linux\".\n\n \n\nThe System boots through iscsi over a Qlogic QLE4062C HBA. Pgdata and xlog is logged in over iscsi HBA too. We tried en and disabling jumbo frames. Makes no difference.\n\nWe are using a DELL Equallogic SAN Backend with SAS drives.\n\n \n\nPostgres is used as backend for a high performance website. We are using nginx with php-fastcgi and memcached.\n\n \n\nSince a few weeks we have really strange peaks on this system. User CPU is increasing up to 100% and we have lots of SELECTs running. \n\nThere is no iowait at this time, only high user cpu and we don't know where this is coming from. It seems like this is only happening under certain circumstances.\n\n \n\nWe can solve this problem by simply removing the load from the website by delivering an offline page. We let database calm down for a while and then slowly throttling users.\n\n \n\nSee ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg\n\n \n\nHas someone made similar experiences? Perhaps there is some issue between Postgres 8.4.4 and kernel 2.6.32?\n\n \n\nThank in advance\n\nThomas\n\n \n\n \n\n \n\n-- \n\nTurtle Entertainment GmbH\n\nThomas Pöhler, Manager IT Operations\n\nSiegburger Str. 189\n\n50679 Cologne\n\nGermany\n\nfon. +49 221 880449-331\n\nfax. +49 221 880449-399\n\nhttp://www.turtle-entertainment.com/\n\nhttp://www.esl.eu/\n\nhttp://www.consoles.net/\n\nManaging Director: Ralf Reichert\n\nRegister Court: Local Court Cologne, HRB 36678\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi list,\n \nfirst time for me here, hope you’re\nnot dealing too severely with me regarding guidelines. Giving my best.\n \nWe are running PostgreSQL 8.4.4 on\nx86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit\non a Supermicro SuperServer 8026B-6RF.\nThis version is downloaded from\npostgresql.org and selfcompiled, running for over a year now. The Server has\n128 GB RAM and Four Intel® Xeon® X7550 with 64 logical cores.\nOperating System is “Linux database1\n2.6.32-bpo.5-amd64 #1 SMP Mon Dec 13 17:10:39 UTC 2010 x86_64 GNU/Linux”.\n \nThe System boots through iscsi over a\nQlogic QLE4062C HBA. Pgdata and xlog is logged in over iscsi HBA too. We tried\nen and disabling jumbo frames. Makes no difference.\nWe are using a DELL Equallogic SAN Backend\nwith SAS drives.\n \nPostgres is used as  backend for a high\nperformance website. We are using nginx with php-fastcgi and memcached.\n \nSince a few weeks we have really strange\npeaks on this system. User CPU is increasing up to 100% and we have lots of\nSELECTs running. \nThere is no iowait at this time, only high\nuser cpu and we don’t know where this is coming from. It seems like this\nis only happening under certain circumstances.\n \nWe can solve this problem by simply\nremoving the load from the website by delivering an offline page. 
We let database\ncalm down for a while and then slowly throttling users.\n \nSee ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg\n \nHas someone made similar experiences? Perhaps\nthere is some issue between Postgres 8.4.4 and kernel 2.6.32?\n \nThank in advance\nThomas\n \n \n \n-- \nTurtle Entertainment GmbH\nThomas Pöhler, Manager IT Operations\nSiegburger Str. 189\n50679 Cologne\nGermany\nfon. +49 221 880449-331\nfax. +49 221 880449-399\nhttp://www.turtle-entertainment.com/\nhttp://www.esl.eu/\nhttp://www.consoles.net/\nManaging Director: Ralf Reichert\nRegister Court: Local Court Cologne, HRB\n36678", "msg_date": "Tue, 15 Feb 2011 18:19:14 +0100", "msg_from": "=?iso-8859-1?Q?Thomas_P=F6hler?= <[email protected]>", "msg_from_op": true, "msg_subject": "high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "On Tue, Feb 15, 2011 at 10:19 AM, Thomas Pöhler\n<[email protected]> wrote:\n> Since a few weeks we have really strange peaks on this system. User CPU is\n> increasing up to 100% and we have lots of SELECTs running.\n\nAre you using pooling of some kind, or do you have LOTS of connections?\n\n> There is no iowait at this time, only high user cpu and we don’t know where\n> this is coming from. It seems like this is only happening under certain\n> circumstances.\n\nrun htop and look for red. if youi've got lots of red bar on each CPU\nbut no io wait then it's waiting for memory access. Most of these\nmulti-core machines will be memory read / write speed bound. Pooling\nwill help relieve some of that memory bandwidth load, but might not be\nenough to eliminate it.\n", "msg_date": "Tue, 15 Feb 2011 11:01:53 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Thomas Pᅵhler<[email protected]> wrote:\n \n> we have lots of SELECTs running.\n \nHow many?\n \nCould you show your postgresql.conf file, with all comments removed?\n \nWhat does vmstat 1 (or similar) show at baseline and during your\nproblem episodes?\n \n-Kevin\n", "msg_date": "Tue, 15 Feb 2011 12:06:56 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\n\t problem" }, { "msg_contents": "You have also run analyze verbose, and checked to make sure you don't have a ton of bloated indexes?\r\n\r\n- check the process with strace -p PID\r\n- check the diskIO with iostat, not vmstat\r\n- run analyze verbose, and possible reindex the database, or cluster the larger tables.\r\n- dump from pg_stat_activity, and check what the largest objects are based on relpages from pg_class.\r\n- check index scans/table scans from pg_statio tables if you have track_activities on in the .conf file.\r\n\r\n- John\r\n\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Thomas Pöhler\r\nSent: 15 February 2011 17:19\r\nTo: [email protected]\r\nCc: Felix Feinhals; Verteiler_A-Team; Björn Metzdorf\r\nSubject: [PERFORM] high user cpu, massive SELECTs, no io waiting problem\r\n\r\nHi list,\r\n\r\nfirst time for me here, hope you're not dealing too severely with me regarding guidelines. Giving my best.\r\n\r\nWe are running PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit on a Supermicro SuperServer 8026B-6RF.\r\nThis version is downloaded from postgresql.org and selfcompiled, running for over a year now. 
The Server has 128 GB RAM and Four Intel® Xeon® X7550 with 64 logical cores.\r\nOperating System is \"Linux database1 2.6.32-bpo.5-amd64 #1 SMP Mon Dec 13 17:10:39 UTC 2010 x86_64 GNU/Linux\".\r\n\r\nThe System boots through iscsi over a Qlogic QLE4062C HBA. Pgdata and xlog is logged in over iscsi HBA too. We tried en and disabling jumbo frames. Makes no difference.\r\nWe are using a DELL Equallogic SAN Backend with SAS drives.\r\n\r\nPostgres is used as backend for a high performance website. We are using nginx with php-fastcgi and memcached.\r\n\r\nSince a few weeks we have really strange peaks on this system. User CPU is increasing up to 100% and we have lots of SELECTs running.\r\nThere is no iowait at this time, only high user cpu and we don't know where this is coming from. It seems like this is only happening under certain circumstances.\r\n\r\nWe can solve this problem by simply removing the load from the website by delivering an offline page. We let database calm down for a while and then slowly throttling users.\r\n\r\nSee ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg\r\n\r\nHas someone made similar experiences? Perhaps there is some issue between Postgres 8.4.4 and kernel 2.6.32?\r\n\r\nThank in advance\r\nThomas\r\n\r\n\r\n\r\n--\r\nTurtle Entertainment GmbH\r\nThomas Pöhler, Manager IT Operations\r\nSiegburger Str. 189\r\n50679 Cologne\r\nGermany\r\nfon. +49 221 880449-331\r\nfax. +49 221 880449-399\r\nhttp://www.turtle-entertainment.com/\r\nhttp://www.esl.eu/\r\nhttp://www.consoles.net/\r\nManaging Director: Ralf Reichert\r\nRegister Court: Local Court Cologne, HRB 36678\r\n\r\n\r\n\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\nYou have also run analyze verbose, and checked to make sure you don’t have a ton of bloated indexes? 
- check the process with strace –p PID- check the diskIO with iostat, not vmstat- run analyze verbose, and possible reindex the database, or cluster the larger tables.- dump from pg_stat_activity, and check what the largest objects are based on relpages from pg_class.- check index scans/table scans from pg_statio tables if you have track_activities on in the .conf file. - John From: [email protected] [mailto:[email protected]] On Behalf Of Thomas PöhlerSent: 15 February 2011 17:19To: [email protected]: Felix Feinhals; Verteiler_A-Team; Björn MetzdorfSubject: [PERFORM] high user cpu, massive SELECTs, no io waiting problem Hi list, first time for me here, hope you’re not dealing too severely with me regarding guidelines. Giving my best. We are running PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit on a Supermicro SuperServer 8026B-6RF.This version is downloaded from postgresql.org and selfcompiled, running for over a year now. The Server has 128 GB RAM and Four Intel® Xeon® X7550 with 64 logical cores.Operating System is “Linux database1 2.6.32-bpo.5-amd64 #1 SMP Mon Dec 13 17:10:39 UTC 2010 x86_64 GNU/Linux”. The System boots through iscsi over a Qlogic QLE4062C HBA. Pgdata and xlog is logged in over iscsi HBA too. We tried en and disabling jumbo frames. Makes no difference.We are using a DELL Equallogic SAN Backend with SAS drives. Postgres is used as  backend for a high performance website. We are using nginx with php-fastcgi and memcached. Since a few weeks we have really strange peaks on this system. User CPU is increasing up to 100% and we have lots of SELECTs running. There is no iowait at this time, only high user cpu and we don’t know where this is coming from. It seems like this is only happening under certain circumstances. We can solve this problem by simply removing the load from the website by delivering an offline page. We let database calm down for a while and then slowly throttling users. See ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg Has someone made similar experiences? Perhaps there is some issue between Postgres 8.4.4 and kernel 2.6.32? Thank in advanceThomas   -- Turtle Entertainment GmbHThomas Pöhler, Manager IT OperationsSiegburger Str. 18950679 CologneGermanyfon. +49 221 880449-331fax. +49 221 880449-399http://www.turtle-entertainment.com/http://www.esl.eu/http://www.consoles.net/Managing Director: Ralf ReichertRegister Court: Local Court Cologne, HRB 36678", "msg_date": "Tue, 15 Feb 2011 14:08:18 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "On Tue, Feb 15, 2011 at 6:19 PM, Thomas Pöhler\n<[email protected]> wrote:\n> Hi list,\n>\n> See ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg\n>\n\nWhat is the bottom graph? queries/minute? Looks like Your database is\njust getting hammered.\nMaybe there is a really badly coded page somewhere (a query for each\nuser or something similar)?\n\nGreetings\nMarcin Mańk\n", "msg_date": "Tue, 15 Feb 2011 20:55:48 +0100", "msg_from": "marcin mank <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "On 15/02/2011 18:19, Thomas Pöhler wrote:\n> Hi list,\n>\n> first time for me here, hope you’re not dealing too severely with me\n> regarding guidelines. 
Giving my best.\n>\n> We are running PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by\n> GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit on a Supermicro SuperServer\n> 8026B-6RF.\n>\n> This version is downloaded from postgresql.org and selfcompiled, running\n> for over a year now. The Server has 128 GB RAM and Four Intel® Xeon®\n> X7550 with 64 logical cores.\n\nSo, 64 logical cores total.\n\n> Operating System is “Linux database1 2.6.32-bpo.5-amd64 #1 SMP Mon Dec\n> 13 17:10:39 UTC 2010 x86_64 GNU/Linux”.\n>\n> The System boots through iscsi over a Qlogic QLE4062C HBA. Pgdata and\n> xlog is logged in over iscsi HBA too. We tried en and disabling jumbo\n> frames. Makes no difference.\n\nAre you using 10 Gbit/s Ethernet for iSCSI? Regular 1 Gbit/s Ethernet \nmight be too slow for you.\n\n> Since a few weeks we have really strange peaks on this system. User CPU\n> is increasing up to 100% and we have lots of SELECTs running.\n\n> See ganglia: http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg\n>\n> Has someone made similar experiences? Perhaps there is some issue\n> between Postgres 8.4.4 and kernel 2.6.32?\n\n From your graph it looks like the number of active processes (I'm \nassuming they are PostgreSQL processes) is going out of control.\n\nThere is an old problem (which I've encountered so I'm replying but it \nmay or may not be in your case) in which PostgreSQL starts behaving \nbadly even for SELECT queries if the number of simultaneous queries \nexceeds the number of logical CPUs. To test this, I'd recommend setting \nup a utility like pgpool-II (http://pgpool.projects.postgresql.org/) in \nfront of the database to try and limit the number of active connections \nto nearly 64 (maybe you can have good results with 80 or 100).\n\nYou might also experiment with pgsql.max_links setting of PHP but IIRC \nPHP will just refuse more connections than that instead of waiting for \nthem (but maybe your application can spin-wait for them, possibly while \nalso using usleep()).\n\n\n", "msg_date": "Wed, 16 Feb 2011 02:00:22 +0100", "msg_from": "Ivan Voras <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "On Tue, Feb 15, 2011 at 6:00 PM, Ivan Voras <[email protected]> wrote:\n> There is an old problem (which I've encountered so I'm replying but it may\n> or may not be in your case) in which PostgreSQL starts behaving badly even\n> for SELECT queries if the number of simultaneous queries exceeds the number\n> of logical CPUs.\n\nNote that this is a problem for most RDBMS engines, not just\npostgresql. 
The performance drop off isn't too bad, but the total\nnumber of connections times even a doubling of response time results\nin a slow server.\n\n> To test this, I'd recommend setting up a utility like\n> pgpool-II (http://pgpool.projects.postgresql.org/) in front of the database\n> to try and limit the number of active connections to nearly 64 (maybe you\n> can have good results with 80 or 100).\n\npgpool IS the answer for most of these issues.\n\n> You might also experiment with pgsql.max_links setting of PHP but IIRC PHP\n> will just refuse more connections than that instead of waiting for them (but\n> maybe your application can spin-wait for them, possibly while also using\n> usleep()).\n\nThat setting is PER PROCESS so it might not help that much.\n\nhttp://www.php.net/manual/en/pgsql.configuration.php#ini.pgsql.max-links\n", "msg_date": "Tue, 15 Feb 2011 18:19:16 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Kevin Grittner wrote:\n> Could you show your postgresql.conf file, with all comments removed\n\n\nI just added a sample query to provide the data we always want here \nwithout people having to edit their config files, by querying \npg_settings for it, to http://wiki.postgresql.org/wiki/Server_Configuration\n\nI already updated http://wiki.postgresql.org/wiki/SlowQueryQuestions and \nhttp://wiki.postgresql.org/wiki/Guide_to_reporting_problems to mention \nthis too.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 16 Feb 2011 02:33:22 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\t problem" }, { "msg_contents": "On Tue, Feb 15, 2011 at 20:01, Scott Marlowe <[email protected]> wrote:\n> run htop and look for red.  if youi've got lots of red bar on each CPU\n> but no io wait then it's waiting for memory access.\n\nI don't think this is true. AFAICT the red bar refers to \"system\ntime\", time that's spent in the kernel -- either in syscalls or kernel\nbackground threads.\n\nOperating systems don't generally account memory accesses (cache\nmisses) for processes, if you don't specially ask for it. The closest\nthing I know of is using Linux perf tools, e.g. \"perf top -e\ncache-misses\". OProfile, DTrace and SystemTap can probably do\nsomething similar.\n\nRegards,\nMarti\n", "msg_date": "Wed, 16 Feb 2011 15:44:06 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Greg Smith <[email protected]> wrote:\n \n> I just added a sample query to provide the data we always want\n> here without people having to edit their config files, by\n> querying pg_settings for it, to\n> http://wiki.postgresql.org/wiki/Server_Configuration\n \nNice! 
Thanks!\n \nA few very nice things about this:\n \n(1) You don't need rights to the postgresql.conf file; any user can\nrun this.\n \n(2) You don't need to know how to strip the comments with sed or\nperl or something, or go through the file with tedious manual\nediting.\n \n(3) It shows some things which aren't coming from the\npostgresql.conf file which might be of interest.\n \nIn fact, I wonder whether we shouldn't leave a couple items you've\nexcluded, since they are sometimes germane to problems posted, like\nlc_collate and TimeZone.\n \n-Kevin\n", "msg_date": "Wed, 16 Feb 2011 08:37:32 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\t\n\t problem" }, { "msg_contents": "Kevin Grittner wrote:\n> In fact, I wonder whether we shouldn't leave a couple items you've\n> excluded, since they are sometimes germane to problems posted, like\n> lc_collate and TimeZone.\n\nI pulled some of them out only because they're not really \npostgresql.conf settings; lc_collate and lc_ctype for example are set at \ninitdb time. Feel free to hack on that example if you feel it could be \nimproved, just be aware which of those things are not really in the main \nconfig file when pondering if they should be included.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 16 Feb 2011 09:55:39 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\t\t problem" }, { "msg_contents": "I think adding\n\nUNION ALL SELECT 'postgres version', version();\n\nmight be a good thing.\n\nOn Wed, Feb 16, 2011 at 9:55 AM, Greg Smith <[email protected]> wrote:\n> Kevin Grittner wrote:\n>>\n>> In fact, I wonder whether we shouldn't leave a couple items you've\n>> excluded, since they are sometimes germane to problems posted, like\n>> lc_collate and TimeZone.\n>\n> I pulled some of them out only because they're not really postgresql.conf\n> settings; lc_collate and lc_ctype for example are set at initdb time.  Feel\n> free to hack on that example if you feel it could be improved, just be aware\n> which of those things are not really in the main config file when pondering\n> if they should be included.\n>\n> --\n> Greg Smith   2ndQuadrant US    [email protected]   Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Wed, 16 Feb 2011 10:04:42 -0500", "msg_from": "Justin Pitts <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "On Wed, Feb 16, 2011 at 6:44 AM, Marti Raudsepp <[email protected]> wrote:\n> On Tue, Feb 15, 2011 at 20:01, Scott Marlowe <[email protected]> wrote:\n>> run htop and look for red.  if youi've got lots of red bar on each CPU\n>> but no io wait then it's waiting for memory access.\n>\n> I don't think this is true. 
AFAICT the red bar refers to \"system\n> time\", time that's spent in the kernel -- either in syscalls or kernel\n> background threads.\n\nMy point being that if you've got a lot of RED it'll be the OS waiting\nfor memory access. Trust me, when we start to hit our memory\nbandwidth (in the 70 to 80 GB/s range) we start to get more and more\nred and more and more kernel wait time.\n", "msg_date": "Wed, 16 Feb 2011 08:43:06 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Scott, are you really moving that much data through memory, 70-80GB/sec is the limit of the new intel 7500 series in a best case scenario. \r\n\r\n- John\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Scott Marlowe\r\nSent: 16 February 2011 15:43\r\nTo: Marti Raudsepp\r\nCc: Thomas Pöhler; [email protected]; Felix Feinhals; Verteiler_A-Team; Björn Metzdorf\r\nSubject: Re: [PERFORM] high user cpu, massive SELECTs, no io waiting problem\r\n\r\nOn Wed, Feb 16, 2011 at 6:44 AM, Marti Raudsepp <[email protected]> wrote:\r\n> On Tue, Feb 15, 2011 at 20:01, Scott Marlowe <[email protected]> wrote:\r\n>> run htop and look for red.  if youi've got lots of red bar on each CPU\r\n>> but no io wait then it's waiting for memory access.\r\n>\r\n> I don't think this is true. AFAICT the red bar refers to \"system\r\n> time\", time that's spent in the kernel -- either in syscalls or kernel\r\n> background threads.\r\n\r\nMy point being that if you've got a lot of RED it'll be the OS waiting\r\nfor memory access. Trust me, when we start to hit our memory\r\nbandwidth (in the 70 to 80 GB/s range) we start to get more and more\r\nred and more and more kernel wait time.\r\n\r\n-- \r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\nThis communication is for informational purposes only. It is not\nintended as an offer or solicitation for the purchase or sale of\nany financial instrument or as an official confirmation of any\ntransaction. All market prices, data and other information are not\nwarranted as to completeness or accuracy and are subject to change\nwithout notice. Any comments or statements made herein do not\nnecessarily reflect those of JPMorgan Chase & Co., its subsidiaries\nand affiliates.\r\n\r\nThis transmission may contain information that is privileged,\nconfidential, legally privileged, and/or exempt from disclosure\nunder applicable law. If you are not the intended recipient, you\nare hereby notified that any disclosure, copying, distribution, or\nuse of the information contained herein (including any reliance\nthereon) is STRICTLY PROHIBITED. Although this transmission and any\nattachments are believed to be free of any virus or other defect\nthat might affect any computer system into which it is received and\nopened, it is the responsibility of the recipient to ensure that it\nis virus free and no responsibility is accepted by JPMorgan Chase &\nCo., its subsidiaries and affiliates, as applicable, for any loss\nor damage arising in any way from its use. If you received this\ntransmission in error, please immediately contact the sender and\ndestroy the material in its entirety, whether in electronic or hard\ncopy format. 
Thank you.\r\n\r\nPlease refer to http://www.jpmorgan.com/pages/disclosures for\ndisclosures relating to European legal entities.\n", "msg_date": "Wed, 16 Feb 2011 10:53:47 -0500", "msg_from": "\"Strange, John W\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Yeah, at max load we are. We're running quad 12 core AMD Magny Cours.\n Under max load all of our cores go about 20 to 30% red (i.e. kernel)\nand we wind up waiting on the kernel much more. Could be a mix of\ncontext switching and waiting on memory, so it's just a guesstimate\nI'm making based on previous testing with Greg Smith's memory\nstreaming test and familiarity with this system.\n\nOn Wed, Feb 16, 2011 at 8:53 AM, Strange, John W\n<[email protected]> wrote:\n> Scott, are you really moving that much data through memory, 70-80GB/sec is the limit of the new intel 7500 series in a best case scenario.\n>\n> - John\n>\n> -----Original Message-----\n> From: [email protected] [mailto:[email protected]] On Behalf Of Scott Marlowe\n> Sent: 16 February 2011 15:43\n> To: Marti Raudsepp\n> Cc: Thomas Pöhler; [email protected]; Felix Feinhals; Verteiler_A-Team; Björn Metzdorf\n> Subject: Re: [PERFORM] high user cpu, massive SELECTs, no io waiting problem\n>\n> On Wed, Feb 16, 2011 at 6:44 AM, Marti Raudsepp <[email protected]> wrote:\n>> On Tue, Feb 15, 2011 at 20:01, Scott Marlowe <[email protected]> wrote:\n>>> run htop and look for red.  if youi've got lots of red bar on each CPU\n>>> but no io wait then it's waiting for memory access.\n>>\n>> I don't think this is true. AFAICT the red bar refers to \"system\n>> time\", time that's spent in the kernel -- either in syscalls or kernel\n>> background threads.\n>\n> My point being that if you've got a lot of RED it'll be the OS waiting\n> for memory access.  Trust me, when we start to hit our memory\n> bandwidth (in the 70 to 80 GB/s range) we start to get more and more\n> red and more and more kernel wait time.\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n> This communication is for informational purposes only. It is not\n> intended as an offer or solicitation for the purchase or sale of\n> any financial instrument or as an official confirmation of any\n> transaction. All market prices, data and other information are not\n> warranted as to completeness or accuracy and are subject to change\n> without notice. Any comments or statements made herein do not\n> necessarily reflect those of JPMorgan Chase & Co., its subsidiaries\n> and affiliates.\n>\n> This transmission may contain information that is privileged,\n> confidential, legally privileged, and/or exempt from disclosure\n> under applicable law. If you are not the intended recipient, you\n> are hereby notified that any disclosure, copying, distribution, or\n> use of the information contained herein (including any reliance\n> thereon) is STRICTLY PROHIBITED. Although this transmission and any\n> attachments are believed to be free of any virus or other defect\n> that might affect any computer system into which it is received and\n> opened, it is the responsibility of the recipient to ensure that it\n> is virus free and no responsibility is accepted by JPMorgan Chase &\n> Co., its subsidiaries and affiliates, as applicable, for any loss\n> or damage arising in any way from its use. 
If you received this\n> transmission in error, please immediately contact the sender and\n> destroy the material in its entirety, whether in electronic or hard\n> copy format. Thank you.\n>\n> Please refer to http://www.jpmorgan.com/pages/disclosures for\n> disclosures relating to European legal entities.\n>\n\n\n\n-- \nTo understand recursion, one must first understand recursion.\n", "msg_date": "Wed, 16 Feb 2011 09:02:43 -0700", "msg_from": "Scott Marlowe <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Justin Pitts <[email protected]> wrote: \n> I think adding\n> \n> UNION ALL SELECT 'postgres version', version();\n> \n> might be a good thing.\n \nGood point. Added.\n \n> Greg Smith <[email protected]> wrote:\n>> Kevin Grittner wrote:\n>>>\n>>> In fact, I wonder whether we shouldn't leave a couple items\n>>> you've excluded, since they are sometimes germane to problems\n>>> posted, like lc_collate and TimeZone.\n>>\n>> I pulled some of them out only because they're not really\n>> postgresql.conf settings; lc_collate and lc_ctype for example are\n>> set at initdb time. Feel free to hack on that example if you\n>> feel it could be improved, just be aware which of those things\n>> are not really in the main config file when pondering if they\n>> should be included.\n \nBasically, the ones I could remember us needing to ask about on\nmultiple occasions, I put back -- provisionally. If someone thinks\nthey're pointless, I won't worry about them being dropped again:\ntime zone, character encoding scheme, character set, and collation. \nI'm pretty sure I've seen us ask about all of those in trying to\nsort out a problem.\n \nI also tried the query on a newly installed HEAD build which had no\nmanual changes to the postgresql.conf file and found a few others\nwhich seemed to me to be worth suppressing.\n \nI took my shot -- anyone else is welcome to do so.... :-)\n \n-Kevin\n", "msg_date": "Wed, 16 Feb 2011 10:08:54 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\n\t problem" }, { "msg_contents": "Hi,\n\nwe are using two instances of pgbouncer v1.4 for connection pooling.\nOne for prepared statements (pool_mode session) and one without (pool_mode transaction). \n\nPgbouncer.ini:\n[pgbouncer]\npool_mode = transaction/session\nserver_reset_query = DISCARD ALL;\nserver_check_query = select 1\nserver_check_delay = 10\nmax_client_conn = 10000\ndefault_pool_size = 450\nlog_connections = 0\nlog_disconnections = 0\nlog_pooler_errors = 1\nclient_login_timeout = 0\n\n\nI will examine htop next time during a peak. \n\nIf I remember correctly vmstat showed lots of context switches during a peak above 50k. \n\nWe are running a biweekly downtime where we do a complete reindex and vaccum full. We cannot identify certain queries causing this. \n\nThe last graph in ganglia (http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg) shows the avg_queries from pgbouncers stats. I think this is a symptom of many waiting queries which accumulate.\n\nOur iscsi is connected with 3Gibt/s. But that's more than enough. 
We don't have high traffic throughput.\n\nThis is the result of the query you gave me:\n\nversion\tPostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit\ncheckpoint_segments\t40\ncustom_variable_classes\tpg_stat_statements\neffective_cache_size\t48335MB\nescape_string_warning\toff\nfsync\ton\nlc_collate\tC\nlc_ctype\tC\nlisten_addresses\t*\nlog_destination\tstderr\nlog_line_prefix\t%t %p %d %u %r\nlog_lock_waits\ton\nlog_min_duration_statement\t1s\nlog_min_messages\tnotice\nlog_rotation_size\t10MB\nlog_temp_files\t50MB\nlogging_collector\ton\nmaintenance_work_mem\t1GB\nmax_connections\t1000\nmax_prepared_transactions\t5\nmax_stack_depth\t2MB\npg_stat_statements.max\t10000\npg_stat_statements.track\tall\nport\t5433\nserver_encoding\tUTF8\nshared_buffers\t16GB\nTimeZone\tEurope/Berlin\nupdate_process_title\ton\nwal_buffers\t1MB\nwork_mem\t32MB\n\n\nSeems like connection limit 10000 is way too much on pgbouncer? Our queries overall are not that CPU intensive. If they are slow, they are mostly waiting for disk io. When having a look at the traffic of this database server we see 2/3 of the traffic is going to san/disk and only 1/3 going to the server. In other words from the traffic view, 2/3 of our operations are writes and 1/3 are reads. The database is fitting completely into ram, so reads should not be a problem.\n\nAppreciate your help!\nThomas\n\n-----Ursprüngliche Nachricht-----\nVon: Kevin Grittner [mailto:[email protected]] \nGesendet: Mittwoch, 16. Februar 2011 17:09\nAn: Greg Smith; Justin Pitts\nCc: [email protected]; Verteiler_A-Team; Björn Metzdorf; Felix Feinhals; Thomas Pöhler\nBetreff: Re: [PERFORM] high user cpu, massive SELECTs, no io waiting problem\n\nJustin Pitts <[email protected]> wrote: \n> I think adding\n> \n> UNION ALL SELECT 'postgres version', version();\n> \n> might be a good thing.\n \nGood point. Added.\n \n> Greg Smith <[email protected]> wrote:\n>> Kevin Grittner wrote:\n>>>\n>>> In fact, I wonder whether we shouldn't leave a couple items\n>>> you've excluded, since they are sometimes germane to problems\n>>> posted, like lc_collate and TimeZone.\n>>\n>> I pulled some of them out only because they're not really\n>> postgresql.conf settings; lc_collate and lc_ctype for example are\n>> set at initdb time. Feel free to hack on that example if you\n>> feel it could be improved, just be aware which of those things\n>> are not really in the main config file when pondering if they\n>> should be included.\n \nBasically, the ones I could remember us needing to ask about on\nmultiple occasions, I put back -- provisionally. If someone thinks\nthey're pointless, I won't worry about them being dropped again:\ntime zone, character encoding scheme, character set, and collation. \nI'm pretty sure I've seen us ask about all of those in trying to\nsort out a problem.\n \nI also tried the query on a newly installed HEAD build which had no\nmanual changes to the postgresql.conf file and found a few others\nwhich seemed to me to be worth suppressing.\n \nI took my shot -- anyone else is welcome to do so.... 
:-)\n \n-Kevin\n", "msg_date": "Wed, 16 Feb 2011 18:11:45 +0100", "msg_from": "=?iso-8859-1?Q?Thomas_P=F6hler?= <[email protected]>", "msg_from_op": true, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "2011/2/16 Thomas Pöhler <[email protected]>:\n> Hi,\n>\n> we are using two instances of pgbouncer v1.4 for connection pooling.\n> One for prepared statements (pool_mode session) and one without (pool_mode transaction).\n>\n> Pgbouncer.ini:\n> [pgbouncer]\n> pool_mode = transaction/session\n> server_reset_query = DISCARD ALL;\n> server_check_query = select 1\n> server_check_delay = 10\n> max_client_conn = 10000\n> default_pool_size = 450\n> log_connections = 0\n> log_disconnections = 0\n> log_pooler_errors = 1\n> client_login_timeout = 0\n>\n>\n> I will examine htop next time during a peak.\n>\n> If I remember correctly vmstat showed lots of context switches during a peak above 50k.\n>\n> We are running a biweekly downtime where we do a complete reindex and vaccum full. We cannot identify certain queries causing this.\n>\n> The last graph in ganglia (http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg) shows the avg_queries from pgbouncers stats. I think this is a symptom of many waiting queries which accumulate.\n>\n> Our iscsi is connected with 3Gibt/s. But that's more than enough. We don't have high traffic throughput.\n>\n> This is the result of the query you gave me:\n>\n> version PostgreSQL 8.4.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (Debian 4.3.2-1.1) 4.3.2, 64-bit\n> checkpoint_segments     40\n> custom_variable_classes pg_stat_statements\n> effective_cache_size    48335MB\n> escape_string_warning   off\n> fsync   on\n> lc_collate      C\n> lc_ctype        C\n> listen_addresses        *\n> log_destination stderr\n> log_line_prefix %t %p %d %u %r\n> log_lock_waits  on\n> log_min_duration_statement      1s\n> log_min_messages        notice\n> log_rotation_size       10MB\n> log_temp_files  50MB\n> logging_collector       on\n> maintenance_work_mem    1GB\n> max_connections 1000\n> max_prepared_transactions       5\n> max_stack_depth 2MB\n> pg_stat_statements.max  10000\n> pg_stat_statements.track        all\n> port    5433\n> server_encoding UTF8\n> shared_buffers  16GB\n> TimeZone        Europe/Berlin\n> update_process_title    on\n> wal_buffers     1MB\n> work_mem        32MB\n>\n>\n> Seems like connection limit 10000 is way too much on pgbouncer? Our queries overall are not that CPU intensive. If they are slow, they are mostly waiting for disk io. When having a look at the traffic of this database server we see 2/3 of the traffic is going to san/disk and only 1/3 going to the server. In other words from the traffic view, 2/3 of our operations are writes and 1/3 are reads. The database is fitting completely into ram, so reads should not be a problem.\n\nI used pgbouncer with way more than that, not an issue on its own\n*but* can you export the pgbouncers in another box ?\nI get issues in very high-mem usage (more than IO) and ton's of\nconnection via pgbouncer, then moving the bouncer in a 3rd box salve\nthe situation.\n\n>\n> Appreciate your help!\n> Thomas\n>\n> -----Ursprüngliche Nachricht-----\n> Von: Kevin Grittner [mailto:[email protected]]\n> Gesendet: Mittwoch, 16. 
Februar 2011 17:09\n> An: Greg Smith; Justin Pitts\n> Cc: [email protected]; Verteiler_A-Team; Björn Metzdorf; Felix Feinhals; Thomas Pöhler\n> Betreff: Re: [PERFORM] high user cpu, massive SELECTs, no io waiting problem\n>\n> Justin Pitts <[email protected]> wrote:\n>> I think adding\n>>\n>> UNION ALL SELECT 'postgres version', version();\n>>\n>> might be a good thing.\n>\n> Good point.  Added.\n>\n>> Greg Smith <[email protected]> wrote:\n>>> Kevin Grittner wrote:\n>>>>\n>>>> In fact, I wonder whether we shouldn't leave a couple items\n>>>> you've excluded, since they are sometimes germane to problems\n>>>> posted, like lc_collate and TimeZone.\n>>>\n>>> I pulled some of them out only because they're not really\n>>> postgresql.conf settings; lc_collate and lc_ctype for example are\n>>> set at initdb time.  Feel free to hack on that example if you\n>>> feel it could be improved, just be aware which of those things\n>>> are not really in the main config file when pondering if they\n>>> should be included.\n>\n> Basically, the ones I could remember us needing to ask about on\n> multiple occasions, I put back -- provisionally.  If someone thinks\n> they're pointless, I won't worry about them being dropped again:\n> time zone, character encoding scheme, character set, and collation.\n> I'm pretty sure I've seen us ask about all of those in trying to\n> sort out a problem.\n>\n> I also tried the query on a newly installed HEAD build which had no\n> manual changes to the postgresql.conf file and found a few others\n> which seemed to me to be worth suppressing.\n>\n> I took my shot -- anyone else is welcome to do so....  :-)\n>\n> -Kevin\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\n\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Wed, 16 Feb 2011 18:58:23 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "Thomas Pᅵhler<[email protected]> wrote:\n \n> we are using two instances of pgbouncer v1.4 for connection\n> pooling. One for prepared statements (pool_mode session) and one\n> without (pool_mode transaction).\n \n> max_client_conn = 10000\n> default_pool_size = 450\n \nYour best defense against the \"thundering herd\" issues you describe\nwould be to eliminate the session pool (if you can), and drop the\ndefault_pool_size for the transaction pool to where at peak the\nnumber of backends actually busy is about twice your number of\n*actual* cores. (Don't count hyperthreading \"logical\" cores for\nthis purpose.) max_client_conn can be as high as you need; the\npoint is for the connection pool to funnel the requests through a\nmuch smaller pool of database connections.\n \n> If I remember correctly vmstat showed lots of context switches\n> during a peak above 50k.\n \nYeah, that's part of the reason throughput tanks when your active\nconnection count gets too high.\n \n> We are running a biweekly downtime where we do a complete reindex\n> and vacuum full. We cannot identify certain queries causing this.\n \nIf you really get bloat which requires VACUUM FULL, tracking down\nthe reason should be a high priority. 
You normally shouldn't need\nto run that.\n \nAlso, I hope when you run that it is VACUUM FULL followed by\nREINDEX, not the other way around. In fact, it would probably be\nfaster to CLUSTER (if you have room) or drop the indexes, VACUUM\nFULL, and then create the indexes again.\n \n> The last graph in ganglia \n> (http://dl.dropbox.com/u/183323/CPUloadprobsdb1.jpg) shows the\n> avg_queries from pgbouncers stats. I think this is a symptom of\n> many waiting queries which accumulate.\n \nWhile it seems counter-intuitive, you're likely to have fewer\nqueries waiting a long time there if you reduce\ndefault_pool_size so that contention doesn't kill performance when\nthe queries *do* get to run.\n \n> max_connections\t1000\n \nThis is what you need to try to reduce.\n \n> max_prepared_transactions\t5\n \nIf you're actually using prepared transactions, make sure none are\nlingering about for a long time during these incidents. Well,\n*ever*, really -- but I would definitely check during problem\nperiods.\n \n> wal_buffers\t1MB\n \nYou should bump this to 16MB.\n \n> The database is fitting completely into ram\n \nThen you should probably be adjusting sequential_page_cost and\nrand_page_cost. You'll probably get plans which run faster, which\nshould help overall.\n \n-Kevin\n", "msg_date": "Wed, 16 Feb 2011 13:22:26 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting\n\t problem" }, { "msg_contents": "Thomas P�hler wrote:\n> We are running a biweekly downtime where we do a complete reindex and vaccum full. We cannot identify certain queries causing this.\n\nIf you feel that you need VACUUM FULL, either something terribly wrong \nhas happened, or someone has gotten confused. In both cases it's \nunlikely you want to keep doing that. See \nhttp://wiki.postgresql.org/wiki/VACUUM_FULL for a nice document leading \nthrough figuring what to do instead.\n\nNote that if you have a database that fits in RAM, but is filled with \nthe sort of index bloat garbage that using VACUUM FULL will leave \nbehind, it will cause excessive CPU use when running queries. If you \nalready have planned downtime, you really should try to use use CLUSTER \ninstead, to remove that from the list of possible causes for your issue.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 16 Feb 2011 15:36:01 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" }, { "msg_contents": "> Thomas Pöhler wrote:\n\nI remember you said you were using nginx and php-fastcgi, how many web \nserver boxes do you have, and what are the specs ?\n", "msg_date": "Thu, 17 Feb 2011 08:30:25 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: high user cpu, massive SELECTs, no io waiting problem" } ]
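As a rough sketch of the maintenance approach suggested above (CLUSTER instead of VACUUM FULL followed by REINDEX), something like the following could replace the biweekly pass for a bloated table; big_table and big_table_pkey are placeholders, not objects from the poster's schema:

-- CLUSTER rewrites the heap in index order and rebuilds every index on the
-- table in one pass, so no separate REINDEX is needed afterwards
CLUSTER big_table USING big_table_pkey;
-- refresh planner statistics after the rewrite
ANALYZE big_table;

Like VACUUM FULL, CLUSTER holds an exclusive lock and needs scratch disk space roughly the size of the table, so it still belongs in the existing downtime window, but on 8.4 it avoids the index bloat that VACUUM FULL can leave behind.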
[ { "msg_contents": "Hello,\n\nI was under the impression that pg_dumpall didn't affect database\nperformance when dumping while the db is live. However I have evidence to\nthe contrary now - queries that are run during the pg_dumpall time take 10\nto a 100 times longer to execute than normal while pg_dumpall is running.\nThe strange thing is that this started after my database grew by about 25%\nafter a large influx of data due to user load. I'm wonder if there is a\ntipping\npoint or a config setting I need to change now that the db is larger that\nis\ncausing all this to happen.\n\nThanks,\n Mark\n\n", "msg_date": "Tue, 15 Feb 2011 13:41:04 -0500", "msg_from": "Mark Mikulec <[email protected]>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?pg=5Fdumpall=20affecting=20performance?=" }, { "msg_contents": "I was always under the impression that pg_dump and pg_dumpall cause all data to be read in to the buffers and then out, (of course squeezing out whatever may be active). That is the big advantage to using PITR backups and using a tar or cpio method of backing up active containers and shipping off to another system, disk or api to tape system.\r\n\r\n-----Original Message-----\r\nFrom: [email protected] [mailto:[email protected]] On Behalf Of Mark Mikulec\r\nSent: Tuesday, February 15, 2011 12:41 PM\r\nTo: [email protected]\r\nSubject: [PERFORM] pg_dumpall affecting performance\r\n\r\nHello,\r\n\r\nI was under the impression that pg_dumpall didn't affect database performance when dumping while the db is live. However I have evidence to the contrary now - queries that are run during the pg_dumpall time take 10 to a 100 times longer to execute than normal while pg_dumpall is running.\r\nThe strange thing is that this started after my database grew by about 25% after a large influx of data due to user load. I'm wonder if there is a tipping point or a config setting I need to change now that the db is larger that is causing all this to happen.\r\n\r\nThanks,\r\n Mark\r\n\r\n\r\n--\r\nSent via pgsql-performance mailing list ([email protected])\r\nTo make changes to your subscription:\r\nhttp://www.postgresql.org/mailpref/pgsql-performance\r\n", "msg_date": "Tue, 15 Feb 2011 12:45:34 -0600", "msg_from": "\"Plugge, Joe R.\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumpall affecting performance" }, { "msg_contents": "On 02/15/2011 10:41 AM, Mark Mikulec wrote:\n> Hello,\n>\n> I was under the impression that pg_dumpall didn't affect database\n> performance when dumping while the db is live. However I have evidence to\n> the contrary now - queries that are run during the pg_dumpall time take 10\n> to a 100 times longer to execute than normal while pg_dumpall is running.\n> The strange thing is that this started after my database grew by about 25%\n> after a large influx of data due to user load. I'm wonder if there is a\n> tipping\n> point or a config setting I need to change now that the db is larger that\n> is\n> causing all this to happen.\n>\nDon't know where that impression came from. It is true that you can \ncontinue to *use* your database normally while running a dump but you \nare reading the entire database and either transmitting it over the \nnetwork or writing it to a local drive so it shouldn't be surprising \nthat performance is impacted.\n\nThere are tipping points - one big one is when you move from having all \nyour data in RAM to needing to read disk. And it can be a whopper. 
If \nall your information, through PG or OS caching is in RAM then your dumps \nmay run very quickly. The moment you cross the point that things don't \nquite fit you can see a sharp decline.\n\nConsider a least-recently-used algorithm and a very simplistic scenario. \nYou read the \"start\" data. It isn't cached so you go to disk *and* you \nput those blocks into cache pushing others than you would need later out \nof cache. This continues and you potentially end up having to read \neverything from disk plus incur the overhead of checking and updating \nthe cache. Meanwhile, the data you needed for your query may have been \npushed out of cache so there is more contention for disk.\n\nAdmittedly an over-simplified example but you see the problem.\n\nCheers,\nSteve\n\n", "msg_date": "Tue, 15 Feb 2011 10:56:44 -0800", "msg_from": "Steve Crawford <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumpall affecting performance" }, { "msg_contents": "Mark Mikulec <[email protected]> wrote:\n \n> The strange thing is that this started after my database grew by\n> about 25% after a large influx of data due to user load\n \nIn addition to the issues already mentioned, there is the fact that\nto maintain consistency an entire database must be dumped in a\nsingle database transaction with one snapshot. This means that\ngarbage collection can't run, which may lead to bloat under some\ncircumstances. This may be why your database grew by 25%. If that\nbloat is concentrated in a small number of tables, you may want to\nschedule aggressive maintenance (like CLUSTER) on those tables.\n \nOne other factor which can affect running applications is the table\nlocks which the dump must hold.\n \nYou might want to look into PITR backup techniques, or streaming\nreplication on 9.0\n \n-Kevin\n", "msg_date": "Tue, 15 Feb 2011 13:13:10 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dumpall affecting performance" } ]
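To see whether the 25 percent growth is concentrated in a handful of relations, as suggested above, a size query against the catalogs is usually enough; this is only a sketch and assumes nothing about the schema involved:

-- the twenty largest tables, counting their indexes and TOAST data,
-- to show where the recent growth actually went
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
 WHERE c.relkind = 'r'
   AND n.nspname NOT IN ('pg_catalog', 'information_schema')
 ORDER BY pg_total_relation_size(c.oid) DESC
 LIMIT 20;

Tables that dominate this list and keep growing between dumps are the natural candidates for the targeted maintenance (such as CLUSTER) mentioned above.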
[ { "msg_contents": "All,\n\nI'm trying to estimate the size of my hot data set, and wanted to get some\nvalidation that I'm doing this correctly.\n\nBasically, I'm using the sum(heap_blks_read + idx_blks_read) from\npg_statio_all_tables, and diffing the numbers over a period of time (1 hour\nat least). Is this a fair estimate? The reason for doing this is we are\nlooking at new server hardware, and I want to try and get enough ram on the\nmachine to keep the hot data in memory plus provide room for growth.\n\nThanks,\n\nChris\n\nExample:\n\n\n\n *Time*\n\n*Total Blocks*\n\n2011-02-16 11:25:34.621874-05\n\n123,260,464,427.00\n\n2011-02-16 12:25:46.486719-05\n\n123,325,880,943.00\n\n\n\nTo get the hot data for this hour (in KB), I'm taking:\n\n\n (123,325,880,943.00 - 123,260,464,427.00)* 8 = 523,332,128KB\n\n\nCorrect?\n\nAll,I'm trying to estimate the size of my hot data set, and wanted to get some validation that I'm doing this correctly.Basically, I'm using the sum(heap_blks_read + idx_blks_read) from pg_statio_all_tables, and diffing the numbers over a period of time (1 hour at least).  Is this a fair estimate?  The reason for doing this is we are looking at new server hardware, and I want to try and get enough ram on the machine to keep the hot data in memory plus provide room for growth.\nThanks,ChrisExample:\n\n\n\n\n\nTime\n\n\nTotal Blocks\n\n\n\n\n2011-02-16 11:25:34.621874-05\n\n\n123,260,464,427.00\n\n\n\n\n\n2011-02-16 12:25:46.486719-05\n\n\n123,325,880,943.00\n\n\n\n\n\nTo get the hot data for this hour (in KB), I'm taking:\n\n (123,325,880,943.00 - 123,260,464,427.00)* 8 = 523,332,128KB\n\nCorrect?", "msg_date": "Wed, 16 Feb 2011 15:51:36 -0500", "msg_from": "Chris Hoover <[email protected]>", "msg_from_op": true, "msg_subject": "Estimating hot data size" }, { "msg_contents": "Dne 16.2.2011 21:51, Chris Hoover napsal(a):\n> All,\n> \n> I'm trying to estimate the size of my hot data set, and wanted to get\n> some validation that I'm doing this correctly.\n> \n> Basically, I'm using the sum(heap_blks_read + idx_blks_read) from\n> pg_statio_all_tables, and diffing the numbers over a period of time (1\n> hour at least). Is this a fair estimate? The reason for doing this is\n> we are looking at new server hardware, and I want to try and get enough\n> ram on the machine to keep the hot data in memory plus provide room for\n> growth.\n> \n> Thanks,\n> \n> Chris\n> \n> Example:\n> \n> \n> \n> *Time*\n> \n> \t\n> \n> *Total Blocks*\n> \n> 2011-02-16 11:25:34.621874-05\n> \n> \t\n> \n> 123,260,464,427.00\n> \n> 2011-02-16 12:25:46.486719-05\n> \n> \t\n> \n> 123,325,880,943.00\n> \n> \n> \n> To get the hot data for this hour (in KB), I'm taking:\n> \n> \n> (123,325,880,943.00 - 123,260,464,427.00)* 8 = 523,332,128KB\n> \n> \n> Correct?\n\nI doubt that, although I'm not sure what exactly you mean by hot data\nset. I guess it's the data set you're working with frequently, right?\n\nThe first gotcha is that heap_blks_read counts only blocks not found in\nshared buffers, so those 500MB is actually the amount of data read from\nthe disk (or filesystem cache). It does not say anything about how\nfrequently the data are used.\n\nThe second gotcha is that the same block may be counted repeatedly,\nespecially if it is not frequently used. It's counted for query A, then\nit's removed from the cache (to be replaced by another block), and then\nfor another query B. 
So the number heap_blks_read does not mean there\nwere that many different blocks read from the disk.\n\nWhat I'd recommend is to measure the cache hit ratio, i.e. this\n\n heap_blks_hit / (heap_blks_read + heap_blks_hit)\n\nwhich means how efficient the cache is. Increase shared buffers until it\nstops to increase - that's the hot data set size.\n\nregards\nTomas\n\nPS: The value heap_blks_hit does not actually mean the blocks were read\n from the disk - it might be read from filesystem cache (and there's\n not easy way to find out this AFAIK).\n", "msg_date": "Wed, 16 Feb 2011 22:13:41 +0100", "msg_from": "Tomas Vondra <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimating hot data size" }, { "msg_contents": "Chris Hoover wrote:\n> Basically, I'm using the sum(heap_blks_read + idx_blks_read) from \n> pg_statio_all_tables, and diffing the numbers over a period of time (1 \n> hour at least). Is this a fair estimate? The reason for doing this \n> is we are looking at new server hardware, and I want to try and get \n> enough ram on the machine to keep the hot data in memory plus provide \n> room for growth.\n\nThose two are measuring reads to the operating system, which isn't \nreally a good measure of the working data set. If you switch to the \ninternal counters that measure what's already cached, that won't be \nquite right either. Those will be repeatedly measuring the same block, \non the truly hot ones, which inflates how big you'll think the working \nset is relative to its true size. \n\nIf you visit http://projects.2ndquadrant.com/talks you'll find a talk \ncalled \"Inside the PostgreSQL Buffer Cache\" that goes over how the cache \nis actually managed within the database. There's also some sample \nqueries that run after you install the pg_buffercache module into a \ndatabase. Check out \"Buffer contents summary, with percentages\". \nThat's the only way to really measure what you're trying to see. I will \nsometimes set shared_buffers to a larger value than would normally be \noptimal for a bit, just to get a better reading on what the hot data is.\n\nIf you also want to get an idea what's in the operating system cache, \nthe pgfincore module from http://pgfoundry.org/projects/pgfincore/ will \nallow that on a Linux system.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Wed, 16 Feb 2011 17:02:40 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Estimating hot data size" } ]
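For reference, the measurement Tomas describes can be taken directly from the same pg_statio_all_tables view used in the original post; this is only a sketch, and like the raw counters it reflects hits in shared_buffers, not in the filesystem cache:

-- overall heap hit ratio; the idx_blks_* columns give the equivalent figure
-- for indexes. Sample it under normal load, then raise shared_buffers until
-- the ratio stops improving to approximate the hot data set size.
SELECT sum(heap_blks_hit)::float8
       / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS heap_hit_ratio
  FROM pg_statio_all_tables;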
[ { "msg_contents": "In normal circumstances does locking a table in access exclusive mode improve insert, update and delete operation performance on that table.\n\nIs MVCC disabled or somehow has less work to do?\n\nCheers\nJeremy\n\n______________________________________________________________________________________________________\n\nThis message contains information, which is confidential and may be subject to legal privilege. \nIf you are not the intended recipient, you must not peruse, use, disseminate, distribute or copy this message.\nIf you have received this message in error, please notify us immediately (Phone 0800 665 463 or [email protected]) and destroy the original message.\nLINZ accepts no responsibility for changes to this email, or for any attachments, after its transmission from LINZ.\n\nThank you.\n______________________________________________________________________________________________________\n\n\n\n\n\n\n\n\n\n\nIn normal circumstances does locking a table in access\nexclusive mode improve insert, update and delete operation performance on that\ntable. \n \nIs MVCC disabled or somehow has less work to do?\n \nCheers\nJeremy\n\n\n\nThis message contains information, which is confidential and may be subject \nto legal privilege. If you are not the intended recipient, you must not \nperuse, use, disseminate, distribute or copy this message.If you have \nreceived this message in error, please notify us immediately (Phone 0800 665 463 \nor [email protected] ) and destroy the \noriginal message.\nLINZ accepts no responsibility for changes to this email, or for any \nattachments, after its transmission from LINZ.\n \nThank you.", "msg_date": "Thu, 17 Feb 2011 17:38:34 +1300", "msg_from": "Jeremy Palmer <[email protected]>", "msg_from_op": true, "msg_subject": "Does exclusive locking improve performance?" }, { "msg_contents": "* Jeremy Palmer ([email protected]) wrote:\n> In normal circumstances does locking a table in access exclusive mode improve insert, update and delete operation performance on that table.\n> \n> Is MVCC disabled or somehow has less work to do?\n\nMVCC certainly isn't disabled. Does it have less work to do? That's a\nbit harder to say but my guess is \"not so much that you'd actually be\nable to notice it.\"..\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 17 Feb 2011 00:18:35 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Does exclusive locking improve performance?" } ]
[ { "msg_contents": "\nWe perform over 1,000,000 searches each day for \"adoptable shelter pets\nnear your zipcode\". We already have adequate performance for these\nsearches using the \"cube\" contrib, but the new KNN work in 9.1 seemed\nlike it might be a promising way to speed this up even further.\n\nI installed PostgreSQL 9.1 on my laptop to evaluate it, using this post\nas a reference:\nhttp://www.depesz.com/index.php/2010/12/11/waiting-for-9-1-knngist/\n\nThe first task was to translate a geo-spatial search to use the new KNN\nsyntax.\n\nI'm most familiar with two approaches to geo-spatial searching with\nPostgreSQL. The first is the older \"earthdistance\" approach, using\n\"point\" types and the \"<@>\" operator.\n\nThe second is the one we are using now, which uses a cube type, the\n\"cube_distance()\" and \"earth_box()\" method and a GIST index on the cube\ntype.\n\nImmediately there is a hurdle in that KNN only appears to work with\npoint types and the <-> operator, which does simple point-to-point\ndistance, instead of the distance-around-the-earth. Still, I thought\nthat could be enough of an approximation to test the waters.\n\nI started with some \"real world\" queries that involved some table joins,\nand when those failed to show improvement, I worked with some\nreduced-test-case queries.\n\nWhile I could confirm the new GIST index was being used on the point\ntype, I couldn't get a query to benchmark better when it was invoked.\nI'm wondering if perhaps US zipcode searches aren't good use of this\ntechnology, perhaps because the data set is too small ( About 40,000\nzipcodes ).\n\nGiven that we can already do GIST-indexed searches with the cube type\nthat provide good reasonable approximations for zipcode-radius searches,\nare others planning to eventually apply the KNN work to US zipcode\nsearches?\n\nSample EXPLAIN output and query times are below.\n\n Mark\n\nEXPLAIN ANALYZE SELECT zipcode,\n lon_lat <-> '(-118.412426,34.096629)' AS radius\n FROM zipcodes ;\n-------------------------------------------\n Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483 width=22) (actual\ntime=0.019..84.543 rows=41483 loops=1)\n Total runtime: 148.129 ms\n\n\nEXPLAIN ANALYZE SELECT zipcode,\n lon_lat <-> '(-118.412426,34.096629)' As radius\n FROM zipcodes\n ORDER BY lon_lat <-> '(-118.412426,34.096629)';\n--------------------------------------------------\n Index Scan using zipcodes_knn on zipcodes (cost=0.00..5365.93\nrows=41483 width=22) (actual time=0.451..141.590 rows=41483 loops=1)\n Order By: (lon_lat <-> '(-118.412426,34.096629)'::point)\n Total runtime: 206.392 ms\n\n\n\n\n\n\n\n", "msg_date": "Thu, 17 Feb 2011 09:40:56 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "application of KNN code to US zipcode searches?" }, { "msg_contents": "Mark Stosberg <[email protected]> wrote:\n \n> Sample EXPLAIN output and query times are below.\n \n> Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483 width=22)\n> (actual time=0.019..84.543 rows=41483 loops=1)\n \n> Index Scan using zipcodes_knn on zipcodes (cost=0.00..5365.93\n> rows=41483 width=22) (actual time=0.451..141.590 rows=41483\n> loops=1)\n \nI thought the benefit of KNN was that you could retrieve the rows in\ndistance order, so that a query for the closest 20 locations (for\nexample) would be very fast. 
I wouldn't have expected it to be\nhelpful when you're selecting all the rows regardless of distance.\n \n-Kevin\n", "msg_date": "Thu, 17 Feb 2011 08:49:28 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "\n> I thought the benefit of KNN was that you could retrieve the rows in\n> distance order, so that a query for the closest 20 locations (for\n> example) would be very fast. I wouldn't have expected it to be\n> helpful when you're selecting all the rows regardless of distance.\n\nKevin,\n\nThanks for the feedback. You are right that my \"reduced test case\"\nwasn't a good approximation. I added a limit, to simulate finding the\n100 zipcodes closest to 90210.\n\nBelow I compare 4 approaches to the same query:\n\n1. Cube search\n2. Earth Distance Search\n3. Simple point distance (no index)\n4. Simple point distance (KNN)\n\nNow KNN benchmarks to be almost 100x faster! That's very promising.\nThen there's only the issue that simple point distance is not expected\nto be a good enough approximation of earth-distances. Perhaps that can\nbe solved by pre-computing coordinates based on the lat/long pairs....\nmuch like the map projections used to present a curved surface on a flat\nmap? Given that's OK to be be a few miles off, it seems we have some\nleeway here.\n\nRecommendations?\n\n Mark\n\nEXPLAIN ANALYZE\nSELECT zipcode,\n cube_distance( '(-2513120.64361786, -4645511.0460328,\n3575538.9507084)', zipcodes.earth_coords)/1609.344 AS radius\n FROM zipcodes ORDER BY radius LIMIT 100;\n\n---------------------------------------------------------------\n Limit (cost=2946.70..2946.95 rows=100 width=62) (actual\ntime=167.650..168.064 rows=100 loops=1)\n -> Sort (cost=2946.70..3050.40 rows=41483 width=62) (actual\ntime=167.644..167.829 rows=100 loops=1)\n Sort Key: ((cube_distance('(-2513120.64361786,\n-4645511.0460328, 3575538.9507084)'::cube, earth_coords) /\n1609.344::double precision))\n Sort Method: top-N heapsort Memory: 20kB\n -> Seq Scan on zipcodes (cost=0.00..1361.24 rows=41483\nwidth=62) (actual time=0.030..90.807 rows=41483 loops=1)\n Total runtime: 168.300 ms\n\n############################################################3\n\n-- Using Earthdistance\nEXPLAIN ANALYZE SELECT zipcode,\n lon_lat <@> '(-118.412426,34.096629)' As radius\n FROM zipcodes\n ORDER BY lon_lat <@> '(-118.412426,34.096629)'\n LIMIT 100;\n\n------------------------------------------------------------\n Limit (cost=2842.99..2843.24 rows=100 width=22) (actual\ntime=187.995..188.451 rows=100 loops=1)\n -> Sort (cost=2842.99..2946.70 rows=41483 width=22) (actual\ntime=187.989..188.149 rows=100 loops=1)\n Sort Key: ((lon_lat <@> '(-118.412426,34.096629)'::point))\n Sort Method: top-N heapsort Memory: 20kB\n -> Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483\nwidth=22) (actual time=0.033..108.203 rows=41483 loops=1)\n Total runtime: 188.660 ms\n\n##########################################\n\nUsing simple point distance, but with no Gist Index:\n\nEXPLAIN ANALYZE SELECT zipcode,\n lon_lat <-> '(-118.412426,34.096629)' As radius\n FROM zipcodes\n ORDER BY lon_lat <-> '(-118.412426,34.096629)'\n LIMIT 100;\n\n--------------------------------------------------------\n Limit (cost=2842.99..2843.24 rows=100 width=22) (actual\ntime=160.574..161.057 rows=100 loops=1)\n -> Sort (cost=2842.99..2946.70 rows=41483 width=22) (actual\ntime=160.568..160.691 rows=100 loops=1)\n Sort Key: ((lon_lat <-> 
'(-118.412426,34.096629)'::point))\n Sort Method: top-N heapsort Memory: 20kB\n -> Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483\nwidth=22) (actual time=0.027..84.610 rows=41483 loops=1)\n Total runtime: 161.226 ms\n\n#########################################\n\n-- Using KNN-GIST index\nEXPLAIN ANALYZE SELECT zipcode,\n lon_lat <-> '(-118.412426,34.096629)' As radius\n FROM zipcodes\n ORDER BY lon_lat <-> '(-118.412426,34.096629)'\n LIMIT 100;\n------------------------------------------------------------------\n Limit (cost=0.00..12.94 rows=100 width=22) (actual time=0.447..1.892\nrows=100 loops=1)\n -> Index Scan using zipcodes_knn on zipcodes (cost=0.00..5365.93\nrows=41483 width=22) (actual time=0.440..1.407 rows=100 loops=1)\n Order By: (lon_lat <-> '(-118.412426,34.096629)'::point)\n Total runtime: 2.198 ms\n\n", "msg_date": "Thu, 17 Feb 2011 10:20:58 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "* Mark Stosberg ([email protected]) wrote:\n> Recommendations?\n\nPostGIS, geometry columns, and UTM.. I'm not sure where they are wrt\nadding KNN support, but it's something they've been anxious to have for\na while, so I expect support will come quickly.\n\n\tThanks,\n\n\t\tStephen", "msg_date": "Thu, 17 Feb 2011 10:24:45 -0500", "msg_from": "Stephen Frost <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "> PostGIS, geometry columns, and UTM.. I'm not sure where they are wrt\n> adding KNN support, but it's something they've been anxious to have for\n> a while, so I expect support will come quickly.\n\nI've looked into this a little more.\n\nOne approach seems to be to project the lat/long pairs on to a flat\nplane using the Albers projection (which would be a one-time\ncalculation), and then the current KNN point/distance calculations could\nbe used.\n\nHere's a Perl module that references the Albers projection (although\nit's not yet clear to me how to use it):\n\nhttp://search.cpan.org/dist/PDL/\n\nAnd a Wikipedia page on various calculation possibilities:\nhttp://en.wikipedia.org/wiki/Geographical_distance#Flat-surface_formulae\n\nFurther suggestions welcome.\n\n Thanks,\n\n Mark\n\n", "msg_date": "Thu, 17 Feb 2011 10:55:53 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "On 17.02.2011 17:20, Mark Stosberg wrote:\n>> I thought the benefit of KNN was that you could retrieve the rows in\n>> distance order, so that a query for the closest 20 locations (for\n>> example) would be very fast. I wouldn't have expected it to be\n>> helpful when you're selecting all the rows regardless of distance.\n>\n> Kevin,\n>\n> Thanks for the feedback. You are right that my \"reduced test case\"\n> wasn't a good approximation. I added a limit, to simulate finding the\n> 100 zipcodes closest to 90210.\n>\n> Below I compare 4 approaches to the same query:\n>\n> 1. Cube search\n> 2. Earth Distance Search\n> 3. Simple point distance (no index)\n> 4. Simple point distance (KNN)\n>\n> Now KNN benchmarks to be almost 100x faster! That's very promising.\n> Then there's only the issue that simple point distance is not expected\n> to be a good enough approximation of earth-distances. 
Perhaps that can\n> be solved by pre-computing coordinates based on the lat/long pairs....\n> much like the map projections used to present a curved surface on a flat\n> map? Given that's OK to be be a few miles off, it seems we have some\n> leeway here.\n>\n> Recommendations?\n\nThe existing opclasses only support distance-to-a-point, but I believe \nthe KNN gist code is flexible enough that it could be used for distance \nto the edge of a shape as well. Someone just needs to write the \noperators and support functions.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 17 Feb 2011 18:41:29 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "\nI tried again to use KNN for a real-world query, and I was able to get\nit to add an approximately 6x speed-up vs the cube search or\nearthdistance methods ( from 300 ms to 50ms ).\n\nI had to make some notable changes for the KNN index to be considered.\n\n- Of course, I had to switch to using basic point/distance calculation.\n As previously noted, this still needs more work to confirm the\n accuracy and get the \"distance\" reported in miles.\n\n- The query planner didn't like it when the \"ORDER BY\" referred to a\n column value instead of a static value, even when I believe it should\n know that the column value never changes. See this pseudo-query where\n we look-up the coordinates for 90210 once:\n\n EXPLAIN ANALYZE\n SELECT pets.pet_id,\n zipcodes.lon_lat <-> center.lon_lat AS radius\n FROM (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') AS\ncenter, pets\n JOIN shelters USING (shelter_id)\n JOIN zipcodes USING (zipcode)\n ORDER BY postal_codes.lon_lat <-> center.lon_lat limit 1000;\n\n This didn't use the KNN index until I changed the \"center.lon_lat\" in\n the ORDER BY to an explicit point value. I'm not sure if that's\n expected, or something I should take up with -hackers.\n\n This could be worked around by doing a initial query to look-up this\n value, and then feed a static value into this query. That's not ideal,\n but the combination would still be faster.\n\n- I had to drop the part of the WHERE clause which restricted the\n results to shelters within 50 miles from the target zipcode. However,\n I could set the \"LIMIT\" so high that I could get back \"enough\" pets,\n and then the application could trim out the results. Or, perhaps\n I could push this query down into a sub-select, and let PostgreSQL\n do a second pass to throw out some of the results.\n\nIn any case, with a real-world speed-up of 6x, this looks like it will\nbe worth it to us to continue to investigate.\n\n\n", "msg_date": "Thu, 17 Feb 2011 11:41:51 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "Heikki Linnakangas <[email protected]> writes:\n> The existing opclasses only support distance-to-a-point, but I believe \n> the KNN gist code is flexible enough that it could be used for distance \n> to the edge of a shape as well. 
Someone just needs to write the \n> operators and support functions.\n\nThe distance has to be exactly computable from the index entry, so you'd\nneed to store the whole shape in the index, not just a bounding box.\nNot sure how practical that will be for complex shapes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2011 14:13:05 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches? " }, { "msg_contents": "Mark Stosberg <[email protected]> writes:\n> - The query planner didn't like it when the \"ORDER BY\" referred to a\n> column value instead of a static value, even when I believe it should\n> know that the column value never changes. See this pseudo-query where\n> we look-up the coordinates for 90210 once:\n\n> EXPLAIN ANALYZE\n> SELECT pets.pet_id,\n> zipcodes.lon_lat <-> center.lon_lat AS radius\n> FROM (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') AS\n> center, pets\n> JOIN shelters USING (shelter_id)\n> JOIN zipcodes USING (zipcode)\n> ORDER BY postal_codes.lon_lat <-> center.lon_lat limit 1000;\n\nAs phrased, that's a join condition, so there's no way that an index on\na single table can possibly satisfy it. You could probably convert it\nto a sub-select though:\n\n ORDER BY postal_codes.lon_lat <-> (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') limit 1000;\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 17 Feb 2011 14:17:17 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches? " }, { "msg_contents": "Mark,\n\nwe investigating pgsphere http://pgsphere.projects.postgresql.org/, if we could add KNN support.\n\n\nOleg\nOn Thu, 17 Feb 2011, Mark Stosberg wrote:\n\n>\n>> I thought the benefit of KNN was that you could retrieve the rows in\n>> distance order, so that a query for the closest 20 locations (for\n>> example) would be very fast. I wouldn't have expected it to be\n>> helpful when you're selecting all the rows regardless of distance.\n>\n> Kevin,\n>\n> Thanks for the feedback. You are right that my \"reduced test case\"\n> wasn't a good approximation. I added a limit, to simulate finding the\n> 100 zipcodes closest to 90210.\n>\n> Below I compare 4 approaches to the same query:\n>\n> 1. Cube search\n> 2. Earth Distance Search\n> 3. Simple point distance (no index)\n> 4. Simple point distance (KNN)\n>\n> Now KNN benchmarks to be almost 100x faster! That's very promising.\n> Then there's only the issue that simple point distance is not expected\n> to be a good enough approximation of earth-distances. Perhaps that can\n> be solved by pre-computing coordinates based on the lat/long pairs....\n> much like the map projections used to present a curved surface on a flat\n> map? 
Given that's OK to be be a few miles off, it seems we have some\n> leeway here.\n>\n> Recommendations?\n>\n> Mark\n>\n> EXPLAIN ANALYZE\n> SELECT zipcode,\n> cube_distance( '(-2513120.64361786, -4645511.0460328,\n> 3575538.9507084)', zipcodes.earth_coords)/1609.344 AS radius\n> FROM zipcodes ORDER BY radius LIMIT 100;\n>\n> ---------------------------------------------------------------\n> Limit (cost=2946.70..2946.95 rows=100 width=62) (actual\n> time=167.650..168.064 rows=100 loops=1)\n> -> Sort (cost=2946.70..3050.40 rows=41483 width=62) (actual\n> time=167.644..167.829 rows=100 loops=1)\n> Sort Key: ((cube_distance('(-2513120.64361786,\n> -4645511.0460328, 3575538.9507084)'::cube, earth_coords) /\n> 1609.344::double precision))\n> Sort Method: top-N heapsort Memory: 20kB\n> -> Seq Scan on zipcodes (cost=0.00..1361.24 rows=41483\n> width=62) (actual time=0.030..90.807 rows=41483 loops=1)\n> Total runtime: 168.300 ms\n>\n> ############################################################3\n>\n> -- Using Earthdistance\n> EXPLAIN ANALYZE SELECT zipcode,\n> lon_lat <@> '(-118.412426,34.096629)' As radius\n> FROM zipcodes\n> ORDER BY lon_lat <@> '(-118.412426,34.096629)'\n> LIMIT 100;\n>\n> ------------------------------------------------------------\n> Limit (cost=2842.99..2843.24 rows=100 width=22) (actual\n> time=187.995..188.451 rows=100 loops=1)\n> -> Sort (cost=2842.99..2946.70 rows=41483 width=22) (actual\n> time=187.989..188.149 rows=100 loops=1)\n> Sort Key: ((lon_lat <@> '(-118.412426,34.096629)'::point))\n> Sort Method: top-N heapsort Memory: 20kB\n> -> Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483\n> width=22) (actual time=0.033..108.203 rows=41483 loops=1)\n> Total runtime: 188.660 ms\n>\n> ##########################################\n>\n> Using simple point distance, but with no Gist Index:\n>\n> EXPLAIN ANALYZE SELECT zipcode,\n> lon_lat <-> '(-118.412426,34.096629)' As radius\n> FROM zipcodes\n> ORDER BY lon_lat <-> '(-118.412426,34.096629)'\n> LIMIT 100;\n>\n> --------------------------------------------------------\n> Limit (cost=2842.99..2843.24 rows=100 width=22) (actual\n> time=160.574..161.057 rows=100 loops=1)\n> -> Sort (cost=2842.99..2946.70 rows=41483 width=22) (actual\n> time=160.568..160.691 rows=100 loops=1)\n> Sort Key: ((lon_lat <-> '(-118.412426,34.096629)'::point))\n> Sort Method: top-N heapsort Memory: 20kB\n> -> Seq Scan on zipcodes (cost=0.00..1257.54 rows=41483\n> width=22) (actual time=0.027..84.610 rows=41483 loops=1)\n> Total runtime: 161.226 ms\n>\n> #########################################\n>\n> -- Using KNN-GIST index\n> EXPLAIN ANALYZE SELECT zipcode,\n> lon_lat <-> '(-118.412426,34.096629)' As radius\n> FROM zipcodes\n> ORDER BY lon_lat <-> '(-118.412426,34.096629)'\n> LIMIT 100;\n> ------------------------------------------------------------------\n> Limit (cost=0.00..12.94 rows=100 width=22) (actual time=0.447..1.892\n> rows=100 loops=1)\n> -> Index Scan using zipcodes_knn on zipcodes (cost=0.00..5365.93\n> rows=41483 width=22) (actual time=0.440..1.407 rows=100 loops=1)\n> Order By: (lon_lat <-> '(-118.412426,34.096629)'::point)\n> Total runtime: 2.198 ms\n>\n>\n>\n\n \tRegards,\n \t\tOleg\n_____________________________________________________________\nOleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),\nSternberg Astronomical Institute, Moscow University, Russia\nInternet: [email protected], http://www.sai.msu.su/~megera/\nphone: +007(495)939-16-83, +007(495)939-23-83\n", "msg_date": "Thu, 17 Feb 2011 23:17:52 +0300 (MSK)", 
"msg_from": "Oleg Bartunov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "On 02/17/2011 03:17 PM, Oleg Bartunov wrote:\n> Mark,\n> \n> we investigating pgsphere http://pgsphere.projects.postgresql.org/, if\n> we could add KNN support.\n\nGreat, thanks Oleg.\n\nI'll be happy to test it when something is ready.\n\n Mark\n\n", "msg_date": "Thu, 17 Feb 2011 15:38:30 -0500", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "On Thu, Feb 17, 2011 at 11:17 AM, Tom Lane <[email protected]> wrote:\n> Mark Stosberg <[email protected]> writes:\n>> - The query planner didn't like it when the \"ORDER BY\" referred to a\n>>   column value instead of a static value, even when I believe it should\n>>   know that the column value never changes. See this pseudo-query where\n>>   we look-up the coordinates for 90210 once:\n>\n>>   EXPLAIN ANALYZE\n>>   SELECT pets.pet_id,\n>>       zipcodes.lon_lat <-> center.lon_lat AS radius\n>>       FROM (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') AS\n>> center, pets\n>>       JOIN shelters USING (shelter_id)\n>>       JOIN zipcodes USING (zipcode)\n>>        ORDER BY postal_codes.lon_lat <-> center.lon_lat limit 1000;\n>\n> As phrased, that's a join condition, so there's no way that an index on\n> a single table can possibly satisfy it.  You could probably convert it\n> to a sub-select though:\n>\n>       ORDER BY postal_codes.lon_lat <-> (SELECT lon_lat FROM zipcodes WHERE zipcode = '90210') limit 1000;\n>\n>                        regards, tom lane\n\nWould pushing that subquery to a WITH clause be helpful at all?\n", "msg_date": "Thu, 17 Feb 2011 15:26:05 -0800", "msg_from": "bricklen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: application of KNN code to US zipcode searches?" }, { "msg_contents": "\nHello,\n\nI want to report that I have now solved the challenges I ran into using\nKNN for US zipcode searching. I've found the new approach to not only be\nviable, but to benchmark about 3x faster for our own real-world\napplication than the previous approach we used, involving\ncube_distance() and earth_box().\n\nHere's some details about my research so far.\n\nTo evaluate it, I installed PostgreSQL 9.1 and a current PostGIS 2.0\nsnapshot (not yet released as stable).\n\nA primary challenge I had to solve was that KNN is designed for a\nslightly different problem than what I needed to solve. I need to answer\nthe question:\n\n \"What are all the objects that are in zipcodes with 50 miles of a given\nzipcode?\"\n\nHowever, KNN only directly provides a performance boost to this\nvariation:\n\n \"What are the N nearest objects to this point?\"\n\nJust adding a \"WHERE clause\" to check the 50 mile rule would erase the\nbenefits of KNN, which works through an \"ORDER BY\" clause.\n\nI solved my issue by using a \"WITH\" clause that creates a pseudo-table\ncalled \"nearby_zipcodes\". In this example, I select all the zipcodes\nthat are within 50 miles of the \"47374\" zipcode. The trick I've\nemployed is that I've set the LIMIT value to 286-- exactly the number of\nzipcodes within 50 miles of 47374. My plan is to add another column to\nmy \"zipcodes\" table for each of the small number distances I need to\nsearch. 
Then, when I load new zipcodes I can pre-compute how many\nzipcodes would be found at this distance.\n\nThis have approach would not have worked without a \"WITH\" clause, or\nsome equivalent, because the number of objects within the radius is not\nknown, but the number of nearby zipcodes is fixed.\n\nThis approach allows me to get the performance benefits of KNN, while\nalso returning exactly those objects within 50 miles of my\ntarget zipcode, by JOINing on the \"nearby_zipcodes\" table:\n\n WITH nearby_zipcodes AS (\n SELECT zipcode,\n st_distance_sphere(lonlat_point, (SELECT lonlat_point from\nzipcodes WHERE zipcode = '47374')) / 1609.344 as radius\n FROM zipcodes\n ORDER BY lonlat_point <-> (SELECT lonlat_point from zipcodes WHERE\nzipcode = '47374')\n LIMIT 286\n )\n SELECT ...\n\nYou might also notice that \"st_distance_sphere()\" doesn't mean exactly\nthe same thing as the \"<->\" operator. That's something I could refine\ngoing forward.\n\nThat's what I've got so far. How could I improve this further?\n\nFor reference, here are the key parts of the \"zipcodes\" table:\n\n# \\d zipcodes\n Table \"public.zipcodes\"\n Column | Type | Modifiers\n--------------+-----------------------+-----------\n zipcode | character varying(5) | not null\n lonlat_point | geometry(Point,4326) |\nIndexes:\n \"zipcodes_pkey\" PRIMARY KEY, btree (zipcode)\n \"lonlat_point_idx\" gist (lonlat_point)\n\nThanks for the peer review!\n\n Mark Stosberg\n\n", "msg_date": "Sat, 29 Oct 2011 10:45:22 -0400", "msg_from": "Mark Stosberg <[email protected]>", "msg_from_op": true, "msg_subject": "Re: application of KNN code to US zipcode searches?" } ]
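Taking Tom's sub-select suggestion together with Mark's pets query gives a sketch like the one below. The table and column names (pets, shelters, zipcodes, lon_lat, the zipcodes_knn index) are the ones quoted in the thread; whether the planner actually picks the KNN path for the joined form is an assumption to verify with EXPLAIN ANALYZE on real data, and the <-> result is in planar degrees rather than miles, so any radius math still needs the projection or st_distance_sphere() step Mark describes.

-- KNN-capable GiST index on the point column (ordered index scans need 9.1+)
CREATE INDEX zipcodes_knn ON zipcodes USING gist (lon_lat);

-- Compute the center point once in a sub-select so the ORDER BY is no
-- longer a join condition; the index scan can then return rows in
-- distance order and the LIMIT stops it early.
SELECT pets.pet_id,
       zipcodes.lon_lat <-> (SELECT lon_lat FROM zipcodes
                              WHERE zipcode = '90210') AS radius
  FROM pets
  JOIN shelters USING (shelter_id)
  JOIN zipcodes USING (zipcode)
 ORDER BY zipcodes.lon_lat <-> (SELECT lon_lat FROM zipcodes
                                 WHERE zipcode = '90210')
 LIMIT 1000;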
[ { "msg_contents": "Hi,\n\nI have a java application which generates inperformant query plans.\nI checked the query plan from the java application via auto_explain module\nand I compared the plan which I generate in psql.\nThey are different and I have no idea how I can convince the java\napplication to use the index.\n\nthe query plan i generate via psql is:\n\ntest=# prepare s as SELECT COUNT(1) AS AMOUNT\n\ntest-# FROM NNDB.POI_LOCATION P\n\ntest-# WHERE P.LON BETWEEN $1 AND $2\n\ntest-# AND P.LAT BETWEEN $3 AND $4 limit $5;\n\nPREPARE\n\ntest=# explain execute s(994341, 994377, 5355822, 5355851, 1);\n\n QUERY PLAN\n\n-------------------------------------------------------------------------------------------------\n\nLimit (cost=17.09..17.10 rows=1 width=0)\n\n -> Aggregate (cost=17.09..17.10 rows=1 width=0)\n\n -> Bitmap Heap Scan on poi_location p (cost=9.42..17.08 rows=2\nwidth=0)\n\n Recheck Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1)\nAND (lon <= $2))\n\n -> Bitmap Index Scan on nx_poilocation_lat_lon\n(cost=0.00..9.42 rows=2 width=0)\n\n Index Cond: ((lat >= $3) AND (lat <= $4) AND (lon >=\n$1) AND (lon <= $2))\n\n(6 rows)\n\nthe query plan from the java application is:\n\n2011-02-18 15:10:02 CET LOG: duration: 25.180 ms plan:\n\n Limit (cost=2571.79..2571.80 rows=1 width=0) (actual\ntime=25.172..25.172 rows=1 loops=1)\n\n Output: (count(1))\n\n -> Aggregate (cost=2571.79..2571.80 rows=1 width=0) (actual\ntime=25.171..25.171 rows=1 loops=1)\n\n Output: count(1)\n\n -> Seq Scan on poi_location p (cost=0.00..2571.78 rows=2\nwidth=0) (actual time=25.168..25.168 rows=0 loops=1)\n\n Output: location_id, road_link_id, link_id, side,\npercent_from_ref, lat, lon, location_type\n\n Filter: (((lon)::double precision >= $1) AND\n((lon)::double precision <= $2) AND ((lat)::double precision >= $3) AND\n((lat)::double precision <= $4))\n\nI checked that neither the java application or the psql client uses any evil\nnon-default settings like enable_*\nset enable_idxscan=off\n\nAny hints may help.\n\nbest...\nUwe\n\nHi,I have a java application which generates inperformant query plans.I checked the query plan from the java application via auto_explain module and I compared the plan which I generate in psql.They are different and I have no idea how I can convince the java application to use the index.\nthe query plan i generate via psql is:test=# prepare s as  SELECT COUNT(1) AS AMOUNTtest-# FROM NNDB.POI_LOCATION P\ntest-# WHERE P.LON BETWEEN $1 AND $2test-# AND P.LAT BETWEEN $3 AND $4 limit $5;PREPARE\ntest=# explain execute s(994341, 994377, 5355822, 5355851, 1);                                           QUERY PLAN\n------------------------------------------------------------------------------------------------- Limit  (cost=17.09..17.10 rows=1 width=0)\n   ->  Aggregate  (cost=17.09..17.10 rows=1 width=0)         ->  Bitmap Heap Scan on poi_location p  (cost=9.42..17.08 rows=2 width=0)\n               Recheck Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1) AND (lon <= $2))               ->  Bitmap Index Scan on nx_poilocation_lat_lon  (cost=0.00..9.42 rows=2 width=0)\n                     Index Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1) AND (lon <= $2))(6 rows)the query plan from the java application is:\n2011-02-18 15:10:02 CET LOG:  duration: 25.180 ms  plan:        Limit  (cost=2571.79..2571.80 rows=1 width=0) (actual time=25.172..25.172 rows=1 loops=1)\n          Output: (count(1))          ->  Aggregate  (cost=2571.79..2571.80 rows=1 width=0) (actual time=25.171..25.171 
rows=1 loops=1)\n                Output: count(1)               \n ->  Seq Scan on poi_location p  (cost=0.00..2571.78 rows=2 width=0) \n(actual time=25.168..25.168 rows=0 loops=1)                      Output: location_id, road_link_id, link_id, side, percent_from_ref, lat, lon, location_type\n \n                     Filter: (((lon)::double precision >= $1) AND \n((lon)::double precision <= $2) AND ((lat)::double precision >= \n$3) AND ((lat)::double precision <= $4))I checked that neither the java application or the psql client uses any evil non-default settings like enable_*set enable_idxscan=offAny hints may help.\nbest...Uwe", "msg_date": "Fri, 18 Feb 2011 15:29:01 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "different clients, different query plans" }, { "msg_contents": "Uwe Bartels <[email protected]> wrote:\n \n> I have a java application which generates inperformant query\n> plans.\n \n> Index Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1) AND (lon\n> <= $2))\n \n> Filter: (((lon)::double precision >= $1) AND ((lon)::double\n> precision <= $2) AND ((lat)::double precision >= $3) AND\n> ((lat)::double precision <= $4))\n \nIt is the cast of the table columns to double precision which is\ntaking the index out of play.\n \nWhat are the data types of those columns? What does the code look\nlike where you're setting the values for the parameters? If nothing\nelse, writing the query so that the parameters are cast to the right\ntype before use might solve the problem, but I would start by\nlooking at the object classes used in the Java app.\n \n-Kevin\n", "msg_date": "Fri, 18 Feb 2011 08:58:29 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: different clients, different query plans" }, { "msg_contents": "the types are integer.\nexcellent!\nyou saved my weekend.\n\nUwe\n\nUwe Bartels\nSystemarchitect - Freelancer\nmailto: [email protected]\ntel: +49 172 3899006\nprofile: https://www.xing.com/profile/Uwe_Bartels\nwebsite: http://www.uwebartels.com\n\n\n\nOn 18 February 2011 15:58, Kevin Grittner <[email protected]>wrote:\n\n> Uwe Bartels <[email protected]> wrote:\n>\n> > I have a java application which generates inperformant query\n> > plans.\n>\n> > Index Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1) AND (lon\n> > <= $2))\n>\n> > Filter: (((lon)::double precision >= $1) AND ((lon)::double\n> > precision <= $2) AND ((lat)::double precision >= $3) AND\n> > ((lat)::double precision <= $4))\n>\n> It is the cast of the table columns to double precision which is\n> taking the index out of play.\n>\n> What are the data types of those columns? What does the code look\n> like where you're setting the values for the parameters? 
If nothing\n> else, writing the query so that the parameters are cast to the right\n> type before use might solve the problem, but I would start by\n> looking at the object classes used in the Java app.\n>\n> -Kevin\n>\n\nthe types are integer.excellent!you saved my weekend.UweUwe BartelsSystemarchitect - Freelancermailto: [email protected]\ntel: +49 172 3899006profile: https://www.xing.com/profile/Uwe_Bartelswebsite: http://www.uwebartels.com\n\nOn 18 February 2011 15:58, Kevin Grittner <[email protected]> wrote:\nUwe Bartels <[email protected]> wrote:\n\n> I have a java application which generates inperformant query\n> plans.\n\n> Index Cond: ((lat >= $3) AND (lat <= $4) AND (lon >= $1) AND (lon\n> <= $2))\n\n> Filter: (((lon)::double precision >= $1) AND ((lon)::double\n> precision <= $2) AND ((lat)::double precision >= $3) AND\n> ((lat)::double precision <= $4))\n\nIt is the cast of the table columns to double precision which is\ntaking the index out of play.\n\nWhat are the data types of those columns?  What does the code look\nlike where you're setting the values for the parameters?  If nothing\nelse, writing the query so that the parameters are cast to the right\ntype before use might solve the problem, but I would start by\nlooking at the object classes used in the Java app.\n\n-Kevin", "msg_date": "Fri, 18 Feb 2011 16:06:12 +0100", "msg_from": "Uwe Bartels <[email protected]>", "msg_from_op": true, "msg_subject": "Re: different clients, different query plans" } ]
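The punch line of this thread is the parameter types: the columns are integer, but the JDBC side was binding double precision values, which is why the bad plan shows (lon)::double precision and skips the index. The cleaner fix is in the Java code (bind with PreparedStatement.setInt so the parameter type matches the column); Kevin's query-side alternative looks roughly like this, reusing the statement Uwe posted:

PREPARE s AS
SELECT COUNT(1) AS AMOUNT
  FROM NNDB.POI_LOCATION P
 WHERE P.LON BETWEEN CAST($1 AS integer) AND CAST($2 AS integer)  -- cast the parameters,
   AND P.LAT BETWEEN CAST($3 AS integer) AND CAST($4 AS integer)  -- not the columns
 LIMIT $5;

-- With the casts on the parameters the comparisons stay integer vs integer,
-- so the (lat, lon) index remains usable even when the driver declares
-- $1..$4 as double precision.
EXECUTE s(994341, 994377, 5355822, 5355851, 1);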
[ { "msg_contents": "Hello,\n\nWhen executing huge (10kb), hibernate-generated queries I noticed that\nwhen executed remotly over high-latency network (ping to server\n200-400ms), the query takes a lot longer to complete.\n\nWhen the query is executed remotly (psql or jdbc) it takes 1800ms to\nexecute, when I issue the query in an ssh terminal, I see the results\nalmost immediatly.\nSo although I should see the same latency over ssh , its way faster over ssh.\nThe transmitted data is small (the wireshard-file has 22kb, attached),\nand even though the umts-network is high-latency its relativly high\nbandwith (~512kbit/s up, ~2mbit/s down).\n\nAny idea whats causing this? Maybe too small buffers somewhere?\nFor me it poses problem, because I am working on a 2-Tier java\napplication which should connect to postgres remotly - however with\nevery more complex query taking 2s its almost unuseable over wireless\nnetworks (umts).\n\nThank you in advance, Clemens\n", "msg_date": "Sat, 19 Feb 2011 11:30:42 +0100", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Slow query execution over high latency network" }, { "msg_contents": ">\n> When executing huge (10kb), hibernate-generated queries I noticed that\n> when executed remotly over high-latency network (ping to server\n> 200-400ms), the query takes a lot longer to complete.\n>\n> When the query is executed remotly (psql or jdbc) it takes 1800ms to\n> execute, when I issue the query in an ssh terminal, I see the results\n> almost immediatly.\n> So although I should see the same latency over ssh , its way faster over \n> ssh.\n> The transmitted data is small (the wireshard-file has 22kb, attached),\n> and even though the umts-network is high-latency its relativly high\n> bandwith (~512kbit/s up, ~2mbit/s down).\n\n\nWell, if your upload bandwidth is really 512 kbits, uncompressed \ntransmission of your query text should take about 0.2s, not too bad. SSH \nnormally uses compression, so it should be a lor faster.\n\nYour attached file didn't come through.\n\nAnyway, there are several options :\n\n- different execution plan between your app and ssh+psql, which can happen \nif the planning uses/doesn't use your specific parameters, or if some \nwrong type bindings in your app force postgres not to use an index \n(there's been a few messages on that lately, check the archives).\n\n- dumb client versus smart client :\n\nsmart client : use the protocol which sends the query text + parameters + \nprepare + execute in 1 TCP message, 1 ping, postgres works, 1 ping, get \nreply\ndumb client :\n- send prepare\n- wait for reply\n- send execute\n- wait for reply\n- send \"gimme result\"\n- wait for reply\n- etc\n\n> Any idea whats causing this? Maybe too small buffers somewhere?\n> For me it poses problem, because I am working on a 2-Tier java\n> application which should connect to postgres remotly - however with\n> every more complex query taking 2s its almost unuseable over wireless\n> networks (umts).\n\nIf you want to ensure the fastest response time you need to ensure than \none user action (click) needs one and only one roundtrip to the server \nbefore all the results are displayed. If said action needs 2 SQL queries, \nit ain't possible, unless (maybe) you use the asynchronous query protocol. 
\nYou can also stuff multiple queries in stored procedures (but Hibernate \nwon't be able to generate them obviously).\n\nOne solution could be to put the database handling stuff inside an \nappserver, make your app communicate to it with a low-overhead RPC \nprotocol (ie, not raw uncompressed XML) that minimizes the number of \nroudtrips, and compresses data thoroughly.\n", "msg_date": "Sat, 19 Feb 2011 13:23:18 +0100", "msg_from": "\"Pierre C\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query execution over high latency network" }, { "msg_contents": "Hi Pierre,\n\nThanks a lot for your reply.\n\n> Your attached file didn't come through.\nHmm, ok.\nI uploaded the wireshark-log to: http://93.190.88.182/psql_large_query.bin\n\n> - different execution plan between your app and ssh+psql, which can happen\n> if the planning uses/doesn't use your specific parameters,\n\nIts both times (ssh and remote psql) exactly the same query - I copied\nthe SQL generated by hibernate and executed it in psql. And although\nit has many columns (~210) the result-set is only about 5 rows and am\nsure not larger than a few kb.\n\n> - dumb client versus smart client :\n> smart client : use the protocol which sends the query text + parameters +\n> prepare + execute in 1 TCP message, 1 ping, postgres works, 1 ping, get\n> reply\n\nSo are both psql and the jdbc driver dumb clients?\nOr are there only buffers somewhere too small and therefor data is\nsent in many smal batches.\nI thought one query would more or less equal to one roundtrip, right?\nMaybe I should ask on the pgsql-jdbc list.\n\n> If you want to ensure the fastest response time you need to ensure than one\n> user action (click) needs one and only one roundtrip to the server before\n> all the results are displayed\n> One solution could be to put the database handling stuff inside an\n> appserver, make your app communicate to it with a low-overhead RPC protocol\n> (ie, not raw uncompressed XML) that minimizes the number of roudtrips, and\n> compresses data thoroughly.\n\nI use well tuned hibernate fetch profiles to ensure fewest possible roundtrips,\nhowever I am not getting paid well enough to create an appserver tier ;)\n\nThanks, Clemens\n", "msg_date": "Sat, 19 Feb 2011 14:05:11 +0100", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query execution over high latency network" }, { "msg_contents": "On 20 February 2011 02:05, Clemens Eisserer <[email protected]> wrote:\n> I use well tuned hibernate fetch profiles to ensure fewest possible roundtrips,\n> however I am not getting paid well enough to create an appserver tier ;)\n\nI just had a brief glance over your tcpdump data ... are you sure\nhibernate isn't using a cursor to fetch each row individually?\n\n\nCheers,\nAndrej\n\n\n\n-- \nPlease don't top post, and don't use HTML e-Mail :}  Make your quotes concise.\n\nhttp://www.georgedillon.com/web/html_email_is_evil.shtml\n", "msg_date": "Mon, 21 Feb 2011 06:03:11 +1300", "msg_from": "Andrej <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow query execution over high latency network" }, { "msg_contents": "Hi Andrej,\n\nThanks a lot for taking a loot at the tcpdump data.\n\n> I just had a brief glance over your tcpdump data ... are you sure\n> hibernate isn't using a cursor to fetch each row individually?\n\nPretty sure, yes. 
I get the same performance when executing the\nhibernate-generated query using JDBC,\neven setting a large fetch-size doesn't improve the situation:\n\n> \t st.setFetchSize(100);\n>\t st.setFetchDirection(ResultSet.FETCH_FORWARD);\n\nCould it be jdbc driver struggles with the huge number of columns (~500)?\n\nThanks, Clemens\n", "msg_date": "Mon, 21 Feb 2011 22:08:32 +0100", "msg_from": "Clemens Eisserer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow query execution over high latency network" } ]
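The thread trails off without a confirmed culprit, but Pierre's two concrete suggestions are easy to prototype: cut the number of round trips per user action, and stop shipping the ~10 kB of generated SQL on every call (at 512 kbit/s that text alone is roughly 0.2 s of uplink). A hypothetical sketch of wrapping one such query in a server-side function so the client only sends a short call plus parameters; all names below (orders, order_overview, the columns) are made up for illustration, not taken from the thread.

-- The body here stands in for the big generated statement.
CREATE OR REPLACE FUNCTION order_overview(p_customer integer)
RETURNS TABLE (order_id integer, created timestamp with time zone, total numeric)
LANGUAGE sql STABLE AS $$
    SELECT o.order_id, o.created, o.total
      FROM orders o
     WHERE o.customer_id = $1
     ORDER BY o.created DESC;
$$;

-- One short statement, one round trip:
SELECT * FROM order_overview(42);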
[ { "msg_contents": "I am trying to clean up our schema by removing any indices which are not\nbeing used frequently or at all.\n\nUsing pgadmin, looking at the statistics for an index, I see various\npieces of information:\n\n \n\nIndex Scans, Index Tuples Read, Index Tuples Fetched, Index Blocks Read,\nand Index Blocks Hit.\n\nI have on index with the following statistics:\n\n \n\nIndex Scans 0 \n\nIndex Tuples Read 0 \n\nIndex Tuples Fetched 0 \n\nIndex Blocks Read 834389 \n\nIndex Blocks Hit 247283300 \n\nIndex Size 1752 kB \n\n \n\n \n\nSince there are no index scans, would it be safe to remove this one?\n\n \n\n\nI am trying to clean up our schema by removing any indices which are not being used frequently or at all.Using pgadmin, looking at the statistics for an index, I see various pieces of information: Index Scans, Index Tuples Read, Index Tuples Fetched, Index Blocks Read, and Index Blocks Hit.I have on index with the following statistics: Index Scans        0              Index Tuples Read           0              Index Tuples Fetched    0              Index Blocks Read           834389  Index Blocks Hit                247283300           Index Size           1752 kB   Since there are no index scans, would it be safe to remove this one?", "msg_date": "Wed, 23 Feb 2011 12:12:42 -0700", "msg_from": "\"Benjamin Krajmalnik\" <[email protected]>", "msg_from_op": true, "msg_subject": "Unused indices" }, { "msg_contents": "Benjamin Krajmalnik wrote:\n>\n>\n> Index Scans 0 \n>\n> Index Tuples Read 0 \n>\n> Index Tuples Fetched 0 \n>\n> Index Blocks Read 834389 \n>\n> Index Blocks Hit 247283300 \n>\n> Index Size 1752 kB\n>\n> \n>\n> \n>\n> Since there are no index scans, would it be safe to remove this one?\n>\n\nYes. The block usage you're seeing there reflects the activity from \nmaintaining the index. But since it isn't ever being used for queries, \nwith zero scans and zero rows it's delivered to clients, it's not doing \nyou any good. Might as well reclaim your 1.7MB of disk space and reduce \noverhead by removing it.\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nBenjamin Krajmalnik wrote:\n\n\n\n\n\n\n\nIndex Scans        0              \nIndex Tuples Read           0              \nIndex Tuples Fetched    0              \nIndex Blocks Read           834389  \nIndex Blocks Hit               \n247283300           \nIndex Size           1752 kB \n \n \nSince there are no index scans, would it be safe\nto remove this one?\n\n\n\n\nYes.  The block usage you're seeing there reflects the activity from\nmaintaining the index.  But since it isn't ever being used for queries,\nwith zero scans and zero rows it's delivered to clients, it's not doing\nyou any good.  Might as well reclaim your 1.7MB of disk space and\nreduce overhead by removing it.\n\n\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 23 Feb 2011 16:17:16 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused indices" }, { "msg_contents": "On 02/23/2011 03:17 PM, Greg Smith wrote:\n\n> Yes. The block usage you're seeing there reflects the activity from\n> maintaining the index. 
But since it isn't ever being used for\n> queries, with zero scans and zero rows it's delivered to clients,\n\nNice to know. To that end, here's a query that will find every unused \nindex in your database:\n\nSELECT i.schemaname, i.relname, i.indexrelname, c.relpages*8 indsize\n FROM pg_stat_user_indexes i\n JOIN pg_class c on (i.indexrelid=c.oid)\n JOIN pg_index ix ON (i.indexrelid=ix.indexrelid)\n WHERE i.idx_scan = 0\n AND i.idx_tup_read = 0\n AND i.schemaname NOT IN ('zzz', 'archive')\n AND NOT ix.indisprimary\n AND c.relpages > 0\n ORDER BY indsize DESC;\n\nI noticed with our database that without the indisprimary clause, we had \nanother 4GB of unused indexes. Clearly we need to look at those tables \nin general, but this will find all the \"safe\" indexes for removal.\n\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 24 Feb 2011 08:25:19 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused indices" }, { "msg_contents": "Shaun Thomas wrote:\n> I noticed with our database that without the indisprimary clause, we \n> had another 4GB of unused indexes.\n\nThat's not quite the right filter. You want to screen out everything \nthat isn't a unique index, not just the primary key ones. You probably \ncan't drop any of those without impacting database integrity.\n\nAlso, as a picky point, you really should use functions like \npg_relation_size instead of doing math on relpages. Your example breaks \non PostgreSQL builds that change the page size, and if you try to \ncompute bytes that way it will overflow on large tables unless you start \ncasting things to int8.\n\nHere's the simplest thing that does something useful here, showing all \nof the indexes on the system starting with the ones that are unused:\n\nSELECT\n schemaname,\n relname,\n indexrelname,\n idx_scan,\n pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size\nFROM\n pg_stat_user_indexes i\n JOIN pg_index USING (indexrelid)\nWHERE\n indisunique IS false\nORDER BY idx_scan,relname;\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n", "msg_date": "Thu, 24 Feb 2011 13:13:12 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused indices" }, { "msg_contents": "On 02/24/2011 12:13 PM, Greg Smith wrote:\n\n> That's not quite the right filter. You want to screen out\n> everything that isn't a unique index, not just the primary key ones.\n> You probably can't drop any of those without impacting database\n> integrity.\n\nAh yes. I was considering adding the clause for unique indexes. Filthy \nconstraint violations.\n\n> Also, as a picky point, you really should use functions like\n> pg_relation_size instead of doing math on relpages.\n\nYou know, I always think about that, but I'm essentially lazy. :) I \npersonally haven't ever had the old *8 trick fail, but from your \nperspective of working with so many variations, I could see how you'd \nwant to avoid it.\n\nI'll be good from now on. ;)\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 24 Feb 2011 13:36:32 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Unused indices" } ]
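Combining Shaun's idea with Greg's corrections, a convenient variant is to have the query emit the DROP statements for review as well. Two cautions before acting on it: idx_scan only covers the period since the statistics were last reset, and unique/primary-key indexes are excluded because they enforce constraints even if never scanned.

SELECT schemaname,
       relname,
       indexrelname,
       idx_scan,
       pg_size_pretty(pg_relation_size(i.indexrelid)) AS index_size,
       'DROP INDEX ' || quote_ident(schemaname) || '.'
                     || quote_ident(indexrelname) || ';' AS drop_statement
  FROM pg_stat_user_indexes i
  JOIN pg_index USING (indexrelid)
 WHERE idx_scan = 0
   AND NOT indisunique
 ORDER BY pg_relation_size(i.indexrelid) DESC;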
[ { "msg_contents": "Hi all,\n\nRunning PostgreSQL 8.4.7 (backport package from Debian Lenny).\n\nI have some queries that are based on views, and an engine adds a few\nclauses (like NULLS LAST). One of these queries has a performance problem.\n\nThe simplified form is this:\n\nshs=# explain analyze select * from performance e JOIN part v ON\nv.performance_id = e.id order by e.creation_date desc limit 10;\n\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..4.25 rows=10 width=312) (actual time=0.078..0.147\nrows=10 loops=1)\n -> Nested Loop (cost=0.00..62180.28 rows=146294 width=312) (actual\ntime=0.078..0.145 rows=10 loops=1)\n -> Index Scan Backward using performance_create_idx on performance\ne (cost=0.00..12049.21 rows=145379 width=247) (actual time=0.051..0.087\nrows=10 loops=1)\n -> Index Scan using part_performance_idx on part v\n (cost=0.00..0.33 rows=1 width=65) (actual time=0.005..0.005 rows=1\nloops=10)\n Index Cond: (v.performance_id = e.id)\n Total runtime: 0.205 ms\n\ncreation_date is declared as NOT NULL, and since it's a inner join,\ncreation_date can never be null in this query. I'd think that if I add NULLS\nLAST, it wouldn't have any effect.\n\nHowever, the query with NULLS LAST (as generated by the engine):\n\nshs=# explain analyze select * from performance e JOIN part v ON\nv.performance_id = e.id order by e.creation_date desc nulls last limit 10;\n\nQUERY PLAN\n\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=25773.76..25773.79 rows=10 width=312) (actual\ntime=492.959..492.963 rows=10 loops=1)\n -> Sort (cost=25773.76..26139.50 rows=146294 width=312) (actual\ntime=492.958..492.962 rows=10 loops=1)\n Sort Key: e.creation_date\n Sort Method: top-N heapsort Memory: 27kB\n -> Merge Join (cost=1.27..22612.40 rows=146294 width=312) (actual\ntime=0.064..367.160 rows=146294 loops=1)\n Merge Cond: (e.id = v.performance_id)\n -> Index Scan using performance_pkey on performance e\n (cost=0.00..11989.20 rows=145379 width=247) (actual time=0.035..160.838\nrows=145379 loops=1)\n -> Index Scan using part_performance_idx on part v\n (cost=0.00..8432.35 rows=146294 width=65) (actual time=0.025..91.084\nrows=146294 loops=1)\n Total runtime: 493.062 ms\n\nBoth tables have around 150k rows as you can read from the last plan.\n\nTable performance:\n\n Table \"public.performance\"\n Column | Type |\n Modifiers\n-----------------+--------------------------+----------------------------------------------------------\n created_by | integer | not null\n creation_date | timestamp with time zone | not null\n comments | text |\n owned_by | integer | not null\n id | integer | not null default\nnextval('performance_id_seq'::regclass)\n title | text |\n title_ | text |\n performer_id | integer |\n first_medium_id | integer |\n vperf_id | integer |\n perf_date | partial_date |\n bonustrack | boolean | not null default false\n type_id | integer | not null\n instrumental | boolean | not null default false\n init_rev_level | smallint | not null default 1\n curr_rev_level | smallint | not null default 1\n revision_date | timestamp with time zone |\n revised_by | integer |\n object_type | text | not null default\n'performance'::text\n editor_note | text |\n active | boolean | not null default true\nIndexes:\n 
\"performance_pkey\" PRIMARY KEY, btree (id)\n \"performance_create_idx\" btree (creation_date)\n \"performance_medium_idx\" btree (first_medium_id)\n \"performance_own_idx\" btree (owned_by)\n \"performance_performer_idx\" btree (performer_id)\n\nTable part:\n\n Table \"public.part\"\n Column | Type | Modifiers\n\n----------------+--------------------------+---------------------------------------------------\n created_by | integer | not null\n creation_date | timestamp with time zone |\n comments | text |\n owned_by | integer | not null\n id | integer | not null default\nnextval('part_id_seq'::regclass)\n work_id | integer | not null\n performance_id | integer | not null\nIndexes:\n \"part_pkey\" PRIMARY KEY, btree (id)\n \"part_own_idx\" btree (owned_by)\n \"part_performance_idx\" btree (performance_id)\n \"part_work_idx\" btree (work_id)\n\nPlease advise!\n\nThanks.\n\nKind regards,\n\nMathieu\n\nHi all,Running PostgreSQL 8.4.7 (backport package from Debian Lenny).I have some queries that are based on views, and an engine adds a few clauses (like NULLS LAST). One of these queries has a performance problem.\nThe simplified form is this:shs=# explain analyze select * from performance e JOIN part v ON v.performance_id = e.id order by e.creation_date desc limit 10;\n                                                                              QUERY PLAN                                                                               -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=0.00..4.25 rows=10 width=312) (actual time=0.078..0.147 rows=10 loops=1)   ->  Nested Loop  (cost=0.00..62180.28 rows=146294 width=312) (actual time=0.078..0.145 rows=10 loops=1)\n\n         ->  Index Scan Backward using performance_create_idx on performance e  (cost=0.00..12049.21 rows=145379 width=247) (actual time=0.051..0.087 rows=10 loops=1)         ->  Index Scan using part_performance_idx on part v  (cost=0.00..0.33 rows=1 width=65) (actual time=0.005..0.005 rows=1 loops=10)\n               Index Cond: (v.performance_id = e.id) Total runtime: 0.205 mscreation_date is declared as NOT NULL, and since it's a inner join, creation_date can never be null in this query. 
I'd think that if I add NULLS LAST, it wouldn't have any effect.\nHowever, the query with NULLS LAST (as generated by the engine):shs=# explain analyze select * from performance e JOIN part v ON v.performance_id = e.id order by e.creation_date desc nulls last limit 10;\n                                                                             QUERY PLAN                                                                             --------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit  (cost=25773.76..25773.79 rows=10 width=312) (actual time=492.959..492.963 rows=10 loops=1)   ->  Sort  (cost=25773.76..26139.50 rows=146294 width=312) (actual time=492.958..492.962 rows=10 loops=1)\n         Sort Key: e.creation_date         Sort Method:  top-N heapsort  Memory: 27kB         ->  Merge Join  (cost=1.27..22612.40 rows=146294 width=312) (actual time=0.064..367.160 rows=146294 loops=1)\n               Merge Cond: (e.id = v.performance_id)               ->  Index Scan using performance_pkey on performance e  (cost=0.00..11989.20 rows=145379 width=247) (actual time=0.035..160.838 rows=145379 loops=1)\n               ->  Index Scan using part_performance_idx on part v  (cost=0.00..8432.35 rows=146294 width=65) (actual time=0.025..91.084 rows=146294 loops=1) Total runtime: 493.062 ms\nBoth tables have around 150k rows as you can read from the last plan.Table performance:                                      Table \"public.performance\"\n     Column      |           Type           |                        Modifiers                         -----------------+--------------------------+----------------------------------------------------------\n created_by      | integer                  | not null creation_date   | timestamp with time zone | not null comments        | text                     |  owned_by        | integer                  | not null\n id              | integer                  | not null default nextval('performance_id_seq'::regclass) title           | text                     |  title_          | text                     | \n performer_id    | integer                  |  first_medium_id | integer                  |  vperf_id        | integer                  |  perf_date       | partial_date             | \n bonustrack      | boolean                  | not null default false type_id         | integer                  | not null instrumental    | boolean                  | not null default false\n init_rev_level  | smallint                 | not null default 1 curr_rev_level  | smallint                 | not null default 1 revision_date   | timestamp with time zone |  revised_by      | integer                  | \n object_type     | text                     | not null default 'performance'::text editor_note     | text                     |  active          | boolean                  | not null default true\nIndexes:    \"performance_pkey\" PRIMARY KEY, btree (id)    \"performance_create_idx\" btree (creation_date)    \"performance_medium_idx\" btree (first_medium_id)\n    \"performance_own_idx\" btree (owned_by)    \"performance_performer_idx\" btree (performer_id)Table part:                                      Table \"public.part\"\n     Column     |           Type           |                     Modifiers                     ----------------+--------------------------+--------------------------------------------------- created_by     | integer                  | not 
null\n creation_date  | timestamp with time zone |  comments       | text                     |  owned_by       | integer                  | not null id             | integer                  | not null default nextval('part_id_seq'::regclass)\n work_id        | integer                  | not null performance_id | integer                  | not nullIndexes:    \"part_pkey\" PRIMARY KEY, btree (id)    \"part_own_idx\" btree (owned_by)\n    \"part_performance_idx\" btree (performance_id)    \"part_work_idx\" btree (work_id)Please advise!Thanks.\n\nKind regards,Mathieu", "msg_date": "Wed, 23 Feb 2011 20:27:11 +0100", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "NULLS LAST performance" }, { "msg_contents": "On Wed, Feb 23, 2011 at 1:27 PM, Mathieu De Zutter <[email protected]> wrote:\n> Hi all,\n> Running PostgreSQL 8.4.7 (backport package from Debian Lenny).\n> I have some queries that are based on views, and an engine adds a few\n> clauses (like NULLS LAST). One of these queries has a performance problem.\n> The simplified form is this:\n> shs=# explain analyze select * from performance e JOIN part v ON\n> v.performance_id = e.id order by e.creation_date desc limit 10;\n>\n>  QUERY PLAN\n>\n> -----------------------------------------------------------------------------------------------------------------------------------------------------------------------\n>  Limit  (cost=0.00..4.25 rows=10 width=312) (actual time=0.078..0.147\n> rows=10 loops=1)\n>    ->  Nested Loop  (cost=0.00..62180.28 rows=146294 width=312) (actual\n> time=0.078..0.145 rows=10 loops=1)\n>          ->  Index Scan Backward using performance_create_idx on performance\n> e  (cost=0.00..12049.21 rows=145379 width=247) (actual time=0.051..0.087\n> rows=10 loops=1)\n>          ->  Index Scan using part_performance_idx on part v\n>  (cost=0.00..0.33 rows=1 width=65) (actual time=0.005..0.005 rows=1\n> loops=10)\n>                Index Cond: (v.performance_id = e.id)\n>  Total runtime: 0.205 ms\n> creation_date is declared as NOT NULL, and since it's a inner join,\n> creation_date can never be null in this query. I'd think that if I add NULLS\n> LAST, it wouldn't have any effect.\n> However, the query with NULLS LAST (as generated by the engine):\n> shs=# explain analyze select * from performance e JOIN part v ON\n> v.performance_id = e.id order by e.creation_date desc nulls last limit 10;\n\nhm, creation date being NOT NULL is not applied in that sense. maybe\nit could be logically (I'm not sure). Your best bet is to probably to\nget the engine to knock off the nulls last stuff, but if you can't,\nyou can always do this:\n\ncreate index performance_creation_date_desc_idx on\nperformance(creation_date desc nulls last);\n\nwhich will index optimize your sql. Interesting that 'null last'\nfools disallows index usage even when the index was created with\nnullls last as the default.\n\nmerlin\n", "msg_date": "Wed, 23 Feb 2011 13:48:03 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> you can always do this:\n\n> create index performance_creation_date_desc_idx on\n> performance(creation_date desc nulls last);\n\n> which will index optimize your sql. 
Interesting that 'null last'\n> fools disallows index usage even when the index was created with\n> nullls last as the default.\n\nThe problem is that his query needs to scan the index in DESC order,\nwhich means it's effectively NULLS FIRST, which doesn't match the\nrequested sort order.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 23 Feb 2011 16:37:34 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance " }, { "msg_contents": "On Wed, Feb 23, 2011 at 10:37 PM, Tom Lane <[email protected]> wrote:\n\n> Merlin Moncure <[email protected]> writes:\n> > you can always do this:\n>\n> > create index performance_creation_date_desc_idx on\n> > performance(creation_date desc nulls last);\n>\n> > which will index optimize your sql. Interesting that 'null last'\n> > fools disallows index usage even when the index was created with\n> > nullls last as the default.\n>\n> The problem is that his query needs to scan the index in DESC order,\n> which means it's effectively NULLS FIRST, which doesn't match the\n> requested sort order.\n>\n\nMerlin, Tom,\n\nThanks for explaining the behavior!\n\nAny chance that the planner could get smarter about this? In my naive view,\nit would just be telling the planner that it can disregard \"NULLS\" when\nsearching for an index, in case the column is known to be NOT NULL.\n\nKind regards,\nMathieu\n\nOn Wed, Feb 23, 2011 at 10:37 PM, Tom Lane <[email protected]> wrote:\nMerlin Moncure <[email protected]> writes:\n> you can always do this:\n\n> create index performance_creation_date_desc_idx on\n> performance(creation_date desc nulls last);\n\n> which will index optimize your sql.  Interesting that 'null last'\n> fools disallows index usage even when the index was created with\n> nullls last as the default.\n\nThe problem is that his query needs to scan the index in DESC order,\nwhich means it's effectively NULLS FIRST, which doesn't match the\nrequested sort order. Merlin, Tom,Thanks for explaining the behavior!\nAny chance that the planner could get smarter about this? In my naive view, it would just be telling the planner that it can disregard \"NULLS\" when searching for an index, in case the column is known to be NOT NULL.\nKind regards,Mathieu", "msg_date": "Thu, 24 Feb 2011 10:47:07 +0100", "msg_from": "Mathieu De Zutter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: NULLS LAST performance" }, { "msg_contents": "On Feb 24, 2011, at 3:47 AM, Mathieu De Zutter wrote:\n> > which will index optimize your sql. Interesting that 'null last'\n> > fools disallows index usage even when the index was created with\n> > nullls last as the default.\n> \n> The problem is that his query needs to scan the index in DESC order,\n> which means it's effectively NULLS FIRST, which doesn't match the\n> requested sort order.\n> \n> Merlin, Tom,\n> \n> Thanks for explaining the behavior!\n> \n> Any chance that the planner could get smarter about this? In my naive view, it would just be telling the planner that it can disregard \"NULLS\" when searching for an index, in case the column is known to be NOT NULL.\n\nUnfortunately, I don't think the planner actually has that level of knowledge. \n\nA more reasonable fix might be to teach the executor that it can do 2 scans of the index: one to get non-null data and a second to get null data. I don't know if the use case is prevalent enough to warrant the extra code though.\n--\nJim C. 
Nasby, Database Architect [email protected]\n512.569.9461 (cell) http://jim.nasby.net\n\n\n", "msg_date": "Wed, 9 Mar 2011 17:01:23 -0600", "msg_from": "Jim Nasby <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance" }, { "msg_contents": "On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby <[email protected]> wrote:\n> Unfortunately, I don't think the planner actually has that level of knowledge.\n\nActually, I don't think it would be that hard to teach the planner\nabout that special case...\n\n> A more reasonable fix might be to teach the executor that it can do 2 scans of the index: one to get non-null data and a second to get null data. I don't know if the use case is prevalent enough to warrant the extra code though.\n\nThat would probably be harder, but useful. I thought about working on\nit before but got sidetracked onto other things.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 10 Mar 2011 10:55:16 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance" }, { "msg_contents": "On Thu, Mar 10, 2011 at 9:55 AM, Robert Haas <[email protected]> wrote:\n> On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby <[email protected]> wrote:\n>> Unfortunately, I don't think the planner actually has that level of knowledge.\n>\n> Actually, I don't think it would be that hard to teach the planner\n> about that special case...\n>\n>> A more reasonable fix might be to teach the executor that it can do 2 scans of the index: one to get non-null data and a second to get null data. I don't know if the use case is prevalent enough to warrant the extra code though.\n>\n> That would probably be harder, but useful.  I thought about working on\n> it before but got sidetracked onto other things.\n\nISTM this isn't all that different from the case of composite indexes\nwhere you are missing the left most term, or you have an index on\na,b,c (which the server already handles) but user asks for a,b desc,\nc. If cardinality on b is low it might pay to loop and break up the\nscan.\n\nmerlin\n", "msg_date": "Thu, 10 Mar 2011 10:32:56 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance" }, { "msg_contents": "On Thu, Mar 10, 2011 at 11:32 AM, Merlin Moncure <[email protected]> wrote:\n> On Thu, Mar 10, 2011 at 9:55 AM, Robert Haas <[email protected]> wrote:\n>> On Wed, Mar 9, 2011 at 6:01 PM, Jim Nasby <[email protected]> wrote:\n>>> Unfortunately, I don't think the planner actually has that level of knowledge.\n>>\n>> Actually, I don't think it would be that hard to teach the planner\n>> about that special case...\n>>\n>>> A more reasonable fix might be to teach the executor that it can do 2 scans of the index: one to get non-null data and a second to get null data. I don't know if the use case is prevalent enough to warrant the extra code though.\n>>\n>> That would probably be harder, but useful.  I thought about working on\n>> it before but got sidetracked onto other things.\n>\n> ISTM this isn't all that different from the case of composite indexes\n> where you are missing the left most term, or you have an index on\n> a,b,c (which the server already handles) but user asks for a,b desc,\n> c. If cardinality on b is low it might pay to loop and break up the\n> scan.\n\nYeah, there are a couple of refinements possible here. 
One\npossibility is that you might ask for ORDER BY a, b and the only\nrelevant index is on a. In that case, it'd be a good idea to consider\nscanning the index and sorting each equal group on b. I've seen quite\na few queries that would benefit from this. A second possibility is\nthat you might ask for ORDER BY a, b and the only relevant index is on\na, b DESC. In that case, you could do three things:\n\n- Scan the index and sort each group that's equal on a by b desc, just\nas if the index were only on a.\n- Scan the index and reverse each group.\n- Scan the index in a funny order - for each value of a, find the\nhighest value of b and scan backwards until the a value changes; then\nrepeat for the next a-value.\n\nAnd similarly with the case where you have ORDER BY a NULLS FIRST and\nan index on a NULLS LAST, you could either:\n\n- Detect when the column is NOT NULL and ignore the NULLS FIRST/LAST\nproperty for purposes of matching the index in such cases, or\n- Scan the index in a funny order - traverse the index to find the\nfirst non-NULL entry at whichever end of the index has the nulls, go\nfrom there to the end, and then \"wrap around\" to pick up the null\nentries\n\nThe tricky part, at least IMO, is that you've got to not only teach\nthe planner to recognize these conditions when they occur, but also\nfind some way of passing it down to the index AM, which you also have\nto modify to know how to do all this stuff. The worst part about\nmaking modifications of this type is that it's really hard to unit\ntest them - the planner, executor, and index AM all have to cooperate\nbefore you can get off the ground.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 10 Mar 2011 23:36:27 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: NULLS LAST performance" } ]
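Since creation_date is NOT NULL the NULLS LAST is semantically a no-op here, but as Tom explains the planner matches sort order mechanically: scanning the plain index backward yields DESC NULLS FIRST, which is not what was requested. Until the ORM can be told to drop the clause, Merlin's extra index whose declared order already matches the query avoids the top-N sort, roughly like this:

-- Declared order matches the ORDER BY exactly, so a plain forward index
-- scan satisfies it and the plan can stop as soon as ten joined rows
-- have been produced.
CREATE INDEX performance_create_desc_idx
    ON performance (creation_date DESC NULLS LAST);

SELECT *
  FROM performance e
  JOIN part v ON v.performance_id = e.id
 ORDER BY e.creation_date DESC NULLS LAST
 LIMIT 10;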
[ { "msg_contents": "I'm using PostgreSQL 8.3.3 and I have a view that does a UNION ALL on\ntwo joins and it doesn't seem to want to push the IN (subquery)\noptimization down into the plan for the two queries being unioned. Is\nthere something I can do to fix this? Or is it just a limitation of\nthe planner/optimizer?\n\nI also tried this with 8.4.7 and it seemed to exhibit the same\nbehaviour, so here's an example of what I'm talking about (obviously\nin a real system I'd have indexes and all that other fun stuff):\n\nCREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE VIEW addressesall AS SELECT u.id, u.name, a.value FROM\naddresses1 AS a JOIN users AS u ON a.userid=u.id UNION ALL SELECT\nu.id, u.name, a.value FROM addresses2 AS a JOIN users AS u ON\na.userid=u.id;\n\n\nHere's the EXPLAIN output for two example queries:\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT\nid FROM users WHERE name='A');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=2.15..5.58 rows=1 width=40) (actual\ntime=0.144..0.340 rows=3 loops=1)\n Hash Cond: (u.id = users.id)\n -> Append (cost=1.09..4.48 rows=9 width=40) (actual\ntime=0.059..0.239 rows=9 loops=1)\n -> Hash Join (cost=1.09..2.19 rows=4 width=10) (actual\ntime=0.055..0.075 rows=4 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses1 a (cost=0.00..1.04 rows=4\nwidth=8) (actual time=0.006..0.013 rows=4 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.019..0.019 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.003..0.008 rows=4 loops=1)\n -> Hash Join (cost=1.09..2.21 rows=5 width=10) (actual\ntime=0.109..0.133 rows=5 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses2 a (cost=0.00..1.05 rows=5\nwidth=8) (actual time=0.004..0.012 rows=5 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.020..0.020 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.004..0.010 rows=4 loops=1)\n -> Hash (cost=1.05..1.05 rows=1 width=4) (actual\ntime=0.053..0.053 rows=1 loops=1)\n -> Seq Scan on users (cost=0.00..1.05 rows=1 width=4)\n(actual time=0.032..0.040 rows=1 loops=1)\n Filter: (name = 'A'::text)\n Total runtime: 0.519 ms\n(17 rows)\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT\nid FROM users WHERE name='A');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=2.15..5.58 rows=1 width=40) (actual\ntime=0.144..0.340 rows=3 loops=1)\n Hash Cond: (u.id = users.id)\n -> Append (cost=1.09..4.48 rows=9 width=40) (actual\ntime=0.059..0.239 rows=9 loops=1)\n -> Hash Join (cost=1.09..2.19 rows=4 width=10) (actual\ntime=0.055..0.075 rows=4 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses1 a (cost=0.00..1.04 rows=4\nwidth=8) (actual time=0.006..0.013 rows=4 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.019..0.019 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.003..0.008 rows=4 loops=1)\n -> Hash Join (cost=1.09..2.21 rows=5 width=10) (actual\ntime=0.109..0.133 rows=5 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses2 a (cost=0.00..1.05 
rows=5\nwidth=8) (actual time=0.004..0.012 rows=5 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.020..0.020 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.004..0.010 rows=4 loops=1)\n -> Hash (cost=1.05..1.05 rows=1 width=4) (actual\ntime=0.053..0.053 rows=1 loops=1)\n -> Seq Scan on users (cost=0.00..1.05 rows=1 width=4)\n(actual time=0.032..0.040 rows=1 loops=1)\n Filter: (name = 'A'::text)\n Total runtime: 0.519 ms\n(17 rows)\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..4.27 rows=3 width=40) (actual time=0.053..0.114\nrows=3 loops=1)\n -> Append (cost=0.00..4.27 rows=3 width=40) (actual\ntime=0.049..0.101 rows=3 loops=1)\n -> Nested Loop (cost=0.00..2.12 rows=2 width=10) (actual\ntime=0.046..0.063 rows=2 loops=1)\n -> Seq Scan on users u (cost=0.00..1.05 rows=1\nwidth=6) (actual time=0.025..0.028 rows=1 loops=1)\n Filter: (id = 1)\n -> Seq Scan on addresses1 a (cost=0.00..1.05 rows=2\nwidth=8) (actual time=0.009..0.017 rows=2 loops=1)\n Filter: (a.userid = 1)\n -> Nested Loop (cost=0.00..2.12 rows=1 width=10) (actual\ntime=0.015..0.025 rows=1 loops=1)\n -> Seq Scan on addresses2 a (cost=0.00..1.06 rows=1\nwidth=8) (actual time=0.005..0.008 rows=1 loops=1)\n Filter: (userid = 1)\n -> Seq Scan on users u (cost=0.00..1.05 rows=1\nwidth=6) (actual time=0.004..0.007 rows=1 loops=1)\n Filter: (u.id = 1)\n Total runtime: 0.251 ms\n(13 rows)\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the\nsubquery version perform the same optimization?\n\nThanks,\nDave\n", "msg_date": "Wed, 23 Feb 2011 21:10:31 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Pushing IN (subquery) down through UNION ALL?" } ]
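Anyone reproducing the schema above should note that the posted DDL creates addresses1 twice; since the view also reads from addresses2, the second statement was presumably meant to be addresses2. A corrected setup, otherwise unchanged:

CREATE TABLE users      (id SERIAL PRIMARY KEY, name TEXT);
CREATE TABLE addresses1 (userid INTEGER, value INTEGER);
CREATE TABLE addresses2 (userid INTEGER, value INTEGER);  -- was addresses1 in the post

CREATE VIEW addressesall AS
    SELECT u.id, u.name, a.value
      FROM addresses1 AS a JOIN users AS u ON a.userid = u.id
    UNION ALL
    SELECT u.id, u.name, a.value
      FROM addresses2 AS a JOIN users AS u ON a.userid = u.id;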
[ { "msg_contents": "Hi,\n\nWe're using PostgreSQL v8.2.3 on RHEL5.\n\nI'm developing a PostgreSQL plpgsql function for one of our application\nreport. When I try to run the function multiple times (even twice or\nthrice), I'm seeing considerable amount of memory being taken up by\nPostgreSQL and thereby after sometime, complete server itself comes to\nstandstill and not responding at all, even am not able to login to my server\nusing PuTTY client. I then end up physically restarting the server.\n\nPasted below the function which I'm developing. \n\nIs there something am doing differently in the function that would cause\nPostgreSQL to consume lot of memory? In my experience, I can say, this is\nthe first time I'm seeing PostgreSQL consuming/eating lot of memory and\ncausing severe performance issue and eventually making server come to\nstandstill. Also, I can say that another 2 functions which I'm calling from\nwithin this function (\"get_hours_worked\" and\n\"convert_hours_n_minutes_to_decimal\") do not have any performance issues,\nsince those 2 functions we're already using in some other reports and have\nnot found any performance issues.\n\nExperts suggestions/recommendations on this are highly appreciated.\n\nFor example, I would call this function like: SELECT\nhours_worked_day_wise_breakup(90204,23893,38921, '01-01-2010 00:00:00',\n'12-31-2010 23:59:59');\nOutput of this function will be like this:\n8.00-typ1,4.25-typ2,0.00-typ5,6.00-typ3,8.00-typ4\nLogic of this function: Given any 2 dates and filter inputs (input1, input2,\ninput3), it would return hours worked for each day (along with a suffix -\ntyp[1,2,3,4]) in comma separated form. In above example, I'm trying to run\nthis function for one year.\n\nCREATE or replace FUNCTION hours_worked_day_wise_breakup(numeric, numeric,\nnumeric, varchar, varchar) RETURNS VARCHAR AS '\n\nDECLARE\n\tp_input1\t\t\tALIAS FOR $1;\n\tp_input2\t\t\tALIAS FOR $2;\n\tp_input3\t\t\tALIAS FOR $3;\n\tp_startdate\t\t\tALIAS FOR $4;\n\tp_enddate\t\t\tALIAS FOR $5;\n\t\n\tv_loopingdate\t\t\tVARCHAR;\n\tv_cur_start_date\t\tVARCHAR;\n\tv_cur_end_date\t\t\tVARCHAR;\n\tv_hours_in_decimal\t\t\tNUMERIC := 0.00;\n\tv_returnvalue\t\t\tVARCHAR := '''';\n\nBEGIN\n\tv_loopingdate := TO_CHAR(DATE(p_startdate), ''mm-dd-yyyy'');\n\t\n\tWHILE (DATE(v_loopingdate) <= DATE(p_enddate)) LOOP\n\t\tv_cur_start_date := v_loopingdate || '' 00:00:00'';\n\t\tv_cur_end_date := v_loopingdate || '' 23:59:59'';\n\n\t\tIF (LENGTH(TRIM(v_returnvalue)) >0) THEN\n\t\t\tv_returnvalue := v_returnvalue || '','';\n\t\tEND IF;\n\n\t\tv_hours_in_decimal :=\nconvert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 7,\n1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n\t\tIF (v_hours_in_decimal > 0) THEN\n\t\t\tv_returnvalue := v_returnvalue || v_hours_in_decimal\n|| ''-typ1'';\n\t\t\tv_loopingdate := TO_CHAR((DATE(v_loopingdate) +\ninterval ''1 day''), ''mm-dd-yyyy'');\n\t\t\tCONTINUE;\n\t\tEND IF;\n\n\t\tv_hours_in_decimal :=\nconvert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 6,\n1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n\t\tIF (v_hours_in_decimal > 0) THEN\n\t\t\tv_returnvalue := v_returnvalue || v_hours_in_decimal\n|| ''-typ2'';\n\t\t\tv_loopingdate := TO_CHAR((DATE(v_loopingdate) +\ninterval ''1 day''), ''mm-dd-yyyy'');\n\t\t\tCONTINUE;\n\t\tEND IF;\n\n\t\tv_hours_in_decimal :=\nconvert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 4,\n1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n\t\tIF (v_hours_in_decimal > 0) 
THEN\n\t\t\tv_returnvalue := v_returnvalue || v_hours_in_decimal\n|| ''-typ3'';\n\t\t\tv_loopingdate := TO_CHAR((DATE(v_loopingdate) +\ninterval ''1 day''), ''mm-dd-yyyy'');\n\t\t\tCONTINUE;\n\t\tEND IF;\n\n\t\tv_hours_in_decimal :=\nconvert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 3,\n1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n\t\tIF (v_hours_in_decimal > 0) THEN\n\t\t\tv_returnvalue := v_returnvalue || v_hours_in_decimal\n|| ''-typ4'';\n\t\t\tv_loopingdate := TO_CHAR((DATE(v_loopingdate) +\ninterval ''1 day''), ''mm-dd-yyyy'');\n\t\t\tCONTINUE;\n\t\tEND IF;\n\n\t\tv_hours_in_decimal := 0.00;\n\t\tv_returnvalue := v_returnvalue || v_hours_in_decimal ||\n''-typ5'';\n\t\tv_loopingdate := TO_CHAR((DATE(v_loopingdate) + interval ''1\nday''), ''mm-dd-yyyy'');\n\tEND LOOP;\n\n\tRETURN v_returnvalue;\n\nEND ;'\nLANGUAGE 'plpgsql';\n\nRegards,\nGnanam\n\n", "msg_date": "Thu, 24 Feb 2011 15:22:38 +0530", "msg_from": "\"Gnanakumar\" <[email protected]>", "msg_from_op": true, "msg_subject": "Function execution consuming lot of memory and eventually making\n\tserver unresponsive" }, { "msg_contents": "Hello\n\nIt hard to say where can be a problem\n\nI see a some risks\n\na) v8.2.3 isn't last version of 8.2 line\nb) why you use a varchar data type for v_loopingdate,\nv_cur_start_date, v_cur_end_date - it's bad idea? - you have to do\ncast between date and varchar - all operation are slower and needs\nmore memory - it should be timestamp\n\nRegards\n\nPavel Stehule\n\n\n2011/2/24 Gnanakumar <[email protected]>:\n> Hi,\n>\n> We're using PostgreSQL v8.2.3 on RHEL5.\n>\n> I'm developing a PostgreSQL plpgsql function for one of our application\n> report.  When I try to run the function multiple times (even twice or\n> thrice), I'm seeing considerable amount of memory being taken up by\n> PostgreSQL and thereby after sometime, complete server itself comes to\n> standstill and not responding at all, even am not able to login to my server\n> using PuTTY client.  I then end up physically restarting the server.\n>\n> Pasted below the function which I'm developing.\n>\n> Is there something am doing differently in the function that would cause\n> PostgreSQL to consume lot of memory?  In my experience, I can say, this is\n> the first time I'm seeing PostgreSQL consuming/eating lot of memory and\n> causing severe performance issue and eventually making server come to\n> standstill.  Also, I can say that another 2 functions which I'm calling from\n> within this function (\"get_hours_worked\" and\n> \"convert_hours_n_minutes_to_decimal\") do not have any performance issues,\n> since those 2 functions we're already using in some other reports and have\n> not found any performance issues.\n>\n> Experts suggestions/recommendations on this are highly appreciated.\n>\n> For example, I would call this function like: SELECT\n> hours_worked_day_wise_breakup(90204,23893,38921, '01-01-2010 00:00:00',\n> '12-31-2010 23:59:59');\n> Output of this function will be like this:\n> 8.00-typ1,4.25-typ2,0.00-typ5,6.00-typ3,8.00-typ4\n> Logic of this function: Given any 2 dates and filter inputs (input1, input2,\n> input3), it would return hours worked for each day (along with a suffix -\n> typ[1,2,3,4]) in comma separated form.  
In above example, I'm trying to run\n> this function for one year.\n>\n> CREATE or replace FUNCTION hours_worked_day_wise_breakup(numeric, numeric,\n> numeric, varchar, varchar) RETURNS VARCHAR AS '\n>\n> DECLARE\n>        p_input1                        ALIAS FOR $1;\n>        p_input2                        ALIAS FOR $2;\n>        p_input3                        ALIAS FOR $3;\n>        p_startdate                     ALIAS FOR $4;\n>        p_enddate                       ALIAS FOR $5;\n>\n>        v_loopingdate                   VARCHAR;\n>        v_cur_start_date                VARCHAR;\n>        v_cur_end_date                  VARCHAR;\n>        v_hours_in_decimal                      NUMERIC := 0.00;\n>        v_returnvalue                   VARCHAR := '''';\n>\n> BEGIN\n>        v_loopingdate := TO_CHAR(DATE(p_startdate), ''mm-dd-yyyy'');\n>\n>        WHILE (DATE(v_loopingdate) <= DATE(p_enddate)) LOOP\n>                v_cur_start_date := v_loopingdate || '' 00:00:00'';\n>                v_cur_end_date := v_loopingdate || '' 23:59:59'';\n>\n>                IF (LENGTH(TRIM(v_returnvalue)) >0) THEN\n>                        v_returnvalue := v_returnvalue || '','';\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 7,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ1'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 6,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ2'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 4,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ3'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 3,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ4'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal := 0.00;\n>                v_returnvalue := v_returnvalue || v_hours_in_decimal ||\n> ''-typ5'';\n>                v_loopingdate := TO_CHAR((DATE(v_loopingdate) + interval ''1\n> day''), ''mm-dd-yyyy'');\n>        END LOOP;\n>\n>        RETURN v_returnvalue;\n>\n> END ;'\n> LANGUAGE 
'plpgsql';\n>\n> Regards,\n> Gnanam\n>\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n", "msg_date": "Thu, 24 Feb 2011 11:29:28 +0100", "msg_from": "Pavel Stehule <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function execution consuming lot of memory and\n\teventually making server unresponsive" }, { "msg_contents": "On Thu, Feb 24, 2011 at 3:52 AM, Gnanakumar <[email protected]> wrote:\n> Hi,\n>\n> We're using PostgreSQL v8.2.3 on RHEL5.\n>\n> I'm developing a PostgreSQL plpgsql function for one of our application\n> report.  When I try to run the function multiple times (even twice or\n> thrice), I'm seeing considerable amount of memory being taken up by\n> PostgreSQL and thereby after sometime, complete server itself comes to\n> standstill and not responding at all, even am not able to login to my server\n> using PuTTY client.  I then end up physically restarting the server.\n>\n> Pasted below the function which I'm developing.\n>\n> Is there something am doing differently in the function that would cause\n> PostgreSQL to consume lot of memory?  In my experience, I can say, this is\n> the first time I'm seeing PostgreSQL consuming/eating lot of memory and\n> causing severe performance issue and eventually making server come to\n> standstill.  Also, I can say that another 2 functions which I'm calling from\n> within this function (\"get_hours_worked\" and\n> \"convert_hours_n_minutes_to_decimal\") do not have any performance issues,\n> since those 2 functions we're already using in some other reports and have\n> not found any performance issues.\n>\n> Experts suggestions/recommendations on this are highly appreciated.\n>\n> For example, I would call this function like: SELECT\n> hours_worked_day_wise_breakup(90204,23893,38921, '01-01-2010 00:00:00',\n> '12-31-2010 23:59:59');\n> Output of this function will be like this:\n> 8.00-typ1,4.25-typ2,0.00-typ5,6.00-typ3,8.00-typ4\n> Logic of this function: Given any 2 dates and filter inputs (input1, input2,\n> input3), it would return hours worked for each day (along with a suffix -\n> typ[1,2,3,4]) in comma separated form.  
In above example, I'm trying to run\n> this function for one year.\n>\n> CREATE or replace FUNCTION hours_worked_day_wise_breakup(numeric, numeric,\n> numeric, varchar, varchar) RETURNS VARCHAR AS '\n>\n> DECLARE\n>        p_input1                        ALIAS FOR $1;\n>        p_input2                        ALIAS FOR $2;\n>        p_input3                        ALIAS FOR $3;\n>        p_startdate                     ALIAS FOR $4;\n>        p_enddate                       ALIAS FOR $5;\n>\n>        v_loopingdate                   VARCHAR;\n>        v_cur_start_date                VARCHAR;\n>        v_cur_end_date                  VARCHAR;\n>        v_hours_in_decimal                      NUMERIC := 0.00;\n>        v_returnvalue                   VARCHAR := '''';\n>\n> BEGIN\n>        v_loopingdate := TO_CHAR(DATE(p_startdate), ''mm-dd-yyyy'');\n>\n>        WHILE (DATE(v_loopingdate) <= DATE(p_enddate)) LOOP\n>                v_cur_start_date := v_loopingdate || '' 00:00:00'';\n>                v_cur_end_date := v_loopingdate || '' 23:59:59'';\n>\n>                IF (LENGTH(TRIM(v_returnvalue)) >0) THEN\n>                        v_returnvalue := v_returnvalue || '','';\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 7,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ1'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 6,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ2'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 4,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ3'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal :=\n> convert_hours_n_minutes_to_decimal(get_hours_worked(p_input1, p_input2, 3,\n> 1, -1, p_input3, v_cur_start_date, v_cur_end_date));\n>                IF (v_hours_in_decimal > 0) THEN\n>                        v_returnvalue := v_returnvalue || v_hours_in_decimal\n> || ''-typ4'';\n>                        v_loopingdate := TO_CHAR((DATE(v_loopingdate) +\n> interval ''1 day''), ''mm-dd-yyyy'');\n>                        CONTINUE;\n>                END IF;\n>\n>                v_hours_in_decimal := 0.00;\n>                v_returnvalue := v_returnvalue || v_hours_in_decimal ||\n> ''-typ5'';\n>                v_loopingdate := TO_CHAR((DATE(v_loopingdate) + interval ''1\n> day''), ''mm-dd-yyyy'');\n>        END LOOP;\n>\n>        RETURN v_returnvalue;\n>\n> END ;'\n> LANGUAGE 
'plpgsql';\n\nIt's a pretty safe bet you are stuck in the loop (either infinite, or\nvery long) using string concatenation operator on the return code. ||\nis not designed for extremely heavy use on large strings in a loop.\n\nYour entire function could probably be reduced to one SQL expression\nwith some thought.\n\nmerlin\n", "msg_date": "Thu, 24 Feb 2011 09:14:13 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function execution consuming lot of memory and\n\teventually making server unresponsive" }, { "msg_contents": "Merlin Moncure <[email protected]> writes:\n> Your entire function could probably be reduced to one SQL expression\n> with some thought.\n\nOr if not that, at least try to get rid of the use of varchar. All\nthose forced varchar-to-date-and-back conversions are expensive.\nI'm also more than a tad worried by this:\n\n> v_loopingdate := TO_CHAR(DATE(p_startdate), ''mm-dd-yyyy'');\n>\n> WHILE (DATE(v_loopingdate) <= DATE(p_enddate)) LOOP\n\nThere's nothing here guaranteeing that DATE() will think its input\nis in mm-dd-yyyy format. If DateStyle is set to something else,\nthe logic would at least be wrong, and very possibly that explains\nyour infinite loop.\n\nLearn to use PG's type system instead of fighting it. Your code\nwill be shorter, clearer, faster, and less error-prone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 24 Feb 2011 10:58:53 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Function execution consuming lot of memory and eventually making\n\tserver unresponsive" } ]
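A minimal sketch of what the two suggestions above look like in practice: keep the loop variable as a date, so there are no varchar/DateStyle round-trips (which also guarantees the loop terminates), and bound each day as [day, day + 1) instead of relying on 23:59:59. The poster's get_hours_worked() and convert_hours_n_minutes_to_decimal() calls are elided because those functions are not shown in the thread, so this is only an outline of the loop structure, not a drop-in replacement:

CREATE OR REPLACE FUNCTION hours_worked_day_list(p_startdate timestamp,
                                                 p_enddate   timestamp)
RETURNS varchar AS $$
DECLARE
    v_day    date := p_startdate::date;
    v_last   date := p_enddate::date;
    v_result varchar := '';
BEGIN
    WHILE v_day <= v_last LOOP
        IF length(v_result) > 0 THEN
            v_result := v_result || ',';
        END IF;
        -- the original code would call get_hours_worked(...) here, passing
        -- v_day::timestamp as the day's start and (v_day + 1)::timestamp as
        -- its exclusive end
        v_result := v_result || to_char(v_day, 'mm-dd-yyyy');
        v_day := v_day + 1;   -- date + integer adds whole days
    END LOOP;
    RETURN v_result;
END;
$$ LANGUAGE plpgsql;

As Merlin notes, on a newer release the whole loop could instead be collapsed into a single set-based query (a generated series of days aggregated back into one string), which also avoids building the result with repeated || concatenation.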
[ { "msg_contents": "\"Gnanakumar\" wrote:\n \n> We're using PostgreSQL v8.2.3 on RHEL5.\n \nhttp://www.postgresql.org/support/versioning\n \nThe 8.2 release is up to 8.2.20:\n \nhttp://www.postgresql.org/\n \nBy the way, 8.2 is scheduled to go out of support later this year:\n \nhttp://wiki.postgresql.org/wiki/PostgreSQL_Release_Support_Policy\n \nYou might want to start planning to upgrade.\n \n> I'm developing a PostgreSQL plpgsql function for one of our\n> application report. When I try to run the function multiple times\n> (even twice or thrice), I'm seeing considerable amount of memory\n> being taken up by PostgreSQL and thereby after sometime, complete\n> server itself comes to standstill and not responding at all, even\n> am not able to login to my server using PuTTY client. I then end up\n> physically restarting the server.\n \nYou might want to review the bug fixes since 8.2.3 and see if any\ninvolve memory leaks:\n \nhttp://www.postgresql.org/docs/8.2/static/release.html\n \n-Kevin\n", "msg_date": "Thu, 24 Feb 2011 06:29:04 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Function execution consuming lot of memory and\n\teventually making server unresponsive" } ]
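A quick way to confirm exactly which release is running before and after planning that upgrade (minor-release updates such as 8.2.3 -> 8.2.x normally need only new binaries and a restart, not a dump/reload):

SELECT version();
SHOW server_version;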
[ { "msg_contents": "I'm using PostgreSQL 8.3.3 and I have a view that does a UNION ALL on two\njoins and it doesn't seem to want to push the IN (subquery) optimization\ndown into the plan for the two queries being unioned. Is there something I\ncan do to fix this? Or is it just a limitation of the planner/optimizer?\n\nI also tried this with 8.4.7 and it seemed to exhibit the same behaviour, so\nhere's an example of what I'm talking about (obviously in a real system I'd\nhave indexes and all that other fun stuff):\n\nCREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE VIEW addressesall AS SELECT u.id, u.name, a.value FROM addresses1 AS\na JOIN users AS u ON a.userid=u.id UNION ALL SELECT u.id, u.name, a.value\nFROM addresses2 AS a JOIN users AS u ON a.userid=u.id;\n\n\nHere's the EXPLAIN output for two example queries:\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT id\nFROM users WHERE name='A');\n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------\n Hash Semi Join (cost=2.15..5.58 rows=1 width=40) (actual\ntime=0.144..0.340 rows=3 loops=1)\n Hash Cond: (u.id = users.id)\n -> Append (cost=1.09..4.48 rows=9 width=40) (actual time=0.059..0.239\nrows=9 loops=1)\n -> Hash Join (cost=1.09..2.19 rows=4 width=10) (actual\ntime=0.055..0.075 rows=4 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses1 a (cost=0.00..1.04 rows=4 width=8)\n(actual time=0.006..0.013 rows=4 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.019..0.019 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.003..0.008 rows=4 loops=1)\n -> Hash Join (cost=1.09..2.21 rows=5 width=10) (actual\ntime=0.109..0.133 rows=5 loops=1)\n Hash Cond: (a.userid = u.id)\n -> Seq Scan on addresses2 a (cost=0.00..1.05 rows=5 width=8)\n(actual time=0.004..0.012 rows=5 loops=1)\n -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\ntime=0.020..0.020 rows=4 loops=1)\n -> Seq Scan on users u (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.004..0.010 rows=4 loops=1)\n -> Hash (cost=1.05..1.05 rows=1 width=4) (actual time=0.053..0.053\nrows=1 loops=1)\n -> Seq Scan on users (cost=0.00..1.05 rows=1 width=4) (actual\ntime=0.032..0.040 rows=1 loops=1)\n Filter: (name = 'A'::text)\n Total runtime: 0.519 ms\n(17 rows)\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (1);\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Result (cost=0.00..4.27 rows=3 width=40) (actual time=0.053..0.114 rows=3\nloops=1)\n -> Append (cost=0.00..4.27 rows=3 width=40) (actual time=0.049..0.101\nrows=3 loops=1)\n -> Nested Loop (cost=0.00..2.12 rows=2 width=10) (actual\ntime=0.046..0.063 rows=2 loops=1)\n -> Seq Scan on users u (cost=0.00..1.05 rows=1 width=6)\n(actual time=0.025..0.028 rows=1 loops=1)\n Filter: (id = 1)\n -> Seq Scan on addresses1 a (cost=0.00..1.05 rows=2 width=8)\n(actual time=0.009..0.017 rows=2 loops=1)\n Filter: (a.userid = 1)\n -> Nested Loop (cost=0.00..2.12 rows=1 width=10) (actual\ntime=0.015..0.025 rows=1 loops=1)\n -> Seq Scan on addresses2 a (cost=0.00..1.06 rows=1 width=8)\n(actual time=0.005..0.008 rows=1 loops=1)\n Filter: (userid = 1)\n -> Seq Scan on users u (cost=0.00..1.05 rows=1 width=6)\n(actual time=0.004..0.007 rows=1 loops=1)\n 
Filter: (u.id = 1)\n Total runtime: 0.251 ms\n(13 rows)\n\nYou'll notice that the subquery version is doing the full join and then the\nfiltering, but the explicitly listed version pushing the filtering into the\nplan before the join. Is there a way to make the subquery version perform\nthe same optimization?\n\nThanks,\nDave\n\nI'm using PostgreSQL 8.3.3 and I have a view that does a UNION ALL on\ntwo joins and it doesn't seem to want to push the IN (subquery) optimization down into the plan for the two queries being unioned. Is there something I can do to fix this? Or is it just a limitation of the planner/optimizer?\n\nI also tried this with 8.4.7 and it seemed to exhibit the same behaviour, so here's an example of what I'm talking about (obviously in a real system I'd have indexes and all that other fun stuff):\n\nCREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE VIEW addressesall AS SELECT u.id, u.name, a.value FROM\naddresses1 AS a JOIN users AS u ON a.userid=u.id UNION ALL SELECT\nu.id, u.name, a.value FROM addresses2 AS a JOIN users AS u ON a.userid=u.id;\n\n\nHere's the EXPLAIN output for two example queries:\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT\nid FROM users WHERE name='A');\n                                                       QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------- \nHash Semi Join  (cost=2.15..5.58 rows=1 width=40) (actual\ntime=0.144..0.340 rows=3 loops=1)\n   Hash Cond: (u.id = users.id)\n   ->  Append  (cost=1.09..4.48 rows=9 width=40) (actual\ntime=0.059..0.239 rows=9 loops=1)\n         ->  Hash Join  (cost=1.09..2.19 rows=4 width=10) (actual\ntime=0.055..0.075 rows=4 loops=1)\n               Hash Cond: (a.userid = u.id)\n               ->  Seq Scan on addresses1 a  (cost=0.00..1.04 rows=4\nwidth=8) (actual time=0.006..0.013 rows=4 loops=1)\n               ->  Hash  (cost=1.04..1.04 rows=4 width=6) (actual time=0.019..0.019 rows=4 loops=1)\n                     ->  Seq Scan on users u  (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.003..0.008 rows=4 loops=1)\n         ->  Hash Join  (cost=1.09..2.21 rows=5 width=10) (actual time=0.109..0.133 rows=5 loops=1)\n               Hash Cond: (a.userid = u.id)\n               ->  Seq Scan on addresses2 a  (cost=0.00..1.05 rows=5\nwidth=8) (actual time=0.004..0.012 rows=5 loops=1)\n               ->  Hash  (cost=1.04..1.04 rows=4 width=6) (actual time=0.020..0.020 rows=4 loops=1)\n                     ->  Seq Scan on users u  (cost=0.00..1.04 rows=4 width=6) (actual time=0.004..0.010 rows=4 loops=1)\n   ->  Hash  (cost=1.05..1.05 rows=1 width=4) (actual time=0.053..0.053 rows=1 loops=1)\n         ->  Seq Scan on users  (cost=0.00..1.05 rows=1 width=4)\n(actual time=0.032..0.040 rows=1 loops=1)\n               Filter: (name = 'A'::text)\n Total runtime: 0.519 ms\n(17 rows)\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (1);\n                                                       QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.00..4.27 rows=3 width=40) (actual time=0.053..0.114\nrows=3 loops=1)\n   ->  Append  (cost=0.00..4.27 rows=3 width=40) (actual time=0.049..0.101 rows=3 loops=1)\n         ->  Nested Loop  (cost=0.00..2.12 rows=2 width=10) 
(actual\ntime=0.046..0.063 rows=2 loops=1)\n               ->  Seq Scan on users u  (cost=0.00..1.05 rows=1 width=6) (actual time=0.025..0.028 rows=1 loops=1)\n                     Filter: (id = 1)\n               ->  Seq Scan on addresses1 a  (cost=0.00..1.05 rows=2\nwidth=8) (actual time=0.009..0.017 rows=2 loops=1)\n                     Filter: (a.userid = 1)\n         ->  Nested Loop  (cost=0.00..2.12 rows=1 width=10) (actual\ntime=0.015..0.025 rows=1 loops=1)\n               ->  Seq Scan on addresses2 a  (cost=0.00..1.06 rows=1\nwidth=8) (actual time=0.005..0.008 rows=1 loops=1)\n                     Filter: (userid = 1)\n               ->  Seq Scan on users u  (cost=0.00..1.05 rows=1 width=6) (actual time=0.004..0.007 rows=1 loops=1)\n                     Filter: (u.id = 1)\n Total runtime: 0.251 ms\n(13 rows)\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the subquery version perform the same optimization?\n\nThanks,\nDave", "msg_date": "Thu, 24 Feb 2011 08:14:00 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 8:14 AM, Dave Johansen <[email protected]>wrote:\n\n> I'm using PostgreSQL 8.3.3 and I have a view that does a UNION ALL on two\n> joins and it doesn't seem to want to push the IN (subquery) optimization\n> down into the plan for the two queries being unioned. Is there something I\n> can do to fix this? Or is it just a limitation of the planner/optimizer?\n>\n> I also tried this with 8.4.7 and it seemed to exhibit the same behaviour,\n> so here's an example of what I'm talking about (obviously in a real system\n> I'd have indexes and all that other fun stuff):\n>\n> CREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);\n> CREATE TABLE addresses1 (userid INTEGER, value INTEGER);\n> CREATE TABLE addresses1 (userid INTEGER, value INTEGER);\n> CREATE VIEW addressesall AS SELECT u.id, u.name, a.value FROM addresses1\n> AS a JOIN users AS u ON a.userid=u.id UNION ALL SELECT u.id, u.name,\n> a.value FROM addresses2 AS a JOIN users AS u ON a.userid=u.id;\n>\n>\n> Here's the EXPLAIN output for two example queries:\n>\n> test=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT id\n> FROM users WHERE name='A');\n> QUERY PLAN\n>\n> -------------------------------------------------------------------------------------------------------------------------\n> Hash Semi Join (cost=2.15..5.58 rows=1 width=40) (actual\n> time=0.144..0.340 rows=3 loops=1)\n> Hash Cond: (u.id = users.id)\n> -> Append (cost=1.09..4.48 rows=9 width=40) (actual time=0.059..0.239\n> rows=9 loops=1)\n> -> Hash Join (cost=1.09..2.19 rows=4 width=10) (actual\n> time=0.055..0.075 rows=4 loops=1)\n> Hash Cond: (a.userid = u.id)\n> -> Seq Scan on addresses1 a (cost=0.00..1.04 rows=4\n> width=8) (actual time=0.006..0.013 rows=4 loops=1)\n> -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\n> time=0.019..0.019 rows=4 loops=1)\n> -> Seq Scan on users u (cost=0.00..1.04 rows=4\n> width=6) (actual time=0.003..0.008 rows=4 loops=1)\n> -> Hash Join (cost=1.09..2.21 rows=5 width=10) (actual\n> time=0.109..0.133 rows=5 loops=1)\n> Hash Cond: (a.userid = u.id)\n> -> Seq Scan on addresses2 a (cost=0.00..1.05 rows=5\n> width=8) (actual time=0.004..0.012 rows=5 loops=1)\n> -> Hash (cost=1.04..1.04 rows=4 width=6) (actual\n> 
time=0.020..0.020 rows=4 loops=1)\n> -> Seq Scan on users u (cost=0.00..1.04 rows=4\n> width=6) (actual time=0.004..0.010 rows=4 loops=1)\n> -> Hash (cost=1.05..1.05 rows=1 width=4) (actual time=0.053..0.053\n> rows=1 loops=1)\n> -> Seq Scan on users (cost=0.00..1.05 rows=1 width=4) (actual\n> time=0.032..0.040 rows=1 loops=1)\n> Filter: (name = 'A'::text)\n> Total runtime: 0.519 ms\n> (17 rows)\n>\n> test=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (1);\n> QUERY PLAN\n>\n> ------------------------------------------------------------------------------------------------------------------------\n> Result (cost=0.00..4.27 rows=3 width=40) (actual time=0.053..0.114 rows=3\n> loops=1)\n> -> Append (cost=0.00..4.27 rows=3 width=40) (actual time=0.049..0.101\n> rows=3 loops=1)\n> -> Nested Loop (cost=0.00..2.12 rows=2 width=10) (actual\n> time=0.046..0.063 rows=2 loops=1)\n> -> Seq Scan on users u (cost=0.00..1.05 rows=1 width=6)\n> (actual time=0.025..0.028 rows=1 loops=1)\n> Filter: (id = 1)\n> -> Seq Scan on addresses1 a (cost=0.00..1.05 rows=2\n> width=8) (actual time=0.009..0.017 rows=2 loops=1)\n> Filter: (a.userid = 1)\n> -> Nested Loop (cost=0.00..2.12 rows=1 width=10) (actual\n> time=0.015..0.025 rows=1 loops=1)\n> -> Seq Scan on addresses2 a (cost=0.00..1.06 rows=1\n> width=8) (actual time=0.005..0.008 rows=1 loops=1)\n> Filter: (userid = 1)\n> -> Seq Scan on users u (cost=0.00..1.05 rows=1 width=6)\n> (actual time=0.004..0.007 rows=1 loops=1)\n> Filter: (u.id = 1)\n> Total runtime: 0.251 ms\n> (13 rows)\n>\n> You'll notice that the subquery version is doing the full join and then the\n> filtering, but the explicitly listed version pushing the filtering into the\n> plan before the join. Is there a way to make the subquery version perform\n> the same optimization?\n>\n> Thanks,\n> Dave\n>\n\nI also just noticed that an ORDER BY x LIMIT n optimization is not pushed\ndown through the UNION ALL as well. I understand that this may be a little\ntrickier because the ORDER BY and LIMIT would need to be applied to the\nsubqueries and then re-applied after the APPEND, but is there some way to\nget either the previous issue or this issue to optimize as desired? Or do I\njust need to change my schema to not use two separate tables with a VIEW and\na UNION ALL?\n\nThanks again,\nDave\n\nOn Thu, Feb 24, 2011 at 8:14 AM, Dave Johansen <[email protected]> wrote:\nI'm using PostgreSQL 8.3.3 and I have a view that does a UNION ALL on\ntwo joins and it doesn't seem to want to push the IN (subquery) optimization down into the plan for the two queries being unioned. Is there something I can do to fix this? 
Or is it just a limitation of the planner/optimizer?\n\nI also tried this with 8.4.7 and it seemed to exhibit the same behaviour, so here's an example of what I'm talking about (obviously in a real system I'd have indexes and all that other fun stuff):\n\nCREATE TABLE users (id SERIAL PRIMARY KEY, name TEXT);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE TABLE addresses1 (userid INTEGER, value INTEGER);\nCREATE VIEW addressesall AS SELECT u.id, u.name, a.value FROM\naddresses1 AS a JOIN users AS u ON a.userid=u.id UNION ALL SELECT\nu.id, u.name, a.value FROM addresses2 AS a JOIN users AS u ON a.userid=u.id;\n\n\nHere's the EXPLAIN output for two example queries:\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (SELECT\nid FROM users WHERE name='A');\n                                                       QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------- \nHash Semi Join  (cost=2.15..5.58 rows=1 width=40) (actual\ntime=0.144..0.340 rows=3 loops=1)\n   Hash Cond: (u.id = users.id)\n   ->  Append  (cost=1.09..4.48 rows=9 width=40) (actual\ntime=0.059..0.239 rows=9 loops=1)\n         ->  Hash Join  (cost=1.09..2.19 rows=4 width=10) (actual\ntime=0.055..0.075 rows=4 loops=1)\n               Hash Cond: (a.userid = u.id)\n               ->  Seq Scan on addresses1 a  (cost=0.00..1.04 rows=4\nwidth=8) (actual time=0.006..0.013 rows=4 loops=1)\n               ->  Hash  (cost=1.04..1.04 rows=4 width=6) (actual time=0.019..0.019 rows=4 loops=1)\n                     ->  Seq Scan on users u  (cost=0.00..1.04 rows=4\nwidth=6) (actual time=0.003..0.008 rows=4 loops=1)\n         ->  Hash Join  (cost=1.09..2.21 rows=5 width=10) (actual time=0.109..0.133 rows=5 loops=1)\n               Hash Cond: (a.userid = u.id)\n               ->  Seq Scan on addresses2 a  (cost=0.00..1.05 rows=5\nwidth=8) (actual time=0.004..0.012 rows=5 loops=1)\n               ->  Hash  (cost=1.04..1.04 rows=4 width=6) (actual time=0.020..0.020 rows=4 loops=1)\n                     ->  Seq Scan on users u  (cost=0.00..1.04 rows=4 width=6) (actual time=0.004..0.010 rows=4 loops=1)\n   ->  Hash  (cost=1.05..1.05 rows=1 width=4) (actual time=0.053..0.053 rows=1 loops=1)\n         ->  Seq Scan on users  (cost=0.00..1.05 rows=1 width=4)\n(actual time=0.032..0.040 rows=1 loops=1)\n               Filter: (name = 'A'::text)\n Total runtime: 0.519 ms\n(17 rows)\n\ntest=# EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id IN (1);\n                                                       QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------\n Result  (cost=0.00..4.27 rows=3 width=40) (actual time=0.053..0.114\nrows=3 loops=1)\n   ->  Append  (cost=0.00..4.27 rows=3 width=40) (actual time=0.049..0.101 rows=3 loops=1)\n         ->  Nested Loop  (cost=0.00..2.12 rows=2 width=10) (actual\ntime=0.046..0.063 rows=2 loops=1)\n               ->  Seq Scan on users u  (cost=0.00..1.05 rows=1 width=6) (actual time=0.025..0.028 rows=1 loops=1)\n                     Filter: (id = 1)\n               ->  Seq Scan on addresses1 a  (cost=0.00..1.05 rows=2\nwidth=8) (actual time=0.009..0.017 rows=2 loops=1)\n                     Filter: (a.userid = 1)\n         ->  Nested Loop  (cost=0.00..2.12 rows=1 width=10) (actual\ntime=0.015..0.025 rows=1 loops=1)\n               ->  Seq Scan on addresses2 a  (cost=0.00..1.06 rows=1\nwidth=8) (actual time=0.005..0.008 rows=1 
loops=1)\n                     Filter: (userid = 1)\n               ->  Seq Scan on users u  (cost=0.00..1.05 rows=1 width=6) (actual time=0.004..0.007 rows=1 loops=1)\n                     Filter: (u.id = 1)\n Total runtime: 0.251 ms\n(13 rows)\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the subquery version perform the same optimization?\n\nThanks,\nDave\n\nI also just noticed that an ORDER BY x LIMIT n optimization is not pushed down through the UNION ALL as well. I understand that this may be a little trickier because the ORDER BY and LIMIT would need to be applied to the subqueries and then re-applied after the APPEND, but is there some way to get either the previous issue or this issue to optimize as desired? Or do I just need to change my schema to not use two separate tables with a VIEW and a UNION ALL?\nThanks again,Dave", "msg_date": "Thu, 24 Feb 2011 09:38:56 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]> wrote:\n\n> You'll notice that the subquery version is doing the full join and then the\n> filtering, but the explicitly listed version pushing the filtering into the\n> plan before the join. Is there a way to make the subquery version perform\n> the same optimization?\n>\n\nEXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id\nFROM users WHERE name='A'));\n\n(Tested on 9.0.3)\n\nOn Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]> wrote:\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the subquery version perform the same optimization?\nEXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id FROM users WHERE name='A'));(Tested on 9.0.3)", "msg_date": "Thu, 24 Feb 2011 20:33:00 +0100", "msg_from": "Vik Reykja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\n\n> On Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]>wrote:\n>\n>> You'll notice that the subquery version is doing the full join and then\n>> the filtering, but the explicitly listed version pushing the filtering into\n>> the plan before the join. Is there a way to make the subquery version\n>> perform the same optimization?\n>>\n>\n> EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id\n> FROM users WHERE name='A'));\n>\n> (Tested on 9.0.3)\n>\n\nI just tested that on 8.3.3 and it performed quickly like I expected the\nother query to, so that did the trick.\nThanks a ton,\nDave\n\nOn Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]> wrote:\n\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. 
Is there a way to make the subquery version perform the same optimization?\nEXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id FROM users WHERE name='A'));(Tested on 9.0.3)\nI just tested that on 8.3.3 and it performed quickly like I expected the other query to, so that did the trick.Thanks a ton,Dave", "msg_date": "Thu, 24 Feb 2011 12:56:44 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 20:56, Dave Johansen <[email protected]> wrote:\n\n> On Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\n>\n>> On Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]>wrote:\n>>\n>>> You'll notice that the subquery version is doing the full join and then\n>>> the filtering, but the explicitly listed version pushing the filtering into\n>>> the plan before the join. Is there a way to make the subquery version\n>>> perform the same optimization?\n>>>\n>>\n>> EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id\n>> FROM users WHERE name='A'));\n>>\n>> (Tested on 9.0.3)\n>>\n>\n> I just tested that on 8.3.3 and it performed quickly like I expected the\n> other query to, so that did the trick.\n>\n\nIs there any good reason you're not using 8.3.14?\n\n\n> Thanks a ton,\n>\n\nYou're welcome.\n\nOn Thu, Feb 24, 2011 at 20:56, Dave Johansen <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]> wrote:\n\n\n\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the subquery version perform the same optimization?\nEXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id FROM users WHERE name='A'));(Tested on 9.0.3)\nI just tested that on 8.3.3 and it performed quickly like I expected the other query to, so that did the trick.Is there any good reason you're not using 8.3.14? \nThanks a ton,You're welcome.", "msg_date": "Thu, 24 Feb 2011 20:59:38 +0100", "msg_from": "Vik Reykja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 12:59 PM, Vik Reykja <[email protected]> wrote:\n\n> On Thu, Feb 24, 2011 at 20:56, Dave Johansen <[email protected]>wrote:\n>\n>> On Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\n>>\n>>> On Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]>wrote:\n>>>\n>>>> You'll notice that the subquery version is doing the full join and then\n>>>> the filtering, but the explicitly listed version pushing the filtering into\n>>>> the plan before the join. 
Is there a way to make the subquery version\n>>>> perform the same optimization?\n>>>>\n>>>\n>>> EXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT\n>>> id FROM users WHERE name='A'));\n>>>\n>>> (Tested on 9.0.3)\n>>>\n>>\n>> I just tested that on 8.3.3 and it performed quickly like I expected the\n>> other query to, so that did the trick.\n>>\n>\n> Is there any good reason you're not using 8.3.14?\n>\n\nNo, I just haven't taken the time to do the upgrade on all of our systems.\nIt is definitely something that I have started to consider more strongly\nthough.\n\nDave\n\nOn Thu, Feb 24, 2011 at 12:59 PM, Vik Reykja <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 20:56, Dave Johansen <[email protected]> wrote:\n\nOn Thu, Feb 24, 2011 at 12:33 PM, Vik Reykja <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 16:14, Dave Johansen <[email protected]> wrote:\n\n\n\n\n\nYou'll notice that the subquery version is doing the full join and\nthen the filtering, but the explicitly listed version pushing the\nfiltering into the plan before the join. Is there a way to make the subquery version perform the same optimization?\nEXPLAIN ANALYZE SELECT * FROM addressesall WHERE id = ANY (array(SELECT id FROM users WHERE name='A'));(Tested on 9.0.3)\nI just tested that on 8.3.3 and it performed quickly like I expected the other query to, so that did the trick.Is there any good reason you're not using 8.3.14?\nNo, I just haven't taken the time to do the upgrade on all of our systems. It is definitely something that I have started to consider more strongly though.Dave", "msg_date": "Thu, 24 Feb 2011 13:51:53 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen <[email protected]> wrote:\n> I also just noticed that an ORDER BY x LIMIT n optimization is not pushed\n> down through the UNION ALL as well. I understand that this may be a little\n> trickier because the ORDER BY and LIMIT would need to be applied to the\n> subqueries and then re-applied after the APPEND,\n\nPostgreSQL 9.1 will know how to do this, FWIW.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 09:08:45 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On 2 March 2011 19:38, Robert Haas <[email protected]> wrote:\n> On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen <[email protected]> wrote:\n>> I also just noticed that an ORDER BY x LIMIT n optimization is not pushed\n>> down through the UNION ALL as well. I understand that this may be a little\n>> trickier because the ORDER BY and LIMIT would need to be applied to the\n>> subqueries and then re-applied after the APPEND,\n>\n> PostgreSQL 9.1 will know how to do this, FWIW.\n\nOut of curiosity, what was the commit for this?\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n", "msg_date": "Wed, 2 Mar 2011 19:41:51 +0530", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" 
}, { "msg_contents": "On Wed, Mar 2, 2011 at 9:11 AM, Thom Brown <[email protected]> wrote:\n> On 2 March 2011 19:38, Robert Haas <[email protected]> wrote:\n>> On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen <[email protected]> wrote:\n>>> I also just noticed that an ORDER BY x LIMIT n optimization is not pushed\n>>> down through the UNION ALL as well. I understand that this may be a little\n>>> trickier because the ORDER BY and LIMIT would need to be applied to the\n>>> subqueries and then re-applied after the APPEND,\n>>\n>> PostgreSQL 9.1 will know how to do this, FWIW.\n>\n> Out of curiosity, what was the commit for this?\n\n11cad29c91524aac1d0b61e0ea0357398ab79bf8 Support MergeAppend plans, to\nallow sorted output from append relations.\n034967bdcbb0c7be61d0500955226e1234ec5f04 Reimplement planner's\nhandling of MIN/MAX aggregate optimization.\n947d0c862c895618a874344322e7b07c9df05cb2 Use appendrel planning logic\nfor top-level UNION ALL structures.\n6fbc323c8042303a737028f9da7616896bccc517 Further fallout from the\nMergeAppend patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 09:22:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" }, { "msg_contents": "On 2 March 2011 19:52, Robert Haas <[email protected]> wrote:\n> On Wed, Mar 2, 2011 at 9:11 AM, Thom Brown <[email protected]> wrote:\n>> On 2 March 2011 19:38, Robert Haas <[email protected]> wrote:\n>>> On Thu, Feb 24, 2011 at 11:38 AM, Dave Johansen <[email protected]> wrote:\n>>>> I also just noticed that an ORDER BY x LIMIT n optimization is not pushed\n>>>> down through the UNION ALL as well. I understand that this may be a little\n>>>> trickier because the ORDER BY and LIMIT would need to be applied to the\n>>>> subqueries and then re-applied after the APPEND,\n>>>\n>>> PostgreSQL 9.1 will know how to do this, FWIW.\n>>\n>> Out of curiosity, what was the commit for this?\n>\n> 11cad29c91524aac1d0b61e0ea0357398ab79bf8 Support MergeAppend plans, to\n> allow sorted output from append relations.\n> 034967bdcbb0c7be61d0500955226e1234ec5f04 Reimplement planner's\n> handling of MIN/MAX aggregate optimization.\n> 947d0c862c895618a874344322e7b07c9df05cb2 Use appendrel planning logic\n> for top-level UNION ALL structures.\n> 6fbc323c8042303a737028f9da7616896bccc517 Further fallout from the\n> MergeAppend patch.\n\nErk.. I see. Thanks :)\n\n-- \nThom Brown\nTwitter: @darkixion\nIRC (freenode): dark_ixion\nRegistered Linux user: #516935\n", "msg_date": "Wed, 2 Mar 2011 19:53:49 +0530", "msg_from": "Thom Brown <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Pushing IN (subquery) down through UNION ALL?" } ]
[ { "msg_contents": "Hi foks\n\nThis is an old chestnut which I've found a number of online threads for, and\nnever seen a clever answer to. It seems a common enough idiom that there\nmight be some slicker way to do it, so I thought I might inquire with this\naugust group if such a clever answer exists ....\n\nConsider the following table\n\ncreate table data\n (id_key int,\n time_stamp timestamp without time zone,\n value double precision);\n\ncreate unique index data_idx on data (id_key, time_stamp);\n\nwith around 1m rows, with 3500 or so distinct values of id_key.\n\nI need to find the most recent value for each distinct value of id_key.\nThere is no elegant (that I know of) syntax for this, and there are two ways\nI've typically seen it done:\n\n1. Use a dependent subquery to find the most recent time stamp, i.e.\n\nselect\n a.id_key, a.time_stamp, a.value\nfrom\n data a\nwhere\n a.time_stamp=\n (select max(time_stamp)\n from data b\n where a.id_key=b.id_key)\n\n2. Define a temporary table / view with the most recent time stamp for each\nkey, and join against it:\n\nselect\n a.id_key, a.time_stamp, a.value\nfrom\n data a,\n (select id_key, max(time_stamp) as mts\n from data group by id_key) b\nwhere\n a.id_key=b.id_key and a.time_stamp=b.mts\n\nI've found that for my data set, PG 8.4.2 selects the \"obvious\" / \"do it as\nwritten\" plan in each case, and that method 2. is much quicker (2.6 sec vs.\n2 min on my laptop) ....\n\nIs there a more elegant way to write this, perhaps using PG-specific\nextensions?\n\nCheers\nDave\n\nHi foksThis is an old chestnut which I've found a number of online threads for, and never seen a clever answer to. It seems a common enough idiom that there might be some slicker way to do it, so I thought I might inquire with this august group if such a clever answer exists .... \nConsider the following tablecreate table data   (id_key int, \n    time_stamp timestamp without time zone,     value double precision);\ncreate unique index data_idx on data (id_key, time_stamp);with around 1m rows, with 3500 or so distinct values of id_key. \nI need to find the most recent value for each distinct value of id_key.  There is no elegant (that I know of) syntax for this, and there are two ways I've typically seen it done:1. Use a dependent subquery to find the most recent time stamp, i.e.\nselect   a.id_key, a.time_stamp, a.value\nfrom   data a\nwhere   a.time_stamp=     (select max(time_stamp) \n      from data b      where a.id_key=b.id_key)\n2. Define a temporary table / view with the most recent time stamp for each key, and join against it:select\n   a.id_key, a.time_stamp, a.valuefrom\n   data a,   (select id_key, max(time_stamp) as mts\n    from data group by id_key) bwhere\n   a.id_key=b.id_key and a.time_stamp=b.mtsI've found that for my data set, PG 8.4.2 selects the \"obvious\" / \"do it as written\" plan in each case, and that method 2. is much quicker (2.6 sec vs. 2 min on my laptop) .... \nIs there a more elegant way to write this, perhaps using PG-specific extensions?CheersDave", "msg_date": "Thu, 24 Feb 2011 13:55:31 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Picking out the most recent row using a time stamp column" }, { "msg_contents": "On Thu, Feb 24, 2011 at 1:55 PM, Dave Crooke <[email protected]> wrote:\n> Hi foks\n>\n> This is an old chestnut which I've found a number of online threads for, and\n> never seen a clever answer to. 
It seems a common enough idiom that there\n> might be some slicker way to do it, so I thought I might inquire with this\n> august group if such a clever answer exists ....\n>\n> Consider the following table\n>\n> create table data\n>    (id_key int,\n>     time_stamp timestamp without time zone,\n>     value double precision);\n>\n> create unique index data_idx on data (id_key, time_stamp);\n>\n> with around 1m rows, with 3500 or so distinct values of id_key.\n>\n> I need to find the most recent value for each distinct value of id_key.\n> There is no elegant (that I know of) syntax for this, and there are two ways\n> I've typically seen it done:\n>\n> 1. Use a dependent subquery to find the most recent time stamp, i.e.\n>\n> select\n>    a.id_key, a.time_stamp, a.value\n> from\n>    data a\n> where\n>   a.time_stamp=\n>      (select max(time_stamp)\n>       from data b\n>       where a.id_key=b.id_key)\n>\n> 2. Define a temporary table / view with the most recent time stamp for each\n> key, and join against it:\n>\n> select\n>    a.id_key, a.time_stamp, a.value\n> from\n>    data a,\n>    (select id_key, max(time_stamp) as mts\n>     from data group by id_key) b\n> where\n>    a.id_key=b.id_key and a.time_stamp=b.mts\n>\n> I've found that for my data set, PG 8.4.2 selects the \"obvious\" / \"do it as\n> written\" plan in each case, and that method 2. is much quicker (2.6 sec vs.\n> 2 min on my laptop) ....\n>\n> Is there a more elegant way to write this, perhaps using PG-specific\n> extensions?\n\none pg specific method that a lot of people overlook for this sort of\nproblem is custom aggregates.\n\ncreate or replace function maxfoo(foo, foo) returns foo as\n$$\n select case when $1.t > $2.t then $1 else $2 end;\n$$ language sql immutable;\n\ncreate aggregate aggfoo(foo)\n(\n sfunc=maxfoo,\n stype=foo\n);\n\ncreate table foo(id int, t timestamptz default now());\ninsert into foo values (1);\ninsert into foo values (1);\n\nselect (f).* from (select aggfoo(foo) as f from foo group by id) q;\n\npostgres=# select (f).* from (select aggfoo(foo) as f from foo group by id) q;\n id | t\n----+----------------------------\n 1 | 2011-02-24 14:01:20.051-06\n(1 row)\n\n\nwhere this approach can be useful is when you have a very complicated\naggregation condition that can be awkward to express in a join.\n\nmerlin\n", "msg_date": "Thu, 24 Feb 2011 14:11:29 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "Dave Crooke <[email protected]> wrote:\n \n> create table data\n> (id_key int,\n> time_stamp timestamp without time zone,\n> value double precision);\n> \n> create unique index data_idx on data (id_key, time_stamp);\n \n> I need to find the most recent value for each distinct value of\n> id_key.\n \nWell, unless you use timestamp WITH time zone, you might not be able\nto do that at all. 
There are very few places where timestamp\nWITHOUT time zone actually makes sense.\n \n> There is no elegant (that I know of) syntax for this\n \nHow about this?:\n \nselect distinct on (id_key) * from data order by id_key, time_stamp;\n \n> select\n> a.id_key, a.time_stamp, a.value\n> from\n> data a\n> where\n> a.time_stamp=\n> (select max(time_stamp)\n> from data b\n> where a.id_key=b.id_key)\n \nRather than the above, I typically find this much faster:\n \nselect\n a.id_key, a.time_stamp, a.value\nfrom\n data a\nwhere not exists\n (select * from data b\n where b.id_key=a.id_key and b.time_stamp > a.time_stamp)\n \n-Kevin\n", "msg_date": "Thu, 24 Feb 2011 14:18:55 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time\n\t stamp column" }, { "msg_contents": "\nOn Feb 24, 2011, at 14:55, Dave Crooke wrote:\n\n> Is there a more elegant way to write this, perhaps using PG-specific\n> extensions?\n\nSELECT DISTINCT ON (data.id_key)\n data.id_key, data.time_stamp, data.value\n FROM data\n ORDER BY data.id_key, data.time_stamp DESC;\n\nMichael Glaesemann\ngrzm seespotcode net\n\n\n\n", "msg_date": "Thu, 24 Feb 2011 15:21:49 -0500", "msg_from": "Michael Glaesemann <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "Michael Glaesemann <[email protected]> wrote:\n \n> SELECT DISTINCT ON (data.id_key)\n> data.id_key, data.time_stamp, data.value\n> FROM data\n> ORDER BY data.id_key, data.time_stamp DESC;\n \nDang! I forgot the DESC in my post! Thanks for showing the\n*correct* version.\n \n-Kevin\n", "msg_date": "Thu, 24 Feb 2011 14:24:15 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time\n\t stamp column" }, { "msg_contents": "On Thu, Feb 24, 2011 at 2:18 PM, Kevin Grittner\n<[email protected]> wrote:\n> Dave Crooke <[email protected]> wrote:\n>\n>> create table data\n>>    (id_key int,\n>>     time_stamp timestamp without time zone,\n>>     value double precision);\n>>\n>> create unique index data_idx on data (id_key, time_stamp);\n>\n>> I need to find the most recent value for each distinct value of\n>> id_key.\n>\n> Well, unless you use timestamp WITH time zone, you might not be able\n> to do that at all.  There are very few places where timestamp\n> WITHOUT time zone actually makes sense.\n>\n>> There is no elegant (that I know of) syntax for this\n>\n> How about this?:\n>\n> select distinct on (id_key) * from data order by id_key, time_stamp;\n>\n>> select\n>>    a.id_key, a.time_stamp, a.value\n>> from\n>>    data a\n>> where\n>>   a.time_stamp=\n>>      (select max(time_stamp)\n>>       from data b\n>>       where a.id_key=b.id_key)\n>\n> Rather than the above, I typically find this much faster:\n>\n> select\n>   a.id_key, a.time_stamp, a.value\n> from\n>   data a\n> where not exists\n>  (select * from data b\n>   where b.id_key=a.id_key and b.time_stamp > a.time_stamp)\n\nhm. not only is it faster, but much more flexible...that's definitely\nthe way to go.\n\nmerlin\n", "msg_date": "Thu, 24 Feb 2011 15:14:59 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "Thanks to all .... I had a tickling feeling at the back of my mind that\nthere was a neater answer here. 
For the record, times (all from in-memory\ncached data, averaged over a bunch of runs):\n\nDependent subquery = 117.9 seconds\nJoin to temp table = 2.7 sec\nDISTINCT ON = 2.7 sec\n\nSo the DISTINCT ON may not be quicker, but it sure is tidier.\n\nCheers\nDave\n\nOn Thu, Feb 24, 2011 at 2:24 PM, Kevin Grittner <[email protected]\n> wrote:\n\n> Michael Glaesemann <[email protected]> wrote:\n>\n> > SELECT DISTINCT ON (data.id_key)\n> > data.id_key, data.time_stamp, data.value\n> > FROM data\n> > ORDER BY data.id_key, data.time_stamp DESC;\n>\n> Dang! I forgot the DESC in my post! Thanks for showing the\n> *correct* version.\n>\n> -Kevin\n>\n\nThanks to all .... I had a tickling feeling at the back of my mind that there was a neater answer here. For the record, times (all from in-memory cached data, averaged over a bunch of runs):Dependent subquery = 117.9 seconds\nJoin to temp table = 2.7 secDISTINCT ON = 2.7 secSo the DISTINCT ON may not be quicker, but it sure is tidier.CheersDaveOn Thu, Feb 24, 2011 at 2:24 PM, Kevin Grittner <[email protected]> wrote:\nMichael Glaesemann <[email protected]> wrote:\n\n> SELECT DISTINCT ON (data.id_key)\n>        data.id_key, data.time_stamp, data.value\n>   FROM data\n>   ORDER BY data.id_key, data.time_stamp DESC;\n\nDang!  I forgot the DESC in my post!  Thanks for showing the\n*correct* version.\n\n-Kevin", "msg_date": "Thu, 24 Feb 2011 17:38:37 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "On 2/24/11 3:38 PM, Dave Crooke wrote:\n> Thanks to all .... I had a tickling feeling at the back of my mind that\n> there was a neater answer here. For the record, times (all from\n> in-memory cached data, averaged over a bunch of runs):\n> \n> Dependent subquery = 117.9 seconds\n> Join to temp table = 2.7 sec\n> DISTINCT ON = 2.7 sec\n\nBut wait, there's more! You haven't tested the Windowing Function\nsolution. I'll bet it's even faster.\n\n\nSELECT id_key, time_stamp, value\nFROM (\n\tSELECT id_key, time_stamp, value,\n\t\trow_number()\n\t\tOVER ( PARTITION BY id_key\n\t\t\tORDER BY time_stamp DESC)\n \t\tas ranking\n\tFROM thetable\n ) as filtered_table\nWHERE ranking = 1\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 24 Feb 2011 16:20:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp\n column" }, { "msg_contents": "On 02/24/2011 06:20 PM, Josh Berkus wrote:\n\n> SELECT id_key, time_stamp, value\n> FROM (\n> \tSELECT id_key, time_stamp, value,\n> \t\trow_number()\n> \t\tOVER ( PARTITION BY id_key\n> \t\t\tORDER BY time_stamp DESC)\n> \t\tas ranking\n> \tFROM thetable\n> ) as filtered_table\n> WHERE ranking = 1\n\nWhy did you use row_number instead of rank?\n\nI am now curious how the speed compares though. I still think the \nDISTINCT ON will be faster, but it would be a great surprise.\n\n-- \nShaun Thomas\nOptionsHouse | 141 W. Jackson Blvd. 
| Suite 800 | Chicago IL, 60604\n312-676-8870\[email protected]\n\n______________________________________________\n\nSee http://www.peak6.com/email_disclaimer.php\nfor terms and conditions related to this email\n", "msg_date": "Thu, 24 Feb 2011 18:58:33 -0600", "msg_from": "Shaun Thomas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp\n column" }, { "msg_contents": "\n> Why did you use row_number instead of rank?\n\nBecause I assumed he only wanted one row in the event of ties.\n\nHmmm, although with that schema, there won't be ties. So it's pretty\nmuch arbitrary then.\n\n> I am now curious how the speed compares though. I still think the\n> DISTINCT ON will be faster, but it would be a great surprise.\n\nHopefully we'll find out! The windowing functions are usually much\nfaster for me. I think in 9.0 or 9.1 someone replumbed DISTINCT ON to\nuse a bunch of the window function internals, at which point it'll cease\nto matter.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Thu, 24 Feb 2011 17:52:19 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp\n column" }, { "msg_contents": "On Thu, Feb 24, 2011 at 4:38 PM, Dave Crooke <[email protected]> wrote:\n\n> Thanks to all .... I had a tickling feeling at the back of my mind that\n> there was a neater answer here. For the record, times (all from in-memory\n> cached data, averaged over a bunch of runs):\n>\n> Dependent subquery = 117.9 seconds\n> Join to temp table = 2.7 sec\n> DISTINCT ON = 2.7 sec\n>\n> So the DISTINCT ON may not be quicker, but it sure is tidier.\n>\n> Cheers\n> Dave\n\n\nI'm using 8.3.3 and I have a similar sort of setup and just thought I'd add\nanother point of reference, here's the timing from doing the same sort of\nqueries on my dataset of ~700,000 records with ~10,000 unique \"id_key\"s.\n\nI also added a 4th version that uses a permanent table that's auto-populated\nby a trigger with the rid of the most recent entry from the main table, so\nit's a simple join to get the latest entries.\n\nDependent subquery = (killed it after it ran for over 10 minutes)\nJoin on temp table = 1.5 seconds\nDISTINCT ON = 2.9 seconds\nJoin on auto-populated table = 0.8 seconds\n\nDave\n\nOn Thu, Feb 24, 2011 at 4:38 PM, Dave Crooke <[email protected]> wrote:\nThanks to all .... I had a tickling feeling at the back of my mind that there was a neater answer here. 
For the record, times (all from in-memory cached data, averaged over a bunch of runs):Dependent subquery = 117.9 seconds\n\nJoin to temp table = 2.7 secDISTINCT ON = 2.7 secSo the DISTINCT ON may not be quicker, but it sure is tidier.CheersDaveI'm using 8.3.3 and I have a similar sort of setup and just thought I'd add another point of reference, here's the timing from doing the same sort of queries on my dataset of ~700,000 records with ~10,000 unique \"id_key\"s.\nI also added a 4th version that uses a permanent table that's auto-populated by a trigger with the rid of the most recent entry from the main table, so it's a simple join to get the latest entries.Dependent subquery = (killed it after it ran for over 10 minutes)\nJoin on temp table = 1.5 secondsDISTINCT ON = 2.9 secondsJoin on auto-populated table = 0.8 secondsDave", "msg_date": "Fri, 25 Feb 2011 09:50:40 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "Hi Dave\n\nYes, 100% the best solution .... I did the same thing a while back, I just\nhave a separate copy of the data in a \"latest\" table and the Java code just\nruns a second SQL statement to update it when writing a new record (I've\nnever been a trigger fan).\n\nI found myself looking at the \"find the latest\" query again though in the\nprocess of building a \"demo mode\" into our application, which will replay a\nfinite set of data on a rolling loop by moving it forward in time, and also\nhas to simulate the continuous updating of the \"latest\" table so the the\nbusiness logic will be appropriately fooled.\n\nMy next tweak will be to cache the \"latest\" table in the Java layer ;-)\n\nCheers\nDave\n\nOn Fri, Feb 25, 2011 at 10:50 AM, Dave Johansen <[email protected]>wrote:\n\n> On Thu, Feb 24, 2011 at 4:38 PM, Dave Crooke <[email protected]> wrote:\n>\n>> Thanks to all .... I had a tickling feeling at the back of my mind that\n>> there was a neater answer here. For the record, times (all from in-memory\n>> cached data, averaged over a bunch of runs):\n>>\n>> Dependent subquery = 117.9 seconds\n>> Join to temp table = 2.7 sec\n>> DISTINCT ON = 2.7 sec\n>>\n>> So the DISTINCT ON may not be quicker, but it sure is tidier.\n>>\n>> Cheers\n>> Dave\n>\n>\n> I'm using 8.3.3 and I have a similar sort of setup and just thought I'd add\n> another point of reference, here's the timing from doing the same sort of\n> queries on my dataset of ~700,000 records with ~10,000 unique \"id_key\"s.\n>\n> I also added a 4th version that uses a permanent table that's\n> auto-populated by a trigger with the rid of the most recent entry from the\n> main table, so it's a simple join to get the latest entries.\n>\n> Dependent subquery = (killed it after it ran for over 10 minutes)\n> Join on temp table = 1.5 seconds\n> DISTINCT ON = 2.9 seconds\n> Join on auto-populated table = 0.8 seconds\n>\n> Dave\n>\n\nHi DaveYes, 100% the best solution .... 
I did the same thing a while back, I just have a separate copy of the data in a \"latest\" table and the Java code just runs a second SQL statement to update it when writing a new record (I've never been a trigger fan).\nI found myself looking at the \"find the latest\" query again though in the process of building a \"demo mode\" into our application, which will replay a finite set of data on a rolling loop by moving it forward in time, and also has to simulate the continuous updating of the \"latest\" table so the the business logic will be appropriately fooled.\nMy next tweak will be to cache the \"latest\" table in the Java layer ;-)CheersDaveOn Fri, Feb 25, 2011 at 10:50 AM, Dave Johansen <[email protected]> wrote:\nOn Thu, Feb 24, 2011 at 4:38 PM, Dave Crooke <[email protected]> wrote:\n\nThanks to all .... I had a tickling feeling at the back of my mind that there was a neater answer here. For the record, times (all from in-memory cached data, averaged over a bunch of runs):Dependent subquery = 117.9 seconds\n\n\nJoin to temp table = 2.7 secDISTINCT ON = 2.7 secSo the DISTINCT ON may not be quicker, but it sure is tidier.CheersDaveI'm using 8.3.3 and I have a similar sort of setup and just thought I'd add another point of reference, here's the timing from doing the same sort of queries on my dataset of ~700,000 records with ~10,000 unique \"id_key\"s.\nI also added a 4th version that uses a permanent table that's auto-populated by a trigger with the rid of the most recent entry from the main table, so it's a simple join to get the latest entries.Dependent subquery = (killed it after it ran for over 10 minutes)\n\nJoin on temp table = 1.5 secondsDISTINCT ON = 2.9 secondsJoin on auto-populated table = 0.8 secondsDave", "msg_date": "Fri, 25 Feb 2011 14:45:23 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "On Fri, Feb 25, 2011 at 1:45 PM, Dave Crooke <[email protected]> wrote:\n\n> Hi Dave\n>\n> Yes, 100% the best solution .... I did the same thing a while back, I just\n> have a separate copy of the data in a \"latest\" table and the Java code just\n> runs a second SQL statement to update it when writing a new record (I've\n> never been a trigger fan).\n>\n> I found myself looking at the \"find the latest\" query again though in the\n> process of building a \"demo mode\" into our application, which will replay a\n> finite set of data on a rolling loop by moving it forward in time, and also\n> has to simulate the continuous updating of the \"latest\" table so the the\n> business logic will be appropriately fooled.\n>\n> My next tweak will be to cache the \"latest\" table in the Java layer ;-)\n>\n> Cheers\n> Dave\n\n\nOur application has what sounds like a similar functionality that we call\n\"playback\". The way that we did it was to have a schema called \"playback\"\nwith identical tables to those that we want to have repopulated. All the\nother tables exist in only the \"public\" schema and then we don't have to do\nany duplication of that data. 
Then during playback it just runs a query to\ncopy from the \"public\" table to the \"playback\" table and the trigger will\npopulate the \"latest\" table in the \"playback\" schema automatically just like\nwhen the program is running normally and populating the \"public\" version.\n\nThe secret sauce comes in by setting \"SET search_path TO playback, public;\"\nbecause then your application runs all the same queries to get the data and\ndoesn't have to know that anything different is going on other than the copy\ncoperation that it's doing. It's nice because it takes all of the data\nmanagement burden off of the application and then allows the database to do\nthe hard work for you. It's obviously not the perfect solution but it wasn't\ntoo hard to setup and we've really liked the way it works.\n\nDave\n\nOn Fri, Feb 25, 2011 at 1:45 PM, Dave Crooke <[email protected]> wrote:\nHi DaveYes, 100% the best solution .... I did the same thing a while back, I just have a separate copy of the data in a \"latest\" table and the Java code just runs a second SQL statement to update it when writing a new record (I've never been a trigger fan).\nI found myself looking at the \"find the latest\" query again though in the process of building a \"demo mode\" into our application, which will replay a finite set of data on a rolling loop by moving it forward in time, and also has to simulate the continuous updating of the \"latest\" table so the the business logic will be appropriately fooled.\nMy next tweak will be to cache the \"latest\" table in the Java layer ;-)CheersDave \n \nOur application has what sounds like a similar functionality that we call \"playback\". The way that we did it was to have a schema called \"playback\" with identical tables to those that we want to have repopulated. All the other tables exist in only the \"public\" schema and then we don't have to do any duplication of that data. Then during playback it just runs a query to copy from the \"public\" table to the \"playback\" table and the trigger will populate the \"latest\" table in the \"playback\" schema automatically just like when the program is running normally and populating the \"public\" version.\n \nThe secret sauce comes in by setting \"SET search_path TO playback, public;\" because then your application runs all the same queries to get the data and doesn't have to know that anything different is going on other than the copy coperation that it's doing. It's nice because it takes all of the data management burden off of the application and then allows the database to do the hard work for you. It's obviously not the perfect solution but it wasn't too hard to setup and we've really liked the way it works.\n \nDave", "msg_date": "Sat, 26 Feb 2011 06:44:28 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "* Kevin Grittner:\n\n> Dave Crooke <[email protected]> wrote:\n> \n>> create table data\n>> (id_key int,\n>> time_stamp timestamp without time zone,\n>> value double precision);\n>> \n>> create unique index data_idx on data (id_key, time_stamp);\n> \n>> I need to find the most recent value for each distinct value of\n>> id_key.\n> \n> Well, unless you use timestamp WITH time zone, you might not be able\n> to do that at all. 
There are very few places where timestamp\n> WITHOUT time zone actually makes sense.\n\nI don't think PostgreSQL keeps track of actual time zone values, just\nas it doesn't keep track of the character encoding of TEXT columns.\nUnless suppressed with WITHOUT TIME ZONE, PostgreSQL makes up some\ntime zone on demand. This makes TIMESTAMP WITH TIME ZONE not that\nuseful, and it's often to use TIMESTAMP WITHOUT TIME ZONE with times\nin UTC.\n", "msg_date": "Sat, 26 Feb 2011 21:54:50 +0100", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "Dave,\n\nWhy not test the windowing version I posted?\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Sat, 26 Feb 2011 13:06:00 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp\n column" }, { "msg_contents": "Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff\nwasn't added til 8.4.\nDave\nOn Feb 26, 2011 2:06 PM, \"Josh Berkus\" <[email protected]> wrote:\n> Dave,\n>\n> Why not test the windowing version I posted?\n>\n> --\n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n\nUnfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff wasn't added til 8.4.\nDave\nOn Feb 26, 2011 2:06 PM, \"Josh Berkus\" <[email protected]> wrote:> Dave,> > Why not test the windowing version I posted?\n> > -- > -- Josh Berkus> PostgreSQL Experts Inc.> http://www.pgexperts.com", "msg_date": "Sat, 26 Feb 2011 14:38:05 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "On Sat, Feb 26, 2011 at 2:38 PM, Dave Johansen <[email protected]>\nwrote:\n>\n> Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff\n> wasn't added til 8.4.\n> Dave\n>\n> On Feb 26, 2011 2:06 PM, \"Josh Berkus\" <[email protected]> wrote:\n> > Dave,\n> >\n> > Why not test the windowing version I posted?\n\nWe finally have moved over to 8.4 and so I just wanted to post the\ntime comparison numbers to show the times on 8.4 as well. 
This is also\na newer data set with ~700k rows and ~4k distinct id_key values.\n\n1) Dependent subquery\nSELECT a.id_key, a.time_stamp, a.value FROM data AS a WHERE\na.time_stamp = (SELECT MAX(time_stamp) FROM data AS b WHERE a.id_key =\nb.id_key);\n8.3.3: Killed it after a few minutes\n8.4.13: Killed it after a few minutes\n\n2) Join against temporary table\nSELECT a.id_key, a.time_stamp, a.value FROM data AS a JOIN (SELECT\nid_key, MAX(time_stamp) AS max_time_stamp FROM data GROUP BY id_key)\nAS b WHERE a.id_key = b.id_key AND a.time_stamp = b.max_time_stamp;\n8.3.3: 1.4 s\n8.4.13: 0.5 s\n\n3) DISTINCT ON:\nSELECT DISTINCT ON (id_key) id_key, time_stamp, value FROM data ORDER\nBY id_key, time_stamp DESC;\nWithout Index:\n8.3.3: 34.1 s\n8.4.13: 98.7 s\nWith Index (data(id_key, time_stamp DESC)):\n8.3.3: 3.4 s\n8.4.13: 1.3 s\n\n4) Auto-populated table\nSELECT id_key, time_stamp, value FROM data WHERE rid IN (SELECT rid\nFROM latestdata);\n8.3.3: 0.2 s\n8.4.13: 0.06 s\n\n5) Windowing\nSELECT id_key, time_stamp, value FROM (SELECT id_key, time_stamp,\nvalue, row_number() OVER (PARTITION BY id_key ORDER BY time_stamp\nDESC) AS ranking FROM data) AS a WHERE ranking=1;\n8.3.3: N/A\n8.4.13: 1.6 s\n\nSo the auto-populated table (#4) is the fastest by an order of\nmagnitude, but the join against the temporary table (#2) is the next\nbest option based on speed and doesn't require the extra multi-column\nindex that DISTINCT ON (#3) does.\n\nOn a related note though, is there a way to make the multi-column\nindex used in the DISTINCT ON more efficient. Based on the results, it\nappears that the multi-column index is actually a single index with\nthe ordering of the tree based on the first value and then the second\nvalue. Is there a way to make it be a \"multi-level index\"? What I mean\nis that the first value is basically a tree/hash that then points to\nthe second index because if that's possible then that would probably\nmake the DISTINCT ON (#3) version as fast or faster than the\nauto-populated table (#4). Is there a way to create an index like that\nin postgres?\n\nThanks,\nDave\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 5 Apr 2013 09:54:06 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "On Fri, Apr 5, 2013 at 11:54 AM, Dave Johansen <[email protected]> wrote:\n> On Sat, Feb 26, 2011 at 2:38 PM, Dave Johansen <[email protected]>\n> wrote:\n>>\n>> Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff\n>> wasn't added til 8.4.\n>> Dave\n>>\n>> On Feb 26, 2011 2:06 PM, \"Josh Berkus\" <[email protected]> wrote:\n>> > Dave,\n>> >\n>> > Why not test the windowing version I posted?\n>\n> We finally have moved over to 8.4 and so I just wanted to post the\n> time comparison numbers to show the times on 8.4 as well. 
This is also\n> a newer data set with ~700k rows and ~4k distinct id_key values.\n>\n> 1) Dependent subquery\n> SELECT a.id_key, a.time_stamp, a.value FROM data AS a WHERE\n> a.time_stamp = (SELECT MAX(time_stamp) FROM data AS b WHERE a.id_key =\n> b.id_key);\n> 8.3.3: Killed it after a few minutes\n> 8.4.13: Killed it after a few minutes\n>\n> 2) Join against temporary table\n> SELECT a.id_key, a.time_stamp, a.value FROM data AS a JOIN (SELECT\n> id_key, MAX(time_stamp) AS max_time_stamp FROM data GROUP BY id_key)\n> AS b WHERE a.id_key = b.id_key AND a.time_stamp = b.max_time_stamp;\n> 8.3.3: 1.4 s\n> 8.4.13: 0.5 s\n>\n> 3) DISTINCT ON:\n> SELECT DISTINCT ON (id_key) id_key, time_stamp, value FROM data ORDER\n> BY id_key, time_stamp DESC;\n> Without Index:\n> 8.3.3: 34.1 s\n> 8.4.13: 98.7 s\n> With Index (data(id_key, time_stamp DESC)):\n> 8.3.3: 3.4 s\n> 8.4.13: 1.3 s\n>\n> 4) Auto-populated table\n> SELECT id_key, time_stamp, value FROM data WHERE rid IN (SELECT rid\n> FROM latestdata);\n> 8.3.3: 0.2 s\n> 8.4.13: 0.06 s\n>\n> 5) Windowing\n> SELECT id_key, time_stamp, value FROM (SELECT id_key, time_stamp,\n> value, row_number() OVER (PARTITION BY id_key ORDER BY time_stamp\n> DESC) AS ranking FROM data) AS a WHERE ranking=1;\n> 8.3.3: N/A\n> 8.4.13: 1.6 s\n\nI would also test:\n\n*) EXISTS()\n\nSELECT a.id_key, a.time_stamp, a.value FROM data\nWHERE NOT EXISTS\n(\n SELECT 1 FROM data b\n WHERE\n a.id_key = b.id_key\n and b.time_stamp > a.time_stamp\n);\n\n*) custom aggregate (this will not be the fastest option but is a good\ntechnique to know -- it can be a real life saver when selection\ncriteria is complex)\n\nCREATE FUNCTION agg_latest_data(data, data) returns data AS\n$$\n SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END;\n$$ LANGUAGE SQL IMMUTABLE;\n\nCREATE AGGREGATE latest_data (\n SFUNC=agg_latest_data,\n STYPE=data\n);\n\nSELECT latest_data(d) FROM data d group by d.id_key;\n\nthe above returns the composite, not the fields, but that can be worked around.\n\nmerlin\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Fri, 5 Apr 2013 13:40:16 -0500", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" }, { "msg_contents": "On Fri, Apr 5, 2013 at 11:40 AM, Merlin Moncure <[email protected]> wrote:\n>\n> On Fri, Apr 5, 2013 at 11:54 AM, Dave Johansen <[email protected]> wrote:\n> > On Sat, Feb 26, 2011 at 2:38 PM, Dave Johansen <[email protected]>\n> > wrote:\n> >>\n> >> Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff\n> >> wasn't added til 8.4.\n> >> Dave\n> >>\n> >> On Feb 26, 2011 2:06 PM, \"Josh Berkus\" <[email protected]> wrote:\n> >> > Dave,\n> >> >\n> >> > Why not test the windowing version I posted?\n> >\n> > We finally have moved over to 8.4 and so I just wanted to post the\n> > time comparison numbers to show the times on 8.4 as well. 
This is also\n> > a newer data set with ~700k rows and ~4k distinct id_key values.\n> >\n> > 1) Dependent subquery\n> > SELECT a.id_key, a.time_stamp, a.value FROM data AS a WHERE\n> > a.time_stamp = (SELECT MAX(time_stamp) FROM data AS b WHERE a.id_key =\n> > b.id_key);\n> > 8.3.3: Killed it after a few minutes\n> > 8.4.13: Killed it after a few minutes\n> >\n> > 2) Join against temporary table\n> > SELECT a.id_key, a.time_stamp, a.value FROM data AS a JOIN (SELECT\n> > id_key, MAX(time_stamp) AS max_time_stamp FROM data GROUP BY id_key)\n> > AS b WHERE a.id_key = b.id_key AND a.time_stamp = b.max_time_stamp;\n> > 8.3.3: 1.4 s\n> > 8.4.13: 0.5 s\n> >\n> > 3) DISTINCT ON:\n> > SELECT DISTINCT ON (id_key) id_key, time_stamp, value FROM data ORDER\n> > BY id_key, time_stamp DESC;\n> > Without Index:\n> > 8.3.3: 34.1 s\n> > 8.4.13: 98.7 s\n> > With Index (data(id_key, time_stamp DESC)):\n> > 8.3.3: 3.4 s\n> > 8.4.13: 1.3 s\n> >\n> > 4) Auto-populated table\n> > SELECT id_key, time_stamp, value FROM data WHERE rid IN (SELECT rid\n> > FROM latestdata);\n> > 8.3.3: 0.2 s\n> > 8.4.13: 0.06 s\n> >\n> > 5) Windowing\n> > SELECT id_key, time_stamp, value FROM (SELECT id_key, time_stamp,\n> > value, row_number() OVER (PARTITION BY id_key ORDER BY time_stamp\n> > DESC) AS ranking FROM data) AS a WHERE ranking=1;\n> > 8.3.3: N/A\n> > 8.4.13: 1.6 s\n>\n> I would also test:\n>\n> *) EXISTS()\n>\n> SELECT a.id_key, a.time_stamp, a.value FROM data\n> WHERE NOT EXISTS\n> (\n> SELECT 1 FROM data b\n> WHERE\n> a.id_key = b.id_key\n> and b.time_stamp > a.time_stamp\n> );\n\nI tried this and it was slow:\n8.3.3: 674.4 s\n8.4.13: 40.4 s\n\n>\n> *) custom aggregate (this will not be the fastest option but is a good\n> technique to know -- it can be a real life saver when selection\n> criteria is complex)\n>\n> CREATE FUNCTION agg_latest_data(data, data) returns data AS\n> $$\n> SELECT CASE WHEN $1 > $2 THEN $1 ELSE $2 END;\n> $$ LANGUAGE SQL IMMUTABLE;\n>\n> CREATE AGGREGATE latest_data (\n> SFUNC=agg_latest_data,\n> STYPE=data\n> );\n>\n> SELECT latest_data(d) FROM data d group by d.id_key;\n>\n> the above returns the composite, not the fields, but that can be worked around.\n\nMy real table actually returns/needs all the values from the row so I\ndidn't feel like messing with aggregate stuff.\n\nThanks,\nDave\n\n\n-- \nSent via pgsql-performance mailing list ([email protected])\nTo make changes to your subscription:\nhttp://www.postgresql.org/mailpref/pgsql-performance\n", "msg_date": "Mon, 8 Apr 2013 14:10:53 -0700", "msg_from": "Dave Johansen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Picking out the most recent row using a time stamp column" } ]
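For reference, the fastest option above ("auto-populated table") is only described in the thread, never shown. A minimal sketch of what such a trigger could look like, reusing the table names from the thread and assuming the base table has a serial primary key called rid (this is an illustration, not the original poster's code):

    CREATE TABLE latestdata (
        id_key int PRIMARY KEY,
        rid    int NOT NULL
    );

    CREATE OR REPLACE FUNCTION track_latest() RETURNS trigger AS $$
    BEGIN
        -- keep exactly one row per id_key, pointing at the newest rid
        UPDATE latestdata SET rid = NEW.rid WHERE id_key = NEW.id_key;
        IF NOT FOUND THEN
            INSERT INTO latestdata (id_key, rid) VALUES (NEW.id_key, NEW.rid);
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER data_track_latest
        AFTER INSERT ON data
        FOR EACH ROW EXECUTE PROCEDURE track_latest();

This sketch assumes rows arrive in time_stamp order and does not handle two sessions inserting a brand-new id_key at the same time (the second INSERT would fail on the primary key), so a production version needs more care.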
[ { "msg_contents": "P.S. I noticed inadvertently (by making a typo ;-) that not all of the\ncolumns in the DISTINCT ON are required to be part of the output, in which\ncase it appears to reduce the DISTINCT ON to the columns that are\nrepresented .... in my real world situation, \"id_key\" is actually composed\nof 3 columns, and I made a typo like the following (in which I've tweaked\nthe spacing to highlight the missing comma:\n\nselect distinct on (a, b, c)\na, b c, time_stamp, value\nfrom data\norder by a, b, c, time_stamp desc;\n\nThe output produced is the same as this query:\n\nselect distinct on (a, b)\na, b, time_stamp, value\nfrom data\norder by a, b, time_stamp desc;\n\nNot sure if this is considered a parser bug or not, but it feels slightly\nodd not to get an error.\n\nPG 8.4.7 installed from Ubuntu 10.04's 64-bit build.\n\nCheers\nDave\n\nOn Thu, Feb 24, 2011 at 5:38 PM, Dave Crooke <[email protected]> wrote:\n\n> Thanks to all .... I had a tickling feeling at the back of my mind that\n> there was a neater answer here. For the record, times (all from in-memory\n> cached data, averaged over a bunch of runs):\n>\n> Dependent subquery = 117.9 seconds\n> Join to temp table = 2.7 sec\n> DISTINCT ON = 2.7 sec\n>\n> So the DISTINCT ON may not be quicker, but it sure is tidier.\n>\n> Cheers\n> Dave\n>\n>\n> On Thu, Feb 24, 2011 at 2:24 PM, Kevin Grittner <\n> [email protected]> wrote:\n>\n>> Michael Glaesemann <[email protected]> wrote:\n>>\n>> > SELECT DISTINCT ON (data.id_key)\n>> > data.id_key, data.time_stamp, data.value\n>> > FROM data\n>> > ORDER BY data.id_key, data.time_stamp DESC;\n>>\n>> Dang! I forgot the DESC in my post! Thanks for showing the\n>> *correct* version.\n>>\n>> -Kevin\n>>\n>\n>\n\nP.S. I noticed inadvertently (by making a typo ;-) that not all of the columns in the DISTINCT ON are required to be part of the output, in which case it appears to reduce the DISTINCT ON to the columns that are represented .... in my real world situation, \"id_key\" is actually composed of 3 columns, and I made a typo like the following (in which I've tweaked the spacing to highlight the missing comma:\nselect distinct on (a,    b,    c)a,   b c,   time_stamp,    value\nfrom dataorder by a, b, c, time_stamp desc;The output produced is the same as this query:\nselect distinct on (a,    b)\na,   b,   time_stamp,    value\nfrom data\norder by a, b, time_stamp desc;\nNot sure if this is considered a parser bug or not, but it feels slightly odd not to get an error. PG 8.4.7 installed from Ubuntu 10.04's 64-bit build.CheersDave\nOn Thu, Feb 24, 2011 at 5:38 PM, Dave Crooke <[email protected]> wrote:\nThanks to all .... I had a tickling feeling at the back of my mind that there was a neater answer here. For the record, times (all from in-memory cached data, averaged over a bunch of runs):Dependent subquery = 117.9 seconds\n\nJoin to temp table = 2.7 secDISTINCT ON = 2.7 secSo the DISTINCT ON may not be quicker, but it sure is tidier.CheersDave\nOn Thu, Feb 24, 2011 at 2:24 PM, Kevin Grittner <[email protected]> wrote:\nMichael Glaesemann <[email protected]> wrote:\n\n> SELECT DISTINCT ON (data.id_key)\n>        data.id_key, data.time_stamp, data.value\n>   FROM data\n>   ORDER BY data.id_key, data.time_stamp DESC;\n\nDang!  I forgot the DESC in my post!  Thanks for showing the\n*correct* version.\n\n-Kevin", "msg_date": "Thu, 24 Feb 2011 17:53:08 -0600", "msg_from": "Dave Crooke <[email protected]>", "msg_from_op": true, "msg_subject": "Possible parser bug? .... 
Re: Picking out the most recent\n\trow using a time stamp column" }, { "msg_contents": "Friday, February 25, 2011, 12:53:08 AM you wrote:\n\n> select distinct on (a, b, c)\n> a, b c, time_stamp, value\n\nWithout the comma, you declare 'b AS c'\n\n> from data\n> order by a, b, c, time_stamp desc;\n\n> The output produced is the same as this query:\n\n> select distinct on (a, b)\n> a, b, time_stamp, value\n> from data\n> order by a, b, time_stamp desc;\n\nthe 'c' is optimized away, since it is an alias for b, and thus redundant \nfor the distinct.\n\n> Not sure if this is considered a parser bug or not, but it feels slightly\n> odd not to get an error.\n\nNo error, just plain SQL :-)\n\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Fri, 25 Feb 2011 01:03:12 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible parser bug? .... Re: Picking out the most recent row\n\tusing a time stamp column" } ]
[ { "msg_contents": "I run into performance problem when I pass the condition/variable in binding\nways, however if I put them in the query string it's absolutely fine.\n\n----\nHere is my table and index:\nCREATE TABLE shipment_lookup\n(\n shipment_id text NOT NULL,\n lookup text NOT NULL\n);\nCREATE INDEX shipment_lookup_prefix\n ONshipment_lookup\n USING btree\n (upper(lookup));\n----\nI have 10 million rows in the table.\n\n* My query is:\n$dbh->selectall_arrayref(\"SELECT * from shipment_lookup WHERE (UPPER(lookup)\nLIKE '0GURG5YGVQA9%')\");\n\nHere is the explain I get by using Perl and pgAdmin III.\nIndex Scan using shipment_lookup_prefix on shipment_lookup (cost=0.01..5.00\nrows=921 width=28)\n Index Cond: ((upper(lookup) >= '0GURG5YGVQA9'::text) AND (upper(lookup) <\n'0GURG5YGVQA:'::text))\n Filter: (upper(lookup) ~~ '0GURG5YGVQA9%'::text)\n\nIndex is used, and it just takes 50ms to execute. So far so good.\n\n* But if I do this - using binding:\n$dbh->selectall_arrayref(\"SELECT * from shipment_lookup WHERE (UPPER(lookup)\nLIKE ?)\", undef, '0GURG5YGVQA9%');\nIt took 10 seconds to finish the query, just like it was using full table\nscan instead! Even though the 'explain' shows the same query plan.\n\nSo what would be the issue...? I can't isolate if it's the Perl or pgsql.\n\nThanks,\nSam\n----\nVersion Info:\nPostgresql: \"PostgreSQL 8.4.5, compiled by Visual C++ build 1400, 32-bit\" on\nWindows 2003\nPerl: 5.10.1\nDBD::Pg: 2.17.2\n\n\n\n", "msg_date": "Fri, 25 Feb 2011 11:02:23 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Perl Binding affects speed?" }, { "msg_contents": "On Fri, Feb 25, 2011 at 05:02, Sam Wong <[email protected]> wrote:\n> * But if I do this - using binding:\n> $dbh->selectall_arrayref(\"SELECT * from shipment_lookup WHERE (UPPER(lookup)\n> LIKE ?)\", undef, '0GURG5YGVQA9%');\n> It took 10 seconds to finish the query, just like it was using full table\n> scan instead! Even though the 'explain' shows the same query plan.\n\nThis is a pretty common shortcoming with placeholders. Since planning\nof parameterized queries is done *before* binding parameters, the\nplanner has no knowledge of what the \"?\" placeholder actually is. Thus\nit often gets the selectivity statistics wrong and produces worse\nplans for your values.\n\nAFAIK the only workaround is to not use variable binding in these\ncases, but escape and insert your variables straight it into the SQL\nquery.\n\nRegards,\nMarti\n", "msg_date": "Fri, 25 Feb 2011 14:25:32 +0200", "msg_from": "Marti Raudsepp <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl Binding affects speed?" }, { "msg_contents": "\nOn 25/02/2011, at 13.25, Marti Raudsepp wrote:\n\n> On Fri, Feb 25, 2011 at 05:02, Sam Wong <[email protected]> wrote:\n>> * But if I do this - using binding:\n>> $dbh->selectall_arrayref(\"SELECT * from shipment_lookup WHERE (UPPER(lookup)\n>> LIKE ?)\", undef, '0GURG5YGVQA9%');\n>> It took 10 seconds to finish the query, just like it was using full table\n>> scan instead! Even though the 'explain' shows the same query plan.\n> \n> This is a pretty common shortcoming with placeholders. Since planning\n> of parameterized queries is done *before* binding parameters, the\n> planner has no knowledge of what the \"?\" placeholder actually is. 
Thus\n> it often gets the selectivity statistics wrong and produces worse\n> plans for your values.\n> \n> AFAIK the only workaround is to not use variable binding in these\n> cases, but escape and insert your variables straight it into the SQL\n> query.\n\nInstead of not using the placeholder syntax you can use:\n\nlocal $dbh->{pg_server_prepare} = 0;\n\nwhich disables prepared queries serverside in the current scope and therefore doesn't have the late variable binding issue, but allows you to avoid SQL injection attacks.\n\nRegards,\nMartin", "msg_date": "Fri, 25 Feb 2011 13:58:55 +0100", "msg_from": "Martin Kjeldsen <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Perl Binding affects speed?" }, { "msg_contents": "From: Martin Kjeldsen, Sent: 2011/2/25, 20:59 \n> \n> On 25/02/2011, at 13.25, Marti Raudsepp wrote:\n> \n> > On Fri, Feb 25, 2011 at 05:02, Sam Wong <[email protected]> wrote:\n> >> * But if I do this - using binding:\n> >> $dbh->selectall_arrayref(\"SELECT * from shipment_lookup WHERE\n> >> (UPPER(lookup) LIKE ?)\", undef, '0GURG5YGVQA9%'); It took 10 seconds\n> >> to finish the query, just like it was using full table scan instead!\n> >> Even though the 'explain' shows the same query plan.\n> >\n> > This is a pretty common shortcoming with placeholders. Since planning\n> > of parameterized queries is done *before* binding parameters, the\n> > planner has no knowledge of what the \"?\" placeholder actually is. Thus\n> > it often gets the selectivity statistics wrong and produces worse\n> > plans for your values.\n> >\n> > AFAIK the only workaround is to not use variable binding in these\n> > cases, but escape and insert your variables straight it into the SQL\n> > query.\n> \n> Instead of not using the placeholder syntax you can use:\n> \n> local $dbh->{pg_server_prepare} = 0;\n> \n> which disables prepared queries serverside in the current scope and\n> therefore doesn't have the late variable binding issue, but allows you to\n> avoid SQL injection attacks.\n> \n\nThanks, I will look into that.\n\nSam\n\n", "msg_date": "Fri, 25 Feb 2011 21:24:16 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Perl Binding affects speed?" } ]
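The slowdown above can be reproduced without Perl, since with pg_server_prepare enabled the driver's placeholder becomes a server-side prepared statement whose plan is built before the value is bound. A rough illustration in plain SQL (the statement name is made up):

    PREPARE like_lookup(text) AS
        SELECT * FROM shipment_lookup WHERE upper(lookup) LIKE $1;

    EXPLAIN EXECUTE like_lookup('0GURG5YGVQA9%');
    -- the plan is chosen while the pattern is still unknown, so the planner
    -- cannot derive the >= / < index range it builds for a literal prefix,
    -- and it typically falls back to a sequential scan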
[ { "msg_contents": "I found that \"LIKE\", \"= ANY (...)\", \"LIKE .. OR LIKE ..\" against a text\nfield used the index correctly, but not \"LIKE ANY (...)\". Would that be a\nbug?\n\n----\nHere is my table and index:\nCREATE TABLE shipment_lookup\n(\n shipment_id text NOT NULL,\n lookup text NOT NULL\n);\nCREATE INDEX shipment_lookup_prefix\n ONshipment_lookup\n USING btree\n (upper(lookup));\n----\nThe table have 10 million rows.\n\nThe following statements use the index as expected:\nselect * from shipment_lookup where (UPPER(lookup) = 'SD1102228482' or\nUPPER(lookup) ='ABCDEFGHIJK')\nselect * from shipment_lookup where (UPPER(lookup) = ANY\n(ARRAY['SD1102228482','ABCDEFGHIJK']))\nselect * from shipment_lookup where (UPPER(lookup) LIKE 'SD1102228482%' or\nUPPER(lookup) LIKE 'ABCDEFGHIJK%')\n\nThe following statement results in a full table scan (but this is what I\nreally want to do):\nselect * from shipment_lookup where (UPPER(lookup) LIKE\nANY(ARRAY['SD1102228482%', 'ABCDEFGHIJK%']))\n\nI could rewrite the LIKE ANY(ARRAY[...]) as an LIKE .. OR .. LIKE .., but I\nwonder what makes the difference?\n\nThanks,\nSam\n\n----\nVersion Info:\nPostgresql: \"PostgreSQL 8.4.5, compiled by Visual C++ build 1400, 32-bit\" on\nWindows 2003\n\n", "msg_date": "Fri, 25 Feb 2011 21:31:26 +0800", "msg_from": "\"Sam Wong\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index use difference betweer LIKE, LIKE ANY?" }, { "msg_contents": "On 2/25/11 5:31 AM, Sam Wong wrote:\n> I found that \"LIKE\", \"= ANY (...)\", \"LIKE .. OR LIKE ..\" against a text\n> field used the index correctly, but not \"LIKE ANY (...)\". Would that be a\n> bug?\n\nNo, it would be a TODO. This is a known limitation; it needs some\nclever code to make it work, and nobody's written it.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Sat, 26 Feb 2011 13:13:34 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use difference betweer LIKE, LIKE ANY?" }, { "msg_contents": "On Sun, Feb 27, 2011 at 2:43 AM, Josh Berkus <[email protected]> wrote:\n\n> On 2/25/11 5:31 AM, Sam Wong wrote:\n> > I found that \"LIKE\", \"= ANY (...)\", \"LIKE .. OR LIKE ..\" against a text\n> > field used the index correctly, but not \"LIKE ANY (...)\". Would that be a\n> > bug?\n>\n> No, it would be a TODO. 
This is a known limitation; it needs some\n> clever code to make it work, and nobody's written it.\n>\n>\ncame up with attached patch without thinking too much.\nWith this patch, the explain output for the same query is as below:\n\npostgres=# explain select * from shipment_lookup where (UPPER(lookup)\nLIKE\nANY(ARRAY['SD1102228482%', 'ABCDEFGHIJK%']))\n;e\n QUERY\nPLAN\n-------------------------------------------------------------------------------------------------\n Seq Scan on shipment_lookup (cost=0.00..254057.36 rows=2000 width=14)\n * Filter: ((upper(lookup) ~~ 'SD1102228482%'::text) OR (upper(lookup) ~~\n'ABCDEFGHIJK%'::text))*\n(2 rows)\n\npostgres-#\n\nThe thing to be noted here is that the where clause \"<pred> LIKE ANY\nARRAY[..]\"\nhas been converted into\n(<pred> LIKE first_array_element) or (<pred> LIKE second_array_element) or\n....\n\nPlease pass on your inputs.\n\nRegards,\nChetan\n\n-- \nChetan Sutrave\nSenior Software Engineer\nEnterpriseDB Corporation\nThe Enterprise PostgreSQL Company\nPhone: +91.20.30589523\n\nWebsite: www.enterprisedb.com\nEnterpriseDB Blog: http://blogs.enterprisedb.com/\nFollow us on Twitter: http://www.twitter.com/enterprisedb\n\nThis e-mail message (and any attachment) is intended for the use of the\nindividual or entity to whom it is addressed. This message contains\ninformation from EnterpriseDB Corporation that may be privileged,\nconfidential, or exempt from disclosure under applicable law. If you are not\nthe intended recipient or authorized to receive this for the intended\nrecipient, any use, dissemination, distribution, retention, archiving, or\ncopying of this communication is strictly prohibited. If you have received\nthis e-mail in error, please notify the sender immediately by reply e-mail\nand delete this message.", "msg_date": "Tue, 15 Mar 2011 18:00:00 +0530", "msg_from": "Chetan Suttraway <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use difference betweer LIKE, LIKE ANY?" }, { "msg_contents": "On Tue, Mar 15, 2011 at 8:30 AM, Chetan Suttraway\n<[email protected]> wrote:\n> On Sun, Feb 27, 2011 at 2:43 AM, Josh Berkus <[email protected]> wrote:\n>>\n>> On 2/25/11 5:31 AM, Sam Wong wrote:\n>> > I found that \"LIKE\", \"= ANY (...)\", \"LIKE .. OR LIKE ..\" against a text\n>> > field used the index correctly, but not \"LIKE ANY (...)\". Would that be\n>> > a\n>> > bug?\n>>\n>> No, it would be a TODO.  
This is a known limitation; it needs some\n>> clever code to make it work, and nobody's written it.\n>>\n>\n> came up with attached patch without thinking too much.\n> With this patch, the explain output for the same query is as below:\n>\n> postgres=# explain select * from shipment_lookup where (UPPER(lookup)\n> LIKE\n> ANY(ARRAY['SD1102228482%', 'ABCDEFGHIJK%']))\n> ;e\n>                                            QUERY\n> PLAN\n> -------------------------------------------------------------------------------------------------\n>  Seq Scan on shipment_lookup  (cost=0.00..254057.36 rows=2000 width=14)\n>    Filter: ((upper(lookup) ~~ 'SD1102228482%'::text) OR (upper(lookup) ~~\n> 'ABCDEFGHIJK%'::text))\n> (2 rows)\n>\n> postgres-#\n>\n> The thing to be noted here is that  the where clause \"<pred> LIKE ANY\n> ARRAY[..]\"\n> has been converted into\n> (<pred> LIKE first_array_element) or (<pred> LIKE second_array_element) or\n> ....\n>\n> Please pass on your inputs.\n\nPlease add your patch here:\n\nhttps://commitfest.postgresql.org/action/commitfest_view/open\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 18 Apr 2011 12:14:59 -0400", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use difference betweer LIKE, LIKE ANY?" }, { "msg_contents": "On 15.03.2011 14:30, Chetan Suttraway wrote:\n> On Sun, Feb 27, 2011 at 2:43 AM, Josh Berkus<[email protected]> wrote:\n>\n>> On 2/25/11 5:31 AM, Sam Wong wrote:\n>>> I found that \"LIKE\", \"= ANY (...)\", \"LIKE .. OR LIKE ..\" against a text\n>>> field used the index correctly, but not \"LIKE ANY (...)\". Would that be a\n>>> bug?\n>>\n>> No, it would be a TODO. This is a known limitation; it needs some\n>> clever code to make it work, and nobody's written it.\n>>\n>>\n> came up with attached patch without thinking too much.\n> With this patch, the explain output for the same query is as below:\n>\n> postgres=# explain select * from shipment_lookup where (UPPER(lookup)\n> LIKE\n> ANY(ARRAY['SD1102228482%', 'ABCDEFGHIJK%']))\n> ;e\n> QUERY\n> PLAN\n> -------------------------------------------------------------------------------------------------\n> Seq Scan on shipment_lookup (cost=0.00..254057.36 rows=2000 width=14)\n> * Filter: ((upper(lookup) ~~ 'SD1102228482%'::text) OR (upper(lookup) ~~\n> 'ABCDEFGHIJK%'::text))*\n> (2 rows)\n>\n> postgres-#\n>\n> The thing to be noted here is that the where clause \"<pred> LIKE ANY\n> ARRAY[..]\"\n> has been converted into\n> (<pred> LIKE first_array_element) or (<pred> LIKE second_array_element) or\n> ....\n>\n> Please pass on your inputs.\n\nThis suffers from the same multiple-evaluation issue that was recently \ndiscovered in BETWEEN and IN expressions \n(http://archives.postgresql.org/message-id/[email protected]). \nThis transformation would also need to be done in the planner, after \nchecking that the left-hand expression is not volatile.\n\nAlso, even when safe, it's not clear that the transformation is always a \nwin. The left-hand expression could be expensive, in which case having \nto evaluate it multiple times could hurt performance. Maybe yo\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 06 Jun 2011 12:43:55 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use difference betweer LIKE, LIKE ANY?" 
}, { "msg_contents": "On 06.06.2011 12:43, Heikki Linnakangas wrote:\n> Also, even when safe, it's not clear that the transformation is always a\n> win. The left-hand expression could be expensive, in which case having\n> to evaluate it multiple times could hurt performance. Maybe yo\n\nSorry, hit \"send\" too early.\n\nMaybe you could put in some heuristic to only do the transformation when \nthe left-hand expression is cheap, or maybe use something like the \nCaseTestExpr to avoid multiple evaluation and still use the OR form. \nAlso, if the array is very large, opening it into the OR form could \nincrease plan time substantially, so we'd probably only want to do it if \nthere's any Vars involved, and thus any chance of matching an index.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 06 Jun 2011 12:47:10 +0300", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index use difference betweer LIKE, LIKE ANY?" } ]
[ { "msg_contents": "Hi,\n\nWe were running full vacuum on DB when we encountered the error below;\n\nINFO: analyzing \"public.bkup_access_control\"\nINFO: \"bkup_access_control\": scanned 14420 of 14420 pages, containing\n1634113 live rows and 0 dead rows; 30000 rows in sample, 1634113 estimated\ntotal rows\nINFO: vacuuming \"pg_catalog.pg_index\"\n*vacuumdb: vacuuming of database \"rpt_production\" failed: ERROR: duplicate\nkey value violates unique constraint \"pg_index_indexrelid_index\"*\nDETAIL: Key (indexrelid)=(2678) already exists.\n\nThe above table on which the error occured was actually a backup table of an\nexisting one.\n\nJFI. The backup table was created by\n\nSELECT * into bkup_access_control FROM access_control;\n\n\nDetails :\n\n version\n\n----------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (SUSE\nLinux) 4.5.0 20100604 [gcc-4_5-branch revision 160292], 64-bit\n\n\nCan you let us know the reason for this error?\n\nRegards,\nBhakti\n\nHi,We were running full vacuum on DB when we encountered the error below;INFO:  analyzing \"public.bkup_access_control\"\nINFO:  \"bkup_access_control\": scanned 14420 of 14420 pages, containing 1634113 live rows and 0 dead rows; 30000 rows in sample, 1634113 estimated total rowsINFO:  vacuuming \"pg_catalog.pg_index\"\nvacuumdb: vacuuming of database \"rpt_production\" failed: ERROR:  duplicate key value violates unique constraint \"pg_index_indexrelid_index\"DETAIL:  Key (indexrelid)=(2678) already exists.\nThe above table on which the error occured was actually a backup table of an existing one. JFI. The backup table was created by SELECT * into bkup_access_control FROM access_control;\nDetails :                                                                 version                                                      ----------------------------------------------------------------------------------------------------------------------------------------\n PostgreSQL 9.0.1 on x86_64-unknown-linux-gnu, compiled by GCC gcc (SUSE Linux) 4.5.0 20100604 [gcc-4_5-branch revision 160292], 64-bitCan you let us know the reason for this error?\nRegards,Bhakti", "msg_date": "Sat, 26 Feb 2011 12:30:58 +0530", "msg_from": "Bhakti Ghatkar <[email protected]>", "msg_from_op": true, "msg_subject": "Vacuum problem due to temp tables" }, { "msg_contents": "Bhakti Ghatkar <[email protected]> writes:\n> We were running full vacuum on DB when we encountered the error below;\n\n> INFO: vacuuming \"pg_catalog.pg_index\"\n> *vacuumdb: vacuuming of database \"rpt_production\" failed: ERROR: duplicate\n> key value violates unique constraint \"pg_index_indexrelid_index\"*\n> DETAIL: Key (indexrelid)=(2678) already exists.\n\nThat's pretty bizarre, but what makes you think it has anything to do\nwith temp tables? 
OID 2678 is pg_index_indexrelid_index itself.\nIt looks to me like you must have duplicate rows in pg_index for that\nindex (and maybe others?), and the problem is exposed during vacuum full\nbecause it tries to rebuild the indexes.\n\nCould we see the output of\n\n\tselect ctid,xmin,xmax,* from pg_index where indexrelid in\n\t (select indexrelid from pg_index group by 1 having count(*)>1);\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Feb 2011 12:25:32 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum problem due to temp tables " }, { "msg_contents": "Tom,\n\nThe query which you gave returns me 0 rows.\n\nselect ctid,xmin,xmax,* from pg_index where indexrelid in\n (select indexrelid from pg_index group by 1 having count(*)>1);\n\nRegards,\nBhakti\n\nOn Sat, Feb 26, 2011 at 10:55 PM, Tom Lane <[email protected]> wrote:\n\n> Bhakti Ghatkar <[email protected]> writes:\n> > We were running full vacuum on DB when we encountered the error below;\n>\n> > INFO: vacuuming \"pg_catalog.pg_index\"\n> > *vacuumdb: vacuuming of database \"rpt_production\" failed: ERROR:\n> duplicate\n> > key value violates unique constraint \"pg_index_indexrelid_index\"*\n> > DETAIL: Key (indexrelid)=(2678) already exists.\n>\n> That's pretty bizarre, but what makes you think it has anything to do\n> with temp tables? OID 2678 is pg_index_indexrelid_index itself.\n> It looks to me like you must have duplicate rows in pg_index for that\n> index (and maybe others?), and the problem is exposed during vacuum full\n> because it tries to rebuild the indexes.\n>\n> Could we see the output of\n>\n> select ctid,xmin,xmax,* from pg_index where indexrelid in\n> (select indexrelid from pg_index group by 1 having count(*)>1);\n>\n> regards, tom lane\n>\n\n Tom,The query which you gave returns me 0 rows.\nselect ctid,xmin,xmax,* from pg_index where indexrelid in\n\n         (select indexrelid from pg_index group by 1 having count(*)>1);Regards,\nBhaktiOn Sat, Feb 26, 2011 at 10:55 PM, Tom Lane <[email protected]> wrote:\nBhakti Ghatkar <[email protected]> writes:\n> We were running full vacuum on DB when we encountered the error below;\n\n> INFO:  vacuuming \"pg_catalog.pg_index\"\n> *vacuumdb: vacuuming of database \"rpt_production\" failed: ERROR:  duplicate\n> key value violates unique constraint \"pg_index_indexrelid_index\"*\n> DETAIL:  Key (indexrelid)=(2678) already exists.\n\nThat's pretty bizarre, but what makes you think it has anything to do\nwith temp tables?  
OID 2678 is pg_index_indexrelid_index itself.\nIt looks to me like you must have duplicate rows in pg_index for that\nindex (and maybe others?), and the problem is exposed during vacuum full\nbecause it tries to rebuild the indexes.\n\nCould we see the output of\n\n        select ctid,xmin,xmax,* from pg_index where indexrelid in\n          (select indexrelid from pg_index group by 1 having count(*)>1);\n\n                        regards, tom lane", "msg_date": "Mon, 28 Feb 2011 10:38:36 +0530", "msg_from": "Bhakti Ghatkar <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacuum problem due to temp tables" }, { "msg_contents": "On Mon, Feb 28, 2011 at 12:08 AM, Bhakti Ghatkar <[email protected]> wrote:\n>  Tom,\n> The query which you gave returns me 0 rows.\n> select ctid,xmin,xmax,* from pg_index where indexrelid in\n>          (select indexrelid from pg_index group by 1 having count(*)>1);\n> Regards,\n> Bhakti\n\nHow about just select ctid,xmin,xmax,* from pg_index?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 10:44:33 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum problem due to temp tables" }, { "msg_contents": "Robert,\n\nselect ctid,xmin,xmax,* from pg_index gives 2074 records.\n\nRegards\nVidhya\n\n\n\nOn Wed, Mar 2, 2011 at 9:14 PM, Robert Haas <[email protected]> wrote:\n\n> On Mon, Feb 28, 2011 at 12:08 AM, Bhakti Ghatkar <[email protected]>\n> wrote:\n> > Tom,\n> > The query which you gave returns me 0 rows.\n> > select ctid,xmin,xmax,* from pg_index where indexrelid in\n> > (select indexrelid from pg_index group by 1 having count(*)>1);\n> > Regards,\n> > Bhakti\n>\n> How about just select ctid,xmin,xmax,* from pg_index?\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n> --\n> Sent via pgsql-performance mailing list ([email protected])\n> To make changes to your subscription:\n> http://www.postgresql.org/mailpref/pgsql-performance\n>\n\nRobert,\n \nselect ctid,xmin,xmax,* from pg_index gives 2074 records.\n \nRegards\nVidhya\n \n \nOn Wed, Mar 2, 2011 at 9:14 PM, Robert Haas <[email protected]> wrote:\n\nOn Mon, Feb 28, 2011 at 12:08 AM, Bhakti Ghatkar <[email protected]> wrote:>  Tom,> The query which you gave returns me 0 rows.> select ctid,xmin,xmax,* from pg_index where indexrelid in\n>          (select indexrelid from pg_index group by 1 having count(*)>1);> Regards,> BhaktiHow about just select ctid,xmin,xmax,* from pg_index?--Robert Haas\nEnterpriseDB: http://www.enterprisedb.comThe Enterprise PostgreSQL Company\n\n\n--Sent via pgsql-performance mailing list ([email protected])To make changes to your subscription:http://www.postgresql.org/mailpref/pgsql-performance", "msg_date": "Fri, 4 Mar 2011 15:56:02 +0530", "msg_from": "Vidhya Bondre <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum problem due to temp tables" }, { "msg_contents": "On Fri, Mar 4, 2011 at 5:26 AM, Vidhya Bondre <[email protected]> wrote:\n> select ctid,xmin,xmax,* from pg_index gives 2074 records.\n\nCan you put them in a text file and post them here as an attachment?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Mar 2011 09:14:25 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacuum problem due to temp tables" } ]
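A note on the diagnostic steps above: Robert's request to put the pg_index contents in a text file can be handled entirely from psql. The sketch below is not part of the original thread; the output file path is made up, and the second query only generalizes Tom's duplicate check to another catalog key (pg_class.oid) by way of illustration.

    -- Send query output to a file that can be attached to a reply
    -- (the path here is hypothetical), then switch output back to stdout.
    \o /tmp/pg_index_contents.txt
    SELECT ctid, xmin, xmax, * FROM pg_index;
    \o

    -- The same duplicate-detection pattern works for any unique catalog
    -- key, for example pg_class.oid:
    SELECT oid, count(*)
    FROM pg_class
    GROUP BY oid
    HAVING count(*) > 1;

Both statements are plain reads of the system catalogs, so they are safe to run against the live database while the investigation continues.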
[ { "msg_contents": "Florian Weimer wrote:\n> Kevin Grittner:\n \n>> Well, unless you use timestamp WITH time zone, you might not be\n>> able to do that at all.  There are very few places where timestamp\n>> WITHOUT time zone actually makes sense.\n>\n> I don't think PostgreSQL keeps track of actual time zone values,\n\nTrue -- TIMESTAMP WITH TIME ZONE is always stored in UTC, which makes\nit part of a consistent time stream.  If you use TIMESTAMP WITHOUT\nTIME ZONE, then unless you go to a lot of trouble you have a gap in\nyour time line in the spring and an overlap in autumn.  With enough\nwork you can dance around that, but it's a heck of a lot easier when\nyou can count on the UTC storage.\n \nIt sounds like you've successfully managed to find a way to dance\naround it, so it might not be worth trying to refactor now; but I'd\nbet your code would be simpler and more robust if you worked with the\ndata type intended to represent a moment in the stream of time\ninstead of constantly trying to pin the WITHOUT TIME ZONE to a time\nzone (UTC) explicitly.\n \n-Kevin\n", "msg_date": "Sat, 26 Feb 2011 15:52:44 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Picking out the most recent row using a time\n\t stamp column" } ]
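Kevin's point about a gap in spring and an overlap in autumn is easy to see with a small experiment. The snippet below is an added illustration rather than part of the original exchange; the session time zone and the dates (the end of US daylight saving time in 2011) are arbitrary choices made for the demonstration.

    -- When clocks fall back, two instants an hour apart share the same
    -- local wall-clock reading.
    SET timezone = 'America/New_York';

    SELECT timestamptz '2011-11-06 05:30:00+00' AS first_0130,
           timestamptz '2011-11-06 06:30:00+00' AS second_0130;

    -- Cast to timestamp WITHOUT time zone and the two distinct instants
    -- collapse into the same value, losing their ordering entirely.
    SELECT (timestamptz '2011-11-06 05:30:00+00')::timestamp
         = (timestamptz '2011-11-06 06:30:00+00')::timestamp AS collapsed;

The first query shows both values as 01:30:00 local time (once with a -04 offset, once with -05); the second returns true, which is exactly the overlap that makes WITHOUT TIME ZONE awkward for recording moments in time.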
[ { "msg_contents": "Hi,\n\nWe have installed PostgreSQL9 and setup standby(s). Now we have to test the\nperformance before we migrate all the data from Informix. The PostgreSQL9\nthat we installed is the Linux version from EnterpriseDB which runs on Red\nHat. The documentation on PostgreSQL website shows that we have gmake from\nsource. So for that purpose we downloaded the source into a UBuntu machine\nto gmake and install it. But UBuntu on the other hand complaints that it\ncan't find gmake. So looks like we are stuck here.\n\nWhat should we do?\n(1) Is the a binary for the Regression Test module that can be downloaded\nand ran from the RedHat environment? OR\n(2) If there are no binary, how to proceed if gmake does not run in UBuntu?\n\nPlease assist.\n\nRegards,\n\nSelvam\n\nHi,We have installed PostgreSQL9 and setup standby(s). Now we have to test the performance before we migrate all the data from Informix. The PostgreSQL9 that we installed is the Linux version from EnterpriseDB which runs on Red Hat. The documentation on PostgreSQL website shows that we have gmake from source. So for that purpose we downloaded the source into a UBuntu machine to gmake and install it. But UBuntu on the other hand complaints that it can't find gmake. So looks like we are stuck here.\nWhat should we do?(1) Is the a binary for the Regression Test module that can be downloaded and ran from the RedHat environment? OR(2) If there are no binary, how to proceed if gmake does not run in UBuntu?\nPlease assist.Regards,Selvam", "msg_date": "Mon, 28 Feb 2011 11:26:38 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Performance Test for PostgreSQL9" }, { "msg_contents": "On 28/02/11 16:26, Selva manickaraja wrote:\n>\n>\n> We have installed PostgreSQL9 and setup standby(s). Now we have to \n> test the performance before we migrate all the data from Informix. The \n> PostgreSQL9 that we installed is the Linux version from EnterpriseDB \n> which runs on Red Hat. The documentation on PostgreSQL website shows \n> that we have gmake from source. So for that purpose we downloaded the \n> source into a UBuntu machine to gmake and install it. But UBuntu on \n> the other hand complaints that it can't find gmake. So looks like we \n> are stuck here.\n>\n> What should we do?\n> (1) Is the a binary for the Regression Test module that can be \n> downloaded and ran from the RedHat environment? OR\n> (2) If there are no binary, how to proceed if gmake does not run in \n> UBuntu?\n>\n\n'gmake' means GNU make - in the case of Linux, the binary is simply \n'make'. E.g on my Ubuntu 10.10 system:\n\n$ make --version\nGNU Make 3.81\nCopyright (C) 2006 Free Software Foundation, Inc.\n\nregards\n\nMark\n\n\n\n\n\n\n\n On 28/02/11 16:26, Selva manickaraja wrote:\n \n\n\n\n We have installed PostgreSQL9 and setup standby(s). Now we have to\n test the performance before we migrate all the data from Informix.\n The PostgreSQL9 that we installed is the Linux version from\n EnterpriseDB which runs on Red Hat. The documentation on\n PostgreSQL website shows that we have gmake from source. So for\n that purpose we downloaded the source into a UBuntu machine to\n gmake and install it. But UBuntu on the other hand complaints that\n it can't find gmake. So looks like we are stuck here.\n\n What should we do?\n (1) Is the a binary for the Regression Test module that can be\n downloaded and ran from the RedHat environment? 
OR\n (2) If there are no binary, how to proceed if gmake does not run\n in UBuntu?\n\n\n\n 'gmake' means GNU make - in the case of Linux, the binary is simply\n 'make'. E.g on my Ubuntu 10.10 system:\n\n $ make --version\n GNU Make 3.81\n Copyright (C) 2006  Free Software Foundation, Inc.\n\n regards\n\n Mark", "msg_date": "Mon, 28 Feb 2011 17:18:29 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "As mentioned in the documentation, I went to the directory src/test/regress\nand ran the command. It gives the error\n\nGNUmakefile:15: ../../../src/Makefile.global: No such file or directory\nGNUmakefile:80: /src/Makefile.shlib: No such file or directory\nmake: *** No rule to make target `/src/Makefile.shlib'. Stop.\n\nReally can't make any sense out of this.\n\nAny ideas?\n\n\nOn Mon, Feb 28, 2011 at 12:18 PM, Mark Kirkwood <\[email protected]> wrote:\n\n> On 28/02/11 16:26, Selva manickaraja wrote:\n>\n>\n>\n> We have installed PostgreSQL9 and setup standby(s). Now we have to test the\n> performance before we migrate all the data from Informix. The PostgreSQL9\n> that we installed is the Linux version from EnterpriseDB which runs on Red\n> Hat. The documentation on PostgreSQL website shows that we have gmake from\n> source. So for that purpose we downloaded the source into a UBuntu machine\n> to gmake and install it. But UBuntu on the other hand complaints that it\n> can't find gmake. So looks like we are stuck here.\n>\n> What should we do?\n> (1) Is the a binary for the Regression Test module that can be downloaded\n> and ran from the RedHat environment? OR\n> (2) If there are no binary, how to proceed if gmake does not run in UBuntu?\n>\n>\n> 'gmake' means GNU make - in the case of Linux, the binary is simply 'make'.\n> E.g on my Ubuntu 10.10 system:\n>\n> $ make --version\n> GNU Make 3.81\n> Copyright (C) 2006 Free Software Foundation, Inc.\n>\n> regards\n>\n> Mark\n>\n>\n\nAs mentioned in the documentation, I went to the directory src/test/regress and ran the command. It gives the errorGNUmakefile:15: ../../../src/Makefile.global: No such file or directoryGNUmakefile:80: /src/Makefile.shlib: No such file or directory\nmake: *** No rule to make target `/src/Makefile.shlib'.  Stop.Really can't make any sense out of this.Any ideas?On Mon, Feb 28, 2011 at 12:18 PM, Mark Kirkwood <[email protected]> wrote:\n\n\n On 28/02/11 16:26, Selva manickaraja wrote:\n \n\n\n We have installed PostgreSQL9 and setup standby(s). Now we have to\n test the performance before we migrate all the data from Informix.\n The PostgreSQL9 that we installed is the Linux version from\n EnterpriseDB which runs on Red Hat. The documentation on\n PostgreSQL website shows that we have gmake from source. So for\n that purpose we downloaded the source into a UBuntu machine to\n gmake and install it. But UBuntu on the other hand complaints that\n it can't find gmake. So looks like we are stuck here.\n\n What should we do?\n (1) Is the a binary for the Regression Test module that can be\n downloaded and ran from the RedHat environment? OR\n (2) If there are no binary, how to proceed if gmake does not run\n in UBuntu?\n\n\n\n 'gmake' means GNU make - in the case of Linux, the binary is simply\n 'make'. 
E.g on my Ubuntu 10.10 system:\n\n $ make --version\n GNU Make 3.81\n Copyright (C) 2006  Free Software Foundation, Inc.\n\n regards\n\n Mark", "msg_date": "Mon, 28 Feb 2011 13:09:49 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "On 28/02/11 18:09, Selva manickaraja wrote:\n> As mentioned in the documentation, I went to the directory \n> src/test/regress and ran the command. It gives the error\n>\n> GNUmakefile:15: ../../../src/Makefile.global: No such file or directory\n> GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n> make: *** No rule to make target `/src/Makefile.shlib'. Stop.\n>\n> Really can't make any sense out of this.\n>\n> Any ideas?\n>\n\nYou have not run configure to generate these make files (or you have run \n'make distclean' to destroy them).\n\ngenerally you need to do:\n\n$ ./configure --prefix=your-chosen-install-prefix-here\n$ make\n$ make install\n$ make check\n\nThe last step runs the regression test.\n\nregards\n\nMark\n\nP.s: this discussion really belongs on pg-general rather than \nperformance, as it is about building and installing postgres rather than \nperformance, *when* you have it installed ok, then performance based \ndiscussion here is fine :-)\n\n\n\n\n\n\n\n\n On 28/02/11 18:09, Selva manickaraja wrote:\n \n\n As mentioned in the documentation, I went to the directory\n src/test/regress and ran the command. It gives the error\n\n GNUmakefile:15: ../../../src/Makefile.global: No such file or\n directory\n GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n make: *** No rule to make target `/src/Makefile.shlib'.  Stop.\n\n Really can't make any sense out of this.\n\n Any ideas?\n\n\n\n You have not run configure to generate these make files (or you have\n run 'make distclean' to destroy them).\n\n generally you need to do:\n\n $ ./configure --prefix=your-chosen-install-prefix-here\n $ make \n $ make install\n $ make check \n\n The last step runs the regression test.\n\n regards\n\n Mark\n\n P.s: this discussion really belongs on pg-general rather than\n performance, as it is about building and installing postgres rather\n than performance, *when* you have it installed ok, then performance\n based discussion here is fine :-)", "msg_date": "Mon, 28 Feb 2011 18:53:22 +1300", "msg_from": "Mark Kirkwood <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Yes, true now it looks like pg-general. I started out this discussion\nbecause I couldn't get Performance Testing done. But looks like the\nperformance cannot be done due to the tool cannot be built...:) and all\nevils are getting unleashed from this..\n\nOK, I did exactly to move to the top of the directory and run the\n./configure first. Everything work until the last time it reports error\nnow....\n\n-----------------------------------------------------------------------------------------------\nchecking for -lreadline... no\nchecking for -ledit... no\nconfigure: error: readline library not found\nIf you have readline already installed, see config.log for details on the\nfailure. It is possible the compiler isn't looking in the proper directory.\nUse --without-readline to disable readline support.\n-----------------------------------------------------------------------------------------------\n\nI tried man readline and man edit, there seem to manuals on it. I checked\nthe Synaptic Manager. 
There seem to be a package called readline-common. I\nthen search the net for some assistance. Looks like there was another guy\nwho had a similar problem like me. The URL is\nhttp://ubuntuforums.org/showthread.php?t=1638949\n\nSo I tried installing 'readline' using\n\n->sudo apt-cache search readline\nAND\n->sudo apt-get install libreadline6 libreadline6-dev\n\nUpon answering 'y' to install without verification, I get Bad Gateway error.\n\nInstall these packages without verification [y/N]? y\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev i386\n5.7+20100626-0ubuntu1\n 502 Bad Gateway\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev i386\n6.1-3\n 502 Bad Gateway\nFailed to fetch\nhttp://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb\n502 Bad Gateway\nFailed to fetch\nhttp://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb\n502 Bad Gateway\nE: Unable to fetch some archives, maybe run apt-get update or try with\n--fix-missing?\n\nLooks like I'm stuck at this level. Please assist to breakaway....\n\nThank you.\n\nRegards,\n\nSelvam\n\n\n\n\n\n\n\n\n\n\n\nOn Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <\[email protected]> wrote:\n\n> On 28/02/11 18:09, Selva manickaraja wrote:\n>\n> As mentioned in the documentation, I went to the directory src/test/regress\n> and ran the command. It gives the error\n>\n> GNUmakefile:15: ../../../src/Makefile.global: No such file or directory\n> GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n> make: *** No rule to make target `/src/Makefile.shlib'. Stop.\n>\n> Really can't make any sense out of this.\n>\n> Any ideas?\n>\n>\n> You have not run configure to generate these make files (or you have run\n> 'make distclean' to destroy them).\n>\n> generally you need to do:\n>\n> $ ./configure --prefix=your-chosen-install-prefix-here\n> $ make\n> $ make install\n> $ make check\n>\n> The last step runs the regression test.\n>\n> regards\n>\n> Mark\n>\n> P.s: this discussion really belongs on pg-general rather than performance,\n> as it is about building and installing postgres rather than performance,\n> *when* you have it installed ok, then performance based discussion here is\n> fine :-)\n>\n>\n>\n\nYes, true now it looks like pg-general. I started out this discussion because I couldn't get Performance Testing done. But looks like the performance cannot be done due to the tool cannot be built...:) and all evils are getting unleashed from this..\nOK, I did exactly to move to the top of the directory and run the ./configure first. Everything work until the last time it reports error now....-----------------------------------------------------------------------------------------------\nchecking for -lreadline... nochecking for -ledit... noconfigure: error: readline library not foundIf you have readline already installed, see config.log for details on thefailure.  It is possible the compiler isn't looking in the proper directory.\nUse --without-readline to disable readline support.-----------------------------------------------------------------------------------------------I tried man readline and man edit, there seem to manuals on it. I checked the Synaptic Manager. There seem to be a package called readline-common. I then search the net for some assistance. Looks like there was another guy who had a similar problem like me. 
The URL is http://ubuntuforums.org/showthread.php?t=1638949\nSo I tried installing 'readline' using->sudo apt-cache search readlineAND->sudo apt-get install libreadline6 libreadline6-devUpon answering 'y' to install without verification, I get Bad Gateway error.\nInstall these packages without verification [y/N]? yErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev i386 5.7+20100626-0ubuntu1  502  Bad Gateway\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev i386 6.1-3  502  Bad GatewayFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb  502  Bad Gateway\nFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb  502  Bad Gateway\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?Looks like I'm stuck at this level. Please assist to breakaway....Thank you.Regards,Selvam\nOn Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <[email protected]> wrote:\n\n\n On 28/02/11 18:09, Selva manickaraja wrote:\n \n \n As mentioned in the documentation, I went to the directory\n src/test/regress and ran the command. It gives the error\n\n GNUmakefile:15: ../../../src/Makefile.global: No such file or\n directory\n GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n make: *** No rule to make target `/src/Makefile.shlib'.  Stop.\n\n Really can't make any sense out of this.\n\n Any ideas?\n\n\n\n You have not run configure to generate these make files (or you have\n run 'make distclean' to destroy them).\n\n generally you need to do:\n\n $ ./configure --prefix=your-chosen-install-prefix-here\n $ make \n $ make install\n $ make check \n\n The last step runs the regression test.\n\n regards\n\n Mark\n\n P.s: this discussion really belongs on pg-general rather than\n performance, as it is about building and installing postgres rather\n than performance, *when* you have it installed ok, then performance\n based discussion here is fine :-)", "msg_date": "Mon, 28 Feb 2011 14:39:30 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Monday, February 28, 2011, 7:39:30 AM you wrote:\n\n> OK, I did exactly to move to the top of the directory and run the\n> ./configure first. Everything work until the last time it reports error\n> now....\n\n> -----------------------------------------------------------------------------------------------\n> checking for -lreadline... no\n> checking for -ledit... no\n> configure: error: readline library not found\n> If you have readline already installed, see config.log for details on the\n> failure. It is possible the compiler isn't looking in the proper directory.\n> Use --without-readline to disable readline support.\n> -----------------------------------------------------------------------------------------------\n\nDid you try './configure --without-readline'?\n\n\n-- \nJochen Erwied | home: [email protected] +49-208-38800-18, FAX: -19\nSauerbruchstr. 17 | work: [email protected] +49-2151-7294-24, FAX: -50\nD-45470 Muelheim | mobile: [email protected] +49-173-5404164\n\n", "msg_date": "Mon, 28 Feb 2011 07:42:41 +0100", "msg_from": "Jochen Erwied <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Use apt-get to install\n\nsudo apt-get install libreadline-dev\nand zlib1g-dev. 
Then do a symbolic link for make to gmake.\n\n\n\nOn Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]>wrote:\n\n> Yes, true now it looks like pg-general. I started out this discussion\n> because I couldn't get Performance Testing done. But looks like the\n> performance cannot be done due to the tool cannot be built...:) and all\n> evils are getting unleashed from this..\n>\n> OK, I did exactly to move to the top of the directory and run the\n> ./configure first. Everything work until the last time it reports error\n> now....\n>\n>\n> -----------------------------------------------------------------------------------------------\n> checking for -lreadline... no\n> checking for -ledit... no\n> configure: error: readline library not found\n> If you have readline already installed, see config.log for details on the\n> failure. It is possible the compiler isn't looking in the proper\n> directory.\n> Use --without-readline to disable readline support.\n>\n> -----------------------------------------------------------------------------------------------\n>\n> I tried man readline and man edit, there seem to manuals on it. I checked\n> the Synaptic Manager. There seem to be a package called readline-common. I\n> then search the net for some assistance. Looks like there was another guy\n> who had a similar problem like me. The URL is\n> http://ubuntuforums.org/showthread.php?t=1638949\n>\n> So I tried installing 'readline' using\n>\n> ->sudo apt-cache search readline\n> AND\n> ->sudo apt-get install libreadline6 libreadline6-dev\n>\n> Upon answering 'y' to install without verification, I get Bad Gateway\n> error.\n>\n> Install these packages without verification [y/N]? y\n> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev\n> i386 5.7+20100626-0ubuntu1\n> 502 Bad Gateway\n> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev\n> i386 6.1-3\n> 502 Bad Gateway\n> Failed to fetch\n> http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb\n> 502 Bad Gateway\n> Failed to fetch\n> http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb\n> 502 Bad Gateway\n> E: Unable to fetch some archives, maybe run apt-get update or try with\n> --fix-missing?\n>\n> Looks like I'm stuck at this level. Please assist to breakaway....\n>\n> Thank you.\n>\n> Regards,\n>\n> Selvam\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> On Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <\n> [email protected]> wrote:\n>\n>> On 28/02/11 18:09, Selva manickaraja wrote:\n>>\n>> As mentioned in the documentation, I went to the directory\n>> src/test/regress and ran the command. It gives the error\n>>\n>> GNUmakefile:15: ../../../src/Makefile.global: No such file or directory\n>> GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n>> make: *** No rule to make target `/src/Makefile.shlib'. 
Stop.\n>>\n>> Really can't make any sense out of this.\n>>\n>> Any ideas?\n>>\n>>\n>> You have not run configure to generate these make files (or you have run\n>> 'make distclean' to destroy them).\n>>\n>> generally you need to do:\n>>\n>> $ ./configure --prefix=your-chosen-install-prefix-here\n>> $ make\n>> $ make install\n>> $ make check\n>>\n>> The last step runs the regression test.\n>>\n>> regards\n>>\n>> Mark\n>>\n>> P.s: this discussion really belongs on pg-general rather than performance,\n>> as it is about building and installing postgres rather than performance,\n>> *when* you have it installed ok, then performance based discussion here is\n>> fine :-)\n>>\n>>\n>>\n>\n\nUse apt-get to install sudo apt-get install libreadline-dev and zlib1g-dev.  Then do a symbolic link for make to gmake.On Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]> wrote:\nYes, true now it looks like pg-general. I started out this discussion because I couldn't get Performance Testing done. But looks like the performance cannot be done due to the tool cannot be built...:) and all evils are getting unleashed from this..\nOK, I did exactly to move to the top of the directory and run the ./configure first. Everything work until the last time it reports error now....-----------------------------------------------------------------------------------------------\n\nchecking for -lreadline... nochecking for -ledit... noconfigure: error: readline library not foundIf you have readline already installed, see config.log for details on thefailure.  It is possible the compiler isn't looking in the proper directory.\n\nUse --without-readline to disable readline support.-----------------------------------------------------------------------------------------------I tried man readline and man edit, there seem to manuals on it. I checked the Synaptic Manager. There seem to be a package called readline-common. I then search the net for some assistance. Looks like there was another guy who had a similar problem like me. The URL is http://ubuntuforums.org/showthread.php?t=1638949\nSo I tried installing 'readline' using->sudo apt-cache search readlineAND->sudo apt-get install libreadline6 libreadline6-devUpon answering 'y' to install without verification, I get Bad Gateway error.\nInstall these packages without verification [y/N]? yErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev i386 5.7+20100626-0ubuntu1\n  502  Bad Gateway\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev i386 6.1-3  502  Bad GatewayFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb  502  Bad Gateway\n\nFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb  502  Bad Gateway\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?Looks like I'm stuck at this level. Please assist to breakaway....Thank you.Regards,Selvam\nOn Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <[email protected]> wrote:\n\n\n On 28/02/11 18:09, Selva manickaraja wrote:\n \n \n As mentioned in the documentation, I went to the directory\n src/test/regress and ran the command. It gives the error\n\n GNUmakefile:15: ../../../src/Makefile.global: No such file or\n directory\n GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n make: *** No rule to make target `/src/Makefile.shlib'.  
Stop.\n\n Really can't make any sense out of this.\n\n Any ideas?\n\n\n\n You have not run configure to generate these make files (or you have\n run 'make distclean' to destroy them).\n\n generally you need to do:\n\n $ ./configure --prefix=your-chosen-install-prefix-here\n $ make \n $ make install\n $ make check \n\n The last step runs the regression test.\n\n regards\n\n Mark\n\n P.s: this discussion really belongs on pg-general rather than\n performance, as it is about building and installing postgres rather\n than performance, *when* you have it installed ok, then performance\n based discussion here is fine :-)", "msg_date": "Sun, 27 Feb 2011 23:51:40 -0700", "msg_from": "Melton Low <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Resending. Hit the send button too soon.\n\nUse apt-get to install\n\nsudo apt-get install libreadline-dev\nsudo apt-get install zlib1g-dev\n\nand other dependencies mentioned in the source distribution INSTALL file.\n\nln -s /usr/bin/make /usr/bin/gmake\n\nThis will give you gmake which would already be installed in Ubuntu as make\nbut allow it to be invoke as gmake.\n\nThen follow the configure, make and install steps previously posted.\n\nYou can also get the ppa package from\n\nhttp://www.openscg.org/\n\nCheers,\nMel\n\nOn Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]>wrote:\n\n> Yes, true now it looks like pg-general. I started out this discussion\n> because I couldn't get Performance Testing done. But looks like the\n> performance cannot be done due to the tool cannot be built...:) and all\n> evils are getting unleashed from this..\n>\n> OK, I did exactly to move to the top of the directory and run the\n> ./configure first. Everything work until the last time it reports error\n> now....\n>\n>\n> -----------------------------------------------------------------------------------------------\n> checking for -lreadline... no\n> checking for -ledit... no\n> configure: error: readline library not found\n> If you have readline already installed, see config.log for details on the\n> failure. It is possible the compiler isn't looking in the proper\n> directory.\n> Use --without-readline to disable readline support.\n>\n> -----------------------------------------------------------------------------------------------\n>\n> I tried man readline and man edit, there seem to manuals on it. I checked\n> the Synaptic Manager. There seem to be a package called readline-common. I\n> then search the net for some assistance. Looks like there was another guy\n> who had a similar problem like me. The URL is\n> http://ubuntuforums.org/showthread.php?t=1638949\n>\n> So I tried installing 'readline' using\n>\n> ->sudo apt-cache search readline\n> AND\n> ->sudo apt-get install libreadline6 libreadline6-dev\n>\n> Upon answering 'y' to install without verification, I get Bad Gateway\n> error.\n>\n> Install these packages without verification [y/N]? 
y\n> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev\n> i386 5.7+20100626-0ubuntu1\n> 502 Bad Gateway\n> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev\n> i386 6.1-3\n> 502 Bad Gateway\n> Failed to fetch\n> http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb\n> 502 Bad Gateway\n> Failed to fetch\n> http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb\n> 502 Bad Gateway\n> E: Unable to fetch some archives, maybe run apt-get update or try with\n> --fix-missing?\n>\n> Looks like I'm stuck at this level. Please assist to breakaway....\n>\n> Thank you.\n>\n> Regards,\n>\n> Selvam\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n>\n> On Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <\n> [email protected]> wrote:\n>\n>> On 28/02/11 18:09, Selva manickaraja wrote:\n>>\n>> As mentioned in the documentation, I went to the directory\n>> src/test/regress and ran the command. It gives the error\n>>\n>> GNUmakefile:15: ../../../src/Makefile.global: No such file or directory\n>> GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n>> make: *** No rule to make target `/src/Makefile.shlib'. Stop.\n>>\n>> Really can't make any sense out of this.\n>>\n>> Any ideas?\n>>\n>>\n>> You have not run configure to generate these make files (or you have run\n>> 'make distclean' to destroy them).\n>>\n>> generally you need to do:\n>>\n>> $ ./configure --prefix=your-chosen-install-prefix-here\n>> $ make\n>> $ make install\n>> $ make check\n>>\n>> The last step runs the regression test.\n>>\n>> regards\n>>\n>> Mark\n>>\n>> P.s: this discussion really belongs on pg-general rather than performance,\n>> as it is about building and installing postgres rather than performance,\n>> *when* you have it installed ok, then performance based discussion here is\n>> fine :-)\n>>\n>>\n>>\n>\n\nResending.  Hit the send button too soon.\nUse apt-get to install\nsudo apt-get install libreadline-dev\nsudo apt-get install  zlib1g-dev\nand other dependencies mentioned in the source distribution INSTALL file.\nln -s /usr/bin/make /usr/bin/gmake\nThis will give you gmake which would already be installed in Ubuntu as make but allow it to be invoke as gmake.\nThen follow the configure, make and install steps previously posted.\nYou can also get the ppa package from \nhttp://www.openscg.org/\nCheers, \nMelOn Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]> wrote:\nYes, true now it looks like pg-general. I started out this discussion because I couldn't get Performance Testing done. But looks like the performance cannot be done due to the tool cannot be built...:) and all evils are getting unleashed from this..\nOK, I did exactly to move to the top of the directory and run the ./configure first. Everything work until the last time it reports error now....-----------------------------------------------------------------------------------------------\n\nchecking for -lreadline... nochecking for -ledit... noconfigure: error: readline library not foundIf you have readline already installed, see config.log for details on thefailure.  It is possible the compiler isn't looking in the proper directory.\n\nUse --without-readline to disable readline support.-----------------------------------------------------------------------------------------------I tried man readline and man edit, there seem to manuals on it. I checked the Synaptic Manager. There seem to be a package called readline-common. 
I then search the net for some assistance. Looks like there was another guy who had a similar problem like me. The URL is http://ubuntuforums.org/showthread.php?t=1638949\nSo I tried installing 'readline' using->sudo apt-cache search readlineAND->sudo apt-get install libreadline6 libreadline6-devUpon answering 'y' to install without verification, I get Bad Gateway error.\nInstall these packages without verification [y/N]? yErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev i386 5.7+20100626-0ubuntu1\n  502  Bad Gateway\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev i386 6.1-3  502  Bad GatewayFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb  502  Bad Gateway\n\nFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb  502  Bad Gateway\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?Looks like I'm stuck at this level. Please assist to breakaway....Thank you.Regards,Selvam\nOn Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <[email protected]> wrote:\n\n\n On 28/02/11 18:09, Selva manickaraja wrote:\n \n \n As mentioned in the documentation, I went to the directory\n src/test/regress and ran the command. It gives the error\n\n GNUmakefile:15: ../../../src/Makefile.global: No such file or\n directory\n GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n make: *** No rule to make target `/src/Makefile.shlib'.  Stop.\n\n Really can't make any sense out of this.\n\n Any ideas?\n\n\n\n You have not run configure to generate these make files (or you have\n run 'make distclean' to destroy them).\n\n generally you need to do:\n\n $ ./configure --prefix=your-chosen-install-prefix-here\n $ make \n $ make install\n $ make check \n\n The last step runs the regression test.\n\n regards\n\n Mark\n\n P.s: this discussion really belongs on pg-general rather than\n performance, as it is about building and installing postgres rather\n than performance, *when* you have it installed ok, then performance\n based discussion here is fine :-)", "msg_date": "Sun, 27 Feb 2011 23:57:38 -0700", "msg_from": "Melton Low <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "OK, somehow I got these modules installed. Finally I successfully built and\ninstalled PostgreSQL! I must thank you guys so much for helping.\n\nNow coming to the real issue of the matter. According to the documentation\nthe \"gmake installcheck\" can be run in various directories. However it seem\nto be only local. Can these tests be run from local but to stress test a\ndatabase on a remote machine. This way I don't need to go on building\npostgresql from source in every new db server. I will wait for your reply.\n\nThank you.\n\nRegards,\n\nSelvam\n\nOn Mon, Feb 28, 2011 at 2:57 PM, Melton Low <[email protected]> wrote:\n\n> Resending. 
Hit the send button too soon.\n>\n> Use apt-get to install\n>\n> sudo apt-get install libreadline-dev\n> sudo apt-get install zlib1g-dev\n>\n> and other dependencies mentioned in the source distribution INSTALL file.\n>\n> ln -s /usr/bin/make /usr/bin/gmake\n>\n> This will give you gmake which would already be installed in Ubuntu as make\n> but allow it to be invoke as gmake.\n>\n> Then follow the configure, make and install steps previously posted.\n>\n> You can also get the ppa package from\n>\n> http://www.openscg.org/\n>\n> Cheers,\n> Mel\n>\n> On Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]>wrote:\n>\n>> Yes, true now it looks like pg-general. I started out this discussion\n>> because I couldn't get Performance Testing done. But looks like the\n>> performance cannot be done due to the tool cannot be built...:) and all\n>> evils are getting unleashed from this..\n>>\n>> OK, I did exactly to move to the top of the directory and run the\n>> ./configure first. Everything work until the last time it reports error\n>> now....\n>>\n>>\n>> -----------------------------------------------------------------------------------------------\n>> checking for -lreadline... no\n>> checking for -ledit... no\n>> configure: error: readline library not found\n>> If you have readline already installed, see config.log for details on the\n>> failure. It is possible the compiler isn't looking in the proper\n>> directory.\n>> Use --without-readline to disable readline support.\n>>\n>> -----------------------------------------------------------------------------------------------\n>>\n>> I tried man readline and man edit, there seem to manuals on it. I checked\n>> the Synaptic Manager. There seem to be a package called readline-common. I\n>> then search the net for some assistance. Looks like there was another guy\n>> who had a similar problem like me. The URL is\n>> http://ubuntuforums.org/showthread.php?t=1638949\n>>\n>> So I tried installing 'readline' using\n>>\n>> ->sudo apt-cache search readline\n>> AND\n>> ->sudo apt-get install libreadline6 libreadline6-dev\n>>\n>> Upon answering 'y' to install without verification, I get Bad Gateway\n>> error.\n>>\n>> Install these packages without verification [y/N]? y\n>> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev\n>> i386 5.7+20100626-0ubuntu1\n>> 502 Bad Gateway\n>> Err http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev\n>> i386 6.1-3\n>> 502 Bad Gateway\n>> Failed to fetch\n>> http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb\n>> 502 Bad Gateway\n>> Failed to fetch\n>> http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb\n>> 502 Bad Gateway\n>> E: Unable to fetch some archives, maybe run apt-get update or try with\n>> --fix-missing?\n>>\n>> Looks like I'm stuck at this level. Please assist to breakaway....\n>>\n>> Thank you.\n>>\n>> Regards,\n>>\n>> Selvam\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> On Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <\n>> [email protected]> wrote:\n>>\n>>> On 28/02/11 18:09, Selva manickaraja wrote:\n>>>\n>>> As mentioned in the documentation, I went to the directory\n>>> src/test/regress and ran the command. It gives the error\n>>>\n>>> GNUmakefile:15: ../../../src/Makefile.global: No such file or directory\n>>> GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n>>> make: *** No rule to make target `/src/Makefile.shlib'. 
Stop.\n>>>\n>>> Really can't make any sense out of this.\n>>>\n>>> Any ideas?\n>>>\n>>>\n>>> You have not run configure to generate these make files (or you have run\n>>> 'make distclean' to destroy them).\n>>>\n>>> generally you need to do:\n>>>\n>>> $ ./configure --prefix=your-chosen-install-prefix-here\n>>> $ make\n>>> $ make install\n>>> $ make check\n>>>\n>>> The last step runs the regression test.\n>>>\n>>> regards\n>>>\n>>> Mark\n>>>\n>>> P.s: this discussion really belongs on pg-general rather than\n>>> performance, as it is about building and installing postgres rather than\n>>> performance, *when* you have it installed ok, then performance based\n>>> discussion here is fine :-)\n>>>\n>>>\n>>>\n>>\n>\n\nOK, somehow I got these modules installed. Finally I successfully built and installed PostgreSQL! I must thank you guys so much for helping.Now coming to the real issue of the matter. According to the documentation the \"gmake installcheck\" can be run in various directories. However it seem to be only local. Can these tests be run from local but to stress test a database on a remote machine. This way I don't need to go on building postgresql from source in every new db server. I will wait for your reply.\nThank you.Regards,SelvamOn Mon, Feb 28, 2011 at 2:57 PM, Melton Low <[email protected]> wrote:\n\nResending.  Hit the send button too soon.\nUse apt-get to install\nsudo apt-get install libreadline-dev\nsudo apt-get install  zlib1g-dev\nand other dependencies mentioned in the source distribution INSTALL file.\nln -s /usr/bin/make /usr/bin/gmake\nThis will give you gmake which would already be installed in Ubuntu as make but allow it to be invoke as gmake.\nThen follow the configure, make and install steps previously posted.\nYou can also get the ppa package from \nhttp://www.openscg.org/\nCheers, \nMelOn Sun, Feb 27, 2011 at 11:39 PM, Selva manickaraja <[email protected]> wrote:\nYes, true now it looks like pg-general. I started out this discussion because I couldn't get Performance Testing done. But looks like the performance cannot be done due to the tool cannot be built...:) and all evils are getting unleashed from this..\nOK, I did exactly to move to the top of the directory and run the ./configure first. Everything work until the last time it reports error now....-----------------------------------------------------------------------------------------------\n\n\nchecking for -lreadline... nochecking for -ledit... noconfigure: error: readline library not foundIf you have readline already installed, see config.log for details on thefailure.  It is possible the compiler isn't looking in the proper directory.\n\n\nUse --without-readline to disable readline support.-----------------------------------------------------------------------------------------------I tried man readline and man edit, there seem to manuals on it. I checked the Synaptic Manager. There seem to be a package called readline-common. I then search the net for some assistance. Looks like there was another guy who had a similar problem like me. The URL is http://ubuntuforums.org/showthread.php?t=1638949\nSo I tried installing 'readline' using->sudo apt-cache search readlineAND->sudo apt-get install libreadline6 libreadline6-devUpon answering 'y' to install without verification, I get Bad Gateway error.\nInstall these packages without verification [y/N]? 
yErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libncurses5-dev i386 5.7+20100626-0ubuntu1\n\n  502  Bad Gateway\nErr http://my.archive.ubuntu.com/ubuntu/ maverick/main libreadline6-dev i386 6.1-3  502  Bad GatewayFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/n/ncurses/libncurses5-dev_5.7+20100626-0ubuntu1_i386.deb  502  Bad Gateway\n\n\nFailed to fetch http://my.archive.ubuntu.com/ubuntu/pool/main/r/readline6/libreadline6-dev_6.1-3_i386.deb  502  Bad Gateway\n\n\nE: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?Looks like I'm stuck at this level. Please assist to breakaway....Thank you.Regards,Selvam\nOn Mon, Feb 28, 2011 at 1:53 PM, Mark Kirkwood <[email protected]> wrote:\n\n\n On 28/02/11 18:09, Selva manickaraja wrote:\n \n \n As mentioned in the documentation, I went to the directory\n src/test/regress and ran the command. It gives the error\n\n GNUmakefile:15: ../../../src/Makefile.global: No such file or\n directory\n GNUmakefile:80: /src/Makefile.shlib: No such file or directory\n make: *** No rule to make target `/src/Makefile.shlib'.  Stop.\n\n Really can't make any sense out of this.\n\n Any ideas?\n\n\n\n You have not run configure to generate these make files (or you have\n run 'make distclean' to destroy them).\n\n generally you need to do:\n\n $ ./configure --prefix=your-chosen-install-prefix-here\n $ make \n $ make install\n $ make check \n\n The last step runs the regression test.\n\n regards\n\n Mark\n\n P.s: this discussion really belongs on pg-general rather than\n performance, as it is about building and installing postgres rather\n than performance, *when* you have it installed ok, then performance\n based discussion here is fine :-)", "msg_date": "Mon, 28 Feb 2011 17:10:56 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "On 28.02.2011 11:10, Selva manickaraja wrote:\n> OK, somehow I got these modules installed. Finally I successfully built and\n> installed PostgreSQL! I must thank you guys so much for helping.\n>\n> Now coming to the real issue of the matter. According to the documentation\n> the \"gmake installcheck\" can be run in various directories. However it seem\n> to be only local. Can these tests be run from local but to stress test a\n> database on a remote machine. This way I don't need to go on building\n> postgresql from source in every new db server. I will wait for your reply.\n\nTry\n\nPGHOST=servername make installcheck\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 28 Feb 2011 13:30:48 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "On Sun, Feb 27, 2011 at 10:26 PM, Selva manickaraja <[email protected]> wrote:\n> We have installed PostgreSQL9 and setup standby(s). Now we have to test the\n> performance before we migrate all the data from Informix. The PostgreSQL9\n> that we installed is the Linux version from EnterpriseDB which runs on Red\n> Hat. The documentation on PostgreSQL website shows that we have gmake from\n> source. So for that purpose we downloaded the source into a UBuntu machine\n> to gmake and install it. But UBuntu on the other hand complaints that it\n> can't find gmake. So looks like we are stuck here.\n\nI am a bit confused. 
Why would you need to install from source\ninstead of using an installer (either from EnterpriseDB or installing\nvia apt-get)?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 13:19:44 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "On Wed, 2011-03-02 at 13:19 -0500, Robert Haas wrote:\n> On Sun, Feb 27, 2011 at 10:26 PM, Selva manickaraja <[email protected]> wrote:\n> > We have installed PostgreSQL9 and setup standby(s). Now we have to test the\n> > performance before we migrate all the data from Informix. The PostgreSQL9\n> > that we installed is the Linux version from EnterpriseDB which runs on Red\n> > Hat. The documentation on PostgreSQL website shows that we have gmake from\n> > source. So for that purpose we downloaded the source into a UBuntu machine\n> > to gmake and install it. But UBuntu on the other hand complaints that it\n> > can't find gmake. So looks like we are stuck here.\n> \n> I am a bit confused. Why would you need to install from source\n> instead of using an installer (either from EnterpriseDB or installing\n> via apt-get)?\n\nTo be rude but honest. If you can't solve that problem you really should\ncontract with someone to help you with your performance tests because\nyou are not going to be able to adequately tune PostgreSQL for a proper\ntest.\n\nThat said, the reason you can't find make is that you don't have the\nproper development tools installed.\n\n+1 to what Robert said.\n\nSincerely,\n\nJoshua D. Drake\n\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Wed, 02 Mar 2011 10:29:55 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "I followed the advice from Melton Low and was able to make and make-install.\n\nThe reason I had to compile is because there are no binaries for regression\ntests and the documentation requires us to make-install.\n\n\nOn Thu, Mar 3, 2011 at 2:29 AM, Joshua D. Drake <[email protected]>wrote:\n\n> On Wed, 2011-03-02 at 13:19 -0500, Robert Haas wrote:\n> > On Sun, Feb 27, 2011 at 10:26 PM, Selva manickaraja <[email protected]>\n> wrote:\n> > > We have installed PostgreSQL9 and setup standby(s). Now we have to test\n> the\n> > > performance before we migrate all the data from Informix. The\n> PostgreSQL9\n> > > that we installed is the Linux version from EnterpriseDB which runs on\n> Red\n> > > Hat. The documentation on PostgreSQL website shows that we have gmake\n> from\n> > > source. So for that purpose we downloaded the source into a UBuntu\n> machine\n> > > to gmake and install it. But UBuntu on the other hand complaints that\n> it\n> > > can't find gmake. So looks like we are stuck here.\n> >\n> > I am a bit confused. Why would you need to install from source\n> > instead of using an installer (either from EnterpriseDB or installing\n> > via apt-get)?\n>\n> To be rude but honest. 
If you can't solve that problem you really should\n> contract with someone to help you with your performance tests because\n> you are not going to be able to adequately tune PostgreSQL for a proper\n> test.\n>\n> That said, the reason you can't find make is that you don't have the\n> proper development tools installed.\n>\n> +1 to what Robert said.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n> --\n> PostgreSQL.org Major Contributor\n> Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\n> Consulting, Training, Support, Custom Development, Engineering\n> http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n>\n>\n\nI followed the advice from Melton Low and was able to make and make-install.\nThe reason I had to compile is because there are no binaries for regression tests and the documentation requires us to make-install.\nOn Thu, Mar 3, 2011 at 2:29 AM, Joshua D. Drake <[email protected]> wrote:\nOn Wed, 2011-03-02 at 13:19 -0500, Robert Haas wrote:\n> On Sun, Feb 27, 2011 at 10:26 PM, Selva manickaraja <[email protected]> wrote:\n> > We have installed PostgreSQL9 and setup standby(s). Now we have to test the\n> > performance before we migrate all the data from Informix. The PostgreSQL9\n> > that we installed is the Linux version from EnterpriseDB which runs on Red\n> > Hat. The documentation on PostgreSQL website shows that we have gmake from\n> > source. So for that purpose we downloaded the source into a UBuntu machine\n> > to gmake and install it. But UBuntu on the other hand complaints that it\n> > can't find gmake. So looks like we are stuck here.\n>\n> I am a bit confused.  Why would you need to install from source\n> instead of using an installer (either from EnterpriseDB or installing\n> via apt-get)?\n\nTo be rude but honest. If you can't solve that problem you really should\ncontract with someone to help you with your performance tests because\nyou are not going to be able to adequately tune PostgreSQL for a proper\ntest.\n\nThat said, the reason you can't find make is that you don't have the\nproper development tools installed.\n\n+1 to what Robert said.\n\nSincerely,\n\nJoshua D. Drake\n\n\n--\nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt", "msg_date": "Thu, 3 Mar 2011 09:46:23 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Selva manickaraja wrote:\n> The reason I had to compile is because there are no binaries for \n> regression tests and the documentation requires us to make-install.\n\nThe reason for that is there is little reason for users of the database \nto ever run those. Most (possibly all) of the the packaged builds will \nrun the regression tests as part of the build process. So the odds of \nyou finding an error with them is pretty low. \n\nI'm getting the impression you think that the regression tests are \nsomehow useful for performance testing. They aren't; they are strictly \ncode quality tests. 
The only performance testing tool that comes with \nthe database is pgbench, which is a tricky tool to get useful test \nresults from.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n\n\n\n\n\n\n\nSelva manickaraja wrote:\nThe reason I had to compile is because\nthere are no binaries for regression tests and the documentation\nrequires us to make-install.\n\n\nThe reason for that is there is little reason for users of the database\nto ever run those.  Most (possibly all) of the the packaged builds will\nrun the regression tests as part of the build process.  So the odds of\nyou finding an error with them is pretty low.  \n\nI'm getting the impression you think that the regression tests are\nsomehow useful for performance testing.  They aren't; they are strictly\ncode quality tests.  The only performance testing tool that comes with\nthe database is pgbench, which is a tricky tool to get useful test\nresults from.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Wed, 02 Mar 2011 23:01:37 -0500", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "Thanks for the enlightenment. I will then look into other tools that help\nwith performance testing. Is pgbench really useful? We need to produce the\nreports and statistics to our management as we are planning to migrate one\nsystem at a time from Informix. This is to ensure that we do not overload\nthe database with all the systems eventually. So can pgbench help us here?\n\nOn Thu, Mar 3, 2011 at 12:01 PM, Greg Smith <[email protected]> wrote:\n\n> Selva manickaraja wrote:\n>\n> The reason I had to compile is because there are no binaries for regression\n> tests and the documentation requires us to make-install.\n>\n>\n> The reason for that is there is little reason for users of the database to\n> ever run those. Most (possibly all) of the the packaged builds will run the\n> regression tests as part of the build process. So the odds of you finding\n> an error with them is pretty low.\n>\n> I'm getting the impression you think that the regression tests are somehow\n> useful for performance testing. They aren't; they are strictly code quality\n> tests. The only performance testing tool that comes with the database is\n> pgbench, which is a tricky tool to get useful test results from.\n>\n> --\n> Greg Smith 2ndQuadrant US [email protected] Baltimore, MD\n> PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n> \"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books\n>\n>\n\nThanks for the enlightenment. I will then look into other tools that help with performance testing. Is pgbench really useful? We need to produce the reports and statistics to our management as we are planning to migrate one system at a time from Informix. This is to ensure that we do not overload the database with all the systems eventually. 
So can pgbench help us here?\nOn Thu, Mar 3, 2011 at 12:01 PM, Greg Smith <[email protected]> wrote:\n\nSelva manickaraja wrote:\nThe reason I had to compile is because\nthere are no binaries for regression tests and the documentation\nrequires us to make-install.\n\n\nThe reason for that is there is little reason for users of the database\nto ever run those.  Most (possibly all) of the the packaged builds will\nrun the regression tests as part of the build process.  So the odds of\nyou finding an error with them is pretty low.  \n\nI'm getting the impression you think that the regression tests are\nsomehow useful for performance testing.  They aren't; they are strictly\ncode quality tests.  The only performance testing tool that comes with\nthe database is pgbench, which is a tricky tool to get useful test\nresults from.\n\n-- \nGreg Smith 2ndQuadrant US [email protected] Baltimore, MD\nPostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us\n\"PostgreSQL 9.0 High Performance\": http://www.2ndQuadrant.com/books", "msg_date": "Thu, 3 Mar 2011 13:16:38 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance Test for PostgreSQL9" }, { "msg_contents": "On Thu, 2011-03-03 at 13:16 +0800, Selva manickaraja wrote:\n> Thanks for the enlightenment. I will then look into other tools that\n> help\n> with performance testing. Is pgbench really useful? We need to produce\n> the\n> reports and statistics to our management as we are planning to migrate\n> one\n> system at a time from Informix. This is to ensure that we do not\n> overload\n> the database with all the systems eventually. So can pgbench help us\n> here? \n\nIf you have an existing system, you best bet is to migrate your schema\nand a data snapshot from that system to PostgreSQL. Then take a portion\nof your more expensive queries and port them to PostgreSQL and compare\nfrom there.\n\nA vanilla PgBench or other workload manager will do nothing to help you\nwith a real world metric to provide to those that wear ties.\n\nJD\n\n-- \nPostgreSQL.org Major Contributor\nCommand Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579\nConsulting, Training, Support, Custom Development, Engineering\nhttp://twitter.com/cmdpromptinc | http://identi.ca/commandprompt\n\n", "msg_date": "Wed, 02 Mar 2011 21:20:24 -0800", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance Test for PostgreSQL9" } ]
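Tying together the two pieces of advice in the thread above (pgbench is the only benchmarking tool that ships with PostgreSQL, but a vanilla run says little about a real workload), pgbench can also replay your own statements via a custom script. The sketch below is illustrative only: the table, column and file names are placeholders rather than anything from the thread, and the client count, thread count and duration have to be sized for the actual hardware.

    -- replay.sql: one representative query ported from the existing application
    -- (placeholder table/column names; \setrandom draws a random id per transaction)
    \setrandom cust_id 1 1000000
    SELECT * FROM orders WHERE customer_id = :cust_id;

Run it with something like:

    pgbench -n -c 16 -j 2 -T 300 -f replay.sql yourdb

The -n switch skips vacuuming the pgbench_* tables (which don't exist when only custom scripts are used), and the tps figure printed at the end gives a number that can be compared before and after configuration changes, or against an Informix baseline for the same statement mix.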
[ { "msg_contents": "Greetings to all,\n\nI use to run below query on my Postgres Database Server very often :\n\nselect \nm.doc_category,p.heading,l.lat,l.lon,p.crawled_page_url,p.category,p.dt_stamp,p.crawled_page_id,p.content \nfrom loc_context_demo l,page_content_demo p,metadata_demo m\nwhere l.source_id=p.crawled_page_id and m.doc_id=l.source_id and \nst_within(l.geom,GeomFromText('POLYGON((26.493618940784085 \n94.73526463903742,26.493618940784085 94.73526463903742,26.49414347324995 \n94.73609294031571,25.27305797085655 91.2111565730387,22.577266399435437 \n91.25956595906088,21.786005217742066 93.8817223698167,24.890143541531135 \n95.16269696276306,24.89070526076922 95.16324228285777,24.89070526076922 \n95.16324228285777,26.493618940784085 94.73526463903742))',4326)) and \nm.doc_category='Terrorism' order by p.dt_stamp desc;\n\n\nI think I need to optimized above query for fast execution as I can. Any \nsuggestions are always welcome :\n\nExplain output :\n\n Sort (cost=160385.28..160386.32 rows=418 width=1316)\n Sort Key: p.dt_stamp\n -> Hash Join (cost=85558.37..160367.08 rows=418 width=1316)\n Hash Cond: (p.crawled_page_id = l.source_id)\n -> Seq Scan on page_content_demo p (cost=0.00..73344.20 \nrows=389420 width=1251)\n -> Hash (cost=85553.92..85553.92 rows=356 width=73)\n -> Hash Join (cost=37301.92..85553.92 rows=356 width=73)\n Hash Cond: (l.source_id = m.doc_id)\n -> Seq Scan on loc_context_demo l \n(cost=0.00..48108.71 rows=356 width=18)\n Filter: ((geom && \n'0103000020E6100000010000000A000000935A97CF5D7E3A408BA46A930EAF5740935A97CF5D7E3A408BA46A930EAF5740F023C92F807E3\nA403D5E90251CAF5740B2BD8E20E745394059E2DB9683CD5640A6A712BBC793364091548ABA9CD0564002B050A337C93540EBA0A9236E785740319F7772E0E33840758E85A069CA574003618D4205\nE43840B48CC28F72CA574003618D4205E43840B48CC28F72CA5740935A97CF5D7E3A408BA46A930EAF5740'::geometry) \nAND _st_within(geom, '0103000020E6100000010000000A00000093\n5A97CF5D7E3A408BA46A930EAF5740935A97CF5D7E3A408BA46A930EAF5740F023C92F807E3A403D5E90251CAF5740B2BD8E20E745394059E2DB9683CD5640A6A712BBC793364091548ABA9CD0564\n002B050A337C93540EBA0A9236E785740319F7772E0E33840758E85A069CA574003618D4205E43840B48CC28F72CA574003618D4205E43840B48CC28F72CA5740935A97CF5D7E3A408BA46A930EAF\n5740'::geometry))\n -> Hash (cost=37186.32..37186.32 rows=9248 width=55)\n -> Seq Scan on metadata_demo m \n(cost=0.00..37186.32 rows=9248 width=55)\n Filter: (doc_category = \n'Terrorism'::bpchar)\n(13 \nrows) \n\n \nExplain Ananlyze Output \n:- \n\n \nQUERY PLAN \n \n\n \n\n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n------------------\n Sort (cost=160385.28..160386.32 rows=418 width=1316) (actual \ntime=1210.025..1210.041 rows=21 loops=1)\n Sort Key: p.dt_stamp\n Sort Method: quicksort Memory: 65kB\n -> Hash Join (cost=85558.37..160367.08 
rows=418 width=1316) (actual \ntime=619.985..1209.821 rows=21 loops=1)\n Hash Cond: (p.crawled_page_id = l.source_id)\n -> Seq Scan on page_content_demo p (cost=0.00..73344.20 \nrows=389420 width=1251) (actual time=0.006..290.829 rows=362293 loops=1)\n -> Hash (cost=85553.92..85553.92 rows=356 width=73) (actual \ntime=507.942..507.942 rows=21 loops=1)\n -> Hash Join (cost=37301.92..85553.92 rows=356 \nwidth=73) (actual time=215.384..507.903 rows=21 loops=1)\n Hash Cond: (l.source_id = m.doc_id)\n -> Seq Scan on loc_context_demo l \n(cost=0.00..48108.71 rows=356 width=18) (actual time=0.986..316.129 \nrows=816 loops=1)\n Filter: ((geom && \n'0103000020E6100000010000000A000000935A97CF5D7E3A408BA46A930EAF5740935A97CF5D7E3A408BA46A930EAF5740F023C92F807E3\nA403D5E90251CAF5740B2BD8E20E745394059E2DB9683CD5640A6A712BBC793364091548ABA9CD0564002B050A337C93540EBA0A9236E785740319F7772E0E33840758E85A069CA574003618D4205\nE43840B48CC28F72CA574003618D4205E43840B48CC28F72CA5740935A97CF5D7E3A408BA46A930EAF5740'::geometry) \nAND _st_within(geom, '0103000020E6100000010000000A00000093\n5A97CF5D7E3A408BA46A930EAF5740935A97CF5D7E3A408BA46A930EAF5740F023C92F807E3A403D5E90251CAF5740B2BD8E20E745394059E2DB9683CD5640A6A712BBC793364091548ABA9CD0564\n002B050A337C93540EBA0A9236E785740319F7772E0E33840758E85A069CA574003618D4205E43840B48CC28F72CA574003618D4205E43840B48CC28F72CA5740935A97CF5D7E3A408BA46A930EAF\n5740'::geometry))\n -> Hash (cost=37186.32..37186.32 rows=9248 \nwidth=55) (actual time=190.396..190.396 rows=9016 loops=1)\n -> Seq Scan on metadata_demo m \n(cost=0.00..37186.32 rows=9248 width=55) (actual time=38.895..183.396 \nrows=9016 loops=1)\n Filter: (doc_category = \n'Terrorism'::bpchar)\n Total runtime: 1210.112 ms\n(15 rows)\n\n\n\nBest regards,\nAdarsh Sharma\n", "msg_date": "Mon, 28 Feb 2011 12:53:08 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Is Query need to be optimized" }, { "msg_contents": "do you have any indexes on that table ?\n", "msg_date": "Thu, 3 Mar 2011 09:07:10 +0000", "msg_from": "=?UTF-8?Q?Grzegorz_Ja=C5=9Bkiewicz?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is Query need to be optimized" } ]
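Following up on the question about indexes just above: both EXPLAIN outputs show sequential scans on all three tables, so if the join and filter columns are not indexed yet, something along these lines is the usual starting point. The table and column names are taken from the posted query, the index names are arbitrary, and whether each index actually gets used depends on selectivity, so re-check with EXPLAIN ANALYZE afterwards.

    -- GiST index so the && / _st_within predicate can use an index scan
    CREATE INDEX idx_loc_context_demo_geom ON loc_context_demo USING gist (geom);

    -- btree indexes on the join and filter columns
    CREATE INDEX idx_loc_context_demo_source_id ON loc_context_demo (source_id);
    CREATE INDEX idx_metadata_demo_doc_category ON metadata_demo (doc_category);
    CREATE INDEX idx_page_content_demo_page_id  ON page_content_demo (crawled_page_id);

    ANALYZE loc_context_demo;
    ANALYZE metadata_demo;
    ANALYZE page_content_demo;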
[ { "msg_contents": "Hi,\n\nI've been facing a very large (more than 15 seconds) planning time in a \npartitioned configuration. The amount of partitions wasn't completely crazy, \naround 500, not in the thousands. The problem was that there were nearly 1000 \ncolumns in the parent table (very special use case, there is a reason for this \napplication for having these many columns). The check constraint was extremely \nsimple (for each child, 1 column = 1 constant, always the same column).\n\nAs I was surprised by this very large planning time, I have been trying to \nstudy the variation of planning time against several parameters:\n- number of columns\n- number of children tables\n- constraint exclusion's value (partition or off)\n\nWhat (I think) I measured is that the planning time seems to be O(n^2) for the \nnumber of columns, and O(n^2) for the number of children tables.\n\nConstraint exclusion had a limited impact on planning time (it added between \n20% and 100% planning time when there were many columns).\n\nI'd like to know if this is a known behavior ? And if I could mitigate it \nsomehow ?\n\n\n\nAttached is a zipped csv file containing the result of the tests for \nconstraint_exclusion=partition, for children from 100 to 1000 in steps of 100, \nand for columns from 10 to 1590 in steps of 20.\n\nA few values are a bit off-chart as this was done on my personal computer, and \nit was sometimes used for other things at the same time.\n\nThe tests were done with a parent table made of only integer columns, and \nevery children having a check (col0=id_of_child) constraint (I can also \nprovide the perl script, of course).\n\nThe test query was \"SELECT * FROM parent_table WHERE col0=id_of_child_0\". \nReplacing it with \"SELECT col0 FROM parent_table WHERE col0=id_of_child_0\" \ndidn't change the planning time significantly: it was around 5% lower, but \nstill O(n^2). This query returned nothing (every partition is empty).\n\nI've also done an openoffice spreadsheet graphing all this, but as it's 130kB \nI won't send it before being told to do so :)\n\nThe computer running the tests was an Intel core i7 870. Postgresql was 9.0.3.\n\nAnything else I could add ?\n\nCheers", "msg_date": "Mon, 28 Feb 2011 10:38:04 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "inheritance: planning time vs children number vs column number" }, { "msg_contents": "On 28.02.2011 11:38, Marc Cousin wrote:\n> I've been facing a very large (more than 15 seconds) planning time in a\n> partitioned configuration. The amount of partitions wasn't completely crazy,\n> around 500, not in the thousands. The problem was that there were nearly 1000\n> columns in the parent table (very special use case, there is a reason for this\n> application for having these many columns). 
The check constraint was extremely\n> simple (for each child, 1 column = 1 constant, always the same column).\n>\n> As I was surprised by this very large planning time, I have been trying to\n> study the variation of planning time against several parameters:\n> - number of columns\n> - number of children tables\n> - constraint exclusion's value (partition or off)\n>\n> What (I think) I measured is that the planning time seems to be O(n^2) for the\n> number of columns, and O(n^2) for the number of children tables.\n>\n> Constraint exclusion had a limited impact on planning time (it added between\n> 20% and 100% planning time when there were many columns).\n\nTesting here with a table with 1000 columns and 100 partitions, about \n80% of the planning time is looking up the statistics on attribute \nwidth, to calculate average tuple width. I don't see O(n^2) behavior, \nthough, it seems linear.\n\n> I'd like to know if this is a known behavior ? And if I could mitigate it\n> somehow ?\n\nI'm out of ideas on how to make it faster, I'm afraid.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 28 Feb 2011 14:57:45 +0200", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance: planning time vs children number vs column\n number" }, { "msg_contents": "The Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote :\n> On 28.02.2011 11:38, Marc Cousin wrote:\n> > I've been facing a very large (more than 15 seconds) planning time in a\n> > partitioned configuration. The amount of partitions wasn't completely\n> > crazy, around 500, not in the thousands. The problem was that there were\n> > nearly 1000 columns in the parent table (very special use case, there is\n> > a reason for this application for having these many columns). The check\n> > constraint was extremely simple (for each child, 1 column = 1 constant,\n> > always the same column).\n> > \n> > As I was surprised by this very large planning time, I have been trying\n> > to study the variation of planning time against several parameters: -\n> > number of columns\n> > - number of children tables\n> > - constraint exclusion's value (partition or off)\n> > \n> > What (I think) I measured is that the planning time seems to be O(n^2)\n> > for the number of columns, and O(n^2) for the number of children tables.\n> > \n> > Constraint exclusion had a limited impact on planning time (it added\n> > between 20% and 100% planning time when there were many columns).\n> \n> Testing here with a table with 1000 columns and 100 partitions, about\n> 80% of the planning time is looking up the statistics on attribute\n> width, to calculate average tuple width. I don't see O(n^2) behavior,\n> though, it seems linear.\n\nIt is only based on experimentation, for my part, of course… \n\nIf you measure the planning time, modifying either the columns or the \npartitions number, the square root of the planning time is almost perfectly \nproportional with the parameter you're playing with.\n\n\nThe Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote :\n> On 28.02.2011 11:38, Marc Cousin wrote:\n> > I've been facing a very large (more than 15 seconds) planning time in a\n> > partitioned configuration. The amount of partitions wasn't completely\n> > crazy, around 500, not in the thousands. The problem was that there were\n> > nearly 1000 columns in the parent table (very special use case, there is\n> > a reason for this application for having these many columns). 
The check\n> > constraint was extremely simple (for each child, 1 column = 1 constant,\n> > always the same column).\n> > \n> > As I was surprised by this very large planning time, I have been trying\n> > to study the variation of planning time against several parameters: -\n> > number of columns\n> > - number of children tables\n> > - constraint exclusion's value (partition or off)\n> > \n> > What (I think) I measured is that the planning time seems to be O(n^2)\n> > for the number of columns, and O(n^2) for the number of children tables.\n> > \n> > Constraint exclusion had a limited impact on planning time (it added\n> > between 20% and 100% planning time when there were many columns).\n> \n> Testing here with a table with 1000 columns and 100 partitions, about\n> 80% of the planning time is looking up the statistics on attribute\n> width, to calculate average tuple width. I don't see O(n^2) behavior,\n> though, it seems linear.\n\nIt is only based on experimentation, for my part, of course… \n\nIf you measure the planning time, modifying either the columns or the partitions number, the square root of the planning time is almost perfectly proportional with the parameter you're playing with.", "msg_date": "Mon, 28 Feb 2011 14:09:55 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inheritance: planning time vs children number vs column number" }, { "msg_contents": "Marc Cousin <[email protected]> writes:\n> The Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote :\n>> Testing here with a table with 1000 columns and 100 partitions, about\n>> 80% of the planning time is looking up the statistics on attribute\n>> width, to calculate average tuple width. I don't see O(n^2) behavior,\n>> though, it seems linear.\n\n> It is only based on experimentation, for my part, of course… \n\n> If you measure the planning time, modifying either the columns or the \n> partitions number, the square root of the planning time is almost perfectly \n> proportional with the parameter you're playing with.\n\nCould we see a concrete example demonstrating that? I agree with Heikki\nthat it's not obvious what you are testing that would have such behavior.\nI can think of places that would have O(N^2) behavior in the length of\nthe targetlist, but it seems unlikely that they'd come to dominate\nruntime at a mere 1000 columns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Feb 2011 10:35:37 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance: planning time vs children number vs column number " }, { "msg_contents": "We have a medium-sized catalog (about 5 million rows), but some of our customers only want to see portions of it. I've been experimenting with a customer-specific schema that contains nothing but a \"join table\" -- just the primary keys of that portion of the data that each customer wants to see, which is used to create a view that looks like the original table. But the most important query, the one that customers use to scan page-by-page through search results, turns out to be far too slow (65 seconds versus 55 milliseconds).\n\nBelow are the results of two explain/analyze statements. The first one uses the view, the second one goes directly to the original tables. 
I thought this would be a slam-dunk, that it would return results in a flash because the view is created from two tables with the same primary keys.\n\nMy guess (and it's just a wild guess) is that the \"left join\" is forcing a sequence scan or something. But we need the left join, because it's on a \"hitlist\" that recorded all the matches to a customer's earlier query, and if rows have been removed from the tables, the customer needs to see a blank row.\n\nHere is the \"bad\" query, which is run on the view:\n\nem=> explain analyze\nselect version.version_id, version.isosmiles\nfrom hitlist_rows_reset_140\nleft join version on (hitlist_rows_reset_140.objectid = version.version_id)\nwhere hitlist_rows_reset_140.sortorder >= 1\nand hitlist_rows_reset_140.sortorder <= 10\norder by hitlist_rows_reset_140.sortorder;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n------------------------------\n Nested Loop Left Join (cost=23687.51..215315.74 rows=1 width=54) (actual time=2682.662..63680.076 rows=10 loops=1)\n Join Filter: (hitlist_rows_reset_140.objectid = v.version_id)\n -> Index Scan using hitlist_rows_reset_140_pkey on hitlist_rows_reset_140 (cost=0.00..8.36 rows=1 width=8) (actual time=\n0.015..0.049 rows=10 loops=1)\n Index Cond: ((sortorder >= 1) AND (sortorder <= 10))\n -> Hash Join (cost=23687.51..204666.54 rows=851267 width=50) (actual time=31.829..6263.403 rows=851267 loops=10)\n Hash Cond: (v.version_id = mv.version_id)\n -> Seq Scan on version v (cost=0.00..116146.68 rows=5631968 width=50) (actual time=0.006..859.758 rows=5632191 loo\nps=10)\n -> Hash (cost=13046.67..13046.67 rows=851267 width=4) (actual time=317.488..317.488 rows=851267 loops=1)\n -> Seq Scan on my_version mv (cost=0.00..13046.67 rows=851267 width=4) (actual time=2.888..115.166 rows=8512\n67 loops=1)\n Total runtime: 63680.162 ms\n\nHere is the \"good\" query, which is run directly on the data tables.\n\nem=> explain analyze\nselect registry.version.version_id, registry.version.isosmiles\nfrom hitlist_rows_reset_140\nleft join registry.version on (hitlist_rows_reset_140.objectid = registry.version.version_id)\nwhere hitlist_rows_reset_140.sortorder >= 1\nand hitlist_rows_reset_140.sortorder <= 10\norder by hitlist_rows_reset_140.SortOrder;\n QUERY PLAN\n\n-----------------------------------------------------------------------------------------------------------------------------\n------------------------------\n Nested Loop Left Join (cost=0.00..17.73 rows=1 width=54) (actual time=36.022..55.558 rows=10 loops=1)\n -> Index Scan using hitlist_rows_reset_140_pkey on hitlist_rows_reset_140 (cost=0.00..8.36 rows=1 width=8) (actual time=\n0.021..0.025 rows=10 loops=1)\n Index Cond: ((sortorder >= 1) AND (sortorder <= 10))\n -> Index Scan using version_pkey on version (cost=0.00..9.35 rows=1 width=50) (actual time=5.551..5.552 rows=1 loops=10)\n Index Cond: (hitlist_rows_reset_140.objectid = version.version_id)\n Total runtime: 55.608 ms\n(6 rows)\n\n\nThe view is defined like this:\n\nem=> \\d my_version\nTable \"test_schema.my_version\"\n Column | Type | Modifiers\n------------+---------+-----------\n version_id | integer | not null\nIndexes:\n \"my_version_pkey\" PRIMARY KEY, btree (version_id)\n\nem=> \\d version\n View \"test_schema.version\"\n Column | Type | Modifiers\n------------+---------+-----------\n version_id | integer |\n parent_id | integer |\n isosmiles | text |\n coord_2d | text |\nView definition:\n 
SELECT v.version_id, v.parent_id, v.isosmiles, v.coord_2d\n FROM registry.version v\n JOIN my_version mv ON mv.version_id = v.version_id;\n\nThis is:\n\n Postgres 8.4.4\n Ubuntu Linux 2.6.32-27\n Database: 8x7200 RAID 10, LSI RAID controller with BBU\n WAL: 2x7200 RAID1\n\nNon-default config parameters:\n\nmax_connections = 500\nshared_buffers = 1000MB\nwork_mem = 128MB\nsynchronous_commit = off\nfull_page_writes = off\nwal_buffers = 256kB\ncheckpoint_segments = 30\neffective_cache_size = 4GB\ntrack_activities = on\ntrack_counts = off\ntrack_functions = none\nescape_string_warning = off\n\nThanks,\nCraig\n\n", "msg_date": "Mon, 28 Feb 2011 10:28:44 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Query on view radically slower than query on underlying table" }, { "msg_contents": "The Monday 28 February 2011 16:35:37, Tom Lane wrote :\n> Marc Cousin <[email protected]> writes:\n> > The Monday 28 February 2011 13:57:45, Heikki Linnakangas wrote :\n> >> Testing here with a table with 1000 columns and 100 partitions, about\n> >> 80% of the planning time is looking up the statistics on attribute\n> >> width, to calculate average tuple width. I don't see O(n^2) behavior,\n> >> though, it seems linear.\n> > \n> > It is only based on experimentation, for my part, of course
\n> > \n> > If you measure the planning time, modifying either the columns or the\n> > partitions number, the square root of the planning time is almost\n> > perfectly proportional with the parameter you're playing with.\n> \n> Could we see a concrete example demonstrating that? I agree with Heikki\n> that it's not obvious what you are testing that would have such behavior.\n> I can think of places that would have O(N^2) behavior in the length of\n> the targetlist, but it seems unlikely that they'd come to dominate\n> runtime at a mere 1000 columns.\n> \n> \t\t\tregards, tom lane\n\nI feel a little silly not having provided a test case from the start…\n\nA script doing a complete test is attached to this email.\n\nIt's doing a simple \n\nCREATE TABLE test_father (col0 int,col1 int,col2 int,col3 int,col4 int,col5 \nint,col6 int,col7 int,col8 int,col9 int,col10 in\nt,col11 int,col12 int,col13 int,col14 int,col15 int,col16 int,col17 int,col18 \nint,col19 int,col20 int,col21 int,col22 int,co\nl23 int,…)\n\nFollowed by 600 \nCREATE TABLE test_child_0 (CHECK (col0=0)) INHERITS (test_father);\n\nAnd a single \n\nSELECT col0 FROM test_father WHERE col0=0;\n\n\nHere are my results (from the same machine). I've done it with 600 partitions, \nto have big planning times. If you need a smaller one (this one takes nearly \nten minutes to run) tell me.\n\nCOLS:100 PARTITIONS:600\nTime : 513,764 ms (sqrt : 22.6)\nCOLS:200 PARTITIONS:600\nTime : 906,214 ms (sqrt : 30.1)\nCOLS:300 PARTITIONS:600\nTime : 2255,390 ms (sqrt : 47.48)\nCOLS:400 PARTITIONS:600\nTime : 4816,820 ms (sqrt : 69.4)\nCOLS:500 PARTITIONS:600\nTime : 5736,602 ms (sqrt : 75.73)\nCOLS:600 PARTITIONS:600\nTime : 7659,617 ms (sqrt : 87.51)\nCOLS:700 PARTITIONS:600\nTime : 9313,260 ms (sqrt : 96.5)\nCOLS:800 PARTITIONS:600\nTime : 13700,353 ms (sqrt : 117.04)\nCOLS:900 PARTITIONS:600\nTime : 13914,765 ms (sqrt : 117.95)\nCOLS:1000 PARTITIONS:600\nTime : 20335,750 ms (sqrt : 142.6)\nCOLS:1100 PARTITIONS:600\nTime : 21048,958 ms (sqrt : 145.08)\nCOLS:1200 PARTITIONS:600\nTime : 27619,559 ms (sqrt : 166.18)\nCOLS:1300 PARTITIONS:600\nTime : 31357,353 ms (sqrt : 177.08)\nCOLS:1400 PARTITIONS:600\nTime : 34435,711 ms (sqrt : 185.57)\nCOLS:1500 PARTITIONS:600\nTime : 38954,676 ms (sqrt : 197.37)\n\n\nAs for my previous results, these ones are on a machine doing a bit of other \nwork, so some values may be a bit offset, and it's only one measure each time \nanyway.\n\nThe CSV file I sent from the first email is obtained running the exact same \ncommands, but playing on both columns and partitions, and averaged over 3 \nmeasures.\n\nRegards.", "msg_date": "Mon, 28 Feb 2011 19:47:46 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inheritance: planning time vs children number vs column number" }, { "msg_contents": "Craig James <[email protected]> writes:\n> My guess (and it's just a wild guess) is that the \"left join\" is\n> forcing a sequence scan or something.\n\nNo, that's forcing the other join to be done in toto because it can't\nreorder the left join and regular join.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Feb 2011 13:57:36 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on view radically slower than query on underlying table " }, { "msg_contents": "> Craig James<[email protected]> writes:\n>> Here is the \"bad\" query, which is run on the view:\n>>\n>> em=> explain analyze\n>> select version.version_id, version.isosmiles\n>> from 
hitlist_rows_reset_140\n>> left join version on (hitlist_rows_reset_140.objectid = version.version_id)\n>> where hitlist_rows_reset_140.sortorder >= 1\n>> and hitlist_rows_reset_140.sortorder <= 10\n>> order by hitlist_rows_reset_140.sortorder;\n>> QUERY PLAN\n>>\n>> -----------------------------------------------------------------------------------------------------------------------------\n>> ------------------------------\n>> Nested Loop Left Join (cost=23687.51..215315.74 rows=1 width=54) (actual time=2682.662..63680.076 rows=10 loops=1)\n>> Join Filter: (hitlist_rows_reset_140.objectid = v.version_id)\n>> -> Index Scan using hitlist_rows_reset_140_pkey on hitlist_rows_reset_140 (cost=0.00..8.36 rows=1 width=8) (actual time=\n>> 0.015..0.049 rows=10 loops=1)\n>> Index Cond: ((sortorder >= 1) AND (sortorder <= 10))\n>> -> Hash Join (cost=23687.51..204666.54 rows=851267 width=50) (actual time=31.829..6263.403 rows=851267 loops=10)\n>> Hash Cond: (v.version_id = mv.version_id)\n>> -> Seq Scan on version v (cost=0.00..116146.68 rows=5631968 width=50) (actual time=0.006..859.758 rows=5632191 loo\n>> ps=10)\n>> -> Hash (cost=13046.67..13046.67 rows=851267 width=4) (actual time=317.488..317.488 rows=851267 loops=1)\n>> -> Seq Scan on my_version mv (cost=0.00..13046.67 rows=851267 width=4) (actual time=2.888..115.166 rows=8512\n>> 67 loops=1)\n>> Total runtime: 63680.162 ms\n\nOn 2/28/11 10:57 AM, Tom Lane wrote:\n>> My guess (and it's just a wild guess) is that the \"left join\" is\n>> forcing a sequence scan or something.\n>\n> No, that's forcing the other join to be done in toto because it can't\n> reorder the left join and regular join.\n\nI change the \"left join\" to just \"join\" and confirmed that it's fast -- the join on the view drops from 65 seconds back down to a few milliseconds.\n\nThen I thought maybe putting a foreign-key constraint on table \"my_version\" would solve the problem:\n\n alter table my_version add constraint fk_my_view foreign key(version_id)\n references registry.version(version_id) on delete cascade;\n\nThat way, the planner would know that every key in table \"my_version\" has to also be in table \"version\", thus avoiding that part about \"forcing the other join to be done in toto\". But the foreign-key constraint makes no difference, it still does the full join and takes 65 seconds.\n\nSo here's how I see it:\n\n - The select can only return ten rows from table \"hitlist_rows_reset_140\"\n - The left join could be applied to table \"my_version\"\n - The results of that could be joined to table \"version\"\n\nIt seems to me that with the foreign-key constraint, it shouldn't have to examine more than ten rows from any of the three tables. Or have I overlooked something?\n\nThanks,\nCraig\n", "msg_date": "Mon, 28 Feb 2011 11:23:52 -0800", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on view radically slower than query on underlying\n table" }, { "msg_contents": "Craig James <[email protected]> writes:\n> Then I thought maybe putting a foreign-key constraint on table \"my_version\" would solve the problem:\n\n> alter table my_version add constraint fk_my_view foreign key(version_id)\n> references registry.version(version_id) on delete cascade;\n\n> That way, the planner would know that every key in table \"my_version\" has to also be in table \"version\", thus avoiding that part about \"forcing the other join to be done in toto\". 
But the foreign-key constraint makes no difference, it still does the full join and takes 65 seconds.\n\nThat's just wishful thinking I'm afraid. The planner doesn't currently\nmake any deductions whatsoever from the presence of a foreign key\nconstraint; and even if it did, I'm not sure that this would help it\ndecide that a join order constraint could safely be dropped.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 28 Feb 2011 14:58:28 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on view radically slower than query on underlying table " }, { "msg_contents": "Marc Cousin <[email protected]> writes:\n> The Monday 28 February 2011 16:35:37, Tom Lane wrote :\n>> Could we see a concrete example demonstrating that? I agree with Heikki\n>> that it's not obvious what you are testing that would have such behavior.\n>> I can think of places that would have O(N^2) behavior in the length of\n>> the targetlist, but it seems unlikely that they'd come to dominate\n>> runtime at a mere 1000 columns.\n\n> I feel a little silly not having provided a test case from the start…\n\n> A script doing a complete test is attached to this email.\n\nI did some oprofile analysis of this test case. It's spending\nessentially all its time in SearchCatCache, on failed searches of\npg_statistic. The cache accumulates negative entries for each probed\ncolumn, and then the searches take time proportional to the number of\nentries, so indeed there is an O(N^2) behavior --- but N is the number\nof columns times number of tables in your test case, not just the number\nof columns.\n\nThe cache is a hash table, so ideally the search time would be more or\nless constant as the table grows, but to make that happen we'd need to\nreallocate with more buckets as the table grows, and catcache.c doesn't\ndo that yet. We've seen a few cases that make that look worth doing,\nbut they tend to be pretty extreme, like this one.\n\nIt's worth pointing out that the only reason this effect is dominating\nthe runtime is that you don't have any statistics for these toy test\ntables. If you did, the cycles spent using those entries would dwarf\nthe lookup costs, I think. So it's hard to get excited about doing\nanything based on this test case --- it's likely the bottleneck would be\nsomewhere else entirely if you'd bothered to load up some data.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Mar 2011 01:20:19 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance: planning time vs children number vs column number " }, { "msg_contents": "Le mardi 01 mars 2011 07:20:19, Tom Lane a écrit :\n> Marc Cousin <[email protected]> writes:\n> > The Monday 28 February 2011 16:35:37, Tom Lane wrote :\n> >> Could we see a concrete example demonstrating that? I agree with Heikki\n> >> that it's not obvious what you are testing that would have such\n> >> behavior. I can think of places that would have O(N^2) behavior in the\n> >> length of the targetlist, but it seems unlikely that they'd come to\n> >> dominate runtime at a mere 1000 columns.\n> > \n> > I feel a little silly not having provided a test case from the startق€�\n> > \n> > A script doing a complete test is attached to this email.\n> \n> I did some oprofile analysis of this test case. It's spending\n> essentially all its time in SearchCatCache, on failed searches of\n> pg_statistic. 
The cache accumulates negative entries for each probed\n> column, and then the searches take time proportional to the number of\n> entries, so indeed there is an O(N^2) behavior --- but N is the number\n> of columns times number of tables in your test case, not just the number\n> of columns.\n> \n> The cache is a hash table, so ideally the search time would be more or\n> less constant as the table grows, but to make that happen we'd need to\n> reallocate with more buckets as the table grows, and catcache.c doesn't\n> do that yet. We've seen a few cases that make that look worth doing,\n> but they tend to be pretty extreme, like this one.\n> \n> It's worth pointing out that the only reason this effect is dominating\n> the runtime is that you don't have any statistics for these toy test\n> tables. If you did, the cycles spent using those entries would dwarf\n> the lookup costs, I think. So it's hard to get excited about doing\n> anything based on this test case --- it's likely the bottleneck would be\n> somewhere else entirely if you'd bothered to load up some data.\n> \n> \t\t\tregards, tom lane\n\nYes, for the same test case, with a bit of data in every partition and \nstatistics up to date, planning time goes from 20 seconds to 125ms for the 600 \nchildren/1000 columns case. Which is of course more than acceptable.\n\nNow I've got to check it's the same problem on the real environment. I think \nit has quite a few empty partitions, so no statistics for them…\n\nThanks a lot.\n\nMarc\n", "msg_date": "Tue, 1 Mar 2011 08:42:42 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inheritance: planning time vs children number vs column number" }, { "msg_contents": "Marc Cousin <[email protected]> writes:\n> Le mardi 01 mars 2011 07:20:19, Tom Lane a écrit :\n>> It's worth pointing out that the only reason this effect is dominating\n>> the runtime is that you don't have any statistics for these toy test\n>> tables. If you did, the cycles spent using those entries would dwarf\n>> the lookup costs, I think. So it's hard to get excited about doing\n>> anything based on this test case --- it's likely the bottleneck would be\n>> somewhere else entirely if you'd bothered to load up some data.\n\n> Yes, for the same test case, with a bit of data in every partition and \n> statistics up to date, planning time goes from 20 seconds to 125ms for the 600 \n> children/1000 columns case. Which is of course more than acceptable.\n\n[ scratches head ... ] Actually, I was expecting the runtime to go up\nnot down. Maybe there's something else strange going on here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Mar 2011 10:33:51 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance: planning time vs children number vs column number " }, { "msg_contents": "The Tuesday 01 March 2011 16:33:51, Tom Lane wrote :\n> Marc Cousin <[email protected]> writes:\n> > Le mardi 01 mars 2011 07:20:19, Tom Lane a écrit :\n> >> It's worth pointing out that the only reason this effect is dominating\n> >> the runtime is that you don't have any statistics for these toy test\n> >> tables. If you did, the cycles spent using those entries would dwarf\n> >> the lookup costs, I think. 
So it's hard to get excited about doing\n> >> anything based on this test case --- it's likely the bottleneck would be\n> >> somewhere else entirely if you'd bothered to load up some data.\n> > \n> > Yes, for the same test case, with a bit of data in every partition and\n> > statistics up to date, planning time goes from 20 seconds to 125ms for\n> > the 600 children/1000 columns case. Which is of course more than\n> > acceptable.\n> \n> [ scratches head ... ] Actually, I was expecting the runtime to go up\n> not down. Maybe there's something else strange going on here.\n> \n> \t\t\tregards, tom lane\n\nThen, what can I do to help ?\n\n\nThe Tuesday 01 March 2011 16:33:51, Tom Lane wrote :\n> Marc Cousin <[email protected]> writes:\n> > Le mardi 01 mars 2011 07:20:19, Tom Lane a écrit :\n> >> It's worth pointing out that the only reason this effect is dominating\n> >> the runtime is that you don't have any statistics for these toy test\n> >> tables. If you did, the cycles spent using those entries would dwarf\n> >> the lookup costs, I think. So it's hard to get excited about doing\n> >> anything based on this test case --- it's likely the bottleneck would be\n> >> somewhere else entirely if you'd bothered to load up some data.\n> > \n> > Yes, for the same test case, with a bit of data in every partition and\n> > statistics up to date, planning time goes from 20 seconds to 125ms for\n> > the 600 children/1000 columns case. Which is of course more than\n> > acceptable.\n> \n> [ scratches head ... ] Actually, I was expecting the runtime to go up\n> not down. Maybe there's something else strange going on here.\n> \n> \t\t\tregards, tom lane\n\nThen, what can I do to help ?", "msg_date": "Tue, 1 Mar 2011 16:39:19 +0100", "msg_from": "Marc Cousin <[email protected]>", "msg_from_op": true, "msg_subject": "Re: inheritance: planning time vs children number vs column number" }, { "msg_contents": "I wrote:\n> Marc Cousin <[email protected]> writes:\n>> Yes, for the same test case, with a bit of data in every partition and \n>> statistics up to date, planning time goes from 20 seconds to 125ms for the 600 \n>> children/1000 columns case. Which is of course more than acceptable.\n\n> [ scratches head ... ] Actually, I was expecting the runtime to go up\n> not down. Maybe there's something else strange going on here.\n\nOh, doh: the failing pg_statistic lookups are all coming from the part\nof estimate_rel_size() where it tries to induce a reasonable tuple width\nestimate for an empty table (see get_rel_data_width). Probably not a\ncase we need to get really tense about. Of course, you could also argue\nthat this code is stupid because it's very unlikely that there will be\nany pg_statistic entries either. 
Maybe we should just have it go\ndirectly to the datatype-based estimate instead of making a boatload\nof useless pg_statistic probes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Mar 2011 12:33:54 -0500", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: inheritance: planning time vs children number vs column number " }, { "msg_contents": "On Mon, Feb 28, 2011 at 2:58 PM, Tom Lane <[email protected]> wrote:\n> Craig James <[email protected]> writes:\n>> Then I thought maybe putting a foreign-key constraint on table \"my_version\" would solve the problem:\n>\n>>    alter table my_version add constraint fk_my_view foreign key(version_id)\n>>    references registry.version(version_id) on delete cascade;\n>\n>> That way, the planner would know that every key in table \"my_version\" has to also be in table \"version\", thus avoiding that part about \"forcing the other join to be done in toto\".  But the foreign-key constraint makes no difference, it still does the full join and takes 65 seconds.\n>\n> That's just wishful thinking I'm afraid.  The planner doesn't currently\n> make any deductions whatsoever from the presence of a foreign key\n> constraint; and even if it did, I'm not sure that this would help it\n> decide that a join order constraint could safely be dropped.\n\nI've previously mused on -hackers about teaching the planner the\nconcept of an inner-or-left-join; that is, a join that's guaranteed to\nreturn the same results whichever way we choose to implement it.\nProving that an inner join is actually inner-or-left would allow the\njoin removal logic to consider removing it altogether, and would allow\nreordering in cases that aren't otherwise known to be safe. Proving\nthat a left join is actually inner-or-left doesn't help with join\nremoval, but it might allow the join to be reordered. Maybe\n\"non-row-reducing-join\" is better terminology than\n\"inner-or-left-join\", but in any case I have a suspicion that inner\njoin removal will end up being implemented as a special case of\nnoticing that an inner join falls into this class.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 10:39:58 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query on view radically slower than query on underlying table" } ]
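To close the loop on the partition-planning discussion above: the multi-second plans were all measured against completely empty children, where estimate_rel_size() probes pg_statistic for every column of every child and the failed lookups accumulate as negative entries in the catalog cache. Once the children held some data and had been analyzed, Marc saw planning drop from roughly 20 s to 125 ms, so a test harness along the lines of his script only needs one extra step per child. A sketch, using the table names from his script (the INSERT is only there to give ANALYZE something to look at):

    -- after creating each child, load a token row and collect statistics
    INSERT INTO test_child_0 (col0, col1) VALUES (0, 1);
    ANALYZE test_child_0;
    -- same for the other children, or simply run a database-wide ANALYZE;

    -- then time the planner alone from psql
    \timing
    EXPLAIN SELECT col0 FROM test_father WHERE col0 = 0;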
[ { "msg_contents": "Hi!\ncan you help me with performance optimization\non my machine I have 8 databases with ca. 1-2GB\n\nprocessor is:\nprocessor       : 0\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU           E3110  @ 3.00GHz\nstepping        : 10\ncpu MHz         : 2992.585\ncache size      : 6144 KB\nphysical id     : 0\nsiblings        : 2\ncore id         : 0\ncpu cores       : 2\n...\nprocessor       : 1\nvendor_id       : GenuineIntel\ncpu family      : 6\nmodel           : 23\nmodel name      : Intel(R) Xeon(R) CPU           E3110  @ 3.00GHz\nstepping        : 10\ncpu MHz         : 2992.585\ncache size      : 6144 KB\nphysical id     : 0\nsiblings        : 2\ncore id         : 1\ncpu cores       : 2\n...\nmemory: 8GB (4*2GB ecc ram)\n\nhdd: raid 1 (mdadm) - 2x 500GB, ext4 (mounted for /var, without /var/log)\n\npowered with UPS\n\nmy English is not very good and I completely lost track while reading\nthe user manual\ni find something like that\nhttp://samiux.wordpress.com/2009/07/26/howto-performance-tuning-for-postgresql-on-ubuntudebian/\nbut i,m not so sure, reading mha220 comments, that is safe for me & my server :/\n\ngreatings\n", "msg_date": "Mon, 28 Feb 2011 10:41:31 +0100", "msg_from": "croolyc <[email protected]>", "msg_from_op": true, "msg_subject": "optimalization" }, { "msg_contents": "croolyc <[email protected]> wrote:\n\n> Hi!\n> can you help me with performance optimization\n> on my machine I have 8 databases with ca. 1-2GB\n\nPerformace optimization depends on the workload...\nIs that a dedicated server, only for PostgreSQL? I assume it.\n\n\n> memory: 8GB (4*2GB ecc ram)\n\nOkay, as a first try, set shared_buffers to 2 GByte, and increase\nwork_mem up to, maybe, 20 MByte.\n\n\nYou can (and should) set log_min_duration_statement, maybe to 1000\n(1000ms), and observe the log. Read our docu about EXPLAIN, and analyse\nlong-running queries with EXPLAIN (ANALYSE).\n\nAnd again, performance tuning depends on the real workload on the\nserver, it's hard to give you a recipe for all use-cases.\n\n\nAndreas\n-- \nReally, I'm not out to destroy Microsoft. That will just be a completely\nunintentional side effect. (Linus Torvalds)\n\"If I was god, I would recompile penguin with --enable-fly.\" (unknown)\nKaufbach, Saxony, Germany, Europe. N 51.05082�, E 13.56889�\n", "msg_date": "Mon, 28 Feb 2011 19:32:46 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimalization" } ]
[ { "msg_contents": "Hi!\ncan you help me with performance optimization\non my machine I have 8 databases with ca. 1-2GB\n\nprocessor is:\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 23\nmodel name : Intel(R) Xeon(R) CPU E3110 @ 3.00GHz\nstepping : 10\ncpu MHz : 2992.585\ncache size : 6144 KB\nphysical id : 0\nsiblings : 2\ncore id : 0\ncpu cores : 2\n...\nprocessor : 1\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 23\nmodel name : Intel(R) Xeon(R) CPU E3110 @ 3.00GHz\nstepping : 10\ncpu MHz : 2992.585\ncache size : 6144 KB\nphysical id : 0\nsiblings : 2\ncore id : 1\ncpu cores : 2\n...\nmemory: 8GB (4*2GB ecc ram)\n\nhdd: raid 1 (mdadm) - 2x 500GB, ext4 (mounted for /var, without /var/log)\n\npowered with UPS\n\nmy English is not very good and I completely lost track while reading\nthe user manual\ni find something like that\nhttp://samiux.wordpress.com/2009/07/26/howto-performance-tuning-for-postgresql-on-ubuntudebian/\nbut i,m not so sure, reading mha220 comments, that is safe for me & my server :/\n\ngreatings\n", "msg_date": "Mon, 28 Feb 2011 10:43:53 +0100", "msg_from": "croolyc <[email protected]>", "msg_from_op": true, "msg_subject": "optimization" }, { "msg_contents": "croolyc <[email protected]> wrote:\n \n> can you help me with performance optimization\n \nFor overall tuning you could start here:\n \nhttp://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server\n \nIf, after some general tuning, you are having problems with slow\nqueries, it is best if you pick one and show it with EXPLAIN ANALYZE\noutput and the schema of the tables involved.\n \nhttp://wiki.postgresql.org/wiki/SlowQueryQuestions\n \n-Kevin\n", "msg_date": "Mon, 28 Feb 2011 08:57:06 -0600", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimization" } ]
[ { "msg_contents": "Hi,\n\nI came across some tools such as Tsung and Bristlecone. Are these fine or\nare there any better tools well suited for this DB?\n\nThank you.\n\nWarmest Regards,\n\nSelvam\n\nHi,I came across some tools such as Tsung and Bristlecone. Are these fine or are there any better tools well suited for this DB?Thank you.Warmest Regards,Selvam", "msg_date": "Tue, 1 Mar 2011 00:25:53 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Load and Stress on PostgreSQL 9.0" }, { "msg_contents": "On 2/28/11 8:25 AM, Selva manickaraja wrote:\n> Hi,\n> \n> I came across some tools such as Tsung and Bristlecone. Are these fine\n> or are there any better tools well suited for this DB?\n\nTsung is great. I've never used Bristlecone.\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 28 Feb 2011 10:35:42 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Load and Stress on PostgreSQL 9.0" }, { "msg_contents": "Ok, let us look into Tsung. On the other hand, (this might be a bit out of\ncontext) we want to load/stress test the Windows client application at that\naccessed the PostgreSQL through an App Server. This is to gauge if the App\nServer can take the load to handle the client request. Is there a tool that\ncan be used to load test from application. We think something like record\nuser events on a Windows GUI Client app would be suitable. Please assist to\nprovide some guidance.\n\nThank you.\n\nRegards,\n\nSelvam\n\nOn Tue, Mar 1, 2011 at 2:35 AM, Josh Berkus <[email protected]> wrote:\n\n> On 2/28/11 8:25 AM, Selva manickaraja wrote:\n> > Hi,\n> >\n> > I came across some tools such as Tsung and Bristlecone. Are these fine\n> > or are there any better tools well suited for this DB?\n>\n> Tsung is great. I've never used Bristlecone.\n>\n> --\n> -- Josh Berkus\n> PostgreSQL Experts Inc.\n> http://www.pgexperts.com\n>\n\nOk, let us look into Tsung. On the other hand, (this might be a bit out of context) we want to load/stress test the Windows client application at that accessed the PostgreSQL through an App Server. This is to gauge if the App Server can take the load to handle the client request. Is there a tool that can be used to load test from application. We think something like record user events on a Windows GUI Client app would be suitable. Please assist to provide some guidance.\nThank you.Regards,SelvamOn Tue, Mar 1, 2011 at 2:35 AM, Josh Berkus <[email protected]> wrote:\nOn 2/28/11 8:25 AM, Selva manickaraja wrote:\n> Hi,\n>\n> I came across some tools such as Tsung and Bristlecone. Are these fine\n> or are there any better tools well suited for this DB?\n\nTsung is great.  I've never used Bristlecone.\n\n--\n                                  -- Josh Berkus\n                                     PostgreSQL Experts Inc.\n                                     http://www.pgexperts.com", "msg_date": "Tue, 1 Mar 2011 09:48:14 +0800", "msg_from": "Selva manickaraja <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Load and Stress on PostgreSQL 9.0" } ]
[ { "msg_contents": "Hey,\n\nDoes anyone have the hardware to test FlashCache with PostgreSQL?\n\nhttp://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n\nI'd be interested to hear how it performs ...\n\n-- \n -- Josh Berkus\n PostgreSQL Experts Inc.\n http://www.pgexperts.com\n", "msg_date": "Mon, 28 Feb 2011 11:09:55 -0800", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": true, "msg_subject": "Anyone tried Flashcache with PostgreSQL?" }, { "msg_contents": "On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus <[email protected]> wrote:\n> Does anyone have the hardware to test FlashCache with PostgreSQL?\n>\n> http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n>\n> I'd be interested to hear how it performs ...\n\nIt'd be a lot more interesting if it were a write-through cache rather\nthan a write-back cache, wouldn't it?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Wed, 2 Mar 2011 10:29:21 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone tried Flashcache with PostgreSQL?" }, { "msg_contents": "On Wed, Mar 2, 2011 at 7:29 AM, Robert Haas <[email protected]> wrote:\n\n> On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus <[email protected]> wrote:\n> > Does anyone have the hardware to test FlashCache with PostgreSQL?\n> >\n> > http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n> >\n> > I'd be interested to hear how it performs ...\n>\n> It'd be a lot more interesting if it were a write-through cache rather\n> than a write-back cache, wouldn't it?\n>\n\nWell, it is open source...\n\nOn Wed, Mar 2, 2011 at 7:29 AM, Robert Haas <[email protected]> wrote:\nOn Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus <[email protected]> wrote:\n> Does anyone have the hardware to test FlashCache with PostgreSQL?\n>\n> http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n>\n> I'd be interested to hear how it performs ...\n\nIt'd be a lot more interesting if it were a write-through cache rather\nthan a write-back cache, wouldn't it?Well, it is open source...", "msg_date": "Wed, 2 Mar 2011 16:20:27 -0800", "msg_from": "Samuel Gendler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone tried Flashcache with PostgreSQL?" }, { "msg_contents": "On 2-3-2011 16:29 Robert Haas wrote:\n> On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus<[email protected]> wrote:\n>> Does anyone have the hardware to test FlashCache with PostgreSQL?\n>>\n>> http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n>>\n>> I'd be interested to hear how it performs ...\n>\n> It'd be a lot more interesting if it were a write-through cache rather\n> than a write-back cache, wouldn't it?\n\nThat's what bcache tries to accomplish, both read and write cache.\nIt also appears to aim to be more widely usable, rather than the \nrelatively specific requirements the facebook variant is designed for.\n\nhttp://bcache.evilpiepirate.org/\n\nThey seem to try and combine both the dedicated ZIL and L2ARC \nfunctionality from ZFS in one block device based caching layer.\n\nBest regards,\n\nArjen\n", "msg_date": "Thu, 03 Mar 2011 09:15:03 +0100", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone tried Flashcache with PostgreSQL?" 
}, { "msg_contents": "2011/3/3 Arjen van der Meijden <[email protected]>:\n> On 2-3-2011 16:29 Robert Haas wrote:\n>>\n>> On Mon, Feb 28, 2011 at 2:09 PM, Josh Berkus<[email protected]>  wrote:\n>>>\n>>> Does anyone have the hardware to test FlashCache with PostgreSQL?\n>>>\n>>> http://perspectives.mvdirona.com/2010/04/29/FacebookFlashcache.aspx\n>>>\n>>> I'd be interested to hear how it performs ...\n>>\n>> It'd be a lot more interesting if it were a write-through cache rather\n>> than a write-back cache, wouldn't it?\n>\n> That's what bcache tries to accomplish, both read and write cache.\n> It also appears to aim to be more widely usable, rather than the relatively\n> specific requirements the facebook variant is designed for.\n>\n> http://bcache.evilpiepirate.org/\n>\n> They seem to try and combine both the dedicated ZIL and L2ARC functionality\n> from ZFS in one block device based caching layer.\n\nBcache looks more interesting, yes. Still it is not production ready\nand get some dangerous caveeat with administration tasks (for example\nremounting devices without their caches open the door of all evils).\n\n-- \nCédric Villemain               2ndQuadrant\nhttp://2ndQuadrant.fr/     PostgreSQL : Expertise, Formation et Support\n", "msg_date": "Mon, 7 Mar 2011 16:48:07 +0100", "msg_from": "=?ISO-8859-1?Q?C=E9dric_Villemain?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Anyone tried Flashcache with PostgreSQL?" } ]
[ { "msg_contents": "*Hi all !\n\nPostgresql (8.2) has as a strange behaviour in some of my environments.\n*\n*A request follows two execution plans ( but not always !!! ). I encounter\nsome difficulties to reproduce the case.*\n\n*J-2*\nAggregate (*cost=2323350.24..2323350.28 rows=1 width=24*)\n -> Merge Join (cost=2214044.98..2322432.49 rows=91774 width=24)\n Merge Cond: ((azy_header.txhd_azy_nr = azy_detail.txhd_azy_nr) AND\n((azy_header.till_short_desc)::text = inner\".\"?column8?\") AND\n((azy_header.orgu_xxx)::text = \"inner\".\"?column9?\") AND\n((azy_header.orgu_xxx_cmpy)::text = \"inner\".\"?column10?\"))\"\n -> Sort (cost=409971.56..410050.39 rows=31532 width=77)\n Sort Key: azy_queue.txhd_azy_nr,\n(azy_queue.till_short_desc)::text, (azy_queue.orgu_xxx)::text,\n(azy_queue.orgu_xxx_cmpy)::text\n -> Nested Loop (cost=0.00..407615.41 rows=31532 width=77)\n -> Nested Loop (cost=0.00..70178.58 rows=52216\nwidth=46)\n Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n(firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n(firma_session.orgu_xxx)::text))\n -> Seq Scan on firma_session (cost=0.00..599.29\nrows=401 width=25)\n Filter: ((cssn_trading_date >=\n'20110226'::bpchar) AND (cssn_trading_date <= '20110226'::bpchar))\n -> Index Scan using azyq_ix2 on azy_queue\n(cost=0.00..165.92 rows=434 width=41)\n Index Cond: (azy_queue.cssn_session_id =\nfirma_session.cssn_session_id)\n -> Index Scan using txhd_pk on azy_header\n(cost=0.00..6.44 rows=1 width=31)\n Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n(azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n(azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n(azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\nazy_header.txhd_azy_nr))\n Filter: (txhd_voided = 0::numeric)\n -> Sort (cost=1804073.42..1825494.05 rows=8568252 width=55)\n Sort Key: azy_detail.txhd_azy_nr,\n(azy_detail.till_short_desc)::text, (azy_detail.orgu_xxx)::text,\n(azy_detail.orgu_xxx_cmpy)::text\n -> Seq Scan on azy_detail (cost=0.00..509908.30 rows=8568252\nwidth=55)\n Filter: (txde_item_void = 0::numeric)\n\n\n\n*J-1*\nAggregate (*cost=10188.38..10188.42 rows=1 width=24*)\n -> Nested Loop (cost=0.00..10186.08 rows=229 width=24)\n -> Nested Loop (cost=0.00..2028.51 rows=79 width=77)\n -> Nested Loop (cost=0.00..865.09 rows=130 width=46)\n Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n(firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n(firma_session.orgu_xxx)::text))\n -> Seq Scan on firma_session (cost=0.00..599.29 rows=1\nwidth=25)\n Filter: ((cssn_trading_date >= '20110227'::bpchar)\nAND (cssn_trading_date <= '20110227'::bpchar))\n -> Index Scan using azyq_ix2 on azy_queue\n(cost=0.00..258.20 rows=434 width=41)\n Index Cond: (azy_queue.cssn_session_id =\nfirma_session.cssn_session_id)\n -> Index Scan using txhd_pk on azy_header (cost=0.00..8.93\nrows=1 width=31)\n Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n(azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n(azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n(azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\nazy_header.txhd_azy_nr))\n Filter: (txhd_voided = 0::numeric)\n -> Index Scan using txde_pk on azy_detail (cost=0.00..102.26\nrows=50 width=55)\n Index Cond: (((azy_detail.orgu_xxx_cmpy)::text =\n(azy_header.orgu_xxx_cmpy)::text) AND ((azy_detail.orgu_xxx)::text =\n(azy_header.orgu_xxx)::text) AND ((azy_detail.till_short_desc)::text 
=\n(azy_header.till_short_desc)::text) AND (azy_detail.txhd_azy_nr =\nazy_header.txhd_azy_nr))\n Filter: (txde_item_void = 0::numeric)\n\n\n\n*\nWhere shall I investigate ?*\nThanks for your help\n\nHi all !Postgresql (8.2) has as a strange behaviour in some of my environments.A request follows two execution plans ( but not always !!! ). I encounter some difficulties to reproduce the case.\nJ-2Aggregate  (cost=2323350.24..2323350.28 rows=1 width=24)\n  ->  Merge Join  (cost=2214044.98..2322432.49 rows=91774 width=24)        Merge Cond: ((azy_header.txhd_azy_nr = azy_detail.txhd_azy_nr) AND ((azy_header.till_short_desc)::text = inner\".\"?column8?\") AND ((azy_header.orgu_xxx)::text = \"inner\".\"?column9?\") AND ((azy_header.orgu_xxx_cmpy)::text = \"inner\".\"?column10?\"))\"\n        ->  Sort  (cost=409971.56..410050.39 rows=31532 width=77)              Sort Key: azy_queue.txhd_azy_nr, (azy_queue.till_short_desc)::text, (azy_queue.orgu_xxx)::text, (azy_queue.orgu_xxx_cmpy)::text\n              ->  Nested Loop  (cost=0.00..407615.41 rows=31532 width=77)                    ->  Nested Loop  (cost=0.00..70178.58 rows=52216 width=46)\n                          Join Filter: (((azy_queue.orgu_xxx_cmpy)::text = (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text = (firma_session.orgu_xxx)::text))\n                          ->  Seq Scan on firma_session  (cost=0.00..599.29 rows=401 width=25)                                Filter: ((cssn_trading_date >= '20110226'::bpchar) AND (cssn_trading_date <= '20110226'::bpchar))\n                          ->  Index Scan using azyq_ix2 on azy_queue  (cost=0.00..165.92 rows=434 width=41)                                Index Cond: (azy_queue.cssn_session_id = firma_session.cssn_session_id)\n                    ->  Index Scan using txhd_pk on azy_header  (cost=0.00..6.44 rows=1 width=31)                          Index Cond: (((azy_queue.orgu_xxx_cmpy)::text = (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text = (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text = (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr = azy_header.txhd_azy_nr))\n                          Filter: (txhd_voided = 0::numeric)        ->  Sort  (cost=1804073.42..1825494.05 rows=8568252 width=55)\n              Sort Key: azy_detail.txhd_azy_nr, (azy_detail.till_short_desc)::text, (azy_detail.orgu_xxx)::text, (azy_detail.orgu_xxx_cmpy)::text\n              ->  Seq Scan on azy_detail  (cost=0.00..509908.30 rows=8568252 width=55)                    Filter: (txde_item_void = 0::numeric)\nJ-1\nAggregate  (cost=10188.38..10188.42 rows=1 width=24)  ->  Nested Loop  (cost=0.00..10186.08 rows=229 width=24)\n        ->  Nested Loop  (cost=0.00..2028.51 rows=79 width=77)              ->  Nested Loop  (cost=0.00..865.09 rows=130 width=46)\n                    Join Filter: (((azy_queue.orgu_xxx_cmpy)::text = (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text = (firma_session.orgu_xxx)::text))\n                    ->  Seq Scan on firma_session  (cost=0.00..599.29 rows=1 width=25)                          Filter: ((cssn_trading_date >= '20110227'::bpchar) AND (cssn_trading_date <= '20110227'::bpchar))\n                    ->  Index Scan using azyq_ix2 on azy_queue  (cost=0.00..258.20 rows=434 width=41)                          Index Cond: (azy_queue.cssn_session_id = firma_session.cssn_session_id)\n              ->  Index Scan using txhd_pk on azy_header  (cost=0.00..8.93 rows=1 width=31)                    Index Cond: 
(((azy_queue.orgu_xxx_cmpy)::text = (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text = (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text = (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr = azy_header.txhd_azy_nr))\n                    Filter: (txhd_voided = 0::numeric)        ->  Index Scan using txde_pk on azy_detail  (cost=0.00..102.26 rows=50 width=55)\n              Index Cond: (((azy_detail.orgu_xxx_cmpy)::text = (azy_header.orgu_xxx_cmpy)::text) AND ((azy_detail.orgu_xxx)::text = (azy_header.orgu_xxx)::text) AND ((azy_detail.till_short_desc)::text = (azy_header.till_short_desc)::text) AND (azy_detail.txhd_azy_nr = azy_header.txhd_azy_nr))\n              Filter: (txde_item_void = 0::numeric)Where shall I investigate ?Thanks for your help", "msg_date": "Tue, 1 Mar 2011 09:46:44 +0100", "msg_from": "Joby Joba <[email protected]>", "msg_from_op": true, "msg_subject": "Two different execution plans for similar requests" }, { "msg_contents": "Hi, and why do you think this is a problem?\n\nThe explain plan is expected to change for different parameter values,\nthat's OK. The merge in the first query is expected to produce\nsignificantly more rows (91774) than the other one (229). That's why the\nsecond query chooses nested loop instead of merge join ...\n\nBut it's difficult to say if those plans are OK, as you have posted just\nEXPLAIN output - please, provide 'EXPLAIN ANALYZE' output so that we can\nsee if the stats are off.\n\nregards\nTomas\n\n> *Hi all !\n>\n> Postgresql (8.2) has as a strange behaviour in some of my environments.\n> *\n> *A request follows two execution plans ( but not always !!! ). I encounter\n> some difficulties to reproduce the case.*\n>\n> *J-2*\n> Aggregate (*cost=2323350.24..2323350.28 rows=1 width=24*)\n> -> Merge Join (cost=2214044.98..2322432.49 rows=91774 width=24)\n> Merge Cond: ((azy_header.txhd_azy_nr = azy_detail.txhd_azy_nr) AND\n> ((azy_header.till_short_desc)::text = inner\".\"?column8?\") AND\n> ((azy_header.orgu_xxx)::text = \"inner\".\"?column9?\") AND\n> ((azy_header.orgu_xxx_cmpy)::text = \"inner\".\"?column10?\"))\"\n> -> Sort (cost=409971.56..410050.39 rows=31532 width=77)\n> Sort Key: azy_queue.txhd_azy_nr,\n> (azy_queue.till_short_desc)::text, (azy_queue.orgu_xxx)::text,\n> (azy_queue.orgu_xxx_cmpy)::text\n> -> Nested Loop (cost=0.00..407615.41 rows=31532 width=77)\n> -> Nested Loop (cost=0.00..70178.58 rows=52216\n> width=46)\n> Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n> (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (firma_session.orgu_xxx)::text))\n> -> Seq Scan on firma_session\n> (cost=0.00..599.29\n> rows=401 width=25)\n> Filter: ((cssn_trading_date >=\n> '20110226'::bpchar) AND (cssn_trading_date <= '20110226'::bpchar))\n> -> Index Scan using azyq_ix2 on azy_queue\n> (cost=0.00..165.92 rows=434 width=41)\n> Index Cond: (azy_queue.cssn_session_id =\n> firma_session.cssn_session_id)\n> -> Index Scan using txhd_pk on azy_header\n> (cost=0.00..6.44 rows=1 width=31)\n> Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n> Filter: (txhd_voided = 0::numeric)\n> -> Sort (cost=1804073.42..1825494.05 rows=8568252 width=55)\n> Sort Key: azy_detail.txhd_azy_nr,\n> (azy_detail.till_short_desc)::text, 
(azy_detail.orgu_xxx)::text,\n> (azy_detail.orgu_xxx_cmpy)::text\n> -> Seq Scan on azy_detail (cost=0.00..509908.30\n> rows=8568252\n> width=55)\n> Filter: (txde_item_void = 0::numeric)\n>\n>\n>\n> *J-1*\n> Aggregate (*cost=10188.38..10188.42 rows=1 width=24*)\n> -> Nested Loop (cost=0.00..10186.08 rows=229 width=24)\n> -> Nested Loop (cost=0.00..2028.51 rows=79 width=77)\n> -> Nested Loop (cost=0.00..865.09 rows=130 width=46)\n> Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n> (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (firma_session.orgu_xxx)::text))\n> -> Seq Scan on firma_session (cost=0.00..599.29\n> rows=1\n> width=25)\n> Filter: ((cssn_trading_date >=\n> '20110227'::bpchar)\n> AND (cssn_trading_date <= '20110227'::bpchar))\n> -> Index Scan using azyq_ix2 on azy_queue\n> (cost=0.00..258.20 rows=434 width=41)\n> Index Cond: (azy_queue.cssn_session_id =\n> firma_session.cssn_session_id)\n> -> Index Scan using txhd_pk on azy_header (cost=0.00..8.93\n> rows=1 width=31)\n> Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n> Filter: (txhd_voided = 0::numeric)\n> -> Index Scan using txde_pk on azy_detail (cost=0.00..102.26\n> rows=50 width=55)\n> Index Cond: (((azy_detail.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_detail.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_detail.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_detail.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n> Filter: (txde_item_void = 0::numeric)\n>\n>\n>\n> *\n> Where shall I investigate ?*\n> Thanks for your help\n>\n\n\n", "msg_date": "Tue, 1 Mar 2011 10:10:02 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "I've already used an 'EXPLAIN ANALYZE' to post the message. So I don't\nclearly understand what you are expecting for, when you tell me to provide\n'EXPLAIN ANALYZE' (please excuse me for the misunderstood)\n\nI agree with you when you say that for two different values, the costs will\nbe different. But I probably forgot to tell that one day the cost is very\nhigh and at another moment this cost for the same value is lower. And there\nis no vacuum/analyze between the two executions.\n\nRegards\n\nJoby\n\n2011/3/1 <[email protected]>\n\n> Hi, and why do you think this is a problem?\n>\n> The explain plan is expected to change for different parameter values,\n> that's OK. The merge in the first query is expected to produce\n> significantly more rows (91774) than the other one (229). That's why the\n> second query chooses nested loop instead of merge join ...\n>\n> But it's difficult to say if those plans are OK, as you have posted just\n> EXPLAIN output - please, provide 'EXPLAIN ANALYZE' output so that we can\n> see if the stats are off.\n>\n> regards\n> Tomas\n>\n> > *Hi all !\n> >\n> > Postgresql (8.2) has as a strange behaviour in some of my environments.\n> > *\n> > *A request follows two execution plans ( but not always !!! ). 
I\n> encounter\n> > some difficulties to reproduce the case.*\n> >\n> > *J-2*\n> > Aggregate (*cost=2323350.24..2323350.28 rows=1 width=24*)\n> > -> Merge Join (cost=2214044.98..2322432.49 rows=91774 width=24)\n> > Merge Cond: ((azy_header.txhd_azy_nr = azy_detail.txhd_azy_nr)\n> AND\n> > ((azy_header.till_short_desc)::text = inner\".\"?column8?\") AND\n> > ((azy_header.orgu_xxx)::text = \"inner\".\"?column9?\") AND\n> > ((azy_header.orgu_xxx_cmpy)::text = \"inner\".\"?column10?\"))\"\n> > -> Sort (cost=409971.56..410050.39 rows=31532 width=77)\n> > Sort Key: azy_queue.txhd_azy_nr,\n> > (azy_queue.till_short_desc)::text, (azy_queue.orgu_xxx)::text,\n> > (azy_queue.orgu_xxx_cmpy)::text\n> > -> Nested Loop (cost=0.00..407615.41 rows=31532 width=77)\n> > -> Nested Loop (cost=0.00..70178.58 rows=52216\n> > width=46)\n> > Join Filter: (((azy_queue.orgu_xxx_cmpy)::text\n> =\n> > (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> > (firma_session.orgu_xxx)::text))\n> > -> Seq Scan on firma_session\n> > (cost=0.00..599.29\n> > rows=401 width=25)\n> > Filter: ((cssn_trading_date >=\n> > '20110226'::bpchar) AND (cssn_trading_date <= '20110226'::bpchar))\n> > -> Index Scan using azyq_ix2 on azy_queue\n> > (cost=0.00..165.92 rows=434 width=41)\n> > Index Cond: (azy_queue.cssn_session_id =\n> > firma_session.cssn_session_id)\n> > -> Index Scan using txhd_pk on azy_header\n> > (cost=0.00..6.44 rows=1 width=31)\n> > Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> > (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> > (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> > (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> > azy_header.txhd_azy_nr))\n> > Filter: (txhd_voided = 0::numeric)\n> > -> Sort (cost=1804073.42..1825494.05 rows=8568252 width=55)\n> > Sort Key: azy_detail.txhd_azy_nr,\n> > (azy_detail.till_short_desc)::text, (azy_detail.orgu_xxx)::text,\n> > (azy_detail.orgu_xxx_cmpy)::text\n> > -> Seq Scan on azy_detail (cost=0.00..509908.30\n> > rows=8568252\n> > width=55)\n> > Filter: (txde_item_void = 0::numeric)\n> >\n> >\n> >\n> > *J-1*\n> > Aggregate (*cost=10188.38..10188.42 rows=1 width=24*)\n> > -> Nested Loop (cost=0.00..10186.08 rows=229 width=24)\n> > -> Nested Loop (cost=0.00..2028.51 rows=79 width=77)\n> > -> Nested Loop (cost=0.00..865.09 rows=130 width=46)\n> > Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n> > (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> > (firma_session.orgu_xxx)::text))\n> > -> Seq Scan on firma_session (cost=0.00..599.29\n> > rows=1\n> > width=25)\n> > Filter: ((cssn_trading_date >=\n> > '20110227'::bpchar)\n> > AND (cssn_trading_date <= '20110227'::bpchar))\n> > -> Index Scan using azyq_ix2 on azy_queue\n> > (cost=0.00..258.20 rows=434 width=41)\n> > Index Cond: (azy_queue.cssn_session_id =\n> > firma_session.cssn_session_id)\n> > -> Index Scan using txhd_pk on azy_header\n> (cost=0.00..8.93\n> > rows=1 width=31)\n> > Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> > (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> > (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> > (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> > azy_header.txhd_azy_nr))\n> > Filter: (txhd_voided = 0::numeric)\n> > -> Index Scan using txde_pk on azy_detail (cost=0.00..102.26\n> > rows=50 width=55)\n> > Index Cond: (((azy_detail.orgu_xxx_cmpy)::text =\n> > (azy_header.orgu_xxx_cmpy)::text) AND 
((azy_detail.orgu_xxx)::text =\n> > (azy_header.orgu_xxx)::text) AND ((azy_detail.till_short_desc)::text =\n> > (azy_header.till_short_desc)::text) AND (azy_detail.txhd_azy_nr =\n> > azy_header.txhd_azy_nr))\n> > Filter: (txde_item_void = 0::numeric)\n> >\n> >\n> >\n> > *\n> > Where shall I investigate ?*\n> > Thanks for your help\n> >\n>\n>\n>\n\nI've already used an 'EXPLAIN ANALYZE' to post the message. So I don't clearly understand what you are expecting for, when you tell me to provide 'EXPLAIN ANALYZE'  (please excuse me for the misunderstood)\nI agree with you when you say that for two different values, the costs will be different. But I probably forgot to tell that one day the cost is very high and at another moment this cost for the same value is lower. And there is no vacuum/analyze between the two executions.\nRegardsJoby2011/3/1 <[email protected]>\nHi, and why do you think this is a problem?\n\nThe explain plan is expected to change for different parameter values,\nthat's OK. The merge in the first query is expected to produce\nsignificantly more rows (91774) than the other one (229). That's why the\nsecond query chooses nested loop instead of merge join ...\n\nBut it's difficult to say if those plans are OK, as you have posted just\nEXPLAIN output - please, provide 'EXPLAIN ANALYZE' output so that we can\nsee if the stats are off.\n\nregards\nTomas\n\n> *Hi all !\n>\n> Postgresql (8.2) has as a strange behaviour in some of my environments.\n> *\n> *A request follows two execution plans ( but not always !!! ). I encounter\n> some difficulties to reproduce the case.*\n>\n> *J-2*\n> Aggregate  (*cost=2323350.24..2323350.28 rows=1 width=24*)\n>   ->  Merge Join  (cost=2214044.98..2322432.49 rows=91774 width=24)\n>         Merge Cond: ((azy_header.txhd_azy_nr = azy_detail.txhd_azy_nr) AND\n> ((azy_header.till_short_desc)::text = inner\".\"?column8?\") AND\n> ((azy_header.orgu_xxx)::text = \"inner\".\"?column9?\") AND\n> ((azy_header.orgu_xxx_cmpy)::text = \"inner\".\"?column10?\"))\"\n>         ->  Sort  (cost=409971.56..410050.39 rows=31532 width=77)\n>               Sort Key: azy_queue.txhd_azy_nr,\n> (azy_queue.till_short_desc)::text, (azy_queue.orgu_xxx)::text,\n> (azy_queue.orgu_xxx_cmpy)::text\n>               ->  Nested Loop  (cost=0.00..407615.41 rows=31532 width=77)\n>                     ->  Nested Loop  (cost=0.00..70178.58 rows=52216\n> width=46)\n>                           Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n> (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (firma_session.orgu_xxx)::text))\n>                           ->  Seq Scan on firma_session\n> (cost=0.00..599.29\n> rows=401 width=25)\n>                                 Filter: ((cssn_trading_date >=\n> '20110226'::bpchar) AND (cssn_trading_date <= '20110226'::bpchar))\n>                           ->  Index Scan using azyq_ix2 on azy_queue\n> (cost=0.00..165.92 rows=434 width=41)\n>                                 Index Cond: (azy_queue.cssn_session_id =\n> firma_session.cssn_session_id)\n>                     ->  Index Scan using txhd_pk on azy_header\n> (cost=0.00..6.44 rows=1 width=31)\n>                           Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n>                           Filter: (txhd_voided = 0::numeric)\n>         
->  Sort  (cost=1804073.42..1825494.05 rows=8568252 width=55)\n>               Sort Key: azy_detail.txhd_azy_nr,\n> (azy_detail.till_short_desc)::text, (azy_detail.orgu_xxx)::text,\n> (azy_detail.orgu_xxx_cmpy)::text\n>               ->  Seq Scan on azy_detail  (cost=0.00..509908.30\n> rows=8568252\n> width=55)\n>                     Filter: (txde_item_void = 0::numeric)\n>\n>\n>\n> *J-1*\n> Aggregate  (*cost=10188.38..10188.42 rows=1 width=24*)\n>   ->  Nested Loop  (cost=0.00..10186.08 rows=229 width=24)\n>         ->  Nested Loop  (cost=0.00..2028.51 rows=79 width=77)\n>               ->  Nested Loop  (cost=0.00..865.09 rows=130 width=46)\n>                     Join Filter: (((azy_queue.orgu_xxx_cmpy)::text =\n> (firma_session.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (firma_session.orgu_xxx)::text))\n>                     ->  Seq Scan on firma_session  (cost=0.00..599.29\n> rows=1\n> width=25)\n>                           Filter: ((cssn_trading_date >=\n> '20110227'::bpchar)\n> AND (cssn_trading_date <= '20110227'::bpchar))\n>                     ->  Index Scan using azyq_ix2 on azy_queue\n> (cost=0.00..258.20 rows=434 width=41)\n>                           Index Cond: (azy_queue.cssn_session_id =\n> firma_session.cssn_session_id)\n>               ->  Index Scan using txhd_pk on azy_header  (cost=0.00..8.93\n> rows=1 width=31)\n>                     Index Cond: (((azy_queue.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_queue.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_queue.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_queue.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n>                     Filter: (txhd_voided = 0::numeric)\n>         ->  Index Scan using txde_pk on azy_detail  (cost=0.00..102.26\n> rows=50 width=55)\n>               Index Cond: (((azy_detail.orgu_xxx_cmpy)::text =\n> (azy_header.orgu_xxx_cmpy)::text) AND ((azy_detail.orgu_xxx)::text =\n> (azy_header.orgu_xxx)::text) AND ((azy_detail.till_short_desc)::text =\n> (azy_header.till_short_desc)::text) AND (azy_detail.txhd_azy_nr =\n> azy_header.txhd_azy_nr))\n>               Filter: (txde_item_void = 0::numeric)\n>\n>\n>\n> *\n> Where shall I investigate ?*\n> Thanks for your help\n>", "msg_date": "Tue, 1 Mar 2011 10:25:50 +0100", "msg_from": "Joby Joba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "> I've already used an 'EXPLAIN ANALYZE' to post the message. So I don't\n> clearly understand what you are expecting for, when you tell me to provide\n> 'EXPLAIN ANALYZE' (please excuse me for the misunderstood)\n\nNo, you haven't. You've provided 'EXPLAIN' output, but that just prepares\nan execution plan and displays it. So it shows just estimates of row\ncounts etc. and not actual values.\n\nDo the same thing but use 'EXPLAIN ANALYZE' instead of 'EXPLAIN' - it will\nrun the query and provide more details about it (run time for each node,\nactual number of rows etc.).\n\nAnyway the sudden changes of estimated costs are suspicious ...\n\nTomas\n\n", "msg_date": "Tue, 1 Mar 2011 10:40:59 +0100", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "Sorry ! The command I use is 'EXPLAIN ANALYZE'\nI can't do better ...\n\n2011/3/1 <[email protected]>\n\n> > I've already used an 'EXPLAIN ANALYZE' to post the message. 
So I don't\n> > clearly understand what you are expecting for, when you tell me to\n> provide\n> > 'EXPLAIN ANALYZE' (please excuse me for the misunderstood)\n>\n> No, you haven't. You've provided 'EXPLAIN' output, but that just prepares\n> an execution plan and displays it. So it shows just estimates of row\n> counts etc. and not actual values.\n>\n> Do the same thing but use 'EXPLAIN ANALYZE' instead of 'EXPLAIN' - it will\n> run the query and provide more details about it (run time for each node,\n> actual number of rows etc.).\n>\n> Anyway the sudden changes of estimated costs are suspicious ...\n>\n> Tomas\n>\n>\n\nSorry ! The command I use is 'EXPLAIN ANALYZE'I can't do better ...2011/3/1 <[email protected]>\n> I've already used an 'EXPLAIN ANALYZE' to post the message. So I don't\n> clearly understand what you are expecting for, when you tell me to provide\n> 'EXPLAIN ANALYZE'  (please excuse me for the misunderstood)\n\nNo, you haven't. You've provided 'EXPLAIN' output, but that just prepares\nan execution plan and displays it. So it shows just estimates of row\ncounts etc. and not actual values.\n\nDo the same thing but use 'EXPLAIN ANALYZE' instead of 'EXPLAIN' - it will\nrun the query and provide more details about it (run time for each node,\nactual number of rows etc.).\n\nAnyway the sudden changes of estimated costs are suspicious ...\n\nTomas", "msg_date": "Tue, 1 Mar 2011 13:37:51 +0100", "msg_from": "Joby Joba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "Me again ! I have checked this question of 'explain analyze' and I\nunderstand now.\n\nWhen the problem occured I have run a 'EXPLAIN'\n\nI have run the request again to open this case using 'EXPLAIN ANALYZE' but I\ndidn't reproduce the case. That's why I sent the other trace but it doesn't\ncountain the information you are asking for.\nSo Tomas, you were right !\n\nAll apologies !!!!! (I'm confused)\n\n\nBut the problem still not resolved .....\n\n\n2011/3/1 Joby Joba <[email protected]>\n\n> Sorry ! The command I use is 'EXPLAIN ANALYZE'\n> I can't do better ...\n>\n> 2011/3/1 <[email protected]>\n>\n>> > I've already used an 'EXPLAIN ANALYZE' to post the message. So I don't\n>>\n>> > clearly understand what you are expecting for, when you tell me to\n>> provide\n>> > 'EXPLAIN ANALYZE' (please excuse me for the misunderstood)\n>>\n>> No, you haven't. You've provided 'EXPLAIN' output, but that just prepares\n>> an execution plan and displays it. So it shows just estimates of row\n>> counts etc. and not actual values.\n>>\n>> Do the same thing but use 'EXPLAIN ANALYZE' instead of 'EXPLAIN' - it will\n>> run the query and provide more details about it (run time for each node,\n>> actual number of rows etc.).\n>>\n>> Anyway the sudden changes of estimated costs are suspicious ...\n>>\n>> Tomas\n>>\n>>\n>\n\nMe again !  I have checked this question of 'explain analyze' and I understand now.When the problem occured I have run a 'EXPLAIN' I have run the request again to open this case using 'EXPLAIN ANALYZE' but I didn't reproduce the case. That's why I sent the other trace but it doesn't countain the information you are asking for. \nSo Tomas, you were right ! All apologies !!!!! (I'm confused)But the problem still not resolved .....2011/3/1 Joby Joba <[email protected]>\nSorry ! The command I use is 'EXPLAIN ANALYZE'I can't do better ...\n2011/3/1 <[email protected]>\n> I've already used an 'EXPLAIN ANALYZE' to post the message. 
So I don't\n> clearly understand what you are expecting for, when you tell me to provide\n> 'EXPLAIN ANALYZE'  (please excuse me for the misunderstood)\n\nNo, you haven't. You've provided 'EXPLAIN' output, but that just prepares\nan execution plan and displays it. So it shows just estimates of row\ncounts etc. and not actual values.\n\nDo the same thing but use 'EXPLAIN ANALYZE' instead of 'EXPLAIN' - it will\nrun the query and provide more details about it (run time for each node,\nactual number of rows etc.).\n\nAnyway the sudden changes of estimated costs are suspicious ...\n\nTomas", "msg_date": "Tue, 1 Mar 2011 13:44:14 +0100", "msg_from": "Joby Joba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "On Tue, Mar 1, 2011 at 4:44 AM, Joby Joba <[email protected]> wrote:\n> Me again ! I have checked this question of 'explain analyze' and I\n> understand now.\n>\n> When the problem occured I have run a 'EXPLAIN'\n>\n> I have run the request again to open this case using 'EXPLAIN ANALYZE' but I\n> didn't reproduce the case. That's why I sent the other trace but it doesn't\n> countain the information you are asking for.\n> So Tomas, you were right !\n>\n> All apologies !!!!! (I'm confused)\n>\n>\n> But the problem still not resolved .....\n\nWhat exactly is the problem? Is one version of this plan slow? Which\none? If you can't reproduce with EXPLAIN ANALYZE (which actually runs\nthe query), how are you reproducing this?\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com\n", "msg_date": "Tue, 1 Mar 2011 09:07:18 -0800", "msg_from": "Maciek Sakrejda <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Two different execution plans for similar requests" }, { "msg_contents": "random_page_cost with a value set to \"2\" and it works fine\n\nThanks for your help\n\n2011/3/1 Maciek Sakrejda <[email protected]>\n\n> On Tue, Mar 1, 2011 at 4:44 AM, Joby Joba <[email protected]> wrote:\n> > Me again ! I have checked this question of 'explain analyze' and I\n> > understand now.\n> >\n> > When the problem occured I have run a 'EXPLAIN'\n> >\n> > I have run the request again to open this case using 'EXPLAIN ANALYZE'\n> but I\n> > didn't reproduce the case. That's why I sent the other trace but it\n> doesn't\n> > countain the information you are asking for.\n> > So Tomas, you were right !\n> >\n> > All apologies !!!!! (I'm confused)\n> >\n> >\n> > But the problem still not resolved .....\n>\n> What exactly is the problem? Is one version of this plan slow? Which\n> one? If you can't reproduce with EXPLAIN ANALYZE (which actually runs\n> the query), how are you reproducing this?\n>\n> ---\n> Maciek Sakrejda | System Architect | Truviso\n>\n> 1065 E. Hillsdale Blvd., Suite 215\n> Foster City, CA 94404\n> (650) 242-3500 Main\n> www.truviso.com\n>\n\nrandom_page_cost with a value set to \"2\" and it works fineThanks for your help2011/3/1 Maciek Sakrejda <[email protected]>\nOn Tue, Mar 1, 2011 at 4:44 AM, Joby Joba <[email protected]> wrote:\n\n> Me again !  I have checked this question of 'explain analyze' and I\n> understand now.\n>\n> When the problem occured I have run a 'EXPLAIN'\n>\n> I have run the request again to open this case using 'EXPLAIN ANALYZE' but I\n> didn't reproduce the case. 
That's why I sent the other trace but it doesn't\n> countain the information you are asking for.\n> So Tomas, you were right !\n>\n> All apologies !!!!! (I'm confused)\n>\n>\n> But the problem still not resolved .....\n\nWhat exactly is the problem? Is one version of this plan slow? Which\none? If you can't reproduce with EXPLAIN ANALYZE (which actually runs\nthe query), how are you reproducing this?\n\n---\nMaciek Sakrejda | System Architect | Truviso\n\n1065 E. Hillsdale Blvd., Suite 215\nFoster City, CA 94404\n(650) 242-3500 Main\nwww.truviso.com", "msg_date": "Wed, 20 Apr 2011 09:58:10 +0200", "msg_from": "Joby Joba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Two different execution plans for similar requests" } ]
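The thread above ends with random_page_cost lowered to 2, but the query that kept flipping between the merge-join and nested-loop plans was never posted. Below is a minimal sketch of how that setting could be tried per transaction, without editing postgresql.conf; the statement itself is only a reconstruction from the join and filter conditions visible in the two plans (the real select list is unknown, so count(*) is a stand-in), so treat every table and column reference as an assumption carried over from the EXPLAIN output rather than as the poster's actual SQL.

BEGIN;
SET LOCAL random_page_cost = 2;    -- the value reported to work at the end of the thread
EXPLAIN ANALYZE
SELECT count(*)                    -- placeholder aggregate; the real target list was not shown
FROM firma_session fs
JOIN azy_queue  aq ON aq.cssn_session_id = fs.cssn_session_id
                  AND aq.orgu_xxx_cmpy   = fs.orgu_xxx_cmpy
                  AND aq.orgu_xxx        = fs.orgu_xxx
JOIN azy_header ah ON ah.orgu_xxx_cmpy   = aq.orgu_xxx_cmpy
                  AND ah.orgu_xxx        = aq.orgu_xxx
                  AND ah.till_short_desc = aq.till_short_desc
                  AND ah.txhd_azy_nr     = aq.txhd_azy_nr
JOIN azy_detail ad ON ad.orgu_xxx_cmpy   = ah.orgu_xxx_cmpy
                  AND ad.orgu_xxx        = ah.orgu_xxx
                  AND ad.till_short_desc = ah.till_short_desc
                  AND ad.txhd_azy_nr     = ah.txhd_azy_nr
WHERE fs.cssn_trading_date BETWEEN '20110226' AND '20110226'
  AND ah.txhd_voided = 0
  AND ad.txde_item_void = 0;
ROLLBACK;

SET LOCAL confines the change to the transaction, so EXPLAIN ANALYZE output can be compared for both trading dates and both cost settings before anything is made permanent in postgresql.conf.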
[ { "msg_contents": "Hi, appreciated mailing list. Thanks already for taking your time for my\nperformance question. Regards, Sander.\n\n\n===POSTGRESQL VERSION AND ORIGIN===\n\nPostgreSQL 8.3.9 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n(Ubuntu 4.2.4-1ubuntu3)\nInstalled using \"apt-get install postgresql-8.3\"\n\n\n===A DESCRIPTION OF WHAT YOU ARE TRYING TO ACHIEVE===\n\nQuery involving tables events_events and events_eventdetails. There is any\nnumber of events_eventdetails records for each events_events record.\n\nThere may be multiple records in events_events that have the same value for\ntheir transactionId, which is available in one of their events_eventdetails\nrecords.\n\nWe want a total query that returns events_events records that match\ncondition I. or II., sorted by datetime descending, first 50.\n\nCondition I.\nAll events_events records for which an events_eventdetails records that\nmatches the following conditions:\n- Column keyname (in events_eventdetails) equals \"customerId\", and\n- Column value (in events_eventdetails) equals 598124, or more precisely\nsubstring(customerDetails.value,0,32)='598124'\n\nCondition II.\nAll events_events records that have a same value for in one of their\nevents_eventdetails records with keyname 'transactionId' as any of the\nresulting events_events records of condition I.\n\nIn other words: I want all events for a certain customerId, and all events\nwith the same transactionId as those.\n\nThe total query's time should be of the magnitude 100ms, but currently is of\nthe magnitude 1min.\n\nJUST FOR THE PURPOSE OF EXPERIMENT I've now a denormalized copy of\ntransactionId as a column in the events_events records. Been trying queries\non those, with no improvements.\n\nI am not seeking WHY my query is too slow, rather trying to find a way to\nget it faster :-)\n\n\n===THE EXACT TEXT OF THE QUERY YOU RAN===\n\nThe total query:\n\nSELECT events1.id, events1.transactionId, events1.dateTime FROM\nevents_events events1\nJOIN events_eventdetails customerDetails\nON events1.id = customerDetails.event_id\nAND customerDetails.keyname='customer_id'\nAND substring(customerDetails.value,0,32)='598124'\nWHERE events1.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\nUNION\nSELECT events2.id, events2.transactionId, events2.dateTime FROM\nevents_events events2\nJOIN events_eventdetails details2_transKey\n\tON events2.id = details2_transKey.event_id\n\tAND details2_transKey.keyname='transactionId'\nJOIN events_eventdetails details2_transValue\n\tON substring(details2_transKey.value,0,32) =\nsubstring(details2_transValue.value,0,32)\n\tAND details2_transValue.keyname='transactionId'\nJOIN events_eventdetails customerDetails\n\tON details2_transValue.event_id = customerDetails.event_id\n\tAND customerDetails.keyname='customer_id'\n\tAND substring(customerDetails.value,0,32)='598124'\nWHERE events2.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\nORDER BY dateTime DESC LIMIT 50\n\n\n===THE EXACT OUTPUT OF THAT QUERY===\n\nThe exactly 
correct and desired output is as follows:\n\n id | transactionid | datetime\n----------+-------------------------------------+----------------------------\n 16336643 | | 2011-03-01 11:10:38.648+01\n 16336642 | | 2011-03-01 11:10:35.629+01\n 16336641 | | 2011-03-01 11:10:35.625+01\n 16336637 | | 2011-03-01 11:09:53.306+01\n 16336634 | | 2011-03-01 11:09:14.027+01\n 16336633 | 26eaeb24-7a93-4c9a-99f9-bd3b77f9636 | 2011-03-01 11:09:14.004+01\n 16336632 | 26eaeb24-7a93-4c9a-99f9-bd3b77f9636 | 2011-03-01 11:09:13.925+01\n 16336631 | | 2011-03-01 11:09:13.873+01\n 16336630 | | 2011-03-01 11:09:13.741+01\n 16336626 | | 2011-03-01 11:09:08.931+01\n 16336625 | | 2011-03-01 11:09:01.811+01\n 16336624 | 2037f235-89d2-402a-90eb-3bcf40d633c | 2011-03-01 11:09:01.771+01\n 16336623 | 2037f235-89d2-402a-90eb-3bcf40d633c | 2011-03-01 11:09:01.729+01\n 16336611 | | 2011-03-01 11:08:08.63+01\n 16336610 | | 2011-03-01 11:08:02.805+01\n 16336609 | | 2011-03-01 11:08:02.801+01\n 16336606 | | 2011-03-01 11:07:55.324+01\n 16336602 | | 2011-03-01 11:07:38.63+01\n 16336600 | | 2011-03-01 11:07:34.561+01\n 16336599 | | 2011-03-01 11:07:34.547+01\n 16336595 | | 2011-03-01 11:07:24.471+01\n 16336594 | e3235ae5-9f40-4ceb-94bf-61ed99b793a | 2011-03-01 11:07:24.445+01\n 16336593 | e3235ae5-9f40-4ceb-94bf-61ed99b793a | 2011-03-01 11:07:24.373+01\n 16336591 | | 2011-03-01 11:07:24.268+01\n 16336590 | | 2011-03-01 11:07:24.065+01\n 16336583 | | 2011-03-01 11:06:43.63+01\n 16336582 | | 2011-03-01 11:06:36.977+01\n 16336581 | | 2011-03-01 11:06:36.973+01\n 16336575 | | 2011-03-01 11:06:18.637+01\n 16336573 | | 2011-03-01 11:06:16.728+01\n 16336572 | | 2011-03-01 11:06:16.723+01\n 16336569 | | 2011-03-01 11:06:06.662+01\n 16336568 | 3519e6de-bc8f-4641-a686-c088e459b47 | 2011-03-01 11:06:06.639+01\n 16336567 | 3519e6de-bc8f-4641-a686-c088e459b47 | 2011-03-01 11:06:06.569+01\n 16336566 | | 2011-03-01 11:06:06.526+01\n 16336565 | | 2011-03-01 11:06:06.359+01\n 16336561 | | 2011-03-01 11:05:58.868+01\n 16336560 | | 2011-03-01 11:05:50.80+01\n 16336559 | fbd5c7b2-6035-45cf-9222-a3d9f77d9ae | 2011-03-01 11:05:50.767+01\n 16336558 | fbd5c7b2-6035-45cf-9222-a3d9f77d9ae | 2011-03-01 11:05:50.724+01\n 16336550 | | 2011-03-01 11:05:33.631+01\n 16336549 | | 2011-03-01 11:05:28.313+01\n 16336548 | | 2011-03-01 11:05:28.309+01\n 16336545 | | 2011-03-01 11:05:20.86+01\n 16336541 | | 2011-03-01 11:03:23.626+01\n 16336539 | | 2011-03-01 11:03:18.623+01\n 16336538 | | 2011-03-01 11:03:18.613+01\n 16336535 | | 2011-03-01 11:03:08.553+01\n 16336534 | db9dcfb9-870d-42f6-947f-9a22fb49592 | 2011-03-01 11:03:08.531+01\n 16336533 | db9dcfb9-870d-42f6-947f-9a22fb49592 | 2011-03-01 11:03:08.457+01\n(50 rows)\nTime: 51442.732 ms\n\nOnly problem is the time it took!\n\n\n===WHAT PROGRAM YOU'RE USING TO CONNECT===\n\nFor this debugging purposes I'm using a mix of psql (on the target system)\nand pgAdmin (from my Windows machine). The final application involves a Java\nweb application running in Tomcat 5.5.\n\n\n===CHANGES MADE TO THE SETTINGS IN THE POSTGRESQL.CONF FILE===\n\n\t# This specific configuration is made on basis of postgresql.conf of\nPostgreSQL 8.3\n\t#\n\t# Memory usage: typical: maximum:\n\t# - Application server 1,2GB 1,5GB\n\t# - Shared buffers 1,0GB 1,0GB\n\t# - Work memory ? 2,2GB\n\t# ((>>connections) * work_mem)\n\t# (only counting relevant application server connections*)\n\t# - Query cache ? 
2,0GB\n\t# -------------------------------------------\n\t# Total 6,7GB\n\t#\n\t\n\t# tailor maximum connections to what our application server requires\napplication\n\t# server to use 30 + 8 at most few left over for i.e. psql thingies\n\tmax_connections = 40\n\t\n\t# 1/2 up to 3/4 of total memory, used by all connections and such\n\tshared_buffers = 1024MB\n\t\n\ttemp_buffers = 64MB\n\t\n\tmax_prepared_transactions = 25\n\t\n\t# sorting large sets of queries (such as events, event details)\n\t# might require some memory to be able to be done in RAM\n\t# the total work memory is max_connections * work_mem\n\t# dynamically addressed\n\twork_mem = 75MB\n\t\n\t# been told that no one should want to use this anymore\n\t# there are no known disadvantages of not writing full pages\n\tfull_page_writes = off\n\t\n\twal_buffers = 8MB\n\t\n\t# larger numbers of segments to write logs into makes each of them easier,\n\t# thus checkpointing becoming less of a resource hog\n\tcheckpoint_segments = 32\n\t\n\t# longer time to do the same amount of work,\n\t# thus checkpointing becoming less of a resource hog\n\tcheckpoint_timeout = 10min\n\t\n\t# longer time to do the same amount of work,\n\t# thus checkpointing becoming less of a resource hog\n\tcheckpoint_completion_target = 0.8\n\t\n\t# 1/2 up to 3/4 of total memory\n\t# used to keep query results for reuse for a certain amount of time\n\t# dynamically addressed\n\teffective_cache_size = 2048MB\n\t\n\t# the higher the value, the better the query planner optimizes for\nperformance\n\tdefault_statistics_target = 100\n\t\n\t# constraint_exclusion not retained over dump/restore otherwise\n\tconstraint_exclusion = on\n\t\n\tlog_destination = 'syslog'\n\t\n\t# log queries if they take longer than this value (in ms)\n\tlog_min_duration_statement = 300\n\t\n\t# auto-vacuum less often, thus less of a performance hog\n\tautovacuum_vacuum_threshold = 500\n\t\n\t# analyze less often, thus less of a performance hog\n\tautovacuum_analyze_threshold = 500\n\n\n===OPERATING SYSTEM AND VERSION===\n\nUbuntu 8.04.4 LTS \\n \\l\nLinux 2.6.24-23-server\nSoftware RAID I (MDADM)\n\n\n===FOR PERFORMANCE QUESTIONS: WHAT KIND OF HARDWARE===\n\nINTEL CR2 DUO DT 1.8G 800F 2M 775P TY(G)\nSEAGATE 80G 3.5\" SATA 7KRPM 8M(G)\n8G DDR2-800 RAM\n\n\n===FULL TABLE AND INDEX SCHEMA===\n\n\tCREATE TABLE events_events\n\t(\n\t id bigserial NOT NULL,\n\t carparkid bigint,\n\t cleared boolean NOT NULL,\n\t datetime timestamp with time zone,\n\t identity character varying(255),\n\t generatedbystationid bigint,\n\t eventtype_id bigint NOT NULL,\n\t relatedstationid bigint,\n\t processingstatus character varying(255) NOT NULL,\n\t transactionid character varying(36),\n\t CONSTRAINT events_events_pkey PRIMARY KEY (id),\n\t CONSTRAINT fk88fe3effa0559276 FOREIGN KEY (eventtype_id)\n\t REFERENCES events_event_types (id) MATCH SIMPLE\n\t ON UPDATE NO ACTION ON DELETE NO ACTION\n\t)\n\tWITH (OIDS=FALSE);\n\tALTER TABLE events_events OWNER TO postgres;\n\t\n\tCREATE INDEX events_events_cleared_eventtype_id_datetime_ind ON\nevents_events USING btree (cleared, eventtype_id, datetime);\n\tCREATE INDEX events_events_cleared_ind ON events_events USING btree\n(cleared);\n\tCREATE INDEX events_events_datetime_cleared_ind ON events_events USING\nbtree (datetime, cleared) WHERE NOT cleared;\n\tCREATE INDEX events_events_datetime_eventtype_id_ind ON events_events USING\nbtree (datetime, eventtype_id);\n\tCREATE INDEX events_events_datetime_ind ON events_events USING btree\n(datetime);\n\tCREATE INDEX 
events_events_eventtype_id_datetime_ind ON events_events USING\nbtree (eventtype_id, datetime);\n\tCREATE INDEX events_events_eventtype_id_ind ON events_events USING btree\n(eventtype_id);\n\tCREATE INDEX events_events_identity_ind ON events_events USING btree\n(identity);\n\tCREATE INDEX events_events_not_cleared_ind ON events_events USING btree\n(cleared) WHERE NOT cleared;\n\tCREATE INDEX events_events_processingstatus_new ON events_events USING\nbtree (processingstatus) WHERE processingstatus::text = 'NEW'::text;\n\tCREATE INDEX events_events_relatedstation_eventtype_datetime_desc_ind ON\nevents_events USING btree (relatedstationid, eventtype_id, datetime);\n\t\n\t\n\tCREATE TABLE events_eventdetails\n\t(\n\t id bigserial NOT NULL,\n\t keyname character varying(255) NOT NULL,\n\t \"value\" text NOT NULL,\n\t event_id bigint NOT NULL,\n\t listindex integer,\n\t CONSTRAINT events_eventdetails_pkey PRIMARY KEY (id),\n\t CONSTRAINT events_eventdetails_event_id_fk FOREIGN KEY (event_id)\n\t REFERENCES events_events (id) MATCH SIMPLE\n\t ON UPDATE NO ACTION ON DELETE CASCADE,\n\t CONSTRAINT events_eventdetails_event_id_key UNIQUE (event_id, keyname,\nlistindex)\n\t)\n\tWITH (OIDS=FALSE);\n\tALTER TABLE events_eventdetails OWNER TO postgres;\n\t\n\tCREATE INDEX events_eventdetails_event_id_ind ON events_eventdetails USING\nbtree (event_id);\n\tCREATE INDEX events_eventdetails_keyname_ind ON events_eventdetails USING\nbtree (keyname);\n\tCREATE INDEX events_eventdetails_substring_ind ON events_eventdetails USING\nbtree (keyname, \"substring\"(value, 0, 32));\n\tCREATE INDEX events_eventdetails_value_ind ON events_eventdetails USING\nbtree (\"substring\"(value, 0, 32));\n\t\n\tMany partitions, approx. 50. E.g.:\n\tCREATE OR REPLACE RULE events_eventdetails_insert_customer_id AS ON INSERT\nTO events_eventdetails WHERE new.keyname::text = 'customer_id'::text DO\nINSTEAD INSERT INTO events_eventdetails_customer_id (id, keyname, value,\nevent_id, listindex) VALUES (new.id, new.keyname, new.value, new.event_id,\nnew.listindex);\n\t\n\t\n\tCREATE TABLE events_eventdetails_customer_id\n\t(\n\t-- Inherited: id bigint NOT NULL DEFAULT\nnextval('events_eventdetails_id_seq'::regclass),\n\t-- Inherited: keyname character varying(255) NOT NULL,\n\t-- Inherited: \"value\" text NOT NULL,\n\t-- Inherited: event_id bigint NOT NULL,\n\t-- Inherited: listindex integer,\n\t CONSTRAINT events_eventdetails_customer_id_pkey PRIMARY KEY (id),\n\t CONSTRAINT events_eventdetails_customer_id_event_id_fk FOREIGN KEY\n(event_id)\n\t REFERENCES events_events (id) MATCH SIMPLE\n\t ON UPDATE NO ACTION ON DELETE CASCADE,\n\t CONSTRAINT events_eventdetails_customer_id_keyname_check CHECK\n(keyname::text = 'customer_id'::text)\n\t)\n\tINHERITS (events_eventdetails)\n\tWITH (OIDS=FALSE);\n\tALTER TABLE events_eventdetails_customer_id OWNER TO postgres;\n\t\n\tCREATE INDEX events_eventdetails_customer_id_event_id_ind ON\nevents_eventdetails_customer_id USING btree (event_id);\n\tCREATE INDEX events_eventdetails_customer_id_substring_ind ON\nevents_eventdetails_customer_id USING btree (keyname, \"substring\"(value, 0,\n32));\n\n\n===TABLE METADATA===\n\n\tselect count(*) from events_events; --> 3910163\n\tselect count(*) from events_eventdetails; --> 30216033\n\tselect count(*) from events_eventdetails_customer_id; (single partition)\n--> 2976101\n\n\n===EXPLAIN ANALYZE===\n\nLimit (cost=36962467348.39..36962467348.51 rows=50 width=52) (actual\ntime=58765.029..58765.078 rows=50 loops=1)\n -> Sort 
(cost=36962467348.39..37251140933.75 rows=115469434145 width=52)\n(actual time=58765.023..58765.042 rows=50 loops=1)\n Sort Key: events1.datetime\n Sort Method: top-N heapsort Memory: 19kB\n -> Unique (cost=31971961433.07..33126655774.52 rows=115469434145\nwidth=52) (actual time=58764.565..58764.844 rows=145 loops=1)\n -> Sort (cost=31971961433.07..32260635018.43\nrows=115469434145 width=52) (actual time=58764.564..58764.652 rows=222\nloops=1)\n Sort Key: events1.id, events1.transactionid,\nevents1.datetime\n Sort Method: quicksort Memory: 29kB\n -> Append (cost=0.00..3256444445.93 rows=115469434145\nwidth=52) (actual time=0.304..58763.738 rows=222 loops=1)\n -> Nested Loop (cost=0.00..148161.10 rows=10345\nwidth=52) (actual time=0.303..2.781 rows=145 loops=1)\n -> Append (cost=0.00..21312.39 rows=15312\nwidth=8) (actual time=0.236..0.738 rows=187 loops=1)\n -> Index Scan using\nevents_eventdetails_substring_ind on events_eventdetails customerdetails \n(cost=0.00..457.37 rows=113 width=8) (actual time=0.077..0.077 rows=0\nloops=1)\n Index Cond: (((keyname)::text =\n'customer_id'::text) AND (\"substring\"(value, 0, 32) = '598124'::text))\n -> Index Scan using\nevents_eventdetails_customer_id_substring_ind on\nevents_eventdetails_customer_id customerdetails (cost=0.00..20855.02\nrows=15199 width=8) (actual time=0.158..0.530 rows=187 loops=1)\n Index Cond: (((keyname)::text =\n'customer_id'::text) AND (\"substring\"(value, 0, 32) = '598124'::text))\n -> Index Scan using events_events_pkey on\nevents_events events1 (cost=0.00..8.27 rows=1 width=52) (actual\ntime=0.009..0.009 rows=1 loops=187)\n Index Cond: (events1.id =\ncustomerdetails.event_id)\n Filter: (events1.eventtype_id = ANY\n('{100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108}'::bigint[]))\n -> Merge Join (cost=369560509.82..2101601943.38\nrows=115469423800 width=52) (actual time=58760.353..58760.806 rows=77\nloops=1)\n Merge Cond: (customerdetails.event_id =\ndetails2_transvalue.event_id)\n -> Sort (cost=24111.00..24149.28\nrows=15312 width=8) (actual time=0.644..0.710 rows=187 loops=1)\n Sort Key: customerdetails.event_id\n Sort Method: quicksort Memory: 24kB\n -> Append (cost=0.00..23046.64\nrows=15312 width=8) (actual time=0.130..0.461 rows=187 loops=1)\n -> Index Scan using\nevents_eventdetails_substring_ind on events_eventdetails customerdetails \n(cost=0.00..457.37 rows=113 width=8) (actual time=0.021..0.021 rows=0\nloops=1)\n Index Cond:\n(((keyname)::text = 'customer_id'::text) AND (\"substring\"(value, 0, 32) =\n'598124'::text))\n -> Index Scan using\nevents_eventdetails_customer_id_substring_ind on\nevents_eventdetails_customer_id customerdetails (cost=0.00..22589.27\nrows=15199 width=8) (actual time=0.107..0.319 rows=187 loops=1)\n Index Cond:\n(((keyname)::text = 'customer_id'::text) AND (\"substring\"(value, 0, 32) =\n'598124'::text))\n -> Materialize \n(cost=369536398.82..388389165.24 rows=1508221314 width=60) (actual\ntime=56515.482..58227.360 rows=986788 loops=1)\n -> Sort \n(cost=369536398.82..373306952.10 rows=1508221314 width=60) (actual\ntime=56515.478..57357.833 rows=986788 loops=1)\n Sort Key:\ndetails2_transvalue.event_id\n Sort Method: external merge \nDisk: 69416kB\n -> Merge Join \n(cost=1181483.03..31350423.76 rows=1508221314 width=60) 
(actual\ntime=42137.760..51804.819 rows=986808 loops=1)\n Merge Cond:\n((\"substring\"(details2_transkey.value, 0, 32)) =\n(\"substring\"(details2_transvalue.value, 0, 32)))\n -> Sort \n(cost=908652.98..909781.59 rows=451445 width=127) (actual\ntime=25898.797..27330.921 rows=658637 loops=1)\n Sort Key:\n(\"substring\"(details2_transkey.value, 0, 32))\n Sort Method: \nexternal merge Disk: 85584kB\n -> Hash Join \n(cost=621670.67..866252.84 rows=451445 width=127) (actual\ntime=8238.959..15256.168 rows=658637 loops=1)\n Hash Cond:\n(details2_transkey.event_id = events2.id)\n -> Append \n(cost=16092.38..208184.56 rows=668175 width=83) (actual\ntime=383.062..3180.853 rows=658638 loops=1)\n -> \nBitmap Heap Scan on events_eventdetails details2_transkey \n(cost=16092.38..208184.56 rows=668175 width=83) (actual\ntime=383.060..2755.386 rows=658638 loops=1)\n \nRecheck Cond: ((keyname)::text = 'transactionId'::text)\n -> \nBitmap Index Scan on events_eventdetails_keyname_ind (cost=0.00..15925.33\nrows=668175 width=0) (actual time=274.388..274.388 rows=658909 loops=1)\n \nIndex Cond: ((keyname)::text = 'transactionId'::text)\n -> Hash \n(cost=548042.97..548042.97 rows=2641946 width=52) (actual\ntime=7855.242..7855.242 rows=3711961 loops=1)\n -> \nBitmap Heap Scan on events_events events2 (cost=75211.99..548042.97\nrows=2641946 width=52) (actual time=1024.685..4581.588 rows=3711961 loops=1)\n \nRecheck Cond: (eventtype_id = ANY\n('{100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108}'::bigint[]))\n -> \nBitmap Index Scan on events_events_eventtype_id_ind (cost=0.00..74551.50\nrows=2641946 width=0) (actual time=983.354..983.354 rows=3712003 loops=1)\n \nIndex Cond: (eventtype_id = ANY\n('{100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108}'::bigint[]))\n -> Sort \n(cost=272830.05..274500.49 rows=668175 width=83) (actual\ntime=16238.940..16958.522 rows=986808 loops=1)\n Sort Key:\n(\"substring\"(details2_transvalue.value, 0, 32))\n Sort Method: \nexternal sort Disk: 61784kB\n -> Result \n(cost=16092.38..208184.56 rows=668175 width=83) (actual\ntime=391.124..4336.367 rows=658638 loops=1)\n -> Append \n(cost=16092.38..208184.56 rows=668175 width=83) (actual\ntime=391.104..3130.494 rows=658638 loops=1)\n -> \nBitmap Heap Scan on events_eventdetails details2_transvalue \n(cost=16092.38..208184.56 rows=668175 width=83) (actual\ntime=391.103..2713.520 rows=658638 loops=1)\n \nRecheck Cond: ((keyname)::text = 'transactionId'::text)\n -> \nBitmap Index Scan on events_eventdetails_keyname_ind (cost=0.00..15925.33\nrows=668175 width=0) (actual time=283.327..283.327 rows=658909 loops=1)\n \nIndex Cond: ((keyname)::text = 'transactionId'::text)\nTotal runtime: 58869.397 ms\n\n-- \nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3405914.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 1 Mar 2011 16:14:15 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": 
"Performance trouble finding records through related records" }, { "msg_contents": "On 03/01/2011 06:14 PM, sverhagen wrote:\n> Hi, appreciated mailing list. Thanks already for taking your time for my\n> performance question. Regards, Sander.\n>\n>\n> ===POSTGRESQL VERSION AND ORIGIN===\n>\n> PostgreSQL 8.3.9 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.2.4\n> (Ubuntu 4.2.4-1ubuntu3)\n> Installed using \"apt-get install postgresql-8.3\"\n>\n>\n> ===A DESCRIPTION OF WHAT YOU ARE TRYING TO ACHIEVE===\n>\n> Query involving tables events_events and events_eventdetails. There is any\n> number of events_eventdetails records for each events_events record.\n>\n> There may be multiple records in events_events that have the same value for\n> their transactionId, which is available in one of their events_eventdetails\n> records.\n>\n> We want a total query that returns events_events records that match\n> condition I. or II., sorted by datetime descending, first 50.\n>\n> Condition I.\n> All events_events records for which an events_eventdetails records that\n> matches the following conditions:\n> - Column keyname (in events_eventdetails) equals \"customerId\", and\n> - Column value (in events_eventdetails) equals 598124, or more precisely\n> substring(customerDetails.value,0,32)='598124'\n>\n> Condition II.\n> All events_events records that have a same value for in one of their\n> events_eventdetails records with keyname 'transactionId' as any of the\n> resulting events_events records of condition I.\n>\n> In other words: I want all events for a certain customerId, and all events\n> with the same transactionId as those.\n>\n> The total query's time should be of the magnitude 100ms, but currently is of\n> the magnitude 1min.\n>\n> JUST FOR THE PURPOSE OF EXPERIMENT I've now a denormalized copy of\n> transactionId as a column in the events_events records. Been trying queries\n> on those, with no improvements.\n>\n> I am not seeking WHY my query is too slow, rather trying to find a way to\n> get it faster :-)\n>\n\n<much snippage>\n\nFirst off, excellent detail.\n\nSecond, your explain analyze was hard to read... but since you are not really interested in your posted query, I wont worry about looking at it... but... 
have you seen:\n\nhttp://explain.depesz.com/\n\nIts nice.\n\nAnd last, to my questions:\n\nSELECT events1.id, events1.transactionId, events1.dateTime FROM\nevents_events events1\nJOIN events_eventdetails customerDetails\nON events1.id = customerDetails.event_id\nAND customerDetails.keyname='customer_id'\nAND substring(customerDetails.value,0,32)='598124'\nWHERE events1.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\nUNION\nSELECT events2.id, events2.transactionId, events2.dateTime FROM\nevents_events events2\nJOIN events_eventdetails details2_transKey\n\tON events2.id = details2_transKey.event_id\n\tAND details2_transKey.keyname='transactionId'\nJOIN events_eventdetails details2_transValue\n\tON substring(details2_transKey.value,0,32) =\nsubstring(details2_transValue.value,0,32)\n\tAND details2_transValue.keyname='transactionId'\nJOIN events_eventdetails customerDetails\n\tON details2_transValue.event_id = customerDetails.event_id\n\tAND customerDetails.keyname='customer_id'\n\tAND substring(customerDetails.value,0,32)='598124'\nWHERE events2.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\nORDER BY dateTime DESC LIMIT 50\n\n\nIf you run the individual queries, without the union, are the part's slow too?\n\nLooked like your row counts (the estimate vs the actual) were way off, have you analyzed lately?\n\nI could not tell from the explain analyze if an index was used, but I notice you have a ton of indexes on events_events table. You have two indexes on the same fields but in reverse order:\n\nevents_events_eventtype_id_datetime_ind (datetime, eventtype_id);\nevents_events_datetime_eventtype_id_ind (eventtype_id, datetime);\n\nAND both eventtype_id and datetime are in other indexes! I think you need to review your indexes. Drop all of them and add one or two that are actually useful.\n\n\nA useful tool I have found for complex queries is to break them down into smaller sub sets, write sql that get's me just those sets, and them add them all back into one main query with subselects:\n\nselect a,b,c,...\nfrom events_events\nwhere\n id in ( select id from details where some subset is needed )\nand id not in ( select id frome details where some set is bad )\nand id in ( select anotherid from anothertable where ... )\n\n\nIts the subselects you need to think about. Find one that gets you a small set that's interesting somehow. Once you get all your little sets, its easy to combine them.\n\n-Andy\n", "msg_date": "Tue, 01 Mar 2011 21:41:23 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance trouble finding records through related\n records" }, { "msg_contents": "Thanks for your help already!\nHope you're up for some more :-)\n\n\nAndy Colson wrote:\n> \n> First off, excellent detail.\n> \n> Second, your explain analyze was hard to read... but since you are not\n> really interested in your posted query, I wont worry about looking at\n> it... but... have you seen:\n> \n> http://explain.depesz.com/\n> \n\nThanks for that. 
Using it below :-)\n\n\nAndy Colson wrote:\n> \n> If you run the individual queries, without the union, are the part's slow\n> too?\n> \n\nOnly problem is the second part. So that part can safely be isolated. Also\nthe following does not play a role at this point: WHERE events2.eventtype_id\nIN\n(100,103,105,...\n\nThen I went ahead and denormalized the transactionId on both ends, so that\nboth events_events records and events_eventdetails records have the\ntransactionId (or NULL). That simplifies the query to this:\n\n\tSELECT events_events.* FROM events_events WHERE transactionid IN (\n\t\tSELECT transactionid FROM events_eventdetails customerDetails\n\t\tWHERE customerDetails.keyname='customer_id'\n\t\tAND substring(customerDetails.value,0,32)='1957'\n\t\tAND transactionid IS NOT NULL\n\t) ORDER BY id LIMIT 50;\n\nTo no avail. Also changing the above WHERE IN into implicit/explicit JOIN's\ndoesn't make more than a marginal difference. Should joining not be very\nefficient somehow?\n\nhttp://explain.depesz.com/s/Pnb\n\nThe above link nicely shows the hotspots, but I am at a loss now as how to\napproach them.\n\n\nAndy Colson wrote:\n> \n> Looked like your row counts (the estimate vs the actual) were way off,\n> have you analyzed lately?\n> \n\nNote sure what that means.\nIsn't all the maintenance nicely automated through my config?\n\n\nAndy Colson wrote:\n> \n> I could not tell from the explain analyze if an index was used, but I\n> notice you have a ton of indexes on events_events table.\n> \n\nYes, a ton of indexes, but still not the right one :-)\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3407330.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Wed, 2 Mar 2011 16:12:36 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "On 03/02/2011 06:12 PM, sverhagen wrote:\n> Thanks for your help already!\n> Hope you're up for some more :-)\n>\n>\n> Andy Colson wrote:\n>>\n>> First off, excellent detail.\n>>\n>> Second, your explain analyze was hard to read... but since you are not\n>> really interested in your posted query, I wont worry about looking at\n>> it... but... have you seen:\n>>\n>> http://explain.depesz.com/\n>>\n>\n> Thanks for that. Using it below :-)\n>\n>\n> Andy Colson wrote:\n>>\n>> If you run the individual queries, without the union, are the part's slow\n>> too?\n>>\n>\n> Only problem is the second part. So that part can safely be isolated. Also\n> the following does not play a role at this point: WHERE events2.eventtype_id\n> IN\n> (100,103,105,...\n>\n> Then I went ahead and denormalized the transactionId on both ends, so that\n> both events_events records and events_eventdetails records have the\n> transactionId (or NULL). That simplifies the query to this:\n>\n> \tSELECT events_events.* FROM events_events WHERE transactionid IN (\n> \t\tSELECT transactionid FROM events_eventdetails customerDetails\n> \t\tWHERE customerDetails.keyname='customer_id'\n> \t\tAND substring(customerDetails.value,0,32)='1957'\n> \t\tAND transactionid IS NOT NULL\n> \t) ORDER BY id LIMIT 50;\n>\n> To no avail. Also changing the above WHERE IN into implicit/explicit JOIN's\n> doesn't make more than a marginal difference. 
Should joining not be very\n> efficient somehow?\n>\n> http://explain.depesz.com/s/Pnb\n>\n> The above link nicely shows the hotspots, but I am at a loss now as how to\n> approach them.\n>\n>\n> Andy Colson wrote:\n>>\n>> Looked like your row counts (the estimate vs the actual) were way off,\n>> have you analyzed lately?\n>>\n>\n> Note sure what that means.\n> Isn't all the maintenance nicely automated through my config?\n>\n>\n\nIn the explain analyze you'll see stuff like:\nAppend (cost=0.00..3256444445.93 rows=115469434145 width=52) (actual time=0.304..58763.738 rows=222 loops=1)\n\nThis is taken from your first email. Red flags should go off when the row counts are not close. The first set is the planner's guess. The second set is what actually happened. The planner thought there would be 115,469,434,145 rows.. but turned out to only be 222. That's usually caused by bad stats.\n\n> Isn't all the maintenance nicely automated through my config?\n>\n\nI'd never assume. But the numbers in the plan you posted:\n\n> http://explain.depesz.com/s/Pnb\n\nlook fine to me (well, the row counts), and I didnt look to much at that plan in the first email, so we can probably ignore it.\n\n\n> Andy Colson wrote:\n>>\n>> I could not tell from the explain analyze if an index was used, but I\n>> notice you have a ton of indexes on events_events table.\n>>\n>\n> Yes, a ton of indexes, but still not the right one :-)\n\nBut... many indexes will slow down update/inserts. And an index on an unselective field can cause more problems than it would help. Especially if the stats are off. If PG has lots and lots of options, it'll take longer to plan querys too. If it picks an index to use, that it thinks is selective, but in reality is not, you are in for a world of hurt.\n\nFor your query, I think a join would be the best bet, can we see its explain analyze?\n\n-Andy\n", "msg_date": "Wed, 02 Mar 2011 21:04:42 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance trouble finding records through related\n records" }, { "msg_contents": "\nAndy Colson wrote:\n> \n> For your query, I think a join would be the best bet, can we see its\n> explain analyze?\n> \n\n\nHere is a few variations:\n\n\nSELECT events_events.* FROM events_events WHERE transactionid IN (\n\tSELECT transactionid FROM events_eventdetails customerDetails\n\tWHERE customerDetails.keyname='customer_id'\n\tAND substring(customerDetails.value,0,32)='1957'\n\tAND transactionid IS NOT NULL\n) ORDER BY id LIMIT 50; \n\n-- http://explain.depesz.com/s/Pnb\n\n\nexplain analyze SELECT events_events.* FROM events_events,\nevents_eventdetails customerDetails\n\tWHERE events_events.transactionid = customerDetails.transactionid\n\tAND customerDetails.keyname='customer_id'\n\tAND substring(customerDetails.value,0,32)='1957'\n\tAND customerDetails.transactionid IS NOT NULL\nORDER BY id LIMIT 50; \n\n-- http://explain.depesz.com/s/rDh\n\n\nexplain analyze SELECT events_events.* FROM events_events\nJOIN events_eventdetails customerDetails\n\tON events_events.transactionid = customerDetails.transactionid\n\tAND customerDetails.keyname='customer_id'\n\tAND substring(customerDetails.value,0,32)='1957'\n\tAND customerDetails.transactionid IS NOT NULL\nORDER BY id LIMIT 50; \n\n-- http://explain.depesz.com/s/6aB\n\n\nThanks for your efforts!\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3407689.html\nSent from the 
PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Thu, 3 Mar 2011 01:19:20 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "On 3/3/2011 3:19 AM, sverhagen wrote:\n>\n> Andy Colson wrote:\n>>\n>> For your query, I think a join would be the best bet, can we see its\n>> explain analyze?\n>>\n>\n>\n> Here is a few variations:\n>\n>\n> SELECT events_events.* FROM events_events WHERE transactionid IN (\n> \tSELECT transactionid FROM events_eventdetails customerDetails\n> \tWHERE customerDetails.keyname='customer_id'\n> \tAND substring(customerDetails.value,0,32)='1957'\n> \tAND transactionid IS NOT NULL\n> ) ORDER BY id LIMIT 50;\n>\n> -- http://explain.depesz.com/s/Pnb\n>\n>\n> explain analyze SELECT events_events.* FROM events_events,\n> events_eventdetails customerDetails\n> \tWHERE events_events.transactionid = customerDetails.transactionid\n> \tAND customerDetails.keyname='customer_id'\n> \tAND substring(customerDetails.value,0,32)='1957'\n> \tAND customerDetails.transactionid IS NOT NULL\n> ORDER BY id LIMIT 50;\n>\n> -- http://explain.depesz.com/s/rDh\n>\n>\n> explain analyze SELECT events_events.* FROM events_events\n> JOIN events_eventdetails customerDetails\n> \tON events_events.transactionid = customerDetails.transactionid\n> \tAND customerDetails.keyname='customer_id'\n> \tAND substring(customerDetails.value,0,32)='1957'\n> \tAND customerDetails.transactionid IS NOT NULL\n> ORDER BY id LIMIT 50;\n>\n> -- http://explain.depesz.com/s/6aB\n>\n>\n> Thanks for your efforts!\n>\n\nHuh. Pretty much exactly the same. I'm sorry but I think I'm at my \nlimit. I'm not sure why the nested loop takes so long, or how to get it \nto use something different.\n\n-Andy\n", "msg_date": "Thu, 03 Mar 2011 08:55:45 -0600", "msg_from": "Andy Colson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance trouble finding records through related\n records" }, { "msg_contents": "On Thu, Mar 3, 2011 at 9:55 AM, Andy Colson <[email protected]> wrote:\n>> explain analyze SELECT events_events.* FROM events_events\n>> JOIN events_eventdetails customerDetails\n>>        ON events_events.transactionid = customerDetails.transactionid\n>>        AND customerDetails.keyname='customer_id'\n>>        AND substring(customerDetails.value,0,32)='1957'\n>>        AND customerDetails.transactionid IS NOT NULL\n>> ORDER BY id LIMIT 50;\n>>\n>> -- http://explain.depesz.com/s/6aB\n>>\n>>\n>> Thanks for your efforts!\n>>\n>\n> Huh.  Pretty much exactly the same.  I'm sorry but I think I'm at my limit.\n>  I'm not sure why the nested loop takes so long, or how to get it to use\n> something different.\n\nThe join condition is showing up in the explain output as:\n\nJoin Filter: ((events_events.transactionid)::text =\n(customerdetails.transactionid)::text)\n\nNow why is there a cast to text there on both sides? Do those two\ncolumns have exactly the same datatype? 
If not, you probably want to\nfix that, as it can make a big difference.\n\nAlso, how many rows are there in events_events and how many in\nevents_eventdetails?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Mon, 7 Mar 2011 14:25:50 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "On Wed, Mar 2, 2011 at 6:12 PM, sverhagen <[email protected]> wrote:\n> Thanks for your help already!\n> Hope you're up for some more :-)\n>\n>\n> Andy Colson wrote:\n>>\n>> First off, excellent detail.\n>>\n>> Second, your explain analyze was hard to read... but since you are not\n>> really interested in your posted query, I wont worry about looking at\n>> it... but... have you seen:\n>>\n>> http://explain.depesz.com/\n>>\n>\n> Thanks for that. Using it below :-)\n>\n>\n> Andy Colson wrote:\n>>\n>> If you run the individual queries, without the union, are the part's slow\n>> too?\n>>\n>\n> Only problem is the second part. So that part can safely be isolated. Also\n> the following does not play a role at this point: WHERE events2.eventtype_id\n> IN\n> (100,103,105,...\n>\n> Then I went ahead and denormalized the transactionId on both ends, so that\n> both events_events records and events_eventdetails records have the\n> transactionId (or NULL). That simplifies the query to this:\n>\n>        SELECT events_events.* FROM events_events WHERE transactionid IN (\n>                SELECT transactionid FROM events_eventdetails customerDetails\n>                WHERE customerDetails.keyname='customer_id'\n>                AND substring(customerDetails.value,0,32)='1957'\n>                AND transactionid IS NOT NULL\n>        ) ORDER BY id LIMIT 50;\n\n8.3? try converting the above to WHERE EXISTS or (even better) a JOIN...\n\nmerlin\n", "msg_date": "Mon, 7 Mar 2011 14:27:24 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "Hi. Thanks for your response.\n\n\nRobert Haas wrote:\n> \n> Join Filter: ((events_events.transactionid)::text =\n> (customerdetails.transactionid)::text)\n> \n> Now why is there a cast to text there on both sides? Do those two\n> columns have exactly the same datatype? If not, you probably want to\n> fix that, as it can make a big difference.\n> \n\nGood question. 
I seem not able to get rid of that, even though these are\nsame type:\n\n\t=# \\d events_events\n\tTable \"public.events_events\"\n\t Column | Type | Modifiers\n\t----------------------+--------------------------+----------\n\t[snip]\n\t transactionid | character varying(36) | not null\n\t[snip]\n\t\n\t=# \\d events_eventdetails\n\tTable \"public.events_eventdetails\"\n\t Column | Type | Modifiers\n\t---------------+------------------------+----------\n\t[snip]\n\t transactionid | character varying(36) | not null\n\t[snip]\n\n(These columns allowing null or not is just something I've been playing with\nto no avail too.)\n\n\n\nRobert Haas wrote:\n> \n> Also, how many rows are there in events_events and how many in\n> events_eventdetails?\n> \n\nselect count(*) from events_events; --> 3910163\nselect count(*) from events_eventdetails; --> 30216033\nselect count(*) from events_eventdetails_customer_id; (single partition) -->\n2976101\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3413801.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 8 Mar 2011 03:17:42 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "\nMerlin Moncure-2 wrote:\n> \n> \n> 8.3? try converting the above to WHERE EXISTS or (even better) a JOIN...\n> \n> \n\n\nThanks for that. But in my Mar 03, 2011; 10:19am post I already broke it\ndown to the barebones with some variations, among which JOIN. The EXISTS IN\nvariation was so poor that I left that one out.\n\nBest regards, Sander.\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3413814.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Tue, 8 Mar 2011 03:31:43 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance trouble finding records through related records" }, { "msg_contents": "Hi, all. I've done some further analysis, found a form that works if I split\nthings over two separate queries (B1 and B2, below) but still trouble when\ncombining (B, below).\n\nThis is the full pseudo-query: SELECT FROM A UNION SELECT FROM B ORDER BY\ndateTime DESC LIMIT 50\nIn that pseudo-query:\n - A is fast (few ms). A is all events for the given customer\n - B is slow (1 minute). B is all events for the same transactions as\nall events for the given customer\n\nZooming in on B it looks originally as follows:\n\nSELECT events2.id, events2.transactionId, events2.dateTime FROM\nevents_events events2\nJOIN events_eventdetails details2_transKey\n ON events2.id = details2_transKey.event_id\n AND details2_transKey.keyname='transactionId'\nJOIN events_eventdetails details2_transValue\n ON substring(details2_transKey.value,0,32) =\nsubstring(details2_transValue.value,0,32)\n AND details2_transValue.keyname='transactionId'\nJOIN events_eventdetails customerDetails\n ON details2_transValue.event_id = customerDetails.event_id\n AND customerDetails.keyname='customer_id'\n AND substring(customerDetails.value,0,32)='598124'\nWHERE events2.eventtype_id IN (100,103,105,... et cetera ...) 
\n\n\nThe above version of B is tremendously slow.\n\nThe only fast version I've yet come to find is as follows:\n - Do a sub-query B1\n - Do a sub-query B2 with the results of B1\n\nB1 looks as follows:\nWorks very fast (few ms)\nhttp://explain.depesz.com/s/7JS\n\nSELECT substring(details2_transValue.value,0,32)\nFROM events_eventdetails_customer_id customerDetails\nJOIN only events_eventdetails details2_transValue\nUSING (event_id)\n WHERE customerDetails.keyname='customer_id'\n AND substring(customerDetails.value,0,32)='49'\n AND details2_transValue.keyname='transactionId'\n\n\nB2 looks as follows:\nWorks very fast (few ms)\nhttp://explain.depesz.com/s/jGO\n\nSELECT events2.id, events2.dateTime\nFROM events_events events2\nJOIN events_eventdetails details2_transKey\nON events2.id = details2_transKey.event_id\n AND details2_transKey.keyname='transactionId'\n AND substring(details2_transKey.value,0,32) IN (... results of B1\n...)\n AND events2.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\n\nThe combined version of B works slow again (3-10 seconds):\nhttp://explain.depesz.com/s/9oM\n\nSELECT events2.id, events2.dateTime\nFROM events_events events2\nJOIN events_eventdetails details2_transKey\nON events2.id = details2_transKey.event_id\n AND details2_transKey.keyname='transactionId'\n AND substring(details2_transKey.value,0,32) IN (\n SELECT substring(details2_transValue.value,0,32)\n FROM events_eventdetails_customer_id customerDetails\n JOIN only events_eventdetails details2_transValue\n USING (event_id)\n WHERE customerDetails.keyname='customer_id'\n AND substring(customerDetails.value,0,32)='49'\n AND details2_transValue.keyname='transactionId')\n AND events2.eventtype_id IN\n(100,103,105,106,45,34,14,87,58,78,7,76,11,25,57,98,30,35,33,49,52,28,85,59,23,22,51,48,36,65,66,18,13,86,75,44,38,43,94,56,95,96,71,50,81,90,89,16,17,88,79,77,68,97,92,67,72,53,2,10,31,32,80,24,93,26,9,8,61,5,73,70,63,20,60,40,41,39,101,104,107,99,64,62,55,69,19,46,47,15,21,27,54,12,102,108)\n\nAt the moment I see not other conclusion than to offer B1 and B2 to the\ndatabase separately, but it feels like defeat :-|\n\n\n--\nView this message in context: http://postgresql.1045698.n5.nabble.com/Performance-trouble-finding-records-through-related-records-tp3405914p3423334.html\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n", "msg_date": "Thu, 10 Mar 2011 06:10:40 -0800 (PST)", "msg_from": "sverhagen <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance trouble finding records through related records" } ]
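One way to keep the fast B1/B2 split described above while still sending the database a single unit of work is to materialize the B1 result in a temporary table and run B2 against it inside the same transaction. The sketch below only illustrates that idea, reusing the table and column names quoted in the thread; it has not been run against the poster's schema, and the eventtype_id list is abbreviated.

BEGIN;

CREATE TEMP TABLE b1_trans_keys ON COMMIT DROP AS
SELECT substring(details2_transValue.value,0,32) AS trans_key
FROM events_eventdetails_customer_id customerDetails
JOIN ONLY events_eventdetails details2_transValue USING (event_id)
WHERE customerDetails.keyname = 'customer_id'
  AND substring(customerDetails.value,0,32) = '49'
  AND details2_transValue.keyname = 'transactionId';

-- give the planner an accurate row count for the small driving set
ANALYZE b1_trans_keys;

SELECT events2.id, events2.dateTime
FROM events_events events2
JOIN events_eventdetails details2_transKey
    ON events2.id = details2_transKey.event_id
    AND details2_transKey.keyname = 'transactionId'
JOIN b1_trans_keys b1
    ON substring(details2_transKey.value,0,32) = b1.trans_key
WHERE events2.eventtype_id IN (100,103,105 /* ... abbreviated ... */)
ORDER BY events2.dateTime DESC LIMIT 50;

COMMIT;

The explicit ANALYZE is the point of the exercise: it gives the planner the same information it gets when B1 and B2 are issued as separate statements, namely that the set of transaction keys is small, so it can drive the outer query from that small set rather than from the large tables.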
[ { "msg_contents": "Dear all,\n\nI have a query that i used to fire many times in our application and \nneed to be tuned at the deeper level.\n\nQuery :\n\n explain analyze select p.crawled_page_id, p.content, \nw.publication_name, w.country_name, p.publishing_date,m.doc_category\n,l.display_name as location, l.lat, l.lon, l.pop_rank, \np.crawled_page_url, substring(p.content,1,250)\nas display_text, p.heading from page_content_terror p, location l, \nloc_context_terror lc, meta_terror m,\nwebsite_master w where p.crawled_page_id>0 and \np.crawled_page_id=lc.source_id and lc.location_id=l.id\n and p.crawled_page_id=m.doc_id and p.url_id= w.url_id limit 1000;\n\n\n \nQUERY \nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=483.31..1542.79 rows=1000 width=5460) (actual \ntime=8.797..125504.603 rows=1000 loops=1)\n -> Hash Join (cost=483.31..169719466.71 rows=160190779 width=5460) \n(actual time=8.794..125502.974 rows=1000 loops=1)\n Hash Cond: (p.url_id = w.url_id)\n -> Nested Loop (cost=0.00..163675973.13 rows=13034265 \nwidth=4056) (actual time=1.294..125463.784 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..115348580.99 rows=13034265 \nwidth=3024) (actual time=1.219..125436.104 rows=1156 loops=1)\n Join Filter: (p.crawled_page_id = lc.source_id)\n -> Nested Loop (cost=0.00..10960127.98 rows=53553 \nwidth=3024) (actual time=0.037..66671.887 rows=1156 loops=1)\n Join Filter: (p.crawled_page_id = m.doc_id)\n -> Seq Scan on page_content_terror p \n(cost=0.00..8637.64 rows=2844 width=2816) (actual time=0.013..5.884 \nrows=1156 loops=1)\n Filter: (crawled_page_id > 0)\n -> Seq Scan on meta_terror m \n(cost=0.00..3803.66 rows=3766 width=208) (actual time=0.003..30.117 \nrows=45148 loops=1156)\n -> Seq Scan on loc_context_terror lc \n(cost=0.00..1340.78 rows=48678 width=8) (actual time=0.376..24.675 \nrows=41658 loops=1156)\n -> Index Scan using location_pk on location l \n(cost=0.00..3.70 rows=1 width=1040) (actual time=0.016..0.017 rows=1 \nloops=1156)\n Index Cond: (l.id = lc.location_id)\n -> Hash (cost=452.58..452.58 rows=2458 width=1412) (actual \ntime=7.344..7.344 rows=2458 loops=1)\n -> Seq Scan on website_master w (cost=0.00..452.58 \nrows=2458 width=1412) (actual time=0.013..4.094 rows=2458 loops=1)\n Total runtime: 125506.007 ms\n\n/***********************************After adding \nindexes*******************************************/\n \nQUERY \nPLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------\n---------\n Limit (cost=483.31..583.30 rows=1000 width=5460) (actual \ntime=9.769..63.374 rows=1000 loops=1)\n -> Hash Join (cost=483.31..871571182.73 rows=8716355049 width=5460) \n(actual time=9.765..62.314 rows=1000 loops=1)\n Hash Cond: (p.url_id = w.url_id)\n -> Nested Loop (cost=0.00..542756386.37 rows=709224822 \nwidth=4056) (actual time=1.640..30.895 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..2587537.38 rows=3139483 \nwidth=3856) (actual time=1.558..22.552 rows=1000 loops=1)\n -> Nested Loop (cost=0.00..157876.94 rows=41693 \nwidth=1040) (actual time=1.419..13.039 rows=1000 loops=1)\n -> Seq Scan on loc_context_terror lc \n(cost=0.00..1270.93 rows=41693 width=8) (actual time=1.346..2.264 \nrows=1156 loops=1)\n -> Index Scan using location_pk on location \nl (cost=0.00..3.74 rows=1 width=1040) (actual time=0.005..0.006 rows=1 \nloops=1156)\n 
Index Cond: (l.id = lc.location_id)\n -> Index Scan using idx_crawled_s9 on \npage_content_terror p (cost=0.00..57.34 rows=75 width=2816) (actual \ntime=0.005..0.006 rows=1 loo\nps=1000)\n Index Cond: ((p.crawled_page_id > 0) AND \n(p.crawled_page_id = lc.source_id))\n -> Index Scan using idx_doc_s9 on meta_terror m \n(cost=0.00..169.23 rows=226 width=208) (actual time=0.005..0.006 rows=1 \nloops=1000)\n Index Cond: (m.doc_id = p.crawled_page_id)\n -> Hash (cost=452.58..452.58 rows=2458 width=1412) (actual \ntime=7.964..7.964 rows=2458 loops=1)\n -> Seq Scan on website_master w (cost=0.00..452.58 \nrows=2458 width=1412) (actual time=0.009..4.495 rows=2458 loops=1)\n Total runtime: 64.396 ms\n\n\nDon't know why it uses Seq Scan on loc_context_terror as i have indexes \non the desired columns as well.\n\n\nThanks & best Regards,\n\nAdarsh Sharma\n", "msg_date": "Thu, 03 Mar 2011 10:01:58 +0530", "msg_from": "Adarsh Sharma <[email protected]>", "msg_from_op": true, "msg_subject": "Is it require further tuning" }, { "msg_contents": "On Wed, Mar 2, 2011 at 11:31 PM, Adarsh Sharma <[email protected]> wrote:\n> Don't know why it uses Seq Scan on loc_context_terror as i have indexes on\n> the desired columns as well.\n\nI don't see how an index scan would help. The query appears to need\nall the rows from that table.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Mar 2011 09:55:39 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Is it require further tuning" } ]
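The index definitions behind the faster second plan are not shown in the thread; judging from the index scans on idx_crawled_s9 and idx_doc_s9 over the join columns, they were presumably something along these lines (a reconstruction from the plan, not the poster's actual DDL):

CREATE INDEX idx_crawled_s9 ON page_content_terror (crawled_page_id);
CREATE INDEX idx_doc_s9 ON meta_terror (doc_id);

A comparable index on loc_context_terror would not remove its sequential scan: as noted in the reply above, the query needs all the rows from that table, so reading it sequentially is already the cheapest way to get them.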
[ { "msg_contents": "Hi. I've only been using PostgreSQL properly for a week or so, so I\napologise if this has been covered numerous times, however Google is\nproducing nothing of use.\n\nI'm trying to import a large amount of legacy data (billions of\ndenormalised rows) into a pg database with a completely different schema,\nde-duplicating bits of it on-the-fly while maintaining a reference count.\nThe procedures to do this have proven easy to write, however the speed is\nnot pretty. I've spent some time breaking down the cause and it's come down\nto a simple UPDATE as evidenced below:\n\n\nCREATE TABLE foo (a int PRIMARY KEY, b int);\nINSERT INTO foo VALUES (1,1);\n\nCREATE OR REPLACE FUNCTION test() RETURNS int AS $$\nDECLARE\n i int;\nBEGIN\n FOR i IN 1..10000 LOOP\n UPDATE foo SET b=b+1 WHERE a=1;\n END LOOP;\n RETURN 1;\nEND;\n$$ LANGUAGE plpgsql;\n\nWhen run individually, this function produces the following timing:\nTime: 1912.593 ms\nTime: 1937.183 ms\nTime: 1941.607 ms\nTime: 1943.673 ms\nTime: 1944.738 ms\n\nHowever, when run inside a transaction (to simulate a function doing the\nsame work) I get this:\n\nSTART TRANSACTION\nTime: 0.836 ms\nTime: 1908.931 ms\nTime: 5370.110 ms\nTime: 8809.378 ms\nTime: 12274.294 ms\nTime: 15698.745 ms\nTime: 19218.389 ms\n\n\nThere is no disk i/o and the postgresql process runs 100% cpu.\nServer is amd64 FreeBSD 8-STABLE w/16GB RAM running postgresql 9.0.3 from\npackages\n\nLooking at the timing of real data (heavily grouped), it seems the speed of\nUPDATEs can vary dependent on how heavily updated a row is, so I set out to\nproduce a test case:\n\nCREATE TABLE foo (a int PRIMARY KEY, b int);\nINSERT INTO foo VALUES (1,1),(2,1),(3,1),(4,1);\n\nCREATE OR REPLACE FUNCTION test(int) RETURNS int AS $$\nDECLARE\n i int;\nBEGIN\n FOR i IN 1..10000 LOOP\n UPDATE foo SET b=1 WHERE a=$1;\n END LOOP;\n RETURN 1;\nEND;\n$$ LANGUAGE plpgsql;\nSTART TRANSACTION;\nSELECT test(1); Time: 1917.305 ms\nSELECT test(2); Time: 1926.758 ms\nSELECT test(3); Time: 1926.498 ms\nSELECT test(1); Time: 5376.691 ms\nSELECT test(2); Time: 5408.231 ms\nSELECT test(3); Time: 5403.654 ms\nSELECT test(1); Time: 8842.425 ms\nSELECT test(4); Time: 1925.954 ms\nCOMMIT; START TRANSACTION;\nSELECT test(1); Time: 1911.153 ms\n\n\nAs you can see, the more an individual row is updated /within a\ntransaction/, the slower it becomes for some reason.\n\nUnfortunately in my real-world case, I need to do many billions of these\nUPDATEs. Is there any way I can get around this without pulling my huge\nsource table out of the database and feeding everything in line-at-a-time\nfrom outside the database?\n\n\n\nThanks.\n\n\n-- \n\n \nThe information contained in this message is confidential and is intended for the addressee only. If you have received this message in error or there are any problems please notify the originator immediately. The unauthorised use, disclosure, copying or alteration of this message is strictly forbidden. \n\nCritical Software Ltd. reserves the right to monitor and record e-mail messages sent to and from this address for the purposes of investigating or detecting any unauthorised use of its system and ensuring its effective operation.\n\nCritical Software Ltd. registered in England, 04909220. 
Registered Office: IC2, Keele Science Park, Keele, Staffordshire, ST5 5NH.\n\n------------------------------------------------------------\nThis message has been scanned for security threats by iCritical.\n For further information, please visit www.icritical.com\n------------------------------------------------------------\n", "msg_date": "Thu, 03 Mar 2011 14:13:28 +0000", "msg_from": "Matt Burke <[email protected]>", "msg_from_op": true, "msg_subject": "Slowing UPDATEs inside a transaction" }, { "msg_contents": "On Thu, Mar 3, 2011 at 9:13 AM, Matt Burke <[email protected]> wrote:\n> Hi. I've only been using PostgreSQL properly for a week or so, so I\n> apologise if this has been covered numerous times, however Google is\n> producing nothing of use.\n>\n> I'm trying to import a large amount of legacy data (billions of\n> denormalised rows) into a pg database with a completely different schema,\n> de-duplicating bits of it on-the-fly while maintaining a reference count.\n> The procedures to do this have proven easy to write, however the speed is\n> not pretty. I've spent some time breaking down the cause and it's come down\n> to a simple UPDATE as evidenced below:\n\nPostgreSQL uses MVCC, which means that transactions see a snapshot of\nthe database at existed at a certain point in time, usually the\nbeginning of the currently query. Old row versions have to be kept\naround until they're no longer of interest to any still-running\ntransaction. Sadly, our ability to detect which row versions are\nstill of interest is imperfect, so we sometimes keep row versions that\nare technically not required. Unfortunately, repeated updates by the\nsame transaction to the same database row are one of the cases that we\ndon't handle very well - all the old row versions will be kept until\nthe transaction commits. I suspect if you look at the problem case\nyou'll find that the table and index are getting bigger with every set\nof updates, whereas when you do the updates in separate transactions\nthe size grows for a while and then levels off.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Thu, 3 Mar 2011 09:26:02 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slowing UPDATEs inside a transaction" }, { "msg_contents": "On Thu, Mar 3, 2011 at 8:26 AM, Robert Haas <[email protected]> wrote:\n> On Thu, Mar 3, 2011 at 9:13 AM, Matt Burke <[email protected]> wrote:\n>> Hi. I've only been using PostgreSQL properly for a week or so, so I\n>> apologise if this has been covered numerous times, however Google is\n>> producing nothing of use.\n>>\n>> I'm trying to import a large amount of legacy data (billions of\n>> denormalised rows) into a pg database with a completely different schema,\n>> de-duplicating bits of it on-the-fly while maintaining a reference count.\n>> The procedures to do this have proven easy to write, however the speed is\n>> not pretty. I've spent some time breaking down the cause and it's come down\n>> to a simple UPDATE as evidenced below:\n>\n> PostgreSQL uses MVCC, which means that transactions see a snapshot of\n> the database at existed at a certain point in time, usually the\n> beginning of the currently query.  Old row versions have to be kept\n> around until they're no longer of interest to any still-running\n> transaction.  
Sadly, our ability to detect which row versions are\n> still of interest is imperfect, so we sometimes keep row versions that\n> are technically not required.  Unfortunately, repeated updates by the\n> same transaction to the same database row are one of the cases that we\n> don't handle very well - all the old row versions will be kept until\n> the transaction commits.  I suspect if you look at the problem case\n> you'll find that the table and index are getting bigger with every set\n> of updates, whereas when you do the updates in separate transactions\n> the size grows for a while and then levels off.\n\nAnother perspective on this is that not having explicit transaction\ncontrol via stored procedures contributes to the problem. Being able\nto manage transaction state would allow for a lot of workarounds for\nthis problem without forcing the processing into the client side.\n\nTo the OP I would suggest rethinking your processing as inserts into\none or more staging tables, followed up by a insert...select into the\nfinal destination table. Try to use less looping and more sql if\npossible...\n\nmerlin\n", "msg_date": "Thu, 3 Mar 2011 09:23:32 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slowing UPDATEs inside a transaction" }, { "msg_contents": "Robert Haas wrote:\n> Old row versions have to be kept around until they're no longer of \n> interest to any still-running transaction.\n\nThanks for the explanation.\n\nRegarding the snippet above, why would the intermediate history of\nmultiply-modified uncommitted rows be of interest to anything, or is the\ncurrent behaviour simply \"cheaper\" overall in terms of cpu/developer time?\n\n\n-- \n \nThe information contained in this message is confidential and is intended for the addressee only. If you have received this message in error or there are any problems please notify the originator immediately. The unauthorised use, disclosure, copying or alteration of this message is strictly forbidden. \n\nCritical Software Ltd. reserves the right to monitor and record e-mail messages sent to and from this address for the purposes of investigating or detecting any unauthorised use of its system and ensuring its effective operation.\n\nCritical Software Ltd. registered in England, 04909220. Registered Office: IC2, Keele Science Park, Keele, Staffordshire, ST5 5NH.\n\n------------------------------------------------------------\nThis message has been scanned for security threats by iCritical.\n For further information, please visit www.icritical.com\n------------------------------------------------------------\n", "msg_date": "Fri, 04 Mar 2011 09:21:54 +0000", "msg_from": "Matt Burke <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slowing UPDATEs inside a transaction" }, { "msg_contents": "On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke <[email protected]> wrote:\n> Robert Haas wrote:\n>> Old row versions have to be kept around until they're no longer of\n>> interest to any still-running transaction.\n>\n> Thanks for the explanation.\n>\n> Regarding the snippet above, why would the intermediate history of\n> multiply-modified uncommitted rows be of interest to anything, or is the\n> current behaviour simply \"cheaper\" overall in terms of cpu/developer time?\n\nBecause in theory you could have a cursor open. You could open a\ncursor, start to read from it, then make an update. 
Now the cursor\nneeds to see things as they were before the update.\n\nWe might be able to do some optimization here if we had some\ninfrastructure to detect when a backend has no registered snapshots\nwith a command-ID less than the command-ID of the currently active\nsnapshot, but nobody's put in the effort to figure out exactly what's\ninvolved and whether it makes sense. It's a problem I'm interested\nin, but #(needed round-tuits) > #(actual round-tuits).\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n", "msg_date": "Fri, 4 Mar 2011 09:20:46 -0500", "msg_from": "Robert Haas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slowing UPDATEs inside a transaction" }, { "msg_contents": "On Fri, Mar 4, 2011 at 8:20 AM, Robert Haas <[email protected]> wrote:\n> On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke <[email protected]> wrote:\n>> Robert Haas wrote:\n>>> Old row versions have to be kept around until they're no longer of\n>>> interest to any still-running transaction.\n>>\n>> Thanks for the explanation.\n>>\n>> Regarding the snippet above, why would the intermediate history of\n>> multiply-modified uncommitted rows be of interest to anything, or is the\n>> current behaviour simply \"cheaper\" overall in terms of cpu/developer time?\n>\n> Because in theory you could have a cursor open.  You could open a\n> cursor, start to read from it, then make an update.  Now the cursor\n> needs to see things as they were before the update.\n>\n> We might be able to do some optimization here if we had some\n> infrastructure to detect when a backend has no registered snapshots\n> with a command-ID less than the command-ID of the currently active\n> snapshot, but nobody's put in the effort to figure out exactly what's\n> involved and whether it makes sense.  It's a problem I'm interested\n> in, but #(needed round-tuits) > #(actual round-tuits).\n\nNot just cursors, but pl/pgsql for example is also pretty aggressive\nabout grabbing snapshots. Also,t' is a matter of speculation if the\ncase of a single row being updated a high number of times in a single\ntransaction merits such complicated optimizations.\n\nIt bears repeating: Explicit transaction control (beyond the dblink\ntype hacks that currently exist) in backend scripting would solve many\ncases where this is a problem in practice without having to muck\naround in the mvcc engine. Autonomous transactions are another way to\ndo this...\n\nmerlin\n", "msg_date": "Fri, 4 Mar 2011 08:51:30 -0600", "msg_from": "Merlin Moncure <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slowing UPDATEs inside a transaction" } ]
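To make the staging-table suggestion above concrete for the original reference-counting case: instead of issuing one UPDATE per source row, the increments can be bulk-loaded first and applied in a single statement, so each target row gets only one new version per batch rather than one per increment. A rough sketch against the foo(a,b) table from the first message, with a hypothetical staging table standing in for the legacy data:

-- hypothetical staging table; load it with COPY or plain INSERTs,
-- one row per reference found in the legacy data
CREATE TEMP TABLE foo_increments (a int);

-- apply all increments at once, touching each foo row a single time
UPDATE foo
SET b = b + agg.cnt
FROM (SELECT a, count(*) AS cnt
      FROM foo_increments
      GROUP BY a) AS agg
WHERE foo.a = agg.a;

Keys not yet present in foo would still need a separate INSERT ... SELECT pass, but the per-row UPDATE loop, and with it the dead-version buildup described in this thread, goes away.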