[ { "msg_contents": "I am having some dead locking problem with my app system. Our dev are\ndebugging the app to find out the cause of the problem. In the mean time\nI looked at postgresql.conf file. I found that there is a parameter in\npostgresql.conf file deadlock_timeout which was set 1000 (ms). Normally\nI see deadlock in the night or when auto vacuum is running for a long\ntime.\n\n \n\nMy question is \n\n \n\nWhat is the significance of this parameter and updating this parameter\nvalue will make any difference ? \n\n \n\nThanks\n\nRegards\n\nsachi\n\n \n\n \n\n\n\n\n\n\n\n\n\n\nI am having some dead locking problem with my app system.\nOur dev are debugging the app to find out the cause of the problem. In the mean\ntime I looked at postgresql.conf file. I found that there is a parameter in\npostgresql.conf file deadlock_timeout which was set 1000 (ms).  Normally I\nsee deadlock in the night or when auto vacuum is running for a long time.\n \nMy question is \n \nWhat is the significance of this parameter and updating this\nparameter value will make any difference ? \n \nThanks\nRegards\nsachi", "msg_date": "Thu, 23 Aug 2007 14:23:57 -0400", "msg_from": "\"Sachchida Ojha\" <[email protected]>", "msg_from_op": true, "msg_subject": "deadlock_timeout parameter in Postgresql.cof" }, { "msg_contents": "In response to \"Sachchida Ojha\" <[email protected]>:\n\n> I am having some dead locking problem with my app system. Our dev are\n> debugging the app to find out the cause of the problem. In the mean time\n> I looked at postgresql.conf file. I found that there is a parameter in\n> postgresql.conf file deadlock_timeout which was set 1000 (ms). Normally\n> I see deadlock in the night or when auto vacuum is running for a long\n> time.\n> \n> My question is \n> \n> What is the significance of this parameter and updating this parameter\n> value will make any difference ? \n\nDoes the documentation leave anything unanswered?\nhttp://www.postgresql.org/docs/8.2/static/runtime-config-locks.html\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Thu, 23 Aug 2007 14:52:08 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: deadlock_timeout parameter in Postgresql.cof" } ]
[ { "msg_contents": "Hi.\n\nI just created partitioned table, n_traf, sliced by month\n(n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are\nindexed by 'date_time' column.\nThen I populate it (last value have date 2007-08-...) and do VACUUM\nANALYZE ON n_traf_y2007... all of it.\n\nNow I try to select latest value (ORDER BY date_time LIMIT 1), but\nPostgres produced the ugly plan:\n\n=# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------\n Limit (cost=824637.69..824637.69 rows=1 width=32)\n -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n Sort Key: public.n_traf.date_time\n -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n -> Append (cost=0.00..100877.99 rows=5643499 width=32)\n -> Seq Scan on n_traf (cost=0.00..22.30\nrows=1230 width=32)\n -> Seq Scan on n_traf_y2007m01 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m02 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m03 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m04 n_traf\n(cost=0.00..1.01 rows=1 width=32)\n -> Seq Scan on n_traf_y2007m05 n_traf\n(cost=0.00..9110.89 rows=509689 width=32)\n -> Seq Scan on n_traf_y2007m06 n_traf\n(cost=0.00..32003.89 rows=1790489 width=32)\n -> Seq Scan on n_traf_y2007m07 n_traf\n(cost=0.00..33881.10 rows=1895510 width=32)\n -> Seq Scan on n_traf_y2007m08 n_traf\n(cost=0.00..25702.70 rows=1437970 width=32)\n -> Seq Scan on n_traf_y2007m09 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m10 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m11 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n -> Seq Scan on n_traf_y2007m12 n_traf\n(cost=0.00..22.30 rows=1230 width=32)\n(18 rows)\n\n\nWhy it no uses indexes at all?\n-------------------------------------------\n\nThe simplier query goes fast, use index.\n=# explain analyze SELECT * FROM n_traf_y2007m08 ORDER BY date_time\nDESC LIMIT 1;\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..0.03 rows=1 width=32) (actual time=0.156..0.158\nrows=1 loops=1)\n -> Index Scan Backward using n_traf_y2007m08_date_time_login_id on\nn_traf_y2007m08 (cost=0.00..39489.48 rows=1437970 width=32) (actual\ntime=0.150..0.150 rows=1 loops=1)\n Total runtime: 0.241 ms\n(3 rows)\n\nTable n_traf looks like this:\n=# \\d n_traf\n Table \"public.n_traf\"\n Column | Type | Modifiers\n-------------+-----------------------------+--------------------\n login_id | integer | not null\n traftype_id | integer | not null\n date_time | timestamp without time zone | not null\n bytes_in | bigint | not null default 0\n bytes_out | bigint | not null default 0\nIndexes:\n \"n_traf_login_id_key\" UNIQUE, btree (login_id, traftype_id, date_time)\n \"n_traf_date_time_login_id\" btree (date_time, login_id)\nForeign-key constraints:\n \"n_traf_login_id_fkey\" FOREIGN KEY (login_id) REFERENCES\nn_logins(login_id) ON UPDATE CASCADE ON DELETE CASCADE\n \"n_traf_traftype_id_fkey\" FOREIGN KEY (traftype_id) REFERENCES\nn_traftypes(traftype_id) ON UPDATE CASCADE\nRules:\n n_traf_insert_y2007m01 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-01-01'::date AND new.date_time <\n'2007-02-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO 
n_traf_y2007m01 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m02 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-02-01'::date AND new.date_time <\n'2007-03-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO n_traf_y2007m02 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m03 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-03-01'::date AND new.date_time <\n'2007-04-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO n_traf_y2007m03 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m04 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-04-01'::date AND new.date_time <\n'2007-05-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO n_traf_y2007m04 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m05 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-05-01'::date AND new.date_time <\n'2007-06-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO n_traf_y2007m05 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m06 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-06-01'::date AND new.date_time <\n'2007-07-01 00:00:00'::timestamp without time zone DO INSTEAD\n INSERT INTO n_traf_y2007m06 (login_id, traftype_id, date_time,\nbytes_in, bytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m07 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-07-01'::date AND new.date_time <\n'2007-08-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m07 (login_id, traftype_id, date_time, bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m08 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-08-01'::date AND new.date_time <\n'2007-09-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m08 (login_id, traftype_id, date_time, bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m09 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-09-01'::date AND new.date_time <\n'2007-10-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m09 (login_id, traftype_id, date_time, bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m10 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-10-01'::date AND new.date_time <\n'2007-11-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m10 (login_id, traftype_id, date_time, bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m11 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-11-01'::date AND new.date_time <\n'2007-12-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m11 (login_id, traftype_id, date_time, 
bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n n_traf_insert_y2007m12 AS\n ON INSERT TO n_traf\n WHERE new.date_time >= '2007-12-01'::date AND new.date_time <\n'2008-01-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\nINTO n_traf_y2007m12 (login_id, traftype_id, date_time, bytes_in,\nbytes_out)\n VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\nnew.bytes_out)\n\n\nTables n_traf_y2007m... looks like these\n\n Table \"public.n_traf_y2007m01\"\n Column | Type | Modifiers\n-------------+-----------------------------+--------------------\n login_id | integer | not null\n traftype_id | integer | not null\n date_time | timestamp without time zone | not null\n bytes_in | bigint | not null default 0\n bytes_out | bigint | not null default 0\nIndexes:\n \"n_traf_y2007m01_date_time_login_id\" btree (date_time, login_id)\nCheck constraints:\n \"n_traf_y2007m01_date_time_check\" CHECK (date_time >=\n'2007-01-01'::date AND date_time < '2007-02-01 00:00:00'::timestamp\nwithout time zone)\nInherits: n_traf\n\nIndex \"public.n_traf_y2007m01_date_time_login_id\"\n Column | Type\n-----------+-----------------------------\n date_time | timestamp without time zone\n login_id | integer\nbtree, for table \"public.n_traf_y2007m01\"\n\n Table \"public.n_traf_y2007m02\"\n Column | Type | Modifiers\n-------------+-----------------------------+--------------------\n login_id | integer | not null\n traftype_id | integer | not null\n date_time | timestamp without time zone | not null\n bytes_in | bigint | not null default 0\n bytes_out | bigint | not null default 0\nIndexes:\n \"n_traf_y2007m02_date_time_login_id\" btree (date_time, login_id)\nCheck constraints:\n \"n_traf_y2007m02_date_time_check\" CHECK (date_time >=\n'2007-02-01'::date AND date_time < '2007-03-01 00:00:00'::timestamp\nwithout time zone)\nInherits: n_traf\n...\n\n-- \nengineer\n", "msg_date": "Fri, 24 Aug 2007 14:53:05 +0600", "msg_from": "Anton <[email protected]>", "msg_from_op": true, "msg_subject": "partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "On 8/24/07, Anton <[email protected]> wrote:\n>\n> Hi.\n>\n> I just created partitioned table, n_traf, sliced by month\n> (n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are\n> indexed by 'date_time' column.\n> Then I populate it (last value have date 2007-08-...) and do VACUUM\n> ANALYZE ON n_traf_y2007... 
all of it.\n>\n> Now I try to select latest value (ORDER BY date_time LIMIT 1), but\n> Postgres produced the ugly plan:\n>\n> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> QUERY PLAN\n>\n> ---------------------------------------------------------------------------------------------------------\n> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> Sort Key: public.n_traf.date_time\n> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> -> Append (cost=0.00..100877.99 rows=5643499 width=32)\n> -> Seq Scan on n_traf (cost=0.00..22.30\n> rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m01 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m02 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m03 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m04 n_traf\n> (cost=0.00..1.01 rows=1 width=32)\n> -> Seq Scan on n_traf_y2007m05 n_traf\n> (cost=0.00..9110.89 rows=509689 width=32)\n> -> Seq Scan on n_traf_y2007m06 n_traf\n> (cost=0.00..32003.89 rows=1790489 width=32)\n> -> Seq Scan on n_traf_y2007m07 n_traf\n> (cost=0.00..33881.10 rows=1895510 width=32)\n> -> Seq Scan on n_traf_y2007m08 n_traf\n> (cost=0.00..25702.70 rows=1437970 width=32)\n> -> Seq Scan on n_traf_y2007m09 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m10 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m11 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m12 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> (18 rows)\n>\n>\n> Why it no uses indexes at all?\n> -------------------------------------------\n\n\n\n\nI'm no expert but I'd guess that the the planner doesn't know which\npartition holds the latest time so it has to read them all.\n\nRegards\n\nMP\n\nOn 8/24/07, Anton <[email protected]> wrote:\nHi.I just created partitioned table, n_traf, sliced by month(n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They areindexed by 'date_time' column.Then I populate it (last value have date 2007-08-...) and do VACUUM\nANALYZE ON n_traf_y2007... 
all of it.Now I try to select latest value (ORDER BY date_time LIMIT 1), butPostgres produced the ugly plan:=# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n                                               QUERY PLAN--------------------------------------------------------------------------------------------------------- Limit  (cost=824637.69..824637.69 rows=1 width=32)\n   ->  Sort  (cost=824637.69..838746.44 rows=5643499 width=32)         Sort Key: public.n_traf.date_time         ->  Result  (cost=0.00..100877.99 rows=5643499 width=32)               ->  Append  (cost=\n0.00..100877.99 rows=5643499 width=32)                     ->  Seq Scan on n_traf  (cost=0.00..22.30rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m01 n_traf(cost=0.00..22.30 rows=1230 width=32)\n                     ->  Seq Scan on n_traf_y2007m02 n_traf(cost=0.00..22.30 rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m03 n_traf(cost=0.00..22.30 rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m04 n_traf\n(cost=0.00..1.01 rows=1 width=32)                     ->  Seq Scan on n_traf_y2007m05 n_traf(cost=0.00..9110.89 rows=509689 width=32)                     ->  Seq Scan on n_traf_y2007m06 n_traf(cost=\n0.00..32003.89 rows=1790489 width=32)                     ->  Seq Scan on n_traf_y2007m07 n_traf(cost=0.00..33881.10 rows=1895510 width=32)                     ->  Seq Scan on n_traf_y2007m08 n_traf(cost=\n0.00..25702.70 rows=1437970 width=32)                     ->  Seq Scan on n_traf_y2007m09 n_traf(cost=0.00..22.30 rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m10 n_traf(cost=0.00..22.30\n rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m11 n_traf(cost=0.00..22.30 rows=1230 width=32)                     ->  Seq Scan on n_traf_y2007m12 n_traf(cost=0.00..22.30 rows=1230 width=32)\n(18 rows)Why it no uses indexes at all?-------------------------------------------I'm no expert but I'd guess that the the planner doesn't know which partition holds the latest time so it has to read them all.\nRegardsMP", "msg_date": "Fri, 24 Aug 2007 13:24:31 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "> > =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> > QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> > Limit (cost=824637.69..824637.69 rows=1 width=32)\n> > -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> > Sort Key: public.n_traf.date_time\n> > -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> > -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n> > -> Seq Scan on n_traf (cost=0.00..22.30\n> > rows=1230 width=32)\n> > -> Seq Scan on n_traf_y2007m01 n_traf\n> > (cost=0.00..22.30 rows=1230 width=32)\n...\n> > -> Seq Scan on n_traf_y2007m12 n_traf\n> > (cost=0.00..22.30 rows=1230 width=32)\n> > (18 rows)\n> >\n> > Why it no uses indexes at all?\n> > -------------------------------------------\n> I'm no expert but I'd guess that the the planner doesn't know which\n> partition holds the latest time so it has to read them all.\n\nAgree. 
But why it not uses indexes when it reading them?\n\n-- \nengineer\n", "msg_date": "Fri, 24 Aug 2007 16:32:43 +0600", "msg_from": "Anton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Anton wrote:\n>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n>>> QUERY PLAN\n>> ---------------------------------------------------------------------------------------------------------\n>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n>>> Sort Key: public.n_traf.date_time\n>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n>>> -> Seq Scan on n_traf (cost=0.00..22.30\n>>> rows=1230 width=32)\n>>> -> Seq Scan on n_traf_y2007m01 n_traf\n>>> (cost=0.00..22.30 rows=1230 width=32)\n> ...\n>>> -> Seq Scan on n_traf_y2007m12 n_traf\n>>> (cost=0.00..22.30 rows=1230 width=32)\n>>> (18 rows)\n>>>\n>>> Why it no uses indexes at all?\n>>> -------------------------------------------\n>> I'm no expert but I'd guess that the the planner doesn't know which\n>> partition holds the latest time so it has to read them all.\n> \n> Agree. But why it not uses indexes when it reading them?\n\nThe planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\nbelow the append node. Therefore it needs to fetch all rows from all the\ntables, and the fastest way to do that is a seq scan.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 24 Aug 2007 11:38:26 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "We just fixed this - I'll post a patch, but I don't have time to verify\nagainst HEAD.\n\n- Luke\n\n\nOn 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n\n> Anton wrote:\n>>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n>>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------\n>>> -----------------------------\n>>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n>>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n>>>> Sort Key: public.n_traf.date_time\n>>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n>>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n>>>> -> Seq Scan on n_traf (cost=0.00..22.30\n>>>> rows=1230 width=32)\n>>>> -> Seq Scan on n_traf_y2007m01 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>> ...\n>>>> -> Seq Scan on n_traf_y2007m12 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>>>> (18 rows)\n>>>> \n>>>> Why it no uses indexes at all?\n>>>> -------------------------------------------\n>>> I'm no expert but I'd guess that the the planner doesn't know which\n>>> partition holds the latest time so it has to read them all.\n>> \n>> Agree. But why it not uses indexes when it reading them?\n> \n> The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> below the append node. 
Therefore it needs to fetch all rows from all the\n> tables, and the fastest way to do that is a seq scan.\n\n\n", "msg_date": "Fri, 24 Aug 2007 09:20:28 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\nit?\n\nThis enables the use of index scan of a child table by recognizing sort\norder of the append node. Kurt Harriman did the work.\n\n- Luke\n\nIndex: cdb-pg/src/backend/optimizer/path/indxpath.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/path/indxpath.c,v\ndiff -u -N -r1.22 -r1.22.2.1\n--- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n-0000 1.22\n+++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n-0000 1.22.2.1\n@@ -379,8 +379,51 @@\n index_pathkeys = build_index_pathkeys(root, index,\n ForwardScanDirection,\n true);\n- useful_pathkeys = truncate_useless_pathkeys(root, rel,\n- index_pathkeys);\n+ /*\n+ * CDB: For appendrel child, pathkeys contain Var nodes in\nterms \n+ * of the child's baserel. Transform the pathkey list to refer\nto \n+ * columns of the appendrel.\n+ */\n+ if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n+ {\n+ AppendRelInfo *appinfo = NULL;\n+ RelOptInfo *appendrel = NULL;\n+ ListCell *appcell;\n+ CdbPathLocus notalocus;\n+\n+ /* Find the appendrel of which this baserel is a child. */\n+ foreach(appcell, root->append_rel_list)\n+ {\n+ appinfo = (AppendRelInfo *)lfirst(appcell);\n+ if (appinfo->child_relid == rel->relid)\n+ break;\n+ }\n+ Assert(appinfo);\n+ appendrel = find_base_rel(root, appinfo->parent_relid);\n+\n+ /*\n+ * The pathkey list happens to have the same format as the\n+ * partitioning key of a Hashed locus, so by disguising it\n+ * we can use cdbpathlocus_pull_above_projection() to do\nthe \n+ * transformation.\n+ */\n+ CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n+ notalocus =\n+ cdbpathlocus_pull_above_projection(root,\n+ notalocus,\n+ rel->relids,\n+ rel->reltargetlist,\n+ \nappendrel->reltargetlist,\n+ appendrel->relid);\n+ if (CdbPathLocus_IsHashed(notalocus))\n+ index_pathkeys = truncate_useless_pathkeys(root,\nappendrel,\n+ \nnotalocus.partkey);\n+ else\n+ index_pathkeys = NULL;\n+ }\n+\n+ useful_pathkeys = truncate_useless_pathkeys(root, rel,\n+ index_pathkeys);\n }\n else\n useful_pathkeys = NIL;\nIndex: cdb-pg/src/backend/optimizer/path/pathkeys.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/path/pathkeys.c,v\ndiff -u -N -r1.18 -r1.18.2.1\n--- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n-0000 1.18\n+++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n-0000 1.18.2.1\n@@ -1403,55 +1403,53 @@\n {\n PathKeyItem *item;\n Expr *newexpr;\n+ AttrNumber targetindex;\n \n Assert(pathkey);\n \n- /* Use constant expr if available. Will be at head of list. */\n- if (CdbPathkeyEqualsConstant(pathkey))\n+ /* Find an expr that we can rewrite to use the projected columns. */\n+ item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n+ relids,\n+ targetlist,\n+ &targetindex); // OUT\n+ \n+ /* If not found, see if the equiv class contains a constant expr. 
*/\n+ if (!item &&\n+ CdbPathkeyEqualsConstant(pathkey))\n {\n item = (PathKeyItem *)linitial(pathkey);\n newexpr = (Expr *)copyObject(item->key);\n }\n \n- /* New vars for old! */\n- else\n- {\n- AttrNumber targetindex;\n+ /* Fail if no usable expr. */\n+ else if (!item)\n+ return NULL;\n \n- /* Find an expr that we can rewrite to use the projected columns.\n*/\n- item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n- relids,\n- targetlist,\n- &targetindex); // OUT\n- if (!item)\n- return NULL;\n+ /* If found matching targetlist item, make a Var that references it. */\n+ else if (targetindex > 0)\n+ newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n+ targetindex,\n+ newvarlist,\n+ (Expr *)item->key);\n \n- /* If found matching targetlist item, make a Var that references\nit. */\n- if (targetindex > 0)\n- newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n- targetindex,\n- newvarlist,\n- (Expr *)item->key);\n+ /* Replace expr's Var nodes with new ones referencing the targetlist.\n*/\n+ else\n+ newexpr = cdbpullup_expr((Expr *)item->key,\n+ targetlist,\n+ newvarlist,\n+ newrelid);\n \n- /* Replace expr's Var nodes with new ones referencing the\ntargetlist. */\n- else\n- newexpr = cdbpullup_expr((Expr *)item->key,\n- targetlist,\n- newvarlist,\n- newrelid);\n+ /* Pull up RelabelType node too, unless tlist expr has right type. */\n+ if (IsA(item->key, RelabelType))\n+ {\n+ RelabelType *oldrelabel = (RelabelType *)item->key;\n \n- /* Pull up RelabelType node too, unless tlist expr has right type.\n*/\n- if (IsA(item->key, RelabelType))\n- {\n- RelabelType *oldrelabel = (RelabelType *)item->key;\n-\n- if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n- oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n- newexpr = (Expr *)makeRelabelType(newexpr,\n- oldrelabel->resulttype,\n- oldrelabel->resulttypmod,\n- \noldrelabel->relabelformat);\n- }\n+ if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n+ oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n+ newexpr = (Expr *)makeRelabelType(newexpr,\n+ oldrelabel->resulttype,\n+ oldrelabel->resulttypmod,\n+ oldrelabel->relabelformat);\n }\n Insist(newexpr);\n \nIndex: cdb-pg/src/backend/optimizer/util/pathnode.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/util/pathnode.c,v\ndiff -u -N -r1.52.2.4 -r1.52.2.5\n--- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n-0000 1.52.2.4\n+++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n-0000 1.52.2.5\n@@ -1563,7 +1563,15 @@\n pathnode->path.rescannable = false;\n }\n \n- return pathnode;\n+ /* \n+ * CDB: If there is exactly one subpath, its ordering is preserved.\n+ * Child rel's pathkey exprs are already expressed in terms of the\n+ * columns of the parent appendrel. 
See find_usable_indexes().\n+ */\n+ if (list_length(subpaths) == 1)\n+ pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n+ \n+ return pathnode;\n }\n \n /*\n\n\nOn 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n\n> Anton wrote:\n>>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n>>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------\n>>> -----------------------------\n>>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n>>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n>>>> Sort Key: public.n_traf.date_time\n>>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n>>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n>>>> -> Seq Scan on n_traf (cost=0.00..22.30\n>>>> rows=1230 width=32)\n>>>> -> Seq Scan on n_traf_y2007m01 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>> ...\n>>>> -> Seq Scan on n_traf_y2007m12 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>>>> (18 rows)\n>>>> \n>>>> Why it no uses indexes at all?\n>>>> -------------------------------------------\n>>> I'm no expert but I'd guess that the the planner doesn't know which\n>>> partition holds the latest time so it has to read them all.\n>> \n>> Agree. But why it not uses indexes when it reading them?\n> \n> The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> below the append node. Therefore it needs to fetch all rows from all the\n> tables, and the fastest way to do that is a seq scan.\n\n\n", "msg_date": "Fri, 24 Aug 2007 09:25:53 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n LIMIT 1" }, { "msg_contents": "Below is a patch against Greenplum Database that fixes the problem.\n\n- Luke\n\n------ Forwarded Message\nFrom: Luke Lonergan <[email protected]>\nDate: Fri, 24 Aug 2007 09:25:53 -0700\nTo: Heikki Linnakangas <[email protected]>, Anton <[email protected]>\nCc: <[email protected]>\nConversation: [PERFORM] partitioned table and ORDER BY indexed_field DESC\nLIMIT 1\nSubject: Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC\nLIMIT 1\n\nBelow is a patch against 8.2.4 (more or less), Heikki can you take a look at\nit?\n\nThis enables the use of index scan of a child table by recognizing sort\norder of the append node. Kurt Harriman did the work.\n\n- Luke\n\nIndex: cdb-pg/src/backend/optimizer/path/indxpath.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/path/indxpath.c,v\ndiff -u -N -r1.22 -r1.22.2.1\n--- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n-0000 1.22\n+++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n-0000 1.22.2.1\n@@ -379,8 +379,51 @@\n index_pathkeys = build_index_pathkeys(root, index,\n ForwardScanDirection,\n true);\n- useful_pathkeys = truncate_useless_pathkeys(root, rel,\n- index_pathkeys);\n+ /*\n+ * CDB: For appendrel child, pathkeys contain Var nodes in\nterms \n+ * of the child's baserel. Transform the pathkey list to refer\nto \n+ * columns of the appendrel.\n+ */\n+ if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n+ {\n+ AppendRelInfo *appinfo = NULL;\n+ RelOptInfo *appendrel = NULL;\n+ ListCell *appcell;\n+ CdbPathLocus notalocus;\n+\n+ /* Find the appendrel of which this baserel is a child. 
*/\n+ foreach(appcell, root->append_rel_list)\n+ {\n+ appinfo = (AppendRelInfo *)lfirst(appcell);\n+ if (appinfo->child_relid == rel->relid)\n+ break;\n+ }\n+ Assert(appinfo);\n+ appendrel = find_base_rel(root, appinfo->parent_relid);\n+\n+ /*\n+ * The pathkey list happens to have the same format as the\n+ * partitioning key of a Hashed locus, so by disguising it\n+ * we can use cdbpathlocus_pull_above_projection() to do\nthe \n+ * transformation.\n+ */\n+ CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n+ notalocus =\n+ cdbpathlocus_pull_above_projection(root,\n+ notalocus,\n+ rel->relids,\n+ rel->reltargetlist,\n+ \nappendrel->reltargetlist,\n+ appendrel->relid);\n+ if (CdbPathLocus_IsHashed(notalocus))\n+ index_pathkeys = truncate_useless_pathkeys(root,\nappendrel,\n+ \nnotalocus.partkey);\n+ else\n+ index_pathkeys = NULL;\n+ }\n+\n+ useful_pathkeys = truncate_useless_pathkeys(root, rel,\n+ index_pathkeys);\n }\n else\n useful_pathkeys = NIL;\nIndex: cdb-pg/src/backend/optimizer/path/pathkeys.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/path/pathkeys.c,v\ndiff -u -N -r1.18 -r1.18.2.1\n--- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n-0000 1.18\n+++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n-0000 1.18.2.1\n@@ -1403,55 +1403,53 @@\n {\n PathKeyItem *item;\n Expr *newexpr;\n+ AttrNumber targetindex;\n \n Assert(pathkey);\n \n- /* Use constant expr if available. Will be at head of list. */\n- if (CdbPathkeyEqualsConstant(pathkey))\n+ /* Find an expr that we can rewrite to use the projected columns. */\n+ item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n+ relids,\n+ targetlist,\n+ &targetindex); // OUT\n+ \n+ /* If not found, see if the equiv class contains a constant expr. */\n+ if (!item &&\n+ CdbPathkeyEqualsConstant(pathkey))\n {\n item = (PathKeyItem *)linitial(pathkey);\n newexpr = (Expr *)copyObject(item->key);\n }\n \n- /* New vars for old! */\n- else\n- {\n- AttrNumber targetindex;\n+ /* Fail if no usable expr. */\n+ else if (!item)\n+ return NULL;\n \n- /* Find an expr that we can rewrite to use the projected columns.\n*/\n- item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n- relids,\n- targetlist,\n- &targetindex); // OUT\n- if (!item)\n- return NULL;\n+ /* If found matching targetlist item, make a Var that references it. */\n+ else if (targetindex > 0)\n+ newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n+ targetindex,\n+ newvarlist,\n+ (Expr *)item->key);\n \n- /* If found matching targetlist item, make a Var that references\nit. */\n- if (targetindex > 0)\n- newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n- targetindex,\n- newvarlist,\n- (Expr *)item->key);\n+ /* Replace expr's Var nodes with new ones referencing the targetlist.\n*/\n+ else\n+ newexpr = cdbpullup_expr((Expr *)item->key,\n+ targetlist,\n+ newvarlist,\n+ newrelid);\n \n- /* Replace expr's Var nodes with new ones referencing the\ntargetlist. */\n- else\n- newexpr = cdbpullup_expr((Expr *)item->key,\n- targetlist,\n- newvarlist,\n- newrelid);\n+ /* Pull up RelabelType node too, unless tlist expr has right type. 
*/\n+ if (IsA(item->key, RelabelType))\n+ {\n+ RelabelType *oldrelabel = (RelabelType *)item->key;\n \n- /* Pull up RelabelType node too, unless tlist expr has right type.\n*/\n- if (IsA(item->key, RelabelType))\n- {\n- RelabelType *oldrelabel = (RelabelType *)item->key;\n-\n- if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n- oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n- newexpr = (Expr *)makeRelabelType(newexpr,\n- oldrelabel->resulttype,\n- oldrelabel->resulttypmod,\n- \noldrelabel->relabelformat);\n- }\n+ if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n+ oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n+ newexpr = (Expr *)makeRelabelType(newexpr,\n+ oldrelabel->resulttype,\n+ oldrelabel->resulttypmod,\n+ oldrelabel->relabelformat);\n }\n Insist(newexpr);\n \nIndex: cdb-pg/src/backend/optimizer/util/pathnode.c\n===================================================================\nRCS file: \n/data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\ner/util/pathnode.c,v\ndiff -u -N -r1.52.2.4 -r1.52.2.5\n--- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n-0000 1.52.2.4\n+++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n-0000 1.52.2.5\n@@ -1563,7 +1563,15 @@\n pathnode->path.rescannable = false;\n }\n \n- return pathnode;\n+ /* \n+ * CDB: If there is exactly one subpath, its ordering is preserved.\n+ * Child rel's pathkey exprs are already expressed in terms of the\n+ * columns of the parent appendrel. See find_usable_indexes().\n+ */\n+ if (list_length(subpaths) == 1)\n+ pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n+ \n+ return pathnode;\n }\n \n /*\n\n\nOn 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n\n> Anton wrote:\n>>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n>>>> QUERY PLAN\n>>> ----------------------------------------------------------------------------\n>>> -----------------------------\n>>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n>>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n>>>> Sort Key: public.n_traf.date_time\n>>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n>>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n>>>> -> Seq Scan on n_traf (cost=0.00..22.30\n>>>> rows=1230 width=32)\n>>>> -> Seq Scan on n_traf_y2007m01 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>> ...\n>>>> -> Seq Scan on n_traf_y2007m12 n_traf\n>>>> (cost=0.00..22.30 rows=1230 width=32)\n>>>> (18 rows)\n>>>> \n>>>> Why it no uses indexes at all?\n>>>> -------------------------------------------\n>>> I'm no expert but I'd guess that the the planner doesn't know which\n>>> partition holds the latest time so it has to read them all.\n>> \n>> Agree. But why it not uses indexes when it reading them?\n> \n> The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> below the append node. 
Therefore it needs to fetch all rows from all the\n> tables, and the fastest way to do that is a seq scan.\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 7: You can help support the PostgreSQL project by donating at\n\n http://www.postgresql.org/about/donate\n\n------ End of Forwarded Message\n\n\n", "msg_date": "Sat, 25 Aug 2007 09:56:42 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "FW: was [PERFORM] partitioned table and ORDER BY indexed_field\n\tDESC LIMIT 1" }, { "msg_contents": "Pn, 2007 08 24 14:53 +0600, Anton rašė:\n> Hi.\n> \n> I just created partitioned table, n_traf, sliced by month\n> (n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are\n> indexed by 'date_time' column.\n> Then I populate it (last value have date 2007-08-...) and do VACUUM\n> ANALYZE ON n_traf_y2007... all of it.\n> \n> Now I try to select latest value (ORDER BY date_time LIMIT 1), but\n> Postgres produced the ugly plan:\n> \n> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> QUERY PL \n\ncan you test performance and send explain results of select like this :\nselect * from n_traf where date_time = (select max(date_time) from\nn_traf);\n\ni have similar problem with ~70M rows table (then using ordering), but\nmy table not partitioned. \nI`m interesting how this select will works on partitioned table. \n\n\n-- \nPagarbiai, \nTomas Tamošaitis\nProjektų Vadovas\nConnecty \nSkype://mazgis1009?add \nMob: +370 652 86127 \ne-pastas: [email protected]\nweb: www.connecty.lt\n\n", "msg_date": "Mon, 27 Aug 2007 15:57:29 +0300", "msg_from": "Tomas Tamosaitis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC\n\tLIMIT 1" }, { "msg_contents": "Bruce, would you please add this to the 8.4 patch queue so we remember\nto look at this later?\n\nIt didn't occur to me that we can do that in the degenerate case when\nthere's just a single node below the Append. A more general solution\nwould be to check if the pathkeys of all the child nodes match, and do a\n\"merge append\" similar to a merge join.\n\nLuke Lonergan wrote:\n> Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\n> it?\n> \n> This enables the use of index scan of a child table by recognizing sort\n> order of the append node. Kurt Harriman did the work.\n> \n> - Luke\n> \n> Index: cdb-pg/src/backend/optimizer/path/indxpath.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/indxpath.c,v\n> diff -u -N -r1.22 -r1.22.2.1\n> --- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n> -0000 1.22\n> +++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n> -0000 1.22.2.1\n> @@ -379,8 +379,51 @@\n> index_pathkeys = build_index_pathkeys(root, index,\n> ForwardScanDirection,\n> true);\n> - useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> - index_pathkeys);\n> + /*\n> + * CDB: For appendrel child, pathkeys contain Var nodes in\n> terms \n> + * of the child's baserel. Transform the pathkey list to refer\n> to \n> + * columns of the appendrel.\n> + */\n> + if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n> + {\n> + AppendRelInfo *appinfo = NULL;\n> + RelOptInfo *appendrel = NULL;\n> + ListCell *appcell;\n> + CdbPathLocus notalocus;\n> +\n> + /* Find the appendrel of which this baserel is a child. 
*/\n> + foreach(appcell, root->append_rel_list)\n> + {\n> + appinfo = (AppendRelInfo *)lfirst(appcell);\n> + if (appinfo->child_relid == rel->relid)\n> + break;\n> + }\n> + Assert(appinfo);\n> + appendrel = find_base_rel(root, appinfo->parent_relid);\n> +\n> + /*\n> + * The pathkey list happens to have the same format as the\n> + * partitioning key of a Hashed locus, so by disguising it\n> + * we can use cdbpathlocus_pull_above_projection() to do\n> the \n> + * transformation.\n> + */\n> + CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n> + notalocus =\n> + cdbpathlocus_pull_above_projection(root,\n> + notalocus,\n> + rel->relids,\n> + rel->reltargetlist,\n> + \n> appendrel->reltargetlist,\n> + appendrel->relid);\n> + if (CdbPathLocus_IsHashed(notalocus))\n> + index_pathkeys = truncate_useless_pathkeys(root,\n> appendrel,\n> + \n> notalocus.partkey);\n> + else\n> + index_pathkeys = NULL;\n> + }\n> +\n> + useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> + index_pathkeys);\n> }\n> else\n> useful_pathkeys = NIL;\n> Index: cdb-pg/src/backend/optimizer/path/pathkeys.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/pathkeys.c,v\n> diff -u -N -r1.18 -r1.18.2.1\n> --- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n> -0000 1.18\n> +++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n> -0000 1.18.2.1\n> @@ -1403,55 +1403,53 @@\n> {\n> PathKeyItem *item;\n> Expr *newexpr;\n> + AttrNumber targetindex;\n> \n> Assert(pathkey);\n> \n> - /* Use constant expr if available. Will be at head of list. */\n> - if (CdbPathkeyEqualsConstant(pathkey))\n> + /* Find an expr that we can rewrite to use the projected columns. */\n> + item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> + relids,\n> + targetlist,\n> + &targetindex); // OUT\n> + \n> + /* If not found, see if the equiv class contains a constant expr. */\n> + if (!item &&\n> + CdbPathkeyEqualsConstant(pathkey))\n> {\n> item = (PathKeyItem *)linitial(pathkey);\n> newexpr = (Expr *)copyObject(item->key);\n> }\n> \n> - /* New vars for old! */\n> - else\n> - {\n> - AttrNumber targetindex;\n> + /* Fail if no usable expr. */\n> + else if (!item)\n> + return NULL;\n> \n> - /* Find an expr that we can rewrite to use the projected columns.\n> */\n> - item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> - relids,\n> - targetlist,\n> - &targetindex); // OUT\n> - if (!item)\n> - return NULL;\n> + /* If found matching targetlist item, make a Var that references it. */\n> + else if (targetindex > 0)\n> + newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> + targetindex,\n> + newvarlist,\n> + (Expr *)item->key);\n> \n> - /* If found matching targetlist item, make a Var that references\n> it. */\n> - if (targetindex > 0)\n> - newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> - targetindex,\n> - newvarlist,\n> - (Expr *)item->key);\n> + /* Replace expr's Var nodes with new ones referencing the targetlist.\n> */\n> + else\n> + newexpr = cdbpullup_expr((Expr *)item->key,\n> + targetlist,\n> + newvarlist,\n> + newrelid);\n> \n> - /* Replace expr's Var nodes with new ones referencing the\n> targetlist. */\n> - else\n> - newexpr = cdbpullup_expr((Expr *)item->key,\n> - targetlist,\n> - newvarlist,\n> - newrelid);\n> + /* Pull up RelabelType node too, unless tlist expr has right type. 
*/\n> + if (IsA(item->key, RelabelType))\n> + {\n> + RelabelType *oldrelabel = (RelabelType *)item->key;\n> \n> - /* Pull up RelabelType node too, unless tlist expr has right type.\n> */\n> - if (IsA(item->key, RelabelType))\n> - {\n> - RelabelType *oldrelabel = (RelabelType *)item->key;\n> -\n> - if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> - oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> - newexpr = (Expr *)makeRelabelType(newexpr,\n> - oldrelabel->resulttype,\n> - oldrelabel->resulttypmod,\n> - \n> oldrelabel->relabelformat);\n> - }\n> + if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> + oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> + newexpr = (Expr *)makeRelabelType(newexpr,\n> + oldrelabel->resulttype,\n> + oldrelabel->resulttypmod,\n> + oldrelabel->relabelformat);\n> }\n> Insist(newexpr);\n> \n> Index: cdb-pg/src/backend/optimizer/util/pathnode.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/util/pathnode.c,v\n> diff -u -N -r1.52.2.4 -r1.52.2.5\n> --- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n> -0000 1.52.2.4\n> +++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n> -0000 1.52.2.5\n> @@ -1563,7 +1563,15 @@\n> pathnode->path.rescannable = false;\n> }\n> \n> - return pathnode;\n> + /* \n> + * CDB: If there is exactly one subpath, its ordering is preserved.\n> + * Child rel's pathkey exprs are already expressed in terms of the\n> + * columns of the parent appendrel. See find_usable_indexes().\n> + */\n> + if (list_length(subpaths) == 1)\n> + pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n> + \n> + return pathnode;\n> }\n> \n> /*\n> \n> \n> On 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n> \n>> Anton wrote:\n>>>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n>>>>> QUERY PLAN\n>>>> ----------------------------------------------------------------------------\n>>>> -----------------------------\n>>>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n>>>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n>>>>> Sort Key: public.n_traf.date_time\n>>>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n>>>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n>>>>> -> Seq Scan on n_traf (cost=0.00..22.30\n>>>>> rows=1230 width=32)\n>>>>> -> Seq Scan on n_traf_y2007m01 n_traf\n>>>>> (cost=0.00..22.30 rows=1230 width=32)\n>>> ...\n>>>>> -> Seq Scan on n_traf_y2007m12 n_traf\n>>>>> (cost=0.00..22.30 rows=1230 width=32)\n>>>>> (18 rows)\n>>>>>\n>>>>> Why it no uses indexes at all?\n>>>>> -------------------------------------------\n>>>> I'm no expert but I'd guess that the the planner doesn't know which\n>>>> partition holds the latest time so it has to read them all.\n>>> Agree. But why it not uses indexes when it reading them?\n>> The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n>> below the append node. 
Therefore it needs to fetch all rows from all the\n>> tables, and the fastest way to do that is a seq scan.\n> \n> \n\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 28 Aug 2007 11:20:16 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "\nThis has been saved for the 8.4 release:\n\n\thttp://momjian.postgresql.org/cgi-bin/pgpatches_hold\n\n---------------------------------------------------------------------------\n\nLuke Lonergan wrote:\n> Below is a patch against Greenplum Database that fixes the problem.\n> \n> - Luke\n> \n> ------ Forwarded Message\n> From: Luke Lonergan <[email protected]>\n> Date: Fri, 24 Aug 2007 09:25:53 -0700\n> To: Heikki Linnakangas <[email protected]>, Anton <[email protected]>\n> Cc: <[email protected]>\n> Conversation: [PERFORM] partitioned table and ORDER BY indexed_field DESC\n> LIMIT 1\n> Subject: Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC\n> LIMIT 1\n> \n> Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\n> it?\n> \n> This enables the use of index scan of a child table by recognizing sort\n> order of the append node. Kurt Harriman did the work.\n> \n> - Luke\n> \n> Index: cdb-pg/src/backend/optimizer/path/indxpath.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/indxpath.c,v\n> diff -u -N -r1.22 -r1.22.2.1\n> --- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n> -0000 1.22\n> +++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n> -0000 1.22.2.1\n> @@ -379,8 +379,51 @@\n> index_pathkeys = build_index_pathkeys(root, index,\n> ForwardScanDirection,\n> true);\n> - useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> - index_pathkeys);\n> + /*\n> + * CDB: For appendrel child, pathkeys contain Var nodes in\n> terms \n> + * of the child's baserel. Transform the pathkey list to refer\n> to \n> + * columns of the appendrel.\n> + */\n> + if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n> + {\n> + AppendRelInfo *appinfo = NULL;\n> + RelOptInfo *appendrel = NULL;\n> + ListCell *appcell;\n> + CdbPathLocus notalocus;\n> +\n> + /* Find the appendrel of which this baserel is a child. 
*/\n> + foreach(appcell, root->append_rel_list)\n> + {\n> + appinfo = (AppendRelInfo *)lfirst(appcell);\n> + if (appinfo->child_relid == rel->relid)\n> + break;\n> + }\n> + Assert(appinfo);\n> + appendrel = find_base_rel(root, appinfo->parent_relid);\n> +\n> + /*\n> + * The pathkey list happens to have the same format as the\n> + * partitioning key of a Hashed locus, so by disguising it\n> + * we can use cdbpathlocus_pull_above_projection() to do\n> the \n> + * transformation.\n> + */\n> + CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n> + notalocus =\n> + cdbpathlocus_pull_above_projection(root,\n> + notalocus,\n> + rel->relids,\n> + rel->reltargetlist,\n> + \n> appendrel->reltargetlist,\n> + appendrel->relid);\n> + if (CdbPathLocus_IsHashed(notalocus))\n> + index_pathkeys = truncate_useless_pathkeys(root,\n> appendrel,\n> + \n> notalocus.partkey);\n> + else\n> + index_pathkeys = NULL;\n> + }\n> +\n> + useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> + index_pathkeys);\n> }\n> else\n> useful_pathkeys = NIL;\n> Index: cdb-pg/src/backend/optimizer/path/pathkeys.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/pathkeys.c,v\n> diff -u -N -r1.18 -r1.18.2.1\n> --- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n> -0000 1.18\n> +++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n> -0000 1.18.2.1\n> @@ -1403,55 +1403,53 @@\n> {\n> PathKeyItem *item;\n> Expr *newexpr;\n> + AttrNumber targetindex;\n> \n> Assert(pathkey);\n> \n> - /* Use constant expr if available. Will be at head of list. */\n> - if (CdbPathkeyEqualsConstant(pathkey))\n> + /* Find an expr that we can rewrite to use the projected columns. */\n> + item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> + relids,\n> + targetlist,\n> + &targetindex); // OUT\n> + \n> + /* If not found, see if the equiv class contains a constant expr. */\n> + if (!item &&\n> + CdbPathkeyEqualsConstant(pathkey))\n> {\n> item = (PathKeyItem *)linitial(pathkey);\n> newexpr = (Expr *)copyObject(item->key);\n> }\n> \n> - /* New vars for old! */\n> - else\n> - {\n> - AttrNumber targetindex;\n> + /* Fail if no usable expr. */\n> + else if (!item)\n> + return NULL;\n> \n> - /* Find an expr that we can rewrite to use the projected columns.\n> */\n> - item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> - relids,\n> - targetlist,\n> - &targetindex); // OUT\n> - if (!item)\n> - return NULL;\n> + /* If found matching targetlist item, make a Var that references it. */\n> + else if (targetindex > 0)\n> + newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> + targetindex,\n> + newvarlist,\n> + (Expr *)item->key);\n> \n> - /* If found matching targetlist item, make a Var that references\n> it. */\n> - if (targetindex > 0)\n> - newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> - targetindex,\n> - newvarlist,\n> - (Expr *)item->key);\n> + /* Replace expr's Var nodes with new ones referencing the targetlist.\n> */\n> + else\n> + newexpr = cdbpullup_expr((Expr *)item->key,\n> + targetlist,\n> + newvarlist,\n> + newrelid);\n> \n> - /* Replace expr's Var nodes with new ones referencing the\n> targetlist. */\n> - else\n> - newexpr = cdbpullup_expr((Expr *)item->key,\n> - targetlist,\n> - newvarlist,\n> - newrelid);\n> + /* Pull up RelabelType node too, unless tlist expr has right type. 
*/\n> + if (IsA(item->key, RelabelType))\n> + {\n> + RelabelType *oldrelabel = (RelabelType *)item->key;\n> \n> - /* Pull up RelabelType node too, unless tlist expr has right type.\n> */\n> - if (IsA(item->key, RelabelType))\n> - {\n> - RelabelType *oldrelabel = (RelabelType *)item->key;\n> -\n> - if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> - oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> - newexpr = (Expr *)makeRelabelType(newexpr,\n> - oldrelabel->resulttype,\n> - oldrelabel->resulttypmod,\n> - \n> oldrelabel->relabelformat);\n> - }\n> + if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> + oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> + newexpr = (Expr *)makeRelabelType(newexpr,\n> + oldrelabel->resulttype,\n> + oldrelabel->resulttypmod,\n> + oldrelabel->relabelformat);\n> }\n> Insist(newexpr);\n> \n> Index: cdb-pg/src/backend/optimizer/util/pathnode.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/util/pathnode.c,v\n> diff -u -N -r1.52.2.4 -r1.52.2.5\n> --- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n> -0000 1.52.2.4\n> +++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n> -0000 1.52.2.5\n> @@ -1563,7 +1563,15 @@\n> pathnode->path.rescannable = false;\n> }\n> \n> - return pathnode;\n> + /* \n> + * CDB: If there is exactly one subpath, its ordering is preserved.\n> + * Child rel's pathkey exprs are already expressed in terms of the\n> + * columns of the parent appendrel. See find_usable_indexes().\n> + */\n> + if (list_length(subpaths) == 1)\n> + pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n> + \n> + return pathnode;\n> }\n> \n> /*\n> \n> \n> On 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n> \n> > Anton wrote:\n> >>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> >>>> QUERY PLAN\n> >>> ----------------------------------------------------------------------------\n> >>> -----------------------------\n> >>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> >>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> >>>> Sort Key: public.n_traf.date_time\n> >>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Seq Scan on n_traf (cost=0.00..22.30\n> >>>> rows=1230 width=32)\n> >>>> -> Seq Scan on n_traf_y2007m01 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >> ...\n> >>>> -> Seq Scan on n_traf_y2007m12 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >>>> (18 rows)\n> >>>> \n> >>>> Why it no uses indexes at all?\n> >>>> -------------------------------------------\n> >>> I'm no expert but I'd guess that the the planner doesn't know which\n> >>> partition holds the latest time so it has to read them all.\n> >> \n> >> Agree. But why it not uses indexes when it reading them?\n> > \n> > The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> > below the append node. 
Therefore it needs to fetch all rows from all the\n> > tables, and the fastest way to do that is a seq scan.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> ------ End of Forwarded Message\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://www.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Tue, 28 Aug 2007 11:44:03 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: was [PERFORM] partitioned table and ORDER\n\tBY indexed_field DESC LIMIT 1" }, { "msg_contents": "I want ask about problem with partioned tables (it was discussed some\ntime ago, see below). Is it fixed somehow in 8.2.5 ?\n\n2007/8/24, Luke Lonergan <[email protected]>:\n> Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\n> it?\n>\n> This enables the use of index scan of a child table by recognizing sort\n> order of the append node. Kurt Harriman did the work.\n...\n>\n> On 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n>\n> > Anton wrote:\n> >>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> >>>> QUERY PLAN\n> >>> ----------------------------------------------------------------------------\n> >>> -----------------------------\n> >>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> >>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> >>>> Sort Key: public.n_traf.date_time\n> >>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Seq Scan on n_traf (cost=0.00..22.30\n> >>>> rows=1230 width=32)\n> >>>> -> Seq Scan on n_traf_y2007m01 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >> ...\n> >>>> -> Seq Scan on n_traf_y2007m12 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >>>> (18 rows)\n> >>>>\n> >>>> Why it no uses indexes at all?\n> >>>> -------------------------------------------\n> >>> I'm no expert but I'd guess that the the planner doesn't know which\n> >>> partition holds the latest time so it has to read them all.\n> >>\n> >> Agree. But why it not uses indexes when it reading them?\n> >\n> > The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> > below the append node. Therefore it needs to fetch all rows from all the\n> > tables, and the fastest way to do that is a seq scan.\n\n-- \nengineer\n", "msg_date": "Sat, 27 Oct 2007 10:26:21 +0600", "msg_from": "Anton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Anton <[email protected]> writes:\n> I want ask about problem with partioned tables (it was discussed some\n> time ago, see below). Is it fixed somehow in 8.2.5 ?\n\nNo. The patch you mention never was considered at all, since it\nconsisted of a selective quote from Greenplum source code. It would\nnot even compile in community Postgres, because it adds calls to half a\ndozen Greenplum routines that we've never seen. 
Not to mention that\nthe base of the diff is Greenplum proprietary code, so the patch itself\nwouldn't even apply successfully.\n\nAs to whether it would work if we had the full story ... well, not\nhaving the full story, I don't want to opine.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 27 Oct 2007 01:37:29 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1 " }, { "msg_contents": "2007/10/27, Tom Lane <[email protected]>:\n> Anton <[email protected]> writes:\n> > I want ask about problem with partioned tables (it was discussed some\n> > time ago, see below). Is it fixed somehow in 8.2.5 ?\n>\n> No. The patch you mention never was considered at all, since it\n> consisted of a selective quote from Greenplum source code. It would\n...\n> As to whether it would work if we had the full story ... well, not\n> having the full story, I don't want to opine.\n\n\nSorry, my english is not good enough to understand your last sentence.\n\nI repost here my original question \"Why it no uses indexes?\" (on\npartitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\nmean that you miss this discussion.\n\n> I just created partitioned table, n_traf, sliced by month\n> (n_traf_y2007m01, n_traf_y2007m02... and so on, see below). They are\n> indexed by 'date_time' column.\n> Then I populate it (last value have date 2007-08-...) and do VACUUM\n> ANALYZE ON n_traf_y2007... all of it.\n>\n> Now I try to select latest value (ORDER BY date_time LIMIT 1), but\n> Postgres produced the ugly plan:\n>\n> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> QUERY PLAN\n> ---------------------------------------------------------------------------------------------------------\n> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> Sort Key: public.n_traf.date_time\n> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> -> Append (cost=0.00..100877.99 rows=5643499 width=32)\n> -> Seq Scan on n_traf (cost=0.00..22.30\n> rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m01 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m02 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m03 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m04 n_traf\n> (cost=0.00..1.01 rows=1 width=32)\n> -> Seq Scan on n_traf_y2007m05 n_traf\n> (cost=0.00..9110.89 rows=509689 width=32)\n> -> Seq Scan on n_traf_y2007m06 n_traf\n> (cost=0.00..32003.89 rows=1790489 width=32)\n> -> Seq Scan on n_traf_y2007m07 n_traf\n> (cost=0.00..33881.10 rows=1895510 width=32)\n> -> Seq Scan on n_traf_y2007m08 n_traf\n> (cost=0.00..25702.70 rows=1437970 width=32)\n> -> Seq Scan on n_traf_y2007m09 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m10 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m11 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> -> Seq Scan on n_traf_y2007m12 n_traf\n> (cost=0.00..22.30 rows=1230 width=32)\n> (18 rows)\n>\n>\n> Why it no uses indexes at all?\n> -------------------------------------------\n>\n> The simplier query goes fast, use index.\n> =# explain analyze SELECT * FROM n_traf_y2007m08 ORDER BY date_time\n> DESC LIMIT 1;\n>\n> QUERY PLAN\n> 
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Limit (cost=0.00..0.03 rows=1 width=32) (actual time=0.156..0.158\n> rows=1 loops=1)\n> -> Index Scan Backward using n_traf_y2007m08_date_time_login_id on\n> n_traf_y2007m08 (cost=0.00..39489.48 rows=1437970 width=32) (actual\n> time=0.150..0.150 rows=1 loops=1)\n> Total runtime: 0.241 ms\n> (3 rows)\n>\n> Table n_traf looks like this:\n> =# \\d n_traf\n> Table \"public.n_traf\"\n> Column | Type | Modifiers\n> -------------+-----------------------------+--------------------\n> login_id | integer | not null\n> traftype_id | integer | not null\n> date_time | timestamp without time zone | not null\n> bytes_in | bigint | not null default 0\n> bytes_out | bigint | not null default 0\n> Indexes:\n> \"n_traf_login_id_key\" UNIQUE, btree (login_id, traftype_id, date_time)\n> \"n_traf_date_time_login_id\" btree (date_time, login_id)\n> Foreign-key constraints:\n> \"n_traf_login_id_fkey\" FOREIGN KEY (login_id) REFERENCES\n> n_logins(login_id) ON UPDATE CASCADE ON DELETE CASCADE\n> \"n_traf_traftype_id_fkey\" FOREIGN KEY (traftype_id) REFERENCES\n> n_traftypes(traftype_id) ON UPDATE CASCADE\n> Rules:\n> n_traf_insert_y2007m01 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-01-01'::date AND new.date_time <\n> '2007-02-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m01 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m02 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-02-01'::date AND new.date_time <\n> '2007-03-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m02 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m03 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-03-01'::date AND new.date_time <\n> '2007-04-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m03 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m04 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-04-01'::date AND new.date_time <\n> '2007-05-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m04 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m05 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-05-01'::date AND new.date_time <\n> '2007-06-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m05 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m06 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-06-01'::date AND new.date_time <\n> '2007-07-01 00:00:00'::timestamp without time zone DO INSTEAD\n> INSERT INTO n_traf_y2007m06 (login_id, traftype_id, date_time,\n> bytes_in, bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m07 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-07-01'::date AND 
new.date_time <\n> '2007-08-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m07 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m08 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-08-01'::date AND new.date_time <\n> '2007-09-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m08 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m09 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-09-01'::date AND new.date_time <\n> '2007-10-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m09 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m10 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-10-01'::date AND new.date_time <\n> '2007-11-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m10 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m11 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-11-01'::date AND new.date_time <\n> '2007-12-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m11 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n> n_traf_insert_y2007m12 AS\n> ON INSERT TO n_traf\n> WHERE new.date_time >= '2007-12-01'::date AND new.date_time <\n> '2008-01-01 00:00:00'::timestamp without time zone DO INSTEAD INSERT\n> INTO n_traf_y2007m12 (login_id, traftype_id, date_time, bytes_in,\n> bytes_out)\n> VALUES (new.login_id, new.traftype_id, new.date_time, new.bytes_in,\n> new.bytes_out)\n>\n>\n> Tables n_traf_y2007m... 
looks like these\n>\n> Table \"public.n_traf_y2007m01\"\n> Column | Type | Modifiers\n> -------------+-----------------------------+--------------------\n> login_id | integer | not null\n> traftype_id | integer | not null\n> date_time | timestamp without time zone | not null\n> bytes_in | bigint | not null default 0\n> bytes_out | bigint | not null default 0\n> Indexes:\n> \"n_traf_y2007m01_date_time_login_id\" btree (date_time, login_id)\n> Check constraints:\n> \"n_traf_y2007m01_date_time_check\" CHECK (date_time >=\n> '2007-01-01'::date AND date_time < '2007-02-01 00:00:00'::timestamp\n> without time zone)\n> Inherits: n_traf\n>\n> Index \"public.n_traf_y2007m01_date_time_login_id\"\n> Column | Type\n> -----------+-----------------------------\n> date_time | timestamp without time zone\n> login_id | integer\n> btree, for table \"public.n_traf_y2007m01\"\n>\n> Table \"public.n_traf_y2007m02\"\n> Column | Type | Modifiers\n> -------------+-----------------------------+--------------------\n> login_id | integer | not null\n> traftype_id | integer | not null\n> date_time | timestamp without time zone | not null\n> bytes_in | bigint | not null default 0\n> bytes_out | bigint | not null default 0\n> Indexes:\n> \"n_traf_y2007m02_date_time_login_id\" btree (date_time, login_id)\n> Check constraints:\n> \"n_traf_y2007m02_date_time_check\" CHECK (date_time >=\n> '2007-02-01'::date AND date_time < '2007-03-01 00:00:00'::timestamp\n> without time zone)\n> Inherits: n_traf\n> ...\n\n-- \nengineer\n", "msg_date": "Sat, 27 Oct 2007 14:53:30 +0600", "msg_from": "Anton <[email protected]>", "msg_from_op": true, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT 1" }, { "msg_contents": "Anton wrote:\n> I repost here my original question \"Why it no uses indexes?\" (on\n> partitioned table and ORDER BY indexed_field DESC LIMIT 1), if you\n> mean that you miss this discussion.\n\nAs I said back then:\n\nThe planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\nbelow the append node.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 27 Oct 2007 10:11:32 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field DESC LIMIT\n 1" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Bruce, would you please add this to the 8.4 patch queue so we remember\n> to look at this later?\n\nOK, added to queue, but Tom's patch queue comment is:\n\nThis is useless since it does not represent a complete patch; the\nprovided code calls a lot of Greenplum-private routines that weren't\nprovided. It's not even reviewable let alone a candidate to apply.\n\n---------------------------------------------------------------------------\n\n\n> \n> It didn't occur to me that we can do that in the degenerate case when\n> there's just a single node below the Append. A more general solution\n> would be to check if the pathkeys of all the child nodes match, and do a\n> \"merge append\" similar to a merge join.\n> \n> Luke Lonergan wrote:\n> > Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\n> > it?\n> > \n> > This enables the use of index scan of a child table by recognizing sort\n> > order of the append node. 
Kurt Harriman did the work.\n> > \n> > - Luke\n> > \n> > Index: cdb-pg/src/backend/optimizer/path/indxpath.c\n> > ===================================================================\n> > RCS file: \n> > /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> > er/path/indxpath.c,v\n> > diff -u -N -r1.22 -r1.22.2.1\n> > --- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n> > -0000 1.22\n> > +++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n> > -0000 1.22.2.1\n> > @@ -379,8 +379,51 @@\n> > index_pathkeys = build_index_pathkeys(root, index,\n> > ForwardScanDirection,\n> > true);\n> > - useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> > - index_pathkeys);\n> > + /*\n> > + * CDB: For appendrel child, pathkeys contain Var nodes in\n> > terms \n> > + * of the child's baserel. Transform the pathkey list to refer\n> > to \n> > + * columns of the appendrel.\n> > + */\n> > + if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n> > + {\n> > + AppendRelInfo *appinfo = NULL;\n> > + RelOptInfo *appendrel = NULL;\n> > + ListCell *appcell;\n> > + CdbPathLocus notalocus;\n> > +\n> > + /* Find the appendrel of which this baserel is a child. */\n> > + foreach(appcell, root->append_rel_list)\n> > + {\n> > + appinfo = (AppendRelInfo *)lfirst(appcell);\n> > + if (appinfo->child_relid == rel->relid)\n> > + break;\n> > + }\n> > + Assert(appinfo);\n> > + appendrel = find_base_rel(root, appinfo->parent_relid);\n> > +\n> > + /*\n> > + * The pathkey list happens to have the same format as the\n> > + * partitioning key of a Hashed locus, so by disguising it\n> > + * we can use cdbpathlocus_pull_above_projection() to do\n> > the \n> > + * transformation.\n> > + */\n> > + CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n> > + notalocus =\n> > + cdbpathlocus_pull_above_projection(root,\n> > + notalocus,\n> > + rel->relids,\n> > + rel->reltargetlist,\n> > + \n> > appendrel->reltargetlist,\n> > + appendrel->relid);\n> > + if (CdbPathLocus_IsHashed(notalocus))\n> > + index_pathkeys = truncate_useless_pathkeys(root,\n> > appendrel,\n> > + \n> > notalocus.partkey);\n> > + else\n> > + index_pathkeys = NULL;\n> > + }\n> > +\n> > + useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> > + index_pathkeys);\n> > }\n> > else\n> > useful_pathkeys = NIL;\n> > Index: cdb-pg/src/backend/optimizer/path/pathkeys.c\n> > ===================================================================\n> > RCS file: \n> > /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> > er/path/pathkeys.c,v\n> > diff -u -N -r1.18 -r1.18.2.1\n> > --- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n> > -0000 1.18\n> > +++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n> > -0000 1.18.2.1\n> > @@ -1403,55 +1403,53 @@\n> > {\n> > PathKeyItem *item;\n> > Expr *newexpr;\n> > + AttrNumber targetindex;\n> > \n> > Assert(pathkey);\n> > \n> > - /* Use constant expr if available. Will be at head of list. */\n> > - if (CdbPathkeyEqualsConstant(pathkey))\n> > + /* Find an expr that we can rewrite to use the projected columns. */\n> > + item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> > + relids,\n> > + targetlist,\n> > + &targetindex); // OUT\n> > + \n> > + /* If not found, see if the equiv class contains a constant expr. */\n> > + if (!item &&\n> > + CdbPathkeyEqualsConstant(pathkey))\n> > {\n> > item = (PathKeyItem *)linitial(pathkey);\n> > newexpr = (Expr *)copyObject(item->key);\n> > }\n> > \n> > - /* New vars for old! 
*/\n> > - else\n> > - {\n> > - AttrNumber targetindex;\n> > + /* Fail if no usable expr. */\n> > + else if (!item)\n> > + return NULL;\n> > \n> > - /* Find an expr that we can rewrite to use the projected columns.\n> > */\n> > - item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> > - relids,\n> > - targetlist,\n> > - &targetindex); // OUT\n> > - if (!item)\n> > - return NULL;\n> > + /* If found matching targetlist item, make a Var that references it. */\n> > + else if (targetindex > 0)\n> > + newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> > + targetindex,\n> > + newvarlist,\n> > + (Expr *)item->key);\n> > \n> > - /* If found matching targetlist item, make a Var that references\n> > it. */\n> > - if (targetindex > 0)\n> > - newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> > - targetindex,\n> > - newvarlist,\n> > - (Expr *)item->key);\n> > + /* Replace expr's Var nodes with new ones referencing the targetlist.\n> > */\n> > + else\n> > + newexpr = cdbpullup_expr((Expr *)item->key,\n> > + targetlist,\n> > + newvarlist,\n> > + newrelid);\n> > \n> > - /* Replace expr's Var nodes with new ones referencing the\n> > targetlist. */\n> > - else\n> > - newexpr = cdbpullup_expr((Expr *)item->key,\n> > - targetlist,\n> > - newvarlist,\n> > - newrelid);\n> > + /* Pull up RelabelType node too, unless tlist expr has right type. */\n> > + if (IsA(item->key, RelabelType))\n> > + {\n> > + RelabelType *oldrelabel = (RelabelType *)item->key;\n> > \n> > - /* Pull up RelabelType node too, unless tlist expr has right type.\n> > */\n> > - if (IsA(item->key, RelabelType))\n> > - {\n> > - RelabelType *oldrelabel = (RelabelType *)item->key;\n> > -\n> > - if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> > - oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> > - newexpr = (Expr *)makeRelabelType(newexpr,\n> > - oldrelabel->resulttype,\n> > - oldrelabel->resulttypmod,\n> > - \n> > oldrelabel->relabelformat);\n> > - }\n> > + if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> > + oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> > + newexpr = (Expr *)makeRelabelType(newexpr,\n> > + oldrelabel->resulttype,\n> > + oldrelabel->resulttypmod,\n> > + oldrelabel->relabelformat);\n> > }\n> > Insist(newexpr);\n> > \n> > Index: cdb-pg/src/backend/optimizer/util/pathnode.c\n> > ===================================================================\n> > RCS file: \n> > /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> > er/util/pathnode.c,v\n> > diff -u -N -r1.52.2.4 -r1.52.2.5\n> > --- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n> > -0000 1.52.2.4\n> > +++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n> > -0000 1.52.2.5\n> > @@ -1563,7 +1563,15 @@\n> > pathnode->path.rescannable = false;\n> > }\n> > \n> > - return pathnode;\n> > + /* \n> > + * CDB: If there is exactly one subpath, its ordering is preserved.\n> > + * Child rel's pathkey exprs are already expressed in terms of the\n> > + * columns of the parent appendrel. 
See find_usable_indexes().\n> > + */\n> > + if (list_length(subpaths) == 1)\n> > + pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n> > + \n> > + return pathnode;\n> > }\n> > \n> > /*\n> > \n> > \n> > On 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n> > \n> >> Anton wrote:\n> >>>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> >>>>> QUERY PLAN\n> >>>> ----------------------------------------------------------------------------\n> >>>> -----------------------------\n> >>>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> >>>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> >>>>> Sort Key: public.n_traf.date_time\n> >>>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> >>>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n> >>>>> -> Seq Scan on n_traf (cost=0.00..22.30\n> >>>>> rows=1230 width=32)\n> >>>>> -> Seq Scan on n_traf_y2007m01 n_traf\n> >>>>> (cost=0.00..22.30 rows=1230 width=32)\n> >>> ...\n> >>>>> -> Seq Scan on n_traf_y2007m12 n_traf\n> >>>>> (cost=0.00..22.30 rows=1230 width=32)\n> >>>>> (18 rows)\n> >>>>>\n> >>>>> Why it no uses indexes at all?\n> >>>>> -------------------------------------------\n> >>>> I'm no expert but I'd guess that the the planner doesn't know which\n> >>>> partition holds the latest time so it has to read them all.\n> >>> Agree. But why it not uses indexes when it reading them?\n> >> The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> >> below the append node. Therefore it needs to fetch all rows from all the\n> >> tables, and the fastest way to do that is a seq scan.\n> > \n> > \n> \n> \n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Wed, 12 Mar 2008 15:16:39 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: partitioned table and ORDER BY indexed_field\n DESC LIMIT 1" }, { "msg_contents": "\nPatch rejected because of Tom Lane's comments:\n\n> This is useless since it does not represent a complete patch; the\n> provided code calls a lot of Greenplum-private routines that weren't\n> provided. It's not even reviewable let alone a candidate to apply.\n\n\n---------------------------------------------------------------------------\n\nLuke Lonergan wrote:\n> Below is a patch against Greenplum Database that fixes the problem.\n> \n> - Luke\n> \n> ------ Forwarded Message\n> From: Luke Lonergan <[email protected]>\n> Date: Fri, 24 Aug 2007 09:25:53 -0700\n> To: Heikki Linnakangas <[email protected]>, Anton <[email protected]>\n> Cc: <[email protected]>\n> Conversation: [PERFORM] partitioned table and ORDER BY indexed_field DESC\n> LIMIT 1\n> Subject: Re: [PERFORM] partitioned table and ORDER BY indexed_field DESC\n> LIMIT 1\n> \n> Below is a patch against 8.2.4 (more or less), Heikki can you take a look at\n> it?\n> \n> This enables the use of index scan of a child table by recognizing sort\n> order of the append node. 
Kurt Harriman did the work.\n> \n> - Luke\n> \n> Index: cdb-pg/src/backend/optimizer/path/indxpath.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/indxpath.c,v\n> diff -u -N -r1.22 -r1.22.2.1\n> --- cdb-pg/src/backend/optimizer/path/indxpath.c 25 Apr 2007 22:07:21\n> -0000 1.22\n> +++ cdb-pg/src/backend/optimizer/path/indxpath.c 10 Aug 2007 03:41:15\n> -0000 1.22.2.1\n> @@ -379,8 +379,51 @@\n> index_pathkeys = build_index_pathkeys(root, index,\n> ForwardScanDirection,\n> true);\n> - useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> - index_pathkeys);\n> + /*\n> + * CDB: For appendrel child, pathkeys contain Var nodes in\n> terms \n> + * of the child's baserel. Transform the pathkey list to refer\n> to \n> + * columns of the appendrel.\n> + */\n> + if (rel->reloptkind == RELOPT_OTHER_MEMBER_REL)\n> + {\n> + AppendRelInfo *appinfo = NULL;\n> + RelOptInfo *appendrel = NULL;\n> + ListCell *appcell;\n> + CdbPathLocus notalocus;\n> +\n> + /* Find the appendrel of which this baserel is a child. */\n> + foreach(appcell, root->append_rel_list)\n> + {\n> + appinfo = (AppendRelInfo *)lfirst(appcell);\n> + if (appinfo->child_relid == rel->relid)\n> + break;\n> + }\n> + Assert(appinfo);\n> + appendrel = find_base_rel(root, appinfo->parent_relid);\n> +\n> + /*\n> + * The pathkey list happens to have the same format as the\n> + * partitioning key of a Hashed locus, so by disguising it\n> + * we can use cdbpathlocus_pull_above_projection() to do\n> the \n> + * transformation.\n> + */\n> + CdbPathLocus_MakeHashed(&notalocus, index_pathkeys);\n> + notalocus =\n> + cdbpathlocus_pull_above_projection(root,\n> + notalocus,\n> + rel->relids,\n> + rel->reltargetlist,\n> + \n> appendrel->reltargetlist,\n> + appendrel->relid);\n> + if (CdbPathLocus_IsHashed(notalocus))\n> + index_pathkeys = truncate_useless_pathkeys(root,\n> appendrel,\n> + \n> notalocus.partkey);\n> + else\n> + index_pathkeys = NULL;\n> + }\n> +\n> + useful_pathkeys = truncate_useless_pathkeys(root, rel,\n> + index_pathkeys);\n> }\n> else\n> useful_pathkeys = NIL;\n> Index: cdb-pg/src/backend/optimizer/path/pathkeys.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/path/pathkeys.c,v\n> diff -u -N -r1.18 -r1.18.2.1\n> --- cdb-pg/src/backend/optimizer/path/pathkeys.c 30 Apr 2007 05:44:07\n> -0000 1.18\n> +++ cdb-pg/src/backend/optimizer/path/pathkeys.c 10 Aug 2007 03:41:15\n> -0000 1.18.2.1\n> @@ -1403,55 +1403,53 @@\n> {\n> PathKeyItem *item;\n> Expr *newexpr;\n> + AttrNumber targetindex;\n> \n> Assert(pathkey);\n> \n> - /* Use constant expr if available. Will be at head of list. */\n> - if (CdbPathkeyEqualsConstant(pathkey))\n> + /* Find an expr that we can rewrite to use the projected columns. */\n> + item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> + relids,\n> + targetlist,\n> + &targetindex); // OUT\n> + \n> + /* If not found, see if the equiv class contains a constant expr. */\n> + if (!item &&\n> + CdbPathkeyEqualsConstant(pathkey))\n> {\n> item = (PathKeyItem *)linitial(pathkey);\n> newexpr = (Expr *)copyObject(item->key);\n> }\n> \n> - /* New vars for old! */\n> - else\n> - {\n> - AttrNumber targetindex;\n> + /* Fail if no usable expr. 
*/\n> + else if (!item)\n> + return NULL;\n> \n> - /* Find an expr that we can rewrite to use the projected columns.\n> */\n> - item = cdbpullup_findPathKeyItemInTargetList(pathkey,\n> - relids,\n> - targetlist,\n> - &targetindex); // OUT\n> - if (!item)\n> - return NULL;\n> + /* If found matching targetlist item, make a Var that references it. */\n> + else if (targetindex > 0)\n> + newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> + targetindex,\n> + newvarlist,\n> + (Expr *)item->key);\n> \n> - /* If found matching targetlist item, make a Var that references\n> it. */\n> - if (targetindex > 0)\n> - newexpr = (Expr *)cdbpullup_makeVar(newrelid,\n> - targetindex,\n> - newvarlist,\n> - (Expr *)item->key);\n> + /* Replace expr's Var nodes with new ones referencing the targetlist.\n> */\n> + else\n> + newexpr = cdbpullup_expr((Expr *)item->key,\n> + targetlist,\n> + newvarlist,\n> + newrelid);\n> \n> - /* Replace expr's Var nodes with new ones referencing the\n> targetlist. */\n> - else\n> - newexpr = cdbpullup_expr((Expr *)item->key,\n> - targetlist,\n> - newvarlist,\n> - newrelid);\n> + /* Pull up RelabelType node too, unless tlist expr has right type. */\n> + if (IsA(item->key, RelabelType))\n> + {\n> + RelabelType *oldrelabel = (RelabelType *)item->key;\n> \n> - /* Pull up RelabelType node too, unless tlist expr has right type.\n> */\n> - if (IsA(item->key, RelabelType))\n> - {\n> - RelabelType *oldrelabel = (RelabelType *)item->key;\n> -\n> - if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> - oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> - newexpr = (Expr *)makeRelabelType(newexpr,\n> - oldrelabel->resulttype,\n> - oldrelabel->resulttypmod,\n> - \n> oldrelabel->relabelformat);\n> - }\n> + if (oldrelabel->resulttype != exprType((Node *)newexpr) ||\n> + oldrelabel->resulttypmod != exprTypmod((Node *)newexpr))\n> + newexpr = (Expr *)makeRelabelType(newexpr,\n> + oldrelabel->resulttype,\n> + oldrelabel->resulttypmod,\n> + oldrelabel->relabelformat);\n> }\n> Insist(newexpr);\n> \n> Index: cdb-pg/src/backend/optimizer/util/pathnode.c\n> ===================================================================\n> RCS file: \n> /data/FISHEYE_REPOSITORIES/greenplum/cvsroot/cdb2/cdb-pg/src/backend/optimiz\n> er/util/pathnode.c,v\n> diff -u -N -r1.52.2.4 -r1.52.2.5\n> --- cdb-pg/src/backend/optimizer/util/pathnode.c 5 Aug 2007 23:06:44\n> -0000 1.52.2.4\n> +++ cdb-pg/src/backend/optimizer/util/pathnode.c 10 Aug 2007 03:41:15\n> -0000 1.52.2.5\n> @@ -1563,7 +1563,15 @@\n> pathnode->path.rescannable = false;\n> }\n> \n> - return pathnode;\n> + /* \n> + * CDB: If there is exactly one subpath, its ordering is preserved.\n> + * Child rel's pathkey exprs are already expressed in terms of the\n> + * columns of the parent appendrel. 
See find_usable_indexes().\n> + */\n> + if (list_length(subpaths) == 1)\n> + pathnode->path.pathkeys = ((Path *)linitial(subpaths))->pathkeys;\n> + \n> + return pathnode;\n> }\n> \n> /*\n> \n> \n> On 8/24/07 3:38 AM, \"Heikki Linnakangas\" <[email protected]> wrote:\n> \n> > Anton wrote:\n> >>>> =# explain SELECT * FROM n_traf ORDER BY date_time DESC LIMIT 1;\n> >>>> QUERY PLAN\n> >>> ----------------------------------------------------------------------------\n> >>> -----------------------------\n> >>>> Limit (cost=824637.69..824637.69 rows=1 width=32)\n> >>>> -> Sort (cost=824637.69..838746.44 rows=5643499 width=32)\n> >>>> Sort Key: public.n_traf.date_time\n> >>>> -> Result (cost=0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Append (cost= 0.00..100877.99 rows=5643499 width=32)\n> >>>> -> Seq Scan on n_traf (cost=0.00..22.30\n> >>>> rows=1230 width=32)\n> >>>> -> Seq Scan on n_traf_y2007m01 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >> ...\n> >>>> -> Seq Scan on n_traf_y2007m12 n_traf\n> >>>> (cost=0.00..22.30 rows=1230 width=32)\n> >>>> (18 rows)\n> >>>> \n> >>>> Why it no uses indexes at all?\n> >>>> -------------------------------------------\n> >>> I'm no expert but I'd guess that the the planner doesn't know which\n> >>> partition holds the latest time so it has to read them all.\n> >> \n> >> Agree. But why it not uses indexes when it reading them?\n> > \n> > The planner isn't smart enough to push the \"ORDER BY ... LIMIT ...\"\n> > below the append node. Therefore it needs to fetch all rows from all the\n> > tables, and the fastest way to do that is a seq scan.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n> ------ End of Forwarded Message\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian <[email protected]> http://momjian.us\n EnterpriseDB http://postgres.enterprisedb.com\n\n + If your life is a hard drive, Christ can be your backup. +\n", "msg_date": "Mon, 24 Mar 2008 13:44:08 -0400 (EDT)", "msg_from": "Bruce Momjian <[email protected]>", "msg_from_op": false, "msg_subject": "Re: FW: was [PERFORM] partitioned table and ORDER\n\tBY indexed_field DESC LIMIT 1" } ]
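The limitation discussed in the thread above (8.1.x/8.2.x cannot push an ORDER BY ... LIMIT down through an Append node, so every child table gets seq-scanned) can be worked around in plain SQL by asking each partition for its own newest row and then keeping the newest of that small set. The sketch below is not from the thread; it reuses the table and column names shown earlier (n_traf_y2007m07, n_traf_y2007m08, date_time) and shows only two branches for brevity. In practice you need one UNION ALL branch per child table that could hold the latest row, plus the parent table if it can contain rows of its own. Each inner query can run as a backward index scan on that partition's date_time index, so only a handful of rows ever reach the outer sort. (Later PostgreSQL releases added a Merge Append plan type that makes this workaround unnecessary, but that does not help 8.1/8.2.)

SELECT *
FROM (
    -- newest row from each partition; each subquery can use its own index
    SELECT * FROM (SELECT * FROM n_traf_y2007m07
                   ORDER BY date_time DESC LIMIT 1) AS p07
    UNION ALL
    SELECT * FROM (SELECT * FROM n_traf_y2007m08
                   ORDER BY date_time DESC LIMIT 1) AS p08
    -- ... one UNION ALL branch per remaining partition ...
) AS newest_per_partition
ORDER BY date_time DESC
LIMIT 1;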
[ { "msg_contents": "Hi List;\n\nI've just started working with a client that has been running Postgres (with \nno DBA) for a few years. They're running version 8.1.4 on 4-way dell boxes \nwith 4Gig of memory on each box attached to RAID-10 disk arrays. \n\nSome of their key config settings are here:\nshared_buffers = 20480\nwork_mem = 16384\nmaintenance_work_mem = 32758\nwal_buffers = 24\ncheckpoint_segments = 32\ncheckpoint_timeout = 300\ncheckpoint_warning = 30\neffective_cache_size = 524288\nautovacuum = on\nautovacuum_naptime = 60\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\nautovacuum_vacuum_cost_delay = -1\nautovacuum_vacuum_cost_limit = -1\n\n\nCurrently I've found that they have approx 17 tables that show a significant \namount of bloat in the system. The worst one showing over 5million pages \nworth of dead space. One of the problems is that their system is so busy with \nactivity during the day and massive data load processes at night that they \noften kill the pid of vacuum processes kicked off by autovacuum because the \noverall load impact disallows users from logging into the app since the login \nprocess includes at least one db query that then seems to hang because there \nare anywhere from 100 - 300 queries ahead of it at any given time. Normally a \nuser gets logged in with an avg wait of 5 - 10 seconds but when a long \nrunning vacuum (sometimes due to a long running update process that's trying \nto sort/update > 40million rows) is going the system gets to a state where \nthe login queries never get executed until the vacuum process is killed.\n\nAs a result of this I believe that the biggest table (the one with > 5million \npages worth of dead space) has never been vacuumed to completion. I suspect \nthis is the case for a few of the other top dead space tables as well but I \ncan't be sure. \n\nMy first priority was to get this vacuum scenario cleaned up. First off I \nadded the biggest table into pg_autovacuum and set the enabled column to \nfalse ('f'). Then I set vacuum_cost_delay to 10 and in the same session \nran \"vacuum analyze verbose big_table\". This ran for 7.5 hours before we had \nto kill it due to system load - and to make matters worse the high system \nload was forcing many of the nightly batch queries that load, update, etc the \ndata to stack up to a point where the system was at less than 2% idle (CPU) \nfor the next 4 hours and barely responding to the command line.\n\nTo make matters worse I find out this morning that the db is at 85% per used \ntransaction ID's - again since a vacuum on the entire db has never been \ncompleted. \n\nAs far as I can tell, the overall db size is currently 199G of which approx \n104G seems to be valid data.\n\nHere's my thoughts per how to proceed:\n\n=====================================\n1) fix the big table ASAP (probably over the weekend) since it's not only the \nbiggest table but the most active like this:\n\n a) run a pg_dump of this table\n\n b) restore this dump into a new table (i.e. 
new_big_table)\n\n c) lock the original big_table, sync any changes, inserts, deletes since we \ndid the dump from big_table into new_big_table\n\n d) drop big_table\n\n e) re-name new_big_table to big_table\n\n* I may run through this for a few of the other large, highly active tables \nthat have minimal page density as well.\n=====================================\n\n\nThe development folks that have been here awhile tell me that it seems like \nwhen they have a query (not limited to vacuum processes) that has been \nrunning for a long time (i.e. > 5 or 6 hours) that the query sort of \"goes \ncrazy\" and the entire system gets pegged until they kill that process. - I've \nnot heard of this but I suspect upgrading to 8.2.4 is probably a good plan at \nthis point as well, so for step 2, I'll do this:\n\n=====================================\n2) (obviously I'll do this in dev first, then in QA and finally in prod)\n a) install verson 8.2.4 from source, leaving 8.1.4 in place\n\n b) create the new 8.2.4 cluster on a new port\n\n c) setup WAL archiving on the 8.1.4 cluster\n\n d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4 \ncluster\n\n e) stop the 8.2.4 cluster and bring it up in recovery mode, pointing it to \nthe directory where we're archiving the 8.1.4 cluster's WAL segments.\n\n f) once caught up, bring both clusters down\n\n g) copy any final files from the 8.1.4 cluster's pg_xlog directory into the \nnew 8.2.4 pg_xlog dir (is this ok, since I'm moving 8.1.4 version tx logs \ninto an 8.2.4 xlog dir?)\n\n h) Change the port on the 8.2.4 cluster to what the original 8.1.4 cluster \nport was\n\n i) bring up the new 8.2.4 system, and actively manage the vacuum needs \nmoving fwd via a combination of autovacuum, cron processes for specififed \ntable vac's (daily, hourly, 15min, 5min, etc), and as needed interactive \nsession vacuums\n=====================================\n\n\nThe src based install will allow me to setup a robust upgrade CM process \ncapable of supporting multiple concurrent versions on a server if needed, the \nability to quickly revert to a previous version, etc however this is a \ndiscussion for another day - I only mention it in case the question \"why not \njust use RPM's?\" arises...\n\n\nSo here's my questions:\n\n1) Does this sound like a good plan?\n\n2) Are there other steps I should be taking, other Issues I should be \nconcerned about short-term, etc? \n\n3) Does anyone have any additional advice for managing either this initial \nmess, or the system(s) long term?\n\nThanks in advance...\n\n/Kevin\n", "msg_date": "Fri, 24 Aug 2007 13:57:23 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "significant vacuum issues - looking for suggestions" }, { "msg_contents": "In response to Kevin Kempter <[email protected]>:\n\n> Hi List;\n> \n> I've just started working with a client that has been running Postgres (with \n> no DBA) for a few years. They're running version 8.1.4 on 4-way dell boxes \n> with 4Gig of memory on each box attached to RAID-10 disk arrays. \n> \n> Some of their key config settings are here:\n> shared_buffers = 20480\n> work_mem = 16384\n> maintenance_work_mem = 32758\n\nBefore you do any of those other things, bump shared_buffers to about\n120000 and maintenance_work_mem to 250000 or so -- unless this box\nhas other applications on it using significant amounts of those 4G of\nRAM. You may find that these changes alone are enough to get vacuum\nto complete. 
You'll need to restart the server for the shared_buffers\nsetting to take effect.\n\nCan you do a pg_relation_size() on the tables in question?\n\n> wal_buffers = 24\n> checkpoint_segments = 32\n> checkpoint_timeout = 300\n> checkpoint_warning = 30\n> effective_cache_size = 524288\n> autovacuum = on\n> autovacuum_naptime = 60\n> autovacuum_vacuum_threshold = 500\n> autovacuum_analyze_threshold = 250\n> autovacuum_vacuum_scale_factor = 0.2\n> autovacuum_analyze_scale_factor = 0.1\n> autovacuum_vacuum_cost_delay = -1\n> autovacuum_vacuum_cost_limit = -1\n> \n> \n> Currently I've found that they have approx 17 tables that show a significant \n> amount of bloat in the system. The worst one showing over 5million pages \n> worth of dead space. One of the problems is that their system is so busy with \n> activity during the day and massive data load processes at night that they \n> often kill the pid of vacuum processes kicked off by autovacuum because the \n> overall load impact disallows users from logging into the app since the login \n> process includes at least one db query that then seems to hang because there \n> are anywhere from 100 - 300 queries ahead of it at any given time. Normally a \n> user gets logged in with an avg wait of 5 - 10 seconds but when a long \n> running vacuum (sometimes due to a long running update process that's trying \n> to sort/update > 40million rows) is going the system gets to a state where \n> the login queries never get executed until the vacuum process is killed.\n> \n> As a result of this I believe that the biggest table (the one with > 5million \n> pages worth of dead space) has never been vacuumed to completion. I suspect \n> this is the case for a few of the other top dead space tables as well but I \n> can't be sure. \n> \n> My first priority was to get this vacuum scenario cleaned up. First off I \n> added the biggest table into pg_autovacuum and set the enabled column to \n> false ('f'). Then I set vacuum_cost_delay to 10 and in the same session \n> ran \"vacuum analyze verbose big_table\". This ran for 7.5 hours before we had \n> to kill it due to system load - and to make matters worse the high system \n> load was forcing many of the nightly batch queries that load, update, etc the \n> data to stack up to a point where the system was at less than 2% idle (CPU) \n> for the next 4 hours and barely responding to the command line.\n> \n> To make matters worse I find out this morning that the db is at 85% per used \n> transaction ID's - again since a vacuum on the entire db has never been \n> completed. \n> \n> As far as I can tell, the overall db size is currently 199G of which approx \n> 104G seems to be valid data.\n> \n> Here's my thoughts per how to proceed:\n> \n> =====================================\n> 1) fix the big table ASAP (probably over the weekend) since it's not only the \n> biggest table but the most active like this:\n> \n> a) run a pg_dump of this table\n> \n> b) restore this dump into a new table (i.e. 
new_big_table)\n> \n> c) lock the original big_table, sync any changes, inserts, deletes since we \n> did the dump from big_table into new_big_table\n> \n> d) drop big_table\n> \n> e) re-name new_big_table to big_table\n> \n> * I may run through this for a few of the other large, highly active tables \n> that have minimal page density as well.\n> =====================================\n> \n> \n> The development folks that have been here awhile tell me that it seems like \n> when they have a query (not limited to vacuum processes) that has been \n> running for a long time (i.e. > 5 or 6 hours) that the query sort of \"goes \n> crazy\" and the entire system gets pegged until they kill that process. - I've \n> not heard of this but I suspect upgrading to 8.2.4 is probably a good plan at \n> this point as well, so for step 2, I'll do this:\n> \n> =====================================\n> 2) (obviously I'll do this in dev first, then in QA and finally in prod)\n> a) install verson 8.2.4 from source, leaving 8.1.4 in place\n> \n> b) create the new 8.2.4 cluster on a new port\n> \n> c) setup WAL archiving on the 8.1.4 cluster\n> \n> d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4 \n> cluster\n> \n> e) stop the 8.2.4 cluster and bring it up in recovery mode, pointing it to \n> the directory where we're archiving the 8.1.4 cluster's WAL segments.\n> \n> f) once caught up, bring both clusters down\n> \n> g) copy any final files from the 8.1.4 cluster's pg_xlog directory into the \n> new 8.2.4 pg_xlog dir (is this ok, since I'm moving 8.1.4 version tx logs \n> into an 8.2.4 xlog dir?)\n> \n> h) Change the port on the 8.2.4 cluster to what the original 8.1.4 cluster \n> port was\n> \n> i) bring up the new 8.2.4 system, and actively manage the vacuum needs \n> moving fwd via a combination of autovacuum, cron processes for specififed \n> table vac's (daily, hourly, 15min, 5min, etc), and as needed interactive \n> session vacuums\n> =====================================\n> \n> \n> The src based install will allow me to setup a robust upgrade CM process \n> capable of supporting multiple concurrent versions on a server if needed, the \n> ability to quickly revert to a previous version, etc however this is a \n> discussion for another day - I only mention it in case the question \"why not \n> just use RPM's?\" arises...\n> \n> \n> So here's my questions:\n> \n> 1) Does this sound like a good plan?\n> \n> 2) Are there other steps I should be taking, other Issues I should be \n> concerned about short-term, etc? \n> \n> 3) Does anyone have any additional advice for managing either this initial \n> mess, or the system(s) long term?\n> \n> Thanks in advance...\n> \n> /Kevin\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Fri, 24 Aug 2007 16:41:44 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": ">>> On Fri, Aug 24, 2007 at 2:57 PM, in message\n<[email protected]>, Kevin Kempter\n<[email protected]> wrote: \n> c) setup WAL archiving on the 8.1.4 cluster\n> \n> d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4 \n> cluster\n> \n> e) stop the 8.2.4 cluster and bring it up in recovery mode, pointing it \n> to \n> the directory where we're archiving the 8.1.4 cluster's WAL segments.\n\nYou can't use these techniques for a major version upgrade.\nUse pg_dump piped to psql. That will also eliminate all bloat.\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 24 Aug 2007 15:52:20 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for\n\tsuggestions" }, { "msg_contents": "In response to \"Kevin Grittner\" <[email protected]>:\n\n> >>> On Fri, Aug 24, 2007 at 2:57 PM, in message\n> <[email protected]>, Kevin Kempter\n> <[email protected]> wrote: \n> > c) setup WAL archiving on the 8.1.4 cluster\n> > \n> > d) do a full dump of the 8.1.4 cluster and restore it to the new 8.2.4 \n> > cluster\n> > \n> > e) stop the 8.2.4 cluster and bring it up in recovery mode, pointing it \n> > to \n> > the directory where we're archiving the 8.1.4 cluster's WAL segments.\n> \n> You can't use these techniques for a major version upgrade.\n> Use pg_dump piped to psql. That will also eliminate all bloat.\n\nIf you can't afford any downtime, you may be able to use Slony to\ndo your upgrade. However, slony adds overhead, and if this system\nis tapped out already, it may not tolerate the additional overhead.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n", "msg_date": "Fri, 24 Aug 2007 17:08:13 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": "Kevin Kempter <[email protected]> writes:\n> The development folks that have been here awhile tell me that it seems like \n> when they have a query (not limited to vacuum processes) that has been \n> running for a long time (i.e. > 5 or 6 hours) that the query sort of \"goes \n> crazy\" and the entire system gets pegged until they kill that\n> process. - I've not heard of this \n\nMe either, but I wonder whether their queries are tickling some memory\nleak. I could imagine that what they are seeing is the backend process\ngrowing slowly until it starts to swap, and then continuing to grow and\nneeding more and more swap activity. Once you get over the knee of that\ncurve, things get real bad real fast. 
It might not be a bad idea to run\nthe postmaster under a (carefully chosen) ulimit setting to cut such\nthings off before the system starts swapping. Other things to look at:\n\n* what exactly gets \"pegged\" --- is it CPU or I/O bound? Watching\n\"vmstat 1\" is usually a good diagnostic since you can see CPU, swap,\nand regular disk I/O activity at once.\n\n* is there really not any pattern to the queries that cause the problem?\nI don't think 8.1.4 has any widespread leakage problem, but they might\nbe tickling something isolated, in which case 8.2 is not necessarily\ngonna fix it. If you can produce a test case showing this behavior it'd\nbe time to call in pgsql-hackers.\n\nYour other points seem pretty well covered by other replies.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 24 Aug 2007 17:39:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for suggestions " }, { "msg_contents": "On Friday 24 August 2007 15:39:22 Tom Lane wrote:\n> Kevin Kempter <[email protected]> writes:\n> > The development folks that have been here awhile tell me that it seems\n> > like when they have a query (not limited to vacuum processes) that has\n> > been running for a long time (i.e. > 5 or 6 hours) that the query sort of\n> > \"goes crazy\" and the entire system gets pegged until they kill that\n> > process. - I've not heard of this\n>\n> Me either, but I wonder whether their queries are tickling some memory\n> leak. I could imagine that what they are seeing is the backend process\n> growing slowly until it starts to swap, and then continuing to grow and\n> needing more and more swap activity. Once you get over the knee of that\n> curve, things get real bad real fast. It might not be a bad idea to run\n> the postmaster under a (carefully chosen) ulimit setting to cut such\n> things off before the system starts swapping. Other things to look at:\n>\n> * what exactly gets \"pegged\" --- is it CPU or I/O bound? Watching\n> \"vmstat 1\" is usually a good diagnostic since you can see CPU, swap,\n> and regular disk I/O activity at once.\n>\n> * is there really not any pattern to the queries that cause the problem?\n> I don't think 8.1.4 has any widespread leakage problem, but they might\n> be tickling something isolated, in which case 8.2 is not necessarily\n> gonna fix it. If you can produce a test case showing this behavior it'd\n> be time to call in pgsql-hackers.\n>\n> Your other points seem pretty well covered by other replies.\n>\n> \t\t\tregards, tom lane\n\nThanks everyone for the help. I'll first up the memory settings like Bill \nsuggested and then see where I'm at. Moving fwd I'll see if I have a test \ncase that I can re-create, plus I may try constraining the postmaster via a \nulimit setting, again based on what I see once the cluster is allowed to use \nthe memory it should have been given up front.\n\n/Kevin\n\n", "msg_date": "Fri, 24 Aug 2007 23:34:23 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": "On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:\n> In response to Kevin Kempter <[email protected]>:\n> \n> > Hi List;\n> > \n> > I've just started working with a client that has been running Postgres (with \n> > no DBA) for a few years. They're running version 8.1.4 on 4-way dell boxes \n> > with 4Gig of memory on each box attached to RAID-10 disk arrays. 
\n> > \n> > Some of their key config settings are here:\n> > shared_buffers = 20480\n> > work_mem = 16384\n> > maintenance_work_mem = 32758\n> \n> Before you do any of those other things, bump shared_buffers to about\n> 120000 and maintenance_work_mem to 250000 or so -- unless this box\n> has other applications on it using significant amounts of those 4G of\n> RAM. You may find that these changes alone are enough to get vacuum\n> to complete. You'll need to restart the server for the shared_buffers\n> setting to take effect.\n\nFor the really bloated table, you might need to go even higher than\n250000 for maint_work_mem. IIRC vacuum needs 6 bytes per dead tuple, so\nthat means 43M rows... with 5M dead pages, that means less than 10 rows\nper page, which is unlikely. Keep in mind that if you do a vacuum\nverbose, you'll be able to see if vacuum runs out of\nmaintenance_work_mem, because you'll see multiple passes through all the\nindexes.\n\nYou could also potentially use this to your benefit. Set maint_work_mem\nlow enough so that vacuum will have to start it's cleaning pass after\nonly an hour or so... depending on how big/bloated the indexes are on\nthe table, it might take another 2-3 hours to clean everything. I\nbelieve that as soon as you see it start on the indexes a second time\nyou can kill it... you'll have wasted some work, but more importantly\nyou'll have actually vacuumed part of the table.\n\nBut all of that's a moot point if they're running the default free space\nmap settings, which are way, way, way to conservative in 8.1. If you've\ngot one table with 5M dead pages, you probably want to set fsm_pages to\nat least 50000000 as a rough guess, at least until this is under\ncontrol. Keep in mind that does equate to 286M of memory, though.\n\nAs for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\nFROM bloated_table? That would likely be much faster than messing around\nwith pg_dump.\n\nWhat kind of disk hardware is this running on? A good raid 10 array with\nwrite caching should be able to handle a 200G database fairly well; at\nleast better than it is from what I'm hearing.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 27 Aug 2007 16:00:41 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": ">>> Decibel! <[email protected]> 08/27/07 4:00 PM >>> \n> > > They're running version 8.1.4\n> \n> As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\n> FROM bloated_table? That would likely be much faster than messing around\n> with pg_dump.\n \nHe wanted to upgrade to 8.2.4. CREATE TABLE AS won't get him there.\n \n> > > They're running version 8.1.4 on 4-way dell boxes \n> > > with 4Gig of memory on each box attached to RAID-10 disk arrays. \n> \n> What kind of disk hardware is this running on? A good raid 10 array with\n> write caching should be able to handle a 200G database fairly well\n \nWhat other details were you looking for?\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 27 Aug 2007 16:56:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for\n\tsuggestions" }, { "msg_contents": "On Monday 27 August 2007 15:56:33 Kevin Grittner wrote:\n> >>> Decibel! 
<[email protected]> 08/27/07 4:00 PM >>>\n> >>>\n> > > > They're running version 8.1.4\n> >\n> > As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\n> > FROM bloated_table? That would likely be much faster than messing around\n> > with pg_dump.\n>\n> He wanted to upgrade to 8.2.4. CREATE TABLE AS won't get him there.\n>\n> > > > They're running version 8.1.4 on 4-way dell boxes\n> > > > with 4Gig of memory on each box attached to RAID-10 disk arrays.\n> >\n> > What kind of disk hardware is this running on? A good raid 10 array with\n> > write caching should be able to handle a 200G database fairly well\n>\n> What other details were you looking for?\n>\n> -Kevin\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n\nI decided to fix this table first - did so by creating a new table, running a \nselect from insert into and renaming the orig table to old_XXX and then \nrenamed the new table to the orig table's name.\n\nI'll drop the orig table once I'm sure there are no data issues.\n\nI'm planning to setup a new file system layout for the box(es) and try to do a \npg_dump | psql for the upgrade.\n\n\n", "msg_date": "Mon, 27 Aug 2007 16:00:01 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": "On Monday 27 August 2007 15:00:41 you wrote:\n> On Fri, Aug 24, 2007 at 04:41:44PM -0400, Bill Moran wrote:\n> > In response to Kevin Kempter <[email protected]>:\n> > > Hi List;\n> > >\n> > > I've just started working with a client that has been running Postgres\n> > > (with no DBA) for a few years. They're running version 8.1.4 on 4-way\n> > > dell boxes with 4Gig of memory on each box attached to RAID-10 disk\n> > > arrays.\n> > >\n> > > Some of their key config settings are here:\n> > > shared_buffers = 20480\n> > > work_mem = 16384\n> > > maintenance_work_mem = 32758\n> >\n> > Before you do any of those other things, bump shared_buffers to about\n> > 120000 and maintenance_work_mem to 250000 or so -- unless this box\n> > has other applications on it using significant amounts of those 4G of\n> > RAM. You may find that these changes alone are enough to get vacuum\n> > to complete. You'll need to restart the server for the shared_buffers\n> > setting to take effect.\n>\n> For the really bloated table, you might need to go even higher than\n> 250000 for maint_work_mem. IIRC vacuum needs 6 bytes per dead tuple, so\n> that means 43M rows... with 5M dead pages, that means less than 10 rows\n> per page, which is unlikely. Keep in mind that if you do a vacuum\n> verbose, you'll be able to see if vacuum runs out of\n> maintenance_work_mem, because you'll see multiple passes through all the\n> indexes.\n>\n> You could also potentially use this to your benefit. Set maint_work_mem\n> low enough so that vacuum will have to start it's cleaning pass after\n> only an hour or so... depending on how big/bloated the indexes are on\n> the table, it might take another 2-3 hours to clean everything. I\n> believe that as soon as you see it start on the indexes a second time\n> you can kill it... 
you'll have wasted some work, but more importantly\n> you'll have actually vacuumed part of the table.\n>\n> But all of that's a moot point if they're running the default free space\n> map settings, which are way, way, way to conservative in 8.1. If you've\n> got one table with 5M dead pages, you probably want to set fsm_pages to\n> at least 50000000 as a rough guess, at least until this is under\n> control. Keep in mind that does equate to 286M of memory, though.\n>\n> As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\n> FROM bloated_table? That would likely be much faster than messing around\n> with pg_dump.\n>\n> What kind of disk hardware is this running on? A good raid 10 array with\n> write caching should be able to handle a 200G database fairly well; at\n> least better than it is from what I'm hearing.\n\nThe memory settings are way low on all their db servers (less than 170Meg for \nthe shared_buffers). I fixed this table via creating a new_** table, select \nfrom insert into, and a rename.\n\nI'm still working through the memory settings and reviewing their other config \nsettings, the filesystem type/settings and eventually a security audit. It's \na new client and theyve been running postgres for a few years on approx 8 db \nservers with no DBA.\n\nThe servers are 4-way intel boxes (NOT dual-core) with 4G of memory and \nrunning raid-10 arrays.\n\n\n\n", "msg_date": "Mon, 27 Aug 2007 16:03:52 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": "On Mon, Aug 27, 2007 at 04:56:33PM -0500, Kevin Grittner wrote:\n> >>> Decibel! <[email protected]> 08/27/07 4:00 PM >>> \n> > > > They're running version 8.1.4\n> > \n> > As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\n> > FROM bloated_table? That would likely be much faster than messing around\n> > with pg_dump.\n> \n> He wanted to upgrade to 8.2.4. CREATE TABLE AS won't get him there.\n> \n> > > > They're running version 8.1.4 on 4-way dell boxes \n> > > > with 4Gig of memory on each box attached to RAID-10 disk arrays. \n> > \n> > What kind of disk hardware is this running on? A good raid 10 array with\n> > write caching should be able to handle a 200G database fairly well\n> \n> What other details were you looking for?\n\nHow many drives? Write caching? 200G isn't *that* big for good drive\nhardware, *IF* it's performing the way it should. You'd be surprised how\nmany arrays fall on their face even from a simple dd test.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Mon, 27 Aug 2007 17:04:39 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: significant vacuum issues - looking for suggestions" }, { "msg_contents": "On Monday 27 August 2007 16:04:39 Decibel! wrote:\n> On Mon, Aug 27, 2007 at 04:56:33PM -0500, Kevin Grittner wrote:\n> > >>> Decibel! <[email protected]> 08/27/07 4:00 PM >>>\n> > >>>\n> > > > > They're running version 8.1.4\n> > >\n> > > As for your pg_dump idea... why not just do a CREATE TABLE AS SELECT *\n> > > FROM bloated_table? That would likely be much faster than messing\n> > > around with pg_dump.\n> >\n> > He wanted to upgrade to 8.2.4. 
CREATE TABLE AS won't get him there.\n> >\n> > > > > They're running version 8.1.4 on 4-way dell boxes\n> > > > > with 4Gig of memory on each box attached to RAID-10 disk arrays.\n> > >\n> > > What kind of disk hardware is this running on? A good raid 10 array\n> > > with write caching should be able to handle a 200G database fairly well\n> >\n> > What other details were you looking for?\n>\n> How many drives? Write caching? 200G isn't *that* big for good drive\n> hardware, *IF* it's performing the way it should. You'd be surprised how\n> many arrays fall on their face even from a simple dd test.\n\nI havent gotten that info yet, the key resources are too busy... I'll have \nmore info next week.\n\nThanks for the replies...\n\n\n", "msg_date": "Mon, 27 Aug 2007 16:06:00 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Re: significant vacuum issues - looking for suggestions" } ]
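The rebuild Kevin describes above (create a fresh table, INSERT ... SELECT the live rows across, then swap names) can be sketched roughly as follows. The table name and the LIKE / INCLUDING DEFAULTS shortcut are illustrative assumptions, not the DDL actually run at the client site; indexes, constraints and grants have to be recreated by hand, writes to the table are assumed to be stopped while it runs, and the old copy still has to be dropped before the space comes back.

BEGIN;
CREATE TABLE bloated_table_new (LIKE bloated_table INCLUDING DEFAULTS);
INSERT INTO bloated_table_new SELECT * FROM bloated_table;
ALTER TABLE bloated_table     RENAME TO bloated_table_old;
ALTER TABLE bloated_table_new RENAME TO bloated_table;
COMMIT;
-- recreate indexes, constraints and grants on the new bloated_table,
-- then, once the data has been verified:
-- DROP TABLE bloated_table_old;

Note that the free-space-map setting discussed above is the max_fsm_pages GUC in 8.1, and that it, like shared_buffers, only takes effect after a server restart.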
[ { "msg_contents": "Hi,\n\nI have an application which loads millions of NEW documents each month\ninto a PostgreSQL tsearch2 table. I have the initial version completed\nand searching performance is great but my problem is that each time a \nnew\nmonth rolls around I have to drop all the indexes do a COPY and re-index\nthe entire table. This is problematic considering that each month takes\nlonger than the previous to rebuild the indexes and the application in\nunavailable during the rebuilding process.\n\nIn order to avoid the re-indexing I was thinking of instead creating \na new\ntable each month (building its indexes and etc) and accessing them all\nthrough a view. This way I only have to index the new data each month.\n\nDoes this work? Does a view with N tables make it N times slower for\ntsearch2 queries? Is there a better solution?\n\nBenjamin\n", "msg_date": "Fri, 24 Aug 2007 17:41:48 -0700", "msg_from": "\"Benjamin Arai\" <[email protected]>", "msg_from_op": true, "msg_subject": "Partioning tsearch2 a table into chunks and accessing via views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBrandon Shalton wrote:\n> Benjamin,\n> \n> \n>>\n>> In order to avoid the re-indexing I was thinking of instead creating \n>> a new\n>> table each month (building its indexes and etc) and accessing them all\n>> through a view. This way I only have to index the new data each month.\n>>\n> \n> Take a look at bizgres.org (based on postgres).\n> \n> They have a parent-child structure.\n> \n> The way i use it, is I have about 30M records a day that are inserted\n> into the database.\n> \n> Each day is a \"child\" table to the \"parent\".\n> \n> so example:\n> \n> the parent table is called \"logfile\"\n> \n> each day, is a child, with the structure like \"logfile_YYMMDD\"\n> \n> the \"child\" inherits the table structure of the parent, such that you\n> could query the child table name directly, or you run the query against\n> the parent (ie. logfile table) and get all the data.\n> \n> the indexes are done on a per table basis, so new data that comes in, is\n> a lesser amount, and doesn't require re-indexing.\n\n\nPostgreSQL can do all of this too.\n\nSincerely,\n\nJoshua D. Drake\n\n> \n> \n> example:\n> \n> select * from logfile_070825 where datafield = 'foo'\n> \n> if i knew i wanted to specifically go into that child, or:\n> \n> select * from logfile where datafield = 'foo'\n> \n> and all child tables are searched and results merged. 
You can perform\n> any kind of sql query and field structures are you normally do.\n> \n> the downside is that the queries are run sequentially.\n> \n> so if you had 100 child tables, each table is queried via indexes, then\n> results are merged.\n> \n> but, this approach does allow me to dump alot of data in, without having\n> the re-indexing issues you are facing.\n> \n> at some point, you could roll up the days, in to weekly child tables,\n> then monthly tables, etc.\n> \n> I believe Bizgres has a new version of their system that does parallel\n> queries which would certainly speed things up.\n> \n> For your documents, you can do it by the day it was checked in, or maybe\n> you have some other way of logically grouping, but the parent/child\n> table structure really helped to solve my problem of adding in millions\n> of records each day.\n> \n> The closest thing in mysql is using merge tables, which is not really\n> practical when it comes time to do the joins to the tables.\n> \n> -brandon\n> \n> http://www.t3report.com - marketing intelligence for online marketing\n> and affiliate programs\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFGz4XuATb/zqfZUUQRAukhAJ9b2x4PLPZsoPmtm3O/Ze4AobDXngCgq+rl\nX2j2ePDyjYxRajfGCVmjnYU=\n=pIjb\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 24 Aug 2007 18:29:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and\n\taccessing via views" }, { "msg_contents": "Benjamin,\n\n\n>\n> In order to avoid the re-indexing I was thinking of instead creating a \n> new\n> table each month (building its indexes and etc) and accessing them all\n> through a view. This way I only have to index the new data each month.\n>\n\nTake a look at bizgres.org (based on postgres).\n\nThey have a parent-child structure.\n\nThe way i use it, is I have about 30M records a day that are inserted into \nthe database.\n\nEach day is a \"child\" table to the \"parent\".\n\nso example:\n\nthe parent table is called \"logfile\"\n\neach day, is a child, with the structure like \"logfile_YYMMDD\"\n\nthe \"child\" inherits the table structure of the parent, such that you could \nquery the child table name directly, or you run the query against the parent \n(ie. logfile table) and get all the data.\n\nthe indexes are done on a per table basis, so new data that comes in, is a \nlesser amount, and doesn't require re-indexing.\n\n\nexample:\n\nselect * from logfile_070825 where datafield = 'foo'\n\nif i knew i wanted to specifically go into that child, or:\n\nselect * from logfile where datafield = 'foo'\n\nand all child tables are searched and results merged. 
You can perform any \nkind of sql query and field structures are you normally do.\n\nthe downside is that the queries are run sequentially.\n\nso if you had 100 child tables, each table is queried via indexes, then \nresults are merged.\n\nbut, this approach does allow me to dump alot of data in, without having the \nre-indexing issues you are facing.\n\nat some point, you could roll up the days, in to weekly child tables, then \nmonthly tables, etc.\n\nI believe Bizgres has a new version of their system that does parallel \nqueries which would certainly speed things up.\n\nFor your documents, you can do it by the day it was checked in, or maybe you \nhave some other way of logically grouping, but the parent/child table \nstructure really helped to solve my problem of adding in millions of records \neach day.\n\nThe closest thing in mysql is using merge tables, which is not really \npractical when it comes time to do the joins to the tables.\n\n-brandon\n\nhttp://www.t3report.com - marketing intelligence for online marketing and \naffiliate programs\n\n\n\n\n\n\n\n\n\n", "msg_date": "Fri, 24 Aug 2007 20:54:24 -0700", "msg_from": "\"Brandon Shalton\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThis kind of disappointing, I was hoping there was more that could be \ndone.\n\nThere has to be another way to do incremental indexing without \nloosing that much performance.\n\nBenjamin\n\nOn Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Brandon Shalton wrote:\n>> Benjamin,\n>>\n>>\n>>>\n>>> In order to avoid the re-indexing I was thinking of instead creating\n>>> a new\n>>> table each month (building its indexes and etc) and accessing \n>>> them all\n>>> through a view. This way I only have to index the new data each \n>>> month.\n>>>\n>>\n>> Take a look at bizgres.org (based on postgres).\n>>\n>> They have a parent-child structure.\n>>\n>> The way i use it, is I have about 30M records a day that are inserted\n>> into the database.\n>>\n>> Each day is a \"child\" table to the \"parent\".\n>>\n>> so example:\n>>\n>> the parent table is called \"logfile\"\n>>\n>> each day, is a child, with the structure like \"logfile_YYMMDD\"\n>>\n>> the \"child\" inherits the table structure of the parent, such that you\n>> could query the child table name directly, or you run the query \n>> against\n>> the parent (ie. logfile table) and get all the data.\n>>\n>> the indexes are done on a per table basis, so new data that comes \n>> in, is\n>> a lesser amount, and doesn't require re-indexing.\n>\n>\n> PostgreSQL can do all of this too.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>>\n>>\n>> example:\n>>\n>> select * from logfile_070825 where datafield = 'foo'\n>>\n>> if i knew i wanted to specifically go into that child, or:\n>>\n>> select * from logfile where datafield = 'foo'\n>>\n>> and all child tables are searched and results merged. 
You can \n>> perform\n>> any kind of sql query and field structures are you normally do.\n>>\n>> the downside is that the queries are run sequentially.\n>>\n>> so if you had 100 child tables, each table is queried via indexes, \n>> then\n>> results are merged.\n>>\n>> but, this approach does allow me to dump alot of data in, without \n>> having\n>> the re-indexing issues you are facing.\n>>\n>> at some point, you could roll up the days, in to weekly child tables,\n>> then monthly tables, etc.\n>>\n>> I believe Bizgres has a new version of their system that does \n>> parallel\n>> queries which would certainly speed things up.\n>>\n>> For your documents, you can do it by the day it was checked in, or \n>> maybe\n>> you have some other way of logically grouping, but the parent/child\n>> table structure really helped to solve my problem of adding in \n>> millions\n>> of records each day.\n>>\n>> The closest thing in mysql is using merge tables, which is not really\n>> practical when it comes time to do the joins to the tables.\n>>\n>> -brandon\n>>\n>> http://www.t3report.com - marketing intelligence for online marketing\n>> and affiliate programs\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>>\n>> ---------------------------(end of \n>> broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n>>\n>\n>\n> - --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n> \t\t\tUNIQUE NOT NULL\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFGz4XuATb/zqfZUUQRAukhAJ9b2x4PLPZsoPmtm3O/Ze4AobDXngCgq+rl\n> X2j2ePDyjYxRajfGCVmjnYU=\n> =pIjb\n> -----END PGP SIGNATURE-----\n>\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (Darwin)\n\niQIVAwUBRs+/UfyqRf6YpodNAQL6Xg//eEqR0UQ4I/snn7Dtmkru40jCuECGeG8g\nXoxLWEa+bumVgwrEYbjKTBp3KP6OEKz9VV4xHQROTtqxh+rg0hdoc0kWxSyquCm8\nVljL24ykvBmRmjhacwi8FKp092zwRcLrbkzTxIr90q8u008aVPWxQCBtmfL6QVTv\nI9AyN0kb00ypx+B9I2ySugYzBerVCMUiKUeXplHWn1loSSm1w+5CzXY8gtvivFEV\nYspS1Fk2rxjnjlPE/FTGUiwJrdWZTJrd3BuSVbH5DWBoCjz9gzq0NyNZAtESWX2H\noGwlWBEJNFTtoHnK4iTMS+CzKHQQQZ9ZuQcHy84SlXYUo9n0/NCIeabu2xaj44Fs\nLFq8jBCH3ebAkD/hQOgk1H05ljbfX8A/u2zz75W1NbD0xTB/sAljWqhypz2x7pOo\nsUJF9MQ7DwVG8JitUAAc5fuGpLLR4WxF68YdkgycaCNknP7IATeD2ecqJkC26Av+\nGHHci2ct5ypVq9Qq8OuesYSox7XpO2+E+Y5DtgBo+/R7eOJRLA3Z0FDXFLGsdFxy\n0OKoew1MN79jP+KMZFJwvddH/TrkZBdIKlkacXYwUHU3c1ATwne6WteKTnEmr2aP\n99oQgfmNDyQgTeEL20jokF4YZOdm1UO3Cc7wTi2QlwyqUDbUmYtWzgbS9QbnaGGA\n58XdVacGznw=\n=Hst4\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 24 Aug 2007 22:34:06 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBenjamin Arai wrote:\n> This kind of disappointing, I was hoping there was more that could be done.\n> \n> There has to be another way to do incremental indexing without loosing\n> that much performance.\n\nWhat makes you think you are loosing performance by using partitioning?\n\nJoshua D. Drake\n\n> \n> Benjamin\n> \n> On Aug 24, 2007, at 6:29 PM, Joshua D. 
Drake wrote:\n> \n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n> \n>> Brandon Shalton wrote:\n>>> Benjamin,\n>>>\n>>>\n>>>>\n>>>> In order to avoid the re-indexing I was thinking of instead creating\n>>>> a new\n>>>> table each month (building its indexes and etc) and accessing them all\n>>>> through a view. This way I only have to index the new data each month.\n>>>>\n>>>\n>>> Take a look at bizgres.org (based on postgres).\n>>>\n>>> They have a parent-child structure.\n>>>\n>>> The way i use it, is I have about 30M records a day that are inserted\n>>> into the database.\n>>>\n>>> Each day is a \"child\" table to the \"parent\".\n>>>\n>>> so example:\n>>>\n>>> the parent table is called \"logfile\"\n>>>\n>>> each day, is a child, with the structure like \"logfile_YYMMDD\"\n>>>\n>>> the \"child\" inherits the table structure of the parent, such that you\n>>> could query the child table name directly, or you run the query against\n>>> the parent (ie. logfile table) and get all the data.\n>>>\n>>> the indexes are done on a per table basis, so new data that comes in, is\n>>> a lesser amount, and doesn't require re-indexing.\n> \n> \n>> PostgreSQL can do all of this too.\n> \n>> Sincerely,\n> \n>> Joshua D. Drake\n> \n>>>\n>>>\n>>> example:\n>>>\n>>> select * from logfile_070825 where datafield = 'foo'\n>>>\n>>> if i knew i wanted to specifically go into that child, or:\n>>>\n>>> select * from logfile where datafield = 'foo'\n>>>\n>>> and all child tables are searched and results merged. You can perform\n>>> any kind of sql query and field structures are you normally do.\n>>>\n>>> the downside is that the queries are run sequentially.\n>>>\n>>> so if you had 100 child tables, each table is queried via indexes, then\n>>> results are merged.\n>>>\n>>> but, this approach does allow me to dump alot of data in, without having\n>>> the re-indexing issues you are facing.\n>>>\n>>> at some point, you could roll up the days, in to weekly child tables,\n>>> then monthly tables, etc.\n>>>\n>>> I believe Bizgres has a new version of their system that does parallel\n>>> queries which would certainly speed things up.\n>>>\n>>> For your documents, you can do it by the day it was checked in, or maybe\n>>> you have some other way of logically grouping, but the parent/child\n>>> table structure really helped to solve my problem of adding in millions\n>>> of records each day.\n>>>\n>>> The closest thing in mysql is using merge tables, which is not really\n>>> practical when it comes time to do the joins to the tables.\n>>>\n>>> -brandon\n>>>\n>>> http://www.t3report.com - marketing intelligence for online marketing\n>>> and affiliate programs\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>>\n>>> ---------------------------(end of broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>>\n> \n> \n>> - --\n> \n>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>> UNIQUE NOT NULL\n>> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n>> PostgreSQL Replication: http://www.commandprompt.com/products/\n> \n>> \n\n\n- ---------------------------(end of broadcast)---------------------------\nTIP 6: explain analyze is your friend\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG0FZKATb/zqfZUUQRAsfRAJ4mjQP+1ltG7pqLFQ+Ru52LA5e7XACcDqKr\nPIihth2x3gx3qTEI8WfWNjo=\n=AhJx\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 25 Aug 2007 09:18:18 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and\n\taccessing via views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nAs stated in the previous email if I use partitioning then queries \nwill be executed sequentially - i.e., instead of log(n) it would be \n(# partitions) * log(n). Right?\n\nBenjamin\n\nOn Aug 25, 2007, at 9:18 AM, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Benjamin Arai wrote:\n>> This kind of disappointing, I was hoping there was more that could \n>> be done.\n>>\n>> There has to be another way to do incremental indexing without \n>> loosing\n>> that much performance.\n>\n> What makes you think you are loosing performance by using \n> partitioning?\n>\n> Joshua D. Drake\n>\n>>\n>> Benjamin\n>>\n>> On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:\n>>\n>>> -----BEGIN PGP SIGNED MESSAGE-----\n>>> Hash: SHA1\n>>\n>>> Brandon Shalton wrote:\n>>>> Benjamin,\n>>>>\n>>>>\n>>>>>\n>>>>> In order to avoid the re-indexing I was thinking of instead \n>>>>> creating\n>>>>> a new\n>>>>> table each month (building its indexes and etc) and accessing \n>>>>> them all\n>>>>> through a view. This way I only have to index the new data each \n>>>>> month.\n>>>>>\n>>>>\n>>>> Take a look at bizgres.org (based on postgres).\n>>>>\n>>>> They have a parent-child structure.\n>>>>\n>>>> The way i use it, is I have about 30M records a day that are \n>>>> inserted\n>>>> into the database.\n>>>>\n>>>> Each day is a \"child\" table to the \"parent\".\n>>>>\n>>>> so example:\n>>>>\n>>>> the parent table is called \"logfile\"\n>>>>\n>>>> each day, is a child, with the structure like \"logfile_YYMMDD\"\n>>>>\n>>>> the \"child\" inherits the table structure of the parent, such \n>>>> that you\n>>>> could query the child table name directly, or you run the query \n>>>> against\n>>>> the parent (ie. logfile table) and get all the data.\n>>>>\n>>>> the indexes are done on a per table basis, so new data that \n>>>> comes in, is\n>>>> a lesser amount, and doesn't require re-indexing.\n>>\n>>\n>>> PostgreSQL can do all of this too.\n>>\n>>> Sincerely,\n>>\n>>> Joshua D. Drake\n>>\n>>>>\n>>>>\n>>>> example:\n>>>>\n>>>> select * from logfile_070825 where datafield = 'foo'\n>>>>\n>>>> if i knew i wanted to specifically go into that child, or:\n>>>>\n>>>> select * from logfile where datafield = 'foo'\n>>>>\n>>>> and all child tables are searched and results merged. 
You can \n>>>> perform\n>>>> any kind of sql query and field structures are you normally do.\n>>>>\n>>>> the downside is that the queries are run sequentially.\n>>>>\n>>>> so if you had 100 child tables, each table is queried via \n>>>> indexes, then\n>>>> results are merged.\n>>>>\n>>>> but, this approach does allow me to dump alot of data in, \n>>>> without having\n>>>> the re-indexing issues you are facing.\n>>>>\n>>>> at some point, you could roll up the days, in to weekly child \n>>>> tables,\n>>>> then monthly tables, etc.\n>>>>\n>>>> I believe Bizgres has a new version of their system that does \n>>>> parallel\n>>>> queries which would certainly speed things up.\n>>>>\n>>>> For your documents, you can do it by the day it was checked in, \n>>>> or maybe\n>>>> you have some other way of logically grouping, but the parent/child\n>>>> table structure really helped to solve my problem of adding in \n>>>> millions\n>>>> of records each day.\n>>>>\n>>>> The closest thing in mysql is using merge tables, which is not \n>>>> really\n>>>> practical when it comes time to do the joins to the tables.\n>>>>\n>>>> -brandon\n>>>>\n>>>> http://www.t3report.com - marketing intelligence for online \n>>>> marketing\n>>>> and affiliate programs\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>>\n>>>> ---------------------------(end of \n>>>> broadcast)---------------------------\n>>>> TIP 6: explain analyze is your friend\n>>>>\n>>\n>>\n>>> - --\n>>\n>>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>>> UNIQUE NOT NULL\n>>> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n>>> donate\n>>> PostgreSQL Replication: http://www.commandprompt.com/products/\n>>\n>>>\n>\n>\n> - ---------------------------(end of \n> broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n>\n>\n>\n> - --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. 
===\n> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n> \t\t\tUNIQUE NOT NULL\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFG0FZKATb/zqfZUUQRAsfRAJ4mjQP+1ltG7pqLFQ+Ru52LA5e7XACcDqKr\n> PIihth2x3gx3qTEI8WfWNjo=\n> =AhJx\n> -----END PGP SIGNATURE-----\n>\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (Darwin)\n\niQIVAwUBRtBxK/yqRf6YpodNAQKVkhAAgF4DaXeMxplX1EUXZMuw9aqr+75NxNcp\nZOCJPSFN0jwzY3MlFCRVjL1kzXmRJB4L3fE2xVQX9reY62TPfYC8m/xatey1X6nc\nRdfNb9IzL6OyAghcpnUnwYntQtmGRpJtS7LQrx/SiDz8LWIp2S5v3Q9S8alKNTUS\nFupCNy1bL3yJf9tySSvol6JSH2edVt8f48J1j03f5B9zh+G/rKrQ+muuKOHyU3mb\ncVJ+gbSWCesuo+9rfaJ24m2ODwZm/YA+ENhlc3EOvD8z+cYn2OjuvAqvHABRsEKe\n+E9NWBPK/7UT4/T4B/LcBW1B6VISFqyETkwe2fhY5kVZnF+f0KtQIxXh/9qMsnnh\ntWthI9YmG4MIBmCsJwdneABHdfMJDp8IlawXqMlX4VkPHUrUtiQV/oDNsHMrU8BM\nSZOK5m0ADgXk0rndkEWXhERsyuFaocFj+snvaJEVH9PJSDVgjo7EMW5Qfo6p3NFg\nujBurhLaSuj52vClbdOs3lYp0Drbuf9iQnot3pD4XsCKAOTQm3S7BvgKMd5FUHLX\nHBFn4KiSRGx7hwlrss4rjqJ8BoJKbtvGxyNSiwZkrAOke+gqEML6pPdvlAj3Dif8\nKrsKcEu/cuR8euqX9IYCZIw4GYLqgs3mewfQIt5bSfw3yHvFyOgolyUeYfnYYlbr\n+u145pL2KZc=\n=T4dg\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 25 Aug 2007 11:12:54 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBenjamin Arai wrote:\n> As stated in the previous email if I use partitioning then queries will\n> be executed sequentially - i.e., instead of log(n) it would be (#\n> partitions) * log(n). Right?\n\nThe planner will consider every relevant partition during the execution.\nWhich may be a performance hit, it may not be. It depends on many\nfactors. In general however, partitioning when done correctly is a\nperformance benefit and a maintenance benefit.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> Benjamin\n> \n> On Aug 25, 2007, at 9:18 AM, Joshua D. Drake wrote:\n> \n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n> \n>> Benjamin Arai wrote:\n>>> This kind of disappointing, I was hoping there was more that could be\n>>> done.\n>>>\n>>> There has to be another way to do incremental indexing without loosing\n>>> that much performance.\n> \n>> What makes you think you are loosing performance by using partitioning?\n> \n>> Joshua D. Drake\n> \n>>>\n>>> Benjamin\n>>>\n>>> On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:\n>>>\n>>>> -----BEGIN PGP SIGNED MESSAGE-----\n>>>> Hash: SHA1\n>>>\n>>>> Brandon Shalton wrote:\n>>>>> Benjamin,\n>>>>>\n>>>>>\n>>>>>>\n>>>>>> In order to avoid the re-indexing I was thinking of instead creating\n>>>>>> a new\n>>>>>> table each month (building its indexes and etc) and accessing them\n>>>>>> all\n>>>>>> through a view. 
This way I only have to index the new data each\n>>>>>> month.\n>>>>>>\n>>>>>\n>>>>> Take a look at bizgres.org (based on postgres).\n>>>>>\n>>>>> They have a parent-child structure.\n>>>>>\n>>>>> The way i use it, is I have about 30M records a day that are inserted\n>>>>> into the database.\n>>>>>\n>>>>> Each day is a \"child\" table to the \"parent\".\n>>>>>\n>>>>> so example:\n>>>>>\n>>>>> the parent table is called \"logfile\"\n>>>>>\n>>>>> each day, is a child, with the structure like \"logfile_YYMMDD\"\n>>>>>\n>>>>> the \"child\" inherits the table structure of the parent, such that you\n>>>>> could query the child table name directly, or you run the query\n>>>>> against\n>>>>> the parent (ie. logfile table) and get all the data.\n>>>>>\n>>>>> the indexes are done on a per table basis, so new data that comes\n>>>>> in, is\n>>>>> a lesser amount, and doesn't require re-indexing.\n>>>\n>>>\n>>>> PostgreSQL can do all of this too.\n>>>\n>>>> Sincerely,\n>>>\n>>>> Joshua D. Drake\n>>>\n>>>>>\n>>>>>\n>>>>> example:\n>>>>>\n>>>>> select * from logfile_070825 where datafield = 'foo'\n>>>>>\n>>>>> if i knew i wanted to specifically go into that child, or:\n>>>>>\n>>>>> select * from logfile where datafield = 'foo'\n>>>>>\n>>>>> and all child tables are searched and results merged. You can perform\n>>>>> any kind of sql query and field structures are you normally do.\n>>>>>\n>>>>> the downside is that the queries are run sequentially.\n>>>>>\n>>>>> so if you had 100 child tables, each table is queried via indexes,\n>>>>> then\n>>>>> results are merged.\n>>>>>\n>>>>> but, this approach does allow me to dump alot of data in, without\n>>>>> having\n>>>>> the re-indexing issues you are facing.\n>>>>>\n>>>>> at some point, you could roll up the days, in to weekly child tables,\n>>>>> then monthly tables, etc.\n>>>>>\n>>>>> I believe Bizgres has a new version of their system that does parallel\n>>>>> queries which would certainly speed things up.\n>>>>>\n>>>>> For your documents, you can do it by the day it was checked in, or\n>>>>> maybe\n>>>>> you have some other way of logically grouping, but the parent/child\n>>>>> table structure really helped to solve my problem of adding in\n>>>>> millions\n>>>>> of records each day.\n>>>>>\n>>>>> The closest thing in mysql is using merge tables, which is not really\n>>>>> practical when it comes time to do the joins to the tables.\n>>>>>\n>>>>> -brandon\n>>>>>\n>>>>> http://www.t3report.com - marketing intelligence for online marketing\n>>>>> and affiliate programs\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>>\n>>>>> ---------------------------(end of\n>>>>> broadcast)---------------------------\n>>>>> TIP 6: explain analyze is your friend\n>>>>>\n>>>\n>>>\n>>>> - --\n>>>\n>>>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>>>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>>>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>>>> UNIQUE NOT NULL\n>>>> Donate to the PostgreSQL Project:\n>>>> http://www.postgresql.org/about/donate\n>>>> PostgreSQL Replication: http://www.commandprompt.com/products/\n>>>\n>>>>\n> \n> \n>> - ---------------------------(end of\n>> broadcast)---------------------------\n>> TIP 6: explain analyze is your friend\n> \n> \n> \n>> - --\n> \n>> === The PostgreSQL Company: Command Prompt, Inc. 
===\n>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>> UNIQUE NOT NULL\n>> Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate\n>> PostgreSQL Replication: http://www.commandprompt.com/products/\n> \n>> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG0HJ5ATb/zqfZUUQRAuEdAJwNwsr/XCsr85tElSVbRVMUHME+PACglbJK\ngj5cZgOtgEEjUPph0jpsOcw=\n=u7Ox\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 25 Aug 2007 11:18:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and\n\taccessing via views" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nSince I am using tsearch2 on the table I think there is going to be a \nsignificant performance hit - e.g., I partition by batch (batches are \nnot separated by date, they are essentially random subsets of a much \nlarger data-set). I am querying this database using tsearch2, so I \nam assuming all tables are going to be queried each time since the \ntext is not partition by any specific constraint - e.g., >R goes to \ntable 1 and <=R goes to table 2.\n\nBenjamin\n\nOn Aug 25, 2007, at 11:18 AM, Joshua D. Drake wrote:\n\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Benjamin Arai wrote:\n>> As stated in the previous email if I use partitioning then queries \n>> will\n>> be executed sequentially - i.e., instead of log(n) it would be (#\n>> partitions) * log(n). Right?\n>\n> The planner will consider every relevant partition during the \n> execution.\n> Which may be a performance hit, it may not be. It depends on many\n> factors. In general however, partitioning when done correctly is a\n> performance benefit and a maintenance benefit.\n>\n> Sincerely,\n>\n> Joshua D. Drake\n>\n>\n>>\n>> Benjamin\n>>\n>> On Aug 25, 2007, at 9:18 AM, Joshua D. Drake wrote:\n>>\n>>> -----BEGIN PGP SIGNED MESSAGE-----\n>>> Hash: SHA1\n>>\n>>> Benjamin Arai wrote:\n>>>> This kind of disappointing, I was hoping there was more that \n>>>> could be\n>>>> done.\n>>>>\n>>>> There has to be another way to do incremental indexing without \n>>>> loosing\n>>>> that much performance.\n>>\n>>> What makes you think you are loosing performance by using \n>>> partitioning?\n>>\n>>> Joshua D. Drake\n>>\n>>>>\n>>>> Benjamin\n>>>>\n>>>> On Aug 24, 2007, at 6:29 PM, Joshua D. Drake wrote:\n>>>>\n>>>>> -----BEGIN PGP SIGNED MESSAGE-----\n>>>>> Hash: SHA1\n>>>>\n>>>>> Brandon Shalton wrote:\n>>>>>> Benjamin,\n>>>>>>\n>>>>>>\n>>>>>>>\n>>>>>>> In order to avoid the re-indexing I was thinking of instead \n>>>>>>> creating\n>>>>>>> a new\n>>>>>>> table each month (building its indexes and etc) and accessing \n>>>>>>> them\n>>>>>>> all\n>>>>>>> through a view. 
This way I only have to index the new data each\n>>>>>>> month.\n>>>>>>>\n>>>>>>\n>>>>>> Take a look at bizgres.org (based on postgres).\n>>>>>>\n>>>>>> They have a parent-child structure.\n>>>>>>\n>>>>>> The way i use it, is I have about 30M records a day that are \n>>>>>> inserted\n>>>>>> into the database.\n>>>>>>\n>>>>>> Each day is a \"child\" table to the \"parent\".\n>>>>>>\n>>>>>> so example:\n>>>>>>\n>>>>>> the parent table is called \"logfile\"\n>>>>>>\n>>>>>> each day, is a child, with the structure like \"logfile_YYMMDD\"\n>>>>>>\n>>>>>> the \"child\" inherits the table structure of the parent, such \n>>>>>> that you\n>>>>>> could query the child table name directly, or you run the query\n>>>>>> against\n>>>>>> the parent (ie. logfile table) and get all the data.\n>>>>>>\n>>>>>> the indexes are done on a per table basis, so new data that comes\n>>>>>> in, is\n>>>>>> a lesser amount, and doesn't require re-indexing.\n>>>>\n>>>>\n>>>>> PostgreSQL can do all of this too.\n>>>>\n>>>>> Sincerely,\n>>>>\n>>>>> Joshua D. Drake\n>>>>\n>>>>>>\n>>>>>>\n>>>>>> example:\n>>>>>>\n>>>>>> select * from logfile_070825 where datafield = 'foo'\n>>>>>>\n>>>>>> if i knew i wanted to specifically go into that child, or:\n>>>>>>\n>>>>>> select * from logfile where datafield = 'foo'\n>>>>>>\n>>>>>> and all child tables are searched and results merged. You can \n>>>>>> perform\n>>>>>> any kind of sql query and field structures are you normally do.\n>>>>>>\n>>>>>> the downside is that the queries are run sequentially.\n>>>>>>\n>>>>>> so if you had 100 child tables, each table is queried via \n>>>>>> indexes,\n>>>>>> then\n>>>>>> results are merged.\n>>>>>>\n>>>>>> but, this approach does allow me to dump alot of data in, without\n>>>>>> having\n>>>>>> the re-indexing issues you are facing.\n>>>>>>\n>>>>>> at some point, you could roll up the days, in to weekly child \n>>>>>> tables,\n>>>>>> then monthly tables, etc.\n>>>>>>\n>>>>>> I believe Bizgres has a new version of their system that does \n>>>>>> parallel\n>>>>>> queries which would certainly speed things up.\n>>>>>>\n>>>>>> For your documents, you can do it by the day it was checked \n>>>>>> in, or\n>>>>>> maybe\n>>>>>> you have some other way of logically grouping, but the parent/ \n>>>>>> child\n>>>>>> table structure really helped to solve my problem of adding in\n>>>>>> millions\n>>>>>> of records each day.\n>>>>>>\n>>>>>> The closest thing in mysql is using merge tables, which is not \n>>>>>> really\n>>>>>> practical when it comes time to do the joins to the tables.\n>>>>>>\n>>>>>> -brandon\n>>>>>>\n>>>>>> http://www.t3report.com - marketing intelligence for online \n>>>>>> marketing\n>>>>>> and affiliate programs\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>>\n>>>>>> ---------------------------(end of\n>>>>>> broadcast)---------------------------\n>>>>>> TIP 6: explain analyze is your friend\n>>>>>>\n>>>>\n>>>>\n>>>>> - --\n>>>>\n>>>>> === The PostgreSQL Company: Command Prompt, Inc. 
===\n>>>>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>>>>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>>>>> UNIQUE NOT NULL\n>>>>> Donate to the PostgreSQL Project:\n>>>>> http://www.postgresql.org/about/donate\n>>>>> PostgreSQL Replication: http://www.commandprompt.com/products/\n>>>>\n>>>>>\n>>\n>>\n>>> - ---------------------------(end of\n>>> broadcast)---------------------------\n>>> TIP 6: explain analyze is your friend\n>>\n>>\n>>\n>>> - --\n>>\n>>> === The PostgreSQL Company: Command Prompt, Inc. ===\n>>> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n>>> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n>>> UNIQUE NOT NULL\n>>> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n>>> donate\n>>> PostgreSQL Replication: http://www.commandprompt.com/products/\n>>\n>>>\n>\n>\n> - --\n>\n> === The PostgreSQL Company: Command Prompt, Inc. ===\n> Sales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\n> PostgreSQL solutions since 1997 http://www.commandprompt.com/\n> \t\t\tUNIQUE NOT NULL\n> Donate to the PostgreSQL Project: http://www.postgresql.org/about/ \n> donate\n> PostgreSQL Replication: http://www.commandprompt.com/products/\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFG0HJ5ATb/zqfZUUQRAuEdAJwNwsr/XCsr85tElSVbRVMUHME+PACglbJK\n> gj5cZgOtgEEjUPph0jpsOcw=\n> =u7Ox\n> -----END PGP SIGNATURE-----\n>\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (Darwin)\n\niQIVAwUBRtCHvPyqRf6YpodNAQLDuBAAp+dg1MHZy+hjZ8Zk9OQTbeADgJqGPpi9\nG7+y3iyaaqOF66TC52P7OaqO6nPhoNCNMCxwztnASyxpftGD5yJ4AZTSGcbAsWB9\ncO5mE1mgbngZNPnRLypeJ81hyE6bniNNL7xXSq9LB8wRMczFZwGVZT66+lMIFjvv\n0OrbAcSNUFqddky7EFm8gx6A2FNIzSdFB0dNbzpKwEOTnCHKvh+O99sAr/LB7mmL\nHj/wzeQKrWbDAB3+N9rczivZ03DvYAGbUY9qBfNj7Y9YL3iu/Q+Oy4bHtI6d/a7B\nwepol2xe1sYEtQ+R3yMPXFte0483n8XIdXxa412ZSIEBfLxHzV6M7JTbPtgWwE+9\n7xvyYbO7xQL9N/P8ZGg75eEqXtUrepGmJG0Y30qF5sNdMG0pWoz1bzDjSLNCnylq\nJwsO8p1EHNPnPRqotwZZSfLUW16eREqLaOrSC84gIw5Q6zAMZe/k2ckzzHKPGB1c\nsckaQROcgK4Lu9ywjRjBjNqclOMasf0MCrsDVMQE/wnh4GoDL/PAyEOqnlpvJ+cx\nk4kmOrEz5GRZQehHUI7CdejFwZ32sAB+nV2r8zDW9FSxgoRoFvtE2hooJ9orv0IU\n1F8TeBdifVP/Ef/lHAHs6IqEH45y72WqrWFZsIdU1PDe0MyfgMaOBwdwXNeZqky/\nIF5SMKbl9yA=\n=F9Oq\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 25 Aug 2007 12:49:13 -0700", "msg_from": "Benjamin Arai <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "\nOn Aug 24, 2007, at 7:41 PM, Benjamin Arai wrote:\n\n> Hi,\n>\n> I have an application which loads millions of NEW documents each month\n> into a PostgreSQL tsearch2 table. I have the initial version \n> completed\n> and searching performance is great but my problem is that each time \n> a new\n> month rolls around I have to drop all the indexes do a COPY and re- \n> index\n> the entire table. This is problematic considering that each month \n> takes\n> longer than the previous to rebuild the indexes and the application in\n> unavailable during the rebuilding process.\n>\n> In order to avoid the re-indexing I was thinking of instead \n> creating a new\n> table each month (building its indexes and etc) and accessing them all\n> through a view. This way I only have to index the new data each month.\n>\n> Does this work? Does a view with N tables make it N times slower for\n> tsearch2 queries? 
Is there a better solution?\n\n\nYou can use Postgres's inheritance mechanism for your partitioning \nmechanism and combine it with constraint exclusion to avoid the N^2 \nissues. See:\n\nhttp://www.postgresql.org/docs/8.2/interactive/ddl-inherit.html\n\nand\n\nhttp://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html\n\nBasically, create a table from which all of your partitioned tables \ninherit. Partition in such a way that you can use constraint \nexclusion and then you can treat the parent table like the view you \nwere suggesting.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Sat, 25 Aug 2007 14:58:06 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "\nOn Aug 25, 2007, at 2:58 PM, Erik Jones wrote:\n\n>\n> On Aug 24, 2007, at 7:41 PM, Benjamin Arai wrote:\n>\n>> Hi,\n>>\n>> I have an application which loads millions of NEW documents each \n>> month\n>> into a PostgreSQL tsearch2 table. I have the initial version \n>> completed\n>> and searching performance is great but my problem is that each \n>> time a new\n>> month rolls around I have to drop all the indexes do a COPY and re- \n>> index\n>> the entire table. This is problematic considering that each month \n>> takes\n>> longer than the previous to rebuild the indexes and the \n>> application in\n>> unavailable during the rebuilding process.\n>>\n>> In order to avoid the re-indexing I was thinking of instead \n>> creating a new\n>> table each month (building its indexes and etc) and accessing them \n>> all\n>> through a view. This way I only have to index the new data each \n>> month.\n>>\n>> Does this work? Does a view with N tables make it N times slower for\n>> tsearch2 queries? Is there a better solution?\n>\n>\n> You can use Postgres's inheritance mechanism for your partitioning \n> mechanism and combine it with constraint exclusion to avoid the N^2 \n> issues. See:\n>\n> http://www.postgresql.org/docs/8.2/interactive/ddl-inherit.html\n>\n> and\n>\n> http://www.postgresql.org/docs/8.2/interactive/ddl-partitioning.html\n>\n> Basically, create a table from which all of your partitioned tables \n> inherit. Partition in such a way that you can use constraint \n> exclusion and then you can treat the parent table like the view you \n> were suggesting.\n>\n> Erik Jones\n>\n\nSorry, I didn't see that you had crossposted and carried the \nconversation on another list. Please, don't do that. Avoid the top \nposting, as well. They both make it difficult for others to join in \nor follow the conversations and issues.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Sat, 25 Aug 2007 15:09:01 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": ">\n> As stated in the previous email if I use partitioning then queries will \n> be executed sequentially - i.e., instead of log(n) it would be (# \n> partitions) * log(n). Right?\n>\n\ndepends.. 
since indexes would be hit for each child table, the time for \nquery is dependent on the amount of data that is indexed in each table.\n\nthe querying of the parent is still pretty quick given dual processor and a \nfast array filestorage device.\n\ngiven your situation, i would give the parent/child approach a child. I \nhaven't checked in postgres if it is has it has Joshua had replied, but I do \nknow bizgres does as i have been running this configuration for the last 3 \nyears and it solved my problem of importing 30-60M records in a day and \nstill being able to query the database for data.\n\n-brandon\n\nhttp://www.t3report.com - marketing intelligence for online marketing and \naffiliate programs\n\n", "msg_date": "Sat, 25 Aug 2007 14:17:53 -0700", "msg_from": "\"Brandon Shalton\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" }, { "msg_contents": "\nOn Aug 25, 2007, at 1:34 AM, Benjamin Arai wrote:\n\n> There has to be another way to do incremental indexing without \n> loosing that much performance.\n\nThis is the killer feature that prevents us from using the tsearch2 \nfull text indexer on postgres. we're investigating making a foreign \ntable from a SOLR full text index so our app only talks to Pg but the \ntext search is held in a good index.\n\n", "msg_date": "Mon, 27 Aug 2007 10:43:19 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [PERFORM] Partioning tsearch2 a table into chunks and accessing\n\tvia views" } ]
[ { "msg_contents": "On Sun, Aug 26, 2007 at 01:22:58AM +0400, Max Zorloff wrote:\n> Hello.\n> \n> I have a postgres 8.0 and ~400mb database with lots of simple selects \n> using indexes.\n> I've installed pgpool on the system. I've set num_init_children to 5 and \n> here is the top output.\n> One of postmasters is my demon running some insert/update tasks. I see \n> that they all use cpu heavily, but do not use the shared memory. \n> shared_buffers is set to 60000, yet they use a minimal part of that. I'd \n> like to know why won't they use more? All the indexes and half of the \n> database should be in the shared memory, is it not? Or am I completely \n> missing what are the shared_buffers for? If so, then how do I put my \n> indexes and at least a part of the data into memory?\n\nshared_memory is used for caching. It is filled as stuff is used. If\nyou're not using all of it that means it isn't needed. Remember, it is\nnot the only cache. Since your database is only 400MB it will fit\nentirely inside the OS disk cache, so you really don't need much shared\nmemory at all.\n\nLoading stuff into memory for the hell of it is a waste, let the system\nmanage the memory itself, if it needs it, it'll use it.\n\nHave a nice day,\n-- \nMartijn van Oosterhout <[email protected]> http://svana.org/kleptog/\n> From each according to his ability. To each according to his ability to litigate.", "msg_date": "Sat, 25 Aug 2007 22:39:52 +0200", "msg_from": "Martijn van Oosterhout <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "Hello.\n\nI have a postgres 8.0 and ~400mb database with lots of simple selects \nusing indexes.\nI've installed pgpool on the system. I've set num_init_children to 5 and \nhere is the top output.\nOne of postmasters is my demon running some insert/update tasks. I see \nthat they all use cpu heavily, but do not use the shared memory. \nshared_buffers is set to 60000, yet they use a minimal part of that. I'd \nlike to know why won't they use more? All the indexes and half of the \ndatabase should be in the shared memory, is it not? Or am I completely \nmissing what are the shared_buffers for? 
If so, then how do I put my \nindexes and at least a part of the data into memory?\n\ntop - 00:12:35 up 50 days, 13:22, 8 users, load average: 4.84, 9.71, \n13.22\nTasks: 279 total, 10 running, 268 sleeping, 1 stopped, 0 zombie\nCpu(s): 50.0% us, 12.9% sy, 0.0% ni, 33.2% id, 1.8% wa, 0.0% hi, 2.1% \nsi\nMem: 6102304k total, 4206948k used, 1895356k free, 159436k buffers\nSwap: 1959888k total, 12304k used, 1947584k free, 2919816k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n11492 postgres 16 0 530m 72m 60m S 14 1.2 0:50.91 postmaster\n11493 postgres 16 0 531m 72m 60m R 14 1.2 0:48.78 postmaster\n11490 postgres 15 0 530m 71m 59m S 13 1.2 0:50.26 postmaster\n11491 postgres 15 0 531m 75m 62m S 11 1.3 0:50.67 postmaster\n11495 postgres 16 0 530m 71m 59m R 10 1.2 0:50.71 postmaster\n10195 postgres 15 0 536m 84m 66m S 6 1.4 1:11.72 postmaster\n\npostgresql.conf:\n\nshared_buffers = 60000\nwork_mem = 2048\nmaintenance_work_mem = 256000\n\nThe rest are basically default values\n\nThank you in advance.\n", "msg_date": "Sun, 26 Aug 2007 01:22:58 +0400", "msg_from": "\"Max Zorloff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Shared memory usage" }, { "msg_contents": "On Sun, 26 Aug 2007 00:39:52 +0400, Martijn van Oosterhout \n<[email protected]> wrote:\n\n> On Sun, Aug 26, 2007 at 01:22:58AM +0400, Max Zorloff wrote:\n>> Hello.\n>>\n>> I have a postgres 8.0 and ~400mb database with lots of simple selects\n>> using indexes.\n>> I've installed pgpool on the system. I've set num_init_children to 5 and\n>> here is the top output.\n>> One of postmasters is my demon running some insert/update tasks. I see\n>> that they all use cpu heavily, but do not use the shared memory.\n>> shared_buffers is set to 60000, yet they use a minimal part of that. I'd\n>> like to know why won't they use more? All the indexes and half of the\n>> database should be in the shared memory, is it not? Or am I completely\n>> missing what are the shared_buffers for? If so, then how do I put my\n>> indexes and at least a part of the data into memory?\n>\n> shared_memory is used for caching. It is filled as stuff is used. If\n> you're not using all of it that means it isn't needed. Remember, it is\n> not the only cache. Since your database is only 400MB it will fit\n> entirely inside the OS disk cache, so you really don't need much shared\n> memory at all.\n>\n> Loading stuff into memory for the hell of it is a waste, let the system\n> manage the memory itself, if it needs it, it'll use it.\n>\n> Have a nice day,\n\nCould it be that most of the cpu usage is from lots of fast indexed sql \nqueries\nwrapped in sql functions?\n", "msg_date": "Sun, 26 Aug 2007 02:22:23 +0400", "msg_from": "\"Max Zorloff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "On Sun, 26 Aug 2007 00:39:52 +0400, Martijn van Oosterhout \n<[email protected]> wrote:\n\n> On Sun, Aug 26, 2007 at 01:22:58AM +0400, Max Zorloff wrote:\n>> Hello.\n>>\n> shared_memory is used for caching. It is filled as stuff is used. If\n> you're not using all of it that means it isn't needed. Remember, it is\n> not the only cache. Since your database is only 400MB it will fit\n> entirely inside the OS disk cache, so you really don't need much shared\n> memory at all.\n>\n> Loading stuff into memory for the hell of it is a waste, let the system\n> manage the memory itself, if it needs it, it'll use it.\n>\n\nWhere do I find my OS disk cache settings? 
I'm using Linux.\n", "msg_date": "Sun, 26 Aug 2007 02:32:59 +0400", "msg_from": "\"Max Zorloff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "> I have a postgres 8.0 and ~400mb database with lots of simple selects \n> using indexes.\n> I've installed pgpool on the system. I've set num_init_children to 5 and \n> here is the top output.\n> One of postmasters is my demon running some insert/update tasks. I see \n> that they all use cpu heavily, but do not use the shared memory. \n> shared_buffers is set to 60000, yet they use a minimal part of that. I'd \n> like to know why won't they use more? \n\nThis just looks like the output of top; what is telling you that\nPostgreSQL is not using the shared memory? Enable statistics collection\nand then look in pg_statio_user_tables.\n\n> top - 00:12:35 up 50 days, 13:22, 8 users, load average: 4.84, 9.71, \n> 13.22\n> Tasks: 279 total, 10 running, 268 sleeping, 1 stopped, 0 zombie\n> Cpu(s): 50.0% us, 12.9% sy, 0.0% ni, 33.2% id, 1.8% wa, 0.0% hi, 2.1% \n> si\n> Mem: 6102304k total, 4206948k used, 1895356k free, 159436k buffers\n> Swap: 1959888k total, 12304k used, 1947584k free, 2919816k cached\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 11492 postgres 16 0 530m 72m 60m S 14 1.2 0:50.91 postmaster\n> 11493 postgres 16 0 531m 72m 60m R 14 1.2 0:48.78 postmaster\n> 11490 postgres 15 0 530m 71m 59m S 13 1.2 0:50.26 postmaster\n> 11491 postgres 15 0 531m 75m 62m S 11 1.3 0:50.67 postmaster\n> 11495 postgres 16 0 530m 71m 59m R 10 1.2 0:50.71 postmaster\n> 10195 postgres 15 0 536m 84m 66m S 6 1.4 1:11.72 postmaster\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Mon, 27 Aug 2007 06:21:43 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "On Mon, 27 Aug 2007 14:21:43 +0400, Adam Tauno Williams \n<[email protected]> wrote:\n\n>> I have a postgres 8.0 and ~400mb database with lots of simple selects\n>> using indexes.\n>> I've installed pgpool on the system. I've set num_init_children to 5 and\n>> here is the top output.\n>> One of postmasters is my demon running some insert/update tasks. I see\n>> that they all use cpu heavily, but do not use the shared memory.\n>> shared_buffers is set to 60000, yet they use a minimal part of that. I'd\n>> like to know why won't they use more?\n>\n> This just looks like the output of top; what is telling you that\n> PostgreSQL is not using the shared memory? Enable statistics collection\n> and then look in pg_statio_user_tables.\n\nI have it enabled. How can I tell whether the shared memory is used from \nthe information in this table?\n", "msg_date": "Mon, 27 Aug 2007 17:18:19 +0400", "msg_from": "\"Max Zorloff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "First off, posting to two lists like you did (-general and -performance) \nis frowned on here. 
Pick whichever is more appropriate for the topic and \npost to just that one; in your case, the performance list would be more \nappropriate, and I'm only replying to there.\n\nOn Sun, 26 Aug 2007, Max Zorloff wrote:\n\n> shared_buffers is set to 60000, yet they use a minimal part of that.\n> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n> 11492 postgres 16 0 530m 72m 60m S 14 1.2 0:50.91 postmaster\n\nLooks to me like PostgreSQL is grabbing 530MB worth of memory on your \nsystem. run the ipcs command to see how big the block that's dedicated to \nthe main server is; I suspect you'll find it's at 400MB just like you \nexpect it to be. Here's an example from my server which has a 256MB \nshared_buffers:\n\n-bash-3.00$ ipcs\n------ Shared Memory Segments --------\nkey shmid owner perms bytes nattch status\n0x0052e2c1 1114114 postgres 600 277856256 3\n\nAlso: when you've got top running, hit the \"c\" key and the postmaster \nprocesses will give you more information about what they're doing you may \nfind helpful.\n\n> All the indexes and half of the database should be in the shared memory, \n> is it not? Or am I completely missing what are the shared_buffers for? \n> If so, then how do I put my indexes and at least a part of the data into \n> memory?\n\nYou can find out what's inside the shared_buffers cache by using the \ninstalling the contrib/pg_buffercache module against your database. The \nREADME.pg_buffercache file in there gives instructions on how to install \nit, and the sample query provided there should tell you what you're \nlooking for here.\n\n> Where do I find my OS disk cache settings? I'm using Linux.\n\nYou can get a summary of how much memory Linux is using to cache data by \nrunning the free command, and more in-depth information is available if \nyou look at the /proc/meminfo information. I have a paper you may find \nhelpful here, it has more detail in it than you need but it provides some \npointers to resources to help you better understand how memory management \nin Linux works: http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Wed, 29 Aug 2007 15:26:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" }, { "msg_contents": "On Wed, 29 Aug 2007 23:26:06 +0400, Greg Smith <[email protected]> \nwrote:\n\n> First off, posting to two lists like you did (-general and -performance) \n> is frowned on here. Pick whichever is more appropriate for the topic \n> and post to just that one; in your case, the performance list would be \n> more appropriate, and I'm only replying to there.\n\nSorry, didn't know that.\n\n> On Sun, 26 Aug 2007, Max Zorloff wrote:\n>\n>> shared_buffers is set to 60000, yet they use a minimal part of that.\n>> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n>> 11492 postgres 16 0 530m 72m 60m S 14 1.2 0:50.91 postmaster\n>\n> Looks to me like PostgreSQL is grabbing 530MB worth of memory on your \n> system. run the ipcs command to see how big the block that's dedicated \n> to the main server is; I suspect you'll find it's at 400MB just like you \n> expect it to be. 
Here's an example from my server which has a 256MB \n> shared_buffers:\n>\n> -bash-3.00$ ipcs\n> ------ Shared Memory Segments --------\n> key shmid owner perms bytes nattch status\n> 0x0052e2c1 1114114 postgres 600 277856256 3\n>\n> Also: when you've got top running, hit the \"c\" key and the postmaster \n> processes will give you more information about what they're doing you \n> may find helpful.\n>\n>> All the indexes and half of the database should be in the shared \n>> memory, is it not? Or am I completely missing what are the \n>> shared_buffers for? If so, then how do I put my indexes and at least a \n>> part of the data into memory?\n>\n> You can find out what's inside the shared_buffers cache by using the \n> installing the contrib/pg_buffercache module against your database. The \n> README.pg_buffercache file in there gives instructions on how to install \n> it, and the sample query provided there should tell you what you're \n> looking for here.\n\nThanks, I'll see that.\n\n>> Where do I find my OS disk cache settings? I'm using Linux.\n>\n> You can get a summary of how much memory Linux is using to cache data by \n> running the free command, and more in-depth information is available if \n> you look at the /proc/meminfo information. I have a paper you may find \n> helpful here, it has more detail in it than you need but it provides \n> some pointers to resources to help you better understand how memory \n> management in Linux works: \n> http://www.westnet.com/~gsmith/content/linux-pdflush.htm\n\nThanks for that, too.\n", "msg_date": "Thu, 30 Aug 2007 10:40:32 +0400", "msg_from": "\"Max Zorloff\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Shared memory usage" } ]
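The pg_buffercache inspection suggested in the thread above looks roughly like the following. This is only a sketch based on the sample query in the contrib module's README for the 8.x series; the view name and columns (pg_buffercache, relfilenode, reldatabase) should be checked against the version actually installed on the server.

-- Which relations currently occupy the most pages in shared_buffers
-- (requires contrib/pg_buffercache to be installed in this database).
SELECT c.relname, count(*) AS buffered_pages
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = c.relfilenode
  JOIN pg_database d ON b.reldatabase = d.oid
 WHERE d.datname = current_database()
 GROUP BY c.relname
 ORDER BY buffered_pages DESC
 LIMIT 10;

Comparing those counts against heap_blks_hit and heap_blks_read in pg_statio_user_tables gives a reasonable picture of whether the 60000-buffer cache is actually being exercised.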
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nHi,\n\nNote: I have already vacumm full. It does not solve the problem.\n\nI have a postgres 8.1 database. In the last days I have half traffic\nthan 4 weeks ago, and resources usage is twice. The resource monitor\ngraphs also shows hight peaks (usually there is not peaks)\n\nThe performarce is getting poor with the time.\n\nIm not able to find the problem, seems there is not slow querys ( I have\nlog_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n\nServer is HP, and seems there is not hardware problems detected.\n\nAny ideas to debug it?\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG0rh5Io1XmbAXRboRAuaTAJ4tDVHUMN5YRBdWxT//kPAfBvYqRACgvLst\nrJF3dmxzWHDOWB8yQwTyvpw=\n=2ic9\n-----END PGP SIGNATURE-----", "msg_date": "Mon, 27 Aug 2007 13:41:45 +0200", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres performance problem" }, { "msg_contents": "> Hi,\n> \n> Note: I have already vacumm full. It does not solve the problem.\n> \n> I have a postgres 8.1 database. In the last days I have half traffic\n> than 4 weeks ago, and resources usage is twice. The resource monitor\n> graphs also shows hight peaks (usually there is not peaks)\n> \n> The performarce is getting poor with the time.\n> \n> Im not able to find the problem, seems there is not slow querys ( I have\n> log_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n> \n> Server is HP, and seems there is not hardware problems detected.\n> \n> Any ideas to debug it?\n\nHi,\n\nfirst of all: let us know the exact version of PG and the OS.\n\nIf performance is getting worse, there ususally is some bloat\nenvolved. Not vacuuming aggressivly enough, might be the most\ncommon cause. Do you autovacuum or vacuum manually?\nTell us more...\n\n\nBye,\nChris.\n\n\n", "msg_date": "Mon, 27 Aug 2007 14:33:51 +0200", "msg_from": "Chris Mair <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance problem" }, { "msg_contents": "\n\nSO: CentOS release 4.3 (Final) (kernel: 2.6.9-34.0.1.ELsmp)\nPostgres: 8.1.3\n\nI had some problems before with autovacuum. So, Each day I crontab execute:\n\nvacuumdb -f -v --analyze\nreindex database vacadb\n\nI saw logs (the output of vacuum and reindex) and there is no errors.\n\nIf u need more info, I ll be pleased to tell it here ...\n\n\nChris Mair escribi�:\n>> Hi,\n>>\n>> Note: I have already vacumm full. It does not solve the problem.\n>>\n>> I have a postgres 8.1 database. In the last days I have half traffic\n>> than 4 weeks ago, and resources usage is twice. The resource monitor\n>> graphs also shows hight peaks (usually there is not peaks)\n>>\n>> The performarce is getting poor with the time.\n>>\n>> Im not able to find the problem, seems there is not slow querys ( I have\n>> log_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n>>\n>> Server is HP, and seems there is not hardware problems detected.\n>>\n>> Any ideas to debug it?\n> \n> Hi,\n> \n> first of all: let us know the exact version of PG and the OS.\n> \n> If performance is getting worse, there ususally is some bloat\n> envolved. Not vacuuming aggressivly enough, might be the most\n> common cause. 
Do you autovacuum or vacuum manually?\n> Tell us more...\n> \n> \n> Bye,\n> Chris.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n", "msg_date": "Mon, 27 Aug 2007 14:52:22 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres performance problem" }, { "msg_contents": "In response to Chris Mair <[email protected]>:\n\n> > Hi,\n> > \n> > Note: I have already vacumm full. It does not solve the problem.\n\nTo jump in here in Chris' defense, regular vacuum is not at all the same\nas vacuum full. Periodic vacuum is _much_ preferable to an occasional\nvacuum full.\n\nThe output of vacuum verbose would have useful information ... are you\nexceeding your FSM limits?\n\nTry a reindex on the database. There may be some obscure corner\ncases where reindex makes a notable improvement in performance.\n\n> > I have a postgres 8.1 database. In the last days I have half traffic\n> > than 4 weeks ago, and resources usage is twice. The resource monitor\n> > graphs also shows hight peaks (usually there is not peaks)\n\nResource monitor graphs? That statement means nothing to me, therefore\nI don't know if the information they're providing is useful or accurate,\nor even _what_ it is. What, exactly, are these graphs monitoring?\n\nYou might want to provide your postgresql.conf.\n\nHave you considered the possibility that the database has simply got more\nrecords and therefore access takes more IO and CPU?\n\n> > The performarce is getting poor with the time.\n> > \n> > Im not able to find the problem, seems there is not slow querys ( I have\n> > log_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n> > \n> > Server is HP, and seems there is not hardware problems detected.\n> > \n> > Any ideas to debug it?\n> \n> Hi,\n> \n> first of all: let us know the exact version of PG and the OS.\n> \n> If performance is getting worse, there ususally is some bloat\n> envolved. Not vacuuming aggressivly enough, might be the most\n> common cause. Do you autovacuum or vacuum manually?\n> Tell us more...\n> \n> \n> Bye,\n> Chris.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. 
The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Mon, 27 Aug 2007 09:29:35 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance problem" }, { "msg_contents": "Just a random thought/question...\nAre you running else on the machine? When you say \"resource usage\", do\nyou mean hd space, memory, processor, ???\nWhat are your values in top?\nMore info...\nCheers\nAnton\n\n\nOn 27/08/2007, Bill Moran <[email protected]> wrote:\n> In response to Chris Mair <[email protected]>:\n>\n> > > Hi,\n> > >\n> > > Note: I have already vacumm full. It does not solve the problem.\n>\n> To jump in here in Chris' defense, regular vacuum is not at all the same\n> as vacuum full. Periodic vacuum is _much_ preferable to an occasional\n> vacuum full.\n>\n> The output of vacuum verbose would have useful information ... are you\n> exceeding your FSM limits?\n>\n> Try a reindex on the database. There may be some obscure corner\n> cases where reindex makes a notable improvement in performance.\n>\n> > > I have a postgres 8.1 database. In the last days I have half traffic\n> > > than 4 weeks ago, and resources usage is twice. The resource monitor\n> > > graphs also shows hight peaks (usually there is not peaks)\n>\n> Resource monitor graphs? That statement means nothing to me, therefore\n> I don't know if the information they're providing is useful or accurate,\n> or even _what_ it is. What, exactly, are these graphs monitoring?\n>\n> You might want to provide your postgresql.conf.\n>\n> Have you considered the possibility that the database has simply got more\n> records and therefore access takes more IO and CPU?\n>\n> > > The performarce is getting poor with the time.\n> > >\n> > > Im not able to find the problem, seems there is not slow querys ( I have\n> > > log_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n> > >\n> > > Server is HP, and seems there is not hardware problems detected.\n> > >\n> > > Any ideas to debug it?\n> >\n> > Hi,\n> >\n> > first of all: let us know the exact version of PG and the OS.\n> >\n> > If performance is getting worse, there ususally is some bloat\n> > envolved. Not vacuuming aggressivly enough, might be the most\n> > common cause. Do you autovacuum or vacuum manually?\n> > Tell us more...\n> >\n> >\n> > Bye,\n> > Chris.\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/docs/faq\n> >\n> >\n> >\n> >\n> >\n> >\n>\n>\n> --\n> Bill Moran\n> Collaborative Fusion Inc.\n> http://people.collaborativefusion.com/~wmoran/\n>\n> [email protected]\n> Phone: 412-422-3463x4023\n>\n> ****************************************************************\n> IMPORTANT: This message contains confidential information and is\n> intended only for the individual named. If the reader of this\n> message is not an intended recipient (or the individual\n> responsible for the delivery of this message to an intended\n> recipient), please be advised that any re-use, dissemination,\n> distribution or copying of this message is prohibited. 
Please\n> notify the sender immediately by e-mail if you have received\n> this e-mail by mistake and delete this e-mail from your system.\n> E-mail transmission cannot be guaranteed to be secure or\n> error-free as information could be intercepted, corrupted, lost,\n> destroyed, arrive late or incomplete, or contain viruses. The\n> sender therefore does not accept liability for any errors or\n> omissions in the contents of this message, which arise as a\n> result of e-mail transmission.\n> ****************************************************************\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\n\n-- \necho '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc\nThis will help you for 99.9% of your problems ...\n", "msg_date": "Tue, 28 Aug 2007 20:36:11 +0200", "msg_from": "\"Anton Melser\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres performance problem" }, { "msg_contents": "Bill Moran escribi�:\n> In response to Chris Mair <[email protected]>:\n> \n>>> Hi,\n>>>\n>>> Note: I have already vacumm full. It does not solve the problem.\n> \n> To jump in here in Chris' defense, regular vacuum is not at all the same\n> as vacuum full. Periodic vacuum is _much_ preferable to an occasional\n> vacuum full.\n> \n> The output of vacuum verbose would have useful information ... are you\n> exceeding your FSM limits?\n\n\nI think its ok. There is not warning messages on vacuum verbose in the\nlast year. (I save all logs)\n\n\n> \n> Try a reindex on the database. There may be some obscure corner\n> cases where reindex makes a notable improvement in performance.\n\nI do it all days. (i know it is not necessary all days, but I should\navoid problems)\n\n> \n>>> I have a postgres 8.1 database. In the last days I have half traffic\n>>> than 4 weeks ago, and resources usage is twice. The resource monitor\n>>> graphs also shows hight peaks (usually there is not peaks)\n> \n> Resource monitor graphs? That statement means nothing to me, therefore\n> I don't know if the information they're providing is useful or accurate,\n> or even _what_ it is. What, exactly, are these graphs monitoring?\n\nI should have explain that. I use \"sar\" command, I get 1 minute per 5\nminutes, and I show the result in a graph. I have been using this one\nyear, and use to work very well due traffic / resources usage . In last\ndays started to show thinks that should not.\n\nBy the way, the server is two intel dual processor 4Gb ram. It is only\ndatabase server.\n\n\n> \n> You might want to provide your postgresql.conf.\n> \n\n\nI have tested these values. It makes sqls faster. I didn't change it in\nthe last 10 months, The values I changed are:\n\nmax_connections = 500\nautovacuum = off\nshared_buffers = 24576\nwork_mem = 3072\nmaintenance_work_mem = 65536\nwal_buffers = 1024\ncheckpoint_segments = 12\ncheckpoint_warning = 30\neffective_cache_size = 225000\nrandom_page_cost = 2\nlog_min_duration_statement = 1000\n\n\n> Have you considered the possibility that the database has simply got more\n> records and therefore access takes more IO and CPU?\n\nNot possible. All database is in RAM memory. iostat is quiet. And if\nthat was the problem, it should be in another way. Not in few weeks with\nhalf traffic. (I also have checked a possible attack, automatic \"sing\nup\" or thinks like that ... 
nothing found)\n\nBy other way, there is not \"slow sqls\". (ok, there is a few slow sqls\nbut are known slow sqls)\n\nI saw once a postgres sql server 8.1 that had poor performance with sql\ns that involve one table because someone change a column data type and\nseems it didn't work well. There is not that kind of changes in my\ndatabase for 2 months at least, but ... is there any chance that\ndatabase is being corrupted? Maybe dump database , delete it and restore\nit again may solve the problem ?\n\n\n> \n>>> The performarce is getting poor with the time.\n>>>\n>>> Im not able to find the problem, seems there is not slow querys ( I have\n>>> log_min_duration_statement = 5000 right now, tomorrow I ll decrease it )\n>>>\n>>> Server is HP, and seems there is not hardware problems detected.\n>>>\n>>> Any ideas to debug it?\n>> Hi,\n>>\n>> first of all: let us know the exact version of PG and the OS.\n>>\n>> If performance is getting worse, there ususally is some bloat\n>> envolved. Not vacuuming aggressivly enough, might be the most\n>> common cause. Do you autovacuum or vacuum manually?\n>> Tell us more...\n>>\n>>\n>> Bye,\n>> Chris.\n>>\n>>\n>>\n>> ---------------------------(end of broadcast)---------------------------\n>> TIP 3: Have you checked our extensive FAQ?\n>>\n>> http://www.postgresql.org/docs/faq\n>>\n>>\n>>\n>>\n>>\n>>\n> \n> \n", "msg_date": "Thu, 30 Aug 2007 09:52:16 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Postgres performance problem" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nHi ...\n\nSeems its solved. But the problem is not found.\n\nAs you may know, I do a vacuum full and a reindex database each day. I\nhave logs that confirm that its done and I can check that everything was\n fine.\n\nSo, this morning, I stopped the website, I stopped database, started it\nagain. (I was around 200 days without restarting), then I vacuum\ndatabase and reindex it (Same command as everyday) . Restart again, and\nrun again the website.\n\nNow seems its working fine. But I really does not know where is the\nproblem. Seems vacuum its not working fine? Maybe database should need\na restart? I really don't know.\n\nDoes someone had a similar problem?\n\nThanks in advance,\nRuben Rubio\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG1pLMIo1XmbAXRboRAqgQAKCkWcZYE8RDppEVI485wDLnIW2SfQCfV+Hj\ne8PurQb2TOSYDPW545AJ83c=\n=dQgM\n-----END PGP SIGNATURE-----", "msg_date": "Thu, 30 Aug 2007 11:50:04 +0200", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Solved] Postgres performance problem" }, { "msg_contents": "On Thu, Aug 30, 2007 at 11:50:04AM +0200, Ruben Rubio wrote:\n> As you may know, I do a vacuum full and a reindex database each day. I\n> have logs that confirm that its done and I can check that everything was\n> fine.\n> \n> So, this morning, I stopped the website, I stopped database, started it\n> again. (I was around 200 days without restarting), then I vacuum\n> database and reindex it (Same command as everyday) . Restart again, and\n> run again the website.\n> \n> Now seems its working fine. But I really does not know where is the\n> problem. Seems vacuum its not working fine? Maybe database should need\n> a restart? 
I really don't know.\n\nNo, it sounds to me like you just weren't vacuuming aggressively enough\nto keep up with demand.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Thu, 30 Aug 2007 12:31:51 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Solved] Postgres performance problem" }, { "msg_contents": "Decibel! wrote:\n> On Thu, Aug 30, 2007 at 11:50:04AM +0200, Ruben Rubio wrote:\n>> As you may know, I do a vacuum full and a reindex database each day. I\n>> have logs that confirm that its done and I can check that everything was\n>> fine.\n>>\n>> So, this morning, I stopped the website, I stopped database, started it\n>> again. (I was around 200 days without restarting), then I vacuum\n>> database and reindex it (Same command as everyday) . Restart again, and\n>> run again the website.\n>>\n>> Now seems its working fine. But I really does not know where is the\n>> problem. Seems vacuum its not working fine? Maybe database should need\n>> a restart? I really don't know.\n> \n> No, it sounds to me like you just weren't vacuuming aggressively enough\n> to keep up with demand.\n\nActually , I think it sounds like a stray long-lived transaction.\n\nRuben - vacuum can't recover rows if another transaction might be able \nto see them. So, if you have a connection that issues BEGIN and sits \nthere for 200 days you can end up with a lot of bloat in your database.\n\nNow, there's no way to prove that since you've restarted the \ndatabase-server, but keep an eye on it.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 30 Aug 2007 19:07:20 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Solved] Postgres performance problem" }, { "msg_contents": "Perhaps you had a long-running transaction open (probably a buggy or\nhung application) that was preventing dead rows from being cleaned up.\nRestarting PG closed the offending connection and rolled back the\ntransaction, which allowed vacuum to clean up all the dead rows.\n\nIf you're not running regular VACUUMs at all but are instead exclusively\nrunning VACUUM FULL, then I don't think you would see warnings about\nrunning out of fsm enties, which would explain why you did not notice\nthe bloat. I haven't confirmed that though, so I might be wrong.\n\n-- Mark Lewis\n\nOn Thu, 2007-08-30 at 11:50 +0200, Ruben Rubio wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> \n> \n> Hi ...\n> \n> Seems its solved. But the problem is not found.\n> \n> As you may know, I do a vacuum full and a reindex database each day. I\n> have logs that confirm that its done and I can check that everything was\n> fine.\n> \n> So, this morning, I stopped the website, I stopped database, started it\n> again. (I was around 200 days without restarting), then I vacuum\n> database and reindex it (Same command as everyday) . Restart again, and\n> run again the website.\n> \n> Now seems its working fine. But I really does not know where is the\n> problem. Seems vacuum its not working fine? Maybe database should need\n> a restart? 
I really don't know.\n> \n> Does someone had a similar problem?\n> \n> Thanks in advance,\n> Ruben Rubio\n> \n> \n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n> \n> iD8DBQFG1pLMIo1XmbAXRboRAqgQAKCkWcZYE8RDppEVI485wDLnIW2SfQCfV+Hj\n> e8PurQb2TOSYDPW545AJ83c=\n> =dQgM\n> -----END PGP SIGNATURE-----\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n", "msg_date": "Thu, 30 Aug 2007 11:08:47 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Solved] Postgres performance problem" }, { "msg_contents": "\nOn Aug 30, 2007, at 2:08 PM, Mark Lewis wrote:\n\n> If you're not running regular VACUUMs at all but are instead \n> exclusively\n> running VACUUM FULL, then I don't think you would see warnings about\n> running out of fsm enties, which would explain why you did not notice\n> the bloat. I haven't confirmed that though, so I might be wrong.\n\nIf you run vacuum full, your pages should be full so there should be \nvery small if any number of pages in the free space map. Thus, there \nwould be no warnings.\n\n", "msg_date": "Fri, 31 Aug 2007 15:36:37 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Solved] Postgres performance problem" } ]
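A quick way to check for the stray long-lived transaction suspected in the thread above is pg_stat_activity. This is a sketch for an 8.1 server with stats_command_string enabled; on that version an idle-in-transaction backend reports the literal query text '<IDLE> in transaction'.

-- Sessions sitting inside an open transaction; these keep VACUUM from
-- reclaiming dead rows, so bloat accumulates no matter how often it runs.
SELECT procpid, usename, datname, query_start, current_query
  FROM pg_stat_activity
 WHERE current_query = '<IDLE> in transaction'
 ORDER BY query_start;

If nothing turns up there, the trailing free-space-map summary printed by a database-wide VACUUM VERBOSE is the next thing to watch, since an undersized max_fsm_pages produces the same slow creep in table size.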
[ { "msg_contents": "Hi Guys,\n\nI have something odd. I have Gallery2 running on PostgreSQL 8.1, and \nrecently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is \nhow do I get PostgreSQL to work with their horrible code. The queries \nthey generate look something like :\nSELECT blah, blah FROM table1, table2 WHERE <some relational stuff> AND \nid IN (<here a list of 42000+ IDs are listed>)\n\nOn the previous version (which I can't recall what it was, but it was a \nversion 8.1) the queries executed fine, but suddenly now, these queries \nare taking up-to 4 minutes to complete. I am convinced it's the \nparsing/handling of the IN clause. It could, of course, be that the list \nhas grown so large that it can't fit into a buffer anymore. For obvious \nreasons I can't run an EXPLAIN ANALYZE from a prompt. I vacuum and \nreindex the database daily.\n\nI'd prefer not to have to rewrite the code, so any suggestions would be \nvery welcome.\n\nKind regards\n\nWillo van der Merwe\n", "msg_date": "Mon, 27 Aug 2007 14:16:48 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue" } ]
[ { "msg_contents": "Hello,\n\nI have a table (stats.tickets) with 2288965 rows (51 columns) and\nindexes like:\nind_ti_stats_numero btree (tday, tmonth, tyear, r_cat, r_numero)\nind_ti_stats_service btree (tday, tmonth, tyear, r_cat, r_service)\nind_ti_stats_tmp_service btree (r_service, tyear, tmonth)\nind_ti_stats_tmp_service2 btree (r_service, tyear, tmonth, r_cat)\n\n\nNow if i do :\n1°)# explain analyze SELECT tday AS n, '' AS class, a.r_cat AS cat,\nCOUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1\nEND) AS p, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END)\nAS np, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree\nEND) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE\nt_duree1 END),0) AS tmc FROM stats.tickets AS a WHERE a.r_numero='9908'\nAND tyear = 2007 AND tmonth = 8 GROUP BY tyear, tmonth, tday, a.r_cat;\n\nQUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=45412.96..45412.99 rows=1 width=34) (actual\ntime=649.944..650.178 rows=50 loops=1)\n -> Index Scan using ind_ti_stats_numero on tickets a\n(cost=0.00..45385.46 rows=1222 width=34) (actual time=15.697..642.570\nrows=1043 loops=1)\n Index Cond: ((tmonth = 8) AND (tyear = 2007) AND\n((r_numero)::text = '9908'::text))\nTotal runtime: 650.342 ms\n(4 lignes)\n\nTemps : 652,234 ms\n\n\n\n2°)\n# explain analyze SELECT tday AS n, '' AS class, a.r_cat AS cat,\nCOUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1\nEND) AS p, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END)\nAS np, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree\nEND) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE\nt_duree1 END),0) AS tmc FROM stats.tickets AS a WHERE a.r_service=95\nAND tyear = 2007 AND tmonth = 8 GROUP BY tyear, tmonth, tday, a.r_cat;\n\nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=193969.97..193970.88 rows=26 width=34) (actual\ntime=20834.559..20834.694 rows=27 loops=1)\n -> Bitmap Heap Scan on tickets a (cost=3714.84..186913.32\nrows=313629 width=34) (actual time=889.880..19028.315 rows=321395\nloops=1)\n Recheck Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth\n= 8))\n -> Bitmap Index Scan on ind_ti_stats_tmp_service\n(cost=0.00..3714.84 rows=313629 width=0) (actual time=836.181..836.181\nrows=321395 loops=1)\n Index Cond: ((r_service = 95) AND (tyear = 2007) AND\n(tmonth = 8))\nTotal runtime: 20835.191 ms\n(6 lignes)\n\nTemps : 20838,798 ms\n\n\n\\d stats.tickets\n[...]\nr_numero | character varying(17) | not null\nr_service | integer | not null default 0\n[...]\nstats.tickets has 173351 relpages , 2.30996e+06 reltuples.\n\n\nWhy in the first case, pgsql uses the \"better\" index and if i search\nr_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\nThere ara too much rows in this table ?\nI'm doing something wrong ?\n\n\n\n\nPS: sorry for my english, i'm french.\n\n-- \nPaul.\n\n\n\n\n\n\n\nHello,\n\nI have a table (stats.tickets) with  2288965 rows (51 columns) and indexes like:\nind_ti_stats_numero btree (tday, tmonth, tyear, r_cat, r_numero)\nind_ti_stats_service btree (tday, tmonth, tyear, r_cat, r_service)\nind_ti_stats_tmp_service btree (r_service, tyear, tmonth)\nind_ti_stats_tmp_service2 btree (r_service, 
tyear, tmonth, r_cat)\n\n\nNow if i do :\n1°)# explain analyze SELECT  tday AS n,  '' AS class, a.r_cat AS cat, COUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1 END) AS p,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END) AS np,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree END) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE t_duree1 END),0) AS tmc FROM stats.tickets AS a  WHERE a.r_numero='9908'  AND tyear = 2007 AND tmonth = 8  GROUP BY  tyear, tmonth, tday, a.r_cat;\n                                                                    QUERY PLAN                                                                     \n---------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate  (cost=45412.96..45412.99 rows=1 width=34) (actual time=649.944..650.178 rows=50 loops=1)\n   ->  Index Scan using ind_ti_stats_numero on tickets a  (cost=0.00..45385.46 rows=1222 width=34) (actual time=15.697..642.570 rows=1043 loops=1)\n         Index Cond: ((tmonth = 8) AND (tyear = 2007) AND ((r_numero)::text = '9908'::text))\nTotal runtime: 650.342 ms\n(4 lignes)\n\nTemps : 652,234 ms\n\n\n\n2°)\n# explain analyze SELECT  tday AS n,  '' AS class, a.r_cat AS cat, COUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1 END) AS p,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END) AS np,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree END) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE t_duree1 END),0) AS tmc FROM stats.tickets AS a  WHERE a.r_service=95  AND tyear = 2007 AND tmonth = 8  GROUP BY  tyear, tmonth, tday, a.r_cat;\n                                                                       QUERY PLAN                                                                       \n--------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate  (cost=193969.97..193970.88 rows=26 width=34) (actual time=20834.559..20834.694 rows=27 loops=1)\n   ->  Bitmap Heap Scan on tickets a  (cost=3714.84..186913.32 rows=313629 width=34) (actual time=889.880..19028.315 rows=321395 loops=1)\n         Recheck Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth = 8))\n         ->  Bitmap Index Scan on ind_ti_stats_tmp_service  (cost=0.00..3714.84 rows=313629 width=0) (actual time=836.181..836.181 rows=321395 loops=1)\n               Index Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth = 8))\nTotal runtime: 20835.191 ms\n(6 lignes)\n\nTemps : 20838,798 ms\n\n\n\\d stats.tickets\n[...]\nr_numero            | character varying(17)       | not null\nr_service           | integer                     | not null default 0\n[...]\nstats.tickets has 173351 relpages , 2.30996e+06 reltuples.\n\n\nWhy in the first case, pgsql uses the \"better\" index and if i search r_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\nThere ara too much rows in this table ?\nI'm doing something wrong ?\n\n\n\n\nPS: sorry for my english, i'm french.\n\n-- \nPaul.", "msg_date": "Mon, 27 Aug 2007 15:11:57 +0200", "msg_from": "GOERGLER Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Bitmap Heap Scan before using index" } ]
[ { "msg_contents": "Hi Guys,\n\nI have something odd. I have Gallery2 running on PostgreSQL 8.1, and\nrecently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is\nhow do I get PostgreSQL to work with their horrible code. The queries\nthey generate look something like :\nSELECT blah, blah FROM table1, table2 WHERE <some relational stuff> AND\nid IN (<here a list of 42000+ IDs are listed>)\n\nOn the previous version (which I can't recall what it was, but it was a\nversion 8.1) the queries executed fine, but suddenly now, these queries\nare taking up-to 4 minutes to complete. I am convinced it's the\nparsing/handling of the IN clause. It could, of course, be that the list\nhas grown so large that it can't fit into a buffer anymore. For obvious\nreasons I can't run an EXPLAIN ANALYZE from a prompt. I vacuum and\nreindex the database daily.\n\nI'd prefer not to have to rewrite the code, so any suggestions would be\nvery welcome.\n\nKind regards\n\nWillo van der Merwe\n\n", "msg_date": "Mon, 27 Aug 2007 15:41:42 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue" }, { "msg_contents": "In response to Willo van der Merwe <[email protected]>:\n\n> Hi Guys,\n> \n> I have something odd. I have Gallery2 running on PostgreSQL 8.1, and\n> recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is\n> how do I get PostgreSQL to work with their horrible code. The queries\n> they generate look something like :\n> SELECT blah, blah FROM table1, table2 WHERE <some relational stuff> AND\n> id IN (<here a list of 42000+ IDs are listed>)\n> \n> On the previous version (which I can't recall what it was, but it was a\n> version 8.1) the queries executed fine, but suddenly now, these queries\n> are taking up-to 4 minutes to complete. I am convinced it's the\n> parsing/handling of the IN clause. It could, of course, be that the list\n> has grown so large that it can't fit into a buffer anymore. For obvious\n> reasons I can't run an EXPLAIN ANALYZE from a prompt.\n\nThose reasons are not obvious to me. The explain analyze output is\ngoing to be key to working this out -- unless it's something like\nyour postgresql.conf isn't properly tuned.\n\n> I vacuum and\n> reindex the database daily.\n> \n> I'd prefer not to have to rewrite the code, so any suggestions would be\n> very welcome.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Mon, 27 Aug 2007 10:45:20 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue" }, { "msg_contents": "Willo van der Merwe <[email protected]> writes:\n> I have something odd. I have Gallery2 running on PostgreSQL 8.1, and\n> recently I upgraded to 8.1.9-1.el4s1.1 (64bit). The issue here really is\n> how do I get PostgreSQL to work with their horrible code. The queries\n> they generate look something like :\n> SELECT blah, blah FROM table1, table2 WHERE <some relational stuff> AND\n> id IN (<here a list of 42000+ IDs are listed>)\n\n> On the previous version (which I can't recall what it was, but it was a\n> version 8.1) the queries executed fine, but suddenly now, these queries\n> are taking up-to 4 minutes to complete. I am convinced it's the\n> parsing/handling of the IN clause.\n\nYou're wrong about that, because we have not done anything to change IN\nplanning in 8.1.x. 
You might need to re-ANALYZE or something; it sounds\nto me more like the planner has changed strategies in the wrong direction.\n\nFWIW, 8.2 should be vastly more efficient than 8.1 for this sort of\nquery --- any chance of an upgrade?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 27 Aug 2007 11:55:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue " }, { "msg_contents": "Hi Guys,\n\nFollowing Tom Lane's advice I upgraded to 8.2, and that solved all my \nproblems. :D\n\nThank you so much for your input, I really appreciate it.\n\nKind regards\n\nWillo van der Merwe\n\n", "msg_date": "Tue, 28 Aug 2007 11:34:18 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance issue" } ]
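Had the upgrade not been an option, one workaround for this kind of generated SQL is to turn the giant literal IN list into a join. The sketch below is purely illustrative: table1, table2, wanted_ids and the id columns are stand-ins, not Gallery2's actual schema.

-- Load the id list once, then join against it instead of having the
-- planner chew through a 42000-element IN (...) expression.
CREATE TEMP TABLE wanted_ids (id integer PRIMARY KEY);
-- populate wanted_ids here with COPY or batched INSERTs
ANALYZE wanted_ids;

SELECT t1.*, t2.*
  FROM table1 t1
  JOIN table2 t2 ON t2.table1_id = t1.id
  JOIN wanted_ids w ON w.id = t1.id;

On 8.2 this rewrite is largely unnecessary, since large IN lists are planned far more efficiently there, which matches the outcome reported above.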
[ { "msg_contents": "Hi List;\n\nI've just inherited multiple postgres database servers in multiple data \ncenters across the US and Europe via a new contract I've just started.\n\nEach night during the nightly batch processing several of the servers (2 in \nparticular) slow to a crawl - they are dedicated postgres database servers. \nThere is a lot of database activity going on sometimes upwards of 200 \nconcurrent queries however I just dont think that the machines should be this \npegged. I am in the process of cleaning up dead space - their #1 fix for \nperformance issues in the past is to kill the current vacuum process. \nLikewise I've just bumped shared_buffers to 150000 and work_mem to 250000. \n\nEven at that I still see slow processing/high system loads at nite.I have \nnoticed that killing the current vacuum process (autovacuum is turned on) \nspeeds up the entire machine significantly.\n\nThe servers are 4-CPU intel boxes (not dual-core) with 4Gig of memory and \nattached to raid-10 array's\n\nAny thoughts on where to start?\n\nBelow are the current/relevant/changed postgresql.conf settings.\n\nThanks in advance...\n\n/Kevin\n\n\n\n\n============== postgresql.conf (partial listing)========================\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '$user,public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled, in milliseconds\n\n# - Locale and Formatting -\n\n#datestyle = 'iso, mdy'\n#timezone = unknown\t\t\t# actually, defaults to TZ \n\t\t\t\t\t# environment setting\n#australian_timezones = off\n#extra_float_digits = 0\t\t\t# min -15, max 2\n#client_encoding = sql_ascii\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'en_US.UTF-8'\t\t\t# locale for system error message \n\t\t\t\t\t# strings\nlc_monetary = 'en_US.UTF-8'\t\t\t# locale for monetary formatting\nlc_numeric = 'en_US.UTF-8'\t\t\t# locale for number formatting\nlc_time = 'en_US.UTF-8'\t\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1000\t\t# in milliseconds\n#max_locks_per_transaction = 64\t\t# min 10\n# note: each lock table slot uses ~220 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = off\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = 
off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom variable class names\n=============================================\n", "msg_date": "Mon, 27 Aug 2007 22:13:14 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "server performance issues - suggestions for tuning" }, { "msg_contents": "Kevin Kempter wrote:\n> Hi List;\n> \n> I've just inherited multiple postgres database servers in multiple data \n> centers across the US and Europe via a new contract I've just started.\n> \n> Each night during the nightly batch processing several of the servers (2 in \n> particular) slow to a crawl - they are dedicated postgres database servers. \n> There is a lot of database activity going on sometimes upwards of 200 \n> concurrent queries however I just dont think that the machines should be this \n> pegged. I am in the process of cleaning up dead space - their #1 fix for \n> performance issues in the past is to kill the current vacuum process. \n> Likewise I've just bumped shared_buffers to 150000 and work_mem to 250000. \n\nWell, allowing vacuum to do its job can clearly only help matters. I'm \nnot sure about setting work_mem so high though. That's the memory you're \nusing per-sort, so you can use multiples of that in a single query. With \n200 concurrent queries I'd worry about running into swap. If you're \ndoing it just for the batch processes that might make sense.\n\nYou might well want to set maintenance_work_mem quite high though, for \nany overnight maintenance.\n\nA shared_buffers of 1.2GB isn't outrageous, but again with 200 backend \nprocesses you'll want to consider how much memory each process will \nconsume. It could be that you're better off with a smaller \nshared_buffers and relying more on the OS doing its disk caching.\n\n> Even at that I still see slow processing/high system loads at nite.I have \n> noticed that killing the current vacuum process (autovacuum is turned on) \n> speeds up the entire machine significantly.\n\nIf it's disk i/o that's the limiting factor you might want to look at \nthe \"Cost-Based Vacuum Delay\" section in the configuration settings.\n\n> The servers are 4-CPU intel boxes (not dual-core) with 4Gig of memory and \n> attached to raid-10 array's\n> \n> Any thoughts on where to start?\n\nMake sure you are gathering stats and at least stats_block_level stuff. \nThen have a cron-job make copies of the stats tables, but adding a \ntimestamp column. That way you can run diffs against different time periods.\n\nPair this up with top/vmstat/iostat activity.\n\nUse log_min_duration_statement to catch any long-running queries so you \ncan see if you're getting bad plans that push activity up.\n\nTry and make only one change at a time, otherwise it's difficult to tell \nwhat's helping/hurting.\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Tue, 28 Aug 2007 08:50:54 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server performance issues - suggestions for tuning" }, { "msg_contents": ">>> On Mon, Aug 27, 2007 at 11:13 PM, in message\n<[email protected]>, Kevin Kempter\n<[email protected]> wrote: \n> Each night during the nightly batch processing several of the servers (2 in \n> particular) slow to a crawl - they are dedicated postgres database servers. 
\n> There is a lot of database activity going on sometimes upwards of 200 \n> concurrent queries\n \n> Any thoughts on where to start?\n \nIs there any way to queue up these queries and limit how many are running at\na time? I don't know what the experience of others is, but I've found that\nwhen I have more than two to four queries running per CPU, throughput starts\nto drop, and response time drops even faster.\n \nFor purposes of illustration, for a moment let's forget that a query may\nblock waiting for I/O and another query might be able to use the CPU in the\nmeantime. Then, think of it this way -- if you have one CPU and 100 queries\nto run, each of which will take one second, if you start them all and they\ntime slice, nobody gets anything for 100 seconds, so that is your average\nresponse time. If you run the one at a time, only one query takes that\nlong, the rest are faster, and you've cut your average response time in\nhalf. On top of that, there is overhead to switching between processes,\nand there can be contention for resources such as locks, which both have a\ntendency to further slow things down.\n \nIn the real world, there are multiple resources which can hold up a\nquery, so you get benefit from running more than one query at a time,\nbecause they will often be using different resources.\n \nBut unless that machine has 50 CPUs, you will probably get better throughput\nand response time by queuing the requests.\n \n-Kevin\n \n\n", "msg_date": "Tue, 28 Aug 2007 08:12:06 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server performance issues - suggestions for\n\ttuning" }, { "msg_contents": "On Tue, Aug 28, 2007 at 08:12:06AM -0500, Kevin Grittner wrote:\n> \n> Is there any way to queue up these queries and limit how many are running at\n> a time? \n\nSure: limit the number of connections to the database, and put a pool\nin front. It can indeed help.\n\nIf you have a lot of bloat due to large numbers of failed vacuums,\nhowever, I suspect your problem is I/O. Vacuum churns through the\ndisk very aggressively, and if you're close to your I/O limit, it can\npush you over the top.\n\nA\n\n-- \nAndrew Sullivan | [email protected]\n\"The year's penultimate month\" is not in truth a good way of saying\nNovember.\n\t\t--H.W. Fowler\n", "msg_date": "Tue, 28 Aug 2007 10:26:52 -0400", "msg_from": "Andrew Sullivan <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server performance issues - suggestions for tuning" }, { "msg_contents": "On 8/27/07, Kevin Kempter <[email protected]> wrote:\n> Hi List;\n>\n> I've just inherited multiple postgres database servers in multiple data\n> centers across the US and Europe via a new contract I've just started.\n\nWhat pg version are you working with, and on what OS / OS version?\n\n> Each night during the nightly batch processing several of the servers (2 in\n> particular) slow to a crawl - they are dedicated postgres database servers.\n> There is a lot of database activity going on sometimes upwards of 200\n> concurrent queries however I just dont think that the machines should be this\n> pegged. I am in the process of cleaning up dead space - their #1 fix for\n> performance issues in the past is to kill the current vacuum process.\n> Likewise I've just bumped shared_buffers to 150000 and work_mem to 250000.\n\nway too big for work_mem as mentioned before. Set it to something\nreasonable, like 8M or so. 
Then, if you've got one query that really\nneeds lots of memory to run well, you can set it higher for that\nconnection / query only. You can even set work_mem to a particular\nnumber for a particular user with alter user command.\n\nOh, and 200 concurrent queries is a LOT.\n\n> Even at that I still see slow processing/high system loads at nite.I have\n> noticed that killing the current vacuum process (autovacuum is turned on)\n> speeds up the entire machine significantly.\n\n> The servers are 4-CPU intel boxes (not dual-core) with 4Gig of memory and\n> attached to raid-10 array's\n\nIt sounds to me like your systems are I/O bound, at least when vacuum\nis running. If you want to get good performance and have vacuum run\nin a reasonable amount of time, you might need to upgrade your RAID\nsubsystems. Do you have battery backed caching controllers? Which\nexact model controller are you using? How many drives in your RAID10\narray? What types of queries are typical (OLAP versus OLTP really)?\n\n> Any thoughts on where to start?\n\nThe vacuum cost settings to reduce the impact vacuum has.\n\nIncreasing fsm settings as needed.\n\nVacuum verbose to see if you've blown out your fsm settings and to see\nwhat fsm settings you might need.\n\nreindexing particularly bloated tables / indexes.\n\nhardware upgrades if needed.\n", "msg_date": "Tue, 28 Aug 2007 09:32:01 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: server performance issues - suggestions for tuning" } ]
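The per-role work_mem and vacuum throttling mentioned in the replies above can be expressed like this. The numbers and the role name are only illustrative, and on 8.1 these memory settings are plain integers in kilobytes (unit suffixes such as '8MB' only arrived in later releases).

-- Keep the global work_mem modest in postgresql.conf (for example 8192,
-- i.e. 8MB), then raise it only for the role that runs the big batch sorts:
ALTER USER batch_user SET work_mem = 131072;   -- 128MB, expressed in KB

-- In the session (or cron job) that does the nightly maintenance:
SET maintenance_work_mem = 262144;             -- 256MB for vacuum and reindex
SET vacuum_cost_delay = 20;                    -- milliseconds; throttles vacuum I/O
VACUUM VERBOSE ANALYZE;                        -- check the trailing FSM summary
                                               -- against max_fsm_pages

With 200 concurrent backends, per-sort memory multiplies quickly, which is why the global value is better kept small and raised only where it is known to pay off.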
[ { "msg_contents": "Hello,\n\n\nI have a table (stats.tickets) with 2288965 rows (51 columns) and\nindexes like:\nind_ti_stats_numero btree (tday, tmonth, tyear, r_cat, r_numero)\nind_ti_stats_service btree (tday, tmonth, tyear, r_cat, r_service)\nind_ti_stats_tmp_service btree (r_service, tyear, tmonth)\nind_ti_stats_tmp_service2 btree (r_service, tyear, tmonth, r_cat)\n\n\nNow if i do :\n1°)# explain analyze SELECT tday AS n, '' AS class, a.r_cat AS cat,\nCOUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1\nEND) AS p, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END)\nAS np, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree\nEND) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE\nt_duree1 END),0) AS tmc FROM stats.tickets AS a WHERE\na.r_numero='99084040' AND tyear = 2007 AND tmonth = 8 GROUP BY tyear,\ntmonth, tday, a.r_cat;\n\nQUERY\nPLAN \n---------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=45412.96..45412.99 rows=1 width=34) (actual\ntime=649.944..650.178 rows=50 loops=1)\n -> Index Scan using ind_ti_stats_numero on tickets a\n(cost=0.00..45385.46 rows=1222 width=34) (actual time=15.697..642.570\nrows=1043 loops=1)\n Index Cond: ((tmonth = 8) AND (tyear = 2007) AND\n((r_numero)::text = '99084040'::text))\nTotal runtime: 650.342 ms\n(4 lignes)\n\nTemps : 652,234 ms\n\n\n\n2°)\n# explain analyze SELECT tday AS n, '' AS class, a.r_cat AS cat,\nCOUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1\nEND) AS p, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END)\nAS np, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree\nEND) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE\nt_duree1 END),0) AS tmc FROM stats.tickets AS a WHERE a.r_service=95\nAND tyear = 2007 AND tmonth = 8 GROUP BY tyear, tmonth, tday, a.r_cat;\n\nQUERY\nPLAN \n--------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate (cost=193969.97..193970.88 rows=26 width=34) (actual\ntime=20834.559..20834.694 rows=27 loops=1)\n -> Bitmap Heap Scan on tickets a (cost=3714.84..186913.32\nrows=313629 width=34) (actual time=889.880..19028.315 rows=321395\nloops=1)\n Recheck Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth\n= 8))\n -> Bitmap Index Scan on ind_ti_stats_tmp_service\n(cost=0.00..3714.84 rows=313629 width=0) (actual time=836.181..836.181\nrows=321395 loops=1)\n Index Cond: ((r_service = 95) AND (tyear = 2007) AND\n(tmonth = 8))\nTotal runtime: 20835.191 ms\n(6 lignes)\n\nTemps : 20838,798 ms\n\n\n\\d stats.tickets\n[...]\nr_numero | character varying(17) | not null\nr_service | integer | not null default 0\n[...]\nstats.tickets has 173351 relpages , 2.30996e+06 reltuples.\n\n\nWhy in the first case, pgsql uses the \"better\" index and if i search\nr_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\nThere ara too much rows in this table ?\n\n\n\n\nPS: sorry for my english, i'm french.\n\n-- \nPaul.\n\n\n\n\n\n\n\nHello,\n\n\nI have a table (stats.tickets) with  2288965 rows (51 columns) and indexes like:\nind_ti_stats_numero btree (tday, tmonth, tyear, r_cat, r_numero)\nind_ti_stats_service btree (tday, tmonth, tyear, r_cat, r_service)\nind_ti_stats_tmp_service btree (r_service, tyear, tmonth)\nind_ti_stats_tmp_service2 btree (r_service, tyear, tmonth, 
r_cat)\n\n\nNow if i do :\n1°)# explain analyze SELECT  tday AS n,  '' AS class, a.r_cat AS cat, COUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1 END) AS p,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END) AS np,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree END) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE t_duree1 END),0) AS tmc FROM stats.tickets AS a  WHERE a.r_numero='99084040'  AND tyear = 2007 AND tmonth = 8  GROUP BY  tyear, tmonth, tday, a.r_cat;\n                                                                    QUERY PLAN                                                                     \n---------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate  (cost=45412.96..45412.99 rows=1 width=34) (actual time=649.944..650.178 rows=50 loops=1)\n   ->  Index Scan using ind_ti_stats_numero on tickets a  (cost=0.00..45385.46 rows=1222 width=34) (actual time=15.697..642.570 rows=1043 loops=1)\n         Index Cond: ((tmonth = 8) AND (tyear = 2007) AND ((r_numero)::text = '99084040'::text))\nTotal runtime: 650.342 ms\n(4 lignes)\n\nTemps : 652,234 ms\n\n\n\n2°)\n# explain analyze SELECT  tday AS n,  '' AS class, a.r_cat AS cat, COUNT(*) AS cnx, SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE 1 END) AS p,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 1 ELSE 0 END) AS np,  SUM(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE r_duree END) AS tps, ROUND(AVG(CASE WHEN t_duree1 < r_palier_min_con THEN 0 ELSE t_duree1 END),0) AS tmc FROM stats.tickets AS a  WHERE a.r_service=95  AND tyear = 2007 AND tmonth = 8  GROUP BY  tyear, tmonth, tday, a.r_cat;\n                                                                       QUERY PLAN                                                                       \n--------------------------------------------------------------------------------------------------------------------------------------------------------\nHashAggregate  (cost=193969.97..193970.88 rows=26 width=34) (actual time=20834.559..20834.694 rows=27 loops=1)\n   ->  Bitmap Heap Scan on tickets a  (cost=3714.84..186913.32 rows=313629 width=34) (actual time=889.880..19028.315 rows=321395 loops=1)\n         Recheck Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth = 8))\n         ->  Bitmap Index Scan on ind_ti_stats_tmp_service  (cost=0.00..3714.84 rows=313629 width=0) (actual time=836.181..836.181 rows=321395 loops=1)\n               Index Cond: ((r_service = 95) AND (tyear = 2007) AND (tmonth = 8))\nTotal runtime: 20835.191 ms\n(6 lignes)\n\nTemps : 20838,798 ms\n\n\n\\d stats.tickets\n[...]\nr_numero            | character varying(17)       | not null\nr_service           | integer                     | not null default 0\n[...]\nstats.tickets has 173351 relpages , 2.30996e+06 reltuples.\n\n\nWhy in the first case, pgsql uses the \"better\" index and if i search r_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\nThere ara too much rows in this table ?\n\n\n\n\nPS: sorry for my english, i'm french.\n\n-- \nPaul.", "msg_date": "Tue, 28 Aug 2007 10:49:09 +0200", "msg_from": "Paul <[email protected]>", "msg_from_op": true, "msg_subject": "index & Bitmap Heap Scan" }, { "msg_contents": "Paul <[email protected]> writes:\n> Why in the first case, pgsql uses the \"better\" index and if i search\n> r_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\n\nGiven the 
difference in the number of rows to be fetched, both plan\nchoices look pretty reasonable to me. If you want to experiment,\nyou can try forcing the other choice in each case (use enable_indexscan\nand enable_bitmapscan) and see how fast it is, but I suspect the planner\ngot it right.\n\nBeware of cache effects when trying two plans in quick succession ---\nthe second one might go faster just because all the data is already\nswapped in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Aug 2007 12:55:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: index & Bitmap Heap Scan " }, { "msg_contents": "Thank you for your answer.\nNow I have to find how to reduce the size of the table.\n\nPaul.\n\nOn Tuesday 28 August 2007 at 12:55 -0400, Tom Lane wrote:\n> Paul <[email protected]> writes:\n> > Why in the first case, pgsql uses the \"better\" index and if i search\n> > r_service instead of r_numero pgsql does a \"Bitmap Heap scan\" first ?\n> \n> Given the difference in the number of rows to be fetched, both plan\n> choices look pretty reasonable to me. If you want to experiment,\n> you can try forcing the other choice in each case (use enable_indexscan\n> and enable_bitmapscan) and see how fast it is, but I suspect the planner\n> got it right.\n> \n> Beware of cache effects when trying two plans in quick succession ---\n> the second one might go faster just because all the data is already\n> swapped in.\n> \n> \t\t\tregards, tom lane\n> \n\n", "msg_date": "Wed, 29 Aug 2007 10:16:45 +0200", "msg_from": "Paul <[email protected]>", "msg_from_op": true, "msg_subject": "Re: index & Bitmap Heap Scan" } ]
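The experiment Tom suggests can be run as below; the query is abbreviated here, and because of the cache effects he mentions, the second timing of each variant is the one worth trusting.

-- Plan chosen by default for the slow r_service case:
EXPLAIN ANALYZE
SELECT tday, r_cat, count(*)
  FROM stats.tickets
 WHERE r_service = 95 AND tyear = 2007 AND tmonth = 8
 GROUP BY tday, r_cat;

-- Force the alternative (a plain index scan, or a seqscan if the planner
-- prefers that once bitmap scans are disabled):
SET enable_bitmapscan = off;
EXPLAIN ANALYZE
SELECT tday, r_cat, count(*)
  FROM stats.tickets
 WHERE r_service = 95 AND tyear = 2007 AND tmonth = 8
 GROUP BY tday, r_cat;
RESET enable_bitmapscan;

Since roughly 320,000 of the table's 2.3 million rows match the r_service predicate, fetching them in heap order through the bitmap scan is normally the cheaper strategy, which is why the planner's choice is probably already right.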
[ { "msg_contents": "Hi,\n\nI have just reorganized a relatively decent sized query such that its\nconstituent functions / tables are now spread over 3-4 schemas.\n\nHowever, the query has for some reason now become very slow (earlier used to\ntake about 20 seconds, now takes about 500 seconds). The explain analyse\n(given below) doesn't help either.\n\n(Of what I did try, reducing the number of functions made the query faster,\nwhich frankly doesn't help me at all. Sadly removing the functions\none-by-one led me to two of them which were taking a lot of time (the 3rd\nlast and the 4th last) but their reason is to me still unknown. Besides,\neven after removing these two fields the query is still painfully slow as\ncompared to its previous performance).\n\nAll functions are STABLE (but that shouldnt matter because this analyse was\nspecifically done for 1 row).\nMost functions are in the 'processing' schema and most tables are in the\nfundsys1 schema.\nAlmost all the required fields are indexed (It was working fast enough\nearlier, so I dont think that should be an issue).\nDid a VACUUM ANALYSE before running this query.\nThe NULL with COALESCE is just a temporary hack to replace a variable with\nNULL to run this query for a small set.\n\nCould someone confirm as to whether a query across multiple schemas is known\nto have any kind of a degraded performance ?\nAny other ideas ?\n\n======================================\n\"Nested Loop (cost=206.15..246.63 rows=37 width=16) (actual time=\n362.139..296937.587 rows=841 loops=1)\"\n\" -> Merge Join (cost=206.15..206.33 rows=1 width=12) (actual time=\n12.817..12.832 rows=1 loops=1)\"\n\" Merge Cond: (main.scheme_code = jn_set_schemecode.scheme_code)\"\n\" -> Sort (cost=201.24..201.31 rows=27 width=12) (actual time=\n12.672..12.683 rows=8 loops=1)\"\n\" Sort Key: main.variant_scheme_code\"\n\" -> Seq Scan on main (cost=0.00..200.60 rows=27 width=12)\n(actual time=0.029..6.728 rows=2593 loops=1)\"\n\" Filter: (variant_scheme_code = scheme_code)\"\n\" -> Sort (cost=4.91..4.93 rows=9 width=4) (actual time=\n0.107..0.110 rows=1 loops=1)\"\n\" Sort Key: jn_set_schemecode.scheme_code\"\n\" -> Seq Scan on jn_set_schemecode (cost=0.00..4.76 rows=9\nwidth=4) (actual time=0.074..0.076 rows=1 loops=1)\"\n\" Filter: (set_id = 10)\"\n\" -> Seq Scan on \"month\" (cost=0.00..25.41 rows=841 width=4) (actual\ntime=0.033..3.049 rows=841 loops=1)\"\n\"Total runtime: 296939.886 ms\"\n\n======================================\nSELECT\n main.scheme_code,\n (\n (processing.fund_month_end_mean(main.scheme_code,\n'2005-1-1'::date, '2007-6-30'::date)*12) -\n (processing.risk_free_index_month_end_mean('2005-1-1'::date,\n'2007-6-30'::date) * 12)\n )/(processing.fund_month_end_stddev_pop(main.scheme_code,\n'2005-1-1'::date, '2007-6-30'::date)*sqrt(12)),\n\n processing.fund_month_end_stddev_pop(main.scheme_code,\n'2005-1-1'::date, '2007-6-30'::date) ,\n\n (\n (processing.covariance_fund_index_monthly(\nmain.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n'2007-6-30'::date)*12)/\n (processing.fund_month_end_stddev_pop(\nmain.scheme_code, '2005-1-1'::date, '2007-6-30'::date)*sqrt(12) *\n processing.index_month_end_stddev_pop(COALESCE(NULL,\nstated_index), '2005-1-1'::date, '2007-6-30'::date)*sqrt(12))\n ),\n\n processing.information_ratio_monthly(main.scheme_code,\nCOALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n (\n (processing.fund_month_end_mean(main.scheme_code,\n'2005-1-1'::date, '2007-6-30'::date)*12) -\n 
((processing.risk_free_index_month_end_mean('2005-1-1'::date,\n'2007-6-30'::date) * 12) +\n ((\n (processing.covariance_fund_index_monthly(\nmain.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n'2007-6-30'::date)*12) /\n (processing.index_month_end_variance(COALESCE(NULL,\nstated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)\n )*\n (\n (processing.index_month_end_mean(COALESCE(NULL,\nstated_index), '2005-1-1'::date, '2007-6-30'::date)*12) -\n\n(processing.risk_free_index_month_end_mean('2005-1-1'::date,\n'2007-6-30'::date) * 12)\n )\n )\n ),\n (\n (processing.covariance_fund_index_monthly(\nmain.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n'2007-6-30'::date)*12) /\n (processing.index_month_end_variance(COALESCE(NULL,\nstated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)\n ),\n processing.upside_capture_ratio_monthly(main.scheme_code,\nCOALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n processing.downside_capture_ratio_monthly(main.scheme_code,\nCOALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n processing.fund_return(main.scheme_code, '2007-6-30'::date, '1\nyear', true) ,\n processing.fund_return(main.scheme_code, '2007-6-30'::date, '2\nyears', true) ,\n processing.fund_return(main.scheme_code, '2007-6-30'::date, '3\nyears', true) ,\n processing.fund_return(main.scheme_code, '2007-6-30'::date, '5\nyears', true) ,\n processing.rolling_return(main.scheme_code, '2007-6-30'::date,\n'1 year', '1 month', '1 day') ,\n processing.calendar_year_return(main.scheme_code, (extract(year\nfrom now()))::integer) ,\n processing.calendar_year_return(main.scheme_code, (extract(year\nfrom now()) - 1)::integer),\n processing.calendar_year_return(main.scheme_code, (extract(year\nfrom now()) - 2)::integer),\n processing.days_to_liquidate(main.scheme_code,\n'2007-6-30'::date) as days_to_liquidate,\n processing.deviation_from_index(main.scheme_code, COALESCE(NULL,\nstated_index), '2007-6-30'::date) ,\n (SELECT index_full_name FROM fundsys1.fs_indices INNER JOIN\nfundsys1.main ON main.stated_index = index_code where main.scheme_code =\njn_set_schemecode.scheme_code),\n (SELECT stated_index FROM fundsys1.main where main.scheme_code =\njn_set_schemecode.scheme_code),\n processing.number_of_companies_in_index(jn_set_schemecode.scheme_code,\nlookup_tables.month.month_end_date),\n processing.percentage_of_assets_in_stocks_as_in_benchmark(jn_set_schemecode.scheme_code,\nlookup_tables.month.month_end_date)\n FROM lookup_tables.month, fundsys1.main\n INNER JOIN output.jn_set_schemecode ON\njn_set_schemecode.scheme_code = main.scheme_code\n WHERE jn_set_schemecode.set_id=10\n AND main.variant_scheme_code = main.scheme_code\n ORDER BY main.scheme_code\n======================================\n\nThanks\nRobins Tharakan\n", "msg_date": "Tue, 28 Aug 2007 18:35:24 +0530", "msg_from": "Robins <[email protected]>", "msg_from_op": true, "msg_subject": "Performance across multiple schemas" }, { "msg_contents": "Oops!\nGuess I shot myself in the foot there.\n\nIt seems to be an SQL issue and not really a PG problem... 
Sorry for\nbothering you all.\n\nHowever, now that we are here, could anyone tell if you would advise for\nmultiple schemas (in PG) while designing the database structure ?\n\nThanks\nRobins Tharakan\n\n\nOn 8/28/07, Robins <[email protected]> wrote:\n>\n> Hi,\n>\n> I have just reorganized a relatively decent sized query such that its\n> constituent functions / tables are now spread over 3-4 schemas.\n>\n> However, the query has for some reason now become very slow (earlier used\n> to take about 20 seconds, now takes about 500 seconds). The explain analyse\n> (given below) doesn't help either.\n>\n> (Of what I did try, reducing the number of functions made the query\n> faster, which frankly doesn't help me at all. Sadly removing the functions\n> one-by-one led me to two of them which were taking a lot of time (the 3rd\n> last and the 4th last) but their reason is to me still unknown. Besides,\n> even after removing these two fields the query is still painfully slow as\n> compared to its previous performance).\n>\n> All functions are STABLE (but that shouldnt matter because this analyse\n> was specifically done for 1 row).\n> Most functions are in the 'processing' schema and most tables are in the\n> fundsys1 schema.\n> Almost all the required fields are indexed (It was working fast enough\n> earlier, so I dont think that should be an issue).\n> Did a VACUUM ANALYSE before running this query.\n> The NULL with COALESCE is just a temporary hack to replace a variable with\n> NULL to run this query for a small set.\n>\n> Could someone confirm as to whether a query across multiple schemas is\n> known to have any kind of a degraded performance ?\n> Any other ideas ?\n>\n> ======================================\n> \"Nested Loop (cost=206.15..246.63 rows=37 width=16) (actual time=\n> 362.139..296937.587 rows=841 loops=1)\"\n> \" -> Merge Join (cost= 206.15..206.33 rows=1 width=12) (actual time=\n> 12.817..12.832 rows=1 loops=1)\"\n> \" Merge Cond: (main.scheme_code = jn_set_schemecode.scheme_code)\"\n> \" -> Sort (cost=201.24..201.31 rows=27 width=12) (actual time=\n> 12.672..12.683 rows=8 loops=1)\"\n> \" Sort Key: main.variant_scheme_code\"\n> \" -> Seq Scan on main (cost=0.00..200.60 rows=27 width=12)\n> (actual time=0.029..6.728 rows=2593 loops=1)\"\n> \" Filter: (variant_scheme_code = scheme_code)\"\n> \" -> Sort (cost=4.91..4.93 rows=9 width=4) (actual time=\n> 0.107..0.110 rows=1 loops=1)\"\n> \" Sort Key: jn_set_schemecode.scheme_code\"\n> \" -> Seq Scan on jn_set_schemecode (cost=0.00..4.76 rows=9\n> width=4) (actual time=0.074..0.076 rows=1 loops=1)\"\n> \" Filter: (set_id = 10)\"\n> \" -> Seq Scan on \"month\" (cost= 0.00..25.41 rows=841 width=4) (actual\n> time=0.033..3.049 rows=841 loops=1)\"\n> \"Total runtime: 296939.886 ms\"\n>\n> ======================================\n> SELECT\n> main.scheme_code,\n> (\n> (processing.fund_month_end_mean(main.scheme_code,\n> '2005-1-1'::date, '2007-6-30'::date)*12) -\n> (processing.risk_free_index_month_end_mean('2005-1-1'::date,\n> '2007-6-30'::date) * 12)\n> )/(processing.fund_month_end_stddev_pop(main.scheme_code,\n> '2005-1-1'::date, '2007-6-30'::date)*sqrt(12)),\n>\n> processing.fund_month_end_stddev_pop(main.scheme_code ,\n> '2005-1-1'::date, '2007-6-30'::date) ,\n>\n> (\n> (processing.covariance_fund_index_monthly(\n> main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n> '2007-6-30'::date)*12)/\n> (processing.fund_month_end_stddev_pop(\n> main.scheme_code, '2005-1-1'::date, '2007-6-30'::date)*sqrt(12) *\n> 
processing.index_month_end_stddev_pop(COALESCE(NULL,\n> stated_index), '2005-1-1'::date, '2007-6-30'::date)*sqrt(12))\n> ),\n>\n> processing.information_ratio_monthly(main.scheme_code,\n> COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n> (\n> (processing.fund_month_end_mean (main.scheme_code,\n> '2005-1-1'::date, '2007-6-30'::date)*12) -\n> ((processing.risk_free_index_month_end_mean('2005-1-1'::date,\n> '2007-6-30'::date) * 12) +\n> ((\n> (processing.covariance_fund_index_monthly(\n> main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n> '2007-6-30'::date)*12) /\n> (processing.index_month_end_variance(COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)\n> )*\n> (\n> (processing.index_month_end_mean(COALESCE(NULL,\n> stated_index), '2005-1-1'::date, '2007-6-30'::date)*12) -\n> (processing.risk_free_index_month_end_mean('2005-1-1'::date,\n> '2007-6-30'::date) * 12)\n> )\n> )\n> ),\n> (\n> (processing.covariance_fund_index_monthly(\n> main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date,\n> '2007-6-30'::date)*12) /\n> (processing.index_month_end_variance (COALESCE(NULL,\n> stated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)\n> ),\n> processing.upside_capture_ratio_monthly(main.scheme_code,\n> COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n> processing.downside_capture_ratio_monthly(main.scheme_code,\n> COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n> processing.fund_return(main.scheme_code, '2007-6-30'::date, '1\n> year', true) ,\n> processing.fund_return(main.scheme_code, '2007-6-30'::date, '2\n> years', true) ,\n> processing.fund_return(main.scheme_code, '2007-6-30'::date, '3\n> years', true) ,\n> processing.fund_return(main.scheme_code, '2007-6-30'::date, '5\n> years', true) ,\n> processing.rolling_return(main.scheme_code, '2007-6-30'::date,\n> '1 year', '1 month', '1 day') ,\n> processing.calendar_year_return(main.scheme_code,\n> (extract(year from now()))::integer) ,\n> processing.calendar_year_return(main.scheme_code,\n> (extract(year from now()) - 1)::integer),\n> processing.calendar_year_return(main.scheme_code,\n> (extract(year from now()) - 2)::integer),\n> processing.days_to_liquidate(main.scheme_code,\n> '2007-6-30'::date) as days_to_liquidate,\n> processing.deviation_from_index (main.scheme_code,\n> COALESCE(NULL, stated_index), '2007-6-30'::date) ,\n> (SELECT index_full_name FROM fundsys1.fs_indices INNER JOIN\n> fundsys1.main ON main.stated_index = index_code where main.scheme_code =\n> jn_set_schemecode.scheme_code),\n> (SELECT stated_index FROM fundsys1.main where main.scheme_code= jn_set_schemecode.scheme_code),\n> processing.number_of_companies_in_index(jn_set_schemecode.scheme_code,\n> lookup_tables.month.month_end_date),\n> processing.percentage_of_assets_in_stocks_as_in_benchmark(jn_set_schemecode.scheme_code,\n> lookup_tables.month.month_end_date)\n> FROM lookup_tables.month, fundsys1.main\n> INNER JOIN output.jn_set_schemecode ON\n> jn_set_schemecode.scheme_code = main.scheme_code\n> WHERE jn_set_schemecode.set_id=10\n> AND main.variant_scheme_code = main.scheme_code\n> ORDER BY main.scheme_code\n> ======================================\n>\n> Thanks\n> Robins Tharakan\n\n\n\n\n-- \nRobins\n\nOops!Guess I shot myself in the foot there.It seems to be an SQL issue and not really a PG problem... 
Sorry for bothering you all.However, now that we are here, could anyone tell if you would advise for multiple schemas (in PG) while designing the database structure ?\nThanksRobins TharakanOn 8/28/07, Robins <[email protected]\n> wrote:Hi,I have just reorganized a relatively decent sized query such that its constituent functions / tables are now spread over 3-4 schemas.\nHowever, the query has for some reason now become very slow (earlier used to take about 20 seconds, now takes about 500 seconds). The explain analyse (given below) doesn't help either.\n(Of what I did try, reducing the number of functions made the query faster, which frankly doesn't help me at all. Sadly removing the functions one-by-one led me to two of them which were taking a lot of time (the 3rd last and the 4th last) but their reason is to me still unknown. Besides, even after removing these two fields the query is still painfully slow as compared to its previous performance).\nAll functions are STABLE (but that shouldnt matter because this analyse was specifically done for 1 row).Most functions are in the 'processing' schema and most tables are in the fundsys1 schema.Almost all the required fields are indexed (It was working fast enough earlier, so I dont think that should be an issue).\nDid a VACUUM ANALYSE before running this query.The NULL with COALESCE is just a temporary hack to replace a variable with NULL to run this query for a small set.Could someone confirm as to whether a query across multiple schemas is known to have any kind of a degraded performance ?\nAny other ideas ?======================================\"Nested Loop  (cost=206.15..246.63 rows=37 width=16) (actual time=362.139..296937.587 rows=841 loops=1)\"\"  ->  Merge Join  (cost=\n206.15..206.33 rows=1 width=12) (actual time=12.817..12.832 rows=1 loops=1)\"\"        Merge Cond: (main.scheme_code = jn_set_schemecode.scheme_code)\"\"        ->  Sort  (cost=201.24..201.31 rows=27 width=12) (actual time=\n12.672..12.683 rows=8 loops=1)\"\"              Sort Key: main.variant_scheme_code\"\"              ->  Seq Scan on main  (cost=0.00..200.60 rows=27 width=12) (actual time=0.029..6.728 rows=2593 loops=1)\"\n\"                    Filter: (variant_scheme_code = scheme_code)\"\"        ->  Sort  (cost=4.91..4.93 rows=9 width=4) (actual time=0.107..0.110 rows=1 loops=1)\"\"              Sort Key: jn_set_schemecode.scheme_code\"\n\"              ->  Seq Scan on jn_set_schemecode  (cost=0.00..4.76 rows=9 width=4) (actual time=0.074..0.076 rows=1 loops=1)\"\"                    Filter: (set_id = 10)\"\"  ->  Seq Scan on \"month\"  (cost=\n0.00..25.41 rows=841 width=4) (actual time=0.033..3.049 rows=841 loops=1)\"\"Total runtime: 296939.886 ms\"======================================SELECT        main.scheme_code,            (\n            (processing.fund_month_end_mean(main.scheme_code, '2005-1-1'::date, '2007-6-30'::date)*12) -            (processing.risk_free_index_month_end_mean('2005-1-1'::date, '2007-6-30'::date) * 12)\n            )/(processing.fund_month_end_stddev_pop(main.scheme_code, '2005-1-1'::date, '2007-6-30'::date)*sqrt(12)),                       processing.fund_month_end_stddev_pop(main.scheme_code\n\n, '2005-1-1'::date, '2007-6-30'::date) ,                               (                    (processing.covariance_fund_index_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)*12)/\n                        (processing.fund_month_end_stddev_pop(main.scheme_code, '2005-1-1'::date, 
'2007-6-30'::date)*sqrt(12) *                        processing.index_month_end_stddev_pop(COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)*sqrt(12))\n                ),            processing.information_ratio_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,            (                (processing.fund_month_end_mean\n\n(main.scheme_code, '2005-1-1'::date, '2007-6-30'::date)*12) -                ((processing.risk_free_index_month_end_mean('2005-1-1'::date, '2007-6-30'::date) * 12) +                    ((\n                        (processing.covariance_fund_index_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)*12) /                        (processing.index_month_end_variance\n\n(COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)                    )*                    (                        (processing.index_month_end_mean(COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)*12) -\n                        (processing.risk_free_index_month_end_mean('2005-1-1'::date, '2007-6-30'::date) * 12)                    )                    )                ),                (\n                    (processing.covariance_fund_index_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)*12) /                    (processing.index_month_end_variance\n\n(COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date)* 12)                ),            processing.upside_capture_ratio_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,\n            processing.downside_capture_ratio_monthly(main.scheme_code, COALESCE(NULL, stated_index), '2005-1-1'::date, '2007-6-30'::date) ,            processing.fund_return(main.scheme_code, '2007-6-30'::date, '1 year', true) ,\n            processing.fund_return(main.scheme_code, '2007-6-30'::date, '2 years', true) ,            processing.fund_return(main.scheme_code, '2007-6-30'::date, '3 years', true) ,\n\n            processing.fund_return(main.scheme_code, '2007-6-30'::date, '5 years', true) ,            processing.rolling_return(main.scheme_code, '2007-6-30'::date, '1 year', '1 month', '1 day') ,\n            processing.calendar_year_return(main.scheme_code, (extract(year from now()))::integer) ,            processing.calendar_year_return(main.scheme_code, (extract(year from now()) - 1)::integer),            \nprocessing.calendar_year_return(main.scheme_code, (extract(year from now()) - 2)::integer),            processing.days_to_liquidate(main.scheme_code, '2007-6-30'::date) as days_to_liquidate,            processing.deviation_from_index\n\n(main.scheme_code, COALESCE(NULL, stated_index), '2007-6-30'::date) ,            (SELECT index_full_name FROM fundsys1.fs_indices INNER JOIN fundsys1.main ON main.stated_index = index_code where main.scheme_code\n\n = jn_set_schemecode.scheme_code),            (SELECT stated_index FROM fundsys1.main where main.scheme_code = jn_set_schemecode.scheme_code),            processing.number_of_companies_in_index(jn_set_schemecode.scheme_code, lookup_tables.month.month_end_date),\n            processing.percentage_of_assets_in_stocks_as_in_benchmark(jn_set_schemecode.scheme_code, lookup_tables.month.month_end_date)        FROM lookup_tables.month, fundsys1.main            INNER JOIN output.jn_set_schemecode\n\n ON jn_set_schemecode.scheme_code = main.scheme_code        WHERE jn_set_schemecode.set_id=10            
AND main.variant_scheme_code = main.scheme_code        ORDER BY main.scheme_code======================================\nThanksRobins Tharakan\n-- Robins", "msg_date": "Tue, 28 Aug 2007 19:14:06 +0530", "msg_from": "Robins <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance across multiple schemas" }, { "msg_contents": "Robins <[email protected]> writes:\n> Could someone confirm as to whether a query across multiple schemas is known\n> to have any kind of a degraded performance ?\n\nSchemas are utterly, utterly irrelevant to performance.\n\nI'm guessing you missed analyzing one of the tables, or forgot an index,\nor something like that. Also, if you did anything \"cute\" like use the\nsame table name in more than one schema, you need to check the\npossibility that some query is selecting the wrong one of the tables.\n\nThe explain output you showed is no help because the expense is\nevidently down inside one of the functions in the SELECT output list.\n\nOne thing you should probably try before getting too frantic is\nre-ANALYZEing all the tables and then starting a fresh session to\nclear any cached plans inside the functions. If it's still slow\nthen it'd be worth digging deeper.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Aug 2007 09:49:26 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance across multiple schemas " }, { "msg_contents": "Thanks Tom,\n\nExactly what I did, when I realised that there was an extra Table in the\nFROM with no conditions set.\n\nWell anyway, this did clear my doubts about whether schema affects\nperformance at all.\n\nRobins\n\nOn 8/28/07, Tom Lane <[email protected]> wrote:\n>\n>\n> Schemas are utterly, utterly irrelevant to performance.\n>\n> I'm guessing you missed analyzing one of the tables, or forgot an index,\n> or something like that. Also, if you did anything \"cute\" like use the\n> same table name in more than one schema, you need to check the\n> possibility that some query is selecting the wrong one of the tables.\n>\n> The explain output you showed is no help because the expense is\n> evidently down inside one of the functions in the SELECT output list.\n>\n> One thing you should probably try before getting too frantic is\n> re-ANALYZEing all the tables and then starting a fresh session to\n> clear any cached plans inside the functions. If it's still slow\n> then it'd be worth digging deeper.\n>\n> regards, tom lane\n\nThanks Tom,Exactly what I did, when I realised that there was an extra Table in the FROM with no conditions set.Well anyway, this did clear my doubts about whether schema affects performance at all.\nRobinsOn 8/28/07, Tom Lane <[email protected]> wrote:\nSchemas are utterly, utterly irrelevant to performance.I'm guessing you missed analyzing one of the tables, or forgot an index,or something like that.  Also, if you did anything \"cute\" like use the\nsame table name in more than one schema, you need to check thepossibility that some query is selecting the wrong one of the tables.The explain output you showed is no help because the expense isevidently down inside one of the functions in the SELECT output list.\nOne thing you should probably try before getting too frantic isre-ANALYZEing all the tables and then starting a fresh session toclear any cached plans inside the functions.  
If it's still slowthen it'd be worth digging deeper.\n                        regards, tom lane", "msg_date": "Wed, 29 Aug 2007 07:20:51 +0530", "msg_from": "\"Robins Tharakan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance across multiple schemas" }, { "msg_contents": "Thanks Tom,\n\nExactly what I did, when I realised that there was an extra Table in the\nFROM with no conditions set.\n\nWell anyway, this did clear my doubts about whether schema affects\nperformance at all.\n\nRobins\n\nOn 8/29/07, Robins Tharakan <[email protected]> wrote:\n>\n> Thanks Tom,\n>\n> Exactly what I did, when I realised that there was an extra Table in the\n> FROM with no conditions set.\n>\n> Well anyway, this did clear my doubts about whether schema affects\n> performance at all.\n>\n> Robins\n>\n> On 8/28/07, Tom Lane <[email protected]> wrote:\n> >\n> >\n> > Schemas are utterly, utterly irrelevant to performance.\n> >\n> > I'm guessing you missed analyzing one of the tables, or forgot an index,\n> > or something like that. Also, if you did anything \"cute\" like use the\n> > same table name in more than one schema, you need to check the\n> > possibility that some query is selecting the wrong one of the tables.\n> >\n> > The explain output you showed is no help because the expense is\n> > evidently down inside one of the functions in the SELECT output list.\n> >\n> > One thing you should probably try before getting too frantic is\n> > re-ANALYZEing all the tables and then starting a fresh session to\n> > clear any cached plans inside the functions. If it's still slow\n> > then it'd be worth digging deeper.\n> >\n> > regards, tom lane\n>\n>\n\n\n-- \nRobins\n\nThanks Tom,Exactly what I did, when I realised that there was an extra Table in the FROM with no conditions set.Well anyway, this did clear my doubts about whether schema affects performance at all.\nRobinsOn 8/29/07, Robins Tharakan <[email protected]> wrote:\nThanks Tom,Exactly what I did, when I realised that there was an extra Table in the FROM with no conditions set.\nWell anyway, this did clear my doubts about whether schema affects performance at all.\nRobinsOn 8/28/07, Tom Lane <\[email protected]> wrote:\nSchemas are utterly, utterly irrelevant to performance.I'm guessing you missed analyzing one of the tables, or forgot an index,or something like that.  Also, if you did anything \"cute\" like use the\nsame table name in more than one schema, you need to check thepossibility that some query is selecting the wrong one of the tables.The explain output you showed is no help because the expense isevidently down inside one of the functions in the SELECT output list.\nOne thing you should probably try before getting too frantic isre-ANALYZEing all the tables and then starting a fresh session toclear any cached plans inside the functions.  If it's still slowthen it'd be worth digging deeper.\n                        regards, tom lane\n-- Robins", "msg_date": "Wed, 29 Aug 2007 08:15:09 +0530", "msg_from": "Robins <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance across multiple schemas" } ]
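A minimal sketch of the checks suggested in the thread above, using the schema-qualified table names that appear in the poster's query; treat it as illustrative rather than a verified recipe for this particular database:

-- Refresh planner statistics on the tables involved
ANALYZE fundsys1.main;
ANALYZE output.jn_set_schemecode;
ANALYZE lookup_tables."month";

-- If the same table name exists in more than one schema, confirm which one
-- an unqualified reference resolves to under the current search_path
SHOW search_path;
SELECT 'main'::regclass;

Starting a fresh session afterwards discards any plans cached inside the functions, which is the other step recommended above.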
[ { "msg_contents": "Hi,\n\n We have recently upgraded our production database from 8.0.12 to\n8.2.4, We have seen lot of improvements on 8.2.4 side but we are also\nseeing some queries which are slow.\n\n Particularly this below query is really bad in 8.2.4 , I can get\nonly the explain on this as explain analyze never finishes even after 20\nmin.\n 8.2.4 plan uses this index which is pretty much doing a full index\nscan on 52mill records and I think that is taking lot of time to\nexecute. Where as 8.0.12 doesnt use this index in the plan.\n\n -> Index Scan Backward using pk_activity_activityid on activity\nactivity1_ (cost=0.00..1827471.18 rows=52363227 width=8)\n\n I have also pasted the 8.0.12 explain analyze output which takes\nlittle over a min , I can live with that.\n All the related tables in 8.2.4 are vacuumed and analyzed thru\nautovacuum utility.\n\n Can anyone tell why the 8.2.4 plan is bad for this query ? Is this\nexpected behavior in 8.2.4 ?\n\nThanks!\nPallav.\n\nHardware\n-------------\nOS: Open Suse 10.1\nMemory: 8gb\nCPU: 2 (Dual Core).\n\nPostgres Settings\n----------------------\nshared_buffers = 1GB\nwork_mem = 32MB\nmaintenance_work_mem = 256MB\neffective_cache_size = 6400MB\n\n8.2.4 Plan\n=======\nexplain\nselect accountact0_.accountactivityid as accounta1_46_,\naccountact0_.fkaccountid as fkaccoun2_46_,\n accountact0_.fkserviceinstanceid as fkservic3_46_,\naccountact0_.fkactivityid as fkactivi4_46_\nfrom provisioning.accountactivity accountact0_, common.activity\nactivity1_, common.activitytype activityty2_\nwhere accountact0_.fkactivityid=activity1_.activityId\nand activity1_.fkactivitytypeid=activityty2_.activitytypeid\nand accountact0_.fkaccountid= 1455437\nand activityty2_.name='UNLOCK_ACCOUNT'\norder by activity1_.activityid desc\nlimit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=3.43..57381.12 rows=1 width=20)\n -> Nested Loop (cost=3.43..4819729.72 rows=84 width=20)\n -> Nested Loop (cost=3.43..3005647.22 rows=459327 width=4)\n Join Filter: (activity1_.fkactivitytypeid =\nactivityty2_.activitytypeid)\n -> Index Scan Backward using pk_activity_activityid on\nactivity activity1_ (cost=0.00..1827471.18 rows=52363227 width=8)\n -> Materialize (cost=3.43..3.44 rows=1 width=4)\n -> Seq Scan on activitytype activityty2_ \n(cost=0.00..3.42 rows=1 width=4)\n Filter: (name = 'UNLOCK_ACCOUNT'::text)\n -> Index Scan using idx_accountactivity_fkactivityid on\naccountactivity accountact0_ (cost=0.00..3.94 rows=1 width=16)\n Index Cond: (accountact0_.fkactivityid =\nactivity1_.activityid)\n Filter: (fkaccountid = 1455437)\n(11 rows)\n\n\n8.0.12 Plan\n========\nexplain analyze\nselect accountact0_.accountactivityid as accounta1_46_,\naccountact0_.fkaccountid as fkaccoun2_46_,\n accountact0_.fkserviceinstanceid as fkservic3_46_,\naccountact0_.fkactivityid as fkactivi4_46_\nfrom provisioning.accountactivity accountact0_, common.activity\nactivity1_, common.activitytype activityty2_\nwhere accountact0_.fkactivityid=activity1_.activityId\nand activity1_.fkactivitytypeid=activityty2_.activitytypeid\nand accountact0_.fkaccountid= 1455437\nand activityty2_.name='UNLOCK_ACCOUNT'\norder by activity1_.activityid desc\nlimit 1;\n \nQUERY PLAN \n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit 
(cost=5725.89..5725.89 rows=1 width=20) (actual\ntime=64555.895..64555.895 rows=0 loops=1)\n -> Sort (cost=5725.89..5725.92 rows=12 width=20) (actual\ntime=64555.893..64555.893 rows=0 loops=1)\n Sort Key: activity1_.activityid\n -> Nested Loop (cost=0.00..5725.67 rows=12 width=20) (actual\ntime=64555.730..64555.730 rows=0 loops=1)\n Join Filter: (\"inner\".fkactivitytypeid =\n\"outer\".activitytypeid)\n -> Seq Scan on activitytype activityty2_ \n(cost=0.00..3.42 rows=1 width=4) (actual time=8.670..8.691 rows=1 loops=1)\n Filter: (name = 'UNLOCK_ACCOUNT'::text)\n -> Nested Loop (cost=0.00..5705.46 rows=1343 width=24)\n(actual time=282.550..64539.423 rows=10302 loops=1)\n -> Index Scan using\nidx_accountactivity_fkaccountid on accountactivity accountact0_ \n(cost=0.00..1641.42 rows=1343 width=16) (actual time=115.348..864.416\nrows=10302 loops=1)\n Index Cond: (fkaccountid = 1455437)\n -> Index Scan using pk_activity_activityid on\nactivity activity1_ (cost=0.00..3.01 rows=1 width=8) (actual\ntime=6.177..6.178 rows=1 loops=10302)\n Index Cond: (\"outer\".fkactivityid =\nactivity1_.activityid)\n Total runtime: 64555.994 ms\n(13 rows)\n\nTable Definitions\n===========\n\n #\\d provisioning.accountactivity\n Table \"provisioning.accountactivity\"\n Column | Type | Modifiers\n---------------------+---------+-------------------------------------------------------------------------------\n accountactivityid | integer | not null default\nnextval(('provisioning.AccountActivitySeq'::text)::regclass)\n fkaccountid | integer | not null\n fkactivityid | integer | not null\n fkserviceinstanceid | integer |\nIndexes:\n \"pk_accountactivity_accountactivityid\" PRIMARY KEY, btree\n(accountactivityid), tablespace \"indexdata\"\n \"idx_accountactivity_fkaccountid\" btree (fkaccountid), tablespace\n\"indexdata\"\n \"idx_accountactivity_fkactivityid\" btree (fkactivityid), tablespace\n\"indexdata\"\nForeign-key constraints:\n \"fk_accountactivity_account\" FOREIGN KEY (fkaccountid) REFERENCES\nprovisioning.account(accountid)\n \"fk_accountactivity_activity\" FOREIGN KEY (fkactivityid) REFERENCES\ncommon.activity(activityid)\n \"fk_accountactivity_serviceinstance\" FOREIGN KEY\n(fkserviceinstanceid) REFERENCES\nprovisioning.serviceinstance(serviceinstanceid)\n\n# \\d common.activity\n Table \"common.activity\"\n Column | Type \n| Modifiers\n------------------+-----------------------------+------------------------------------------------------------------\n activityid | integer | not null default\nnextval(('common.ActivitySeq'::text)::regclass)\n createdate | timestamp without time zone | not null default\n('now'::text)::timestamp(6) without time zone\n fkactivitytypeid | integer | not null\n extra | text |\n ipaddress | text |\nIndexes:\n \"pk_activity_activityid\" PRIMARY KEY, btree (activityid), tablespace\n\"indexdata\"\n \"idx_activity_createdate\" btree (createdate), tablespace \"indexdata\"\nForeign-key constraints:\n \"fk_activity_activitytype\" FOREIGN KEY (fkactivitytypeid) REFERENCES\ncommon.activitytype(activitytypeid)\n\n# \\d common.activitytype\n Table \"common.activitytype\"\n Column | Type | Modifiers\n----------------+---------+----------------------------------------------------------------------\n activitytypeid | integer | not null default\nnextval(('common.ActivityTypeSeq'::text)::regclass)\n name | text | not null\n description | text |\n displayname | text |\nIndexes:\n \"pk_activitytype_activitytypeid\" PRIMARY KEY, btree\n(activitytypeid), tablespace \"indexdata\"\n 
\"uq_activitytype_name\" UNIQUE, btree (name), tablespace \"indexdata\"\n\n\n\n", "msg_date": "Tue, 28 Aug 2007 11:09:00 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "8.2.4 Chooses Bad Query Plan" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> We have recently upgraded our production database from 8.0.12 to\n> 8.2.4, We have seen lot of improvements on 8.2.4 side but we are also\n> seeing some queries which are slow.\n\n> Particularly this below query is really bad in 8.2.4 , I can get\n> only the explain on this as explain analyze never finishes even after 20\n> min.\n\nWhat it's doing is scanning backward on activity1_.activityid and hoping\nto find a row that matches all the other constraints soon enough to make\nthat faster than any other way of doing the query. 8.0 would have done\nthe same thing, I believe, if the statistics looked favorable for it.\nSo I wonder if you've forgotten to re-ANALYZE your data since migrating\n(a pg_dump script won't do this for you).\n\n> -> Index Scan using idx_accountactivity_fkactivityid on\n> accountactivity accountact0_ (cost=0.00..3.94 rows=1 width=16)\n> Index Cond: (accountact0_.fkactivityid =\n> activity1_.activityid)\n> Filter: (fkaccountid = 1455437)\n\n> -> Index Scan using\n> idx_accountactivity_fkaccountid on accountactivity accountact0_ \n> (cost=0.00..1641.42 rows=1343 width=16) (actual time=115.348..864.416\n> rows=10302 loops=1)\n> Index Cond: (fkaccountid = 1455437)\n\nThe discrepancy in rowcount estimates here is pretty damning.\nEven the 8.0 estimate wasn't really very good --- you might want to\nconsider increasing default_statistics_target.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Aug 2007 14:36:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2.4 Chooses Bad Query Plan " }, { "msg_contents": "Hi Tom,\n\n Thanks! for the reply, see my comments below\n\nTom Lane wrote:\n> Pallav Kalva <[email protected]> writes:\n> \n>> We have recently upgraded our production database from 8.0.12 to\n>> 8.2.4, We have seen lot of improvements on 8.2.4 side but we are also\n>> seeing some queries which are slow.\n>> \n>\n> \n>> Particularly this below query is really bad in 8.2.4 , I can get\n>> only the explain on this as explain analyze never finishes even after 20\n>> min.\n>> \n>\n> What it's doing is scanning backward on activity1_.activityid and hoping\n> to find a row that matches all the other constraints soon enough to make\n> that faster than any other way of doing the query. 8.0 would have done\n> the same thing, I believe, if the statistics looked favorable for it.\n> So I wonder if you've forgotten to re-ANALYZE your data since migrating\n> (a pg_dump script won't do this for you).\n>\n> \n\nSo, if I understand this correctly it keeps doing index scan backwards\nuntil it finds\na matching record , if it cant find any record it pretty much scans the\nwhole table\nusing \"index scan backward\" ?\n\nIf I have no matching record I pretty much wait until the query\nfinishes ? 
\n\nIs there anything else I can do to improve the query ?\n\nI have analyzed tables again and also my default_stats_target is set to\n100,\nstill it shows the same plan.\n\n>> -> Index Scan using idx_accountactivity_fkactivityid on\n>> accountactivity accountact0_ (cost=0.00..3.94 rows=1 width=16)\n>> Index Cond: (accountact0_.fkactivityid =\n>> activity1_.activityid)\n>> Filter: (fkaccountid = 1455437)\n>> \n>\n> \n>> -> Index Scan using\n>> idx_accountactivity_fkaccountid on accountactivity accountact0_ \n>> (cost=0.00..1641.42 rows=1343 width=16) (actual time=115.348..864.416\n>> rows=10302 loops=1)\n>> Index Cond: (fkaccountid = 1455437)\n>> \n>\n> The discrepancy in rowcount estimates here is pretty damning.\n> Even the 8.0 estimate wasn't really very good --- you might want to\n> consider increasing default_statistics_target.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n", "msg_date": "Tue, 28 Aug 2007 15:28:59 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.2.4 Chooses Bad Query Plan" }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> I have analyzed tables again and also my default_stats_target is set to\n> 100, still it shows the same plan.\n\n>>> -> Index Scan using idx_accountactivity_fkactivityid on\n>>> accountactivity accountact0_ (cost=0.00..3.94 rows=1 width=16)\n>>> Index Cond: (accountact0_.fkactivityid =\n>>> activity1_.activityid)\n>>> Filter: (fkaccountid = 1455437)\n\n>>> -> Index Scan using\n>>> idx_accountactivity_fkaccountid on accountactivity accountact0_ \n>>> (cost=0.00..1641.42 rows=1343 width=16) (actual time=115.348..864.416\n>>> rows=10302 loops=1)\n>>> Index Cond: (fkaccountid = 1455437)\n\nOh, my bad, I failed to look closely enough at these subplans.\nI thought they were identical but they're not using the same scan\nconditions, so the rowcount estimates shouldn't be comparable after all.\n\nCould you try EXPLAINing (maybe even with ANALYZE) the query *without*\nthe LIMIT clause? I'm curious to see what it thinks the best plan is\nthen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Aug 2007 17:06:12 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2.4 Chooses Bad Query Plan " } ]
[ { "msg_contents": "Dearest dragon hunters and mortal wanna-bes,\n\nI recently upgraded a system from Apache2/mod_perl2 to\nLighttpd/fastcgi. The upgrade went about as rough as can be. While in\nthe midst of a bad day, I decided to make it worse, and upgrade Pg 8.1\nto 8.2. Most people I talk to seem to think 8.1 was a lemon release;\nnot I. It worked excellent for me for the longest time, and I had no\ngood reason to upgrade it, other than to just have done so. In the\nprocess, A query that took a matter of 2minutes, started taking hours.\nI broke that query up into something more atomic and used it as a\nsample.\n\nThe following material is provided for your assisting-me-pleasure: the\noriginal SQL; the \\ds for all pertinent views and tables; the output\nof Explain Analyze; and the original query.\n\nThe original query both trials was: SELECT * FROM test_view where U_ID = 8;\n\ntest_view.sql = http://rafb.net/p/HhT9g489.html\n\n8.1_explain_analyze = http://rafb.net/p/uIyY1s44.html\n8.2_explain_analzye = http://rafb.net/p/mxHWi340.html\n\n\\d table/views = http://rafb.net/p/EPnyB229.html\n\nYes, I ran vacuum full after loading both dbs.\n\nThanks again, ask and I will provide anything else. I'm on freenode,\nin #postgresql, and can be found at all times with the nick\nEvanCarroll.\n\n-- \nEvan Carroll\nSystem Lord of the Internets\[email protected]\n832-445-8877\n", "msg_date": "Tue, 28 Aug 2007 10:22:11 -0500", "msg_from": "\"Evan Carroll\" <[email protected]>", "msg_from_op": true, "msg_subject": "8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "On 8/28/07, Evan Carroll <[email protected]> wrote:\n\n> the midst of a bad day, I decided to make it worse, and upgrade Pg 8.1\n> to 8.2. Most people I talk to seem to think 8.1 was a lemon release;\n> not I.\n\n8.0 was the release that had more issues for me, as it was the first\nversion with all the backend work done to make it capable of running\nwindows. for that reason I stayed on 7.4 until 8.1.4 or so was out.\n\n8.2 was a nice incremental upgrade, and I migrated to it around 8.2.3\nand have been happy every since.\n\n> Yes, I ran vacuum full after loading both dbs.\n\nDid you run analyze? It's not built into vacuum.\n", "msg_date": "Tue, 28 Aug 2007 10:32:01 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "\"Evan Carroll\" <[email protected]> writes:\n\n\"Evan Carroll\" <[email protected]> writes:\n\n> Dearest dragon hunters and mortal wanna-bes,\n>\n> I recently upgraded a system from Apache2/mod_perl2 to\n> Lighttpd/fastcgi. The upgrade went about as rough as can be. While in\n> the midst of a bad day, I decided to make it worse, and upgrade Pg 8.1\n> to 8.2. Most people I talk to seem to think 8.1 was a lemon release;\n> not I. It worked excellent for me for the longest time, and I had no\n> good reason to upgrade it, other than to just have done so. \n\nI assume you mean 8.1.9 and 8.2.4? \n\n> The following material is provided for your assisting-me-pleasure: the\n> original SQL; the \\ds for all pertinent views and tables; the output\n> of Explain Analyze; and the original query.\n\nWhile I do in fact enjoy analyzing query plans I have to say that 75-line\nplans push the bounds of my assisting-you-pleasure. 
Have you experimented with\nsimplifying this query?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 28 Aug 2007 16:58:12 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGregory Stark wrote:\n> \"Evan Carroll\" <[email protected]> writes:\n> \n> \"Evan Carroll\" <[email protected]> writes:\n> \n>> Dearest dragon hunters and mortal wanna-bes,\n>>\n>> I recently upgraded a system from Apache2/mod_perl2 to\n>> Lighttpd/fastcgi. The upgrade went about as rough as can be. While in\n>> the midst of a bad day, I decided to make it worse, and upgrade Pg 8.1\n>> to 8.2. Most people I talk to seem to think 8.1 was a lemon release;\n>> not I. It worked excellent for me for the longest time, and I had no\n>> good reason to upgrade it, other than to just have done so. \n> \n> I assume you mean 8.1.9 and 8.2.4? \n> \n>> The following material is provided for your assisting-me-pleasure: the\n>> original SQL; the \\ds for all pertinent views and tables; the output\n>> of Explain Analyze; and the original query.\n> \n> While I do in fact enjoy analyzing query plans I have to say that 75-line\n> plans push the bounds of my assisting-you-pleasure. Have you experimented with\n> simplifying this query?\n\nAlthough simplifying the query is probably in order, doesn't it stand to\nreason that there may be a problem here. 10x difference (in the worse)\nfrom a lower version to a higher, is likely wrong :)\n\nJoshua D. Drake\n\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG1EqYATb/zqfZUUQRAighAJ9g+Py+CRwsW7f5QWuA4uZ5G26a9gCcCXG2\n0Le2KBGpdhDZyu4ZT30y8RA=\n=MfQw\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 28 Aug 2007 09:17:28 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": ">>> On Tue, Aug 28, 2007 at 10:22 AM, in message\n<[email protected]>, \"Evan Carroll\"\n<[email protected]> wrote: \n> Yes, I ran vacuum full after loading both dbs.\n \nHave you run VACUUM ANALYZE or ANALYZE?\n \n-Kevin\n \n\n\n", "msg_date": "Tue, 28 Aug 2007 11:20:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "On 8/28/07, Kevin Grittner <[email protected]> wrote:\n> >>> On Tue, Aug 28, 2007 at 10:22 AM, in message\n> <[email protected]>, \"Evan Carroll\"\n> <[email protected]> wrote:\n> > Yes, I ran vacuum full after loading both dbs.\n>\n> Have you run VACUUM ANALYZE or ANALYZE?\n\nVACUUM FULL ANALYZE on both tables, out of habit.\n>\n> -Kevin\n>\n>\n>\n>\n\n\n-- \nEvan Carroll\nSystem Lord of the Internets\[email protected]\n832-445-8877\n", "msg_date": "Tue, 28 Aug 2007 11:21:54 -0500", "msg_from": "\"Evan Carroll\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "---------- Forwarded message ----------\nFrom: Evan Carroll <[email protected]>\nDate: Aug 28, 2007 11:23 AM\nSubject: Re: [PERFORM] 8.2 Query 10 times slower than 8.1 (view-heavy)\nTo: Scott Marlowe <[email protected]>\n\n\nOn 8/28/07, Scott Marlowe <[email protected]> wrote:\n> I looked through your query plan, and this is what stood out in the 8.2 plan:\n>\n> -> Nested Loop Left Join (cost=8830.30..10871.27 rows=1\n> width=102) (actual time=2148.444..236018.971 rows=62 loops=1)\n> Join Filter: ((public.contact.pkid =\n> public.contact.pkid) AND (public.event.ts_in > public.event.ts_in))\n> Filter: (public.event.pkid IS NULL)\n>\n> Notice the misestimation is by a factor of 62, and the actual time\n> goes from 2149 to 236018 ms.\n>\n> Again, have you analyzed your tables / databases?\n>\ncontacts=# \\o scott_marlowe_test\ncontacts=# VACUUM FULL ANALYZE;\ncontacts=# SELECT * FROM test_view WHERE U_ID = 8;\nCancel request sent\nERROR: canceling statement due to user request\ncontacts=# EXPLAIN ANALYZE SELECT * FROM test_view WHERE U_ID = 8;\n\noutput found at http://rafb.net/p/EQouMI82.html\n\n--\nEvan Carroll\nSystem Lord of the Internets\[email protected]\n832-445-8877\n\n\n-- \nEvan Carroll\nSystem Lord of the Internets\[email protected]\n832-445-8877\n", "msg_date": "Tue, 28 Aug 2007 11:24:57 -0500", "msg_from": "\"Evan Carroll\" <[email protected]>", "msg_from_op": true, "msg_subject": "Fwd: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "It looks like your view is using a left join to look for rows in one\ntable without matching rows in the other, i.e. a SQL construct similar\nin form to the query below:\n\nSELECT ...\nFROM A LEFT JOIN B ON (...)\nWHERE B.primary_key IS NULL\n\nUnfortunately there has been a planner regression in 8.2 in some cases\nwith these forms of queries. This was discussed a few weeks (months?)\nago on this forum. I haven't looked closely enough to confirm that this\nis the problem in your case, but it seems likely. 
Is it possible to\nrefactor the query to avoid using this construct to see if that helps?\n\nWe've been holding back from upgrading to 8.2 because this one is a\nshow-stopper for us.\n\n-- Mark Lewis\n\nOn Tue, 2007-08-28 at 11:24 -0500, Evan Carroll wrote:\n> ---------- Forwarded message ----------\n> From: Evan Carroll <[email protected]>\n> Date: Aug 28, 2007 11:23 AM\n> Subject: Re: [PERFORM] 8.2 Query 10 times slower than 8.1 (view-heavy)\n> To: Scott Marlowe <[email protected]>\n> \n> \n> On 8/28/07, Scott Marlowe <[email protected]> wrote:\n> > I looked through your query plan, and this is what stood out in the 8.2 plan:\n> >\n> > -> Nested Loop Left Join (cost=8830.30..10871.27 rows=1\n> > width=102) (actual time=2148.444..236018.971 rows=62 loops=1)\n> > Join Filter: ((public.contact.pkid =\n> > public.contact.pkid) AND (public.event.ts_in > public.event.ts_in))\n> > Filter: (public.event.pkid IS NULL)\n> >\n> > Notice the misestimation is by a factor of 62, and the actual time\n> > goes from 2149 to 236018 ms.\n> >\n> > Again, have you analyzed your tables / databases?\n> >\n> contacts=# \\o scott_marlowe_test\n> contacts=# VACUUM FULL ANALYZE;\n> contacts=# SELECT * FROM test_view WHERE U_ID = 8;\n> Cancel request sent\n> ERROR: canceling statement due to user request\n> contacts=# EXPLAIN ANALYZE SELECT * FROM test_view WHERE U_ID = 8;\n> \n> output found at http://rafb.net/p/EQouMI82.html\n> \n> --\n> Evan Carroll\n> System Lord of the Internets\n> [email protected]\n> 832-445-8877\n> \n> \n", "msg_date": "Tue, 28 Aug 2007 09:51:31 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.2 Query 10 times slower than 8.1 (view-heavy)" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> Unfortunately there has been a planner regression in 8.2 in some cases\n> with these forms of queries. This was discussed a few weeks (months?)\n> ago on this forum. I haven't looked closely enough to confirm that this\n> is the problem in your case, but it seems likely.\n\nYeah, the EXPLAIN ANALYZE output clearly shows a drastic underestimate\nof the number of rows out of a join like this, and a consequent choice\nof a nestloop above it that performs terribly.\n\n> We've been holding back from upgrading to 8.2 because this one is a\n> show-stopper for us.\n\nWell, you could always make your own version with this patch reverted:\nhttp://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php\n\nI might end up doing that in the 8.2 branch if a better solution\nseems too large to back-patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 28 Aug 2007 14:48:00 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.2 Query 10 times slower than 8.1 (view-heavy) " }, { "msg_contents": "I wrote:\n> Mark Lewis <[email protected]> writes:\n>> We've been holding back from upgrading to 8.2 because this one is a\n>> show-stopper for us.\n\n> Well, you could always make your own version with this patch reverted:\n> http://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php\n> I might end up doing that in the 8.2 branch if a better solution\n> seems too large to back-patch.\n\nI thought of a suitably small hack that should cover at least the main\nproblem without going so far as to revert that patch entirely. What we\ncan do is have the IS NULL estimator recognize when the clause is being\napplied at an outer join, and not believe the table statistics in that\ncase. 
I've applied the attached patch for this --- are you interested\nin trying it out on your queries before 8.2.5 comes out?\n\n\t\t\tregards, tom lane\n\n\nIndex: src/backend/optimizer/path/clausesel.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/optimizer/path/clausesel.c,v\nretrieving revision 1.82\ndiff -c -r1.82 clausesel.c\n*** src/backend/optimizer/path/clausesel.c\t4 Oct 2006 00:29:53 -0000\t1.82\n--- src/backend/optimizer/path/clausesel.c\t31 Aug 2007 23:29:01 -0000\n***************\n*** 218,224 ****\n \t\t\t\ts2 = rqlist->hibound + rqlist->lobound - 1.0;\n \n \t\t\t\t/* Adjust for double-exclusion of NULLs */\n! \t\t\t\ts2 += nulltestsel(root, IS_NULL, rqlist->var, varRelid);\n \n \t\t\t\t/*\n \t\t\t\t * A zero or slightly negative s2 should be converted into a\n--- 218,226 ----\n \t\t\t\ts2 = rqlist->hibound + rqlist->lobound - 1.0;\n \n \t\t\t\t/* Adjust for double-exclusion of NULLs */\n! \t\t\t\t/* HACK: disable nulltestsel's special outer-join logic */\n! \t\t\t\ts2 += nulltestsel(root, IS_NULL, rqlist->var,\n! \t\t\t\t\t\t\t\t varRelid, JOIN_INNER);\n \n \t\t\t\t/*\n \t\t\t\t * A zero or slightly negative s2 should be converted into a\n***************\n*** 701,707 ****\n \t\ts1 = nulltestsel(root,\n \t\t\t\t\t\t ((NullTest *) clause)->nulltesttype,\n \t\t\t\t\t\t (Node *) ((NullTest *) clause)->arg,\n! \t\t\t\t\t\t varRelid);\n \t}\n \telse if (IsA(clause, BooleanTest))\n \t{\n--- 703,710 ----\n \t\ts1 = nulltestsel(root,\n \t\t\t\t\t\t ((NullTest *) clause)->nulltesttype,\n \t\t\t\t\t\t (Node *) ((NullTest *) clause)->arg,\n! \t\t\t\t\t\t varRelid,\n! \t\t\t\t\t\t jointype);\n \t}\n \telse if (IsA(clause, BooleanTest))\n \t{\nIndex: src/backend/utils/adt/selfuncs.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/adt/selfuncs.c,v\nretrieving revision 1.214.2.5\ndiff -c -r1.214.2.5 selfuncs.c\n*** src/backend/utils/adt/selfuncs.c\t5 May 2007 17:05:55 -0000\t1.214.2.5\n--- src/backend/utils/adt/selfuncs.c\t31 Aug 2007 23:29:02 -0000\n***************\n*** 1386,1396 ****\n */\n Selectivity\n nulltestsel(PlannerInfo *root, NullTestType nulltesttype,\n! \t\t\tNode *arg, int varRelid)\n {\n \tVariableStatData vardata;\n \tdouble\t\tselec;\n \n \texamine_variable(root, arg, varRelid, &vardata);\n \n \tif (HeapTupleIsValid(vardata.statsTuple))\n--- 1386,1409 ----\n */\n Selectivity\n nulltestsel(PlannerInfo *root, NullTestType nulltesttype,\n! \t\t\tNode *arg, int varRelid, JoinType jointype)\n {\n \tVariableStatData vardata;\n \tdouble\t\tselec;\n \n+ \t/*\n+ \t * Special hack: an IS NULL test being applied at an outer join should not\n+ \t * be taken at face value, since it's very likely being used to select the\n+ \t * outer-side rows that don't have a match, and thus its selectivity has\n+ \t * nothing whatever to do with the statistics of the original table\n+ \t * column. We do not have nearly enough context here to determine its\n+ \t * true selectivity, so for the moment punt and guess at 0.5. 
Eventually\n+ \t * the planner should be made to provide enough info about the clause's\n+ \t * context to let us do better.\n+ \t */\n+ \tif (IS_OUTER_JOIN(jointype) && nulltesttype == IS_NULL)\n+ \t\treturn (Selectivity) 0.5;\n+ \n \texamine_variable(root, arg, varRelid, &vardata);\n \n \tif (HeapTupleIsValid(vardata.statsTuple))\nIndex: src/include/utils/selfuncs.h\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/utils/selfuncs.h,v\nretrieving revision 1.36\ndiff -c -r1.36 selfuncs.h\n*** src/include/utils/selfuncs.h\t4 Oct 2006 00:30:11 -0000\t1.36\n--- src/include/utils/selfuncs.h\t31 Aug 2007 23:29:02 -0000\n***************\n*** 149,155 ****\n extern Selectivity booltestsel(PlannerInfo *root, BoolTestType booltesttype,\n \t\t\tNode *arg, int varRelid, JoinType jointype);\n extern Selectivity nulltestsel(PlannerInfo *root, NullTestType nulltesttype,\n! \t\t\tNode *arg, int varRelid);\n extern Selectivity scalararraysel(PlannerInfo *root,\n \t\t\t ScalarArrayOpExpr *clause,\n \t\t\t bool is_join_clause,\n--- 149,155 ----\n extern Selectivity booltestsel(PlannerInfo *root, BoolTestType booltesttype,\n \t\t\tNode *arg, int varRelid, JoinType jointype);\n extern Selectivity nulltestsel(PlannerInfo *root, NullTestType nulltesttype,\n! \t\t\tNode *arg, int varRelid, JoinType jointype);\n extern Selectivity scalararraysel(PlannerInfo *root,\n \t\t\t ScalarArrayOpExpr *clause,\n \t\t\t bool is_join_clause,", "msg_date": "Fri, 31 Aug 2007 19:39:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.2 Query 10 times slower than 8.1 (view-heavy) " }, { "msg_contents": "On Fri, 2007-08-31 at 19:39 -0400, Tom Lane wrote:\n> I wrote:\n> > Mark Lewis <[email protected]> writes:\n> >> We've been holding back from upgrading to 8.2 because this one is a\n> >> show-stopper for us.\n> \n> > Well, you could always make your own version with this patch reverted:\n> > http://archives.postgresql.org/pgsql-committers/2006-11/msg00066.php\n> > I might end up doing that in the 8.2 branch if a better solution\n> > seems too large to back-patch.\n> \n> I thought of a suitably small hack that should cover at least the main\n> problem without going so far as to revert that patch entirely. What we\n> can do is have the IS NULL estimator recognize when the clause is being\n> applied at an outer join, and not believe the table statistics in that\n> case. I've applied the attached patch for this --- are you interested\n> in trying it out on your queries before 8.2.5 comes out?\n\nWish I could, but I'm afraid that I'm not going to be in a position to\ntry out the patch on the application that exhibits the problem for at\nleast the next few weeks.\n\n-- Mark\n", "msg_date": "Fri, 31 Aug 2007 17:09:17 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Fwd: 8.2 Query 10 times slower than 8.1 (view-heavy)" } ]
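A hedged sketch, not taken from the thread itself: the construct at issue is the LEFT JOIN ... IS NULL anti-join whose selectivity 8.2 mis-estimated before this patch, and it can often be refactored to NOT EXISTS, which avoids the outer-join IS NULL estimate entirely. Only contact.pkid, event.pkid and event.ts_in appear in the quoted plan; the joining column contact_pkid is an assumed name for illustration.

    -- Anti-join shape that hits the bad IS NULL estimate (latest event per contact):
    SELECT c.pkid, e1.ts_in
    FROM contact c
    JOIN event e1      ON e1.contact_pkid = c.pkid         -- assumed FK column name
    LEFT JOIN event e2 ON e2.contact_pkid = c.pkid
                      AND e2.ts_in > e1.ts_in
    WHERE e2.pkid IS NULL;

    -- Equivalent NOT EXISTS form, which sidesteps the IS NULL selectivity guess:
    SELECT c.pkid, e1.ts_in
    FROM contact c
    JOIN event e1 ON e1.contact_pkid = c.pkid
    WHERE NOT EXISTS (SELECT 1
                      FROM event e2
                      WHERE e2.contact_pkid = c.pkid
                        AND e2.ts_in > e1.ts_in);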
[ { "msg_contents": "From: \"Evan Carroll\" <[email protected]>\nTo: \"Kevin Grittner\" <[email protected]>,\[email protected]\nDate: Tue, 28 Aug 2007 11:21:54 -0500\nSubject: Re: [PERFORM] 8.2 Query 10 times slower than 8.1 (view-heavy)\nOn 8/28/07, Kevin Grittner <[email protected]> wrote:\n> >>> On Tue, Aug 28, 2007 at 10:22 AM, in message\n> <[email protected]>, \"Evan Carroll\"\n> <[email protected]> wrote:\n> > Yes, I ran vacuum full after loading both dbs.\n>\n> Have you run VACUUM ANALYZE or ANALYZE?\n\nVACUUM FULL ANALYZE on both tables, out of habit.\n>\n> -Kevin\n\n-- \nEvan Carroll\nSystem Lord of the Internets\[email protected]\n832-445-8877\n", "msg_date": "Tue, 28 Aug 2007 11:30:19 -0500", "msg_from": "\"Evan Carroll\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.2 Query 10 times slower than 8.1 (view-heavy)" } ]
[ { "msg_contents": "Hello!\n\nSome background info.. We have a blog table that contains about eight \nmillion blog entries. Average length of an entry is 1200 letters. Because \neach 8k page can accommodate only a few entries, every query that involves \nseveral entries causes several random seeks to disk. We are having \nproblems with queries like:\n\n1) give me a list of months when I have written someting\n2) give me id's of entries I have written on month X year X\n3) give me the number of blog entries my friends have written since last\n time\n\nClustering would probably decrease random seeks but it is not an option. \nIt locks the table and operation would take \"some\" time. It should also be \ndone periodically to maintain clustering.\n\nI guess that file system cache gets filled with text contents of blog \nentries although they are totally useless for queries like these. Contents \nof individual blog entries are cached to memcached on application level \nanyway. There's rarely any need to fetch them from database.\n\nIt would be nice if I could flag a column to be toasted always, regardless \nof it's length.\n\nBecause there isn't such option maybe I should create a separate table for \nblog text content. Does anybody have better ideas for this? :)\n\nThanks!\n\n\nP.S. Here's a plan for query #3. Users can have several bookmark groups \nthey are following. User can limit visibility of an entry to some of \nhis/her bookmark group. Those are not any kind of bottlenecks anyway...\n\n Sort (cost=34112.60..34117.94 rows=2138 width=14)\n Sort Key: count(*), upper((u.nick)::text)\n -> HashAggregate (cost=33962.28..33994.35 rows=2138 width=14)\n -> Nested Loop (cost=8399.95..33946.24 rows=2138 width=14)\n -> Nested Loop (cost=8399.95..9133.16 rows=90 width=22)\n -> HashAggregate (cost=8399.95..8402.32 rows=237 width=8)\n -> Nested Loop (cost=0.00..8395.99 rows=792 width=8)\n -> Index Scan using user_bookmark_uid on user_bookmark ub (cost=0.00..541.39 rows=2368 width=12)\n Index Cond: (uid = 256979)\n -> Index Scan using user_bookmark_group_pkey on user_bookmark_group bg (cost=0.00..3.30 rows=1 width=4)\n Index Cond: (\"outer\".bookmark_group_id = bg.bookmark_group_id)\n Filter: ((\"type\" >= 0) AND (\"type\" <= 1) AND (trace_blog = 'y'::bpchar))\n -> Index Scan using users_uid_accepted_only on users u (cost=0.00..3.06 rows=1 width=14)\n Index Cond: (u.uid = \"outer\".marked_uid)\n -> Index Scan using blog_entry_uid_beid on blog_entry be (cost=0.00..275.34 rows=24 width=8)\n Index Cond: ((be.uid = \"outer\".marked_uid) AND (COALESCE(\"outer\".last_seen_blog_entry_id, 0) < be.blog_entry_id))\n Filter: ((visibility = 'p'::bpchar) AND ((status = 'p'::bpchar) OR (status = 'l'::bpchar)) AND ((bookmark_group_id IS NULL) OR (subplan)))\n SubPlan\n -> Index Scan using user_bookmark_pkey on user_bookmark fub (cost=0.00..3.42 rows=1 width=0)\n Index Cond: ((bookmark_group_id = $0) AND (marked_uid = 256979))\n\nP.S. That particular user has quite many unread entries though...\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n", "msg_date": "Tue, 28 Aug 2007 21:53:37 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "Performance problem with table containing a lot of text (blog)" }, { "msg_contents": "Kari Lavikka wrote:\n> Hello!\n> \n> Some background info.. We have a blog table that contains about eight \n> million blog entries. Average length of an entry is 1200 letters. 
\n> Because each 8k page can accommodate only a few entries, every query \n> that involves several entries causes several random seeks to disk. We \n> are having problems with queries like:\n> \n> 1) give me a list of months when I have written someting\n> 2) give me id's of entries I have written on month X year X\n> 3) give me the number of blog entries my friends have written since last\n> time\n\nI didn't see your schema, but couldn't these problems be solved by storing the \narticle id, owner id, and blog date in a separate table? It seems that if you \ndon't actually need the content of the blogs, all of those questions could be \nanswered by querying a very simple table with minimal I/O overhead.\n\n\n", "msg_date": "Tue, 28 Aug 2007 13:04:31 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with table containing a lot of\n text (blog)" }, { "msg_contents": "\n> I didn't see your schema, but couldn't these problems be solved by storing \n> the article id, owner id, and blog date in a separate table? It seems that \n> if you don't actually need the content of the blogs, all of those questions \n> could be answered by querying a very simple table with minimal I/O overhead.\n\nYes. I was suggesting this as an option but I'm wondering if there \nare other solutions.\n\n |\\__/|\n ( oo ) Kari Lavikka - [email protected] - (050) 380 3808\n__ooO( )Ooo_______ _____ ___ _ _ _ _ _ _ _\n \"\"\n\nOn Tue, 28 Aug 2007, Dan Harris wrote:\n\n> Kari Lavikka wrote:\n>> Hello!\n>> \n>> Some background info.. We have a blog table that contains about eight \n>> million blog entries. Average length of an entry is 1200 letters. Because \n>> each 8k page can accommodate only a few entries, every query that involves \n>> several entries causes several random seeks to disk. We are having \n>> problems with queries like:\n>> \n>> 1) give me a list of months when I have written someting\n>> 2) give me id's of entries I have written on month X year X\n>> 3) give me the number of blog entries my friends have written since last\n>> time\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n>\n", "msg_date": "Tue, 28 Aug 2007 22:10:33 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with table containing a lot of\n text (blog)" }, { "msg_contents": "Kari Lavikka wrote:\n> It would be nice if I could flag a column to be toasted always,\n> regardless of it's length.\n\nThe idea of being able to set the toast threshold per column was\ndiscussed during 8.3 development, but no patch was produced IIRC. We\nmight do that in the future. If you're willing to compile from source,\nyou can lower TOAST_TUPLE_THRESHOLD.\n\nYou could also use ALTER TABLE ... ALTER COLUMN ... SET STORAGE EXTERNAL\nto force the long blog entries to be stored in the toast table instead\nof compressing them in the main table. Values smaller than\nTOAST_TUPLE_THRESHOLD (2k by default?) still wouldn't be toasted,\nthough, so it might not make much difference.\n\n> Because there isn't such option maybe I should create a separate table\n> for blog text content. Does anybody have better ideas for this? :)\n\nThat's probably the easiest solution. You can put a view on top of them\nto hide it from the application.\n\n> P.S. Here's a plan for query #3. 
Users can have several bookmark groups\n> they are following. User can limit visibility of an entry to some of\n> his/her bookmark group. Those are not any kind of bottlenecks anyway...\n\nIf the user_bookmark table is not clustered by uid, I'm surprised the\nplanner didn't choose a bitmap index scan. Which version of PostgreSQL\nis this?\n\nPS. EXPLAIN ANALYZE is much more helpful than plain EXPLAIN.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 29 Aug 2007 09:29:21 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with table containing a lot of text (blog)" }, { "msg_contents": "On Wed, 29 Aug 2007, Heikki Linnakangas wrote:\n\n> The idea of being able to set the toast threshold per column was\n> discussed during 8.3 development, but no patch was produced IIRC. We\n> might do that in the future. If you're willing to compile from source,\n> you can lower TOAST_TUPLE_THRESHOLD.\n\nWe are currently using Postgres 8.1 but have to upgrade to 8.2 shortly. \nNew version fixes some vacuum problems.\n\nI always compile postgres from source. Maybe I have to do some \ncalculations because that setting affects all tables and databases. Most \nof our text/varchar columns are quite short but setting the threshold too \nlow causes excessive seeks to toast tables... right?\n\n>> Because there isn't such option maybe I should create a separate table\n>> for blog text content. Does anybody have better ideas for this? :)\n>\n> That's probably the easiest solution. You can put a view on top of them\n> to hide it from the application.\n\nYeh.\n\n> If the user_bookmark table is not clustered by uid, I'm surprised the\n> planner didn't choose a bitmap index scan.\n\nDrumroll... there are:\n \"user_bookmark_pkey\" PRIMARY KEY, btree (bookmark_group_id, marked_uid), tablespace \"lun3\"\n \"user_bookmark_marked_uid\" btree (marked_uid)\n \"user_bookmark_uid\" btree (uid) CLUSTER, tablespace \"lun3\"\n\nQueries are mostly like \"Gimme all of my bookmarked friends in all of my \nbookmark groups\" and rarely the opposite \"Gimme all users who have \nbookmarked me\"\n\nI have clustered the table using uid to minimize random page fetches.\n\n - Kari\n\n>\n> -- \n> Heikki Linnakangas\n> EnterpriseDB http://www.enterprisedb.com\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n", "msg_date": "Wed, 29 Aug 2007 12:39:13 +0300 (EETDST)", "msg_from": "Kari Lavikka <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance problem with table containing a lot of\n text (blog)" }, { "msg_contents": "Kari Lavikka wrote:\n> On Wed, 29 Aug 2007, Heikki Linnakangas wrote:\n> \n>> The idea of being able to set the toast threshold per column was\n>> discussed during 8.3 development, but no patch was produced IIRC. We\n>> might do that in the future. If you're willing to compile from source,\n>> you can lower TOAST_TUPLE_THRESHOLD.\n> \n> We are currently using Postgres 8.1 but have to upgrade to 8.2 shortly.\n> New version fixes some vacuum problems.\n> \n> I always compile postgres from source. Maybe I have to do some\n> calculations because that setting affects all tables and databases. Most\n> of our text/varchar columns are quite short but setting the threshold\n> too low causes excessive seeks to toast tables... right?\n\nRight. 
If you have trouble finding the right balance, you can also use\nALTER STORAGE PLAIN to force the other columns not to be toasted.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 29 Aug 2007 11:14:20 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance problem with table containing a lot of text (blog)" } ]
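A hedged SQL sketch of the two approaches discussed in this thread. The table name blog_entry and the column blog_entry_id appear in the posted plan; the column name content and the primary-key assumption are illustrative guesses, not from the thread.

    -- 1. Push long values of the text column out to the TOAST table; values below
    --    the toast threshold still stay inline in the main heap pages.
    ALTER TABLE blog_entry ALTER COLUMN content SET STORAGE EXTERNAL;

    -- 2. Split the wide text into its own table and hide the split behind a view
    --    (after moving the data over and dropping blog_entry.content).
    CREATE TABLE blog_entry_body (
        blog_entry_id integer PRIMARY KEY
                      REFERENCES blog_entry (blog_entry_id),  -- assumes blog_entry_id is blog_entry's key
        content       text NOT NULL
    );

    CREATE VIEW blog_entry_full AS
        SELECT e.*, b.content
        FROM blog_entry e
        JOIN blog_entry_body b USING (blog_entry_id);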
[ { "msg_contents": "Hi all,\n\nI'm having a strange performance issue with two almost similar queries, the\none running as expected, the other one taking far more time. The only\ndifference is that I have \"uniid in (10)\" in the normally running query and\n\"uniid in (9,10)\" in the other one. The number of rows resulting from the\nrespective table differs not very much being 406 for the first and 511 for\nthe second query.\n\nThis is the full query - the \"uniid in (9,10)\" is in the last subquery:\n\n\nSELECT 'Anzahl' AS column1, count(DISTINCT sid) AS column2\nFROM (\n\tSELECT sid\n\tFROM stud\n\tWHERE stud.status > 0\n\tAND length(stud.vname) > 1\n\tAND length(stud.nname) > 1\n) AS qur_filter_1 INNER JOIN (\n\tSELECT DISTINCT sid\n\tFROM stud_vera\n\tINNER JOIN phon USING (sid)\n\tWHERE veraid = 22\n\tAND stud_vera.status > 0\n\tAND (\n\t\t(\n\t\tveraid IN (2, 3, 22, 24, 36)\n\t\tAND phontyp = 5\n\t\tAND phon.typ = 1\n\t\tAND phon.status > 0\n\t\t) OR (\n\t\tveraid NOT IN (2, 3, 22, 24, 36)\n\t\t)\n\t)\n) AS qur_filter_2 USING (sid) INNER JOIN (\n\tSELECT DISTINCT sid \n\tFROM ausb\n\tINNER JOIN uni USING (uniid)\n\tWHERE uni.uniort IN ('Augsburg')\n\tAND ausb.overview = 1\n\tAND ausb.zweitstudium != 2\n\tAND ausb.status > 0\n) AS qur_filter_3 USING (sid) INNER JOIN (\n\tSELECT DISTINCT sid \n\tFROM ausb\n\tWHERE uniid IN (9, 10)\n\tAND ausb.overview = 1\n\tAND ausb.zweitstudium != 2\n\tAND ausb.status > 0\n) AS qur_filter_4 USING (sid)\n\n\n\nThese are the query-plans for both queries, first the problematic one:\n\n\n\nAggregate (cost=78785.78..78785.79 rows=1 width=4) (actual\ntime=698777.890..698777.891 rows=1 loops=1)\n\n -> Nested Loop (cost=65462.58..78785.78 rows=1 width=4) (actual\ntime=6743.856..698776.957 rows=250 loops=1)\n\n Join Filter: (\"outer\".sid = \"inner\".sid)\n\n -> Merge Join (cost=11031.79..11883.12 rows=1 width=12) (actual\ntime=387.837..433.612 rows=494 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n -> Nested Loop (cost=5643.11..6490.17 rows=19 width=8)\n(actual time=114.323..154.043 rows=494 loops=1)\n\n -> Unique (cost=5643.11..5645.35 rows=180 width=4)\n(actual time=114.202..116.002 rows=511 loops=1)\n\n -> Sort (cost=5643.11..5644.23 rows=448 width=4)\n(actual time=114.199..114.717 rows=511 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Seq Scan on ausb (cost=0.00..5623.38\nrows=448 width=4) (actual time=0.351..112.459 rows=511 loops=1)\n\n Filter: (((uniid = 9) OR (uniid = 10))\nAND (overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n\n -> Index Scan using stud_pkey on stud (cost=0.00..4.67\nrows=1 width=4) (actual time=0.062..0.067 rows=1 loops=511)\n\n Index Cond: (stud.sid = \"outer\".sid)\n\n Filter: ((status > 0) AND (length((vname)::text) >\n1) AND (length((nname)::text) > 1))\n\n -> Materialize (cost=5388.68..5392.05 rows=337 width=4)\n(actual time=273.506..276.785 rows=511 loops=1)\n\n -> Unique (cost=5383.29..5384.98 rows=337 width=4)\n(actual time=273.501..275.421 rows=511 loops=1)\n\n -> Sort (cost=5383.29..5384.13 rows=337 width=4)\n(actual time=273.499..274.091 rows=511 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Hash Join (cost=17.61..5369.14 rows=337\nwidth=4) (actual time=1.139..272.465 rows=511 loops=1)\n\n Hash Cond: (\"outer\".uniid =\n\"inner\".uniid)\n\n -> Seq Scan on ausb\n(cost=0.00..4827.30 rows=104174 width=8) (actual time=0.026..200.111\nrows=103593 loops=1)\n\n Filter: ((overview = 1) AND\n(zweitstudium <> 2) AND (status > 0))\n\n -> Hash (cost=17.60..17.60 rows=2\nwidth=4) (actual time=0.435..0.435 
rows=2 loops=1)\n\n -> Seq Scan on uni\n(cost=0.00..17.60 rows=2 width=4) (actual time=0.412..0.424 rows=2 loops=1)\n\n Filter: ((uniort)::text =\n'Augsburg'::text)\n\n -> Unique (cost=54430.79..66664.18 rows=10599 width=4) (actual\ntime=6.851..1374.135 rows=40230 loops=494)\n\n -> Merge Join (cost=54430.79..66319.65 rows=137811 width=4)\n(actual time=6.849..1282.333 rows=40233 loops=494)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n Join Filter: ((((\"outer\".veraid = 2) OR (\"outer\".veraid\n= 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = 24) OR (\"outer\".veraid =\n36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = 1) AND (\"inner\".status >\n0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid <> 3) AND (\"outer\".veraid\n<> 22) AND (\"outer\".veraid <> 24) AND (\"outer\".veraid <> 36)))\n\n -> Sort (cost=11962.11..12098.59 rows=54593 width=8)\n(actual time=0.547..46.482 rows=53354 loops=494)\n\n Sort Key: stud_vera.sid\n\n -> Bitmap Heap Scan on stud_vera\n(cost=2239.14..7666.61 rows=54593 width=8) (actual time=43.096..165.300\nrows=53354 loops=1)\n\n Recheck Cond: (veraid = 22)\n\n Filter: (status > 0)\n\n -> Bitmap Index Scan on\nstud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) (actual\ntime=41.242..41.242 rows=61855 loops=1)\n\n Index Cond: (veraid = 22)\n\n -> Sort (cost=42468.68..43407.53 rows=375539 width=12)\n(actual time=6.297..533.711 rows=375539 loops=494)\n\n Sort Key: phon.sid\n\n -> Seq Scan on phon (cost=0.00..7696.39\nrows=375539 width=12) (actual time=0.048..544.999 rows=375539 loops=1)\n\n\n\n\n!!!! The query-plan for the normally running query: !!!!!\n\n\n\n\n\nAggregate (cost=77846.97..77846.98 rows=1 width=4) (actual\ntime=5488.913..5488.913 rows=1 loops=1)\n\n -> Nested Loop (cost=65471.70..77846.97 rows=1 width=4) (actual\ntime=3913.839..5488.513 rows=208 loops=1)\n\n Join Filter: (\"outer\".sid = \"inner\".sid)\n\n -> Merge Join (cost=60088.41..72454.41 rows=1 width=12) (actual\ntime=3598.105..4841.242 rows=208 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n -> Unique (cost=54430.79..66664.18 rows=10599 width=4)\n(actual time=3479.029..4697.051 rows=40129 loops=1)\n\n -> Merge Join (cost=54430.79..66319.65 rows=137811\nwidth=4) (actual time=3479.027..4616.245 rows=40132 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n Join Filter: ((((\"outer\".veraid = 2) OR\n(\"outer\".veraid = 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = 24) OR\n(\"outer\".veraid = 36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = 1) AND\n(\"inner\".status > 0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid <> 3)\nAND (\"outer\".veraid <> 22) AND (\"outer\".veraid <> 24) AND (\"outer\".veraid <>\n36)))\n\n -> Sort (cost=11962.11..12098.59 rows=54593\nwidth=8) (actual time=274.248..315.052 rows=53252 loops=1)\n\n Sort Key: stud_vera.sid\n\n -> Bitmap Heap Scan on stud_vera\n(cost=2239.14..7666.61 rows=54593 width=8) (actual time=46.669..167.599\nrows=53352 loops=1)\n\n Recheck Cond: (veraid = 22)\n\n Filter: (status > 0)\n\n -> Bitmap Index Scan on\nstud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) (actual\ntime=44.618..44.618 rows=61857 loops=1)\n\n Index Cond: (veraid = 22)\n\n -> Sort (cost=42468.68..43407.53 rows=375539\nwidth=12) (actual time=3204.729..3681.598 rows=375090 loops=1)\n\n Sort Key: phon.sid\n\n -> Seq Scan on phon (cost=0.00..7696.39\nrows=375539 width=12) (actual time=0.052..628.838 rows=375539 loops=1)\n\n -> Materialize (cost=5657.62..5657.71 rows=9 width=8)\n(actual 
time=91.290..105.557 rows=406 loops=1)\n\n -> Nested Loop (cost=5234.08..5657.61 rows=9 width=8)\n(actual time=91.282..104.571 rows=406 loops=1)\n\n -> Unique (cost=5234.08..5235.20 rows=90\nwidth=4) (actual time=91.156..92.232 rows=420 loops=1)\n\n -> Sort (cost=5234.08..5234.64 rows=224\nwidth=4) (actual time=91.154..91.484 rows=420 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Seq Scan on ausb\n(cost=0.00..5225.34 rows=224 width=4) (actual time=0.266..90.242 rows=420\nloops=1)\n\n Filter: ((uniid = 10) AND\n(overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n\n -> Index Scan using stud_pkey on stud\n(cost=0.00..4.67 rows=1 width=4) (actual time=0.024..0.026 rows=1 loops=420)\n\n Index Cond: (stud.sid = \"outer\".sid)\n\n Filter: ((status > 0) AND\n(length((vname)::text) > 1) AND (length((nname)::text) > 1))\n\n -> Unique (cost=5383.29..5384.98 rows=337 width=4) (actual\ntime=1.520..2.686 rows=511 loops=208)\n\n -> Sort (cost=5383.29..5384.13 rows=337 width=4) (actual\ntime=1.519..1.871 rows=511 loops=208)\n\n Sort Key: public.ausb.sid\n\n -> Hash Join (cost=17.61..5369.14 rows=337 width=4)\n(actual time=1.133..314.584 rows=511 loops=1)\n\n Hash Cond: (\"outer\".uniid = \"inner\".uniid)\n\n -> Seq Scan on ausb (cost=0.00..4827.30\nrows=104174 width=8) (actual time=0.030..226.532 rows=103593 loops=1)\n\n Filter: ((overview = 1) AND (zweitstudium <>\n2) AND (status > 0))\n\n -> Hash (cost=17.60..17.60 rows=2 width=4)\n(actual time=0.392..0.392 rows=2 loops=1)\n\n -> Seq Scan on uni (cost=0.00..17.60\nrows=2 width=4) (actual time=0.369..0.381 rows=2 loops=1)\n\n Filter: ((uniort)::text =\n'Augsburg'::text)\n\n\n\nThe estimated row numbers are not bad as long as one table is affected.\nThey're much worse as soon as two or more tables are joined. Though the\nquery plans are slightly different, the number of merged rows at different\nstages seems to be rather the same for both plans. The big difference in my\neyes seems the cost for the first nested loop. This seems to be the point,\nwhere the long running query consumes most time. I've then set\nenable_nestloop to off, and actually the problem disappears. \n\n\nOther maybe relevant parameters:\ndefault_statistics_target = 100\nwork_mem = 4096\nmax_fsm_pages = 100000\n\nMy questions:\n\nWhat could be the problem behind high amount of actually used time for the\nnested loop in the first query?\n\nIf we decided to constantly turn off nested loops, what side effects would\nwe have to expect?\n\nAre there more granular ways to tell the query planner when to use nested\nloops?\n\nOr just other ideas what to do? We'd be grateful for any hint!\n\n\nMany thanks\nJens\n\n\n-- \nJens Reufsteck\nHobsons GmbH\nWildunger Straße 6\n60487 Frankfurt am Main\nDeutschland\n\nTel: +49 (69) 255 37-140\nFax: +49 (69) 255 37-2140\n\nhttp://www.hobsons.de\nhttp://www.hobsons.ch\n\nGeschäftsführung:\nChristopher Letcher, Judith Oppitz, Adam Webster\nAmtsgericht Frankfurt HRB 58610\n\n", "msg_date": "Wed, 29 Aug 2007 12:15:49 +0200", "msg_from": "\"Jens Reufsteck\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue with nested loop" }, { "msg_contents": "On Aug 29, 2007, at 5:15 AM, Jens Reufsteck wrote:\n> I'm having a strange performance issue with two almost similar \n> queries, the\n> one running as expected, the other one taking far more time. The only\n> difference is that I have \"uniid in (10)\" in the normally running \n> query and\n> \"uniid in (9,10)\" in the other one. 
The number of rows resulting \n> from the\n> respective table differs not very much being 406 for the first and \n> 511 for\n> the second query.\n>\n> This is the full query - the \"uniid in (9,10)\" is in the last \n> subquery:\n>\n>\n> SELECT 'Anzahl' AS column1, count(DISTINCT sid) AS column2\n> FROM (\n> \tSELECT sid\n> \tFROM stud\n> \tWHERE stud.status > 0\n> \tAND length(stud.vname) > 1\n> \tAND length(stud.nname) > 1\n> ) AS qur_filter_1 INNER JOIN (\n> \tSELECT DISTINCT sid\n> \tFROM stud_vera\n> \tINNER JOIN phon USING (sid)\n> \tWHERE veraid = 22\n> \tAND stud_vera.status > 0\n> \tAND (\n> \t\t(\n> \t\tveraid IN (2, 3, 22, 24, 36)\n> \t\tAND phontyp = 5\n> \t\tAND phon.typ = 1\n> \t\tAND phon.status > 0\n> \t\t) OR (\n> \t\tveraid NOT IN (2, 3, 22, 24, 36)\n> \t\t)\n> \t)\n> ) AS qur_filter_2 USING (sid) INNER JOIN (\n> \tSELECT DISTINCT sid\n> \tFROM ausb\n> \tINNER JOIN uni USING (uniid)\n> \tWHERE uni.uniort IN ('Augsburg')\n> \tAND ausb.overview = 1\n> \tAND ausb.zweitstudium != 2\n> \tAND ausb.status > 0\n> ) AS qur_filter_3 USING (sid) INNER JOIN (\n> \tSELECT DISTINCT sid\n> \tFROM ausb\n> \tWHERE uniid IN (9, 10)\n> \tAND ausb.overview = 1\n> \tAND ausb.zweitstudium != 2\n> \tAND ausb.status > 0\n> ) AS qur_filter_4 USING (sid)\n>\n>\n>\n> These are the query-plans for both queries, first the problematic one:\n>\n>\n>\n> Aggregate (cost=78785.78..78785.79 rows=1 width=4) (actual\n> time=698777.890..698777.891 rows=1 loops=1)\n>\n> -> Nested Loop (cost=65462.58..78785.78 rows=1 width=4) (actual\n> time=6743.856..698776.957 rows=250 loops=1)\n>\n> Join Filter: (\"outer\".sid = \"inner\".sid)\n>\n> -> Merge Join (cost=11031.79..11883.12 rows=1 width=12) \n> (actual\n> time=387.837..433.612 rows=494 loops=1)\n>\n> Merge Cond: (\"outer\".sid = \"inner\".sid)\n>\n> -> Nested Loop (cost=5643.11..6490.17 rows=19 width=8)\n> (actual time=114.323..154.043 rows=494 loops=1)\n>\n> -> Unique (cost=5643.11..5645.35 rows=180 \n> width=4)\n> (actual time=114.202..116.002 rows=511 loops=1)\n>\n> -> Sort (cost=5643.11..5644.23 rows=448 \n> width=4)\n> (actual time=114.199..114.717 rows=511 loops=1)\n>\n> Sort Key: public.ausb.sid\n>\n> -> Seq Scan on ausb \n> (cost=0.00..5623.38\n> rows=448 width=4) (actual time=0.351..112.459 rows=511 loops=1)\n>\n> Filter: (((uniid = 9) OR \n> (uniid = 10))\n> AND (overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n>\n> -> Index Scan using stud_pkey on stud \n> (cost=0.00..4.67\n> rows=1 width=4) (actual time=0.062..0.067 rows=1 loops=511)\n>\n> Index Cond: (stud.sid = \"outer\".sid)\n>\n> Filter: ((status > 0) AND (length \n> ((vname)::text) >\n> 1) AND (length((nname)::text) > 1))\n>\n> -> Materialize (cost=5388.68..5392.05 rows=337 \n> width=4)\n> (actual time=273.506..276.785 rows=511 loops=1)\n>\n> -> Unique (cost=5383.29..5384.98 rows=337 \n> width=4)\n> (actual time=273.501..275.421 rows=511 loops=1)\n>\n> -> Sort (cost=5383.29..5384.13 rows=337 \n> width=4)\n> (actual time=273.499..274.091 rows=511 loops=1)\n>\n> Sort Key: public.ausb.sid\n>\n> -> Hash Join (cost=17.61..5369.14 \n> rows=337\n> width=4) (actual time=1.139..272.465 rows=511 loops=1)\n>\n> Hash Cond: (\"outer\".uniid =\n> \"inner\".uniid)\n>\n> -> Seq Scan on ausb\n> (cost=0.00..4827.30 rows=104174 width=8) (actual time=0.026..200.111\n> rows=103593 loops=1)\n>\n> Filter: ((overview = 1) \n> AND\n> (zweitstudium <> 2) AND (status > 0))\n>\n> -> Hash (cost=17.60..17.60 \n> rows=2\n> width=4) (actual time=0.435..0.435 rows=2 loops=1)\n>\n> -> Seq Scan on uni\n> 
(cost=0.00..17.60 rows=2 width=4) (actual time=0.412..0.424 rows=2 \n> loops=1)\n>\n> Filter: \n> ((uniort)::text =\n> 'Augsburg'::text)\n>\n> -> Unique (cost=54430.79..66664.18 rows=10599 width=4) \n> (actual\n> time=6.851..1374.135 rows=40230 loops=494)\n>\n> -> Merge Join (cost=54430.79..66319.65 rows=137811 \n> width=4)\n> (actual time=6.849..1282.333 rows=40233 loops=494)\n>\n> Merge Cond: (\"outer\".sid = \"inner\".sid)\n>\n> Join Filter: ((((\"outer\".veraid = 2) OR \n> (\"outer\".veraid\n> = 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = 24) OR \n> (\"outer\".veraid =\n> 36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = 1) AND \n> (\"inner\".status >\n> 0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid <> 3) AND \n> (\"outer\".veraid\n> <> 22) AND (\"outer\".veraid <> 24) AND (\"outer\".veraid <> 36)))\n>\n> -> Sort (cost=11962.11..12098.59 rows=54593 \n> width=8)\n> (actual time=0.547..46.482 rows=53354 loops=494)\n>\n> Sort Key: stud_vera.sid\n>\n> -> Bitmap Heap Scan on stud_vera\n> (cost=2239.14..7666.61 rows=54593 width=8) (actual \n> time=43.096..165.300\n> rows=53354 loops=1)\n>\n> Recheck Cond: (veraid = 22)\n>\n> Filter: (status > 0)\n>\n> -> Bitmap Index Scan on\n> stud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) \n> (actual\n> time=41.242..41.242 rows=61855 loops=1)\n>\n> Index Cond: (veraid = 22)\n>\n> -> Sort (cost=42468.68..43407.53 rows=375539 \n> width=12)\n> (actual time=6.297..533.711 rows=375539 loops=494)\n>\n> Sort Key: phon.sid\n>\n> -> Seq Scan on phon (cost=0.00..7696.39\n> rows=375539 width=12) (actual time=0.048..544.999 rows=375539 loops=1)\n>\n>\n>\n>\n> !!!! The query-plan for the normally running query: !!!!!\n>\n>\n>\n>\n>\n> Aggregate (cost=77846.97..77846.98 rows=1 width=4) (actual\n> time=5488.913..5488.913 rows=1 loops=1)\n>\n> -> Nested Loop (cost=65471.70..77846.97 rows=1 width=4) (actual\n> time=3913.839..5488.513 rows=208 loops=1)\n>\n> Join Filter: (\"outer\".sid = \"inner\".sid)\n>\n> -> Merge Join (cost=60088.41..72454.41 rows=1 width=12) \n> (actual\n> time=3598.105..4841.242 rows=208 loops=1)\n>\n> Merge Cond: (\"outer\".sid = \"inner\".sid)\n>\n> -> Unique (cost=54430.79..66664.18 rows=10599 width=4)\n> (actual time=3479.029..4697.051 rows=40129 loops=1)\n>\n> -> Merge Join (cost=54430.79..66319.65 \n> rows=137811\n> width=4) (actual time=3479.027..4616.245 rows=40132 loops=1)\n>\n> Merge Cond: (\"outer\".sid = \"inner\".sid)\n>\n> Join Filter: ((((\"outer\".veraid = 2) OR\n> (\"outer\".veraid = 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = \n> 24) OR\n> (\"outer\".veraid = 36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = \n> 1) AND\n> (\"inner\".status > 0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid \n> <> 3)\n> AND (\"outer\".veraid <> 22) AND (\"outer\".veraid <> 24) AND \n> (\"outer\".veraid <>\n> 36)))\n>\n> -> Sort (cost=11962.11..12098.59 \n> rows=54593\n> width=8) (actual time=274.248..315.052 rows=53252 loops=1)\n>\n> Sort Key: stud_vera.sid\n>\n> -> Bitmap Heap Scan on stud_vera\n> (cost=2239.14..7666.61 rows=54593 width=8) (actual \n> time=46.669..167.599\n> rows=53352 loops=1)\n>\n> Recheck Cond: (veraid = 22)\n>\n> Filter: (status > 0)\n>\n> -> Bitmap Index Scan on\n> stud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) \n> (actual\n> time=44.618..44.618 rows=61857 loops=1)\n>\n> Index Cond: (veraid = 22)\n>\n> -> Sort (cost=42468.68..43407.53 \n> rows=375539\n> width=12) (actual time=3204.729..3681.598 rows=375090 loops=1)\n>\n> Sort Key: phon.sid\n>\n> -> Seq 
Scan on phon \n> (cost=0.00..7696.39\n> rows=375539 width=12) (actual time=0.052..628.838 rows=375539 loops=1)\n>\n> -> Materialize (cost=5657.62..5657.71 rows=9 width=8)\n> (actual time=91.290..105.557 rows=406 loops=1)\n>\n> -> Nested Loop (cost=5234.08..5657.61 rows=9 \n> width=8)\n> (actual time=91.282..104.571 rows=406 loops=1)\n>\n> -> Unique (cost=5234.08..5235.20 rows=90\n> width=4) (actual time=91.156..92.232 rows=420 loops=1)\n>\n> -> Sort (cost=5234.08..5234.64 \n> rows=224\n> width=4) (actual time=91.154..91.484 rows=420 loops=1)\n>\n> Sort Key: public.ausb.sid\n>\n> -> Seq Scan on ausb\n> (cost=0.00..5225.34 rows=224 width=4) (actual time=0.266..90.242 \n> rows=420\n> loops=1)\n>\n> Filter: ((uniid = 10) AND\n> (overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n>\n> -> Index Scan using stud_pkey on stud\n> (cost=0.00..4.67 rows=1 width=4) (actual time=0.024..0.026 rows=1 \n> loops=420)\n>\n> Index Cond: (stud.sid = \"outer\".sid)\n>\n> Filter: ((status > 0) AND\n> (length((vname)::text) > 1) AND (length((nname)::text) > 1))\n>\n> -> Unique (cost=5383.29..5384.98 rows=337 width=4) (actual\n> time=1.520..2.686 rows=511 loops=208)\n>\n> -> Sort (cost=5383.29..5384.13 rows=337 width=4) \n> (actual\n> time=1.519..1.871 rows=511 loops=208)\n>\n> Sort Key: public.ausb.sid\n>\n> -> Hash Join (cost=17.61..5369.14 rows=337 \n> width=4)\n> (actual time=1.133..314.584 rows=511 loops=1)\n>\n> Hash Cond: (\"outer\".uniid = \"inner\".uniid)\n>\n> -> Seq Scan on ausb (cost=0.00..4827.30\n> rows=104174 width=8) (actual time=0.030..226.532 rows=103593 loops=1)\n>\n> Filter: ((overview = 1) AND \n> (zweitstudium <>\n> 2) AND (status > 0))\n>\n> -> Hash (cost=17.60..17.60 rows=2 width=4)\n> (actual time=0.392..0.392 rows=2 loops=1)\n>\n> -> Seq Scan on uni (cost=0.00..17.60\n> rows=2 width=4) (actual time=0.369..0.381 rows=2 loops=1)\n>\n> Filter: ((uniort)::text =\n> 'Augsburg'::text)\n>\n>\n>\n> The estimated row numbers are not bad as long as one table is \n> affected.\n> They're much worse as soon as two or more tables are joined. Though \n> the\n> query plans are slightly different, the number of merged rows at \n> different\n> stages seems to be rather the same for both plans. The big \n> difference in my\n> eyes seems the cost for the first nested loop. This seems to be the \n> point,\n> where the long running query consumes most time. I've then set\n> enable_nestloop to off, and actually the problem disappears.\n>\n>\n> Other maybe relevant parameters:\n> default_statistics_target = 100\n> work_mem = 4096\n> max_fsm_pages = 100000\n>\n> My questions:\n>\n> What could be the problem behind high amount of actually used time \n> for the\n> nested loop in the first query?\n>\n> If we decided to constantly turn off nested loops, what side \n> effects would\n> we have to expect?\n>\n> Are there more granular ways to tell the query planner when to use \n> nested\n> loops?\n>\n> Or just other ideas what to do? 
We'd be grateful for any hint!\n\nHere's what's killing you:\n\n -> Nested Loop (cost=65462.58..78785.78 rows=1 width=4) (actual\ntime=6743.856..698776.957 rows=250 loops=1)\n\n Join Filter: (\"outer\".sid = \"inner\".sid)\n\n -> Merge Join (cost=11031.79..11883.12 rows=1 width=12) \n(actual\ntime=387.837..433.612 rows=494 loops=1)\n\nThat merge thinks it's olny going to see 1 row, but it ends up with \n494, which results in:\n\n -> Unique (cost=54430.79..66664.18 rows=10599 width=4) \n(actual\ntime=6.851..1374.135 rows=40230 loops=494)\n\nThe miss-estimation is actually coming from lower in the query... I \nsee there's one place where it expects 180 rows and gets 511, which \nis part of the problem. Try increasing the stats on ausb.sid.\n\nOh, and please don't line-wrap explain output.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)\n\n\n", "msg_date": "Sat, 1 Sep 2007 18:50:41 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance issue with nested loop" } ]
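A hedged sketch of the two remedies raised in this thread, using the table and column names from the posted query; the statistics target of 500 is only an example value, to be tuned by testing.

    -- Give the planner a finer histogram for the mis-estimated join column,
    -- then re-analyze so the new statistics are actually collected.
    ALTER TABLE ausb ALTER COLUMN sid SET STATISTICS 500;
    ANALYZE ausb;

    -- Rather than disabling nested loops globally, the setting can be scoped to
    -- the one problem query with SET LOCAL inside a transaction.
    BEGIN;
    SET LOCAL enable_nestloop = off;
    -- run the reporting query here
    COMMIT;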
[ { "msg_contents": "*************************************************************************************************************************\n1) \n\nEXPLAIN ANALYSE SELECT \njob_category.job_id,job.name,job.state,job.build_id,cat.name as \nreporting_group\nFROM category,job_category,job,category as cat\nWHERE job.job_id=job_category.job_id\nAND job_category.category_id=category.category_id\nAND cat.build_id=category.build_id\nAND category.name = 'build_id.pap3260-20070828_01'\nAND cat.name like ('reporting_group.Tier2%');\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=0.00..291.53 rows=8 width=103) (actual \ntime=98.999..385.590 rows=100 loops=1)\n -> Nested Loop (cost=0.00..250.12 rows=9 width=34) (actual \ntime=98.854..381.106 rows=100 loops=1)\n -> Nested Loop (cost=0.00..123.22 rows=1 width=34) (actual \ntime=98.717..380.185 rows=1 loops=1)\n -> Index Scan using idx_cat_by_name on category cat \n(cost=0.00..5.97 rows=1 width=34) (actual time=95.834..245.276 rows=977 \nloops=1)\n Index Cond: (((name)::text >= 'reporting'::character \nvarying) AND ((name)::text < 'reportinh'::character varying))\n Filter: ((name)::text ~~ \n'reporting_group.Tier2%'::text)\n -> Index Scan using idx_cat_by_bld_id on category \n(cost=0.00..117.24 rows=1 width=8) (actual time=0.134..0.134 rows=0 \nloops=977)\n Index Cond: (\"outer\".build_id = category.build_id)\n Filter: ((name)::text = \n'build_id.pap3260-20070828_01'::text)\n -> Index Scan using idx_jcat_by_cat_id on job_category \n(cost=0.00..126.00 rows=71 width=8) (actual time=0.126..0.569 rows=100 \nloops=1)\n Index Cond: (job_category.category_id = \n\"outer\".category_id)\n -> Index Scan using job_pkey on job (cost=0.00..4.59 rows=1 width=73) \n(actual time=0.033..0.036 rows=1 loops=100)\n Index Cond: (job.job_id = \"outer\".job_id)\n\n Total runtime: 385.882 ms\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n\n but , if I use AND cat.name = 'reporting_group.Tier2' ; \n\n*************************************************************************************************************************\n2)\n\nEXPLAIN ANALYSE SELECT \njob_category.job_id,job.name,job.state,job.build_id,cat.name as \nreporting_group\nFROM category,job_category,job,category as cat\nWHERE job.job_id=job_category.job_id\nAND job_category.category_id=category.category_id\nAND cat.build_id=category.build_id\nAND category.name = 'build_id.pap3260-20070828_01'\nAND cat.name = 'reporting_group.Tier2' ;\n QUERY PLAN \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop (cost=8186.96..26124.40 rows=796 width=103) (actual \ntime=40.584..48.966 rows=100 loops=1)\n -> Nested Loop (cost=8186.96..21776.35 rows=945 width=34) (actual \ntime=40.445..41.437 rows=100 loops=1)\n -> Merge Join (cost=8186.96..8198.88 rows=107 width=34) (actual \ntime=40.290..40.303 rows=1 loops=1)\n Merge Cond: (\"outer\".build_id = \"inner\".build_id)\n -> Sort (cost=4093.48..4096.19 rows=1085 width=8) (actual \ntime=0.206..0.211 rows=3 loops=1)\n Sort Key: category.build_id\n -> Index Scan using idx_cat_by_name on category \n(cost=0.00..4038.78 rows=1085 width=8) (actual time=0.130..0.183 rows=3 \nloops=1)\n Index Cond: ((name)::text = 
\n'build_id.pap3260-20070828_01'::text)\n -> Sort (cost=4093.48..4096.19 rows=1085 width=34) \n(actual time=37.424..38.591 rows=956 loops=1)\n Sort Key: cat.build_id\n -> Index Scan using idx_cat_by_name on category cat \n(cost=0.00..4038.78 rows=1085 width=34) (actual time=0.076..34.328 \nrows=962 loops=1)\n Index Cond: ((name)::text = \n'reporting_group.Tier2'::text)\n -> Index Scan using idx_jcat_by_cat_id on job_category \n(cost=0.00..126.00 rows=71 width=8) (actual time=0.139..0.743 rows=100 \nloops=1)\n Index Cond: (job_category.category_id = \n\"outer\".category_id)\n -> Index Scan using job_pkey on job (cost=0.00..4.59 rows=1 width=73) \n(actual time=0.063..0.066 rows=1 loops=100)\n Index Cond: (job.job_id = \"outer\".job_id)\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Total runtime: 49.453 ms\n\nHow to increase the performance of the first query ? \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThank you !\nRegards, \nKarthi \n------------------------------------------------------------------- \nKarthikeyan Mahadevan\nJava Technology Center\nIBM Software Labs ,Bangalore, India.\nPhone: +91 80 2504 4000 or 2509 4000 Ext: 2413 \nDirect : +91 80 25094413 \nEmail : [email protected] \n\"Doesn't expecting the unexpected make the unexpected become the expected? \n\" \n---------------------------------------------------------------------------- \n\n*************************************************************************************************************************\n1)  \n\nEXPLAIN ANALYSE SELECT job_category.job_id,job.name,job.state,job.build_id,cat.name\nas reporting_group\nFROM category,job_category,job,category\nas cat\nWHERE job.job_id=job_category.job_id\nAND job_category.category_id=category.category_id\nAND cat.build_id=category.build_id\nAND category.name = 'build_id.pap3260-20070828_01'\nAND cat.name like ('reporting_group.Tier2%');\n           \n                     \n                     \n              QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=0.00..291.53\nrows=8 width=103) (actual time=98.999..385.590 rows=100 loops=1)\n   ->  Nested Loop\n (cost=0.00..250.12 rows=9 width=34) (actual time=98.854..381.106\nrows=100 loops=1)\n         ->\n Nested Loop  (cost=0.00..123.22 rows=1 width=34) (actual time=98.717..380.185\nrows=1 loops=1)\n           \n   ->  Index Scan using idx_cat_by_name on category cat\n (cost=0.00..5.97 rows=1 width=34) (actual time=95.834..245.276 rows=977\nloops=1)\n           \n         Index Cond: (((name)::text >= 'reporting'::character\nvarying) AND ((name)::text < 'reportinh'::character varying))\n           \n         Filter: ((name)::text ~~ 'reporting_group.Tier2%'::text)\n           \n   ->  Index Scan using idx_cat_by_bld_id on category\n (cost=0.00..117.24 rows=1 width=8) (actual time=0.134..0.134 rows=0\nloops=977)\n           \n         Index Cond: (\"outer\".build_id\n= category.build_id)\n           \n         Filter: ((name)::text = 'build_id.pap3260-20070828_01'::text)\n         ->\n Index Scan using idx_jcat_by_cat_id on job_category  (cost=0.00..126.00\nrows=71 width=8) (actual time=0.126..0.569 rows=100 loops=1)\n           \n   Index Cond: (job_category.category_id = \"outer\".category_id)\n   ->  Index Scan\nusing job_pkey on job  (cost=0.00..4.59 rows=1 width=73) (actual 
time=0.033..0.036\nrows=1 loops=100)\n         Index\nCond: (job.job_id = \"outer\".job_id)\n\n Total runtime: 385.882 ms\n------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n\n but , if   I use  AND\ncat.name = 'reporting_group.Tier2' ;       \n\n*************************************************************************************************************************\n2)\n\nEXPLAIN ANALYSE SELECT job_category.job_id,job.name,job.state,job.build_id,cat.name\nas reporting_group\nFROM category,job_category,job,category\nas cat\nWHERE job.job_id=job_category.job_id\nAND job_category.category_id=category.category_id\nAND cat.build_id=category.build_id\nAND category.name = 'build_id.pap3260-20070828_01'\nAND cat.name = 'reporting_group.Tier2'\n;\n           \n                     \n                     \n                   QUERY\nPLAN                    \n                     \n                     \n       \n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Nested Loop  (cost=8186.96..26124.40\nrows=796 width=103) (actual time=40.584..48.966 rows=100 loops=1)\n   ->  Nested Loop\n (cost=8186.96..21776.35 rows=945 width=34) (actual time=40.445..41.437\nrows=100 loops=1)\n         ->\n Merge Join  (cost=8186.96..8198.88 rows=107 width=34) (actual\ntime=40.290..40.303 rows=1 loops=1)\n           \n   Merge Cond: (\"outer\".build_id = \"inner\".build_id)\n           \n   ->  Sort  (cost=4093.48..4096.19 rows=1085 width=8)\n(actual time=0.206..0.211 rows=3 loops=1)\n           \n         Sort Key: category.build_id\n           \n         ->  Index Scan using idx_cat_by_name\non category  (cost=0.00..4038.78 rows=1085 width=8) (actual time=0.130..0.183\nrows=3 loops=1)\n           \n               Index Cond: ((name)::text\n= 'build_id.pap3260-20070828_01'::text)\n           \n   ->  Sort  (cost=4093.48..4096.19 rows=1085 width=34)\n(actual time=37.424..38.591 rows=956 loops=1)\n           \n         Sort Key: cat.build_id\n           \n         ->  Index Scan using idx_cat_by_name\non category cat  (cost=0.00..4038.78 rows=1085 width=34) (actual time=0.076..34.328\nrows=962 loops=1)\n           \n               Index Cond: ((name)::text\n= 'reporting_group.Tier2'::text)\n         ->\n Index Scan using idx_jcat_by_cat_id on job_category  (cost=0.00..126.00\nrows=71 width=8) (actual time=0.139..0.743 rows=100 loops=1)\n           \n   Index Cond: (job_category.category_id = \"outer\".category_id)\n   ->  Index Scan\nusing job_pkey on job  (cost=0.00..4.59 rows=1 width=73) (actual time=0.063..0.066\nrows=1 loops=100)\n         Index\nCond: (job.job_id = \"outer\".job_id)\n----------------------------------------------------------------------------------------------------------------------------------------------------------------\n Total runtime: 49.453 ms\n\nHow to increase the performance of the\n first query ? 
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThank you !\nRegards, \nKarthi \n------------------------------------------------------------------- \nKarthikeyan Mahadevan\nJava Technology Center\nIBM Software Labs ,Bangalore, India.\nPhone: +91 80 2504 4000 or 2509 4000 Ext: 2413 \nDirect : +91 80 25094413 \nEmail : [email protected] \n\"Doesn't expecting the unexpected make the unexpected become the expected?\n\" \n----------------------------------------------------------------------------", "msg_date": "Wed, 29 Aug 2007 18:01:43 +0530", "msg_from": "Karthikeyan Mahadevan <[email protected]>", "msg_from_op": true, "msg_subject": "LIKE query verses = " }, { "msg_contents": "On Wed, 2007-08-29 at 18:01 +0530, Karthikeyan Mahadevan wrote:\n> \n> ************************************************************************************************************************* \n> 1) \n> \n> EXPLAIN ANALYSE SELECT\n> job_category.job_id,job.name,job.state,job.build_id,cat.name as\n> reporting_group \n> FROM category,job_category,job,category as cat \n> WHERE job.job_id=job_category.job_id \n> AND job_category.category_id=category.category_id \n> AND cat.build_id=category.build_id \n> AND category.name = 'build_id.pap3260-20070828_01' \n> AND cat.name like ('reporting_group.Tier2%'); \n> \n> QUERY PLAN \n> ------------------------------------------------------------------------------------------------------------------------------------------------------ \n> Nested Loop (cost=0.00..291.53 rows=8 width=103) (actual\n> time=98.999..385.590 rows=100 loops=1) \n> -> Nested Loop (cost=0.00..250.12 rows=9 width=34) (actual\n> time=98.854..381.106 rows=100 loops=1) \n> -> Nested Loop (cost=0.00..123.22 rows=1 width=34) (actual\n> time=98.717..380.185 rows=1 loops=1) \n> -> Index Scan using idx_cat_by_name on category cat\n> (cost=0.00..5.97 rows=1 width=34) (actual time=95.834..245.276\n> rows=977 loops=1) \n> Index Cond: (((name)::text >=\n> 'reporting'::character varying) AND ((name)::text <\n> 'reportinh'::character varying)) \n> Filter: ((name)::text ~~\n> 'reporting_group.Tier2%'::text) \n> -> Index Scan using idx_cat_by_bld_id on category\n> (cost=0.00..117.24 rows=1 width=8) (actual time=0.134..0.134 rows=0\n> loops=977) \n> Index Cond: (\"outer\".build_id =\n> category.build_id) \n> Filter: ((name)::text =\n> 'build_id.pap3260-20070828_01'::text) \n> -> Index Scan using idx_jcat_by_cat_id on job_category\n> (cost=0.00..126.00 rows=71 width=8) (actual time=0.126..0.569\n> rows=100 loops=1) \n> Index Cond: (job_category.category_id =\n> \"outer\".category_id) \n> -> Index Scan using job_pkey on job (cost=0.00..4.59 rows=1\n> width=73) (actual time=0.033..0.036 rows=1 loops=100) \n> Index Cond: (job.job_id = \"outer\".job_id) \n> \n> Total runtime: 385.882 ms \n> ------------------------------------------------------------------------------------------------------------------------------------------------------ \n\nRemember that using LIKE causes PG to interpret an underscore as 'any\ncharacter', which means that it can only scan the index for all records\nthat start with 'reporting', and then it needs to apply a filter to each\nmatch. 
This is going to be slower than just going directly to the\nmatching index entry.\n\nWhat you probably want to do is tell PG that you're looking for a\nliteral underscore and not for any matching character by escaping the\nunderscore, that will allow it to do a much quicker index scan.\nSomething like:\n\ncat.name like 'reporting|_group.Tier2%' ESCAPE '|'\n\n-- Mark Lewis\n", "msg_date": "Wed, 29 Aug 2007 10:12:31 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query verses =" }, { "msg_contents": "Mark Lewis <[email protected]> writes:\n> What you probably want to do is tell PG that you're looking for a\n> literal underscore and not for any matching character by escaping the\n> underscore, that will allow it to do a much quicker index scan.\n\nThe other half of the problem is that the planner is drastically\nmisestimating the number of matching rows --- it thinks only one\nwhen there are really about a thousand, and this leads it to use\na nestloop that will be very inefficient with so many rows.\nTry increasing the statistics target for that column. Also, if\nthis is a pre-8.2 PG release, consider upgrading; I believe we\nimproved the LIKE estimator in 8.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 29 Aug 2007 13:51:41 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: LIKE query verses = " } ]
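A hedged recap of the two suggestions above in executable form; the escape character follows Mark's example, and the statistics target of 200 is an arbitrary illustration.

    -- Treat the underscore literally so the index range scan can use the full
    -- prefix 'reporting_group.Tier2' instead of stopping at 'reporting'.
    SELECT category_id, name
    FROM category
    WHERE name LIKE 'reporting|_group.Tier2%' ESCAPE '|';

    -- Improve the row estimate for the name column, per the LIKE-estimator advice
    -- (and consider 8.2+, which has the improved estimator).
    ALTER TABLE category ALTER COLUMN name SET STATISTICS 200;
    ANALYZE category;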
[ { "msg_contents": "Hi all,\n\nI'm having a strange performance issue with two almost similar queries, the\none running as expected, the other one taking far more time. The only\ndifference is that I have \"uniid in (10)\" in the normally running query and\n\"uniid in (9,10)\" in the other one. The number of rows resulting from the\nrespective table differs not very much being 406 for the first and 511 for\nthe second query.\n\nThis is the full query - the \"uniid in (9,10)\" is in the last subquery:\n\n\nSELECT 'Anzahl' AS column1, count(DISTINCT sid) AS column2\nFROM (\n\tSELECT sid\n\tFROM stud\n\tWHERE stud.status > 0\n\tAND length(stud.vname) > 1\n\tAND length(stud.nname) > 1\n) AS qur_filter_1 INNER JOIN (\n\tSELECT DISTINCT sid\n\tFROM stud_vera\n\tINNER JOIN phon USING (sid)\n\tWHERE veraid = 22\n\tAND stud_vera.status > 0\n\tAND (\n\t\t(\n\t\tveraid IN (2, 3, 22, 24, 36)\n\t\tAND phontyp = 5\n\t\tAND phon.typ = 1\n\t\tAND phon.status > 0\n\t\t) OR (\n\t\tveraid NOT IN (2, 3, 22, 24, 36)\n\t\t)\n\t)\n) AS qur_filter_2 USING (sid) INNER JOIN (\n\tSELECT DISTINCT sid \n\tFROM ausb\n\tINNER JOIN uni USING (uniid)\n\tWHERE uni.uniort IN ('Augsburg')\n\tAND ausb.overview = 1\n\tAND ausb.zweitstudium != 2\n\tAND ausb.status > 0\n) AS qur_filter_3 USING (sid) INNER JOIN (\n\tSELECT DISTINCT sid \n\tFROM ausb\n\tWHERE uniid IN (9, 10)\n\tAND ausb.overview = 1\n\tAND ausb.zweitstudium != 2\n\tAND ausb.status > 0\n) AS qur_filter_4 USING (sid)\n\n\n\nThese are the query-plans for both queries, first the problematic one:\n\n\n\nAggregate (cost=78785.78..78785.79 rows=1 width=4) (actual\ntime=698777.890..698777.891 rows=1 loops=1)\n\n -> Nested Loop (cost=65462.58..78785.78 rows=1 width=4) (actual\ntime=6743.856..698776.957 rows=250 loops=1)\n\n Join Filter: (\"outer\".sid = \"inner\".sid)\n\n -> Merge Join (cost=11031.79..11883.12 rows=1 width=12) (actual\ntime=387.837..433.612 rows=494 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n -> Nested Loop (cost=5643.11..6490.17 rows=19 width=8)\n(actual time=114.323..154.043 rows=494 loops=1)\n\n -> Unique (cost=5643.11..5645.35 rows=180 width=4)\n(actual time=114.202..116.002 rows=511 loops=1)\n\n -> Sort (cost=5643.11..5644.23 rows=448 width=4)\n(actual time=114.199..114.717 rows=511 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Seq Scan on ausb (cost=0.00..5623.38\nrows=448 width=4) (actual time=0.351..112.459 rows=511 loops=1)\n\n Filter: (((uniid = 9) OR (uniid = 10))\nAND (overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n\n -> Index Scan using stud_pkey on stud (cost=0.00..4.67\nrows=1 width=4) (actual time=0.062..0.067 rows=1 loops=511)\n\n Index Cond: (stud.sid = \"outer\".sid)\n\n Filter: ((status > 0) AND (length((vname)::text) >\n1) AND (length((nname)::text) > 1))\n\n -> Materialize (cost=5388.68..5392.05 rows=337 width=4)\n(actual time=273.506..276.785 rows=511 loops=1)\n\n -> Unique (cost=5383.29..5384.98 rows=337 width=4)\n(actual time=273.501..275.421 rows=511 loops=1)\n\n -> Sort (cost=5383.29..5384.13 rows=337 width=4)\n(actual time=273.499..274.091 rows=511 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Hash Join (cost=17.61..5369.14 rows=337\nwidth=4) (actual time=1.139..272.465 rows=511 loops=1)\n\n Hash Cond: (\"outer\".uniid =\n\"inner\".uniid)\n\n -> Seq Scan on ausb\n(cost=0.00..4827.30 rows=104174 width=8) (actual time=0.026..200.111\nrows=103593 loops=1)\n\n Filter: ((overview = 1) AND\n(zweitstudium <> 2) AND (status > 0))\n\n -> Hash (cost=17.60..17.60 rows=2\nwidth=4) (actual time=0.435..0.435 
rows=2 loops=1)\n\n -> Seq Scan on uni\n(cost=0.00..17.60 rows=2 width=4) (actual time=0.412..0.424 rows=2 loops=1)\n\n Filter: ((uniort)::text =\n'Augsburg'::text)\n\n -> Unique (cost=54430.79..66664.18 rows=10599 width=4) (actual\ntime=6.851..1374.135 rows=40230 loops=494)\n\n -> Merge Join (cost=54430.79..66319.65 rows=137811 width=4)\n(actual time=6.849..1282.333 rows=40233 loops=494)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n Join Filter: ((((\"outer\".veraid = 2) OR (\"outer\".veraid\n= 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = 24) OR (\"outer\".veraid =\n36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = 1) AND (\"inner\".status >\n0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid <> 3) AND (\"outer\".veraid\n<> 22) AND (\"outer\".veraid <> 24) AND (\"outer\".veraid <> 36)))\n\n -> Sort (cost=11962.11..12098.59 rows=54593 width=8)\n(actual time=0.547..46.482 rows=53354 loops=494)\n\n Sort Key: stud_vera.sid\n\n -> Bitmap Heap Scan on stud_vera\n(cost=2239.14..7666.61 rows=54593 width=8) (actual time=43.096..165.300\nrows=53354 loops=1)\n\n Recheck Cond: (veraid = 22)\n\n Filter: (status > 0)\n\n -> Bitmap Index Scan on\nstud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) (actual\ntime=41.242..41.242 rows=61855 loops=1)\n\n Index Cond: (veraid = 22)\n\n -> Sort (cost=42468.68..43407.53 rows=375539 width=12)\n(actual time=6.297..533.711 rows=375539 loops=494)\n\n Sort Key: phon.sid\n\n -> Seq Scan on phon (cost=0.00..7696.39\nrows=375539 width=12) (actual time=0.048..544.999 rows=375539 loops=1)\n\n\n\n\n!!!! The query-plan for the normally running query: !!!!!\n\n\n\n\n\nAggregate (cost=77846.97..77846.98 rows=1 width=4) (actual\ntime=5488.913..5488.913 rows=1 loops=1)\n\n -> Nested Loop (cost=65471.70..77846.97 rows=1 width=4) (actual\ntime=3913.839..5488.513 rows=208 loops=1)\n\n Join Filter: (\"outer\".sid = \"inner\".sid)\n\n -> Merge Join (cost=60088.41..72454.41 rows=1 width=12) (actual\ntime=3598.105..4841.242 rows=208 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n -> Unique (cost=54430.79..66664.18 rows=10599 width=4)\n(actual time=3479.029..4697.051 rows=40129 loops=1)\n\n -> Merge Join (cost=54430.79..66319.65 rows=137811\nwidth=4) (actual time=3479.027..4616.245 rows=40132 loops=1)\n\n Merge Cond: (\"outer\".sid = \"inner\".sid)\n\n Join Filter: ((((\"outer\".veraid = 2) OR\n(\"outer\".veraid = 3) OR (\"outer\".veraid = 22) OR (\"outer\".veraid = 24) OR\n(\"outer\".veraid = 36)) AND (\"inner\".phontyp = 5) AND (\"inner\".typ = 1) AND\n(\"inner\".status > 0)) OR ((\"outer\".veraid <> 2) AND (\"outer\".veraid <> 3)\nAND (\"outer\".veraid <> 22) AND (\"outer\".veraid <> 24) AND (\"outer\".veraid <>\n36)))\n\n -> Sort (cost=11962.11..12098.59 rows=54593\nwidth=8) (actual time=274.248..315.052 rows=53252 loops=1)\n\n Sort Key: stud_vera.sid\n\n -> Bitmap Heap Scan on stud_vera\n(cost=2239.14..7666.61 rows=54593 width=8) (actual time=46.669..167.599\nrows=53352 loops=1)\n\n Recheck Cond: (veraid = 22)\n\n Filter: (status > 0)\n\n -> Bitmap Index Scan on\nstud_vera_sid_veraid_idx (cost=0.00..2239.14 rows=58765 width=0) (actual\ntime=44.618..44.618 rows=61857 loops=1)\n\n Index Cond: (veraid = 22)\n\n -> Sort (cost=42468.68..43407.53 rows=375539\nwidth=12) (actual time=3204.729..3681.598 rows=375090 loops=1)\n\n Sort Key: phon.sid\n\n -> Seq Scan on phon (cost=0.00..7696.39\nrows=375539 width=12) (actual time=0.052..628.838 rows=375539 loops=1)\n\n -> Materialize (cost=5657.62..5657.71 rows=9 width=8)\n(actual 
time=91.290..105.557 rows=406 loops=1)\n\n -> Nested Loop (cost=5234.08..5657.61 rows=9 width=8)\n(actual time=91.282..104.571 rows=406 loops=1)\n\n -> Unique (cost=5234.08..5235.20 rows=90\nwidth=4) (actual time=91.156..92.232 rows=420 loops=1)\n\n -> Sort (cost=5234.08..5234.64 rows=224\nwidth=4) (actual time=91.154..91.484 rows=420 loops=1)\n\n Sort Key: public.ausb.sid\n\n -> Seq Scan on ausb\n(cost=0.00..5225.34 rows=224 width=4) (actual time=0.266..90.242 rows=420\nloops=1)\n\n Filter: ((uniid = 10) AND\n(overview = 1) AND (zweitstudium <> 2) AND (status > 0))\n\n -> Index Scan using stud_pkey on stud\n(cost=0.00..4.67 rows=1 width=4) (actual time=0.024..0.026 rows=1 loops=420)\n\n Index Cond: (stud.sid = \"outer\".sid)\n\n Filter: ((status > 0) AND\n(length((vname)::text) > 1) AND (length((nname)::text) > 1))\n\n -> Unique (cost=5383.29..5384.98 rows=337 width=4) (actual\ntime=1.520..2.686 rows=511 loops=208)\n\n -> Sort (cost=5383.29..5384.13 rows=337 width=4) (actual\ntime=1.519..1.871 rows=511 loops=208)\n\n Sort Key: public.ausb.sid\n\n -> Hash Join (cost=17.61..5369.14 rows=337 width=4)\n(actual time=1.133..314.584 rows=511 loops=1)\n\n Hash Cond: (\"outer\".uniid = \"inner\".uniid)\n\n -> Seq Scan on ausb (cost=0.00..4827.30\nrows=104174 width=8) (actual time=0.030..226.532 rows=103593 loops=1)\n\n Filter: ((overview = 1) AND (zweitstudium <>\n2) AND (status > 0))\n\n -> Hash (cost=17.60..17.60 rows=2 width=4)\n(actual time=0.392..0.392 rows=2 loops=1)\n\n -> Seq Scan on uni (cost=0.00..17.60\nrows=2 width=4) (actual time=0.369..0.381 rows=2 loops=1)\n\n Filter: ((uniort)::text =\n'Augsburg'::text)\n\n\n\nThe estimated row numbers are not bad as long as one table is affected.\nThey're much worse as soon as two or more tables are joined. Though the\nquery plans are slightly different, the number of merged rows at different\nstages seems to be rather the same for both plans. The big difference in my\neyes seems the cost for the first nested loop. This seems to be the point,\nwhere the long running query consumes most time. I've then set\nenable_nestloop to off, and actually the problem disappears. \n\n\nOther maybe relevant parameters:\ndefault_statistics_target = 100\nwork_mem = 4096\nmax_fsm_pages = 100000\n\nMy questions:\n\nWhat could be the problem behind high amount of actually used time for the\nnested loop in the first query?\n\nIf we decided to constantly turn off nested loops, what side effects would\nwe have to expect?\n\nAre there more granular ways to tell the query planner when to use nested\nloops?\n\nOr just other ideas what to do? We'd be grateful for any hint!\n\n\nMany thanks\nJens\n\n\n-- \nJens Reufsteck\nHobsons GmbH\nWildunger Straße 6\n60487 Frankfurt am Main\nDeutschland\n\nTel: +49 (69) 255 37-140\nFax: +49 (69) 255 37-2140\n\nhttp://www.hobsons.de\nhttp://www.hobsons.ch\n\nGeschäftsführung:\nChristopher Letcher, Judith Oppitz, Adam Webster\nAmtsgericht Frankfurt HRB 58610\n\n", "msg_date": "Wed, 29 Aug 2007 16:33:14 +0200", "msg_from": "\"Jens Reufsteck\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance issue with nested loop" } ]
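
The thread's closing questions have a more granular answer than flipping enable_nestloop off cluster-wide: planner settings can be scoped to a single transaction, and the misestimated column can be given a larger statistics target. A minimal sketch, assuming the table and column names taken from the thread's plans (ausb.uniid) and an arbitrary, untuned target of 200:

BEGIN;
SET LOCAL enable_nestloop = off;  -- scoped to this transaction only; reverts at COMMIT/ROLLBACK
-- run the problematic report query here
COMMIT;

ALTER TABLE ausb ALTER COLUMN uniid SET STATISTICS 200;  -- 200 is an illustrative value, not a measured one
ANALYZE ausb;

Scoping the setting this way avoids the side effects of disabling nested loops for every other query, which is usually the main objection to changing it globally.
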
[ { "msg_contents": "For best performance, the transaction log should be on a separate disk.\n\nDoes the writing of the log benefit from a battery backed controller as well? If not, what do people think about writing the transaction log to a flash card or the like?\n\n\nFor best performance, the transaction log should be on a separate disk.Does the writing of the log benefit from a battery backed controller as well?  If not, what do people think about writing the transaction log to a flash card or the like?", "msg_date": "Wed, 29 Aug 2007 12:22:03 -0700 (PDT)", "msg_from": "Markus Benne <[email protected]>", "msg_from_op": true, "msg_subject": "Transaction Log" } ]
[ { "msg_contents": "For best performance, the transaction log should be on a separate disk.\n\nDoes\nthe writing of the log benefit from a battery backed controller as\nwell? If not, what do people think about writing the transaction log\nto a flash card or the like?\n\nFor best performance, the transaction log should be on a separate disk.Does\nthe writing of the log benefit from a battery backed controller as\nwell?  If not, what do people think about writing the transaction log\nto a flash card or the like?", "msg_date": "Wed, 29 Aug 2007 12:24:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Transaction Log" }, { "msg_contents": "[email protected] wrote:\n> For best performance, the transaction log should be on a separate disk.\n>\n> Does the writing of the log benefit from a battery backed controller \n> as well? If not, what do people think about writing the transaction \n> log to a flash card or the like?\nHow popular are the battery backed RAM drives that exist today? I don't \nrecall seeing them spoken about in this mailing list. The local geek \nshop has these devices on sale. Are they still too expensive?\n\nFor those that don't know what I am talking about - they are PCI devices \nthat present themselves as a hard drive, but are filled with commodity \nRAM instead of a magnetic platter, and a battery that lasts a few weeks \nwithout external power.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\[email protected] wrote:\n\n\n\nFor best performance, the transaction log should be on a\nseparate disk.\n\nDoes\nthe writing of the log benefit from a battery backed controller as\nwell?  If not, what do people think about writing the transaction log\nto a flash card or the like?\n\n\nHow popular are the battery backed RAM drives that exist today? I don't\nrecall seeing them spoken about in this mailing list. The local geek\nshop has these devices on sale. Are they still too expensive?\n\nFor those that don't know what I am talking about - they are PCI\ndevices that present themselves as a hard drive, but are filled with\ncommodity RAM instead of a magnetic platter, and a battery that lasts a\nfew weeks without external power.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Wed, 29 Aug 2007 15:54:05 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction Log" }, { "msg_contents": "\nOn Aug 29, 2007, at 12:54 PM, Mark Mielke wrote:\n\n> [email protected] wrote:\n>> For best performance, the transaction log should be on a separate \n>> disk.\n>>\n>> Does the writing of the log benefit from a battery backed \n>> controller as well? If not, what do people think about writing \n>> the transaction log to a flash card or the like?\n> How popular are the battery backed RAM drives that exist today? I \n> don't recall seeing them spoken about in this mailing list. The \n> local geek shop has these devices on sale. 
Are they still too \n> expensive?\n>\n> For those that don't know what I am talking about - they are PCI \n> devices that present themselves as a hard drive, but are filled \n> with commodity RAM instead of a magnetic platter, and a battery \n> that lasts a few weeks without external power.\n\nIt think the general conclusion was \"When they come out with an ECC \nversion, we'll look at them.\"\n\nThere are higher end ones that do have ECC RAM (and backup drives and \nstuff) but they're spectacularly more expensive than the cheapo \nconsumer ones.\n\nCheers,\n Steve\n\n\n\n", "msg_date": "Wed, 29 Aug 2007 13:11:32 -0700", "msg_from": "Steve Atkins <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction Log" }, { "msg_contents": "On Wed, Aug 29, 2007 at 01:11:32PM -0700, Steve Atkins wrote:\n> It think the general conclusion was \"When they come out with an ECC \n> version, we'll look at them.\"\n\nFWIW, it shouldn't be impossible to implement ECC in software; they'd still\nbe orders of magnitude faster than normal disks.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Wed, 29 Aug 2007 22:17:58 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction Log" }, { "msg_contents": "In response to Mark Mielke <[email protected]>:\n\n> [email protected] wrote:\n> > For best performance, the transaction log should be on a separate disk.\n> >\n> > Does the writing of the log benefit from a battery backed controller \n> > as well? If not, what do people think about writing the transaction \n> > log to a flash card or the like?\n\nFlash cards write _very_ slowly.\n\n> How popular are the battery backed RAM drives that exist today? I don't \n> recall seeing them spoken about in this mailing list. The local geek \n> shop has these devices on sale. Are they still too expensive?\n\nI've seen them around and as best I can tell, they're pretty\ninexpensive. The main drawback is the storage, you'd be looking at\nthe price of the card, plus the price of however much RAM you wanted\non it.\n\nhttp://www.amazon.com/Gigabyte-GC-RAMDISK-i-RAM-Hard-Drive/dp/B000EPM9NC/ref=pd_bbs_sr_1/102-3968336-1618519?ie=UTF8&s=electronics&qid=1188418613&sr=8-1\nhttp://techreport.com/articles.x/9312/1\nUp to 4G, but you have to add the price of the RAM on to the price of\nthe card.\n\nIn the case of WAL logs, you could probably get away with a lot less\nspace than many other usages, so they might be very practical.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Wed, 29 Aug 2007 16:21:53 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction Log" }, { "msg_contents": "On Wednesday 29 August 2007, Steve Atkins <[email protected]> wrote:\n> There are higher end ones that do have ECC RAM (and backup drives and\n> stuff) but they're spectacularly more expensive than the cheapo\n> consumer ones.\n>\n\nYeah the good ones look more like http://ramsan.com/ .\n\n-- \n\"Pulling together is the aim of despotism and tyranny. Free men pull in \nall kinds of directions.\" -- Terry Pratchett\n\n", "msg_date": "Wed, 29 Aug 2007 13:36:48 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Transaction Log" } ]
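
Bill Moran's sizing point suggests measuring how much WAL the workload actually writes before buying a small dedicated device. A rough sketch, assuming pg_current_xlog_location() (present from 8.2 onward, as far as I recall):

SELECT pg_current_xlog_location();  -- note the value, e.g. 0/1A2B3C4D
-- let the workload run for a representative interval
SELECT pg_current_xlog_location();  -- note it again

The byte difference between the two locations, divided by the elapsed time, gives the sustained WAL write rate; later releases add a helper function to do the subtraction, while on 8.2 it is hex arithmetic by hand.
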
[ { "msg_contents": "Hi,\n\n\n We turned on autovacuums on 8.2 and we have a database which is read \nonly , it is basically a USPS database used only for address lookups \n(only SELECTS, no updates/deletes/inserts).\n\n This database has about 10gig data and yesterday autovacuum started \non this database and all of a sudden I see lot of archive logs generated \nduring this time, I guess it might have generated close to 3-4gig data \nduring this period.\n\n It was doing only vacuum not vacuum analyze. \n\n My question is why does it have to generate so many archive logs on \nstatic tables ?\n\n I am thinking these archive logs are mostly empty , the reason I am \nsaying that because I noticed that when I restore the db using PITR \nbackups for my reporting db these same logs are recovered in seconds \ncompared to the logs generated while vacuums are not running.\n\n Is this a BUG ? or am I missing something here ?\n\n\nVacuum Settings\n---------------------\nvacuum_cost_delay = 30\nvacuum_cost_limit = 150\ncheckpoint_segments = 64\ncheckpoint_timeout = 5min \ncheckpoint_warning = 30s\nautovacuum = on\nautovacuum_naptime = 120min\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.001\nautovacuum_analyze_scale_factor = 0.001\nautovacuum_freeze_max_age = 200000000\nautovacuum_vacuum_cost_delay = -1\nautovacuum_vacuum_cost_limit = -1\n\n\n\nThanks!\nPallav.\n", "msg_date": "Fri, 31 Aug 2007 11:30:47 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "8.2 Autovacuum BUG ? " }, { "msg_contents": "Pallav Kalva <[email protected]> writes:\n> We turned on autovacuums on 8.2 and we have a database which is read \n> only , it is basically a USPS database used only for address lookups \n> (only SELECTS, no updates/deletes/inserts).\n\n> This database has about 10gig data and yesterday autovacuum started \n> on this database and all of a sudden I see lot of archive logs generated \n> during this time, I guess it might have generated close to 3-4gig data \n> during this period.\n\nProbably represents freezing of old tuples, which is a WAL-logged\noperation as of 8.2. Is it likely that the data is 200M transactions\nold?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Aug 2007 11:39:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "Tom Lane wrote:\n> Pallav Kalva <[email protected]> writes:\n> \n>> We turned on autovacuums on 8.2 and we have a database which is read \n>> only , it is basically a USPS database used only for address lookups \n>> (only SELECTS, no updates/deletes/inserts).\n>> \n>\n> \n>> This database has about 10gig data and yesterday autovacuum started \n>> on this database and all of a sudden I see lot of archive logs generated \n>> during this time, I guess it might have generated close to 3-4gig data \n>> during this period.\n>> \n>\n> Probably represents freezing of old tuples, which is a WAL-logged\n> operation as of 8.2. Is it likely that the data is 200M transactions\n> old?\n> \nIf nothing changed on these tables how can it freeze old tuples ?\nDoes it mean that once it reaches 200M transactions it will do the same \nthing all over again ?\nIf I am doing just SELECTS on these tables ? how can there be any \ntransactions ? 
or SELECTS considered transactions too ?\n\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \n\n", "msg_date": "Fri, 31 Aug 2007 12:05:36 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Pallav Kalva wrote:\n> Tom Lane wrote:\n\n>> Probably represents freezing of old tuples, which is a WAL-logged\n>> operation as of 8.2. Is it likely that the data is 200M transactions\n>> old?\n>> \n> If nothing changed on these tables how can it freeze old tuples ?\n> Does it mean that once it reaches 200M transactions it will do the same \n> thing all over again ?\n\nNo -- once tuples are frozen, they don't need freezing again (unless\nthey are modified by UPDATE or DELETE).\n\n> If I am doing just SELECTS on these tables ? how can there be any \n> transactions ? or SELECTS considered transactions too ?\n\nSelects are transactions too. They just don't modify data.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 31 Aug 2007 12:11:58 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Alvaro Herrera wrote:\n> Pallav Kalva wrote:\n> \n>> Tom Lane wrote:\n>> \n>\n> \n>>> Probably represents freezing of old tuples, which is a WAL-logged\n>>> operation as of 8.2. Is it likely that the data is 200M transactions\n>>> old?\n>>> \n>>> \n>> If nothing changed on these tables how can it freeze old tuples ?\n>> Does it mean that once it reaches 200M transactions it will do the same \n>> thing all over again ?\n>> \n>\n> No -- once tuples are frozen, they don't need freezing again (unless\n> they are modified by UPDATE or DELETE).\n>\n> \n>> If I am doing just SELECTS on these tables ? how can there be any \n>> transactions ? or SELECTS considered transactions too ?\n>> \n>\n> Selects are transactions too. They just don't modify data.\n>\n> \nCan you please correct me if I am wrong, I want to understand how this \nworks.\nBased on what you said, it will run autovacuum again when it passes 200M \ntransactions, as SELECTS are transactions too and are going on these \ntables.\nBut the next time when it runs autovacuum, it shouldnt freeze the tuples \nagain as they are already frozen and wont generate lot of archive logs ?\nOr is this because of it ran autovacuum for the first time on this db ? \njust the first time it does this process ?\n\n\n\n", "msg_date": "Fri, 31 Aug 2007 12:25:16 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:\n> Can you please correct me if I am wrong, I want to understand how this \n> works.\n> Based on what you said, it will run autovacuum again when it passes 200M \n> transactions, as SELECTS are transactions too and are going on these \n> tables.\n> But the next time when it runs autovacuum, it shouldnt freeze the tuples \n> again as they are already frozen and wont generate lot of archive logs ?\n> Or is this because of it ran autovacuum for the first time on this db ? 
\n> just the first time it does this process ?\n\nThat is correct. The tuples are now frozen, which means that they will\nnot need to be frozen ever again unless you insert/update any records.\n\n", "msg_date": "Fri, 31 Aug 2007 09:29:31 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "[email protected] (Pallav Kalva) writes:\n> Tom Lane wrote:\n>> Pallav Kalva <[email protected]> writes:\n>>\n>>> We turned on autovacuums on 8.2 and we have a database which is\n>>> read only , it is basically a USPS database used only for address\n>>> lookups (only SELECTS, no updates/deletes/inserts).\n>>>\n>>\n>>\n>>> This database has about 10gig data and yesterday autovacuum\n>>> started on this database and all of a sudden I see lot of archive\n>>> logs generated during this time, I guess it might have generated\n>>> close to 3-4gig data during this period.\n>>>\n>>\n>> Probably represents freezing of old tuples, which is a WAL-logged\n>> operation as of 8.2. Is it likely that the data is 200M transactions\n>> old?\n>>\n> If nothing changed on these tables how can it freeze old tuples ?\n\nIt does so very easily, by changing the XID from whatever it was to 2\n(which indicates that a tuple has been \"frozen.\")\n\nI don't imagine you were wondering how it is done - more likely you\nwere wondering why.\n\n\"Why\" is to prevent transaction ID wraparound failures.\n\n> Does it mean that once it reaches 200M transactions it will do the\n> same thing all over again ?\n\nIt won't freeze those same tuples again, as they're obviously already\nfrozen, but a vacuum next week may be expected to freeze tuples that\nare roughly a week newer.\n\n> If I am doing just SELECTS on these tables ? how can there be any\n> transactions ? or SELECTS considered transactions too ?\n\nEvery query submitted comes in the context of a transaction. If there\nwasn't a BEGIN submitted somewhere, then yes, every SELECT could\npotentially invoke a transaction, irrespective of whether it writes\ndata or not.\n\nIf you submit a million SELECT statements, yes, that could, indeed,\nindicate a million transactions.\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://cbbrowne.com/info/nonrdbms.html\nHow much deeper would the ocean be if sponges didn't live there? \n", "msg_date": "Fri, 31 Aug 2007 13:46:09 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On 8/31/07, Alvaro Herrera <[email protected]> wrote:\n>\n> Pallav Kalva wrote:\n> > Tom Lane wrote:\n>\n> >> Probably represents freezing of old tuples, which is a WAL-logged\n> >> operation as of 8.2. Is it likely that the data is 200M transactions\n> >> old?\n> >>\n> > If nothing changed on these tables how can it freeze old tuples ?\n> > Does it mean that once it reaches 200M transactions it will do the same\n> > thing all over again ?\n>\n> No -- once tuples are frozen, they don't need freezing again (unless\n> they are modified by UPDATE or DELETE).\n>\n>\n\nOff-topic question: the documentation says that XID numbers are 32 bit.\nCould the XID be 64 bit when running on a 64 bit platform? That would\neffectively prevent wrap-around issues.\n\nRegards\n\nMP\n\nOn 8/31/07, Alvaro Herrera <[email protected]> wrote:\nPallav Kalva wrote:> Tom Lane wrote:>> Probably represents freezing of old tuples, which is a WAL-logged>> operation as of 8.2.  
Is it likely that the data is 200M transactions>> old?\n>>> If nothing changed on these tables how can it freeze old tuples ?> Does it mean that once it reaches 200M transactions it will do the same> thing all over again ?No -- once tuples are frozen, they don't need freezing again (unless\nthey are modified by UPDATE or DELETE).Off-topic question: the documentation says that XID numbers are 32 bit. Could the XID be 64 bit when running on a 64 bit platform? That would effectively prevent wrap-around issues.\nRegardsMP", "msg_date": "Fri, 31 Aug 2007 21:31:47 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Mark Lewis wrote:\n> On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:\n> \n>> Can you please correct me if I am wrong, I want to understand how this \n>> works.\n>> Based on what you said, it will run autovacuum again when it passes 200M \n>> transactions, as SELECTS are transactions too and are going on these \n>> tables.\n>> But the next time when it runs autovacuum, it shouldnt freeze the tuples \n>> again as they are already frozen and wont generate lot of archive logs ?\n>> Or is this because of it ran autovacuum for the first time on this db ? \n>> just the first time it does this process ?\n>> \n>\n> That is correct. The tuples are now frozen, which means that they will\n> not need to be frozen ever again unless you insert/update any records.\n>\n> \n\nMy main concern is filling up my disk with archive logs, so from all the \nreplies I get is that since tuples are already frozen, next time when it \nruns autovacuum it wont generate any archive logs.\n\nIs my assumption right ?\n\nThanks! everybody on all your replies. It's was very helpful.\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n> \n\n", "msg_date": "Fri, 31 Aug 2007 14:42:11 -0400", "msg_from": "Pallav Kalva <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Mikko Partio escribi�:\n\n> Off-topic question: the documentation says that XID numbers are 32 bit.\n> Could the XID be 64 bit when running on a 64 bit platform? That would\n> effectively prevent wrap-around issues.\n\nNo, because they would take too much space in tuple headers.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/5ZYLFMCVHXC\nAl principio era UNIX, y UNIX habl� y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n", "msg_date": "Fri, 31 Aug 2007 14:47:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Pallav Kalva wrote:\n\n> My main concern is filling up my disk with archive logs, so from all the \n> replies I get is that since tuples are already frozen, next time when it \n> runs autovacuum it wont generate any archive logs.\n>\n> Is my assumption right ?\n\nWell, it won't generate any logs for the tuples that were just frozen,\nbut it will generate logs for tuples that weren't frozen. 
How many of\nthese there are, depends on how many tuples you inserted after the batch\nthat was just frozen.\n\nIf you want to freeze the whole table completely, you can you VACUUM\nFREEZE.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Fri, 31 Aug 2007 14:51:28 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Mikko Partio escribi�:\n>> Off-topic question: the documentation says that XID numbers are 32 bit.\n>> Could the XID be 64 bit when running on a 64 bit platform? That would\n>> effectively prevent wrap-around issues.\n\n> No, because they would take too much space in tuple headers.\n\nIt's worth noting that the patch Florian is working on, to suppress\nassignment of XIDs for transactions that never write anything, will make\nfor a large reduction in the rate of XID consumption in many real-world\napplications. That will reduce the need for tuple freezing and probably\nlessen the attraction of wider XIDs even more.\n\nIf he gets it done soon (before the HOT dust settles) I will be strongly\ntempted to try to sneak it into 8.3 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Aug 2007 15:08:45 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "[email protected] (Pallav Kalva) writes:\n> Mark Lewis wrote:\n>> On Fri, 2007-08-31 at 12:25 -0400, Pallav Kalva wrote:\n>>\n>>> Can you please correct me if I am wrong, I want to understand how\n>>> this works.\n>>> Based on what you said, it will run autovacuum again when it passes\n>>> 200M transactions, as SELECTS are transactions too and are going on\n>>> these tables.\n>>> But the next time when it runs autovacuum, it shouldnt freeze the\n>>> tuples again as they are already frozen and wont generate lot of\n>>> archive logs ?\n>>> Or is this because of it ran autovacuum for the first time on this\n>>> db ? just the first time it does this process ?\n>>>\n>>\n>> That is correct. The tuples are now frozen, which means that they will\n>> not need to be frozen ever again unless you insert/update any records.\n>>\n>>\n>\n> My main concern is filling up my disk with archive logs, so from all\n> the replies I get is that since tuples are already frozen, next time\n> when it runs autovacuum it wont generate any archive logs.\n>\n> Is my assumption right ?\n\nNo, your assumption is wrong.\n\nLater vacuums will not generate archive files for the tuples that were\n*previously* frozen, but if you have additional tuples that have\ngotten old enough to reach the \"freeze point,\" THOSE tuples will get\nfrozen, and so you'll continue to see archive logs generated.\n\nAnd this is Certainly Not A Bug. If the system did not do this, those\nunfrozen tuples would eventually disappear when your current\ntransaction XID rolls over. The freezing is *necessary.*\n-- \nlet name=\"cbbrowne\" and tld=\"cbbrowne.com\" in name ^ \"@\" ^ tld;;\nhttp://linuxdatabases.info/info/unix.html\nRules of the Evil Overlord #86. \"I will make sure that my doomsday\ndevice is up to code and properly grounded.\"\n<http://www.eviloverlord.com/>\n", "msg_date": "Fri, 31 Aug 2007 15:13:54 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" 
}, { "msg_contents": "On Aug 31, 2007, at 2:08 PM, Tom Lane wrote:\n\n> Alvaro Herrera <[email protected]> writes:\n>> Mikko Partio escribi�:\n>>> Off-topic question: the documentation says that XID numbers are \n>>> 32 bit.\n>>> Could the XID be 64 bit when running on a 64 bit platform? That \n>>> would\n>>> effectively prevent wrap-around issues.\n>\n>> No, because they would take too much space in tuple headers.\n>\n> It's worth noting that the patch Florian is working on, to suppress\n> assignment of XIDs for transactions that never write anything, will \n> make\n> for a large reduction in the rate of XID consumption in many real- \n> world\n> applications. That will reduce the need for tuple freezing and \n> probably\n> lessen the attraction of wider XIDs even more.\n>\n> If he gets it done soon (before the HOT dust settles) I will be \n> strongly\n> tempted to try to sneak it into 8.3 ...\n>\n> \t\t\tregards, tom lane\n\nOff topic and just out of curiousity, is this the work that will \nallow standby servers to have selects run on them without stopping \nWAL replay?\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Fri, 31 Aug 2007 14:35:49 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "Erik Jones <[email protected]> writes:\n> On Aug 31, 2007, at 2:08 PM, Tom Lane wrote:\n>> It's worth noting that the patch Florian is working on, to suppress\n>> assignment of XIDs for transactions that never write anything, will make\n>> for a large reduction in the rate of XID consumption in many real-world\n>> applications.\n\n> Off topic and just out of curiousity, is this the work that will \n> allow standby servers to have selects run on them without stopping \n> WAL replay?\n\nIt's a small component of that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Aug 2007 15:48:43 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "On Fri, 31 Aug 2007, Tom Lane wrote:\n\n> If he gets it done soon (before the HOT dust settles) I will be strongly\n> tempted to try to sneak it into 8.3 ...\n\nCould you or Florian suggest how other people might assist in meeting that \ngoal? It seems like something worthwhile but it's not clear to me how to \nadd manpower to it usefully.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 31 Aug 2007 19:24:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Fri, 31 Aug 2007, Tom Lane wrote:\n>> If he gets it done soon (before the HOT dust settles) I will be strongly\n>> tempted to try to sneak it into 8.3 ...\n\n> Could you or Florian suggest how other people might assist in meeting that \n> goal? It seems like something worthwhile but it's not clear to me how to \n> add manpower to it usefully.\n\nReview the patch? He posted v2 on -hackers just a little bit ago. 
I\nsuggested some cosmetic changes but it's certainly ready to read now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 31 Aug 2007 19:43:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ? " }, { "msg_contents": "Hi Pallav,\n\nI'm currently on PostgreSQL 9.1. Everything was fine till Dec 27th 2017. But\nto my wonder archive logs started to increase from December 28th 2017 till\ndate. \n\nThe configuration parameters were default and everything in the past was\nfine with default configuration parameters. I'm facing a serious problem\nwith this huge archive generation of 48GB per day, that is 2GB per hour. The\nDML's statements are almost same. \n\nIn detail, archive logs are getting generated at 9'th minute and 39'th\nminute of an hour, preceding with a log message 'checkpoints are occurring\ntoo frequently (2 seconds apart).Consider increasing the configuration\nparameter \"checkpoint_segments\" '.\n\nSo how to reduce this abnormal archive log generation. Thanks in Advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Sun, 21 Jan 2018 23:09:10 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hello\nHow big is database?\nPlease show result of this query: select * from pg_stat_activity where query like 'autovacuum%';\nI think here is running antiwraparound autovacuum. In this case all is normal, antiwraparound will produce a lot of WAL and this is necessary to continue database working.\n\nPS: please note postgresql 9.1 is EOL.\n\nregards, Sergei\n\n", "msg_date": "Mon, 22 Jan 2018 12:41:00 +0300", "msg_from": "Sergei Kornilov <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hello Sergi,\n\nThe size of the database is 24GB.\n\nThe output of the above query is :\n\n datid | datname | procpid | usesysid | usename | application_name |\nclient_addr | client_hostname | client_port | backend_start \n| xact_start | query_start |\nwaiting | current_query \n--------+----------+---------+----------+----------+------------------+-------------+-----------------+-------------+----------------------------------+----------------------------------+----------------------------------+---------+--------------------------------------\n 400091 | prod_erp | 19373 | 10 | postgres | | \n| | | 2018-01-22 15:40:38.163865+05:30 |\n2018-01-22 15:40:38.655754+05:30 | 2018-01-22 15:40:38.655754+05:30 | f \n| autovacuum: ANALYZE public.table1\n 400091 | prod_erp | 19373 | 10 | postgres | | \n| | | 2018-01-22 15:40:38.163865+05:30 |\n2018-01-22 15:40:38.655754+05:30 | 2018-01-22 15:40:38.655754+05:30 | f \n| autovacuum: ANALYZE public.table1\n400091 | prod_erp | 19373 | 10 | postgres | | \n| | | 2018-01-22 15:40:38.163865+05:30 |\n2018-01-22 15:40:38.218954+05:30 | 2018-01-22 15:40:38.218954+05:30 | f \n| autovacuum: ANALYZE public.table2\n400091 | prod_erp | 18440 | 10 | postgres | | \n| | | 2018-01-22 15:39:38.128879+05:30 |\n2018-01-22 15:39:38.166507+05:30 | 2018-01-22 15:39:38.166507+05:30 | f \n| autovacuum: VACUUM public.table3\n\n\nCould you please explain what antiwraparound autovacuum is?? Is it related\nfor preventing transactionID wraparound failures? 
If so does running vacuum\nfull against the database will suppress this abnormal generation of archive\nlogs??\n\nPlease give your kind advice.\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Mon, 22 Jan 2018 03:21:39 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "\n\nAm 22.01.2018 um 11:21 schrieb pavan95:\n> Could you please explain what antiwraparound autovacuum is?? Is it related\n> for preventing transactionID wraparound failures?\n\nYes.\n\n\n> If so does running vacuum\n> full against the database will suppress this abnormal generation of archive\n> logs??\n\nSuch a vacuum freeze isn't abnormal. Do you have a really problem with it?\n\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Mon, 22 Jan 2018 12:47:10 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hello,\n\nIs there any way to check, how many transactions happened till date from the\npoint the database created and started accepting transactions ?\n\nThe reason for this doubt is to find whether my database has crossed 2\nmillion transactions or not. \n\nStrangely had an interesting observation, when I tried to a vacuum full, it\nis generating 1GB of archive logs per sec, and yes it's true.\n\n\nSo I had a doubt like whether this is related to vacuum....\n\nPlease help me cope up with this.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 23 Jan 2018 00:04:52 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hi Andreas,\n\nYes I'm facing problem because of this huge WAL(archive log) generation. As\nit is seriously consuming a lot of disk space almost close to 50GB per day\neven if the DML's don't have that impact in this WAL generation.\n\nPreviously the archive_log size is nearly 2 to 3 GB a day. Now with the same\nset of DML's how is it being generated to 50GB is my burning doubt.\n\nI just wanted to know how to stabilize this issue, as checking and deleting\nthe archive logs on hourly basis is not a good idea.\n\nFinally, I'm looking how to reduce this back to normal. Thanks in Advance.\n\nRegards,\nPavan \n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 23 Jan 2018 04:51:01 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "\n\nAm 23.01.2018 um 12:51 schrieb pavan95:\n> Hi Andreas,\n>\n> Yes I'm facing problem because of this huge WAL(archive log) generation. As\n> it is seriously consuming a lot of disk space almost close to 50GB per day\n> even if the DML's don't have that impact in this WAL generation.\n>\n> Previously the archive_log size is nearly 2 to 3 GB a day. Now with the same\n> set of DML's how is it being generated to 50GB is my burning doubt.\n\nWill so many wals continue to be produced?\n\n\n>\n> I just wanted to know how to stabilize this issue, as checking and deleting\n> the archive logs on hourly basis is not a good idea.\nDon't delete wal's!\n\n\n> Finally, I'm looking how to reduce this back to normal. 
Thanks in Advance.\n\nhave you set archive_mode to on and defined an archive_command? \nWal-files will be reused after 2 checkpoints.\nIs there something in the logs?\n\n\nRegards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n", "msg_date": "Tue, 23 Jan 2018 15:17:27 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Yes so many wals are continuing to be produced.\n\nDeleting the wals after a backup of the database.\n\nYes archiving mode is on. And the warning message in log file is\n\n\" checkpoints are frequently occurring (1second apart). Consider increasing\ncheckpoint_segements parameter\".\n\nMy doubt is previously the same are the parameters which are reflected as\nof now. Then what is the point in considering altering those values.\nCorrect me if I am wrong.\n\nRegards,\nPavan\n\nOn Jan 23, 2018 7:47 PM, \"Andreas Kretschmer\" <[email protected]>\nwrote:\n\n\n\nAm 23.01.2018 um 12:51 schrieb pavan95:\n\n> Hi Andreas,\n>\n> Yes I'm facing problem because of this huge WAL(archive log) generation. As\n> it is seriously consuming a lot of disk space almost close to 50GB per day\n> even if the DML's don't have that impact in this WAL generation.\n>\n> Previously the archive_log size is nearly 2 to 3 GB a day. Now with the\n> same\n> set of DML's how is it being generated to 50GB is my burning doubt.\n>\n\nWill so many wals continue to be produced?\n\n\n\n\n> I just wanted to know how to stabilize this issue, as checking and deleting\n> the archive logs on hourly basis is not a good idea.\n>\nDon't delete wal's!\n\n\n\nFinally, I'm looking how to reduce this back to normal. Thanks in Advance.\n>\n\nhave you set archive_mode to on and defined an archive_command? Wal-files\nwill be reused after 2 checkpoints.\nIs there something in the logs?\n\n\nRegards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\nYes so many wals are continuing to be produced.Deleting the wals after a backup of the database.Yes archiving mode is on. And the warning message in log file is \" checkpoints are frequently occurring (1second apart). Consider increasing checkpoint_segements parameter\".My doubt is previously the same are the parameters which are reflected as of now. Then what is the point in considering altering those values. Correct me if I am wrong.Regards,PavanOn Jan 23, 2018 7:47 PM, \"Andreas Kretschmer\" <[email protected]> wrote:\n\nAm 23.01.2018 um 12:51 schrieb pavan95:\n\nHi Andreas,\n\nYes I'm facing problem because of this huge WAL(archive log) generation. As\nit is seriously consuming a lot of disk space almost close to 50GB per day\neven if the DML's don't have that impact in this WAL generation.\n\nPreviously the archive_log size is nearly 2 to 3 GB a day. Now with the same\nset of DML's how is it being generated to 50GB is my burning doubt.\n\n\nWill so many wals continue to be produced?\n\n\n\n\nI just wanted to know how to stabilize this issue, as checking and deleting\nthe archive logs on hourly basis is not a good idea.\n\nDon't delete wal's!\n\n\n\nFinally, I'm looking how to reduce this back to normal. Thanks in Advance.\n\n\nhave you set archive_mode to on and defined an archive_command? 
Wal-files will be reused after 2 checkpoints.\nIs there something in the logs?\n\n\nRegards, Andreas\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com", "msg_date": "Tue, 23 Jan 2018 20:09:27 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja <[email protected]>\nwrote:\n\n> \" checkpoints are frequently occurring (1second apart). Consider\n> increasing checkpoint_segements parameter\".\n>\n\nThe custom on these lists is to bottom or inline post.​\n\n​This tends to appear when someone decide to write a load script of the\nform:\n\nINSERT INTO tbl (cols) VALUES (...);\nINSERT INTO ​tbl (cols) VALUES (...);\n[repeat many, many, times]\n\n(note the lack of BEGIN/END, single transaction help mitigate it somewhat)\n\nDavid J.\n\nOn Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja <[email protected]> wrote:\" checkpoints are frequently occurring (1second apart). Consider increasing checkpoint_segements parameter\".The custom on these lists is to bottom or inline post.​​This tends to appear when someone decide to write a load script of the form:INSERT INTO tbl (cols) VALUES (...);INSERT INTO ​tbl (cols) VALUES (...);[repeat many, many, times](note the lack of BEGIN/END, single transaction help mitigate it somewhat)David J.", "msg_date": "Tue, 23 Jan 2018 07:45:14 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Please don't top-posting\n\n\nAm 23.01.2018 um 15:39 schrieb Pavan Teja:\n> Yes so many wals are continuing to be produced.\n\nyou have to identify why. Please check pg_stat_activity for\n* autovacuum\n* large inserts\n* large updates\n* large deletes\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Tue, 23 Jan 2018 15:57:40 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hi David,\n\nIf it's yes what needs to be done in order to stabilize this issue??\n\nThanks in advance.\n\nRegards,\nPavan\n\nOn Jan 23, 2018 8:15 PM, \"David G. Johnston\" <[email protected]>\nwrote:\n\n> On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja <[email protected]>\n> wrote:\n>\n>> \" checkpoints are frequently occurring (1second apart). Consider\n>> increasing checkpoint_segements parameter\".\n>>\n>\n> The custom on these lists is to bottom or inline post.​\n>\n> ​This tends to appear when someone decide to write a load script of the\n> form:\n>\n> INSERT INTO tbl (cols) VALUES (...);\n> INSERT INTO ​tbl (cols) VALUES (...);\n> [repeat many, many, times]\n>\n> (note the lack of BEGIN/END, single transaction help mitigate it somewhat)\n>\n> David J.\n>\n>\n\nHi David,If it's yes what needs to be done in order to stabilize this issue??Thanks in advance.Regards,PavanOn Jan 23, 2018 8:15 PM, \"David G. Johnston\" <[email protected]> wrote:On Tue, Jan 23, 2018 at 7:39 AM, Pavan Teja <[email protected]> wrote:\" checkpoints are frequently occurring (1second apart). 
Consider increasing checkpoint_segements parameter\".The custom on these lists is to bottom or inline post.​​This tends to appear when someone decide to write a load script of the form:INSERT INTO tbl (cols) VALUES (...);INSERT INTO ​tbl (cols) VALUES (...);[repeat many, many, times](note the lack of BEGIN/END, single transaction help mitigate it somewhat)David J.", "msg_date": "Tue, 23 Jan 2018 20:50:15 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "\n\nAm 23.01.2018 um 16:20 schrieb Pavan Teja:\n> Hi David,\n>\n> If it's yes what needs to be done in order to stabilize this issue??\n>\n\nDon't top-post ;-)\n\n\nYou can't prevent the generation of wal's (apart from using unlogged \ntables, but i'm sure, that will be not your solution.)\n\nRegards, Andreas\n\n-- \n2ndQuadrant - The PostgreSQL Support Company.\nwww.2ndQuadrant.com\n\n\n", "msg_date": "Tue, 23 Jan 2018 16:26:44 +0100", "msg_from": "Andreas Kretschmer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]>\nwrote:\n\n> Yes so many wals are continuing to be produced.\n>\n> Deleting the wals after a backup of the database.\n>\n> Yes archiving mode is on. And the warning message in log file is\n>\n> \" checkpoints are frequently occurring (1second apart). Consider\n> increasing checkpoint_segements parameter\".\n>\n> My doubt is previously the same are the parameters which are reflected as\n> of now. Then what is the point in considering altering those values.\n> Correct me if I am wrong.\n>\n\nYou can use pg_xlogdump to inspect those logs and see which\nrelations/transactions are generating so much WAL.\n\nThen you can hunt within your apps which code is responsible for that\ntraffic, or whether it in fact is autovacuum.\n\nOn Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]> wrote:Yes so many wals are continuing to be produced.Deleting the wals after a backup of the database.Yes archiving mode is on. And the warning message in log file is \" checkpoints are frequently occurring (1second apart). Consider increasing checkpoint_segements parameter\".My doubt is previously the same are the parameters which are reflected as of now. Then what is the point in considering altering those values. Correct me if I am wrong.You can use pg_xlogdump to inspect those logs and see which relations/transactions are generating so much WAL.Then you can hunt within your apps which code is responsible for that traffic, or whether it in fact is autovacuum.", "msg_date": "Tue, 23 Jan 2018 13:07:34 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Jan 23, 2018 9:37 PM, \"Claudio Freire\" <[email protected]> wrote:\n\n\n\nOn Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]>\nwrote:\n\n> Yes so many wals are continuing to be produced.\n>\n> Deleting the wals after a backup of the database.\n>\n> Yes archiving mode is on. And the warning message in log file is\n>\n> \" checkpoints are frequently occurring (1second apart). Consider\n> increasing checkpoint_segements parameter\".\n>\n> My doubt is previously the same are the parameters which are reflected as\n> of now. 
Then what is the point in considering altering those values.\n> Correct me if I am wrong.\n>\n\nYou can use pg_xlogdump to inspect those logs and see which\nrelations/transactions are generating so much WAL.\n\nThen you can hunt within your apps which code is responsible for that\ntraffic, or whether it in fact is autovacuum.\n\n\n\nHi Claudio,\n\nIs pg_xlogdump available for postgres 9.1, as my current production is\npostgres 9.1.\n\nYes investigated in that area, found DML's and also autovacuum statements\nfor some relations. And the DML's are the same before this huge WAL traffic\nand normal WAL traffic.\n\nAnyways, thanks for your timely response 😊\n\nRegards,\nPavan\n\nOn Jan 23, 2018 9:37 PM, \"Claudio Freire\" <[email protected]> wrote:On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]> wrote:Yes so many wals are continuing to be produced.Deleting the wals after a backup of the database.Yes archiving mode is on. And the warning message in log file is \" checkpoints are frequently occurring (1second apart). Consider increasing checkpoint_segements parameter\".My doubt is previously the same are the parameters which are reflected as of now. Then what is the point in considering altering those values. Correct me if I am wrong.You can use pg_xlogdump to inspect those logs and see which relations/transactions are generating so much WAL.Then you can hunt within your apps which code is responsible for that traffic, or whether it in fact is autovacuum. Hi Claudio,Is pg_xlogdump available for postgres 9.1, as my current production is postgres 9.1.Yes investigated in that area, found DML's and also autovacuum statements for some relations. And the DML's are the same before this huge WAL traffic and normal WAL traffic. Anyways, thanks for your timely response 😊Regards,Pavan", "msg_date": "Tue, 23 Jan 2018 21:46:11 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Tue, Jan 23, 2018 at 1:16 PM, Pavan Teja <[email protected]>\nwrote:\n\n> On Jan 23, 2018 9:37 PM, \"Claudio Freire\" <[email protected]> wrote:\n>\n>\n>\n> On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]>\n> wrote:\n>\n>> Yes so many wals are continuing to be produced.\n>>\n>> Deleting the wals after a backup of the database.\n>>\n>> Yes archiving mode is on. And the warning message in log file is\n>>\n>> \" checkpoints are frequently occurring (1second apart). Consider\n>> increasing checkpoint_segements parameter\".\n>>\n>> My doubt is previously the same are the parameters which are reflected as\n>> of now. Then what is the point in considering altering those values.\n>> Correct me if I am wrong.\n>>\n>\n> You can use pg_xlogdump to inspect those logs and see which\n> relations/transactions are generating so much WAL.\n>\n> Then you can hunt within your apps which code is responsible for that\n> traffic, or whether it in fact is autovacuum.\n>\n>\n>\n> Hi Claudio,\n>\n> Is pg_xlogdump available for postgres 9.1, as my current production is\n> postgres 9.1.\n>\n\nRight, it was added in 9.3\n\nI'm unsure whether it can parse pre-9.3 WAL. I know technically speaking,\nWAL doesn't have to stay compatible across versions, but it might be for\nthe limited purposes of xlogdump.\n\nYes investigated in that area, found DML's and also autovacuum statements\n> for some relations. 
And the DML's are the same before this huge WAL traffic\n> and normal WAL traffic.\n>\n> Anyways, thanks for your timely response 😊\n>\n\nWhile looking at current query activity makes sense, if you can't identify\na culprit doing that, inspecting the WAL directly will let you know with\nprecision what is causing all that WAL. Hence the suggestion.\n\nIf xlogdump doesn't work in 9.1, I'm not sure what you can do.\n\nOne idea that pops to mind, though there's probably a better one, you may\nwant to consider attaching an strace to a recovery process on a replica.\nPreferrably one you're not worried about slowing down. Analyzing output\nfrom that is much harder, but it may give you some insight. You'll have to\ncorrelate file handles to file names to relations manually, which can be\nquite a chore.\n\nOn Tue, Jan 23, 2018 at 1:16 PM, Pavan Teja <[email protected]> wrote:On Jan 23, 2018 9:37 PM, \"Claudio Freire\" <[email protected]> wrote:On Tue, Jan 23, 2018 at 11:39 AM, Pavan Teja <[email protected]> wrote:Yes so many wals are continuing to be produced.Deleting the wals after a backup of the database.Yes archiving mode is on. And the warning message in log file is \" checkpoints are frequently occurring (1second apart). Consider increasing checkpoint_segements parameter\".My doubt is previously the same are the parameters which are reflected as of now. Then what is the point in considering altering those values. Correct me if I am wrong.You can use pg_xlogdump to inspect those logs and see which relations/transactions are generating so much WAL.Then you can hunt within your apps which code is responsible for that traffic, or whether it in fact is autovacuum. Hi Claudio,Is pg_xlogdump available for postgres 9.1, as my current production is postgres 9.1.Right, it was added in 9.3I'm unsure whether it can parse pre-9.3 WAL. I know technically speaking, WAL doesn't have to stay compatible across versions, but it might be for the limited purposes of xlogdump.Yes investigated in that area, found DML's and also autovacuum statements for some relations. And the DML's are the same before this huge WAL traffic and normal WAL traffic. Anyways, thanks for your timely response 😊While looking at current query activity makes sense, if you can't identify a culprit doing that, inspecting the WAL directly will let you know with precision what is causing all that WAL. Hence the suggestion.If xlogdump doesn't work in 9.1, I'm not sure what you can do.One idea that pops to mind, though there's probably a better one, you may want to consider attaching an strace to a recovery process on a replica. Preferrably one you're not worried about slowing down. Analyzing output from that is much harder, but it may give you some insight. You'll have to correlate file handles to file names to relations manually, which can be quite a chore.", "msg_date": "Tue, 23 Jan 2018 19:59:23 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hi Claudio,\n\nWe didn't configure any replication to our production server. Which strace\nare you talking about?\n\nWe did a keen observation that only at the time 9'th minute of the hour and\n39'th minute of the hour the so called archive logs are generated even when\nnobody is connecting from application(off the business hours). Minimum of 76\nfiles are being produced in these two intervals of a hour. Tried to monitor\nthe DML's but those are the same DML's which were in the past. 
Any idea??\n\nThanks in advance.\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 23 Jan 2018 23:54:39 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Wed, Jan 24, 2018 at 3:54 AM, pavan95 <[email protected]>\nwrote:\n\n> Hi Claudio,\n>\n> We didn't configure any replication to our production server. Which strace\n> are you talking about?\n>\n\nThis one: https://linux.die.net/man/1/strace\n\nYou can attach it to a process (assuming you have the necessary\npermissions) and it will report all the syscalls the process does. That\ndoes slow down the process though.\n\nThen lsof ( https://linux.die.net/man/8/lsof ) can be used to map file\ndescriptor numbers to file paths. You have to do it as soon as you read the\noutput, because files get closed and file descriptors reused. So it's\nbetter to have a script that directly reads from /proc/pid/fd or fdinfo,\nbut that takes some programming.\n\nIt is nontrivial, but sometimes it's the only tool in your belt. You may\nwant to try something else first though.\n\n\n> We did a keen observation that only at the time 9'th minute of the hour and\n> 39'th minute of the hour the so called archive logs are generated even\n> when\n\nnobody is connecting from application(off the business hours).\n\n\nWell, if you don't know what happens at those times (and only at those\ntimes), it's not that useful.\n\nSince you don't know what is causing this for certain, first thing you have\nto do is ascertain that. Try increasing logging as much as you can,\nespecially around those times, and see what turns on then and not at other\ntimes. You can monitor autovacuum processes as well in pg_stat_activity, so\nmake sure you check that as well, as autovacuum will only log once it's\ndone.\n\nYou do know autovacuum is running at those times, you have to check whether\nit isn't when WAL isn't being generated, and whether autovacuum is\nvacuuming the same tables over and over or what. Your earlier mails show\nautoanalyze runs, not vacuum. Those shouldn't cause so much WAL, but if\nit's running very often and you have lots of stats, then maybe.\n\nYou can also try pg_stat_statements:\nhttps://www.postgresql.org/docs/9.1/static/pgstatstatements.html\n\nAgain, concentrate on the differential - what happens at those times, that\ndoesn't at other times.\n\nAnother idea would be to check for freeze runs in autovacuum. Ie, what's\ndescribed here: https://wiki.postgresql.org/wiki/VacuumHeadaches#FREEZE\n\nThere's a nice blog post with some queries to help you with that here:\nhttp://www.databasesoup.com/2012/09/freezing-your-tuples-off-part-1.html\n(and it's continuation here:\nhttp://www.databasesoup.com/2012/10/freezing-your-tuples-off-part-2.html ).\nI'm not saying you should tune those parameters, what you were showing was\nautoanalyze activity, not vacuum freeze, but you should check whether you\nneed to anyway.\n\nOn Wed, Jan 24, 2018 at 3:54 AM, pavan95 <[email protected]> wrote:Hi Claudio,\n\nWe didn't configure any replication to our production server. Which strace\nare you talking about?This one: https://linux.die.net/man/1/straceYou can attach it to a process (assuming you have the necessary permissions) and it will report all the syscalls the process does. That does slow down the process though.Then lsof ( https://linux.die.net/man/8/lsof ) can be used to map file descriptor numbers to file paths. 
You have to do it as soon as you read the output, because files get closed and file descriptors reused. So it's better to have a script that directly reads from /proc/pid/fd or fdinfo, but that takes some programming.It is nontrivial, but sometimes it's the only tool in your belt. You may want to try something else first though. \n\nWe did a keen observation that only at the time 9'th minute of the hour and\n39'th minute of the hour the so called archive logs are generated even when nobody is connecting from application(off the business hours).Well, if you don't know what happens at those times (and only at those times), it's not that useful.Since you don't know what is causing this for certain, first thing you have to do is ascertain that. Try increasing logging as much as you can, especially around those times, and see what turns on then and not at other times. You can monitor autovacuum processes as well in pg_stat_activity, so make sure you check that as well, as autovacuum will only log once it's done.You do know autovacuum is running at those times, you have to check whether it isn't when WAL isn't being generated, and whether autovacuum is vacuuming the same tables over and over or what. Your earlier mails show autoanalyze runs, not vacuum. Those shouldn't cause so much WAL, but if it's running very often and you have lots of stats, then maybe.You can also try pg_stat_statements: https://www.postgresql.org/docs/9.1/static/pgstatstatements.htmlAgain, concentrate on the differential - what happens at those times, that doesn't at other times.Another idea would be to check for freeze runs in autovacuum. Ie, what's described here: https://wiki.postgresql.org/wiki/VacuumHeadaches#FREEZEThere's a nice blog post with some queries to help you with that here: http://www.databasesoup.com/2012/09/freezing-your-tuples-off-part-1.html(and it's continuation here: http://www.databasesoup.com/2012/10/freezing-your-tuples-off-part-2.html ). I'm not saying you should tune those parameters, what you were showing was autoanalyze activity, not vacuum freeze, but you should check whether you need to anyway.", "msg_date": "Wed, 24 Jan 2018 04:42:36 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hello all,\n\nOne more interesting observation made by me. \n\nI have ran the below query(s) on production:\n\nSELECT \n relname, \n age(relfrozenxid) as xid_age,\n pg_size_pretty(pg_table_size(oid)) as table_size\nFROM pg_class\nWHERE relkind = 'r' and pg_table_size(oid) > 1073741824\nORDER BY age(relfrozenxid) DESC ;\n relname |\nxid_age | table_size\n------------------------------------------------------------+---------+------------\n *hxxxxxxxxxx* |\n7798262 | 3245 MB\n hrxxxxxxxxx |\n7797554 | 4917 MB\n irxxxxxxxxxx |\n7796771 | 2841 MB\n hr_xxxxxxxxxxxxxxxx | 7744262 |\n4778 MB\n reimbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | 6767712 | 1110 MB\n\nshow autovacuum_freeze_max_age;\n autovacuum_freeze_max_age\n---------------------------\n 200000000\n(1 row)\n\n\n\nSELECT txid_current();---AT 15:09PM on 24th Jan 2018\n txid_current\n--------------\n 8204011\n\t \n(1 row)\n \nThen I tried to perform *VACUUM FREEZE* on the *hxxxxxxxxxx*. To my wonder\nit had generated 107 archive log files, which is nearly 1.67GB. 
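As an aside, the WAL written by a single operation can be measured directly instead of counting archived segments; a sketch (pg_current_xlog_location() exists on 9.1, while pg_xlog_location_diff() for computing the byte difference only appears in 9.2, so on 9.1 the two locations have to be compared by hand):

    -- Note the WAL insert location before the operation ...
    SELECT pg_current_xlog_location();   -- e.g. 2F/8A000318

    VACUUM (FREEZE, VERBOSE) hxxxxxxxxxxx;

    -- ... and again afterwards.
    SELECT pg_current_xlog_location();   -- e.g. 2F/EE0001C8

    -- On 9.2 or later the difference in bytes could be computed directly:
    -- SELECT pg_xlog_location_diff('2F/EE0001C8', '2F/8A000318');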
\n\nThe verbose information of above *VACUUM FREEZE* is shown below:\n\n*x_db*=#VACUUM (FREEZE,VERBOSE) hxxxxxxxxxxx;\nINFO: vacuuming \"public.hxxxxxxxxxxx\"\nINFO: scanned index \"hxxxxxxxxxxx_pkey\" to remove 10984 row versions\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.04 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_email_from\" to remove 10984 row\nversions\nDETAIL: CPU 0.00s/0.04u sec elapsed 0.12 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_mobile\" to remove 10984 row versions\nDETAIL: CPU 0.00s/0.03u sec elapsed 0.09 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_pan\" to remove 10984 row versions\nDETAIL: CPU 0.00s/0.02u sec elapsed 0.08 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_ssn\" to remove 10984 row versions\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.04 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_email_from_index\" to remove 10984 row\nversions\nDETAIL: CPU 0.01s/0.03u sec elapsed 0.12 sec.\nINFO: scanned index \"hxxxxxxxxxxx_x_vendor_id_index\" to remove 10984 row\nversions\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.04 sec.\nINFO: \"hxxxxxxxxxxx\": removed 10984 row versions in 3419 pages\nDETAIL: CPU 0.02s/0.02u sec elapsed 0.18 sec.\nINFO: index \"hxxxxxxxxxxx_pkey\" now contains 71243 row versions in 208\npages\nDETAIL: 2160 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_email_from\" now contains 71243 row versions in\n536 pages\nDETAIL: 9386 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_mobile\" now contains 71243 row versions in 389\npages\nDETAIL: 8686 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_pan\" now contains 71243 row versions in 261\npages\nDETAIL: 8979 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_ssn\" now contains 71243 row versions in 257\npages\nDETAIL: 8979 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_email_from_index\" now contains 71243 row\nversions in 536 pages\nDETAIL: 8979 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"hxxxxxxxxxxx_x_vendor_id_index\" now contains 71243 row\nversions in 257 pages\nDETAIL: 8979 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"hxxxxxxxxxxx\": found 2597 removable, 71243 nonremovable row versions\nin 7202 out of 7202 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 10144 unused item pointers.\n0 pages are entirely empty.\nCPU 0.21s/0.66u sec elapsed 3.21 sec.\nINFO: vacuuming \"pg_toast.pg_toast_401161\"\n^CCancel request sent\nERROR: canceling statement due to user request\n\nNote: Cancelled because it got struck over there and it seems to be overhead\nto DB in business hours.\n\nNow from this experiment is there something to suspect if I do VACUUM FREEZE\non the database will it reduce my HUGE ARCHIVE LOG GENERATION?\n\nPlease help. 
Thanks in Advance.\n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 24 Jan 2018 04:50:33 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Wed, Jan 24, 2018 at 8:50 AM, pavan95 <[email protected]>\nwrote:\n\n> Hello all,\n>\n> One more interesting observation made by me.\n>\n> I have ran the below query(s) on production:\n>\n> SELECT\n> relname,\n> age(relfrozenxid) as xid_age,\n> pg_size_pretty(pg_table_size(oid)) as table_size\n> FROM pg_class\n> WHERE relkind = 'r' and pg_table_size(oid) > 1073741824\n> ORDER BY age(relfrozenxid) DESC ;\n> relname |\n> xid_age | table_size\n> ------------------------------------------------------------\n> +---------+------------\n> *hxxxxxxxxxx* |\n> 7798262 | 3245 MB\n> hrxxxxxxxxx |\n> 7797554 | 4917 MB\n> irxxxxxxxxxx |\n> 7796771 | 2841 MB\n> hr_xxxxxxxxxxxxxxxx | 7744262 |\n> 4778 MB\n> reimbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | 6767712 | 1110 MB\n>\n> show autovacuum_freeze_max_age;\n> autovacuum_freeze_max_age\n> ---------------------------\n> 200000000\n> (1 row)\n>\n\nYou seem to be rather far from the freeze_max_age. Unless you're consuming\ntxids at a very high rate, I don't think that's your problem.\n\nOn Wed, Jan 24, 2018 at 8:50 AM, pavan95 <[email protected]> wrote:Hello all,\n\nOne more interesting observation made by me.\n\nI have ran the below query(s) on production:\n\nSELECT\n    relname,\n    age(relfrozenxid) as xid_age,\n    pg_size_pretty(pg_table_size(oid)) as table_size\nFROM pg_class\nWHERE relkind = 'r' and pg_table_size(oid) > 1073741824\nORDER BY age(relfrozenxid) DESC ;\n                    relname                                              |\nxid_age | table_size\n------------------------------------------------------------+---------+------------\n *hxxxxxxxxxx*                                                      |\n7798262 | 3245 MB\n hrxxxxxxxxx                                                         |\n7797554 | 4917 MB\n irxxxxxxxxxx                                                        |\n7796771 | 2841 MB\n hr_xxxxxxxxxxxxxxxx                                           | 7744262 |\n4778 MB\n reimbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | 6767712 | 1110 MB\n\nshow autovacuum_freeze_max_age;\n autovacuum_freeze_max_age\n---------------------------\n 200000000\n(1 row)You seem to be rather far from the freeze_max_age. Unless you're consuming txids at a very high rate, I don't think that's your problem.", "msg_date": "Wed, 24 Jan 2018 11:27:11 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" 
}, { "msg_contents": "On Jan 24, 2018 7:57 PM, \"Claudio Freire\" <[email protected]> wrote:\n\n\n\nOn Wed, Jan 24, 2018 at 8:50 AM, pavan95 <[email protected]>\nwrote:\n\n> Hello all,\n>\n> One more interesting observation made by me.\n>\n> I have ran the below query(s) on production:\n>\n> SELECT\n> relname,\n> age(relfrozenxid) as xid_age,\n> pg_size_pretty(pg_table_size(oid)) as table_size\n> FROM pg_class\n> WHERE relkind = 'r' and pg_table_size(oid) > 1073741824\n> ORDER BY age(relfrozenxid) DESC ;\n> relname |\n> xid_age | table_size\n> ------------------------------------------------------------\n> +---------+------------\n> *hxxxxxxxxxx* |\n> 7798262 | 3245 MB\n> hrxxxxxxxxx |\n> 7797554 | 4917 MB\n> irxxxxxxxxxx |\n> 7796771 | 2841 MB\n> hr_xxxxxxxxxxxxxxxx | 7744262 |\n> 4778 MB\n> reimbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | 6767712 | 1110 MB\n>\n> show autovacuum_freeze_max_age;\n> autovacuum_freeze_max_age\n> ---------------------------\n> 200000000\n> (1 row)\n>\n\nYou seem to be rather far from the freeze_max_age. Unless you're consuming\ntxids at a very high rate, I don't think that's your problem.\n\n\n Hi ,\n\n\n Yes, but why doing vacuum freeze of a table is causing a rapid\n​archiving??\nAny idea??\n\nRegards,\nPavan\n\nOn Jan 24, 2018 7:57 PM, \"Claudio Freire\" <[email protected]> wrote:On Wed, Jan 24, 2018 at 8:50 AM, pavan95 <[email protected]> wrote:Hello all,\n\nOne more interesting observation made by me.\n\nI have ran the below query(s) on production:\n\nSELECT\n    relname,\n    age(relfrozenxid) as xid_age,\n    pg_size_pretty(pg_table_size(oid)) as table_size\nFROM pg_class\nWHERE relkind = 'r' and pg_table_size(oid) > 1073741824\nORDER BY age(relfrozenxid) DESC ;\n                    relname                                              |\nxid_age | table_size\n------------------------------------------------------------+---------+------------\n *hxxxxxxxxxx*                                                      |\n7798262 | 3245 MB\n hrxxxxxxxxx                                                         |\n7797554 | 4917 MB\n irxxxxxxxxxx                                                        |\n7796771 | 2841 MB\n hr_xxxxxxxxxxxxxxxx                                           | 7744262 |\n4778 MB\n reimbxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx | 6767712 | 1110 MB\n\nshow autovacuum_freeze_max_age;\n autovacuum_freeze_max_age\n---------------------------\n 200000000\n(1 row)You seem to be rather far from the freeze_max_age. Unless you're consuming txids at a very high rate, I don't think that's your problem.   Hi ,\n     Yes, but why doing vacuum freeze of a table is causing a rapid ​archiving?? Any idea??Regards,Pavan", "msg_date": "Wed, 24 Jan 2018 20:18:00 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Wed, Jan 24, 2018 at 7:48 AM, Pavan Teja <[email protected]>\nwrote:\n\n>\n>\n> Yes, but why doing vacuum freeze of a table is causing a rapid\n> ​archiving??\n> Any idea??\n>\n>\nIIUC ​Freezing involves physically altering those pages that are not frozen\nto make them frozen. Those changes are logged just like any (most?) other\nphysical changes to pages. The rapid-ness is because freezing is not that\ndifficult so lots of pages can be changed in a relatively short period of\ntime.\n\nDavid J.\n​\n\nOn Wed, Jan 24, 2018 at 7:48 AM, Pavan Teja <[email protected]> wrote:\n     Yes, but why doing vacuum freeze of a table is causing a rapid ​archiving?? 
Any idea??IIUC ​Freezing involves physically altering those pages that are not frozen to make them frozen.  Those changes are logged just like any (most?) other physical changes to pages.  The rapid-ness is because freezing is not that difficult so lots of pages can be changed in a relatively short period of time.David J.​", "msg_date": "Wed, 24 Jan 2018 07:57:16 -0700", "msg_from": "\"David G. Johnston\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Please show the output of these queries in the relevant databases:\n\nselect name, setting, source, sourcefile, sourceline from pg_settings where name like '%vacuum%';\nselect oid::regclass, reloptions from pg_class where reloptions is not null;\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Wed, 24 Jan 2018 12:19:20 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hi Álvaro Herrera,\n\nPlease find the corresponding output:\n\n*1).select name, setting, source, sourcefile, sourceline from pg_settings\nwhere name like '%vacuum%'; *\n-[ RECORD 1 ]----------------------------------------\nname | autovacuum\nsetting | on\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 437\n-[ RECORD 2 ]----------------------------------------\nname | autovacuum_analyze_scale_factor\nsetting | 0.1\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 451\n-[ RECORD 3 ]----------------------------------------\nname | autovacuum_analyze_threshold\nsetting | 50\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 448\n-[ RECORD 4 ]----------------------------------------\nname | autovacuum_freeze_max_age\nsetting | 200000000\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 452\n-[ RECORD 5 ]----------------------------------------\nname | autovacuum_max_workers\nsetting | 3\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 443\n-[ RECORD 6 ]----------------------------------------\nname | autovacuum_naptime\nsetting | 60\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 445\n-[ RECORD 7 ]----------------------------------------\nname | autovacuum_vacuum_cost_delay\nsetting | 20\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 454\n-[ RECORD 8 ]----------------------------------------\nname | autovacuum_vacuum_cost_limit\nsetting | -1\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 457\n-[ RECORD 9 ]----------------------------------------\nname | autovacuum_vacuum_scale_factor\nsetting | 0.2\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 450\n-[ RECORD 10 ]---------------------------------------\nname | autovacuum_vacuum_threshold\nsetting | 50\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 446\n-[ RECORD 11 ]---------------------------------------\nname | log_autovacuum_min_duration\nsetting | 100\nsource | configuration file\nsourcefile | /etc/postgresql/9.1/main/postgresql.conf\nsourceline | 439\n-[ RECORD 12 
]---------------------------------------\nname | vacuum_cost_delay\nsetting | 0\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 13 ]---------------------------------------\nname | vacuum_cost_limit\nsetting | 200\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 14 ]---------------------------------------\nname | vacuum_cost_page_dirty\nsetting | 20\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 15 ]---------------------------------------\nname | vacuum_cost_page_hit\nsetting | 1\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 16 ]---------------------------------------\nname | vacuum_cost_page_miss\nsetting | 10\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 17 ]---------------------------------------\nname | vacuum_defer_cleanup_age\nsetting | 0\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 18 ]---------------------------------------\nname | vacuum_freeze_min_age\nsetting | 50000000\nsource | default\nsourcefile |\nsourceline |\n-[ RECORD 19 ]---------------------------------------\nname | vacuum_freeze_table_age\nsetting | 150000000\nsource | default\nsourcefile |\nsourceline |\n\n\n*2).select oid::regclass, reloptions from pg_class where reloptions is not\nnull; *\n\n(No rows)\n\n\n\nThanks in Advance.\n\n\nRegards,\nPavan\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 24 Jan 2018 20:47:15 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "pavan95 wrote:\n> Hi �lvaro Herrera,\n> \n> Please find the corresponding output:\n\nOK, these settings look pretty normal, so they don't explain your\nproblem.\n\nWhat is checkpoint_segments set to? And checkpoint_timeout?\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n", "msg_date": "Thu, 25 Jan 2018 18:30:10 -0300", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Jan 26, 2018 3:00 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\npavan95 wrote:\n> Hi Álvaro Herrera,\n>\n> Please find the corresponding output:\n\nOK, these settings look pretty normal, so they don't explain your\nproblem.\n\nWhat is checkpoint_segments set to? And checkpoint_timeout?\n\n--\nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n Hi,\n\n checkpoint_segments are set to '3' &\n checkpoint_timeout was set to '5 min'.\n\n Regards,\n Pavan.\n\nOn Jan 26, 2018 3:00 AM, \"Alvaro Herrera\" <[email protected]> wrote:pavan95 wrote:\n> Hi Álvaro Herrera,\n>\n> Please find the corresponding output:\n\nOK, these settings look pretty normal, so they don't explain your\nproblem.\n\nWhat is checkpoint_segments set to?  And checkpoint_timeout?\n\n--\nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n    Hi,      checkpoint_segments are set to '3' &      checkpoint_timeout was set to '5                min'.     Regards,     Pavan.", "msg_date": "Fri, 26 Jan 2018 06:02:39 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" 
}, { "msg_contents": "On Jan 26, 2018 6:02 AM, \"Pavan Teja\" <[email protected]> wrote:\n\n\n\nOn Jan 26, 2018 3:00 AM, \"Alvaro Herrera\" <[email protected]> wrote:\n\npavan95 wrote:\n> Hi Álvaro Herrera,\n>\n> Please find the corresponding output:\n\nOK, these settings look pretty normal, so they don't explain your\nproblem.\n\nWhat is checkpoint_segments set to? And checkpoint_timeout?\n\n--\nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n Hi,\n\n checkpoint_segments are set to '3' &\n checkpoint_timeout was set to '5 min'.\n\n Regards,\n Pavan.\n\n Any clue???\n\n Regards,\n Pavan.\n\nOn Jan 26, 2018 6:02 AM, \"Pavan Teja\" <[email protected]> wrote:On Jan 26, 2018 3:00 AM, \"Alvaro Herrera\" <[email protected]> wrote:pavan95 wrote:\n> Hi Álvaro Herrera,\n>\n> Please find the corresponding output:\n\nOK, these settings look pretty normal, so they don't explain your\nproblem.\n\nWhat is checkpoint_segments set to?  And checkpoint_timeout?\n\n--\nÁlvaro Herrera                https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n    Hi,      checkpoint_segments are set to '3' &      checkpoint_timeout was set to '5                min'.     Regards,     Pavan.\n          Any clue???     Regards,      Pavan.", "msg_date": "Mon, 29 Jan 2018 10:16:26 +0530", "msg_from": "Pavan Teja <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hello all,\n\nWill a sudden restart(stop/start) of a postgres database will generate this\nhuge WAL?\n\nRegards,\nPavan\n\n\n\n\n\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Tue, 30 Jan 2018 06:55:57 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "On Tue, Jan 30, 2018 at 10:55 AM, pavan95 <[email protected]> wrote:\n> Hello all,\n>\n> Will a sudden restart(stop/start) of a postgres database will generate this\n> huge WAL?\n\nShouldn't\n\n", "msg_date": "Tue, 30 Jan 2018 13:13:40 -0300", "msg_from": "Claudio Freire <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" }, { "msg_contents": "Hi all,\n\nRegarding this archive log generation found one observation. \n\nA table named abc_table id found to be archived every 9'th and 39'th minute.\nWe are able to find number of tuples deleted from the pg_stat_user_tables\nview. \n\nBut to my wonder the number of tuple inserts are shown 0. How can there be\nany delete without any inserts.\n\nIt was found that the table is having 2060 rows, where in which all rows are\ngetting deleted in every 9'th and 39'th minute of an hour. It implies that\nthose deleted should be inserted before the delete operation.\n\nAlso performed vacuum freeze on that table before 9'th minute of an hour it\ngenerated 36 archive logs, and when I tried to do the same operation after\n9'th minute(say 11'th minute of the same hour), it is generating the same\nnumber of archive logs.\n\nThis is possible only if the entire table gets updated/recreated. Now my\nfinal doubt is why the tuple inserts in pg_stat_user_tables is showing 0,\nwhen corresponding deletes are existing?\n\nPlease find the below outputs FYR.\n\n\n--Steps performed on production server:--\n\n--1. 
Found Count Of Rows in Production\n--******************************************\nprod_erp=# select count(*) from abc_table;;\n count\n-------\n 2060\n(1 row)\n\n--2. Issued 'Select pg_stat_reset();'\n\n--3. Before Delete Statements (Before JAN 31'st 2018 14:09 Hrs)\n--****************************************************************\n\nIssued:\n\nselect * from pg_stat_user_tables where relname ='abc_table';\n-[ RECORD 1 ]-----+----------------------------\nrelid | 550314\nschemaname | public\nrelname | abc_table\nseq_scan | 2\nseq_tup_read | 4120\nidx_scan | 0\nidx_tup_fetch | 0\nn_tup_ins | 0\nn_tup_upd | 0\nn_tup_del | 0\nn_tup_hot_upd | 0\nn_live_tup | 0\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze |\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 0\n\n\n--4. After Delete Statements (Before JAN 31'st 2018 14:09 Hrs)\n--****************************************************************\n\nselect * from pg_stat_user_tables where relname ='abc_table';\n-[ RECORD 1 ]-----+----------------------------\nrelid | 550314\nschemaname | public\nrelname | abc_table\nseq_scan | 3\nseq_tup_read | 6180\nidx_scan | 2060\nidx_tup_fetch | 2060\nn_tup_ins | 0\nn_tup_upd | 0\nn_tup_del | 2060\nn_tup_hot_upd | 0\nn_live_tup | 0\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze |\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 0\n\n\n--5. After Delete Statements (Before JAN 31'st 2018 14:39 Hrs)\n--**************************************************************** \n\nselect * from pg_stat_user_tables where relname ='abc_table';\n-[ RECORD 1 ]-----+----------------------------\nrelid | 550314\nschemaname | public\nrelname | abc_table\nseq_scan | 4\nseq_tup_read | 8240\nidx_scan | 4120\nidx_tup_fetch | 4120\nn_tup_ins | 0\nn_tup_upd | 0\nn_tup_del | 4120\nn_tup_hot_upd | 0\nn_live_tup | 0\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze |\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 0\n\n\n--6. After Delete Statements (Before JAN 31'st 2018 15:09 Hrs)\n--**************************************************************** \n\n\nselect * from pg_stat_user_tables where relname ='abc_table';\n-[ RECORD 1 ]-----+----------------------------\nrelid | 550314\nschemaname | public\nrelname | abc_table\nseq_scan | 5\nseq_tup_read | 10300\nidx_scan | 6180\nidx_tup_fetch | 6180\nn_tup_ins | 0\nn_tup_upd | 0\nn_tup_del | 6180\nn_tup_hot_upd | 0\nn_live_tup | 0\nn_dead_tup | 0\nlast_vacuum |\nlast_autovacuum |\nlast_analyze |\nlast_autoanalyze |\nvacuum_count | 0\nautovacuum_count | 0\nanalyze_count | 0\nautoanalyze_count | 0\n\n\n\nAs said above if we compare n_tup_del value in steps 4,5,6 it says us that\nentire table is getting deleted(correct me if I'm wrong), but n_tup_ins is\n0. \n\nRegards,\nPavan\n\n\n\n--\nSent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html\n\n", "msg_date": "Wed, 31 Jan 2018 23:18:00 -0700 (MST)", "msg_from": "pavan95 <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 8.2 Autovacuum BUG ?" } ]
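Claudio's earlier suggestion in this thread to watch activity around the 9th and 39th minute of the hour could be done with a query like the following on 9.1 (a sketch; procpid and current_query are the 9.1 column names, renamed in 9.2):

    -- Sample this around hh:09 and hh:39 to see which statements and which
    -- autovacuum workers are running while the WAL spike happens.
    SELECT procpid, usename, xact_start, query_start, current_query
    FROM   pg_stat_activity
    WHERE  current_query <> '<IDLE>'
    ORDER  BY xact_start;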
[ { "msg_contents": "Hi,\nI was looking for opinions on performance for a design involving schemas. We have a 3-tier system with a lot of hand-written SQL in our Java-based server, but we want to start limiting the data that different users can access based on certain user properties. Rather than update hundreds of queries throughout our server code based on these user properties we were thinking that instead we would do the following:\n\n1. Build a schema for each user.\n2. Reset the users search path for each database connection so it accesses their schema first, then the public schema\n3. Inside that users schema create about 5 views to \"replace\" tables in the public schema with the same name. Each of these views would provide only a subset of the data for each corresponding table in the public schema based on the users properties.\n4. Provide rules for each of these views so they would act as insertable/updateable/deleteable views. \n\nDoes anyone have any thoughts on how this may perform over the long-haul? Database cleanup or maintenance problems?\n\nWe currently only handle about 50 users at a time, but expect it to potentially handle about 150-200 users within a year or two.\n\nRunning PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.3\n\nThanks!\n \n \n--------------------------------------------------------\n\nInformation in this e-mail may be confidential. It is intended only for the addressee(s) identified above. If you are not the addressee(s), or an employee or agent of the addressee(s), please note that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender of the error.\n", "msg_date": "Fri, 31 Aug 2007 23:02:25 -0400", "msg_from": "\"Brennan, Sean \\(IMS\\)\" <[email protected]>", "msg_from_op": true, "msg_subject": "schemas to limit data access" }, { "msg_contents": "On 8/31/07, Brennan, Sean (IMS) <[email protected]> wrote:\n> Hi,\n> I was looking for opinions on performance for a design involving schemas. We have a 3-tier system with a lot of hand-written SQL in our Java-based server, but we want to start limiting the data that different users can access based on certain user properties. Rather than update hundreds of queries throughout our server code based on these user properties we were thinking that instead we would do the following:\n\n\n> 1. Build a schema for each user.\n> 2. Reset the users search path for each database connection so it accesses their schema first, then the public schema\n> 3. Inside that users schema create about 5 views to \"replace\" tables in the public schema with the same name. Each of these views would provide only a subset of the data for each corresponding table in the public schema based on the users properties.\n> 4. Provide rules for each of these views so they would act as insertable/updateable/deleteable views.\n>\n> Does anyone have any thoughts on how this may perform over the long-haul? Database cleanup or maintenance problems?\n\nThis will work relatively ok if the main tables in the public schema\ndo not change very much...otherwise you have to drop all the views,\nchange tables, and re-make. Even still, that's a lot of rules flying\naround, and excessive use of rules is asking for trouble.\n\nYou may want to explore trying to do it using a single view for each\nunderlying table, and drop the schemas approach (which I would be\nlooking at for separate physical tables, not views). 
A very simple\nway to do this that might work for you is:\n\ncreate view foo_view as select * from foo where owner_col = current_user;\nplus update, delete rules, etc.\n\nyou can then rename the tables in place for seamless app integration.\nYou could replace the current_user item with an expression but the\nperformance issues could be large...owner_col could be an array though\nas long as it's relatively small (you may want to look at array\nindexing techniques if you go the array route).\n\nmerlin\n", "msg_date": "Mon, 3 Sep 2007 21:08:48 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: schemas to limit data access" } ]
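A slightly fuller sketch of the single-view approach Merlin outlines above, with the rules needed to make the view writable on 8.2; the table, column, and rule names are made up for illustration:

    -- Hypothetical base table; owner_col records which database user owns each row.
    CREATE TABLE foo (
        id        integer PRIMARY KEY,
        data      text,
        owner_col name NOT NULL DEFAULT current_user
    );

    -- The view exposes only the connected user's rows.
    CREATE VIEW foo_view AS
        SELECT id, data FROM foo WHERE owner_col = current_user;

    -- Rules make the view behave like a writable table (views are not
    -- automatically updatable in 8.2).
    CREATE RULE foo_view_ins AS ON INSERT TO foo_view DO INSTEAD
        INSERT INTO foo (id, data, owner_col)
        VALUES (NEW.id, NEW.data, current_user);

    CREATE RULE foo_view_upd AS ON UPDATE TO foo_view DO INSTEAD
        UPDATE foo SET data = NEW.data
        WHERE id = OLD.id AND owner_col = current_user;

    CREATE RULE foo_view_del AS ON DELETE TO foo_view DO INSTEAD
        DELETE FROM foo WHERE id = OLD.id AND owner_col = current_user;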
[ { "msg_contents": "Hi,\nI was looking for opinions on performance for a design involving schemas. We have a 3-tier system with a lot of hand-written SQL in our Java-based server, but we want to start limiting the data that different users can access based on certain user properties. Rather than update hundreds of queries throughout our server code based on these user properties we were thinking that instead we would do the following:\n\n1. Build a schema for each user.\n2. Reset the users search path for each database connection so it accesses their schema first, then the public schema\n3. Inside that users schema create about 5 views to \"replace\" tables in the public schema with the same name. Each of these views would provide only a subset of the data for each corresponding table in the public schema based on the users properties.\n4. Provide rules for each of these views so they would act as insertable/updateable/deleteable views. \n\nDoes anyone have any thoughts on how this may perform over the long-haul? Database cleanup or maintenance problems?\n\nWe currently only handle about 50 users at a time, but expect it to potentially handle about 150-200 users within a year or two.\n\nRunning PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.3\n\nThanks!\n \n \n--------------------------------------------------------\n\nInformation in this e-mail may be confidential. It is intended only for the addressee(s) identified above. If you are not the addressee(s), or an employee or agent of the addressee(s), please note that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender of the error.\n", "msg_date": "Fri, 31 Aug 2007 23:02:32 -0400", "msg_from": "\"Brennan, Sean \\(IMS\\)\" <[email protected]>", "msg_from_op": true, "msg_subject": "schemas to limit data access" }, { "msg_contents": "Hi,\nI was looking for opinions on performance for a design involving schemas. We have a 3-tier system with a lot of hand-written SQL in our Java-based server, but we want to start limiting the data that different users can access based on certain user properties. Rather than update hundreds of queries throughout our server code based on these user properties we were thinking that instead we would do the following:\n\n1. Build a schema for each user.\n2. Reset the users search path for each database connection so it accesses their schema first, then the public schema\n3. Inside that users schema create about 5 views to \"replace\" tables in the public schema with the same name. Each of these views would provide only a subset of the data for each corresponding table in the public schema based on the users properties.\n4. Provide rules for each of these views so they would act as insertable/updateable/deleteable views. \n\nDoes anyone have any thoughts on how this may perform over the long-haul? Database cleanup or maintenance problems?\n\nWe currently only handle about 50 users at a time, but expect it to potentially handle about 150-200 users within a year or two.\n\nRunning PostgreSQL 8.2.4 on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 3.4.3\n\nThanks!\n \n \n--------------------------------------------------------\n\nInformation in this e-mail may be confidential. It is intended only for the addressee(s) identified above. 
If you are not the addressee(s), or an employee or agent of the addressee(s), please note that any dissemination, distribution, or copying of this communication is strictly prohibited. If you have received this e-mail in error, please notify the sender of the error.\n", "msg_date": "Fri, 31 Aug 2007 23:08:34 -0400", "msg_from": "\"Brennan, Sean \\(IMS\\)\" <[email protected]>", "msg_from_op": true, "msg_subject": "schemas to limit data access" } ]
[ { "msg_contents": "Hello,\n\nI have a recurring script that updates some tables from an MS_SQL\nserver. One of the operations sets a field in all records to null in\npreparation of being updated with values from the other server. The\nSQL statement is:\n\nupdate shawns_data set alias = null;\n\nAlias is a type varchar(8)\n\nThe table has 26 fields per record and there are about 15,700\nrecords. The server hardware is a dual QUAD-CORE Intel 2 GHz XEON dell\n2950 server with 4 drive SAS RAID-5 array, and 16G of RAM. The OS is\nSlackware 11 with some updatews and Postgres v8.2.4 built from source.\n\nEven after VACUUM this simple line takes 35 sec to complete. Other\nmore complicated deletes and updates, some of the tables in this\ndatabase are over 300 million records, take as much time as this\nsimple query.\n\nMy question: Is there a better, ie. faster, way to do this task?\n\nShawn\n", "msg_date": "Sat, 1 Sep 2007 10:29:47 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Slow Query" }, { "msg_contents": "Shawn <[email protected]> writes:\n> update shawns_data set alias = null;\n\n> Alias is a type varchar(8)\n\n> The table has 26 fields per record and there are about 15,700\n> records. The server hardware is a dual QUAD-CORE Intel 2 GHz XEON dell\n> 2950 server with 4 drive SAS RAID-5 array, and 16G of RAM. The OS is\n> Slackware 11 with some updatews and Postgres v8.2.4 built from source.\n\n> Even after VACUUM this simple line takes 35 sec to complete.\n\nSeems like a lot. Table bloat maybe (what does VACUUM VERBOSE say about\nthis table)? An unreasonably large number of indexes to update?\nForeign key checks? (Though unless you have FKs pointing at alias,\nI'd think 8.2 would avoid needing to make any FK checks.)\n\nCould we see EXPLAIN ANALYZE output for this operation? (If you don't\nreally want to zap the column right now, wrap the EXPLAIN in\nBEGIN/ROLLBACK.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Sep 2007 14:09:54 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query " }, { "msg_contents": "\nHi Tom,\n\nThanks for replying.\n\nThere are no FK's, indexes, or dependents on the alias field.\n\nThe system is in the middle of its weekly full activity log resync,\nabout 600 Million records. It will be done later this evening and I\nwill run the explain analyze thenand I will post the results. I will\nalso run a vacuum full analyze on it before the run and have timing on.\n\nShawn\n\n\n On Sat, 01 Sep 2007 14:09:54 -0400 Tom Lane\n<[email protected]> wrote:\n\n> Shawn <[email protected]> writes:\n> > update shawns_data set alias = null;\n> \n> > Alias is a type varchar(8)\n> \n> > The table has 26 fields per record and there are about 15,700\n> > records. The server hardware is a dual QUAD-CORE Intel 2 GHz XEON\n> > dell 2950 server with 4 drive SAS RAID-5 array, and 16G of RAM.\n> > The OS is Slackware 11 with some updatews and Postgres v8.2.4 built\n> > from source.\n> \n> > Even after VACUUM this simple line takes 35 sec to complete.\n> \n> Seems like a lot. Table bloat maybe (what does VACUUM VERBOSE say\n> about this table)? An unreasonably large number of indexes to update?\n> Foreign key checks? (Though unless you have FKs pointing at alias,\n> I'd think 8.2 would avoid needing to make any FK checks.)\n> \n> Could we see EXPLAIN ANALYZE output for this operation? 
(If you don't\n> really want to zap the column right now, wrap the EXPLAIN in\n> BEGIN/ROLLBACK.)\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 6: explain analyze is your\n> friend\n> \n", "msg_date": "Sat, 1 Sep 2007 13:18:16 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" }, { "msg_contents": "\nOk,\n\nThe query just ran and here is the basic output:\n\nUPDATE 15445\nTime: 22121.141 ms\n\nand\n\n\n\nexplain ANALYZE update shawns_data set alias = null;\n QUERY PLAN \n-----------------------------------------------------------------------------------------------------------------\n Seq Scan on shawns_data (cost=0.00..465.45 rows=15445 width=480) (actual time=0.034..67.743 rows=15445 loops=1)\n Total runtime: 1865.002 ms\n(2 rows)\n\n\n\nShawn\n\nOn Sat, 1 Sep 2007 13:18:16 -0700\nShawn <[email protected]> wrote:\n\n> \n> Hi Tom,\n> \n> Thanks for replying.\n> \n> There are no FK's, indexes, or dependents on the alias field.\n> \n> The system is in the middle of its weekly full activity log resync,\n> about 600 Million records. It will be done later this evening and I\n> will run the explain analyze thenand I will post the results. I will\n> also run a vacuum full analyze on it before the run and have timing\n> on.\n> \n> Shawn\n> \n> \n> On Sat, 01 Sep 2007 14:09:54 -0400 Tom Lane\n> <[email protected]> wrote:\n> \n> > Shawn <[email protected]> writes:\n> > > update shawns_data set alias = null;\n> > \n> > > Alias is a type varchar(8)\n> > \n> > > The table has 26 fields per record and there are about 15,700\n> > > records. The server hardware is a dual QUAD-CORE Intel 2 GHz XEON\n> > > dell 2950 server with 4 drive SAS RAID-5 array, and 16G of RAM.\n> > > The OS is Slackware 11 with some updatews and Postgres v8.2.4\n> > > built from source.\n> > \n> > > Even after VACUUM this simple line takes 35 sec to complete.\n> > \n> > Seems like a lot. Table bloat maybe (what does VACUUM VERBOSE say\n> > about this table)? An unreasonably large number of indexes to\n> > update? Foreign key checks? (Though unless you have FKs pointing\n> > at alias, I'd think 8.2 would avoid needing to make any FK checks.)\n> > \n> > Could we see EXPLAIN ANALYZE output for this operation? (If you\n> > don't really want to zap the column right now, wrap the EXPLAIN in\n> > BEGIN/ROLLBACK.)\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of\n> > broadcast)--------------------------- TIP 6: explain analyze is your\n> > friend\n> > \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 1: if posting/reading\n> through Usenet, please send an appropriate subscribe-nomail command\n> to [email protected] so that your message can get through to\n> the mailing list cleanly\n> \n", "msg_date": "Sat, 1 Sep 2007 17:35:19 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Shawn <[email protected]> writes:\n> The query just ran and here is the basic output:\n\n> UPDATE 15445\n> Time: 22121.141 ms\n\n> and\n\n> explain ANALYZE update shawns_data set alias = null;\n> QUERY PLAN \n> -----------------------------------------------------------------------------------------------------------------\n> Seq Scan on shawns_data (cost=0.00..465.45 rows=15445 width=480) (actual time=0.034..67.743 rows=15445 loops=1)\n> Total runtime: 1865.002 ms\n> (2 rows)\n\nHmmm ... 
did you run the real query and the EXPLAIN in immediate\nsuccession? If so, the only reason I can think of for the speed\ndifference is that all the rows were fetched already for the second\nrun. Which doesn't make a lot of sense given the hardware specs\nyou mentioned. Try watching \"vmstat 1\" and see if there's some\nnoticeable difference in the behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 01 Sep 2007 23:00:10 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query " }, { "msg_contents": ">>> On Sat, Sep 1, 2007 at 12:29 PM, in message\n<[email protected]>, Shawn\n<[email protected]> wrote: \n> update shawns_data set alias = null;\n> Even after VACUUM this simple line takes 35 sec to complete.\n \nWould any rows already have a null alias when you run this?\n \nIf so, try adding 'where alias is not null' to the query.\n \n-Kevin\n \n\n\n", "msg_date": "Sun, 02 Sep 2007 10:49:09 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "\n\nThanks Kevin,\n\nThis one initially added about 10sec to the run but I added a HASH\nindex on the alias field and its now about 5 sec average runtime, a net\nimprovement.\n\nShawn\n\n On Sun, 02 Sep 2007 10:49:09 -0500 \"Kevin Grittner\"\n<[email protected]> wrote:\n\n> >>> On Sat, Sep 1, 2007 at 12:29 PM, in message\n> <[email protected]>, Shawn\n> <[email protected]> wrote: \n> > update shawns_data set alias = null;\n> > Even after VACUUM this simple line takes 35 sec to complete.\n> \n> Would any rows already have a null alias when you run this?\n> \n> If so, try adding 'where alias is not null' to the query.\n> \n> -Kevin\n> \n> \n> \n> \n> ---------------------------(end of\n> broadcast)--------------------------- TIP 2: Don't 'kill -9' the\n> postmaster\n> \n", "msg_date": "Mon, 3 Sep 2007 09:15:58 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" }, { "msg_contents": "On Sat, 01 Sep 2007 23:00:10 -0400\nTom Lane <[email protected]> wrote:\n\n> Shawn <[email protected]> writes:\n> > The query just ran and here is the basic output:\n> \n> > UPDATE 15445\n> > Time: 22121.141 ms\n> \n> > and\n> \n> > explain ANALYZE update shawns_data set alias = null;\n> > QUERY\n> > PLAN\n> > -----------------------------------------------------------------------------------------------------------------\n> > Seq Scan on shawns_data (cost=0.00..465.45 rows=15445 width=480)\n> > (actual time=0.034..67.743 rows=15445 loops=1) Total runtime:\n> > 1865.002 ms (2 rows)\n> \n> Hmmm ... did you run the real query and the EXPLAIN in immediate\n> succession? If so, the only reason I can think of for the speed\n> difference is that all the rows were fetched already for the second\n> run. Which doesn't make a lot of sense given the hardware specs\n> you mentioned. Try watching \"vmstat 1\" and see if there's some\n> noticeable difference in the behavior.\n> \n\nActually no,\n\nThe runs were at least an 24 hours apart.\n\nRunning the query standalone was quite interesting. 
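For reference, the combination discussed here — Kevin's predicate plus an index on alias — might look like the following sketch; the index name is made up, and since hash indexes are not WAL-logged in 8.2 a plain btree is the more conservative choice:

    -- Skip rows whose alias is already NULL so they produce no new row versions.
    UPDATE shawns_data SET alias = NULL WHERE alias IS NOT NULL;

    -- Index on the column used in the predicate (Shawn used a hash index).
    CREATE INDEX shawns_data_alias_hash ON shawns_data USING hash (alias);
    -- Conservative alternative:
    -- CREATE INDEX shawns_data_alias_btree ON shawns_data (alias);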
There's a lot of\ndisk output:\n\nprocs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----\n r b swpd free buff cache si so bi bo in cs us sy id wa\n 0 0 2580 450992 84420 16007864 0 0 1 1 1 2 3 0 94 2\n 0 0 2580 450992 84420 16007864 0 0 0 0 1003 18 0 0 100 0\n 0 0 2580 450660 84420 16007864 0 0 0 6764 1127 202 0 0 100 0\n 0 0 2580 450660 84420 16007864 0 0 0 0 1004 28 0 0 100 0\n 0 0 2580 450660 84420 16007864 0 0 0 0 1011 58 0 0 100 0\n 0 0 2580 450660 84420 16007864 0 0 0 0 1011 50 0 0 100 0\n 0 0 2580 450660 84428 16007864 0 0 0 32 1011 53 0 0 100 0\n 0 0 2580 450660 84428 16007864 0 0 0 3034 1095 191 0 0 100 0\n 0 0 2580 450164 84428 16007864 0 0 0 0 1032 362 0 0 100 0\n 0 0 2580 450164 84428 16007864 0 0 0 0 1002 22 0 0 100 0\n 1 0 2580 446444 84428 16012240 0 0 0 2568 1024 398 4 1 95 0\n 1 0 2580 456172 84428 16001760 0 0 8 11549 1136 616 10 3 87 0\n 0 0 2580 452164 84428 16005524 0 0 4 26580 1885 1479 4 2 93 1\n 0 0 2580 452164 84436 16005516 0 0 0 40 1004 26 0 0 100 0\n 0 0 2580 452164 84436 16005524 0 0 0 0 1001 36 0 0 100 0\n 0 0 2580 452164 84436 16005524 0 0 0 0 1001 24 0 0 100 0\n 0 0 2580 452164 84436 16005524 0 0 0 0 1001 36 0 0 100 0\n 0 0 2580 452164 84436 16005524 0 0 0 3 1005 27 0 0 100 0\n 0 0 2580 452040 84436 16005524 0 0 0 0 1006 41 0 0 100 0\n 0 0 2580 452040 84436 16005524 0 0 0 0 1002 20 0 0 100 0\n\nAlso it runs a lot faster by itself, so I am thinking there is some query early on that is \nreally fragmenting the table. Here's the whole script:\n\n\\timing\nselect now();\ncreate temporary table unit_operation(lid varchar(5), hang_time varchar(3),multi_csd_enabled bool,multi_node_enabled bool,wide_area_enabled bool);\ncreate temporary table unit(lid varchar(5),alias char(8),type smallint); \n\\copy unit from '/tmp/wc_unit'\n\\copy unit_operation from '/tmp/wc_unit_operation'\ntruncate csd;\ninsert into csd(lid,alias,type) select lid,alias,type from unit;\nupdate csd set hang_time = uo.hang_time, multi_csd_enabled = uo.multi_csd_enabled, multi_node_enabled = uo.multi_node_enabled, wide_area_enabled = uo.wide_area_enabled from unit_operation uo where csd.lid = uo.lid;\ncreate temporary table groups(lid varchar(5),alias varchar(8));\ncreate temporary table group_operation(lid varchar(15),hang_time varchar(8));\n\\copy groups from '/tmp/wc_groups'\n\\copy group_operation from '/tmp/wc_group_operation'\ntruncate gid;\ninsert into gid(gid,alias) select lid,alias from groups;\nupdate gid set hang_time = go.hang_time from group_operation go where gid.gid = go.lid;\ntruncate rc_unit_state;\n\\copy rc_unit_state from '/tmp/wc_rc_unit_state' with null as ''\nupdate lidrange set csd = false, data = false, disable = false, lost = false;\nupdate lidrange set csd = true from csd where lidrange.lid = csd.lid;\nupdate lidrange set data = true from shawns_data where lidrange.lid = shawns_data.lid;\nupdate lidrange set disable = true from rc_unit_state where lidrange.lid = rc_unit_state.lid;\nupdate shawns_data set alias = null where alias is not null;\nupdate shawns_data set alias = csd.alias from csd where shawns_data.lid = csd.lid;\nupdate lidrange set lost = true from shawns_data where shawns_data.lid = lidrange.lid and shawns_data.lost = true\n\\i '/home/shawn/vacuum_data.sql'\n\nAs you can see there is nothing really accessing the shawns_data table before the query in question.\nI have also allocated 380 Megs of RAM to any virtual process, I can't see where any of these tables\ncould take up this much RAM. 
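Tom's earlier question about table bloat can be checked directly; a sketch (relpages and reltuples are only as accurate as the last VACUUM or ANALYZE):

    -- Rough size check: ~15,700 rows should not need many thousands of pages.
    SELECT relname, relpages, reltuples
    FROM   pg_class
    WHERE  relname = 'shawns_data';

    -- VACUUM VERBOSE reports removable/nonremovable row versions and page counts.
    VACUUM VERBOSE shawns_data;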
Here is my postgresql.conf:\n\n# -----------------------------\n# PostgreSQL configuration file\n# -----------------------------\n#\n# This file consists of lines of the form:\n#\n# name = value\n#\n# (The '=' is optional.) White space may be used. Comments are introduced\n# with '#' anywhere on a line. The complete list of option names and\n# allowed values can be found in the PostgreSQL documentation. The\n# commented-out settings shown in this file represent the default values.\n#\n# Please note that re-commenting a setting is NOT sufficient to revert it\n# to the default value, unless you restart the server.\n#\n# Any option can also be given as a command line switch to the server,\n# e.g., 'postgres -c log_connections=on'. Some options can be changed at\n# run-time with the 'SET' SQL command.\n#\n# This file is read on server startup and when the server receives a\n# SIGHUP. If you edit the file on a running system, you have to SIGHUP the\n# server for the changes to take effect, or use \"pg_ctl reload\". Some\n# settings, which are marked below, require a server shutdown and restart\n# to take effect.\n#\n# Memory units: kB = kilobytes MB = megabytes GB = gigabytes\n# Time units: ms = milliseconds s = seconds min = minutes h = hours d = days\n\n\n#---------------------------------------------------------------------------\n# FILE LOCATIONS\n#---------------------------------------------------------------------------\n\n# The default values of these variables are driven from the -D command line\n# switch or PGDATA environment variable, represented here as ConfigDir.\n\n#data_directory = 'ConfigDir'\t\t# use data in another directory\n\t\t\t\t\t# (change requires restart)\n#hba_file = 'ConfigDir/pg_hba.conf'\t# host-based authentication file\n\t\t\t\t\t# (change requires restart)\n#ident_file = 'ConfigDir/pg_ident.conf'\t# ident configuration file\n\t\t\t\t\t# (change requires restart)\n\n# If external_pid_file is not explicitly set, no extra PID file is written.\n#external_pid_file = '(none)'\t\t# write an extra PID file\n\t\t\t\t\t# (change requires restart)\n\n\n#---------------------------------------------------------------------------\n# CONNECTIONS AND AUTHENTICATION\n#---------------------------------------------------------------------------\n\n# - Connection Settings -\n\n#listen_addresses = 'localhost'\t\t# what IP address(es) to listen on; \n\t\t\t\t\t# comma-separated list of addresses;\n\t\t\t\t\t# defaults to 'localhost', '*' = all\n\t\t\t\t\t# (change requires restart)\n#port = 5432\t\t\t\t# (change requires restart)\nmax_connections = 20\t\t\t# (change requires restart)\n# Note: increasing max_connections costs ~400 bytes of shared memory per \n# connection slot, plus lock space (see max_locks_per_transaction). 
You\n# might also need to raise shared_buffers to support more connections.\n#superuser_reserved_connections = 3\t# (change requires restart)\n#unix_socket_directory = ''\t\t# (change requires restart)\n#unix_socket_group = ''\t\t\t# (change requires restart)\n#unix_socket_permissions = 0777\t\t# octal\n\t\t\t\t\t# (change requires restart)\n#bonjour_name = ''\t\t\t# defaults to the computer name\n\t\t\t\t\t# (change requires restart)\n\n# - Security & Authentication -\n\n#authentication_timeout = 1min\t\t# 1s-600s\n#ssl = off\t\t\t\t# (change requires restart)\n#password_encryption = on\n#db_user_namespace = off\n\n# Kerberos\n#krb_server_keyfile = ''\t\t# (change requires restart)\n#krb_srvname = 'postgres'\t\t# (change requires restart)\n#krb_server_hostname = ''\t\t# empty string matches any keytab entry\n\t\t\t\t\t# (change requires restart)\n#krb_caseins_users = off\t\t# (change requires restart)\n\n# - TCP Keepalives -\n# see 'man 7 tcp' for details\n\n#tcp_keepalives_idle = 0\t\t# TCP_KEEPIDLE, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_interval = 0\t\t# TCP_KEEPINTVL, in seconds;\n\t\t\t\t\t# 0 selects the system default\n#tcp_keepalives_count = 0\t\t# TCP_KEEPCNT;\n\t\t\t\t\t# 0 selects the system default\n\n\n#---------------------------------------------------------------------------\n# RESOURCE USAGE (except WAL)\n#---------------------------------------------------------------------------\n\n# - Memory -\n\nshared_buffers = 300MB\t\t\t# min 128kB or max_connections*16kB\n\t\t\t\t\t# (change requires restart)\n#temp_buffers = 8MB\t\t\t# min 800kB\n#max_prepared_transactions = 5\t\t# can be 0 or more\n\t\t\t\t\t# (change requires restart)\n# Note: increasing max_prepared_transactions costs ~600 bytes of shared memory\n# per transaction slot, plus lock space (see max_locks_per_transaction).\nwork_mem = 300MB\t\t\t# min 64kB\nmaintenance_work_mem = 260MB\t\t# min 1MB\nmax_stack_depth = 7MB\t\t\t# min 100kB\n\n# - Free Space Map -\n\nmax_fsm_pages = 15360000\t\t# min max_fsm_relations*16, 6 bytes each\n\t\t\t\t\t# (change requires restart)\n#max_fsm_relations = 1000\t\t# min 100, ~70 bytes each\n\t\t\t\t\t# (change requires restart)\n\n# - Kernel Resource Usage -\n\n#max_files_per_process = 1000\t\t# min 25\n\t\t\t\t\t# (change requires restart)\n#shared_preload_libraries = ''\t\t# (change requires restart)\n\n# - Cost-Based Vacuum Delay -\n\n#vacuum_cost_delay = 0\t\t\t# 0-1000 milliseconds\n#vacuum_cost_page_hit = 1\t\t# 0-10000 credits\n#vacuum_cost_page_miss = 10\t\t# 0-10000 credits\n#vacuum_cost_page_dirty = 20\t\t# 0-10000 credits\n#vacuum_cost_limit = 200\t\t# 0-10000 credits\n\n# - Background writer -\n\n#bgwriter_delay = 200ms\t\t\t# 10-10000ms between rounds\n#bgwriter_lru_percent = 1.0\t\t# 0-100% of LRU buffers scanned/round\n#bgwriter_lru_maxpages = 5\t\t# 0-1000 buffers max written/round\n#bgwriter_all_percent = 0.333\t\t# 0-100% of all buffers scanned/round\n#bgwriter_all_maxpages = 5\t\t# 0-1000 buffers max written/round\n\n\n#---------------------------------------------------------------------------\n# WRITE AHEAD LOG\n#---------------------------------------------------------------------------\n\n# - Settings -\n\n#fsync = on\t\t\t\t# turns forced synchronization on or off\n#wal_sync_method = fsync\t\t# the default is the first option \n\t\t\t\t\t# supported by the operating system:\n\t\t\t\t\t# open_datasync\n\t\t\t\t\t# fdatasync\n\t\t\t\t\t# fsync\n\t\t\t\t\t# fsync_writethrough\n\t\t\t\t\t# open_sync\n#full_page_writes = 
on\t\t\t# recover from partial page writes\n#wal_buffers = 64kB\t\t\t# min 32kB\n\t\t\t\t\t# (change requires restart)\n#commit_delay = 0\t\t\t# range 0-100000, in microseconds\n#commit_siblings = 5\t\t\t# range 1-1000\n\n# - Checkpoints -\n\ncheckpoint_segments = 50\t\t# in logfile segments, min 1, 16MB each\n#checkpoint_timeout = 5min\t\t# range 30s-1h\n#checkpoint_warning = 30s\t\t# 0 is off\n\n# - Archiving -\n\n#archive_command = ''\t\t# command to use to archive a logfile segment\n#archive_timeout = 0\t\t# force a logfile segment switch after this\n\t\t\t\t# many seconds; 0 is off\n\n\n#---------------------------------------------------------------------------\n# QUERY TUNING\n#---------------------------------------------------------------------------\n\n# - Planner Method Configuration -\n\n#enable_bitmapscan = on\n#enable_hashagg = on\n#enable_hashjoin = on\n#enable_indexscan = on\n#enable_mergejoin = on\n#enable_nestloop = on\n#enable_seqscan = on\n#enable_sort = on\n#enable_tidscan = on\n\n# - Planner Cost Constants -\n\n#seq_page_cost = 1.0\t\t\t# measured on an arbitrary scale\n#random_page_cost = 4.0\t\t\t# same scale as above\n#cpu_tuple_cost = 0.01\t\t\t# same scale as above\n#cpu_index_tuple_cost = 0.005\t\t# same scale as above\n#cpu_operator_cost = 0.0025\t\t# same scale as above\n#effective_cache_size = 128MB\n\n# - Genetic Query Optimizer -\n\n#geqo = on\n#geqo_threshold = 12\n#geqo_effort = 5\t\t\t# range 1-10\n#geqo_pool_size = 0\t\t\t# selects default based on effort\n#geqo_generations = 0\t\t\t# selects default based on effort\n#geqo_selection_bias = 2.0\t\t# range 1.5-2.0\n\n# - Other Planner Options -\n\n#default_statistics_target = 10\t\t# range 1-1000\n#constraint_exclusion = off\n#from_collapse_limit = 8\n#join_collapse_limit = 8\t\t# 1 disables collapsing of explicit \n\t\t\t\t\t# JOINs\n\n\n#---------------------------------------------------------------------------\n# ERROR REPORTING AND LOGGING\n#---------------------------------------------------------------------------\n\n# - Where to Log -\n\n#log_destination = 'stderr'\t\t# Valid values are combinations of \n\t\t\t\t\t# stderr, syslog and eventlog, \n\t\t\t\t\t# depending on platform.\n\n# This is used when logging to stderr:\n#redirect_stderr = off\t\t\t# Enable capturing of stderr into log \n\t\t\t\t\t# files\n\t\t\t\t\t# (change requires restart)\n\n\n# These are only used if redirect_stderr is on:\n#log_directory = 'pg_log'\t\t# Directory where log files are written\n\t\t\t\t\t# Can be absolute or relative to PGDATA\nlog_filename = 'postgresql-%Y-%m-%d_%H%M%S.log' # Log file name pattern.\n\t\t\t\t\t# Can include strftime() escapes\nlog_truncate_on_rotation = off # If on, any existing log file of the same \n\t\t\t\t\t# name as the new log file will be\n\t\t\t\t\t# truncated rather than appended to. But\n\t\t\t\t\t# such truncation only occurs on\n\t\t\t\t\t# time-driven rotation, not on restarts\n\t\t\t\t\t# or size-driven rotation. Default is\n\t\t\t\t\t# off, meaning append to existing files\n\t\t\t\t\t# in all cases.\n#log_rotation_age = 1m\t\t\t# Automatic rotation of logfiles will \n\t\t\t\t\t# happen after that time. 0 to \n\t\t\t\t\t# disable.\nlog_rotation_size = 10MB\t\t# Automatic rotation of logfiles will \n\t\t\t\t\t# happen after that much log\n\t\t\t\t\t# output. 
0 to disable.\n\n# These are relevant when logging to syslog:\n#syslog_facility = 'LOCAL0'\n#syslog_ident = 'postgres'\n\n\n# - When to Log -\n\n#client_min_messages = notice\t\t# Values, in order of decreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# log\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\n#log_min_messages = notice\t\t# Values, in order of decreasing detail:\n\t\t\t\t\t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t\t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# log\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic\n\n#log_error_verbosity = default\t\t# terse, default, or verbose messages\n\n#log_min_error_statement = error\t# Values in order of increasing severity:\n\t\t\t\t \t# debug5\n\t\t\t\t\t# debug4\n\t\t\t\t\t# debug3\n\t\t\t\t\t# debug2\n\t\t\t\t\t# debug1\n\t\t\t\t \t# info\n\t\t\t\t\t# notice\n\t\t\t\t\t# warning\n\t\t\t\t\t# error\n\t\t\t\t\t# fatal\n\t\t\t\t\t# panic (effectively off)\n\n#log_min_duration_statement = -1\t# -1 is disabled, 0 logs all statements\n\t\t\t\t\t# and their durations.\n\n#silent_mode = off\t\t\t# DO NOT USE without syslog or \n\t\t\t\t\t# redirect_stderr\n\t\t\t\t\t# (change requires restart)\n\n# - What to Log -\n\n#debug_print_parse = off\n#debug_print_rewritten = off\n#debug_print_plan = off\n#debug_pretty_print = off\n#log_connections = off\n#log_disconnections = off\n#log_duration = off\n#log_line_prefix = ''\t\t\t# Special values:\n\t\t\t\t\t# %u = user name\n\t\t\t\t\t# %d = database name\n\t\t\t\t\t# %r = remote host and port\n\t\t\t\t\t# %h = remote host\n\t\t\t\t\t# %p = PID\n\t\t\t\t\t# %t = timestamp (no milliseconds)\n\t\t\t\t\t# %m = timestamp with milliseconds\n\t\t\t\t\t# %i = command tag\n\t\t\t\t\t# %c = session id\n\t\t\t\t\t# %l = session line number\n\t\t\t\t\t# %s = session start timestamp\n\t\t\t\t\t# %x = transaction id\n\t\t\t\t\t# %q = stop here in non-session \n\t\t\t\t\t# processes\n\t\t\t\t\t# %% = '%'\n\t\t\t\t\t# e.g. 
'<%u%%%d> '\n#log_statement = 'none'\t\t\t# none, ddl, mod, all\n#log_hostname = off\n\n\n#---------------------------------------------------------------------------\n# RUNTIME STATISTICS\n#---------------------------------------------------------------------------\n\n# - Query/Index Statistics Collector -\n\n#stats_command_string = on\n#update_process_title = on\n\n#stats_start_collector = on\t\t# needed for block or row stats\n\t\t\t\t\t# (change requires restart)\n#stats_block_level = off\n#stats_row_level = off\n#stats_reset_on_server_start = off\t# (change requires restart)\n\n\n# - Statistics Monitoring -\n\n#log_parser_stats = off\n#log_planner_stats = off\n#log_executor_stats = off\n#log_statement_stats = off\n\n\n#---------------------------------------------------------------------------\n# AUTOVACUUM PARAMETERS\n#---------------------------------------------------------------------------\n\n#autovacuum = off\t\t\t# enable autovacuum subprocess?\n\t\t\t\t\t# 'on' requires stats_start_collector\n\t\t\t\t\t# and stats_row_level to also be on\n#autovacuum_naptime = 1min\t\t# time between autovacuum runs\n#autovacuum_vacuum_threshold = 500\t# min # of tuple updates before\n\t\t\t\t\t# vacuum\n#autovacuum_analyze_threshold = 250\t# min # of tuple updates before \n\t\t\t\t\t# analyze\n#autovacuum_vacuum_scale_factor = 0.2\t# fraction of rel size before \n\t\t\t\t\t# vacuum\n#autovacuum_analyze_scale_factor = 0.1\t# fraction of rel size before \n\t\t\t\t\t# analyze\n#autovacuum_freeze_max_age = 200000000\t# maximum XID age before forced vacuum\n\t\t\t\t\t# (change requires restart)\n#autovacuum_vacuum_cost_delay = -1\t# default vacuum cost delay for \n\t\t\t\t\t# autovacuum, -1 means use \n\t\t\t\t\t# vacuum_cost_delay\n#autovacuum_vacuum_cost_limit = -1\t# default vacuum cost limit for \n\t\t\t\t\t# autovacuum, -1 means use\n\t\t\t\t\t# vacuum_cost_limit\n\n\n#---------------------------------------------------------------------------\n# CLIENT CONNECTION DEFAULTS\n#---------------------------------------------------------------------------\n\n# - Statement Behavior -\n\n#search_path = '\"$user\",public'\t\t# schema names\n#default_tablespace = ''\t\t# a tablespace name, '' uses\n\t\t\t\t\t# the default\n#check_function_bodies = on\n#default_transaction_isolation = 'read committed'\n#default_transaction_read_only = off\n#statement_timeout = 0\t\t\t# 0 is disabled\n#vacuum_freeze_min_age = 100000000\n\n# - Locale and Formatting -\n\ndatestyle = 'iso, mdy'\n#timezone = unknown\t\t\t# actually, defaults to TZ \n\t\t\t\t\t# environment setting\n#timezone_abbreviations = 'Default' # select the set of available timezone\n\t\t\t\t\t# abbreviations. 
Currently, there are\n\t\t\t\t\t# Default\n\t\t\t\t\t# Australia\n\t\t\t\t\t# India\n\t\t\t\t\t# However you can also create your own\n\t\t\t\t\t# file in share/timezonesets/.\n#extra_float_digits = 0\t\t\t# min -15, max 2\n#client_encoding = sql_ascii\t\t# actually, defaults to database\n\t\t\t\t\t# encoding\n\n# These settings are initialized by initdb -- they might be changed\nlc_messages = 'en_US'\t\t\t# locale for system error message \n\t\t\t\t\t# strings\nlc_monetary = 'en_US'\t\t\t# locale for monetary formatting\nlc_numeric = 'en_US'\t\t\t# locale for number formatting\nlc_time = 'en_US'\t\t\t\t# locale for time formatting\n\n# - Other Defaults -\n\n#explain_pretty_print = on\n#dynamic_library_path = '$libdir'\n#local_preload_libraries = ''\n\n\n#---------------------------------------------------------------------------\n# LOCK MANAGEMENT\n#---------------------------------------------------------------------------\n\n#deadlock_timeout = 1s\n#max_locks_per_transaction = 64\t\t# min 10\n\t\t\t\t\t# (change requires restart)\n# Note: each lock table slot uses ~270 bytes of shared memory, and there are\n# max_locks_per_transaction * (max_connections + max_prepared_transactions)\n# lock table slots.\n\n\n#---------------------------------------------------------------------------\n# VERSION/PLATFORM COMPATIBILITY\n#---------------------------------------------------------------------------\n\n# - Previous Postgres Versions -\n\n#add_missing_from = off\n#array_nulls = on\n#backslash_quote = safe_encoding\t# on, off, or safe_encoding\n#default_with_oids = off\n#escape_string_warning = on\n#standard_conforming_strings = off\n#regex_flavor = advanced\t\t# advanced, extended, or basic\n#sql_inheritance = on\n\n# - Other Platforms & Clients -\n\n#transform_null_equals = off\n\n\n#---------------------------------------------------------------------------\n# CUSTOMIZED OPTIONS\n#---------------------------------------------------------------------------\n\n#custom_variable_classes = ''\t\t# list of custom variable class names\n\n\nAny suggestions to improve things?\n\nShawn\n\n\n\n", "msg_date": "Mon, 3 Sep 2007 09:57:49 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" }, { "msg_contents": ">>> On Mon, Sep 3, 2007 at 11:15 AM, in message\n<[email protected]>, Shawn\n<[email protected]> wrote: \n> On Sun, 02 Sep 2007 10:49:09 -0500 \"Kevin Grittner\"\n> <[email protected]> wrote:\n> \n>> >>> On Sat, Sep 1, 2007 at 12:29 PM, in message\n>> <[email protected]>, Shawn\n>> <[email protected]> wrote: \n>> > update shawns_data set alias = null;\n>> > Even after VACUUM this simple line takes 35 sec to complete.\n>> \n>> Would any rows already have a null alias when you run this?\n>> If so, try adding 'where alias is not null' to the query.\n> \n> This one initially added about 10sec to the run but I added a HASH\n> index on the alias field and its now about 5 sec average runtime, a net\n> improvement.\n \nTesting for null on 15,700 rows took five seconds more than the time saved\nfrom not updating some portion of the rows????? I've never seen anything\nremotely like that.\n \nDid you ever capture the output of VACUUM VERBOSE against this table (as\nTom requested)?\n \nWhat happens if you run CLUSTER against this table before running one of\nthese updates? 
(Be sure to do that VACUUM VERBOSE first, to see what the\n\"old\" state of the table was, and run it again after.)\n \nWhat is the row count from the second update of the table in your script?\n(An overly loose join there could bloat the table.)\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 03 Sep 2007 13:07:41 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": ">>> On Mon, Sep 3, 2007 at 11:57 AM, in message\n<[email protected]>, Shawn\n<[email protected]> wrote: \n> Also it runs a lot faster by itself\n \nGiven the context of the run, there is a possibility that a checkpoint tends\nto fall at this point in the script because you're filling your WAL files.\nThere is a known issue (which people have been working on fixing in 8.3)\nwhich causes the checkpoint to push a lot of pages to the disk and then wait\non physical writes. If the VACUUM ANALYZE doesn't turn up anything useful,\nan interesting experiment would be to run your script with these\nmodifications to your postgresql.conf file:\n \nbgwriter_lru_percent = 20.0 # 0-100% of LRU buffers scanned/round\nbgwriter_lru_maxpages = 200 # 0-1000 buffers max written/round\nbgwriter_all_percent = 10.0 # 0-100% of all buffers scanned/round\nbgwriter_all_maxpages = 600 # 0-1000 buffers max written/round\n \nDon't leave these in effect permanently without close attention to the\noverall impact. These settings have worked well for us, but are likely\nnot to work well for everyone.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 03 Sep 2007 13:32:13 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "On Mon, 03 Sep 2007 13:07:41 -0500\n\"Kevin Grittner\" <[email protected]> wrote:\n\n> >>> On Mon, Sep 3, 2007 at 11:15 AM, in message\n> <[email protected]>, Shawn\n> <[email protected]> wrote: \n> > On Sun, 02 Sep 2007 10:49:09 -0500 \"Kevin Grittner\"\n> > <[email protected]> wrote:\n> > \n> >> >>> On Sat, Sep 1, 2007 at 12:29 PM, in message\n> >> <[email protected]>, Shawn\n> >> <[email protected]> wrote: \n> >> > update shawns_data set alias = null;\n> >> > Even after VACUUM this simple line takes 35 sec to complete.\n> >> \n> >> Would any rows already have a null alias when you run this?\n> >> If so, try adding 'where alias is not null' to the query.\n> > \n> > This one initially added about 10sec to the run but I added a HASH\n> > index on the alias field and its now about 5 sec average runtime, a\n> > net improvement.\n> \n> Testing for null on 15,700 rows took five seconds more than the time\n> saved from not updating some portion of the rows????? I've never\n> seen anything remotely like that.\n> \n> Did you ever capture the output of VACUUM VERBOSE against this table\n> (as Tom requested)?\n> \n> What happens if you run CLUSTER against this table before running one\n> of these updates? (Be sure to do that VACUUM VERBOSE first, to see\n> what the \"old\" state of the table was, and run it again after.)\n> \n> What is the row count from the second update of the table in your\n> script? 
(An overly loose join there could bloat the table.)\n\nhere is the vacuum results:\n\nvacuum verbose analyze shawns_data;\nINFO: vacuuming \"public.shawns_data\"\nINFO: scanned index \"shawns_data_pkey\" to remove 21444 row versions\nDETAIL: CPU 0.24s/0.12u sec elapsed 8.35 sec.\nINFO: scanned index \"sd_l\" to remove 21444 row versions\nDETAIL: CPU 0.32s/0.16u sec elapsed 6.11 sec.\nINFO: scanned index \"sd_b\" to remove 21444 row versions\nDETAIL: CPU 0.34s/0.13u sec elapsed 10.10 sec.\nINFO: scanned index \"sd_s\" to remove 21444 row versions\nDETAIL: CPU 0.36s/0.13u sec elapsed 7.16 sec.\nINFO: scanned index \"sd_e\" to remove 21444 row versions\nDETAIL: CPU 0.40s/0.17u sec elapsed 6.71 sec.\nINFO: scanned index \"sd_alias_hash\" to remove 21444 row versions\nDETAIL: CPU 0.00s/0.01u sec elapsed 0.01 sec.\nINFO: \"shawns_data\": removed 21444 row versions in 513 pages\nDETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"shawns_data_pkey\" now contains 15445 row versions in\n35230 pages DETAIL: 21444 index row versions were removed.\n19255 index pages have been deleted, 19255 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sd_l\" now contains 15445 row versions in 32569 pages\nDETAIL: 21444 index row versions were removed.\n18059 index pages have been deleted, 18059 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sd_b\" now contains 15445 row versions in 34119 pages\nDETAIL: 21444 index row versions were removed.\n30276 index pages have been deleted, 30219 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sd_s\" now contains 15445 row versions in 35700 pages\nDETAIL: 21444 index row versions were removed.\n31284 index pages have been deleted, 31233 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sd_e\" now contains 15445 row versions in 42333 pages\nDETAIL: 21444 index row versions were removed.\n28828 index pages have been deleted, 28820 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: index \"sd_alias_hash\" now contains 10722 row versions in 298\npages DETAIL: 10722 index row versions were removed.\n0 index pages have been deleted, 0 are currently reusable.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: \"shawns_data\": found 21444 removable, 15445 nonremovable row\nversions in 770 pages DETAIL: 0 dead row versions cannot be removed\nyet. There were 5825 unused item pointers.\n543 pages contain useful free space.\n0 pages are entirely empty.\nCPU 1.68s/0.77u sec elapsed 38.47 sec.\nINFO: analyzing \"public.shawns_data\"\nINFO: \"shawns_data\": scanned 770 of 770 pages, containing 15445 live\nrows and 0 dead rows; 3000 rows in sample, 15445 estimated total rows\nVACUUM\n\n\nShawn\n", "msg_date": "Mon, 3 Sep 2007 16:53:34 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" }, { "msg_contents": "Shawn <[email protected]> writes:\n> \"Kevin Grittner\" <[email protected]> wrote:\n>> Did you ever capture the output of VACUUM VERBOSE against this table\n\n> vacuum verbose analyze shawns_data;\n> ...\n> INFO: index \"shawns_data_pkey\" now contains 15445 row versions in\n> 35230 pages\n\n[ and comparably bloated sizes for other indexes ]\n\nOuch! The table itself doesn't look nearly as bad:\n\n> INFO: \"shawns_data\": found 21444 removable, 15445 nonremovable row\n> versions in 770 pages\n\nbut you've got a spectacularly bad case of index bloat. An index 50\ntimes bigger than its table is Not Good. 
I think you'd find that\n\"REINDEX TABLE shawns_data\" does wonders for the situation.\n\nThe next question is how it got to be that way... what is your\nvacuuming policy for this database? Maybe you need to raise\nmax_fsm_pages?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 03 Sep 2007 20:49:42 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query " }, { "msg_contents": ">>> On Mon, Sep 3, 2007 at 6:53 PM, in message\n<[email protected]>, Shawn\n<[email protected]> wrote: \n> vacuum verbose analyze shawns_data;\n> INFO: vacuuming \"public.shawns_data\"\n> INFO: scanned index \"shawns_data_pkey\" to remove 21444 row versions\n> DETAIL: CPU 0.24s/0.12u sec elapsed 8.35 sec.\n> INFO: scanned index \"sd_l\" to remove 21444 row versions\n> DETAIL: CPU 0.32s/0.16u sec elapsed 6.11 sec.\n> INFO: scanned index \"sd_b\" to remove 21444 row versions\n> DETAIL: CPU 0.34s/0.13u sec elapsed 10.10 sec.\n> INFO: scanned index \"sd_s\" to remove 21444 row versions\n> DETAIL: CPU 0.36s/0.13u sec elapsed 7.16 sec.\n> INFO: scanned index \"sd_e\" to remove 21444 row versions\n> DETAIL: CPU 0.40s/0.17u sec elapsed 6.71 sec.\n> INFO: scanned index \"sd_alias_hash\" to remove 21444 row versions\n> DETAIL: CPU 0.00s/0.01u sec elapsed 0.01 sec.\n> INFO: \"shawns_data\": removed 21444 row versions in 513 pages\n> DETAIL: CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"shawns_data_pkey\" now contains 15445 row versions in\n> 35230 pages DETAIL: 21444 index row versions were removed.\n> 19255 index pages have been deleted, 19255 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"sd_l\" now contains 15445 row versions in 32569 pages\n> DETAIL: 21444 index row versions were removed.\n> 18059 index pages have been deleted, 18059 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"sd_b\" now contains 15445 row versions in 34119 pages\n> DETAIL: 21444 index row versions were removed.\n> 30276 index pages have been deleted, 30219 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"sd_s\" now contains 15445 row versions in 35700 pages\n> DETAIL: 21444 index row versions were removed.\n> 31284 index pages have been deleted, 31233 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"sd_e\" now contains 15445 row versions in 42333 pages\n> DETAIL: 21444 index row versions were removed.\n> 28828 index pages have been deleted, 28820 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: index \"sd_alias_hash\" now contains 10722 row versions in 298\n> pages DETAIL: 10722 index row versions were removed.\n> 0 index pages have been deleted, 0 are currently reusable.\n> CPU 0.00s/0.00u sec elapsed 0.00 sec.\n> INFO: \"shawns_data\": found 21444 removable, 15445 nonremovable row\n> versions in 770 pages DETAIL: 0 dead row versions cannot be removed\n> yet. There were 5825 unused item pointers.\n> 543 pages contain useful free space.\n> 0 pages are entirely empty.\n> CPU 1.68s/0.77u sec elapsed 38.47 sec.\n \nThose indexes are killing you. Hopefully you realize that each of those\nindexes will have a new entry inserted whenever you update a row. If your\nindexes are that expensive to maintain, you want to go out of your way\nupdate rows only when something actually changes, which is not the case\nfor your second update statement yet.\n \nI don't recall seeing the table definition yet. Could we see that, with\nthe indexes? 
Also, have you tried that CLUSTER yet? Syntax:\n \nCLUSTER shawns_data_pkey ON shawns_data;\nANALYZE shawns_data;\n(or VACUUM ANALYZE)\n \nThis will clean up index bloat.\n \n-Kevin\n \n\n", "msg_date": "Mon, 03 Sep 2007 21:40:33 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Slow Query" }, { "msg_contents": "On Mon, 03 Sep 2007 21:10:06 -0400\nTom Lane <[email protected]> wrote:\n\n> Shawn <[email protected]> writes:\n> > You weren't kidding, that really made a difference. its .5 sec now\n> > on the run. I think the vacuuming not running for a few weeks is\n> > what got me. I caught an edit that had stopped the vacuuming\n> > routine a while back. That must have been what caused the index\n> > bloat.\n> \n> Hmm, but no vacuuming at all would have allowed the table itself to\n> bloat too. Did you do a VACUUM FULL to recover after you noticed\n> the size problem? The trouble with VACUUM FULL is it compresses the\n> table but doesn't do a dang thing for indexes ...\n> \n> \t\t\tregards, tom lane\n> \n\nThanks again to the list. I ran the reindex command that Tom suggested\nand things seem to be holding. Even the vacuum command is running\nfaster. The whole script runs in about 10 seconds, far more acceptable.\n\nThanks Guys!\n\nShawn\n", "msg_date": "Sun, 9 Sep 2007 18:31:13 -0700", "msg_from": "Shawn <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Slow Query" } ]
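Putting the fixes from this thread together: a minimal maintenance sketch using the table and index names that appear above. REINDEX and CLUSTER both take exclusive locks, so they belong in a maintenance window; treat the statements as a sketch of the approach discussed, not a prescription.

-- Rebuild every bloated index on the table in one pass:
REINDEX TABLE shawns_data;

-- Or rewrite the table in primary-key order, which rebuilds its indexes as well:
CLUSTER shawns_data_pkey ON shawns_data;
ANALYZE shawns_data;

-- A quick sanity check afterwards: index page counts should be back in
-- proportion to the 770-page table reported by VACUUM VERBOSE.
SELECT relname, relpages
  FROM pg_class
 WHERE relname IN ('shawns_data', 'shawns_data_pkey', 'sd_l', 'sd_b',
                   'sd_s', 'sd_e', 'sd_alias_hash');

With the nightly vacuums running again, max_fsm_pages (already 15360000 in the configuration posted above) is the next thing to keep an eye on, per Tom's question about how the indexes got that way in the first place.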
[ { "msg_contents": "Hello everyone:\n\n I wanted to ask you about how the VACUUM ANALYZE works. is it possible\nthat something can happen in order to reset its effects forcing to execute\nthe VACUUM ANALYZE comand again? i am asking this because i am struggling\nwith a query which works ok after i run a VACUUM ANALYZE, however, sudennly,\nit starts to take forever (the execution of the query) until i make another\nVACUUM ANALYZE, and so on ...\n I'd like to point that i am a novice when it comes to non basic\npostgresql performance related stuff.\n\nThank you all in advance\n\nRafael\n\n\n", "msg_date": "Tue, 4 Sep 2007 14:27:07 -0300 (ART)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Vacum Analyze problem" }, { "msg_contents": "On 9/4/07, [email protected] <[email protected]> wrote:\n>\n> Hello everyone:\n>\n> I wanted to ask you about how the VACUUM ANALYZE works. is it possible\n> that something can happen in order to reset its effects forcing to execute\n> the VACUUM ANALYZE comand again?\n\n\n\nYes, lots of modifications (INSERT,UPDATE,DELETE) to the table in question.\n\nRegards\n\nMP\n\nOn 9/4/07, [email protected] <[email protected]\n> wrote:Hello everyone:   I wanted to ask you about how the VACUUM ANALYZE works. is it possible\nthat something can happen in order to reset its effects forcing to executethe VACUUM ANALYZE comand again?Yes, lots of modifications (INSERT,UPDATE,DELETE) to the table in question. \nRegardsMP", "msg_date": "Tue, 4 Sep 2007 20:33:29 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "On Tuesday 04 September 2007 11:27:07 [email protected] wrote:\n> Hello everyone:\n>\n> I wanted to ask you about how the VACUUM ANALYZE works. is it possible\n> that something can happen in order to reset its effects forcing to execute\n> the VACUUM ANALYZE comand again? i am asking this because i am struggling\n> with a query which works ok after i run a VACUUM ANALYZE, however,\n> sudennly, it starts to take forever (the execution of the query) until i\n> make another VACUUM ANALYZE, and so on ...\n> I'd like to point that i am a novice when it comes to non basic\n> postgresql performance related stuff.\n>\n> Thank you all in advance\n>\n> Rafael\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\nRafael;\n\nVacuum Analyze performs 2 tasks at once. \n\n1) Vacuum - this analyzes the table pages and sets appropriate dead row space \n(those from old updates or deletes that are not possibly needed by any \nexisting transactions) as such that the db can re-use (over-write) that \nspace.\n\n2) Analyze - Like an Oracle compute stats, updates the system catalogs with \ncurrent table stat data.\n\nThe Vacuum will improve queries since the dead space can be re-used and any \ndead space if the table you are having issues with is a high volume table \nthen the solution is generally to run vacuum more often - I've seen tables \nthat needed a vacuum every 5 minutes due to significant sustained churn. \n\nThe Analyze of course is key for the planner, if the table is growing rapidly \nthen running analyze more often will help, if however there is lots of churn \nbut little change in the data (i.e. 
lots of inserts followed by delete's of \nthe same rows) then a straight vacuum is probably what you need. If the data \nis changing rapidly then bumping up the default_statistics_target value may \nhelp - you can bump the default_statistics_target for a single table in the \npg_autovacuum system catalog table.\n\nHope this helps...\n\n/Kevin\n\n", "msg_date": "Tue, 4 Sep 2007 11:45:21 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "> On 9/4/07, [email protected] <[email protected]>\n> wrote:\n>>\n>> Hello everyone:\n>>\n>> I wanted to ask you about how the VACUUM ANALYZE works. is it\n>> possible\n>> that something can happen in order to reset its effects forcing to\n>> execute the VACUUM ANALYZE comand again?\n>\n>\n>\n> Yes, lots of modifications (INSERT,UPDATE,DELETE) to the table in\n> question.\n>\n> Regards\n>\n> MP\n\nI knew that in the long run the VACUUM ANALYZE comand has to be executed\nagain. My question is if something can happen over night and cause the need\nof a new VACUUM ANALYZE (regenerating indexes or other thing related with\nperformance).\n\nThanks for your reply.\n\nRafael\n\n\n", "msg_date": "Tue, 4 Sep 2007 14:46:37 -0300 (ART)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "In response to [email protected]:\n\n> Hello everyone:\n> \n> I wanted to ask you about how the VACUUM ANALYZE works. is it possible\n> that something can happen in order to reset its effects forcing to execute\n> the VACUUM ANALYZE comand again? i am asking this because i am struggling\n> with a query which works ok after i run a VACUUM ANALYZE, however, sudennly,\n> it starts to take forever (the execution of the query) until i make another\n> VACUUM ANALYZE, and so on ...\n> I'd like to point that i am a novice when it comes to non basic\n> postgresql performance related stuff.\n> \n> Thank you all in advance\n\nTo add to Mikko's comments:\n\nPeriodic vacuuming and analyzing is a mandatory part of running a\nPostgreSQL database server. You'll probably be best served to configure\nthe autovacuum daemon to handle this for you. See the postgresql.conf\nconfig file.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 4 Sep 2007 13:55:59 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "> On Tuesday 04 September 2007 11:27:07 [email protected] wrote:\n>> Hello everyone:\n>>\n>> I wanted to ask you about how the VACUUM ANALYZE works. is it\n>> possible\n>> that something can happen in order to reset its effects forcing to\n>> execute the VACUUM ANALYZE comand again? 
i am asking this because i am\n>> struggling with a query which works ok after i run a VACUUM ANALYZE,\n>> however, sudennly, it starts to take forever (the execution of the\n>> query) until i make another VACUUM ANALYZE, and so on ...\n>> I'd like to point that i am a novice when it comes to non basic\n>> postgresql performance related stuff.\n>>\n>> Thank you all in advance\n>>\n>> Rafael\n>>\n>>\n>>\n>> ---------------------------(end of\n>> broadcast)--------------------------- TIP 1: if posting/reading\n>> through Usenet, please send an appropriate\n>> subscribe-nomail command to [email protected] so that\n>> your message can get through to the mailing list cleanly\n>\n> Rafael;\n>\n> Vacuum Analyze performs 2 tasks at once.\n>\n> 1) Vacuum - this analyzes the table pages and sets appropriate dead row\n> space (those from old updates or deletes that are not possibly needed\n> by any existing transactions) as such that the db can re-use\n> (over-write) that space.\n>\n> 2) Analyze - Like an Oracle compute stats, updates the system catalogs\n> with current table stat data.\n>\n> The Vacuum will improve queries since the dead space can be re-used and\n> any dead space if the table you are having issues with is a high\n> volume table then the solution is generally to run vacuum more often -\n> I've seen tables that needed a vacuum every 5 minutes due to\n> significant sustained churn.\n>\n> The Analyze of course is key for the planner, if the table is growing\n> rapidly then running analyze more often will help, if however there is\n> lots of churn but little change in the data (i.e. lots of inserts\n> followed by delete's of the same rows) then a straight vacuum is\n> probably what you need. If the data is changing rapidly then bumping\n> up the default_statistics_target value may help - you can bump the\n> default_statistics_target for a single table in the pg_autovacuum\n> system catalog table.\n>\n> Hope this helps...\n>\n> /Kevin\n>\n>\n> ---------------------------(end of\n> broadcast)--------------------------- TIP 3: Have you checked our\n> extensive FAQ?\n>\n> http://www.postgresql.org/docs/faq\n\nThank you all for the information. I'll get to work on it and see what\nhappends.\nThanks again\n\nRafael\n\n\n\n\n\n\n\n\n", "msg_date": "Tue, 4 Sep 2007 15:46:20 -0300 (ART)", "msg_from": "<[email protected]>", "msg_from_op": true, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "--- [email protected] wrote:\n> Thank you all for the information. I'll get to work on it and see what\n> happends.\n> Thanks again\n> \n> Rafael\n\nI'll chime in with one last thought about excellent resources on Vacuum:\n\nhttp://www.postgresql.org/docs/8.2/static/sql-vacuum.html\nhttp://www.postgresql.org/docs/8.2/static/sql-analyze.html\nhttp://www.postgresql.org/docs/8.2/static/routine-vacuuming.html\n\nRegards,\nRichard Broersma Jr.\n", "msg_date": "Tue, 4 Sep 2007 11:52:31 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacum Analyze problem" }, { "msg_contents": "<[email protected]> writes:\n\n> I knew that in the long run the VACUUM ANALYZE comand has to be executed\n> again. My question is if something can happen over night and cause the need\n> of a new VACUUM ANALYZE (regenerating indexes or other thing related with\n> performance).\n\nThe answer to your question is possibly yes for two reasons:\n\n1) If you're running an autovacuum daemon it might decide it's time to vacuum\nthe table and kick off a vacuum. 
That sounds most like what you're describing.\n\n2) If the size of the table changes substantially becoming much larger (or\nsmaller but that wouldn't happen just due to deletes unless you run vacuum)\nthen recent versions of Postgres will notice even if you don't run analyze and\ntake that into account.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 00:24:32 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Vacum Analyze problem" } ]
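For anyone hitting the same behaviour on 8.2, the advice in this thread boils down to something like the sketch below. The postgresql.conf lines mirror the settings discussed above; the table and column names in the SQL are placeholders for illustration, not the poster's actual schema.

# postgresql.conf -- let autovacuum keep dead space and statistics current
stats_start_collector = on              # required for autovacuum on 8.2
stats_row_level = on                    # required for autovacuum on 8.2
autovacuum = on
autovacuum_naptime = 1min
autovacuum_vacuum_scale_factor = 0.2    # lower these to visit hot tables more often
autovacuum_analyze_scale_factor = 0.1

For a column whose distribution changes quickly, a larger statistics sample can also help the planner (the 8.2 default target is 10):

ALTER TABLE my_busy_table ALTER COLUMN my_filter_column SET STATISTICS 100;
ANALYZE my_busy_table;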
[ { "msg_contents": "I have this turned on, and if I look at the log, it runs once a minute,\nwhich is fine.\n\nBut what does it do? I.e, it runs VACUUM, but does it also do an analyze?\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 14:35:01 up 26 days, 17:57, 2 users, load average: 4.31, 4.40, 4.78\n", "msg_date": "Tue, 04 Sep 2007 14:39:44 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "About autovacuum" }, { "msg_contents": "In response to Jean-David Beyer <[email protected]>:\n\n> I have this turned on, and if I look at the log, it runs once a minute,\n> which is fine.\n> \n> But what does it do? I.e, it runs VACUUM, but does it also do an analyze?\n\nYes. If you turn up the debugging level, you'll see detailed log\nmessages about its activities.\n\nThere were discussions on other lists about improving autovacuum's log\nmessages, I'm pretty sure it will log more helpful information in 8.3.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 4 Sep 2007 15:01:02 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: About autovacuum" } ]
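To actually see what those once-a-minute runs did on 8.2, the logging level has to be raised; a small sketch of the relevant postgresql.conf lines follows, assuming logging to files rather than syslog:

# postgresql.conf -- surface autovacuum activity in the server log
redirect_stderr = on                    # capture stderr into rotated log files
log_directory = 'pg_log'
log_min_messages = debug2               # the debug levels progressively reveal which
                                        # tables autovacuum chose to VACUUM and ANALYZE
log_line_prefix = '%t %p '              # timestamp and PID make the entries easier to scan

As noted above, stats_start_collector and stats_row_level must both be on for autovacuum to run at all on 8.2.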
[ { "msg_contents": "Hi,\n\nI wonder about differences in performance between two scenarios:\n\nBackground:\nTable A, ~50,000 records\nTable B, ~3,000,000 records (~20 cols)\nTable C, ~30,000,000 records (~10 cols)\n\na query every 3sec. with limit 10\n\nTable C depends on Table B wich depends on Table A, int8 foreign key, btree index\n\n* consider it a read only scenario (load data only in night, with time for vacuum analyze daily)\n* im required to show records from Table C, but also with some (~5cols) info from Table B \n* where clause always contains the foreign key to Table A\n* where clause may contain further 1-10 search parameter\n\n\nScenario A)\nsimply inner join Table B + C\n\nScenario B)\nwith use of trigger on insert/update I could push the required information from table B down to table C.\n-> so i would only require to select from table C.\n\n\nMy question:\n1) From your experience ... how much faster (approximately) in percent do you regard Scenario B faster than A ?\n\n2) any other tips for such a read only scenario\n\nThx for any attention :-)\nWalter\n-- \nGMX FreeMail: 1 GB Postfach, 5 E-Mail-Adressen, 10 Free SMS.\nAlle Infos und kostenlose Anmeldung: http://www.gmx.net/de/go/freemail\n", "msg_date": "Tue, 04 Sep 2007 20:53:21 +0200", "msg_from": "\"Walter Mauritz\" <[email protected]>", "msg_from_op": true, "msg_subject": "join tables vs. denormalization by trigger" }, { "msg_contents": "On Tue, 2007-09-04 at 20:53 +0200, Walter Mauritz wrote:\n> Hi,\n> \n> I wonder about differences in performance between two scenarios:\n> \n> Background:\n> Table A, ~50,000 records\n> Table B, ~3,000,000 records (~20 cols)\n> Table C, ~30,000,000 records (~10 cols)\n> \n> a query every 3sec. with limit 10\n> \n> Table C depends on Table B wich depends on Table A, int8 foreign key, btree index\n> \n> * consider it a read only scenario (load data only in night, with time for vacuum analyze daily)\n> * im required to show records from Table C, but also with some (~5cols) info from Table B \n> * where clause always contains the foreign key to Table A\n> * where clause may contain further 1-10 search parameter\n> \n> \n> Scenario A)\n> simply inner join Table B + C\n> \n> Scenario B)\n> with use of trigger on insert/update I could push the required information from table B down to table C.\n> -> so i would only require to select from table C.\n> \n> \n> My question:\n> 1) From your experience ... how much faster (approximately) in percent do you regard Scenario B faster than A ?\n\nYou're assuming that B is always going to be faster than A, which\ncertainly isn't a foregone conclusion. Let's say that you average 10\nbytes per column. In scenario A, the total data size is then roughly\n3,000,000 * 20 * 10 + 30,000,000 * 10 * 10 = 3.6 GiB. In scenario B due\nto your denormalization, the total data size is more like 30,000,000 *\n30 * 10 = 9 GiB, or 2.5 times more raw data.\n\nThat's a lot of extra disk IO, unless your database will always fit in\nmemory in both scenarios.\n\nAlthough you didn't provide enough data to answer with certainty, I\nwould go on the assumption that A is going to be faster than B. But\neven if it weren't, remember that premature optimization is the root of\nall evil. If you try A and it doesn't perform fast enough, then you can\nalways try B later to see if it works any better.\n\n-- Mark Lewis\n", "msg_date": "Tue, 04 Sep 2007 12:18:00 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join tables vs. 
denormalization by trigger" }, { "msg_contents": "Hello,\n\nI had a similar issue and -atfer testing - decided to merge the tables\nB and C into a single table.\nIn my case the resulting table contains a large proportion of nulls\nwhich limits the size increase...\nYou'll have to do some testing with your data to evaluate the\nperformance gain.\n\nHope to help,\n\nMarc \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Walter\nMauritz\nSent: Tuesday, September 04, 2007 8:53 PM\nTo: [email protected]\nSubject: [PERFORM] join tables vs. denormalization by trigger\n\nHi,\n\nI wonder about differences in performance between two scenarios:\n\nBackground:\nTable A, ~50,000 records\nTable B, ~3,000,000 records (~20 cols)\nTable C, ~30,000,000 records (~10 cols)\n\na query every 3sec. with limit 10\n\nTable C depends on Table B wich depends on Table A, int8 foreign key,\nbtree index\n\n* consider it a read only scenario (load data only in night, with time\nfor vacuum analyze daily)\n* im required to show records from Table C, but also with some (~5cols)\ninfo from Table B\n* where clause always contains the foreign key to Table A\n* where clause may contain further 1-10 search parameter\n\n\nScenario A)\nsimply inner join Table B + C\n\nScenario B)\nwith use of trigger on insert/update I could push the required\ninformation from table B down to table C.\n-> so i would only require to select from table C.\n\n\nMy question:\n1) From your experience ... how much faster (approximately) in percent\ndo you regard Scenario B faster than A ?\n\n2) any other tips for such a read only scenario\n\nThx for any attention :-)\nWalter\n-- \nGMX FreeMail: 1 GB Postfach, 5 E-Mail-Adressen, 10 Free SMS.\nAlle Infos und kostenlose Anmeldung: http://www.gmx.net/de/go/freemail\n\n---------------------------(end of broadcast)---------------------------\nTIP 9: In versions below 8.0, the planner will ignore your desire to\n choose an index scan if your joining column's datatypes do not\n match\n", "msg_date": "Wed, 5 Sep 2007 00:48:41 +0200", "msg_from": "\"Marc Mamin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: join tables vs. denormalization by trigger" } ]
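If scenario B does get tried after measuring scenario A, the push-down trigger could look roughly like the sketch below. Every name in it (table_b, table_c, b_id, b_col1, b_col2) is a placeholder for illustration rather than the original poster's schema, and plpgsql must already be installed in the database (CREATE LANGUAGE plpgsql) for it to work.

-- Copy the handful of columns needed for display from table_b into table_c
-- whenever a table_c row is inserted or updated.
CREATE OR REPLACE FUNCTION table_c_denorm() RETURNS trigger AS $$
BEGIN
    SELECT b.b_col1, b.b_col2
      INTO NEW.b_col1, NEW.b_col2
      FROM table_b b
     WHERE b.b_id = NEW.b_id;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table_c_denorm_trg
    BEFORE INSERT OR UPDATE ON table_c
    FOR EACH ROW EXECUTE PROCEDURE table_c_denorm();

Because the data is loaded at night and read during the day, later changes in table_b would also have to be propagated (for example with an UPDATE trigger on table_b); that maintenance, plus the roughly 2.5 times larger table estimated above, is the price paid for skipping the join.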
[ { "msg_contents": "A client is moving their postgresql db to a brand new Windows 2003 x64 \nserver with 2 quad cores and 32GB of RAM. It is a dedicated server to run \n8.2.4.\n\nThe server typically will have less than 10 users. The primary use of this \nserver is to host a database that is continuously being updated by data \nconsolidation and matching software software that hits the server very hard. \nThere are typically eight such processes running at any one time. The \nsoftware extensively exploits postgresql native fuzzy string for data \nmatching. The SQL is dynamically generated by the software and consists of \nlarge, complex joins. (the structure of the joins change as the software \nadapts its matching strategies).\n\nI would like to favour the needs of the data matching software, and the \nserver is almost exclusivly dedicated to PostgreSQL.\n\nI have made some tentative modifications to the default postgres.config file \n(see below), but I don't think I've scratched the surface of what this new \nsystem is capable of. Can I ask - given my client's needs and this new, \npowerful server and the fact that the server typically has a small number of \nextremely busy processes, what numbers they would change, and what the \nrecommendations would be?\n\nThanks!\n\nCarlo\n\nmax_connections = 100\nshared_buffers = 100000\nwork_mem = 1000000\nmax_fsm_pages = 204800\nmax_fsm_relations = 1500\nvacuum_cost_delay = 40\nbgwriter_lru_maxpages = 100\nbgwriter_all_maxpages = 100\ncheckpoint_segments = 64\ncheckpoint_warning = 290\neffective_cache_size = 375000\nstats_command_string = on\nstats_start_collector = on\nstats_row_level = on\nautovacuum = on\nautovacuum_naptime = 1min\nautovacuum_vacuum_threshold = 500\nautovacuum_analyze_threshold = 250\nautovacuum_vacuum_scale_factor = 0.2\nautovacuum_analyze_scale_factor = 0.1\n\n", "msg_date": "Tue, 4 Sep 2007 18:45:24 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/4/07, Carlo Stonebanks <[email protected]> wrote:\n> A client is moving their postgresql db to a brand new Windows 2003 x64\n> server with 2 quad cores and 32GB of RAM. It is a dedicated server to run\n> 8.2.4.\n\nAnd what does the drive subsystem look like? All that horsepower\nisn't going to help if all your data is sitting on an inferior drive\nsubsystem.\n\n> The server typically will have less than 10 users. The primary use of this\n> server is to host a database that is continuously being updated by data\n> consolidation and matching software software that hits the server very hard.\n> There are typically eight such processes running at any one time. The\n> software extensively exploits postgresql native fuzzy string for data\n> matching. The SQL is dynamically generated by the software and consists of\n> large, complex joins. (the structure of the joins change as the software\n> adapts its matching strategies).\n>\n> I would like to favour the needs of the data matching software, and the\n> server is almost exclusivly dedicated to PostgreSQL.\n>\n> I have made some tentative modifications to the default postgres.config file\n> (see below), but I don't think I've scratched the surface of what this new\n> system is capable of. 
Can I ask - given my client's needs and this new,\n> powerful server and the fact that the server typically has a small number of\n> extremely busy processes, what numbers they would change, and what the\n> recommendations would be?\n>\n> Thanks!\n>\n> Carlo\n>\n> max_connections = 100\n> shared_buffers = 100000\n> work_mem = 1000000\n\nEven with only 10 users, 1 gig work_mem is extremely high. (without a\nunit, work_mem is set in k on 8.2.x) 10000 would be much more\nreasonable.\n\nOTOH, shared_buffers, at 100000 is only setting it to 100 meg. that's\npretty small on a machine with 32 gig. Also, I recommend setting\nvalues more readable, like 500MB in postgresql.conf. Much easier to\nread than 100000...\n\n> effective_cache_size = 375000\n\nThis seems low by an order of magnitude or two.\n\nBut the most important thing is what you've left out. What kind of\nI/O does this machine have. It's really important for something that\nsounds like an OLAP server.\n", "msg_date": "Tue, 4 Sep 2007 18:03:11 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Carlo Stonebanks wrote:\n> A client is moving their postgresql db to a brand new Windows 2003 x64 \n> server with 2 quad cores and 32GB of RAM. It is a dedicated server to run \n> 8.2.4.\n\nLarge shared_buffers and Windows do not mix. Perhaps you should leave\nthe shmem config low, so that the kernel can cache the file pages.\n\n> The server typically will have less than 10 users. The primary use of this \n> server is to host a database that is continuously being updated by data \n> consolidation and matching software software that hits the server very \n> hard. There are typically eight such processes running at any one time. The \n> software extensively exploits postgresql native fuzzy string for data \n> matching. The SQL is dynamically generated by the software and consists of \n> large, complex joins. (the structure of the joins change as the software \n> adapts its matching strategies).\n\nIt sounds like you will need a huge lot of vacuuming effort to keep up.\nMaybe you should lower autovac scale factors so that your tables are\nvisited more frequently. A vacuum_delay of 40 sounds like too much\nthough.\n\nSince you didn't describe your disk configuration, it is most likely not\nreally prepared to handle high I/O load. Maybe you should fix that.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Tue, 4 Sep 2007 19:06:39 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/4/07, Alvaro Herrera <[email protected]> wrote:\n> Carlo Stonebanks wrote:\n> > A client is moving their postgresql db to a brand new Windows 2003 x64\n> > server with 2 quad cores and 32GB of RAM. It is a dedicated server to run\n> > 8.2.4.\n>\n> Large shared_buffers and Windows do not mix. Perhaps you should leave\n> the shmem config low, so that the kernel can cache the file pages.\n\nEgads, I'd completely missed the word Windows up there.\n\nI would highly recommend building the postgresql server on a unixish\nOS. Even with minimum tuning, I'd expect the same box running linux\nor freebsd to stomp windows pretty heavily in the performance\ndepartment.\n\nBut yeah, the I/O, that's the big one. 
If it's just a single or a\ncouple of IDE drives, it's not gonna be able to handle much load.\n", "msg_date": "Tue, 4 Sep 2007 18:15:16 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 05.09.2007 01:15, Scott Marlowe wrote:\n> On 9/4/07, Alvaro Herrera <[email protected]> wrote:\n>> Carlo Stonebanks wrote:\n>>> A client is moving their postgresql db to a brand new Windows 2003 x64\n>>> server with 2 quad cores and 32GB of RAM. It is a dedicated server to run\n>>> 8.2.4.\n>> Large shared_buffers and Windows do not mix. Perhaps you should leave\n>> the shmem config low, so that the kernel can cache the file pages.\n> \n> But yeah, the I/O, that's the big one. If it's just a single or a\n> couple of IDE drives, it's not gonna be able to handle much load.\n\nRight, additionally NTFS is really nothing to use on any serious disc array.\n\n\n-- \nRegards,\nHannes Dorbath\n", "msg_date": "Wed, 05 Sep 2007 10:20:31 +0200", "msg_from": "Hannes Dorbath <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Unfortunately, LINUX is not an option at this time. We looked into it; there\nis no *NIX expertise in the enterprise. However, I have raised this issue in\nvarious forums before, and when pressed no one was willing to say that \"*NIX\n*DEFINITELY* outperforms Windows\" for what my client is doing (or if it did\noutperform Windows, that it would outperform so significantly that it\nmerited the move).\n\nWas this incorrect? Can my client DEFINITELY expect a significant\nimprovement in performance for what he is doing?\n\nDISK subsystem reports: SCSI/RAID Smart Array E200 controller using RAID 1.\n\n\n\n\n\n-----Original Message-----\nFrom: Scott Marlowe [mailto:[email protected]] \nSent: September 4, 2007 7:15 PM\nTo: Alvaro Herrera\nCc: Carlo Stonebanks; [email protected]\nSubject: Re: [PERFORM] Performance on 8CPU's and 32GB of RAM\n\nOn 9/4/07, Alvaro Herrera <[email protected]> wrote:\n> Carlo Stonebanks wrote:\n> > A client is moving their postgresql db to a brand new Windows 2003 x64\n> > server with 2 quad cores and 32GB of RAM. It is a dedicated server to\nrun\n> > 8.2.4.\n>\n> Large shared_buffers and Windows do not mix. Perhaps you should leave\n> the shmem config low, so that the kernel can cache the file pages.\n\nEgads, I'd completely missed the word Windows up there.\n\nI would highly recommend building the postgresql server on a unixish\nOS. Even with minimum tuning, I'd expect the same box running linux\nor freebsd to stomp windows pretty heavily in the performance\ndepartment.\n\nBut yeah, the I/O, that's the big one. If it's just a single or a\ncouple of IDE drives, it's not gonna be able to handle much load.\n\n\n", "msg_date": "Wed, 5 Sep 2007 11:06:03 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> Unfortunately, LINUX is not an option at this time. We looked into it; there\n> is no *NIX expertise in the enterprise. 
However, I have raised this issue in\n> various forums before, and when pressed no one was willing to say that \"*NIX\n> *DEFINITELY* outperforms Windows\" for what my client is doing (or if it did\n> outperform Windows, that it would outperform so significantly that it\n> merited the move).\n\nWhere unixes generally outperform windows is in starting up new\nbackends, better file systems, and handling very large shared_buffer\nsettings.\n\n> Was this incorrect? Can my client DEFINITELY expect a significant\n> improvement in performance for what he is doing?\n\nDepends on what you mean by incorrect. Windows can do ok. But pgsql\nis still much newer on windows than on unix / linux and there are\nstill some issues that pop up here and there that are being worked on.\n Plus there's still no real definitive set of guidelines to tune on\nWindows just yet.\n\n> DISK subsystem reports: SCSI/RAID Smart Array E200 controller using RAID 1.\n\nSo, just two disks? for the load you mentioned before, you should\nprobably be looking at at least 4 maybe 6 or 8 disks in a RAID-10.\nAnd a battery backed cache. I've seen reports on this list of the\nE300 being a pretty mediocre performer. A better controller might be\nworth looking into as well.\n", "msg_date": "Wed, 5 Sep 2007 10:15:42 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "> Right, additionally NTFS is really nothing to use on any serious disc \n> array.\n\nDo you mean that I will not see any big improvement if I upgrade the disk \nsubsystem because the client is using NTFS (i.e. Windows)\n\n", "msg_date": "Wed, 5 Sep 2007 11:23:05 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > Right, additionally NTFS is really nothing to use on any serious disc\n> > array.\n>\n> Do you mean that I will not see any big improvement if I upgrade the disk\n> subsystem because the client is using NTFS (i.e. Windows)\n\nNo, I think he's referring more to the lack of reliability of NTFS\ncompared to UFS / ZFS / JFS / XFS on unixen.\n\nA faster disk subsystem will likely be a real help.\n", "msg_date": "Wed, 5 Sep 2007 10:45:24 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": ">> Large shared_buffers and Windows do not mix. Perhaps you should leave\nthe shmem config low, so that the kernel can cache the file pages.\n<<\n\nIs there a problem BESIDES the one that used to cause windows to fail to\nallocate memory in blocks larger than 1.5GB?\n\nThe symptom of this problem was that postgresql would just refuse to\nrestart. Microsoft released a patch for this problem and we can now start\npostgresql with larger shared buffers. If this is indeed the problem that\nyou refer to - and it has indeed been solved by Microsoft - is there a down\nside to this?\n\n\n>> It sounds like you will need a huge lot of vacuuming effort to keep up.\nMaybe you should lower autovac scale factors so that your tables are\nvisited more frequently. 
A vacuum_delay of 40 sounds like too much\nthough.\n<<\n\nDoes autovacuum not impede performance while it is vacuuming a table?\n\n\n", "msg_date": "Wed, 5 Sep 2007 11:57:19 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Carlo Stonebanks wrote:\n\n> >> It sounds like you will need a huge lot of vacuuming effort to keep up.\n> Maybe you should lower autovac scale factors so that your tables are\n> visited more frequently. A vacuum_delay of 40 sounds like too much\n> though.\n> <<\n> \n> Does autovacuum not impede performance while it is vacuuming a table?\n\nIt causes I/O. Not sure what else you have in mind. vacuum_delay\nthrottles the I/O usage, at the expense of longer vacuum times.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Wed, 5 Sep 2007 12:03:48 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> >> Large shared_buffers and Windows do not mix. Perhaps you should leave\n> the shmem config low, so that the kernel can cache the file pages.\n> <<\n>\n> Is there a problem BESIDES the one that used to cause windows to fail to\n> allocate memory in blocks larger than 1.5GB?\n>\n> The symptom of this problem was that postgresql would just refuse to\n> restart. Microsoft released a patch for this problem and we can now start\n> postgresql with larger shared buffers. If this is indeed the problem that\n> you refer to - and it has indeed been solved by Microsoft - is there a down\n> side to this?\n\nThere have been some reports that performance-wise large shared buffer\nsettings don't work as well on windows as they do on linux / unix.\nDon't know myself. Just what I've read.\n\n> >> It sounds like you will need a huge lot of vacuuming effort to keep up.\n> Maybe you should lower autovac scale factors so that your tables are\n> visited more frequently. A vacuum_delay of 40 sounds like too much\n> though.\n> <<\n>\n> Does autovacuum not impede performance while it is vacuuming a table?\n\nOf course vacuum impedes performance. Depends on your I/O subsystem.\nBy adjusting your vacuum parameters in postgresql.conf, the impact can\nbe made pretty small. But not vacuuming has a slow but sure\ndeteriorating effect over time. So, it's generally better to let\nautovacuum take care of things and run vacuum with a reasonable set of\nparameters so it doesn't eat all your I/O bandwidth.\n", "msg_date": "Wed, 5 Sep 2007 11:06:07 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > > Right, additionally NTFS is really nothing to use on any serious disc\n> > > array.\n> >\n> > Do you mean that I will not see any big improvement if I upgrade the disk\n> > subsystem because the client is using NTFS (i.e. Windows)\n>\n> No, I think he's referring more to the lack of reliability of NTFS\n> compared to UFS / ZFS / JFS / XFS on unixen.\n\nLack of reliability compared to _UFS_? 
Can you elaborate on this?\n", "msg_date": "Wed, 5 Sep 2007 10:16:51 -0700", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Trevor Talbot wrote:\n> \n> Lack of reliability compared to _UFS_? Can you elaborate on this?\n\nWhat elaboration's needed? UFS seems to have one of the longest\nhistories of support from major vendors of any file system supported\non any OS (Solaris, HP-UX, SVR4, Tru64 Unix all use it).\n\nCan you elaborate on your question?\n", "msg_date": "Wed, 05 Sep 2007 11:09:44 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Trevor Talbot <[email protected]> wrote:\n> On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> > On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > > > Right, additionally NTFS is really nothing to use on any serious disc\n> > > > array.\n> > >\n> > > Do you mean that I will not see any big improvement if I upgrade the disk\n> > > subsystem because the client is using NTFS (i.e. Windows)\n> >\n> > No, I think he's referring more to the lack of reliability of NTFS\n> > compared to UFS / ZFS / JFS / XFS on unixen.\n>\n> Lack of reliability compared to _UFS_? Can you elaborate on this?\n\nNot a lot. Back when I was an NT 4.0 sysadmin, I had many many\noccasions where NTFS simply corrupted for no apparent reason. No\nsystem crash, no obvious problems with the drive, and bang suddenly a\nfile goes corrupted. About that time I gave up on Windows and started\nsupporting Linux and Solaris. Neither is perfect, but I've never had\neither of them just corrupt a file on good hardware for no reason.\nKeep in mind, the machine that was corrupting files for no reason went\non to be our development / staging linux server, handling quite a\nheavy load of developers on it, and never once had a corrupted file on\nit.\n\nWith the newer journalling file systems on linux, solaris and BSD, you\nget both good performance and very reliable behaviour. Maybe NTFS has\ngotten better since then, but I don't personally know.\n", "msg_date": "Wed, 5 Sep 2007 14:29:25 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Trevor Talbot <[email protected]> wrote:\n> On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> > On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > > > Right, additionally NTFS is really nothing to use on any serious disc\n> > > > array.\n> > >\n> > > Do you mean that I will not see any big improvement if I upgrade the disk\n> > > subsystem because the client is using NTFS (i.e. Windows)\n> >\n> > No, I think he's referring more to the lack of reliability of NTFS\n> > compared to UFS / ZFS / JFS / XFS on unixen.\n>\n> Lack of reliability compared to _UFS_? Can you elaborate on this?\n\nOh, the other issue that NTFS still seems to suffer from that most\nunix file systems have overcome is fragmentation. Since you can't\ndefrag a live system, you have to plan time to take down the db should\nthe NTFS partition for your db get overly fragmented.\n\nAnd there's the issue that with windows / NTFS that when one process\nopens a file for read, it locks it for all other users. 
This means\nthat things like virus scanners can cause odd, unpredictable failures\nof your database.\n", "msg_date": "Wed, 5 Sep 2007 14:36:43 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 2007-09-05 Scott Marlowe wrote:\n> And there's the issue that with windows / NTFS that when one process\n> opens a file for read, it locks it for all other users. This means\n> that things like virus scanners can cause odd, unpredictable failures\n> of your database.\n\nUh... what? Locking isn't done by the filesystem but by applications\n(which certainly can decide to not lock a file when opening it). And no\none in his right mind would ever have a virus scanner access the files\nof a running database, regardless of operating system or filesystem.\n\nRegards\nAngar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Wed, 5 Sep 2007 23:02:12 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> On 9/5/07, Trevor Talbot <[email protected]> wrote:\n> > On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> > > On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > > > > Right, additionally NTFS is really nothing to use on any serious disc\n> > > > > array.\n> > > >\n> > > > Do you mean that I will not see any big improvement if I upgrade the disk\n> > > > subsystem because the client is using NTFS (i.e. Windows)\n> > >\n> > > No, I think he's referring more to the lack of reliability of NTFS\n> > > compared to UFS / ZFS / JFS / XFS on unixen.\n> >\n> > Lack of reliability compared to _UFS_? Can you elaborate on this?\n\n> Not a lot. Back when I was an NT 4.0 sysadmin, I had many many\n> occasions where NTFS simply corrupted for no apparent reason. No\n> system crash, no obvious problems with the drive, and bang suddenly a\n> file goes corrupted. About that time I gave up on Windows and started\n> supporting Linux and Solaris. Neither is perfect, but I've never had\n> either of them just corrupt a file on good hardware for no reason.\n\nAnecdotal then. That's fine, but needs to be qualified as such, not\npresented as a general case that everyone with experience agrees is\ntrue.\n\nI mean, I've got a box running OpenBSD UFS that's lost files on me,\nwhile my NTFS boxes have been fine other than catastrophic drive\nfailure. But that anecdote doesn't actually mean anything, since it's\nuseless in the general case. (The issues on that one UFS box have a\nknown cause anyway, related to power failures.)\n\n> With the newer journalling file systems on linux, solaris and BSD, you\n> get both good performance and very reliable behaviour. Maybe NTFS has\n> gotten better since then, but I don't personally know.\n\nThe thing is, most UFS implementations I'm familiar with don't\njournal; that's what prompted my question in the first place, since I\nfigured you were thinking along those lines. NTFS is\nmetadata-journaling, like most of the others, and has continued to\nimprove over time.\n\nI took the original comment to be about performance, actually. NTFS's\njournaling method tends to get bashed in that department compared to\nsome of the more modern filesystems. 
I don't have any experience with\nintensive I/O on large arrays to know.\n\nHopefully he'll clarify what he meant.\n\n> Oh, the other issue that NTFS still seems to suffer from that most\n> unix file systems have overcome is fragmentation. Since you can't\n> defrag a live system, you have to plan time to take down the db should\n> the NTFS partition for your db get overly fragmented.\n\nLive defragmentation has been supported since NT4, although Microsoft\nnever included tools or publicly documented it until 2000. The NTFS\nimplementation in Windows doesn't make much effort to avoid\nfragmentation, but that varies among implementations of the other\nfilesystems too. Modern ones tend to be better at it.\n\n> And there's the issue that with windows / NTFS that when one process\n> opens a file for read, it locks it for all other users. This means\n> that things like virus scanners can cause odd, unpredictable failures\n> of your database.\n\nIt's simply a Windows platform default for file I/O; there's no hard\nlimitation there, and it's not about a particular filesystem. In the\ncase of antivirus vs database, it's more of an administrative issue:\nconfigure the AV to ignore the database files, harass the AV vendor to\nget programmers with clue, find another AV vendor, or just don't run\nAV on your dedicated database server.\n", "msg_date": "Wed, 5 Sep 2007 14:08:41 -0700", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n> On 2007-09-05 Scott Marlowe wrote:\n> > And there's the issue that with windows / NTFS that when one process\n> > opens a file for read, it locks it for all other users. This means\n> > that things like virus scanners can cause odd, unpredictable failures\n> > of your database.\n>\n> Uh... what? Locking isn't done by the filesystem but by applications\n> (which certainly can decide to not lock a file when opening it). And no\n> one in his right mind would ever have a virus scanner access the files\n> of a running database, regardless of operating system or filesystem.\n\nExactly, the default is to lock the file. The application has to\nexplicitly NOT lock it. It's the opposite of linux.\n\nAnd be careful, you're insulting a LOT of people who have come on this\nlist with the exact problem of having their anti-virus scramble the\nbrain of their postgresql installation. It's a far more common\nproblem than it should be.\n", "msg_date": "Wed, 5 Sep 2007 16:59:17 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On Wed, 2007-09-05 at 14:36 -0500, Scott Marlowe wrote:\n> On 9/5/07, Trevor Talbot <[email protected]> wrote:\n> > On 9/5/07, Scott Marlowe <[email protected]> wrote:\n> > > On 9/5/07, Carlo Stonebanks <[email protected]> wrote:\n> > > > > Right, additionally NTFS is really nothing to use on any serious disc\n> > > > > array.\n> > > > Do you mean that I will not see any big improvement if I upgrade the disk\n> > > > subsystem because the client is using NTFS (i.e. Windows)\n\nI haven't had a corrupt NTFS filesystem is ages; even with hardware\nfailures. 
If NTFS was inherently unstable there wouldn't be hundreds of\nthousands of large M$-SQL and Exchange instances.\n\n> And there's the issue that with windows / NTFS that when one process\n> opens a file for read, it locks it for all other users. \n\nThis isn't true; the mode of a file open is up to the application.\nPossibly lots of Windows applications are stupid or sloppy in how they\nmanage files but that isn't a flaw in NTFS.\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Wed, 05 Sep 2007 18:15:58 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 2007-09-05 Scott Marlowe wrote:\n> On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n>> On 2007-09-05 Scott Marlowe wrote:\n>>> And there's the issue that with windows / NTFS that when one process\n>>> opens a file for read, it locks it for all other users. This means\n>>> that things like virus scanners can cause odd, unpredictable\n>>> failures of your database.\n>>\n>> Uh... what? Locking isn't done by the filesystem but by applications\n>> (which certainly can decide to not lock a file when opening it). And\n>> no one in his right mind would ever have a virus scanner access the\n>> files of a running database, regardless of operating system or\n>> filesystem.\n> \n> Exactly, the default is to lock the file. The application has to\n> explicitly NOT lock it. It's the opposite of linux.\n\nYes. So? It's still up to the application, and it still has nothing at\nall to do with the filesystem.\n\n> And be careful, you're insulting a LOT of people who have come on this\n> list with the exact problem of having their anti-virus scramble the\n> brain of their postgresql installation. It's a far more common\n> problem than it should be.\n\nHow does that make it any less stup^Wintellectually challenged?\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Thu, 6 Sep 2007 01:19:21 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n> On 2007-09-05 Scott Marlowe wrote:\n> > On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n> >> On 2007-09-05 Scott Marlowe wrote:\n> >>> And there's the issue that with windows / NTFS that when one process\n> >>> opens a file for read, it locks it for all other users. This means\n> >>> that things like virus scanners can cause odd, unpredictable\n> >>> failures of your database.\n> >>\n> >> Uh... what? Locking isn't done by the filesystem but by applications\n> >> (which certainly can decide to not lock a file when opening it). And\n> >> no one in his right mind would ever have a virus scanner access the\n> >> files of a running database, regardless of operating system or\n> >> filesystem.\n> >\n> > Exactly, the default is to lock the file. The application has to\n> > explicitly NOT lock it. It's the opposite of linux.\n>\n> Yes. So? It's still up to the application, and it still has nothing at\n> all to do with the filesystem.\n\nAnd if you look at my original reply, you'll see that I said WINDOWS /\nNTFS. 
not just NTFS. i.e. it's a windowsism.\n\n>\n> > And be careful, you're insulting a LOT of people who have come on this\n> > list with the exact problem of having their anti-virus scramble the\n> > brain of their postgresql installation. It's a far more common\n> > problem than it should be.\n>\n> How does that make it any less stup^Wintellectually challenged?\n\nIt doesn't. It's just not necessary to insult people to make a point.\n", "msg_date": "Wed, 5 Sep 2007 19:12:04 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nCarlo Stonebanks wrote:\n> Unfortunately, LINUX is not an option at this time. We looked into it; there\n> is no *NIX expertise in the enterprise. However, I have raised this issue in\n> various forums before, and when pressed no one was willing to say that \"*NIX\n> *DEFINITELY* outperforms Windows\" for what my client is doing (or if it did\n> outperform Windows, that it would outperform so significantly that it\n> merited the move).\n> \n> Was this incorrect? Can my client DEFINITELY expect a significant\n> improvement in performance for what he is doing?\n> \n> DISK subsystem reports: SCSI/RAID Smart Array E200 controller using RAID 1.\n> \n\nBased on the hardware config you mention, it sounds to me as if you put\nall your money in the wrong basket (e.g; way too much ram and cpu / not\nenough IO).\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG32r5ATb/zqfZUUQRAkAgAJ97aaOJZBbf8PobFjWs2v2fPh67PQCfeDVF\nmU6DA7mb3XfWDlpRsOfLi0U=\n=t7b9\n-----END PGP SIGNATURE-----\n", "msg_date": "Wed, 05 Sep 2007 19:50:34 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 2007-09-05 Scott Marlowe wrote:\n> On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n>> On 2007-09-05 Scott Marlowe wrote:\n>>> On 9/5/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n>>>> On 2007-09-05 Scott Marlowe wrote:\n>>>>> And there's the issue that with windows / NTFS that when one\n>>>>> process opens a file for read, it locks it for all other users.\n>>>>> This means that things like virus scanners can cause odd,\n>>>>> unpredictable failures of your database.\n>>>>\n>>>> Uh... what? Locking isn't done by the filesystem but by\n>>>> applications (which certainly can decide to not lock a file when\n>>>> opening it). And no one in his right mind would ever have a virus\n>>>> scanner access the files of a running database, regardless of\n>>>> operating system or filesystem.\n>>>\n>>> Exactly, the default is to lock the file. The application has to\n>>> explicitly NOT lock it. It's the opposite of linux.\n>>\n>> Yes. So? It's still up to the application, and it still has nothing\n>> at all to do with the filesystem.\n> \n> And if you look at my original reply, you'll see that I said WINDOWS /\n> NTFS. not just NTFS. i.e. 
it's a windowsism.\n\nI am aware of what you wrote. However, since the locking behaviour is \nexactly the same with Windows/FAT32 or Windows/%ANY_OTHER_FILESYSTEM%\nyour statement is still wrong.\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Thu, 6 Sep 2007 08:47:46 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Scott Marlowe wrote:\n> And there's the issue that with windows / NTFS that when one process\n> opens a file for read, it locks it for all other users. This means\n> that things like virus scanners can cause odd, unpredictable failures\n> of your database.\n>\n> \nCan you provide some justification for this?\n\nJames\n\n", "msg_date": "Thu, 06 Sep 2007 21:30:55 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Scott Marlowe wrote:\n> Where unixes generally outperform windows is in starting up new\n> backends, better file systems, and handling very large shared_buffer\n> settings.\n> \n\nWhy do you think that UNIX systems are better at handling large shared \nbuffers than Wndows?\n32 bit Windows systems can suffer from fragmented address space, to be \nsure, but if the\nperformance of the operating-system supplied mutex or semaphore isn't \ngood enough, you can\njust use the raw atomic ops.\n\nIf what you mean is that pg has a design that's heavily oriented towards \nthings that tend to\nbe cheap on POSIX and doesn't use the core Win32 features effectively, \nthen let's track\nthat as an optimisation opportunity for the Win32 port.\n\n", "msg_date": "Thu, 06 Sep 2007 21:39:26 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "<<\nIf what you mean is that pg has a design that's heavily oriented towards \nthings that tend to\nbe cheap on POSIX and doesn't use the core Win32 features effectively, \nthen let's track\nthat as an optimisation opportunity for the Win32 port.\n>>\n\nIsn't it just easier to assume that Windows Server can't do anything right?\n;-)\n\n\n", "msg_date": "Thu, 6 Sep 2007 16:46:27 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Carlo Stonebanks wrote:\n> Isn't it just easier to assume that Windows Server can't do anything right?\n> ;-)\n>\n> \nWell, avoiding the ;-) - people do, and its remarkably foolish of them. Its\na long-standing whinge that many people with a UNIX-background seem to\njust assume that Windows sucks, but you could run 40,000 sockets from a\nsingle Win32 process for a while and some well-known UNIX systems would\nstill struggle to do this, libevent or no. 
Admitedly, the way a Win32\napp is architected would be rather different from a typical POSIX one.\n\nWindows has been a cheap target bt its remarkably adequate and the\nTPC results speak for themselves.\n\n\n\n\n\n", "msg_date": "Thu, 06 Sep 2007 21:55:11 +0100", "msg_from": "James Mansion <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/6/07, James Mansion <[email protected]> wrote:\n> Scott Marlowe wrote:\n> > And there's the issue that with windows / NTFS that when one process\n> > opens a file for read, it locks it for all other users. This means\n> > that things like virus scanners can cause odd, unpredictable failures\n> > of your database.\n> >\n> >\n> Can you provide some justification for this?\n\nSeeing as I didn't write Windows or any of the plethora of anti-virus\nsoftware, no I really can't. It's unforgivable behaviour.\n\nCan I provide evidence that it happens? Just read the archives of\nthis list for the evidence. I've seen it often enough to know that\nmost anti-virus software seems to open files in exclusive mode and\ncause problems for postgresql, among other apps.\n\n\n> Why do you think that UNIX systems are better at handling large shared\n> buffers than Wndows?\n\nBecause we've seen lots of problems with large shared buffers on windows here.\n\nNow, maybe for a windows specific app it's all fine and dandy. but\nfor the way pgsql works, windows and large shared buffers don't seem\nto get along.\n\nI'm done. Use windows all you want. I'll stick to unix. It seems to\njust work for pgsql.\n", "msg_date": "Thu, 6 Sep 2007 15:59:34 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "James Mansion escribi�:\n\n> If what you mean is that pg has a design that's heavily oriented\n> towards things that tend to be cheap on POSIX and doesn't use the core\n> Win32 features effectively, then let's track that as an optimisation\n> opportunity for the Win32 port.\n\nAlready done for 8.3 (actual performance improvements still to be\nreported), but that doesn't help those poor users still on 8.2.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/DXLWNGRJD34J\n\"Los dioses no protegen a los insensatos. �stos reciben protecci�n de\notros insensatos mejor dotados\" (Luis Wu, Mundo Anillo)\n", "msg_date": "Thu, 6 Sep 2007 17:21:16 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Wow - it's nice to hear someone say that... out loud.\n\nThanks, you gave me hope!\n\n-----Original Message-----\nFrom: James Mansion [mailto:[email protected]] \nSent: September 6, 2007 4:55 PM\nTo: Carlo Stonebanks\nCc: Scott Marlowe; Alvaro Herrera; [email protected]\nSubject: Re: [PERFORM] Performance on 8CPU's and 32GB of RAM\n\nCarlo Stonebanks wrote:\n> Isn't it just easier to assume that Windows Server can't do anything\nright?\n> ;-)\n>\n> \nWell, avoiding the ;-) - people do, and its remarkably foolish of them. Its\na long-standing whinge that many people with a UNIX-background seem to\njust assume that Windows sucks, but you could run 40,000 sockets from a\nsingle Win32 process for a while and some well-known UNIX systems would\nstill struggle to do this, libevent or no. 
Admitedly, the way a Win32\napp is architected would be rather different from a typical POSIX one.\n\nWindows has been a cheap target bt its remarkably adequate and the\nTPC results speak for themselves.\n\n\n\n\n\n", "msg_date": "Thu, 6 Sep 2007 19:18:40 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "* Scott Marlowe:\n\n> And there's the issue that with windows / NTFS that when one process\n> opens a file for read, it locks it for all other users. This means\n> that things like virus scanners can cause odd, unpredictable failures\n> of your database.\n\nI think most of them open the file in shared/backup mode. The only\nlock that is created by that guards deletion and renaming. It can\nstill lead to obscure failures, but it's not a wholly-eclusive lock.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 07 Sep 2007 10:17:11 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On 9/7/07, Florian Weimer <[email protected]> wrote:\n> * Scott Marlowe:\n>\n> > And there's the issue that with windows / NTFS that when one process\n> > opens a file for read, it locks it for all other users. This means\n> > that things like virus scanners can cause odd, unpredictable failures\n> > of your database.\n>\n> I think most of them open the file in shared/backup mode. The only\n> lock that is created by that guards deletion and renaming. It can\n> still lead to obscure failures, but it's not a wholly-eclusive lock.\n\nWell, there've been a lot of issues with anti-virus and postgresql not\ngetting along. I wonder if pgsql takes out a stronger lock, and when\nit can't get it then the failure happens. Not familiar enough with\nwindows to do more than speculate.\n", "msg_date": "Fri, 7 Sep 2007 11:16:38 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "Scott,\n\nWell, there've been a lot of issues with anti-virus and postgresql not\n> getting along. I wonder if pgsql takes out a stronger lock, and when\n> it can't get it then the failure happens. Not familiar enough with\n> windows to do more than speculate.\n\n\nwithout touching the file-concurrency issues caused by virus scanners:\n\na LOT of the Postgres <-> VirusScanner problems on Windows were caused\nduring the \"postgres spawns a new process and communicates with that process\nvia ipstack\"\n\nMany Virus Scanners seam to have dealt with the TCP/IP stack in a not\ncompatible manner...\n\nHarald\n\n-- \nGHUM Harald Massa\npersuadere et programmare\nHarald Armin Massa\nSpielberger Straße 49\n70435 Stuttgart\n0173/9409607\nfx 01212-5-13695179\n-\nEuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!\n\nScott,Well, there've been a lot of issues with anti-virus and postgresql not\ngetting along.  I wonder if pgsql takes out a stronger lock, and whenit can't get it then the failure happens.  
Not familiar enough withwindows to do more than speculate.without touching the file-concurrency issues caused by virus scanners:\na LOT of the Postgres <-> VirusScanner problems on Windows were caused during the \"postgres spawns a new process and communicates with that process via ipstack\" Many Virus Scanners seam to have dealt with the TCP/IP stack in a not compatible manner...\nHarald-- GHUM Harald Massapersuadere et programmareHarald Armin MassaSpielberger Straße 4970435 Stuttgart0173/9409607fx 01212-5-13695179 -EuroPython 2008 will take place in Vilnius, Lithuania - Stay tuned!", "msg_date": "Fri, 7 Sep 2007 19:32:53 +0200", "msg_from": "\"Harald Armin Massa\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" }, { "msg_contents": "On Wed, Sep 05, 2007 at 11:06:03AM -0400, Carlo Stonebanks wrote:\n> Unfortunately, LINUX is not an option at this time. We looked into it; there\n> is no *NIX expertise in the enterprise. However, I have raised this issue in\n> various forums before, and when pressed no one was willing to say that \"*NIX\n> *DEFINITELY* outperforms Windows\" for what my client is doing (or if it did\n> outperform Windows, that it would outperform so significantly that it\n> merited the move).\n> \n> Was this incorrect? Can my client DEFINITELY expect a significant\n> improvement in performance for what he is doing?\n\nSince we don't know your actual workload, there's no way to predict\nthis. That's what benchmarking is for. If you haven't already bought the\nhardware, I'd strongly recommend benchmarking this before buying\nanything, so that you have a better idea of what your workload looks\nlike. Is it I/O-bound? CPU-bound? Memory?\n\nOne of the fastest ways to non-performance in PostgreSQL is not\nvacuuming frequently enough. Vacuum more, not less, and control IO\nimpact via vacuum_cost_delay. Make sure the FSM is big enough, too.\n\nUnless your database is small enough to fit in-memory, your IO subsystem\nis almost certainly going to kill you. Even if it does fit in memory, if\nyou're doing much writing at all you're going to be in big trouble.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 15:32:42 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Performance on 8CPU's and 32GB of RAM" } ]
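The vacuum discussion above can be made concrete with a short sketch. It is illustrative only: the table name is hypothetical and the delay/limit values are placeholders, not recommendations. vacuum_cost_delay and vacuum_cost_limit are the session-settable knobs Alvaro and Decibel! refer to for throttling vacuum I/O, and a database-wide VACUUM VERBOSE is one way to check whether the free space map (max_fsm_pages) is sized adequately.

-- Illustrative sketch: throttle a manual vacuum in this session so it trades
-- running time for lower I/O impact (vacuum_cost_delay is in milliseconds).
SET vacuum_cost_delay = 10;
SET vacuum_cost_limit = 200;
VACUUM ANALYZE my_big_table;   -- hypothetical table name

-- A database-wide VACUUM VERBOSE ends with a free space map summary, which
-- shows whether max_fsm_pages needs to be raised.
VACUUM VERBOSE;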
[ { "msg_contents": "Hi again, a different question ...\n\nOn a previous upgrade to 7.4 (I think) I used the pg_dumplo contrib \nutility to add the large objects to my restored via pg_dumpall databases \n(not all using large objects).\nI liked it because it was easy and I hadn't to remove the databases with \nlarge objects to reimport them with the dumped via pg_dump versions \nwhich seemed more work (and more possibilities of problems).\nNow I see that theres is not pg_dumplo on contrib directory for 8.2.4 \n(or at least i did'nt found it)...\n\nWhich is the best method to import my large objects in this case?\n\n1.- Import all the stuff via pg_dumpall+psql, drop databases with LO, \nimport LO databases with pg_dump+psql\n2.-Import all the stuff via pg_dumpall+psql, import LO databases with \npg_dump+psql (without delete them)\n3.- ??\n\nThanks in advance\n\n-- \n********************************************************\nDaniel Rubio Rodr�guez\nOASI (Organisme Aut�nom Per la Societat de la Informaci�)\nc/ Assalt, 12\n43003 - Tarragona\nTef.: 977.244.007 - Fax: 977.224.517\ne-mail: drubio a oasi.org\n******************************************************** \n\n\n", "msg_date": "Wed, 05 Sep 2007 09:31:05 +0200", "msg_from": "Daniel Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "No pg_dumplo on 8.2.4" }, { "msg_contents": "Daniel Rubio <[email protected]> writes:\n> Now I see that theres is not pg_dumplo on contrib directory for 8.2.4 \n\nThat's because it's obsolete --- regular pg_dump can handle large\nobjects now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Sep 2007 11:03:22 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: No pg_dumplo on 8.2.4 " }, { "msg_contents": "Hello All,\n\nWe have a postgres setup on solaris 10 with sun cluster for HA purposes.\n2 nodes are configured in the cluster in active-passive mode \nwith pg_data stored on external storage. Everything is working as\nexpected however, when we either switch the resource group from one node\nto other or rg restart on primary node, the apps fails with \"An I/O\nerror occurred while sending to the backend.\" and doesn't recover back\nfrom db failover. All queries to the db give the above error after\nresource group restart.\n\nOur app uses resin container db pooling, with following HA parameters\nset. With same settings, the app recovers just fine with database\nconfigured in non-cluster mode i.e. no sun cluster setup etc.\n\n<database>\n<jndi-name>jdbc/nbbsDB</jndi-name>\n<driver type=\"org.postgresql.Driver\">\n <url>jdbc:postgresql://db-vip:5432/appdbname</url>\n <user>appusr</user>\n <password>apppass</password>\n</driver>\n<max-connections>100</max-connections>\n<max-idle-time>5m</max-idle-time>\n<max-active-time>6h</max-active-time>\n<max-pool-time>24h</max-pool-time>\n<connection-wait-time>30s</connection-wait-time>\n<max-overflow-connections>0</max-overflow-connections>\n<ping-table>pingtable</ping-table>\n<ping>true</ping>\n<ping-interval>60s</ping-interval>\n<prepared-statement-cache-size>10</prepared-statement-cache-size>\n<spy>false</spy>\n</database> \n\nAny pointers to debug this futher is greatly appreciated. We are running\npostgres 8.2.4.\n\nOther thing i noticed in pg_log/server.logs is that whenever i restart\npostgres i get below error, when there is no other postgres running on\n5432.\n\n\"LOG: could not bind IPv6 socket: Cannot assign requested address\nHINT: Is another postmaster already running on port 5432? 
If not, wait\na few seconds and retry.\"\n\nThanks,\nStalin\n", "msg_date": "Wed, 5 Sep 2007 18:33:06 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": false, "msg_subject": "Postgres with Sun Cluster HA/Solaris 10" }, { "msg_contents": "Hello All,\n\nWe have a postgres setup on solaris 10 with sun cluster for HA purposes.\n2 nodes are configured in the cluster in active-passive mode with\npg_data stored on external storage. Everything is working as expected\nhowever, when we either switch the resource group from one node to other\nor rg restart on primary node, the apps fails with \"An I/O error\noccurred while sending to the backend.\" and doesn't recover back from db\nfailover. All queries to the db give the above error after resource\ngroup restart.\n\nOur app uses resin container db pooling, with following HA parameters\nset. With same settings, the app recovers just fine with database\nconfigured in non-cluster mode i.e. no sun cluster setup etc.\n\n<database>\n<jndi-name>jdbc/nbbsDB</jndi-name>\n<driver type=\"org.postgresql.Driver\">\n <url>jdbc:postgresql://db-vip:5432/appdbname</url>\n <user>appusr</user>\n <password>apppass</password>\n</driver>\n<max-connections>100</max-connections>\n<max-idle-time>5m</max-idle-time>\n<max-active-time>6h</max-active-time>\n<max-pool-time>24h</max-pool-time>\n<connection-wait-time>30s</connection-wait-time>\n<max-overflow-connections>0</max-overflow-connections>\n<ping-table>pingtable</ping-table>\n<ping>true</ping>\n<ping-interval>60s</ping-interval>\n<prepared-statement-cache-size>10</prepared-statement-cache-size>\n<spy>false</spy>\n</database> \n\nAny pointers to debug this futher is greatly appreciated. We are running\npostgres 8.2.4.\n\nOther thing i noticed in pg_log/server.logs is that whenever i restart\npostgres i get below error, when there is no other postgres running on\n5432.\n\n\"LOG: could not bind IPv6 socket: Cannot assign requested address\nHINT: Is another postmaster already running on port 5432? If not, wait\na few seconds and retry.\"\n\nThanks,\nStalin\n", "msg_date": "Wed, 5 Sep 2007 18:48:52 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": false, "msg_subject": "RESEND:Postgres with Sun Cluster HA/Solaris 10" }, { "msg_contents": "Hi,\n\nSubbiah Stalin-XCGF84 wrote:\n> Any pointers to debug this futher is greatly appreciated.\n\nI'm not quite sure, how sun cluster is working, but to me it sounds like \na shared-disk failover solution (see [1] for more details).\n\nAs such, only one node should run a postgres instance at any time. Does \nsun cluster take care of that?\n\n> \"LOG: could not bind IPv6 socket: Cannot assign requested address\n> HINT: Is another postmaster already running on port 5432? If not, wait\n> a few seconds and retry.\"\n\nThat's talking about IPv6. Are you using IPv6? Is sun cluster doing \nsomething magic WRT your network?\n\n(BTW, please don't cross post (removed -performance). And don't reply to \nemails when you intend to start a new thread, thanks.)\n\nRegards\n\nMarkus\n\n[1]: Postgres Documentation High Availability and Load Balancing\nhttp://www.postgresql.org/docs/8.2/static/high-availability.html\n\n", "msg_date": "Fri, 07 Sep 2007 10:23:38 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres with Sun Cluster HA/Solaris 10" }, { "msg_contents": "Yes, it's a shared-disk failover solution as described in [1] and only\nnode will run the pg instance. 
We got it fixed by setting ping-interval\nin resin from 60s to 0s. It's more like validate every connection before\ngiving to app to process. \n\n> That's talking about IPv6. Are you using IPv6? Is sun cluster doing\nsomething magic WRT your network?\n\nWell we see this error on a stand-alone sol 10 box with no cluster. How\ndo I check if it's setup to use IPv6 or not.\n\nThanks,\nStalin\n\n-----Original Message-----\nFrom: Markus Schiltknecht [mailto:[email protected]] \nSent: Friday, September 07, 2007 1:24 AM\nTo: Subbiah Stalin-XCGF84\nCc: [email protected]\nSubject: Re: [ADMIN] Postgres with Sun Cluster HA/Solaris 10\n\nHi,\n\nSubbiah Stalin-XCGF84 wrote:\n> Any pointers to debug this futher is greatly appreciated.\n\nI'm not quite sure, how sun cluster is working, but to me it sounds like\na shared-disk failover solution (see [1] for more details).\n\nAs such, only one node should run a postgres instance at any time. Does\nsun cluster take care of that?\n\n> \"LOG: could not bind IPv6 socket: Cannot assign requested address\n> HINT: Is another postmaster already running on port 5432? If not, \n> wait a few seconds and retry.\"\n\nThat's talking about IPv6. Are you using IPv6? Is sun cluster doing\nsomething magic WRT your network?\n\n(BTW, please don't cross post (removed -performance). And don't reply to\nemails when you intend to start a new thread, thanks.)\n\nRegards\n\nMarkus\n\n[1]: Postgres Documentation High Availability and Load Balancing\nhttp://www.postgresql.org/docs/8.2/static/high-availability.html\n\n", "msg_date": "Fri, 7 Sep 2007 15:24:12 -0400", "msg_from": "\"Subbiah Stalin-XCGF84\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres with Sun Cluster HA/Solaris 10" } ]
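On the question of how to check whether the server is set up to use IPv6: the bind attempts come from listen_addresses in postgresql.conf, so a quick check from psql on the active node is sketched below (nothing here is specific to Sun Cluster). The warning quoted above is normally harmless as long as the IPv4 socket binds fine; listing explicit IPv4 addresses such as the cluster's virtual IP in listen_addresses instead of '*' avoids the IPv6 bind attempt altogether.

-- Minimal check from psql: what the postmaster was asked to bind, and where.
SHOW listen_addresses;
SHOW port;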
[ { "msg_contents": "Hi all,\n\nI need to improve a query like :\n\nSELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n\nStupidly, I create a B-tree index on my_table(the_date), witch is logically not used in my query, because it's not with a constant ? isn't it ?\n\nI know that I can't create a function index with an aggregative function.\n\nHow I can do ?\n\nthanks,\n\njsubei\n\n\n\n\n _____________________________________________________________________________ \nNe gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail \n", "msg_date": "Wed, 5 Sep 2007 09:53:22 +0000 (GMT)", "msg_from": "JS Ubei <[email protected]>", "msg_from_op": true, "msg_subject": "optimize query with a maximum(date) extraction" }, { "msg_contents": "bad address kep his from going to the list on my first try ... apologies to the moderators.\n\n-----Original Message-----\nFrom: Gregory Williamson\nSent: Wed 9/5/2007 4:59 AM\nTo: JS Ubei; [email protected]\nSubject: RE: [PERFORM] optimize query with a maximum(date) extraction\n \nIn order to help others help you, you might provide the following:\n\ntable description (columns, types, indexes) (\\d tablename from psql does nicely)\n\nthe same query run as \"EXPLAIN ANALYZE <your query here>;\"\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n-----Original Message-----\nFrom: [email protected] on behalf of JS Ubei\nSent: Wed 9/5/2007 3:53 AM\nTo: [email protected]\nSubject: [PERFORM] optimize query with a maximum(date) extraction\n \nHi all,\n\nI need to improve a query like :\n\nSELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n\nStupidly, I create a B-tree index on my_table(the_date), witch is logically not used in my query, because it's not with a constant ? isn't it ?\n\nI know that I can't create a function index with an aggregative function.\n\nHow I can do ?\n\nthanks,\n\njsubei\n\n\n\n\n _____________________________________________________________________________ \nNe gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n\n\n\n\n\n\nFW: [PERFORM] optimize query with a maximum(date) extraction\n\n\n\nbad address kep his from going to the list on my first try ... 
apologies to the moderators.\n\n-----Original Message-----\nFrom: Gregory Williamson\nSent: Wed 9/5/2007 4:59 AM\nTo: JS Ubei; [email protected]\nSubject: RE: [PERFORM] optimize query with a maximum(date) extraction\n\nIn order to help others help you, you might provide the following:\n\ntable description (columns, types, indexes) (\\d tablename from psql does nicely)\n\nthe same query run as \"EXPLAIN ANALYZE <your query here>;\"\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n-----Original Message-----\nFrom: [email protected] on behalf of JS Ubei\nSent: Wed 9/5/2007 3:53 AM\nTo: [email protected]\nSubject: [PERFORM] optimize query with a maximum(date) extraction\n\nHi all,\n\nI need to improve a query like :\n\nSELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n\nStupidly, I create a B-tree index on my_table(the_date), witch is logically not used in my query, because it's not with a constant ? isn't it ?\n\nI know that I can't create a function index with an aggregative function.\n\nHow I can do ?\n\nthanks,\n\njsubei\n\n\n\n\n      _____________________________________________________________________________\nNe gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings", "msg_date": "Wed, 5 Sep 2007 05:06:43 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "FW: optimize query with a maximum(date) extraction" }, { "msg_contents": "\"JS Ubei\" <[email protected]> writes:\n\n> Hi all,\n>\n> I need to improve a query like :\n>\n> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n>\n> Stupidly, I create a B-tree index on my_table(the_date), witch is logically\n> not used in my query, because it's not with a constant ? isn't it ?\n\nThat's not so stupid, it would be possible for a database to make use of such\nan index for this query. But it's not one of the plans Postgres knows how to\nexecute.\n\nI don't think you'll find anything much faster for this particular query. You\ncould profile running these two (non-standard) queries:\n\nSELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC\nSELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\n\nI think the first of these can actually use your index but the latter can't\nunless you create one for it specifically (which is not so easy -- it'll be\neasier in 8.3 though). 
Worse, I'm not really sure it'll be any faster than the\nquery you already have.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 12:30:21 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "\"Gregory Stark\" <[email protected]> writes:\n\n> \"JS Ubei\" <[email protected]> writes:\n>\n>> I need to improve a query like :\n>>\n>> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n>...\n> I don't think you'll find anything much faster for this particular query. You\n> could profile running these two (non-standard) queries:\n>\n> SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC\n> SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\n\nSomething else you might try:\n\nselect id, \n (select min(the_date) from my_table where id=x.id) as min_date,\n (select max(the_date) from my_table where id=x.id) as max_date\n from (select distinct id from my_table)\n\nRecent versions of Postgres do know how to use the index for a simple\nungrouped min() or max() like these subqueries.\n\nThis would be even better if you have a better source for the list of distinct\nids you're interested in than my_table. If you have a source that just has one\nrecord for each id then you won't need an extra step to eliminate duplicates.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 13:06:01 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "On 05/09/07, Gregory Stark <[email protected]> wrote:\n>\n> \"Gregory Stark\" <[email protected]> writes:\n>\n> > \"JS Ubei\" <[email protected]> writes:\n> >\n> >> I need to improve a query like :\n> >>\n> >> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n> >...\n> > I don't think you'll find anything much faster for this particular\n> query. You\n> > could profile running these two (non-standard) queries:\n> >\n> > SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY\n> id, the_date ASC\n> > SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY\n> id, the_date DESC\n>\n> Something else you might try:\n>\n> select id,\n> (select min(the_date) from my_table where id=x.id) as min_date,\n> (select max(the_date) from my_table where id=x.id) as max_date\n> from (select distinct id from my_table)\n>\n> Recent versions of Postgres do know how to use the index for a simple\n> ungrouped min() or max() like these subqueries.\n>\n> This would be even better if you have a better source for the list of\n> distinct\n> ids you're interested in than my_table. 
If you have a source that just has\n> one\n> record for each id then you won't need an extra step to eliminate\n> duplicates.\n>\n>\nMy personal reaction is why are you using distinct at all?\n\nwhy not\n\nselect id,\n min(the_date) as min_date,\n max(the_date) as max_date\n from my_table group by id;\n\nSince 8.0 or was it earlier this will use an index should a reasonable one\nexist.\n\nPeter.\n\nOn 05/09/07, Gregory Stark <[email protected]> wrote:\n\"Gregory Stark\" <[email protected]> writes:> \"JS Ubei\" <[email protected]> writes:>\n>> I need to improve a query like :>>>> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;>...> I don't think you'll find anything much faster for this particular query. You\n> could profile running these two (non-standard) queries:>> SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC> SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\nSomething else you might try:select id,       (select min(the_date) from my_table where id=x.id) as min_date,       (select max(the_date) from my_table where id=\nx.id) as max_date  from (select distinct id from my_table)Recent versions of Postgres do know how to use the index for a simpleungrouped min() or max() like these subqueries.This would be even better if you have a better source for the list of distinct\nids you're interested in than my_table. If you have a source that just has onerecord for each id then you won't need an extra step to eliminate duplicates.My personal reaction is why are you using distinct at all?\nwhy notselect id,       min(the_date) as min_date,       max(the_date) as max_date  from my_table group by id;Since 8.0 or was it earlier this will use an index should a reasonable one exist.\nPeter.", "msg_date": "Wed, 5 Sep 2007 13:43:24 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": ">\n> why not\n>\n> select id,\n> min(the_date) as min_date,\n> max(the_date) as max_date\n> from my_table group by id;\n>\n> Since 8.0 or was it earlier this will use an index should a reasonable one\n> exist.\n\nwithout any limits, seq scan is optimal.\n\nRegards\nPavel Stehule\n", "msg_date": "Wed, 5 Sep 2007 14:48:18 +0200", "msg_from": "\"Pavel Stehule\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "On Wed, Sep 05, 2007 at 12:30:21PM +0100, Gregory Stark wrote:\n> SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC\n> SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\n> I think the first of these can actually use your index but the latter can't\n> unless you create one for it specifically (which is not so easy -- it'll be\n> easier in 8.3 though). Worse, I'm not really sure it'll be any faster than the\n> query you already have.\n\nit's easy to fix the second query (fix to use index) - just change order\nby to:\norder by id desc, the_date desc.\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. 
here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Wed, 5 Sep 2007 15:22:50 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "\"hubert depesz lubaczewski\" <[email protected]> writes:\n\n> On Wed, Sep 05, 2007 at 12:30:21PM +0100, Gregory Stark wrote:\n>> SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC\n>> SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\n>> I think the first of these can actually use your index but the latter can't\n>> unless you create one for it specifically (which is not so easy -- it'll be\n>> easier in 8.3 though). Worse, I'm not really sure it'll be any faster than the\n>> query you already have.\n>\n> it's easy to fix the second query (fix to use index) - just change order\n> by to:\n> order by id desc, the_date desc.\n\nCute. I didn't think of that, thanks\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 15:14:00 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "\"Peter Childs\" <[email protected]> writes:\n\n> My personal reaction is why are you using distinct at all?\n>\n> why not\n>\n> select id,\n> min(the_date) as min_date,\n> max(the_date) as max_date\n> from my_table group by id;\n>\n> Since 8.0 or was it earlier this will use an index should a reasonable one\n> exist.\n\nThat's not true for this query. In fact that was precisely the original query\nhe as looking to optimize.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 15:23:23 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "\"Pavel Stehule\" <[email protected]> writes:\n\n>>\n>> why not\n>>\n>> select id,\n>> min(the_date) as min_date,\n>> max(the_date) as max_date\n>> from my_table group by id;\n>>\n>> Since 8.0 or was it earlier this will use an index should a reasonable one\n>> exist.\n\nAs I mentioned in the other post that's not true for this query.\n\n> without any limits, seq scan is optimal.\n\nThat's not necessarily true either. You could have ten distinct ids and\nmillions of dates for them. In that case a scan of the index which jumped\naround to scan from the beginning and end of each distinct id value would be\nfaster. There's just no such plan type in Postgres currently. \n\nYou can simulate such a plan with the subqueries I described but there's a bit\nmore overhead than necessary and you need a reasonably efficient source of the\ndistinct ids. 
Also it may or may not be faster than simply scanning the whole\ntable like above and simulating it with subqueries makes it impossible to\nchoose the best plan.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 05 Sep 2007 15:28:53 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" }, { "msg_contents": "Gregory Stark <[email protected]> writes:\n> You can simulate such a plan with the subqueries I described but\n> there's a bit more overhead than necessary and you need a reasonably\n> efficient source of the distinct ids.\n\nYeah, that seems like the $64 question. If you have no better way of\nfinding out the set of ID values than a seqscan, then there's no point\nin trying to optimize the min/max accumulation ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 05 Sep 2007 11:23:38 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction " }, { "msg_contents": "BTW, will it improve something if you change your index to \"my_table(\nid, the_date )\"?\n\nRgds,\n-Dimitri\n\nOn 9/5/07, JS Ubei <[email protected]> wrote:\n> Hi all,\n>\n> I need to improve a query like :\n>\n> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n>\n> Stupidly, I create a B-tree index on my_table(the_date), witch is logically\n> not used in my query, because it's not with a constant ? isn't it ?\n>\n> I know that I can't create a function index with an aggregative function.\n>\n> How I can do ?\n>\n> thanks,\n>\n> jsubei\n>\n>\n>\n>\n>\n> _____________________________________________________________________________\n> Ne gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n>\n", "msg_date": "Sat, 8 Sep 2007 10:00:22 +0200", "msg_from": "Dimitri <[email protected]>", "msg_from_op": false, "msg_subject": "Re: optimize query with a maximum(date) extraction" } ]
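Pulling the suggestions in this thread together, one possible sketch combines Dimitri's composite index with the DISTINCT ON pair, using the descending trick depesz describes so the max-per-id query can walk the same index backwards. The index name is made up for the example, and as Pavel and Tom point out above, the planner may still prefer a plain sequential scan depending on the data distribution.

-- Index with id leading, then the date (illustrative name).
CREATE INDEX my_table_id_date_idx ON my_table (id, the_date);

-- Earliest date per id: first row of each id group in ascending date order.
SELECT DISTINCT ON (id) id, the_date AS min_date
FROM my_table
ORDER BY id, the_date;

-- Latest date per id: order both columns descending so the same index can
-- be scanned backwards (depesz's fix).
SELECT DISTINCT ON (id) id, the_date AS max_date
FROM my_table
ORDER BY id DESC, the_date DESC;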
[ { "msg_contents": "Great idea !\n\nwith your second solution, my query seem to use the index on date. but the global performance is worse :-(\n\nI will keep th original solution !\n\nLot of thanks, Gregory\n\njsubei\n\n----- Message d'origine ----\nDe : Gregory Stark <[email protected]>\nÀ : JS Ubei <[email protected]>\nCc : [email protected]\nEnvoyé le : Mercredi, 5 Septembre 2007, 14h06mn 01s\nObjet : Re: optimize query with a maximum(date) extraction\n\n\"Gregory Stark\" <[email protected]> writes:\n\n> \"JS Ubei\" <[email protected]> writes:\n>\n>> I need to improve a query like :\n>>\n>> SELECT id, min(the_date), max(the_date) FROM my_table GROUP BY id;\n>...\n> I don't think you'll find anything much faster for this particular query. You\n> could profile running these two (non-standard) queries:\n>\n> SELECT DISTINCT ON (id) id, the_date AS min_date FROM my_table ORDER BY id, the_date ASC\n> SELECT DISTINCT ON (id) id, the_date AS max_date FROM my_table ORDER BY id, the_date DESC\n\nSomething else you might try:\n\nselect id, \n (select min(the_date) from my_table where id=x.id) as min_date,\n (select max(the_date) from my_table where id=x.id) as max_date\n from (select distinct id from my_table)\n\nRecent versions of Postgres do know how to use the index for a simple\nungrouped min() or max() like these subqueries.\n\nThis would be even better if you have a better source for the list of distinct\nids you're interested in than my_table. If you have a source that just has one\nrecord for each id then you won't need an extra step to eliminate duplicates.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n\n\n\n\n _____________________________________________________________________________ \nNe gardez plus qu'une seule adresse mail ! Copiez vos mails vers Yahoo! Mail \n", "msg_date": "Wed, 5 Sep 2007 13:11:31 +0000 (GMT)", "msg_from": "JS Ubei <[email protected]>", "msg_from_op": true, "msg_subject": "Re : optimize query with a maximum(date) extraction" } ]
[ { "msg_contents": "Hi\n\nI couldnt find any specifics on this subject in the documentation, so I \nthought I'd ask the group.\n\nhow does pg utilise multi cpus/cores, i.e. does it use more than one \ncore? and possibly, how, are there any documentation about this.\n\nthomas\n", "msg_date": "Thu, 06 Sep 2007 00:41:06 +0200", "msg_from": "Thomas Finneid <[email protected]>", "msg_from_op": true, "msg_subject": "utilising multi-cpu/core machines?" }, { "msg_contents": "On 9/5/07, Thomas Finneid <[email protected]> wrote:\n> how does pg utilise multi cpus/cores, i.e. does it use more than one\n> core? and possibly, how, are there any documentation about this.\n\nUnlike other systems which manage their own affinity and\nprioritization, Postgres relies solely on the OS to handle process\nmanagement across multiple CPUs/cores.\n\n-- \nJonah H. Harris, Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n33 Wood Ave S, 3rd Floor | [email protected]\nIselin, New Jersey 08830 | http://www.enterprisedb.com/\n", "msg_date": "Wed, 5 Sep 2007 19:02:36 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilising multi-cpu/core machines?" }, { "msg_contents": "On 9/5/07, Thomas Finneid <[email protected]> wrote:\n\n> how does pg utilise multi cpus/cores, i.e. does it use more than one\n> core? and possibly, how, are there any documentation about this.\n\nPostgreSQL creates a new process to handle each connection to the\ndatabase. Multiple sessions can therefore spread across multiple\ncores, but a single session will never use more than one.\n", "msg_date": "Wed, 5 Sep 2007 17:07:57 -0700", "msg_from": "\"Trevor Talbot\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilising multi-cpu/core machines?" }, { "msg_contents": ">>> On Wed, Sep 5, 2007 at 5:41 PM, in message <[email protected]>,\nThomas Finneid <[email protected]> wrote: \n> how does pg utilise multi cpus/cores, i.e. does it use more than one \n> core? and possibly, how, are there any documentation about this.\n \nFor portability reasons PostgreSQL doesn't use threads, per se, but spawns\na new process for each connection, and a few for other purposes. Each\nprocess may be running on a separate CPU, but a single connection will\nonly be using one -- directly, anyway. (The OS may well be using the\nother for I/O, etc.)\n \nFor documentation, you could start with this:\n \nhttp://www.postgresql.org/docs/8.2/interactive/app-postgres.html\n \n-Kevin\n \n\n\n", "msg_date": "Thu, 06 Sep 2007 08:46:40 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilising multi-cpu/core machines?" }, { "msg_contents": "Hi Thomas,\n\nPostgreSQL does scale up very well. But you have to keep in mind that\nthis also depends on profile of the application you're on PostgreSQL.\n\nInsufficient memory and slow disk systems can interfere PostgreSQL.\nAnother issue is contention if the server has more than 4 cpus.\n(Please check out discussions about \"context strom\" in this group.)\n\nAnyhow, I had create a benchmark for my company which shows the scale up\nof PostgreSQL 8.1.4. This benchmark does try to enforce contention\nbecause of the profile of our application.\n\nClients/scale-up factor\n1\t1\n2\t1,78\n3\t2,47\n4\t3,12\n5\t3,62\n6\t4,23\n7\t4,35\n8\t4,79\n9\t5,05\n10\t5,17\n\nScale-up factor is relative to one client the number of completed\nqueries in a time frame. 
(throughput)\n\nThis test was done on a 16 core Intel-box (4-way Xeon E7340).\n\nThe results of TPC-B benchmark are looking similar.\n\nSven.\n\n\nThomas Finneid schrieb:\n> Hi\n> \n> I couldnt find any specifics on this subject in the documentation, so I\n> thought I'd ask the group.\n> \n> how does pg utilise multi cpus/cores, i.e. does it use more than one\n> core? and possibly, how, are there any documentation about this.\n> \n> thomas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n\n-- \nSven Geisler <[email protected]> Tel +49.30.921017.81 Fax .50\nSenior Developer, AEC/communications GmbH & Co. KG Berlin, Germany\n", "msg_date": "Fri, 07 Sep 2007 11:50:15 +0200", "msg_from": "Sven Geisler <[email protected]>", "msg_from_op": false, "msg_subject": "Re: utilising multi-cpu/core machines?" } ]
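A small, hedged illustration of the process-per-connection model described in this thread; it assumes nothing beyond an ordinary psql session on an 8.x-era server:

-- Each connection is served by exactly one server process.  The PID
-- returned here can be matched against ps/top on the host to watch
-- which single CPU core this session is running on.
SELECT pg_backend_pid();

-- One row per connected backend process.  Column names are those of
-- the 8.x releases discussed here; later releases rename procpid and
-- current_query to pid and query.
SELECT procpid, usename, current_query
FROM pg_stat_activity;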
[ { "msg_contents": "Hi guys,\n\nI'm have the rare opportunity to spec the hardware for a new database\nserver. It's going to replace an older one, driving a social networking\nweb application. The current server (a quad opteron with 4Gb of RAM and\n80Gb fast SCSI RAID10) is coping with an average load of ranging between\n1.5 and 3.5.\n\nThe new machine spec I have so far:\n 2 x Intel Xeon 2.33 GHz Dual Core Woodcrest Processors\n 4 Gb RAM\n 5x73 GB Ultra320 SCSI RAID 5 (288 GB storage)\n\nI've heard that RAID 5 is not necessarily the best performer. Also, are\nthere any special tricks when partition the file system?\n\nRegards,\n\nWillo\n\n", "msg_date": "Thu, 06 Sep 2007 08:54:52 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Hardware spec" }, { "msg_contents": "Willo van der Merwe wrote:\n> Hi guys,\n> \n> I'm have the rare opportunity to spec the hardware for a new database\n> server. It's going to replace an older one, driving a social networking\n> web application. The current server (a quad opteron with 4Gb of RAM and\n> 80Gb fast SCSI RAID10) is coping with an average load of ranging between\n> 1.5 and 3.5.\n> \n> The new machine spec I have so far:\n\nWhat's the limiting factor on your current machine - disk, memory, cpup?\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 06 Sep 2007 09:17:34 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "Richard Huxton wrote:\n> Willo van der Merwe wrote:\n>> Hi guys,\n>>\n>> I'm have the rare opportunity to spec the hardware for a new database\n>> server. It's going to replace an older one, driving a social networking\n>> web application. The current server (a quad opteron with 4Gb of RAM and\n>> 80Gb fast SCSI RAID10) is coping with an average load of ranging between\n>> 1.5 and 3.5.\n>>\n>> The new machine spec I have so far:\n> What's the limiting factor on your current machine - disk, memory, cpup?\nI'm a bit embarrassed to admit that I'm not sure. The reason we're \nchanging machines is that we might be changing ISPs and we're renting / \nleasing the machines from the ISP.\n", "msg_date": "Thu, 06 Sep 2007 11:26:46 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWillo van der Merwe wrote:\n> Richard Huxton wrote:\n>> Willo van der Merwe wrote:\n>>> Hi guys,\n>>> \n>>> I'm have the rare opportunity to spec the hardware for a new database\n>>> server. It's going to replace an older one, driving a social\n>>> networking web application. The current server (a quad opteron with\n>>> 4Gb of RAM and 80Gb fast SCSI RAID10) is coping with an average load\n>>> of ranging between 1.5 and 3.5.\n>>> \n>>> The new machine spec I have so far:\n>> What's the limiting factor on your current machine - disk, memory,\n>> cpup?\n> I'm a bit embarrassed to admit that I'm not sure. The reason we're \n> changing machines is that we might be changing ISPs and we're renting / \n> leasing the machines from the ISP.\n> \nBefore you get rid of the current ISP, better examine what is going on with\nthe present setup. It would be good to know if you are memory, processor, or\nIO limited. That way you could increase what needs to be increased, and not\nwaste money where the bottleneck is not.\n\n- --\n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 07:10:01 up 28 days, 10:32, 4 users, load average: 5.48, 4.77, 4.37\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFG3+FwPtu2XpovyZoRAmp+AJ9R4mvznqJ24ZCPK8DcTAsz2d34+QCfQzhH\nvmXnoJO0vm/A/f/Ol0TOy6o=\n=9rsm\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 06 Sep 2007 07:16:00 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "Jean-David Beyer wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n> Willo van der Merwe wrote:\n> \n>> Richard Huxton wrote:\n>> \n>>> Willo van der Merwe wrote:\n>>> \n>>>> Hi guys,\n>>>>\n>>>> I'm have the rare opportunity to spec the hardware for a new database\n>>>> server. It's going to replace an older one, driving a social\n>>>> networking web application. The current server (a quad opteron with\n>>>> 4Gb of RAM and 80Gb fast SCSI RAID10) is coping with an average load\n>>>> of ranging between 1.5 and 3.5.\n>>>>\n>>>> The new machine spec I have so far:\n>>>> \n>>> What's the limiting factor on your current machine - disk, memory,\n>>> cpup?\n>>> \n>> I'm a bit embarrassed to admit that I'm not sure. The reason we're \n>> changing machines is that we might be changing ISPs and we're renting / \n>> leasing the machines from the ISP.\n>>\n>> \n> Before you get rid of the current ISP, better examine what is going on with\n> the present setup. It would be good to know if you are memory, processor, or\n> IO limited. That way you could increase what needs to be increased, and not\n> waste money where the bottleneck is not.\n> \nGood advice. After running a vmstat and iostat, it is clear, to my mind \nanyway, that the most likely bottleneck is IO, next is probably some \nmore RAM.\nHere's the output:\nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 0 0 29688 80908 128308 3315792 0 0 8 63 6 8 17 \n2 80 1\n\n\navg-cpu: %user %nice %sys %iowait %idle\n 17.18 0.00 1.93 0.81 80.08\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda 14.57 66.48 506.45 58557617 446072213\nsda1 0.60 0.27 4.70 235122 4136128\nsda2 0.38 0.77 2.27 678754 2002576\nsda3 2.37 0.49 18.61 429171 16389960\nsda4 0.00 0.00 0.00 2 0\nsda5 0.71 0.66 5.46 578307 4807087\nsda6 0.03 0.01 0.24 6300 214196\nsda7 0.02 0.00 0.19 2622 165992\nsda8 60.19 64.29 474.98 56626211 418356226\n\n", "msg_date": "Thu, 06 Sep 2007 13:37:24 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "* Willo van der Merwe:\n\n> Good advice. 
After running a vmstat and iostat, it is clear, to my\n> mind anyway, that the most likely bottleneck is IO, next is probably\n> some more RAM.\n\n> Here's the output:\n> procs -----------memory---------- ---swap-- -----io---- --system-- \n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us\n> sy id wa\n> 0 0 29688 80908 128308 3315792 0 0 8 63 6 8 17\n> 2 80 1\n\nYou need to run \"vmstat 10\" (for ten-second averages) and report a\ncouple of lines.\n\n> avg-cpu: %user %nice %sys %iowait %idle\n> 17.18 0.00 1.93 0.81 80.08\n\nSame for iostat.\n\nYour initial numbers suggest that your server isn't I/O-bound, though\n(the percentage spent in iowait is much too small, and so are the tps\nnumbers).\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Thu, 06 Sep 2007 16:19:03 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "Florian Weimer wrote:\n> You need to run \"vmstat 10\" (for ten-second averages) and report a\n> couple of lines.\n> \nprocs -----------memory---------- ---swap-- -----io---- --system-- \n----cpu----\n r b swpd free buff cache si so bi bo in cs us sy \nid wa\n 1 0 61732 47388 27908 3431344 0 0 10 65 1 4 17 \n2 80 1\n 5 0 61732 37052 28180 3431956 0 0 14 987 2320 2021 38 \n4 56 2\n 1 0 61620 43076 28356 3432256 0 0 0 367 1691 1321 28 \n3 67 1\n 3 0 61620 37620 28484 3432740 0 0 0 580 4088 6792 40 \n5 54 1\n 2 0 61596 33716 28748 3433520 0 0 24 415 2087 1890 44 \n4 49 2\n 3 0 61592 45300 28904 3416200 3 0 61 403 2282 2154 41 \n4 54 1\n 7 0 61592 30172 29092 3416964 0 0 19 358 2779 3478 31 \n6 63 1\n 1 0 61580 62948 29180 3417368 6 0 27 312 3632 4396 38 \n4 57 1\n 1 0 61444 62388 29400 3417964 0 0 6 354 2163 1918 31 \n4 64 1\n 2 0 61444 53988 29648 3417988 0 0 0 553 2095 1687 33 \n3 63 1\n 1 0 61444 63988 29832 3418348 0 0 6 352 1767 1424 22 \n3 73 1\n 1 1 61444 51148 30052 3419148 0 0 50 349 1524 834 22 \n3 74 2\n 1 0 61432 53460 30524 3419572 7 0 7 868 4434 6706 43 \n6 49 2\n 1 0 61432 58668 30628 3420148 0 0 0 284 1785 1628 27 \n3 69 1\n\niostat sda8 is the where the pg_data resides, sda3 is /var/log\navg-cpu: %user %nice %sys %iowait %idle\n 17.36 0.00 1.96 0.82 79.86\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda3 2.38 0.49 18.71 432395 16672800\nsda8 62.34 74.46 491.74 66345555 438143794\n\navg-cpu: %user %nice %sys %iowait %idle\n 30.50 0.00 3.57 1.70 64.22\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda3 5.60 0.00 44.80 0 448\nsda8 120.20 134.40 956.00 1344 9560\n\navg-cpu: %user %nice %sys %iowait %idle\n 20.68 0.00 3.43 1.35 74.54\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda3 3.30 0.00 26.40 0 264\nsda8 97.90 0.00 783.20 0 7832\n\navg-cpu: %user %nice %sys %iowait %idle\n 22.31 0.00 2.75 0.68 74.27\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda3 2.10 0.00 16.78 0 168\nsda8 60.34 0.80 481.92 8 4824\n\navg-cpu: %user %nice %sys %iowait %idle\n 11.65 0.00 1.60 1.03 85.72\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda3 1.70 0.00 13.61 0 136\nsda8 59.36 0.00 474.87 0 4744\n\n", "msg_date": "Thu, 06 Sep 2007 16:33:00 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "* Willo van der Merwe:\n\n> Florian Weimer wrote:\n>> You need to run \"vmstat 10\" (for ten-second averages) and report a\n>> couple of 
lines.\n\n> 2 80 1\n> 5 0 61732 37052 28180 3431956 0 0 14 987 2320 2021 38\n\n> sda3 3.30 0.00 26.40 0 264\n> sda8 97.90 0.00 783.20 0 7832\n\nThese values don't look I/O bound to me. CPU usage is pretty low,\ntoo.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Thu, 06 Sep 2007 17:21:35 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "On Thu, Sep 06, 2007 at 11:26:46AM +0200, Willo van der Merwe wrote:\n> Richard Huxton wrote:\n> >Willo van der Merwe wrote:\n> >>Hi guys,\n> >>\n> >>I'm have the rare opportunity to spec the hardware for a new database\n> >>server. It's going to replace an older one, driving a social networking\n> >>web application. The current server (a quad opteron with 4Gb of RAM and\n> >>80Gb fast SCSI RAID10) is coping with an average load of ranging between\n> >>1.5 and 3.5.\n> >>\n> >>The new machine spec I have so far:\n> >What's the limiting factor on your current machine - disk, memory, cpup?\n> I'm a bit embarrassed to admit that I'm not sure. The reason we're \n> changing machines is that we might be changing ISPs and we're renting / \n> leasing the machines from the ISP.\n\nGet yourself the ability to benchmark your application. This is\ninvaluable^W a requirement for any kind of performance tuning.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 15:35:55 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "Decibel! wrote:\n> On Thu, Sep 06, 2007 at 11:26:46AM +0200, Willo van der Merwe wrote:\n> \n>> Richard Huxton wrote:\n>> \n>>> Willo van der Merwe wrote:\n>>> \n>>>> Hi guys,\n>>>>\n>>>> I'm have the rare opportunity to spec the hardware for a new database\n>>>> server. It's going to replace an older one, driving a social networking\n>>>> web application. The current server (a quad opteron with 4Gb of RAM and\n>>>> 80Gb fast SCSI RAID10) is coping with an average load of ranging between\n>>>> 1.5 and 3.5.\n>>>>\n>>>> The new machine spec I have so far:\n>>>> \n>>> What's the limiting factor on your current machine - disk, memory, cpup?\n>>> \n>> I'm a bit embarrassed to admit that I'm not sure. The reason we're \n>> changing machines is that we might be changing ISPs and we're renting / \n>> leasing the machines from the ISP.\n>> \n>\n> Get yourself the ability to benchmark your application. This is\n> invaluable^W a requirement for any kind of performance tuning.\n> \nI'm pretty happy with the performance of the database at this stage. \nCorrect me if I'm wrong, but AFAIK a load of 3.5 on a quad is not \noverloading it. It also seem to scale well, so if application's demand \nincreases I see a minimal increase in database server load.\n\nI was just looking for some pointers as to where to go to ITO hardware \nfor the future, as I can now spec a new machine. I mean is it really \nworth while going for one of those RAID controllers with the battery \nbacked cache, for instance. If so, are there any specific ones to look \nout for? Which is better RAID 5, a large RAID 10 or smaller RAID 10's? 
\nShould I bother with RAID at all?\n\n\n", "msg_date": "Wed, 12 Sep 2007 09:01:18 +0200", "msg_from": "Willo van der Merwe <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware spec" }, { "msg_contents": "> > Get yourself the ability to benchmark your application. This is\n> > invaluable^W a requirement for any kind of performance tuning.\n> >\n> I'm pretty happy with the performance of the database at this stage.\n> Correct me if I'm wrong, but AFAIK a load of 3.5 on a quad is not\n> overloading it. It also seem to scale well, so if application's demand\n> increases I see a minimal increase in database server load.\n>\n> I was just looking for some pointers as to where to go to ITO hardware\n> for the future, as I can now spec a new machine. I mean is it really\n> worth while going for one of those RAID controllers with the battery\n> backed cache, for instance. If so, are there any specific ones to look\n> out for? Which is better RAID 5, a large RAID 10 or smaller RAID 10's?\n> Should I bother with RAID at all?\n\nThese issues have been covered before. You may want to search the\narchives and get the relevant pointers.\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Wed, 12 Sep 2007 09:17:23 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Hardware spec" } ]
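To complement the vmstat/iostat readings above with a view from inside the database, one rough indicator is the shared-buffer hit ratio. This is only a sketch: it ignores the OS page cache entirely, and on the 8.x releases discussed here it needs block-level statistics enabled (stats_block_level), so treat it as one data point rather than a verdict on the disk-versus-RAM question:

-- Buffer hit ratio per database from the statistics collector
-- (column names as in 8.x).  A persistently low ratio under load
-- suggests the working set does not fit in shared_buffers, pointing
-- at memory or I/O rather than CPU.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
FROM pg_stat_database
ORDER BY blks_read DESC;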
[ { "msg_contents": "Hi,\n\nWe are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB\nRAM, 4x SAS 146 GB 15K RPM on RAID 5.\n\nThe current data size is about 50GB, but we want to purchase the hardware to\nscale to about 1TB as we think our business will need to support that much\nsoon.\n- Currently we have a 80% read and 20% write perecntages.\n- Currently with this configuration the Database is showing signs of\nover-loading.\n- Auto-vaccum, etc run on this database, vaccum full runs nightly.\n- Currently CPU loads are about 20%, memory utilization is full (but this is\nalso due to linux caching disk blocks) and IO waits are frequent.\n- We have a load of about 400 queries per second\n\nNow we are considering to purchase our own servers and in the process are\nfacing the usual dilemmas. First I'll list out what machine we have decided\nto use:\n2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n32 GB RAM\nOS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n(Data Storage mentioned below)\n\nWe have already decided to split our database into 3 machines on the basis\non disjoint sets of data. So we will be purchasing three of these boxes.\n\nHELP 1: Does something look wrong with above configuration, I know there\nwill be small differences b/w opetron/xeon. But do you think there is\nsomething against going for 2.4Ghz Quad Xeons (clovertown i think)?\n\nHELP 2: The main confusion is with regards to Data Storage. We have the\noption of going for:\n\nA: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3 disks\ninto RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot spare. We\nare also considering similar solution from EMC - CX310C.\n\nB: Go for Internal of DAS based storage. Here for each server we should be\nable to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for\ntablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks on\nRAID-10 single table-space.\n\nWhat do I think? Well..\nSAN wins on manageability, replication (say to a DR site), backup, etc...\nDAS wins on cost\n\nBut for a moment keeping these aside, i wanted to discuss, purely on\nperformance side which one is a winner? It feels like internal-disks will\nperform better, but need to understand a rough magnitude of difference in\nperformance to see if its worth loosing the manageability features.\n\nAlso if we choose to go with DAS, what would be the best tool to do async\nreplication to DR site and maybe even as a extra plus a second read-only DB\nserver to distribute select loads.\n\nRegards,\nAzad\n\nHi,We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB RAM, 4x SAS 146 GB 15K RPM on RAID 5.The current data size is about 50GB, but we want to purchase the hardware to scale to about 1TB as we think our business will need to support that much soon. \n- Currently we have a 80% read and 20% write perecntages. - Currently with this configuration the Database is showing signs of over-loading.- Auto-vaccum, etc run on this database, vaccum full runs nightly.\n- Currently CPU loads are about 20%, memory utilization is full (but this is also due to linux caching disk blocks) and IO waits are frequent.- We have a load of about 400 queries per secondNow we are considering to purchase our own servers and in the process are facing the usual dilemmas. 
First I'll list out what machine we have decided to use:\n2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)32 GB RAMOS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1(Data Storage mentioned below)We have already decided to split our database into 3 machines on the basis on disjoint sets of data. So we will be purchasing three of these boxes.\nHELP 1: Does something look wrong with above configuration, I know there will be small differences b/w opetron/xeon. But do you think there is something against going for 2.4Ghz Quad Xeons (clovertown i think)?\nHELP 2: The main confusion is with regards to Data Storage. We have the option of going for:A: IBM N-3700 SAN Box, having 12x FC 300GB disks,  Partitioned into 3 disks into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot spare. We are also considering similar solution from EMC - CX310C.\nB: Go for Internal of DAS based storage. Here for each server we should be able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks on RAID-10 single table-space.\nWhat do I think? Well..SAN wins on manageability, replication (say to a DR site), backup, etc...DAS wins on costBut for a moment keeping these aside, i wanted to discuss, purely on performance side which one is a winner? It feels like internal-disks will perform better, but need to understand a rough magnitude of difference in performance to see if its worth loosing the manageability features.\nAlso if we choose to go with DAS, what would be the best tool to do async replication to DR site and maybe even as a extra plus a second read-only DB server to distribute select loads.\nRegards,Azad", "msg_date": "Thu, 6 Sep 2007 18:05:02 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "SAN vs Internal Disks" }, { "msg_contents": "On Thu, 2007-09-06 at 18:05 +0530, Harsh Azad wrote:\n> Hi,\n> \n> We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon,\n> 8GB RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n> \n> The current data size is about 50GB, but we want to purchase the\n> hardware to scale to about 1TB as we think our business will need to\n> support that much soon. \n> - Currently we have a 80% read and 20% write perecntages. \n> - Currently with this configuration the Database is showing signs of\n> over-loading.\n> - Auto-vaccum, etc run on this database, vaccum full runs nightly.\n> - Currently CPU loads are about 20%, memory utilization is full (but\n> this is also due to linux caching disk blocks) and IO waits are\n> frequent.\n> - We have a load of about 400 queries per second\n> \n> Now we are considering to purchase our own servers and in the process\n> are facing the usual dilemmas. First I'll list out what machine we\n> have decided to use: \n> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n> 32 GB RAM\n> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> (Data Storage mentioned below)\n> \n> We have already decided to split our database into 3 machines on the\n> basis on disjoint sets of data. So we will be purchasing three of\n> these boxes. \n> \n> HELP 1: Does something look wrong with above configuration, I know\n> there will be small differences b/w opetron/xeon. But do you think\n> there is something against going for 2.4Ghz Quad Xeons (clovertown i\n> think)?\n> \n> HELP 2: The main confusion is with regards to Data Storage. 
We have\n> the option of going for:\n> \n> A: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3\n> disks into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2\n> hot spare. We are also considering similar solution from EMC -\n> CX310C. \n> \n> B: Go for Internal of DAS based storage. Here for each server we\n> should be able to have: 2x disks on RAID-1 for logs, 6x disks on\n> RAID-10 for tablespace1 and 6x disks on RAID-10 for tablespace2. Or\n> maybe 12x disks on RAID-10 single table-space. \n> \n> What do I think? Well..\n> SAN wins on manageability, replication (say to a DR site), backup,\n> etc...\n> DAS wins on cost\n> \n> But for a moment keeping these aside, i wanted to discuss, purely on\n> performance side which one is a winner? It feels like internal-disks\n> will perform better, but need to understand a rough magnitude of\n> difference in performance to see if its worth loosing the\n> manageability features. \n> \n> Also if we choose to go with DAS, what would be the best tool to do\n> async replication to DR site and maybe even as a extra plus a second\n> read-only DB server to distribute select loads.\n\nSounds like a good candidate for Slony replication for backups /\nread-only slaves.\n\nI haven't seen a SAN yet whose DR / replication facilities are on par\nwith a good database replication solution. My impression is that those\nfacilities are mostly for file servers, mail servers, etc. It would be\ndifficult for a SAN to properly replicate a database given the strict\nordering, size and consistency requirements for the data files. Not\nimpossible, but in my limited experience I haven't found one that I\ntrust to do it reliably either, vendor boastings to the contrary\nnotwithstanding. (Hint: make sure you know exactly what your vendor's\ndefinition of the term 'snapshot' really means).\n\nSo before you invest in a SAN, make sure that you're actually going to\nbe able to (and want to) use all the nice management features you're\npaying for. We have some SAN's that are basically acting just as\nexpensive external RAID arrays because we do the database\nreplication/backup in software anyway.\n\n-- Mark Lewis\n\n\n", "msg_date": "Thu, 6 Sep 2007 07:42:34 -0700", "msg_from": "\"Mark Lewis\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 9/6/07, Harsh Azad <[email protected]> wrote:\n> Hi,\n>\n> We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB\n> RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n>\n> The current data size is about 50GB, but we want to purchase the hardware to\n> scale to about 1TB as we think our business will need to support that much\n> soon.\n> - Currently we have a 80% read and 20% write percentages.\n\nFor this type load, you should be running on RAID10 not RAID5. Or, if\nyou must use RAID 5, use more disks and have a battery backed caching\nRAID controller known to perform well with RAID5 and large arrays.\n\n> - Currently with this configuration the Database is showing signs of\n> over-loading.\n\nOn I/O or CPU? If you're running out of CPU, then look to increasing\nCPU horsepower and tuning postgresql.\nIf I/O then you need to look into a faster I/O subsystem.\n\n> - Auto-vaccum, etc run on this database, vaccum full runs nightly.\n\nGenerally speaking, if you need to run vacuum fulls, you're doing\nsomething wrong. Is there a reason you're running vacuum full or is\nthis just precautionary. 
vacuum full can bloat your indexes, so you\nshouldn't run it regularly. reindexing might be a better choice if\nyou do need to regularly shrink your db. The better option is to\nmonitor your fsm usage and adjust fsm settings / autovacuum settings\nas necessary.\n\n> - Currently CPU loads are about 20%, memory utilization is full (but this\n> is also due to linux caching disk blocks) and IO waits are frequent.\n> - We have a load of about 400 queries per second\n\nWhat does vmstat et. al. say about CPU versus I/O wait?\n\n> Now we are considering to purchase our own servers and in the process are\n> facing the usual dilemmas. First I'll list out what machine we have decided\n> to use:\n> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n> 32 GB RAM\n> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> (Data Storage mentioned below)\n>\n> We have already decided to split our database into 3 machines on the basis\n> on disjoint sets of data. So we will be purchasing three of these boxes.\n>\n> HELP 1: Does something look wrong with above configuration, I know there\n> will be small differences b/w opetron/xeon. But do you think there is\n> something against going for 2.4Ghz Quad Xeons (clovertown i think)?\n\nLook like good machines, plenty fo memory.\n\n> HELP 2: The main confusion is with regards to Data Storage. We have the\n> option of going for:\n>\n> A: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3 disks\n> into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot spare. We\n> are also considering similar solution from EMC - CX310C.\n>\n> B: Go for Internal of DAS based storage. Here for each server we should be\n> able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for\n> tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks on\n> RAID-10 single table-space.\n>\n> What do I think? Well..\n> SAN wins on manageability, replication (say to a DR site), backup, etc...\n> DAS wins on cost\n\nThe problem with SAN is that it's apparently very easy to build a big\nexpensive system that performs poorly. We've seen reports of such\nhere on the lists a few times. I would definitely demand an\nevaluation period from your supplier to make sure it performs well if\nyou go SAN.\n\n> But for a moment keeping these aside, i wanted to discuss, purely on\n> performance side which one is a winner? It feels like internal-disks will\n> perform better, but need to understand a rough magnitude of difference in\n> performance to see if its worth loosing the manageability features.\n\nThat really really really depends. The quality of RAID controllers\nfor either setup is very important, as is the driver support, etc...\nAll things being even, I'd lean towards the local storage.\n\n> Also if we choose to go with DAS, what would be the best tool to do async\n> replication to DR site and maybe even as a extra plus a second read-only DB\n> server to distribute select loads.\n\nLook at slony, or PITR with continuous recovery. 
Of those two, I've\nonly used Slony in production, and I was very happy with it's\nperformance, and it was very easy to write a bash script to monitor\nthe replication for failures.\n", "msg_date": "Thu, 6 Sep 2007 11:25:58 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Thanks Mark.\n\nIf I replicate a snapshot of Data and log files (basically the entire PG\ndata directory) and I maintain same version of postgres on both servers, it\nshould work right?\n\nI am also thinking that having SAN storage will provide me with facility of\nkeeping a warm standby DB. By just shutting one server down and starting the\nother mounting the same File system I should be able to bing my DB up when\nthe primary inccurs a physical failure.\n\nI'm only considering SAN storage for this feature - has anyone ever used SAN\nfor replication and warm standy-by on Postgres?\n\nRegards,\nHarsh\n\nOn 9/6/07, Mark Lewis <[email protected]> wrote:\n>\n> On Thu, 2007-09-06 at 18:05 +0530, Harsh Azad wrote:\n> > Hi,\n> >\n> > We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon,\n> > 8GB RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n> >\n> > The current data size is about 50GB, but we want to purchase the\n> > hardware to scale to about 1TB as we think our business will need to\n> > support that much soon.\n> > - Currently we have a 80% read and 20% write perecntages.\n> > - Currently with this configuration the Database is showing signs of\n> > over-loading.\n> > - Auto-vaccum, etc run on this database, vaccum full runs nightly.\n> > - Currently CPU loads are about 20%, memory utilization is full (but\n> > this is also due to linux caching disk blocks) and IO waits are\n> > frequent.\n> > - We have a load of about 400 queries per second\n> >\n> > Now we are considering to purchase our own servers and in the process\n> > are facing the usual dilemmas. First I'll list out what machine we\n> > have decided to use:\n> > 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n> > 32 GB RAM\n> > OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> > (Data Storage mentioned below)\n> >\n> > We have already decided to split our database into 3 machines on the\n> > basis on disjoint sets of data. So we will be purchasing three of\n> > these boxes.\n> >\n> > HELP 1: Does something look wrong with above configuration, I know\n> > there will be small differences b/w opetron/xeon. But do you think\n> > there is something against going for 2.4Ghz Quad Xeons (clovertown i\n> > think)?\n> >\n> > HELP 2: The main confusion is with regards to Data Storage. We have\n> > the option of going for:\n> >\n> > A: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3\n> > disks into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2\n> > hot spare. We are also considering similar solution from EMC -\n> > CX310C.\n> >\n> > B: Go for Internal of DAS based storage. Here for each server we\n> > should be able to have: 2x disks on RAID-1 for logs, 6x disks on\n> > RAID-10 for tablespace1 and 6x disks on RAID-10 for tablespace2. Or\n> > maybe 12x disks on RAID-10 single table-space.\n> >\n> > What do I think? Well..\n> > SAN wins on manageability, replication (say to a DR site), backup,\n> > etc...\n> > DAS wins on cost\n> >\n> > But for a moment keeping these aside, i wanted to discuss, purely on\n> > performance side which one is a winner? 
It feels like internal-disks\n> > will perform better, but need to understand a rough magnitude of\n> > difference in performance to see if its worth loosing the\n> > manageability features.\n> >\n> > Also if we choose to go with DAS, what would be the best tool to do\n> > async replication to DR site and maybe even as a extra plus a second\n> > read-only DB server to distribute select loads.\n>\n> Sounds like a good candidate for Slony replication for backups /\n> read-only slaves.\n>\n> I haven't seen a SAN yet whose DR / replication facilities are on par\n> with a good database replication solution. My impression is that those\n> facilities are mostly for file servers, mail servers, etc. It would be\n> difficult for a SAN to properly replicate a database given the strict\n> ordering, size and consistency requirements for the data files. Not\n> impossible, but in my limited experience I haven't found one that I\n> trust to do it reliably either, vendor boastings to the contrary\n> notwithstanding. (Hint: make sure you know exactly what your vendor's\n> definition of the term 'snapshot' really means).\n>\n> So before you invest in a SAN, make sure that you're actually going to\n> be able to (and want to) use all the nice management features you're\n> paying for. We have some SAN's that are basically acting just as\n> expensive external RAID arrays because we do the database\n> replication/backup in software anyway.\n>\n> -- Mark Lewis\n>\n\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nThanks Mark.If I replicate a snapshot of Data and log files (basically the entire PG data directory) and I maintain same version of postgres on both servers, it should work right?I am also thinking that having SAN storage will provide me with facility of keeping a warm standby DB. By just shutting one server down and starting the other mounting the same File system I should be able to bing my DB up when the primary inccurs a physical failure.\nI'm only considering SAN storage for this feature - has anyone ever used SAN for replication and warm standy-by on Postgres?Regards,HarshOn 9/6/07, \nMark Lewis <[email protected]> wrote:\nOn Thu, 2007-09-06 at 18:05 +0530, Harsh Azad wrote:> Hi,>> We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon,> 8GB RAM, 4x SAS 146 GB 15K RPM on RAID 5.>> The current data size is about 50GB, but we want to purchase the\n> hardware to scale to about 1TB as we think our business will need to> support that much soon.> - Currently we have a 80% read and 20% write perecntages.> - Currently with this configuration the Database is showing signs of\n> over-loading.> - Auto-vaccum, etc run on this database, vaccum full runs nightly.> - Currently CPU loads are about 20%, memory utilization is full (but> this is also due to linux caching disk blocks) and IO waits are\n> frequent.> - We have a load of about 400 queries per second>> Now we are considering to purchase our own servers and in the process> are facing the usual dilemmas. First I'll list out what machine we\n> have decided to use:> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)> 32 GB RAM> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1> (Data Storage mentioned below)>\n> We have already decided to split our database into 3 machines on the> basis on disjoint sets of data. So we will be purchasing three of> these boxes.>> HELP 1: Does something look wrong with above configuration, I know\n> there will be small differences b/w opetron/xeon. 
But do you think> there is something against going for 2.4Ghz Quad Xeons (clovertown i> think)?>> HELP 2: The main confusion is with regards to Data Storage. We have\n> the option of going for:>> A: IBM N-3700 SAN Box, having 12x FC 300GB disks,  Partitioned into 3> disks into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2> hot spare. We are also considering similar solution from EMC -\n> CX310C.>> B: Go for Internal of DAS based storage. Here for each server we> should be able to have: 2x disks on RAID-1 for logs, 6x disks on> RAID-10 for tablespace1 and 6x disks on RAID-10 for tablespace2. Or\n> maybe 12x disks on RAID-10 single table-space.>> What do I think? Well..> SAN wins on manageability, replication (say to a DR site), backup,> etc...> DAS wins on cost>\n> But for a moment keeping these aside, i wanted to discuss, purely on> performance side which one is a winner? It feels like internal-disks> will perform better, but need to understand a rough magnitude of\n> difference in performance to see if its worth loosing the> manageability features.>> Also if we choose to go with DAS, what would be the best tool to do> async replication to DR site and maybe even as a extra plus a second\n> read-only DB server to distribute select loads.Sounds like a good candidate for Slony replication for backups /read-only slaves.I haven't seen a SAN yet whose DR / replication facilities are on par\nwith a good database replication solution.  My impression is that thosefacilities are mostly for file servers, mail servers, etc.  It would bedifficult for a SAN to properly replicate a database given the strict\nordering, size and consistency requirements for the data files.  Notimpossible, but in my limited experience I haven't found one that Itrust to do it reliably either, vendor boastings to the contrarynotwithstanding.  (Hint: make sure you know exactly what your vendor's\ndefinition of the term 'snapshot' really means).So before you invest in a SAN, make sure that you're actually going tobe able to (and want to) use all the nice management features you're\npaying for.  We have some SAN's that are basically acting just asexpensive external RAID arrays because we do the databasereplication/backup in software anyway.-- Mark Lewis\n-- Harsh [email protected]", "msg_date": "Thu, 6 Sep 2007 22:28:45 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Thanks Scott, we have now requested IBM/EMC to provide test machines.\nInterestingly since you mentioned the importance of Raid controllers and the\ndrivers; we are planning to use Cent OS 5 for hosting the DB.\n\nFirstly, I could only find postgres 8.1.x RPM for CentOS 5, could not find\nany RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?\n\nSecondly, would investing into Redhat enterprise edition give any\nperformance advantage? I know all the SAN boxes are only certified on RHEL\nand not CentOS. 
Or since CentOS is similar to RHEL it would be fine?\n\nRegards,\nHarsh\n\nOn 9/6/07, Scott Marlowe <[email protected]> wrote:\n>\n> On 9/6/07, Harsh Azad <[email protected]> wrote:\n> > Hi,\n> >\n> > We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB\n> > RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n> >\n> > The current data size is about 50GB, but we want to purchase the\n> hardware to\n> > scale to about 1TB as we think our business will need to support that\n> much\n> > soon.\n> > - Currently we have a 80% read and 20% write percentages.\n>\n> For this type load, you should be running on RAID10 not RAID5. Or, if\n> you must use RAID 5, use more disks and have a battery backed caching\n> RAID controller known to perform well with RAID5 and large arrays.\n>\n> > - Currently with this configuration the Database is showing signs of\n> > over-loading.\n>\n> On I/O or CPU? If you're running out of CPU, then look to increasing\n> CPU horsepower and tuning postgresql.\n> If I/O then you need to look into a faster I/O subsystem.\n>\n> > - Auto-vaccum, etc run on this database, vaccum full runs nightly.\n>\n> Generally speaking, if you need to run vacuum fulls, you're doing\n> something wrong. Is there a reason you're running vacuum full or is\n> this just precautionary. vacuum full can bloat your indexes, so you\n> shouldn't run it regularly. reindexing might be a better choice if\n> you do need to regularly shrink your db. The better option is to\n> monitor your fsm usage and adjust fsm settings / autovacuum settings\n> as necessary.\n>\n> > - Currently CPU loads are about 20%, memory utilization is full (but\n> this\n> > is also due to linux caching disk blocks) and IO waits are frequent.\n> > - We have a load of about 400 queries per second\n>\n> What does vmstat et. al. say about CPU versus I/O wait?\n>\n> > Now we are considering to purchase our own servers and in the process\n> are\n> > facing the usual dilemmas. First I'll list out what machine we have\n> decided\n> > to use:\n> > 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n> > 32 GB RAM\n> > OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> > (Data Storage mentioned below)\n> >\n> > We have already decided to split our database into 3 machines on the\n> basis\n> > on disjoint sets of data. So we will be purchasing three of these boxes.\n> >\n> > HELP 1: Does something look wrong with above configuration, I know there\n> > will be small differences b/w opetron/xeon. But do you think there is\n> > something against going for 2.4Ghz Quad Xeons (clovertown i think)?\n>\n> Look like good machines, plenty fo memory.\n>\n> > HELP 2: The main confusion is with regards to Data Storage. We have the\n> > option of going for:\n> >\n> > A: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3\n> disks\n> > into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot\n> spare. We\n> > are also considering similar solution from EMC - CX310C.\n> >\n> > B: Go for Internal of DAS based storage. Here for each server we should\n> be\n> > able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for\n> > tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks\n> on\n> > RAID-10 single table-space.\n> >\n> > What do I think? Well..\n> > SAN wins on manageability, replication (say to a DR site), backup,\n> etc...\n> > DAS wins on cost\n>\n> The problem with SAN is that it's apparently very easy to build a big\n> expensive system that performs poorly. 
We've seen reports of such\n> here on the lists a few times. I would definitely demand an\n> evaluation period from your supplier to make sure it performs well if\n> you go SAN.\n>\n> > But for a moment keeping these aside, i wanted to discuss, purely on\n> > performance side which one is a winner? It feels like internal-disks\n> will\n> > perform better, but need to understand a rough magnitude of difference\n> in\n> > performance to see if its worth loosing the manageability features.\n>\n> That really really really depends. The quality of RAID controllers\n> for either setup is very important, as is the driver support, etc...\n> All things being even, I'd lean towards the local storage.\n>\n> > Also if we choose to go with DAS, what would be the best tool to do\n> async\n> > replication to DR site and maybe even as a extra plus a second read-only\n> DB\n> > server to distribute select loads.\n>\n> Look at slony, or PITR with continuous recovery. Of those two, I've\n> only used Slony in production, and I was very happy with it's\n> performance, and it was very easy to write a bash script to monitor\n> the replication for failures.\n>\n\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nThanks Scott, we have now requested IBM/EMC to provide test machines. Interestingly since you mentioned the importance of Raid controllers and the drivers; we are planning to use Cent OS 5 for hosting the DB.Firstly, I could only find postgres \n8.1.x RPM for CentOS 5, could not find any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?Secondly, would investing into Redhat enterprise edition give any performance advantage? I know all the SAN boxes are only certified on RHEL and not CentOS. Or since CentOS is similar to RHEL it would be fine?\nRegards,HarshOn 9/6/07, Scott Marlowe <[email protected]> wrote:\nOn 9/6/07, Harsh Azad <[email protected]> wrote:> Hi,>> We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB> RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n>> The current data size is about 50GB, but we want to purchase the hardware to> scale to about 1TB as we think our business will need to support that much> soon.> - Currently we have a 80% read and 20% write percentages.\nFor this type load, you should be running on RAID10 not RAID5.  Or, ifyou must use RAID 5, use more disks and have a battery backed cachingRAID controller known to perform well with RAID5 and large arrays.\n> - Currently with this configuration the Database is showing signs of> over-loading.On I/O or CPU?  If you're running out of CPU, then look to increasingCPU horsepower and tuning postgresql.\nIf I/O then you need to look into a faster I/O subsystem.> - Auto-vaccum, etc run on this database, vaccum full runs nightly.Generally speaking, if you need to run vacuum fulls, you're doing\nsomething wrong.  Is there a reason you're running vacuum full or isthis just precautionary.  vacuum full can bloat your indexes, so youshouldn't run it regularly.  reindexing might be a better choice if\nyou do need to regularly shrink your db.  The better option is tomonitor your fsm usage and adjust fsm settings / autovacuum settingsas necessary.>  - Currently CPU loads are about 20%, memory utilization is full (but this\n> is also due to linux caching disk blocks) and IO waits are frequent.> - We have a load of about 400 queries per secondWhat does vmstat et. al. say about CPU versus I/O wait?> Now we are considering to purchase our own servers and in the process are\n> facing the usual dilemmas. 
First I'll list out what machine we have decided> to use:> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)> 32 GB RAM> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> (Data Storage mentioned below)>> We have already decided to split our database into 3 machines on the basis> on disjoint sets of data. So we will be purchasing three of these boxes.>\n> HELP 1: Does something look wrong with above configuration, I know there> will be small differences b/w opetron/xeon. But do you think there is> something against going for 2.4Ghz Quad Xeons (clovertown i think)?\nLook like good machines, plenty fo memory.> HELP 2: The main confusion is with regards to Data Storage. We have the> option of going for:>> A: IBM N-3700 SAN Box, having 12x FC 300GB disks,  Partitioned into 3 disks\n> into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot spare. We> are also considering similar solution from EMC - CX310C.>> B: Go for Internal of DAS based storage. Here for each server we should be\n> able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for> tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks on> RAID-10 single table-space.>> What do I think? Well..\n> SAN wins on manageability, replication (say to a DR site), backup, etc...> DAS wins on costThe problem with SAN is that it's apparently very easy to build a bigexpensive system that performs poorly.  We've seen reports of such\nhere on the lists a few times.  I would definitely demand anevaluation period from your supplier to make sure it performs well ifyou go SAN.> But for a moment keeping these aside, i wanted to discuss, purely on\n> performance side which one is a winner? It feels like internal-disks will> perform better, but need to understand a rough magnitude of difference in> performance to see if its worth loosing the manageability features.\nThat really really really depends.  The quality of RAID controllersfor either setup is very important, as is the driver support, etc...All things being even, I'd lean towards the local storage.\n> Also if we choose to go with DAS, what would be the best tool to do async> replication to DR site and maybe even as a extra plus a second read-only DB> server to distribute select loads.Look at slony, or PITR with continuous recovery.  Of those two, I've\nonly used Slony in production, and I was very happy with it'sperformance, and it was very easy to write a bash script to monitorthe replication for failures.-- \nHarsh [email protected]", "msg_date": "Thu, 6 Sep 2007 22:33:49 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHarsh Azad wrote:\n> Thanks Scott, we have now requested IBM/EMC to provide test machines.\n> Interestingly since you mentioned the importance of Raid controllers and the\n> drivers; we are planning to use Cent OS 5 for hosting the DB.\n> \n> Firstly, I could only find postgres 8.1.x RPM for CentOS 5, could not find\n> any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?\n\nLook under the RHEL section of ftp.postgresql.org\n\nJoshua D. Drake\n\n> \n> Secondly, would investing into Redhat enterprise edition give any\n> performance advantage? I know all the SAN boxes are only certified on RHEL\n> and not CentOS. 
Or since CentOS is similar to RHEL it would be fine?\n> \n> Regards,\n> Harsh\n> \n> On 9/6/07, Scott Marlowe <[email protected]> wrote:\n>> On 9/6/07, Harsh Azad <[email protected]> wrote:\n>>> Hi,\n>>>\n>>> We are currently running our DB on a DualCore, Dual Proc 3.Ghz Xeon, 8GB\n>>> RAM, 4x SAS 146 GB 15K RPM on RAID 5.\n>>>\n>>> The current data size is about 50GB, but we want to purchase the\n>> hardware to\n>>> scale to about 1TB as we think our business will need to support that\n>> much\n>>> soon.\n>>> - Currently we have a 80% read and 20% write percentages.\n>> For this type load, you should be running on RAID10 not RAID5. Or, if\n>> you must use RAID 5, use more disks and have a battery backed caching\n>> RAID controller known to perform well with RAID5 and large arrays.\n>>\n>>> - Currently with this configuration the Database is showing signs of\n>>> over-loading.\n>> On I/O or CPU? If you're running out of CPU, then look to increasing\n>> CPU horsepower and tuning postgresql.\n>> If I/O then you need to look into a faster I/O subsystem.\n>>\n>>> - Auto-vaccum, etc run on this database, vaccum full runs nightly.\n>> Generally speaking, if you need to run vacuum fulls, you're doing\n>> something wrong. Is there a reason you're running vacuum full or is\n>> this just precautionary. vacuum full can bloat your indexes, so you\n>> shouldn't run it regularly. reindexing might be a better choice if\n>> you do need to regularly shrink your db. The better option is to\n>> monitor your fsm usage and adjust fsm settings / autovacuum settings\n>> as necessary.\n>>\n>>> - Currently CPU loads are about 20%, memory utilization is full (but\n>> this\n>>> is also due to linux caching disk blocks) and IO waits are frequent.\n>>> - We have a load of about 400 queries per second\n>> What does vmstat et. al. say about CPU versus I/O wait?\n>>\n>>> Now we are considering to purchase our own servers and in the process\n>> are\n>>> facing the usual dilemmas. First I'll list out what machine we have\n>> decided\n>>> to use:\n>>> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n>>> 32 GB RAM\n>>> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n>>> (Data Storage mentioned below)\n>>>\n>>> We have already decided to split our database into 3 machines on the\n>> basis\n>>> on disjoint sets of data. So we will be purchasing three of these boxes.\n>>>\n>>> HELP 1: Does something look wrong with above configuration, I know there\n>>> will be small differences b/w opetron/xeon. But do you think there is\n>>> something against going for 2.4Ghz Quad Xeons (clovertown i think)?\n>> Look like good machines, plenty fo memory.\n>>\n>>> HELP 2: The main confusion is with regards to Data Storage. We have the\n>>> option of going for:\n>>>\n>>> A: IBM N-3700 SAN Box, having 12x FC 300GB disks, Partitioned into 3\n>> disks\n>>> into RAID-4 for WAL/backup, and 9 disks on RAID-DP for data, 2 hot\n>> spare. We\n>>> are also considering similar solution from EMC - CX310C.\n>>>\n>>> B: Go for Internal of DAS based storage. Here for each server we should\n>> be\n>>> able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for\n>>> tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks\n>> on\n>>> RAID-10 single table-space.\n>>>\n>>> What do I think? Well..\n>>> SAN wins on manageability, replication (say to a DR site), backup,\n>> etc...\n>>> DAS wins on cost\n>> The problem with SAN is that it's apparently very easy to build a big\n>> expensive system that performs poorly. 
We've seen reports of such\n>> here on the lists a few times. I would definitely demand an\n>> evaluation period from your supplier to make sure it performs well if\n>> you go SAN.\n>>\n>>> But for a moment keeping these aside, i wanted to discuss, purely on\n>>> performance side which one is a winner? It feels like internal-disks\n>> will\n>>> perform better, but need to understand a rough magnitude of difference\n>> in\n>>> performance to see if its worth loosing the manageability features.\n>> That really really really depends. The quality of RAID controllers\n>> for either setup is very important, as is the driver support, etc...\n>> All things being even, I'd lean towards the local storage.\n>>\n>>> Also if we choose to go with DAS, what would be the best tool to do\n>> async\n>>> replication to DR site and maybe even as a extra plus a second read-only\n>> DB\n>>> server to distribute select loads.\n>> Look at slony, or PITR with continuous recovery. Of those two, I've\n>> only used Slony in production, and I was very happy with it's\n>> performance, and it was very easy to write a bash script to monitor\n>> the replication for failures.\n>>\n> \n> \n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4DT2ATb/zqfZUUQRAoppAJ9Pj+/nDtDd/XhzMdRkjXcGHHuaeACfRTfV\nwE8+ErUXuVnXmlchYvCPgu8=\n=TihW\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 06 Sep 2007 10:12:22 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 6-9-2007 14:35 Harsh Azad wrote:\n> 2x Quad Xeon 2.4 Ghz (4-way only 2 populated right now)\n\nI don't understand this sentence. You seem to imply you might be able to \nfit more processors in your system?\nCurrently the only Quad Core's you can buy are dual-processor \nprocessors, unless you already got a quote for a system that yields the \nnew Intel \"Tigerton\" processors.\nI.e. if they are clovertown's they are indeed Intel Core-architecture \nprocessors, but you won't be able to fit more than 2 in the system and \nget 8 cores in a system.\nIf they are Tigerton, I'm a bit surprised you got a quote for that, \nalthough HP seems to offer a system for those. If they are the old \ndual-core MP's (70xx or 71xx), you don't want those...\n\n> 32 GB RAM\n> OS Only storage - 2x SCSI 146 GB 15k RPM on RAID-1\n> (Data Storage mentioned below)\n\nI doubt you need 15k-rpm drives for OS... But that won't matter much on \nthe total cost.\n\n> HELP 1: Does something look wrong with above configuration, I know there \n> will be small differences b/w opetron/xeon. But do you think there is \n> something against going for 2.4Ghz Quad Xeons (clovertown i think)?\n\nApart from your implication that you may be able to stick more \nprocessors in it: no, not to me. Two Quad Core Xeons were even faster \nthan 8 dual core opterons in our benchmarks, although that might also \nindicate limited OS-, postgres or underlying I/O-scaling.\nObviously the new AMD Barcelona-line of processors (coming next week \norso) and the new Intel Quad Core's DP (Penryn?) 
and MP (Tigerton) may \nbe interesting to look at, I don't know how soon systems will be \navailable with those processors (HP seems to offer a tigerton-server).\n\n> B: Go for Internal of DAS based storage. Here for each server we should \n> be able to have: 2x disks on RAID-1 for logs, 6x disks on RAID-10 for \n> tablespace1 and 6x disks on RAID-10 for tablespace2. Or maybe 12x disks \n> on RAID-10 single table-space.\n\nYou don't necessarily need to use internal disks for DAS, since you can \nalso link an external SAS-enclosure either with or without an integrated \nraid-controller (IBM, Sun, Dell, HP and others have options for that), \nand those are able to be expanded to either multiple enclosures tied to \neachother or to a controller in the server.\nThose may also be usable in a warm-standby-scenario and may be quite a \nbit cheaper than FC-hardware.\n\n> But for a moment keeping these aside, i wanted to discuss, purely on \n> performance side which one is a winner? It feels like internal-disks \n> will perform better, but need to understand a rough magnitude of \n> difference in performance to see if its worth loosing the manageability \n> features.\n\nAs said, you don't necessarily need real internal disks, since SAS can \nbe used with external enclosures as well, still being DAS. I have no \nidea what difference you will or may see between those in terms of \nperformance. It probably largely depends on the raid-controller \navailable, afaik the disks will be mostly the same. And it might depend \non your available bandwidth, external SAS offers you a 4port-connection \nallowing for a 12Gbit-connection between a disk-enclosure and a \ncontroller. While - as I understand it - even expensive SAN-controllers \nonly offer dual-ported, 8Gbit connections?\nWhat's more important is probably the amount of disks and raid-cache you \ncan buy in the SAN vs DAS-scenario. If you can buy 24 disks when going \nfor DAS vs only 12 whith SAN...\n\nBut then again, I'm no real storage expert, we only have two Dell MD1000 \nDAS-units at our site.\n\nBest regards and good luck,\n\nArjen\n", "msg_date": "Thu, 06 Sep 2007 19:39:51 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 9/6/07, Harsh Azad <[email protected]> wrote:\n> Thanks Scott, we have now requested IBM/EMC to provide test machines.\n> Interestingly since you mentioned the importance of Raid controllers and the\n> drivers; we are planning to use Cent OS 5 for hosting the DB.\n\nWhat RAID controllers have you looked at. Seems the two most popular\nin terms of performance here have been Areca and 3Ware / Escalade.\nLSI seems to come in a pretty close third. Adaptec is to be avoided\nas are cheap RAID controllers (i.e. promise etc...) battery backed\ncache is a must, and the bigger the better.\n\n> Firstly, I could only find postgres 8.1.x RPM for CentOS 5, could not find\n> any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?\n>\n> Secondly, would investing into Redhat enterprise edition give any\n> performance advantage? I know all the SAN boxes are only certified on RHEL\n> and not CentOS. Or since CentOS is similar to RHEL it would be fine?\n\nfor all intents and purposes, CentOS and RHEL are the same OS, so any\npgsql rpm for one should pretty much work for the other. 
At the\nworst, you might have to get a srpm and rebuild it for CentOS / White\nBox.\n", "msg_date": "Thu, 6 Sep 2007 13:07:43 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Hi,\n\nHow about the Dell Perc 5/i card, 512MB battery backed cache or IBM\nServeRAID-8k Adapter?\n\nI hope I am sending relevant information here, I am not too well versed with\nRAID controllers.\n\nRegards,\nHarsh\n\nOn 9/6/07, Scott Marlowe <[email protected]> wrote:\n>\n> On 9/6/07, Harsh Azad <[email protected]> wrote:\n> > Thanks Scott, we have now requested IBM/EMC to provide test machines.\n> > Interestingly since you mentioned the importance of Raid controllers and\n> the\n> > drivers; we are planning to use Cent OS 5 for hosting the DB.\n>\n> What RAID controllers have you looked at. Seems the two most popular\n> in terms of performance here have been Areca and 3Ware / Escalade.\n> LSI seems to come in a pretty close third. Adaptec is to be avoided\n> as are cheap RAID controllers (i.e. promise etc...) battery backed\n> cache is a must, and the bigger the better.\n>\n> > Firstly, I could only find postgres 8.1.x RPM for CentOS 5, could not\n> find\n> > any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?\n> >\n> > Secondly, would investing into Redhat enterprise edition give any\n> > performance advantage? I know all the SAN boxes are only certified on\n> RHEL\n> > and not CentOS. Or since CentOS is similar to RHEL it would be fine?\n>\n> for all intents and purposes, CentOS and RHEL are the same OS, so any\n> pgsql rpm for one should pretty much work for the other. At the\n> worst, you might have to get a srpm and rebuild it for CentOS / White\n> Box.\n>\n\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nHi,How about the Dell Perc 5/i card, 512MB battery backed cache or IBM ServeRAID-8k Adapter?I hope I am sending relevant information here, I am not too well versed with RAID controllers.Regards,\nHarshOn 9/6/07, Scott Marlowe <[email protected]> wrote:\nOn 9/6/07, Harsh Azad <[email protected]> wrote:> Thanks Scott, we have now requested IBM/EMC to provide test machines.> Interestingly since you mentioned the importance of Raid controllers and the\n> drivers; we are planning to use Cent OS 5 for hosting the DB.What RAID controllers have you looked at.  Seems the two most popularin terms of performance here have been Areca and 3Ware / Escalade.\nLSI seems to come in a pretty close third.  Adaptec is to be avoidedas are cheap RAID controllers (i.e. promise etc...)  battery backedcache is a must, and the bigger the better.> Firstly, I could only find postgres \n8.1.x RPM for CentOS 5, could not find> any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?>> Secondly, would investing into Redhat enterprise edition give any> performance advantage? I know all the SAN boxes are only certified on RHEL\n> and not CentOS. Or since CentOS is similar to RHEL it would be fine?for all intents and purposes, CentOS and RHEL are the same OS, so anypgsql rpm for one should pretty much work for the other.  
At the\nworst, you might have to get a srpm and rebuild it for CentOS / WhiteBox.-- Harsh [email protected]", "msg_date": "Thu, 6 Sep 2007 23:45:07 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Thu, 2007-09-06 at 22:28 +0530, Harsh Azad wrote:\n> Thanks Mark.\n> \n> If I replicate a snapshot of Data and log files (basically the entire\n> PG data directory) and I maintain same version of postgres on both\n> servers, it should work right?\n> \n> I am also thinking that having SAN storage will provide me with\n> facility of keeping a warm standby DB. By just shutting one server\n> down and starting the other mounting the same File system I should be\n> able to bing my DB up when the primary inccurs a physical failure. \n> \n> I'm only considering SAN storage for this feature - has anyone ever\n> used SAN for replication and warm standy-by on Postgres?\n> \n> Regards,\n> Harsh\n\n\nWe used to use a SAN for warm standby of a database, but with Oracle and\nnot PG. It worked kinda sorta, except for occasional crashes due to\nbuggy drivers.\n\nBut after going through the exercise, we realized that we hadn't gained\nanything over just doing master/slave replication between two servers,\nexcept that it was more expensive, had a tendency to expose buggy\ndrivers, had a single point of failure in the SAN array, failover took\nlonger and we couldn't use the warm standby server to perform read-only\nqueries. So we reverted back and just used the SAN as expensive DAS and\nset up a separate box for DB replication.\n\nSo if that's the only reason you're considering a SAN, then I'd advise\nyou to spend the extra money on more DAS disks.\n\nMaybe I'm jaded by past experiences, but the only real use case I can\nsee to justify a SAN for a database would be something like Oracle RAC,\nbut I'm not aware of any PG equivalent to that.\n\n-- Mark Lewis\n", "msg_date": "Thu, 06 Sep 2007 11:29:20 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 9/6/07, Harsh Azad <[email protected]> wrote:\n> Hi,\n>\n> How about the Dell Perc 5/i card, 512MB battery backed cache or IBM\n> ServeRAID-8k Adapter?\n\nAll Dell Percs have so far been based on either adaptec or LSI\ncontrollers, and have ranged from really bad to fairly decent\nperformers. There were some recent posts on this list where someone\nwas benchmarking one, I believe. searching the list archives might\nprove useful.\n\nI am not at all familiar with IBM's ServeRAID controllers.\n\nDo either of these come with or have the option for battery back\nmodule for the cache?\n\n> I hope I am sending relevant information here, I am not too well versed with\n> RAID controllers.\n\nYep. Def look for a chance to evaluate whichever ones you're\nconsidering. The Areca's are in the same price range as the IBM\ncontroller you're considering, maybe a few hundred dollars more. See\nif you can get one for review while looking at these other\ncontrollers.\n\nI'd recommend against Dell unless you're at a company that orders\ncomputers by the hundred lot. 
My experience with Dell has been that\nunless you are a big customer you're just another number (a small one\nat that) on a spreadsheet.\n", "msg_date": "Thu, 6 Sep 2007 13:42:41 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Scott Marlowe wrote:\n> On 9/6/07, Harsh Azad <[email protected]> wrote:\n> \n>> Hi,\n>>\n>> How about the Dell Perc 5/i card, 512MB battery backed cache or IBM\n>> ServeRAID-8k Adapter?\n>> \n>\n> All Dell Percs have so far been based on either adaptec or LSI\n> controllers, and have ranged from really bad to fairly decent\n> performers. There were some recent posts on this list where someone\n> was benchmarking one, I believe. searching the list archives might\n> prove useful.\n>\n> I am not at all familiar with IBM's ServeRAID controllers.\n>\n> Do either of these come with or have the option for battery back\n> module for the cache?\n>\n> \n>> I hope I am sending relevant information here, I am not too well versed with\n>> RAID controllers.\n>> \n>\n> Yep. Def look for a chance to evaluate whichever ones you're\n> considering. The Areca's are in the same price range as the IBM\n> controller you're considering, maybe a few hundred dollars more. See\n> if you can get one for review while looking at these other\n> controllers.\n>\n> I'd recommend against Dell unless you're at a company that orders\n> computers by the hundred lot. My experience with Dell has been that\n> unless you are a big customer you're just another number (a small one\n> at that) on a spreadsheet.\n> \nIf you do go with Dell get connected with an account manager instead of\nordering online. You work with the same people every time you have an\norder and in my experience they can noticeably beat the best prices I\ncan find. This is definitely the way to go if you don't want to get\nlost in the volume. The group I have worked with for the past ~2 years\nis very responsive, remembers me and my company across the 3 - 6 month\ngaps between purchases, and the server/storage person in the group is\nreasonably knowledgeable and helpful. This is for small lots of\nmachines, our first order was just 2 boxes and i've only placed 4 orders\ntotal in the past 2 years.\n\nJust my personal experience, i'd be happy to pass along the account\nmanager's information if anyone is interested.\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n> \nJoe Uhl\[email protected]\n", "msg_date": "Thu, 06 Sep 2007 15:02:53 -0400", "msg_from": "Joe Uhl <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "I am not sure I agree with that evaluation.\nI only have 2 dell database servers and they have been 100% reliable.\nMaybe he is referring to support which does tend be up to who you get.\nWhen I asked about performance on my new server they were very helpful but I\ndid have a bad time on my NAS device (but had the really cheap support plan\non it). They did help me get it fixed but I had to RMA all the drives on the\nNAS as they were all bad and it was no fun installing the os as it had no\nfloppy. I got the better support for both the data base servers which are\nusing jbod from dell for the disk array. 
The quad proc opteron with duel\ncores and 16gig of memory has been extremely fast (like 70%) over my older 4\nproc 32 bit single core machine with 8 gig. But both are running postgres\nand perform needed functionality. I would like to have redundant backups of\nthese as they are mission critical, but all in good time.\n\nI'd recommend against Dell unless you're at a company that orders\ncomputers by the hundred lot. My experience with Dell has been that\nunless you are a big customer you're just another number (a small one\nat that) on a spreadsheet.\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n", "msg_date": "Thu, 6 Sep 2007 15:14:04 -0400", "msg_from": "\"Joel Fradkin\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 6-9-2007 20:42 Scott Marlowe wrote:\n> On 9/6/07, Harsh Azad <[email protected]> wrote:\n>> Hi,\n>>\n>> How about the Dell Perc 5/i card, 512MB battery backed cache or IBM\n>> ServeRAID-8k Adapter?\n> \n> All Dell Percs have so far been based on either adaptec or LSI\n> controllers, and have ranged from really bad to fairly decent\n> performers. There were some recent posts on this list where someone\n> was benchmarking one, I believe. searching the list archives might\n> prove useful.\n\nThe Dell PERC5-cards are based on LSI-chips and perform quite well. \nAfaik Dell hasn't used adaptecs for a while now, but even recent \n(non-cheap ;) ) adaptecs aren't that bad afaik.\n\nThe disadvantage of using Areca or 3Ware is obviously the lack of \nsupport in A-brand servers and the lack of support for SAS-disks. Only \nrecently Areca has stepped in the SAS-market, but I have no idea how \neasily those controllers are integrated in standard servers (they tend \nto be quite large, which will not fit in 2U and maybe not even in 3U or \n4U-servers).\n\nArjen\n", "msg_date": "Thu, 06 Sep 2007 21:38:02 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 6-9-2007 20:29 Mark Lewis wrote:\n> Maybe I'm jaded by past experiences, but the only real use case I can\n> see to justify a SAN for a database would be something like Oracle RAC,\n> but I'm not aware of any PG equivalent to that.\n\nPG Cluster II seems to be able to do that, but I don't know whether \nthat's production quality already...\n\nArjen\n", "msg_date": "Thu, 06 Sep 2007 21:40:25 +0200", "msg_from": "Arjen van der Meijden <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On 9/6/07, Joel Fradkin <[email protected]> wrote:\n> I am not sure I agree with that evaluation.\n> I only have 2 dell database servers and they have been 100% reliable.\n> Maybe he is referring to support which does tend be up to who you get.\n> When I asked about performance on my new server they were very helpful but I\n> did have a bad time on my NAS device (but had the really cheap support plan\n> on it). They did help me get it fixed but I had to RMA all the drives on the\n> NAS as they were all bad and it was no fun installing the os as it had no\n> floppy. I got the better support for both the data base servers which are\n> using jbod from dell for the disk array. 
The quad proc opteron with duel\n> cores and 16gig of memory has been extremely fast (like 70%) over my older 4\n> proc 32 bit single core machine with 8 gig. But both are running postgres\n> and perform needed functionality. I would like to have redundant backups of\n> these as they are mission critical, but all in good time.\n\nDell's ok if by support you mean replacing simple broken parts, etc...\n\nI'm talking about issues like the one we had with our 26xx servers\nwhich, thankfully, Dell hasn't made in a while. We had the adaptec\ncontrollers that locked up once every 1 to 3 months for no reason, and\nwith 4 servers this meant a lockup of one every 1 to 2 weeks. Not\nacceptable in a production environment. It took almost 2 years to get\nDell to agree to ship us the LSI based RAID controllers for those 4\nmachines. Our account rep told us not to worry about returning the\nRAID controllers as they were so old as to be obsolete. One of the\nfour replacement controllers they sent us was bad, so they\ncross-shipped another one. Sometimes you get a broken part, it\nhappens.\n\nOne month after we got all our RAID controllers replaced, we started\ngetting nasty calls from their parts people wanting those parts back,\nsaying they were gonna charge us for them, etc... Two thirds of the\nparts were still in the server because we hadn't gotten a sheet\nidentifying them (the RAID key and battery) and had left in not\nworrying about it because we'd been told they were needed.\n\nThese are production servers, we can't just shut them off for fun to\npull a part we were told we didn't need to return.\n\nOn top of that, the firmware updates come as an .exe that has to be\nput on a windows floppy. We don't have a single windows machine with\na floppy at the company I work for (lots of laptops with windows). We\nhad to dig out a floppy drive and a windows CD to create a bootable\nfloppy to install a firmware update for a linux server.\n\nSo, my main complaint is about their customer service. The machines,\nwhen they work, are pretty good. Normally the hardware gets along\nwith RH OSes. But when things go wrong, Dell will not admit to a\ndesign problem that's staring them in the face, and they have wasted\nliterally hundreds of man hours for us with their stonewalling on this\nsubject. So, I see no reason to buy more hardware from them when\nthere are dealers big and small who have so far treated us much\nbetter.\n", "msg_date": "Thu, 6 Sep 2007 17:35:11 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Thu, 6 Sep 2007, Harsh Azad wrote:\n\n> Firstly, I could only find postgres 8.1.x RPM for CentOS 5, could not find\n> any RPM for 8.2.4. Is there any 8.2.4 RPM for CentOS 5?\n\nYou've already been pointed in the right direction. Devrim, the person who \nhandles this packaging, does a great job of building all the RPMs. But I \nhave a small philisophical difference with the suggested instructions for \nactually installing them though. I just finished an alternate \ninstallation guide for RHEL/CentOS 5 that's now posted at \nhttp://www.westnet.com/~gsmith/content/postgresql/pgrpm.htm you may want \nto take a look at.\n\n> Secondly, would investing into Redhat enterprise edition give any \n> performance advantage? I know all the SAN boxes are only certified on \n> RHEL and not CentOS. Or since CentOS is similar to RHEL it would be \n> fine?\n\nWouldn't expect a performance advantage. 
The situation you have to ask \nconsider is this: your SAN starts having funky problems, and your \ndatabase is down because of it. You call the vendor. They find out \nyou're running CentOS instead of RHEL and say that's the cause of your \nproblem (even though it probably isn't). How much will such a passing the \nbuck problem cost your company? If it's a significant number, you'd be \nfoolish to run CentOS instead of the real RHEL. Some SAN vendors can be \nvery, very picky about what they will support, and for most business \nenvironments the RHEL subscription isn't so expensive that it's worth \nwandering into an area where your support situation is fuzzy just to save \nthat money.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 7 Sep 2007 00:26:23 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "* Arjen van der Meijden:\n\n> The disadvantage of using Areca or 3Ware is obviously the lack of\n> support in A-brand servers and the lack of support for SAS-disks. Only\n> recently Areca has stepped in the SAS-market, but I have no idea how\n> easily those controllers are integrated in standard servers (they tend\n> to be quite large, which will not fit in 2U and maybe not even in 3U\n> or 4U-servers).\n\nRecent 3ware controllers are a bit on the hot side, too. We had to\nswitch from two 12 port controllers to a single 24 port controller\nbecause of that (combined with an unlucky board layout: the two 8x\nPCIe connectors are next to each other).\n\nUnfortunately, read performance maxes out at about 8 disks in a\nRAID-10 configuration. Software RAID-0 across hardware RAID-1 is\nsignificantly faster (factor of 3 to 5 in low-level benchmarks).\nHowever, it seems that something in this stack does not enforce write\nbarriers properly, so I don't think we will use this in production.\n\nRAID-6 doesn't perform well, either (especially for several processes\nreading different files sequentially).\n\nWe'll probably split the 24 disks into a couple of RAID-10s, and\ndistribute tables and indexes manually among the file systems. This\nis a bit disappointing, especially because the system is able to read\nat 800+ MB/s, as shown by the software-RAID-on-hardware-RAID\nconfiguration.\n\nI haven't seen 24-disk benchmarks with Areca controllers. A\ncomparison might be interesting.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 07 Sep 2007 10:14:50 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Fri, Sep 07, 2007 at 12:26:23AM -0400, Greg Smith wrote:\n>consider is this: your SAN starts having funky problems, and your \n>database is down because of it. You call the vendor. They find out \n>you're running CentOS instead of RHEL and say that's the cause of your \n>problem (even though it probably isn't). How much will such a passing the \n>buck problem cost your company? If it's a significant number, you'd be \n>foolish to run CentOS instead of the real RHEL. Some SAN vendors can be \n>very, very picky about what they will support, and for most business \n>environments the RHEL subscription isn't so expensive that it's worth \n>wandering into an area where your support situation is fuzzy just to save \n>that money.\n\nCorrect. 
Far more sensible to skip the expensive SAN solution, not worry \nabout having to play games, and save *even more* money. \n\nSANs have their place, but postgres storage generally isn't it; you'll \nget more bang/buck with DAS and very likely better absolute performance \nas well. SANs make sense if you're doing a shared filesystem (don't \neven think about doing this with postgres), or if you're consolidating \nbackups & DR (which doesn't work especially well with databases).\n\nMike Stone\n", "msg_date": "Fri, 07 Sep 2007 06:04:49 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "We're also considering to install postgres on SAN - that is, my boss is\nconvinced this is the right way to go.\n\nAdvantages:\n\n 1. Higher I/O (at least the salesman claims so)\n 2. Easier to upgrade the disk capacity\n 3. Easy to set up \"warm standby\" functionality. (Then again, if the\n postgres server fails miserably, it's likely to be due to a disk\n crash).\n\nAlso, my boss states that \"all big enterprises uses SAN nowadays\".\n\nDisadvantages:\n\n 1. Risky? One gets the impression that there are frequent problems\n with data integrity when reading some of the posts in this thread.\n\n 2. Expensive\n\n 3. \"Single point of failure\" ... but that you have either it's a SAN or\n a local disk, one will anyway need good backup systems (and eventually\n \"warm standby\"-servers running from physically separated disks).\n\n 4. More complex setup?\n\n 5. If there are several hosts with write permission towards the same\n disk, I can imagine the risks being higher for data integrity\n breakages. Particularly, I can imagine that if two postgres instances\n is started up towards the same disk (due to some sysadmin mistake), it\n could be disasterous.\n\n", "msg_date": "Fri, 7 Sep 2007 12:33:41 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Fri, Sep 07, 2007 at 12:33:41PM +0200, Tobias Brox wrote:\n>Advantages:\n>\n> 1. Higher I/O (at least the salesman claims so)\n\nBenchmark it. It is extremely unlikely that you'll get I/O *as good as* \nDAS at a similar price point. \n\n> 2. Easier to upgrade the disk capacity\n\nIs this an issue? You may find that you can simply get dramatically more \nspace for the money with DAS and not have to worry about an upgrade. \nAlso, you can use the postgres tablespace functionality to migrate data \nto a new partition fairly transparently.\n\n> 3. Easy to set up \"warm standby\" functionality. (Then again, if the\n> postgres server fails miserably, it's likely to be due to a disk\n> crash).\n\nYou may find that using db replication will gain you even more \nreliability for less money.\n\n>Also, my boss states that \"all big enterprises uses SAN nowadays\".\n\nUse SAN *for what*?\n\nMike Stone\n", "msg_date": "Fri, 07 Sep 2007 09:16:03 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "I'm getting a san together to consolidate my disk space usage for my\nservers. It's iscsi based and I'll be pxe booting my servers from it.\nThe idea is to keep spares on hand for one system (the san) and not have\nto worry about spares for each specific storage system on each server.\nThis also makes growing filesystems and such pretty simple. 
Redundancy\nis also good since I'll have two iscsi switches plugged into two cisco\nethernet switches and two different raid controllers on the jbod. I'll\nstart plugging my servers into each switch for further redundancy. In\nthe end I could loose disks, ethernet switches, cables, iscsi switches,\nraid controller, whatever, and it keeps on moving.\n\nThat said, I'm not putting my postgres data on the san. The DB server\nwill boot from the san and use it for is OS, but there are 6 15k SAS\ndisks in it setup with raid-10 that will be used for the postgres data\nmount. The machine is a dell 2950 and uses an LSI raid card.\n\nThe end result is a balance of cost, performance, and reliability. I'm\nusing iscsi for the cost, reliability, and ease of use, but where I need\nperformance I'm sticking to local disks.\n\nschu\n", "msg_date": "Fri, 07 Sep 2007 09:21:10 -0800", "msg_from": "Matthew Schumacher <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "We are currently running our database against on SAN share. It looks like this:\n\n2 x RAID 10 (4 disk SATA 7200 each)\n\nRaid Group 0 contains the tables + indexes\nRaid Group 1 contains the log files + backups (pg_dump)\n\nOur database server connects to the san via iSCSI over Gig/E using\njumbo frames. File system is XFS (noatime).\n\nI believe our raid controller is an ARECA. Whatever it is, it has the\noption of adding a battery to it but I have not yet been able to\nconvince my boss that we need it.\n\nMaintenance is nice, we can easily mess around with the drive shares,\nexpand and contract them, snapshot them, yadda yadda yadda. All\nthings which we NEVER do to our database anyway. :)\n\nPerformance, however, is a mixed bag. It handles concurrency very\nwell. We have a number of shares (production shares, data shares, log\nfile shares, backup shares, etc. etc.) spread across the two raid\ngroups and it handles them with aplomb.\n\nThroughput, however, kinda sucks. I just can't get the kind of\nthroughput to it I was hoping to get. When our memory cache is blown,\nthe database can be downright painful for the next few minutes as\neverything gets paged back into the cache.\n\nI'd love to try a single 8 disk RAID 10 with battery wired up directly\nto our database, but given the size of our company and limited funds,\nit won't be feasible any time soon.\n\nBryan\n\nOn 9/7/07, Matthew Schumacher <[email protected]> wrote:\n> I'm getting a san together to consolidate my disk space usage for my\n> servers. It's iscsi based and I'll be pxe booting my servers from it.\n> The idea is to keep spares on hand for one system (the san) and not have\n> to worry about spares for each specific storage system on each server.\n> This also makes growing filesystems and such pretty simple. Redundancy\n> is also good since I'll have two iscsi switches plugged into two cisco\n> ethernet switches and two different raid controllers on the jbod. I'll\n> start plugging my servers into each switch for further redundancy. In\n> the end I could loose disks, ethernet switches, cables, iscsi switches,\n> raid controller, whatever, and it keeps on moving.\n>\n> That said, I'm not putting my postgres data on the san. The DB server\n> will boot from the san and use it for is OS, but there are 6 15k SAS\n> disks in it setup with raid-10 that will be used for the postgres data\n> mount. The machine is a dell 2950 and uses an LSI raid card.\n>\n> The end result is a balance of cost, performance, and reliability. 
I'm\n> using iscsi for the cost, reliability, and ease of use, but where I need\n> performance I'm sticking to local disks.\n>\n> schu\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n", "msg_date": "Fri, 7 Sep 2007 12:56:20 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Friday 07 September 2007 10:56, \"Bryan Murphy\" \n<[email protected]> wrote:\n> Our database server connects to the san via iSCSI over Gig/E using\n> jumbo frames. File system is XFS (noatime).\n>\n> Throughput, however, kinda sucks. I just can't get the kind of\n> throughput to it I was hoping to get. \n\nA single Gig/E couldn't even theoretically do better than 125MB/sec, so \nyeah I would expect throughput sucks pretty bad.\n\n-- \n\"A democracy is a sheep and two wolves deciding on what to have for\nlunch. Freedom is a well armed sheep contesting the results of the\ndecision.\" -- Benjamin Franklin\n\n", "msg_date": "Fri, 7 Sep 2007 11:18:42 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Bryan Murphy wrote:\n\n>Our database server connects to the san via iSCSI over Gig/E using\n>jumbo frames. File system is XFS (noatime).\n>\n>\n> \n>\n...\n\n>Throughput, however, kinda sucks. I just can't get the kind of\n>throughput to it I was hoping to get. When our memory cache is blown,\n>the database can be downright painful for the next few minutes as\n>everything gets paged back into the cache.\n>\n> \n>\n\nRemember that Gig/E is bandwidth limited to about 100 Mbyte/sec. Maybe \na little faster than that downhill with a tailwind, but not much. \nYou're going to get much better bandwidth connecting to a local raid \ncard talking to local disks simply due to not having the ethernet as a \nbottleneck. iSCSI is easy to set up and manage, but it's slow. This is \nthe big advantage Fibre Channel has- serious performance. You can have \nmultiple channels on a single fibre channel card- IIRC, QLogic's cards \nhave a default of 4 channels- each pumping 400 Mbyte/sec. At which \npoint the local bus rapidly becomes the bottleneck. Of course, this \ncomes at the cost of a signifigant increase in complexity.\n\nBrian\n\n", "msg_date": "Fri, 07 Sep 2007 14:21:52 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nAlan Hodgson wrote:\n> On Friday 07 September 2007 10:56, \"Bryan Murphy\" \n> <[email protected]> wrote:\n>> Our database server connects to the san via iSCSI over Gig/E using\n>> jumbo frames. File system is XFS (noatime).\n>>\n>> Throughput, however, kinda sucks. I just can't get the kind of\n>> throughput to it I was hoping to get. \n> \n> A single Gig/E couldn't even theoretically do better than 125MB/sec, so \n> yeah I would expect throughput sucks pretty bad.\n\nWe have a customer that has a iSCSI SAN that can bond multiple Gig/E\nconnections that provides them with reasonable performance. You should\nsee if yours allows it.\n\nJoshua D. Drake\n\n\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4ZbGATb/zqfZUUQRAhtmAKCh/PsmkL/JOPq4++Aci2/XwDDJ7wCfbwJs\n5vBg+TG5xQFKoJMdybpjDWo=\n=up8R\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 07 Sep 2007 11:21:58 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Fri, 7 Sep 2007, Tobias Brox wrote:\n\n> We're also considering to install postgres on SAN - that is, my boss is\n> convinced this is the right way to go.\n> Advantages:\n> 1. Higher I/O (at least the salesman claims so)\n\nShockingly, the salesman is probably lying to you. The very concept of \nSAN says that you're putting something in between your system and the \ndisks, and that something therefore must slow things down compared to \nconnecting directly. iSCSI, FC, whatever you're using as the \ncommunications channel can't be as fast as a controller card with a good \ninterface straight into the motherboard. For example, a PCI-E x16 disk \ncontroller card maxes out at 4GB/s in each direction; good luck bonding \nenough iSCSI or FC channels together to reach that transfer rate and \ngetting something even remotely cost-competative with an internal card.\n\nThe cases where a SAN can improve upon performance over direct discs are \nwhen the comparison isn't quite fair; for example:\n\n1) The SAN allows spreading the load over more disks than you can fit \ninternally in the system\n2) The SAN provides a larger memory cache than the internal cards you're \ncomparing against\n\nIf you're in one of those situations, then perhaps the salesman's claim \ncould have some merit. There are lots of reasons one might want to use a \nSAN, but a higher I/O rate when fairly comparing to connecting disks \ndirectly is unlikely to be on that list.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 7 Sep 2007 16:04:41 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "\nOn Sep 6, 2007, at 2:42 PM, Scott Marlowe wrote:\n\n> I'd recommend against Dell unless you're at a company that orders\n> computers by the hundred lot. My experience with Dell has been that\n> unless you are a big customer you're just another number (a small one\n> at that) on a spreadsheet.\n\nI order maybe 5-6 servers per year from dell, and the sales rep knows \nme when I call him. Just set up a business account.\n\nThat said, lately I've been buying Sun X4100's for my DB servers. \nThese machines are built like tanks and extremely fast. The only \ndifficulty is hooking up disks to them. The only sane choice is to \nuse a fibre channel card to an external array. 
The only dual-channel \ninternal SCSI RAID controller that fits is an Adaptec model, and it \nis to be avoided.\n", "msg_date": "Fri, 7 Sep 2007 16:35:09 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Fri, 7 Sep 2007, Tobias Brox wrote:\n\n> We're also considering to install postgres on SAN - that is, my boss is\n> convinced this is the right way to go.\n>\n> Advantages:\n>\n> 1. Higher I/O (at least the salesman claims so)\n\nonly if you buy better disks for the SAN then for the local system (note \nthat this includes battery backed ram for write caching. the SAN will \ninclude a bunch becouse it's performance would _suck_ otherwise. if you \ndon't put any on your stand-alone system you are comparing apples to \noranges)\n\n> 2. Easier to upgrade the disk capacity\n\nonly if you buy a SAN with a lot of empty drive slots, but wouldn't buy a \nsystem with empty drive slots.\n\n> 3. Easy to set up \"warm standby\" functionality. (Then again, if the\n> postgres server fails miserably, it's likely to be due to a disk\n> crash).\n\nand if postgres dies for some other reason the image on disk needs repair, \nunless you script stopping postgres when the SAN does it's snapshots, \nthose snapshots are not going to be that good. the problems are useually \nrepairable, but that makes starting your warm spare harder.\n\n> Also, my boss states that \"all big enterprises uses SAN nowadays\".\n\nyour bos would be very surprised at what the really big shops are doing \n(and not doing). yes they have a SAN, they have many SANs, from many \ndifferent vendors, and they have many systems that don't use the SAN and \nuse local disks instead. when you get really large you can find just about \nanything _somewhere_ in the company.\n\n> Disadvantages:\n>\n> 1. Risky? One gets the impression that there are frequent problems\n> with data integrity when reading some of the posts in this thread.\n\nSAN's add more parts and more potential points of failure, then when you \nadd the SAN replication to the mix things get even more 'interesting'. \ndoing SAN replication across a significant distance to your DR facility \ncan be a LOT harder to get working right then the salesman makes it sound. \nit's not uncommon to see a san replication decide that it's going to take \na week to catch up after doing a DR test for example.\n\n> 2. Expensive\n\nno, _extremely expensive. price one and then look at how much hardware you \ncould buy instead. you can probably buy much mroe storage, and a couple \ncomplete spare systems (do replication to a local spare as well as your \nremote system) and end up with even more reliability.\n\n> 3. \"Single point of failure\" ... but that you have either it's a SAN or\n> a local disk, one will anyway need good backup systems (and eventually\n> \"warm standby\"-servers running from physically separated disks).\n\nno, with local disks you can afford to have multiple systems so that you \ndon't have a SPOF\n\n> 4. More complex setup?\n>\n> 5. If there are several hosts with write permission towards the same\n> disk, I can imagine the risks being higher for data integrity\n> breakages. 
Particularly, I can imagine that if two postgres instances\n> is started up towards the same disk (due to some sysadmin mistake), it\n> could be disasterous.\n\nwhen you are useing a SAN for a database the SAN vendor will have you \nallocate complete disks to each box, so you don't have multiple boxes \nhitting the same drive, but you also don't get a lot of the anvantages the \nsalesman talks about.\n\nDavid Lang\n", "msg_date": "Fri, 7 Sep 2007 14:10:32 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\[email protected] wrote:\n> On Fri, 7 Sep 2007, Tobias Brox wrote:\n> \n>> We're also considering to install postgres on SAN - that is, my boss is\n>> convinced this is the right way to go.\n>>\n>> Advantages:\n>>\n>> 1. Higher I/O (at least the salesman claims so)\n> \n\nIn general a SAN does not provide more I/O than direct attached storage.\nIt is all about the BUS, Controller and drive types.\n\n> only if you buy better disks for the SAN then for the local system (note\n> that this includes battery backed ram for write caching. the SAN will\n> include a bunch becouse it's performance would _suck_ otherwise. if you\n> don't put any on your stand-alone system you are comparing apples to\n> oranges)\n> \n>> 2. Easier to upgrade the disk capacity\n> \n> only if you buy a SAN with a lot of empty drive slots, but wouldn't buy\n> a system with empty drive slots.\n\nWell there are SANs that have trays that can be stacked, but then again\nyou can get the same thing with DAS too.\n\n> \n>> 3. Easy to set up \"warm standby\" functionality. (Then again, if the\n>> postgres server fails miserably, it's likely to be due to a disk\n>> crash).\n> \n>> Also, my boss states that \"all big enterprises uses SAN nowadays\".\n> \n\nUhmm as someone who consults with many of the big enterprises that are\nrunning PostgreSQL, that is *not* true.\n\n>> 2. Expensive\n> \n> no, _extremely expensive. price one and then look at how much hardware\n\nLet me just +1 this. The amount of DAS storage you can get for 30k is\namazing compared to the amount of SAN you can get for 30k.\n\nJoshua D. Drake\n\n> you could buy instead. you can probably buy much mroe storage, and a\n> couple complete spare systems (do replication to a local spare as well\n> as your remote system) and end up with even more reliability.\n> \n>> 3. \"Single point of failure\" ... but that you have either it's a SAN or\n>> a local disk, one will anyway need good backup systems (and eventually\n>> \"warm standby\"-servers running from physically separated disks).\n> \n> no, with local disks you can afford to have multiple systems so that you\n> don't have a SPOF\n> \n>> 4. More complex setup?\n>>\n>> 5. If there are several hosts with write permission towards the same\n>> disk, I can imagine the risks being higher for data integrity\n>> breakages. 
Particularly, I can imagine that if two postgres instances\n>> is started up towards the same disk (due to some sysadmin mistake), it\n>> could be disasterous.\n> \n> when you are useing a SAN for a database the SAN vendor will have you\n> allocate complete disks to each box, so you don't have multiple boxes\n> hitting the same drive, but you also don't get a lot of the anvantages\n> the salesman talks about.\n> \n> David Lang\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: don't forget to increase your free space map settings\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4b9/ATb/zqfZUUQRAnBiAJ4kdOicN3If4scLAVdaU4nS+srGHQCgnkR2\nC6RvSyLcAtgQ1bJJEau8s00=\n=lqbw\n-----END PGP SIGNATURE-----\n", "msg_date": "Fri, 07 Sep 2007 14:15:43 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Fri, Sep 07, 2007 at 02:10:32PM -0700, [email protected] wrote:\n> >3. Easy to set up \"warm standby\" functionality. (Then again, if the\n> >postgres server fails miserably, it's likely to be due to a disk\n> >crash).\n> \n> and if postgres dies for some other reason the image on disk needs repair, \n> unless you script stopping postgres when the SAN does it's snapshots, \n> those snapshots are not going to be that good. the problems are useually \n> repairable, but that makes starting your warm spare harder.\n\nUh, the \"image\" you get from a PITR backup \"needs repair\" too. There's\nabsolutely nothing wrong with using a SAN or filesystem snapshot as a\nbackup mechanism, as long as it's a true snapshot, and it includes *all*\nPostgreSQL data (everything under $PGDATA as well as all tablespaces).\n\nAlso, to reply to someone else's email... there is one big reason to use\na SAN over direct storage: you can do HA that results in 0 data loss.\nGood SANs are engineered to be highly redundant, with multiple\ncontrollers, PSUs, etc, so that the odds of losing the SAN itself are\nvery, very low. The same isn't true with DAS.\n\nBut unless you need that kind of HA recovery, I'd tend to stay away from\nSANs.\n\nBTW, if you need *serious* bandwidth, look at things like Sun's\n\"thumper\" (I know there's at least one other company that makes\nsomething similar). 40-48 drives in a single 4U chassis == lots of\nthroughput.\n\nFinally, if you do get a SAN, make sure and benchmark it. I've seen more\nthan one case of a SAN that wasn't getting anywhere near the performance\nit should be, even with a simple dd test.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 15:55:51 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:\n>Also, to reply to someone else's email... 
there is one big reason to use\n>a SAN over direct storage: you can do HA that results in 0 data loss.\n>Good SANs are engineered to be highly redundant, with multiple\n>controllers, PSUs, etc, so that the odds of losing the SAN itself are\n>very, very low. The same isn't true with DAS.\n\nYou can get DAS arrays with multiple controllers, PSUs, etc. DAS != \nsingle disk.\n\nMike Stone\n\n", "msg_date": "Tue, 11 Sep 2007 17:09:00 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:\n> On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:\n> >Also, to reply to someone else's email... there is one big reason to use\n> >a SAN over direct storage: you can do HA that results in 0 data loss.\n> >Good SANs are engineered to be highly redundant, with multiple\n> >controllers, PSUs, etc, so that the odds of losing the SAN itself are\n> >very, very low. The same isn't true with DAS.\n> \n> You can get DAS arrays with multiple controllers, PSUs, etc. DAS != \n> single disk.\n\nIt's still in the same chassis, though, which means if you lose memory\nor mobo you're still screwed. In a SAN setup for redundancy, there's\nvery little in the way of a single point of failure; generally only the\nbackplane, and because there's very little that's on there it's\nextremely rare for one to fail.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 18:07:44 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Tue, 11 Sep 2007, Decibel! wrote:\n\n> On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:\n>> On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:\n>>> Also, to reply to someone else's email... there is one big reason to use\n>>> a SAN over direct storage: you can do HA that results in 0 data loss.\n>>> Good SANs are engineered to be highly redundant, with multiple\n>>> controllers, PSUs, etc, so that the odds of losing the SAN itself are\n>>> very, very low. The same isn't true with DAS.\n>>\n>> You can get DAS arrays with multiple controllers, PSUs, etc. DAS !=\n>> single disk.\n>\n> It's still in the same chassis, though, which means if you lose memory\n> or mobo you're still screwed. In a SAN setup for redundancy, there's\n> very little in the way of a single point of failure; generally only the\n> backplane, and because there's very little that's on there it's\n> extremely rare for one to fail.\n\nnot nessasarily. direct attached doesn't mean in the same chassis, \nexternal drive shelves attached via SCSI are still DAS\n\nyou can even have DAS attached to a pair of machines, with the second box \nconfigured to mount the drives only if the first one dies.\n\nDavid Lang\n", "msg_date": "Tue, 11 Sep 2007 16:51:40 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "Yeah, the DAS we are considering is Dell MD3000, it has redundant hot\nswappable raid controllers in active-active mode. Provision for hot spare\nhard-disk. And it can take upto 15 disks in 3U, you can attach two more\nMD1000 to it, giving a total of 45 disks in total.\n\n-- Harsh\n\nOn 9/12/07, [email protected] <[email protected]> wrote:\n>\n> On Tue, 11 Sep 2007, Decibel! 
wrote:\n>\n> > On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:\n> >> On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:\n> >>> Also, to reply to someone else's email... there is one big reason to\n> use\n> >>> a SAN over direct storage: you can do HA that results in 0 data loss.\n> >>> Good SANs are engineered to be highly redundant, with multiple\n> >>> controllers, PSUs, etc, so that the odds of losing the SAN itself are\n> >>> very, very low. The same isn't true with DAS.\n> >>\n> >> You can get DAS arrays with multiple controllers, PSUs, etc. DAS !=\n> >> single disk.\n> >\n> > It's still in the same chassis, though, which means if you lose memory\n> > or mobo you're still screwed. In a SAN setup for redundancy, there's\n> > very little in the way of a single point of failure; generally only the\n> > backplane, and because there's very little that's on there it's\n> > extremely rare for one to fail.\n>\n> not nessasarily. direct attached doesn't mean in the same chassis,\n> external drive shelves attached via SCSI are still DAS\n>\n> you can even have DAS attached to a pair of machines, with the second box\n> configured to mount the drives only if the first one dies.\n>\n> David Lang\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n>\n\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nYeah, the DAS we are considering is Dell MD3000, it has redundant hot swappable raid controllers in active-active mode. Provision for hot spare hard-disk. And it can take upto 15 disks in 3U, you can attach two more MD1000 to it, giving a total of 45 disks in total.\n-- HarshOn 9/12/07, [email protected] <[email protected]> wrote:\nOn Tue, 11 Sep 2007, Decibel! wrote:> On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:>> On Tue, Sep 11, 2007 at 03:55:51PM -0500, Decibel! wrote:>>> Also, to reply to someone else's email... there is one big reason to use\n>>> a SAN over direct storage: you can do HA that results in 0 data loss.>>> Good SANs are engineered to be highly redundant, with multiple>>> controllers, PSUs, etc, so that the odds of losing the SAN itself are\n>>> very, very low. The same isn't true with DAS.>>>> You can get DAS arrays with multiple controllers, PSUs, etc.  DAS !=>> single disk.>> It's still in the same chassis, though, which means if you lose memory\n> or mobo you're still screwed. In a SAN setup for redundancy, there's> very little in the way of a single point of failure; generally only the> backplane, and because there's very little that's on there it's\n> extremely rare for one to fail.not nessasarily. direct attached doesn't mean in the same chassis,external drive shelves attached via SCSI are still DASyou can even have DAS attached to a pair of machines, with the second box\nconfigured to mount the drives only if the first one dies.David Lang---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster\n-- Harsh [email protected]", "msg_date": "Wed, 12 Sep 2007 11:14:27 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "[Decibel! - Tue at 06:07:44PM -0500]\n> It's still in the same chassis, though, which means if you lose memory\n> or mobo you're still screwed. 
In a SAN setup for redundancy, there's\n> very little in the way of a single point of failure; generally only the\n> backplane, and because there's very little that's on there it's\n> extremely rare for one to fail.\n\nFunny, the only time we lost a database server was due to a backplane\nfailure ...\n", "msg_date": "Wed, 12 Sep 2007 07:50:58 +0200", "msg_from": "Tobias Brox <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" }, { "msg_contents": "On Tue, Sep 11, 2007 at 06:07:44PM -0500, Decibel! wrote:\n>On Tue, Sep 11, 2007 at 05:09:00PM -0400, Michael Stone wrote:\n>> You can get DAS arrays with multiple controllers, PSUs, etc. DAS != \n>> single disk.\n>\n>It's still in the same chassis, though,\n\nI think you're confusing DAS and internal storage.\n\nMike Stone\n", "msg_date": "Thu, 13 Sep 2007 14:27:09 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: SAN vs Internal Disks" } ]
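A practical footnote to the benchmarking advice in this thread: before committing to any SAN (or external DAS array), it is worth measuring raw sequential throughput yourself rather than relying on the vendor's figures. The sketch below is only one minimal way to run the "simple dd test" mentioned above from a shell; the mount point, file size and block size are placeholder assumptions and should be adapted to the hardware being evaluated. The test file should be roughly twice the machine's RAM so the OS page cache cannot hide the disks, and the result only reflects sequential throughput -- it says nothing about random I/O or concurrent load.

    #!/bin/sh
    # Minimal sequential-throughput check of a candidate volume ("simple dd test").
    # TESTDIR and SIZE_MB are assumptions -- point them at the array under test.
    TESTDIR=/mnt/san_test        # hypothetical mount point on the volume being evaluated
    SIZE_MB=65536                # ~64 GB, i.e. about 2x RAM for a 32 GB machine

    # Sequential write; bs=8k matches PostgreSQL's block size, and conv=fdatasync
    # forces the data to disk before dd reports a transfer rate.
    dd if=/dev/zero of=$TESTDIR/ddtest bs=8k count=$((SIZE_MB * 128)) conv=fdatasync

    # Sequential read of the same file.
    dd if=$TESTDIR/ddtest of=/dev/null bs=8k

    rm -f $TESTDIR/ddtest

If the numbers come out far below what the attached disks should deliver -- for instance, pinned near the ~100-125 MB/s ceiling of a single Gig/E iSCSI link discussed earlier -- that is the time to question the interconnect or controller configuration, not after the database is in production.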
[ { "msg_contents": "Hello All,\n\nI am using a Postgres setup on Windows. It works fine using the ODBC \ndriver, but not with the ADO PostgreSQL OLE DB Provider.\n\nThe test connection gives the error 'Test connection failed because of an error in \ninitializing provider. Unspecified error'.\n\nCan I get help with this?\n\nThanks and Regards\nJayaram\n\n", "msg_date": "Thu, 06 Sep 2007 18:45:29 +0530", "msg_from": "\"Jayaram Bhat\" <[email protected]>", "msg_from_op": true, "msg_subject": "ADO -PostgreSQL OLE DB Provider" }, { "msg_contents": "--- Jayaram Bhat <[email protected]> wrote:\n> I am using a Postgres setup on Windows. It works fine using the ODBC \n> driver, but not with the ADO PostgreSQL OLE DB Provider.\n> \n> The test connection gives the error 'Test connection failed because of an error in \n> initializing provider. Unspecified error'.\n> \n> Can I get help with this?\n\nThis is my understanding regarding the OLE DB driver for PostgreSQL:\n\n(1) The OLE DB Provider lacks the robustness that the ODBC driver has.\n(2) The OLE DB Provider lacks much of the functionality that an OLE DB provider should have. \n(3) This probably will not change soon, since the OLE DB driver is not actively supported.\n\nRegards,\nRichard Broersma Jr.\n", "msg_date": "Thu, 6 Sep 2007 09:08:29 -0700 (PDT)", "msg_from": "Richard Broersma Jr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: ADO -PostgreSQL OLE DB Provider" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nWillo van der Merwe wrote:\n> Jean-David Beyer wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>>\n>> Willo van der Merwe wrote:\n>>\n>>> Richard Huxton wrote:\n>>>\n>>>> Willo van der Merwe wrote:\n>>>>\n>>>>> Hi guys,\n>>>>>\n>>>>> I'm have the rare opportunity to spec the hardware for a new database\n>>>>> server. It's going to replace an older one, driving a social\n>>>>> networking web application. The current server (a quad opteron with\n>>>>> 4Gb of RAM and 80Gb fast SCSI RAID10) is coping with an average load\n>>>>> of ranging between 1.5 and 3.5.\n>>>>>\n>>>>> The new machine spec I have so far:\n>>>>>\n>>>> What's the limiting factor on your current machine - disk, memory,\n>>>> cpup?\n>>>>\n>>> I'm a bit embarrassed to admit that I'm not sure. The reason we're\n>>> changing machines is that we might be changing ISPs and we're renting\n>>> / leasing the machines from the ISP.\n>>>\n>>>\n>> Before you get rid of the current ISP, better examine what is going on\n>> with\n>> the present setup. It would be good to know if you are memory,\n>> processor, or\n>> IO limited. That way you could increase what needs to be increased,\n>> and not\n>> waste money where the bottleneck is not.\n>>\n> Good advice. After running a vmstat and iostat, it is clear, to my mind\n> anyway, that the most likely bottleneck is IO, next is probably some\n> more RAM.\n> Here's the output:\n> procs -----------memory---------- ---swap-- -----io---- --system--\n> ----cpu----\n> r b swpd free buff cache si so bi bo in cs us sy\n> id wa\n> 0 0 29688 80908 128308 3315792 0 0 8 63 6 8 17 2\n> 80 1\n>\n>\n> avg-cpu: %user %nice %sys %iowait %idle\n> 17.18 0.00 1.93 0.81 80.08\n>\n> Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\n> sda 14.57 66.48 506.45 58557617 446072213\n> sda1 0.60 0.27 4.70 235122 4136128\n> sda2 0.38 0.77 2.27 678754 2002576\n> sda3 2.37 0.49 18.61 429171 16389960\n> sda4 0.00 0.00 0.00 2 0\n> sda5 0.71 0.66 5.46 578307 4807087\n> sda6 0.03 0.01 0.24 6300 214196\n> sda7 0.02 0.00 0.19 2622 165992\n> sda8 60.19 64.29 474.98 56626211 418356226\n>\n>\n1.) If this is when the system is heavily loaded, you have more capacity\nthan you need, on the average. Processors are idle, not in wait state. You\nhave enough memory (no swapping going on), disks not too busy (probably).\nBut if it is not heavily loaded, run these when it is.\n\n2.) Did you let vmstat and iostat run just once, or are these the last of\nseveral reports. Because these tell the average since boot for the first\nreport of each when they run the first time. If so, the numbers may not mean\nmuch.\n\nHere is iostat for my machine that is running right now. Only lines involved\nin postgreSQL are shown:\n\nDevice: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn\nsda8 409.33 0.67 3274.00 40 196440\nsdb7 0.12 0.00 0.93 0 56\nsdc1 44.73 0.13 357.73 8 21464\nsdd1 23.43 0.00 187.47 0 11248\nsde1 133.45 0.00 1067.60 0 64056\nsdf1 78.25 0.00 626.00 0 37560\n\nOn sda8 is the Write-Ahead-Log.\non sdb7 are some small seldom-used relations, but also the input files that\nsda and sdb are what my system uses for other stuff as well, though sda is\nnot too heavily used and sdb even less.\nI am presently loading into the database. sdc1, sdd1, sde1, and sdf1 are the\n drives reserved for database only.\n\nI would suggest getting at least two hard drives on the new system and\nperhaps putting the WAL on one and the rest on the other. 
My sda and sdb\ndrives are around 72 GBytes and the sdc, sdd, sde, and sdf are about 18\nGBytes. I believe the more spindles the better (within reason) and dividing\nthe stuff up with the indices separate from the associated data to reduce\nseeking.\n\n$ vmstat 30\nprocs -----------memory---------- ---swap-- -----io---- --system--\n- -----cpu------\n r b swpd free buff cache si so bi bo in cs us sy\nid wa\n 5 1 1340 248456 248496 6652672 0 0 28 149 2 1 94 2\n 3 0\n 4 2 1340 244240 248948 6656364 0 0 0 2670 1248 11320 80 4\n14 2\n 4 1 1340 241008 249492 6659864 0 0 0 2701 1222 11432 82 4\n12 2\n 5 0 1340 246268 249868 6653644 0 0 0 2799 1223 11412 83 4\n12 2\n\nMy machine was idle most of the time since boot (other than running BOINC\nstuff), but is busy loading data into the database at the moment. See how\nthe first line differs from the others? I have 8GBytes RAM on this 32-bit\nmachine running Red Hat Enterprise Linux 5.\n\n\n\n- --\n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 10:55:01 up 28 days, 14:17, 4 users, load average: 5.36, 5.64, 5.55\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFG4BYFPtu2XpovyZoRArU9AJ9o3OvYNxmQVhINTcRCADy0/fv30wCfZ3oJ\nWItsCN75Xxhv52AqF6AIXmk=\n=3AfU\n-----END PGP SIGNATURE-----\n", "msg_date": "Thu, 06 Sep 2007 11:00:21 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Hardware spec]" } ]
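Following up on the point that the first vmstat/iostat report only shows averages since boot: when trying to decide whether a box is I/O-, memory- or CPU-bound, it helps to let both tools run for a while during the busiest period and throw away the first report from each. A rough sketch of doing that from a shell -- the interval, sample count and output path are arbitrary placeholder choices, not anything prescribed in this thread:

    #!/bin/sh
    # Collect 30 minutes of 30-second samples while the server is under peak load.
    # OUT is a hypothetical output directory; adjust interval/count to taste.
    OUT=/tmp/perfstats
    mkdir -p $OUT

    # The first report from each tool is the average since boot -- ignore it when
    # reading the logs.
    vmstat 30 60     > $OUT/vmstat.log &
    iostat -x 30 60  > $OUT/iostat.log &
    wait

Reading the results is then straightforward: sustained high %iowait with otherwise idle CPUs points at the disks, non-zero si/so columns in vmstat point at swapping (too little RAM), and busy CPUs with little iowait point at the processors. That makes it much easier to decide where the money for the new machine is best spent.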
[ { "msg_contents": "Hi All,\n\nI've recently run into problems with my kernel complaining that I ran \nout of memory, thus killing off postgres and bringing my app to a \ngrinding halt.\n\nI'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux. \nNaturally, I have to set my shmmax to 2GB because the kernel can't \nsupport more (well, I could set it to 3GB, but I use 2GB for safety).\n\nShared_buffers is 200000 and max_connections is 600.\n\nHere is a snippet of my log output (I can give more if necessary):\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill \nprocess 11696 (postgres) score 1181671 and children.\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill \nprocess 11696 (postgres) score 1181671 and children.\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of memory: Killed \nprocess 11704 (postgres).\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of memory: Killed \nprocess 11704 (postgres).\n[...]\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com postgres[11696]: [6-1] \n2007-09-05 18:38:57.626 PDT [user=,db= PID:11696 XID:]LOG: \nbackground writer process (PID 11704) was terminated by signal 9\nSep 5 18:38:57 tii-db2.oaktown.iparadigms.com postgres[11696]: [7-1] \n2007-09-05 18:38:57.626 PDT [user=,db= PID:11696 XID:]LOG: \nterminating any other active server processes\n\nMy understanding is that if any one postgres process's memory usage, \nplus the shared memory, exceeds the kernel limit of 4GB, then the \nkernel will kill the process off. Is this true? If so, would \npostgres have some prevention mechanism that would keep a particular \nprocess from getting too big? (Maybe I'm being too idealistic, or I \njust simply don't understand how postgres works under the hood)\n\n--Richard\n", "msg_date": "Thu, 6 Sep 2007 09:06:53 -0700", "msg_from": "Richard Yen <[email protected]>", "msg_from_op": true, "msg_subject": "postgres memory management issues?" }, { "msg_contents": "Richard Yen wrote:\n> Hi All,\n> \n> I've recently run into problems with my kernel complaining that I ran \n> out of memory, thus killing off postgres and bringing my app to a \n> grinding halt.\n> \n> I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux. \n> Naturally, I have to set my shmmax to 2GB because the kernel can't \n> support more (well, I could set it to 3GB, but I use 2GB for safety).\n> \n> Shared_buffers is 200000 and max_connections is 600.\n\nOK, that's ~ 1.6GB shared-memory\n\n> Here is a snippet of my log output (I can give more if necessary):\n> Sep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill \n> process 11696 (postgres) score 1181671 and children.\n\nOK, you've run out of memory at some point.\n\n> My understanding is that if any one postgres process's memory usage, \n> plus the shared memory, exceeds the kernel limit of 4GB, then the kernel \n> will kill the process off. Is this true? If so, would postgres have \n> some prevention mechanism that would keep a particular process from \n> getting too big? (Maybe I'm being too idealistic, or I just simply \n> don't understand how postgres works under the hood)\n\nYou've got max_connections of 600 and you think individual backends are \nusing more than 2.4GB RAM each? Long before that you'll run out of \nactual RAM+Swap. If you actually had 600 backends you'd be able to \nallocate ~24MB to each. You'd actually want much less, to allow for \ndisk-cache in the OS.\n\nThe important information missing is:\n1. How much memory is in use, and by what (vmstat/top output)\n2. 
What memory settings do you have in your postgresql.conf (work_mem, \nmaintenance_work_mem)\n3. What was happening at the time (how many connections etc)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 07 Sep 2007 09:42:59 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues?" }, { "msg_contents": "> I've recently run into problems with my kernel complaining that I ran\n> out of memory, thus killing off postgres and bringing my app to a\n> grinding halt.\n>\n> I'm on a 32-bit architecture with 16GB of RAM, under Gentoo Linux.\n> Naturally, I have to set my shmmax to 2GB because the kernel can't\n> support more (well, I could set it to 3GB, but I use 2GB for safety).\n\nWouldn't it make sense to install an amd64 version with so much RAM?\n\n-- \nregards\nClaus\n\nWhen lenity and cruelty play for a kingdom,\nthe gentlest gamester is the soonest winner.\n\nShakespeare\n", "msg_date": "Fri, 7 Sep 2007 11:01:47 +0200", "msg_from": "\"Claus Guttesen\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues?" }, { "msg_contents": "\"Richard Yen\" <[email protected]> writes:\n\n> My understanding is that if any one postgres process's memory usage, plus the\n> shared memory, exceeds the kernel limit of 4GB, then the kernel will kill the\n> process off. Is this true? If so, would postgres have some prevention\n> mechanism that would keep a particular process from getting too big? (Maybe\n> I'm being too idealistic, or I just simply don't understand how postgres works\n> under the hood)\n\nI don't think you have an individual process going over 4G. \n\nI think what you have is 600 processes which in aggregate are using more\nmemory than you have available. Do you really need 600 processes by the way?\n\nYou could try lowering work_mem but actually your value seems fairly\nreasonable. Perhaps your kernel isn't actually able to use 16GB? What does cat\n/proc/meminfo say? What does it say when this is happening?\n\nYou might also tweak /proc/sys/vm/overcommit_memory but I don't remember what\nthe values are, you can search to find them.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 07 Sep 2007 10:31:34 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues?" }, { "msg_contents": "* Gregory Stark:\n\n> You might also tweak /proc/sys/vm/overcommit_memory but I don't remember what\n> the values are, you can search to find them.\n\n\"2\" is the interesting value, it turns off overcommit.\n\nHowever, if you're tight on memory, this will only increase your\nproblems because the system fails sooner. The main difference is that\nit's much more deterministic: malloc fails in a predictable manner and\nthis situation can be handled gracefully (at least by some processes);\nno processes are killed.\n\nWe use this setting on all of our database server, just in case\nsomeone performs a huge SELECT locally, resulting in a a client\nprocess sucking up all available memory. With vm.overcommit_memory=2,\nmemory allocation in the client process will fail. 
Without it,\ntypically the postgres process feeding it is killed by the kernel.\n\n-- \nFlorian Weimer <[email protected]>\nBFK edv-consulting GmbH http://www.bfk.de/\nKriegsstraße 100 tel: +49-721-96201-1\nD-76133 Karlsruhe fax: +49-721-96201-99\n", "msg_date": "Fri, 07 Sep 2007 11:38:25 +0200", "msg_from": "Florian Weimer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues?" }, { "msg_contents": "On Thu, Sep 06, 2007 at 09:06:53AM -0700, Richard Yen wrote:\n>My understanding is that if any one postgres process's memory usage, \n>plus the shared memory, exceeds the kernel limit of 4GB,\n\nOn a 32 bit system the per-process memory limit is a lot lower than 4G. \nIf you want to use 16G effectively it's going to be a lot easier to \nsimply use a 64bit system. That said, it's more likely that you've got a \nnumber of processes using an aggregate of more than 16G than that you're \nexceeding the limit per process. (Hitting the per-process limit should \nresult in a memory allocation failure rather than an out of memory \ncondition.)\n\nMike Stone\n", "msg_date": "Fri, 07 Sep 2007 06:11:21 -0400", "msg_from": "Michael Stone <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues?" }, { "msg_contents": "Richard Yen <[email protected]> writes:\n> Here is a snippet of my log output (I can give more if necessary):\n> Sep 5 18:38:57 tii-db2.oaktown.iparadigms.com Out of Memory: Kill \n> process 11696 (postgres) score 1181671 and children.\n\n> My understanding is that if any one postgres process's memory usage, \n> plus the shared memory, exceeds the kernel limit of 4GB, then the \n> kernel will kill the process off. Is this true?\n\nNo. The OOM killer is not about individual process size. It's about\ngetting the kernel out of the corner it's backed itself into when it's\npromised more memory for the total collection of processes than it\ncan actually deliver. As already noted, fooling with the overcommit\nparameter might help, and migrating to a 64-bit kernel might help.\n(32-bit kernels can run out of space for \"lowmem\" long before all of\nyour 16G is used up.)\n\nObDigression: The reason the kernel would do such a silly-sounding thing\nas promise more memory than it has is that in a lot of cases pages are\nshared by more than one process --- in fact, immediately after a fork()\nthe child process shares *all* pages of its parent --- and it would be\nreally restrictive to insist on having sufficient RAM+swap for each\nprocess to have an independent copy of shared pages. The problem is\nthat it's hard to guess whether currently-shared pages will need\nmultiple copies in future. After a fork() the child's pages are\nsupposed to be independent of the parent, so if either one scribbles\non a shared page then the kernel has to instantiate separate copies\nat that moment (google for \"copy on write\" for more about this).\nThe problem is that if there is not enough memory for another copy,\nthere is no clean API for the kernel to return \"out of memory\".\nIt cannot just fail the write instruction, so the only recourse is to\nnuke some process or other to release memory. The whole thing makes\nconsiderable sense until you are trying to run critical applications,\nand then you just wanna turn it off.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 07 Sep 2007 10:56:47 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: postgres memory management issues? " } ]
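Before touching the kernel side (vm.overcommit_memory = 2 as discussed above), it is worth confirming what the server has actually been told it may use; one quick way to do that from psql on 8.2, using only the settings already named in the thread:

    -- The memory-related settings Richard Huxton asked about; 'unit' makes it
    -- easy to translate shared_buffers (8kB pages) into bytes.
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('shared_buffers', 'work_mem', 'maintenance_work_mem',
                   'max_connections', 'max_fsm_pages');

With max_connections at 600, even a modest work_mem multiplies quickly, which fits the rough per-backend arithmetic in Richard Huxton's reply.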
[ { "msg_contents": "Hi,\n I've a question about amount of indices.\n \n I explain my issue based on an example:\n Table which contains person information, one row per person.\n There will be lots of SELECTS doing search by special criteria, \ne.g.: Age, Gender.\n\n Now there will be 4 User groups which will select on the table:\n Group 1) Always doing reads on specific continents.\n Group 2) Always doing reads in specific country.\n Group 3) Always doing reads in specific region within a country.\n Group 4) Always doing reads in specific city.\n\n I 'm indexing the the important attributes. Would be about 5 to 6 \nindependent indexes.\n As there will be millions of rows, quite a lot of hits will be \nreturned, I guess\n it will generate big bitmaps to calculate the intersection of the \nindices.\n\n Ok to prevent this from happening I'd wanted to create 4 Indexes per \nattribute, with\n special predicate, so users which only query for a country don't \nscan an index\n which indexed the entire globe:\n\n e.g ..\n CREATE index BlaBla_city on table tblusers(dtage) WHERE dtcity='London';\n CREATE index BlaBla_country on table tblusers(dtage) WHERE \ndtcountry='uk';\n CREATE index BlaBla_continent on table tblusers(dtage) WHERE \ndtcontinent='europe';\n etc.\n\n SELECT * FROM tblusers WHERE dtcontinent='europe' and age='23'\n would then postgres lead to use the special index made for europe.\n\n Now that I've 4 Indexes. an Insert or update will lead to some more \noverhead, but which would be ok.\n\n My Question now is: Is it wise to do so, and create hundreds or \nmaybe thousands of Indices\n which partition the table for the selections.\n\n Does postgres scale good on the selecton of indices or is the \npredicate for indices not\n layed out for such a usage?\n\n (PS: Don't want partition with method postgres offers..)\n\nthanks in advance,\npatric\n\n\n \n", "msg_date": "Thu, 06 Sep 2007 22:08:45 +0200", "msg_from": "Patric <[email protected]>", "msg_from_op": true, "msg_subject": "Reasonable amount of indices" }, { "msg_contents": "Patric wrote:\n\n> My Question now is: Is it wise to do so, and create hundreds or maybe \n> thousands of Indices\n> which partition the table for the selections.\n\nNo, this is not helpful -- basically what you are doing is taking the\nfirst level (the first couple of levels maybe) of the index out of it,\nand charging the planner with the selection of which one is the best for\neach query.\n\nOf course, I am assuming you were simplifing your queries and don't\nactually store the name of the continent etc on each on every row.\nBecause if you actually do that, then there's your first oportunity for\nactual optimization (which happens to be a more normalized model), far\nmore effective than the partial indexes you are suggesting.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Thu, 6 Sep 2007 18:09:40 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reasonable amount of indices" }, { "msg_contents": "Alvaro Herrera <[email protected]> writes:\n> Patric wrote:\n>> My Question now is: Is it wise to do so, and create hundreds or maybe \n>> thousands of Indices\n>> which partition the table for the selections.\n\n> No, this is not helpful -- basically what you are doing is taking the\n> first level (the first couple of levels maybe) of the index out of it,\n\nRight --- you'd be *far* better off using a small number of multicolumn\nindexes. 
I wouldn't want to bet that the planner code scales\neffectively to thousands of indexes, and even if it does, you're\nthrowing away any chance of using parameterized queries.\n\nThe update overhead is unpleasant to contemplate as well (or is this a\nread-only table?)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2007 19:18:46 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Reasonable amount of indices " } ]
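A small sketch of the multicolumn alternative suggested above, reusing the column names from the original example (dtcity, dtcountry, dtcontinent, dtage); the exact set of indexes is an assumption, and a normalized schema with location keys, as Alvaro suggests, would be better still:

    -- Three ordinary multicolumn indexes in place of thousands of partial ones:
    CREATE INDEX tblusers_city_age      ON tblusers (dtcity, dtage);
    CREATE INDEX tblusers_country_age   ON tblusers (dtcountry, dtage);
    CREATE INDEX tblusers_continent_age ON tblusers (dtcontinent, dtage);

    -- A query restricted to a continent and an age can use the third index,
    -- and it still works as a parameterized statement:
    SELECT * FROM tblusers WHERE dtcontinent = 'europe' AND dtage = 23;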
[ { "msg_contents": "Exactly when does the planner decide that a left-anchored like can use the \nindex?\n\nI have replaced a WHEN lower(last_name) = 'smith'\nwith WHEN lower(last_name) like 'smith%'\n\nThere is an index on lower(last_name). I have seen the planner convert the \nLIKE to lower(last_name) >= 'smith' and lower(last_name) < 'smiti' on 8.2.4 \nsystems, but a slow sequence scan and filter on 8.1.9 - is this related to \nthe version difference (8.1.9 vs 8.2.4) or is this related to something like \noperators/classes that have been installed?\n\nCarlo \n\n", "msg_date": "Thu, 6 Sep 2007 20:06:44 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "How planner decides left-anchored LIKE can use index" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> There is an index on lower(last_name). I have seen the planner convert the \n> LIKE to lower(last_name) >= 'smith' and lower(last_name) < 'smiti' on 8.2.4 \n> systems, but a slow sequence scan and filter on 8.1.9 - is this related to \n> the version difference (8.1.9 vs 8.2.4) or is this related to something like \n> operators/classes that have been installed?\n\nMost likely you used C locale for the 8.2.4 installation and some other\nlocale for the other one.\n\nIn non-C locale you can still get the optimization if you create an\nindex using the text_pattern_ops opclass ... but beware that this index\nis useless for the normal locale-aware operators.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 06 Sep 2007 20:28:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: How planner decides left-anchored LIKE can use index " } ]
[ { "msg_contents": "Hello.\nWe have made some performance tests with DRBD and Postgresql 8.2.3. We\nhave two identical servers in a cluster (Dell 2950) with a partition of\n100 GB managed by DRBD: once we checked Postgres keeping his data folder\nin a local partition, the second time we moved the data folder in the\nshared partition. The two servers are connected point to point using a\ncross cable to reduce their latency.\nThe partition is mounted with the option noatime in order to not update\nthe inode access time in case of read access.\nWe used pgbench for the testings, creating a dabase of about 3GB with a\nscale of 200. After we perfomed 10 tests for each configuration,\nsimulating the usage of 100 clients with 500 transactions each.\n\nDRBD configuration:\n--------------------------------------------------------------------------------------------\nresource drbd0 {\n\n protocol C;\n incon-degr-cmd \"halt -f\";\n\n on db-node1 {\n device /dev/drbd0;\n disk /dev/sda2;\n address 10.0.0.201:7788;\n meta-disk internal;\n }\n\n on db-node2 {\n device /dev/drbd0;\n disk /dev/sda2;\n address 10.0.0.202:7788;\n meta-disk internal;\n }\n syncer {\n rate 700000K;\n }\n}\n--------------------------------------------------------------------------------------------\n\nPgbench\n\n--------------------------------------------------------------------------------------------\npgbench -i pgbench -s 200\npgbench -c 100 -t 500 pgbench\n--------------------------------------------------------------------------------------------\n\nThe results were that the TPS (transaction per second) with Postgres\nrunning in the local partition is almost double than the one with the DRDB:\n\nPostgres in shared DRBD partition: 60.863324 TPS\nPostgres in local partition: 122.016138 TPS\n\nObviously, working with the database in DRBD, we had two writes instead\nof only one but we are a bit disappointed about the low results. We\nwould like to know if there is any way to improve the performance in\norder to have a 3/4 rate instead of the 1/2 one.\n\nWe would really appreciate it if you could give us some feedback.\n\nThank you in advance,\nMaila Fatticcioni\n\n-- \n______________________________________________________________\n Maila Fatticcioni\n______________________________________________________________\n Mediterranean Broadband Infrastructure s.r.l.\n ITALY\n______________________________________________________________", "msg_date": "Fri, 07 Sep 2007 11:37:40 +0200", "msg_from": "Maila Fatticcioni <[email protected]>", "msg_from_op": true, "msg_subject": "DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Maila Fatticcioni wrote:\n> Hello.\n> We have made some performance tests with DRBD and Postgresql 8.2.3. We\n> have two identical servers in a cluster (Dell 2950) with a partition of\n> 100 GB managed by DRBD: once we checked Postgres keeping his data folder\n> in a local partition, the second time we moved the data folder in the\n> shared partition. The two servers are connected point to point using a\n> cross cable to reduce their latency.\n> The partition is mounted with the option noatime in order to not update\n> the inode access time in case of read access.\n> We used pgbench for the testings, creating a dabase of about 3GB with a\n> scale of 200. 
After we perfomed 10 tests for each configuration,\n> simulating the usage of 100 clients with 500 transactions each.\n> \n> DRBD configuration:\n> --------------------------------------------------------------------------------------------\n> resource drbd0 {\n> \n> protocol C;\n> incon-degr-cmd \"halt -f\";\n> \n> on db-node1 {\n> device /dev/drbd0;\n> disk /dev/sda2;\n> address 10.0.0.201:7788;\n> meta-disk internal;\n> }\n> \n> on db-node2 {\n> device /dev/drbd0;\n> disk /dev/sda2;\n> address 10.0.0.202:7788;\n> meta-disk internal;\n> }\n> syncer {\n> rate 700000K;\n> }\n> }\n> --------------------------------------------------------------------------------------------\n> \n> Pgbench\n> \n> --------------------------------------------------------------------------------------------\n> pgbench -i pgbench -s 200\n> pgbench -c 100 -t 500 pgbench\n> --------------------------------------------------------------------------------------------\n> \n> The results were that the TPS (transaction per second) with Postgres\n> running in the local partition is almost double than the one with the DRDB:\n> \n> Postgres in shared DRBD partition: 60.863324 TPS\n> Postgres in local partition: 122.016138 TPS\n> \n> Obviously, working with the database in DRBD, we had two writes instead\n> of only one but we are a bit disappointed about the low results. We\n> would like to know if there is any way to improve the performance in\n> order to have a 3/4 rate instead of the 1/2 one.\n\nYou seem to be limited by the speed you can fsync the WAL to the DRBD\ndevice. Using a RAID controller with a battery-backed up cache in both\nservers should help, with and without DRBD. You might find that the\ndifference between local and shared partition just gets bigger, but you\nshould get better numbers.\n\nIn 8.3, you could turn synchronous_commit=off, if you can accept the\nloss of recently committed transactions in case of a crash.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 07 Sep 2007 11:12:06 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On 9/7/07, Maila Fatticcioni <[email protected]> wrote:\n> Obviously, working with the database in DRBD, we had two writes instead\n> of only one but we are a bit disappointed about the low results. We\n> would like to know if there is any way to improve the performance in\n> order to have a 3/4 rate instead of the 1/2 one.\n\nHave you considered warm standby PITR? It achieves essentially the\nsame thing with very little overhead on the master. The only downside\nrelative to DRDB is you have to think about the small gap between WAL\nfile rotations. From what I understand, there is some new stuff\n(check out skype skytools) that may help minimize this problem.\n\nmerlin\n", "msg_date": "Fri, 7 Sep 2007 08:56:15 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:\n\n> protocol C;\n\nTry protocol B instead.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 07 Sep 2007 20:00:16 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Thank you very much for your ideas. 
I've tried to change the protocol\nfrom C to B and I got an increase in the number of TPS: 64.555763.\n\nNow I would like to follow the advice of Mr. Bernd Helmle and change the\nvalue of snd-bufsize.\n\nThe servers are cross connected with a common 100 Mbit/sec Ethernet so I\nthink they have a bandwidth around 80 Mbit/sec (even if I haven't yet\ndone any test on it). A rate of 70Mb seems reasonable to me.\n\nThe two servers are in two different racks (next to each other) and they\nhave two power supplies connected to two different sets of UPS.\n\nUnfortunately we cannot accept a loss of recently committed transactions\nso we cannot put the synchronous_commit to off.\n\nRegards,\nMaila Fatticcioni\n\nSimon Riggs wrote:\n> On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:\n> \n>> protocol C;\n> \n> Try protocol B instead.\n> \n\n-- \n______________________________________________________________\n Maila Fatticcioni\n______________________________________________________________\n Mediterranean Broadband Infrastructure s.r.l.\n ITALY\n______________________________________________________________", "msg_date": "Tue, 11 Sep 2007 16:47:40 +0200", "msg_from": "Maila Fatticcioni <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Tue, Sep 11, 2007 at 04:47:40PM +0200, Maila Fatticcioni wrote:\n> The servers are cross connected with a common 100 Mbit/sec Ethernet so I\n> think they have a bandwidth around 80 Mbit/sec (even if I haven't yet\n> done any test on it). A rate of 70Mb seems reasonable to me.\n\nUmm, seriously? Unless that was a typo, you should consider very seriously to\ngo to gigabit; it's cheap these days, and should provide you with a very\ndecent speed boost if the network bandwidth is the bottleneck.\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 11 Sep 2007 16:57:24 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Simon Riggs schrieb:\n> On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:\n> \n>> protocol C;\n> \n> Try protocol B instead.\n> \n\nSure? I've always heard that there has yet to be a case found, where B \nis better than C. We use DRBD with protocol C, and are quite happy with it.\n", "msg_date": "Tue, 11 Sep 2007 20:26:02 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Tue, Sep 11, 2007 at 04:57:24PM +0200, Steinar H. Gunderson wrote:\n> On Tue, Sep 11, 2007 at 04:47:40PM +0200, Maila Fatticcioni wrote:\n> > The servers are cross connected with a common 100 Mbit/sec Ethernet so I\n> > think they have a bandwidth around 80 Mbit/sec (even if I haven't yet\n> > done any test on it). A rate of 70Mb seems reasonable to me.\n> \n> Umm, seriously? Unless that was a typo, you should consider very seriously to\n> go to gigabit; it's cheap these days, and should provide you with a very\n> decent speed boost if the network bandwidth is the bottleneck.\n\nActually, in this case, I suspect that latency will be far more critical\nthan overall bandwidth. 
I don't know if it's inherent to Gig-E, but my\nlimited experience has been that Gig-E has higher latency than 100mb.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 16:22:02 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Hi,\n\nDecibel! wrote:\n> Actually, in this case, I suspect that latency will be far more critical\n> than overall bandwidth. I don't know if it's inherent to Gig-E, but my\n> limited experience has been that Gig-E has higher latency than 100mb.\n\nI've been looking for some benchmarks, but it's rather hard to find. It \nlooks like people are much more concerned about throughput ?!?\n\nHowever, I'd like to share some of the sites I've found, especially \nregarding Fast Ethernet vs. Gigabit Ethernet:\n\n - Ashford Computer Consulting Service benchmarked five different \ngigabit ethernet adapters [1], back in 2004. For most cards they \nmeasured between ca. 100 - 150 microseconds for a UDP round trip of a \ntoken, a so called hot potato benchmark. Unfortunately they didn't \ncompare with Fast Ethernet.\n\n - The NetPIPE project has some of it's measurements at the very bottom \nof it's website [2]. Mostly for high speed and low latency links. Again, \nFast Ethernet is missing. The diagram tells the following latencies (in \nmicroseconds):\n\n 75 10 Gigabit Ethernet\n 62 Gigabit Ethernet\n 8 Myrinet\n 7.5 Infini Band\n 4.7 Atoll\n 4.2 SCI\n\nI've no explanation for the significantly better measure for gigabit \nethernet compared with the above benchmark. From their description I'm \nconcluding that they also measured a round-trip, but not via UDP.\n\nThe bad value for 10 Gigabit Ethernet is due to a poor Intel adapter, \nwhich also has poor throughput. They claim that newer adapters are better.\n\n - Finally, I've found a latency comparison between Fast vs Gigabit \nEthernet, here [3]. Figure 6, in the second third of the page shows a \nNetPIPE latency benchmark between Ethernet, Fast Ethernet and Gigabit \nEthernet (additionally ATM and FDDI). It looks like Gigabit Ethernet \nfeatures slightly better latency.\n\n From these findings I'm concluding, that commodity Ethernet hardware \nhas quite similar latencies, no matter if you are using Fast, Gigabit or \n10 Gigabit Ethernet. If you really want to have a low latency \ninterconnect, you need to pay the extra bucks for specialized, low \nlatency networking hardware (which may still be based on 10GE, see \nMyrinet's 10GE adapter).\n\nIf you know other resources, I'd be curious to know.\n\nRegards\n\nMarkus\n\n[1]: Ashford Computer Consulting Service, GigE benchmarks:\nhttp://www.accs.com/p_and_p/GigaBit/conclusion.html\n\n[2]: NetPIPE website:\nhttp://www.scl.ameslab.gov/netpipe/\n\n[3]: Gigabit Ethernet and Low-Cost Supercomputing\nhttp://www.scl.ameslab.gov/Publications/Gigabit/tr5126.html\n", "msg_date": "Mon, 17 Sep 2007 17:19:31 +0200", "msg_from": "Markus Schiltknecht <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" } ]
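Since the thread converges on WAL fsync latency over DRBD as the limiting factor, a quick inventory of the commit-related settings is a reasonable first step; this is only a starting checklist, and on 8.2 the larger wins remain a battery-backed write cache and, where its weaker durability semantics are acceptable, protocol B as discussed above (synchronous_commit only arrives in 8.3):

    -- Settings that influence how often, and how expensively, the WAL is
    -- flushed, plus checkpoint frequency:
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('fsync', 'wal_sync_method', 'wal_buffers',
                   'commit_delay', 'commit_siblings', 'checkpoint_segments');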
[ { "msg_contents": "--On Freitag, September 07, 2007 20:00:16 +0100 Simon Riggs \n<[email protected]> wrote:\n\n> On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:\n>\n>> protocol C;\n>\n> Try protocol B instead.\n\nBut that would have an impact on transaction safety, wouldn't it? It will \nreturn immediately after reaching the remote buffer cache and you can't be \nsure your data hits the remote disk.\nIt's a while ago i've played with such a setup, but it could be worth to \nplay around with max_buffers, al-extends, snd-bufsize. Oh and i think \nMaila's 'rate' setting is too high: i've found rate settings \ncounterproductive when set too high (try a value slightly above your max \nbandwidth of your connection). But i second Heikki, you should take care on \nyour disk setup as well.\n\n-- \n Thanks\n\n Bernd\n", "msg_date": "Fri, 07 Sep 2007 23:54:12 +0200", "msg_from": "Bernd Helmle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Fri, 2007-09-07 at 23:54 +0200, Bernd Helmle wrote:\n> --On Freitag, September 07, 2007 20:00:16 +0100 Simon Riggs \n> <[email protected]> wrote:\n> \n> > On Fri, 2007-09-07 at 11:37 +0200, Maila Fatticcioni wrote:\n> >\n> >> protocol C;\n> >\n> > Try protocol B instead.\n> \n> But that would have an impact on transaction safety, wouldn't it? It will \n> return immediately after reaching the remote buffer cache and you can't be \n> sure your data hits the remote disk.\n\nYou're right, but the distinction is a small one. What are the chances\nof losing two independent servers within a few milliseconds of each\nother? \n\nIf performance is an issue it is a particularly important distinction.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Sat, 08 Sep 2007 07:28:24 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "\"Simon Riggs\" <[email protected]> writes:\n\n> You're right, but the distinction is a small one. What are the chances\n> of losing two independent servers within a few milliseconds of each\n> other? \n\nIf they're on the same power bus?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 08 Sep 2007 11:04:41 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGregory Stark wrote:\n> \"Simon Riggs\" <[email protected]> writes:\n> \n>> You're right, but the distinction is a small one. What are the chances\n>> of losing two independent servers within a few milliseconds of each\n>> other? \n> \n> If they're on the same power bus?\n\nThat chance is minuscule or at least should be. Of course we are\nassuming some level of conditioned power that is independent of the\npower bus, e.g; a UPS.\n\nJoshua D. Drake\n\n\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. 
===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4sqvATb/zqfZUUQRAq/qAKCkkFX/hTddRJriMGMYhjy04REwvgCfUoY5\npzcyvahVvsaAL8qlkJVtbX0=\n=nzIH\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 08 Sep 2007 09:15:43 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Joshua D. Drake wrote:\n> Gregory Stark wrote:\n>> \"Simon Riggs\" <[email protected]> writes:\n> \n>>> You're right, but the distinction is a small one. What are the chances\n>>> of losing two independent servers within a few milliseconds of each\n>>> other? \n>> If they're on the same power bus?\n> \n> That chance is minuscule or at least should be. Of course we are\n> assuming some level of conditioned power that is independent of the\n> power bus, e.g; a UPS.\n\nhow is that making it different in practise ? - if both are on the same\nUPS they are affectively on the same power bus ...\nIf the UPS fails (or the generator is not kicking in which happens way\nmore often than people would believe) they could still fail at the very\nsame time ....\n\n\nStefan\n", "msg_date": "Sat, 08 Sep 2007 18:33:55 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n> Gregory Stark wrote:\n>> \"Simon Riggs\" <[email protected]> writes:\n>>> You're right, but the distinction is a small one. What are the chances\n>>> of losing two independent servers within a few milliseconds of each\n>>> other? \n>> \n>> If they're on the same power bus?\n\n> That chance is minuscule or at least should be.\n\nIt seems a bit silly to be doing replication to a slave server that has\nany common point of failure with the master.\n\nHowever, it seems like the point here is not so much \"can you recover\nyour data\" as what a commit means. Do you want a commit reported to the\nclient to mean the data is safely down to disk in both places, or only\none?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 08 Sep 2007 12:39:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance? " }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n\n> That chance is minuscule or at least should be. Of course we are\n> assuming some level of conditioned power that is independent of the\n> power bus, e.g; a UPS.\n\nI find your faith in UPSes charmingly quaint.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 08 Sep 2007 17:49:09 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nStefan Kaltenbrunner wrote:\n> Joshua D. Drake wrote:\n>> Gregory Stark wrote:\n>>> \"Simon Riggs\" <[email protected]> writes:\n>>>> You're right, but the distinction is a small one. 
What are the chances\n>>>> of losing two independent servers within a few milliseconds of each\n>>>> other? \n>>> If they're on the same power bus?\n>> That chance is minuscule or at least should be. Of course we are\n>> assuming some level of conditioned power that is independent of the\n>> power bus, e.g; a UPS.\n> \n> how is that making it different in practise ? - if both are on the same\n> UPS they are affectively on the same power bus ...\n\nWell I was thinking the bus that is in the wall. I would assume that\npeople were smart enough to have independent UPS systems for each server.\n\ncity power->line conditioning generator->panel->plug->UPS->server\n\nwash, rinse repeat.\n\n> If the UPS fails (or the generator is not kicking in which happens way\n> more often than people would believe) they could still fail at the very\n> same time ....\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n> \n> \n> Stefan\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4tOKATb/zqfZUUQRAiSTAJ4pqQqsP7aH9GPJYjY3hZDvKzU8cACeKKJ3\nwAae0tl2XswsjgEncIsOBlw=\n=xsGZ\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 08 Sep 2007 09:53:30 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGregory Stark wrote:\n> \"Joshua D. Drake\" <[email protected]> writes:\n> \n>> That chance is minuscule or at least should be. Of course we are\n>> assuming some level of conditioned power that is independent of the\n>> power bus, e.g; a UPS.\n> \n> I find your faith in UPSes charmingly quaint.\n\nIt isn't my faith in a UPS. It is my real world knowledge.\n\nFurther I will exert what I already replied to Stefan:\n\ncity power->line conditioning generator->panel->plug->UPS->server\n\nYou would have to have lightning handed by God to your server to have a\ntotal power failure without proper shutdown in the above scenario.\n\nSincerely,\n\nJoshua D. Drake\n\n\n\n\n\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4tWpATb/zqfZUUQRAl00AJ4jC/CWkqrxeUjT0REedQAG3cvPPgCcCKkU\nzbCu41UT25PnL7f7bT7dfXQ=\n=tV5r\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 08 Sep 2007 10:02:33 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Joshua D. Drake wrote:\n> Stefan Kaltenbrunner wrote:\n>> Joshua D. 
Drake wrote:\n>>> Gregory Stark wrote:\n>>>> \"Simon Riggs\" <[email protected]> writes:\n>>>>> You're right, but the distinction is a small one. What are the chances\n>>>>> of losing two independent servers within a few milliseconds of each\n>>>>> other? \n>>>> If they're on the same power bus?\n>>> That chance is minuscule or at least should be. Of course we are\n>>> assuming some level of conditioned power that is independent of the\n>>> power bus, e.g; a UPS.\n>> how is that making it different in practise ? - if both are on the same\n>> UPS they are affectively on the same power bus ...\n> \n> Well I was thinking the bus that is in the wall. I would assume that\n> people were smart enough to have independent UPS systems for each server.\n> \n> city power->line conditioning generator->panel->plug->UPS->server\n> \n> wash, rinse repeat.\n\nthe typical datacenter version of this is actually more like:\n\ncity power->UPS (with generator in parallel)->panel->plug\n\nor\n\ncity power->flywheel->(maybe UPS)->panel->plug\n\nit is not really that common to have say two different UPS feeds in your\nrack (at least not for normal housing or the average corporate\ndatacenter) - mostly you get two feeds from different power distribution\npanels (so different breakers) but that's about it.\nHaving a local UPS attached is usually not really that helpful either\nbecause those have limited capacity need space and are an additional\nthing that can (and will) fail.\n\n\nStefan\n", "msg_date": "Sat, 08 Sep 2007 19:25:24 +0200", "msg_from": "Stefan Kaltenbrunner <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "\"Joshua D. Drake\" <[email protected]> writes:\n\n> It isn't my faith in a UPS. It is my real world knowledge.\n>\n> Further I will exert what I already replied to Stefan:\n>\n> city power->line conditioning generator->panel->plug->UPS->server\n>\n> You would have to have lightning handed by God to your server to have a\n> total power failure without proper shutdown in the above scenario.\n\nWhich happens a couple times a year to various trusting souls. I suppose\nyou're not a regular reader of Risks? Or a regular user of Livejournal for\nthat matter?\n\nAnalog is hard, let's go shopping.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 08 Sep 2007 18:32:13 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "--On Samstag, September 08, 2007 12:39:37 -0400 Tom Lane \n<[email protected]> wrote:\n\n> However, it seems like the point here is not so much \"can you recover\n> your data\" as what a commit means. Do you want a commit reported to the\n> client to mean the data is safely down to disk in both places, or only\n> one?\n\nYeah, that's what i meant to say. DRBD provides a handful other tweaks \nbesides changing the sync protocol, i'd start with them first. You can get \nback experimenting with the sync protocol if there are still performance \nissues then. I don't hesitate changing to B as long as I'm aware that it \nchanged semantics and I can deal with them.\n\n-- \n Thanks\n\n Bernd\n", "msg_date": "Sat, 08 Sep 2007 20:22:24 +0200", "msg_from": "Bernd Helmle <[email protected]>", "msg_from_op": true, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance? 
" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nStefan Kaltenbrunner wrote:\n> Joshua D. Drake wrote:\n>> Stefan Kaltenbrunner wrote:\n\n>>> how is that making it different in practise ? - if both are on the same\n>>> UPS they are affectively on the same power bus ...\n>> Well I was thinking the bus that is in the wall. I would assume that\n>> people were smart enough to have independent UPS systems for each server.\n>>\n>> city power->line conditioning generator->panel->plug->UPS->server\n>>\n>> wash, rinse repeat.\n> \n> the typical datacenter version of this is actually more like:\n> \n> city power->UPS (with generator in parallel)->panel->plug\n\nRight, this is what we have at our colo except that I add a UPS where\nappropriate in between panel and plug.\n\n> city power->flywheel->(maybe UPS)->panel->plug\n> \n> it is not really that common to have say two different UPS feeds in your\n> rack (at least not for normal housing or the average corporate\n> datacenter) - mostly you get two feeds from different power distribution\n> panels (so different breakers) but that's about it.\n\n> Having a local UPS attached is usually not really that helpful either\n> because those have limited capacity need space and are an additional\n> thing that can (and will) fail.\n\nWe don't have the capacity issue. We use the UPS explicitly for specific\ncases (like the one mentioned at the beginning of the thread). The whole\nidea is to insure clean shutdown in case of \"total\" power failure.\n\n\nSincerely,\n\nJoshua D. Drake\n> \n> \n> Stefan\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4xM5ATb/zqfZUUQRAszhAJ4qmwJQFHd/O5/alOSg1exrYEDe0wCeN6na\n8BgWWO1aGELPOuX3xivEBVU=\n=ETwV\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 08 Sep 2007 14:25:13 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nJoshua D. Drake wrote:\n> Stefan Kaltenbrunner wrote:\n>> Joshua D. Drake wrote:\n>>> Stefan Kaltenbrunner wrote:\n> \n>>>> how is that making it different in practise ? - if both are on the same\n>>>> UPS they are affectively on the same power bus ...\n>>> Well I was thinking the bus that is in the wall. I would assume that\n>>> people were smart enough to have independent UPS systems for each server.\n>>>\n>>> city power->line conditioning generator->panel->plug->UPS->server\n>>>\n>>> wash, rinse repeat.\n>> the typical datacenter version of this is actually more like:\n> \n>> city power->UPS (with generator in parallel)->panel->plug\n> \n> Right, this is what we have at our colo except that I add a UPS where\n> appropriate in between panel and plug.\n\nErr plug and machine.\n\nJoshua D. 
Drake\n- ---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG4xSTATb/zqfZUUQRAkgyAJwKLLz0Jywex/b3d6hk8L2gHsZaXQCfYCyH\n6Z/mdtOvnXc4MixgxchrxNY=\n=kv8v\n-----END PGP SIGNATURE-----\n", "msg_date": "Sat, 08 Sep 2007 14:30:59 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Sat, 8 Sep 2007, Joshua D. Drake wrote:\n\n> You would have to have lightning handed by God to your server to have a \n> total power failure without proper shutdown in the above scenario.\n\nDo you live somewhere without thunderstorms? This is a regular event in \nthis part of the world during the summer. It happened to me once this \nyear and once last; lost count for previous ones. In both of the recent \ncases it's believed the servers were burned from the Ethernet side because \nsomewhere in the network was a poor switch that wasn't isolated well \nenough from the grid when the building was hit. Lightning is tricky that \nway; cable TV and satellite wiring are also weak links that way.\n\nI didn't feel too bad about last year's because the building next door \nburned to the ground after being hit within a few seconds of mine, so the \nfact that I had some server repair didn't seem like too much of a hardship \nin comparison. The system was down until the fire department had put out \nthe blaze and I was allowed back into the building though. Good thing my \ncell phone works in the server room; that phone call asking \"are you aware \nthe building is being evacuated?\" is always a fun one.\n\nI'm not saying God hates you as much as me, because the fact that you've \nmade this statement says it's clearly not the case, just that a plan \npresuming he's your buddy is a bad one. The way you're characterizing \nthis risk reminds me of when people quote the odds that they'll lose two \ndisks within seconds of one another in a disk array as if it's impossible, \nsomething else which of course has also happened to me many times during \nmy career.\n\n--\n* Greg \"Lucky\" Smith [email protected] Baltimore, MD\n", "msg_date": "Mon, 10 Sep 2007 00:06:40 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Greg Smith <[email protected]> writes:\n> On Sat, 8 Sep 2007, Joshua D. Drake wrote:\n>> You would have to have lightning handed by God to your server to have a \n>> total power failure without proper shutdown in the above scenario.\n\n> Do you live somewhere without thunderstorms? This is a regular event in \n> this part of the world during the summer. It happened to me once this \n> year and once last; lost count for previous ones. 
In both of the recent \n> cases it's believed the servers were burned from the Ethernet side because \n> somewhere in the network was a poor switch that wasn't isolated well \n> enough from the grid when the building was hit. Lightning is tricky that \n> way; cable TV and satellite wiring are also weak links that way.\n\nYeah. I've lost half a dozen modems of varying generations, a server\nmotherboard, a TiVo, a couple of VCRs, and miscellaneous other equipment\nfrom strikes near my house --- none closer than a couple blocks away.\nI don't really care to think about what would still work after a direct\nhit, despite the whole-house surge suppressor at the meter and the local\nsuppressor on each circuit and the allegedly surge-proof UPSes powering\nall the valuable stuff. I've also moved heavily into wireless local\nnet to eliminate any direct electrical connections between machines that\nare not on the same power circuit (the aforesaid burned motherboard\ntaught me that particular lesson). And yet I still fear every time a\nthunderstorm passes over.\n\nThen of course there are the *other* risks, such as the place burning to\nthe ground, or getting drowned by a break in the city reservoir that's\na couple hundred yards up the hill (but at least I needn't worry about\nany nearby rivers rising, as I'm well above them). Or maybe being\nburgled by Oracle employees who are specifically after my backup tapes.\n\nIf you ain't got a backup plan, you *will* lose data. Imagining that\nthere is one perfect technological solution to this problem is the very\nfastest way to lose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 10 Sep 2007 00:54:37 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance? " }, { "msg_contents": "On Mon, Sep 10, 2007 at 12:06:40AM -0400, Greg Smith wrote:\n> On Sat, 8 Sep 2007, Joshua D. Drake wrote:\n> \n> >You would have to have lightning handed by God to your server to have a \n> >total power failure without proper shutdown in the above scenario.\n> \n> Do you live somewhere without thunderstorms? This is a regular event in \n\nActually, he does. :) Or at least I don't think Portland gets a lot of\nt-storms, just rain by the bucketful.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 16:12:33 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "On Mon, Sep 10, 2007 at 12:54:37AM -0400, Tom Lane wrote:\n> Greg Smith <[email protected]> writes:\n> > On Sat, 8 Sep 2007, Joshua D. Drake wrote:\n> >> You would have to have lightning handed by God to your server to have a \n> >> total power failure without proper shutdown in the above scenario.\n> \n> > Do you live somewhere without thunderstorms? This is a regular event in \n> > this part of the world during the summer. It happened to me once this \n> > year and once last; lost count for previous ones. In both of the recent \n> > cases it's believed the servers were burned from the Ethernet side because \n> > somewhere in the network was a poor switch that wasn't isolated well \n> > enough from the grid when the building was hit. Lightning is tricky that \n> > way; cable TV and satellite wiring are also weak links that way.\n> \n> Yeah. 
I've lost half a dozen modems of varying generations, a server\n> motherboard, a TiVo, a couple of VCRs, and miscellaneous other equipment\n> from strikes near my house --- none closer than a couple blocks away.\n> I don't really care to think about what would still work after a direct\n> hit, despite the whole-house surge suppressor at the meter and the local\n> suppressor on each circuit and the allegedly surge-proof UPSes powering\n> all the valuable stuff. I've also moved heavily into wireless local\n\n<dons EE hat>\nPretty much every surge supressor out there is a POS... 99.9% of them\njust wire a varistor across the line; like a $0.02 part is going to stop\na 10,00+ amp discharge.\n\nThe only use I have for those things is if they come with an equipment\nguarantee, though I have to wonder how much those are still honored,\nsince as you mention it's very easy for equipment to be fried via other\nmeans (ethernet, monitor, etc).\n\n> net to eliminate any direct electrical connections between machines that\n> are not on the same power circuit (the aforesaid burned motherboard\n> taught me that particular lesson). And yet I still fear every time a\n> thunderstorm passes over.\n\nWired is safe as long as everything's on the same circuit. My house is\nwired for ethernet with a single switch running what's going to every\nroom, but in each room I have a second switch on the same power as\nwhatever's in that room; so if there is a strike it's far more likely\nthat I'll lose switches and not hardware.\n\n> Then of course there are the *other* risks, such as the place burning to\n> the ground, or getting drowned by a break in the city reservoir that's\n> a couple hundred yards up the hill (but at least I needn't worry about\n\nInvest in sponges. Lots of them. :)\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 16:20:15 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" }, { "msg_contents": "Decibel! wrote:\n\n> <dons EE hat>\n> Pretty much every surge supressor out there is a POS... 99.9% of them\n> just wire a varistor across the line; like a $0.02 part is going to stop\n> a 10,00+ amp discharge.\n> \n> The only use I have for those things is if they come with an equipment\n> guarantee, though I have to wonder how much those are still honored,\n> since as you mention it's very easy for equipment to be fried via other\n> means (ethernet, monitor, etc).\n\nMy UPSs, from American Power Conversion, have one of those impressive\nguarantees. It specifies that all connections to my computer must be\nprotected: power, modem, ethernet, etc. It further specifies that everything\nmust be UL or CSA approved, and so on and so forth.\n\nWell, that is what I have.\n> \n>> net to eliminate any direct electrical connections between machines that\n>> are not on the same power circuit (the aforesaid burned motherboard\n>> taught me that particular lesson). And yet I still fear every time a\n>> thunderstorm passes over.\n> \n> Wired is safe as long as everything's on the same circuit. My house is\n> wired for ethernet with a single switch running what's going to every\n> room, but in each room I have a second switch on the same power as\n> whatever's in that room; so if there is a strike it's far more likely\n> that I'll lose switches and not hardware.\n\nMy systems are all in the same room. 
The UPS for the main system has a\nsingle outlet on a circuit all its own all the way back to the power panel\nat the building entrance. The UPS for my other system also has a outlet on a\ncircuit all its own all the way back to the power panel at the building\nentrance -- on the other side of my 240 volt service so they sorta-kinda\nbalance out. The only other UPS is a little 620 VA one for the power to the\nVerizon FiOS leading into my house. That is fibre-optic all the way to the\npole. I will probably get less lightning coming in that way than when I used\nto be on copper dial-up. ;-)\n> \n>> Then of course there are the *other* risks, such as the place burning to\n>> the ground, or getting drowned by a break in the city reservoir that's\n>> a couple hundred yards up the hill (but at least I needn't worry about\n> \n> Invest in sponges. Lots of them. :)\n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 22:00:01 up 34 days, 1:22, 5 users, load average: 4.05, 4.22, 4.25\n", "msg_date": "Tue, 11 Sep 2007 22:28:05 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DRBD and Postgres: how to improve the perfomance?" } ]
[ { "msg_contents": "Can anyone answer this for me: Although I realize my client's disk subsystem \n(SCSI/RAID Smart Array E200 controller using RAID 1) is less than \nimpressive - is the default setting of 4.0 realistic or could it be lower?\n\nThanks!\n\n", "msg_date": "Mon, 10 Sep 2007 15:25:41 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Should be a lot higher, something like 10-15 is approximating accurate.\n\nIncreasing the number of disks in a RAID actually makes the number higher,\nnot lower. Until Postgres gets AIO + the ability to post multiple\nconcurrent IOs on index probes, random IO does not scale with increasing\ndisk count, but sequential does, thus the increasing \"random page cost\" as\nthe RAID gets faster.\n\nThe reason to change the number is to try to discourage the planner from\nchoosing index scans too aggressively. We (GP) have implemented something\nwe call \"Adaptive Nested Loop\" to replace a nested loop + index scan with a\nhash join when the selectivity estimates are off in order to improve this\nbehavior. We also run with a \"random_page_cost=100\" because we generally\nrun on machines with fast sequential I/O.\n\n- Luke \n\n\nOn 9/10/07 12:25 PM, \"Carlo Stonebanks\" <[email protected]>\nwrote:\n\n> Can anyone answer this for me: Although I realize my client's disk subsystem\n> (SCSI/RAID Smart Array E200 controller using RAID 1) is less than\n> impressive - is the default setting of 4.0 realistic or could it be lower?\n> \n> Thanks!\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Mon, 10 Sep 2007 13:19:03 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Luke Lonergan wrote:\n> Should be a lot higher, something like 10-15 is approximating accurate.\n> \nIn my own case, I have a much smaller database that I normally work \nwith, where everything should fit in memory (100 Mbytes?), and reducing \nit to 3.0 has resulted in consistently better timings for me. I think \nthis means that the planner doesn't understand my database size : \neffective memory size ratio. :-)\n\nAnyways - my point is that if you change the default to 10 you may hurt \npeople like me.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n", "msg_date": "Mon, 10 Sep 2007 16:52:56 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Luke,\n\n> We (GP) have implemented\n> something we call \"Adaptive Nested Loop\" to replace a nested loop +\n> index scan with a hash join when the selectivity estimates are off in\n> order to improve this behavior. We also run with a\n> \"random_page_cost=100\" because we generally run on machines with fast\n> sequential I/O.\n\nSo, when is this getting contributed? 
;-)\n\n-- \n--Josh\n\nJosh Berkus\nPostgreSQL @ Sun\nSan Francisco\n", "msg_date": "Mon, 10 Sep 2007 14:26:40 -0700", "msg_from": "Josh Berkus <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "\n\"Luke Lonergan\" <[email protected]> writes:\n\n> Should be a lot higher, something like 10-15 is approximating accurate.\n\nMost people's experience is that due to Postgres underestimating the benefits\nof caching lowering the random_page_cost is helpful.\n\n> Increasing the number of disks in a RAID actually makes the number higher,\n> not lower. Until Postgres gets AIO + the ability to post multiple\n> concurrent IOs on index probes, random IO does not scale with increasing\n> disk count, but sequential does, thus the increasing \"random page cost\" as\n> the RAID gets faster.\n\nThat does sound right, though I don't think it's very common. If you have very\nwide stripes you can get some amazing sequential scan speeds and it won't\nreally help random access at all. This is especially helpful if you're in an\nenvironment where you don't care about the costs you're imposing on other\nprocesses, such as a data warehouse where you have a single large batch query\nrunning at a time.\n\nWhat I don't understand is the bit about \"until Postgres gets AIO + the\nability to post multiple concurrent IOs on index probes\". Even with AIO your\nseek times are not going to be improved by wide raid stripes. And you can't\npossibly find the page at level n+1 before you've looked at the page at level\nn. Do you mean to be able to probe multiple index keys simultaneously? How\ndoes that work out?\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 10 Sep 2007 22:44:05 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On 9/10/07, Gregory Stark <[email protected]> wrote:\n>\n> \"Luke Lonergan\" <[email protected]> writes:\n>\n> > Should be a lot higher, something like 10-15 is approximating accurate.\n>\n> Most people's experience is that due to Postgres underestimating the benefits\n> of caching lowering the random_page_cost is helpful.\n\nQuite often the real problem is that they have effective_cache_size\ntoo small, and they use random_page_cost to get the planner to switch\nto index scans on small tables. With a large effective_cache_size and\nsmall to moderate table (i.e. it fits in memory pretty handily) the\nplanner seems much better in the last few major releases about picking\nan index over a sequential scan.\n", "msg_date": "Mon, 10 Sep 2007 16:59:38 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On Mon, 2007-09-10 at 22:44 +0100, Gregory Stark wrote:\n> What I don't understand is the bit about \"until Postgres gets AIO + the\n> ability to post multiple concurrent IOs on index probes\". Even with AIO your\n> seek times are not going to be improved by wide raid stripes. And you can't\n> possibly find the page at level n+1 before you've looked at the page at level\n> n. Do you mean to be able to probe multiple index keys simultaneously? How\n> does that work out?\n> \n\nI think he's referring to mirrors, in which there are multiple spindles\nthat can return a requested block. 
That could mitigate random I/O, if\nthe I/O is asynchronous and something intelligent (OS or controller) can\nschedule it.\n\nRegards,\n\tJeff Davis\n\n", "msg_date": "Mon, 10 Sep 2007 15:03:31 -0700", "msg_from": "Jeff Davis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Gregory Stark wrote:\n> \"Luke Lonergan\" <[email protected]> writes:\n> \n>> Increasing the number of disks in a RAID actually makes the number higher,\n>> not lower. Until Postgres gets AIO + the ability to post multiple\n>> concurrent IOs on index probes, random IO does not scale with increasing\n>> disk count, but sequential does, thus the increasing \"random page cost\" as\n>> the RAID gets faster.\n>> \n> What I don't understand is the bit about \"until Postgres gets AIO + the\n> ability to post multiple concurrent IOs on index probes\". Even with AIO your\n> seek times are not going to be improved by wide raid stripes. And you can't\n> possibly find the page at level n+1 before you've looked at the page at level\n> n. Do you mean to be able to probe multiple index keys simultaneously? How\n> does that work out\nOne suggestion: The plan is already in a tree. With some dependency \nanalysis, I assume the tree could be executed in parallel (multiple \nthreads or event triggered entry into a state machine), and I/O to fetch \nindex pages or table pages could be scheduled in parallel. At this \npoint, AIO becomes necessary to let the underlying system (and hardware \nwith tagged queueing?) schedule which pages should be served best first.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nGregory Stark wrote:\n\n\"Luke Lonergan\" <[email protected]> writes:\n \n\nIncreasing the number of disks in a RAID actually makes the number higher,\nnot lower. Until Postgres gets AIO + the ability to post multiple\nconcurrent IOs on index probes, random IO does not scale with increasing\ndisk count, but sequential does, thus the increasing \"random page cost\" as\nthe RAID gets faster.\n \n\nWhat I don't understand is the bit about \"until Postgres gets AIO + the\nability to post multiple concurrent IOs on index probes\". Even with AIO your\nseek times are not going to be improved by wide raid stripes. And you can't\npossibly find the page at level n+1 before you've looked at the page at level\nn. Do you mean to be able to probe multiple index keys simultaneously? How\ndoes that work out\n\nOne suggestion: The plan is already in a tree. With some dependency\nanalysis, I assume the tree could be executed in parallel (multiple\nthreads or event triggered entry into a state machine), and I/O to\nfetch index pages or table pages could be scheduled in parallel. At\nthis point, AIO becomes necessary to let the underlying system (and\nhardware with tagged queueing?) 
schedule which pages should be served\nbest first.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Mon, 10 Sep 2007 18:08:26 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Scott Marlowe wrote:\n> On 9/10/07, Gregory Stark <[email protected]> wrote:\n> \n>> \"Luke Lonergan\" <[email protected]> writes:\n>> \n>>> Should be a lot higher, something like 10-15 is approximating accurate.\n>>> \n>> Most people's experience is that due to Postgres underestimating the benefits\n>> of caching lowering the random_page_cost is helpful.\n>> \n> Quite often the real problem is that they have effective_cache_size\n> too small, and they use random_page_cost to get the planner to switch\n> to index scans on small tables. With a large effective_cache_size and\n> small to moderate table (i.e. it fits in memory pretty handily) the\n> planner seems much better in the last few major releases about picking\n> an index over a sequential scan\nIn my case, I set effective_cache_size to 25% of the RAM available to \nthe system (256 Mbytes), for a database that was about 100 Mbytes or \nless. I found performance to increase when reducing random_page_cost \nfrom 4.0 to 3.0.\n\nFor a database that truly fits entirely in memory, I assume \nrandom_page_cost is closer to 1.0. The planner should know that there is \nno significant seek cost for RAM.\n\nI will try to compare results tonight using 8.2. The last time I checked \nmay have been 8.1. I am also curious to see what the current algorithm \nis with regard to effective_cache_size.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nScott Marlowe wrote:\n\nOn 9/10/07, Gregory Stark <[email protected]> wrote:\n \n\n\"Luke Lonergan\" <[email protected]> writes:\n \n\nShould be a lot higher, something like 10-15 is approximating accurate.\n \n\nMost people's experience is that due to Postgres underestimating the benefits\nof caching lowering the random_page_cost is helpful.\n \n\nQuite often the real problem is that they have effective_cache_size\ntoo small, and they use random_page_cost to get the planner to switch\nto index scans on small tables. With a large effective_cache_size and\nsmall to moderate table (i.e. it fits in memory pretty handily) the\nplanner seems much better in the last few major releases about picking\nan index over a sequential scan\n\nIn my case, I set effective_cache_size to 25% of the RAM available to\nthe system (256 Mbytes), for a database that was about 100 Mbytes or\nless. I found performance to increase when reducing random_page_cost\nfrom 4.0 to 3.0.\n\nFor a database that truly fits entirely in memory, I assume\nrandom_page_cost is closer to 1.0. The planner should know that there\nis no significant seek cost for RAM.\n\nI will try to compare results tonight using 8.2. The last time I\nchecked may have been 8.1. I am also curious to see what the current\nalgorithm is with regard to effective_cache_size.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Mon, 10 Sep 2007 18:22:06 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Hi Mark, Greg,\n\nOn 9/10/07 3:08 PM, \"Mark Mielke\" <[email protected]> wrote:\n\n> One suggestion: The plan is already in a tree. 
With some dependency analysis,\n> I assume the tree could be executed in parallel (multiple threads or event\n> triggered entry into a state machine), and I/O to fetch index pages or table\n> pages could be scheduled in parallel. At this point, AIO becomes necessary to\n> let the underlying system (and hardware with tagged queueing?) schedule which\n> pages should be served best first.\n\nRight now the pattern for index scan goes like this:\n\n- Find qualifying TID in index\n - Seek to TID location in relfile\n - Acquire tuple from relfile, return\n\nWhen the tuples are widely distributed in the table, as is the case with a\nvery selective predicate against an evenly distributed attribute on a\nrelation 2x larger than the I/O cache + bufcache, this pattern will result\nin effectively \"random I/O\". In actual fact, the use of the in-memory\nbitmap index will make the I/Os sequential, but sparse, which is another\nversion of \"random\" if the sequential I/Os are larger than the\ngather/scatter I/O aggregation in the OS scheduler (say 1MB). This is a\nvery common circumstance for DSS / OLAP / DW workloads.\n\nFor plans that qualify with the above conditions, the executor will issue\nblocking calls to lseek(), which will translate to a single disk actuator\nmoving to the needed location in seek_time, approximately 8ms. The\nseek_time for a single query will not improve with the increase in number of\ndisks in an underlying RAID pool, so we can do about 1000/8 = 125 seeks per\nsecond no matter what I/O subsystem we have.\n\nIf we implement AIO and allow for multiple pending I/Os used to prefetch\ngroups of qualifying tuples, basically a form of random readahead, we can\nimprove the throughput for any given query by taking advantage of multiple\ndisk actuators. This will work for RAID5, RAID10 and other disk pooling\nmechanisms because the lseek() will be issued as parallel events. Note that\nthe same approach would also work to speed sequential access by overlapping\ncompute and I/O.\n\n- Luke\n\n\n", "msg_date": "Mon, 10 Sep 2007 15:45:26 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Hi Josh,\n\nOn 9/10/07 2:26 PM, \"Josh Berkus\" <[email protected]> wrote:\n\n> So, when is this getting contributed? ;-)\n\nYes, that's the right question to ask :-)\n\nOne feeble answer: \"when we're not overwhelmed by customer activity\"...\n\n- Luke\n\n\n", "msg_date": "Mon, 10 Sep 2007 15:46:56 -0700", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "\n\"Luke Lonergan\" <[email protected]> writes:\n\n> Right now the pattern for index scan goes like this:\n>\n> - Find qualifying TID in index\n> - Seek to TID location in relfile\n> - Acquire tuple from relfile, return\n>...\n> If we implement AIO and allow for multiple pending I/Os used to prefetch\n> groups of qualifying tuples, basically a form of random readahead\n\nAh, I see what you mean now. It makes a lot more sense if you think of it for\nbitmap index scans. 
So, for example, the bitmap index scan could stream tids\nto the executor and the executor would strip out the block numbers and pass\nthem to the i/o layer saying \"i need this block now but following that I'll\nneed these blocks so get them moving now\".\n\nI think this seems pretty impractical for regular (non-bitmap) index probes\nthough. You might be able to do it sometimes but not very effectively and you\nwon't know when it would be useful.\n\nI think what this means is that there are actually *three* kinds of i/o: 1)\nSequential which means you get the full bandwidth of your drives * the number\nof spindles; 2) Random which gets you 1 block per seek latency regardless of\nhow many spindles you have; and 3) Random but with prefetch which gets you the\nrandom bandwidth above times the number of spindles.\n\nThe extra spindles speed up sequential i/o too so the ratio between sequential\nand random with prefetch would still be about 4.0. But the ratio between\nsequential and random without prefetch would be even higher.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Sep 2007 08:28:59 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Greg, \n\n> I think this seems pretty impractical for regular \n> (non-bitmap) index probes though. You might be able to do it \n> sometimes but not very effectively and you won't know when it \n> would be useful.\n\nMaybe so, though I think it's reasonable to get multiple actuators going\neven if the seeks are truly random. It's a dynamic / tricky business to\ndetermine how many pending seeks to post, but it should be roughly close\nto the number of disks in the pool IMO.\n\n> I think what this means is that there are actually *three* \n> kinds of i/o: 1) Sequential which means you get the full \n> bandwidth of your drives * the number of spindles; 2) Random \n> which gets you 1 block per seek latency regardless of how \n> many spindles you have; and 3) Random but with prefetch which \n> gets you the random bandwidth above times the number of spindles.\n\nPerhaps so, though I'm more optimistic that prefetch would help most\nrandom seek situations.\n\nFor reasonable amounts of concurrent usage this point becomes moot - we\nget the benefit of multiple backends doing seeking anyway, but I think\nif we do dynamic prefetch right it would degenerate gracefully in those\ncircumstances.\n\n> The extra spindles speed up sequential i/o too so the ratio \n> between sequential and random with prefetch would still be \n> about 4.0. But the ratio between sequential and random \n> without prefetch would be even higher.\n\nRight :-)\n\n- Luke\n\n", "msg_date": "Tue, 11 Sep 2007 03:33:47 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Luke Lonergan wrote:\n> For plans that qualify with the above conditions, the executor will issue\n> blocking calls to lseek(), which will translate to a single disk actuator\n> moving to the needed location in seek_time, approximately 8ms. 
\n\nI doubt it's actually the lseeks, but the reads/writes after the lseeks\nthat block.\n\n> If we implement AIO and allow for multiple pending I/Os used to prefetch\n> groups of qualifying tuples, basically a form of random readahead, we can\n> improve the throughput for any given query by taking advantage of multiple\n> disk actuators. \n\nRather than jumping to AIO, which is a huge change, I think we could get\nmuch of the benefit by using posix_fadvise(WILLNEED) in strategic places\nto tell the OS what pages we're going to need in the near future. If the\nOS has implemented that properly, it should schedule I/Os for the\nrequested pages ahead of time. That would require very little change to\nPostgreSQL code, and could simply be #ifdef'd away on platforms that\ndon't support posix_fadvise.\n\n> Note that\n> the same approach would also work to speed sequential access by overlapping\n> compute and I/O.\n\nYes, though the OS should already doing read ahead for us. How efficient\nit is is another question. posix_fadvise(SEQUENTIAL) could be used to\ngive a hint on that as well.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Sep 2007 10:00:46 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Gregory Stark wrote (in part):\n\n> The extra spindles speed up sequential i/o too so the ratio between sequential\n> and random with prefetch would still be about 4.0. But the ratio between\n> sequential and random without prefetch would be even higher.\n> \nI never figured out how extra spindles help sequential I-O because\nconsecutive logical blocks are not necessarily written consecutively in a\nLinux or UNIX file system. They try to group a bunch (8 512-bit?) of blocks\ntogether, but that is about it. So even if you are reading sequentially, the\nhead actuator may be seeking around anyway. I suppose you could fix this, if\nthe database were reasonably static, by backing up the entire database,\ndoing a mkfs on the file system, and restoring it. This might make the\ndatabase more contiguous, at least for a while.\n\nWhen I was working on a home-brew UNIX dbms, I used raw IO on a separate\ndisk drive so that the files could be contiguous, and this would work.\nSimilarly, IBM's DB2 does that (optionally). But it is my understanding that\npostgreSQL does not. OTOH, the large (in my case) cache in the kernel can be\nhelpful if I seek around back and forth to nearby records since they may be\nin the cache. On my 8 GByte RAM, I have the shared buffers set to 200,000\nwhich should keep any busy stuff in memory, and there are about 6 GBytes of\nram presently available for the system I-O cache. I have not optimized\nanything yet because I am still laundering the major input data to\ninitialize the database so I do not have any real transactions going through\nit yet.\n\nI have 6 SCSI hard drives on two Ultra/320 SCSI controllers. Of the database\npartitions, sda8 has the write-ahead-log, sdb7 has a few tiny seldom-used\ntables and pg_log, and sdc1, sdd1, sde1, and sdf1 are just for the other\ntables. For the data on sd[c-f]1 (there is nothing else on these drives), I\nkeep the index for a table on a different drive from the data. 
When\npopulating the database initially, this seems to help since I tend to fill\none table, or a very few tables, at a time, so the table itself and its\nindex do not contend for the head actuator. Presumably, the SCSI controllers\ncan do simultaneous seeks on the various drives and one transfer on each\ncontroller.\n\nWhen loading the database (using INSERTs mainly -- because the input data\nare gawdawful unnormalized spreadsheets obtained from elsewhere, growing\nonce a week), the system is IO limited with seeks (and rotational latency\ntime). IO transfers average about 1.7 Megabytes/second, although there are\npeaks that exceed 10 Megabytes/second. If I run pg_restore from a backup\ntape, I can see 90 Megabyte/second transfer rates for bursts of several\nseconds at a time, but that is pretty much of a record.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 06:35:01 up 33 days, 9:57, 0 users, load average: 4.06, 4.07, 4.02\n", "msg_date": "Tue, 11 Sep 2007 07:07:02 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "\"Jean-David Beyer\" <[email protected]> writes:\n\n> Gregory Stark wrote (in part):\n>\n>> The extra spindles speed up sequential i/o too so the ratio between sequential\n>> and random with prefetch would still be about 4.0. But the ratio between\n>> sequential and random without prefetch would be even higher.\n>> \n> I never figured out how extra spindles help sequential I-O because\n> consecutive logical blocks are not necessarily written consecutively in a\n> Linux or UNIX file system. They try to group a bunch (8 512-bit?) of blocks\n> together, but that is about it. So even if you are reading sequentially, the\n> head actuator may be seeking around anyway. \n\nThat's somewhat true but good filesystems group a whole lot more than 8 blocks\ntogether. You can do benchmarks with dd and compare the speed of reading from\na file with the speed of reading from the raw device. On typical consumer\ndrives these days you'll get 50-60MB/s raw and I would expect not a whole lot\nless than that with a large ext2 file, at least if it's created all in one\nchunk on a not overly-full filesystem. \n\n(Those assumptions is not necessarily valid for Postgres which is another\ntopic, but one that requires some empirical numbers before diving into.)\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Sep 2007 19:16:03 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGregory Stark wrote:\n> \"Jean-David Beyer\" <[email protected]> writes:\n> \n>> Gregory Stark wrote (in part):\n>>\n>>> The extra spindles speed up sequential i/o too so the ratio between sequential\n>>> and random with prefetch would still be about 4.0. But the ratio between\n>>> sequential and random without prefetch would be even higher.\n>>>\n>> I never figured out how extra spindles help sequential I-O because\n>> consecutive logical blocks are not necessarily written consecutively in a\n>> Linux or UNIX file system. They try to group a bunch (8 512-bit?) of blocks\n>> together, but that is about it. 
So even if you are reading sequentially, the\n>> head actuator may be seeking around anyway. \n> \n> That's somewhat true but good filesystems group a whole lot more than 8 blocks\n> together. You can do benchmarks with dd and compare the speed of reading from\n> a file with the speed of reading from the raw device. On typical consumer\n> drives these days you'll get 50-60MB/s raw and I would expect not a whole lot\n> less than that with a large ext2 file, at least if it's created all in one\n> chunk on a not overly-full filesystem. \n\n# date; dd if=/dev/sda8 of=/dev/null;date\nTue Sep 11 14:27:36 EDT 2007\n8385867+0 records in\n8385867+0 records out\n4293563904 bytes (4.3 GB) copied, 71.7648 seconds, 59.8 MB/s\nTue Sep 11 14:28:48 EDT 2007\n\n# date; dd bs=8192 if=/dev/sda8 of=/dev/null;date\nTue Sep 11 14:29:15 EDT 2007\n524116+1 records in\n524116+1 records out\n4293563904 bytes (4.3 GB) copied, 68.2595 seconds, 62.9 MB/s\nTue Sep 11 14:30:23 EDT 2007\n\n# date; dd bs=8192\nif=/srv/dbms/dataA/pgsql/data/pg_xlog/000000010000002B0000002F of=/dev/null;date\nTue Sep 11 14:34:25 EDT 2007\n2048+0 records in\n2048+0 records out\n16777216 bytes (17 MB) copied, 0.272343 seconds, 61.6 MB/s\nTue Sep 11 14:34:26 EDT 2007\n\nThe first two are the partition where the W.A.L. is in (and a bit more:\n\n[/srv/dbms/dataA/pgsql/data]# ls -l\ntotal 104\n- -rw------- 1 postgres postgres 4 Aug 11 13:32 PG_VERSION\ndrwx------ 5 postgres postgres 4096 Aug 11 13:32 base\ndrwx------ 2 postgres postgres 4096 Sep 11 14:35 global\ndrwx------ 2 postgres postgres 4096 Sep 10 18:58 pg_clog\n- -rw------- 1 postgres postgres 3396 Aug 11 13:32 pg_hba.conf\n- -rw------- 1 root root 3396 Aug 16 14:32 pg_hba.conf.dist\n- -rw------- 1 postgres postgres 1460 Aug 11 13:32 pg_ident.conf\ndrwx------ 4 postgres postgres 4096 Aug 11 13:32 pg_multixact\ndrwx------ 2 postgres postgres 4096 Sep 10 19:48 pg_subtrans\ndrwx------ 2 postgres postgres 4096 Aug 12 16:14 pg_tblspc\ndrwx------ 2 postgres postgres 4096 Aug 11 13:32 pg_twophase\ndrwx------ 3 postgres postgres 4096 Sep 10 19:53 pg_xlog\n- -rw------- 1 postgres postgres 15527 Sep 8 00:35 postgresql.conf\n- -rw------- 1 postgres postgres 13659 Aug 11 13:32 postgresql.conf.dist\n- -rw------- 1 root root 15527 Sep 4 10:37 postgresql.conf~\n- -rw------- 1 postgres postgres 56 Sep 8 08:12 postmaster.opts\n- -rw------- 1 postgres postgres 53 Sep 8 08:12 postmaster.pid\n\nIt is tricky for me to find a big enough file to test. I tried one of the\npg_xlog files, but I cannot easily copy from there because it acts a bit\ninteractive and the time is mostly my time. If I copy it elsewhere and give\nit to non-root, then it is all in the cache, so it does not really read it.\n> \n> (Those assumptions is not necessarily valid for Postgres which is another\n> topic, but one that requires some empirical numbers before diving into.)\n> \n\n\n- --\n .~. 
Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 14:30:04 up 33 days, 17:52, 1 user, load average: 5.50, 4.67, 4.29\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.5 (GNU/Linux)\nComment: Using GnuPG with CentOS - http://enigmail.mozdev.org\n\niD8DBQFG5uM4Ptu2XpovyZoRAhtlAKDFs5eP/CGIqB/z207j2dpwDSHOlwCfevp4\nlBWn3b2GW6gesaq+l3Rbooc=\n=F4H6\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 11 Sep 2007 14:49:28 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On Mon, Sep 10, 2007 at 06:22:06PM -0400, Mark Mielke wrote:\n> In my case, I set effective_cache_size to 25% of the RAM available to \n> the system (256 Mbytes), for a database that was about 100 Mbytes or \n> less. I found performance to increase when reducing random_page_cost \n> from 4.0 to 3.0.\n\nJust for the record, effective_cache_size of 25% is *way* too low in\nmost cases, though if you only have 1GB setting it to 500MB probably\nisn't too far off.\n\nGenerally, I'll set this to however much memory is in the server, minus\n1G for the OS, unless there's less than 4G of total memory in which case\nI subtract less.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 16:26:34 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:\n> It is tricky for me to find a big enough file to test. I tried one of the\n\ndd if=/dev/zero of=bigfile bs=8192 count=1000000\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 16:31:05 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Decibel! wrote:\n> On Mon, Sep 10, 2007 at 06:22:06PM -0400, Mark Mielke wrote:\n> \n>> In my case, I set effective_cache_size to 25% of the RAM available to \n>> the system (256 Mbytes), for a database that was about 100 Mbytes or \n>> less. I found performance to increase when reducing random_page_cost \n>> from 4.0 to 3.0.\n>> \n> Just for the record, effective_cache_size of 25% is *way* too low in\n> most cases, though if you only have 1GB setting it to 500MB probably\n> isn't too far off.\n>\n> Generally, I'll set this to however much memory is in the server, minus\n> 1G for the OS, unless there's less than 4G of total memory in which case\n> I subtract less.\n> \nAgree. My point was only that there are conflicting database \nrequirements, and that one setting may not be valid for both. The \ndefault should be whatever is the most useful for the most number of \npeople. People who fall into one of the two extremes should know enough \nto set the value based on actual performance measurements.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>\n\n\n\n\n\n\n\nDecibel! wrote:\n\nOn Mon, Sep 10, 2007 at 06:22:06PM -0400, Mark Mielke wrote:\n \n\nIn my case, I set effective_cache_size to 25% of the RAM available to \nthe system (256 Mbytes), for a database that was about 100 Mbytes or \nless. 
I found performance to increase when reducing random_page_cost \nfrom 4.0 to 3.0.\n \n\nJust for the record, effective_cache_size of 25% is *way* too low in\nmost cases, though if you only have 1GB setting it to 500MB probably\nisn't too far off.\n\nGenerally, I'll set this to however much memory is in the server, minus\n1G for the OS, unless there's less than 4G of total memory in which case\nI subtract less.\n \n\nAgree. My point was only that there are conflicting database\nrequirements, and that one setting may not be valid for both. The\ndefault should be whatever is the most useful for the most number of\npeople. People who fall into one of the two extremes should know enough\nto set the value based on actual performance measurements.\n\nCheers,\nmark\n\n-- \nMark Mielke <[email protected]>", "msg_date": "Tue, 11 Sep 2007 17:50:24 -0400", "msg_from": "Mark Mielke <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "\"Decibel!\" <[email protected]> writes:\n\n> On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:\n>> It is tricky for me to find a big enough file to test. I tried one of the\n>\n> dd if=/dev/zero of=bigfile bs=8192 count=1000000\n\nOn linux another useful trick is:\n\necho 1 > /proc/sys/vm/drop_caches\n\nAlso, it helps to run a \"vmstat 1\" in another window and watch the bi and bo\ncolumns.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 12 Sep 2007 00:02:46 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On Wed, Sep 12, 2007 at 12:02:46AM +0100, Gregory Stark wrote:\n> \"Decibel!\" <[email protected]> writes:\n> \n> > On Tue, Sep 11, 2007 at 02:49:28PM -0400, Jean-David Beyer wrote:\n> >> It is tricky for me to find a big enough file to test. I tried one of the\n> >\n> > dd if=/dev/zero of=bigfile bs=8192 count=1000000\n> \n> On linux another useful trick is:\n> \n> echo 1 > /proc/sys/vm/drop_caches\n\nThe following C code should have similar effect...\n\n\n/*\n * $Id: clearmem.c,v 1.1 2003/06/29 20:41:33 decibel Exp $\n *\n * Utility to clear out a chunk of memory and zero it. Useful for flushing disk buffers\n */\n\nint main(int argc, char *argv[]) {\n if (!calloc(atoi(argv[1]), 1024*1024)) { printf(\"Error allocating memory.\\n\"); }\n}\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 18:49:41 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "On Wed, 12 Sep 2007, Gregory Stark wrote:\n\n> Also, it helps to run a \"vmstat 1\" in another window and watch the bi and bo\n> columns.\n\nRecently on Linux systems I've been using dstat ( \nhttp://dag.wieers.com/home-made/dstat/ ) for live monitoring in this sort \nof situation. 
Once you get the command line parameters right, you can get \ndata for each of the major disks on your system that keep the columns \nhuman readable (like switching from KB/s to MB/s as appropriate) as \nactivity goes up and down combined with the standard vmstat data.\n\nI still use vmstat/iostat if I want to archive or parse the data, but if \nI'm watching it I always use dstat now.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 11 Sep 2007 20:10:50 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGreg Smith wrote:\n> On Wed, 12 Sep 2007, Gregory Stark wrote:\n> \n>> Also, it helps to run a \"vmstat 1\" in another window and watch the bi\n>> and bo\n>> columns.\n> \n> Recently on Linux systems I've been using dstat (\n> http://dag.wieers.com/home-made/dstat/ ) for live monitoring in this\n> sort of situation. Once you get the command line parameters right, you\n> can get data for each of the major disks on your system that keep the\n> columns human readable (like switching from KB/s to MB/s as appropriate)\n> as activity goes up and down combined with the standard vmstat data.\n> \n> I still use vmstat/iostat if I want to archive or parse the data, but if\n> I'm watching it I always use dstat now.\n\nThanks for the tip Greg... I hadn't heard of dstat.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> -- \n> * Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG5zbRATb/zqfZUUQRAnz4AJwM1bGsVPdUZWy6ldqEq9l8SqRpJACcCfUc\nJoc8dLj12hISB5mQO6Tn+a8=\n=E5D2\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 11 Sep 2007 17:46:09 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "Jean-David Beyer escribi�:\n> Gregory Stark wrote (in part):\n>\n> \n>> The extra spindles speed up sequential i/o too so the ratio between sequential\n>> and random with prefetch would still be about 4.0. But the ratio between\n>> sequential and random without prefetch would be even higher.\n>>\n>> \n> I never figured out how extra spindles help sequential I-O because\n> consecutive logical blocks are not necessarily written consecutively in a\n> Linux or UNIX file system. They try to group a bunch (8 512-bit?) of blocks\n> together, but that is about it. So even if you are reading sequentially, the\n> head actuator may be seeking around anyway. I suppose you could fix this, if\n> the database were reasonably static, by backing up the entire database,\n> doing a mkfs on the file system, and restoring it. 
This might make the\n> database more contiguous, at least for a while.\n>\n> When I was working on a home-brew UNIX dbms, I used raw IO on a separate\n> disk drive so that the files could be contiguous, and this would work.\n> Similarly, IBM's DB2 does that (optionally). But it is my understanding that\n> postgreSQL does not. OTOH, the large (in my case) cache in the kernel can be\n> helpful if I seek around back and forth to nearby records since they may be\n> in the cache. On my 8 GByte RAM, I have the shared buffers set to 200,000\n> which should keep any busy stuff in memory, and there are about 6 GBytes of\n> ram presently available for the system I-O cache. I have not optimized\n> anything yet because I am still laundering the major input data to\n> initialize the database so I do not have any real transactions going through\n> it yet.\n>\n> I have 6 SCSI hard drives on two Ultra/320 SCSI controllers. Of the database\n> partitions, sda8 has the write-ahead-log, sdb7 has a few tiny seldom-used\n> tables and pg_log, and sdc1, sdd1, sde1, and sdf1 are just for the other\n> tables. For the data on sd[c-f]1 (there is nothing else on these drives), I\n> keep the index for a table on a different drive from the data. When\n> populating the database initially, this seems to help since I tend to fill\n> one table, or a very few tables, at a time, so the table itself and its\n> index do not contend for the head actuator. Presumably, the SCSI controllers\n> can do simultaneous seeks on the various drives and one transfer on each\n> controller.\n>\n> When loading the database (using INSERTs mainly -- because the input data\n> are gawdawful unnormalized spreadsheets obtained from elsewhere, growing\n> once a week), the system is IO limited with seeks (and rotational latency\n> time). IO transfers average about 1.7 Megabytes/second, although there are\n> peaks that exceed 10 Megabytes/second. If I run pg_restore from a backup\n> tape, I can see 90 Megabyte/second transfer rates for bursts of several\n> seconds at a time, but that is pretty much of a record.\n>\n> \n\n", "msg_date": "Wed, 12 Sep 2007 15:01:07 -0300", "msg_from": "Rafael Barrera Oro <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": "\n\"Gregory Stark\" <[email protected]> writes:\n\n> \"Luke Lonergan\" <[email protected]> writes:\n>\n>> Right now the pattern for index scan goes like this:\n>>\n>> - Find qualifying TID in index\n>> - Seek to TID location in relfile\n>> - Acquire tuple from relfile, return\n>>...\n>> If we implement AIO and allow for multiple pending I/Os used to prefetch\n>> groups of qualifying tuples, basically a form of random readahead\n>\n> Ah, I see what you mean now. It makes a lot more sense if you think of it for\n> bitmap index scans. So, for example, the bitmap index scan could stream tids\n> to the executor and the executor would strip out the block numbers and pass\n> them to the i/o layer saying \"i need this block now but following that I'll\n> need these blocks so get them moving now\".\n\nWow, I've done some preliminary testing here on Linux using posix_fadvise and\nSolaris using libaio to prefetch blocks and then access them randomly and I\nthink there's a lot of low hanging fruit here.\n\nThe use case where this helps is indeed on a raid array where you're not\nmaxing out the bandwidth of the array and care about the transaction latency,\nperhaps a narrow use case but still, quite common. 
\n\nSince our random access is synchronous it means we have to wait for one seek,\nprocess that page, then wait for the next seek on another drive which was\nsitting idle while we were processing the first page. By prefetching the pages\nwe'll need next we can get all the members of the array working for us\nsimultaneously even if they're all doing seeks.\n\nWhat I've done is write a test program which generates a 1G file, syncs it and\ndrops the caches (not working yet on Solaris but doesn't seem to affect the\nresults) and then picks 4096 8k buffers and reads them in random order. The\nmachines it's running on have a small raid array with 4 drives.\n\nJust seeking without any prefetch it takes about 12.5s on Linux and 13.5s on\nSolaris. If I prefetch even a single buffer using posix_fadvise or libaio I\nsee a noticeable improvement, over 25%. At 128 buffers of prefetch both\nsystems are down to about 2.5-2.7s. That's on the small raid array. On the\nboot both have a small beneficial effect but only at very large prefetch set\nsizes which I would chalk down to being able to re-order the reads even if it\ncan't overlap them.\n\nI want to test how much of this effect evaporates when I compare it to a\nbitmap index style scan but that depends on a lot of factors like the exact\npattern of file extensions on the database files. In any case bitmap index\nscans get us the reordering effect, but not the overlapping i/o requests\nassuming they're spread quite far apart in the data files.\n\n> I think this seems pretty impractical for regular (non-bitmap) index probes\n> though. You might be able to do it sometimes but not very effectively and you\n> won't know when it would be useful.\n\nHow useful this is depends a lot on how invasively we let it infect things\nlike regular index scans. If we can prefetch right siblings and deeper index\npages as we descend an index tree and future heap pages it could help a lot as\nthose aren't sorted like bitmap index scans. But even if we only fetch heap\npages all together before processing the heap pages it could be a big help.\n\nIncidentally we do need to try to make use of both as Solaris doesn't have\nposix_fadvise as far as I can tell and Linux's libaio doesn't support\nnon-O_DIRECT files.\n\nRaw data:\n\nBlocks Linux Solaris\nPrefetched posix_fadvise libaio\n---------------------------------------\n 1 12.473 13.597\n 2 9.053 9.830\n 4 6.787 7.594\n 8 5.303 6.588\n 16 4.209 5.120\n 32 3.388 4.014\n 64 2.869 3.216\n 128 2.515 2.710\n 256 2.312 2.327\n 512 2.168 2.099\n 1024 2.139 1.974\n 2048 2.242 1.903\n 4096 2.222 1.890\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 13 Sep 2007 18:46:42 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" }, { "msg_contents": ">>> On Mon, Sep 10, 2007 at 2:25 PM, in message <[email protected]>,\n\"Carlo Stonebanks\" <[email protected]> wrote: \n \n> is the default setting of 4.0 realistic or could it be lower?\n \nWow, such a simple, innocent question.\n \nAs you may have inferred, it can't be answered in isolation. Make sure that\nyou have reviewed all of your memory settings, then try adjusting this and\nseeing what the results are. 
With accurate effective_cache_size and a fairly\ngenerous work_mem setting, we have found that these settings work best for us\nwith our actual production loads:\n \n(1) Cache well below database size (for example 6 GB or 12 GB RAM on a box\nrunning a 210 GB database):\n \n#seq_page_cost = 1.0\nrandom_page_cost = 2.0\n \n(2) On a database which is entirely contained within cache:\n \nseq_page_cost = 0.1\nrandom_page_cost = 0.1\n \n(3) Where caching is very significant, but not complete, we have to test\nto see where performance is best. One example that significantly beat both\nof the above in production on a particular box:\n \nseq_page_cost = 0.3\nrandom_page_cost = 0.5\n \nSo, the short answer to your question is that the default might be realistic\nin some environments; the best choice will be lower in many environments;\nthe best choice will be higher in some environments; only testing your\nactual applications in your actual environment can tell you which is the\ncase for you.\n \nMy approach is to pick one of the first two, depending on whether the\ndatabase will be fully cached, then monitor for performance problems. When\nthe queries with unacceptable response time have been identified, I look\nfor ways to improve them. One of the things I may try, where a bad plan\nseems to have been chosen, is to adjust the random page cost. If I do\nchange that in production, then I closely monitor for regression in other\nqueries.\n \n-Kevin\n \n\n", "msg_date": "Sun, 16 Sep 2007 15:00:48 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: random_page_costs - are defaults of 4.0 realistic for SCSI RAID 1" } ]
[ { "msg_contents": "If I delete a whole bunch of tables (like 10,000 tables), should I vacuum system tables, and if so, which ones? (This system is still on 8.1.4 and isn't running autovacuum).\n\nThanks,\nCraig\n", "msg_date": "Mon, 10 Sep 2007 21:12:10 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": true, "msg_subject": "What to vacuum after deleting lots of tables" }, { "msg_contents": "Craig James <[email protected]> writes:\n> If I delete a whole bunch of tables (like 10,000 tables), should I vacuum system tables, and if so, which ones? (This system is still on 8.1.4 and isn't running autovacuum).\n\n\"All of them\" would do for a first approximation.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Sep 2007 00:23:11 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: What to vacuum after deleting lots of tables " } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nHi,\n\nI having the same problem I told here a few weeks before. Database is\nusing too much resources again.\n\nI do a vacumm full each day, but seems it is not working. I am preparing\nan update to postgres 8.2.4 (actually I am using at 8.1.3, and tests for\nupdate will need several days)\n\nLast time I had this problem i solved it stopping website, restarting\ndatabase, vacuumm it, run again website. But I guess this is going to\nhappen again.\n\nI would like to detect and solve the problem. Any ideas to detect it?\n\nThanks in advance,\n\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG5jbLIo1XmbAXRboRArcpAJ0YvoCT6KWv2fafVAtapu6nwFmKoACcD0uA\nzFTx9Wq+2NSxijIf/R8E5f8=\n=u0k5\n-----END PGP SIGNATURE-----", "msg_date": "Tue, 11 Sep 2007 08:33:47 +0200", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "[Again] Postgres performance problem" }, { "msg_contents": "> Last time I had this problem i solved it stopping website, restarting\n> database, vacuumm it, run again website. But I guess this is going to\n> happen again.\n>\n> I would like to detect and solve the problem. Any ideas to detect it?\n\nDo you have very long transactions? Maybe some client that is connected\nall the time that is idle in transaction?\n\n/Dennis\n\n", "msg_date": "Tue, 11 Sep 2007 09:31:23 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\[email protected] escribi�:\n>> Last time I had this problem i solved it stopping website, restarting\n>> database, vacuumm it, run again website. But I guess this is going to\n>> happen again.\n>>\n>> I would like to detect and solve the problem. Any ideas to detect it?\n> \n> Do you have very long transactions? Maybe some client that is connected\n> all the time that is idle in transaction?\n\nThere should not be long transactions. I ll keep an eye on Idle transactions\n\nI m detecting it using:\n\necho 'SELECT current_query FROM pg_stat_activity;' |\n/usr/local/pgsql/bin/psql vacadb | grep IDLE | wc -l\n\n\n\n> \n> /Dennis\n> \n> \n> \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG5kiRIo1XmbAXRboRAj3sAKCH21zIhvdvPcmVQG71owiCye96xwCcDPe0\no/aArJF0JjUnTIFd1sMYD+Y=\n=6zyY\n-----END PGP SIGNATURE-----", "msg_date": "Tue, 11 Sep 2007 09:49:37 +0200", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> [email protected] escribi?:\n> >> Last time I had this problem i solved it stopping website, restarting\n> >> database, vacuumm it, run again website. But I guess this is going to\n> >> happen again.\n> >>\n> >> I would like to detect and solve the problem. Any ideas to detect it?\n> > \n> > Do you have very long transactions? Maybe some client that is connected\n> > all the time that is idle in transaction?\n> \n> There should not be long transactions. 
I ll keep an eye on Idle transactions\n> \n> I m detecting it using:\n> \n> echo 'SELECT current_query FROM pg_stat_activity;' |\n> /usr/local/pgsql/bin/psql vacadb | grep IDLE | wc -l\n\nIf you're using VACUUM FULL, you're doing something wrong. :) Run lazy\nvacuum frequently enough (better yet, autovacuum, but cut all of 8.1's\nautovac parameters in half), and make sure your FSM is big enough\n(periodic vacuumdb -av | tail is an easy way to check that).\n\nTry a REINDEX. VACUUM FULL is especially hard on the indexes, and it's\neasy for them to seriously bloat.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 17:52:42 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "\n\nDecibel! escribi�:\n> On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:\n>> -----BEGIN PGP SIGNED MESSAGE-----\n>> Hash: SHA1\n>>\n>> [email protected] escribi?:\n>>>> Last time I had this problem i solved it stopping website, restarting\n>>>> database, vacuumm it, run again website. But I guess this is going to\n>>>> happen again.\n>>>>\n>>>> I would like to detect and solve the problem. Any ideas to detect it?\n>>> Do you have very long transactions? Maybe some client that is connected\n>>> all the time that is idle in transaction?\n>> There should not be long transactions. I ll keep an eye on Idle transactions\n>>\n>> I m detecting it using:\n>>\n>> echo 'SELECT current_query FROM pg_stat_activity;' |\n>> /usr/local/pgsql/bin/psql vacadb | grep IDLE | wc -l\n> \n> If you're using VACUUM FULL, you're doing something wrong. :) \n\nI do a VACUUM FULL VERBOSE ANALYZE each day. I save all logs so I can\ncheck if vacuum is done properly.(it is)\n\nRun lazy\n> vacuum frequently enough (better yet, autovacuum, but cut all of 8.1's\n> autovac parameters in half), and make sure your FSM is big enough\n\nI checked that there is no warnings about FSM in logs. (also in logs\nfrom vacuum). Is it reliable?\n\nWhat do u mean for \"cut all of 8.1's autovac parameters in half\" Maybe\ndefault autovac parameters?\n\n> (periodic vacuumdb -av | tail is an easy way to check that).\n\nI ll keep an eye on it.\n\n> \n> Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's\n> easy for them to seriously bloat.\n\nReindex is done everyday after VACUUM FULL VERBOSE ANALYZE. I save also\nthe output averyday and save it into a log, and I can check that it is\ndone properly.\n\n\n", "msg_date": "Wed, 12 Sep 2007 09:42:29 +0200", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/12/07, [email protected] <[email protected]> wrote:\n>\n> Decibel! escribió:\n> > On Tue, Sep 11, 2007 at 09:49:37AM +0200, Ruben Rubio wrote:\n> >> -----BEGIN PGP SIGNED MESSAGE-----\n> >> Hash: SHA1\n> >>\n> >> [email protected] escribi?:\n> >>>> Last time I had this problem i solved it stopping website, restarting\n> >>>> database, vacuumm it, run again website. But I guess this is going to\n> >>>> happen again.\n> >>>>\n> >>>> I would like to detect and solve the problem. Any ideas to detect it?\n> >>> Do you have very long transactions? Maybe some client that is connected\n> >>> all the time that is idle in transaction?\n> >> There should not be long transactions. 
I ll keep an eye on Idle transactions\n> >>\n> >> I m detecting it using:\n> >>\n> >> echo 'SELECT current_query FROM pg_stat_activity;' |\n> >> /usr/local/pgsql/bin/psql vacadb | grep IDLE | wc -l\n> >\n> > If you're using VACUUM FULL, you're doing something wrong. :)\n>\n> I do a VACUUM FULL VERBOSE ANALYZE each day. I save all logs so I can\n> check if vacuum is done properly.(it is)\n\nThen, like Jim said, you're doing it wrong. Regular vacuum full is\nlike rebuiling a piece of equipment every night when all it needs is\nthe filter changed.\n\n> Run lazy\n> > vacuum frequently enough (better yet, autovacuum, but cut all of 8.1's\n> > autovac parameters in half), and make sure your FSM is big enough\n>\n> I checked that there is no warnings about FSM in logs. (also in logs\n> from vacuum). Is it reliable?\n>\n> What do u mean for \"cut all of 8.1's autovac parameters in half\" Maybe\n> default autovac parameters?\n\nYep. ( I assume)\n\n> > (periodic vacuumdb -av | tail is an easy way to check that).\n>\n> I ll keep an eye on it.\n>\n> >\n> > Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's\n> > easy for them to seriously bloat.\n>\n> Reindex is done everyday after VACUUM FULL VERBOSE ANALYZE. I save also\n> the output averyday and save it into a log, and I can check that it is\n> done properly.\n\nThen you're vacuum full is wasted. A reindex accomplishes the same\nthing, plus shrinks indexes (vacuum full can bloat indexes).\n\nJust run regular vacuums, preferably by autovacuum, and keep an eye on\nthe vacuum analyze you run each night to see if your fsm is big\nenough.\n\nOccasionally vacuum full is absolutely the right answer. Most the\ntime it's not.\n\nI'm getting more and more motivated to rewrite the vacuum docs. I\nthink a rewrite from the ground up might be best... I keep seeing\npeople doing vacuum full on this list and I'm thinking it's as much\nbecause of the way the docs represent vacuum full as anything. Is\nthat true for you?\n", "msg_date": "Wed, 12 Sep 2007 13:28:14 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "Scott Marlowe wrote:\n\n>I'm getting more and more motivated to rewrite the vacuum docs. I\n>think a rewrite from the ground up might be best... I keep seeing\n>people doing vacuum full on this list and I'm thinking it's as much\n>because of the way the docs represent vacuum full as anything. Is\n>that true for you?\n>\n> \n>\n\nIt's true for me.\n\nI turned off autovacuum as I was getting occassional hangs, which I \nthought were the result of vacuums (and have signifigantly decreased \nsince I did that), and went nightly vacuum fulls, and vacuum \nfull/reindex/cluster on the weekends (which I now realize is redundant).\n\nBrian\n\n", "msg_date": "Wed, 12 Sep 2007 14:46:01 -0400", "msg_from": "Brian Hurt <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/12/07, Scott Marlowe <[email protected]> wrote:\n>\n> On 9/12/07, [email protected] <[email protected]> wrote:\n\n\n\n> > Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's\n> > > easy for them to seriously bloat.\n> >\n> > Reindex is done everyday after VACUUM FULL VERBOSE ANALYZE. I save also\n> > the output averyday and save it into a log, and I can check that it is\n> > done properly.\n>\n> Then you're vacuum full is wasted. 
A reindex accomplishes the same\n> thing, plus shrinks indexes (vacuum full can bloat indexes).\n\n\nAren't you mixing up REINDEX and CLUSTER?\n\nRegards\n\nMP\n\nOn 9/12/07, Scott Marlowe <[email protected]> wrote:\nOn 9/12/07, [email protected] <[email protected]> wrote: \n> > Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's> > easy for them to seriously bloat.>> Reindex is  done everyday after VACUUM FULL VERBOSE ANALYZE. I save also\n> the output averyday and save it into a log, and I can check that it is> done properly.Then you're vacuum full is wasted.  A reindex accomplishes the samething, plus shrinks indexes (vacuum full can bloat indexes).\nAren't you mixing up REINDEX and CLUSTER? RegardsMP", "msg_date": "Wed, 12 Sep 2007 21:56:13 +0300", "msg_from": "\"Mikko Partio\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/12/07, Mikko Partio <[email protected]> wrote:\n>\n>\n> On 9/12/07, Scott Marlowe <[email protected]> wrote:\n> > On 9/12/07, [email protected] <[email protected]> wrote:\n>\n> > > > Try a REINDEX. VACUUM FULL is especially hard on the indexes, and it's\n> > > > easy for them to seriously bloat.\n> > >\n> > > Reindex is done everyday after VACUUM FULL VERBOSE ANALYZE. I save also\n> > > the output averyday and save it into a log, and I can check that it is\n> > > done properly.\n> >\n> > Then you're vacuum full is wasted. A reindex accomplishes the same\n> > thing, plus shrinks indexes (vacuum full can bloat indexes).\n>\n> Aren't you mixing up REINDEX and CLUSTER?\n\nI don't think so. reindex (which runs on tables and indexes, so the\nname is a bit confusing, I admit) basically was originally a \"repair\"\noperation that rewrote the whole relation and wasn't completely\ntransaction safe (way back, 7.2 days or so I think). Due to the\nissues with vacuum full bloating indexes, and being slowly replaced by\nregular vacuum, reindex received some attention to make it transaction\n/ crash safe and has kind of take the place of vacuum full in terms of\n\"how to fix bloated objects\".\n\ncluster, otoh, rewrites the table into index order.\n\nEither one does what a vacuum full did / does, but generally does it better.\n", "msg_date": "Wed, 12 Sep 2007 14:07:01 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:\n> On 9/12/07, Mikko Partio <[email protected]> wrote:\n>> �\n>> Aren't you mixing up REINDEX and CLUSTER?\n>\n> �\n> Either one does what a vacuum full did / does, but generally does \n> it better.\n\nOn topic of REINDEX / VACUUM FULL versus a CLUSTER / VACUUM ANALYZE \nI'd like to ask if CLUSTER is safe to run on a table that is in \nactive use.\n\nAfter updating my maintenance scripts from a VACUUM FULL (add me to \nthe list) to CLUSTER (which improves performance a lot) I noticed I \nwas getting \"could not open relation �\" errors in the log while the \nscripts ran so I reverted the change. This was on 8.1.9.\n\nAm I hitting a corner case or is it generally not a good idea to \nCLUSTER tables which are being queried? I haven't had problems with \nthe REINDEX / VACUUM FULL combination while CLUSTER / VACUUM ANALYZE \nresulted in errors on the first run.\n\nCan the \"could not open relation�\" error bring down the whole \ndatabase server? 
I'm really interested in using CLUSTER regularly as \nit speeds up my system by a factor of two because of more efficient I/O.\n\nSincerely,\n\nFrank\n", "msg_date": "Wed, 12 Sep 2007 21:19:50 +0200", "msg_from": "Frank Schoep <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/12/07, Frank Schoep <[email protected]> wrote:\n> On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:\n> > On 9/12/07, Mikko Partio <[email protected]> wrote:\n> >> …\n> >> Aren't you mixing up REINDEX and CLUSTER?\n> >\n> > …\n> > Either one does what a vacuum full did / does, but generally does\n> > it better.\n>\n> On topic of REINDEX / VACUUM FULL versus a CLUSTER / VACUUM ANALYZE\n> I'd like to ask if CLUSTER is safe to run on a table that is in\n> active use.\n>\n> After updating my maintenance scripts from a VACUUM FULL (add me to\n> the list) to CLUSTER (which improves performance a lot) I noticed I\n> was getting \"could not open relation …\" errors in the log while the\n> scripts ran so I reverted the change. This was on 8.1.9.\n>\n> Am I hitting a corner case or is it generally not a good idea to\n> CLUSTER tables which are being queried? I haven't had problems with\n> the REINDEX / VACUUM FULL combination while CLUSTER / VACUUM ANALYZE\n> resulted in errors on the first run.\n>\n> Can the \"could not open relation…\" error bring down the whole\n> database server? I'm really interested in using CLUSTER regularly as\n> it speeds up my system by a factor of two because of more efficient I/O.\n\nNo, it won't bring it down. Basically the query lost the relation is\nwas operating against because it disappeared when the cluster command\nruns.\n", "msg_date": "Wed, 12 Sep 2007 14:27:14 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "Scott Marlowe escribi�:\n\n> > Aren't you mixing up REINDEX and CLUSTER?\n> \n> I don't think so. reindex (which runs on tables and indexes, so the\n> name is a bit confusing, I admit) basically was originally a \"repair\"\n> operation that rewrote the whole relation and wasn't completely\n> transaction safe (way back, 7.2 days or so I think). Due to the\n> issues with vacuum full bloating indexes, and being slowly replaced by\n> regular vacuum, reindex received some attention to make it transaction\n> / crash safe and has kind of take the place of vacuum full in terms of\n> \"how to fix bloated objects\".\n\nHmm, REINDEX does not rewrite tables. If there are dead tuples, they\nwill still be there after REINDEX.\n\n\n> cluster, otoh, rewrites the table into index order.\n\n... 
excluding dead tuples, and then rewrites all the indexes.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Wed, 12 Sep 2007 15:40:36 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "\nOn Sep 12, 2007, at 2:19 PM, Frank Schoep wrote:\n\n> On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:\n>> On 9/12/07, Mikko Partio <[email protected]> wrote:\n>>> �\n>>> Aren't you mixing up REINDEX and CLUSTER?\n>>\n>> �\n>> Either one does what a vacuum full did / does, but generally does \n>> it better.\n>\n> On topic of REINDEX / VACUUM FULL versus a CLUSTER / VACUUM ANALYZE \n> I'd like to ask if CLUSTER is safe to run on a table that is in \n> active use.\n>\n> After updating my maintenance scripts from a VACUUM FULL (add me to \n> the list) to CLUSTER (which improves performance a lot) I noticed I \n> was getting \"could not open relation �\" errors in the log while the \n> scripts ran so I reverted the change. This was on 8.1.9.\n\nYou'd probably see the same behavior on 8.2.x. CLUSTER is not \ntransactionally safe so you don't want to run CLUSTER on tables that \nare actively being used. I believe that's been fixed for 8.3.\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Wed, 12 Sep 2007 15:01:12 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Wed, 12 Sep 2007, Scott Marlowe wrote:\n\n> I'm getting more and more motivated to rewrite the vacuum docs. I think \n> a rewrite from the ground up might be best... I keep seeing people \n> doing vacuum full on this list and I'm thinking it's as much because of \n> the way the docs represent vacuum full as anything.\n\nI agree you shouldn't start thinking in terms of how to fix the existing \ndocumentation. I'd suggest instead writing a tutorial leading someone \nthrough what they need to know about their tables first and then going \ninto how vacuum works based on that data.\n\nAs an example, people throw around terms like \"index bloat\" and \"dead \ntuples\" when talking about vacuuming. The tutorial I'd like to see \nsomebody write would start by explaining those terms and showing how to \nmeasure them--preferably with a good and bad example to contrast. The way \nthese terms are thrown around right now, I don't expect newcomers to \nunderstand either the documentation or the advice people are giving them; \nI think it's shooting over their heads and what's needed are some \nwalkthroughs. Another example I'd like to see thrown in there is what it \nlooks like when you don't have enough FSM slots.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 13 Sep 2007 01:58:10 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Thu, 2007-09-13 at 01:58 -0400, Greg Smith wrote:\n> On Wed, 12 Sep 2007, Scott Marlowe wrote:\n> \n> > I'm getting more and more motivated to rewrite the vacuum docs. I think \n> > a rewrite from the ground up might be best... 
I keep seeing people \n> > doing vacuum full on this list and I'm thinking it's as much because of \n> > the way the docs represent vacuum full as anything.\n> \n> I agree you shouldn't start thinking in terms of how to fix the existing \n> documentation. I'd suggest instead writing a tutorial leading someone \n> through what they need to know about their tables first and then going \n> into how vacuum works based on that data.\n\nI'm new to PG and it's true that I am confused.\nAs it stands this is a newbie's understanding of the various terms.\n\ncluster -> rewrites a table according to index order so that IO is\nordered/sequential \nreindex -> basically, rewrites the indexes adding new records/fixes up\nold deleted records\nvacuum -> does cleaning \nvacuum analyse -> clean and update statistics (i run this mostly)\nautovacuum - does vacuum analyse automatically per default setup or some\nor cost based parameter\n\nvacuum full -> I also do this frequently (test DB only) as a means to\nretrieve back used spaces due to MVCC. (trying lots of different methods\nof query/add new index/make concatenated join/unique keys and then\ndeleting them if it's not useful) \n\n\n> \n> As an example, people throw around terms like \"index bloat\" and \"dead \n> tuples\" when talking about vacuuming. \n\nI honestly have only the vaguest idea what these 2 mean. (i only grasped\nrecently that tuples = records/rows)\n\n> The tutorial I'd like to see \n> somebody write would start by explaining those terms and showing how to \n> measure them--preferably with a good and bad example to contrast. The way \n> these terms are thrown around right now, I don't expect newcomers to \n> understand either the documentation or the advice people are giving them; \n> I think it's shooting over their heads and what's needed are some \n> walkthroughs. Another example I'd like to see thrown in there is what it \n> looks like when you don't have enough FSM slots.\n\n\nactually, an additional item I would like is to understand explain\nanalyse. The current docs written by tom only shows explain and not\nexplain analyse and I'm getting confuse as to the rows=xxx vs actual\nrows=yyy where on some of my queries can be very far apart 1 vs 500x\nratio on some problematic query[1]. And googling doesn't give much doc\non the explain. (the only other useful doc I've seen is a presentation\ngiven from oscon 2003)\n\n[1](See my other post)\n\n", "msg_date": "Thu, 13 Sep 2007 15:20:13 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/13/07, Greg Smith <[email protected]> wrote:\n> On Wed, 12 Sep 2007, Scott Marlowe wrote:\n>\n> > I'm getting more and more motivated to rewrite the vacuum docs. I think\n> > a rewrite from the ground up might be best... I keep seeing people\n> > doing vacuum full on this list and I'm thinking it's as much because of\n> > the way the docs represent vacuum full as anything.\n>\n> I agree you shouldn't start thinking in terms of how to fix the existing\n> documentation. I'd suggest instead writing a tutorial leading someone\n> through what they need to know about their tables first and then going\n> into how vacuum works based on that data.\n\nI think both things are needed actually. The current docs were\nstarted back when pg 7.2 roamed the land, and they've been updated a\nbit at a time. 
The technical definitions of vacuum, vacuum full,\nanalyze etc all show a bit too much history from back in the day, and\nare confusing. so, I think that 1: vacuum and analyze should have\ntheir own sections. analyze used to be a subcommand of vacuum but it\nno longer is, but the docs still pretty much tie them together. 2:\nThe definition for vacuum full needs to include a caveat that vacuum\nfull should be considered more of a recovery operation than a way to\nsimply get back some space on your hard drives.\n\nWhich leads me to thinking that we then need a simple tutorial on\nvacuuming to include the free space map, vacuum, vacuum analyze,\nvacuum full, and the autovacuum daemon. We can throw analyze in there\nsomewhere too, I just don't want it to seem like it's still married to\nvacuum.\n\n> As an example, people throw around terms like \"index bloat\" and \"dead\n> tuples\" when talking about vacuuming. The tutorial I'd like to see\n> somebody write would start by explaining those terms and showing how to\n> measure them--preferably with a good and bad example to contrast.\n\nI agree. I might rearrange it a bit but that's the way I'm looking at it too.\n\n> The way\n> these terms are thrown around right now, I don't expect newcomers to\n> understand either the documentation or the advice people are giving them;\n> I think it's shooting over their heads and what's needed are some\n> walkthroughs. Another example I'd like to see thrown in there is what it\n> looks like when you don't have enough FSM slots.\n\nOK. Got something to start with. I'm thinking I might work on a\nvacuum tutorial first, then the tech docs...\n", "msg_date": "Thu, 13 Sep 2007 09:29:34 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "How many backends do you have at any given time? Have you tried using\nsomething like pgBouncer to lower backend usage? How about your IO\nsituation? Have you run something like sysstat to see what iowait is\nat?\n\nOn 9/11/07, Ruben Rubio <[email protected]> wrote:\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n>\n>\n> Hi,\n>\n> I having the same problem I told here a few weeks before. Database is\n> using too much resources again.\n>\n> I do a vacumm full each day, but seems it is not working. I am preparing\n> an update to postgres 8.2.4 (actually I am using at 8.1.3, and tests for\n> update will need several days)\n>\n> Last time I had this problem i solved it stopping website, restarting\n> database, vacuumm it, run again website. But I guess this is going to\n> happen again.\n>\n> I would like to detect and solve the problem. Any ideas to detect it?\n>\n> Thanks in advance,\n>\n>\n>\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.4.6 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n>\n> iD8DBQFG5jbLIo1XmbAXRboRArcpAJ0YvoCT6KWv2fafVAtapu6nwFmKoACcD0uA\n> zFTx9Wq+2NSxijIf/R8E5f8=\n> =u0k5\n> -----END PGP SIGNATURE-----\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n>\n>\n", "msg_date": "Thu, 13 Sep 2007 11:31:05 -0400", "msg_from": "\"Gavin M. 
Roy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Sep 13, 2007, at 12:58 AM, Greg Smith wrote:\n\n> On Wed, 12 Sep 2007, Scott Marlowe wrote:\n>\n>> I'm getting more and more motivated to rewrite the vacuum docs. I \n>> think a rewrite from the ground up might be best... I keep seeing \n>> people doing vacuum full on this list and I'm thinking it's as \n>> much because of the way the docs represent vacuum full as anything.\n>\n> I agree you shouldn't start thinking in terms of how to fix the \n> existing documentation. I'd suggest instead writing a tutorial \n> leading someone through what they need to know about their tables \n> first and then going into how vacuum works based on that data.\n>\n> As an example, people throw around terms like \"index bloat\" and \n> \"dead tuples\" when talking about vacuuming. The tutorial I'd like \n> to see somebody write would start by explaining those terms and \n> showing how to measure them--preferably with a good and bad example \n> to contrast. The way these terms are thrown around right now, I \n> don't expect newcomers to understand either the documentation or \n> the advice people are giving them; I think it's shooting over their \n> heads and what's needed are some walkthroughs. Another example I'd \n> like to see thrown in there is what it looks like when you don't \n> have enough FSM slots.\n\nIsn't that the point of the documentation? I mean, if the existing, \nofficial manual has been demonstrated (through countless mailing list \nhelp requests) to not sufficiently explain a given topic, shouldn't \nit be revised? One thing that might help is a hyperlinked glossary \nso that people reading through the documentation can go straight to \nthe postgres definition of dead tuple, index bloat, etc.\n\n\nErik Jones\n\nSoftware Developer | Emma�\[email protected]\n800.595.4401 or 615.292.5888\n615.292.0777 (fax)\n\nEmma helps organizations everywhere communicate & market in style.\nVisit us online at http://www.myemma.com\n\n\n", "msg_date": "Thu, 13 Sep 2007 11:03:20 -0500", "msg_from": "Erik Jones <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On 9/13/07, Erik Jones <[email protected]> wrote:\n> On Sep 13, 2007, at 12:58 AM, Greg Smith wrote:\n>\n> > On Wed, 12 Sep 2007, Scott Marlowe wrote:\n> >\n> >> I'm getting more and more motivated to rewrite the vacuum docs. I\n> >> think a rewrite from the ground up might be best... I keep seeing\n> >> people doing vacuum full on this list and I'm thinking it's as\n> >> much because of the way the docs represent vacuum full as anything.\n> >\n> > I agree you shouldn't start thinking in terms of how to fix the\n> > existing documentation. I'd suggest instead writing a tutorial\n> > leading someone through what they need to know about their tables\n> > first and then going into how vacuum works based on that data.\n> >\n> > As an example, people throw around terms like \"index bloat\" and\n> > \"dead tuples\" when talking about vacuuming. The tutorial I'd like\n> > to see somebody write would start by explaining those terms and\n> > showing how to measure them--preferably with a good and bad example\n> > to contrast. 
The way these terms are thrown around right now, I\n> > don't expect newcomers to understand either the documentation or\n> > the advice people are giving them; I think it's shooting over their\n> > heads and what's needed are some walkthroughs. Another example I'd\n> > like to see thrown in there is what it looks like when you don't\n> > have enough FSM slots.\n>\n> Isn't that the point of the documentation? I mean, if the existing,\n> official manual has been demonstrated (through countless mailing list\n> help requests) to not sufficiently explain a given topic, shouldn't\n> it be revised? One thing that might help is a hyperlinked glossary\n> so that people reading through the documentation can go straight to\n> the postgres definition of dead tuple, index bloat, etc.\n\nYes and no. The official docs are more of a technical specification.\nShort, simple and to the point so that if you know mostly what you're\ndoing you don't have to wade through a long tutorial to find the\nanswer. I find MySQL's documentation frustrating as hell because I\ncan never find just the one thing I wanna look for. Because it's all\nwritten as a tutorial. I.e. I have to pay the \"stupid tax\" when I\nread their docs.\n\nWhat I want to do is two fold. 1: fix the technical docs so they have\nbetter explanations of each of the topics, without turning them into\nhuge tutorials. 2: Write a vacuuming tutorial that will be useful\nshould someone be new to postgresql and need to set up their system.\nI think the tutorial should be broken into at least two sections, a\nquick start guide and an ongoing maintenance and tuning section.\n", "msg_date": "Thu, 13 Sep 2007 11:39:55 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Thu, 13 Sep 2007, Scott Marlowe wrote:\n\n> I think both things are needed actually. The current docs were\n> started back when pg 7.2 roamed the land, and they've been updated a\n> bit at a time...\n\nNo argument here that ultimately the documentation needs to be updated as \nwell. I was just suggesting what I've been thinking about as the path of \nleast resistance to move in that direction. Updating the documentation is \nharder to do because of the build process involved. It's easier to write \nsomething new that addresses the deficiencies, get that right, and then \nmerge it into the documentation when it's stable. After the main new \ncontent is done, then it's easier to sweep back through the existing \nmaterial and clean things up.\n\n> Which leads me to thinking that we then need a simple tutorial on\n> vacuuming to include the free space map, vacuum, vacuum analyze,\n> vacuum full, and the autovacuum daemon.\n\nRight, that's the sort of thing that's missing right now, and I think that \nwould be more useful to newbies than correcting the documentation that's \nalready there.\n\nAlso: if you don't have a public working area to assemble this document \nat, I've set a precedent of sorts that it's OK to put working material \nlike this onto the PG developer's wiki at http://developer.postgresql.org/ \nas long as your stated intention is ultimately to move it off of there \nonce it's complete. 
In addition to providing a nice set of tools for \nworking the text (presuming you're comfortable with Wiki syntax) that will \nget you a pool of reviewers/contributors who can make changes directly \nrather than you needing to do all the work yourself.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 14 Sep 2007 03:19:51 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "> > Isn't that the point of the documentation? I mean, if the existing,\n> > official manual has been demonstrated (through countless mailing list\n> > help requests) to not sufficiently explain a given topic, shouldn't\n> > it be revised? \n\nOr it proves that no one bothers to read the docs.\n\n> > One thing that might help is a hyperlinked glossary\n> > so that people reading through the documentation can go straight to\n> > the postgres definition of dead tuple, index bloat, etc.\n> Yes and no. The official docs are more of a technical specification.\n> Short, simple and to the point so that if you know mostly what you're\n> doing you don't have to wade through a long tutorial to find the\n> answer. I find MySQL's documentation frustrating as hell because I\n> can never find just the one thing I wanna look for.\n\nYes! MySQL documentation is maddening.\n\nThis is why, I suspect, for products like Informix and DB2 IBM publishes\ntwo manuals (or roughly equivalent to two manuals): a \"guide\" and a\n\"reference\".\n\n> written as a tutorial. I.e. I have to pay the \"stupid tax\" when I\n> read their docs.\n\nYep.\n\n> What I want to do is two fold. 1: fix the technical docs so they have\n> better explanations of each of the topics, without turning them into\n> huge tutorials. 2: Write a vacuuming tutorial that will be useful\n> should someone be new to postgresql and need to set up their system.\n> I think the tutorial should be broken into at least two sections, a\n> quick start guide and an ongoing maintenance and tuning section.\n\n\n", "msg_date": "Fri, 14 Sep 2007 10:00:59 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\n\nGavin M. Roy escribi�:\n> How many backends do you have at any given time? Have you tried using\n> something like pgBouncer to lower backend usage? How about your IO\n> situation? Have you run something like sysstat to see what iowait is\n> at?\n\nbackends arround 50 -100 I don't use pgBouncer yet.\nSysstat reports veeery low io.\n\nRight now Im checking out fsm parameter, as Scott recomended. Seems\nthere is the problem.\n\n\n\n> \n> On 9/11/07, Ruben Rubio <[email protected]> wrote:\n> \n> Hi,\n> \n> I having the same problem I told here a few weeks before. Database is\n> using too much resources again.\n> \n> I do a vacumm full each day, but seems it is not working. I am preparing\n> an update to postgres 8.2.4 (actually I am using at 8.1.3, and tests for\n> update will need several days)\n> \n> Last time I had this problem i solved it stopping website, restarting\n> database, vacuumm it, run again website. But I guess this is going to\n> happen again.\n> \n> I would like to detect and solve the problem. 
Any ideas to detect it?\n> \n> Thanks in advance,\n> \n> \n> \n>>\n>>\n- ---------------------------(end of broadcast)---------------------------\nTIP 1: if posting/reading through Usenet, please send an appropriate\n subscribe-nomail command to [email protected] so that your\n message can get through to the mailing list cleanly\n>>\n>>\n>>\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG7mx7Io1XmbAXRboRAn0VAJ4sGc1KCNlsbrybVbY/WfB+3XWBbwCfb7Z/\nWNGyJCRo6zd26uR6FB6SA8o=\n=SYzs\n-----END PGP SIGNATURE-----", "msg_date": "Mon, 17 Sep 2007 14:00:59 +0200", "msg_from": "Ruben Rubio <[email protected]>", "msg_from_op": true, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Wed, Sep 12, 2007 at 03:01:12PM -0500, Erik Jones wrote:\n> \n> On Sep 12, 2007, at 2:19 PM, Frank Schoep wrote:\n> \n> >On Sep 12, 2007, at 9:07 PM, Scott Marlowe wrote:\n> >>On 9/12/07, Mikko Partio <[email protected]> wrote:\n> >>>?\n> >>>Aren't you mixing up REINDEX and CLUSTER?\n> >>\n> >>?\n> >>Either one does what a vacuum full did / does, but generally does \n> >>it better.\n> >\n> >On topic of REINDEX / VACUUM FULL versus a CLUSTER / VACUUM ANALYZE \n> >I'd like to ask if CLUSTER is safe to run on a table that is in \n> >active use.\n> >\n> >After updating my maintenance scripts from a VACUUM FULL (add me to \n> >the list) to CLUSTER (which improves performance a lot) I noticed I \n> >was getting \"could not open relation ?\" errors in the log while the \n> >scripts ran so I reverted the change. This was on 8.1.9.\n> \n> You'd probably see the same behavior on 8.2.x. CLUSTER is not \n> transactionally safe so you don't want to run CLUSTER on tables that \n> are actively being used. I believe that's been fixed for 8.3.\n\nActually, that's a bit over-conservative... what happens prior to 8.3 is\nthat CLUSTER rewrites the table using it's XID for everything. That can\nbreak semantics for any transactions that are running in serializable\nmode; if you're just using the default isolation level of read\ncommitted, you're fine with CLUSTER.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Mon, 17 Sep 2007 07:24:14 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Thu, Sep 13, 2007 at 01:58:10AM -0400, Greg Smith wrote:\n> On Wed, 12 Sep 2007, Scott Marlowe wrote:\n> \n> >I'm getting more and more motivated to rewrite the vacuum docs. I think \n> >a rewrite from the ground up might be best... I keep seeing people \n> >doing vacuum full on this list and I'm thinking it's as much because of \n> >the way the docs represent vacuum full as anything.\n> \n> I agree you shouldn't start thinking in terms of how to fix the existing \n> documentation. I'd suggest instead writing a tutorial leading someone \n> through what they need to know about their tables first and then going \n> into how vacuum works based on that data.\n\nTake a look at the stuff at http://decibel.org/~decibel/pervasive/, it'd\nhopefully provide a useful starting point.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Mon, 17 Sep 2007 07:27:39 -0500", "msg_from": "Decibel! 
<[email protected]>", "msg_from_op": false, "msg_subject": "Re: [Again] Postgres performance problem" }, { "msg_contents": "On Mon, 2007-09-17 at 07:27 -0500, Decibel! wrote:\n\n> Take a look at the stuff at http://decibel.org/~decibel/pervasive/, it'd\n> hopefully provide a useful starting point.\n\n\nA bit offtrack, but I was reading the articles and noticed this in the\nbottom. Is this a typo or ...\n\n\nMaking PostreSQL pervasive© 2005 Pervasive Software Inc\n ^^^^^^^^^\n\n\n", "msg_date": "Mon, 24 Sep 2007 14:58:36 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "[OT] Re: [Again] Postgres performance problem" } ]
[ { "msg_contents": " Hello,\n\n Now that both 4x4 out it's time for us to decide which one should be better for our PostgreSQL and Oracle. And especially for Oracle we really need such server to squeeze everything from Oracle licenses. Both of the databases handle OLTP type of the load.\n Since we plan to buy 4U HP DL580 or 585 and only very few of them so power ratings are not very critical in this our case.\n\n First benchmarks (http://www.anandtech.com/IT/showdoc.aspx?i=3091) show that Intel still has more raw CPU power but Barcelona scales much better and also has better memory bandwidth which I believe is quite critical with 16 cores and DB usage pattern.\n On the other hand Intel's X7350 (2.93GHz) has almost 50% advantage in CPU frequency against 2GHz Barcelona.\n\n Regards,\n\n Mindaugas\n\n P.S. tweakers.net does not have both of those yet? Test results far away? :)\n", "msg_date": "Tue, 11 Sep 2007 10:57:43 +0300 (EEST)", "msg_from": "\"Mindaugas\" <[email protected]>", "msg_from_op": true, "msg_subject": "Barcelona vs Tigerton" }, { "msg_contents": "Mindaugas,\n\nThe Anandtech results appear to me to support a 2.5 GHz Barcelona\nperforming better than the available Intel CPUs overall.\n\nIf you can wait for the 2.5 GHz AMD parts to come out, they'd be a\nbetter bet IMO especially considering 4 sockets. In fact, have you seen\nquad QC Intel benchmarks?\n\nBTW - Can someone please get Anand a decent PG benchmark kit? :-)\n\nAt least we can count on excellent PG bench results from the folks at\nTweakers.\n\n- Luke\n\n> -----Original Message-----\n> From: [email protected] \n> [mailto:[email protected]] On Behalf Of Mindaugas\n> Sent: Tuesday, September 11, 2007 12:58 AM\n> To: [email protected]\n> Subject: [PERFORM] Barcelona vs Tigerton\n> \n> Hello,\n> \n> Now that both 4x4 out it's time for us to decide which one \n> should be better for our PostgreSQL and Oracle. And \n> especially for Oracle we really need such server to squeeze \n> everything from Oracle licenses. Both of the databases handle \n> OLTP type of the load.\n> Since we plan to buy 4U HP DL580 or 585 and only very few \n> of them so power ratings are not very critical in this our case.\n> \n> First benchmarks \n> (http://www.anandtech.com/IT/showdoc.aspx?i=3091) show that \n> Intel still has more raw CPU power but Barcelona scales much \n> better and also has better memory bandwidth which I believe \n> is quite critical with 16 cores and DB usage pattern.\n> On the other hand Intel's X7350 (2.93GHz) has almost 50% \n> advantage in CPU frequency against 2GHz Barcelona.\n> \n> Regards,\n> \n> Mindaugas\n> \n> P.S. tweakers.net does not have both of those yet? Test \n> results far away? :)\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n> \n\n", "msg_date": "Tue, 11 Sep 2007 04:31:14 -0400", "msg_from": "\"Luke Lonergan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Barcelona vs Tigerton" }, { "msg_contents": "On Tue, 11 Sep 2007, Mindaugas wrote:\n\n> Now that both 4x4 out it's time for us to decide which one should be \n> better for our PostgreSQL and Oracle.\n\nYou're going to have to wait a bit for that. No one has had both to \ncompare for long enough yet to reach a strong conclusion, and you're \nprobably going to need a database-specific benchmark before there's useful \ndata for your case. 
Yesterday's meta-coverage at Ars was a nice summary \nof the current state of things:\n\nhttp://arstechnica.com/news.ars/post/20070910-barcelonas-out-and-the-reviews-are-out.html\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Tue, 11 Sep 2007 11:11:56 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Barcelona vs Tigerton" }, { "msg_contents": "Mindaugas wrote:\n> Hello,\n> \n> Now that both 4x4 out it's time for us to decide which one should be better for our PostgreSQL and Oracle. And especially for Oracle we really need such server to squeeze everything from Oracle licenses. Both of the databases handle OLTP type of the load.\n> Since we plan to buy 4U HP DL580 or 585 and only very few of them so power ratings are not very critical in this our case.\n> \n> First benchmarks (http://www.anandtech.com/IT/showdoc.aspx?i=3091) show that Intel still has more raw CPU power but Barcelona scales much better and also has better memory bandwidth which I believe is quite critical with 16 cores and DB usage pattern.\n> On the other hand Intel's X7350 (2.93GHz) has almost 50% advantage in CPU frequency against 2GHz Barcelona.\n\nBarcelona was just announced yesterday. I wouldn't want to be betting \nmy business on it just yet. Plus, AMD usually is able to up the clock on \ntheir chips pretty well after they've got a few production runs under \ntheir belts. If you've absolutely got to have something today, I'd say \nIntel would be a safer bet. If you can afford to wait 3-4 months, then \nyou'll benefit from some industry experience as well as production \nmaturity with Barcelona, and can judge then which better fits your needs.\n\n-- \nGuy Rouillier\n", "msg_date": "Tue, 11 Sep 2007 15:52:40 -0400", "msg_from": "Guy Rouillier <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Barcelona vs Tigerton" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nGuy Rouillier wrote:\n> Mindaugas wrote:\n>> Hello,\n\n> Barcelona was just announced yesterday. I wouldn't want to be betting\n> my business on it just yet. Plus, AMD usually is able to up the clock on\n> their chips pretty well after they've got a few production runs under\n> their belts. If you've absolutely got to have something today, I'd say\n> Intel would be a safer bet. If you can afford to wait 3-4 months, then\n> you'll benefit from some industry experience as well as production\n> maturity with Barcelona, and can judge then which better fits your needs.\n> \n\nAMD has already stated they will be ramping up the MHZ toward the end of\nthe year. Patience is a virtue.\n\nJoshua D. Drake\n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFG5vO3ATb/zqfZUUQRAkv5AJ9vEF/OnM23X30YdWVIiY2xGtDrHACfZ678\nfYfZxD7XdIH+VYYzhGSz9w4=\n=eBTJ\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 11 Sep 2007 12:59:51 -0700", "msg_from": "\"Joshua D. 
Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Barcelona vs Tigerton" }, { "msg_contents": "On Tue, Sep 11, 2007 at 10:57:43AM +0300, Mindaugas wrote:\n> Hello,\n> \n> Now that both 4x4 out it's time for us to decide which one should be better for our PostgreSQL and Oracle. And especially for Oracle we really need such server to squeeze everything from Oracle licenses. Both of the databases handle OLTP type of the load.\n\nYou might take a look at replacing Oracle with EnterpriseDB... but I'm a\nbit biased. ;)\n\n> Since we plan to buy 4U HP DL580 or 585 and only very few of them so power ratings are not very critical in this our case.\n> \n> First benchmarks (http://www.anandtech.com/IT/showdoc.aspx?i=3091) show that Intel still has more raw CPU power but Barcelona scales much better and also has better memory bandwidth which I believe is quite critical with 16 cores and DB usage pattern.\n> On the other hand Intel's X7350 (2.93GHz) has almost 50% advantage in CPU frequency against 2GHz Barcelona.\n\nDatabases are all about bandwidth and latency. Compute horsepower almost\nnever matters.\n\nThe only reason I'd look at the clock rate is if it substantially\naffects memory IO capability; but from what I've seen, memory seems to\nbe fairly independent of CPU frequency now-a-days, so I don't think\nthere's a huge difference there.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 18:00:22 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Barcelona vs Tigerton" } ]
[ { "msg_contents": "Hi List;\n\nI've recently started cleaning up some postgres db's which previous to my \nrecent arrival had no DBA to care for them.\n\nI quickly figured out that there were several tables which were grossly full \nof dead space. One table in particular had 75G worth of dead pages (or the \nequivelant in overall dead rows). So I rebuilt these several tables via this \nprocess:\n\n1) BEGIN;\n2) LOCK TABLE table_old (the current table)\n2) CREATE TABLE table_new (...) (copy of table_old above without the indexes)\n3) insert into table_new select * from table_old;\n4) DROP TABLE table_old;\n5) ALTER TABLE table_new rename to table_old;\n6) CREATE INDEX (create all original table indexes)\n7) COMMIT;\n\nThe biggest table mentioned above did in fact reduce the able's overall size \nby about 69G.\n\nAfter the table rebuild, as an interum measure since I'm still tuning and I \nneed to go through a full test/qa/prod lifecycle to get anything rolled onto \nthe production servers I added this table to pg_autovacuum with enabled = 'f' \nand setup a daily cron job to vacuum the table during off hours. This was due \nprimarily to the fact that the vacuum of this table was severely impacting \nday2day processing. I've since upped the maintenance_work_mem to 300,000 and \nin general the vacuums no longer impact day2day processing - with the \nexception of this big table. \n\nI let the cron vacuum run for 14 days. in that 14 days the time it takes to \nvacuum the table grew from 1.2hours directly after the rebuild to > 8hours \nlast nite.\n\nIt's difficult to try and vacuum this table during the day as it seems to \nbegin blocking all the other queries against the database after some time. I \nplan to rebuild the table again and see if I can get away with vacuuming more \noften - it during the day. Also I'm considering a weekly cron job each Sunday \n(minimal processing happens on the weekends) to rebuild the table.\n\nJust curious if anyone has any thoughts on an automated rebuild scenario? or \nbetter yet managing the vac of this table more efficiently? \n\nMaybe it's worth upping maintenance_work_mem sky-high for this table (via a \nsession specific SET of maintenance_work_mem) and running a vacuum every 3 \nhours or so. Also, does Postgres allocate maintenence_work_memory from the \noverall shared_buffers space available (I think not) ?\n\nIs there some method / guideline I could use to determine the memory needs on \na table by table basis for the vacuum process ? If so, I suspect I could use \nthis as a guide for setting a session specific maintenance_work_mem via cron \nto vacuum these problem tables on a specified schedule. \n\nThanks in advance...\n\n\n/Kevin\n\n \n", "msg_date": "Tue, 11 Sep 2007 10:24:58 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "More Vacuum questions..." }, { "msg_contents": "Kevin Kempter wrote:\n> It's difficult to try and vacuum this table during the day as it seems to \n> begin blocking all the other queries against the database after some time. \n\nVacuum can generate so much I/O that it overwhelms all other\ntransactions, but it shouldn't block other queries otherwise. You can\nuse the vacuum cost delay options to throttle vacuum so that it doesn't\nruns slower, but doesn't disrupt other operations so much.\n\n> I plan to rebuild the table again and see if I can get away with vacuuming more \n> often - it during the day. 
Also I'm considering a weekly cron job each Sunday \n> (minimal processing happens on the weekends) to rebuild the table.\n> \n> Just curious if anyone has any thoughts on an automated rebuild scenario? or \n> better yet managing the vac of this table more efficiently? \n\nCLUSTER is a handy way to do rebuild tables.\n\n> Maybe it's worth upping maintenance_work_mem sky-high for this table (via a \n> session specific SET of maintenance_work_mem) and running a vacuum every 3 \n> hours or so.\n\nYou only need enough maintenance_work_mem to hold pointers to all dead\ntuples in the table. Using more than that won't help.\n\n> Also, does Postgres allocate maintenence_work_memory from the \n> overall shared_buffers space available (I think not) ?\n\nNo.\n\n> Is there some method / guideline I could use to determine the memory needs on \n> a table by table basis for the vacuum process ? If so, I suspect I could use \n> this as a guide for setting a session specific maintenance_work_mem via cron \n> to vacuum these problem tables on a specified schedule. \n\nYou need 6 bytes per dead tuple in the table to avoid scanning the\nindexes more than once. If you vacuum regularly, you shouldn't need more\nthan a few hundred MB.\n\nOne way is to run VACUUM VERBOSE, which will tell how many passes it\nused. If it used more than one, increase maintenance_work_mem.\n\nI would suggest using autovacuum after all. If it seems to be disrupting\nother activity too much, increase autovacuum_cost_delay. Or decrease it\nif it can't keep up with the updates.\n\nBTW, you didn't mention which version of PostgreSQL you're using.\nThere's been some performance enhancements to VACUUM in 8.2, as well as\nautovacuum changes. You might consider upgrading if you're not on 8.2\nalready.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Tue, 11 Sep 2007 18:29:08 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Vacuum questions..." }, { "msg_contents": "On Tue, Sep 11, 2007 at 10:24:58AM -0600, Kevin Kempter wrote:\n> I let the cron vacuum run for 14 days. in that 14 days the time it takes to \n> vacuum the table grew from 1.2hours directly after the rebuild to > 8hours \n> last nite.\n\nSounds to me like daily isn't enough, and that your FSM is too small.\n-- \nDecibel!, aka Jim Nasby [email protected]\nEnterpriseDB http://enterprisedb.com 512.569.9461 (cell)", "msg_date": "Tue, 11 Sep 2007 18:04:21 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": false, "msg_subject": "Re: More Vacuum questions..." } ]
[ { "msg_contents": "Hi,\n\nappreciate if someone can have some pointers for this. \n\nPG.8.2.4 1.4G centrino(s/core) 1.5GB ram/5400rpm laptop HD\n\n3 mail tables which has already been selected \"out\" into separate tables\n(useing create table foo as select * from foo_main where x=y)\n\nThese test tables containing only a very small subset of the main data's\ntable (max 1k to 10k rows vs 1.5mill to 7mill rows in the main table)\n\ntable definitions and actual query are attached. (names has been altered\nto protect the innocent)\n\nI've played around with some tweaking of the postgres.conf setting per\nguidance from jdavis (in irc) w/o much(any) improvement. Also tried\nre-writing the queries to NOT use subselects (per depesz in irc also)\nalso yielded nothing spectacular.\n\nThe only thing I noticed was that when the subqueries combine more than\n3 tables, then PG will choke. If only at 3 joined tables per subquery,\nthe results come out fast, even for 6K rows.\n\nbut if the subqueries (these subqueries by itself, executes fast and\nreturns results in 1 to 10secs) were done independently and then placed\ninto a temp table, and then finally joined together using a query such\nas\n\nselect a.a,b.b,c.c from a inner join b on (x = x) left outer join c on(x\n= y)\n\nthen it would also be fast\n\nwork_mem = 8MB / 32MB /128MB (32 MB default in my setup)\neffective_Cache_size = 128MB/500MB (500 default)\nshared_buffers = 200MB\ngeqo_threshold = 5 (default 12)\ngeqo_effort = 2 (default 5)\nramdom_page_cose = 8.0 (default 4)\nmaintenance_work_mem = 64MB\njoin_collapse_limit = 1/8/15 (8 default)\nfrom_collapse_limit = 1/8/15 (8 default)\nenable_nestloop = f (on by default)\n\nbased on current performance, even with a small number of rows in the\nindividual tables (max 20k), I can't even get a result out in 2 hours.\n(> 3 tables joined per subquery) which is making me re-think of PG's\nuseful-ness.\n\n\n\nBTW, I also tried 8.2.4 CVS_STABLE Branch", "msg_date": "Wed, 12 Sep 2007 00:57:48 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "500rows = 1min/2.5k rows=20min/6K rows 2 hours and still running" }, { "msg_contents": "sorry.. I sent this as I was about to go to bed and the explain analyse\nof the query w/ 4 tables joined per subquery came out.\n\nSo.. attaching it..\n\nOn Wed, 2007-09-12 at 00:57 +0800, El-Lotso wrote:\n> Hi,\n> \n> appreciate if someone can have some pointers for this. \n> \n> PG.8.2.4 1.4G centrino(s/core) 1.5GB ram/5400rpm laptop HD\n> \n> 3 mail tables which has already been selected \"out\" into separate tables\n> (useing create table foo as select * from foo_main where x=y)\n> \n> These test tables containing only a very small subset of the main data's\n> table (max 1k to 10k rows vs 1.5mill to 7mill rows in the main table)\n> \n> table definitions and actual query are attached. (names has been altered\n> to protect the innocent)\n> \n> I've played around with some tweaking of the postgres.conf setting per\n> guidance from jdavis (in irc) w/o much(any) improvement. Also tried\n> re-writing the queries to NOT use subselects (per depesz in irc also)\n> also yielded nothing spectacular.\n> \n> The only thing I noticed was that when the subqueries combine more than\n> 3 tables, then PG will choke. 
If only at 3 joined tables per subquery,\n> the results come out fast, even for 6K rows.\n> \n> but if the subqueries (these subqueries by itself, executes fast and\n> returns results in 1 to 10secs) were done independently and then placed\n> into a temp table, and then finally joined together using a query such\n> as\n> \n> select a.a,b.b,c.c from a inner join b on (x = x) left outer join c on(x\n> = y)\n> \n> then it would also be fast\n> \n> work_mem = 8MB / 32MB /128MB (32 MB default in my setup)\n> effective_Cache_size = 128MB/500MB (500 default)\n> shared_buffers = 200MB\n> geqo_threshold = 5 (default 12)\n> geqo_effort = 2 (default 5)\n> ramdom_page_cose = 8.0 (default 4)\n> maintenance_work_mem = 64MB\n> join_collapse_limit = 1/8/15 (8 default)\n> from_collapse_limit = 1/8/15 (8 default)\n> enable_nestloop = f (on by default)\n> \n> based on current performance, even with a small number of rows in the\n> individual tables (max 20k), I can't even get a result out in 2 hours.\n> (> 3 tables joined per subquery) which is making me re-think of PG's\n> useful-ness.\n> \n> \n> \n> BTW, I also tried 8.2.4 CVS_STABLE Branch", "msg_date": "Wed, 12 Sep 2007 01:02:23 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours and still running" }, { "msg_contents": "El-Lotso <[email protected]> writes:\n> sorry.. I sent this as I was about to go to bed and the explain analyse\n> of the query w/ 4 tables joined per subquery came out.\n\nIt's those factor-of-1000 misestimates of the join sizes that are\nkilling you, eg this one:\n\n> -> Hash Join (cost=249.61..512.56 rows=1 width=87) (actual time=15.139..32.858 rows=969 loops=1)\n> Hash Cond: (((test_db.ts.id)::text = (test_db.d.id)::text) AND (test_db.ts.start_timestamp = test_db.trd.start_timestamp) AND (test_db.ts.ttype = test_db.trd.ttype))\n> -> Seq Scan on ts (cost=0.00..226.44 rows=3244 width=40) (actual time=0.135..6.916 rows=3244 loops=1)\n> -> Hash (cost=235.00..235.00 rows=835 width=47) (actual time=14.933..14.933 rows=1016 loops=1)\n\nThe single-row-result estimate persuades it to use a nestloop at the\nnext level up, and then when the output is actually 969 rows, that\nmeans 969 executions of the other side of the upper join.\n\nThe two input size estimates are reasonably close to reality, so\nthe problem seems to be in the estimate of selectivity of the\njoin condition. First off, do you have up-to-date statistics\nfor all the columns being joined here? It might be that\nincreasing the statistics targets for those columns would help.\n\nBut what I'm a bit worried about is the idea that the join\nconditions are correlated or even outright redundant; the\nplanner will not know that, and will make an unrealistic\nestimate of their combined selectivity. If that's the\ncase, you might need to redesign the table schema to\neliminate the redundancy before you'll get good plans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 11 Sep 2007 14:23:20 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours and still\n\trunning" }, { "msg_contents": "On Tue, 2007-09-11 at 14:23 -0400, Tom Lane wrote:\n> El-Lotso <[email protected]> writes:\n> > sorry.. 
I sent this as I was about to go to bed and the explain analyse\n> > of the query w/ 4 tables joined per subquery came out.\n> \n> It's those factor-of-1000 misestimates of the join sizes that are\n> killing you, eg this one:\n> \n> > -> Hash Join (cost=249.61..512.56 rows=1 width=87) (actual time=15.139..32.858 rows=969 loops=1)\n> > Hash Cond: (((test_db.ts.id)::text = (test_db.d.id)::text) AND (test_db.ts.start_timestamp = test_db.trd.start_timestamp) AND (test_db.ts.ttype = test_db.trd.ttype))\n> > -> Seq Scan on ts (cost=0.00..226.44 rows=3244 width=40) (actual time=0.135..6.916 rows=3244 loops=1)\n> > -> Hash (cost=235.00..235.00 rows=835 width=47) (actual time=14.933..14.933 rows=1016 loops=1)\n> \n> The single-row-result estimate persuades it to use a nestloop at the\n> next level up, and then when the output is actually 969 rows, that\n> means 969 executions of the other side of the upper join.\n\nYep.. that's consistent with the larger results output. more rows = more\nloops\n\n> \n> The two input size estimates are reasonably close to reality, so\n> the problem seems to be in the estimate of selectivity of the\n> join condition. First off, do you have up-to-date statistics\n> for all the columns being joined here? It might be that\n> increasing the statistics targets for those columns would help.\n\nI've already upped the stats level to 1000, reindex, vacuum, analysed\netc but nothing has basically changed. The issue here is mainly because\nfor each id, there is between 2 to 8 hid.\n\neg:\ntable d\nseq : 1234567 / code : CED89\n\ntable trh\nseq : 123456\nhid : 0/1/2/3/4/5/6/7\n\nand the prob is also compounded by the different ttypes available which\ncauses the use of the subqueries.\n\nend of the day.. this data output is desired\n\nID\tHID\n===========\n1234567 |0\n1234567 |1\n1234567 |2\n1234567 |3\n1234567 |4\n1234567 |5\n1234567 |6\n1234567 |7\n\nthe d table has the unique id whereas the other tables has all the\nsubsets. Like a family tree.. Starts at 2, (mom/pop) then to children +\nchildren's grandchildren (pair1) children's grandchildren(pair2) \n\nd to trh is a one to many relationship\n\n> But what I'm a bit worried about is the idea that the join\n> conditions are correlated or even outright redundant; the\n> planner will not know that, and will make an unrealistic\n> estimate of their combined selectivity. If that's the\n> case, you might need to redesign the table schema to\n> eliminate the redundancy before you'll get good plans.\n\nI'm not I understand (actually, i don't) the above comment. I've already\nmade then from subqueries to actual joins (collapse it) and still no\ndice.\n\nbtw, this same schema runs fine on SQL server. (which I'm pulling data\nfrom and pumping into PG)\n\nI'm downgrading to 8.1.9 to see if it helps too.\n\nappreciate any pointers at all.\n\n", "msg_date": "Wed, 12 Sep 2007 10:15:32 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours\n\tand still running" }, { "msg_contents": "On Wed, 2007-09-12 at 10:15 +0800, El-Lotso wrote:\n> I'm downgrading to 8.1.9 to see if it helps too.\\\n\nNope : Doesn't help at all.. the number of rows at the nested loop and\nhash joins are still 1 to 500 ratio. 
This plan is slightly different in\nthat PG is choosing seq_scans\n\nNested Loop Left Join (cost=2604.28..4135.15 rows=1 width=59) (actual time=249.973..15778.157 rows=528 loops=1)\n Join Filter: (((\"inner\".id)::text = (\"outer\".id)::text) AND (\"inner\".hid = \"outer\".hid) AND (\"inner\".seq_time = \"outer\".seq_time) AND (\"inner\".seq_date = \"outer\".seq_date))\n -> Nested Loop Left Join (cost=1400.08..2766.23 rows=1 width=67) (actual time=168.375..8002.573 rows=528 loops=1)\n Join Filter: (((\"inner\".id)::text = (\"outer\".id)::text) AND (\"inner\".hid = \"outer\".hid) AND (\"inner\".seq_time = \"outer\".seq_time) AND (\"inner\".seq_date = \"outer\".seq_date))\n -> Hash Join (cost=127.25..1328.68 rows=1 width=59) (actual time=74.195..84.855 rows=528 loops=1)\n Hash Cond: (((\"outer\".id)::text = (\"inner\".id)::text) AND (\"outer\".ttype = \"inner\".ttype) AND (\"outer\".start_timestamp = \"inner\".start_timestamp))\n -> Seq Scan on trh (cost=0.00..1060.18 rows=9416 width=36) (actual time=0.022..53.830 rows=9416 loops=1)\n Filter: ((ttype = 35) OR (ttype = 75) OR (ttype = 703) OR (ttype = 740) OR (ttype = 764))\n -> Hash (cost=125.53..125.53 rows=230 width=63) (actual time=12.487..12.487 rows=192 loops=1)\n -> Hash Join (cost=18.69..125.53 rows=230 width=63) (actual time=11.043..12.007 rows=192 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on ts (cost=0.00..87.36 rows=3436 width=40) (actual time=0.003..5.436 rows=3436 loops=1)\n -> Hash (cost=18.57..18.57 rows=48 width=23) (actual time=0.876..0.876 rows=48 loops=1)\n -> Seq Scan on d (cost=0.00..18.57 rows=48 width=23) (actual time=0.019..0.771 rows=48 loops=1)\n Filter: ((record_update_date_time >= '2007-08-20 00:00:00'::timestamp without time zone) AND (record_update_date_time <= '2007-09-08 00:00:00'::timestamp without time zone) AND ((code)::text = 'HUA75'::text))\n -> Hash Join (cost=1272.83..1437.52 rows=1 width=61) (actual time=11.784..14.216 rows=504 loops=528)\n Hash Cond: (((\"outer\".id)::text = (\"inner\".id)::text) AND (\"outer\".ttype = \"inner\".ttype) AND (\"outer\".start_timestamp = \"inner\".start_timestamp))\n -> Seq Scan on ts (cost=0.00..87.36 rows=3436 width=40) (actual time=0.003..5.744 rows=3436 loops=528)\n -> Hash (cost=1268.29..1268.29 rows=606 width=59) (actual time=82.783..82.783 rows=504 loops=1)\n -> Hash Join (cost=18.69..1268.29 rows=606 width=59) (actual time=76.454..81.515 rows=504 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on trh (cost=0.00..1198.22 rows=9064 width=36) (actual time=0.051..66.555 rows=9064 loops=1)\n Filter: ((ttype = 69) OR (ttype = 178) OR (ttype = 198) OR (ttype = 704) OR (ttype = 757) OR (ttype = 741) OR (ttype = 765))\n -> Hash (cost=18.57..18.57 rows=48 width=23) (actual time=0.863..0.863 rows=48 loops=1)\n -> Seq Scan on d (cost=0.00..18.57 rows=48 width=23) (actual time=0.019..0.761 rows=48 loops=1)\n Filter: ((record_update_date_time >= '2007-08-20 00:00:00'::timestamp without time zone) AND (record_update_date_time <= '2007-09-08 00:00:00'::timestamp without time zone) AND ((code)::text = 'HUA75'::text))\n -> Hash Join (cost=1204.20..1368.89 rows=1 width=61) (actual time=11.498..13.941 rows=504 loops=528)\n Hash Cond: (((\"outer\".id)::text = (\"inner\".id)::text) AND (\"outer\".ttype = \"inner\".ttype) AND (\"outer\".start_timestamp = \"inner\".start_timestamp))\n -> Seq Scan on ts (cost=0.00..87.36 rows=3436 width=40) (actual time=0.003..5.593 rows=3436 loops=528)\n -> Hash 
(cost=1199.62..1199.62 rows=610 width=59) (actual time=70.186..70.186 rows=504 loops=1)\n -> Hash Join (cost=18.69..1199.62 rows=610 width=59) (actual time=64.270..68.886 rows=504 loops=1)\n Hash Cond: ((\"outer\".id)::text = (\"inner\".id)::text)\n -> Seq Scan on trh (cost=0.00..1129.20 rows=9128 width=36) (actual time=0.020..54.050 rows=9128 loops=1)\n Filter: ((ttype = 177) OR (ttype = 197) OR (ttype = 705) OR (ttype = 742) OR (ttype = 758) OR (ttype = 766))\n -> Hash (cost=18.57..18.57 rows=48 width=23) (actual time=1.100..1.100 rows=48 loops=1)\n -> Seq Scan on d (cost=0.00..18.57 rows=48 width=23) (actual time=0.019..0.994 rows=48 loops=1)\n Filter: ((record_update_date_time >= '2007-08-20 00:00:00'::timestamp without time zone) AND (record_update_date_time <= '2007-09-08 00:00:00'::timestamp without time zone) AND ((code)::text = 'HUA75'::text))\nTotal runtime: 15779.769 ms\n\nAm I screwed? Is a schema redesign really a necessity? This would be a\nreal pain given the rewrite of _all_ the queries and can't maintain\ncompatibility in the front-end app between sql server and PG.\n\n", "msg_date": "Wed, 12 Sep 2007 10:48:28 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours\n\tand still running" }, { "msg_contents": "On Wed, 2007-09-12 at 10:15 +0800, El-Lotso wrote:\n> On Tue, 2007-09-11 at 14:23 -0400, Tom Lane wrote:\n> > El-Lotso <[email protected]> writes:\n> > > sorry.. I sent this as I was about to go to bed and the explain analyse\n> > > of the query w/ 4 tables joined per subquery came out.\n> > \n> > It's those factor-of-1000 misestimates of the join sizes that are\n> > killing you, eg this one:\n> > \n> > > -> Hash Join (cost=249.61..512.56 rows=1 width=87) (actual time=15.139..32.858 rows=969 loops=1)\n> > > Hash Cond: (((test_db.ts.id)::text = (test_db.d.id)::text) AND (test_db.ts.start_timestamp = test_db.trd.start_timestamp) AND (test_db.ts.ttype = test_db.trd.ttype))\n> > > -> Seq Scan on ts (cost=0.00..226.44 rows=3244 width=40) (actual time=0.135..6.916 rows=3244 loops=1)\n> > > -> Hash (cost=235.00..235.00 rows=835 width=47) (actual time=14.933..14.933 rows=1016 loops=1)\n> > \n> > The single-row-result estimate persuades it to use a nestloop at the\n> > next level up, and then when the output is actually 969 rows, that\n> > means 969 executions of the other side of the upper join.\n> \n> Yep.. that's consistent with the larger results output. more rows = more\n> loops\n\n\nI'm on the verge of giving up... the schema seems simple and yet there's\nso much issues with it. Perhaps it's the layout of the data, I don't\nknow. But based on the ordering/normalisation of the data and the one to\nmany relationship of some tables, this is giving the planner a headache\n(and me a bulge on the head from knockin it against the wall)\n\nI've tried multiple variations, subqueries, not use subqueries, not join\nthe table, (but to include it as a subquery - which gets re-written to a\njoin anyway) exists/not exists to no avail.\n\nPG is fast, yes even w/ all the nested loops for up to 48K of results,\n(within 4 minutes) but as soon as I put it into a inner join/left\njoin/multiple temporary(memory) tables it will choke.\n\nselect\na.a,b.b,c.c from\n(select\nx,y,z\nfrom zz)a \ninner join b\non a.a = b.a\nleft join (select\nx,a,z\nfrom xx)\nthen it will choke.\n\nI'm really at my wits end here. 
\n\n", "msg_date": "Wed, 12 Sep 2007 16:09:33 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours\n\tand still running" }, { "msg_contents": "El-Lotso skrev:\n\n> I'm on the verge of giving up... the schema seems simple and yet there's\n> so much issues with it. Perhaps it's the layout of the data, I don't\n> know. But based on the ordering/normalisation of the data and the one to\n> many relationship of some tables, this is giving the planner a headache\n> (and me a bulge on the head from knockin it against the wall)\n\nI think you should look more at the db design, and less on rewriting the\nquery. Here are some observations:\n\n- Your table structure is quite hard to understand (possibly because you\nhave changed the names) - if you want help on improving it, you will\nneed to explain the data to us, and possibly post some sample data.\n- You seem to be lacking constraints on the tables. My guess is that\n(id,ttype,start_timestamp) is unique in both trh and ts - but I cannot\ntell (and neither can the query planner). Foreign key constraints might\nhelp as well. These would also help others to understand your data, and\nsuggest reformulations of your queries.\n- Another guess is that the ttype sets (177,197,705,742,758,766),\n(69,178,198,704,757,741,765) are actually indicating some other property\na common \"type\" of record, and that only one of each will be present for\nan id,start_timestamp combination. This may be related to the repeating\nfields issue - if a certain ttype indicates that we are interested in a\ncertain pber_x field (and possibly that the others are empty).\n- You have what looks like repeating fields - pber_x, fval_x, index_x -\nin your tables. Fixing this might not improve your query, but might be a\ngood idea for other reasons.\n- seq_date and seq_time seems like they may be redundant - are they\ndifferent casts of the same data?\n\nAll speculation. Hope it helps\n\nNis\n\n", "msg_date": "Wed, 12 Sep 2007 15:14:25 +0200", "msg_from": "=?ISO-8859-1?Q?Nis_J=F8rgensen?= <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours and still running" }, { "msg_contents": "El-Lotso <[email protected]> writes:\n> I'm really at my wits end here. \n\nTry to merge the multiple join keys into one, somehow. I'm not sure why\nthe planner is overestimating the selectivity of the combined join\nconditions, but that's basically where your problem is coming from.\n\nA truly brute-force solution would be \"set enable_nestloop = off\"\nbut this is likely to screw performance for other queries.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Sep 2007 10:41:16 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours and still\n\trunning" }, { "msg_contents": "On Wed, 2007-09-12 at 10:41 -0400, Tom Lane wrote:\n> El-Lotso <[email protected]> writes:\n> > I'm really at my wits end here. \n> \n> Try to merge the multiple join keys into one, somehow. I'm not sure why\n> the planner is overestimating the selectivity of the combined join\n> conditions, but that's basically where your problem is coming from.\n\nI've tried merging them together.. 
what previously was\n\nINNER JOIN TS \nON TS.ID = TRH.ID AND\nTS.TTYPE = TRH.TTYPE AND\nTS.START_TIMESTAMP = TRH.START_TIMESTAMP \n\nhas become \ninner join TS\non ts.id_ttype_startstamp = trh.id_ttype_startstamp\n\nwhere id_ttype_startstamp = (id || '-'||ttype || '-' || start_timestamp)\n\nIt's working somewhat better but everything is not as rosy as it should\nas the planner is still over/under estimating the # of rows.\n\nFROM org :\nNested Loop Left Join (cost=10612.48..24857.20 rows=1 width=61) (actual\ntime=1177.626..462856.007 rows=750 loops=1)\n\nTO merge joined conditions :\nHash Join (cost=41823.94..45889.49 rows=6101 width=61) (actual\ntime=3019.609..3037.692 rows=750 loops=1)\n Hash Cond: (trd.trd_join_key = ts.ts_join_key)\n\nMerged Join using the Main table : 3 - 5 million rows\nHash Left Join (cost=80846.38..121112.36 rows=25 width=244) (actual\ntime=5088.437..5457.269 rows=750 loops=1)\n\nNote that it still doesn't really help that much, the estimated rows is\nstill way off the actual number of rows. On one of the querys there the\nhid field has a subset of 8 values, it's even worst. And it seems like\nthe merge condition doesn't help at all.\n\n\nI'm still trying to merge more join conditions to see if it helps.\n\n\n\n\n> A truly brute-force solution would be \"set enable_nestloop = off\"\n> but this is likely to screw performance for other queries.\n\nI've also tried this... It's not helping much actually.\nAs mentioned previously, this is a one to many relationship and because\nof that, somehow PG just doesn't take it into account.\n\nI'm still not having much luck here. (playing with a subset of the main\ntable's data _does_ show some promise, but when querying on the main\ntable w/ 3 million data, everything grinds to a halt)\n\n\n\n\n", "msg_date": "Fri, 14 Sep 2007 13:51:41 +0800", "msg_from": "El-Lotso <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours\n\tand still running" }, { "msg_contents": "On Wed, 2007-09-12 at 15:14 +0200, Nis Jørgensen wrote:\n> El-Lotso skrev:\n> \n> > I'm on the verge of giving up... the schema seems simple and yet there's\n> > so much issues with it. Perhaps it's the layout of the data, I don't\n> > know. But based on the ordering/normalisation of the data and the one to\n> > many relationship of some tables, this is giving the planner a headache\n> > (and me a bulge on the head from knockin it against the wall)\n> \n> I think you should look more at the db design, and less on rewriting the\n> query. Here are some observations:\n\nI can't help much with the design per-se. So..\n\n> \n> - Your table structure is quite hard to understand (possibly because you\n> have changed the names) - if you want help on improving it, you will\n> need to explain the data to us, and possibly post some sample data.\n\nIf anyone is willing, I can send some sample data to you off-list.\n\non the trh table, hid is a subset of data for a particular id.\n\neg: \nPARENT : CHILD 1\nPARENT : CHILD 2\nPARENT : CHILD 3\nPARENT : CHILD 4\n\nuniqueid = merged fields from id / index1 / index2 / start_timestamp(IN EPOCH)\n/ phase_id / ttype which is unique on each table (but not across ALL the tables)\n\n\n> - You seem to be lacking constraints on the tables. My guess is that\n> (id,ttype,start_timestamp) is unique in both trh and ts - but I cannot\n> tell (and neither can the query planner). Foreign key constraints might\n> help as well. 
These would also help others to understand your data, and\n> suggest reformulations of your queries.\n\nAFAICT, there are no foreign constraints in the original DB design. (and\nI'm not even sure how to begin the FK design based on this org design)\n\nthe unique_id is as above.\nTRH/TRD uniqueid = merged fields from id / index1 / index2 /\nstart_timestamp(IN EPOCH) / phase_id / ttype \n\nTS uniqueid = merged fields from id / start_timestamp(IN EPOCH) / ttype \n\nProblem with this is that the fields in which they are unique is\ndifferent across the different tables, so the unique_id is only unique\nfor that table alone and acts as a primary key so that no dupes exists\nin that one table.\n\n\n\n> - Another guess is that the ttype sets (177,197,705,742,758,766),\n> (69,178,198,704,757,741,765) are actually indicating some other property\n> a common \"type\" of record, and that only one of each will be present for\n> an id,start_timestamp combination. This may be related to the repeatingd\n> fields issue - if a certain ttype indicates that we are interested in a\n> certain pber_x field (and possibly that the others are empty).\n\nyes..\n\neg:\nid | hid |ttype | start_timestamp | pber_2 | pber 3 |pber_4\nPARENT | 0 |764 | 2007-07-01 00:00 | 4000 | null | null\nPARENT | 0 |765 | 2007-07-01 00:00 | null | 9000 | null\nPARENT | 0 |766 | 2007-07-01 00:00 | null | null | 7999\nPARENT | 1 |764 | 2007-07-01 00:00 | 4550 | null | null\nPARENT | 1 |765 | 2007-07-01 00:00 | null | 9220 | null\nPARENT | 1 |766 | 2007-07-01 00:00 | null | null | 6669\n\n\nthe subqueries are just to take out the fields with the value and leave\nthe nulls so that we end-up with\n\nid |hid| start_timestamp |pber_2 | pber 3 | pber_4\nPARENT | 0 | 2007-07-01 00:00 | 4000 | 9000 | 7999\nPARENT | 1 | 2007-07-01 00:00 | 4550 | 9220 | 6669\n\nwhich is basically just joining a table by itself, but there is a caveat\nwhereby pber_3 and pber_4 is/can only be joined together based on the\nseq_date/seq_time in the ts table hence the query..\n\nJOIN1.id = join2.id\nand join1.seq_date = join2.seq_date\netc..\n\nbut the problem is confounded by the fact that there is numerous hid\nvalues for head id\n\n> - You have what looks like repeating fields - pber_x, fval_x, index_x -\n> in your tables. Fixing this might not improve your query, but might be a\n> good idea for other reasons.\n\nit's being looked at by some other team to collapse this to something\nlike this\n\nttype | pber\n764 | 500\n765 | 600\n766 | 700\n\nso that there are lesser # of columns and no null fields. But the query\nwill remain the same\n\n> - seq_date and seq_time seems like they may be redundant - are they\n> different casts of the same data?\n\nNo. 
They're used to join together the pber_2/3/4 fields as one may\nhappen between a few hours to days between each other, but each will be\nuniquely identified by the seq_date/time\n\neg : \n\nid | pber_2 | seq_date | seq time\nPARENT | 400 | 2007-07-01 00:00:00 | 1980-01-01 20:00:00\nPARENT | 410 | 2007-07-10 00:00:00 | 1980-01-01 22:00:00\n\nid | pber_3 | seq_date | seq time\nPARENT | 900 | 2007-07-01 00:00:00 | 1980-01-01 20:00:00\nPARENT | 100 | 2007-07-10 00:00:00 | 1980-01-01 22:00:00\n\nid | pber_4 | seq_date | seq time\nPARENT | 10000 | 2007-07-01 00:00:00 | 1980-01-01 20:00:00\nPARENT | 999 | 2007-07-10 00:00:00 | 1980-01-01 22:00:00\n\n\nso, the correct value for the fields when joined together will be of the\nform\n\nid |start_timestamp |seq_date | seq_time |pber_2 | pber 3 | pber_4\nPARENT |2007-07-01 00:00 |2007-07-01 00:00:00 | 1980-01-01 20:00:00| 400 | 900 | 10000\nPARENT |2007-07-01 00:00 |2007-07-10 00:00:00 | 1980-01-01 22:00:00| 410 | 100 | 999\n\n(repeating for each hid subset value)\n\n\n\n> All speculation. Hope it helps\n\n\nanything would help.. I'm more or less willing to try anything to make\nthings faster else this project is going to the toilet.\n", "msg_date": "Fri, 14 Sep 2007 14:30:27 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: 500rows = 1min/2.5k rows=20min/6K rows 2 hours and still\n running" } ]
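A minimal sketch of the merged-join-key approach discussed above, using the column names quoted in the thread (ts, trh, id, ttype, start_timestamp). The added join_key column, the index names and the epoch cast are illustrative assumptions, not part of the original schema; the point is simply to give the planner one join qual to estimate instead of three correlated ones.

-- precompute a single-column join key on both sides (assumed names)
ALTER TABLE ts  ADD COLUMN join_key text;
ALTER TABLE trh ADD COLUMN join_key text;

UPDATE ts  SET join_key = id || '-' || ttype::text || '-'
                          || extract(epoch FROM start_timestamp)::text;
UPDATE trh SET join_key = id || '-' || ttype::text || '-'
                          || extract(epoch FROM start_timestamp)::text;

CREATE INDEX ts_join_key_idx  ON ts (join_key);
CREATE INDEX trh_join_key_idx ON trh (join_key);

ANALYZE ts;
ANALYZE trh;

-- the three correlated equality conditions then collapse into one
SELECT ts.*, trh.*
FROM trh
JOIN ts ON ts.join_key = trh.join_key;

In practice the column would have to be kept current by the loading job or a trigger; the sketch only shows the shape of the change.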
[ { "msg_contents": "Hi,\n\nI have a table containing some ~13 million rows. Queries on\nindexed fields run fast, but unanchored pattern queries on a\ntext column are slooooow. Indexing the column doesn't help\n(this is already mentioned in the manual).\nhttp://www.postgresql.org/docs/8.2/interactive/indexes-types.html\n\nHowever, no alternative solution is mentioned for indexing\nand/or optimizing queries based on unanchored patterns: \ni.e. description LIKE '%kinase%'.\n\nI've already searched the archives, read the manual, googled\naround and the only alternative I've found is: full text\nindexing (tsearch2 in postgresql-contrib; OpenFTS; others?)\n\nBut do note that i) I'm not interested in finding results 'similar to'\nthe query term (and ranked based on similarity) but just\nresults 'containing an exact substring' of the query term ...\ni.e. not the original goal of a full text search\n\nAnd, ii) from what I've read it seems that for both tsearch2\nand OpenFTS the queries have to be rewritten to explicitly\nevaluate the pattern on the special indices, i.e. they're\nnot transparently available (i.e. via the query planner),\n\nI'm hoping for something like:\nCREATE INDEX newindex ON table USING fti (column);\n\nand then having the new index automagically used by the\nplanner in cases like:\nSELECT * FROM table WHERE column LIKE '%magic%';\n\nIf there's anything like this, I've failed at finding it ...\n\nThanks for any pointer,\n\nFernan\n\nPS: additional information\n\nThis is on PostgreSQL-8.2.4, FreeBSD-6.2 (amd64).\n\nEXPLAIN ANALYZE SELECT COUNT(*) FROM dots.transcript WHERE product LIKE '%kinase%'; QUERY PLAN QUERY PLAN \n-------------------------------------------------------------------------------------------------------------------------\n Aggregate (cost=651878.85..651878.86 rows=1 width=0) (actual time=45587.244..45587.246 rows=1 loops=1)\n -> Seq Scan on nafeatureimp (cost=0.00..651878.85 rows=1 width=0) (actual time=33.049..45582.628 rows=2255 loops=1)\n Filter: (((subclass_view)::text = 'Transcript'::text) AND ((string13)::text ~~ '%kinase%'::text))\n Total runtime: 45589.892 ms\n(4 rows)\n\n\n", "msg_date": "Tue, 11 Sep 2007 15:54:42 -0300", "msg_from": "Fernan Aguero <[email protected]>", "msg_from_op": true, "msg_subject": "efficient pattern queries (using LIKE, ~)" }, { "msg_contents": "Fernan Aguero schrieb:\n> Hi,\n> \n> I have a table containing some ~13 million rows. Queries on\n> indexed fields run fast, but unanchored pattern queries on a\n> text column are slooooow. Indexing the column doesn't help\n> (this is already mentioned in the manual).\n> http://www.postgresql.org/docs/8.2/interactive/indexes-types.html\n> \n> However, no alternative solution is mentioned for indexing\n> and/or optimizing queries based on unanchored patterns: \n> i.e. description LIKE '%kinase%'.\n\nMaybe trigram search might help you? Never tried it myself, but it seems \nto be able to handle substring searches.\n\n", "msg_date": "Tue, 11 Sep 2007 21:08:37 +0200", "msg_from": "Mario Weilguni <[email protected]>", "msg_from_op": false, "msg_subject": "Re: efficient pattern queries (using LIKE, ~)" } ]
[ { "msg_contents": "My PG server came to a screeching halt yesterday. Looking at top saw a very \nlarge number of \"startup waiting\" tasks. A pg_dump was running and one of my \nscripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE \nwas exclusive but was having to wait for the pg_dump to finish, causing a \nmassive traffic jam of locks behind it.\n\nOnce I killed the pg_dump process, things returned to normal.\n\nVersion 8.0.12. Is this a bug? It also concerns me because my workload is \nlikely to expose this problem again.\n", "msg_date": "Wed, 12 Sep 2007 13:58:36 -0600", "msg_from": "Dan Harris <[email protected]>", "msg_from_op": true, "msg_subject": "pg_dump blocking create database?" }, { "msg_contents": "Dan Harris <[email protected]> writes:\n> My PG server came to a screeching halt yesterday. Looking at top saw a very \n> large number of \"startup waiting\" tasks. A pg_dump was running and one of my \n> scripts had issued a CREATE DATABASE command. It looks like the CREATE DATABASE \n> was exclusive but was having to wait for the pg_dump to finish, causing a \n> massive traffic jam of locks behind it.\n\n> Once I killed the pg_dump process, things returned to normal.\n\n> Version 8.0.12. Is this a bug?\n\nIt's operating as designed :-(. 8.1 and later use less strict locking\non pg_database during create/drop database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 12 Sep 2007 23:21:13 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: pg_dump blocking create database? " } ]
[ { "msg_contents": "I'm designing a system that will be doing over a million inserts/deletes \non a single table every hour. Rather than using a single table, it is \npossible for me to partition the data into multiple tables if I wanted \nto, which would be nice because I can just truncate them when I don't \nneed them. I could even use table spaces to split the IO load over \nmultiple filers. The application does not require all this data be in \nthe same table. The data is fairly temporary, it might last 5 seconds, \nit might last 2 days, but it will all be deleted eventually and \ndifferent data will be created.\n\nConsidering a single table would grow to 10mil+ rows at max, and this \nmachine will sustain about 25mbps of insert/update/delete traffic 24/7 - \n365, will I be saving much by partitioning data like that?\n\n-- \n-Matt\n\n<http://twiki.spimageworks.com/twiki/bin/view/Software/CueDevelopment>\n\n\n\n\n\n\nI'm designing a system that will be doing over a million\ninserts/deletes on a single table every hour.  Rather than using a\nsingle table, it is possible for me to partition the data into multiple\ntables if I wanted to, which would be nice because I can just truncate\nthem when I don't need them.  I could even use table spaces to split\nthe IO load over multiple filers.  The application does not require all\nthis data be in the same table.   The data is fairly temporary, it\nmight last 5 seconds, it might last 2 days, but it will all be deleted\neventually and different data will be created.\n\nConsidering a single table would grow to 10mil+ rows at max, and this\nmachine will sustain about 25mbps of insert/update/delete traffic 24/7\n- 365, will I be saving much by partitioning data like that?\n\n-- \n-Matt", "msg_date": "Wed, 12 Sep 2007 13:33:25 -0700", "msg_from": "Matt Chambers <[email protected]>", "msg_from_op": true, "msg_subject": "db performance/design question" }, { "msg_contents": "On 9/12/07, Matt Chambers <[email protected]> wrote:\n>\n>\n> I'm designing a system that will be doing over a million inserts/deletes on\n> a single table every hour. Rather than using a single table, it is possible\n> for me to partition the data into multiple tables if I wanted to, which\n> would be nice because I can just truncate them when I don't need them. I\n> could even use table spaces to split the IO load over multiple filers. The\n> application does not require all this data be in the same table. The data\n> is fairly temporary, it might last 5 seconds, it might last 2 days, but it\n> will all be deleted eventually and different data will be created.\n>\n> Considering a single table would grow to 10mil+ rows at max, and this\n> machine will sustain about 25mbps of insert/update/delete traffic 24/7 -\n> 365, will I be saving much by partitioning data like that?\n\nThis is the exact kind of application for which partitioning shines,\nespecially if you can do the inserts directly to the partitions\nwithout having to create rules or triggers to handle it. If you have\nto point at the master table, stick to triggers as they're much more\nefficient at slinging data to various sub tables.\n", "msg_date": "Wed, 12 Sep 2007 16:58:34 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: db performance/design question" } ]
[ { "msg_contents": "Hello,\n\nMy question is about index usage when bitwise operations are invoked.\nSituation Context:\n--------------------------\n\nLets suppose we have 2 tables TBL1 and TBL2 as the following:\nTBL1 {\n ......... ;\n integer categoryGroup; // categoryGroup is declared as an index on TABL1\n ......... ;\n}\n\nTBL2 {\n ......... ;\n integer categoryGroup; // categoryGroup is declared as an index on TABL2\n ......... ;\n}\n\nBy conception, I suppose that:\n- [categoryGroup] may hold a limited number of values, less than 32 values.\n- [categoryGroup] is of type integer => it means 4 bytes => 32 bits\n => 32 places available to hold binary '0' or binary '1' values.\n- [categoryGroup] is the result of an \"OR bitwise operation\" among a\npredefined set of variables [variableCategory].\n We suppose that [variableCategory] is of type integer (=>32 bits)\n and each binary value of [variableCategory] may only hold a single binary\n'1'.\n\n\nEx: variableCategory1 = 00000000000000000000000000000010\n variableCategory2 = 00000000000000000000000000100000\n variableCategory3 = 00000000000000000000000000001000\n\n If [categoryGroup] = variableCategory1 | variableCategory2 |\nvariableCategory3\n =>[categoryGroup] = 00000000000000000000000000101010\n\n\n\nQuestion:\n--------------\nI have an SQL request similar to:\n\nSELECT ..... FROM TBL1, TBL2 WHERE\n <inner join between TBL1 and TBL2 is True> AND\n TBL1.CATEGORY & TBL2.CATEGORY <> 0 //-- where & is the AND bitwise\noperator\n\nQst:\n1/ IS the above SQL request will use the INDEX [categoryGroup] defined on\nTBL1 and TBL2 ?\n2/ What should I do or How should I modify my SQL request in order\n to force the query engine to use an index ? (the already defined index or\nanother useful index)\n\n\n\nThx a lot\n\n\nHello,\nMy question is about index usage when bitwise operations are invoked.\nSituation Context:--------------------------\nLets suppose we have 2 tables TBL1 and TBL2 as the following:TBL1 {  ......... ;  integer categoryGroup; // categoryGroup is declared as an index on TABL1  ......... ;}\nTBL2 {  ......... ;  integer categoryGroup; // categoryGroup is declared as an index on TABL2  ......... ;}\nBy conception, I suppose that:- [categoryGroup] may hold a limited number of values, less than 32 values.- [categoryGroup] is of type integer => it means 4 bytes => 32 bits   => 32 places available to hold binary '0' or binary '1' values. \n- [categoryGroup] is the result of an \"OR bitwise operation\" among a predefined set of variables [variableCategory].   We suppose that [variableCategory] is of type integer (=>32 bits)    and each binary value of [variableCategory] may only hold a single binary '1'. \n\nEx: variableCategory1 = 00000000000000000000000000000010      variableCategory2 = 00000000000000000000000000100000      variableCategory3 = 00000000000000000000000000001000\n     If [categoryGroup] =  variableCategory1 | variableCategory2 | variableCategory3    =>[categoryGroup] = 00000000000000000000000000101010\n   \nQuestion:--------------I have an SQL request similar to:\nSELECT ..... FROM TBL1, TBL2 WHERE  <inner join between TBL1 and TBL2 is True> AND TBL1.CATEGORY & TBL2.CATEGORY <> 0  //-- where & is the AND bitwise operator \nQst: 1/ IS the above SQL request will use the INDEX [categoryGroup] defined on TBL1 and TBL2 ?2/ What should I do or How should I modify my SQL request in order    to force the query engine to use an index ? 
(the already defined index or another useful index) \n\n \nThx a lot", "msg_date": "Thu, 13 Sep 2007 14:30:20 +0200", "msg_from": "\"W.Alphonse HAROUNY\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index usage when bitwise operator is used" }, { "msg_contents": "Hi,\n\nI could not find and normal solution for that issue. But I am using some\nworkarounds for that issue.\n\nThe solution, that I am using now is to create an index for every bit of\nyour bitmap field.\n\nSo something like\n\nCREATE INDEX idx_hobbybit_0_limited\n ON \"versionA\".user_fast_index\n USING btree\n (gender, dateofbirth) -- here the gender and dateofbirth fields are the\nfields that we usually ORDER BY in the select statements, but you can\nplay with the needed fields\n WHERE (hobby_bitmap & 1) > 0;\n\nby creating such an index for every used bit and combining WHERE\n(hobby_bitmap & 1 ) > 0 like statements the planner will be choosing the\nright index to use.\n\nAnother workaround, that will be more applicable in your case I think, is to\ncreate a functional GIN index on your bitmap field using a static function\nto create an array of bitmap keys from your bitmap field.\n\nCREATE OR REPLACE FUNCTION \"versionA\".bitmap_to_bit_array(source_bitmap\ninteger)\n RETURNS integer[] AS\n'select ARRAY( select (1 << s.i) from generate_series(0, 32) as s(i) where (\n1 << s.i ) & $1 > 0 )'\n LANGUAGE 'sql' IMMUTABLE STRICT;\n\nAnd than create a GIN index on the needed field using this stored procedure.\nAfter that, it would be possible to use intarray set operators on the result\nof that function. This will also make it possible to use that GIN index.\n\nActually it would be much much better if it were possible to build GIN\nindexes directly on the bitmap fields. But this is to be implemented by GIN\nand GiST index development team. Probably would be not a bad idea to make a\nfeature request on them.\n\n\nWith best regards,\n\nValentine Gogichashvili\n\nOn 9/13/07, W.Alphonse HAROUNY <[email protected]> wrote:\n>\n> Hello,\n>\n> My question is about index usage when bitwise operations are invoked.\n> Situation Context:\n> --------------------------\n>\n> Lets suppose we have 2 tables TBL1 and TBL2 as the following:\n> TBL1 {\n> ......... ;\n> integer categoryGroup; // categoryGroup is declared as an index on TABL1\n> ......... ;\n> }\n>\n> TBL2 {\n> ......... ;\n> integer categoryGroup; // categoryGroup is declared as an index on TABL2\n> ......... ;\n> }\n>\n> By conception, I suppose that:\n> - [categoryGroup] may hold a limited number of values, less than 32\n> values.\n> - [categoryGroup] is of type integer => it means 4 bytes => 32 bits\n> => 32 places available to hold binary '0' or binary '1' values.\n> - [categoryGroup] is the result of an \"OR bitwise operation\" among a\n> predefined set of variables [variableCategory].\n> We suppose that [variableCategory] is of type integer (=>32 bits)\n> and each binary value of [variableCategory] may only hold a single\n> binary '1'.\n>\n>\n> Ex: variableCategory1 = 00000000000000000000000000000010\n> variableCategory2 = 00000000000000000000000000100000\n> variableCategory3 = 00000000000000000000000000001000\n>\n> If [categoryGroup] = variableCategory1 | variableCategory2 |\n> variableCategory3\n> =>[categoryGroup] = 00000000000000000000000000101010\n>\n>\n>\n> Question:\n> --------------\n> I have an SQL request similar to:\n>\n> SELECT ..... 
FROM TBL1, TBL2 WHERE\n> <inner join between TBL1 and TBL2 is True> AND\n> TBL1.CATEGORY & TBL2.CATEGORY <> 0 //-- where & is the AND bitwise\n> operator\n>\n> Qst:\n> 1/ IS the above SQL request will use the INDEX [categoryGroup] defined on\n> TBL1 and TBL2 ?\n> 2/ What should I do or How should I modify my SQL request in order\n> to force the query engine to use an index ? (the already defined index\n> or another useful index)\n>\n>\n>\n> Thx a loт\n>\n\nHi, \n \nI could not find and normal solution for that issue. But I am using some workarounds for that issue.\n \nThe solution, that I am using now is to create an index for every bit of your bitmap field. \n \nSo something like \n \nCREATE INDEX idx_hobbybit_0_limited  ON \"versionA\".user_fast_index  USING btree  (gender, dateofbirth) -- here the gender and dateofbirth fields are the fields that we usually ORDER BY in the select statements, but you can play with the needed fields\n  WHERE (hobby_bitmap & 1) > 0;\n \nby creating such an index for every used bit and combining WHERE (hobby_bitmap & 1 ) > 0 like statements the planner will be choosing the right index to use. \n \nAnother workaround, that will be more applicable in your case I think, is to create a functional GIN index on your bitmap field using a static function to create an array of bitmap keys from your bitmap field.\n \nCREATE OR REPLACE FUNCTION \"versionA\".bitmap_to_bit_array(source_bitmap integer)  RETURNS integer[] AS'select ARRAY( select (1 << s.i) from generate_series(0, 32) as s(i) where ( 1 << \ns.i ) & $1 > 0 )'  LANGUAGE 'sql' IMMUTABLE STRICT;\n \nAnd than create a GIN index on the needed field using this stored procedure. After that, it would be possible to use intarray set operators on the result of that function. This will also make it possible to use that GIN index. \n\n \nActually it would be much much better if it were possible to build GIN indexes directly on the bitmap fields. But this is to be implemented by GIN and GiST index development team. Probably would be not a bad idea to make a feature request on them.\n\n \n \nWith best regards, \n \nValentine Gogichashvili \nOn 9/13/07, W.Alphonse HAROUNY <[email protected]> wrote:\n\n\nHello,\nMy question is about index usage when bitwise operations are invoked.\nSituation Context:--------------------------\nLets suppose we have 2 tables TBL1 and TBL2 as the following:TBL1 {  ......... ;  integer categoryGroup; // categoryGroup is declared as an index on TABL1  ......... ;}\nTBL2 {  ......... ;  integer categoryGroup; // categoryGroup is declared as an index on TABL2  ......... ;}\nBy conception, I suppose that:- [categoryGroup] may hold a limited number of values, less than 32 values.- [categoryGroup] is of type integer => it means 4 bytes => 32 bits   => 32 places available to hold binary '0' or binary '1' values. \n- [categoryGroup] is the result of an \"OR bitwise operation\" among a predefined set of variables [variableCategory].   We suppose that [variableCategory] is of type integer (=>32 bits)    and each binary value of [variableCategory] may only hold a single binary '1'. \n\nEx: variableCategory1 = 00000000000000000000000000000010      variableCategory2 = 00000000000000000000000000100000      variableCategory3 = 00000000000000000000000000001000\n     If [categoryGroup] =  variableCategory1 | variableCategory2 | variableCategory3    =>[categoryGroup] = 00000000000000000000000000101010\n   \nQuestion:--------------I have an SQL request similar to:\nSELECT ..... 
FROM TBL1, TBL2 WHERE  <inner join between TBL1 and TBL2 is True> AND TBL1.CATEGORY & TBL2.CATEGORY <> 0  //-- where & is the AND bitwise operator \nQst: 1/ IS the above SQL request will use the INDEX [categoryGroup] defined on TBL1 and TBL2 ?2/ What should I do or How should I modify my SQL request in order    to force the query engine to use an index ? (the already defined index or another useful index) \n\n \nThx a loт", "msg_date": "Sun, 16 Sep 2007 12:08:33 +0200", "msg_from": "\"Valentine Gogichashvili\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": ">>> On Thu, Sep 13, 2007 at 7:30 AM, in message\n<[email protected]>, \"W.Alphonse\nHAROUNY\" <[email protected]> wrote: \n\n> and each binary value of [variableCategory] may only hold a single binary\n> '1'.\n\n> TBL1.CATEGORY & TBL2.CATEGORY <> 0 //-- where & is the AND bitwise\n> operator\n\nWhat about saying?:\n \n TBL1.CATEGORY = TBL2.CATEGORY\n \nIf your indexes include this and the other columns which cause the tables\nto be related, one or both of them stand a pretty good chance of evaluating\nto the lowest-cost method to join the tables. Forcing a query to use an\nindex outside of it being the cheapest path is rarely productive.\n \n-Kevin\n\n", "msg_date": "Sun, 16 Sep 2007 18:58:21 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": "Hi,\n\nI could not find and normal solution for that issue. But I am using\nsome workarounds for that issue.\n\nThe solution, that I am using now is to create an index for every bit\nof your bitmap field.\n\nSo something like\n\nCREATE INDEX idx_hobbybit_0_limited\n ON \"versionA\".user_fast_index\n USING btree\n (gender, dateofbirth) -- here the gender and dateofbirth fields are\nthe fields that we usually ORDER BY in the select statements, but you\ncan play with the needed fields\n WHERE (hobby_bitmap & 1) > 0;\n\nby creating such an index for every used bit and combining WHERE\n(hobby_bitmap & 1 ) > 0 like statements the planner will be choosing\nthe right index to use.\n\nAnother workaround, that will be more applicable in your case I think,\nis to create a functional GIN index on your bitmap field using a\nstatic function to create an array of bitmap keys from your bitmap\nfield.\n\nCREATE OR REPLACE FUNCTION\n\"versionA\".bitmap_to_bit_array(source_bitmap integer)\n RETURNS integer[] AS\n'select ARRAY( select (1 << s.i) from generate_series(0, 32) as s(i)\nwhere ( 1 << s.i ) & $1 > 0 )'\n LANGUAGE 'sql' IMMUTABLE STRICT;\n\nAnd than create a GIN index on the needed field using this stored\nprocedure. After that, it would be possible to use intarray set\noperators on the result of that function. This will also make it\npossible to use that GIN index.\n\nActually it would be much much better if it were possible to build GIN\nindexes directly on the bitmap fields. But this is to be implemented\nby GIN and GiST index development team. Probably would be not a bad\nidea to make a feature request on them.\n\n\nWith best regards,\n\nValentine Gogichashvili\n\nOn Sep 13, 2:30 pm, [email protected] (\"W.Alphonse HAROUNY\") wrote:\n> Hello,\n>\n> My question is about index usage when bitwise operations are invoked.\n> Situation Context:\n> --------------------------\n>\n> Lets suppose we have 2 tables TBL1 and TBL2 as the following:\n> TBL1 {\n> ......... 
;\n> integer categoryGroup; // categoryGroup is declared as an index on TABL1\n> ......... ;\n>\n> }\n>\n> TBL2 {\n> ......... ;\n> integer categoryGroup; // categoryGroup is declared as an index on TABL2\n> ......... ;\n>\n> }\n>\n> By conception, I suppose that:\n> - [categoryGroup] may hold a limited number of values, less than 32 values.\n> - [categoryGroup] is of type integer => it means 4 bytes => 32 bits\n> => 32 places available to hold binary '0' or binary '1' values.\n> - [categoryGroup] is the result of an \"OR bitwise operation\" among a\n> predefined set of variables [variableCategory].\n> We suppose that [variableCategory] is of type integer (=>32 bits)\n> and each binary value of [variableCategory] may only hold a single binary\n> '1'.\n>\n> Ex: variableCategory1 = 00000000000000000000000000000010\n> variableCategory2 = 00000000000000000000000000100000\n> variableCategory3 = 00000000000000000000000000001000\n>\n> If [categoryGroup] = variableCategory1 | variableCategory2 |\n> variableCategory3\n> =>[categoryGroup] = 00000000000000000000000000101010\n>\n> Question:\n> --------------\n> I have an SQL request similar to:\n>\n> SELECT ..... FROM TBL1, TBL2 WHERE\n> <inner join between TBL1 and TBL2 is True> AND\n> TBL1.CATEGORY & TBL2.CATEGORY <> 0 //-- where & is the AND bitwise\n> operator\n>\n> Qst:\n> 1/ IS the above SQL request will use the INDEX [categoryGroup] defined on\n> TBL1 and TBL2 ?\n> 2/ What should I do or How should I modify my SQL request in order\n> to force the query engine to use an index ? (the already defined index or\n> another useful index)\n>\n> Thx a lot\n\n\n", "msg_date": "Mon, 17 Sep 2007 07:47:32 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": "\n> What about saying?:\n>\n> TBL1.CATEGORY = TBL2.CATEGORY\n>\n\nAre you sure you understood what was the question?\n\nIs the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\nTBL2.CATEGORY > 0?\n\n", "msg_date": "Mon, 17 Sep 2007 07:49:28 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": ">>> On Mon, Sep 17, 2007 at 2:49 AM, in message\n<[email protected]>, valgog\n<[email protected]> wrote: \n\n>> What about saying?:\n>>\n>> TBL1.CATEGORY = TBL2.CATEGORY\n>>\n> \n> Are you sure you understood what was the question?\n> \n> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n> TBL2.CATEGORY > 0?\n\nYes, given that he stipulated that one and only one bit would be set.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 17 Sep 2007 07:27:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": "\"Kevin Grittner\" <[email protected]> writes:\n> On Mon, Sep 17, 2007 at 2:49 AM, in message\n> <[email protected]>, valgog\n> <[email protected]> wrote:=20\n>> Are you sure you understood what was the question?\n>> \n>> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n>> TBL2.CATEGORY > 0?\n\n> Yes, given that he stipulated that one and only one bit would be set.\n\nReally? 
In that case, isn't this bit-field just a bad implementation of\nan enum-style field?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2007 09:37:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used " }, { "msg_contents": ">>> On Mon, Sep 17, 2007 at 8:37 AM, in message <[email protected]>,\nTom Lane <[email protected]> wrote: \n> \"Kevin Grittner\" <[email protected]> writes:\n>> On Mon, Sep 17, 2007 at 2:49 AM, in message\n>> <[email protected]>, valgog\n>> <[email protected]> wrote:=20\n>>> Are you sure you understood what was the question?\n>>> \n>>> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n>>> TBL2.CATEGORY > 0?\n> \n>> Yes, given that he stipulated that one and only one bit would be set.\n> \n> Really? In that case, isn't this bit-field just a bad implementation of\n> an enum-style field?\n \nMy bad. I did misread it. Sorry, all.\n \n-Kevin\n \n\n\n", "msg_date": "Mon, 17 Sep 2007 09:30:26 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": "Hi,\n\nA little clarification. Actually, TBL1.CATEGORY and/or TBL2.CATEGORY may\nhold a binary value having multiple binary(ies) '1'.\nEach binary value column represent an business attribute.\nIf a binary value column is equal to '1', it means that the business\nattribute is True,\notherwise it is false.\nI adopted this avoid defining a detail table to table TBL1. Idem to TBL2.\n\nIf TBL1.CATEGORY | TBL2.CATEGORY > 0\n=> it means that we have at least one common business attribute that is TRUE\nfor TBL1 and TBL2.\n\nRegards\nW.Alf\n\n\nOn 9/17/07, Tom Lane <[email protected]> wrote:\n>\n> \"Kevin Grittner\" <[email protected]> writes:\n> > On Mon, Sep 17, 2007 at 2:49 AM, in message\n> > <[email protected]>, valgog\n> > <[email protected]> wrote:=20\n> >> Are you sure you understood what was the question?\n> >>\n> >> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n> >> TBL2.CATEGORY > 0?\n>\n> > Yes, given that he stipulated that one and only one bit would be set.\n>\n> Really? In that case, isn't this bit-field just a bad implementation of\n> an enum-style field?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n>\n\nHi, \n \nA little clarification. Actually,  TBL1.CATEGORY and/or  TBL2.CATEGORY may hold a binary value having multiple binary(ies) '1'.\nEach binary value column represent an business attribute.\nIf a binary value column is equal to '1', it means that the business attribute is True,\notherwise it is false.\nI adopted this avoid defining a detail table to table TBL1. 
Idem to TBL2.\n \nIf  TBL1.CATEGORY |  TBL2.CATEGORY > 0 \n=> it means that we have at least one common business attribute that is TRUE for TBL1 and TBL2.\n \nRegards\nW.Alf\nOn 9/17/07, Tom Lane <[email protected]> wrote:\n\"Kevin Grittner\" <[email protected]\n> writes:> On Mon, Sep 17, 2007 at  2:49 AM, in message> <[email protected]>, valgog> <\[email protected]> wrote:=20>> Are you sure you understood what was the question?>>>> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n>> TBL2.CATEGORY > 0?> Yes, given that he stipulated that one and only one bit would be set.Really?  In that case, isn't this bit-field just a bad implementation ofan enum-style field?\n                       regards, tom lane---------------------------(end of broadcast)---------------------------TIP 1: if posting/reading through Usenet, please send an appropriate      subscribe-nomail command to \[email protected] so that your      message can get through to the mailing list cleanly", "msg_date": "Mon, 17 Sep 2007 16:40:49 +0200", "msg_from": "\"W.Alphonse HAROUNY\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index usage when bitwise operator is used" }, { "msg_contents": "Hi Tom,\n\ndo you think it would be a good idea to ask GIN index team to\nimplement an int-based bitmap set indexing operator for GIN/GiST based\nindexes? Or there will be a possibility to somehow optimally index\narrays of enumerations to implement such bitmap structures in 8.3 or\nlater postgresql versions?\n\nWith best regards,\n\n-- Valentine\n\nOn Sep 17, 3:37 pm, [email protected] (Tom Lane) wrote:\n> \"Kevin Grittner\" <[email protected]> writes:\n> > On Mon, Sep 17, 2007 at 2:49 AM, in message\n> > <[email protected]>, valgog\n> > <[email protected]> wrote:=20\n> >> Are you sure you understood what was the question?\n>\n> >> Is the TBL1.CATEGORY = TBL2.CATEGORY the same as TBL1.CATEGORY &\n> >> TBL2.CATEGORY > 0?\n> > Yes, given that he stipulated that one and only one bit would be set.\n>\n> Really? In that case, isn't this bit-field just a bad implementation of\n> an enum-style field?\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to [email protected] so that your\n> message can get through to the mailing list cleanly\n\n\n", "msg_date": "Tue, 18 Sep 2007 10:36:23 -0000", "msg_from": "valgog <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index usage when bitwise operator is used" } ]
[ { "msg_contents": "hi!\nI wonder if clustering a table improves perfs somehow ?\nAny example/ideas about that ?\nref : http://www.postgresql.org/docs/8.2/interactive/sql-cluster.html\nthx,\nP.\n\n", "msg_date": "Thu, 13 Sep 2007 15:03:54 +0200", "msg_from": "Patrice Castet <[email protected]>", "msg_from_op": true, "msg_subject": "Clustered tables improves perfs ?" }, { "msg_contents": "On 9/13/07, Patrice Castet <[email protected]> wrote:\n> I wonder if clustering a table improves perfs somehow ?\n\nAs I understand it, clustering will help cases where you are fetching\ndata in the same sequence as the clustering order, because adjacent\nrows will be located in adjacent pages on disk; this is because hard\ndrives perform superbly with sequential reads, much less so with\nrandom access.\n\nFor example, given a table foo (v integer) populated with a sequence\nof integers [1, 2, 3, 4, ..., n], where the column v has an index, and\nthe table is clustered on that index, a query such as \"select v from\nfoo order by v\" will read the data sequentially from disk, since the\ndata will already be in the correct order.\n\nOn the other hand, a query such as \"select v from foo order by\nrandom()\" will not be able to exploit the clustering. In other words,\nclustering is only useful insofar as your access patterns follow the\nclustering order.\n\nAlexander.\n", "msg_date": "Thu, 13 Sep 2007 15:31:58 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered tables improves perfs ?" }, { "msg_contents": "[email protected] (Patrice Castet) writes:\n> I wonder if clustering a table improves perfs somehow ?\n> Any example/ideas about that ?\n> ref : http://www.postgresql.org/docs/8.2/interactive/sql-cluster.html\n\nSometimes.\n\n1. It compacts the table, which may be of value, particularly if the\ntable is not seeing heavy UPDATE/DELETE traffic. VACUUM and VACUUM\nFULL do somewhat similar things; if you are using VACUUM frequently\nenough, this is not likely to have a material effect.\n\n2. It transforms the contents of the table into some specified order,\nwhich will improve efficiency for any queries that use that specific\nordering.\n-- \noutput = reverse(\"moc.enworbbc\" \"@\" \"enworbbc\")\nhttp://linuxdatabases.info/info/emacs.html\n\"You can swear at the keyboard and it won't be offended. It was going\nto treat you badly anyway\" -- Arthur Norman\n", "msg_date": "Thu, 13 Sep 2007 10:49:57 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Clustered tables improves perfs ?" } ]
[ { "msg_contents": "I'm having a problem with long running commits appearing in my database\nlogs. It may be hardware related, as the problem appeared when we moved\nthe database to a new server connected to a different disk array. The\ndisk array is a lower class array, but still more than powerful enough\nto handle the IO requirements. One big difference though is that the\nold array had 16 GB of cache, the new one has 4 GB.\n\nRunning Postgres 8.1.8 on AIX 5.3\n\nWe have enough IO to spare that we have the bgwriter cranked up pretty\nhigh, dirty buffers are getting quickly. Vmstat indicates 0 io wait\ntime, no swapping or anything nasty like that going on.\n\nThe long running commits do not line up with checkpoint times.\n\nThe postgresql.conf config are identical except that wal_buffers was 8\non the old master, and it is set to 16 on the new one.\n\nWe have other installations of this product running on the same array\n(different servers though) and they are not suffering from this\nproblem. \n\nThe only other thing of note is that the wal files sit on the same disk\nas the data directory. This has not changed between the old and new\nconfig, but the installs that are running fine do have their wal files\non a separate partition.\n\nAny ideas where the problem could lie? Could having the wal files on\nthe same data partition cause long running commits when there is plenty\nof IO to spare?\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 10:15:15 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 2007-09-13 at 10:15 -0400, Brad Nicholson wrote:\n> I'm having a problem with long running commits appearing in my database\n> logs. It may be hardware related, as the problem appeared when we moved\n> the database to a new server connected to a different disk array. The\n> disk array is a lower class array, but still more than powerful enough\n> to handle the IO requirements. One big difference though is that the\n> old array had 16 GB of cache, the new one has 4 GB.\n> \n> Running Postgres 8.1.8 on AIX 5.3\n> \n> We have enough IO to spare that we have the bgwriter cranked up pretty\n> high, dirty buffers are getting quickly. Vmstat indicates 0 io wait\n> time, no swapping or anything nasty like that going on.\n> \n> The long running commits do not line up with checkpoint times.\n> \n> The postgresql.conf config are identical except that wal_buffers was 8\n> on the old master, and it is set to 16 on the new one.\n> \n> We have other installations of this product running on the same array\n> (different servers though) and they are not suffering from this\n> problem. \n> \n> The only other thing of note is that the wal files sit on the same disk\n> as the data directory. This has not changed between the old and new\n> config, but the installs that are running fine do have their wal files\n> on a separate partition.\n> \n> Any ideas where the problem could lie? 
Could having the wal files on\n> the same data partition cause long running commits when there is plenty\n> of IO to spare?\n\nMore on this - we also have long running commits on installations that\ndo have the wal files on a separate partition.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 11:03:22 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "Brad Nicholson <[email protected]> writes:\n> On Thu, 2007-09-13 at 10:15 -0400, Brad Nicholson wrote:\n>> I'm having a problem with long running commits appearing in my database\n>> logs. It may be hardware related, as the problem appeared when we moved\n>> the database to a new server connected to a different disk array.\n\n> More on this - we also have long running commits on installations that\n> do have the wal files on a separate partition.\n\nWhat's your definition of \"long running commit\" --- seconds? milliseconds?\nExactly what are you measuring? Can you correlate the problem with what\nthe transaction was doing?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Sep 2007 11:10:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints " }, { "msg_contents": "On Thu, 2007-09-13 at 11:10 -0400, Tom Lane wrote:\n> Brad Nicholson <[email protected]> writes:\n> > On Thu, 2007-09-13 at 10:15 -0400, Brad Nicholson wrote:\n> >> I'm having a problem with long running commits appearing in my database\n> >> logs. It may be hardware related, as the problem appeared when we moved\n> >> the database to a new server connected to a different disk array.\n> \n> > More on this - we also have long running commits on installations that\n> > do have the wal files on a separate partition.\n> \n> What's your definition of \"long running commit\" --- seconds? milliseconds?\n> Exactly what are you measuring? Can you correlate the problem with what\n\nlog_min_duration is set to 150ms\n\nCommits running over that up to 788ms. 
Here is what we see in the logs\n(with obfuscated dbname, username and IP):\n\n2007-09-13 10:01:49.787 CUT [782426] dbname username 1.2.3.171 LOG:\nduration: 224.286 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 10:19:16.373 CUT [737404] dbname username 1.2.3.174 LOG:\nduration: 372.545 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 10:19:24.437 CUT [1806498] dbname username 11.2.3.171 LOG:\nduration: 351.544 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 10:33:11.204 CUT [962598] dbname username 1.2.3.170 LOG:\nduration: 504.057 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 10:40:33.735 CUT [1282104] dbname username 1.2.3.174 LOG:\nduration: 250.127 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 10:49:54.752 CUT [1188032] dbname username 1.2.3.170 LOG:\nduration: 382.781 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n2007-09-13 11:30:43.339 CUT [1589464] dbname username 1.2.3.172 LOG:\nduration: 408.463 ms statement: EXECUTE <unnamed> [PREPARE: commit]\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 11:35:59 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 13 Sep 2007, Brad Nicholson wrote:\n\n> One big difference though is that the old array had 16 GB of cache, the \n> new one has 4 GB.\n>\n> We have enough IO to spare that we have the bgwriter cranked up pretty \n> high, dirty buffers are getting quickly.\n\nIf your system is very active, running the bgwriter very aggressively will \nresult in the cache on the disk array being almost continuously filled \nwith pending writes that then trickle their way onto real disk eventually. \nIf the typical working set on this system results in the amount of \nbackground writer cached writes regularly being >4GB but <16GB, that would \nexplain what you're seeing. The resolution is actually unexpected by most \npeople: you make the background writer less aggressive so that it's \nspewing less redundant writes clogging the array's cache, leaving more \ncache to buffer the actual commits so they don't block. This will \nincrease the odds that there will be a checkpoint block instead, but if \nyou're seeing none of them right now you may have some margin there to \nreduce the BGW activity without aggrevating checkpoints.\n\nSince you're probably not monitoring I/O waits and similar statistics on \nhow the disk array's cache is being used, whether this is happening or not \nto you won't be obvious from what the operating system is reporting. One \nor two blocked writes every couple of minutes won't even show up on the \ngross statistics for operating system I/O waits; on average, they'll still \nbe 0. So it's possible you may think you have plenty of I/O to spare, \nwhen in fact you don't quite have enough--what you've got is enough cache \nthat the OS can't see where the real I/O bottleneck is.\n\nI'd be curious to see how you've got your background writer configured to \nsee if it matches situations like this I've seen in the past. 
The \nparameters controlling the all scan are the ones you'd might consider \nturning down, definately the percentage and possibly the maxpages as well.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 13 Sep 2007 12:12:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 2007-09-13 at 12:12 -0400, Greg Smith wrote:\n> On Thu, 13 Sep 2007, Brad Nicholson wrote:\n\n> I'd be curious to see how you've got your background writer configured to \n> see if it matches situations like this I've seen in the past. The \n> parameters controlling the all scan are the ones you'd might consider \n> turning down, definately the percentage and possibly the maxpages as well.\n\n\nbgwriter_delay = 50 # 10-10000 milliseconds between\nrounds\nbgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\nscanned/round\nbgwriter_lru_maxpages = 300 # 0-1000 buffers max\nwritten/round\nbgwriter_all_percent = 20 # 0-100% of all buffers\nscanned/round\nbgwriter_all_maxpages = 600 # 0-1000 buffers max\nwritten/round\n\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 12:19:45 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 2007-09-13 at 12:19 -0400, Brad Nicholson wrote:\n> On Thu, 2007-09-13 at 12:12 -0400, Greg Smith wrote:\n> > On Thu, 13 Sep 2007, Brad Nicholson wrote:\n> \n> > I'd be curious to see how you've got your background writer configured to \n> > see if it matches situations like this I've seen in the past. The \n> > parameters controlling the all scan are the ones you'd might consider \n> > turning down, definately the percentage and possibly the maxpages as well.\n> \n> \n> bgwriter_delay = 50 # 10-10000 milliseconds between\n> rounds\n> bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\n> scanned/round\n> bgwriter_lru_maxpages = 300 # 0-1000 buffers max\n> written/round\n> bgwriter_all_percent = 20 # 0-100% of all buffers\n> scanned/round\n> bgwriter_all_maxpages = 600 # 0-1000 buffers max\n> written/round\n\nI should add, there are 6 back ends running on this disk array\n(different servers and different data partitions) with these bgwriter\nsettings. \n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 12:52:21 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "Brad Nicholson wrote:\n> On Thu, 2007-09-13 at 12:19 -0400, Brad Nicholson wrote:\n> > On Thu, 2007-09-13 at 12:12 -0400, Greg Smith wrote:\n> > > On Thu, 13 Sep 2007, Brad Nicholson wrote:\n> > \n> > > I'd be curious to see how you've got your background writer configured to \n> > > see if it matches situations like this I've seen in the past. 
The \n> > > parameters controlling the all scan are the ones you'd might consider \n> > > turning down, definately the percentage and possibly the maxpages as well.\n> > \n> > \n> > bgwriter_delay = 50 # 10-10000 milliseconds between\n> > rounds\n> > bgwriter_lru_percent = 20.0 # 0-100% of LRU buffers\n> > scanned/round\n> > bgwriter_lru_maxpages = 300 # 0-1000 buffers max\n> > written/round\n> > bgwriter_all_percent = 20 # 0-100% of all buffers\n> > scanned/round\n> > bgwriter_all_maxpages = 600 # 0-1000 buffers max\n> > written/round\n> \n> I should add, there are 6 back ends running on this disk array\n> (different servers and different data partitions) with these bgwriter\n> settings. \n\nMaybe it is running deferred triggers or something?\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"I suspect most samba developers are already technically insane...\nOf course, since many of them are Australians, you can't tell.\" (L. Torvalds)\n", "msg_date": "Thu, 13 Sep 2007 13:07:19 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 2007-09-13 at 12:12 -0400, Greg Smith wrote:\n> Since you're probably not monitoring I/O waits and similar statistics on \n> how the disk array's cache is being used, whether this is happening or not \n> to you won't be obvious from what the operating system is reporting. \n\n\nA sysadmin looked at cache usage on the disk array. The read cache is\nbeing used heavily, and the write cache is not.\n\n-- \nBrad Nicholson 416-673-4106\nDatabase Administrator, Afilias Canada Corp.\n\n", "msg_date": "Thu, 13 Sep 2007 13:58:19 -0400", "msg_from": "Brad Nicholson <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Thu, 13 Sep 2007, Brad Nicholson wrote:\n\n> A sysadmin looked at cache usage on the disk array. The read cache is \n> being used heavily, and the write cache is not.\n\nGiven that information, you can take the below (which I was just about to \nsend before the above update came in) as something to think about and test \nbut perhaps not your primary line of attack. Even if my theory about the \nexact mechanism involved isn't correct, the background writer is still \nproblematic in terms of its impact on the system when run as aggressively \nas you're doing it; I'm not sure but I think that's even more true on 8.1 \nthan it is on 8.2 where I did most my testing in this area.\n\n> bgwriter_delay = 50\n> bgwriter_lru_percent = 20.0\n> bgwriter_lru_maxpages = 300\n> bgwriter_all_percent = 20\n> bgwriter_all_maxpages = 600\n\nThat was what I was expecting. Your all scan has the potential to be \nwriting 600*8K*(1/50 msec)=98MB/sec worth of data to your disk array. \nSince some of this data has a random access component to it, your array \ncannot be expected to keep with a real peak load; the only thing saving \nyou if something starts dirtying buffers as far as possible is that the \narray cache is buffering things. And that 4GB worth of cache could be \nfilling in very little time.\n\nEvery time the all scan writes a buffer that is frequently used, that \nwrite has a good chance that it was wasted because the block will be \nmodified again before checkpoint time. Your settings are beyond regular \naggressive and into the hyperactive terrority where I'd expect such \nredundant writes are happening often. 
I'd suggest you try to move toward \ndropping bgwriter_all_percent dramatically from its current setting and \nsee how far down you can go before it starts to introduce blocks at \ncheckpoint time. With bgwriter_delay set to 1/4 the default, I would \nexpect that even 5% would be a high setting for you. That may be a more \ndramatic change than you want to make at once though, so lowering it in \nthat direction more slowly (perhaps drop 5% each day) and seeing whether \nthings improve as that happens may make more sense.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Thu, 13 Sep 2007 14:42:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On 13/09/2007, Greg Smith <[email protected]> wrote:\n>\n>\n> Every time the all scan writes a buffer that is frequently used, that\n> write has a good chance that it was wasted because the block will be\n> modified again before checkpoint time. Your settings are beyond regular\n> aggressive and into the hyperactive terrority where I'd expect such\n> redundant writes are happening often. I'd suggest you try to move toward\n> dropping bgwriter_all_percent dramatically from its current setting and\n> see how far down you can go before it starts to introduce blocks at\n> checkpoint time. With bgwriter_delay set to 1/4 the default, I would\n> expect that even 5% would be a high setting for you. That may be a more\n> dramatic change than you want to make at once though, so lowering it in\n> that direction more slowly (perhaps drop 5% each day) and seeing whether\n> things improve as that happens may make more sense.\n>\n>\nAre you suggesting that reducing bgwriter_delay and bg_writer_percent would\nreduce the time spent doing commits?\n\nI get quite a few commits that take over 500ms (the point when i start\nlogging queries). I always thought oh just one of those things but if they\ncan be reduced by changing a few config variables that would be great. I'm\njust trying to workout what figures are worth trying to see if I can reduce\nthem.\n\n From time to time I get commits that take 6 or 7 seconds but not all the\ntime.\n\nI'm currently working with the defaults.\n\nPeter Childs\n\nOn 13/09/2007, Greg Smith <[email protected]> wrote:\nEvery time the all scan writes a buffer that is frequently used, thatwrite has a good chance that it was wasted because the block will bemodified again before checkpoint time.  Your settings are beyond regular\naggressive and into the hyperactive terrority where I'd expect suchredundant writes are happening often.  I'd suggest you try to move towarddropping bgwriter_all_percent dramatically from its current setting and\nsee how far down you can go before it starts to introduce blocks atcheckpoint time.  With bgwriter_delay set to 1/4 the default, I wouldexpect that even 5% would be a high setting for you.  That may be a more\ndramatic change than you want to make at once though, so lowering it inthat direction more slowly (perhaps drop 5% each day) and seeing whetherthings improve as that happens may make more sense.\nAre you suggesting that reducing bgwriter_delay and bg_writer_percent would reduce the time spent doing commits?I get quite a few commits that take over 500ms (the point when i start logging queries). I always thought oh just one of those things but if they can be reduced by changing a few config variables that would be great. 
I'm just trying to workout what figures are worth trying to see if I can reduce them.\nFrom time to time I get commits that take 6 or 7 seconds but not all the time.I'm currently working with the defaults.Peter Childs", "msg_date": "Fri, 14 Sep 2007 08:02:23 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On 14/09/2007, Peter Childs <[email protected]> wrote:\n>\n>\n>\n> On 13/09/2007, Greg Smith <[email protected]> wrote:\n> >\n> >\n> > Every time the all scan writes a buffer that is frequently used, that\n> > write has a good chance that it was wasted because the block will be\n> > modified again before checkpoint time. Your settings are beyond regular\n> >\n> > aggressive and into the hyperactive terrority where I'd expect such\n> > redundant writes are happening often. I'd suggest you try to move\n> > toward\n> > dropping bgwriter_all_percent dramatically from its current setting and\n> > see how far down you can go before it starts to introduce blocks at\n> > checkpoint time. With bgwriter_delay set to 1/4 the default, I would\n> > expect that even 5% would be a high setting for you. That may be a more\n> > dramatic change than you want to make at once though, so lowering it in\n> > that direction more slowly (perhaps drop 5% each day) and seeing whether\n> > things improve as that happens may make more sense.\n> >\n> >\n> Are you suggesting that reducing bgwriter_delay and bg_writer_percent\n> would reduce the time spent doing commits?\n>\n> I get quite a few commits that take over 500ms (the point when i start\n> logging queries). I always thought oh just one of those things but if they\n> can be reduced by changing a few config variables that would be great. I'm\n> just trying to workout what figures are worth trying to see if I can reduce\n> them.\n>\n> From time to time I get commits that take 6 or 7 seconds but not all the\n> time.\n>\n> I'm currently working with the defaults.\n>\n> Peter Childs\n>\n\nHmm Always read the manual, Increase them from the defaults.......\n\nPeter.\n\nOn 14/09/2007, Peter Childs <[email protected]> wrote:\nOn 13/09/2007, Greg Smith <\[email protected]> wrote:\nEvery time the all scan writes a buffer that is frequently used, thatwrite has a good chance that it was wasted because the block will bemodified again before checkpoint time.  Your settings are beyond regular\naggressive and into the hyperactive terrority where I'd expect suchredundant writes are happening often.  I'd suggest you try to move towarddropping bgwriter_all_percent dramatically from its current setting and\nsee how far down you can go before it starts to introduce blocks atcheckpoint time.  With bgwriter_delay set to 1/4 the default, I wouldexpect that even 5% would be a high setting for you.  That may be a more\n\ndramatic change than you want to make at once though, so lowering it inthat direction more slowly (perhaps drop 5% each day) and seeing whetherthings improve as that happens may make more sense.\n\nAre you suggesting that reducing bgwriter_delay and bg_writer_percent would reduce the time spent doing commits?I get quite a few commits that take over 500ms (the point when i start logging queries). I always thought oh just one of those things but if they can be reduced by changing a few config variables that would be great. 
I'm just trying to workout what figures are worth trying to see if I can reduce them.\nFrom time to time I get commits that take 6 or 7 seconds but not all the time.I'm currently working with the defaults.Peter Childs\nHmm Always read the manual, Increase them from the defaults.......Peter.", "msg_date": "Fri, 14 Sep 2007 08:42:59 +0100", "msg_from": "\"Peter Childs\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "> I'm having a problem with long running commits appearing in my database\n> logs. It may be hardware related, as the problem appeared when we moved\n> the database to a new server connected to a different disk array. The\n> disk array is a lower class array, but still more than powerful enough\n> to handle the IO requirements. One big difference though is that the\n> old array had 16 GB of cache, the new one has 4 GB.\n\nMaybe the old disk array had battery backed up ram that was used as a\nwrite cache and the new only has read cache? Without battery backed up ram\n(or flash or whatever) then a commit need to flush data down onto the\npysical disk.\n\n/Dennis\n\n", "msg_date": "Fri, 14 Sep 2007 13:31:45 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" }, { "msg_contents": "On Fri, 14 Sep 2007, Peter Childs wrote:\n\n> Are you suggesting that reducing bgwriter_delay and bg_writer_percent \n> would reduce the time spent doing commits? I get quite a few commits \n> that take over 500ms (the point when i start logging queries).\n\nOne very common cause for transactions blocking for as much as several \nseconds is hitting a checkpoint, which in current versions causes a large \namount of data to be written and synchronized to the physical disk. If \nyou're already tracking long transactions and trying to figure out what's \ncausing them, you should set checkpoint_warning to a high value (the \nmaximum of 3600 is generally fine). That will give you a harmless warning \nevery time a checkpoint occurs. If the slow transactions line up with \ncheckpoints, then you might consider tuning or re-tuning the background \nwriter to elimite the delay.\n\nIn this particular case, their system had already been tuned so \naggressively that I wondered if the background writer was being a problem \nrather than a solution. Reducing the percentage would turn it down a bit; \nso would *increasing* the delay--they had already decreased it \nconsiderably, making it 4X as active as the default.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 14 Sep 2007 12:22:03 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Long Running Commits - Not Checkpoints" } ]
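A minimal postgresql.conf sketch of the less aggressive background-writer settings suggested in the thread above, for an 8.1-era server. The specific numbers are illustrative assumptions, not tested recommendations; checkpoint_warning is raised so that any slow commit can be matched against a checkpoint in the logs. These parameters should take effect on a configuration reload, without a restart.

    # dialed back from the hyperactive values quoted above
    bgwriter_delay = 200          # back to the default; 50 ms polls four times as often
    bgwriter_all_percent = 5      # step down gradually from 20, e.g. 5 points per day
    bgwriter_all_maxpages = 600
    bgwriter_lru_percent = 20.0
    bgwriter_lru_maxpages = 300
    checkpoint_warning = 3600     # warn on every checkpoint so slow commits can be correlated with them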
[ { "msg_contents": "Hi,\n\nWhere are the database index files located in the $PGDATA directory? I was\nthinking on soft linking them to another physical hard disk array.\n\nThanks,\nAzad\n\nHi,Where are the database index files located in the $PGDATA directory? I was thinking on soft linking them to another physical hard disk array.Thanks,Azad", "msg_date": "Fri, 14 Sep 2007 08:20:17 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Index files" }, { "msg_contents": "On Fri, 2007-09-14 at 08:20 +0530, Harsh Azad wrote:\n> Hi,\n> \n> Where are the database index files located in the $PGDATA directory? I\n> was thinking on soft linking them to another physical hard disk array.\n\nyou have to search through pg_class for the \"number\"\n\nAlternatively, you can try using tablespaces.\n\ncreate tablespace indexspace location '/mnt/fastarray'\ncreate index newindex on table (index_1) tablespace indexspace\n\n", "msg_date": "Fri, 14 Sep 2007 10:57:24 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" }, { "msg_contents": "ah.. thanks. Didn't realize table spaces can be mentioned while creating a\nindex. BTW, are soft links ok to use for pg_clog / pg_xlog . I moved the\nexisting directories to /mnt/logs/pglogs and made soft links for both\ndirectories in $PGDATA\n\nThanks\n\nOn 9/14/07, Ow Mun Heng <[email protected]> wrote:\n>\n> On Fri, 2007-09-14 at 08:20 +0530, Harsh Azad wrote:\n> > Hi,\n> >\n> > Where are the database index files located in the $PGDATA directory? I\n> > was thinking on soft linking them to another physical hard disk array.\n>\n> you have to search through pg_class for the \"number\"\n>\n> Alternatively, you can try using tablespaces.\n>\n> create tablespace indexspace location '/mnt/fastarray'\n> create index newindex on table (index_1) tablespace indexspace\n>\n>\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nah.. thanks. Didn't realize table spaces can be mentioned while creating a index. BTW, are soft links ok to use for pg_clog / pg_xlog . I moved the existing directories to /mnt/logs/pglogs and made soft links for both directories in $PGDATA\nThanksOn 9/14/07, Ow Mun Heng <[email protected]> wrote:\nOn Fri, 2007-09-14 at 08:20 +0530, Harsh Azad wrote:> Hi,>> Where are the database index files located in the $PGDATA directory? I> was thinking on soft linking them to another physical hard disk array.\nyou have to search through pg_class for the \"number\"Alternatively, you can try using tablespaces.create tablespace indexspace location '/mnt/fastarray'create index newindex on table (index_1) tablespace indexspace\n-- Harsh [email protected]", "msg_date": "Fri, 14 Sep 2007 08:33:12 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index files" }, { "msg_contents": "On Fri, 2007-09-14 at 08:33 +0530, Harsh Azad wrote:\n> ah.. thanks. Didn't realize table spaces can be mentioned while\n> creating a index. BTW, are soft links ok to use for pg_clog /\n> pg_xlog . I moved the existing directories to /mnt/logs/pglogs and\n> made soft links for both directories in $PGDATA \n\n\nNo idea what is the \"proper\" solution. 
Me being a newbie itself.\nBut from what I've read on the net and google, symlink seems to be the\norder of the day.\n\nperhaps others who are more familiar can comment as I'm lost in this.\n(I'm doing symlinking btw)\n\n", "msg_date": "Fri, 14 Sep 2007 11:09:49 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" }, { "msg_contents": "\"Harsh Azad\" <[email protected]> writes:\n> Where are the database index files located in the $PGDATA directory?\n\nRead\nhttp://www.postgresql.org/docs/8.2/static/storage.html\n\n> I was\n> thinking on soft linking them to another physical hard disk array.\n\nManual symlink management, while not impossible, pretty much sucks\n... especially if your tables are big enough that you actually need to\ndo this. Use a tablespace instead. (If you are on a PG version that\nhasn't got tablespaces, you are more than overdue to upgrade.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 13 Sep 2007 23:33:23 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files " }, { "msg_contents": "Le vendredi 14 septembre 2007 à 11:09 +0800, Ow Mun Heng a écrit :\n> On Fri, 2007-09-14 at 08:33 +0530, Harsh Azad wrote:\n> > ah.. thanks. Didn't realize table spaces can be mentioned while\n> > creating a index. BTW, are soft links ok to use for pg_clog /\n> > pg_xlog . I moved the existing directories to /mnt/logs/pglogs and\n> > made soft links for both directories in $PGDATA \n> \n> \n> No idea what is the \"proper\" solution. Me being a newbie itself.\n> But from what I've read on the net and google, symlink seems to be the\n> order of the day.\n> \n> perhaps others who are more familiar can comment as I'm lost in this.\n> (I'm doing symlinking btw)\n> \n\nYou can also mount a different fs on another disk array in pglogs\n\nI thinks update to index are not written to the wal so having index on a\ndifferent tablespace should reduce read and write on the data\ntablespace, am I wrong ?\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n", "msg_date": "Fri, 14 Sep 2007 10:34:13 +0200", "msg_from": "Philippe Amelant <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" }, { "msg_contents": "Harsh Azad wrote:\n> Hi,\n> \n> Where are the database index files located in the $PGDATA directory? I\n> was thinking on soft linking them to another physical hard disk array.\n> \nI am not an expert, but what I have done is put the Write-Ahead-Log on one\nhard drive, some little-used relations and their indices on a second hard\ndrive, and the main database files on four other drives. These are SCSI hard\ndrives and I have two SCSI controllers. 
/dev/sda and /dev/sdb are on one\ncontroller, and the other four hard drives are on the other controller.\nThese controllers are on a PCI-X bus all their own.\n\nI put $PGDATA (I do not actually set or use that global variable) on /dev/sda.\n\n[/srv/dbms/dataA/pgsql/data]$ ls -l\ntotal 88\n-rw------- 1 postgres postgres 4 Aug 11 13:32 PG_VERSION\ndrwx------ 5 postgres postgres 4096 Aug 11 13:32 base\ndrwx------ 2 postgres postgres 4096 Sep 14 09:16 global\ndrwx------ 2 postgres postgres 4096 Sep 13 23:35 pg_clog\n-rw------- 1 postgres postgres 3396 Aug 11 13:32 pg_hba.conf\n-rw------- 1 root root 3396 Aug 16 14:32 pg_hba.conf.dist\n-rw------- 1 postgres postgres 1460 Aug 11 13:32 pg_ident.conf\ndrwx------ 4 postgres postgres 4096 Aug 11 13:32 pg_multixact\ndrwx------ 2 postgres postgres 4096 Sep 14 09:16 pg_subtrans\ndrwx------ 2 postgres postgres 4096 Aug 12 16:14 pg_tblspc\ndrwx------ 2 postgres postgres 4096 Aug 11 13:32 pg_twophase\ndrwx------ 3 postgres postgres 4096 Sep 14 09:13 pg_xlog\n-rw------- 1 postgres postgres 15526 Sep 11 22:31 postgresql.conf\n-rw------- 1 postgres postgres 13659 Aug 11 13:32 postgresql.conf.dist\n-rw------- 1 postgres postgres 56 Sep 14 07:33 postmaster.opts\n-rw------- 1 postgres postgres 52 Sep 14 07:33 postmaster.pid\n\nIn /dev/sdb are\n\n]$ ls -l\ntotal 12\ndrwxr-x--- 2 postgres postgres 4096 Aug 18 00:00 pg_log\n-rw------- 1 postgres postgres 2132 Sep 14 07:25 pgstartup.log\ndrwx------ 3 postgres postgres 4096 Aug 12 21:06 stock\n\nThe stuff in \"stock\" are little-used tables and their indices.\n\nEverything else is on the other four drives. I put the index for a table on\na separate drive from the tata for the table.\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 09:10:01 up 1:37, 4 users, load average: 5.77, 5.12, 4.58\n", "msg_date": "Fri, 14 Sep 2007 09:22:08 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" }, { "msg_contents": "Great, creating new tablespace for indexes worked! Now the question is\nwhether existing tables/index can be moved to the new tablespace using an\nalter command or the only way possible is to drop and recreate them?\n\nAzad\n\nOn 9/14/07, Jean-David Beyer <[email protected]> wrote:\n>\n> Harsh Azad wrote:\n> > Hi,\n> >\n> > Where are the database index files located in the $PGDATA directory? I\n> > was thinking on soft linking them to another physical hard disk array.\n> >\n> I am not an expert, but what I have done is put the Write-Ahead-Log on one\n> hard drive, some little-used relations and their indices on a second hard\n> drive, and the main database files on four other drives. These are SCSI\n> hard\n> drives and I have two SCSI controllers. 
/dev/sda and /dev/sdb are on one\n> controller, and the other four hard drives are on the other controller.\n> These controllers are on a PCI-X bus all their own.\n>\n> I put $PGDATA (I do not actually set or use that global variable) on\n> /dev/sda.\n>\n> [/srv/dbms/dataA/pgsql/data]$ ls -l\n> total 88\n> -rw------- 1 postgres postgres 4 Aug 11 13:32 PG_VERSION\n> drwx------ 5 postgres postgres 4096 Aug 11 13:32 base\n> drwx------ 2 postgres postgres 4096 Sep 14 09:16 global\n> drwx------ 2 postgres postgres 4096 Sep 13 23:35 pg_clog\n> -rw------- 1 postgres postgres 3396 Aug 11 13:32 pg_hba.conf\n> -rw------- 1 root root 3396 Aug 16 14:32 pg_hba.conf.dist\n> -rw------- 1 postgres postgres 1460 Aug 11 13:32 pg_ident.conf\n> drwx------ 4 postgres postgres 4096 Aug 11 13:32 pg_multixact\n> drwx------ 2 postgres postgres 4096 Sep 14 09:16 pg_subtrans\n> drwx------ 2 postgres postgres 4096 Aug 12 16:14 pg_tblspc\n> drwx------ 2 postgres postgres 4096 Aug 11 13:32 pg_twophase\n> drwx------ 3 postgres postgres 4096 Sep 14 09:13 pg_xlog\n> -rw------- 1 postgres postgres 15526 Sep 11 22:31 postgresql.conf\n> -rw------- 1 postgres postgres 13659 Aug 11 13:32 postgresql.conf.dist\n> -rw------- 1 postgres postgres 56 Sep 14 07:33 postmaster.opts\n> -rw------- 1 postgres postgres 52 Sep 14 07:33 postmaster.pid\n>\n> In /dev/sdb are\n>\n> ]$ ls -l\n> total 12\n> drwxr-x--- 2 postgres postgres 4096 Aug 18 00:00 pg_log\n> -rw------- 1 postgres postgres 2132 Sep 14 07:25 pgstartup.log\n> drwx------ 3 postgres postgres 4096 Aug 12 21:06 stock\n>\n> The stuff in \"stock\" are little-used tables and their indices.\n>\n> Everything else is on the other four drives. I put the index for a table\n> on\n> a separate drive from the tata for the table.\n>\n> --\n> .~. Jean-David Beyer Registered Linux User 85642.\n> /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n> /( )\\ Shrewsbury, New Jersey http://counter.li.org\n> ^^-^^ 09:10:01 up 1:37, 4 users, load average: 5.77, 5.12, 4.58\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n>\n> http://www.postgresql.org/about/donate\n>\n\n\n\n-- \nHarsh Azad\n=======================\[email protected]\n\nGreat, creating new tablespace for indexes worked! Now the question is whether existing tables/index can be moved to the new tablespace using an alter command or the only way possible is to drop and recreate them?Azad\nOn 9/14/07, Jean-David Beyer <[email protected]> wrote:\nHarsh Azad wrote:> Hi,>> Where are the database index files located in the $PGDATA directory? I> was thinking on soft linking them to another physical hard disk array.>I am not an expert, but what I have done is put the Write-Ahead-Log on one\nhard drive, some little-used relations and their indices on a second harddrive, and the main database files on four other drives. These are SCSI harddrives and I have two SCSI controllers. 
/dev/sda and /dev/sdb are on one\ncontroller, and the other four hard drives are on the other controller.These controllers are on a PCI-X bus all their own.I put $PGDATA (I do not actually set or use that global variable) on /dev/sda.\n[/srv/dbms/dataA/pgsql/data]$ ls -ltotal 88-rw------- 1 postgres postgres     4 Aug 11 13:32 PG_VERSIONdrwx------ 5 postgres postgres  4096 Aug 11 13:32 basedrwx------ 2 postgres postgres  4096 Sep 14 09:16 global\ndrwx------ 2 postgres postgres  4096 Sep 13 23:35 pg_clog-rw------- 1 postgres postgres  3396 Aug 11 13:32 pg_hba.conf-rw------- 1 root     root      3396 Aug 16 14:32 pg_hba.conf.dist-rw------- 1 postgres postgres  1460 Aug 11 13:32 pg_ident.conf\ndrwx------ 4 postgres postgres  4096 Aug 11 13:32 pg_multixactdrwx------ 2 postgres postgres  4096 Sep 14 09:16 pg_subtransdrwx------ 2 postgres postgres  4096 Aug 12 16:14 pg_tblspcdrwx------ 2 postgres postgres  4096 Aug 11 13:32 pg_twophase\ndrwx------ 3 postgres postgres  4096 Sep 14 09:13 pg_xlog-rw------- 1 postgres postgres 15526 Sep 11 22:31 postgresql.conf-rw------- 1 postgres postgres 13659 Aug 11 13:32 postgresql.conf.dist-rw------- 1 postgres postgres    56 Sep 14 07:33 \npostmaster.opts-rw------- 1 postgres postgres    52 Sep 14 07:33 postmaster.pidIn /dev/sdb are]$ ls -ltotal 12drwxr-x--- 2 postgres postgres 4096 Aug 18 00:00 pg_log-rw------- 1 postgres postgres 2132 Sep 14 07:25 \npgstartup.logdrwx------ 3 postgres postgres 4096 Aug 12 21:06 stockThe stuff in \"stock\" are little-used tables and their indices.Everything else is on the other four drives. I put the index for a table on\na separate drive from the tata for the table.--  .~.  Jean-David Beyer          Registered Linux User 85642.  /V\\  PGP-Key: 9A2FC99A         Registered Machine   241939. /( )\\ Shrewsbury, New Jersey    \nhttp://counter.li.org ^^-^^ 09:10:01 up 1:37, 4 users, load average: 5.77, 5.12, 4.58---------------------------(end of broadcast)---------------------------TIP 7: You can help support the PostgreSQL project by donating at\n                http://www.postgresql.org/about/donate-- Harsh Azad=======================\[email protected]", "msg_date": "Sat, 15 Sep 2007 01:51:06 +0530", "msg_from": "\"Harsh Azad\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Index files" }, { "msg_contents": "On Sat, 2007-09-15 at 01:51 +0530, Harsh Azad wrote:\n> Great, creating new tablespace for indexes worked! Now the question is\n> whether existing tables/index can be moved to the new tablespace using\n> an alter command or the only way possible is to drop and recreate\n> them?\n\nYou can alter an existing index: \n\nhttp://www.postgresql.org/docs/8.2/static/sql-alterindex.html\n\n\n", "msg_date": "Fri, 14 Sep 2007 13:25:51 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" }, { "msg_contents": "On Friday 14 September 2007, \"Harsh Azad\" <[email protected]> wrote:\n> Great, creating new tablespace for indexes worked! Now the question is\n> whether existing tables/index can be moved to the new tablespace using an\n> alter command or the only way possible is to drop and recreate them?\n>\n\nALTER TABLE table_name set tablespace new_tablespace;\n\nand ALTER INDEX ...\n\n\n-- \nEat right. Exercise regularly. Die anyway.\n\n", "msg_date": "Fri, 14 Sep 2007 13:29:01 -0700", "msg_from": "Alan Hodgson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Index files" } ]
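A short SQL sketch of the tablespace approach worked out in the thread above, for moving indexes (or whole tables) onto a separate disk array. The tablespace name, path, table and index names are hypothetical placeholders; the target directory must already exist, be empty, and be owned by the postgres OS user. Note that SET TABLESPACE physically copies the relation's files and holds an exclusive lock on the relation while doing so.

    CREATE TABLESPACE indexspace LOCATION '/mnt/fastarray/pg_idx';

    -- new indexes can be placed there directly:
    CREATE INDEX my_table_idx ON my_table (some_col) TABLESPACE indexspace;

    -- existing objects can be moved without dropping and recreating them:
    ALTER INDEX my_existing_idx SET TABLESPACE indexspace;
    ALTER TABLE my_table SET TABLESPACE indexspace;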
[ { "msg_contents": "Hello,\n\n\nIn Postgres 7.2.4, COPY command is working fine even if tables have 6 fields\nbut we are copying only 5 fields from the file\n\nBut in Postgres 8.2.0, if table has 6 fields and we need to copy data for 5\nfields only, then we need to specify the column names too in COPY command.\n\nIs there any configurable option, so that without specifying column name in\nCOPY command we can copy records in table as happened in Postgres 7.2.4?\n\n\n\nPlease provide us some help regarding above query.\n\n\n\nThanks,\n\nSoni\n\nHello,\n \n\nIn Postgres 7.2.4, COPY command is working fine even if tables have 6 fields but we are copying only 5 fields from the file \n\nBut in Postgres 8.2.0, if table has 6 fields and we need to copy data for 5 fields only, then we need to specify the column names too in COPY command.\n\nIs there any configurable option, so that without specifying column name in COPY command we can copy records in table as happened in Postgres \n7.2.4?\n \nPlease provide us some help regarding above query.\n \nThanks,\nSoni", "msg_date": "Fri, 14 Sep 2007 15:51:51 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Regarding COPY command from Postgres 8.2.0" }, { "msg_contents": "On 2007-09-14 soni de wrote:\n> In Postgres 7.2.4, COPY command is working fine even if tables have 6\n> fields but we are copying only 5 fields from the file\n> \n> But in Postgres 8.2.0, if table has 6 fields and we need to copy data\n> for 5 fields only, then we need to specify the column names too in\n> COPY command.\n> \n> Is there any configurable option, so that without specifying column\n> name in COPY command we can copy records in table as happened in\n> Postgres 7.2.4?\n\nI don't know if it is possible, but even if it were I'd strongly\nrecommend against it, as you'd be relying on the order the columns were\ncreated in. That's a rather bad idea IMHO. Why would you want to avoid\ngiving the names of the columns in the first place?\n\nRegards\nAnsgar Wiechers\n-- \n\"The Mac OS X kernel should never panic because, when it does, it\nseriously inconveniences the user.\"\n--http://developer.apple.com/technotes/tn2004/tn2118.html\n", "msg_date": "Fri, 14 Sep 2007 14:03:21 +0200", "msg_from": "Ansgar -59cobalt- Wiechers <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding COPY command from Postgres 8.2.0" }, { "msg_contents": "We have upgraded postgres from 7.2.4 to 8.2.0.\nWe have program which executes COPY command and our new database is changed\nhaving some extra columns in some tables.\nBecause of this, COPY commands are failing.\nSo, we wanted the option to COPY the data without specifying column names.\n\nThanks,\nSonal\n\n\nOn 9/14/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\n>\n> On 2007-09-14 soni de wrote:\n> > In Postgres 7.2.4, COPY command is working fine even if tables have 6\n> > fields but we are copying only 5 fields from the file\n> >\n> > But in Postgres 8.2.0, if table has 6 fields and we need to copy data\n> > for 5 fields only, then we need to specify the column names too in\n> > COPY command.\n> >\n> > Is there any configurable option, so that without specifying column\n> > name in COPY command we can copy records in table as happened in\n> > Postgres 7.2.4?\n>\n> I don't know if it is possible, but even if it were I'd strongly\n> recommend against it, as you'd be relying on the order the columns were\n> created in. That's a rather bad idea IMHO. 
Why would you want to avoid\n> giving the names of the columns in the first place?\n>\n> Regards\n> Ansgar Wiechers\n> --\n> \"The Mac OS X kernel should never panic because, when it does, it\n> seriously inconveniences the user.\"\n> --http://developer.apple.com/technotes/tn2004/tn2118.html\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 9: In versions below 8.0, the planner will ignore your desire to\n> choose an index scan if your joining column's datatypes do not\n> match\n>\n\nWe have upgraded postgres from 7.2.4 to 8.2.0. \nWe have program which executes COPY command and our new database is changed having some extra columns in some tables. \nBecause of this, COPY commands are failing. \nSo, we wanted the option to COPY the data without specifying column names.\n \nThanks,\nSonal \nOn 9/14/07, Ansgar -59cobalt- Wiechers <[email protected]> wrote:\nOn 2007-09-14 soni de wrote:> In Postgres 7.2.4, COPY command is working fine even if tables have 6\n> fields but we are copying only 5 fields from the file>> But in Postgres 8.2.0, if table has 6 fields and we need to copy data> for 5 fields only, then we need to specify the column names too in\n> COPY command.>> Is there any configurable option, so that without specifying column> name in COPY command we can copy records in table as happened in> Postgres 7.2.4?I don't know if it is possible, but even if it were I'd strongly\nrecommend against it, as you'd be relying on the order the columns werecreated in. That's a rather bad idea IMHO. Why would you want to avoidgiving the names of the columns in the first place?Regards\nAnsgar Wiechers--\"The Mac OS X kernel should never panic because, when it does, itseriously inconveniences the user.\"--http://developer.apple.com/technotes/tn2004/tn2118.html\n---------------------------(end of broadcast)---------------------------TIP 9: In versions below 8.0, the planner will ignore your desire to      choose an index scan if your joining column's datatypes do not\n      match", "msg_date": "Tue, 18 Sep 2007 10:01:46 +0530", "msg_from": "\"soni de\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Regarding COPY command from Postgres 8.2.0" }, { "msg_contents": "On 9/17/07, soni de <[email protected]> wrote:\n> We have upgraded postgres from 7.2.4 to 8.2.0.\n\nThis is one of the joys of 8.x over 7.2.x think of it like a different\nsql product rather than an \"upgrade.\" Its foundations are different.\n7.4.x is still supported, and would have been a smoother upgrade for\nyou with less deprecations removed. 
To insert such a feature dependent\non column order without it being implied in the spec would be surely\nbe something deprecated quite quickly.\n\n> We have program which executes COPY command and our new database is changed\n> having some extra columns in some tables.\n> Because of this, COPY commands are failing.\n> So, we wanted the option to COPY the data without specifying column names.\n>\n\nThe only way I can see a request, that allows for such bad-practice,\nbeing approved would be if you were to use the '*' operator elegantly;\nand, even then it would require you to update your code, only slightly\nto a lesser degree than writing the column names.\n\nEvan Carroll\n", "msg_date": "Tue, 18 Sep 2007 09:53:28 -0500", "msg_from": "\"Evan Carroll\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding COPY command from Postgres 8.2.0" }, { "msg_contents": "On 9/17/07, soni de <[email protected]> wrote:\n> We have upgraded postgres from 7.2.4 to 8.2.0.\n> We have program which executes COPY command and our new database is changed\n> having some extra columns in some tables.\n> Because of this, COPY commands are failing.\n> So, we wanted the option to COPY the data without specifying column names.\n\nCan't you just edit the import to have the newer 8.2.x syntax of\n\nCOPY table (field1, field2, field3, field4) FROM stdin;\n\n???\n\nAnd please tell me you aren't running 8.2.0, but 8.2.4 or now that it\njust came out, 8.2.5. Those minor point releases contain a lot of bug\nfixes, and you're just asking for trouble by running a .0 release.\n", "msg_date": "Tue, 18 Sep 2007 10:04:52 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Regarding COPY command from Postgres 8.2.0" } ]
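For reference, a hedged sketch of the explicit-column-list COPY that the replies above recommend for 8.2 (table, column, and file names are hypothetical). Columns omitted from the list are filled with their defaults, which is what lets a five-column data file load into a six-column table. The file-path form runs on the server and requires superuser; client programs should use the STDIN form instead.

    COPY my_table (col1, col2, col3, col4, col5) FROM '/path/on/server/data.txt';

    -- from a client connection (e.g. psql's \copy or a driver's copy API):
    COPY my_table (col1, col2, col3, col4, col5) FROM STDIN;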
[ { "msg_contents": "Hi all,\n\nI have a problem with DELETE performance with postgres 7.4.\n\nI have a database with 2 great tables (about 150,000 rows) continuously\nupdated, with 1000 - 1200 INSERT per second and 2 or 3 huge DELETE per\nminute, in which we delete almost all the rows inserted in the 2 tables\nduring the previous minute.\n\nI have a single, indexed foreign key between the 2 tables.\n\n \n\nIn this scenario we have always a problem with the delete:\n\nFor 1 or 2 hours we update only one table, and everything goes ok, where\nDELETE last at most 6 or 7 seconds.\n\nThen for a minute we do INSERT on both table, and everything continue\ngoing ok, with DELETE that last about 10 seconds.\n\n From that moment on, DELETES become timeless, and last for 240 and more\nseconds! \n\nThen I can't recover from this state because INSERT continue with the\nsame rate and DELETE become more and more slow.\n\nI do a vacuum analyze every minute.\n\n \n\nWhat can I do to avoid or at least limit that problem?\n\n \n\nI will be graceful to everyone who could help me.\n\n \n\nHi,\n\nGianluca\n\n \n\n\nInternet Email Confidentiality Footer\n-----------------------------------------------------------------------------------------------------\nLa presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n\nThis e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n-----------------------------------------------------------------------------------------------------\n\n\n\n\n\n\n\n\n\n\nHi all,\nI have a problem with DELETE performance with\npostgres 7.4.\nI have a database with 2 great tables\n(about 150,000 rows) continuously updated, with 1000 – 1200 INSERT per\nsecond and 2 or 3 huge DELETE per minute, in which we delete almost all the\nrows inserted in the 2 tables during the previous minute.\nI have a single, indexed foreign key\nbetween the 2 tables.\n \nIn this scenario we have always a problem\nwith the delete:\nFor 1 or 2 hours we update only one table,\nand everything goes ok, where DELETE last at most 6 or 7 seconds.\nThen for a minute we do INSERT on both\ntable, and everything continue going ok, with DELETE that last about 10\nseconds.\nFrom that moment on, DELETES become timeless,\nand last for 240 and more seconds! 
\nThen I can’t recover from this state because\nINSERT continue with the same rate and DELETE become more and more slow.\nI do a vacuum analyze every minute.\n \nWhat can I do to avoid or at least limit\nthat problem?\n \nI will be graceful to everyone who could help\nme.\n \nHi,\nGianluca\n \n\n\n \n\nInternet \nEmail Confidentiality Footer\n\n\n********************************************************************************************************************************************\n\nLa presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n\nThis e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n\n********************************************************************************************************************************************", "msg_date": "Mon, 17 Sep 2007 09:48:09 +0200", "msg_from": "\"Galantucci Giovanni\" <[email protected]>", "msg_from_op": true, "msg_subject": "DELETE queries slow down" }, { "msg_contents": "Galantucci Giovanni wrote:\n> I have a problem with DELETE performance with postgres 7.4.\n\nYou should consider upgrading. While I don't recall any particular\nenhancements that would directly help with this problem, 8.2 is\ngenerally faster.\n\n> I have a database with 2 great tables (about 150,000 rows) continuously\n> updated, with 1000 - 1200 INSERT per second and 2 or 3 huge DELETE per\n> minute, in which we delete almost all the rows inserted in the 2 tables\n> during the previous minute.\n> \n> I have a single, indexed foreign key between the 2 tables.\n> \n> \n> \n> In this scenario we have always a problem with the delete:\n> \n> For 1 or 2 hours we update only one table, and everything goes ok, where\n> DELETE last at most 6 or 7 seconds.\n> \n> Then for a minute we do INSERT on both table, and everything continue\n> going ok, with DELETE that last about 10 seconds.\n> \n> From that moment on, DELETES become timeless, and last for 240 and more\n> seconds! \n> \n> Then I can't recover from this state because INSERT continue with the\n> same rate and DELETE become more and more slow.\n\nI suspect that at first the tables fit in memory, and operations are\ntherefore fast. But after they grow beyond a certain point, they no\nlonger fit in memory, and you start doing I/O which is slow.\n\n> I do a vacuum analyze every minute.\n\nI'd suggest doing a VACUUM (no analyze) after every DELETE.\n\nHave you checked the EXPLAIN ANALYZE output of the DELETE? 
It might be\nchoosing a bad plan after the table grows.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 17 Sep 2007 10:18:46 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE queries slow down" }, { "msg_contents": "\"Heikki Linnakangas\" <[email protected]> writes:\n\n> Galantucci Giovanni wrote:\n>\n>> For 1 or 2 hours we update only one table, and everything goes ok, where\n>> DELETE last at most 6 or 7 seconds.\n>> \n>> Then for a minute we do INSERT on both table, and everything continue\n>> going ok, with DELETE that last about 10 seconds.\n>> \n>> From that moment on, DELETES become timeless, and last for 240 and more\n>> seconds! \n\nWhat do the inserts and deletes actually look like? Are there subqueries or\njoins or are they just inserting values and deleting simple where clauses?\n\nAnd are these in autocommit mode or are you running multiple commands in a\nsingle transaction?\n\nGenerally it's faster to run more commands in a single transaction but what\nI'm worried about is that you may have a transaction open which you aren't\ncommitting for a long time. This can stop vacuum from being able to clean up\ndead space and if it's in the middle of a query can actually cause vacuum to\nget stuck waiting for the query to finish using the page it's using.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 17 Sep 2007 11:21:44 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: DELETE queries slow down" }, { "msg_contents": "I perform simple INSERT and simple where-clause DELETE.\nI also force a commit after every DELETE.\nMy two tables are about these:\n\nTABLE_A\nColumn_1 | column2 | .......\n\nTABLE_B\nColumn_1B foreign key references TABLE_A(column_1) on delete cascade | .........\n\nEvery row in TABLE_B is also present in TABLE_A, but the contrary is not true.\nAfter hours in which I insert and delete only on TABLE_A (everything ok), I start inserting also on TABLE_B, exploiting the constrain on column_1B. After the first DELETE I perform on both tables, each following DELETE lasts for minutes, with cpu usage on 99,9%.\nI tried also to perform a VACUUM after each DELETE, but had no benefits.\nEven the EXPLAIN ANALYZE of the DELETE shows no changes with respect to the previous DELETEs: it uses an index on column_1 of TABLE_A.\nMy doubt is that the query planner is not enough fast to follow sudden changes in the way I use the DB, is there a way in which I can help it to adjust its statistics and its query planner more quickly?\nMy other doubt is that the foreign key on TABLE_B is a problem when I try to delete from TABLE_A, and postgres tries to find nonexistent constrained rows on TABLE_B.\n\nThank you for our help\n\nGianluca Galantucci\n\n-----Messaggio originale-----\nDa: Gregory Stark [mailto:[email protected]] \nInviato: lunedì 17 settembre 2007 12.22\nA: Heikki Linnakangas\nCc: Galantucci Giovanni; [email protected]\nOggetto: Re: DELETE queries slow down\n\n\"Heikki Linnakangas\" <[email protected]> writes:\n\n> Galantucci Giovanni wrote:\n>\n>> For 1 or 2 hours we update only one table, and everything goes ok, where\n>> DELETE last at most 6 or 7 seconds.\n>> \n>> Then for a minute we do INSERT on both table, and everything continue\n>> going ok, with DELETE that last about 10 seconds.\n>> \n>> From that moment on, DELETES become timeless, and last for 240 and more\n>> seconds! 
\n\nWhat do the inserts and deletes actually look like? Are there subqueries or\njoins or are they just inserting values and deleting simple where clauses?\n\nAnd are these in autocommit mode or are you running multiple commands in a\nsingle transaction?\n\nGenerally it's faster to run more commands in a single transaction but what\nI'm worried about is that you may have a transaction open which you aren't\ncommitting for a long time. This can stop vacuum from being able to clean up\ndead space and if it's in the middle of a query can actually cause vacuum to\nget stuck waiting for the query to finish using the page it's using.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n\n\nInternet Email Confidentiality Footer\n-----------------------------------------------------------------------------------------------------\nLa presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n\nThis e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n-----------------------------------------------------------------------------------------------------\n\n", "msg_date": "Tue, 18 Sep 2007 09:17:01 +0200", "msg_from": "\"Galantucci Giovanni\" <[email protected]>", "msg_from_op": true, "msg_subject": "R: DELETE queries slow down" }, { "msg_contents": "In response to \"Galantucci Giovanni\" <[email protected]>:\n\n> I perform simple INSERT and simple where-clause DELETE.\n> I also force a commit after every DELETE.\n\nDo you mean that you delete 1 row at a time? This is slower than\nbatching your deletes.\n\n> My two tables are about these:\n> \n> TABLE_A\n> Column_1 | column2 | .......\n> \n> TABLE_B\n> Column_1B foreign key references TABLE_A(column_1) on delete cascade | .........\n> \n> Every row in TABLE_B is also present in TABLE_A, but the contrary is not true.\n> After hours in which I insert and delete only on TABLE_A (everything ok), I start inserting also on TABLE_B, exploiting the constrain on column_1B. After the first DELETE I perform on both tables, each following DELETE lasts for minutes, with cpu usage on 99,9%.\n> I tried also to perform a VACUUM after each DELETE, but had no benefits.\n> Even the EXPLAIN ANALYZE of the DELETE shows no changes with respect to the previous DELETEs: it uses an index on column_1 of TABLE_A.\n\nAre you unable to provide these details? (i.e. 
output of explain, the\nactual table schema, actual queries) Without them, the question is\nvery vague and difficult to give advice on.\n\nIf the planner comes up with the same plan whether running fast or slow,\nthe question is what part of that plan is no longer valid (what part's\nactual time no longer matches it's predicted time)\n\n> My doubt is that the query planner is not enough fast to follow sudden changes in the way I use the DB, is there a way in which I can help it to adjust its statistics and its query planner more quickly?\n\nSee:\nhttp://www.postgresql.org/docs/8.2/static/sql-analyze.html\nwhich also has links to other information on this topic.\n\nIf you can demonstrate that the statistics are stale, you might benefit\nfrom manual analyze after large operations.\n\n> My other doubt is that the foreign key on TABLE_B is a problem when I try to delete from TABLE_A, and postgres tries to find nonexistent constrained rows on TABLE_B.\n\nIt's quite possible, considering the fact that you seem to be CPU bound.\n\n> \n> -----Messaggio originale-----\n> Da: Gregory Stark [mailto:[email protected]] \n> Inviato: lunedì 17 settembre 2007 12.22\n> A: Heikki Linnakangas\n> Cc: Galantucci Giovanni; [email protected]\n> Oggetto: Re: DELETE queries slow down\n> \n> \"Heikki Linnakangas\" <[email protected]> writes:\n> \n> > Galantucci Giovanni wrote:\n> >\n> >> For 1 or 2 hours we update only one table, and everything goes ok, where\n> >> DELETE last at most 6 or 7 seconds.\n> >> \n> >> Then for a minute we do INSERT on both table, and everything continue\n> >> going ok, with DELETE that last about 10 seconds.\n> >> \n> >> From that moment on, DELETES become timeless, and last for 240 and more\n> >> seconds! \n> \n> What do the inserts and deletes actually look like? Are there subqueries or\n> joins or are they just inserting values and deleting simple where clauses?\n> \n> And are these in autocommit mode or are you running multiple commands in a\n> single transaction?\n> \n> Generally it's faster to run more commands in a single transaction but what\n> I'm worried about is that you may have a transaction open which you aren't\n> committing for a long time. This can stop vacuum from being able to clean up\n> dead space and if it's in the middle of a query can actually cause vacuum to\n> get stuck waiting for the query to finish using the page it's using.\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Tue, 18 Sep 2007 12:18:45 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: R: DELETE queries slow down" }, { "msg_contents": "No, I perform a single DELETE for about 80000/100000 rows at a time.\n\n \n\nYesterday I tried to raise the parameter default_statistics_target on the file postgresql.conf, setting it to 50 (previously it was set to 10) and everything went ok.\n\nIt seems that postgres needs some time to adapt itself to sudden changes in the way I use the DB, maybe to adapt its planner to the new way of use. I think that tuning this parameter could be enough to help postgres update it's planner faster.\n\n \n\nDo you think it could be reasonable? 
\n\n \n\n-----Messaggio originale-----\nDa: Bill Moran [mailto:[email protected]] \nInviato: martedì 18 settembre 2007 18.19\nA: Galantucci Giovanni\nCc: [email protected]\nOggetto: Re: [PERFORM] R: DELETE queries slow down\n\n \n\nIn response to \"Galantucci Giovanni\" <[email protected]>:\n\n \n\n> I perform simple INSERT and simple where-clause DELETE.\n\n> I also force a commit after every DELETE.\n\n \n\nDo you mean that you delete 1 row at a time? This is slower than\n\nbatching your deletes.\n\n \n\n> My two tables are about these:\n\n> \n\n> TABLE_A\n\n> Column_1 | column2 | .......\n\n> \n\n> TABLE_B\n\n> Column_1B foreign key references TABLE_A(column_1) on delete cascade | .........\n\n> \n\n> Every row in TABLE_B is also present in TABLE_A, but the contrary is not true.\n\n> After hours in which I insert and delete only on TABLE_A (everything ok), I start inserting also on TABLE_B, exploiting the constrain on column_1B. After the first DELETE I perform on both tables, each following DELETE lasts for minutes, with cpu usage on 99,9%.\n\n> I tried also to perform a VACUUM after each DELETE, but had no benefits.\n\n> Even the EXPLAIN ANALYZE of the DELETE shows no changes with respect to the previous DELETEs: it uses an index on column_1 of TABLE_A.\n\n \n\nAre you unable to provide these details? (i.e. output of explain, the\n\nactual table schema, actual queries) Without them, the question is\n\nvery vague and difficult to give advice on.\n\n \n\nIf the planner comes up with the same plan whether running fast or slow,\n\nthe question is what part of that plan is no longer valid (what part's\n\nactual time no longer matches it's predicted time)\n\n \n\n> My doubt is that the query planner is not enough fast to follow sudden changes in the way I use the DB, is there a way in which I can help it to adjust its statistics and its query planner more quickly?\n\n \n\nSee:\n\nhttp://www.postgresql.org/docs/8.2/static/sql-analyze.html\n\nwhich also has links to other information on this topic.\n\n \n\nIf you can demonstrate that the statistics are stale, you might benefit\n\nfrom manual analyze after large operations.\n\n \n\n> My other doubt is that the foreign key on TABLE_B is a problem when I try to delete from TABLE_A, and postgres tries to find nonexistent constrained rows on TABLE_B.\n\n \n\nIt's quite possible, considering the fact that you seem to be CPU bound.\n\n \n\n> \n\n> -----Messaggio originale-----\n\n> Da: Gregory Stark [mailto:[email protected]] \n\n> Inviato: lunedì 17 settembre 2007 12.22\n\n> A: Heikki Linnakangas\n\n> Cc: Galantucci Giovanni; [email protected]\n\n> Oggetto: Re: DELETE queries slow down\n\n> \n\n> \"Heikki Linnakangas\" <[email protected]> writes:\n\n> \n\n> > Galantucci Giovanni wrote:\n\n> >\n\n> >> For 1 or 2 hours we update only one table, and everything goes ok, where\n\n> >> DELETE last at most 6 or 7 seconds.\n\n> >> \n\n> >> Then for a minute we do INSERT on both table, and everything continue\n\n> >> going ok, with DELETE that last about 10 seconds.\n\n> >> \n\n> >> From that moment on, DELETES become timeless, and last for 240 and more\n\n> >> seconds! \n\n> \n\n> What do the inserts and deletes actually look like? 
Are there subqueries or\n\n> joins or are they just inserting values and deleting simple where clauses?\n\n> \n\n> And are these in autocommit mode or are you running multiple commands in a\n\n> single transaction?\n\n> \n\n> Generally it's faster to run more commands in a single transaction but what\n\n> I'm worried about is that you may have a transaction open which you aren't\n\n> committing for a long time. This can stop vacuum from being able to clean up\n\n> dead space and if it's in the middle of a query can actually cause vacuum to\n\n> get stuck waiting for the query to finish using the page it's using.\n\n \n\n-- \n\nBill Moran\n\nCollaborative Fusion Inc.\n\nhttp://people.collaborativefusion.com/~wmoran/\n\n \n\[email protected]\n\nPhone: 412-422-3463x4023\n\n\nInternet Email Confidentiality Footer\n-----------------------------------------------------------------------------------------------------\nLa presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n\nThis e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n-----------------------------------------------------------------------------------------------------\n\n\n\n\n\n\n\n\n\n\nNo, I perform a single DELETE for about 80000/100000\nrows at a time.\n \nYesterday I tried to raise the parameter default_statistics_target on the file\npostgresql.conf, setting it to 50 (previously it was set to 10) and everything\nwent ok.\nIt seems that postgres needs some time to adapt itself\nto sudden changes in the way I use the DB, maybe to adapt its planner to the\nnew way of use. I think that tuning this parameter could be enough to help\npostgres update it’s planner faster.\n \nDo you think it could be reasonable? \n \n-----Messaggio originale-----\nDa: Bill Moran [mailto:[email protected]] \nInviato: martedì 18 settembre 2007 18.19\nA: Galantucci Giovanni\nCc: [email protected]\nOggetto: Re: [PERFORM] R: DELETE queries slow down\n \nIn response to \"Galantucci Giovanni\"\n<[email protected]>:\n \n> I perform simple INSERT and simple where-clause\nDELETE.\n> I also force a commit after every DELETE.\n \nDo you mean that you delete 1 row at a time? 
This is\nslower than\nbatching your deletes.\n \n> My two tables are about these:\n> \n> TABLE_A\n> Column_1 | column2 | .......\n> \n> TABLE_B\n> Column_1B foreign key references\nTABLE_A(column_1) on delete cascade | .........\n> \n> Every row in TABLE_B is also present in TABLE_A,\nbut the contrary is not true.\n> After hours in which I insert and delete only on\nTABLE_A (everything ok), I start inserting also on TABLE_B, exploiting the\nconstrain on column_1B. After the first DELETE I perform on both tables, each\nfollowing DELETE lasts for minutes, with cpu usage on 99,9%.\n> I tried also to perform a VACUUM after each\nDELETE, but had no benefits.\n> Even the EXPLAIN ANALYZE of the DELETE shows no\nchanges with respect to the previous DELETEs: it uses an index on column_1 of\nTABLE_A.\n \nAre you unable to provide these details? (i.e. output\nof explain, the\nactual table schema, actual queries) Without them,\nthe question is\nvery vague and difficult to give advice on.\n \nIf the planner comes up with the same plan whether\nrunning fast or slow,\nthe question is what part of that plan is no longer\nvalid (what part's\nactual time no longer matches it's predicted time)\n \n> My doubt is that the query planner is not enough\nfast to follow sudden changes in the way I use the DB, is there a way in which\nI can help it to adjust its statistics and its query planner more quickly?\n \nSee:\nhttp://www.postgresql.org/docs/8.2/static/sql-analyze.html\nwhich also has links to other information on this\ntopic.\n \nIf you can demonstrate that the statistics are stale,\nyou might benefit\nfrom manual analyze after large operations.\n \n> My other doubt is that the foreign key on TABLE_B\nis a problem when I try to delete from TABLE_A, and postgres tries to find\nnonexistent constrained rows on TABLE_B.\n \nIt's quite possible, considering the fact that you\nseem to be CPU bound.\n \n> \n> -----Messaggio originale-----\n> Da: Gregory Stark [mailto:[email protected]]\n\n> Inviato: lunedì 17 settembre 2007 12.22\n> A: Heikki Linnakangas\n> Cc: Galantucci Giovanni;\[email protected]\n> Oggetto: Re: DELETE queries slow down\n> \n> \"Heikki Linnakangas\"\n<[email protected]> writes:\n> \n> > Galantucci Giovanni wrote:\n> >\n> >> For 1 or 2 hours we update only one\ntable, and everything goes ok, where\n> >> DELETE last at most 6 or 7 seconds.\n> >> \n> >> Then for a minute we do INSERT on both\ntable, and everything continue\n> >> going ok, with DELETE that last about 10\nseconds.\n> >> \n> >> From that moment on, DELETES become\ntimeless, and last for 240 and more\n> >> seconds! \n> \n> What do the inserts and deletes actually look\nlike? Are there subqueries or\n> joins or are they just inserting values and\ndeleting simple where clauses?\n> \n> And are these in autocommit mode or are you\nrunning multiple commands in a\n> single transaction?\n> \n> Generally it's faster to run more commands in a\nsingle transaction but what\n> I'm worried about is that you may have a\ntransaction open which you aren't\n> committing for a long time. 
This can stop vacuum\nfrom being able to clean up\n> dead space and if it's in the middle of a query\ncan actually cause vacuum to\n> get stuck waiting for the query to finish using\nthe page it's using.\n \n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n \[email protected]\nPhone: 412-422-3463x4023\n\n\n \n\nInternet \nEmail Confidentiality Footer\n\n\n********************************************************************************************************************************************\n\nLa presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n\nThis e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n\n********************************************************************************************************************************************", "msg_date": "Wed, 19 Sep 2007 11:46:48 +0200", "msg_from": "\"Galantucci Giovanni\" <[email protected]>", "msg_from_op": true, "msg_subject": "R: R: DELETE queries slow down" }, { "msg_contents": "\"Galantucci Giovanni\" <[email protected]> wrote:\n>\n> No, I perform a single DELETE for about 80000/100000 rows at a time.\n> \n> Yesterday I tried to raise the parameter default_statistics_target on the file postgresql.conf, setting it to 50 (previously it was set to 10) and everything went ok.\n> \n> It seems that postgres needs some time to adapt itself to sudden changes in the way I use the DB, maybe to adapt its planner to the new way of use. I think that tuning this parameter could be enough to help postgres update it's planner faster.\n> \n> Do you think it could be reasonable? \n\nBased on the information you've given and the responses you've made,\nI think you're as likely to roll a 1d6 and get the right solution as\nanything else.\n\nGood luck.\n\n> -----Messaggio originale-----\n> Da: Bill Moran [mailto:[email protected]] \n> Inviato: martedì 18 settembre 2007 18.19\n> A: Galantucci Giovanni\n> Cc: [email protected]\n> Oggetto: Re: [PERFORM] R: DELETE queries slow down\n> \n> \n> \n> In response to \"Galantucci Giovanni\" <[email protected]>:\n> \n> \n> \n> > I perform simple INSERT and simple where-clause DELETE.\n> \n> > I also force a commit after every DELETE.\n> \n> \n> \n> Do you mean that you delete 1 row at a time? 
This is slower than\n> \n> batching your deletes.\n> \n> \n> \n> > My two tables are about these:\n> \n> > \n> \n> > TABLE_A\n> \n> > Column_1 | column2 | .......\n> \n> > \n> \n> > TABLE_B\n> \n> > Column_1B foreign key references TABLE_A(column_1) on delete cascade | .........\n> \n> > \n> \n> > Every row in TABLE_B is also present in TABLE_A, but the contrary is not true.\n> \n> > After hours in which I insert and delete only on TABLE_A (everything ok), I start inserting also on TABLE_B, exploiting the constrain on column_1B. After the first DELETE I perform on both tables, each following DELETE lasts for minutes, with cpu usage on 99,9%.\n> \n> > I tried also to perform a VACUUM after each DELETE, but had no benefits.\n> \n> > Even the EXPLAIN ANALYZE of the DELETE shows no changes with respect to the previous DELETEs: it uses an index on column_1 of TABLE_A.\n> \n> \n> \n> Are you unable to provide these details? (i.e. output of explain, the\n> \n> actual table schema, actual queries) Without them, the question is\n> \n> very vague and difficult to give advice on.\n> \n> \n> \n> If the planner comes up with the same plan whether running fast or slow,\n> \n> the question is what part of that plan is no longer valid (what part's\n> \n> actual time no longer matches it's predicted time)\n> \n> \n> \n> > My doubt is that the query planner is not enough fast to follow sudden changes in the way I use the DB, is there a way in which I can help it to adjust its statistics and its query planner more quickly?\n> \n> \n> \n> See:\n> \n> http://www.postgresql.org/docs/8.2/static/sql-analyze.html\n> \n> which also has links to other information on this topic.\n> \n> \n> \n> If you can demonstrate that the statistics are stale, you might benefit\n> \n> from manual analyze after large operations.\n> \n> \n> \n> > My other doubt is that the foreign key on TABLE_B is a problem when I try to delete from TABLE_A, and postgres tries to find nonexistent constrained rows on TABLE_B.\n> \n> \n> \n> It's quite possible, considering the fact that you seem to be CPU bound.\n> \n> \n> \n> > \n> \n> > -----Messaggio originale-----\n> \n> > Da: Gregory Stark [mailto:[email protected]] \n> \n> > Inviato: lunedì 17 settembre 2007 12.22\n> \n> > A: Heikki Linnakangas\n> \n> > Cc: Galantucci Giovanni; [email protected]\n> \n> > Oggetto: Re: DELETE queries slow down\n> \n> > \n> \n> > \"Heikki Linnakangas\" <[email protected]> writes:\n> \n> > \n> \n> > > Galantucci Giovanni wrote:\n> \n> > >\n> \n> > >> For 1 or 2 hours we update only one table, and everything goes ok, where\n> \n> > >> DELETE last at most 6 or 7 seconds.\n> \n> > >> \n> \n> > >> Then for a minute we do INSERT on both table, and everything continue\n> \n> > >> going ok, with DELETE that last about 10 seconds.\n> \n> > >> \n> \n> > >> From that moment on, DELETES become timeless, and last for 240 and more\n> \n> > >> seconds! \n> \n> > \n> \n> > What do the inserts and deletes actually look like? Are there subqueries or\n> \n> > joins or are they just inserting values and deleting simple where clauses?\n> \n> > \n> \n> > And are these in autocommit mode or are you running multiple commands in a\n> \n> > single transaction?\n> \n> > \n> \n> > Generally it's faster to run more commands in a single transaction but what\n> \n> > I'm worried about is that you may have a transaction open which you aren't\n> \n> > committing for a long time. 
This can stop vacuum from being able to clean up\n> \n> > dead space and if it's in the middle of a query can actually cause vacuum to\n> \n> > get stuck waiting for the query to finish using the page it's using.\n> \n> \n> \n> -- \n> \n> Bill Moran\n> \n> Collaborative Fusion Inc.\n> \n> http://people.collaborativefusion.com/~wmoran/\n> \n> \n> \n> [email protected]\n> \n> Phone: 412-422-3463x4023\n> \n> \n> Internet Email Confidentiality Footer\n> -----------------------------------------------------------------------------------------------------\n> La presente comunicazione, con le informazioni in essa contenute e ogni documento o file allegato, e' rivolta unicamente alla/e persona/e cui e' indirizzata ed alle altre da questa autorizzata/e a riceverla. Se non siete i destinatari/autorizzati siete avvisati che qualsiasi azione, copia, comunicazione, divulgazione o simili basate sul contenuto di tali informazioni e' vietata e potrebbe essere contro la legge (art. 616 C.P., D.Lgs n. 196/2003 Codice in materia di protezione dei dati personali). Se avete ricevuto questa comunicazione per errore, vi preghiamo di darne immediata notizia al mittente e di distruggere il messaggio originale e ogni file allegato senza farne copia alcuna o riprodurne in alcun modo il contenuto. \n> \n> This e-mail and its attachments are intended for the addressee(s) only and are confidential and/or may contain legally privileged information. If you have received this message by mistake or are not one of the addressees above, you may take no action based on it, and you may not copy or show it to anyone; please reply to this e-mail and point out the error which has occurred. \n> -----------------------------------------------------------------------------------------------------\n> \n> \n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\n\[email protected]\nPhone: 412-422-3463x4023\n\n", "msg_date": "Wed, 19 Sep 2007 07:14:51 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: R: R: DELETE queries slow down" }, { "msg_contents": "\"Galantucci Giovanni\" <[email protected]> wrote:\n>\n> No, I perform a single DELETE for about 80000/100000 rows at a time.\n> \n> Yesterday I tried to raise the parameter default_statistics_target on the file postgresql.conf, setting it to 50 (previously it was set to 10) and everything went ok.\n> \n> It seems that postgres needs some time to adapt itself to sudden changes in the way I use the DB, maybe to adapt its planner to the new way of use. I think that tuning this parameter could be enough to help postgres update it's planner faster.\n> \n> Do you think it could be reasonable? \n\nBased on the information you've given and the responses you've made,\nI think you're as likely to roll a 1d6 and get the right solution as\nanything else.\n\nGood luck.\n\n<...snip...>\n\nLOL ... hit the nail right on the head ...\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. 
If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)\n\n\n\n\n\n\n\nRE: R: [PERFORM] R: DELETE queries slow down\n\n\n\n\n\"Galantucci Giovanni\" <[email protected]> wrote:\n>\n> No, I perform a single DELETE for about 80000/100000 rows at a time.\n>\n> Yesterday I tried to raise the parameter default_statistics_target on the file postgresql.conf, setting it to 50 (previously it was set to 10) and everything went ok.\n>\n> It seems that postgres needs some time to adapt itself to sudden changes in the way I use the DB, maybe to adapt its planner to the new way of use. I think that tuning this parameter could be enough to help postgres update it's planner faster.\n>\n> Do you think it could be reasonable?\n\nBased on the information you've given and the responses you've made,\nI think you're as likely to roll a 1d6 and get the right solution as\nanything else.\n\nGood luck.\n\n<...snip...>\n\nLOL ... hit the nail right on the head ...\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, is for the sole use of the intended recipient(s) and may contain confidential and privileged information and must be protected in accordance with those provisions. Any unauthorized review, use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply e-mail and destroy all copies of the original message.\n\n(My corporate masters made me say this.)", "msg_date": "Wed, 19 Sep 2007 05:25:29 -0600", "msg_from": "\"Gregory Williamson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: R: R: DELETE queries slow down" } ]
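As a follow-up sketch of the remedies touched on in the thread above, using the poster's placeholder names (table_a, table_b, column_1, column_1b). PostgreSQL does not automatically index the referencing column of a foreign key, so each cascaded or checked DELETE on table_a may have to scan table_b; adding that index is a standard remedy but only an assumption here, since the thread never confirmed the root cause. The ANALYZE and statistics-target statements mirror what the poster reported trying (default_statistics_target raised from 10 to 50).

    -- Index the referencing column so the ON DELETE CASCADE checks against
    -- table_b can use an index scan instead of repeated sequential scans
    -- (this index is not created automatically for foreign keys).
    CREATE INDEX table_b_column_1b_idx ON table_b (column_1b);

    -- Refresh planner statistics immediately after each large batch of
    -- inserts or deletes, rather than waiting for background maintenance.
    ANALYZE table_a;
    ANALYZE table_b;

    -- Raise the statistics target only for the column used in the DELETE's
    -- WHERE clause, instead of changing default_statistics_target globally.
    ALTER TABLE table_a ALTER COLUMN column_1 SET STATISTICS 50;
    ANALYZE table_a;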
[ { "msg_contents": "Hi all,\n\nPlease see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan below. \nYou can see it's pretty slow. Oddly enough, an index for facility_address_id \nis available but not being used, but I suspect it's questionable whether it \nwould be an improvement.\n\nI knew that the filter was best applied to the results of the join - my \nattempts to restructure the query with subqueries, etc didn't fool the \nplanner - it always figured out a plan that had this problem SEQ SCAN + \nFILTER in it.\n\nFinally, I \"hid\" the condition from the planner with a coalesce function - \nsee \"SOLUTION\" in the \"KLUDGED QUERY\" plan below.\n\nSure enough, a new plan appeared with a remarkable performance improvement!\n\nThe purpose of this query is to find facilities within a geographical area \nwhen the complete address data is missing (hence the facility_address_id is \nNULL).\n\nPG is 8.4.2 on RH linux server with 1GB ram, HDD is RAID 1.\n\nI don't like kludging like this - so any and all help or advice is \nappreciated!\n\nCarlo\n\nORIGINAL QUERY\nselect\n pp.provider_id,\n pp.provider_practice_id,\n nearby.distance\nfrom mdx_core.provider_practice as pp\njoin mdx_core.facility as f\non f.facility_id = pp.facility_id\njoin (select * from mdx_core.zips_in_mile_range('08820', 10)) as nearby\non f.default_country_code = 'US'\n and f.default_postal_code = nearby.zip\nwhere facility_address_id is null\n\nHash Join (cost=30258.99..107702.53 rows=9438 width=16) (actual \ntime=169.516..3064.188 rows=872 loops=1)\n Hash Cond: (pp.facility_id = f.facility_id)\nPROBLEM:\n------------\n -> Seq Scan on provider_practice pp (cost=0.00..74632.55 rows=724429 \nwidth=12) (actual time=0.039..1999.457 rows=728396 loops=1)\n Filter: (facility_address_id IS NULL)\n------------\n -> Hash (cost=29954.15..29954.15 rows=24387 width=12) (actual \ntime=156.668..156.668 rows=907 loops=1)\n -> Nested Loop (cost=0.00..29954.15 rows=24387 width=12) (actual \ntime=149.891..155.343 rows=907 loops=1)\n -> Function Scan on zips_in_mile_range (cost=0.00..12.50 \nrows=1000 width=40) (actual time=149.850..149.920 rows=66 loops=1)\n -> Index Scan using facility_country_postal_code_idx on \nfacility f (cost=0.00..29.64 rows=24 width=15) (actual time=0.015..0.048 \nrows=14 loops=66)\n Index Cond: ((f.default_country_code = 'US'::bpchar) AND \n((f.default_postal_code)::text = zips_in_mile_range.zip))\nTotal runtime: 3065.338 ms\n\n\nKLUDGED QUERY\n\nselect\n pp.provider_id,\n pp.provider_practice_id,\n nearby.distance\nfrom mdx_core.provider_practice as pp\njoin mdx_core.facility as f\non f.facility_id = pp.facility_id\njoin (select * from mdx_core.zips_in_mile_range('08820', 10)) as nearby\non f.default_country_code = 'US'\n and f.default_postal_code = nearby.zip\n and coalesce(pp.facility_address_id, -1) = -1\n\nNested Loop (cost=0.00..112618.87 rows=180 width=16) (actual \ntime=149.680..167.261 rows=872 loops=1)\n -> Nested Loop (cost=0.00..29954.15 rows=24387 width=12) (actual \ntime=149.659..155.018 rows=907 loops=1)\n -> Function Scan on zips_in_mile_range (cost=0.00..12.50 rows=1000 \nwidth=40) (actual time=149.620..149.698 rows=66 loops=1)\n -> Index Scan using facility_country_postal_code_idx on facility f \n(cost=0.00..29.64 rows=24 width=15) (actual time=0.015..0.045 rows=14 \nloops=66)\n Index Cond: ((f.default_country_code = 'US'::bpchar) AND \n((f.default_postal_code)::text = zips_in_mile_range.zip))\nSOLUTION\n-------------\n -> Index Scan using provider_practice_facility_idx on 
provider_practice \npp (cost=0.00..3.38 rows=1 width=12) (actual time=0.007..0.009 rows=1 \nloops=907)\n Index Cond: (f.facility_id = pp.facility_id)\n Filter: (COALESCE(facility_address_id, -1) = -1)\n-------------\nTotal runtime: 168.275 ms\n\n", "msg_date": "Mon, 17 Sep 2007 16:22:50 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Query works when kludged, but would prefer \"best practice\" solution" }, { "msg_contents": "On 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n> Hi all,\n>\n> Please see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan below.\n> You can see it's pretty slow. Oddly enough, an index for facility_address_id\n> is available but not being used, but I suspect it's questionable whether it\n> would be an improvement.\n\nThis looks like it might be the problem tom caught and rigged a solution to:\nhttp://people.planetpostgresql.org/dfetter/index.php?/archives/134-PostgreSQL-Weekly-News-September-03-2007.html\n(look fro band-aid).\n\nIf that's the case, the solution is to wait for 8.2.5 (coming soon).\n\nmerlin\n", "msg_date": "Mon, 17 Sep 2007 20:03:19 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "Well, there goes my dream of getting a recommendation that will deliver a\nblinding insight into how to speed up all of my queries a thousand-fold.\n\nThanks Merlin!\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: September 17, 2007 8:03 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] Query works when kludged, but would prefer \"best\npractice\" solution\n\nOn 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n> Hi all,\n>\n> Please see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan below.\n> You can see it's pretty slow. Oddly enough, an index for\nfacility_address_id\n> is available but not being used, but I suspect it's questionable whether\nit\n> would be an improvement.\n\nThis looks like it might be the problem tom caught and rigged a solution to:\nhttp://people.planetpostgresql.org/dfetter/index.php?/archives/134-PostgreSQ\nL-Weekly-News-September-03-2007.html\n(look fro band-aid).\n\nIf that's the case, the solution is to wait for 8.2.5 (coming soon).\n\nmerlin\n\n\n", "msg_date": "Mon, 17 Sep 2007 20:13:18 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "On 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n> Well, there goes my dream of getting a recommendation that will deliver a\n> blinding insight into how to speed up all of my queries a thousand-fold.\n\nthat's easy...delete your data! :-)\n\nmerlin\n", "msg_date": "Mon, 17 Sep 2007 20:17:49 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "Thanks, it worked. Client happy. 
Big bonus in the mail.\n\n-----Original Message-----\nFrom: Merlin Moncure [mailto:[email protected]] \nSent: September 17, 2007 8:18 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] Query works when kludged, but would prefer \"best\npractice\" solution\n\nOn 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n> Well, there goes my dream of getting a recommendation that will deliver a\n> blinding insight into how to speed up all of my queries a thousand-fold.\n\nthat's easy...delete your data! :-)\n\nmerlin\n\n\n", "msg_date": "Mon, 17 Sep 2007 20:38:58 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "\"Merlin Moncure\" <[email protected]> writes:\n> On 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n>> Please see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan below.\n\n> This looks like it might be the problem tom caught and rigged a solution to:\n> http://people.planetpostgresql.org/dfetter/index.php?/archives/134-PostgreSQL-Weekly-News-September-03-2007.html\n> (look fro band-aid).\n\nNo, fraid not, that was about misestimation of outer joins, and I see no\nouter join here.\n\nWhat I do see is misestimation of a set-returning-function's output:\n\n -> Function Scan on zips_in_mile_range (cost=0.00..12.50 rows=1000 width=40) (actual time=149.850..149.920 rows=66 loops=1)\n\nThere's not any very nice way to improve that in existing releases :-(.\nIn 8.3 it will be possible to add a ROWS option to function definitions\nto replace the default \"1000 rows\" estimate with some other number, but\nthat still helps little if the number of result rows is widely variable.\n\nAs far as kluges go: rather than kluging conditions affecting unrelated\ntables, maybe you could put in a dummy constraint on the function's\noutput --- ie, a condition you know is always true, but the planner\nwon't know that, and will scale down its result-rows estimate accordingly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 17 Sep 2007 23:29:51 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "Hi Tom,\n\nThanks for the suggestion - this concept is pretty new to me. Can you expand\na bit on the idea of how to place such a \"dummy\" constraint on a function,\nand the conditions on which it affects the planner? 
Would this require that\nconstraint_exclusion be set on?\n\n(When I go to sleep, I have a dream -- and in this dream Tom writes a\nbrilliant three line code sample that makes it all clear to me, and I wake\nup a PostgreSQL guru)\n\n;-)\n\nCarlo\n \n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]] \nSent: September 17, 2007 11:30 PM\nTo: Merlin Moncure\nCc: Carlo Stonebanks; [email protected]\nSubject: Re: [PERFORM] Query works when kludged, but would prefer \"best\npractice\" solution \n\n\"Merlin Moncure\" <[email protected]> writes:\n> On 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n>> Please see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan\nbelow.\n\n> This looks like it might be the problem tom caught and rigged a solution\nto:\n>\nhttp://people.planetpostgresql.org/dfetter/index.php?/archives/134-PostgreSQ\nL-Weekly-News-September-03-2007.html\n> (look fro band-aid).\n\nNo, fraid not, that was about misestimation of outer joins, and I see no\nouter join here.\n\nWhat I do see is misestimation of a set-returning-function's output:\n\n -> Function Scan on zips_in_mile_range (cost=0.00..12.50 rows=1000\nwidth=40) (actual time=149.850..149.920 rows=66 loops=1)\n\nThere's not any very nice way to improve that in existing releases :-(.\nIn 8.3 it will be possible to add a ROWS option to function definitions\nto replace the default \"1000 rows\" estimate with some other number, but\nthat still helps little if the number of result rows is widely variable.\n\nAs far as kluges go: rather than kluging conditions affecting unrelated\ntables, maybe you could put in a dummy constraint on the function's\noutput --- ie, a condition you know is always true, but the planner\nwon't know that, and will scale down its result-rows estimate accordingly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 18 Sep 2007 02:29:17 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n> Thanks for the suggestion - this concept is pretty new to me. Can you expand\n> a bit on the idea of how to place such a \"dummy\" constraint on a function,\n> and the conditions on which it affects the planner?\n\nLet's say that you know that the function's result column \"x\" can only\nrange from 1 to 1000. The planner does not know that, and has no\nstatistics from which it could guess, so it's going to fall back on\ndefault selectivity estimates for any WHERE clause involving x.\nSo for instance you could tack on something like\n\nFROM ... (select * from myfunc() where x <= 1000) ...\n\nwhich will change the actual query result not at all, but will cause the\nplanner to reduce its estimate of the number of rows out by whatever the\ndefault selectivity estimate for an inequality is (from memory, 0.333,\nbut try it and see). If that's too much or not enough, you could try\nsome other clauses that will never really reject any rows, for instance\n\n\twhere x >= 1 and x <= 1000\n\twhere x <> -1\n\twhere x is not null\n\nOf course this technique depends on knowing something that will always\nbe true about your data, but most people can think of something...\n\nNow this is not going to affect the evaluation of the function itself at\nall. 
What it will do is affect the shape of a join plan built atop that\nfunction scan, since joins are pretty much all about minimizing the\nnumber of intermediate rows.\n\n> Would this require that\n> constraint_exclusion be set on?\n\nNo.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 18 Sep 2007 10:08:39 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "I think Tom is talking about something like this:\n\nexplain select * from foo();\n QUERY PLAN\n----------------------------------------------------------------------\n Function Scan on foo (cost=0.00..12.50 rows=1000 width=50)\n\nThe planner is estimating the function will return 1000 rows.\n\n\nexplain select * from foo() where id > 0;\n QUERY PLAN\n---------------------------------------------------------------------\n Function Scan on foo (cost=0.00..15.00 rows=333 width=50)\n Filter: (id > 0)\n\nIn the second case I am asking for all ids greater than zero, but my ids are\nall positive integers. The planner doesn't know that, so it assumes the\nwhere clause will decrease the number of results.\n\nI would still say this is a kludge, and since you already found a kludge\nthat works, this may not help you at all.\n\nDave\n \n\n-----Original Message-----\nFrom: [email protected]\n[mailto:[email protected]] On Behalf Of Carlo\nStonebanks\nSent: Tuesday, September 18, 2007 1:29 AM\nTo: 'Tom Lane'; 'Merlin Moncure'\nCc: [email protected]\nSubject: Re: [PERFORM] Query works when kludged, but would prefer \"best\npractice\" solution \n\nHi Tom,\n\nThanks for the suggestion - this concept is pretty new to me. Can you expand\na bit on the idea of how to place such a \"dummy\" constraint on a function,\nand the conditions on which it affects the planner? 
Would this require that\nconstraint_exclusion be set on?\n\n(When I go to sleep, I have a dream -- and in this dream Tom writes a\nbrilliant three line code sample that makes it all clear to me, and I wake\nup a PostgreSQL guru)\n\n;-)\n\nCarlo\n \n-----Original Message-----\nFrom: Tom Lane [mailto:[email protected]]\nSent: September 17, 2007 11:30 PM\nTo: Merlin Moncure\nCc: Carlo Stonebanks; [email protected]\nSubject: Re: [PERFORM] Query works when kludged, but would prefer \"best\npractice\" solution \n\n\"Merlin Moncure\" <[email protected]> writes:\n> On 9/17/07, Carlo Stonebanks <[email protected]> wrote:\n>> Please see the section marked as \"PROBLEM\" in \"ORIGINAL QUERY\" plan\nbelow.\n\n> This looks like it might be the problem tom caught and rigged a \n> solution\nto:\n>\nhttp://people.planetpostgresql.org/dfetter/index.php?/archives/134-PostgreSQ\nL-Weekly-News-September-03-2007.html\n> (look fro band-aid).\n\nNo, fraid not, that was about misestimation of outer joins, and I see no\nouter join here.\n\nWhat I do see is misestimation of a set-returning-function's output:\n\n -> Function Scan on zips_in_mile_range (cost=0.00..12.50 rows=1000\nwidth=40) (actual time=149.850..149.920 rows=66 loops=1)\n\nThere's not any very nice way to improve that in existing releases :-(.\nIn 8.3 it will be possible to add a ROWS option to function definitions to\nreplace the default \"1000 rows\" estimate with some other number, but that\nstill helps little if the number of result rows is widely variable.\n\nAs far as kluges go: rather than kluging conditions affecting unrelated\ntables, maybe you could put in a dummy constraint on the function's output\n--- ie, a condition you know is always true, but the planner won't know\nthat, and will scale down its result-rows estimate accordingly.\n\n\t\t\tregards, tom lane\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: don't forget to increase your free space map settings\n\n", "msg_date": "Tue, 18 Sep 2007 09:11:27 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query works when kludged,\n\tbut would prefer \"best practice\" solution" }, { "msg_contents": "My client \"publishes\" an \"edition\" of their DB from his production site to\nhis hosted web/db server. This is done by FTPing a backup of the DB to his\nhosting provider.\n\nImmediately after a \"publication\" (restore to web/db server) we immediately\nrun VACUUM ANALYZE to make sure the statistics and row estimates are\ncorrect.\n\nThe problem is, after this initial VACUUM ANALYZE, the row estimates in\nquery plans are off by several orders of magnitude. For example, a\ndisastrous plan was created because the planner estimated 4K rows when in\nfact it returned 980K rows.\n\nSometimes - a day or two later - the plans return to \"normal\" and row\nestimates are closer to realistic values. Guessing that there may be\nbackground events that are correcting the row estimates over time, I ran an\nANALYZE on the DB - and sure enough - the row estimates corrected\nthemselves. The puzzling thing is, there have been no writes of any sort to\nthe data - there is no reason for the stats to have changed.\n\nI believe that a VACUUM may not be necessary for a newly restored DB, but I\nassumed that VACUUM ANALYZE and ANALYZE have the same net result. Am I\nwrong?\n\nIf I am not wrong (i.e. 
VACUUM ANALYZE and ANALYZE should produce the same\nresults) why would the performance improve on a DB that has seen no\ntransactional activity only after the SECOND try?\n\nPG 8.2.4 on RH LINUX 1GB RAM SCSI RAID 1\n\nCarlo\n\n\n", "msg_date": "Tue, 18 Sep 2007 15:59:44 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Performance improves only after repeated VACUUM/ANALYZE" }, { "msg_contents": "I am noticing that my queries are spending a lot of time in nested loops.\nThe table/index row estimates are not bad, but the nested loops can be off\nby a factor of 50. In any case, they are always too high.\n\n \n\nIf this is always occurring, is this an indication of a general\nconfiguration problem?\n\n \n\nCarlo\n\n \n\n select\n\n pp.provider_id,\n\n pp.provider_practice_id,\n\n nearby.distance\n\n from mdx_core.provider_practice as pp\n\n join mdx_core.facility as f\n\n on f.facility_id = pp.facility_id\n\n join (select * from mdx_core.zips_in_mile_range('08820', 10) where zip\n> '') as nearby\n\n on f.default_country_code = 'US'\n\n and f.default_postal_code = nearby.zip\n\n and pp.facility_address_id is NULL\n\n union select\n\n pp.provider_id,\n\n pp.provider_practice_id,\n\n nearby.distance\n\n from mdx_core.provider_practice as pp\n\n join mdx_core.facility_address as fa\n\n on fa.facility_address_id = pp.facility_address_id\n\n join mdx_core.address as a\n\n on a.address_id = fa.address_id\n\n join (select * from mdx_core.zips_in_mile_range('08820', 10) where zip\n> '') as nearby\n\n on a.country_code = 'US'\n\n and a.postal_code = nearby.zip\n\n \n\nUnique (cost=67605.91..67653.18 rows=4727 width=16) (actual\ntime=8634.618..8637.918 rows=907 loops=1)\n\n -> Sort (cost=67605.91..67617.73 rows=4727 width=16) (actual\ntime=8634.615..8635.651 rows=907 loops=1)\n\n Sort Key: provider_id, provider_practice_id, distance\n\n -> Append (cost=0.00..67317.41 rows=4727 width=16) (actual\ntime=176.056..8632.429 rows=907 loops=1)\n\n -> Nested Loop (cost=0.00..38947.07 rows=3143 width=16)\n(actual time=176.054..7867.962 rows=872 loops=1)\n\n -> Nested Loop (cost=0.00..11520.79 rows=8121\nwidth=12) (actual time=169.372..3041.010 rows=907 loops=1)\n\n -> Function Scan on zips_in_mile_range\n(cost=0.00..15.00 rows=333 width=40) (actual time=151.479..151.671 rows=66\nloops=1)\n\n Filter: (zip > ''::text)\n\n -> Index Scan using\nfacility_country_postal_code_idx on facility f (cost=0.00..34.25 rows=24\nwidth=15) (actual time=4.969..43.740 rows=14 loops=66)\n\n Index Cond: ((f.default_country_code =\n'US'::bpchar) AND ((f.default_postal_code)::text = zips_in_mile_range.zip))\n\n -> Index Scan using provider_practice_facility_idx on\nprovider_practice pp (cost=0.00..3.36 rows=1 width=12) (actual\ntime=4.915..5.316 rows=1 loops=907)\n\n Index Cond: (f.facility_id = pp.facility_id)\n\n Filter: (facility_address_id IS NULL)\n\n -> Nested Loop (cost=0.00..28323.07 rows=1584 width=16)\n(actual time=170.310..762.472 rows=35 loops=1)\n\n -> Nested Loop (cost=0.00..7791.77 rows=1579 width=12)\n(actual time=170.289..612.579 rows=36 loops=1)\n\n -> Nested Loop (cost=0.00..2595.96 rows=712\nwidth=12) (actual time=167.017..354.261 rows=29 loops=1)\n\n -> Function Scan on zips_in_mile_range\n(cost=0.00..15.00 rows=333 width=40) (actual time=150.188..150.312 rows=66\nloops=1)\n\n Filter: (zip > ''::text)\n\n -> Index Scan using\naddress_country_postal_code_address_idx on address a (cost=0.00..7.73\nrows=2 width=17) (actual time=2.483..3.086 rows=0 loops=66)\n\n Index 
Cond: ((a.country_code =\n'US'::bpchar) AND ((a.postal_code)::text = zips_in_mile_range.zip))\n\n -> Index Scan using facility_address_address_idx\non facility_address fa (cost=0.00..7.15 rows=12 width=8) (actual\ntime=7.652..8.901 rows=1 loops=29)\n\n Index Cond: (a.address_id = fa.address_id)\n\n -> Index Scan using\nprovider_practice_facility_address_idx on provider_practice pp\n(cost=0.00..12.80 rows=16 width=12) (actual time=4.156..4.158 rows=1\nloops=36)\n\n Index Cond: (fa.facility_address_id =\npp.facility_address_id)\n\nTotal runtime: 8639.066 ms\n\n\n\n\n\n\n\n\n\n\n\n\nI am noticing that my queries are spending a lot of time in nested\nloops. The table/index row estimates are not bad, but the nested loops can be\noff by a factor of 50. In any case, they are always too high.\n \nIf this is always occurring, is this an indication of a general\nconfiguration problem?\n \nCarlo\n \n      select\n         pp.provider_id,\n        \npp.provider_practice_id,\n         nearby.distance\n      from mdx_core.provider_practice as pp\n      join mdx_core.facility as f\n      on f.facility_id = pp.facility_id\n      join (select * from mdx_core.zips_in_mile_range('08820',\n10) where zip > '') as nearby\n      on f.default_country_code = 'US'\n         and\nf.default_postal_code = nearby.zip\n         and\npp.facility_address_id is NULL\n      union select\n         pp.provider_id,\n         pp.provider_practice_id,\n         nearby.distance\n      from mdx_core.provider_practice as pp\n      join mdx_core.facility_address as fa\n      on fa.facility_address_id =\npp.facility_address_id\n      join mdx_core.address as a\n      on a.address_id = fa.address_id\n      join (select * from\nmdx_core.zips_in_mile_range('08820', 10) where zip > '') as nearby\n      on a.country_code = 'US'\n      and a.postal_code = nearby.zip\n \nUnique  (cost=67605.91..67653.18 rows=4727 width=16) (actual\ntime=8634.618..8637.918 rows=907 loops=1)\n  ->  Sort  (cost=67605.91..67617.73 rows=4727\nwidth=16) (actual time=8634.615..8635.651 rows=907 loops=1)\n        Sort Key: provider_id,\nprovider_practice_id, distance\n        ->  Append \n(cost=0.00..67317.41 rows=4727 width=16) (actual time=176.056..8632.429\nrows=907 loops=1)\n              -> \nNested Loop  (cost=0.00..38947.07\nrows=3143 width=16) (actual time=176.054..7867.962 rows=872 loops=1)\n                   \n->  Nested Loop \n(cost=0.00..11520.79 rows=8121 width=12) (actual time=169.372..3041.010\nrows=907 loops=1)\n                          -> \nFunction Scan on zips_in_mile_range  (cost=0.00..15.00 rows=333 width=40)\n(actual time=151.479..151.671 rows=66 loops=1)\n                               \nFilter: (zip > ''::text)\n                         \n->  Index Scan using facility_country_postal_code_idx on facility\nf  (cost=0.00..34.25 rows=24 width=15) (actual time=4.969..43.740 rows=14\nloops=66)\n                               \nIndex Cond: ((f.default_country_code = 'US'::bpchar) AND\n((f.default_postal_code)::text = zips_in_mile_range.zip))\n                   \n->  Index Scan using provider_practice_facility_idx on\nprovider_practice pp  (cost=0.00..3.36 rows=1 width=12) (actual time=4.915..5.316\nrows=1 loops=907)\n             \n            Index\nCond: (f.facility_id = pp.facility_id)\n                         \nFilter: (facility_address_id IS NULL)\n             \n->  Nested Loop \n(cost=0.00..28323.07 rows=1584 width=16) (actual time=170.310..762.472 rows=35\nloops=1)\n               \n    ->  Nested Loop \n(cost=0.00..7791.77 rows=1579 
width=12) (actual time=170.289..612.579 rows=36\nloops=1)\n                         \n->  Nested Loop \n(cost=0.00..2595.96 rows=712 width=12) (actual time=167.017..354.261 rows=29\nloops=1)\n                  \n             -> \nFunction Scan on zips_in_mile_range  (cost=0.00..15.00 rows=333 width=40)\n(actual time=150.188..150.312 rows=66 loops=1)\n                                     \nFilter: (zip > ''::text)\n                               \n->  Index Scan using address_country_postal_code_address_idx on address\na  (cost=0.00..7.73 rows=2 width=17) (actual time=2.483..3.086 rows=0\nloops=66)\n                                     \nIndex Cond: ((a.country_code = 'US'::bpchar) AND ((a.postal_code)::text =\nzips_in_mile_range.zip))\n                         \n->  Index Scan using facility_address_address_idx on facility_address\nfa  (cost=0.00..7.15 rows=12 width=8) (actual time=7.652..8.901 rows=1\nloops=29)\n                               \nIndex Cond: (a.address_id = fa.address_id)\n                   \n->  Index Scan using provider_practice_facility_address_idx on\nprovider_practice pp  (cost=0.00..12.80 rows=16 width=12) (actual\ntime=4.156..4.158 rows=1 loops=36)\n                         \nIndex Cond: (fa.facility_address_id = pp.facility_address_id)\nTotal runtime: 8639.066 ms", "msg_date": "Tue, 18 Sep 2007 16:07:33 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Nested loops row estimates always too high" } ]
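Tom Lane's two suggestions from the thread above, written out as SQL against the zips_in_mile_range function. The always-true predicate is the only option on 8.2; the ROWS clause becomes available once 8.3 is out. The function's argument types and the choice of 50 rows are assumptions for illustration only.

    -- Existing releases: hide an always-true condition in the subquery so
    -- the planner scales down its default 1000-row estimate for the
    -- function scan without changing the query's result.
    select
        pp.provider_id,
        pp.provider_practice_id,
        nearby.distance
    from mdx_core.provider_practice as pp
    join mdx_core.facility as f
        on f.facility_id = pp.facility_id
    join (select *
          from mdx_core.zips_in_mile_range('08820', 10)
          where zip is not null) as nearby       -- never rejects a row
        on f.default_country_code = 'US'
        and f.default_postal_code = nearby.zip
    where pp.facility_address_id is null;

    -- 8.3 and later: declare a typical row count on the function itself
    -- (argument types are guessed; adjust to the real signature).
    ALTER FUNCTION mdx_core.zips_in_mile_range(text, integer) ROWS 50;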
[ { "msg_contents": "Hi,\n\nAnyone has tried a setup combining tablespaces with NFS-mounted partitions?\n\nI'm considering the idea as a performance-booster --- our problem is \nthat we are\nrenting our dedicated server from a hoster that does not offer much \nflexibility\nin terms of custom hardware configuration; so, the *ideal* alternative \nto load\nthe machine with 4 or 6 hard drives and use tablespaces is off the table \n(no pun\nintended).\n\nWe could, however, set up a few additional servers where we could configure\nNFS shares, mount them on the main PostgreSQL server, and configure\ntablespaces to \"load balance\" the access to disk.\n\nWould you estimate that this will indeed boost performance?? (our system\ndoes lots of writing to DB --- in all forms: inserts, updates, and deletes)\n\nAs a corollary question: what about the WALs and tablespaces?? Are the\nWALs \"distributed\" when we setup a tablespace and create tables in it? \n(that is, are the WALs corresponding to the tables in a tablespace stored\nin the directory corresponding to the tablespace? Or is it only the \ndata, and\nthe WAL keeps being the one and only?)\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Wed, 19 Sep 2007 11:58:18 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Tablespaces and NFS" }, { "msg_contents": "Carlos Moreno wrote:\n> Anyone has tried a setup combining tablespaces with NFS-mounted partitions?\n\nThere has been some discussion of this recently, you can find it in the archives (http://archives.postgresql.org/). The word seems to be that NFS can lead to data corruption.\n\nCraig\n\n\n\n", "msg_date": "Wed, 19 Sep 2007 13:49:51 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tablespaces and NFS" }, { "msg_contents": "On 9/19/07, Carlos Moreno <[email protected]> wrote:\n> Hi,\n>\n> Anyone has tried a setup combining tablespaces with NFS-mounted partitions?\n>\n> I'm considering the idea as a performance-booster --- our problem is\n> that we are\n> renting our dedicated server from a hoster that does not offer much\n> flexibility\n> in terms of custom hardware configuration; so, the *ideal* alternative\n> to load\n> the machine with 4 or 6 hard drives and use tablespaces is off the table\n> (no pun\n> intended).\n>\n> We could, however, set up a few additional servers where we could configure\n> NFS shares, mount them on the main PostgreSQL server, and configure\n> tablespaces to \"load balance\" the access to disk.\n>\n> Would you estimate that this will indeed boost performance?? (our system\n> does lots of writing to DB --- in all forms: inserts, updates, and deletes)\n>\n> As a corollary question: what about the WALs and tablespaces?? Are the\n> WALs \"distributed\" when we setup a tablespace and create tables in it?\n> (that is, are the WALs corresponding to the tables in a tablespace stored\n> in the directory corresponding to the tablespace? 
Or is it only the\n> data, and\n> the WAL keeps being the one and only?)\n>\n> Thanks,\n>\n> Carlos\n\nAbout 5 months ago, I did an experiment serving tablespaces out of\nAFS, another shared file system.\n\nYou can read my full post at\nhttp://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php\n\nOn the whole, you're not going to see a performance improvement\nrunning tablespaces on NFS (unless the disk system on the NFS server\nis a lot faster) since you have to go through the network as well as\nNFS, both of which add overhead.\n\nUsually, locking mechanisms on shared file systems don't play nice\nwith databases. You're better off using something else to load balance\nor replicate data.\n\nPeter\n\nP.S. Why not just set up those servers you're planning on using as NFS\nshares as your postgres server(s)?\n", "msg_date": "Wed, 19 Sep 2007 16:54:22 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tablespaces and NFS" }, { "msg_contents": "\n>\n> About 5 months ago, I did an experiment serving tablespaces out of\n> AFS, another shared file system.\n>\n> You can read my full post at\n> http://archives.postgresql.org/pgsql-admin/2007-04/msg00188.php\n\nThanks for the pointer! I had done a search on the archives, but\ndidn't find this one (strange, since I included the keywords\ntablespace and NFS, both of which show up in your message).\n\nAnyway... One detail I don't understand --- why do you claim that\n\"You can't take advantage of the shared file system because you can't\nshare tablespaces among clusters or servers\" ???\n\nWith NFS, I could mount, say, /mnt/nfs/fs1 to be served by NFS\nserver #1, and then create tablespace nfs1 location '/mnt/nfs/fs1' ...\nWhy wouldn't that work?? (or was the comment specific to AFS?)\n\nBTW, I'm not too worried by the lack of security with NFS, since\nboth the \"main\" postgres machine and the potential NFS servers\nthat I would use would be completely \"private\" machines (in that\nthere are no users and no other services are running in there).\nI would set up a strict firewall policy so that the NFS server\nonly accepts connections from the main postgres machine.\n\nBack to your comment:\n\n> On the whole, you're not going to see a performance improvement\n> running tablespaces on NFS (unless the disk system on the NFS server\n> is a lot faster) \n\nThis seems to be the killer point --- mainly because the network\nconnection is a 100Mbps (around 10 MB/sec --- less than 1/4 of\nthe performance we'd expect from an internal hard drive). If at\nleast it was a Gigabit connection, I might still be tempted to\nretry the experiment. I was thinking that *maybe* the latencies\nand contention due to heads movements (in the order of the millisec)\nwould take precedence and thus, a network-distributed cluster of\nhard drives would end up winning.\n\n\n> P.S. 
Why not just set up those servers you're planning on using as NFS\n> shares as your postgres server(s)?\n\nWe're clear that that would be the *optimal* solution --- problem\nis, there's a lot of client-side software that we would have to\nchange; I'm first looking for a \"transparent\" solution in which\nI could distribute the load at a hardware level, seeing the DB\nserver as a single entity --- the ideal solution, of course,\nbeing the use of tablespaces with 4 or 6 *internal* hard disks\n(but that's not an option with our current web hoster).\n\nAnyway, I'll keep working on alternative solutions --- I think\nI have enough evidence to close this NFS door.\n\nThanks!\n\n", "msg_date": "Thu, 20 Sep 2007 09:20:40 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tablespaces and NFS" }, { "msg_contents": "> Anyway... One detail I don't understand --- why do you claim that\n> \"You can't take advantage of the shared file system because you can't\n> share tablespaces among clusters or servers\" ???\n\nI say that because you can't set up two servers to point to the same\ntablespace (i.e. you can't have server A and server B both point to\nthe tablespace in /mnt/nfs/postgres/), which basically defeats one of\nthe main purposes of using a shared file system, seeing, using, and\nediting files from anywhere.\n\nThis is ill-advised and probably won't work for 2 reasons.\n\n- Postgres tablespaces require empty directories to for\ninitialization. If you create a tablespace on server A, it puts files\nin the previously empty directory. If you then try to create a\ntablespace on server B pointing to the same location, it won't work\nsince the directory is no longer empty. You can get around this, in\ntheory, but you'd either have to directly mess with system tables or\nfool Postgres into thinking that each server independently created\nthat tablespace (to which anyone will say, NO!!!!).\n\n- If you do manage to fool postgres into having two servers pointing\nat the same tablespace, the servers really, REALLY won't play nice\nwith these shared resources, since they have no knowledge of each\nother (i mean, two clusters on the same server don't play nice with\nmemory). Basically, if they compete for the same file, either I/O will\nbe EXTREMELY slow because of file-locking mechanisms in the file\nsystem, or you open things up to race conditions and data corruption.\nIn other words: BAD!!!!\n\nI know this doesn't fully apply to you, but I thought I should explain\nmy points betters since you asked so nicely :-)\n\n> This seems to be the killer point --- mainly because the network\n> connection is a 100Mbps (around 10 MB/sec --- less than 1/4 of\n> the performance we'd expect from an internal hard drive). If at\n> least it was a Gigabit connection, I might still be tempted to\n> retry the experiment. 
I was thinking that *maybe* the latencies\n> and contention due to heads movements (in the order of the millisec)\n> would take precedence and thus, a network-distributed cluster of\n> hard drives would end up winning.\n\nIf you get decently fast disks, or put some slower disks in RAID 10,\nyou'll easily get >100 MB/sec (and that's a conservative estimate).\nEven with a Gbit network, you'll get, in theory 128 MB/sec, and that's\nassuming that the NFS'd disks aren't a bottleneck.\n\n> We're clear that that would be the *optimal* solution --- problem\n> is, there's a lot of client-side software that we would have to\n> change; I'm first looking for a \"transparent\" solution in which\n> I could distribute the load at a hardware level, seeing the DB\n> server as a single entity --- the ideal solution, of course,\n> being the use of tablespaces with 4 or 6 *internal* hard disks\n> (but that's not an option with our current web hoster).\n\nI sadly don't know enough networking to tell you tell the client\nsoftware \"no really, I'm over here.\" However, one of the things I'm\nfond of is using a module to store connection strings, and dynamically\nloading said module on the client side. For instance, with Perl I\nuse...\n\nuse DBI;\nuse DBD::Pg;\nuse My::DBs;\n\nmy $dbh = DBI->connect($My::DBs::mydb);\n\nAssuming that the module and its entries are kept up to date, it will\n\"just work.\" That way, there's only 1 module to change instead of n\nclient apps. I can have a new server with a new name up without\nchanging any client code.\n\n> Anyway, I'll keep working on alternative solutions --- I think\n> I have enough evidence to close this NFS door.\n\nThat's probably for the best.\n", "msg_date": "Thu, 20 Sep 2007 13:50:21 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tablespaces and NFS" }, { "msg_contents": "\nThanks again, Peter, for expanding on these points.\n\nPeter Koczan wrote:\n>> Anyway... One detail I don't understand --- why do you claim that\n>> \"You can't take advantage of the shared file system because you can't\n>> share tablespaces among clusters or servers\" ???\n>> \n>\n> I say that because you can't set up two servers to point to the same\n> tablespace \n\nMy bad! Definitely --- I was only looking at it through the point of \nview of my\ncurrent problem at hand, so I misinterpreted what you said; it is clear \nand\nunambiguous, and I agree that there is little debate about it; in my \nmind, since\nI'm talking about *one* postgres server spreading its storage across \nseveral\nfilesystems, I didn't understand why you seemed to be claiming that that \ncan\nnot be combined with tablespaces ...\n\n> I know this doesn't fully apply to you, but I thought I should explain\n> my points betters since you asked so nicely :-)\n> \n\n:-) It's appreaciated!\n\n> If you get decently fast disks, or put some slower disks in RAID 10,\n> you'll easily get >100 MB/sec (and that's a conservative estimate).\n> Even with a Gbit network, you'll get, in theory 128 MB/sec, and that's\n> assuming that the NFS'd disks aren't a bottleneck.\n> \n\nBut still, with 128MB/sec (modulo some possible NFS bottlenecks), I would\nbe a bit more optimistic, and would actually be tempted to retry your \nexperiment\nwith my setup. 
After all, with the setup that we have *today*, I don't \nthink I\nget a sustained transfer rate above 80 or 90MB/sec from the hard drives \n(as\nfar as I know, they're plain vanilla Enterpreise-Grade SATA2 servers, which\nI believe don't get further than 90MB/sec S.T.R.)\n\n> I sadly don't know enough networking to tell you tell the client\n> software \"no really, I'm over here.\" However, one of the things I'm\n> fond of is using a module to store connection strings, and dynamically\n> loading said module on the client side. For instance, with Perl I\n> use...\n>\n> use DBI;\n> use DBD::Pg;\n> use My::DBs;\n>\n> my $dbh = DBI->connect($My::DBs::mydb);\n>\n> Assuming that the module and its entries are kept up to date, it will\n> \"just work.\" That way, there's only 1 module to change instead of n\n> client apps. \n\nOh no, but the problem we'd have would be at the level of the database \ndesign\nand access --- for instance, some of the tables that I think are \nbottlenecking (the\nones I would like to spread with tablespaces) are quite interconnected \nto each\nother --- foreign keys come and go; on the client applications, many \ntransaction\nblocks include several of those tables --- if I were to spread those \ntables across\nseveral backends, I'm not sure the changes would be easy :-( )\n\n> I can have a new server with a new name up without\n> changing any client code.\n> \n\nBut then, you're talking about replicating data so that multiple \nclient-apps\ncan pick one out the several available \"quasi-read-only\" servers, I'm \nguessing?\n\n>> Anyway, I'll keep working on alternative solutions --- I think\n>> I have enough evidence to close this NFS door.\n>> \n>\n> That's probably for the best.\n> \nYep --- still closing that door!! The points I'm arguing in this \nmessage is\njust in the spirit of discussing and better understanding the issue. \nI'm still\nconvinced with your evidence.\n\nThanks,\n\nCarlos\n--\n\n\n\n", "msg_date": "Thu, 20 Sep 2007 15:24:39 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Tablespaces and NFS" } ]
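For reference, the tablespace mechanics discussed above look like the sketch below; the mount point, tablespace, table and index names are hypothetical, and per the conclusion of the thread the location should be a locally attached volume rather than an NFS share. Tablespaces only relocate table and index files: the WAL stays under the cluster's pg_xlog directory regardless of where any tablespace points.

    -- Run as a superuser; the target directory must already exist, be
    -- empty, and be owned by the postgres system account.
    CREATE TABLESPACE fastspace LOCATION '/mnt/disk2/pgdata';

    -- Move an existing table and one of its indexes onto the new volume.
    ALTER TABLE orders SET TABLESPACE fastspace;
    ALTER INDEX orders_customer_idx SET TABLESPACE fastspace;

    -- New objects can be placed there explicitly as well.
    CREATE TABLE order_archive (LIKE orders) TABLESPACE fastspace;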
[ { "msg_contents": "Hi all.\nRecently I have installed a brand new server with a Pentium IV 3.2 GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4 (previously a 8.1.9).\nI have other similar server with an IDE disk, Red Hat EL 4 and PostgreSQL 8.2.3\n\nI have almost the same postgresql.conf in both servers, but in the new one (I have more work_mem than the other one) things go really slow. I began to monitor i/o disk and it's really ok, I have test disk with hdparm and it's 5 times faster than the IDE one.\nRunning the same queries in both servers in the new one it envolves almost 4 minutes instead of 18 seconds in the old one.\nBoth databases are the same, I have vacuum them and I don't know how to manage this issue.\nThe only weird thing is than in the older server running the query it uses 30% of CPU instead of 3 o 5 % of the new one!!!\nWhat's is happening with this server? I upgrade from 8.1.9 to 8.2.4 trying to solve this issue but I can't find a solution.\n\nAny ideas?\nRegards\nAgustin\n\n\n\n\n Seguí de cerca a la Selección Argentina de Rugby en el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby\nHi all.Recently I have installed a brand new server with a Pentium IV 3.2 GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4 (previously a 8.1.9).I have other similar server with an IDE disk, Red Hat EL 4 and PostgreSQL 8.2.3I have almost the same postgresql.conf in both servers, but in the new one (I have more work_mem than the other one) things go really slow.  I began to monitor i/o disk and it's really ok, I have test disk with hdparm and it's 5 times faster than the IDE one.Running the same queries in both servers in the new one it envolves almost 4 minutes instead of 18 seconds in the old one.Both databases are the same, I have vacuum them and I don't know how to manage this issue.The only weird thing is than in the older server running\n the query it uses 30% of CPU instead of 3 o 5 % of the new one!!!What's is happening with this server? I upgrade from 8.1.9 to 8.2.4 trying to solve this issue but I can't find a solution.Any ideas?RegardsAgustin\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Wed, 19 Sep 2007 10:38:13 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Low CPU Usage" }, { "msg_contents": "[email protected] wrote:\n> Recently I have installed a brand new server with a Pentium IV 3.2 GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4 (previously a 8.1.9).\n> I have other similar server with an IDE disk, Red Hat EL 4 and PostgreSQL 8.2.3\n> \n> I have almost the same postgresql.conf in both servers, but in the new one (I have more work_mem than the other one) things go really slow. 
I began to monitor i/o disk and it's really ok, I have test disk with hdparm and it's 5 times faster than the IDE one.\n> Running the same queries in both servers in the new one it envolves almost 4 minutes instead of 18 seconds in the old one.\n> Both databases are the same, I have vacuum them and I don't know how to manage this issue.\n\nHave you ANALYZEd all tables?\n\nTry running EXPLAIN ANALYZE on the query on both servers, and see if\nthey're using the same plan.\n\n> The only weird thing is than in the older server running the query it uses 30% of CPU instead of 3 o 5 % of the new one!!!\n\nThat implies that it's doing much more I/O on the new server, so the CPU\njust sits and waits for data to arrive from the disk.\n\nThat does iostat say about disk utilization on both servers?\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Wed, 19 Sep 2007 18:52:51 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "On Wed, 2007-09-19 at 10:38 -0700, [email protected] wrote:\n> Hi all.\n> Recently I have installed a brand new server with a Pentium IV 3.2\n> GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4\n> (previously a 8.1.9).\n> I have other similar server with an IDE disk, Red Hat EL 4 and\n> PostgreSQL 8.2.3\n> \n> I have almost the same postgresql.conf in both servers, but in the new\n> one (I have more work_mem than the other one) things go really slow.\n> I began to monitor i/o disk and it's really ok, I have test disk with\n> hdparm and it's 5 times faster than the IDE one.\n> Running the same queries in both servers in the new one it envolves\n> almost 4 minutes instead of 18 seconds in the old one.\n> Both databases are the same, I have vacuum them and I don't know how\n> to manage this issue.\n> The only weird thing is than in the older server running the query it\n> uses 30% of CPU instead of 3 o 5 % of the new one!!!\n> What's is happening with this server? I upgrade from 8.1.9 to 8.2.4\n> trying to solve this issue but I can't find a solution.\n> \n> Any ideas?\n\nIt could be a planning issue. Have you analyzed the new database to\ngather up-to-date statistics? A comparison of EXPLAIN ANALYZE results\nfor an example query in both systems should answer that one.\n\nAnother possibility because you're dealing with lower-end drives is that\nyou have a case of one drive lying about fsync where the other is not.\nIf possible, try running your test with fsync=off on both servers. If\nthere's a marked improvement on the new server but no significant change\non the old server then you've found your culprit.\n\n-- Mark Lewis\n\n", "msg_date": "Wed, 19 Sep 2007 11:03:06 -0700", "msg_from": "Mark Lewis <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
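A concrete way to follow both suggestions above, sketched with placeholders (the statement to profile is whatever query is slow on the new box):

    -- on each server, after loading the data
    ANALYZE VERBOSE;

    -- capture the plan of the slow statement on both machines and compare
    -- row estimates and scan/join types line by line
    EXPLAIN ANALYZE SELECT ... ;   -- put the actual slow query here

    -- and double-check the handful of settings that most often differ
    SHOW work_mem;
    SHOW shared_buffers;
    SHOW effective_cache_size;
    SHOW fsync;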
[ { "msg_contents": "No, changing to fsync off didn't improve performance at all.\nSettings\nwork_mem = 64MB\nmax_stack_depth = 7MB #in the old server is 8MB but if I set in here give me the ulimit error\nmax_fsm_pages = 204800\neffective_cache_size = 512MB\nAtuvacuum is off.\n\nI have run vacuum full and vacuum analyze. The databases are freezed (there are no insert, update or delete operations)!!! \nIn both servers the query plans are identical with the same costs too!!!\n\nIt's really weird, I don't see high values monitoring disk. Cpu usage is about 5% and sometimes a tips of 40 % of disk usage\n when the query is finishing (the first 3 minutes there are some tips of 11 to 16%).\n\n\n\n\n\n\n----- Mensaje original ----\nDe: Mark Lewis <[email protected]>\nPara: [email protected]\nCC: [email protected]\nEnviado: miércoles 19 de septiembre de 2007, 15:03:06\nAsunto: Re: [PERFORM] Low CPU Usage\n\nOn Wed, 2007-09-19 at 10:38 -0700, [email protected] wrote:\n> Hi all.\n> Recently I have installed a brand new server with a Pentium IV 3.2\n> GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4\n> (previously a 8.1.9).\n> I have other similar server with an IDE disk, Red Hat EL 4 and\n> PostgreSQL 8.2.3\n> \n> I have almost the same postgresql.conf in both servers, but in the new\n> one (I have more work_mem than the other one) things go really slow.\n> I began to monitor i/o disk and it's really ok, I have test disk with\n> hdparm and it's 5 times faster than the IDE one.\n> Running the same queries in both servers in the new one it envolves\n> almost 4 minutes instead of 18 seconds in the old one.\n> Both databases are the same, I have vacuum them and I don't know how\n> to manage this issue.\n> The only weird thing is than in the older server running the query it\n> uses 30% of CPU instead of 3 o 5 % of the new one!!!\n> What's is happening with this server? I upgrade from 8.1.9 to 8.2.4\n> trying to solve this issue but I can't find a solution.\n> \n> Any ideas?\n\nIt could be a planning issue. Have you analyzed the new database to\ngather up-to-date statistics? A comparison of EXPLAIN ANALYZE results\nfor an example query in both systems should answer that one.\n\nAnother possibility because you're dealing with lower-end drives is that\nyou have a case of one drive lying about fsync where the other is not.\nIf possible, try running your test with fsync=off on both servers. If\nthere's a marked improvement on the new server but no significant change\non the old server then you've found your culprit.\n\n-- Mark Lewis\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: Don't 'kill -9' the postmaster\n\n\n\n\n\n\n\n Las últimas noticias sobre el Mundial de Rugby 2007 están en Yahoo! Deportes. ¡Conocelas!\nhttp://ar.sports.yahoo.com/mundialderugby\nNo, changing to fsync off didn't improve performance at all.Settingswork_mem = 64MBmax_stack_depth = 7MB #in the old server is 8MB but if I set in here give me the ulimit errormax_fsm_pages = 204800effective_cache_size = 512MBAtuvacuum is off.I have run vacuum full and vacuum analyze.\n The databases are freezed (there are no insert, update or delete operations)!!! In both servers the query plans are identical with the same costs too!!!It's really weird, I don't see high values monitoring disk. 
Cpu usage is about 5% and sometimes a tips of 40 % of disk usage\n when the query is finishing (the first 3 minutes there are some tips of 11 to 16%).----- Mensaje original ----De: Mark Lewis <[email protected]>Para: [email protected]: [email protected]: miércoles 19 de septiembre de 2007, 15:03:06Asunto: Re: [PERFORM] Low CPU UsageOn Wed, 2007-09-19 at 10:38 -0700, [email protected] wrote:> Hi all.> Recently I have installed a brand new server with a Pentium IV 3.2> GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4> (previously a 8.1.9).> I have other similar server with an IDE disk, Red Hat EL 4 and> PostgreSQL 8.2.3> > I have almost the same postgresql.conf in both servers, but in the new> one (I have more work_mem than the other one)\n things go really slow.> I began to monitor i/o disk and it's really ok, I have test disk with> hdparm and it's 5 times faster than the IDE one.> Running the same queries in both servers in the new one it envolves> almost 4 minutes instead of 18 seconds in the old one.> Both databases are the same, I have vacuum them and I don't know how> to manage this issue.> The only weird thing is than in the older server running the query it> uses 30% of CPU instead of 3 o 5 % of the new one!!!> What's is happening with this server? I upgrade from 8.1.9 to 8.2.4> trying to solve this issue but I can't find a solution.> > Any ideas?It could be a planning issue.  Have you analyzed the new database togather up-to-date statistics?  A comparison of EXPLAIN ANALYZE resultsfor an example query in both systems should answer that one.Another\n possibility because you're dealing with lower-end drives is thatyou have a case of one drive lying about fsync where the other is not.If possible, try running your test with fsync=off on both servers.  Ifthere's a marked improvement on the new server but no significant changeon the old server then you've found your culprit.-- Mark Lewis---------------------------(end of broadcast)---------------------------TIP 2: Don't 'kill -9' the postmaster\nLos referentes más importantes en compra/venta de autos se juntaron:Demotores y Yahoo!.\nAhora comprar o vender tu auto es más fácil. Visitá http://ar.autos.yahoo.com/", "msg_date": "Wed, 19 Sep 2007 12:13:33 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "On 9/19/07, [email protected] <[email protected]> wrote:\n>\n>\n> No, changing to fsync off didn't improve performance at all.\n>\n> Settings\n> work_mem = 64MB\n> max_stack_depth = 7MB #in the old server is 8MB but if I set in here give me\n> the ulimit error\n> max_fsm_pages = 204800\n> effective_cache_size = 512MB\n> Atuvacuum is off.\n>\n> I have run vacuum full and vacuum analyze. The databases are freezed (there\n> are no insert, update or delete operations)!!!\n> In both servers the query plans are identical with the same costs too!!!\n>\n> It's really weird, I don't see high values monitoring disk. Cpu usage is\n> about 5% and sometimes a tips of 40 % of disk usage when the query is\n> finishing (the first 3 minutes there are some tips of 11 to 16%).\n\n(please don't top post)\n\nThis sounds like you've got problems somewhere in your I/O system.\nhdparm etc may or may not be seeing the issue. What do you see in\n/var/log/messages or dmesg?\n", "msg_date": "Wed, 19 Sep 2007 14:41:45 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
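Since "the conf files are equal" and "the servers are running with the same settings" are not always the same thing, one cheap cross-check (in addition to the OS logs asked for above) is to dump the values each server is actually using and diff the two lists; pg_settings also shows values that come from built-in defaults or command-line options, not just from postgresql.conf:

    SELECT name, setting, source
    FROM pg_settings
    ORDER BY name;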
[ { "msg_contents": "This is my dmesg file, I see there are some errors but I don't know how to manage!!!\n\nWhat do you mean with don't top post?\nSorry but I'm new with this kind of mailing list and I don't want to botter some others.\nSorry my bad English too.\nThanks for your help\n\n\n\n----- Mensaje original ----\nDe: Scott Marlowe <[email protected]>\nPara: \"[email protected]\" <[email protected]>\nCC: [email protected]\nEnviado: miércoles 19 de septiembre de 2007, 16:41:45\nAsunto: Re: [PERFORM] Low CPU Usage\n\nOn 9/19/07, [email protected] <[email protected]> wrote:\n>\n>\n> No, changing to fsync off didn't improve performance at all.\n>\n> Settings\n> work_mem = 64MB\n> max_stack_depth = 7MB #in the old server is 8MB but if I set in here give me\n> the ulimit error\n> max_fsm_pages = 204800\n> effective_cache_size = 512MB\n> Atuvacuum is off.\n>\n> I have run vacuum full and vacuum analyze. The databases are freezed (there\n> are no insert, update or delete operations)!!!\n> In both servers the query plans are identical with the same costs too!!!\n>\n> It's really weird, I don't see high values monitoring disk. Cpu usage is\n> about 5% and sometimes a tips of 40 % of disk usage when the query is\n> finishing (the first 3 minutes there are some tips of 11 to 16%).\n\n(please don't top post)\n\nThis sounds like you've got problems somewhere in your I/O system.\nhdparm etc may or may not be seeing the issue. What do you see in\n/var/log/messages or dmesg?\n\n\n\n\n\n\n\n Los referentes más importantes en compra/ venta de autos se juntaron:\nDemotores y Yahoo!\nAhora comprar o vender tu auto es más fácil. Vistá ar.autos.yahoo.com/", "msg_date": "Wed, 19 Sep 2007 12:52:43 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "On 9/19/07, [email protected] <[email protected]> wrote:\n>\n> This is my dmesg file, I see there are some errors but I don't know how to\n> manage!!!\n\nnothing too horrible. Just wanted to make sure you weren't getting\nlots of bad sectors or timeouts.\n\nNothing too bad looking there.\n\n> What do you mean with don't top post?\n\nIt means when you put your post at the top of mine. The way I'm\nanswering provides context. I.e. this paragraph I'm writing goes with\nthe paragraph you wrote that I'm answering. It makes it easier to\nkeep track of long busy conversations.\n\n> Sorry but I'm new with this kind of mailing list and I don't want to botter\n> some others.\n> Sorry my bad English too.\n\nNo bother, and your english is way better than my Spanish. or\nRussian, or Japanese, etc...\n", "msg_date": "Wed, 19 Sep 2007 15:15:34 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
[ { "msg_contents": "Hola Beto.\nI have no idea where to look for that configuration or settings.\nYesterday I red about some drivers problems with SATA disk working togheter with IDE devices with DMA.\n\nMi server server is a Pentium VI 3.3 with hyper threading (enabled in BIOS), HP Proliant ML 110.\n\nThen I entered to the BIOS and saw in IDE Configuration: \n ATA/IDE Configuration [Enhanced]\n Configure SATA as [IDE] => it has RAID option too\n\nI have any idea how to continue!!! I don't know if this a SATA problem, a configuration problem or what else. I have installed several servers beggining with postgres 6.4 and I've neved had this kind of problems (always with IDE disks). I think this is a problem with SATA disk i/o, but I don't see how to measure that (I have already set postgresql.conf).\n\nRegards\nAgustin\n\n\n----- Mensaje original ----\nDe: Norberto Meijome <[email protected]>\nPara: [email protected]\nCC: [email protected]\nEnviado: jueves 20 de septiembre de 2007, 7:53:05\nAsunto: Re: [PERFORM] Low CPU Usage\n\nOn Wed, 19 Sep 2007 12:13:33 -0700 (PDT)\[email protected] wrote:\n\n> max_stack_depth = 7MB #in the old server is 8MB but if I set in here give me the ulimit error\n\nHola Agustin :)\notro argentino en el extranjero x aca ;) \n\nanyway, back to English ;)\n\na long shot but...\n\ncheck if you have any limits set on the host for CPU usage... you may be\nlimited to x number of secs / % by the OS scheduler. When you query your CPU,\nit will say u are only using 5% or so... \n\nchau,\nBeto\n\n_________________________\nNorberto Meijome\nOctantis Pty Ltd\n\nIntelligence: Finding an error in a Knuth text.\nStupidity: Cashing that $2.56 check you got.\n\n\n\n\nNOTICE: The contents of this email and its attachments are confidential and\nintended only for the individuals or entities named above. If you have received\nthis message in error, please advise the sender by reply email and immediately\ndelete the message and any attachments without using, copying or disclosing the\ncontents. Thank you.\n\n\n\n\n\n\n\n Los referentes más importantes en compra/ venta de autos se juntaron:\nDemotores y Yahoo!\nAhora comprar o vender tu auto es más fácil. Vistá ar.autos.yahoo.com/\nHola Beto.I have no idea where to look for that configuration or settings.Yesterday I red about some drivers problems with SATA disk working togheter with IDE devices with DMA.Mi server server is a Pentium VI 3.3 with hyper threading (enabled in BIOS), HP Proliant ML 110.Then I entered to the BIOS and saw in IDE Configuration:   ATA/IDE Configuration                    [Enhanced]          Configure SATA as                   [IDE]  => it has RAID\n option tooI have any idea how to continue!!! I don't know if this a SATA problem, a configuration problem or what else. I have installed several servers beggining with postgres 6.4 and I've neved had this kind of problems (always with IDE disks). 
I think this is a problem with SATA disk i/o, but I don't see how to measure that (I have already set postgresql.conf).RegardsAgustin----- Mensaje original ----De: Norberto Meijome <[email protected]>Para: [email protected]: [email protected]: jueves 20 de septiembre de 2007, 7:53:05Asunto: Re: [PERFORM] Low CPU UsageOn Wed, 19 Sep 2007 12:13:33 -0700 (PDT)[email protected] wrote:> max_stack_depth = 7MB #in the old server is 8MB but if I set in here give me the ulimit errorHola Agustin\n :)otro argentino en el extranjero x aca ;) anyway, back to English ;)a long shot but...check if you have any limits set on the host for CPU usage... you may belimited to x number of secs / % by the OS scheduler. When you query your CPU,it will say u are only using 5% or so... chau,Beto_________________________Norberto MeijomeOctantis Pty LtdIntelligence: Finding an error in a Knuth text.Stupidity: Cashing that $2.56 check you got.NOTICE: The contents of this email and its attachments are confidential andintended only for the individuals or entities named above. If you have receivedthis message in error, please advise the sender by reply email and immediatelydelete the message and any attachments without using, copying or disclosing thecontents. Thank you.\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Thu, 20 Sep 2007 04:30:53 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "[email protected] wrote:\n> Hola Beto.\n> I have no idea where to look for that configuration or settings.\n\nIn postgreSQL, the main settings are in .../pgsql/data/postgresql.conf\n\n> Yesterday I red about some drivers problems with SATA disk working\n> togheter with IDE devices with DMA.\n> \n> Mi server server is a Pentium VI 3.3 with hyper threading (enabled in\n> BIOS), HP Proliant ML 110.\n> \n> Then I entered to the BIOS and saw in IDE Configuration:\n> ATA/IDE Configuration [Enhanced]\n> Configure SATA as [IDE] => it has RAID\n> option too\n> \n> I have any idea how to continue!!! I don't know if this a SATA problem,\n> a configuration problem or what else. I have installed several servers\n> beggining with postgres 6.4 and I've neved had this kind of problems\n> (always with IDE disks). I think this is a problem with SATA disk i/o,\n> but I don't see how to measure that (I have already set postgresql.conf).\n\nAre you sure you are really having a problem with insufficient CPU time\nbeing devoted to your program(s)? When I run postgreSQL and do the initial\npopulating of my database, which takes several hours due to the nature of\nthe input data, it runs just 25% to 50% of one CPU, even though I have two\n3.06 GHz hyperthreaded Xeon processors and six 10,000 rpm Ultra/320 SCSI\nhard drives on two SCSI controllers. If I look at the results of the Linux\ntop command, and iostat and vmstat, I see that I am in io-wait state 100% of\nthe time. The transfer rate to the hard drives averages about 2\nMegabytes/second even though I have seen 90 Megabytes/second at times (when\ndoing a database restore). So the IO system can be quite fast when it is not\nwaiting (for seeks, no doubt). If the postgreSQL processes wanted more CPU\ntime, they could have it as the machine does not do much else most of the\ntime. Actually, it runs a four BOINC processes, but they run at nice level\n19, so they run only if no other process wants processing time. 
When I do a\ndatabase backup, it will run more than 100% of a CPU (remember I have two or\nfour processors, depending on how you count them) for extended periods, so\nthe OS is certainly capable of supplying CPU power when I need it. And\npostgreSQL runs multiple processes at once, so in theory, they could gert\n400% of a processor if they needed it. They do not seem to need to do this\nfor me.\n> \n> Regards\n> Agustin\n> \n> \n> ----- Mensaje original ----\n> De: Norberto Meijome <[email protected]>\n> Para: [email protected]\n> CC: [email protected]\n> Enviado: jueves 20 de septiembre de 2007, 7:53:05\n> Asunto: Re: [PERFORM] Low CPU Usage\n> \n> On Wed, 19 Sep 2007 12:13:33 -0700 (PDT)\n> [email protected] wrote:\n> \n>> max_stack_depth = 7MB #in the old server is 8MB but if I set in here\n> give me the ulimit error\n> \n> Hola Agustin :)\n> otro argentino en el extranjero x aca ;)\n> \n> anyway, back to English ;)\n> \n> a long shot but...\n> \n> check if you have any limits set on the host for CPU usage... you may be\n> limited to x number of secs / % by the OS scheduler. When you query your\n> CPU,\n> it will say u are only using 5% or so...\n> \n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:15:01 up 6 days, 42 min, 1 user, load average: 4.24, 4.25, 4.14\n", "msg_date": "Thu, 20 Sep 2007 08:31:36 -0400", "msg_from": "Jean-David Beyer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
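As a database-side complement to top/iostat/vmstat, the pg_stat_activity view shows which statement each backend is running and since when; if one statement sits there for the whole period of high I/O wait, the bottleneck is that query's disk access pattern rather than a shortage of CPU. (current_query is only populated when stats_command_string is enabled, which is the default in 8.2.) A minimal check:

    SELECT procpid, usename, query_start, current_query
    FROM pg_stat_activity
    ORDER BY query_start;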
[ { "msg_contents": "(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)\n\nMy client \"publishes\" an \"edition\" of their DB from his production site to \nhis hosted web/db server. This is done by FTPing a backup of the DB to his \nhosting provider.\n\nImmediately after a \"publication\" (restore to web/db server) we immediately \nrun VACUUM ANALYZE to make sure the statistics and row estimates are \ncorrect.\n\nThe problem is, after this initial VACUUM ANALYZE, the row estimates in \nquery plans are off by several orders of magnitude. For example, a \ndisastrous plan was created because the planner estimated 4K rows when in \nfact it returned 980K rows.\n\nSometimes - a day or two later - the plans return to \"normal\" and row \nestimates are closer to realistic values. Guessing that there may be \nbackground events that are correcting the row estimates over time, I ran an \nANALYZE on the DB - and sure enough - the row estimates corrected \nthemselves. The puzzling thing is, there have been no writes of any sort to \nthe data - there is no reason for the stats to have changed.\n\nI believe that a VACUUM may not be necessary for a newly restored DB, but I \nassumed that VACUUM ANALYZE and ANALYZE have the same net result. Am I \nwrong?\n\nIf I am not wrong (i.e. VACUUM ANALYZE and ANALYZE should produce the same \nresults) why would the performance improve on a DB that has seen no \ntransactional activity only after the SECOND try?\n\nPG 8.2.4 on RH LINUX 1GB RAM SCSI RAID 1\n\nCarlo\n\n", "msg_date": "Thu, 20 Sep 2007 10:59:54 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "REPOST: Performance improves only after repeated VACUUM/ANALYZE" } ]
[ { "msg_contents": "(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)\n\nI am noticing that my queries are spending a lot of time in nested loops. \nThe table/index row estimates are not bad, but the nested loops can be off \nby a factor of 50. In any case, they are always too high.\n\nAre the over-estimations below significant, and if so, is this an indication \nof a general configuration problem?\n\nCarlo\n\n\nselect\n pp.provider_id,\n pp.provider_practice_id,\n nearby.distance\nfrom mdx_core.provider_practice as pp\njoin mdx_core.facility as f\non f.facility_id = pp.facility_id\njoin (select * from mdx_core.zips_in_mile_range('08820', 10) where zip > '') \nas nearby\non f.default_country_code = 'US'\n and f.default_postal_code = nearby.zip\n and pp.facility_address_id is NULL\nunion select\n pp.provider_id,\n pp.provider_practice_id,\n nearby.distance\nfrom mdx_core.provider_practice as pp\njoin mdx_core.facility_address as fa\non fa.facility_address_id = pp.facility_address_id\njoin mdx_core.address as a\non a.address_id = fa.address_id\njoin (select * from mdx_core.zips_in_mile_range('08820', 10) where zip > '') \nas nearby\non a.country_code = 'US'\nand a.postal_code = nearby.zip\n\nUnique (cost=67605.91..67653.18 rows=4727 width=16) (actual \ntime=8634.618..8637.918 rows=907 loops=1)\n -> Sort (cost=67605.91..67617.73 rows=4727 width=16) (actual \ntime=8634.615..8635.651 rows=907 loops=1)\n Sort Key: provider_id, provider_practice_id, distance\n -> Append (cost=0.00..67317.41 rows=4727 width=16) (actual \ntime=176.056..8632.429 rows=907 loops=1)\n -> Nested Loop (cost=0.00..38947.07 rows=3143 width=16) \n(actual time=176.054..7867.962 rows=872 loops=1)\n -> Nested Loop (cost=0.00..11520.79 rows=8121 \nwidth=12) (actual time=169.372..3041.010 rows=907 loops=1)\n -> Function Scan on zips_in_mile_range \n(cost=0.00..15.00 rows=333 width=40) (actual time=151.479..151.671 rows=66 \nloops=1)\n Filter: (zip > ''::text)\n -> Index Scan using \nfacility_country_postal_code_idx on facility f (cost=0.00..34.25 rows=24 \nwidth=15) (actual time=4.969..43.740 rows=14 loops=66)\n Index Cond: ((f.default_country_code = \n'US'::bpchar) AND ((f.default_postal_code)::text = zips_in_mile_range.zip))\n -> Index Scan using provider_practice_facility_idx on \nprovider_practice pp (cost=0.00..3.36 rows=1 width=12) (actual \ntime=4.915..5.316 rows=1 loops=907)\n Index Cond: (f.facility_id = pp.facility_id)\n Filter: (facility_address_id IS NULL)\n -> Nested Loop (cost=0.00..28323.07 rows=1584 width=16) \n(actual time=170.310..762.472 rows=35 loops=1)\n -> Nested Loop (cost=0.00..7791.77 rows=1579 width=12) \n(actual time=170.289..612.579 rows=36 loops=1)\n -> Nested Loop (cost=0.00..2595.96 rows=712 \nwidth=12) (actual time=167.017..354.261 rows=29 loops=1)\n -> Function Scan on zips_in_mile_range \n(cost=0.00..15.00 rows=333 width=40) (actual time=150.188..150.312 rows=66 \nloops=1)\n Filter: (zip > ''::text)\n -> Index Scan using \naddress_country_postal_code_address_idx on address a (cost=0.00..7.73 \nrows=2 width=17) (actual time=2.483..3.086 rows=0 loops=66)\n Index Cond: ((a.country_code = \n'US'::bpchar) AND ((a.postal_code)::text = zips_in_mile_range.zip))\n -> Index Scan using facility_address_address_idx \non facility_address fa (cost=0.00..7.15 rows=12 width=8) (actual \ntime=7.652..8.901 rows=1 loops=29)\n Index Cond: (a.address_id = fa.address_id)\n -> Index Scan using \nprovider_practice_facility_address_idx on provider_practice pp \n(cost=0.00..12.80 rows=16 
width=12) (actual time=4.156..4.158 rows=1 \nloops=36)\n Index Cond: (fa.facility_address_id = \npp.facility_address_id)\nTotal runtime: 8639.066 ms\n\n\n", "msg_date": "Thu, 20 Sep 2007 11:02:55 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "REPOST: Nested loops row estimates always too high" }, { "msg_contents": "On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:\n> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS HERE)\n> \n> I am noticing that my queries are spending a lot of time in nested loops. \n> The table/index row estimates are not bad, but the nested loops can be off \n> by a factor of 50. In any case, they are always too high.\n> \n> Are the over-estimations below significant, and if so, is this an indication \n> of a general configuration problem?\nSounds much like the issue I was seeing as well.\n\n> \n> Unique (cost=67605.91..67653.18 rows=4727 width=16) (actual \n> time=8634.618..8637.918 rows=907 loops=1)\n\nYou can to rewrite the queries to individual queries to see it if helps.\n\nIn my case, I was doing\n\nselect a.a,b.b,c.c from \n(select a from x where) a <--- Put as a SRF\nleft join (\nselect b from y where ) b <--- Put as a SRF\non a.a = b.a\n\n\n\n", "msg_date": "Mon, 24 Sep 2007 14:46:16 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "Has anyone offered any answers to you? No one else has replied to this post.\n\n\n\"Ow Mun Heng\" <[email protected]> wrote in message \nnews:[email protected]...\n> On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:\n>> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS \n>> HERE)\n>>\n>> I am noticing that my queries are spending a lot of time in nested loops.\n>> The table/index row estimates are not bad, but the nested loops can be \n>> off\n>> by a factor of 50. In any case, they are always too high.\n>>\n>> Are the over-estimations below significant, and if so, is this an \n>> indication\n>> of a general configuration problem?\n> Sounds much like the issue I was seeing as well.\n>\n>>\n>> Unique (cost=67605.91..67653.18 rows=4727 width=16) (actual\n>> time=8634.618..8637.918 rows=907 loops=1)\n>\n> You can to rewrite the queries to individual queries to see it if helps.\n>\n> In my case, I was doing\n>\n> select a.a,b.b,c.c from\n> (select a from x where) a <--- Put as a SRF\n> left join (\n> select b from y where ) b <--- Put as a SRF\n> on a.a = b.a\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: explain analyze is your friend\n> \n\n", "msg_date": "Mon, 24 Sep 2007 14:12:01 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "On Mon, 2007-09-24 at 14:12 -0400, Carlo Stonebanks wrote:\n> Has anyone offered any answers to you? No one else has replied to this post.\n\nOverestimate of selectivity. I guess it's mainly due to my one to many\ntable relationships. 
I've tried everything from concatenated join\ncolumns and indexing it to creating all sorts of indexes and splitting\nthe (1) tables into multiple tables and upping the indexes to 1000 and\nturning of nestloops/enabling geqo/ tweaking the threshold/effort and\nmuch much more (as much as I was asked to/suggested to) but still no\nluck.\n\nIn my case, the individual queries were fast. So, In then end, I made a\nSRF and used the SRFs to join each other. This worked better.\n\n\n> \n> \n> \"Ow Mun Heng\" <[email protected]> wrote in message \n> news:[email protected]...\n> > On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:\n> >> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS \n> >> HERE)\n> >>\n> >> I am noticing that my queries are spending a lot of time in nested loops.\n> >> The table/index row estimates are not bad, but the nested loops can be \n> >> off\n> >> by a factor of 50. In any case, they are always too high.\n> >>\n> >> Are the over-estimations below significant, and if so, is this an \n> >> indication\n> >> of a general configuration problem?\n> > Sounds much like the issue I was seeing as well.\n> >\n> >>\n> >> Unique (cost=67605.91..67653.18 rows=4727 width=16) (actual\n> >> time=8634.618..8637.918 rows=907 loops=1)\n> >\n> > You can to rewrite the queries to individual queries to see it if helps.\n> >\n> > In my case, I was doing\n> >\n> > select a.a,b.b,c.c from\n> > (select a from x where) a <--- Put as a SRF\n> > left join (\n> > select b from y where ) b <--- Put as a SRF\n> > on a.a = b.a\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n", "msg_date": "Tue, 25 Sep 2007 08:50:40 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "My problem is that I think that SRF's are causing my problems. The SRF's\ngets an automatic row estimate of 1000 rows. Add a condition to it, the\nplanner guesses 333 rows. Even at 333, this is an overestimate of the number\nof rows returned.\n\nI'm really disappointed - SRF's are a great way to place the enterprise's\ndb-centric business logic at the server.\n\nCarlo\n\n-----Original Message-----\nFrom: Ow Mun Heng [mailto:[email protected]] \nSent: September 24, 2007 8:51 PM\nTo: Carlo Stonebanks\nCc: [email protected]\nSubject: Re: [PERFORM] REPOST: Nested loops row estimates always too high\n\nOn Mon, 2007-09-24 at 14:12 -0400, Carlo Stonebanks wrote:\n> Has anyone offered any answers to you? No one else has replied to this\npost.\n\nOverestimate of selectivity. I guess it's mainly due to my one to many\ntable relationships. I've tried everything from concatenated join\ncolumns and indexing it to creating all sorts of indexes and splitting\nthe (1) tables into multiple tables and upping the indexes to 1000 and\nturning of nestloops/enabling geqo/ tweaking the threshold/effort and\nmuch much more (as much as I was asked to/suggested to) but still no\nluck.\n\nIn my case, the individual queries were fast. So, In then end, I made a\nSRF and used the SRFs to join each other. 
This worked better.\n\n\n> \n> \n> \"Ow Mun Heng\" <[email protected]> wrote in message \n> news:[email protected]...\n> > On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:\n> >> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS \n> >> HERE)\n> >>\n> >> I am noticing that my queries are spending a lot of time in nested\nloops.\n> >> The table/index row estimates are not bad, but the nested loops can be \n> >> off\n> >> by a factor of 50. In any case, they are always too high.\n> >>\n> >> Are the over-estimations below significant, and if so, is this an \n> >> indication\n> >> of a general configuration problem?\n> > Sounds much like the issue I was seeing as well.\n> >\n> >>\n> >> Unique (cost=67605.91..67653.18 rows=4727 width=16) (actual\n> >> time=8634.618..8637.918 rows=907 loops=1)\n> >\n> > You can to rewrite the queries to individual queries to see it if helps.\n> >\n> > In my case, I was doing\n> >\n> > select a.a,b.b,c.c from\n> > (select a from x where) a <--- Put as a SRF\n> > left join (\n> > select b from y where ) b <--- Put as a SRF\n> > on a.a = b.a\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: explain analyze is your friend\n> > \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 7: You can help support the PostgreSQL project by donating at\n> \n> http://www.postgresql.org/about/donate\n\n\n", "msg_date": "Tue, 25 Sep 2007 00:53:55 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "On Tue, 2007-09-25 at 00:53 -0400, Carlo Stonebanks wrote:\n> My problem is that I think that SRF's are causing my problems. The SRF's\n> gets an automatic row estimate of 1000 rows.\n\nThat's correct. That's what I see too though I may return 10K rows of\ndata. (min 10 columns)\nBut It's way faster than the normal joins I do.\n\n> I'm really disappointed - SRF's are a great way to place the enterprise's\n> db-centric business logic at the server.\n\nActually, I think in general, nested Loops, while evil, are just going\nto be around. Even in MSSQL, when I'm pulling from, the nested loops are\nmany and I presume it's cos of the 8x SMP and the multiGB ram which is\nmaking the query better.\n\n\n> \n> Carlo\n> \n> -----Original Message-----\n> From: Ow Mun Heng [mailto:[email protected]] \n> Sent: September 24, 2007 8:51 PM\n> To: Carlo Stonebanks\n> Cc: [email protected]\n> Subject: Re: [PERFORM] REPOST: Nested loops row estimates always too high\n> \n> On Mon, 2007-09-24 at 14:12 -0400, Carlo Stonebanks wrote:\n> > Has anyone offered any answers to you? No one else has replied to this\n> post.\n> \n> Overestimate of selectivity. I guess it's mainly due to my one to many\n> table relationships. I've tried everything from concatenated join\n> columns and indexing it to creating all sorts of indexes and splitting\n> the (1) tables into multiple tables and upping the indexes to 1000 and\n> turning of nestloops/enabling geqo/ tweaking the threshold/effort and\n> much much more (as much as I was asked to/suggested to) but still no\n> luck.\n> \n> In my case, the individual queries were fast. So, In then end, I made a\n> SRF and used the SRFs to join each other. 
This worked better.\n> \n> \n> > \n> > \n> > \"Ow Mun Heng\" <[email protected]> wrote in message \n> > news:[email protected]...\n> > > On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:\n> > >> (SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS \n> > >> HERE)\n> > >>\n> > >> I am noticing that my queries are spending a lot of time in nested\n> loops.\n> > >> The table/index row estimates are not bad, but the nested loops can be \n> > >> off\n> > >> by a factor of 50. In any case, they are always too high.\n> > >>\n> > >> Are the over-estimations below significant, and if so, is this an \n> > >> indication\n> > >> of a general configuration problem?\n> > > Sounds much like the issue I was seeing as well.\n> > >\n> > >>\n> > >> Unique (cost=67605.91..67653.18 rows=4727 width=16) (actual\n> > >> time=8634.618..8637.918 rows=907 loops=1)\n> > >\n> > > You can to rewrite the queries to individual queries to see it if helps.\n> > >\n> > > In my case, I was doing\n> > >\n> > > select a.a,b.b,c.c from\n> > > (select a from x where) a <--- Put as a SRF\n> > > left join (\n> > > select b from y where ) b <--- Put as a SRF\n> > > on a.a = b.a\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 6: explain analyze is your friend\n> > > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 7: You can help support the PostgreSQL project by donating at\n> > \n> > http://www.postgresql.org/about/donate\n> \n> \n", "msg_date": "Tue, 25 Sep 2007 13:15:58 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "On Tue, Sep 25, 2007 at 12:53:55AM -0400, Carlo Stonebanks wrote:\n> My problem is that I think that SRF's are causing my problems. The SRF's\n> gets an automatic row estimate of 1000 rows. Add a condition to it, the\n> planner guesses 333 rows. Even at 333, this is an overestimate of the number\n> of rows returned.\n> \n> I'm really disappointed - SRF's are a great way to place the enterprise's\n> db-centric business logic at the server.\n\nFortunately, in 8.3 you can attach a row estimate to the function yourself,\nwhich should most likely fix your problem. Look forward to the first beta :-)\n\n/* Steinar */\n-- \nHomepage: http://www.sesse.net/\n", "msg_date": "Tue, 25 Sep 2007 11:31:33 +0200", "msg_from": "\"Steinar H. Gunderson\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "On Tue, 2007-09-25 at 11:31 +0200, Steinar H. Gunderson wrote:\n> On Tue, Sep 25, 2007 at 12:53:55AM -0400, Carlo Stonebanks wrote:\n> > My problem is that I think that SRF's are causing my problems. The SRF's\n> > gets an automatic row estimate of 1000 rows. Add a condition to it, the\n> > planner guesses 333 rows. Even at 333, this is an overestimate of the number\n> > of rows returned.\n> > \n> > I'm really disappointed - SRF's are a great way to place the enterprise's\n> > db-centric business logic at the server.\n> \n> Fortunately, in 8.3 you can attach a row estimate to the function yourself,\n> which should most likely fix your problem. 
Look forward to the first beta :-)\n> \n\nWhere can I erad more about this new \"feature\"?\n", "msg_date": "Wed, 26 Sep 2007 11:50:46 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" }, { "msg_contents": "Ow Mun Heng <[email protected]> writes:\n> Where can I erad more about this new \"feature\"?\n\nhttp://developer.postgresql.org/pgdocs/postgres/sql-createfunction.html\n\nhttp://developer.postgresql.org/pgdocs/postgres/ always has a current\nsnapshot of CVS-HEAD documentation...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2007 00:02:58 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high " }, { "msg_contents": "On Wed, 2007-09-26 at 00:02 -0400, Tom Lane wrote:\n> Ow Mun Heng <[email protected]> writes:\n> > Where can I erad more about this new \"feature\"?\n> \n> http://developer.postgresql.org/pgdocs/postgres/sql-createfunction.html\n> \n> http://developer.postgresql.org/pgdocs/postgres/ always has a current\n> snapshot of CVS-HEAD documentation...\n\n\nI read these two items\n\n...\n\nexecution_cost\n\n A positive number giving the estimated execution cost for the\nfunction, in units of cpu_operator_cost. If the function returns a set,\nthis is the cost per returned row. If the cost is not specified, 1 unit\nis assumed for C-language and internal functions, and 100 units for\nfunctions in all other languages. Larger values cause the planner to try\nto avoid evaluating the function more often than necessary. \n\nresult_rows\n\n A positive number giving the estimated number of rows that the\nplanner should expect the function to return. This is only allowed when\nthe function is declared to return a set. The default assumption is 1000\nrows. \n...\n[/snip]\n\n\n\n", "msg_date": "Thu, 27 Sep 2007 14:59:13 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: REPOST: Nested loops row estimates always too high" } ]
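Putting the two settings quoted above into context: in 8.3 they are attached to the function itself at creation time. A sketch of the syntax only — the body, argument list, return type and numbers here are placeholders, not the real zips_in_mile_range definition:

    CREATE OR REPLACE FUNCTION zips_in_mile_range(zip text, radius numeric)
    RETURNS SETOF record AS
    $$ SELECT ... $$             -- placeholder body
    LANGUAGE sql
    COST 1000                    -- for a set-returning function, cost per returned row,
                                 -- in units of cpu_operator_cost
    ROWS 100;                    -- planner expects ~100 rows instead of the default 1000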
[ { "msg_contents": "My new server postgresql.conf is equal to the old one. I'm doubting this is a hardware issue.\nGoogling with my hard HP Proliant ML 110 G3 I saw that IHC7 controller has some problems, but looking and testing with hdparm it looks ok.\nhdparm -tT /dev/sdaç\nTiming cached reads: 1722 MB in 2.00 seconds = 860.38 MB/sec\nTiming buffered disks reads: 164 MB in 3.01 seconds = 54.53 MB/sec\n\nDoing hdparm -I /dev/sda\nDMA has * in udma5\nWhich other test can I do to find if this is a hardware, kernel o postgres issue?\n\nRegards\nAgustin\n\n----- Mensaje original ----\nDe: Jean-David Beyer <[email protected]>\nPara: [email protected]\nEnviado: jueves 20 de septiembre de 2007, 9:31:36\nAsunto: Re: [PERFORM] Low CPU Usage\n\[email protected] wrote:\n> Hola Beto.\n> I have no idea where to look for that configuration or settings.\n\nIn postgreSQL, the main settings are in .../pgsql/data/postgresql.conf\n\n> Yesterday I red about some drivers problems with SATA disk working\n> togheter with IDE devices with DMA.\n> \n> Mi server server is a Pentium VI 3.3 with hyper threading (enabled in\n> BIOS), HP Proliant ML 110.\n> \n> Then I entered to the BIOS and saw in IDE Configuration:\n> ATA/IDE Configuration [Enhanced]\n> Configure SATA as [IDE] => it has RAID\n> option too\n> \n> I have any idea how to continue!!! I don't know if this a SATA problem,\n> a configuration problem or what else. I have installed several servers\n> beggining with postgres 6.4 and I've neved had this kind of problems\n> (always with IDE disks). I think this is a problem with SATA disk i/o,\n> but I don't see how to measure that (I have already set postgresql.conf).\n\nAre you sure you are really having a problem with insufficient CPU time\nbeing devoted to your program(s)? When I run postgreSQL and do the initial\npopulating of my database, which takes several hours due to the nature of\nthe input data, it runs just 25% to 50% of one CPU, even though I have two\n3.06 GHz hyperthreaded Xeon processors and six 10,000 rpm Ultra/320 SCSI\nhard drives on two SCSI controllers. If I look at the results of the Linux\ntop command, and iostat and vmstat, I see that I am in io-wait state 100% of\nthe time. The transfer rate to the hard drives averages about 2\nMegabytes/second even though I have seen 90 Megabytes/second at times (when\ndoing a database restore). So the IO system can be quite fast when it is not\nwaiting (for seeks, no doubt). If the postgreSQL processes wanted more CPU\ntime, they could have it as the machine does not do much else most of the\ntime. Actually, it runs a four BOINC processes, but they run at nice level\n19, so they run only if no other process wants processing time. When I do a\ndatabase backup, it will run more than 100% of a CPU (remember I have two or\nfour processors, depending on how you count them) for extended periods, so\nthe OS is certainly capable of supplying CPU power when I need it. And\npostgreSQL runs multiple processes at once, so in theory, they could gert\n400% of a processor if they needed it. 
They do not seem to need to do this\nfor me.\n> \n> Regards\n> Agustin\n> \n> \n> ----- Mensaje original ----\n> De: Norberto Meijome <[email protected]>\n> Para: [email protected]\n> CC: [email protected]\n> Enviado: jueves 20 de septiembre de 2007, 7:53:05\n> Asunto: Re: [PERFORM] Low CPU Usage\n> \n> On Wed, 19 Sep 2007 12:13:33 -0700 (PDT)\n> [email protected] wrote:\n> \n>> max_stack_depth = 7MB #in the old server is 8MB but if I set in here\n> give me the ulimit error\n> \n> Hola Agustin :)\n> otro argentino en el extranjero x aca ;)\n> \n> anyway, back to English ;)\n> \n> a long shot but...\n> \n> check if you have any limits set on the host for CPU usage... you may be\n> limited to x number of secs / % by the OS scheduler. When you query your\n> CPU,\n> it will say u are only using 5% or so...\n> \n\n\n-- \n .~. Jean-David Beyer Registered Linux User 85642.\n /V\\ PGP-Key: 9A2FC99A Registered Machine 241939.\n /( )\\ Shrewsbury, New Jersey http://counter.li.org\n ^^-^^ 08:15:01 up 6 days, 42 min, 1 user, load average: 4.24, 4.25, 4.14\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n\n\n\n\n\n\n\n Seguí de cerca a la Selección Argentina de Rugby en el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby\nMy new server postgresql.conf is equal to the old one. I'm doubting this is a hardware issue.Googling with my hard HP Proliant ML 110 G3 I saw that IHC7 controller has some problems, but looking and testing with hdparm it looks ok.hdparm -tT /dev/sdaçTiming cached reads: 1722 MB in 2.00 seconds = 860.38 MB/secTiming buffered disks reads: 164 MB in 3.01 seconds = 54.53 MB/secDoing hdparm -I /dev/sdaDMA has * in udma5Which other test can I do to find if this is a hardware, kernel o postgres issue?RegardsAgustin----- Mensaje original ----De: Jean-David Beyer\n <[email protected]>Para: [email protected]: jueves 20 de septiembre de 2007, 9:31:36Asunto: Re: [PERFORM] Low CPU [email protected] wrote:> Hola Beto.> I have no idea where to look for that configuration or settings.In postgreSQL, the main settings are in .../pgsql/data/postgresql.conf> Yesterday I red about some drivers problems with SATA disk working> togheter with IDE devices with DMA.> > Mi server server is a Pentium VI 3.3 with hyper threading (enabled in> BIOS), HP Proliant ML 110.> > Then I entered to the BIOS and saw in IDE Configuration:>   ATA/IDE Configuration                    [Enhanced]>           Configure SATA\n as                   [IDE]  => it has RAID> option too> > I have any idea how to continue!!! I don't know if this a SATA problem,> a configuration problem or what else. I have installed several servers> beggining with postgres 6.4 and I've neved had this kind of problems> (always with IDE disks). I think this is a problem with SATA disk i/o,> but I don't see how to measure that (I have already set postgresql.conf).Are you sure you are really having a problem with insufficient CPU timebeing devoted to your program(s)? When I run postgreSQL and do the initialpopulating of my database, which takes several hours due to the nature ofthe input data, it runs just 25% to 50% of one CPU, even though I have two3.06 GHz hyperthreaded Xeon processors and six 10,000 rpm Ultra/320 SCSIhard\n drives on two SCSI controllers. If I look at the results of the Linuxtop command, and iostat and vmstat, I see that I am in io-wait state 100% ofthe time. 
The transfer rate to the hard drives averages about 2Megabytes/second even though I have seen 90 Megabytes/second at times (whendoing a database restore). So the IO system can be quite fast when it is notwaiting (for seeks, no doubt). If the postgreSQL processes wanted more CPUtime, they could have it as the machine does not do much else most of thetime. Actually, it runs a four BOINC processes, but they run at nice level19, so they run only if no other process wants processing time. When I do adatabase backup, it will run more than 100% of a CPU (remember I have two orfour processors, depending on how you count them) for extended periods, sothe OS is certainly capable of supplying CPU power when I need it. AndpostgreSQL runs multiple processes at once,\n so in theory, they could gert400% of a processor if they needed it. They do not seem to need to do thisfor me.> > Regards> Agustin> > > ----- Mensaje original ----> De: Norberto Meijome <[email protected]>> Para: [email protected]> CC: [email protected]> Enviado: jueves 20 de septiembre de 2007, 7:53:05> Asunto: Re: [PERFORM] Low CPU Usage> > On Wed, 19 Sep 2007 12:13:33 -0700 (PDT)> [email protected] wrote:> >> max_stack_depth = 7MB #in the old server is 8MB but if I set in here> give me the ulimit error> > Hola Agustin :)> otro argentino en el extranjero x aca ;)> > anyway, back to English ;)> > a long shot but...> > check if you have any limits set on the host for CPU usage... you may be> limited to x\n number of secs / % by the OS scheduler. When you query your> CPU,> it will say u are only using 5% or so...> --   .~.  Jean-David Beyer          Registered Linux User 85642.  /V\\  PGP-Key: 9A2FC99A         Registered Machine   241939. /( )\\ Shrewsbury, New Jersey    http://counter.li.org ^^-^^ 08:15:01 up 6 days, 42 min, 1 user, load average: 4.24, 4.25, 4.14---------------------------(end of broadcast)---------------------------TIP 3: Have you checked our extensive FAQ?               http://www.postgresql.org/docs/faq\nLos referentes más importantes en compra/venta de autos se juntaron:Demotores y Yahoo!.\nAhora comprar o vender tu auto es más fácil. Visitá http://ar.autos.yahoo.com/", "msg_date": "Thu, 20 Sep 2007 10:28:37 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "On Thu, 20 Sep 2007, [email protected] wrote:\n\n> Which other test can I do to find if this is a hardware, kernel o \n> postgres issue?\n\nThe little test hdparm does is not exactly a robust hard drive benchmark. \nIf you want to rule out hard drive transfer speed issues, take at look at \nthe tests suggested at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm and \nsee how your results compare to the single SATA disk example I give there.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Sat, 22 Sep 2007 02:29:17 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
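Not a replacement for the dd-based tests on that page, but a quick sanity check that can be run from psql alone: time a full sequential scan of a table larger than RAM and divide its on-disk size by the elapsed time. If the resulting MB/s on the new SATA machine is far below what hdparm reports, the problem sits somewhere between the filesystem and PostgreSQL rather than in the drive itself. 'some_big_table' is a placeholder:

    \timing
    SELECT pg_relation_size('some_big_table') / (1024 * 1024) AS size_mb;
    SELECT count(*) FROM some_big_table;   -- elapsed time is printed by \timing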
[ { "msg_contents": "\nHello all,\n\nOld servers that housed 7.4 performed better than 8.1.4 version...are there\nany MAJOR performance hits with this version???\n\nI set the postgresql.conf setting to equal that of 7.4 and queries still run\nSLOW on 8.1.4...\n\nI have perform maintenance tonight on the 8.1.4 server - any ideas what\nactions I should take???\n\ndefault stats set to 50 (in postgresql.conf)\n\n1) Restart instance\n2) Dump \\ reload database\n3) vacuum analyze\n4) rebuild index database\n\nI keep doing these same steps and nothing seems to work...I've read where\nsome are saying to VACUUM several times - then reindex (???)\n\nCan someone tell me what they do during a NORMAL maintenance window on their\nservers???\n\nAll this is NEW to me.\n\nThanks,\nMichelle.\n-- \nView this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12803859\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 20 Sep 2007 12:10:23 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "smiley2211 wrote:\n> Hello all,\n>\n> Old servers that housed 7.4 performed better than 8.1.4 version...are there\n> any MAJOR performance hits with this version???\n> \n\nAre you using the default UNICODE encoding for your databases??\nThis could potentially translate into a performance hit (considerable?\nMaybe, depending on what your applications do)\n\nA related question: why not update to the latest, 8.2.x ??\n\n> I set the postgresql.conf setting to equal that of 7.4 and queries still run\n> SLOW on 8.1.4...\n> \n\nHmmm, I don't think the settings should be the same --- search the\narchives for discussions on performance tuning and an informal\ndocumentation of the postgresql.conf file.\n\n> 3) vacuum analyze\n> \n\nAm I understanding correctly that you did this?? Just to double check,\nyes, it is *very* important that you analyze the database *after loading \nit*.\n\nYou could probably check the postgres log file to see if there are any\nobvious red flags in there.\n\nHTH,\n\nCarlos\n--\n\n", "msg_date": "Thu, 20 Sep 2007 15:34:58 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "\nNo, I didn't UPGRADE it but that's what I inherited :( ...not sure of the\ncode page stuff because I am not the one who did the upgrade...I'm not sure\nI know ENOUGH about POSTGRESQL to mess around with the codepage...\n\nYes, I use vacuum analyze...\n\nYes, I used the postgresql.conf of 7.4 and tried to match the 8.1.4 to\nthat...I didn't know where else to start...The users have been complaining\nsince DAY1 as I am told...\n\nThanks,\nMichelle\n\n-- \nView this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12805270\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 20 Sep 2007 13:32:07 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" 
}, { "msg_contents": "On 9/20/07, smiley2211 <[email protected]> wrote:\n>\n> No, I didn't UPGRADE it but that's what I inherited :( ...not sure of the\n> code page stuff because I am not the one who did the upgrade...I'm not sure\n> I know ENOUGH about POSTGRESQL to mess around with the codepage...\n>\n> Yes, I use vacuum analyze...\n>\n> Yes, I used the postgresql.conf of 7.4 and tried to match the 8.1.4 to\n> that...I didn't know where else to start...The users have been complaining\n> since DAY1 as I am told...\n\nOK, a few things you need to look into.\n\nDo you have horrendous bloating in the db. run vacuum verbose on your\ndb and see what it says. You should probably turn on the autovacuum\ndaemon either way. If your database has gotten bloated you may need\nto vacuum full / reindex to get your space back.\n\nWhat queries are slow, specifically. you can set the server to log\nlong running servers in postgresql.conf. Find the longest running\nones and run them by hand with explain analyze at the front, like:\n\nexplain analyze select .....\n\nlastly, run\n\nvmstat 10\n\nfrom the command line while the machine is running slow and see where\neffort is going. I'm guessing you'll see a lot of id in there.\n", "msg_date": "Thu, 20 Sep 2007 16:25:19 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "On 9/20/07, Scott Marlowe <[email protected]> wrote:\n\n> effort is going. I'm guessing you'll see a lot of id in there.\n\nsorry, meant wa (wait)\n", "msg_date": "Thu, 20 Sep 2007 16:25:42 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "\nHow do I know if there is BLOATING??? I just ran vacuum verbose; \n\nYes, autovacuum is on.\n\nThanks...Michelle\n\n-- \nView this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12807959\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 20 Sep 2007 16:25:43 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "On 9/20/07, smiley2211 <[email protected]> wrote:\n>\n> How do I know if there is BLOATING??? I just ran vacuum verbose;\n>\n> Yes, autovacuum is on.\n\nPost the last 4 or 5 lines from vacuum verbose.\n", "msg_date": "Thu, 20 Sep 2007 19:52:52 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" 
}, { "msg_contents": "\nHere are the requested lines...\n\nThere were 0 unused item pointers.\n0 pages are entirely empty.\nCPU 0.00s/0.00u sec elapsed 0.00 sec.\nINFO: free space map contains 786 pages in 297 relations\nDETAIL: A total of 5408 page slots are in use (including overhead).\n5408 page slots are required to track all free space.\nCurrent limits are: 40000 page slots, 1000 relations, using 341 KB.\nVACUUM\n\n\n-- \nView this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12810028\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Thu, 20 Sep 2007 20:25:44 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "> Old servers that housed 7.4 performed better than 8.1.4 version...are\n> there any MAJOR performance hits with this version???\n>\n> I set the postgresql.conf setting to equal that of 7.4 and queries still\n> run\n> SLOW on 8.1.4...\n\nWe need to find a specific query that is slow now that was fast before,\nand see the EXPLAIN ANALYZE of that query.\n\nIf you have the old server still around then showing the EXPLAIN ANALYZE\nof the same query on that server would be a lot of help.\n\n/Dennis\n\n\n", "msg_date": "Fri, 21 Sep 2007 07:58:27 +0200 (CEST)", "msg_from": "[email protected]", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "\nDennis,\n\nThanks for your reply.\n\nNo, the OLD server are no longer available (decommissioned) - the new\nservers are definitely better h\\w.\n\nI do not have any queries to EXPLAIN ANALYZE as they are built by the\napplication and I am not allowed to enable logging on for that server - so\nwhere do I go from here???\n\nI am pretty much trying to make changes in the postgresql.conf file but\ndon't have a CLUE as to what starting numbers I should be looking at to\nchange(???)\n\nHere is the EXPLAIN ANALYZE for the ONE (1) query I do have...it takes 4 - 5\nhours to run a SELECT with the 'EXPLAIN ANALYZE':\n\n \nQUERY PLAN \n \n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n-------------------\n Limit (cost=100013612.76..299939413.70 rows=1 width=8) (actual\ntime=10084289.859..10084289.861 rows=1 loops=1)\n -> Subquery Scan people_consent (cost=100013612.76..624068438343.99\nrows=3121 width=8) (actual time=10084289.853..10084289.853 rows=1 loops=1)\n -> Append (cost=100013612.76..624068438312.78 rows=3121\nwidth=815) (actual time=10084289.849..10084289.849 rows=1 loops=1)\n -> Nested Loop (cost=100013612.76..100013621.50 rows=2\nwidth=815) (actual time=10084289.846..10084289.846 rows=1 loops=1)\n -> Unique (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.817..10084289.817 rows=1 loops=1)\n -> Sort (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.814..10084289.814 rows=1 loops=1)\n Sort Key: temp_consent.id\n -> Unique \n(cost=100013612.71..100013612.73 rows=2 width=36) (actual\ntime=10084245.195..10084277.468 rows=7292 loops=1)\n -> Sort \n(cost=100013612.71..100013612.72 rows=2 width=36) (actual\ntime=10084245.191..10084254.425 rows=7292 loops=1)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100013515.80..100013612.70 
rows=2 width=36) (actual\ntime=10083991.226..10084228.613 rows=7292 loops=1)\n -> HashAggregate \n(cost=100013515.80..100013515.82 rows=1 width=36) (actual\ntime=10083991.223..10083998.046 rows=3666 loops=1)\n -> Nested Loop \n(cost=100000060.61..100013515.80 rows=1 width=36) (actual\ntime=388.263..10083961.330 rows=3702 loops=1)\n -> Nested\nLoop (cost=100000060.61..100013511.43 rows=1 width=36) (actual\ntime=388.237..10083897.268 rows=3702 loops=1)\n -> \nNested Loop (cost=100000060.61..100013507.59 rows=1 width=24) (actual\ntime=388.209..10083833.870 rows=3702 loops=1)\n \n-> Nested Loop (cost=100000060.61..100013504.56 rows=1 width=24) (actual\ntime=388.173..10083731.122 rows=3702 loops=1)\n \nJoin Filter: (\"inner\".question_answer_id = \"outer\".id)\n \n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.978..114.768 rows=7430 loops=1)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (actual time=0.084..0.088 rows=1 loops=1)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (actual time=13.881..87.112 rows=7430 loops=1)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'share\nWithEval'::text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=13.198..13.198 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(actual time=9.689..9.689 rows=57804 loops=1)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.563..2.563 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=1.923..1.923 rows=6237 loops=1)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=0.634..0.634 rows=2047 loops=1)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n-> Seq Scan on encounters_questions_answers eqa \n(cost=100000000.00..100007608.66 rows=464766 width=8) (actual\ntime=0.003..735.934 rows=464766 loop\ns=7430)\n \n-> Index Scan using encounters_id on encounters ec (cost=0.00..3.02 rows=1\nwidth=8) (actual time=0.016..0.018 rows=1 loops=3702)\n \nIndex Cond: (ec.id = \"outer\".encounter_id)\n -> \nIndex Scan using enrollements_pk on enrollments en (cost=0.00..3.82 rows=1\nwidth=20) (actual time=0.008..0.010 rows=1 loops=3702)\n \nIndex Cond: (\"outer\".enrollment_id = en.id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.008..0.010 rows=1 loops=3702)\n Index\nCond: (p.id = \"outer\".person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (actual time=205.471..212.207 rows=3626\nloops=1)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (actual time=13.163..196.421 rows=3722\nloops=1)\n -> Nested\nLoop (cost=60.61..92.48 rows=1 width=36) (actual time=13.149..158.112\nrows=3722 loops=1)\n -> \nNested Loop (cost=60.61..89.36 rows=1 width=24) (actual\ntime=13.125..120.021 rows=3722 loops=1)\n \n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.013..48.460 rows=7430 loops=1)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (actual time=0.030..0.032 rows=1 loops=1)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 
rows=7\nwidth=16) (actual time=12.965..28.902 rows=7430 loops=1)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEv\nal'::text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=12.288..12.288 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(actual time=8.985..8.985 rows=57804 loops=1)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.344..2.344 rows=0 loops=1)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=1.762..1.762 rows=6237 loops=1)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(actual time=0.578..0.578 rows=2047 loops=1)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n-> Index Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n(cost=0.00..3.02 rows=1 width=8) (actual time=0.005..0.006 rows=1\nloops=7430)\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\nlines 1-69/129 56%\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\n -> Unique \n(cost=100010968.94..100010968.96 rows=2 width=36) (never executed)\n -> Sort \n(cost=100010968.94..100010968.95 rows=2 width=36) (never executed)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100010872.03..100010968.93 rows=2 width=36) (never executed)\n -> HashAggregate \n(cost=100010872.03..100010872.04 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=100000907.99..100010872.02 rows=1 width=36) (never executed)\n Join\nFilter: (\"inner\".question_answer_id = \"outer\".id)\n -> Nested\nLoop (cost=60.61..90.69 rows=1 width=36) (never executed)\n -> 
\nNested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n(never executed)\n \nIndex Cond: (id = $0)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (never executed)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n -> \nBitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text =\n'shareWithEval'::text)\n))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(never executed)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> Hash\nJoin (cost=100000847.38..100010780.52 rows=65 width=20) (never executed)\n Hash\nCond: (\"outer\".encounter_id = \"inner\".id)\n -> \nSeq Scan on encounters_questions_answers eqa \n(cost=100000000.00..100007608.66 rows=464766 width=8) (never executed)\n -> \nHash (cost=847.37..847.37 rows=3 width=20) (never executed)\n \n-> Hash Join (cost=214.73..847.37 rows=3 width=20) (never executed)\n \nHash Cond: (\"outer\".enrollment_id = \"inner\".id)\n \n-> Index Scan using encounters_id on encounters ec (cost=0.00..524.72\nrows=21578 width=8) (never executed)\n \n-> Hash (cost=214.73..214.73 rows=1 width=20) (never executed)\n \n-> Index Scan using enrollements_pk on enrollments en (cost=0.00..214.73\nrows=1 width=20) (never executed)\n \nFilter: ($0 = person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (never executed)\n -> Nested\nLoop (cost=60.61..93.72 rows=1 width=32) (never executed)\n -> \nNested Loop (cost=60.61..90.69 rows=1 width=36) (never executed)\n \n-> Nested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n(never executed)\n \nIndex Cond: (id = $0)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\nwidth=28) (never executed)\n \nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEval':\n:text)))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n(never executed)\n \nIndex Cond: (qa.answer_id = \"outer\".id)\n \n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n(never executed)\n \nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> \nIndex Scan using ctccalls_qs_as_qaid on 
ctccalls_questions_answers cqa \n(cost=0.00..3.02 rows=1 width=8) (never executed)\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> Index\nScan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1 width=20)\n(never executed)\n Index\nCond: (c.id = \"outer\".call_id)\n \nFilter: ($0 = person_id)\n Total runtime: 10084292.497 ms\n(125 rows)\n\n\nThanks...Marsha\n\n-- \nView this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12820410\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Fri, 21 Sep 2007 06:14:22 -0700 (PDT)", "msg_from": "smiley2211 <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "In response to smiley2211 <[email protected]>:\n> \n> Dennis,\n> \n> Thanks for your reply.\n> \n> No, the OLD server are no longer available (decommissioned) - the new\n> servers are definitely better h\\w.\n\nSays who? I've heard that one before, and I've seen it be false.\nSome wonk replaced a 1Ghz system with 1G of RAM and a high-end SCSI\nRAID 10 with a new 3ghz server with 4G of ram and a cheapo SATA-based\nRAID 5, but doesn't know he was better off with the older system?\n\nThat may not apply to you, or it might. We don't know because you\ndidn't give us details.\n\n> I do not have any queries to EXPLAIN ANALYZE as they are built by the\n> application and I am not allowed to enable logging on for that server - so\n> where do I go from here???\n\nUpdate your resume. If you're expected to performance tune this system,\nbut you're not allowed to enable logging and you can't get a look at\nthe queries, you're going to looking for new employment soon, because\nyou've been asked to do the impossible.\n\n> I am pretty much trying to make changes in the postgresql.conf file but\n> don't have a CLUE as to what starting numbers I should be looking at to\n> change(???)\n> \n> Here is the EXPLAIN ANALYZE for the ONE (1) query I do have...it takes 4 - 5\n> hours to run a SELECT with the 'EXPLAIN ANALYZE':\n\nIt's very difficult (if not impossible) to make sense of this output\nwithout the query itself. 
It would also be nice if your mail program\ndidn't mangle the output, as it would save folks having to reconstruct\nit.\n\n> \n> \n> QUERY PLAN \n> \n> -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> -------------------\n> Limit (cost=100013612.76..299939413.70 rows=1 width=8) (actual\n> time=10084289.859..10084289.861 rows=1 loops=1)\n> -> Subquery Scan people_consent (cost=100013612.76..624068438343.99\n> rows=3121 width=8) (actual time=10084289.853..10084289.853 rows=1 loops=1)\n> -> Append (cost=100013612.76..624068438312.78 rows=3121\n> width=815) (actual time=10084289.849..10084289.849 rows=1 loops=1)\n> -> Nested Loop (cost=100013612.76..100013621.50 rows=2\n> width=815) (actual time=10084289.846..10084289.846 rows=1 loops=1)\n> -> Unique (cost=100013612.76..100013612.77 rows=2\n> width=8) (actual time=10084289.817..10084289.817 rows=1 loops=1)\n> -> Sort (cost=100013612.76..100013612.77 rows=2\n> width=8) (actual time=10084289.814..10084289.814 rows=1 loops=1)\n> Sort Key: temp_consent.id\n> -> Unique \n> (cost=100013612.71..100013612.73 rows=2 width=36) (actual\n> time=10084245.195..10084277.468 rows=7292 loops=1)\n> -> Sort \n> (cost=100013612.71..100013612.72 rows=2 width=36) (actual\n> time=10084245.191..10084254.425 rows=7292 loops=1)\n> Sort Key: id, daterecorded,\n> answer\n> -> Append \n> (cost=100013515.80..100013612.70 rows=2 width=36) (actual\n> time=10083991.226..10084228.613 rows=7292 loops=1)\n> -> HashAggregate \n> (cost=100013515.80..100013515.82 rows=1 width=36) (actual\n> time=10083991.223..10083998.046 rows=3666 loops=1)\n> -> Nested Loop \n> (cost=100000060.61..100013515.80 rows=1 width=36) (actual\n> time=388.263..10083961.330 rows=3702 loops=1)\n> -> Nested\n> Loop (cost=100000060.61..100013511.43 rows=1 width=36) (actual\n> time=388.237..10083897.268 rows=3702 loops=1)\n> -> \n> Nested Loop (cost=100000060.61..100013507.59 rows=1 width=24) (actual\n> time=388.209..10083833.870 rows=3702 loops=1)\n> \n> -> Nested Loop (cost=100000060.61..100013504.56 rows=1 width=24) (actual\n> time=388.173..10083731.122 rows=3702 loops=1)\n> \n> Join Filter: (\"inner\".question_answer_id = \"outer\".id)\n> \n> -> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\n> time=13.978..114.768 rows=7430 loops=1)\n> \n> -> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\n> width=28) (actual time=0.084..0.088 rows=1 loops=1)\n> \n> Index Cond: ((answer)::text = 'Yes'::text)\n> \n> -> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\n> width=16) (actual time=13.881..87.112 rows=7430 loops=1)\n> \n> Recheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n> 'consentTransfer'::text) OR ((qa.question_tag)::text = 'share\n> WithEval'::text)))\n> \n> -> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\n> time=13.198..13.198 rows=0 loops=1)\n> \n> -> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n> (actual time=9.689..9.689 rows=57804 loops=1)\n> \n> Index Cond: (qa.answer_id = \"outer\".id)\n> \n> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\n> time=2.563..2.563 rows=0 loops=1)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (actual time=1.923..1.923 rows=6237 loops=1)\n> \n> Index Cond: ((question_tag)::text = 'consentTransfer'::text)\n> \n> -> Bitmap Index Scan on 
qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (actual time=0.634..0.634 rows=2047 loops=1)\n> \n> Index Cond: ((question_tag)::text = 'shareWithEval'::text)\n> \n> -> Seq Scan on encounters_questions_answers eqa \n> (cost=100000000.00..100007608.66 rows=464766 width=8) (actual\n> time=0.003..735.934 rows=464766 loop\n> s=7430)\n> \n> -> Index Scan using encounters_id on encounters ec (cost=0.00..3.02 rows=1\n> width=8) (actual time=0.016..0.018 rows=1 loops=3702)\n> \n> Index Cond: (ec.id = \"outer\".encounter_id)\n> -> \n> Index Scan using enrollements_pk on enrollments en (cost=0.00..3.82 rows=1\n> width=20) (actual time=0.008..0.010 rows=1 loops=3702)\n> \n> Index Cond: (\"outer\".enrollment_id = en.id)\n> -> Index\n> Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\n> time=0.008..0.010 rows=1 loops=3702)\n> Index\n> Cond: (p.id = \"outer\".person_id)\n> -> HashAggregate \n> (cost=96.86..96.87 rows=1 width=36) (actual time=205.471..212.207 rows=3626\n> loops=1)\n> -> Nested Loop \n> (cost=60.61..96.85 rows=1 width=36) (actual time=13.163..196.421 rows=3722\n> loops=1)\n> -> Nested\n> Loop (cost=60.61..92.48 rows=1 width=36) (actual time=13.149..158.112\n> rows=3722 loops=1)\n> -> \n> Nested Loop (cost=60.61..89.36 rows=1 width=24) (actual\n> time=13.125..120.021 rows=3722 loops=1)\n> \n> -> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\n> time=13.013..48.460 rows=7430 loops=1)\n> \n> -> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\n> width=28) (actual time=0.030..0.032 rows=1 loops=1)\n> \n> Index Cond: ((answer)::text = 'Yes'::text)\n> \n> -> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\n> width=16) (actual time=12.965..28.902 rows=7430 loops=1)\n> \n> Recheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n> 'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEv\n> al'::text)))\n> \n> -> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\n> time=12.288..12.288 rows=0 loops=1)\n> \n> -> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n> (actual time=8.985..8.985 rows=57804 loops=1)\n> \n> Index Cond: (qa.answer_id = \"outer\".id)\n> \n> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\n> time=2.344..2.344 rows=0 loops=1)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (actual time=1.762..1.762 rows=6237 loops=1)\n> \n> Index Cond: ((question_tag)::text = 'consentTransfer'::text)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (actual time=0.578..0.578 rows=2047 loops=1)\n> \n> Index Cond: ((question_tag)::text = 'shareWithEval'::text)\n> \n> -> Index Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n> (cost=0.00..3.02 rows=1 width=8) (actual time=0.005..0.006 rows=1\n> loops=7430)\n> \n> Index Cond: (cqa.question_answer_id = \"outer\".id)\n> -> \n> Index Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\n> width=20) (actual time=0.003..0.005 rows=1 loops=3722)\n> \n> Index Cond: (c.id = \"outer\".call_id)\n> -> Index\n> Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\n> time=0.004..0.005 rows=1 loops=3722)\n> Index\n> Cond: (p.id = \"outer\".person_id)\n> -> Index Scan using people_pk on people \n> (cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: (people.id = \"outer\".id)\n> -> Subquery Scan \"*SELECT* 2\" \n> (cost=100000000.00..623968424691.25 rows=3119 width=676) 
(never executed)\n> -> Seq Scan on people \n> (cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Subquery Scan temp_consent \n> (cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\n> lines 1-69/129 56%\n> \n> Index Cond: (cqa.question_answer_id = \"outer\".id)\n> -> \n> Index Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\n> width=20) (actual time=0.003..0.005 rows=1 loops=3722)\n> \n> Index Cond: (c.id = \"outer\".call_id)\n> -> Index\n> Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\n> time=0.004..0.005 rows=1 loops=3722)\n> Index\n> Cond: (p.id = \"outer\".person_id)\n> -> Index Scan using people_pk on people \n> (cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n> Index Cond: (people.id = \"outer\".id)\n> -> Subquery Scan \"*SELECT* 2\" \n> (cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n> -> Seq Scan on people \n> (cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n> Filter: (NOT (subplan))\n> SubPlan\n> -> Subquery Scan temp_consent \n> (cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\n> -> Unique \n> (cost=100010968.94..100010968.96 rows=2 width=36) (never executed)\n> -> Sort \n> (cost=100010968.94..100010968.95 rows=2 width=36) (never executed)\n> Sort Key: id, daterecorded,\n> answer\n> -> Append \n> (cost=100010872.03..100010968.93 rows=2 width=36) (never executed)\n> -> HashAggregate \n> (cost=100010872.03..100010872.04 rows=1 width=36) (never executed)\n> -> Nested Loop \n> (cost=100000907.99..100010872.02 rows=1 width=36) (never executed)\n> Join\n> Filter: (\"inner\".question_answer_id = \"outer\".id)\n> -> Nested\n> Loop (cost=60.61..90.69 rows=1 width=36) (never executed)\n> -> \n> Nested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n> \n> -> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n> (never executed)\n> \n> Index Cond: (id = $0)\n> \n> -> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\n> width=28) (never executed)\n> \n> Index Cond: ((answer)::text = 'Yes'::text)\n> -> \n> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\n> width=16) (never executed)\n> \n> Recheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n> 'consentTransfer'::text) OR ((qa.question_tag)::text =\n> 'shareWithEval'::text)\n> ))\n> \n> -> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n> \n> -> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n> (never executed)\n> \n> Index Cond: (qa.answer_id = \"outer\".id)\n> \n> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (never executed)\n> \n> Index Cond: ((question_tag)::text = 'consentTransfer'::text)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (never executed)\n> \n> Index Cond: ((question_tag)::text = 'shareWithEval'::text)\n> -> Hash\n> Join (cost=100000847.38..100010780.52 rows=65 width=20) (never executed)\n> Hash\n> Cond: (\"outer\".encounter_id = \"inner\".id)\n> -> \n> Seq Scan on encounters_questions_answers eqa \n> (cost=100000000.00..100007608.66 rows=464766 width=8) (never executed)\n> -> \n> Hash (cost=847.37..847.37 rows=3 width=20) (never executed)\n> \n> -> Hash Join (cost=214.73..847.37 rows=3 width=20) (never executed)\n> \n> Hash Cond: 
(\"outer\".enrollment_id = \"inner\".id)\n> \n> -> Index Scan using encounters_id on encounters ec (cost=0.00..524.72\n> rows=21578 width=8) (never executed)\n> \n> -> Hash (cost=214.73..214.73 rows=1 width=20) (never executed)\n> \n> -> Index Scan using enrollements_pk on enrollments en (cost=0.00..214.73\n> rows=1 width=20) (never executed)\n> \n> Filter: ($0 = person_id)\n> -> HashAggregate \n> (cost=96.86..96.87 rows=1 width=36) (never executed)\n> -> Nested Loop \n> (cost=60.61..96.85 rows=1 width=36) (never executed)\n> -> Nested\n> Loop (cost=60.61..93.72 rows=1 width=32) (never executed)\n> -> \n> Nested Loop (cost=60.61..90.69 rows=1 width=36) (never executed)\n> \n> -> Nested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n> \n> -> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 width=8)\n> (never executed)\n> \n> Index Cond: (id = $0)\n> \n> -> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 rows=1\n> width=28) (never executed)\n> \n> Index Cond: ((answer)::text = 'Yes'::text)\n> \n> -> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\n> width=16) (never executed)\n> \n> Recheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n> 'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEval':\n> :text)))\n> \n> -> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n> \n> -> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 width=0)\n> (never executed)\n> \n> Index Cond: (qa.answer_id = \"outer\".id)\n> \n> -> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (never executed)\n> \n> Index Cond: ((question_tag)::text = 'consentTransfer'::text)\n> \n> -> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 width=0)\n> (never executed)\n> \n> Index Cond: ((question_tag)::text = 'shareWithEval'::text)\n> -> \n> Index Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n> (cost=0.00..3.02 rows=1 width=8) (never executed)\n> \n> Index Cond: (cqa.question_answer_id = \"outer\".id)\n> -> Index\n> Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1 width=20)\n> (never executed)\n> Index\n> Cond: (c.id = \"outer\".call_id)\n> \n> Filter: ($0 = person_id)\n> Total runtime: 10084292.497 ms\n> (125 rows)\n> \n> \n> Thanks...Marsha\n> \n> -- \n> View this message in context: http://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-tf4489502.html#a12820410\n> Sent from the PostgreSQL - performance mailing list archive at Nabble.com.\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/docs/faq\n> \n> \n> \n> \n> \n> \n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n****************************************************************\nIMPORTANT: This message contains confidential information and is\nintended only for the individual named. If the reader of this\nmessage is not an intended recipient (or the individual\nresponsible for the delivery of this message to an intended\nrecipient), please be advised that any re-use, dissemination,\ndistribution or copying of this message is prohibited. 
Please\nnotify the sender immediately by e-mail if you have received\nthis e-mail by mistake and delete this e-mail from your system.\nE-mail transmission cannot be guaranteed to be secure or\nerror-free as information could be intercepted, corrupted, lost,\ndestroyed, arrive late or incomplete, or contain viruses. The\nsender therefore does not accept liability for any errors or\nomissions in the contents of this message, which arise as a\nresult of e-mail transmission.\n****************************************************************\n", "msg_date": "Fri, 21 Sep 2007 09:25:30 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" }, { "msg_contents": "\n>From: smiley2211\n>Subject: Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!\n>\n>-> Seq Scan on encounters_questions_answers eqa\n>(cost=100000000.00..100007608.66 rows=464766 width=8) (actual\n>time=0.003..735.934 rows=464766 loop\n>s=7430)\n \nIt looks like enable_seqscan is set to false. For some reason that might\nhave worked on 7.4, but I would try turning that back on for 8.1.\nSequential scans aren't always bad, sometimes they are faster than index\nscans. I would first try running the system with all the enable_* settings\non.\n\nIf you can't turn on logging its going to be very hard to track down the\nproblem. The easiest way to track down a problem normally is to set\nlog_min_duration to something like 2000ms. Then Postgres will log all slow\nqueries. Then you can run EXPLAIN ANALYZE on the slow queries to find the\nproblem.\n\nI think Carlos had a good idea when he asked about the encoding on the new\nserver vs the old. Does your application use the like keyword to compare\ntext fields? If so, you might need to create indexes which use the\ntext_pattern_ops operator classes. With unicode postgres cannot use an\nindex scan for a query like SELECT * FROM foo WHERE name LIKE 'Bob%' unless\nthere is an index like CREATE INDEX name_index ON foo (name\ntext_pattern_ops). However if you are not using like queries, then this is\nnot your problem.\n\nMore on operator classes:\nhttp://www.postgresql.org/docs/8.1/interactive/indexes-opclass.html\n\nDave\n\n", "msg_date": "Fri, 21 Sep 2007 09:07:49 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" } ]
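Both checks suggested above come down to a few statements. The index example uses made-up names (foo, name), so treat it as a sketch of the technique rather than something to run verbatim.

-- Confirm whether any planner methods have been disabled (normally all 'on'):
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'enable%';

-- Re-enable sequential scans for this session while comparing plans:
SET enable_seqscan = on;

-- Only relevant if the application does LIKE 'prefix%' searches under a
-- non-C locale such as UTF-8:
CREATE INDEX foo_name_pattern_idx ON foo (name text_pattern_ops);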
[ { "msg_contents": "Hi everybody,\n\nIs there a way to find which query is doing large io operations and/or which\nis using cached data and which is reading from disk. I need to see this on a\nproduction server to localize a slow and resource eating query. The\npg_statio* tables are very handy, but don't help me at all in finding the\nmost disk intensive** query, just the most used table which I already knew.\n\nAnd besides that, please share your experience on how do you decide which\nqueries to optimize and how to reorganize your database? Is there any tools\nthat you use to profile your database.\n\nRegards,\nKamen Stanev\n\nHi everybody,Is there a way to find which query is doing large io operations and/or which is using cached data and which is reading from disk. I need to see this on a production server to localize a slow and resource eating query. The pg_statio* tables are very handy, but don't help me at all in finding the most disk intensive\n query, just the most used table which I already knew.And besides that, please share your experience on how do you decide which queries to optimize and how to reorganize your database? Is there any tools that you use to profile your database.\nRegards,Kamen Stanev", "msg_date": "Fri, 21 Sep 2007 00:36:59 +0300", "msg_from": "\"Kamen Stanev\" <[email protected]>", "msg_from_op": true, "msg_subject": "query io stats and finding a slow query" }, { "msg_contents": ">>> On Thu, Sep 20, 2007 at 4:36 PM, in message\n<[email protected]>, \"Kamen Stanev\"\n<[email protected]> wrote: \n> \n> Is there a way to find which query is doing large io operations and/or which\n> is using cached data and which is reading from disk.\n \nA big part of your cache is normally in the OS, which makes that tough.\n \n> please share your experience on how do you decide which\n> queries to optimize and how to reorganize your database?\n \nWe base this on two things -- query metrics from our application framework\nand user complaints about performance.\n\n> Is there any tools that you use to profile your database.\n \nMany people set log_min_duration_statement to get a look at long-running\nqueries.\n \nWhen you identify a problem query, running it with EXPLAIN ANALYZE in front\nwill show you the plan with estimated versus actual counts, costs, and time.\nThis does actually execute the query (unlike EXPLAIN without ANALYZE).\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 21 Sep 2007 11:35:15 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query io stats and finding a slow query" }, { "msg_contents": "Thanks for the reply.\n\nHere is what I found about my problem. When i set the\nlog_min_duration_statement and in the moments when the server performance is\ndegrading I can see that almost all queries run very slowly (10-100 times\nslower). At first I thought that there is exclusive lock on one of the\ntables but there wasn't any.\n\nThe information from the log files becomes useless when almost every query\non you server is logged and when you can't tell which query after which. So\nI finally wrote a script to process the log file and graphically represent\nthe timing of each query from the log (something like a gantt chart), and\nthat way I found out what was the reason for the slowdowns. There was a\nquery which actually reads all the data from one of the big tables and while\nit is running and some time after it finished the server is slowing down to\ndeath. 
I couldn't find it just looking at the log because it was not even\nthe slowest query. After I examined the chart it was very clear what was\nhappening. As I understand it, while this table was scanned all the disk i/o\noperations were slowed down, and maybe the data from that table was stored\nin the os cache, and hence all the other queries were so slow? After I\nremoved the big query everything runs normally.\n\nHowever, I was wondering if there are any tools for such log analysis. I'm\nready to provide my script if somebody is interested? I think it is very\nuseful, but maybe someone has already done something better?\n\nRegards,\nKamen\n\nOn 9/21/07, Kevin Grittner <[email protected]> wrote:\n>\n> >>> On Thu, Sep 20, 2007 at 4:36 PM, in message\n> <[email protected]>, \"Kamen\n> Stanev\"\n> <[email protected]> wrote:\n> >\n> > Is there a way to find which query is doing large io operations and/or\n> which\n> > is using cached data and which is reading from disk.\n>\n> A big part of your cache is normally in the OS, which makes that tough.\n>\n> > please share your experience on how do you decide which\n> > queries to optimize and how to reorganize your database?\n>\n> We base this on two things -- query metrics from our application framework\n> and user complaints about performance.\n>\n> > Is there any tools that you use to profile your database.\n>\n> Many people set log_min_duration_statement to get a look at long-running\n> queries.\n>\n> When you identify a problem query, running it with EXPLAIN ANALYZE in\n> front\n> will show you the plan with estimated versus actual counts, costs, and\n> time.\n> This does actually execute the query (unlike EXPLAIN without ANALYZE).\n>\n> -Kevin\n>\n>\n>\n>\n\nThanks for the reply.Here is what I found about my problem. When i set the log_min_duration_statement and in the moments when the server performance is degrading I can see that almost all queries run very slowly (10-100 times slower). At first I thought that there is exclusive lock on one of the tables but there wasn't any.\nThe information from the log files becomes useless when almost every query on you server is logged and when you can't tell which query after which. So I finally wrote a script to process the log file and graphically represent the timing of each query from the log (something like a gantt chart), and that way I found out what was the reason for the slowdowns. There was a query which actually reads all the data from one of the big tables and while it is running and some time after it finished the server is slowing down to death. I couldn't find it just looking at the log because it was not even the slowest query. After I examined the chart it was very clear what was happening. As I understand it, while this table was scanned all the disk i/o operations were slowed down, and maybe the data from that table was stored in the os cache, and hence all the other queries were so slow? After I removed the big query everything runs normally.\nHowever, I was wondering if there are any tools for such log analysis. I'm ready to provide my script if somebody is interested? 
I think it is very useful, but maybe someone has already done something better?\nRegards,KamenOn 9/21/07, Kevin Grittner <[email protected]> wrote:\n>>> On Thu, Sep 20, 2007 at  4:36 PM, in message<[email protected]>, \"Kamen Stanev\"\n<[email protected]> wrote:>> Is there a way to find which query is doing large io operations and/or which> is using cached data and which is reading from disk.\nA big part of your cache is normally in the OS, which makes that tough.> please share your experience on how do you decide which> queries to optimize and how to reorganize your database?We base this on two things -- query metrics from our application framework\nand user complaints about performance.> Is there any tools that you use to profile your database.Many people set log_min_duration_statement to get a look at long-runningqueries.When you identify a problem query, running it with EXPLAIN ANALYZE in front\nwill show you the plan with estimated versus actual counts, costs, and time.This does actually execute the query (unlike EXPLAIN without ANALYZE).-Kevin", "msg_date": "Tue, 25 Sep 2007 22:41:03 +0300", "msg_from": "\"Kamen Stanev\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: query io stats and finding a slow query" }, { "msg_contents": "We use pgfouine (http://pgfouine.projects.postgresql.org/).\n\nI currently have postgres log every query that takes longer than\n100ms, roll the log files every 24 hours, and run pgfouine nightly. I\ncheck it every couple of mornings and this gives me a pretty good\npicture of who misbehaved over the last 24 hours. I can't even count\nthe # of times I've come in in the morning and some new query has\nbubbled to the top.\n\nIt's very handy. I don't know if it would have helped you identify\nyour problem, but it's saved our butts a few times.\n\nBryan\n\nOn 9/25/07, Kamen Stanev <[email protected]> wrote:\n> Thanks for the reply.\n>\n> Here is what I found about my problem. When i set the\n> log_min_duration_statement and in the moments when the server performance is\n> degrading I can see that almost all queries run very slowly (10-100 times\n> slower). At first I thought that there is exclusive lock on one of the\n> tables but there wasn't any.\n>\n> The information from the log files becomes useless when almost every query\n> on you server is logged and when you can't tell which query after which. So\n> I finally wrote a script to process the log file and graphically represent\n> the timing of each query from the log (something like a gantt chart), and\n> that way I found out what was the reason for the slowdowns. There was a\n> query which actually reads all the data from one of the big tables and while\n> it is running and some time after it finished the server is slowing down to\n> death. I couldn't find it just looking at the log because it was not even\n> the slowest query. After I examined the chart it was very clear what was\n> happening. As I understand it, while this table was scanned all the disk i/o\n> operations were slowed down, and maybe the data from that table was stored\n> in the os cache, and hence all the other queries were so slow? After I\n> removed the big query everything runs normally.\n>\n> However, I was wondering if there are any tools for such log analysis. I'm\n> ready to provide my script if somebody is interested? 
I think it is very\n> useful, but maybe someone has already done something better?\n>\n> Regards,\n> Kamen\n>\n> On 9/21/07, Kevin Grittner <[email protected]> wrote:\n> > >>> On Thu, Sep 20, 2007 at 4:36 PM, in message\n> >\n> <[email protected]>,\n> \"Kamen Stanev\"\n> > <[email protected]> wrote:\n> > >\n> > > Is there a way to find which query is doing large io operations and/or\n> which\n> > > is using cached data and which is reading from disk.\n> >\n> > A big part of your cache is normally in the OS, which makes that tough.\n> >\n> > > please share your experience on how do you decide which\n> > > queries to optimize and how to reorganize your database?\n> >\n> > We base this on two things -- query metrics from our application framework\n> > and user complaints about performance.\n> >\n> > > Is there any tools that you use to profile your database.\n> >\n> > Many people set log_min_duration_statement to get a look at long-running\n> > queries.\n> >\n> > When you identify a problem query, running it with EXPLAIN ANALYZE in\n> front\n> > will show you the plan with estimated versus actual counts, costs, and\n> time.\n> > This does actually execute the query (unlike EXPLAIN without ANALYZE).\n> >\n> > -Kevin\n> >\n> >\n> >\n> >\n>\n>\n", "msg_date": "Tue, 25 Sep 2007 14:58:53 -0500", "msg_from": "\"Bryan Murphy\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: query io stats and finding a slow query" } ]
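For a rough first pass without any external tool, the statio views can at least point to the tables doing the most physical reads. Note that these counters are cumulative since the last statistics reset and say nothing about which individual query caused the I/O, so this only narrows the search.

SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(100.0 * heap_blks_hit
             / nullif(heap_blks_read + heap_blks_hit, 0), 2) AS cache_hit_pct
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC
LIMIT 10;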
[ { "msg_contents": "Sorry, I know this is probably more a linux question, but I'm guessing\nthat others have run into this...\n\nI'm finding this rather interesting report from top on a Debian box...\n\nMem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\nSwap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n12492 postgres 15 0 8469m 8.0g 8.0g S 0 25.6 3:52.03 postmaster\n 7820 postgres 16 0 8474m 4.7g 4.7g S 0 15.1 1:23.72 postmaster\n21863 postgres 15 0 8472m 3.9g 3.9g S 0 12.4 0:30.61 postmaster\n19893 postgres 15 0 8471m 2.4g 2.4g S 0 7.6 0:07.54 postmaster\n20423 postgres 17 0 8472m 1.4g 1.4g S 0 4.4 0:04.61 postmaster\n26395 postgres 15 0 8474m 1.1g 1.0g S 1 3.4 0:02.12 postmaster\n12985 postgres 15 0 8472m 937m 930m S 0 2.9 0:05.50 postmaster\n26806 postgres 15 0 8474m 787m 779m D 4 2.4 0:01.56 postmaster\n\nThis is a machine that's been up some time and the database is 400G, so\nI'm pretty confident that shared_buffers (set to 8G) should be\ncompletely full, and that's what that top process is indicating.\n\nSo how is it that linux thinks that 30G is cached?\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected] \nGive your computer some brain candy! www.distributed.net Team #1828", "msg_date": "Thu, 20 Sep 2007 18:04:01 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": true, "msg_subject": "Linux mis-reporting memory" }, { "msg_contents": "Decibel! <[email protected]> writes:\n> I'm finding this rather interesting report from top on a Debian box...\n\n> Mem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\n> Swap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n\n> So how is it that linux thinks that 30G is cached?\n\nWhy would you think that a number reported by the operating system has\nsomething to do with Postgres' shared memory?\n\nI might be mistaken, but I think that in this report \"cached\" indicates\nthe amount of memory in use for kernel disk cache. (No idea what the\nseparate \"buffers\" entry means, but it's obviously not all of the disk\nbuffers the kernel has got.) 
It appears that the kernel is doing\nexactly what it's supposed to do and using any not-currently-called-for\nmemory for disk cache ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 20 Sep 2007 19:12:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory " }, { "msg_contents": "> Sorry, I know this is probably more a linux question, but I'm guessing\n> that others have run into this...\n> I'm finding this rather interesting report from top on a Debian box...\n> Mem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\n> Swap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n> This is a machine that's been up some time and the database is 400G, so\n> I'm pretty confident that shared_buffers (set to 8G) should be\n> completely full, and that's what that top process is indicating.\n\nNope, use \"ipcs\" to show allocated shared memory segments.\n\nOne of the better articles on LINUX & memory management -\nhttp://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Thu, 20 Sep 2007 20:22:36 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "\"Tom Lane\" <[email protected]> writes:\n\n> Decibel! <[email protected]> writes:\n>> I'm finding this rather interesting report from top on a Debian box...\n>\n>> Mem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\n>> Swap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n>\n>> So how is it that linux thinks that 30G is cached?\n>\n> Why would you think that a number reported by the operating system has\n> something to do with Postgres' shared memory?\n\nI think his question is how can the kernel be using 30G for kernel buffers if\nit only has 32G total and 8G of that is taken up by Postgres's shared buffers.\n\nIt seems to imply Linux is paging out sysV shared memory. In fact some of\nHeikki's tests here showed that Linux would do precisely that.\n\nIf your working set really is smaller than shared buffers then that's not so\nbad. Those buffers really would be completely idle anyways.\n\nBut if your working set is larger than shared buffers and you're just not\nthrashing it hard enough to keep it in RAM then it's really bad. The buffer\nLinux will choose to page out are precisely those that Postgres will likely\nchoose shortly as victim buffers, forcing Linux to page them back in just so\nPostgres can overwrite them.\n\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 21 Sep 2007 09:03:04 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Fri, 2007-09-21 at 09:03 +0100, Gregory Stark wrote:\n> >> Mem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\n> >> Swap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n> >\n> It seems to imply Linux is paging out sysV shared memory. In fact some of\n> Heikki's tests here showed that Linux would do precisely that.\n\nBut then why is it not reporting that in the \"Swap: used\" section ? It\nonly reports 42308k used swap. 
\n\nI have a box where I just executed 3x a select count(*) from a table\nwhich has ~5.5 GB size on disk, and the count executed in <4 seconds,\nwhich I take as it is all cached (shared memory is set to 12GB - I use\nthe box for testing for now, otherwise I would set it far lower because\nI have bad experience with setting it more than 1/3 of the available\nmemory). Top reported at the end of the process:\n\nMem: 16510724k total, 16425252k used, 85472k free, 10144k buffers\nSwap: 7815580k total, 157804k used, 7657776k free, 15980664k cached\n\nI also watched it during the selects, but it was not significantly\ndifferent. So my only conclusion is that the reported \"cached\" value is\neither including the shared memory or is simply wrong... or I just don't\nget how linux handles memory.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 10:30:01 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "\"Csaba Nagy\" <[email protected]> writes:\n\n> On Fri, 2007-09-21 at 09:03 +0100, Gregory Stark wrote:\n>> >> Mem: 32945280k total, 32871832k used, 73448k free, 247432k buffers\n>> >> Swap: 1951888k total, 42308k used, 1909580k free, 30294300k cached\n>> >\n>> It seems to imply Linux is paging out sysV shared memory. In fact some of\n>> Heikki's tests here showed that Linux would do precisely that.\n>\n> But then why is it not reporting that in the \"Swap: used\" section ? It\n> only reports 42308k used swap. \n\nHm, good point.\n\nThe other possibility is that Postgres just hasn't even touched a large part\nof its shared buffers. \n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 21 Sep 2007 10:43:07 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "Hi,\n\nLe Friday 21 September 2007 01:04:01 Decibel!, vous avez écrit :\n> I'm finding this rather interesting report from top on a Debian box...\n\nI've read from people in other free software development groups that top/ps \nmemory usage outputs are not useful not trustable after all. A more usable \n(or precise or trustworthy) tool seems to be exmap:\n http://www.berthels.co.uk/exmap/\n\nHope this helps,\n-- \ndim\n", "msg_date": "Fri, 21 Sep 2007 11:48:50 +0200", "msg_from": "Dimitri Fontaine <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Fri, 2007-09-21 at 10:43 +0100, Gregory Stark wrote:\n> The other possibility is that Postgres just hasn't even touched a large part\n> of its shared buffers. \n> \n\nBut then how do you explain the example I gave, with a 5.5GB table\nseq-scanned 3 times, shared buffers set to 12 GB, and top still showing\nalmost 100% memory as cached and no SWAP \"used\" ? In this case you can't\nsay postgres didn't touch it's shared buffers - or a sequential scan\nwon't use the shared buffers ?\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 12:08:45 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "Csaba Nagy wrote:\n> On Fri, 2007-09-21 at 10:43 +0100, Gregory Stark wrote:\n>> The other possibility is that Postgres just hasn't even touched a large part\n>> of its shared buffers. 
\n> \n> But then how do you explain the example I gave, with a 5.5GB table\n> seq-scanned 3 times, shared buffers set to 12 GB, and top still showing\n> almost 100% memory as cached and no SWAP \"used\" ? In this case you can't\n> say postgres didn't touch it's shared buffers - or a sequential scan\n> won't use the shared buffers ?\n\nWhich version of Postgres is this? In 8.3, a scan like that really won't\nsuck it all into the shared buffer cache. For seq scans on tables larger\nthan shared_buffers/4, it switches to the bulk read strategy, using only\n a few buffers, and choosing the starting point with the scan\nsynchronization facility.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 21 Sep 2007 11:34:53 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Fri, 2007-09-21 at 12:08 +0200, Csaba Nagy wrote:\n> On Fri, 2007-09-21 at 10:43 +0100, Gregory Stark wrote:\n> > The other possibility is that Postgres just hasn't even touched a large part\n> > of its shared buffers. \n> > \n> \n> But then how do you explain the example I gave, with a 5.5GB table\n> seq-scanned 3 times, shared buffers set to 12 GB, and top still showing\n> almost 100% memory as cached and no SWAP \"used\" ? In this case you can't\n> say postgres didn't touch it's shared buffers - or a sequential scan\n> won't use the shared buffers ?\n\nWell, 6.5GB of shared_buffers could be swapped out and need not be\nswapped back in to perform those 3 queries.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 12:01:18 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Fri, 2007-09-21 at 11:34 +0100, Heikki Linnakangas wrote:\n> Which version of Postgres is this? In 8.3, a scan like that really won't\n> suck it all into the shared buffer cache. For seq scans on tables larger\n> than shared_buffers/4, it switches to the bulk read strategy, using only\n> a few buffers, and choosing the starting point with the scan\n> synchronization facility.\n> \nThis was on 8.1.9 installed via apt-get on Debian 4.1.1-21. In any case\nI'm pretty sure linux swaps shared buffers, as I always got worse\nperformance for shared buffers more than about 1/3 of the memory. But in\nthat case the output of top is misleading.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 13:34:22 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Thu, 20 Sep 2007, Decibel! wrote:\n\n> I'm finding this rather interesting report from top on a Debian box... \n> how is it that linux thinks that 30G is cached?\n\ntop on Linux gives weird results when faced with situations where there's \nshared memory involved. I look at /proc/meminfo and run ipcs when I want \na better idea what's going on. 
As good of an article on this topic as \nI've found is http://gentoo-wiki.com/FAQ_Linux_Memory_Management which \nrecommends using free to clarify how big the disk cache really is.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n", "msg_date": "Fri, 21 Sep 2007 15:24:06 -0400 (EDT)", "msg_from": "Greg Smith <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Sep 21, 2007, at 4:43 AM, Gregory Stark wrote:\n> \"Csaba Nagy\" <[email protected]> writes:\n>\n>> On Fri, 2007-09-21 at 09:03 +0100, Gregory Stark wrote:\n>>>>> Mem: 32945280k total, 32871832k used, 73448k free, \n>>>>> 247432k buffers\n>>>>> Swap: 1951888k total, 42308k used, 1909580k free, \n>>>>> 30294300k cached\n>>>>\n>>> It seems to imply Linux is paging out sysV shared memory. In fact \n>>> some of\n>>> Heikki's tests here showed that Linux would do precisely that.\n>>\n>> But then why is it not reporting that in the \"Swap: used\" \n>> section ? It\n>> only reports 42308k used swap.\n>\n> Hm, good point.\n>\n> The other possibility is that Postgres just hasn't even touched a \n> large part\n> of its shared buffers.\n\nSorry for the late reply...\n\nNo, this is on a very active database server; the working set is \nalmost certainly larger than memory (probably by a fair margin :( ), \nand all of the shared buffers should be in use.\n\nI'm leaning towards \"top on linux == dumb\".\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Tue, 2 Oct 2007 10:57:18 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On 10/2/07, Decibel! <[email protected]> wrote:\n> On Sep 21, 2007, at 4:43 AM, Gregory Stark wrote:\n> > \"Csaba Nagy\" <[email protected]> writes:\n> >\n> >> On Fri, 2007-09-21 at 09:03 +0100, Gregory Stark wrote:\n> >>>>> Mem: 32945280k total, 32871832k used, 73448k free,\n> >>>>> 247432k buffers\n> >>>>> Swap: 1951888k total, 42308k used, 1909580k free,\n> >>>>> 30294300k cached\n> >>>>\n> >>> It seems to imply Linux is paging out sysV shared memory. In fact\n> >>> some of\n> >>> Heikki's tests here showed that Linux would do precisely that.\n> >>\n> >> But then why is it not reporting that in the \"Swap: used\"\n> >> section ? It\n> >> only reports 42308k used swap.\n> >\n> > Hm, good point.\n> >\n> > The other possibility is that Postgres just hasn't even touched a\n> > large part\n> > of its shared buffers.\n>\n> Sorry for the late reply...\n>\n> No, this is on a very active database server; the working set is\n> almost certainly larger than memory (probably by a fair margin :( ),\n> and all of the shared buffers should be in use.\n>\n> I'm leaning towards \"top on linux == dumb\".\n\nYeah, that pretty much describes it. It's gotten better than it once\nwas. But it still doesn't seem to be able to tell shared memory from\ncache/buffer.\n", "msg_date": "Tue, 2 Oct 2007 11:51:23 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "> >> But then why is it not reporting that in the \"Swap: used\" \n> >> section ? 
It\n> >> only reports 42308k used swap.\n> > Hm, good point.\n> > The other possibility is that Postgres just hasn't even touched a \n> > large part\n> > of its shared buffers.\n> Sorry for the late reply...\n> No, this is on a very active database server; the working set is \n> almost certainly larger than memory (probably by a fair margin :( ), \n\n\"almost certainly\"\n\n> and all of the shared buffers should be in use.\n\n\"should be\"\n\nIt would be better to just check! :) The catalogs and informational\nviews will give you definitive answers to these quests.\n\n> I'm leaning towards \"top on linux == dumb\".\n\nI disagree, it just isn't the appropriate tool for the job. What top\ntells you is lots of correct information, it just isn't the right\ninformation.\n\nFor starters try -\nSELECT \n 'HEAP:' || relname AS table_name,\n (heap_blks_read + heap_blks_hit) AS heap_hits,\n ROUND(((heap_blks_hit)::NUMERIC / (heap_blks_read + heap_blks_hit) *\n100), 2) AS heap_buffer_percentage\nFROM pg_statio_user_tables\nWHERE (heap_blks_read + heap_blks_hit) > 0 \nUNION\nSELECT \n 'TOAST:' || relname,\n (toast_blks_read + toast_blks_hit),\n ROUND(((toast_blks_hit)::NUMERIC / (toast_blks_read + toast_blks_hit) *\n100), 2)\nFROM pg_statio_user_tables\nWHERE (toast_blks_read + toast_blks_hit) > 0 \nUNION\nSELECT\n 'INDEX:' || relname,\n (idx_blks_read + idx_blks_hit) AS heap_hits,\n ROUND(((idx_blks_hit)::NUMERIC / (idx_blks_read + idx_blks_hit) * 100),\n2)\nFROM pg_statio_user_tables\nWHERE (idx_blks_read + idx_blks_hit) > 0\n\n-- \nAdam Tauno Williams, Network & Systems Administrator\nConsultant - http://www.whitemiceconsulting.com\nDeveloper - http://www.opengroupware.org\n\n", "msg_date": "Tue, 02 Oct 2007 14:37:30 -0400", "msg_from": "Adam Tauno Williams <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Linux mis-reporting memory" }, { "msg_contents": "On Oct 2, 2007, at 1:37 PM, Adam Tauno Williams wrote:\n>> I'm leaning towards \"top on linux == dumb\".\n>\n> I disagree, it just isn't the appropriate tool for the job. What top\n> tells you is lots of correct information, it just isn't the right\n> information.\n\nIf it is in fact including shared memory as 'cached', then no, the \ninformation it's providing is not correct.\n-- \nDecibel!, aka Jim C. Nasby, Database Architect [email protected]\nGive your computer some brain candy! www.distributed.net Team #1828\n\n\n", "msg_date": "Tue, 2 Oct 2007 17:17:53 -0500", "msg_from": "Decibel! <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Linux mis-reporting memory" } ]
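(A shorter companion to the per-table query above, for anyone who wants a single number instead of trusting top: the cumulative buffer hit ratio per database from the stock statistics views — a sketch, using nothing beyond what 8.1-era pg_stat_database exposes, and assuming block-level stats collection (stats_block_level) is enabled:

    SELECT datname,
           blks_read,
           blks_hit,
           round(blks_hit::numeric / NULLIF(blks_hit + blks_read, 0) * 100, 2) AS hit_pct
    FROM pg_stat_database;

Note this only counts hits in PostgreSQL's own shared buffers; a read satisfied from the OS page cache still shows up as blks_read, so it complements rather than replaces the memory checks discussed above.)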
[ { "msg_contents": "Hi all,\n\nPostgres version: 8.2.4\n\nTables:\n\ntable_a(a bigint, b bigint, primary key(a, b) );\n\ntable_b1(b bigint primary key, more columns...);\n\ntable_b2(b bigint primary key references table_b1(b), more columns...);\n\ntable_b1: \n ~ 27M rows;\n ~25 more columns;\n width=309 (as reported by explain select *);\n\ntable_a:\n ~400M rows;\n - column \"b\" should reference table_b1, but it does not for performance\nreasons (it is an insert only table);\n - column \"a\" distinct values: 1148\n - has (a, b) as primary key;\n - has no additional columns;\n\ntable_b1:\n ~40K rows;\n ~70 more columns;\n width=1788 (as reported by explain select *);\n\nStatistics for the involved columns for each table are attached in files\n(to preserve the spacing). They were taken after analyzing the relevant\ntable (except for table_b2 where I added the \"fiddled\" statistics first\nand then remembered to analyze fresh, resulting in the \"non_fiddled\"\nversion, which gives the same result as the fiddled one).\n\nThe problem query is:\n\nprepare test_001(bigint) as\nSELECT tb.*\nFROM table_a ta \nJOIN table_b2 tb ON ta.b=tb.b\nWHERE ta.a = $1 \nORDER BY ta.a, ta.b\nlimit 10;\n\nExplain gives Plan 1 (see attached plans.txt)\n\nIf I set enable_hashjoin=off and enable_mergejoin=off, I get Plan 2\n(again, see plans.txt).\n\nThe difference is a 30x improvement in the second case...\n(I actually forgot to account for cache effects, but later rerun the\nqueries multiple times and the timings are proportional).\n\nAdditionally, if I replace table_b2 with table_b1 in the query, I get\nPlan 3 (with reasonable execution time) with both enable_hashjoin and\nenable_mergejoin on. So there is something which makes table_b2\ndifferent from table_b1 for planning purposes, but I could not identify\nwhat that is... they have differences in statistics, but fiddling with\nthe stats gave me no difference in the plan.\n\nLooking at Plan 2, it looks like the \"limit\" step is estimating wrongly\nit's cost. I guessed that it does that because it thinks the \"b\" values\nselected from table_a for a given \"a\" span a larger range than the \"b\"\nvalues in table_b2, because the \"b\" values in table_b2 are a (relatively\nsmall) subset of the \"b\" values in table_a. But this is not the case,\nthe query only gets \"a\" values for which all the \"b\" values in table_a\nwill be found in table_b2. Of course the planner has no way to know\nthis, but then I think it is not the case, as I tried to copy the\nhistogram statistics in pg_statistic for the column \"b\" from the entry\nfor table_b1 (which contains the whole span of \"b\" values) to the entry\nfor table_b2, with no change in the plan.\n\nJust for the record, this query is just a part of a more complex one,\nwhich joins in bigger tables, resulting in even worse performance, but I\ntracked it down to refusing the nested loop to be the problem.\n\nIs there anything I could do to convince the planner to use here the\nnested loop plan ?\n\nThanks,\nCsaba.", "msg_date": "Fri, 21 Sep 2007 12:03:44 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 12:03 +0200, Csaba Nagy wrote:\n> prepare test_001(bigint) as\n> SELECT tb.*\n> FROM table_a ta \n> JOIN table_b2 tb ON ta.b=tb.b\n> WHERE ta.a = $1 \n> ORDER BY ta.a, ta.b\n> limit 10; \n\nPlease re-run everything on clean tables without frigging the stats. 
We\nneed to be able to trust what is happening is normal.\n\nPlan2 sees that b1 is wider, which will require more heap blocks to be\nretrieved. It also sees b1 is less correlated than b2, so again will\nrequire more database blocks to retrieve. Try increasing\neffective_cache_size.\n\nCan you plans with/without LIMIT and with/without cursor, for both b1\nand b2?\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 11:59:25 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 11:59 +0100, Simon Riggs wrote:\n> Please re-run everything on clean tables without frigging the stats. We\n> need to be able to trust what is happening is normal.\n\nI did, the plan fiddling happened after getting the plans after a fresh\nanalyze, and I did run the plan again with fresh analyze just before\nsending the mail and the plan was the same. In fact I spent almost 2\ndays playing with the query which is triggering this behavior, until I\ntracked it down to this join. Thing is that we have many queries which\nrely on this join, so it is fairly important that we understand what\nhappens there.\n\n> Plan2 sees that b1 is wider, which will require more heap blocks to be\n> retrieved. It also sees b1 is less correlated than b2, so again will\n> require more database blocks to retrieve. Try increasing\n> effective_cache_size.\n\neffective_cach_size is set to ~2.7G, the box has 4G memory. I increased\nit now to 3,5G but it makes no difference. I increased it further to 4G,\nno difference again.\n\n> Can you plans with/without LIMIT and with/without cursor, for both b1\n> and b2?\n\nThe limit is unfortunately absolutely needed part of the query, it makes\nno sense to try without. If it would be acceptable to do it without the\nlimit, then it is entirely possible that the plan I get now would be\nindeed better... 
but it is not acceptable.\n\nThanks,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 13:29:26 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 13:29 +0200, Csaba Nagy wrote:\n\n> > Can you plans with/without LIMIT and with/without cursor, for both b1\n> > and b2?\n> \n> The limit is unfortunately absolutely needed part of the query\n\nUnderstood, but not why I asked...\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 12:34:38 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 12:34 +0100, Simon Riggs wrote:\n> On Fri, 2007-09-21 at 13:29 +0200, Csaba Nagy wrote:\n> \n> > > Can you plans with/without LIMIT and with/without cursor, for both b1\n> > > and b2?\n> > \n> > The limit is unfortunately absolutely needed part of the query\n> \n> Understood, but not why I asked...\n> \nWell, the same query without limit goes:\n\ndbdop=# explain execute test_001(31855344);\n QUERY\nPLAN \n------------------------------------------------------------------------------------------------------------\n Sort (cost=322831.85..322831.94 rows=36 width=1804)\n Sort Key: ta.a, ta.b\n -> Hash Join (cost=3365.60..322830.92 rows=36 width=1804)\n Hash Cond: (ta.b = tb.b)\n -> Index Scan using pk_table_a on table_a ta\n(cost=0.00..314541.78 rows=389648 width=16)\n Index Cond: (a = $1)\n -> Hash (cost=524.71..524.71 rows=41671 width=1788)\n -> Seq Scan on table_b2 tb (cost=0.00..524.71\nrows=41671 width=1788)\n\n\nI'm not sure what you mean without cursor, maybe not using prepare ?\nWell we set up the JDBC driver to always prepare the queries, as this\ngives us much better worst case plans than when letting postgres see the\nparameter values, especially in queries with limit. So I simulate that\nwhen explaining the behavior we see. All our limit queries are for\ninteractive display, so the worst case is of much higher importance for\nus than the mean execution time... unfortunately postgres has a tendency\nto take the best mean performance path than avoid worst case, and it is\nnot easy to convince it otherwise.\n\nCheers,\nCsaba.\n\n\n\n\n", "msg_date": "Fri, 21 Sep 2007 14:12:24 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 14:12 +0200, Csaba Nagy wrote:\n> On Fri, 2007-09-21 at 12:34 +0100, Simon Riggs wrote:\n> > On Fri, 2007-09-21 at 13:29 +0200, Csaba Nagy wrote:\n> > \n> > > > Can you plans with/without LIMIT and with/without cursor, for both b1\n> > > > and b2?\n> > > \n> > > The limit is unfortunately absolutely needed part of the query\n> > \n> > Understood, but not why I asked...\n> > \n> Well, the same query without limit goes:\n\nOK, thanks.\n\n> I'm not sure what you mean without cursor, maybe not using prepare ?\n\nSorry, misread that.\n\n=======================\n\nI think I understand now: The cost of the LIMIT is being applied, but in\nslightly the wrong way. The cost of the Nested Loop node is reduced by\nthe fraction of LIMIT/(number of expected rows), which is only an\napproximation of what we're doing. In Plan 2 this leads to the wildly\nwrong estimate that each row costs 49,851 cost units to retrieve, which\nis about x50 wrong. 
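(For concreteness, that approximation can be checked directly against the Plan 2 numbers quoted further down: the nested loop's total cost of 1,794,642.48 multiplied by LIMIT/rows = 10/36 gives exactly the 498,511.80 shown on the Limit node, i.e. 1,794,642.48 / 36 is about 49,851 cost units per expected row — the figure above. This is just arithmetic on the quoted plan, not a claim about how the cost model should behave.)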
In Plan 3 that approximation leads to a more\nreasonable cost, so this works in Plan 3, but doesn't in Plan 2. \n\nWhat we should do is push down the effect of the LIMIT so that the cost\nof the Index Scan on ta reflects the fact that it returns only 10 rows.\nIt correctly expects 388638 rows that match the value requested, but it\nis not retrieving all of them. The executor handles the query\nefficiently but the cost model doesn't reflect what the executor\nactually does and so we pick the wrong plan. Pushing down the LIMIT\nwould only be possible when LIMIT has a constant value at plan time, but\nthat seems like most of the time to my eyes.\n\nThe plan estimates should look like this for Plan 2 (marked **)\n\n Limit (cost=0.00..XXXX rows=10 width=1804)\n -> Nested Loop (cost=0.00..XXXXX rows=10 width=1804)\n -> Index Scan using pk_table_a on table_a ta\n(cost=0.00..**11.96** rows=**10** width=16)\n Index Cond: (a = $1)\n -> Index Scan using pk_table_b2 on table_b2 tb\n(cost=0.00..3.77 rows=1 width=1788)\n Index Cond: (ta.b = tb.b)\n\nIncidentally, the way out of this is to improve the stats by setting\nstats target = 1000 on column a of ta. That will allow the optimizer to\nhave a better estimate of the tail of the distribution of a, which\nshould then be more sensibly reflected in the cost of the Index Scan.\nThat doesn't solve the actual problem, but should help in your case.\n\nPlans copied below for better clarity:\n\n\nPlan 2:\n\ndb> explain analyze execute test_001(31855344);\n\nQUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..498511.80 rows=10 width=1804) (actual\ntime=17.729..21.672 rows=2 loops=1)\n -> Nested Loop (cost=0.00..1794642.48 rows=36 width=1804) (actual\ntime=17.729..21.671 rows=2 loops=1)\n -> Index Scan using pk_table_a on table_a ta\n(cost=0.00..324880.88 rows=388638 width=16) (actual time=0.146..0.198\nrows=2 loops=1)\n Index Cond: (a = $1)\n -> Index Scan using pk_table_b2 on table_b2 tb\n(cost=0.00..3.77 rows=1 width=1788) (actual time=10.729..10.731 rows=1\nloops=2)\n Index Cond: (ta.b = tb.b)\n Total runtime: 21.876 ms\n\n\n\n\nPlan 3:\n\ndb> explain analyze execute test_001(31855344);\n\nQUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..853.14 rows=10 width=325) (actual\ntime=20.117..28.104 rows=2 loops=1)\n -> Nested Loop (cost=0.00..2024323.48 rows=23728 width=325) (actual\ntime=20.116..28.101 rows=2 loops=1)\n -> Index Scan using pk_table_a on table_a ta\n(cost=0.00..327561.01 rows=388684 width=16) (actual time=0.023..0.027\nrows=2 loops=1)\n Index Cond: (a = $1)\n -> Index Scan using pk_table_b1 on table_b1 tb\n(cost=0.00..4.35 rows=1 width=309) (actual time=14.032..14.034 rows=1\nloops=2)\n Index Cond: (ta.b = tb.b)\n Total runtime: 28.200 ms\n\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 14:36:17 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "[snip]\n\nOk, I was not able to follow your explanation, it's too deep for me into\nwhat the planner does...\n\n> Incidentally, the way out of this is to improve the stats by setting\n> stats target = 1000 on column a of ta. 
That will allow the optimizer to\n> have a better estimate of the tail of the distribution of a, which\n> should then be more sensibly reflected in the cost of the Index Scan.\n> That doesn't solve the actual problem, but should help in your case.\n\nOK, I can confirm that. I set the statistics target for column \"a\" on\ntable_a to 1000, analyzed, and got the plan below. The only downside is\nthat analyze became quite expensive on table_a, it took 15 minutes and\ntouched half of the pages... I will experiment with lower settings,\nmaybe it will work with less than 1000 too.\n\ndb> explain analyze execute test_001(31855344);\n\nQUERY\nPLAN \n------------------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=0.00..4499.10 rows=10 width=1804) (actual\ntime=103.566..120.363 rows=2 loops=1)\n -> Nested Loop (cost=0.00..344630.97 rows=766 width=1804) (actual\ntime=103.563..120.359 rows=2 loops=1)\n -> Index Scan using pk_table_a on table_a ta\n(cost=0.00..67097.97 rows=78772 width=16) (actual time=71.965..77.284\nrows=2 loops=1)\n Index Cond: (a = $1)\n -> Index Scan using pk_table_b2 on table_b2 tb\n(cost=0.00..3.51 rows=1 width=1788) (actual time=21.526..21.528 rows=1\nloops=2)\n Index Cond: (ta.b = tb.b)\n Total runtime: 120.584 ms\n\nThanks,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 16:26:39 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "> OK, I can confirm that. I set the statistics target for column \"a\" on\n> table_a to 1000, analyzed, and got the plan below. The only downside is\n> that analyze became quite expensive on table_a, it took 15 minutes and\n> touched half of the pages... I will experiment with lower settings,\n> maybe it will work with less than 1000 too.\n\nSo, just to finish this up: setting statistics to 100 worked too, and it\nhas an acceptable impact on analyze. My original (more complicated)\nquery is working fine now, with visible effects on server load...\n\nThanks Simon for your help !\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 21 Sep 2007 17:48:09 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Csaba Nagy <[email protected]> writes:\n> Looking at Plan 2, it looks like the \"limit\" step is estimating wrongly\n> it's cost.\n\nThe reason you get a bad plan is that this rowcount estimate is so far\noff:\n\n> -> Index Scan using pk_table_a on table_a ta (cost=0.00..324786.18 rows=388532 width=16) (actual time=454.389..460.138 rows=2 loops=1)\n> Index Cond: (a = $1)\n\nRaising the stats target helped no doubt because it didn't overestimate\nthe number of rows so much...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2007 12:08:33 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan " }, { "msg_contents": "On Fri, 2007-09-21 at 16:26 +0200, Csaba Nagy wrote:\n> [snip]\n> \n> Ok, I was not able to follow your explanation, it's too deep for me into\n> what the planner does...\n\nI'm thinking that this case is too narrow to do too much with, when I\nthink about how we might do what I proposed. OTOH this isn't the first\nbad plan we've had because we used the index for ordering. 
There might\nbe some common link that we can improve upon.\n\n> > Incidentally, the way out of this is to improve the stats by setting\n> > stats target = 1000 on column a of ta. That will allow the optimizer to\n> > have a better estimate of the tail of the distribution of a, which\n> > should then be more sensibly reflected in the cost of the Index Scan.\n> > That doesn't solve the actual problem, but should help in your case.\n> \n> OK, I can confirm that. I set the statistics target for column \"a\" on\n> table_a to 1000, analyzed, and got the plan below. The only downside is\n> that analyze became quite expensive on table_a, it took 15 minutes and\n> touched half of the pages... I will experiment with lower settings,\n> maybe it will work with less than 1000 too.\n\nWell, we know there are ways of optimizing ANALYZE.\n\nISTM we should be able to auto-select stats target based upon the shape\nof the frequency distribution of the column values. We'd need to make\nsome calculations about the index cost model, but its probably worth it\nfor the future.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 18:18:09 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Fri, 2007-09-21 at 12:08 -0400, Tom Lane wrote:\n> Csaba Nagy <[email protected]> writes:\n> > Looking at Plan 2, it looks like the \"limit\" step is estimating wrongly\n> > it's cost.\n> \n> The reason you get a bad plan is that this rowcount estimate is so far\n> off:\n\nThat's true, but its not relevant, since the query would still be fast\neven if that estimate was exactly right. With LIMIT 10, it wouldn't\nmatter how many rows were there as long as there were more than 10. The\ntrue execution cost is limited, the cost model is not.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 18:39:12 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> On Fri, 2007-09-21 at 12:08 -0400, Tom Lane wrote:\n>> The reason you get a bad plan is that this rowcount estimate is so far\n>> off:\n\n> That's true, but its not relevant,\n\nYes it is --- the reason it wants to use a hashjoin instead of a\nnestloop is exactly that it thinks the loop would iterate too many\ntimes. (Ten is already too many in this case --- if it had estimated\nfive rows out of the join, it'd have gone with the nestloop, since\nthe cost estimate difference at the top level is less than 2x.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2007 13:53:40 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan " }, { "msg_contents": "On Fri, 2007-09-21 at 13:53 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > On Fri, 2007-09-21 at 12:08 -0400, Tom Lane wrote:\n> >> The reason you get a bad plan is that this rowcount estimate is so far\n> >> off:\n> \n> > That's true, but its not relevant,\n> \n> Yes it is --- the reason it wants to use a hashjoin instead of a\n> nestloop is exactly that it thinks the loop would iterate too many\n> times. 
(Ten is already too many in this case --- if it had estimated\n> five rows out of the join, it'd have gone with the nestloop, since\n> the cost estimate difference at the top level is less than 2x.)\n\nThat's not my perspective. If the LIMIT had been applied accurately to\nthe cost then the hashjoin would never even have been close to the\nnested join in the first place. It's just chance that the frequency\ndistribution is favourable to us and thus amenable to using the hint of\nimproving stats_target. The improved knowledge of the distribution just\nhides the fact that the cost model is still wrong: a cost of 45000 per\nrow shows this.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 19:37:59 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Simon Riggs <[email protected]> writes:\n> That's not my perspective. If the LIMIT had been applied accurately to\n> the cost then the hashjoin would never even have been close to the\n> nested join in the first place.\n\n[ shrug... ] Your perspective is mistaken. There is nothing wrong with\nthe way the LIMIT estimation is being done. The plan in question was\n\nLimit (cost=0.00..498511.80 rows=10 width=1804) (actual time=17.729..21.672 rows=2 loops=1)\n -> Nested Loop (cost=0.00..1794642.48 rows=36 width=1804) (actual time=17.729..21.671 rows=2 loops=1)\n -> Index Scan using pk_table_a on table_a ta (cost=0.00..324880.88 rows=388638 width=16) (actual time=0.146..0.198 rows=2 loops=1)\n Index Cond: (a = $1)\n -> Index Scan using pk_table_b2 on table_b2 tb (cost=0.00..3.77 rows=1 width=1788) (actual time=10.729..10.731 rows=1 loops=2)\n Index Cond: (ta.b = tb.b)\n Total runtime: 21.876 ms\n\nand there are two fairly serious estimation errors here, neither related\nat all to the LIMIT:\n\n* five-orders-of-magnitude overestimate of the number of table_a rows\nthat will match the condition on a;\n\n* enormous underestimate of the number of join rows --- it's apparently\nthinking only 0.0001 of the table_a rows will have a join partner,\nwhereas at least for this case they all do.\n\nHad the latter estimate been right, the cost of pulling results this\nway would indeed have been something like 50K units per joined row,\nbecause of the large number of inner index probes per successful join.\n\nIt might be interesting to look into why those estimates are so far\noff; the stats Csaba displayed don't seem to have any obvious oddity\nthat would justify such bizarre results. But the LIMIT has got\nnothing to do with this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 21 Sep 2007 19:30:34 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan " }, { "msg_contents": "On Fri, 2007-09-21 at 19:30 -0400, Tom Lane wrote:\n> Simon Riggs <[email protected]> writes:\n> > That's not my perspective. If the LIMIT had been applied accurately to\n> > the cost then the hashjoin would never even have been close to the\n> > nested join in the first place.\n> \n> [ shrug... ] Your perspective is mistaken. There is nothing wrong with\n> the way the LIMIT estimation is being done. 
The plan in question was\n> \n> Limit (cost=0.00..498511.80 rows=10 width=1804) (actual time=17.729..21.672 rows=2 loops=1)\n> -> Nested Loop (cost=0.00..1794642.48 rows=36 width=1804) (actual time=17.729..21.671 rows=2 loops=1)\n> -> Index Scan using pk_table_a on table_a ta (cost=0.00..324880.88 rows=388638 width=16) (actual time=0.146..0.198 rows=2 loops=1)\n> Index Cond: (a = $1)\n> -> Index Scan using pk_table_b2 on table_b2 tb (cost=0.00..3.77 rows=1 width=1788) (actual time=10.729..10.731 rows=1 loops=2)\n> Index Cond: (ta.b = tb.b)\n> Total runtime: 21.876 ms\n> \n> and there are two fairly serious estimation errors here, neither related\n> at all to the LIMIT:\n> \n> * five-orders-of-magnitude overestimate of the number of table_a rows\n> that will match the condition on a;\n\nI don't see any problem with this estimate, but I do now agree there is\na problem with the other estimate.\n\nWe check to see if the value is an MFV, else we assume that the\ndistribution is uniformly distributed across the histogram bucket. \n\nCsaba provided details of the fairly shallow distribution of values of a\nin table_a. 96% of rows aren't covered by the MFVs, so its a much\nshallower distribution than is typical, but still easily possible. So\nbased upon what we know there should be ~330,000 rows with the value of\na used for the EXPLAIN.\n\nSo it looks to me like we did the best we could with the available\ninformation, so I can't see that as a planner problem per se. We cannot\ndo better a priori without risking worse plans in other circumstances.\n\n> * enormous underestimate of the number of join rows --- it's apparently\n> thinking only 0.0001 of the table_a rows will have a join partner,\n> whereas at least for this case they all do.\n\nOK, I agree this estimate does have a problem and it has nothing to do\nwith LIMIT.\n\nLooking at the code I can't see how this selectivity can have been\ncalculated. AFAICS eqjoinsel() gives a selectivity of 1.0 using the data\nsupplied by Csaba and it ought to cover this case reasonably well.\n\nCsaba, please can you copy that data into fresh tables, re-ANALYZE and\nthen re-post the EXPLAINs, with stats data.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Mon, 24 Sep 2007 14:27:09 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:\n> Csaba, please can you copy that data into fresh tables, re-ANALYZE and\n> then re-post the EXPLAINs, with stats data.\n\nWell, I can of course. I actually tried to generate some random data\nwith similar record count and relations between the tables (which I'm\nnot sure I succeeded at), without the extra columns, but it was happily\nyielding the nested loop plan. So I guess I really have to copy the\nwhole data (several tens of GB).\n\nBut from my very limited understanding of what information is available\nfor the planner, I thought that the record count estimated for the join\nbetween table_a and table_b1 on column b should be something like\n\n(estimated record count in table_a for value \"a\") * (weight of \"b\" range\ncovered by table_b1 and table_a in common) / (weight of \"b\" range\ncovered by table_a)\n\nThis is if the \"b\" values in table_a wouldn't be correlated at all with\nthe content of table_b2. 
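(The details being referred to are the column statistics; for anyone reproducing this, they can be read straight from the pg_stats view — a sketch using the table and column names from the thread:

    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'table_a' AND attname = 'a';

The "96% outside the MFVs" figure below corresponds roughly to one minus the sum of most_common_freqs.)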
The reality is that they are, but the planner\nhas no information about that.\n\nI have no idea how the planner works though, so this might be totally\noff...\n\nI will copy the data and send the results (not promising though that it\nwill be today).\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Mon, 24 Sep 2007 16:04:42 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Mon, 2007-09-24 at 16:04 +0200, Csaba Nagy wrote:\n> On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:\n> > Csaba, please can you copy that data into fresh tables, re-ANALYZE and\n> > then re-post the EXPLAINs, with stats data.\n> \n> Well, I can of course. I actually tried to generate some random data\n> with similar record count and relations between the tables (which I'm\n> not sure I succeeded at), without the extra columns, but it was happily\n> yielding the nested loop plan. So I guess I really have to copy the\n> whole data (several tens of GB).\n> \n> But from my very limited understanding of what information is available\n> for the planner, I thought that the record count estimated for the join\n> between table_a and table_b1 on column b should be something like\n> \n> (estimated record count in table_a for value \"a\") * (weight of \"b\" range\n> covered by table_b1 and table_a in common) / (weight of \"b\" range\n> covered by table_a)\n\nThere's no such code I'm aware of. Sounds a good idea though. I'm sure\nwe could do something with the histogram values, but we don't in the\ndefault selectivity functions.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Mon, 24 Sep 2007 18:55:22 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "> Csaba, please can you copy that data into fresh tables, re-ANALYZE and\n> then re-post the EXPLAINs, with stats data.\n\nHere you go, fresh experiment attached.\n\nCheers,\nCsaba.", "msg_date": "Wed, 26 Sep 2007 16:52:37 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Csaba Nagy <[email protected]> writes:\n> db=# analyze verbose temp_table_a;\n> INFO: analyzing \"public.temp_table_a\"\n> INFO: \"temp_table_a\": scanned 3000 of 655299 pages, containing 1887000 live rows and 0 dead rows; 3000 rows in sample, 412183071 estimated total rows\n\nHmm. So the bottom line here is probably that that's not a big enough\nsample to derive a realistic n_distinct value. Or maybe it is ... how\nmany values of \"a\" are there really, and what's the true distribution of\ncounts? Do the plan estimates get closer to reality if you set a higher\nstatistics target?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2007 11:22:17 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan " }, { "msg_contents": "On Wed, 2007-09-26 at 11:22 -0400, Tom Lane wrote:\n> ... how\n> many values of \"a\" are there really, and what's the true distribution of\n> counts? \n\ntable_a has 23366 distinct values. Some statistics (using R):\n\n> summary(table_a_histogram)\n a count \n Min. : 70000857 Min. : 1 \n 1st Qu.:700003628 1st Qu.: 9 \n Median :700011044 Median : 22 \n Mean :622429573 Mean : 17640 \n 3rd Qu.:700018020 3rd Qu.: 391 \n Max. :800003349 Max. 
:3347707 \n\n\nI'm not sure what you want to see in terms of distribution of counts, so\nI created 2 plots: \"a\" against the counts for each distinct \"a\" value,\nand the histogram of the log of the counts (without the log it's not\nreally readable). I hope they'll make it through to the list...\n\n> Do the plan estimates get closer to reality if you set a higher\n> statistics target?\n\nThe results of setting higher statistics targets are attached too. I\ncan't tell if the stats are closer to reality or not, but the plan\nchanges in any case...\n\nCheers,\nCsaba.", "msg_date": "Thu, 27 Sep 2007 13:53:24 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Csaba Nagy <[email protected]> writes:\n> On Wed, 2007-09-26 at 11:22 -0400, Tom Lane wrote:\n>> ... how\n>> many values of \"a\" are there really, and what's the true distribution of\n>> counts? \n\n> table_a has 23366 distinct values. Some statistics (using R):\n\n>> summary(table_a_histogram)\n> a count \n> Min. : 70000857 Min. : 1 \n> 1st Qu.:700003628 1st Qu.: 9 \n> Median :700011044 Median : 22 \n> Mean :622429573 Mean : 17640 \n> 3rd Qu.:700018020 3rd Qu.: 391 \n> Max. :800003349 Max. :3347707 \n\nUgh, classic long-tail distribution. This is a really hard problem to\nsolve by sampling --- the sample will naturally be biased towards the\nmore common values, and so ANALYZE tends to conclude there are fewer\ndistinct values than there really are. That means n_distinct in the\nstats is too small, and that feeds directly into the misestimation of\nthe number of matching rows.\n\nAnd yet there's another trap here: if the parameter you passed in\nchanced to be one of the very common values, a plan that was optimized\nfor a small number of matches would perform terribly.\n\nWe've speculated about trying to deal with these types of situations\nby switching plans on-the-fly at runtime, but that's just blue-sky\ndreaming at the moment. In the short run, if boosting the stats target\ndoesn't result in acceptable plans, there may be no real solution other\nthan to avoid parameterized queries on this column.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 27 Sep 2007 10:40:25 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan " }, { "msg_contents": "On Thu, 2007-09-27 at 10:40 -0400, Tom Lane wrote:\n> And yet there's another trap here: if the parameter you passed in\n> chanced to be one of the very common values, a plan that was optimized\n> for a small number of matches would perform terribly.\n> \n> We've speculated about trying to deal with these types of situations\n> by switching plans on-the-fly at runtime, but that's just blue-sky\n> dreaming at the moment. In the short run, if boosting the stats target\n> doesn't result in acceptable plans, there may be no real solution other\n> than to avoid parameterized queries on this column.\n\nWell, my problem was actually solved by rising the statistics target,\nthanks to Simon for suggesting it. The problem is that it's quite hard\nto tell (for a non-postgres-developer) which column needs higher\nstatistics target when a multi-join query doesn't work as expected...\n\nApropos switching plans on the fly and blue sky dreaming... IIRC, there\nwere some plans to cache plans in shared mode for the whole cluster, not\njust per backend.\n\nWhat about allowing the user to prepare a plan offline, i.e. 
without\nactually executing it (via some variant of PREPARE CACHED or so), and\nlet the planner do more exhaustive cost estimation, possibly actually\nanalyzing specific tables for correlations etc., on the ground that the\nwhole thing is done only once and reused many times. The resulting plan\ncould also contain turning points for parameter values, which would\nswitch between different variants of the plan, this way it can be more\nspecific with parameter values even if planned generically... and it\ncould set up some dependencies on the relevant statistics on which it is\nbasing it's decisions, so it will be invalidated when those statistics\nare presumably changed more than a threshold, and possibly a \"background\nplanner\" thread re-plans it, after the necessary analyze steps are run\nagain.\n\nIf there is a \"background planner\", that one could also collect \"long\nrunning query\" statistics and automatically do a cached plans for the\nmost offending ones, and possibly generate \"missing index\", \"you should\ncluster this table\" and such warnings.\n\nThe fast planner would still be needed for interactive queries which are\nnot yet prepared, so new interactive queries don't pay the unpredictable\ncost of \"hard\" planning. If those run fast enough, they will never get\nprepared, they don't need to... otherwise they should be passed to the\nbackground planner to be exhaustively (or at least more thoroughly)\nanalyzed...\n\nOne other thing I dream of would be some way to tell postgres that a\nquery should run in \"batch mode\" or \"interactive mode\", i.e. it should\nbe optimized for best throughput or fast startup, in the second case\ngreat care should be taken to avoid the worst case scenarios too. I know\nthere's a strong feeling against query hints around here, but this one\ncould fly using a GUC parameter, which could be set in the config file\nfor a default value (batch for a data warehouse, interactive for an OLTP\napplication), and it also could be set per session.\n\nOk, that's about the dreaming...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Thu, 27 Sep 2007 17:16:06 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Thu, 2007-09-27 at 10:40 -0400, Tom Lane wrote:\n\n> And yet there's another trap here: if the parameter you passed in\n> chanced to be one of the very common values, a plan that was optimized\n> for a small number of matches would perform terribly.\n\nI wonder could we move prepare_threshold onto the server? (from JDBC).\n\nWhen we prepare a statement, we record the current setting of the\nprepare_threshold parameter into the plan for that statement. Valid\nsettings are -1, 0 or more.\nIf the setting is -1 then we always re-plan the statement. \nIf the setting is 0 then we plan the statement generically.\nIf the setting is > 0 then we plan the statement according to the\nspecific parameter value and then decrement the prepare count, so\neventually the executions will hit 0 (above).\n\nThat would also simplify JDBC and allow the functionality from other\nclients, as well as from within PLs.\n\nWe could also force the plan to be -1 when the planning involves\nsomething that would force it to be a one-time plan, e.g. constraint\nexclusion. 
(So we could then get rid of that parameter).\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Thu, 27 Sep 2007 16:24:37 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "Csaba Nagy wrote:\n> \n> Well, my problem was actually solved by rising the statistics target,\n\nWould it do more benefit than harm if postgres increased the\ndefault_statistics_target?\n\nI see a fair number of people (myself included) asking questions who's\nresolution was to ALTER TABLE SET STATISTICS; and I think relatively\nfewer (if any?) people concerned about space in pg_statistic\nor people improving analyze time by reducing the statistics target.\n", "msg_date": "Thu, 27 Sep 2007 11:07:48 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "On Thu, 2007-09-27 at 11:07 -0700, Ron Mayer wrote:\n> Csaba Nagy wrote:\n> > \n> > Well, my problem was actually solved by rising the statistics target,\n> \n> Would it do more benefit than harm if postgres increased the\n> default_statistics_target?\n> \n> I see a fair number of people (myself included) asking questions who's\n> resolution was to ALTER TABLE SET STATISTICS; and I think relatively\n> fewer (if any?) people concerned about space in pg_statistic\n> or people improving analyze time by reducing the statistics target.\n\nWell, the cost of raising the statistics target is far from zero: with\nall defaults the analyze time was ~ 10 seconds, with one column set to\n100 was ~ 1.5 minutes, with one column set to 1000 was 15 minutes for\nthe table in question (few 100M rows). Of course the IO load must have\nbeen proportional to the timings... so I'm pretty sure the current\ndefault is serving well most of the situations.\n\nCheers,\nCsaba.\n\n\n\n\n", "msg_date": "Fri, 28 Sep 2007 10:12:10 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" }, { "msg_contents": "> Just an idea, but with the 8.3 concurrent scan support would it be\n> possible to hang a more in depth analyze over exisiting sequential\n> scans. Then it would be a lower cost to have higher resolution in\n> the statistics because the I/O component would be hidden.\n\nThe biggest problem with that is that it wouldn't be deterministic...\nthe table in question from my original post is never scanned\nsequentially in normal operation. The other way around is also possible,\nwhen sequential scans are too frequent, in that case you wouldn't want\nto also analyze all the time. So there would be a need for logic of when\nto analyze or not with a sequential scan and when do it proactively\nwithout waiting for one... and I'm not sure it will be worth the\ncomplexity.\n\nI think it would me much more productive if some \"long running query\"\ntracking combined with a \"background planner\" thread would do targeted\nanalyzes for specific correlations/distributions/conditions based on\nwhat queries are actually running on the system.\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Fri, 28 Sep 2007 14:33:52 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Searching for the cause of a bad plan" } ]
[ { "msg_contents": "I suffered the same fate when upgrading some time back. The single biggest\nissue for me was that the default 8.X setup changed what had been fast query\nplans using indexes to slow plans using sequential scans. Changing the\nrandom_page_cost in postgresql.conf from 4.0 to 2.0 (which indicates to\nPostgres that reading index pages isn't such a big deal, encouraging index\nuse) solved most of these issues for me.\n\nJeff\n\n-----Original Message-----\nFrom: smiley2211 [mailto:[email protected]] \nSent: Friday, September 21, 2007 8:14 AM\nTo: [email protected]\nSubject: Re: [PERFORM] Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!\n\n\n\nDennis,\n\nThanks for your reply.\n\nNo, the OLD server are no longer available (decommissioned) - the new\nservers are definitely better h\\w.\n\nI do not have any queries to EXPLAIN ANALYZE as they are built by the\napplication and I am not allowed to enable logging on for that server - so\nwhere do I go from here???\n\nI am pretty much trying to make changes in the postgresql.conf file but\ndon't have a CLUE as to what starting numbers I should be looking at to\nchange(???)\n\nHere is the EXPLAIN ANALYZE for the ONE (1) query I do have...it takes 4 - 5\nhours to run a SELECT with the 'EXPLAIN ANALYZE':\n\n \n\nQUERY PLAN\n\n \n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n----------------------------------------------------------------------------\n-\n-------------------\n Limit (cost=100013612.76..299939413.70 rows=1 width=8) (actual\ntime=10084289.859..10084289.861 rows=1 loops=1)\n -> Subquery Scan people_consent (cost=100013612.76..624068438343.99\nrows=3121 width=8) (actual time=10084289.853..10084289.853 rows=1 loops=1)\n -> Append (cost=100013612.76..624068438312.78 rows=3121\nwidth=815) (actual time=10084289.849..10084289.849 rows=1 loops=1)\n -> Nested Loop (cost=100013612.76..100013621.50 rows=2\nwidth=815) (actual time=10084289.846..10084289.846 rows=1 loops=1)\n -> Unique (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.817..10084289.817 rows=1 loops=1)\n -> Sort (cost=100013612.76..100013612.77 rows=2\nwidth=8) (actual time=10084289.814..10084289.814 rows=1 loops=1)\n Sort Key: temp_consent.id\n -> Unique \n(cost=100013612.71..100013612.73 rows=2 width=36) (actual\ntime=10084245.195..10084277.468 rows=7292 loops=1)\n -> Sort \n(cost=100013612.71..100013612.72 rows=2 width=36) (actual\ntime=10084245.191..10084254.425 rows=7292 loops=1)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100013515.80..100013612.70 rows=2 width=36) (actual\ntime=10083991.226..10084228.613 rows=7292 loops=1)\n -> HashAggregate \n(cost=100013515.80..100013515.82 rows=1 width=36) (actual\ntime=10083991.223..10083998.046 rows=3666 loops=1)\n -> Nested Loop \n(cost=100000060.61..100013515.80 rows=1 width=36) (actual\ntime=388.263..10083961.330 rows=3702 loops=1)\n -> Nested\nLoop (cost=100000060.61..100013511.43 rows=1 width=36) (actual\ntime=388.237..10083897.268 rows=3702 loops=1)\n -> \nNested Loop (cost=100000060.61..100013507.59 rows=1 width=24) (actual\ntime=388.209..10083833.870 rows=3702 loops=1)\n \n-> Nested Loop (cost=100000060.61..100013504.56 rows=1 width=24) \n-> (actual\ntime=388.173..10083731.122 rows=3702 loops=1)\n \n\nJoin Filter: (\"inner\".question_answer_id = \"outer\".id)\n \n\n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.978..114.768 rows=7430 loops=1)\n \n\n-> 
Index Scan using answers_answer_un on answers a (cost=0.00..5.01 \n-> rows=1\nwidth=28) (actual time=0.084..0.088 rows=1 loops=1)\n \n\nIndex Cond: ((answer)::text = 'Yes'::text)\n \n\n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (actual time=13.881..87.112 rows=7430 loops=1)\n \n\nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'share\nWithEval'::text)))\n \n\n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=13.198..13.198 rows=0 loops=1)\n \n\n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 \n-> width=0)\n(actual time=9.689..9.689 rows=57804 loops=1)\n \n\nIndex Cond: (qa.answer_id = \"outer\".id)\n \n\n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.563..2.563 rows=0 loops=1)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(actual time=1.923..1.923 rows=6237 loops=1)\n \n\nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(actual time=0.634..0.634 rows=2047 loops=1)\n \n\nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n\n-> Seq Scan on encounters_questions_answers eqa\n(cost=100000000.00..100007608.66 rows=464766 width=8) (actual\ntime=0.003..735.934 rows=464766 loop\ns=7430)\n \n-> Index Scan using encounters_id on encounters ec (cost=0.00..3.02 \n-> rows=1\nwidth=8) (actual time=0.016..0.018 rows=1 loops=3702)\n \n\nIndex Cond: (ec.id = \"outer\".encounter_id)\n -> \nIndex Scan using enrollements_pk on enrollments en (cost=0.00..3.82 rows=1\nwidth=20) (actual time=0.008..0.010 rows=1 loops=3702)\n \nIndex Cond: (\"outer\".enrollment_id = en.id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.008..0.010 rows=1 loops=3702)\n Index\nCond: (p.id = \"outer\".person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (actual time=205.471..212.207 rows=3626\nloops=1)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (actual time=13.163..196.421 rows=3722\nloops=1)\n -> Nested\nLoop (cost=60.61..92.48 rows=1 width=36) (actual time=13.149..158.112\nrows=3722 loops=1)\n -> \nNested Loop (cost=60.61..89.36 rows=1 width=24) (actual\ntime=13.125..120.021 rows=3722 loops=1)\n \n-> Nested Loop (cost=60.61..86.33 rows=1 width=28) (actual\ntime=13.013..48.460 rows=7430 loops=1)\n \n\n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 \n-> rows=1\nwidth=28) (actual time=0.030..0.032 rows=1 loops=1)\n \n\nIndex Cond: ((answer)::text = 'Yes'::text)\n \n\n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (actual time=12.965..28.902 rows=7430 loops=1)\n \n\nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEv\nal'::text)))\n \n\n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (actual\ntime=12.288..12.288 rows=0 loops=1)\n \n\n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 \n-> width=0)\n(actual time=8.985..8.985 rows=57804 loops=1)\n \n\nIndex Cond: (qa.answer_id = \"outer\".id)\n \n\n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (actual\ntime=2.344..2.344 rows=0 loops=1)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(actual time=1.762..1.762 rows=6237 loops=1)\n \n\nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n\n-> 
Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(actual time=0.578..0.578 rows=2047 loops=1)\n \n\nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n \n-> Index Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers \n-> cqa\n(cost=0.00..3.02 rows=1 width=8) (actual time=0.005..0.006 rows=1\nloops=7430)\n \n\nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed) lines\n1-69/129 56%\n \n\nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> \nIndex Scan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1\nwidth=20) (actual time=0.003..0.005 rows=1 loops=3722)\n \nIndex Cond: (c.id = \"outer\".call_id)\n -> Index\nScan using people_pk on people p (cost=0.00..4.35 rows=1 width=8) (actual\ntime=0.004..0.005 rows=1 loops=3722)\n Index\nCond: (p.id = \"outer\".person_id)\n -> Index Scan using people_pk on people \n(cost=0.00..4.35 rows=1 width=815) (actual time=0.018..0.018 rows=1 loops=1)\n Index Cond: (people.id = \"outer\".id)\n -> Subquery Scan \"*SELECT* 2\" \n(cost=100000000.00..623968424691.25 rows=3119 width=676) (never executed)\n -> Seq Scan on people \n(cost=100000000.00..623968424660.06 rows=3119 width=676) (never executed)\n Filter: (NOT (subplan))\n SubPlan\n -> Subquery Scan temp_consent \n(cost=100010968.94..100010968.98 rows=2 width=8) (never executed)\n -> Unique \n(cost=100010968.94..100010968.96 rows=2 width=36) (never executed)\n -> Sort \n(cost=100010968.94..100010968.95 rows=2 width=36) (never executed)\n Sort Key: id, daterecorded,\nanswer\n -> Append \n(cost=100010872.03..100010968.93 rows=2 width=36) (never executed)\n -> HashAggregate \n(cost=100010872.03..100010872.04 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=100000907.99..100010872.02 rows=1 width=36) (never executed)\n Join\nFilter: (\"inner\".question_answer_id = \"outer\".id)\n -> Nested\nLoop (cost=60.61..90.69 rows=1 width=36) (never executed)\n -> \nNested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 \n-> width=8)\n(never executed)\n \n\nIndex Cond: (id = $0)\n \n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 \n-> rows=1\nwidth=28) (never executed)\n \n\nIndex Cond: ((answer)::text = 'Yes'::text)\n -> \nBitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text =\n'shareWithEval'::text)\n))\n \n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n\n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 \n-> 
width=0)\n(never executed)\n \n\nIndex Cond: (qa.answer_id = \"outer\".id)\n \n\n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(never executed)\n \n\nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(never executed)\n \n\nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> Hash\nJoin (cost=100000847.38..100010780.52 rows=65 width=20) (never executed)\n Hash\nCond: (\"outer\".encounter_id = \"inner\".id)\n -> \nSeq Scan on encounters_questions_answers eqa \n(cost=100000000.00..100007608.66 rows=464766 width=8) (never executed)\n -> \nHash (cost=847.37..847.37 rows=3 width=20) (never executed)\n \n-> Hash Join (cost=214.73..847.37 rows=3 width=20) (never executed)\n \n\nHash Cond: (\"outer\".enrollment_id = \"inner\".id)\n \n\n-> Index Scan using encounters_id on encounters ec (cost=0.00..524.72\nrows=21578 width=8) (never executed)\n \n\n-> Hash (cost=214.73..214.73 rows=1 width=20) (never executed)\n \n\n-> Index Scan using enrollements_pk on enrollments en \n-> (cost=0.00..214.73\nrows=1 width=20) (never executed)\n \n\nFilter: ($0 = person_id)\n -> HashAggregate \n(cost=96.86..96.87 rows=1 width=36) (never executed)\n -> Nested Loop \n(cost=60.61..96.85 rows=1 width=36) (never executed)\n -> Nested\nLoop (cost=60.61..93.72 rows=1 width=32) (never executed)\n -> \nNested Loop (cost=60.61..90.69 rows=1 width=36) (never executed)\n \n-> Nested Loop (cost=0.00..9.37 rows=1 width=36) (never executed)\n \n\n-> Index Scan using people_pk on people p (cost=0.00..4.35 rows=1 \n-> width=8)\n(never executed)\n \n\nIndex Cond: (id = $0)\n \n\n-> Index Scan using answers_answer_un on answers a (cost=0.00..5.01 \n-> rows=1\nwidth=28) (never executed)\n \n\nIndex Cond: ((answer)::text = 'Yes'::text)\n \n-> Bitmap Heap Scan on questions_answers qa (cost=60.61..81.23 rows=7\nwidth=16) (never executed)\n \n\nRecheck Cond: ((qa.answer_id = \"outer\".id) AND (((qa.question_tag)::text =\n'consentTransfer'::text) OR ((qa.question_tag)::text = 'shareWithEval':\n:text)))\n \n\n-> BitmapAnd (cost=60.61..60.61 rows=7 width=0) (never executed)\n \n\n-> Bitmap Index Scan on qs_as_answer_id (cost=0.00..5.27 rows=649 \n-> width=0)\n(never executed)\n \n\nIndex Cond: (qa.answer_id = \"outer\".id)\n \n\n-> BitmapOr (cost=55.08..55.08 rows=6596 width=0) (never executed)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(never executed)\n \n\nIndex Cond: ((question_tag)::text = 'consentTransfer'::text)\n \n\n-> Bitmap Index Scan on qs_as_qtag (cost=0.00..27.54 rows=3298 \n-> width=0)\n(never executed)\n \n\nIndex Cond: ((question_tag)::text = 'shareWithEval'::text)\n -> \nIndex Scan using ctccalls_qs_as_qaid on ctccalls_questions_answers cqa \n(cost=0.00..3.02 rows=1 width=8) (never executed)\n \nIndex Cond: (cqa.question_answer_id = \"outer\".id)\n -> Index\nScan using ctccalls_pk on ctccalls c (cost=0.00..3.11 rows=1 width=20)\n(never executed)\n Index\nCond: (c.id = \"outer\".call_id)\n \nFilter: ($0 = person_id)\n Total runtime: 10084292.497 ms\n(125 rows)\n\n\nThanks...Marsha\n\n-- \nView this message in context:\nhttp://www.nabble.com/Upgraded-from-7.4-to-8.1.4-QUERIES-NOW-SLOW%21%21%21-t\nf4489502.html#a12820410\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: 
Have you checked our extensive FAQ?\n\n http://www.postgresql.org/docs/faq\n", "msg_date": "Fri, 21 Sep 2007 08:28:46 -0500", "msg_from": "Jeff Harris <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Upgraded from 7.4 to 8.1.4 QUERIES NOW SLOW!!!" } ]
[ { "msg_contents": "Hi,\nI think the query planner is unaware of the possibly best plan in some \nsituations. See the test case below:\n\n-- --------------------------------------------------- --\n\nCREATE TABLE tparent (\n id integer NOT NULL,\n ord integer NOT NULL,\n CONSTRAINT par_pkey_id PRIMARY KEY (id),\n CONSTRAINT par_uniq_ord UNIQUE (ord)\n);\n\nCREATE TABLE tchild (\n par_id integer NOT NULL,\n ord integer NOT NULL,\n CONSTRAINT chi_pkey_parid_ord PRIMARY KEY (par_id, ord),\n CONSTRAINT chi_fkey FOREIGN KEY (par_id) REFERENCES tparent(id)\n);\n\nINSERT INTO tparent VALUES (1, 3);\nINSERT INTO tparent VALUES (2, 1);\nINSERT INTO tparent VALUES (3, 4);\nINSERT INTO tparent VALUES (4, 5);\nINSERT INTO tparent VALUES (5, 2);\n\nINSERT INTO tchild VALUES (1, 2);\nINSERT INTO tchild VALUES (1, 1);\nINSERT INTO tchild VALUES (2, 1);\nINSERT INTO tchild VALUES (2, 3);\nINSERT INTO tchild VALUES (2, 2);\nINSERT INTO tchild VALUES (3, 1);\nINSERT INTO tchild VALUES (3, 2);\nINSERT INTO tchild VALUES (4, 1);\nINSERT INTO tchild VALUES (5, 2);\nINSERT INTO tchild VALUES (5, 1);\n\nANALYZE tparent;\nANALYZE tchild;\n\nSET enable_seqscan TO false;\nSET enable_bitmapscan TO false;\nSET enable_hashjoin TO false;\nSET enable_mergejoin TO false;\nSET enable_sort TO false;\n\nEXPLAIN ANALYZE\nSELECT *\nFROM tparent JOIN tchild ON tchild.par_id = tparent.id\nWHERE tparent.ord BETWEEN 1 AND 4\nORDER BY tparent.ord, tchild.ord;\n\n-- --------------------------------------------------- --\n\nSort\n(cost=100000132.10..100000140.10 rows=8 width=16)\n(actual time=0.440..0.456 rows=9 loops=1)\nSort Key: tparent.ord, tchild.ord\n\n-> Nested Loop\n (cost=0.00..84.10 rows=8 width=16)\n (actual time=0.179..0.270 rows=9 loops=1)\n\n -> Index Scan using par_uniq_ord on tparent\n (cost=0.00..20.40 rows=4 width=8)\n (actual time=0.089..0.098 rows=4 loops=1)\n Index Cond: ((ord >= 1) AND (ord <= 4))\n\n -> Index Scan using chi_pkey_parid_ord on tchild\n (cost=0.00..9.93 rows=2 width=8)\n (actual time=0.023..0.028 rows=2 loops=4)\n Index Cond: (tchild.par_id = \"outer\".id)\n\n-- --------------------------------------------------- --\n\nEven though I forced the nested loop plan using both indexes (that \nreturns the rows in the correct order), there is a needless sort step on \nthe top, consuming half of the time even on such small tables.\nNow it's clear why the planner did not choose this plan, why I had to \nforce it: because it isn't the best if the sort is still there.\n\nThe first time I posted this\n( http://archives.postgresql.org/pgsql-general/2007-05/msg01306.php )\nand read Tom's answer I was convinced that this is rarely a problem, \nbut now I don't think so, since I ran into it for the third time.\n\nCan that sort step somehow be eliminated if the NestLoop's outer \ntable is being scanned via a unique index? 
If not, how can I rewrite my \nindexes/query in such a way that it's still safe (the rows always come in \nthe order I want), but I don't have to wait for that needless sort?\n\nI'm using PostgreSQL 8.1.8.\n\n\nThanks in advance,\nDenes Daniel\n____________________________________________________\n\n\n\n\nOlvasd az [origo]-t a mobilodon: mini magazinok a Mobizin-en\n___________________________________________________\nwww.t-mobile.hu/mobizin\n\n", "msg_date": "Fri, 21 Sep 2007 17:36:25 +0200 (CEST)", "msg_from": "Denes Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "Query planner unaware of possibly best plan" }, { "msg_contents": "On Fri, 2007-09-21 at 17:36 +0200, Denes Daniel wrote:\n\n> Even though I forced the nested loop plan using both indexes (that \n> returns the rows in the correct order), there is a needless sort step on \n> the top, consuming half of the time even on such small tables.\n> Now it's clear why the planner did not choose this plan, why I had to \n> force it: because it isn't the best if the sort is still there.\n\nOrdering by parent, child is fairly common but the variation you've got\nhere isn't that common. You'd need to make a case considering all the\nalternatives; nobody will agree without a balanced case that includes\nwhat is best for everyone.\n\nYour EXPLAIN looks edited. Have you also edited the sort costs? They\nlook slightly higher than we might expect. Please provide the full\nnormal EXPLAIN output.\n\n-- \n Simon Riggs\n 2ndQuadrant http://www.2ndQuadrant.com\n\n", "msg_date": "Fri, 21 Sep 2007 18:49:33 +0100", "msg_from": "Simon Riggs <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner unaware of possibly best plan" }, { "msg_contents": "Simon Riggs <[email protected]> �rta:\n\n> Ordering by parent, child is fairly common but the variation you've got\n> here isn't that common. You'd need to make a case considering all the\n> alternatives; nobody will agree without a balanced case that includes\n> what is best for everyone.\n> \n> Your EXPLAIN looks edited. Have you also edited the sort costs? They\n> look slightly higher than we might expect. Please provide the full\n> normal EXPLAIN output.\n> \n> -- \n> Simon Riggs\n> 2ndQuadrant http://www.2ndQuadrant.com\n\n\n\nI've just inserted some newlines, so it's better to read than when my \nemail-client wraps the lines automatically. Did not touch the information \nitself. 
But here is the normal output of EXPLAIN ANALYZE:\n\nEXPLAIN ANALYZE SELECT * FROM tparent JOIN tchild ON tchild.par_id = \ntparent.id WHERE tparent.ord BETWEEN 1 AND 4 ORDER BY tparent.ord, \ntchild.ord;\n\n QUERY PLAN\n-------------------------------------------------------------------------------------------\n--------------------------------------------\n Sort (cost=100000132.10..100000140.10 rows=8 width=16) (actual \ntime=0.302..0.319 rows=9 loops=1)\n Sort Key: tparent.ord, tchild.ord\n -> Nested Loop (cost=0.00..84.10 rows=8 width=16) (actual \ntime=0.181..0.267 rows=9 loops=1)\n -> Index Scan using par_uniq_ord on tparent (cost=0.00..20.40 \nrows=4 width=8) (actual time=0.100..0.109 rows=4 loops=1)\n Index Cond: ((ord >= 1) AND (ord <= 4))\n -> Index Scan using chi_pkey_parid_ord on tchild \n(cost=0.00..9.93 rows=2 width=8) (actual time=0.020..0.026 rows=2 \nloops=4)\n Index Cond: (tchild.par_id = \"outer\".id)\n Total runtime: 0.412 ms\n(8 rows)\n\nThe costs may be different because I've tuned the query planner's \nparameters.\n\n> Ordering by parent, child is fairly common but the variation you've got\n> here isn't that common.\nHow else can you order by parent, child other than first ordering by a \nunique key of parent, then something in child? (Except for \nchild.parent_id, child.something because this has all the information in \nchild and can rely on a single multicolumn index.)\n\n\nDenes Daniel\n------------------------------------------------------------------------\n\n\n\n\n\nOlvasd az [origo]-t a mobilodon: mini magazinok a Mobizin-en\n___________________________________________________\nwww.t-mobile.hu/mobizin\n\n", "msg_date": "Fri, 21 Sep 2007 21:29:55 +0200 (CEST)", "msg_from": "Denes Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner unaware of possibly best plan" }, { "msg_contents": "Denes Daniel wrote:\n\n> I've just inserted some newlines, so it's better to read than when my \n> email-client wraps the lines automatically. Did not touch the information \n> itself. But here is the normal output of EXPLAIN ANALYZE:\n\nThe best thing to do is paste them in a text file and send it as an\nattachment.\n\n> QUERY PLAN\n> -------------------------------------------------------------------------------------------\n> --------------------------------------------\n> Sort (cost=100000132.10..100000140.10 rows=8 width=16) (actual \n> time=0.302..0.319 rows=9 loops=1)\n> Sort Key: tparent.ord, tchild.ord\n> -> Nested Loop (cost=0.00..84.10 rows=8 width=16) (actual \n> time=0.181..0.267 rows=9 loops=1)\n> -> Index Scan using par_uniq_ord on tparent (cost=0.00..20.40 \n> rows=4 width=8) (actual time=0.100..0.109 rows=4 loops=1)\n> Index Cond: ((ord >= 1) AND (ord <= 4))\n> -> Index Scan using chi_pkey_parid_ord on tchild \n> (cost=0.00..9.93 rows=2 width=8) (actual time=0.020..0.026 rows=2 \n> loops=4)\n> Index Cond: (tchild.par_id = \"outer\".id)\n> Total runtime: 0.412 ms\n> (8 rows)\n> \n> The costs may be different because I've tuned the query planner's \n> parameters.\n\nWhy did you set enable_sort=off? 
It's not like sorting 9 rows is going\nto take any noticeable amount of time anyway.\n\n-- \nAlvaro Herrera http://www.advogato.org/person/alvherre\n\"No hay hombre que no aspire a la plenitud, es decir,\nla suma de experiencias de que un hombre es capaz\"\n", "msg_date": "Fri, 21 Sep 2007 15:56:44 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner unaware of possibly best plan" }, { "msg_contents": "In reply to Alvaro Herrera:\n\n> The best thing to do is paste them in a text file and send it as an\n> attachment.\n\nOkay, it's attached.\n\n> Why did you set enable_sort=off? It's not like sorting 9 rows is going\n> to take any noticeable amount of time anyway.\n\nOf course it's no problem for 9 rows, but this is only a test case. In \nproduction there will be much more. I just wanted to show that the \nplanner doesn't even consider a plan without a sort step, using purely \nindex scans.\n\n\nDenes Daniel\n------------------------------------------------------------\n\n\n\n\nOlvasd az [origo]-t a mobilodon: mini magazinok a Mobizin-en\n___________________________________________________\nwww.t-mobile.hu/mobizin", "msg_date": "Fri, 21 Sep 2007 22:05:29 +0200 (CEST)", "msg_from": "Denes Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner unaware of possibly best plan" } ]
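An aside on the "all the sort columns live in the child" remark above — the following is only an illustration against the same tparent/tchild test tables, not a drop-in replacement, because it orders by tparent.id rather than tparent.ord. When every sort column comes from tchild, the (par_id, ord) primary key can hand the rows back already in order:

    -- Sketch: sort key taken entirely from the child side. A plan that walks
    -- chi_pkey_parid_ord and joins tparent row by row already emits this
    -- order, so no Sort node is needed on top.
    SELECT *
    FROM tparent JOIN tchild ON tchild.par_id = tparent.id
    WHERE tparent.ord BETWEEN 1 AND 4
    ORDER BY tchild.par_id, tchild.ord;

Whether the 8.1 planner actually prefers that path still depends on costs; the point is only that the index can supply the order once the key is redundant across the join.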
[ { "msg_contents": "I'm doing several tests.\nRight now I did a VACUUM FULL ANALYZE in both servers.\nIn the old one vacuum runs for about 354 seconds and in the new one 59 seconds.\n\nThen I have ran\nEXPLAIN ANALYZE\nSELECT *\nFROM fact_ven_renta fvr, dim_producto_std_producto dpp\nWHERE\n fvr.producto_std_producto_sk = dpp.producto_sk\n\nI have found that the plans aren't exactly the same.\nThis is the plan for the old server:\nHash Join (cost=449.55..8879.24 rows=136316 width=904) (actual time=50.734..1632.491 rows=136316 loops=1)\n Hash Cond: (fvr.producto_std_producto_sk = dpp.producto_sk)\n -> Seq Scan on fact_ven_renta fvr (cost=0.00..6044.16 rows=136316 width=228) (actual time=0.029..452.716 rows=136316 loops=1)\n -> Hash (cost=403.69..403.69 rows=3669 width=676) (actual time=50.582..50.582 rows=3669 loops=1)\n -> Seq Scan on dim_producto_std_producto dpp (cost=0.00..403.69 rows=3669 width=676) (actual time=0.023..19.776 rows=3669 loops=1)\nTotal runtime: 2022.293 ms\n\nAnd this is the plan for the new server:\nHash Join (cost=412.86..9524.13 rows=136316 width=905) (actual time=9.421..506.376 rows=136316 loops=1)\n Hash Cond: (\"outer\".producto_std_producto_sk = \"inner\".producto_sk)\n -> Seq Scan on fact_ven_renta fvr (cost=0.00..6044.16 rows=136316 width=228) (actual time=0.006..107.318 rows=136316 loops=1)\n -> Hash (cost=403.69..403.69 rows=3669 width=677) (actual time=9.385..9.385 rows=3669 loops=1)\n -> Seq Scan on dim_producto_std_producto dpp (cost=0.00..403.69 rows=3669 width=677) (actual time=0.003..3.157 rows=3669 loops=1)\nTotal runtime: 553.619 ms\n\n\nI see an \"outer\" join in the plan for the new server. This is weird!!! There are the same databases in both servers.\nThe old one runs this query for about 37 seconds and for the new one for about 301 seconds.\nWhy are plans different? May the backup recovery process have had an error in the new server when restoring?\n\nI appreciate some help.\nRegards Agustin\n\n----- Mensaje original ----\nDe: \"[email protected]\" <[email protected]>\nPara: [email protected]\nEnviado: miércoles 19 de septiembre de 2007, 14:38:13\nAsunto: [PERFORM] Low CPU Usage\n\nHi all.\nRecently I have installed a brand new server with a Pentium IV 3.2 GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4 (previously a 8.1.9).\nI have other similar server with an IDE disk, Red Hat EL 4 and PostgreSQL 8.2.3\n\nI have almost the same postgresql.conf in both servers, but in the new one (I have more work_mem than the other one) things go really slow. I began to monitor i/o disk and it's really ok, I have test disk with hdparm and it's 5 times faster than the IDE one.\nRunning the same queries in both servers in the new one it envolves almost 4 minutes instead of 18 seconds in the old one.\nBoth databases are the same, I have vacuum them and I don't know how to manage this issue.\nThe only weird thing is than in the older server running\n the query it uses 30% of CPU instead of 3 o 5 % of the new one!!!\nWhat's is happening with this server? I upgrade from 8.1.9 to 8.2.4 trying to solve this issue but I can't find a solution.\n\nAny ideas?\nRegards\nAgustin\n\n\n\n\n \nEl Mundial de Rugby 2007\nLas últimas noticias en Yahoo! Deportes:\n\nhttp://ar.sports.yahoo.com/mundialderugby\n\n\n\n\n\n Los referentes más importantes en compra/ venta de autos se juntaron:\nDemotores y Yahoo!\nAhora comprar o vender tu auto es más fácil. 
Visitá http://ar.autos.yahoo.com/", "msg_date": "Fri, 21 Sep 2007 10:30:45 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": ">>> On Fri, Sep 21, 2007 at 12:30 PM, in message\n<[email protected]>, <[email protected]>\nwrote: \n \n> This is the plan for the old server:\n> Hash Join (cost=449.55..8879.24 rows=136316 width=904) (actual \n> time=50.734..1632.491 rows=136316 loops=1)\n. . .\n> Total runtime: 2022.293 ms\n \n> And this is the plan for the new server:\n> Hash Join (cost=412.86..9524.13 rows=136316 width=905) (actual \n> time=9.421..506.376 rows=136316 loops=1)\n. . .\n> Total runtime: 553.619 ms\n \n> I see an \"outer\" join in the plan for the new server. This is weird!!! There \n> are the same databases in both servers.\n \nThat's just a matter of labeling the tables with role rather than alias.\nThe plans look the same to me.\n \n> The old one runs this query for about 37 seconds and for the new one for \n> about 301 seconds.\n \nThat's not what it looks like based on the EXPLAIN ANALYZE output.\nIt looks like run time dropped from two seconds to half a second.\n \nIt seems as though you either have a network delay delivering the results,\nor your application is slow to read them.\n\nExactly how are you arriving at those timings you're reporting to us?\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 21 Sep 2007 12:51:57 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
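A direct way to answer that question — a generic psql recipe, not something reported by the poster — is to time the same statement twice on each server: once under EXPLAIN ANALYZE, which measures server-side execution only, and once as the plain query with \timing on, which also includes building and shipping the full result set:

    \timing
    EXPLAIN ANALYZE
    SELECT * FROM fact_ven_renta fvr, dim_producto_std_producto dpp
    WHERE fvr.producto_std_producto_sk = dpp.producto_sk;   -- executor time only

    SELECT * FROM fact_ven_renta fvr, dim_producto_std_producto dpp
    WHERE fvr.producto_std_producto_sk = dpp.producto_sk;   -- executor time plus result transfer and printing

If the EXPLAIN ANALYZE numbers agree between the two machines but the second timing differs wildly, the slowdown is downstream of the executor.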
[ { "msg_contents": "I forgot to tell this plan was with Postgres 8.1.9 in the new server with postgres 8.2.4 in the new server the plan is the same as with te old one (the little difference in rows retrieved is that the database is yesterday snapshot).\nThis is the plan for the new server with postgres 8.2.4:\nHash Join (cost=449.55..8846.67 rows=135786 width=904) (actual time=10.823..467.746 rows=135786 loops=1)\n Hash Cond: (fvr.producto_std_producto_sk = dpp.producto_sk)\n -> Seq Scan on fact_ven_renta fvr (cost=0.00..6020.86 rows=135786 width=228) (actual time=0.007..81.268 rows=135786 loops=1)\n -> Hash (cost=403.69..403.69 rows=3669 width=676) (actual time=10.733..10.733 rows=3669 loops=1)\n -> Seq Scan on dim_producto_std_producto dpp (cost=0.00..403.69 rows=3669 width=676) (actual time=0.004..2.995 rows=3669 loops=1)\nTotal runtime: 513.747 ms\n\nThis query is running for about 200 seconds, doing dstat I don't see anything weird (regards to low cpu usage 2% or 3%) and normal i/o. In the old server I have 30% of cpu usage an high i/o and run faster!!!\nThis is really weird.\n\n\n----- Mensaje original ----\nDe: \"[email protected]\" <[email protected]>\nPara: [email protected]\nEnviado: viernes 21 de septiembre de 2007, 14:30:45\nAsunto: Re: [PERFORM] Low CPU Usage\n\nI'm doing several tests.\nRight now I did a VACUUM FULL ANALYZE in both servers.\nIn the old one vacuum runs for about 354 seconds and in the new one 59 seconds.\n\nThen I have ran\nEXPLAIN ANALYZE\nSELECT *\nFROM fact_ven_renta fvr, dim_producto_std_producto dpp\nWHERE\n fvr.producto_std_producto_sk = dpp.producto_sk\n\nI have found that the plans aren't exactly the same.\nThis is the plan for the old server:\nHash Join (cost=449.55..8879.24 rows=136316 width=904) (actual time=50.734..1632.491 rows=136316 loops=1)\n Hash Cond: (fvr.producto_std_producto_sk = dpp.producto_sk)\n -> Seq Scan on fact_ven_renta fvr (cost=0.00..6044.16 rows=136316\n width=228) (actual time=0.029..452.716 rows=136316 loops=1)\n -> Hash (cost=403.69..403.69 rows=3669 width=676) (actual time=50.582..50.582 rows=3669 loops=1)\n -> Seq Scan on dim_producto_std_producto dpp (cost=0.00..403.69 rows=3669 width=676) (actual time=0.023..19.776 rows=3669 loops=1)\nTotal runtime: 2022.293 ms\n\nAnd this is the plan for the new server:\nHash Join (cost=412.86..9524.13 rows=136316 width=905) (actual time=9.421..506.376 rows=136316 loops=1)\n Hash Cond: (\"outer\".producto_std_producto_sk = \"inner\".producto_sk)\n -> Seq Scan on fact_ven_renta fvr (cost=0.00..6044.16 rows=136316 width=228) (actual time=0.006..107.318 rows=136316 loops=1)\n -> Hash (cost=403.69..403.69 rows=3669 width=677) (actual time=9.385..9.385 rows=3669 loops=1)\n \n -> Seq Scan on dim_producto_std_producto dpp (cost=0.00..403.69 rows=3669 width=677) (actual time=0.003..3.157 rows=3669 loops=1)\nTotal runtime: 553.619 ms\n\n\nI see an \"outer\" join in the plan for the new server. This is weird!!! There are the same databases in both servers.\nThe old one runs this query for about 37 seconds and for the new one for about 301 seconds.\nWhy are plans different? 
May the backup recovery process have had an error in the new server when restoring?\n\nI appreciate some help.\nRegards Agustin\n\n----- Mensaje original ----\nDe: \"[email protected]\" <[email protected]>\nPara: [email protected]\nEnviado: miércoles 19 de septiembre de 2007, 14:38:13\nAsunto: [PERFORM] Low CPU Usage\n\nHi all.\nRecently I have installed a brand new server with a Pentium IV 3.2 GHz, SATA Disk, 2GB of Ram in Debian 4.0r1 with PostgreSQL 8.2.4 (previously a 8.1.9).\nI have other similar server with an IDE disk, Red Hat EL 4 and PostgreSQL 8.2.3\n\nI have almost the same postgresql.conf in both servers, but in the new one (I have more work_mem than the other one) things go really slow. I began to monitor i/o disk and it's really ok, I have test disk with hdparm and it's 5 times faster than the IDE one.\nRunning the same queries in both servers in the new one it envolves almost 4 minutes instead of 18 seconds in the old one.\nBoth databases are the same, I have vacuum them and I don't know how to manage this issue.\nThe only weird thing is than in the older server running\n the query it uses 30% of CPU instead of 3 o 5 % of the new one!!!\nWhat's is happening with this server? I upgrade from 8.1.9 to 8.2.4 trying to solve this issue but I can't find a solution.\n\nAny ideas?\nRegards\nAgustin\n", "msg_date": "Fri, 21 Sep 2007 10:51:57 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" } ]
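Given that EXPLAIN ANALYZE reports about half a second while the query is observed to take minutes, a cheap diagnostic — suggested here, not something run in the thread — is a variant that does the same work on the server but returns almost nothing to the client:

    -- The hash join over the ~135k rows still has to run, but only a single
    -- row leaves the server, so result transfer and client rendering drop out.
    SELECT count(*)
    FROM fact_ven_renta fvr, dim_producto_std_producto dpp
    WHERE fvr.producto_std_producto_sk = dpp.producto_sk;

If this finishes in roughly the EXPLAIN ANALYZE time on both machines, the extra minutes on the new server are being spent delivering or displaying the rows, not executing the join.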
[ { "msg_contents": "> That's not what it looks like based on the EXPLAIN ANALYZE output.\n> It looks like run time dropped from two seconds to half a second.\n \n> It seems as though you either have a network delay delivering the results,\n> or your application is slow to read them.\n\n> Exactly how are you arriving at those timings you're reporting to us?\n \nI have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.\n\nSome days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.\nSELECT *\nFROM fact_ven_renta fvr, dim_producto_std_producto dpp\nWHERE\n fvr.producto_std_producto_sk = dpp.producto_sk\n \nfact_ven_renta has 136316 rows\ndim_producto_std_producto has 3669 rows\n\n\n\nI have made all possible combinations pgadmin (running in the same server each query, in the old one, in the new one), without difference and I only retrieve the first 100 records (I didn't count the network time in any case).\nBut the weird thing is running the query in the new server the are many disk access and cpu usage. And with other applications in the same server are a lot of disks access.\n\n\n\n\n\n\n Seguí de cerca a la Selección Argentina de Rugby en el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby\n> That's not what it looks like based on the EXPLAIN ANALYZE output.> It looks like run time dropped from two seconds to half a second. > It seems as though you either have a network delay delivering the results,> or your application is slow to read them.> Exactly how are you arriving at those timings you're reporting to us? I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk\n access.SELECT *FROM fact_ven_renta fvr, dim_producto_std_producto dppWHERE  fvr.producto_std_producto_sk = dpp.producto_sk fact_ven_renta has 136316 rowsdim_producto_std_producto has 3669 rowsI have made all possible combinations pgadmin (running in the same server each query, in the old one, in the new one), without difference  and I only retrieve the first 100 records (I didn't count the network time in any case).But the weird thing is running the query in the new server the are many disk access and cpu usage. And with other applications in the same server are a lot of disks access.\nLos referentes más importantes en compra/venta de autos se juntaron:Demotores y Yahoo!.\nAhora comprar o vender tu auto es más fácil. 
Visitá http://ar.autos.yahoo.com/", "msg_date": "Fri, 21 Sep 2007 11:20:30 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "In response to [email protected]:\n\n> > That's not what it looks like based on the EXPLAIN ANALYZE output.\n> > It looks like run time dropped from two seconds to half a second.\n> \n> > It seems as though you either have a network delay delivering the results,\n> > or your application is slow to read them.\n> \n> > Exactly how are you arriving at those timings you're reporting to us?\n> \n> I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.\n> \n> Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.\n> SELECT *\n> FROM fact_ven_renta fvr, dim_producto_std_producto dpp\n> WHERE\n> fvr.producto_std_producto_sk = dpp.producto_sk\n> \n> fact_ven_renta has 136316 rows\n> dim_producto_std_producto has 3669 rows\n\nRun the tests from psql on the same server. As Kevin pointed out, the\n_server_ is faster, but it appears as if the connection between PGadmin\nand this new server is slower somehow.\n\nAre you sure of your speed/duplex settings on the network side? That's\nthe most common cause of this kind of thing in my experience. Try doing\na raw FTP transfer between the client and server and see if you get the\nspeed you should.\n\n> \n> \n> \n> I have made all possible combinations pgadmin (running in the same server each query, in the old one, in the new one), without difference and I only retrieve the first 100 records (I didn't count the network time in any case).\n> But the weird thing is running the query in the new server the are many disk access and cpu usage. And with other applications in the same server are a lot of disks access.\n\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 21 Sep 2007 14:33:54 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
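A complementary check that stays inside psql — offered as a recipe, not something run in the thread — is to execute the full query with its output thrown away, so the result set is still built and transferred but never rendered:

    \timing
    \o /dev/null          -- discard query output instead of printing it
    SELECT * FROM fact_ven_renta fvr, dim_producto_std_producto dpp
    WHERE fvr.producto_std_producto_sk = dpp.producto_sk;
    \o                    -- back to normal output

Comparing that timing when psql runs locally on the server against the same commands run from a remote client gives a rough split between server execution, network transfer and client-side rendering (pgAdmin's result grid being the usual suspect for the last part).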
[ { "msg_contents": ">> > That's not what it looks like based on the EXPLAIN ANALYZE output.\n>> > It looks like run time dropped from two seconds to half a second.\n>> \n>> > It seems as though you either have a network delay delivering the results,\n>> > or your application is slow to read them.\n>> \n>> > Exactly how are you arriving at those timings you're reporting to us?\n>> \n>> I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.\n>> \n>> Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.\n>> SELECT *\n>> FROM fact_ven_renta fvr, dim_producto_std_producto dpp\n>> WHERE\n>> fvr.producto_std_producto_sk = dpp.producto_sk\n>> \n>> fact_ven_renta has 136316 rows\n>> dim_producto_std_producto has 3669 rows\n\n>Run the tests from psql on the same server. As Kevin pointed out, the _server_ is faster, but it appears as if the connection between PGadmin and this new server is slower somehow.\n\nIt runs quickly!!! But I don't know how to compare because looks like it retrieve fields by demand, when I put ctrl+end (go to the last record) it use a lot of CPU and disk, run quickly anyway.\nCorrect me if am I wrong but, executing PgAdmin in the same server there aren't networks delays!\nAnd when the server is processing the query there isn't network traffic because is processing the result.\n\n> Are you sure of your speed/duplex settings on the network side? That's\n> the most common cause of this kind of thing in my experience. Try doing\n> a raw FTP transfer between the client and server and see if you get the\n> speed you should.\nThis isn't a dedicated database server, client application and server are running in the same machine!!!\nI have stop the client application too with same results.\n\nAnyway I will do some network test to find a solution.\n\n\n\n\n\n\n\n\n Seguí de cerca a la Selección Argentina de Rugby en el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby\n>> > That's not what it looks like based on the EXPLAIN ANALYZE output.>> > It looks like run time dropped from two seconds to half a second.>>  >> > It seems as though you either have a network delay delivering the results,>> > or your application is slow to read them.>> >> > Exactly how are you arriving at those timings you're reporting to us?>>  >> I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.>> >> Some days ago I began to\n do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.>> SELECT *>> FROM fact_ven_renta fvr, dim_producto_std_producto dpp>> WHERE>>   fvr.producto_std_producto_sk = dpp.producto_sk>>  >> fact_ven_renta has 136316 rows>> dim_producto_std_producto has 3669 rows>Run the tests from psql on the same server.  As Kevin pointed out, the _server_ is faster, but it appears as if the connection between PGadmin and this new server is slower somehow.It runs quickly!!! 
But I don't know how to compare because looks like it retrieve fields by demand, when I put ctrl+end (go to the last record) it use a lot of CPU and disk, run quickly anyway.Correct me if am I wrong but, executing PgAdmin in the same server there aren't networks delays!And when the server is processing the\n query there isn't network traffic because is processing the result.> Are you sure of your speed/duplex settings on the network side?  That's> the most common cause of this kind of thing in my experience.  Try doing> a raw FTP transfer between the client and server and see if you get the> speed you should.This isn't a dedicated database server, client application and server are running in the same machine!!!I have stop the client application too with same results.Anyway I will do some network test to find a solution.\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Fri, 21 Sep 2007 12:01:34 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "In response to [email protected]:\n\n> >> > That's not what it looks like based on the EXPLAIN ANALYZE output.\n> >> > It looks like run time dropped from two seconds to half a second.\n> >> \n> >> > It seems as though you either have a network delay delivering the results,\n> >> > or your application is slow to read them.\n> >> \n> >> > Exactly how are you arriving at those timings you're reporting to us?\n> >> \n> >> I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.\n> >> \n> >> Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.\n> >> SELECT *\n> >> FROM fact_ven_renta fvr, dim_producto_std_producto dpp\n> >> WHERE\n> >> fvr.producto_std_producto_sk = dpp.producto_sk\n> >> \n> >> fact_ven_renta has 136316 rows\n> >> dim_producto_std_producto has 3669 rows\n> \n> >Run the tests from psql on the same server. As Kevin pointed out, the _server_ is faster, but it appears as if the connection between PGadmin and this new server is slower somehow.\n> \n> It runs quickly!!! But I don't know how to compare because looks like it retrieve fields by demand, when I put ctrl+end (go to the last record) it use a lot of CPU and disk, run quickly anyway.\n\nThat's pretty odd. If you use \\timing in psql, you can get execution\ntime for each query, if it helps you track things down.\n\n> Correct me if am I wrong but, executing PgAdmin in the same server there aren't networks delays!\n\nNot network, no. But the results of your explains seem to show that the\nquery is executing much faster on the new system than the old, so the\nproblem still becomes, \"what is happening after the query completes that\nis so slow?\" It's just that networking is ruled out.\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n", "msg_date": "Fri, 21 Sep 2007 15:11:38 -0400", "msg_from": "Bill Moran <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
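It can also help to watch what the backend itself is doing while the slow daily process runs; this is only a sketch against the 8.2 statistics views (it assumes command-string collection, stats_command_string, is enabled):

    -- One row per backend: what it is executing right now and since when.
    -- If the job's backend shows <IDLE> while the job's own clock keeps
    -- ticking, the time is being spent in the client, not in PostgreSQL.
    SELECT procpid, usename, current_query, query_start
    FROM pg_stat_activity
    ORDER BY query_start;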
[ { "msg_contents": "> > >> > That's not what it looks like based on the EXPLAIN ANALYZE output.\n> > >> > It looks like run time dropped from two seconds to half a second.\n> > >> \n> > >> > It seems as though you either have a network delay delivering the results,\n> > >> > or your application is slow to read them.\n> > >> \n> > >> > Exactly how are you arriving at those timings you're reporting to us?\n> > >> \n> > >> I have noticed this in a daly process I run which involves normally 45 minutes and with the new server takes 1:40.\n> > >> \n> > >> Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.\n> > >> SELECT *\n> > >> FROM fact_ven_renta fvr, dim_producto_std_producto dpp\n> > >> WHERE\n> > >> fvr.producto_std_producto_sk = dpp.producto_sk\n> > >> \n> > >> fact_ven_renta has 136316 rows\n> > >> dim_producto_std_producto has 3669 rows\n >> \n> > >Run the tests from psql on the same server. As Kevin pointed out, the _server_ is faster, but it appears as if the connection between PGadmin and this new server is slower somehow.\n> > \n> > It runs quickly!!! But I don't know how to compare because looks like it retrieve fields by demand, when I put ctrl+end (go to the last record) it use a lot of CPU and disk, run quickly anyway.\n\n> That's pretty odd. If you use \\timing in psql, you can get execution\n> time for each query, if it helps you track things down.\n\nYes, in the new server running with \\timing it consumes 5.6 seconds and in the old server it consumes 25 seconds.\n\n> > Correct me if am I wrong but, executing PgAdmin in the same server there aren't networks delays!\n\n> Not network, no. But the results of your explains seem to show that the\n> query is executing much faster on the new system than the old, so the\n> problem still becomes, \"what is happening after the query completes that\n> is so slow?\" It's just that networking is ruled out.\n\nIs connected to full 100Mb, it transfers many things quick to clients. Is running Apache adn JBoss, transfer rate is good, I did scp to copy many archives and is as quick as the old server.\n\nI have no idea how to continue researching this problem. Now I'm going to do some networks tests.\n\n\n-- \nBill Moran\nCollaborative Fusion Inc.\nhttp://people.collaborativefusion.com/~wmoran/\n\[email protected]\nPhone: 412-422-3463x4023\n\n\n\n\n\n\n\n Las últimas noticias sobre el Mundial de Rugby 2007 están en Yahoo! Deportes. ¡Conocelas!\nhttp://ar.sports.yahoo.com/mundialderugby\n> > >> > That's not what it looks like based on the EXPLAIN ANALYZE output.> > >> > It looks like run time dropped from two seconds to half a second.> > >>  > > >> > It seems as though you either have a network delay delivering the results,> > >> > or your application is slow to read them.> > >> > > >> > Exactly how are you arriving at those timings you're reporting to us?> > >>  > > >> I have noticed this in a daly process I run which involves normally 45\n minutes and with the new server takes 1:40.> > >> > > >> Some days ago I began to do some tests with no success, then I opened PgAdmin with this simply query to read 2 big tables and then compare disk access.> > >> SELECT *> > >> FROM fact_ven_renta fvr, dim_producto_std_producto dpp> > >> WHERE> > >>   fvr.producto_std_producto_sk = dpp.producto_sk> > >>  > > >> fact_ven_renta has 136316 rows> > >> dim_producto_std_producto has 3669 rows >> > > >Run the tests from psql on the same server.  
As Kevin pointed out, the _server_ is faster, but it appears as if the connection between PGadmin and this new server is slower somehow.> > > > It runs quickly!!! But I don't know how to compare because looks like it retrieve fields by demand, when\n I put ctrl+end (go to the last record) it use a lot of CPU and disk, run quickly anyway.> That's pretty odd.  If you use \\timing in psql, you can get execution> time for each query, if it helps you track things down.Yes, in the new server running with \\timing it consumes 5.6 seconds and in the old server it consumes 25 seconds.> > Correct me if am I wrong but, executing PgAdmin in the same server there aren't networks delays!> Not network, no.  But the results of your explains seem to show that the> query is executing much faster on the new system than the old, so the> problem still becomes, \"what is happening after the query completes that> is so slow?\"  It's just that networking is ruled out.Is connected to full 100Mb, it transfers many things quick to clients. Is running Apache adn JBoss, transfer rate is good, I did scp to copy many archives\n and is as quick as the old server.I have no idea how to continue researching this problem. Now I'm going to do some networks tests.-- Bill MoranCollaborative Fusion Inc.http://people.collaborativefusion.com/~wmoran/[email protected]: 412-422-3463x4023\nSeguí de cerca a la Selección Argentina de Rugbyen el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Fri, 21 Sep 2007 12:33:46 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "\n\[email protected] wrote:\n>\n> > That's pretty odd. If you use \\timing in psql, you can get execution\n> > time for each query, if it helps you track things down.\n>\n> Yes, in the new server running with \\timing it consumes 5.6 seconds \n> and in the old server it consumes 25 seconds.\n>\n> > > Correct me if am I wrong but, executing PgAdmin in the same server \n> there aren't networks delays!\n>\n> > Not network, no. But the results of your explains seem to show that the\n> > query is executing much faster on the new system than the old, so the\n> > problem still becomes, \"what is happening after the query completes that\n> > is so slow?\" It's just that networking is ruled out.\n>\n> Is connected to full 100Mb, it transfers many things quick to clients. \n> Is running Apache adn JBoss, transfer rate is good, I did scp to copy \n> many archives and is as quick as the old server.\n>\n> I have no idea how to continue researching this problem. Now I'm going \n> to do some networks tests.\n>\n>\nSee if this can give some help to you:\nI was experienced some problems with networks with win98 and winXP \nstations, the application was running with good performance almost of \nthe time,\nbut in suddenly the performance slow down. We noticed that the problem \nwas with the time to connect with the server, that was very slow.\nThe problem occurs also when the internet link down.\nWell, I don't know why but when we exclude win98 stations from network, \nthe problem disappears.\nI think that was some DNS problem (but not sure), because one time we \ncleared nameserver clauses in the /etc/resolv.conf the performance \nreturn to the normal.\nBut we reinstalled win98 machines with winXP too, so I don't know what \nhappened exactly.\nThe server OS was a Mandriva Linux running postgres ( 8.0, I guess) and \nsamba. 
Workstations connect via odbc (informing the IP of server or the \nname to connect the problem persists).\n\n\n-- \nLuiz K. Matsumura\nPlan IT Tecnologia Inform�tica Ltda.\n\n", "msg_date": "Fri, 21 Sep 2007 17:40:52 -0300", "msg_from": "\"Luiz K. Matsumura\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "\n>From: [email protected]\n>Subject: Re: [PERFORM] Low CPU Usage\n>\n>I have no idea how to continue researching this problem. Now I'm going to\ndo some networks tests.\n\n\nI would go back to the slow program and try to capture the slow queries in\nthe log file. Once you have some queries which are running slow then you\ncan run EXPLAIN ANALYZE to see what the bottle neck is.\n\nIt seems like you've found pgAdmin is slow sending across the network, but\nwe don't know if that has anything to do with your original problems.\n\nJust my 2 cents.\n\nDave\n\n", "msg_date": "Fri, 21 Sep 2007 15:55:36 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": ">>> On Fri, Sep 21, 2007 at 3:40 PM, in message <[email protected]>,\n\"Luiz K. Matsumura\" <[email protected]> wrote: \n \n> but in suddenly the performance slow down. We noticed that the problem \n> was with the time to connect with the server, that was very slow.\n \n> I think that was some DNS problem (but not sure), because one time we \n> cleared nameserver clauses in the /etc/resolv.conf the performance \n> return to the normal.\n \nYou may have it there. In some versions of Java, on Windows, connection\ntimes are horribly slow unless the machine's IP address has a reverse\nDNS entry. Perhaps the new machine lacks such an entry, or there's a\ndifferent version of Java in use?\n \n-Kevin\n \n\n\n", "msg_date": "Fri, 21 Sep 2007 15:57:34 -0500", "msg_from": "\"Kevin Grittner\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" }, { "msg_contents": "Luiz K. Matsumura wrote:\n>> Is connected to full 100Mb, it transfers many things quick to clients. \n>> Is running Apache adn JBoss, transfer rate is good, I did scp to copy \n>> many archives and is as quick as the old server.\n>>\n>> I have no idea how to continue researching this problem. Now I'm going \n>> to do some networks tests.\n\nAny chance this is your desktop machine, and you're also using it for audio? Microsoft built in a feature (!) that reduces network speed by 90% when music is playing:\n\n http://it.slashdot.org/article.pl?sid=07/08/26/1628200&from=rss\n\nCraig\n", "msg_date": "Fri, 21 Sep 2007 14:15:40 -0700", "msg_from": "Craig James <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Low CPU Usage" } ]
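Dave's suggestion of capturing the slow statements can be done with a single setting; the threshold below is only an illustrative value, not one agreed in the thread:

    -- In postgresql.conf (then pg_ctl reload):
    --   log_min_duration_statement = 1000    -- milliseconds; -1 disables
    -- or, for just the current superuser session:
    SET log_min_duration_statement = 1000;
    SHOW log_min_duration_statement;

Every statement of the nightly process that exceeds the threshold is then written to the server log together with its duration, which makes it possible to EXPLAIN ANALYZE the real offenders instead of guessing.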
[ { "msg_contents": "It's a Debian 4.0r1 server without sound (alsa is disabled), I'm running the querys locally.\nIt's happening running the query locally with PgAdmin and by jdbc locally too.\nYes I have win98, XP machines on my network, I will unplugged from the net and test again. On monday I'll give you my answer.\nLast thing I did was disabling ipv6 and with the same results.\n\nThank you very much for your help.\n\n\n----- Mensaje original ----\nDe: Craig James <[email protected]>\nPara: Luiz K. Matsumura <[email protected]>\nCC: [email protected]; [email protected]\nEnviado: viernes 21 de septiembre de 2007, 18:15:40\nAsunto: Re: [PERFORM] Low CPU Usage\n\nLuiz K. Matsumura wrote:\n>> Is connected to full 100Mb, it transfers many things quick to clients. \n>> Is running Apache adn JBoss, transfer rate is good, I did scp to copy \n>> many archives and is as quick as the old server.\n>>\n>> I have no idea how to continue researching this problem. Now I'm going \n>> to do some networks tests.\n\nAny chance this is your desktop machine, and you're also using it for audio? Microsoft built in a feature (!) that reduces network speed by 90% when music is playing:\n\n http://it.slashdot.org/article.pl?sid=07/08/26/1628200&from=rss\n\nCraig\n\n\n\n\n\n\n\n Las últimas noticias sobre el Mundial de Rugby 2007 están en Yahoo! Deportes. ¡Conocelas!\nhttp://ar.sports.yahoo.com/mundialderugby\nIt's a Debian 4.0r1 server without sound (alsa is disabled), I'm running the querys locally.It's happening running the query locally with PgAdmin and by jdbc locally too.Yes I have win98, XP machines on my network, I will unplugged from the net and test again. On monday I'll give you my answer.Last thing I did was disabling ipv6 and with the same results.Thank you very much for your help.----- Mensaje original ----De: Craig James <[email protected]>Para: Luiz K. Matsumura <[email protected]>CC: [email protected]; [email protected]:\n viernes 21 de septiembre de 2007, 18:15:40Asunto: Re: [PERFORM] Low CPU UsageLuiz K. Matsumura wrote:>> Is connected to full 100Mb, it transfers many things quick to clients. >> Is running Apache adn JBoss, transfer rate is good, I did scp to copy >> many archives and is as quick as the old server.>>>> I have no idea how to continue researching this problem. Now I'm going >> to do some networks tests.Any chance this is your desktop machine, and you're also using it for audio?  Microsoft built in a feature (!) that reduces network speed by 90% when music is playing:  http://it.slashdot.org/article.pl?sid=07/08/26/1628200&from=rssCraig\nSeguí de cerca a la Selección Argentina de Rugbyen el Mundial de Francia 2007.\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Fri, 21 Sep 2007 14:45:27 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" } ]
[ { "msg_contents": "Simon Riggs <[email protected]> wrote:\n\n> On Fri, 2007-09-21 at 21:20 +0200, D�niel D�nes wrote:\n> \n> > The costs may be different because I've tuned the query planner's \n> > parameters.\n> \n> OK, understood.\n> \n> > > Ordering by parent, child is fairly common but the variation you've\n> > > got here isn't that common.\n> > How else can you order by parent, child other than first ordering by\n> > a unique key of parent, then something in child? (Except for \n> > child.parent_id, child.something because this has all the\n> > information in child and can rely on a single multicolumn index.)\n> \n> Why \"except\"? Whats wrong with ordering that way? \n> \n> Make the case. **I** want it is not sufficient...\n> \n> -- \n> Simon Riggs\n> 2ndQuadrant http://www.2ndQuadrant.com\n\n\n\n\nIn reply to Simon Riggs <[email protected]>:\n\n> > How else can you order by parent, child other than first ordering by\n> > a unique key of parent, then something in child? (Except for \n> > child.parent_id, child.something because this has all the\n> > information in child and can rely on a single multicolumn index.)\n> \n> Why \"except\"? Whats wrong with ordering that way?\n\nWell, nothing, but what if I have to order by some other unique key? Of \ncourse I could do that by redundantly storing the parent's data in child \nand then creating a multicolumn index, but...\n\nJust to see clear: when I found this, I was trying to make a slightly \ndifferent query. It was like:\n\nSELECT *\nFROM tparent JOIN tchild ON tchild.par_id = tparent.id\nWHERE tparent.uniqcol1 = 123\nORDER BY tparent.uniqcol2, tchild.ord;\n\nwhere there was a unique index on (tparent.uniqcol1, tparent.uniqcol2) \nand the columns are marked NOT NULL.\nI expected a plan like doing an index scan on parent.uniqcol2 where \nuniqcol1 = 123, and (using a nestloop and child's pkey) joining in the \nchildren in the correct order (without a sort). But I got something else, \nso I tried everything to get what I wanted -- just to see the costs why \nthe planner chose something else. After some time I found out that \nthere is no such plan, so no matter what I do it will sort...\nSo that's how I got here. But since the original problem isn't that clean \n& simple, I thought I'd make a test case, that's easy to follow, and \nillustrates the problem: that the planner doesn't even consider my \nplan. If it did, I think that'd be the one that gets executed. But tell me if \nI'm wrong somewhere.\n\n\n\n> Make the case. **I** want it is not sufficient...\n\nSorry, I can't understand that... I'm far from perfect in english. Please \nclarify so I can do what you ask me to.\n\n\nDenes Daniel\n-----------------------------------------------------\n\n\n\n\nOlvasd az [origo]-t a mobilodon: mini magazinok a Mobizin-en\n___________________________________________________\nwww.t-mobile.hu/mobizin\n\n", "msg_date": "Sat, 22 Sep 2007 02:08:43 +0200 (CEST)", "msg_from": "Denes Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner unaware of possibly best plan" }, { "msg_contents": "Denes Daniel <[email protected]> writes:\n> Simon Riggs <[email protected]> wrote:\n>> Make the case. **I** want it is not sufficient...\n\n> Sorry, I can't understand that... I'm far from perfect in english.\n\nThe point here is that you've repeated the same example N times without\nactually making a case that it's interesting to support. 
We have to\nthink about the intellectual complexity that would be added to the\nplanner to support this case, and the cycles that would be expended\non every query (and wasted, for most queries) on trying to detect\nwhether the case applies. If it were simple and cheap to do, these\narguments wouldn't hold much weight, but it doesn't look to me like\neither is the case.\n\nAnother problem is that it's not clear there's much to be gained.\nAvoiding the sort step is only interesting if the query produces so many\nrows that a sort would be expensive ... but if that's the case, it seems\nunlikely that a nestloop indexscan plan would be the best choice anyway.\n\nSo basically this looks like a lot of work for a narrow and questionable\ngain. If you want it to happen you need to convince people that it's\neasier and more useful than it looks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 22 Sep 2007 02:07:28 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query planner unaware of possibly best plan " }, { "msg_contents": "Tom Lane <[email protected]> wrote:\n\n> The point here is that you've repeated the same example N times\n> without actually making a case that it's interesting to support. We\n> have to think about the intellectual complexity that would be added\n> to the planner to support this case, and the cycles that would be\n> expended on every query (and wasted, for most queries) on trying to\n> detect whether the case applies. If it were simple and cheap to do,\n> these arguments wouldn't hold much weight, but it doesn't look to\n> me like either is the case.\n> \n> Another problem is that it's not clear there's much to be gained.\n> Avoiding the sort step is only interesting if the query produces so\n> many rows that a sort would be expensive ... but if that's the case, it\n> seems unlikely that a nestloop indexscan plan would be the best\n> choice anyway.\n> \n> So basically this looks like a lot of work for a narrow and questionable\n> gain. If you want it to happen you need to convince people that it's\n> easier and more useful than it looks.\n> \n> \t\t\tregards, tom lane\n\n\n\n\nWell, I probably won't tell you that it's easy to do, because you know \nthe planner far far better than I do -- I'm just a user who knows almost \nnothing about what's happening inside it. I just thought that if the \nplanner is examining a plan, where the outer scan is using a unique \nindex with all it's columns AND doing a nestloop join with the inner scan \nreturning the rows in some order, then the planner could say \"this is \nalready ordered by (outer.unique, inner.some_order)\" and wouldn't \nmake a sort at the end. Of course this is far from perfect, because what \nabout three/four/more table joins... maybe that can be derived from \nthis, I don't really know now.\nAnother situation that could be optimized this way is if I write \"ORDER \nBY outer.idxcol1, outer.idxcol2, outer.id, inner.something\" where\n + there is a non-unique index on (outer.idxcol1, outer.idxcol2),\n + outer.id is a PRI KEY,\n + there is an index on (inner.outer_id, inner.something) too\nbut that's getting really complicated. 
Of course if the simpler case \nwould be working, then I could create a unique index on (outer.idxcol1, \nouter.idxcol2, outer.id) and this would run optimized too.\n\nI think what's common is:\n + outer scan produces uniquely sorted rows\n + nested loop\n + inner scan produces sorted rows\n + ORDER BY says outer.unique_sort, inner.sort\nIf this is the situation, the sort could be done separately. Even like:\n\n-> Nested Loop\n -> Sort by outer.unique\n -> Scan producing the required rows from outer\n -> Sort by inner.something\n -> Scan producing the required rows from inner\n Filter or Index Cond for the join\n\nAnd this is good for any query that needs data from two tables, but \nwants to get the joined rows so that all rows for a single outer-row \nare \"packed together\" (I mean they do not mix).\n\n\nNow about if it's worth it. I tried a query on real data, with lots of rows.\nTables involved:\n- threads: ~200K rows\n- msgs: ~8M rows\nI wanted to see the last msgs from the last threads, but did not want \nto mix them. It's like PgSQL's mailing archive viewed by threads. I \nwanted them paged, not all 8M msgs at once (LIMIT/OFFSET). Query is:\n\nSELECT *\nFROM threads AS thr JOIN msgs AS msg ON msg.thrid = thr.id\nWHERE thr.forid = 1\nORDER BY thr.id DESC, msg.msgid DESC\nLIMIT 100\n\nthr.forid is a FKEY to forums. Say forum 1 is the pgsql-performance list.\nTable msgs has only a PRI KEY: (thrid, msgid).\nTable threads has a PRI KEY: (id) and a unique index: (forid, id).\nFirst time I ran the query with seqscan, bitmapscan, hashjoin, \nmergejoin disabled (just to get the nested loop plan with the needless \nsort). Second time I ran it without disabling anything, but I modified \nORDER BY to only \"thr.id DESC\" (deleted \"msg.msgid DESC\"), to give me \nthe same plan as before, but without the sort.\nPSQL input and output attached.\n\nI think the 5000x speed would be worth it.\n\n\nRegards,\nDenes Daniel\n---------------------------------\n\n\n\n\nJ�tssz a meg�jult Kv�zparton! V�lassz t�bb mint 400 kv�z k�z�l minden t�m�ban!\n________________________________________________________\nhttp://cthandler.adverticum.net/?cturl=http%3A%2F%2Fkvizpart.hu%2F?fmalja", "msg_date": "Sat, 22 Sep 2007 14:33:21 +0200 (CEST)", "msg_from": "Denes Daniel <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query planner unaware of possibly best plan" } ]
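For the threads/msgs pagination example above there is also a query-level workaround worth noting — a sketch, not something tested in the thread. Because msg.thrid is constrained to equal thr.id, sorting on the message table's own copy of the key describes exactly the same order and matches the (thrid, msgid) primary key:

    -- Equivalent ordering expressed on msgs' own columns. A backward scan of
    -- the (thrid, msgid) primary key feeding a nested loop against threads
    -- (filtering forid = 1) produces this order with no explicit Sort step;
    -- whether the 8.1 planner picks that plan, and whether it wins, depends
    -- on how selective forid = 1 is.
    SELECT *
    FROM threads AS thr JOIN msgs AS msg ON msg.thrid = thr.id
    WHERE thr.forid = 1
    ORDER BY msg.thrid DESC, msg.msgid DESC
    LIMIT 100;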
[ { "msg_contents": "\nI recently had a puzzling experience (performace related).\n\nHad a DB running presumably smoothly, on a server with Dual-Core\nOpteron and 4GB of RAM (and SATA2 drives with Hardware RAID-1).\n(PostgreSQL 8.2.4 installed from source, on a FC4 system --- databases\nwith no encoding --- initdb -E SQL_ASCII --no-locale, and all the\ndatabases created with encoding SQL_ASCII)\n\nWe thought that performance had gone little by little down, but the\nevidence now suggests that something must have triggered a big step\ndown in the performance of the server.\n\nThinking that it was simply a bottleneck with the hardware, we moved\nto a different machine (lower performance CPU-wise, but with dual hard\ndisk, so I configured the pg_xlog directory on a partition on a separate\nhard disk, estimating that this would take precedence over the lower CPU\npower and the 2GB of RAM instead of 4).\n\nNot only the performance was faster --- a query like:\n\nselect count(*) from customer\n\nwas *instantaneous* on the new machine (just after populating it,\nwithout having even analyzed it!), and would take over a minute on\nthe old machine (the first time). Then, the second time, it would\ntake a little over two seconds on the old machine (at this point, both\nmachines had *zero* activity --- they were both essentially disconnected\nfrom the outside world; serving exclusively my psql connection).\n\nFunny thing, I dropped the database (on the old machine) and re-created\nit with the backup I had just created, and now the performance on the\nold one was again normal (the above query now gives me a result in\nessentially no time --- same as on the new machine).\n\nIn retrospect, I'm now wondering if a vacuum full would have solved\nthe issue? (we do run vacuumdb -z --- vacuum analyze --- daily)\n\nAny comments?? I'm worried that three months down the road we'll\nface the same issue with this new server (that's about the time it took\nsince we had started running the other server until we noticed the\npoor performance level) --- and we can not afford to completely stop\nthe system to drop-and-recreate the db on a regular basis.\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Sun, 23 Sep 2007 12:45:15 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Possible explanations for catastrophic performace deterioration?" }, { "msg_contents": "You didn't specify the database size, but my guess is that the total\ndata size about enough to fit in shared_buffers or kernel cache. On\nthe new system (or dropped/recreated database), it would've all or\nmostly fit in memory which would make things like count(*) work\nquickly. On the old database, you probably had a lot of fragmentation\nwhich would've caused significantly more I/O to be performed thereby\ncausing a slowdown. You could compare relation sizes to check easily.\n\nMy guess is that a vacuum full would've brought the other database\nback up to speed. In the future, you probably want to set fillfactor\nto a reasonable amount to account for updates-to-blocks-between-vacuum\nto try and capture as few row-migrations as possible.\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sun, 23 Sep 2007 12:56:35 -0400", "msg_from": "\"Jonah H. 
Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performace deterioration?" }, { "msg_contents": "\"Jonah H. Harris\" <[email protected]> writes:\n> My guess is that a vacuum full would've brought the other database\n> back up to speed.\n\nYeah, table bloat is what it sounds like to me too.\n\n> In the future, you probably want to set fillfactor\n> to a reasonable amount to account for updates-to-blocks-between-vacuum\n> to try and capture as few row-migrations as possible.\n\nMore to the point, check your FSM size and make sure vacuums are\nhappening often enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 23 Sep 2007 13:27:35 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performace deterioration? " }, { "msg_contents": "Jonah H. Harris wrote:\n> You didn't specify the database size\n\nOops, sorry about that one --- the full backup is a 950MB file. The \nentire database\nshould fit in memory (and the effective_cache_size was set to 2GB for \nthe machine\nwith 4GB of memory)\n\n> , but my guess is that the total\n> data size about enough to fit in shared_buffers or kernel cache. On\n> the new system (or dropped/recreated database), it would've all or\n> mostly fit in memory which would make things like count(*) work\n> quickly. \n\nI don't understand this argument --- the newer system has actually less \nmemory\nthan the old one; how could it fit there and not on the old one? Plus, \nhow could\ndropping-recreating the database on the same machine change the fact \nthat the\nentire dataset entirely fit or not in memory??\n\nThe other part that puzzled me is that after running \"select count(*) \n... \" several\ntimes (that particular table is *very* small --- just 200 thousand \nrecords of no\nmore than 100 or 200 bytes each), then the entire table *should* have been\nin memory ... Yet, it would still take a few seconds (notice that \nthere was a\n*considerable* improvement from the first run of that query to the \nsecond one\non the old server --- from more than a minute, to just above two \nseconds.... But\nstill, on the new server, and after recreating the DB on the old one, it \nruns in\n*no time* the first time).\n\n> My guess is that a vacuum full would've brought the other database\n> back up to speed. \n\nI'm furious now that it didn't occur to me the vacuum full until *after* \nI had\nrecreated the database to see th problem disappear...\n\nI wonder if I should then periodically run a vacuum full --- say, once a \nweek?\nOnce a month?\n\n> In the future, you probably want to set fillfactor\n> to a reasonable amount to account for updates-to-blocks-between-vacuum\n> to try and capture as few row-migrations as possible.\n> \n\nCould you elaborate a bit on this? Or point me to the right places in the\ndocumentation to help me understand the above?? (I'm 100% blank after\nreading the above paragraph)\n\nThanks,\n\nCarlos\n--\n\n", "msg_date": "Sun, 23 Sep 2007 14:15:09 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible explanations for catastrophic performace deterioration?" }, { "msg_contents": "Carlos Moreno wrote:\n\n>> , but my guess is that the total\n>> data size about enough to fit in shared_buffers or kernel cache. On\n>> the new system (or dropped/recreated database), it would've all or\n>> mostly fit in memory which would make things like count(*) work\n>> quickly. 
\n>\n> I don't understand this argument --- the newer system has actually\n> less memory than the old one; how could it fit there and not on the\n> old one? Plus, how could dropping-recreating the database on the same\n> machine change the fact that the entire dataset entirely fit or not in\n> memory??\n\nBecause on the older server it is bloated, while on the new one it is\nfresh thus no dead tuples.\n\n\n> The other part that puzzled me is that after running \"select count(*)\n> ... \" several times (that particular table is *very* small --- just\n> 200 thousand records of no more than 100 or 200 bytes each), then the\n> entire table *should* have been in memory ... Yet, it would still\n> take a few seconds (notice that there was a *considerable*\n> improvement from the first run of that query to the second one on the\n> old server --- from more than a minute, to just above two seconds....\n> But still, on the new server, and after recreating the DB on the old\n> one, it runs in *no time* the first time).\n\nBloat can explain this as well.\n\n>> My guess is that a vacuum full would've brought the other database\n>> back up to speed. \n>\n> I'm furious now that it didn't occur to me the vacuum full until\n> *after* I had recreated the database to see th problem disappear...\n>\n> I wonder if I should then periodically run a vacuum full --- say, once\n> a week? Once a month?\n\nNever. What you need to do is make sure your FSM settings\n(fsm_max_pages in particular) are high enough, and that you VACUUM (not\nfull) frequently enough.\n\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nThe PostgreSQL Company - Command Prompt, Inc.\n", "msg_date": "Sun, 23 Sep 2007 14:23:49 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performace deterioration?" }, { "msg_contents": "\n>> I don't understand this argument --- the newer system has actually\n>> less memory than the old one; how could it fit there and not on the\n>> old one? Plus, how could dropping-recreating the database on the same\n>> machine change the fact that the entire dataset entirely fit or not in\n>> memory??\n>\n> Because on the older server it is bloated, while on the new one it is\n> fresh thus no dead tuples.\n\nWait a second --- am I correct in understanding then that the bloating\nyou guys are referring to occurs *in memory*??\n\nMy mind has been operating under the assumption that bloating only\noccurs on disk, and never in memory --- is there where my logic is\nmistaken?\n\n>> I wonder if I should then periodically run a vacuum full --- say, once\n>> a week? Once a month?\n>\n> Never. What you need to do is make sure your FSM settings\n> (fsm_max_pages in particular) are high enough, and that you VACUUM (not\n> full) frequently enough.\n\nNoted.\n\nThanks!\n\nCarlos\n--\n\n", "msg_date": "Sun, 23 Sep 2007 15:12:02 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible explanations for catastrophic performace\tdeterioration?" }, { "msg_contents": "On 9/23/07, Carlos Moreno <[email protected]> wrote:\n> Wait a second --- am I correct in understanding then that the bloating\n> you guys are referring to occurs *in memory*??\n\nNo, bloating occurs on-disk; but this does affect memory. Bloat means\nthat even though your table data may take up 1G after the initial\nload, due to poor vacuuming, table layouts, etc. it to equal something\nmore... 
say 2G.\n\nThe thing is, even though the table only stores 1G of data, it is now\nphysically 2G. So, anything that would need to read the entire table\n(like COUNT(*)), or large sections of it sequentially, are performing\ntwice as many I/Os to do so. Which means you're actually waiting on\ntwo things, I/O and additional CPU time reading blocks that have very\nlittle viable data in them.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sun, 23 Sep 2007 15:24:10 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performace deterioration?" }, { "msg_contents": "Jonah H. Harris wrote:\n> On 9/23/07, Carlos Moreno <[email protected]> wrote:\n>> Wait a second --- am I correct in understanding then that the bloating\n>> you guys are referring to occurs *in memory*??\n>\n> No, bloating occurs on-disk; but this does affect memory. Bloat means\n> that even though your table data may take up 1G after the initial\n> load, due to poor vacuuming, table layouts, etc. it to equal something\n> more... say 2G.\n>\n> The thing is, even though the table only stores 1G of data, it is now\n> physically 2G. So, anything that would need to read the entire table\n> (like COUNT(*)), or large sections of it sequentially, are performing\n> twice as many I/Os to do so. \n\nOK --- that was my initial impression... But again, then I'm still puzzled\nabout why *the second time* that I load the query, it still take a few \nseconds.\n\nThat is: the first time I run the query, it has to go through the disk;\nin the normal case it would have to read 100MB of data, but due to \nbloating,\nit actually has to go through 2GB of data. Ok, but then, it will load\nonly 100MB (the ones that are not \"uncollected disk garbage\") to memory.\nThe next time that I run the query, the server would only need to read\n100MB from memory --- the result should be instantaneous...\n\nThe behaviour I observed was: first time I run the query took over one\nminute; second time, a little above two seconds. Tried four or five times\nmore; in every instance it was around 2 seconds. On the new server, *the\nfirst time* I run the query, it takes *no time* (I repeat: *no time* \n--- as\nin perhaps 10 to 100 msec; in any case, my eyes could not resolve between\nthe moment I hit enter and the moment I see the result with the count of\nrows --- that's between one and two orders of magnitude faster than with \nthe\nold server --- and again, we're comparing *the first* time I execute the \nquery\non the new machine, in which case it is expected that it would have to read\nfrom disk, compared to the second and subsequent times that I execute it on\nthe old machine, in which case, since the bloating does not occur in \nmemory,\nthe entire seq. scan should occur exclusively in memory ... )\n\nThat's what still puzzles me --- Alvaro's reply seemed to explain it if I\naccept that the bloating affects memory (dead tuples loaded to memory \nreduce\nthe capacity to load the entire dataset into memory)...\n\nSomeone could shed some light and point out if there's still something I'm\nmissing or some other mistake in my analysis?? 
Hope I'm not sounding like\nI'm being dense!!\n\nThanks,\n\nCarlos\n--\n\n\n", "msg_date": "Sun, 23 Sep 2007 16:33:30 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible explanations for catastrophic performance deterioration?" }, { "msg_contents": "Carlos Moreno wrote:\n\n> That is: the first time I run the query, it has to go through the\n> disk; in the normal case it would have to read 100MB of data, but due\n> to bloating, it actually has to go through 2GB of data. Ok, but\n> then, it will load only 100MB (the ones that are not \"uncollected\n> disk garbage\") to memory. The next time that I run the query, the\n> server would only need to read 100MB from memory --- the result should\n> be instantaneous...\n\nWrong. If there is 2GB of data, 1900MB of which is dead tuples, those\npages would still have to be scanned for the count(*). The system does\nnot distinguish \"pages which have no live tuples\" from other pages, so\nit has to load them all.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\n\"[PostgreSQL] is a great group; in my opinion it is THE best open source\ndevelopment communities in existence anywhere.\" (Lamar Owen)\n", "msg_date": "Sun, 23 Sep 2007 16:57:59 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performance deterioration?" }, { "msg_contents": "Alvaro Herrera wrote:\n> Carlos Moreno wrote:\n>\n> \n>> That is: the first time I run the query, it has to go through the\n>> disk; in the normal case it would have to read 100MB of data, but due\n>> to bloating, it actually has to go through 2GB of data. Ok, but\n>> then, it will load only 100MB (the ones that are not \"uncollected\n>> disk garbage\") to memory. The next time that I run the query, the\n>> server would only need to read 100MB from memory --- the result should\n>> be instantaneous...\n>\n> Wrong. If there is 2GB of data, 1900MB of which is dead tuples, those\n> pages would still have to be scanned for the count(*). The system does\n> not distinguish \"pages which have no live tuples\" from other pages, so\n> it has to load them all.\n\nYes, that part I understand --- I think I now know what the error is in\nmy logic. I was thinking as follows: We read 2GB of which 1900MB are\ndead tuples. But then, once they're read, the system will only keep\nin memory the 100MB that are valid tuples.\n\nI'm now thinking that the problem with my logic is that the system does\nnot keep anything in memory (or not all tuples, in any case), since it\nis only counting, so it does not *have to* keep them, and since the\ntotal amount of reading from the disk exceeds the amount of physical\nmemory, then the valid tuples are \"pushed out\" of memory.\n\nSo, the second time I execute the query, it will still need to scan the\ndisk (in my mind, the way I was seeing it, the second time I execute\nthe \"select count(*) from customer\", the entire customer table would be\nin memory from the previous time, and that's why I was thinking that\nthe bloating would not explain why the second time it is still slow).\n\nAm I understanding it right?\n\nThanks for your patience!\n\nCarlos\n--\n\n\n", "msg_date": "Sun, 23 Sep 2007 17:55:49 -0400", "msg_from": "Carlos Moreno <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Possible explanations for catastrophic performance deterioration?" 
}, { "msg_contents": "On 9/23/07, Carlos Moreno <[email protected]> wrote:\n> Yes, that part I understand --- I think I now know what the error is in\n> my logic. I was thinking as follows: We read 2GB of which 1900MB are\n> dead tuples. But then, once they're read, the system will only keep\n> in memory the 100MB that are valid tuples.\n\nYes, this is wrong.\n\n> I'm now thinking that the problem with my logic is that the system does\n> not keep anything in memory (or not all tuples, in any case), since it\n> is only counting, so it does not *have to* keep them, and since the\n> total amount of reading from the disk exceeds the amount of physical\n> memory, then the valid tuples are \"pushed out\" of memory.\n\nYes, it does keep some in memory, but not all of it.\n\n> So, the second time I execute the query, it will still need to scan the\n> disk (in my mind, the way I was seeing it, the second time I execute\n> the \"select count(*) from customer\", the entire customer table would be\n> in memory from the previous time, and that's why I was thinking that\n> the bloating would not explain why the second time it is still slow).\n\nYes, it is still performing additional I/Os and additional CPU work to\nread bloated data.\n\n> Am I understanding it right?\n\nNow, I think so.\n\n-- \nJonah H. Harris, Sr. Software Architect | phone: 732.331.1324\nEnterpriseDB Corporation | fax: 732.331.1301\n499 Thornall Street, 2nd Floor | [email protected]\nEdison, NJ 08837 | http://www.enterprisedb.com/\n", "msg_date": "Sun, 23 Sep 2007 18:29:02 -0400", "msg_from": "\"Jonah H. Harris\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performance deterioration?" }, { "msg_contents": "\"Carlos Moreno\" <[email protected]> writes:\n\n> I'm now thinking that the problem with my logic is that the system does\n> not keep anything in memory (or not all tuples, in any case), since it\n> is only counting, so it does not *have to* keep them\n\nThat's really not how it works. When Postgres talks to the OS they're just\nbits. There's no cache of rows or values or anything higher level than bits.\nNeither the OS's filesystem cache nor the Postgres shared memory knows the\ndifference between live or dead rows or even pages that don't contain any\nrows.\n\n> and since the total amount of reading from the disk exceeds the amount of\n> physical memory, then the valid tuples are \"pushed out\" of memory.\n\nThat's right.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 24 Sep 2007 00:05:31 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Possible explanations for catastrophic performance deterioration?" } ]
[ { "msg_contents": "Hi,\n\n \n\nWhen I use the statistics collector to see the number of IO, I always get\nzero in almost all of columns. I really want to know the reason for that.\n\n \n\nThe result of statistics view:\n\n# select * from pg_statio_user_tables;\n\n relid | schemaname | relname | heap_blks_read | heap_blks_hit |\nidx_blks_read | idx_blks_hit | toast_blks_read | toast_blks_hit |\ntidx_blks_read | tidx_blks_hit \n\n-------+------------+---------+----------------+---------------+------------\n---+--------------+-----------------+----------------+----------------+-----\n----------\n\n 16386 | public | tab | 0 | 0 |\n0 | 0 | | | |\n\n\n(1 row)\n\n# select * from pg_statio_user_indexes;\n\n relid | indexrelid | schemaname | relname | indexrelname | idx_blks_read |\nidx_blks_hit \n\n-------+------------+------------+---------+--------------+---------------+-\n-------------\n\n 16386 | 24581 | public | tab | idx_tab_a2 | 0 |\n0\n\n(1 row)\n\n \n\nI've set:\n\nstats_start_collector = on\n\nstats_row_level = on\n\nstats_block_level = on\n\n \n\nand I think that the statistics collector should be running, because: \n\n$ ps aux|grep stats\n\npostgres 3688 0.0 0.0 7272 648 ? Ss 00:48 0:00 postgres:\nstats collector process \n\npostgres 29790 0.0 0.0 4004 712 pts/2 S+ 04:44 0:00 grep stats\n\n \n\nAny help would be really appreciated.\n\n \n\nRegards,\n\nYinan\n\n \n\n\n\n\n\n\n\n\n\n\n\nHi,\n \nWhen I use the\nstatistics collector to see the number of IO, I always get zero in almost all\nof columns. I really want to know the reason for that.\n \nThe result of\nstatistics view:\n#\nselect * from pg_statio_user_tables;\n relid\n| schemaname | relname | heap_blks_read | heap_blks_hit | idx_blks_read |\nidx_blks_hit | toast_blks_read | toast_blks_hit | tidx_blks_read |\ntidx_blks_hit \n-------+------------+---------+----------------+---------------+---------------+--------------+-----------------+----------------+----------------+---------------\n 16386\n| public     | tab    \n|             \n0 |             0\n|             0\n|            0\n|                \n|               \n|               \n|             \n\n(1\nrow)\n#\nselect * from pg_statio_user_indexes;\n relid\n| indexrelid | schemaname | relname | indexrelname | idx_blks_read |\nidx_blks_hit \n-------+------------+------------+---------+--------------+---------------+--------------\n 16386\n|      24581 | public     |\ntab     | idx_tab_a2  \n|             0\n|            0\n(1\nrow)\n \nI’ve set:\nstats_start_collector\n= on\nstats_row_level = on\nstats_block_level = on\n \nand\nI think that the statistics collector should be running, because: \n$\nps aux|grep stats\npostgres \n3688  0.0  0.0   7272   648\n?        Ss   00:48  \n0:00 postgres: stats collector process                    \n\npostgres\n29790  0.0  0.0   4004   712\npts/2    S+   04:44   0:00 grep stats\n \nAny help would be\nreally appreciated.\n \nRegards,\nYinan", "msg_date": "Mon, 24 Sep 2007 01:00:24 +0800", "msg_from": "\"Yinan Li\" <[email protected]>", "msg_from_op": true, "msg_subject": "zero value in statistics collector's result" } ]
[ { "msg_contents": "Hi Greg this is my Bonnie result.\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ninsaubi 8G 25893 54 26762 9 14146 3 36846 68 43502 3 102.8 0\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\ninsaubi,8G,25893,54,26762,9,14146,3,36846,68,43502,3,102.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nIf I compare this against my laptop (SATA disk too) is really better, but I don't know if this result is a good one or not.\nI don't know where to continue looking for the cause of the problem, I think there is a bug or something missconfigured with Debian 4.0r1 and Postgres.\nI unppluged the server from the network with the same results. I have the server mapped as localhost in PgAdmin III, there shouldn't be network traffic and there isn't (monitoring the network interface). I'm really lost with this weird behaviour.\nI really apreciate your help\nRegards\nAgustin\n\n----- Mensaje original ----\nDe: Greg Smith <[email protected]>\nPara: [email protected]\nCC: [email protected]\nEnviado: sábado 22 de septiembre de 2007, 3:29:17\nAsunto: Re: [PERFORM] Low CPU Usage\n\nOn Thu, 20 Sep 2007, [email protected] wrote:\n\n> Which other test can I do to find if this is a hardware, kernel o \n> postgres issue?\n\nThe little test hdparm does is not exactly a robust hard drive benchmark. \nIf you want to rule out hard drive transfer speed issues, take at look at \nthe tests suggested at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm and \nsee how your results compare to the single SATA disk example I give there.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n\n\n\n\n\n\n Las últimas noticias sobre el Mundial de Rugby 2007 están en Yahoo! Deportes. ¡Conocelas!\nhttp://ar.sports.yahoo.com/mundialderugby\nHi Greg this is my Bonnie result.Version  1.03       ------Sequential Output------ --Sequential Input- --Random-                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPinsaubi          8G 25893  54 26762   9 14146   3 36846  68 43502   3 102.8  \n 0                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++insaubi,8G,25893,54,26762,9,14146,3,36846,68,43502,3,102.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++If I compare this against my laptop (SATA disk too) is really better, but I don't know if this result is a good one or not.I don't\n know where to continue looking for the cause of the problem, I think there is a bug or something missconfigured with Debian 4.0r1 and Postgres.I unppluged the server from the network with the same results. I have the server mapped as localhost in PgAdmin III, there shouldn't be network traffic and there isn't (monitoring the network interface). 
I'm really lost with this weird behaviour.I really apreciate your helpRegardsAgustin----- Mensaje original ----De: Greg Smith <[email protected]>Para: [email protected]: [email protected]: sábado 22 de septiembre de 2007, 3:29:17Asunto: Re: [PERFORM] Low CPU UsageOn Thu, 20 Sep 2007, [email protected] wrote:> Which other test can I do to find if this is a hardware, kernel o > postgres issue?The\n little test hdparm does is not exactly a robust hard drive benchmark. If you want to rule out hard drive transfer speed issues, take at look at the tests suggested at http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm and see how your results compare to the single SATA disk example I give there.--* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Mon, 24 Sep 2007 06:59:26 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" } ]
[ { "msg_contents": "hi,\n\ni have the following table:\n\nCREATE TABLE \"main_activity\" (\n \"id\" serial NOT NULL PRIMARY KEY,\n \"user_id\" integer NOT NULL,\n \"sessionid\" varchar(128) NOT NULL,\n \"login\" timestamp with time zone NOT NULL,\n \"activity\" timestamp with time zone NOT NULL,\n \"logout\" timestamp with time zone NULL\n)\n\nthe problem is that it contains around 20000 entries, and a select \ncount(*) takes around 2 minutes. that's too slow.\n\nsome background info:\n\n- this table has a lot of updates and inserts, it works very similarly \nto a session-table for a web-application\n\n- there is a cron-job that deletes all the old entries, so it's size is \nrougly between 15000 and 35000 entries (it's run daily, and every day\ndeletes around 10000 entries)\n\n- but in the past, the cron-job was not in place, so the table's size \ngrew to around 800000 entries (in around 80 days)\n\n- then we removed the old entries, added the cronjob, vacuumed + \nanalyzed the table, and the count(*) is still slow\n\n- the output of the vacuum+analyze is:\n\nINFO: vacuuming \"public.main_activity\"\nINFO: index \"main_activity_pkey\" now contains 11675 row versions in \n57301 pages\nDETAIL: 41001 index row versions were removed.\n56521 index pages have been deleted, 20000 are currently reusable.\nCPU 1.03s/0.27u sec elapsed 56.08 sec.\nINFO: index \"main_activity_user_id\" now contains 11679 row versions in \n41017 pages\nDETAIL: 41001 index row versions were removed.\n37736 index pages have been deleted, 20000 are currently reusable.\nCPU 0.70s/0.42u sec elapsed 62.04 sec.\nINFO: \"main_activity\": removed 41001 row versions in 4310 pages\nDETAIL: CPU 0.15s/0.37u sec elapsed 20.48 sec.\nINFO: \"main_activity\": found 41001 removable, 11672 nonremovable row \nversions in 160888 pages\nDETAIL: 0 dead row versions cannot be removed yet.\nThere were 14029978 unused item pointers.\n0 pages are entirely empty.\nCPU 5.53s/1.71u sec elapsed 227.35 sec.\nINFO: analyzing \"public.main_activity\"\nINFO: \"main_activity\": 160888 pages, 4500 rows sampled, 4594 estimated \ntotal rows\n\n(please note that the \"4594 estimated total rows\"... 
the row-count \nshould be around 15000)\n\n- this is on postgresql 7.4.8 .yes, i know it's too old, and currently \nwe are preparing a migration to postgres8.1 (or 8.2, i'm not sure yet),\nbut for now i have to solve the problem on this database\n\nthanks a lot,\n\ngabor\n", "msg_date": "Mon, 24 Sep 2007 16:07:16 +0200", "msg_from": "Gábor Farkas <[email protected]>", "msg_from_op": true, "msg_subject": "select count(*) performance (vacuum did not help)" }, { "msg_contents": "On 9/24/07, Gábor Farkas <[email protected]> wrote:\n>\n>\n> INFO: \"main_activity\": found 41001 removable, 11672 nonremovable row\n> versions in 160888 pages\n> DETAIL: 0 dead row versions cannot be removed yet.\n> There were 14029978 unused item pointers.\n> 0 pages are entirely empty.\n> CPU 5.53s/1.71u sec elapsed 227.35 sec.\n> INFO: analyzing \"public.main_activity\"\n> INFO: \"main_activity\": 160888 pages, 4500 rows sampled, 4594 estimated\n> total rows\n>\n>\nLooking at the number of rows vs number of pages, ISTM that VACUUM FULL\nshould help you.\n\nThanks,\nPavan\n\n-- \nPavan Deolasee\nEnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 24 Sep 2007 19:53:49 +0530", "msg_from": "\"Pavan Deolasee\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance (vacuum did not help)" }, { "msg_contents": "Gábor Farkas wrote:\n> - this table has a lot of updates and inserts, it works very similarly\n> to a session-table for a web-application\n\nMake sure you run VACUUM often enough.\n\n> - there is a cron-job that deletes all the old entries, so it's size is\n> rougly between 15000 and 35000 entries (it's run daily, and every day\n> deletes around 10000 entries)\n\nRunning vacuum after these deletes to immediately reclaim the dead space\nwould also be a good idea.\n\n> - but in the past, the cron-job was not in place, so the table's size\n> grew to around 800000 entries (in around 80 days)\n\nThat bloated your table, so that there's still a lot of empty pages in\nit. VACUUM FULL should bring it back to a reasonable size. Regular\nnormal non-FULL VACUUMs should keep it in shape after that.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 24 Sep 2007 15:30:09 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance (vacuum did not help)" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Gábor Farkas wrote:\n>> - but in the past, the cron-job was not in place, so the table's size\n>> grew to around 800000 entries (in around 80 days)\n> \n> That bloated your table, so that there's still a lot of empty pages in\n> it. VACUUM FULL should bring it back to a reasonable size. Regular\n> normal non-FULL VACUUMs should keep it in shape after that.\n> \n\nhmm... 
can a full-vacuum be performed while the database is still \"live\" \n(i mean serving requests)?\n\nwill the db still be able to respond to queries?\n\nor in a different way:\n\nif i do a full vacuum to that table only, will the database still serve \ndata from the other tables at a normal speed?\n\nthanks,\ngabor\n", "msg_date": "Mon, 24 Sep 2007 17:04:39 +0200", "msg_from": "Gábor Farkas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) performance (vacuum did not help)" }, { "msg_contents": "Gábor Farkas wrote:\n> hmm... can a full-vacuum be performed while the database is still \"live\"\n> (i mean serving requests)?\n> \n> will the db still be able to respond to queries?\n\nVACUUM FULL will exclusive lock the table, which means that other\nqueries accessing it will block and wait until it's finished.\n\n> or in a different way:\n> \n> if i do a full vacuum to that table only, will the database still serve\n> data from the other tables at a normal speed?\n\nYes. The extra I/O load vacuum full generates while it's running might\ndisrupt other activity, though.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 24 Sep 2007 16:07:51 +0100", "msg_from": "\"Heikki Linnakangas\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance (vacuum did not help)" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Gábor Farkas wrote:\n>>\n>> if i do a full vacuum to that table only, will the database still serve\n>> data from the other tables at a normal speed?\n> \n> Yes. The extra I/O load vacuum full generates while it's running might\n> disrupt other activity, though.\n> \n\ni see.\n\nwill i achieve the same thing by simply dropping that table and \nre-creating it?\n\ngabor\n", "msg_date": "Mon, 24 Sep 2007 17:14:27 +0200", "msg_from": "Gábor Farkas <[email protected]>", "msg_from_op": true, "msg_subject": "Re: select count(*) performance (vacuum did not help)" 
but\nit will also lock it in exclusive mode just as VACUUM FULL would do it.\nIf your table has just a few live rows and lots of junk in it, CLUSTER\nshould be fast enough. With 20K entries I would expect it to be fast\nenough not to be a problem...\n\nCheers,\nCsaba.\n\n\n", "msg_date": "Mon, 24 Sep 2007 17:34:16 +0200", "msg_from": "Csaba Nagy <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance (vacuum did not help)" }, { "msg_contents": "> -----Original Message-----\n> From: Gábor Farkas\n> \n> \n> i see.\n> \n> will i achieve the same thing by simply dropping that table \n> and re-creating it?\n\nYes. Or even easier (if you don't need the data anymore) you can use the\ntruncate command. Which deletes everything in the table including dead\nrows.\n\nDave\n\n", "msg_date": "Mon, 24 Sep 2007 10:37:42 -0500", "msg_from": "\"Dave Dutcher\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: select count(*) performance (vacuum did not help)" } ]
[ { "msg_contents": "I have found the reason!!! I begin to see line by line postgresql.conf and saw ssl = true.\nI have disabled ssl and then I have restarted the server and that's all. It's 4 or 5 times faster than the old server.\nI don't understand why PgAdmin is connecting using ssl if I have leave this field empty!!!\nDebian by default installs Postgres with ssl enabled.\nThank you very much all of you to help me to find the causes.\nRegards\nAgustin\n\n----- Mensaje original ----\nDe: \"[email protected]\" <[email protected]>\nPara: Greg Smith <[email protected]>\nCC: [email protected]\nEnviado: lunes 24 de septiembre de 2007, 10:59:26\nAsunto: Re: [PERFORM] Low CPU Usage\n\nHi Greg this is my Bonnie result.\n\nVersion 1.03 ------Sequential Output------ --Sequential Input- --Random-\n -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--\nMachine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP\ninsaubi 8G 25893 54 26762 9 14146 3 36846 68 43502 3 102.8 \n 0\n ------Sequential Create------ --------Random Create--------\n -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--\n files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP\n 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++\ninsaubi,8G,25893,54,26762,9,14146,3,36846,68,43502,3,102.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++\n\nIf I compare this against my laptop (SATA disk too) is really better, but I don't know if this result is a good one or not.\nI don't\n know where to continue looking for the cause of the problem, I think there is a bug or something missconfigured with Debian 4.0r1 and Postgres.\nI unppluged the server from the network with the same results. I have the server mapped as localhost in PgAdmin III, there shouldn't be network traffic and there isn't (monitoring the network interface). I'm really lost with this weird behaviour.\nI really apreciate your help\nRegards\nAgustin\n\n----- Mensaje original ----\nDe: Greg Smith <[email protected]>\nPara: [email protected]\nCC: [email protected]\nEnviado: sábado 22 de septiembre de 2007, 3:29:17\nAsunto: Re: [PERFORM] Low CPU Usage\n\nOn Thu, 20 Sep 2007, [email protected] wrote:\n\n> Which other test can I do to find if this is a hardware, kernel o \n> postgres issue?\n\nThe\n little test hdparm does is not exactly a robust hard drive benchmark. \nIf you want to rule out hard drive transfer speed issues, take at look at \nthe tests suggested at \nhttp://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm and \nsee how your results compare to the single SATA disk example I give there.\n\n--\n* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\n\n\n\n\n\n\n\n \nEl Mundial de Rugby 2007\nLas últimas noticias en Yahoo! Deportes:\n\nhttp://ar.sports.yahoo.com/mundialderugby\n\n\n\n\n\n Los referentes más importantes en compra/ venta de autos se juntaron:\nDemotores y Yahoo!\nAhora comprar o vender tu auto es más fácil. Vistá ar.autos.yahoo.com/\nI have found the reason!!! I begin to see line by line postgresql.conf and saw ssl = true.I have disabled ssl and then I have restarted the server and that's all. 
It's 4 or 5 times faster than the old server.I don't understand why PgAdmin is connecting using ssl if I have leave this field empty!!!Debian by default installs Postgres with ssl enabled.Thank you very much all of you to help me to find the causes.RegardsAgustin----- Mensaje original ----De: \"[email protected]\" <[email protected]>Para: Greg Smith <[email protected]>CC:\n [email protected]: lunes 24 de septiembre de 2007, 10:59:26Asunto: Re: [PERFORM] Low CPU UsageHi Greg this is my Bonnie result.Version  1.03       ------Sequential Output------ --Sequential Input- --Random-                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CPinsaubi          8G 25893  54 26762   9 14146   3 36846  68 43502   3 102.8  \n 0                    ------Sequential Create------ --------Random Create--------                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++insaubi,8G,25893,54,26762,9,14146,3,36846,68,43502,3,102.8,0,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++If I compare this against my laptop (SATA disk too) is really better, but I don't know if this result is a good one or not.I don't\n know where to continue looking for the cause of the problem, I think there is a bug or something missconfigured with Debian 4.0r1 and Postgres.I unppluged the server from the network with the same results. I have the server mapped as localhost in PgAdmin III, there shouldn't be network traffic and there isn't (monitoring the network interface). I'm really lost with this weird behaviour.I really apreciate your helpRegardsAgustin----- Mensaje original ----De: Greg Smith <[email protected]>Para: [email protected]: [email protected]: sábado 22 de septiembre de 2007, 3:29:17Asunto: Re: [PERFORM] Low CPU UsageOn Thu, 20 Sep 2007, [email protected] wrote:> Which other test can I do to find if this is a hardware, kernel o > postgres issue?The\n little test hdparm does is not exactly a robust hard drive benchmark. If you want to rule out hard drive transfer speed issues, take at look at the tests suggested at http://www.westnet.com/~gsmith/content/postgresql/pg-disktesting.htm and see how your results compare to the single SATA disk example I give there.--* Greg Smith [email protected] http://www.gregsmith.com Baltimore, MD\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby\nEl Mundial de Rugby 2007Las últimas noticias en Yahoo! Deportes:\nhttp://ar.sports.yahoo.com/mundialderugby", "msg_date": "Mon, 24 Sep 2007 07:16:38 -0700 (PDT)", "msg_from": "[email protected]", "msg_from_op": true, "msg_subject": "Re: Low CPU Usage" } ]
[ { "msg_contents": "Hello,\n\nI have a database with an amount of tables and in several of them I have an\nattribute for a semantic definition, for which I use a field of type text. I\nam trying to decide if it would be worth using LONGTEXT instead of TEXT, as\nmaybe it would slow down the data insertion and extraction. I hope that you\ncould help me. Thank you.\n\n-- \nFabiola Fernández Gutiérrez\nGrupo de Ingeniería Biomédica\nEscuela Superior de Ingeniería\nCamino de los Descubrimientos, s/n\nIsla de la Cartuja\n41092 Sevilla (Spain)\nTfno: +34 954487399\nE-mail: [email protected]\n\nHello, I have a database with an amount of tables and in several of them I have an attribute for a semantic definition, for which I use a field of type text. I am trying to decide if it would be worth using LONGTEXT instead of TEXT, as maybe it would slow down the data insertion and extraction. I hope that you could help me. Thank you.\n-- Fabiola Fernández GutiérrezGrupo de Ingeniería BiomédicaEscuela Superior de IngenieríaCamino de los Descubrimientos, s/nIsla de la Cartuja41092 Sevilla (Spain)Tfno: +34 954487399\nE-mail: [email protected]", "msg_date": "Mon, 24 Sep 2007 17:21:25 +0200", "msg_from": "\"=?ISO-8859-1?Q?Fabiola_Fern=E1ndez?=\" <[email protected]>", "msg_from_op": true, "msg_subject": "TEXT or LONGTEXT?" }, { "msg_contents": "On 9/24/07, Fabiola Fernández <[email protected]> wrote:\n> I have a database with an amount of tables and in several of them I have an\n> attribute for a semantic definition, for which I use a field of type text. I\n> am trying to decide if it would be worth using LONGTEXT instead of TEXT, as\n> maybe it would slow down the data insertion and extraction. I hope that you\n> could help me. Thank you.\n\nEasy choice -- PostgreSQL does not have a data type named \"longtext\".\n\nAlexander.\n", "msg_date": "Mon, 24 Sep 2007 17:29:48 +0200", "msg_from": "\"Alexander Staubo\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT or LONGTEXT?" }, { "msg_contents": "\nOn 24 sep 2007, at 17.21, Fabiola Fern�ndez wrote:\n> I am trying to decide if it would be worth using LONGTEXT instead \n> of TEXT, as maybe it would slow down the data insertion and \n> extraction.\n\nPostgres doesn't have a LONGTEXT datatype, so keep using TEXT.\n\nhttp://www.postgresql.org/docs/8.2/interactive/datatype-character.html\n\n\n\nSincerely,\n\nNiklas Johansson\n\n\n\n", "msg_date": "Mon, 24 Sep 2007 18:00:29 +0200", "msg_from": "Niklas Johansson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: TEXT or LONGTEXT?" } ]
[ { "msg_contents": "Is there a rule of thumb about when the planner's row estimates are too \nhigh? In particular, when should I be concerned that planner's estimated \nnumber of rows estimated for a nested loop is off? By a factor of 10? 100? \n1000?\n\nCarlo \n\n", "msg_date": "Mon, 24 Sep 2007 15:37:41 -0400", "msg_from": "\"Carlo Stonebanks\" <[email protected]>", "msg_from_op": true, "msg_subject": "Acceptable level of over-estimation?" }, { "msg_contents": "\"Carlo Stonebanks\" <[email protected]> writes:\n\n> Is there a rule of thumb about when the planner's row estimates are too high?\n> In particular, when should I be concerned that planner's estimated number of\n> rows estimated for a nested loop is off? By a factor of 10? 100? 1000?\n\nNot really. It's a big enough difference for the planner to make a bad\ndecision or it isn't. But if you pressed me I would say a factor of 10 is bad.\nA factor of 2 is inevitable in some cases.\n\n-- \n Gregory Stark\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Mon, 24 Sep 2007 21:18:25 +0100", "msg_from": "Gregory Stark <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Acceptable level of over-estimation?" } ]
[ { "msg_contents": "I have a database that we want to keep track of counts of rows.\n\nWe have triggers on the rows which increment, and decrement a count \ntable. In order to speed up deleting many rows we have added the following\n\n if user != 'mocospace_cleanup' \nthen \n\n update user_profile_count set buddycount=buddycount-1 where \nuser_profile_count.uid=OLD.userid; \n end \nif; \n\n\nHowever in the logs we can see the following. I have checked to make \nsure that the user really is the mocospace_cleanup user and checked \nmanually by logging in as the mocospace_cleanup user to make sure that \nthe code above does what it purports to.\n\nERROR: deadlock detected\nDETAIL: Process 23063 waits for ExclusiveLock on tuple (20502,48) of \nrelation 48999028 of database 14510214; blocked by process 23110.\nProcess 23110 waits for ShareLock on transaction 1427023217; blocked by \nprocess 23098.\n...\nCONTEXT: SQL statement \"update user_profile_count set \nbuddycount=buddycount-1 where user_profile_count.uid= $1 \"\nPL/pgSQL function \"user_buddy_count\" line 11 at SQL statement\nSQL statement \"DELETE FROM ONLY \"public\".\"user_buddies\" WHERE \n\"buddyuserid\" = $1\"\n\nDave\n", "msg_date": "Tue, 25 Sep 2007 07:08:42 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Attempting to disable count triggers on cleanup" }, { "msg_contents": "On Tue, Sep 25, 2007 at 07:08:42AM -0400, Dave Cramer wrote:\n> ERROR: deadlock detected\n> DETAIL: Process 23063 waits for ExclusiveLock on tuple (20502,48) of \n> relation 48999028 of database 14510214; blocked by process 23110.\n> Process 23110 waits for ShareLock on transaction 1427023217; blocked by \n> process 23098.\n> ...\n> CONTEXT: SQL statement \"update user_profile_count set \n> buddycount=buddycount-1 where user_profile_count.uid= $1 \"\n> PL/pgSQL function \"user_buddy_count\" line 11 at SQL statement\n> SQL statement \"DELETE FROM ONLY \"public\".\"user_buddies\" WHERE \n> \"buddyuserid\" = $1\"\n\ntake a look at:\nhttp://www.depesz.com/index.php/2007/09/12/objects-in-categories-counters-with-triggers/\n\nand if you want to temporarily disable trigger, simply do appropriate\n\"alter table disable trigger\".\n\ndepesz\n\n-- \nquicksil1er: \"postgres is excellent, but like any DB it requires a\nhighly paid DBA. 
here's my CV!\" :)\nhttp://www.depesz.com/ - blog dla ciebie (i moje CV)\n", "msg_date": "Tue, 25 Sep 2007 13:21:49 +0200", "msg_from": "hubert depesz lubaczewski <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Attempting to disable count triggers on cleanup" }, { "msg_contents": "hubert depesz lubaczewski wrote:\n> On Tue, Sep 25, 2007 at 07:08:42AM -0400, Dave Cramer wrote:\n> \n>> ERROR: deadlock detected\n>> DETAIL: Process 23063 waits for ExclusiveLock on tuple (20502,48) of \n>> relation 48999028 of database 14510214; blocked by process 23110.\n>> Process 23110 waits for ShareLock on transaction 1427023217; blocked by \n>> process 23098.\n>> ...\n>> CONTEXT: SQL statement \"update user_profile_count set \n>> buddycount=buddycount-1 where user_profile_count.uid= $1 \"\n>> PL/pgSQL function \"user_buddy_count\" line 11 at SQL statement\n>> SQL statement \"DELETE FROM ONLY \"public\".\"user_buddies\" WHERE \n>> \"buddyuserid\" = $1\"\n>> \n>\n> take a look at:\n> http://www.depesz.com/index.php/2007/09/12/objects-in-categories-counters-with-triggers/\n>\n> and if you want to temporarily disable trigger, simply do appropriate\n> \"alter table disable trigger\".\n>\n> \nWell, that doesn't work inside a transaction I've tried it. This has \nbeen fixed in 8.3\n\nDave\n> depesz\n>\n> \n\n", "msg_date": "Tue, 25 Sep 2007 07:39:42 -0400", "msg_from": "Dave Cramer <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Attempting to disable count triggers on cleanup" } ]
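A minimal sketch of the ALTER TABLE approach mentioned above, wrapped around the cleanup job's bulk delete; the trigger name and the id value are assumptions (the thread never names them), and as noted in the follow-up this pattern was problematic inside a transaction before 8.3:

-- trigger name is hypothetical; use whichever trigger maintains user_profile_count
ALTER TABLE user_buddies DISABLE TRIGGER user_buddy_count_trig;
DELETE FROM user_buddies WHERE buddyuserid = 42;   -- the cleanup delete
ALTER TABLE user_buddies ENABLE TRIGGER user_buddy_count_trig;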
[ { "msg_contents": "\nHi, I am having some trouble understanding a plan and was wondering if anyone\ncould guide me. The query in question here seems to be showing some\nincorrect row counts. I have vacuumed and analyzed the table, but the\nestimate versus the actual total seems to be way out (est 2870 vs actual\n85k). Perhaps I am reading the plan incorrectly though. (hopefully the plan\nbelow is readable)\n\ndb=# select version();\nversion\n-------------------------------------------------------------------------------------------------------------\nPostgreSQL 8.2.4 on i486-pc-linux-gnu, compiled by GCC cc (GCC) 4.1.2\n20061115\n(prerelease) (Debian 4.1.1-21)\n\ndb=# show shared_buffers ;\n shared_buffers\n----------------\n 300MB\n\n#4GB ram, 2 SATA striped, XFS\n\ndb=# show default_statistics_target;\n default_statistics_target\n---------------------------\n 100\n\n# stats have been raised to 1000 on both the destip and srcip columns\n# create index slog_gri_idx on slog (gid,rule,(case when rule in (8,9) then\ndestip else srcip end)) WHERE (rule in (1, 2, 8, 9, 10));\n# vacuum analyze verbose slog;\n\ndb=# show random_page_cost ;\n random_page_cost\n------------------\n 3\n\ndb=# select count(*) from slog\n count\n---------\n 1,019,121\n\ndb=#select count(*) as total\nfrom slog\nwhere gid=10000::INTEGER\nand rule in (1,2,8,9,10)\nand (case when rule in (8,9) then destip else srcip\nend)='192.168.10.23'::INET;\n total\n-------\n 83,538\n\n# problematic query\nexplain analyze\nselect coalesce(uri,host((case when rule in (8,9) then srcip else destip\nend))) as\ndestip,\n case when rule in (8,9) then 'ext' else 'int' end as tp,\n count(*) as total,\n coalesce(sum(destbytes),0)+coalesce(sum(srcbytes),0) as bytes\nfrom slog\nwhere gid=10000::INTEGER\nand rule in (1,2,8,9,10)\nand (case when rule in (8,9) then destip else srcip\nend)='192.168.10.23'::INET\ngroup by destip,tp\norder by bytes desc,total desc,destip limit 20\n\n\nLimit (cost=6490.18..6490.23 rows=20 width=61) (actual\ntime=2036.968..2037.220 rows=20 loops=1)\n-> Sort (cost=6490.18..6490.90 rows=288 width=61) (actual\ntime=2036.960..2037.027 rows=20 loops=1)\n Sort Key: (COALESCE(sum(destbytes), 0::numeric) +\nCOALESCE(sum(srcbytes), 0::numeric)), count(*), COALESCE(uri, host(CASE WHEN\n(rule = ANY ('{8,9}'::integer[])) THEN srcip ELSE destip END))\n -> HashAggregate (cost=6470.50..6478.42 rows=288 width=61) (actual\ntime=2008.478..2022.125 rows=2057 loops=1)\n -> Bitmap Heap Scan on slog (cost=82.98..6434.62 rows=2870\nwidth=61) (actual time=50.235..1237.948 rows=83538 loops=1)\n Recheck Cond: ((gid = 10000) AND (rule = ANY\n('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n('{8,9}'::integer[])) THEN destip ELSE srcip END = '192.168.10.23'::inet))\n -> Bitmap Index Scan on slog_gri_idx (cost=0.00..82.26\nrows=2870 width=0) (actual time=41.306..41.306 rows=83538 loops=1)\n Index Cond: ((gid = 10000) AND (rule = ANY\n('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n('{8,9}'::integer[])) THEN destip ELSE srcip END = '192.168.10.23'::inet))\nTotal runtime: 2037.585 ms\n\nDoes anyone have any suggestions?\n\nThanks!\n-- \nView this message in context: http://www.nabble.com/Incorrect-row-estimates-in-plan--tf4522692.html#a12902068\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 26 Sep 2007 07:22:45 -0700 (PDT)", "msg_from": "pgdba <[email protected]>", "msg_from_op": true, "msg_subject": "Incorrect row estimates in plan?" 
}, { "msg_contents": "pgdba <[email protected]> writes:\n> -> Bitmap Heap Scan on slog (cost=82.98..6434.62 rows=2870\n> width=61) (actual time=50.235..1237.948 rows=83538 loops=1)\n> Recheck Cond: ((gid = 10000) AND (rule = ANY\n> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n> ('{8,9}'::integer[])) THEN destip ELSE srcip END = '192.168.10.23'::inet))\n> -> Bitmap Index Scan on slog_gri_idx (cost=0.00..82.26\n> rows=2870 width=0) (actual time=41.306..41.306 rows=83538 loops=1)\n> Index Cond: ((gid = 10000) AND (rule = ANY\n> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n> ('{8,9}'::integer[])) THEN destip ELSE srcip END = '192.168.10.23'::inet))\n\n[ blink... ] Pray tell, what is the definition of this index?\n\nWith such a bizarre scan condition, it's unlikely you'll get any really\naccurate row estimate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2007 10:45:14 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect row estimates in plan? " }, { "msg_contents": "\nHi Tom,\n\nTom Lane-2 wrote:\n> \n> pgdba <[email protected]> writes:\n>> -> Bitmap Heap Scan on slog (cost=82.98..6434.62 rows=2870\n>> width=61) (actual time=50.235..1237.948 rows=83538 loops=1)\n>> Recheck Cond: ((gid = 10000) AND (rule = ANY\n>> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n>> ('{8,9}'::integer[])) THEN destip ELSE srcip END =\n>> '192.168.10.23'::inet))\n>> -> Bitmap Index Scan on slog_gri_idx \n>> (cost=0.00..82.26\n>> rows=2870 width=0) (actual time=41.306..41.306 rows=83538 loops=1)\n>> Index Cond: ((gid = 10000) AND (rule = ANY\n>> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n>> ('{8,9}'::integer[])) THEN destip ELSE srcip END =\n>> '192.168.10.23'::inet))\n> \n> [ blink... ] Pray tell, what is the definition of this index?\n> \n> With such a bizarre scan condition, it's unlikely you'll get any really\n> accurate row estimate.\n> \n> \t\t\tregards, tom lane\n> \n> \n\nOriginal index: \"create index slog_gri_idx on slog (gid,rule,(case when rule\nin (8,9) then\ndestip else srcip end)) WHERE (rule in (1, 2, 8, 9, 10))\"\n\nThe purpose of that index is to match a specific query (one that gets run\nfrequently and needs to be fast). It is using the destip when rule 8/9, and\nsrcip when other, but only for a subset of the rules (1,2,8,9,10). There are\nabout 18 rules in total, but I'm only interested in those 5. I have tried a\ncouple of indices like:\ncreate index test_destip_idx on slog (gid,destip) where rule in (8,9);\ncreate index test_srcip_idx on slog (gid,srcip) where rule in (1,2,10);\n\nBut the original slog_gri_idx index was used instead. Is there a way that I\ncan rewrite that index then? Not that I'm a fan of a CASE statement in a\nfunctional index, but I'm at a loss as to how else I can create this. Or\nwhat else I can look into to make this faster?\n\n\n-- \nView this message in context: http://www.nabble.com/Incorrect-row-estimates-in-plan--tf4522692.html#a12903194\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 26 Sep 2007 08:24:01 -0700 (PDT)", "msg_from": "pgdba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect row estimates in plan?" 
}, { "msg_contents": "pgdba <[email protected]> writes:\n> Tom Lane-2 wrote:\n> -> Bitmap Index Scan on slog_gri_idx \n> (cost=0.00..82.26\n> rows=2870 width=0) (actual time=41.306..41.306 rows=83538 loops=1)\n> Index Cond: ((gid = 10000) AND (rule = ANY\n> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n> ('{8,9}'::integer[])) THEN destip ELSE srcip END =\n> '192.168.10.23'::inet))\n>> \n>> [ blink... ] Pray tell, what is the definition of this index?\n\n> Original index: \"create index slog_gri_idx on slog (gid,rule,(case when rule\n> in (8,9) then\n> destip else srcip end)) WHERE (rule in (1, 2, 8, 9, 10))\"\n\n> The purpose of that index is to match a specific query (one that gets run\n> frequently and needs to be fast).\n\nAh. I didn't think you would've put such a specific thing into an index\ndefinition, but if you're stuck supporting such badly written queries,\nmaybe there's no other way.\n\nI rather doubt that you're going to be able to make this query any\nfaster than it is, short of buying enough RAM to keep the whole table\nRAM-resident. Pulling 80000 random rows in 1200 msec doesn't sound\nall that slow to me.\n\nThe ultimate solution might be to rethink your table designs ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 26 Sep 2007 12:38:09 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Incorrect row estimates in plan? " }, { "msg_contents": "\n\n\nTom Lane-2 wrote:\n> \n> pgdba <[email protected]> writes:\n>> Tom Lane-2 wrote:\n>> -> Bitmap Index Scan on slog_gri_idx \n>> (cost=0.00..82.26\n>> rows=2870 width=0) (actual time=41.306..41.306 rows=83538 loops=1)\n>> Index Cond: ((gid = 10000) AND (rule = ANY\n>> ('{1,2,8,9,10}'::integer[])) AND (CASE WHEN (rule = ANY\n>> ('{8,9}'::integer[])) THEN destip ELSE srcip END =\n>> '192.168.10.23'::inet))\n>>> \n>>> [ blink... ] Pray tell, what is the definition of this index?\n> \n>> Original index: \"create index slog_gri_idx on slog (gid,rule,(case when\n>> rule\n>> in (8,9) then\n>> destip else srcip end)) WHERE (rule in (1, 2, 8, 9, 10))\"\n> \n>> The purpose of that index is to match a specific query (one that gets run\n>> frequently and needs to be fast).\n> \n> Ah. I didn't think you would've put such a specific thing into an index\n> definition, but if you're stuck supporting such badly written queries,\n> maybe there's no other way.\n> \n> I rather doubt that you're going to be able to make this query any\n> faster than it is, short of buying enough RAM to keep the whole table\n> RAM-resident. Pulling 80000 random rows in 1200 msec doesn't sound\n> all that slow to me.\n> \n> The ultimate solution might be to rethink your table designs ...\n> \n> \t\t\tregards, tom lane\n> \n\nBadly written the query may be, but I do have the opportunity to change it.\nPart of the problem is that I cannot come up with a better way of writing\nit.\n\nWhat about the discrepancy between the estimated row count and the actual\nrow count for that index access?\n\"Bitmap Index Scan on slog_gri_idx (cost=0.00..82.26 rows=2870 width=0)\n(actual time=41.306..41.306 rows=83538 loops=1)\"\n\nIs there anything I can do to influence that (not that it is likely to\nchange the plan, but...). 
I vacuumed and analyzed after I created the index,\nso the stats should at least be close (with stats target set to 1000\nthere).\n\n-- \nView this message in context: http://www.nabble.com/Incorrect-row-estimates-in-plan--tf4522692.html#a12905186\nSent from the PostgreSQL - performance mailing list archive at Nabble.com.\n\n", "msg_date": "Wed, 26 Sep 2007 09:56:33 -0700 (PDT)", "msg_from": "pgdba <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Incorrect row estimates in plan?" } ]
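One way to avoid the CASE expression that is confusing the estimates in this thread is to split the statement into its two natural arms, each backed by its own partial index. This is only a sketch built from the table and column names quoted above (slog, gid, rule, srcip, destip, uri); it was not run or benchmarked by the participants, and Tom's point stands that fetching ~80k scattered rows may simply cost what it costs.

-- Hypothetical indexes, one per arm of the old CASE:
CREATE INDEX slog_ext_idx ON slog (gid, destip) WHERE rule IN (8, 9);
CREATE INDEX slog_int_idx ON slog (gid, srcip) WHERE rule IN (1, 2, 10);

-- Equivalent query as a UNION ALL, so each arm's predicate matches its
-- partial index exactly and ordinary per-column statistics apply:
SELECT destip, tp, count(*) AS total,
       coalesce(sum(destbytes), 0) + coalesce(sum(srcbytes), 0) AS bytes
FROM (
    SELECT coalesce(uri, host(srcip)) AS destip, 'ext' AS tp, destbytes, srcbytes
    FROM slog
    WHERE gid = 10000 AND rule IN (8, 9) AND destip = '192.168.10.23'::inet
    UNION ALL
    SELECT coalesce(uri, host(destip)) AS destip, 'int' AS tp, destbytes, srcbytes
    FROM slog
    WHERE gid = 10000 AND rule IN (1, 2, 10) AND srcip = '192.168.10.23'::inet
) s
GROUP BY destip, tp
ORDER BY bytes DESC, total DESC, destip
LIMIT 20;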
[ { "msg_contents": "Hi,\n\nI am curious as to why this occurs. Why does an = change the query plan so\ndrastically?\n\nWhen my query is:\n\nSelect count(*) from View_A WHERE tradedate = '20070801';\nThe query plan is as below: I see that the scan on the alloctbl is being\nindexed on k_alloctbl_blockid_status\n\n-> Bitmap Index Scan on idx_tradeblocktbl_tradeate\n(cost=0.00..50.47rows=1444 width=0) (actual time=\n0.040..0.040 rows=106 loops=1)\n Index Cond: ((tradedate >=\n'2007-08-01'::date) AND (tradedate <= '2007-09-24'::date))\n -> Bitmap Heap Scan on alloctbl a\n(cost=4.59..270.73rows=70 width=16) (actual time=\n0.010..0.011 rows=1 loops=7)\n Recheck Cond: (tr.recid = a.blockid)\n -> Bitmap Index Scan on k_alloctbl_blockid_status (cost=\n0.00..4.59 rows=70 width=0) (actual time=0.007..0.007 rows=1 loops=7)\n Index Cond: (tr.recid = a.blockid)\n Total runtime: 1.453 ms\n\n\nBut when my query is:\nSelect count(*) from View_A WHERE tradedate BETWEEN '20070801' and\n'20070901';\nThe query plan is:\n\n-\n -> Bitmap Heap Scan on tradeblocktbl tr (cost=\n50.47..2849.67 rows=1444 width=80) (actual time=0.095..0.218 rows=104\nloops=1)\n Recheck Cond: ((tradedate >= '2007-08-01'::date)\nAND (tradedate <= '2007-09-24'::date))\n -> Bitmap Index Scan on\nidx_tradeblocktbl_tradeate (cost=0.00..50.47 rows=1444 width=0) (actual\ntime=0.050..0.050 rows=106 loops=1)\n Index Cond: ((tradedate >=\n'2007-08-01'::date) AND (tradedate <= '2007-09-24'::date))\n -> Sort (cost=99007.79..100479.68 rows=588755 width=16)\n(actual time=2660.009..3150.887 rows=588755 loops=1)\n Sort Key: a.blockid\n -> Seq Scan on alloctbl a\n(cost=0.00..20442.55rows=588755 width=16) (actual time=\n0.026..764.833 rows=588755 loops=1)\n\n\n Total runtime: 3590.715 ms\n\nThank you.\nRadhika\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nHi,I am curious as to why this occurs. 
", "msg_date": "Wed, 26 Sep 2007 14:00:42 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Difference in query plan when using = or > in where clause" }, { "msg_contents": "Radhika S wrote:\n> I am curious as to why this occurs. Why does an = change the query plan so\n> drastically?\n> \n> When my query is:\n> Select count(*) from View_A WHERE tradedate = '20070801';\n> The query plan is as below:\n> ...\n> But when my query is:\n> Select count(*) from View_A WHERE tradedate BETWEEN '20070801' and\n> '20070901';\n> The query plan is:\n> ...\n\nIn short, the planner estimates that \"tradedate BETWEEN '20070801' and\n'20070901'\" matches more rows than \"tradatedate = '20070801'\"\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Thu, 27 Sep 2007 13:39:07 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference in query plan when using = or > in where\n clause" } ]
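The estimate Heikki refers to can be read straight out of the planner's statistics. A small sketch, using the underlying table and column that appear in the plans above (tradeblocktbl.tradedate); the view name and exact dates are whatever applies locally:

-- Equality is estimated from n_distinct and the most-common-value list,
-- the BETWEEN range from the histogram, so the two row counts differ:
SELECT n_distinct, histogram_bounds
FROM pg_stats
WHERE tablename = 'tradeblocktbl' AND attname = 'tradedate';

EXPLAIN SELECT count(*) FROM tradeblocktbl WHERE tradedate = '2007-08-01';
EXPLAIN SELECT count(*) FROM tradeblocktbl
WHERE tradedate BETWEEN '2007-08-01' AND '2007-09-01';

Once the estimated row count from the date filter grows large enough, the planner stops probing alloctbl's index once per row and switches to the sort plus sequential scan seen in the second plan.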
[ { "msg_contents": "Hi All;\n\nI'm preparing to fire up WAL archiving on 8 production servers We will follow \nup with implementing a warm standby scenariio.\n\nDoes anyone have any thoughts per how to maximize performance, yet minimize \nthe potential for data loss assuming we were not able to retrieve the final \nun-archived WAL segment from the original pg_xlog dir in the case of a crash?\n\nSpecifically, I wonder if there are some general rules of thought per tuning \nwal_buffers, checkpoint_segments and friends...\n\nCurrently all servers have the following settings:\nwal_buffers = 24\ncheckpoint_segments = 32\n\n\nThanks in advance\n\n/Kevin\n\n", "msg_date": "Thu, 27 Sep 2007 11:42:06 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "Tuning for warm standby" }, { "msg_contents": "On 9/27/07, Kevin Kempter <[email protected]> wrote:\n> Hi All;\n>\n> I'm preparing to fire up WAL archiving on 8 production servers We will follow\n> up with implementing a warm standby scenariio.\n>\n> Does anyone have any thoughts per how to maximize performance, yet minimize\n> the potential for data loss assuming we were not able to retrieve the final\n> un-archived WAL segment from the original pg_xlog dir in the case of a crash?\n\nthe standby mechanism is actually very simple and there is very little\nto do for efficient operation. all the hard work is done inside the\nwal algorithms and from the outside it works like a fancy rsync.\n\nsome performance tips:\n* don't use encrypted channel (scp) to transfer wal segments from\nprimary to secondary.\n* make sure the link between servers is gigabit at least. bonded\nethernet couldn't hurt if you can easily fit it in your topology\n* do not directly write wal segments (nfs, cifs) to the remote folder.\n whatever you use, make sure it puts files into the remote folder\natomically unless it is specifically designed to handle wal segments.\n* there's not much to do on the standby side.\n\nI've set up a few warm standby systems with pg_standby...it works\ngreat. I find it works best using link mode (-l) and at lest 256 wal\nfile before it prunes.\n\n'archive_timeout' is a way to guarantee your last transferred file is\nno older than 'x' seconds. I am not a big fan of setting this...most\nof the servers I work with are fairly busy and I'd prefer to let the\nserver decide when to flip files. I would only consider setting this\nin a server that had very little writing going on but what did get\nwritten was important.\n\nThere is a new player in warm standby systems (developed by skype!):\nhttp://pgfoundry.org/projects/skytools/\n\nI haven't looked at it yet, but supposedly it can stream WAL files\nover real time. definately worth looking in to. This would moot some\nof the other advice I've given here.\n\nmerlin\n", "msg_date": "Fri, 28 Sep 2007 09:17:51 -0400", "msg_from": "\"Merlin Moncure\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Tuning for warm standby" } ]
[ { "msg_contents": "Hello,\n\nI have a weird performance issue with a query I'm testing. Basically,\nI'm trying to port a function that generates user uids, and since\npostgres offers a sequence generator function, I figure I'd take\nadvantage of that. Basically, I generate our uid range, filter out\nthose which are in use, and randomly pick however many I need.\nHowever, when I run it it takes forever (>10 minutes and I get nothing\nso I cancelled the query) and cpu usage on the server is maxed out.\n\nHere's my query (I'll post the explain output later so as not to\nobscure my question):\n=> select a.uid from generate_series(1000, 32767) as a(uid) where\na.uid not in (select uid from people) order by random() limit 1;\n\nI thought that nulls were a problem, so I tried:\n=> select a.uid from generate_series(1000, 32767) as a(uid) where\na.uid not in (select coalesce(uid,0) from people) order by random()\nlimit 1;\nAnd that finished in less than a second.\n\nI then tried:\n=> select a.uid from generate_series(1000, 32767) as a(uid) where\na.uid not in (select coalesce(uid,0) from people where uid is not\nnull) order by random() limit 1;\nAnd we're back to taking forever.\n\nSo I have 2 questions:\n\n- Is there a better query for this purpose? Mine works when coalesced,\nbut it seems a little brute-force and the random() sorting, while\nkinda nice, is slow.\n\n- Is this in any way expected? I know that nulls sometimes cause\nproblems, but why is it taking forever even when trying to filter\nthose out?\n\nThanks.\n\nPeter\n\nThe gory details:\n- There is an btree index on people(uid), and there are ~6300 rows, of\nwhich ~1300 have null uids.\n\n- EXPLAIN output (I couldn't get EXPLAIN ANALYZE output from the first\ntwo queries since they took too long):\n=> explain select a.uid from generate_series(1000, 32767) as a(uid)\nwhere a.uid not in (select uid from people) order by random() limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Limit (cost=40025.57..40025.60 rows=10 width=4)\n -> Sort (cost=40025.57..40026.82 rows=500 width=4)\n Sort Key: random()\n -> Function Scan on generate_series a\n(cost=693.16..40003.16 rows=500 width=4)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=693.16..756.03 rows=6287 width=2)\n -> Seq Scan on people (cost=0.00..686.87\nrows=6287 width=2)\n(8 rows)\n\n=> explain select a.uid from generate_series(1000, 32767) as a(uid)\nwhere a.uid not in (select uid from people where uid is not null)\norder by random() limit 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------\n Limit (cost=31486.71..31486.73 rows=10 width=4)\n -> Sort (cost=31486.71..31487.96 rows=500 width=4)\n Sort Key: random()\n -> Function Scan on generate_series a\n(cost=691.79..31464.29 rows=500 width=4)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=691.79..741.00 rows=4921 width=2)\n -> Seq Scan on people (cost=0.00..686.87\nrows=4921 width=2)\n Filter: (uid IS NOT NULL)\n(9 rows)\n\n=> explain select a.uid from generate_series(1000, 32767) as a(uid)\nwhere a.uid not in (select coalesce(uid, 0) from people) order by\nrandom() limit 1;\n QUERY PLAN\n----------------------------------------------------------------------------------------\n Limit (cost=756.97..756.99 rows=10 width=4)\n -> Sort (cost=756.97..758.22 rows=500 width=4)\n Sort Key: random()\n -> Function Scan on generate_series a (cost=718.30..734.55\nrows=500 width=4)\n Filter: (NOT (hashed 
subplan))\n SubPlan\n -> Seq Scan on people (cost=0.00..702.59 rows=6287 width=2)\n(7 rows)\n\n=> explain analyze select a.uid from generate_series(1000, 32767) as\na(uid) where a.uid not in (select coalesce(uid, 0) from people) order\nby random() limit 1;\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=756.97..756.99 rows=10 width=4) (actual\ntime=370.444..370.554 rows=10 loops=1)\n -> Sort (cost=756.97..758.22 rows=500 width=4) (actual\ntime=370.434..370.472 rows=10 loops=1)\n Sort Key: random()\n -> Function Scan on generate_series a (cost=718.30..734.55\nrows=500 width=4) (actual time=70.018..199.540 rows=26808 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on people (cost=0.00..702.59 rows=6287\nwidth=2) (actual time=0.023..29.167 rows=6294 loops=1)\n Total runtime: 372.224 ms\n(8 rows)\n", "msg_date": "Thu, 27 Sep 2007 17:04:35 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "sequence query performance issues" }, { "msg_contents": "Peter Koczan wrote:\n> Hello,\n> \n> I have a weird performance issue with a query I'm testing. Basically,\n> I'm trying to port a function that generates user uids, and since\n> postgres offers a sequence generator function, I figure I'd take\n> advantage of that. Basically, I generate our uid range, filter out\n> those which are in use, and randomly pick however many I need.\n> However, when I run it it takes forever (>10 minutes and I get nothing\n> so I cancelled the query) and cpu usage on the server is maxed out.\n\nI'd suspect either an unconstrained join or looping through seq-scans.\n\n> Here's my query (I'll post the explain output later so as not to\n> obscure my question):\n> => select a.uid from generate_series(1000, 32767) as a(uid) where\n> a.uid not in (select uid from people) order by random() limit 1;\n\nI let this run to it's conclusion and it's the materialize. If you see, \nit's materializing the result-set once for every value it tests against \n(loops=31768)\n\n QUERY \nPLAN\n----------------------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=62722.66..62722.67 rows=1 width=4) (actual \ntime=189963.485..189963.485 rows=0 loops=1)\n -> Sort (cost=62722.66..62723.91 rows=500 width=4) (actual \ntime=189961.063..189961.063 rows=0 loops=1)\n Sort Key: random()\n -> Function Scan on generate_series a (cost=184.00..62700.25 \nrows=500 width=4) (actual time=189960.797..189960.797 rows=0 loops=1)\n Filter: (NOT (subplan))\n SubPlan\n -> Materialize (cost=184.00..284.00 rows=10000 \nwidth=2) (actual time=0.000..2.406 rows=9372 loops=31768)\n -> Seq Scan on people (cost=0.00..174.00 \nrows=10000 width=2) (actual time=0.055..7.181 rows=10000 loops=1)\n Total runtime: 189967.150 ms\n\nHmm - why is it doing that? It's clearly confused about something.\n\nI suspect the root of the problem is that it doesn't know what \ngenerate_series() will return. 
To the planner it's just another \nset-returning function.\n\nThis means it's getting (i) the # of rows wrong (rows=500) and also \ndoesn't know (ii) there will be no nulls or (iii) what the range of \nvalues returned will be.\n\nEasy enough to test:\n\nCREATE TEMP TABLE all_uids (uid int2);\nINSERT INTO all_uids SELECT generate_series(1000,32767);\nANALYSE all_uids;\n\nEXPLAIN ANALYSE SELECT a.uid\nFROM all_uids a\nWHERE a.uid NOT IN (SELECT uid FROM people)\nORDER BY random() LIMIT 1;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------\n Limit (cost=1884.14..1884.14 rows=1 width=2) (actual \ntime=39.019..39.019 rows=0 loops=1)\n -> Sort (cost=1884.14..1923.85 rows=15884 width=2) (actual \ntime=39.014..39.014 rows=0 loops=1)\n Sort Key: random()\n -> Seq Scan on all_uids a (cost=199.00..775.81 rows=15884 \nwidth=2) (actual time=38.959..38.959 rows=0 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on people (cost=0.00..174.00 rows=10000 \nwidth=2) (actual time=0.046..7.282 rows=10000 loops=1)\n Total runtime: 39.284 ms\n\nThat's more sensible.\n\nI'd actually use a table to track unused_uids and have triggers that \nkept everything in step. However, if you didn't want to do that, I'd try \na left-join.\n\nEXPLAIN ANALYSE\nSELECT a.uid\nFROM generate_series(1000, 32767) as a(uid) LEFT JOIN people p ON \na.uid=p.uid\nWHERE\n p.uid IS NULL\nORDER BY random() LIMIT 1;\n\nNot ideal, but like I say I'd use an unused_uids table. If nothing else, \nI'd be wary about immediately re-using a uid - your db+application might \ncope fine, but these values have a tendency to be referred to elsewhere.\n\nHTH\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 28 Sep 2007 09:06:11 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence query performance issues" }, { "msg_contents": "Richard Huxton <[email protected]> writes:\n> Hmm - why is it doing that?\n\nI'm betting that the OP's people.uid column is not an integer. Existing\nPG releases can't use hashed subplans for cross-data-type comparisons\n(8.3 will be a bit smarter).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 28 Sep 2007 10:16:19 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence query performance issues " }, { "msg_contents": "Tom Lane wrote:\n> Richard Huxton <[email protected]> writes:\n>> Hmm - why is it doing that?\n> \n> I'm betting that the OP's people.uid column is not an integer. Existing\n> PG releases can't use hashed subplans for cross-data-type comparisons\n> (8.3 will be a bit smarter).\n\nLooked like an int2 to me (width=2, max value ~ 32k)\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Fri, 28 Sep 2007 15:43:41 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: sequence query performance issues" }, { "msg_contents": "> > Hmm - why is it doing that?\n>\n> I'm betting that the OP's people.uid column is not an integer. Existing\n> PG releases can't use hashed subplans for cross-data-type comparisons\n> (8.3 will be a bit smarter).\n\n*light bulb* Ahhhhhhh, that's it. 
So, I guess the solution is either\nto cast the column or wait for 8.3 (which isn't a problem since the\nport won't be done until 8.3 is released anyway).\n\nThanks again.\n\nPeter\n", "msg_date": "Fri, 28 Sep 2007 12:33:35 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence query performance issues" }, { "msg_contents": "> *light bulb* Ahhhhhhh, that's it. So, I guess the solution is either\n> to cast the column or wait for 8.3 (which isn't a problem since the\n> port won't be done until 8.3 is released anyway).\n\nJust a quick bit of follow-up:\n\nThis query works and is equivalent to what I was trying to do (minus\nthe randomization and limiting):\n=> select a.uid from generate_series(1000, 32000) as a(uid) where\na.uid::smallint not in (select uid from people where uid is not null);\n\nIt turns out that this and using coalesce are a wash in terms of\nperformance, usually coming within 10 ms of each other no matter what\nlimit and ordering constraints you put on the queries.\n\nPeter\n\n=> explain analyze select a.uid from generate_series(1000, 32767) as\na(uid) where a.uid not in (select coalesce(uid, 0) from people);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series a (cost=718.41..733.41 rows=500\nwidth=4) (actual time=68.742..186.340 rows=26808 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on people (cost=0.00..702.68 rows=6294 width=2)\n(actual time=0.025..28.368 rows=6294 loops=1)\n Total runtime: 286.311 ms\n(5 rows)\n\n=> explain analyze select a.uid from generate_series(1000, 32767) as\na(uid) where a.uid::smallint not in (select uid from people where uid\nis not null);\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------\n Function Scan on generate_series a (cost=699.34..716.84 rows=500\nwidth=4) (actual time=58.508..177.683 rows=26808 loops=1)\n Filter: (NOT (hashed subplan))\n SubPlan\n -> Seq Scan on people (cost=0.00..686.94 rows=4958 width=2)\n(actual time=0.017..23.123 rows=4971 loops=1)\n Filter: (uid IS NOT NULL)\n Total runtime: 277.699 ms\n(6 rows)\n", "msg_date": "Mon, 1 Oct 2007 14:23:02 -0500", "msg_from": "\"Peter Koczan\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: sequence query performance issues" } ]
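Richard's unused_uids suggestion can be sketched concretely. The bookkeeping below is hypothetical and untested (returning a uid to the pool when a person is removed is left out), and it reuses the smallint cast from the working query above so the NOT IN can still be hashed:

CREATE TABLE unused_uids (uid int2 PRIMARY KEY);

INSERT INTO unused_uids
SELECT a.uid::int2
FROM generate_series(1000, 32767) AS a(uid)
WHERE a.uid::int2 NOT IN (SELECT uid FROM people WHERE uid IS NOT NULL);

-- Claim one at random inside the same transaction that inserts the new person;
-- a concurrent caller that loses the race gets zero rows back and retries.
DELETE FROM unused_uids
WHERE uid = (SELECT uid FROM unused_uids ORDER BY random() LIMIT 1)
RETURNING uid;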
[ { "msg_contents": "Hi -\n\nThis has been happening more recently. Our database hangs after a VACUUM\nand is unresponsive when we come in next morning.\n\nThe vacuum job runs at 03:00 am daily.\nThe command is : /usr/local/pgsql/bin/vacuumdb --full -d DbName\n\nAlso, what exactly does this mean VACUUM waiting. Is there a reason why it\nis never emerging from the VACUUM job?\nI understand that doing a vacuumdb --full causes the tables to lock (not\nreally sure about the workings of vacuum).\n\nAny light on this would be really appreciated.\n\nThanks,\nRadhika\n\n\nBelow is what ps -ef |grep postgres shows:\n\n 5530 ? S 0:13 /usr/local/pgsql/bin/postmaster -i\n 5534 ? S 0:01 postgres: stats buffer process\n 5535 ? S 0:04 postgres: stats collector process\n 5621 ? S 0:53 postgres: slony myDB 10.142.20.50 idle\n 5626 ? S 0:51 postgres: slony myDB 10.142.20.50 idle\n 5627 ? S 0:34 postgres: slony myDB 10.142.20.50 idle\n 5628 ? S 5:40 postgres: slony myDB 10.142.20.50 idle\n 5637 ? S 2:09 postgres: slony myDB 10.132.20.26 idle\n 5638 ? S 1:56 postgres: slony myDB 10.132.20.26 idle\n 5745 ? S 42:08 postgres: abc myDB [local] idle\n 20774 ? S 4:29 postgres: abc myDB [local] idle\n 20775 ? S 0:00 postgres: abc myDB [local] idle in\ntransaction\n 20776 ? S 0:00 postgres: abc myDB [local] idle\n 17509 ? S 0:06 postgres: abc myDB [local] VACUUM waiting\n 24656 ? S 0:00 postgres: abc myDB [local] INSERT waiting\n 30489 ? S 0:00 postgres: abc myDB [local] SELECT waiting\n 30637 ? S 0:00 postgres: abc myDB [local] UPDATE waiting\n 30647 ? S 0:00 postgres: abc myDB [local] UPDATE waiting\n 30668 ? S 0:00 postgres: abc myDB [local] UPDATE waiting\n\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nHi -This has been happening more recently. Our database  hangs after a VACUUM and is unresponsive when we come in next morning.The vacuum job runs at 03:00 am daily.The command is : /usr/local/pgsql/bin/vacuumdb --full -d DbName\nAlso, what exactly does this mean VACUUM waiting. Is there a reason why it is never emerging from the VACUUM job?I understand that doing a vacuumdb --full causes the tables to lock (not really sure about the workings of vacuum).\nAny light on this would be really appreciated.Thanks,RadhikaBelow is what ps -ef |grep postgres shows: 5530 ?        S      0:13 /usr/local/pgsql/bin/postmaster -i     5534 ?        S      0:01 postgres: stats buffer process    \n     5535 ?        S      0:04 postgres: stats collector process        5621 ?        S      0:53 postgres: slony myDB 10.142.20.50 idle     5626 ?        S      0:51 postgres: slony myDB \n10.142.20.50 idle     5627 ?        S      0:34 postgres: slony myDB 10.142.20.50 idle     5628 ?        S      5:40 postgres: slony myDB \n10.142.20.50 idle     5637 ?        S      2:09 postgres: slony myDB 10.132.20.26 idle     5638 ?        S      1:56 postgres: slony myDB 10.132.20.26\n idle     5745 ?        S     42:08 postgres: abc myDB [local] idle      20774 ?        S      4:29 postgres: abc myDB [local] idle      20775 ?        S      0:00 postgres: abc myDB [local] idle in transaction\n    20776 ?        S      0:00 postgres: abc myDB [local] idle      17509 ?        S      0:06 postgres: abc myDB [local] VACUUM waiting    24656 ?        S      0:00 postgres: abc myDB [local] INSERT waiting\n    30489 ?        S      0:00 postgres: abc myDB [local] SELECT waiting    30637 ?        S      0:00 postgres: abc myDB [local] UPDATE waiting    30647 ?        
", "msg_date": "Fri, 28 Sep 2007 10:28:03 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Postgres 7.4.2 hanging when vacuum full is run" }, { "msg_contents": "On Sep 28, 2007, at 10:28 AM, Radhika S wrote:\n\n>     20775 ?        S      0:00 postgres: abc myDB [local] idle in \n> transaction\n>     20776 ?        S      0:00 postgres: abc myDB [local] idle\n>     17509 ?        S      0:06 postgres: abc myDB [local] VACUUM \n> waiting\n>     24656 ?        S      0:00 postgres: abc myDB [local] INSERT \n> waiting\n\nYou're vacuum is probably waiting for the \"idle in transaction\" \nsession to finish, so it can clean up.  It can't take a lock if your \ntransaction has locks.  Your other tasks are probably waiting behind \nthe vacuum.  Don't leave your transactions open for a long time.  it \nis bad.\n", "msg_date": "Fri, 28 Sep 2007 10:53:20 -0400", "msg_from": "Vivek Khera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4.2 hanging when vacuum full is run" }, { "msg_contents": "On top of what Vivek said, you need to update your pg install. 7.4.2\nhad a few data eating bugs if I remember correctly. 7.4 branch is up\nto 7.4.18, and those are a lot of bug fixes (2+ years) you're missing.\n If one of those bugs eats your data, don't expect any sympathy.\n", "msg_date": "Fri, 28 Sep 2007 10:28:39 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Postgres 7.4.2 hanging when vacuum full is run" } ]
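Before killing anything, the blocking session Vivek describes can usually be identified from the statistics views. A sketch for the 7.4/8.x-era catalogs discussed in this thread (stats_command_string must be on for current_query to be filled in; later releases renamed these columns to pid, query and state, so adjust for your version):

-- Sessions sitting "idle in transaction", the usual reason a VACUUM FULL waits:
SELECT procpid, usename, current_query, query_start   -- query_start may be absent on the oldest releases
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction';

-- Ungranted lock requests, to see which relation the vacuum is queued on:
SELECT pid, relation::regclass, mode, granted
FROM pg_locks
WHERE NOT granted;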
[ { "msg_contents": "In keeping with some of the recent threads regarding the planner...\n \nI have a fair sized data warehouse in which I am trying to perform an aggregation, but getting OOM errors in Postgres(8.2.4).\nI believe the reason for the OOM is that Postgres is attempting to do a hash aggregation, but it has grossly underestimated the rows resulting from the aggregation.\nThe data in the database is very uniformly distributed so I don't believe that the table stats are the cause of the problem. \nThis may be related to table inheritance, and can be demonstrated pretty easily.\n \nCREATE TABLE foo(a INT);ANALYZE foo;\nCREATE TABLE foo_1() INHERITS(foo);insert into foo_1 select generate_series(1,100000);insert into foo_1 select generate_series(1,100000);insert into foo_1 select generate_series(1,100000);ANALYZE foo_1;\nCREATE TABLE foo_2() INHERITS(foo);insert into foo_2 select generate_series(1,100000);insert into foo_2 select generate_series(1,100000);insert into foo_2 select generate_series(1,100000);ANALYZE foo_2;\n-- If I query a particular partition, the plan estimate for the hash aggregate is good\nEXPLAIN ANALYZE SELECT a,COUNT(*) from foo_1 group by a;\n HashAggregate (cost=5822.00..7061.01 rows=99121 width=4) (actual time=554.556..657.121 rows=100000 loops=1) -> Seq Scan on foo_1 (cost=0.00..4322.00 rows=300000 width=4) (actual time=0.014..203.290 rows=300000 loops=1) Total runtime: 712.211 ms\n-- If I query the base table, the plan estimate for the hash aggregate is off by several orders of magnitude\nEXPLAIN ANALYZE SELECT a,COUNT(*) from foo group by a;\nHashAggregate (cost=11686.10..11688.60 rows=200 width=4) (actual time=1724.188..1826.630 rows=100000 loops=1) -> Append (cost=0.00..8675.40 rows=602140 width=4) (actual time=0.016..1045.134 rows=600000 loops=1) -> Seq Scan on foo (cost=0.00..31.40 rows=2140 width=4) (actual time=0.001..0.001 rows=0 loops=1) -> Seq Scan on foo_1 foo (cost=0.00..4322.00 rows=300000 width=4) (actual time=0.012..205.130 rows=300000 loops=1) -> Seq Scan on foo_2 foo (cost=0.00..4322.00 rows=300000 width=4) (actual time=0.011..203.542 rows=300000 loops=1) Total runtime: 1879.550 ms(6 rows)\n-- Is there something magical about the hash aggregate estimate of 200 rows?\n-- I can have 30,000 or 300,000 rows in each child partition table and multiple partition's with different values of \"a\" and yet it always come up with 200.\n-- eg.\ncreate table foo_3() inherits(foo);insert into foo_3 select generate_series(100000,300000);analyze foo_3;\nEXPLAIN ANALYZE SELECT a,COUNT(*) from foo group by a;\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------ HashAggregate (cost=15568.12..15570.62 rows=200 width=4) (actual time=2386.813..2691.254 rows=300000 loops=1) -> Append (cost=0.00..11557.41 rows=802141 width=4) (actual time=0.016..1403.121 rows=800001 loops=1) -> Seq Scan on foo (cost=0.00..31.40 rows=2140 width=4) (actual time=0.002..0.002 rows=0 loops=1) -> Seq Scan on foo_1 foo (cost=0.00..4322.00 rows=300000 width=4) (actual time=0.013..201.549 rows=300000 loops=1) -> Seq Scan on foo_2 foo (cost=0.00..4322.00 rows=300000 width=4) (actual time=0.010..211.332 rows=300000 loops=1) -> Seq Scan on foo_3 foo (cost=0.00..2882.01 rows=200001 width=4) (actual time=0.011..137.262 rows=200001 loops=1) Total runtime: 2851.990 ms\nIs this a bug, or some subtlety of the Postgres query planner?\n \nIn my particular case, I am doing a join out to another small table as part 
of the aggregation and using constraint exclusion on the partitions, but I believe the cause of my problem is the same.\n \nI am running on 64bit FreeBSD/Postgres 8.2.4 on a machine with 8GB of memory.\n \nAn explain of the query and resulting OOM diagnostics follow:\n \nThe aggregation will result in 5,000,000 rows, not 5,000.In the stats_dtl table there are 12 observations(hourly) for each customerThere are 125 different policy_id and 25 different policy_group_id'sPolicy's and policy_groups are even distributed across all customers\n \n userquery Scan table_4760 (cost=2897243.60..2897418.60 rows=5000 width=152) -> HashAggregate (cost=2897243.60..2897368.60 rows=5000 width=40) -> Hash Join (cost=7.81..2241002.00 rows=37499520 width=40) Hash Cond: (public.customer_stats_dtl.policy_id = policy.policy_id) -> Append (cost=0.00..1641001.87 rows=59999232 width=40) -> Seq Scan on customer_stats_dtl (cost=0.00..22.45 rows=4 width=40) Filter: ((period_start >= '2007-09-08 20:00:00-04'::timestamp with time zone) AND (period_start < '2007-09-09 08:00:00-04'::timestamp with time zone)) -> Seq Scan on customer_stats_dtl_027 customer_stats_dtl (cost=0.00..1640979.42 rows=59999228 width=40) Filter: ((period_start >= '2007-09-08 20:00:00-04'::timestamp with time zone) AND (period_start < '2007-09-09 08:00:00-04'::timestamp with time zone)) -> Hash (cost=6.25..6.25 rows=125 width=8) -> Seq Scan on policy (cost=0.00..6.25 rows=125 width=8) TopMemoryContext: 268400 total in 32 blocks; 20912 free (43 chunks); 247488 usedunnamed prepared statement: 2097152 total in 8 blocks; 721456 free (2 chunks); 1375696 usedTopTransactionContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 usedSPI Plan: 3072 total in 2 blocks; 1320 free (0 chunks); 1752 usedSPI Plan: 7168 total in 3 blocks; 1312 free (0 chunks); 5856 usedSPI Plan: 3072 total in 2 blocks; 1320 free (0 chunks); 1752 usedSPI Plan: 3072 total in 2 blocks; 928 free (0 chunks); 2144 usedSPI Plan: 7168 total in 3 blocks; 3920 free (0 chunks); 3248 usedSPI Plan: 3072 total in 2 blocks; 1224 free (0 chunks); 1848 usedPL/PgSQL function context: 24576 total in 2 blocks; 12008 free (9 chunks); 12568 usedCFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedRendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedPLpgSQL function cache: 24224 total in 2 blocks; 3744 free (0 chunks); 20480 usedType information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 usedS_2: 1024 total in 1 blocks; 488 free (0 chunks); 536 usedS_1: 1024 total in 1 blocks; 488 free (0 chunks); 536 usedPrepared Queries: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 usedRecord information cache: 24576 total in 2 blocks; 15984 free (5 chunks); 8592 usedMessageContext: 8192 total in 1 blocks; 7744 free (0 chunks); 448 usedOperator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedsmgr relation table: 57344 total in 3 blocks; 21872 free (8 chunks); 35472 usedTransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 usedPortal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedPortalMemory: 8192 total in 1 blocks; 7888 free (1 chunks); 304 usedPortalHeapMemory: 1024 total in 1 blocks; 768 free (0 chunks); 256 usedExecutorState: 122880 total in 4 blocks; 68728 free (2 chunks); 54152 usedHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedHashBatchContext: 32888 total in 3 blocks; 14512 free (0 chunks); 18376 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 0 
total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedAggContext: 1033887744 total in 133 blocks; 688 free (0 chunks); 1033887056 used **TupleHashTable: 566485040 total in 78 blocks; 2064896 free (304 chunks); 564420144 used **ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 8192 total in 1 blocks; 8040 free (1 chunks); 152 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedRelcache by OID: 57344 total in 3 blocks; 31024 free (6 chunks); 26320 usedCacheMemoryContext: 4679720 total in 23 blocks; 404104 free (1 chunks); 4275616 used\n \nIf you have actually read this far, the following wrinkle is where I am currently stuck.\n \nI set ENABLE_HASHAGG=OFF and still get an OOM :-(\nIn this situation Postgres runs for much longer before getting the OOM and I believe it has actually written some records to the aggregationI base this on seeing autovacuum kick off against the aggregation table after the query fails, which may not be valid.\nuserquery Scan table_4760 (cost=8771233.77..9521474.17 rows=5000 width=152) -> GroupAggregate (cost=8771233.77..9521349.17 rows=5000 width=40) -> Sort (cost=8771233.77..8864982.57 rows=37499520 width=40) Sort Key: public.customer_stats_dtl.user_id, policy.policy_group_id -> Hash Join (cost=7.81..2241002.00 rows=37499520 width=40) Hash Cond: (public.customer_stats_dtl.policy_id = policy.policy_id) -> Append (cost=0.00..1641001.87 rows=59999232 width=40) -> Seq Scan on customer_stats_dtl (cost=0.00..22.45 rows=4 width=40) Filter: ((period_start >= '2007-09-08 20:00:00-04'::timestamp with time zone) AND (period_start < '2007-09-09 08:00:00-04'::timestamp with time zone)) -> Seq Scan on customer_stats_dtl_027 customer_stats_dtl (cost=0.00..1640979.42 rows=59999228 width=40) Filter: ((period_start >= '2007-09-08 20:00:00-04'::timestamp with time zone) AND (period_start < '2007-09-09 08:00:00-04'::timestamp with time zone)) -> Hash (cost=6.25..6.25 rows=125 width=8) -> Seq Scan on policy (cost=0.00..6.25 rows=125 width=8)\n \nTopMemoryContext: 874608 total in 106 blocks; 14240 free (45 chunks); 860368 usedunnamed prepared statement: 2097152 total in 8 blocks; 720544 free (2 chunks); 1376608 usedTopTransactionContext: 8192 total in 1 blocks; 7648 free (0 chunks); 544 usedAfterTriggerEvents: 758112256 total in 102 blocks; 1792 free (8 chunks); 758110464 usedS_2: 1024 total in 1 blocks; 488 free (0 chunks); 536 usedType information cache: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 usedSPI Plan: 3072 total in 2 blocks; 1320 free (0 chunks); 1752 usedSPI Plan: 7168 total in 3 blocks; 1312 free (0 chunks); 5856 usedSPI Plan: 3072 total in 2 blocks; 1320 free (0 chunks); 1752 usedSPI Plan: 3072 total in 2 blocks; 928 free (0 chunks); 2144 usedSPI Plan: 7168 total in 3 blocks; 3920 free (0 chunks); 3248 usedSPI Plan: 3072 total in 2 blocks; 1224 free (0 chunks); 1848 usedPL/PgSQL function context: 24576 total in 2 blocks; 12008 free (9 chunks); 12568 usedCFuncHash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedRendezvous variable hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedPLpgSQL function cache: 24224 total in 2 blocks; 3744 free (0 chunks); 20480 usedS_1: 1024 total in 1 blocks; 488 free (0 chunks); 536 usedPrepared Queries: 24576 total in 2 blocks; 11888 free (5 chunks); 12688 usedRecord information cache: 24576 total in 2 
blocks; 15984 free (5 chunks); 8592 usedMessageContext: 8192 total in 1 blocks; 7744 free (0 chunks); 448 usedOperator class cache: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedsmgr relation table: 253952 total in 5 blocks; 40912 free (16 chunks); 213040 usedTransactionAbortContext: 32768 total in 1 blocks; 32736 free (0 chunks); 32 usedPortal hash: 8192 total in 1 blocks; 1680 free (0 chunks); 6512 usedPortalMemory: 8192 total in 1 blocks; 7888 free (1 chunks); 304 usedPortalHeapMemory: 1024 total in 1 blocks; 768 free (0 chunks); 256 usedExecutorState: 122880 total in 4 blocks; 61720 free (15 chunks); 61160 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedHashTableContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedHashBatchContext: 32888 total in 3 blocks; 14512 free (0 chunks); 18376 usedTupleSort: 822437960 total in 105 blocks; 424201336 free (5267294 chunks); 398236624 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedAggContext: 8192 total in 1 blocks; 8000 free (3 chunks); 192 usedExprContext: 8192 total in 1 blocks; 8000 free (0 chunks); 192 usedExprContext: 8192 total in 1 blocks; 8160 free (0 chunks); 32 usedExprContext: 8192 total in 1 blocks; 8040 free (0 chunks); 152 usedRelcache by OID: 253952 total in 5 blocks; 117488 free (8 chunks); 136464 usedCacheMemoryContext: 17262632 total in 25 blocks; 3573056 free (0 chunks); 13689576 used\n_________________________________________________________________\nExplore the seven wonders of the world\nhttp://search.msn.com/results.aspx?q=7+wonders+world&mkt=en-US&form=QBRE\n\n\n\n\nIn keeping with some of the recent threads regarding the planner...\n \nI have a fair sized data warehouse in which I am trying to perform an aggregation, but getting OOM errors in Postgres(8.2.4).\nI believe the reason for the OOM is that Postgres is attempting to do a hash aggregation, but it has grossly underestimated the rows resulting from the aggregation.\nThe data in the database is very uniformly distributed so I don't believe that the table stats are the cause of the problem. 
", "msg_date": "Fri, 28 Sep 2007 13:50:32 -0400", "msg_from": "Arctic Toucan <[email protected]>", "msg_from_op": true, "msg_subject": "OOM Errors as a result of table inheritance and a bad plan(?)" }, { "msg_contents": "Arctic Toucan <[email protected]> writes:\n> -- Is there something magical about the hash aggregate estimate of 200 rows?\n\nYeah, it's the default :-(\n\n> Is this a bug, or some subtlety of the Postgres query planner?\n\nIt's an, um, known deficiency --- the planner hasn't got any idea how to\nconstruct aggregated statistics for an inheritance tree.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 28 Sep 2007 16:56:49 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: OOM Errors as a result of table inheritance and a bad plan(?) " } ]
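Tom's answer (the flat 200-row default for a GROUP BY over an inheritance tree) suggests one workaround that can be sketched with the foo example from the first message: aggregate each child table directly, where per-table statistics give realistic group counts, and re-aggregate the partial results. This is illustrative only; whether the real customer_stats_dtl query can be decomposed the same way depends on its aggregates, and raising work_mem or SET enable_hashagg = off (as tried above) only works around the misestimate rather than fixing it.

SELECT a, sum(cnt) AS cnt
FROM (
    SELECT a, count(*) AS cnt FROM ONLY foo GROUP BY a   -- parent itself (empty in the example)
    UNION ALL
    SELECT a, count(*) AS cnt FROM foo_1 GROUP BY a
    UNION ALL
    SELECT a, count(*) AS cnt FROM foo_2 GROUP BY a
    UNION ALL
    SELECT a, count(*) AS cnt FROM foo_3 GROUP BY a
) s
GROUP BY a;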
[ { "msg_contents": "Hello,\n\nI was wondering whether any thought has previously been given to\nhaving a non-blocking \"vacuum full\", in the sense of space reclamation\nand table compactation.\n\nThe motivation is that it is useful to be able to assume that\noperations that span a table will *roughtly* scale linearly with the\nsize of the table. But when you have a table that over an extended\nperiod of time begins small, grows large, and grows small again (where\n\"large\" might be, say, 200 GB), that assumption is most definitely\nnot correct when you're on the downward slope of that graph. Having\nthis assumption remain true simplifies things a lot for certain\nworkloads (= my particular work load ;)).\n\nI have only looked very very briefly at the PG code so I don't know\nhow far fetched it is, but my thought was that it should be possible\nto have a slow background process (similar to normal non-full vacuums\nnows) that would, instead of registering dead tuples in the FSM, move\nlive tuples around.\n\nCombine that slow moving operations with a policy to a new tuple space\nallocation policy that prefers earlier locations on-disk, it should in\ntime result in a situation where the physical on-disk file contains\nonly dead tuples after a certain percentage location. At this point\nthe file can be truncated, giving space back to the OS as well as\neliminating all that dead space from having to be covered by\nsequential scans on the table.\n\nThis does of course increase the total cost of all updates and\ndeletes, but would be very useful in some senarios. It also has the\ninteresting property that the scan for live tuples to move need not\ntouch the entire table to be effective; it could by design be applied\nto the last <n> percentage of the table, where <n> would be scaled\nappropriately with the frequency of the checks relative to\nupdate/insert frequency.\n\nOther benefits:\n\n * Never vacuum full - EVER. Not even after discovering too small\n max_fsm_pages or too infrequent vacuums and needing to retroactively\n shrink the table.\n * Increased locality in general; even if one does not care about\n the diskspace or sequential scanning. Particularly relevant for low-update frequency\n tables suffering from sudden shrinkage, where a blocking VACUUM FULL Is not\n acceptable.\n * Non-blocking CLUSTER is perhaps suddently more trivial to implement?\n Or at least SORTOFCLUSTER when you want it for reasons other than\n perfect order (\"mostly sorted\").\n\nOpinions/thoughts?\n\n-- \n/ Peter Schuller\n\nPGP userID: 0xE9758B7D or 'Peter Schuller <[email protected]>'\nKey retrieval: Send an E-Mail to [email protected]\nE-Mail: [email protected] Web: http://www.scode.org", "msg_date": "Fri, 28 Sep 2007 20:06:50 +0200", "msg_from": "Peter Schuller <[email protected]>", "msg_from_op": true, "msg_subject": "Non-blocking vacuum full" }, { "msg_contents": "Peter Schuller wrote:\n> I have only looked very very briefly at the PG code so I don't know\n> how far fetched it is, but my thought was that it should be possible\n> to have a slow background process (similar to normal non-full vacuums\n> nows) that would, instead of registering dead tuples in the FSM, move\n> live tuples around.\n\nWhat you've described is actually very close to VACUUM FULL. VACUUM FULL\nneeds to take an exclusive lock to lock out concurrent scanners that\nmight miss or see a tuple twice, when a live tuple is moved. 
That's the\nfundamental problem you need to solve.\n\nI think it's doable, if you take a copy of the tuple, and set the ctid\npointer on the old one like an UPDATE, and wait until the old tuple is\nno longer visible to anyone before removing it. It does require some\nchanges to tuple visibility code. For example, a transaction running in\nserializable mode shouldn't throw a serialization error when it tries to\nupdate an old, moved row version, but follow the ctid pointer instead.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Fri, 28 Sep 2007 20:31:33 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-blocking vacuum full" }, { "msg_contents": "Heikki Linnakangas wrote:\n> Peter Schuller wrote:\n>> to have a slow background process (similar to normal non-full vacuums\n> ... \n> I think it's doable, if you take a copy of the tuple, and set the ctid\n> pointer on the old one like an UPDATE, and wait until the old tuple is\n> no longer visible to anyone before removing it. It does require some\n> changes to tuple visibility code.\n\nWouldn't just having this slow background process\nrepeatedly alternating between\n update table set anycol=anycol where ctid > [some ctid near the end]\nand running normal VACUUM statements do what the original poster\nwas asking? And with 8.3, I guess also avoiding HOT?\n\n\n", "msg_date": "Fri, 28 Sep 2007 19:46:28 -0700", "msg_from": "Ron Mayer <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-blocking vacuum full" }, { "msg_contents": "Ron Mayer wrote:\n> Heikki Linnakangas wrote:\n>> Peter Schuller wrote:\n>>> to have a slow background process (similar to normal non-full vacuums\n>> ... \n>> I think it's doable, if you take a copy of the tuple, and set the ctid\n>> pointer on the old one like an UPDATE, and wait until the old tuple is\n>> no longer visible to anyone before removing it. It does require some\n>> changes to tuple visibility code.\n> \n> Wouldn't just having this slow background process\n> repeatedly alternating between\n> update table set anycol=anycol where ctid > [some ctid near the end]\n> and running normal VACUUM statements do what the original poster\n> was asking? \n\nAlmost. Updaters would block waiting for the UPDATE, and updaters in\nserializable mode would throw serialization errors. And the \"WHERE ctid\n> ?\" would actually result in a seq scan scanning the whole table, since\nour tid scans don't support inequality searches.\n\n> And with 8.3, I guess also avoiding HOT?\n\nHOT shouldn't cause any complications here AFAICS.\n\n-- \n Heikki Linnakangas\n EnterpriseDB http://www.enterprisedb.com\n", "msg_date": "Sat, 29 Sep 2007 09:05:18 +0100", "msg_from": "Heikki Linnakangas <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Non-blocking vacuum full" } ]
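Ron's alternating UPDATE-and-VACUUM idea can be sketched a little more concretely. The table name t, its key column id, and the block number in the tid literal are all placeholders; whether ctid accepts an inequality comparison at all depends on the server version, and as Heikki notes it is executed as a filter over a full sequential scan, so this is only an illustration of the approach rather than an efficient tool.

-- How many 8 kB pages the table occupies right now.
SELECT relpages FROM pg_class WHERE relname = 't';

-- Rewrite rows living in the tail pages; their new versions go to
-- whatever free space the FSM offers, which after a recent vacuum is
-- typically earlier in the file (with 8.3's HOT the update may simply
-- stay on the same page if there is room there).
UPDATE t SET id = id WHERE ctid >= '(90000,1)'::tid;

-- A plain, non-blocking VACUUM can then truncate the emptied tail pages
-- and hand that space back to the operating system.
VACUUM t;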
[ { "msg_contents": "Hi List;\n\nany suggestions for improving \"LIKE '%text%'\" queries?\n\n\nThanks in advance\n", "msg_date": "Tue, 2 Oct 2007 11:31:45 -0600", "msg_from": "Kevin Kempter <[email protected]>", "msg_from_op": true, "msg_subject": "performance of like queries" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nKevin Kempter wrote:\n> Hi List;\n> \n> any suggestions for improving \"LIKE '%text%'\" queries?\n\nfaster disks :)\n\ntake a look at pg_tgrm and tsearch2.\n\nSincerely,\n\nJoshua D. Drake\n\n\n> \n> \n> Thanks in advance\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: Don't 'kill -9' the postmaster\n> \n\n\n- --\n\n === The PostgreSQL Company: Command Prompt, Inc. ===\nSales/Support: +1.503.667.4564 24x7/Emergency: +1.800.492.2240\nPostgreSQL solutions since 1997 http://www.commandprompt.com/\n\t\t\tUNIQUE NOT NULL\nDonate to the PostgreSQL Project: http://www.postgresql.org/about/donate\nPostgreSQL Replication: http://www.commandprompt.com/products/\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.4.6 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQFHArLMATb/zqfZUUQRAiaSAJ4lbVKrKEgr9OnO6jDguALtnonm7QCggtsx\nW7dsy40KbvizyYBQYpvsIvw=\n=2J2G\n-----END PGP SIGNATURE-----\n", "msg_date": "Tue, 02 Oct 2007 14:06:20 -0700", "msg_from": "\"Joshua D. Drake\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of like queries" }, { "msg_contents": "On 10/2/07, Kevin Kempter <[email protected]> wrote:\n> Hi List;\n>\n> any suggestions for improving \"LIKE '%text%'\" queries?\n\nhttp://www.depesz.com/index.php/2007/09/15/speeding-up-like-xxx/\n", "msg_date": "Tue, 2 Oct 2007 23:49:19 +0200", "msg_from": "\"=?UTF-8?Q?Marcin_St=C4=99pnicki?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of like queries" }, { "msg_contents": "[email protected] (Kevin Kempter) writes:\n> any suggestions for improving \"LIKE '%text%'\" queries?\n\nIf you know that the 'text' portion of that query won't change, then\nyou might create a partial index on the boolean condition.\n\nThat is, \n\n create index index_foo_text on my_table (tfield) where (tfield like '%text%');\n\nI somehow doubt that is the case; more likely you want to be able to\nsearch for:\n select * from my_table where tfield like '%this%';\n select * from my_table where tfield like '%that%';\n select * from my_table where tfield like '%the other thing%';\n\nThere are basically three choices, at that point:\n\n1. Get more memory, and hope that you can have all the data get\ncached in memory.\n\n2. Get more better disk, so that you can scan the table faster on\ndisk.\n\n3. Look into tsearch2, which provides a full text search capability.\n-- \n(format nil \"~S@~S\" \"cbbrowne\" \"linuxdatabases.info\")\nhttp://cbbrowne.com/info/x.html\n\"We're born with a number of powerful instincts, which are found\nacross all cultures. Chief amongst these are a dislike of snakes, a\nfear of falling, and a hatred of popup windows\" -- Vlatko Juric-Kokic\n", "msg_date": "Tue, 02 Oct 2007 17:55:08 -0400", "msg_from": "Chris Browne <[email protected]>", "msg_from_op": false, "msg_subject": "Re: performance of like queries" } ]
[ { "msg_contents": "Hello everybody,\n\nI have just joined the list, as I am experiencing a degradation on\nperformances on my PostgreSQL instance, and I was looking for some\ninsights on how to fix/avoid it.\n\nWhat I have observed are impossibly high time on delete statements on\nsome tables.\n\nThe delete statement is very simple:\ndelete from table where pk = ?\n\nThe explain query report a single index scan on the primary key index,\nas expected.\n\nI have run vacuum using the pgAdmin tool, but to no avail.\n\nI have also dropped and recreated the indexes, again without any benefit.\n\nI have later created a copy of the table using the \"create table\ntable_copy as select * from table\" syntax.\n\nMatching the configuration of the original table also on the copy\n(indexes and constraints), I was able to delete the raws from the new\ntable with regular performances, from 20 to 100 times faster than\ndeleting from the original table.\n\nGiven this evidence, what are the best practices to fix/avoid this\nkind of problems?\n\nI am using PostgreSQL 8.1.4 both on Linux (on a Parallels virtual\nmachine with a Linux OS) and on Solaris, on a hosted zone; the Solaris\nversion is running the live DB, while the Linux instance is on my\ndevelopment machine using a snapshot of the live data.\n\nThanks for your attention.\n\nBest regards,\n\nGiulio Cesare Solaroli\n", "msg_date": "Tue, 2 Oct 2007 23:55:27 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Newbie question about degraded performance on delete statement." }, { "msg_contents": "On 2 Oct 2007 at 23:55, Giulio Cesare Solaroli wrote:\n\n> What I have observed are impossibly high time on delete statements on\n> some tables.\n> \n> The delete statement is very simple:\n> delete from table where pk = ?\n> \n> The explain query report a single index scan on the primary key index,\n> as expected.\n> \n> I have run vacuum using the pgAdmin tool, but to no avail.\n> \n> I have also dropped and recreated the indexes, again without any benefit.\n> \n> I have later created a copy of the table using the \"create table\n> table_copy as select * from table\" syntax.\n> \n> Matching the configuration of the original table also on the copy\n> (indexes and constraints), I was able to delete the raws from the new\n> table with regular performances, from 20 to 100 times faster than\n> deleting from the original table.\n\nThere may be more to that original table. What about triggers? \nrules? Perhaps there other things going on in the background.\n\n-- \nDan Langille - http://www.langille.org/\nAvailable for hire: http://www.freebsddiary.org/dan_langille.php\n\n\n", "msg_date": "Tue, 02 Oct 2007 18:02:20 -0400", "msg_from": "\"Dan Langille\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Newbie question about degraded performance on delete statement." 
}, { "msg_contents": "Giulio Cesare Solaroli wrote:\n> Hello everybody,\n>\n> I have just joined the list, as I am experiencing a degradation on\n> performances on my PostgreSQL instance, and I was looking for some\n> insights on how to fix/avoid it.\n>\n> What I have observed are impossibly high time on delete statements on\n> some tables.\n>\n> The delete statement is very simple:\n> delete from table where pk = ?\n>\n> The explain query report a single index scan on the primary key index,\n> as expected.\n>\n> I have run vacuum using the pgAdmin tool, but to no avail.\n>\n> I have also dropped and recreated the indexes, again without any benefit.\n> \nMake sure you run ANALYZE on the table in question after changes to make \nsure the stats are up to date.\n> I have later created a copy of the table using the \"create table\n> table_copy as select * from table\" syntax.\n>\n> Matching the configuration of the original table also on the copy\n> (indexes and constraints), I was able to delete the raws from the new\n> table with regular performances, from 20 to 100 times faster than\n> deleting from the original table.\n>\n> \nAs another poster indicated, this sounds like foreign constraints where \nthe postmaster process has to make sure there are no child references in \ndependent tables; if you are lacking proper indexing on those tables a \nsequential scan would be involved.\n\nPosting the DDL for the table in question and anything that might refer \nto it with an FK relationship would help the list help you.\n\nTry running the query with EXPLAIN ANALYZE ... to see what the planner \nsays. Put this in a transaction and roll it back if you want to leave \nthe data unchanged, e.g.\nBEGIN;\nEXPLAIN ANALYZE DELETE FROM foo WHERE pk = 1234; -- or whatever values \nyou'd be using\nROLLBACK;\n\nHTH,\n\nGreg Williamson\nSenior DBA\nGlobeXplorer LLC, a DigitalGlobe company\n\nConfidentiality Notice: This e-mail message, including any attachments, \nis for the sole use of the intended recipient(s) and may contain \nconfidential and privileged information and must be protected in \naccordance with those provisions. Any unauthorized review, use, \ndisclosure or distribution is prohibited. If you are not the intended \nrecipient, please contact the sender by reply e-mail and destroy all \ncopies of the original message.\n\n(My corporate masters made me say this.)\n\n", "msg_date": "Tue, 02 Oct 2007 15:39:51 -0700", "msg_from": "Greg Williamson <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Newbie question about degraded performance on delete\n statement." 
}, { "msg_contents": "Hello Gregory,\n\nOn 10/3/07, Greg Williamson <[email protected]> wrote:\n> Giulio Cesare Solaroli wrote:\n> > Hello everybody,\n> >\n> > I have just joined the list, as I am experiencing a degradation on\n> > performances on my PostgreSQL instance, and I was looking for some\n> > insights on how to fix/avoid it.\n> >\n> > What I have observed are impossibly high time on delete statements on\n> > some tables.\n> >\n> > The delete statement is very simple:\n> > delete from table where pk = ?\n> >\n> > The explain query report a single index scan on the primary key index,\n> > as expected.\n> >\n> > I have run vacuum using the pgAdmin tool, but to no avail.\n> >\n> > I have also dropped and recreated the indexes, again without any benefit.\n> >\n> Make sure you run ANALYZE on the table in question after changes to make\n> sure the stats are up to date.\n\nI have run Analyze (always through the pgAdmin interface), and it did\nnot provide any benefits.\n\n\n> > I have later created a copy of the table using the \"create table\n> > table_copy as select * from table\" syntax.\n> >\n> > Matching the configuration of the original table also on the copy\n> > (indexes and constraints), I was able to delete the raws from the new\n> > table with regular performances, from 20 to 100 times faster than\n> > deleting from the original table.\n> >\n> >\n> As another poster indicated, this sounds like foreign constraints where\n> the postmaster process has to make sure there are no child references in\n> dependent tables; if you are lacking proper indexing on those tables a\n> sequential scan would be involved.\n>\n> Posting the DDL for the table in question and anything that might refer\n> to it with an FK relationship would help the list help you.\n\nclipperz_connection=> \\d clipperz.rcrvrs\n Table \"clipperz.rcrvrs\"\n Column | Type | Modifiers\n----------------------+--------------------------+-----------\n id_rcrvrs | integer | not null\n id_rcr | integer | not null\n id_prvrcrvrs | integer |\n reference | character varying(1000) | not null\n header | text | not null\n data | text | not null\n version | character varying(100) | not null\n creation_date | timestamp with time zone | not null\n access_date | timestamp with time zone | not null\n update_date | timestamp with time zone | not null\n previous_version_key | text | not null\nIndexes:\n \"rcrvrs_pkey\" PRIMARY KEY, btree (id_rcrvrs)\n \"unique_rcrvrs_referecnce\" UNIQUE, btree (id_rcr, reference)\nForeign-key constraints:\n \"rcrvrs_id_prvrcrvrs_fkey\" FOREIGN KEY (id_prvrcrvrs) REFERENCES\nrcrvrs(id_rcrvrs)\n \"rcrvrs_id_rcr_fkey\" FOREIGN KEY (id_rcr) REFERENCES rcr(id_rcr)\nDEFERRABLE INITIALLY DEFERRED\n\nIs this a complete listing of all the DDL involved in defining the\ntable, or is there something possibly missing here?\n\n\n> Try running the query with EXPLAIN ANALYZE ... to see what the planner\n> says. 
Put this in a transaction and roll it back if you want to leave\n> the data unchanged, e.g.\n> BEGIN;\n> EXPLAIN ANALYZE DELETE FROM foo WHERE pk = 1234; -- or whatever values\n> you'd be using\n> ROLLBACK;\n\nI have already tried the explain plan, but only using the pgAdmin\ninterface; running it from psql shows some more data that looks very\npromising:\n\n--------------------------------------------------------------------------------------------------------------------\n Index Scan using rcrvrs_pkey on rcrvrs (cost=0.00..3.68 rows=1\nwidth=6) (actual time=2.643..2.643 rows=1 loops=1)\n Index Cond: (id_rcrvrs = 15434)\n Trigger for constraint rcrvrs_id_prvrcrvrs_fkey: time=875.992 calls=1\n Total runtime: 878.641 ms\n(4 rows)\n\nThe trigger stuff was not shown on the pgAdmin interface.\n\nI will try to add an index on the foreign key field (id_prvrcrvrs) to\nsee if this improves performances of the incriminated query.\n\nThanks for the kind attention.\n\nBest regards,\n\nGiulio Cesare\n", "msg_date": "Wed, 3 Oct 2007 08:56:57 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Newbie question about degraded performance on delete statement." } ]
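The explain output above points at the trigger for rcrvrs_id_prvrcrvrs_fkey, which has to look for rows referencing the deleted id and, with no index on that column, scans the whole table on every delete. A minimal sketch of the fix against the schema shown above; only the index name is invented.

CREATE INDEX rcrvrs_id_prvrcrvrs_idx ON clipperz.rcrvrs (id_prvrcrvrs);

ANALYZE clipperz.rcrvrs;

-- Re-check inside a transaction so nothing is actually deleted; the
-- per-trigger time reported by psql should drop from hundreds of
-- milliseconds to well under one.
BEGIN;
EXPLAIN ANALYZE DELETE FROM clipperz.rcrvrs WHERE id_rcrvrs = 15434;
ROLLBACK;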
[ { "msg_contents": "Hi,\n\nI have recently had to change our nightly jobs from running vacuum\nfull, as it has caused problems for us. Upon doing more reading on\nthis topic, I understand that vacuum full needs explicit locks on the\nentire db and explicit locking conflicts with all other locks.\n\nBut this has bought me to the question of what exactly is the\ndifference between vacuum and vacuum full. If both give back free\nspace to the disk, then why have vacuum full.\n\nThank you.\nRadhika\n\n-- \nIt is all a matter of perspective. You choose your view by choosing\nwhere to stand. --Larry Wall\n", "msg_date": "Tue, 2 Oct 2007 21:45:37 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Difference between Vacuum and Vacuum full" }, { "msg_contents": "On 10/2/07, Radhika S <[email protected]> wrote:\n> Hi,\n>\n> I have recently had to change our nightly jobs from running vacuum\n> full, as it has caused problems for us. Upon doing more reading on\n> this topic, I understand that vacuum full needs explicit locks on the\n> entire db and explicit locking conflicts with all other locks.\n>\n> But this has bought me to the question of what exactly is the\n> difference between vacuum and vacuum full. If both give back free\n> space to the disk, then why have vacuum full.\n\nVacuum analyzes the tables and indexes, and marks deleted entries as\nfree and available and puts and entry into the free space map for\nthem. The next time that table or index is updated, instead of\nappending the new tuple to the end it can be placed in the middle of\nthe table / index. this allows the database to reuse \"empty\" space in\nthe database. Also, if there are dead tuples on the very end of the\ntable or index, it can truncate the end of the file and free that\nspace up.\n\nVaccum full basically re-writes the whole file minus all the dead\ntuples, which requires it to lock the table while it is doing so.\n\nGenerally speaking, regular vacuum is preferable. Vacuum full should\nonly be used to recover lost space due to too infrequent regular\nvacuums or too small of a free space map.\n\nvacuum full is much more invasive and should be avoided unless\nabsolutely necessary.\n", "msg_date": "Tue, 2 Oct 2007 20:55:35 -0500", "msg_from": "\"Scott Marlowe\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between Vacuum and Vacuum full" }, { "msg_contents": "On 10/2/07, Radhika S <[email protected]> wrote:\n> ... why have vacuum full...\n\nSee:\nhttp://www.postgresql.org/docs/8.2/static/routine-vacuuming.html\n", "msg_date": "Tue, 2 Oct 2007 20:56:26 -0500", "msg_from": "\"=?UTF-8?Q?Rodrigo_De_Le=C3=B3n?=\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between Vacuum and Vacuum full" }, { "msg_contents": "On Tue, 2 Oct 2007 21:45:37 -0400\n\"Radhika S\" <[email protected]> wrote:\n> But this has bought me to the question of what exactly is the\n> difference between vacuum and vacuum full. If both give back free\n> space to the disk, then why have vacuum full.\n\nNot quite. \"VACUUM FULL\" returns space to the system. \"VACUUM\" only\nfrees the space for use by the database. In most cases a simple VACUUM\nis all you need since you are going to just be asking for the space\nback anyway eventually as your database grows.\n\n-- \nD'Arcy J.M. 
Cain <[email protected]> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Tue, 2 Oct 2007 22:02:01 -0400", "msg_from": "\"D'Arcy J.M. Cain\" <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Difference between Vacuum and Vacuum full" }, { "msg_contents": "Thank you much for such a precise explanation. That was very helpful.\n\nRegards,\nRadhika\n\nOn 10/2/07, Scott Marlowe <[email protected]> wrote:\n>\n> On 10/2/07, Radhika S <[email protected]> wrote:\n> > Hi,\n> >\n> > I have recently had to change our nightly jobs from running vacuum\n> > full, as it has caused problems for us. Upon doing more reading on\n> > this topic, I understand that vacuum full needs explicit locks on the\n> > entire db and explicit locking conflicts with all other locks.\n> >\n> > But this has bought me to the question of what exactly is the\n> > difference between vacuum and vacuum full. If both give back free\n> > space to the disk, then why have vacuum full.\n>\n> Vacuum analyzes the tables and indexes, and marks deleted entries as\n> free and available and puts and entry into the free space map for\n> them. The next time that table or index is updated, instead of\n> appending the new tuple to the end it can be placed in the middle of\n> the table / index. this allows the database to reuse \"empty\" space in\n> the database. Also, if there are dead tuples on the very end of the\n> table or index, it can truncate the end of the file and free that\n> space up.\n>\n> Vaccum full basically re-writes the whole file minus all the dead\n> tuples, which requires it to lock the table while it is doing so.\n>\n> Generally speaking, regular vacuum is preferable. Vacuum full should\n> only be used to recover lost space due to too infrequent regular\n> vacuums or too small of a free space map.\n>\n> vacuum full is much more invasive and should be avoided unless\n> absolutely necessary.\n>\n\n\n\n-- \nIt is all a matter of perspective. You choose your view by choosing where to\nstand. --Larry Wall\n\nThank you much for such a precise explanation. That was very helpful. Regards,RadhikaOn 10/2/07, Scott Marlowe <\[email protected]> wrote:On 10/2/07, Radhika S <\[email protected]> wrote:> Hi,>> I have recently had to change our nightly jobs from running vacuum> full, as it has caused problems for us. Upon doing more reading on> this topic, I understand that vacuum full needs explicit locks on the\n> entire db and explicit locking conflicts with all other locks.>> But this has bought me to the question of what exactly is the> difference between vacuum and vacuum full. If both give back free\n> space to the disk, then why have vacuum full.Vacuum analyzes the tables and indexes, and marks deleted entries asfree and available and puts and entry into the free space map forthem.  The next time that table or index is updated, instead of\nappending the new tuple to the end it can be placed in the middle ofthe table / index.  this allows the database to reuse \"empty\" space inthe database.  Also, if there are dead tuples on the very end of the\ntable or index, it can truncate the end of the file and free thatspace up.Vaccum full basically re-writes the whole file minus all the deadtuples, which requires it to lock the table while it is doing so.\nGenerally speaking, regular vacuum is preferable.  
Vacuum full shouldonly be used to recover lost space due to too infrequent regularvacuums or too small of a free space map.vacuum full is much more invasive and should be avoided unless\nabsolutely necessary.-- It is all a matter of perspective. You choose your view by choosing where to stand. --Larry Wall", "msg_date": "Tue, 2 Oct 2007 23:05:14 -0400", "msg_from": "\"Radhika S\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Difference between Vacuum and Vacuum full" } ]
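Scott's distinction is easy to watch directly by checking the table's on-disk size at each step. The table bloated_tbl and its created column are made up for the illustration.

SELECT pg_size_pretty(pg_relation_size('bloated_tbl'));   -- starting size

DELETE FROM bloated_tbl WHERE created < now() - interval '1 year';

-- Plain VACUUM: the dead space becomes reusable, but the file normally
-- keeps its size (only a completely dead tail can be truncated).
VACUUM ANALYZE bloated_tbl;
SELECT pg_size_pretty(pg_relation_size('bloated_tbl'));

-- VACUUM FULL: takes an exclusive lock, moves rows, and shrinks the
-- file, returning the space to the operating system.
VACUUM FULL bloated_tbl;
SELECT pg_size_pretty(pg_relation_size('bloated_tbl'));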
[ { "msg_contents": "Hello,\n\nthanks to the added info available running the explain plan through\npgsl (instead of using pgAdmin) I was able to realize that an\n(implicitly created) trigger was the culprit of the slowdown I was\nsuffering.\n\nAdding an index on the foreign key the trigger was monitoring solved the issue.\n\nTHANKS EVERYBODY for your kind attention.\n\nBest regards,\n\nGiulio Cesare\n\n\n\nOn 10/3/07, Giulio Cesare Solaroli <[email protected]> wrote:\n> Hello Gregory,\n>\n> On 10/3/07, Greg Williamson <[email protected]> wrote:\n> > Giulio Cesare Solaroli wrote:\n> > > Hello everybody,\n> > >\n> > > I have just joined the list, as I am experiencing a degradation on\n> > > performances on my PostgreSQL instance, and I was looking for some\n> > > insights on how to fix/avoid it.\n> > >\n> > > What I have observed are impossibly high time on delete statements on\n> > > some tables.\n> > >\n> > > The delete statement is very simple:\n> > > delete from table where pk = ?\n> > >\n> > > The explain query report a single index scan on the primary key index,\n> > > as expected.\n> > >\n> > > I have run vacuum using the pgAdmin tool, but to no avail.\n> > >\n> > > I have also dropped and recreated the indexes, again without any benefit.\n> > >\n> > Make sure you run ANALYZE on the table in question after changes to make\n> > sure the stats are up to date.\n>\n> I have run Analyze (always through the pgAdmin interface), and it did\n> not provide any benefits.\n>\n>\n> > > I have later created a copy of the table using the \"create table\n> > > table_copy as select * from table\" syntax.\n> > >\n> > > Matching the configuration of the original table also on the copy\n> > > (indexes and constraints), I was able to delete the raws from the new\n> > > table with regular performances, from 20 to 100 times faster than\n> > > deleting from the original table.\n> > >\n> > >\n> > As another poster indicated, this sounds like foreign constraints where\n> > the postmaster process has to make sure there are no child references in\n> > dependent tables; if you are lacking proper indexing on those tables a\n> > sequential scan would be involved.\n> >\n> > Posting the DDL for the table in question and anything that might refer\n> > to it with an FK relationship would help the list help you.\n>\n> clipperz_connection=> \\d clipperz.rcrvrs\n> Table \"clipperz.rcrvrs\"\n> Column | Type | Modifiers\n> ----------------------+--------------------------+-----------\n> id_rcrvrs | integer | not null\n> id_rcr | integer | not null\n> id_prvrcrvrs | integer |\n> reference | character varying(1000) | not null\n> header | text | not null\n> data | text | not null\n> version | character varying(100) | not null\n> creation_date | timestamp with time zone | not null\n> access_date | timestamp with time zone | not null\n> update_date | timestamp with time zone | not null\n> previous_version_key | text | not null\n> Indexes:\n> \"rcrvrs_pkey\" PRIMARY KEY, btree (id_rcrvrs)\n> \"unique_rcrvrs_referecnce\" UNIQUE, btree (id_rcr, reference)\n> Foreign-key constraints:\n> \"rcrvrs_id_prvrcrvrs_fkey\" FOREIGN KEY (id_prvrcrvrs) REFERENCES\n> rcrvrs(id_rcrvrs)\n> \"rcrvrs_id_rcr_fkey\" FOREIGN KEY (id_rcr) REFERENCES rcr(id_rcr)\n> DEFERRABLE INITIALLY DEFERRED\n>\n> Is this a complete listing of all the DDL involved in defining the\n> table, or is there something possibly missing here?\n>\n>\n>\n> > Try running the query with EXPLAIN ANALYZE ... to see what the planner\n> > says. 
Put this in a transaction and roll it back if you want to leave\n> > the data unchanged, e.g.\n> > BEGIN;\n> > EXPLAIN ANALYZE DELETE FROM foo WHERE pk = 1234; -- or whatever values\n> > you'd be using\n> > ROLLBACK;\n>\n> I have already tried the explain plan, but only using the pgAdmin\n> interface; running it from psql shows some more data that looks very\n> promising:\n>\n> --------------------------------------------------------------------------------------------------------------------\n> Index Scan using rcrvrs_pkey on rcrvrs (cost=0.00..3.68 rows=1\n> width=6) (actual time=2.643..2.643 rows=1 loops=1)\n> Index Cond: (id_rcrvrs = 15434)\n> Trigger for constraint rcrvrs_id_prvrcrvrs_fkey: time=875.992 calls=1\n> Total runtime: 878.641 ms\n> (4 rows)\n>\n> The trigger stuff was not shown on the pgAdmin interface.\n>\n> I will try to add an index on the foreign key field (id_prvrcrvrs) to\n> see if this improves performances of the incriminated query.\n>\n> Thanks for the kind attention.\n>\n> Best regards,\n>\n>\n> Giulio Cesare\n>\n", "msg_date": "Wed, 3 Oct 2007 09:00:52 +0200", "msg_from": "\"Giulio Cesare Solaroli\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Newbie question about degraded performance on delete statement.\n\t(SOLVED)" } ]
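Since a missing index behind a foreign key is such a common cause of this symptom, it can be worth scanning the catalogs for other constraints in the same situation. The query below is only a rough sketch: it considers single-column foreign keys and only checks whether the referencing column is the leading column of some index.

SELECT c.conrelid::regclass AS referencing_table,
       a.attname            AS referencing_column,
       c.conname            AS constraint_name
FROM pg_constraint c
JOIN pg_attribute a ON a.attrelid = c.conrelid
                   AND a.attnum   = c.conkey[1]
WHERE c.contype = 'f'
  AND array_upper(c.conkey, 1) = 1
  AND NOT EXISTS (SELECT 1
                  FROM pg_index i
                  WHERE i.indrelid = c.conrelid
                    AND i.indkey[0] = c.conkey[1]);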
[ { "msg_contents": "Hello list,\n\nI have a little query that takes too long and what I can see in the \nexplain output is a seq scan on my biggest table \n( tbl_file_structure) which I can't explain why.\n\nHere is the output. I hope this is formatted correctly. If not, let \nme know and I'll paste it somewhere.\n\nPostgres version is 8.2.4 running on a Linux system with 2GB RAM and \na Core 2 Duo processor.\n\n\nHashAggregate (cost=418833.59..418833.63 rows=4 width=127) (actual \ntime=16331.326..16331.449 rows=160 loops=1)\n -> Hash Left Join (cost=16290.37..418833.51 rows=4 width=127) \n(actual time=4386.574..16330.727 rows=160 loops=1)\n Hash Cond: (tbl_job.fk_job_group_id = \ntbl_job_group.pk_job_group_id)\n Filter: ((tbl_job_group.job_group_type = 'B'::bpchar) OR \n(tbl_job_group.job_group_type IS NULL))\n -> Merge Join (cost=16285.22..418828.08 rows=17 \nwidth=135) (actual time=4386.474..16330.253 rows=160 loops=1)\n Merge Cond: (tbl_computer.pk_computer_id = \ntbl_share.fk_computer_id)\n -> Nested Loop (cost=16268.52..418810.55 rows=216 \nwidth=122) (actual time=4386.324..16329.638 rows=160 loops=1)\n -> Index Scan using tbl_computer_pkey on \ntbl_computer (cost=0.00..12.48 rows=1 width=20) (actual \ntime=0.013..0.024 rows=1 loops=1)\n Filter: ((computer_name)::text = \n'SOLARIS2'::text)\n -> Hash Join (cost=16268.52..418795.91 \nrows=216 width=102) (actual time=4386.307..16329.425 rows=160 loops=1)\n Hash Cond: (tbl_file.fk_filetype_id = \ntbl_filetype.pk_filetype_id)\n -> Hash Join (cost=16267.03..418791.44 \nrows=216 width=100) (actual time=4386.268..16329.119 rows=160 loops=1)\n Hash Cond: \n(tbl_file_structure.fk_structure_id = tbl_structure.pk_structure_id)\n -> Hash Join \n(cost=8605.68..410913.87 rows=19028 width=40) (actual \ntime=22.810..16196.414 rows=17926 loops=1)\n Hash Cond: \n(tbl_file_structure.fk_file_id = tbl_file.pk_file_id)\n -> Seq Scan on \ntbl_file_structure (cost=0.00..319157.94 rows=16591994 width=16) \n(actual time=0.016..7979.083 rows=16591994 loops=1)\n -> Hash \n(cost=8573.62..8573.62 rows=2565 width=40) (actual \ntime=22.529..22.529 rows=2221 loops=1)\n -> Bitmap Heap Scan on \ntbl_file (cost=74.93..8573.62 rows=2565 width=40) (actual \ntime=1.597..20.691 rows=2221 loops=1)\n Filter: (lower \n((file_name)::text) ~~ 'index.php%'::text)\n -> Bitmap Index \nScan on tbl_file_idx (cost=0.00..74.28 rows=2565 width=0) (actual \ntime=1.118..1.118 rows=2221 loops=1)\n Index Cond: \n((lower((file_name)::text) ~>=~ 'index.php'::character varying) AND \n(lower((file_name)::text) ~<~ 'index.phq'::character varying))\n -> Hash (cost=7487.57..7487.57 \nrows=13902 width=76) (actual time=100.905..100.905 rows=24571 loops=1)\n -> Index Scan using \ntbl_structure_idx3 on tbl_structure (cost=0.00..7487.57 rows=13902 \nwidth=76) (actual time=0.055..79.301 rows=24571 loops=1)\n Index Cond: \n(fk_archive_id = 56)\n -> Hash (cost=1.22..1.22 rows=22 \nwidth=18) (actual time=0.032..0.032 rows=22 loops=1)\n -> Seq Scan on tbl_filetype \n(cost=0.00..1.22 rows=22 width=18) (actual time=0.004..0.016 rows=22 \nloops=1)\n -> Sort (cost=16.70..16.70 rows=1 width=37) (actual \ntime=0.144..0.239 rows=1 loops=1)\n Sort Key: tbl_share.fk_computer_id\n -> Nested Loop (cost=4.26..16.69 rows=1 \nwidth=37) (actual time=0.072..0.115 rows=1 loops=1)\n Join Filter: (tbl_share.pk_share_id = \ntbl_archive.fk_share_id)\n -> Nested Loop Left Join \n(cost=4.26..15.42 rows=1 width=24) (actual time=0.055..0.097 rows=1 \nloops=1)\n Join Filter: (tbl_archive.fk_job_id \n= tbl_job.pk_job_id)\n -> Bitmap 
Heap Scan on \ntbl_archive (cost=4.26..8.27 rows=1 width=24) (actual \ntime=0.033..0.033 rows=1 loops=1)\n Recheck Cond: (56 = \npk_archive_id)\n Filter: archive_complete\n -> Bitmap Index Scan on \ntbl_archive_pkey (cost=0.00..4.26 rows=1 width=0) (actual \ntime=0.026..0.026 rows=1 loops=1)\n Index Cond: (56 = \npk_archive_id)\n -> Seq Scan on tbl_job \n(cost=0.00..6.51 rows=51 width=16) (actual time=0.003..0.033 rows=51 \nloops=1)\n -> Seq Scan on tbl_share \n(cost=0.00..1.12 rows=12 width=29) (actual time=0.003..0.008 rows=12 \nloops=1)\n -> Hash (cost=4.51..4.51 rows=51 width=13) (actual \ntime=0.084..0.084 rows=51 loops=1)\n -> Seq Scan on tbl_job_group (cost=0.00..4.51 \nrows=51 width=13) (actual time=0.006..0.046 rows=51 loops=1)\n Total runtime: 16331.890 ms\n(42 rows)\n\nHere is the query if needed.\nexplain analyze SELECT file_name FROM tbl_file_structure JOIN \ntbl_file ON pk_file_id = fk_file_id JOIN tbl_structure ON \npk_structure_id = fk_structure_id JOIN tbl_archive ON pk_archive_id \n=fk_archive_id JOIN tbl_share ON pk_share_id =fk_share_id JOIN \ntbl_computer ON pk_computer_id = fk_computer_id JOIN tbl_filetype ON \npk_filetype_id = fk_filetype_id LEFT OUTER JOIN tbl_job ON \ntbl_archive.fk_job_id = pk_job_id LEFT OUTER JOIN tbl_job_group ON \ntbl_job.fk_job_group_id = pk_job_group_id WHERE LOWER(file_name) LIKE \nLOWER('index.php%') AND (computer_name = 'SOLARIS2') AND \n(fk_archive_id = 56) AND archive_complete = true AND (job_group_type \n= 'B' OR job_group_type IS NULL) GROUP BY file_name, file_ctime, \nstructure_path, pk_computer_id, filetype_icon, computer_name, \nshare_name, share_path;\n\nThanks,\nHenrik\n", "msg_date": "Wed, 3 Oct 2007 10:03:53 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Query taking too long. Problem reading explain output." }, { "msg_contents": "On Wed, Oct 03, 2007 at 10:03:53AM +0200, Henrik wrote:\n> I have a little query that takes too long and what I can see in the \n> explain output is a seq scan on my biggest table ( tbl_file_structure) \n> which I can't explain why.\n\nHere's where almost all of the time is taken:\n\n> Hash Join (cost=8605.68..410913.87 rows=19028 width=40) (actual time=22.810..16196.414 rows=17926 loops=1)\n> Hash Cond: (tbl_file_structure.fk_file_id = tbl_file.pk_file_id)\n> -> Seq Scan on tbl_file_structure (cost=0.00..319157.94 rows=16591994 width=16) (actual time=0.016..7979.083 rows=16591994 loops=1)\n> -> Hash (cost=8573.62..8573.62 rows=2565 width=40) (actual time=22.529..22.529 rows=2221 loops=1)\n> -> Bitmap Heap Scan on tbl_file (cost=74.93..8573.62 rows=2565 width=40) (actual time=1.597..20.691 rows=2221 loops=1)\n> Filter: (lower((file_name)::text) ~~ 'index.php%'::text)\n> -> Bitmap Index Scan on tbl_file_idx (cost=0.00..74.28 rows=2565 width=0) (actual time=1.118..1.118 rows=2221 loops=1)\n> Index Cond: ((lower((file_name)::text) ~>=~ 'index.php'::character varying) AND (lower((file_name)::text) ~<~ 'index.phq'::character varying))\n\nDoes tbl_file_structure have an index on fk_file_id? If so then\nwhat's the EXPLAIN ANALYZE output if you set enable_seqscan to off?\nI don't recommend disabling sequential scans permanently but doing\nso can be useful when investigating why the planner thinks one plan\nwill be faster than another.\n\nWhat are your settings for random_page_cost, effective_cache_size,\nwork_mem, and shared_buffers? 
If you're using the default\nrandom_page_cost of 4 then what's the EXPLAIN ANALYZE output if you\nreduce it to 3 or 2 (after setting enable_seqscan back to on)?\n\n-- \nMichael Fuhr\n", "msg_date": "Wed, 3 Oct 2007 07:31:29 -0600", "msg_from": "Michael Fuhr <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking too long. Problem reading explain\n\toutput." }, { "msg_contents": "Henrik <[email protected]> writes:\n> Here is the query if needed.\n> explain analyze SELECT file_name FROM tbl_file_structure JOIN \n> tbl_file ON pk_file_id = fk_file_id JOIN tbl_structure ON \n> pk_structure_id = fk_structure_id JOIN tbl_archive ON pk_archive_id \n> =fk_archive_id JOIN tbl_share ON pk_share_id =fk_share_id JOIN \n> tbl_computer ON pk_computer_id = fk_computer_id JOIN tbl_filetype ON \n> pk_filetype_id = fk_filetype_id LEFT OUTER JOIN tbl_job ON \n> tbl_archive.fk_job_id = pk_job_id LEFT OUTER JOIN tbl_job_group ON \n> tbl_job.fk_job_group_id = pk_job_group_id WHERE LOWER(file_name) LIKE \n> LOWER('index.php%') AND (computer_name = 'SOLARIS2') AND \n> (fk_archive_id = 56) AND archive_complete = true AND (job_group_type \n> = 'B' OR job_group_type IS NULL) GROUP BY file_name, file_ctime, \n> structure_path, pk_computer_id, filetype_icon, computer_name, \n> share_name, share_path;\n\n[ counts the JOINs... ] Try raising join_collapse_limit. I think the\nplanner may be neglecting to consider the join order you need.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 03 Oct 2007 10:15:18 -0400", "msg_from": "Tom Lane <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking too long. Problem reading explain output. " }, { "msg_contents": "\n3 okt 2007 kl. 16:15 skrev Tom Lane:\n\n> Henrik <[email protected]> writes:\n>> Here is the query if needed.\n>> explain analyze SELECT file_name FROM tbl_file_structure JOIN\n>> tbl_file ON pk_file_id = fk_file_id JOIN tbl_structure ON\n>> pk_structure_id = fk_structure_id JOIN tbl_archive ON pk_archive_id\n>> =fk_archive_id JOIN tbl_share ON pk_share_id =fk_share_id JOIN\n>> tbl_computer ON pk_computer_id = fk_computer_id JOIN tbl_filetype ON\n>> pk_filetype_id = fk_filetype_id LEFT OUTER JOIN tbl_job ON\n>> tbl_archive.fk_job_id = pk_job_id LEFT OUTER JOIN tbl_job_group ON\n>> tbl_job.fk_job_group_id = pk_job_group_id WHERE LOWER(file_name) LIKE\n>> LOWER('index.php%') AND (computer_name = 'SOLARIS2') AND\n>> (fk_archive_id = 56) AND archive_complete = true AND (job_group_type\n>> = 'B' OR job_group_type IS NULL) GROUP BY file_name, file_ctime,\n>> structure_path, pk_computer_id, filetype_icon, computer_name,\n>> share_name, share_path;\n>\n> [ counts the JOINs... ] Try raising join_collapse_limit. I think the\n> planner may be neglecting to consider the join order you need.\n>\n> \t\t\tregards, tom lane\n\nHi,\n\nAhh I had exactly 8 joins.\nFollowing your suggestion I raised the join_collapse_limit from 8 to \n10 and the planners decision sure changed but now I have some crazy \nnested loops. 
Maybe I have some statistics wrong?\n\nSame query this is the new explain analyze:\n\n HashAggregate (cost=48.40..48.41 rows=1 width=127) (actual \ntime=22898.513..22898.613 rows=160 loops=1)\n -> Nested Loop Left Join (cost=2.60..48.38 rows=1 width=127) \n(actual time=10.984..22897.964 rows=160 loops=1)\n Filter: ((tbl_job_group.job_group_type = 'B'::bpchar) OR \n(tbl_job_group.job_group_type IS NULL))\n -> Nested Loop Left Join (cost=2.60..43.94 rows=1 \nwidth=135) (actual time=10.976..22896.856 rows=160 loops=1)\n Join Filter: (tbl_archive.fk_job_id = tbl_job.pk_job_id)\n -> Nested Loop (cost=2.60..36.79 rows=1 width=135) \n(actual time=10.955..22887.675 rows=160 loops=1)\n Join Filter: (tbl_share.pk_share_id = \ntbl_archive.fk_share_id)\n -> Nested Loop (cost=0.01..30.18 rows=1 \nwidth=143) (actual time=10.941..22885.841 rows=160 loops=1)\n Join Filter: (tbl_computer.pk_computer_id \n= tbl_share.fk_computer_id)\n -> Nested Loop (cost=0.01..28.91 rows=1 \nwidth=122) (actual time=10.925..22883.458 rows=160 loops=1)\n -> Nested Loop (cost=0.01..26.73 \nrows=1 width=102) (actual time=10.915..22881.411 rows=160 loops=1)\n -> Nested Loop \n(cost=0.01..20.45 rows=1 width=41) (actual time=0.107..10693.572 \nrows=20166 loops=1)\n -> Nested Loop \n(cost=0.01..10.15 rows=1 width=41) (actual time=0.080..986.100 \nrows=2223 loops=1)\n Join Filter: \n(tbl_filetype.pk_filetype_id = tbl_file.fk_filetype_id)\n -> Index Scan \nusing tbl_file_idx on tbl_file (cost=0.01..8.66 rows=1 width=39) \n(actual time=0.057..931.546 rows=2223 loops=1)\n Index Cond: \n((lower((file_name)::text) ~>=~ 'index.php'::character varying) AND \n(lower((file_name)::text) ~<~ 'index.phq'::character varying))\n Filter: \n(lower((file_name)::text) ~~ 'index.php%'::text)\n -> Seq Scan on \ntbl_filetype (cost=0.00..1.22 rows=22 width=18) (actual \ntime=0.002..0.011 rows=22 loops=2223)\n -> Index Scan using \ntbl_file_structure_idx on tbl_file_structure (cost=0.00..10.29 \nrows=1 width=16) (actual time=0.722..4.356 rows=9 loops=2223)\n Index Cond: \n(tbl_file.pk_file_id = tbl_file_structure.fk_file_id)\n -> Index Scan using \ntbl_structure_pkey on tbl_structure (cost=0.00..6.27 rows=1 \nwidth=77) (actual time=0.603..0.603 rows=0 loops=20166)\n Index Cond: \n(tbl_structure.pk_structure_id = tbl_file_structure.fk_structure_id)\n Filter: (fk_archive_id \n= 56)\n -> Seq Scan on tbl_computer \n(cost=0.00..2.16 rows=1 width=20) (actual time=0.004..0.010 rows=1 \nloops=160)\n Filter: \n((computer_name)::text = 'SOLARIS2'::text)\n -> Seq Scan on tbl_share \n(cost=0.00..1.12 rows=12 width=29) (actual time=0.002..0.007 rows=12 \nloops=160)\n -> Bitmap Heap Scan on tbl_archive \n(cost=2.59..6.60 rows=1 width=24) (actual time=0.007..0.008 rows=1 \nloops=160)\n Recheck Cond: (56 = pk_archive_id)\n Filter: archive_complete\n -> Bitmap Index Scan on \ntbl_archive_pkey (cost=0.00..2.59 rows=1 width=0) (actual \ntime=0.005..0.005 rows=1 loops=160)\n Index Cond: (56 = pk_archive_id)\n -> Seq Scan on tbl_job (cost=0.00..6.51 rows=51 \nwidth=16) (actual time=0.002..0.031 rows=51 loops=160)\n -> Index Scan using tbl_job_group_pkey on tbl_job_group \n(cost=0.00..4.42 rows=1 width=13) (actual time=0.003..0.004 rows=1 \nloops=160)\n Index Cond: (tbl_job.fk_job_group_id = \ntbl_job_group.pk_job_group_id)\n Total runtime: 22898.840 ms\n\nThanks,\nHenrik\n", "msg_date": "Thu, 4 Oct 2007 12:15:04 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking too long. Problem reading explain output. 
" }, { "msg_contents": "Henrik wrote:\n\n> Ahh I had exactly 8 joins.\n> Following your suggestion I raised the join_collapse_limit from 8 to 10 and \n> the planners decision sure changed but now I have some crazy nested loops. \n> Maybe I have some statistics wrong?\n\nYeah. The problematic misestimation is exactly the innermost indexscan,\nwhich is wrong by two orders of magnitude:\n\n> -> Index Scan using \n> tbl_file_idx on tbl_file (cost=0.01..8.66 rows=1 width=39) (actual \n> time=0.057..931.546 rows=2223 loops=1)\n> Index Cond: \n> ((lower((file_name)::text) ~>=~ 'index.php'::character varying) AND \n> (lower((file_name)::text) ~<~ 'index.phq'::character varying))\n> Filter: \n> (lower((file_name)::text) ~~ 'index.php%'::text)\n\nThis wreaks havoc on the rest of the plan. If this weren't\nmisestimated, it wouldn't be using those nested loops.\n\n-- \nAlvaro Herrera http://www.CommandPrompt.com/\nPostgreSQL Replication, Consulting, Custom Development, 24x7 support\n", "msg_date": "Thu, 4 Oct 2007 08:30:38 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking too long. Problem reading explain\n\toutput." }, { "msg_contents": "\n4 okt 2007 kl. 14:30 skrev Alvaro Herrera:\n\n> Henrik wrote:\n>\n>> Ahh I had exactly 8 joins.\n>> Following your suggestion I raised the join_collapse_limit from 8 \n>> to 10 and\n>> the planners decision sure changed but now I have some crazy \n>> nested loops.\n>> Maybe I have some statistics wrong?\n>\n> Yeah. The problematic misestimation is exactly the innermost \n> indexscan,\n> which is wrong by two orders of magnitude:\n>\n>> -> Index Scan \n>> using\n>> tbl_file_idx on tbl_file (cost=0.01..8.66 rows=1 width=39) (actual\n>> time=0.057..931.546 rows=2223 loops=1)\n>> Index Cond:\n>> ((lower((file_name)::text) ~>=~ 'index.php'::character varying) AND\n>> (lower((file_name)::text) ~<~ 'index.phq'::character varying))\n>> Filter:\n>> (lower((file_name)::text) ~~ 'index.php%'::text)\n>\n> This wreaks havoc on the rest of the plan. If this weren't\n> misestimated, it wouldn't be using those nested loops.\n>\nCorrect. I changed the statistics to 500 in tbl_file.file_name and \nnow the statistics is better. 
But now my big seq scan on \ntbl_file_Structure back and I don't know why.\nPasting new explain analyze:\n\n HashAggregate (cost=467442.44..467442.47 rows=3 width=127) (actual \ntime=25182.056..25182.169 rows=160 loops=1)\n -> Hash Join (cost=16106.29..467442.38 rows=3 width=127) \n(actual time=7825.803..25181.394 rows=160 loops=1)\n Hash Cond: (tbl_file.fk_filetype_id = \ntbl_filetype.pk_filetype_id)\n -> Hash Join (cost=16079.94..467413.50 rows=184 \nwidth=100) (actual time=7793.171..25148.405 rows=160 loops=1)\n Hash Cond: (tbl_file_structure.fk_structure_id = \ntbl_structure.pk_structure_id)\n -> Hash Join (cost=7295.70..458431.45 rows=17419 \nwidth=39) (actual time=619.779..23034.828 rows=20166 loops=1)\n Hash Cond: (tbl_file_structure.fk_file_id = \ntbl_file.pk_file_id)\n -> Seq Scan on tbl_file_structure \n(cost=0.00..357539.04 rows=18684504 width=16) (actual \ntime=5.648..12906.913 rows=18684505 loops=1)\n -> Hash (cost=7269.04..7269.04 rows=2133 \nwidth=39) (actual time=613.852..613.852 rows=2223 loops=1)\n -> Bitmap Heap Scan on tbl_file \n(cost=62.50..7269.04 rows=2133 width=39) (actual time=14.672..611.803 \nrows=2223 loops=1)\n Filter: (lower((file_name)::text) \n~~ 'index.php%'::text)\n -> Bitmap Index Scan on \ntbl_file_idx (cost=0.00..61.97 rows=2133 width=0) (actual \ntime=14.205..14.205 rows=2223 loops=1)\n Index Cond: ((lower \n((file_name)::text) ~>=~ 'index.php'::character varying) AND (lower \n((file_name)::text) ~<~ 'index.phq'::character varying))\n -> Hash (cost=8601.81..8601.81 rows=14595 width=77) \n(actual time=2076.717..2076.717 rows=24571 loops=1)\n -> Index Scan using tbl_structure_idx3 on \ntbl_structure (cost=0.00..8601.81 rows=14595 width=77) (actual \ntime=58.620..2050.555 rows=24571 loops=1)\n Index Cond: (fk_archive_id = 56)\n -> Hash (cost=26.08..26.08 rows=22 width=59) (actual \ntime=32.624..32.624 rows=22 loops=1)\n -> Nested Loop (cost=2.59..26.08 rows=22 width=59) \n(actual time=32.503..32.598 rows=22 loops=1)\n -> Nested Loop Left Join (cost=2.59..24.64 \nrows=1 width=41) (actual time=32.332..32.384 rows=1 loops=1)\n Filter: ((tbl_job_group.job_group_type = \n'B'::bpchar) OR (tbl_job_group.job_group_type IS NULL))\n -> Nested Loop Left Join \n(cost=2.59..20.20 rows=1 width=49) (actual time=27.919..27.969 rows=1 \nloops=1)\n Join Filter: (tbl_archive.fk_job_id \n= tbl_job.pk_job_id)\n -> Nested Loop (cost=2.59..13.05 \nrows=1 width=49) (actual time=27.897..27.904 rows=1 loops=1)\n Join Filter: \n(tbl_share.pk_share_id = tbl_archive.fk_share_id)\n -> Nested Loop \n(cost=0.00..6.43 rows=1 width=41) (actual time=19.638..19.642 rows=1 \nloops=1)\n Join Filter: \n(tbl_computer.pk_computer_id = tbl_share.fk_computer_id)\n -> Seq Scan on \ntbl_computer (cost=0.00..5.16 rows=1 width=20) (actual \ntime=19.611..19.614 rows=1 loops=1)\n Filter: \n((computer_name)::text = 'SOLARIS2'::text)\n -> Seq Scan on \ntbl_share (cost=0.00..1.12 rows=12 width=29) (actual \ntime=0.011..0.021 rows=12 loops=1)\n -> Bitmap Heap Scan on \ntbl_archive (cost=2.59..6.60 rows=1 width=24) (actual \ntime=8.255..8.255 rows=1 loops=1)\n Recheck Cond: (56 = \npk_archive_id)\n Filter: archive_complete\n -> Bitmap Index Scan \non tbl_archive_pkey (cost=0.00..2.59 rows=1 width=0) (actual \ntime=8.250..8.250 rows=1 loops=1)\n Index Cond: (56 = \npk_archive_id)\n -> Seq Scan on tbl_job \n(cost=0.00..6.51 rows=51 width=16) (actual time=0.003..0.034 rows=51 \nloops=1)\n -> Index Scan using tbl_job_group_pkey \non tbl_job_group (cost=0.00..4.42 rows=1 width=13) (actual \ntime=4.408..4.410 
rows=1 loops=1)\n Index Cond: \n(tbl_job.fk_job_group_id = tbl_job_group.pk_job_group_id)\n -> Seq Scan on tbl_filetype (cost=0.00..1.22 \nrows=22 width=18) (actual time=0.169..0.178 rows=22 loops=1)\n Total runtime: 25182.626 ms\n\n\nThanks.,\nHenrik\n> -- \n> Alvaro Herrera http:// \n> www.CommandPrompt.com/\n> PostgreSQL Replication, Consulting, Custom Development, 24x7 support\n\n", "msg_date": "Thu, 4 Oct 2007 23:15:47 +0200", "msg_from": "Henrik <[email protected]>", "msg_from_op": true, "msg_subject": "Re: Query taking too long. Problem reading explain output." }, { "msg_contents": "Henrik wrote:\n\n> Correct. I changed the statistics to 500 in tbl_file.file_name and now the \n> statistics is better. But now my big seq scan on tbl_file_Structure back \n> and I don't know why.\n\nHmm, I think the problem here is that it needs to fetch ~200000 tuples\nfrom tbl_file_structure one way or the other. When it misestimated the\ntuples from tbl_file it thought it would only need to do the indexscan\nin tbl_file_structure a few times, but now it realizes that it needs to\ndo it several thousands of times and it considers the seqscan to be\ncheaper.\n\nPerhaps you would benefit from a higher effective_cache_size or a lower\nrandom_page_cost (or both).\n\nI think this is a problem in the optimizer: it doesn't correctly take\ninto account the fact that the upper pages of the index are most likely\nto be cached. This has been discussed a lot of times but it's not a\nsimple problem to fix.\n\n-- \nAlvaro Herrera http://www.amazon.com/gp/registry/CTMLCN8V17R4\nEste mail se entrega garantizadamente 100% libre de sarcasmo.\n", "msg_date": "Thu, 4 Oct 2007 19:43:31 -0400", "msg_from": "Alvaro Herrera <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking too long. Problem reading explain\n\toutput." }, { "msg_contents": "On Thu, 2007-10-04 at 08:30 -0400, Alvaro Herrera wrote:\n> Henrik wrote:\n> \n> > Ahh I had exactly 8 joins.\n> > Following your suggestion I raised the join_collapse_limit from 8 to 10 and \n> > the planners decision sure changed but now I have some crazy nested loops. \n> > Maybe I have some statistics wrong?\n> \n> Yeah. The problematic misestimation is exactly the innermost indexscan,\n> which is wrong by two orders of magnitude:\n\n\nNested Loops are evil.. and I've no clue on why PG has such big\nmis-estimates. Mine are like 1:500 \n\nI've solved mine using SRFs\n", "msg_date": "Thu, 11 Oct 2007 11:16:23 +0800", "msg_from": "Ow Mun Heng <[email protected]>", "msg_from_op": false, "msg_subject": "Re: Query taking too long. Problem reading explain\n\toutput." } ]
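A recap of the knobs this thread ended up touching, written as session-level experiments; the concrete values are only starting points for a 2 GB machine, not tuned recommendations, and are best changed one at a time so their individual effect on the plan is visible.

-- Let the planner consider reordering the full nine-relation join.
SET join_collapse_limit = 12;

-- Better selectivity estimates for the LIKE 'index.php%' filter.
ALTER TABLE tbl_file ALTER COLUMN file_name SET STATISTICS 500;
ANALYZE tbl_file;

-- Tell the planner that most of the index pages are likely cached, so
-- repeated index probes look cheaper than a 16-million-row seq scan.
SET effective_cache_size = '1500MB';
SET random_page_cost = 2;

-- Then re-run EXPLAIN ANALYZE on the original query after each change
-- and compare the plans.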
[ { "msg_contents": "Hi there,\n\nI have a database with lowest possible activity. I run VACUUM FULL AND I get \nthe following log result: \n\n\n", "msg_date": "Thu, 4 Oct 2007 12:43:12 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "can't shrink relation" }, { "msg_contents": "sorry for the previous incomplete post. I continue with the log:\n\nNOTICE: relation \"pg_shdepend\" TID 11/1: DeleteTransactionInProgress \n2657075 --- can't shrink relation\nNOTICE: relation \"pg_shdepend\" TID 11/2: DeleteTransactionInProgress \n2657075 --- can't shrink relation\n.....\nNOTICE: relation \"pg_shdepend\" TID 36/93: DeleteTransactionInProgress \n2658105 --- can't shrink relation\n\n\nWhat happen ? What I have to do ?\n\nI notice that I don't get such messages when I run just VACUUM without FULL \noption.\n\nTIA,\nSabin \n\n\n", "msg_date": "Thu, 4 Oct 2007 12:46:02 +0300", "msg_from": "\"Sabin Coanda\" <[email protected]>", "msg_from_op": true, "msg_subject": "Re: can't shrink relation" }, { "msg_contents": "\"Sabin Coanda\" <sabin.coanda 'at' deuromedia.ro> writes:\n\n> sorry for the previous incomplete post. I continue with the log:\n>\n> NOTICE: relation \"pg_shdepend\" TID 11/1: DeleteTransactionInProgress \n> 2657075 --- can't shrink relation\n> NOTICE: relation \"pg_shdepend\" TID 11/2: DeleteTransactionInProgress \n> 2657075 --- can't shrink relation\n> .....\n> NOTICE: relation \"pg_shdepend\" TID 36/93: DeleteTransactionInProgress \n> 2658105 --- can't shrink relation\n>\n>\n> What happen ? What I have to do ?\n\nYou have to use google. First match to \"postgresql can't shrink\nrelation\" (almost) returns:\n\nhttp://archives.postgresql.org/pgsql-novice/2002-12/msg00126.php\n\n-- \nGuillaume Cottenceau, MNC Mobile News Channel SA, an Alcatel-Lucent Company\nAv. de la Gare 10, 1003 Lausanne, Switzerland - direct +41 21 317 50 36\n", "msg_date": "Thu, 04 Oct 2007 12:11:48 +0200", "msg_from": "Guillaume Cottenceau <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't shrink relation" }, { "msg_contents": "Sabin Coanda wrote:\n> sorry for the previous incomplete post. I continue with the log:\n\nNot really a performance question, this. Perhaps general/admin lists \nwould be better next time. No matter...\n\n> NOTICE: relation \"pg_shdepend\" TID 11/1: DeleteTransactionInProgress \n> 2657075 --- can't shrink relation\n> NOTICE: relation \"pg_shdepend\" TID 11/2: DeleteTransactionInProgress \n> 2657075 --- can't shrink relation\n> .....\n> NOTICE: relation \"pg_shdepend\" TID 36/93: DeleteTransactionInProgress \n> 2658105 --- can't shrink relation\n> \n> What happen ? What I have to do ?\n\nThis is where having a copy of the source pays off. cd to the top-level \nof your source and type:\n find . -type f | xargs grep 'shrink relation'\nAmongst the translation files you'll see .../backend/commands/vacuum.c\n\nA quick search in there reveals...\n\ncase HEAPTUPLE_DELETE_IN_PROGRESS:\n /*\n * This should not happen, since we hold exclusive lock on\n * the relation; shouldn't we raise an error? (Actually,\n * it can happen in system catalogs, since we tend to\n * release write lock before commit there.)\n */\nereport(NOTICE,\n (errmsg(\"relation \\\"%s\\\" TID %u/%u: DeleteTransactionInProgress %u \n--- can't shrink relation\",\nrelname, blkno, offnum, HeapTupleHeaderGetXmax(tuple.t_data))));\ndo_shrinking = false;\n\nSo - it's wants to shrink a table but there is a delete in progress so \nit can't do so safely. 
This shouldn't happen unless it's a system table, \nand checking your error message, we're looking at pg_shdepend which is \nindeed a system table.\n\n> I notice that I don't get such messages when I run just VACUUM without FULL \n> option.\n\nThat's because VACUUM doesn't reclaim space, it just marks blocks as \navailable for re-use. If you insert 2 million rows and then delete 1 \nmillion, your table will have 1 million gaps. A vacuum will try and \ntrack those gaps (see your \"free space map\" settings in postgresql.conf) \nwhereas a vacuum-full will actually move rows around and then shrink the \nsize of the file on-disk once all the gaps are together at the end of \nthe file.\n\nA vacuum full needs to lock the table, since it's moving rows around.\n\nHTH\n\n-- \n Richard Huxton\n Archonet Ltd\n", "msg_date": "Thu, 04 Oct 2007 11:34:29 +0100", "msg_from": "Richard Huxton <[email protected]>", "msg_from_op": false, "msg_subject": "Re: can't shrink relation" } ]
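The notices mean another session had an open transaction that had deleted rows in pg_shdepend (creating and dropping objects such as temporary tables does that), so VACUUM FULL could not move those tuples yet; they are warnings rather than a sign of corruption. A small sketch for spotting such sessions before retrying, using the pg_stat_activity column names of the 8.1/8.2 era:

-- Sessions sitting in open transactions show up here as
-- '<IDLE> in transaction'.
SELECT procpid, usename, current_query, query_start
FROM pg_stat_activity
ORDER BY query_start;

-- Once no conflicting transaction is in progress, the catalog can be
-- shrunk without the notices:
VACUUM FULL VERBOSE pg_shdepend;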